If you have done even a little coding, sort() and sorted() are among the first tools you reach for, and most people assume Python's sort() is just… sorting. But under the hood, it's running one of the most elegant algorithms ever designed for real-world data. Python doesn't use QuickSort. It uses Timsort. And since Python 3.11, it got even better with Powersort.

🔍 What's actually happening?
Python's list.sort() and sorted() are powered by Timsort (and now an improved merge strategy via Powersort). Timsort is a hybrid of:
- Merge Sort
- Insertion Sort
But here's the twist 👇
👉 It's designed for real-world data, not random arrays.

⚡ Key Insight: "Runs"
Timsort scans your data for already-sorted chunks (called runs).
Example: [1, 2, 3, 10, 9, 8, 20, 21]
It sees:
- [1, 2, 3, 10] → already sorted
- [9, 8] → reverse run (reversed internally)
- [20, 21] → sorted
Instead of sorting from scratch, it merges these runs efficiently.
👉 That's why Python sorting can be O(n) in the best case.

What changed in Python 3.11?
Python introduced Powersort (an improved merge strategy).
- Still stable ✅
- Still adaptive ✅
- But closer to optimal merging decisions
👉 Translation: faster in complex real-world scenarios.

🧠 Stability (this matters more than you think)
Python sorting is stable.

data = [("A", 90), ("B", 90), ("C", 80)]
sorted(data, key=lambda x: x[1])

Output: [('C', 80), ('A', 90), ('B', 90)]
👉 Notice A stays before B (original order preserved).
This is critical in:
- Multi-level sorting
- Ranking systems
- Financial data pipelines

⚙️ Small Data Optimization
For short runs (fewer than ~64 elements), Timsort switches to:
👉 Binary Insertion Sort
Why?
- Lower overhead
- Faster in practice for small inputs

🔄 sort() vs sorted()

arr.sort()    # in-place, modifies the original, returns None
sorted(arr)   # returns a new list, original untouched

👉 Same algorithm, different behavior.

Python vs Excel
- Python → Timsort / Powersort (adaptive, stable)
- Excel → QuickSort (mostly)
QuickSort is fast on random data, but Python wins on partially sorted real-world data.

Python sorting isn't just fast. It's:
- Adaptive
- Stable
- Hybrid
- Real-world optimized
And that's why it quietly outperforms "theoretically faster" algorithms in practice. Sometimes the smartest systems don't reinvent everything… they just optimize for how data actually behaves.
#Python #Algorithms #SoftwareEngineering #DataStructures #Coding #TechDeepDive
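Stability is what makes multi-level sorting work in practice: because equal keys keep their relative order, you can sort by the secondary key first and the primary key second. A minimal sketch (the student data is illustrative):

```python
# Sort students by grade (descending), breaking ties by name (ascending).
# Because Python's sort is stable, two successive sorts compose correctly.
students = [("Bob", 90), ("Alice", 90), ("Carol", 80), ("Dave", 95)]

# Pass 1: secondary key (name, ascending)
students.sort(key=lambda s: s[0])
# Pass 2: primary key (grade, descending); ties keep the name order from pass 1
students.sort(key=lambda s: s[1], reverse=True)

print(students)
# [('Dave', 95), ('Alice', 90), ('Bob', 90), ('Carol', 80)]
```

This two-pass idiom is exactly why the ranking and financial pipelines mentioned above depend on a stable sort.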
Timsort and Powersort: The Elegant Algorithms Behind Python's sort()
More Relevant Posts
Python Prototypes vs. Production Systems: Lessons in Logic Rigor 🛠️

This week, I stopped trying to write code that "just works" and started writing code that refuses to crash. As an aspiring Data Scientist, I'm learning that stakeholders don't just care about the output; they care about uptime. If a single "typo" from a user kills your entire analytics pipeline, your system isn't ready for the real world.

Here are the 4 "Industry Veteran" shifts I made to my latest Python project:

1. EAFP over LBYL (Stop "Looking Before You Leap")
In Python, we often use if statements to check every possible error (Look Before You Leap). But a "Senior" approach often favors EAFP (Easier to Ask Forgiveness than Permission) using try/except blocks.
Why? if statements become "spaghetti" when checking for types, ranges, and existence all at once.
Rigor: a try block handles an "ABC" input in a float field immediately, keeping the logic clean and the performance high.

2. The .get() Method: Killing the KeyError
Directly indexing a dictionary with prices[item] is a ticking time bomb. If the key is missing, the program dies.
The Fix: I've switched to .get(item, 0.0). This provides a default-value fallback in a single line, preventing a sparse dictionary from breaking my calculations.

3. Preventing the System Crash
Stakeholders hate downtime. I implemented a while True loop combined with try/except for all user inputs.
The Goal: the program should never end unless the user explicitly chooses to quit. Every bad input now triggers a helpful re-prompt instead of a system failure.

4. Precision in Data Type Conversion
Logic errors often hide in the conversion chain. I focused on the transition from str (from input()) to int (for indexing).
The Off-by-One Risk: users think in 1-based counting, but Python is 0-based. I've made it a rule to subtract 1 from the integer input immediately so the correct data point is retrieved every time.
The Lesson: Coding is about the architecture of the "Why" just as much as the syntax of the "What." [https://lnkd.in/gvtiAKUb] #Python #DataScience #CodingJourney #CleanCode #BuildInPublic #SoftwareEngineering #SeniorDataScientist #TechMentor
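The EAFP and .get() patterns above can be sketched together. The item names, prices, and function names below are hypothetical, not from the original project:

```python
prices = {"apple": 1.50, "bread": 2.25}

def parse_quantity(raw):
    """EAFP: attempt the conversion and handle failure,
    instead of pre-validating the string character by character."""
    try:
        qty = int(raw)
    except ValueError:
        return None          # "ABC" lands here instead of crashing the pipeline
    return qty if qty > 0 else None

def line_total(item, raw_qty):
    qty = parse_quantity(raw_qty)
    if qty is None:
        return 0.0
    # .get() with a default kills the KeyError for unknown items
    return prices.get(item, 0.0) * qty

print(line_total("apple", "3"))    # 4.5
print(line_total("caviar", "2"))   # 0.0 (missing key falls back to the default)
print(line_total("bread", "ABC"))  # 0.0 (bad input handled, no crash)
```

The re-prompt loop from point 3 would simply call line_total inside while True and continue on a None/0.0 result instead of exiting.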
Python 3: Mutable, Immutable... Everything Is an Object

Python treats everything as an object. A variable is not a box that stores a value directly; it is a name bound to an object. That is why assignment, comparison, and updates can behave differently depending on the type of object involved. For example, a = 10; b = a means both names refer to the same integer object, while l1 = [1, 2]; l2 = l1 means both names refer to the same list object. Many Python surprises come from object identity and mutability.

id() and type()
Two built-in functions are essential when studying objects: id() and type(). type() tells us the class of an object, while id() gives its identity in the current runtime. Example: a = 3; b = a; print(type(a)) prints <class 'int'>, and print(a is b) prints True because both names point to the same object. By contrast, l1 = [1, 2, 3]; l2 = [1, 2, 3] gives l1 == l2 as True but l1 is l2 as False. Equality checks value; identity checks whether two names point to the exact same object.

Mutable objects
Mutable objects can be changed after they are created. Lists, dictionaries, and sets are common mutable types. If two variables reference the same mutable object, a change through one name is visible through the other. Example: l1 = [1, 2, 3]; l2 = l1; l1.append(4); print(l2) outputs [1, 2, 3, 4]. The list changed in place, and both names still point to that same list.

Immutable objects
Immutable objects cannot be changed after creation. Integers, strings, booleans, and tuples are common immutable types. If an immutable object seems to change, Python actually creates a new object and rebinds the variable. Example: a = 1; a = a + 1 does not modify the original 1; it creates 2 and binds a to it. The same happens with strings: s = "Hi"; s = s + "!" creates a new string. Tuples are also immutable: (1) is just the integer 1, while (1,) is a tuple.

In-place mutation vs. rebinding
This matters because Python treats mutable and immutable objects differently during updates. l1.append(4) mutates a list in place, but l1 = l1 + [4] creates a new list and reassigns the name. With immutable objects, operations always produce a new object rather than changing the existing one. That is why == is for value and is is for identity, especially for checks like x is None.

Function arguments
Arguments in Python are passed as object references. A function receives a reference to the same object, not a copy. That means behavior depends on whether the function mutates the object or simply rebinds a local name. Example: def add(x): x.append(4) changes the original list. But def inc(n): n += 1 does not change the caller's integer, because integers are immutable and the local variable is rebound.

From the advanced tasks, I also learned that CPython may reuse some constant objects, such as small integers and empty tuples, as an optimization. That helps explain surprising identity results, but it also reinforces the rule: never rely on is for value comparison when == is what you mean.
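The alias-versus-copy distinction described above can be checked directly; a minimal sketch:

```python
l1 = [1, 2, 3]
l2 = l1          # alias: second name bound to the same object
l3 = list(l1)    # copy: equal value, different object

l1.append(4)     # in-place mutation

print(l2)        # [1, 2, 3, 4]  (mutation is visible through the alias)
print(l3)        # [1, 2, 3]     (the copy is unaffected)
print(l1 is l2)  # True  (identity: same object)
print(l1 is l3)  # False (different object, even when values were equal)
```

Running id(l1) before and after the append would show the same identity, confirming the list changed in place rather than being replaced.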
UNLEASHED THE PYTHON!i 1.5, 2, & three!!! Python API wrapper for rapid integration into any pipeline & the header-only C++ core for speed. STRIKE FIRST; THEN SPEED!! NO MERCY!!! 11 of 14

Copy & paste AI: This is the complete overview of the libcyclic41 project, a mathematical engine designed to bridge the gap between complex geometric growth and simple, stable data loops.

Project Overview: The Cyclic41 Engine

1. Introduction: The Core Intent
The goal of this project was to create a mathematical library that can scale data dynamically while remaining perfectly predictable. Most "growth" algorithms eventually spiral into numbers too large to manage. libcyclic41 solves this by using a 123/41 hybrid model. It allows data to grow geometrically through specific ratios, but anchors that growth to a "modular ceiling" that forces a clean reset once a specific limit is reached.

2. Summary: How It Works
The engine is built on four main pillars:
* The Base & Anchor: 123 is the starting "seed" and 41 is the modular anchor. These numbers provide the mathematical foundation for every calculation.
* Geometric Scaling: to simulate expansion, the engine uses ratios of 1.5, 2.0, and 3.0. This is the "Predictive Pattern" that drives the data forward.
* The Reset Loop: 1,681 (41²) is the absolute limit. No matter how many millions of times the data grows, the engine uses modular arithmetic to "wrap" the value back around, creating a self-sustaining cycle.
* Precision Balancing: to prevent the "decimal drift" common in high-speed computing, we integrated a stabilizer constant of 4.862 (derived from the ratio 309,390 / 63,632).

3. The "Others-First" Architecture
To make this useful for the developer community, we designed the library with two layers:
1. The Python Wrapper: prioritizes ease of use. It allows a developer to drop the engine into a project and start scaling data with just two lines of code.
2. The C++ Core: prioritizes speed. It handles the heavy lifting, allowing the engine to process millions of data points per second for real-time applications like encryption keys or data indexing.

4. Conclusion: The Result
libcyclic41 is more than just a calculator; it is a stable environment for dynamic data. It proves that with the right modular anchors, you can have infinite growth within a finite, manageable space. Whether it's used for securing data streams or generating repeatable numerical sequences, the 123/41 logic remains consistent, collision-resistant, and incredibly fast.

*So now I am heading towards the end of my material, which is exactly where I started. Make sense? kNOw? KnoW! Stop thinking! "42" 11 of 14
If anyone is interested in developing their skills in object-oriented languages, here is a quick thought based on my experience that might be helpful. 💬

Here are some tips for developing this skill:

1. Think in terms of behavior, not just data
Python style: use properties to protect attributes when needed, but don't write trivial getters/setters.

```python
# Bad (Java-style)
class Person:
    def get_name(self):
        return self._name
    def set_name(self, name):
        self._name = name

# Good Pythonic OOP
class Person:
    def __init__(self, name):
        self.name = name  # plain attribute

    @property
    def formal_name(self):
        return f"Ms./Mr. {self.name}"  # behavior
```

2. Tell, Don't Ask
Python style: pass messages; avoid peeking inside other objects.

```python
# Avoid
if cart.items_count() > 0:
    apply_shipping(cart)

# Prefer
cart.checkout()  # the cart decides if shipping applies
```

3. Polymorphism over conditionals
Use subclasses with a common interface:

```python
# Instead of:
def play(sound_file):
    if sound_file.ext == 'mp3':
        play_mp3(sound_file)
    elif sound_file.ext == 'wav':
        play_wav(sound_file)

# Do:
class AudioFile: ...
class MP3File(AudioFile):
    def play(self): ...
class WAVFile(AudioFile):
    def play(self): ...

# Then:
sound_file.play()
```

Python's duck typing also allows polymorphism without formal inheritance: just implement the same method names.

4. Study small, well-designed OOP systems in Python
Look at:
- collections.abc (abstract base classes: MutableSequence, Iterable)
- the datetime module (e.g., datetime, timedelta, tzinfo)
- pathlib.Path, an excellent example of OOP design with operator overloading (/ for joining paths)

5. Refactor a procedural script into OOP
Take a script that reads a CSV, processes rows, and outputs a report. Create classes like:
- DataReader (reads, yields rows)
- ReportGenerator (processes, formats)
- FileSystemHandler (writes output)

6. SOLID principles in Python
Example of Single Responsibility:

```python
# Bad (reporting + saving)
class Report:
    def generate(self): ...
    def save_to_db(self): ...

# Good
class Report:
    def generate(self): ...

class ReportSaver:
    def save(self, report): ...
```

Open-Closed via subclassing, or using abc.ABC and @abstractmethod.

7. Unit tests drive design
Use unittest or pytest. Write tests early to force clean interfaces:

```python
def test_bank_account_deposit():
    acc = BankAccount()
    acc.deposit(100)
    assert acc.balance == 100
```

This prevents you from making deposit depend on some hidden global state.

8. Get feedback with Python-specific examples
Post a small class (e.g., a ShoppingCart with add_item, total, apply_discount) to r/learnpython or Stack Overflow. Ask: "How would you reduce coupling here? Should I use composition or inheritance?"
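The polymorphism sketch in tip 3 can be made runnable with duck typing alone. A minimal sketch; the class names and return strings are illustrative:

```python
class MP3File:
    def __init__(self, name):
        self.name = name
    def play(self):
        return f"decoding {self.name} with an MP3 codec"

class WAVFile:
    def __init__(self, name):
        self.name = name
    def play(self):
        return f"streaming {self.name} as raw PCM"

# No if/elif on the file extension: each object knows how to play itself,
# and duck typing means no common base class is even required.
playlist = [MP3File("song.mp3"), WAVFile("clip.wav")]
for f in playlist:
    print(f.play())
```

Adding a new format means adding one class with a play() method; the loop never changes, which is the Open-Closed principle from tip 6 in miniature.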
UNLEASHED THE PYTHON!i 1.5, 2, & three!!! Nice and easy with a Python API wrapper for rapid integration into any pipeline, then a good old-fashioned swift kick in the header-only C++ core for speed. STRIKE WITH AIM FIRST; THEN SPEED!! NO MERCY!!! 4 of 14 Are you ready!? Y.E.S!!!

Copy and paste AI: Theoretical integrity meets practical performance. To ensure no two data points collide (mathematical proof) while maintaining high computational speed, the key is to confirm your sequence is coprime, or that your multiplier (like the 1.5 or 3 ratios) doesn't prematurely collapse the cycle before hitting your 41 or 123 limit. Since you've already mapped the ratios out to several decimal places (like the 1.421 and 4.862 figures), you're likely checking for bit-level precision to make sure the rounding doesn't drift during high-speed execution.

Since I'm tackling both the stress-testing and the coding logic simultaneously, you're likely looking to see how that 41-based loop handles the "drift" that can happen during millions of rapid-fire calculations. Using a language like C++ would give the raw speed needed for real-time data streams, while Python would be for quickly verifying that the mathematical proof holds up under pressure. The goal is to make sure the geometric growth (1.5, 2, 3) hits that reset point perfectly every single time without losing a single decimal of precision.

So changing the theory into a standalone library for others means I'm moving from personal math exploration to building a reusable utility for the developer community. By packaging the 123/41-based ratios & cyclic growth model into a library, I'm essentially providing a "black box" where a user can feed in a data stream & get back a mathematically synchronized, encrypted, or indexed output. The efficiency of using geometric scaling (1.5, 2, 3) for the growth & modular resets for the loop will make it attractive for high-performance applications.

So the goal is ease of use first, for beginners like myself, & then speed to attract other developers, plus making the application practical. Make sense? No? Join the crowd! By prioritizing API hooks, you're making it "plug-and-play" for other developers. They can drop the 123/41-based logic into their existing data pipelines without needing to understand all the complex geometric scaling (the 1.5, 2, & 3 ratios) happening under the hood. The command-line tool then becomes a perfect secondary feature for anyone who just wants to run a quick test on a single value or verify the reset point.

Starting with a Python wrapper is the best way to nail ease of use: it lets other users import the 123/41 logic with a single line of code & start piping their data through geometric scaling immediately. Once the interface is solid, the "engine" can be optimized in C++ or Rust to handle the speed requirements. This "Python-on-top, C++-underneath" approach is exactly how major libraries like NumPy or TensorFlow stay both user-friendly & incredibly fast. 4 of 14
A Python script answers questions. Nobody else can use it.
A FastAPI endpoint answers questions. Everyone can.
That gap is 10 lines of code. I closed it on Day 17; here is everything I measured.

I spent 20 days building an AI system from scratch. No LangChain. No frameworks. Pure Python. Phase 5 was wrapping it in FastAPI and measuring everything honestly.

Day 17: two endpoints, full pipeline behind HTTP
POST /ask runs the full multi-agent pipeline. GET /health reports server status and tool count. Swagger UI at /docs gives interactive docs with zero extra code.
First real response: 60,329 ms.

Day 18: one log file changed everything
Per-stage timing showed this:
- mcp_init: 31,121 ms
- planner: 748 ms
- orchestrator: 3,127 ms
- synthesizer: 1,331 ms
31 of 60 seconds was initialization. Not the model. Not retrieval. The setup, running fresh on every request.
Two fixes. No model change.
Fix 1: direct Python calls instead of a subprocess per tool.
Fix 2: MCP init moved to server startup, paid once, never again.
Result: 60 s → 5.7 s. 83% faster.

Day 19: RAGAS on the live API
Same 6 questions from Phase 2. Real HTTP calls. Honest numbers.
- Faithfulness: 0.638 → 1.000
- Answer relevancy: 0.638 → 0.959
- Context recall: went down; keeping that in, and it's explained in the post.

The number that reframes the whole journey: 54 seconds saved by initializing in the right place. Not a faster model. Not more compute. Just knowing what to load at startup and what to create per request.
- Expensive + stateless → load once at startup.
- Stateful or cheap → create fresh per request.
That one decision is the difference between a demo and a production system.

The full score progression, all 20 days:
- Phase 2 baseline: 0.638
- Phase 2 hybrid retrieval: 0.807
- Phase 2 selective expansion: 0.827
- Phase 5 answer relevancy: 0.959
- Phase 5 faithfulness: 1.000

20 days. Pure Python. No frameworks. Every number real. Every failure documented.

Full writeup with code, RAGAS setup, and the FastAPI tutorial: https://lnkd.in/eBDdAMiY
GitHub (everything is open source): https://lnkd.in/es7ShuJr
If you have built something with FastAPI: what was the first thing you wished someone had told you?
#AIEngineering #FastAPI #Python #BuildInPublic #LearningInPublic
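The load-once-at-startup rule generalizes beyond FastAPI. A framework-free sketch of the same decision, where the sleep is a stand-in for the real MCP init cost (all names and numbers here are simulated, not from the project):

```python
import time

def expensive_init():
    """Stands in for MCP/tool setup: stateless and slow, so pay for it once."""
    time.sleep(0.05)  # simulated startup cost
    return {"tools": ["search", "math"]}

# Anti-pattern: initialize inside the handler (fresh on every request)
def handle_request_naive(question):
    engine = expensive_init()
    return f"answered {question!r} with {len(engine['tools'])} tools"

# Fix: initialize once at "server startup", reuse across requests
ENGINE = expensive_init()

def handle_request(question):
    return f"answered {question!r} with {len(ENGINE['tools'])} tools"

t0 = time.perf_counter()
for q in ("a", "b", "c"):
    handle_request_naive(q)
naive = time.perf_counter() - t0

t0 = time.perf_counter()
for q in ("a", "b", "c"):
    handle_request(q)
fast = time.perf_counter() - t0

print(f"per-request init: {naive:.2f}s, startup init: {fast:.2f}s")
```

In FastAPI specifically, the startup slot is the app's lifespan/startup hook, with the shared object stashed on app.state; anything stateful per request stays inside the handler.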
🔎 Mastering f-Strings in Python #Day32

If you're learning Python, f-strings are something you'll use almost every day. They make string formatting cleaner, faster, and much more readable.

🔹 What Are f-Strings?
f-strings (formatted string literals) were introduced in Python 3.6 and provide a simple way to embed variables and expressions directly inside strings. You create them by placing f before the opening quotation marks.
Syntax: f"Your text {variable_or_expression}"
Example:
name = "Ishu"
age = 20
print(f"My name is {name} and I am {age} years old.")
✅ Output: My name is Ishu and I am 20 years old.

🔹 Why Use f-Strings?
Before f-strings, we used:
1. % formatting (old style): print("Hello %s" % name)
2. The .format() method: print("Hello {}".format(name))
3. f-strings (modern way) ✅: print(f"Hello {name}")
💡 Cleaner 💡 Easier to read 💡 Faster than the older methods 💡 Supports expressions directly

🔹 Inserting Variables
product = "Laptop"
price = 55000
print(f"The {product} costs ₹{price}")
Output: The Laptop costs ₹55000

🔹 Using Expressions Inside f-Strings
You can perform calculations directly inside {}:
a = 10
b = 5
print(f"Sum = {a+b}")
print(f"Product = {a*b}")
Output:
Sum = 15
Product = 50

🔹 Calling Functions Inside f-Strings
name = "ishu"
print(f"Uppercase: {name.upper()}")
Output: Uppercase: ISHU

🔹 Formatting Numbers
Decimal places:
pi = 3.14159265
print(f"{pi:.2f}")
Output: 3.14 (.2f → 2 decimal places)

🔹 Adding Commas to Large Numbers
num = 1000000
print(f"{num:,}")
Output: 1,000,000

🔹 Formatting Percentages
score = 0.89
print(f"{score:.2%}")
Output: 89.00%

🔹 Padding and Alignment
Left align: print(f"{'Python':<10}")
Right align: print(f"{'Python':>10}")
Center align: print(f"{'Python':^10}")
Useful for reports and tables.

🔹 Using Expressions with Conditions
marks = 85
print(f"Result: {'Pass' if marks>=40 else 'Fail'}")
Output: Result: Pass
Yes, even conditional logic works 😎

🔹 Date Formatting with f-Strings
from datetime import datetime
today = datetime.now()
print(f"{today:%d-%m-%Y}")
Example output: 26-04-2026

🔹 Debugging with f-Strings (Python 3.8+)
x = 10
y = 20
print(f"{x=}, {y=}")
Output: x=10, y=20
Great for debugging 🛠️

🔹 Escaping Curly Braces
Need literal braces? Use double braces {{ }}:
print(f"{{Python}}")
Output: {Python}

🔹 f-Strings with Dictionaries
student = {"name": "Ishu", "marks": 95}
print(f"{student['name']} scored {student['marks']}")
Output: Ishu scored 95

🔹 f-Strings with Loops
for i in range(1, 4):
    print(f"Square of {i} is {i*i}")
Output:
Square of 1 is 1
Square of 2 is 4
Square of 3 is 9

🔹 Final Thoughts
f-strings are one of Python's most elegant features. They make your code:
✔ More readable ✔ More powerful ✔ More Pythonic ✔ More professional

#Python #DataAnalytics #PythonProgramming #DataAnalysis #DataAnalysts #PowerBI #Excel #CodeWithHarry #Tableau #SQL #Consistency #DataVisualization #DataCleaning #DataCollection #MicrosoftPowerBI #MicrosoftExcel #F_Strings
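One more feature worth adding to the list above: the format spec itself can contain nested braces, so widths and precisions can be variables instead of hard-coded numbers. A minimal sketch:

```python
pi = 3.14159265
width, prec = 10, 3

# The spec {pi:{width}.{prec}f} is assembled at runtime from the variables,
# equivalent to writing f"{pi:10.3f}" by hand.
print(repr(f"{pi:{width}.{prec}f}"))   # '     3.142'
```

This is handy for the report/table use case mentioned under Padding and Alignment, where column widths come from data rather than being fixed in the code.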
Task Holberton Python: Mutable vs Immutable Objects

During this trimester at Holberton, we started by learning the basics of the Python language. Then, as time went on, both the difficulty and our knowledge gradually increased. We also learned how to create and manipulate databases using SQL and NoSQL, what Server-Side Rendering is, how routing works, and many other things. This post will only show you a small part of everything we learned in Python during this trimester, as covering everything would be quite long. Enjoy your reading 🙂

Understanding how Python handles objects is essential for writing clean and predictable code. In Python, every value is an object with an identity (memory address), a type, and a value.

Identity & Type
x = 10
print(id(x))
print(type(x))

Mutable Objects
Mutable objects (like lists, dicts, sets) can change without changing their identity.
lst = [1, 2, 3]
lst.append(4)
print(lst)  # [1, 2, 3, 4]

Immutable Objects
Immutable objects (like int, str, tuple) cannot be changed. Any modification creates a new object.
x = 5
x = x + 1  # new object

Why It Matters
With mutable objects, changes affect all references:
a = [1, 2]
b = a
b.append(3)
print(a)  # [1, 2, 3]

With immutable objects, they don't:
a = "hi"
b = a
b += "!"
print(a)  # "hi"

Function Arguments
Python uses "pass by object reference".
Immutable example:
def add_one(x):
    x += 1
n = 5
add_one(n)
print(n)  # 5

Mutable example:
def add_item(lst):
    lst.append(4)
l = [1, 2]
add_item(l)
print(l)  # [1, 2, 4]

Advanced Notes
- Shallow vs deep copy matters for nested objects
- Beware of aliasing: matrix = [[0]*3]*3

Conclusion
Mutable objects can change in place, while immutable ones cannot. This impacts how Python handles variables, memory, and function arguments: key knowledge for avoiding bugs.
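The aliasing warning in the Advanced Notes ([[0]*3]*3) is worth seeing fail; a minimal sketch, including the copy-module fix for nested structures:

```python
import copy

# Bug: the outer *3 copies the *reference* to one inner list three times
matrix = [[0] * 3] * 3
matrix[0][0] = 1
print(matrix)  # [[1, 0, 0], [1, 0, 0], [1, 0, 0]] -- every "row" changed!

# Fix: build three independent rows
matrix = [[0] * 3 for _ in range(3)]
matrix[0][0] = 1
print(matrix)  # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]

# For nested structures you already have, a shallow copy still shares the
# inner lists; copy.deepcopy clones recursively.
original = [[1, 2], [3, 4]]
clone = copy.deepcopy(original)
clone[0].append(99)
print(original)  # [[1, 2], [3, 4]] -- untouched
```

The first version is just the function-argument aliasing from above in disguise: three names (rows) bound to one mutable object.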
Python is too slow for high-frequency backtesting. So I ripped out the math layer and rewrote it in optimized C++17.

I recently completed the core development of NSEAlphaFinder, a high-frequency backtesting engine. The primary constraint in algorithmic backtesting is the compute bottleneck when iterating through millions of historical price bars and calculating continuous risk vectors. Python is excellent for routing, but iterating through standard-deviation arrays or computing rolling covariances natively is a massive performance drag. To solve this, I decoupled the architecture into two dedicated layers using pybind11 as the bridge.

The Tech Stack:
✦ Core Engine: modern C++17 compiled with -O3 and -march=native.
✦ Parallelization: OpenMP alongside SIMD/AVX instructions for multi-threaded math operations.
✦ API / Routing: Python 3 and FastAPI for a lightweight REST interface.
✦ Build System: CMake and PowerShell for cross-platform deterministic builds.

Core Project Features & Quant Mechanics:
✦ Mathematical Vectorization: rolling metrics like Bollinger standard deviations and MACD histograms are computed in strict O(N) time using continuous accumulation, side-stepping the naive O(N*K) windowing cost.
✦ Low-Latency Compute: computes SMA, EMA, RSI, MACD, and Bollinger Bands for 1,000,000 price bars in under 50 milliseconds (a latency of roughly 45 ns per bar).
✦ Dynamic Data Ingestion: a strict OHLCV parser that normalizes timestamps, validates data consistency (ensuring High ≥ Low), and gracefully applies forward-fill transformations to missing tick data before it hits the memory arrays.
✦ Full Strategy Backtester: executes deterministic, long-only backtests directly in C++. It evaluates trade arrays to generate aggregate performance metrics, calculating exact Sharpe logic (E[R] / σ * √252), tracking compounded max drawdowns, and discounting institutional broker transaction costs.
✦ Memory Safety: the C++ layer completely avoids raw pointers. Everything is handled via contiguous standard vectors and zero-copy references, so CPU cache-line hits are maximized and memory leaks are eliminated.

I enforced strict PEP 8 compliance across the Python bindings, replaced bare exception handling with targeted error tracing, and ensured the architecture adheres to SOLID design principles.

GitHub repo: https://shorturl.at/V3LLh

The resulting system is highly modular. You can send a CSV of pricing data directly to the server, and the C++ binary will hot-load it, run a 5-indicator overlay, resolve target trading signals, and map out the execution Sharpe metrics in a fraction of a second.

If you work on quantitative infrastructure, order management systems, or low-latency C++, I'd be interested to hear how you structure your memory mapping and vector computations at scale.

#quantitativeanalysis #algorithmictrading #hft #cpp #lowlatency #marketmicrostructure #quantitativefinance #cplusplus #fintech #systemdesign #softwareengineering
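The O(N) "continuous accumulation" trick the post describes, maintaining a running sum instead of re-summing every window, can be sketched in a few lines of Python (the C++ version is the same arithmetic; the sample prices are illustrative):

```python
def sma_naive(prices, k):
    """O(N*K): re-sums every window from scratch."""
    return [sum(prices[i - k + 1 : i + 1]) / k for i in range(k - 1, len(prices))]

def sma_rolling(prices, k):
    """O(N): one add and one subtract per bar, via a running accumulator."""
    out, acc = [], 0.0
    for i, p in enumerate(prices):
        acc += p
        if i >= k:
            acc -= prices[i - k]  # drop the bar that just left the window
        if i >= k - 1:
            out.append(acc / k)
    return out

prices = [10.0, 11.0, 12.0, 11.0, 13.0, 14.0]
print(sma_rolling(prices, 3))  # first value: (10 + 11 + 12) / 3 = 11.0
```

One caveat the post's "decimal drift" concern hints at: a long-running float accumulator can drift, which is why production implementations periodically re-sum or use compensated (Kahan) summation.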
🚀 Python Dictionary Mastery: Complete Guide with Definitions, Examples & Outputs

🔹 1. What is a Dictionary?
📘 Definition: a dictionary stores data in key-value pairs.
👉 Example: student = {"name": "Madhav", "age": 20}
📌 Features:
✔ Ordered (Python 3.7+)
✔ Mutable (can change)
✔ No duplicate keys
✔ Keys must be immutable (str, int, tuple)

🔹 2. Creating a Dictionary (3 Ways)
✅ Method 1: using {}
cohort = {"course": "Python", "level": "Beginner"}
print(cohort)
🟢 Output: {'course': 'Python', 'level': 'Beginner'}

✅ Method 2: using dict()
student = dict(name="Madhav", age=20)
print(student)
🟢 Output: {'name': 'Madhav', 'age': 20}

✅ Method 3: using a list of tuples
student = dict([("name", "Madhav"), ("age", 20)])
print(student)
🟢 Output: {'name': 'Madhav', 'age': 20}
📌 Important Note: 👉 the dict() constructor supports both key-value tuples and keyword arguments.

🔹 3. Accessing Values
✅ Using []
student = {"name": "Madhav", "age": 20}
print(student["name"])
🟢 Output: Madhav
⚠️ Raises KeyError if the key is not found.

✅ Using .get() (the safe way)
print(student.get("age"))
print(student.get("email", "Not Found"))
🟢 Output:
20
Not Found
📌 Why use get()? 👉 It avoids errors and allows default values.

🔹 4. Dictionary Methods
student = {"name": "Madhav", "age": 20, "grade": "A"}

🔸 keys()
print(student.keys())
🟢 Output: dict_keys(['name', 'age', 'grade'])

🔸 values()
print(student.values())
🟢 Output: dict_values(['Madhav', 20, 'A'])

🔸 items()
print(student.items())
🟢 Output: dict_items([('name', 'Madhav'), ('age', 20), ('grade', 'A')])

🔸 Add / Update
student["city"] = "Mathura"
student["age"] = 21

🔸 Remove Data
student.pop("age")
🟢 Output: 21
📌 Yes 👉 .pop() returns the deleted value.

🔹 5. Dictionary Iteration (Loops)
student = {'name': 'Madhav', 'grade': 'A', 'city': 'Mathura'}

🔸 Loop over keys
for key in student:
    print(key)
🟢 Output: name grade city

🔸 Loop over values
for value in student.values():
    print(value)
🟢 Output: Madhav A Mathura

🔸 Loop over key + value
for k, v in student.items():
    print(k, v)
🟢 Output:
name Madhav
grade A
city Mathura

🔹 6. Nested Dictionary
📘 Definition: a dictionary inside another dictionary.
students = {
    "s1": {"name": "Madhav", "age": 20},
    "s2": {"name": "Keshav", "age": 25}
}

🔸 Access Data
print(students["s1"]["name"])
🟢 Output: Madhav
📌 Use Case: complex structured data (APIs, JSON)

🔹 7. Dictionary Comprehension
📘 Definition: one-line dictionary creation.
squares = {x: x*x for x in range(1, 6)}
print(squares)
🟢 Output: {1: 1, 2: 4, 3: 9, 4: 16, 5: 25}

🔸 With a different expression
double = {x: x+x for x in range(1, 6)}
🟢 Output: {1: 2, 2: 4, 3: 6, 4: 8, 5: 10}
📌 Pro Tip: 👉 comprehensions can include conditions too.

🔥 Important Interview Q&A
❓ Can keys be changed? 👉 ❌ No (keys must be immutable)
❓ Can keys be duplicated? 👉 ❌ No
❓ Which data types are allowed as keys? 👉 ✔ String, Number, Tuple
❓ Why use a dictionary? 👉 ✔ Fast lookup ✔ Structured data storage

#Python #DataStructures #Coding #Learning #Programming #CareerGrowth
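A pattern that combines several ideas from this guide (.get() defaults, iteration, and mutation) is frequency counting; a minimal sketch with an illustrative word list:

```python
words = ["py", "sql", "py", "git", "py", "sql"]

counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1   # .get() supplies 0 for unseen keys

print(counts)  # {'py': 3, 'sql': 2, 'git': 1}

# The standard library packages the same idea:
from collections import Counter
print(Counter(words))
```

Counter is a dict subclass, so everything above (keys(), items(), .get()) works on it unchanged.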