I spent 3 hours debugging a RecursionError at 2 AM. Turns out, I had no idea what recursion was actually doing to memory. Here's what changed everything for me 👇

─────────────────────
🧠 WHAT RECURSION REALLY IS
─────────────────────

Most tutorials say: "A function that calls itself."

That's true. But incomplete.

The real story? Every recursive call pushes a new stack frame into RAM. Local variables. Arguments. Return address. All of it — sitting in memory, waiting.

For factorial(5), Python holds 6 frames simultaneously before returning a single value.

─────────────────────
⚠️ THE HIDDEN DANGER
─────────────────────

Python's default recursion limit is 1000.
Hit it → RecursionError.
Ignore it → bloated memory.

Each frame costs ~300–400 bytes. 1000 frames = ~400 KB of stack.

And unlike Scheme or Scala, Python has NO tail-call optimization. Even "optimized" tail recursion still creates new frames.

─────────────────────
✅ THE FIX
─────────────────────

→ Use @lru_cache for overlapping subproblems (fib, DP)
→ Convert deep recursion to iteration
→ Use trampolining for functional-style recursion
→ Raise the limit with sys.setrecursionlimit() only when you understand why

─────────────────────
💡 THE MENTAL MODEL
─────────────────────

Think of the call stack like a stack of plates.
Each call = add a plate.
Base case = stop adding.
Return = remove plates one by one.

You wouldn't stack 10,000 plates. Don't stack 10,000 frames.

─────────────────────

Recursion isn't bad. Blind recursion is.
Understand the memory. Write better code.

─────────────────────

Found this useful?
♻️ Repost to help a developer who's debugging at 2 AM right now.
Follow me for daily Python deep-dives that go beyond the surface.

#Python #Programming #SoftwareEngineering #CodeQuality #PythonTips #RecursionExplained #LearnPython #Developer
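Two of those fixes are easy to see side by side. A minimal sketch (the function names are mine, and the stated default limit assumes stock CPython):

```python
import sys
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Every call pushes a new frame; overlapping subproblems recompute endlessly.
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n: int) -> int:
    # Same shape, but each n is computed once and memoized.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

def fib_iter(n: int) -> int:
    # No recursion at all: constant stack depth, two locals.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_naive(10) == fib_cached(10) == fib_iter(10) == 55
print(sys.getrecursionlimit())  # usually 1000 by default
```

Note that fib_cached still recurses: called cold with a large enough n it can hit the limit anyway, which is why the iterative version is the safe choice for deep inputs.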
Understanding Recursion and Memory in Python
𝐏𝐲𝐭𝐡𝐨𝐧 𝐝𝐞𝐟𝐢𝐞𝐬 𝐥𝐨𝐠𝐢𝐜 (𝐰𝐞𝐥𝐥, 𝐤𝐢𝐧𝐝𝐚).

In the real world, if you take one undeniable truth and combine it with another undeniable truth, you just get... well, the absolute Truth (I guess). Even in strict computer science, if you run Boolean values through logic gates, the outcome is always logical. True OR True equals True, and so does True AND True. Nowhere in the realm of Boolean algebra does combining two truths suddenly generate the number two.

This is where Python hisses (jk, jk). You might expect a logical output like True, or perhaps a run-time error telling you that you can't perform arithmetic on philosophical concepts. Instead, Python confidently bypasses formal logic entirely and spits out a number: 2.

Wait, what? Since when do two truths equal two?

The reason for this quirky behaviour lies in Python's rich history and a clever bit of language architecture. In its early days, like many languages, Python didn't actually have a dedicated Boolean data type. We simply used the integer 1 to represent True and 0 to represent False. It wasn't until Python 2.3 that True and False were officially introduced as built-in constants.

However, to ensure that millions of lines of older code didn't suddenly break, the developers made a pragmatic compromise. They created the new bool class, but made it a direct subclass of the int (integer) class. Because of this inheritance, under the hood, Python still treats True as exactly equal to 1 and False as 0. When you ask it to calculate True + True using the standard addition operator, it drops the boolean masks, ignores the logic gates, and simply calculates 1 + 1.

So, as a quirky byproduct of Python's commitment to backward compatibility, this feature remains built into the language today. It has become a favourite piece of trivia for developers, proving that in the Pythonic universe, logic is often just basic math hiding in plain sight!

𝐀𝐮𝐭𝐡𝐨𝐫𝐞𝐝 𝐛𝐲: Vybhav Chaturvedi
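The whole story fits in a few REPL-checkable lines:

```python
# bool is a subclass of int, so True and False carry integer values.
assert issubclass(bool, int)
assert isinstance(True, int)
assert True == 1 and False == 0

# Arithmetic operators see the integers...
assert True + True == 2

# ...while logical operators still behave logically.
assert (True and True) is True
assert (True or False) is True

# A practical upshot: booleans sum cleanly, handy for counting matches.
assert sum(n % 2 == 0 for n in range(10)) == 5
```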
I wrote a short blog post about pairing Gemma 4 with mellea, a Python library for structured generative programs, to get typed, validated output with automatic repair when the model gets it wrong. https://lnkd.in/etEafHex #MelleaAI #GenerativeComputing #gemma4
If you're treating NumPy arrays like fancy Python lists, you're leaving significant performance on the table. For senior devs and ML engineers, the difference between basic and advanced indexing isn't just syntax: it's a fundamental shift in memory management.

1. The Trailing Comma Trap

Consider these two operations on an array x:

view = x[(1, 2, 3)]
copy = x[(1, 2, 3),]

To a junior dev, they look nearly identical. To the NumPy engine, they are worlds apart:

Basic indexing (x[(1, 2, 3)]) behaves like x[1, 2, 3]: NumPy unpacks the tuple into one index per axis and manipulates internal strides and offsets without touching a single byte of raw data. This saves time and memory.

Advanced indexing (x[(1, 2, 3),]) triggers a copy. Because you provided a tuple containing a sequence, NumPy allocates new RAM and physically moves data. Advanced indexing always returns a copy of the data (in contrast with basic slicing, which returns a view).

2. The Mechanics of ndarray

An ndarray is a contiguous block of memory. Its power comes from vectorization: delegating loops to optimized C/C++ and SIMD instructions.

Avoid: [abs(val) for val in large_array] (slow Python interpreter overhead).
Prefer: np.abs(large_array) (fast, vectorized execution).

3. A Practical Senior-Level Tip: np.newaxis

Stop using .reshape() blindly. When you need to turn a row into a column for broadcasting (e.g., B[:, np.newaxis]), you are creating a view by adding a new dimension of length 1. It's a zero-cost abstraction that keeps your data contiguous and your cache lines happy.

The Rule of Thumb: If you don't need a copy, don't use the trailing comma. Keep your indexing basic to keep your pipelines efficient.

happy learning

#Python #NumPy #DataEngineering #PerformanceOptimization #MachineLearning #SoftwareArchitecture
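You can verify the view-vs-copy behaviour yourself. A quick check (the array shape is chosen purely for illustration):

```python
import numpy as np

x = np.arange(64).reshape(4, 4, 4)

# Basic indexing: the tuple is unpacked into one index per axis,
# exactly as if you had written x[1, 2, 3].
elem = x[(1, 2, 3)]
assert elem == 27

# Advanced (fancy) indexing: the trailing comma wraps the tuple in
# another tuple, so NumPy reads (1, 2, 3) as a sequence of indices
# along axis 0 and must allocate a fresh copy.
rows = x[(1, 2, 3),]
assert rows.shape == (3, 4, 4)
assert not np.shares_memory(rows, x)

# Basic slicing returns a view; np.newaxis adds a length-1 axis
# without copying, so the data stays shared and contiguous.
col = x[0, :, 0][:, np.newaxis]
assert col.shape == (4, 1)
assert np.shares_memory(col, x)
```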
I’ve been working on an open-source Python library for building AI agents. It’s called Dendrux.

The idea is that agent runtimes should handle more than just calling an LLM and tools. In production, you usually need persistence, crash recovery, human approvals, budgets, and guardrails. Dendrux brings all of that into the runtime.

It handles:
1. Tool deny policies and human approval with pause/resume
2. PII redaction at the LLM boundary, so the model sees placeholders while tools receive real values
3. Advisory token budgets with threshold warnings
4. Crash recovery with stale-run sweeping
5. Client-tool bridging for browsers and spreadsheets

It’s still early, currently v0.1.0a5, but the foundation is in place. Feedback, issues, and design critiques are welcome.

GitHub: https://lnkd.in/gYbhpcdM
Why does every AI review start from zero? Why burn thousands of tokens re-discovering the same call chains, the same module structure, the same dependency graph every single time?

The idea isn't new. code-review-graph already solved this in Python, and it works. I've been using it. But Python has a ceiling: single-threaded parsing, GIL contention on large repos, and startup overhead that adds up when you're calling it 50 times a day through MCP.

So I rewrote the entire thing in Go. Not a wrapper. Not bindings. A ground-up port designed around goroutines, channels, and SQLite WAL mode.

Result: code-review-graph-go. Same concept. Fundamentally different performance characteristics.

Here's what changed in the Go version:

→ Goroutine-parallel parsing: Tree-sitter across 17 languages, N=NumCPU workers. 1,800 nodes and 10,000+ edges in ~1.5 seconds. The Python version does this sequentially.
→ SQLite with WAL mode: concurrent readers, a mutex-serialised writer. Incremental updates only re-parse what git diff says changed, then expand to dependents via multi-hop BFS.
→ Hybrid search engine: FTS5 BM25 + vector embeddings merged via Reciprocal Rank Fusion. "UserService" auto-boosts Class results. "get_user" auto-boosts Functions. Context-file boosting for what you're actively editing. The Python version has this too; the Go version adds the RRF merge with a zero-allocation hot path.
→ 19 MCP tools that drop into Claude Code, Cursor, Windsurf, Zed, Continue, or OpenCode with a single install command. Full JSON-RPC 2.0 over stdio.
→ Execution flow tracing that walks every code path from entry point to leaf call, scored by criticality (file spread, security sensitivity, test coverage gaps).
→ Refactoring engine that previews renames across every call site, detects dead code, suggests moves based on community structure, and applies changes with path-traversal safety checks.
→ Auto-generated wiki from your codebase's community structure. Markdown pages with member tables, flow summaries, and cross-community dependencies.
→ Context-aware hints: the MCP server tracks your session, infers whether you're reviewing/debugging/refactoring, and appends next-step suggestions to every tool response.

All of it runs locally. No API keys. No cloud. Just a single Go binary + SQLite.

Full credit to the original Python project by Tirth Kanani for the architecture and the idea.

Today I'm open-sourcing the whole thing. But here's what I really want: if you work on a codebase with 500+ files, try it. Run build, then search, then detect-changes. See if the graph catches relationships you didn't know existed. Then tell me what's broken.

Want to compare it against the Python version on your repo? I'd genuinely love to see those benchmarks. This is better with a community.

GitHub: https://lnkd.in/gSjZP3ay

Drop a star if this resonates. PRs are very welcome.

#opensource #golang #python #ai #codereview #mcp #llm #developer #tooling #treesitter #sqlite #performance
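Reciprocal Rank Fusion itself is simple enough to sketch. A generic Python illustration of the idea (not the tool's actual implementation; the names and k=60 are conventional placeholders):

```python
def rrf_merge(rankings, k=60):
    """Merge ranked result lists: each doc scores sum(1 / (k + rank))."""
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Higher fused score wins; docs found by several engines rise to the top.
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["UserService", "AuthController", "get_user"]
vector_hits = ["UserService", "get_user", "SessionStore"]
fused = rrf_merge([bm25_hits, vector_hits])
assert fused[0] == "UserService"   # ranked highly by both engines
```

The constant k damps the advantage of a single #1 hit, so agreement across engines matters more than any one engine's top pick.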
A tiny public anti-drift example in Python. Not proprietary. Not platform-specific. Just the basic law:

- hash truth surfaces
- classify write targets
- deny runtime writes to canonical state
- write mutable state atomically

A lot of reliability problems become easier once you stop asking “how do we clean this up?” and start asking “why is live mutation happening on a truth surface at all?” That question scales far better than cleanup scripts.

#Python #Reliability #SystemsDesign #AIGovernance

🎁 more gifts

from __future__ import annotations

from pathlib import Path
import hashlib
import json
import os
import tempfile


def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def detect_drift(canonical_path: Path, runtime_path: Path) -> dict:
    canonical = canonical_path.read_bytes()
    runtime = runtime_path.read_bytes()
    return {
        "canonical_sha256": sha256_bytes(canonical),
        "runtime_sha256": sha256_bytes(runtime),
        "match": canonical == runtime,
    }


def decide_write(path_class: str, actor_type: str) -> str:
    """
    path_class: canonical | runtime | derived | local
    actor_type: human | runtime | ci
    """
    if path_class == "canonical" and actor_type == "runtime":
        return "deny"
    if path_class == "runtime" and actor_type in {"runtime", "ci"}:
        return "allow"
    if path_class == "derived" and actor_type in {"runtime", "ci"}:
        return "allow"
    return "review"


def atomic_write_json(path: Path, payload: dict) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    raw = json.dumps(payload, indent=2, sort_keys=True) + "\n"
    fd, tmp_name = tempfile.mkstemp(prefix=".tmp_", dir=str(path.parent))
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as fh:
            fh.write(raw)
        os.replace(tmp_name, path)
    except Exception:
        try:
            os.unlink(tmp_name)
        except FileNotFoundError:
            pass
        raise


if __name__ == "__main__":
    canonical = Path("config/policy.json")
    runtime = Path("var/runtime/policy_runtime.json")
    result = detect_drift(canonical, runtime)
    print(result)
    decision = decide_write(path_class="canonical", actor_type="runtime")
    print({"write_decision": decision})
🔧 Building AI Agents from Scratch – Part 10: AI Agent Python Library Packaging is live!

In this post, I explore how agents can be packaged and shared like any other Python library:

✨ From Scripts to Libraries – agents move beyond ad‑hoc scripts into structured, reusable packages.
✨ Packaging with setup.py / pyproject.toml – standard Python packaging ensures agents can be installed via pip.
✨ Wheel Files (.whl) – agents are compiled into distributable wheels, making installation fast and dependency‑safe.
✨ Distribution via Git – teams can version, share, and collaborate on agents across repositories.
✨ FastAPI Discovery Integration – packaged agents can register themselves automatically, enabling plug‑and‑play orchestration.

This series continues to be based entirely on my work experience. It’s not about frameworks—it’s about learning the fundamentals and understanding what they’re built on.

👉 Read Part 10: https://lnkd.in/gAsxewjw

If you’re curious about how packaging transforms agents into modular, reusable components, I’d love for you to follow along.

#AI #Agents #Python #Packaging #AgenticAI #LearningByDoing
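For readers who haven't packaged a library before, the pyproject.toml step can look like this minimal sketch (every name, version, and dependency here is a hypothetical placeholder, not taken from the series):

```toml
# pyproject.toml — minimal, hypothetical agent package
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "my-agent"
version = "0.1.0"
description = "A reusable AI agent packaged as a Python library"
requires-python = ">=3.9"
dependencies = [
    "fastapi",
    "pydantic",
]
```

With this in place, `python -m build` produces a wheel (.whl) that teammates can `pip install`, including straight from a Git URL.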
Two parent classes. Same method name. One child class. Which one does Python call?

I assumed Python would just crash — or at least throw an error. It didn't. It silently picked one. And I had no idea which one or why.

━━━━━━━━━━━━━━━━━━━━━━

This is Multiple Inheritance in Python.

class A:
····def hello(self): print("Hello from A")

class B:
····def hello(self): print("Hello from B")

class C(A, B):
····pass

C().hello()

━━━━━━━━━━━━━━━━━━━━━━

Output: Hello from A

But why A and not B?

Python follows something called MRO — Method Resolution Order. It uses an algorithm called C3 Linearization.

The rule is simple: Python reads left to right in the inheritance list, then goes up.

C → A → B → object

So it finds hello() in A first — and stops there.

━━━━━━━━━━━━━━━━━━━━━━

You can actually see Python's MRO yourself:

print(C.mro())

Output:
▶ C → A → B → object

━━━━━━━━━━━━━━━━━━━━━━

My Software Engineering brain connected this immediately. In Java, multiple inheritance isn't even allowed for classes — exactly because of this ambiguity. Java forces you to use interfaces instead.

Python allows it — but quietly follows a strict order behind the scenes.

The lesson: Python is never random. There's always a rule. You just have to find it.

━━━━━━━━━━━━━━━━━━━━━━

Senior developers — has MRO ever caused a bug in your production code that took you a while to trace? Genuinely curious how often this actually bites people.

#Python #OOP #DataScience #SoftwareEngineering
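One wrinkle "left to right, then up" hides: in a diamond, C3 does not climb the entire left branch before trying the right one. A small sketch with my own class names:

```python
class Top:
    def hello(self):
        return "Top"

class Left(Top):
    pass

class Right(Top):
    def hello(self):
        return "Right"

class Bottom(Left, Right):
    pass

# A naive depth-first search would climb Left -> Top and answer "Top".
# C3 linearization defers the shared base: Bottom -> Left -> Right -> Top.
assert [c.__name__ for c in Bottom.mro()] == ["Bottom", "Left", "Right", "Top", "object"]
assert Bottom().hello() == "Right"
```

This deferral is what keeps cooperative super() chains sane: every class appears exactly once, after all of its subclasses.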