Tired of scattering print() statements across your FastAPI code just to chase down one bug? You restart the server. Hit the endpoint. Squint at logs. Still no idea what broke.

The real issue? Most tutorials show debugging on a single main.py. The moment you have a subdirectory structure, a .venv, and a .env file, the same config silently breaks. Breakpoints don't fire. VS Code loads the wrong interpreter. You get "Could not import module" and have no idea why.

Once I got the setup right, everything changed:

✅ Breakpoints that actually trigger on every request
✅ Live variable inspection mid-request, no prints needed
✅ Call stack navigation to see exactly how you got there
✅ Conditional breakpoints that pause only when a specific condition is true

Zero changes to your source code. Commit launch.json once and your whole team gets the same setup.

I wrote a full guide covering:

🔧 The exact launch.json that works, and the one field most configs get wrong
🐛 A 5-step mental model: if debugging fails, one of these broke
🐳 Remote debugging inside Docker with debugpy
⚡ Logpoints, conditional breakpoints, and exception pausing

If your breakpoints never hit, you'll recognize the fix within the first two minutes of reading.

👉 https://lnkd.in/ew4ueC8Z

#FastAPI #Python #VSCode #Debugging #BackendEngineering
Fix FastAPI Debugging with launch.json and Remote Debugging
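For reference, a minimal launch.json of the kind the guide describes. This is a sketch under assumptions: the app is presumed to live at app/main.py, and my guess at the commonly botched field is using "program" instead of "module" (launching uvicorn as a module keeps imports and the working directory consistent). Leaving out --reload is also deliberate here, since the reloader runs your app in a child process the debugger isn't attached to.

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "FastAPI (debugpy)",
      "type": "debugpy",
      "request": "launch",
      "module": "uvicorn",
      "args": ["app.main:app", "--port", "8000"],
      "cwd": "${workspaceFolder}",
      "envFile": "${workspaceFolder}/.env",
      "justMyCode": true
    }
  ]
}
```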
I built a RAG layer for Claude Code that cuts token usage by 80–90%.

Most devs using Claude Code don't realize they're burning tokens on files Claude doesn't need to read. Ask Claude "how does auth work?" and it reads 3 full files: 1,500+ tokens just to answer with 40 relevant lines. I fixed that.

What I built: a local hybrid RAG system that sits between Claude and your codebase:

→ Late chunking: splits every file into overlapping 40-line windows
→ Dense retrieval: semantic search with all-MiniLM-L6-v2 (runs fully local, no API key)
→ BM25 sparse retrieval: keyword matching for exact symbol names
→ Cross-encoder reranking: picks the 3 best chunks from 20 candidates
→ File watcher: auto-rebuilds the index within 2 seconds of any file save

Claude Code reads the CLAUDE.md and knows to run the pip package before opening any file. It gets back 3 precise snippets with file path + line range. It reads only those lines. Nothing else.

Real numbers on my Volta Engine project (76 files):
- Without RAG: 17,235 chars across 3 files for one question
- With RAG: 3,073 chars, the exact 3 chunks that matter
- 82% fewer tokens. Same answer.

The whole thing runs offline. No cloud embeddings. No API calls. Just a one-time pip install and run.

Stack: sentence-transformers · rank-bm25 · watchdog · Python

If you use Claude Code daily on a real codebase, this pays for itself in the first session. DM me if you want the scripts. 🧠

#AI #ClaudeCode #RAG #DeveloperTools #Python #LLM #Productivity
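The late-chunking step is the easiest piece to sketch in isolation. Below is a minimal, hypothetical version of an overlapping 40-line window splitter; the 20-line overlap is my assumption, since the post only states the window size.

```python
def late_chunks(lines, window=40, overlap=20):
    """Split a file's lines into overlapping fixed-size windows.

    Returns (start_line, end_line, chunk_lines) tuples, 1-indexed,
    so a retriever can hand back an exact file path + line range.
    """
    step = window - overlap
    chunks = []
    for start in range(0, max(len(lines) - overlap, 1), step):
        end = min(start + window, len(lines))
        chunks.append((start + 1, end, lines[start:end]))
        if end == len(lines):
            break  # last window reached the end of the file
    return chunks
```

Each chunk would then be embedded for dense retrieval and tokenized for BM25, with both candidate lists merged before the reranking stage.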
I was paying twenty cents a run for a hosted image pipeline on Replicate. At a few thousand runs a month, that started to hurt.

No README. No docs. Just four input parameters and a price tag. I wanted to call the underlying models directly, but I had only a hazy idea of what was chained together, or in what order.

Then I noticed Replicate's `predictions.create()` API returns a `logs` field. Raw stdout from the container. One call, and the entire pipeline printed itself out with emojis:

Step 1: LLM generates a contextual prompt
Step 2: Segmentation extracts a face mask
Step 3: Mask inversion (a detail that had been silently breaking my outputs)
Step 4: Inpainting model does the swap

A few lines of Python later: same output, roughly half the cost. Nothing clever. I just read what was already there.

What stuck with me is how familiar the pattern felt. Recently someone reconstructed the full source of Claude Code from the shipped npm bundle. No breach. Just a minified file and an LLM to rename the variables. Observability, side channels, shipped bundles, container logs. Different layers, same lesson.

A small reminder for builders: your debug output is part of your public interface. And for anyone integrating a closed system: check what it's already saying out loud before assuming it's opaque.

What's the most useful thing you've learned from logs someone forgot to turn off? Details in the post in comments.

#SoftwareEngineering #Security #MachineLearning #DeveloperTools
🚀 Sync vs Async in FastAPI: what I finally understood

When I started using FastAPI, I kept seeing "def" vs "async def", but the real difference clicked only after I faced performance issues.

🔍 Here's the simple breakdown:

👉 Sync (def)
- Blocks its thread until the work is done
- If a task waits on I/O, that thread just sits there
- Best for CPU-heavy operations
- (FastAPI runs def endpoints in a threadpool, so other requests still get served, but each thread stays tied up while it waits)

👉 Async (async def)
- Handles multiple requests concurrently on one event loop (non-blocking)
- Doesn't sit idle during I/O waits
- Perfect for DB calls, API calls, file operations

💡 Real insight: I had an API that was slow because of waiting operations. Switching to async reduced response time significantly.

⚡ Rule I follow now:
- CPU-bound work → sync
- I/O-bound work → async (with async-capable libraries; a blocking call inside async def stalls the whole event loop)

📌 Biggest takeaway: async doesn't make your code "faster". It makes your API handle more requests efficiently.

---

If you're building APIs with FastAPI, understanding this is a game changer.

#fastapi #python #backenddevelopment #webdevelopment #async #softwaredeveloper
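The "doesn't sit idle" point is easy to demonstrate with nothing but the standard library. This sketch fakes ten 100 ms I/O calls (asyncio.sleep standing in for a DB or HTTP call) and overlaps them the way an async endpoint can:

```python
import asyncio
import time

async def fake_io(i):
    # stands in for an awaitable DB query or external API call
    await asyncio.sleep(0.1)
    return i

async def handle_all():
    # an async handler can launch all ten waits and let them overlap
    return await asyncio.gather(*(fake_io(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(handle_all())
elapsed = time.perf_counter() - start
# ten 0.1 s waits overlap, so wall time stays near 0.1 s, not 1 s
```

A sync version doing the same ten calls back to back would pay the full second, which is exactly the "waiting operations" slowdown described above.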
Built a Python-based Directory Sync Tool to compare and synchronize files between two directories with reliability and control.

Instead of relying only on file names or timestamps, the tool uses a combination of metadata and SHA-256 hashing to accurately detect new, modified, and missing files.

Key highlights:
• Recursive directory scanning with structured metadata (name, extension, size, hash)
• Efficient change detection using size-first filtering followed by hash comparison
• Memory-efficient hashing using chunk-based file reading (handles large files)
• Synchronization support with metadata preservation using shutil.copy2
• Safe cleanup by optionally removing extra files from the destination

While building this, I focused on moving beyond a basic script and treating it like a real tool: structuring the code into clear components, improving output readability, and adding validation and error handling to make it more reliable in real use.

GitHub: https://lnkd.in/gt-Ec3rF

#Python #CLI #GitHubProjects #SoftwareDevelopment #LearningByBuilding #SystemsThinking
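The "memory-efficient hashing" bullet corresponds to a pattern like this (a sketch of the technique, not the repo's exact code):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel keeps reading until read() returns b""
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Comparing sizes first and hashing only size-matched pairs is what keeps the sync fast: the hash is the expensive tiebreaker, not the first check.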
These past few days I've been diving into middleware in FastAPI, and honestly it clicked better than I expected.

I implemented 4 types:
→ CORS: control which frontends can talk to my API
→ GZip: compress large responses and reduce payload size
→ HTTPS Redirect: force secure connections automatically
→ Custom Timer Middleware: my favorite, built from scratch using BaseHTTPMiddleware

The custom one was the most interesting. I wrapped every request with a timer to measure how long each endpoint takes to respond. Something like this:

start = time.time()
response = await call_next(request)
duration = time.time() - start

Simple concept, but it made me realize how powerful middleware is: you intercept every request and response without touching a single endpoint.

One thing that surprised me: even a basic loop of 10 million iterations is clearly visible in the timing output. That's when I understood why performance monitoring at the middleware level actually matters in production.

Still learning, but these small wins keep me going. Code here if you want to check it out 👇

https://lnkd.in/e773_smX

#FastAPI #Python #WebDevelopment #BackendDevelopment #Learning
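For anyone curious, the same timing idea can be written as a dependency-free raw-ASGI middleware. This is a sketch of the technique, not the linked repo's code (which uses Starlette's BaseHTTPMiddleware); it also uses time.perf_counter() instead of time.time(), since perf_counter is monotonic and higher resolution.

```python
import time

class TimerMiddleware:
    """Wrap any ASGI app and report each request's duration in a response header."""

    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            # pass websockets/lifespan events through untouched
            return await self.app(scope, receive, send)
        start = time.perf_counter()

        async def timed_send(message):
            if message["type"] == "http.response.start":
                duration = time.perf_counter() - start
                headers = list(message.get("headers", []))
                headers.append((b"x-process-time", f"{duration:.4f}".encode()))
                message = {**message, "headers": headers}
            await send(message)

        await self.app(scope, receive, timed_send)
```

Wiring it up is one line: `app = TimerMiddleware(app)`. It intercepts every request and response without touching a single endpoint, which is the whole point of the pattern.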
𝗙𝗮𝘀𝘁𝗔𝗣𝗜 𝗶𝘀𝗻'𝘁 𝗳𝗮𝘀𝘁 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝗙𝗮𝘀𝘁𝗔𝗣𝗜. 𝗜𝘁'𝘀 𝗳𝗮𝘀𝘁 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝘄𝗵𝗮𝘁'𝘀 𝘂𝗻𝗱𝗲𝗿𝗻𝗲𝗮𝘁𝗵.

Most people stop at "FastAPI is faster than Flask." Few ask 𝘸𝘩𝘺. Here's what's actually happening:

𝗙𝗹𝗮𝘀𝗸 runs on 𝗪𝗦𝗚𝗜. One request = one thread = blocked until done. Your thread waits while the DB responds. It does nothing. Just sits there.

𝗙𝗮𝘀𝘁𝗔𝗣𝗜 runs on 𝗔𝗦𝗚𝗜. One thread handles 𝘵𝘩𝘰𝘶𝘴𝘢𝘯𝘥𝘴 of connections. While one request waits for the DB, the thread picks up another. No idle time.

But FastAPI doesn't do this alone. The real stack:
• 𝗨𝘃𝗶𝗰𝗼𝗿𝗻: the ASGI server (built on uvloop)
• 𝗦𝘁𝗮𝗿𝗹𝗲𝘁𝘁𝗲: the async engine (handles requests, WebSockets, middleware)
• 𝗙𝗮𝘀𝘁𝗔𝗣𝗜: the developer layer (validation, docs, type hints)

Think of it this way: Starlette = 𝘵𝘩𝘦 𝘦𝘯𝘨𝘪𝘯𝘦. FastAPI = 𝘵𝘩𝘦 𝘥𝘢𝘴𝘩𝘣𝘰𝘢𝘳𝘥. Uvicorn = 𝘵𝘩𝘦 𝘧𝘶𝘦𝘭.

Flask was built for a 𝘀𝘆𝗻𝗰𝗵𝗿𝗼𝗻𝗼𝘂𝘀 world. FastAPI was built for an 𝗮𝘀𝘆𝗻𝗰-𝗳𝗶𝗿𝘀𝘁 world. The speed difference isn't a feature. It's a 𝗳𝗼𝘂𝗻𝗱𝗮𝘁𝗶𝗼𝗻 difference.

Next time someone says "FastAPI is fast", ask them: 𝘐𝘴 𝘪𝘵 𝘍𝘢𝘴𝘵𝘈𝘗𝘐, 𝘰𝘳 𝘪𝘴 𝘪𝘵 𝘚𝘵𝘢𝘳𝘭𝘦𝘵𝘵𝘦?

#FastAPI #Flask #Starlette #Python #AsyncProgramming #BackendEngineering #SystemDesign #SoftwareEngineering
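The ASGI interface underneath all of this is tiny: an application is just an async callable taking scope, receive, and send. Here's a minimal hand-written ASGI app of the kind Uvicorn serves directly and Starlette builds on (a teaching sketch, not production code):

```python
async def app(scope, receive, send):
    # Uvicorn's event loop calls this per request; nothing blocks a thread here.
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"hello from bare ASGI"})
```

Starlette wraps this raw protocol in Request/Response objects and routing; FastAPI adds validation, type hints, and docs on top. That layering is why the speed credit mostly belongs to the layers below.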
One pattern that changed how I build FastAPI backends: stop returning raw database models from your endpoints.

When your API response mirrors your ORM model 1:1, you're creating tight coupling between your database schema and your API contract. One schema change can break every client.

The fix: dedicated Pydantic response models per endpoint. Here's what you get:

1. Auto-generated OpenAPI docs that actually match your responses
2. A clear data boundary: internal fields stay internal
3. Freedom to refactor your DB without touching your API contract

Bonus: Pydantic's model_validator and computed fields let you shape responses exactly how your frontend needs them, with no extra serialization logic scattered across your codebase.

What patterns have saved you the most headaches in your backend work?

#Python #FastAPI #WebDevelopment #SoftwareEngineering #FullStackDeveloper
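A minimal sketch of that boundary, with hypothetical models (assumes Pydantic v2; in a real app UserDB would be an ORM row rather than a BaseModel):

```python
from pydantic import BaseModel

class UserDB(BaseModel):
    # stands in for the ORM model: internal fields included
    id: int
    email: str
    first_name: str
    last_name: str
    hashed_password: str

class UserOut(BaseModel):
    # the API contract: only what clients should ever see
    id: int
    email: str
    full_name: str

def to_response(row: UserDB) -> UserOut:
    # the one place where DB shape is mapped to API shape
    return UserOut(id=row.id, email=row.email,
                   full_name=f"{row.first_name} {row.last_name}")
```

In FastAPI you'd declare `response_model=UserOut` on the route. Renaming a DB column then touches only this mapping, never the clients, and hashed_password can never leak into a response by accident.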
🚀 Day 74 of #100DaysOfCode
🧩 LeetCode 220 – Contains Duplicate III (Hard)

Today's problem was a solid mix of logic + optimization. Not brute-force friendly at all; you have to think smart.

🔍 Problem statement: given an array "nums" and two integers "indexDiff" and "valueDiff", check if there exist two indices "i" and "j" such that:
✔️ i ≠ j
✔️ |i - j| ≤ indexDiff
✔️ |nums[i] - nums[j]| ≤ valueDiff

💡 Approach used (bucket + sliding window). Instead of comparing every pair (which would be too slow), I used:
👉 Bucketization technique
👉 Sliding window constraint

Each number is placed into a bucket of size "valueDiff + 1".
- Same bucket ⇒ valid pair
- Neighbor buckets ⇒ check manually
- Maintain only the last "indexDiff" elements

⚡ Why this works: it reduces time complexity from O(n²) to O(n).

📊 My performance:
⏱️ Runtime: 139 ms
💾 Memory: 37.38 MB

🔥 Key learning: efficient problems are less about coding and more about choosing the right data structure.

#Day74 #LeetCode #100DaysOfCode #DSA #CodingJourney #Python #ProblemSolving #Consistency
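Here's one way the bucket + sliding window idea can be written out (my reconstruction of the standard technique, not necessarily the exact submission):

```python
def contains_nearby_almost_duplicate(nums, index_diff, value_diff):
    """LeetCode 220: bucket + sliding window, O(n) time, O(index_diff) space."""
    if index_diff < 1 or value_diff < 0:
        return False
    width = value_diff + 1      # bucket size: same bucket => |a - b| <= value_diff
    buckets = {}                # bucket id -> the one value currently inside it
    for i, x in enumerate(nums):
        b = x // width          # floor division keeps negatives in the right bucket
        if b in buckets:
            return True         # same bucket is always a valid pair
        if b + 1 in buckets and buckets[b + 1] - x <= value_diff:
            return True         # neighbor buckets need a manual distance check
        if b - 1 in buckets and x - buckets[b - 1] <= value_diff:
            return True
        buckets[b] = x
        if i >= index_diff:     # slide the window: keep only the last index_diff
            del buckets[nums[i - index_diff] // width]
    return False
```

Each bucket holds at most one value: if two values ever landed in the same bucket we'd have returned True already, which is what keeps every step O(1).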
Tackling my first HARD tree problem! 🧗‍♂️🌲

Binary Tree Maximum Path Sum - LeetCode 124 - Hard (Blind 75)

Moving from Easy/Medium to a Hard problem is always intimidating, but breaking it down to its core logic makes it manageable. This problem asks us to find the maximum sum of any path in a tree. The catch? A path can start and end anywhere, and it can go up and down, but it cannot branch twice.

The split decision: when standing at any node, we have to make two distinct calculations.

1. The local curved path (the closed loop): what is the maximum sum if the path curves *through* this current node? This is `left_sum + right_sum + node.val`. We check if this curved path is the biggest sum we've seen so far and store it in our global tracker (`self.max_sum`).

2. The straight path (reporting to the boss): when returning a value back up to the parent node, we CANNOT return the curved path (because a path can't fork). We must choose the most profitable single straight line: `node.val + max(left_sum, right_sum)`.

Key learnings:
1) Ignoring toxicity: if a child subtree returns a negative sum, it will only drag our total down. We can simply ignore it by using `max(dfs(...), 0)`. If it's negative, we just pretend the path stops there.
2) Dual-purpose recursion: our recursive function does two things simultaneously. It continuously updates the global maximum path found anywhere, while returning the max straight path to keep the recursion flowing.

Time and space complexity:
Time: O(N), we visit every single node exactly once.
Space: O(H), where H is the height of the tree (for the recursion stack).

Reaching the "Hard" level in the Blind 75 journey feels like a huge milestone. To anyone else practicing DSA right now: keep pushing, the logic eventually clicks! 💡

#LeetCode #BinaryTrees #Blind75 #DataStructures #Python #Recursion #Algorithms #TechInterviews #SoftwareEngineering #CodingJourney #ProblemSolving
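The split decision described above, in code (a standard sketch of the approach, using a nonlocal tracker in place of the post's `self.max_sum`):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def max_path_sum(root):
    """LeetCode 124: max over all paths; a path may bend once but never fork."""
    best = float("-inf")

    def dfs(node):
        nonlocal best
        if not node:
            return 0
        left = max(dfs(node.left), 0)    # "ignoring toxicity": drop negative subtrees
        right = max(dfs(node.right), 0)
        best = max(best, node.val + left + right)  # curved path through this node
        return node.val + max(left, right)         # straight path reported upward

    dfs(root)
    return best
```

The two lines at the bottom of dfs are the dual-purpose recursion: one updates the global answer with the curved path, the other returns only the straight path so the parent can keep extending it.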
🚀 Stop looping through your DataFrames!

I recently refactored a script processing 10 million rows. We were using a standard row-wise loop, which was choking our CI/CD pipeline and causing memory spikes.

Before optimisation:

for i, row in df.iterrows():
    df.at[i, 'tax_total'] = row['price'] * 1.08 if row['state'] == 'NY' else row['price']

After optimisation:

import numpy as np
conditions = [df['state'] == 'NY']
choices = [df['price'] * 1.08]
df['tax_total'] = np.select(conditions, choices, default=df['price'])

Performance gain: 45x faster and 90% lower memory usage.

By moving from row-wise iteration to NumPy's vectorized selection, we eliminated the Python-level overhead entirely. The code is not only faster but cleaner and more readable for the rest of the team.

Vectorization doesn't change the O(n) complexity; it moves the per-row work out of the Python interpreter and into high-performance C-level loops. It's the single biggest quick win you can apply to most data pipelines.

Have you ever seen a loop-heavy process that you successfully migrated to vectorized operations?

#DataEngineering #Python #Pandas #PerformanceTuning #CodingTips
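The same transformation, in a self-contained form runnable without a DataFrame (plain NumPy arrays standing in for the two columns, with made-up numbers):

```python
import numpy as np

price = np.array([100.0, 250.0, 80.0, 40.0])
state = np.array(["NY", "CA", "NY", "TX"])

# vectorized conditional: NY rows get 8% tax, every other row passes through
conditions = [state == "NY"]
choices = [price * 1.08]
tax_total = np.select(conditions, choices, default=price)

# same logic as the row-wise loop, but evaluated in C over whole arrays
looped = np.array([p * 1.08 if s == "NY" else p for p, s in zip(price, state)])
```

np.select generalizes nicely: add more (condition, choice) pairs to the two lists and it behaves like a vectorized if/elif/else chain, which is what makes it preferable to nested np.where calls once you have more than one condition.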