Database errors will humble you, no matter how confident you are. Today was one of those days.

Everything looked correct: the logic made sense, the API routes were fine, the schema was clean. But nothing worked the way it should.

Hours went into:
- Prisma validation errors
- weird TypeScript issues (the never type…)
- data not updating even though the queries looked right

And the funniest part? The actual issue is always something small. One missing field. One wrong assumption. One mismatch between frontend and backend. That's it. But it'll cost you hours.

What I've learned (again): debugging databases is not just coding. It's patience + clarity + brutal honesty with your own logic. You have to slow down and ask:
- "What is actually happening?"
- "What am I assuming?"
- "Where is the data breaking?"

Finally fixed it. And yeah, that moment when it works? Worth it. But still… database errors are a pain in the ass.

#webdev #programming #debugging #buildinpublic
Database Errors Will Humble You: Debugging is Patience and Clarity
More Relevant Posts
Ever wonder why your Claude Code session suddenly burned 50k tokens in one turn? 🐱

If you use Claude Code a lot, I'm sure you have also hit this wall: your session suddenly gets expensive, context fills up unexpectedly, and you have no idea why. Was it that Bash command that searched your entire repo? The Read that loaded a 3,000-line config file? You're left guessing.

I spent the past week building CAT (Context Analyzer Terminal) to solve exactly that.

What it does:
→ Hooks silently into Claude Code sessions
→ Tracks token cost per individual tool call (Read, Bash, Grep, etc.)
→ Builds rolling baselines using Welford's algorithm
→ Fires a real-time alert the moment something exceeds your normal baseline (Z-score detection)
→ Gives you a plain-English explanation of why something was expensive
→ Shows burn-rate projection, cache efficiency, and overhead ratio
→ Live Rich TUI dashboard that runs entirely locally

The non-obvious engineering problem: Claude Code hooks fire tool events and token snapshots as two separate streams, and neither includes the other's data. The core of CAT is a delta engine that correlates them by session ID and timestamps to compute per-call cost attribution.

Setup is 3 commands. MIT licensed. 113 tests. CI passing on macOS, Ubuntu, and Windows across Python 3.11–3.13.

🔗 GitHub: https://lnkd.in/dV69pHvs

I'm actively looking for contributors. There are curated good-first-issues ranging from one-liners to full features. If you're into Python, async systems, or developer tooling, take a look.

What token-visibility features would make Claude Code more useful for you? Drop a comment. I'm building this in public and all feedback shapes the roadmap.

No more cat in a sack! 🐱 I built an open-source tool that spares you the guesswork and shows exactly which tool call is "eating" your context window in Claude Code. Everyone who uses Claude Code knows that moment: the session suddenly gets expensive, the context fills up without warning, and you have no idea why. Was it the Bash command that accidentally scanned the whole repo? Or a huge config file loaded by Read?

Over the past week I built CAT (Context Analyzer Terminal) to solve exactly that. What it gives you:
→ Silent monitoring of Claude Code sessions
→ Per-call token-cost tracking (Read, Bash, Grep, etc.)
→ Real-time anomaly detection (Z-score) based on Welford's algorithm
→ Clear explanations of why a given call was expensive
→ Burn-rate projection, cache efficiency, and overhead ratio
→ A local terminal dashboard (Rich TUI)

The engineering challenge here was connecting two separate streams of data (tool events and token snapshots) that Claude emits with no linking identifier. CAT's engine performs time- and session-ID-based correlation to attribute an exact cost to each call. Setup is simple (3 commands), the code is MIT-licensed, and 100+ tests already pass in CI.

I'm looking for contributors! Plenty of good-first-issues are open. If you're into Python, async systems, or dev tools, come take a look on GitHub 🔗 GitHub: https://lnkd.in/dV69pHvs

#OpenSource #Python #DeveloperTools #ClaudeCode #AI #BuildInPublic
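The rolling-baseline idea is compact enough to sketch. Here is a minimal, illustrative Python version of Welford's online mean/variance plus a Z-score check (my own toy sketch, not CAT's actual code):

```python
import math

class RollingBaseline:
    """Welford's online algorithm: running mean/variance without storing samples."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x: float) -> float:
        """How many standard deviations x sits from the baseline."""
        if self.n < 2:
            return 0.0
        std = math.sqrt(self.m2 / (self.n - 1))  # sample standard deviation
        return (x - self.mean) / std if std > 0 else 0.0

baseline = RollingBaseline()
for tokens in [900, 1100, 1000, 950, 1050]:  # typical per-call token costs
    baseline.update(tokens)

# A 50k-token call against a ~1k baseline trips any reasonable Z threshold.
print(baseline.zscore(50_000) > 3.0)  # True
```

The point of Welford's formulation is that the baseline updates in O(1) per tool call with no sample history, which is exactly what a live monitor wants.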
Your logs are lying to you.

Not because logging is useless… but because you're logging the wrong things.

👉 Most backend devs think logging = console.log(). That's not logging. That's noise.

What beginners do:

console.log("User logged in");
console.log("Error occurred");

Looks fine. But in production?
❌ Useless for debugging
❌ No context
❌ No traceability

The real problem: when something breaks in production, you don't know:
- Which user?
- Which request?
- What triggered it?
- What happened before it?

So you panic. And start guessing.

What strong backend engineers log:
✔ Request ID (trace every request)
✔ User ID (if available)
✔ Route + method
✔ Status code
✔ Error stack (not just the message)
✔ Timestamp

Example (real logging):

logger.info({
  requestId: "abc123",
  userId: "user_42",
  method: "POST",
  route: "/api/orders",
  status: 500,
  error: err.stack,
  timestamp: new Date()
});

⚠️ Never log sensitive data (passwords, tokens, PII). Logs are often stored and shared, so treat them as public.

This changes everything. Now you can:
✔ Trace a request end-to-end
✔ Debug production issues fast
✔ Understand real user behavior

But here's what most still ignore: logs without structure = garbage.

Level up your logging:
✔ Use structured logs (JSON)
✔ Use tools (Winston / Pino)
✔ Centralize logs (ELK / cloud logging)
✔ Add log levels (info, warn, error)

Brutal truth: if you can't debug your system in production… 👉 you don't understand your system.

Takeaway: logging isn't printing. 👉 It's observability.

Tomorrow: I'll break down why your database queries are slow (and it's not your DB's fault).

#BackendDevelopment #NodeJS #SystemDesign #Debugging #SoftwareEngineering
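The example above is Node (Pino/Winston-style), but the same structured-logging idea works in Python's stdlib. A rough sketch, with illustrative field names of my own:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so log tooling can index fields."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            "timestamp": self.formatTime(record),
        }
        # Context fields arrive via logging's `extra=` mechanism and land
        # as attributes on the record; copy the ones we care about.
        for key in ("request_id", "user_id", "route", "status"):
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)

logger = logging.getLogger("orders")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order failed", extra={"request_id": "abc123", "user_id": "user_42",
                                   "route": "/api/orders", "status": 500})
```

Each line is now a queryable JSON document rather than a free-form string, which is what lets ELK-style tooling filter by request_id or status.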
A small JavaScript check turned into a good lesson this week.

I initially had a validation like:

x !== null && x !== ''

It looked correct — until edge cases started slipping through. In some scenarios, x was undefined, and the condition still passed.

The fix was simple:

x != null && x !== ''

Using != null intentionally to cover both null and undefined.

But the bigger takeaway wasn't the syntax. In backend systems, especially when dealing with external or loosely structured data, assumptions about input shape can quietly break things. You don't just get clean values — you get missing fields, partial payloads, and inconsistent states.

This bug was a reminder to:
• Define stricter input boundaries
• Normalize data early
• Avoid relying on implicit assumptions

Sometimes a small condition reveals a larger gap in how we think about data contracts.

#BackendDevelopment
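The `!= null` trick is JavaScript-specific, but the underlying lesson travels to any language. A Python parallel, purely illustrative: a bare truthiness check quietly conflates "missing" with legitimately falsy values like 0.

```python
def valid_loose(x) -> bool:
    # Tempting one-liner, but it also rejects 0, 0.0, False, [] ...
    return bool(x)

def valid_strict(x) -> bool:
    # Reject only missing (None) and empty string; keep other falsy values.
    return x is not None and x != ""

print(valid_loose(0), valid_strict(0))  # False True -> an amount of 0 survives
```

Same moral as the post: spell out exactly which "empty-ish" states your contract rejects, instead of leaning on the language's default coercions.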
Most of us have requests baked into our muscle memory, but as web standards move toward HTTP/3 and high concurrency, the old reliable is starting to show its age.

I've been diving into Niquests, and it's a serious contender for the new standard. It's designed as a drop-in replacement, meaning you get a massive performance boost without the headache of a refactor.

What makes it a "pro" choice:
- Protocol support: it handles HTTP/2 and HTTP/3 natively. If you're hitting modern APIs, this isn't just a nice-to-have — it's a massive efficiency gain.
- Multiplexing: you can send multiple requests over a single connection, eliminating the handshake overhead that usually slows down bulk data fetching.
- True async compatibility: unlike the original requests library, this is built to play nice with asyncio, making it ideal for high-traffic backend services.
- Performance: in standard benchmarks, it significantly outperforms HTTPX and AIOHTTP in request-heavy loops.

If you're building production-grade scrapers, microservices, or data pipelines, the switch is almost a no-brainer. It's the same API we love, just supercharged for 2026.

Check out the project on GitHub: https://lnkd.in/d98Zy_cc

#Python #SoftwareEngineering #Backend #Performance #DataEngineering #OpenSource
🚀 Day 1: Mono vs Flux (Basics Every Backend Dev Must Know)

Starting a daily WebFlux series — from basics → pipelines → production-level patterns. Let's begin with the foundation 👇

💡 What is Reactive Programming?
Instead of waiting for data… 👉 you react when data arrives (non-blocking, async).

🔹 Mono<T> (0 or 1 result)
→ Emits only one item OR completes empty
→ Best for single-response APIs
✅ Use cases: get user by ID, save/update operations, authentication response
🔥 Benefits: ✔ lightweight ✔ simple to handle ✔ perfect for request-response

🔹 Flux<T> (0 to N results / a stream)
→ Emits multiple items over time
→ Works as a data stream
✅ Use cases: list of users, event streaming (Kafka/logs), real-time updates
🔥 Benefits: ✔ streaming support ✔ handles large data efficiently ✔ backpressure (controls data flow)

⚡ Core difference: Mono = one result. Flux = many results (a stream).

💥 Golden rule: if your API returns multiple items, ❌ don't use Mono<List<T>> — ✅ use Flux<T>.

💡 Why it matters: using the right type helps you ✔ improve performance ✔ reduce memory usage ✔ build scalable systems.

📅 Coming next (Day 2): 👉 common mistakes + a Mono<List<T>> vs Flux<T> deep dive (with a diagram).

👀 Follow this series if you want to master WebFlux, reactive pipelines, and backend systems.

#Java #SpringBoot #WebFlux #AI #ReactiveProgramming #BackendDevelopment #Microservices #SystemDesign #Developers
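WebFlux is Java, but the Mono-vs-Flux split has a loose Python analogy that may help it click (an analogy only, not Reactor's API): a coroutine resolves to at most one value, like Mono, while an async generator emits 0..N values over time, like Flux.

```python
import asyncio
from typing import AsyncIterator

async def get_user(user_id: int) -> dict:
    """Mono-like: resolves to exactly one result (or raises / stays empty)."""
    await asyncio.sleep(0)  # stand-in for non-blocking I/O
    return {"id": user_id, "name": "Ada"}

async def stream_users() -> AsyncIterator[dict]:
    """Flux-like: emits items one at a time as they become available."""
    for uid in (1, 2, 3):
        await asyncio.sleep(0)  # each item can arrive at a different moment
        yield {"id": uid}

async def main() -> None:
    user = await get_user(42)                   # one awaited result
    users = [u async for u in stream_users()]   # many results, consumed lazily
    print(user["id"], len(users))

asyncio.run(main())
```

The golden rule maps over too: returning a fully-built list from a coroutine (Mono<List<T>>-style) forces everything into memory at once, while the async generator lets the consumer process items as they stream in.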
I was building filtering for financial records in my backend. Date range. Category. Amount range. User scope. All optional. All combinable.

I started with hardcoded query logic using if-else conditions for different filter cases. It got messy fast. Every new filter meant rewriting existing logic. At one point, the queries looked like they were never meant to be read again. So I scrapped it.

I implemented the Specification pattern using Spring Data JPA. Each filter became an isolated, composable predicate. At runtime, only the active ones combine into a single query. No hardcoding. No duplication.

Small change in approach, big impact on scalability and future scope. Now, adding a new filter is just one addition; existing logic doesn't change. This is the Open/Closed Principle from SOLID in practice: open for extension, closed for modification. Each Specification also owns exactly one filter concern. Single Responsibility, naturally enforced.

The filtering layer went from something I avoided touching to something I can extend confidently, without regression risk.

Interesting how backend complexity shifts as systems grow: performance → security → maintainability. This was firmly the third.

#Backend #Java #Maintainability #SOLID #LearningInPublic #SWE
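The author's implementation uses Spring Data JPA Specifications, but the composable-predicate idea itself is language-agnostic. A minimal in-memory Python sketch of the same shape (names and data are mine, not the author's code):

```python
from typing import Callable, Optional

Record = dict
Spec = Callable[[Record], bool]  # one specification = one filter concern

def and_all(*specs: Optional[Spec]) -> Spec:
    """Combine only the active (non-None) specifications into one predicate."""
    active = [s for s in specs if s is not None]
    return lambda r: all(s(r) for s in active)

# Each filter is isolated; adding a new one never touches the others.
def category_is(cat: str) -> Spec:
    return lambda r: r["category"] == cat

def amount_between(lo: float, hi: float) -> Spec:
    return lambda r: lo <= r["amount"] <= hi

records = [
    {"category": "food", "amount": 12.5},
    {"category": "rent", "amount": 900.0},
    {"category": "food", "amount": 40.0},
]

# Only the filters the caller actually supplied participate (None = inactive).
spec = and_all(category_is("food"), amount_between(10, 30), None)
print([r["amount"] for r in records if spec(r)])  # [12.5]
```

In JPA the predicates compose into a single SQL query instead of filtering in memory, but the Open/Closed payoff is the same: a new filter is one new function, with zero edits to existing ones.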
One fine morning, a customer reported: "File upload sometimes fails…" Not always. Not consistently. Just sometimes. 😄 And of course, those are the best bugs.

👉 System handles 1000+ uploads daily
👉 Issue happens randomly (10–20 times)
👉 Chunk upload + merge logic (unchanged for years)
👉 Stateless architecture (or so I thought…)

I jumped into debugging mode. After hours of checking: NFS configs ✅ multi-server behavior ✅ retry logic ✅ logs (100 times) ✅

Observation: chunks uploaded from Server A were not visible on Server B immediately (a 10–15 second delay). Confusion level: 🔥🔥🔥

Then I did something simple (and often ignored)… 👉 compared the old vs. new code. Guess what changed? Just one line removed (thanks to a Sonar cleanup 😅):

HttpSession session = request.getSession();

That innocent line had been silently adding a JSESSIONID, making requests sticky and hiding the real problem all along.

💡 So for years, reality was something like this: a stateless system… except when the upload API enters the chat 😄 Or simply: stateless most of the time, secretly stateful during uploads 🎭

And the moment that "unused variable" was removed…
💥 Load balancing started behaving correctly
💥 NFS delays became visible
💥 The hidden dependency got exposed
💥 The bug said: Hello 👋 I was always here

And the best realization: 👉 my application is perfectly stateless… 👉 until the user hits the upload API and boom, it becomes emotional (stateful) 🤣

Lessons learned: sometimes the bug is not in new code… it's in removing the wrong old code 😄 And sometimes your system isn't broken; your assumptions are.

One mystery remains: 👉 why exactly NFS behaved that way (I never got a perfect answer 😅)

#BackendStories #ProductionIssues #Java #NFS
Ever opened a file called UserManager.py only to discover it's 3,000 lines long and handles literally everything from database connections to sending emails? 😅 We've all been there.

In my daily work as a back-end developer, I've seen firsthand how these "God Classes" can turn a simple feature update into a terrifying game of Jenga. That's why I'm kicking off a new series on Medium deep-diving into the SOLID principles, starting with the foundation: the Single Responsibility Principle (SRP).

In this first article, we break down:
- The danger of the "Swiss Army knife" approach to coding
- Real-world Python examples showing the exact before-and-after of a refactor
- The top 3 red flags for spotting SRP violations in your own projects

If you're looking to ditch the clutter and write cleaner, highly modular code, check out the full article here: https://lnkd.in/dxBcDgds

What's the most outrageous "God Class" you've ever encountered in the wild? Let me know in the comments!

#CleanCode #SoftwareEngineering #Python #SOLID #Backend #DeveloperLife #TechWriting
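The before/after shape of an SRP refactor fits in a few lines of Python. A toy sketch of my own, not the article's code — the point is one class per reason to change:

```python
# Before: a "God Class" with several unrelated reasons to change.
class UserManager:
    def save(self, user): ...            # persistence
    def render_profile(self, user): ...  # presentation
    def send_welcome(self, user): ...    # notification

# After: each collaborator owns exactly one responsibility.
class UserRepository:
    """Persistence only (an in-memory stand-in for a real database)."""

    def __init__(self) -> None:
        self._db = {}

    def save(self, user: dict) -> None:
        self._db[user["id"]] = user

class WelcomeMailer:
    """Notification only (records what it would send instead of emailing)."""

    def __init__(self) -> None:
        self.outbox = []

    def send_welcome(self, user: dict) -> None:
        self.outbox.append(f"Welcome, {user['name']}!")

repo, mailer = UserRepository(), WelcomeMailer()
user = {"id": 1, "name": "Ada"}
repo.save(user)
mailer.send_welcome(user)
```

Now a change to the email template can't break persistence, and each piece can be tested in isolation — the "Jenga" risk disappears with the coupling.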
Your FastAPI backend is fast to build. But is it fast to run?

Most developers find out the answer at the worst possible moment: when real users hit it at the same time. Endpoints slow down. Requests pile up. Users drop off. Not because the code is wrong, but because it is blocking.

Here is what blocking actually looks like in production: your user hits an endpoint. FastAPI calls the database. That query takes 200 ms. During those 200 ms, the worker handling it is frozen. Not slow. Frozen. Other requests sit in a queue waiting for that one query to finish. 100 users hit your API at the same time: user 1 gets served, users 2 to 100 wait in line. That is sync. That is blocking I/O.

FastAPI was built to avoid working that way. With async/await, while your database query runs in the background, your server is already picking up the next request. And the next. And the next. 200 ms of database wait becomes invisible to every other user.

In real backend terms:

Sync — blocks:

def get_orders(user_id: int):
    return db.query(user_id)

Async — non-blocking:

async def get_orders(user_id: int):
    return await db.query(user_id)

Same logic. Same database. Same server. But now 100 users get served in roughly the time it used to take to serve one (provided the database driver itself is async, so the await actually yields).

This matters even more when your endpoints call external services:
1. Payment gateway: ~300 ms wait
2. AI model response: 2–3 seconds wait
3. Email service: ~500 ms wait

With sync, every user feels every millisecond of every one of those waits. With async, none of them do.

FastAPI gives you non-blocking I/O natively. No extra setup. No plugins. No workarounds. Just write async, add await, and let FastAPI handle the rest.

Your backend was already fast to build. Now make it fast to run.

Are you using async endpoints in your FastAPI projects? 👇

#FastAPI #Python #BackendDevelopment #AsyncProgramming #SoftwareEngineering #APIDesign #PythonDeveloper #WebDevelopment #TechIn2026 #BuildInPublic
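The "100 users in the time of 1" claim is easy to demonstrate with plain asyncio. A toy simulation, not a real FastAPI app — the 200 ms database query is faked with asyncio.sleep:

```python
import asyncio
import time

async def fake_db_query(user_id: int) -> str:
    await asyncio.sleep(0.2)  # stand-in for a 200 ms database round-trip
    return f"orders for user {user_id}"

async def main() -> float:
    start = time.perf_counter()
    # 100 "requests" each await the 200 ms query concurrently: while one
    # waits on I/O, the event loop serves the others.
    await asyncio.gather(*(fake_db_query(i) for i in range(100)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s")  # roughly 0.2 s total, not 100 × 0.2 s = 20 s
```

The waits overlap instead of stacking, which is exactly what async def endpoints buy you when the underlying driver is non-blocking.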
The Try-Catch Time Trap: Why Do Async Errors Escape?

Let's look at code that reads and parses an invalid JSON file.

Sync code:

try {
  const data = fs.readFileSync("invalid.json", "utf-8");
  const jsonData = JSON.parse(data);
} catch (err) {
  console.log("error caught:", err.message); // catches the parsing error
}

All good.

Async code:

try {
  fs.readFile("invalid.json", "utf-8", (err, data) => {
    const jsonData = JSON.parse(data);
  });
} catch (err) {
  console.log("error caught:", err.message); // does NOT even run
}

The catch block won't execute. Now the question is… 👉 why?

Here's how I started thinking about it: when JS hits an error → it stops execution → looks for a catch block in the current call stack → if none is found, the error bubbles up the stack.

In sync code, everything runs in one continuous stack → the error happens → the catch block is right there → so it works.

But async changes things. When this line runs:

fs.readFile(..., callback)

👉 JS does NOT execute the callback immediately. Instead, it registers the callback, hands it to the event loop, and moves on.

Now the important part 👇 the current call stack finishes execution → which means the try-catch is gone.

Later, when the file read completes, the event loop pushes the callback onto the call stack and the callback runs. But this is a new call stack, and the old try-catch? 👉 Already gone.

So when the error happens inside the callback: ❌ there is no catch block anymore.

That's when it clicked for me: 👉 try-catch only works within the same execution stack. Not across time. Not across async boundaries.

#JavaScript #Node #Programming #ErrorHandling #Interview #Eventloop #Callstack
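The same time trap exists outside JavaScript. A runnable Python parallel (illustrative, using asyncio's loop.call_soon): the try/except wraps only the scheduling call, so an error raised later, on a fresh stack, never reaches it.

```python
import asyncio

caught, seen_by_loop = [], []

def parse_later() -> None:
    raise ValueError("bad JSON")  # stand-in for the parse blowing up in a callback

async def main() -> None:
    loop = asyncio.get_running_loop()
    # Errors from scheduled callbacks go to the loop's exception handler,
    # not to whatever try/except surrounded the scheduling call.
    loop.set_exception_handler(lambda l, ctx: seen_by_loop.append(ctx["exception"]))
    try:
        loop.call_soon(parse_later)  # only *registers* the callback
    except ValueError:
        caught.append("sync catch")  # never runs: the callback fires later, on a new stack
    await asyncio.sleep(0)  # yield so the event loop actually runs the callback

asyncio.run(main())
print(len(caught), len(seen_by_loop))  # 0 1
```

Same conclusion as the Node example: try/except protects a call stack, not a point in time, which is why Python's answer (like JavaScript's promises) is to carry errors inside awaitable objects instead.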