async/await didn't make your code faster. It just made the slowness harder to see.

─────────────────────────

Most devs write this and ship it with confidence:

const user = await getUser(id)
const posts = await getPosts(id)
const comments = await getComments(id)

Clean. Readable. And quietly costing you 3x the wait time.

─────────────────────────

Here's what's actually happening under the hood 👇

async/await is syntax sugar over Promises. Your linear-looking code compiles into a Promise chain. await doesn't block the thread — it pauses that function and resumes it later.

Which means: you can run things in parallel. You're just choosing not to.

─────────────────────────

The sequential trap

When you await three independent requests in a row, you're making them wait for each other — for no reason.

Total time = request1 + request2 + request3

On a slow network, that gap is seconds, not milliseconds.

─────────────────────────

The fix: Promise.all

const [user, posts, comments] = await Promise.all([
  getUser(id),
  getPosts(id),
  getComments(id)
])

Total time = slowest request only.

─────────────────────────

When one failure shouldn't kill the rest: Promise.allSettled

Promise.all fails fast — one rejection kills everything. Promise.allSettled lets every Promise finish and returns each result individually. Use it when partial failure is acceptable (sketch after this post).

─────────────────────────

The error handling gap nobody talks about

A floating Promise that rejects has nowhere to go. No catch. No log. Silent failure in production.

Rule: every Promise either gets awaited inside a try/catch, or gets a .catch() attached. No exceptions.

─────────────────────────

One question that saves you every time:

Before writing a second await — ask yourself: Does this actually need to wait for the previous one?

If the answer is no, run them together.

─────────────────────────

This post took me 5 seconds to write and 2 years of production bugs to learn. If it saves you the same bugs — repost it for your team ♻️

#JavaScript #WebPerformance #FrontendDevelopment #WebDev #Programming
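A quick sketch of those last two ideas in code, since the post only describes them in prose. getUser, getPosts, and getComments are the same placeholder calls as above; render and logError are hypothetical helpers:

async function loadDashboard(id) {  // illustrative wrapper
  // Promise.allSettled: every result comes back, fulfilled or rejected
  const results = await Promise.allSettled([
    getUser(id),
    getPosts(id),
    getComments(id)
  ])

  for (const r of results) {
    if (r.status === 'fulfilled') render(r.value)  // use what succeeded
    else logError(r.reason)                        // log what failed, keep going
  }
}

// The floating-Promise rule: never let a rejection vanish
getComments(id).catch(err => logError(err))  // fire-and-forget, but still handled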
async/await and the sequential trap in JavaScript
More Relevant Posts
𝗬𝗼𝘂𝗿 𝗔𝗣𝗜 𝗿𝗲𝘁𝘂𝗿𝗻𝗲𝗱 𝟮𝟬𝟬. 𝗕𝘂𝘁 𝘁𝗵𝗲 𝗿𝗲𝗾𝘂𝗲𝘀𝘁 𝗳𝗮𝗶𝗹𝗲𝗱. 𝗔𝗻𝗱 𝘆𝗼𝘂 𝗰𝗮𝗹𝗹𝗲𝗱 𝘁𝗵𝗮𝘁 𝗮 𝘀𝘂𝗰𝗰𝗲𝘀𝘀.

This is one of the most common API mistakes I see: returning 200 OK for everything, then burying the real result in the response body.

{ "success": false, "message": "User not found" }

The client got a 200. But the user wasn't found. That's not a success. That's a lie.

𝗛𝗧𝗧𝗣 𝘀𝘁𝗮𝘁𝘂𝘀 𝗰𝗼𝗱𝗲𝘀 𝗲𝘅𝗶𝘀𝘁 𝗳𝗼𝗿 𝗮 𝗿𝗲𝗮𝘀𝗼𝗻. They're not decoration. They're a contract between your API and everyone who consumes it.

𝗧𝗵𝗲 𝗰𝗼𝗱𝗲𝘀 𝘁𝗵𝗮𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗺𝗮𝘁𝘁𝗲𝗿:
• 200 → Success, here's your data
• 201 → Created successfully
• 400 → Bad request, fix your input
• 401 → Not authenticated
• 403 → Authenticated, but not allowed
• 404 → Resource doesn't exist
• 409 → Conflict, something already exists
• 422 → Validation failed
• 500 → Server broke, not the client's fault

𝗪𝗵𝗲𝗻 𝘆𝗼𝘂 𝗿𝗲𝘁𝘂𝗿𝗻 𝘁𝗵𝗲 𝘄𝗿𝗼𝗻𝗴 𝘀𝘁𝗮𝘁𝘂𝘀 𝗰𝗼𝗱𝗲:
• Frontend devs write broken error handling
• Mobile apps show wrong messages to users
• Debugging takes twice as long
• Your API becomes unpredictable

𝗔 𝗴𝗼𝗼𝗱 𝗔𝗣𝗜 𝘂𝘀𝗲𝘀 𝘀𝘁𝗮𝘁𝘂𝘀 𝗰𝗼𝗱𝗲𝘀 𝗮𝗻𝗱 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗯𝗼𝗱𝗶𝗲𝘀 𝘁𝗼𝗴𝗲𝘁𝗵𝗲𝗿. The status code tells you what happened. The body tells you why.

{ "status": 404, "message": "User not found" }
{ "status": 422, "message": "Phone number is required" }

Clear enough to debug. Careful enough not to expose sensitive internals.

𝗗𝗷𝗮𝗻𝗴𝗼 𝗥𝗘𝗦𝗧 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 𝗺𝗮𝗸𝗲𝘀 𝘁𝗵𝗶𝘀 𝗲𝗮𝘀𝘆. No excuses.

𝗨𝘀𝗲 𝘀𝘁𝗮𝘁𝘂𝘀 𝗰𝗼𝗱𝗲𝘀 𝗹𝗶𝗸𝗲 𝘆𝗼𝘂 𝗺𝗲𝗮𝗻 𝘁𝗵𝗲𝗺. Your API is a conversation. Make sure it's saying the right thing.

#Django #Python #BackendDevelopment #APIDesign #WebDevelopment #SoftwareEngineering
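To see the consumer side of that contract, here's a minimal JavaScript sketch of a client that trusts status codes; the endpoint and body fields are illustrative:

async function fetchUser(id) {
  const res = await fetch(`/api/users/${id}`)        // illustrative endpoint
  if (!res.ok) {                                     // the status says WHAT happened
    const body = await res.json().catch(() => ({}))  // the body says WHY, if present
    throw new Error(body.message ?? `Request failed: ${res.status}`)
  }
  return res.json()                                  // 2xx: safe to use the data
}

If the server returns 200 for everything, res.ok is always true, the error path never runs, and every failure silently looks like a success.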
🚀 Tired of writing try-catch in every controller? There's a better way 👇

---

👉 Problem:

@RestController
public class UserController {
    @GetMapping("/user/{id}")
    public User getUser(@PathVariable int id) {
        try {
            return userService.getUser(id);
        } catch (Exception e) {
            return null; // ❌ bad practice
        }
    }
}

❌ Issues:
- Repeated code
- Messy controllers
- Hard to maintain

---

✅ Solution → Global Exception Handling

Use @ControllerAdvice + @ExceptionHandler

---

💡 Example:

@ControllerAdvice
public class GlobalExceptionHandler {
    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handleException(Exception ex) {
        return new ResponseEntity<>("Something went wrong", HttpStatus.INTERNAL_SERVER_ERROR);
    }
}

---

👉 Now, any exception in your app:
✔ Automatically handled
✔ Clean response returned
✔ No try-catch in controllers

---

🔥 Handle specific exceptions:

@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound(UserNotFoundException ex) {
    return new ResponseEntity<>(ex.getMessage(), HttpStatus.NOT_FOUND);
}

---

⚡ Real-world impact:

Without this:
❌ Inconsistent error responses
❌ Debugging becomes hard

With this:
✅ Clean API responses
✅ Centralized error handling
✅ Production-ready backend

---

📌 Key Takeaway: Don't handle exceptions everywhere… Handle them in ONE place.

---

Follow for more real backend learnings 🚀

#SpringBoot #Java #BackendDevelopment #CleanCode #SoftwareEngineer
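The centralize-don't-scatter idea isn't Spring-specific. For comparison, a minimal sketch of the Node.js/Express equivalent, where a single error-handling middleware plays the @ControllerAdvice role (NotFoundError and getUser are illustrative):

const express = require('express')
const app = express()

class NotFoundError extends Error {}  // illustrative custom error

app.get('/user/:id', (req, res, next) => {
  try {
    res.json(getUser(req.params.id))  // getUser is a placeholder service call
  } catch (err) {
    next(err)                         // forward to the central handler, never swallow
  }
})

// Registered last: one place decides the response for every error
app.use((err, req, res, next) => {
  if (err instanceof NotFoundError) return res.status(404).send(err.message)
  res.status(500).send('Something went wrong')
})

Same takeaway in both stacks: controllers stay clean, and error responses stay consistent.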
Most developers start by treating Elixir's control flow like any other language. We use `if` because we are used to branching logic. But Elixir separates truth-based decisions from pattern-based ones, and understanding that distinction changes how you design your code. You are not just executing a branch; you are evaluating an expression that must always return a value. The real power comes when you move from `if` to `case`. It is not just a switch statement. It matches against the shape of your data and extracts values simultaneously. This forces you to be explicit about every possible state of your system. If you have not accounted for a specific case, the code will crash, and in Elixir, that crash is a useful signal for your supervisors rather than a failure. Bruce and I often see developers try to hide these crashes with generic catch-all clauses. It feels safer, but it actually hides problems you have not thought through. We have written about why you should only catch what you can actually fix and let the rest of the system's fault tolerance do its job. https://lnkd.in/eWQyzG7F — Paulo Valim & Bruce Tate at Groxio
Here is a 10X Vibe coding HACK:

Most people have no CLAUDE[.]md. Or it's 3 lines: "use TypeScript and Tailwind."

Mine is 200+ lines. It covers four sections:

STACK
- Exact versions of every dependency
- Which library handles what (React Query for server state, Zustand for client state)
- What I explicitly don't use and why (no class components, no Redux, no inline styles)
- File naming conventions, plus the super important stuff: global state management, API structuring, and success/failure flows if Stripe is needed

CONVENTIONS
- Folder structure with examples
- Naming conventions: what's a service, a hook, a util
- Error handling pattern: always use the custom AppError class
- API response format for every endpoint

SECURITY (non-negotiable rules for every file)
- "Never store secrets in frontend code"
- "Every route requires auth middleware unless explicitly marked public"
- "Validate all inputs with Zod before any processing"
- "Never return raw DB objects — always select fields explicitly"

OUTPUT QUALITY
- "Always include error handling and edge cases"
- "Always include loading and error states"
- "Write tests for all service layer functions"

I am honestly organising it better each day and reviewing more Skills to improve it.

#AI #Developers #CleanCode #Productivity #FutureOfWork
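If you've never seen a file like this, here's a tiny illustrative excerpt of the shape; the versions and library picks are examples, not the author's actual file:

# CLAUDE.md

## STACK
- next 14.x, react 18.x, typescript 5.x (pin your exact versions)
- Server state: React Query. Client state: Zustand. Never both for the same data.
- Do NOT use: class components, Redux, inline styles.

## SECURITY (every file, no exceptions)
- Never store secrets in frontend code.
- Validate all inputs with Zod before any processing.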
A small habit that improved my code quality a lot:

Before pushing code, I ask myself: "What can go wrong here?"

Not what works. Not the happy path. But what can break.

So I check:
• What if the data is null?
• What if the API fails?
• What if this runs 1000 times?
• What if the response is slow?
• What if the user does something unexpected?

Earlier, I used to write code for success. Now I try to write code for failure.

Because real systems don't fail in obvious ways. They fail in edge cases. And most bugs come from things we didn't think about.

This one mindset shift:
👉 reduced bugs
👉 improved debugging
👉 made code more reliable

Good developers write code that works. Better developers write code that keeps working.

#dotnet #softwareengineering #developers #cleanCode #AjayDevInsights
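Here's what that checklist looks like as actual code, in a minimal JavaScript sketch (the endpoint, the field check, and the 5-second timeout are all illustrative):

// Each guard answers one "what can go wrong?" question from the list above
async function loadProfile(userId) {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), 5000)  // what if the response is slow?
  try {
    const res = await fetch(`/api/profile/${userId}`, { signal: controller.signal })
    if (!res.ok) return { error: `HTTP ${res.status}` }     // what if the API fails?
    const data = await res.json()
    if (data == null || data.name == null) {                // what if the data is null?
      return { error: 'Malformed response' }
    }
    return { data }
  } catch (err) {
    return { error: err.name === 'AbortError' ? 'Timed out' : err.message }
  } finally {
    clearTimeout(timer)                                     // don't leak the timer
  }
}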
🚀 10 cURL Commands Every Backend Developer Should Know

You're debugging APIs the slow way. And yeah — that's costing you hours every week. I've seen developers spend 30–40 mins debugging something that takes 30 seconds with cURL.

So here are 10 cURL commands you'll actually use 👇

⚡ 1. Basic GET
👉 curl https://lnkd.in/ghmyBe6g

⚡ 2. Add headers (Auth)
👉 curl -H "Authorization: Bearer TOKEN" https://lnkd.in/ghmyBe6g

⚡ 3. POST JSON
👉 curl -X POST https://lnkd.in/ghmyBe6g -H "Content-Type: application/json" -d '{"name":"Vivek"}'

⚡ 4. Update (PUT)
👉 curl -X PUT https://lnkd.in/g4q-PGzz -d '{"name":"Updated"}'

⚡ 5. DELETE
👉 curl -X DELETE https://lnkd.in/g4q-PGzz

⚡ 6. Query params
👉 curl "https://lnkd.in/g2erKS-K"

⚡ 7. Debug (verbose)
👉 curl -v https://api.example.com

⚡ 8. Headers only (underrated)
👉 curl -I https://api.example.com

⚡ 9. Save response
👉 curl -o data.json https://lnkd.in/gWQwUqCU

⚡ 10. File upload
👉 curl -X POST https://lnkd.in/gQVadd_t -F "file=@image.png"

💡 Bonus:
👉 curl --compressed https://api.example.com

⚠️ Hard truth: Postman makes you comfortable. cURL makes you dangerous.

📌 If you found this useful: Save it. You'll need it during debugging.
💬 Comment: Which cURL command do you use daily?
🔁 Follow for more backend + system design content

#BackendDevelopment #WebDevelopment #SoftwareEngineering #APIs #SystemDesign
I've been using Claude Code every day for 6 months. Here's what most developers get wrong:

They treat it like a chat window. Copy code. Paste. Debug. Repeat. That's not how you get 10x results.

The secret? One file: CLAUDE.md

It's a configuration file that tells Claude:
→ Your project structure
→ How to run tests
→ Your coding standards
→ The "gotchas" in your codebase

With it, Claude goes from "generic assistant" to "team member who knows your code." Without it, you're re-explaining your project every single session.

I wrote up everything I learned into a complete guide:
• How to structure your CLAUDE.md
• The 5 sections every file needs
• Common mistakes that kill productivity
• Real examples you can copy

The developers who master this now will have an unfair advantage.

Full guide in the link below 👇

#ClaudeCode #AIEngineering #DeveloperProductivity #SoftwareDevelopment #CodingTips
I used to write REST APIs in C. Not because I wanted to… but because that's what the system required. And honestly — it made me a better developer.

You learn things most people skip:
- How requests actually flow
- Memory management (and how easy it is to break things)
- Why performance really matters

But here's the truth no one says out loud: building APIs in C feels like assembling a car… just to go to the grocery store. Everything is manual. Everything takes time. Even a small feature feels heavy.

Then I started building personal projects with FastAPI. And it felt like cheating.

Same API idea. Same logic. But suddenly:
- 100+ lines → 10 lines
- Manual validation → automatic
- No docs → instant Swagger UI
- Sync headaches → async out of the box

I wasn't fighting the system anymore. I was actually building.

Microservices were another big shift for me. In manually implemented REST services, everything from service communication to retries and error handling requires explicit effort. With FastAPI, structuring and scaling microservices feels far more natural, letting me focus on architecture instead of plumbing.

That's when it clicked:
C teaches you how things work.
FastAPI lets you build what matters.

Both are valuable. But they serve different purposes.

Today, my workflow looks like this:
- Low-level systems → C mindset
- Rapid product building → FastAPI

And that combination is powerful.

If you're still writing heavy backend code for simple APIs… try building the same thing in FastAPI once. You'll question a lot of your current choices.

#FastAPI #Python #BackendDevelopment #SoftwareEngineering #APIs #DevJourney
𝗢𝗻𝗲 𝗺𝗮𝗿𝗸𝗱𝗼𝘄𝗻 𝗳𝗶𝗹𝗲. 𝗭𝗲𝗿𝗼 𝗿𝗲𝗽𝗲𝗮𝘁𝗲𝗱 𝗶𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻𝘀. 𝗡𝗼 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝗱𝗿𝗶𝗳𝘁.

Most engineers run OpenCode with the default build agent. OpenCode is a free, open-source Claude Code alternative. 95K GitHub stars. Supports 75+ models. No subscription. Bring your own API key or work with a ChatGPT subscription.

But the default agent is still general-purpose. New session. Blank context. Same stack explanation every morning.

I wrote my own operator agent instead. 100+ production workflows behind it. 4 stacks. Runs daily across n8n workflows, Python scripts, Next.js builds, and production automation.

Here's the exact 𝗗𝗮𝗶𝗹𝘆 𝗢𝗽𝗲𝗿𝗮𝘁𝗼𝗿 𝗔𝗴𝗲𝗻𝘁 config I run in OpenCode:

→ 𝗣𝗹𝗮𝗻-𝗳𝗶𝗿𝘀𝘁 𝗺𝗼𝗱𝗲. Numbered plan for every non-trivial task. No code gets written before the plan is confirmed.
→ 𝗔𝗻𝘁𝗶-𝘀𝘆𝗰𝗼𝗽𝗵𝗮𝗻𝗰𝘆 𝗿𝘂𝗹𝗲. Agent challenges assumptions before executing. Flags gaps in my logic before touching a file, not after.
→ 𝗕𝗮𝘀𝗵 𝗽𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻 𝗺𝗼𝗱𝗲𝗹. git push, npm run, and python scripts all default to ask. Nothing destructive runs without confirmation.
→ 𝗥𝗲𝗮𝗱 𝗯𝗲𝗳𝗼𝗿𝗲 𝗲𝗱𝗶𝘁. The agent checks the current file state before every change. No overwrites from stale context.
→ 𝗦𝘂𝗯𝗮𝗴𝗲𝗻𝘁 𝗱𝗲𝗹𝗲𝗴𝗮𝘁𝗶𝗼𝗻. Parallel research and exploration run as scoped subtasks. The main agent stays on the build.
→ 𝗦𝘁𝗮𝗰𝗸 𝗹𝗼𝗰𝗸. n8n, Python, Next.js, Postgres, Prisma. Full context before the first prompt of every session.

The default agent is general-purpose. This one knows my risk tolerance, my repo patterns, and my stack before I type a single word.

Drop 𝗦𝗧𝗔𝗖𝗞 in the comments, and I'll send you the full config.

𝗦𝗮𝘃𝗲 𝘁𝗵𝗶𝘀 for the next time you have 30 minutes and want to level up your OpenCode setup.

Follow 𝗕𝗶𝗹𝗮𝗹 𝗔𝗵𝗺𝗮𝗱 for more on 𝗻𝟴𝗻, 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲, OpenCode, and production automation.

#OpenCode #ClaudeCode #AIAgents #AutomationEngineering
𝗜𝗳 𝘆𝗼𝘂𝗿 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁 𝗶𝘀 𝗼𝗻𝗹𝘆 𝗳𝗶𝘅𝗶𝗻𝗴 𝗶𝘁𝘀 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝗰𝗼𝗱𝗲, 𝗶𝘁's 𝗼𝗻𝗹𝘆 𝗱𝗼𝗶𝗻𝗴 𝗵𝗮𝗹𝗳 𝘁𝗵𝗲 𝗷𝗼𝗯. 💡

When we talk about "𝘈𝘶𝘵𝘰-𝘐𝘮𝘱𝘳𝘰𝘷𝘪𝘯𝘨 𝘈𝘐 𝘈𝘨𝘦𝘯𝘵𝘴," there are actually TWO distinct optimization loops happening under the hood: Code Improvement (the short-term fix) and Prompt Improvement (the long-term memory).

Here is the breakdown of how both loops trigger after every single iteration:

1️⃣ 𝗔𝘂𝘁𝗼 𝗖𝗼𝗱𝗲 𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁 (𝗧𝗵𝗲 "𝗥𝗲𝗳𝗹𝗲𝘅𝗶𝗼𝗻" 𝗟𝗼𝗼𝗽)

This is how an agent fixes immediate, tactical errors. It works just like a human developer debugging a script.

• 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗲: The LLM writes the initial code based on the task.
• 𝗘𝘅𝗲𝗰𝘂𝘁𝗲: The code runs in an isolated, deterministic environment (like a sandbox or container).
• 𝗖𝗿𝗶𝘁𝗶𝗾𝘂𝗲: If the code fails or the unit tests don't pass, the agent doesn't just guess again. It takes the exact stack trace, error message, and the failed code, and passes it to an "Evaluator" prompt.
• 𝗥𝗲𝘄𝗿𝗶𝘁𝗲: The agent analyzes the error (e.g., "Ah, I used a deprecated API endpoint") and generates a patched version of the code.
• 𝗥𝗲𝘀𝘂𝗹𝘁: The code is iteratively refined until it works for this specific execution.

2️⃣ 𝗔𝘂𝘁𝗼 𝗣𝗿𝗼𝗺𝗽𝘁 𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁 (𝗠𝗲𝘁𝗮-𝗣𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴)

This is the real game-changer. If an agent keeps making the same mistake across different tasks, fixing the code isn't enough—you need to fix the instructions. Instead of manual "prompt engineering," the system optimizes its own prompts:

• 𝗔𝗻𝗮𝗹𝘆𝘇𝗲 𝘁𝗵𝗲 𝗛𝗶𝘀𝘁𝗼𝗿𝘆: A Meta-Agent (or an LLM acting as a judge) reviews the logs of recent failures and successful fixes across multiple iterations.
• 𝗜𝗱𝗲𝗻𝘁𝗶𝗳𝘆 𝘁𝗵𝗲 𝗚𝗮𝗽: The Meta-Agent spots a pattern. For example, "The underlying model consistently forgets to handle null values before parsing the JSON."
• 𝗥𝗲𝘄𝗿𝗶𝘁𝗲 𝘁𝗵𝗲 𝗣𝗿𝗼𝗺𝗽𝘁: The agent automatically mutates and updates its own base system prompt, injecting a new rule (e.g., "CRITICAL: Always validate for null values before JSON parsing").
• 𝗥𝗲𝘀𝘂𝗹𝘁: The baseline intelligence of the agent permanently increases. It won't make that class of error in future tasks, saving compute, tokens, and time.

𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀: Code improvement allows an agent to brute-force a solution today. Prompt improvement ensures the agent is actually smarter tomorrow.

Frameworks utilizing automated prompt optimization (like DSPy) are already proving that algorithmic prompt tweaking beats manual human engineering. The future of AI development isn't writing better prompts; it's building systems that write better prompts for themselves.

#AIAgents #PromptEngineering #MachineLearning #SoftwareDevelopment #TechInnovation
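Loop 1 fits in a few lines of JavaScript. A rough sketch only: llm() and runInSandbox() are hypothetical stand-ins for a model call and an isolated executor, not a real library API:

async function reflexionLoop(task, maxAttempts = 3) {
  let code = await llm(`Write code for: ${task}`)          // Generate
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await runInSandbox(code)                // Execute in isolation
    if (result.ok) return code                             // tests passed, we're done
    // Critique + Rewrite: feed back the exact failure, don't regenerate blindly
    code = await llm(
      `This code failed.\nCode:\n${code}\nError:\n${result.stackTrace}\nFix it.`
    )
  }
  throw new Error(`No working solution after ${maxAttempts} attempts`)
}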
I have faced the same issue and solved it using primrose