Day 8/30 — Your API will receive the same request twice. What happens next is on you.

Network hiccups. Client retries. Double‑clicking checkout. Message broker redelivery.

Duplicate requests are not edge cases in distributed systems. They are guaranteed. The only question is whether your system:
❌ creates duplicate orders and double charges
✅ or handles it safely

Idempotency: same request, same result.
No matter how many times it arrives.

The standard approach: an Idempotency Key.
- Client generates a unique key per intent (not per retry)
- Retries send the same key
- Server returns the original result, without re‑executing logic

Effect:
✅ Payment charged once
✅ Order created once
✅ Duplicate retries become harmless

Where idempotency is mandatory. There are no exceptions here:
- Payments & refunds
- Order creation
- Message consumers (Kafka, SQS, RabbitMQ, etc.)

Especially for message queues — brokers can redeliver. Idempotency must live in the consumer, not the broker. If you don’t guard against duplicates, “at‑least‑once delivery” becomes “at‑least‑twice damage”.

The mental model shift:
Junior engineers design for: “The request comes once and succeeds.”
Production engineers assume: “The request arrives twice, out of order, after a partial failure.”

And they write handlers where the second call is a no‑op. That’s idempotency.

If retrying a request can break your system, your system isn’t production‑ready yet.

#microservices #springboot #java #backend #softwareengineering
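A minimal sketch of the idempotency-key pattern described above, in plain Java with an in-memory store. This is illustrative only: a real system would back it with Redis or a database unique constraint, add a TTL, and persist the stored response; all names here are made up.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative in-memory idempotency store. The atomicity of
// computeIfAbsent guarantees the operation runs at most once per key,
// even when duplicate requests arrive concurrently.
public class IdempotencyStore {
    private final Map<String, String> responses = new ConcurrentHashMap<>();

    // First request with a key executes the operation and caches the result;
    // every retry with the same key gets the cached result back, unchanged.
    public String execute(String idempotencyKey, Supplier<String> operation) {
        return responses.computeIfAbsent(idempotencyKey, k -> operation.get());
    }
}
```

Usage mirrors the post: the client generates the key per intent (e.g. one key per checkout attempt, reused across retries) and the server calls `execute(key, () -> chargePayment(...))`, so a retried request returns the original result without charging again.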
Idempotency in Distributed Systems: Handling Duplicate Requests Safely
🟢 Spring Boot: Spring Boot and RabbitMQ - Message Queues

Most systems stop scaling the moment one service waits for another to respond. RabbitMQ with Spring Boot is how I decouple those dependencies and make the system breathe again.

The mental model is simple once it clicks. A producer does not send messages to a consumer; it sends them to an exchange. The exchange decides - based on routing rules - which queues get the message. Consumers then read from queues at their own pace. Producer and consumer never know each other. That single hop of indirection unlocks retry, fan-out, priority handling, and horizontal scaling.

Spring Boot makes the wiring almost boring. Add spring-boot-starter-amqp, define beans for your Queue, Exchange, and Binding, and use RabbitTemplate to publish. A @RabbitListener method becomes a consumer - no infrastructure code, no threads to manage.

What I wish I had learned sooner:
- Always use a Dead Letter Exchange. Messages will fail, and you need somewhere to inspect them.
- Make consumers idempotent. RabbitMQ guarantees at-least-once delivery, not exactly-once.
- Acknowledge manually in production. Auto-ack loses messages on crash.
- Use direct exchanges for point-to-point, topic exchanges for pub-sub with patterns, fanout for broadcast.
- Set prefetch count. Without it, one slow consumer hoards the whole queue.

Async messaging is not a silver bullet - it trades latency for resilience and throughput. But when you need reliability under load, nothing beats a well-tuned queue.

See the attached diagram for how producer, exchange, queue, and consumer fit together.

#SpringBoot #RabbitMQ #MessageQueue #Microservices #EventDriven #Java #SoftwareArchitecture #BackendDevelopment
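The wiring described above, sketched with Spring AMQP. Queue, exchange, and routing-key names are made up for illustration; a production setup would also declare the dead-letter exchange and switch to manual acknowledgement, per the bullets above.

```java
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;

@Configuration
class RabbitConfig {
    // Producer publishes to the exchange, never to the queue directly
    @Bean DirectExchange orderExchange() { return new DirectExchange("orders"); }

    @Bean Queue orderQueue() { return QueueBuilder.durable("orders.created").build(); }

    // The binding is the routing rule: exchange -> queue, keyed by "order.created"
    @Bean Binding orderBinding(Queue orderQueue, DirectExchange orderExchange) {
        return BindingBuilder.bind(orderQueue).to(orderExchange).with("order.created");
    }
}

@Component
class OrderConsumer {
    @RabbitListener(queues = "orders.created")
    public void handle(String payload) {
        // Must be idempotent: delivery is at-least-once, so this can run twice
    }
}
```

Publishing is then one line against the exchange: `rabbitTemplate.convertAndSend("orders", "order.created", payload)`.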
Day 10/30 — If you can’t trace a failed request across services in under 2 minutes, your logging is broken.

Most teams realize this during an incident. At 2 AM. With leadership asking, “What happened?”

A user reports: “My order failed.” You check:
- Order Service → request looks fine
- Payment Service → no record
- API Gateway → thousands of requests, impossible to isolate one

45 minutes later, you’re still grepping logs across 5 services. That’s not a debugging problem. That’s a logging architecture problem.

3 things every production log must have:

1️⃣ Structure — log JSON, not sentences
Human‑readable logs don’t scale. Machine‑queryable logs do. Structured logs let you filter by orderId, userId, traceId, amount, latency — instantly. When you have millions of log lines, you don’t read. You query.

2️⃣ Correlation — one traceId everywhere
Without a correlation ID: Gateway logs are one story, Order logs another, Payment logs a third. With a single traceId, they become one timeline. One query should tell you when the request entered, which service failed, why, and at which millisecond. If you need multiple terminal windows and manual grep… you’ve already lost.

3️⃣ Centralization — all logs, one place
Logs on individual servers are effectively invisible. Ship everything to a central system: ELK, Datadog, Loki, CloudWatch — pick your poison.
Key rule:
✅ Log to stdout
✅ Let your platform collect & forward
❌ Don’t SSH into servers to read files
If logs aren’t searchable centrally, they don’t exist during incidents.

What to log (and what not to):
✅ Request entry & exit (with duration)
✅ Every external call
✅ Every exception with full context
✅ Every state transition (order created → payment started → failed)
❌ Tight loops
❌ Sensitive data (passwords, cards, tokens)
❌ DEBUG by default in production

INFO + structured fields + traceId beats verbose noise every time.

The rule that covers everything: a developer who’s never seen your system should be able to take a traceId from a customer complaint, reconstruct exactly what happened, across all services, without touching a single server.

If that’s not true today, your logging isn’t done yet.

#microservices #springboot #java #backend #softwareengineering
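One common way to get a single traceId onto every log line is a servlet filter that reads (or mints) the ID and puts it into SLF4J's MDC; the logging layout then emits it as a structured field. A sketch — the header name `X-Trace-Id` is a convention, not a standard, and in practice a tracing library (e.g. OpenTelemetry / Micrometer Tracing) usually does this for you:

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import org.slf4j.MDC;
import java.io.IOException;
import java.util.UUID;

public class TraceIdFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Propagate the caller's traceId if present, otherwise start a new trace
        String traceId = ((HttpServletRequest) req).getHeader("X-Trace-Id");
        if (traceId == null || traceId.isBlank()) {
            traceId = UUID.randomUUID().toString();
        }
        MDC.put("traceId", traceId); // every log line in this request now carries it
        try {
            chain.doFilter(req, res);
        } finally {
            MDC.remove("traceId");   // don't leak the ID into the next request on this thread
        }
    }
}
```

With a JSON encoder configured to include MDC fields, every service that forwards the same header produces lines queryable by one traceId — the single timeline the post describes.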
Dynamic Multi-DataSource Routing in Spring Boot

In one of my backend implementations, I needed to handle **two separate databases**:
- Primary DB → user, product, orders
- Billing DB → invoices, payments

🔴 Problem: Maintaining separate services for each DB was increasing complexity and deployment overhead.

🟢 Solution: Implemented **dynamic datasource routing** using:
- AbstractRoutingDataSource
- ThreadLocal context
- Request-based DB selection

💡 How it works:
- Each request is intercepted via a filter
- Based on the API path, the DB context is set (PRIMARY / BILLING)
- RoutingDataSource dynamically switches connections at runtime

⚙️ Why this approach?
✔ Avoids multiple microservices for simple separation
✔ Keeps transaction management centralised
✔ Reduces infra and deployment complexity

⚠️ Trade-offs:
- Requires careful ThreadLocal handling
- Debugging can be tricky if the context is not cleared properly

📌 Key Learning: Not every problem needs microservices. Sometimes, **smart resource routing inside a single service** is the more scalable choice.

#SpringBoot #Java #BackendDevelopment #SystemDesign #SoftwareArchitecture
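A sketch of the routing described above, assuming Spring's `AbstractRoutingDataSource`. The enum and holder names are illustrative, not from the original implementation; a filter would call `DbContextHolder.set(...)` based on the request path and `clear()` in a `finally` block, which is exactly the ThreadLocal hygiene the trade-offs warn about.

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Illustrative: which logical database a request should hit
enum DbType { PRIMARY, BILLING }

// ThreadLocal context set by a request filter and cleared after the request
class DbContextHolder {
    private static final ThreadLocal<DbType> CONTEXT = new ThreadLocal<>();
    static void set(DbType type) { CONTEXT.set(type); }
    static DbType get() { return CONTEXT.get(); }
    static void clear() { CONTEXT.remove(); } // critical: avoid leaking across pooled threads
}

// Spring consults this on every connection checkout to pick the target DataSource
class RoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // Fall back to PRIMARY when no context was set for this thread
        DbType type = DbContextHolder.get();
        return type == null ? DbType.PRIMARY : type;
    }
}
```

The `RoutingDataSource` bean is then configured with `setTargetDataSources(...)` mapping each `DbType` to a real `DataSource`, so the rest of the code (JPA, `JdbcTemplate`) stays unaware of the routing.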
𝗜𝗱𝗲𝗺𝗽𝗼𝘁𝗲𝗻𝗰𝘆 𝗶𝘀 𝗼𝗻𝗲 𝗼𝗳 𝘁𝗵𝗲 𝗺𝗼𝘀𝘁 𝘂𝗻𝗱𝗲𝗿-𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗲𝗱 𝗰𝗼𝗻𝗰𝗲𝗽𝘁𝘀 𝗶𝗻 𝗯𝗮𝗰𝗸𝗲𝗻𝗱 𝘀𝘆𝘀𝘁𝗲𝗺𝘀.

Not because developers haven't heard the word. Because many think knowing the term is enough. It isn't.

𝗛𝗲𝗿𝗲'𝘀 𝘄𝗵𝗮𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗵𝗮𝗽𝗽𝗲𝗻𝘀 𝗶𝗻 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻:
Your Lambda processes a payment webhook. The upstream service retries because it didn't get an ACK in time. Same event. Same function. Runs twice.

𝗧𝘄𝗼 𝗰𝗵𝗮𝗿𝗴𝗲𝘀. 𝗢𝗻𝗲 𝗰𝘂𝘀𝘁𝗼𝗺𝗲𝗿. 𝗭𝗲𝗿𝗼 𝗮𝗹𝗲𝗿𝘁𝘀.
You only find out when support gets the ticket.

This isn't some rare edge case. Retries are normal in distributed systems. EventBridge retries. Pub/Sub retries. SQS retries. Users double-click. The question isn't whether your system will receive duplicate requests. 𝗜𝘁'𝘀 𝘄𝗵𝗲𝘁𝗵𝗲𝗿 𝘆𝗼𝘂𝗿 𝘀𝘆𝘀𝘁𝗲𝗺 𝗶𝘀 𝗯𝘂𝗶𝗹𝘁 𝘁𝗼 𝗵𝗮𝗻𝗱𝗹𝗲 𝘁𝗵𝗲𝗺.

𝟰 𝘁𝗵𝗶𝗻𝗴𝘀 𝘁𝗵𝗮𝘁 𝗺𝗮𝗸𝗲 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗿𝗲𝘀𝗶𝗹𝗶𝗲𝗻𝘁:

𝟭. 𝗘𝘃𝗲𝗿𝘆 𝘄𝗿𝗶𝘁𝗲 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗼𝗻 𝗻𝗲𝗲𝗱𝘀 𝗮 𝘂𝗻𝗶𝗾𝘂𝗲 𝗸𝗲𝘆
Client sends a request ID. If the same ID hits again, return the stored response instead of re-running the operation. Call it an idempotency key, request_id, or job_id. The name doesn't matter. 𝗧𝗵𝗲 𝗴𝘂𝗮𝗿𝗮𝗻𝘁𝗲𝗲 𝗱𝗼𝗲𝘀.

𝟮. 𝗧𝗵𝗲 𝗱𝗮𝘁𝗮𝗯𝗮𝘀𝗲 𝗶𝘀 𝘆𝗼𝘂𝗿 𝗳𝗶𝗻𝗮𝗹 𝗱𝗲𝗳𝗲𝗻𝘀𝗲
Application checks can fail under concurrency. Unique constraints won't. They catch what slips through your service layer.

𝟯. "𝗖𝗵𝗲𝗰𝗸 𝘁𝗵𝗲𝗻 𝗮𝗰𝘁" 𝗶𝘀 𝗮 𝗿𝗮𝗰𝗲 𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻 𝘄𝗮𝗶𝘁𝗶𝗻𝗴 𝘁𝗼 𝗵𝗮𝗽𝗽𝗲𝗻
This breaks under concurrent requests: Check if exists → Insert. Two requests can pass the check at the same time. Use atomic inserts, upserts, or transactional guarantees.

𝟰. 𝗔𝗣𝗜 𝗶𝗱𝗲𝗺𝗽𝗼𝘁𝗲𝗻𝗰𝘆 𝗮𝗻𝗱 𝗰𝗼𝗻𝘀𝘂𝗺𝗲𝗿 𝗶𝗱𝗲𝗺𝗽𝗼𝘁𝗲𝗻𝗰𝘆 𝗮𝗿𝗲 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁 𝗽𝗿𝗼𝗯𝗹𝗲𝗺𝘀
Handling duplicates at your REST layer doesn't protect your event consumers. Your queue consumers need their own deduplication strategy.

𝗧𝗵𝗲 𝗽𝗮𝘁𝘁𝗲𝗿𝗻 𝗜'𝘃𝗲 𝗻𝗼𝘁𝗶𝗰𝗲𝗱: Most teams add idempotency after the first incident. The better systems are designed with it from day one.

#BackendEngineering #SystemDesign #DistributedSystems #NodeJS #SoftwareEngineering
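Point 3 can be demonstrated in a few lines. The atomic variant below is the in-memory analogue of an INSERT guarded by a unique constraint: exactly one caller ever "wins" a given key, even under concurrency, because the check and the write are a single operation (a sketch in Java for consistency with the rest of this feed; names are illustrative).

```java
import java.util.concurrent.ConcurrentHashMap;

// Atomic check-and-insert: the in-memory analogue of
// "INSERT ... ON CONFLICT DO NOTHING" behind a unique constraint.
public class DedupDemo {
    private final ConcurrentHashMap<String, Boolean> seen = new ConcurrentHashMap<>();

    // Only one caller per key ever gets 'true', even under concurrent calls.
    // Contrast with "if (!seen.containsKey(id)) seen.put(id, true)" — two
    // threads can both pass that check before either writes.
    public boolean firstTimeSeen(String requestId) {
        return seen.putIfAbsent(requestId, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        DedupDemo d = new DedupDemo();
        System.out.println(d.firstTimeSeen("evt-42")); // true  -> process the event
        System.out.println(d.firstTimeSeen("evt-42")); // false -> duplicate, skip it
    }
}
```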
𝐒𝐨𝐦𝐞 𝐨𝐟 𝐭𝐡𝐞 𝐦𝐨𝐬𝐭 𝐜𝐫𝐢𝐭𝐢𝐜𝐚𝐥 𝐜𝐨𝐝𝐞 𝐢𝐧 𝐚 𝐬𝐲𝐬𝐭𝐞𝐦… 𝐫𝐮𝐧𝐬 𝐰𝐡𝐞𝐧 𝐧𝐨 𝐨𝐧𝐞 𝐢𝐬 𝐰𝐚𝐭𝐜𝐡𝐢𝐧𝐠.

The most dangerous code in your backend… runs automatically. Not your APIs. Not your UI. Your background jobs.

Recently, I implemented a simple cron job in NestJS to refresh exchange rates every hour:

@Cron('0 * * * *')
async handleCron() {
  await this.service.fetchRates();
}

At first glance, this looks straightforward. But in production, this “simple” job can introduce serious issues if not designed carefully.

Here’s what actually matters:
• Idempotency — The same job should run multiple times without corrupting data
• Error Handling — External APIs fail, retries and fallbacks are essential
• Concurrency — Multiple instances = duplicate executions unless controlled
• Observability — Logs, metrics, and alerts are non-negotiable
• Scalability — Heavy tasks should be moved to queues, not handled inline

Cron jobs are not just schedulers. They are critical automation layers that directly impact system reliability.

A well-designed cron job goes unnoticed. A poorly designed one becomes a production incident waiting to happen.

How are you managing background jobs in your systems?
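The concurrency bullet above, sketched in Java for consistency with the rest of this feed (the post itself is NestJS): skip a scheduled run if the previous one is still in flight. An in-process flag like this only protects a single instance; across multiple instances you would need a shared lock such as a database row or a Redis key. All names are illustrative.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Guards against overlapping executions of a scheduled job within one process.
public class RateRefreshJob {
    private final AtomicBoolean running = new AtomicBoolean(false);

    // Returns true if the task actually ran, false if a previous run was
    // still active and this invocation was skipped.
    public boolean runOnce(Runnable task) {
        if (!running.compareAndSet(false, true)) {
            return false; // previous run still in flight -> skip, don't overlap
        }
        try {
            task.run();
            return true;
        } finally {
            running.set(false); // always release, even if the task throws
        }
    }
}
```

The scheduler (NestJS `@Cron`, Spring `@Scheduled`, plain cron) then calls `runOnce(...)` each tick, and an hourly job that occasionally takes 70 minutes no longer doubles up.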
If your API waits for everything to finish… you are slowing down your users.

Some operations take time:
• Sending emails
• Generating reports
• Calling external APIs
• Processing files

But many developers do this synchronously.

⸻

❌ Blocking API

@PostMapping("/register")
public String registerUser() {
    userService.saveUser();
    emailService.sendWelcomeEmail();
    return "User Created";
}

User waits until email is sent. Slow response.

⸻

✅ Async Processing

Return response immediately:

@PostMapping("/register")
public String registerUser() {
    userService.saveUser();
    emailService.sendWelcomeEmailAsync();
    return "User Created";
}

⸻

⚙️ Spring Boot Example

Enable async:

@EnableAsync
@SpringBootApplication

Async method:

@Async
public void sendWelcomeEmailAsync() {
    // send email
}

⸻

🧠 What Happens Now

User Request → Save User → Return Response → Async Email Processing

Faster APIs.

⸻

⚠️ When to Use Async

Use for:
• Emails
• Notifications
• Background jobs
• Logging

Avoid for:
• Transactions
• Payment processing
• Critical operations

⸻

💡 Lesson: Fast APIs don’t do everything. They delegate work to background processing.

⸻

Day 20 of becoming production-ready with Spring Boot.

Question: Do you use async processing in your APIs?

#Java #SpringBoot #BackendEngineering #Performance #Async
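One pitfall worth flagging next to the snippets above: `@Async` only works through the Spring proxy, so the async method must live in a different bean than its caller — calling it from within the same class (self-invocation) runs it synchronously. A consolidated sketch (class names are illustrative):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@EnableAsync
@Configuration
class AsyncConfig {
    // Optionally define a ThreadPoolTaskExecutor bean here; without one,
    // Spring falls back to a default executor.
}

@Service
class EmailService {
    @Async
    public void sendWelcomeEmailAsync(String userId) {
        // Runs on a task-executor thread; by the time this executes,
        // the HTTP response has already been returned to the user.
    }
}
```

The controller injects `EmailService` and calls `sendWelcomeEmailAsync(...)` after saving the user — the proxy hands the call off to the executor and returns immediately.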
💥 Building Resilient APIs

Downstream failures are inevitable. Your API should be ready for them, not surprised by them. Here’s the playbook 👇

⏱️ Timeouts → Don’t wait forever
🔁 Smart retries → Only transient errors + backoff
⚡ Circuit breaker → Stop calling failing services
🛟 Fallbacks → Return something (cache / partial / default)
🚧 Isolation → One failure shouldn’t take everything down
💳 Idempotency → Make retries safe (especially payments)
🧠 Caching → Reduce dependency on external systems
📩 Async → Decouple with queues when possible
🔍 Observability → Logs, metrics, tracing
🚦 Rate limiting → Protect under load

💡 Simple rule: Fail fast. Degrade gracefully. Recover automatically.

What’s one technique you rely on the most?

#dotnet #webapi #backend #softwareengineering #developers #aspnetcore
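The circuit-breaker line deserves a concrete picture. Below is a deliberately minimal state machine, shown in Java for consistency with the rest of this feed (in production you would reach for a library — Polly in .NET, Resilience4j in Java — rather than hand-rolling this). It opens after N consecutive failures and rejects calls until a cooldown elapses; thresholds and names are illustrative.

```java
import java.time.Duration;
import java.time.Instant;

// Minimal circuit breaker: closed -> open after N consecutive failures,
// open -> (half-open) retry after a cooldown. Illustrative only.
public class CircuitBreaker {
    private final int failureThreshold;
    private final Duration openDuration;
    private int consecutiveFailures = 0;
    private Instant openedAt = null; // non-null means the breaker is open

    public CircuitBreaker(int failureThreshold, Duration openDuration) {
        this.failureThreshold = failureThreshold;
        this.openDuration = openDuration;
    }

    public synchronized boolean allowRequest() {
        if (openedAt == null) return true;                        // closed: allow
        if (Instant.now().isAfter(openedAt.plus(openDuration))) { // cooldown over
            openedAt = null;                                      // let a probe through
            consecutiveFailures = 0;
            return true;
        }
        return false;                                             // open: fail fast
    }

    public synchronized void recordSuccess() { consecutiveFailures = 0; }

    public synchronized void recordFailure() {
        if (++consecutiveFailures >= failureThreshold) openedAt = Instant.now();
    }
}
```

The caller wraps each downstream call: check `allowRequest()` first (and serve the fallback if false), then report `recordSuccess()` or `recordFailure()` — which is exactly where the fallback and timeout items in the playbook plug in.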
🚀 Day 32 – RestTemplate vs WebClient vs RestClient: Choosing the Right HTTP Client

Calling external APIs is a core part of modern microservices. Spring offers multiple HTTP clients — but choosing the wrong one can impact performance, scalability, and maintainability. Let’s break it down 👇

🔹 1. RestTemplate (Legacy but Stable)
✔ Synchronous, blocking calls
✔ Simple and easy to use
✔ Widely adopted in older systems
❌ Not actively enhanced (in maintenance mode)
❌ Not suitable for high-concurrency systems
➡ Best for:
- Legacy applications
- Simple use cases

🔹 2. WebClient (Reactive & Non-Blocking)
✔ Asynchronous, non-blocking
✔ Built on reactive programming (Project Reactor)
✔ Supports streaming & backpressure
➡ Ideal for:
- High-throughput systems
- Microservices with heavy I/O
- Reactive architectures
⚠️ Requires understanding of reactive programming

🔹 3. RestClient (Modern Replacement)
✔ Introduced in Spring Framework 6.1
✔ Fluent, modern API
✔ Synchronous but cleaner than RestTemplate
➡ Best for:
- New applications needing simplicity
- Replacement for RestTemplate

🔹 4. Performance Comparison
RestTemplate → Thread per request (blocking)
WebClient → Event-loop model (non-blocking)
RestClient → Blocking but optimized API
➡ For scale → prefer WebClient

🔹 5. When to Use What?
✔ Use RestTemplate → Only in legacy systems
✔ Use WebClient → High scalability & reactive flows
✔ Use RestClient → Clean, modern synchronous calls

🔹 6. Architectural Decision Matters
Choosing the right client impacts:
✔ Resource utilization
✔ Latency
✔ Throughput
✔ System scalability

🔥 Architect’s Takeaway
There is no “one-size-fits-all”:
✔ Simplicity → RestClient
✔ Scalability → WebClient
✔ Legacy → RestTemplate
👉 Choose based on system needs, not familiarity

💬 Are you still using RestTemplate or have you moved to WebClient/RestClient? Why?

#100DaysOfJavaArchitecture #SpringBoot #WebClient #RestTemplate #RestClient #Microservices #SystemDesign #TechLeadership
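For a side-by-side feel, here is the same GET issued with each of the three clients (the URL is a placeholder; in real code you would inject pre-configured, shared client instances rather than creating them per call):

```java
import org.springframework.web.client.RestClient;
import org.springframework.web.client.RestTemplate;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class HttpClientsComparison {
    static final String URL = "https://example.com/api/users/1"; // placeholder

    // 1. RestTemplate — blocking, template-method style, maintenance mode
    String withRestTemplate() {
        return new RestTemplate().getForObject(URL, String.class);
    }

    // 2. WebClient — non-blocking; returns a reactive type you subscribe to
    Mono<String> withWebClient() {
        return WebClient.create()
                .get().uri(URL)
                .retrieve()
                .bodyToMono(String.class);
    }

    // 3. RestClient (Spring Framework 6.1+) — blocking like RestTemplate,
    //    fluent like WebClient
    String withRestClient() {
        return RestClient.create()
                .get().uri(URL)
                .retrieve()
                .body(String.class);
    }
}
```

The signatures make the trade-off visible: the two blocking clients return the body directly and tie up the calling thread, while WebClient returns a `Mono` and frees the thread until the response arrives.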
Day 2/30 — Design for failure first. Features second.

Most mid-level developers ask the wrong question when building microservices.
They ask: "Does my API return the right response?"
They should ask: "What happens to the entire system when this service dies at 2 AM?"

Here's what production actually looks like vs. what your localhost shows you:

Localhost (your laptop):
- All services always up — no timeouts, no restarts
- Network is instant — zero latency between services
- 1 request at a time — no concurrency issues
- Clean database — no stale or partial data

Production (reality):
- Services crash randomly — OOM kills, pod restarts, deploys
- Network is unreliable — packets drop, latency spikes
- 1000s of requests hit together — race conditions everywhere
- Data gets inconsistent — half-written, duplicated, lost

The mental shift that changes everything: before writing a single line of code, ask, "What happens to the user if THIS service goes down right now?" If you don't have a clear answer — your design is not ready yet.

A real example nobody talks about:
Your Order Service calls Payment Service. Payment processes the charge — but before it sends back the response, it crashes. Now what? Your Order Service got a timeout. So it retries. Payment processes the charge again. Your user just got billed twice.

This is called the dual write problem — and it happens because the retry logic didn't account for failure mid-transaction. The fix isn't to write better code. It's to design around the failure upfront — using idempotency keys, so retrying the same payment request never charges twice.

3 questions to ask before designing ANY microservice:
1. What breaks if this service is down for 60 seconds?
2. What if the same request hits this service twice?
3. What if this service is slow instead of completely down?

Slow is actually worse than down. A slow service holds connections open. Those connections pile up. Now your healthy services start timing out too. One slow service can take down your entire system.
#microservices #springboot #backend #java #softwareengineering
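One concrete guard against the "slow is worse than down" failure mode: never call a downstream service without explicit connect and read timeouts, so a hung dependency releases your threads instead of holding them. A Spring sketch — the timeout values are illustrative, not recommendations:

```java
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class TimeoutConfig {
    // A RestTemplate that fails fast instead of waiting forever on a
    // slow downstream service.
    public static RestTemplate restTemplateWithTimeouts() {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(2_000); // ms to establish the TCP connection
        factory.setReadTimeout(3_000);    // ms to wait for the response
        return new RestTemplate(factory);
    }
}
```

With this in place, question 3 above has a bounded answer: a slow Payment Service costs you at most a few seconds per call, and the resulting timeout exception is something your retry-with-idempotency-key logic can handle deliberately.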