1. ThreadLocal: The "Private Locker" Strategy

With ThreadLocal, each thread gets its own independent copy of a variable. There is no shared data, so there is no contention.

The use case: imagine handling multiple money-transfer requests.
Request 1: Customer C101, Txn ID: TXN1001
Request 2: Customer C202, Txn ID: TXN2001

We use ThreadLocal to store the transaction ID so that every log or service call within that thread knows which transaction it's working on, without passing it as a method parameter everywhere.

public class RequestContext {
    private static final ThreadLocal<String> txnId = new ThreadLocal<>();

    public static void setTxnId(String id) { txnId.set(id); }
    public static String getTxnId() { return txnId.get(); }
    public static void clear() { txnId.remove(); } // Always clean up!
}

Why not synchronize here? Because Thread 1 doesn't care about Thread 2's ID. We need isolation, not locking.

2. Synchronization: The "Gatekeeper" Strategy

We use synchronized when threads must access the exact same piece of data (like a bank balance). If two threads try to debit the same account at the same time, you'll end up with incorrect data without a lock.

public synchronized void debit(int amount) {
    if (balance >= amount) {
        balance -= amount;
    }
}

Why not use ThreadLocal here? If each thread had its own "copy" of the balance, the actual account would never be updated globally. We need consistency, which requires a lock.

Key takeaway:
- Use ThreadLocal to avoid synchronization overhead for data that is specific to a thread's execution context (e.g., user IDs, DB connections, transaction IDs).
- Use synchronized when threads must modify the same shared resource and you need to guarantee data integrity.

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
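A minimal, runnable sketch that puts both strategies side by side: a ThreadLocal holds per-thread context while a synchronized debit guards the one shared balance. The class and method names (IsolationVsLocking, runDemo) are illustrative, not from the post.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IsolationVsLocking {
    // Isolation: each worker thread sees only its own transaction id.
    private static final ThreadLocal<String> TXN_ID = new ThreadLocal<>();

    // Consistency: one shared balance, guarded by a lock.
    private static int balance = 1000;

    private static synchronized void debit(int amount) {
        if (balance >= amount) {
            balance -= amount;
        }
    }

    public static synchronized int getBalance() {
        return balance;
    }

    public static int runDemo() {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            final String id = "TXN" + i;
            pool.submit(() -> {
                TXN_ID.set(id);      // private to this thread: no lock needed
                try {
                    debit(10);       // shared state: the lock is mandatory
                } finally {
                    TXN_ID.remove(); // always clean up on pooled threads
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return getBalance(); // 1000 - 100 * 10 = 0, with no lost updates
    }

    public static void main(String[] args) {
        System.out.println("Final balance: " + runDemo());
    }
}
```

Removing `synchronized` from `debit` makes lost updates possible; removing `TXN_ID.remove()` leaks stale context to the next task that reuses the pooled thread.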
ThreadLocal vs Synchronization in Java for Concurrency
More Relevant Posts
🚀 New Video: Why @Transactional is Important in Spring Boot

What happens if:
✔ Employee is saved
❌ IdCard fails
👉 You get inconsistent data

This is where @Transactional saves you.

💡 Simple idea: either everything succeeds… or nothing is saved.

In this video, I show:
✔ The real problem (a partial data save)
✔ How rollback works
✔ Why transactions are critical in real systems

🎥 Watch here: https://lnkd.in/dN3Duxnj

#SpringBoot #JPA #Java #BackendDevelopment #Hibernate
Why @Transactional is Important in Spring Boot? (Fix Data Inconsistency)
Thread pool types:

1. Fixed Thread Pool: a fixed number of threads.
Method: Executors.newFixedThreadPool(int n); internally uses a LinkedBlockingQueue.
Use case: steady load where you want to strictly limit resource usage.

2. Cached Thread Pool: creates new threads as needed but reuses existing threads when available. A thread idle for 60 seconds is terminated.
Method: Executors.newCachedThreadPool(); internally uses a SynchronousQueue.
Use case: applications with many short-lived asynchronous tasks (push notifications, SMS alerts).

3. Scheduled Thread Pool: schedules tasks to run after a given delay or to execute periodically.
Method: Executors.newScheduledThreadPool(int corePoolSize); internally uses a DelayedWorkQueue.
Use case: background cleanup tasks, heartbeat signals, or polling.

4. Single Thread Executor: a single worker thread executes all tasks, guaranteeing they run sequentially.
Method: Executors.newSingleThreadExecutor(); internally uses a LinkedBlockingQueue.
Use case: tasks that must be processed one at a time in a specific order (e.g., event sequencing, ledger accounting).

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
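The four factory methods above can be exercised in one small, runnable sketch (the class name and task payloads are made up for illustration):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PoolTypesDemo {

    public static int runDemo() {
        // Fixed pool: at most 2 threads; extra tasks wait in a LinkedBlockingQueue.
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        // Cached pool: grows on demand; threads idle for 60s are terminated.
        ExecutorService cached = Executors.newCachedThreadPool();
        // Scheduled pool: runs tasks after a delay or periodically.
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(1);
        // Single-thread executor: tasks run one at a time, in submission order.
        ExecutorService single = Executors.newSingleThreadExecutor();

        try {
            Future<Integer> sum = fixed.submit(() -> 21 + 21);
            Future<String> hello = cached.submit(() -> "hello");
            ScheduledFuture<String> delayed =
                    scheduled.schedule(() -> "done", 50, TimeUnit.MILLISECONDS);
            Future<?> seq = single.submit(() -> { /* runs sequentially */ });
            seq.get();
            return sum.get() + hello.get().length() + delayed.get().length();
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            for (ExecutorService e :
                    new ExecutorService[]{fixed, cached, scheduled, single}) {
                e.shutdown();
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(runDemo());
    }
}
```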
⚠️ Day 13 — The Lost Update Bug Two users update the same data… One update silently disappears. No errors. No crashes. Still wrong. --- ### The Setup Two requests hit your API at the same time: 👉 User A updates profile 👉 User B updates profile Both read the same old value. --- ### The Problem Flow looks like this: 1. Read current data 2. Modify 3. Save But: - A reads value = 100 - B reads value = 100 - A updates → 120 - B updates → 130 👉 Final value = 130 👉 A’s update is lost ❌ --- ### ❌ The Mistake Assuming single-user access in a multi-user system. 👉 Concurrency issues are invisible… until production. --- ### ✅ The Fix: Optimistic Locking Use versioning to detect conflicts. 👉 Add version field 👉 Check before update If version changed → retry --- ### 🔥 Production-Grade Approach - Use @Version (JPA) - Implement retry mechanism - Use pessimistic locking (if needed) - Design APIs to handle conflicts gracefully --- ### 🧠 Senior Insight Concurrency bugs don’t fail loudly. They corrupt data silently. --- ### 🎯 The Lesson In multi-user systems: 👉 Conflicts are inevitable 👉 Data loss is preventable --- If multiple users can update the same data… 👉 You need concurrency control. --- #BackendDevelopment #Java #SpringBoot #Microservices #SystemDesign #Concurrency #DistributedSystems #Database
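JPA's @Version performs the conflict check inside the database; as a plain-Java illustration of the same idea, here is a toy sketch where an atomic compare-and-set plays the role of the version check. All names (Profile, tryUpdate) are hypothetical, and real code would retry or surface the conflict.

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockDemo {
    // Stands in for a row with a version column (hypothetical shape).
    record Profile(int value, long version) {}

    private final AtomicReference<Profile> row =
            new AtomicReference<>(new Profile(100, 0));

    // The update succeeds only if the snapshot we read is still current —
    // the same check JPA performs for an @Version field on flush.
    public boolean tryUpdate(Profile read, int newValue) {
        Profile next = new Profile(newValue, read.version() + 1);
        return row.compareAndSet(read, next);
    }

    public Profile current() { return row.get(); }

    public static void main(String[] args) {
        OptimisticLockDemo db = new OptimisticLockDemo();
        Profile a = db.current();              // A reads value=100
        Profile b = db.current();              // B reads the same stale snapshot
        boolean aWins = db.tryUpdate(a, 120);  // first writer succeeds
        boolean bWins = db.tryUpdate(b, 130);  // conflict detected: version moved
        System.out.println(aWins + " " + bWins + " value=" + db.current().value());
    }
}
```

Without the version check, B's blind write would silently overwrite A's update; with it, B's attempt fails loudly and can be retried against fresh data.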
One of the most overlooked performance killers in backend systems: excessive logging.

Many applications have clean architecture, optimized queries, and scalable infrastructure, yet still lose performance because of excessive logging in frequently executed flows.

Common examples:
• Logging inside loops processing thousands of records
• Debug logs with expensive string construction
• Serializing large objects only for logging
• Writing too many synchronous logs under load

Simple view of request processing time:
Business logic = 120 ms
Database = 80 ms
Logging overhead = 95 ms
Total = 295 ms

Better approach:
• Use parameterized logging (log.info("User {}", id))
• Avoid logs inside heavy loops
• Use async logging where appropriate
• Keep DEBUG logs disabled in production
• Log signals, not noise

Lesson: sometimes the system is slow not because of the database or business logic, but because we are logging too much. Good logging helps production. Bad logging becomes production load.

#Java #SpringBoot #BackendDevelopment #Performance #Logging #SeniorDeveloper #SoftwareEngineering
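The "expensive string construction" point can be demonstrated with the JDK's own java.util.logging, whose Supplier overload is the stdlib analogue of SLF4J's parameterized logging. The counter and method names here are invented for the demo.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLoggingDemo {
    private static final Logger LOG =
            Logger.getLogger(LazyLoggingDemo.class.getName());

    static int expensiveCalls = 0;

    // Stands in for costly serialization done only to build a log line.
    static String expensiveDump() {
        expensiveCalls++;
        return "huge-object-dump";
    }

    public static int runDemo() {
        expensiveCalls = 0;
        LOG.setLevel(Level.INFO); // DEBUG/FINE disabled, as in production

        // Eager: the expensive string is built even though FINE is off.
        LOG.log(Level.FINE, "state=" + expensiveDump());

        // Lazy: the Supplier runs only if FINE is actually enabled here — never.
        LOG.log(Level.FINE, () -> "state=" + expensiveDump());

        return expensiveCalls; // only the eager call paid the cost
    }

    public static void main(String[] args) {
        System.out.println("expensive calls: " + runDemo());
    }
}
```

SLF4J's `log.debug("state={}", obj)` gives the same deferral for the message formatting, though `obj.toString()` itself still runs eagerly unless you guard or wrap it.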
One of our APIs started getting slower over time.

At first, nothing looked wrong. It worked fine in dev. But as data increased, response time kept going up 📈

After digging a bit, we found the issue: inside a loop, we were calling the database for every item. So one request was actually triggering 100+ queries 😅

Turns out, this is the classic N+1 query problem. We didn't notice it early on because the data was small, but once it grew, it started hurting performance.

We fixed it by collapsing the loop into a single query (join/batch). Same logic, way better performance 🚀

Small thing, big impact. It made me realize how easy it is to miss these issues when things "seem fine". Now I always keep an eye on how many DB calls an API is making 👀

#BackendEngineering #Java #Performance #Database #Microservice
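Here is a toy, in-memory sketch of the same shape: one simulated query per item versus one batched lookup. The map and counter stand in for the real database and are purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NPlusOneDemo {
    // In-memory stand-in for a database table (hypothetical data).
    static final Map<Integer, String> DB = Map.of(1, "a", 2, "b", 3, "c");
    static int queryCount = 0;

    // One round-trip per item: the N+1 shape hiding inside a loop.
    static List<String> fetchOneByOne(List<Integer> ids) {
        List<String> out = new ArrayList<>();
        for (int id : ids) {
            queryCount++;             // each iteration is a separate query
            out.add(DB.get(id));
        }
        return out;
    }

    // One batched round-trip: the fix (think SELECT ... WHERE id IN (...)).
    static List<String> fetchBatch(List<Integer> ids) {
        queryCount++;                 // a single query for all ids
        return ids.stream().map(DB::get).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3);
        fetchOneByOne(ids);
        int nPlusOne = queryCount;    // grows linearly with the data
        queryCount = 0;
        fetchBatch(ids);
        System.out.println(nPlusOne + " queries vs " + queryCount);
    }
}
```

The loop version costs N queries and scales with the data; the batch version stays at one query no matter how many IDs arrive.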
Saga eliminated the blocking problem. But a different failure mode remained — one that is harder to detect and equally damaging. ⚠️ The Dual-Write Problem Two operations. No atomic guarantee between them. 🔹 Database write succeeds — yard entry and shipment updated. 🔹 Event publish fails — Kafka unavailable, network timeout. 🔹 System state says the order exists. 🔹 Billing, tracking, analytics — never notified. No exception. No alert. Silent inconsistency at scale. 🔥 Real Production Scenario During a Kafka broker restart in yard services: 🔹 Shipment updates persisting to DB successfully. 🔹 Event publishing silently failing throughout. 🔹 30 minutes of yard activity — zero events delivered downstream. 🔹 Billing and tracking operating on completely stale state. 🔹 Manual reconciliation required — hours of engineering effort lost. The fix was not retry logic. It was eliminating the dual write entirely. 🟡 The Solution — Outbox Pattern Treat the event as part of the database transaction — not a separate operation. 🔹 Yard entry + shipment update + event written to OUTBOX — one transaction. 🔹 Background processor reads OUTBOX and publishes to Kafka. 🔹 If transaction commits — event is guaranteed to exist. 🔹 If transaction rolls back — event never existed. No dual write. No silent data loss. No distributed transaction needed. 💡 Key Takeaways 👉 Dual writes are a hidden reliability risk in every event-driven system. 👉 Outbox makes the event part of the transaction — atomicity guaranteed. 👉 Kafka unavailability does not cause data loss — events wait safely in OUTBOX. 👉 Consumers must be idempotent — at-least-once delivery is the contract. 👉 Slight publishing delay is acceptable — losing an event is not. Saga handles business flow. Outbox handles event reliability. Together — they address two distinct failure modes that no single pattern solves alone. 
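As a toy model (not production code), the pattern can be sketched with in-memory stand-ins: a synchronized block plays the role of the database transaction that commits the business row and the outbox row together, and a drain method plays the background relay. All names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class OutboxDemo {
    // In-memory stand-ins for the shipment table, outbox table, and broker.
    static final List<String> shipments = new ArrayList<>();
    static final Queue<String> outbox = new ArrayDeque<>();
    static final List<String> kafka = new ArrayList<>();

    // Business write and event write happen inside ONE "transaction":
    // the synchronized block models the atomic commit boundary.
    static synchronized void updateShipment(String id) {
        shipments.add(id);
        outbox.add("ShipmentUpdated:" + id); // event persisted with the data
    }

    // Background relay: drains the outbox and publishes to the broker.
    // If the broker is down, events simply wait here — nothing is lost.
    static synchronized int publishPending(boolean brokerUp) {
        int published = 0;
        while (brokerUp && !outbox.isEmpty()) {
            kafka.add(outbox.poll());
            published++;
        }
        return published;
    }

    public static void main(String[] args) {
        updateShipment("SHP-1");
        publishPending(false);        // broker down: event parked in outbox
        updateShipment("SHP-2");
        int n = publishPending(true); // broker back: both events delivered
        System.out.println(n + " events published, outbox=" + outbox.size());
    }
}
```

The relay delivers each event at least once, which is why the post's point about idempotent consumers is part of the contract, not an optional extra.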
Next → Final comparison: 2PC vs Saga vs Outbox — when to use which #SpringBoot #Microservices #Kafka #SystemDesign #BackendDevelopment #DistributedSystems #Java #SoftwareEngineering
Spent 25 minutes wondering why my @Transactional was not rolling back on exception.

The service method threw an exception, but the data was still saved to the database. The logs showed the transaction committing anyway.

The problem: I was catching the exception inside the method.

@Transactional
public void saveUser(User user) {
    try {
        userRepository.save(user);
        throw new RuntimeException("Something went wrong");
    } catch (Exception e) {
        log.error("Error saving user", e);
    }
}

Spring only rolls back when the exception propagates out of the method. If you catch it inside, Spring thinks everything is fine and commits.

The fix: let the exception propagate, or use TransactionAspectSupport to mark the transaction rollback-only.

@Transactional
public void saveUser(User user) {
    try {
        userRepository.save(user);
        throw new RuntimeException("Something went wrong");
    } catch (Exception e) {
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        log.error("Error saving user", e);
    }
}

A small detail, but it can cause serious data integrity issues.

What transaction gotcha has caught you before?

#Java #SpringBoot #Transactions #Debugging #BackendDevelopment
A recent issue reminded me that performance optimizations can sometimes become production problems.

We had an API that:
1️⃣ Fetches initial details
2️⃣ Extracts IDs from the response
3️⃣ Makes another database call to fetch larger secondary data

To speed up step 3, parallel processing was introduced using a fixed thread pool. Sounds reasonable, until load testing began.

Under heavy traffic, thread creation kept increasing across instances until limits were hit, leading to:
⚠️ "Can't create new native thread"

The interesting part? The optimization worked for individual requests. But at scale, the resource model didn't. A request with a small number of IDs didn't always need dedicated worker threads, yet threads were still being allocated repeatedly under concurrent load.

The fix was moving to a shared, reusable thread pool with better resource control.

💡 My takeaway: code that is fast in isolation may fail under concurrency. When designing for performance, ask:
- How does this behave at 1 request?
- How does this behave at 1,000 requests?
- What resources grow with traffic?

Scalability is often less about speed and more about control.

#BackendEngineering #Java #PerformanceTesting #Scalability #Concurrency
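A minimal sketch of the fix, assuming one bounded, application-wide pool that every request borrows from instead of spawning its own workers. The names, pool size, and "detail" payloads are illustrative, not from the incident.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;

public class SharedPoolDemo {
    // One bounded pool for the whole application: thread count stays
    // fixed no matter how many concurrent requests arrive.
    private static final ExecutorService SHARED =
            Executors.newFixedThreadPool(4, r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // lets the JVM exit without explicit shutdown
                return t;
            });

    // Handles one "request": fetches secondary data for each id in parallel,
    // borrowing threads from the shared pool rather than creating its own.
    public static List<String> handleRequest(List<Integer> ids) {
        List<Future<String>> futures = ids.stream()
                .map(id -> SHARED.submit(() -> "detail-" + id))
                .collect(Collectors.toList());
        List<String> out = new ArrayList<>();
        try {
            for (Future<String> f : futures) out.add(f.get());
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
        return out;
    }

    public static void main(String[] args) {
        // Many concurrent "requests" reuse the same 4 threads.
        for (int r = 0; r < 100; r++) {
            handleRequest(List.of(1, 2, 3));
        }
        System.out.println(handleRequest(List.of(7)));
    }
}
```

The key property: resource usage is a function of the pool size you chose, not of the traffic you receive.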
I didn't expect duplicate data to become this tricky.

Recently, while working on a backend feature, I noticed something off: the same data was getting stored multiple times in the database, so fetches were returning duplicate records.

At first, I thought it was a one-time issue. But after checking further, it turned out to be happening consistently in certain cases.

The root cause: multiple requests were hitting the same flow, and there were no checks or validations to prevent duplicate inserts.

To fix this, I made a few changes:
- Added validation before inserting data
- Introduced unique constraints at the database level
- Handled edge cases where repeated requests could happen

After that, the duplicates stopped and the data became reliable.

It was a good reminder for me: relying only on application logic is not enough. Both validation and the database should enforce rules where it matters.

Sometimes clean data is not just about writing correct code. It's about designing the system to prevent mistakes.

#Java #BackendDevelopment #Database #SystemDesign #SpringBoot #LearningInPublic
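As a small illustration of "let the database enforce uniqueness", here is a toy sketch where ConcurrentHashMap.putIfAbsent plays the role of a UNIQUE constraint rejecting a repeated insert. All names are made up for the demo.

```java
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateInsertDemo {
    // Stands in for a table with a UNIQUE constraint on the business key:
    // putIfAbsent admits exactly one insert per key, even under concurrency,
    // just as the database would.
    private final ConcurrentHashMap<String, String> table = new ConcurrentHashMap<>();

    // Returns true only for the first request carrying this key;
    // repeated requests are rejected instead of creating duplicates.
    public boolean insert(String businessKey, String payload) {
        return table.putIfAbsent(businessKey, payload) == null;
    }

    public int rowCount() {
        return table.size();
    }

    public static void main(String[] args) {
        DuplicateInsertDemo db = new DuplicateInsertDemo();
        boolean first = db.insert("order-42", "data");
        boolean retry = db.insert("order-42", "data"); // duplicate request
        System.out.println(first + " " + retry + " rows=" + db.rowCount());
    }
}
```

An application-level "check then insert" has a race window between the check and the insert; an atomic operation (or a database constraint) closes it.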
🚨 One of the most dangerous Spring Boot traps: silent data loss with zero errors.

@Transactional on a private method does absolutely nothing. No exception. No warning. Just broken data.

❌ Broken:

@Service
public class OrderService {

    public void placeOrder(Order order) {
        saveOrder(order);
    }

    @Transactional // ← silently ignored!
    private void saveOrder(Order order) {
        orderRepo.save(order);
        auditRepo.save(new AuditLog(order));
    }
}

Why does this happen? Spring wraps beans in a proxy to intercept @Transactional calls. But private methods are invisible to the proxy, and a self-invocation like this.saveOrder(...) goes direct anyway, completely bypassing transaction management. Result → partial saves, missing audit logs, inconsistent state.

The fix: keep transactional methods public and call them through the proxy (i.e., from another bean, not via this).

✅ Fixed:

@Service
public class OrderService {

    @Transactional // ← proxy intercepts correctly
    public void saveOrder(Order order) {
        orderRepo.save(order);
        auditRepo.save(new AuditLog(order));
    }
}

What makes this a production nightmare:
→ Unit tests pass cleanly
→ Logs show no errors
→ Only caught when data is already missing

Spring won't warn you. The annotation just silently does nothing.

Save this before it saves you. 🔖

Know any other silent Spring Boot traps? Drop them below 👇

#SpringBoot #Java #BackendDevelopment #SoftwareEngineering #CleanCode