🔥 Day 7 — Atomic Classes (AtomicInteger): A Simple Fix for Concurrency Issues

I’ve seen this pattern quite often in Java code:

    int count = 0;
    public void increment() { count++; }

Looks correct… but breaks under concurrency.

👉 Because count++ is NOT atomic. It actually does:
- Read
- Increment
- Write

With multiple threads, updates get lost.

✅ A simple and efficient fix:

    AtomicInteger count = new AtomicInteger(0);
    public void increment() { count.incrementAndGet(); }

No synchronized. No explicit locks. Still thread-safe ✔

⚙ What makes Atomic classes powerful?
- They use CAS (Compare-And-Swap) internally
- They avoid blocking threads
- They perform better under high concurrency

💡 Where AtomicInteger works best
✔ Counters (requests, metrics, retries)
✔ Flags / simple shared state
✔ High-throughput systems

⚠ Where it’s NOT enough
❌ Multiple variables need to be updated together
❌ Complex business logic
❌ Transaction-like operations

💡 From experience: in one system, replacing synchronized counters with AtomicInteger reduced thread contention significantly under load. Small change. Big impact.

👉 Do you prefer Atomic classes or synchronized for counters?

#100DaysOfJavaArchitecture #Java #Concurrency #SoftwareArchitecture #Microservices
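A minimal, self-contained sketch of the difference (class name and thread counts are my own, not from the post): the plain int counter usually loses updates, the AtomicInteger never does.

    import java.util.concurrent.atomic.AtomicInteger;

    public class CounterDemo {
        static int plainCount = 0;                                // not thread-safe
        static final AtomicInteger atomicCount = new AtomicInteger(0);

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[8];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 100_000; j++) {
                        plainCount++;                             // read, increment, write: updates can be lost
                        atomicCount.incrementAndGet();            // single CAS-based atomic update
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) t.join();

            System.out.println("plain  = " + plainCount);         // usually less than 800000
            System.out.println("atomic = " + atomicCount.get());  // always 800000
        }
    }

If a single counter ever becomes a contention hotspot, java.util.concurrent.atomic.LongAdder is also worth a look: it trades a slightly more expensive read for better write scalability.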
More Relevant Posts
🚀 Java Series — Day 4: Thread Synchronization & Race Conditions

Multithreading boosts performance ⚡ But without control, it can break your application ❌

Today, I explored one of the most critical concepts in Java — Thread Synchronization.

💡 When multiple threads access shared data at the same time, you get a race condition, causing unpredictable and incorrect results.

🔍 What I Learned:
✔️ What a race condition is
✔️ Why thread safety is important
✔️ How synchronized ensures only one thread executes the critical section at a time
✔️ Why the critical section matters in multithreading

💻 Code Insight:

    class Counter {
        int count = 0;
        public synchronized void increment() { count++; }
    }

👉 Without synchronization → data inconsistency
👉 With synchronization → safe & accurate execution

🌍 Real-World Applications:
💰 Banking systems
👥 Multi-user applications
⚙️ Backend APIs handling concurrent requests

💡 Key Takeaway: Thread synchronization prevents race conditions and keeps your application correct, safe, and reliable in a multi-threaded environment.

📌 Next: ExecutorService & Thread Pools — writing scalable and optimized code 🔥

#Java #Multithreading #ThreadSafety #BackendDevelopment #JavaDeveloper #100DaysOfCode #CodingJourney
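A small runnable sketch, assuming the Counter class above (the thread pool and iteration counts are mine): with synchronized increments the final total is always exact.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    class Counter {
        private int count = 0;

        // Only one thread at a time can be inside these methods (the critical section).
        public synchronized void increment() { count++; }
        public synchronized int get() { return count; }
    }

    public class SyncDemo {
        public static void main(String[] args) throws InterruptedException {
            Counter counter = new Counter();
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (int i = 0; i < 4; i++) {
                pool.submit(() -> {
                    for (int j = 0; j < 50_000; j++) counter.increment();
                });
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            System.out.println(counter.get()); // 200000, no lost updates
        }
    }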
Continuing my recent posts on JVM internals and performance, today I’m sharing a look at Java 21 Virtual Threads (from Project Loom).

For a long time, Java handled concurrency using platform threads (OS threads)—which are powerful but expensive, especially for I/O-heavy applications. This led to complex patterns like thread pools, async programming, and reactive frameworks to achieve scalability.

With Virtual Threads, Java introduces a lightweight threading model where thousands (even millions) of threads can be managed efficiently.

👉 When a virtual thread performs a blocking I/O operation, the underlying carrier (platform) thread is released to do other work.

This brings Java closer to the efficiency of event-loop models (like in Node.js), while still allowing developers to write simple, synchronous code without callback-heavy complexity.

However, in real-world scenarios, especially when teams migrate from Java 8/11 to Java 21, it’s important to keep a few things in mind:
• Virtual Threads are not a silver bullet—they primarily improve I/O-bound workloads, not CPU-bound ones
• If the architecture is not aligned, you may not see significant latency improvements
• Legacy codebases often contain synchronized blocks or locking, which can lead to thread pinning and reduce the benefits of Virtual Threads

Project Loom took years to evolve because it required deep changes to the JVM, scheduling, and thread management—while preserving backward compatibility and Java’s simplicity.

Sharing a diagram that illustrates:
• Platform threads vs Virtual Threads
• Carrier thread behavior
• Pinning scenarios

Curious to hear—are you exploring Virtual Threads in your applications, or still evaluating? 👇

#Java #Java21 #VirtualThreads #ProjectLoom #Concurrency #Performance #SoftwareEngineering
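A hedged sketch (task count and sleep time are mine) of how cheap virtual threads make blocking code on Java 21: every task blocks, yet the small pool of carrier threads stays busy.

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class VirtualThreadsDemo {
        public static void main(String[] args) {
            // One new virtual thread per task; a blocking sleep releases the carrier thread.
            try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 100_000; i++) {
                    int id = i;
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofMillis(100)); // stands in for blocking I/O
                        return id;
                    });
                }
            } // close() waits for all submitted tasks to finish
        }
    }

Doing the same with 100,000 platform threads would exhaust memory long before the work completed; here the JVM multiplexes the tasks over a handful of carriers.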
🚀 Most Java developers think performance = better algorithms.

That’s incomplete. Real performance in Java often comes from what the JVM removes, not what you write.

👉 Escape Analysis (JVM optimization)

The JVM checks whether an object “escapes” a method or thread. If it doesn’t, the JVM can:
✨ Allocate it on the stack (not the heap)
✨ Remove synchronization (no locks needed)
✨ Eliminate the object entirely (scalar replacement)

Yes — your object might never exist at runtime.

💡 Example:

    public void process() {
        User u = new User("A", 25);
        int age = u.getAge();
    }

If u never escapes this method, the JVM can optimize it to:

    int age = 25;

❌ No object
❌ No GC pressure
❌ No overhead

📉 Where developers go wrong:
• Creating unnecessary shared state
• Overusing synchronization
• Forcing objects onto the heap

✅ What you should do instead:
• Keep objects local
• Avoid unnecessary sharing between threads
• Write code the JVM can optimize

🔥 Key Insight: Performance in Java isn’t just about writing efficient code. It’s about writing code the JVM can optimize. If you ignore this, you’re solving the wrong problem.

#Java #JVM #Performance #SoftwareEngineering #BackendDevelopment
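A hedged sketch of escape analysis in action (the User record, loop size, and flag suggestion are mine): u never escapes process(), so the JIT may scalar-replace it and skip the heap allocation entirely.

    public class EscapeDemo {
        // Simple value holder, assumed for illustration.
        record User(String name, int age) {}

        // u never leaves this method, so the JIT may eliminate the allocation (scalar replacement).
        static int process() {
            User u = new User("A", 25);
            return u.age();
        }

        public static void main(String[] args) {
            long sum = 0;
            for (int i = 0; i < 50_000_000; i++) {
                sum += process();            // hot loop gives the JIT a chance to compile and optimize
            }
            System.out.println(sum);
            // To observe the effect, compare allocation rate / GC activity (e.g. via -Xlog:gc or JFR)
            // between a normal run and one with -XX:-DoEscapeAnalysis; results depend on the JVM and warm-up.
        }
    }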
"Architecting Knowledge" - Java Wisdom Series
Post #17: Virtual Threads - Rethinking Concurrency 👇

Million threads. One JVM. Welcome to Project Loom.

Why This Matters:
Platform threads map 1:1 to OS threads - each consumes roughly 1 MB of stack memory, so you can create maybe 4,000-10,000 before your JVM dies. Virtual threads are JVM-managed and their stack memory is allocated dynamically on the heap - you can create millions.

When a virtual thread blocks on I/O, the JVM unmounts it from its carrier thread (a platform thread), letting that carrier run other virtual threads. This makes blocking I/O efficient again - no more callback hell.

BUT beware thread pinning: synchronized blocks prevent unmounting in Java 21-23 (fixed in Java 24). Use ReentrantLock for long blocking operations.

Key Takeaway:
Virtual threads aren't faster - they're cheaper and more scalable. Perfect for I/O-bound workloads (web servers, microservices, API calls). Don't pool them, and don't cache values in ThreadLocal aggressively. Write simple blocking code and let Loom handle the concurrency.

#Java #JavaWisdom #VirtualThreads #ProjectLoom #Concurrency #Java21

Are you still using thread pools for I/O-bound tasks? Time to go virtual!

All code examples on GitHub - bookmark for quick reference: https://lnkd.in/dJUx3Rd3
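A hedged sketch (class name and timings are mine) of the workaround mentioned above: guarding a long blocking operation with ReentrantLock instead of synchronized, so the virtual thread can still unmount on Java 21-23.

    import java.util.concurrent.locks.ReentrantLock;

    public class NoPinningDemo {
        private final ReentrantLock lock = new ReentrantLock();

        String fetch() throws InterruptedException {
            lock.lock();               // unlike a synchronized block, this does not pin the carrier thread
            try {
                Thread.sleep(200);     // stands in for a long blocking I/O call
                return "data";
            } finally {
                lock.unlock();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            NoPinningDemo service = new NoPinningDemo();
            Thread vt = Thread.ofVirtual().start(() -> {
                try {
                    System.out.println(service.fetch());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            vt.join();
        }
    }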
If you’ve worked on Java services that fan out to multiple dependencies, you probably know the real pain isn’t starting work in parallel. It’s everything after that.

One task fails. Others keep running. Cancellation becomes inconsistent. Cleanup ends up scattered across the code. The request lifecycle suddenly becomes harder to reason about than the actual business logic.

That’s exactly why structured concurrency finally caught my eye. In Java 21, StructuredTaskScope gives related concurrent work one scope, one failure policy, and one cleanup boundary.

It’s still a preview API, so this is not a “use it everywhere tomorrow” post. But for request-scoped orchestration, the model feels much cleaner than ad hoc futures.

I just published Part 1 of a new series covering:
- what structured concurrency is trying to fix
- how the Java 21 preview model works
- why join() and throwIfFailed() matter
- where it fits well, and where it does not

Article: https://lnkd.in/gyitBUVi

#Java #StructuredConcurrency #ProjectLoom #Java21 #Concurrency #BackendEngineering
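A hedged sketch of the shape this takes (the stub calls and record are mine; StructuredTaskScope is a preview API, so JDK 21 with --enable-preview is assumed): one scope, one failure policy, one cleanup boundary.

    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.StructuredTaskScope;

    public class FanOutDemo {
        record Profile(String user, String orders) {}

        // Stand-ins for real dependency calls.
        static String fetchUser() throws InterruptedException { Thread.sleep(100); return "user-42"; }
        static String fetchOrders() throws InterruptedException { Thread.sleep(150); return "3 orders"; }

        static Profile loadProfile() throws InterruptedException, ExecutionException {
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var user   = scope.fork(FanOutDemo::fetchUser);   // both subtasks run concurrently
                var orders = scope.fork(FanOutDemo::fetchOrders);

                scope.join()             // wait for both subtasks (or the first failure)
                     .throwIfFailed();   // propagate the failure; the sibling subtask gets cancelled

                return new Profile(user.get(), orders.get());
            }   // closing the scope guarantees no subtask outlives this method
        }

        public static void main(String[] args) throws Exception {
            System.out.println(loadProfile());
        }
    }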
The interesting timeout question is not whether a distributed system will hit deadlines. It will. The real question is what your code does next.

Do you fail the whole response? Do you return partial data? Do you stop unfinished work cleanly, or let it keep running after the caller is already gone?

That is what I wrote about in Part 2 of the structured concurrency series. In Java 21, `StructuredTaskScope` makes those choices much more explicit. You can model strict all-or-nothing timeouts, or return partial results when some sections are optional. The part I like is that cancellation and cleanup stop being scattered across the code.

This post covers:
- all-or-nothing timeout handling
- partial results with explicit missing sections
- why `joinUntil(...)` is only part of the design
- why `scope.shutdown()` matters when returning early
- what test cases are worth adding for timeout-sensitive endpoints

Article: https://lnkd.in/gWCm5UzB

#Java #StructuredConcurrency #ProjectLoom #Java21 #DistributedSystems #BackendEngineering
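A hedged sketch of the all-or-nothing variant (stub calls, timings, and the fallback string are mine; again a Java 21 preview API, so --enable-preview is assumed): `joinUntil(...)` enforces the deadline, and `scope.shutdown()` stops the unfinished work before we return early.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.concurrent.StructuredTaskScope;
    import java.util.concurrent.TimeoutException;

    public class DeadlineDemo {
        static String fastCall() throws InterruptedException { Thread.sleep(50);  return "fast"; }
        static String slowCall() throws InterruptedException { Thread.sleep(500); return "slow"; }

        static String handleRequest() throws Exception {
            try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                var a = scope.fork(DeadlineDemo::fastCall);
                var b = scope.fork(DeadlineDemo::slowCall);
                try {
                    scope.joinUntil(Instant.now().plus(Duration.ofMillis(200)));
                    scope.throwIfFailed();
                    return a.get() + " + " + b.get();   // everything finished before the deadline
                } catch (TimeoutException e) {
                    scope.shutdown();                   // cancel whatever is still running
                    return "deadline exceeded";         // all-or-nothing: fail the whole response
                }
            }   // close() still waits for the (now cancelled) subtasks, so nothing leaks past the caller
        }

        public static void main(String[] args) throws Exception {
            System.out.println(handleRequest());        // with these timings, the slow call misses the deadline
        }
    }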
📖 New Post: Java Memory Model Demystified: Stack vs. Heap

Where do your variables live? We explain the Stack, the Heap, and the Garbage Collector in simple terms.

#java #jvm #memorymanagement
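A tiny hedged sketch of the distinction the post covers (names are mine): locals and references live in the stack frame, objects live on the heap, and the Garbage Collector reclaims heap objects once nothing references them.

    public class StackVsHeapDemo {
        static void demo() {
            int x = 42;                      // primitive local: stored in demo()'s stack frame
            int[] data = new int[1_000];     // the array object lives on the heap;
            data[0] = x;                     // only the reference 'data' sits on the stack
        }   // when demo() returns, its frame (x, data) is popped;
            // the array becomes unreachable and the GC can reclaim it

        public static void main(String[] args) {
            demo();
        }
    }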
Most Java performance issues don’t show up in code reviews. They show up in object lifetimes.

Two pieces of code can look identical:
- same logic
- same complexity
- same output

But behave completely differently in production. Why? Because of how long objects live.

Example patterns:
- creating objects inside tight loops → short-lived → frequent GC
- holding references longer than needed → objects move to old gen
- caching “just in case” → memory pressure builds silently

Nothing looks wrong in the code. But at runtime:
- GC frequency increases
- pause times grow
- latency becomes unpredictable

And the worst part?
👉 It doesn’t fail immediately.
👉 It degrades slowly.

This is why some systems:
- pass load tests
- work fine initially
- then become unstable weeks later

Takeaway: In Java, performance isn’t just about what you do. It’s about how long your data stays alive while doing it.

#Java #JVM #Performance #Backend #SoftwareEngineering
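A hedged sketch (class names, sizes, and counts are mine) of two of the patterns above side by side: per-iteration garbage that drives frequent young-generation GC, and a never-evicted cache that quietly keeps memory alive.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class LifetimeDemo {
        // "Just in case" cache: entries are never evicted, so they stay reachable forever.
        static final Map<Integer, byte[]> CACHE = new HashMap<>();

        static int parse(String line) { return line.length(); }   // stand-in for real work

        public static void main(String[] args) {
            // Pattern 1: short-lived objects in a tight loop -> high allocation rate, frequent GC.
            long total = 0;
            for (int i = 0; i < 1_000_000; i++) {
                List<String> tmp = new ArrayList<>();   // a new list per iteration
                tmp.add("request-" + i);                // plus a new String per iteration
                total += parse(tmp.get(0));
            }

            // Pattern 2: long-lived references -> memory pressure builds silently (~100 MB retained here).
            for (int i = 0; i < 10_000; i++) {
                CACHE.put(i, new byte[10_240]);
            }

            System.out.println(total + " / cached entries: " + CACHE.size());
        }
    }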
Most Java developers have used ThreadLocal to pass context — user IDs, request IDs, tenant info — across method calls. It works fine with a few hundred threads. But with virtual threads in Java 21, "fine" becomes a memory problem fast.

With 1 million virtual threads, you get 1 million ThreadLocalMap instances — each holding mutable, heap-allocated state that GC has to clean up. And because ThreadLocal is mutable and global, silent overwrites like this are a real risk in large systems:

    userContext.set(userA);
    // ... deep somewhere ...
    userContext.set(userB); // overrides without warning

Java 21 introduces ScopedValue (as a preview API) — the right tool for virtual threads:

    ScopedValue.where(USER, userA).run(() -> {
        // USER is safely available here, immutably
    });

It's immutable, scoped to an execution block, requires no per-thread storage, and cleans itself up automatically. No more silent overrides. No memory bloat. No manual remove() calls.

In short: ThreadLocal was designed for a few long-lived threads. ScopedValue is designed for millions of short-lived virtual threads.

If you're building high-concurrency APIs with Spring Boot + virtual threads and still using ThreadLocal for request context — this switch can meaningfully reduce your memory footprint and make your code safer.

Are you already using ScopedValue in production, or still on ThreadLocal? Would love to hear what's holding teams back.

#Java #Java21 #VirtualThreads #ProjectLoom #BackendEngineering #SpringBoot #SoftwareEngineering
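A hedged, runnable sketch of that pattern (the User record and handler are mine; ScopedValue is still in preview on JDK 21, so --enable-preview is assumed): the binding exists only inside the where(...) scope and is immutable there.

    public class ScopedValueDemo {
        record User(String id) {}

        // One immutable binding per execution scope; nothing per-thread to clean up.
        private static final ScopedValue<User> CURRENT_USER = ScopedValue.newInstance();

        static void handleRequest() {
            // Anywhere below the bound call frame, CURRENT_USER.get() returns the same user.
            System.out.println("handling request for " + CURRENT_USER.get().id());
        }

        public static void main(String[] args) throws InterruptedException {
            Thread vt = Thread.ofVirtual().start(() ->
                ScopedValue.where(CURRENT_USER, new User("user-a")).run(ScopedValueDemo::handleRequest)
            );
            vt.join();
            // Outside the where(...) block the value is simply unbound: no silent overrides, no manual remove().
        }
    }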
In Java, we often hear that object creation is cheap and the JVM is optimized for it. That’s true — but only up to a point.

In high-throughput backend systems, excessive object creation becomes a hidden performance issue.

What happens in real systems:
- Large numbers of short-lived objects are created per request
- Memory allocation rate increases significantly
- Garbage collection runs more frequently
- Latency becomes inconsistent due to GC activity

Individually, object creation is fast. But at scale, it creates memory pressure that directly impacts performance.

This is especially noticeable in:
- High-traffic REST APIs
- Data transformation layers
- Logging and serialization-heavy flows

The key learning for me was to be mindful of the object lifecycle, not just the logic.

Good Java performance isn’t just about efficient algorithms. It’s about how efficiently the JVM can manage the memory your code produces.

#Java #JVM #PerformanceTuning #BackendEngineering #Microservices