🚀 Java Series — Day 4: Thread Synchronization & Race Condition

Multithreading boosts performance ⚡ But without control, it can break your application ❌

Today, I explored one of the most critical concepts in Java — Thread Synchronization. 💡 When multiple threads access shared data at the same time, it can lead to a Race Condition, causing unpredictable and incorrect results.

🔍 What I Learned:
✔️ What a race condition is
✔️ Why thread safety is important
✔️ How synchronized ensures only one thread at a time executes a critical section
✔️ The importance of the critical section in multithreading

💻 Code Insight:

class Counter {
    private int count = 0;
    public synchronized void increment() {
        count++;
    }
}

👉 Without synchronization → data inconsistency
👉 With synchronization → safe & accurate execution

🌍 Real-World Applications:
💰 Banking systems
👥 Multi-user applications
⚙️ Backend APIs handling concurrent requests

💡 Key Takeaway: Thread synchronization prevents race conditions and ensures your application runs correctly, safely, and reliably in a multithreaded environment.

📌 Next: ExecutorService & Thread Pools — writing scalable and optimized code 🔥

#Java #Multithreading #ThreadSafety #BackendDevelopment #JavaDeveloper #100DaysOfCode #CodingJourney
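A runnable sketch of the race described above (class and method names beyond `Counter.increment` are my own, for illustration): two threads each bump a shared counter 100,000 times, and `synchronized` on the critical section guarantees no increment is lost.

```java
class Counter {
    private int count = 0;

    // synchronized makes the read-increment-write a critical section:
    // only one thread at a time can hold this object's monitor
    public synchronized void increment() {
        count++;
    }

    public synchronized int getCount() {
        return count;
    }
}

class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // With synchronization this is always 200000; remove `synchronized`
        // and some increments are silently lost under contention
        System.out.println(counter.getCount());
    }
}
```

Deleting `synchronized` from `increment()` makes the final count nondeterministic, which is exactly the race condition the post describes.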
While exploring multithreading in Java, I recently spent some time understanding race conditions. A race condition happens when multiple threads access and modify the same shared data at the same time, and the final result depends on the order in which the threads execute. Without proper synchronisation, this can lead to unexpected or inconsistent outcomes in an application.

In backend systems, this becomes important when multiple requests update shared resources such as counters, account balances, or cached data. Understanding race conditions helps developers design safer concurrent code using techniques like synchronisation, locks, or atomic operations.

When working with shared data in multithreaded code, what practices do you usually follow to prevent race conditions?

#Java #JavaDeveloper #Multithreading #BackendDevelopment #JavaInterviewPreparation #DeveloperLearning
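Of the techniques mentioned (synchronisation, locks, atomic operations), the atomic route for a simple shared counter can be sketched like this (a minimal illustration; the class name is my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    // AtomicInteger performs the read-modify-write as a single atomic
    // compare-and-swap, so no explicit lock is needed for this counter
    private final AtomicInteger count = new AtomicInteger(0);

    void increment() {
        count.incrementAndGet();
    }

    int get() {
        return count.get();
    }
}
```

For a single shared counter this is usually simpler and cheaper than a `synchronized` block, because threads never block; on contention they just retry the compare-and-swap.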
🚀 Practiced user-defined exception handling in Java today by building a small banking scenario.

I created a simple program where:
- A user starts with a balance
- Chooses an amount to withdraw
- The system checks if the withdrawal is valid

But instead of relying on Java's default exceptions, I tried something different 👇
👉 I created my own exception: InsufficientBalance

So whenever the withdrawal amount exceeds the available balance, the program throws my custom exception instead of letting the system handle it blindly.

💡 What clicked for me while doing this:
- Exceptions are not just "errors" — they help define business rules
- Using throw makes your program logic more intentional
- Custom exceptions make the code more readable and meaningful
- Even simple problems (like withdrawal) can be modeled in a structured way

Also explored how Java represents exceptions using toString() to get proper error details.

It's a small program, but it helped me connect:
👉 custom exception creation
👉 real-world validation logic
👉 structured error handling

Still learning and experimenting with Java, but this felt like a step closer to writing more real-world-oriented code.

Curious — in real applications, how do you usually design custom exceptions? 🤔

#Java #ExceptionHandling #ProgrammingJourney #LearningInPublic #BTech
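The scenario above might look roughly like this (a sketch under my own naming assumptions; only the exception name InsufficientBalance comes from the post):

```java
// Custom checked exception encoding a business rule, not a system error
class InsufficientBalance extends Exception {
    InsufficientBalance(String message) {
        super(message);
    }
}

class Account {
    private double balance;

    Account(double openingBalance) {
        this.balance = openingBalance;
    }

    // Throws our own exception when the business rule is violated
    void withdraw(double amount) throws InsufficientBalance {
        if (amount > balance) {
            throw new InsufficientBalance(
                "Withdrawal of " + amount + " exceeds balance of " + balance);
        }
        balance -= amount;
    }

    double getBalance() {
        return balance;
    }
}
```

Calling `withdraw` with an amount above the balance raises `InsufficientBalance` with a message built from the actual values, which is what makes the error readable when it surfaces via `toString()`.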
🚀 Java Series — Day 11: Encapsulation (Advanced Java Concept)

Good developers write good code… Great developers protect their code 👀

Today, I explored Encapsulation in Java — a powerful concept used to secure data and control access in applications.

🔍 What I Learned:
✔️ Encapsulation = wrapping data + controlling access
✔️ Use of private variables (data hiding)
✔️ Getters & setters for controlled access
✔️ Improves security, flexibility & maintainability

💻 Code Insight:

class BankAccount {
    private double balance; // hidden data

    public BankAccount(double initialBalance) {
        this.balance = initialBalance;
    }

    public double getBalance() {
        return balance;
    }
}

⚡ Why is Encapsulation Important?
👉 Protects sensitive data
👉 Prevents unauthorized access
👉 Improves code flexibility
👉 Hides internal implementation

🌍 Real-World Examples:
💳 Banking systems (secure transactions)
📱 Mobile apps (user data protection)
🚗 Vehicles (controlled operations)

💡 Key Takeaway: Encapsulation helps you build secure, maintainable, and reliable applications by controlling access to data 🔐

📌 Next: Polymorphism & Runtime Behavior 🔥

#Java #OOPS #Encapsulation #JavaDeveloper #BackendDevelopment #CodingJourney #100DaysOfCode #LearnInPublic
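To show the "controlled access" side of the post, here is the same class extended with a mutator that enforces an invariant (the `deposit` method is my addition, not from the original code):

```java
class BankAccount {
    private double balance; // hidden data: outside code cannot touch it directly

    public BankAccount(double initialBalance) {
        this.balance = initialBalance;
    }

    public double getBalance() {
        return balance;
    }

    // Controlled mutation: the class, not the caller, enforces the rule
    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("Deposit must be positive");
        }
        balance += amount;
    }
}
```

Because `balance` is private, the only way to change it is through `deposit`, so the invariant "balance only changes by positive deposits" can never be bypassed by callers.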
#Post11

In the previous post (https://lnkd.in/dynAvNrN), we saw how to create threads in Java. Now let's talk about a problem. If creating threads is so simple… why don't we just create a new thread every time we need one?

Let's say we are building a backend system. For every incoming request/task, we create a new thread:

new Thread(() -> {
    // process request
}).start();

This looks simple, but the approach breaks very quickly in real systems because of the problems below.

Problem 1: Thread creation is expensive
Creating a thread is not just creating an object. It involves:
• Allocating memory (stack)
• Registering with the OS
• Scheduling overhead
Creating thousands of threads = performance degradation.

Problem 2: Too many threads → too much context switching
We already saw this earlier (https://lnkd.in/dYG3v-vb). More threads does NOT mean more performance. Instead:
• The CPU spends more time switching
• Less time doing actual work

Problem 3: No control over the thread lifecycle
When you create threads manually:
• There is no limit on the number of threads
• No reuse
• Failures are hard to manage
This quickly becomes difficult to manage as the system grows.

So what's the solution? Instead of creating threads manually, we use the Executor Framework. In simple words, think of it like this: earlier, we were manually hiring a worker (thread) for every task. With an Executor, we have a team of workers (a thread pool), and we just assign tasks to them.

Key idea
Instead of creating a new thread for every task, we submit tasks to a pool of reusable threads. This is exactly what Java provides via the Executor Framework.

Key takeaway
Manual thread creation works for learning, but does not scale in real-world systems. Thread pools help:
• Control the number of threads
• Reduce overhead
• Improve performance
We no longer manage threads directly — we delegate that responsibility to the Executor Framework.

In the next post, we'll see how the Executor Framework works and how to use it in Java.

#Java #Multithreading #Concurrency #BackendDevelopment #SoftwareEngineering
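The "team of workers" idea can be sketched in a few lines (a minimal example with names of my own choosing; a pool of 4 reusable threads handles any number of submitted tasks):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class PoolDemo {
    // Submits `tasks` jobs to a fixed pool of 4 reusable worker threads
    // and returns how many completed, instead of spawning one thread per job.
    static int runTasks(int tasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                completed.incrementAndGet(); // stands in for real work
            });
        }
        pool.shutdown();                            // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS); // wait for running tasks
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10) + " tasks ran on a pool of 4 threads");
    }
}
```

However many tasks we submit, at most 4 threads ever exist: the pool bounds thread count, reuses workers, and absorbs the creation cost once.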
Most Java performance issues don't show up in code reviews.
They show up in object lifetimes.

Two pieces of code can look identical:
• same logic
• same complexity
• same output
But behave completely differently in production.

Why? Because of how long objects live.

Example patterns:
• creating objects inside tight loops → short-lived → frequent GC
• holding references longer than needed → objects move to the old generation
• caching "just in case" → memory pressure builds silently

Nothing looks wrong in the code. But at runtime:
• GC frequency increases
• pause times grow
• latency becomes unpredictable

And the worst part?
👉 It doesn't fail immediately.
👉 It degrades slowly.

This is why some systems:
• pass load tests
• work fine initially
• then become unstable weeks later

Takeaway: In Java, performance isn't just about what you do. It's about how long your data stays alive while doing it.

#Java #JVM #Performance #Backend #SoftwareEngineering
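The "objects inside tight loops" pattern is easy to demonstrate: both methods below return the same string, but the first allocates a new String on every iteration while the second reuses one buffer (names are my own, for illustration):

```java
class LoopAlloc {
    // Naive: each `s + p` builds a brand-new String object, so n iterations
    // create n short-lived garbage objects for the GC to collect
    static String concatNaive(String[] parts) {
        String s = "";
        for (String p : parts) {
            s = s + p;
        }
        return s;
    }

    // Same result, one reusable buffer: far fewer allocations, far less GC work
    static String concatReuse(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```

Identical output, identical-looking logic, very different allocation profiles at runtime, which is exactly the post's point.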
🚀 Solving a Critical Memory Retention Issue After Java 21 Migration

Recently, I worked on resolving a critical memory retention issue in a high-throughput document processing microservice following a Java 21 & Spring Boot 3 migration. This service handles large document uploads, multipart files, and link-based document ingestion under heavy concurrent load. Post-migration, we observed abnormally high heap memory retention, increasing GC pressure and impacting runtime stability.

🔍 The Challenge
• Extremely high retained heap memory under load
• Increased GC pressure and latency
• Memory growth proportional to concurrent requests
• Heap dump analysis showed request-level observation objects as dominant GC roots

🧠 Root Cause Identified
Spring Boot 3 introduced implicit Micrometer Observability instrumentation:
• Security filter chain wrapped with observations
• Entire HTTP request lifecycle instrumented
• ~90+ objects allocated per request
• No metrics backend configured, leading to memory overhead without any observability benefit
Under heavy traffic, these allocations accumulated and caused significant heap retention.

🛠️ Resolution Implemented
• Disabled observation wrapping in the Spring Security filter chain
• Disabled servlet-level HTTP observation
• Injected ObservationRegistry.NOOP
• Restored lightweight request-processing behavior

📈 Outcome
✅ Eliminated per-request observation allocations
✅ Significantly reduced heap usage & GC pressure
✅ Improved runtime stability and throughput
✅ Zero functional impact

💡 Key Takeaway
Major framework upgrades can introduce implicit behavioral changes that only surface under load. This experience reinforced the importance of:
• Load testing
• Heap profiling
• Deep root cause analysis
• Performance-focused engineering

Solving problems like these at scale is always rewarding, especially when they improve system stability, performance, and reliability.
#Java21 #SpringBoot3 #Microservices #PerformanceEngineering #MemoryOptimization #SoftwareEngineering #BackendEngineering #Scalability #Java #SystemDesign #TechLeadership
In Java, we often hear that object creation is cheap and the JVM is optimized for it. That's true, but only up to a point. In high-throughput backend systems, excessive object creation becomes a hidden performance issue.

What happens in real systems:
• Large numbers of short-lived objects are created per request
• The memory allocation rate increases significantly
• Garbage collection runs more frequently
• Latency becomes inconsistent due to GC activity

Individually, object creation is fast. But at scale, it creates memory pressure that directly impacts performance. This is especially noticeable in:
• High-traffic REST APIs
• Data transformation layers
• Logging- and serialization-heavy flows

The key learning for me was to be mindful of object lifecycles, not just logic. Good Java performance isn't just about efficient algorithms; it's about how efficiently the JVM can manage the memory your code produces.

#Java #JVM #PerformanceTuning #BackendEngineering #Microservices
📈 Does Java really use too much memory?

It's a common myth, but modern Java tells a different story. With improvements like:
✔️ Low-latency garbage collectors (ZGC, Shenandoah)
✔️ Lightweight virtual threads (Project Loom)
✔️ Compact object headers (JEP 450)
✔️ A container-aware JVM & Class Data Sharing

Java today is far more memory-efficient, scalable, and optimized than before.

💡 The real issue often isn't Java; it's:
• Unbounded caches
• Poor object design
• Memory leaks
• Holding unnecessary references

👉 In short: Java isn't memory-hungry, it's memory-aware. If your app is consuming too much RAM, start profiling your code before blaming the JVM.

#Java #BackendDevelopment #Performance #JVM #SoftwareEngineering
Continuing my recent posts on JVM internals and performance, today I'm sharing a look at Java 21 Virtual Threads (from Project Loom).

For a long time, Java handled concurrency using platform threads (OS threads), which are powerful but expensive, especially for I/O-heavy applications. This led to complex patterns like thread pools, async programming, and reactive frameworks to achieve scalability.

With Virtual Threads, Java introduces a lightweight threading model where thousands (even millions) of threads can be managed efficiently.
👉 When a virtual thread performs a blocking I/O operation, the underlying carrier (platform) thread is released to do other work.

This brings Java closer to the efficiency of event-loop models (like in Node.js), while still allowing developers to write simple, synchronous code without callback-heavy complexity.

However, in real-world scenarios, especially when teams migrate from Java 8/11 to Java 21, it's important to keep a few things in mind:
• Virtual Threads are not a silver bullet; they primarily improve I/O-bound workloads, not CPU-bound ones
• If the architecture is not aligned, you may not see significant latency improvements
• Legacy codebases often contain synchronized blocks or locking, which can lead to thread pinning and reduce the benefits of Virtual Threads

Project Loom took years to evolve because it required deep changes to the JVM, scheduling, and thread management, while preserving backward compatibility and Java's simplicity.

Sharing a diagram that illustrates:
• Platform threads vs Virtual Threads
• Carrier thread behavior
• Pinning scenarios

Curious to hear — are you exploring Virtual Threads in your applications, or still evaluating? 👇

#Java #Java21 #VirtualThreads #ProjectLoom #Concurrency #Performance #SoftwareEngineering
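A minimal virtual-thread sketch of the blocking-I/O case described above (requires Java 21+; names are my own, and the sleep stands in for blocking I/O):

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

class VirtualDemo {
    // Runs `tasks` blocking jobs, one virtual thread per task. While a task
    // sleeps, its carrier (platform) thread is freed to run other virtual
    // threads, so thousands of tasks need only a handful of OS threads.
    static int runBlockingTasks(int tasks) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(10)); // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // try-with-resources: close() waits for all submitted tasks
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlockingTasks(10_000) + " blocking tasks completed");
    }
}
```

Creating 10,000 platform threads for this would be prohibitively expensive; 10,000 virtual threads are routine, because blocked virtual threads consume no OS thread.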
Most Java developers have used ThreadLocal to pass context — user IDs, request IDs, tenant info — across method calls. It works fine with a few hundred threads. But with virtual threads in Java 21, "fine" becomes a memory problem fast.

With 1 million virtual threads, you get 1 million ThreadLocalMap instances, each holding mutable, heap-allocated state that GC has to clean up. And because ThreadLocal is mutable and global, silent overwrites like this are a real risk in large systems:

userContext.set(userA);
// ... deep somewhere ...
userContext.set(userB); // overrides without warning

Java 21 introduces ScopedValue (as a preview feature) — the right tool for virtual threads:

ScopedValue.where(USER, userA).run(() -> {
    // USER is safely available here, immutably
});

It's immutable, scoped to an execution block, requires no per-thread storage, and cleans itself up automatically. No more silent overrides. No memory bloat. No manual remove() calls.

In short: ThreadLocal was designed for a few long-lived threads. ScopedValue is designed for millions of short-lived virtual threads.

If you're building high-concurrency APIs with Spring Boot + virtual threads and still using ThreadLocal for request context, this switch can meaningfully reduce your memory footprint and make your code safer.

Are you already using ScopedValue in production, or still on ThreadLocal? Would love to hear what's holding teams back.

#Java #Java21 #VirtualThreads #ProjectLoom #BackendEngineering #SpringBoot #SoftwareEngineering