Java Virtual Threads changed everything I thought I knew about concurrency.

For years, we accepted the "thread-per-request is expensive" rule as gospel. Spawn too many threads → CPU thrash → app slows down. We worked around it with thread pools, reactive programming, async callbacks... It worked. But the complexity cost was brutal.

Then Java 21 dropped Virtual Threads. Here's what actually blew my mind 🤯

The old model: 1 platform thread = 1 OS thread = heavy, limited, expensive.

Virtual threads: Millions of lightweight threads, mounted on carrier threads, managed by the JVM itself — not the OS. Your blocking I/O call? The JVM unmounts the virtual thread, frees the carrier thread for other work, and remounts it when data is ready.

Zero reactive boilerplate. Zero callback hell. Just simple, readable blocking code — that scales.

3 things I wish someone told me earlier:
→ Virtual threads are NOT faster for CPU-bound tasks. They shine for I/O-bound workloads.
→ Don't pool virtual threads. They're cheap — just create them.
→ Synchronized blocks can still pin virtual threads to carrier threads. Use ReentrantLock instead.

Java didn't just patch concurrency. It rethought it.

Are you using Virtual Threads in production yet? What's holding you back?

#Java #Java21 #VirtualThreads #Concurrency #Backend
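The unmount-on-blocking behavior described above can be sketched in a few lines. This is a minimal illustration, not production code: the class name, task count, and sleep duration are all mine. `Executors.newVirtualThreadPerTaskExecutor()` and the try-with-resources `close()` semantics are standard Java 21 API.

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {

    // Run `tasks` blocking jobs, one virtual thread each, and return how
    // many completed. No pool sizing, no tuning, no callbacks.
    static int runBlockingTasks(int tasks) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(() -> {
                    try {
                        // Blocking call: the JVM unmounts this virtual thread
                        // from its carrier, freeing the carrier for other work.
                        Thread.sleep(Duration.ofMillis(50));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(runBlockingTasks(10_000) + " tasks completed");
    }
}
```

Ten thousand concurrent sleeps finish in roughly the time of one, because the carriers are never blocked.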
🚀 Everyone talks about frameworks. Spring Boot. Micronaut. Quarkus.

But the real revolution happened inside the JVM: Virtual Threads in Java 21.

For years the model looked like this:
1 request → 1 expensive OS thread

Which meant:
• Limited scalability
• Thread pool management
• Complex async programming

Now Java gives us a new model:
1 request → 1 lightweight virtual thread

And the impact is huge. With Virtual Threads you can:
• Handle millions of concurrent tasks
• Write simple blocking code
• Avoid complex reactive pipelines
• Build cleaner microservice architectures

This is one of the biggest concurrency improvements in Java history.

Java isn't slowing down. It's evolving.

#Java #Java21 #VirtualThreads #JVM #BackendDevelopment #SoftwareArchitecture #Concurrency
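The "1 request → 1 virtual thread" model above can be sketched with the `Thread.ofVirtual()` builder from Java 21. The class name and request count here are illustrative; each simulated "request" just bumps a counter.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OneThreadPerRequest {

    // 1 request -> 1 virtual thread: Thread.ofVirtual() (Java 21+) makes
    // thread creation cheap enough that pooling is unnecessary.
    static int handleRequests(int requests) throws InterruptedException {
        AtomicInteger handled = new AtomicInteger();
        Thread[] threads = new Thread[requests];
        for (int i = 0; i < requests; i++) {
            // start() creates AND starts a new virtual thread per request
            threads[i] = Thread.ofVirtual().start(handled::incrementAndGet);
        }
        for (Thread t : threads) {
            t.join();
        }
        return handled.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(handleRequests(100_000) + " requests handled");
    }
}
```

Starting 100,000 platform threads this way would exhaust most machines; 100,000 virtual threads is routine.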
Creating threads manually works. But in real applications? It doesn't scale.

Why? Because:
• Creating threads is expensive
• Too many threads → memory issues
• Too few threads → underutilized CPU

Professionals use Thread Pools. In Java, that's done using ExecutorService:

```java
import java.util.concurrent.*;

ExecutorService executor = Executors.newFixedThreadPool(3);
executor.execute(() -> {
    System.out.println("Task running on " + Thread.currentThread().getName());
});
executor.shutdown();
```

What just happened? Instead of creating new threads every time:
• A fixed number of threads is created
• Tasks are assigned to them
• Threads are reused

This improves:
• Performance
• Resource management
• Scalability

Why Thread Pools Matter

Without thread pools:
• You risk system overload
• Thread creation overhead increases
• Performance becomes unstable

With thread pools:
• Controlled concurrency
• Better CPU utilization
• Predictable behavior

Bonus: submit() vs execute()
execute() → no return value
submit() → returns a Future

```java
Future<Integer> result = executor.submit(() -> 10 + 20);
System.out.println(result.get()); // 30
```

Now you're not just running threads. You're managing tasks professionally.

Today was about:
• Why raw threads aren't enough
• What thread pools are
• How ExecutorService works

Concurrency at scale needs structure. Thread pools bring that structure.

#Java #Concurrency #ExecutorService #Multithreading #SoftwareEngineering #ThreadPool #LearningInPublic
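Putting the pieces above together, here is one self-contained sketch of the full lifecycle: a fixed pool, `submit()` returning Futures, and a graceful shutdown. The class name, pool size, and workload (summing squares) are illustrative choices, not anything the post prescribes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ThreadPoolDemo {

    // Submit `tasks` jobs to a fixed pool of 3 threads and sum the results.
    // The 3 worker threads are created once and reused for every task.
    static int sumSquares(int tasks) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= tasks; i++) {
            final int n = i;
            futures.add(executor.submit(() -> n * n)); // submit() returns a Future
        }
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get(); // blocks until that task's result is ready
        }
        executor.shutdown();                              // stop accepting new tasks
        executor.awaitTermination(10, TimeUnit.SECONDS);  // wait for in-flight work
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumSquares(10)); // 1^2 + 2^2 + ... + 10^2 = 385
    }
}
```

The `shutdown()` + `awaitTermination()` pair is the polite way to drain a pool; without it, non-daemon worker threads can keep the JVM alive.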
👉🏻 Call by reference: a convention where the caller passes an address for the actual parameter to the callee, rather than a value. If the actual parameter is a variable, then changing the formal parameter's value also changes the actual's value. If the actual parameter resides in memory, the caller passes its memory address. If the actual parameter is an expression, the caller evaluates the expression, stores its value into the caller's local data area, and passes the address of that location. Values kept in registers and constants are handled in the same way as expressions. Inside the callee, each reference to a formal parameter needs an extra level of indirection.

Call by reference differs from call by value in two critical ways. First, if the caller passes a variable x as a call-by-reference actual parameter bound to y in the callee, then any change to y is also a change to x. Second, if the callee can also access x directly, then the same storage has two names inside the callee, which can lead to counterintuitive behavior.

👉🏻 Call by value: a method of passing arguments to a function where a copy of the actual parameter's value is made in memory and passed to the function's formal parameter. Because the function operates on a copy, any modifications made inside the function do not affect the original variable in the caller.

#AnandKumar Buddarapu #java
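Since the post is tagged #java, it's worth noting that Java itself is strictly call by value: object references are copied, but the copy still points at the same object. A short sketch (class and method names are mine):

```java
public class CallByValueDemo {

    // Reassigning the formal parameter does NOT affect the caller's
    // variable: Java copied the int value into `n`.
    static void reassign(int n) {
        n = 99;
    }

    // The reference itself is copied, but both copies point at the same
    // array object, so mutations through it ARE visible to the caller.
    static void mutate(int[] arr) {
        arr[0] = 99;
    }

    public static void main(String[] args) {
        int x = 1;
        reassign(x);
        System.out.println(x); // still 1: the callee got a copy

        int[] a = {1};
        mutate(a);
        System.out.println(a[0]); // 99: copied reference, same object
    }
}
```

This is the classic trap: Java "passes references by value", which is still call by value, never call by reference.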
Day 46 — LeetCode Progress (Java)

Problem: Missing Number
Required: Given an array containing n distinct numbers taken from the range [0, n], return the missing number.

Idea: Use the difference between the expected sum of numbers from 0 to n and the actual array values to find the missing number. Instead of computing full sums, we update the result incrementally.

Approach:
• Initialize a variable res with the value n (the length of the array).
• Traverse the array from index 0 to n-1.
• For each index i, update the result using: res += i - nums[i]
• This effectively balances the expected numbers (0..n) against the numbers present in the array.
• The remaining value in res after the loop is the missing number.

Time Complexity: O(n)
Space Complexity: O(1)

#LeetCode #DSA #Java #Arrays #Algorithms #CodingJourney #100DaysOfCode
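The steps above translate directly into code. This sketch follows the incremental-balance approach described in the post; the class name is mine.

```java
public class MissingNumber {

    // nums contains n distinct values from [0, n]; exactly one is missing.
    // Start res at n, then for each index i add (i - nums[i]): every value
    // present in the array cancels against an index, leaving only the
    // missing number.
    static int missingNumber(int[] nums) {
        int res = nums.length;          // accounts for the value n itself
        for (int i = 0; i < nums.length; i++) {
            res += i - nums[i];
        }
        return res;
    }

    public static void main(String[] args) {
        System.out.println(missingNumber(new int[]{3, 0, 1})); // 2
    }
}
```

For nums = {3, 0, 1}: res starts at 3, then 3 + (0-3) + (1-0) + (2-1) = 2, the missing number. Using the incremental update also avoids the (mostly theoretical) overflow risk of computing n·(n+1)/2 up front.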
Stop treating Virtual Threads like OS Threads. It's killing your scalability.

When designing the Exeris Kernel (a zero-copy, Java 26+ runtime), I made a controversial architectural decision: I completely banned ThreadLocal from the codebase.

Why? Because in a high-density environment where you spawn 1,000,000 Virtual Threads using Structured Concurrency, InheritableThreadLocal becomes a performance serial killer. Every child task forces the JVM to clone maps, triggering an avalanche of garbage collection pauses. You lose the very scalability Loom was supposed to give you.

We need a paradigm shift: from "Thread-bound state" to "Scope-bound state."

By replacing ThreadLocal entirely with JEP 506 (Scoped Values), we achieved:
• O(1) constant-time context inheritance
• Zero memory leaks (lexically bounded lifecycle)
• True immutability for security contexts

I wrote a deep-dive forensic analysis on why Java 8 patterns don't work in Java 26, and how Exeris uses the "Invisible Wall" pattern to route 1M streams safely.

Check out the full article here: https://lnkd.in/dbEBWyH5

Would you ban ThreadLocal in your next project? Let's debate.

#Java #ProjectLoom #VirtualThreads #SoftwareArchitecture #Performance #DeepTech #Exeris
Engineering at the L0 level requires more than just adopting new APIs — it requires discarding legacy habits that kill performance. At Exeris, we are building a zero-copy runtime for the next decade of cloud computing. Banning ThreadLocal wasn't just a choice; it was a necessity to achieve true 1-VT-per-Stream density without the massive memory tax of the past. Our founder Arkadiusz Przychocki breaks down the forensic reasons why we moved to ScopedValues to preserve L1/L2 cache locality and eliminate GC pressure at scale. This is how we define "Zero-Waste Compute".
Java 21 Virtual Threads vs Platform Threads: A Simple Calculation for Microservices

Most posts explain what virtual threads are. But let's look at why they win in microservice architectures, using simple math.

1️⃣ Typical microservice request: 250 ms total (240 ms waiting for DB/API + 10 ms CPU).
2️⃣ With Platform Threads (pool = 200) → max concurrency = 200 requests. Throughput ≈ 200 / 0.25 = 800 req/sec.
3️⃣ With Virtual Threads, you can run 10,000 concurrent requests, because when a request waits for DB/API I/O, the virtual thread unmounts from the OS (carrier) thread, freeing it to execute another request instead of staying blocked.
4️⃣ Throughput ≈ 10,000 / 0.25 = 40,000 req/sec.

Because the same small set of OS threads keeps executing new virtual threads while others are waiting, CPU resources stay busy instead of idle, dramatically increasing effective throughput.

➡️ Same hardware, ~50× more concurrent requests, simply because virtual threads release the OS thread during I/O wait, allowing others to run.

#Java21 #VirtualThreads #Microservices #ProjectLoom
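The arithmetic above is an application of Little's law (throughput = concurrency / latency), and it is simple enough to check in code. The class name and numbers mirror the post's example; real throughput would of course also be capped by CPU, DB connections, and downstream limits.

```java
public class ThroughputMath {

    // Little's-law style estimate: sustained throughput (req/sec) given a
    // fixed concurrency limit and per-request latency in seconds.
    static double throughput(int concurrentRequests, double latencySeconds) {
        return concurrentRequests / latencySeconds;
    }

    public static void main(String[] args) {
        double platform = throughput(200, 0.25);     // pool of 200 threads
        double virtual  = throughput(10_000, 0.25);  // 10,000 virtual threads
        System.out.println("Platform pool:   " + platform + " req/sec");
        System.out.println("Virtual threads: " + virtual + " req/sec");
        System.out.println("Gain: " + (virtual / platform) + "x");
    }
}
```

With 250 ms latency, 200 threads cap you at 800 req/sec; 10,000 concurrent virtual threads lift the cap to 40,000 req/sec, a 50× gain from concurrency alone.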
Java 21 Generational ZGC: The End of the 1ms Barrier in HFT

Most people talk about Java 21's ZGC and say, "Look, sub-1ms pauses!" But in HFT, we know there is no such thing as a free lunch. When you move from G1 to Generational ZGC, you aren't just changing a collector; you are changing how your CPU interacts with memory.

Here is what I'm seeing from an engineering perspective:

1. The "Load Barrier" Tax
ZGC uses Colored Pointers. Every time your code accesses an object reference, a "Load Barrier" kicks in to check the pointer's metadata bits.
The Reality: This isn't free. In high-throughput loops, you might see a 2-3% decrease in raw throughput compared to G1.
The Trade-off: I'll take a 3% throughput hit any day if it means my p999 stays flat at 500μs during a market surge.

2. L3 Cache Pressure
Because ZGC manipulates pointer bits, it can slightly increase L3 cache misses. In legacy ZGC (non-generational), the "tax" was high because it scanned the whole heap.
The Fix in Java 21: By adding a "Young Generation," ZGC now focuses its barrier logic on short-lived objects. This has significantly improved our cache hit rates in order-matching simulations.

3. The "Uncommit" Trap
By default, ZGC returns memory to the OS (-XX:+ZUncommit). In a banking production environment, this is a latency killer: the OS page-faults when the JVM asks for that memory back.
Pro-tip: Always set -Xms and -Xmx to the same value and use -XX:-ZUncommit to keep your heap "warm" and resident.

The Conclusion: Java 21 isn't just "faster" — it's more deterministic. For the first time, we can build massive, 512GB+ stateful trading engines in Java without fearing the "Stop-the-World" monster.

Are you seeing the Load Barrier overhead in your tightest loops, or has the Generational shift mitigated it for you?

#Java21 #HFT #LowLatency #JVM #ZGC #SoftwareArchitecture
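The tuning advice above, collected into one JVM command line. The heap size and `app.jar` are placeholders; on Java 21 specifically, Generational ZGC is opt-in via `-XX:+ZGenerational` (plain `-XX:+UseZGC` still selects the older non-generational collector).

```shell
# Generational ZGC, a fixed heap so the collector never grows/shrinks it,
# and uncommit disabled so the OS keeps heap pages resident ("warm").
java -XX:+UseZGC -XX:+ZGenerational \
     -Xms32g -Xmx32g \
     -XX:-ZUncommit \
     -jar app.jar
```

With `-Xms` equal to `-Xmx`, `-XX:-ZUncommit` is belt-and-braces: there is no shrink headroom to return anyway, but it makes the intent explicit.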
Virtual Threads are a game-changer in Java — but there's a silent performance killer you need to know about: the Pinning Problem.

With Project Loom (Java 21+), virtual threads allow you to run millions of lightweight threads without exhausting your OS thread pool. Sounds perfect, right? Not so fast.

⚠️ What is Pinning?
A virtual thread gets "pinned" to its carrier (OS) thread when:
→ It runs inside a synchronized block or method
→ It calls native methods or foreign functions

When pinned, the carrier thread is blocked — defeating the entire purpose of virtual threads and bringing you back to the old-school thread-per-request bottleneck.

How to detect it?
Run your app with:
-Djdk.tracePinnedThreads=full
This logs every pinning event so you can track down the culprit.

How to fix it?
✅ Replace synchronized with ReentrantLock
✅ Audit third-party libraries for synchronized usage
✅ Upgrade to JDK 24+, where JEP 491 lets virtual threads block inside synchronized without pinning

Virtual threads are powerful — but only if you avoid the traps. Pinning is one of the most overlooked issues when migrating to Loom-based concurrency.

#Java #ProjectLoom #VirtualThreads #Concurrency #SoftwareEngineering #Backend #Java21 #PerformanceTuning
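The `synchronized` → `ReentrantLock` swap looks like this in practice. A minimal sketch (class name and counter workload are mine): on Java 21–23, blocking inside a `synchronized` block pins the virtual thread to its carrier, while parking on a `ReentrantLock` lets the JVM unmount it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class PinningFix {

    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    // Instead of `synchronized int increment()`, guard the critical
    // section with ReentrantLock: a virtual thread waiting here can be
    // unmounted from its carrier instead of pinning it.
    int increment() {
        lock.lock();
        try {
            return ++counter;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PinningFix fix = new PinningFix();
        Thread t = Thread.ofVirtual().start(fix::increment);
        t.join();
        System.out.println(fix.increment()); // 2: both increments applied
    }
}
```

The `try/finally` unlock is mandatory: unlike `synchronized`, `ReentrantLock` is not released automatically when the block exits.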
“We Added More Threads… and the System Got Slower”

In this post, I break down parallel computation from a software design perspective (not CPU worship 🙃):
• Why parallelism is a design problem
• The real difference between concurrency and parallelism
• The mistakes that quietly destroy performance
• How modern Java actually fixes part of this mess

https://lnkd.in/ed86MUUX