🚀 Virtual Threads in Java – The Most Underused Superpower in Modern Concurrency 🧵

We’ve all battled with scaling backend services — thread pools exhausted, reactive frameworks adding complexity, and debugging async flows turning into nightmares. Then came Project Loom, finalized in Java 21, quietly transforming concurrency with Virtual Threads.

🧩 What Are Virtual Threads?
They’re JVM-managed lightweight threads, multiplexed onto a small pool of carrier OS threads rather than mapped one-to-one. You can spawn thousands or even millions of them without worrying about memory or blocking calls. A virtual thread starts at roughly a few KB vs ~1MB of stack reserved for a platform thread — a footprint reduction of several hundred times.

⚙️ Where They Shine
✅ I/O-bound workloads (DB calls, REST APIs, file or network I/O)
✅ Microservices handling high request volume
✅ Gateway or aggregation services calling multiple downstream APIs
✅ ETL / crawler systems needing high concurrency
✅ Load testing simulations mimicking thousands of users

Example:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    urls.forEach(url -> executor.submit(() -> crawl(url)));
}

Readable, blocking-style code — no CompletableFuture chaos, no reactive boilerplate.

🚫 Where Not to Use
❌ CPU-bound work (heavy computation — no gain)
❌ Synchronized blocks or JNI (can cause thread pinning; synchronized pinning is fixed as of Java 24)
❌ ThreadLocal-heavy frameworks (consider ScopedValue instead)
❌ Old JDBC drivers or native libraries

You can detect pinning via: -Djdk.tracePinnedThreads=full

🧠 Real-World Example
Imagine a microservice that fetches user data and order history (StructuredTaskScope is a preview API, so --enable-preview is required):

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var user = scope.fork(() -> userService.getUserById(1));
    var orders = scope.fork(() -> orderService.getOrders(1));
    scope.join();
    scope.throwIfFailed();
    return new UserProfile(user.resultNow(), orders.resultNow());
}

Simple, structured, and scalable — each call runs in its own virtual thread, and the JVM parks them efficiently when blocked.

🔍 The Takeaway
Virtual Threads don’t replace parallelism — they make concurrency practical again.
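The "thousands of threads without worrying" claim is easy to verify yourself. A minimal, self-contained sketch (the 10,000-task count and 10 ms sleep are illustrative stand-ins for blocking I/O):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // Spawn 10,000 virtual threads that each block briefly; the JVM parks
    // them on a handful of carrier threads instead of using 10,000 OS threads.
    static int run() {
        AtomicInteger completed = new AtomicInteger();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(10); // stand-in for blocking I/O
                    completed.incrementAndGet();
                    return null;
                });
            }
        } // close() blocks until every submitted task has finished
        return completed.get();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Requires Java 21+. The same loop with `Executors.newFixedThreadPool(10_000)` would reserve gigabytes of stack; here it completes in well under a second.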
#Java #VirtualThreads #Java21 #Concurrency #SoftwareEngineering #BackendDevelopment #Microservices #PerformanceEngineering #Scalability #SystemDesign #Developers #EngineeringExcellence
How Virtual Threads in Java 21 Revolutionize Concurrency
⚙️ Blocking Requests to Real-Time Async Uploads: Built a Production-Ready File Upload System

In a recent project, I came across a complex challenge on our analytics platform: how to handle high-volume, concurrent file uploads while providing live, user-facing progress tracking and maintaining data enrichment.

The Challenge: The initial synchronous endpoint blocked for ~20 seconds during S3 uploads and status updates, freezing the UI and introducing data-corruption risks with concurrent users due to unsynchronized HashMap writes. A plain HashMap can return stale or null values when multiple threads read and write it concurrently, which surfaced as incorrect status updates at the endpoints. Worse, Tomcat was deleting temp files before async tasks could process them, causing unexplained failures.

My Solution Approach:
- Enabled async processing in Spring Boot via @Async + a custom ThreadPoolTaskExecutor, letting uploads run in the background while API responses return instantly.
- Built live status tracking: used Angular’s RxJS polling (100ms intervals) with switchMap/takeWhile, so users saw every step — “Queued” → “Validating” → “Uploading” → “Completed” — in real time.
- Concurrent status management: adopted ConcurrentHashMap (new ConcurrentHashMap<>(32, 0.75f, 8)) for safe, lock-free concurrent reads and a sizing hint of 8 parallel writers, eliminating race conditions, lost updates, and ConcurrentModificationException.
- File integrity for async: converted MultipartFile to byte arrays in the controller before async handling, solving the Tomcat temp-file deletion problem once and for all.
- Fine-grained error handling: added detailed error tracking & logging for every validation and upload stage, funneling meaningful messages to user-facing modals.
Key Technical Wins:
- Robust to spikes in concurrent uploads (stress-tested with 8 simultaneous uploaders)
- No blocking or thread starvation; backend always responsive
- Data integrity and system stability at every concurrency level
- Clean, production-grade error management and user feedback loop

Tech stack: Spring Boot | Angular | AWS S3 | Java Concurrency | RxJS

Takeaway: A deep understanding of Java concurrency — knowing when to employ ConcurrentHashMap and async design — let me build a solution that's both performant and bulletproof.

#SpringBoot #AsyncProgramming #Java #Angular #AWS #SoftwareEngineering #BackendDevelopment #Concurrency #SystemDesign
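The status-tracking piece can be sketched in a few lines. This is a hedged illustration, not the project's actual code — the class and status names are made up, only the `ConcurrentHashMap(32, 0.75f, 8)` construction comes from the post:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class UploadTracker {
    // (32, 0.75f, 8): initial capacity, load factor, and a concurrency-level
    // hint of 8 writers. Since Java 8 the hint mainly guides initial sizing;
    // the map is safe for any number of concurrent readers and writers.
    private final Map<String, String> statusById =
            new ConcurrentHashMap<>(32, 0.75f, 8);

    public void update(String uploadId, String status) {
        statusById.put(uploadId, status); // atomic per-key write, no external lock
    }

    public String statusOf(String uploadId) {
        return statusById.getOrDefault(uploadId, "UNKNOWN");
    }
}
```

An async upload task calls `update(...)` as it progresses, while the polling endpoint calls `statusOf(...)` — no synchronization needed on either side.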
Multithreading Best Practices I wish I’d learned sooner (Java edition)

High throughput isn’t about “more threads” — it’s about less contention, clear ownership, and predictable backpressure. My field notes:

1) Design for concurrency first
Prefer immutability and message passing over shared mutation. Keep data thread-confined (owned by one thread) when possible; share only when you must.

2) Pick the right executor
CPU-bound → fixed pool ≈ cores. I/O-bound → larger pool or virtual threads (Java 21+) via Executors.newVirtualThreadPerTaskExecutor(). Always name threads and bound queues (no unbounded surprises).

3) Control contention, then lock
Minimize critical sections; guard the smallest mutable state. If you must lock: consistent lock ordering, tryLock + timeout, and consider ReadWriteLock/StampedLock for read-heavy flows. Use LongAdder for hot counters and ConcurrentHashMap for sharded state.

4) Visibility > vibes
Understand happens-before; use volatile for visibility (not for compound ops). Safely publish objects (final fields, immutable DTOs).

5) Backpressure is a feature
Bounded queues (e.g., ArrayBlockingQueue) + a RejectedExecutionHandler you chose on purpose. Rate limit, shed load, or degrade gracefully before your service falls over.

6) Cancellation you can trust
Treat Thread.interrupt() as the standard cancel signal; check it in loops, pass it down, and clean up.

7) Fail fast, shut down cleanly
executor.shutdown();
if (!executor.awaitTermination(30, TimeUnit.SECONDS)) { executor.shutdownNow(); }
Add metrics around queue depth, wait time, and task latency.

8) Don’t block the future
Compose async with CompletableFuture (allOf/anyOf), timebox with timeouts. Consider Structured Concurrency (preview since Java 21) for request-scoped parallel work (StructuredTaskScope).

9) Test like production
Chaos/stress tests; vary pool sizes; fault-inject slow I/O. Use JFR/jstack for live profiling; watch for ThreadLocal leaks.

10) Keep it observable
Emit per-pool metrics (active, queued, rejected), plus p95/p99 latencies. Log the cause on rejections and timeouts; trace cross-thread hops.

Smells to fix quickly: unbounded pools/queues, synchronized getters doing I/O, global locks, ignored interrupts, shared mutable singletons.

If you’ve got one rule to add to this list, what is it? 👇

#java #concurrency #multithreading #springboot #microservices #performance #jvm #systemdesign
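Points 2 and 5 above — named threads, a bounded queue, and a deliberately chosen rejection policy — fit in one executor definition. A sketch under those assumptions (the `worker-` prefix and the sizes 4/100 are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPool {
    public static ThreadPoolExecutor create() {
        AtomicInteger seq = new AtomicInteger();
        // Named threads: thread dumps and metrics stay readable.
        ThreadFactory named = r -> new Thread(r, "worker-" + seq.incrementAndGet());
        // Bounded queue + CallerRunsPolicy: when the queue is full, the
        // submitting thread runs the task itself, which slows producers
        // down naturally -- backpressure instead of an unbounded pile-up.
        return new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(100),
                named,
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}
```

CallerRunsPolicy is one of several reasonable choices; AbortPolicy (fail fast) or a custom handler that sheds load with a metric are equally valid — the point is that you picked one on purpose.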
🚀 Java 21 — Virtual Threads

Java 21 quietly brought a game-changer for concurrency — Virtual Threads. If you’ve ever fought with Thread.sleep(), blocking I/O, or scaling your app under load, this one’s for you.

Traditional Threads — The Old Way
In classic Java, when you run:

new Thread(() -> {
    // some task
}).start();

you’re creating an OS-level thread. Each one is heavy — it consumes memory (around 1MB of stack space by default) and is limited by the operating system. On a typical machine you can only handle a few thousand concurrent threads before performance drops. That’s why frameworks like Spring WebFlux and Reactive Streams were created — to avoid blocking and manage concurrency efficiently.

Virtual Threads — The New Way
Java 21 introduces Virtual Threads (via Project Loom). They are lightweight, user-mode threads managed by the JVM, not the operating system. Creating millions of them? Totally fine. Each virtual thread takes only a few KB of memory and doesn’t block an OS thread while waiting (e.g., on I/O).

Traditional vs Virtual Threads

🔸 Traditional Thread Example
ExecutorService executor = Executors.newFixedThreadPool(100);
for (int i = 0; i < 1000; i++) {
    executor.submit(() -> {
        doDatabaseCall(); // blocking
    });
}
Here we’re limited to 100 OS threads. If 100 tasks are waiting on I/O, the others must wait.

🔸 Virtual Thread Example
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
for (int i = 0; i < 1000; i++) {
    executor.submit(() -> {
        doDatabaseCall(); // blocking
    });
}
Each task runs on its own virtual thread — even while blocking, it doesn’t occupy an OS thread. The JVM parks and resumes virtual threads as needed.

Result:
✅ Scales effortlessly
✅ Simpler, synchronous code
✅ No reactive complexity

Virtual Threads make high concurrency simple again. You can now write plain, readable, blocking code — and still handle massive workloads efficiently.

👋 Have you tried Virtual Threads yet in Java 21?
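Besides the per-task executor, Java 21 also added a builder for starting a single virtual thread directly — the counterpart to the `new Thread(...)` snippet above. A minimal sketch (the thread name is arbitrary):

```java
public class OfVirtualDemo {
    static boolean runOnVirtualThread() throws InterruptedException {
        // Thread.ofVirtual() (Java 21+) builds and starts one virtual thread;
        // join() blocks the caller until it finishes.
        Thread vt = Thread.ofVirtual().name("vt-demo").start(() -> {
            // blocking work here would park the virtual thread,
            // freeing its carrier OS thread for other virtual threads
        });
        vt.join();
        return vt.isVirtual();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnVirtualThread()); // true
    }
}
```

There is also `Thread.startVirtualThread(runnable)` as a one-line shortcut when you don't need the builder options.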
💥 Tricky Java Concurrency Bug — When Google Finally Stopped Leaking Threads

At Google, a backend service was making three parallel API calls per request: user profile, recommendations, ads. Simple, right? Until one day the JVM started crying: “Thread leak detected!”

Why? Because the old code looked like this:

ExecutorService executor = Executors.newCachedThreadPool();
Future<String> user = executor.submit(() -> fetchUser());
Future<String> recs = executor.submit(() -> fetchRecs());
Future<String> ads = executor.submit(() -> fetchAds());
return user.get() + recs.get() + ads.get();

If any task threw an exception or timed out, the remaining futures kept running in the background like abandoned pets. This caused:
❌ Thread leaks
❌ Zombie tasks
❌ Quiet memory growth
❌ Slower GC
❌ Higher latency

Google’s SRE team summed it up perfectly:
> “If your tasks are structured like spaghetti, your threads will behave like spaghetti.”

💡 Enter Structured Concurrency
StructuredTaskScope treats a group of tasks as ONE unit — if one fails, the whole group is cancelled. No leaks. No zombies. No forgotten futures. (Note: structured concurrency is a preview API — introduced in Java 21 and still being refined through the Java 25 previews — so it requires --enable-preview; the ShutdownOnFailure shape shown below matches the Java 21–24 previews.)
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var user = scope.fork(() -> fetchUser());
    var recs = scope.fork(() -> fetchRecs());
    var ads = scope.fork(() -> fetchAds());

    scope.join();           // wait for all
    scope.throwIfFailed();  // throw if any failed

    return user.get() + recs.get() + ads.get();
}

✔ If any task fails → the entire scope is shut down
✔ Remaining tasks are cancelled automatically
✔ No more thread leaks
✔ No more forgotten futures
✔ Clean, readable, predictable

Google engineers called it:
> “Finally… concurrency with parental supervision.” 😎

🧠 Why Structured Concurrency Is a GAME-CHANGER
✔ Prevents thread leaks
✔ Ensures tasks complete together
✔ Cleaner error handling
✔ Cancellation is automatic
✔ Works beautifully with virtual threads
✔ Zero orphan tasks

This is the concurrency model Java should have had 15 years ago. And now it does.

🧵 Debugging Tips
🔍 If you see a slow memory leak → check for abandoned futures
🔍 If executor threads keep growing → you’re missing cancellation
🔍 If one subtask fails but others continue → use structured concurrency
🔍 If virtual threads are leaking → switch to StructuredTaskScope

✅ Quick Checklist
☑ Use Structured Concurrency for parallel subtasks
☑ Never use Future + executor spaghetti for multi-step workflows
☑ Use ShutdownOnFailure when you want “fail-fast” behavior
☑ Use ShutdownOnSuccess when only the first successful task matters
☑ Combine with virtual threads for maximum speed

> Lesson: Java didn’t just fix concurrency… it finally added a cleanup crew. 🧹

#Java #Google #StructuredConcurrency #VirtualThreads #ProjectLoom #Multithreading #JavaDevelopers #CleanCode #SoftwareEngineering #DailyLearning #CodingHumor #InterviewPrep
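The "first successful task wins" item in the checklist doesn't require the preview API at all — plain `ExecutorService.invokeAny` has implemented that contract for years, cancelling the losers automatically. A hedged sketch (the mirror names and latencies are invented for illustration):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FirstSuccess {
    // invokeAny returns the result of the first task to complete successfully
    // and cancels the rest -- the stdlib ancestor of ShutdownOnSuccess.
    static String fetchFromFastestMirror() throws Exception {
        List<Callable<String>> mirrors = List.of(
                () -> slowCall("mirror-a", 200),
                () -> slowCall("mirror-b", 50),   // fastest, should win
                () -> slowCall("mirror-c", 500));
        try (ExecutorService pool = Executors.newFixedThreadPool(3)) {
            return pool.invokeAny(mirrors);
        } // close() waits for the cancelled tasks to unwind
    }

    private static String slowCall(String name, long millis) throws InterruptedException {
        Thread.sleep(millis); // simulated network latency
        return name;
    }
}
```

What ShutdownOnSuccess adds over invokeAny is structure: the scope is tied to a lexical block, composes with ShutdownOnFailure siblings, and pairs naturally with virtual threads.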
🧩 When to Use Which Multithreading Mechanism in Java

Java’s concurrency tools have evolved a lot — from raw Thread to Virtual Threads and Structured Concurrency. But the real challenge? 👉 Knowing which one to use when.

Here’s a practical guide based on scalability, complexity, and production readiness 👇

✅ 1. Simple background or demo tasks
Use Thread or Runnable. 🧪 Best for learning, quick tests, or prototypes. ❌ Not ideal for production — limited control, poor resource reuse.

✅ 2. Managing multiple tasks efficiently
Use ExecutorService or thread pools. ⚙️ Perfect for production apps, APIs, or services handling concurrent requests. They reuse threads and manage scheduling automatically.

✅ 3. Asynchronous workflows
Use CompletableFuture. 💡 Ideal for production-grade async logic — chaining, combining, and composing tasks with cleaner code.

✅ 4. Coordinating multiple threads
Use CountDownLatch, CyclicBarrier, or Phaser. 🧩 Use in both production and testing — for synchronizing tasks or test setups that wait for multiple services.

✅ 5. Fine-grained locking and contention control
Use ReentrantLock, ReadWriteLock, or StampedLock. ⚡ Production-grade concurrency for shared resources or caches with heavy read/write traffic.

✅ 6. Parallel computation / divide-and-conquer
Use ForkJoinPool with RecursiveTask or RecursiveAction. 🚀 Production-ready for CPU-bound tasks — like data crunching, sorting, or analytics.

✅ 7. Reactive & streaming systems
Use Flow, Reactor, or RxJava. 🌊 Best for event-driven or streaming applications in production.

✅ 8. Massive concurrency (millions of threads)
Use Virtual Threads (Project Loom). 🧠 Production-ready from Java 21+ — a game-changer for I/O-heavy apps like microservices, chat servers, and REST backends.

✅ 9. Grouping and managing concurrent subtasks
Use Structured Concurrency (preview since Java 21). 🧱 Designed for complex concurrent operations — ensures cleaner cancellation and error propagation.
👉 Quick Takeaway: 🧪 Use raw threads only for learning or small utilities. ⚙️ Use ExecutorService, CompletableFuture, or ForkJoin for stable production workloads. 🧠 For the future — embrace Virtual Threads and Structured Concurrency. 💬 What’s your go-to concurrency tool in production — and why? #Java #Multithreading #VirtualThreads #StructuredConcurrency #CompletableFuture #ExecutorService
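Point 3's "chaining, combining, and composing" is worth one concrete line of code. A small sketch — the user/order names are invented for illustration, and the async suppliers stand in for real I/O calls:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPipeline {
    // thenCombine merges two independent async results; thenApply maps the
    // combined value -- composition instead of nested callbacks.
    static String buildGreeting() {
        CompletableFuture<String> user =
                CompletableFuture.supplyAsync(() -> "Alice");   // e.g. user service call
        CompletableFuture<Integer> orders =
                CompletableFuture.supplyAsync(() -> 3);          // e.g. order count query
        return user.thenCombine(orders, (u, n) -> u + " has " + n + " orders")
                   .thenApply(String::toUpperCase)
                   .join();
    }

    public static void main(String[] args) {
        System.out.println(buildGreeting()); // ALICE HAS 3 ORDERS
    }
}
```

The two `supplyAsync` calls run concurrently on the common ForkJoinPool; only `join()` at the end blocks, and in production you would usually return the future instead of joining.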
🔥 Tricky Java / Production Bug — The ThreadLocal Memory Leak (a.k.a. The Silent Killer of Threads 😅)

Picture this: You’re working at Microsoft, your microservice is running smooth as butter 🧈... until suddenly, memory usage starts climbing like it just got promoted 🚀 You open your monitoring dashboard — GC is running fine, no OutOfMemoryError yet... but the heap looks like it’s hoarding sessions from the Windows XP era 👀

Welcome to the sneaky world of ThreadLocal memory leaks 🧵

💡 The Sneaky Cause:
You use ThreadLocal to store request-specific data (user info, correlation ID, transaction state). But you forget the golden rule: threadLocal.remove();
Your thread pool reuses threads — and guess what stays behind? 👉 Old values. ThreadLocalMap holds weak references to its keys but strong references to its values, so once the key is GC’ed the value just… hangs there until the map happens to expunge the stale entry. Leaking memory byte by byte 💀

🧠 Example (note the finally — without it, an exception in the business logic skips the cleanup):

ThreadLocal<UserContext> context = new ThreadLocal<>();

void process(UserContext ctx) {
    context.set(ctx);
    try {
        // Business logic here...
    } finally {
        context.remove(); // 🧹 Mandatory cleanup!
    }
}

🔍 Debugging Tips (When You Suspect ThreadLocal Trouble):
1️⃣ Use a heap dump tool like Eclipse MAT or VisualVM — search for ThreadLocalMap.
2️⃣ Look for “unreachable” keys with large retained sizes.
3️⃣ Check whether threads in the pool are holding references to old request objects.
4️⃣ Enable GC logs or use a memory profiler — a heap that keeps rising after GC is a red flag 🚩
5️⃣ Watch for “slow leaks” — this one creeps up over hours or days, not minutes.

🧾 Quick Checklist for Safe ThreadLocal Usage:
✅ Always call remove() in a finally block.
✅ Avoid storing heavy objects (like sessions or big collections).
✅ Prefer request-scoped beans or dependency injection where possible.
✅ Review all ThreadLocal usage before deploying to prod.
✅ Don’t assume frameworks will clean it up for you — they won’t 😏

🎯 Final Thought: ThreadLocal is like caffeine ☕ — great in moderation, disastrous when overused. Use it smartly, clean it up religiously.
Otherwise, your memory leak will show up in the next sprint review saying: > “Hi, I’m still here… and I brought more heap!” 😜 #Java #SpringBoot #ThreadLocal #MemoryLeak #Microsoft #TrickyInterviewQuestion #BackendEngineering #Concurrency #Debugging #ProductionBug #JavaDeveloper #CodingHumor
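The failure mode is easy to reproduce deterministically with a one-thread pool. A minimal sketch (the `request-42` value is made up) in which a second task observes the first task's leftover state because `remove()` was never called:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalLeakDemo {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    static String staleValueSeenBySecondTask() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // one reused thread
        try {
            // Task 1 sets the ThreadLocal and "forgets" to call remove().
            pool.submit(() -> CONTEXT.set("request-42")).get();
            // Task 2 runs on the SAME pooled thread and sees the leftover --
            // in production this is another user's request seeing stale context.
            return pool.submit(() -> CONTEXT.get()).get();
        } finally {
            pool.shutdown();
        }
    }
}
```

Swap `set` for a `try/finally` with `remove()` in task 1 and the second task gets `null`, as it should.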
POST 1: Spring Boot 3.x with Virtual Threads 🚀

Title: Virtual Threads in Spring Boot 3.x — a Game Changer for Performance! 🔥

Exciting news for Java developers! Virtual Thread integration in Spring Boot 3.x has completely transformed the performance profile of Java applications. Let’s look at what it is and how to use it.

What Are Virtual Threads?
Virtual Threads (Project Loom) are lightweight threads managed at the JVM level. Compared to traditional platform threads they consume far fewer resources — an application can create hundreds of thousands of virtual threads without exhausting system resources.

Traditional vs Virtual Threads:
- Platform threads: heavy, OS-managed, limited count (a few thousand)
- Virtual threads: lightweight, JVM-managed, millions possible
- Context switching: much faster with virtual threads

How to Enable It in Spring Boot 3.x?
Very simple! In application.properties:

spring.threads.virtual.enabled=true

Practical Use Case:
Suppose your application has many blocking I/O operations — database calls, external API calls, file operations. With traditional threads, each request consumes a platform thread, and under high load the pool gets exhausted. With virtual threads, each request can take its own dedicated virtual thread without any fear of resource exhaustion. This is especially useful in a microservices architecture where each request fans out into multiple service calls.

Performance Benefits:
- Up to ~10x better throughput for blocking workloads
- Reduced memory footprint
- Better resource utilization
- Simplified async programming — no need for complex reactive code

Implementation Example:
No change is needed at the controller level — Spring Boot automatically uses virtual threads once the property is enabled. But if you want to configure it explicitly:

@Bean
public TomcatProtocolHandlerCustomizer<?> protocolHandlerVirtualThreadExecutorCustomizer() {
    return protocolHandler -> {
        protocolHandler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
    };
}

Important Points:
- Java 21+ is required
- Little benefit for CPU-intensive tasks
- Perfect for I/O-bound applications
- Use thread-local variables carefully

Real-world Impact:
In an e-commerce application where each request made 5–6 database calls and 2–3 external API calls, virtual threads improved response time by up to 40% and doubled server capacity.

Migration Tips:
- Migrating existing Spring Boot apps is easy
- Just upgrade to Java 21+ and enable the property
- No code changes are required in most cases
- Test thoroughly — especially thread-local usage

Conclusion: Virtual threads are a revolutionary feature for Spring Boot applications. For high-concurrency workloads they are proving to be a game-changer.

#VirtualThreads #ProjectLoom #BackendDevelopment #JavaFullStack #PerformanceOptimization #Microservices #Java21
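The throughput claim is easy to sanity-check without Spring. A rough sketch — the 50 ms sleep stands in for a blocking DB/API call, and the numbers are illustrative, not a benchmark:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThroughputSketch {
    // Submits `tasks` fake blocking calls of 50 ms each and returns elapsed
    // wall-clock millis; try-with-resources close() waits for completion.
    static long timeIt(ExecutorService pool, int tasks) {
        Instant start = Instant.now();
        try (pool) {
            for (int i = 0; i < tasks; i++) {
                pool.submit(() -> { Thread.sleep(50); return null; });
            }
        }
        return Duration.between(start, Instant.now()).toMillis();
    }

    public static void main(String[] args) {
        // 200 tasks on 10 platform threads => ~20 sequential 50 ms waves
        // (~1000 ms); virtual threads park all 200 at once (~50-100 ms).
        long fixed = timeIt(Executors.newFixedThreadPool(10), 200);
        long virt  = timeIt(Executors.newVirtualThreadPerTaskExecutor(), 200);
        System.out.println("fixed=" + fixed + "ms virtual=" + virt + "ms");
    }
}
```

This is exactly the shape of a request handler doing several blocking downstream calls — which is why the gains show up for I/O-bound services and not for CPU-bound work.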
“Virtual Threads Are Killing Reactive” — 2025 Headlines Keep Saying It. Here’s What I Actually Think

I work on systems where a single millisecond can cost millions — real-time payment authorizations that must settle in under 500 ms, end-to-end. In that world, low latency isn’t a nice-to-have; it’s survival. Fraud detection, ledger consistency, instant settlement — all racing against the clock, across networks, databases, and compliance checks.

That’s why non-blocking architecture has always excited me. Not for buzzwords, but because it turns waiting into working. A thread that’s idle during a database call? Unacceptable. Reactive programming — with backpressure, event loops, and stream composition — gave us a way to maximize throughput without sacrificing responsiveness. It felt like engineering at the speed of finance.

I keep seeing the same Medium post quoted everywhere: “Project Loom will kill reactive Java and that’s a good thing.” (A Developer’s Take on Java’s New Era, Medium, August 2025) It’s not the only one. InfoQ, DZone, and even the Spring blog have run pieces asking: is reactive programming obsolete now that virtual threads are mature (final since Java 21)?

So I paused. Put down the keyboard. And asked myself, after years of building with RxJava, Reactor, and structured concurrency: do I agree?

Here are three quiet reflections, no drama.

1. It fixes the problem reactive was built to solve
Reactive programming saved us from thread exhaustion. One blocking database call used to eat a whole platform thread. At scale, that meant thread pools exploding or complex workarounds. Virtual threads don’t fight that battle — they end it. Block all you want; the JVM parks the virtual thread, frees its carrier OS thread for other work, and moves on.

“The core motivation for reactive libraries is gone.” (Java Concurrency in 2025: A Retrospective, InfoQ, September 2025)

That line stuck with me. It’s not an insult to reactive — it’s a tribute.

2. Teams are simplifying — and that feels right
I’ve watched Spring teams quietly drop WebFlux in favor of virtual threads, and Quarkus and Micronaut teams default to them over their reactive stacks. Not because reactive failed, but because the code got simpler. Less subscribeOn(), fewer fragmented stack traces, easier onboarding for new devs. It’s not about benchmarks. It’s about maintainability. And in real teams, that’s what lasts.

3. Reactive was a bridge — and bridges get replaced
Java 8 gave us lambdas. Reactive gave us scalable I/O. Loom gives us both, without the ceremony. Reactive taught us async thinking, backpressure, resilience. Now the JVM carries that wisdom natively.

“We needed reactive to prove the model. Now we can retire the scaffolding.” (Virtual Threads in Production: Early Adopter Report, DZone, July 2025)

That metaphor resonates. Scaffolding isn’t failure — it’s how tall buildings get built.

So, is reactive dead? Not to me. Java keeps evolving. And this time, the evolution feels gentler.
🧩 The Polyglot Pipeline: When to Break Java's Rules for Scala

As senior Java developers, we know the JVM is our domain, but choosing the right language for the job is the real architectural win. While Java/Spring Boot excels at building robust, maintainable REST APIs and complex business services, it often struggles with pure data processing and massive concurrency. This is precisely where Scala shines.

The value of bringing Scala into the pipeline isn't about replacing Java; it's about strategic specialization. Scala's concise, expressive syntax (thanks to functional programming) is ideal for complex, mathematical logic — think pricing engines or intricate data-transformation rules. It integrates beautifully with big-data ecosystems like Spark, which is itself largely written in Scala.

By using Scala for these specialized components and letting our Java services consume the results (via Kafka or internal REST calls), we achieve a genuinely polyglot, best-of-breed architecture. The end result is a system where the transactional layer is stable (Java) and the data-processing layer is high-performance and concise (Scala), all running on the same familiar JVM.

What's one project where you consciously chose Scala over Java, and what specific problem did its functional nature solve better?

#Java #Scala #JVM #Microservices #PolyglotArchitecture #SoftwareArchitecture #FullStackDevelopment #BackendDevelopment #SoftwareEngineer #C2C #C2H