Traditional threading model in Java:

```java
// Each platform thread maps to one OS thread (~1-2 MB stack)
// 1000 threads = 1-2 GB of memory just for stacks
ExecutorService executor = Executors.newFixedThreadPool(100);
for (int i = 0; i < 1000; i++) {
    executor.submit(() -> {
        handleRequest(); // Blocks on I/O
    });
}
```

Thread pools. Limited concurrency. Wasted memory while threads wait.

Virtual Threads (Java 21):

```java
// Millions of threads possible
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 1000000; i++) {
        executor.submit(() -> {
            handleRequest(); // When blocked, the virtual thread unmounts
        });
    }
}
```

How they work:
· Lightweight (a few KB, not MB)
· Not tied 1:1 to OS threads
· When a virtual thread blocks on I/O, it is unmounted from its carrier thread
· The carrier thread can then run other virtual threads
· No pooling needed. Create as many as you want.

Simple creation:

```java
// Create and start
Thread vThread = Thread.startVirtualThread(() -> {
    System.out.println("Hello from virtual thread");
});

// Or using the builder (note: a distinct variable name, to avoid a duplicate declaration)
Thread vThread2 = Thread.ofVirtual()
    .name("my-virtual")
    .unstarted(() -> { /* task */ });
vThread2.start();
```

Where they shine:
· High-concurrency servers
· Many concurrent I/O operations
· Each request gets its own thread (simple synchronous code)

Where they DON'T help:
· CPU-bound work (still limited by cores)
· Code already on reactive/async frameworks (virtual threads may simplify it instead)

The biggest change to Java concurrency since... ever. Write simple blocking code at scale.

#Java #VirtualThreads #ProjectLoom #Concurrency #Java21 #Performance
Java Virtual Threads: Lightweight, Scalable, High-Concurrency Programming
More Relevant Posts
-
When working with concurrency in Java, one of the first approaches developers try:

```java
new Thread(task).start();
```

This is not wrong; it works well for simple or short-lived scenarios. However, it does not scale:
- thread creation is expensive 📈
- memory usage grows 📈
- excessive context switching degrades performance 📉

A more robust approach is a Thread Pool. Here is an example of a simple setup:
- pool size: 3 threads
- submitted tasks: 6

Observed behavior:
- 3 tasks are executed immediately
- 3 tasks are placed into a queue and wait

This reflects how ExecutorService manages workload internally:
- a fixed number of worker threads
- a task queue (by default, an unbounded LinkedBlockingQueue in newFixedThreadPool)
- reuse of a fixed number of threads instead of creating a new one per task

Additionally:
1️⃣ Runnable → no return value
2️⃣ Callable → returns a result
3️⃣ Future → allows retrieving the result asynchronously

```java
future.get(); // blocks the calling thread until the result is available
```

Proper shutdown is also important:

```java
pool.shutdown();
if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
    pool.shutdownNow(); // fallback if tasks did not finish
}
```

Without this, threads may keep running or resources may not be released properly.

📌 Takeaway: Thread pools are not just a convenience; they are a core abstraction for:
- predictable resource usage
- stable latency under load
- controlled concurrency in backend systems

#Java #Multithreading #BackendEngineering #Concurrency
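The 3-thread / 6-task setup described above can be sketched as a small runnable program. This is a minimal illustration, not code from the original post: the class name `FixedPoolDemo` and the 500 ms of simulated work per task are my own choices.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {

    // Submits 6 tasks to a 3-thread pool. The first 3 start immediately;
    // tasks 4-6 wait in the queue until a worker frees up.
    // Returns true if all tasks finished within the timeout.
    static boolean runDemo() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        for (int i = 1; i <= 6; i++) {
            final int taskId = i;
            pool.submit(() -> {
                System.out.println("Task " + taskId + " on " + Thread.currentThread().getName());
                try {
                    Thread.sleep(500); // simulate work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        pool.shutdown();
        boolean finished = pool.awaitTermination(10, TimeUnit.SECONDS);
        if (!finished) {
            pool.shutdownNow(); // fallback if tasks did not finish
        }
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        runDemo();
    }
}
```

With 500 ms per task, the six tasks complete in two "waves" of three, so the whole run takes roughly one second rather than three.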
-
🔥 Day 8 — volatile Keyword in Java: Simple but Misunderstood

volatile is one of those keywords that looks simple… but is often misunderstood. I’ve seen developers use it thinking it solves all concurrency problems. It doesn’t.

⚠ What does volatile actually do?
It ensures visibility, not atomicity.

👉 When a variable is marked volatile:
- Changes made by one thread are immediately visible to others
- The value is always read from main memory, not from a CPU cache

💻 Example

```java
volatile boolean running = true;

public void stop() {
    running = false;
}

public void run() {
    while (running) {
        // do work
    }
}
```

Without volatile, one thread might never see the updated value. With volatile, it works as expected ✔

⚠ Where developers go wrong

```java
volatile int count = 0;
count++; // ❌ Still NOT thread-safe
```

👉 Because count++ is not atomic (it is a read-modify-write of three steps), volatile does NOT prevent race conditions.

💡 When to use volatile
✔ Status flags (start/stop signals)
✔ Simple state sharing
✔ When no compound operations are involved

🚫 When NOT to use
❌ Counters
❌ Complex updates
❌ Multiple dependent variables

💡 From experience: volatile works great for controlling thread lifecycle (like stop flags), but using it for counters or shared updates leads to subtle bugs.

🚀 Rule of Thumb
👉 volatile = visibility guarantee
👉 NOT a replacement for synchronization

👉 Have you ever used volatile incorrectly and faced issues?

#100DaysOfJavaArchitecture #Java #Concurrency #SoftwareArchitecture #Microservices
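For the counter case flagged above, the standard fix is `java.util.concurrent.atomic.AtomicInteger`, which turns the increment into a single atomic operation. A minimal sketch (the class name `CounterDemo` and the iteration counts are my own choices):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {

    // Two threads each increment the counter 10,000 times.
    // incrementAndGet() is an atomic read-modify-write, unlike volatile count++.
    static int runCounter() throws InterruptedException {
        AtomicInteger count = new AtomicInteger();

        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count.incrementAndGet();
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        return count.get(); // always 20000, with no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runCounter());
    }
}
```

With a plain `volatile int` and `count++`, the same test would sporadically print less than 20000 because increments from the two threads can interleave and overwrite each other.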
-
What’s New in Java 26 (Key Features Developers Should Know)

1. Pattern Matching Enhancements
Java continues improving pattern matching for switch and instanceof.

```java
if (obj instanceof String s) {
    System.out.println(s.toUpperCase());
}
```

Why it matters: Cleaner, safer type checks with less boilerplate.

2. Structured Concurrency (Still in Preview)
Helps manage multiple concurrent tasks as a single unit. Note that this API is still a preview feature and has changed shape across releases; recent previews replace ShutdownOnFailure with StructuredTaskScope.open().

```java
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    scope.fork(() -> fetchUser());
    scope.fork(() -> fetchOrders());
    scope.join();
}
```

Why it matters: Simplifies multi-threaded code and error handling.

3. Scoped Values (Better than ThreadLocal)
A safer alternative to ThreadLocal for sharing data.

```java
ScopedValue<String> user = ScopedValue.newInstance();
ScopedValue.where(user, "admin").run(() -> {
    System.out.println(user.get());
});
```

Why it matters: Avoids memory leaks and improves thread safety.

4. Virtual Threads Improvements
Virtual threads continue to mature (Project Loom).

```java
Thread.startVirtualThread(() -> {
    System.out.println("Lightweight task");
});
```

Why it matters: Handle thousands of concurrent requests with minimal resources.

5. Foreign Function & Memory API (Finalized in Java 22, Still Improving)
Interact with native code without JNI.

```java
MemorySegment segment = Arena.ofAuto().allocate(100);
```

Why it matters: High-performance native integration (AI, ML, system-level apps).

6. Performance & GC Improvements
Ongoing improvements in:
- ZGC
- G1 GC
- Startup time
- Memory efficiency

Why it matters: Better latency and throughput for large-scale applications.

7. String Templates (Withdrawn, Pending Redesign)
String Templates aimed to simplify formatting and avoid injection issues, but the preview syntax (STR."Hello \{name}") was removed in JDK 23 and the feature is being redesigned. Do not rely on the old syntax; String.format and String::formatted remain the supported options for now.

Stay updated, but adopt carefully, especially for non-LTS releases.

#Java #Java26 #BackendEngineering #SpringBoot #Concurrency #Performance #SoftwareEngineering
-
A small Java detail that becomes very important in multi-threaded applications: the difference between HashMap and ConcurrentHashMap.

At first glance, both store key-value pairs. But their behavior changes when multiple threads access them.

Example:

```java
Map<String, String> map = new HashMap<>();
```

If multiple threads read and write to a HashMap at the same time, it can lead to unpredictable behavior. Why? Because HashMap is not thread-safe. This means concurrent modifications can cause:
• Data inconsistency
• Lost updates
• Unexpected runtime issues

Now let’s look at ConcurrentHashMap.

ConcurrentHashMap is designed for multi-threaded environments. Instead of locking the entire map, it allows multiple threads to work on different parts of the map at the same time.

Think of it like a supermarket checkout.

Single-lock scenario (a HashMap guarded by one global lock, or a Hashtable): only one billing counter is open. Everyone must wait in a single line.

ConcurrentHashMap scenario: multiple counters are open. Different customers can check out at the same time.

That’s why ConcurrentHashMap performs much better when many threads access shared data.

So the key difference:
HashMap → Not thread-safe
ConcurrentHashMap → Designed for concurrent access

Small Java choices like this can make a big difference in system reliability.

Which one do you usually use in your projects?

#Java #BackendEngineering #JavaTips #ConcurrentProgramming #SoftwareEngineering
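The "lost updates" point can be seen concretely in a small sketch. Here two threads bump a shared counter key; `ConcurrentHashMap.merge` makes each per-key update atomic, so no increment is lost. The class name `ConcurrentMapDemo` and the key `"page"` are my own choices for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentMapDemo {

    // Two threads each add 1 to the same key 1,000 times.
    // merge() performs the read-add-write atomically per key.
    static int countHits() throws InterruptedException {
        Map<String, Integer> hits = new ConcurrentHashMap<>();

        Runnable worker = () -> {
            for (int i = 0; i < 1_000; i++) {
                hits.merge("page", 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        return hits.get("page"); // always 2000 with ConcurrentHashMap
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countHits());
    }
}
```

Running the same worker against a plain `HashMap` with `hits.put("page", hits.get("page") + 1)` would sporadically lose updates, or worse, corrupt the map's internal structure.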
-
How Does ConcurrentHashMap Achieve Thread Safety in Java?

In multithreaded applications, using a normal HashMap can lead to race conditions and inconsistent data. While Hashtable provides thread safety, it locks the entire map, which can reduce performance.

This is where ConcurrentHashMap comes in. It provides high performance and thread safety by allowing multiple threads to read and write simultaneously.

🔹 How it Works

1️⃣ Segment-Level Locking (Java 7)
Instead of locking the entire map, ConcurrentHashMap divides the map into segments. Each segment can be locked independently, allowing multiple threads to work on different segments. This significantly improves concurrency.

2️⃣ Fine-Grained Locking (Java 8+)
In Java 8, the implementation was improved further. Instead of segments, it uses:
✔ CAS (Compare-And-Swap) operations
✔ Node-level synchronization when needed
This allows better performance and scalability.

🔹 Example

```java
import java.util.concurrent.ConcurrentHashMap;

public class Example {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        map.put(1, "Java");
        map.put(2, "Spring");
        map.put(3, "Kafka");

        map.forEach((k, v) -> System.out.println(k + " : " + v));
    }
}
```

Multiple threads can safely read and update the map without blocking the entire structure.

🔹 Key Benefits
✔ Thread-safe operations
✔ Better performance than Hashtable
✔ Allows concurrent reads and writes
✔ Highly scalable in multithreaded environments

In simple terms:
HashMap → Not thread-safe
Hashtable → Thread-safe but slow
ConcurrentHashMap → Thread-safe and optimized for concurrency

#Java #ConcurrentHashMap #Multithreading #JavaDeveloper #Concurrency #Programming
-
Virtual Threads in Java 21 and Java 25 – A Major Evolution in Java Concurrency

For many years, Java handled concurrency using traditional platform threads. These threads map directly to operating system threads and are created using the Thread class or through thread pools such as ExecutorService.

While platform threads work well, they come with limitations. Each thread requires significant memory for its stack, often around 1 MB. When an application needs to handle thousands of concurrent tasks, this memory overhead becomes a major challenge. In addition, operating systems cannot efficiently schedule very large numbers of threads because of context-switching overhead.

To address these limitations, Project Loom introduced Virtual Threads. The feature became stable in Java 21 and continues to evolve in later releases like Java 25. Virtual threads are lightweight threads managed by the JVM instead of the operating system, which allows applications to create thousands or even millions of concurrent tasks without exhausting system resources.

Example of creating a virtual thread:

```java
Thread.startVirtualThread(() -> {
    // task
});
```

Another common approach is using a virtual-thread executor:

```java
ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
```

Unlike platform threads, virtual threads do not map one-to-one with OS threads. Instead, many virtual threads run on a small number of platform threads called carrier threads. When a virtual thread performs a blocking operation such as a database call or HTTP request, the JVM suspends the virtual thread and releases the carrier thread to execute another task. This makes blocking operations much more efficient.

Platform Threads:
- Managed by the operating system
- Higher memory usage
- Limited scalability for massive concurrency
- Require thread pools for control

Virtual Threads:
- Managed by the JVM
- Very small memory footprint
- Can scale to hundreds of thousands of threads
- Simplify concurrent programming

Virtual threads are especially useful for IO-heavy systems such as web servers, microservices, and applications that perform many database or network calls. Modern frameworks like Spring Boot are already adding support for virtual threads, making it easier to build highly scalable services with simpler code.

Virtual threads represent one of the biggest improvements to Java concurrency in recent years and will likely shape how high-concurrency backend systems are built in the future.

#Java #Java21 #Java25 #VirtualThreads #ProjectLoom #SpringBoot #BackendDevelopment #Microservices
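The unmount-on-block behavior is easy to observe. In this sketch (requires Java 21+; the class name `VirtualThreadDemo` and the task counts are my own choices), thousands of tasks each block for 100 ms, yet the whole batch finishes in far less than the sum of the sleeps, because blocked virtual threads release their carriers:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadDemo {

    // Runs n tasks that each block for 100 ms on their own virtual thread
    // and returns the total elapsed wall time in milliseconds.
    static long runTasks(int n) {
        Instant start = Instant.now();

        // try-with-resources: close() waits for all submitted tasks to finish
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(100); // blocking call: the virtual thread unmounts
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }

        return Duration.between(start, Instant.now()).toMillis();
    }

    public static void main(String[] args) {
        // 10,000 blocking tasks complete in roughly the duration of one sleep,
        // not 10,000 * 100 ms, because carriers keep running other virtual threads.
        System.out.println("Elapsed: " + runTasks(10_000) + " ms");
    }
}
```

Running the same loop on a small fixed platform-thread pool would instead take time proportional to tasks divided by pool size, since each sleeping thread holds an OS thread hostage.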
-
Java Collections look simple—but their internals can make or break application performance. Understanding how ArrayList, HashMap, and ConcurrentHashMap work internally helps avoid GC pressure, contention, and scalability bottlenecks in real systems. Data structures matter more than we often realize. 🚀 #Java #PerformanceEngineering #JVM #BackendDevelopment #JavaCollections
-
📚 Collections in Java – Part 3 | Queue & Concurrent Queues 🚀

Continuing my deep dive into the Java Collections Framework, focusing on queue-based data structures and their role in both sequential processing and high-performance concurrent systems.

🔹 Queue – FIFO (First-In-First-Out) data structure for ordered processing
🔹 PriorityQueue – Processes elements based on priority using a Binary Heap
🔹 Deque (Double-Ended Queue) – Insert and remove elements from both ends
🔹 ArrayDeque – Fast, resizable-array implementation of Deque
🔹 BlockingQueue – Thread-safe queue designed for producer–consumer systems
🔹 Concurrent Queue – High-performance non-blocking queues using CAS operations

💡 Key Takeaways:
• Queue follows the FIFO principle for ordered request processing
• PriorityQueue processes elements based on priority instead of insertion order
• Deque supports both FIFO and LIFO operations
• ArrayDeque is usually faster than Stack and LinkedList for queue/stack operations
• BlockingQueue enables safe communication between producer and consumer threads
• Concurrent queues provide lock-free, high-throughput operations for multi-threaded systems

Understanding these structures is important for:
✔ Designing scalable backend systems
✔ Handling asynchronous and concurrent workloads
✔ Building efficient task scheduling mechanisms
✔ Strengthening Core Java and DSA fundamentals

A strong understanding of data structures + concurrency concepts leads to better system design and more efficient applications. 💪

#Java #CoreJava #CollectionsFramework #Queue #PriorityQueue #Deque #ArrayDeque #BlockingQueue #ConcurrentProgramming #JavaDeveloper #BackendDevelopment #DSA #InterviewPreparation #CodesInTransit #MondayMotivation
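The producer–consumer handoff that BlockingQueue enables can be sketched like this (a minimal illustration; the class name `ProducerConsumerDemo`, the bounded `ArrayBlockingQueue` of capacity 5, and the 10-item run are my own choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class ProducerConsumerDemo {

    // Producer puts 1..10 into a bounded queue; consumer takes them out.
    // put() blocks when the queue is full, take() blocks when it is empty,
    // so no explicit locks or wait/notify are needed. Returns the consumed sum.
    static int run() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(5);
        AtomicInteger sum = new AtomicInteger();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++) {
                    queue.put(i); // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    sum.addAndGet(queue.take()); // blocks if the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();

        return sum.get(); // 1 + 2 + ... + 10 = 55
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Consumed sum: " + run());
    }
}
```

The bounded capacity is what provides backpressure: a fast producer cannot flood memory, because `put()` makes it wait for the consumer to catch up.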
-
#Post1 Internal working of HashMap and hash collisions 👇

When we insert a value:

```java
map.put("Apple", 10);
```

Java performs these steps:

1️⃣ It calls hashCode() on the key. For example, Strings generate hash codes using a formula involving the prime number 31.
2️⃣ Using this hash, Java calculates the bucket index inside the internal array, and the entry is stored in that bucket.

Example internal structure:

```
Bucket Array
[0]
[1]
[2] → (Apple,10)
[3]
[4]
```

But what if two keys map to the same bucket? This situation is called a hash collision.

Example:

```
[2] → (Apple,10) → (Banana,20) → (Mango,30)
```

HashMap handles collisions by storing multiple entries inside the same bucket.

Before Java 8:
• Entries were stored in a Linked List

After Java 8:
• If a bucket grows beyond 8 entries, the linked list is converted into a Red-Black Tree

This improves search performance: O(n) → O(log n)

Now let’s see what happens during get() when a bucket has multiple entries. When we call:

```java
map.get("Apple");
```

Java performs these steps:

1️⃣ It recomputes the hashCode() of the key
2️⃣ It finds the same bucket index using (capacity - 1) & hash
3️⃣ If multiple nodes exist in that bucket, Java traverses them

For each node it checks:

```java
existingNode.hash == hash && existingNode.key.equals(key)
```

Once the correct key is found, the corresponding value is returned.

Summary: Two different keys can generate the same hash, which causes a hash collision. HashMap handles collisions by storing entries in a Linked List or a Red-Black Tree inside the bucket.

📌 Note: HashMap is not thread-safe. In the upcoming post, we will explore the thread-safe alternative to HashMap.

#Java #SoftwareEngineering #BackendDevelopment #DataStructures #Programming #LearnInPublic
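The collision behavior described above is easy to observe with real keys: "Aa" and "BB" are distinct strings with identical hash codes (both 2112 under the 31-based formula), so they land in the same bucket and equals() is what disambiguates them. A small sketch (the class name `CollisionDemo` is my own choice):

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // 'A'*31 + 'a' = 65*31 + 97 = 2112, and 'B'*31 + 'B' = 66*31 + 66 = 2112:
        // two different keys, one hash code, one bucket.
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);

        // Both entries survive in the shared bucket; equals() on the key
        // picks the right node during traversal.
        System.out.println(map.get("Aa")); // 1
        System.out.println(map.get("BB")); // 2
    }
}
```

This is exactly the `existingNode.hash == hash && existingNode.key.equals(key)` check in action: the hash comparison alone cannot tell these keys apart, so the equals() call is essential.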