Creating threads manually works. But in real applications it doesn't scale. Why? Because:
• Creating threads is expensive
• Too many threads → memory issues
• Too few threads → underutilized CPU

Professionals use thread pools. In Java, that's done with ExecutorService:

```java
import java.util.concurrent.*;

ExecutorService executor = Executors.newFixedThreadPool(3);

executor.execute(() -> {
    System.out.println("Task run by " + Thread.currentThread().getName());
});

executor.shutdown();
```

What just happened? Instead of creating a new thread every time:
• A fixed number of threads is created
• Tasks are assigned to them
• Threads are reused

This improves:
• Performance
• Resource management
• Scalability

Why thread pools matter

Without thread pools:
• You risk system overload
• Thread-creation overhead increases
• Performance becomes unstable

With thread pools:
• Controlled concurrency
• Better CPU utilization
• Predictable behavior

Bonus: submit() vs execute()
• execute() → no return value
• submit() → returns a Future

```java
// submit() must be called before shutdown()
Future<Integer> result = executor.submit(() -> 10 + 20);
System.out.println(result.get()); // 30
```

Now you're not just running threads. You're managing tasks professionally.

Today was about:
• Why raw threads aren't enough
• What thread pools are
• How ExecutorService works

Concurrency at scale needs structure. Thread pools bring that structure.

#Java #Concurrency #ExecutorService #Multithreading #SoftwareEngineering #ThreadPool #LearningInPublic
Java Thread Pools Improve Performance and Scalability
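Putting the pieces above together, here is a minimal, self-contained sketch of the pattern the post describes: a fixed pool, a submit() that returns a Future, and an orderly shutdown. The class name `PoolDemo` and the 5-second timeout are illustrative choices, not from the original post.

```java
import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // submit() accepts a Callable and returns a Future for the result
        Future<Integer> sum = pool.submit(() -> 10 + 20);
        System.out.println(sum.get()); // get() blocks until the task completes

        // Orderly shutdown: stop accepting new tasks, then wait for running ones
        pool.shutdown();
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
            pool.shutdownNow(); // force-cancel if tasks are stuck
        }
    }
}
```

The shutdown()/awaitTermination() pair matters in practice: without it, non-daemon pool threads keep the JVM alive after main() returns.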
More Relevant Posts
🧵 Stop Over-Engineering Your Threads: The Loom Revolution !!

Remember when handling 10,000 concurrent users meant complex reactive programming or massive memory overhead? In 2026, Java has fixed that.

🛑 The Problem: Platform Threads are Heavy
Traditional Java threads (1:1 mapping to OS threads) are expensive. They take up ~1 MB of stack memory each. If you try to spin up 10,000 threads, your server's RAM is gone before the logic even starts.

✅ The Solution: Virtual Threads (M:N)
Virtual threads are "lightweight" threads managed by the Java runtime, not the OS.
• Low cost: You can now spin up millions of threads on a single laptop.
• Blocking is OK: You no longer need non-blocking callbacks or Flux/Mono. You can write simple, readable synchronous code, and the JVM handles the "parking" of threads behind the scenes.

💡 The "STACKER" Pro-Tip
If you are still using a fixed ThreadPoolExecutor with a limit of 200 threads for your microservices, you are leaving 90% of your performance on the table. In 2026, we switch to:

Executors.newVirtualThreadPerTaskExecutor()

The Goal: Write code like it's 2010 (simple/blocking), but get performance like it's 2026 (massively concurrent).

#Java2026 #ProjectLoom #BackendEngineering #SpringBoot #Concurrency #SoftwareArchitecture #STACKER
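A minimal sketch of the executor mentioned above, assuming Java 21+. The 10 ms sleep is a stand-in for blocking I/O; the class name `VirtualDemo` is illustrative. Ten thousand blocking tasks like this would exhaust a platform-thread pool, but virtual threads park cheaply while blocked.

```java
import java.util.concurrent.*;
import java.util.stream.*;

public class VirtualDemo {
    public static void main(String[] args) throws Exception {
        // One virtual thread per task; requires Java 21+
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var futures = IntStream.range(0, 10_000)
                    .mapToObj(i -> executor.submit(() -> {
                        Thread.sleep(10); // blocking is cheap: the carrier thread is released
                        return i;
                    }))
                    .collect(Collectors.toList());

            long sum = 0;
            for (Future<Integer> f : futures) sum += f.get();
            System.out.println(sum); // 0 + 1 + ... + 9999 = 49995000
        } // ExecutorService is AutoCloseable since Java 19: close() awaits completion
    }
}
```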
Java Streams already made data processing cleaner. But sometimes we want speed. What if operations could run in parallel automatically?

That's exactly what parallel streams do. Instead of processing elements one by one, Java can distribute work across multiple CPU cores.

🔹 Normal Stream

```java
List<Integer> numbers = List.of(1, 2, 3, 4, 5);

numbers.stream()
       .map(n -> n * 2)
       .forEach(System.out::println);
```

Processing happens sequentially, one element at a time.

🔹 Parallel Stream

```java
numbers.parallelStream()
       .map(n -> n * 2)
       .forEach(System.out::println);
```

Now Java may process elements simultaneously. Behind the scenes it uses the ForkJoinPool to split work into smaller tasks and execute them in parallel.

Why parallel streams are powerful: they allow you to add concurrency with one small change — stream() → parallelStream().

But they must be used carefully, because parallel execution can cause:
• Unpredictable order of results
• Race conditions with shared data
• Overhead for small datasets

Best use cases — parallel streams work best when:
• The dataset is large
• Tasks are independent
• Operations are CPU-intensive

Example use cases: processing millions of records, performing calculations, or analyzing data pipelines.

Today was about:
• What parallel streams are
• How Java distributes work across CPU cores
• When parallel execution is beneficial

Concurrency doesn't always need threads. Sometimes it's just one method change. And suddenly your code scales with your hardware.

#Java #ParallelStreams #Concurrency #FunctionalProgramming #JavaStreams #SoftwareEngineering #LearningInPublic
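A small sketch of the "unpredictable order" caveat and its fix: `forEach` on a parallel stream makes no ordering promise, but a terminal `collect` into a list preserves encounter order even when the mapping runs in parallel. Class name `ParallelOrder` is illustrative.

```java
import java.util.*;
import java.util.stream.*;

public class ParallelOrder {
    public static void main(String[] args) {
        List<Integer> numbers = IntStream.rangeClosed(1, 100)
                                         .boxed()
                                         .collect(Collectors.toList());

        // forEach on a parallel stream gives no ordering guarantee;
        // collect() preserves the encounter order even in parallel.
        List<Integer> doubled = numbers.parallelStream()
                                       .map(n -> n * 2)
                                       .collect(Collectors.toList());

        System.out.println(doubled.get(0));  // first element: 1 * 2 = 2
        System.out.println(doubled.get(99)); // last element: 100 * 2 = 200
    }
}
```

`forEachOrdered` is the other option when you only need ordered side effects, at the cost of some parallelism.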
Stop treating Virtual Threads like OS Threads. It's killing your scalability.

When designing the Exeris Kernel (a zero-copy, Java 26+ runtime), I made a controversial architectural decision: I completely banned ThreadLocal from the codebase.

Why? Because in a high-density environment where you spawn 1,000,000 Virtual Threads using Structured Concurrency, InheritableThreadLocal becomes a performance serial killer. Every child task forces the JVM to clone maps, triggering an avalanche of garbage-collection pauses. You lose the very scalability Loom was supposed to give you.

We need a paradigm shift: from "thread-bound state" to "scope-bound state."

By replacing ThreadLocal entirely with JEP 506 (Scoped Values), we achieved:
- O(1) constant-time context inheritance
- Zero memory leaks (lexically bounded lifecycle)
- True immutability for security contexts

I wrote a deep-dive forensic analysis on why Java 8 patterns don't work in Java 26, and how Exeris uses the "Invisible Wall" pattern to route 1M streams safely.

Check out the full article here: https://lnkd.in/dbEBWyH5

Would you ban ThreadLocal in your next project? Let's debate.

#Java #ProjectLoom #VirtualThreads #SoftwareArchitecture #Performance #DeepTech #Exeris
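For readers unfamiliar with Scoped Values, here is a minimal sketch of the "scope-bound state" idea, assuming Java 25+ (where JEP 506 is final; earlier JDKs need preview flags). The class name `ScopeDemo` and the `USER` binding are illustrative, not from the Exeris codebase.

```java
// java.lang.ScopedValue requires Java 25+ (JEP 506 final)
public class ScopeDemo {
    // A ScopedValue is bound for a lexical scope, not attached to the thread
    private static final ScopedValue<String> USER = ScopedValue.newInstance();

    static void handle() {
        // Readable anywhere inside the where(...).run(...) scope, immutably
        System.out.println("user=" + USER.get());
    }

    public static void main(String[] args) {
        ScopedValue.where(USER, "alice").run(ScopeDemo::handle);

        // Outside the scope the binding is gone automatically: no leak to clean up
        System.out.println(USER.isBound());
    }
}
```

Unlike ThreadLocal, there is no `set()` and no `remove()`: the binding's lifetime is exactly the lexical scope, which is what makes inheritance into child tasks cheap.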
Engineering at the L0 level requires more than just adopting new APIs — it requires discarding legacy habits that kill performance. At Exeris, we are building a zero-copy runtime for the next decade of cloud computing. Banning ThreadLocal wasn't just a choice; it was a necessity to achieve true 1-VT-per-Stream density without the massive memory tax of the past. Our founder Arkadiusz Przychocki breaks down the forensic reasons why we moved to ScopedValues to preserve L1/L2 cache locality and eliminate GC pressure at scale. This is how we define "Zero-Waste Compute".
👉🏻 Call by reference: a convention where the compiler passes an address for the actual parameter to the callee. If the actual parameter is a variable, then changing the formal parameter's value also changes the actual parameter's value. To pass such a parameter, the caller passes an address rather than a value:
• If the actual parameter resides in memory, the caller passes its memory address.
• If the actual parameter is an expression, the caller evaluates the expression, stores its value in the caller's local data area, and passes the address of that location.
• Values kept in registers and constants are handled in the same way as expressions.

Inside the callee, each reference to a formal parameter needs an extra level of indirection.

Call by reference differs from call by value in two critical ways. First, if the caller passes a variable x as a call-by-reference actual parameter bound to y in the callee, then any change to y is also a change to x. Second, if the callee can also access x directly, then x has two names inside the callee, which can lead to counterintuitive behavior.

👉🏻 Call by value: a method of passing arguments to a function where a copy of the actual parameter's value is made in memory and passed to the function's formal parameter. Because the function operates on a copy, any modifications made inside the function do not affect the original variable in the caller.

#AnandKumar Buddarapu #java
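Since the post is tagged #java, one point worth making concrete: Java is strictly call by value. Object references are themselves copied, so mutating the shared object through the copy is visible to the caller, but rebinding the parameter is not. A minimal illustration (class and method names are made up for the example):

```java
public class PassDemo {
    static void reassign(int n, StringBuilder sb) {
        n = 99;                        // affects only the local copy of the int
        sb = new StringBuilder("new"); // rebinds the local reference: caller unaffected
    }

    static void mutate(StringBuilder sb) {
        sb.append(" world");           // mutates the shared object: visible to the caller
    }

    public static void main(String[] args) {
        int x = 1;
        StringBuilder s = new StringBuilder("hello");

        reassign(x, s);
        System.out.println(x); // 1
        System.out.println(s); // hello

        mutate(s);
        System.out.println(s); // hello world
    }
}
```

This is why Java is sometimes loosely described as "call by reference for objects": the reference is the value being copied.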
Your container keeps restarting from OOM (out of memory), but the Java heap looks fine? Before you go all-in on a heap leak, check your threads.

I have seen this in containerized Java services: memory creeps up over hours or days, the container hits its limit, gets killed, then restarts. No dramatic spike, just a slow climb. Threads can drive that climb because they use native memory for stacks (plus per-thread overhead), which will not show up as "heap used".

Mini playbook (5-minute triage):
1) Confirm the restart reason matches the memory limit.
2) Get a shell into the running container.
3) Find the Java PID (try `jcmd -l`).
4) Run: `jcmd <pid> Thread.print`
5) Scan for:
- a surprisingly high total thread count
- repeating thread names or pools that keep growing
- many similar stacks pointing to thread creation (`new Thread(...)`, `Executors.new*`, schedulers, custom thread factories)

If the thread count keeps increasing, treat it like a leak:
- bound pools and queues
- reuse executors instead of creating new ones
- shut them down on lifecycle events

The nice part is that `Thread.print` often points to the code path creating threads, which is faster than guessing from memory graphs alone.

What's your go-to move when a container is OOM-killed: thread dump first, heap dump first, or something else?

#java #kubernetes #observability #performance #memory #jvm
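The "bound and reuse" fix can be sketched in a few lines: one fixed pool handles many tasks with a handful of worker threads, instead of one native stack per task. The class name `ThreadReuse` and the pool size of 2 are illustrative.

```java
import java.util.*;
import java.util.concurrent.*;

public class ThreadReuse {
    public static void main(String[] args) throws Exception {
        // Anti-pattern: `new Thread(...)` per request grows native memory unboundedly.
        // Fix: one bounded, reused executor for the whole component.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Set<String> workerNames = ConcurrentHashMap.newKeySet();

        for (int i = 0; i < 20; i++) {
            pool.submit(() -> workerNames.add(Thread.currentThread().getName()));
        }

        pool.shutdown(); // shut executors down on lifecycle events
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // 20 tasks ran, but at most 2 worker threads ever existed
        System.out.println(workerNames.size() <= 2);
    }
}
```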
🚀 Day 15 — JVM Memory Model (JMM): What Every Java Engineer Must Understand

The Java Memory Model (JMM) defines how and when threads see updates made by other threads. If you're building high-throughput or multi-threaded systems, understanding the JMM is not optional — it's foundational.

Here's a crisp breakdown 👇

🚀 Why the JMM Exists
Modern CPUs reorder instructions for performance. Compilers reorder operations too. Without rules, multithreaded programs would behave unpredictably. The JMM establishes visibility, ordering, and happens-before guarantees.

🧠 Key Concepts

1️⃣ Working Memory vs Main Memory
Each thread has:
- Working memory → thread-local (registers, CPU-cache copies of variables)
- Main memory → shared across threads
Threads don't always write directly to main memory — hence visibility issues.

2️⃣ Happens-Before Relationship
This determines which actions are guaranteed to be visible to other threads. Examples:
- Unlock happens-before a subsequent lock on the same monitor
- Writing to a volatile variable happens-before reading it
- Thread start happens-before any action inside the thread

3️⃣ Reordering Rules
The JIT and CPU may reorder instructions — unless the JMM prevents it. The JMM ensures reorderings never violate happens-before constraints.

4️⃣ Volatile & Synchronization Under the JMM
- volatile → guarantees visibility + ordering
- synchronized → guarantees mutual exclusion + visibility
- Locks (ReentrantLock) → follow the same memory-visibility rules as synchronized

5️⃣ What Happens Without JMM Guarantees?
You get:
- Stale reads
- Lost updates
- Instructions executing out of logical order
- Race conditions
- Hard-to-reproduce production bugs

✅ Why Java Developers Must Care
The JMM directly impacts:
- Correctness of concurrent algorithms
- Performance of multi-threaded apps
- Microservice request handling under load
- Safe use of async patterns
- High-performance in-memory caching

🔍 Summary
The JMM is not about memorizing definitions — it's about understanding how threads see memory, and designing code that respects these rules. If you know the JMM, you write safer, faster, more predictable Java systems.

#100DaysOfJavaArchitecture #Java #JavaMemoryManagement #Threads #SoftwareArchitecture #Microservices
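One happens-before edge from the list above can be shown deterministically: everything a thread does is visible after `join()` on that thread returns. A minimal sketch (class name `HappensBefore` is illustrative):

```java
public class HappensBefore {
    static int data = 0; // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> data = 42);

        // start() happens-before everything the new thread does,
        // and everything the thread does happens-before join() returning.
        writer.start();
        writer.join();

        // Guaranteed to print 42: join() provides the visibility edge,
        // so no volatile or lock is needed for this read.
        System.out.println(data);
    }
}
```

Remove the `join()` and the read becomes a data race: it could legally print 0.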
Is SOLID making your Java applications slower? The uncomfortable answer is: yes. But probably not for the reason you think.

I often see engineers debating whether clean-code principles like SOLID sacrifice performance. In the JVM world, the "Static vs. Dynamic" trade-off is real:

• 𝗧𝗵𝗲 𝗖𝗼𝘀𝘁 𝗼𝗳 𝗦𝗢𝗟𝗜𝗗: Interfaces and Dependency Inversion lead to virtual method calls. For the JIT compiler, deep abstraction layers can act as "inlining barriers."
• 𝗧𝗵𝗲 "𝗦𝘁𝗮𝘁𝗶𝗰" 𝗦𝗽𝗲𝗲𝗱: Monolithic, static code is a JIT's dream. It's predictable, easy to inline, and has better data locality.

Unless you are building a High-Frequency Trading (HFT) engine where every microsecond is $$, your bottleneck isn't an interface. It's your database locks, your network I/O, or that unoptimized SQL query.

Don't trade maintainability for "ghost performance." Modern JVMs are incredibly smart at optimizing monomorphic calls. Optimize for the human who has to read your code at 3 AM first; optimize for the machine only after you've looked at the profiler.

Have you ever had to "break" SOLID for a genuine performance reason? I'd love to hear the use case.

#Java #SystemDesign #Fintech #SoftwareEngineering #TechnologyLeadership
🔥 Day 2 — Thread Safety in Java: Common Mistakes Developers Make

In high-scale systems, thread safety is not optional — it's critical. Yet many production issues come from simple mistakes. Here are some common ones 👇

⚠ 1. Shared Mutable State
Multiple threads modifying the same object without control leads to unpredictable behavior.
👉 Fix: Prefer immutable objects or limit shared state.

⚠ 2. Using Non-Thread-Safe Collections
Using HashMap or ArrayList in concurrent environments can cause data corruption.
👉 Fix: Use ConcurrentHashMap or CopyOnWriteArrayList.

⚠ 3. Improper Synchronization
Overusing synchronized blocks can hurt performance, while underusing them causes race conditions.
👉 Fix: Use fine-grained locking or concurrent utilities.

⚠ 4. Ignoring Race Conditions
Code that "works locally" may fail under load due to timing issues.
👉 Fix: Use atomic classes (AtomicInteger, etc.) or proper locking.

⚠ 5. Blocking Calls in Multithreaded Code
Blocking threads (DB/API calls) reduces system throughput.
👉 Fix: Use async processing and thread pools wisely.

💡 Architect insight: In systems like payments or high-frequency transactions, thread-safety issues can lead to:
❌ Duplicate processing
❌ Inconsistent data
❌ Production outages

Design with concurrency in mind from day one.

👉 What's the most difficult concurrency bug you've faced?

#100DaysOfJavaArchitecture #Java #Concurrency #Microservices #SoftwareArchitecture
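Mistake 4 has a compact fix worth seeing in code. With a plain `int`, `counter++` is a read-modify-write that loses updates under contention; `AtomicInteger` makes it a single atomic operation. A minimal sketch (class name `SafeCounter` is illustrative):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    public static void main(String[] args) throws Exception {
        // AtomicInteger turns increment into one atomic hardware operation,
        // so concurrent increments can never be lost.
        AtomicInteger counter = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(8);

        for (int i = 0; i < 1_000; i++) {
            pool.submit(counter::incrementAndGet);
        }

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println(counter.get()); // always 1000: no lost updates
    }
}
```

Swap `AtomicInteger` for a plain `int` field and the printed total is typically below 1000 under load — the classic race condition.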
Imagine this situation. You and your friend are working on a 𝐬𝐡𝐚𝐫𝐞𝐝 𝐰𝐡𝐢𝐭𝐞𝐛𝐨𝐚𝐫𝐝. You write a number on the board. Your friend reads it. Simple, right?

But now imagine both of you are working 𝐢𝐧 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭 𝐫𝐨𝐨𝐦𝐬 with 𝐲𝐨𝐮𝐫 𝐨𝐰𝐧 𝐜𝐨𝐩𝐢𝐞𝐬 𝐨𝐟 𝐭𝐡𝐞 𝐛𝐨𝐚𝐫𝐝. You update your board. But your friend still sees the 𝐨𝐥𝐝 𝐯𝐚𝐥𝐮𝐞.

This is exactly what can happen in 𝗺𝘂𝗹𝘁𝗶𝘁𝗵𝗿𝗲𝗮𝗱𝗲𝗱 𝗝𝗮𝘃𝗮 𝗽𝗿𝗼𝗴𝗿𝗮𝗺𝘀. Each thread may work with its 𝐨𝐰𝐧 𝐜𝐚𝐜𝐡𝐞𝐝 𝐜𝐨𝐩𝐲 𝐨𝐟 𝐯𝐚𝐫𝐢𝐚𝐛𝐥𝐞𝐬 instead of the shared memory. And that's where the 𝗝𝗮𝘃𝗮 𝗠𝗲𝗺𝗼𝗿𝘆 𝗠𝗼𝗱𝗲𝗹 (𝗝𝗠𝗠) comes in.

What is the Java Memory Model?
The 𝗝𝗮𝘃𝗮 𝗠𝗲𝗺𝗼𝗿𝘆 𝗠𝗼𝗱𝗲𝗹 defines how:
• Threads interact with memory
• Changes made by one thread become visible to others
• The JVM handles caching and reordering
Without rules like this, concurrent programs would behave unpredictably.

𝐓𝐡𝐞 𝐕𝐢𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐲 𝐏𝐫𝐨𝐛𝐥𝐞𝐦
Example:

```java
class FlagExample {
    static boolean running = true;

    public static void main(String[] args) {
        new Thread(() -> {
            while (running) {
                // waiting
            }
            System.out.println("Thread stopped");
        }).start();

        running = false;
    }
}
```

You might expect the thread to stop immediately. But sometimes 𝗶𝘁 𝗸𝗲𝗲𝗽𝘀 𝗿𝘂𝗻𝗻𝗶𝗻𝗴 𝗳𝗼𝗿𝗲𝘃𝗲𝗿. Why? Because the thread may keep reading a 𝗰𝗮𝗰𝗵𝗲𝗱 𝘃𝗮𝗹𝘂𝗲 𝗼𝗳 𝗿𝘂𝗻𝗻𝗶𝗻𝗴. It never sees the update.

The Fix → 𝐯𝐨𝐥𝐚𝐭𝐢𝐥𝐞

```java
static volatile boolean running = true;
```

Now Java guarantees:
• Updates are visible to all threads
• Threads read the latest value from memory
𝐯𝐨𝐥𝐚𝐭𝐢𝐥𝐞 ensures 𝘃𝗶𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆, not locking.

𝐊𝐞𝐲 𝐈𝐝𝐞𝐚
Multithreading problems are not always about race conditions. Sometimes the issue is simply: threads not seeing the same data.

Today was about understanding:
• Why threads may see 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁 𝘃𝗮𝗹𝘂𝗲𝘀
• What the 𝗝𝗮𝘃𝗮 𝗠𝗲𝗺𝗼𝗿𝘆 𝗠𝗼𝗱𝗲𝗹 controls
• How 𝐯𝐨𝐥𝐚𝐭𝐢𝐥𝐞 ensures 𝘃𝗶𝘀𝗶𝗯𝗶𝗹𝗶𝘁𝘆

Concurrency is not only about 𝗿𝘂𝗻𝗻𝗶𝗻𝗴 𝘁𝗵𝗶𝗻𝗴𝘀 𝗳𝗮𝘀𝘁𝗲𝗿. It's also about 𝗺𝗮𝗸𝗶𝗻𝗴 𝘀𝘂𝗿𝗲 𝗲𝘃𝗲𝗿𝘆 𝘁𝗵𝗿𝗲𝗮𝗱 𝘀𝗲𝗲𝘀 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲 𝗿𝗲𝗮𝗹𝗶𝘁𝘆.

#Java #JavaConcurrency #JavaMemoryModel #Multithreading #SoftwareEngineering #LearningInPublic #Programming
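A complete, runnable version of the fix described above, with a `join` added so the program verifiably terminates. The 100 ms sleep (to let the worker enter its loop before the flag flips) is an addition for the demo, not part of the original snippet.

```java
public class FlagFixed {
    // volatile guarantees the worker thread sees the write from main
    static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until another thread clears the flag
            }
            System.out.println("Thread stopped");
        });
        worker.start();

        Thread.sleep(100);  // let the worker enter the loop first
        running = false;    // visible to the worker thanks to volatile

        worker.join(5_000); // returns almost immediately once the flag is seen
        System.out.println(worker.isAlive()); // false: the loop exited
    }
}
```

Without `volatile`, the JIT is allowed to hoist the `running` read out of the loop, and this program may never print anything.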