InterruptedException is not an error. It’s how threads are asked to stop. And ignoring it can make your application impossible to shut down.

---

In Java’s threading model, interruption was never designed as a failure mechanism. It’s a signal. A coordination event between threads.

---

Calling interrupt() is the intended way to ask a thread to stop. But it doesn’t stop it. It sets a flag. And if the thread is blocked (in sleep(), wait(), join(), or a blocking queue), it reacts by throwing InterruptedException.

Here is the trap: when that exception is thrown, the flag is cleared. If you ignore it, you erase the signal. If you care about it, you must restore it:

Thread.currentThread().interrupt();

---

This is the model. And most code ignores it. Consider this:

try {
    queue.take();
} catch (InterruptedException e) {
    // ignore
}

Looks harmless. It’s not. From that point on, your thread behaves as if no interruption ever happened. Another thread (often an executor shutting down) asked it to stop. Your code said: no.

This is how systems become impossible to shut down cleanly. Threads keep running. Executors don’t terminate. Shutdown hooks hang. And eventually: kill -9

This is not a rare edge case. It’s the direct consequence of coding against the model.

---

There is a contract. If you catch InterruptedException, you must either:
- propagate it
- or restore the flag

Interruption is not about failure. It’s about control. It’s how the JVM coordinates lifecycle across threads. When you ignore it, you’re not just hiding a problem. You’re breaking the control plane of your application.

Final thought

Most systems don’t fail because something crashed. They fail because something refused to stop. A thread that ignores interruption is not resilient. It’s uncontrollable. And in production, uncontrollable systems don’t degrade. They hang. Then they get killed.

💬 How do you handle interruption in your production code?

#Java #JVM #Multithreading #Backend #SoftwareEngineering
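To make the contract concrete, here is a minimal runnable sketch (class, method, and queue names are mine, not from the post) showing why restoring the flag lets an executor terminate cleanly after shutdownNow():

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class InterruptDemo {
    // Returns true if the pool terminates cleanly after shutdownNow().
    static boolean shutsDownCleanly() throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    queue.take(); // blocks until an item arrives or we are interrupted
                } catch (InterruptedException e) {
                    // take() cleared the flag; restore it so the loop condition sees it
                    Thread.currentThread().interrupt();
                }
            }
        });
        pool.shutdownNow(); // delivers the interrupt to the worker
        return pool.awaitTermination(2, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("clean shutdown: " + shutsDownCleanly()); // true
    }
}
```

Delete the Thread.currentThread().interrupt() line and the loop spins forever on a cleared flag: awaitTermination times out, exactly the hang described above.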
laurent kloeble’s Post
More Relevant Posts
Stopping Threads Safely

Java does not allow killing a thread directly. Interrupts are the “polite” way to request a thread to stop. Threads should check Thread.interrupted() or catch InterruptedException.

Raw Threads vs Thread Pools
With raw threads, you can interrupt them directly. With thread-pool threads, you use ExecutorService.shutdownNow() or Future.cancel(true) to signal cancellation.

Callable and Future
Wrapping tasks in Callable allows you to manage them with a Future. Future.cancel(true) interrupts the task if it’s running. Useful for applying timeouts to long-running tasks.

Volatile / AtomicBoolean Flags
Another approach is a shared flag (volatile boolean stop = false;). The thread periodically checks this flag to decide whether to exit. AtomicBoolean provides thread-safe updates.

Timeout Strategies
For blocking operations (DB calls, HTTP requests), combine interrupts with timeout-aware APIs. Example: future.get(timeout, TimeUnit.SECONDS).

Practical Applications
- Database calls: interrupt long stored procedures when they exceed the SLA.
- HTTP requests: wrap in a Future with a timeout to avoid hanging threads.
- Schedulers: cancel tasks after a fixed duration to keep the system responsive.

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
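A sketch of the Future.cancel(true) + timeout pattern described above (class and method names are illustrative; Thread.sleep stands in for a slow call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // Returns "done" if the task finishes in time, or "cancelled" on timeout.
    static String callWithTimeout(long taskMillis, long timeoutMillis) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> future = pool.submit(() -> {
                Thread.sleep(taskMillis); // stand-in for a slow DB/HTTP call
                return "done";
            });
            try {
                return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true); // true = interrupt the running task
                return "cancelled";
            }
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callWithTimeout(5_000, 100)); // cancelled
        System.out.println(callWithTimeout(10, 2_000));  // done
    }
}
```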
Understanding the Magic Under the Hood: How the JVM Works ☕️⚙️

Ever wondered how your Java code actually runs on any device, regardless of the operating system? The secret sauce is the Java Virtual Machine (JVM). The journey from a .java file to a running application is a fascinating multi-stage process. Here is a high-level breakdown of the lifecycle:

1. The Build Phase 🛠️
It all starts with your Java source file. When you run the compiler (javac), it doesn’t create machine code. Instead, it produces bytecode, stored in .class files. This is the “Write Once, Run Anywhere” magic!

2. Loading & Linking 🔗
Before execution, the JVM’s class loader subsystem takes over:
• Loading: pulls in class files from various sources.
• Linking: verifies the code for safety, prepares memory for static variables, and resolves symbolic references.
• Initialization: executes static initializers and assigns values to static variables.

3. Runtime Data Areas (Memory) 🧠
The JVM manages memory by splitting it into specific zones:
• Shared areas: the Heap (where objects live) and the Method Area are shared across all threads.
• Thread-specific: each thread gets its own Stack, PC Register, and Native Method Stack for isolated execution.

4. The Execution Engine ⚡
This is the powerhouse. It uses two main tools:
• Interpreter: quickly reads and executes bytecode instructions.
• JIT (Just-In-Time) compiler: identifies “hot” methods that run frequently and compiles them directly into native machine code for massive performance gains.

The Bottom Line: the JVM isn’t just an interpreter; it’s a sophisticated engine that optimizes your code in real time, manages your memory via garbage collection (GC), and ensures platform independence. Understanding these internals makes us better developers, helping us write more efficient code and debug complex performance issues.

#Java #JVM #SoftwareEngineering #Programming #BackendDevelopment #TechExplainers #JavaVirtualMachine #CodingLife
A lot of Java devs solve thread-safety issues with one keyword: synchronized. And honestly, sometimes that’s the right call. Simple, readable, gets the job done.

But I’ve seen codebases where synchronized is used everywhere, and once traffic grows, performance starts falling apart.

What happens? Only one thread can enter that block/method at a time. If 50 requests hit it together: 1 executes, 49 wait. So even if your app server has resources, requests are still lining up behind one lock.

Where it gets ugly: when slow work is inside the lock:
- DB queries
- External API calls
- File operations
- Heavy loops / processing

Now threads aren’t waiting for CPU. They’re waiting because one thread is holding a lock during slow operations. That’s where response times spike.

The better approach depends on the case:
- Use ReentrantLock if you need more control: tryLock(), timeouts, fairness, interruptible waits.
- Use concurrent collections like ConcurrentHashMap instead of manually synchronizing shared maps/lists.
- Don’t lock the whole method if only one small state update needs protection.
- Use AtomicInteger / AtomicLong for counters instead of full locks.

Real takeaway: thread safety matters. But making everything synchronized is not a concurrency strategy. First make it correct. Then make it scale.

#Java #Concurrency #Multithreading #BackendDevelopment #Performance #SoftwareEngineering
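As a sketch of the last point, assuming a simple hit counter (names are mine): AtomicInteger stays correct under contention with a lock-free CAS, so no thread ever lines up behind a lock.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    // Increments a shared counter from many threads with no synchronized block.
    static int countTo(int n, int threads) throws InterruptedException {
        AtomicInteger hits = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < n; i++) {
            pool.submit(hits::incrementAndGet); // lock-free compare-and-swap update
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return hits.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countTo(100_000, 8)); // 100000, no lost updates
    }
}
```

With a plain int and no lock, the same run would lose updates; with synchronized, it would serialize all eight threads.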
How the JVM Works

We compile, run, and debug Java code all the time. But what exactly does the JVM do between compile and run? Here’s the flow:

Build: javac compiles your source code into platform-independent bytecode, stored as .class files, JARs, or modules.

Load: the class loader subsystem brings in classes as needed using parent delegation. Bootstrap handles core JDK classes, Platform covers extensions, and System loads your application code.

Link: the Verify step checks bytecode safety. Prepare allocates static fields with default values, and Resolve turns symbolic references into direct memory addresses.

Initialize: static variables are assigned their actual values, and static initializer blocks execute. This happens only the first time the class is used.

Memory: the Heap and Method Area are shared across threads. The JVM stack, PC register, and native method stack are created per thread. The garbage collector reclaims unused heap memory.

Execute: the interpreter runs bytecode directly. When a method gets called many times, the JIT compiler converts it to native machine code and stores it in the code cache. Native calls go through JNI to reach C/C++ libraries.

Run: your program runs on a mix of interpreted and JIT-compiled code. Fast startup, peak performance over time.

Thanks to ByteByteGo

#JVM #JavaVirtualMachine
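The “only the first time the class is used” part of the Initialize step is observable from plain Java. A small sketch (class names are mine; note ANSWER is deliberately not a compile-time constant, since constants are inlined and would not trigger initialization):

```java
public class InitDemo {
    static int initCount = 0;

    static class Config {
        static { initCount++; } // runs exactly once, during class initialization
        // Integer.parseInt keeps this from being a compile-time constant,
        // so reading it really does trigger initialization of Config.
        static final int ANSWER = Integer.parseInt("42");
    }

    public static void main(String[] args) {
        System.out.println("before first use: initCount=" + initCount); // 0
        int a = Config.ANSWER; // first use: the JVM initializes Config now
        int b = Config.ANSWER; // already initialized: static block does not re-run
        System.out.println("after two uses: initCount=" + initCount);   // 1
    }
}
```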
𝐖𝐡𝐲 𝐢𝐬 𝐦𝐲 𝐂𝐮𝐬𝐭𝐨𝐦 𝐀𝐧𝐧𝐨𝐭𝐚𝐭𝐢𝐨𝐧 𝐫𝐞𝐭𝐮𝐫𝐧𝐢𝐧𝐠 𝐧𝐮𝐥𝐥? 🤯

Every Java developer eventually tries to build a custom validation or logging engine, only to get stuck when method.getAnnotation() returns null. The secret lies in the @Retention meta-annotation. If you don’t understand these three levels, your reflection-based engine will never work:

1️⃣ SOURCE (e.g., @Override, @SuppressWarnings)
Where? Only in your .java files.
Why? It’s for the compiler. Once the code is compiled to .class, these annotations are GONE. You cannot find them at runtime.

2️⃣ CLASS (the default!)
Where? Stored in the .class file.
Why? Used by bytecode analysis tools (like SonarLint or AspectJ). But here’s the kicker: the JVM ignores them at runtime. If you try to read them via reflection, you get null.

3️⃣ RUNTIME (e.g., @Service, @Transactional)
Where? Stored in the bytecode AND loaded into memory by the JVM.
Why? This is the “magic zone.” Only these can be accessed by your code while the app is running.

In my latest deep dive, I built a custom geometry engine using reflection. I showed exactly how to use @Retention(RUNTIME) to create a declarative validator that replaces messy if-else checks.

If you’re still confused about why your custom metadata isn’t “visible,” this breakdown is for you. 👇 Link to the full build and source code in the first comment!

#Java #Backend #SoftwareArchitecture #ReflectionAPI #CleanCode #ProgrammingTips
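A minimal sketch of the RUNTIME case (the @Audited annotation and charge() method are invented for illustration): with @Retention(RUNTIME), getAnnotation() returns the annotation instead of null.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class RetentionDemo {
    @Retention(RetentionPolicy.RUNTIME) // keep it in memory, visible to reflection
    @Target(ElementType.METHOD)
    @interface Audited { String value(); }

    @Audited("payments")
    static void charge() {}

    public static void main(String[] args) throws Exception {
        Method m = RetentionDemo.class.getDeclaredMethod("charge");
        // Non-null ONLY because retention is RUNTIME; with SOURCE or CLASS this is null.
        Audited a = m.getAnnotation(Audited.class);
        System.out.println(a == null ? "null" : a.value()); // payments
    }
}
```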
🛑 Stop Saying the Garbage Collector Cleans the Stack

A misconception that still appears in backend discussions is the belief that “GC handles all memory in Java.” This is not true, and understanding the distinction is crucial for performance.

Stack vs Heap: Two Very Different Worlds

1. Stack (Execution Memory)
- Every method call creates a stack frame; when the method returns, the frame is discarded.
- No GC involvement. No tracing or sweeping.
- Lifecycle is deterministic (tied to method execution).
- The JVM may internally allocate frames differently, but their lifecycle is strictly bound to execution, not garbage collection.

2. Heap (Managed Memory)
- Objects reside here, and this is where the garbage collector operates.
- Uses algorithms like generational collection, marking, and compaction.
- Trades memory efficiency for runtime overhead.
- Can introduce pauses or CPU overhead depending on allocation patterns.

💡 The Important Insight
The stack doesn’t free heap memory; it determines reachability. When a method returns:
- Its stack frame disappears.
- References held in that frame disappear.
- Objects referenced only from that frame become eligible for GC.

📚 JVM Spec (§2.5.2): frames are created and destroyed with method execution, not managed by the garbage collector.

#Java #JVM #Backend #Performance #SystemDesign
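The reachability point can be sketched with a WeakReference (class and method names are mine). Caveat: System.gc() is only a request, so clearing is not formally guaranteed, though on a stock HotSpot JVM a weakly reachable object is collected here in practice.

```java
import java.lang.ref.WeakReference;

public class ReachabilityDemo {
    static WeakReference<Object> escape() {
        Object local = new Object();       // strong reference lives in this frame
        return new WeakReference<>(local); // a weak ref does NOT keep it alive
    }

    public static void main(String[] args) {
        WeakReference<Object> ref = escape(); // frame gone -> object only weakly reachable
        System.gc();                          // request (not guarantee) a collection
        // On a typical HotSpot run this prints null: the frame's disappearance
        // made the object eligible, and GC then reclaimed it from the heap.
        System.out.println(ref.get());
    }
}
```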
#Post11 In the previous post (https://lnkd.in/dynAvNrN), we saw how to create threads in Java. Now let’s talk about a problem. If creating threads is so simple… why don’t we just create a new thread every time we need one?

Let’s say we are building a backend system. For every incoming request/task, we create a new thread:

new Thread(() -> {
    // process request
}).start();

This looks simple. But this approach breaks very quickly in real systems, for the reasons below.

Problem 1: Thread creation is expensive
Creating a thread is not just creating an object. It involves:
• allocating stack memory
• registering with the OS
• scheduling overhead
Creating thousands of threads means performance degradation.

Problem 2: Too many threads → too much context switching
We already saw this earlier (https://lnkd.in/dYG3v-vb). More threads does NOT mean more performance. Instead, the CPU spends more time switching and less time doing actual work.

Problem 3: No control over thread lifecycle
When you create threads manually, there is no limit on the number of threads, no reuse, and failures are hard to manage. This quickly becomes difficult as the system grows.

So what’s the solution? Instead of creating threads manually, we use the Executor Framework. In simple words: earlier, we were manually hiring a worker (thread) for every task. With an Executor, we have a team of workers (a thread pool), and we just assign tasks to them.

Key idea: instead of creating a new thread for every task, we submit tasks to a pool of reusable threads. This is exactly what Java provides with the Executor Framework.

Key takeaway: manual thread creation works for learning, but does not scale in real-world systems. Thread pools help:
• control the number of threads
• reduce overhead
• improve performance
We no longer manage threads directly; we delegate that responsibility to the Executor Framework.

In the next post, we’ll see how the Executor Framework works and how to use it in Java.

#Java #Multithreading #Concurrency #BackendDevelopment #SoftwareEngineering
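A minimal sketch of the “team of workers” idea (names are mine): a fixed pool serves 100 tasks while ever creating at most 4 threads, instead of 100.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Runs `tasks` tasks on a fixed pool and reports how many distinct
    // worker threads were actually used.
    static int workerCount(int tasks, int poolSize) throws InterruptedException {
        Set<String> workers = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        for (int i = 0; i < tasks; i++) {
            // Each task just records which pooled thread ran it.
            pool.submit(() -> workers.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return workers.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 tasks, yet no more than 4 threads ever exist
        System.out.println("threads used: " + workerCount(100, 4));
    }
}
```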
Ever had a bug that only appeared when your numbers got bigger? You might have been haunted by the Integer Cache. Look at this logic:

Integer a = 100;
Integer b = 100;
// a == b is TRUE

Integer c = 200;
Integer d = 200;
// c == d is FALSE

Why the inconsistency? To optimize performance and save memory, the JVM maintains an internal cache for Integer objects, but only for the range -128 to 127.

Inside the range: Java reuses the same memory reference. == returns true.
Outside the range: Java creates a brand new object in the heap. The memory references differ, so == returns false.

The Phantom’s Rule: in professional backend engineering, identity (==) is not equality (.equals()). If you’re comparing values, always use .equals(). Don’t let the JVM’s hidden optimizations trick your business logic.

Have you ever been bitten by a caching “glitch” in production? Let’s discuss below. 👇

#TheBytecodePhantom #JavaInternals #BackendEngineering #SoftwareArchitecture #CodingTips #JVM #CleanCode
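The snippet above as a runnable sketch (one caveat: the upper bound of the cache is 127 by default but can be raised with -XX:AutoBoxCacheMax, so the 200 case assumes default settings):

```java
public class IntegerCacheDemo {
    public static void main(String[] args) {
        Integer a = 100, b = 100; // autoboxing hits the -128..127 cache
        Integer c = 200, d = 200; // outside the cache: two fresh heap objects

        System.out.println(a == b);      // true  (same cached reference)
        System.out.println(c == d);      // false (different references)
        System.out.println(c.equals(d)); // true  (value equality, always safe)
    }
}
```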
Checked vs Unchecked Exceptions - Know the Difference ⚠️

🔹 Checked Exceptions
▸ Checked at compile time
▸ Must be handled (try-catch or throws)
▸ Not handled → code won’t compile
▸ Examples: IOException, SQLException, FileNotFoundException

🔹 Unchecked Exceptions
▸ Occur at runtime
▸ Handling is optional (but often recommended)
▸ Extend RuntimeException
▸ Examples: NullPointerException, ArrayIndexOutOfBoundsException

💡 Simple Way to Remember
→ Checked = the compiler forces handling
→ Unchecked = runtime errors

🚀 Best Practice
▸ Use checked exceptions for recoverable scenarios (e.g., file not found → retry)
▸ Use unchecked exceptions for programming bugs (e.g., null → fix the code)

#Java #SpringBoot #ExceptionHandling #JavaDeveloper #BackendDeveloper
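Both kinds side by side in a small sketch (class and method names are mine): the IOException catch is required to compile, while the NullPointerException compiles fine and only surfaces at runtime.

```java
import java.io.FileReader;
import java.io.IOException;

public class ExceptionDemo {
    // Checked: the compiler forces 'throws' or try-catch here.
    static String open(String path) {
        try (FileReader r = new FileReader(path)) {
            return "opened";
        } catch (IOException e) { // removing this catch is a compile error
            return "recoverable: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(open("/no/such/file")); // recoverable: FileNotFoundException
        try {
            String s = null;
            s.length(); // unchecked: compiles fine, blows up at runtime
        } catch (NullPointerException e) {
            System.out.println("bug: " + e.getClass().getSimpleName());
        }
    }
}
```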
Go code never creates an OS thread directly. Yet it routinely handles far more concurrent requests than thread-per-request Java. Here is why that should change how you think about concurrency.

When thousands of requests hit a server simultaneously, the biggest bottleneck is usually the thread. The traditional Java model creates one OS thread per request. Threads are heavy, kernel-managed, and expensive to context switch.

Go solved this differently with goroutines:
→ A goroutine’s stack is dynamic. It grows only when it actually needs to, not upfront.
→ Creating a goroutine involves zero system calls. The kernel has no idea it exists.
→ Context switching happens entirely in user space. No kernel involvement whatsoever.
→ The Go scheduler handles everything. OS threads only see what Go exposes to them.

This is powered by the GMP model:
→ G: goroutines, which can run in the millions
→ M: machine, the actual OS threads, just a handful
→ P: processor, the logical CPU that schedules G onto M

Millions of goroutines multiplex across just a few OS threads. When a goroutine blocks, Go detaches that thread, spins up work elsewhere, and keeps everything moving. The program never stalls.

A goroutine starts at just 2KB of stack because Go’s runtime manages memory dynamically instead of the fixed provisioning the OS uses.

This is not a language feature. It is an architectural decision: minimize kernel involvement, maximize work in user space, and let the runtime do what the OS was doing badly. That is the real reason Go scales the way it does.

What architecture decision in your stack has had the biggest impact on performance?

#GoLang #SystemDesign #BackendEngineering #Concurrency #BuildingInPublic #TechFounders #SoftwareArchitecture #Engineering #Programming #DevOps