Meet Bob. Bob is a Java thread. ☕
Bob’s job → take a request, process it, return a response. Simple.

------------------------------------------------------------------------

🔴 Bob v1.0 — Blocking Thread
Bob puts the request in the oven. Bob stares at the oven. Bob does… nothing else.
👉 500 users = 500 Bobs staring at ovens
👉 500 Bobs = ~250MB RAM doing absolutely nothing
Bob gets fired.

------------------------------------------------------------------------

🔵 Bob v2.0 — WebFlux (Reactive)
Bob is replaced by a Robot 🤖 with 8 arms.
Robot never waits. Never sleeps. Handles 500 users with just 8 arms using callbacks.
Impressive… until:
❌ Someone makes one blocking call inside flatMap()
Robot loses an arm. Then another. Then another.
⏰ 3 AM → Production is down
💀 No error. Just silence.
Robot is… scary.

------------------------------------------------------------------------

🟢 Bob v3.0 — Virtual Threads (Java 21)
Bob is back. But smarter.
Bob puts the request in the oven. 📝 Sticks a Post-it note. 🚶 Walks away immediately.
Handles the next request. Comes back when the oven beeps.
👉 500 users = 500 Post-its = ~500KB RAM
Same simple code. No callbacks. No reactive complexity. ✨ JVM handles the magic.

spring.threads.virtual.enabled: true

Bob wins.

------------------------------------------------------------------------

⚖️ So what should you use?
👉 Java 17 or older? Use WebFlux… carefully.
👉 Java 21+? Bring Bob back. Delete your Mono/Flux where possible.

------------------------------------------------------------------------

💡 Real takeaway
The best architecture isn’t the most “clever” one. It’s the one your team can’t accidentally break at 3 AM.

♻️ Save this before your next system design discussion
👀 Follow for more concepts explained simply
Thanks, Arshad, for the insightful discussion.

#Java #SpringBoot #VirtualThreads #WebFlux #ProjectLoom #BackendDevelopment #SoftwareEngineering
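A minimal sketch of what Bob v3.0 looks like in plain Java 21 — no Spring required, only the standard java.util.concurrent API. The sleep is a stand-in for any blocking I/O; while it blocks, the JVM parks the virtual thread instead of wasting an OS thread:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BobV3 {
    public static void main(String[] args) {
        // One virtual thread per task — cheap enough to create 500 (or 500,000).
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 500; i++) {
                int requestId = i;
                executor.submit(() -> {
                    // Blocking call: the JVM unmounts this virtual thread from its
                    // carrier while it "waits at the oven" — no OS thread is pinned.
                    try {
                        Thread.sleep(Duration.ofMillis(100)); // stand-in for blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    System.out.println("Request " + requestId + " done on " + Thread.currentThread());
                });
            }
        } // close() waits for all submitted tasks to finish
    }
}
```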
------------------------------------------------------------------------
The Integer Cache Trap

The problem: order matching works perfectly in all tests — order IDs 1 to 100 always compare correctly. In production, with real order IDs above 127, identical orders never match. The logic is silently broken. No exception. No error. Just wrong results.

Root cause: Java caches Integer objects for values -128 to 127 at startup. Any Integer in this range is always the same object in memory. Outside this range, each Integer.valueOf() call (including autoboxing) creates a new object. == compares object references, not values. So:

```java
Integer a = 100;
Integer b = 100;
System.out.println(a == b); // true ✓ (same cached object in pool)

Integer x = 200;
Integer y = 200;
System.out.println(x == y); // false ✗ (two different objects!)
```

In production code:

```java
// ❌ BUGGY — works for id=5, silently wrong for id=500
public boolean isSameOrder(Integer id1, Integer id2) {
    return id1 == id2; // reference comparison!
}
```

isSameOrder(5, 5) → true ✓ (cached, same object)
isSameOrder(200, 200) → false ✗ (different objects, same value)

The cache boundary:

```java
Integer.valueOf(127) == Integer.valueOf(127) // true — cached
Integer.valueOf(128) == Integer.valueOf(128) // false — not cached
```

The cache range -128 to 127 is mandated by the JLS (Java Language Specification). The upper bound can be extended with the -XX:AutoBoxCacheMax=<N> JVM flag — but relying on this is a terrible idea.

✅ Fix — always use .equals() for boxed types:

```java
// ✓ CORRECT — value comparison, works for all ranges
public boolean isSameOrder(Integer id1, Integer id2) {
    return Objects.equals(id1, id2); // null-safe, value-based
}
```

Three safe options:

```java
// Option 1: Objects.equals() — null-safe
Objects.equals(id1, id2);

// Option 2: .equals() with null guard
id1 != null && id1.equals(id2);

// Option 3: unbox to primitive (NPE risk if null)
id1.intValue() == id2.intValue();
// or simply:
(int) id1 == (int) id2; // auto-unbox — NullPointerException if null
```

Prevention checklist:
→ Never use == to compare Integer, Long, Double, Float, Short, Byte, Character
→ Always use Objects.equals(a, b) for nullable boxed comparisons
→ Use primitive int, long instead of Integer, Long where null is not needed
→ Write unit tests with values outside -128 to 127 (use 200, 500, 1000) — see the sketch below
→ Enable IntelliJ's "Suspicious equality check" inspection — it flags == on boxed types

IntelliJ IDEA flags this automatically:
⚠ Integer equality check with == may not work for values outside -128..127

The lesson: Java caches Integer objects only for -128 to 127. == on Integer compares references — not values. Tests with small IDs (1–100) will always pass. Production with real IDs (500+) will silently fail. Always use .equals() or Objects.equals() for any boxed type. No exceptions.

#JavaInProduction #RealWorldJava #Java #SpringBoot #BackendDevelopment #ProductionIssues #DataStructures #DSA #SystemDesign #SoftwareEngineering #JavaDeveloper #Programming
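A minimal JUnit 5 sketch of the kind of test the checklist recommends — isSameOrder is the corrected method from the post, inlined here so the test is self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Objects;
import org.junit.jupiter.api.Test;

class OrderMatchTest {

    // The fixed, value-based comparison from the post.
    boolean isSameOrder(Integer id1, Integer id2) {
        return Objects.equals(id1, id2);
    }

    @Test
    void matchesInsideAndOutsideTheIntegerCache() {
        // Inside the cache (-128..127): even the buggy == version passes here,
        // which is exactly why small-ID tests hide the bug.
        assertTrue(isSameOrder(100, 100));

        // Outside the cache: these are the cases that catch the == bug.
        assertTrue(isSameOrder(200, 200));
        assertTrue(isSameOrder(500, 500));
        assertTrue(isSameOrder(1000, 1000));
    }
}
```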
------------------------------------------------------------------------
Every character in your file takes 8 bits. Whether it appears once or a million times — same cost. That's wasteful. There's a smarter way to encode. 👇

𝑻𝒉𝒆 𝑷𝒓𝒐𝒃𝒍𝒆𝒎: Given characters and their frequencies — assign binary codes such that frequent characters get 𝐬𝐡𝐨𝐫𝐭𝐞𝐫 𝐜𝐨𝐝𝐞𝐬, rare ones get longer. f=45 gets "0" — just one bit. a=5 gets "1100" — four bits. High traffic, short path. Low traffic, longer path. Simple idea. Powerful result.

𝑴𝒊𝒏𝒊 𝑺𝒄𝒆𝒏𝒂𝒓𝒊𝒐: Think of a post office sorting system. Letters going to Mumbai — the most common destination — get a single-digit code. Letters to a remote village get a longer code. Nobody wastes a short code on a rare destination. 𝐇𝐮𝐟𝐟𝐦𝐚𝐧 𝐄𝐧𝐜𝐨𝐝𝐢𝐧𝐠 works exactly the same way.

𝑻𝒉𝒆 𝑨𝒑𝒑𝒓𝒐𝒂𝒄𝒉: Build the tree bottom-up. Always merge the two 𝐥𝐞𝐚𝐬𝐭 𝐟𝐫𝐞𝐪𝐮𝐞𝐧𝐭 nodes first. The rarest characters sink deep into the tree → longer codes. The most frequent float to the top → shorter codes.

𝑨𝒍𝒈𝒐𝒓𝒊𝒕𝒉𝒎 𝑼𝒔𝒆𝒅: 𝐆𝐫𝐞𝐞𝐝𝐲 + 𝐌𝐢𝐧-𝐇𝐞𝐚𝐩 (Priority Queue)

```java
PriorityQueue<Node> minHeap = new PriorityQueue<>((a, b) -> {
    if (a.data == b.data) return Integer.compare(a.idx, b.idx);
    return Integer.compare(a.data, b.data);
});
```

The 𝐌𝐢𝐧-𝐇𝐞𝐚𝐩 always serves the two smallest nodes next — that's the greedy choice driving the entire algorithm.

```java
while (minHeap.size() > 1) {
    Node left = minHeap.poll();
    Node right = minHeap.poll();
    Node merged = new Node(left.data + right.data, Math.min(left.idx, right.idx));
    merged.left = left;
    merged.right = right;
    minHeap.add(merged);
}
```

Merge → push back → repeat. Until one root remains.

𝑹𝒆𝒂𝒅𝒊𝒏𝒈 𝒕𝒉𝒆 𝒄𝒐𝒅𝒆𝒔 — 𝑷𝒓𝒆𝒐𝒓𝒅𝒆𝒓 𝑫𝑭𝑺:

```java
void solve(Node root, String s, ArrayList<String> ans) {
    if (root.left == null && root.right == null) {
        ans.add(s.isEmpty() ? "0" : s);
        return;
    }
    solve(root.left, s + "0", ans);
    solve(root.right, s + "1", ans);
}
```

Go left → append 0. Go right → append 1. Hit a leaf → that's the code.

𝑶𝒏𝒆 𝒆𝒅𝒈𝒆 𝒄𝒂𝒔𝒆 𝒘𝒐𝒓𝒕𝒉 𝒏𝒐𝒕𝒊𝒏𝒈: Single-character input. No left, no right — just a root. The s.isEmpty() ? "0" : s ternary handles it. Miss this and you get wrong output on single-character inputs.

Complexity — 𝑻𝒊𝒎𝒆: O(n log n) | 𝑺𝒑𝒂𝒄𝒆: O(n)

𝑻𝒉𝒆 𝒓𝒆𝒂𝒍 𝒕𝒂𝒌𝒆𝒂𝒘𝒂𝒚: Greedy works here because the locally optimal choice — always merge the two smallest — leads to the globally optimal encoding. Huffman is the backbone of ZIP, JPEG, MP3. You use it every day without knowing it.

Challenge Day: #Day50
Tag: GeeksforGeeks, National Payments Corporation Of India (NPCI)

Did you reach for recursion first or build the heap directly? 👇

#DSA #HuffmanEncoding #Greedy #PriorityQueue #CodingInterview #Algorithms
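The snippets above use a Node type the post never shows. Here is a minimal sketch of what it likely looks like — an assumption, with the field names (data, idx, left, right) taken from the comparator and merge code:

```java
// Minimal Node sketch matching the fields used above.
// data holds the frequency; idx is an original-position tie-breaker
// so the min-heap ordering stays deterministic when frequencies collide.
class Node {
    int data;
    int idx;
    Node left, right;

    Node(int data, int idx) {
        this.data = data;
        this.idx = idx;
    }
}
```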
------------------------------------------------------------------------
The Ghost in the Machine: Why Your Thread-Safe Code Can Be Orders of Magnitude Slower

You probably know that two threads can interfere with each other without ever accessing the same variable. We master locks, semaphores, and concurrency. But there is a hardware concept most of us ignore on a daily basis: False Sharing.

The Problem: Cache Lines
Processors do not read memory byte by byte. They read in blocks called cache lines, typically 64 bytes on modern processors. If two distinct variables (say, two counters A and B) reside in the same cache line, the hardware faces a problem: Core 1 updates variable A; Core 2 wants to update variable B. Even though they are different variables, the cache coherence protocol (MESI) marks the entire line as invalid for Core 2, forcing a cache reload. The result? The execution pipelines of both cores stall for hundreds of cycles — no mutex, no lock — creating a bottleneck where there should be pure parallelism.

Why Does This Matter?
In high-performance systems (trading, search engines, large-scale event processing), false sharing is the silent killer of scalability. You add more CPU cores, but performance does not grow. Sometimes it regresses.

How to Fix It? Java vs Go
Both languages solve the problem in opposite ways, and that difference says a lot about the philosophy of each.

Java handles it for you. Since Java 8, there is the @Contended annotation (package jdk.internal.vm.annotation). It instructs the JVM to add padding around the field, ensuring it occupies an exclusive cache line. Important detail: for it to work outside JDK code, you must add the -XX:-RestrictContended flag to the JVM. Without it, the annotation has no effect on user classes. (See the sketch below.)

Go makes it your responsibility. There is no magic annotation. The compiler will not save you. You need to understand the hardware and insert the padding yourself — either manually with a byte array, or using cpu.CacheLinePad from the golang.org/x/sys/cpu package, which is more readable and avoids hardcoded numbers.

In short:
- Java: @Contended — the JVM manages it, requires -XX:-RestrictContended, not explicit in code, around 128 bytes of overhead per field.
- Go: cpu.CacheLinePad — you manage it, no extra config needed, explicit in code, around 64 bytes of overhead per field.

The Takeaway
Software is not just logic. It is understanding how that logic behaves when it meets the silicon. In Java, the platform abstracts the problem away. In Go, it sits right there in the code — a constant reminder that real concurrency requires thinking beyond the language.

Have you ever debugged a performance problem that made no sense in the code, but made perfect sense in the hardware?

#FalseSharing #CacheLines #ConcurrentProgramming #Java #Golang #HighPerformance #BackendDevelopment #SoftwareEngineering #SystemsProgramming #Programming
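A minimal Java sketch of the padded-counter pattern described above — an assumption, not the post's own code. Because jdk.internal.vm.annotation is an internal package, compiling this typically also needs --add-exports java.base/jdk.internal.vm.annotation=ALL-UNNAMED, and running it needs -XX:-RestrictContended; the class and field names are illustrative:

```java
// Two counters hammered by two threads. Without padding they can land on the
// same cache line; @Contended asks the JVM to pad them apart.
// Run with:   -XX:-RestrictContended
// Compile with: --add-exports java.base/jdk.internal.vm.annotation=ALL-UNNAMED
import jdk.internal.vm.annotation.Contended;

public class FalseSharingDemo {

    static class Counters {
        @Contended
        volatile long a; // updated only by thread 1

        @Contended
        volatile long b; // updated only by thread 2
    }

    public static void main(String[] args) throws InterruptedException {
        Counters c = new Counters();
        long start = System.nanoTime();

        Thread t1 = new Thread(() -> { for (long i = 0; i < 100_000_000L; i++) c.a++; });
        Thread t2 = new Thread(() -> { for (long i = 0; i < 100_000_000L; i++) c.b++; });
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Rough comparison: rerun with the @Contended lines removed and watch
        // the wall-clock time grow as the two cores fight over one cache line.
        System.out.printf("took %d ms%n", (System.nanoTime() - start) / 1_000_000);
    }
}
```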
------------------------------------------------------------------------
Created 1 million objects. App crashed with OutOfMemoryError. Nobody understood why. 😱

Java Fundamentals A-Z | Post 25

Can you spot the bug? 👇

```java
public void processTransactions() {
    for (int i = 0; i < 1000000; i++) {
        Transaction t = new Transaction(); // 💀 Heap!
        t.setId(i);
        t.process();
        // t goes out of scope — but if process() stashes the reference
        // somewhere (a cache, a listener list), GC can never reclaim it! 💀
    }
}
// Result → OutOfMemoryError! 💀
// Retained references — or allocation outpacing GC in a constrained heap —
// filled the heap faster than it could be cleaned.
```

Every new Transaction() goes to the Heap, and GC can only reclaim objects that nothing references anymore. Understanding Stack vs Heap prevents this! 💪

Here's how Java memory actually works 👇

```java
public void calculate() {
    // ✅ Stack — primitives: fast, auto-cleaned!
    int x = 10;             // Stack
    double rate = 0.05;     // Stack
    boolean isValid = true; // Stack

    // ⚠️ Heap — objects: slower, needs GC!
    String name = new String("DBS");        // Heap
    List<Integer> nums = new ArrayList<>(); // Heap

    // ✅ Fix — reuse objects where possible!
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 1000000; i++) {
        sb.setLength(0); // ✅ Reuse — no new Heap allocation!
        sb.append("Transaction: ").append(i);
    }
}
```

Fixed a batch-job OutOfMemoryError by reusing objects instead of creating new ones in a loop. Memory usage dropped 60%. 🔥

Stack vs Heap cheat sheet 👇
— Stack → primitives, method calls, references — fast, auto-cleaned
— Heap → all objects — slower, needs the Garbage Collector
— StackOverflowError → too many method calls (infinite recursion!)
— OutOfMemoryError → too many reachable objects in the Heap
— Solution → reuse objects, avoid new inside hot loops!

Summary:
🔴 Creating new objects inside million-iteration loops
🟢 Reuse objects — Stack for primitives, Heap wisely for objects!
🤯 60% memory reduction by just reusing a StringBuilder in a batch job!

Have you faced OutOfMemoryError in production? Drop a 🧠 below!

#Java #JavaFundamentals #BackendDevelopment #LearningInPublic #SDE2
------------------------------------------------------------------------
🚀 ArrayDeque — Simplifying Stack and Queue Logic (https://lnkd.in/g-c6q8v6)

➡️ ArrayDeque (Array Double-Ended Queue) is a resizable-array class in Java that lets you insert and remove elements at both ends — making it one of the most flexible data structures in the Collections Framework.

🔹 Revolving Door: Just like a revolving door lets people enter and exit from either side, ArrayDeque lets you add or remove elements from both the front and the rear with equal ease.
🔹 Token Queue at a Bank: Imagine a bank where the manager can add urgent customers at the front AND regular customers at the back — that's exactly how ArrayDeque manages its double-ended insertions.
🔹 A Stack of Trays in a Cafeteria: You always pick the top tray and place new ones on top — ArrayDeque replicates this Stack (LIFO) behavior perfectly using push() and pop().

Here are the key takeaways from the ArrayDeque session at TAP Academy by Sharath R sir:

🔹 No Indexing, No for Loop: Unlike ArrayList, ArrayDeque has no indexing support. You cannot use a traditional for loop or get(i) — use for-each, an Iterator, or descendingIterator() instead.
🔹 Null is Strictly Forbidden: ArrayDeque throws a NullPointerException the moment you try to insert null — a critical difference from ArrayList and LinkedList that interviewers love to test.
🔹 Smarter Resizing: When the default capacity of 16 fills up, ArrayDeque doubles its capacity (n × 2). ArrayList grows by roughly 1.5× (oldCapacity + oldCapacity/2) — two different formulas worth remembering cold.
🔹 Reverse Traversal via descendingIterator(): Since ListIterator is unavailable (ArrayDeque implements Deque, not List), the only way to traverse backward is descendingIterator() — which starts at the last element and moves toward the front.
🔹 One Class, Three Roles: ArrayDeque can act as a Stack (push/pop), a Queue (offer/poll), or a full Deque (addFirst/addLast) — making it the most versatile tool in Java Collections. See the sketch below.

Visit this interactive webpage to understand the concept by visualization: https://lnkd.in/g-c6q8v6

#Java #JavaDeveloper #Collections #ArrayDeque #DataStructures #TAPAcademy #CodingJourney #PlacementPrep #SoftwareEngineering #InterviewPrep
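A minimal sketch of the "one class, three roles" point — every call below is standard java.util.ArrayDeque API:

```java
import java.util.ArrayDeque;
import java.util.Iterator;

public class ArrayDequeRoles {
    public static void main(String[] args) {
        // Role 1: Stack (LIFO) — push/pop operate on the head.
        ArrayDeque<String> stack = new ArrayDeque<>();
        stack.push("tray1");
        stack.push("tray2");
        System.out.println(stack.pop()); // tray2 — last in, first out

        // Role 2: Queue (FIFO) — offer at the tail, poll from the head.
        ArrayDeque<String> queue = new ArrayDeque<>();
        queue.offer("customer1");
        queue.offer("customer2");
        System.out.println(queue.poll()); // customer1 — first in, first out

        // Role 3: Full Deque — both ends explicitly.
        ArrayDeque<String> deque = new ArrayDeque<>();
        deque.addLast("regular");
        deque.addFirst("urgent"); // the manager jumps the queue
        System.out.println(deque); // [urgent, regular]

        // Reverse traversal — the only backward walk available (no ListIterator).
        Iterator<String> back = deque.descendingIterator();
        while (back.hasNext()) System.out.println(back.next()); // regular, urgent

        // deque.add(null); // would throw NullPointerException — nulls forbidden
    }
}
```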
------------------------------------------------------------------------
🔥 𝐒𝐭𝐫𝐢𝐧𝐠 𝐯𝐬 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐢𝐥𝐝𝐞𝐫 𝐯𝐬 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐟𝐟𝐞𝐫 𝐢𝐧 𝐉𝐚𝐯𝐚 — 𝐒𝐭𝐨𝐩 𝐂𝐨𝐧𝐟𝐮𝐬𝐢𝐧𝐠 𝐓𝐡𝐞𝐦!

This is one of the most asked Java interview questions — yet most developers can't explain the difference clearly. Let me fix that 👇

🔵 𝐒𝐭𝐫𝐢𝐧𝐠 — 𝐈𝐦𝐦𝐮𝐭𝐚𝐛𝐥𝐞 & 𝐓𝐡𝐫𝐞𝐚𝐝-𝐒𝐚𝐟𝐞

```java
String s1 = "Hello";
String s2 = s1 + " World"; // creates a NEW object every time!
String s3 = "Hello";
// s1 == s3 → true  (same String pool reference)
// s1 == s2 → false (s2 is a brand new object)
```

✅ Stored in the String Pool — memory efficient for reuse
✅ Thread-safe by design (immutable)
❌ Every + or concat() creates a new object — bad in loops!

🩷 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐢𝐥𝐝𝐞𝐫 — 𝐌𝐮𝐭𝐚𝐛𝐥𝐞 & 𝐅𝐚𝐬𝐭

```java
StringBuilder sb = new StringBuilder();
sb.append("Hello").append(" World"); // same object
sb.insert(0, "Say: ");
sb.reverse();
String result = sb.toString();
```

✅ Modifies the same object — no new allocations
✅ Fastest option for string manipulation
❌ NOT thread-safe — don't share between threads

🟣 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐟𝐟𝐞𝐫 — 𝐓𝐡𝐫𝐞𝐚𝐝-𝐒𝐚𝐟𝐞 𝐛𝐮𝐭 𝐒𝐥𝐨𝐰𝐞𝐫

```java
StringBuffer sb = new StringBuffer();
sb.append("Hello");   // synchronized
sb.append(" World");
// Same API as StringBuilder, but all methods are synchronized
```

✅ Thread-safe — safe for multi-threaded access
❌ Synchronization adds overhead — slower than StringBuilder

📊 𝐐𝐮𝐢𝐜𝐤 𝐂𝐨𝐦𝐩𝐚𝐫𝐢𝐬𝐨𝐧

Feature      | String    | StringBuilder | StringBuffer
-------------|-----------|---------------|-------------
Mutable?     | ❌ No     | ✅ Yes        | ✅ Yes
Thread-safe? | ✅ Yes    | ❌ No         | ✅ Yes
Speed        | Slowest*  | Fastest       | Moderate
Use case     | Constants | Loops         | Multi-thread

*+ in a loop is slow. The compiler may optimize single-line concatenation.

💡 Golden Rule:
Use 𝐒𝐭𝐫𝐢𝐧𝐠 for fixed values.
Use 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐢𝐥𝐝𝐞𝐫 for manipulation in single-threaded code.
Use 𝐒𝐭𝐫𝐢𝐧𝐠𝐁𝐮𝐟𝐟𝐞𝐫 only when multiple threads share the same buffer.

Drop a 🔥 if this cleared your confusion! Tag a Java dev who still uses + inside loops 😄

👇 Which one do you use most in your projects?

#Java #String #StringBuilder #StringBuffer #CoreJava #Backend #SpringBoot #JavaDeveloper #100DaysOfCode #InterviewPrep #Programming
------------------------------------------------------------------------
𝗝𝗗𝗞 𝘃𝘀 𝗝𝗥𝗘 𝘃𝘀 𝗝𝗩𝗠

Here's what actually happens when you run a Java program — and the parts most engineers never learn:

JDK → JRE → JVM → JIT

𝗝𝗗𝗞 (Java Development Kit)
Your complete toolbox. Compiler (javac), debugger, profiler, keytool, jshell — and a bundled JRE. Without it, you can't write or compile Java — only run it.

𝗝𝗥𝗘 (Java Runtime Environment)
JVM + standard class libraries. Ships to end users. No compiler. No dev tools. Just enough to run a .jar.

𝗝𝗩𝗠 (Java Virtual Machine)
This is where it gets interesting. 𝗧𝗵𝗿𝗲𝗲 𝗹𝗮𝘆𝗲𝗿𝘀 𝗶𝗻𝘀𝗶𝗱𝗲:
• Class Loader — loads, links, and initializes .class files at runtime (not all at startup)
• Runtime Data Areas — Heap, Stack, Method Area, PC Register, Native Method Stack
• Execution Engine — interprets + compiles bytecode

𝗝𝗜𝗧 (Just-In-Time Compiler)
Watches your code at runtime. Identifies "hot" methods — those called frequently. Compiles them natively. Skips the interpreter next time. That's how Java catches up to C++ performance on long-running workloads.

𝗪𝗵𝗮𝘁 𝗺𝗼𝘀𝘁 𝗱𝗲𝘃𝘀 𝗺𝗶𝘀𝘀

• 𝗖𝗹𝗮𝘀𝘀 𝗟𝗼𝗮𝗱𝗶𝗻𝗴 𝗶𝘀 𝗹𝗮𝘇𝘆 — The JVM doesn't load all classes upfront. It loads them on first use — which is why cold-start time differs from steady-state throughput. (See the sketch below.)

• 𝗝𝗜𝗧 𝗵𝗮𝘀 𝘁𝗶𝗲𝗿𝘀 — The HotSpot JVM uses tiered compilation: C1 (fast, light optimization) kicks in first, then C2 (aggressive optimization) takes over for truly hot code. GraalVM replaces C2 entirely with a more powerful compiler.

• 𝗧𝗵𝗲 𝗛𝗲𝗮𝗽 𝗶𝘀 𝗻𝗼𝘁 𝗼𝗻𝗲 𝘁𝗵𝗶𝗻𝗴 — It's split: Eden → Survivor Spaces → Old Gen, plus Metaspace (native memory, post Java 8). Understanding this is a prerequisite to tuning GC and fixing OOM errors.

• 𝗦𝘁𝗮𝗰𝗸 𝘃𝘀 𝗛𝗲𝗮𝗽 — 𝗿𝘂𝗻𝘁𝗶𝗺𝗲 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿 — Every thread gets its own stack. Local primitives live there. Objects always go to the heap; references live on the stack. This is why stack overflows (deep recursion) and heap OOMs are completely different problems.

• 𝗝𝗩𝗠 𝗶𝘀 𝗻𝗼𝘁 𝗮𝗹𝘄𝗮𝘆𝘀 𝗶𝗻𝘁𝗲𝗿𝗽𝗿𝗲𝘁𝗲𝗱 — When GraalVM's native-image compiles your app ahead of time (AOT), there's no JVM at runtime at all. Instant startup. Fixed memory footprint. Different trade-offs.

• 𝗕𝘆𝘁𝗲𝗰𝗼𝗱𝗲 𝗶𝘀 𝗻𝗼𝘁 𝗯𝗶𝗻𝗮𝗿𝘆 — It's an intermediate representation: platform-agnostic instructions the JVM can run on any OS. This is Java's "write once, run anywhere" in practice, not just in theory.

𝗧𝗵𝗲 𝗺𝗲𝗻𝘁𝗮𝗹 𝗺𝗼𝗱𝗲𝗹 𝘁𝗵𝗮𝘁 𝗰𝗹𝗶𝗰𝗸𝘀:
• JDK = write + compile + run
• JRE = run only
• JVM = execution environment
• JIT = runtime optimizer

What's the JVM internals detail that surprised you most when you first learned it?

A special thanks to my faculty Syed Zabi Ulla sir at PW Institute of Innovation for their clear explanations and continuous guidance throughout this topic.

#Java #JVM #SoftwareEngineering #BackendDevelopment #ProgrammingFundamentals
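A minimal sketch of the "class loading is lazy" point — the static initializer only fires when the class is first actively used, not at program startup. Class and method names are made up for the demo:

```java
// Demonstrates lazy class initialization: Lazy's static block runs on first
// use, not when main() starts. (Names are illustrative.)
public class LazyLoadingDemo {

    static class Lazy {
        static {
            System.out.println("Lazy class initialized NOW");
        }
        static void hello() {
            System.out.println("hello from Lazy");
        }
    }

    public static void main(String[] args) {
        System.out.println("main started — Lazy not initialized yet");
        Lazy.hello(); // first active use: triggers init right here
        System.out.println("main finished");
    }
}
// Output order proves the point:
//   main started — Lazy not initialized yet
//   Lazy class initialized NOW
//   hello from Lazy
//   main finished
```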
------------------------------------------------------------------------
🚨 Heap issues? Don't just increase memory → shrink your objects.

When memory is tight, adding more heap isn't always possible. A smarter move: reduce object size → reduce overall memory usage.

💡 How to reduce object size:
1. Use only necessary fields → each extra field adds 4–8 B on a 64-bit JVM. Across 1M objects, that's 4–8 MB wasted.
2. Prefer smaller data types → int → byte, double → float, long → int. Saving just 3 B per object → 3 MB per million objects.
3. Avoid unused object references → even a null reference costs 4–8 B.
4. Be careful with heavy internal objects → e.g., a ConcurrentHashMap per object can cost 200+ B.
5. Reuse shared objects → e.g., shared Locale instances instead of creating one per object.

📊 Impact: Reducing object size by 20% on half the heap can achieve the same effect as increasing the heap by 10%. Smaller objects → less GC pressure → fewer pauses → faster response times.

💾 Memory impact of Java object fields (64-bit JVM, <32 GB heap):
byte → 1 B (8-bit value)
char → 2 B (Unicode character)
short → 2 B (16-bit integer)
int → 4 B (32-bit integer)
float → 4 B (32-bit float)
long → 8 B (64-bit integer)
double → 8 B (64-bit float)
Object reference → 4 B (8 B if large heap or compressed OOPs disabled)
Object header → 16 B (metadata, GC info, locks)

Example: how object fields affect memory
Class A → int i → 16 B → simple object, small footprint
Class B → int i; Locale l → 24 B → adds 8 B for the reference; Locale is shared → no extra heap per instance
Class C → int i; Map m → 24 B + ~200 B for the map → each object creates a new map → memory usage explodes with millions of instances

🔍 How to check object sizes in Java (a minimal agent sketch follows):
- Instrumentation API → measure shallow object size programmatically
- Profiling tools → Eclipse MAT, VisualVM, YourKit, JProfiler
- Runtime estimation → allocate sample objects, measure the heap difference, divide by the number of objects

⚠️ Key takeaway: Memory problems are often about object design, not heap size. Optimizing objects → less GC pressure → faster, more stable apps.

#Java #JVM #Performance #MemoryManagement #BackendEngineering #HeapOptimization
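A minimal sketch of the Instrumentation route mentioned above. Instrumentation.getObjectSize is the real java.lang.instrument API, but the class names are illustrative, and this must be packaged as a Java agent (a jar whose manifest declares Premain-Class: SizeAgent) and attached with -javaagent:

```java
import java.lang.instrument.Instrumentation;
import java.util.Locale;
import java.util.concurrent.ConcurrentHashMap;

// Package as a jar with "Premain-Class: SizeAgent" in META-INF/MANIFEST.MF,
// then run: java -javaagent:size-agent.jar YourApp
public class SizeAgent {
    private static volatile Instrumentation inst;

    public static void premain(String args, Instrumentation instrumentation) {
        inst = instrumentation;
    }

    // Shallow size only: header + fields, not the objects the fields point to.
    public static long sizeOf(Object o) {
        return inst.getObjectSize(o);
    }
}

// Probe classes mirroring A, B, C from the post:
class A { int i; }
class B { int i; Locale l = Locale.ROOT; }  // shared instance — no per-object heap
class C { int i; ConcurrentHashMap<String, String> m = new ConcurrentHashMap<>(); }
```

Note that because getObjectSize reports shallow size, B and C look similar at first glance; C's ~200 B map cost only shows up when you also size the map instance it references.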
------------------------------------------------------------------------
Day 6/7 DSA Challenge 👍 — Set Matrix Zeroes

Example:
Input:
[ [1, 1, 1],
  [1, 0, 1],
  [1, 1, 1] ]
Output:
[ [1, 0, 1],
  [0, 0, 0],
  [1, 0, 1] ]

🚀 Approach (O(M + N) space)
👉 The idea is simple:
- One array row[] of size M
- One array col[] of size N

Step by step:
1. Traverse the matrix: if matrix[i][j] == 0 → mark row[i] = 1 and col[j] = 1.
2. Traverse again: if row[i] == 1 or col[j] == 1 → set matrix[i][j] = 0.

⏱ Complexity
Time: O(M × N)
Space: O(M + N) ✅ (as required)

💻 Java Code (Clean & Interview Ready)

```java
public class SetMatrixZeroes {

    public static void setZeroes(int[][] matrix) {
        int m = matrix.length;
        int n = matrix[0].length;
        int[] row = new int[m];
        int[] col = new int[n];

        // Step 1: Mark rows and columns that contain a zero
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                if (matrix[i][j] == 0) {
                    row[i] = 1;
                    col[j] = 1;
                }
            }
        }

        // Step 2: Set zeroes
        for (int i = 0; i < m; i++) {
            for (int j = 0; j < n; j++) {
                if (row[i] == 1 || col[j] == 1) {
                    matrix[i][j] = 0;
                }
            }
        }
    }

    // Driver code
    public static void main(String[] args) {
        int[][] matrix = {
            {1, 1, 1},
            {1, 0, 1},
            {1, 1, 1}
        };
        setZeroes(matrix);
        for (int[] row : matrix) {
            for (int val : row) {
                System.out.print(val + " ");
            }
            System.out.println();
        }
    }
}
```

#dsastreak #dsatreakwithpw #raghavgarg #dsachallenge #pw #pwskilla #datastructure #pwstreakchallengedsa #pwskillsstreak #challenge #dsastreakwithpwskills
Raghav Garg
------------------------------------------------------------------------
The "Thread-Shifting" Trap in Asynchronous Distributed Locking If you are using Redisson for distributed locking in a reactive or asynchronous environment (like Vert.x, Project Reactor, or Spring WebFlux), you might have encountered this frustrating error: java.lang.IllegalMonitorStateException: attempt to unlock lock, not locked by current thread by node id: [...] thread-id: [...] 🔍 The Root Cause: Thread Dissociation In a traditional synchronous Spring Boot application, a request stays on a single thread. You lock on Thread A and unlock on Thread A. Redisson is happy. In Vert.x, we embrace non-blocking event loops and worker pools. Here is what happens: Locking: Your code acquires a lock on EventLoop-1. Redisson records Thread-1 as the owner. Processing: You perform an asynchronous OCR or a WebClient call. Unlocking: The .onComplete() callback is triggered, but Vert.x might schedule it on EventLoop-2 or a Worker-Thread. Failure: When you call lock.unlock(), Redisson checks the ID and says: "Wait, you aren't the thread that started this!" 💡 The Solution: Embracing "Force Unlock" In a reactive chain, the "Ownership" of a lock should be defined by the Business Transaction (Trace ID), not the Operating System Thread. Since we use the lock to prevent duplicate processing of the same file/request, we need a way to release the lock regardless of which thread finished the work. Don't use .unlock(). Use .forceUnlockAsync(). Why forceUnlockAsync()? Thread Agnostic: It removes the key from Redis without verifying the thread ID. Safety: In a properly structured if (lock == null) return; flow, only the "winner" who successfully acquired the lock will ever reach the onComplete stage. There is no risk of a "loser" thread accidentally releasing someone else's lock. Resilience: It handles cases where the lock might have already expired in Redis due to a long-running process, preventing further exceptions. 🛠️ Best Practice Implementation (Vert.x + Redisson) // 1. Acquire the lock (The Entry Guard) RLock lock = redisson.getLock("lock:process:" + traceId); // Try lock with 0 wait time: if someone else is processing, bail out immediately if (!lock.tryLock(0, 10, TimeUnit.MINUTES)) { return Future.succeededFuture("ALREADY_PROCESSING"); } // 2. The Asynchronous Journey return downloadFile(url) .compose(this::processOCR) .compose(this::sendToKafka) // 3. The Graceful Exit .onComplete(ar -> { // Regardless of success or failure, clear the lock // We use forceUnlockAsync to bypass the Thread ID check lock.forceUnlockAsync(); }); Final Thought When moving from Imperative to Reactive programming, your mental model of "Thread Safety" must shift to "Transaction Safety." Don't let thread-bound locks break your asynchronous flow! 🦑 #Java #Vertx #Redis #Redisson #DistributedSystems #BackendDevelopment #Microservices