Forgetting a ThreadLocal.remove() isn't just messy anymore; with virtual threads it can turn request-scoped context into a real bug.

Java 25/26 now give us the cleaner model:
→ ScopedValue (Final, JEP 506)
→ StructuredTaskScope (Preview: JEP 505 in Java 25, JEP 525 in Java 26)

The new model is simple:
1. Bind context once at the request edge
2. Fork parallel work with StructuredTaskScope
3. Child tasks inherit the bound context
4. Scope ends → the binding is gone structurally

No manual cleanup. No per-task rebinding. No finally.

The one architectural rule that matters most:
❌ Domain data → method parameters (orderId, customerId, cart)
✅ Request-scoped context → ScopedValue (traceId, tenantId, auth, feature flags)

That single distinction removes a surprising amount of noise from service code.
From: "How do I pass this through 15 layers?"
To: "What is the structural scope of this data?"

That mindset shift is the real upgrade. A minimal code sketch of the model follows below.

Detailed walkthrough with examples: https://lnkd.in/gqfDr5rs

#Java #Java25 #Java26 #ProjectLoom #ScopedValue #StructuredConcurrency #VirtualThreads #SpringBoot #BackendEngineering #JVM
Java 25/26: Simplify Thread Locals with Scoped Value
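A minimal sketch of the model, under these assumptions: ScopedValue is final in Java 25 (JEP 506), StructuredTaskScope is still a preview API (JEP 505) that needs --enable-preview, and its details may shift between preview rounds. The lookup helper and the trace/order values are illustrative only, not part of any library.

import java.util.concurrent.StructuredTaskScope;

public class RequestContextDemo {

    // Request-scoped context (traceId here; tenantId, auth, feature flags work the same way)
    static final ScopedValue<String> TRACE_ID = ScopedValue.newInstance();

    // Domain data (orderId) still travels as an ordinary method parameter
    void handle(String orderId) {
        // 1. Bind once at the request edge; the binding exists only inside run()
        ScopedValue.where(TRACE_ID, "trace-4711").run(() -> {
            // 2. Fork parallel work; 3. child tasks inherit TRACE_ID automatically
            try (var scope = StructuredTaskScope.open()) {
                var user = scope.fork(() -> lookup("user", orderId));
                var cart = scope.fork(() -> lookup("cart", orderId));
                scope.join();                                   // wait for both; a failure propagates
                System.out.println(user.get() + " | " + cart.get());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // 4. Scope ended → TRACE_ID is unbound here, structurally. No remove(), no finally.
    }

    String lookup(String what, String orderId) {
        return what + ":" + orderId + " [trace=" + TRACE_ID.get() + "]"; // reads inherited context
    }

    public static void main(String[] args) {
        new RequestContextDemo().handle("order-1");
    }
}

The point of the sketch: TRACE_ID never appears in any method signature of the forked tasks, yet both children can read it, and once run() returns the binding cannot leak.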
More Relevant Posts
Day 63 — LeetCode Progress (Java)

Problem: Crawler Log Folder
Required: Given a list of folder change operations, return the minimum number of operations needed to go back to the main folder.

Idea: Track the current depth like a counter — moving forward increases depth, moving back decreases it, and staying does nothing.

Approach:
- Initialize a variable to track the current depth.
- Traverse each log entry:
  "../" → move up (decrease depth, but not below 0)
  "./" → stay in the same folder
  "x/" → move into a subfolder (increase depth)
- The final depth is the number of steps needed to return to the main folder.

Time Complexity: O(n)
Space Complexity: O(1)

#LeetCode #DSA #Java #Simulation #Strings #Algorithms #CodingJourney #100DaysOfCode
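A small Java sketch of the counter idea, assuming the usual LeetCode-style method name minOperations:

class CrawlerLogFolder {
    public int minOperations(String[] logs) {
        int depth = 0;                              // distance from the main folder
        for (String log : logs) {
            if (log.equals("../")) {
                depth = Math.max(0, depth - 1);     // move up, never above the main folder
            } else if (!log.equals("./")) {
                depth++;                            // "x/" → move into a subfolder
            }                                       // "./" → stay, do nothing
        }
        return depth;                               // steps needed to return to main
    }
}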
#Post6

In the previous post, we understood how our code runs: Code → JVM → Process → Threads (https://lnkd.in/dns348v6)

Now let's go one step deeper: what actually happens inside a process when it executes?

When a Java program runs, the JVM creates a process. Inside that process, memory and execution are organized into different parts.

1. Heap Memory (Shared)
This is where objects created using the "new" keyword are stored.
• Shared by all threads within the same process
• Not shared across different processes
• Threads can read and modify data
Because multiple threads access it → synchronization is required

2. Code Segment (Shared)
Contains the bytecode (instructions to execute).
• Read-only
• Shared across all threads

3. Data Segment (Shared)
Stores static and global variables.
• Shared across all threads
• Can be modified
Synchronization is required when multiple threads update data

4. Stack (Thread-specific)
Each thread has its own stack.
• Stores method calls
• Stores local variables
• Not shared between threads

5. Program Counter (Thread-specific)
Each thread has its own program counter.
• Points to the current instruction being executed
• Moves forward as execution progresses

6. Registers (Thread-specific)
Each thread uses CPU registers to store temporary/intermediate data during execution.
(We will explore how registers are used during context switching in upcoming posts)

Important Understanding
Inside a process:
• Heap + Code + Data → shared across threads
• Stack + Program Counter + Registers → private to each thread

This separation is what makes multithreading both powerful and complex.

Key takeaway: threads share memory (heap), but execute independently using their own stack and execution state. A small sketch of this split follows below.

In the next post, we'll explore registers and how the CPU switches between threads (context switching).

#Java #SoftwareEngineering #Multithreading #BackendDevelopment #Programming
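A small, hypothetical Java example of the shared-vs-private split: the Counter object lives on the heap and needs synchronization, while each thread's loop variable lives only on that thread's own stack. Class and thread names are illustrative.

public class SharedVsPrivate {

    static class Counter {                      // lives on the heap → visible to all threads
        private int value;
        synchronized void increment() {         // synchronization needed because the heap is shared
            value++;
        }
        synchronized int get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter shared = new Counter();

        Runnable work = () -> {
            int localSteps = 0;                 // local variable → on each thread's own stack, never shared
            for (int i = 0; i < 10_000; i++) {
                shared.increment();             // touches shared heap state
                localSteps++;
            }
            System.out.println(Thread.currentThread().getName() + " did " + localSteps + " steps");
        };

        Thread t1 = new Thread(work, "worker-1");
        Thread t2 = new Thread(work, "worker-2");
        t1.start(); t2.start();
        t1.join();  t2.join();

        System.out.println("shared counter = " + shared.get()); // 20000, thanks to synchronization
    }
}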
Deep Dive into JVM Internals — Beyond the Basics

🔲 Slide 1 — The Nesting Doll
JDK ⊃ JRE ⊃ JVM. Three nested layers, not three separate tools. Most developers have used all three for years without knowing the difference.

🔗 Slide 2 — Class Loader
Every class request travels UP the chain before any loader handles it: Bootstrap → Platform → Application → Custom. This parent-delegation contract is why you can never shadow java.lang.String — no matter how hard you try. (A quick way to see the chain on your own JVM is sketched below.)

🧠 Slide 3 — Memory Architecture
Per-thread → PC Register · JVM Stack · Native Stack
Shared → Heap (Eden → Survivor → Old Gen) · Metaspace · Code Cache · Constant Pool
One fact most devs miss: every Java object carries 12–16 bytes of header before your first field. That's why an int[] can take several times less memory than an Integer[].

⚡ Slide 4 — JIT Tiered Compilation
T0 → T1 → T2 → T3 → T4 (C2 peak)
C2 does inlining, escape analysis, lock elision, loop unrolling, and scalar replacement. If an object never escapes a method, C2 can replace it with plain values on the stack or in registers: zero heap allocation, zero GC pressure.

♻️ Slide 5 — GC Internals
All collectors share tri-colour marking: White → Gray → Black.
Serial · Parallel · G1 (default since Java 9) · ZGC · Shenandoah
ZGC's trick: GC state lives in unused bits of 64-bit pointers, and relocation happens concurrently, so application threads stop only for sub-millisecond safepoints.

🔁 Slide 6 — Bytecode → CPU
.java → javac → .class → ClassLoader → Interpreter → JIT → Code Cache → CPU
The OS is involved at every step: thread scheduling · mmap for heap · SIGSEGV → NullPointerException.
Java 21 virtual threads intercept blocking I/O before the syscall: millions of threads with near-zero OS overhead.

📋 Slide 7 — Quick Reference Card
Essential JVM flags · diagnostic tools · jstack · jmap · async-profiler. A cheat sheet worth bookmarking.

#Java #JVM #BackendDevelopment #SystemDesign #Performance #SoftwareEngineering
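To see the slide-2 delegation chain for yourself, a tiny sketch (class name is illustrative):

public class LoaderChain {
    public static void main(String[] args) {
        ClassLoader cl = LoaderChain.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);              // application loader, then platform loader
            cl = cl.getParent();
        }
        System.out.println("null → bootstrap loader (native, has no Java object)");

        // And why you can't shadow java.lang.String: it is defined by the bootstrap loader.
        System.out.println(String.class.getClassLoader());   // prints null
    }
}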
If your class name looks like CoffeeWithMilkAndSugarAndCream… you've already lost.

This is how most codebases slowly break. You start with one clean class. Then come the "small changes":
- add logging
- add validation
- add caching

So you create a few subclasses… then a few more, or you pile everything into if-else. Now every change touches existing code, and every change risks breaking something. That's not scaling. That's slow decay.

The Decorator Pattern fixes this in a simple way: don't modify the original class; wrap it. Start with a base object, then layer behavior on top of it.

Each decorator:
- adds one responsibility
- doesn't break existing code
- can be combined at runtime

No subclass explosion. No god classes. No fragile code.

Real-world example? Java I/O does this everywhere: you wrap streams on top of streams.

The real shift is this: stop thinking inheritance, start thinking composition. Because most "just one more feature" problems… are actually design problems.

Have you ever seen a codebase collapse under too many subclasses or flags?

#DesignPatterns #LowLevelDesign #SystemDesign #CleanCode #Java #SoftwareEngineering #OOP

Attaching the decorator pattern diagram with a simple example; a minimal code sketch also follows below.
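A minimal sketch of the pattern with illustrative names (Coffee, WithMilk, WithSugar). Java I/O's BufferedInputStream wrapping a FileInputStream follows exactly the same shape.

interface Coffee {
    double cost();
    String label();
}

class SimpleCoffee implements Coffee {
    public double cost()  { return 2.0; }
    public String label() { return "coffee"; }
}

// Each decorator wraps a Coffee and adds exactly one responsibility.
abstract class CoffeeDecorator implements Coffee {
    protected final Coffee inner;
    CoffeeDecorator(Coffee inner) { this.inner = inner; }
}

class WithMilk extends CoffeeDecorator {
    WithMilk(Coffee inner) { super(inner); }
    public double cost()  { return inner.cost() + 0.50; }
    public String label() { return inner.label() + " + milk"; }
}

class WithSugar extends CoffeeDecorator {
    WithSugar(Coffee inner) { super(inner); }
    public double cost()  { return inner.cost() + 0.25; }
    public String label() { return inner.label() + " + sugar"; }
}

class DecoratorDemo {
    public static void main(String[] args) {
        // Compose behavior at runtime instead of creating CoffeeWithMilkAndSugar subclasses
        Coffee order = new WithSugar(new WithMilk(new SimpleCoffee()));
        System.out.println(order.label() + " = " + order.cost());   // coffee + milk + sugar = 2.75
    }
}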
Stop wasting memory on threads that do nothing. 🛑

If you're building Java backends, you've probably seen this: more users → more threads → more RAM usage. I recently explored Virtual Threads (Java 21 / Project Loom), and this concept finally clicked for me. 💡

The Problem
In standard Java: 1 request = 1 Platform Thread. During a DB/API call the thread gets blocked. It's like a waiter standing idle while food is cooking 🍽️
👉 Wasted resources + poor scalability

🔍 The Solution: Virtual Threads
Lightweight threads managed by the JVM (not the OS):
• cheap to create
• can run thousands easily
• perfect for I/O-heavy backend systems

⚙️ How it actually works (mounting / unmounting)
1️⃣ Mounting: the virtual thread runs on a Carrier Thread (a platform thread)
2️⃣ I/O call (DB/API): your code looks blocking
3️⃣ Unmounting (parking): the virtual thread is paused and parked in heap memory, releasing the carrier thread
4️⃣ The carrier thread is free and handles another request immediately
5️⃣ Remounting (resume): once the response arrives, the virtual thread continues

💻 The "magic" in code:

// Looks like blocking code (fetchDataFromDB stands in for any blocking DB/API call)
Runnable task = () -> {
    System.out.println("Processing: " + Thread.currentThread());
    String data = fetchDataFromDB();
    System.out.println("Result: " + data);
};

// Run using a Virtual Thread
Thread.ofVirtual().start(task);

🧩 What's happening behind the scenes?
👉 Thread.ofVirtual() creates a lightweight thread (its stack lives in the heap, not at OS level)
👉 During the DB/API call the virtual thread gets unmounted (parked) and the carrier thread becomes free
👉 While waiting, that carrier thread handles other requests
👉 When the response comes, the scheduler remounts the virtual thread and execution continues

📈 Result: no idle threads, better resource usage, simple synchronous code, high scalability without complex async code.

🧠 Biggest takeaway: "Code looks blocking… but the system is not blocked." That's the mindset shift. A fuller sketch with a virtual-thread-per-task executor follows below.

Have you tried Virtual Threads in your services yet? Did you see any real performance improvement? 🤔

#Java #BackendEngineering #VirtualThreads #ProjectLoom #Java21 #Microservices #Performance
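The same idea at service scale, as a hedged sketch: a virtual-thread-per-task executor submits many blocking tasks, and Thread.sleep stands in for the DB/API call described above. Class and method names are illustrative.

import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadServer {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> handleRequest(i)));       // one virtual thread per task
        } // close() waits for all submitted tasks to finish
    }

    static String handleRequest(int id) {
        try {
            Thread.sleep(100);                  // stands in for a blocking DB/API call;
        } catch (InterruptedException e) {      // the virtual thread parks, the carrier is freed
            Thread.currentThread().interrupt();
        }
        return "response " + id;
    }
}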
The Java switch statement turned 30 this year.

Here's the short version: what started as a C-style branching construct is now a declarative data-matching engine — and the JVM internals behind it are genuinely fascinating.

What I learned going deep on this:
→ Early switch relied on jump tables — fast, but fall-through bugs were silent and destructive
→ Java added definite assignment rules, preventing uninitialized variables from slipping through
→ The JVM picks between tableswitch (O(1)) and lookupswitch (O(log n)) based on how dense your cases are
→ String switching since Java 7 uses hashCode + equals internally — it's not magic, it's two passes
→ Java 14 made switch usable as an expression, which killed fall-through at the language level
→ Modern Java (21+) adds pattern matching with type binding and null handling — code reads like a description of the data
→ invokedynamic enables runtime linking, replacing rigid compile-time dispatch tables
→ Java 25 previews stricter exactness rules for primitive type matching — no more silent data loss

The real shift isn't syntax. It's the question switch answers.
Old: "Where should execution go?"
New: "What is the shape of this data?"

That's not just a feature upgrade. That's a change in how you think about branching. (A small pattern-matching example follows below.)

Which of these surprised you most? Drop it in the comments.

A special thanks to Syed Zabi Ulla sir at PW Institute of Innovation for their clear explanations and continuous guidance throughout this topic.

#Java #Programming #SoftwareEngineering #JVM #LearningInPublic #CodingJourney
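A small Java 21+ sketch of the "what shape is this data?" framing, using an illustrative sealed hierarchy: the switch is an expression, binds and types each pattern in one step, and is exhaustive over the sealed types without a default.

sealed interface Shape permits Circle, Rect {}
record Circle(double r) implements Shape {}
record Rect(double w, double h) implements Shape {}

class Area {
    static double of(Shape s) {
        return switch (s) {                               // switch expression: no fall-through
            case Circle c                 -> Math.PI * c.r() * c.r();   // type pattern binds c
            case Rect(double w, double h) -> w * h;                     // record pattern destructures
        };                                                // exhaustive → no default needed
    }

    public static void main(String[] args) {
        System.out.println(of(new Circle(1)));   // 3.141592653589793
        System.out.println(of(new Rect(2, 3)));  // 6.0
    }
}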
Java Arrays: The Ultimate Building Blocks

Before building complex data structures, the foundation needs to be rock solid. Arrays are the ultimate building blocks.

Today, I locked down the 4 core operations:
1. Declare: reserving contiguous memory.
2. Initialize: populating the data.
3. Access: the magic of O(1).
4. Traverse: looping through the elements.

The biggest takeaway? Understanding fixed memory allocation in Java is crucial before moving to dynamic structures like ArrayLists or HashMaps.

#DSA #Java #ArrayBasics
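The four operations in one tiny example (array name and values are arbitrary):

public class ArrayBasics {
    public static void main(String[] args) {
        int[] scores = new int[5];            // 1. declare: one contiguous block for 5 ints, all 0
        scores[0] = 90;                       // 2. initialize / populate
        scores[1] = 75;
        int first = scores[0];                // 3. access: index arithmetic → O(1)
        System.out.println("first = " + first);
        for (int s : scores) {                // 4. traverse
            System.out.println(s);
        }
        // int[] fixed = {90, 75, 60, 85, 70};   // declare + initialize in one step
    }
}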
Most Java performance issues don't show up in code reviews. They show up in object lifetimes.

Two pieces of code can look identical:
- same logic
- same complexity
- same output
But behave completely differently in production. Why? Because of how long objects live.

Example patterns:
- creating objects inside tight loops → short-lived objects → frequent GC
- holding references longer than needed → objects move to the old generation
- caching "just in case" → memory pressure builds silently

Nothing looks wrong in the code. But at runtime:
- GC frequency increases
- pause times grow
- latency becomes unpredictable

And the worst part?
👉 It doesn't fail immediately.
👉 It degrades slowly.

This is why some systems pass load tests, work fine initially, then become unstable weeks later.

Takeaway: in Java, performance isn't just about what you do. It's about how long your data stays alive while doing it.

#Java #JVM #Performance #Backend #SoftwareEngineering
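A toy sketch of the first pattern, not a benchmark (the JIT may eliminate some of these allocations through escape analysis): the first loop creates a short-lived object on every iteration, the second reuses a single buffer. Names are illustrative.

public class LifetimeDemo {

    // A fresh StringBuilder (plus intermediate Strings) on every call:
    // lots of short-lived objects → more frequent young-generation GC under load.
    static String labelPerCall(int i) {
        return new StringBuilder().append("item-").append(i).toString();
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            sum += labelPerCall(i).length();        // short-lived objects in a tight loop
        }

        // Reusing one buffer keeps allocation flat; the object lives only as long as needed.
        StringBuilder reused = new StringBuilder();
        long sum2 = 0;
        for (int i = 0; i < 1_000_000; i++) {
            reused.setLength(0);
            reused.append("item-").append(i);
            sum2 += reused.length();
        }
        System.out.println(sum + " == " + sum2);    // same result, very different allocation profile
    }
}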
🔥 Day 92/100 — LeetCode Challenge
1385. Find the Distance Value Between Two Arrays | 🟢 Easy | Java

Could use brute force O(n×m) — chose binary search O(n log m) instead. Always optimise. 💡

🔍 The Problem
Count the elements in arr1 for which no element in arr2 is within distance d. That means |arr1[i] - arr2[j]| > d must hold for ALL j.

⚡ Approach — Sort + Binary Search
✅ Sort arr2 once — O(m log m)
✅ For each element in arr1, binary search arr2
✅ If any arr2[mid] falls within distance d → invalid, return false
✅ Navigate left/right based on the comparison — if no match is found → count the element

💡 Why Binary Search Works
arr2 is sorted, so elements close in value are close in index. At each mid, if the distance is within d we immediately disqualify. Otherwise we confidently eliminate half the array. ✂️

📊 Complexity
⏱ Time: O(n log m + m log m)
📦 Space: O(1) extra

The brute force is fine for small inputs — but building the binary search habit now pays dividends when constraints scale up. Always think: can I sort + search instead of nested loops? 🧠

📂 Full solution on GitHub: https://lnkd.in/gYAnuwb2

8 more days. The century is calling! 🏁

#LeetCode #Day92of100 #100DaysOfCode #Java #DSA #BinarySearch #Sorting #CodingChallenge #Programming
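A sketch of the sort + binary-search approach described above, using the usual LeetCode signature findTheDistanceValue:

import java.util.Arrays;

class DistanceValue {
    public int findTheDistanceValue(int[] arr1, int[] arr2, int d) {
        Arrays.sort(arr2);                                  // O(m log m)
        int count = 0;
        for (int x : arr1) {                                // n binary searches → O(n log m)
            if (noneWithinDistance(arr2, x, d)) count++;
        }
        return count;
    }

    // true if every arr2[j] satisfies |x - arr2[j]| > d
    private boolean noneWithinDistance(int[] arr2, int x, int d) {
        int lo = 0, hi = arr2.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (Math.abs((long) x - arr2[mid]) <= d) return false;  // too close → disqualified
            if (arr2[mid] < x) lo = mid + 1;    // everything to the left is even farther below x
            else hi = mid - 1;                  // everything to the right is even farther above x
        }
        return true;
    }
}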
You can brute force this in O(n²)… or think smart and finish in O(n).

Day 71 — LeetCode Progress
Problem: Number of Good Pairs
Required: Given an array nums, return the number of pairs (i, j) such that i < j and nums[i] == nums[j].

Idea: Instead of checking all pairs, count the frequency of each number. If a number appears c times, it can form c × (c - 1) / 2 pairs.

Approach:
- Use a HashMap to store the frequency of each number
- Traverse the array and build the frequency map
- For each frequency c, add c × (c - 1) / 2 to the result
- Return the result

Time Complexity: O(n)
Space Complexity: O(n)

Small problem, but a powerful pattern: counting + combinations = huge optimization.

#LeetCode #DSA #Java #HashMap #ProblemSolving #CodingJourney
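A sketch of the counting approach, using the usual LeetCode signature numIdenticalPairs:

import java.util.HashMap;
import java.util.Map;

class GoodPairs {
    public int numIdenticalPairs(int[] nums) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int n : nums) {
            freq.merge(n, 1, Integer::sum);      // build the frequency map in one pass
        }
        int pairs = 0;
        for (int c : freq.values()) {
            pairs += c * (c - 1) / 2;            // choose 2 out of c equal values
        }
        return pairs;
    }
}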