#Post5 So far in this series, we explored concepts like HashMap, ConcurrentHashMap, CAS, and volatile. Now let's step back and cover the foundation: multithreading fundamentals. How does our code actually run? What are a process and a thread?

From Code to Process
When we write a Java program:
• Compilation: javac Test.java → generates bytecode
• Execution: java Test → starts the program

At this point, the JVM starts a new process in the operating system. This process is responsible for executing our program. The OS allocates resources to this process, such as:
• Memory
• CPU time
• Threads

What is a Process?
A process is simply a running instance of a program. Each process has its own memory and resources allocated by the operating system.

What is a Thread?
A thread is the smallest unit of execution inside a process. When a process starts, it begins with one thread, called the main thread. From there, we can create additional threads to perform tasks concurrently.

Multitasking vs Multithreading
Multitasking: the operating system runs multiple processes at the same time.
Multithreading: a single process runs multiple threads concurrently.

In simple terms:
Operating System → Multiple Processes (Multitasking) → Each Process contains multiple Threads (Multithreading)

Example
Think of a web application:
• One thread handles user requests
• Another thread processes data
• Another thread sends responses
This allows multiple tasks to run efficiently at the same time.

Key takeaway
Code execution flow: Code → JVM → Process → Threads
Understanding this flow is important because advanced concepts like synchronization, CAS, and concurrent data structures are all built on top of threads.

In the next post, we'll explore what happens inside a process (heap, stack, memory structure). #Java #SoftwareEngineering #Multithreading #BackendDevelopment #Programming
Java Multithreading Fundamentals: Processes and Threads
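The main-thread idea is easy to see directly in code. A minimal sketch (the class name `ThreadBasics` is illustrative): the process starts executing in a thread named "main", and we can spawn a second thread from it.

```java
// Every Java process starts with one thread, the "main" thread;
// additional threads can be created from it.
public class ThreadBasics {
    public static String mainThreadName() {
        return Thread.currentThread().getName(); // "main" when called from main
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Started in thread: " + mainThreadName());

        Thread worker = new Thread(() -> System.out.println(
                "Running in thread: " + Thread.currentThread().getName()));
        worker.start();
        worker.join(); // wait for the worker thread to finish
    }
}
```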
#Post11 In the previous post (https://lnkd.in/dynAvNrN), we saw how to create threads in Java. Now let's talk about a problem: if creating threads is so simple, why don't we just create a new thread every time we need one?

Say we are building a backend system. For every incoming request/task, we create a new thread:

new Thread(() -> {
    // process request
}).start();

This looks simple, but the approach breaks very quickly in real systems, for the reasons below.

Problem 1: Thread creation is expensive
Creating a thread is not just creating an object. It involves:
• Allocating memory (stack)
• Registering with the OS
• Scheduling overhead
Creating thousands of threads = performance degradation.

Problem 2: Too many threads → too much context switching
We already saw this earlier (https://lnkd.in/dYG3v-vb). More threads does NOT mean more performance. Instead:
• The CPU spends more time switching
• Less time is spent doing actual work

Problem 3: No control over the thread lifecycle
When you create threads manually:
• There is no limit on the number of threads
• There is no reuse
• Failures are hard to manage
This quickly becomes difficult to manage as the system grows.

So what's the solution? Instead of creating threads manually, we use the Executor Framework. A simple analogy: earlier, we were manually hiring a worker (thread) for every task. With an Executor, we have a team of workers (a thread pool), and we just assign tasks to them.

Key idea
Instead of creating a new thread for every task, we submit tasks to a pool of reusable threads. This is exactly what Java provides via the Executor Framework.

Key takeaway
Manual thread creation works for learning, but does not scale in real-world systems. Thread pools help:
• Control the number of threads
• Reduce overhead
• Improve performance
We no longer manage threads directly; we delegate that responsibility to the Executor Framework.
In the next post, we’ll see how Executor Framework works and how to use it in Java. #Java #Multithreading #Concurrency #BackendDevelopment #SoftwareEngineering
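The "pool of reusable threads" idea can be sketched in a few lines with `java.util.concurrent`. The pool size, the class name `PoolDemo`, and the toy `sumSquares` task are illustrative, not from the series:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Submits n small tasks to a fixed pool of 4 reusable threads and
    // sums the results; no thread is created per task.
    public static int sumSquares(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int x = i;
                results.add(pool.submit(() -> x * x)); // runs on a pooled thread
            }
            int total = 0;
            for (Future<Integer> f : results) total += f.get();
            return total;
        } finally {
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
    }
}
```

Note that the pool is shut down explicitly; in a real service the pool would live for the lifetime of the application rather than per call.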
#Post6 In the previous post (https://lnkd.in/dns348v6), we understood how our code runs: Code → JVM → Process → Threads. Now let's go one step deeper: what actually happens inside a process when it executes?

When a Java program runs, the JVM creates a process. Inside that process, memory and execution are organized into different parts.

1. Heap Memory (Shared)
This is where objects created using the "new" keyword are stored.
• Shared by all threads within the same process
• Not shared across different processes
• Threads can read and modify data
Because multiple threads access it, synchronization is required.

2. Code Segment (Shared)
Contains the bytecode (instructions to execute).
• Read-only
• Shared across all threads

3. Data Segment (Shared)
Stores static and global variables.
• Shared across all threads
• Can be modified
Synchronization is required when multiple threads update this data.

4. Stack (Thread-specific)
Each thread has its own stack.
• Stores method calls
• Stores local variables
• Not shared between threads

5. Program Counter (Thread-specific)
Each thread has its own program counter.
• Points to the current instruction being executed
• Moves forward as execution progresses

6. Registers (Thread-specific)
Each thread uses CPU registers to store temporary/intermediate data during execution. (We will explore how registers are used during context switching in upcoming posts.)

Important understanding
Inside a process:
• Heap + Code + Data → shared across threads
• Stack + Program Counter + Registers → private to each thread
This separation is what makes multithreading both powerful and complex.

Key takeaway
Threads share memory (the heap), but execute independently using their own stack and execution state.

In the next post, we'll explore registers and how the CPU switches between threads (context switching). #Java #SoftwareEngineering #Multithreading #BackendDevelopment #Programming
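The shared-heap vs. private-stack split can be demonstrated in a short sketch (the names `HeapVsStack` and `incrementFromTwoThreads` are illustrative): the `AtomicInteger` lives on the heap and is visible to both threads, while each thread's loop counter lives on that thread's own stack.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class HeapVsStack {
    static final AtomicInteger shared = new AtomicInteger(); // heap: shared

    public static int incrementFromTwoThreads(int perThread) throws InterruptedException {
        shared.set(0);
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) { // i: stack, private per thread
                shared.incrementAndGet();         // heap: needs atomicity
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return shared.get(); // 2 * perThread, because the increment is atomic
    }
}
```

With a plain `int++` instead of `incrementAndGet()`, the two threads would race on the shared heap value and the result would be unpredictable, which is exactly why the post says shared memory requires synchronization.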
Most Java performance issues don't show up in code reviews. They show up in object lifetimes.

Two pieces of code can look identical:
• same logic
• same complexity
• same output
But they behave completely differently in production. Why? Because of how long objects live.

Example patterns:
• creating objects inside tight loops → short-lived objects → frequent GC
• holding references longer than needed → objects move to the old generation
• caching "just in case" → memory pressure builds silently

Nothing looks wrong in the code. But at runtime:
• GC frequency increases
• pause times grow
• latency becomes unpredictable

And the worst part?
👉 It doesn't fail immediately.
👉 It degrades slowly.

This is why some systems pass load tests, work fine initially, and then become unstable weeks later.

Takeaway: in Java, performance isn't just about what you do. It's about how long your data stays alive while doing it. #Java #JVM #Performance #Backend #SoftwareEngineering
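A minimal sketch of the first pattern (allocation inside a tight loop); the names `Lifetimes`, `buildNaive`, and `buildReused` are illustrative. Both methods produce the same output, but the first allocates a fresh builder and string per iteration, generating short-lived garbage, while the second reuses one builder for the whole loop:

```java
public class Lifetimes {
    // Allocates a new StringBuilder (and intermediate String) per pass:
    // same result, but one short-lived object per iteration for the GC.
    public static String buildNaive(String[] parts) {
        String out = "";
        for (String p : parts) {
            out = new StringBuilder(out).append(p).toString();
        }
        return out;
    }

    // Reuses a single StringBuilder across the whole loop.
    public static String buildReused(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }
}
```

Identical logic and output, different allocation profile, which is the post's point: the difference only shows up at runtime, under GC pressure.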
🚀 Day 7 – Exception Handling: More Than Just try-catch
Today I focused on how exception handling should be used in real applications, not just the syntax.

try {
    int result = 10 / 0;
} catch (Exception e) {
    System.out.println("Error occurred");
}

This works… but is it the right approach? 🤔
👉 Catching the generic "Exception" is usually bad practice.
💡 Better approach:
✔ Catch specific exceptions (like "ArithmeticException")
✔ This helps in debugging and handling issues more precisely

⚠️ Another insight: avoid using exceptions for normal flow control. For example:

if (value != null) {
    value.process();
}

👉 is better than relying on exceptions.

💡 Key takeaway:
- Exceptions are for unexpected scenarios, not regular logic
- Proper handling improves readability, debugging, and reliability
Small changes here can make a big difference in production code. #Java #BackendDevelopment #ExceptionHandling #CleanCode #LearningInPublic
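The specific-exception advice can be captured in a small sketch (the names `SafeDivide` and `divideOrDefault` are illustrative): catch `ArithmeticException` rather than `Exception`, and handle it deliberately instead of swallowing it.

```java
public class SafeDivide {
    // Catches the specific ArithmeticException rather than the generic
    // Exception, and returns a deliberate fallback value.
    public static int divideOrDefault(int a, int b, int fallback) {
        try {
            return a / b;
        } catch (ArithmeticException e) { // specific: only division errors
            return fallback;
        }
    }
}
```

A broader `catch (Exception e)` here would also silently absorb unrelated bugs (for example, a `NullPointerException` introduced later), which is exactly why it is considered bad practice.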
Last weekend I spent time studying error handling, and I watched a great video from Dan Vega with an interesting view on how to implement error handling and retries. 🚀

I pushed a small demo repo that captures those ideas using Spring Boot's new RestClient and a clean error-handling pattern:
🔁 Retry transient failures with @Retryable (exponential backoff).
🧭 Centralize response-to-exception mapping via RestClient's defaultStatusHandler (ApiException, NotFoundException).
🛡️ Serve consistent ProblemDetail (RFC 7807) responses from a GlobalExceptionHandler for clearer API errors.
🧪 Use https://httpbin.org/ to simulate status codes and unstable endpoints for deterministic testing.

Why this is valuable:
- Increases reliability by handling transient errors consistently.
- Keeps controllers and business logic clean; the client layer interprets remote errors.
- Improves the client experience with predictable, machine-readable error payloads.

Credit to Dan Vega for the patterns and clarity. 🙏 Curious how I did it? Check the repo for the config, the examples, and how to adapt the pattern: https://lnkd.in/dEe3HNKr #SpringBoot #Java #Resilience #ErrorHandling #RestClient #Microservices
𝗛𝗼𝘄 𝗱𝗼𝗲𝘀 𝗛𝗮𝘀𝗵𝗠𝗮𝗽 𝘄𝗼𝗿𝗸 𝗶𝗻𝘁𝗲𝗿𝗻𝗮𝗹𝗹𝘆? 𝗪𝗵𝗮𝘁 𝗶𝘀 𝘁𝗵𝗲 𝘁𝗿𝗲𝗲𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝘁𝗵𝗮𝘁 𝘄𝗮𝘀 𝗮𝗱𝗱𝗲𝗱 𝗶𝗻 𝗝𝗮𝘃𝗮 𝟴?

Under the hood, HashMap is not just a simple key-value store:
→ Array of buckets
→ Linked list (for collisions)
→ Red-black tree (Java 8+ optimization)

𝗪𝗵𝗮𝘁 𝗿𝗲𝗮𝗹𝗹𝘆 𝗵𝗮𝗽𝗽𝗲𝗻𝘀 𝗼𝗻 𝗽𝘂𝘁()?
1. The hash is calculated (with bit mixing)
2. The bucket index is derived using (n-1) & hash
3. Collision? Before Java 8 → linked list (O(n)); since Java 8 → converts to a tree (O(log n))

𝗧𝗿𝗲𝗲𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 (𝘁𝗵𝗲 𝗴𝗮𝗺𝗲 𝗰𝗵𝗮𝗻𝗴𝗲𝗿)
A linked list converts to a red-black tree only when BOTH conditions hold:
• Bucket size > 8 (a single bucket/bin holds more than 8 entries)
• Capacity ≥ 64 (the total HashMap capacity is at least 64)
Otherwise the map resizes instead.

𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀 𝗶𝗻 𝗿𝗲𝗮𝗹 𝘀𝘆𝘀𝘁𝗲𝗺𝘀
Bad hashing or high collision rates can turn your O(1) into O(n). In high-throughput systems (100k+ rpm), this means:
→ latency spikes
→ CPU increase
→ unpredictable performance

𝗞𝗲𝘆 𝘁𝗮𝗸𝗲𝗮𝘄𝗮𝘆
HashMap = Array + Linked List + Tree (conditionally). Performance depends heavily on:
▸ hashCode()
▸ load factor
▸ resizing behavior

👉 Full deep dive (with diagrams & internals): link in the comments section. Try out Java-related quizzes and solidify your learning: https://lnkd.in/gzFmANXT #Java #HashMap #Map #SystemDesign #Backend #DataStructures #InterviewPrep #Codefarm
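The hash-and-index step can be sketched directly. This standalone illustration mirrors the bit mixing done in `java.util.HashMap.hash()` (high 16 bits XORed into the low 16) and the `(n - 1) & hash` masking, which works because the capacity n is always a power of two; the class name `BucketIndex` is illustrative.

```java
public class BucketIndex {
    // Bit mixing as in HashMap: fold the high bits into the low bits so
    // poor hashCode() distributions still spread across buckets.
    static int mix(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // Bucket index: mask with (capacity - 1); capacity must be a power of two.
    static int indexFor(Object key, int capacity) {
        return (capacity - 1) & mix(key);
    }
}
```

For example, "a".hashCode() is 97, so with the default capacity of 16 the key "a" lands in bucket 97 & 15 = 1; a null key always maps to bucket 0.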
Forgetting a 𝑻𝒉𝒓𝒆𝒂𝒅𝑳𝒐𝒄𝒂𝒍.𝒓𝒆𝒎𝒐𝒗𝒆() isn't just messy anymore; with virtual threads, it can turn request-scoped context into a bug. Java 25/26 now give us a cleaner model:
→ 𝑺𝒄𝒐𝒑𝒆𝒅𝑽𝒂𝒍𝒖𝒆 (final: JEP 506)
→ 𝑺𝒕𝒓𝒖𝒄𝒕𝒖𝒓𝒆𝒅𝑻𝒂𝒔𝒌𝑺𝒄𝒐𝒑𝒆 (preview: JEP 505 in Java 25, JEP 525 in Java 26)

𝑻𝒉𝒆 𝒏𝒆𝒘 𝒎𝒐𝒅𝒆𝒍 𝒊𝒔 𝒔𝒊𝒎𝒑𝒍𝒆:
1. Bind context 𝒐𝒏𝒄𝒆 at the request edge
2. Fork parallel work with 𝑺𝒕𝒓𝒖𝒄𝒕𝒖𝒓𝒆𝒅𝑻𝒂𝒔𝒌𝑺𝒄𝒐𝒑𝒆
3. Child tasks 𝒊𝒏𝒉𝒆𝒓𝒊𝒕 the bound context
4. Scope ends → the binding is 𝒈𝒐𝒏𝒆 𝒔𝒕𝒓𝒖𝒄𝒕𝒖𝒓𝒂𝒍𝒍𝒚

No manual cleanup. No per-task rebinding. No 𝑓𝑖𝑛𝑎𝑙𝑙𝑦.

𝐓𝐡𝐞 𝐨𝐧𝐞 𝐚𝐫𝐜𝐡𝐢𝐭𝐞𝐜𝐭𝐮𝐫𝐚𝐥 𝐫𝐮𝐥𝐞 𝐭𝐡𝐚𝐭 𝐦𝐚𝐭𝐭𝐞𝐫𝐬 𝐦𝐨𝐬𝐭:
❌ 𝐃𝐨𝐦𝐚𝐢𝐧 𝐝𝐚𝐭𝐚 → method parameters. Examples: orderId, customerId, cart
✅ 𝐑𝐞𝐪𝐮𝐞𝐬𝐭-𝐬𝐜𝐨𝐩𝐞𝐝 𝐜𝐨𝐧𝐭𝐞𝐱𝐭 → ScopedValue. Examples: traceId, tenantId, auth, feature flags

That single distinction removes a surprising amount of noise from service code. From "𝑯𝒐𝒘 𝒅𝒐 𝑰 𝒑𝒂𝒔𝒔 𝒕𝒉𝒊𝒔 𝒕𝒉𝒓𝒐𝒖𝒈𝒉 15 𝒍𝒂𝒚𝒆𝒓𝒔?" to "𝑾𝒉𝒂𝒕 𝒊𝒔 𝒕𝒉𝒆 𝒔𝒕𝒓𝒖𝒄𝒕𝒖𝒓𝒂𝒍 𝒔𝒄𝒐𝒑𝒆 𝒐𝒇 𝒕𝒉𝒊𝒔 𝒅𝒂𝒕𝒂?" That mindset shift is the real upgrade.

Detailed walkthrough with examples: https://lnkd.in/gqfDr5rs #Java #Java25 #Java26 #ProjectLoom #ScopedValue #StructuredConcurrency #VirtualThreads #SpringBoot #BackendEngineering #JVM
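The hazard being replaced is easy to reproduce with plain ThreadLocal, which runs on any recent JDK (ScopedValue itself requires Java 21+ previews or Java 25). In this illustrative sketch (the names `ThreadLocalLeak`, `TRACE_ID`, and `handle` are made up for the example), skipping remove() leaves the previous request's binding visible on the same thread, exactly the leak that ScopedValue's structural scoping makes impossible:

```java
public class ThreadLocalLeak {
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // Simulates one "request": bind the trace id, do work, maybe clean up.
    public static String handle(String traceId, boolean cleanUp) {
        TRACE_ID.set(traceId);
        try {
            return "handled:" + TRACE_ID.get();
        } finally {
            if (cleanUp) TRACE_ID.remove(); // forgetting this leaks the binding
        }
    }

    // What the *next* code on this thread sees after the request finished.
    public static String leftoverAfter(String traceId, boolean cleanUp) {
        handle(traceId, cleanUp);
        return TRACE_ID.get(); // null only if remove() was called
    }
}
```

With ScopedValue the equivalent binding exists only inside a `ScopedValue.where(...).run(...)` scope, so there is no leftover state to clean up and no `finally` to forget.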
Building a C++ log storage engine, because the JVM has limits you can't always engineer around.

Phase 5 of my distributed messaging system (Java) eliminated 655ms GC pauses and cut write p99.99 by 77%, but it also showed me where the JVM stops you: TLAB filler allocations you don't control, futex syscalls on every lock, object headers you can't remove.

Phase 6 is a C++ port of the log storage engine, targeting the same SPSC benchmark. The constraints are different:
- mmap + MAP_POPULATE + MAP_HUGETLB: pages pre-faulted at startup, 262,144 TLB entries → 512
- atomic<int64_t> with acquire/release replacing ReentrantReadWriteLock: zero syscalls on the read path
- alignas(64) + C++ concepts enforcing cache-line fit at compile time: no silent layout bugs
- push() returns bool: the engine never blocks, never throws; backpressure is the caller's problem

The benchmark measures the mmap store path only (msync excluded; the flush policy is caller-determined). Same methodology as the Java numbers: bare-metal Linux, no artificial constraints. The ADR was written and reviewed before a single line of code. Code next.

Java implementation: https://lnkd.in/gfANhFti
Stay tuned: https://lnkd.in/gifTNMSB
#cpp #lowlatency #hft #systemsprogramming #distributedsystems
I just finished building a Write-Ahead Log engine from scratch in Java 🫠

Every byte written to disk is hand-packed into a 32-byte binary header with a CRC32C checksum, a monotonic sequence number, and a timestamp, designed to survive a kill -9 mid-write.

Here's what I shipped in Phase 1:
- A binary record format with hardware-accelerated corruption detection.
- A single-writer append pipeline hitting ~100,000 writes/sec via group commit.
- Crash recovery that truncates corrupt tails and automatically rebuilds the key index on restart.
- 99 tests covering everything from bit flips in the header to partial writes at the end of a file.

Technical documentation: https://lnkd.in/dB5pGauT
Phase 2 is next: the in-memory key index :)
If you've ever wanted to contribute to and read storage-engine code with full design rationale, the repo is open. PRs welcome.
🔗 https://lnkd.in/dK_U-Ue6
#java #systemsdesign #opensource #softwaredevelopment
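The checksum-guarded record idea can be sketched with the JDK's hardware-accelerated `java.util.zip.CRC32C` (available since Java 9). This is a hedged illustration, not the repo's actual 32-byte layout: the field order and widths here (`seq`, `timestamp`, length, CRC) are made up for the example.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32C;

public class WalRecord {
    // CRC32C over the payload; hardware-accelerated on modern CPUs.
    public static long checksum(byte[] payload) {
        CRC32C crc = new CRC32C();
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    // Illustrative layout: seq(8) | timestamp(8) | length(4) | crc(8) | payload.
    public static byte[] encode(long seq, long timestamp, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(8 + 8 + 4 + 8 + payload.length);
        buf.putLong(seq).putLong(timestamp).putInt(payload.length)
           .putLong(checksum(payload)).put(payload);
        return buf.array();
    }

    // True iff the stored checksum matches the payload (no torn/corrupt write).
    public static boolean verify(byte[] record) {
        ByteBuffer buf = ByteBuffer.wrap(record);
        buf.getLong(); buf.getLong();      // skip seq, timestamp
        int len = buf.getInt();
        long stored = buf.getLong();
        byte[] payload = new byte[len];
        buf.get(payload);
        return stored == checksum(payload);
    }
}
```

On recovery, a tail record whose checksum fails verification is exactly what gets truncated: a kill -9 mid-write leaves a record whose stored CRC no longer matches its bytes.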
Java Arrays: The Ultimate Building Blocks

Before building complex data structures, the foundation needs to be rock solid, and arrays are the ultimate building blocks. Today, I locked down the 4 core operations:
1. Declare: reserving contiguous memory.
2. Initialize: populating the data.
3. Access: the magic of O(1).
4. Traverse: looping through the elements.

The biggest takeaway? Understanding fixed memory allocation in Java is crucial before moving to dynamic structures like ArrayList or HashMap.
#DSA #Java #ArrayBasics
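The four operations, sketched in one method (the class name `ArrayBasics` and the sample values are illustrative):

```java
public class ArrayBasics {
    public static int sum() {
        int[] nums = new int[5];          // 1. Declare: contiguous block of 5 ints
        for (int i = 0; i < nums.length; i++) {
            nums[i] = (i + 1) * 10;       // 2. Initialize: 10, 20, 30, 40, 50
        }
        int third = nums[2];              // 3. Access: O(1) lookup by index (30)
        int total = 0;
        for (int n : nums) total += n;    // 4. Traverse: sum all elements (150)
        return total + third;             // 180
    }
}
```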