Struggling with a "Java heap space" error in your application? 🤯 It's a frustrating but solvable issue! Troubleshooting these OutOfMemoryErrors is a clear, multi-step process:

1️⃣ Collect GC logs: Enable GC logging to get a timeline of memory pressure, which helps you understand whether your heap is gradually filling up from a leak or whether there was a sudden traffic burst.

2️⃣ Capture a heap dump: Grab a snapshot of your application's memory right before the JVM throws the error (e.g., using -XX:+HeapDumpOnOutOfMemoryError) to see exactly which objects are present, their references, and their sizes.

3️⃣ Analyze the dump: Use tools like HeapHero or jhat to dig into the dump and uncover your application's memory usage patterns.

4️⃣ Capture a thread dump: If your app becomes unresponsive before crashing, a thread dump can reveal stuck threads that are holding onto large object graphs and preventing garbage collection.

Ready to dive deeper into these steps and master Java memory issues? Read the full guide here: 👉 https://lnkd.in/gQu6cy5w

#Java #JavaDevelopment #SoftwareEngineering #PerformanceTuning #HeapDump #GarbageCollection #HeapHero #OutOfMemoryError
Solve Java Heap Space Errors with These 4 Steps
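The steps above can be grounded in code. A minimal Java sketch of the numbers GC logs and heap dumps are built on (flags such as -Xlog:gc* and -XX:+HeapDumpOnOutOfMemoryError are set at JVM startup; this snippet only reads current heap usage via the standard Runtime API):

```java
// Minimal sketch: the heap numbers that GC logs and dumps are built on.
// Startup flags (illustrative): java -Xlog:gc* -XX:+HeapDumpOnOutOfMemoryError MyApp
public class HeapCheck {
    // Currently used heap, in megabytes.
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    // Maximum heap the JVM will attempt to use (-Xmx), in megabytes.
    static long maxMb() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("Heap used: " + usedMb() + " MB of max " + maxMb() + " MB");
    }
}
```

Logging these two numbers over time is a crude but effective way to spot the "gradually filling up" pattern the post describes, even before you enable full GC logging.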
I was debugging a backend issue recently in a Java service. At first nothing looked wrong. No errors, no obvious problems. But something was off. After checking a bit more, it turned out to be a small mismatch between the data model and the repository. The code worked fine in most cases, but not with certain data. Fixing it was simple. Finding it was not. Sometimes the problem is not complex. It’s just hidden in small details. #Java #BackendEngineering #Debugging #SpringBoot
#Post11 In the previous post (https://lnkd.in/dynAvNrN), we saw how to create threads in Java. Now let's talk about a problem.

If creating threads is so simple… why don't we just create a new thread every time we need one?

Let's say we are building a backend system. For every incoming request/task, we create a new thread:

new Thread(() -> {
    // process request
}).start();

This looks simple, but this approach breaks very quickly in real systems, for the following reasons.

Problem 1: Thread creation is expensive
Creating a thread is not just creating an object. It involves:
• Allocating stack memory
• Registering with the OS
• Scheduling overhead
Creating thousands of threads = performance degradation.

Problem 2: Too many threads → too much context switching
We already saw this earlier (https://lnkd.in/dYG3v-vb). More threads does NOT mean more performance. Instead:
• The CPU spends more time switching
• Less time doing actual work

Problem 3: No control over the thread lifecycle
When you create threads manually:
• There is no limit on the number of threads
• No reuse
• Failures are hard to manage
This quickly becomes unmanageable as the system grows.

So what's the solution? Instead of creating threads manually, we use something called the Executor Framework. In simple terms: earlier, we were manually hiring a worker (thread) for every task. With an Executor, we have a team of workers (a thread pool), and we just assign tasks to them.

Key idea: instead of creating a new thread for every task, we submit tasks to a pool of reusable threads. This is exactly what Java provides with the Executor Framework.

Key takeaway: manual thread creation works for learning, but does not scale in real-world systems. Thread pools help:
• Control the number of threads
• Reduce overhead
• Improve performance
We no longer manage threads directly; we delegate that responsibility to the Executor Framework.
In the next post, we’ll see how Executor Framework works and how to use it in Java. #Java #Multithreading #Concurrency #BackendDevelopment #SoftwareEngineering
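As a preview of that next post, here is a minimal sketch of the pool-of-workers idea using the standard java.util.concurrent API (task bodies and pool size are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Submit n tiny tasks to a fixed pool of 4 reusable threads and
    // return how many completed. No per-task thread creation.
    static int runTasks(int n) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            pool.submit(() -> { completed.incrementAndGet(); });
        }
        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // let queued tasks drain
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Completed: " + runTasks(100));
    }
}
```

One hundred tasks, four threads: the pool reuses its workers instead of paying the creation and context-switching cost a hundred times.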
Java Garbage Collection Algorithms – What Every Developer Should Know

Garbage Collection isn't just a JVM feature: it directly impacts your application's performance, latency, and scalability. Let's break it down simply.

Serial GC
✔️ Single-threaded (stop-the-world)
✔️ Best for small apps or testing
✔️ Minimal CPU usage
⚠️ Not suitable for production-scale systems

Parallel GC (Throughput GC)
✔️ Multi-threaded GC
✔️ Maximizes throughput
✔️ Suitable for batch-processing systems
⚠️ Longer pause times

🔹 CMS (Concurrent Mark-Sweep)
✔️ Runs alongside application threads
✔️ Reduces pause time
✔️ Good for responsive apps
⚠️ High CPU usage; deprecated in Java 9 and removed in Java 14

G1 GC (Garbage First)
✔️ Splits the heap into regions
✔️ Predictable pause times
✔️ Handles large heaps efficiently
🔥 Default GC since Java 9

ZGC (Z Garbage Collector)
✔️ Sub-millisecond pause times
✔️ Scales to TB-sized heaps
✔️ Fully concurrent
💡 Best for ultra-low-latency systems

Shenandoah GC
✔️ Concurrent compaction
✔️ Consistently low pause times
✔️ Ideal for real-time applications

When to Use What?
✔️ Small apps → Serial GC
✔️ High throughput → Parallel GC
✔️ Balanced performance → G1 GC
✔️ Latency-critical apps → ZGC / Shenandoah

Pro tip: GC tuning can improve performance more than code optimization in many cases!

Follow Madhu K. for more backend & system design content

#Java #GarbageCollection #JVM #JavaPerformance #BackendDevelopment #SystemDesign #TechLearning #JavaDeveloper #CodingTips #SoftwareEngineering
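You pick a collector with a startup flag (e.g. -XX:+UseSerialGC, -XX:+UseParallelGC, -XX:+UseG1GC, -XX:+UseZGC). A small sketch, using the standard java.lang.management API, that shows which collectors your running JVM actually ended up with:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import java.util.stream.Collectors;

public class GcInfo {
    // Names of the collectors the running JVM is using, e.g. under G1
    // you typically see "G1 Young Generation" and "G1 Old Generation".
    static List<String> collectorNames() {
        return ManagementFactory.getGarbageCollectorMXBeans().stream()
                .map(GarbageCollectorMXBean::getName)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        collectorNames().forEach(System.out::println);
    }
}
```

Handy sanity check when tuning: it confirms the flag you passed is the collector you are actually running.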
Why is my custom annotation returning null? 🤯

Every Java developer eventually tries to build a custom validation or logging engine, only to get stuck when method.getAnnotation() returns null. The secret lies in the @Retention meta-annotation. If you don't understand these three levels, your reflection-based engine will never work:

1️⃣ SOURCE (e.g., @Override, @SuppressWarnings)
Where? Only in your .java files.
Why? It's for the compiler. Once the code is compiled to .class, these annotations are GONE. You cannot find them at runtime.

2️⃣ CLASS (the default!)
Where? Stored in the .class file.
Why? Used by bytecode analysis tools (like SonarLint or AspectJ). But here's the kicker: the JVM ignores them at runtime. If you try to read them via reflection, you get null.

3️⃣ RUNTIME (e.g., @Service, @Transactional)
Where? Stored in the bytecode AND loaded into memory by the JVM.
Why? This is the "magic zone." Only these can be accessed by your code while the app is running.

In my latest deep dive, I built a custom geometry engine using reflection. I showed exactly how to use @Retention(RUNTIME) to create a declarative validator that replaces messy if-else checks. If you're still confused about why your custom metadata isn't "visible," this breakdown is for you. 👇 Link to the full build and source code in the first comment!

#Java #Backend #SoftwareArchitecture #ReflectionAPI #CleanCode #ProgrammingTips
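The RUNTIME level can be demonstrated in a few lines. A minimal sketch (the @Audited annotation and PaymentService class are illustrative, not from the post's geometry engine):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class RetentionDemo {
    // With SOURCE or CLASS retention, getAnnotation() below would return null.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Audited {
        String value();
    }

    static class PaymentService {
        @Audited("payments")
        void pay() { }
    }

    public static void main(String[] args) throws NoSuchMethodException {
        Method m = PaymentService.class.getDeclaredMethod("pay");
        Audited a = m.getAnnotation(Audited.class); // visible only thanks to RUNTIME
        System.out.println("Annotation value: " + (a != null ? a.value() : "null"));
    }
}
```

Change RetentionPolicy.RUNTIME to CLASS and rerun: the annotation is still in the .class file, but getAnnotation() starts returning null, which is exactly the trap described above.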
Just wrapped up a take-home assessment where I built an LRU (Least Recently Used) Cache from scratch in Java, and I decided to take it a step further. Instead of stopping at the basic implementation, I: • Designed it using a combination of a HashMap and a doubly linked list to achieve O(1) time complexity for both get and put operations • Applied generics to make the cache reusable and type-safe • Extended it into a Spring Boot REST API to simulate real-world backend usage • Recorded a walkthrough Loom video explaining my design decisions and trade-offs. This exercise was a good reminder that writing code is only one part of the job; being able to clearly explain WHY you made certain decisions is just as important. Key takeaway: Clean design + clear communication > just “working code”. Looking forward to more opportunities to build and share. #Java #BackendDevelopment #SystemDesign #SpringBoot #LRUCache
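The HashMap + doubly-linked-list design described above is also what the JDK's own LinkedHashMap combines internally, so a compact sketch of the same O(1) LRU behavior (not the author's from-scratch implementation) fits in a few lines:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: access-order LinkedHashMap as an LRU cache. get() and put()
// both run in O(1), and the eldest (least recently used) entry is
// evicted automatically once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves entries to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }
}
```

Usage: with capacity 2, after put(1), put(2), get(1), put(3), key 2 is evicted because it is the least recently used. Writing it from scratch with your own doubly linked list, as in the assessment, is still the better interview exercise; this is the production shortcut.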
Stop wasting memory on threads that do nothing. 🛑

If you're building Java backends, you've probably seen this: more users → more threads → more RAM usage. I recently explored Virtual Threads (Java 21 / Project Loom), and this concept finally clicked for me.

💡 The Problem
In standard Java: 1 request = 1 platform thread. During a DB/API call, the thread gets blocked. It's like a waiter standing idle while food is cooking 🍽️
👉 Wasted resources + poor scalability

🔍 The Solution: Virtual Threads
👉 Lightweight threads managed by the JVM (not the OS)
• Cheap to create
• Can run thousands easily
• Perfect for I/O-heavy backend systems

⚙️ How it actually works (mounting / unmounting)
1️⃣ Mounting: the virtual thread runs on a carrier thread (a platform thread)
2️⃣ I/O call (DB/API): your code looks blocking
3️⃣ Unmounting (parking): the virtual thread is paused and parked in heap memory 👉 it releases the carrier thread
4️⃣ The carrier thread is free and handles another request immediately
5️⃣ Remounting (resume): once the response arrives, the virtual thread continues

💻 The "magic" in code

// Looks like blocking code
Runnable task = () -> {
    System.out.println("Processing: " + Thread.currentThread());
    String data = fetchDataFromDB(); // DB/API call
    System.out.println("Result: " + data);
};

// Run using a virtual thread
Thread.ofVirtual().start(task);

🧩 What's happening behind the scenes?
👉 Thread.ofVirtual() creates a lightweight thread (stored on the heap, not at the OS level)
👉 During the DB/API call, the virtual thread gets unmounted (parked) and the carrier thread becomes free
👉 While waiting, the same carrier thread handles other requests
👉 When the response arrives, the scheduler remounts the virtual thread and execution continues

📈 Result
• No idle threads
• Better resource usage
• Simple synchronous code
• High scalability without complex async code

🧠 Biggest takeaway
👉 "Code looks blocking… but the system is not blocked."
That's the mindset shift.

Have you tried Virtual Threads in your services yet? Did you see any real performance improvement?
🤔 #Java #BackendEngineering #VirtualThreads #ProjectLoom #Java21 #Microservices #Performance
Still confused about how Java actually runs your code? 🤔 Here's a simple breakdown of how the JVM works 👇

👉 1. Build Phase
✔️ Java code (.java) is compiled using javac
✔️ Converted into bytecode (.class files)

👉 2. Class Loading
✔️ The JVM loads classes using the Bootstrap, Platform, and System class loaders

👉 3. Linking
✔️ Verify → Prepare → Resolve
✔️ Ensures the code is safe and ready to run

👉 4. Initialization
✔️ Static variables and static blocks are initialized

👉 5. Runtime Data Areas
✔️ Method Area and Heap (shared)
✔️ Stack, PC Register, Native Stack (per thread)

👉 6. Execution Engine
✔️ The interpreter executes bytecode instruction by instruction
✔️ The JIT compiler converts hot code into machine code for speed

👉 7. Native Interface (JNI)
✔️ Interacts with native libraries when needed

💡 The JVM is the reason behind Java's "Write Once, Run Anywhere" power.

📌 Save this post 🔁 Repost to help others 👨💻 Follow Abhishek Sharma for more such content

#Java #JVM #SystemDesign #BackendDevelopment #SoftwareEngineer #Developers #TechJobs #Programming #LearnJava
If your class name looks like CoffeeWithMilkAndSugarAndCream… you've already lost.

This is how most codebases slowly break: you start with one clean class. Then come "small changes":
• add logging
• add validation
• add caching

So you create a few subclasses… then a few more, or pile everything into if-else. Now every change touches existing code, and every change risks breaking something. That's not scaling. That's slow decay.

The Decorator Pattern fixes this in a simple way: don't modify the original class. Wrap it. Start with a base object, then layer behavior on top of it. Each decorator:
• adds one responsibility
• doesn't break existing code
• can be combined at runtime

No subclass explosion. No god classes. No fragile code.

Real-world example? Java I/O does this everywhere: you wrap streams on top of streams.

The real shift is this: stop thinking inheritance, start thinking composition. Because most "just one more feature" problems… are actually design problems.

Have you ever seen a codebase collapse under too many subclasses or flags?

#DesignPatterns #LowLevelDesign #SystemDesign #CleanCode #Java #SoftwareEngineering #OOP

Attaching the decorator pattern diagram with a simple example.
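In code, the coffee example from the opening line might look like this minimal sketch (class names and prices are illustrative):

```java
// Component: the common interface everything implements.
interface Coffee {
    double cost();
    String desc();
}

// Concrete component: the base object we start with.
class PlainCoffee implements Coffee {
    public double cost() { return 2.0; }
    public String desc() { return "coffee"; }
}

// Base decorator: wraps another Coffee and delegates by default.
abstract class CoffeeDecorator implements Coffee {
    protected final Coffee inner;
    CoffeeDecorator(Coffee inner) { this.inner = inner; }
    public double cost() { return inner.cost(); }
    public String desc() { return inner.desc(); }
}

// Each decorator adds exactly one responsibility.
class Milk extends CoffeeDecorator {
    Milk(Coffee inner) { super(inner); }
    public double cost() { return inner.cost() + 0.5; }
    public String desc() { return inner.desc() + " + milk"; }
}

class Sugar extends CoffeeDecorator {
    Sugar(Coffee inner) { super(inner); }
    public double cost() { return inner.cost() + 0.2; }
    public String desc() { return inner.desc() + " + sugar"; }
}
```

Composed at runtime, new Sugar(new Milk(new PlainCoffee())) replaces the CoffeeWithMilkAndSugar subclass, exactly the way new BufferedInputStream(new FileInputStream(...)) wraps streams on top of streams in java.io.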
InterruptedException is not an error. It's how threads are asked to stop. And ignoring it can make your application impossible to shut down.

In Java's threading model, interruption was never designed as a failure mechanism. It's a signal: a coordination event between threads.

Calling interrupt() is the intended way to ask a thread to stop. But it doesn't stop it. It sets a flag. And if the thread is blocked, it may react by throwing InterruptedException.

Here is the trap: when that exception is thrown, the flag is cleared. If you ignore it, you erase the signal. If you care about it, you must restore it:

Thread.currentThread().interrupt();

This is the model. And most code ignores it. Consider this:

try {
    queue.take();
} catch (InterruptedException e) {
    // ignore
}

Looks harmless. It's not. From that point on, your thread behaves as if no interruption ever happened. Another thread asked it to stop. Your code said: no.

This is how systems become impossible to shut down cleanly. Threads keep running. Executors don't terminate. Shutdown hooks hang. And eventually: kill -9. This is not a rare edge case. It's the direct consequence of coding against the model.

There is a contract. If you catch InterruptedException, you must either:
• propagate it
• or restore the flag

Interruption is not about failure. It's about control. It's how the runtime coordinates lifecycle across threads. When you ignore it, you're not just hiding a problem. You're breaking the control plane of your application.

Final thought: most systems don't fail because something crashed. They fail because something refused to stop. A thread that ignores interruption is not resilient. It's uncontrollable. And in production, uncontrollable systems don't degrade. They hang. Then they get killed.

💬 How do you handle interruption in your production code?

#Java #JVM #Multithreading #Backend #SoftwareEngineering
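The contract can be shown end to end in a small runnable sketch (the Worker class is illustrative): a thread blocks on take(), gets interrupted, restores the flag, and exits cleanly.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class InterruptDemo {
    // A worker that honors interruption: it restores the flag and exits.
    static class Worker implements Runnable {
        private final BlockingQueue<String> queue;
        volatile boolean flagRestored;

        Worker(BlockingQueue<String> queue) { this.queue = queue; }

        @Override
        public void run() {
            try {
                queue.take(); // blocks until an element arrives, or interruption
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the cleared flag
                flagRestored = Thread.currentThread().isInterrupted();
                // then exit: the signal was honored, not swallowed
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Worker worker = new Worker(new LinkedBlockingQueue<>());
        Thread t = new Thread(worker);
        t.start();
        t.interrupt();   // ask the thread to stop
        t.join(5_000);   // it terminates instead of hanging forever
        System.out.println("Flag restored: " + worker.flagRestored);
    }
}
```

Delete the interrupt() call inside the catch block and the exit, and you have the "// ignore" anti-pattern from the post: the thread blocks on take() forever and the process can only be killed.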
Let’s break the code down step by step:

int x = 4;
int y = 11;
x += y >> 1;
System.out.println(x);

🔹 Step 1: Understand the operator precedence
In Java, the right-shift operator >> has higher precedence than the compound assignment +=, so this line:
x += y >> 1;
is evaluated as:
x = x + (y >> 1);

🔹 Step 2: Evaluate the shift operation y >> 1
For a positive integer, a right shift by 1 divides it by 2 (discarding the remainder).
11 in binary = 00001011
Right shift by 1 = 00000101
That equals 5.

🔹 Step 3: Add to x
x = 4 + 5
x = 9

🔹 Step 4: Print
System.out.println(x); // 9

🧠 ✅ Correct answer: B) 9

#Java #SpringBoot #FullStackDeveloper
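The walkthrough above compiles and runs exactly as described:

```java
public class ShiftQuiz {
    static int compute() {
        int x = 4;
        int y = 11;
        x += y >> 1; // x = x + (y >> 1) = 4 + (11 >> 1) = 4 + 5
        return x;
    }

    public static void main(String[] args) {
        System.out.println(compute()); // prints 9
    }
}
```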
Kumar Nagaraju: Really helpful post. I’ve seen cases where GC logs looked fine, but the heap dump showed what was actually holding memory. That step really makes a big difference. Explaining it this way makes it easy to understand.