OOMKilled by JVM Memory: Beyond Heap

🚨 Why your 4GB JVM app still gets OOMKilled (even when heap looks fine)

Most developers assume: JVM memory = heap. But that's only part of the story.

Let's look at a common real-world setup. You configure:
-Xmx4g
Container memory limit = 4GB

Looks perfect, right? ❌ Not really. Your JVM uses much more than just heap. Here's where the "hidden" memory goes:

🔹 Metaspace - stores class metadata (Spring, Hibernate, proxies); typically ~150–250MB, with no strict cap by default
🔹 Thread Stacks - each thread ≈ 1MB, so 200 threads ≈ 200MB (Tomcat + HikariCP + Kafka + @Async)
🔹 Direct Buffers (off-heap) - used by WebClient, Kafka, Netty; not visible in heap, not GC-managed
🔹 Code Cache - JIT-compiled code; ~50–150MB, growing as the app warms up
🔹 GC Overhead - the garbage collector needs working memory too

💥 Reality Check
4GB heap + 200MB metaspace + 200MB threads + 100MB buffers + 100MB code cache + GC working memory
👉 Total ≈ 4.7GB
But your container limit is still 4GB.

Result? 🚫 Kubernetes OOMKills your pod.

And the confusing part:
✔️ Heap looks fine (~2.5GB used)
❌ But your app still crashes
Because the OOMKill is based on total process memory, not just heap.

✅ The Fix
✔️ Keep heap at 70–75% of container memory
✔️ If the container limit is 4GB → set -Xmx3g
✔️ If you need -Xmx4g → give the container ≥ 5.2GB
✔️ Cap metaspace: -XX:MaxMetaspaceSize=256m
✔️ Cap direct memory: -XX:MaxDirectMemorySize=256m
✔️ Monitor non-heap usage: /actuator/metrics/jvm.memory.used
(see the flag and monitoring sketches below 👇)

💡 Takeaway
If you're only monitoring heap, you're missing the full picture.
👉 Always consider the total JVM memory footprint in containerized environments.

#Java #JVM #SpringBoot #Kubernetes #Microservices #DevOps #Performance #BackendEngineering #SoftwareEngineering #Programming
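A minimal sketch of what those settings can look like together, assuming a 4GB container limit. The flag names are real HotSpot options; the specific values are illustrative and should be tuned per workload:

```
# Assumed: container memory limit = 4GB
-Xmx3g                             # ~75% of the limit; alternatively -XX:MaxRAMPercentage=75.0 derives heap size from the container limit
-XX:MaxMetaspaceSize=256m          # cap class metadata
-XX:MaxDirectMemorySize=256m       # cap direct (off-heap) buffers used by Netty, Kafka clients, WebClient
-XX:NativeMemoryTracking=summary   # optional: enables `jcmd <pid> VM.native_memory summary` for a non-heap breakdown
```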

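And if you want the same picture from inside the JVM (beyond Spring Boot's /actuator/metrics/jvm.memory.used), the standard java.lang.management MXBeans expose heap, non-heap, and buffer-pool usage. A small illustrative snippet:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class JvmFootprint {
    public static void main(String[] args) {
        // Heap: the part most dashboards already show.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // Non-heap: Metaspace, JIT code cache, compressed class space, ...
        MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();

        System.out.printf("heap used:     %d MB%n", heap.getUsed() / (1024 * 1024));
        System.out.printf("non-heap used: %d MB%n", nonHeap.getUsed() / (1024 * 1024));

        // Direct and mapped buffers (Netty, NIO, Kafka clients) sit outside both of the above.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffer pool %-6s used: %d MB%n",
                    pool.getName(), pool.getMemoryUsed() / (1024 * 1024));
        }
        // Thread stacks and GC working memory are still not counted here,
        // which is why the container limit needs headroom above -Xmx.
    }
}
```

Even then, the kernel's OOM killer looks at total process memory, so the number that ultimately matters is the pod's memory usage compared with its limit.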