
When the JVM Didn't Know It Was in a Container

It's 2024. Your Spring Boot application is running inside Kubernetes. The container memory limit is 2 GB. Everything looks properly configured. A few hours after deployment, the pod restarts. Kubernetes reports: OOMKilled. The logs show: OutOfMemoryError.

You investigate the code. No obvious memory leak. Load is normal. Heap usage during testing was fine. So what happened?

The issue isn't your business logic. It isn't a missing -Xmx either. For years, the JVM calculated heap size from the host machine's total memory, not the container's limit. If the node had 16 GB of RAM, the JVM sized itself as if all of that memory were available, even when the container was capped at 2 GB. Kubernetes enforced the limit strictly. The JVM exceeded it. The Linux OOM Killer terminated the process.

The core problem was simple: the JVM was not container-aware. It read memory information from '/proc/meminfo' and applied its ergonomics algorithm as if it were running directly on the host. Containers existed, but the JVM didn't fully see them.

This behavior shaped early cloud-native decisions. Some teams concluded that Java was too heavy for Kubernetes. In reality, the language was not the problem; environmental visibility was.

The fix evolved gradually: later Java 8 updates shipped the experimental -XX:+UseCGroupMemoryLimitForHeap flag, JDK 10 brought proper cgroups support with -XX:+UseContainerSupport enabled by default, and that container awareness was backported to Java 8u191. But the side effect lingered: many teams began over-provisioning memory out of fear, increasing infrastructure costs just to avoid mysterious restarts.
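Want to see what your own JVM believes? Here is a minimal sketch (the class name JvmMemoryCheck is mine, but the Runtime calls are standard Java): run it inside the container and compare the output against the pod's memory limit.

    // Minimal sketch: print the memory and CPU budget the JVM thinks it has.
    // On a container-aware JVM these reflect the cgroup limits;
    // on an older, non-aware JVM they reflect the whole host machine.
    public class JvmMemoryCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long maxHeapMb = rt.maxMemory() / (1024 * 1024);
            int cpus = rt.availableProcessors();
            System.out.println("Max heap the JVM will grow to (MB): " + maxHeapMb);
            System.out.println("Processors the JVM can see: " + cpus);
        }
    }

You don't even need code: running java -XX:+PrintFlagsFinal -version inside the container shows the computed MaxHeapSize, and on container-aware JVMs -XX:MaxRAMPercentage caps the heap as a fraction of the cgroup limit instead of hard-coding -Xmx.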
Have you ever debugged an OOMKilled restart loop and later realized the issue wasn't your code but how the JVM sized its memory?

#Java #Kubernetes #CloudNative #JVM #BackendEngineering #DevOps

