Tobias Unger’s Post

Java, Kubernetes, and the Garbage Collector

Yet another application has gone down in the Bermuda Triangle of Java, Kubernetes, and the Garbage Collector. Personally, I find it risky to run Java on Kubernetes with configured limits and requests without explicitly setting the Garbage Collector and heap parameters.

Why? By default, Java sets the maximum heap size (-Xmx) to about 25% of the container's memory limit. So if your pod has 2 GB of RAM, the JVM happily limits itself to around 500 MB of heap. I can already hear the OutOfMemoryError creeping up the logs...

And contrary to popular belief, G1 GC is not always the default Garbage Collector. Nicolai Parlog pointed this out in his Inside Java Newscast #99. Under certain conditions, such as single-CPU or small-memory environments, the JVM still picks the Serial GC. JEP 523 aims to change that in the future by finally making G1 the default everywhere, eliminating those inconsistencies.

So, if you're running Java in Kubernetes, do yourself a favor: set all your JVM options explicitly. It's the only way to be sure the configuration that's actually running is the one you think is running.

Reference: Nicolai Parlog, Inside Java Newscast #99 – G1 GC: 3 Upcoming Improvements, Oracle (Oct 23, 2025). https://lnkd.in/ezfpNCJe

#Java #Kubernetes #GarbageCollector #JVM #OutOfMemoryError #DevLife #OpenJDK
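
A quick way to verify what's actually in effect, as a minimal sketch using only standard JDK management beans (the class name is illustrative):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class JvmConfigCheck {
    public static void main(String[] args) {
        // The heap ceiling the JVM actually settled on (-Xmx, or the ~25% default)
        long maxHeapMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMiB + " MiB");

        // Bean names reveal the active collector: "G1 Young Generation" for G1,
        // "Copy" / "MarkSweepCompact" for the Serial GC
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("GC: " + gc.getName());
        }
    }
}

Without running your app, java -XX:+PrintFlagsFinal -version inside the container shows the resolved flag values as well.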

FYI: percentage-based settings sometimes get ignored in your JVM options, so let the JVM breathe and size the heap at around 30% of your k8s limit using absolute numbers. And be aware of the two bases, 1000 (GB) versus 1024 (GiB), since it affects the threshold of your limits. As a rule of thumb: always go with the binary base (MiB, GiB) in k8s when working with Java.
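
To make the base mismatch concrete (the figures follow from the unit definitions, not from the comment): a k8s limit of 2G means 2,000,000,000 bytes, while -Xmx2g is binary in the JVM, i.e. 2,147,483,648 bytes. The heap alone would already exceed the container limit before any non-heap memory is counted; a limit of 2Gi avoids exactly that trap.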

Indeed, defining JVM args explicitly in production is essential. Another way to look at it is to start from your app’s observed memory usage. If your Java service peaks at around 3 GB under stress, set your K8s memory limit ~20% higher. This simple buffer often prevents OOM issues in real-world workloads.
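
With the comment's numbers and the binary units recommended above: a 3 GiB peak is 3072 MiB, and 3072 MiB x 1.2 ≈ 3686 MiB, so a limit of roughly 3.6Gi.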

Maybe worth adding that killing the JVM on an OutOfMemoryError is actually quite a sensible move. Once you're out of memory, there's nothing left to recover anyway. Letting the JVM exit and restart via Kubernetes' restart policy (with a liveness probe as a backstop) is the recommended behavior in production. After all, a fresh JVM has never hurt anyone. Except the poor soul who has to fix the configuration.
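
The standard HotSpot flags for this behavior (both are real options; the restart itself is handled by the pod's restartPolicy, which is not shown here):

-XX:+ExitOnOutOfMemoryError      terminate the JVM on the first OutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError  write a heap dump first, for the poor soul mentioned above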

In over 20 years of JVM usage, I don't think I've ever deployed any workload, containerized or not, where I did not explicitly set the heap and non-heap memory sizes. Sure, it's nice not to have to set it in two places (k8s resource limits and -Xmx), but then you just write a simple script where you specify what percentage you want. I think I still have one dating back to 2018 if you're interested.

Don't forget that -Xmx sets the HEAP size only. The JVM needs more memory than just the heap. Setting the heap size too close to the container limit may actually get the process OOM-killed before the heap is even full, because the JVM cannot allocate the non-heap memory it needs (e.g. for garbage collection, class loading, profiling and other activities).
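
You can watch that non-heap footprint from inside the JVM; a minimal sketch using the standard MemoryMXBean (it only covers what the JVM itself tracks, not all native allocations):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class NonHeapCheck {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // metaspace, code cache, ...
        System.out.println("Heap used:     " + heap.getUsed() / (1024 * 1024) + " MiB");
        System.out.println("Non-heap used: " + nonHeap.getUsed() / (1024 * 1024) + " MiB");
    }
}

For the full native picture, starting the JVM with -XX:NativeMemoryTracking=summary and running jcmd <pid> VM.native_memory gives a per-area breakdown.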
