JDK inside Containers

Does the OpenJDK JVM play well in the world of Linux containers? Some frequently asked questions:

1) Why does the JVM use more than 1 GB of memory when I specify -Xmx1g?

Specifying -Xmx1g tells the JVM to allocate a 1 GB heap. It does not tell the JVM to limit its entire memory usage to 1 GB. There are card tables, code caches, and all sorts of other off-heap data structures. The parameter to cap total memory usage is -XX:MaxRAM. Be aware that with -XX:MaxRAM=500m your heap will be approximately 250 MB.
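As a quick check, you can read back the heap ceiling the JVM actually settled on at runtime. A minimal sketch (the class name is illustrative; run it with, e.g., -Xmx1g):

```java
// HeapCeiling.java -- print the heap ceiling the JVM picked.
// Run with e.g.:  java -Xmx1g HeapCeiling
public class HeapCeiling {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory(); // roughly the -Xmx value
        System.out.printf("Max heap: %d MB%n", maxHeap / (1024 * 1024));
        // Note: total process footprint (RSS) will exceed this number --
        // metaspace, code cache, GC card tables and thread stacks live off-heap.
    }
}
```

Comparing this value against the container's RSS is an easy way to see the off-heap overhead for yourself.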

2) Why does the JVM appear to ignore the limit when I specify -m 10m for my Linux container?

The JVM historically looked in /proc to figure out how much memory was available and then set its heap size based on that value. Unfortunately, containers like Docker don't provide container-specific information in /proc. On older OpenJDK versions you can work around this by setting -XX:MaxRAM=n explicitly.
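On such an older JDK, the workaround looks like this. A command-line sketch (the image tag and app.jar are placeholders; match -XX:MaxRAM to the -m limit you give Docker):

```
docker run -m 512m openjdk:8-alpine \
    java -XX:MaxRAM=512m -jar app.jar
```

Without the -XX:MaxRAM flag, the JVM would size its heap from the host's memory, not the 512 MB container limit.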

3) What if I specify cpusets?

There is a patch in OpenJDK 8 that uses the cgroup information to calculate the appropriate number of parallel GC threads. However, if this patch is not available in your version of OpenJDK, you may end up with 8 parallel GC threads and only 2 CPUs in your container. The workaround is to specify the number of parallel GC threads explicitly: -XX:ParallelGCThreads=2. If you only have 1 CPU in your container, it is recommended that you run with -XX:+UseSerialGC and avoid parallel GC altogether.
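A quick way to see which CPU count the JVM will base its defaults on is to ask it directly. A minimal sketch (the class name is illustrative):

```java
// CpuReport.java -- print how many CPUs the JVM thinks it has.
public class CpuReport {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: " + cpus);
        // On a JDK without the cgroup patch, inside a container this may
        // report the host's CPU count rather than the cpuset limit --
        // hence the explicit -XX:ParallelGCThreads=<n> workaround above.
    }
}
```

If this prints the host's core count inside a cpuset-restricted container, you know the GC thread defaults will be wrong too.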

4) We can explicitly set things like heap size and parallel GC thread count, but how can we tell the JVM that we don't care about pause time or throughput and just want it to use as few resources as possible?

-XX:+UseSerialGC will run with only 1 garbage collection thread and with the smallest heap overhead.

-XX:+TieredCompilation -XX:TieredStopAtLevel=1 will stop JIT compilation at the first tier, disabling the optimizing compiler and saving some space.
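Putting the two together, a minimal-footprint invocation might look like this (app.jar and the 128 MB heap are placeholders for your application and sizing):

```
java -XX:+UseSerialGC \
     -XX:+TieredCompilation -XX:TieredStopAtLevel=1 \
     -Xmx128m \
     -jar app.jar
```

This trades peak throughput for a smaller, more predictable footprint, which is usually the right trade for small sidecar or utility containers.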

5) My Java program (like a spring-boot microservice) has a startup phase where it needs a lot of heap, but settles into a quiet looping phase where it doesn't need as much. Can I configure the heap to grow, shrink, and give memory it isn't currently using back to the operating system?

Serial GC will do this for you, and you can ask it to be more aggressive:

-XX:MinHeapFreeRatio=20 (grow the heap when it is more than 80% occupied).

-XX:MaxHeapFreeRatio=40 (shrink the heap when it is less than 60% occupied).

Parallel GC will do this for you as well. Additional parameters worth tuning:

-XX:GCTimeRatio=4

-XX:AdaptiveSizePolicyWeight=90
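Taken together, an "elastic heap" configuration for the Serial collector might look like this (app.jar is a placeholder for your application):

```
java -XX:+UseSerialGC \
     -XX:MinHeapFreeRatio=20 \
     -XX:MaxHeapFreeRatio=40 \
     -jar app.jar
```

After the busy startup phase, the collector will progressively shrink the committed heap back toward what the quiet phase actually needs.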

6) A single JVM in a pod on a 36-core node sees 36 CPUs. Great. But put two more JVMs, each in its own pod, on that node, and all three will still see 36 CPUs. How do we solve this resource contention?

The issue has since been fixed in Java 8u191+. The UseContainerSupport option, first introduced in Java 10 and active by default there, was backported to Java 8u191.

docker run -m 1g openjdk:8u191-alpine java \
           -XX:+PrintFlagsFinal -version \
           | grep -E "UseContainerSupport|InitialRAMPercentage|MaxRAMPercentage|MinRAMPercentage"
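With container support active, the heap can then be sized as a fraction of the container limit instead of an absolute -Xmx. A command-line sketch (the 75.0 value is illustrative):

```
docker run -m 1g openjdk:8u191-alpine \
    java -XX:MaxRAMPercentage=75.0 -version
```

Here the JVM derives its maximum heap from the 1 GB cgroup limit, so resizing the container in Kubernetes automatically resizes the heap with it.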

Some JVM parameters for further tuning:

[table: JVM parameters]


Good information Nikhil. My 2 cents: instead of specifying exact boundary values for the heap, since the JDK is now container aware, defining max and min heap percentages in the Dockerfile is more efficient, especially when we alter these parameters a lot during performance engineering. Once I specify these percentage parameters, I only need to play with the Kubernetes limits that allocate memory to the container; what the container allocates to the JVM is then a percentage of that. Reduced burden. :)


JDK 10+ improved native container support with a couple of interesting capabilities for handling thread pools efficiently.
