Java TLABs: Thread-Local Allocation Buffer Explained

Most Java allocations are not “asking the GC for memory.” They usually go through a tiny per-thread buffer called a TLAB: Thread-Local Allocation Buffer. The idea is simple: instead of every new object competing for the same global allocation area, each thread gets its own small chunk of Eden. Inside that chunk, allocation is almost boring:

```
obj = top
if obj + size <= end:
    top = top + size
    return obj
```

That is basically a pointer bump. No lock. No global coordination on the fast path. Just “is there room?” and “move top forward.” That is why TLABs matter: they make the common case of allocation very cheap, especially in code that creates lots of short-lived objects.

What happens when the thread runs out of space in its TLAB? It does not “grow” the same buffer. HotSpot takes a slower path and makes a decision. If the remaining space is small enough, it retires that TLAB, fills the leftover gap so the GC can parse the heap safely, and asks the heap for a fresh TLAB. If the remaining space is still considered too valuable to waste, HotSpot may keep the current TLAB and allocate that one object outside it instead.

That detail is easy to miss, and it matters. A TLAB is not just “thread-local memory.” It is a policy boundary too. The JVM is constantly balancing two goals: cheap thread-local allocation versus not wasting too much Eden in half-used buffers.

There is also a subtle point around observability. A TLAB has a real end, but the JVM can temporarily shorten the allocation limit to trigger sampling or profiling events. So even something that looks like “out of TLAB space” is not always a true exhaustion case. Sometimes it is the runtime deliberately forcing the slow path so tools can see allocations.

My takeaway: TLABs are one of those JVM ideas that look small but explain a lot. If you want to understand why allocation in Java is often surprisingly fast, this is one of the best places to start.
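To make the fast path concrete, here is a small Java sketch of bump-pointer allocation over a fixed-size buffer. The class and field names are invented for illustration; real HotSpot does this in native code with raw heap addresses, not a Java object.

```java
// Toy model of a TLAB fast path: allocation is a bounds check
// plus a pointer bump. No locks, because the buffer belongs to one thread.
public class TlabSketch {
    private long top;       // next free offset inside the buffer
    private final long end; // one past the last usable offset

    public TlabSketch(long start, long size) {
        this.top = start;
        this.end = start + size;
    }

    /** Returns the start offset of the new object, or -1 if it does not fit. */
    public long allocate(long size) {
        long obj = top;
        if (obj + size <= end) {
            top = obj + size; // the pointer bump
            return obj;
        }
        return -1; // slow path: retire the TLAB or allocate outside it
    }

    public long remaining() {
        return end - top;
    }

    public static void main(String[] args) {
        TlabSketch tlab = new TlabSketch(0, 100);
        System.out.println(tlab.allocate(40)); // 0
        System.out.println(tlab.allocate(40)); // 40
        System.out.println(tlab.allocate(40)); // -1, only 20 bytes left
        System.out.println(tlab.remaining());  // 20
    }
}
```

Note that the failing allocation does not touch `top` at all: the fast path either fully succeeds or hands off to the slow path.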
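The retire-or-keep decision on the slow path can be sketched the same way. The fixed threshold below stands in for HotSpot's refill-waste limit; in the real JVM that limit is derived from `-XX:TLABRefillWasteFraction` and adjusted adaptively, so the constant here is purely illustrative.

```java
// Toy model of the TLAB slow path: when an allocation does not fit,
// either retire the buffer (waste the small remainder, take a fresh TLAB)
// or keep the buffer and place this one object outside it.
public class TlabSlowPath {
    // Invented fixed threshold; HotSpot computes this per-thread and
    // adaptively (see -XX:TLABRefillWasteFraction).
    static final long REFILL_WASTE_LIMIT = 16;

    enum Decision { RETIRE_AND_REFILL, ALLOCATE_OUTSIDE }

    static Decision onSlowPath(long remaining) {
        if (remaining <= REFILL_WASTE_LIMIT) {
            // Small remainder: cheap to throw away, so retire and refill.
            return Decision.RETIRE_AND_REFILL;
        }
        // Large remainder: too valuable to waste, so keep the TLAB
        // and satisfy this one allocation directly from Eden.
        return Decision.ALLOCATE_OUTSIDE;
    }

    public static void main(String[] args) {
        System.out.println(onSlowPath(8));   // RETIRE_AND_REFILL
        System.out.println(onSlowPath(200)); // ALLOCATE_OUTSIDE
    }
}
```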
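If you want to watch TLAB behavior on a real JVM, a few starting points (flag names are from HotSpot; `MyApp` is a placeholder, and the exact log output varies by JDK version):

```shell
# Unified logging (JDK 9+): per-thread TLAB statistics at GC time
java -Xlog:gc+tlab=trace MyApp

# Compare with TLABs disabled to feel the cost of shared slow-path allocation
java -XX:-UseTLAB MyApp

# JFR records allocations that triggered a new TLAB or went outside one:
# events jdk.ObjectAllocationInNewTLAB and jdk.ObjectAllocationOutsideTLAB
java -XX:StartFlightRecording=filename=alloc.jfr MyApp
```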
Follow me for more on systems engineering ✌️ #java #systems #systemsengineering #performance #jvm #jdk

