Markus Eisele’s Post

Performance benchmarks in Java are easy to misunderstand. Recently the Quarkus team published new benchmark results. But the interesting story isn't just the numbers. It's the engineering work behind them: controlled environments, reproducible runs, and transparency about how the results are produced. People like Holly Cummins, Eric Deandrea, Sanne Grinovero and many others invested real engineering effort to make these benchmarks trustworthy.

In this article I explain:
• Why benchmarking Java frameworks is harder than it looks
• Why local laptop benchmarks are often misleading
• What developers should actually learn from the new Quarkus numbers

If you care about startup time, memory footprint, and real JVM performance, this is worth understanding. Read the full article here: https://lnkd.in/dANEJp8d

#Java #Quarkus #PerformanceEngineering #Microservices #JVM #Benchmarking
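To make the "controlled, reproducible runs" point concrete: a naive timing loop on a laptop measures mostly JIT warmup and background noise. A minimal JMH-style sketch of what a controlled measurement looks like instead (the benchmark class and workload here are hypothetical, and it assumes the org.openjdk.jmh annotations dependency is on the classpath):

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5, time = 1)      // let JIT compilation settle before measuring
@Measurement(iterations = 5, time = 1) // measure only after warmup
@Fork(3)                               // fresh JVM per fork to average out run-to-run noise
@State(Scope.Benchmark)
public class GreetingBenchmark {

    private String payload;

    @Setup
    public void setup() {
        payload = "hello".repeat(1_000);
    }

    @Benchmark
    public int measureHash() {
        // trivial stand-in workload; returning the result prevents dead-code elimination
        return payload.hashCode();
    }
}

Forked JVMs, explicit warmup, and repeated measurement iterations are the minimum that separates a trustworthy number from a one-off stopwatch reading.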


Great points on benchmarking fairness! Few people talk about it openly, so I'm glad to see the method open-sourced. I assume the experiments used an open workload model; just curious, did you also explore a closed model (Hyperfoil supports it too)? I've found the closed model useful for identifying a system's capacity (max throughput) and the "knee" concurrency as the first step.
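For anyone unfamiliar with the distinction, here is a rough sketch of the two models in plain Java (this is not Hyperfoil's actual configuration; the endpoint and rates are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.*;

public class WorkloadModels {

    static final HttpClient CLIENT = HttpClient.newHttpClient();
    static final HttpRequest REQUEST =
        HttpRequest.newBuilder(URI.create("http://localhost:8080/hello")).build();

    // Closed model: a fixed pool of virtual users; each sends its next request
    // only after the previous response arrives, so throughput self-adjusts to
    // latency. Useful for finding max throughput and the "knee" concurrency.
    static void closedModel(int users, Duration runtime) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long deadline = System.nanoTime() + runtime.toNanos();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                while (System.nanoTime() < deadline) {
                    try {
                        CLIENT.send(REQUEST, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        return;
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(runtime.toSeconds() + 5, TimeUnit.SECONDS);
    }

    // Open model: requests arrive at a fixed rate whether or not earlier
    // responses have returned, so a slow server accumulates in-flight work
    // instead of silently throttling the load generator.
    static void openModel(int requestsPerSecond, Duration runtime) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        scheduler.scheduleAtFixedRate(
            () -> CLIENT.sendAsync(REQUEST, HttpResponse.BodyHandlers.discarding()),
            0, 1_000_000 / requestsPerSecond, TimeUnit.MICROSECONDS);
        Thread.sleep(runtime.toMillis());
        scheduler.shutdownNow();
    }
}

The practical consequence: under overload a closed-model generator slows down with the server and can hide latency problems, while an open-model generator keeps pushing and exposes them.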
