🔍 Go beyond heap dumps. Whether you're a JVM enthusiast, performance engineer, or backend developer, JOL lets you peek under the hood of Java object memory layout: object headers, field ordering, padding, footprint, and more. It uses low-level JVM internals (Unsafe, JVMTI, and the Serviceability Agent) for unparalleled accuracy, which makes it useful for optimizing data structures, reducing memory footprint, and improving cache behavior. 🛠️ JOL can be used as:
- A library in your project
- A CLI tool for quick analysis
- A guide for deep JVM performance tuning and research
Worth a look if you care about performance, memory efficiency, or just love understanding how Java really works behind the scenes. https://lnkd.in/dtC_5B5n #Java #OpenSource #JVM #PerformanceEngineering #Developers
Optimize Java Performance with JOL
-
☕ Official Java / OpenJDK Projects (March 2026 Update)

These are the major long-running OpenJDK projects actively shaping the future of Java, with their latest status based on 2026 plans and releases.

Project Valhalla — augmenting the Java object model with value objects. After years of development, a preview of Value Classes and Objects (JEP 401) is scheduled for delivery in the second half of 2026. An early-access build is already available for developers to test. Future work will focus on null-aware types and unifying primitives with classes.

Project Loom — delivered virtual threads in JDK 21. The project is now finalizing the Structured Concurrency API, which will be a preview feature in JDK 26 (due March 17, 2026) and is expected to be finalized by the end of 2026. A recent improvement also allows virtual threads to unmount from their carrier thread while waiting for class initialization, reducing "pinning" issues.

Project Leyden — focused on improving Java startup and warmup time. A major feature, Ahead-of-Time Object Caching (JEP 516), is targeted for JDK 26. It allows the JVM to load pre-cached Java objects to start up faster, and it works with any garbage collector. Early-access builds are available.

Project Panama — delivered the Foreign Function & Memory (FFM) API in JDK 22. Active work continues with the Vector API incubating for the eleventh time in JDK 26. Improvements are also being made to the jextract tool for easier native library interop.

Project Amber — focused on productivity features, now in a "breather" after delivering pattern matching, records, and switch expressions. Current work includes Primitive Types in Patterns, instanceof, and switch (JEP 507), which is in its fourth preview for JDK 26. Future plans may include exploring new features such as constant patterns.

Project Babylon — a newer project aimed at extending Java to foreign programming models like GPUs and SQL. The team is working toward the incubation of Code Reflection, which would allow Java code to be analyzed and transformed for execution on heterogeneous hardware. Proof-of-concept work includes running ML models on GPUs.

JDK 26 — the next feature release, scheduled for March 17, 2026. Key JEPs targeted for this release include:
- Structured Concurrency (Sixth Preview)
- Vector API (Eleventh Incubator)
- Ahead-of-Time Object Caching
- Primitive Types in Patterns (Fourth Preview)
- HTTP/3 Client API Support
-
Ever spent hours debugging a production issue only to find... absolutely nothing in the logs? 🕵️♂️ I recently went down a rabbit hole investigating why critical database failures were vanishing in our asynchronous code paths. I checked Loki, container logs, and stderr, but the system was dead silent. It turns out Java's asynchronous APIs can be a silent trap: if you submit tasks to a ThreadPoolExecutor or ScheduledThreadPoolExecutor, unhandled exceptions are captured and stored inside the returned Future object. If you never explicitly call .get(), those exceptions may never see the light of day. The solution? I implemented a LoggingRunnable wrapper and specialized LoggingThreadPools using the Delegate Pattern. This allows us to: ✅ Automatically intercept tasks. ✅ Ensure every Throwable is logged before it vanishes. ✅ Keep the original business logic clean and decoupled from logging concerns. No more swallowed exceptions or flying blind in production. Read the full technical deep dive on my blog: https://lnkd.in/gfmJwkMw #Java #SoftwareEngineering #Debugging #Backend #CleanCode #Concurrency #Programming #JavaDevelopment #eGluTech
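The wrapper idea is simple to sketch. A minimal version, assuming nothing about the blog's actual implementation (the names `logging` and `sink` are illustrative, and an `AtomicReference` stands in for a real logger):

```java
import java.util.concurrent.atomic.AtomicReference;

public class LoggingRunnableDemo {

    // Wraps a task so any Throwable is surfaced before it disappears
    // into an un-inspected Future. "sink" stands in for a real logger.
    static Runnable logging(Runnable task, AtomicReference<Throwable> sink) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                sink.set(t);  // real code: logger.error("Task failed", t)
                throw t;      // rethrow so Future.get() still reports the failure
            }
        };
    }

    public static void main(String[] args) {
        AtomicReference<Throwable> captured = new AtomicReference<>();
        Runnable task = logging(() -> { throw new IllegalStateException("db down"); }, captured);
        try {
            task.run();
        } catch (IllegalStateException expected) {
            // caught here only to keep the demo short
        }
        // prints: captured: java.lang.IllegalStateException: db down
        System.out.println("captured: " + captured.get());
    }
}
```

A real LoggingThreadPool would apply this wrapper inside an overridden `submit`/`execute`, so callers never have to remember it.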
-
Most #Java developers go years without touching class loaders directly… until something behaves differently in production, and suddenly, the loading order, delegation, and visibility rules matter a lot more than expected. Class loaders shape how the JVM finds code, resolves dependencies, isolates modules, and even loads different versions of the same class. This article walks through the mechanics behind that process and why it’s so easy to overlook. https://bit.ly/4ro7Bcb
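Those visibility rules are easy to see from plain Java; a small sketch (JDK 9+ assumed for loader names):

```java
public class LoaderPeek {
    public static void main(String[] args) {
        // Core classes are loaded by the bootstrap loader, which the
        // reflection API represents as null.
        System.out.println(String.class.getClassLoader());   // prints: null

        // Your own classes come from the system (application) class loader.
        ClassLoader app = LoaderPeek.class.getClassLoader();
        System.out.println(app != null);                      // prints: true

        // Walking getParent() reveals the delegation chain:
        // application -> platform -> bootstrap (represented as null).
        for (ClassLoader cl = app; cl != null; cl = cl.getParent()) {
            System.out.println(cl.getName());
        }
    }
}
```

The same walk is a handy first diagnostic when two modules see different versions of "the same" class in production.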
-
ClassLoaders & Memory — Java’s Smart Warehouse System

Think of the JVM as a fully automated smart warehouse. Before any product (Java class) can be used, it must be located, verified, stored correctly, and tracked. That’s exactly what happens before your Java code even starts executing.

1. ClassLoader — The Warehouse Receiving Team
Before Java can use a class, it must be loaded into the JVM. ClassLoaders:
✔️ Locate .class files
✔️ Verify bytecode
✔️ Load them into memory

Types of ClassLoaders (receiving zones):
🔹 Bootstrap ClassLoader — loads the core Java inventory (java.lang.*, java.util.*); this is the foundation stock, always available
🔹 Platform ClassLoader (formerly Extension) — loads optional, extended packages; think of specialized add-on supplies
🔹 Application ClassLoader — loads your application’s classes; the custom products you bring into the warehouse

🔁 Delegation model: each loader checks with its parent first before loading. ➡️ This prevents duplicate inventory and keeps the system stable.

2. JVM Memory — The Warehouse Storage Zones
Once classes arrive, JVM memory organizes where everything lives:
📘 Method Area — blueprint records: class structure, methods, static data; a shared reference catalog
📦 Heap — where the actual products (objects) are stored; shared across all workers (threads)
🗂️ Stack — a personal workbench per worker (thread); holds method calls and local variables
📮 Program Counter (PC) — tracks the current instruction each worker is handling
⚙️ Native Method Stack — manages external tools (native C/C++ operations)

✅ Summary
✔️ ClassLoaders find and load Java classes
✔️ JVM memory organizes and manages them
✔️ This entire process completes before execution begins

#Java #JavaDailyUpdates #JavaDeveloper #JVM #ClassLoader #JavaMemory #JavaArchitecture #CoreJava #LearnJava #JavaConcepts #BackendDevelopment #SoftwareEngineering #JavaInterview #SystemDesign #ProgrammingConcepts #CodingTip #DevelopersOfLinkedIn #JavaCommunity #DailyLearning #DeveloperJourney
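The storage zones map directly onto ordinary Java code. A small illustrative sketch (the class and field names are mine, not from any spec):

```java
public class WarehouseZones {
    // Class-level data lives in the method area (metaspace in modern HotSpot):
    static final String CATALOG_ENTRY = "SKU-001";
    static int unitsReceived = 0;

    static int receive(int units) {
        int shelf = unitsReceived + units; // local variable: this thread's stack frame
        int[] pallet = new int[units];     // the array object itself: the shared heap
        unitsReceived = shelf;             // static field: visible to all threads
        return pallet.length;
    }

    public static void main(String[] args) {
        System.out.println(CATALOG_ENTRY + " received: " + receive(5));
    }
}
```

The reference `pallet` sits on the stack, while the array it points to sits on the heap; when `receive` returns, the stack frame vanishes but the heap object survives until the garbage collector finds it unreachable.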
-
Your container keeps restarting from OOM (out of memory), but the Java heap looks fine? Before you go all-in on a heap leak, check your threads. I have seen this in containerized Java services: memory creeps up over hours or days, the container hits its limit, gets killed, then restarts. No dramatic spike, just a slow climb. Threads can drive that climb because they use native memory for stacks (plus per-thread overhead), which will not show up as “heap used”. Mini playbook (5-minute triage): 1) Confirm the restart reason matches the memory limit. 2) Get a shell into the running container. 3) Find the Java PID (try `jcmd -l`). 4) Run: `jcmd <pid> Thread.print` 5) Scan for: - a surprisingly high total thread count - repeating thread names or pools that keep growing - many similar stacks pointing to thread creation (`new Thread(...)`, `Executors.new*`, schedulers, custom thread factories) If the thread count keeps increasing, treat it like a leak: - bound pools and queues - reuse executors instead of creating new ones - shut them down on lifecycle events The nice part is that `Thread.print` often points to the code path creating threads, which is faster than guessing from memory graphs alone. What’s your go-to move when a container is OOM-killed: thread dump first, heap dump first, or something else? #java #kubernetes #observability #performance #memory #jvm
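The "executor per request" anti-pattern behind many of these climbs is easy to reproduce in a few lines; a sketch using the standard `ThreadMXBean` to count live threads (the helper name `leak` is mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLeakProbe {

    // Returns how many live threads the anti-pattern below adds.
    static int leak(int pools) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        int before = mx.getThreadCount();
        // Anti-pattern: a fresh executor per unit of work, never shut down.
        for (int i = 0; i < pools; i++) {
            ExecutorService leaked = Executors.newFixedThreadPool(1);
            leaked.submit(() -> {}); // forces the pool to spin up its worker thread
            // missing: leaked.shutdown()
        }
        return mx.getThreadCount() - before;
    }

    public static void main(String[] args) {
        // Each orphaned pool keeps a worker thread (and its native stack)
        // alive: memory that never shows up as "heap used".
        System.out.println("extra live threads: " + leak(20));
    }
}
```

Run under the playbook above, `Thread.print` would show 20 identical `pool-N-thread-1` entries: exactly the "repeating thread names" signature to scan for.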
-
📚 Collections in Java – Part 2 | Legacy Collections & LIFO Concepts 🚀 Today I continued my deep dive into the Java Collections Framework, focusing on legacy classes and stack-based data structures—understanding their design, behavior, and when they should (or shouldn’t) be used in modern applications. 🔹 Vector – Thread-safe dynamic array, legacy collection 🔹 Vector Internal Working – Capacity, synchronization, resizing 🔹 Vector Legacy Methods – addElement(), elementAt(), elements() 🔹 Stack – LIFO data structure built on Vector 🔹 Stack Operations – push(), pop(), peek(), search() 🔹 Vector vs ArrayList – Synchronization, performance, legacy usage 💡 Key Takeaways: • Vector is synchronized → thread-safe but slower • ArrayList replaced Vector in most modern applications • Stack follows LIFO (Last In First Out) principle • Stack extends Vector, inheriting synchronization • Modern Java prefers Deque / ArrayDeque for stack operations Understanding legacy collections helps in: ✔ Maintaining older enterprise Java systems ✔ Understanding design evolution of the Collections Framework ✔ Writing better concurrent and performance-aware code ✔ Strengthening Core Java fundamentals for interviews Strong understanding of data structures + Java internals leads to better system design and more efficient applications. 💪 #Java #CoreJava #CollectionsFramework #Vector #Stack #JavaDeveloper #BackendDevelopment #DSA #InterviewPreparation #CodesInTransit
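The "Stack vs modern Deque" takeaway above fits in a few lines:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Stack;

public class StackComparison {
    public static void main(String[] args) {
        // Legacy: Stack extends Vector, so every operation is synchronized.
        Stack<Integer> legacy = new Stack<>();
        legacy.push(1);
        legacy.push(2);
        System.out.println(legacy.pop());  // prints: 2 (LIFO)

        // Modern: ArrayDeque, unsynchronized, preferred for single-threaded stacks.
        Deque<Integer> modern = new ArrayDeque<>();
        modern.push(1);
        modern.push(2);
        System.out.println(modern.pop());  // prints: 2 (same LIFO behavior)
        System.out.println(modern.peek()); // prints: 1
    }
}
```

Same LIFO semantics, but `ArrayDeque` skips the per-call locking it would otherwise inherit from `Vector`, which is why modern Java recommends it.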
-
zio-openfeature v0.6.2 is out! zio-openfeature (https://lnkd.in/dapfciW6 ) is an open-source Scala library that lets you manage feature flags in a safe, functional way — built on top of the OpenFeature (https://openfeature.dev/) standard so it works with any compatible provider (Optimizely, LaunchDarkly, Flagd, and more). A few highlights from the latest release: ✅ Safer evaluations — flag checks now refuse to run when the provider is in an error state, not just when it's not yet ready. This prevents silent misbehavior in production. 🔁 Correct hook ordering — hooks that clean up or handle errors now run in the right order (last registered, first to run), which is what the OpenFeature spec requires. 🔒 Type-safe internals — a new TypedKey construct makes passing data through the hook pipeline strongly typed, catching mistakes at compile time instead of runtime. 🧹 Major cleanup — v0.6.0 brought a large internal refactor: cleaner Java interop, better concurrency primitives, and less boilerplate. No breaking changes to the public API. #Scala #ZIO #OpenFeature #FeatureFlags #OpenSource #Optimizely
-
📚 Collections in Java – Part 3 | Queue & Concurrent Queues 🚀 Continuing my deep dive into the Java Collections Framework, focusing on queue-based data structures and their role in both sequential processing and high-performance concurrent systems. 🔹 Queue – FIFO (First-In-First-Out) data structure for ordered processing 🔹 PriorityQueue – Processes elements based on priority using a Binary Heap 🔹 Deque (Double Ended Queue) – Insert and remove elements from both ends 🔹 ArrayDeque – Fast, resizable array implementation of Deque 🔹 BlockingQueue – Thread-safe queue designed for producer–consumer systems 🔹 Concurrent Queue – High-performance non-blocking queues using CAS operations 💡 Key Takeaways: • Queue follows the FIFO principle for ordered request processing • PriorityQueue processes elements based on priority instead of insertion order • Deque supports both FIFO and LIFO operations • ArrayDeque is usually faster than Stack and LinkedList for queue/stack operations • BlockingQueue enables safe communication between producer and consumer threads • Concurrent queues provide lock-free, high-throughput operations for multi-threaded systems Understanding these structures is important for: ✔ Designing scalable backend systems ✔ Handling asynchronous and concurrent workloads ✔ Building efficient task scheduling mechanisms ✔ Strengthening Core Java and DSA fundamentals Strong understanding of data structures + concurrency concepts leads to better system design and more efficient applications. 💪 #Java #CoreJava #CollectionsFramework #Queue #PriorityQueue #Deque #ArrayDeque #BlockingQueue #ConcurrentProgramming #JavaDeveloper #BackendDevelopment #DSA #InterviewPreparation #CodesInTransit #MondayMotivation
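Two of the takeaways above, priority ordering and producer/consumer hand-off, can be sketched together in one small program:

```java
import java.util.PriorityQueue;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // PriorityQueue orders by natural ordering (a binary heap),
        // not by insertion order.
        Queue<Integer> pq = new PriorityQueue<>();
        pq.add(5);
        pq.add(1);
        pq.add(3);
        System.out.println(pq.poll()); // prints: 1 (smallest first)

        // BlockingQueue: producer and consumer coordinate via put/take,
        // both of which block when the queue is full/empty.
        BlockingQueue<String> jobs = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                jobs.put("job-1");
                jobs.put("job-2");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        System.out.println(jobs.take()); // blocks until "job-1" arrives
        System.out.println(jobs.take());
        producer.join();
    }
}
```

Note that `PriorityQueue` only guarantees the head is the smallest element; iterating over it does not yield sorted order.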
-
JVM Architecture — The Backbone of Every Java Application If you’re learning Java or building backend systems with Spring Boot, understanding JVM architecture is not optional — it’s essential. Most developers write Java code… but only a few truly understand what happens behind the scenes. That’s where the JVM gives you an edge. • What is the JVM? The Java Virtual Machine (JVM) is responsible for executing Java bytecode and making Java platform-independent. Core Components of JVM Architecture: • Class Loader Loads ".class" files into memory, verifies bytecode, and prepares it for execution. • Memory Areas - Heap → stores objects - Stack → handles method calls & local variables - Method Area → stores class-level data - PC Register → tracks the current instruction • Execution Engine - Interpreter → executes bytecode instruction by instruction - JIT Compiler → improves performance by compiling hot code into native instructions • Garbage Collector Automatically removes unreachable objects, helping manage memory efficiently. Why does this matter? Understanding the JVM helps you: ✓ Debug memory issues like OutOfMemoryError ✓ Write optimized & scalable code ✓ Perform better in Java interviews ✓ Stand out from average developers My take: learning JVM architecture is one of the highest-ROI topics for any Java developer. It separates coders from engineers. What’s the most confusing part of the JVM for you — Heap, Stack, or Garbage Collection? #Java #JVM #BackendDevelopment #SpringBoot #SoftwareEngineering #Programming #TechCareers
-
Java Can Run Without a Garbage Collector 💡 Java can run without a garbage collector. Not as a hack. As a real JVM option. That option is Epsilon GC. And yes, it does exactly what it sounds like: ✅ allocations still happen ✅ the application still runs ❌ memory is never reclaimed At first, this sounds absurd. For many engineers, Java without GC feels like saying: "Java without the JVM" But Epsilon GC exists for a reason. It is a no-op garbage collector. Its job is not to reclaim memory. Its job is to let the application run until the heap is exhausted. So why would anyone want that? Because for some workloads, reclaiming memory is wasted work. Think about: 🔹 short-lived batch jobs 🔹 one-shot CLI tools 🔹 ephemeral workers 🔹 benchmark scenarios 🔹 tightly controlled processes with known memory bounds If the process is going to finish before memory pressure becomes a real issue, then GC may be solving a problem that workload never actually had. That is what makes Epsilon GC so interesting. It forces a deeper question. Not: "Which GC should we tune?" But: "Does this workload need reclamation at all?" That is a very different mindset. We usually treat garbage collection as mandatory Java runtime machinery. But in reality, it is a trade-off. And Epsilon GC makes that trade-off visible in the most brutal possible way: 🧠 no pauses 🧠 no reclamation work 🧠 no long-term safety net Just pure allocation until the process ends or dies. Of course, this is not a good fit for general long-lived services. For a typical backend: ❌ memory usage will only grow ❌ heap exhaustion becomes inevitable ❌ one wrong assumption can kill the process So this is not "turn off GC and win". It is a reminder that runtime design should follow workload shape. Sometimes the right question is not how to improve GC. Sometimes the right question is whether this workload should pay for GC at all. That is why I like Epsilon GC as a concept. Not because it replaces real collectors.
But because it exposes an uncomfortable truth: ⚡ the best memory strategy depends on how your process actually lives, allocates, and dies. And for some short-lived workloads, no GC is not madness. It is simply the trade-off. ❓Would you run a production workload without a garbage collector? #Java #JVM #Performance #Backend #SoftwareEngineering #GarbageCollection #JavaInternals #SystemDesign #EngineeringLeadership
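The launch flags and the sizing question can be sketched in a few lines. Epsilon itself is a real option (JEP 318, available since JDK 11); the per-record numbers below are invented purely for illustration:

```java
public class EpsilonSketch {
    public static void main(String[] args) {
        // Epsilon GC allocates but never reclaims. Enable it with:
        //   java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx64m ...
        //
        // The deciding question for a short-lived job: does its TOTAL
        // allocation fit inside the heap? (Numbers here are assumptions.)
        long bytesPerRecord = 1_024;   // ~1 KiB allocated per record
        long records = 50_000;         // bounded, known workload
        long needed = bytesPerRecord * records;
        long budget = Runtime.getRuntime().maxMemory();

        System.out.println("need ~" + needed / (1024 * 1024) + " MiB of a "
                + budget / (1024 * 1024) + " MiB budget");
        System.out.println(needed < budget
                ? "fits: Epsilon could skip GC work entirely"
                : "does not fit: keep a real collector");
    }
}
```

Under a normal collector the "needed" figure is irrelevant because garbage is reclaimed as you go; under Epsilon it is the hard ceiling, which is exactly the trade-off the post describes.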