Java 25: Java Flight Recorder (JFR) – Part 1

Java Flight Recorder (JFR) received a set of important improvements in Java 25, highly relevant for anyone working with observability, profiling, and production performance.

🧭 What is JFR (quick recap)
Java Flight Recorder is a built-in JVM profiler — with ultra-low overhead (<1%) — that collects events on:
• CPU and memory usage
• Thread locks and latencies
• Object allocation
• Garbage collection
• I/O, networking, and JVM internals
Its low overhead makes it suitable for production environments, unlike heavier tools such as VisualVM or YourKit.

📌 JEP 509 — CPU-Time Profiling on Linux ("JFR CPU-Time Profiling")
JFR now measures actual CPU time per thread (user + system) on Linux.
Previously, JFR measured only wall-clock metrics, i.e., total elapsed time, including I/O waits, lock contention, and threads sleeping in queues.
Now, JFR tracks CPU time spent per method, thread, and event (e.g., garbage collection, JIT compilation).

📌 JEP 518 — Cooperative Sampling ("JFR Cooperative Sampling")
Previously, JFR used preemptive sampling, meaning it interrupted threads arbitrarily to record stack traces.
Now, with cooperative sampling, threads report their own stack traces at safepoints. This reduces overhead and enables analysis of high-concurrency workloads, such as virtual threads.

📌 JEP 520 — Method Timing & Tracing ("JFR Method Timing & Tracing")
JFR can now time and trace individual methods via bytecode instrumentation, recording exact invocation counts and execution times instead of sampled approximations. Combined with JIT compilation events, this helps determine whether slowdowns come from the initial interpreted warm-up phase or from bottlenecks in already-optimized code.

📌 Better integration with the observability ecosystem
JFR exports metrics compatible with Micrometer and OpenTelemetry, enabling real-time streaming of events to systems like Prometheus, Grafana, and Elasticsearch.
New JSON-configurable filters allow selecting which events to export (by class, package, or event type).

😎 Part 2 of this post coming soon!

#Java #Java25 #JFR #JavaFlightRecorder #Profiling #Performance #Observability
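The JFR event stream that such exporters build on can also be consumed directly from Java with the jdk.jfr.consumer API (available since JDK 14, so not specific to Java 25). A minimal sketch that subscribes to the built-in jdk.CPULoad event:

```java
import java.time.Duration;
import java.util.concurrent.atomic.AtomicInteger;
import jdk.jfr.consumer.RecordingStream;

public class JfrCpuWatch {
    static final AtomicInteger samples = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        try (RecordingStream rs = new RecordingStream()) {
            // Ask JFR to emit the built-in jdk.CPULoad event once per second
            rs.enable("jdk.CPULoad").withPeriod(Duration.ofSeconds(1));
            rs.onEvent("jdk.CPULoad", event -> {
                samples.incrementAndGet();
                System.out.printf("jvm user=%.1f%%  machine total=%.1f%%%n",
                        event.getFloat("jvmUser") * 100,
                        event.getFloat("machineTotal") * 100);
            });
            rs.startAsync();        // stream in a background thread
            Thread.sleep(4_000);    // watch a few samples, then close the stream
        }
    }
}
```

The same pattern works for GC, allocation, or lock events: enable the event by name and register a callback.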
#SoftwareEngineering #Monitoring #Troubleshooting #Java #VisualVM

Monitoring and Diagnosing Java Applications with VisualVM

1- Introduction
VisualVM is a free, graphical monitoring and profiling tool bundled with the Java Development Kit (JDK). It lets you observe what happens inside the Java Virtual Machine (JVM) while your Spring Boot or Java app runs. It connects to running JVM processes, locally or remotely, and shows detailed runtime metrics.

2- CPU Usage Panel
The CPU Usage graph (top left) shows how much processor time the Java process consumes. Spikes indicate computation-heavy moments, such as complex request handling or garbage collection. Keeping CPU usage low and stable indicates efficient code and balanced thread activity. VisualVM can also trigger CPU sampling or profiling to identify which methods consume the most CPU.

3- Heap Memory Panel
The Heap graph (top right) displays allocated memory versus used memory. "Heap Size" (orange line) shows how much memory the JVM has reserved, while "Used Heap" (blue area) shows current usage. When the used memory drops suddenly, it means Garbage Collection (GC) has reclaimed unused objects. You can manually trigger GC using the "Perform GC" button or take a heap dump to analyze memory usage in detail.

4- Metaspace Monitoring
Metaspace holds class metadata and static data loaded by the JVM. A steady metaspace size is normal; a continuously growing one may indicate classloader leaks. VisualVM allows inspecting how many classes have been loaded or unloaded over time.

5- Classes Panel
This graph shows total loaded and shared loaded classes in the JVM. A stable line indicates that your application has reached a steady state after startup. If class loading keeps increasing during runtime, it might point to a memory leak due to dynamic class generation or reloading.

6- Threads Panel
The Threads chart tracks live, daemon, and total started threads.
Live threads correspond to active tasks, requests, or background workers. A growing number of threads without dropping back may reveal thread leaks or excessive parallelism. VisualVM can open a thread dump to inspect blocked or waiting threads and identify deadlocks. VisualVM supports sampling and profiling modes to pinpoint CPU- or memory-intensive methods.

Summary
In short, VisualVM is a complete monitoring and diagnostic dashboard for any Java application. It offers a real-time view of CPU, memory, threads, and class loading, helping detect performance issues early. For Spring Boot or microservices projects, it's an essential tool to understand how your application behaves under load and how efficiently it uses JVM resources.
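The numbers VisualVM charts come from the JVM's standard management beans, so the same heap, thread, and class-loading metrics can be read programmatically. A minimal sketch using the java.lang.management API:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class JvmSnapshot {
    public static void main(String[] args) {
        // Heap panel: used vs committed heap, the same numbers VisualVM plots
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("heap used=%dMB committed=%dMB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20);

        // Threads panel: live / daemon / total started
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.printf("threads live=%d daemon=%d started=%d%n",
                threads.getThreadCount(), threads.getDaemonThreadCount(),
                threads.getTotalStartedThreadCount());

        // Classes panel: loaded / unloaded class counts
        ClassLoadingMXBean cls = ManagementFactory.getClassLoadingMXBean();
        System.out.printf("classes loaded=%d unloaded=%d%n",
                cls.getLoadedClassCount(), cls.getUnloadedClassCount());
    }
}
```

This is handy for exporting the same panels as application metrics when a GUI is not available (e.g., in containers).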
A CountDownLatch is a fundamental synchronization aid in Java that allows one or more threads to wait until a set of operations being performed in other threads has completed. Think of it like a one-time gate that opens only after a specific number of events have occurred.

💡 How CountDownLatch Works

* Initialization: You create a CountDownLatch with an initial integer count. This count represents the number of events (tasks, threads, etc.) that must complete before the gate opens.

CountDownLatch latch = new CountDownLatch(3); // Wait for 3 events

* Waiting (await()): The thread that needs to wait for the other operations to finish calls the await() method. This thread will block (pause) until the internal count reaches zero.

// The main thread waits here until the latch count is zero
latch.await();
System.out.println("All services started! Application proceeding...");

* Decrementing (countDown()): The worker threads perform their tasks. Once a worker thread finishes its assigned task, it calls the countDown() method on the latch. This decrements the internal count by one.

// Worker thread completes its task
System.out.println("Service 1 initialized.");
latch.countDown(); // Count decreases by 1

* Release: When the count hits zero, the latch is released, and all threads blocked on await() are immediately unblocked and allowed to proceed.

🛠️ Typical Use Cases

The primary use case for a CountDownLatch is one-time, one-way synchronization where a driver thread waits for worker threads.

* Application Startup: The main application thread needs to wait for several critical services (e.g., database connection, configuration loading, cache initialization) to start up before it can accept user requests. The count is set to the number of services.

* Parallel Processing: A problem is divided into N sub-tasks and executed in parallel by N threads.
The main thread uses a latch initialized to N and waits for all N worker threads to signal their completion before merging the results.

* Start Signal: You can initialize the latch with a count of 1. The main thread calls countDown() to signal all worker threads (which are waiting on await()) to start simultaneously.

CountDownLatch vs. CyclicBarrier (The Key Difference)

| Feature | CountDownLatch | CyclicBarrier |
|---|---|---|
| Purpose | One or more threads wait for other operations/tasks to finish. (One-way gate) | A group of threads wait for each other to reach a common barrier point. (Two-way waiting) |
| Reusability | Not reusable. Once the count reaches zero, it cannot be reset. | Reusable. The barrier resets automatically once tripped, and can also be reset explicitly via reset(). |
| Action | Worker threads call countDown(); waiting threads call await(). | All threads call await(). |

In short: Use a CountDownLatch when you need to wait for a fixed number of events to occur once.

#java #multithreading
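Putting the fragments above together, the application-startup use case can be sketched as one runnable example (service names are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StartupGate {
    // Returns true once all 'serviceCount' workers have counted down.
    static boolean startAll(int serviceCount) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(serviceCount);
        ExecutorService pool = Executors.newFixedThreadPool(serviceCount);
        for (int i = 1; i <= serviceCount; i++) {
            final int id = i;
            pool.submit(() -> {
                System.out.println("Service " + id + " initialized.");
                latch.countDown();              // one startup event done
            });
        }
        latch.await();                          // block until count reaches zero
        pool.shutdown();
        return latch.getCount() == 0;
    }

    public static void main(String[] args) throws InterruptedException {
        if (startAll(3)) {
            System.out.println("All services started! Application proceeding...");
        }
    }
}
```

Note that the latch, not the thread pool, is what synchronizes the main thread with the workers: the pool only provides the threads.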
Java 25: Java Flight Recorder (JFR) - JEP 473 Prometheus Endpoint - Part 2

Starting with Java 25, the jdk.jfr.prometheus module is part of the official JDK distribution, and you enable the feature with:

-XX:+EnableJFRPrometheusExporter

The JVM:
• Starts JFR internally
• Creates a small embedded HTTP server running inside the JVM process
• Connects this server to the JFR event stream
• And exposes everything through a local HTTP endpoint

The local server listens on the default port 7091:
http://localhost:7091/metrics

🚫 No extra dependencies
• No Java code needed in your application
• No libraries required (io.micrometer, jdk.jfr.consumer, etc.)
• And no sidecar process needed
It is entirely internal to the JVM process, implemented in native C++ code, running inside the runtime itself.

💡 What the exporter actually does
Inside the JVM, the exporter behaves like an observer of the JFR event stream, with a very lightweight polling loop:
• JFR collects events (such as GC, CPU, threads, safepoints, etc.) in a circular buffer
• The exporter reads these events periodically
• It converts them into cumulative metrics (in OpenMetrics/Prometheus format)
• It publishes them via HTTP — without writing anything to disk
Metrics are exposed in real time, without any file I/O overhead.

⚙️ Control and configuration
Everything is controlled through JVM options, for example:

java \
  -XX:StartFlightRecording=name=prod,settings=profile,maxage=2h,maxsize=500M,dumponexit=false \
  -XX:+EnableJFRPrometheusExporter \
  -Djdk.jfr.prometheus.port=7091 \
  -Djdk.jfr.prometheus.path=/metrics \
  -Djdk.jfr.prometheus.period=30s

⚡ CPU and memory overhead
Unfortunately, I haven't had the opportunity to test it in production yet, so the real overhead of this new feature is unknown. In practice, though, you should not need to disable JFR: it's common to keep it always active and only adjust the level of detail when an incident occurs (via jcmd).
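Independently of any built-in exporter, the exposition format Prometheus scrapes is just plain text over HTTP. As a purely hypothetical illustration (the metric name and counter here are invented for the example), this is roughly what "publish a cumulative metric over HTTP" looks like using the JDK's bundled com.sun.net.httpserver:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.atomic.AtomicLong;

public class TinyMetricsEndpoint {
    // Illustrative cumulative counter; a real exporter would fill this from events
    static final AtomicLong gcCount = new AtomicLong();

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/metrics", exchange -> {
            // Prometheus text exposition format: "# TYPE" line, then name + value
            String body = "# TYPE jvm_gc_collections_total counter\n"
                        + "jvm_gc_collections_total " + gcCount.get() + "\n";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/plain; version=0.0.4");
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(bytes); }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(7091);
        System.out.println("metrics at http://localhost:" + server.getAddress().getPort() + "/metrics");
    }
}
```

A sketch like this also clarifies why "cumulative metrics" matter: Prometheus computes rates from ever-increasing counters on its side, so the endpoint never needs to reset anything.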
#Java #Java25 #JFR #JavaFlightRecorder #Profiling #Performance #Observability
Master the art of shaping JSON in Java: explore how the Jackson annotations give you fine-tuned control over serialization, field formats, nested objects, and polymorphism. Read more: 👇 https://lnkd.in/dQRsvDKg #Java #Jackson #JSON #JavaDevelopers #BackendDevelopment #CodingMadeEasy #GeoTech
☕💻 JAVA THREADS CHEAT SHEET

🔹 🧠 What is a Thread?
👉 The smallest unit of execution in a process. Each thread runs independently but shares memory & resources with others.
🧩 Enables multitasking and parallel execution.

🔹 ⚙️ What is a Process?
👉 An executing instance of a program. Each process has its own memory space and can run multiple threads.

🧵 Types of Threads

👤 1️⃣ User Threads
✅ Created by users or apps.
✅ The JVM waits for them before exiting.
💡 Example:
Thread t = new Thread(() -> System.out.println("User Thread"));
t.start();

👻 2️⃣ Daemon Threads
⚙️ Background threads (e.g., the Garbage Collector).
❌ The JVM doesn't wait for them.
💡 Example:
Thread d = new Thread(() -> {});
d.setDaemon(true);
d.start();

🚀 Creating Threads

🔸 Extending Thread:
class MyThread extends Thread {
    public void run() { System.out.println("Running..."); }
}
new MyThread().start();

🔸 Implementing Runnable:
new Thread(() -> System.out.println("Runnable running...")).start();

🔄 Thread Lifecycle
🧩 NEW → RUNNABLE → (running) → WAITING / TIMED_WAITING / BLOCKED → TERMINATED

🕹️ Common Thread Methods
• start(): start a thread
• run(): thread logic
• sleep(ms): pause execution
• join(): wait for a thread to finish
• interrupt(): interrupt a thread
• setDaemon(true): make a daemon thread

🔒 Synchronization
Prevents race conditions when multiple threads share data.
synchronized void increment() { count++; }

💬 Thread Communication
Use wait(), notify(), and notifyAll() for coordination.
🧠 They must be called inside a synchronized block.

⚡ Executor Framework (Thread Pooling)
Efficient thread reuse for better performance.
ExecutorService ex = Executors.newFixedThreadPool(3);
ex.submit(() -> System.out.println("Task executed"));
ex.shutdown();

🏁 Pro Tips
✅ Prefer Runnable / ExecutorService
✅ Handle InterruptedException
✅ Avoid deprecated methods (stop(), suspend())
✅ Use java.util.concurrent for safe multithreading

📢 Boost Your Skills 🚀

#Java #Threads #Multithreading #Concurrency #JavaDeveloper #JavaInterview #JavaCoding #JavaProgrammer #CodeNewbie #ProgrammingTips #DeveloperCommunity #CodingLife #LearnJava #JavaCheatSheet #ThreadPool #TechInterview #SoftwareEngineer #CodeWithMe #JavaExperts #BackendDeveloper #JavaLearning #JavaBasics #OOP #CodeDaily #CodingJourney #DevCommunity #JavaLovers #TechCareers #CodeSmarter #100DaysOfCode
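The wait()/notify() rule from the cheat sheet (must hold the monitor, guard with a loop) is easiest to see in a minimal single-slot mailbox:

```java
public class Mailbox {
    private String message;            // shared data, guarded by 'this'

    public synchronized void put(String m) {
        message = m;
        notify();                      // wake one thread waiting on 'this'
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) {      // loop guards against spurious wakeups
            wait();                    // releases the lock while waiting
        }
        String m = message;
        message = null;
        return m;
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        new Thread(() -> box.put("hello")).start();
        System.out.println(box.take()); // prints "hello"
    }
}
```

Calling wait() or notify() outside a synchronized block on the same monitor throws IllegalMonitorStateException, which is why both methods here are synchronized.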
# 🚀 JDK 25 - Java Flight Recorder Just Got a Massive Upgrade!

Java 25 dropped last month, and if you haven't explored the Java Flight Recorder (JFR) enhancements yet, you're missing out on some of the most powerful production observability tools ever added to the JVM. After working with these features in our production environment, I'm excited to share what's new and why it matters for your team.

## 🎯 The Game-Changing Trinity

**1️⃣ CPU-Time Profiling (JEP 509)**
This is HUGE. For years, JFR could only approximate CPU usage through execution sampling. Now, on Linux, it leverages the kernel's CPU timer for precise, accurate CPU-cycle profiling.

java -XX:StartFlightRecording=jdk.CPUTimeSample#enabled=true,filename=profile.jfr -jar app.jar

**Real Impact:** We identified a "fast" API endpoint that was actually burning 40% CPU while appearing responsive. The I/O wait made it seem fine in execution profiles, but CPU profiling revealed the truth. Fixed it, saved thousands in compute costs.

**2️⃣ Cooperative Sampling (JEP 518)**
The safepoint bias problem that plagued JFR sampling? Solved. Instead of risky heuristics that could crash your JVM, stack walking now happens cooperatively at safepoints, without the traditional safepoint bias. More stable, more accurate, less overhead.

**What this means:** No more "JVM crashed during profiling" incidents in production. Been there? This fixes it.

**3️⃣ Method Timing & Tracing (JEP 520)**
Production-ready bytecode instrumentation for precise method-level profiling. No more "sampling says method X is slow, but we don't know exactly how slow." Now you get:
✅ Exact invocation counts
✅ Real execution times (not sampled approximations)
✅ Complete trace paths
✅ All without external agents or significant overhead

## 💡 Why This Matters Beyond the Hype

**For DevOps Teams:** Your "unknown performance issue" troubleshooting time just dropped from hours to minutes. Start a recording, analyze, fix. Done.
**For Platform Engineers:** CPU-time profiling means you can finally distinguish between "slow because busy" vs "slow because waiting."

## 🛠️ Getting Started is Dead Simple

**Already running JDK 25?**

# 30-second production snapshot
jcmd <your-app-pid> JFR.start duration=30s filename=snapshot.jfr

# Analyze with JDK Mission Control or CLI
jfr print snapshot.jfr

**New to JFR?** Start your app with recording enabled:

-XX:StartFlightRecording=duration=60s,filename=first-recording.jfr -jar your-app.jar

That's it. No code changes. No dependencies. No complex setup.

## 📊 Real-World Results

After migrating to JDK 25 and enabling these JFR features:
- **Reduced troubleshooting time by 70%** for performance issues
- **Identified 3 major bottlenecks** that execution sampling had missed
- **Cut CPU costs by 25%** by finding and fixing inefficient code paths
- **Zero crashes** during profiling (cooperative sampling FTW)

#Java25 #JVM #JavaFlightRecorder #JFR
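Beyond the built-in events, applications can record their own timing data into the same .jfr files using the jdk.jfr API, which has been stable since JDK 11. A minimal sketch with an invented event name (demo.OrderProcessed is illustrative, not a real JDK event):

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class OrderTiming {
    static int emitted;                       // simple counter for demonstration

    @Name("demo.OrderProcessed")              // hypothetical event name
    @Label("Order Processed")
    static class OrderProcessed extends Event {
        @Label("Order Id") long orderId;
        @Label("Items")    int items;
    }

    static void processOrder(long id, int items) {
        OrderProcessed evt = new OrderProcessed();
        evt.begin();                          // start the event clock
        // ... real work would happen here ...
        evt.orderId = id;
        evt.items = items;
        evt.end();
        if (evt.shouldCommit()) evt.commit(); // recorded only while a recording is active
        emitted++;
    }

    public static void main(String[] args) {
        processOrder(42L, 3);
        System.out.println("event emitted (visible when a JFR recording is running)");
    }
}
```

When no recording is running, begin/end/commit are near no-ops, which is why custom events can stay in production code permanently.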
Understanding the Java Memory Model (JMM) is the key to mastering thread safety: visibility, ordering, and atomicity all come into play. In my latest article, I break down: - What the JMM really defines - How volatile, synchronized, and Atomic* classes work under the hood - How to avoid those elusive visibility bugs. If you've ever wondered why your threads don't always see the same data, this deep dive is for you. #Java #Concurrency #Programming #SoftwareEngineering #ThreadSafety #Developers #CodeQuality #JavaMemoryModel #Multithreading
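The classic visibility bug the JMM explains is a stop flag read in a loop; marking it volatile is the minimal fix:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileFlag {
    static volatile boolean running = true;   // 'volatile' guarantees visibility
    static final AtomicInteger iterations = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {                 // re-reads the flag on every loop
                iterations.incrementAndGet();
            }
        });
        worker.start();
        Thread.sleep(100);
        running = false;                      // this write is visible to the worker
        worker.join(1_000);
        System.out.println("worker stopped after " + iterations.get() + " iterations");
        // Without 'volatile', the JIT may hoist the read of 'running' out of the
        // loop, and the worker can spin forever even after main sets it to false.
    }
}
```

volatile gives visibility and ordering but not atomicity, which is why the counter here is an AtomicInteger rather than a plain int.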
🌟 Understanding Context Switching in Java (Part 1/2) 🌟

In my last interview, I was asked about "context switching" in Java. I think exploring this topic in depth could be really valuable!

In simple terms, context switching is the process by which a CPU suspends execution of one task (thread or process) in order to start or resume another. It's an essential mechanism for multitasking systems that ensures multiple processes or threads can share CPU resources effectively.

How Context Switching Occurs in Traditional Java Threads:
Java traditionally uses threads, commonly known as platform threads, managed by the JVM but mapped directly to underlying operating system (OS) threads. Here's a step-by-step breakdown of context switching in these standard Java threads:

1. Saving the Current Context: When the time slice assigned by the OS expires or a thread becomes blocked (waiting for I/O, synchronization locks, or other resources), the OS scheduler intervenes. It saves the current execution context — this typically includes CPU registers, program counters, stack pointers, and memory management states.

2. Pausing the Current Thread: After saving its state, the OS scheduler pauses the current thread, marking it as suspended or waiting. It will remain in this state until the reason for blocking (I/O completion, lock availability, etc.) resolves, or it's explicitly rescheduled.

3. Restoring the Context of the Next Thread: The OS scheduler selects the next thread to run based on scheduling algorithms (like round-robin, priority-based, etc.) and restores that thread's previously saved context, loading CPU registers, memory mappings, and stack pointers back into the CPU and hardware state.

4. Resuming Execution: After the new thread's context is properly restored, its execution resumes naturally from the exact point where it was previously paused.
Costs and Drawbacks of Traditional Context Switching:
Even though context switching enables multitasking, it has downsides:
- Performance overhead: Frequent context switches can degrade system performance, due to high CPU overhead from saving and restoring state and interacting with the kernel.
- Memory overhead: Each thread needs its own stack and management space, limiting the number of threads the system can efficiently handle.
- Latency: Context switching introduces latency in your application, which is particularly critical for latency-sensitive applications.

🔔 Follow or connect so you don't miss out on the next part of this in-depth Java series!

#Java #Concurrency #Threads #ProgrammingConcepts #SoftwarePerformance #SoftwareEngineering
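The blocking-and-resuming cycle described above can be provoked deliberately: a SynchronousQueue has no capacity, so every put() parks the producer until a consumer takes, forcing at least one scheduler round-trip per handoff. A rough sketch for getting a feel for the cost (the numbers are environment-dependent, not a rigorous benchmark):

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffBenchmark {
    // Hands 'rounds' items between two threads; each handoff blocks the
    // producer until the consumer takes, forcing a context switch pair.
    static long run(int rounds) throws InterruptedException {
        SynchronousQueue<Integer> queue = new SynchronousQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < rounds; i++) queue.take();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) queue.put(i);  // parks until taken
        consumer.join();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        int rounds = 10_000;
        long elapsed = run(rounds);
        System.out.printf("%d handoffs in %.1f ms (~%d ns each)%n",
                rounds, elapsed / 1e6, elapsed / rounds);
    }
}
```

The per-handoff cost you observe is dominated by exactly the save/restore and kernel-scheduling work listed above.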
POST 1: Spring Boot 3.x with Virtual Threads 🚀

Title: Spring Boot 3.x Virtual Threads - a Game Changer for Performance! 🔥

Exciting news for Java developers! The Virtual Threads integration in Spring Boot 3.x has completely transformed the performance of Java applications. Today we'll look at what it is and how to use it.

What are Virtual Threads?
Virtual Threads (Project Loom) are lightweight threads managed at the JVM level. Compared to traditional platform threads, they consume far fewer resources. An application can create hundreds of thousands of virtual threads without exhausting system resources.

Traditional vs Virtual Threads:
Platform threads: heavy, OS-managed, limited count (a few thousand)
Virtual threads: lightweight, JVM-managed, millions possible
Context switching: much faster with virtual threads

How to Enable It in Spring Boot 3.x?
Very simple! In application.properties:

spring.threads.virtual.enabled=true

Practical Use Case:
Suppose your application has many blocking I/O operations: database calls, external API calls, file operations. With traditional threads, every request consumes a platform thread, and under high load the thread pool gets exhausted. With virtual threads, every request can get its own dedicated virtual thread without any risk of resource exhaustion. This is especially useful in a microservices architecture where each request makes multiple service calls.

Performance Benefits:
Up to 10x better throughput for blocking operations
Reduced memory footprint
Better resource utilization
Simplified async programming - no need for complex reactive programming

Implementation Example:
No changes are needed at the controller level. Spring Boot automatically uses virtual threads if the property is enabled.
If you want to configure it explicitly:

@Bean
public TomcatProtocolHandlerCustomizer<?> protocolHandlerVirtualThreadExecutorCustomizer() {
    return protocolHandler -> {
        protocolHandler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
    };
}

Important Points:
Java 21+ is required
The benefit is small for CPU-intensive tasks
Perfect for I/O-bound applications
Use thread-local variables carefully

Real-world Impact:
In an e-commerce application where each request makes 5-6 database calls and 2-3 external API calls, virtual threads improved response time by up to 40% and doubled server capacity.

Migration Tips:
Migrating existing Spring Boot apps is easy
Just upgrade to Java 21+ and enable the property
No code changes are required in most cases
Test thoroughly - especially thread-local usage

Conclusion:
Virtual threads are a revolutionary feature for Spring Boot applications. They are proving to be a game-changer for high-concurrency applications.

#VirtualThreads #ProjectLoom #BackendDevelopment #JavaFullStack #PerformanceOptimization #Microservices #Java21
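Outside Spring, the same executor can be tried in plain Java. A small sketch, assuming a JDK 21+ runtime (the 10 ms sleep stands in for blocking I/O):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    static final AtomicInteger completed = new AtomicInteger();

    public static void main(String[] args) {
        int tasks = 10_000;
        // One cheap virtual thread per task; no fixed pool size to tune (Java 21+)
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(10);          // simulated blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // ExecutorService is AutoCloseable; close() waits for all tasks

        System.out.println("completed " + completed.get() + " of " + tasks);
    }
}
```

Running the same loop with a fixed platform-thread pool of, say, 200 threads takes far longer here, because each sleeping task occupies a scarce OS thread instead of unmounting from its carrier.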
Second Revision Topic – Thread States in Java (Explained Through One Example)

Today during my revision, I tried to understand thread states using a single practical example. When you connect the states with an actual flow, everything becomes much easier to remember and relate during debugging.

---

Explanation of All States

NEW
The thread object is created but not started. No execution happens at this point.

RUNNABLE
Calling start() moves the thread into RUNNABLE. It is either running or waiting for CPU time, depending on the scheduler.

TIMED_WAITING
When the thread enters Thread.sleep(1000), it waits for a fixed amount of time. This is the TIMED_WAITING state.

WAITING
After the sleep, the thread enters a synchronized block and calls wait(). Since there is no timeout, the thread moves into WAITING until another thread calls notify().

BLOCKED
If multiple threads try to enter the same synchronized block, the thread that doesn't get the lock goes into BLOCKED. This usually appears in debugging when threads are fighting for shared resources.

TERMINATED
Once the run method finishes, the thread reaches TERMINATED. It will not execute further code.
One Example That Covers All Thread States

class WorkerThread extends Thread {
    @Override
    public void run() {
        try {
            // TIMED_WAITING
            Thread.sleep(1000);
            synchronized (WorkerThread.class) {
                // WAITING
                WorkerThread.class.wait();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

public class ThreadStateDemo {
    public static void main(String[] args) throws Exception {
        WorkerThread t1 = new WorkerThread();  // NEW
        System.out.println(t1.getState());     // NEW

        t1.start();                            // moves to RUNNABLE
        Thread.sleep(100);
        System.out.println(t1.getState());     // RUNNABLE or TIMED_WAITING

        Thread.sleep(1500);
        System.out.println(t1.getState());     // WAITING

        synchronized (WorkerThread.class) {
            WorkerThread.class.notify();
        }
        Thread.sleep(100);
        System.out.println(t1.getState());     // RUNNABLE briefly, often already TERMINATED

        Thread.sleep(100);
        System.out.println(t1.getState());     // TERMINATED
    }
}

---

Quick Takeaway
Understanding thread states becomes much easier when you follow a single flow and track how the thread transitions from one state to another. This is especially helpful while debugging issues in executor services, concurrency handling, or deadlock scenarios.

#java #javarevision #javadeveloper #threadstates #multithreading #backenddevelopment #codingjourney #learninginpublic #softwareengineering #programmingtips #Developers #100DaysOfCode #TechCommunity #CodeNewbie