🧠 𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗠𝗲𝗺𝗼𝗿𝘆 𝗦𝗶𝗻𝗸 (𝗣𝗮𝗿𝘁 𝟭𝟯.𝟱.𝟯): 𝗝𝗮𝘃𝗮 𝗛𝘁𝘁𝗽𝗖𝗹𝗶𝗲𝗻𝘁 - 𝗠𝗼𝗱𝗲𝗿𝗻 𝗦𝘁𝗮𝗰𝗸 𝗡𝗼𝗯𝗼𝗱𝘆 𝗨𝘀𝗲𝘀

Java 11 gave us HttpClient: HTTP/2 built in, connection pooling included, an async API, and no OkHttp, Apache, or Netty dependency. Five years later, everyone still adds external HTTP libraries.

Why? Spring Boot doesn't default to it. Teams don't know it exists. Legacy code stays legacy. But it's there, and Java 21 virtual threads change the blocking model. Time to reconsider the built-in client?

---

🔧 𝗪𝗵𝗮𝘁 𝗝𝗮𝘃𝗮 𝗛𝘁𝘁𝗽𝗖𝗹𝗶𝗲𝗻𝘁 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝘀

HttpClient client = HttpClient.newBuilder()
    .version(Version.HTTP_2)   // HTTP/2 by default
    .connectTimeout(...)
    .build();

𝗕𝘂𝗶𝗹𝘁-𝗶𝗻:
- HTTP/2 by default (falls back to HTTP/1.1)
- Connection pooling (automatic)
- Sync and async APIs (CompletableFuture)
- No external dependencies

---

🔧 𝗪𝗵𝘆 𝗡𝗼𝗯𝗼𝗱𝘆 𝗨𝘀𝗲𝘀 𝗜𝘁

𝗦𝗽𝗿𝗶𝗻𝗴 𝗕𝗼𝗼𝘁 𝗱𝗲𝗳𝗮𝘂𝗹𝘁𝘀: RestTemplate (13.3), WebClient (13.4). Not Java HttpClient.
𝗟𝗲𝗴𝗮𝗰𝘆 𝗰𝗼𝗱𝗲: existing call sites keep their existing clients.
𝗨𝗻𝗳𝗮𝗺𝗶𝗹𝗶𝗮𝗿𝗶𝘁𝘆: many teams have simply never tried it.
𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺: fewer Stack Overflow answers, less community content.

---

🔧 𝗝𝗮𝘃𝗮 𝟮𝟭 𝗩𝗶𝗿𝘁𝘂𝗮𝗹 𝗧𝗵𝗿𝗲𝗮𝗱𝘀 𝗖𝗵𝗮𝗻𝗴𝗲 𝗧𝗵𝗲 𝗚𝗮𝗺𝗲

HttpClient client = HttpClient.newHttpClient();
// Blocking call on a virtual thread = no problem
String response = client.send(request, BodyHandlers.ofString()).body();

𝗕𝗲𝗳𝗼𝗿𝗲 𝘃𝗶𝗿𝘁𝘂𝗮𝗹 𝘁𝗵𝗿𝗲𝗮𝗱𝘀: blocking HTTP meant one platform thread per request. Doesn't scale (13.4).
𝗪𝗶𝘁𝗵 𝘃𝗶𝗿𝘁𝘂𝗮𝗹 𝘁𝗵𝗿𝗲𝗮𝗱𝘀: blocking code performs like async. Simple blocking HttpClient calls scale to thousands of concurrent requests.
𝗧𝗿𝗮𝗱𝗲𝗼𝗳𝗳: WebClient's reactive complexity vs HttpClient's simplicity plus virtual threads. Both scale. Different models.

---

🔧 𝗠𝗲𝗺𝗼𝗿𝘆 & 𝗖𝗼𝗻𝗳𝗶𝗴𝘂𝗿𝗮𝘁𝗶𝗼𝗻

𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻 𝗽𝗼𝗼𝗹𝗶𝗻𝗴: built in, automatic.
𝗛𝗧𝗧𝗣/𝟮 𝘀𝘁𝗮𝘁𝗲: HPACK overhead, same as any HTTP/2 client (13.5.2).
𝗗𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝗰𝗶𝗲𝘀: zero external.
𝗠𝗲𝗺𝗼𝗿𝘆 𝗳𝗼𝗼𝘁𝗽𝗿𝗶𝗻𝘁: fewer jars on the classpath.
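The snippets above combine into a minimal, self-contained sketch. The URL and timeout values are placeholder choices, and nothing is sent over the network here; a blocking send is shown only in a comment:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpClient.Version;
import java.net.http.HttpRequest;
import java.time.Duration;

// Minimal sketch of the built-in client (java.net.http, JDK 11+).
public class BuiltInClientDemo {

    static HttpClient newClient() {
        return HttpClient.newBuilder()
                .version(Version.HTTP_2)               // negotiates HTTP/2, falls back to 1.1
                .connectTimeout(Duration.ofSeconds(5)) // placeholder timeout
                .build();
    }

    static HttpRequest newRequest(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = newClient();
        HttpRequest request = newRequest("https://example.com/api"); // placeholder URL
        System.out.println(client.version() + " " + request.uri());
        // A blocking send (fine on a virtual thread) would be:
        //   client.send(request, java.net.http.HttpResponse.BodyHandlers.ofString()).body();
    }
}
```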
---

⚖️ 𝗧𝗿𝗮𝗱𝗲𝗼𝗳𝗳𝘀

𝗝𝗮𝘃𝗮 𝗛𝘁𝘁𝗽𝗖𝗹𝗶𝗲𝗻𝘁:
✓ HTTP/2 built-in, no dependencies, virtual thread ready
✗ Less Spring integration, smaller ecosystem, less familiar

𝗢𝗸𝗛𝘁𝘁𝗽/𝗔𝗽𝗮𝗰𝗵𝗲:
✓ Mature, large ecosystem, Spring integrated
✗ External dependencies, more complex configuration

𝗪𝗲𝗯𝗖𝗹𝗶𝗲𝗻𝘁:
✓ Reactive, Spring native, high concurrency
✗ Reactor complexity, learning curve (13.4)

---

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲

Built-in HTTP is finally viable. Reconsider the dependency bloat.

#Java #HttpClient #Java11 #Java21 #VirtualThreads #HTTP2 #Performance #Memory #NoDependencies #JavaPerformance #BackendDevelopment #ProjectLoom #ModernJava #CloudComputing #EnterpriseJava #DevOps #Microservices #MemoryOptimization #SoftwareEngineering #CloudNative #ProductionReady #JavaDevelopment #Async #NonBlocking #HighScale
🚀 🍃 𝗦𝗽𝗿𝗶𝗻𝗴 𝗕𝗼𝗼𝘁 𝟰 𝗠𝗲𝗲𝘁𝘀 𝗝𝗮𝗰𝗸𝘀𝗼𝗻 𝟯 — 𝗔 𝗡𝗲𝘄 𝗘𝗿𝗮 𝗳𝗼𝗿 𝗝𝗦𝗢𝗡 𝗣𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴 𝗶𝗻 𝗝𝗮𝘃𝗮

For years, Java developers have battled “boilerplate fatigue” when mapping Java objects ↔ JSON. Common frustrations:
• Checked exceptions in streams
• Mutable ObjectMapper configuration
• Verbose serialization setup

But the Modern Java Renaissance is here. With Spring Framework 7 and Spring Boot 4 on the horizon, the ecosystem is evolving toward a cleaner, more functional, and concurrency-friendly future. At the center of this shift lies 𝗝𝗮𝗰𝗸𝘀𝗼𝗻 𝟯.

⚙️ 𝗔 𝗡𝗲𝘄 𝗕𝗮𝘀𝗲𝗹𝗶𝗻𝗲: 𝗝𝗗𝗞 𝟭𝟳+
Jackson 3 raises the baseline to Java 17, unlocking modern language capabilities and allowing frameworks like Spring to simplify APIs and remove legacy constraints. Result: a leaner and more modern JSON processing model.

🧩 𝗧𝗵𝗲 “𝗥𝗼𝗼𝗺 𝗳𝗼𝗿 𝗧𝘄𝗼” 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆
A key decision in Spring Boot 4: Jackson 2 and Jackson 3 can coexist on the classpath when using spring-boot-starter-webmvc.
• Jackson 3 → tools.jackson.*
• Annotations stay in com.fasterxml.jackson.*
This enables:
✅ Existing models to keep working
✅ Third-party libraries to stay compatible
✅ Gradual ecosystem migration

🧱 𝗜𝗺𝗺𝘂𝘁𝗮𝗯𝗹𝗲 & 𝗧𝗵𝗿𝗲𝗮𝗱-𝗦𝗮𝗳𝗲 𝗖𝗼𝗻𝗳𝗶𝗴
Jackson 2’s ObjectMapper was mutable, which could lead to subtle concurrency issues. Jackson 3 introduces JsonMapper with an immutable builder pattern. Once built, configuration cannot change.
✔ Thread-safe by design
✔ No runtime config mutation
✔ Better fit for modern concurrent Java

⚡ 𝗟𝗮𝗺𝗯𝗱𝗮-𝗙𝗿𝗶𝗲𝗻𝗱𝗹𝘆 𝗘𝗿𝗿𝗼𝗿 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴
Using JSON inside Java Streams used to be painful because of checked exceptions: IOException and JsonProcessingException forced developers into verbose try/catch blocks. Jackson 3 introduces JacksonException, an unchecked runtime exception.
✔ Cleaner lambda expressions
✔ Stream pipelines without boilerplate
✔ Centralized error handling

📅 𝗛𝘂𝗺𝗮𝗻-𝗥𝗲𝗮𝗱𝗮𝗯𝗹𝗲 𝗗𝗮𝘁𝗲𝘀 𝗯𝘆 𝗗𝗲𝗳𝗮𝘂𝗹𝘁
Jackson 3 switches the default from epoch timestamps → ISO-8601 strings.
✔ Human-readable
✔ Standardized across APIs
✔ No custom serializers needed for most frontend apps

🌐 𝗖𝗹𝗲𝗮𝗻𝗲𝗿 𝗔𝗣𝗜𝘀 𝘄𝗶𝘁𝗵 𝗥𝗲𝘀𝘁𝗖𝗹𝗶𝗲𝗻𝘁 𝗮𝗻𝗱 𝗝𝗦𝗢𝗡 𝗩𝗶𝗲𝘄𝘀
Spring Boot 4 integrates JSON Views directly with RestClient using hint(), removing the need for wrappers like MappingJacksonValue. The result?
✔ One model → multiple API views
✔ Cleaner API design
✔ No DTO explosion

💡 𝗙𝗶𝗻𝗮𝗹 𝗧𝗵𝗼𝘂𝗴𝗵𝘁𝘀
The adoption of Jackson 3 in Spring Boot 4 is more than a version upgrade. It reflects the modernization of the Java ecosystem:
• Immutable configuration
• Safer concurrency
• Functional-friendly exceptions
• Better serialization defaults
• Cleaner API design

Less friction. More focus on building great applications.

#Java #SpringBoot4 #Jackson3 #SpringFramework #JSONMapper #RestAPI #JSON #SpringBoot
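A hedged sketch of the two headline changes (the immutable JsonMapper builder and the unchecked JacksonException), assuming Jackson 3's tools.jackson databind artifact is on the classpath; the payload is purely illustrative:

```java
import tools.jackson.databind.json.JsonMapper;

import java.util.List;
import java.util.Map;

public class Jackson3Sketch {

    // Built once via the builder; immutable afterwards, so safe to share
    // across (virtual) threads without defensive copying.
    static final JsonMapper MAPPER = JsonMapper.builder().build();

    public static void main(String[] args) {
        // No checked exception: JacksonException is unchecked in Jackson 3,
        // so serialization composes cleanly inside a stream pipeline.
        List<String> out = List.of(Map.of("id", 1), Map.of("id", 2)).stream()
                .map(MAPPER::writeValueAsString)   // no try/catch wrapper needed
                .toList();
        System.out.println(out);
    }
}
```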
𝐉𝐚𝐯𝐚 𝟏𝟑: 𝐌𝐚𝐤𝐢𝐧𝐠 𝐭𝐡𝐞 𝐒𝐰𝐢𝐭𝐜𝐡 𝐒𝐭𝐚𝐭𝐞𝐦𝐞𝐧𝐭 𝐁𝐞𝐭𝐭𝐞𝐫!

If you've ever spent hours debugging a logic error only to find you missed a single break; statement, you know the pain of the traditional Java switch. Java 12 and 13 introduced major upgrades to fix these "legacy" headaches. Here is a quick breakdown of how the Enhanced Switch makes your code cleaner and safer:

𝟏. 𝐍𝐨 𝐌𝐨𝐫𝐞 "𝐅𝐚𝐥𝐥-𝐓𝐡𝐫𝐨𝐮𝐠𝐡" 𝐓𝐫𝐚𝐩𝐬
Traditional switches require a break for every case. If you forget it, the code "falls through" to the next case.
The Fix: the new arrow (->) syntax executes only the code on its right side. No break required!

𝟐. 𝐒𝐰𝐢𝐭𝐜𝐡 𝐚𝐬 𝐚𝐧 𝐄𝐱𝐩𝐫𝐞𝐬𝐬𝐢𝐨𝐧
You can now assign the result of a switch directly to a variable, which makes your code much more concise. Example:

String device = switch (itemCode) {
    case 1 -> "Laptop";
    case 2 -> "Desktop";
    default -> "Unknown";
};
System.out.println("Output: " + device);

𝐎𝐮𝐭𝐩𝐮𝐭 (for itemCode = 1): Output: Laptop

𝟑. 𝐌𝐮𝐥𝐭𝐢𝐩𝐥𝐞 𝐕𝐚𝐥𝐮𝐞𝐬, 𝐎𝐧𝐞 𝐂𝐚𝐬𝐞
Gone are the days of stacking cases on top of each other. You can now comma-separate multiple values in a single line:

case 1, 2, 3 -> System.out.println("Electronic Gadget");

𝟒. 𝐓𝐡𝐞 𝐲𝐢𝐞𝐥𝐝 𝐊𝐞𝐲𝐰𝐨𝐫𝐝
In Java 13, if you are using the traditional colon syntax (:) but want to return a value from a switch expression, use yield. It returns the value and exits the switch immediately.

𝟓. 𝐄𝐱𝐡𝐚𝐮𝐬𝐭𝐢𝐯𝐞𝐧𝐞𝐬𝐬 (𝐒𝐚𝐟𝐞𝐭𝐲 𝐅𝐢𝐫𝐬𝐭!)
When using switch as an expression, Java forces you to cover every possible case (or provide a default). This prevents those pesky "unhandled value" bugs from reaching production.

𝟔. 𝐌𝐨𝐝𝐞𝐫𝐧 𝐋𝐨𝐠𝐢𝐜 𝐰𝐢𝐭𝐡 𝐭𝐡𝐞 𝐰𝐡𝐞𝐧 𝐊𝐞𝐲𝐰𝐨𝐫𝐝 (𝐉𝐚𝐯𝐚 𝟐𝟏+)
Introduced in Java 21, the when keyword acts as a Guard: it lets you add extra boolean conditions directly to a case label. No more nesting if statements inside your cases! Example:

switch (obj) {
    case String s when s.length() > 5 -> System.out.println("Long string: " + s);
    case String s -> System.out.println("Short string");
    default -> System.out.println("Not a string");
}

Deeply grateful to Syed Zabi Ulla Sir for his expert guidance. He has a gift for making even the trickiest Java updates feel intuitive. Thank you, sir, for helping us build such a strong technical base and for always being a guiding light in our learning journey!

#Java #Programming #CodingTips #SoftwareDevelopment #Java21 #CleanCode #BackendDeveloper #Mentorship #PWIOI #LearningJourney
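Points 4 and 5 in the post have no code of their own, so here is a small self-contained sketch combining them: colon syntax with yield, plus compiler-enforced exhaustiveness over an enum (the Size enum and prices are made up for illustration):

```java
public class YieldDemo {

    enum Size { SMALL, MEDIUM, LARGE }

    // Colon syntax inside a switch *expression*: each labeled group
    // must yield (or throw), and yield exits the switch immediately.
    static int price(Size size) {
        return switch (size) {
            case SMALL:  yield 5;
            case MEDIUM: yield 8;
            case LARGE:  yield 12;
            // No default needed: every Size constant is covered, and the
            // compiler enforces that exhaustiveness (point 5).
        };
    }

    public static void main(String[] args) {
        System.out.println(price(Size.MEDIUM)); // 8
    }
}
```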
I built Spring Perf Analyzer — because performance bugs are cheap to fix in dev and expensive to debug in prod.

Most teams find N+1 queries, missing caches, and slow service calls through production incidents. This moves that discovery into your local development loop — zero infrastructure, one annotation.

⚙️ Zero-Friction Setup
Step 1 — Drop the dependency.
Step 2 — Add @EnablePerfAnalyzer to your main class.
One annotation. Spring Boot's auto-configuration does the rest — the full instrumentation stack registers itself into your application context on startup. No XML. No manual wiring. No agent. No sidecar. Start your application. Every request is now instrumented.

🔧 How It Works Under the Hood
Each layer was chosen because it's the earliest possible interception point for that class of problem:

Hibernate StatementInspector sits at the JDBC boundary — before execution. Queries are tracked per-request via ThreadLocal, then analyzed post-request for structural repetition. That's where N+1s live, and that's exactly where they're caught.

MVC HandlerInterceptor stamps request entry and exit timestamps directly into request attributes. No object allocation. No thread contention. Accurate latency measurement with negligible overhead.

AspectJ @Around Advice wraps every @Service method and tracks invocation frequency against parameter signatures. A method called 3+ times with identical arguments, each taking over 100ms — that's not a coincidence. That's a missing cache, and it gets flagged with the exact method signature.

Servlet Filter coordinates the entire analysis at the request boundary. After the response completes, findings are pushed to a non-blocking report queue. Your latency numbers stay clean.

🆚 Why This, and Not What You Already Have
Datadog and New Relic are built for production — they need infrastructure, agents, and an incident to justify the signal. Hibernate's show_sql gives you a wall of text with no aggregation. Actuator surfaces metrics, not root causes. None of them are designed to give you actionable, request-scoped diagnostics inside a local dev loop with nothing to configure. That's the gap. This fills it.

📈 What's Next
Thread pool saturation detection → JFR-based memory allocation profiling → CI/CD performance regression gates → Prometheus metrics export
The direction is the same throughout: make performance observable by default, not discoverable by accident.

Stack: Java 17 | Spring Boot 3.2.3 | Hibernate StatementInspector | AspectJ | HandlerInterceptor | Maven
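The "missing cache" heuristic described above can be sketched framework-free. This illustrates the idea only — it is not the library's code, and the class and method names are hypothetical:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch: count invocations per (method, arguments) signature
// within one request scope and flag repeats. The real analyzer wires this
// through AspectJ advice and also checks per-call latency.
public class MissingCacheHeuristic {

    private final Map<String, Integer> callCounts = new HashMap<>();

    // Record one invocation; true when the same signature has now been
    // seen 3+ times in the current request scope.
    boolean recordAndCheck(String method, Object... args) {
        String signature = method + Arrays.toString(args);
        int count = callCounts.merge(signature, 1, Integer::sum);
        return count >= 3;
    }

    public static void main(String[] args) {
        MissingCacheHeuristic h = new MissingCacheHeuristic();
        h.recordAndCheck("PriceService.lookup", 42);
        h.recordAndCheck("PriceService.lookup", 42);
        // Third identical call trips the flag:
        System.out.println("flagged: " + h.recordAndCheck("PriceService.lookup", 42));
    }
}
```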
I created a small #Java library (with zero dependency) to extract #JSON structures from chatty #LLM outputs that don't always output pure JSON. Then you pass that extracted JSON content to a tolerant parser like #Jackson in case the LLM decided to add comments, to unquote keys or what not! https://lnkd.in/emi_PsR4
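The core extraction idea can be sketched in a few lines of dependency-free Java. This is an illustration of the technique, not the linked library's code: find the first '{' and walk to its matching '}', tracking nesting depth and ignoring braces inside string literals:

```java
public class JsonExtractor {

    // Returns the first balanced {...} block in the text, or null if none.
    static String extractFirstObject(String text) {
        int start = text.indexOf('{');
        if (start < 0) return null;
        int depth = 0;
        boolean inString = false;
        for (int i = start; i < text.length(); i++) {
            char c = text.charAt(i);
            if (inString) {
                if (c == '\\') i++;                 // skip the escaped character
                else if (c == '"') inString = false;
            } else if (c == '"') {
                inString = true;
            } else if (c == '{') {
                depth++;
            } else if (c == '}' && --depth == 0) {
                return text.substring(start, i + 1);
            }
        }
        return null;                                // unbalanced: no complete object
    }

    public static void main(String[] args) {
        String chatty = "Sure! Here is your JSON: {\"a\": {\"b\": 1}} Hope that helps.";
        System.out.println(extractFirstObject(chatty)); // {"a": {"b": 1}}
    }
}
```

The extracted substring still goes to a tolerant parser afterwards, as the post suggests, since this scan deliberately does not validate the JSON itself.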
𝐉𝐚𝐯𝐚 𝐂𝐚𝐧 𝐑𝐮𝐧 𝐖𝐢𝐭𝐡𝐨𝐮𝐭 𝐚 𝐆𝐚𝐫𝐛𝐚𝐠𝐞 𝐂𝐨𝐥𝐥𝐞𝐜𝐭𝐨𝐫 💡

Java can run without a garbage collector. Not as a hack. As a real JVM option. That option is Epsilon GC. And yes, it does exactly what it sounds like:
✅ allocations still happen
✅ the application still runs
❌ memory is never reclaimed

At first, this sounds absurd. For many engineers, Java without GC feels like saying: "Java without the JVM." But Epsilon GC exists for a reason. It is a no-op garbage collector. Its job is not to reclaim memory. Its job is to let the application run until the heap is exhausted.

So why would anyone want that? Because for some workloads, reclaiming memory is wasted work. Think about:
🔹 short-lived batch jobs
🔹 one-shot CLI tools
🔹 ephemeral workers
🔹 benchmark scenarios
🔹 tightly controlled processes with known memory bounds

If the process is going to finish before memory pressure becomes a real issue, then GC may be solving a problem that workload never actually had.

That is what makes Epsilon GC so interesting. It forces a deeper question. Not: "Which GC should we tune?" But: "Does this workload need reclamation at all?"

That is a very different mindset. We usually treat garbage collection as mandatory Java runtime machinery. But in reality, it is a trade-off. And Epsilon GC makes that trade-off visible in the most brutal possible way:
🧠 no pauses
🧠 no reclamation work
🧠 no long-term safety net
Just pure allocation until the process ends or dies.

Of course, this is not a good fit for general long-lived services. For a typical backend:
❌ memory usage will only grow
❌ heap exhaustion becomes inevitable
❌ one wrong assumption can kill the process

So this is not "turn off GC and win". It is a reminder that runtime design should follow workload shape. Sometimes the right question is not how to improve GC. Sometimes the right question is whether this workload should pay for GC at all.

That is why I like Epsilon GC as a concept. Not because it replaces real collectors. But because it exposes an uncomfortable truth:
⚡ the best memory strategy depends on how your process actually lives, allocates, and dies.

And for some short-lived workloads, no GC is not madness. It is simply the trade-off.

❓Would you run a production workload without a garbage collector?

#Java #JVM #Performance #Backend #SoftwareEngineering #GarbageCollection #JavaInternals #SystemDesign #EngineeringLeadership
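For completeness, enabling Epsilon looks like this (the flags come from JEP 318; the heap size and jar name are placeholders):

```shell
# Epsilon is experimental, so it must be unlocked explicitly.
# Sizing the heap up front (-Xms == -Xmx) also avoids resizing work.
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC \
     -Xms2g -Xmx2g \
     -jar batch-job.jar   # placeholder jar name
```

When the heap fills, the JVM exits with an OutOfMemoryError rather than pausing to collect — which is exactly the trade-off described above.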
🧠 Understanding HashMap Internals — Collision, Load Factor & Bucket Size (Java Deep Dive)

If you’ve used HashMap in Java, you probably know it’s fast. But why is it fast? And what actually happens under the hood when things go wrong? 🤔 Let’s break down three core concepts that every backend developer should understand 👇

🔹 1. Collision — When Two Keys Fight for the Same Spot
A collision happens when two different keys generate the same hash index:

index = hash(key) % bucketSize;

👉 Different keys → Same index → Collision 💥
What happens internally?
Before Java 8 → stored as a Linked List.
Java 8+ → converts to a balanced Red-Black Tree once a bucket's chain grows past a threshold (8 nodes).
⚠️ Why it matters: more collisions = more time to search → O(n) instead of O(1).

🔹 2. Load Factor — The Resize Trigger
Load factor defines how full the HashMap can get before resizing:

loadFactor = size / capacity

Default value in Java: 👉 0.75
💡 What it means: if capacity = 16, resize happens when size > 12.
🔄 What happens during resize?
Capacity doubles (16 → 32 → 64…) and all entries are rehashed — an expensive operation ⚠️
⚖️ Trade-off:
High load factor → less memory, more collisions
Low load factor → more memory, fewer collisions

🔹 3. Bucket Size — The Foundation
Bucket size = number of slots (capacity) in the HashMap.
👉 Default initial capacity = 16
Each bucket stores one node (ideal case) or multiple nodes (collision case).
📌 Important: capacity is always a power of 2 (16, 32, 64…), which lets the index be computed with a cheap bit operation instead of a modulo.

🔄 How Everything Works Together
1️⃣ Key goes through hashCode()
2️⃣ Hash determines bucket index
3️⃣ If empty → insert
4️⃣ If collision → chain/tree
5️⃣ If threshold crossed → resize

⚡ Performance Summary:
🚀 Best case → O(1)
⚖️ With collisions (Java 8+) → O(log n)
🐢 Worst case → O(n)

🏁 Final Takeaway
👉 HashMap is not just a simple key-value store — it’s a carefully optimized data structure.
👉 Understanding collisions, load factor, and bucket sizing helps you write better code and avoid performance pitfalls.

#Java #HashMap #DataStructures #BackendDevelopment #SystemDesign #JavaInternals #CodingInterview
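The "power of 2" index math can be shown directly. A minimal sketch mirroring what OpenJDK's HashMap does internally (the high-bit spreading step matches HashMap.hash):

```java
public class BucketIndexDemo {

    // Same high-bit spreading HashMap applies before indexing, so that
    // hash codes differing only in high bits still land in different buckets.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    // With capacity n = 2^k, (n - 1) & hash is a cheap bit mask that is
    // equivalent to hash % n for non-negative hashes -- this is why
    // capacity is always a power of two.
    static int bucketIndex(Object key, int capacity) {
        return (capacity - 1) & spread(key.hashCode());
    }

    public static void main(String[] args) {
        int capacity = 16;  // default initial capacity
        System.out.println("bucket for \"alpha\": " + bucketIndex("alpha", capacity));
        // Two keys landing on the same index is exactly a collision:
        System.out.println(bucketIndex("alpha", capacity) == bucketIndex("beta", capacity));
    }
}
```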
☕ Java 25 LTS dropped in September 2025 and most Spring Boot teams haven't fully processed what changed. Virtual Threads are no longer experimental — and they retire two architectural patterns you've been using for years. Here's what actually matters for your backend design:

**1. The synchronized pinning problem is gone (JEP 491)**

In Java 21, virtual threads would "pin" the carrier thread when hitting a `synchronized` block — meaning your shiny new virtual threads behaved just like platform threads under contention. JEP 491 fixed this in Java 24. In Java 25, it's fully stable.

What this means: Hibernate, JDBC drivers, and any legacy `synchronized` code that was blocking virtual thread scalability? It works now. You can enable virtual threads in Spring Boot 3.5 with a single property and actually trust the throughput gains:

```yaml
spring:
  threads:
    virtual:
      enabled: true
```

**2. Scoped Values replace ThreadLocal — for real this time**

`ThreadLocal` was always a code smell: mutable, inherited by child threads in unexpected ways, a leak waiting to happen. Scoped Values (finalized in Java 25) are immutable, structured around the call scope, and designed for virtual threads from the ground up. Spring Boot 3.5 embraces them natively.

The architectural shift: instead of passing context via `ThreadLocal` (request ID, tenant ID, user session), you bind it once with `ScopedValue.where()` — immutable for the entire scope.

**3. Reactive vs Virtual Threads — the decision just got easier**

The classic argument for WebFlux was throughput under high concurrency. With virtual threads stable and pinning fixed, the breakeven point shifted significantly.

For most enterprise use cases — REST APIs, microservices, database-bound workloads — Spring MVC + Virtual Threads now matches reactive throughput with a fraction of the complexity. Reserve Reactor/WebFlux for true streaming scenarios: SSE, WebSocket, or pipeline-style data processing.

Are you already running Java 25 in production?
What's holding teams back from the upgrade? 👇 Source(s): https://lnkd.in/dcMhHgjr https://lnkd.in/dzDJnPu3 https://lnkd.in/dg6kxdQ8 #Java #SpringBoot #Java25 #VirtualThreads #BackendEngineering #SoftwareArchitecture #JVM #SpringFramework
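Point 1's throughput model in miniature — a sketch assuming Java 21+, where each blocking task parks its own virtual thread instead of tying up a platform thread (the sleep stands in for a blocking HTTP or JDBC call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {

    // Submits n blocking tasks, one virtual thread each, and waits for all
    // of them: close() on the executor blocks until the tasks finish.
    static int runTasks(int n) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(5);   // blocking: parks only the virtual thread
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        }
        return completed.get();
    }

    public static void main(String[] args) {
        // Thousands of concurrent blocking tasks, no thread-pool tuning.
        System.out.println(runTasks(10_000));
    }
}
```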
Spring Bean Scope: Prototype

In Spring, when a bean is configured with prototype scope:
• The IoC container creates a new bean object every time getBean() is called.
• These objects are not stored in the internal cache of the IoC container.
• Prototype is NOT the default scope in Spring (the default is Singleton).
• The container creates a new object for every bean id and for every factory.getBean() call.

🔹 Example Bean Class

@Component
@Scope("prototype")
public class PrototypeBean {
    public void show() {
        System.out.println("Prototype Bean Method Called");
    }
}

Main Spring Boot Application

@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        ApplicationContext context = SpringApplication.run(DemoApplication.class, args);

        PrototypeBean bean1 = context.getBean(PrototypeBean.class);
        bean1.show();
        PrototypeBean bean2 = context.getBean(PrototypeBean.class);
        bean2.show();

        System.out.println(bean1);
        System.out.println(bean2);
    }
}

🔹 Output
Different memory addresses:
com.example.demo.PrototypeBean@1a2b3c
com.example.demo.PrototypeBean@4d5e6f
✔ This proves that two different objects are created.

🔹 Interesting Interview Question
What happens if a real Java Singleton class is configured as a Spring bean with prototype scope?

Answer:
• If the @Bean method creates the object with the constructor (new), the IoC container creates a new object for every getBean() call.
• If the @Bean method uses the factory method (getInstance()), the same Singleton object is returned every time.

🔹 Example: Singleton Java Class

public class MySingleton {
    private static MySingleton instance;

    private MySingleton() {
        System.out.println("Constructor Called");
    }

    public static MySingleton getInstance() {
        if (instance == null) {
            instance = new MySingleton();
        }
        return instance;
    }
}

Case 1: Using Constructor

@Bean
@Scope("prototype")
public MySingleton mySingleton() {
    return new MySingleton();
}

Output:
Constructor Called
Constructor Called
✔ Different objects created.

Case 2: Using Factory Method

@Bean
@Scope("prototype")
public MySingleton mySingleton() {
    return MySingleton.getInstance();
}

Output:
Constructor Called (printed only once — every getBean() call returns the same reference)
✔ Same object returned.

🔹 Conclusion
If a real Java Singleton class is configured as a Spring prototype bean:
• Using the constructor (new) → new object on every getBean() call
• Using the factory method (getInstance()) → same Singleton instance returned

#Java #SpringBoot #SpringFramework #BackendDevelopment #SoftwareEngineering #JavaDeveloper #TechInterview #SpringCore
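The interview answer can be verified without Spring at all — a plain-Java sketch of what the two @Bean styles hand back on each call:

```java
public class SingletonVsPrototype {

    static class MySingleton {
        private static MySingleton instance;

        private MySingleton() {}

        static synchronized MySingleton getInstance() {
            if (instance == null) {
                instance = new MySingleton();
            }
            return instance;
        }
    }

    public static void main(String[] args) {
        // Case 1: "new" each time, as a constructor-based @Bean method would do.
        // (Allowed here because nested classes can use each other's private members.)
        MySingleton a = new MySingleton();
        MySingleton b = new MySingleton();
        System.out.println(a == b);   // false: distinct objects every call

        // Case 2: factory method, as a @Bean method calling getInstance() would do.
        MySingleton c = MySingleton.getInstance();
        MySingleton d = MySingleton.getInstance();
        System.out.println(c == d);   // true: the cached singleton instance
    }
}
```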
Spring Boot's OSIV default turns your JSON serializer into a hidden query engine.

I'm building a personal finance platform with Spring Boot 3.5 + Java 21. The first thing I disabled was Open Session in View.

Spring Boot ships with spring.jpa.open-in-view=true. That means Hibernate keeps a database connection open through the entire HTTP request - including JSON serialization. When Jackson walks your entity graph to build a response, every uninitialized lazy relationship triggers a database query. Your serializer is now executing SQL.

In a load test with HikariCP's default pool of 10 connections and around 150 concurrent requests, this is where things break. Each request holds a connection for the full request lifecycle instead of just the service layer. The pool exhausts, threads queue up, and response times spike. The tricky part is that it works fine in dev when you're the only user.

Disabling OSIV forces you to think about what data you actually need. You fetch it explicitly in the service layer with JOIN FETCH or projections, map it to a DTO, and the connection goes back to the pool before serialization even starts. It's more code upfront but the data flow becomes visible instead of hidden behind proxy magic.

The second thing I changed was ddl-auto. Hibernate's update mode can generate schema changes automatically, but it can't rename columns, drop unused indexes, or migrate data. It produces a schema that looks right but drifts from what you intended. I use validate with Flyway migrations instead - every schema change is an explicit, versioned SQL file. If the code and the database disagree, the app refuses to start rather than silently diverging.

These two defaults share the same problem. They hide complexity that surfaces as production issues. OSIV hides query execution. ddl-auto update hides schema drift. In both cases, making the behavior explicit costs more effort early but removes an entire class of debugging later.
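The two changes described above amount to a few lines of configuration — a sketch in application.yml form (the migration file name is just the Flyway naming convention, shown for illustration):

```yaml
spring:
  jpa:
    open-in-view: false      # release the connection before serialization starts
    hibernate:
      ddl-auto: validate     # fail fast if schema and entities disagree
  flyway:
    enabled: true            # schema changes live in versioned SQL files,
                             # e.g. src/main/resources/db/migration/V1__init.sql
```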
#SpringBoot #Java #BuildInPublic #BackendDevelopment
Explain the difference between synchronized, ReentrantLock, and StampedLock. When would you choose StampedLock over the others?

1. synchronized (The Built-In Standard)
What it is: Java's implicit monitor lock. You apply it directly to methods or blocks of code.
Pros: Very easy to use. The JVM automatically handles acquiring and releasing the lock, meaning you can't accidentally forget to unlock it if an exception is thrown.
Cons: Rigid. It lacks advanced features like checking if a lock is available (tryLock), interrupting a waiting thread, or separating read vs. write operations.
Reentrancy: Yes (a thread holding the lock can enter other synchronized blocks using the same lock without deadlocking).

2. ReentrantLock (The Flexible Alternative)
What it is: An explicit lock from the java.util.concurrent.locks package. You must manually call .lock() and explicitly call .unlock() (usually in a finally block).
Pros: Highly flexible. It supports fairness (granting locks to the longest-waiting thread), interruptible lock waits, and tryLock() (attempting to acquire a lock without blocking indefinitely).
Cons: More verbose and error-prone. If you forget the finally block, a crash can leave the lock held forever.
Reentrancy: Yes.

3. StampedLock (The High-Performance Optimizer)
What it is: Introduced in Java 8, it is an advanced lock that returns a long "stamp" whenever you acquire a lock, which you must use to release it.
Pros: Extremely fast in read-heavy scenarios. It introduces the concept of Optimistic Reading: you can read data without actually locking it, then check the "stamp" afterward to see if a writer came in and changed the data while you were reading. If the data changed, you fall back to a traditional read lock.
Cons: Complex to implement correctly. It is not reentrant (a thread can deadlock itself if it tries to re-acquire it), and it doesn't support condition variables.

When to choose StampedLock over the others?
You should choose StampedLock when you have a highly concurrent application where read operations vastly outnumber write operations. Because traditional read-write locks (ReentrantReadWriteLock) can suffer from "writer starvation" (where constant readers prevent a writer from ever getting the lock), StampedLock's optimistic read solves this. It allows threads to read data without blocking writers, yielding massive performance gains in read-heavy, high-contention environments.
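The optimistic-read pattern described above, as a minimal self-contained example (the point coordinates and distance computation are illustrative):

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticReadDemo {

    private final StampedLock lock = new StampedLock();
    private int x, y;

    void move(int dx, int dy) {
        long stamp = lock.writeLock();        // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);          // the stamp releases the lock
        }
    }

    int distanceSquared() {
        long stamp = lock.tryOptimisticRead(); // no blocking, no lock actually held
        int cx = x, cy = y;                    // read under the optimistic stamp
        if (!lock.validate(stamp)) {           // a writer got in: retry pessimistically
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return cx * cx + cy * cy;
    }

    public static void main(String[] args) {
        OptimisticReadDemo p = new OptimisticReadDemo();
        p.move(3, 4);
        System.out.println(p.distanceSquared()); // 25
    }
}
```

In the common case the optimistic read succeeds and no lock state is touched at all, which is exactly why this pattern avoids blocking writers.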