🧠 HashMap vs ConcurrentHashMap — and How HashMap Works Internally

Choosing between HashMap and ConcurrentHashMap is easy. Explaining why — and how HashMap actually works internally — is what separates usage from understanding. Let's break down both in one place 👇

1️⃣ HashMap vs ConcurrentHashMap (When & Why)

-> HashMap
❌ Not thread-safe
✔ Very fast in single-threaded scenarios
✔ Allows one null key and multiple null values
❌ Unsafe when accessed by multiple threads

Use it when:
- the map is not shared across threads
- performance matters and concurrency doesn't

-> ConcurrentHashMap
✔ Thread-safe
✔ Scales well under high concurrency
❌ Does not allow null keys or values
✔ Designed for multi-threaded access

Under the hood (Java 8+), it avoids global locking by using:
- CAS (Compare-And-Swap)
- fine-grained, bucket-level locking
- mostly lock-free reads

This makes it far more scalable than synchronized maps.

2️⃣ Why HashMap Breaks in Multithreaded Code

HashMap has no synchronization. When multiple threads modify it:
- buckets can be updated concurrently
- the internal structure may become corrupted
- resize operations can behave unpredictably

This is why HashMap should never be used for shared mutable state.

3️⃣ How HashMap Works Internally (Structured View)

To explain HashMap internals clearly (especially in interviews or debugging), I use a simple mnemonic to organize the concepts: HBCET

H — Hashing
hashCode() is called on the key. The hash is spread and converted into a bucket index using:
(n - 1) & hash
This determines which bucket stores the entry.

B — Buckets
Internally, HashMap is an array of buckets. Each bucket holds key-value nodes (Node<K,V>).

C — Collision
When multiple keys map to the same bucket, it is handled using:
- a linked list (the default)
- a red-black tree (Java 8+)

E — Equals
If two hashes match, equals() is used to:
- identify the exact key
- prevent duplicate entries
An incorrect equals() or hashCode() can silently break the map.

T — Treeification
To protect worst-case performance, Java 8+ converts a bucket's linked list into a red-black tree when:
- the bucket's list grows past the treeify threshold of 8 nodes
- the table size is ≥ 64
Lookup in that bucket then improves from O(n) to O(log n).

4️⃣ How ConcurrentHashMap Builds on This

ConcurrentHashMap uses the same hashing and bucket concepts, but adds:
- safe concurrent updates
- fine-grained locking
- non-blocking reads

Result:
✔ better throughput
✔ no global lock
✔ predictable behavior under load

🧩 Key Takeaways
✔ HashMap is fast but not thread-safe
✔ ConcurrentHashMap is designed for concurrency
✔ HashMap relies on buckets, collision handling, equals(), and treeification
✔ Correct equals() and hashCode() are critical
✔ Internal understanding helps avoid subtle production bugs

📌 Save this for Java interviews
📌 Revisit before using a Map in concurrent code

#Java #HashMap #ConcurrentHashMap #CoreJava #BackendEngineering #Multithreading
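The hashing step above can be sketched in a few lines. This is a minimal illustration (class and method names are mine, not from the post) of how the spread hash and `(n - 1) & hash` index computation work; it mirrors the approach used by `java.util.HashMap`, where the table length `n` is always a power of two.

```java
// Sketch of HashMap-style bucket index computation.
public class BucketIndexDemo {
    // Spread the hash: XOR the high 16 bits into the low 16 bits so that
    // small tables (which only look at the low bits) still see the upper bits.
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    // (n - 1) & hash works as a fast modulo because n is a power of two.
    static int indexFor(Object key, int n) {
        return (n - 1) & spread(key);
    }

    public static void main(String[] args) {
        int n = 16; // default initial table size
        // The index is always within [0, n), and a null key maps to bucket 0.
        System.out.println(indexFor("hello", n));
        System.out.println(indexFor(null, n));
    }
}
```

This also shows why a bad `hashCode()` (e.g. one returning a constant) sends every entry to the same bucket and degrades the map to a list/tree.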
If your JVM services still treat threads as scarce OS resources, you're multiplying complexity and production risk. In three recent Java 21 migrations I ran, endpoints that were previously dominated by parked threads saw p99 latency reduce by ~3–10x after disciplined adoption of virtual threads — but only when teams followed a safety-first rollout.

Step 1 — Measure and isolate blocking boundaries.
Don't guess. Capture a short Flight Recorder trace and your APM spans under a light load. Look for long PARKED/BLOCKED thread states, high HikariCP waiters, and where time is spent in native I/O. If blocked time dominates, those handlers are the right place to try virtual threads.
Concrete: run a 10s load test, open the JFR recording, filter for long PARKED threads and Hikari wait events. Correlate with a thread dump to find the exact call stacks.

Step 2 — Adopt virtual threads with a safety-first executor pattern.
Introduce virtual threads at the application boundary so you can roll back quickly. Do not rewrite your entire stack on day one.

Example (Java 21):

ExecutorService vte = Executors.newVirtualThreadPerTaskExecutor();
try (vte) {
    CompletableFuture.supplyAsync(() -> userRepository.findById(id), vte).join();
}

Spring Boot tip: expose the executor as a @Bean and use CompletableFuture.supplyAsync(...) in controllers or @Async services to keep the servlet container untouched.

Step 3 — Tune external systems and use structured concurrency.
Virtual threads lower JVM overhead but do not increase upstream capacity. Keep Hikari maxPoolSize aligned with DB capacity (e.g., 20–100), add sensible timeouts, and introduce backpressure or async drivers where needed.
Structured concurrency example (a preview API in Java 21):

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var u = scope.fork(() -> userService.load(id));
    var o = scope.fork(() -> orderService.loadForUser(id));
    scope.join();
    scope.throwIfFailed();
    return combine(u.resultNow(), o.resultNow());
}

Hard-won lessons:
• More virtual threads ≠ more DB capacity. The database is still the bottleneck. Limit concurrent DB work and prefer batching.
• Start small and target noisy endpoints first: background jobs, report generators, or a single high-latency API.
• Instrument aggressively: thread states, pool waiters, TTFB, p95/p99 latencies; add dashboards and alerts.

Deep-signal question (looking for long-form replies): What was the hardest concurrency or resource-exhaustion bug you discovered after introducing virtual threads — what diagnostic signals pointed to it, how did you fix it, and what would you change in hindsight? Share traces, pool configs, and post-mortems; I'll read and reply.

Save this for your next architectural review. Full deep-dive and extended code samples: https://lnkd.in/gYBH2Apk

#Java #ProjectLoom #SpringBoot #Concurrency #Architecture
Hook data: in benchmarked Spring Boot services, blocking I/O patterns typically trigger thread-pool saturation and long tail latency. In one real test, switching a high-throughput endpoint to Java 21 virtual threads reduced 95th-percentile latency by ~70% and dropped CPU overhead during peaks.

The problem: traditional platform threads are expensive — they limit concurrency and force large thread pools, leading to context-switch overhead, OOM risk, and brittle thread-local/transaction semantics. For services that still perform blocking I/O (databases, legacy libs), the simple "one platform thread per request" model becomes the bottleneck.

The fix: adopt virtual threads (Java 21) and structured concurrency for request-scoped work. Virtual threads are cheap to create, let you keep straightforward blocking code, and eliminate the need to rewrite everything as async callbacks. In Spring Boot: use Executors.newVirtualThreadPerTaskExecutor() for custom executors, or integrate virtual threads at the server level (Tomcat/Jetty/Undertow adapters, or Spring's TaskExecutor). Pair this with short-lived structured-concurrency scopes so all child tasks finish within the request lifecycle.

Key practical rules
- Keep blocking boundaries explicit (DB/network calls). Use dedicated virtual-thread executors for blocking adapters.
- Watch thread-locals and transaction propagation — prefer passing context explicitly, or use scoped context managers.
- Test third-party libraries for compatibility with virtual threads (NIO vs blocking I/O).
- Measure: track p95/p99 latency, GC pause time, and CPU utilization during load tests.

Mini example:

ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor();
try (exec) {
    Future<String> f = exec.submit(() -> blockingCall(params));
    return f.get(); // handle or declare InterruptedException/ExecutionException
}

Where to get the full recipe: I condensed the technical highlights here.
For a complete, topic-by-topic guide covering Core Java (deep dives, patterns, PrepInsta logical problems), HTML, CSS, JavaScript, React, Spring Boot, RESTful API design, JDBC, and SQL — each with 10–15 line explanations and example problems + solutions — see the full appendices and runnable examples in my portfolio: https://lnkd.in/guDYgeuR

Deep-signal trigger: if you've migrated a production Spring Boot service to virtual threads, what measurable trade-offs did you see (metrics, GC behavior, library incompatibilities)? Please share a short post-mortem with numbers and the exact mitigation you applied — that kind of detail helps everyone.

Call to action: save this post for your next architecture review, or check the full guide and runnable examples at the link above. 🚀

#Java21 #SpringBoot #Concurrency #Architecture #FullStack
Heap Memory and Stack Memory: What's the Difference?

Heap memory:
-> The heap is a common storage area shared by all threads and all classes within a program.
-> It stores every object the program creates, including Java primitives defined as instance variables (they live inside their object).
-> The heap is an area of JVM-managed memory.
-> The heap is cleaned regularly by the GC.

Stack memory:
-> The purpose of stack memory is to retain context for each active method within each thread.
-> Local variables declared as primitives are stored on the stack.
-> The stack stores references (pointers) to local object variables; the objects themselves live on the heap.
-> The stack is a structure within the threads area, which is in native memory.
-> A stack frame is popped from the stack when a method completes, freeing the space it occupied.

Each has its own purpose and its own method of organization. A good understanding of the #stack and the #heap helps you plan memory usage more efficiently, and is useful for #Troubleshooting and #PerformanceTuning.

Read more in this blog: https://lnkd.in/g9kVQfcK
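The split above is easy to see in code. A minimal sketch (names are mine for illustration): the reference variable and the primitive local live in each method's stack frame, while the array object they point to lives on the shared heap, so mutations through either frame are visible everywhere.

```java
// Stack vs heap: frames are per-call, the object is shared.
public class HeapStackDemo {
    static void increment(int[] counter) {
        int local = 1;        // primitive local: lives in this stack frame only
        counter[0] += local;  // mutates the heap object the reference points to
    }

    public static void main(String[] args) {
        int[] counter = {0};  // the array object is allocated on the heap
        increment(counter);   // each call gets a fresh frame (own 'local')...
        increment(counter);   // ...but both frames reference the same heap array
        System.out.println(counter[0]); // 2
    }
}
```

When each `increment` frame is popped, its `local` disappears, but the array survives on the heap until the GC finds it unreachable.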
𝗧𝗢𝗣 𝟱𝟬 𝗖𝗢𝗥𝗘 𝗝𝗔𝗩𝗔 𝗜𝗡𝗧𝗘𝗥𝗩𝗜𝗘𝗪 𝗤𝗨𝗘𝗦𝗧𝗜𝗢𝗡𝗦 (𝗠𝘂𝘀𝘁-𝗽𝗿𝗲𝗽𝗮𝗿𝗲 𝗳𝗼𝗿 𝗙𝗿𝗲𝘀𝗵𝗲𝗿𝘀 & 𝗘𝘅𝗽𝗲𝗿𝗶𝗲𝗻𝗰𝗲𝗱 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿𝘀)

1️⃣ 𝗢𝗢𝗣𝘀 & 𝗖𝗼𝗿𝗲 𝗝𝗮𝘃𝗮 𝗙𝘂𝗻𝗱𝗮𝗺𝗲𝗻𝘁𝗮𝗹𝘀 (1–10)
1. Explain OOPs principles with real project examples
2. Difference between abstraction and encapsulation
3. Method overloading vs method overriding
4. Can we override a static method? Why/why not
5. Abstract class vs interface after Java 8
6. Why multiple inheritance is not supported in Java classes
7. Role of the Object class in Java
8. Why equals() and hashCode() must be overridden together
9. Use of functional interfaces
10. How the diamond problem is resolved at the interface level

2️⃣ 𝗝𝗩𝗠, 𝗠𝗲𝗺𝗼𝗿𝘆 & 𝗞𝗲𝘆𝘄𝗼𝗿𝗱𝘀 (11–15)
11. Difference between JDK, JRE, and JVM
12. What happens internally when an object is created
13. The static keyword in terms of memory allocation
14. Why static is not allowed on local variables
15. High-level class loading process

3️⃣ 𝗦𝘁𝗿𝗶𝗻𝗴𝘀 & 𝗘𝘅𝗰𝗲𝗽𝘁𝗶𝗼𝗻 𝗛𝗮𝗻𝗱𝗹𝗶𝗻𝗴 (16–20)
16. Why String is immutable
17. StringBuilder vs StringBuffer
18. Checked vs unchecked exceptions
19. Try-with-resources — why it is preferred
20. Designing custom exceptions

4️⃣ 𝗖𝗼𝗹𝗹𝗲𝗰𝘁𝗶𝗼𝗻𝘀 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸 & 𝗦𝗼𝗿𝘁𝗶𝗻𝗴 (21–30)
21. List vs Set vs Map
22. Internal working of HashMap
23. HashMap vs ConcurrentHashMap
24. Why HashMap allows one null key
25. Internal working of TreeSet
26. Fail-fast vs fail-safe iterators
27. ArrayList vs LinkedList performance
28. Comparable vs Comparator
29. How TreeSet maintains sorting
30. What happens if hashCode() always returns the same value

5️⃣ 𝗦𝗲𝗿𝗶𝗮𝗹𝗶𝘇𝗮𝘁𝗶𝗼𝗻 & 𝗞𝗲𝘆𝘄𝗼𝗿𝗱𝘀 (31–35)
31. What is Java serialization
32. Use of the transient keyword
33. Problems with Java serialization
34. Serializable vs Externalizable
35. When serialization should be avoided

6️⃣ 𝗠𝘂𝗹𝘁𝗶𝘁𝗵𝗿𝗲𝗮𝗱𝗶𝗻𝗴 & 𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆 (36–45)
36. start() vs run()
37. Thread lifecycle states
38. Synchronization — why it is needed
39. Synchronized method vs block
40. Deadlock and its prevention
41. Guarantees of volatile
42. Volatile vs synchronized
43. Visibility vs atomicity
44. How ConcurrentHashMap is thread-safe
45. Why volatile is not a replacement for synchronization

7️⃣ 𝗗𝗲𝘀𝗶𝗴𝗻 𝗣𝗮𝘁𝘁𝗲𝗿𝗻𝘀 (46–50)
46. Singleton pattern and its problems
47. Thread-safe Singleton approaches
48. Why the enum Singleton is preferred
49. Factory vs Abstract Factory
50. Real-world usage of Abstract Factory

💡 𝗧𝗵𝗲𝘀𝗲 𝗮𝗿𝗲 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗶𝗻𝘁𝗲𝗿𝘃𝗶𝗲𝘄 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 — 𝗧𝗵𝗲𝘆 𝗱𝗲𝗰𝗶𝗱𝗲 𝘆𝗼𝘂𝗿 𝗰𝗼𝗻𝗳𝗶𝗱𝗲𝗻𝗰𝗲, 𝗰𝗹𝗮𝗿𝗶𝘁𝘆, 𝗮𝗻𝗱 𝘀𝗲𝗹𝗲𝗰𝘁𝗶𝗼𝗻.

👉 Comment "JAVA" and follow Narendra Sahoo 𝗳𝗼𝗿 𝗱𝗲𝘁𝗮𝗶𝗹𝗲𝗱 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝗮𝗻𝘀𝘄𝗲𝗿𝘀 𝘁𝗼 𝗮𝗹𝗹 𝘁𝗵𝗲𝘀𝗲 𝗾𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀.
🧵 𝗝𝗮𝘃𝗮 𝗖𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆 - 𝗣𝗮𝗿𝘁 𝟮: 𝗪𝗵𝗮𝘁 𝗟𝗶𝘃𝗲𝘀 𝗪𝗵𝗲𝗿𝗲 𝗜𝗻𝘀𝗶𝗱𝗲 𝘁𝗵𝗲 𝗝𝗩𝗠?

In Part 1, we saw that running a Java program creates a process with its own memory layout: code, data, heap, and stack. The JVM runs inside that process. It requests memory from the OS, organizes it into its own runtime areas, and the Java code executes entirely within that structure. Everything below lives inside that JVM-managed memory.

1️⃣ 𝗝𝗮𝘃𝗮 𝗛𝗲𝗮𝗽
The JVM allocates the Java heap inside its process memory. This is where runtime data lives:
• All objects
• Instance fields
• Static variables
There is only one heap per JVM. All threads share it. If two threads modify the same object, they are modifying the same memory location.

2️⃣ 𝗠𝗲𝘁𝗮𝘀𝗽𝗮𝗰𝗲
Metaspace stores class metadata, method bytecode, and the runtime constant pool. It defines the structure of your program; it does not store changing variable values.
𝗠𝗲𝘁𝗮𝘀𝗽𝗮𝗰𝗲 𝗶𝘀 𝘀𝗵𝗮𝗿𝗲𝗱, 𝗯𝘂𝘁 𝗶𝘁 𝗵𝗼𝗹𝗱𝘀 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲, 𝗻𝗼𝘁 𝗺𝘂𝘁𝗮𝗯𝗹𝗲 𝘀𝘁𝗮𝘁𝗲.

3️⃣ 𝗝𝗩𝗠 𝗦𝘁𝗮𝗰𝗸
Each thread gets its own JVM stack. The stack stores:
• Method call frames
• Local variables
• Method parameters
• References to heap objects
This memory is private to the thread. If Thread A declares a local variable, Thread B cannot access it. But remember: a reference to an object lives on the stack, while the object it points to lives on the heap. So the stack can be private while still pointing to shared memory. That distinction is critical.

4️⃣ 𝗣𝗖 𝗥𝗲𝗴𝗶𝘀𝘁𝗲𝗿
Each thread has its own program counter register. It keeps track of the current bytecode instruction being executed.

5️⃣ 𝗡𝗮𝘁𝗶𝘃𝗲 𝗦𝘁𝗮𝗰𝗸
Each thread also has a native stack. When your Java code calls a native method (for example, something written in C/C++ through JNI), execution temporarily leaves the JVM and runs native code. Like the JVM stack, it is private to the thread.

So, when a thread works with local variables, it is operating on its own private memory. But when it reads or modifies an object or a static variable, it is operating on shared heap memory.
Two threads can execute completely different stack frames, yet still read and update the same heap object at the same time. And that is where unexpected behavior begins.

This is 𝗣𝗮𝗿𝘁 𝟮 of the Java Concurrency series. Follow along, and feel free to refine or add anything I missed.
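That shared-heap point can be made concrete. In this minimal sketch (class and field names are mine), two threads run their own private stack frames but increment the same heap object; an AtomicInteger is used so the concurrent read-modify-write is safe — with a plain `int` field and `++`, updates could be lost.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two private stacks, one shared heap object.
public class SharedHeapDemo {
    // Lives on the heap; both threads reference the same instance.
    static final AtomicInteger shared = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {   // 'i' is per-thread stack state
                shared.incrementAndGet();          // atomic update of shared heap state
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(shared.get()); // 200000 — no lost updates
    }
}
```

Replace the AtomicInteger with a bare `int` and the final count will usually come out below 200000 — that is exactly the "unexpected behavior" the post describes.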
Java 27 may include "the last mile of the 7NF type system" — after generics, the fork/join framework, lambda expressions, pattern matching, virtual threads, scoped values, and structured concurrency.

~ The 7NF Philosophy: Transcending Trade-offs through Data Modeling
https://lnkd.in/gzxXUTrt

The 7NF type system achieves a breakthrough by moving beyond traditional design trade-offs (such as "type safety vs. type explosion") through the following pillars:

1. SoA (Structure of Arrays) × append-only: the extreme normalization of 7NF (one attribute per table) physically enforces an SoA layout. Combined with an append-only (immutable) approach, it maximizes memory-bandwidth efficiency and SIMD affinity while eliminating concurrency complexity.

2. Never-code-gen × thin generic library: "Complexity starts from code-gen" — generating unique code for every data variation creates a maintenance nightmare and bloats the binary. "Code-gen destroys the instruction cache" — large amounts of generated code overwhelm the CPU's I-cache. 7NF uses a thin generic library that operates on fixed ordinal positions and pointer transfers, keeping the instruction set minimal and the I-cache hit rate maximal.

3. Causal posets (partially ordered sets): by modeling state transitions as causal posets, the system replaces complex transaction logic with mathematical order. Data structures themselves carry their consistency (causality), enabling true distributed systems without physical locks.

Java 27 and Project Valhalla: the physical realization
JDK 26 has entered the RC2 phase without JEP 401, making JDK 27 the likely target for the official preview of value classes. This evolution in Java aligns with the 7NF vision:

* Eliminating indirection: 7NF manages relations via object references, but for attributes it seeks to eliminate the overhead of pointers. Value classes allow these attributes to be "flattened" directly into arrays or stack frames.

* Hardware-level SoA: previously, achieving SoA in Java required either sacrificing type safety (using primitive arrays) or resorting to code generation. Value classes enable the JVM to handle flattening natively, supporting the "never code gen" principle while maintaining hardware efficiency.

* Unified execution path: by letting the runtime handle data layout, the system can stay "thin." A single generic path can process different data types efficiently, protecting the I-cache from the "instruction-cache destruction" caused by generated wrapper classes.

Summary: the 7NF type system treats software not as a collection of "instructions to be written" (code), but as a "geometric structure of data" (model). With Java 27's value classes, the infrastructure is finally catching up to provide the physical memory layout required to realize this native state transfer without the complexity of code generation.

~ The evolution of Java can be seen as a progressive assembly of the puzzle pieces required to realize the 7NF type system.
🧠 𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗠𝗲𝗺𝗼𝗿𝘆 𝗦𝗶𝗻𝗸 (𝗣𝗮𝗿𝘁 𝟭𝟯.𝟱.𝟯): 𝗝𝗮𝘃𝗮 𝗛𝘁𝘁𝗽𝗖𝗹𝗶𝗲𝗻𝘁 - 𝗠𝗼𝗱𝗲𝗿𝗻 𝗦𝘁𝗮𝗰𝗸 𝗡𝗼𝗯𝗼𝗱𝘆 𝗨𝘀𝗲𝘀

Java 11 gave us HttpClient: HTTP/2 built in, connection pooling included, an async API — no OkHttp, Apache, or Netty dependency. Five years later, everyone still adds external HTTP libraries. Why? Spring Boot doesn't default to it. Teams don't know it exists. Legacy code stays legacy. But it's there, and Java 21 virtual threads change the blocking model. Time to reconsider the built-in client?

---

🔧 𝗪𝗵𝗮𝘁 𝗝𝗮𝘃𝗮 𝗛𝘁𝘁𝗽𝗖𝗹𝗶𝗲𝗻𝘁 𝗣𝗿𝗼𝘃𝗶𝗱𝗲𝘀

HttpClient client = HttpClient.newBuilder()
    .version(Version.HTTP_2)   // HTTP/2 by default
    .connectTimeout(...)
    .build();

𝗕𝘂𝗶𝗹𝘁-𝗶𝗻:
- HTTP/2 by default (fallback to HTTP/1.1)
- Connection pooling (automatic)
- Async and sync APIs (CompletableFuture)
- No external dependencies

---

🔧 𝗪𝗵𝘆 𝗡𝗼𝗯𝗼𝗱𝘆 𝗨𝘀𝗲𝘀 𝗜𝘁

𝗦𝗽𝗿𝗶𝗻𝗴 𝗕𝗼𝗼𝘁 𝗱𝗲𝗳𝗮𝘂𝗹𝘁𝘀: RestTemplate (13.3), WebClient (13.4) — not Java HttpClient.
𝗟𝗲𝗴𝗮𝗰𝘆 𝗰𝗼𝗱𝗲: stays legacy.
𝗨𝗻𝗳𝗮𝗺𝗶𝗹𝗶𝗮𝗿𝗶𝘁𝘆: teams don't know the client exists.
𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺: fewer Stack Overflow answers, less community content.

---

🔧 𝗝𝗮𝘃𝗮 𝟮𝟭 𝗩𝗶𝗿𝘁𝘂𝗮𝗹 𝗧𝗵𝗿𝗲𝗮𝗱𝘀 𝗖𝗵𝗮𝗻𝗴𝗲 𝗧𝗵𝗲 𝗚𝗮𝗺𝗲

HttpClient client = HttpClient.newHttpClient();
// Blocking call on a virtual thread = no problem
String response = client.send(request, BodyHandlers.ofString()).body();

𝗕𝗲𝗳𝗼𝗿𝗲 𝘃𝗶𝗿𝘁𝘂𝗮𝗹 𝘁𝗵𝗿𝗲𝗮𝗱𝘀: blocking HTTP = one platform thread per request. Doesn't scale (13.4).
𝗪𝗶𝘁𝗵 𝘃𝗶𝗿𝘁𝘂𝗮𝗹 𝘁𝗵𝗿𝗲𝗮𝗱𝘀: blocking code performs like async. Simple blocking HttpClient calls scale to thousands of concurrent requests.
𝗧𝗿𝗮𝗱𝗲𝗼𝗳𝗳: WebClient's reactive complexity vs HttpClient's simplicity + virtual threads. Both scale. Different models.

---

🔧 𝗠𝗲𝗺𝗼𝗿𝘆 & 𝗖𝗼𝗻𝗳𝗶𝗴𝘂𝗿𝗮𝘁𝗶𝗼𝗻

𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻 𝗽𝗼𝗼𝗹𝗶𝗻𝗴: built-in, automatic.
𝗛𝗧𝗧𝗣/𝟮 𝘀𝘁𝗮𝘁𝗲: HPACK overhead, same as any HTTP/2 client (13.5.2).
𝗗𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝗰𝗶𝗲𝘀: zero external.
𝗠𝗲𝗺𝗼𝗿𝘆 𝗳𝗼𝗼𝘁𝗽𝗿𝗶𝗻𝘁: fewer jars.
---

⚖️ 𝗧𝗿𝗮𝗱𝗲𝗼𝗳𝗳𝘀

𝗝𝗮𝘃𝗮 𝗛𝘁𝘁𝗽𝗖𝗹𝗶𝗲𝗻𝘁:
✓ HTTP/2 built-in, no dependencies, virtual-thread ready
✗ Less Spring integration, smaller ecosystem, less familiar

𝗢𝗸𝗛𝘁𝘁𝗽/𝗔𝗽𝗮𝗰𝗵𝗲:
✓ Mature, large ecosystem, Spring integrated
✗ External dependencies, more complex configuration

𝗪𝗲𝗯𝗖𝗹𝗶𝗲𝗻𝘁:
✓ Reactive, Spring native, high concurrency
✗ Reactor complexity, learning curve (13.4)

---

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲
Built-in HTTP is finally viable. Reconsider dependency bloat.

#Java #HttpClient #Java11 #Java21 #VirtualThreads #HTTP2 #Performance #Memory #NoDependencies #JavaPerformance #BackendDevelopment #ProjectLoom #ModernJava #CloudComputing #EnterpriseJava #DevOps #Microservices #MemoryOptimization #SoftwareEngineering #CloudNative #ProductionReady #JavaDevelopment #Async #NonBlocking #HighScale
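The fragments above can be pulled into one self-contained sketch. This uses only the `java.net.http` API from Java 11+; the class, method, and URL are my own illustration, and the actual `send(...)` call is left commented out since it performs network I/O.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

// Built-in HTTP client (Java 11+): building the client and request does no I/O.
public class HttpClientDemo {
    static HttpRequest buildRequest(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofSeconds(5)) // per-request timeout
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)     // negotiates, falls back to HTTP/1.1
                .connectTimeout(Duration.ofSeconds(3))
                .build();

        HttpRequest request = buildRequest("https://example.com/");
        System.out.println(request.uri().getHost());

        // The blocking call — fine on a virtual thread (Java 21):
        // String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

Connection pooling and HTTP/2 negotiation are handled internally by the client instance, which is why the post recommends reusing one `HttpClient` rather than creating one per request.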
🧠 𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗠𝗲𝗺𝗼𝗿𝘆 𝗦𝗶𝗻𝗸 (𝗣𝗮𝗿𝘁 𝟭𝟯.𝟯): 𝗥𝗲𝘀𝘁𝗧𝗲𝗺𝗽𝗹𝗮𝘁𝗲 𝗘𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺 - 𝗧𝗵𝗲 𝗢𝗽𝘁𝗶𝗼𝗻𝘀 𝗡𝗼𝗯𝗼𝗱𝘆 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝘀

"How do I use RestTemplate?" Wrong question. Which request factory? Which message converters? Which interceptors? RestTemplate has 5 request-factory options, and the default (SimpleClientHttpRequestFactory) does no pooling. HttpComponents? OkHttp? Netty? Each has a memory cost. Do you know what your RestTemplate uses — or do you copy-paste code that works and scratch your head later?

---

🔧 𝗥𝗲𝗾𝘂𝗲𝘀𝘁 𝗙𝗮𝗰𝘁𝗼𝗿𝘆 - 𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗖𝗵𝗼𝗶𝗰𝗲

new RestTemplate(); // uses SimpleClientHttpRequestFactory

𝟱 𝗼𝗽𝘁𝗶𝗼𝗻𝘀 𝗲𝘅𝗶𝘀𝘁:
𝗦𝗶𝗺𝗽𝗹𝗲𝗖𝗹𝗶𝗲𝗻𝘁 (default): no pooling
𝗛𝘁𝘁𝗽𝗖𝗼𝗺𝗽𝗼𝗻𝗲𝗻𝘁𝘀: Apache HttpClient, pooling
𝗢𝗸𝗛𝘁𝘁𝗽: OkHttp wrapper (13.2 series)
𝗡𝗲𝘁𝘁𝘆: async
𝗝𝗱𝗸𝗖𝗹𝗶𝗲𝗻𝘁: Java 11+ wrapper

Default = no pooling.

---

🔄 𝗧𝗵𝗲 𝗟𝗮𝘆𝗲𝗿𝘀 𝗡𝗼𝗯𝗼𝗱𝘆 𝗧𝗮𝗹𝗸𝘀 𝗔𝗯𝗼𝘂𝘁

𝗠𝗲𝘀𝘀𝗮𝗴𝗲 𝗰𝗼𝗻𝘃𝗲𝗿𝘁𝗲𝗿𝘀: JSON → object. Jackson parses and allocates — on every response.
𝗜𝗻𝘁𝗲𝗿𝗰𝗲𝗽𝘁𝗼𝗿𝘀: logging, tracing, auth. Each sees every request; the chain multiplies overhead. 13.3.4 covers this.

Two layers sit between your code and HTTP. Both cost memory. Both are configurable. Most use the defaults.

---

🗺️ 𝗧𝗵𝗲 𝗙𝗹𝗼𝘄

Request → Interceptors → Request Factory → HTTP → Response → Converters → Your Object

Every layer has a cost. The default hides the choices.

---

❓ 𝗧𝗵𝗲 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻

Do you know what your RestTemplate uses?
- Request factory?
- Pooling enabled?
- Which converters?
- Interceptor count?

Or copy-paste from Stack Overflow? It works until scale exposes the cost. Then you scratch your head.

---

⚖️ 𝗧𝗿𝗮𝗱𝗲𝗼𝗳𝗳𝘀

𝗗𝗲𝗳𝗮𝘂𝗹𝘁: ✓ works immediately ✗ no pooling, hidden waste
𝗖𝘂𝘀𝘁𝗼𝗺: ✓ optimized ✗ requires understanding
𝗠𝗶𝗴𝗿𝗮𝘁𝗶𝗼𝗻: ✓ modern is better ✗ effort

---

𝗧𝗵𝗲 𝗕𝗼𝘁𝘁𝗼𝗺 𝗟𝗶𝗻𝗲
• Don't use the RestTemplate default blindly.
• Multiple options exist for a reason: tradeoffs.
• Think: what suits you now? What will suit you at scale?
• Abstract your choice and make switching easy. Tech debt accumulates when defaults become concrete dependencies.
• Start trading off now — it is easier to switch later.
#Java #SpringBoot #RestTemplate #HTTP #Performance #Memory #Microservices #JavaPerformance #BackendDevelopment #SpringFramework #CloudComputing #EnterpriseJava #DevOps #APIClient #HttpClient #ProductionReady #MemoryOptimization #SoftwareEngineering #TechDebt #BestPractices #CodeQuality #SystemDesign #CloudNative #SpringCloud #JavaDevelopment
🚀 Day 6 — Restarting My Java Journey with Consistency

Today wasn't just about operators. It was about understanding how Java actually thinks — in bits.

🔹 Arithmetic operators: + - * / % ++ -- += -= *= /= %=
🔹 Relational operators: == != < > <= >=
🔹 Bitwise operators — where real understanding begins: & | ^ ~ << >> >>> &= |= ^= <<= >>= >>>=

These don't work on decimal numbers. They work directly on the binary representation. And that changes everything.

(1) Left shift (<<) — more than just multiply by 2

int i = 1;
System.out.println(i << 8); // 256

Why? Because shifting left by 8 moves the bits 8 places:
00000001 → 00000001 00000000 → 256

But here's the interview-level insight: for int (32 bits), Java actually uses shiftValue % 32. So:
i << 32 behaves like i << 0
i << 33 behaves like i << 1
This small detail can change answers completely in technical rounds.

(2) Type promotion — the hidden rule

Shift operators work on int and long. So if you write:

byte b = 1;
b = (byte)(b << 7); // -128
int x = b << 1;     // -256 — how?

Internally:
1️⃣ byte is promoted to int
2️⃣ the shift happens in 32 bits
3️⃣ the result is cast back to byte

Now: 1 << 7 = 128, but the byte range is -128 to 127. The binary of 128 is 10000000, which in signed 8-bit representation equals -128. So the result becomes -128. That's not magic — that's binary overflow.

(3) Right shift vs unsigned right shift

>> preserves the sign; >>> fills with zeros. One caution, following the same promotion rule: a byte is promoted to int before the shift, so for byte b = -128:

b >> 1            // -64 (sign preserved)
b >>> 1           // 2147483584, NOT 64 — -128 is first widened to a 32-bit int
(b & 0xFF) >>> 1  // 64 — mask to 8 bits first to get the unsigned-byte result

This matters especially in: hashing, bit masking, system-level optimizations, performance-critical code.

🔹 Logical operators & short-circuiting: && || !

Short-circuit rules:
A && B → if A is false, B is NOT evaluated
A || B → if A is true, B is NOT evaluated

This prevents unnecessary computation, NullPointerExceptions, and hidden bugs.

Important point: bitwise & and | also work on boolean expressions, for example (a < b) & (a < c). But bitwise & does NOT short-circuit — both sides always execute.
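The shift rules above are easy to verify in a few lines. A minimal sketch (the class name is mine) checking the shift-distance-mod-32 rule, the byte promotion overflow, and the >> vs >>> difference:

```java
// Verifying the shift and promotion rules from the notes above.
public class ShiftDemo {
    public static void main(String[] args) {
        // Shift distance is taken mod 32 for int operands.
        int i = 1;
        System.out.println(i << 32); // 1 (32 % 32 == 0, so no shift)
        System.out.println(i << 33); // 2 (33 % 32 == 1)

        // byte is promoted to int before shifting; casting back overflows.
        byte b = 1;
        b = (byte) (b << 7);
        System.out.println(b);       // -128 (10000000 in signed 8-bit)

        // >> keeps the sign bit; >>> fills with zeros (in 32-bit int width).
        System.out.println(-128 >> 1);         // -64
        System.out.println(-128 >>> 1);        // 2147483584 (large positive int)
        System.out.println((-128 & 0xFF) >>> 1); // 64 (mask to 8 bits first)
    }
}
```

Running it confirms that `>>>` on a promoted byte does not give the 8-bit result you might expect; the `& 0xFF` mask is the standard trick for treating a byte as unsigned.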
🔹 Assignment operator: =
🔹 Operator precedence: just use parentheses, bro 💪

Learning daily with Coder Army and Aditya Tandon Bhaiya and Rohit Negi Bhaiya

#Day6 #Java #Consistency #BackendDevelopment #LearningJourney #SoftwareEngineering #CoderArmy #AdityaTandon
Follow-up to urn:li:share:7427018662399696896 — practical, hard-learned patterns for migrating to Java 21 virtual threads and structured concurrency.

Hook: in my benchmarks across three services, replacing a fixed thread pool with virtual threads reduced 95th-percentile request latency by ~40% and cut thread-management complexity in half — but most teams still get production behaviour wrong because they treat virtual threads as a drop-in replacement.

Three-step, pragmatic migration

1) Isolate blocking boundaries. Virtual threads scale cheaply, but blocking system calls and legacy libraries still need containment. Identify blocking hotspots with simple latency heatmaps and wrap blocking calls in a dedicated, bounded pool.

Example 1 — bounded pool for blocking JDBC calls:

try (var carrier = Executors.newFixedThreadPool(20)) {
    var future = CompletableFuture.supplyAsync(() -> jdbcQuery(), carrier);
    var result = future.get(); // handle InterruptedException/ExecutionException
}

2) Adopt structured concurrency to reason about lifetimes. Replace ad-hoc CompletableFuture chains with StructuredTaskScope to ensure cancellation on failure and clear lifecycles.

Example 2 — structured concurrency (Java 21 preview API):

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    var u = scope.fork(() -> loadUser(id));
    var a = scope.fork(() -> loadAccount(id));
    scope.join();
    scope.throwIfFailed();
    return new Profile(u.resultNow(), a.resultNow());
}

3) Profile, backpressure, and graceful degradation. Use connection limits, timeouts, and circuit-breakers. Measure queueing delays (not just CPU) and add admission control before the web layer.
Example 3 — virtual-thread executor for request handling:

try (var exec = Executors.newVirtualThreadPerTaskExecutor()) {
    exec.submit(() -> handleRequest(req));
}

Hard-learned lesson: virtual threads expose latent blocking and resource contention — the migration wins come from redesigning blocking boundaries, adding explicit timeouts, and using structured concurrency to make failure semantics explicit.

Call to action: save this post for your next architecture review. Deep-dive and migration guides: https://lnkd.in/gYBH2Apk

Deep-signal question: when you migrated a production service to virtual threads, what single unexpected failure did you encounter, how did you detect it, and what change fixed it for good? Describe the metric you used to prove the fix.