🕵️♂️ The "Invisibility Cloak": Why Spring and private Don't Mix

You've added @Transactional. You've added @Async. You've even added @Cacheable. You run the code, and... nothing happens. No transaction starts, the call isn't asynchronous, and the cache is ignored. The culprit? A single keyword: private. In the world of Spring, the private modifier is essentially an invisibility cloak for the Application Context. Here is the technical "why" behind this behavior and the edge cases you need to know.

⚙️ How the Magic Fails: The Proxy Problem
Spring applies cross-cutting concerns (AOP) using proxies. When you inject a bean, you're usually not getting the real class; you're getting a wrapper (a proxy).
- JDK Dynamic Proxies: these work by implementing your bean's interfaces. Since interface methods are public, the proxy literally cannot "see" or wrap anything else.
- CGLIB Proxies: these create a subclass of your bean at runtime. In Java, a subclass cannot override or even access a private method of its parent.
The result: if the proxy can't override the method, it can't add the "magic" (the transaction logic, the interceptor, etc.) around it. The call goes straight to your original method, bypassing Spring entirely.

⚠️ The Tricky Edge Cases
- The "silent" failure: Spring won't throw an error if you put @Transactional on a private method. It will simply ignore it. This is dangerous because your data integrity is at risk without you knowing.
- The self-invocation trap: even if your method is public, calling it from another method inside the same class will fail. Why? Because the call uses this, which refers to the real object, not the proxy.
- Final & static: just like private, final methods cannot be overridden by CGLIB, and static methods belong to the class, not the instance. Both are "dead zones" for Spring AOP.
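The self-invocation trap is easy to demonstrate without Spring at all. Below is a minimal, hand-rolled stand-in for a CGLIB-style proxy: a separate wrapper object that delegates to the real target and adds fake "transaction" advice around each public method. All class and method names here are invented for illustration.

```java
class OrderService {
    static final StringBuilder LOG = new StringBuilder();

    public void placeOrder() {
        LOG.append("placeOrder;");
        audit(); // self-invocation: this.audit() runs on the raw target, not the proxy
    }

    public void audit() {
        LOG.append("audit;");
    }
}

// Stand-in for a CGLIB proxy: Spring's proxy is a separate object that
// delegates to the real bean, wrapping each public method with advice.
class OrderServiceProxy extends OrderService {
    private final OrderService target;

    OrderServiceProxy(OrderService target) { this.target = target; }

    @Override public void placeOrder() { LOG.append("TX;"); target.placeOrder(); }
    @Override public void audit()      { LOG.append("TX;"); target.audit(); }
}

public class SelfInvocationDemo {
    public static void main(String[] args) {
        OrderService proxy = new OrderServiceProxy(new OrderService());

        proxy.audit();                              // external call: advice applies
        System.out.println(OrderService.LOG);       // TX;audit;

        OrderService.LOG.setLength(0);
        proxy.placeOrder();                         // inner audit() bypasses the proxy
        System.out.println(OrderService.LOG);       // TX;placeOrder;audit;
    }
}
```

Only the externally invoked method gets the TX prefix; the nested audit() call never sees it, which is exactly why @Transactional on a self-invoked method silently does nothing.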
✅ Best Practices for the Everyday Grind
- Visibility matters: if a method needs Spring "magic," it must be public (protected or package-private can work with certain CGLIB configurations, but public is the gold standard).
- Refactor for AOP: if you find yourself needing a transaction on a private method, it's usually a sign that the logic belongs in a separate service.
- Self-injection (the last resort): if you absolutely must call a method in the same class and keep the proxy logic, you can lazily inject the bean into itself, but treat this as a code smell!

Have you ever spent hours debugging an annotation only to realize it was on a private method? Share your "proxy horror stories" below! 👇 #Java #SpringBoot #SoftwareEngineering #BackendDevelopment #CodingTips #CleanCode
More Relevant Posts
Day 26: 🧑💻 Retry Pattern with Exponential Backoff + Jitter: Handle Transient Failures Safely (Resilience4j + Spring Boot)

We had a payment gateway that occasionally returned 503s during traffic spikes. The dev who added retries did the obvious thing:

```java
for (int i = 0; i < 3; i++) {
    try {
        return paymentGateway.process(req);
    } catch (Exception e) {
        Thread.sleep(1000); // retry after 1s
    }
}
```

Fixed 1-second retry. Clean code. Logical.

During the next traffic spike the gateway went down for 12 seconds. 8,000 requests failed simultaneously. All 8,000 started retrying at exactly t=1s. The gateway tried to recover; 8,000 requests hit it simultaneously again. It went down again. Retry at t=2s. Same result. t=3s. Same result. The gateway never recovered during the spike. Every retry was a new attack. This is the Thundering Herd Problem.

Three retry strategies, and only one is correct:

❌ Fixed interval: t=1s, t=2s, t=3s (all clients retry together). 100 clients fail → all retry at t=1s → server slammed. Server starts recovering → all retry at t=2s → slammed again. Never recovers.

⚠️ Exponential backoff: t=1s, t=2s, t=4s, t=8s (doubles each time). Better: fewer calls, and the server gets breathing room. But all 100 clients still retry at EXACTLY t=2s, t=4s, t=8s. Still synchronized. Still a thundering herd.

✅ Exponential backoff + jitter:
Client A: t=0.7s, t=2.3s, t=5.1s, t=9.8s
Client B: t=1.3s, t=1.8s, t=4.7s, t=11.2s
Client C: t=0.9s, t=2.7s, t=5.5s, t=8.3s
Random ±50% added to each wait. 100 clients retry at 100 different times. The server sees a smooth trickle, not a spike. The server recovers.
✅ In Resilience4j:

```java
RetryConfig.custom()
    .maxAttempts(4)
    .intervalFunction(
        IntervalFunction.ofExponentialRandomBackoff(
            500,   // base: 500ms
            2.0,   // multiplier: 500 → 1000 → 2000 → 4000ms
            0.5))  // jitter: ±50% randomisation ← the fix
    .retryExceptions(IOException.class, TimeoutException.class)
    .ignoreExceptions(
        InsufficientFundsException.class,  // DON'T retry
        CardDeclinedException.class)       // DON'T retry
    .build();
```

The most important line: ignoreExceptions. This is where most retry implementations go wrong.
✅ Retry 503 Service Unavailable → server momentarily overloaded
✅ Retry IOException → network blip
✅ Retry TimeoutException → slow network, not a bug
❌ Don't retry 400 Bad Request → fix the request; it won't change
❌ Don't retry 402 Payment Required → card declined; retrying won't help
❌ Don't retry 404 Not Found → the resource doesn't exist

Retrying a CardDeclinedException three times doesn't un-decline the card. It just wastes 3×500ms and looks suspicious to fraud detection. #SystemDesign #RetryPattern #ExponentialBackoff #Jitter #Resilience4j #SpringBoot #Java #Microservices #DSA #DesignPattern #Pattern #Spring #Retry
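The difference between the three strategies is just arithmetic. Here is a plain-Java sketch of the wait-time formula behind exponential backoff with random jitter (base × multiplier^attempt, then a uniform spread of ±jitterFactor); backoffMillis is an illustrative helper, not the Resilience4j API itself:

```java
import java.util.concurrent.ThreadLocalRandom;

public class BackoffDemo {
    // Illustrative helper: base * multiplier^attempt, then ±jitterFactor spread.
    static long backoffMillis(long baseMillis, double multiplier,
                              double jitterFactor, int attempt) {
        double exp = baseMillis * Math.pow(multiplier, attempt);
        // Pick uniformly in [exp*(1-f), exp*(1+f)] so clients desynchronize
        double low = exp * (1 - jitterFactor);
        double high = exp * (1 + jitterFactor);
        return (long) ThreadLocalRandom.current().nextDouble(low, high);
    }

    public static void main(String[] args) {
        // base 500ms, multiplier 2.0, jitter ±50%, like the config above
        for (int attempt = 0; attempt < 4; attempt++) {
            System.out.printf("attempt %d -> wait %d ms%n",
                    attempt, backoffMillis(500, 2.0, 0.5, attempt));
        }
    }
}
```

Run it twice and the schedules differ each time; that randomness is exactly what breaks up the herd.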
🚀 ArrayDeque — Simplifying Stack and Queue Logic ( https://lnkd.in/g-c6q8v6 )

➡️ ArrayDeque (Array Double-Ended Queue) is a resizable-array class in Java that lets you insert and remove elements at both ends, making it one of the most flexible data structures in the Collections Framework.

🔹 Revolving Door: just like a revolving door lets people enter and exit from either side, ArrayDeque lets you add or remove elements from both the front and the rear with equal ease.
🔹 Token Queue at a Bank: imagine a bank where the manager can add urgent customers at the front AND regular customers at the back; that's exactly how ArrayDeque manages its double-ended insertions.
🔹 A Stack of Trays in a Cafeteria: you always pick the top tray and place new ones on top. ArrayDeque replicates this Stack (LIFO) behavior perfectly using push() and pop().

Here are the key takeaways from the ArrayDeque session at TAP Academy by Sharath R sir:

🔹 No Indexing, No for Loop: unlike ArrayList, ArrayDeque has zero indexing support. You cannot use a traditional for loop or get(i); you must use for-each, an Iterator, or descendingIterator() instead.
🔹 Null is Strictly Forbidden: ArrayDeque throws a NullPointerException the moment you try to insert null, a critical difference from ArrayList and LinkedList that interviewers love to test.
🔹 Smarter Resizing: when the default capacity of 16 fills up, ArrayDeque doubles its capacity (n × 2), while ArrayList grows by roughly 1.5× (historically described as (n × 3/2) + 1). Two different growth formulas worth remembering cold.
🔹 Reverse Traversal via descendingIterator(): since ListIterator is unavailable (ArrayDeque implements Deque, not List), the only way to traverse backward is descendingIterator(), which starts at the last element and moves toward the front.
🔹 One Class, Three Roles: ArrayDeque can act as a Stack (push/pop), a Queue (offer/poll), or a full Deque (addFirst/addLast), making it the most versatile tool in Java Collections.
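The "one class, three roles" point is quickest to see in code. A runnable sketch using only the JDK:

```java
import java.util.ArrayDeque;
import java.util.Iterator;

public class ArrayDequeDemo {
    public static void main(String[] args) {
        ArrayDeque<String> deque = new ArrayDeque<>();

        // Role 1: Stack (LIFO) — push/pop both operate on the head
        deque.push("tray1");
        deque.push("tray2");
        System.out.println(deque.pop());   // tray2

        // Role 2: Queue (FIFO) — offer at the tail, poll from the head
        deque.clear();
        deque.offer("first");
        deque.offer("second");
        System.out.println(deque.poll());  // first

        // Role 3: Deque — insert at both ends, like the bank token queue
        deque.clear();
        deque.addLast("regular");
        deque.addFirst("urgent");

        // No get(i): traverse backward with descendingIterator()
        Iterator<String> back = deque.descendingIterator();
        while (back.hasNext()) {
            System.out.println(back.next()); // regular, then urgent
        }
    }
}
```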
Visit this interactive webpage to see the concept visualized: https://lnkd.in/g-c6q8v6 #Java #JavaDeveloper #Collections #ArrayDeque #DataStructures #TAPAcademy #CodingJourney #PlacementPrep #SoftwareEngineering #InterviewPrep
🏷️ The "Magic String": Why Go Struct Tags Blew My Node.js Mind 🤯

Coming from Node.js, converting an object to JSON is effortless. What you type is what you get. But today, while defining the User model for my Auth backend, I hit a wall with Go's strictness.

In Go, if you want a struct field to be visible outside its package (exported), it must start with a capital letter:

```go
type User struct {
    Email     string
    Firstname string
}
```

The problem: my frontend and my database don't want Firstname. They expect standard, lowercase JSON: {"firstname": "Dave"}.

The solution: struct tags.

```go
type User struct {
    Email     string `json:"email"`
    Firstname string `json:"firstname"`
}
```

🧐 Why were they invented? Struct tags are the bridge between Go's rigid internal capitalization rules and the messy, lowercase/snake_case external world (APIs, databases, XML). A tag tells Go's encoders: "I know my internal name is Firstname, but when you talk to the outside world, put on this firstname nametag."

🤫 The Secret Most Beginners Don't Know
Here is the wild part: the Go compiler completely ignores struct tags. They are literally just raw strings, read at runtime using reflection. Because the compiler ignores them, if you accidentally make a typo and write `jsn:"email"` instead of `json:"email"`, your app will compile perfectly. But when you run it, the JSON encoder will silently ignore the tag, and your API response will leak the capitalized Email key. It's another silent failure trap! 🪤

The upside? Because it's just a raw string, you can invent your own tags! Packages like validator use this to let you write custom rules like `validate:"required,min=4"`.

I used to think Go was just rigidly strict, but under the hood it has these crazy, powerful escape hatches. The architecture is getting deep. We move! 💪🏾

To my Senior Gophers: what is the most creative or complex use of custom struct tags you've seen in a production codebase?
Let’s gist in the comments 👇🏾 #Golang #BackendEngineering #SoftwareArchitecture #SystemDesign #TechBro #TechInNigeria #WeMove
🔹 Understanding autowire in Spring XML Configuration

In the Spring Framework, the autowire attribute automatically injects dependencies between beans, reducing manual configuration in XML.

📌 Why use autowire? It eliminates the need to explicitly define <property> or <constructor-arg> for every dependency.

💡 Types of autowiring in Spring XML:

1️⃣ byName: Spring matches the bean property name with the bean id.

```xml
<bean id="employee" class="com.example.Employee" autowire="byName"/>
<bean id="address" class="com.example.Address"/>
```

👉 Here, if Employee has a property named address, Spring injects the address bean.

2️⃣ byType: Spring injects the bean based on its type.

```xml
<bean id="employee" class="com.example.Employee" autowire="byType"/>
<bean id="address" class="com.example.Address"/>
```

👉 If exactly one bean of type Address exists, it gets injected automatically.

3️⃣ constructor: Spring injects dependencies via the constructor.

```xml
<bean id="employee" class="com.example.Employee" autowire="constructor"/>
```

4️⃣ no (default): no autowiring; dependencies must be defined manually.

⚠️ Important notes:
✔ Works only if a matching bean is available
✔ byType fails if multiple beans of the same type exist
✔ Not recommended for large projects (annotations like @Autowired are preferred)

🚀 Conclusion: autowire is useful for reducing XML configuration, but modern Spring applications mostly use annotation-based dependency injection. #SpringFramework #Java #BackendDevelopment #SpringBoot #CodingTips
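To see what byName autowiring conceptually does, here is a toy mini-container: scan the bean's setters, derive the property name, and look up a registered bean under that id. This is an invented sketch for illustration only (real Spring is far more involved); the class names mirror the XML example above.

```java
import java.lang.reflect.Method;
import java.util.Map;

class Address { String city = "Pune"; }

class Employee {
    Address address;
    public void setAddress(Address a) { this.address = a; }
}

public class ByNameAutowireDemo {
    // Toy version of autowire="byName": match setter property names
    // against bean ids in a registry and inject on type match.
    static void autowireByName(Object bean, Map<String, Object> beansById) {
        try {
            for (Method m : bean.getClass().getMethods()) {
                if (m.getName().startsWith("set")
                        && m.getName().length() > 3
                        && m.getParameterCount() == 1) {
                    // "setAddress" -> property name "address"
                    String prop = Character.toLowerCase(m.getName().charAt(3))
                            + m.getName().substring(4);
                    Object dep = beansById.get(prop);
                    if (dep != null && m.getParameterTypes()[0].isInstance(dep)) {
                        m.invoke(bean, dep);
                    }
                }
            }
        } catch (ReflectiveOperationException ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        Employee employee = new Employee();
        autowireByName(employee, Map.of("address", new Address()));
        System.out.println(employee.address.city); // Pune
    }
}
```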
Today, I built a small prototype using Protocol Buffers as the canonical contract behind an LLM structured-output flow, and it changed how I think about production systems.

The model still returned structured text, because LLMs generate text, not raw Protobuf bytes. So the runtime flow was still:

LLM → Structured JSON → Java Record → Protobuf Message

But the design flow changed to:

Protobuf Contract → Output Shape → Prompt → Mapping

This distinction is important. This was not about "Protobuf instead of JSON" as serialization. It was about moving the contract outside a single service. JSON schemas often end up embedded in DTOs, validators, controller code, or prompt templates. That works while one service owns the flow, but gets fragile when output crosses service boundaries, lands on a queue, evolves, or is consumed by multiple teams.

That is where Protobuf felt more natural. `oneof` models mutually exclusive outcomes directly, and backward compatibility enters the design much earlier. Tiny example:

```proto
message SupportOutcome {
  string customer_id = 1;
  oneof outcome {
    RefundRequest refund_request = 10;
    AddressChange address_change = 11;
    UnknownIntent unknown_intent = 12;
  }
}
```

```java
SupportOutcomeResult result = mapper.readValue(outputJson, SupportOutcomeResult.class);
var builder = SupportOutcome.newBuilder();
switch (result.type()) {
    case "refund_request" -> builder.setRefundRequest(
        RefundRequest.newBuilder()
            .setChargeId((String) result.payload().get("charge_id"))
            .build());
    default -> builder.setUnknownIntent(
        UnknownIntent.newBuilder()
            .setRawIntent(result.type())
            .build());
}
```

This immediately raised better engineering questions: which schema is the real contract, how does it evolve, and what is safe to propagate downstream? In my experience, those are questions that eventually pop up when you move systems to production. My takeaway is that structured outputs solve the serialization problem, but not the contract problem.
Protobuf does not make the model smarter, and it does not remove the need for validation. But it does force the system to be honest about what it is willing to accept, version, and propagate. That made this experiment feel much less like prompt engineering and much more like interface design for distributed systems. Next stop: I will build a demo showing a schema-first design with Protocol Buffers.
The "Thread-Shifting" Trap in Asynchronous Distributed Locking

If you are using Redisson for distributed locking in a reactive or asynchronous environment (like Vert.x, Project Reactor, or Spring WebFlux), you might have encountered this frustrating error:

java.lang.IllegalMonitorStateException: attempt to unlock lock, not locked by current thread by node id: [...] thread-id: [...]

🔍 The Root Cause: Thread Dissociation
In a traditional synchronous Spring Boot application, a request stays on a single thread. You lock on Thread A and unlock on Thread A. Redisson is happy. In Vert.x, we embrace non-blocking event loops and worker pools. Here is what happens:
1. Locking: your code acquires a lock on EventLoop-1. Redisson records Thread-1 as the owner.
2. Processing: you perform an asynchronous OCR or a WebClient call.
3. Unlocking: the .onComplete() callback is triggered, but Vert.x might schedule it on EventLoop-2 or a worker thread.
4. Failure: when you call lock.unlock(), Redisson checks the thread ID and says: "Wait, you aren't the thread that started this!"

💡 The Solution: Embracing "Force Unlock"
In a reactive chain, the ownership of a lock should be defined by the business transaction (trace ID), not the operating-system thread. Since we use the lock to prevent duplicate processing of the same file/request, we need a way to release the lock regardless of which thread finished the work. Don't use .unlock(); use .forceUnlockAsync().

Why forceUnlockAsync()?
- Thread agnostic: it removes the key from Redis without verifying the thread ID.
- Safety: in a properly structured if (lock == null) return; flow, only the "winner" who successfully acquired the lock will ever reach the onComplete stage. There is no risk of a "loser" thread accidentally releasing someone else's lock.
- Resilience: it handles cases where the lock might have already expired in Redis due to a long-running process, preventing further exceptions.

🛠️ Best Practice Implementation (Vert.x + Redisson)

```java
// 1. Acquire the lock (the entry guard)
RLock lock = redisson.getLock("lock:process:" + traceId);

// Try lock with 0 wait time: if someone else is processing, bail out immediately
if (!lock.tryLock(0, 10, TimeUnit.MINUTES)) {
    return Future.succeededFuture("ALREADY_PROCESSING");
}

// 2. The asynchronous journey
return downloadFile(url)
    .compose(this::processOCR)
    .compose(this::sendToKafka)
    // 3. The graceful exit
    .onComplete(ar -> {
        // Regardless of success or failure, clear the lock.
        // forceUnlockAsync bypasses the thread-ID check.
        lock.forceUnlockAsync();
    });
```

Final thought: when moving from imperative to reactive programming, your mental model of "thread safety" must shift to "transaction safety." Don't let thread-bound locks break your asynchronous flow! 🦑 #Java #Vertx #Redis #Redisson #DistributedSystems #BackendDevelopment #Microservices
When Elegance Meets Reality: Moving from Javers to Commons Lang3

In software architecture, we often preach the gospel of "using the right tool for the job." For object auditing and diffing in Java, Javers is undoubtedly that "right" tool. Its ability to abstract away reflection, handle deep nested comparisons, and manage audit snapshots is, by all definitions, the "senior" way to design a robust auditing system. However, as the saying goes: "Architecture is the art of trade-offs."

The "Perfect" Design vs. Dependency Hell
Recently, while implementing a data-tracking module, I hit a wall that every modern Java developer fears: the Gson version paradox. Javers (7.4.1) relies on Gson. Starting with Gson 2.10+, the library introduced strict protections that forbid overriding built-in adapters (like JsonElement). Unfortunately, Javers' internal initialization logic still attempts this override. When you run this in a modern Spring Boot 3 environment (which forces Gson 2.11+), you find yourself in a circular trap:
1. Javers needs a modern environment.
2. The modern environment enforces strict Gson rules.
3. Javers violates those rules during its internal setup.

The Senior Pivot: Pragmatism over Elegance
While I could have spent days shading a custom Javers build or forcing global dependency downgrades (which introduces security risks), a senior developer must know when to stop fighting the framework and start solving the business problem. The mission was simple: compare 8 specific fields between two objects (often of different types) and trigger logic only if the new value was non-null and changed. Enter Apache Commons Lang3's DiffBuilder.

Why I Chose the "Boring" Path
I decided to pivot to a manual, reflection-free implementation using Commons Lang3. Here's why this was the more "senior" decision in this specific context:
1. Zero dependency conflict: Lang3 is a "Swiss Army knife" with no transitive dependencies. It is immune to the Gson/Jackson version wars.
2. Compile-time safety: by avoiding reflection and using explicit chained calls (.append()), the code is now refactor-friendly. If a field name changes, the IDE catches it immediately, not at runtime.
3. Precision control: I could easily inject business-specific logic, such as using BigDecimal.compareTo() instead of .equals(), avoiding "false positive" changes caused by scale differences (e.g., 1.0 vs 1.00).
4. Different-type compatibility: it allowed me to seamlessly compare a persistence entity with a DTO without complex mapping overhead.

The Lesson Learned
Dependency management often introduces uncontrollable risks. While high-level frameworks promise "magic," they can become "black boxes" that derail a project through deep-seated version conflicts. Elegance is a goal, but reliability is the requirement. #Java #SoftwareEngineering #ProgrammingTips #RiskManagement #JavaDevelopment #MinimalismInCode
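To make the BigDecimal point concrete, here is a dependency-free sketch of that explicit comparison style. Commons Lang3's DiffBuilder wraps the same idea in a fluent .append() chain; the null-guard and compareTo logic below are the essence of it. The record types are invented for illustration.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

// Hypothetical entity/DTO pair: two different types being diffed.
record PriceEntity(String sku, BigDecimal price) {}
record PriceDto(String sku, BigDecimal price) {}

public class ManualDiffDemo {
    // Explicit, reflection-free diff: trigger only when the new value
    // is non-null AND actually changed.
    static List<String> changedFields(PriceEntity oldV, PriceDto newV) {
        List<String> changed = new ArrayList<>();
        if (newV.sku() != null && !Objects.equals(oldV.sku(), newV.sku())) {
            changed.add("sku");
        }
        // compareTo, not equals: 1.0 vs 1.00 must NOT count as a change
        if (newV.price() != null
                && (oldV.price() == null || oldV.price().compareTo(newV.price()) != 0)) {
            changed.add("price");
        }
        return changed;
    }

    public static void main(String[] args) {
        PriceEntity db = new PriceEntity("A-1", new BigDecimal("1.0"));
        PriceDto same = new PriceDto("A-1", new BigDecimal("1.00"));
        PriceDto diff = new PriceDto("A-1", new BigDecimal("2.00"));
        System.out.println(changedFields(db, same)); // []
        System.out.println(changedFields(db, diff)); // [price]
    }
}
```

Note how the scale-only difference (1.0 vs 1.00) is correctly ignored; with .equals() it would register as a false positive.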
When was the last time you paused before typing CompletableFuture in a Spring controller and asked yourself: why exactly is this needed here? I’m not talking about legacy projects where this pattern emerged back in the Java 8 era and persists through sheer inertia—that's understandable; history is history. I’m talking about the new code we’re writing in 2026, knowing what we know about virtual threads, already running Spring Boot 3.2+ with spring.threads.virtual.enabled=true, and realizing that the world has shifted. If we’re being honest, CompletableFuture in a controller wasn't born out of a desire for elegance—it was an engineering compromise made when OS threads were expensive, the Tomcat thread pool was a limited resource, and the only way to avoid blocking it during I/O was to retreat into callback-driven asynchrony. We did this even at the cost of unreadable thenApply → thenCompose → exceptionally chains and the manual plumbing of MDC and SecurityContext between threads. But what happens to that compromise when its original premise is no longer true? Ron Pressler, tech lead of Project Loom at Oracle, answered this directly: with virtual threads, calling .get() on a Future becomes a practically free operation because the virtual thread simply parks without holding onto an OS thread. Consequently, the motivation to write complex asynchronous chains instead of simple sequential code disappears. The Spring team went even further in their blog, asking an uncomfortable question: if the entire request-handling process now lives on a virtual thread, why do we even need the asynchronous Servlet API, which was designed specifically to free up server threads? That said, I’m not sure the answer is black and white—and that’s what interests me. There is still a scenario where CompletableFuture in a controller feels justified: the parallel aggregation of several independent calls. 
Using allOf(fetchUser(), fetchOrders(), fetchRecommendations()) gives you a declarative composition that is hard to express as cleanly in synchronous code even with virtual threads—especially while StructuredTaskScope remains in preview. On the flip side, if your project is already on Java 21+ with virtual threads enabled, wrapping a standard I/O call in CompletableFuture.supplyAsync() is ceremony for ceremony's sake. It adds complexity to stack traces and forces manual context management where things used to just work. So, where is the line between a conscious tool and a pattern we reproduce out of habit because "that's how it's done"? Tell me—in your production environment, is CompletableFuture in controllers an active choice or a historical given? And if it's a choice, what is the objective?
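The aggregation scenario can be sketched with canned stand-ins. fetchUser and fetchOrders here just return constants (in real code they would be I/O calls), but the composition shape is the point: allOf starts both in parallel, and on a virtual thread even the blocking join() afterwards only parks the virtual thread, not an OS thread.

```java
import java.util.concurrent.CompletableFuture;

public class AggregationDemo {
    // Illustrative stand-ins for independent remote calls
    static CompletableFuture<String> fetchUser() {
        return CompletableFuture.supplyAsync(() -> "user");
    }
    static CompletableFuture<String> fetchOrders() {
        return CompletableFuture.supplyAsync(() -> "orders");
    }

    public static void main(String[] args) {
        var user = fetchUser();     // both futures start immediately,
        var orders = fetchOrders(); // so the calls run in parallel

        // Declarative "wait for all", then cheap joins on completed futures
        CompletableFuture.allOf(user, orders).join();
        System.out.println(user.join() + "/" + orders.join()); // user/orders
    }
}
```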
🔥 Part 2 of the series is live: We just implemented @Autowired from scratch. Not just reading about it. Actually building it. Most Java developers use Spring every day — @Component, @Autowired, @Transactional — without ever understanding what's happening underneath. The answer to all of it is one API: Java Reflection. In this series, we use Reflection as the lens AND the tool. Every concept we cover, we immediately apply to build a real piece of the framework. Theory and implementation, side by side. By the end, you won't just know how Reflection works. You'll know how frameworks think. Second blog is live now 👇
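For flavor, here is the tiniest possible version of that idea: a home-made field annotation plus a few lines of reflection that populate it. This is an invented sketch, not the series' actual code; a real container resolves dependencies from a bean registry rather than instantiating the field's type directly.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Home-made stand-in for @Autowired
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface MyAutowired {}

class Repo {
    String find() { return "row"; }
}

class Service {
    @MyAutowired Repo repo;
}

public class MiniInjector {
    // Scan declared fields; inject an instance wherever @MyAutowired appears.
    static void inject(Object bean) {
        try {
            for (Field f : bean.getClass().getDeclaredFields()) {
                if (f.isAnnotationPresent(MyAutowired.class)) {
                    f.setAccessible(true);
                    // Simplification: new up the field's type via its
                    // no-arg constructor instead of a bean registry.
                    f.set(bean, f.getType().getDeclaredConstructor().newInstance());
                }
            }
        } catch (ReflectiveOperationException ex) {
            throw new RuntimeException(ex);
        }
    }

    public static void main(String[] args) {
        Service s = new Service();
        inject(s);
        System.out.println(s.repo.find()); // row
    }
}
```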
The Ghost in the Machine: Why Your Thread-Safe Code Can Be Orders of Magnitude Slower

You probably know that two threads can interfere with each other without ever accessing the same variable. We master locks, semaphores, and concurrency, but there is a hardware concept that most of us ignore on a daily basis: False Sharing.

The Problem: Cache Lines
Processors do not read memory byte by byte. They read in blocks called cache lines, typically 64 bytes on modern processors. If you have two distinct variables (say, two counters A and B) that reside in the same cache line, the hardware faces a problem:
- Core 1 updates variable A.
- Core 2 wants to update variable B.
- Even though they are different variables, the cache-coherence protocol (MESI) marks the entire line as invalid for Core 2, forcing a cache reload.
The result? The execution pipelines of both cores stall for hundreds of cycles, with no mutex and no lock, creating a bottleneck where there should be pure parallelism.

Why Does This Matter?
In high-performance systems (trading, search engines, large-scale event processing), false sharing is the silent killer of scalability. You add more CPU cores, but performance does not grow. Sometimes it regresses.

How to Fix It? Java vs Go
Both languages solve the problem in opposite ways, and that difference says a lot about the philosophy of each.

Java handles it for you. Since Java 8 there is the @Contended annotation (now in package jdk.internal.vm.annotation). It instructs the JVM to add padding around the field, ensuring it occupies an exclusive cache line. Important detail: for it to work outside JDK code, you must add the JVM flag -XX:-RestrictContended. Without it, the annotation has no effect on user classes.

Go makes it your responsibility. There is no magic annotation and the compiler will not save you. You need to understand the hardware and insert the padding yourself, either manually with a byte array or using cpu.CacheLinePad from golang.org/x/sys/cpu, which is more readable and avoids hardcoded numbers.

Side by side:
- Java: @Contended; the JVM manages it; requires -XX:-RestrictContended; not explicit in code; around 128 bytes of overhead per field.
- Go: cpu.CacheLinePad; you manage it; no extra config needed; explicit in code; around 64 bytes of overhead per field.

The Takeaway
Software is not just logic. It is understanding how that logic behaves when it meets the silicon. In Java, the platform abstracts the problem away. In Go, it sits right there in the code, a constant reminder that real concurrency requires thinking beyond the language.

Have you ever debugged a performance problem that made no sense in the code, but made perfect sense in the hardware? #FalseSharing #CacheLines #ConcurrentProgramming #Java #Golang #HighPerformance #BackendDevelopment #SoftwareEngineering #SystemsProgramming #Programming
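Since @Contended lives in jdk.internal.vm.annotation and needs a JVM flag, a portable way to experiment with the idea is manual padding: place seven unused longs (56 bytes) between the two hot fields so they are very likely to land on different 64-byte cache lines. Note the JVM does not guarantee field layout, so this is a best-effort illustration, not a guarantee.

```java
public class PaddedCounters {
    static class Padded {
        volatile long a;                  // hot field, written only by thread 1
        long p1, p2, p3, p4, p5, p6, p7;  // 56 bytes of padding between a and b
        volatile long b;                  // hot field, written only by thread 2
    }

    // Each thread increments its own counter n times; with padding the two
    // fields should not ping-pong the same cache line between cores.
    static Padded run(int n) {
        Padded c = new Padded();
        Thread t1 = new Thread(() -> { for (int i = 0; i < n; i++) c.a++; });
        Thread t2 = new Thread(() -> { for (int i = 0; i < n; i++) c.b++; });
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c;
    }

    public static void main(String[] args) {
        Padded c = run(1_000_000);
        System.out.println(c.a + " " + c.b); // 1000000 1000000
    }
}
```

Benchmark this against a version with the padding fields removed (ideally under JMH) and the padded variant should scale noticeably better on multi-core hardware.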