I built a distributed rate limiter in Java to explore a scaling problem I’ve seen multiple times: naive rate limiting doesn’t scale. The usual approach (request → datastore → decision) works fine… until your datastore becomes the bottleneck.

So I tried a different design:
- local counters
- batched updates
- async flush to datastore

This reduces distributed operations dramatically while keeping latency low. Of course, it comes with trade-offs: you lose strict accuracy in favor of throughput.

I documented the architecture, trade-offs, concurrency model, and benchmarks in my GitHub repository. If you're dealing with high-throughput systems or rate-limiting challenges, I’d be happy to exchange ideas.
Java Distributed Rate Limiter: Architecture and Trade-Offs
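The "local counters + batched async flush" design can be sketched in a few lines of plain Java. This is a minimal illustration of the idea, not the repo's actual code: the shared datastore is faked with an in-memory map, and the class and field names are my own assumptions.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Minimal sketch of local-counter rate limiting with batched flushes.
 * The "remote" store is an in-memory map standing in for a real datastore;
 * a background scheduler would call flush() periodically.
 */
public class LocalRateLimiter {
    private final ConcurrentHashMap<String, LongAdder> local = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Long> remoteStore = new ConcurrentHashMap<>(); // stand-in datastore
    private final long limitPerWindow;

    public LocalRateLimiter(long limitPerWindow) {
        this.limitPerWindow = limitPerWindow;
    }

    /** Hot path: one local increment, no remote call per request. */
    public boolean tryAcquire(String key) {
        LongAdder counter = local.computeIfAbsent(key, k -> new LongAdder());
        // Approximate count: local delta plus the last flushed total.
        long seen = counter.sum() + remoteStore.getOrDefault(key, 0L);
        if (seen >= limitPerWindow) return false; // other nodes' counts may lag
        counter.increment();
        return true;
    }

    /** Batched flush: fold local deltas into the shared store, then reset them. */
    public void flush() {
        local.forEach((key, counter) -> {
            long delta = counter.sumThenReset();
            if (delta > 0) remoteStore.merge(key, delta, Long::sum);
        });
    }
}
```

The trade-off from the post is visible here: between flushes, other nodes never see the local deltas, so the global limit is only approximately enforced.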
More Relevant Posts
If your Spring Boot backend crashes during heavy file uploads, the problem isn't Java. It's your architecture. 🛑

This week, I’m building 'Aegis', a Distributed Enterprise RAG Engine. To handle massive data ingestion without causing JVM memory spikes, I ripped out synchronous processing and implemented the Claim Check Pattern using MinIO and Apache Kafka. API latencies dropped from 32s to 12ms, and ingestion throughput is no longer bounded by a single JVM's heap.

I wrote a deep dive on exactly how I built this distributed Java architecture. Read the full breakdown here: https://lnkd.in/g7WYEG6Q (Check out the raw code on GitHub: https://lnkd.in/gdwJ_drr)

#SystemDesign #Java #SpringBoot #Kafka
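The Claim Check Pattern described above can be reduced to a small, self-contained sketch. Here MinIO is stood in for by an in-memory map and Kafka by a blocking queue, so only the control flow is real; all names are illustrative assumptions, not the Aegis codebase.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Toy Claim Check Pattern: park the heavy payload in a blob store and
 * publish only a small reference ("claim check") to the message queue,
 * so the API thread never holds the payload while waiting.
 */
public class ClaimCheckDemo {
    static final Map<String, byte[]> blobStore = new ConcurrentHashMap<>();  // stands in for MinIO
    static final BlockingQueue<String> topic = new ArrayBlockingQueue<>(16); // stands in for Kafka

    /** Producer side: store the payload, publish the tiny claim check, return fast. */
    static String ingest(byte[] largePayload) {
        String claimCheck = UUID.randomUUID().toString();
        blobStore.put(claimCheck, largePayload);
        topic.offer(claimCheck); // small message; the API can respond immediately
        return claimCheck;
    }

    /** Consumer side: redeem the claim check and process at its own pace. */
    static byte[] consume() {
        String claimCheck = topic.poll();
        return claimCheck == null ? null : blobStore.remove(claimCheck);
    }
}
```

The latency win comes from the producer returning as soon as the reference is queued, while the heavyweight processing happens asynchronously on the consumer side.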
Spring Data Interview Question: Efficient Keyset Pagination with Spring Data WindowIterator

Scenario: You’re building a Spring Boot API to fetch comments for a post. The table has tens of millions of rows, and users can scroll deeply into the history. The initial implementation is attached.

Problem observed in production: as users scroll deeper,
- Queries slow down dramatically
- DB CPU rises
- Response times become inconsistent

Why is offset pagination inefficient? Check out the detailed questions and answers: https://lnkd.in/ePcwy9tJ

Subscribe and join 6.5k Java and Spring Boot devs: https://lnkd.in/gwiRqWBV

#java #spring #springboot
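The core idea behind keyset pagination can be shown without Spring Data at all: instead of `OFFSET n` (which scans and discards n rows), each page filters on `id > lastSeenId`, which an index can seek to directly. The in-memory list below stands in for the comments table; with Spring Data you would express the same thing through `WindowIterator` and a keyset `ScrollPosition`.

```java
import java.util.List;
import java.util.stream.Collectors;

/**
 * Plain-Java illustration of the keyset predicate, equivalent to:
 *   WHERE id > :lastSeenId ORDER BY id LIMIT :pageSize
 * No rows "before" the cursor are scanned and thrown away.
 */
public class KeysetPageDemo {
    static List<Long> nextPage(List<Long> commentIds, long lastSeenId, int pageSize) {
        return commentIds.stream()
                .filter(id -> id > lastSeenId)   // seek past the cursor, not an offset
                .sorted()                         // stable ordering by the key column
                .limit(pageSize)
                .collect(Collectors.toList());
    }
}
```

The caller threads the last id of each page back in as the next cursor, so page N+1 costs the same as page 1 — which is exactly what offset pagination fails to do.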
#Post3 In the previous post, we understood the role of @RestController in building APIs. Now the next step is 👇 How do we map HTTP requests to methods? That’s where mapping annotations come in 🔥

In Spring Boot, we use:
• @GetMapping → for GET requests
• @PostMapping → for POST requests
• @PutMapping → for PUT requests (full updates)
• @DeleteMapping → for DELETE requests
• @PatchMapping → for PATCH requests (partial updates)

Example:
@GetMapping("/users") → fetch all users
@PostMapping("/users") → create a new user

💡 What about @RequestMapping? @RequestMapping is a generic annotation that can handle all HTTP methods. Example: @RequestMapping(value = "/users", method = RequestMethod.GET)

👉 But in modern Spring Boot, we prefer specific annotations like @GetMapping for cleaner, more readable code.

Key takeaway: use specific mapping annotations for better clarity and maintainability 👍

In the next post, we will understand how @RequestBody works in handling request data 🔥

#Java #SpringBoot #BackendDevelopment #RESTAPI #LearnInPublic
New to Spring Boot? You'll see these annotations in every project. Here's what they actually do:

• @SpringBootApplication → Entry point. Combines @Configuration, @EnableAutoConfiguration, and @ComponentScan
• @RestController → Marks a class as an HTTP request handler that returns data (not views)
• @Service → Business logic layer. Spring manages it as a bean
• @Repository → Data access layer. Also enables Spring's exception translation
• @Autowired → Injects a dependency automatically (prefer constructor injection instead)
• @GetMapping / @PostMapping / @PutMapping / @DeleteMapping → Map HTTP methods to your handler methods
• @RequestBody → Deserializes JSON from the request body into a Java object
• @PathVariable → Extracts values from the URL path

Bookmark this. You'll refer back to it constantly. Which annotation confused you the most when starting out? 👇

#Java #SpringBoot #Annotations #BackendDevelopment #LearningInPublic
#Post5 So far in this series, we explored concepts like HashMap, ConcurrentHashMap, CAS, and volatile. Now let’s step back and understand the foundation: multithreading fundamentals. How does our code actually run? What is a process, and what is a thread?

From Code to Process
When we write a Java program:
- Compilation: javac Test.java → generates bytecode
- Execution: java Test → starts the program

At this point, the JVM starts a new process in the operating system. This process is responsible for executing our program. The OS allocates resources to it, such as:
• Memory
• CPU time
• Threads

What is a Process?
A process is simply a running instance of a program. Each process has its own memory and resources allocated by the operating system.

What is a Thread?
A thread is the smallest unit of execution inside a process. When a process starts, it begins with one thread, called the main thread. From there, we can create multiple threads to perform tasks concurrently.

Multitasking vs Multithreading
Multitasking: the operating system runs multiple processes at the same time.
Multithreading: a single process runs multiple threads concurrently.
In simple terms: Operating System → Multiple Processes (Multitasking) → Each Process contains multiple Threads (Multithreading)

Example
Think of a web application:
• One thread handles user requests
• Another thread processes data
• Another thread sends responses
This allows multiple tasks to run efficiently at the same time.

Key takeaway
Code execution flow: Code → JVM → Process → Threads
Understanding this flow is important because all advanced concepts like synchronization, CAS, and concurrent data structures are built on top of threads.

In the next post, we’ll explore what happens inside a process (heap, stack, memory structure).

#Java #SoftwareEngineering #Multithreading #BackendDevelopment #Programming
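The "main thread plus created threads" point above is easy to demo in a few lines. The method and thread names below are illustrative:

```java
/**
 * A JVM process starts with one thread (named "main"); additional
 * threads created inside the same process run concurrently with it.
 */
public class ThreadBasics {
    public static String[] runTwoThreads() {
        String[] names = new String[2];
        names[0] = Thread.currentThread().getName(); // the process's initial thread
        Thread worker = new Thread(
                () -> names[1] = Thread.currentThread().getName(), "worker-1");
        worker.start();            // second thread begins executing concurrently
        try {
            worker.join();         // wait for the worker to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return names;
    }
}
```

Both threads share the process's heap (the `names` array), which is exactly why the later topics in this series — synchronization, CAS, concurrent collections — matter.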
🔥 Day 12 — Stream vs Parallel Stream

Java gives us stream() and parallelStream(), but using them interchangeably is a common performance trap. Here’s a concise, architecture-focused breakdown 👇

✅ When stream() (sequential) is the right choice
Use it by default unless there is a clear reason not to.
✔ Order matters
✔ Small dataset
✔ Computation is lightweight
✔ Tasks depend on external state
✔ Running inside a web request thread (avoid blocking!)
Sequential streams = predictable, cheap, safe.

🚀 When parallelStream() actually helps
Parallel streams shine only in specific scenarios:
✔ CPU-heavy operations
✔ Very large collections
✔ Pure functions (no shared mutable state)
✔ Independent tasks
✔ Running on multi-core servers
✔ Safe to use the common fork-join pool (or a custom one)
Example workloads: image processing, bulk calculations, data transformation.
Rule: only use parallel streams for CPU-bound operations on big datasets.

⚠️ When to AVOID parallelStream()
Parallel is not always faster — sometimes it’s worse.
❌ Small collections (overhead > benefit)
❌ IO tasks (network/DB calls block threads)
❌ Code modifying shared variables
❌ Inside web servers (uses the common ForkJoinPool → thread starvation)
❌ Any scenario where ordering is important
Parallel streams can cause unexpected latency spikes in prod if used blindly.

🧠 Architect’s Take:
Parallel streams are powerful, but they borrow threads from the common ForkJoinPool, which your entire application also uses. One wrong usage in production can slow down every request. Default to sequential; use parallel only when the data and computation justify it.

#100DaysOfJavaArchitecture #Java #Streams #Concurrency #SoftwareArchitecture #Microservices
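A small sketch of the two variants side by side. For a pure, CPU-bound reduction like this, both produce the same result; the only difference is that the parallel version splits the work across the common ForkJoinPool — whether that is actually faster depends on data size and core count, as the post stresses.

```java
import java.util.stream.LongStream;

/** Same pure reduction, sequential vs parallel. */
public class StreamDemo {
    static long sumSequential(long n) {
        // Runs entirely on the calling thread.
        return LongStream.rangeClosed(1, n).sum();
    }

    static long sumParallel(long n) {
        // Splits the range across the common ForkJoinPool's workers.
        return LongStream.rangeClosed(1, n).parallel().sum();
    }
}
```

Because `sum()` is associative and touches no shared mutable state, parallelizing it is safe — which is precisely the property the post's ❌ list is warning you to check before calling `.parallel()`.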
Most used Spring Boot annotations (that you’ll see almost everywhere)

If you’ve worked with Spring, you already know… half the magic is in annotations 😅 Here are some of the ones I keep using almost daily:

* @SpringBootApplication → starting point of the app
* @RestController → tells Spring this class handles APIs
* @RequestMapping / @GetMapping / @PostMapping → for routing requests
* @Autowired → dependency injection (used a lot, sometimes too much 👀)
* @Service → business logic layer
* @Repository → database layer
* @Component → generic bean
* @Entity → maps a class to a DB table
* @Id → primary key
* @Configuration → for config classes
* @Bean → manually define beans when needed

When I started, I used to just memorize these. Over time, I realised that understanding when NOT to use them is equally important. Like:
* overusing @Autowired everywhere
* mixing @Component and @Service randomly
* not understanding the bean lifecycle

Spring feels simple at the start, but there’s a lot going on under the hood. If you’re learning Spring right now → focus less on remembering and more on understanding what each annotation actually does.

Which Spring annotation do you use the most? 👇

#ThoughtForTheDay #SpringBoot #Java #Backend #SoftwareEngineering
Migrating to Java 21? We're seeing engineering teams hit a hard wall at exactly 256 concurrent requests, despite the promise of 'infinite' virtual threads. Here’s why.

The expectation: effortless scalability, ditching reactive models. The reality: under production load, services crash into catastrophic latency cliffs. Your P99 jumps from milliseconds to 5 seconds. The heap bloats. Thread dumps confirm carrier thread starvation, hard-capped at exactly 256.

What we're seeing at Azguards Technolabs: the 'Carrier Pinning Trap.' Legacy `synchronized` blocks or native (JNI) calls deep in your dependency tree physically hold carrier threads hostage. The JVM cannot unmount the virtual threads, leading to complete virtual thread pool starvation at precisely 256 threads — the virtual thread scheduler's default maximum pool size.

Our approach: precise JFR diagnostics (`jdk.VirtualThreadPinned`) to pinpoint the offending stack frames. Then, tactical lock modernization with `ReentrantLock` for internal code, or dedicated platform thread bulkheads for unmodifiable third-party dependencies. This isolates toxic workloads and preserves Loom's global `ForkJoinPool`.

This isn't just a dependency upgrade; it's a fundamental architectural shift. We specialize in these "Hard Parts": profiling critical latency, tracing obscure pinning, and implementing resilient architectures to stabilize your Spring Boot 3 infrastructure. Stop guessing at your concurrency limits.

Read the full technical breakdown here:
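The "lock modernization" step mentioned above, sketched in isolation (class and field names are my own): in Java 21, a virtual thread that blocks inside a `synchronized` block pins its carrier thread, while a virtual thread that parks on a `ReentrantLock` can unmount and free the carrier.

```java
import java.util.concurrent.locks.ReentrantLock;

/**
 * A guarded counter rewritten from synchronized to ReentrantLock,
 * so virtual threads waiting for the lock do not pin carrier threads.
 */
public class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private long value;

    // Before: public synchronized void increment() { value++; }
    public void increment() {
        lock.lock();           // a waiting virtual thread parks and unmounts here
        try {
            value++;
        } finally {
            lock.unlock();     // always release in finally
        }
    }

    public long get() {
        lock.lock();
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

For third-party code you cannot rewrite, the post's alternative — a dedicated platform-thread executor acting as a bulkhead — keeps the pinned work off the virtual thread scheduler entirely.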
The "ThreadLocal" Trap: Why Your Session Logic Fails in Vert.x and Kafka

Transitioning from traditional synchronous Java development to an asynchronous, event-driven architecture with Vert.x and Apache Kafka is a rewarding journey, but it comes with a major wake-up call: your traditional session mechanisms are probably obsolete. After a deep dive into development and debugging today, I’ve consolidated a few critical architectural shifts that every team must consider before writing the first line of code.

1. The Death of the Context-Thread Bond
In a classic Servlet-based world, we rely heavily on ThreadLocal to store user sessions, security contexts, or trace IDs. It’s easy: one thread per request. In Vert.x, the event loop is king. A single request may jump across multiple threads, or a single thread may handle thousands of interleaved requests. The moment you hit an await or a Kafka send, your ThreadLocal context vanishes.

2. Statelessness Is Not Optional
In a Kafka-driven processor, the "session" doesn't exist in memory. If your processor needs to call a long-running remote API (like a heavy PDF parser), you cannot simply "wait" and expect the environment to stay the same. You must explicitly pass state through message headers or metadata objects.

3. Rethinking the "Long-Running" Task
Synchronous systems "block." Asynchronous systems "flow." If a task takes 30 seconds, a traditional system hangs a thread. In an event-driven system, you should be looking at:
- Asynchronous callbacks: trigger the task and let the result flow back into a different Kafka topic.
- Context propagation: explicitly carrying userId and traceId within the payload metadata.

🔑 Key Takeaway for Architects: Don't try to retrofit synchronous patterns into an asynchronous world. If you don't design your context propagation strategy (how session data travels across the event bus or Kafka topics) during the blueprint phase, you will spend weeks debugging NullPointerExceptions and lost sessions.
Design for the flow, not for the thread. 🦑 #Java #Vertx #ApacheKafka #SoftwareArchitecture #BackendDevelopment #Microservices #AsyncProgramming
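The explicit context propagation described above can be sketched without any framework: carry the userId/traceId in a headers map that travels with the payload, so an async hop onto a different thread changes nothing. The `Message` record and `enrich` method below are illustrative, not a Vert.x or Kafka API.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

/**
 * Context travels inside the message instead of in ThreadLocal,
 * so it survives async stages that run on other threads.
 */
public class ContextPropagationDemo {
    record Message(Map<String, String> headers, String payload) {}

    /** An async stage (possibly on another thread); headers are passed through. */
    static CompletableFuture<Message> enrich(Message in) {
        return CompletableFuture.supplyAsync(() ->
                new Message(in.headers(), in.payload().toUpperCase()));
    }
}
```

With real Kafka, the same idea maps onto record headers; with Vert.x, onto message metadata on the event bus — either way, the context is part of the flow, not the thread.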
Sometimes all you need is one good in-depth resource. In-depth playlists:

- JAVA from Basics to Advanced: https://lnkd.in/dUNA6vsU
- Spring Boot from Basics to Advanced: https://lnkd.in/gz2A5ih2
- Low Level Design from Basics to Advanced: https://lnkd.in/dJkgzKxf
- High Level Design from Basics to Advanced: https://lnkd.in/d8eDwYVA
- Distributed Microservices (Practical): https://lnkd.in/gdXkZ75y
- JUnit5 and Mockito from Basics to Advanced: https://lnkd.in/g5fmcXHJ
- Event Driven Architecture: https://lnkd.in/gP5vY7y7
- Spring AI: https://lnkd.in/gyn2X2Fu

or try www.conceptandcoding.in (beta)

#softwareengineer