Rate limiting is a strategy used in backend systems to keep services stable, fair, and fast.
Most production systems use a mixture of different algorithms. Here's a breakdown of the most common ones.
#developer #softwaredevelopment #programming #coding #technology
Rate limiting is a strategy used in backend systems to control how much traffic a service handles. It protects against abuse, prevents resource exhaustion, and ensures fair access for all users. It's how we keep systems stable, fair, and fast. There are four major strategies for rate limiting, and each one handles traffic differently.

The first approach is the fixed window. It's the simplest form of rate limiting. You choose a time window, say one minute, and set a limit like 100 requests per minute. Every time a request from a specific client comes in, you increase that client's counter. When the minute resets, the counter goes back to zero, giving the client another 100 requests for the next minute. It's easy to understand, easy to implement, and very fast.

But here's the problem: the fixed window doesn't care when a request arrives inside the window. A client can send 100 requests at 11:59:59, then another 100 at 12:00:00. That's 200 requests in two seconds, even though the limit is 100 per minute. This creates a traffic pattern called burstiness, which is technically legal but can be painful to deal with.

This burstiness is exactly why we move to the next approach: the sliding window. Instead of resetting counters at fixed intervals, the sliding window calculates usage over the past time period from the moment each request arrives. So if your limit is 100 requests per minute, every time a new request comes in, it looks back at the past 60 seconds to see how many requests were made and checks whether the limit has been reached.

But sliding windows aren't perfect either. You can still get microbursts: depending on how you track time buckets, a user can squeeze in a spike of traffic that still passes the limit. These limitations lead us to a different class of algorithms that handle burstiness more naturally. Instead of counting how many requests you've made, what if we gave clients a budget to spend? That's the token bucket approach.
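Before moving on, here's a rough sketch of the fixed window counter described above. The class and parameter names are my own for illustration, not from any particular library:

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Fixed window: at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit=100, window=60):
        self.limit = limit
        self.window = window
        # client -> [start of current window, request count in that window]
        self.counters = defaultdict(lambda: [0, 0])

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        # All requests in the same `window`-second slice share one counter
        window_start = int(now // self.window) * self.window
        entry = self.counters[client]
        if entry[0] != window_start:      # a new window began: reset
            entry[0], entry[1] = window_start, 0
        if entry[1] < self.limit:
            entry[1] += 1
            return True
        return False                      # over the limit: reject
```

Note how this sketch exhibits exactly the burstiness described above: a client can use its full budget at the very end of one window and again at the very start of the next.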
You fill a bucket assigned to a client with tokens at a steady rate, say 10 tokens per second. Each request consumes a token, and a request that requires multiple operations may cost multiple tokens. If a request comes in and there are tokens available, you're good. But if the bucket is empty, the client has reached its limit.

The key advantage is that the token bucket allows controlled bursts. The bucket can hold extra tokens, so if a client sends requests in spikes, that's allowed as long as the bucket hasn't already been drained and there are still tokens in it.

But for situations that require strict, predictable traffic flow, the token bucket might be too lenient. That's where the leaky bucket comes in. In the leaky bucket, new requests flow into the bucket and are processed at a fixed rate. Think of it like a bucket with a hole in the bottom: a lot of requests can enter from the top, but they can only exit through the hole at the bottom at a fixed, steady pace. If requests come in faster than they can drain out, the bucket overflows and those extra requests get dropped.

The leaky bucket gives the most predictable, smooth traffic pattern. It enforces consistency, which is great for systems that get overwhelmed easily and need guaranteed steady throughput. The drawback is that it's the least flexible.

Most production systems use a combination of these strategies. Follow for more programming videos like this, and check out umacodes.com for more.
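The token bucket described above can be sketched in a few lines. This is a minimal illustration with names of my own choosing; the `now` parameter is there so the refill logic can be tested deterministically:

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate=10.0, capacity=20.0, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity            # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def allow(self, cost=1.0, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost           # spend tokens for this request
            return True
        return False                      # bucket empty: over the limit
```

The `cost` parameter captures the idea from the video that an expensive request can consume multiple tokens, and the capacity cap is what bounds how large a burst can get.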
I've seen teams move from a leaky bucket system back to the basics of rate limiting. Balancing the flexibility and control of the systems you implement while staying adaptable is always important! 😃
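For contrast with the token bucket, here's a minimal leaky bucket sketch in the "meter" style: the bucket drains at a fixed rate, and a request is dropped if adding it would overflow. Again, the names are illustrative, not from any real library:

```python
import time

class LeakyBucket:
    """Leaky bucket as a meter: water leaks out at a fixed `rate` per second;
    a request is dropped if adding it would overflow `capacity`."""

    def __init__(self, rate=5.0, capacity=10, now=None):
        self.rate = rate
        self.capacity = capacity
        self.level = 0.0                  # current "water" in the bucket
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Water leaks out at the fixed rate since the last check
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1               # the request fits in the bucket
            return True
        return False                      # overflow: drop the request
```

Unlike the token bucket, there is no burst allowance here beyond the queue depth itself, which is what makes the output rate so smooth and predictable.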
What’s the biggest waste of time for a programmer?
Working without understanding the bigger picture.
You can spend hours fixing a bug,
but if you understood the system from the start… you’d save half that time.
Code is part of the story…
but understanding is the whole story.
#Programming #SoftwareEngineering #Developers #Tech #SystemThinking #Coding
I've been coding for longer than I can remember and I've definitely had my fair share of bugs. Here are some hard-earned debugging lessons that could save you hours of frustration down the line.
#developer #softwaredevelopment #programming #coding #tech
I can relate to the iCloud thing back in 2025.
I can never forget that mess.
I got the brilliant idea to move tons of projects to the iCloud repository so that I could save storage 😂.
Trust me, you really don't want to do this.
Software Engineer at Netflix | Tech Creator & Career Mentor
Software engineers should also be aware of the drawbacks of vibe coding. Yes, it makes development faster, but speed without understanding can easily turn into heavy technical debt.
You know that feeling when you need to dive into a new codebase and it's completely overwhelming?
We've all been there, starting on a new project.
It's all new. You have no idea where to start looking to understand the complexities of the code and the system in front of you.
Sometimes there's documentation, sometimes not.
Sometimes there's existing expertise, sometimes not.
What can we do to try and optimize our approach?
Check out the article:
https://lnkd.in/gps5GWZ7
#coding #programming #softwaredevelopment
Here's a list of 10 different refactoring techniques for you to leverage!
Refactoring is a critical part of software development. Without it, we'd essentially have to predict every step of what we need to deliver perfectly, or constantly be faced with rewriting code from scratch.
Both of those are ridiculous.
I've put together a list of 10 different refactoring techniques that you can leverage!
Check out the article:
https://lnkd.in/gX8uVrym
#coding #programming #refactor #refactoring
Every developer has experienced this.
You spend hours debugging your code, checking every function, rewriting logic… and the real problem ends up being a tiny mistake.
Sometimes the biggest lessons in programming come from the smallest bugs.
Read the story:
https://lnkd.in/dTF-2uV7
#programming #webdevelopment #coding #softwareengineering
The "programming iceberg" metaphor highlights the complexity of software development. Understanding this metaphor can help us ensure that our code is not only functional but also maintainable and scalable.
#programming #softwaredevelopment #coding #tech
In 2026, if you’re a developer and what you deliver doesn’t work, you can’t blame Claude.
You are still responsible for the code.
#Claude #AICoding #Developers #Programming