ARTICLE 10 — Implementing Command Pipelining
Every Redis command has a hidden cost you're probably ignoring.
It's not the command itself. It's the round trip.
The round-trip problem:
Client → [SET key1 val1] → Redis
Client ← [OK] ← Redis (1 round trip)
Client → [SET key2 val2] → Redis
Client ← [OK] ← Redis (2 round trips)
...repeat 100 times... (100 round trips)
On a local machine: ~0.1ms per round trip = 10ms total. Acceptable. On a cloud instance in another region: ~5ms per round trip = 500ms total. Your users feel that.
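The arithmetic above is worth making explicit. A back-of-the-envelope model (the 0.1ms and 5ms figures are illustrative round-trip times, not measurements):

```python
# Without pipelining, every command pays one full network round trip.
def total_latency_ms(commands: int, rtt_ms: float) -> float:
    return commands * rtt_ms

local = total_latency_ms(100, 0.1)   # same machine: ~0.1 ms per trip
remote = total_latency_ms(100, 5.0)  # cross-region cloud: ~5 ms per trip

print(f"local:  {local:.0f} ms")   # 10 ms
print(f"remote: {remote:.0f} ms")  # 500 ms
```

Note the model counts only network time; Redis itself typically answers a SET in well under a millisecond, which is exactly why the round trips dominate.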
Pipelining: batch the commands, not the trips
Client → [SET key1] [SET key2] [SET key3] ... [SET key100] → Redis
Client ← [OK] [OK] [OK] ... [OK] ← Redis
(1 round trip)
Send everything at once. Get everything back at once.
One network round trip instead of 100.
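Under the hood, a pipeline is just the client concatenating serialized commands into a single socket write. A minimal sketch of the RESP encoding (Redis Serialization Protocol) — the helper name `encode_command` is mine, not from any client library:

```python
def encode_command(*parts: str) -> bytes:
    """Serialize one command in RESP: an array of bulk strings."""
    out = f"*{len(parts)}\r\n".encode()
    for part in parts:
        data = part.encode()
        out += f"${len(data)}\r\n".encode() + data + b"\r\n"
    return out

# A pipeline is 100 commands in one buffer: one send(), one round trip,
# then the client reads 100 replies back in order.
batch = b"".join(
    encode_command("SET", f"key{i}", f"val{i}") for i in range(1, 101)
)
```

In practice your client library does this for you — with redis-py, for example, `pipe = r.pipeline(transaction=False)`, queue your commands, then `pipe.execute()`.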
Real-world throughput comparison:
Without pipelining: ~10,000 commands/second (network-bound)
With pipelining (batch of 100): ~200,000 commands/second
That's a 20x improvement — with zero changes to Redis, zero changes to your infrastructure.
When to use pipelining:
✅ Seeding Redis with initial data (loading 100K keys at startup)
✅ Batch processing (update 1,000 user sessions at once)
✅ Import scripts and data migrations
✅ Analytics pipelines writing multiple counters at once
What pipelining is NOT:
Not atomic. Commands are batched on the client side and sent together, but Redis processes them one-by-one. Another client can interleave between your pipelined commands.
Not a transaction. If command 47 fails, commands 1–46 have already executed, and commands 48–100 will still execute.
For atomicity → use MULTI/EXEC (transactions). For throughput → use pipelining.
They solve different problems.
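The interleaving risk is easy to see with a toy model: Redis processes commands one at a time in arrival order, and a plain pipeline guarantees order within your batch, not exclusive access to the server. The client names and keys below are entirely hypothetical:

```python
# Toy single-threaded "server": commands run in arrival order,
# and two clients' pipelined batches can interleave.
store = {}

client_a = [("SET", "counter", "1"), ("SET", "counter", "2")]
client_b = [("SET", "counter", "99")]

# One possible arrival order: A's first command, then B's, then A's second.
for cmd, key, val in [client_a[0], client_b[0], client_a[1]]:
    store[key] = val

# A's batch ran in order and "won" the final write, but B's command
# executed in between, so A cannot assume the key was untouched mid-batch.
print(store["counter"])  # "2"
```

For what it's worth, redis-py makes the choice explicit: `r.pipeline()` wraps the batch in MULTI/EXEC by default, while `r.pipeline(transaction=False)` sends a plain pipeline.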
The beginner mistake:
Using pipelining when you need atomicity. Expecting that if one command fails, none of them execute.
That's not what pipelining does. Know the difference before you choose.