Optimize API Performance with Keyset Pagination

Building APIs? Your pagination strategy might be killing performance.

OFFSET + LIMIT looks clean until your data grows. It works great with 10K rows and breaks silently at 10M. Here’s why 👇

When someone requests:

SELECT * FROM orders
ORDER BY created_at DESC
OFFSET 50000 LIMIT 20;

the database doesn’t “jump” to row 50,000. It scans 50,000 rows, discards them, and then returns 20. As traffic grows, latency grows, CPU grows, and your API slows down.

Now imagine a new row is inserted while the user moves from page 1 to page 2. Rows shift, so users see duplicates or miss records entirely. A production nightmare.

The fix? Keyset (cursor) pagination. Instead of saying “skip 50,000 rows”, say “start after this record”. Example:

SELECT * FROM orders
WHERE created_at < :last_seen_timestamp
ORDER BY created_at DESC
LIMIT 20;

Now the database:
• Uses the index directly
• Doesn’t scan discarded rows
• Avoids shifting issues
• Scales predictably

This is what high-scale systems use. OFFSET pagination is easy; cursor pagination is scalable. When building backend systems, small decisions compound at scale.

What’s one backend design choice you changed after hitting production load?

#BackendEngineering #SystemDesign #Java #SpringBoot #Database #Performance
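The cursor loop described above can be sketched end to end with Python's stdlib sqlite3 module. The table and column names mirror the post's example; the seed data, page size, and `fetch_page` helper are illustrative choices, not anything from the original post.

```python
# Minimal keyset-pagination sketch using sqlite3 (stdlib, in-memory DB).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, created_at INTEGER)")
# Seed 23 rows with strictly increasing timestamps. Real schemas should add
# a unique tie-breaker (e.g. id) to the cursor, since timestamps can collide.
conn.executemany(
    "INSERT INTO orders (id, created_at) VALUES (?, ?)",
    [(i, 1000 + i) for i in range(1, 24)],
)

def fetch_page(last_seen=None, page_size=5):
    """Return the next page of orders strictly older than the cursor."""
    if last_seen is None:  # first page: no cursor yet
        sql = "SELECT id, created_at FROM orders ORDER BY created_at DESC LIMIT ?"
        params = (page_size,)
    else:  # "start after this record" instead of "skip N rows"
        sql = ("SELECT id, created_at FROM orders "
               "WHERE created_at < ? ORDER BY created_at DESC LIMIT ?")
        params = (last_seen, page_size)
    return conn.execute(sql, params).fetchall()

pages, cursor = [], None
while True:
    page = fetch_page(cursor)
    if not page:
        break
    pages.append(page)
    cursor = page[-1][1]  # created_at of the last row becomes the next cursor

print(len(pages))   # 5 pages of sizes 5, 5, 5, 5, 3
print(pages[0][0])  # newest row first: (23, 1023)
```

Because each request carries the last seen `created_at` rather than an offset, a row inserted between requests shifts nothing: the next page still starts strictly after the record the client already saw.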

Small issue: this query will first scan 50,020 rows (not 50,000) and then discard the first 50,000.
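The comment's arithmetic in one line: an OFFSET n LIMIT m query materializes n + m rows before discarding the first n, so the cost is driven by the offset, not the page size. (A trivial illustration; the variable names are mine.)

```python
offset, limit = 50_000, 20
scanned = offset + limit   # rows the engine walks through: 50,020
returned = limit           # rows the client actually receives: 20
print(scanned, returned)
```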
