Helm Updates: V1.1 — Synchronous vs. Asynchronous Programming: I/O, Threads and Event Loops
Cover Credit - Mendix

Sync vs Async for IO

When building fast, responsive APIs, you’ll often hear:

“Don’t use a sync call in an async endpoint.”

But why exactly? What changes between synchronous and asynchronous SDKs, especially for IO-bound operations?


The Problem: IO-Bound Operations and Blocking

Most web backends spend a lot of time waiting on IO—network calls to a database, cache, or external service.

  • Synchronous (sync) SDK: Makes a blocking call; the code and the thread wait until the IO completes.
  • Asynchronous (async) SDK: Initiates the IO, then yields control. The thread is freed to serve other requests while waiting for the IO.

💡 Sync = standing in a single line at Starbucks, waiting for each coffee to finish before the next one starts. Async = a barista starts 5 orders at once, jumping between them as milk steams and espresso shots finish; everybody gets their coffee faster.


Why Does Blocking Matter?

If your server’s threads are stuck waiting on IO, they can’t handle other requests. As traffic grows, scalability collapses: every thread is parked on a blocking call, so concurrency is capped by the number of threads.
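A toy sketch (not from the article) of that cap in action: a thread pool with only 2 workers serving 4 "requests", each of which blocks on a simulated 0.2s IO wait. The pool can only overlap as many waits as it has threads, so the batch takes two waves instead of one.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # Simulated blocking IO: the worker thread is parked for the whole wait.
    time.sleep(0.2)
    return i

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:  # only 2 worker threads
    results = list(pool.map(handle_request, range(4)))
elapsed = time.monotonic() - start

# 4 requests / 2 threads -> two "waves" of blocking waits: ~0.4s, not ~0.2s
print(f"{elapsed:.2f}s for {len(results)} requests")
```

Doubling the traffic here means doubling the wall-clock time unless you also double the threads, which is exactly the scaling problem async IO avoids.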


[Figure: Sync request, blocking IO]
[Figure: Async request, non-blocking IO]

The Event Loop: Async’s Secret Sauce 🥘

The event loop is like a chef managing multiple pans:

  • Async: While one pan is simmering (waiting for IO), the chef flips another pancake.
  • Sync: The chef stands watching a single pan; every other order sits idle.


Server Activity Timeline

Time →
Req1: [Working][---DB Wait---][Working]
Req2:         [Working][---DB Wait---][Working]
Req3:                [Working][---DB Wait---][Working]

✔ With Async: while 1 & 2 are waiting, 3 can cook  
❌ With Sync: only one request runs at a time
        

Imagine a server with only one CPU core, meaning it can run just one thing at a time: 5 requests, a 1-core CPU, and each DB call takes 2s.

Sync Flow

  • With a single worker, requests are processed one after another.
  • Each request makes a DB call that takes 2s. Since the call is blocking, the CPU just sits idle, waiting.
  • But because the thread doesn’t release the CPU, nothing else can be scheduled during that wait.
  • Result: 5 requests × 2s = ~10s total before all are done.
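The sync arithmetic above can be verified with a scaled-down sketch (0.2s sleeps standing in for the 2s DB calls, an assumption for brevity): five blocking calls in a row are strictly additive.

```python
import time

def db_call():
    time.sleep(0.2)  # stand-in for a 2s blocking DB query (scaled down 10x)

start = time.monotonic()
for _ in range(5):
    db_call()  # each call blocks the one thread; nothing overlaps
elapsed = time.monotonic() - start

print(f"sync total: {elapsed:.2f}s")  # ~1.0s: 5 x 0.2s, one after another
```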

Async Flow

  • All 5 requests arrive, and each (almost) immediately starts its DB call.
  • When the first request hits its DB wait, the async code yields control back to the event loop.
  • The event loop says: “Great, while this one waits, let’s put the CPU to work on other requests.”
  • Each DB call runs concurrently (overlapped), and the CPU is kept busy rather than sitting idle.
  • Once DB results return, the event loop resumes the exact spot in code where each task left off.
  • Result: After ~2s (plus tiny overhead), all 5 requests complete nearly simultaneously.
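The async flow can be sketched the same way, with `asyncio.sleep` playing the role of the non-blocking DB wait (again scaled down 10x): all five waits overlap, so the batch finishes in roughly one wait, not five.

```python
import asyncio
import time

async def db_call(i):
    await asyncio.sleep(0.2)  # stand-in for the 2s DB wait; yields to the loop
    return i

async def main():
    start = time.monotonic()
    # Start all 5 "requests" at once; the event loop overlaps their waits.
    results = await asyncio.gather(*(db_call(i) for i in range(5)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(f"async total: {elapsed:.2f}s for {len(results)} requests")  # ~0.2s
```

Same single thread, same total IO time per request, but ~5x less wall-clock time because the waits are concurrent.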


[Task1 waiting][Task2 waiting][Task3 active][Task4 waiting]
   |                |                |             |
 <---event loop picks up whatever's ready, no idle time--->        

How Sync SDKs Work

A sync SDK issues a call like query(). The calling thread is occupied until the IO finishes.

Example: synchronous Postgres driver (psycopg2)

import psycopg2  # Sync driver

def get_user_sync(user_id):
    conn = psycopg2.connect(...)
    cur = conn.cursor()
    cur.execute('SELECT * FROM users WHERE id=%s', (user_id,))
    result = cur.fetchone()
    cur.close()
    conn.close()
    return result
        

If this runs inside an async endpoint, the thread is stuck until the DB responds. No other coroutine on that thread can run.
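You can see that stall directly with a small experiment (a sketch, with `time.sleep` standing in for the blocking driver call): two coroutines that each "wait" 0.3s finish in ~0.6s when the wait is a sync call, but ~0.3s when it cooperates with the loop.

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.3)           # sync call: the whole event loop thread is stuck

async def polite_handler():
    await asyncio.sleep(0.3)  # async call: yields, so the other coroutine runs

async def measure(handler):
    start = time.monotonic()
    await asyncio.gather(handler(), handler())  # two "requests" at once
    return time.monotonic() - start

blocking_time = asyncio.run(measure(blocking_handler))
async_time = asyncio.run(measure(polite_handler))
print(f"blocking: {blocking_time:.2f}s, async: {async_time:.2f}s")
# blocking ~0.6s (serialized), async ~0.3s (overlapped)
```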


How Async SDKs Work

Async SDKs integrate with an event loop (like asyncio). Instead of blocking, they register interest in IO and let the event loop resume the task only when the IO is ready.

Example: async Postgres driver (asyncpg)

import asyncpg
import asyncio

async def get_user_async(user_id):
    conn = await asyncpg.connect(...)
    result = await conn.fetchrow('SELECT * FROM users WHERE id=$1', user_id)
    await conn.close()
    return result
        

Here the await releases control back to the event loop while the database query is in progress. The same thread can handle other requests in the meantime.

The same principle applies to HTTP clients, file I/O, message queues, etc.


OS-Level View: How Async IO Works

When your code does IO, it’s really asking the operating system (OS) to talk to hardware (network card, disk, etc.). The difference between sync and async comes down to how the OS handles readiness:

  • Blocking IO (sync): The app calls read(), and the OS parks the thread until the data is ready. Nothing else can use that thread until the IO finishes.
  • Non-blocking IO (async): The app calls read(), but instead of waiting, it tells the OS “register my interest in this socket”. The OS immediately returns control. Later, when the socket has data, the OS reports readiness (via mechanisms like epoll on Linux, kqueue on BSD/macOS, or select). The event loop then resumes the exact coroutine that was waiting.

Putting it together

Async works by:

  1. Marking sockets as non-blocking (calls return immediately).
  2. Using an event loop that asks the OS which sockets are ready.
  3. Resuming only the tasks that have data available.

That’s why a single thread can manage thousands of connections: it’s not spinning on them—it’s only touching the ones that are “ready now.”
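The three steps above map almost one-to-one onto Python's `selectors` module, which is what asyncio's default event loop uses under the hood. A minimal sketch using a local socket pair so it runs anywhere:

```python
import selectors
import socket

sel = selectors.DefaultSelector()      # wraps epoll/kqueue/select for this OS
a, b = socket.socketpair()
a.setblocking(False)                   # step 1: mark the socket non-blocking
b.setblocking(False)

sel.register(a, selectors.EVENT_READ)  # "register my interest in this socket"
b.send(b"ping")                        # now `a` has data waiting

# Step 2: ask the OS which registered sockets are ready right now.
events = sel.select(timeout=1)
ready = [key.fileobj for key, _ in events]

# Step 3: touch only the ready socket; recv() returns instantly, no waiting.
data = a.recv(4)
print(data)  # b'ping'

sel.unregister(a)
a.close()
b.close()
```

A real event loop runs `sel.select()` in a loop over thousands of registered sockets, resuming only the coroutines whose sockets show up in `events`.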

[Figure: Event loop cycle]

Why Sync SDKs Don’t “Become Async”

Calling sync SDKs inside async def doesn’t make them async.

  • Sync SDKs: Block the thread until IO is done.
  • Async SDKs: Cooperate with the event loop, freeing the thread for other work.

If you must call a sync SDK inside async code, you can use loop.run_in_executor (or asyncio.to_thread) to shove the work onto a thread pool so the event loop thread stays free—but this adds thread overhead and misses the scalability benefits of native async drivers.
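A minimal sketch of that escape hatch, with `time.sleep` standing in for a sync SDK call: run_in_executor hands the blocking work to the loop's default thread pool, so two calls overlap instead of serializing on the loop thread.

```python
import asyncio
import time

def blocking_query():
    time.sleep(0.3)  # stand-in for a sync SDK call (e.g. a psycopg2 query)
    return "row"

async def main():
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    # Offload both blocking calls to the default thread pool; the event
    # loop thread itself stays free while the pool threads wait.
    results = await asyncio.gather(
        loop.run_in_executor(None, blocking_query),
        loop.run_in_executor(None, blocking_query),
    )
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # overlapped: ~0.3s, not 0.6s
```

Note the trade-off: each in-flight call still occupies a pool thread, so concurrency is again capped by thread count—exactly the limit a native async driver avoids.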


"So when should I actually pick sync over async?"

When Sync (Blocking I/O) is Good Enough

  • Use Case: Small Flask/Django app, admin dashboards, internal tools, or cron jobs.
  • Why: Simpler code, easier debugging, no event loop complexity.
  • Benefit: Faster developer velocity, fewer “gotchas” (callback hell, debugging difficulties, libraries not supporting async, etc.) around async libraries.
  • Example: low-volume but CPU-heavy tasks (PDF generation, image processing). Async won’t help here.

When Async (Non-blocking I/O) Shines

  • Use Case: High-concurrency APIs, chat servers, streaming platforms, payment gateways, or microservices that talk to multiple APIs/DBs.
  • Why: Handles thousands of concurrent requests efficiently on fewer threads.
  • Benefit: Lower infra costs, better scalability, reduced latency under load.

👉 Rule of Thumb:

  • If your app mostly computes → Stick with sync + multiprocessing.
  • If your app mostly waits on IO → Async wins big.


TL;DR ☕

  • Sync = a chef cooking one dish at a time.
  • Async = the chef juggling everyone’s orders at once, chopping for dish 2 while dish 1 simmers.
  • Blocking IO = wasted CPU cycles. Async keeps it busy.
  • Use sync when: your app is simple, traffic is low, or workloads are CPU-heavy.
  • Use async when: your app does tons of DB/API calls and needs to scale.
  • Async isn’t “faster code” — it’s about better concurrency and resource usage.
  • The magic sauce = event loop + non-blocking IO.


By Pradyumn Jain