Queues vs Threads vs Async: The Guide Every Python Developer Should Know

Most Python developers know concurrency exists, but few actually know when to use:

  • asyncio
  • threads
  • processes
  • background queues

And this confusion creates:

  • slow APIs
  • blocked event loops
  • CPU-bound tasks inside async routes
  • race conditions
  • performance bottlenecks that are almost impossible to debug

So today, we’re fixing that.

Here’s the backend developer’s guide to picking the right concurrency model every time.

01 Threads: Best for I/O, Worst for CPU

Use threads when:

  • Your task waits more than it works
  • Tasks involve network calls (APIs, DB, file I/O)
  • You don’t need strict execution order

Avoid threads when:

  • Your code is CPU-heavy (the GIL prevents true thread parallelism)
  • You need thousands of concurrent tasks

Example (ThreadPoolExecutor)

from concurrent.futures import ThreadPoolExecutor
import requests
from typing import List

def fetch(url: str) -> str:
    # Each call blocks on network I/O; threads let the waits overlap
    return requests.get(url).text

urls: List[str] = ["https://api.github.com", "https://google.com"]

with ThreadPoolExecutor(max_workers=10) as executor:
    responses = list(executor.map(fetch, urls))
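`executor.map` returns results in input order. Since threads don't need strict execution order, `as_completed` hands you each result as soon as it finishes — a minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x: int) -> int:
    return x * x

with ThreadPoolExecutor(max_workers=4) as executor:
    # Map each future back to its input so we know which task finished
    futures = {executor.submit(square, n): n for n in range(5)}
    results = {}
    for future in as_completed(futures):   # yields futures as they finish
        n = futures[future]
        results[n] = future.result()       # re-raises here if the task failed
```

This pattern also gives you a natural place to handle per-task exceptions, which `executor.map` would only surface when you iterate past the failing item.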

02 AsyncIO: Perfect for High-Concurrency I/O

Async is not multithreading. It’s a single thread cooperatively switching between tasks at await points.

Use Async when:

  • You’re building high-performance APIs (FastAPI)
  • You have thousands of concurrent I/O tasks
  • You want predictable execution flow

Avoid Async when:

  • You’re doing CPU work
  • You call blocking libraries (requests, pandas, synchronous PostgreSQL drivers such as psycopg2)

Example (AsyncIO)

import aiohttp
import asyncio
from typing import List

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url) as response:
        return await response.text()

async def main() -> List[str]:
    urls: List[str] = ["https://api.github.com", "https://google.com"]
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        # gather runs all requests concurrently and preserves input order
        return await asyncio.gather(*tasks)

asyncio.run(main())
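If you are stuck with a blocking library inside async code, one escape hatch is `asyncio.to_thread` (Python 3.9+), which runs the call in a worker thread so the event loop stays free — a minimal sketch with a stand-in blocking function:

```python
import asyncio
import time

def blocking_work(n: int) -> int:
    time.sleep(0.1)   # stands in for requests, pandas, a sync DB driver, etc.
    return n * 2

async def main() -> list[int]:
    # Each blocking call is offloaded to the default thread pool,
    # so the event loop keeps servicing other coroutines meanwhile.
    return await asyncio.gather(
        *(asyncio.to_thread(blocking_work, n) for n in range(5))
    )

results = asyncio.run(main())
```

This is a bridge, not a cure: the blocking work still consumes a thread, so it scales like threading, not like pure async.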

03 Multiprocessing: The Only Real Solution for CPU Work

Want parallelism? You’ll never get it with threads (GIL says no).

Use Processes when:

  • You're doing CPU-heavy workloads (image processing, parsing, number crunching)

Avoid when:

  • Tasks are tiny (process startup is expensive)

Example (ProcessPoolExecutor)

from concurrent.futures import ProcessPoolExecutor
from typing import List

def heavy_compute(x: int) -> int:
    return x * x * x

if __name__ == "__main__":  # required on spawn-based platforms (Windows, macOS)
    numbers: List[int] = list(range(10000))

    with ProcessPoolExecutor() as executor:
        results = list(executor.map(heavy_compute, numbers))

04 Background Queues: The Real Production Way

Your API shouldn’t process heavy tasks directly.

That's where queues come in:

  • Celery
  • RQ
  • Dramatiq
  • FastAPI background tasks

Use Queues when:

  • You don’t want users waiting
  • Work is heavy or slow
  • You want retries & fault tolerance
  • You’re designing a microservice architecture

Example (Celery Task)

from celery import Celery

app = Celery("worker", broker="redis://localhost:6379")

@app.task
def generate_report(user_id: int) -> str:
    return f"Report created for {user_id}"
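Celery needs a running broker, but the core pattern — enqueue now, process later in a worker — can be sketched with the stdlib alone. This is a toy in-process stand-in, not a replacement for a real broker (no persistence, no retries):

```python
import queue
import threading

jobs: "queue.Queue[int | None]" = queue.Queue()
results: list[str] = []

def worker() -> None:
    while True:
        user_id = jobs.get()
        if user_id is None:            # sentinel: shut the worker down
            break
        results.append(f"Report created for {user_id}")
        jobs.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

for uid in (1, 2, 3):
    jobs.put(uid)                      # the "API" returns immediately
jobs.put(None)
t.join()
```

Real queue systems add exactly what this toy lacks: the broker survives restarts, failed tasks can be retried, and workers scale across machines.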

Final Takeaway

Professional developers don’t guess. They pick the concurrency model that matches the workload.

Threads = simple I/O

Async = high concurrency

Processes = CPU bound

Queues = real production architecture

Master this, and your Python systems automatically jump into the “senior-level” category.
