Shared-Memory Concurrency Models

If your tasks share memory within the same process, your choice of concurrency model will shape everything — performance, complexity, and testability.

Here’s a quick breakdown of the major shared-memory concurrency models and when to use them:


1️⃣ No Concurrency (Single-Threaded)

What it is: A single flow of instructions. One task runs at a time.

Pros:

  • Easiest model to reason about
  • Minimal complexity

Cons:

  • Tasks must wait their turn

Good for:

  • Simple scripts
  • Cases where concurrency is handled externally (e.g., running multiple processes)

Sometimes boring is beautiful.


2️⃣ OS Threads

What it is: One thread per task.

Pros:

  • True parallelism
  • Code is easy to understand (until you need to share state between threads)

Cons:

  • Threads are relatively heavy (memory + setup cost)
  • Shared data introduces synchronization complexity

Good for:

  • Independent tasks with minimal shared state

This is the “classic” concurrency model most engineers first learn.
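
The synchronization complexity above can be seen in even a tiny example. Here is a minimal sketch in Python: several threads increment a shared counter, and a lock is what makes the result deterministic (the counter name and counts are illustrative, not from any particular codebase):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    """Increment the shared counter n times, taking the lock each time."""
    global counter
    for _ in range(n):
        with lock:  # without this, concurrent increments can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 — deterministic only because of the lock
```

Remove the `with lock:` and the program still runs, but the final count becomes unpredictable. That gap between "compiles and runs" and "is actually correct" is the core cost of this model.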


3️⃣ Event Callbacks

What it is: Programs consist of callbacks that respond to events. Execution flows through emitted events.

Pros:

  • Lightweight
  • Fine-grained control over execution

Cons:

  • Code is scattered across callbacks
  • Harder to test
  • “Callback hell” in complex systems

Good for:

  • High-performance systems where you need tight control over scheduling

Powerful, but easy to abuse.
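
To make the "code scattered across callbacks" point concrete, here is a minimal sketch of an event emitter (the class and event names are invented for illustration). Notice that the request/response flow is split across two handlers rather than reading top-to-bottom:

```python
from collections import defaultdict
from typing import Callable

class EventEmitter:
    """A minimal event emitter: handlers register for named events."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, *args) -> None:
        for handler in self._handlers[event]:
            handler(*args)

emitter = EventEmitter()
log = []

# Control flow is scattered: each step of the workflow lives in its own callback.
emitter.on("request", lambda path: emitter.emit("response", f"served {path}"))
emitter.on("response", lambda body: log.append(body))

emitter.emit("request", "/index.html")
print(log)  # ['served /index.html']
```

With two events this is manageable; with twenty, tracing a single logical operation through the handlers is where "callback hell" sets in.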


4️⃣ Promises / Futures

What it is: Tasks are wrapped in futures/promises and scheduled on an executor. Futures can be chained together to form workflows.

Pros:

  • Better separation of logic and scheduling
  • Easier to test than callbacks

Cons:

  • Less intuitive than synchronous code

Good for:

  • Independent concurrent tasks that don't need to share much state

A big step forward from raw callbacks.
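
A rough sketch of this model using Python's `concurrent.futures` (the `fetch` and `word_count` functions are stand-ins for real work): tasks are submitted to an executor, each returns a future, and results from one stage feed the next:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for a blocking network call.
    return f"contents of {url}"

def word_count(text: str) -> int:
    return len(text.split())

with ThreadPoolExecutor(max_workers=4) as pool:
    # Each task is wrapped in a Future; the executor decides when it runs.
    futures = [pool.submit(fetch, u) for u in ("a.example", "b.example")]
    # Workflow step: feed each fetch result into the next stage.
    counts = [pool.submit(word_count, f.result()).result() for f in futures]

print(counts)  # [3, 3]
```

The scheduling lives in the executor, while `fetch` and `word_count` stay plain synchronous functions, which is exactly what makes this model easier to test than callbacks.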


5️⃣ Cooperative Multitasking (Async/Await)

What it is: Code looks synchronous, but functions yield control when blocked (I/O, sleep, etc.). The runtime schedules other tasks until they’re ready again.

Examples: async/await in Python and JavaScript.

Pros:

  • Reads like synchronous code
  • Lightweight
  • Easier to reason about
  • Fewer synchronization primitives needed, since control only switches at explicit await points

Cons:

  • Codebase is split into async and sync functions

Good for:

  • I/O-bound systems
  • Applications where concurrent tasks share state

For many modern applications, this hits the sweet spot.
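
The "reads like synchronous code" claim is easiest to see in a small asyncio sketch (task names and delays are illustrative). Two coroutines run concurrently on one thread, and control only switches at the `await` points:

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # await yields control to the event loop while this task is "blocked".
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Both coroutines make progress concurrently on a single thread.
    # No locks are needed here: control switches only at explicit awaits.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.01))

results = asyncio.run(main())
print(results)  # ['a done', 'b done']
```

The body of `fetch` reads top-to-bottom like ordinary synchronous code, yet both calls overlap their waiting, which is the sweet spot described above.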


6️⃣ Virtual Threads

What it is: The platform drastically reduces the cost of threads, allowing thousands (or more) in a single process.

Example: Virtual threads in modern Java.

Pros:

  • Synchronous programming model
  • Lightweight
  • No async/sync split

Cons:

  • On platforms that schedule virtual threads preemptively, more synchronization primitives are needed to ensure correctness

Good for:

  • CPU- or I/O-bound applications
  • Shared-state concurrent applications

It’s like having the simplicity of threads without the traditional cost.


🧠 Final Thought

There’s no universally “best” concurrency model.

  • If you want simplicity → Single-threaded
  • If tasks are mostly independent → Futures
  • If you’re I/O-bound → Async/virtual threads

The right model depends on workload characteristics and team expertise.

Concurrency isn’t just about speed — it’s about choosing the right mental model.
