Shared-Memory Concurrency Models
If your tasks share memory within the same process, your choice of concurrency model will shape everything — performance, complexity, and testability.
Here’s a quick breakdown of the major shared-memory concurrency models and when to use them:
1️⃣ No Concurrency (Single-Threaded)
What it is: A single flow of instructions. One task runs at a time.
Pros:
- Trivial to reason about, debug, and test; no locks, no races.
Cons:
- Can't overlap I/O waits or use extra cores; one slow task blocks everything.
Good for:
- Scripts, CLI tools, and low-traffic services where simplicity beats throughput.
Sometimes boring is beautiful.
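The model in a nutshell: each task runs to completion before the next one starts. A minimal Python sketch (the `fetch` task and its names are illustrative):

```python
import time

def fetch(name: str) -> str:
    # Simulate a blocking I/O call; the whole program waits here.
    time.sleep(0.01)
    return f"{name}: done"

# One flow of control: tasks run strictly one after another.
results = [fetch(n) for n in ["a", "b", "c"]]
```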
2️⃣ OS Threads
What it is: One thread per task.
Pros:
- True parallelism across CPU cores; plain blocking code and familiar tooling.
Cons:
- Threads are relatively expensive, so they don't scale to huge task counts; shared state needs locks, inviting races and deadlocks.
Good for:
- CPU-bound work and moderate numbers of concurrent tasks.
This is the “classic” concurrency model most engineers first learn.
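A minimal Python sketch of one-thread-per-task, showing the part that makes this model hard: shared state must be guarded by a lock:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, concurrent increments can be lost.
        with lock:
            counter += 1

# One OS thread per task.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```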
3️⃣ Event Callbacks
What it is: Programs consist of callbacks that respond to events. Execution flows through emitted events.
Pros:
- Very low overhead per task; a natural fit for event loops.
Cons:
- Inverted control flow ("callback hell"); error handling and stack traces get painful.
Good for:
- GUIs, network servers, and event-driven libraries.
Powerful, but easy to abuse.
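To make the control flow concrete, here is a toy event emitter in Python (`on`, `emit`, and the event names are illustrative, not a real library):

```python
from collections import defaultdict
from typing import Callable

# Registry mapping event names to lists of callbacks.
handlers: dict[str, list[Callable]] = defaultdict(list)

def on(event: str, callback: Callable) -> None:
    handlers[event].append(callback)

def emit(event: str, *args) -> None:
    for callback in handlers[event]:
        callback(*args)

log = []
on("request", lambda path: log.append(f"handling {path}"))
on("request", lambda path: emit("done", path))  # callbacks can emit further events
on("done", lambda path: log.append(f"finished {path}"))

# Execution flows through emitted events, not a linear call sequence.
emit("request", "/home")
```

Notice that to follow the logic you must trace event names across handlers, which is exactly why this model gets hard to maintain at scale.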
4️⃣ Promises / Futures
What it is: Tasks are wrapped in futures/promises and scheduled on an executor. Futures can be chained together to form workflows.
Pros:
- Composable: chain steps, combine results, and propagate errors through the pipeline.
Cons:
- Long chains get verbose; debugging across executor boundaries is awkward.
Good for:
- Workflows of dependent asynchronous operations.
A big step forward from raw callbacks.
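A small Python sketch using `concurrent.futures`. Python's `Future` has no `.then()`, so this chains by blocking on `result()` inside a second submitted task; the `fetch_user`/`render` functions are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_user(user_id: int) -> dict:
    # Stand-in for an async data fetch.
    return {"id": user_id, "name": f"user{user_id}"}

def render(user: dict) -> str:
    return f"<h1>{user['name']}</h1>"

with ThreadPoolExecutor(max_workers=2) as pool:
    user_future = pool.submit(fetch_user, 42)
    # Chain a dependent step: it waits on the first future's result.
    page_future = pool.submit(lambda: render(user_future.result()))
    page = page_future.result()
```

In languages with richer future APIs (e.g., Java's `CompletableFuture.thenApply`), the chaining is expressed directly instead of via a wrapper task.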
5️⃣ Cooperative Multitasking (Async/Await)
What it is: Code looks synchronous, but functions yield control when blocked (I/O, sleep, etc.). The runtime schedules other tasks until they’re ready again.
Examples: async/await in Python and JavaScript.
Pros:
- Synchronous-looking code with high I/O concurrency; typically single-threaded, so fewer data races.
Cons:
- "Function coloring": async spreads through the call stack; one accidental blocking call stalls the whole event loop.
Good for:
- I/O-bound services handling many concurrent connections.
For many modern applications, this hits the sweet spot.
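A minimal `asyncio` sketch of the model: each `await` yields control so the runtime can run other tasks while one is blocked:

```python
import asyncio

async def fetch(name: str) -> str:
    # await hands control back to the event loop while "waiting".
    await asyncio.sleep(0.01)
    return f"{name}: done"

async def main() -> list[str]:
    # The three sleeps overlap, so total wall time is ~0.01s, not ~0.03s.
    return await asyncio.gather(*(fetch(n) for n in ["a", "b", "c"]))

results = asyncio.run(main())
```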
6️⃣ Virtual Threads
What it is: The platform drastically reduces the cost of threads, allowing thousands (or more) in a single process.
Example: Virtual threads in modern Java.
Pros:
- Plain blocking code that scales to very large numbers of concurrent tasks.
Cons:
- Still maturing; shared state still needs synchronization, and certain native or synchronized calls can pin the underlying carrier thread.
Good for:
- High-throughput servers doing lots of blocking I/O.
It’s like having the simplicity of threads without the traditional cost.
🧠 Final Thought
There’s no universally “best” concurrency model.
The right model depends on workload characteristics and team expertise.
Concurrency isn’t just about speed — it’s about choosing the right mental model.