Threads vs Processes

A process is the fundamental unit that owns memory in an operating system. When a program starts, the OS sets up everything it needs:

  • a virtual address space,
  • code (text segment),
  • global/static variables,
  • a heap for dynamic allocation,
  • memory‑mapped files and shared libraries.

This entire memory layout belongs to the process, not to any thread inside it.

Threads Live Inside a Process

A thread is not a separate program with its own memory. Instead, a thread represents an execution path running within the memory of its parent process.

You can think of a process as a house, and threads as people in different rooms:

  • The house (process) owns all resources: furniture, appliances, electricity, address.
  • The people (threads) live inside the same house and use the same resources.
  • Each person has their own notebook, hands, position, and thoughts → equivalent to registers, stack, program counter, execution context.

But they all share:

  • the kitchen (global variables)
  • the living room (heap)
  • the garage (memory‑mapped files)
  • the backyard (open file descriptors)

So if one thread rearranges furniture (modifies memory), every other thread sees it instantly.
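The "rearranging furniture" point can be sketched with Python's threading module (the names here are illustrative):

```python
import threading

# A module-level list lives in the process's shared memory.
furniture = ["sofa", "table"]

def rearrange():
    # This thread mutates the very same list object the main thread holds.
    furniture.append("lamp")
    furniture.remove("sofa")

t = threading.Thread(target=rearrange)
t.start()
t.join()  # wait for the worker to finish

# The main thread observes the change with no copying or messaging.
print(furniture)  # → ['table', 'lamp']
```

No data was sent anywhere: both threads simply read and write the same memory.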

Memory Sharing Is Built Into Thread Architecture

When a process creates additional threads, the OS does not allocate new memory spaces. Instead:

  • Every thread uses the same page tables,
  • the same virtual addresses map to the same physical memory.

This means:

A pointer created in one thread can be used in another without special handling.

This makes multi‑threading very fast for sharing data — there is no copy and no need for inter‑process communication (IPC).
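In Python terms, the "pointer" is an object reference; a rough sketch of handing one to a thread (id() stands in for the shared address, and all names are made up for the example):

```python
import threading

payload = {"status": "new"}
worker_saw = {}

def worker(obj):
    # The thread receives a reference, not a copy: same object identity.
    worker_saw["id"] = id(obj)
    obj["status"] = "done"   # mutate in place; no serialization, no IPC

t = threading.Thread(target=worker, args=(payload,))
t.start()
t.join()

print(worker_saw["id"] == id(payload))  # True — both threads saw the same object
```

Between processes, the same hand-off would require pickling the dict and sending it over a pipe or queue.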

What Memory Is Private to Each Thread?

Only three components are unique to each thread:

1. Its own stack

Used for:

  • local variables
  • function calls
  • return addresses

Each thread gets its own stack region inside the process.
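Because each thread has its own stack, locals in one thread never collide with locals in another. A small sketch (function and thread names are illustrative):

```python
import threading

results = {}

def countdown(n):
    # n and total live in this thread's private stack frame;
    # two threads running this function do not interfere.
    total = 0
    while n > 0:
        total += n
        n -= 1
    results[threading.current_thread().name] = total

a = threading.Thread(target=countdown, args=(3,), name="a")
b = threading.Thread(target=countdown, args=(100,), name="b")
a.start(); b.start()
a.join(); b.join()

print(results)  # {'a': 6, 'b': 5050} — each thread kept its own n and total
```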

2. Its CPU registers

These store things like:

  • general-purpose registers
  • program counter (instruction pointer)
  • stack pointer

3. Its scheduling metadata

The OS keeps book‑keeping info about each thread:

  • state (running, waiting)
  • priority
  • OS thread ID

Every other part of memory is shared across the process.
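The OS-level thread ID from that bookkeeping is visible from Python via threading.get_native_id() (Python 3.8+). A quick sketch, using a barrier only to keep both workers alive at the same time so the kernel cannot reuse an ID:

```python
import threading

ids = []
barrier = threading.Barrier(2)

def report():
    # native_id is the kernel's identifier for this thread,
    # the same one the OS scheduler tracks.
    ids.append(threading.get_native_id())
    barrier.wait()

threads = [threading.Thread(target=report) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

main_id = threading.get_native_id()
print(main_id, ids)  # three distinct kernel thread IDs
```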

The Tradeoff: Speed vs Safety

Because threads share so much, thread‑to‑thread communication is extremely fast:

  • No memory copying
  • No IPC overhead
  • No serialization
  • No context switching between separate address spaces

But the drawback is correctness.

❌ If two threads write to the same variable at the same time:

  • values can interleave
  • reads can see half‑updated data
  • state may become corrupted

This leads to race conditions.

To prevent this, threads must use:

  • mutexes (locks)
  • atomic operations
  • memory barriers/fences
  • condition variables

Without synchronization, shared memory becomes a source of unpredictable bugs.
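A minimal sketch of the first tool on that list, a mutex, protecting the classic read-modify-write hazard:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:       # mutual exclusion around the read-modify-write
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 on every run
```

With the lock, the total is deterministic; remove it and lost updates become possible, because `counter += 1` is a load, an add, and a store that another thread can interleave with.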

Final Summary

Threads are fast because they share memory. But that same shared memory means:

  • No isolation
  • No automatic safety
  • High risk if threads modify shared data unsafely

In contrast, processes have:

  • separate memory
  • separate address spaces
  • safer isolation
  • slower communication (because of IPC)
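The isolation side of that list can be sketched with the multiprocessing module (a simplified illustration; real programs would share data explicitly via queues, pipes, or shared memory):

```python
import multiprocessing as mp

data = ["original"]

def child_edit():
    # Runs in a separate process with its own address space:
    # this mutation is invisible to the parent.
    data.append("child-only")

if __name__ == "__main__":
    p = mp.Process(target=child_edit)
    p.start()
    p.join()
    # The parent's list is untouched — the child modified its own copy.
    print(data)  # ['original']
```

This is the mirror image of the earlier thread examples: the same mutation that every thread saw instantly is invisible across a process boundary unless you pay for IPC.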

So:

Threads trade isolation for raw execution efficiency. Processes trade efficiency for safety and separation.
