How Operating Systems Manage Processes: Dispatching & Scheduling

What Is a Process?

A process is a program in execution. Unlike a static binary file on disk, a process has:

  • PID – unique process ID
  • Program counter – next instruction to execute
  • Stack – local variables, function call frames
  • Heap – dynamically allocated memory
  • File descriptors – open files, sockets
  • Process Control Block (PCB) – the kernel's struct that tracks all of the above

Process State Diagram

Every process moves through a finite set of states:

          fork()
            │
            ▼
        ┌────────┐
        │  NEW   │
        └────┬───┘
             │ admitted
             ▼
        ┌────────┐   scheduler dispatch   ┌─────────┐
        │ READY  │ ───────────────────────►│ RUNNING │
        │ queue  │ ◄───────────────────── │  (CPU)  │
        └────────┘    preempt / time-out  └────┬────┘
             ▲                                 │
             │         I/O or event wait       │
             │   ┌──────────────────────────── ┘
             │   ▼
        ┌────────────┐
        │  WAITING   │  (blocked on I/O, mutex, sleep…)
        └─────┬──────┘
              │ I/O complete / event fires
              └──────────────────► READY
                                        │
                                        │ exit()
                                        ▼
                                   ┌─────────┐
                                   │ ZOMBIE  │ ◄── parent hasn't called wait()
                                   └─────────┘
                                        │ wait()
                                        ▼
                                   ┌────────────┐
                                   │ TERMINATED │
                                   └────────────┘

The Dispatcher

The dispatcher is the low-level kernel component that hands CPU control to a process chosen by the scheduler. It does three things:

  1. Context switch – saves CPU registers of current process into its PCB, loads registers of the next process from its PCB
  2. Mode switch – switches the CPU from kernel mode back to user mode
  3. Jump – sets the program counter to resume the new process

Dispatch latency = time to stop one process and start another. Minimising it is critical for interactive responsiveness.


Scheduling Algorithms

1. First-Come, First-Served (FCFS)

Simple queue. No preemption. Long jobs block short ones ("convoy effect").

2. Shortest Job First (SJF)

Optimal average wait time — but requires knowing burst time in advance. Starvation risk for long processes.

3. Round Robin (RR)

Each process gets a fixed time quantum (typically 10–100 ms). After quantum expires → preempted → back of ready queue.

Time:  0    10   20   30   40
       [P1] [P2] [P3] [P1] [P2] ...   (quantum = 10ms)

Best for time-sharing systems (Linux desktop, web servers).

4. Multilevel Feedback Queue (MLFQ)

The approach behind the schedulers in Windows and macOS; Linux also used an MLFQ-style scheduler before adopting CFS (see below).

  • Multiple queues with different priorities
  • New processes start at highest priority
  • If a process uses its full quantum → demoted one level
  • I/O-bound processes (use little CPU) stay at high priority

Priority 0 (highest): ──[P_interactive]──
Priority 1:           ──[P_mixed]──
Priority 2 (lowest):  ──[P_batch]──[P_batch]──

5. Completely Fair Scheduler (CFS) — Linux

Instead of fixed quanta, CFS tracks vruntime (virtual runtime) per process. The process with the lowest vruntime always runs next. Stored in a red-black tree for O(log n) scheduling.


Context Switch Deep Dive

User process A running
        │
   timer interrupt (or syscall)
        │
        ▼
  kernel saves A's registers → PCB_A
  kernel runs scheduler → picks B
  kernel loads B's registers ← PCB_B
        │
        ▼
User process B running

Cost: ~1–10 µs. Includes:

  • Register save/restore
  • TLB flush (on architectures without ASID tagging)
  • Cache warming for new process

Fork & Exec

char *args[] = { "ls", NULL };
pid_t pid = fork();            // duplicates the process
if (pid == 0) {
    execv("/bin/ls", args);    // replaces memory image
    _exit(127);                // only reached if exec fails
} else {
    wait(NULL);                // parent waits for child
}
  • fork() → copy-on-write clone of parent
  • exec() → loads new binary, replaces address space
  • Together they implement "create a new program"

Inter-Process Communication (IPC)

| Method | Speed | Use case |
|---|---|---|
| Pipe | Fast | Parent ↔ child |
| Named pipe (FIFO) | Fast | Unrelated processes |
| Message queue | Medium | Decoupled producers/consumers |
| Shared memory | Fastest | High-throughput data sharing |
| Socket | Slow (flexible) | Network or local cross-machine |
| Signal | Instant (limited) | Notifications (SIGKILL, SIGTERM) |

Key Metrics

| Metric | Definition |
|---|---|
| CPU utilization | % of time CPU is busy (target: 40–90%) |
| Throughput | Processes completed per second |
| Turnaround time | Finish time − arrival time |
| Waiting time | Time spent in ready queue |
| Response time | Time from request to first response |

Summary

Program on disk
      │  fork/exec
      ▼
  Process (PCB created)
      │
      ▼
  Ready Queue  ◄──── preemption
      │  dispatcher
      ▼
  CPU execution
      │
   ┌──┴──┐
   │     │
  I/O  exit()
   │
  Wait → Ready → CPU …

The OS constantly cycles processes through this loop, creating the illusion that hundreds of programs run simultaneously on just a few CPU cores.


Threads vs Processes

A thread is a lightweight unit of execution within a process. All threads in a process share the same heap and file descriptors, but each has its own stack and registers.

Process
├── PCB (PID, file descriptors, heap, code segment)
├── Thread 1 → stack, registers, program counter
├── Thread 2 → stack, registers, program counter
└── Thread 3 → stack, registers, program counter

| | Process | Thread |
|--|--|--|
| Address space | Own (isolated) | Shared with sibling threads |
| Stack & registers | Own | Own |
| Heap & file descriptors | Own | Shared |
| Creation cost | Higher (fork, copy-on-write setup) | Lower |
| Failure isolation | A crash is contained | A crash can take down the whole process |

How Operating Systems Manage Processes: Dispatching & Scheduling | SourceDev