CPU Scheduling in Operating Systems


CPU scheduling is a fundamental operating system concept: the allocation of central processing unit (CPU) time among the multiple processes competing for it in a computer system.

To build intuition, consider a supermarket checkout. Customers waiting in line correspond to processes in the "ready" state in the ready queue, and the cashier is analogous to the CPU. The cashier serves one customer at a time (the process in the "running" state), and when a new customer demands attention (an interrupt), the cashier may switch to serving that customer (preemption).

General Flow of CPU Scheduling

[Figure: general flow of CPU scheduling through the new, ready, running, waiting, and terminated states. Source: Author]

As the diagram shows, new jobs enter the system and, once ready, move to the ready queue. When no process is running on the CPU, the CPU scheduler selects a process and the dispatcher moves it from the ready queue onto the CPU. If a running process issues an I/O operation, it must wait, so it is moved to the wait queue; once the I/O completes, it moves back from the wait queue to the ready queue. When a process has finished using the CPU, scheduling marks that process as terminated.

CPU Scheduler

  • Whenever the CPU becomes idle, it is the job of the CPU Scheduler to select another process from the ready queue to run next.
  • The process chosen from the ready queue is not always the next one in FIFO order; it depends on which scheduling algorithm is in use. The algorithms are discussed below.

Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. This function involves:

  1. Context switching.
  2. Switching to user mode.
  3. Jumping to the proper location in the newly loaded program.

The dispatcher needs to be as fast as possible, as it is run on every context switch. The time consumed by the dispatcher is known as dispatch latency.

Scheduling Criteria

Scheduling criteria refer to the principles and factors that operating systems consider when deciding how to allocate CPU time to different processes.

  • CPU Utilization : Maximizing CPU utilization ensures that the processor is actively performing tasks.
  • Throughput : Total number of processes that complete their execution within a given time frame.
  • Turnaround time : Time required for a particular process to complete, from submission to completion (wall-clock time).
  • Waiting time : How much time processes spend in the ready queue waiting their turn to get on the CPU.
  • Response time : The time taken for a system to respond to an input or request.
  • Context Switching Overhead : Minimizing the overhead associated with context switching between processes is important to maintain system efficiency.
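As a concrete illustration, turnaround and waiting time can be derived from a process's arrival, burst, and completion times. The numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical illustration of two scheduling metrics:
#   turnaround = completion - arrival  (wall-clock time in the system)
#   waiting    = turnaround - burst    (time spent off the CPU)
def metrics(arrival, burst, completion):
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# A process that arrives at t=2, needs 4 units of CPU, and finishes at
# t=10 spent 8 units in the system, 4 of them waiting.
print(metrics(arrival=2, burst=4, completion=10))  # (8, 4)
```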

CPU Scheduling Algorithms in OS
[Figure: overview of CPU scheduling algorithms. Source: Author]

1. First Come First Serve Scheduling, FCFS

This algorithm is as simple as a FIFO queue, but it can yield very long average waiting times.

[Figure: FCFS scheduling example. Source: Author]

The figure above shows the flow of FCFS scheduling, assuming all processes arrive at time 0. When P1 starts running, it runs for its complete burst time of 30, so P2 and P3 must wait in the ready queue until P1 finishes. Once P1 completes, P2 gets to run, then P3.

FCFS is suitable for scenarios where processes have similar burst times, and simplicity is more critical than optimizing waiting times. However, in practice, it might not be the best choice for many applications due to its potential for high waiting times.
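A minimal FCFS sketch in Python, assuming all processes arrive at time 0. P1's burst of 30 follows the example above; the bursts for P2 and P3 are hypothetical:

```python
# Minimal FCFS sketch (non-preemptive, all arrival times assumed 0).
def fcfs(bursts):
    """Return (waiting_times, turnaround_times) for FIFO order."""
    waiting, turnaround = [], []
    clock = 0
    for burst in bursts:
        waiting.append(clock)     # time spent in the ready queue so far
        clock += burst            # run the process to completion
        turnaround.append(clock)  # completion time (arrival = 0)
    return waiting, turnaround

wait, tat = fcfs([30, 4, 6])  # P1, P2, P3
print(wait)  # [0, 30, 34]
print(tat)   # [30, 34, 40]
```

Note how the short jobs P2 and P3 inherit P1's entire burst as waiting time, which is exactly the long-average-wait problem described above.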

2. Shortest Job First Scheduling, SJF

SJF algorithm selects the process with the shortest burst time to execute first. It aims to minimize the total processing time and reduce waiting times. SJF can be preemptive (Shortest Remaining Time First - SRTF) or non-preemptive.

[Figure: SJF scheduling example. Source: Author]

As the figure shows, the process with the shortest burst time runs first, and the rest follow in order of increasing burst time, assuming all processes arrive at time 0.

This minimizes waiting and turnaround times and is efficient in terms of total processing time.

However, it is difficult to implement in a non-preemptive setting because it requires knowledge of future burst times. It can also lead to starvation if long jobs are repeatedly bypassed by newly arriving short jobs.
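A non-preemptive SJF sketch, again assuming all processes arrive at time 0 (the burst times are hypothetical, reusing the FCFS numbers for comparison):

```python
# Non-preemptive SJF sketch: run jobs in order of increasing burst time.
# All arrival times are assumed to be 0, and bursts are known up front.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = [0] * len(bursts)
    clock = 0
    for i in order:
        waiting[i] = clock   # everything run so far is this job's wait
        clock += bursts[i]
    return waiting

print(sjf_waiting_times([30, 4, 6]))  # [10, 0, 4]
```

With the same bursts as the FCFS example, the average wait drops from (0+30+34)/3 ≈ 21.3 to (10+0+4)/3 ≈ 4.7.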

3. Priority Scheduling

In this algorithm, each process is assigned a priority, and the process with the highest priority is selected for execution first. Processes with the same priority are scheduled by other criteria, such as arrival time or another scheduling algorithm.

[Figure: priority scheduling example]

It allows for the prioritization of critical processes. This can be used to manage real-time applications where certain tasks need immediate attention.

It can lead to starvation of lower-priority processes if higher-priority processes keep arriving. In the case of equal priority, another scheduling criterion (e.g., FCFS) must be applied.
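One way to sketch non-preemptive priority scheduling, with FCFS as the tie-breaker mentioned above. Treating a lower number as higher priority is an assumption here; some systems use the opposite convention:

```python
# Non-preemptive priority scheduling sketch. Each process is a tuple
# (name, burst, priority); a lower priority number means more important.
# Ties fall back to FCFS (submission order), as described above.
def priority_order(procs):
    order = sorted(range(len(procs)), key=lambda i: (procs[i][2], i))
    clock, schedule = 0, []
    for i in order:
        name, burst, _ = procs[i]
        schedule.append((name, clock))  # record each process's start time
        clock += burst
    return schedule

print(priority_order([("P1", 10, 3), ("P2", 5, 1), ("P3", 8, 1)]))
# [('P2', 0), ('P3', 5), ('P1', 13)]
```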

4. Round Robin Scheduling

It is a preemptive CPU scheduling algorithm where each process is assigned a fixed time unit or time quantum. The scheduler cyclically assigns the CPU to each process for a small unit of time, and if a process's burst time exceeds the time quantum, it is moved to the back of the queue. This ensures fair distribution of CPU time among all processes.

[Figure: round robin scheduling example]

The figure above illustrates how round robin works, assuming all processes arrive at time 0.

RR gives a fair distribution of CPU time among processes and is well suited to time-sharing systems.

However, it can perform poorly for processes with long burst times and produce high turnaround times even for processes with short burst times.

Overall, Round Robin is a widely used scheduling algorithm, especially where fairness in CPU allocation is important, such as time-sharing systems or interactive environments.
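The quantum-and-requeue behaviour can be sketched as follows (the quantum and burst times are hypothetical):

```python
from collections import deque

# Round Robin sketch: each process runs for at most one time quantum,
# then goes to the back of the queue if it still has work left.
def round_robin(bursts, quantum):
    remaining = list(bursts)
    completion = [0] * len(bursts)
    queue = deque(range(len(bursts)))  # all processes arrive at t=0
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)        # quantum expired: back of the queue
        else:
            completion[i] = clock  # finished within this time slice
    return completion

print(round_robin([5, 3, 2], quantum=2))  # [10, 9, 6]
```

The longest job (burst 5) finishes last at t=10, illustrating why RR trades turnaround time for fairness.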

5. Multilevel Queue Scheduling

This scheduling technique divides the ready queue into multiple queues, each with its own priority level. Processes are assigned to a specific queue based on certain characteristics, and each queue follows its own scheduling algorithm. The queues are ordered by priority, and processes move between queues based on their behavior and requirements.

Let's look at an example of this scheduling. Consider three queues: High Priority, Medium Priority, and Low Priority.

  • High Priority Queue: Uses Round Robin scheduling with a short time quantum.
  • Medium Priority Queue: Uses Priority Scheduling.
  • Low Priority Queue: Uses First Come First Serve scheduling.

Multilevel Queue Scheduling is versatile and can accommodate a variety of workloads with different characteristics. It aims to provide a balance between responsiveness for high-priority tasks and fairness for lower-priority tasks.
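A deliberately simplified sketch of the three-queue example: the queues are served in strict priority order, and to keep the sketch short every queue is drained FCFS rather than running its own algorithm as in the example above:

```python
from collections import deque

# Simplified multilevel-queue sketch: three fixed-priority queues served
# in strict order (high, then medium, then low). In a real system each
# queue would use its own algorithm; here every queue is drained FCFS.
def multilevel(high, medium, low):
    finished = []
    for queue in (deque(high), deque(medium), deque(low)):
        while queue:
            name, burst = queue.popleft()
            finished.append(name)  # run (name, burst) to completion
    return finished

print(multilevel([("A", 2)], [("B", 3)], [("C", 1)]))  # ['A', 'B', 'C']
```

With strict priority, the low-priority queue never runs while a higher queue has work, which is exactly how starvation can arise without feedback between levels.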

6. Multilevel Feedback-Queue Scheduling

This scheduling is an extension of the Multilevel Queue Scheduling algorithm with the added capability of dynamically adjusting the priority of a process based on its behavior. In this scheduling algorithm, a process can move between different queues or levels based on its past behavior and resource requirements.

Multilevel feedback queue scheduling is the most flexible, because it can be tuned for any situation. But it is also the most complex to implement because of all the adjustable parameters. Some of the parameters which define one of these systems include:

  1. The number of queues.
  2. The scheduling algorithm for each queue.
  3. The methods used to upgrade or demote processes from one queue to another (which may differ between queues).
  4. The method used to determine which queue a process enters initially.
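A toy two-level feedback queue illustrating parameter 3, demotion on quantum expiry. The number of levels and the quanta are illustrative assumptions, not canonical values:

```python
from collections import deque

# Toy two-level MLFQ sketch: a job that exhausts the level-0 quantum is
# demoted to level 1, which has a longer quantum. Quanta are illustrative.
def mlfq_trace(bursts, quanta=(2, 4)):
    remaining = dict(enumerate(bursts))
    levels = [deque(remaining), deque()]  # level 0 starts with every job
    trace = []
    while levels[0] or levels[1]:
        lvl = 0 if levels[0] else 1       # strict priority between levels
        i = levels[lvl].popleft()
        run = min(quanta[lvl], remaining[i])
        remaining[i] -= run
        trace.append((i, lvl, run))       # (job, level, time run)
        if remaining[i] > 0:
            levels[1].append(i)           # demote on quantum expiry
    return trace

print(mlfq_trace([5, 2]))  # [(0, 0, 2), (1, 0, 2), (0, 1, 3)]
```

Job 0 needs 5 units, so it burns its level-0 quantum, drops to level 1, and finishes there; job 1 fits inside one level-0 quantum and never moves.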

Efficient CPU scheduling is paramount for achieving optimal system performance. This article has explored the intricacies of CPU scheduling: the key algorithms and the criteria that govern how CPU time is allocated to processes.

