shubhangisingh453

Introduction to CPU Scheduling in Operating Systems: Concepts and Algorithms



📗 Introduction


CPU scheduling is a process carried out by the operating system to allocate CPU time to processes that are waiting to be executed. CPU scheduling plays a crucial role in determining the overall system performance and throughput. It ensures that the system runs efficiently by balancing the needs of different processes and allocating CPU resources appropriately.


The CPU scheduling algorithm selects a process from the ready queue and allocates the CPU to it. The selection criteria for a process depend on various factors such as the priority of the process, the time it has been waiting in the queue, and the CPU burst time of the process. The CPU scheduling algorithm is responsible for determining the order in which the processes should be executed, and the amount of CPU time that each process should be allocated.


📗 Types of CPU Scheduling


There are several types of CPU scheduling algorithms used in operating systems, including:

  1. First-Come, First-Served (FCFS): Also known as First-In, First-Out (FIFO), this algorithm executes processes in the order they arrive. It is simple to implement, but can lead to long wait times for processes that arrive later.

  2. Shortest Job First (SJF): This algorithm executes processes based on their estimated CPU burst time. The process with the shortest estimated CPU burst time is executed first. It can lead to faster turnaround times for processes, but requires knowledge of the CPU burst times in advance.

  3. Priority Scheduling: This algorithm assigns a priority level to each process, and executes higher priority processes first. It can result in starvation for lower priority processes, and can be susceptible to priority inversion.

  4. Round Robin (RR): This algorithm executes processes in a circular order, assigning them a fixed time slice or quantum. If a process doesn't complete its execution within the time slice, it is preempted and added to the end of the ready queue. It provides fair allocation of CPU time, but can lead to higher overhead due to frequent context switching.

  5. Multilevel Queue: This algorithm divides the ready queue into several separate queues, each with its own priority level and scheduling algorithm. Processes are then assigned to the appropriate queue based on their characteristics. It can be effective in managing different types of processes with varying scheduling needs.

  6. Multilevel Feedback Queue: This algorithm is similar to multilevel queue scheduling, but it allows for processes to move between queues based on their behavior. Processes that use too much CPU time can be moved to a lower priority queue, while processes that are I/O bound can be moved to a higher priority queue. It can be effective in managing processes with dynamic scheduling needs.
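The feedback mechanism described in the last item can be sketched in a few lines of Python. The demotion policy here (a process that exhausts its quantum drops one level; the lowest level keeps cycling round-robin) is one common choice, not the only one, and the process names, quanta, and the assumption that all processes arrive at time 0 are purely illustrative:

```python
from collections import deque

def mlfq(processes, quanta):
    """Simulate a simple multilevel feedback queue.

    processes: dict mapping name -> burst time (all assumed to arrive at t=0).
    quanta: time quantum for each priority level; a process that uses up its
    whole quantum is demoted one level (hypothetical policy), and the lowest
    level runs round-robin with its own quantum.
    Returns a dict mapping name -> completion time.
    """
    levels = [deque() for _ in quanta]
    remaining = dict(processes)
    for name in processes:
        levels[0].append(name)          # everyone starts at the top level
    time, finish = 0, {}
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty
        name = levels[lvl].popleft()
        run = min(quanta[lvl], remaining[name])
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time
        else:
            # used its full quantum: demote unless already at the bottom
            levels[min(lvl + 1, len(levels) - 1)].append(name)
    return finish
```

With two levels (quanta 2 and 4), a CPU-hungry process quickly sinks to the slower bottom queue while a short process finishes near the top.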


📗 Important CPU Scheduling Terminologies


Some of the important terminologies related to CPU scheduling in operating systems are:

  1. Burst time: The amount of time required by a process to complete its execution is called burst time.

  2. Arrival time: The time when a process arrives in the ready queue is called arrival time.

  3. Completion time: The time at which a process completes its execution is called completion time.

  4. Turnaround time: The time difference between the completion time and the arrival time of a process is called turnaround time. It is the total time taken by a process to execute, including the time spent waiting in the ready queue.

  5. Waiting time: The amount of time a process spends waiting in the ready queue is called waiting time.

  6. Response time: The time from when a process first enters the ready queue until it first gets the CPU is called response time. Unlike turnaround time, it does not include the time the process spends executing, which makes it the key metric for interactive systems.

  7. Preemption: The act of temporarily suspending a process and moving it out of the CPU to allow another process to use the CPU is called preemption. It is used in preemptive scheduling algorithms.

  8. Scheduling algorithm: A set of rules that determines which process should be executed next is called a scheduling algorithm.

  9. Dispatcher: The component of the operating system that hands the CPU to the process selected by the scheduler is called the dispatcher. It performs the context switch and is part of the kernel.

  10. Quantum: The maximum amount of time a process can spend on the CPU before being preempted is called the quantum or time slice. It is used in round-robin scheduling.
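The last few definitions are tied together by two simple identities: turnaround time = completion time − arrival time, and waiting time = turnaround time − burst time. A tiny helper makes this concrete (the sample numbers are arbitrary):

```python
def metrics(arrival, burst, completion):
    """Compute turnaround and waiting time from the definitions above."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# e.g. a process arriving at t=2 with a 5-unit burst that finishes at t=12
# spent 10 units in the system, 5 of them waiting in the ready queue.
print(metrics(2, 5, 12))  # (10, 5)
```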

📗 Criteria for CPU Scheduling


CPU scheduling criteria are the metrics that are used to evaluate the performance of a CPU scheduling algorithm. The primary goal of CPU scheduling is to minimize the average waiting time and turnaround time, and maximize CPU utilization. Various criteria are used to measure the efficiency of a CPU scheduling algorithm. Here are some important CPU scheduling criteria:

  1. CPU Utilization: This criterion is used to measure the amount of time the CPU remains busy. An ideal CPU scheduling algorithm should maximize the CPU utilization.

  2. Throughput: It is the number of processes that complete their execution in a unit time. A good scheduling algorithm should aim to maximize the throughput.

  3. Turnaround Time: Turnaround time is the time taken to complete a process, i.e., from the moment it enters the ready queue until it finishes its execution. A good scheduling algorithm should aim to minimize the turnaround time.

  4. Waiting Time: It is the amount of time a process spends waiting in the ready queue for its turn to be executed. A good scheduling algorithm should aim to minimize the waiting time.

  5. Response Time: It is the time taken by the system to respond to a request generated by a process. A good scheduling algorithm should aim to minimize the response time.

  6. Fairness: It refers to the even distribution of CPU time among all the processes in the ready queue. A good scheduling algorithm should aim to provide fairness to all processes.

These criteria are used to evaluate the performance of CPU scheduling algorithms and to compare different algorithms. A good CPU scheduling algorithm should provide an optimal balance between these criteria.


📗 Scheduling Algorithms


Scheduling algorithms in an operating system decide which of the processes waiting in the ready queue gets the CPU next. Their purpose is to allocate system resources effectively, minimize waiting time, and maximize system throughput.


📗 Types of Scheduling Algorithm


♦ First-Come, First-Serve (FCFS)


First-Come, First-Serve (FCFS) is a non-preemptive CPU scheduling algorithm. In this algorithm, the process that arrives first is served first. FCFS is the simplest scheduling algorithm and works in a very intuitive manner. Let's dive into the details of the FCFS algorithm.

When a process enters the ready queue, it carries a burst time, which is the amount of CPU time required to complete the process. In FCFS, however, burst times play no role in the selection: the ready queue is treated as a first-in, first-out (FIFO) queue, and the process at the front of the queue is executed first, regardless of how long its burst is.

Here is an example to illustrate how FCFS works:

Assume that three processes, P1, P2, and P3, arrive in the ready queue in that order. In FCFS, processes are executed in the order in which they arrive, so the execution order is P1, then P2, then P3.

The waiting time for a process is the amount of time it spends in the ready queue before it starts executing. The turnaround time is the amount of time from when the process enters the system until it completes. In FCFS, both depend entirely on arrival order: every process must wait for all earlier arrivals to finish, so a short process stuck behind a long one can wait a long time (the so-called convoy effect).
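A minimal FCFS simulation in Python makes the waiting and turnaround computations concrete. The process names, arrival times, and burst times below are assumed for illustration, since they are not fixed by the algorithm itself:

```python
def fcfs(processes):
    """Non-preemptive FCFS: run processes strictly in arrival order.

    processes: list of (name, arrival, burst), assumed sorted by arrival.
    Returns a dict mapping name -> (waiting time, turnaround time).
    """
    time, result = 0, {}
    for name, arrival, burst in processes:
        start = max(time, arrival)     # CPU may sit idle until the arrival
        waiting = start - arrival
        time = start + burst           # completion time of this process
        result[name] = (waiting, time - arrival)
    return result

# hypothetical workload: P1, P2, P3 arriving at t = 0, 1, 2
print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 4)]))
# → {'P1': (0, 5), 'P2': (4, 7), 'P3': (6, 10)}
```

Note how P3's waiting time (6 units) is dominated by the bursts of the two earlier arrivals, which is exactly the convoy effect.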

♦ Shortest Job First (SJF) Scheduling Algorithm

Shortest Job First (SJF) is a scheduling algorithm in which the process with the shortest CPU burst time is executed first: the scheduler selects the process that has the shortest expected processing time.

The SJF scheduling algorithm can be implemented in two ways:

  1. Non-Preemptive SJF: In non-preemptive SJF scheduling, once a process has been allocated CPU time, it is not preempted until it completes its CPU burst. Non-preemptive SJF scheduling is also known as Shortest Job Next (SJN) scheduling.

  2. Preemptive SJF: In preemptive SJF scheduling, the process that has the shortest expected processing time is allocated CPU time, and if a new process with a shorter expected processing time arrives, it preempts the currently executing process. Preemptive SJF scheduling is also known as Shortest Remaining Time First (SRTF) scheduling.

SJF scheduling algorithm is based on the premise that the shorter the CPU burst time of a process, the higher its priority for execution. The following are the advantages and disadvantages of the SJF scheduling algorithm:

Advantages:

  • When all processes are available at the same time, SJF provably minimizes the average waiting time for a given set of processes.

  • The SJF algorithm is easy to implement and understand.

Disadvantages:

  • The main disadvantage of SJF scheduling is that it requires accurate knowledge of the CPU burst time of each process, which may not be available.

  • The SJF algorithm may cause starvation, where a process with a long CPU burst time may never get CPU time if shorter processes keep arriving.

  • The SJF algorithm is not suitable for interactive systems where the response time is critical.

Example:

Consider a set of processes with different burst times arriving at different instants. Under non-preemptive SJF, once the first arrival starts running, shorter jobs that arrive later must still wait for it to finish its entire burst. Under preemptive SJF (SRTF), a newly arrived job whose burst is shorter than the running process's remaining time takes over immediately, so short jobs finish sooner and the average waiting time is typically lower than in the non-preemptive case.
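A short sketch of non-preemptive SJF in Python, using an assumed workload (the names, arrival times, and burst times are illustrative):

```python
def sjf(processes):
    """Non-preemptive SJF: among arrived processes, run the shortest burst.

    processes: list of (name, arrival, burst).
    Returns a dict mapping name -> waiting time.
    """
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    time, waiting = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                  # CPU idles until the next arrival
            time = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waiting[name] = time - arrival
        time += burst                  # runs to completion, no preemption
        pending.remove((name, arrival, burst))
    return waiting

# hypothetical workload: (name, arrival, burst)
print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
```

Here P3, the shortest job, still waits for P1's entire 7-unit burst because the algorithm never preempts; the preemptive variant (SRTF, covered below) removes exactly that limitation.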


♦ Round Robin scheduling algorithm


Round Robin (RR) is a popular CPU scheduling algorithm used by operating systems. It is a pre-emptive scheduling algorithm in which the CPU is assigned to each process for a fixed time quantum. If the process does not complete its execution during this time quantum, the CPU is taken away and assigned to the next process in the ready queue.

Here is how the Round Robin scheduling algorithm works:

  1. When a process arrives, it is added to the end of the ready queue.

  2. The CPU is assigned to the first process in the ready queue for a fixed time quantum.

  3. If the process completes its execution within the time quantum, it is removed from the ready queue and the CPU is assigned to the next process in the queue.

  4. If the process does not complete its execution within the time quantum, it is pre-empted and added to the end of the ready queue. The CPU is then assigned to the next process in the queue.

  5. The algorithm continues until all processes have completed their execution.

Round Robin is a fair scheduling algorithm as each process gets an equal amount of time to execute. It is also good for interactive systems as it ensures that no process is starved of CPU time for too long.

The time quantum is an important parameter in the Round Robin scheduling algorithm. If the time quantum is too short, the overhead of context switching may become too high. On the other hand, if the time quantum is too long, the response time for interactive processes may become too long.

Round Robin algorithm can be implemented using a simple queue data structure. Each time a process is pre-empted, it is added to the end of the queue. The next process to be executed is always the first process in the queue.
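That queue-based implementation can be sketched directly with Python's `deque`. All processes are assumed to arrive at time 0, and the names, bursts, and quantum are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round-robin scheduling for processes all arriving at time 0.

    bursts: dict mapping name -> burst time.
    Returns a dict mapping name -> completion time.
    """
    queue = deque(bursts.items())      # (name, remaining time) pairs
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run for one quantum, or less
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, re-queued
        else:
            completion[name] = time
    return completion

print(round_robin({"P1": 8, "P2": 4, "P3": 6}, quantum=3))
# → {'P2': 13, 'P3': 16, 'P1': 18}
```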


Example: Let's consider three processes P1, P2, and P3 with burst times of 8, 4, and 6 respectively, and a time quantum of 3.

Time 0: P1, P2, and P3 are in the ready queue; P1 is assigned the CPU.

Time 3: P1 is pre-empted with 5 units remaining and added to the end of the ready queue. P2 is assigned the CPU.

Time 6: P2 is pre-empted with 1 unit remaining. P3 is assigned the CPU.

Time 9: P3 is pre-empted with 3 units remaining. P1 is assigned the CPU.

Time 12: P1 is pre-empted with 2 units remaining. P2 is assigned the CPU.

Time 13: P2 completes its execution. P3 is assigned the CPU.

Time 16: P3 completes its execution. P1 is assigned the CPU.

Time 18: P1 completes its execution.

In this example, the CPU rotates among the three processes in fixed 3-unit slices, so no process waits more than two full quanta before getting the CPU again.


♦ Shortest Remaining Time First (SRTF)


SRTF is a variation of Shortest Job First (SJF) scheduling algorithm. It is a pre-emptive algorithm, which means that if a new process with a shorter burst time arrives while a process is currently running, the running process will be stopped and the new process will be executed first. The SRTF algorithm selects the process that has the shortest remaining burst time to be executed next.

Let's walk through how the SRTF algorithm behaves: whenever a new process arrives, the scheduler compares its burst time with the remaining time of the currently running process and continues with whichever is shorter. Because short jobs are never stuck behind a long one, SRTF keeps the average waiting time low, at the cost of more frequent context switches.


Advantages:

  • Among preemptive algorithms, SRTF yields the minimum average waiting time for a given workload (assuming burst times are known in advance).

  • It is a preemptive algorithm, so it is beneficial for processes that have short burst time.

Disadvantages:

  • SRTF algorithm is complex as it requires keeping track of the remaining burst time of each process.

  • It requires knowing (or estimating) each process's remaining burst time in advance, which is rarely available in practice.
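Despite the extra bookkeeping, SRTF is short to simulate at unit-time granularity. The workload below (names, arrival times, and burst times) is assumed for illustration:

```python
def srtf(processes):
    """Preemptive SJF (SRTF): at each time unit, run the arrived process
    with the least remaining burst time.

    processes: list of (name, arrival, burst).
    Returns a dict mapping name -> waiting time.
    Unit-time simulation, written for clarity rather than speed.
    """
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: t for name, t, _ in processes}
    time, completion = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:                  # CPU idles until the next arrival
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1        # run for one time unit
        time += 1
        if remaining[current] == 0:
            completion[current] = time
            del remaining[current]
    # waiting = completion - arrival - burst
    return {n: completion[n] - a - b for n, a, b in processes}

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
# → {'P1': 6, 'P2': 2, 'P3': 0}
```

Notice that P3, the shortest job, waits not at all: it preempts P2 the moment it arrives, which is exactly what distinguishes SRTF from non-preemptive SJF.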


Thanks for reading, and happy coding!


