When it comes to managing processes in an operating system, choosing the right scheduling algorithm is crucial. In this short article, let's take a look at a few well-known CPU scheduling algorithms.
1. 𝐅𝐢𝐫𝐬𝐭-𝐂𝐨𝐦𝐞, 𝐅𝐢𝐫𝐬𝐭-𝐒𝐞𝐫𝐯𝐞𝐝 (𝐅𝐂𝐅𝐒):
Operating on the principle of “first come, first served,” FCFS allocates the CPU to the process that initiates the request first. It operates using a First-In-First-Out (FIFO) queue structure. When a process enters the ready queue, its Process Control Block (PCB) is appended to the queue’s tail. As soon as the CPU becomes available, it is assigned to the process at the head of the queue, which is subsequently removed. FCFS scheduling is non-preemptive, allowing a process to retain the CPU until it terminates or requests I/O.
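To make this concrete, here is a minimal Python sketch of FCFS for processes that all arrive at time 0; the function name fcfs_waiting_times and the sample burst values are illustrative choices, not part of any standard API.

from collections import deque

def fcfs_waiting_times(bursts):
    # Average waiting time under FCFS, assuming all processes
    # arrive at time 0. `bursts` lists each process's next CPU
    # burst length, in arrival order.
    queue = deque(bursts)        # FIFO ready queue
    clock = 0
    total_wait = 0
    while queue:
        burst = queue.popleft()  # process at the head gets the CPU
        total_wait += clock      # it has been waiting since time 0
        clock += burst           # non-preemptive: runs its full burst
    return total_wait / len(bursts)

print(fcfs_waiting_times([24, 3, 3]))  # 17.0

Note how the long first burst makes the two short jobs wait 24 and 27 time units; this sensitivity to arrival order is FCFS's main weakness.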
2. 𝐒𝐡𝐨𝐫𝐭𝐞𝐬𝐭 𝐉𝐨𝐛 𝐅𝐢𝐫𝐬𝐭 (𝐒𝐉𝐅):
SJF scheduling associates each process with the length of its next CPU burst. When the CPU becomes available, it is assigned to the process with the smallest next CPU burst; if two processes have equal next bursts, FCFS scheduling breaks the tie. SJF is optimal in the sense that it minimizes the average waiting time for a given set of processes, but since the length of the next CPU burst is generally not known in advance, it is most practical in long-term (job) scheduling, where users can supply run-time estimates. It can operate in both preemptive and non-preemptive modes: the preemptive variant, often called shortest-remaining-time-first, preempts the running process whenever a newly arrived process has a shorter remaining burst, whereas the non-preemptive variant lets the running process finish its current burst.
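A minimal sketch of the non-preemptive variant, again assuming all processes arrive at time 0; the function name and sample bursts are just for the example.

def sjf_waiting_times(bursts):
    # Average waiting time under non-preemptive SJF, all jobs at
    # time 0. Ties on burst length fall back to arrival order
    # (FCFS), hence the (burst, arrival index) sort key.
    order = sorted(range(len(bursts)), key=lambda i: (bursts[i], i))
    clock = 0
    total_wait = 0
    for i in order:
        total_wait += clock   # time this process spent waiting
        clock += bursts[i]    # run its full burst to completion
    return total_wait / len(bursts)

print(sjf_waiting_times([6, 8, 7, 3]))  # 7.0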
3. 𝐏𝐫𝐢𝐨𝐫𝐢𝐭𝐲 𝐒𝐜𝐡𝐞𝐝𝐮𝐥𝐢𝐧𝐠:
In priority scheduling, every process is assigned a priority, and the CPU is allocated to the process with the highest priority; processes with equal priority are scheduled in FCFS order. Both preemptive and non-preemptive variants are possible. However, priority scheduling introduces the risk of indefinite blocking, or starvation, where low-priority processes may wait indefinitely. To address this, the aging technique gradually raises the priority of processes that have waited in the system for a long time.
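The sketch below shows one way aging might look, under the common (but not universal) convention that a lower number means a higher priority; the function name, the age_step parameter, and the sample data are all assumptions for illustration.

def priority_order(procs, age_step=1):
    # Dispatch order under non-preemptive priority scheduling with
    # aging. `procs` is a list of (name, priority) pairs; a LOWER
    # number means a HIGHER priority here (a common convention,
    # though not a universal one).
    ready = [[prio, i, name] for i, (name, prio) in enumerate(procs)]
    order = []
    while ready:
        ready.sort()  # best priority first; index breaks ties (FCFS)
        order.append(ready.pop(0)[2])
        for entry in ready:      # aging: every process passed over
            entry[0] -= age_step # improves, so no one starves
    return order

print(priority_order([("A", 3), ("B", 1), ("C", 4), ("D", 1)]))
# ['B', 'D', 'A', 'C']

Without the aging loop, a steady stream of high-priority arrivals could keep C waiting forever; with it, C's effective priority keeps improving until it runs.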
4. 𝐑𝐨𝐮𝐧𝐝 𝐑𝐨𝐛𝐢𝐧 (𝐑𝐑):
RR scheduling divides time into small intervals called time quanta, or time slices. The ready queue is treated as a circular queue: the CPU scheduler cycles through it, allocating the CPU to each process for up to one time quantum. If a process completes its CPU burst within its quantum, it releases the CPU voluntarily and the scheduler moves on to the next process. If the burst exceeds the quantum, a timer interrupt fires, the process is preempted with a context switch, and it is placed at the tail of the ready queue. RR scheduling ensures a fair distribution of CPU time among processes and is commonly used in time-sharing and multitasking environments.
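To make the mechanics concrete, here is a small Python sketch of RR for processes that arrive together; the function name round_robin, the quantum of 4, and the sample bursts are illustrative assumptions.

from collections import deque

def round_robin(procs, quantum=4):
    # Completion times under Round Robin, assuming all processes
    # arrive at time 0. `procs` maps name -> burst length; the
    # deque acts as the circular ready queue.
    queue = deque(procs.items())
    clock = 0
    finished = {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # run a full slice, or less
        clock += run
        if remaining > run:
            # Quantum expired: preempt and re-queue at the tail.
            queue.append((name, remaining - run))
        else:
            # Burst finished within the slice: CPU released early.
            finished[name] = clock
    return finished

print(round_robin({"P1": 24, "P2": 3, "P3": 3}))
# {'P2': 7, 'P3': 10, 'P1': 30}

Compare this with the FCFS example above: the same three bursts finish the short jobs at times 7 and 10 instead of 27 and 30, at the cost of extra context switches for the long one.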
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An Article by: Yashwanth Naidu Tikkisetty
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
