
Last updated: Nov 8, 2022

# Process Scheduling

A process execution alternates between CPU executions and I/O operations.

  • CPU burst: duration of one CPU execution cycle
  • I/O burst: duration of one I/O operation (wait time)

# Types

All processes are stored in queue structures; each queue groups processes in the same state.

  • Job queue: set of all processes in the system

A scheduler will be in charge of handling these queues

# Objectives

System:

  1. Maximize CPU utilization
    • Increase the % of time which CPU cores are busy executing processes
    • $\frac{\text{Total execution time}}{\text{Total time}}$
  2. Maximize throughput
    • Number of processes that complete their execution per unit time (number of exit transitions)

Process:

  3. Minimize turnaround time
    • Amount of time to finish executing a particular process (e.g. time between the admitted and exit transitions)
  4. Minimize waiting time
    • Amount of time a process spends in the ready state
    • Turnaround time - CPU burst time
  5. Minimize response time
    • Time between admission and first response (assumed to be the start of execution)
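A small worked sketch of these per-process metrics (the function name, the numbers, and the single-CPU-burst / no-I/O simplification are mine, not from the notes):

```python
# Minimal sketch: per-process scheduling metrics, assuming each process has a
# single CPU burst and no I/O time (a common simplification in these exercises).

def metrics(arrival, start, completion, burst):
    turnaround = completion - arrival   # admitted -> exit
    waiting = turnaround - burst        # time spent in the ready state
    response = start - arrival          # admission -> first execution
    return turnaround, waiting, response

# Example: process arrives at t=0, first runs at t=3, finishes at t=8, burst=5
print(metrics(0, 3, 8, 5))  # (8, 3, 3)
```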

# Uni-Core Algorithms

# First Come First Serve (FCFS)

Non pre-emptive: processes have to voluntarily release the CPU once it is allocated to them.

Convoy effect: short processes suffer increased waiting times because of long processes that arrived earlier.
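A hedged illustration of the convoy effect under FCFS (the burst values 24, 3, 3 are illustrative, not from the notes): when the long job arrives first, the short jobs queued behind it wait much longer on average.

```python
# FCFS: run processes in arrival order; each process waits for the sum of the
# bursts of everything that arrived before it (all arrive at t=0 here).

def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

long_first = fcfs_waiting_times([24, 3, 3])   # convoy: avg wait = (0+24+27)/3 = 17
short_first = fcfs_waiting_times([3, 3, 24])  # avg wait = (0+3+6)/3 = 3
print(sum(long_first) / 3, sum(short_first) / 3)  # 17.0 3.0
```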

# Shortest Job First (SJF)

How to handle the convoy effect from FCFS? Prioritize processes based on CPU burst lengths

Non pre-emptive: a process cannot be stopped once it starts executing; the scheduler only runs again after the process completes its burst.
Pre-emptive (Shortest Remaining Time First): processes in the midst of execution can be rescheduled when a shorter job arrives.

SJF is optimal for achieving minimum average waiting time. However, it is often not used in practice as it is difficult to know the burst length of a process in advance; it can only be estimated from previous bursts.
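A minimal sketch of non-preemptive SJF with simultaneous arrivals (the function name and burst values are illustrative assumptions): running the shortest bursts first is exactly what drives the average waiting time down compared with FCFS in arrival order.

```python
# Non-preemptive SJF with simultaneous arrivals: schedule the shortest burst first.
def sjf_waiting_times(bursts):
    order = sorted(bursts)              # shortest job first
    waits, elapsed = [], 0
    for b in order:
        waits.append(elapsed)
        elapsed += b
    return waits

bursts = [6, 8, 7, 3]
waits = sjf_waiting_times(bursts)       # [0, 3, 9, 16]
print(sum(waits) / len(waits))          # 7.0, vs 10.25 for FCFS in arrival order
```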

# Priority Based

CPU is allocated to the process with highest priority

  1. Priority based on arrival order (FCFS)
  2. Priority based on CPU burst length (SJF)

Starvation problem: lower priority processes may never execute. Aging is needed to slowly increase the priority of processes that have been waiting in the queue longer.
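A minimal sketch of aging (the dictionary representation, the aging step, and the priority values are assumptions, not from the notes): each time the scheduler dispatches a process, every process still waiting has its priority value improved, so a low-priority process cannot starve indefinitely.

```python
# Priority scheduling with aging: a lower number means higher priority.
# Each time we dispatch, every process left in the ready queue "ages"
# (its effective priority improves), so starvation is bounded.

def pick_next(ready, aging_step=1):
    ready.sort(key=lambda p: p["priority"])
    chosen = ready.pop(0)
    for p in ready:                       # age everything still waiting
        p["priority"] = max(0, p["priority"] - aging_step)
    return chosen

ready = [{"pid": 1, "priority": 5}, {"pid": 2, "priority": 1}, {"pid": 3, "priority": 9}]
while ready:
    print(pick_next(ready)["pid"])        # 2, 1, 3 (pid 3 keeps improving while it waits)
```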

# Round Robin

Use a fixed time quantum (q) for scheduling. A process is allocated the CPU for q time units, after which it is preempted and inserted at the end of the ready queue.

  • Large q: degenerates to FCFS
  • Small q: many context switches, leading to greater overhead

Round Robin is the algorithm most commonly used in practice.
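A hedged Round Robin sketch (the quantum and burst values are illustrative): each process runs for at most q units and is requeued if it still has work left, and the dispatch count shows how a small q inflates context-switch overhead.

```python
from collections import deque

# Round Robin: run each process for at most q time units, then requeue it.
# Returns completion times and the number of dispatches
# (approximately the number of context switches).
def round_robin(bursts, q):
    queue = deque((pid, b) for pid, b in enumerate(bursts))
    t, dispatches, completion = 0, 0, {}
    while queue:
        pid, remaining = queue.popleft()
        run = min(q, remaining)
        t += run
        dispatches += 1
        if remaining - run > 0:
            queue.append((pid, remaining - run))
        else:
            completion[pid] = t
    return completion, dispatches

print(round_robin([24, 3, 3], q=4))   # small q: many dispatches
print(round_robin([24, 3, 3], q=30))  # q larger than any burst: behaves like FCFS
```

In practice q is chosen to be large relative to the context-switch cost but small enough to keep response times low.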

# Multi-Queue

# Multi-Core Algorithms

# Partitioned Scheduling

Processes are partitioned among CPU cores at process creation time; each process is mapped to exactly one core.
Asymmetric scheduling: each CPU core can have a separate scheduling strategy/algorithm.

How to map processes to cores?

# Global Scheduling

Maintain one or more ready queues for the entire system without mapping any queue to a particular CPU core.
Symmetric scheduling: one scheduling strategy/algorithm across all cores.
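A minimal data-structure sketch contrasting the two approaches (the queue-per-core layout, the `pid % 4` mapping, and all names are assumptions for illustration): partitioned scheduling pins each process to one core's private ready queue, while global scheduling lets every core pull from one shared queue.

```python
from collections import deque

# Partitioned: one ready queue per core; a process is assigned to a core at
# creation time and stays there (asymmetric: each core could even run a
# different algorithm on its own queue). Loads can become unbalanced.
partitioned = {core: deque() for core in range(4)}

def admit_partitioned(pid):
    core = pid % 4                  # naive static process-to-core mapping
    partitioned[core].append(pid)

# Global: a single system-wide ready queue; any idle core takes the next
# process (symmetric: one algorithm for all cores), at the cost of possible
# migration overhead when a process resumes on a different core.
global_queue = deque()

def admit_global(pid):
    global_queue.append(pid)
```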

# Practice Problems

Under Round-Robin scheduling, if quantum size is q, average CPU burst length is B, average number of CPU bursts per process is N, and average number of processes in the ready queue is R, then the average response time for a process is?

$$\frac{0+q+2q+3q+\dots+(R-1)q}{R} = \frac{\frac{R(R-1)}{2}q}{R} = \frac{(R-1)q}{2}$$

a. False. If the CPU cannot be removed from the process, it is non-preemptive.
b. False. The scheduler only needs to run when a process exits or changes to the waiting state.
c. False. Response time is the time to the first start of execution. Turnaround time is the time to finish the process. Waiting time is the time spent in the ready state. Turnaround time - waiting time is just the time in the waiting and running states combined.
d. False. Migration overheads occur in global scheduling when a process partially executes on one core and then migrates to another. In partitioned scheduling, processes don't migrate between cores. However, partitioned scheduling has the problem of unbalanced loading of the cores depending on the process-core mapping.

abc. 800
d. Uni-core: RR. Duo-core: RR, SRTF.

Efficiency is total process time over total process time plus total overhead: $\frac{T}{T+kS}$, where k is the total number of context switches (including the first context switch needed to start the process).

a. If $Q \to \infty$, there is only the initial context switch: $\text{Efficiency}=\frac{T}{T+S} \approx 1$.
b. If $Q > T$, the average process runs without being preempted: $\text{Efficiency}=\frac{T}{T+S} \approx 1$.
c. If $S < Q < T$, the average process switches $\lceil\frac{T}{Q}\rceil$ times: $\text{Efficiency}=\frac{T}{T+S\lceil\frac{T}{Q}\rceil}$.
d. If $Q = S$, the average process switches $\frac{T}{S}$ times: $\text{Efficiency}=\frac{T}{T+\frac{T}{S}S} = \frac{T}{2T}=0.5$.
e. If $Q \to 0$, the number of switches tends to infinity: $\text{Efficiency} \to 0$.
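A small numeric check of the efficiency formula above (the values T = 100 and S = 1 and the function name are arbitrary assumptions): count one context switch per dispatch, including the first one, and evaluate $\frac{T}{T+kS}$ across the different quantum regimes.

```python
import math

# Efficiency check: one context switch (cost S) per dispatch of the average
# process, including the initial switch needed to start it.
def efficiency(T, S, Q):
    switches = max(1, math.ceil(T / Q))   # at least the initial switch
    return T / (T + switches * S)

T, S = 100.0, 1.0
for Q in (1e9, 200, 10, S, 0.1):          # Q -> inf, Q > T, S < Q < T, Q = S, Q -> 0
    print(Q, round(efficiency(T, S, Q), 3))
# prints roughly 0.99, 0.99, 0.909, 0.5, 0.091
```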