Tuesday 20 October 2009

Operating Systems – Process and Thread Scheduling

Recall: Process States

Do you remember the states of a process from one of the previous posts? Either way, I'm going to remind you :)

When a new process is created it immediately waits for the CPU's attention in the ‘Ready’ state. Once the process has the CPU's attention, it is placed into the ‘Running’ state. To refresh your memory, go to post ***.

Scheduling

Let's say we have two or more processes simultaneously in the ready state, but only one CPU available. A choice must be made about which process is to be run next. The part of the Operating System that makes these choices is called the scheduler.

In an Operating System that supports kernel-level threads, it is threads (rather than whole processes) that are scheduled.

When to Schedule

The Operating System needs to schedule when the running process gives up the CPU – for example, when it moves to the blocked state or terminates. If a process blocks waiting for I/O, the scheduler must give the CPU to another ready process, so that you can carry on with useful work even while that process waits.

We also need to schedule when a new process joins the list of processes in the ready state.

Example of CPU Burst

[Figure: an example chart of a CPU burst appeared here.]

First-Come-First-Serve

This is by far the simplest scheduling algorithm.

The process that entered the ready state first is the one the CPU deals with first, and that process holds the CPU until it blocks or terminates.

The algorithm requires a queue of ready processes. If a process enters the ready state, then it is linked to the tail of the ready queue. This means that when the CPU is free, it will deal with the process at the head of the queue.
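
To make this concrete, here is a minimal First-Come-First-Serve sketch in Python. The process names and burst times are invented purely for illustration:

    from collections import deque

    # Hypothetical (name, CPU burst in ms) pairs, listed in arrival order.
    ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])

    clock = 0
    while ready_queue:
        name, burst = ready_queue.popleft()  # the head of the queue runs next
        print(f"{name} starts at {clock} ms and runs for {burst} ms")
        clock += burst                       # it keeps the CPU until it finishes
    print(f"All processes finished at {clock} ms")

Note how the long process P1 makes the short processes P2 and P3 wait 24 ms before they get any CPU time at all.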

Pre-emptive vs. Non-Pre-emptive

We need to be able to avoid situations where processes with a very long CPU burst keep the CPU for what seems like forever… and ever… and ever… and… you get the point? :)

Non-Pre-emptive

This is when the process keeps the CPU until the process terminates or it switches to the blocked state.

Pre-emptive

This is when the process can run continuously for a maximum of some fixed time. If it is still running at the end of this fixed time, then the process is interrupted and the scheduler will pick another process to run.

This fixed amount of time that I'm talking about is called a time slice or a time quantum – and it is generally 10-100 ms.

Round-Robin

Round-Robin scheduling is the simplest algorithm for a time-sharing system. It is basically the same as First-Come-First-Serve, but with one small addition:

  • A process can hold the CPU for a maximum of some fixed time; if the process is still running at the end of the time slice, then the CPU is pre-empted and given to the process at the head of the ready queue. The process that was running is moved to the tail of the ready queue.
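
Here is the earlier sketch extended into a minimal Round-Robin simulation, again in Python and with invented burst times; the 4 ms quantum is just an illustrative choice:

    from collections import deque

    QUANTUM = 4  # illustrative time slice in ms
    ready_queue = deque([("P1", 24), ("P2", 3), ("P3", 3)])

    clock = 0
    while ready_queue:
        name, remaining = ready_queue.popleft()
        run = min(QUANTUM, remaining)        # run for at most one time slice
        clock += run
        remaining -= run
        if remaining > 0:
            print(f"{name} pre-empted at {clock} ms ({remaining} ms left)")
            ready_queue.append((name, remaining))  # back to the tail of the queue
        else:
            print(f"{name} finished at {clock} ms")

This time P2 and P3 finish within a few time slices, instead of sitting behind P1's whole 24 ms burst.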

Goals and Criteria

Q: What is a good scheduling algorithm?
A: It depends on the circumstances – scheduling algorithms are chosen on the basis of how well the most typical behaviour satisfies the criteria we care about most.

In scheduling, fairness is important. We have seen this already in the argument for pre-emptive scheduling – we want each process to get a fair share of the CPU compared to other processes.

Other objectives and criteria for scheduling can be:

  • CPU Utilisation – we want to keep the CPU busy.
  • Throughput – This is the number of processes that are completed per time unit (the more the better!! :) ).
  • Turnaround time – How long it takes a process to complete, from submission to completion (small is better).
  • Waiting time – How long a process spends in the ready state.
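
As a quick illustration of the last two criteria, here is how turnaround and waiting time would be computed from a process's arrival, burst and completion times (all numbers below are invented):

    # Invented (arrival, burst, completion) times in ms for two processes.
    processes = {"P1": (0, 24, 24), "P2": (0, 3, 27)}

    for name, (arrival, burst, completion) in processes.items():
        turnaround = completion - arrival  # submission to completion
        waiting = turnaround - burst       # time spent in the ready state
        print(f"{name}: turnaround {turnaround} ms, waiting {waiting} ms")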

Time Slices

We have already seen that different time slices may lead to different results. The major trade-off is the cost of a context switch.

If the time slice is small, then the cost of the context switch becomes significant compared to the useful work done in each slice. In the scheduling charts so far we ignored the cost of context switching.

Two possible solutions are to increase the time slice (but not so much that Round-Robin starts to look like First-Come-First-Serve), or to reduce the cost of the context switch. A common value for a time slice is 20-50 ms, while a context switch typically costs 0.001-1 ms.
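
A quick back-of-the-envelope calculation, using figures from the ranges above, shows why the time slice cannot be too small:

    def overhead(quantum_ms, switch_ms):
        # fraction of CPU time spent context switching rather than doing real work
        return switch_ms / (quantum_ms + switch_ms)

    print(f"{overhead(20, 1):.1%}")  # 20 ms slice, 1 ms switch: about 4.8% lost
    print(f"{overhead(4, 1):.1%}")   # 4 ms slice, 1 ms switch: 20% lost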

Shortest-Job-First

It can be proved that starting with the process that has the smallest CPU burst is optimal (it minimises the average waiting time) for a given set of processes that become available simultaneously. This algorithm is called Shortest-Job-First.
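
A minimal Shortest-Job-First sketch in Python: the ready processes are simply sorted by burst length before being run (the burst times are again invented):

    # Hypothetical (name, CPU burst in ms) pairs, all ready at time 0.
    processes = [("P1", 24), ("P2", 3), ("P3", 3)]

    clock = 0
    total_waiting = 0
    for name, burst in sorted(processes, key=lambda p: p[1]):  # shortest burst first
        total_waiting += clock  # this process has been in the ready state since time 0
        clock += burst
    print(f"Average waiting time: {total_waiting / len(processes):.1f} ms")

For these made-up bursts the average waiting time is 3 ms, whereas running them in First-Come-First-Serve order (P1 first) would give (0 + 24 + 27) / 3 = 17 ms.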

 

A final question: What about the time slices for the scheduler itself?? o_O
