Saturday 21 November 2009

Operating Systems - Process & Thread Scheduling

Process & Thread Scheduling


Basic concepts


If there are two processes in the ready state but only one CPU, the OS must choose which one to run next. The part of the OS that makes this choice is the scheduler, and it uses a scheduling algorithm to decide.

If an OS supports kernel-level threads, it is threads rather than processes that are scheduled. Here we are just going to talk about process scheduling, but it's mostly the same.

As CPUs have become more powerful, scheduling has become less important. However, scheduling algorithms have become more complex.

First of all, let's talk about when scheduling is needed:

When is scheduling needed?


Scheduling is needed when the CPU becomes free, for example when a process terminates or blocks. It's also needed when a new process becomes 'ready'.

Process execution is usually a series of CPU bursts separated by waits for IO. If a CPU burst is really long, then the other processes might have to wait, and wait, and wait... you get the point. This needs to be dealt with too.

Example time!

Example


We've got two processes; we'll call them A and B. A has a CPU burst time of 10 arbitrary time units, and B has a CPU burst time of 4.

Let's try letting A go first:

Note: Amber means the process is ready, and green means it's running.

[Diagram: A runs from time 0 to 10 while B waits; B then runs from 10 to 14.]

Now if B goes first:

[Diagram: B runs from time 0 to 4 while A waits; A then runs from 4 to 14.]

Now we can work out the average turnaround time.

Average turnaround time


Turnaround time is the time from when a process becomes ready to when it finishes. When A went first, A finished at 10 and B at 14, so the average turnaround time was:

(10 + 14) / 2 = 12


When B went first:


(14 + 4) / 2 = 9



Average waiting time


Waiting time is the time a process spends sitting in the ready state. When A was first, A waited 0 and B waited 10:

(0 + 10) / 2 = 5


When B was first:


(4 + 0) / 2 = 2


So both averages were lower when B went first.
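Here's a quick Python sketch (not part of the original example, just illustrative) that works these averages out, assuming every process is ready at time 0:

def fcfs_averages(bursts):
    """bursts: list of CPU burst times in the order the processes run."""
    time = 0
    turnarounds = []
    waits = []
    for burst in bursts:
        waits.append(time)            # time spent waiting before this process starts
        time += burst                 # process runs to completion (non-preemptive)
        turnarounds.append(time)      # everyone arrived at time 0, so turnaround = finish time
    return sum(turnarounds) / len(bursts), sum(waits) / len(bursts)

print(fcfs_averages([10, 4]))  # A first: (12.0, 5.0)
print(fcfs_averages([4, 10]))  # B first: (9.0, 2.0)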



First come, first served (FCFS)


This is the simplest scheduling algorithm. The first process to enter the ready state gets the CPU first and holds it until it blocks or finishes.

FCFS requires there to be a single queue of ready processes. So when a process becomes ready, it goes to the back of the queue.

When the CPU becomes free, it will execute the process at the front of the queue.
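Here's a rough sketch of that queue in Python, using a deque (the names are just illustrative):

from collections import deque

ready_queue = deque()

def make_ready(process):
    ready_queue.append(process)       # a newly ready process goes to the back

def schedule_next():
    if ready_queue:
        return ready_queue.popleft()  # the CPU takes the process at the front
    return None                       # nothing to run (CPU idle)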

Preemptive and non-preemptive scheduling


You will remember from before that we need to stop processes with long CPU bursts from hogging the CPU for too long.

With NON-preemptive scheduling, the process gets to keep the CPU until it ends or switches to the blocked state.

With preemptive scheduling, the process has a maximum time it can run for. If at the end of this time it is still running, it gets interrupted and the scheduler chooses another process to run. These fixed amounts of time are called either time slices or time quanta. They normally last from 10 to 100 milliseconds.

Let's do the example from earlier but with preemptive scheduling added.

Example


The time slice is 5.

First we will do A then B:

[Diagram: A runs from 0 to 5 and is preempted, B runs from 5 to 9 and finishes, then A runs from 9 to 14.]

Now B then A:

This is actually the same as the diagram for B then A without preemptive scheduling: B's burst (4) is shorter than the time slice, so it runs to completion, and then A has the CPU to itself. Therefore the average turnaround time is still 9, and the average wait time is still 2.

Average turnaround time


A then B:

(14 + 9) / 2 = 11.5



Average waiting time


A then B:

(4 + 5) / 2 = 4.5


Basically, if there are a lot of processes in the ready state, then the CPU is given to each process in turn. This is called round robin scheduling.



Round robin scheduling


This is the simplest algorithm for time sharing.

It is pretty much the same as FCFS, but with time slices added.
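Here's a small round-robin sketch in Python, assuming all processes are ready at time 0 and ignoring context-switch cost; it reproduces the 11.5 and 4.5 from the preemptive example above:

from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict of name -> CPU burst time, in ready-queue order."""
    queue = deque(bursts.keys())
    remaining = dict(bursts)
    time = 0
    finish = {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # run one slice, or less if it finishes early
        time += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = time
        else:
            queue.append(name)                # preempted: back of the queue
    turnaround = {n: finish[n] for n in bursts}            # all arrived at time 0
    waiting = {n: finish[n] - bursts[n] for n in bursts}   # turnaround minus time spent running
    return turnaround, waiting

t, w = round_robin({"A": 10, "B": 4}, quantum=5)
print(t, sum(t.values()) / 2)  # {'A': 14, 'B': 9} -> average 11.5
print(w, sum(w.values()) / 2)  # {'A': 4, 'B': 5}  -> average 4.5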

Choosing a scheduling algorithm


It's important to be fair. We also need to consider the following criteria:

  • Keep the CPU as busy as possible

  • Achieve the highest possible throughput

  • Keep turnaround times as small as possible

  • Keep waiting times as low as possible


Choosing a time quantum / slice


Every time the CPU switches from one process to another, some CPU time is taken up by the switch itself; this is called a context switch.

If you use a small time slice, the relative cost of context switching will be higher. For example, if you use a time slice of 3 and the context switch costs 1, then 25% of the CPU's time is spent just context switching and not doing any real work.

However, if you make the time slice too big, then scheduling starts to behave much like FCFS.
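A quick way to sanity-check that arithmetic (just a sketch):

def switch_overhead(time_slice, switch_cost):
    # fraction of CPU time lost to context switching
    return switch_cost / (time_slice + switch_cost)

print(switch_overhead(3, 1))    # 0.25 -> the 25% figure from the example above
print(switch_overhead(99, 1))   # 0.01 -> a much bigger slice wastes far less time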

Shortest job first (SJF)


In the examples from before, doing the shorter process first gave better results. When processes become available simultaneously, it is best to run the shortest job first; this minimises the average waiting time.
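A minimal SJF sketch, assuming all jobs are available at time 0 and their burst times are known; sorting by burst and then running the jobs in that order gives the same 9 and 2 as running B first earlier:

def sjf_averages(bursts):
    order = sorted(bursts)          # shortest burst first
    time, total_turnaround, total_wait = 0, 0, 0
    for burst in order:
        total_wait += time          # time waited before starting
        time += burst
        total_turnaround += time    # everyone arrived at time 0
    n = len(bursts)
    return total_turnaround / n, total_wait / n

print(sjf_averages([10, 4]))  # (9.0, 2.0)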

Shortest remaining time next


This is a preemptive version of SJF. Every time a new process arrives in the ready state, we look at its CPU burst time. If it is less than the time needed to complete the current process, we move the current process back to the ready state and schedule the new one instead.

The problem with this is that processes with long CPU bursts may be left waiting indefinitely as newer, shorter processes keep arriving. It also becomes hard to predict when a given process will finish, since it may keep getting moved off and back onto the CPU.
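Here's a rough tick-by-tick sketch of shortest remaining time next (the arrival and burst times are made up purely to show a preemption):

def srtn(jobs):
    """jobs: dict of name -> (arrival_time, burst_time)."""
    remaining = {n: burst for n, (arrival, burst) in jobs.items()}
    finish, time = {}, 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= time]
        if not ready:                      # nothing has arrived yet: stay idle
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])   # shortest remaining time wins
        remaining[current] -= 1            # run it for one time unit
        time += 1
        if remaining[current] == 0:
            finish[current] = time
            del remaining[current]
    return finish

# A long job arrives first, then a short one arrives and preempts it:
print(srtn({"long": (0, 8), "short": (3, 2)}))  # {'short': 5, 'long': 10}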

Priority scheduling


Using round robin assumes that all processes are equally important, but some might matter more than others. What if we gave a longer time slice to higher-priority processes, or ran them first?

There are a lot of options for priority scheduling, but it is important to avoid starvation (where low-priority processes are left waiting for the CPU forever).
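One possible sketch of a priority-based ready queue, using a heap where a lower number means a higher priority (the process names are made up):

import heapq

ready = []

def make_ready(priority, name):
    heapq.heappush(ready, (priority, name))

def schedule_next():
    if ready:
        priority, name = heapq.heappop(ready)   # highest-priority process first
        return name
    return None

make_ready(3, "logger")
make_ready(1, "keyboard_handler")
make_ready(2, "compiler")
print(schedule_next())  # keyboard_handler

A real scheduler would also need something like aging (gradually raising the priority of processes that have been waiting a long time) to avoid the starvation mentioned above.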

Priority types


Static priorities are when the priorities have been defined externally.

Dynamic priorities are when they are defined internally by the system in order to achieve certain goals. For example, the system may boost the priority of IO-bound processes.

Both types may be used together.

Multiple queues


What if, instead of there being a queue of processes in ready state, there were multiple queues, each one with a different priority?

It could be done in these ways (a small sketch of the first approach follows the list):

  • Processes belonging to high priority queues have to finish before processes belonging to lower priority queues can start

  • High priority queues get more time than lower ones

  • Processes may move between queues
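
Here's a rough sketch of the first option, where the scheduler always takes the next process from the highest-priority queue that isn't empty (the queue names are made up):

from collections import deque

queues = {
    "system": deque(),        # highest priority
    "interactive": deque(),
    "batch": deque(),         # lowest priority
}

def make_ready(level, process):
    queues[level].append(process)

def schedule_next():
    for level in ("system", "interactive", "batch"):   # highest priority first
        if queues[level]:
            return queues[level].popleft()
    return None                                        # all queues are empty

make_ready("batch", "backup")
make_ready("interactive", "editor")
print(schedule_next())  # editor (the batch queue only runs when the others are empty)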


More...


There are lots more algorithms. Real-time systems may need different kinds of algorithms, for example 'earliest deadline first'.

 



Stuff I may have glossed over available here



 
