Tuesday 27 October 2009

Operating Systems – Process/Thread Synchronization

Process Synchronization

OK, so some background on Process Synchronization. Concurrent access to shared data may result in data inconsistency, so we need policies and mechanisms to help maintain data consistency between cooperating processes and threads.

Here’s an example of a program that uses a shared variable ‘i’:

 

//process or thread A
i = 0;
while (i < 100) {
   i++;
}
System.out.println("A won");

//process or thread B
i = 0;
while (i > -100) {
   i--;
}
System.out.println("B won");

The question is: which process/thread will finish first, A or B? And will they ever finish?
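
Just to make the setup concrete, here is a runnable sketch of the above (the wrapper class, the lambdas and the shared field are my own additions, not from the lecture). Both loops run as threads sharing the same plain field 'i', so neither thread is guaranteed to see the other's updates, and the result can vary from run to run:

//a runnable sketch (my own wrapper): 'i' is shared and unsynchronized,
//so the outcome, and even termination, is not guaranteed
public class RaceDemo {
    static int i; //shared between both threads

    public static void main(String[] args) {
        Thread a = new Thread(() -> {
            i = 0;
            while (i < 100) {
                i++;
            }
            System.out.println("A won");
        });

        Thread b = new Thread(() -> {
            i = 0;
            while (i > -100) {
                i--;
            }
            System.out.println("B won");
        });

        a.start();
        b.start();
    }
}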

 

Race Conditions

A Race Condition is a situation where several threads or processes manipulate the same shared data concurrently, and the outcome of the execution depends on the precise order in which the accesses take place.
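
As a concrete illustration (this example is mine, not from the lecture), even a single 'counter++' is a race condition, because it is really three steps: read the value, add one, write it back. Two threads doing this at the same time can lose updates:

//illustration (my own): counter++ is read-modify-write, so concurrent
//increments can overwrite each other and updates get lost
public class LostUpdateDemo {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int n = 0; n < 100000; n++) {
                counter++; //not atomic: read, add 1, write back
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        //usually prints less than 200000 because of interleaved updates
        System.out.println(counter);
    }
}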

 

A Few Definitions

A few of these definitions may seem very familiar from the previous Distributed Systems posts – if you want them in more detail, go there. :)

Synchronization

This is the use of appropriate policies and mechanisms to ensure the correct operation of cooperating processes and threads.

Critical Section

This is a section of code in which shared data is accessed, manipulated, written, and so on.

Mutual Exclusion (Mutex)

If one thread is executing in its Critical Section, then no other thread can be executing in its Critical Section; and this is exactly what we want to achieve! :)
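
In Java, one way to get this is a synchronized block (a small sketch with my own names; the block is the Critical Section, and only one thread can be inside it at a time):

//sketch (my own names): the synchronized block is the critical section;
//only one thread can hold 'lock' at a time, so the update is never interleaved
public class MutexDemo {
    private static final Object lock = new Object();
    private static int counter = 0;

    static void increment() {
        synchronized (lock) { //enter critical section
            counter++;        //shared data accessed here
        }                     //leave critical section
    }
}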

 

Semaphores

Imagine now that we want a way to implement Mutex: if several requests arrive at once, only one process should proceed, and the other processes must wait outside their Critical Sections.

One technique for doing this is the Semaphore. With this we have one integer variable 'S' which can only be accessed via two indivisible (atomic) operations:

  • P(S)  =  while (S <= 0) do { };
             S--;
  • V(S)  =  S++;
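
As a rough sketch (my own class, not from the lecture), this is what a semaphore could look like in Java. Note that P and V have to be indivisible, so instead of the busy-wait loop above this version uses synchronized methods with wait/notify:

//sketch of a counting semaphore (my own class); P and V are made
//indivisible with synchronized, and wait/notify replaces the busy-wait
public class SimpleSemaphore {
    private int s;

    public SimpleSemaphore(int initial) {
        this.s = initial;
    }

    //P(S): wait until S > 0, then decrement
    public synchronized void P() throws InterruptedException {
        while (s <= 0) {
            wait(); //block instead of spinning
        }
        s--;
    }

    //V(S): increment and wake up a waiting thread
    public synchronized void V() {
        s++;
        notify();
    }
}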

Here’s a simple example of semaphore use:

 

//process 1
P(S1);
r1=A[100];
r1++;
A[100]=r1;
V(S2);

//process 2
P(S2);
r2=A[100];
r2++;
A[100]=r2;
V(S1);

Let's say here we have S1=1 and S2=0.

The P operation says: while 'S' is 0, loop, which stops the code after P from being executed. When 'S' becomes 1, P falls out of the loop and makes 'S=0' again, stopping any other process waiting on the same semaphore from proceeding.

When a process gets to the V operation there is no waiting: 'S' is simply incremented (normally back to 1).

In the example above, S2=0, so P(S2) will loop indefinitely. However, S1=1, so P(S1) will make S1=0 and the first block of code executes. Once we get to the end of that code, V(S2) increments S2 to 1, allowing P(S2) to fall out of its loop and make S2=0. The second block is now executed, and then V(S1) makes S1=1, allowing the first block to run again, and so on.
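
For what it's worth, Java already ships a semaphore: java.util.concurrent.Semaphore, where acquire() plays the role of P and release() the role of V. A sketch of the same alternation using it (the thread wrappers and the array are my own):

import java.util.concurrent.Semaphore;

//sketch of the alternation above with the standard library's Semaphore:
//acquire() corresponds to P, release() to V
public class AlternationDemo {
    static final Semaphore s1 = new Semaphore(1); //S1 = 1
    static final Semaphore s2 = new Semaphore(0); //S2 = 0
    static final int[] a = new int[101];          //stands in for A[100]

    public static void main(String[] args) {
        new Thread(() -> {
            try {
                s1.acquire(); //P(S1)
                a[100]++;     //r1=A[100]; r1++; A[100]=r1;
                s2.release(); //V(S2)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        new Thread(() -> {
            try {
                s2.acquire(); //P(S2)
                a[100]++;     //r2=A[100]; r2++; A[100]=r2;
                s1.release(); //V(S1)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}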

 

Deadlock

This one should definitely seem familiar from the Distributed Systems notes. It is when two or more processes are waiting indefinitely for an event that can be caused only by one of the other waiting processes.
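
A classic way to produce a deadlock (a sketch with my own names): two threads each grab one lock and then wait forever for the lock the other one is holding:

//sketch (my own names): each thread holds one lock while waiting for the
//other's, so neither can ever continue
public class DeadlockDemo {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause();               //give the other thread time to take lockB
                synchronized (lockB) { //waits forever: thread 2 holds lockB
                    System.out.println("thread 1 done");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                pause();
                synchronized (lockA) { //waits forever: thread 1 holds lockA
                    System.out.println("thread 2 done");
                }
            }
        }).start();
    }

    static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}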

Starvation

Starvation is when a process is repeatedly overlooked and never gets its turn. The difference between Starvation and Deadlock is that when Starvation occurs the situation can still resolve itself, but with Deadlock it cannot.

 

NB: These notes are very brief; links to the Distributed Systems versions will be added, as everything in this post is described in more detail there.
