Notes 0006

Process Synchronization

(read chapter 7 of the dinosaur book)

Overall Picture

Systems with the ability to run many concurrent processes need to provide the ability for those processes to communicate in a safe and reliable manner.

When scheduling is preemptive, processes may have no control over when and how they run in relation to other processes. Many problems creep in when several processes modify and access the same memory without any kind of synchronization.

Processes generally communicate through shared memory by reading from and writing to that memory. To organize their access to the memory, they could use a bounded queue such as this:

Producer code (code for process that writes to memory):

while(1){
    // busy-wait until there is space on the queue
    while(counter == BUFFER_SIZE);
    // place the newly produced item on the queue
    buffer[in] = nextProduced;
    // move to the next position
    in = (in + 1) % BUFFER_SIZE;
    // increment the size
    counter++;
}

Consumer code (code for process that reads the memory):

while(1){
    // busy-wait until there is something on the queue
    while(counter == 0);
    // take the item the producer produced
    nextConsumed = buffer[out];
    // move to the next position
    out = (out + 1) % BUFFER_SIZE;
    // decrement the size
    counter--;
}

This code is for a fairly ordinary array-based queue. Notice that it works fine from the perspective of a single process (if only one process uses this queue, there are no problems).

Problems start to creep in when one process does the writing and another does the reading (and the scheduler is using preemption). For example, at the lowest level, the system has code like this for incrementing and decrementing the counter:

Incrementing (process 1 code):

register1 = counter
register1 = register1 + 1
counter = register1

Decrementing (process 2 code):

register2 = counter
register2 = register2 - 1
counter = register2

Now, because the scheduler is preemptive, and we have several processes executing, the order of each of these individual instructions cannot be guaranteed. For example, imagine an ordering like this:

T0: producer: register1 = counter
T1: producer: register1 = register1 + 1
T2: consumer: register2 = counter
T3: consumer: register2 = register2 - 1
T4: producer: counter = register1
T5: consumer: counter = register2

Notice that the resulting value of the counter depends on the order in which these instructions execute. In the case above, the effect of the producer's increment is lost when the consumer stores its value.

A situation like this, where several processes access and manipulate the same data concurrently and the outcome depends on the particular order in which the accesses take place, is called a race condition.

Obviously, if we are to have several processes communicating with each other in a preemptive system, we need to figure out a way for them to coordinate their work and to allow only one process to access a shared resource at a time.
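
To see the race concretely, here is a minimal sketch using POSIX threads (the variable names and iteration count are illustrative, not from the notes; compile with -pthread). Two threads increment a shared counter with no synchronization, so increments get lost:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                /* shared and deliberately unprotected */

void* increment(void* arg){
    for(int k = 0; k < 1000000; k++)
        counter++;               /* really load, add, store: not atomic */
    return NULL;
}

int main(void){
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* usually prints less than 2000000: updates were lost to the race */
    printf("counter = %ld\n", counter);
    return 0;
}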

Critical Section

Consider a system consisting of several processes, each having a segment of code called a critical section, in which the process may be changing common variables, updating tables, etc. The important feature of the system is that when one process is executing its critical section, no other process is to be allowed to execute its critical section. Execution of the critical section is mutually exclusive in time.

The critical section problem is to design a protocol that these processes can use to cooperate safely. Each process must request permission to enter its critical section (the entry section). The critical section may be followed by an exit section. The remaining code (after the exit section) is the remainder section. For example:

do{
    // entry section

    // critical section

    // exit section

    // remainder section
}while(1);

A solution to the critical section problem must satisfy the following three requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other process can be executing in its critical section.

2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter its critical section next, and this selection cannot be postponed indefinitely.

3. Bounded Waiting: There exists a bound on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Number 1 just says the obvious: no two processes can be executing their critical sections at the same time. Number 2 says that the decision of who enters the critical section next may involve only the processes that are actually trying to enter (those not off in their remainder sections), and that this decision cannot be put off forever; the system still has to work even if nobody is currently executing a critical section. Number 3 says that no process should have to wait forever to execute its critical section.

Two-Process Synchronization

Let's follow the book and first explain how we can synchronize two processes before moving on to many processes.

Consider our first attempt at an algorithm:

do{
    while(turn != i);    // wait until it is our turn

    // critical section

    turn = j;            // hand the turn to the other process

    // remainder section
}while(1);

The idea here is that the two processes share a variable turn. While turn is not set to our process number (turn != i), we loop. Once it becomes set (meaning that the other process set it to our number), we execute our critical section (the other process waits, because turn holds our process number, not theirs). Upon leaving the critical section, we turn control over to the other process by setting turn to the other process's number.

This algorithm does satisfy mutual exclusion, but it does not satisfy the progress requirement: if turn is set to 0 and P0 never needs to enter its critical section, P1 may end up waiting forever to enter its critical section, even though no process is inside a critical section.

With that in mind, lets consider another algorithm:

do{
    flag[i] = true;      // announce that we want to enter
    while(flag[j]);      // wait while the other process wants in too

    // critical section

    flag[i] = false;     // withdraw our flag

    // remainder section
}while(1);

Here, the two processes share a two-element array of flags. When a process wants to enter its critical section, it sets its own flag to true and then waits while the other process's flag is true (that is, while the other process may be executing its critical section). Upon leaving the critical section, the process clears its flag, so that the other process, which may be waiting on it, can enter its critical section.

As before, this algorithm does not satisfy the progress requirement: if both processes set their flags to true at the same time (one right after the other), they'll both end up looping forever, each waiting for the other process to finish, which never happens.

Combining these algorithms produces another remarkably simple algorithm (known as Peterson's algorithm), which does satisfy all the critical section problem requirements.

do{
    flag[i] = true;      // request entry
    turn = j;            // but offer the turn to the other process
    while(flag[j] && turn == j);

    // critical section

    flag[i] = false;     // withdraw the request

    // remainder section
}while(1);

The two processes share the flag array and the turn variable. The array acts as a request: when a process sets its array element to true, it is requesting to enter its critical section.

The turn variable says which process gets to go next. To enter its critical section, a process requests entry and then offers the turn to the other process (if the other process has also requested entry, it will get to go first). Our process (process i) then waits until either the other process finishes its critical section (its flag clears) or the turn comes back to us, and then enters the critical section.

Notice that the progress and bounded waiting requirements are satisfied. A process loops only while the other process has the turn and has its flag set (in which case the other process enters the critical section, which is exactly what we want: mutual exclusion). If the other process is not trying to enter its critical section, its flag will not be set, so handing it the turn does us no harm: the loop condition requires both flag[j] and turn == j to be true before we wait.
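
As a concrete sketch, here is a runnable version of Peterson's algorithm for two threads. The pseudocode assumes that reads and writes of flag and turn are atomic; below, C11 sequentially consistent atomics provide that guarantee (the worker, counter, and thread-setup names are illustrative; compile with -pthread):

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_bool flag[2];                         /* entry requests */
atomic_int  turn;                            /* whose turn on a tie */
long counter = 0;                            /* protected by the algorithm */

void* worker(void* arg){
    int i = *(int*)arg, j = 1 - i;
    for(int k = 0; k < 100000; k++){
        atomic_store(&flag[i], true);        /* request entry */
        atomic_store(&turn, j);              /* offer the turn away */
        while(atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* busy wait */
        counter++;                           /* critical section */
        atomic_store(&flag[i], false);       /* exit section */
    }
    return NULL;
}

int main(void){
    pthread_t t[2];
    int id[2] = {0, 1};
    for(int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &id[i]);
    for(int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);      /* always 200000 */
    return 0;
}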

Multiple-Process Solutions

Multiple-process synchronization uses an idea similar to the last two-process synchronization algorithm.

It is known as a bakery algorithm and is based on a scheduling algorithm commonly used in bakeries (and other such places): upon entering the store, each customer receives a number; the customer with the lowest number is served next. Unfortunately, the bakery algorithm cannot guarantee that two processes (customers) do not receive the same number. In case of a tie, the process with the lowest name is served first. That is, if Pi and Pj receive the same number and i < j, then Pi is served first.

For each process, we shall have a choosing variable (indicating that the process is in the middle of picking its number) and a number (that is "given" to the process). Consider data structures such as:

boolean choosing[n];
int number[n];

Notice that for n processes, each has a choosing and a number entry in the arrays (the choosing entries start out false and the number entries start out 0). Now, the algorithm would look like this:

do{
    choosing[i] = true;
    // take a ticket: one more than the current maximum of the array
    number[i] = max(number, n) + 1;
    choosing[i] = false;
    for(j = 0; j < n; j++){
        // wait until process j has finished picking its number
        while(choosing[j]);
        // wait while process j holds a smaller ticket, breaking ties
        // by process id: that is, while (number[j], j) < (number[i], i)
        while((number[j] != 0) &&
              ((number[j] < number[i]) ||
               ((number[j] == number[i]) && (j < i))));
    }

    // critical section

    number[i] = 0;

    // remainder section
}while(1);

If a process wants to enter its critical section, it announces that by setting its choosing entry to true; it then finds the maximum number in the whole array and gives itself maximum + 1. Notice that this step can produce duplicates: two processes reading the array at the same time can pick the same number.

Now that we've chosen our number, we unset our choosing value (we already have the number that will let us enter the critical section). We then fall into a loop that goes through all the array entries, waiting on every process that holds a smaller ticket so it can run its critical section first.

By the time this loop terminates, we are certain to be the process with the smallest (number, id) pair, and all other processes are waiting for us to set our number back to 0 before they proceed. For example, if P0 and P1 both receive number 1 while P2 receives number 2, then P0 goes first (its tie with P1 is broken by its lower id), then P1, then P2.
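
Here is a minimal sketch of the bakery algorithm in C11 (N, lock(), and unlock() are illustrative names, not from the book; the sequentially consistent atomics stand in for the atomic reads and writes the pseudocode assumes):

#include <stdatomic.h>
#include <stdbool.h>

#define N 4                              /* number of processes (illustrative) */

atomic_bool choosing[N];                 /* all start false */
atomic_int  number[N];                   /* all start 0 */

void lock(int i){
    atomic_store(&choosing[i], true);
    /* take a ticket one larger than every ticket currently out */
    int max = 0;
    for(int j = 0; j < N; j++){
        int nj = atomic_load(&number[j]);
        if(nj > max) max = nj;
    }
    atomic_store(&number[i], max + 1);
    atomic_store(&choosing[i], false);
    for(int j = 0; j < N; j++){
        /* wait until j has finished picking its ticket */
        while(atomic_load(&choosing[j]));
        /* wait while j holds a smaller (ticket, id) pair */
        while(atomic_load(&number[j]) != 0 &&
              (atomic_load(&number[j]) < atomic_load(&number[i]) ||
               (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)));
    }
}

void unlock(int i){
    atomic_store(&number[i], 0);         /* give the ticket back */
}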

Synchronization Hardware

Some hardware provides special instructions that make process synchronization easier. These instructions are guaranteed to execute atomically (as one unit):

TestAndSet

boolean TestAndSet(boolean &target){
    boolean rv = target;
    target = true;
    return rv;
}

This code sets the target to true and returns the old value. To achieve mutual exclusion for a critical section, you might use it like this:

do{

    while(TestAndSet(lock)); // spin until the old value was false

    // critical section

    lock = false;            // release the lock

    // remainder section
}while(1);

Initially, lock is set to false.

When a process wants to enter its critical section, it calls the TestAndSet function continuously until it returns false (meaning the lock was false just before the call set it to true). That can only happen if no other process is in its critical section: while another process holds the lock, the function keeps returning true, until that process releases the lock by setting it back to false.
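
Modern C exposes essentially this instruction through C11's atomic_flag. A minimal sketch of the lock above (acquire and release are illustrative names) might be:

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;    /* starts cleared, i.e., "false" */

void acquire(void){
    /* atomically set the flag and return its old value, like TestAndSet */
    while(atomic_flag_test_and_set(&lock))
        ;                               /* spin until the old value was false */
}

void release(void){
    atomic_flag_clear(&lock);           /* lock = false */
}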

Swap

Another hardware method is a swap instruction, defined as:

void swap(boolean &a,boolean &b){
    boolean temp = a;
    a = b;
    b = temp;
}

An algorithm that uses swap (where lock is shared and initialized to false, and key is local to each process) is:

do{
    key = true;
    while(key == true)
        swap(lock,key);      // atomically exchange lock and key

    // critical section

    lock = false;            // release the lock

    // remainder section
}while(1);

Again, notice that mutual exclusion is achieved.

These algorithms do not achieve the bounded waiting requirement, since, depending on the order of execution, one process may always grab the lock before some other process gets a chance.

Here's an algorithm that does satisfy all the critical section problem requirements, using the TestAndSet() method (waiting is a shared array of n booleans, initialized, like lock, to false):

do{
    waiting[i] = true;
    key = true;
    // spin until the lock is free (key becomes false) or until
    // some other process hands the critical section directly to us
    while(waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;

    // critical section

    // scan for the next waiting process, starting just after us
    j = (i+1) % n;
    while((j != i) && !waiting[j])
        j = (j+1) % n;
    if(j == i)
        lock = false;        // nobody is waiting: free the lock
    else
        waiting[j] = false;  // hand the critical section to process j
                             // (the lock itself is never released)

    // remainder section
}while(1);

In the code above, a process that wants to enter starts out waiting. It continues waiting until either another process unsets its waiting entry or TestAndSet() returns false (at which point it unsets its waiting entry itself).

Upon finishing the critical section, we go through the list of all processes looking for one that is waiting. If we don't find any, we reset lock to false; otherwise, we let the waiting process run by unsetting that process's waiting flag.

Semaphores

A non-computer meaning of the word semaphore is a system or code for sending signals using arms or flags held in the hands, where various positions represent different letters and numbers. These were used on ships to coordinate their movements (before the invention of radio). Presently, you might see them used on aircraft carriers to coordinate the onboard activities of airplanes.

In a computer sense, a semaphore is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait and signal. These operations were originally termed P (for wait; from the Dutch proberen, to test) and V (for signal; from verhogen, to increment). The classical definition of wait in pseudocode is:

wait(S){
    while(S <= 0);
    S--;
}

The classical definition of signal in pseudocode is:

signal(S){
    S++;
}

Execution of these functions must be atomic.

Usage of Semaphores

Semaphores can be used to solve the mutual exclusion problem. We create a semaphore named mutex (for mutual exclusion), initialized to 1, wait on it to enter the critical section, and free it by signaling on it.

do{
    wait(mutex);

    // critical section

    signal(mutex);

    // remainder section
}while(1);

Semaphores can also be used to solve other synchronization problems, like ensuring that one process executes some operation before another process does. For example, the code for process 1 might be:

S1;
signal(synch);

Code for process 2 might be:

wait(synch);
S2;

The above code for both processes ensures that statement 1 (S1) is executed before statement 2 (S2); for this to work, synch must be initialized to 0.
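
As a runnable version of this pattern with POSIX semaphores (the two processes become threads here; proc1, proc2, and synch are illustrative names; compile with -pthread):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t synch;                    /* starts at 0: "S1 has not happened yet" */

void* proc1(void* arg){
    printf("S1\n");             /* statement S1 */
    sem_post(&synch);           /* signal(synch) */
    return NULL;
}

void* proc2(void* arg){
    sem_wait(&synch);           /* wait(synch): blocks until proc1 posts */
    printf("S2\n");             /* statement S2, guaranteed to follow S1 */
    return NULL;
}

int main(void){
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);     /* initial value 0 */
    pthread_create(&t2, NULL, proc2, NULL);
    pthread_create(&t1, NULL, proc1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}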

Semaphore Implementation

The main disadvantage of the mutual-exclusion solutions presented so far is that they all require busy waiting: upon entering or leaving the critical section, the code loops continuously until some condition is met, which wastes CPU cycles. A semaphore that busy-waits like this is called a spinlock.

What we'd like to do is block the process while it is waiting, so that it doesn't waste CPU cycles looping.

This new type of semaphore may be represented by a C structure like this:

typedef struct {
    int value;
    struct process* L;
}semaphore;

The value field is the value of the semaphore, and L is the list of processes waiting on this semaphore. The wait operation would be:

void wait(semaphore *S){
    S->value--;
    if(S->value < 0){
        // add this process to S->L
        block();             // suspend this process
    }
}

The code for signal would be:

void signal(semaphore *S){
    S->value++;
    if(S->value <= 0){
        // remove a process P from S->L
        wakeup(P);           // resume process P
    }
}

Note that wait decrements the value of the semaphore and blocks the process if the value becomes negative. Signal increments the semaphore and wakes one process from the list if the value is still zero or negative after the increment (meaning some process was waiting).

The main thing is that wait and signal are atomic operations (nothing else can execute while they are executing). On a single-processor machine this can be achieved by disabling interrupts (nothing can preempt the currently executing process while interrupts are disabled); on a multiprocessor, one of the correct software or spinlock solutions presented earlier can be used.

Deadlocks and Starvation

Since semaphores can be used to wait for events, we may have a situation where a process waits for an event that can only be caused by a process that is itself waiting on the first process. For example, suppose we have two semaphores, S and Q, both initialized to 1:

Process 1: wait(S);
Process 2: wait(Q);
Process 1: wait(Q); // Process 1 blocks indefinitely
Process 2: wait(S); // Process 2 blocks indefinitely

This is what is called a deadlock and will be covered later.

The blocking semaphore presented earlier (the one with a list of processes) can also lead to indefinite blocking, or starvation, where some process never gets to its critical section, for example if the list is implemented as a stack (LIFO).

Binary Semaphores

The semaphores presented up to this point were counting semaphores: their integer value can range over an unrestricted domain (when negative, its magnitude is the number of processes waiting). There is also a simpler kind of semaphore, called a binary semaphore, which can only take the values 0 and 1. Binary semaphores may be easier to implement on some hardware.

We can implement a counting semaphore using binary semaphores. We take two binary semaphores, S1 and S2, and an integer C that represents the value of our counting semaphore. S1 is initialized to 1, S2 is initialized to 0, and C is initialized to the counting semaphore's initial value. The wait operation of our counting semaphore would then be:

wait(S1);            // acquire mutual exclusion over C
C--;
if(C < 0){
    signal(S1);      // release the mutex before blocking
    wait(S2);        // block until a signal() wakes us up
}
signal(S1);          // release the mutex; after a wakeup this releases
                     // the mutex the signaling process left held

The signal operation would be:

wait(S1);            // acquire mutual exclusion over C
C++;
if(C <= 0)
    signal(S2);      // someone is blocked: wake one waiter
                     // (the waiter will release S1 for us)
else
    signal(S1);      // nobody is waiting: release the mutex ourselves

That's as far as we'll go with process synchronization. There is quite a bit more really important information in chapter 7 of the dinosaur book, which you should read on your own (and which may find its way onto your midterm or final).

© 2006, Particle