
Process Synchronization: Chapter 6

Outline
- Process Synchronization
- The Critical-Section Problem
- Peterson's Solution
- Synchronization Hardware
- Semaphores
  - Implementation
  - Deadlocks and Starvation
- Classic Problems of Synchronization
  - The Bounded-Buffer Problem
  - The Readers-Writers Problem
  - The Dining-Philosophers Problem

Process Synchronization

Cooperating processes are processes that can affect or be affected by other processes executing in the system.

Concurrent access to shared data may result in data inconsistency.

Maintaining data consistency requires mechanisms to ensure that cooperating processes access shared data sequentially.

Bounded-Buffer Problem

Shared data

#define BUFFER_SIZE 10

typedef struct {
    . . .
} item;

item buffer[BUFFER_SIZE];
int in = 0, out = 0;
int counter = 0;

Bounded-Buffer Problem

Producer process

item nextProduced;

while (1) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Bounded-Buffer Problem

Consumer process

item nextConsumed;

while (1) {
    while (counter == 0)
        ;   /* do nothing: buffer is empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}

Bounded-Buffer Problem

"counter++" in assembly language:

    MOV R1, counter
    INC R1
    MOV counter, R1

"counter--" in assembly language:

    MOV R2, counter
    DEC R2
    MOV counter, R2

Bounded-Buffer Problem

- If both the producer and consumer attempt to update the buffer concurrently, the machine-language statements may get interleaved.
- The interleaving depends upon how the producer and consumer processes are scheduled.

Bounded-Buffer Problem

Assume counter is initially 5. One interleaving of statements is:

    producer: MOV R1, counter    (R1 = 5)
    producer: INC R1             (R1 = 6)
    consumer: MOV R2, counter    (R2 = 5)
    consumer: DEC R2             (R2 = 4)
    producer: MOV counter, R1    (counter = 6)
    consumer: MOV counter, R2    (counter = 4)

The value of counter may end up as either 4 or 6, whereas the correct result is 5.
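A minimal C sketch of this lost-update race, assuming a POSIX system with pthreads; the iteration count and thread roles are illustrative, not part of the original slides:

    /* race.c - demonstrates the lost-update race on a shared counter.
       Compile (assumed environment): gcc race.c -o race -lpthread */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    static int counter = 0;            /* shared data, no synchronization */

    static void *producer(void *arg)
    {
        for (int i = 0; i < ITERATIONS; i++)
            counter++;                 /* load, add, store: not atomic */
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < ITERATIONS; i++)
            counter--;                 /* interleaves with the increments */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);

        /* The "correct" result is 0, but lost updates usually leave some
           other value, just as counter ends up 4 or 6 in the slide example. */
        printf("counter = %d (expected 0)\n", counter);
        return 0;
    }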

Process Synchronization

Race Condition: The situation where several processes access and manipulate shared data concurrently, and the final value of the data depends on which process finishes last.

Process Synchronization

Critical Section: A piece of code in a cooperating process in which the process may update shared data (variable, file, database, etc.).

Critical-Section Problem: Serialize executions of critical sections in cooperating processes.

Process Synchronization

More examples:
- Bank transactions
- Airline reservations

Bank Transactions

Deposit (D):

    MOV A, Balance
    ADD A, Deposited
    MOV Balance, A

Withdrawal (W):

    MOV B, Balance
    SUB B, Withdrawn
    MOV Balance, B

Both transactions update the shared variable Balance.

Bank Transactions


- Current balance = Rs. 50,000
- Check deposited = Rs. 10,000
- ATM withdrawal = Rs. 5,000

Bank Transactions

    Check Deposit:   MOV A, Balance      // A = 50,000
    Check Deposit:   ADD A, Deposited    // A = 60,000
    ATM Withdrawal:  MOV B, Balance      // B = 50,000
    ATM Withdrawal:  SUB B, Withdrawn    // B = 45,000
    Check Deposit:   MOV Balance, A      // Balance = 60,000
    ATM Withdrawal:  MOV Balance, B      // Balance = 45,000

The deposit is lost: the final balance is Rs. 45,000 instead of Rs. 55,000.

Assembly language usage in a C++ compiler (inline __asm blocks, as supported by Microsoft-style compilers):

    #include <iostream>
    using namespace std;

    int main() {
        int c = 5;
        __asm {
            mov eax, c
            mov ebx, 10
            add eax, ebx
            mov c, eax
        }
        cout << c << endl;   // prints 15
    }

    #include <iostream>
    using namespace std;

    int main() {
        int c = 0;
        __asm {
            mov eax, c
            add eax, 1
            mov c, eax
        }
        cout << c << endl;   // prints 1
    }

Solution of the Critical-Section Problem

- Software-based solutions
- Hardware-based solutions
- Operating-system-based solutions

Structure of Solution

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

Solution to Critical-Section Problem

- 2-Process Critical-Section Problem
- N-Process Critical-Section Problem

Conditions for a good solution:

1. Mutual Exclusion: If a process is executing in its critical section, then no other process can be executing in its critical section.

Solution to Critical-Section Problem

2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which process will enter its critical section next, and this decision cannot be postponed indefinitely.

Solution to Critical-Section Problem

3. Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Solution to Critical-Section Problem

Assumptions:
- Each process executes at a nonzero speed.
- No assumption can be made regarding the relative speeds of the N processes.

Possible Solutions

- Only 2 processes, P0 and P1.
- Processes may share some common variables to synchronize their actions.

General structure of process Pi:

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

Algorithm 1

Shared variable:

    int turn;    // initially turn = 0

turn == i means Pi can enter its critical section.

Algorithm 1

Process Pi

do {
    while (turn != i)
        ;
    critical section
    turn = j;
    remainder section
} while (1);

Does not satisfy the progress condition.

Only two processes (P0 and P1) concurrently executing

Process P0:

do {
    while (turn != 0)
        ;
    critical section
    turn = 1;
    remainder section
} while (1);

Process P1:

do {
    while (turn != 1)
        ;
    critical section
    turn = 0;
    remainder section
} while (1);

P0 executes first, enters the critical region, and leaves it, leaving turn = 1.

turn = 1 means P1 can enter the critical region; P1 finishes its critical section quickly, and now turn = 0.

P0 enters the loop again and finishes its critical region quickly, giving turn to P1 (turn = 1). Since no assumption is made about relative process speeds, suppose P0 gets ready to run again while P1 (turn = 1) is still in its remainder section.

P0 cannot enter its critical region because turn = 1, even though no process is in its critical section.

This is a violation of condition 2 (Progress).

Algorithm 2: Shared Variables

    boolean flag[2];    // both elements initialized to false

flag[i] == true means Pi is ready to enter its critical section.

Algorithm 2

Process Pi

do {
    flag[i] = true;
    while (flag[j])
        ;
    critical section
    flag[i] = false;
    remainder section
} while (1);

Does not satisfy the progress condition.

Process P0:

do {
    flag[0] = true;
    while (flag[1])
        ;
    critical section
    flag[0] = false;
    remainder section
} while (1);

Process P1:

do {
    flag[1] = true;
    while (flag[0])
        ;
    critical section
    flag[1] = false;
    remainder section
} while (1);

Problem:
- Time Tx: P0 sets flag[0] to true to enter its region.
- Time Ty: P1 sets its flag, i.e., flag[1], to true to enter its region.
- A deadlock! Both will wait forever to enter their regions.

Algorithm 3

Combined shared variables of algorithms 1 and 2:

    boolean flag[2];    // set to false
    int turn = 0;

Algorithm 3

Process Pi

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    critical section
    flag[i] = false;
    remainder section
} while (1);

Algorithm 3

Meets all three requirements:

- Mutual Exclusion: 'turn' can have only one value at a given time (0 or 1).
- Bounded Waiting: At most one entry by a process, and then the second process enters its CS.

Algorithm 3

- Progress: An exiting process sets its 'flag' to false; even if it comes back quickly and sets it to true again, it also sets turn to the number of the other process.

Process P0:

do {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1)
        ;
    critical section
    flag[0] = false;
    remainder section
} while (1);

Process P1:

do {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0)
        ;
    critical section
    flag[1] = false;
    remainder section
} while (1);

- Meets all the requirements.
- Solves the critical-section problem for two processes (Peterson's solution).
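A runnable C sketch of this two-process solution (Peterson's solution); it is not from the slides, and it uses C11 sequentially consistent atomics so the compiler and CPU do not reorder the flag/turn accesses. The worker function and loop counts are illustrative:

    /* peterson.c - two threads protected by Peterson's solution.
       Compile (assumed environment): gcc -std=c11 peterson.c -o peterson -lpthread */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_bool flag[2];        /* flag[i]: Pi wants to enter its CS */
    static atomic_int  turn;           /* whose turn it is to defer */
    static long shared_counter = 0;    /* the shared data being protected */

    static void enter_region(int i)
    {
        int j = 1 - i;
        atomic_store(&flag[i], true);
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                          /* busy wait: entry section */
    }

    static void leave_region(int i)
    {
        atomic_store(&flag[i], false); /* exit section */
    }

    static void *worker(void *arg)
    {
        int i = *(int *)arg;
        for (int k = 0; k < 1000000; k++) {
            enter_region(i);
            shared_counter++;          /* critical section */
            leave_region(i);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared_counter = %ld (expected 2000000)\n", shared_counter);
        return 0;
    }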

n-Process Critical Section Problem

- Consider a system of n processes (P0, P1, ..., Pn-1).

- Each process has a segment of code, called a critical section, in which the process may change shared data.

n-Process Critical Section Problem

- When one process is executing in its critical section, no other process is allowed to execute in its critical section.
- The critical-section problem is to design a protocol to serialize executions of critical sections.

Bakery Algorithm

- By Leslie Lamport.
- Before entering its critical section, a process receives a ticket number. The holder of the smallest ticket number enters the critical section.
- If processes Pi and Pj receive the same number and i < j, then Pi is served first; otherwise Pj is served first.

Bakery Algorithm

The ticket numbering scheme always generates numbers in increasing order of enumeration, i.e., 1, 2, 3, 4, 5, ...

Bakery Algorithm Notations

- (ticket #, process id #)
- (a, b) < (c, d) if a < c, or if a == c and b < d
- max(a0, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1

Bakery Algorithm Data Structures

    boolean choosing[n];
    int number[n];

These data structures are initialized to false and 0, respectively.

Bakery Algorithm: Structure of Pi

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
            ;
    }
    critical section
    number[i] = 0;
    remainder section
} while (1);


Bakery Algorithm

Example ticket numbers:

    Process   Number
    P0        3
    P1        0
    P2        7
    P3        4
    P4        8

Bakery Algorithm

Comparisons made in each process's for-loop, i.e., (number[j], j) < (number[i], i):

              P0               P2               P3               P4
    j = 0:    (3,0) < (3,0)    (3,0) < (7,2)    (3,0) < (4,3)    (3,0) < (8,4)
    j = 1:    number[1] = 0    number[1] = 0    number[1] = 0    number[1] = 0
    j = 2:    (7,2) < (3,0)    (7,2) < (7,2)    (7,2) < (4,3)    (7,2) < (8,4)
    j = 3:    (4,3) < (3,0)    (4,3) < (7,2)    (4,3) < (4,3)    (4,3) < (8,4)
    j = 4:    (8,4) < (3,0)    (8,4) < (7,2)    (8,4) < (4,3)    (8,4) < (8,4)

Bakery Algorithm

- P1 is not interested in entering its critical section, so number[1] is 0.
- P2, P3, and P4 wait for P0.
- P0 gets into its CS, gets out, and sets its number to 0.
- P3 gets into its CS; P2 and P4 wait for it to get out of its CS.

Bakery Algorithm

- P2 gets into its CS and P4 waits for it to get out.
- P4 gets into its CS.
- Sequence of execution of processes: P0, P3, P2, P4.

Bakery Algorithm

Meets all three requirements:

- Mutual Exclusion: (number[j], j) < (number[i], i) cannot be true for both Pi and Pj.
- Bounded Waiting: At most one entry by each of the other (n-1) processes, and then a requesting process enters its critical section (first-come, first-served).

Bakery Algorithm

- Progress: The decision takes complete execution of the for-loop by one process; no process in its remainder section (with its number set to 0) participates in the decision making.

Synchronization Hardware

The bakery algorithm is very difficult to implement. The next attempt was to solve the problem with a little help from hardware.

Two methods:
- Disable interrupts
- Special hardware instructions

Synchronization Hardware

- Normally, access to a memory location excludes other accesses to that same location.
- Extension: designers have proposed machine instructions that perform two operations atomically (indivisibly) on the same memory location (e.g., reading and writing).

Synchronization Hardware

- The execution of such an instruction is also mutually exclusive (even on multiprocessors).
- They can be used to provide mutual exclusion, but other mechanisms are needed to satisfy the other two requirements of a good solution to the CS problem.

Test-And-Set Instruction ( TSL )

Tests and modifies a word atomically:

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

Solution with TSL

Structure for Pi ('lock' is shared and initialized to false):

do {
    while (TestAndSet(lock))
        ;
    Critical Section
    lock = false;
    Remainder Section
} while (1);

Solution with TSL

Is the TSL-based solution good? No.

- Mutual Exclusion: Satisfied
- Progress: Satisfied
- Bounded Waiting: Not satisfied

Swap Instruction

Swaps two variables atomically:

void swap(boolean &a, boolean &b) {
    boolean temp = a;
    a = b;
    b = temp;
}

Solution with Swap

Structure for Pi ('key' is local and set to false; 'lock' is shared):

do {
    key = true;
    while (key == true)
        swap(lock, key);
    Critical Section
    lock = false;
    Remainder Section
} while (1);

Solution with swap

Is the swap-based solution good? No.

- Mutual Exclusion: Satisfied
- Progress: Satisfied
- Bounded Waiting: Not satisfied

A Good Solution

'key' is local; 'lock' and 'waiting[n]' are global. All variables are set to false.

do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = TestAndSet(lock);
    waiting[i] = false;

    Critical Section

    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;

    Remainder Section
} while (1);

Solution with Test-And-Set

Is the given solution good? Yes.

- Mutual Exclusion: Satisfied
- Progress: Satisfied
- Bounded Waiting: Satisfied

Outline
- Process Synchronization
- The Critical-Section Problem
- Peterson's Solution
- Synchronization Hardware
- Semaphores
  - Implementation
  - Deadlocks and Starvation
- Classic Problems of Synchronization
  - The Bounded-Buffer Problem
  - The Readers-Writers Problem
  - The Dining-Philosophers Problem

Semaphores

- Synchronization tool, available in operating systems.
- Semaphore S: an integer variable that can only be accessed via two indivisible (atomic) operations, called wait and signal.

Definition (busy-waiting version):

wait(S):
    while (S <= 0)
        ;   // no-op
    S--;

signal(S):
    S++;

- On a single-CPU system, disable all interrupts while executing wait and signal.
- On a multiple-CPU system, protect the semaphore with TSL.

Two Types of Semaphores

- Counting semaphore: the integer value (S) can range over an unrestricted domain.
- Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement.
- A counting semaphore S can be implemented as a binary semaphore.

Semaphores

wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}

signal(S) {
    S++;
}

n-Processes CS Problem

Shared data: semaphore mutex = 1;

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);

Remember: wait and signal are atomic.

Process P0:

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);

Process P1:

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);

- wait executes as one indivisible operation that, once started, cannot be preempted; the same holds for signal.
- If both processes execute wait(mutex) at the same time, only one gets access, because wait and signal are atomic.

Is it a Good Solution?

- Mutual Exclusion: Yes
- Progress: Yes
- Bounded Waiting: No

Mutexes

- A mutex variable can have only two states: locked and unlocked.
- Good when only mutual exclusion is required.
- If the lock is 0, any process may set it to 1 using the TSL instruction and then read or write the shared data.
- If the lock is 1, a process may still set it to 1 using TSL (no effect on the lock state) and then compare the old value with 0; since it was non-zero (1), the process loops to check the state of the lock again.

Mutex Code in Assembly

enter_region:
    TSL Rx, MUTEX        // get lock value in Rx and set lock to 1
    CMP Rx, 0            // compare old value of lock with 0
    JZE ok               // it was 0, hence enter the critical region
    CALL thread_yield    // lock was already 1: let another thread run
    JMP enter_region     // then loop back and try again
ok: RET                  // lock was 0, so return and enter the critical region

leave_region:
    MOV MUTEX, 0         // set lock to 0 so another waiting or looping process may enter
    RET

Busy Waiting

- Processes wait by executing CPU instructions.
- Problem? Wasted CPU cycles.
- Solution? Modify the definition of semaphore.

Semaphore Implementation

Define a semaphore as a record:

typedef struct {
    int value;
    struct process *L;    // list of processes waiting on this semaphore
} semaphore;

Semaphore Implementation

Assume two simple operations:
- block() suspends the process that invokes it.
- wakeup(P) resumes the execution of a blocked process P.

Semaphore Implementation

- A negative value of S.value indicates the number of processes waiting for the semaphore.
- A pointer in the PCB is needed to maintain a queue of processes waiting for a semaphore.

Semaphore wait()

Semaphore operations are now defined as:

void wait(semaphore S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

Semaphore signal()

void signal(semaphore S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}
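Operating systems expose this blocking kind of semaphore through their APIs. A minimal sketch (not from the slides) using POSIX semaphores, assuming a Linux-style pthreads environment, applied to the n-process mutual-exclusion pattern shown earlier:

    /* sem_mutex.c - mutual exclusion with a POSIX semaphore.
       Compile (assumed environment): gcc sem_mutex.c -o sem_mutex -lpthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;               /* semaphore used as a mutex */
    static long shared_counter = 0;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            sem_wait(&mutex);         /* wait(mutex): blocks instead of spinning */
            shared_counter++;         /* critical section */
            sem_post(&mutex);         /* signal(mutex) */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);       /* initial value 1 */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %ld (expected 2000000)\n", shared_counter);
        sem_destroy(&mutex);
        return 0;
    }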

- There are numerous Win32 APIs. Block and wakeup equivalents in Win32 are ResumeThread(), SuspendThread(), WaitForSingleObject(), WaitForMultipleObjects(), etc.
- To create a semaphore variable: a CSemaphore object, or CreateSemaphore(), which returns a handle.

Two Implementations

The busy-waiting version is better when critical sections are small; the queue-waiting version is better for long critical sections (when waiting is for longer periods of time).

Process Synchronization

To execute statement B in Pj only after statement A has been executed in Pi, use a semaphore S initialized to 0:

Pi:
    A;
    signal(S);

Pj:
    wait(S);
    B;

Process Synchronization

Give a semaphore-based solution for the following problem: statement S1 in P1 should execute only after statement S2 in P2 has executed, and statement S2 in P2 should execute only after statement S3 in P3 has executed.

Process Synchronization

    P1 contains S1,  P2 contains S2,  P3 contains S3.

Solution

semaphore A = 0, B = 0;

P1:
    wait(A);
    S1;

P2:
    wait(B);
    S2;
    signal(A);

P3:
    S3;
    signal(B);
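A runnable sketch of this ordering with POSIX semaphores (not from the slides); the thread bodies simply print S1/S2/S3 in place of the real statements:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t A, B;                /* both initialized to 0 */

    static void *p1(void *arg) { sem_wait(&A); printf("S1\n"); return NULL; }
    static void *p2(void *arg) { sem_wait(&B); printf("S2\n"); sem_post(&A); return NULL; }
    static void *p3(void *arg) { printf("S3\n"); sem_post(&B); return NULL; }

    int main(void)
    {
        pthread_t t1, t2, t3;
        sem_init(&A, 0, 0);
        sem_init(&B, 0, 0);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t3, NULL, p3, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_join(t3, NULL);
        return 0;                     /* statements always execute as S3, S2, S1 */
    }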

Problems with Semaphores

Semaphores provide a powerful tool for enforcing mutual exclusion and coordinating processes.

But wait(S) and signal(S) are scattered among several processes, so it is difficult to understand their effects.

Problems with Semaphores

- Usage must be correct in all the processes.
- One bad (or malicious) process can fail the entire collection of processes.

Deadlocks and Starvation

A set of processes is said to be in a deadlock state if every process is waiting for an event that can be caused only by another process in the set. Examples:
- Traffic deadlocks
- One-way bridge crossing

Starvation (infinite blocking): indefinite waiting, e.g., due to unavailability of resources.

Deadlock

P0:
    wait(S);
    wait(Q);
    ...
    signal(S);
    signal(Q);

P1:
    wait(Q);
    wait(S);
    ...
    signal(Q);
    signal(S);

Deadlock

P0 waits for P1 to execute signal(Q), while P1 waits for P0 to execute signal(S); neither can proceed.

Starvation (Infinite Blocking)

P0:
    wait(S);
    ...
    wait(S);

P1:
    wait(S);
    ...
    signal(S);

Violation of Mutual Exclusion

P0:
    signal(S);
    ...
    wait(S);

P1:
    wait(S);
    ...
    signal(S);

Main Cause of Problem and Solution

- Cause: Programming errors due to the tandem use of wait() and signal() operations.
- Solution: Higher-level language constructs, such as the critical region (region statement) and the monitor.

Producer consumer using Semaphores

#define N 100
typedef int Semaphore;          /* access to the critical region is guarded by a semaphore */
Semaphore mutex = 1, empty = N, full = 0;

void producer(void)
{
    int item;
    while (TRUE) {
        item = produce_item();
        down(&empty);
        down(&mutex);
        insert_item(item);
        up(&mutex);
        up(&full);
    }
}

Producer-Consumer Using Semaphores (continued)

void consumer(void)
{
    int item;
    while (TRUE) {
        down(&full);
        down(&mutex);
        item = remove_item();
        up(&mutex);
        up(&empty);
        consume_item(item);
    }
}
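A compilable version of the same structure (not from the slides), with down() mapped to POSIX sem_wait() and up() to sem_post(); the buffer size and item count are illustrative:

    /* prodcons.c - bounded buffer with semaphores (sketch).
       Compile (assumed environment): gcc prodcons.c -o prodcons -lpthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N      10                  /* buffer slots */
    #define ITEMS  100                 /* items to transfer */

    static int buffer[N];
    static int in = 0, out = 0;
    static sem_t mutex, empty, full;

    static void *producer(void *arg)
    {
        for (int item = 0; item < ITEMS; item++) {
            sem_wait(&empty);          /* down(&empty) */
            sem_wait(&mutex);          /* down(&mutex) */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);          /* up(&mutex) */
            sem_post(&full);           /* up(&full) */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full);
            sem_wait(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&mutex, 0, 1);
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }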

Classical Problems of Synchronization
- Bounded-Buffer Problem: do it yourself!
- Readers-Writers Problem: do it yourself.
- Dining-Philosophers Problem: see the next slides, but do it yourself.

Readers-Writers Problem

Shared data:

    semaphore mutex, wrt;
    int readcount;

Initially: mutex = 1, wrt = 1, readcount = 0.

Readers-Writers Problem: Writer Process

    wait(wrt);
        ... writing is performed ...
    signal(wrt);

Readers-Writers Problem: Reader Process

    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);

        ... reading is performed ...

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);

Dining-Philosophers Problem

Shared data:

    semaphore chopstick[5];

Initially all values are 1.

Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
        ... eat ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
        ... think ...
} while (1);

Deadlock can occur if all the philosophers become hungry at the same time and each picks up its left chopstick.

Semaphores in Win32
- CreateSemaphore()
- WaitForSingleObject()
- ReleaseSemaphore()

How to use Semaphores

HANDLE hs;
hs = CreateSemaphore(
        NULL,           // no security attributes
        InitialValue,   // initial count (for a binary semaphore, 1)
        MaxValue,       // maximum count (for a binary semaphore, cMax = 1)
        NULL);          // pointer to semaphore-object name

WaitForSingleObject(hs, INFINITE);
count++;                // simulation of critical region
ReleaseSemaphore(hs, 1, NULL);

Monitors
- A continuation of the ideas of objects and object-oriented programming: abstract data types, data encapsulation, software engineering.
- A monitor contains programmer-defined operations that are provided mutual exclusion within the monitor.
- Only one process can be active inside a monitor at a time.
- The programmer does not need to code synchronization explicitly.
- An integral part of the programming language, so the compiler can generate the correct code to implement the monitor using the language's run-time support.

Monitor syntax:

monitor monitor-name
{
    shared variable declarations

    procedure body P1(...) {
        . . .
    }

    procedure body P2(...) {
        . . .
    }

    procedure body Pn(...) {
        . . .
    }

    {
        initialization code
    }
}

Monitors: what if the programmer wants explicit synchronization?

- To allow a process to wait within the monitor, a condition variable must be declared, as in: condition x, y;
- A condition variable can only be used with the operations wait and signal.
- The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();
- The x.signal operation resumes exactly one suspended process. If no process is suspended, the signal operation has no effect.
- Normally x.signal resumes the process that first issued x.wait().
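C has no monitor construct, but the same wait/signal behaviour can be approximated with a pthread mutex plus a condition variable. A minimal sketch (not from the slides; the function and variable names are illustrative):

    #include <pthread.h>
    #include <stdbool.h>

    /* The mutex plays the role of the monitor's implicit lock;
       cond_x plays the role of a condition variable x. */
    static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond_x       = PTHREAD_COND_INITIALIZER;
    static bool condition_holds = false;

    void waiting_operation(void)
    {
        pthread_mutex_lock(&monitor_lock);             /* enter the "monitor" */
        while (!condition_holds)
            pthread_cond_wait(&cond_x, &monitor_lock); /* x.wait(): releases the lock while suspended */
        /* ... proceed: the condition now holds ... */
        pthread_mutex_unlock(&monitor_lock);           /* leave the "monitor" */
    }

    void signalling_operation(void)
    {
        pthread_mutex_lock(&monitor_lock);
        condition_holds = true;
        pthread_cond_signal(&cond_x);                  /* x.signal(): resumes one suspended thread;
                                                          has no effect if none is waiting */
        pthread_mutex_unlock(&monitor_lock);
    }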

Dining Philosophers Example

monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i);     // following slides
    void putdown(int i);    // following slides
    void test(int i);       // following slides

    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}

Dining Philosophers

void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}

A philosopher calls pickup() when it wants to eat; the code also checks whether the chopsticks are available.

void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors
    test((i + 4) % 5);
    test((i + 1) % 5);
}

putdown() is called after eating.

Dining Philosophers

void test(int i) {
    if ((state[(i + 4) % 5] != eating) &&
        (state[i] == hungry) &&
        (state[(i + 1) % 5] != eating)) {
        state[i] = eating;
        self[i].signal();
    }
}

Check the states of the neighboring philosophers: they must not be eating, i.e., must not be in their critical regions.

Dining Philosophers Example

The calling code is now simple: a dining philosopher picks up the chopsticks, and if they are not available it waits.

    dp.pickup(i);
        eat
    dp.putdown(i);

Interesting Reading…

Sec. 2.3 of Tanenbaum: the producer-consumer problem with monitors.

Base structure:

    monitor ProducerConsumer
    {
        function Producer
        function Consumer
        function InsertItem
        function RemoveItem
    }

Monitor With Condition Variables

Windows 2000 Synchronization
- Uses interrupt masks to protect access to global resources on uniprocessor systems.
- Uses spinlocks on multiprocessor systems.
- Dispatcher objects may also provide events; an event acts much like a condition variable.

END OF CHAPTER. Next: Memory Management.