CHAPTER 7: PROCESS
SYNCHRONIZATION (进程同步)
Why synchronization
 Primitive software approaches
 Hardware approaches
 Semaphores
 Classical Problems of Synchronization
 Critical Regions
 Monitors
 Synchronization in Solaris 2 & Windows 2000
 High priority  interactive processes

BACKGROUND
Concurrent access to shared data may result in
data inconsistency.
 Maintaining data consistency requires
mechanisms to ensure the orderly execution of
cooperating processes. (Blackboard)
 Producer writes to the blackboard
 “You are a bad guy, …”
 Consumer reads from the blackboard.
 Bounded buffer problem with shared data.

Background: Bounded-Buffer: Shared Data

Shared data
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
Background: Bounded-Buffer: Producer

Producer process
item nextProduced;
while (1) {
while (counter == BUFFER_SIZE)
; /* do nothing */
buffer[in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
counter++;
}
Background: Bounded-Buffer: Consumer

Consumer process
item nextConsumed;
while (1) {
while (counter == 0)
; /* do nothing */
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
counter--;
}
Background: Bounded-Buffer: Atomicity

The statements
counter++;
counter--;
must be performed atomically.

Atomic operation (原子操作) means an operation
that completes in its entirety without interruption.
Background: Bounded-Buffer: Interleaving

The statement “counter++” may be implemented in
machine language as:
register1 = counter
register1 = register1 + 1
counter = register1

The statement “counter--” may be implemented as:
register2 = counter
register2 = register2 – 1
counter = register2
Background: Bounded-Buffer: Interleaving

If both the producer and consumer attempt to
update the buffer concurrently, the assembly
language statements may get interleaved.

Interleaving depends upon how the producer and
consumer processes are scheduled.
Background: Bounded-Buffer: Interleaving

Assume counter is initially 5. One interleaving of the
statements is:

p: register1 = counter          // (register1 = 5)
p: register1 = register1 + 1    // (register1 = 6)
c: register2 = counter          // (register2 = 5)
c: register2 = register2 - 1    // (register2 = 4)
p: counter = register1          // (counter = 6)
c: counter = register2          // (counter = 4)
The value of counter may be either 4 or 6, whereas the
correct result should be 5.
Background

Race condition:
 The situation where several processes access
and manipulate shared data concurrently.
 The final value of the shared data depends
upon which process finishes last.

TO PREVENT RACE CONDITIONS,
CONCURRENT PROCESSES MUST BE
SYNCHRONIZED.
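A minimal, runnable sketch of this race using POSIX threads (not from the
original slides; the thread names and iteration count are illustrative): two
threads run counter++ and counter-- with no synchronization, so the final
value is usually not 0.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;            /* shared, deliberately unprotected */

static void *producer_thread(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                  /* load, add, store: not atomic */
    return NULL;
}

static void *consumer_thread(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer_thread, NULL);
    pthread_create(&c, NULL, consumer_thread, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld (expected 0)\n", counter);   /* usually nonzero */
    return 0;
}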
THE CRITICAL-SECTION PROBLEM
 The n processes compete to use some shared data.
 Each process has a code segment, called the critical
section, in which the shared data is accessed.
 How to ensure that when one process is executing in its
critical section, no other process is allowed to execute in its
critical section?
 To solve the critical-section problem:
  Design a protocol that the processes can use to
cooperate.
  Each process must behave properly:
entry section, critical section, exit section,
remainder section.
The Critical-Section Problem: Solution Criteria
3 criteria for a critical-section solution:
 Mutual Exclusion. If process Pi is executing in its
critical section, then no other processes can be executing
in their critical sections.
 Progress. If no process is executing in its critical section
and there exist some processes that wish to enter their
critical sections, then the selection of the process that
will enter its critical section next cannot be postponed
indefinitely.
 Bounded Waiting. A bound must exist on the number of
times that other processes are allowed to enter their
critical sections after a process has made a request to enter
its critical section and before that request is granted.
The Critical-Section Problem:
2-Processes: Algorithm 0
 Only 2 processes, P0 and P1.
 General structure of process Pi (other process Pj):
do {
    entry section
        critical section
    exit section
        remainder section
} while (1);
 Processes may share some common variables to
synchronize their actions.
    int i = 0 or 1;
    int j = 1 - i;
The Critical-Section Problem:
2-Processes: Algorithm 1
 Shared variables:
  int turn;     // initially turn = 0
  turn = i;     // Pi can enter its critical section
 Process Pi
do {
    while (turn != i) ;
        critical section
    turn = j;
        remainder section
} while (1);
 Satisfies mutual exclusion, but not the progress requirement.
The Critical-Section Problem:
2-Processes: Algorithm 2
 Shared variables
  boolean flag[2];    // initially flag[0] = flag[1] = false.
  flag[i] = true;     // Action/intention: Pi will enter its critical section
 Process Pi
do {
    while (flag[j]) ;
    flag[i] = true;
        critical section
    flag[i] = false;
        remainder section
} while (1);
 It does not even satisfy mutual exclusion!
The Critical-Section Problem:
2-Processes: Algorithm 3
 Shared variables
  boolean flag[2];    // initially flag[0] = flag[1] = false.
  flag[i] = true;     // Intention: Pi ready to enter its critical section
 Process Pi
do {
    flag[i] = true;
    while (flag[j]) ;
        critical section
    flag[i] = false;
        remainder section
} while (1);
 Satisfies mutual exclusion, but not the progress requirement.
The Critical-Section Problem:
2-Processes: Algorithm 4
 Shared variables
  boolean flag[2];    // initially flag[0] = flag[1] = false.
  flag[i] = true;     // Pi ready to enter its critical section
 Process Pi
do {
    flag[i] = true;
    while (flag[j])
    {   flag[i] = false;
        sleep(random());
        flag[i] = true;
    }
        critical section
    flag[i] = false;
        remainder section
} while (1);
 Satisfies mutual exclusion, but not the progress requirement.
The Critical-Section Problem:
2-Processes: Algorithm N (Peterson 1981)
 Combines the shared variables of Algorithms 1 and 2.
 Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j) ;
        critical section
    flag[i] = false;
        remainder section
} while (1);
 Meets all three requirements; solves the critical-section
problem for two processes.
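A runnable sketch of Peterson's algorithm with two POSIX threads (not from
the original slides). The slides assume memory accesses are not reordered;
C11 sequentially consistent atomics are used here to provide that guarantee
on real hardware. Thread ids and loop counts are illustrative.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int  turn;
static long shared_count = 0;          /* protected by the algorithm */

static void peterson_lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);
    atomic_store(&turn, j);
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy wait */
}

static void peterson_unlock(int i) {
    atomic_store(&flag[i], false);
}

static void *worker(void *arg) {
    int i = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        peterson_lock(i);
        shared_count++;                /* critical section */
        peterson_unlock(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("shared_count = %ld (expected 200000)\n", shared_count);
    return 0;
}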
The Critical-Section Problem:
N-Processes: Bakery Algorithm
Critical section for n processes
 Before entering its critical section, a process
receives a number.
 The holder with the smallest number will enter
the critical section.
 If processes Pi and Pj receive the same number, if
i < j, then Pi is served first; else Pj is served first.
 The numbering scheme always generates
numbers in increasing order of enumeration; i.e.,
1,2,3,3,3,3,4,5...
The Critical-Section Problem:
N-Processes: Bakery Algorithm
 Notation: "<" is the lexicographical order on (ticket #, process id #)
  (a,b) < (c,d) if a < c, or if a == c and b < d
  max(a0, …, an-1) is a number k such that k ≥ ai for
i = 0, 1, 2, …, n – 1
 Shared data
    boolean choosing[n];
    int number[n];
Data structures are initialized to false and 0, respectively.
The Critical-Section Problem:
N-Processes: Bakery Algorithm
do {
choosing[i] = true;
number[i] = max(number[0], number[1], …, number [n – 1])+1;
choosing[i] = false;
for (j = 0; j < n; j++) {
while (choosing[j]) ;
while ((number[j] != 0) && ( (number[j],j) < (number[i],i) ) ) ;
}
critical section
number[i] = 0;
remainder section
} while (1);
The Critical-Section Problem:
N-Processes: Bakery Algorithm
 If Pi is in its critical section and Pj (j != i) has already
chosen its number[j] != 0, then
(number[i], i) < (number[j], j), i.e.,
(number[i], i) is the smallest of {(number[0], 0),
(number[1], 1), (number[2], 2), … }.
 Mutual exclusion. Only the process with the smallest
(number[i], i) can enter its critical section.
 Progress requirement and bounded waiting. The
processes enter their critical sections on a first-come,
first-served basis.
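A sketch of the bakery algorithm as a lock for N threads, using C11 atomics
(sequential consistency stands in for the slides' assumption that loads and
stores are not reordered); N and the function names are illustrative, not
part of the original slides.

#include <stdatomic.h>
#include <stdbool.h>

#define N 4

static atomic_bool choosing[N];
static atomic_int  number[N];

static int max_ticket(void) {
    int m = 0;
    for (int k = 0; k < N; k++) {
        int v = atomic_load(&number[k]);
        if (v > m) m = v;
    }
    return m;
}

void bakery_lock(int i) {
    atomic_store(&choosing[i], true);
    atomic_store(&number[i], max_ticket() + 1);
    atomic_store(&choosing[i], false);
    for (int j = 0; j < N; j++) {
        while (atomic_load(&choosing[j]))
            ;                            /* wait until P_j has its ticket */
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
            ;                            /* wait while (number[j], j) < (number[i], i) */
    }
}

void bakery_unlock(int i) {
    atomic_store(&number[i], 0);         /* number[i] = 0: leave the critical section */
}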
SYNCHRONIZATION
HARDWARE

Interrupt enabling and disabling
 Can be applied to Uni-processor.
 Not feasible for Multi-processors.

Special Memory instruction
 TestAndSet
 Swap
Synchronization Hardware:
TestAndSet

Test and modify the content of a word atomically.
boolean TestAndSet(boolean &target) {
boolean rv = target;
target = true;
return rv;
}
Synchronization Hardware:
TestAndSet: Mutual exclusion
 Shared data:
boolean lock = false;
 Process Pi
do {
    while (TestAndSet(lock)) ;
        critical section
    lock = false;
        remainder section
} while (1);
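C11 provides atomic_flag_test_and_set, which behaves like the TestAndSet
instruction above; a minimal sketch of the same spin lock built with it (the
function names are illustrative):

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear == unlocked */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;              /* atomically sets the flag and returns its old value */
}

void release(void) {
    atomic_flag_clear(&lock_flag);                 /* lock = false */
}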
Synchronization Hardware:
Swap
 Swap two variables atomically.
void Swap(boolean &a, boolean &b) {
boolean temp = a;
a = b;
b = temp;
}
Synchronization Hardware:
Swap: Mutual exclusion
 Shared data (initialized to false):
    boolean lock;
    boolean waiting[n];    // used by the bounded-waiting solution below
 Process Pi (key is a local boolean variable)
do {
    key = true;
    while (key == true) Swap(lock, key);
        critical section
    lock = false;
        remainder section
} while (1);
Synchronization Hardware:
Bounded-waiting mutual exclusion with TestAndSet
do{
waiting[i] = true; key = true;
while (waiting[i] && key) key = TestAndSet(lock);
waiting[i] = false;
critical section
j = (i+1)%n;
while ((j != i) && !waiting[j] ) j = (j+1) % n;
if (j==i) lock = false;
else waiting[j] = false;
remainder section
} while (1);
Synchronization Hardware:
Bounded-waiting mutual exclusion with TestAndSet
 Mutual exclusion: Process Pi can enter its critical section
only if either waiting[i] == false or key == false.
  key == false: for the first process to execute TestAndSet.
  waiting[i] == false: only if another process leaves its
critical section.
Progress requirement and bounded-waiting requirement:
 When a process leaves its critical section, it scans the
array waiting in the cyclic ordering.
 It designates the first process in this ordering that is in
the entry section as the next one to enter the critical
section.
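A sketch of the bounded-waiting protocol above with C11 atomics (not from
the original slides); atomic_exchange on lock plays the role of TestAndSet,
and N and the function names are illustrative.

#include <stdatomic.h>
#include <stdbool.h>

#define N 4

static atomic_bool lock;          /* false == free */
static atomic_bool waiting[N];

void enter_region(int i) {
    bool key = true;
    atomic_store(&waiting[i], true);
    while (atomic_load(&waiting[i]) && key)
        key = atomic_exchange(&lock, true);    /* TestAndSet(lock) */
    atomic_store(&waiting[i], false);
}

void leave_region(int i) {
    int j = (i + 1) % N;
    while (j != i && !atomic_load(&waiting[j]))
        j = (j + 1) % N;                       /* scan waiting[] cyclically */
    if (j == i)
        atomic_store(&lock, false);            /* nobody waiting: free the lock */
    else
        atomic_store(&waiting[j], false);      /* pass the critical section to P_j */
}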
BACI
BACI stands for Ben-Ari Concurrent Interpreter.
The compiler and interpreter originally were
procedures in a program written by M. Ben-Ari,
based on the original Pascal compiler by Niklaus
Wirth.
The original version of the BACI compiler and
interpreter was created from that source code and was
hosted on a PRIME mainframe.
After several modifications and additions, this
version was ported to a PC version in Turbo Pascal,
to Sun Pascal, and to C.
Finally, the compiler and interpreter were split into
two separate programs.
BACI
Recently, a C-- compiler has been added to the BACI
suite of programs to compile source programs written
in a restricted dialect of C++ into PCODE object code
executable by the interpreter.
Compared with other concurrent languages, BACI
offers a variety of synchronization techniques with a
syntax that is usually familiar.
Any experienced C or Pascal programmer could use
BACI within hours.
BACI
 Adding.Cm
 Attemp0.Cm
 Attemp1.Cm
 Attemp2.Cm
 Attemp3.Cm
 Attemp4.Cm
 AttempN.Cm
 Multiple.Cm
 MTestSet.Cm
 MSwap.Cm
 Mbwts.Cm
SEMAPHORES(信号量)

A semaphore can be viewed as an object which
has an integer value and three atomic operations:
I. A semaphore may be initialized to a
nonnegative value.
II. The wait operation decrements the semaphore
value. If the value becomes negative, then the
process executing the wait is blocked.
III. The signal operation increments the semaphore
value. If the value is not positive, then a
process blocked by a wait operation is
unblocked.
Semaphores: Critical Section of n Processes
Shared data:
semaphore mutex; //initially mutex = 1
 Process Pi:
do {
wait(mutex);
critical section
signal(mutex);
remainder section
} while (1);

Semaphores: Implementation
Assume two simple operations:
 block suspends the process that invokes it.
 wakeup(P) resumes the execution of a
blocked process P.
 Define a semaphore as a record
typedef struct {
int value;
struct process *L;
} semaphore;

Semaphores: Implementation

Semaphore operations now defined as
wait(S):
S.value--;
if (S.value < 0) {
add this process to S.L;
block;
}
signal(S):
S.value++;
if (S.value <= 0) {
remove a process P from S.L;
wakeup(P);
}
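A sketch of such a blocking semaphore using a pthread mutex and condition
variable in place of block/wakeup(P) (not from the original slides). Unlike
the slides' version, this common variant keeps value nonnegative and lets
the condition variable's queue play the role of S.L.

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t m;
    pthread_cond_t  c;          /* its wait queue plays the role of S.L */
} semaphore_t;

void semaphore_init(semaphore_t *s, int v) {
    s->value = v;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void semaphore_wait(semaphore_t *s) {
    pthread_mutex_lock(&s->m);
    while (s->value <= 0)                 /* "block" until a signal arrives */
        pthread_cond_wait(&s->c, &s->m);
    s->value--;
    pthread_mutex_unlock(&s->m);
}

void semaphore_signal(semaphore_t *s) {
    pthread_mutex_lock(&s->m);
    s->value++;
    pthread_cond_signal(&s->c);           /* "wakeup(P)" for one waiter */
    pthread_mutex_unlock(&s->m);
}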
Semaphores: As a General Synchronization Tool
 Only after A is executed in Pi, execute B in Pj.
 Use a semaphore flag initialized to 0.
 Code:
    Pi:  A;
         signal(flag);
    Pj:  wait(flag);
         B;
Semaphores: Deadlock and Starvation
 Deadlock (死锁) – two or more processes are waiting
indefinitely for an event that can be caused by only
one of the waiting processes.
 Let S and Q be two semaphores initialized to 1:
      P0:              P1:
      wait(S);         wait(Q);
      wait(Q);         wait(S);
      signal(S);       signal(Q);
      signal(Q);       signal(S);
 Starvation (饥饿) – indefinite blocking. A process
may never be removed from the semaphore queue in
which it is suspended.
Semaphores: Types
Counting semaphore – integer value can range
over an unrestricted domain.
 Binary semaphore – integer value can range
only between 0 and 1; can be simpler to
implement.
 Can implement a counting semaphore S as a
binary semaphore.

Semaphores: Binary Semaphores
Implementing S as a Binary Semaphore
 Data structures:
    binary-semaphore S1, S2;
    int C;
 Initialization:
    S1 = 1;
    S2 = 0;
    C = initial value of semaphore S.
Semaphore: Binary Semaphores:
Implementing counting semaphore by binary semaphore
 Shared variables:
    binary-semaphore S1 = 1, S2 = 0;
    int C;
 wait operation:
    wait(S1);
    C--;
    if (C < 0) { signal(S1); wait(S2); }
    signal(S1);
 signal operation:
    wait(S1);
    C++;
    if (C <= 0) signal(S2);
    else        signal(S1);
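A direct C translation of this construction (not from the original slides),
using two POSIX semaphores that are only ever given the values 0 and 1, so
they act as binary semaphores; the type and function names are illustrative.

#include <semaphore.h>

typedef struct {
    sem_t S1;      /* binary: protects C */
    sem_t S2;      /* binary: where csem_wait blocks */
    int   C;       /* value of the counting semaphore */
} csem;

void csem_init(csem *s, int initial) {
    sem_init(&s->S1, 0, 1);
    sem_init(&s->S2, 0, 0);
    s->C = initial;
}

void csem_wait(csem *s) {
    sem_wait(&s->S1);
    s->C--;
    if (s->C < 0) {
        sem_post(&s->S1);      /* release S1, then block on S2 */
        sem_wait(&s->S2);
    }
    sem_post(&s->S1);          /* if we blocked, this releases S1 on the signaler's behalf */
}

void csem_signal(csem *s) {
    sem_wait(&s->S1);
    s->C++;
    if (s->C <= 0)
        sem_post(&s->S2);      /* hand S1 over to the waiter being woken */
    else
        sem_post(&s->S1);
}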
CLASSICAL PROBLEMS OF
SYNCHRONIZATION
Bounded-buffer problem (有限缓存问题)
Readers-writers problem (读者-著者问题)
 Readers have priorities
 Writers have priorities
Dining-philosophers problem(哲学家就餐问题)
 With deadlock solution
 Deadlock free solution
Barbershop problem (理发师问题)
 Unfair solution
 Fair solution
Classical Problems of Synchronization:
Bounded-Buffer Problem: Shared Data
// The producer produces items that are consumed
// by the consumer.
//
// The number of buffers is bounded.
semaphore
mutex = 1, // To protect the buffer pool
full = 0, // To count the number of full buffers
empty = n; // To count the number of empty buffers
Classical Problems of Synchronization:
Bounded-Buffer Problem: Producer Process
while (1)
{ …
produce an item in nextp
…
wait(empty);
wait(mutex);
…; add nextp to buffer; …
signal(mutex);
signal(full);
}
Classical Problems of Synchronization:
Bounded-Buffer Problem: Consumer Process
while (1)
{
    wait(full);
    wait(mutex);
    …; remove an item from buffer to nextc; …
    signal(mutex);
    signal(empty);
    …
    consume the item in nextc
    …
}
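A runnable sketch of this bounded-buffer solution with POSIX unnamed
semaphores, where sem_wait/sem_post play the roles of wait/signal (not from
the original slides); the buffer size and item count are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
#define ITEMS 20

static int buffer[N], in = 0, out = 0;
static sem_t empty, full, mutex;

static void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);            /* wait(empty) */
        sem_wait(&mutex);            /* wait(mutex) */
        buffer[in] = i;              /* add nextp to buffer */
        in = (in + 1) % N;
        sem_post(&mutex);            /* signal(mutex) */
        sem_post(&full);             /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);
        sem_wait(&mutex);
        int item = buffer[out];      /* remove an item from buffer to nextc */
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}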
Classical Problems of Synchronization:
Readers-Writers Problem


Readers-writers problem
 One data object is to be shared by two kinds of
processes.
 Reader processes may want only to read the shared
object.
 Writer processes may want to update (that is, to
read and write) the shared object.
Synchronization requirements:
 A writer and some other processes may not access
the shared object simultaneously.
 More than one reader process can access the shared
object simultaneously.
Classical Problems of Synchronization:
Readers-Writers Problem

The solution for Readers-Writers Problem
 Readers first
 No reader will be kept waiting unless a writer has
already obtained permission to use the shared object.
(In other words, no reader should wait for other
readers to finish simply because a writer is waiting)
 Writers first
 Once a writer is ready, that writer performs its write
ASAP. (In other words, if a writer is waiting to access
the object, no new readers may start reading. )
…
Classical Problems of Synchronization:
Readers-Writers Problem: Readers First: S
 Shared data
    semaphore mutex = 1,
              wrt   = 1;
    int readcount = 0;
Classical Problems of Synchronization:
Readers-Writers Problem: Readers First: W
Writers process:
wait(wrt);
writing is performed
signal(wrt);
remainder section
Classical Problems of Synchronization:
Readers-Writers Problem: Readers First: R
Readers process:
wait(mutex);
readcount++;
if (readcount == 1) wait(wrt);
signal(mutex);
reading is performed
wait(mutex);
readcount--;
if (readcount == 0) signal(wrt);
signal(mutex);
remainder section
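A sketch of the readers-first protocol with POSIX semaphores (not from the
original slides); the comments mark where the actual access to the shared
object would go.

#include <semaphore.h>

static sem_t mutex, wrt;
static int readcount = 0;

void rw_init(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
}

void reader(void) {
    sem_wait(&mutex);
    if (++readcount == 1) sem_wait(&wrt);   /* first reader locks out writers */
    sem_post(&mutex);
    /* reading is performed */
    sem_wait(&mutex);
    if (--readcount == 0) sem_post(&wrt);   /* last reader lets writers back in */
    sem_post(&mutex);
}

void writer(void) {
    sem_wait(&wrt);
    /* writing is performed */
    sem_post(&wrt);
}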
Classical Problems of Synchronization:
Readers-Writers Problem: Writer First: S
 Shared variables:
// Once a writer is ready, that writer performs
// its write ASAP.
semaphore
    wSem  = 1,   // Semaphore for all writers
    rSem  = 1,   // Semaphore for the first reader
    rsSem = 1,   // Semaphore for other readers
    x     = 1,
    y     = 1;
int readCount = 0, writeCount = 0;
Classical Problems of Synchronization:
Readers-Writers Problem: Writer First: W
while(1)
{ wait(y);
if (++writeCount == 1) { wait(rSem); }
signal(y);
wait(wSem); WRITEUNIT; signal(wSem);
wait(y);
if (--writeCount == 0) {signal(rSem);}
signal(y);
}
Classical Problems of Synchronization:
Readers-Writers Problem: Writer First: R
while(1)
{ wait(rsSem);
wait(rSem);
wait(x);
if (++readCount==1){wait(wSem);}
signal(x);
signal(rSem);
signal(rsSem);
READUNIT;
wait(x);
if (--readCount == 0) { signal(wSem); }
signal(x);
}
Classical Problems of Synchronization:
Readers-Writers Problem: Writer First: D
Readers only in the system
 wSem set and no queues
Writers only in the system
 wSem and rSem set; Writers queue on wSem
Both readers and writers with read first
 wSem set by reader; rSem set by writer
 All writers queue on wSem; One reader queues on
rSem and other readers queue on rsSem.
Both readers and writers with write first
 wSem set by writer; rSem set by writer
 Writers queue on wSem; One reader queues on rSem
and other readers queue on rsSem.
Classical Problems of Synchronization:
Dining-Philosophers Problem
The scene: 1 round table, 1 bowl of rice, 5 plates, 5
chopsticks, 5 chairs.
The actors: 5 philosophers.
The plot
 A philosopher spends his life thinking, eating, and
starving.
 If a philosopher gets hungry, he tries to pick up the
two chopsticks that are closest to him. He may pick
up only one chopstick at a time.
 When he has both chopsticks at the same time, he eats
without releasing his chopsticks.
 When he is finished eating, he puts down both of his
chopsticks and thinks again.
Classical Problems of Synchronization:
Dining-Philosophers Problem
 The dining-philosophers problem is considered a classic
synchronization problem, neither because of its
practical importance nor because computer scientists
dislike philosophers, but because it is an example of a
large class of concurrency-control problems.

The dining-philosophers problem is a simple
representation of the need to allocate several limited
resources among many processes in a deadlock-free and
starvation-free manner.
Classical Problems of Synchronization:
Dining-Philosophers Problem: Solution 1: S

Shared data
semaphore chopstick[5]
= {1, 1, 1, 1, 1};
// One chopstick can be picked up by
// one philosopher at a time
Classical Problems of Synchronization:
Dining-Philosophers Problem: Solution 1: Pi
do {
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
…
eat
…
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
…
think
…
} while (1);
Classical Problems of Synchronization:
Dining-Philosophers Problem: Solution 2: S

Shared data
semaphore chopstick[5] =
{1, 1, 1, 1, 1};
// One chopstick can only be picked up
// by one philosopher
semaphore coord = 4;
// Only four philosophers can try to eat
// simultaneously
Classical Problems of Synchronization:
Dining-Philosophers Problem: Solution 2: Pi

Philosopher i:
do {
wait(coord);
wait(chopstick[i]);
wait(chopstick[(i+1) % 5]);
…eat …
signal(chopstick[i]);
signal(chopstick[(i+1) % 5]);
signal(coord);
…think…
} while (1);
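A runnable sketch of Solution 2 with POSIX semaphores (not from the original
slides): coord admits at most four philosophers to the table at once, which
breaks the circular wait that makes Solution 1 deadlock-prone. The meal
count and output are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NPHIL 5

static sem_t chopstick[NPHIL], coord;

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    for (int meal = 0; meal < 3; meal++) {
        sem_wait(&coord);                      /* wait(coord): at most 4 at the table */
        sem_wait(&chopstick[i]);
        sem_wait(&chopstick[(i + 1) % NPHIL]);
        printf("philosopher %d eats\n", i);    /* eat */
        sem_post(&chopstick[i]);
        sem_post(&chopstick[(i + 1) % NPHIL]);
        sem_post(&coord);                      /* signal(coord) */
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[NPHIL];
    int id[NPHIL];
    for (int i = 0; i < NPHIL; i++) sem_init(&chopstick[i], 0, 1);
    sem_init(&coord, 0, NPHIL - 1);
    for (int i = 0; i < NPHIL; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < NPHIL; i++) pthread_join(t[i], NULL);
    return 0;
}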
Classical Problems of Synchronization:
The Barbershop Problem
 The scene
  1 barbershop, 1 standing-room area, 1 sofa for 4
customers, 3 barber chairs.
 The actors
  3 barbers who can cut hair and receive payment.
  50 customers. The fire code limits the total number of
customers in the shop to 20.
 The plot
  A customer will try to enter the shop.
  Once inside, he will stand or take a seat on the sofa.
  When a barber is free, the customer can sit in a barber
chair.
Classical Problems of Synchronization:
The Barbershop Problem
 Semaphores for resources:
  fire code,
  sofa seats,
  barber chairs, …
  cashier
 Semaphores for events:
  The customer is ready, so that a possibly sleeping
barber can be woken up.
  The haircut is finished.
  The customer leaves the barber chair.
  The customer gets the receipt after payment.
Classical Problems of Synchronization:
The Barbershop Problem: Solution 1: S
// shared variables
semaphore max_capacity  = 20;
semaphore sofa          = 4;
semaphore barber_chair  = 3,
          coord         = 3;
semaphore cust_ready    = 0,
          finished      = 0,
          leave_b_chair = 0,
          payment       = 0,
          receipt       = 0;
Classical Problems of Synchronization:
The Barbershop Problem: Solution 1: C
// one customer
wait(max_capacity);
enter shop;
wait(sofa);
sit on sofa;
wait(barber_chair);
get up from sofa;
signal(sofa);
sit in barber chair;
signal(cust_ready);
wait(finished);
leave barber chair;
signal(leave_b_chair);
pay;
signal(payment);
wait(receipt);
exit shop;
signal(max_capacity);
Classical Problems of Synchronization:
The Barbershop Problem: Solution 1: B
//barber
while (1)
{
wait(cust_ready);
wait(coord);
cut hair;
signal(coord);
signal(finished);
wait(leave_b_chair);
signal(barber_chair);
}
//cashier
while (1)
{
wait(payment);
wait(coord);
accept pay;
signal(coord);
signal(receipt);
}
Classical Problems of Synchronization:
The Barbershop Problem: Solution 1: Main
customer;
.....
customer; // 50 customers
barber;
barber;
barber;
cashier; // 3 barbers and 1 cashier
Classical Problems of Synchronization:
The Barbershop Problem: Solution 2: S
semaphore max_capacity  = 20;
semaphore sofa          = 4;
semaphore barber_chair  = 3,
          coord         = 3;
semaphore cust_ready    = 0,
          finished[50]  = {0, …, 0},
          leave_b_chair = 0,
          payment       = 0,
          receipt       = 0;
semaphore mutex1 = 1, mutex2 = 1;
int count = 0;
Classical Problems of Synchronization:
The Barbershop Problem: Solution 2: C
// one customer
int custnr;
wait(max_capacity);
enter shop;
wait(mutex1);
custnr = ++count;
signal(mutex1);
wait(sofa);
sit on sofa;
wait(barber_chair);
get up from sofa;
signal(sofa);
sit in barber chair;
wait(mutex2);
enqueue1(custnr);
signal(cust_ready);
signal(mutex2);
wait(finished[custnr]);
leave barber chair;
signal(leave_b_chair);
pay;
signal(payment);
wait(receipt);
exit shop;
signal(max_capacity);
Classical Problems of Synchronization:
The Barbershop Problem: Solution 2: B
//barber
while (1)
{ int b_cust;
wait(cust_ready);
wait(mutex2);
dequeue1(b_cust);
signal(mutex2);
wait(coord);
cut hair
signal(coord);
signal(finished[b_cust]);
wait(leave_b_chair);
signal(barber_chair);
}
//cashier
while (1)
{
wait(payment);
wait(coord);
accept pay;
signal(coord);
signal(receipt);
}
Classical Problems of Synchronization:
The Barbershop Problem: Solution 2: Main
customer;
.....
customer; // 50 customers
barber;
barber;
barber;
cashier; // 3 barbers and 1 cashier
CRITICAL REGIONS (临界区域)


Semaphore Solution for n processes mutual exclusion
Shared variables:
semaphore mutex=1;
Solution
while (1)
{ wait(mutex); critical section; signal(mutex);
remainder section;
}
 Possible problems
  signal … wait;     no mutual exclusion
  wait … wait;       deadlock
  nothing … signal;  no mutual exclusion
  wait … nothing;    deadlock
Critical Regions: Definition
High-level synchronization construct
 Critical regions
 Monitors
 Message-passing primitives
A shared variable v of type T is declared as:
shared T v;
Variable v can be accessed only inside statement
region v when B do S;
where B is a Boolean expression to control the access
and S is the compound statement.
While statement S is being executed, no other process
can access variable v.
Critical Regions: Definition
Regions referring to the same shared variable
exclude each other in time.
region v when B do S;
 When a process tries to execute the region
statement, the Boolean expression B is evaluated.
 If B is true, statement S is executed.
 If B is false, the process is delayed until B
becomes true and no other process is in the
region associated with v.

Critical Regions: Example – Bounded Buffer
Shared data:
struct { int pool[n], in, out, count;} buffer;
Producer
region buffer when( count < n)
{
pool[in] = nextp;
in= (in+1) % n; count++;
}
Consumer
region buffer when (count > 0)
{
nextc = pool[out];
out = (out+1) % n; count--;
}
Critical Regions: Implementation
 Associate with the shared variable x the following variables:
    semaphore mutex = 1, first-delay = 0, second-delay = 0;
    int first-count = 0, second-count = 0;
 Mutually exclusive access to the critical section is provided
by mutex.
 If a process cannot enter the critical section because the
Boolean expression B is false, it initially waits on the
first-delay semaphore; it is moved to the second-delay
semaphore before it is allowed to reevaluate B.
 Keep track of the number of processes waiting on
first-delay and second-delay, with first-count and
second-count respectively.
Critical Regions: Implementation
wait(mutex);
while (!B)
{   first_count++;
    if (second_count > 0) signal(second_delay);
    else                  signal(mutex);
    wait(first_delay);
    first_count--;
    second_count++;
    if (first_count > 0) signal(first_delay);
    else                 signal(second_delay);
    wait(second_delay);
    second_count--;
}
S;
if      (first_count > 0)  signal(first_delay);
else if (second_count > 0) signal(second_delay);
else                       signal(mutex);
MONITORS
 High-level synchronization construct that allows the safe
sharing of an abstract data type among concurrent processes.
 The monitor construct ensures that only one process at a
time can be active within the monitor. (mutual exclusion)
 To synchronize processes, condition variables are required.
monitor monitor-name
{
    shared variable declarations
    procedure body P1 (…) { . . . }
    procedure body P2 (…) { . . . }
    procedure body Pn (…) { . . . }
    {
        initialization code
    }
}
Monitor: Schematic View
Monitors


To allow a process to wait within the monitor, a
condition variable must be declared, as
condition x, y;
Condition variables can only be used with the
operations wait and signal.
 The operation
x.wait();
means that the process invoking this operation is
suspended until another process invokes
x.signal();
 The x.signal operation resumes exactly one
suspended process. If no process is suspended,
then the signal operation has no effect.
Monitor: With Condition Variables
Monitor: Dining Philosophers
monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
void pickup(int i)
// following slides
void putdown(int i)
// following slides
void test(int i)
// following slides
void init() {
for (int i = 0; i < 5; i++)
state[i] = thinking;
}
}
Monitor: Dining Philosophers
void test(int i) {
if ( (state[(i + 4) % 5] != eating) &&
(state[i] == hungry) &&
(state[(i + 1) % 5] != eating)) {
state[i] = eating;
self[i].signal();
}
}
Monitor: Dining Philosophers
void pickup(int i) {
state[i] = hungry;
test(i);
if (state[i] != eating)
self[i].wait();
}
void putdown(int i) {
state[i] = thinking;
// test left and right neighbors
test((i+4) % 5);
test((i+1) % 5);
}
Monitor: Dining Philosophers
Philosopher(i)
{
dp.pickup(i);
…
eat, eat, eat
…
dp.putdown(i);
}
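A sketch of the dp monitor emulated with a pthread mutex (the monitor lock)
and one condition variable per philosopher standing in for self[i] (not
from the original slides). Pthreads give signal-and-continue rather than
the slides' signal-and-wait semantics, so pickup re-checks its state in a
loop.

#include <pthread.h>

#define NPHIL 5
enum pstate { THINKING, HUNGRY, EATING };

static enum pstate state[NPHIL];
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  self[NPHIL];

void dp_init(void) {
    for (int i = 0; i < NPHIL; i++) {
        state[i] = THINKING;
        pthread_cond_init(&self[i], NULL);
    }
}

static void test(int i) {            /* caller holds monitor_lock */
    if (state[(i + 4) % NPHIL] != EATING && state[i] == HUNGRY &&
        state[(i + 1) % NPHIL] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)       /* re-check: signal-and-continue semantics */
        pthread_cond_wait(&self[i], &monitor_lock);
    pthread_mutex_unlock(&monitor_lock);
}

void putdown(int i) {
    pthread_mutex_lock(&monitor_lock);
    state[i] = THINKING;
    test((i + 4) % NPHIL);           /* left neighbour */
    test((i + 1) % NPHIL);           /* right neighbour */
    pthread_mutex_unlock(&monitor_lock);
}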
Monitor: Implementation Using Semaphores
 Variables
    semaphore mutex;   // (initially = 1)
    semaphore next;    // (initially = 0)
    int next-count = 0;
 Each external procedure F will be replaced by
    wait(mutex);
    …
    body of F;
    …
    if (next-count > 0) signal(next);
    else                signal(mutex);
 Mutual exclusion within a monitor is ensured.
Monitor: Implementation
 For each condition variable x, we have:
    semaphore x-sem;   // (initially = 0)
    int x-count = 0;
 The operation x.wait can be implemented as:
    x-count++;
    if (next-count > 0) signal(next);
    else                signal(mutex);
    wait(x-sem);
    x-count--;
Monitor: Implementation

The operation x.signal can be implemented as:
if (x-count > 0) {
next-count++;
signal(x-sem);
wait(next);
next-count--;
}
Monitor: Resource Allocation

Conditional-wait construct: x.wait(c);
 c – integer expression evaluated when the wait
operation is executed.
 value of c (a priority number) stored with the name
of the process that is suspended.
 when x.signal is executed, process with smallest
associated priority number is resumed next.
Monitor: Resource Allocation
monitor ResourceAllocation
{ boolean busy;
condition x;
void acquire(int time)
{ if (busy) x.wait(time); busy = true; }
void release()
{ busy = false; x.signal(); }
void init()
{ busy = false; }
}
Monitor: Resource Allocation

Check two conditions to establish correctness of
system:
 User processes must always make their calls on the
monitor in a correct sequence.
 Must ensure that an uncooperative process does
not ignore the mutual-exclusion gateway provided
by the monitor, and try to access the shared
resource directly, without using the access
protocols.
Message Passing
Two primitives:
    send(mailbox, msg);      // non-blocking
    receive(mailbox, msg);   // blocking
Message Passing for Mutual Exclusion
program mutualexclusion
const n=...;
procedure P(i: integer);
var msg: message
begin
repeat
receive(mutex, msg);
<critical section>;
send(mutex, msg);
<remainder>
forever
end;
Message Passing for Mutual Exclusion
begin(* main program *)
create_mailbox(mutex);
send(mutex, null);
parbegin
P(1);
P(2);
...
P(n)
parend
end.
A Solution to the Bounded-Buffer
Producer/Consumer Using Messages
const
    capacity = ...;   {buffering capacity}
    null     = ...;   {empty message}
procedure producer;
var pmsg: message;
begin
while true do
begin
receive(mayproduce, pmsg);
pmsg := produce;
…
send(mayconsume, pmsg)
end
end
A Solution to the Bounded-Buffer
Producer/Consumer Using Messages
procedure consumer;
var cmsg: message;
begin
while true do
begin
receive(mayconsume, cmsg);
consume(cmsg);
send(mayproduce, null)
end
end;
A Solution to the Bounded-Buffer
Producer/Consumer Using Messages
{ parent process}
var i: integer;
begin
create_mailbox(mayproduce);
create_mailbox(mayconsume);
for i:=1 to capacity do send(mayproduce, null);
parbegin
producer;
consumer
parend
end
Dekker’s Algorithm for 2-Processes
/* The first known correct software solution to the
critical-section problem for two-processes was
developed by Dekker.
*/
boolean flag[2] = {false, false};
int turn = 0;
Dekker’s Algorithm for 2-Processes
while (1)
// The structure of process P_i
{ flag[i] = true;
while ( flag[j] )
{ if (turn == j )
{ flag[i] =false;
while (turn == j) ;
flag[i] = true;
}
}
critical section
turn = j; flag[i] = false;
remainder section
}
Eisenberg and McGuire’s Algorithm for n Processes
/* The first known correct software solution to the
critical-section problem for n processes with a lower
bound on waiting of n-1 turns was presented by
Eisenberg and McGuire.
*/
enum pstate {idle, want_in, in_cs};
pstate flag[n];
// initially idle
int turn;
Eisenberg and McGuire’s Algorithm for n Processes
do {
    while (1)
    {   flag[i] = want_in;
        j = turn;
        while (j != i)
        {   if (flag[j] != idle) j = turn; else j = (j+1) % n; }
        flag[i] = in_cs;
        j = 0;
        while ( (j < n) && (j == i || flag[j] != in_cs) ) j++;
        if ((j >= n) && (turn == i || flag[turn] == idle)) break;
    }
    turn = i;
        CRITICAL SECTION
    j = (turn+1) % n;
    while (flag[j] == idle) j = (j+1) % n;
    turn = j;
    flag[i] = idle;
        REMAINDER SECTION
} while (1);
SOLARIS 2 SYNCHRONIZATION

Implements a variety of locks to support multitasking,
multithreading (including real-time threads), and
multiprocessing.

Uses adaptive mutexes for efficiency when protecting
data from short code segments.

Uses condition variables and readers-writers locks
when longer sections of code need access to data.

Uses turnstiles to order the list of threads waiting to
acquire either an adaptive mutex or reader-writer lock.
WINDOWS 2000 SYNCHRONIZATION

Uses interrupt masks to protect access to
global resources on uniprocessor systems.

Uses spinlocks on multiprocessor systems.

Also provides dispatcher objects, which may
act as either mutexes or semaphores.

Dispatcher objects may also provide events.
An event acts much like a condition variable.
Homework
 7.3
 7.4
 7.5
 7.8
 7.9
 7.10
 7.11