Module 7: Process Synchronization


Silberschatz, Galvin and Gagne ©2006
Operating System Principles
Chapter 6: Process Synchronization
Mi-Jung Choi
[email protected]
Dept. of Computer Science
Ch06 - Process Synchronization
Chapter 6: Process Synchronization

- Background → atomic operation
- The Critical-Section Problem
- Peterson's Solution
- Synchronization Hardware
- Semaphores
- Classic Problems of Synchronization
- Monitors
Operating System
-2-
Fall 2009
Background

- Concurrent access to shared data may result in data inconsistency.
- Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
- Suppose that we want to provide a solution to the producer-consumer problem that fills all the buffers:
  - We can do so by having an integer counter that keeps track of the number of items in the buffer.
  - Initially, counter is set to 0.
  - It is incremented by the producer after it produces a new item.
  - It is decremented by the consumer after it consumes an item.
Shared data among producer and consumer

#define BUFFER_SIZE 10

typedef struct {
    int content;
} item;

item buffer[BUFFER_SIZE];
int in = 0;          // initial state:
int out = 0;         // buffer is empty
int counter = 0;
Producer

while (TRUE)
{
    // produce an item and put it in nextProduced
    while (counter == BUFFER_SIZE)
        ;            // do nothing -- is buffer full?
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
Consumer

while (TRUE)
{
    while (counter == 0)
        ;            // do nothing -- is buffer empty?
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    // consume the item in nextConsumed
}
Race Condition

- Although the producer and consumer routines are correct separately, they may not function correctly when executed concurrently.
- Suppose that the value of the variable counter is currently 5, and that the producer and consumer execute the statements "counter++" and "counter--" concurrently.
- Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6.
- Why?
Race Condition

counter++ could be implemented as

    register1 = counter
    register1 = register1 + 1
    counter = register1

counter-- could be implemented as

    register2 = counter
    register2 = register2 - 1
    counter = register2

Consider this execution interleaving, with counter = 5 initially:

    S0: producer executes register1 = counter        {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = counter        {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes counter = register1        {counter = 6}
    S5: consumer executes counter = register2        {counter = 4}

Final result: counter = 4.
"counter++" is not ATOMIC

[Figure: separation of the Execution Box (E-box) and the Storage Box (S-box). 1-2. the E-box fetches the operand data from the S-box, 3. executes, 4. writes the result back. Ex. of E-box / S-box pairs: CPU / memory, computer / disk.]
Race Condition

counter++ and counter-- are not ATOMIC.

[Figure: the producer's E-box executes counter++ and the consumer's E-box executes counter--, both against the shared S-box holding counter.]
Example of a Race Condition

[Figure: memory initially holds X == 2. P1 on CPU1 executes X = X + 1 as
    Load X, R1 / Inc R1 / Store X, R1
and P2 on CPU2 executes X = X - 1 as
    Load X, R2 / Dec R2 / Store X, R2.]

Interleaved execution?
Race Condition

- We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently.
- Race Condition
  - Several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
- To prevent the race condition
  - We need to ensure that only one process at a time can manipulate the variable counter.
  - We require that the processes be synchronized in some way.
Critical-Section (CS) Problem

- Consider a system of n processes {P0, P1, ..., Pn-1}.
  - Each process has a segment of code, called a critical section, in which the process may change common variables, update a table, write a file, and so on.
  - The system requires that no two processes execute in their critical sections at the same time.
- Solution to the critical-section problem
  - Design a protocol that the processes can use to cooperate.
  - Each process must request permission to enter its CS.
  - The section of code implementing this request is the entry section.
  - The CS may be followed by an exit section.
  - The remaining code is the remainder section.
Critical-Section Problem

The general structure of a typical process Pi:

do
{
    entry section

        critical section

    exit section

        remainder section

} while (TRUE);
Solution to Critical-Section Problem

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely (deadlock-free condition).
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted (starvation-free condition).

- Assume that each process executes at a nonzero speed.
- No assumption is made concerning the relative speed of the n processes.
Peterson's Solution for the critical-section problem

- Two-process {Pi, Pj} solution
  - Software-based solution
  - Assumes that the LOAD and STORE instructions are atomic; that is, they cannot be interrupted.
- The two processes share two variables:
  - int turn;
  - boolean flag[2];
  - The variable turn indicates whose turn it is to enter the critical section.
  - The flag array is used to indicate whether a process is ready to enter the critical section:
    flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi

do
{
    // entry section
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;

        CRITICAL SECTION

    // exit section
    flag[i] = FALSE;

        REMAINDER SECTION

} while (TRUE);
Peterson's Solution satisfies the 3 conditions

- Mutual Exclusion
  - Pi enters its critical section only if either flag[j] == false or turn == i.
  - Pi enters its critical section with flag[i] == true.
  - Hence Pi and Pj cannot both be in their critical sections at the same time.
- Progress
  - Pi can be stuck only if both flag[j] == true and turn == j.
- Bounded Waiting
  - Pi will enter the critical section after at most one entry by Pj.
What is the problem in Pi?

Attempt using turn only:

do {
    while (turn != i)
        ;
        CRITICAL SECTION
    turn = j;
        REMAINDER SECTION
} while (TRUE);

satisfies: Mutual Exclusion
does not satisfy: Progress, Bounded Waiting

Attempt using flag only:

do {
    flag[i] = true;
    while (flag[j])
        ;
        CRITICAL SECTION
    flag[i] = false;
        REMAINDER SECTION
} while (TRUE);

satisfies: Mutual Exclusion
does not satisfy: Progress, Bounded Waiting
Dekker's Algorithm - a two-process solution

do {
    flag[i] = TRUE;
    while (flag[j])
    {
        if (turn == j)
        {
            flag[i] = FALSE;
            while (turn == j)
                ;
            flag[i] = TRUE;
        }
    }
        CRITICAL SECTION
    turn = j;
    flag[i] = FALSE;
        REMAINDER SECTION
} while (TRUE);
Synchronization Hardware

- Many systems provide hardware support for critical-section code.
- Uniprocessors - could disable interrupts
  - Currently running code would execute without preemption.
  - Generally too inefficient on multiprocessor systems.
  - Operating systems using this approach are not broadly scalable.
- Modern machines provide special atomic hardware instructions
  - Atomic = non-interruptible
  - Test memory word and set value: TestAndSet()
  - Swap the contents of two memory words: Swap()
TestAndSet() Instruction

Definition:

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

- This instruction is atomic.
- This instruction is provided by hardware.
Solution using TestAndSet()

- Shared Boolean variable lock, initialized to FALSE.
- Solution for Mutual Exclusion:

do {
    while ( TestAndSet(&lock) )
        ;            // do nothing (busy waiting)

        CRITICAL SECTION

    lock = FALSE;

        REMAINDER SECTION

} while (TRUE);
Swap() Instruction

Definition:

void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

- This instruction is atomic.
- This instruction is provided by hardware.
Solution using Swap()

- Shared Boolean variable lock, initialized to FALSE.
- Each process has a local Boolean variable key.
- Solution for Mutual Exclusion:

do {
    key = TRUE;
    while ( key == TRUE )
        Swap(&lock, &key);   // busy waiting

        CRITICAL SECTION

    lock = FALSE;

        REMAINDER SECTION

} while (TRUE);
Solution using TestAndSet()

- Shared Boolean variables waiting[n] and lock, initialized to FALSE.
- Each process has a local Boolean variable key.
- Solution for Bounded Waiting:

do {
    waiting[i] = TRUE;
    key = TRUE;
    while ( waiting[i] && key )
        key = TestAndSet(&lock);   // busy waiting
    waiting[i] = FALSE;

        CRITICAL SECTION

    j = (i + 1) % n;
    while ( (j != i) && !waiting[j] )
        j = (j + 1) % n;
    if (j == i) lock = FALSE;      // no process is waiting
    else waiting[j] = FALSE;       // pass the lock directly to Pj

        REMAINDER SECTION

} while (TRUE);
Semaphore

[Figure: the flag-semaphore alphabet.]
Semaphore

- The various hardware-based solutions to the critical-section problem
  - TestAndSet(), Swap()
  - are complicated for application programmers to use.
- To overcome this difficulty, a semaphore may be used.
- A semaphore is ...
  - a synchronization tool that does not require busy waiting.
  - Semaphore S - an integer variable.
  - Two standard operations modify S: wait() and signal().
    - Originally called P() for wait() and V() for signal().
  - The modification of the semaphore is atomic.
  - Less complicated to use.
Semaphore

Definition of wait(S):

wait (S)
{
    while ( S <= 0 )
        ;        // no-op (busy waiting)
    S--;
}

Definition of signal(S):

signal (S)
{
    S++;
}

- All modifications to the semaphore are atomic.
- When one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value.
Usage of Semaphore

- Counting semaphore
  - Integer value can range over an unrestricted domain.
  - Ex. 0 .. 10
- Binary semaphore
  - Integer value can range only between 0 and 1.
  - Also referred to as a mutex lock.

1. Binary semaphore for solving the critical-section problem for multiple processes
   - n processes share a semaphore, mutex.
   - mutex is initialized to 1.
   - The structure of a process Pi:

do
{
    wait (mutex);

        CRITICAL SECTION

    signal (mutex);

        REMAINDER SECTION

} while (TRUE);
1. Binary semaphore (cont.)

- Does the previous code satisfy the three requirements of the CS problem?
  - Mutual exclusion
  - Progress
  - Bounded waiting
- The third requirement is not guaranteed by default.
  - It usually depends on the implementation of the wait() function.
  - Ex: Linux sem_wait() does not guarantee the bounded-waiting requirement.
Usage of Semaphore

2. Counting semaphore used to control access to a given resource consisting of a finite number of instances
   - The semaphore is initialized to the number of resources available.
   - To use a resource, a process performs wait().
   - To release a resource, a process performs signal().
   - When the semaphore is 0, all resources are being used.

3. Counting semaphore used to solve various synchronization problems
   - Ex. Two concurrently running processes P1 and P2;
     P1 with a statement S1, P2 with a statement S2;
     S2 must be executed only after S1 has completed.
   - How can this be solved using a semaphore?
Usage of Semaphore

Solution of #3 on the previous page:

- Initialization
      semaphore synch = 0;
- P1 structure
      S1;
      signal (synch);
- P2 structure
      wait (synch);
      S2;
Semaphore Implementation with Busy Waiting

- We must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
- Thus, the implementation becomes a critical-section problem in which the wait and signal code are placed in the critical section.
- The previous code uses busy waiting in its critical-section implementation:
  - While a process is in its critical section, the others must loop continuously in the wait code.
  - This is called a spinlock.
  - Disadvantage: wastes CPU cycles during wait().
  - Sometimes it is useful:
    - No context switch is involved in wait() and signal().
    - The implementation code is short.
    - There is little busy waiting if the critical section is rarely occupied.
- However, applications may spend lots of time in critical sections, and therefore this is not a good general solution.
Semaphore Implementation with No Busy Waiting

- To overcome the busy-waiting problem, two operations are used:
  - block() - place the process invoking the operation on the appropriate waiting queue.
  - wakeup() - remove one of the processes in the waiting queue and place it in the ready queue.
  - When the semaphore value is not positive on executing wait(), the process blocks itself instead of busy waiting.
  - The signal() operation wakes up a waiting process.
- To implement semaphores under this definition, we define a semaphore as a record:

typedef struct {
    int value;               // semaphore value
    struct process *list;    // pointer to the PCB list of waiting processes
} semaphore;

- With each semaphore there is an associated waiting queue.
Semaphore Implementation with No Busy Waiting

- How is the waiting queue of a semaphore implemented?
  - As a linked list in the semaphore containing the PCBs of the waiting processes.
  - A negative value means the number of waiting processes.
  - A positive value means the number of available resources.
  - The list can use any queuing strategy.

[Figure: a semaphore with value -3 whose list points to three waiting PCBs.]

- To satisfy the bounded-waiting requirement, the queue can be implemented as a FIFO queue:
  - two pointer variables indicate the head and tail of the PCB list.
Semaphore Implementation with No Busy Waiting

Implementation of wait():

wait ( semaphore *S ) {
    S->value--;
    if ( S->value < 0 ) {
        add this process to S->list;   // put the PCB into the waiting queue
        block();                       // go from running state to waiting state
    }
}

Implementation of signal():

signal ( semaphore *S ) {
    S->value++;
    if ( S->value <= 0 ) {
        remove a process P from S->list;   // select a process from the waiting queue
        wakeup(P);                         // put the process into the ready queue
    }
}
Deadlock and Starvation

- Deadlock - two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
- The use of a semaphore with a waiting queue may result in a deadlock.
- Let S and Q be two semaphores initialized to 1:

        P0                  P1
    wait (S);           wait (Q);
    wait (Q);           wait (S);
      ...                 ...
    signal (S);         signal (Q);
    signal (Q);         signal (S);

- Starvation - indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
- The implementation of a semaphore with a waiting queue may result in starvation:
  - when the queue is in LIFO (last-in, first-out) order.
Classical Problems of Synchronization

- Bounded-Buffer Problem
- Readers and Writers Problem
- Dining-Philosophers Problem
Bounded-Buffer Problem

- Multiple producers
  - each produces one item, stores it in the buffer, and continues.
- Multiple consumers
  - each consumes an item from the buffer and continues.
- The buffer contains at most N items.
- Solution with semaphores:
  - Semaphore mutex     initialized to the value 1
  - Semaphore full      initialized to the value 0
  - Semaphore empty     initialized to the value N
Bounded-Buffer Problem

The structure of the producer process:

do
{
    // produce an item

    wait (empty);        // check whether the buffer is full
    wait (mutex);        // enter critical section

    // add the item to the buffer (critical section)

    signal (mutex);      // exit critical section
    signal (full);       // one item has been produced

} while (TRUE);
Bounded-Buffer Problem

The structure of the consumer process:

do
{
    wait (full);         // check whether the buffer is empty
    wait (mutex);        // enter critical section

    // remove an item from the buffer (critical section)

    signal (mutex);      // exit critical section
    signal (empty);      // one slot has been freed

    // consume the item

} while (TRUE);
Readers-Writers Problem

- A data set is shared among a number of concurrent processes.
  - Readers - only read the data set; they do not perform any updates.
  - Writers - can both read and write.
- Problem: allow multiple readers to read at the same time, but only one single writer may access the shared data at a time.
- Shared data
  - Data set
  - Semaphore mutex       initialized to 1
  - Semaphore wrt         initialized to 1
  - Integer readcount     initialized to 0
Readers-Writers Problem

The structure of a writer process:

do
{
    wait (wrt);          // enter critical section

    // writing is performed (critical section)

    signal (wrt);        // exit critical section

} while (TRUE);
Readers-Writers Problem

The structure of a reader process:

do
{
    wait (mutex);
    readcount++;
    if ( readcount == 1 )
        wait (wrt);          // the first reader locks out writers
    signal (mutex);

    // reading is performed
    // (two or more readers can read concurrently)

    wait (mutex);
    readcount--;
    if ( readcount == 0 )
        signal (wrt);        // the last reader lets writers in
    signal (mutex);

} while (TRUE);
Readers-Writers Problem

- Starvation is the problem...
  - The writer may starve:
    no reader is kept waiting unless a writer has already obtained permission to use the shared object.
- Another variation of the Readers-Writers Problem:
  - Once a writer is ready, that writer performs its write as soon as possible.
  - If a writer is waiting to access the object, no new readers may start reading.
  - => In this case, the readers may starve.
- What is a starvation-free solution for the Readers-Writers Problem?
Dining-Philosophers Problem

- Five philosophers spend their lives thinking and eating.
- They share a circular table surrounded by five chairs.
- In the center of the table is a bowl of rice.
- Five single chopsticks lie on the table.

- When a philosopher thinks, she does not interact with her neighbors.
- From time to time, she gets hungry and tries to pick up the two chopsticks that are closest to her.
- She picks up only one chopstick at a time.
- She cannot pick up a chopstick that is already in the hand of a neighbor.
- When she has two chopsticks, she eats without releasing them.
- When she has finished eating, she puts down both chopsticks and thinks again.
- How can this be implemented with concurrent processes in a deadlock-free and starvation-free manner?
Dining-Philosophers Problem

- It represents a large class of concurrency-control problems.
- It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.
- Simple solution ...
- Shared data
  - Bowl of rice (data set)
  - Semaphore chopstick[5], each initialized to 1
Dining-Philosophers Problem (Cont.)

- This solution guarantees that no two neighbors are eating simultaneously.
- However, this solution can create deadlock:
  - Suppose all five philosophers become hungry simultaneously and each grabs her left chopstick.
  - All chopsticks are now taken.
  - When each philosopher tries to grab her right chopstick, she is delayed forever.
- Possible deadlock-free solutions:
  - Allow at most four philosophers to be sitting at the table together.
  - Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
  - Use an asymmetric solution: an odd-numbered philosopher picks up her left chopstick first, an even-numbered one picks up her right chopstick first.
Problems with Semaphores

- Be careful not to misuse semaphores.
- Examples of misuse (mutex is a semaphore initialized to 1):
  - signal (mutex) .... wait (mutex)
    - results in a violation of the mutual-exclusion requirement
  - wait (mutex) ... wait (mutex)
    - results in deadlock
  - omitting wait (mutex) or signal (mutex) (or both)
    - either mutual exclusion is violated or a deadlock will occur
- To deal with such errors, the monitor may be used.
Monitors

- A high-level abstraction that provides a convenient and effective mechanism for process synchronization.
- A monitor is defined as an abstract data type, like a class in an object-oriented language (C++, Java).
- A monitor instance is shared among multiple processes.
- Only one process may be active within the monitor at a time.

monitor monitor-name
{
    // private data: shared variable declarations

    // public methods
    procedure P1 (...) { .... }
    ...
    procedure Pn (...) { .... }

    // initialization method
    initialization code (...) { ... }
}
Schematic View of a Monitor

- Shared data
- Operations
- Initialization code
- The shared data can be accessed only through the operations.
- Only one process is active within a monitor at a time.
- The programmer does not need to code this synchronization.
Condition Variables

- A basic monitor is not sufficiently powerful for modeling some synchronization schemes.
- To solve this, condition variables are introduced:
  - condition x, y;
  - A programmer can define one or more variables of type condition.
- Two operations on a condition variable:
  - x.wait() - the process that invokes the operation is suspended.
  - x.signal() - resumes one of the processes (if any) that invoked x.wait().
    - x.signal() resumes exactly one suspended process.
    - If no process is suspended, then nothing happens.
Monitor with Condition Variables

[Figure: a monitor whose entry queue is joined by one waiting queue per condition variable.]
Solution to Dining Philosophers

monitor DP
{
    enum { THINKING, HUNGRY, EATING } state[5];  // three states of a philosopher
    condition self[5];   // Ph_i delays herself when she is hungry
                         // but unable to pick up both chopsticks

    // Ph_i invokes pickup(i) before she eats
    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    // Ph_i invokes putdown(i) after she eats
    void putdown (int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
Solution to Dining Philosophers (cont.)

    // Ph_i invokes test(i) to try to pick up her chopsticks;
    // putdown(i) invokes test((i+4)%5) and test((i+1)%5) so a
    // neighbor may pick up the released chopsticks.
    // When neither neighbor is eating and Ph_i is hungry,
    // her state changes to EATING and she is signaled.
    void test (int i)
    {
        if ( (state[(i + 4) % 5] != EATING) &&
             (state[i] == HUNGRY) &&
             (state[(i + 1) % 5] != EATING) )
        {
            state[i] = EATING;
            self[i].signal();
        }
    }

    // initialize all philosophers to the THINKING state
    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
Solution to Dining Philosophers (cont.)

A philosopher i must invoke the operations pickup() and putdown() in the following sequence:

do
{
    // think

    dp.pickup(i);

    // eat

    dp.putdown(i);

} while (TRUE);
Solution to Dining Philosophers (cont.)

[Figure: a monitor DP instance. An entry queue and the condition-variable queues feed the shared data (state[], self[]) and the operations pickup(), putdown(), test(), and initialization_code().]
Summary

- Cooperating processes that share data have critical sections of code, each of which is in use by only one process at a time.
- Solutions to the critical-section problem
  - Software-based solutions: Peterson's algorithm, Bakery algorithm
  - Hardware-based solutions: TestAndSet(), Swap()
- Three requirements for the critical-section problem
  - Mutual Exclusion, Progress, Bounded Waiting
- The main disadvantage of these solutions is that they all require busy waiting. Semaphores overcome this difficulty.
- Semaphores can be used to solve various synchronization problems:
  - the bounded-buffer, readers-writers, and dining-philosophers problems.
  - These are examples of a large class of concurrency-control problems.
  - The solutions should be deadlock-free and starvation-free.
- Monitors provide a synchronization mechanism for sharing abstract data types.
  - Condition variables provide a method by which a monitor procedure can block its execution until it is signaled to continue.
Silberschatz, Galvin and Gagne ©2006
Operating System Principles
End of Chapter 6
Mi-Jung Choi
[email protected]
Dept. of Computer Science