CS 471 - Lecture 3 Process Synchronization & Deadlock Ch. 6 (6.1-6.7), 7 Fall 2009


CS 471 - Lecture 3
Process Synchronization & Deadlock
Ch. 6 (6.1-6.7), 7
Fall 2009
Process Synchronization
• Race Conditions
• The Critical Section Problem
• Synchronization Hardware
• Classical Problems of Synchronization
• Semaphores
• Monitors
• Deadlock
GMU – CS 571
Concurrent Access to Shared Data
Suppose that two processes A and B have access to a shared variable "Balance".
PROCESS A:  Balance = Balance - 100
PROCESS B:  Balance = Balance + 200
Further, assume that Process A and Process B are executing concurrently in a time-shared, multiprogrammed system.
Concurrent Access to Shared Data
The statement "Balance = Balance - 100" is implemented by several machine-level instructions, such as:
    A1. lw  $t0, BALANCE
    A2. sub $t0, $t0, 100
    A3. sw  $t0, BALANCE
Similarly, "Balance = Balance + 200" can be implemented by the following:
    B1. lw  $t1, BALANCE
    B2. add $t1, $t1, 200
    B3. sw  $t1, BALANCE
Race Conditions
Observe: in a time-shared system, the exact instruction execution order cannot be predicted.

Scenario 1:
    A1. lw  $t0, BALANCE
    A2. sub $t0, $t0, 100
    A3. sw  $t0, BALANCE
    --- Context Switch ---
    B1. lw  $t1, BALANCE
    B2. add $t1, $t1, 200
    B3. sw  $t1, BALANCE
Balance is increased by 100.

Scenario 2:
    A1. lw  $t0, BALANCE
    A2. sub $t0, $t0, 100
    --- Context Switch ---
    B1. lw  $t1, BALANCE
    B2. add $t1, $t1, 200
    B3. sw  $t1, BALANCE
    --- Context Switch ---
    A3. sw  $t0, BALANCE
Balance is decreased by 100.
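The lost update in Scenario 2 can be reproduced deterministically by spelling out the load/modify/store steps by hand (a sketch; the starting balance and variable names are illustrative):

```python
# Simulate Scenario 2: each process loads Balance into a private
# register, updates it, and stores it back.
balance = 1000

t0 = balance        # A1. lw  $t0, BALANCE
t0 = t0 - 100       # A2. sub $t0, $t0, 100
# --- context switch to B before A's store ---
t1 = balance        # B1. lw  $t1, BALANCE (still sees 1000)
t1 = t1 + 200       # B2. add $t1, $t1, 200
balance = t1        # B3. sw  $t1, BALANCE -> 1200
# --- context switch back to A ---
balance = t0        # A3. sw  $t0, BALANCE -> 900: B's update is lost

print(balance)  # 900: decreased by 100, not increased by 100
```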
Race Conditions
• Situations where multiple processes are reading or writing some shared data, and the final result depends on who runs precisely when, are called race conditions.
• Race conditions are a serious problem for most concurrent systems using shared variables! Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
• We must make sure that some high-level code sections are executed atomically.
• An atomic operation completes in its entirety, without interruption by any other potentially conflicting process.
The Critical-Section Problem
• n processes all competing to use some shared data.
• Each process has a code segment, called the critical section (critical region), in which the shared data is accessed.
• Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
• The execution of the critical sections by the processes must be mutually exclusive in time.
Mutual Exclusion
Solving the Critical-Section Problem
Any solution to the problem must satisfy three conditions:
1. Mutual Exclusion: no two processes may be simultaneously inside the same critical section.
2. Bounded Waiting: no process should have to wait forever to enter a critical section.
3. Progress: no process executing a code segment unrelated to a given critical section can block another process trying to enter the same critical section.
In addition (Arbitrary Speed), no assumption can be made about the relative speed of different processes (though all processes have a non-zero speed).
Attempts to Solve Mutual Exclusion
do {
    entry section
        critical section
    exit section
        remainder section
} while (1);
• General structure as above
• Two processes, P1 and P2
• Processes may share common variables
• Assume each statement is executed atomically (is this realistic?)
Algorithm 1
Shared variables:
• int turn; initially turn = 0
• turn == i means Pi can enter its critical section
Process Pi:
do {
    while (turn != i) ;   // busy waiting -- possibly forever
        critical section
    turn = j;
        remainder section
} while (1);
Satisfies mutual exclusion, but not progress.
Algorithm 2
Shared variables:
• boolean flag[2]; initially flag[0] = flag[1] = false
• flag[i] == true means Pi wants to enter its critical region
Process Pi:
do {
    flag[i] = true;
    while (flag[j]) ;   // Pi and Pj may both end up waiting here
        critical section
    flag[i] = false;
        remainder section
} while (1);
Satisfies mutual exclusion, but not progress.
Algorithm 3: Peterson's Solution
Combines the shared variables of the previous attempts.
Process Pi:
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j) ;   // loop exits when flag[j] == false or turn == i
        critical section
    flag[i] = false;
        remainder section
} while (1);
Meets our 3 requirements (assuming each individual statement is executed atomically).
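Peterson's algorithm can be tried out with two Python threads (a sketch: the iteration count is illustrative, and CPython's serialized bytecode execution stands in for the atomicity assumed above; on real multicore hardware the algorithm additionally needs memory barriers):

```python
import threading

flag = [False, False]   # flag[i]: Pi wants to enter
turn = 0                # which process defers to the other
counter = 0             # shared data updated in the critical section
N = 5000

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True               # announce intent to enter
        turn = j                     # politely let the other go first
        while flag[j] and turn == j:
            pass                     # busy wait (entry section)
        counter += 1                 # critical section (not atomic by itself!)
        flag[i] = False              # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 10000: no updates are lost
```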
Synchronization Hardware
• Many machines provide special hardware instructions that help to achieve mutual exclusion.
• The TestAndSet (TAS) instruction tests and modifies the content of a memory word atomically:
      TAS R1, LOCK
  reads the contents of the memory word LOCK into register R1, and stores a nonzero value (e.g. 1) at the memory word LOCK (atomically).
• Assume LOCK = 0: calling TAS R1, LOCK will set R1 to 0, and set LOCK to 1.
• Assume LOCK = 1: calling TAS R1, LOCK will set R1 to 1, and leave LOCK at 1.
Mutual Exclusion with Test-and-Set
Initially, shared memory word LOCK = 0.
Process Pi:
do {
entry_section:
    TAS  R1, LOCK
    CMP  R1, #0        /* was LOCK == 0? */
    JNE  entry_section
        critical section
    MOVE LOCK, #0      /* exit section */
        remainder section
} while (1);
Classical Problems of Synchronization
• Producer-Consumer Problem
• Readers-Writers Problem
• Dining-Philosophers Problem
We will develop solutions using semaphores and monitors as synchronization tools (no busy waiting).
Producer/Consumer Problem
Producer -> bounded-size buffer (N) -> Consumer
• The Producer and the Consumer execute concurrently, but only one should be accessing the buffer at a time.
• The Producer puts items into the buffer area, but should not be allowed to put items into a full buffer.
• The Consumer consumes items from the buffer, but cannot remove information from an empty buffer.
Readers-Writers Problem
• A data object (e.g. a file) is to be shared among several concurrent processes.
• A writer process must have exclusive access to the data object.
• Multiple reader processes may access the shared data simultaneously without a problem.
• There are several variations on this general problem.
Dining-Philosophers Problem
Five philosophers share a common circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the two closest chopsticks.
A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.
Synchronization
Most synchronization can be regarded as either:
• Mutual exclusion (competition): making sure that only one process is executing a CRITICAL SECTION [touching a variable or data structure, for example] at a time; or
• Condition synchronization (cooperation): making sure that a given process does not proceed until some condition holds (e.g. that a variable contains a given value).
The sample problems will illustrate this.
Semaphores
• A language-level synchronization construct introduced by E.W. Dijkstra (1965).
• Motivation: avoid busy waiting by blocking a process's execution until some condition is satisfied.
• Each semaphore has an integer value and a queue. Two operations are defined on a semaphore variable s:
      wait(s)    (also called P(s) or down(s))
      signal(s)  (also called V(s) or up(s))
• We will assume that these are the only user-visible operations on a semaphore.
• Semaphores are typically available in thread implementations.
Semaphore Operations
• Conceptually, a semaphore has an integer value greater than or equal to 0.
• wait(s):  wait/block until s.value > 0, then s.value-- ;   /* executed atomically! */
• A process executing the wait operation on a semaphore with value 0 is blocked (put on a queue) until the semaphore's value becomes greater than 0; there is no busy waiting.
• signal(s):  s.value++ ;   /* executed atomically! */
Semaphore Operations (cont.)
• If multiple processes are blocked on the same semaphore s, only one of them will be awakened when another process performs the signal(s) operation. Who will have priority?
• Binary semaphores only have value 0 or 1.
• Counting semaphores can have any nonnegative value. [We will see some of these later.]
Critical Section Problem with Semaphores
Shared data:
    semaphore mutex;   /* initially mutex = 1 */
Process Pi:
do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);
Re-visiting the "Simultaneous Balance Update" Problem
Shared data:
    int Balance;
    semaphore mutex;   // initially mutex = 1
Process A:
    ......
    wait(mutex);
    Balance = Balance - 100;
    signal(mutex);
    ......
Process B:
    ......
    wait(mutex);
    Balance = Balance + 200;
    signal(mutex);
    ......
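The same discipline can be written with Python's threading module, whose Semaphore provides wait/signal as acquire/release (a sketch; the iteration counts and amounts are illustrative):

```python
import threading

balance = 1000
mutex = threading.Semaphore(1)   # binary semaphore, initially 1

def process_a():
    global balance
    for _ in range(1000):
        mutex.acquire()          # wait(mutex)
        balance -= 100           # critical section
        mutex.release()          # signal(mutex)

def process_b():
    global balance
    for _ in range(1000):
        mutex.acquire()
        balance += 200
        mutex.release()

a = threading.Thread(target=process_a)
b = threading.Thread(target=process_b)
a.start(); b.start(); a.join(); b.join()
print(balance)  # 1000 - 100*1000 + 200*1000 = 101000, no lost updates
```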
Semaphore as a General Synchronization Tool
• Suppose we need to execute B in Pj only after A is executed in Pi.
• Use a semaphore flag, initialized to 0.
Code:
    Pi:              Pj:
    A                wait(flag);
    signal(flag);    B
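This ordering pattern can be demonstrated with a Semaphore initialized to 0 (a sketch; the events list simply records the execution order):

```python
import threading

flag = threading.Semaphore(0)   # initially 0: wait() blocks until a signal
events = []

def pi():
    events.append("A")   # statement A in Pi
    flag.release()       # signal(flag)

def pj():
    flag.acquire()       # wait(flag): blocks until Pi signals
    events.append("B")   # statement B in Pj runs only after A

tj = threading.Thread(target=pj)
ti = threading.Thread(target=pi)
tj.start()               # start Pj first: it blocks on wait(flag)
ti.start()
ti.join(); tj.join()
print(events)  # ['A', 'B']
```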
Returning to Producer/Consumer
Producer -> bounded-size buffer (N) -> Consumer
• The Producer and the Consumer execute concurrently, but only one should be accessing the buffer at a time.
• The Producer puts items into the buffer area, but should not be allowed to put items into a full buffer.
• The Consumer consumes items from the buffer, but cannot remove information from an empty buffer.
Producer-Consumer Problem (Cont.)
Make sure that:
1. The producer and the consumer do not access the buffer area and related variables at the same time (competition).
2. No item is made available to the consumer if all the buffer slots are empty (cooperation).
3. No slot in the buffer is made available to the producer if all the buffer slots are full (cooperation).
Producer-Consumer Problem
Shared data:
    semaphore full, empty, mutex;
Initially:
    full = 0    /* the number of full buffers (counting semaphore) */
    empty = n   /* the number of empty buffers (counting semaphore) */
    mutex = 1   /* binary semaphore controlling access to the buffer pool */
Producer Process
do {
    ...
    produce an item in p
    ...
    wait(empty);
    wait(mutex);
    ...
    add p to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);
Consumer Process
do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to c
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in c
    ...
} while (1);
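The two processes above can be run directly with Python semaphores (a sketch; the buffer size and the list of produced items are illustrative):

```python
import threading
from collections import deque

N = 5
buffer = deque()
empty = threading.Semaphore(N)   # counts empty slots, initially n
full = threading.Semaphore(0)    # counts full slots, initially 0
mutex = threading.Semaphore(1)   # binary: protects the buffer pool

produced = list(range(20))
consumed = []

def producer():
    for item in produced:
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add item to buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(len(produced)):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())   # remove item from buffer
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == produced)  # True: every item arrives, in FIFO order
```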
Readers-Writers Problem
• A data object (e.g. a file) is to be shared among several concurrent processes.
• A writer process must have exclusive access to the data object.
• Multiple reader processes may access the shared data simultaneously without a problem.
Shared data:
    semaphore mutex, wrt;
    int readcount;
Initially:
    mutex = 1, readcount = 0, wrt = 1;
Readers-Writers Problem: Writer Process
wait(wrt);
...
writing is performed
...
signal(wrt);
Readers-Writers Problem: Reader Process
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Is this solution o.k.?
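A Python rendering of the scheme (a sketch; the number of readers and the log list are illustrative, and a separate lock keeps the demo's log consistent):

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # writers need exclusive access
readcount = 0
log = []
log_lock = threading.Lock()      # only guards the demo's log list

def reader(i):
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader in locks writers out
    mutex.release()
    with log_lock:
        log.append(f"read-{i}")  # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader out lets writers in
    mutex.release()

def writer():
    wrt.acquire()
    with log_lock:
        log.append("write")      # writing is performed
    wrt.release()

threads = [threading.Thread(target=reader, args=(i,)) for i in range(3)]
threads.append(threading.Thread(target=writer))
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))  # ['read-0', 'read-1', 'read-2', 'write']
```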
Dining-Philosophers Problem
Five philosophers share a common circular table. There are five chopsticks and a bowl of rice (in the middle). When a philosopher gets hungry, he tries to pick up the two closest chopsticks.
A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.
Shared data:
    semaphore chopstick[5];
Initially, all semaphore values are 1.
Dining-Philosophers Problem
Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

Is this solution o.k.?
Semaphores
• It is generally assumed that semaphores are fair, in the sense that processes complete semaphore operations in the same order they start them.
Problems with semaphores:
• They're pretty low-level. When using them for mutual exclusion, for example (the most common usage), it's easy to forget a wait or a release, especially when they don't occur in strictly matched pairs.
• Their use is scattered. If you want to change how processes synchronize access to a data structure, you have to find all the places in the code where they touch that structure, which is difficult and error-prone.
• Order matters. What could happen if we switch the two 'wait' instructions in the consumer (or producer)?
Deadlock and Starvation
• Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1:
      P0:             P1:
      wait(S);        wait(Q);
      wait(Q);        wait(S);
      ...             ...
      signal(S);      signal(Q);
      signal(Q);      signal(S);
• Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
High-Level Synchronization Mechanisms
• Several high-level mechanisms that are easier to use have been proposed:
    • Monitors
    • Critical Regions
    • Read/Write locks
• We will study monitors (Java and Pthreads provide synchronization mechanisms based on monitors).
• They were suggested by Dijkstra, developed more thoroughly by Brinch Hansen, and formalized nicely by Tony Hoare in the early 1970s.
• Several parallel programming languages have incorporated some version of monitors as their fundamental synchronization mechanism.
Monitors
• A monitor is a shared object with operations, internal state, and a number of condition queues. Only one operation of a given monitor may be active at a given point in time.
• A process that calls a busy monitor is delayed until the monitor is free.
• On behalf of its calling process, any operation may suspend itself by waiting on a condition.
• An operation may also signal a condition, in which case one of the waiting processes is resumed, usually the one that waited first.
Monitors
Mutual exclusion (competition) synchronization with monitors:
• Access to the shared data in the monitor is limited by the implementation to a single process at a time; therefore, mutually exclusive access is inherent in the semantic definition of the monitor.
• Multiple calls are queued.
Condition (cooperation) synchronization with monitors:
• delay takes a queue-type parameter; it puts the process that calls it in the specified queue and removes its exclusive access rights to the monitor's data structure.
• continue takes a queue-type parameter; it disconnects the caller from the monitor, thus freeing the monitor for use by another process. It also takes a process from the parameter queue (if the queue isn't empty) and starts it.
Shared Data with Monitors
Monitor SharedData
{
    int balance;
    void updateBalance(int amount);
    int getBalance();

    void init(int startValue) {
        balance = startValue;
    }
    void updateBalance(int amount) {
        balance += amount;
    }
    int getBalance() {
        return balance;
    }
}
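A monitor-style class can be sketched in Python by funneling every public method through one lock, which models the monitor's implicit mutual exclusion (the class and method names mirror the slide, but this is an illustration, not a language feature):

```python
import threading

class SharedData:
    """Monitor-style shared object: every operation runs under one lock,
    so at most one operation of the monitor is active at a time."""

    def __init__(self, start_value):
        self._lock = threading.Lock()   # the monitor's implicit mutex
        self._balance = start_value

    def update_balance(self, amount):
        with self._lock:                # enter the monitor
            self._balance += amount

    def get_balance(self):
        with self._lock:
            return self._balance

sd = SharedData(1000)
sd.update_balance(-100)   # Process A's update
sd.update_balance(200)    # Process B's update
print(sd.get_balance())   # 1100
```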
Monitors
• To allow a process to wait within the monitor, a condition variable must be declared, as:
      condition x, y;
• A condition variable can only be used with the operations wait and signal.
    • The operation x.wait(); means that the process invoking it is suspended until another process invokes x.signal();
    • The x.signal operation resumes exactly one suspended process on condition variable x. If no process is suspended on condition variable x, then the signal operation has no effect.
• The wait and signal operations of monitors are not the same as the semaphore wait and signal operations!
Monitor with Condition Variables
• When a process P "signals" to wake up a process Q that was waiting on a condition, potentially both of them can be active.
• However, monitor rules require that at most one process can be active within the monitor. Who will go first?
    • Signal-and-wait: P waits until Q leaves the monitor (or until Q waits for another condition).
    • Signal-and-continue: Q waits until P leaves the monitor (or until P waits for another condition).
    • Signal-and-leave: P has to leave the monitor after signaling (Concurrent Pascal).
• The design decision is different for different programming languages.
Monitor with Condition Variables
Producer-Consumer Problem with Monitors
Monitor Producer-consumer
{
    condition full, empty;
    int count;
    void insert(int item);   // the following slide
    int remove();            // the following slide
    void init() {
        count = 0;
    }
}
Producer-Consumer Problem with Monitors (Cont.)
void insert(int item)
{
    if (count == N) full.wait();
    insert_item(item);   // add the new item
    count++;
    if (count == 1) empty.signal();
}

int remove()
{
    int m;
    if (count == 0) empty.wait();
    m = remove_item();   // retrieve one item
    count--;
    if (count == N-1) full.signal();
    return m;
}
Producer-Consumer Problem with Monitors (Cont.)
void producer() {   // Producer process
    while (1) {
        item = Produce_Item();
        Producer-consumer.insert(item);
    }
}

void consumer() {   // Consumer process
    while (1) {
        item = Producer-consumer.remove();
        consume_item(item);
    }
}
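Python's threading.Condition provides monitor-style condition synchronization (a sketch; Python's condition variables use signal-and-continue semantics, which is why the waits below use while loops rather than the if tests on the slide):

```python
import threading
from collections import deque

N = 5
buf = deque()
cv = threading.Condition()       # one monitor lock plus a wait queue

def insert(item):
    with cv:                     # enter the monitor (exclusive access)
        while len(buf) == N:     # buffer full: like full.wait()
            cv.wait()            # recheck on wakeup (signal-and-continue)
        buf.append(item)
        cv.notify_all()          # like empty.signal()

def remove():
    with cv:
        while not buf:           # buffer empty: like empty.wait()
            cv.wait()
        m = buf.popleft()
        cv.notify_all()          # like full.signal()
        return m

items = list(range(20))
out = []

def producer():
    for x in items:
        insert(x)

def consumer():
    for _ in items:
        out.append(remove())

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(out == items)  # True: FIFO order is preserved
```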
Monitors
Evaluation of monitors:
• Strong support for mutual exclusion synchronization.
• Support for condition synchronization is very similar to that of semaphores, so it has the same problems.
• Building a correct monitor requires that one think about the "monitor invariant": a predicate that captures the notion "the state of the monitor is consistent."
    • It needs to be true initially, and at monitor exit.
    • It also needs to be true at every wait statement.
• Monitors can be implemented in terms of semaphores, so semaphores can do anything monitors can.
• The inverse is also true; it is trivial to build a semaphore from monitors.
Dining-Philosophers Problem with Monitors
monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];
    void pickup(int i)    // following slides
    void putdown(int i)   // following slides
    void test(int i)      // following slides
    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}

Each philosopher will perform:
    dp.pickup(i);
    ... eat ...
    dp.putdown(i);
Solving the Dining-Philosophers Problem with Monitors
void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}

void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors;
    // wake them up if possible
    test((i+4) % 5);
    test((i+1) % 5);
}
Solving the Dining-Philosophers Problem with Monitors
void test(int i) {
    if ( (state[(i+4) % 5] != eating) &&
         (state[i] == hungry) &&
         (state[(i+1) % 5] != eating) ) {
        state[i] = eating;
        self[i].signal();
    }
}

Is this solution o.k.?
Dining-Philosophers Problem (Cont.)
• Philosopher 1 arrives and starts eating.
• Philosopher 2 arrives; he is suspended.
• Philosopher 3 arrives and starts eating.
• Philosopher 1 puts down the chopsticks and wakes up Philosopher 2 (who is suspended once again).
• Philosopher 1 re-arrives, and starts eating.
• Philosopher 3 puts down the chopsticks and wakes up Philosopher 2 (who is suspended once again).
• Philosopher 3 re-arrives, and starts eating.
• ...... (Philosopher 2 may starve.)
Deadlock
In a multi-programmed environment, processes/threads compete for (exclusive) use of a finite set of resources.
Necessary Conditions for Deadlock
1. Mutual exclusion: at least one resource must be held in a non-sharable mode.
2. Hold and wait: a process holds at least one resource while waiting to acquire additional resources that are being held by other processes.
3. No preemption: a resource is only released voluntarily by a process.
4. Circular wait: there is a set of processes {P0, ..., Pn} such that Pi is waiting for a resource held by Pi+1, and Pn is waiting for a resource held by P0.
Resource-Allocation Graphs
Notation (shown graphically on the slide):
• Process: Pi
• Resource type with 4 instances: Rj
• Pi requests an instance of Rj: edge Pi -> Rj (request edge)
• Pi is holding an instance of Rj: edge Rj -> Pi (assignment edge)
Resource Allocation Graph
(Figure: a resource-allocation graph with request edges and assignment edges; the processes shown are deadlocked.)
Graph With A Cycle But No Deadlock
Handling Deadlock
• Deadlock prevention: ensure that at least one of the necessary conditions cannot occur.
• Deadlock avoidance: analysis determines whether a new request could lead toward a deadlock situation.
• Deadlock detection: detect and recover from any deadlocks that occur.
Deadlock Prevention
Prevent deadlocks by ensuring one of the required conditions cannot occur:
• Mutual exclusion: this condition typically cannot be removed for non-sharable resources.
• Hold and wait: elimination requires that processes request and acquire all resources in a single atomic action.
• No preemption: add preemption when a process is waiting for a resource and some other process needs a resource it currently holds.
• Circular wait: require processes to acquire resources in some ordered way.
Deadlock Avoidance
• When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
• The system is in a safe state if there exists a sequence <P1, P2, ..., Pn> of ALL the processes in the system such that, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj with j < i.
• If a system is in a safe state: no deadlocks.
• If a system is in an unsafe state: possibility of deadlock.
• Avoidance: ensure that the system will never enter an unsafe state.
Operating System Concepts - 7th Edition, Silberschatz, Galvin and Gagne ©2005
Example: Deadlock Avoidance
The overall system has 12 tape drives and 3 processes (3 drives are free):

         Max Needs   Currently Allocated
    P0   10          5
    P1   4           2
    P2   9           2

Safe state: <P1, P0, P2>
• P1 only needs 2 more drives, and 3 remain.
• P0 then needs 5 more (the 3 remaining + the 2 released by P1).
• P2 then needs 7 more, which are covered once P0 releases its drives.
Example (cont'd)
The overall system has 12 tape drives and 3 processes. Now P2 requests 1 more drive. Are we still in a safe state if we grant this request?

         Max Needs   Currently Allocated
    P0   10          5
    P1   4           2
    P2   9           3

Only 2 drives are now free. This is an unsafe state: only P1 can be granted its request, and once it terminates there are still not enough drives for P0 and P2, so there is a potential deadlock. The avoidance algorithm would make P2 wait.
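For a single resource type, the safety check behind this example is short enough to run directly (a sketch; is_safe is an illustrative helper, and it returns a safe completion order or None):

```python
def is_safe(available, max_needs, allocation):
    """Safety check for one resource type: return a safe order or None."""
    n = len(max_needs)
    need = [m - a for m, a in zip(max_needs, allocation)]
    work, finish, order = available, [False] * n, []
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if not finish[i] and need[i] <= work:
                work += allocation[i]   # Pi finishes and returns its drives
                finish[i] = True
                order.append(i)
                changed = True
    return order if all(finish) else None

# 12 tape drives: P0 holds 5 (max 10), P1 holds 2 (max 4), P2 holds 2 (max 9)
print(is_safe(3, [10, 4, 9], [5, 2, 2]))   # [1, 0, 2]: the safe sequence <P1, P0, P2>
# After granting P2 one more drive, only 2 remain free:
print(is_safe(2, [10, 4, 9], [5, 2, 3]))   # None: unsafe
```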
Avoidance Algorithms
• Single instance of each resource type: use a resource-allocation graph.
• Multiple instances of a resource type: use the banker's algorithm.
Resource-Allocation Graph Scheme
• A claim edge Pi -> Rj indicates that process Pi may request resource Rj; it is represented by a dashed line.
• A claim edge converts to a request edge when the process requests the resource.
• A request edge converts to an assignment edge when the resource is allocated to the process.
• When a resource is released by a process, the assignment edge reconverts to a claim edge.
• Resources must be claimed a priori in the system.
Resource-Allocation Graph
• P1 has been assigned R1.
• P2 has requested R1.
• Both P1 and P2 may request R2 in the future.
A request can be granted only if converting the request edge to an assignment edge does not result in the formation of a cycle in the resource-allocation graph.
Unsafe State In Resource-Allocation Graph
Banker's Algorithm
• Handles multiple instances of each resource type.
• Each process must a priori claim its maximum use.
• When a process requests a resource, it may have to wait.
• When a process gets all its resources, it must return them in a finite amount of time.
Banker's Algorithm
The overall system has m = 3 resource types (A = 10, B = 5, C = 7 instances) and n = 5 processes:

         Allocation   Max
         A B C        A B C
    P0   0 1 0        7 5 3
    P1   2 0 0        3 2 2
    P2   3 0 2        9 0 2
    P3   2 1 1        2 2 2
    P4   0 0 2        4 3 3

Is the system in a safe state?
Data Structures for the Banker's Algorithm
Let n = the number of processes, and m = the number of resource types.
• Available: vector of length m. If Available[j] = k, there are k instances of resource type Rj available.
• Max: n x m matrix. If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
• Allocation: n x m matrix. If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.
• Need: n x m matrix. If Need[i,j] = k, then Pi may need k more instances of Rj to complete its task.
      Need[i,j] = Max[i,j] - Allocation[i,j]
Banker's Algorithm (cont'd)
The overall system has 3 resource types (A = 10, B = 5, C = 7) and 5 processes:

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 3 3 2

Safe state: <P1, P3, P4, P2, P0>
Banker's Algorithm (cont'd)
<P1, P3, P4, P2, P0>: all of P1's requests can be granted from Available (Need1 = 1 2 2 <= 3 3 2).

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 3 3 2
Banker's Algorithm (cont'd)
<P1, P3, P4, P2, P0>: once P1 ends and returns its allocation (Available = 3 3 2 + 2 0 0 = 5 3 2), P3's requests can be granted.

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 5 3 2
Banker's Algorithm (cont'd)
<P1, P3, P4, P2, P0>: once P3 ends (Available = 5 3 2 + 2 1 1 = 7 4 3), P4's requests can be granted.

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 7 4 3
Banker's Algorithm (cont'd)
<P1, P3, P4, P2, P0>: once P4 ends (Available = 7 4 3 + 0 0 2 = 7 4 5), P2's requests can be granted.

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 7 4 5
Banker's Algorithm (cont'd)
<P1, P3, P4, P2, P0>: once P2 ends (Available = 7 4 5 + 3 0 2 = 10 4 7), P0's requests can be granted. All processes can finish, so the state is safe.

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   2 0 0        3 2 2    1 2 2
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 10 4 7
Safety Algorithm (Sec. 7.5.3.1)
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
       Work = Available
       Finish[i] = false for i = 0, 1, ..., n-1
2. Find an i such that both:
       (a) Finish[i] == false
       (b) Needi <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocationi
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.
(The algorithm may require m x n^2 operations.)
Operating System Concepts - 7th Edition, Silberschatz, Galvin and Gagne ©2005
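The safety algorithm translates almost line-for-line into code. The sketch below runs it on the 5-process example from the preceding slides; note that it happens to discover a different safe sequence than the one shown (<P1, P3, P4, P2, P0>), which is fine, since any safe sequence proves the state safe:

```python
def safety(available, need, allocation):
    """Safety algorithm (Sec. 7.5.3.1): return a safe sequence or None."""
    n, m = len(need), len(available)
    work = list(available)           # Work = Available
    finish = [False] * n             # Finish[i] = false
    seq = []
    while len(seq) < n:
        for i in range(n):           # step 2: find a runnable Pi
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):   # step 3: Pi finishes, returns resources
                    work[j] += allocation[i][j]
                finish[i] = True
                seq.append(i)
                break
        else:
            return None              # step 4 fails: no safe sequence
    return seq

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(safety([3, 3, 2], need, allocation))  # [1, 3, 0, 2, 4]: safe
```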
Banker's Algorithm (cont'd)
Starting from the previous state, suppose P1 requests (1,0,2). Should this be granted?
Pretend to allocate:
    Allocation1 = 2 0 0 + 1 0 2 = 3 0 2
    Need1       = 1 2 2 - 1 0 2 = 0 2 0
    Available   = 3 3 2 - 1 0 2 = 2 3 0
Then run the safety algorithm on the resulting state. P1 can finish first: <P1>
Banker's Algorithm (cont'd)
P1 requests (1,0,2). Should this be granted?

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 5 3 2   (P1 has finished: 2 3 0 + 3 0 2)

<P1, P3>
Banker's Algorithm (cont'd)
P1 requests (1,0,2). Should this be granted?

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 7 4 3   (P3 has finished: 5 3 2 + 2 1 1)

<P1, P3, P0>
Banker's Algorithm (cont'd)
P1 requests (1,0,2). Should this be granted?

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 7 5 3   (P0 has finished: 7 4 3 + 0 1 0)

<P1, P3, P0, P2>
Banker's Algorithm (cont'd)
P1 requests (1,0,2). Should this be granted?

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 10 5 5   (P2 has finished: 7 5 3 + 3 0 2)

<P1, P3, P0, P2, P4>: all processes can finish, so the resulting state is safe and the request is granted.
Banker's Algorithm (cont'd)
Now P4 requests (3,3,0). Should this be granted?

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 1 0        7 5 3    7 4 3
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 2 3 0

No: the request (3,3,0) exceeds the available resources (2,3,0), so P4 must wait.
Banker's Algorithm (cont'd)
Now P0 requests (0,2,0). Should this be granted? Pretend to allocate:

         Allocation   Max      Need
         A B C        A B C    A B C
    P0   0 3 0        7 5 3    7 2 3
    P1   3 0 2        3 2 2    0 2 0
    P2   3 0 2        9 0 2    6 0 0
    P3   2 1 1        2 2 2    0 1 1
    P4   0 0 2        4 3 3    4 3 1

    Available: 2 1 0

With Available = 2 1 0, no process's remaining Need can be satisfied, so the resulting state is unsafe: the request should not be granted, and the old state is restored.
Resource-Request Algorithm for Process Pi (Sec 7.5.3.2)
Requesti = the request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
       Available = Available - Requesti;
       Allocationi = Allocationi + Requesti;
       Needi = Needi - Requesti;
• If the resulting state is safe, the resources are allocated to Pi.
• If unsafe, Pi must wait, and the old resource-allocation state is restored.
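The request algorithm can be sketched end to end: pretend-allocate, run the safety check, and roll back if the result is unsafe (request_resources is an illustrative helper; the two calls replay the P1 and P4 requests from the earlier slides):

```python
def request_resources(i, request, available, max_, allocation):
    """Resource-request algorithm (Sec 7.5.3.2), a sketch.
    Mutates available/allocation and returns True only if granted."""
    n, m = len(max_), len(available)
    need = [[max_[p][j] - allocation[p][j] for j in range(m)] for p in range(n)]
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")   # step 1
    if any(request[j] > available[j] for j in range(m)):
        return False                       # step 2: Pi must wait
    for j in range(m):                     # step 3: pretend to allocate
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    # Safety check (Sec. 7.5.3.1)
    work, finish = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for p in range(n):
            if not finish[p] and all(need[p][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[p][j]
                finish[p] = True
                progress = True
    if all(finish):
        return True                        # safe: keep the allocation
    for j in range(m):                     # unsafe: roll back, Pi waits
        available[j] += request[j]
        allocation[i][j] -= request[j]
    return False

alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxm  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
avail = [3, 3, 2]
print(request_resources(1, [1, 0, 2], avail, maxm, alloc))  # True: state stays safe
print(request_resources(4, [3, 3, 0], avail, maxm, alloc))  # False: exceeds Available
```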
Deadlock Detection and Recovery
• Allow the system to enter a deadlock state.
• Detection algorithm:
    • Single instance per resource type: look for cycles in the wait-for graph.
    • Multiple instances: an algorithm related to the banker's algorithm. It requires on the order of O(m x n^2) operations to detect whether the system is in a deadlocked state.
• Recovery scheme.
Resource-Allocation Graph and Wait-for Graph
(Figure: a resource-allocation graph and the corresponding wait-for graph.)
Detection-Algorithm Usage
When, and how often, to invoke the detection algorithm depends on:
• How often a deadlock is likely to occur.
• How many processes will need to be rolled back (one for each disjoint cycle).
If the detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph, and we would not be able to tell which of the many deadlocked processes "caused" the deadlock.
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes; or
• Abort one process at a time until the deadlock cycle is eliminated.
In which order should we choose to abort?
• Priority of the process.
• How long the process has computed, and how much longer until completion.
• Resources the process has used.
• Resources the process needs to complete.
• How many processes will need to be terminated.
• Is the process interactive or batch?
Recovery from Deadlock: Resource Preemption
• Selecting a victim: minimize cost.
• Rollback: return to some safe state, and restart the process from that state.
• Starvation: the same process may always be picked as the victim; include the number of rollbacks in the cost factor.