Processes and Threads


Chapter 2. Processes and Threads
2.1 Processes
2.2 Threads
2.4 Scheduling
2.3 Interprocess communication
2.5 Classical IPC problems
2.1 Processes
All modern computers can do several things at the same
time.
Clearly, some way is needed to model and control the concurrency.
Processes can help here.
Strictly speaking, at any instant of time, the CPU is running only one
process.
CPU switches from process to process quickly, running each for tens or
hundreds of milliseconds.
A program in execution is called a “process”.
Processes run independently.
2.1.1 The Process Model
Conceptually, each process has its own virtual CPU.
In reality, the real CPU switches back and forth from process to
process.
In (a), we see a computer multiprogramming four programs in memory.
In (b), we see four processes, each with its own flow of control, and
each one running independently of the other ones.
There is only one physical program counter, so when each process runs, its
logical program counter is loaded into the real program counter.
In (c), at any given time, only one process is actually running.
In this chapter, we assume there is only one CPU.
2.1.1 The Process Model
The difference between a process and a program is subtle,
but crucial.
A process is an instance of an executing program, including the current
values of the program counter, registers, and variables.
E.g., a culinary-minded computer scientist
who is baking a birthday cake for his daughter:
the birthday cake recipe with all its input (flour, eggs, sugar, …) is the program with its input.
the computer scientist’s son comes running in screaming his head off.
a first aid book
2.1.1 The Process Model
A process has a program, input, output, and a state.
A single processor may be shared among several processes, with
some scheduling algorithm being used to determine when to stop work
on one process and service a different one.
If a program is running twice, it counts as two processes.
They are distinct processes.
2.1.2 Process Creation
In general-purpose systems, some way is needed to create
and terminate processes as needed during operation.
System initialization
Execution of a process creation system call; fork()
User request to create a new process
Initiation of a batch job
Technically, in all these cases, a new process is created by
having an existing process execute a process creation
system call.
In UNIX, there is only one system call to create a new process: fork.
In Win32, CreateProcess
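A minimal sketch of UNIX process creation, in Python for brevity (os.fork wraps the POSIX fork() named above; the exit code 42 is an arbitrary choice):

```python
import os

# fork() clones the calling process. Both parent and child continue
# from this point, distinguished only by fork()'s return value.
pid = os.fork()
if pid == 0:
    # Child process: fork() returned 0. Do some work, then terminate
    # voluntarily (a "normal exit").
    os._exit(42)
else:
    # Parent process: fork() returned the child's PID. Wait for the
    # child to terminate and collect its exit status.
    _, status = os.waitpid(pid, 0)
    print("child exited with", os.WEXITSTATUS(status))
```
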
2.1.3 Process Termination
Conditions which terminate processes
1. Normal exit (voluntary)
2. Error exit (voluntary)
3. Fatal error (involuntary)
4. Killed by another process (involuntary): kill
exit in UNIX and ExitProcess in Windows.
kill in UNIX and TerminateProcess in Windows.
2.1.4 Process Hierarchies
A process and all of its children and further descendants together form
a process group.
2.1.5 Process States (1/2)
Between creation and termination, a process may be in one
of three states.
Possible process states
running (actually using CPU at that time)
ready (runnable; temporarily stopped to let another process run)
blocked (unable to run until some external event happens)
Transitions between states shown
2.1.5 Process States (2/2)
Transitions 2 and 3 are caused by the process scheduler,
which decides which process should run, when, and for how long.
Many algorithms have been devised to balance the competing demands
of efficiency for the system as a whole and fairness to individual
processes.
2.1.6 Implementation of Processes
To implement the process model, the OS maintains a process table
(of process control blocks) containing information about each process
that must be saved when the process is switched from the running state to
the ready or blocked state.
2.2 Threads
Each process has an address space and a single thread of
control.
There are frequently situations in which it is desirable to have multiple
threads of control in the same address space running in quasi-parallel.
They are almost separate processes except for the shared address
space.
2.2.1 Thread Usage
The main reason for having threads is that in many
applications, multiple activities are going on at once.
Some of these may block from time to time.
2.2.1 Thread Usage
A word processor with three threads
One thread interacts with the user.
The second handles reformatting in the background.
The third handles disk backups without interfering with the other two.
Why not by a single-threaded process?
Why not by three processes?
2.2.2 The Classical Thread Model
Process model is based on two independent concepts:
resource grouping & execution.
A process has
Address space: program text, data
Other resources: open files, child processes, and so on.
By putting them together in the form of a process, they can be
managed more easily.
The other concept is a thread of execution.
Thread has
A program counter: to keep track of which instruction to execute next
Registers: to hold its working variables.
Stack: to contain execution history.
Allow multiple executions in the same process environment
The threads share an address space and other resources
2.2.2 The Classical Thread Model
(a) Three processes each with one thread
Each process has its own address space and a single thread of control
(b) One process with three threads of control
All three of them share the same address space.
2.2.2 The Classical Thread Model
When a multithreaded process is run on a single-CPU system, the threads
take turns running.
CPU switches rapidly back and forth among the threads.
Different threads in a process are not as independent as different
processes.
All threads have exactly the same address space, which means that they also
share the same global variables.
All the threads can share the same set of open files, child processes, alarms and
signals.
A thread can be in any one of four states: running, blocked, ready, or terminated.
The transitions between thread states are the same as the transitions between
process states.
2.2.2 The Classical Thread Model
When multithreading is present,
processes normally start with a single thread.
This thread has the ability to create new threads by calling a library
procedure, for example, thread_create.
A parameter to thread_create typically specifies the name of a
procedure for the new thread to run.
When a thread has finished its work, it can exit by calling a library
procedure, say, thread_exit.
One thread can wait for a (specific) thread to exit by calling a
procedure, for example, thread_join.
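These library calls can be sketched with Python's threading module, whose Thread/start/join map onto thread_create, thread_exit (returning from the procedure), and thread_join (the worker name and values are illustrative):

```python
import threading

results = []

def worker(n):
    # The procedure the new thread runs; returning from it is the
    # analog of calling thread_exit.
    results.append(n * n)

# thread_create analog: specify the procedure for the new thread to run.
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
# thread_join analog: wait for a specific thread to exit.
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9]
```
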
2.2.3 POSIX Threads
To make it possible to write threaded programs,
IEEE has defined a standard for threads in IEEE standard 1003.1c.
The threads package it defines is called Pthreads.
Most UNIX systems support it.
The standard defines over 60 function calls.
2.2.4 Implementing Threads in User Space
Two ways to implement threads
In user space,
As far as the OS is concerned, it
manages ordinary, single-threaded
processes.
Can be implemented on any OS.
Run-time system
A collection of procedures to manage
threads.
Switch threads locally in the same
process
Thread table
Faster than trapping to kernel
Each process can have its own
customized scheduling algorithm.
A user-level threads package
2.2.4 Implementing Threads in User Space
Disadvantages
How to implement blocking system calls.
Suppose that a thread reads from keyboard before any keys have
been hit.
Letting the thread actually make the system call is unacceptable, since
this would stop all the threads in the same process.
System calls could all be changed to be nonblocking, but not a good
idea.
Page fault
If a thread causes a page fault, the kernel blocks the entire process
until the disk I/O is complete, even though other threads might be
runnable.
2.2.5 Implementing Threads in the Kernel
The OS knows about and manages threads.
No run-time system is needed.
No thread table in each process
Kernel has a thread table.
When a thread wants to create a new
thread or destroy an existing thread, it
makes a system call.
More expensive than user level
When a thread blocks, kernel can
run either another thread from the
same process or a thread from a
different process.
Kernel threads do not require new
non-blocking system calls.
Easy for page fault
A threads package managed by
the kernel
2.4 Scheduling
In a multiprogramming environment, we have to
choose which process will run next.
Scheduler
Scheduling algorithm
2.4.1 Introduction to Scheduling
Nearly all processes alternate bursts of computing with
(disk) I/O requests.
CPU-bound process: spends most of its time computing.
I/O-bound process: spends most of its time waiting for I/O.
Compute-bound processes typically have long CPU bursts and thus
infrequent I/O waits, whereas I/O-bound processes have short CPU
bursts and frequent I/O waits.
When to schedule
1. When a new child process is created, parent process or child process?
2. When a current process terminates, which one among ready processes?
3. When a process blocks on I/O, another process has to be selected.
4. When an I/O interrupt occurs (an I/O request completes), it is up to the scheduler
to decide
if the newly ready process should be run,
if the current process should continue running,
if some third process should run.
Preemptive vs. Non-preemptive
If a HW clock provides periodic interrupts at 50 or 60 Hz, a scheduling decision
can be made at each clock interrupt or at every k-th clock interrupt.
Non-preemptive: Scheduling algorithm picks a process to run & lets it run until it
blocks/terminates/releases CPU.
Preemptive: Scheduling algorithm picks a process & lets it run for some maximum
amount of time. If it is still running at the end of the time interval, it is suspended and the
scheduler picks another process to run. Also, when a new process arrives, the
scheduler may decide which process should run.
Scheduling Algorithm Goals
Batch system [Wikipedia]
• Batch processing is execution of a series of programs ("jobs") on a computer without
manual intervention.
• Batch jobs are set up so they can be run to completion without manual intervention, so all
input data is preselected through scripts or command-line parameters. This is in contrast
to "online" or interactive programs which prompt the user for such input.
2.4.2 Scheduling in Batch Systems
Non-preemptive First-Come, First-Served (a.k.a. FIFO)
Processes are assigned CPU in the order they request it.
There is a single queue of ready processes.
Easy to understand & easy to program.
Non-preemptive Shortest Job First
Average turnaround time
Turnaround time: time between submission and termination
(a) (8 + 12 + 16 + 20) / 4 = 14
(b) (4 + 8 + 12 + 20) / 4 = 11
Shortest job first is optimal only when all the jobs are available simultaneously.
2.4.2 Scheduling in Batch Systems
Counterexample
Process:       A  B  C  D  E
Run time:      2  4  1  1  1
Arrival time:  0  0  3  3  3
SJF: A-B-C-D-E (Average wait: 4.6)
B-C-D-E-A (Average wait: 4.4)
Shortest Remaining Time Next (Preemptive version of
shortest job first)
If the new job needs less time to finish than the current process, the
current process is suspended and the new job started.
A-B-C-D-E-B (Average wait: 3.4)
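The figures above can be checked with a short calculation (a sketch; note that "wait" here is counted from a job's arrival to its completion, i.e., turnaround time):

```python
run    = {'A': 2, 'B': 4, 'C': 1, 'D': 1, 'E': 1}
arrive = {'A': 0, 'B': 0, 'C': 3, 'D': 3, 'E': 3}

def avg_turnaround(order):
    # Non-preemptive: run jobs back to back in the given order,
    # idling until a job has arrived if necessary.
    t, total = 0, 0
    for j in order:
        t = max(t, arrive[j]) + run[j]   # completion time of j
        total += t - arrive[j]           # turnaround of j
    return total / len(order)

print(avg_turnaround("ABCDE"))  # 4.6 (SJF's choice at time 0)
print(avg_turnaround("BCDEA"))  # 4.4 (better, with hindsight)

# Preemptive SRTN: A runs 0-2, B 2-3 (preempted at t=3), C 3-4,
# D 4-5, E 5-6, B resumes 6-9 -> completion times below.
finish = {'A': 2, 'B': 9, 'C': 4, 'D': 5, 'E': 6}
print(sum(finish[j] - arrive[j] for j in finish) / 5)  # 3.4
```
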
2.4.3 Scheduling in Interactive Systems
Round-Robin Scheduling
Each process is assigned a time interval, called its quantum.
If the process is still running at the end of quantum, CPU is preempted
and given to another process.
If the process blocks or finishes before its quantum has elapsed, the CPU
is switched at that point.
Easy to implement with a list of runnable processes.
2.4.3 Scheduling in Interactive Systems
Interesting issue in round-robin
Switching between processes requires a certain amount of time for
doing administration.
Saving and loading registers and memory maps, updating various tables,
flushing and reloading the cache, etc.
Suppose that process switch (context switch) takes 1 msec and time
quantum is 4 msec.
Approx. 20% of CPU time is wasted.
Suppose that process switch (context switch) takes 1 msec and time
quantum is 100 msec.
Only 1% of CPU time is wasted.
However, if there are 50 processes, the last one may have to wait 5 sec.
If the quantum is set longer than the mean CPU burst, preemption will not
happen very often.
Setting quantum too short causes too many process switches and lower
CPU efficiency.
Setting it too long may cause poor response time.
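The overhead figures above follow directly from switch time / (switch time + quantum); a quick sketch:

```python
def wasted_fraction(switch_ms, quantum_ms):
    # Each quantum of useful work costs one process switch, so the
    # wasted fraction is switch / (switch + quantum).
    return switch_ms / (switch_ms + quantum_ms)

print(wasted_fraction(1, 4))    # 0.2: about 20% of CPU time wasted
print(wasted_fraction(1, 100))  # ~0.0099: only about 1% wasted
```
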
2.4.3 Scheduling in Interactive Systems
Priority Scheduling
Round robin scheduling makes the implicit assumption that all processes are
equally important.
Each process is assigned a priority, and the runnable process with the highest
priority is allowed to run.
E.g., email daemon process vs. video player process
To prevent high-priority processes from running forever, the scheduler may decrease
the priority of the currently running process at each clock tick.
The UNIX system has a command, nice, which allows a user to voluntarily reduce
the priority of his process, in order to be nice to the other users.
Some processes are highly I/O bound and spend most of their time waiting for I/O
to complete. Whenever such a process wants the CPU, it should be given the
CPU immediately.
Priority + Round robin scheduling
2.4.3 Scheduling in Interactive Systems
Lottery Scheduling
In a priority scheduler, it is very hard to state what having a priority of
40 actually means.
Make real promises to users about performance and then live up to the
promises.
Give processes lottery tickets for CPU.
Whenever a scheduling decision has to be made, a lottery ticket is
chosen at random.
The process holding that ticket gets CPU.
If the same number of tickets is distributed to each process, then
every process is equal.
To paraphrase George Orwell: “All processes are equal, but some
processes are more equal than the others”.
If process 1 has 40 tickets and process 2 has 20, then process 1 can use
CPU twice as often as process 2.
Cooperating processes may exchange tickets if they wish.
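A minimal sketch of the ticket draw described above (the ticket counts, seed, and number of draws are arbitrary):

```python
import random

def lottery_pick(tickets, rng):
    # tickets: {process: ticket_count}. Draw one ticket uniformly at
    # random; the process holding it gets the CPU.
    n = rng.randrange(sum(tickets.values()))
    for proc, count in tickets.items():
        if n < count:
            return proc
        n -= count

rng = random.Random(1)
tickets = {'p1': 40, 'p2': 20}
wins = {'p1': 0, 'p2': 0}
for _ in range(60_000):
    wins[lottery_pick(tickets, rng)] += 1
# p1 should win roughly twice as often as p2 over many draws.
print(wins['p1'] / wins['p2'])
```
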
Shortest Process Next
Shortest job first produces the minimum average response
time for batch systems.
The only problem is figuring out which of the current runnable
processes is the shortest one.
Aging
One approach is to make estimates based on past behavior and run
the process with the shortest estimated running time.
Suppose that the current estimated time is T0.
Now suppose its next run is measured to be T1.
We could update our estimate by taking a weighted sum of these two
numbers, that is, a∙T0 + (1 – a)∙T1.
Through the choice of a, we can decide to have the estimation process
forget old runs quickly, or remember them for a long time.
With a = 1/2, we get successive estimates of
T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, T0/8 + T1/8 + T2/4 + T3/2, …
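A sketch of the update rule (the burst values below are made up):

```python
def update_estimate(estimate, measured, a=0.5):
    # Weighted sum of the old estimate and the latest measured burst.
    return a * estimate + (1 - a) * measured

T0, T1, T2 = 8.0, 4.0, 2.0
e1 = update_estimate(T0, T1)   # T0/2 + T1/2          = 6.0
e2 = update_estimate(e1, T2)   # T0/4 + T1/4 + T2/2   = 4.0
print(e1, e2)
```
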
2.4.4 Scheduling in Real-Time Systems
A real-time system is one in which time plays an essential role.
Having the right answer but getting it too late is often just as bad as not having
it at all.
E.g., Autopilot in airplanes, robot control in automated factories, patient
monitoring in hospital intensive-care units.
The OS may have to respond to multiple periodic processes.
Depending on how much time each process requires for processing, it may not
be possible to handle them.
Schedulability
If there are m periodic processes, and process i occurs with period Pi and
requires Ci seconds of CPU time, then the load can only be handled if
C1/P1 + C2/P2 + … + Cm/Pm ≤ 1
(the sum of Ci/Pi over i = 1..m must not exceed 1).
Example: Schedulable?
Process1 – P1: 100 msec, C1: 50 msec
Process2 – P2: 200 msec, C2: 30 msec
Process3 – P3: 500 msec, C3: 100 msec
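Working out the example (a sketch): the utilization is 50/100 + 30/200 + 100/500 ≈ 0.85, so the three processes are schedulable.

```python
def schedulable(tasks):
    # tasks: list of (period_ms, cpu_ms). The total CPU utilization
    # sum(Ci/Pi) must not exceed 1.
    return sum(c / p for p, c in tasks) <= 1

tasks = [(100, 50), (200, 30), (500, 100)]
print(sum(c / p for p, c in tasks))  # 0.5 + 0.15 + 0.20, about 0.85
print(schedulable(tasks))            # True
```
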
2.3 Interprocess Communication
Processes frequently need to communicate with other
processes: interprocess communication (IPC).
Three issues
How can one process pass information to another?
How can two or more processes avoid getting in each other’s way when
engaging in critical activities?
E.g., two processes in an airline reservation system each trying to grab
the last seat on a plane for a different customer.
How can proper sequencing be ensured when dependencies are present?
E.g., if process A produces data and process B prints them, B has to
wait until A has produced some data before starting to print.
For threads, the first issue is easy, but the other two still
apply.
2.3.1 Race Conditions
Processes that are working together may share some common storage
that each one can read and write.
The shared storage may be in main memory (possibly in a kernel data structure,
e.g., shmget()) or it may be a shared file.
The location of the shared memory does not change the nature of the
communication or the problems that arise.
E.g., when a process wants to print a file, it enters the file name in a
special spooler directory.
Another process, the printer daemon, periodically checks to see if there are any files
to be printed; it prints them and then removes their names from the directory.
Two shared variables: out, which points to the next file to be printed, and in, which
points to the next free slot.
These two variables might well be kept in
a file available to all processes.
2.3.1 Race Conditions
Process A reads in(=7) into its local
variable next_free_slot.
Its time quantum ends, and scheduler
executes Process B.
Process B also reads in(=7) into its local
variable next_free_slot.
It stores a file name (fileB) in slot 7 and
updates in to 8.
Its time quantum ends, scheduler
executes Process A.
Process A stores its file name (fileA) in
slot 7, overwriting fileB, and updates in to 8.
Printer daemon prints only fileA.
Race condition: two or more processes are reading or writing
some shared data and the final result depends on who runs
precisely when.
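The spooler interleaving can be replayed step by step (a single-threaded sketch; the file names and slot numbers follow the example above):

```python
slots = {}   # the spooler directory
in_ = 7      # shared variable: next free slot

# Process A reads `in` into its local next_free_slot ... and is preempted.
a_next = in_
# Process B runs: it reads the same value, stores its file, advances `in`.
b_next = in_
slots[b_next] = "fileB"
in_ = b_next + 1
# Process A resumes with its stale copy and overwrites slot 7.
slots[a_next] = "fileA"
in_ = a_next + 1

print(slots[7], in_)  # fileA 8 -- fileB will never be printed
```
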
2.3.2 Critical Regions
How do we avoid race conditions?
The key to preventing trouble here is to find some way to prohibit more
than one process from reading and writing the shared data at the same
time.
Mutual exclusion
The difficulty above occurred because process B started using one of the
shared variables before process A was finished with it.
If one process is using a shared variable or file, the other processes will be
excluded from doing the same thing.
Achieving mutual exclusion mechanism is a major design issue in any OS.
Critical regions/Critical sections
Sometimes a process is busy doing internal computation, but at other times
it has to access shared memory or files, which can lead to race conditions.
If we could arrange matters such that no two processes were ever in their
critical regions at the same time, we could avoid race conditions.
2.3.2 Critical Regions
In an abstract sense, the behavior we want is shown in Fig. 2-22.
Four conditions must hold for a good solution:
1. No two processes may be simultaneously inside their critical sections.
2. No assumption may be made about speeds or the number of CPUs.
3. No process running outside its critical region may block other processes.
4. No process should have to wait forever to enter its critical region.
2.3.3 Mutual Exclusion with Busy Waiting
To achieve mutual exclusion, we will study various proposals.
(1) Disabling Interrupts
CPU is only switched from process to process as a result of clock interrupt or
other interrupts.
With interrupts turned off, CPU will not be switched to another process.
Once a process has disabled interrupts, it can read and write the shared
memory without any concern that another process will intervene.
Unattractive
Suppose that one process turned off interrupts, and never turned them on
again.
On the other hand, it is convenient for the kernel itself to disable interrupts
during a few instructions.
Disabling interrupts is often a useful technique within OS itself, but it is
not appropriate as a general solution for user processes.
If the system is a multiprocessor, disabling interrupts affects only the CPU that
executed the disable instruction. The other ones will continue running and can
access the shared memory.
(2) Lock Variables
As a second attempt, let us look for a software solution.
Consider having a single, shared lock variable, initially 0.
Whenever a process wants to enter its critical region, it first tests the
lock.
If the lock is 0, then the process sets it to 1 and enters the critical
region.
If the lock is 1, the process just waits until it becomes 0.
Thus, 0 means that no process is in its critical region, and 1 means that
some process is in its critical region.
Unfortunately, this idea has the same fatal flaw as the spooler directory: a
process can be interrupted after reading the lock as 0 but before setting it to 1,
so two processes can end up in their critical regions at the same time.
(3) Strict Alternation
We have the integer variable turn that keeps track of whose turn it
is to enter the critical region.
Initially, it is 0; process 0 reads turn, finds it to be 0, and enters
its critical section.
Process 1 also finds turn to be 0 and therefore sits in a loop
testing turn until it becomes 1.
Continuously testing a variable until some value appears is called busy waiting.
It should usually be avoided, since it wastes CPU time.
However, if the wait will be short, the busy waiting can be used.
A lock that uses busy waiting is called a spin lock.
(3) Strict Alternation
Taking turns is not a good idea when one of processes is much slower
than the other.
Process 0 finishes its critical region. (turn: 0 → 1; P0 is in its NCR)
Then, process 1 can finish its critical region. (turn: 1 → 0; P1 is in its NCR)
Process 0 again finishes its critical region. (turn: 0 → 1)
However, process 1 keeps staying in its noncritical region. (turn stays 1)
Now, even if process 0 wants to enter its critical region, it cannot. (turn: 1)
This situation violates condition 3: process 0 is being blocked by a process not
in its critical region.
Neither process can enter its critical section twice in a row.
While this algorithm does avoid all races, it is not really a serious candidate as
a solution because it violates condition 3.
(4) Peterson’s Solution (1981)
S/W solution
Before using the shared variables, each process calls enter_region() with
its process number, 0 or 1, as parameter.
This call will cause it to wait, if needed, until it is safe to enter.
After finishing with the shared variables, the process calls leave_region()
to allow the other process to enter, if it so desires.
(4) Peterson’s Solution (1981)
Whenever process 0 calls enter_region(),
it sets its array element interested[0] to TRUE and sets turn to 0.
If process 1 is not interested in the critical section, process 0 enters its CR.
After this point, if process 1 calls enter_region(), process 1 must wait until interested[0]
goes to FALSE.
If process 1 has already called enter_region(), its interested value is TRUE.
In this case, both processes are interested.
Both will store their process number in turn.
However, the process that stores its number last must wait.
A process can enter its critical section continuously as long as the other process
is not interested.
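A runnable sketch of enter_region/leave_region in Python (Peterson's algorithm assumes sequentially consistent memory; CPython's global interpreter lock effectively provides that here, whereas a C version on a multiprocessor would need memory barriers; the loop count is arbitrary):

```python
import threading
import time

turn = 0
interested = [False, False]

def enter_region(process):            # process is 0 or 1
    global turn
    other = 1 - process
    interested[process] = True        # announce interest
    turn = process                    # the last writer of turn waits
    while turn == process and interested[other]:
        time.sleep(0)                 # busy wait (yield so the other thread runs)

def leave_region(process):
    interested[process] = False       # departure from the critical region

counter = 0

def worker(me):
    global counter
    for _ in range(10_000):
        enter_region(me)
        counter += 1                  # critical region: a shared update
        leave_region(me)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: no updates were lost
```
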
(5) TSL Instruction
A proposal that requires hardware help.
TSL RX, LOCK
(Test and Set Lock)
It reads the contents of the memory word lock and stores it into register RX.
Then, it stores a nonzero value at the memory address lock.
The operations of reading the word and storing into it are guaranteed to be
indivisible—no other process can access the memory word until the instruction
is finished.
When a process wants to enter CR, it calls enter_region, which does busy
waiting until the lock is free.
After the critical region, the process calls leave_region.
2.3.4 Sleep and Wakeup
Peterson’s solution and the TSL solution are correct, but both rely on busy
waiting.
It wastes CPU time.
Priority inversion problem (with busy waiting).
Consider a computer with two processes, H and L.
H > L : H can run whenever it is in ready state.
At a certain moment, with L in its critical section, H becomes ready to run and
wants to enter its critical section.
H now begins busy waiting (repeats loop without doing anything until L leaves).
Now, L is never scheduled while H is running, so L never gets the chance to
leave its critical section.
So, H loops forever and cannot enter its critical section.
H with higher priority is blocked forever by a lower priority process L.
Sleep & Wakeup
When a process cannot enter its CR, it sleeps without busy waiting.
Whenever another process leaves CR, it wakes up sleeping processes.
2.3.4 Sleep and Wakeup
The Producer-Consumer problem (bounded-buffer problem)
Two processes share a common, fixed-size buffer.
The producer process puts information into the buffer.
The consumer process takes it out.
It is also possible to generalize the problem to have m producers and n
consumers.
Problem situations
When the producer wants to put a new item in the buffer, but it is
already full,
the producer goes to sleep, and it wakes up when the consumer has removed
one or more items.
When the consumer wants to remove an item, but the buffer is empty,
the consumer goes to sleep until the producer puts something in the buffer.
This approach sounds simple enough, but it leads to the same kinds of
race conditions as the spooler directory.
Naïve solution for Producer-Consumer problem
Producer-Consumer problem
Race condition
Buffer is empty, and C has just read count(=0).
At that moment, scheduler decides to run P.
P inserts an item, increases count (now, count=1),
and calls wakeup().
However, C is not logically asleep yet.
So, wakeup signal is lost.
When C next runs, it tests value of count(=0), and
goes to sleep.
Now, P fills up the buffer without waking up C.
Eventually (when the buffer is full), P also goes to
sleep.
The problem is that a wakeup signal is lost.
A quick fix is to add a wakeup waiting bit: a piggy bank for
storing wakeup signals.
2.3.5 Semaphores
A new variable type, the semaphore, was proposed by Dijkstra.
He suggested using an integer variable to count the number of wakeups
saved for future use.
It can be 0, indicating that no wakeup signals (tokens) were saved, or
some positive value if one or more wakeup signals are pending.
Two operations; down (sleep) and up (wakeup)
Down (sleep) checks to see if the value is greater than 0.
If so, it decrements the value and just continues.
If the value is 0, then the process is put to sleep without completing down
operation at the moment.
Up (wakeup) increments the value.
If one or more processes were sleeping, one of them is chosen by the system
and it is allowed to complete its down operation.
Each down and up is done as a single, indivisible, atomic action.
Checking the value, changing it, and possibly going to sleep (wake up), are
all done as an atomic action.
2.3.5 Semaphores
It is essential that up and down be
implemented in an indivisible way.
Three semaphores
full: counting the number of slots that are full
empty: counting the number of slots that are
empty
mutex: to make sure the producer and
consumer do not access the buffer at the
same time; essential when there are multiple
producers (or consumers).
http://en.wikipedia.org/wiki/Producer-consumer_problem
mutex is used for mutual exclusion.
full & empty are for synchronization.
needed to guarantee that certain event
sequences do or do not occur.
They ensure that the producer stops running
when there is no empty slot, and the
consumer stops running when there is no
item in the buffer.
2.3.5 Semaphores
Producer-Consumer with 3 semaphores
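Since the slide's figure is not reproduced here, the following is a sketch of the same three-semaphore solution using Python's threading.Semaphore (acquire corresponds to down, release to up; the buffer size N and item count are arbitrary):

```python
import threading

N = 4                                   # buffer capacity
buffer = []
mutex = threading.Semaphore(1)          # mutual exclusion on the buffer
empty = threading.Semaphore(N)          # counts empty slots
full = threading.Semaphore(0)           # counts full slots

def producer(items):
    for item in items:
        empty.acquire()                 # down(empty): wait for a free slot
        with mutex:                     # down(mutex) ... up(mutex)
            buffer.append(item)
        full.release()                  # up(full): one more item available

def consumer(count, out):
    for _ in range(count):
        full.acquire()                  # down(full): wait for an item
        with mutex:
            out.append(buffer.pop(0))
        empty.release()                 # up(empty): one more free slot

items, out = list(range(20)), []
p = threading.Thread(target=producer, args=(items,))
c = threading.Thread(target=consumer, args=(len(items), out))
p.start(); c.start(); p.join(); c.join()
print(out == items)  # True: items arrive in order, buffer never overflows
```
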
2.3.6. Mutexes
When the semaphore’s ability to count is not needed,
a simplified version of the semaphore, called a mutex, is sometimes
used.
Mutexes are good only for managing mutual exclusion to some shared
resource or piece of code.
A mutex is a variable that can be in one of two states: unlocked or
locked (an integer 0 meaning unlocked and all other values meaning
locked).
Two procedures are used with mutex.
When a thread (or process) needs access to a critical region, it calls
mutex_lock.
If the mutex is currently unlocked, the call succeeds and the calling thread is
free to enter the critical region.
If the mutex is already locked, the calling thread is blocked until the thread in
the critical region is finished and calls mutex_unlock.
2.3.6. Mutexes
The code is similar to enter_region of Fig 2-25 but with a
crucial difference.
When enter_region fails to enter the critical region, it keeps testing the
lock repeatedly (busy waiting).
When mutex_lock fails to acquire a lock, it calls thread_yield to give up
the CPU to another thread.
2.5 Classical IPC problems
2.5.1 The Dining Philosophers Problem (by
Dijkstra, 1965)
Five philosophers are seated around a circular
table.
Each philosopher has a plate of spaghetti.
A philosopher needs two forks to eat it.
Between each pair of plates is one fork.
The life of a philosopher consists of alternate periods of
eating and thinking.
When a philosopher gets hungry, she tries to acquire
her left and right fork, one at a time, in either order.
If successful in acquiring two forks, she eats for a
while, then puts down the forks, and continues to think.
Can you write a program for each philosopher that
does what it is supposed to do and never gets stuck?
2.5.1 The Dining Philosophers Problem
Bad solution
None will be able to take their right forks if all philosophers take their left forks
at the same time.
Modification
If the right fork is not available, the philosopher puts down the left one and
waits for some time.
With bad luck, all philosophers pick up their left forks simultaneously, then see
that the right one is not available; each puts the left fork down, waits the same
amount of time, and tries again simultaneously, again and again ….
A situation in which all the programs continue to run indefinitely but fail to
make any progress is called starvation.
2.5.1 The Dining Philosophers Problem
Next, what if each philosopher waits a random time?
In many cases, this works.
However, in a few applications, one would prefer a
solution that always works and cannot fail due to an
unlikely series of random numbers.
E.g., safety control in a nuclear power plant.
Protect five statements by a binary semaphore.
Before starting to acquire forks, a philosopher would do a
down on mutex.
After replacing the forks, she would do an up on mutex.
No deadlock.
However, it has a performance bug: only one philosopher
can be eating at any instant, even though two
philosophers could eat at the same time.
2.5.1 The Dining Philosophers Problem
It is deadlock-free and allows the maximum parallelism for
an arbitrary number of philosophers.
It uses an array, state, to keep track of whether a philosopher is eating,
thinking, or hungry, which is shared by philosophers.
A philosopher may only move into eating state if neither neighbor is
eating.
The program uses an array of semaphores, one per philosopher, so
hungry philosophers can block if the needed forks are busy.
Initially,
state[i] = THINKING;
s[i] = 0;