Concurrent Reading and Writing using Mobile Agents


Physical clock synchronization
Question 1.
Why is physical clock synchronization important?
Question 2.
With the price of atomic clocks or GPS coming down,
should we care about physical clock synchronization?

Classification

Types of synchronization:
 External synchronization
 Internal synchronization
 Phase synchronization

Types of clocks:
 Unbounded: 0, 1, 2, 3, ...
 Bounded: 0, 1, 2, ..., M-1, 0, 1, ...

Unbounded clocks are not realistic, but are easier to deal with in the design
of algorithms. Real clocks are always bounded.

Terminologies

What are these?
 Drift rate ρ
 Clock skew δ
 Resynchronization interval R

[Figure: clock time vs. Newtonian time for two drifting clocks (clock 1, clock 2), showing the drift rate and the resynchronization interval R.]

Max drift rate ρ implies: (1 - ρ) ≤ dC/dt < (1 + ρ)
Challenges
(Drift is unavoidable)
Accounting for propagation delay
Accounting for processing delay
Faulty clocks

Internal synchronization
Berkeley Algorithm

A simple averaging algorithm that guarantees mutual consistency |c(i) - c(j)| < δ.

Step 1. Read every clock in the system.
Step 2. Discard outliers and substitute them by the value of the local clock.
Step 3. Update the clock using the average of these values.
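
The averaging step can be sketched as follows (a minimal illustration, not the
original algorithm's code; the function name, the readings map and the
threshold-based notion of "outlier" are assumptions made for the example):

from statistics import mean

# Berkeley-style averaging at the coordinator (illustrative sketch).
# 'readings' maps process ids to the clock values read from them,
# 'local' is the coordinator's own clock value, and any reading farther
# than 'threshold' from 'local' is treated as an outlier.
def berkeley_average(readings, local, threshold):
    # Step 2: discard outliers and substitute them by the local clock value.
    cleaned = [r if abs(r - local) <= threshold else local
               for r in readings.values()]
    # Step 3: the corrected clock is the average of these values.
    return mean(cleaned)

# Example: clock 2 reads 250.0 and is treated as an outlier.
print(berkeley_average({0: 100.0, 1: 100.4, 2: 250.0}, local=100.2, threshold=5.0))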

Internal synchronization
Lamport and Melliar-Smith's averaging algorithm handles byzantine clocks too

Assume n clocks, of which at most t are faulty.

Step 1. Read every clock in the system.
Step 2. Discard outliers and substitute them by the value of the local clock.
Step 3. Update the clock using the average of these values.

Synchronization is maintained if n > 3t. Why?

A faulty clock exhibits 2-faced or byzantine behavior: it can report different
values to different processes.

[Figure: nodes i, j, k with readings c - δ, c, c - 2δ, c + δ; one bad clock.]

Internal synchronization
Lamport & Melliar-Smith's algorithm (continued)

The maximum difference between the averages computed by two non-faulty nodes
is 3tδ/n.

To keep the clocks synchronized, we need 3tδ/n < δ. So, 3t < n.

[Figure: the same nodes i, j, k with readings c - δ, c, c - 2δ, c + δ; bad clocks.]
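
A small numeric illustration of this bound (an illustrative sketch, not from
the slides; the values n = 5, t = 1 and δ = 1.0 are made up for the example).
A two-faced clock reports different values to nodes i and j, yet the averages
they compute stay within 3tδ/n of each other:

delta = 1.0                        # assumed maximum skew among non-faulty clocks
good = [10.0, 10.3, 10.6, 10.9]    # readings of the n - t = 4 non-faulty clocks
n, t = 5, 1                        # five clocks, at most one faulty

def average_at(own, reported_by_bad_clock):
    # Steps 2 and 3 above, as run by a non-faulty node whose own clock reads 'own':
    # replace readings more than delta away from 'own' by 'own', then average.
    readings = good + [reported_by_bad_clock]
    cleaned = [r if abs(r - own) <= delta else own for r in readings]
    return sum(cleaned) / n

avg_i = average_at(own=10.0, reported_by_bad_clock=9.1)    # bad clock looks slow to i
avg_j = average_at(own=10.9, reported_by_bad_clock=11.8)   # ...but fast to j
print(abs(avg_i - avg_j), "<=", 3 * t * delta / n)         # prints ~0.54 <= 0.6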

Cristian's method
External synchronization

Each client pulls the time from a time server every R units of time, where
R < δ/(2ρ). For accuracy, clients must compute the round-trip time (RTT) and
compensate for this delay while adjusting their own clocks. (Samples with too
large an RTT are rejected.)

[Figure: several clients pulling the time from a central time server.]
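
A minimal sketch of the RTT compensation (illustrative only; ask_time_server
is a stand-in for whatever request/reply call the client actually uses, and
max_rtt is the rejection threshold):

import time

def cristian_read(ask_time_server, max_rtt):
    t_send = time.monotonic()
    server_time = ask_time_server()      # one request/reply to the time server
    rtt = time.monotonic() - t_send
    if rtt > max_rtt:
        return None                      # RTT too large: reject this sample
    # Compensate for the delay: the reply is roughly rtt/2 seconds old by now.
    return server_time + rtt / 2.0

# Example with a fake server whose clock is 50 ms ahead of the local clock.
print(cristian_read(lambda: time.time() + 0.05, max_rtt=0.5))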

Network Time Protocol (NTP)
Tiered architecture

[Figure: a level-0 time server at the root; level-1 servers synchronize with it, and level-2 servers synchronize with the level-1 servers.]

Broadcast mode - least accurate
Procedure call - medium accuracy
Peer-to-peer mode - upper level servers use this for max accuracy

The tree can reconfigure itself if some node fails.

P2P mode of NTP

Let Q's time be ahead of P's time by δ. P sends a request at time T1 (on P's
clock); Q receives it at T2 and sends its reply at T3 (on Q's clock); P
receives the reply at T4. With TPQ and TQP the two one-way delays,

  T2 = T1 + TPQ + δ
  T4 = T3 + TQP - δ

  y = TPQ + TQP = T2 + T4 - T1 - T3   (RTT)
  δ = (T2 - T4 - T1 + T3)/2 - (TPQ - TQP)/2
    = x - (TPQ - TQP)/2,  where x = (T2 - T4 - T1 + T3)/2

The second term lies between -y/2 and +y/2, so

  x - y/2 ≤ δ ≤ x + y/2

Ping several times, and obtain the smallest value of y. Use it to calculate δ.
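
The computation of x and y from the four timestamps can be sketched as follows
(an illustrative example; the function name and the sample numbers are made up):

def ntp_offset(t1, t2, t3, t4):
    y = (t2 + t4) - (t1 + t3)            # round-trip time, TPQ + TQP
    x = (t2 - t4 - t1 + t3) / 2.0        # midpoint estimate of the offset δ
    return x, y                          # the true δ lies in [x - y/2, x + y/2]

# Example: Q is 0.30 s ahead of P, one-way delays are 0.02 s and 0.03 s.
t1 = 100.00
t2 = t1 + 0.02 + 0.30     # T2 = T1 + TPQ + δ
t3 = t2 + 0.01            # Q's processing time before replying
t4 = t3 + 0.03 - 0.30     # T4 = T3 + TQP - δ
x, y = ntp_offset(t1, t2, t3, t4)
print(x, y)               # x ≈ 0.295, y = 0.05, so 0.27 ≤ δ ≤ 0.32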

Problems with clock adjustment

1. What problems can occur when a clock value is advanced from 171 to 174?
2. What problems can occur when a clock value is moved back from 180 to 175?

1. What happened to the instants 172 and 173?
2. The instants 175-180 appear twice.

Mutual Exclusion

[Figure: processes p0, p1, p2, p3, each with a critical section CS.]

Why mutual exclusion? Some applications are:
1. Resource sharing
2. Avoiding concurrent update on shared data
3. Controlling the grain of atomicity
4. Medium Access Control in Ethernet
5. Collision avoidance in wireless broadcasts
Specifications
ME1. At most one process in the CS. (Safety property)
ME2. No deadlock. (Safety property)
ME3. Every process trying to enter its CS must eventually succeed.
This is called progress. (Liveness property)
Progress is quantified by the criterion of bounded waiting. It measures a form
of fairness by answering the question: between two consecutive CS trips by one
process, how many times can other processes enter the CS?

There are many solutions, both on the shared-memory model and on the
message-passing model.

Message passing solution:
Centralized decision making

[Figure: clients send req and release messages to a central server and receive reply messages; the server keeps a boolean variable busy and a queue of waiting clients.]

Client
do true →
      send request;
      reply received → enter CS;
      send release;
      <other work>
od

Server
do  request received and not busy → send reply; busy := true
    request received and busy → enqueue sender
    release received and queue is empty → busy := false
    release received and queue not empty → send reply to the head of the queue
od
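
The server's guarded commands translate directly into one event handler per
message type. A minimal sketch (illustrative only; the class name and the
send_reply callback are assumptions, not part of the slides):

from collections import deque

class LockServer:
    def __init__(self, send_reply):
        self.busy = False           # is some client currently in the CS?
        self.queue = deque()        # waiting clients, in arrival order
        self.send_reply = send_reply

    def on_request(self, client):
        if not self.busy:
            self.send_reply(client)      # not busy: grant the CS right away
            self.busy = True
        else:
            self.queue.append(client)    # busy: enqueue the sender

    def on_release(self, client):
        if self.queue:
            self.send_reply(self.queue.popleft())  # pass the CS to the head of the queue
        else:
            self.busy = False                      # queue empty: the CS becomes free

# Example run: p1 gets the reply at once, p2 gets it when p1 releases.
server = LockServer(send_reply=lambda c: print("reply ->", c))
server.on_request("p1"); server.on_request("p2"); server.on_release("p1")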

Comments

- The centralized solution is simple.
- But the server is a single point of failure. This is BAD.
- ME1-ME3 are satisfied, but FIFO fairness is not guaranteed. Why?

Can we do better? Yes!

Decentralized solution 1:
Lamport's algorithm

{Life of each process}
1. Broadcast a timestamped request to all.
2. Request received → enqueue sender in local Q.
   Not in CS → send ack.
   In CS → postpone sending ack (until exit from CS).
3. Enter CS, when
   (i) You are at the head of your own local Q, and
   (ii) You have received ack from all processes.
4. To exit from the CS,
   (i) Delete the request from Q, and
   (ii) Broadcast a timestamped release.
5. Release received → remove sender from local Q.

[Figure: a completely connected topology of processes 0, 1, 2, 3, each maintaining a local queue Q0, Q1, Q2, Q3.]

Can you show that it satisfies all the properties (i.e. ME1, ME2, ME3) of a
correct solution?
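
The per-process bookkeeping can be sketched as follows (a minimal illustration,
not a full implementation: the class name is invented, broadcast and send_ack
stand for an assumed message-passing layer, message delivery to the handlers is
left out, and the acks postponed in step 2 are not tracked here):

import heapq

class LamportMutex:
    def __init__(self, pid, n, broadcast, send_ack):
        self.pid, self.n = pid, n
        self.clock = 0                 # Lamport timestamp of this process
        self.queue = []                # local Q: min-heap of (timestamp, pid)
        self.acks = set()
        self.broadcast, self.send_ack = broadcast, send_ack

    def request_cs(self):                            # step 1
        self.clock += 1
        self.acks = {self.pid}
        heapq.heappush(self.queue, (self.clock, self.pid))
        self.broadcast(("request", self.clock, self.pid))

    def on_request(self, ts, sender, in_cs=False):   # step 2
        self.clock = max(self.clock, ts) + 1         # Lamport clock update on receive
        heapq.heappush(self.queue, (ts, sender))
        if not in_cs:
            self.send_ack(sender)                    # otherwise postpone until exit

    def on_ack(self, sender):
        self.acks.add(sender)

    def may_enter_cs(self):                          # step 3: head of own Q + all acks
        return (bool(self.queue) and self.queue[0][1] == self.pid
                and len(self.acks) == self.n)

    def release_cs(self):                            # step 4
        heapq.heappop(self.queue)                    # delete own request from Q
        self.clock += 1
        self.broadcast(("release", self.clock, self.pid))

    def on_release(self, sender):                    # step 5
        self.queue = [e for e in self.queue if e[1] != sender]
        heapq.heapify(self.queue)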