End to End Protocols
1
End to End Protocols
 We already saw: basic protocols
  Stop & wait (correct but low performance)
 Now:
  Window-based protocols
   • Go Back N
   • Selective Repeat
  TCP protocol
  UDP protocol
2
Pipelined protocols
Pipelining: sender allows multiple, “in-flight”, yet-to-be-acknowledged pkts
 range of sequence numbers must be increased
 buffering at sender and/or receiver
 Two generic forms of pipelined protocols: Go-Back-N, Selective Repeat
3
Go Back N (GBN)
4
Go-Back-N
Sender:
 k-bit seq # in pkt header
 “window” of up to N consecutive unACKed pkts allowed
 ACK(n): ACKs all pkts up to, and including, seq # n - “cumulative ACK”
  may receive duplicate ACKs (see receiver)
 timer for each in-flight pkt
 timeout(n): retransmit pkt n and all higher seq # pkts in window

5
GBN: sender extended FSM
6
GBN: receiver extended FSM
receiver simple:
 ACK-only: always send ACK for correctly-received pkt with highest in-order seq #
  may generate duplicate ACKs
  need only remember expectedseqnum
 out-of-order pkt:
  discard (don’t buffer) -> no receiver buffering!
  ACK pkt with highest in-order seq #
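To make these rules concrete, here is a minimal Python sketch of both ends of Go-Back-N, following the sender and receiver bullets above. The transmission hooks (send_pkt, send_ack, deliver) are assumed callbacks, sequence numbers are left unbounded (no k-bit wraparound), and timer handling is reduced to a single on_timeout call, so this is an illustration rather than a full implementation.

# Go-Back-N sketch (illustrative only; no k-bit wraparound, no real timers).
N = 4  # window size

class GBNSender:
    def __init__(self):
        self.base = 0            # oldest unACKed seq #
        self.nextseqnum = 0
        self.sent = {}           # seq # -> data, kept until cumulatively ACKed

    def send(self, data, send_pkt):
        if self.nextseqnum >= self.base + N:
            return False                      # window full: refuse (or buffer above)
        self.sent[self.nextseqnum] = data
        send_pkt(self.nextseqnum, data)
        self.nextseqnum += 1
        return True

    def on_ack(self, n):                      # cumulative ACK: all seq #s <= n are done
        while self.base <= n:
            self.sent.pop(self.base, None)
            self.base += 1

    def on_timeout(self, send_pkt):           # resend every in-flight packet
        for seq in range(self.base, self.nextseqnum):
            send_pkt(seq, self.sent[seq])

class GBNReceiver:
    def __init__(self):
        self.expectedseqnum = 0               # the only state the receiver needs

    def on_pkt(self, seq, data, send_ack, deliver):
        if seq == self.expectedseqnum:
            deliver(data)                     # in-order: deliver to the upper layer
            self.expectedseqnum += 1
        # in-order or not, ACK the highest in-order seq # received so far
        send_ack(self.expectedseqnum - 1)     # duplicate ACKs when out of order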
7
GBN in action
8
GBN - correctness
 Safety:
  The sequence numbers guarantee: packets are delivered in order, with no gaps and no duplicates.
  Safety follows from expectedseqnum: the next seq # is accepted exactly once.
 Liveness:
  Eventually a timeout occurs and the sender re-sends the window; eventually base is received correctly.
  Receiver: from that time on it ACKs at least base; eventually an ACK gets through; the sender then updates to base (or more).
9
GBN - correctness
Clearing a FIFO channel:
[figure: once Data k (resp. ACK k) has been received, any Data i or ACK i with i<k still in the FIFO channel is impossible]
Claim: after receiving Data/ACK k, no Data/ACK i<k is received.
Sufficient to use N+1 seq. numbers.
10
Selective Repeat
11
Selective Repeat
 receiver individually acknowledges all correctly received pkts
  buffers pkts, as needed, for eventual in-order delivery to upper layer
 sender only resends pkts for which ACK not received
  sender timer for each unACKed pkt
 sender window
  N consecutive seq #’s
  again limits seq #s of sent, unACKed pkts
12
Selective repeat: sender, receiver windows
13
Selective repeat
sender:
 data from above:
  if next available seq # in window, send pkt
 timeout(n):
  resend pkt n, restart timer
 ACK(n) in [sendbase, sendbase+N]:
  mark pkt n as received
  if n smallest unACKed pkt, advance window base to next unACKed seq #
receiver:
 pkt n in [rcvbase, rcvbase+N-1]:
  send ACK(n)
  in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt
  out-of-order: buffer
 pkt n in [rcvbase-N, rcvbase-1]:
  ACK(n)
 otherwise:
  ignore
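As a companion to the GBN sketch, here is the receiver side of Selective Repeat in the same style, implementing exactly the three cases listed above. The sender side (per-packet timers) is omitted, and send_ack/deliver are again assumed callbacks; sequence numbers are kept unbounded for simplicity.

# Selective Repeat receiver sketch (illustrative; seq #s unbounded for simplicity).
N = 4  # window size

class SRReceiver:
    def __init__(self):
        self.rcvbase = 0
        self.buffer = {}                      # out-of-order pkts awaiting delivery

    def on_pkt(self, n, data, send_ack, deliver):
        if self.rcvbase <= n <= self.rcvbase + N - 1:
            send_ack(n)                       # individually ACK every correct pkt
            self.buffer[n] = data
            # deliver any in-order run starting at rcvbase, then slide the window
            while self.rcvbase in self.buffer:
                deliver(self.buffer.pop(self.rcvbase))
                self.rcvbase += 1
        elif self.rcvbase - N <= n <= self.rcvbase - 1:
            send_ack(n)                       # old pkt: re-ACK (its ACK may have been lost)
        # otherwise: ignore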
14
Selective repeat in action
15
Selective Repeat - Correctness
 Infinite seq. num.:
  Safety: immediate from the seq. num.
  Liveness: eventually data and ACKs get through.
 Finite seq. num.:
  Idea: re-use seq. num.
   Use fewer bits to encode them.
  Number of seq. num.: at least N. Needs more!
16
Selective repeat: dilemma
Example:
 seq #’s: 0, 1, 2, 3
 window size = 3
 receiver sees no difference in the two scenarios!
 incorrectly passes duplicate data as new in (a)
Q: what is the relationship between the seq # space size and the window size?
17
Choosing the window size
 Small window size:
  idle link (under-utilization).
 Large window size:
  buffer space
  delay after loss
 Ideal window size (assuming very low loss):
  RTT = round trip time
  C = link capacity
  window size = RTT * C
 What happens with no loss?
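As a worked example of the window size = RTT * C rule (all numbers below are invented for illustration):

# Ideal window (bandwidth-delay product) for a hypothetical link.
RTT = 0.050                # round-trip time: 50 ms
C = 100e6                  # link capacity: 100 Mbit/s
MSS_BYTES = 1460           # a typical segment payload size, assumed here

window_bits = RTT * C      # 5e6 bits must be "in flight" to keep the link busy
print(f"ideal window = {window_bits/8/1e3:.0f} kB "
      f"≈ {window_bits/8/MSS_BYTES:.0f} segments of {MSS_BYTES} bytes")

A smaller window leaves the link idle for part of every RTT; a larger one mostly costs buffer space and delay after a loss, matching the bullets above.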
18
End to End Protocols:
Multiplexing & Demultiplexing
19
Multiplexing/demultiplexing
Recall: segment - unit of data exchanged between transport layer entities
 aka TPDU: transport protocol data unit
Demultiplexing: delivering received segments (TPDUs) to correct app layer processes
[figure: application-layer data M is wrapped with a segment header (Ht) and a network header (Hn) at the sender; the receiving hosts' application/transport/network stacks deliver each segment to the correct process P1-P4]
20
Multiplexing/demultiplexing
Multiplexing: gathering data from multiple app processes, enveloping data with header (later used for demultiplexing)
multiplexing/demultiplexing:
 based on sender, receiver port numbers, IP addresses
 source, dest port #s in each segment
 recall: well-known port numbers for specific applications
[figure: TCP/UDP segment format, 32 bits wide - source port #, dest port #, other header fields, application data (message)]
21
Multiplexing/demultiplexing: examples
port use: simple telnet app
 host A -> server B: source port x, dest. port 23
 server B -> host A: source port 23, dest. port x
port use: WWW server B
 WWW client host A: source IP A, dest IP B, source port x, dest. port 80
 WWW client host C: source IP C, dest IP B, source ports x and y, dest. port 80
 server B tells the connections apart by source IP and source port, even though they all use dest. port 80
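Port-based demultiplexing can be seen directly with sockets. Below is a small, self-contained Python demo (the loopback address and ports 9001/9002 are arbitrary choices for illustration): the OS hands each datagram to the socket bound to the segment's destination port number.

import socket

# Two "server" sockets bound to different ports.
PORT_A, PORT_B = 9001, 9002
recv_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_a.bind(("127.0.0.1", PORT_A))
recv_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_b.bind(("127.0.0.1", PORT_B))

# One client socket sends to both destination ports.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"to A", ("127.0.0.1", PORT_A))
sender.sendto(b"to B", ("127.0.0.1", PORT_B))

# Each datagram is demultiplexed to the socket bound to its destination port #.
for sock in (recv_a, recv_b):
    sock.settimeout(1.0)
    data, (src_ip, src_port) = sock.recvfrom(1024)
    print(f"port {sock.getsockname()[1]} received {data!r} from {src_ip}:{src_port}")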
22
TCP Protocol
23
TCP: Overview   (RFCs: 793, 1122, 1323, 2018, 2581)
 point-to-point:
  one sender, one receiver
 reliable, in-order byte stream:
  no “message boundaries”
 pipelined:
  TCP congestion and flow control set window size
 full duplex data:
  bi-directional data flow in same connection
  MSS: maximum segment size
 connection-oriented:
  handshaking (exchange of control msgs) init’s sender, receiver state before data exchange
 flow controlled:
  sender will not overwhelm receiver
24
TCP segment structure
[figure: TCP segment format, 32 bits wide]
 source port #, dest port #
 sequence number, acknowledgement number - counting by bytes of data (not segments!)
 head len, unused bits, flags U A P R S F
  URG: urgent data (generally not used)
  ACK: ACK # valid
  PSH: push data now (generally not used)
  RST, SYN, FIN: connection estab (setup, teardown commands)
 rcvr window size: # bytes rcvr willing to accept
 Internet checksum, ptr to urgent data
 Options (variable length)
 application data (variable length)
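As a sketch of how those fields sit in the 20-byte fixed header, here is a small parser using Python's standard struct module; field names follow the slide, and TCP options beyond the fixed header are ignored.

import struct

def parse_tcp_header(segment: bytes):
    """Unpack the fixed 20-byte part of a TCP header (fields as on the slide)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4          # "head len" is in 32-bit words
    flags = offset_flags & 0x3F                    # U A P R S F
    return {
        "source port": src_port, "dest port": dst_port,
        "sequence number": seq, "acknowledgement number": ack,
        "header length": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "rcvr window size": window, "checksum": checksum, "urgent ptr": urg_ptr,
    }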
25
TCP seq. #’s and ACKs
Seq. #’s:
 byte stream “number” of first byte in segment’s data
ACKs:
 seq # of next byte expected from other side
 cumulative ACK
Q: how receiver handles out-of-order segments
 A: TCP spec doesn’t say - up to implementor
[figure: simple telnet scenario - the user on Host A types ‘C’; Host B ACKs receipt of ‘C’ and echoes it back; Host A ACKs receipt of the echoed ‘C’]
26
TCP: reliable data transfer
simplified sender, assuming:
 • one-way data transfer
 • no flow, congestion control
[figure: sender FSM - wait for an event, then:]
 event: data received from application above -> create, send segment
 event: timer timeout for segment with seq # y -> retransmit segment
 event: ACK received, with ACK # y -> ACK processing
27
TCP: reliable data transfer
Simplified TCP sender:

   sendbase = initial_sequence_number
   nextseqnum = initial_sequence_number

   loop (forever) {
     switch(event)

     event: data received from application above
       create TCP segment with sequence number nextseqnum
       start timer for segment nextseqnum
       pass segment to IP
       nextseqnum = nextseqnum + length(data)

     event: timer timeout for segment with sequence number y
       retransmit segment with sequence number y
       compute new timeout interval for segment y
       restart timer for sequence number y

     event: ACK received, with ACK field value of y
       if (y > sendbase) { /* cumulative ACK of all data up to y */
         cancel all timers for segments with sequence numbers < y
         sendbase = y
       }
       else { /* a duplicate ACK for already ACKed segment */
         increment number of duplicate ACKs received for y
         if (number of duplicate ACKs received for y == 3) {
           /* TCP fast retransmit */
           resend segment with sequence number y
           restart timer for segment y
         }
       }
   } /* end of loop forever */
28
TCP ACK generation [RFC 1122, RFC 2581]

Event: in-order segment arrival, no gaps, everything else already ACKed
TCP receiver action: delayed ACK - wait up to 500 ms for next segment; if no next segment, send ACK

Event: in-order segment arrival, no gaps, one delayed ACK pending
TCP receiver action: immediately send single cumulative ACK

Event: out-of-order segment arrival, higher-than-expected seq. # (gap detected)
TCP receiver action: send duplicate ACK, indicating seq. # of next expected byte

Event: arrival of segment that partially or completely fills gap
TCP receiver action: immediate ACK if segment starts at lower end of gap
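One way to read the table is as a small decision function. The Python sketch below is my own restatement: the names expected_seq, ack_pending and gap_above are not from the slide, and the 500 ms delayed-ACK timer itself is not modeled.

def tcp_receiver_ack_action(seq, expected_seq, ack_pending, gap_above):
    """Sketch of the ACK-generation table; gap_above means out-of-order data is
    already buffered above expected_seq (i.e. there is a gap to fill)."""
    if seq > expected_seq:                       # row 3: gap detected
        return "send duplicate ACK for the next expected byte"
    if seq == expected_seq and gap_above:        # row 4: segment starts at lower end of gap
        return "send immediate ACK"
    if seq == expected_seq and ack_pending:      # row 2: one delayed ACK already pending
        return "send one cumulative ACK immediately"
    if seq == expected_seq:                      # row 1: in-order, nothing pending
        return "delayed ACK: wait up to 500 ms for the next segment"
    return "not covered by the table (old duplicate segment)"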
29
TCP: retransmission scenarios
[figure: two Host A / Host B timelines - (a) lost ACK scenario: the ACK is lost (X), the Seq=92 timer expires and the segment is retransmitted; (b) premature timeout with cumulative ACKs: the Seq=92 timer expires before the ACKs for Seq=92 and Seq=100 arrive]
30
TCP Flow Control
flow control: sender won’t overrun receiver’s buffers by transmitting too much, too fast

 receiver: explicitly informs sender of (dynamically changing) amount of free buffer space
  RcvWindow field in TCP segment
 sender: keeps the amount of transmitted, unACKed data less than most recently received RcvWindow

receiver buffering:
 RcvBuffer = size of TCP receive buffer
 RcvWindow = amount of spare room in buffer
31
TCP Round Trip Time and Timeout
Q: how to set TCP timeout value?
 longer than RTT
  note: RTT will vary
 too short: premature timeout
  unnecessary retransmissions
 too long: slow reaction to segment loss

Q: how to estimate RTT?
 SampleRTT: measured time from segment transmission until ACK receipt
  ignore retransmissions, cumulatively ACKed segments
 SampleRTT will vary; want estimated RTT “smoother”
  use several recent measurements, not just current SampleRTT
32
TCP Round Trip Time and Timeout
EstimatedRTT = (1-x)*EstimatedRTT + x*SampleRTT
 exponential weighted moving average
 influence of a given sample decreases exponentially fast
 typical value of x: 0.1

Setting the timeout:
 EstimatedRTT plus a “safety margin”
 large variation in EstimatedRTT -> larger safety margin

Deviation = (1-x)*Deviation + x*|SampleRTT-EstimatedRTT|
Timeout = EstimatedRTT + 4*Deviation
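The two update rules translate directly into code. In the sketch below the deviation update uses the freshly updated EstimatedRTT; the slide does not pin down that ordering, so treat it as an assumption.

def update_rtt(estimated_rtt, deviation, sample_rtt, x=0.1):
    """One EWMA step: returns (EstimatedRTT, Deviation, Timeout) per the slide's formulas."""
    estimated_rtt = (1 - x) * estimated_rtt + x * sample_rtt
    deviation = (1 - x) * deviation + x * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * deviation
    return estimated_rtt, deviation, timeout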
33
TCP Connection Management
Recall: TCP sender, receiver establish “connection” before exchanging data segments
 initialize TCP variables:
  seq. #s
  buffers, flow control info (e.g. RcvWindow)
 client: connection initiator
   Socket clientSocket = new Socket("hostname","port number");
 server: contacted by client
   Socket connectionSocket = welcomeSocket.accept();

Three way handshake:
Step 1: client sends TCP SYN control segment to server
 specifies initial seq #
Step 2: server receives SYN, replies with SYNACK control segment
 ACKs received SYN
 allocates buffers
 specifies server’s initial seq. #
Step 3: client sends ACK and data.
34
TCP Connection Management (cont.)
Closing a connection:
client closes socket: clientSocket.close();

Step 1: client end system sends TCP FIN control segment to server.
Step 2: server receives FIN, replies with ACK. Closes connection, sends FIN.
[figure: client/server timeline - client sends FIN (close), server ACKs and later sends its own FIN (close); the client enters a timed wait before the connection is closed]
35
TCP Connection Management (cont.)
Step 3: client receives FIN, replies with ACK.
 Enters “timed wait” - will respond with ACK to received FINs.
Step 4: server receives ACK. Connection closed.

Note: with a small modification, can handle simultaneous FINs.
[figure: client (closing, timed wait, closed) and server (closing, closed) timeline]
36
TCP Connection Management (cont)
[figure: TCP server lifecycle and TCP client lifecycle state diagrams]
37
Principles of Congestion Control
Congestion:
 informally: “too many sources sending too much data too fast for network to handle”
 different from flow control!
 manifestations:
  lost packets (buffer overflow at routers)
  long delays (queueing in router buffers)
 a top-10 problem!
38
Causes/costs of congestion: scenario 1
 two senders, two receivers
 one router, infinite buffers
 no retransmission
 large delays when congested
 maximum achievable throughput
39
Causes/costs of congestion: scenario 2
 one router, finite buffers
 sender retransmission of lost packet
40
Causes/costs of congestion: scenario 2
 always: λin = λout (goodput)
 “perfect” retransmission only when loss: λ'in > λout
 retransmission of delayed (not lost) packet makes λ'in larger (than perfect case) for the same λout
“costs” of congestion:
 more work (retrans) for given “goodput”
 unneeded retransmissions: link carries multiple copies of pkt
41
Causes/costs of congestion: scenario 3
 four senders
 multihop paths
 timeout/retransmit
Q: what happens as λin and λ'in increase?
42
Causes/costs of congestion: scenario 3
Another “cost” of congestion:
 when a packet is dropped, any “upstream” transmission capacity used for that packet was wasted!
43
Approaches towards congestion control
Two broad approaches towards congestion control:
End-end congestion control:
 no explicit feedback from network
 congestion inferred from end-system observed loss, delay
 approach taken by TCP
Network-assisted congestion control:
 routers provide feedback to end systems
  single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  explicit rate sender should send at
44
Case study: ATM ABR congestion control
ABR: available bit rate:
 “elastic service”
 if sender’s path “underloaded”:
  sender should use available bandwidth
 if sender’s path congested:
  sender throttled to minimum guaranteed rate
RM (resource management) cells:
 sent by sender, interspersed with data cells
 bits in RM cell set by switches (“network-assisted”)
  NI bit: no increase in rate (mild congestion)
  CI bit: congestion indication
 RM cells returned to sender by receiver, with bits intact
45
Case study: ATM ABR congestion control
 two-byte ER (explicit rate) field in RM cell
  congested switch may lower ER value in cell
  sender’s send rate is thus the minimum supportable rate on the path
 EFCI bit in data cells: set to 1 by a congested switch
  if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell
46
TCP Congestion Control
 end-end control (no network assistance)
 transmission rate limited by congestion window size, Congwin, over segments
 w segments, each with MSS bytes, sent in one RTT:
  throughput = w * MSS / RTT  bytes/sec
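For example (numbers chosen only for illustration): with w = 10 segments, MSS = 1460 bytes and RTT = 100 ms, throughput = 10 * 1460 / 0.1 = 146,000 bytes/sec, i.e. a little over 1 Mbit/s; doubling the window or halving the RTT doubles the throughput.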
47
TCP congestion control:
 “probing” for usable bandwidth:
  ideally: transmit as fast as possible (Congwin as large as possible) without loss
  increase Congwin until loss (congestion)
  loss: decrease Congwin, then begin probing (increasing) again
 two “phases”:
  slow start
  congestion avoidance
 important variables:
  Congwin
  threshold: defines the boundary between the slow start phase and the congestion avoidance phase
48
TCP Slowstart
Slowstart algorithm:

   initialize: Congwin = 1
   for (each segment ACKed)
     Congwin++
   until (loss event OR CongWin > threshold)

 exponential increase (per RTT) in window size (not so slow!)
 loss event: timeout (Tahoe TCP) and/or three duplicate ACKs (Reno TCP)
[figure: Host A / Host B timeline - one segment in the first RTT, two in the next, then four, ...]
49
TCP Congestion Avoidance
Congestion avoidance:

   /* slowstart is over      */
   /* Congwin > threshold    */
   Until (loss event) {
     every w segments ACKed:
       Congwin++
   }
   threshold = Congwin/2
   Congwin = 1
   perform slowstart  (1)

(1) TCP Reno skips slowstart (fast recovery) after three duplicate ACKs
[figure: Congwin over time, showing the Tahoe and Reno behaviour after a loss]
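Putting slow start and congestion avoidance together, the toy Python trace below shows how Congwin (in segments) could evolve RTT by RTT under the Tahoe rules on this slide; the initial threshold and the loss schedule are invented for illustration, and Reno's fast recovery is not modeled.

def tahoe_congwin_trace(num_rtts, threshold=8, loss_at=frozenset()):
    """Toy per-RTT trace of Congwin under Tahoe: slow start doubles the window,
    congestion avoidance adds one segment per RTT, and a loss halves the
    threshold and restarts slow start from Congwin = 1."""
    congwin, trace = 1, []
    for t in range(num_rtts):
        trace.append(congwin)
        if t in loss_at:                        # loss event (timeout / 3 dup ACKs)
            threshold = max(congwin // 2, 1)
            congwin = 1
        elif congwin < threshold:
            congwin *= 2                        # slow start: exponential per RTT
        else:
            congwin += 1                        # congestion avoidance: additive
    return trace

# tahoe_congwin_trace(16, threshold=8, loss_at={10}) produces the familiar sawtooth.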
50
AIMD
TCP congestion avoidance:
 AIMD: additive increase, multiplicative decrease
  increase window by 1 per RTT
  decrease window by factor of 2 on loss event

TCP Fairness
Fairness goal: if N TCP sessions share the same bottleneck link, each should get 1/N of the link capacity
[figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R]
51
Why is TCP fair?
Two competing sessions:
 additive increase gives a slope of 1 as throughput increases
 multiplicative decrease decreases throughput proportionally
[figure: connection 1 throughput vs. connection 2 throughput, both limited by R; the trajectory alternates between “congestion avoidance: additive increase” and “loss: decrease window by factor of 2”, converging toward the equal bandwidth share line]
52
TCP strains:
 Tahoe
 Reno
 Vegas
53
Vegas
54
From Reno to Vegas

                       Reno                      Vegas
RTT measurement        coarse                    fine
Fast Retransmit        3 dup ACKs                dup ACK + timer
CongWin decrease       for each Fast Retrans.    for the 1st Fast Retrans. in an RTT
Congestion detection   loss detection            also based on measured throughput
55
Some more examples
56
TCP latency modeling
Q: How long does it take to receive an object from a Web server after sending a request?
 TCP connection establishment
 data transfer delay

Notation, assumptions:
 assume one link between client and server, of rate R
 assume: fixed congestion window, W segments
 S: MSS (bits)
 O: object size (bits)
 no retransmissions (no loss, no corruption)

Two cases to consider:
 WS/R > RTT + S/R: ACK for first segment in window returns before window’s worth of data sent
 WS/R < RTT + S/R: wait for ACK after sending window’s worth of data
57
TCP latency modeling
Case 1: latency = 2 RTT + O/R
Case 2: latency = 2 RTT + O/R + (K-1)[S/R + RTT - WS/R],  where K := O/(WS)
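Both cases are easy to evaluate numerically. In the Python sketch below, K is computed with a ceiling, which is my reading of K := O/WS when O is not an exact multiple of W*S.

import math

def static_window_latency(O, S, R, RTT, W):
    """Latency to fetch an O-bit object over an R bit/s link with a fixed
    window of W segments of S bits each (the slide's two cases)."""
    K = math.ceil(O / (W * S))                 # number of windows covering the object
    if W * S / R > RTT + S / R:                # case 1: ACKs return before window drains
        return 2 * RTT + O / R
    # case 2: the sender stalls after each of the first K-1 windows
    return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)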
58
TCP Latency Modeling: Slow Start
 Now suppose the window grows according to slow start.
 Will show that the latency of one object of size O is:

   Latency = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

 where P is the number of times TCP stalls at the server:

   P = min{Q, K - 1}

 - where Q is the number of times the server would stall if the object were of infinite size,
 - and K is the number of windows that cover the object.
59
TCP Latency Modeling: Slow Start (cont.)
Example:
 O/S = 15 segments
 K = 4 windows
 Q = 2
 P = min{K-1, Q} = 2
 Server stalls P = 2 times.
[figure: client/server timeline - initiate TCP connection, request object, then windows of S/R, 2S/R, 4S/R and 8S/R; object delivered, complete transmission; time marked at client and at server]
60
TCP Latency Modeling: Slow Start (cont.)

   S/R + RTT = time from when server starts to send a segment
               until server receives its acknowledgement

   2^(k-1) * S/R = time to transmit the kth window

   S/R + RTT - 2^(k-1) * S/R = stall time after the kth window (taken as 0 if negative)

   latency = 2 RTT + O/R + sum over p = 1..P of stallTime_p
           = 2 RTT + O/R + sum over k = 1..P of [S/R + RTT - 2^(k-1) * S/R]
           = 2 RTT + O/R + P [RTT + S/R] - (2^P - 1) S/R

[figure: the same client/server timeline as on the previous slide - initiate TCP connection, request object, windows of S/R, 2S/R, 4S/R, 8S/R, object delivered, complete transmission]
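The closed form can be checked numerically with a short function. The expressions for K and Q below follow from their definitions on the previous slides (K windows of 1, 2, 4, ... segments; Q from when the stall time becomes non-positive); they are my derivation rather than text from the slide.

import math

def slow_start_latency(O, S, R, RTT):
    """Latency of one O-bit object under slow start (slide's closed form);
    S = MSS in bits, R = link rate in bits/s."""
    segments = math.ceil(O / S)
    K = math.ceil(math.log2(segments + 1))          # windows needed to cover the object
    Q = math.floor(math.log2(1 + RTT * R / S)) + 1  # stalls for an infinitely large object
    P = min(Q, K - 1)                               # actual number of stalls
    return 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * S / R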
61
UDP Protocol
62
UDP: User Datagram Protocol [RFC 768]
 “no frills,” “bare bones” Internet transport protocol
 “best effort” service; UDP segments may be:
  lost
  delivered out of order to app
 connectionless:
  no handshaking between UDP sender, receiver
  each UDP segment handled independently of others

Why is there a UDP?
 no connection establishment (which can add delay)
 simple: no connection state at sender, receiver
 small segment header
 no congestion control: UDP can blast away as fast as desired
63
UDP: more
 often used for streaming multimedia apps
  loss tolerant
  rate sensitive
 other UDP uses (why?):
  DNS
  SNMP
 reliable transfer over UDP: add reliability at application layer
  application-specific error recovery!
[figure: UDP segment format, 32 bits wide - source port #, dest port #, length (in bytes of UDP segment, including header), checksum, application data (message)]
64
UDP checksum
Goal: detect “errors” (e.g., flipped bits) in transmitted segment

Sender:
 treat segment contents as sequence of 16-bit integers
 checksum: addition (1’s complement sum) of segment contents
 sender puts checksum value into UDP checksum field

Receiver:
 compute checksum of received segment
 check if computed checksum equals checksum field value:
  NO - error detected
  YES - no error detected. But maybe errors nonetheless?
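The sender's computation can be sketched in a few lines of Python; the byte order and the zero-padding of an odd final byte are standard details assumed here rather than spelled out on the slide.

def ones_complement_checksum(data: bytes) -> int:
    """1's complement sum of 16-bit words, then complemented (sender side)."""
    if len(data) % 2:
        data += b"\x00"                            # pad an odd-length segment with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back into the sum
    return ~total & 0xFFFF

# Receiver check: recomputing the 1's complement *sum* over the segment including
# the checksum field yields 0xFFFF (all ones) when no error is detected.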
65
Summary
 principles behind transport layer services:
  multiplexing/demultiplexing
  reliable data transfer
  flow control
  congestion control
 instantiation and implementation in the Internet:
  UDP
  TCP
66