3rd Edition: Chapter 3


Transcript 3rd Edition: Chapter 3

Announcement

Homework 2 due tonight

  Will be graded and returned before Thursday's class

Midterm next Tuesday in class

  Review session next time
  Closed book
  One 8.5” by 11” sheet of paper permitted

Recitation tomorrow on project 2


Review of Previous Lecture

 Connection-oriented transport: TCP
   Overview and segment structure
    • RTT and RTO
   Reliable data transfer
    • Timeout and fast retransmit
   Flow control
    • Don’t overwhelm the receiver
   Connection management

TCP Connection Management

Three-way handshake:
  Step 1: client host sends TCP SYN segment to server
     specifies initial sequence number
     no data
  Step 2: server host receives SYN, replies with SYNACK segment
     server allocates buffers
     specifies server's initial sequence number
  Step 3: client receives SYNACK, replies with ACK segment, which may contain data

Closing a connection: (figure: client/server timeline of the close exchange, ending in the closed state)
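As a concrete (if simplified) illustration, the sketch below opens and closes a TCP connection from the client side with Python's standard socket module; connect() is what triggers the SYN / SYNACK / ACK exchange described above, and close() begins the teardown. The host name, port, and request bytes are placeholder assumptions, not part of the slides.

```python
import socket

HOST = "example.com"   # placeholder server
PORT = 80              # placeholder port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((HOST, PORT))                 # three-way handshake: SYN -> SYNACK -> ACK
    s.sendall(b"GET / HTTP/1.0\r\n\r\n")    # application data flows once the handshake completes
    reply = s.recv(4096)
# leaving the with-block calls close(), which starts the connection-teardown exchange
```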

Outline

 Principles of congestion control
 TCP congestion control

Principles of Congestion Control

Congestion:
 informally: “too many sources sending too much data too fast for the network to handle”
 different from flow control!
 manifestations:
   lost packets (buffer overflow at routers)
   long delays (queueing in router buffers)
 a top-10 problem!

Causes/costs of congestion: scenario 1

 two senders, two receivers
 one router, infinite buffers
 no retransmission

(figure: Hosts A and B each send original data at rate λin into one router with unlimited shared output-link buffers; λout is the delivered throughput)

 large delays when congested
 maximum achievable throughput

Causes/costs of congestion: scenario 2

 one router, finite buffers
 sender retransmission of lost packet

(figure: Hosts A and B send original data at rate λin, and original plus retransmitted data at rate λ'in, into a router with finite shared output-link buffers; λout is the delivered throughput)

Causes/costs of congestion: scenario 2

 always: λin = λout (goodput)
 “perfect” retransmission only when loss: λ'in > λout
 retransmission of delayed (not lost) packets makes λ'in larger (than perfect case) for same λout

(figure: three plots (a), (b), (c) of λout versus λ'in; throughput is bounded by R/2, approaching values near R/3 and R/4 in the cases with retransmissions)

“costs” of congestion:
 more work (retransmissions) for given “goodput”
 unneeded retransmissions: link carries multiple copies of a packet

Causes/costs of congestion: scenario 3

 four senders
 multihop paths
 timeout/retransmit

Q: what happens as λin (and λ'in) increase?

(figure: Hosts A and B send original data at rate λin, plus retransmitted data at rate λ'in, across multihop paths through routers with finite shared output-link buffers; λout is the delivered throughput)

Causes/costs of congestion: scenario 3

(figure: plot of λout versus λ'in for the traffic of Hosts A and B)

Another “cost” of congestion:
 when a packet is dropped, any upstream transmission capacity used for that packet was wasted!

Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
 no explicit feedback from network
 congestion inferred from end-system observed loss, delay
 approach taken by TCP

Network-assisted congestion control:
 routers provide feedback to end systems
   single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
   explicit rate sender should send at

Case study: ATM ABR congestion control

ABR: available bit rate:
 “elastic service”
 if sender’s path “underloaded”:
   sender should use available bandwidth
 if sender’s path congested:
   sender throttled to minimum guaranteed rate

RM (resource management) cells:
 sent by sender, interspersed with data cells
 bits in RM cell set by switches (“network-assisted”)

Implicit control:
 NI bit: no increase in rate (mild congestion)
 CI bit: congestion indication
 RM cells returned to sender by receiver, with bits intact

Case study: ATM ABR congestion control

 two-byte ER (explicit rate) field in RM cell
   congested switch may lower ER value in cell
   sender's send rate thus minimum supportable rate on path

 Scalability issue

Outline

 Principles of congestion control
 TCP congestion control

TCP Congestion Control

 end-end control (no network assistance)
 sender limits transmission:
   LastByteSent - LastByteAcked ≤ CongWin
 Roughly,
   rate = CongWin / RTT  Bytes/sec
 CongWin is dynamic, a function of perceived network congestion

How does sender perceive congestion?
 loss event = timeout or 3 duplicate ACKs
 TCP sender reduces rate (CongWin) after loss event

Three mechanisms:
 AIMD
 slow start
 conservative after timeout events

TCP AIMD

multiplicative decrease: cut CongWin in half after loss event

additive increase: increase CongWin by 1 MSS every RTT in the absence of loss events: probing

(figure: congestion window of a long-lived TCP connection sawtoothing over time between roughly 8, 16, and 24 Kbytes)
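To make the AIMD rule concrete, here is a minimal sketch (not actual TCP code) of how a sender could update CongWin once per RTT; the function name and the 1460-byte MSS are assumptions for illustration.

```python
MSS = 1460  # assumed maximum segment size in bytes

def aimd_update(cong_win: int, loss_event: bool) -> int:
    """One AIMD step per RTT: additive increase, multiplicative decrease."""
    if loss_event:
        # multiplicative decrease: cut CongWin in half (but not below 1 MSS)
        return max(cong_win // 2, MSS)
    # additive increase: probe for usable bandwidth by adding 1 MSS per RTT
    return cong_win + MSS
```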

TCP Slow Start

 When connection begins, CongWin = 1 MSS
   Example: MSS = 500 bytes & RTT = 200 msec
   initial rate = 20 kbps
 available bandwidth may be >> MSS/RTT
   desirable to quickly ramp up to respectable rate
 When connection begins, increase rate exponentially fast until first loss event

TCP Slow Start (more)

 When connection begins, increase rate exponentially until first loss event:
   double CongWin every RTT
   done by incrementing CongWin by 1 MSS for every ACK received
 Summary: initial rate is slow but ramps up exponentially fast

(figure: segment/ACK timeline between Host A and Host B, with the number of segments per RTT doubling)
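A small sketch of the ramp-up described above, using the MSS = 500 bytes and RTT = 200 msec example from the previous slide; it simply doubles CongWin once per RTT (the net effect of adding one MSS per ACK) and prints the resulting rate, starting from the 20 kbps initial rate.

```python
MSS = 500     # bytes, matching the example on the previous slide
RTT = 0.2     # seconds

cong_win = MSS                      # slow start begins with CongWin = 1 MSS
for rtt_round in range(5):
    rate_kbps = cong_win * 8 / RTT / 1000
    print(f"RTT {rtt_round}: CongWin = {cong_win} bytes, rate ≈ {rate_kbps:.0f} kbps")
    cong_win *= 2                   # one MSS added per ACK -> doubling per RTT
```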

Refinement

Philosophy:   After 3 dup ACKs: 

CongWin

is cut in half  window then grows linearly But after timeout event: 

CongWin

1 MSS; instead set to   window then grows exponentially to a threshold, then grows linearly • 3 dup ACKs indicates network capable of delivering some segments • timeout before 3 dup ACKs is “more alarming” Transport Layer 3-19

Refinement (more)

Q: When should the exponential increase switch to linear?
A: When CongWin gets to 1/2 of its value before timeout.

Implementation:
 Variable Threshold
 At loss event, Threshold is set to 1/2 of CongWin just before the loss event

Summary: TCP Congestion Control

 When CongWin is below Threshold, sender is in slow-start phase, window grows exponentially.

 When CongWin is above Threshold, sender is in congestion-avoidance phase, window grows linearly.

 When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to Threshold.

 When timeout occurs, Threshold is set to CongWin/2 and CongWin is set to 1 MSS.

TCP sender congestion control

Event: ACK receipt for previously unacked data
State: Slow Start (SS)
TCP sender action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to “Congestion Avoidance”
Commentary: Resulting in a doubling of CongWin every RTT

Event: ACK receipt for previously unacked data
State: Congestion Avoidance (CA)
TCP sender action: CongWin = CongWin + MSS · (MSS/CongWin)
Commentary: Additive increase, resulting in increase of CongWin by 1 MSS every RTT

Event: Loss event detected by triple duplicate ACK
State: SS or CA
TCP sender action: Threshold = CongWin/2, CongWin = Threshold, set state to “Congestion Avoidance”
Commentary: Fast recovery, implementing multiplicative decrease. CongWin will not drop below 1 MSS.

Event: Timeout
State: SS or CA
TCP sender action: Threshold = CongWin/2, CongWin = 1 MSS, set state to “Slow Start”
Commentary: Enter slow start

Event: Duplicate ACK
State: SS or CA
TCP sender action: Increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
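The table above can be read as a small event-driven state machine. The sketch below is a simplified rendering of that logic, not a real TCP implementation; the class name, the byte-based arithmetic, and the 64 KB initial Threshold are assumptions.

```python
MSS = 1460  # assumed segment size in bytes

class TcpSender:
    """Simplified congestion-control state machine following the table above."""

    def __init__(self):
        self.cong_win = MSS          # start with one segment
        self.threshold = 64 * 1024   # assumed initial Threshold
        self.state = "SS"            # "SS" = slow start, "CA" = congestion avoidance
        self.dup_acks = 0

    def on_new_ack(self):
        """ACK for previously unacked data."""
        self.dup_acks = 0
        if self.state == "SS":
            self.cong_win += MSS                         # doubles CongWin every RTT
            if self.cong_win > self.threshold:
                self.state = "CA"
        else:
            self.cong_win += MSS * MSS // self.cong_win  # ~1 MSS per RTT

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:                           # triple duplicate ACK
            self.threshold = max(self.cong_win // 2, MSS)
            self.cong_win = self.threshold               # multiplicative decrease
            self.state = "CA"

    def on_timeout(self):
        self.threshold = max(self.cong_win // 2, MSS)
        self.cong_win = MSS                              # back to one segment
        self.state = "SS"
        self.dup_acks = 0
```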

TCP throughput

 What’s the average throughput of TCP as a function of window size and RTT?
   Ignore slow start
 Let W be the window size when loss occurs.
 When window is W, throughput is W/RTT
 Just after loss, window drops to W/2, throughput to W/(2·RTT)
 Average throughput: 0.75 W/RTT
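One way to see where the 0.75·W/RTT figure comes from, assuming the window ramps linearly from W/2 back up to W between loss events, is to average the two endpoints of the sawtooth:

```latex
\text{avg throughput} \;\approx\; \frac{1}{2}\left(\frac{W/2}{RTT} + \frac{W}{RTT}\right)
\;=\; \frac{3}{4}\cdot\frac{W}{RTT} \;=\; 0.75\,\frac{W}{RTT}
```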

TCP Futures

 Example: 1500-byte segments, 100 ms RTT, want 10 Gbps throughput
 Requires window size W = 83,333 in-flight segments
 Throughput in terms of loss rate:

   throughput = 1.22 · MSS / (RTT · √L)

 this requires a loss rate of L = 2·10⁻¹⁰ … Wow
 New versions of TCP for high-speed needed!
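A quick back-of-the-envelope check of the two numbers above, under the stated assumptions (1500-byte segments, 100 ms RTT, 10 Gbps target); the variable names are mine.

```python
MSS_BITS = 1500 * 8      # segment size in bits
RTT = 0.1                # seconds
TARGET = 10e9            # 10 Gbps target throughput

# window needed to fill the pipe: throughput = W * MSS / RTT
W = TARGET * RTT / MSS_BITS
print(f"W ≈ {W:,.0f} in-flight segments")      # ≈ 83,333

# loss rate implied by throughput = 1.22 * MSS / (RTT * sqrt(L))
L = (1.22 * MSS_BITS / (RTT * TARGET)) ** 2
print(f"L ≈ {L:.1e}")                          # ≈ 2e-10
```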

TCP Fairness

Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have average rate of R/K

(figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R)

Why is TCP fair?

Two competing sessions:
 Additive increase gives slope of 1, as throughput increases
 Multiplicative decrease decreases throughput proportionally

(figure: Connection 2 throughput plotted against Connection 1 throughput, both bounded by R; the joint trajectory alternates between “congestion avoidance: additive increase” and “loss: decrease window by factor of 2”, converging toward the equal bandwidth share line)

Fairness (more)

Fairness and UDP:
 Multimedia apps often do not use TCP
   do not want rate throttled by congestion control
 Instead use UDP:
   pump audio/video at constant rate, tolerate packet loss
 Research area: TCP friendly

Fairness and parallel TCP connections:
 nothing prevents an app from opening parallel connections between 2 hosts
 Web browsers do this
 Example: link of rate R supporting 9 connections;
   new app asks for 1 TCP, gets rate R/10
   new app asks for 11 TCPs, gets R/2!

Shrew

 Very small but aggressive mammal that bites ferociously

Low-Rate Attacks

 TCP is vulnerable to low-rate DoS attacks

(figure: TCP traffic sharing the link with periodic DoS bursts, characterized by the DoS rate and the DoS inter-burst period)

TCP: a Dual Time-Scale Perspective

 Two time-scales fundamentally required:
   RTT time-scales (~10-100 ms)
     • AIMD control
   RTO time-scales (RTO = SRTT + 4·RTTVAR)
     • avoid congestion collapse
 Lower-bounding the RTO parameter:
   [AllPax99]: minRTO = 1 sec
     • to avoid spurious retransmissions
   RFC 2988 recommends minRTO = 1 sec
 Discrepancy between RTO and RTT time-scales is a key source of vulnerability to low-rate attacks
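For reference, a minimal sketch of the RTO computation the slide points to, following the RFC 2988 update rules (gains of 1/8 and 1/4 for SRTT and RTTVAR) with the 1-second lower bound; the function and variable names are mine.

```python
ALPHA, BETA = 1 / 8, 1 / 4   # RFC 2988 gains for SRTT and RTTVAR
MIN_RTO = 1.0                # seconds, the minRTO lower bound discussed above

def update_rto(srtt, rttvar, sample):
    """Update smoothed RTT, RTT variance, and RTO from one RTT sample (seconds)."""
    if srtt is None:                       # first measurement
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = max(srtt + 4 * rttvar, MIN_RTO)  # RTO = SRTT + 4*RTTVAR, floored at minRTO
    return srtt, rttvar, rto
```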

The Low-Rate Attack

(figure: victim and attacker timelines)

The Low-Rate Attack

(figure: victim and attacker timelines; after a random initial phase, a short attacker burst creates an outage in the victim's traffic)

 A short burst (~RTT) is sufficient to create an outage
 Outage – an event of correlated packet losses that forces TCP to enter the RTO mechanism

The Low-Rate Attack

(figure: victim and attacker timelines; after the burst, the victim's flows all back off for minRTO)

 The outage synchronizes all TCP flows
 All flows react simultaneously and identically
   • back off for minRTO

The Low-Rate Attack

(figure: victim and attacker timelines; the next attacker burst arrives just as the victim's flows exit their minRTO backoff)

 Once the TCP flows try to recover – hit them again
 Exploit protocol determinism

The Low-Rate Attack

(figure: victim and attacker timelines; the attacker repeats its bursts once every minRTO)

 And keep repeating…
 RTT-time-scale outages spaced at minRTO periods can deny service to TCP traffic

Low-Rate Attacks

 TCP is vulnerable to low-rate DoS attacks

(figure: TCP traffic sharing the link with periodic DoS bursts, characterized by the DoS rate and the DoS inter-burst period)

Delay modeling - homework

Q: How long does it take to receive an object from a Web server after sending a request?

Ignoring congestion, delay is influenced by:
 TCP connection establishment
 data transmission delay
 slow start

Notation, assumptions:
 Assume one link between client and server of rate R
 S: MSS (bits)
 O: object size (bits)
 no retransmissions (no loss, no corruption)

Window size:
 First assume: fixed congestion window, W segments
 Then dynamic window, modeling slow start

Fixed congestion window (1)

First case: WS/R > RTT + S/R
 ACK for first segment in window returns before window’s worth of data sent
 delay = 2·RTT + O/R

Fixed congestion window (2)

Second case:  WS/R < RTT + S/R: wait for ACK after sending window’s worth of data sent delay = 2RTT + O/R + (K-1)[S/R + RTT - WS/R] Where K=O/WS Transport Layer 3-39
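A small sketch that evaluates the two fixed-window cases above; the parameter values in the last line are arbitrary assumptions chosen only to exercise the second case.

```python
import math

def fixed_window_delay(O, S, R, RTT, W):
    """Object-transfer delay with a fixed congestion window of W segments.
    O: object size (bits), S: segment size (bits), R: link rate (bits/sec)."""
    K = math.ceil(O / (W * S))            # number of windows covering the object
    if W * S / R > RTT + S / R:
        # first case: ACKs return before the window is exhausted, so no stalls
        return 2 * RTT + O / R
    # second case: the sender stalls after each window except the last
    return 2 * RTT + O / R + (K - 1) * (S / R + RTT - W * S / R)

# assumed example: 100 kbit object, 1 kbit segments, 1 Mbps link, 100 ms RTT, W = 4
print(fixed_window_delay(O=100_000, S=1_000, R=1_000_000, RTT=0.1, W=4))   # ≈ 2.63 s
```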

TCP Delay Modeling: Slow Start (1)

Now suppose window grows according to slow start. Will show that the delay for one object is:

   Latency = 2·RTT + O/R + P·[RTT + S/R] − (2^P − 1)·S/R

where P is the number of times TCP idles at server:

   P = min{Q, K − 1}

where Q is the number of times the server would idle if the object were of infinite size,
and K is the number of windows that cover the object.

TCP Delay Modeling: Slow Start (2)

Delay components:
 • 2 RTT for connection establishment and request
 • O/R to transmit object
 • time server idles due to slow start

Server idles: P = min{K−1, Q} times

Example:
 • O/S = 15 segments
 • K = 4 windows
 • Q = 2
 • P = min{K−1, Q} = 2
 • Server idles P = 2 times

(figure: client–server timeline: initiate TCP connection, request object, then windows of S/R, 2S/R, 4S/R, and 8S/R are transmitted until the object is delivered and transmission completes; each gap at the server is one of the P idles)
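To check the latency formula against this example, the sketch below computes the delay two ways: from the closed form on the previous slide and by summing per-window idle times. The specific S, R, and RTT values are assumptions chosen so that S/R = 1 ms and RTT = 2 ms, which yields Q = 2 as in the example.

```python
import math

S, R, RTT = 1_000, 1_000_000, 0.002    # assumed: S/R = 1 ms, RTT = 2 ms
O = 15 * S                             # O/S = 15 segments, as in the example

K = math.ceil(math.log2(O / S + 1))    # windows covering the object -> K = 4

# Q: number of windows after which the server would idle, for an infinite object
Q = sum(1 for k in range(1, 64) if S / R + RTT - (2 ** (k - 1)) * S / R > 0)
P = min(Q, K - 1)                      # P = min{Q, K-1} = 2

# closed-form latency from the previous slide
latency = 2 * RTT + O / R + P * (RTT + S / R) - (2 ** P - 1) * (S / R)

# same result by summing the idle time after each of the first K-1 windows
idle = sum(max(0.0, S / R + RTT - (2 ** (k - 1)) * S / R) for k in range(1, K))
latency_check = 2 * RTT + O / R + idle

print(K, Q, P, latency, latency_check)   # 4 2 2, and both latencies ≈ 0.022 s
```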

TCP Delay Modeling (3)

 S/R + RTT = time from when server starts to send a segment until server receives its acknowledgement

 2^(k−1) · S/R = time to transmit the kth window

 [S/R + RTT − 2^(k−1) · S/R]+ = idle time after the kth window

delay = 2·RTT + O/R + Σ_{p=1}^{P} idleTime_p

      = 2·RTT + O/R + Σ_{k=1}^{P} [S/R + RTT − 2^(k−1)·S/R]

      = 2·RTT + O/R + P·[RTT + S/R] − (2^P − 1)·S/R

(figure: client–server timeline: initiate TCP connection, request object, then windows of size S/R, 2S/R, 4S/R, and 8S/R are transmitted until the object is delivered and transmission completes)

TCP Delay Modeling (4)

Recall K = number of windows that cover the object. How do we calculate K?

   K = min{k : 2^0·S + 2^1·S + … + 2^(k−1)·S ≥ O}
     = min{k : 2^0 + 2^1 + … + 2^(k−1) ≥ O/S}
     = min{k : 2^k − 1 ≥ O/S}
     = min{k : k ≥ log2(O/S + 1)}
     = ⌈log2(O/S + 1)⌉

Calculation of Q, the number of idles for an infinite-size object, is similar (see HW).
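As a quick check of the closed form, the snippet below evaluates K for the earlier example (O/S = 15 segments should give K = 4); the function name is mine.

```python
import math

def num_windows(O, S):
    """K = ceil(log2(O/S + 1)): number of slow-start windows needed to cover the object."""
    return math.ceil(math.log2(O / S + 1))

print(num_windows(O=15, S=1))   # 4, matching the O/S = 15 example
```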

Summary

 principles behind transport layer services:
   multiplexing, demultiplexing
   reliable data transfer
   flow control
   congestion control
 instantiation and implementation in the Internet
   UDP
   TCP

Next:
 leaving the network “edge” (application, transport layers)
 into the network “core”