
Active Queue Management
Rong Pan
Cisco Systems
EE384y
Spring Quarter 2008
Outline
• Queue Management
– Drops as feedback to TCP sources
– Part of a closed-loop system
• Traditional Queue Management
– Drop Tail
– Problems
• Active Queue Management
– RED
– CHOKe
– AFD
2
Queue Management: Drops/Marks
- A Feedback Mechanism To Regulate End TCP Hosts
• End hosts send TCP traffic -> queues build up at network elements
• Network elements (switches/routers) generate
drops/marks based on their queue sizes
• Drops/marks: regulation messages to end hosts
• TCP sources respond to drops/marks by cutting
down their windows, i.e., their sending rates
3
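As a rough illustration of this feedback loop, here is a toy sketch (not from the original slides; all names and parameters are illustrative) of a TCP-like source that adds one segment to its window every RTT and halves it whenever the network signals a drop or mark, the same 0.5 factor that appears in the closed-loop diagram on the next slide.

```python
# Toy AIMD source reacting to drop/mark feedback (illustrative sketch only).
def aimd_window(drop_signals, initial_window=10.0, mss=1.0):
    """Evolve a TCP-like congestion window, one entry per RTT.

    drop_signals: iterable of booleans, True when the network dropped/marked
    a packet of this flow during that RTT.
    """
    w = initial_window
    trace = []
    for dropped in drop_signals:
        if dropped:
            w = max(1.0, 0.5 * w)   # multiplicative decrease: cut the window in half
        else:
            w += mss                # additive increase: one segment per RTT
        trace.append(w)
    return trace

# Example: congestion feedback every 8th RTT produces the familiar sawtooth.
print(aimd_window([(rtt % 8 == 7) for rtt in range(24)]))
```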
TCP+Queue Management
- A closed-loop control system
[Block diagram: N TCP sources, each sending at rate W/R, feed a queue q that drains at link capacity C; the queue-management block maps q to a drop/mark probability p, which is fed back through the round-trip time delay to the sources, where it drives the 0.5 (window-halving) reduction.]
4
Drop Tail
- problems
• Lock out
• Full queue
• Bias against bursty traffic
• Global synchronization
5
Tail Drop Queue Management
Lock-Out
[Figure: a FIFO queue filled to the maximum queue length, occupied almost entirely by a few flows.]
6
Tail Drop Queue Management
Full-Queue
• Only drop packets when queue is full
– long steady-state delay
7
Bias Against Bursty Traffic
[Figure: a burst arriving at a queue that is close to the maximum queue length, so much of the burst is dropped.]
8
Tail Drop Queue Management
Global Synchronization
[Figure: the queue hitting the maximum queue length; drops land on many flows at once, so they back off together.]
9
Alternative Queue
Management Schemes
• Drop from front on full queue
• Drop at random on full queue
– Both solve the lock-out problem
– Both still have the full-queue problem
10
Active Queue Management
Goals
• Solve tail-drop problems
– no lock-out behavior
– no full queue
– no bias against bursty flows
– no global synchronization
• Provide better QoS at network nodes
– low steady-state delay
– lower packet dropping
11
Random Early Detection
(RED)
Arriving packet:
• AvgQsize ≤ Minth: admit the new packet
• Minth < AvgQsize ≤ Maxth: drop the new packet with probability p,
otherwise admit it
• AvgQsize > Maxth: drop the new packet
12
RED Dropping Curve
[Plot: drop probability (0 to 1) vs. average queue size: zero up to minth, rising linearly to maxp at maxth, then jumping to 1 beyond maxth.]
13
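A minimal sketch of the RED decision just described, assuming the linear dropping curve above: nothing is dropped below minth, everything is dropped above maxth, and in between the drop probability rises linearly from 0 to maxp. Parameter names here (min_th, max_th, max_p) are illustrative; real RED also smooths the queue size into an average and adjusts p between drops, which this sketch omits.

```python
import random

def red_drop(avg_qsize, min_th, max_th, max_p):
    """Return True if the arriving packet should be dropped (simplified RED)."""
    if avg_qsize <= min_th:
        return False                  # admit: no congestion indicated
    if avg_qsize > max_th:
        return True                   # drop: average queue too long
    # Linear region of the dropping curve: probability grows from 0 to max_p.
    p = max_p * (avg_qsize - min_th) / (max_th - min_th)
    return random.random() < p

# Example with the thresholds used later in the simulations: (100, 200) packets.
print(red_drop(avg_qsize=150, min_th=100, max_th=200, max_p=0.1))
```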
Effectiveness of RED
- Lock-Out & Global Synchronization
• Packets are randomly dropped
• Each flow has the same probability of being
discarded
14
Effectiveness of RED
- Full-Queue & Bias against bursty traffic
• Drop packets probabilistically in anticipation
of congestion
– not when queue is full
• Use qavg to decide packet dropping
probability: allow instantaneous bursts
15
What QoS does RED
Provide?
• Lower buffer delay: good interactive service
– qavg is controlled to be small
• Given responsive flows: packet dropping is
reduced
– early congestion indication allows traffic to throttle
back before congestion
• Given responsive flows: fair bandwidth allocation
16
Bad News - unresponsive end hosts
[Figure: a mix of TCP and UDP end hosts attached to the Internet cloud (connectionless, best-effort); the UDP hosts do not respond to congestion signals.]
17
Scheduling & Queue
Management
• What do routers want to do?
– isolate unresponsive flows (e.g. UDP)
– provide Quality of Service to all users
• Two ways to do it
– scheduling algorithms:
e.g. FQ, CSFQ, SFQ
– queue management algorithms with fairness
enhancement:
e.g. CHOKe, AFD, WRED
18
Active Queue Management With
Enhancement to Fairness
[Figure: a single FIFO queue.]
• Approximate fair bandwidth allocation
• Provide isolation from unresponsive flows
• Be as simple as RED
19
CHOKe
(the RED flowchart extended with a random matching test)
Arriving packet:
• AvgQsize ≤ Minth: admit the new packet
• AvgQsize > Minth: draw a packet at random from the queue
– flow id the same as the new packet's: drop both matched packets
– no match and AvgQsize ≤ Maxth: drop the new packet with probability p,
otherwise admit it
– no match and AvgQsize > Maxth: drop the new packet
20
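A compact sketch of the CHOKe admission test reconstructed above, assuming the buffer is a Python list of (flow_id, packet) pairs and a single random sample per arrival; helper names and the max_p parameter are illustrative.

```python
import random

def choke_enqueue(queue, new_flow_id, new_packet,
                  avg_qsize, min_th, max_th, max_p):
    """CHOKe admission sketch: return True if the new packet was enqueued.

    queue is a list of (flow_id, packet) tuples representing the FIFO buffer.
    """
    if avg_qsize <= min_th:
        queue.append((new_flow_id, new_packet))       # no congestion: admit
        return True

    if queue:
        # Draw one packet at random from the buffer and compare flow ids.
        victim = random.randrange(len(queue))
        if queue[victim][0] == new_flow_id:
            del queue[victim]                          # drop the matched packet...
            return False                               # ...and the new one

    if avg_qsize > max_th:
        return False                                   # beyond maxth: drop

    # Between minth and maxth: RED-style probabilistic drop.
    p = max_p * (avg_qsize - min_th) / (max_th - min_th)
    if random.random() < p:
        return False
    queue.append((new_flow_id, new_packet))
    return True
```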
Random Sampling from
Queue
[Figure: a FIFO buffer whose contents are dominated by packets of the UDP flow.]
• A randomly chosen packet is more likely to come from the unresponsive flow
• An adversary can't fool the system
21
Comparison of Flow ID
• Compare the flow id of the sampled packet with that of the incoming packet
– more accurate
– reduces the chance of dropping packets from TCP-friendly flows
22
Dropping Mechanism
• Drop packets (both the incoming packet and the matching samples)
– more arrivals -> more drops
– gives users a disincentive to send more
23
Simulation Setup
[Figure: dumbbell topology. m TCP sources S(1)..S(m) and n UDP sources S(m+1)..S(m+n) connect over 10 Mbps links to a router, which forwards over a 1 Mbps bottleneck link toward the m TCP sinks D(1)..D(m) and n UDP sinks D(m+1)..D(m+n), each reached over a 10 Mbps link.]
24
Network Setup Parameters
• 32 TCP flows, 1 UDP flow
• All TCP flows' maximum window size = 300
• All links have a propagation delay of 1 ms
• FIFO buffer size = 300 packets
• All packet sizes = 1 KByte
• RED: (minth, maxth) = (100, 200) packets
25
32 TCP, 1 UDP (one sample)
[Plot: throughput (Kbps, 0 to 1000) vs. UDP arrival rate (Kbps, 100 to 10000, log scale). Curves: UDP throughput (RED), UDP throughput (CHOKe), avg. TCP throughput (CHOKe); annotated UDP shares of 23.0%, 74.1%, and 99.6%.]
26
32 TCP, 5 UDP (5 samples)
[Plot: total throughput (Kbps, 0 to 1500) vs. total UDP arrival rate (Kbps, 100 to 10000, log scale). Curves: total UDP and total TCP throughput for CHOKe with one sample and for CHOKe with 5 samples.]
27
How Many Samples to
Take?
[Figure: the interval between minth and maxth is partitioned into subregions R1, R2, ..., Rk; the average queue length determines which region applies.]
• Draw a different number of samples for different values of Qlen_avg
– fewer samples when Qlen_avg is close to minth
– more samples when Qlen_avg is close to maxth
28
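One way to realize the self-adjusting rule above is to split the interval [minth, maxth) into k equal subregions and draw one more sample per region as the average queue length climbs; the linear mapping below is an assumption for illustration, not the exact rule from the slides.

```python
def num_choke_samples(qlen_avg, min_th, max_th, k=5):
    """Map the average queue length to the number of random samples to draw.

    Returns 0 below min_th (plain admit) and 1..k samples as qlen_avg moves
    from min_th toward max_th (regions R1 up to Rk).
    """
    if qlen_avg <= min_th:
        return 0
    if qlen_avg >= max_th:
        return k
    region_width = (max_th - min_th) / k
    region = int((qlen_avg - min_th) / region_width)   # 0-based region index
    return region + 1

# Example with the thresholds from the simulations: (100, 200) packets.
for q in (90, 110, 150, 195, 250):
    print(q, num_choke_samples(q, 100, 200))
```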
32 TCP, 5 UDP (self-adjusting)
[Plot: total throughput (Kbps, 0 to 1200) vs. total UDP arrival rate (Kbps, 100 to 10000, log scale). Curves: total UDP and total TCP throughput; annotated shares of 6.6%, 38.3%, 61.1%, 89.7%, and 99.3%.]
29
Analytical Model
[Figure: the queue modeled as a permeable tube with leakage; discards from the queue are the leakage.]
30
Fluid Analysis
• N: the total number of packets in the buffer
• L_i(t): the survival rate for flow i packets
L_i(t)·Δt − L_i(t + δt)·Δt = λ_i·δt·L_i(t)·Δt / N
−dL_i(t)/dt = λ_i·L_i(t) / N
L_i(0) = λ_i·(1 − p_i)
L_i(D) = λ_i·(1 − 2·p_i)
31
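The ODE above integrates to L_i(D) = L_i(0)·e^(−λ_i·D/N); combining that with the two boundary conditions gives a closed form for the matching-drop probability p_i. The sketch below works this out, treating λ_i, the buffer occupancy N, and the traversal time D as given inputs (in a full equilibrium model they would themselves be solved for), so it is only a numerical illustration of the slide's equations.

```python
import math

def choke_match_probability(lam_i, N, D):
    """Solve lam_i*(1-2p) = lam_i*(1-p)*exp(-lam_i*D/N) for p (single sample).

    This combines the ODE solution L_i(D) = L_i(0)*exp(-lam_i*D/N) with the
    boundary conditions L_i(0) = lam_i*(1-p) and L_i(D) = lam_i*(1-2p).
    """
    a = math.exp(-lam_i * D / N)
    return (1.0 - a) / (2.0 - a)

def survival_rate(lam_i, N, D):
    """Departure (survival) rate of flow i packets, L_i(D)."""
    p = choke_match_probability(lam_i, N, D)
    return lam_i * (1.0 - 2.0 * p)

# As lam_i grows (with N and D fixed), p_i approaches 1/2 and the flow's
# departure rate is throttled toward zero.
for lam in (0.5, 2.0, 10.0):
    print(lam, round(choke_match_probability(lam, N=100, D=50), 3),
          round(survival_rate(lam, N=100, D=50), 3))
```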
Model vs Simulation
- multiple TCPs and one UDP
[Plot: throughput (0 to 350) vs. arrival rate (0.1 to 10, log scale), comparing the fluid model with the CHOKe ns simulation; the 1/(1+e) level is marked.]
32
Fluid Model
- Multiple samples
• Multiple samples (M) are chosen
L_i(t)·Δt − L_i(t + δt)·Δt = M·λ_i·δt·L_i(t)·Δt / N
−dL_i(t)/dt = M·λ_i·L_i(t) / N
L_i(0) = λ_i·(1 − p_i)^M
L_i(D) = λ_i·(1 − p_i)^M − M·λ_i·p_i
33
Two Samples
- multiple TCPs and one UDP
[Plot: UDP throughput (Kbps, 0 to 140) vs. UDP arrival rate (Mbps, 0 to 2), NS simulation vs. fluid model.]
34
Two Samples
- multiple TCPs and two UDP
[Plot: UDP throughput (Kbps, 0 to 200) vs. UDP arrival rate (Mbps, 0 to 2), NS simulation vs. fluid model.]
35
What If We Use a
Small Amount of
State?
36
AFD: Goal
• Approximate weighted bandwidth
allocation
– Not only AQM, but also approximate WDRR
scheduling
– Provide soft queues in addition to physical
queues
• Keep the state requirement small
• Be simple to implement
37
AFD Algorithm: Details
(Basic Case: Equal Share)
Di = drop probability for class i
Mi = arrival estimate for class i (bytes over interval Ts)
Mfair = f(Qlen, Qlen_old, Qref) = fair share
[Figure: arriving packets of class i are admitted with probability 1 − Di (dropped with probability Di) into a FIFO queue of length Qlen with reference Qref.]
If Mi ≤ Mfair: no drop (Di = 0)
If Mi > Mfair: Di > 0 such that Mi·(1 − Di) = Mfair
38
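A small sketch of the basic (equal-share) rule above: a class whose measured arrivals exceed the fair share is dropped just enough that its admitted bytes come down to Mfair. Names and numbers are illustrative.

```python
def afd_drop_prob(m_i, m_fair):
    """Per-class drop probability for the equal-share case: Mi*(1 - Di) = Mfair."""
    if m_i <= m_fair:
        return 0.0                   # class is within its fair share: no drop
    return 1.0 - m_fair / m_i        # drop just enough to pass Mfair bytes

# Example: three classes measured over one interval Ts, fair share 500 bytes.
for m_i in (300, 500, 2000):
    print(m_i, round(afd_drop_prob(m_i, m_fair=500), 3))
```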
AFD Algorithm: Details
(General Case)
Di = drop probability for class i
Mi = arrival estimate for class i (bytes over interval Ts)
Mfair = f(Qlen, Qlen_old, Qref) = fair share
[Figure: arriving packets of class i are admitted with probability 1 − Di (dropped with probability Di) into a FIFO queue of length Qlen with reference Qref.]
If Mi ≤ F(Mfair, Mini, Maxi, Wi, …): no drop (Di = 0)
If Mi > Mfair: Di > 0 such that Mi·(1 − Di) = F(Mfair, Mini, Maxi, Wi, …)
39
Not Per-Flow State
[Figure: fraction of flows (histogram).]
• State requirement on the order of the number of unresponsive flows
• Elephant Traps (developed jointly by Stanford and Cisco)
40
AFD Solution: Details
• Based on 3 simple mechanisms
– estimate the per-class arrival rate
• count per-class bytes over fixed intervals (Ts)
• potentially average over multiple intervals
– estimate the deserved departure rate (so as to achieve
the proper bandwidth allocation for each class)
• observe the queue length as a measure of
congestion
– perform selective dropping (pre-enqueue) to drive the
arrival rate to the desired departure rate
41
Mixed Traffic
with Different Levels of Unresponsiveness
[Bar chart: per-flow throughput (kbps, 0 to 500) under RED, CHOKe, and AFD for flows with different levels of unresponsiveness.]
42
Drop Probabilities
(note differential dropping)
[Bar chart: per-flow drop probability (0 to 0.3) under RED, CHOKe, and AFD, showing the differential dropping.]
43
Different Number of TCP
Flows in Each Class
[Plots: per-class throughput over time (0 to 200) for four classes containing 5, 10, 15, and 20 TCP flows respectively.]
44
Different Class Throughput
Comparison
45
Queue Length
46
Mfair
47
AFD Implementation Issues
• Monitor Arrival Rate
• Determine Drop Probability
• Maximize Link Utilization
48
Arrival Monitoring
• Keep a counter for each class
– count the data arrivals (in bytes) of each
class in a 10 ms interval: arvi
• The arrival rate estimate of each class is updated every
10 ms
– mi = mi·(1 − 1/2^c) + arvi
– c determines the averaging window
49
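A sketch of this per-class arrival estimator, assuming 1/2^c is meant as a power-of-two weight (so the update could be done with shifts in hardware); the class lookup and the 10 ms timer are abstracted away, and all names are illustrative.

```python
class ArrivalEstimator:
    """Per-class byte counters folded into estimates: m = m*(1 - 2**-c) + arv."""

    def __init__(self, num_classes, c=3):
        self.c = c                            # averaging-window exponent
        self.arv = [0] * num_classes          # bytes seen in the current interval
        self.m = [0.0] * num_classes          # smoothed arrival estimates

    def on_packet(self, class_id, nbytes):
        self.arv[class_id] += nbytes          # count arrivals within the interval

    def on_interval_end(self):
        """Called every 10 ms: fold the interval counters into the estimates."""
        for i, a in enumerate(self.arv):
            self.m[i] = self.m[i] * (1.0 - 2.0 ** -self.c) + a
            self.arv[i] = 0
        return self.m

# Example: class 0 sends 10x the bytes of class 1 in every interval.
est = ArrivalEstimator(num_classes=2)
for _ in range(20):
    est.on_packet(0, 10000)
    est.on_packet(1, 1000)
    est.on_interval_end()
print(est.m)
```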
Implementing the Drop
Function
• If Mi ≤ Mfair then Di = 0
• Otherwise, rewrite the drop function as
Di·Mi = Mi − Mfair, i.e. Di = 1 − Mfair / Mi
• Suppose we have predetermined drop levels; find
the one such that Di*·Mi ≈ Mi − Mfair
50
Implementing the Drop
Function
• Drop levels are multiples of 1/32: 1/32, 1/16, 3/32, …
• Suppose mi = 100 and mfair = 62.0 => exact Di = 0.380
• The neighboring drop levels are 0.375 and 0.406; we choose the
higher value, found by binary search over [0.0, 1.0]
51
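A sketch of the quantized lookup from the example above, assuming drop levels at multiples of 1/32 and rounding up to the next level (the "higher value") so that at least Mi − Mfair bytes get dropped; the granularity and names are illustrative.

```python
def quantized_drop_level(m_i, m_fair, granularity=32):
    """Smallest drop level k/granularity with k/granularity >= 1 - m_fair/m_i.

    Binary search over the predetermined levels 0/32, 1/32, ..., 32/32.
    """
    if m_i <= m_fair:
        return 0.0
    target = 1.0 - m_fair / m_i          # exact drop probability Di
    lo, hi = 0, granularity              # search over level indices
    while lo < hi:
        mid = (lo + hi) // 2
        if mid / granularity >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo / granularity

# Worked example from the slide: mi = 100, mfair = 62.0 -> exact Di = 0.380,
# quantized up to 13/32 = 0.40625.
print(quantized_drop_level(100, 62.0))
```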
AFD - Summary
[Chart: fairness vs. simplicity, placing RED, CHOKe, AFD, and FQ relative to the ideal: FQ is the most fair, RED the simplest, and AFD approaches FQ's fairness at close to RED's simplicity.]
• Equal share is approximated in a wide variety of settings
• The state requirement is limited
52
Summary
• Traditional Queue Management
– Drop Tail, Drop Front, Drop Random
– Problems: lock-out, full queue, global
synchronization, bias against bursty traffic
• Active Queue Management
– RED: can’t handle unresponsive flows
– CHOKe: penalizes unresponsive flows
– AFD: provides approximate fairness with
limited state
53