
The Effects of Active Queue
Management on Web Performance
SIGCOMM 2003
Long Le, Jay Aikat, Kevin Jeffay, F. Donelson Smith
January 29, 2004
Presented by Sookhyun Yang
Contents
• Introduction
• Problem Statement
• Related Work
• Experimental Methodology
• Result and Analysis
• Conclusion
Introduction
Drop policy
– Drop tail: drop packets only when the queue overflows
– Active queue management (AQM): drop packets before the queue overflows (contrasted with drop tail in the sketch below)
Active queue management (AQM)
– Keep the average queue size small in routers
– RED (Random Early Detection) algorithm
Most widely studied and implemented
Various design issues of AQM
– How to detect congestion
– How to control the queue so that its size reaches a stable operating point
– How the congestion signal is delivered to the sender
Implicitly, by dropping packets at the router
Explicitly, by signaling with explicit congestion notification (ECN)
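To make the contrast concrete, here is a minimal Python sketch (not from the paper) of the two drop decisions; the queue limit and the linear probability function are illustrative placeholders.

```python
import random

# Minimal sketch contrasting drop-tail and a generic AQM drop decision.
# QUEUE_LIMIT and the linear probability function are illustrative, not
# values or algorithms used in the paper.
QUEUE_LIMIT = 240  # packets

def droptail_drop(queue_len: int) -> bool:
    """Drop-tail: drop only when the queue is already full (overflow)."""
    return queue_len >= QUEUE_LIMIT

def aqm_drop(avg_queue_len: float) -> bool:
    """Generic AQM: drop probabilistically before the queue overflows,
    with a probability that grows with the average queue size."""
    drop_prob = min(1.0, avg_queue_len / QUEUE_LIMIT)
    return random.random() < drop_prob
```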
Problem Statement
Goal
– Compare the performance of control-theoretic AQM algorithms with the
original randomized dropping paradigm
Considered AQM schemes
– Control theoretic AQM algorithms
Proportional integrator (PI) controller
Random exponential marking (REM) controller
– Original randomized dropping paradigm
Adaptive random early detection (ARED) controller
Performance metrics
– Link utilization
– Loss rate
– Response time for each request/response transaction
Contents
• Introduction
• Problem Statement
• Related Work
• Experimental Methodology
• Platform
• Calibration
• Procedure
• Result and Analysis
• AQM Experiments with Packet Drops
• AQM Experiments with ECN
• Discussion
• Conclusion
Random Early Detection
Original RED
– Measure of congestion: weighted-average queue size (AvgQLen)
[Figure: RED drop probability vs. AvgQLen. The probability is 0 below minth, rises linearly from 0 to maxp between minth and maxth, and is 1 (drop all packets) above maxth]
Random Early Detection
Modification of the original RED
– Gentle mode
– Mark or drop probability increases linearly between maxth and 2 * maxth
[Figure: gentle-RED drop probability vs. AvgQLen. The probability rises linearly from 0 to maxp between minth and maxth, then linearly from maxp to 1 between maxth and 2 * maxth]
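A minimal Python sketch of the RED/gentle-RED drop-probability curve shown in the last two figures; the threshold and maxp values are illustrative, not the settings used in the experiments.

```python
# Sketch of the RED drop-probability curve, with optional gentle mode.
MIN_TH = 30      # minth (packets), illustrative
MAX_TH = 90      # maxth (packets), illustrative
MAX_P  = 0.10    # maxp, illustrative

def red_drop_prob(avg_qlen: float, gentle: bool = True) -> float:
    """Return RED's mark/drop probability for a weighted-average queue length."""
    if avg_qlen < MIN_TH:
        return 0.0
    if avg_qlen < MAX_TH:
        # Linear ramp from 0 to maxp between minth and maxth.
        return MAX_P * (avg_qlen - MIN_TH) / (MAX_TH - MIN_TH)
    if gentle and avg_qlen < 2 * MAX_TH:
        # Gentle mode: ramp from maxp to 1 between maxth and 2 * maxth.
        return MAX_P + (1.0 - MAX_P) * (avg_qlen - MAX_TH) / MAX_TH
    return 1.0   # original RED: drop all packets above maxth
```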
Random Early Detection
Weakness of RED
– Does not consider the number of flows sharing a bottleneck link
– Under the TCP congestion control mechanism, a single packet mark or drop
reduces the offered load only by a factor of 1 - 0.5/n
(n: number of flows sharing the bottleneck link)
Self-configuring RED
– Adjust maxp every time AvgQLen falls outside the [minth, maxth] range (see the sketch below)
AvgQLen < minth: multiplicative decrease of maxp
AvgQLen > maxth: additive/multiplicative increase of maxp
ARED
– Adaptive and gentle refinements to original RED
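A hedged sketch of the maxp adaptation described above; the thresholds, step sizes, and bounds on maxp are assumptions for illustration, not the recommended ARED parameters.

```python
# Sketch of self-configuring/adaptive RED: periodically nudge maxp so that
# AvgQLen settles between minth and maxth.  All constants are illustrative.
MIN_TH, MAX_TH = 30, 90   # minth, maxth (packets)
ALPHA = 0.01              # additive increase step for maxp
BETA = 0.9                # multiplicative decrease factor for maxp

def adapt_maxp(maxp: float, avg_qlen: float) -> float:
    """Called periodically with the current weighted-average queue length."""
    if avg_qlen > MAX_TH:
        maxp = min(0.5, maxp + ALPHA)   # queue too long: drop more aggressively
    elif avg_qlen < MIN_TH:
        maxp = max(0.01, maxp * BETA)   # queue too short: back off
    return maxp
```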
Control Theoretic AQM
Misra et al.
– Applied control theory to develop a model for TCP/AQM dynamics
– Used this model for analyzing RED
– Limitations of RED
Slow response to changes in network traffic
Use of a weighted-average queue length
PI controller (Hollot et al.)
– Regulate queue length to target value called “queue reference” (qref )
– Use instantaneous samples of the queue length at a constant sampling
frequency
– Drop probability p(kT)
p(kT) = a * (q(kT) - qref) - b * (q((k-1)T) - qref) + p((k-1)T)
(q(kT): instantaneous sample of the queue length, T = 1/sampling-frequency)
The coefficients a and b are derived from the link capacity, the maximum RTT, and the expected number of active flows
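A runnable Python sketch of this update rule; qref and the coefficients a and b are example values assumed here, whereas in the paper they are derived from the link capacity, the maximum RTT, and the expected number of active flows.

```python
# Sketch of the PI controller's periodic drop-probability update.
Q_REF = 200                    # target queue length qref (packets), illustrative
A, B = 1.822e-5, 1.816e-5      # example coefficients a, b (assumed values)

class PIController:
    def __init__(self):
        self.prev_q = 0.0      # q((k-1)T)
        self.prev_p = 0.0      # p((k-1)T)

    def update(self, q_sample: float) -> float:
        """Called once per sampling interval T with an instantaneous queue-length sample."""
        p = A * (q_sample - Q_REF) - B * (self.prev_q - Q_REF) + self.prev_p
        p = min(1.0, max(0.0, p))           # clamp to a valid probability
        self.prev_q, self.prev_p = q_sample, p
        return p                            # drop/mark probability until the next sample
```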
Control Theoretic AQM
REM scheme (Athuraliya et al.)
– Periodically updates a congestion measure called “price”
– Price p(t) reflects
the rate mismatch between the packet arrival and departure rates at the link
the queue mismatch between the actual queue length and the target value
p(t) = max( 0, p(t-1) + γ * (α * (q(t) - qref) + x(t) - c) )
(c: link capacity, q(t): queue length, qref: target queue length, x(t): packet arrival rate)
– Drop probability
prob(t) = 1 - Φ^(-p(t)), where Φ > 1 is a constant
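A Python sketch of the two REM formulas above; gamma, alpha, phi, the target queue length, and the per-interval capacity are illustrative assumptions.

```python
# Sketch of REM: update the "price" from rate and queue mismatch, then map
# the price to a mark/drop probability.  Constants are illustrative only.
GAMMA, ALPHA, PHI = 0.001, 0.1, 1.001
Q_REF = 200           # target queue length (packets)
CAPACITY = 12500      # link capacity in packets per update interval (assumed)

def update_price(prev_price: float, qlen: float, arrival_rate: float) -> float:
    """p(t) = max(0, p(t-1) + gamma*(alpha*(q(t) - qref) + x(t) - c))"""
    return max(0.0, prev_price
               + GAMMA * (ALPHA * (qlen - Q_REF) + arrival_rate - CAPACITY))

def mark_prob(price: float) -> float:
    """prob(t) = 1 - phi**(-p(t)): approaches 1 as the price grows."""
    return 1.0 - PHI ** (-price)
```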
Contents
• Introduction
• Problem Statement
• Related Work
• Experimental Methodology
• Platform
• Calibration
• Procedure
• Result and Analysis
• AQM Experiments with Packet Drops
• AQM Experiments with ECN
• Discussion
• Conclusion
Platform
Emulate one peering link carrying web traffic between sources and
destinations
[Figure: testbed network diagram. ISP 1 and ISP 2 browser/server machines sit on 100 Mbps Fast Ethernet segments, aggregated by 3Com 10/100/1000 Ethernet switches onto 1 Gbps links; the two router machines (1 GHz Pentium III, 1 GB of memory) run ALTQ extensions to FreeBSD (PI, REM, ARED) and are connected by the 100/1000 Mbps bottleneck link; network monitors attach to the 100 Mbps segments through Fast Ethernet NICs and to the 1000-SX fiber Gigabit links through gigabit Ethernet NICs]
Intel-based machines with FreeBSD 4.5
– Web request generator (browser): 14 machines
– Web response generator (server): 8 machines
– Total number of flows = 44
Monitoring Program
Program 1: monitoring router interface
– Effects of the AQM algorithms
– Log of the queue size sampled every 10 ms, along with
the number of entering packets and
the number of dropped packets (a logging sketch follows at the end of this slide)
Program 2: link-monitoring machine
– Connected to the links between the routers
Hubs on the 100Mbps segments
Fiber splitters on the Gigabit link
– Collect TCP/IP headers
Locally-modified version of the tcpdump utility
– Log of link utilization
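A hypothetical Python sketch of what Program 1 records; read_counters is a made-up stand-in for reading the router interface statistics, not the actual monitoring code.

```python
import time

def read_counters():
    """Hypothetical stand-in: return (queue_len, packets_entered, packets_dropped)."""
    return 0, 0, 0

def sample_queue(logfile="queue.log", interval=0.010, n_samples=1000):
    """Log queue size plus arrival and drop counters once every 10 ms."""
    with open(logfile, "w") as log:
        for _ in range(n_samples):
            qlen, entered, dropped = read_counters()
            log.write(f"{time.time():.3f} {qlen} {entered} {dropped}\n")
            time.sleep(interval)
```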
Emulation of End-to-End Latency
Congestion control loop is influenced by RTT
Emulate different RTTs on each TCP connection (per-flow delay)
– Locally-modified version of the dummynet component of FreeBSD
– Add a randomly chosen minimum delay to all packets of each flow (sketched below)
Minimum delay
– Sampled from a discrete uniform distribution
– Range chosen to model Internet RTTs within the continental U.S.
RTT
– Flow’s minimum delay + additional delay (caused by queues at the
routers or on the end systems)
TCP window size = 16 KB on all end systems (a widely used value)
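The paper uses a locally modified dummynet inside FreeBSD; the Python sketch below only illustrates the per-flow sampling idea, and the delay range is an assumption rather than the distribution actually used.

```python
import random

MIN_DELAY_MS, MAX_DELAY_MS = 10, 150   # assumed range for continental-U.S. RTTs

def flow_min_delay(flow_id, table: dict) -> int:
    """Pick a per-flow minimum delay once, from a discrete uniform distribution,
    and reuse it for every packet of that flow."""
    if flow_id not in table:
        table[flow_id] = random.randint(MIN_DELAY_MS, MAX_DELAY_MS)
    return table[flow_id]
```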
Web-Like Traffic Generation
Model of [13]
– Based on empirical data
– Empirical distributions describing the elements necessary to generate
synthetic HTTP workloads
Browser program and server program
– Browser program logs response time for each request/response pair
[Figure: browser request cycle. The browser "thinks" for a random time, then requests a web page; the server's service time is 0]
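A hedged sketch of the browser's think/request cycle; the Pareto think-time distribution and the fetch_page callable are placeholders standing in for the empirical distributions of [13] and the real HTTP exchange.

```python
import random
import time

def think_time() -> float:
    """Placeholder for the empirical, heavy-tailed "think time" distribution of [13]."""
    return random.paretovariate(1.5)        # seconds, illustrative only

def browse(fetch_page, n_requests: int):
    """Issue n_requests and log the response time of each request/response pair."""
    response_times = []
    for _ in range(n_requests):
        time.sleep(think_time())            # browser "thinks" for a random time
        start = time.time()
        fetch_page()                        # server-side service time is modeled as 0
        response_times.append(time.time() - start)
    return response_times
```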
Calibrations
Offered loads
– Network traffic resulting from emulating the browsing behavior of a fixed-size
population of web users
Three critical calibrations before experiments
– Only one primary bottleneck: the 100 Mbps link between the two routers
– Predictably controlled offered load on the network
– Resulting packet-arrival time series (packet counts per ms) shows
long-range dependent (LRD) behavior [14]
Calibration experiment
– Configure the network connecting the routers at 1Gbps
– Drop-tail queues having 2400 queue elements
Calibrations
[Figure: one direction of the 1 Gbps link]
Calibrations
[Figure: heavy-tailed distributions of both user "think" time and response size [13]]
Procedure
Experimental setting
– Offered loads generated by user populations
80%, 90%, 98%, or 105% of the capacity of the 100 Mbps link
– Each run lasts 120 min and covers over 10,000,000 request/response exchanges
Data are collected during a 90-min interval
– Repeated three times for each of the AQM schemes PI, REM, and ARED
Experimental focus
– End-to-end response time for each request/response pair
– Loss rate: fraction of IP datagrams dropped at the link queue
– Link utilization on the bottleneck link
– Number of request/response exchanges completed
Contents
• Introduction
• Problem Statement
• Related Work
• Experimental Methodology
• Platform
• Calibration
• Procedure
• Result and Analysis
• AQM Experiments with Packet Drops
• AQM Experiments with ECN
• Discussion
• Conclusion
AQM Experiments with Packet Drops
Two target queue lengths for PI, REM, and ARED
– Tradeoff between link utilization and queuing delay
24 packets for minimum latency
240 packets for high link utilization
Recommended in [1,6,8]
– Maximum queue size set sufficiently large to ensure that drop-tail behavior
does not occur
Baseline
– Conventional drop-tail FIFO queues
– Queue sizes for drop-tail
24 and 240 packets: for comparison with the AQM schemes
2400 packets: buffering equivalent to 100 ms at the link's transmission
speed, recently favored (from a mailing-list discussion)
Drop-Tail Performance
Queue Size for Drop-Tail
[Figure: response-time performance for different drop-tail queue sizes; annotated with drop-tail queue size = 240]
AQM Experiments with Packet Drops
Response Time at 80% Load
ARED shows some degradation relative to the results
on the un-congested link at 80% load
AQM Experiments with Packet Drops
Response Time at 90% Load
AQM Experiments with Packet Drops
Response Time at 98% Load
No AQM scheme can offset the performance degradation at 98% load
AQM Experiments with Packet Drops
Response Time at 105% Load
All schemes degrade uniformly from the 98% case
AQM Experiments with ECN
Explicitly signal congestion to end-systems with an ECN bit
Procedure for signaling congestion with ECN (sketched below)
– [Router]: mark an ECN bit in the TCP/IP header of the packet
– [Receiver]: mark the TCP header of the next outbound segment (typically
an ACK) destined for the sender of the original marked segment
– [Original sender]
React as if a single segment had been lost within the send window
Mark the next outbound segment to confirm that it reacted to the congestion
ECN has no effect on the response times of PI, REM, and ARED up to
80% offered load
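A toy Python sketch of that three-step exchange; there is no real TCP/IP here, and the dictionary field names CE, ECE, and CWR only mirror the ECN flags for illustration.

```python
def router_forward(pkt: dict, congested: bool) -> dict:
    """Router: mark the packet instead of dropping it when congestion is detected."""
    if congested:
        pkt["CE"] = True
    return pkt

def receiver_ack(pkt: dict) -> dict:
    """Receiver: echo the mark back in the next outbound segment (typically an ACK)."""
    return {"ECE": pkt.get("CE", False)}

def sender_react(ack: dict, cwnd: float):
    """Sender: react as if one segment were lost, then confirm with CWR."""
    if ack.get("ECE"):
        return max(1.0, cwnd / 2), {"CWR": True}
    return cwnd, {}
```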
AQM Experiments with ECN
Response Time at 90% Load
Both PI and REM provide response-time performance
close to that on the un-congested link
AQM Experiments with ECN
Response Time at 98% Load
Some degradation, but still far superior to drop-tail
AQM Experiments with ECN
Response Time at 105% Load
REM shows the most significant improvement in performance with ECN
ECN has very little effect on the performance of ARED
AQM Experiments with Packet Drops or with ECN
[Table: loss ratio, completed requests, and link utilization for each scheme, with and without ECN]
Summary
For 80% load
– No AQM scheme provides better response time performance than
simple drop-tail FIFO queue management
– This does not change when the AQM schemes use ECN
For 90% load or greater, without ECN
– PI performs better than drop-tail and the other AQM schemes
With ECN
– Both PI and REM provide significant response time improvement
ARED with the recommended parameter settings
– Poorest response-time performance
– Lowest link utilization
– Unchanged with ECN
Discussion
Positive impact of ECN
– Response-time performance under PI and REM with ECN improves at loads of
90% and 98%
– At 90% load, performance approximates that achieved on an un-congested network
Discussion
The performance gap between PI and REM seen with packet
dropping was closed by the addition of ECN
Difference in performance between ARED and the other
AQM schemes
– PI and REM operate in "byte mode" by default, but ARED operates in
"packet mode"
– Gentle mode in ARED
– PI and REM periodically sample the queue length when deciding
to mark packets, but ARED uses a weighted average
Contents
• Introduction
• Problem Statement
• Related Work
• Experimental Methodology
• Platform
• Calibration
• Procedure
• Result and Analysis
• AQM Experiments with Packet Drops
• AQM Experiments with ECN
• Summary
• Conclusion
Conclusion
Unlike a similar earlier negative study on the use of AQM,
this work suggests that the benefits of AQM with ECN can be realized in practice
Limitations of this paper
– Comparison between only two classes of algorithms
Control-theoretic principles
The original randomized dropping paradigm
– Studied a link carrying only web-like traffic
A more realistic mix of HTTP, other TCP traffic, and UDP traffic remains to be studied