Bandwidth Metrics and Measurement Tools
Xin, Lu
High-Performance Computing Group
Computer Science, University of Windsor
Bandwidth Metrics
The NMWG divides bandwidth into four submetrics:
• Bandwidth Capacity
• Achievable Bandwidth
• Available Bandwidth
• Bandwidth Utilization
FOR MORE INFO...
http://www-didc.lbl.gov/NMWG
http://www-didc.lbl.gov/NMWG/docs/measurements.pdf
Other Metric Terms
• Throughput
– Throughput is the same as achievable bandwidth.
• Bulk Transfer Capacity (BTC)
– Defined by RFC 3148.
– BTC = data_sent / elapsed_time
– The throughput of a persistent TCP transfer.
Each of these metrics can be used to describe the entire path (end-to-end) as well as the individual links along it (hop-by-hop).
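To make the BTC formula concrete, the following minimal Python sketch (an illustration, not one of the tools discussed here) times a bulk transfer over a persistent TCP connection and reports data_sent / elapsed_time. The localhost sink and the 64 MiB transfer size are assumptions for the example; a real BTC test runs against a remote endpoint.

    # Minimal BTC-style measurement sketch: BTC = data_sent / elapsed_time.
    import socket
    import threading
    import time

    def sink(server_sock):
        # Accept one connection and drain everything the sender pushes.
        conn, _ = server_sock.accept()
        while conn.recv(65536):
            pass
        conn.close()

    server = socket.socket()
    server.bind(("127.0.0.1", 0))          # illustrative localhost sink
    server.listen(1)
    threading.Thread(target=sink, args=(server,), daemon=True).start()

    payload = b"x" * 65536                 # 64 KiB send buffer
    total_bytes = 64 * 1024 * 1024         # assumed 64 MiB test transfer

    client = socket.create_connection(server.getsockname())
    start = time.time()
    sent = 0
    while sent < total_bytes:
        client.sendall(payload)            # blocks until the kernel accepts the data
        sent += len(payload)
    client.close()
    elapsed = time.time() - start          # kernel buffering makes this approximate

    print(f"BTC estimate: {sent * 8 / elapsed / 1e6:.1f} Mbit/s")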
Bandwidth Capacity vs. Achievable Bandwidth
• Capacity is the maximum amount of data per time unit that the link or path has available when there is no competing traffic.
• Achievable bandwidth is the maximum amount of data per time unit that a link or path can provide to an application, given the current utilization, the protocol and operating system used, and the end hosts' performance capability and load (i.e., throughput).
Reference: [2]
Bandwidth Capacity vs. Achievable Bandwidth Cont.
• If a path consists of several links, the link with the minimum transmission rate determines the capacity of the path.
• The link with the minimum unused capacity limits the achievable bandwidth. In high-speed networks, it is often the hardware configuration or software load on the end hosts that actually limits the bandwidth delivered to the application.
Available Bandwidth vs. Bandwidth Utilization
• Available bandwidth is the maximum amount of data per time unit that a link or path can provide, given the current utilization.
• Utilization is the aggregate capacity currently being consumed on a link or path.
Available Bandwidth = Bandwidth Capacity – Bandwidth Utilization
Reference: [2]
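As a worked example of this formula, the short Python sketch below (illustrative, with made-up counter values) derives utilization from two readings of a cumulative interface byte counter, such as SNMP's ifOutOctets, and subtracts it from capacity.

    # Available Bandwidth = Bandwidth Capacity - Bandwidth Utilization.

    def available_bandwidth(capacity_bps, bytes_t0, bytes_t1, dt):
        # Utilization: bits transferred between two counter readings, per second.
        utilization_bps = (bytes_t1 - bytes_t0) * 8 / dt
        return capacity_bps - utilization_bps

    # A 1 Gbit/s link that carried 187,500,000 bytes in 5 s (300 Mbit/s utilized):
    print(available_bandwidth(1e9, 0, 187_500_000, 5.0) / 1e6, "Mbit/s")
    # -> 700.0 Mbit/s available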
BTC vs. Available Bandwidth
• Available bandwidth is the amount of usable bandwidth that does not affect cross traffic, whereas BTC is measured by sending as many packets as possible, crowding out other traffic.
• BTC simulates a “steady state” persistent flow, which takes considerable time and overhead.
FOR MORE INFO...
RFC 3148: A Framework for Defining Empirical Bulk Transfer Capacity Metrics
BTC vs. Available Bandwidth Cont.
• The BTC definition assumes an “ideal TCP implementation”; in practice no such implementation exists, so what BTC measures is a variant of achievable bandwidth.
FOR MORE INFO...
RFC 3148: A Framework for Defining Empirical Bulk Transfer Capacity Metrics
Passive vs. Active Measurement
• Active measurement means that the tool actively sends probing packets into the network.
• Passive measurement tools monitor passing traffic without interfering.
• Passive measurement is preferable because it adds no load; however, it is less dependable than active measurement, since it can only observe traffic that actually passes through the monitoring point.
Receiver-based vs. Sender-based Techniques
• Receiver-based (end-to-end) techniques usually use a one-direction TCP stream to probe the path bandwidth.
• Sender-based (echo-based) techniques force the receiver to reply to an ICMP query, UDP echo, or TCP FIN.
Sender-based Technique
Advantages:
• Flexible deployment.
• Clocks need not be synchronized at the two ends.
Disadvantages:
• ICMP and UDP echo packets are often rate-limited or filtered out by routers.
• Round-trip measurements are much more likely to be influenced by cross traffic than one-way delay measurements.
• Response packets may come back through a different path.
Receiver-based Technique
Advantages:
• More accurate than sender-based techniques.
Disadvantages:
• Difficult to deploy.
• Clocks have to be synchronized at the two ends.
Bandwidth Measurement Technology
• Packet dispersion technology
– Packet pair and packet train
– Self-Loading Periodic Streams (SLOPS)
• Variable Packet Size (VPS) technology
– VPS even/odd
– Tailgating technique
Packet Dispersion Technique
• The sender sends two same-size packets back-to-back from source to sink.
• If there is no cross traffic, the packets reach the sink dispersed by the transmission delay of the bottleneck link.
• Measuring the dispersion allows us to infer the bottleneck link's bandwidth capacity.
Note: “Bottleneck link” can refer either to the link with the smallest transmission rate or to the link with the minimum available bandwidth. Here we use it in the first sense.
Packet Dispersion Technique Cont.
Bottleneck bandwidth = packet size / Δt, where Δt is the measured dispersion between the two packets.
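A minimal Python sketch of this computation (the arrival timestamps are made-up illustrative values):

    # Packet pair: capacity = packet size / dispersion at the sink.

    def packet_pair_capacity(packet_size_bytes, t_arrival_1, t_arrival_2):
        dispersion = t_arrival_2 - t_arrival_1     # seconds between arrivals
        return packet_size_bytes * 8 / dispersion  # bits per second

    # Two 1500-byte packets arriving 120 microseconds apart:
    print(packet_pair_capacity(1500, 0.000000, 0.000120) / 1e6, "Mbit/s")
    # -> 100.0 Mbit/s bottleneck capacity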
Packet Dispersion Technique Cont.
• If the sender sends more than two packets as one observation sample, the method is called a packet train.
• Tools usually apply robust statistical filtering techniques to find valid samples, as sketched below.
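One common form of such filtering is to histogram the per-pair estimates and keep the dominant mode, since cross traffic stretches or compresses individual samples. A minimal sketch, with an arbitrary bin width and made-up sample values:

    # Pick the modal capacity estimate from many noisy dispersion samples.
    from collections import Counter

    def modal_capacity(estimates_bps, bin_width_bps=5e6):
        bins = Counter(round(e / bin_width_bps) for e in estimates_bps)
        best_bin, _ = bins.most_common(1)[0]   # most populated histogram bin
        return best_bin * bin_width_bps

    samples = [98e6, 101e6, 99e6, 43e6, 100e6, 97e6, 160e6, 102e6]
    print(modal_capacity(samples) / 1e6, "Mbit/s")   # -> 100.0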
Packet Pair vs. Packet Train
• A packet train is more likely to be interfered with by cross traffic than a packet pair.
• A packet train can measure a multichannel bottleneck link, which a packet pair cannot handle.
• A packet train can reduce the limitation of clock resolution.
• Sophisticated tools apply both methods in their implementation, e.g., Pathrate.
Packet Dispersion Technique Cont.

Tool     Active/Passive  Methodology  Protocol  Metrics                Path/Per-link
bprobe   Active          Packet pair  ICMP      Bandwidth capacity     Path
cprobe   Active          Packet pair  ICMP      Bandwidth utilization  Path
Netest   Active          Packet pair  UDP       Bandwidth capacity     Path

FOR MORE INFO...
bprobe and cprobe http://cs-people.bu.edu/carter/tools/Tools.html
Netest http://www-didc.lbl.gov/pipechar
Packet Dispersion Technique Cont.

Tool      Active/Passive  Methodology                Protocol  Metrics              Path/Per-link
Pathrate  Active          Packet pair, packet train  UDP       Bandwidth capacity   Path
Pipechar  Active          Packet train               UDP       Available bandwidth  Per-link
SProbe    Active          Packet pair                TCP       Bandwidth capacity   Path

FOR MORE INFO...
Pathrate http://www.cc.gatech.edu/fac/Constantinos.Dovrolis
Pipechar http://www-didc.lbl.gov/pipechar
SProbe http://sprobe.cs.washington.edu
Self-Loading Periodic Streams (SLOPS)
• The sender sends a series of packets to the sink at a rate larger than the bottleneck link's available bandwidth.
• Every packet gets a timestamp at the sender side.
• Comparing the differences between successive packets' timestamps and their arrival times allows the tool to infer the available bandwidth.
• A rate-adjusting adaptive algorithm converges on the available bandwidth, as sketched below.
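The sketch below illustrates the decision rule in Python: if the stream's one-way delays grow from start to end, the sending rate exceeds the available bandwidth, so the search range moves down, and vice versa. send_stream_at is a hypothetical probe function, and the threshold and rate bounds are illustrative assumptions.

    # SLOPS-style rate adaptation: binary search on the sending rate.

    def delays_increasing(one_way_delays, threshold_s=0.001):
        # Compare the mean delay of the last third of the stream vs. the first.
        n = len(one_way_delays) // 3
        return (sum(one_way_delays[-n:]) / n
                - sum(one_way_delays[:n]) / n) > threshold_s

    def slops_estimate(send_stream_at, lo_bps=1e6, hi_bps=1e9, iters=12):
        # send_stream_at(rate) is a hypothetical probe: send a periodic
        # stream at `rate` and return the packets' one-way delays.
        for _ in range(iters):
            rate = (lo_bps + hi_bps) / 2
            if delays_increasing(send_stream_at(rate)):
                hi_bps = rate      # queue built up: avail-bw is below this rate
            else:
                lo_bps = rate      # delays stayed flat: avail-bw is above it
        return (lo_bps + hi_bps) / 2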
Self-Loading Periodic Streams Cont.

Tool      Active/Passive  Methodology  Protocol  Metrics              Path/Per-link
pathload  Active          SLOPS        UDP       Available bandwidth  Path

FOR MORE INFO...
Pathload http://www.cc.gatech.edu/fac/Constantinos.Dovrolis
Variable Packet Size (VPS) Technique
• Step 1: The sender sets TTL=1, sends out the packet, and waits for the ICMP TTL-exceeded packet to come back.
• Step 2: Upon receiving the ICMP reply, estimate the RTT. Estimate the RTT multiple times for packets of various sizes; the minimum RTT for each packet size is taken as the valid sample.
• Step 3: The first link's capacity is C = 1/b, where b is the slope of the minimum-RTT vs. packet-size graph.
Setting TTL = 2, 3, ..., n and repeating steps 1-3 gives the capacity of link i as C_i = 1 / (b_i − b_{i−1}).
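The arithmetic of steps 2-3 is sketched below in Python: fit the minimum RTT against probe size for each TTL, then difference successive slopes to get per-link capacities. The RTT values are made-up illustrative numbers.

    # VPS: link i capacity C_i = 1 / (b_i - b_{i-1}), slopes in seconds per byte.

    def slope(min_rtts):
        # Least-squares slope of minimum RTT vs. packet size (s per byte).
        xs, ys = zip(*sorted(min_rtts.items()))
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    def link_capacities(slopes_per_ttl):
        caps, prev = [], 0.0
        for b in slopes_per_ttl:
            caps.append(8 / (b - prev))   # bits/s (8 bits per byte)
            prev = b
        return caps

    # Hypothetical min-RTT samples, {packet size in bytes: min RTT in s}:
    rtts_ttl1 = {100: 0.010008, 1500: 0.010120}   # first hop: 100 Mbit/s
    rtts_ttl2 = {100: 0.020088, 1500: 0.021320}   # second hop: 10 Mbit/s
    slopes = [slope(rtts_ttl1), slope(rtts_ttl2)]
    print([round(c / 1e6) for c in link_capacities(slopes)])  # -> [100, 10]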
VPS Technique Cont.
Even-odd VPS
• The VPS probing technique itself is not altered; even-odd is a mathematical “trick” to improve reliability.
• For each probing size, the set of samples is divided into even-numbered and odd-numbered samples.
• The calculation is based on even-odd sample pairs, e.g., the even samples of link i with the odd samples of link i+1.
Tailgating Technique
The tailgating technique divides into two phases:
Phase one: like VPS probing, but for the entire path instead of per link.
Phase two (the tailgating phase): the largest possible non-fragmented packet is followed by a “tailgater”, the smallest possible packet (i.e., 40 bytes). This causes the smaller packet to always queue behind the larger packet.
Reference: Kevin Lai, Mary Baker, “Measuring Link Bandwidths Using a Deterministic Model of Packet Delay”, ACM SIGCOMM 2000.
Tailgating Technique Cont.
The following conditions should be met:
• The large packet should not be queued due to cross traffic.
• The large packet should have its TTL field set to L (1...n).
• The tailgater packet should be queued directly after the large packet on link L.
• The tailgater packet should not be queued after having passed link L.
VPS Technology

Tool   Active/Passive  Methodology   Protocol   Metrics                          Path/Per-link
bing   Active          VPS           ICMP       Bandwidth capacity, loss, delay  Path
clink  Active          VPS/even-odd  UDP        Bandwidth capacity, loss         Path
Pchar  Active          VPS           UDP, ICMP  Bandwidth capacity, loss, delay  Per-link

FOR MORE INFO...
Bing http://www.cnam.fr/reseau/bing.html
Clink http://rocky.wellesley.edu/downey/clink/
Pchar http://www.employees.org/~bmah/software/pchar
VPS Technology Cont.

Tool      Active/Passive   Methodology     Protocol   Metrics                          Path/Per-link
Nettimer  Active, Passive  VPS/tailgating  TCP        Bandwidth capacity               Per-link
pathchar  Active           VPS/even-odd    UDP, ICMP  Bandwidth capacity, loss, delay  Per-link

FOR MORE INFO...
Nettimer http://mosquitonet.stanford.edu/~laik/project/nettimer
Pathchar ftp://ftp.ee.lbl.gov/pathchar/
TCP Simulation and Path Flooding
• TCP simulation operates in two modes: UDP/ICMP with low TTL, or ICMP echo/reply. It simulates TCP's use of the slow-start algorithm.
• The path flooding method injects TCP/UDP packets into the network as fast as possible within a specified time, as sketched after this list.
• To some degree, both TCP simulation and path flooding are associated with the Bulk Transfer Capacity (BTC) metric.
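A minimal Python sketch of the flooding side (the sink address and duration are illustrative assumptions; a receiver counting what actually arrives would supply the loss and throughput figures):

    # Path flooding: blast UDP packets for a fixed interval, ttcp/iperf style.
    import socket
    import time

    DEST = ("127.0.0.1", 9999)     # assumed sink address
    DURATION = 2.0                 # seconds of flooding
    PAYLOAD = b"x" * 1400          # stays under a typical Ethernet MTU

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    deadline = time.time() + DURATION
    while time.time() < deadline:
        sock.sendto(PAYLOAD, DEST)
        sent += 1
    sock.close()

    offered_bps = sent * len(PAYLOAD) * 8 / DURATION
    print(f"offered load: {offered_bps / 1e6:.1f} Mbit/s ({sent} packets)")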
TCP Simulation and Path Flooding Cont.

Tool     Active/Passive  Methodology     Protocol   Metrics                   Path/Per-link
TReno    Active          TCP simulation  UDP, ICMP  BTC                       Path
ttcp     Active          Path flooding   TCP, UDP   Achievable bandwidth      Path
iperf    Active          Path flooding   TCP, UDP   Bandwidth capacity, loss  Path
Netperf  Active          Path flooding   TCP, UDP   BTC, delay, throughput    Path
TCP Simulation and Path Flooding Cont.
FOR MORE INFO...
TReno http://www.psc.edu/networking/treno_info.html
Iperf http://dast.nlanr.net/Project/Iperf
Netperf http://www.netperf.org/netperf/NetperfPage.html
ttcp (part of many OS distributions) ftp://ftp.arl.mil/pub/ttcp/
Bandwidth Measurement Tools Cont.
References:
1. http://www.caida.org/tools/
2. Bruce Lowekamp, Brian Tierney, Les Cottrell, Richard Hughes-Jones, Thilo Kielmann and Martin Swany. “A Hierarchy of Network Measurements for Grid Applications and Services” (draft), Global Grid Forum NMWG, Feb 17, 2003.
3. Rody Schoonderwoerd. “Network Performance Measurement Tools: A Comprehensive Comparison”, Nov. 2002.