Transcript Slide 1

THE ROYAL SOCIETY
Congestion control for Multipath TCP (MPTCP)
Damon Wischik
Costin Raiciu
Adam Greenhalgh
Mark Handley
Transcript Slide 2
Packet switching ‘pools’ circuits. Multipath ‘pools’ links: it is Packet Switching 2.0.
TCP controls how a link is shared. How should a pool be shared? What is TCP 2.0?
[Figure: two circuits vs. a link; two separate links vs. a pool of links]
Transcript Slide 3
In a BCube data center, can we use multipath to get higher throughput?
Initially, there is one flow.
Transcript Slide 4
In a BCube data center, can we use multipath to get higher throughput?
Initially, there is one flow.
A new flow starts. Its direct route collides with the first flow.
Transcript Slide 5
In a BCube data center, can we use multipath to get higher throughput?
Initially, there is one flow.
A new flow starts. Its direct route collides with the first flow.
But it also has longer routes available, which don’t collide.
Transcript Slide 6
How can a wireless device use two channels simultaneously, without hurting other users of the network?
How should it balance its traffic, when the channels have very different characteristics?
[Figure: wifi path: high loss, small RTT; 3G path: low loss, high RTT]
Transcript Slide 7
What were our design goals for MPTCP congestion control?
I will propose design goals for Multipath TCP, and illustrate them in simple scenarios.
I will show experimental results for our MPTCP algorithm in these simple scenarios.
Transcript Slide 8
What is the MPTCP protocol?
MPTCP is a replacement for TCP which lets you use multiple paths simultaneously.
The sender stripes packets across paths. The receiver puts the packets in the correct order.
[Figure: protocol stack: applications use the socket API; an MPTCP layer sits above TCP and IP, running one TCP subflow per address (addr1, addr2)]
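As an illustration of the striping and reordering just described, here is a minimal sketch in Python. It is not the real MPTCP wire format or kernel code: real MPTCP carries a data sequence number in TCP options and uses a scheduler rather than round-robin striping; the Subflow class below is invented purely for this example.

    # Hedged sketch: stripe segments across subflows at the sender,
    # restore data-sequence order at the receiver.
    from collections import deque

    class Subflow:
        """Toy subflow: just a FIFO of (data_seq, payload) segments."""
        def __init__(self, name):
            self.name = name
            self.queue = deque()
        def send(self, seg):
            self.queue.append(seg)
        def deliver(self):
            return self.queue.popleft() if self.queue else None

    def stripe(data_segments, subflows):
        """Sender: tag each segment with a data-level sequence number and
        stripe across subflows (round-robin here purely for illustration)."""
        for seq, payload in enumerate(data_segments):
            subflows[seq % len(subflows)].send((seq, payload))

    def reassemble(subflows, n_segments):
        """Receiver: collect segments from all subflows and hand them to the
        application in data-sequence order."""
        received = {}
        while len(received) < n_segments:
            for sf in subflows:
                seg = sf.deliver()
                if seg is not None:
                    received[seg[0]] = seg[1]
        return [received[i] for i in range(n_segments)]

    paths = [Subflow("wifi"), Subflow("3G")]
    data = ["seg%d" % i for i in range(6)]
    stripe(data, paths)
    assert reassemble(paths, len(data)) == data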
Transcript Slide 9
Design goal 1: Multipath TCP should be fair to regular TCP at shared bottlenecks
[Figure: a multipath TCP flow with two subflows shares a bottleneck link with a regular TCP flow]
To be fair, Multipath TCP should take as much capacity as TCP at a bottleneck link, no matter how many paths it is using.
Strawman solution: run “½ TCP” on each path.
Transcript Slide 10
Design goal 2: MPTCP should use efficient paths
[Figure: three 12Mb/s links]
Each flow has a choice of a 1-hop and a 2-hop path. How should it split its traffic?
Transcript Slide 11
Design goal 2: MPTCP should use efficient paths
If each flow split its traffic 1:1 ...
[Figure: three 12Mb/s links; each flow gets 8Mb/s]
Transcript Slide 12
Design goal 2: MPTCP should use efficient paths
If each flow split its traffic 2:1 ...
[Figure: three 12Mb/s links; each flow gets 9Mb/s]
Transcript Slide 13
Design goal 2: MPTCP should use efficient paths
If each flow split its traffic 4:1 ...
[Figure: three 12Mb/s links; each flow gets 10Mb/s]
Transcript Slide 14
Design goal 2: MPTCP should use efficient paths
If each flow split its traffic ∞:1 ...
[Figure: three 12Mb/s links; each flow gets 12Mb/s]
Transcript Slide 15
Design goal 2: MPTCP should use efficient paths
[Figure: three 12Mb/s links; each flow gets 12Mb/s]
Theoretical solution (Kelly+Voice 2005; Han, Towsley et al. 2006): MPTCP should send all its traffic on its least-congested paths.
Theorem. This will lead to the most efficient allocation possible, given a network topology and a set of available paths.
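To see where the 8, 9, 10 and 12Mb/s figures on slides 11-14 come from, here is a short Python sketch of the arithmetic, assuming the symmetric scenario shown: three 12Mb/s links and three flows, each with a 1-hop path over one link and a 2-hop path over the other two. If a flow sends d on its 1-hop path and t on its 2-hop path, every link carries d + 2t, so d + 2t = 12 and the flow's throughput is d + t.

    # Hedged sketch of the arithmetic behind slides 10-15 (assumes the
    # symmetric three-link topology described above; not taken from the talk).

    def throughput_per_flow(split, capacity=12.0):
        """Per-flow throughput when each flow splits its traffic
        split:1 between its 1-hop and its 2-hop path.
        Each link carries d + 2t = capacity, with d = split * t."""
        t = capacity / (split + 2.0)   # traffic on the 2-hop path
        d = split * t                  # traffic on the 1-hop path
        return d + t

    for split in (1, 2, 4, 1e9):       # 1e9 stands in for the infinity:1 split
        print("%g:1 split -> %.0f Mb/s per flow" % (split, throughput_per_flow(split)))
    # prints 8, 9, 10 and 12 Mb/s, matching the slides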
Transcript Slide 16
MPTCP chooses efficient paths in a BCube data center, hence it gets high throughput.
Initially, there is one flow.
A new flow starts. Its direct route collides with the first flow.
But it also has longer routes available, which don’t collide.
MPTCP shifts its traffic away from the congested link.
Transcript Slide 17
MPTCP chooses efficient paths in a BCube data center, hence it gets high throughput.
[Figure: bar chart of throughput [Mb/s], 0 to 300, comparing ½ TCP and MPTCP for a permutation traffic matrix, a sparse traffic matrix, and a local traffic matrix]
We ran packet-level simulations of BCube (125 hosts, 25 switches, 100Mb/s links) and measured average throughput, for three traffic matrices.
For two of the traffic matrices, MPTCP and ½ TCP (strawman) were equally good. For one of the traffic matrices, MPTCP got 19% higher throughput.
Transcript Slide 18
Design goal 3: MPTCP should be fair compared to TCP
[Figure: wifi path: high loss, small RTT; 3G path: low loss, high RTT]
Design Goal 2 says to send all your traffic on the least congested path, in this case 3G. But this has high RTT, hence it will give low throughput.
Goal 3a. A Multipath TCP user should get at least as much throughput as a single-path TCP would on the best of the available paths.
Goal 3b. A Multipath TCP flow should take no more capacity on any link than a single-path TCP would.
Transcript Slide 19
MPTCP gives fair throughput.
[Figure: wifi and 3G throughput [Mb/s] over time [min]; user in his office using wifi and 3G, then in the kitchen, then going downstairs]
Transcript Slide 20
MPTCP gives fair throughput.
[Figure: wifi and 3G throughput [Mb/s] over time [min]]
3G has lower loss rate. Design Goal 2 says to shift traffic onto 3G ...
But, today, TCP over 3G was only getting 0.4Mb/s, so don't take more than that ...
But, today, TCP over wifi was getting 2.2Mb/s, so the user is entitled to this much ...
MPTCP sends 0.4Mb/s over 3G, and the remaining 1.8Mb/s over wifi.
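The split on this slide can be read as simple arithmetic under Goals 2, 3a and 3b. Below is a minimal sketch in Python, assuming we are handed estimates of what a single-path TCP currently gets on each path (the 0.4Mb/s and 2.2Mb/s figures above); the allocate function is an illustration, not the actual MPTCP mechanism, which works through the window dynamics.

    # Hedged sketch: how Goals 2, 3a and 3b pin down the wifi/3G split above.
    # Inputs are estimates of what a single-path TCP would get on each path.

    def allocate(tcp_rate, congestion_order):
        """tcp_rate: path -> rate a single-path TCP would get (Mb/s).
        congestion_order: paths sorted from least to most congested.
        Goal 3a: total >= best single path.  Goal 3b: per-path <= single TCP.
        Goal 2: prefer less congested paths, as far as 3b allows."""
        target = max(tcp_rate.values())                     # Goal 3a
        alloc, remaining = {}, target
        for path in congestion_order:                       # Goal 2
            alloc[path] = min(tcp_rate[path], remaining)    # Goal 3b
            remaining -= alloc[path]
        return alloc

    print(allocate({"wifi": 2.2, "3G": 0.4}, congestion_order=["3G", "wifi"]))
    # -> 0.4Mb/s over 3G and (up to floating point) 1.8Mb/s over wifi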
Transcript Slide 21
MPTCP gives fair throughput.
[Figure: wifi and 3G throughput [Mb/s] over time [min]; bar chart, 0 to 2.5 Mb/s, comparing ½ TCP and MPTCP]
We measured throughput, for both ½ TCP (strawman) and MPTCP, in the office.
½ TCP is unfair to the user, and its throughput is 25% worse than MPTCP's.
Transcript Slide 22
Design goals
Goal 1. Be fair to TCP at bottleneck links (redundant)
Goal 2. Use efficient paths ...
Goal 3. ... as much as we can, while being fair to TCP
Goal 4. Adapt quickly when congestion changes
Goal 5. Don't oscillate
How does MPTCP achieve all this?
Transcript Slide 23
How does TCP congestion control work?
Maintain a congestion window w.
• Increase w for each ACK, by 1/w
• Decrease w for each drop, by w/2
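The AIMD rule above, written out as code. A minimal sketch in Python of the congestion-avoidance window update only (no slow start, fast retransmit or timeouts), with the window measured in packets:

    # Hedged sketch of TCP congestion avoidance (AIMD).

    class TcpWindow:
        def __init__(self, w=1.0):
            self.w = w
        def on_ack(self):
            self.w += 1.0 / self.w          # increase by 1/w per ACK
        def on_drop(self):
            self.w = max(1.0, self.w / 2)   # halve on a drop, at least 1 packet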
Transcript Slide 24
How does MPTCP congestion control work?
Maintain a congestion window wr, one window for each path, where r ∊ R ranges over the set of available paths.
• Increase wr for each ACK on path r, by min( a/wtotal , 1/wr ), where wtotal is the sum of the windows wr over all paths and a is a parameter chosen so that Goal 3 holds
• Decrease wr for each drop on path r, by wr/2
Transcript Slide 25
How does MPTCP congestion control work?
Maintain a congestion window wr, one window for each path, where r ∊ R ranges over the set of available paths.
• Increase wr for each ACK on path r, by min( a/wtotal , 1/wr )
• Decrease wr for each drop on path r, by wr/2
Design goal 3 (annotating the increase rule): at any potential bottleneck S that path r might be in, look at the best that a single-path TCP could get, and compare to what I'm getting.
Transcript Slide 26
How does MPTCP congestion control work?
Maintain a congestion window wr, one window for each path, where r ∊ R ranges over the set of available paths.
• Increase wr for each ACK on path r, by min( a/wtotal , 1/wr )
• Decrease wr for each drop on path r, by wr/2
Design goal 2 (annotating the increase rule): we want to shift traffic away from congestion. To achieve this, we increase windows in proportion to their size.
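Putting the annotated rules of slides 24-26 together, here is a minimal sketch in Python of the coupled window update. It is a simplification of the published MPTCP algorithm: the parameter a is taken as a given constant here, whereas in the real algorithm it is recomputed from the windows and RTTs of all paths so that Goal 3 holds.

    # Hedged sketch of the MPTCP window update described on these slides.
    # 'a' is assumed to have been computed elsewhere so that Goal 3 holds.

    class MptcpWindows:
        def __init__(self, paths, a=1.0):
            self.w = {r: 1.0 for r in paths}   # one congestion window per path
            self.a = a
        def on_ack(self, r):
            w_total = sum(self.w.values())
            # Increase wr by min(a/wtotal, 1/wr):
            # - a/wtotal couples the paths: ACKs arrive in proportion to wr, so
            #   windows grow in proportion to their size and traffic shifts
            #   away from congested paths (Goal 2)
            # - the 1/wr cap keeps each subflow no more aggressive than a
            #   regular TCP on that path (Goal 3b)
            self.w[r] += min(self.a / w_total, 1.0 / self.w[r])
        def on_drop(self, r):
            self.w[r] = max(1.0, self.w[r] / 2)   # halve only the window of path r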
Transcript Slide 27
MPTCP is a control plane for multipath transport.
What problem is it trying to solve?
Transcript Slide 28
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 2 TCPs @ 50Mb/s on one link, 4 TCPs @ 25Mb/s on the other]
Transcript Slide 29
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 2 TCPs @ 33Mb/s, 1 MPTCP @ 33Mb/s, 4 TCPs @ 25Mb/s]
Transcript Slide 30
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 2 TCPs @ 25Mb/s, 2 MPTCPs @ 25Mb/s, 4 TCPs @ 25Mb/s]
The total capacity, 200Mb/s, is shared out evenly between all 8 flows.
Transcript Slide 31
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 2 TCPs @ 22Mb/s, 3 MPTCPs @ 22Mb/s, 4 TCPs @ 22Mb/s]
The total capacity, 200Mb/s, is shared out evenly between all 9 flows.
It's as if they were all sharing a single 200Mb/s link. The two links can be said to form a 200Mb/s pool.
Transcript Slide 32
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 2 TCPs @ 20Mb/s, 4 MPTCPs @ 20Mb/s, 4 TCPs @ 20Mb/s]
The total capacity, 200Mb/s, is shared out evenly between all 10 flows.
It's as if they were all sharing a single 200Mb/s link. The two links can be said to form a 200Mb/s pool.
Transcript Slide 33
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 5 TCPs on one, 15 TCPs on the other, and first 0, then 10 MPTCPs using both; plot of throughput per flow [Mb/s] against time [min]]
We confirmed in experiments that MPTCP nearly manages to pool the capacity of the two access links.
Setup: two 100Mb/s access links, 10ms delay, first 20 flows, then 30.
Transcript Slide 34
At a multihomed web server, MPTCP tries to share the ‘pooled access capacity’ fairly.
[Figure: two 100Mb/s access links; 5 TCPs on one, 15 TCPs on the other, and first 0, then 10 MPTCPs using both]
MPTCP makes a collection of links behave like a single large pool of capacity: if the total capacity is C and there are n flows, each flow gets throughput C/n.
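For slides 30-32 the quoted per-flow rates are exactly this C/n rule applied to the two 100Mb/s access links. A small sketch of the arithmetic, assuming perfect pooling (which, as slide 33 notes, MPTCP only approximately achieves):

    # Hedged sketch: the C/n rule applied to the two-link web-server example.
    # Assumes perfect pooling, which the slides show holds once there are
    # enough MPTCP flows to balance the two links (slides 30-32).

    C = 200.0                      # pooled capacity of the two 100Mb/s links
    for n_mptcp in (2, 3, 4):
        n = 2 + 4 + n_mptcp        # 2 TCPs + 4 TCPs + the MPTCP flows
        print("%d flows -> %.0f Mb/s each" % (n, C / n))
    # -> 25, 22 and 20 Mb/s, matching slides 30-32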
Transcript Slide 35
Open question: can we make a data center behave like a simple ‘capacity pool’?
What topologies and path choices would make it behave like a simple capacity pool?
What is the pooled capacity?
Transcript Slide 36
Further questions
How much of the Internet can be pooled?
What are the implications for network operators?
How should we fit multipath congestion control to CompoundTCP or CubicTCP?
Is it worth using multipath for small flows?
Transcript Slide 37
Conclusion
• Multipath is Packet Switching 2.0. It lets you share capacity between links.
• MPTCP is TCP 2.0. It is a control plane to harness the flexibility of multipath. It is traffic engineering, done by end-systems, and it works in milliseconds.
• We formulated design goals and test scenarios for how multipath congestion control should behave.
• We designed, implemented and evaluated MPTCP, a TCP-like congestion control algorithm that achieves these goals.
Transcript Slide 38
Related work on multipath congestion control:
• pTCP, CMT over SCTP, and M/TCP
• ... that meets goal 1 (fairness at shared bottleneck): mTCP, ≈ R-MTP
• ... and goal 2 (choosing efficient paths): Honda et al. (2009), ≈ Tsao and Sivakumar (2009)
• ... and goal 5 (non-oscillation): Kelly and Voice (2005), Han et al. (2006)
• ... and goal 3 (fairness) and goal 4 (rapid adjustment): (none)