
PEREGRINE: AN ALL-LAYER-2 CONTAINER COMPUTER NETWORK

Tzi-cker Chiueh, Cheng-Chun Tu, Yu-Cheng Wang, Pai-Wei Wang, Kai-Wen Li, and Yu-Ming Huan
∗Computer Science Department, Stony Brook University
†Industrial Technology Research Institute, Taiwan


Outline

- Motivation
  - Layer 2 + Layer 3 design
  - Requirements for a cloud-scale DC
  - Problems of classic Ethernet in the cloud
- Solutions
  - Related solutions
  - Peregrine's solution
- Implementation and Evaluation
  - Software architecture
  - Performance evaluation


L2 + L3 Architecture: Problems

- Bandwidth bottleneck
- Configuration burden: routing tables in the routers, IP assignment, DHCP coordination, VLAN and STP
- Forwarding table size: commodity switches hold only 16-32K entries
- Virtual machine mobility is constrained to a physical location

Ref: Cisco data center with FabricPath and the Cisco FabricPath Switching System

Requirements for Cloud-Scale DC


- Any-to-any connectivity with a non-blocking fabric
  - Scale to more than 10,000 physical nodes
- Virtual machine mobility
  - Large Layer 2 domain
- Fast fail-over
  - Quick failure detection and recovery
- Support for multi-tenancy
  - Share resources between different customers
- Load-balancing routing
  - Efficiently use all available links

Solution: A Huge L2 Switch!

One giant Layer 2 switch: a single L2 network with non-blocking backplane bandwidth.
- Config-free, plug-and-play
- Linear cost and power scaling
- Scales to 1 million VMs

However, Ethernet does not scale!


Revisit Ethernet: Spanning Tree Topology

[Figure: example topology with hosts N1-N8 connected through switches s1-s4.]


Revisit Ethernet: Spanning Tree Topology

[Figure: the same topology after spanning tree convergence, with a root switch elected and root (R), designated (D), and blocked (B) ports marked.]


Revisit Ethernet: Broadcast and Source Learning

[Figure: a broadcast frame is flooded along the spanning tree while switches learn the source address.]

Benefit: plug-and-play


Ethernet’s Scalability Issues

- Limited forwarding table size
  - Commodity switch: 16K to 64K entries
- STP as the solution to loop prevention
  - Not all physical links are used
  - No load-sensitive dynamic routing
- Slow fail-over
  - Fail-over latency is high (> 5 seconds)
- Broadcast overhead
  - A typical Layer 2 domain consists of only hundreds of hosts


Related Works / Solution Strategies

- Scalability
  - Clos network / fat-tree to scale out
- Alternatives to STP
  - Link aggregation, e.g. LACP, Layer 2 trunking
  - Routing protocols applied to the Layer 2 network
- Limited forwarding table size
  - Packet header encapsulation or rewriting
- Load balancing
  - Randomized or traffic-engineering approaches


Design of the Peregrine


Peregrine’s Solutions

- Not all links are used
  - Disable the Spanning Tree Protocol
- L2 loop prevention
  - Redirect broadcasts and block flooded packets
- Source learning and forwarding
  - The Route Server calculates routes for all node pairs
- Limited switch forwarding table size
  - MAC-in-MAC two-stage forwarding by a Dom0 kernel module


ARP Intercept and Redirect

[Figure: ARP intercept and redirect between hosts A and B across switches sw1-sw4, with the Directory Service (DS) and Route Algorithm Server (RAS). Control flow: 1. DS-ARP, 2. DS-Reply. Data flow: 3. send data.]
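The figure's control flow amounts to replacing the ARP broadcast with a unicast directory lookup. Below is a minimal Python sketch of that idea; the class and method names (DirectoryService, HostMimModule, resolve) are illustrative assumptions, not Peregrine's actual interfaces.

class DirectoryService:
    def __init__(self):
        # IP -> (destination VM MAC, MAC of the edge switch the VM sits behind)
        self.locations = {}

    def register(self, ip, vm_mac, edge_switch_mac):
        self.locations[ip] = (vm_mac, edge_switch_mac)

    def resolve(self, ip):
        return self.locations.get(ip)          # the DS-Reply

class HostMimModule:
    """Host-side module that intercepts ARP broadcasts (step 1, DS-ARP)."""
    def __init__(self, ds):
        self.ds = ds
        self.arp_cache = {}

    def handle_arp_request(self, target_ip):
        entry = self.ds.resolve(target_ip)     # unicast query instead of flooding
        if entry is not None:
            self.arp_cache[target_ip] = entry  # step 3: data can now be sent
        return entry

ds = DirectoryService()
ds.register("10.0.0.2", "02:00:00:00:00:0b", "00:11:22:33:44:04")  # B behind sw4
print(HostMimModule(ds).handle_arp_request("10.0.0.2"))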


Peregrine’s Solutions

- Not all links are used
  - Disable the Spanning Tree Protocol
- L2 loop prevention
  - Redirect broadcasts and block flooded packets
- Source learning and forwarding
  - The Route Server calculates routes for all node pairs
  - Fast fail-over: primary and backup routes for each pair
- Limited switch forwarding table size
  - MAC-in-MAC two-stage forwarding by a Dom0 kernel module


MAC-in-MAC Encapsulation

[Figure: MAC-in-MAC two-stage forwarding from A to B. Control flow: 1. A's ARP is redirected to the DS; 2. the DS answers that B is located at sw4; 3. sw4 is returned to A. Data flow: 4. A encapsulates the frame with sw4 as the intermediate destination address (frame becomes IDA | DA | SA); 5. sw4 decapsulates and restores the original frame (DA | SA) before delivering it to B.]
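A byte-level sketch of the encapsulation step, under the assumption (taken from the frame layout in the figure) that the intermediate destination address (IDA) is simply prepended to the original destination and source MACs:

def mac(s):
    """Parse 'aa:bb:cc:dd:ee:ff' into 6 raw bytes."""
    return bytes(int(part, 16) for part in s.split(":"))

def encap(frame, ida):
    """Prepend the intermediate destination (egress switch) MAC: IDA | DA | SA | payload."""
    return ida + frame

def decap(frame):
    """Strip the IDA at the egress switch, restoring DA | SA | payload."""
    return frame[6:]

dst_vm, src_vm = mac("02:00:00:00:00:0b"), mac("02:00:00:00:00:0a")
sw4 = mac("00:11:22:33:44:04")              # hypothetical MAC of egress switch sw4
original = dst_vm + src_vm + b"payload"
wire = encap(original, sw4)                 # forwarded toward sw4 first
assert decap(wire) == original              # sw4 restores the original frame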

Fast Fail-Over


- Goal: fail-over latency < 100 ms
  - Application-agnostic (TCP retransmission timeout: 200 ms)
- Strategy: pre-compute a primary and a backup route for each VM (see the sketch below)
  - Each VM has two virtual MACs
  - When a link fails, notify the hosts whose primary routes are affected so that they switch to the corresponding backup routes
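A minimal sketch of this switch-over logic, assuming a per-host table that maps each destination VM to a primary and a backup route identified by its virtual MAC; the names (Route, RouteTable, on_link_failure) and the data layout are illustrative, not Peregrine's actual code.

from dataclasses import dataclass

@dataclass
class Route:
    vmac: str           # virtual MAC used to reach the destination VM on this route
    links: frozenset    # physical links the route traverses

class RouteTable:
    def __init__(self):
        # dest VM -> {"primary": Route, "backup": Route, "active": "primary" | "backup"}
        self.routes = {}

    def add(self, dest, primary, backup):
        self.routes[dest] = {"primary": primary, "backup": backup, "active": "primary"}

    def active_vmac(self, dest):
        entry = self.routes[dest]
        return entry[entry["active"]].vmac

    def on_link_failure(self, failed_link):
        # Switch every destination whose active route crosses the failed link
        # over to its pre-computed backup route.
        for entry in self.routes.values():
            if failed_link in entry[entry["active"]].links:
                entry["active"] = "backup"

rt = RouteTable()
rt.add("VM-B",
       Route("02:aa:00:00:00:01", frozenset({("sw1", "sw3"), ("sw3", "sw4")})),
       Route("02:aa:00:00:00:02", frozenset({("sw1", "sw2"), ("sw2", "sw4")})))
rt.on_link_failure(("sw3", "sw4"))                      # link-down notification
assert rt.active_vmac("VM-B") == "02:aa:00:00:00:02"    # now using the backup vMAC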


When a Network Link Fails


Implementation and Evaluation


Software Architecture

Review All Components


[Figure: component review along the path from A to B through switches sw1-sw7, with primary and backup routes. Questions to measure: the ARP request rate seen by the MIM module, how fast the DS can handle redirected ARP requests, how long the RAS takes to process a route request, and the overall performance of the MIM module, DS, RAS, and switches.]


MAC-in-MAC Performance

Time spent on decapsulation / encapsulation / total: 1 µs / 5 µs / 7 µs on a 2.66 GHz CPU, i.e. around 2.66K / 13.3K / 18.6K cycles.


Aggregate Throughput for Multiple VMs

[Chart: aggregate TCP throughput with and without MIM for 1, 2, and 4 VMs; y-axis ticks from 910 to 980.]

1. ARP table size < 1K
2. Measure the TCP throughput of 1, 2, and 4 VMs communicating with each other.

ARP Broadcast Rate in a Data Center


- What is the ARP traffic rate in the real world?
  - From 2,456 hosts, the CMU CS department reports 1,150 ARP/sec at peak and 89 ARP/sec on average.
  - From 3,800 hosts on a university network, there are around 1,000 ARP/sec at peak and < 100 ARP/sec on average.
- To scale to 1M nodes: 20K-30K ARP/sec on average. The current optimized DS handles 100K ARP/sec.
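As a sanity check (an assumed linear extrapolation; the slide does not show the derivation), scaling the university measurement to one million hosts gives:

(100 ARP/sec / 3,800 hosts) × 1,000,000 hosts ≈ 26,000 ARP/sec

which falls inside the quoted 20K-30K range and is well below the 100K ARP/sec the DS can sustain.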

Fail-Over Time and Its Breakdown


- Average fail-over time: 75 ms
  - Switch: 25-45 ms, sending the trap (soft unplug)
  - RS: 25 ms, receiving and processing the trap
  - DS: 2 ms, receiving the update from the RS
  - The rest is network delay and Dom0 processing time

[Chart: fail-over time breakdown (total, RS, DS) over 11 trials, on a 0-400 ms scale.]

Conclusion


- A unified Layer-2-only network for LAN and SAN
- Centralized control plane and distributed data plane
- Uses only commodity Ethernet switches
  - An army of commodity switches vs. a few high-port-density switches
  - Requirements on the switches: fast forwarding and a programmable routing table
- Centralized load-balancing routing using a real-time traffic matrix
- Fast fail-over using pre-computed primary/backup routes


Questions?

Thank you


Review All Components: Result

[Figure: measured results for each component along the A-to-B path: the DS handles 100K ARP/sec; MIM packet processing takes 7 µs; a switch reports link-down in about 35 ms; the RS takes 25 ms per request.]


Backup slides

Thank you

OpenFlow Architecture


- OpenFlow switch: a data plane that implements a set of flow rules specified in terms of the OpenFlow instruction set
- OpenFlow controller: a control plane that sets up the flow rules in the flow tables of OpenFlow switches
- OpenFlow protocol: a secure protocol for an OpenFlow controller to set up the flow tables in OpenFlow switches
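A toy model of this controller/switch split, to make the roles concrete; it only mimics the idea of controller-installed match/action rules and is not the real OpenFlow protocol or API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Match:
    in_port: int = None     # None acts as a wildcard
    dst_mac: str = None

@dataclass
class FlowRule:
    match: Match
    actions: list           # e.g. ["output:2"]
    priority: int = 0

class SwitchModel:
    def __init__(self):
        self.flow_table = []

    def install(self, rule):                    # done by the "controller"
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r.priority)

    def forward(self, in_port, dst_mac):        # data-plane lookup
        for rule in self.flow_table:
            m = rule.match
            if m.in_port in (None, in_port) and m.dst_mac in (None, dst_mac):
                return rule.actions
        return ["send_to_controller"]           # table miss

sw = SwitchModel()
sw.install(FlowRule(Match(dst_mac="02:00:00:00:00:0b"), ["output:2"], priority=10))
print(sw.forward(1, "02:00:00:00:00:0b"))       # ['output:2']
print(sw.forward(1, "ff:ff:ff:ff:ff:ff"))       # ['send_to_controller']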


[Figure: OpenFlow architecture. An OpenFlow controller talks to the switch's OpenFlow control path over the OpenFlow protocol (SSL/TCP); the data path remains in hardware.]

Conclusion and Contribution


- Uses commodity switches to build a large-scale Layer 2 network
- Provides solutions to Ethernet's scalability issues
  - Suppressing broadcast
  - Load-balancing route calculation
  - Controlling the MAC forwarding table
  - Scaling up to one million VMs via MAC-in-MAC two-stage forwarding
  - Fast fail-over
- Future work
  - High availability of the DS and RAS (master-slave model)
  - Inter

Comparisons


- Scalable and available data center fabrics
  - IEEE 802.1aq Shortest Path Bridging and IETF TRILL
  - Competitors: Cisco, Juniper, Brocade
  - Differences: commodity switches, centralized load-balancing routing, and proactive backup route deployment
- Network virtualization
  - OpenStack Quantum API; competitors: Nicira, NEC
  - Generality carries a steep performance price: every virtual network link is a tunnel
  - Differences: simpler and more efficient because Peregrine runs on L2 switches directly

Three Stage Clos Network (m,n,r)

[Figure: three-stage Clos network with r ingress switches of size n x m, m middle switches of size r x r, and r egress switches of size m x n; each ingress switch has n inputs and each egress switch has n outputs.]


Clos Network Theory


- A Clos(m, n, r) configuration has rn inputs and rn outputs
- It uses 2r switches of size n x m plus m switches of size r x r, fewer crosspoints than a single rn x rn crossbar
- Each r x r middle switch can in turn be implemented as a 3-stage Clos network
- Clos(m, n, r) is rearrangeably non-blocking iff m >= n
- Clos(m, n, r) is strictly non-blocking iff m >= 2n - 1
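A small sketch of these facts; the parameters (n = 8, m = 15, r = 8) are an assumed example, not taken from the slides.

def clos_crosspoints(m, n, r):
    # 2r ingress/egress switches of size n x m plus m middle switches of size r x r
    return 2 * r * (n * m) + m * (r * r)

def rearrangeably_nonblocking(m, n):
    return m >= n

def strictly_nonblocking(m, n):
    return m >= 2 * n - 1

m, n, r = 15, 8, 8
ports = n * r                                    # 64 inputs and 64 outputs
print(clos_crosspoints(m, n, r), ports * ports)  # 2880 crosspoints vs 4096 for a crossbar
print(rearrangeably_nonblocking(m, n))           # True  (m >= n)
print(strictly_nonblocking(m, n))                # True  (m >= 2n - 1)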


Link Aggregation

[Figure: two switches stacked/trunked so that they appear logically as a single switch with synchronized state; STP therefore sees a loop-free topology, and the aggregated links appear as a single logical link to the upper layer.]


ECMP: Equal-Cost Multipath

[Figure: Layer 3 ECMP routing with flow hashing across 4 uplinks.]

Pros: multiple links are used. Cons: hash collisions, and traffic re-converges downstream onto a single link.
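A minimal sketch of the flow-hashing idea: hash a flow's 5-tuple and use the result to pick one of the equal-cost uplinks, so every packet of a flow takes the same path. The tuple layout and hash choice are illustrative assumptions.

import hashlib

UPLINKS = ["uplink0", "uplink1", "uplink2", "uplink3"]

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port, uplinks=UPLINKS):
    """Map a flow 5-tuple to one of the equal-cost uplinks."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(uplinks)
    return uplinks[index]

# All packets of this flow hash to the same uplink; a different flow may land
# on the same uplink, which is the "hash collision" downside noted above.
print(ecmp_pick("10.0.0.1", "10.0.1.2", "tcp", 40000, 80))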


Example: Brocade Data Center

[Figure: L3 ECMP combined with link aggregation. Ref: Deploying Brocade VDX 6720 Data Center Switches with Brocade VCS in Enterprise Data Centers]

PortLand

- Scale-out: three-layer, multi-root topology
- Hierarchical: encodes location into the MAC address
- Location Discovery Protocol finds shortest paths; routes by MAC
- Fabric Manager maintains the IP-to-MAC mapping
- 60-80 ms fail-over, with centralized control and notification

VL2: Virtual Layer 2

- Three-layer Clos network
- Flat addressing: IP-in-IP, with Location Addresses (LA) and Application Addresses (AA)
- Link-state routing disseminates LAs
- VLB + flow-based ECMP
- Depends on ECMP to detect link failures
- Packet interception at the source: IP-in-IP encapsulation via the VL2 Directory Service and VLB, with IP-in-IP decapsulation at the destination

Monsoon

- Three-layer, multi-root topology
- 802.1ah MAC-in-MAC encapsulation with source routing
- Centralized routing decisions
- VLB + MAC rotation
- Depends on LSAs to detect failures
- Packet interception at the source: the Monsoon Directory Service maps IP <-> (server MAC, ToR MAC)

TRILL and SPB

TRILL (Transparent Interconnection of Lots of Links, IETF):
- IS-IS as the topology management protocol
- Shortest-path forwarding
- New TRILL header
- Transit hash to select the next hop

SPB (Shortest Path Bridging, IEEE):
- IS-IS as the topology management protocol
- Shortest-path forwarding
- 802.1ah MAC-in-MAC
- Computes 16 source-node-based trees

TRILL Packet Forwarding

[Figure: TRILL packet forwarding using link-state routing and the TRILL header. A-ID: nickname of A; C-ID: nickname of C; HopC: hop count. Ref: NIL Data Communications]

SPB Packet Forwarding

[Figure: SPB packet forwarding using link-state routing and 802.1ah MAC-in-MAC. I-SID: Backbone Service Instance Identifier; B-VID: backbone VLAN identifier. Ref: NIL Data Communications]

Re-arrangeably Non-Blocking Clos Network

1. Three-stage Clos network
2. Condition: k >= n
3. An unused input at an ingress switch can always be connected to an unused output at an egress switch
4. Existing calls may have to be rearranged

[Figure: N = 6, n = 2, k = 2. Three 2x2 ingress switches (n x k), two 3x3 middle switches ((N/n) x (N/n)), and three 2x2 egress switches (k x n).]

Features of the Peregrine Network

- Utilize all links
- Load-balancing routing algorithm
- Scale up to 1 million VMs: two-stage dual-mode forwarding
- Fast fail-over

Goal

[Figure: a primary and a backup route between S and D in a mesh network.]

- Given a mesh network and a traffic profile:
  - Load-balance the network resource utilization
    - Prevent congestion by balancing the network load, so as to support as much traffic as possible
  - Provide fast recovery from failure
    - Provide a primary and a backup route to minimize recovery time

Factors

- Hop count only
- Hop count and link residual capacity
- Hop count, link residual capacity, and link expected load
- Hop count, link residual capacity, link expected load, and the additional forwarding table entries required

How do we combine them into one number for a particular candidate route?


Route Selection: Idea

[Figure: two candidate routes from S1 to D1: one leaves link C-D free, the other shares link C-D with the S2-D2 route.]

Which route is better from S1 to D1? Link C-D is more important! Idea: use it as sparingly as possible.

Route Selection: Hop Count and Residual Capacity

[Figure: the same topology with traffic matrix S1 -> D1: 1G and S2 -> D2: 1G; one S1-D1 route leaves C-D free, the other shares it with S2-D2.]

Using hop count or residual capacity alone makes no difference between the two candidate routes!

Determine the Criticality of a Link

f_l(s, d) = the fraction of all (s, d) routes that pass through link l.

Expected load of link l at the initial state:

f_l = Σ_(s,d) f_l(s, d) × D(s, d)

where D(s, d) is the bandwidth demand between s and d.

Criticality Example

From B to C there are four possible routes.

[Figure: each link is labeled with the fraction of the four B-C routes that traverse it (0, 2/4, or 4/4).]

Case 2: calculate f_l for s = B, d = C. Case 3: s = A, d = C is computed similarly.

Expected Load

Assumption: load is equally distributed over all possible routes between S and D.

Consider a bandwidth demand of 20 for B-C. Each link's expected load is its route fraction times the demand, e.g. 2/4 × 20 = 10 on the shared links and 4/4 × 20 = 20 on the links adjacent to B and C:

f_l = Σ_(s,d) f_l(s, d) × D(s, d)

[Figure: the example graph annotated with expected loads of 10 and 20.]

Cost Metrics

The cost metric represents the expected load per unit of available capacity on the link:

cost(l) = f_l / R_l

where f_l is the expected load and R_l is the residual capacity of link l.

[Figure: the example graph annotated with per-link costs of 0.01 and 0.02.]

Idea: pick the link with minimum cost.
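A small, self-contained sketch of the expected-load and cost computation above. The toy graph, the demand matrix, and the use of all shortest paths as the candidate route set are assumptions for illustration; they are not Peregrine's exact route-selection algorithm.

from collections import defaultdict, deque

def all_shortest_paths(adj, s, d):
    """Enumerate every shortest path from s to d (BFS plus parent backtracking)."""
    dist = {s: 0}
    parents = defaultdict(list)
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
            if dist.get(v) == dist[u] + 1:
                parents[v].append(u)
    paths = []
    def walk(v, suffix):
        if v == s:
            paths.append([s] + suffix)
            return
        for p in parents[v]:
            walk(p, [v] + suffix)
    if d in dist:
        walk(d, [])
    return paths

def expected_load(adj, demand):
    """f_l = sum over (s, d) of f_l(s, d) * D(s, d), where f_l(s, d) is the
    fraction of the (s, d) candidate routes that traverse link l."""
    load = defaultdict(float)
    for (s, d), bw in demand.items():
        routes = all_shortest_paths(adj, s, d)
        for route in routes:
            for u, v in zip(route, route[1:]):
                load[frozenset((u, v))] += bw / len(routes)
    return load

def link_cost(load, residual):
    """cost(l) = f_l / R_l: expected load per unit of residual capacity."""
    return {l: load[l] / residual[l] for l in residual}

adj = {"B": ["X", "Y"], "X": ["B", "C"], "Y": ["B", "C"], "C": ["X", "Y"]}
demand = {("B", "C"): 20}                      # bandwidth demand matrix D(s, d)
residual = {frozenset(l): 1000 for l in
            [("B", "X"), ("B", "Y"), ("X", "C"), ("Y", "C")]}
load = expected_load(adj, demand)              # expected load 10 on each of the four links
print(link_cost(load, residual))               # cost 10 / 1000 = 0.01 per link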

Forwarding Table Metric

Consider commodity switches with a forwarding table of 16-32K entries.

Fwd(n) = the number of available forwarding table entries at node n
INC_FWD = the number of extra entries needed to route A-C

[Figure: an example path A-B-C annotated with the available entries at each node.]

Idea: minimize entry consumption and prevent the forwarding tables from being exhausted.

Load Balanced Routing

- Simulated network: 52 PMs with 4 NICs each, 384 links in total
- Replay 17 multi-VDC 300-second traces
- Compare:
  - Random shortest-path routing (RSPR)
  - Full Link Criticality-based Routing (FLCR)
- Metric: congestion count, the number of links whose capacity is exceeded
- FLCR induces low additional traffic