
Enabling Fast, Dynamic Network Processing with ClickOS
Joao Martins*, Mohamed Ahmed*, Costin Raiciu§, Felipe Huici*
* NEC Europe, Heidelberg, Germany
§ University Politehnica of Bucharest
[email protected], [email protected]
The Idealized Network
[Diagram: the idealized end-to-end network, with Application and Transport layers only at the end hosts and Network, Datalink, and Physical layers at every hop in between]
A Middlebox World
[Diagram: middleboxes deployed throughout today's networks: ad insertion, WAN accelerator, BRAS, carrier-grade NAT, transcoder, IDS, session border controller, load balancer, DDoS protection, firewall, DPI, QoE monitor]
Hardware Middleboxes - Drawbacks
▐ Middleboxes are useful, but…
   • Expensive
   • Difficult to add new features; vendor lock-in
   • Difficult to manage
   • Cannot be scaled with demand
   • Cannot be shared among different tenants
   • Hard for new players to enter the market
▐ Shifting middlebox processing to a software-based, multi-tenant platform would clearly address these issues
   • But can such a platform be built on commodity hardware while still achieving high performance?
▐ ClickOS: a tiny Xen-based virtual machine that runs Click
Xen Background - Overview
[Diagram: Xen architecture. The hypervisor runs directly on the hardware and hosts a privileged dom0 plus multiple domU guests; each guest runs its apps on a guest OS with paravirtualized drivers, and dom0 provides the device interface]
Xen Background – Split Driver Model
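In the split driver model, a backend driver (netback) in the driver domain is paired with a frontend (netfront) in the guest; the two exchange requests and responses over shared-memory rings and signal each other through event channels. As a rough illustration only, the sketch below uses Xen's generic ring macros (from the public io/ring.h header) to initialize a frontend ring and queue one request; the demo_req/demo_rsp types and the setup function are hypothetical placeholders, not the real netfront/netback protocol.

```c
/* Sketch of Xen's generic shared-ring macros, frontend side.
 * The demo_* request/response types are hypothetical; real split drivers
 * such as netfront/netback define their own (e.g., the netif_* types). */
#include <stdint.h>
#include <xen/interface/io/ring.h>  /* Linux path; Xen exports it as xen/io/ring.h */

struct demo_req { uint32_t id; uint32_t gref; };  /* grant ref of a shared data page */
struct demo_rsp { uint32_t id; int32_t status; };

/* Generates demo_sring_t, demo_front_ring_t and demo_back_ring_t. */
DEFINE_RING_TYPES(demo, struct demo_req, struct demo_rsp);

void frontend_queue_one(void *shared_page, unsigned long page_size, uint32_t gref)
{
    demo_sring_t *sring = shared_page;        /* page granted to the backend */
    demo_front_ring_t front;

    SHARED_RING_INIT(sring);                  /* reset producer/consumer indices */
    FRONT_RING_INIT(&front, sring, page_size);

    /* Place a single request on the ring. */
    struct demo_req *req = RING_GET_REQUEST(&front, front.req_prod_pvt);
    req->id = 0;
    req->gref = gref;
    front.req_prod_pvt++;

    int notify;
    RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&front, notify);
    if (notify) {
        /* kick the backend over the event channel, e.g.
         * notify_remote_via_evtchn(evtchn); */
    }
}
```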
ClickOS - Contributions
[Diagram: a standard domU (apps on a full guest OS over paravirtualized drivers) next to a ClickOS domain (Click running directly on MiniOS over paravirtualized drivers)]
▐ Work consisted of
   • Build system to create ClickOS images (5 MB in size)
   • Emulating a Click control plane over MiniOS/Xen (see the sketch below)
   • Optimizations to reduce boot times (30 milliseconds)
   • Optimizations to the data plane (10 Gb/s for larger packet sizes)
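The slides do not spell out how the Click control plane is emulated, but the standard way for dom0 tooling to hand configuration to a Xen guest is the Xen store. Purely as an illustrative sketch (the clickos/... key and the example Click configuration are hypothetical, not the actual ClickOS layout), a dom0-side tool might push a Click configuration to a guest via libxenstore like this:

```c
/* Hypothetical sketch: pushing a Click configuration to a guest through
 * the Xen store with libxenstore. The clickos/... key and the example
 * configuration are illustrative, not the actual ClickOS control plane. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <xenstore.h>

static int push_click_config(int domid, const char *config)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh) {
        perror("xs_open");
        return -1;
    }

    char path[128];
    snprintf(path, sizeof(path), "/local/domain/%d/clickos/0/config", domid);

    /* The guest would watch this key and instantiate the Click router on write. */
    bool ok = xs_write(xsh, XBT_NULL, path, config, strlen(config));

    xs_close(xsh);
    return ok ? 0 : -1;
}

int main(void)
{
    /* A trivial Click configuration: read packets, count them, drop them. */
    const char *cfg = "FromDevice(eth0) -> Counter -> Discard;";
    return push_click_config(1, cfg);   /* domain ID 1 is just an example */
}
```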
Xen I/O Subsystem and Bottlenecks
pkt size (bytes)    10Gb/s line rate
      64              14.8 Mp/s
     128               8.4 Mp/s
     256               4.5 Mp/s
     512               2.3 Mp/s
    1024               1.2 Mp/s
    1500               810 Kp/s
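These line rates follow directly from the 10 Gb/s link speed once per-frame overhead is included. As a worked example (assuming the usual 20 bytes of Ethernet preamble and inter-frame gap per frame), the 64-byte figure is:

```latex
% Maximum 64-byte packet rate on a 10 Gb/s link,
% assuming 20 bytes of preamble + inter-frame gap per frame.
\[
  \text{rate} = \frac{10 \times 10^{9}\,\text{bit/s}}
                     {(64 + 20)\,\text{bytes} \times 8\,\text{bit/byte}}
       \approx 14.88\ \text{Mp/s}
\]
```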
[Diagram: the standard Xen I/O path. Packets traverse the NW driver, a Linux/OVS bridge, a vif and netback in the driver domain (e.g., dom0), then cross the Xen ring API (data) and event channel to netfront in the ClickOS domain, where Click's FromNetfront/ToNetfront elements pick them up; setup runs over the Xen bus/store. Measured throughput at points along this path is only 300 Kp/s, 350 Kp/s, and 225 Kp/s into Click, far below the line rates above]
Optimized Xen I/O
[Diagram: the optimized Xen I/O path. The Linux/OVS bridge and vif are replaced by a netmap VALE bridge, netback is left handling only control over the Xen bus/store, and netfront exchanges packets with the backend through the netmap API (data) rather than the Xen ring API, feeding Click's FromNetfront/ToNetfront elements]
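The redesigned data path is built around netmap rings rather than the standard Xen ring API. For flavour only, the sketch below shows the generic user-space netmap receive loop (the nm_open/nm_nextpkt helpers from net/netmap_user.h, attached to a host NIC or a VALE port); it is not ClickOS's netfront code, just the I/O model it adopts.

```c
/* Generic netmap receive loop (user-space helpers from net/netmap_user.h).
 * This is the I/O model the optimized data path builds on, not the actual
 * ClickOS netfront implementation. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    /* Attach to an interface in netmap mode; a VALE port such as
     * "vale0:p0" would work here as well. */
    struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
    if (d == NULL) {
        perror("nm_open");
        return 1;
    }

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
    struct nm_pkthdr h;

    for (;;) {
        poll(&pfd, 1, -1);              /* wait until packets are available */
        const unsigned char *buf;
        while ((buf = nm_nextpkt(d, &h)) != NULL) {
            /* Packet payload is in buf, length in h.len; process it here. */
            printf("got %u-byte packet\n", (unsigned)h.len);
        }
    }

    nm_close(d);
    return 0;
}
```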
Throughput – One CPU Core
[Chart: single-core throughput. Setup: a ClickOS host connected by a 10Gb/s direct cable to a rate-meter host]
Boot times
[Chart: boot times, 220 milliseconds versus 30 milliseconds for ClickOS]
Conclusions
▐ Presented ClickOS
   • Tiny (5 MB) Xen VM tailored to network processing
   • Can be booted in 30 milliseconds
   • Can run a large number of ClickOS VMs concurrently (128)
   • Can achieve 10Gb/s throughput using only a single core
▐ Future work
   • Implementation and performance evaluation of ClickOS middleboxes (e.g., firewalls, IDSes, carrier-grade NATs, software BRASes)
   • Work to adapt the Linux netfront to the netmap API
   • Service chaining