Network Service Virtualization

High Performance Storage
Service Virtualization
Scott Baker
University of Arizona
What is Virtualization?
Consider an existing client/server scenario:
[Diagram: Client → Service → Server]
A virtualizer is inserted between the client and the server to provide a better service:
[Diagram: Client → Service' → Virtualizer → Service → Server]
Why Virtualization?
• Why not create a newer/better service?
– Requires modified clients and servers
– Lengthy standardization process
– Slow to integrate into existing infrastructure
• Virtualization offers
– Unmodified clients and servers
– No standardization requirements
– Rapid integration
Types of Virtualization
• Mutation
– Change a service into something different
• Aggregation
– 2+ servers → 1 big server
• Replication
– 2+ servers → 1 more reliable server
• Fortification
– Vulnerable server → more secure server
Mutation (Gecko)
• The web uses the HTTP protocol
• Traditional programs use file system semantics
– open, close, read, write, etc.
• Inconvenient to modify the existing body of applications to use the web
• WWW8, SP&E papers
[Diagram: Linux or Windows client → NFS → Gecko → HTTP → World Wide Web]
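To make the mutation concrete, here is a minimal sketch of how a file-system-style read can be translated into an HTTP range request, which is the kind of mapping a virtualizer such as Gecko performs. It assumes libcurl; the gecko_read() name and signature are illustrative, not Gecko's actual interface.

    /* Minimal sketch: serve a read(offset, count) by issuing an HTTP range
     * request.  Assumes libcurl (build with: cc gecko_read.c -lcurl).
     * gecko_read() and its signature are hypothetical, for illustration. */
    #include <curl/curl.h>
    #include <stdio.h>
    #include <string.h>

    struct buf { char *data; size_t len, cap; };

    /* libcurl write callback: append the HTTP body into the caller's buffer */
    static size_t collect(void *p, size_t sz, size_t n, void *arg)
    {
        struct buf *b = arg;
        size_t add = sz * n;
        if (add > b->cap - b->len)
            add = b->cap - b->len;
        memcpy(b->data + b->len, p, add);
        b->len += add;
        return sz * n;                   /* tell libcurl everything was consumed */
    }

    /* Read `count` bytes at `offset` of `url`, the way an NFS READ would. */
    static long gecko_read(const char *url, long offset, size_t count, char *out)
    {
        char range[64];
        struct buf b = { out, 0, count };
        CURL *c = curl_easy_init();
        if (!c)
            return -1;
        snprintf(range, sizeof range, "%ld-%ld", offset, offset + (long)count - 1);
        curl_easy_setopt(c, CURLOPT_URL, url);
        curl_easy_setopt(c, CURLOPT_RANGE, range);            /* like READ(offset, count) */
        curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, collect);
        curl_easy_setopt(c, CURLOPT_WRITEDATA, &b);
        CURLcode rc = curl_easy_perform(c);
        curl_easy_cleanup(c);
        return rc == CURLE_OK ? (long)b.len : -1;
    }

    int main(void)
    {
        char data[1024];
        curl_global_init(CURL_GLOBAL_DEFAULT);
        long n = gecko_read("http://example.com/", 0, sizeof data, data);
        printf("read %ld bytes\n", n);
        curl_global_cleanup();
        return 0;
    }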
Aggregation (Mirage)
• Combine 2+ NFS file systems to create one big file system
• Clients are unaware multiple servers exist
• IEEE LCN paper
[Diagram: NFS client → Mirage → NFS Server #1 and NFS Server #2, all speaking NFS]
Replication (Mirage)
• Unmodified primary server
• Specialized backup server
• Asymmetric design
– Commercial primary, commodity backup
• Logging
[Diagram: Client → Mirage → Primary Server and Backup Server, with links labeled NFS, NFS+, and BP]
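In sketch form, the asymmetry looks like this: every request goes to the unmodified primary as-is, and only state-modifying requests are also appended to the log the backup replays. The helper functions below are hypothetical stand-ins for Mirage's forwarding and backup-protocol (BP) code.

    /* Sketch of the asymmetric replication path.  forward_to_primary() and
     * append_backup_log() are hypothetical placeholders, not Mirage's code. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct nfs_req { uint32_t proc; const void *body; size_t len; };

    static void forward_to_primary(const struct nfs_req *r)
    {
        printf("forward proc %u to primary\n", r->proc);   /* placeholder */
    }

    static void append_backup_log(const struct nfs_req *r)
    {
        printf("log proc %u for backup (BP)\n", r->proc);  /* placeholder */
    }

    /* NFSv2 procedures that modify server state. */
    static bool modifies_state(uint32_t proc)
    {
        switch (proc) {
        case 2:                              /* SETATTR                       */
        case 8: case 9: case 10: case 11:    /* WRITE, CREATE, REMOVE, RENAME */
        case 12: case 13: case 14: case 15:  /* LINK, SYMLINK, MKDIR, RMDIR   */
            return true;
        default:
            return false;
        }
    }

    void replicate(const struct nfs_req *r)
    {
        forward_to_primary(r);         /* primary server stays unmodified */
        if (modifies_state(r->proc))
            append_backup_log(r);      /* backup only needs the mutations */
    }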
Fortification (Mirage)
• Fortify a server against DoS attacks
• Several ideas
– Prevent faulty requests from reaching servers
– Use scheduling to ensure fairness
– Push authentication to the border of the
network
• Currently a work in progress
Mirage in the Network
• Mirage could be located in:
– Client (patch OS, or user-mode daemon)
– Server (patch app, or user-mode daemon)
– Router (unmodified clients, servers)
• Mirage is a router, not an application
– Rewrite packets on-the-fly & forward
– Benefits: stateless, low overhead
NFS Basics
• NFS uses “handles” to identify objects
– Lookup(par_handle, “name”) → chi_handle
– Read(chi_handle) → data
• Handles are opaque to clients
• Handles are 32 bytes (NFSv2) or 64 bytes (NFSv3)
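As a minimal sketch, the interface above could be declared in C along these lines, using NFSv2's fixed 32-byte handle size; the names and signatures are illustrative, not the actual XDR definitions from the NFS protocol.

    /* Sketch of the handle-based interface, using NFSv2's 32-byte handles.
     * Names and signatures are illustrative, not the protocol's XDR types. */
    #include <stddef.h>
    #include <stdint.h>

    #define NFS_FHSIZE 32                                  /* NFSv2 handle size */

    typedef struct { uint8_t data[NFS_FHSIZE]; } nfs_fh;   /* opaque to clients */

    /* LOOKUP(parent handle, name) → child handle */
    int nfs_lookup(const nfs_fh *parent, const char *name, nfs_fh *child);

    /* READ(child handle, offset, count) → data */
    int nfs_read(const nfs_fh *fh, uint32_t offset, void *buf, size_t count);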
Aggregation Issues
• Make 2+ servers look like 1 big server
• Key problem
– 2 servers may generate the same handle for different objects
– Client will be confused
• Solution
– Virtual handles
Virtual and Physical Handles
• Virtual handles exist between clients and Mirage
• Physical handles exist between Mirage and servers
[Diagram: Client → Mirage carries virtual handles; Mirage → Server #1 / Server #2 carries physical handles]
VFH Contents
• Mirage decides what to put in VFH
• VFH is composed of
– PIN (Physical Inode Number)
– PFS (Physical File System Number)
– SID (Server ID)
– VIN (Virtual Inode Number)
– HVC (Handle Verification Checksum)
– MCH (Mount Checksum)
• (PIN, PFS, SID) uniquely identifies a file
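One possible packing of these fields into NFSv2's 32-byte handle is sketched below; the field widths, ordering, and padding are assumptions for illustration, since the slide lists the fields but not their sizes.

    /* Sketch of a virtual file handle (VFH) carrying the fields above.
     * Field widths and ordering are assumptions, not Mirage's actual layout. */
    #include <stdint.h>

    struct vfh {
        uint32_t pin;     /* PIN: physical inode number                */
        uint32_t pfs;     /* PFS: physical file system number          */
        uint32_t sid;     /* SID: which back-end server owns the file  */
        uint32_t vin;     /* VIN: virtual inode number                 */
        uint32_t hvc;     /* HVC: handle verification checksum         */
        uint32_t mch;     /* MCH: mount checksum                       */
        uint8_t  pad[8];  /* pad out to the 32-byte NFSv2 handle size  */
    };

    /* (PIN, PFS, SID) uniquely identifies a file across all servers. */
    _Static_assert(sizeof(struct vfh) == 32, "must fit an NFSv2 handle");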
Data Structures
• Transaction Table (TT)
– Entry created during request
– Entry deleted during reply
– Remembers NFS Proc Number, Client ID
• Handle Table (HT)
– VFH ↔ PFH mappings
• Tables are Soft State
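A small sketch of what the two tables' entries might hold; the field types, and the RPC transaction id (xid) used to match a reply to its request, are assumptions for illustration.

    /* Sketch of the two soft-state tables.  Both can be rebuilt after a
     * crash, so losing them is harmless.  Field types are assumptions. */
    #include <stdint.h>

    struct vfh { uint8_t data[32]; };   /* handle the client sees             */
    struct pfh { uint8_t data[32]; };   /* handle issued by a back-end server */

    /* Transaction Table (TT) entry: created when a request is forwarded,
     * deleted when the matching reply arrives. */
    struct tt_entry {
        uint32_t xid;          /* RPC transaction id, matches reply to request */
        uint32_t nfs_proc;     /* NFS procedure number of the request          */
        uint32_t client_id;    /* which client the reply goes back to          */
    };

    /* Handle Table (HT) entry: one VFH <-> PFH mapping. */
    struct ht_entry {
        struct vfh vfh;        /* virtual handle                    */
        struct pfh pfh;        /* physical handle                   */
        uint32_t   sid;        /* server that owns the physical one */
    };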
Request / Reply Processing
• On requests:
– Look up VFH in HT → yields PFH
– Rewrite VFH in request with PFH
– Forward to server (SID tells which one)
• On replies:
– Look up PFH in HT → yields VFH (create a new mapping if necessary)
– Rewrite PFH in reply with VFH
– Forward to client
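The request half of this rewrite path fits in a few lines. The sketch below uses a toy linear-scan Handle Table and a print statement in place of real packet forwarding; the names and layout are assumptions for illustration, not Mirage's code.

    /* Sketch of the request path: swap the client's virtual handle for the
     * physical one and forward to the owning server.  The reply path is the
     * mirror image (PFH → VFH).  Everything here is illustrative. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct handle { uint8_t data[32]; };
    struct ht_entry { struct handle vfh, pfh; uint32_t sid; };

    static struct ht_entry ht[1024];     /* Handle Table (soft state) */
    static size_t ht_used;

    static bool ht_lookup_vfh(const struct handle *vfh, struct handle *pfh,
                              uint32_t *sid)
    {
        for (size_t i = 0; i < ht_used; i++)
            if (memcmp(&ht[i].vfh, vfh, sizeof *vfh) == 0) {
                *pfh = ht[i].pfh;
                *sid = ht[i].sid;
                return true;
            }
        return false;
    }

    struct nfs_packet { struct handle fh; /* ... other RPC/NFS fields ... */ };

    static void send_to_server(uint32_t sid, const struct nfs_packet *pkt)
    {
        (void)pkt;
        printf("forward to server %u\n", sid);   /* stand-in for forwarding */
    }

    void handle_request(struct nfs_packet *pkt)
    {
        struct handle pfh;
        uint32_t sid;
        if (!ht_lookup_vfh(&pkt->fh, &pfh, &sid))
            return;                /* unknown handle: drop or trigger recovery */
        pkt->fh = pfh;             /* rewrite VFH → PFH in place               */
        send_to_server(sid, pkt);  /* SID tells which back-end server          */
    }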
Router Failure / Recovery
• If router fails, TT and HT are lost
• Clients will retry any ops in progress
– TT state regenerated automatically
• Recover HT state from fields in VFH
– Extract (PIN, PFS, SID)
– Search servers for (PIN, PFS, SID) to get the PFH
– Similar to BASE
• Periodically checkpoint HT to servers
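Because the VFH itself carries (PIN, PFS, SID), rebuilding a lost mapping amounts to decoding those fields and asking the identified server for a fresh physical handle. In the sketch below, server_find_by_inode() is a hypothetical helper standing in for that per-server search.

    /* Sketch of HT recovery after a crash: decode the (PIN, PFS, SID) that
     * the client's VFH already carries and ask that server for the PFH.
     * server_find_by_inode() is hypothetical; a real search would walk the
     * server's file system (or consult a checkpoint). */
    #include <stdbool.h>
    #include <stdint.h>

    struct vfh { uint32_t pin, pfs, sid, vin, hvc, mch; uint8_t pad[8]; };
    struct pfh { uint8_t data[32]; };

    bool server_find_by_inode(uint32_t sid, uint32_t pfs, uint32_t pin,
                              struct pfh *out);   /* hypothetical helper */

    bool recover_mapping(const struct vfh *v, struct pfh *out)
    {
        /* (PIN, PFS, SID) uniquely identifies the file, so the lost
         * VFH → PFH mapping can be rebuilt from the handle alone. */
        return server_find_by_inode(v->sid, v->pfs, v->pin, out);
    }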
Prototypes
• User Mode Process
– Linux operating system / commodity HW
– Proof of concept
– Demonstrates aggregation & replication
– UDP sockets
• IXP2400 Network Processor
– High performance
– Possible production system
– Subject of ongoing/future work
IXP2400 Overview
• 1 StrongArm CPU (general purpose, Linux OS)
• 8 Microengine CPUs (packet processing)
[Diagram: IXP2400 block diagram showing the StrongArm CPU and microengines me0 through me7 connected over a bus to ScratchPad, Hash unit, SRAM memory, DRAM memory, CAP, and two gigabit Ethernet interfaces]
Microengine CPU Properties
• Lots of registers
– 256 general-purpose (GPR), 128 next-neighbor (NN), 512 memory-I/O transfer registers
• Special packet-processing instruction set
• Multithreading support
– 8 threads per microengine
– Zero context-switch overhead
• Asynchronous memory I/O
• Fast-path processing
Memory
• DRAM: 64 MB / 300 cycles
– Direct I/O to and from the network interface
• SRAM: 8 MB / 150 cycles
– Supports atomic operations
– Built-in “queues” with atomic dequeue, get, put
• Scratchpad: 16 KB / 60 cycles
– Supports atomic operations
– Built-in “rings” with atomic get/put
• Local per-microengine: 2560 B / 3 cycles
IXP Issues
• Divide Mirage functionality across Microengines
• Control interface between StrongArm and Microengines
• Optimize Microengine code
[Diagram: Packets In → Receiver → Classifier → NfsReq / NfsRep microengines → Transmitter → Packets Out, with the StrongArm CPU alongside]
Benchmark Configuration
• Two IXP boards: Benchmark and Mirage
• Attempt a given throughput and measure the actual throughput
[Diagram: on the Benchmark IXP board, a Flood microengine sends NFS requests to the Mirage IXP board, which rewrites them and forwards them to a Fake Server microengine back on the Benchmark board; the replies are rewritten by Mirage on the way back and tallied by a Count microengine]
(note: transmit, receive, classifier microengines not shown)
Loopback Configuration
• Simulates a router without Mirage
[Diagram: the Benchmark board's Flood microengine sends requests through the Mirage IXP board, which forwards them unchanged to the Fake Server microengine; replies return the same way and are tallied by the Count microengine]
IXP Performance
[Chart: actual throughput vs. attempted throughput, in packets/sec, for getattr-loop, getattr-mirage, write-loop, and write-mirage; attempted throughput ranges from 50,000 to 700,000 packets/sec and the actual-throughput axis runs from 0 to 600,000 packets/sec]
Analysis
• User-mode Mirage
– 40,000 packets/second
– Read/write bandwidth at 320 Mbps
• IXP Mirage
– 290,000 packets/second
– Read/write bandwidth exceeds gigabit line speed (in theory, approx. 2.4 Gbps)
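(These bandwidth figures appear to assume roughly 1 KB of data per read/write packet: 40,000 packets/sec × 1 KB × 8 bits ≈ 320 Mbps, and 290,000 packets/sec × 1 KB × 8 bits ≈ 2.4 Gbps.)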
Status
• Completed
– User-mode Mutation (Gecko), Aggregation, Replication, Fortification
– IXP Aggregation
• To-do
– IXP performance tuning
– Finish IXP benchmarks
– IXP Replication?
– IXP Gecko?
– SOSP paper
Publications
• Scott Baker, John Hartman, “The Gecko NFS Web Proxy,” Proceedings of the Eighth International Conference on World Wide Web, 1999.
• Scott Baker, Bongki Moon, “Distributed Cooperative Web Servers,” Proceedings of the Eighth International Conference on World Wide Web, 1999.
• Scott Baker, John Hartman, “The design and implementation of the Gecko NFS Web Proxy,” Software Practice and Experience, June 2001.
• Scott Baker, John Hartman, and Ian Murdock, “Swarm: Agent-Based Storage,” The 2004 International Conference on Software Engineering Research and Practice, Las Vegas, Nevada, June 2004.
• Scott Baker and John Hartman, “The Mirage NFS Router,” The 29th IEEE Conference on Local Computer Networks, Tampa, FL, November 2004.