VTL: A Transparent Network Service Framework

VTL: A Transparent Network Service Framework
John R. Lange
and
Peter A. Dinda
Prescience Lab
Department of Electrical Engineering and Computer Science
Northwestern University
http://plab.cs.northwestern.edu
Transparent Network Services
• Manipulate data and signaling of flows/connections to add services to existing, unmodified applications and OSes
– High-level transformations of low-level traffic
– Transparency: manipulations are invisible to the guest environment
• VTL (Virtual Traffic Layer)
– A framework for creating Transparent Network Services
• Wide range of possible services
– Many useful for HPDC
Outline
• Defining Transparent Network Services
• Motivation
• VTL Framework
– Architecture
– Performance
• Example Transparent Network Services
– Protocol Transformations
– Anonymous Networking
–…
• Conclusion and Future Work
Transparency
• Improving existing, unmodified applications
– Invisible to connection endpoints
– No changes to the guest environment
– Seamless integration of networking techniques
• Transparency readily available with VMs
– Provide a transparent bridge
– Service integration below the virtual hardware
Network Services
• Implement high level functions
• Operate on low level network traffic
– Monitor
– Control
– Manipulate
• Traffic Data
• Signaling
• Unique challenges in Virtual Environments
– E.g. Migration
Motivation
• HPDC 2005 -- VRESERVE
– Automatic optical network reservations for unmodified applications
– Demonstrated performance gains over standard Internet routes
• Performance issues
– TCP applications are ill-suited to optical networks

J. Lange, A. Sundararaj, and P. Dinda, Automatic Dynamic Run-time Optical Network Reservations, Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing (HPDC 2005)
TCP over Optical Networks
• Optical networks have high BDPs (bandwidth-delay products)
– Very high bandwidth
– Long distances
• High relative latency
– TCP breaks down

[Figure: optical network map. Copyright 2004 National LambdaRail, Inc.; D. Petravick, Fermilab]
Typical BDP values
• Assume endpoints are on opposite ends of the earth
– Real-world example: CERN and StarLight
– Latency lower bound is ~60 ms
• Half the earth's circumference / speed of light
• CERN <–> FNAL has a measured ~60 ms delay (D. Petravick, Fermilab)
• Optical networks currently operate at 10 Gbps
– But 1 GigE NICs are most common
• TCP window size (BDP; worked numbers below):
– 10 Gbps ~= 70 MB
– 1 Gbps ~= 7 MB
• SACK lookups cause TCP timeouts
– Window size drops to 1
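
A quick back-of-the-envelope check of these window sizes (my arithmetic, not from the slides, taking the ~60 ms figure as the round-trip time):

BDP = bandwidth x RTT
10 Gbps x 60 ms = 6 x 10^8 bits ~= 75 MB
1 Gbps x 60 ms = 6 x 10^7 bits ~= 7.5 MB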
Transparently Optimize High-BDP Flows
• High performance protocols exist
– UDT/SABUL, RBUDP, etc…
– But applications must be configured for them
• Need method of transforming TCP to UDT
– Opens UDT connections based on SYNs
– Transmits data segments over UDT
VTL
• Transparent Network Service Framework
– Network device interface
– Packet modification and creation
– Rapid prototyping and evaluation
• Capabilities
– Virtual TCP endpoint
– Transparent packet generator
• ACKs, keep-alives
– Packet header and content modifications
– Not confined to virtual machines
VTL Components
• Network Interface API
– Reads/writes packets to/from network interfaces
• Packet Access API
– Reading and writing packet data
• State Models
– Maintain state of connection endpoints
Network Interface API
• Common interface for packet capture and injection (see the sketch below)
– Virtual or Real devices
– Unix or Windows
• Built on PCAP and libnet
• Operations
– Connect/Disconnect
– Read/Write
– Packet notifications
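
A minimal sketch of how the Network Interface API might be driven, wired up as the virtual-to-real bridge used in the baseline experiments later in the talk. The vtl_if_* names, the vtl_if_t handle type, the vtl.h header, and the device names are hypothetical stand-ins; only the connect/disconnect and read/write operations come from the slide above.

/* Half-duplex bridge: capture on the virtual device, inject on the
 * real one. All vtl_if_* identifiers are hypothetical. */
#include "vtl.h"

int bridge(void) {
    vtl_if_t *vm_if  = vtl_if_connect("vmnet1");  /* host-only (virtual) side */
    vtl_if_t *phy_if = vtl_if_connect("eth0");    /* physical side */
    RawEthernetPacket pkt;

    while (vtl_if_read(vm_if, &pkt) == 0) {       /* blocking read */
        vtl_if_write(phy_if, &pkt);
    }

    vtl_if_disconnect(vm_if);
    vtl_if_disconnect(phy_if);
    return 0;
}

A second process running the same loop in the opposite direction would give the full-duplex configuration measured in the baseline experiments.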
Packet Access API
• Packet inspection and modification (see the sketch below)
– Primitives to access standard fields
• Higher level functions built on primitives
– Packet class queries
– Field swapping
– Header calculations
– Derivative packet creation
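
As a sketch of how these primitives might compose (is_tcp_pkt and the swap_* names are hypothetical; the compute_* calls are the ones that appear in the code example later in the talk):

/* Reflect a TCP packet back at its sender: packet class query,
 * field swapping, then header recalculation. */
#include "vtl.h"

void reflect_pkt(RawEthernetPacket *pkt) {
    if (!is_tcp_pkt(pkt))       /* packet class query (hypothetical name) */
        return;

    swap_eth_addrs(pkt);        /* field swapping: src <-> dst MAC */
    swap_ip_addrs(pkt);         /* src <-> dst IP */
    swap_tcp_ports(pkt);        /* src <-> dst port */

    compute_ip_checksum(pkt);   /* header calculations */
    compute_tcp_checksum(pkt);
}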
Connection State Models
• Maintain and manipulate protocol state (see the sketch below)
– Layered architecture
• Create packets belonging to a connection
• State kept for both connection endpoints
– Generate packets from either endpoint
• API operation
– Manual or packet-based
• Model Initialization
• State Updates
• Packet Creation
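
A sketch of packet-based model operation under these assumptions (init_model and is_syn_pkt are hypothetical names; sync_model appears in the code example later in the talk):

/* Track a connection by feeding every observed packet through the
 * model: initialize on the SYN, update state on everything else. */
#include "vtl.h"

void track_pkt(vtl_model_t *model, RawEthernetPacket *pkt) {
    if (is_syn_pkt(pkt)) {
        init_model(model, pkt);  /* model initialization from the SYN */
    } else {
        sync_model(model, pkt);  /* state update for whichever endpoint
                                    sent the packet */
    }
}

Because state is kept for both endpoints, the same model can then be asked to create packets that appear to come from either side.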
VTL Configuration
[Figure: VTL configuration. A VM runs on a VMM (VMware, Xen, etc.) on a hosting server (Windows or Unix); VTL attaches to the host-only interface, and together with the VNET overlay module carries the resulting UDT flow over the optical network via the physical interface.]
A. Sundararaj, A. Gupta, and P. Dinda, Increasing Application Performance in Virtual Environments through Run-time Inference and Adaptation, Proceedings of the 14th IEEE International Symposium on High Performance Distributed Computing (HPDC 2005)
Baseline Performance
• Limited by Network Interface API
– Implemented in user space
• PCAP + libnet
• Experimental setup
– Simple interface bridge (virtual->real)
• Xen bridge
• Single process (half duplex)
• Two processes (full duplex)
Baseline Performance
[Figure: Overhead measurements. Bandwidth (MB/s) for the Xen bridge, one VTL process, and two VTL processes.]
Protocol Transformation for High-BDP Networks
• Addresses performance of TCP over optical
• VTL allows transformation of TCP flows to other
transport protocols
• VTL module acts as a virtual TCP endpoint (see the sketch below)
– Implements TCP states
• SYN sequence (open)
• FIN sequence (close)
• Data Transfer over new protocol (established)
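
A rough sketch of how such a module's per-packet dispatch might look (the udt_* and handle_* names, the tcp_state query, and the VTL_TCP_* constants are all hypothetical; TCP_DATA is from the code example that follows):

/* Virtual TCP endpoint for TCP->UDT transformation: terminate the
 * guest's TCP locally and carry the payload over UDT. */
#include "vtl.h"

void transform_pkt(vtl_model_t *model, udt_conn_t *udt,
                   RawEthernetPacket *pkt) {
    sync_model(model, pkt);            /* keep endpoint state current */

    switch (tcp_state(model)) {
    case VTL_TCP_SYN:                  /* open: answer the handshake and
                                          open the UDT connection */
        handle_open(model, udt, pkt);
        break;
    case VTL_TCP_ESTABLISHED:          /* established: forward the data
                                          segment over UDT */
        udt_send(udt, TCP_DATA(*pkt), tcp_data_len(pkt));
        break;
    case VTL_TCP_FIN:                  /* close: tear down both sides */
        handle_close(model, udt, pkt);
        break;
    }
}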
Code Example – Creating Packets
int create_data_pkt(vtl_model_t *model, char *data, int data_len) {
    RawEthernetPacket data_pkt;

    /* Build an empty packet for the guest-bound direction, based on
       the model's current view of the connection */
    create_empty_pkt(model, &data_pkt, INBOUND_PKT);

    /* Copy in the payload, then fix up the length and checksums */
    memcpy(TCP_DATA(data_pkt), data, data_len);
    compute_ip_len(&data_pkt, data_len);
    compute_ip_checksum(&data_pkt);
    compute_tcp_checksum(&data_pkt);

    /* Update the model to account for the new packet, then queue it
       for injection */
    sync_model(model, &data_pkt);
    queue_pkt(&data_pkt);

    return 0;
}
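
One detail worth noting (an observation about the example, not a claim from the slides): the length and checksums are recomputed only after the payload is copied in, and sync_model() runs before injection, presumably so the model's sequence state stays consistent with the packet the guest is about to see.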
Performance Evaluation Setup
• Comparing TCP vs. VTL + UDT
• Added artificial latency to a gigabit switch
– Linux iproute2 + tc netem
• TTCP benchmark
– Standard TCP (Host to host)
– TCP with intelligent socket buffers (Host to host)
– VTL + UDT (Xen VM to Xen VM)
• Note: No virtualization present for TCP tests
– Same hardware
Performance
[Figure: Bandwidth (MB/s) vs. added latency (ms) for standard TCP, TCP with tuned socket buffers, and VTL + UDT.]
More Transparent Network Services
• SOCKS (Tor)
• Subnet Tunneling
• VM Migration Support (TCP keep-alive)
• Stateful Firewall
• Performance Enhancing Proxies
– RFC 3135
– Local acknowledgements
Anonymous Networking for Any Application
• Tor Anonymous Network (http://tor.eff.org)
– Anonymizes source of any TCP connection
– Functions as a SOCKS proxy
– Requires SOCKS application support
[Figure: A VM on a VMM (VMware, Xen, etc.) attaches through the host-only interface to VTL on the hosting server; VTL carries the guest's TCP connections and DNS lookups over a SOCKS connection to a Tor server in the Tor network.]
Tor + VTL
• VTL implements a transparent SOCKS interface (see the sketch below)
– VTL simulates a TCP endpoint
– Extracts data segments from TCP packets and transmits them over the SOCKS tunnel
– Data from SOCKS is encapsulated into TCP packets and delivered to the VM
• Gotchas
– DNS is UDP-based
• VTL handles the DNS-over-UDP case
– ARPs
• VTL answers ARPs with a fake MAC address
• All TCP connections from a VM are anonymized
– No modification to the OS or applications
– User is not restricted to applications implementing SOCKS
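
A condensed sketch of the two data paths described above (socks_send, socks_conn_t, and tcp_data_len are hypothetical; create_data_pkt and sync_model are from the earlier code example):

#include "vtl.h"

/* Guest -> Tor: strip the TCP framing, push the payload into the
 * SOCKS tunnel. */
void to_socks(vtl_model_t *model, socks_conn_t *tor,
              RawEthernetPacket *pkt) {
    sync_model(model, pkt);
    socks_send(tor, TCP_DATA(*pkt), tcp_data_len(pkt));
}

/* Tor -> Guest: wrap returning bytes in TCP packets that match the
 * guest's expected connection state. */
void from_socks(vtl_model_t *model, char *buf, int len) {
    create_data_pkt(model, buf, len);
}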
Transparent Security
• iptables and Windows Firewall are now ubiquitous
– Not perfect
• A successful attacker can alter the rules
• Only as strong as the weakest link
• VTL rules are not accessible from the VM
– Even if the VM is compromised, the firewall rules are safe
Subnet Tunneling
[Figure: VM1 (123.123.1.50, subnet 123.123.1.0/24, gateway GW1 at 123.123.1.1) and VM2 (234.234.1.50, subnet 234.234.1.0/24, gateway GW2 at 234.234.1.1) are connected both through the Internet via their gateways and gateway router, and directly through a VNET overlay (proxies PROXY1 and PROXY2, 10.10.0.0/16). Frames VM2 addresses to its gateway's MAC arrive at the overlay with a MAC address mismatch.]
Subnet Tunneling
• Two VMs on different subnets communicating
– A fast-path link is available between them
• Bypasses the routers
• VMs use their subnet gateway
– Set the gateway MAC as the destination
• VTL rewrites destination MAC addresses (see the sketch below)
– Routes packets onto the fast-path link
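
A sketch of the rewrite itself (set_dst_mac and the vtl_if_* calls are hypothetical names for Packet Access and Network Interface API operations):

#include "vtl.h"

/* Redirect a frame the VM addressed to its gateway's MAC onto the
 * fast-path link instead. */
void fast_path_rewrite(RawEthernetPacket *pkt, vtl_if_t *fast_path,
                       const unsigned char remote_mac[6]) {
    set_dst_mac(pkt, remote_mac);   /* rewrite destination MAC */
    vtl_if_write(fast_path, pkt);   /* inject on the fast-path link */
}

Only the Ethernet header changes, so the IP and TCP checksums are untouched.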
Network Suspension during VM Migrations
• A VM is suspended for a long duration
– e.g., the VM is migrating over a WAN
– Open TCP connections begin to time out
• To maintain connections, VTL generates keep-alive packets (see the sketch below)
• A secondary service must handle routing
– e.g., VNET
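
A sketch of what the keep-alive generator might look like (make_keepalive_pkt, OUTBOUND_PKT, and the vtl_if_* calls are hypothetical; the state-model-driven packet creation follows the earlier code example):

#include "vtl.h"

/* While the VM is suspended, periodically send keep-alives on its
 * behalf so the remote endpoint's TCP timers are reset. A TCP
 * keep-alive is an ACK carrying a sequence number one below the
 * current one, built here from the saved model state. */
void keep_alive(vtl_model_t *model, vtl_if_t *net) {
    RawEthernetPacket ka;

    make_keepalive_pkt(model, &ka, OUTBOUND_PKT);
    vtl_if_write(net, &ka);
}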
Cooperative Selective Wormholing
• Distributed traffic aggregation for Network Intrusion Detection Systems (NIDS)
• Wormhole
– Tunnels traffic from a remote sensor to a backend NIDS
– VTL mechanisms for packet capture and injection
• Cooperative
– Volunteer machines aggregate the traffic
– VTL implementation is cross-platform
• Selective
– Aggregates only traffic the volunteer client is not interested in
– VTL mechanisms for packet inspection

J. Lange, P. Dinda, and F. Bustamante, Vortex: Enabling Cooperative Selective Wormholing for Network Security Systems, Proceedings of the 10th International Symposium on Recent Advances in Intrusion Detection (to appear)
Future Work
• Generalizable to a complete I/O framework
• Performance
– VMM-based implementation
• Automatic Service Adaptation
Conclusion
• Transparent Network Services allow high-level transformations of low-level network traffic
• VTL
– A framework for creating Transparent Network
Services
• Wide range of potential services
– Many useful for HPDC
• Prescience Lab
– http://plab.cs.northwestern.edu
• Virtuoso
– http://virtuoso.cs.northwestern.edu
• John Lange
– http://www.artifex.org/~jarusl
Vortex
• Cooperative Selective Wormhole implementation
• VTL
– Traffic capture and injection
– Packet modifications
• Rewrite addresses
• Anonymize packets
– Cross-platform functionality
Vortex Architecture
[Figure: Vortex architecture. On a commodity Windows/UNIX PC, applications and an operating system run in a VM behind a firewall; Vortex, built on VTL (PCAP + libnet), captures traffic at the NIC and forwards it through a VNET proxy over the VNET overlay to a backend network hosting a VM-based honeypot, a physical honeypot, and an IDS analysis backend.]