Lecture 8: Testbeds
Anish Arora
CIS788.11J Introduction to Wireless Sensor Networks
Material uses slides from Larry Peterson, Jay Lepreau, and GENI.net
References
• Emulab: artifact-free, auto-configured, fully controlled
  – A configurable Internet emulator
  – 2001: 200 nodes, 500 wires, 2x BFS (switch)
  – 2006: 350 PCs, 7 IXPs, 40 WANodes, 27+ 802.11 nodes
• PlanetLab: real environment
  – 670 machines spanning 325 sites and 35 countries; nodes within a LAN-hop of > 3M users
  – Supports distributed virtualization: each of 600+ network services runs in its own slice
• GENI
Emulab philosophy
• Live-network experimentation
  – Achieves realism
  – Surrenders repeatability
  – e.g., MIT “RON” testbed, PlanetLab
• Pure emulation
  – Introduces controlled packet loss and delay
  – Requires tedious manual configuration
• Emulab approach
  – Brings simulation’s efficiency and automation to emulation
  – Artifact-free environment
  – Arbitrary workload: any OS, any “router” code, any program, for any user
  – So the default resource allocation policy is conservative: allocate a full real node and link; no multiplexing; assume the maximum possible traffic
Emulab
• Allows the experimenter complete control, i.e., bare hardware with lots of tools for common cases
  – OSes, disk loading, state-management tools, IP, traffic generation, batch, ...
• Virtualization of all experimenter-visible resources
  – Topology, links, software, node names, network interface names, network addresses
  – Allows swapin/swapout
• Remotely accessible
• Persistent state maintenance (in a database)
• Separate control network
• Configuration language: ns (see the sketch below)
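Since experiments are specified in ns, here is a minimal sketch of an Emulab-style experiment file, assuming Emulab's tb_compat.tcl extensions; the node names and the FBSD-STD image name are illustrative, not prescribed:

    # A minimal sketch of an Emulab experiment in ns (Tcl); node names and
    # the OS image name are illustrative.
    set ns [new Simulator]
    source tb_compat.tcl              ;# Emulab's testbed extensions to ns

    set A [$ns node]
    set B [$ns node]
    tb-set-node-os $A FBSD-STD        ;# request a specific disk image

    # The link from the experiment life cycle example: 1.5 Mbps, 20 ms delay.
    set link0 [$ns duplex-link $A $B 1.5Mb 20ms DropTail]

    $ns rtproto Static
    $ns run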
Experiment Life Cycle
[Diagram: an experiment specification (e.g., “$ns duplex-link $A $B 1.5Mbps 20ms”) is parsed into the database (DB); resources are allocated, nodes self-configure, and the experiment is swapped in and out under global and per-node control.]
assign: Mapping Local Cluster Resources
• Maps virtual resources to local nodes and VLANs
• General combinatorial-optimization approach to an NP-complete problem
• Based on simulated annealing (see the sketch below)
• Minimizes inter-switch links, number of switches, and other constraints
• All experiments mapped in less than 3 seconds [100 nodes]
• WANassign for mapping global resources (uses a genetic algorithm)
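To make the approach concrete, a minimal simulated-annealing sketch in Python; the cost function and the single-node remap move are illustrative stand-ins, not assign's actual implementation:

    # A sketch of simulated-annealing resource mapping: repeatedly remap one
    # virtual node, always accept improvements, and accept worse mappings
    # with a probability that shrinks as the temperature cools.
    import math
    import random

    def anneal(vnodes, pnodes, cost, steps=100_000, t0=1.0, alpha=0.9999):
        """Map each virtual node to a physical node, minimizing cost(mapping)."""
        mapping = {v: random.choice(pnodes) for v in vnodes}
        cur = cost(mapping)
        best, best_cost = dict(mapping), cur
        t = t0
        for _ in range(steps):
            v = random.choice(vnodes)          # move: remap one virtual node
            old = mapping[v]
            mapping[v] = random.choice(pnodes)
            new = cost(mapping)
            if new <= cur or random.random() < math.exp((cur - new) / t):
                cur = new                      # accept the move
                if cur < best_cost:
                    best, best_cost = dict(mapping), cur
            else:
                mapping[v] = old               # reject: undo the move
            t *= alpha                         # cool the temperature
        return best, best_cost

    # e.g., with a hypothetical cost that counts inter-switch links:
    # anneal(vnodes, pnodes, cost=interswitch_link_count)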
Frisbee: Disk Loading
• Loads full disk images (bulk download)
• Performance techniques (see the sketch below):
  – Overlaps block decompression and device I/O
  – Uses a domain-specific algorithm to skip unused blocks
  – Delivers images via a custom reliable multicast protocol
• On 13 GB generic IDE 7200 rpm drives:
  – Was 20 minutes for a 6 GB image
  – Now 88 seconds
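For scale, 6 GB in 88 seconds is roughly 70 MB/s written to disk, versus about 5 MB/s at 20 minutes. The overlap technique can be sketched as a producer/consumer pair; the (disk offset, compressed data) chunk list below is a simplification, not Frisbee's actual image format:

    # A sketch of Frisbee-style pipelined loading: one thread decompresses
    # image chunks while another writes them to disk, so decompression and
    # device I/O overlap. Free blocks are "skipped" simply by never being
    # written.
    import queue
    import threading
    import zlib

    def decompress_chunks(compressed_chunks, q):
        """Producer: decompress each (disk_offset, zlib_data) chunk."""
        for offset, data in compressed_chunks:
            q.put((offset, zlib.decompress(data)))
        q.put(None)                       # sentinel: no more chunks

    def write_chunks(disk_path, q):
        """Consumer: seek past unused regions, write only allocated blocks."""
        with open(disk_path, "r+b") as disk:
            while (item := q.get()) is not None:
                offset, block = item
                disk.seek(offset)
                disk.write(block)

    def load_image(disk_path, compressed_chunks):
        q = queue.Queue(maxsize=8)        # bounded queue keeps both stages busy
        writer = threading.Thread(target=write_chunks, args=(disk_path, q))
        writer.start()
        decompress_chunks(compressed_chunks, q)
        writer.join()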
IDE planned for Emulab
• Evolve Emulab to be the network-device-independent control and integration center for experimentation, research, development, debugging, measurement, data management, and archiving
  – Collaboratory: Emulab’s project abstraction
  – Workbench: Emulab’s experiment abstraction
  – Device-independent: Emulab’s built-in abstractions for all things network-related
Collaboratory Subsystems
• Source repository: SourceForge, CVS, Subversion
• Datapository
• “My Wikis”
• Mailing list(s)
• Bug database
• Chat/IM, chatroom management
• Moodle?
• Approach
  – Transparently do authentication, authorization, and membership management: “single sign-on”
  – Use a separate server for information and resource security and management
  – Support flexible access policies: the default is project-private, but the project leader can change it per subsystem (private, public read-only, public read/write)
Experimentation Workbench
• Four types:
  – Workflow management (processes), including
    · Measurement and feedback steps
    · Mandatory pipelines
  – Experiment management
  – Data management
  – Analyses
Workbench: “Time Travel” and Stateful Swapout
• Time travel of distributed systems for debugging
  – Generalize disk image format and handling
  – Periodic disk checkpointing
  – Full state-save on swapout
  – Xen-based virtual machines
  – Challenge: network state (packets in flight)
    · Pragmatic approach: quiesce senders, flush buffers
• Stateful swapout/swapin [easier]
  – Allows transparent pre-emption of an experiment
• Related to workbench: history, tree traversal
  – Can share some mechanisms, some UI
PlanetLab: Requirements
1) It must provide a global platform that supports both short-term experiments and long-running services.
  – Services must be isolated from each other
  – Multiple services must run concurrently
  – Must support real client workloads
• Key Ideas
  – Slices
  – Virtualization: multiple architectures on a shared infrastructure
  – Programmability: virtually no limit on new designs
  – Opt-in on a per-user / per-application basis: attract real users; demand drives deployment / adoption
PlanetLab: Slices
[Diagrams: slices shown as distributed sets of virtual machines spanning the PlanetLab nodes.]
User Opt-in
[Diagram: a client behind a NAT opts in to a service running on a PlanetLab server.]
Virtualization: Per-Node View
[Diagram: a node manager, the owner’s VM, and service VMs (VM1 … VMn) run on the Virtual Machine Monitor; auditing, monitoring, brokerage, and provisioning services run alongside.]
• Virtual Machine Monitor (VMM) = Linux kernel (Fedora Core)
  + Vservers (namespace isolation)
  + Schedulers (performance isolation)
  + VNET (network virtualization)
Global View
[Diagram: PLC coordinates slices across the nodes at every site.]
Requirements
2) It must be available now, even though no one knows for sure what “it” is
  – Deploy what we have today, and evolve over time
  – Make the system as familiar as possible (e.g., Linux)
  – Accommodate third-party management services
Brokerage Service
[Diagram: PLC (the slice authority) binds a slice to a resource pool with Bind(slice, pool); a user calls BuyResources() on the broker, and the broker contacts the node managers of the relevant nodes.]
Requirements
3) Convince sites to host nodes running code written by unknown researchers from other organizations.
  – Protect the Internet from PlanetLab traffic
  – Must get the trust relationships right
  – Trusted intermediary: PLC
[Diagram: the four trust relationships among the node owner, PLC, and the service developer (user):]
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) The owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts the owner to keep nodes physically secure
Requirements
4) Sustaining growth depends on support for site autonomy and decentralized control
  – Sites have final say over the nodes they host
  – Must minimize (eliminate) centralized control
• Owner autonomy
  – Owners allocate resources to favored slices
  – Owners selectively disallow unfavored slices
• Delegation (see the sketch below)
  – PLC grants tickets that are redeemed at nodes
  – Enables third-party management services
• Federation
  – Create “private” PlanetLabs using MyPLC
  – Establish peering agreements
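A rough sketch of the ticket mechanism, assuming HMAC with a shared demo key in place of PlanetLab's real public-key credentials; the slice name, grant format, and policy hook are illustrative:

    # A sketch of PLC-style ticket delegation: PLC signs a ticket naming a
    # slice and a resource grant; a node manager verifies and redeems it,
    # subject to the owner's local policy (owner autonomy). HMAC with a
    # shared key stands in for real public-key signatures.
    import hashlib
    import hmac
    import json

    PLC_KEY = b"plc-demo-key"             # placeholder signing key

    def issue_ticket(slice_name, grant):
        """PLC: sign a (slice, grant) pair, e.g. grant = {'cpu_share': 2}."""
        body = json.dumps({"slice": slice_name, "grant": grant}, sort_keys=True)
        sig = hmac.new(PLC_KEY, body.encode(), hashlib.sha256).hexdigest()
        return {"body": body, "sig": sig}

    def redeem_ticket(ticket, allowed_by_owner):
        """Node manager: verify the signature, then apply local policy."""
        expected = hmac.new(PLC_KEY, ticket["body"].encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, ticket["sig"]):
            raise ValueError("ticket was not signed by PLC")
        req = json.loads(ticket["body"])
        if not allowed_by_owner(req["slice"]):    # site has the final say
            raise PermissionError("slice disallowed at this site")
        return req                        # caller would now resize the sliver

    ticket = issue_ticket("demo_slice", {"cpu_share": 2})
    redeem_ticket(ticket, allowed_by_owner=lambda s: True)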
Requirements
5) It must scale to support many users with minimal resources available
  – Expect under-provisioned state to be the norm
  – Shortage of logical resources too (e.g., IP addresses)
• Decouple slice creation and resource allocation
  – Given a “fair share” by default when created
  – Acquire additional resources, including guarantees
• Fair share with protection against thrashing (see the sketch below)
  – 1/Nth of CPU
  – 1/Nth of link bandwidth
    · Owner limits the peak rate
    · Upper bound on the average rate (protects campus bandwidth)
  – Disk quota
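The bandwidth caps can be sketched with a token bucket, where the refill rate enforces the average-rate bound and the bucket depth enforces the owner's peak/burst limit; the parameters below are illustrative, not PlanetLab's actual limiter:

    # A sketch of a per-slice token-bucket rate cap: the refill rate bounds
    # the long-term average send rate, the bucket depth bounds bursts.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s      # average-rate cap
            self.capacity = burst_bytes       # peak burst allowed by the owner
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, nbytes):
            """Return True iff the slice may send nbytes now."""
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True
            return False                      # over budget: defer or drop

    # e.g., an average of 1.5 Mb/s with bursts up to 1 MB:
    bucket = TokenBucket(rate_bytes_per_s=1.5e6 / 8, burst_bytes=1_000_000)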
GENI Design
• Key Idea
  – Slices embedded in a substrate of networking resources
• Two central pieces
  – Physical network substrate
    · Expandable collection of building-block components
    · Nodes / links / subnets
  – Software management framework
    · Knits building blocks together into a coherent facility
    · Embeds slices in the physical substrate
[Build-up diagrams: National Fiber Facility; + Programmable Routers; + Clusters at Edge Sites; + Wireless Subnets; + ISP Peers (MAE-West, MAE-East).]
Closer Look
[Diagram: an edge site with a sensor network and a wireless subnet connects through a customizable router and a dynamically configurable backbone switch to backbone wavelengths and the Internet.]
Summary of Substrate
• Node Components
  – Edge devices
  – Customizable routers
  – Optical switches
• Bandwidth
  – National fiber facility
  – Tail circuits (including tunnels)
• Wireless Subnets
  – Urban 802.11
  – Wide-area 3G/WiMax
  – Cognitive radio
  – Sensor net
  – Emulation
Management Framework
[Diagram: management services run above the GENI Management Core (GMC), which runs above the substrate components.]
• GMC provides:
  – A name space for users, slices, and components
  – A set of interfaces (“plug in” new components)
  – Support for federation (“plug in” new partners)
GENI Management Core (GMC)
[Diagram: the slice manager, resource controller, and auditing archive interact through the GMC with per-component managers (CMs); node control flows down and sensor data flows up; each component runs virtualization software over substrate hardware. See the interface sketch below.]
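To make the “set of interfaces” idea concrete, a hypothetical component-manager (CM) interface in Python; the method names and resource-specification dictionary are assumptions for illustration, not GENI's actual API:

    # A sketch of a uniform component-manager interface: any substrate
    # building block implements it so the GMC can embed slices the same way
    # everywhere. Names are illustrative, not GENI's actual API.
    from abc import ABC, abstractmethod

    class ComponentManager(ABC):
        """Per-component control point (virtualization SW over substrate HW)."""

        @abstractmethod
        def create_sliver(self, slice_name: str, rspec: dict) -> str:
            """Embed a slice's share (a sliver) here; return its id."""

        @abstractmethod
        def delete_sliver(self, sliver_id: str) -> None:
            """Release the sliver's resources."""

        @abstractmethod
        def sensor_data(self) -> dict:
            """Report measurements up to the auditing archive."""

    class EdgeNodeCM(ComponentManager):
        def __init__(self):
            self.slivers = {}

        def create_sliver(self, slice_name, rspec):
            sliver_id = f"{slice_name}:{len(self.slivers)}"
            self.slivers[sliver_id] = rspec   # e.g. {"cpu": 1, "mem_mb": 256}
            return sliver_id

        def delete_sliver(self, sliver_id):
            self.slivers.pop(sliver_id, None)

        def sensor_data(self):
            return {"active_slivers": len(self.slivers)}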