Network Reprogramming & Programming Abstractions

Network reprogramming
• XNP: wireless reprogramming tool
• Mate: Virtual machine for WSN
Over-the-Network Programming of Wireless Sensors
• In-System Programming
  – The sensor node is plugged into the host's serial/parallel port
  – Can program only one sensor node at a time
• Network Programming
  – Delivers the program code to multiple nodes over the air with a single transmission
  – Saves the effort of programming each node individually
[Figure: In-system programming — the host machine loads program code into one sensor node over a parallel cable. Network programming — the host machine delivers program code to many sensor nodes over the radio channel.]
Network Programming for TinyOS (XNP)
• Has been available since release 1.1
• Originally made by Crossbow and modified by UCB
• Provides basic network programming capability
• Has some limitations
  – No support for multi-hop delivery
  – No support for incremental update
Background – Mechanisms of XNP
(1) Host: sends the program code as download messages
(2) Sensor node: stores the messages in external flash
(3) Sensor node: calls the boot loader, which copies the program code from external flash into program memory (the whole flow is simulated in the sketch below)
[Figure: XNP data path — the network programming host program sends the user application (SREC file) as radio packets (1); the network programming module on the sensor node stores them in external flash (2); the boot loader copies the code from external flash into the user application section of program memory (3).]
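The three steps can be made concrete with a small simulation. The following Python sketch mimics the XNP data path under stated assumptions (the 16-byte payload per download message and the helper names are illustrative, not XNP's actual implementation):

# Illustrative simulation of the XNP download flow (not the real TinyOS code).
PAYLOAD = 16  # bytes of code per radio "download message" (assumption)

def host_send(code_image: bytes):
    """(1) Host splits the program image into numbered download messages."""
    return [(seq, code_image[i:i + PAYLOAD])
            for seq, i in enumerate(range(0, len(code_image), PAYLOAD))]

class SensorNode:
    def __init__(self):
        self.external_flash = {}   # seq -> payload, written as messages arrive
        self.program_memory = b""  # rewritten by the boot loader

    def receive(self, seq, payload):
        """(2) Network programming module buffers each message in external flash."""
        self.external_flash[seq] = payload

    def boot_loader(self):
        """(3) Boot loader copies the assembled image into program memory."""
        image = b"".join(self.external_flash[s] for s in sorted(self.external_flash))
        self.program_memory = image

if __name__ == "__main__":
    image = bytes(range(256)) * 4          # stand-in for an SREC program image
    node = SensorNode()
    for seq, chunk in host_send(image):    # radio transfer, one node shown
        node.receive(seq, chunk)
    node.boot_loader()
    assert node.program_memory == image
    print("reprogrammed", len(image), "bytes")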
Mate: A Virtual Machine for WSNs
Why VM?
• Large number (100’s to 1000’s) of nodes in a coverage area
• Some nodes will fail during operation
• Change of function during the mission
Related Work
  – PicoJava: assumes Java bytecode execution hardware
  – K Virtual Machine: requires 160–512 KB of memory
  – XML: too complex, and nodes do not have enough RAM for it
  – Scylla: VM for mobile embedded systems
Mate features
• Small (16 KB instruction memory, 1 KB RAM)
• Concise (limited memory & bandwidth)
• Resilient (memory protection)
• Efficient (limited bandwidth)
• Tailorable (user-defined instructions)
Mate in a nutshell (capsule?)
• Stack architecture
• Three concurrent execution contexts (clock, send, receive)
• Execution triggered by predefined events
• Tiny code capsules; self-propagate into network
• Built-in communication and sensing instructions
When is Mate Preferable?
• For a small number of executions
  – The bytecode version is preferable for a program that runs for less than about 5 days
  – The energy saved by sending the new program as compact Mate capsules compensates for the energy wasted running the virtual machine's bytecode interpreter (a back-of-the-envelope estimate follows below)
• In energy-constrained domains
• When using Mate capsules as a general RPC engine, or for memory protection and virtualization
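The trade-off behind the "less than 5 days" rule of thumb can be sketched with a back-of-the-envelope calculation. The numbers below are hypothetical placeholders, not measurements from the Mate paper; only the structure of the computation matters:

# Back-of-the-envelope break-even estimate: bytecode vs. native reprogramming.
# All numbers below are hypothetical placeholders, not measurements from the paper.
E_INSTALL_NATIVE = 20.0   # J: energy to disseminate a full native image (assumed)
E_INSTALL_CAPSULE = 0.5   # J: energy to disseminate small Mate capsules (assumed)
P_OVERHEAD_VM = 40e-6     # W: extra power burned by the bytecode interpreter (assumed)

def breakeven_days():
    """Runtime at which the interpreter overhead has eaten the install-time savings."""
    saved = E_INSTALL_NATIVE - E_INSTALL_CAPSULE   # J saved when installing new code
    seconds = saved / P_OVERHEAD_VM                # s until the overhead catches up
    return seconds / 86400.0

if __name__ == "__main__":
    print(f"bytecode wins for programs running < {breakeven_days():.1f} days")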
Mate Architecture
• Stack-based architecture
• Single shared (heap) variable, accessed via gets/sets
• Three events, each with its own execution context: clock timer, message send, message receive
• Hides asynchrony
  – Simplifies programming
  – Less prone to bugs
[Figure: each context (Clock, Send, Receive) has its own PC, operand stack, and return stack; contexts get/set the shared variable and execute code capsules, including subroutines 0–3.]
Instruction Set
• One byte per instruction
• Three classes: basic, s-type, x-type
  – basic: arithmetic, halting, LED operation
  – s-type: messaging system
  – x-type: pushc, blez
• 8 instructions reserved for users to define
• Instruction polymorphism
  – e.g., add(data, message, sensing)
Code Example
• Display counter on LEDs

gets      # Push heap variable on stack
pushc 1   # Push 1 on stack
add       # Pop twice, add, push result
copy      # Copy top of stack
sets      # Pop, set heap
pushc 7   # Push 0x0007 onto stack
and       # Take bottom 3 bits of value
putled    # Pop, set LEDs to bit pattern
halt      # Stop execution
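To make the stack and heap semantics concrete, here is a minimal Python interpreter that runs exactly this capsule. It is an illustration of the execution model, not Mate's actual implementation (Mate runs on TinyOS and schedules each instruction as a separate task):

# Minimal stack interpreter for the capsule above (illustrative, not Mate's code).
def run_capsule(instructions, heap=0):
    stack, leds = [], 0
    for instr in instructions:
        op = instr.split()
        if op[0] == "pushc":                 # push a constant
            stack.append(int(op[1]))
        elif op[0] == "gets":                # push the shared heap variable
            stack.append(heap)
        elif op[0] == "sets":                # pop into the shared heap variable
            heap = stack.pop()
        elif op[0] == "add":                 # pop two, push the sum
            stack.append(stack.pop() + stack.pop())
        elif op[0] == "copy":                # duplicate the top of stack
            stack.append(stack[-1])
        elif op[0] == "and":                 # pop two, push bitwise AND
            stack.append(stack.pop() & stack.pop())
        elif op[0] == "putled":              # pop, show the low 3 bits on the LEDs
            leds = stack.pop() & 0x7
        elif op[0] == "halt":
            break
    return heap, leds

capsule = ["gets", "pushc 1", "add", "copy", "sets",
           "pushc 7", "and", "putled", "halt"]

heap = 0
for tick in range(10):                       # pretend the clock event fires 10 times
    heap, leds = run_capsule(capsule, heap)
    print(f"tick {tick}: counter={heap} leds={leds:03b}")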
Code Capsules
• One capsule = 24 instructions
• Fits into a single TOS packet
• Atomic reception
• Each code capsule carries (modeled in the sketch below)
  – Type and version information
  – Type: send, receive, timer, or subroutine
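A capsule can be pictured as a small self-describing record. The Python model below is only illustrative; the field names are assumptions rather than Mate's on-air packet layout:

# Illustrative model of a Mate code capsule (field names are assumptions).
from dataclasses import dataclass

MAX_INSTRUCTIONS = 24  # a capsule holds up to 24 one-byte instructions

@dataclass
class Capsule:
    ctype: str      # "send", "receive", "timer", or "subroutine"
    version: int    # 32-bit version counter used during dissemination
    code: bytes     # up to MAX_INSTRUCTIONS one-byte opcodes

    def __post_init__(self):
        assert self.ctype in ("send", "receive", "timer", "subroutine")
        assert len(self.code) <= MAX_INSTRUCTIONS

blink = Capsule(ctype="timer", version=1, code=bytes([0x01, 0x02, 0x03]))
print(blink)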
Viral Code
• Capsule transmission: forw
  – Forwarding another installed capsule: forwo (used within the clock capsule)
• On receiving a capsule, Mate checks its version number (see the sketch below)
  – If it is newer than the installed one, install it
• Versioning: 32-bit counter
• New code thus disseminates itself over the network
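The install-if-newer rule is the core of the viral propagation. Here is a minimal sketch of that logic, assuming a simple broadcast primitive; the class and method names are illustrative, not Mate's internals:

# Minimal sketch of Mate-style viral capsule dissemination (illustrative only).
class MateNode:
    def __init__(self, name):
        self.name = name
        self.installed = {}   # capsule type -> (version, code)

    def on_receive_capsule(self, ctype, version, code, broadcast):
        """Install a received capsule only if its version is newer, then re-forward."""
        current_version = self.installed.get(ctype, (-1, None))[0]
        if version > current_version:
            self.installed[ctype] = (version, code)
            broadcast(ctype, version, code)   # keep the infection spreading

if __name__ == "__main__":
    nodes = [MateNode(i) for i in range(5)]
    def broadcast(ctype, version, code):
        # demo broadcast: deliver to every node; nodes do not re-forward here
        for n in nodes:
            n.on_receive_capsule(ctype, version, code, lambda *a: None)
    broadcast("clock", 7, b"\x01\x02")
    assert all(n.installed["clock"][0] == 7 for n in nodes)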
Component Breakdown
• Mate runs on a mica mote with 7,286 bytes of code and 603 bytes of RAM
Network Infection Rate
• 42-node network in a 3-by-14 grid
• Radio transmission: 3-hop network
• Cell size: 15 to 30 motes
• Every mote runs its clock capsule every 20 seconds
• Self-forwarding clock capsule
Bytecodes vs. Native Code
• Mate executes roughly 10,000 instructions per second
• Overhead: every instruction is executed as a separate TOS task
Customizing Mate
• Mate is a general architecture; users can build customized VMs
  – Bombilla in TinyOS for querying
  – Agilla (over Bombilla) for mobile agents in WSNs
• The user can select bytecodes and execution events
• Issues: flexibility vs. efficiency
  – Customizing increases efficiency, at the cost of handling changing requirements
  – Java's solution: a general computational VM plus class libraries
  – Mate's approach: a more customizable solution; let the user decide
Programming abstractions
Macro-programming approaches
• Hood abstraction
• Region streams
• Kairos
Macroprogramming
• Program the sensornet as a whole
  – Easier than programming at the level of individual nodes
  – e.g., matrix multiplication: matrix notation vs. a parallel program in MPI
  – Compiled into node-level programs
• Non-CS researchers should be able to program without worrying about distributed execution details
  – Abstracts away the details of concurrency and communication
Taxonomy of Macroprogramming
• Abstractions – global behavior
  – Node-independent: TAG, Cougar; DFuse
  – Node-dependent: Kairos; Regiment; Split-C
• Abstractions – local behavior
  – Data-centric: EIP, State-space
  – Geometric: Regions, Hood
• Support
  – Composition: Sensorware, SNACK
  – Distribution, automatic optimization & safe execution: Impala, Mate, Tofu, Trickle, Deluge
Hood (UC Berkeley)
• Neighborhood
  – A neighborhood in Hood is defined by a set of criteria for choosing neighbors and a set of variables to be shared
  – A node can define multiple neighborhoods, with different variables shared over each of them
• Captures the essence of the neighborhood concept needed by many existing applications
• Defines the relationship between several concepts fundamental to neighborhoods
  – Membership, data sharing, data caching, and messaging
  – Decouples data sharing from caching
  – Integrates neighbor lists and caching with messaging
  – Mirrors & filters
• Explicitly proposes neighborhood-oriented programming (see the sketch below)
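The neighborhood idea can be sketched outside nesC. The Python model below is illustrative only; the class, method names, and example attributes are assumptions, not Hood's actual interface. A neighborhood is parameterized by a membership criterion and the set of attributes it mirrors:

# Illustrative model of a Hood-style neighborhood (not Hood's real nesC API).
class Neighborhood:
    def __init__(self, name, member_filter, shared_attrs, max_members=8):
        self.name = name
        self.member_filter = member_filter   # criterion for choosing neighbors
        self.shared_attrs = shared_attrs     # attributes shared over this neighborhood
        self.max_members = max_members
        self.mirrors = {}                    # node id -> cached (mirrored) attributes

    def on_broadcast(self, node_id, attrs):
        """Cache (mirror) a neighbor's shared attributes if it passes the filter."""
        if not self.member_filter(attrs):
            return
        if node_id in self.mirrors or len(self.mirrors) < self.max_members:
            self.mirrors[node_id] = {k: attrs[k] for k in self.shared_attrs if k in attrs}

# One node can keep several neighborhoods with different criteria and variables:
light_hood = Neighborhood("light", lambda a: a.get("light", 0) > 200, {"light"})
routing_hood = Neighborhood("routing", lambda a: a.get("lqi", 0) > 50, {"lqi", "parent"})

light_hood.on_broadcast(3, {"light": 512, "lqi": 90, "parent": 1})
routing_hood.on_broadcast(3, {"light": 512, "lqi": 90, "parent": 1})
print(light_hood.mirrors, routing_hood.mirrors)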
Region Streams (Harvard)
• Purely functional macroprogramming language for sensornets
• Basic data abstraction: region streams
  – A time-varying collection of node state
  – e.g., "all sensor nodes within area R" form a region
  – The set of their periodic data samples forms a region stream
• Example: tracking a moving vehicle (see the sketch below)
  – A region stream is created that represents the value of the proximity sensor on every node in the network
  – Each value is annotated with the location of the corresponding sensor
  – Data items that fall below a threshold are filtered out
  – The spatial centroid of the remaining collection of sensor values is computed to determine the approximate location of the object that generated the readings
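The tracking example maps naturally onto a filter-then-fold over a stream of per-node samples. The Python sketch below mimics that dataflow; it is not Regiment syntax, and the threshold and data are invented for illustration:

# Illustrative dataflow for the vehicle-tracking example (not Regiment syntax).
THRESHOLD = 100  # proximity readings above this are treated as detections (assumed)

def region_stream(snapshots):
    """Yield one region snapshot per epoch: [(location, proximity_value), ...]."""
    for snapshot in snapshots:
        yield snapshot

def track(snapshots):
    for snapshot in region_stream(snapshots):
        hits = [(loc, v) for loc, v in snapshot if v >= THRESHOLD]  # filter weak readings
        if not hits:
            continue
        # spatial centroid of the remaining readings approximates the vehicle position
        cx = sum(x for (x, y), _ in hits) / len(hits)
        cy = sum(y for (x, y), _ in hits) / len(hits)
        yield (cx, cy)

epochs = [
    [((0, 0), 20), ((1, 0), 180), ((1, 1), 220), ((2, 1), 30)],
    [((1, 1), 40), ((2, 1), 200), ((2, 2), 240)],
]
print(list(track(epochs)))   # approximate vehicle position per epoch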
Region Streams (Harvard)
• Regiment: a functional macroprogramming language
  – Based on functional reactive programming concepts
  – Functional language: "pure", with no side effects (no direct input or output)
  – Programs cannot directly manipulate program state
  – This allows the compiler to decide how and where program state is kept in the volatile mesh of sensor nodes
Market-Based Macroprogramming (Harvard)
• Basic model:
  – Nodes act as agents that sell goods (such as sensor readings or routed messages)
  – Each good is produced by an associated action
  – Nodes attempt to maximize their profit, subject to energy constraints
• Each good has an associated price
  – The network is "programmed" by setting prices for each good
• Each action has an associated energy cost
  – e.g., the cost to sample a sensor << the cost to transmit a radio message
material from Matt Welsh
How to program in MBM?
• First step: set the price(s)
  – Use one of many efficient dissemination protocols
  – Update prices as needed by the overall application goal
• Nodes select actions based on a utility function (see the sketch below)
• Utility depends on:
  – Price, advertised by the base station
  – Energy availability: taking an action must stay within the energy budget
  – Other dependencies: cannot aggregate data until multiple samples have been received; cannot transmit if nothing is in the local buffer
material from Matt Welsh
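A node's decision loop can be sketched as repeatedly picking the feasible, affordable action with the best price-to-cost trade-off. The Python below is schematic; the actions, prices, and costs are invented for illustration and are not taken from the MBM work:

# Schematic MBM-style action selection (prices and costs below are invented).
PRICES = {"sample": 1.0, "transmit": 5.0, "aggregate": 3.0}   # set by the programmer
COSTS  = {"sample": 0.1, "transmit": 2.0, "aggregate": 0.5}   # energy per action

def feasible(action, state):
    """Encode the dependencies: aggregation needs samples, transmit needs a buffer."""
    if action == "aggregate":
        return state["samples"] >= 2
    if action == "transmit":
        return state["buffer"] > 0
    return True

def choose_action(state, energy_budget):
    """Pick the feasible, affordable action with the highest profit (price - cost)."""
    candidates = [a for a in PRICES
                  if feasible(a, state) and COSTS[a] <= energy_budget]
    if not candidates:
        return None
    return max(candidates, key=lambda a: PRICES[a] - COSTS[a])

state = {"samples": 3, "buffer": 0}
print(choose_action(state, energy_budget=1.0))   # -> 'aggregate' (transmit not feasible)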
Kairos (USC)
• In Kairos, a programmer writes a single sequential program using a simple centralized memory model
[Figure: a sequential program's thread of control reads and writes centralized sensor state mapped from the sensors.]
Advantage
• Centralized sequential programs are easier to specify, code, understand, and debug than hand-coded distributed versions
  – Reuse "textbook" algorithms for sophisticated tasks
  – Ignoring latency and energy considerations, a naive but trivially correct "distributed" implementation is always possible by shipping sensor nodes' state to and from a central location
Kairos Features
• Three constructs with which to write programs (see the sketch below)
  – node (a first-class datatype) and node_list (an iterator over nodes), which facilitate topology-independent programming
  – get_neighbors() to obtain the current one-hop neighbors of a node
  – var@node to synchronously access data and program state of a remote node
• These constructs are language-agnostic
• They can be implemented in the preprocessor stage of compilation
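To give a feel for these constructs, here is a small Python analogue (Kairos itself extends a C-like host language; the classes and the hop-count loop below are illustrative assumptions, not Kairos code):

# Python analogue of Kairos-style constructs (illustrative, not actual Kairos code).
class Node:
    def __init__(self, node_id, neighbors):
        self.node_id = node_id
        self._neighbors = neighbors
        self.state = {"hops": float("inf")}     # remotely readable program state

    def get_neighbors(self):                    # analogue of get_neighbors()
        return list(self._neighbors)

    def at(self, var):                          # analogue of var@node
        return self.state[var]

# Tiny 4-node line topology: 0 - 1 - 2 - 3
topology = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
nodes = {nid: Node(nid, nbrs) for nid, nbrs in topology.items()}

# "Centralized" sequential program: hop counts from node 0, written as one loop
nodes[0].state["hops"] = 0
for _ in range(len(nodes)):                     # relax until stable
    for node in nodes.values():                 # node_list-style iteration
        for nbr in node.get_neighbors():
            node.state["hops"] = min(node.state["hops"],
                                     nodes[nbr].at("hops") + 1)   # var@node read

print({nid: n.state["hops"] for nid, n in nodes.items()})   # {0: 0, 1: 1, 2: 2, 3: 3}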
Eventual Consistency
• Kairos uses a synchronization model called loose synchrony
  – Useful when node state is relatively static
  – Did not work well for a dynamic vehicle-tracking scenario
• A tighter semantic, loop-level synchrony, was implemented for such cases
• Longer term, temporal abstractions are being explored as a fourth construct that can capture this requirement completely