
Lecture 2: Software Platforms
Anish Arora
CIS788.11J
Introduction to Wireless Sensor Networks
Lecture uses slides from tutorials prepared by authors of these platforms
Outline
• Discussion includes not only operating systems but also programming methodology
  - some environments focus more on one than the other
• Focus here is on node-centric platforms (versus distributed-system-centric platforms)
• Platforms
  - TinyOS (applies to XSMs); slides from Culler et al.
  - EmStar (applies to XSSs); slides from UCLA
  - SOS
  - Contiki
  - Virtual machines (Maté)
  - TinyCLR
2
References
• nesC
• The Emergence of Networking Abstractions and Techniques in TinyOS
• EmStar: An Environment for Developing Wireless Embedded Systems Software
• TinyOS webpage
• EmStar webpage
3
Traditional Systems
• Well established layers of abstractions
• Strict boundaries
• Ample resources
• Independent applications at endpoints communicate point-to-point through routers
• Well attended
(diagram: user applications sit on a system layer providing threads, address space, and files; below it a network stack layered into transport, network, data link, physical layer, and drivers; applications at the endpoints communicate through routers)
4
Sensor Network Systems
• Highly constrained resources
  - processing, storage, bandwidth, power, limited hardware parallelism, relatively simple interconnect
• Applications spread over many small nodes
  - self-organizing collectives
  - highly integrated with changing environment and network
  - diversity in design and usage
• Concurrency intensive in bursts
  - streams of sensor data & network traffic
• Robust
  - inaccessible, critical operation
• Unclear where the boundaries belong
=> Need a framework for:
  - resource-constrained concurrency
  - defining boundaries
  - appl'n-specific processing
  that allows abstractions to emerge
5
Choice of Programming Primitives
• Traditional approaches
  - command processing loop (wait request, act, respond)
  - monolithic event processing
  - full thread/socket POSIX regime
• Alternative
  - provide framework for concurrency and modularity
  - never poll, never block
  - interleaving flows, events
6
TinyOS
• Microthreaded OS (lightweight thread support) and efficient network interfaces
• Two-level scheduling structure
  - long running tasks that can be interrupted by hardware events
• Small, tightly integrated design that allows crossover of software components into hardware
7
Tiny OS Concepts
• Scheduler + Graph of Components
  - constrained two-level scheduling model: threads + events
• Component
  - Commands
  - Event Handlers
  - Frame (storage)
  - Tasks (concurrency)
• Constrained Storage Model
  - frame per component, shared stack, no heap
• Very lean multithreading
• Efficient Layering
(diagram: a Messaging Component with internal state and an internal thread; it accepts the commands init, power(mode), and send_msg(addr, type, data) and signals the events msg_rec(type, data) and msg_send_done to the layer above; it uses the commands init, power(mode), and TX_packet(buf) of the layer below and handles that layer's events TX_packet_done(success) and RX_packet_done(buffer))
8
Application = Graph of Components
• Example: ad hoc, multi-hop routing of photo sensor readings
(diagram: the application is a graph of components spanning the SW/HW boundary; a sensor appln and route map sit above a Router, Active Messages, Radio Packet / Serial Packet, Radio byte / UART, and RFM, moving from packet to byte to bit level; Temp and Photo sensors sit over the ADC, with a clock component at the bottom)
• 3450 B code, 226 B data
• Graph of cooperating state machines on a shared stack
9
TOS Execution Model
• commands request action
  - ack/nack at every boundary
  - call command or post task
• events notify occurrence
  - HW interrupt at lowest level
  - may signal events
  - call commands
  - post tasks
• tasks provide logical concurrency
  - preempted by events
(diagram: the component stack as an event-driven data pump; the application component does data processing and is message-event driven via active messages; Radio Packet is an event-driven packet pump with CRC; Radio byte is an event-driven byte pump doing encode/decode; RFM is an event-driven bit pump)
10
Event-Driven Sensor Access Pattern

command result_t StdControl.start() {
  return call Timer.start(TIMER_REPEAT, 200);
}
event result_t Timer.fired() {
  return call sensor.getData();
}
event result_t sensor.dataReady(uint16_t data) {
  display(data);
  return SUCCESS;
}

(diagram: SENSE = Timer -> Photo -> LED)
• clock event handler initiates data collection
• sensor signals data ready event
• data event handler calls output command
• device sleeps or handles other activity while waiting
• conservative send/ack at component boundary
11
TinyOS Commands and Events

{
  ...
  status = call CmdName(args);
  ...
}

command CmdName(args) {
  ...
  return status;
}

{
  ...
  status = signal EvtName(args);
  ...
}

event EvtName(args) {
  ...
  return status;
}
12
TinyOS Execution Contexts
(diagram: Hardware -> Interrupts -> events -> commands, with Tasks running alongside)
• Events generated by interrupts preempt tasks
• Tasks do not preempt tasks
• Both essentially process state transitions
13
Tasks
• provide concurrency internal to a component
  - longer running operations
• are preempted by events
• are able to perform operations beyond event context
• may call commands
• may signal events
• are not preempted by tasks

{
  ...
  post TskName();
  ...
}

task void TskName() {
  ...
}
14
Typical Application Use of Tasks
• event driven data acquisition
• schedule task to do computational portion

event result_t sensor.dataReady(uint16_t data) {
  putdata(data);
  post processData();
  return SUCCESS;
}
task void processData() {
  int16_t i, sum = 0;
  for (i = 0; i < maxdata; i++)
    sum += (rdata[i] >> 7);
  display(sum >> shiftdata);
}

• 128 Hz sampling rate
• simple FIR filter
• dynamic software tuning for centering the magnetometer signal (1208 bytes)
• digital control of analog, not DSP
• ADC (196 bytes)
15
Task Scheduling
• Currently a simple FIFO scheduler
• Bounded number of pending tasks
• When idle, shuts down node except clock
• Uses a non-blocking task queue data structure (a sketch follows below)
• Simple event-driven structure + control over complete application/system graph
  - instead of complex task priorities and IPC
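As a rough illustration of what such a scheduler amounts to, here is a minimal sketch of a bounded FIFO task queue (illustrative C/nesC, not the actual TinyOS scheduler source; the queue size and function names are assumptions):

/* Illustrative sketch of a bounded FIFO task queue (not the real TinyOS source).
   Tasks are parameterless functions; post_task() may be called from interrupt or
   task context, run_next_task() from the main loop. */
typedef void (*task_fn)(void);

enum { MAX_TASKS = 8 };                 /* bounded number of pending tasks */
task_fn queue[MAX_TASKS];
uint8_t head, tail, count;

bool post_task(task_fn t) {             /* returns FALSE if the queue is full */
  bool ok = FALSE;
  atomic {                              /* briefly disable interrupts */
    if (count < MAX_TASKS) {
      queue[tail] = t;
      tail = (tail + 1) % MAX_TASKS;
      count++;
      ok = TRUE;
    }
  }
  return ok;
}

bool run_next_task(void) {              /* FIFO order; returns FALSE when idle */
  task_fn t = NULL;
  atomic {
    if (count > 0) {
      t = queue[head];
      head = (head + 1) % MAX_TASKS;
      count--;
    }
  }
  if (t == NULL) return FALSE;          /* caller may then put the node to sleep */
  t();
  return TRUE;
}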
16
Maintaining Scheduling Agility
• Need logical concurrency at many levels of the graph
• While meeting hard timing constraints
  - sample the radio in every bit window
=> Retain event-driven structure throughout application
=> Tasks extend processing outside event window
=> All operations are non-blocking
17
The Complete Application
(component graph spanning the SW/HW boundary: SenseToRfm over generic comm (AMStandard) and IntToRfm; packet level: RadioCRCPacket, UARTnoCRCPacket, CRCfilter, noCRCPacket; byte level: MicaHighSpeedRadioM with SecDedEncode, ChannelMon, RadioTiming, and RandomLFSR, plus SPIByteFIFO and UART; Timer over ClockC, photo/phototemp over ADC, and SlavePin at the hardware level)
18
Programming Syntax
• TinyOS 2.0 is written in an extension of C, called nesC
• Applications are too
  - just additional components composed with OS components
• Provides syntax for TinyOS concurrency and storage model (a small sketch follows below)
  - commands, events, tasks
  - local frame variable
• Compositional support
  - separation of definition and linkage
  - robustness through narrow interfaces and reuse
  - interpositioning
• Whole system analysis and optimization
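A hedged sketch of how these pieces look in nesC source (a hypothetical module; the Timer and Leds interfaces are used only for illustration, and the wiring into a configuration is omitted):

// Hypothetical module sketching the nesC storage and concurrency model:
// a frame variable, commands, a task, and an event handler.
module BlinkCounterM {
  provides interface StdControl;
  uses interface Timer;
  uses interface Leds;
}
implementation {
  uint16_t count;                       // frame variable: static, per-component state

  command result_t StdControl.init()  { count = 0; return call Leds.init(); }
  command result_t StdControl.start() { return call Timer.start(TIMER_REPEAT, 500); }
  command result_t StdControl.stop()  { return call Timer.stop(); }

  task void showCount() {               // task: runs later, preemptible by events
    if (count & 1) call Leds.redOn();
    else           call Leds.redOff();
  }

  event result_t Timer.fired() {        // event handler: stays short, posts a task
    count++;
    post showCount();
    return SUCCESS;
  }
}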
19
Components
• A component specifies a set of interfaces by which it is connected to other components
  - provides a set of interfaces to others
  - uses a set of interfaces provided by others
• Interfaces are bidirectional
  - include commands and events
• Interface methods are the external namespace of the component

Timer Component:
  provides {
    interface StdControl;
    interface Timer;
  }
  uses interface Clock;

(diagram: the Timer component provides StdControl and Timer, and uses Clock)
20
Component Interface
• logically related set of commands and events

StdControl.nc
interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}

Clock.nc
interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}
21
Component Types
• Configurations
  - link together components to compose a new component
  - configurations can be nested
  - complete "main" application is always a configuration
• Modules
  - provide code that implements one or more interfaces and internal behavior
22
Example of Top Level Configuration

configuration SenseToRfm {
  // this module does not provide any interface
}
implementation
{
  components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;

  Main.StdControl -> SenseToInt;
  Main.StdControl -> IntToRfm;

  SenseToInt.Clock -> ClockC;
  SenseToInt.ADC -> Sensor;
  SenseToInt.ADCControl -> Sensor;
  SenseToInt.IntOutput -> IntToRfm;
}

(wiring diagram: Main's StdControl is wired to SenseToInt and IntToRfm; SenseToInt's Clock goes to ClockC, its ADC and ADCControl to Photo, and its IntOutput to IntToRfm)
23
Nested Configuration
includes IntMsg;
configuration IntToRfm
{
provides {
StdControl
IntToRfmM
interface IntOutput;
interface StdControl;
}
IntOutput
SubControl
}
SendMsg[AM_INTMSG];
GenericComm
implementation
{
components IntToRfmM, GenericComm as Comm;
IntOutput = IntToRfmM;
StdControl = IntToRfmM;
IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG];
IntToRfmM.SubControl -> Comm;
}
24
IntToRfm Module

includes IntMsg;

module IntToRfmM
{
  uses {
    interface StdControl as SubControl;
    interface SendMsg as Send;
  }
  provides {
    interface IntOutput;
    interface StdControl;
  }
}
implementation
{
  bool pending;
  struct TOS_Msg data;

  command result_t StdControl.init() {
    pending = FALSE;
    return call SubControl.init();
  }

  command result_t StdControl.start()
  { return call SubControl.start(); }

  command result_t StdControl.stop()
  { return call SubControl.stop(); }

  command result_t IntOutput.output(uint16_t value)
  {
    ...
    if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
      return SUCCESS;
    ...
  }

  event result_t Send.sendDone(TOS_MsgPtr msg, result_t success)
  {
    ...
  }
}
25
Atomicity Support in nesC
• Split-phase operations require care to deal with pending operations
• Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (preemptible by an event which then uses that state)
• nesC supports an atomic block (see the sketch below)
  - implemented by turning off interrupts
  - for efficiency, no calls are allowed in the block
  - access to shared variables outside an atomic block is not allowed
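A minimal sketch of the pattern (hypothetical interface and variable names), guarding a flag shared between preemptible contexts:

// Sketch: protect a shared 'busy' flag against preemption by an interrupt-driven event.
bool busy;

command result_t Sensing.trigger() {
  bool start = FALSE;
  atomic {                         // interrupts are disabled for the duration of the block;
    if (!busy) {                   // no commands or events may be called inside it
      busy = TRUE;
      start = TRUE;
    }
  }
  if (start)
    return call ADC.getData();     // split-phase call made outside the atomic block
  return FAIL;
}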
26
Supporting HW Evolution
• Distribution broken into
  - apps:      top-level applications
  - tos:
      - lib:       shared application components
      - system:    hardware independent system components
      - platform:  hardware dependent system components (includes HPLs and hardware.h)
      - interfaces
  - tools:     development support tools
  - contrib
  - beta
• Component design so HW and SW look the same
  - example: temp component
      - may abstract particular channel of ADC on the microcontroller
      - may be a SW I2C protocol to a sensor board with digital sensor or ADC
• HW/SW boundary can move up and down with minimal changes
27
Example: Radio Byte Operation
• Pipelines transmission: transmits byte while encoding next byte
• Trades 1 byte of buffering for easy deadline
• Encoding task must complete before byte transmission completes
• Separates high level latencies from low level real-time requirements
• Decode must complete before next byte arrives
(timeline: the encode task for byte n+1 runs while the bits of byte n are transmitted over the RFM, so encoding stays one byte ahead of bit transmission)
28
Dynamics of Events and Threads
bit event filtered
at byte layer
bit event =>
end of byte =>
end of packet =>
end of msg send
thread posted to start
send next message
radio takes clock events to detect recv
29
Sending a Message
bool pending;
struct TOS_Msg data;
command result_t IntOutput.output(uint16_t value) {
IntMsg *message = (IntMsg *)data.data;
if (!pending) {
pending = TRUE;
message->val = value;
message->src = TOS_LOCAL_ADDRESS;
if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
return SUCCESS;
pending = FALSE;
}
return FAIL;
}
destination
length
• Refuses to accept command if buffer is still full or network refuses to
accept send command
•
User component provide structured msg storage
30
Send done Event
event result_t IntOutput.sendDone(TOS_MsgPtr msg,
result_t success)
{
if (pending && msg == &data) {
pending = FALSE;
signal IntOutput.outputComplete(success);
}
return SUCCESS;
}
}
•
Send done event fans out to all potential senders
•
Originator determined by match
 free buffer on success, retry or fail on failure
•
Others use the event to schedule pending communication
31
Receive Event
event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) {
IntMsg *message = (IntMsg *)m->data;
call IntOutput.output(message->val);
return m;
}
•
Active message automatically dispatched to associated handler
 knows format, no run-time parsing
 performs action on message event
•
Must return free buffer to the system
 typically the incoming buffer if processing complete
32
Tiny Active Messages
• Sending
  - declare buffer storage in a frame
  - request transmission
  - name a handler
  - handle completion signal
• Receiving
  - declare a handler
  - firing a handler: automatic
• Buffer management
  - strict ownership exchange
  - tx: send done event => reuse
  - rx: must return a buffer
33
Tasks in Low-level Operation
• transmit packet (see the sketch after this list)
  - send command schedules task to calculate CRC
  - task initiates byte-level data pump
  - events keep the pump flowing
• receive packet
  - receive event schedules task to check CRC
  - task signals packet ready if OK
• byte-level tx/rx
  - task scheduled to encode/decode each complete byte
  - must take less time than byte data transfer
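A hedged sketch of the transmit-side pattern (illustrative names, loosely modeled on the TinyOS 1.x byte-level interfaces; not the actual MicaHighSpeedRadioM code):

// Sketch: the send command only posts a task; the task computes the CRC
// and starts the event-driven byte pump.
command result_t Send.send(TOS_MsgPtr msg) {
  txMsg = msg;                          // frame variable holding the buffer being sent
  post computeCrcTask();                // defer the CRC work out of the caller's context
  return SUCCESS;
}

task void computeCrcTask() {
  txMsg->crc = calculateCrc(txMsg);     // hypothetical helper
  call ByteComm.txByte(nextByte());     // kick off the byte-level data pump
}

event result_t ByteComm.txByteReady(bool success) {
  if (moreBytes())                      // each completion event feeds the next byte
    call ByteComm.txByte(nextByte());
  else
    signal Send.sendDone(txMsg, success);
  return SUCCESS;
}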
34
TinyOS tools
• TOSSIM: a simulator for TinyOS programs
• ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from a base node
• Oscilloscope: Java tool to visualize (sensor) data in real time
• Memory usage: breaks down memory usage per component (in contrib)
• Peacekeeper: detects RAM corruption due to stack overflows (in lib)
• Stopwatch: tool to measure execution time of a code block by timestamping at entry and exit (in OSU CVS server)
• Makedoc and graphviz: generate and visualize component hierarchy
• Surge, Deluge, SNMS, TinyDB
35
Scalable Simulation Environment
• target platform: TOSSIM
  - whole application compiled for host native instruction set
  - event-driven execution mapped into event-driven simulator machinery
  - storage model mapped to thousands of virtual nodes
• radio model and environmental model plugged in
  - bit-level fidelity
• Sockets = basestation
• Complete application
  - including GUI
36
Simulation Scaling
37
TinyOS Limitations
• Static allocation allows for compile-time analysis, but can make programming harder
• No support for heterogeneity
  - support for other platforms (e.g. Stargate)
  - support for high data rate apps (e.g. acoustic beamforming)
  - interoperability with other software frameworks and languages
• Limited visibility
  - debugging
  - intra-node fault tolerance
• Robustness solved in the details of implementation
  - nesC offers only some types of checking
38
Em*
• Software environment for sensor networks built from Linux-class devices
• Claimed features:
  - simulation and emulation tools
  - modular, but not strictly layered architecture
  - robust, autonomous, remote operation
  - fault tolerance within node and between nodes
  - reactivity to dynamics in environment and task
  - high visibility into system: interactive access to all services
39
Contrasting Emstar and TinyOS
• Similar design choices
  - programming framework
      - component-based design
      - "wiring together" modules into an application
  - event-driven
      - reactive to "sudden" sensor events or triggers
  - robustness
      - nodes/system components can fail
• Differences
  - hardware platform-dependent constraints
      - Emstar: develop without optimization
      - TinyOS: develop under severe resource constraints
  - operating system and language choices
      - Emstar: easy-to-use C language, tightly coupled to Linux (devfs)
      - TinyOS: an extended C compiler (nesC), not wedded to any OS
40
Em* Transparently Trades-off Scale vs. Reality
• Em* code runs transparently at many degrees of "reality": high visibility debugging before low-visibility deployment
(figure: a scale-vs-reality spectrum running from Pure Simulation through Data Replay, Ceiling Array, and Portable Array to Deployment)
41
Em* Modularity
• Dependency DAG
• Each module (service)
  - manages a resource & resolves contention
  - has a well defined interface
  - has a well scoped task
  - encapsulates mechanism
  - exposes control of policy
  - minimizes work done by client library
• Application has same structure as "services"
(diagram: dependency DAG for a Collaborative Sensor Processing Application, which depends on State Sync, 3d MultiLateration, and Topology Discovery; these in turn depend on services such as Acoustic Ranging, Neighbor Discovery, Leader Election, Reliable Unicast, and Time Sync, down to the hardware: radio, audio, sensors)
42
Em* Robustness
• Fault isolation via multiple processes
• Active process management (EmRun)
• Auto-reconnect built into libraries
  - "crashproofing" prevents cascading failure
• Soft state design style
  - services periodically refresh clients
  - avoid "diff protocols"
(diagram: EmRun supervising a process tree: scheduling, depth map, path_plan, camera, motor_x, motor_y)
43
Em* Reactivity
• Event-driven software structure
  - react to asynchronous notification
  - e.g. reaction to change in neighbor list
• Notification through the layers
  - events percolate up
  - domain-specific filtering at every level
  - e.g.
      - neighbor list membership hysteresis (see the sketch below)
      - time synchronization linear fit and outlier rejection
(diagram: a notify/filter chain running up through the layers, e.g. from motor_y through path_plan to scheduling)
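As an illustration of such filtering, a membership test with hysteresis might only report a change after several consistent observations (a generic C sketch; thresholds and names are illustrative, not EmStar's actual code):

/* Generic hysteresis filter for neighbor-list membership: a node must be heard
   (or missed) several rounds in a row before the upper layers are notified. */
#include <stdbool.h>

#define JOIN_THRESHOLD  3    /* consecutive sightings before declaring "up"  */
#define LEAVE_THRESHOLD 5    /* consecutive misses before declaring "down"   */

struct neighbor {
    bool up;                 /* state currently reported to clients          */
    int  hits;               /* consecutive rounds the node was heard        */
    int  misses;             /* consecutive rounds the node was silent       */
};

/* Returns true when the reported state changes, i.e. when a notification
   should percolate up to the next layer. */
bool neighbor_observe(struct neighbor *n, bool heard_this_round) {
    if (heard_this_round) { n->hits++;   n->misses = 0; }
    else                  { n->misses++; n->hits   = 0; }

    if (!n->up && n->hits   >= JOIN_THRESHOLD)  { n->up = true;  return true; }
    if ( n->up && n->misses >= LEAVE_THRESHOLD) { n->up = false; return true; }
    return false;
}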
44
EmStar Components
• Tools
  - EmRun
  - EmProxy/EmView
• Standard IPC
  - FUSD
  - Device patterns
• Common Services
  - NeighborDiscovery
  - TimeSync
  - Routing
45
EmView/EmProxy: Visualization
(diagram: the emview GUI talks to an emproxy on the emulator, which gathers neighbor, linkstat, and motenic state from each emulated node (nodeN, ...) and from attached motes)
46
EmSim/EmCee
• Em* supports a variety of types of simulation and emulation, from simulated radio channel and sensors to emulated radio and sensor channels (ceiling array)
• In all cases, the code is identical
• Multiple emulated nodes run in their own spaces, on the same physical machine
47
EmRun: Manages Services
• Designed to start, stop, and monitor services
• EmRun config file specifies service dependencies
• Starting and stopping the system
  - starts up services in correct order
  - can detect and restart unresponsive services
  - respawns services that die
  - notifies services before shutdown, enabling graceful shutdown and persistent state
• Error/Debug Logging
  - per-process logging to in-memory ring buffers
  - configurable log levels, at run time
48
IPC: FUSD
• Inter-module IPC: FUSD
  - creates device file interfaces
  - text/binary on same file
  - standard interface
      - language independent
      - no client library required
(diagram: a client opens /dev/servicename; the server implements it through /dev/fusd, with the kfusd.o module bridging user space and the kernel)
49
Device Patterns
• FUSD can support virtually any semantics
  - what happens when client calls read()?
• But many interfaces fall into certain patterns
• Device Patterns
  - encapsulate specific semantics
  - take the form of a library:
      - objects, with method calls and callback functions
      - priority: ease of use
50
Status Device
• Designed to report current state
  - no queuing: clients not guaranteed to see every intermediate state
• Supports multiple clients
• Interactive and programmatic interface
  - ASCII output via "cat"
  - binary output to programs
• Supports client notification
  - notification via select()
• Client configurable
  - client can write command string
  - server parses it to enable per-client behavior
(diagram: the server's config and state-request handlers feed a Status Device, which fans the current state out to Client1, Client2, Client3)
51
Packet Device
• Designed for message streams
• Supports multiple clients
• Supports queuing
  - round-robin service of output queues
  - delivery of messages to all/specific clients
• Client-configurable:
  - input and output queue lengths
  - input filters
  - optional loopback of outputs to other clients (for snooping)
(diagram: the server's Packet Device maintains per-client input and output FIFO queues for Client1, Client2, Client3)
52
Device Files vs Regular Files
• Regular files:
  - require locking semantics to prevent race conditions between readers and writers
  - support "status" semantics but not queuing
  - no support for notification, polling only
• Device files (a client-side sketch follows below):
  - leverage kernel for serialization: no locking needed
  - arbitrary control of semantics:
      - queuing, text/binary, per-client configuration
  - immediate action, like a function call:
      - system call on device triggers immediate response from service, rather than setting a request and waiting for service to poll
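A hedged sketch of what this looks like from a client's side, using only standard POSIX calls (the device path is a placeholder; real EmStar clients would normally go through the device-pattern libraries):

/* Sketch of a client reading an EmStar-style status device.
   "/dev/servicename" is a placeholder path; open/select/read are plain POSIX. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>

int main(void) {
    int fd = open("/dev/servicename", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        /* Block until the service reports that new state is available. */
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) { perror("select"); break; }

        char buf[1024];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);   /* latest state, not a queue */
        if (n <= 0) break;
        buf[n] = '\0';
        printf("current state: %s\n", buf);
    }
    close(fd);
    return 0;
}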
53
Interacting With em*
• Text/Binary on same device file
  - text mode enables interaction from shell and scripts
  - binary mode enables easy programmatic access to data as C structures, etc.
• EmStar device patterns support multiple concurrent clients
  - IPC channels used internally can be viewed concurrently for debugging
  - "live" state can be viewed in the shell ("echocat -w") or using emview
54