Software Challenges in Wireless Sensor Networks

Software Challenges in Wireless Sensor Networks

Jeremy Elson, Microsoft Research
IPSN/SPOTS Tutorial, Wednesday, April 27, 2005

Outrageous(?) Opinion Tutorial

Outrageous opinion survey
– A dozen contributors
– Ideas that deserve credit are theirs; blame for dumb ideas is mine
More questions than answers
Basic question:

Why is writing software for sensor networks so hard?

Some examples of ongoing research

Is it hard?

Internet:
– The cost of a new device is really just the device
– Pressure is on hardware designers: smaller, cheaper, faster
– We can network every node that exists
Far more motes exist than we can network
– 100,000(?) exist
– Biggest networks: ~100, or maybe ~1000
– Why is this?

Can we just blame hardware?

Energy is “non-negotiable”
– Un-tethered sensing is the reason we’re here
– It will always limit radio ranges and link qualities
“Architecture challenges” come somewhere in between hardware and software
– e.g., how do we structure hierarchies?

– Billions of PCs can’t be wrong!

Even with the right structure, it’s harder – Why?

Worst of All Worlds

[Figure: systems plotted by data uncertainty (from fixed “text” input, through timing-dependent data, to real-world sensor inputs) against system uncertainty (from single- or few-threaded, through multi-threaded, to distributed, timing-dependent systems). cat and MS Word sit near the origin; TCP, Squid, MPI/Linda, and OS kernels/device drivers fall in the middle; robotics and distributed robotics sit higher; sensor networks occupy the worst corner on both axes.]

Big Challenge 1: Visibility, Visibility, Visibility!

Of a dozen sensor network researchers polled, 10 listed “debugging support” in some form as a core challenge

Visibility, visibility, visibility…
Ground truth is hard to capture, even with unlimited-capacity systems
– Damn you, real world!
– We want to interpret responses to stimuli
– The stimuli, not the sensor outputs, are the ground truths
The lab is not the same as the field
– Things that worked in the lab don’t work during deployment
– Bugs in MAC layers, positioning errors, non-Gaussian noise…
It’s also hard to model
– Models are appearing (e.g., Cerpa, Whitehouse)
– But notice that models are highly environment-specific

Visibility, visibility, visibility…
The differences have substantial effects
– If only I had $100 for every routing algorithm designed with a circular radio model…
– …I still couldn’t afford to buy a circular radio
Observing reality concurrently with design should be mandatory (see the sketch below)
– Brings us back to needing visibility
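To make the complaint concrete, here is a minimal sketch (in Python, using a toy log-normal-shadowing stand-in for measured links; all constants are illustrative) of what the circular radio model misses: real links have a wide gray region of intermediate, variable reception rather than a crisp on/off boundary at some range.

```python
import math
import random

def disk_model(d, radio_range=30.0):
    """Idealized 'circular radio': every link within range is perfect."""
    return 1.0 if d <= radio_range else 0.0

def shadowing_model(d, radio_range=30.0, sigma_db=4.0, path_loss_exp=3.0):
    """Toy log-normal shadowing stand-in for real links: reception
    rate degrades gradually with distance and varies link to link."""
    if d <= 0:
        return 1.0
    # Mean path loss relative to the nominal range, plus random shadowing.
    loss_db = 10 * path_loss_exp * math.log10(d / radio_range)
    loss_db += random.gauss(0, sigma_db)
    # Map the margin (dB) to a reception rate via a smooth threshold.
    return 1.0 / (1.0 + math.exp(loss_db))

for d in (5, 15, 25, 30, 35, 45):
    print(f"d={d:3d}m  disk={disk_model(d):.2f}  "
          f"shadowing={shadowing_model(d):.2f}")
```

A routing algorithm tuned against disk_model never sees the asymmetric, lossy, mid-quality links that shadowing_model (and reality) produce.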

Visibility, visibility, visibility…
Even traditionally easy-to-observe things (e.g., messages, states) become hard to capture, because…
– Debugging information is now huge compared to the data, instead of the opposite (vs. the Internet)
– You can’t store it on mote-sized devices, either

Simulators, Emulators, Testbeds
TOSSIM (Levis)
– Real-code simulator for motes
Avrora (Titzer, Palsberg)
– Simulates down to the microcode level
MoteLab (Welsh, et al.)
– Always-on testbed; lowers the massive systems effort required for deployments
EmStar (Girod, Elson, et al.)
Reminders:
– Sensor outputs are not ground truths
– Matt’s office is not the same as a Redwood forest

Simulators, Emulators, Testbeds

EmStar’s runtime environments allow high-visibility debugging before jumping into low-visibility deployment

[Figure: EmStar’s spectrum of runtime environments, spanning pure simulation, data replay, the ceiling array, and the portable array, out to deployment in reality.]

Visibility for Design

Even understanding the behavior of a big parallel process is hard
– How do you know what’s going on?

But even if you do…

Designing local rules to cause global behavior is hard (Culler, Liu, Welsh)
– How do you control what’s going on?

Visibility for Management

The need for visibility doesn’t end when the design is done (Madden, Polastre)
Which sensors have failed?
– For repair purposes
– Because it tells you about the data
This doesn’t obviate the need for statistical elimination of sensors we think are bad (see the sketch below)
– “Shut up, you’re confusing everyone!”
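One common way to do that statistical elimination is a robust outlier test. A minimal sketch using median absolute deviation follows; the function name and the threshold k are illustrative, not from the talk.

```python
import statistics

def suspect_sensors(readings, k=3.5):
    """Flag sensors whose reading deviates from the group median by
    more than k median-absolute-deviations (a robust outlier test).

    readings: dict mapping sensor id -> latest reading
    Returns the set of sensor ids to exclude from aggregation.
    """
    values = list(readings.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # all readings (nearly) identical; nothing to flag
        return set()
    return {sid for sid, v in readings.items()
            if abs(v - med) / mad > k}

# Example: node 7 is stuck at a bogus value.
readings = {1: 21.3, 2: 21.1, 3: 20.9, 4: 21.4, 7: 85.0}
print(suspect_sensors(readings))  # -> {7}
```

Note the tension with the point above: a node excluded here still matters, both for repair and because its failure is itself data.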

Another take: Predictability

Instead of observing what happened (post facto), predict what will happen (static analysis) (Srivastava)

“Giving Up”

Tenet (Kohler)
– Motes can only do the most basic tasks (e.g., thresholding), and route back to a microserver
– In the low-visibility nodes, complexity is limited to reasoning about one node’s data
– The “hard part” happens at microservers
Mechanism vs. policy separation for routing (Girod)
– Motes send link states to the microserver
– The microserver computes routes and installs them on the motes
A sketch of the mote-side half of this split follows.
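A minimal sketch of the mote side under this kind of split, written in Python rather than nesC for brevity; every name here (install_route, read_sensor, radio_send) is a hypothetical stand-in, not Tenet’s or EmStar’s actual API.

```python
# Mote-side logic under a Tenet-style split: the mote only thresholds
# and forwards; the microserver does the thinking. All names here are
# illustrative stand-ins.

THRESHOLD = 50     # task parameter, installed by the microserver
next_hop = None    # routing entry, also installed by the microserver

def install_route(hop):
    """Microserver pushes a precomputed route to this mote."""
    global next_hop
    next_hop = hop

def read_sensor():
    """Stand-in for an ADC read."""
    return 57

def radio_send(dest, payload):
    """Stand-in for a single-hop radio send."""
    print(f"-> {dest}: {payload}")

def on_sample_timer():
    """Periodic task: sample, threshold, forward hits toward the
    microserver. No multi-node reasoning happens on the mote."""
    value = read_sensor()
    if value > THRESHOLD and next_hop is not None:
        radio_send(next_hop, {"event": "threshold", "value": value})

install_route(hop=3)
on_sample_timer()
```

The design choice is the same in both examples: keep the low-visibility devices trivially simple, and move everything that needs debugging somewhere you can actually see it.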

Big Challenge 2: Higher Level Abstractions

How long can we keep on doing it this way?

Why are abstractions important?

They let you reason about software at a higher level
– Right now we manually script every packet sent and received, and most timers… (see the sketch below)
They (can) let software interoperate better
– Applications can share the underlying building blocks; the system is smaller and more consistent
– Services like TCP port numbers
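To see what “manually script every packet” means in practice, here is a minimal sketch of the low-level event-handler style, in Python rather than nesC; all names are hypothetical stand-ins for TinyOS-style primitives.

```python
# The low-level style: the application hand-schedules timers and
# hand-crafts every packet. All names are illustrative stand-ins.

MY_ID = 7
PARENT = 1
_seq = 0
_seen = set()

def next_seq():
    global _seq
    _seq = (_seq + 1) % 256
    return _seq

def on_timer_fired(adc_read, radio_send):
    """Fired every sample period: build and send a packet by hand."""
    reading = adc_read()
    pkt = bytes([MY_ID, next_seq(), reading & 0xFF])
    radio_send(PARENT, pkt)

def on_receive(pkt, radio_send):
    """Manually parse and forward every packet we overhear."""
    src, seq = pkt[0], pkt[1]
    if (src, seq) not in _seen:        # crude duplicate suppression
        _seen.add((src, seq))
        radio_send(PARENT, pkt)

on_timer_fired(adc_read=lambda: 73,
               radio_send=lambda d, p: print(f"-> {d}: {p.hex()}"))
```

Every application re-implements this sequencing, parsing, and duplicate-suppression boilerplate; the abstractions below aim to lift it out.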

Higher Level Abstractions
Consider the compiler arc:
– First, “compiled code is too slow”
– Second, “but computers are now fast”
– Third, “the compiler does it better anyway”
We’re still at Step 1
– Not with CPU cycles (we use compilers)…
– …but with bandwidth, energy, and memory
Unfortunately there may not be a Step 2

What if there’s no Step 3?

The Internet has TCP, which is well-behaved and which many apps use
– Nice model: fixed data length, variable time
Some (minority?) apps don’t fit this model and adapt at the app layer (e.g., video quality)
Sensor network congestion control (e.g., Woo, Hull): good first steps, but still focused on collision avoidance
Root of the problem: like streaming video, rate-adaptive sensor apps must be the common case; they aren’t!
– The common case is no longer the “fixed data size, transport it when you can” model; the data is infinite (see the sketch below)
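What app-layer rate adaptation might look like for an “infinite data” sensing task: a minimal AIMD sketch driven by a congestion signal from the transport. The class, hooks, and constants are all illustrative.

```python
# AIMD on the sampling rate rather than on a window: the sensor app
# treats its own data as compressible/droppable, like streaming video.

MIN_HZ, MAX_HZ = 0.1, 100.0

class AdaptiveSampler:
    def __init__(self, rate_hz=10.0):
        self.rate_hz = rate_hz

    def on_ack(self):
        """Transport delivered our data: probe for more capacity."""
        self.rate_hz = min(MAX_HZ, self.rate_hz + 0.5)   # additive increase

    def on_congestion(self):
        """Transport signaled congestion: back off sharply."""
        self.rate_hz = max(MIN_HZ, self.rate_hz / 2.0)   # multiplicative decrease

sampler = AdaptiveSampler()
for event in ["ack", "ack", "congestion", "ack"]:
    sampler.on_ack() if event == "ack" else sampler.on_congestion()
    print(f"{event}: sampling at {sampler.rate_hz:.1f} Hz")
```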

Some Abstractions

TinyDB (Madden)
– Among the first; the programming interface is queries (see the example after this list)
Abstract regions (Welsh)
– Program collections at a higher layer than sending messages
Reliable Multi-Hop State Sync (Girod)
– Publish and update structs over lossy networks
This week, we’ve seen state machines (Kasten) and new intermediate languages (Newton)
But the real question: do they work across a diversity of applications? Only time will tell.
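For the first of these: TinyDB’s interface is roughly SQL over a virtual sensors table. The query form below follows the TinyDB literature; the host-side run_query entry point is a hypothetical stand-in, not TinyDB’s actual API.

```python
# Instead of scripting packets, the programmer poses a query; the
# system handles sampling, routing, and in-network aggregation.

QUERY = """
    SELECT nodeid, light, temp
    FROM sensors
    WHERE temp > 30
    SAMPLE PERIOD 1024
"""

def run_query(query, on_tuple):
    """Hypothetical host-side entry point: disseminate the query into
    the network, then invoke on_tuple for each result tuple that
    arrives. (Dissemination and collection elided.)"""
    ...

run_query(QUERY, on_tuple=print)
```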

Sub-Challenge: Re-Usable Software
TinyOS, EmStar, etc. are modular, yet reuse isn’t as pervasive as it “should be”
One part software engineering, one part Big Problem (as in congestion control)
An encouraging first step: SP (Sensor Protocol) by Polastre, Culler, et al.

– Standardized interface to the MAC, with some basic feedback in both directions (a sketch of such an interface follows)
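A minimal sketch of what an SP-like narrow waist might look like: one interface that any MAC implements, with feedback flowing upward as well as data flowing down. The method names are illustrative, not SP’s actual interface.

```python
from abc import ABC, abstractmethod

class UnifiedLink(ABC):
    """One narrow interface between network protocols and any MAC."""

    # Downward: any protocol above can submit messages.
    @abstractmethod
    def send(self, dest, payload, urgent=False): ...

    # Upward: the MAC reports link conditions to the protocols above.
    @abstractmethod
    def link_quality(self, neighbor): ...

    @abstractmethod
    def congestion(self): ...

class CsmaLink(UnifiedLink):
    """One concrete MAC behind the shared interface."""
    def send(self, dest, payload, urgent=False):
        print(f"csma send -> {dest} (urgent={urgent})")
    def link_quality(self, neighbor):
        return 0.8        # stub estimate
    def congestion(self):
        return False      # stub channel assessment

link = CsmaLink()
link.send(dest=2, payload=b"hello", urgent=link.congestion())
```

The point of the narrow waist is that routing, dissemination, and time sync can all share one MAC (and its feedback) instead of each bundling their own.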

Meta-Challenge: Applications That Do More Than Web Cameras
The 1999 “Grand Challenges” paper:

“Data processing must be in-network”

Where are we now?

Many (most?) applications are “bring all the data back”
– Some notable exceptions, including sniper tracking (Vanderbilt), magnetometer-based car tracking (Berkeley), and self-healing networks (Sensoria)

Self-Healing Networks

Sensoria Corp., under contract from DARPA
Goal: nodes localize themselves within 1 m, MOVE to fill in gaps
Network completely self-organizing, autonomous at many layers

Closing the Loop

20 nodes; 10 MOBILE. Then, the network partitions…

Summary

Ultimately we want to get to systems that do amazing things
We can’t just keep building on ideas; we have to build on each other’s systems
Abstractions are needed, so we can build additive systems instead of just more systems
None of this will happen without visibility
And above all…

What the heck is the killer application?!!??

Acknowledgements

David Culler, Henri Dubois-Ferrier, Lew Girod, Richard Guy, Bill Kaiser, Eddie Kohler, Jie Liu, Sam Madden, Andrew Parker, Joe Polastre, Matt Welsh, Alec Woo, Yan Yu, Feng Zhao

Thank you!

Questions? Comments?