NEST PI Presentation Template


Wireless Mobile Platform OEP
Lessons, Progress, and Future Work
David Wagner
UC Berkeley
1
Platform Innovations
TinyOS 1.1
New release processes
2
Integrating Midterm Demo Enhancements
• TinyOS improvements
– Better radio stack (Chipcon CC1000)
– Mica2Dot support
– Sensorboard support (acoustic)
• Hardware designs published
– Magnetometer board with reset
– Ultrasound board with dedicated TinyOS processor
– Dual-voltage power board w/ rechargeable battery
– Acoustic board with greater amplitude and dedicated TinyOS processor
[Figure: board photos – acoustic, mag, ultrasound, dot]
3
TinyOS 1.1 Release
• nesC 1.1: data race detection, atomicity
• TinyOS:
– Mica2 and Mica2Dot support (Chipcon radio)
– power management, battery monitoring
– TinySec
– ad-hoc routing (lib/Route)
– Matchbox filesystem, for flash chip
• Development environment:
– network reprogramming (Xnp)
– TinyViz (visualization for TOSSIM)
– on-mote debugging (gdb over JTAG)
• Distribution re-structured
– RPMs for Linux and Cygwin
– HW platform support re-organized
– contrib directory
– minor release process created
4
Beta Release Process
• New process: get your new ideas into TinyOS, faster
– open a new "beta project" – comes with SOW and time limit
– develop – often multiple participants
– merge: becomes part of default CVS tree
– appears in next monthly minor release
• Monthly release process: distribute new code faster
– 1st of the month: create CVS branch
– run regression tests (new...) on branch
– fix discovered problems
– create release from branch (approx. 1 week after branch), announce to world
– merge fixes back to main trunk
– merge major changes from beta directory into TinyOS CVS (gives 1 month for widespread testing before release)
• Example: UCLA/UCB collaboration on B-MAC
5
Network Innovations
A better MAC layer
High-speed sampling
Lib/Route
Reliable bulk transport
6
B-MAC
• New features:
– Automatic Gain Control (AGC)
– Noise floor estimation
– Filtering for Clear Channel Assessment (CCA) – sketch below
– Low-power listening
– Link-layer ACKs
• Improvements:
– Raise max bandwidth from 42 to 53 packets/sec
– Double effective data throughput
– 85% channel utilization
– Time synchronization interfaces for UCLA/VU
– Configurable backoff
• Included in TinyOS 1.1.2 (December '03 release)
• Example of a beta-process success
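To make the noise-floor and CCA idea concrete, here is a minimal C sketch of the general technique (an exponentially weighted noise-floor estimate compared against the current RSSI). The constants and function names are illustrative assumptions, not the actual B-MAC implementation.

    /* Illustrative sketch of noise-floor estimation + clear-channel assessment.
       Not the B-MAC source: constants and names are hypothetical. */
    #include <stdint.h>
    #include <stdbool.h>

    static uint16_t noise_floor = 0;   /* EWMA of RSSI samples taken while idle */

    /* Update the noise-floor estimate from an RSSI sample taken when not receiving. */
    void update_noise_floor(uint16_t rssi_sample)
    {
        /* EWMA with alpha = 1/16, done in integer arithmetic */
        noise_floor = noise_floor - (noise_floor >> 4) + (rssi_sample >> 4);
    }

    /* Clear Channel Assessment: the channel is considered clear if the current
       RSSI is not significantly above the estimated noise floor. */
    bool channel_clear(uint16_t current_rssi, uint16_t threshold)
    {
        return current_rssi < noise_floor + threshold;
    }

Low-power listening then only keeps the radio on when such a check suggests a transmission is actually present.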
7
High-frequency Sampling & Logging
[Figure: sampling pipeline – sensor → 16 kHz → RAM (real-time averaging) → 1 kHz → Matchbox flash filer]
• Supports up to ~16 kHz sampling on mica and mica2 motes (averaging sketch below)
• Fast logging flash-based file system (512 kB)
• Low jitter: < 10 μs at 6 kHz
[Figure: sample traces at 2.5 kHz – mote shaken, mote dropped]
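A minimal sketch of the real-time averaging stage: every 16 raw samples at 16 kHz are averaged down to one 1 kHz value and appended to a buffer destined for the Matchbox filer. The names and buffer sizes are assumptions for illustration, not the deployed code.

    /* Sketch: decimate a 16 kHz ADC stream to 1 kHz by block averaging,
       buffering results for the flash filer. Hypothetical names/sizes. */
    #include <stdint.h>

    #define DECIMATION   16            /* 16 kHz -> 1 kHz */
    #define LOG_BUF_LEN  256

    static uint32_t acc = 0;
    static uint8_t  count = 0;
    static uint16_t log_buf[LOG_BUF_LEN];
    static uint16_t log_len = 0;

    void flush_to_flash(const uint16_t *buf, uint16_t n);   /* e.g. a Matchbox write */

    /* Called for every raw sample, i.e. at 16 kHz. */
    void adc_sample_ready(uint16_t sample)
    {
        acc += sample;
        if (++count == DECIMATION) {
            log_buf[log_len++] = (uint16_t)(acc / DECIMATION);
            acc = 0;
            count = 0;
            if (log_len == LOG_BUF_LEN) {   /* hand a full buffer to the filer */
                flush_to_flash(log_buf, LOG_BUF_LEN);
                log_len = 0;
            }
        }
    }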
8
Robust Ad-hoc Routing
• Convergence of lib/Route multihop routing interface
– Several different implementations (RFM and CC)
– Applications select implementation in config (e.g. TASK, Surge)
• Link quality management
• Neighborhood management
• Adaptive cost-based route selection (sketch below)
– Thresholded shortest-path
– Minimum path-loss
– Minimum retransmission
• Link-level retransmission
• Send queueing
• Extensive study (SenSys)
• >98% reliability on CC1000
[Figure: RFM end-to-end success vs. distance]
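As an illustration of cost-based parent selection, the sketch below picks the neighbor minimizing an expected-transmissions-style cost (link cost plus the neighbor's advertised path cost), with a quality threshold to drop marginal links. The structures, scaling, and threshold are hypothetical, not the lib/Route implementation.

    /* Sketch: choose a routing parent by minimum estimated cost.
       Structures and cost model are illustrative, not lib/Route itself. */
    #include <stdint.h>

    #define MAX_NEIGHBORS 16
    #define MIN_QUALITY   32          /* out of 255: ignore very poor links */

    struct neighbor {
        uint16_t addr;
        uint8_t  link_quality;        /* 0..255, estimated delivery ratio */
        uint16_t path_cost;           /* cost this neighbor advertises to the sink */
        uint8_t  valid;
    };

    /* Expected transmissions on this link, scaled by 256 (~256/delivery_ratio). */
    static uint16_t link_cost(uint8_t quality)
    {
        return quality ? (uint16_t)(65535u / quality) : 0xFFFF;
    }

    /* Return the index of the best parent, or -1 if none is usable. */
    int select_parent(const struct neighbor *tbl, int n)
    {
        int best = -1;
        uint32_t best_cost = 0xFFFFFFFF;
        for (int i = 0; i < n && i < MAX_NEIGHBORS; i++) {
            if (!tbl[i].valid || tbl[i].link_quality < MIN_QUALITY)
                continue;             /* thresholding: drop marginal links */
            uint32_t cost = (uint32_t)link_cost(tbl[i].link_quality) + tbl[i].path_cost;
            if (cost < best_cost) {
                best_cost = cost;
                best = i;
            }
        }
        return best;
    }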
9
Reliable Bulk Transport
• Goal: reliable, unicast transfer of large data sets
• Principles:
– Matched to physical hierarchy: data set/flash, segment/SRAM, packet/radio
– Explicit session (segment) creation handshake, implicit tear-down
– Selective acknowledgements, retransmission (sketch below)
– Data transferred reliably from RAM to RAM
– Speed controlled by timer, to avoid channel saturation
• Results:
– Reliable, even for 20% loss rates
– Throughput degrades gracefully
[Figure: throughput (bytes/s) and # of packets sent / 10 vs. loss rate, window = 16; example throttled to 10 pkts/sec]
[Figure: transfer message sequence – buf alloc, start, ok data, ack vector]
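A minimal sketch of the selective-acknowledgement idea: the receiver records which packets of the current segment arrived in a bit vector and reports it back, so the sender retransmits only the gaps. The packet formats and names are assumptions, not the actual transport code.

    /* Sketch of selective acks for one SRAM-sized segment. Hypothetical layout. */
    #include <stdint.h>
    #include <string.h>

    #define PKTS_PER_SEGMENT 32
    #define PKT_PAYLOAD      28

    static uint8_t  seg_buf[PKTS_PER_SEGMENT * PKT_PAYLOAD];   /* segment in SRAM */
    static uint32_t ack_vector;                                 /* bit i = packet i received */

    void segment_start(void)
    {
        ack_vector = 0;
        memset(seg_buf, 0, sizeof seg_buf);
    }

    /* Receiver: store an arriving data packet and mark it in the ack vector. */
    void on_data_packet(uint8_t pkt_index, const uint8_t *payload)
    {
        if (pkt_index >= PKTS_PER_SEGMENT) return;
        memcpy(&seg_buf[pkt_index * PKT_PAYLOAD], payload, PKT_PAYLOAD);
        ack_vector |= (uint32_t)1 << pkt_index;
    }

    /* Sender: given the returned ack vector, retransmit only the missing packets. */
    void retransmit_missing(uint32_t acks, void (*send_pkt)(uint8_t index))
    {
        for (uint8_t i = 0; i < PKTS_PER_SEGMENT; i++)
            if (!(acks & ((uint32_t)1 << i)))
                send_pkt(i);
    }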
10
Pushing Scale Throughout the Platform
•Manufacture and deployment time
•Network Longevity
•Network Programming
•Simulation
•Testbeds and New Platforms
11
Some Lessons from Midterm Demo
[Figure: midterm demo node package – reflector, exposed components, O-ring seal, ultrasound, Mica2Dot, mag sense, power, battery; good thermal characteristics, collision absorption]
• Packaging matters
– Good: minimize exposure to the elements, lift antenna away from ground
– Bug: unexpected interaction between antenna & magnetometer
• Design for operation
– "If you have to touch every node, you lose"
– Bugs: no reset button, recharging requires reassembly, LEDs are invisible
– Antenna insertion and pin alignment were major time consumers
12
Midterm Lessons
• Time is critical
– Design, prototype, test hardware and software
– Manufacturing a large number of nodes
– Test, integrate, and fix cycle when working at scale
– Assembly and disassembly of nodes
– Develop glue between tiers in the network
• Issues for large deployments
– Power consumption and recharge
– Maintenance
• Need robust hardware and packaging
• Must expect individual nodes to be "hands-free"
– Reprogramming
– Don't use modular hardware
• Only causes more faults for no benefits
• Support services are invaluable
– Ident: ping, version, compile time
– Command-line scripting
– Remote configuration
– Reset / sleep
• Development methodology
– More nodes => simpler code
– Completely modular subsystems allow testing and debugging in isolation
• Node position
• Sensing and report
• Mobile-to-mobile routing
• Pursuer navigation
– Extensive parameterization reduces the need for reprogramming
• Thresholds, update rates, ranges, etc…
– Assume lossy links
• Simplicity pays
– Robustness trumps features
• Multiple fall-backs per service
– Allow testing and debugging of service in isolation at scale
– Mitigates risk
– Localization: fixed, remote set, ranging
– Routing: single hop, 802.11 back channel, landmark multihop
• Global access during debugging
– Big antenna
• Wish we had done
– Better regression suites
– Logged everything (the science of the demo)
13
Long-term Deployments in Harsh Climate
• Deployed two monitoring networks on Great Duck Island
– Node design based on NEST midterm package
– PIR for detection + microclimate
– Large, non-uniform geographic area
– Unintrusive, survive heat, rain, storms
• Shallow network deployed mid-June
– 49 nodes: 23 weather station motes, 26 burrow motes
– Reading every 5 minutes
– Network diameter 70 meters
– Asymmetric, bi-directional communication with low-power listening – send data packets with short preambles, receive packets with long preambles
– Expected lifetime: 4+ months
• Deeper network in July/August
– 92 nodes: 44 weather motes, 48 burrow motes
– Network diameter: 1/5 mile
– Low-power listening: trade off channel capacity for average power consumption (hit 10 uA sleep, 12 mA active)
– Readings sent to base station every 20 minutes, route updates every 20 minutes
– Expected lifetime: 2.5 months
– 2/3 of nodes join within 10 minutes, remainder within 6 hours
– Still running after hurricane
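As a rough consistency check on such lifetime figures, the standard duty-cycle estimate applies; the duty cycle and battery capacity below are illustrative assumptions, not measured GDI parameters.

    I_avg = d * I_active + (1 - d) * I_sleep,    lifetime ≈ C_battery / I_avg

    Example with assumed numbers: d = 10%, I_active = 12 mA, I_sleep = 10 uA, C_battery ≈ 2500 mAh
    => I_avg ≈ 1.2 mA, lifetime ≈ 2500 / 1.2 ≈ 2100 h, roughly 3 months – the same order as the
    2.5-month estimate above.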
14
Typical Network Architecture
[Figure: typical tiered architecture – widely dispersed, dense sensor patches (sensor nodes, patch network) connect through a gateway to a transit network (1 km in GDI, solar-powered mote with big antenna), then to a basestation, and over the base-remote link and Internet to the data service and client data browsing and processing]
15
Power Management: Cooperative Interfaces
• Power management extends standard control
– 1000-fold range of power draw
– Components informed of intention to go to sleep
– Take internal actions
– Propagate control
– Scoreboard determines permissible depth of sleep state (sketch below)
– Scheduler drops to sleep on idle
– Key interrupts drive wake-up
• Watchdog is built in similar fashion
– Low-level "grenade timer"
– Various levels of health monitoring to keep it from going off
– Implemented and incorporated into TinyOS Appln Sensor Kit (TASK)
[Figure: component diagram – application (query processing, detection, reporting), filters, timer, sensor and comm drivers & stack, sensors 1–3, radio, clock, power management]
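A minimal C sketch of the scoreboard idea: each component registers the deepest sleep state it can currently tolerate, and the scheduler sleeps only as deeply as every component permits. The names and state encodings are hypothetical, not the TinyOS implementation.

    /* Sketch of a sleep-depth scoreboard. Hypothetical names/encodings. */
    #include <stdint.h>

    enum sleep_state { AWAKE = 0, IDLE = 1, STANDBY = 2, DEEP_SLEEP = 3 };  /* larger = deeper */

    #define NUM_COMPONENTS 8
    static uint8_t allowed[NUM_COMPONENTS];   /* deepest state each component permits */

    /* A component (radio, timer, sensor driver, ...) reports how deep it can sleep. */
    void pm_set_allowed(uint8_t component_id, enum sleep_state deepest_ok)
    {
        if (component_id < NUM_COMPONENTS)
            allowed[component_id] = (uint8_t)deepest_ok;
    }

    /* Scheduler, on an empty task queue: sleep as deeply as every component permits. */
    enum sleep_state pm_permitted_depth(void)
    {
        uint8_t depth = DEEP_SLEEP;
        for (uint8_t i = 0; i < NUM_COMPONENTS; i++)
            if (allowed[i] < depth)
                depth = allowed[i];   /* the scoreboard is the minimum over components */
        return (enum sleep_state)depth;
    }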
16
Monitoring and Management of the Network
• Determine what went wrong and take action
–Extremely simple and reliable
–Ensure liveness, preserve access to network nodes, help with fault diagnosis
• Network monitoring
– Enforce minimum and maximum transmission rates
– Verify minimum reception rate
• Node health monitoring
– Beyond battery voltage – sensor data monitoring
– Failure of a sensor is an indicator of node health
– Detectable failures impacting lifetime – GDI humidity and clock skew experience
• Program integrity checks
– Stack overflow
• Watchdog / deadman timer (sketch below)
– Requires attention from different parts of the system; reset if abandoned
– Addresses many different time scales
– Tests low-level system components – timers and task queue – to ensure basic system liveness
– First use: min reception rate
• Partial system shutdown
– Flash/EEPROM writing under low-voltage conditions, broken sensors, etc.
• Fault handling
– Error logging and/or reporting
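A small sketch of the deadman-timer idea: several parts of the system (timer ticks, the task queue, radio reception) must each check in periodically, and the node is reset if any of them is abandoned for too long. The structure and periods are illustrative, not the deployed watchdog.

    /* Sketch of a multi-source deadman timer. Names and periods are hypothetical. */
    #include <stdint.h>

    enum { SRC_TIMER = 0, SRC_TASKQ, SRC_RADIO_RX, NUM_SOURCES };

    static uint32_t last_checkin[NUM_SOURCES];
    static const uint32_t max_silence[NUM_SOURCES] = {
        10,        /* timer tick must fire within 10 s  */
        10,        /* task queue must drain within 10 s */
        3600       /* min reception rate: hear something within an hour */
    };

    void hw_reset(void);   /* low-level "grenade timer" reset */

    /* Each subsystem calls this when it makes progress. */
    void deadman_checkin(uint8_t src, uint32_t now)
    {
        if (src < NUM_SOURCES)
            last_checkin[src] = now;
    }

    /* Called periodically from a low-level timer: reset if any source was abandoned. */
    void deadman_poll(uint32_t now)
    {
        for (uint8_t i = 0; i < NUM_SOURCES; i++)
            if (now - last_checkin[i] > max_silence[i])
                hw_reset();
    }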
17
Low-power Event Dissemination: Trickle
• Problem: propagate news of rare events, efficiently
– e.g. reprogramming, configuration change, new capsules
– energy dominated by checking whether you have the most current version
– trade-off between rate of notification and cost of maintenance
– must adapt to deployment density
• Denser => more listening, less transmitting
• Simple, "polite gossip" algorithm (sketch below)
• "Every once in a while, broadcast what version you have, unless you've heard some other nodes broadcast the same thing."
• Behavior (simulation and deployment):
– Scalability: thousands of nodes, huge variations in density
– Maintenance: a few sends per hour
– Propagation: less than a minute
– Message flux per unit area is ~constant, despite variation in node density
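The sketch below captures the polite-gossip rule in C: pick a random point in the current interval, suppress the broadcast if enough consistent advertisements were overheard, double the interval when nothing changes, and shrink it when an inconsistency appears. Constants and names are illustrative, not the Trickle implementation.

    /* Sketch of the Trickle "polite gossip" timer. Hypothetical names/constants. */
    #include <stdint.h>
    #include <stdlib.h>

    #define TAU_MIN        2      /* seconds */
    #define TAU_MAX     3600
    #define K_SUPPRESS     2      /* suppress if >= k consistent broadcasts heard */

    static uint32_t tau = TAU_MIN;
    static uint32_t heard_consistent = 0;
    static uint16_t my_version;

    void broadcast_version(uint16_t version);

    /* Start of each interval: return a broadcast time drawn from [tau/2, tau]. */
    uint32_t interval_start(void)
    {
        heard_consistent = 0;
        return tau / 2 + (uint32_t)(rand() % (tau / 2 + 1));
    }

    /* At the chosen time: broadcast only if we have not heard enough others. */
    void on_timer(void)
    {
        if (heard_consistent < K_SUPPRESS)
            broadcast_version(my_version);
    }

    /* End of each interval with nothing new heard: back off (up to tau_max). */
    void interval_end(void)
    {
        tau = (tau * 2 > TAU_MAX) ? TAU_MAX : tau * 2;
    }

    /* On hearing a neighbor's advertisement. */
    void on_advertisement(uint16_t version)
    {
        if (version == my_version)
            heard_consistent++;
        else
            tau = TAU_MIN;        /* inconsistency: listen and respond quickly again */
    }

Suppressing redundant broadcasts is what keeps the message flux per unit area roughly constant as density varies.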
18
Time to Reprogram vs. Tau, 10-Foot Spacing (seconds)
[Figure: surface plot of reprogramming time, banded from 0–2 s up to 18–20 s]
19
Scalable Network Programming
• Out-of-core, epidemic algorithm
– Program (= flash) divided into a sequence of pages
• Version has sequence # + hash per page + pages
– Page (segment) size determined by RAM, flash xfer buffer, and size of directory
– Nodes hold version + page diffs; disseminate either
• Basic primitive – reliable page multicast
– Fragment and transmit packets per page, vector ack, selective fix-up per receiver, snoop, consistency checking
– Receive / ack process must be limited to RAM (flash in XNP)
• Dissemination algorithms
– Fast push, pipeline series of pages in space
– Epidemic maintenance
• Eliminate the "wrong image" problem
• New version: propagate page diffs
– Potentially, multiple levels of pages to organize code and metadata
– System image: version number + vector of page hashes (sketch below)
– Organize image to reduce # of pages that change
• Linearization of functions, holes
• Blue-sky ideas
– default TinyOS network reprogramming kernel as the system restore checkpoint
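To make the "version number + vector of page hashes" idea concrete, here is a small C sketch that compares two image summaries and reports which pages would need to be propagated. The hash width, page count, and structure names are assumptions for illustration.

    /* Sketch: image summary and page-diff computation. Hypothetical layout. */
    #include <stdint.h>

    #define NUM_PAGES 56   /* illustrative: e.g. a 128 KB image split into pages */

    struct image_summary {
        uint16_t version;
        uint32_t page_hash[NUM_PAGES];
    };

    /* Fill a bitmap (one bit per page) marking pages whose hash differs,
       i.e. the pages that must be disseminated for the new version. */
    void diff_pages(const struct image_summary *old_img,
                    const struct image_summary *new_img,
                    uint8_t changed[(NUM_PAGES + 7) / 8])
    {
        for (int i = 0; i < (NUM_PAGES + 7) / 8; i++)
            changed[i] = 0;
        if (old_img->version == new_img->version)
            return;                               /* same version: nothing to send */
        for (int p = 0; p < NUM_PAGES; p++)
            if (old_img->page_hash[p] != new_img->page_hash[p])
                changed[p / 8] |= (uint8_t)(1 << (p % 8));
    }

Organizing the image so edits stay localized (linearization, holes) keeps this changed-page set small.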
20
Network Reprogramming: Deluge
• Data dissemination protocol for program data
–Program size (128K) >> RAM (4K)
–Data represented as a set of fixed-size pages
• Basic Protocol
1. Advertise – Version: 2, Pages: (0,1,2)
2. Request – Page: 0
3. Transmit – Version: 2, Pg: 0, Pkt: 15
4. Repair – Pg: 0, Pkt: 3
• Optimizations considered
– Transmit data rates, pipelining of pages, forward error correction, sender selection, sender suppression
• Currently:
– Implemented in nesC, working in TOSSIM
– Page-based bootloader working on nodes
– Energy consumption, time-to-completion acceptable
– Pipelining of pages is a big win; transmitting at maximum data rate minimizes time-to-completion and energy wasted due to idle listening
– Forward error correction is beneficial in sparse nets
– All other optimizations show no significant difference and increase complexity
– Testing on larger real-world networks
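A compressed C sketch of the advertise/request/transmit exchange outlined in steps 1–4 above, from the receiver's point of view. Message formats, timers, and names are hypothetical; the real Deluge component is written in nesC.

    /* Sketch of Deluge-style page exchange (receiver side). Hypothetical names. */
    #include <stdint.h>

    #define PKTS_PER_PAGE 48

    static uint16_t my_version;
    static uint8_t  pages_complete;                         /* pages 0..pages_complete-1 stored */
    static uint8_t  pkt_bitmap[(PKTS_PER_PAGE + 7) / 8];    /* packets of the page in progress */

    void send_request(uint16_t version, uint8_t page);
    void store_packet(uint8_t page, uint8_t pkt, const uint8_t *data);

    /* On hearing an advertisement {version, pages available}. */
    void on_advertise(uint16_t adv_version, uint8_t adv_pages)
    {
        if (adv_version > my_version ||
            (adv_version == my_version && adv_pages > pages_complete)) {
            /* the neighbor is ahead: request the next page we are missing, in order */
            send_request(adv_version, pages_complete);
        }
    }

    /* On receiving a data packet of the page we asked for. */
    void on_data(uint8_t page, uint8_t pkt, const uint8_t *data)
    {
        if (page != pages_complete || pkt >= PKTS_PER_PAGE)
            return;                          /* only fill the page in progress */
        store_packet(page, pkt, data);
        pkt_bitmap[pkt / 8] |= (uint8_t)(1 << (pkt % 8));
        /* when all bits are set the page is complete; any remaining gaps are
           reported back in a repair request (step 4 above) */
    }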
21
Network Reprogramming: Deluge
• Sample TOSSIM data on 25 nodes in a long rectangle
[Figure: breakdown of energy used (uAh) for a full-speed send vs. spacing between nodes – EEPROM writes, EEPROM reads, messages sent, radio listening, messages received]
[Figure: effect of packet send period (64 ms, 128 ms, 256 ms, full speed) on time to completion (s) vs. spacing between nodes, p < .01]
[Figure: effect of sender selection method (most recent adv, closest to src, farthest from src, closest to reqer) on time to completion (s) vs. spacing between nodes]
22
Scalable, Scriptable TinyOS Simulation
• TOSSIM: simulate TOS implementation at controlled
level of detail
– bit-level network: ~100 nodes in real time
– packet-level network: thousands of nodes
• TinyViz
– provides convenient means of looking at what is going on
inside your distributed applications
– extensible plug-ins
• T-Script
– allows scripting of everything going on around the simulation
– reproducible dynamism
– regression testing, parameter studies, mobility, fault injection,
environmental model (sensors/actuators), logging, debugging
– scripts are written in Java-based Python
• integrated with full tool chain
– event-driven => each script agent is a closure
23
Development Testbeds
• Often need a deployed collection of test motes with a powerful back-channel
– instrumentation, debugging, fast reprogramming
– intermediate point in the simulation / emulation / deployment spectrum
– real network, controlled environment, large instrumentation and control bandwidth
• Developed Ethernet approach (with Intel) and provided it to Crossbow
– Atmel programmer on the mote end
– Lantronix 10BT Ethernet-serial device
– uses Power over Ethernet
• Tool suite for programming, logging, data collection
• Ethernet-based programming/monitoring platform
• eMote (NOT a mote)
– Production version of the EPRB
– 802.3af Power over Ethernet capable
• Enables large-scale testbeds & educational tools
24
Towards a Level 2 Node
• iPAQ, Intrinsyc, etc. don't have the connectivity options
– often hard to maintain a Linux distribution (esp. iPAQ)
• STARGATE: all-purpose embedded computing
– 400 MHz XScale / Linux
– Ethernet, Bluetooth, USB, Serial, PCMCIA/CF (x2), JTAG, Berkeley mote interface
– Onboard Li-ion controller
• Users
– Intel open-source robotics
– Sensor nets
• Compact base station platform
• UCB, UCLA, U of Utah, Harvard, more...
[Figure: Stargate board – bottom view; top view w/ daughter card]
25
Telos: Exploring Next-Generation Possibilities
• Single board philosophy
–Robustness, Ease of use, Lower Cost
–Integrated Humidity & Temperature sensor
• First platform to use an 802.15.4 radio
–CC2420 radio, 2.4 GHz, 250 kbps (12x mica2)
–3x RX power consumption of CC1000, 1/3 turn on time
–Same TX power as CC1000
• Motorola HCS08 processor
–Lower power consumption, 1.8V operation,
faster wakeup time
–40 MHz CPU clock, 4K RAM
• Package
–Integrated onboard antenna, +3dBi gain
–Removed 51-pin connector
–Everything USB & Ethernet based
–2/3 A or 2 AA batteries
–Weatherproof packaging
• Support in upcoming TinyOS 1.1.3 Release
• Available February from Moteiv (moteiv.com)
26
Other Innovations
Ranging and Localization
27
Ranging and Localization
• Systematic study based on mid-term ultrasound board
– many nodes with uniform collection of pairs at many distances
– ranging data collection in many environments
• carpet, grass, dirt, pavement
• Goal: identify why range error extrapolated to localization performance is so "optimistic"
– so much better on paper than in practice
• Careful development of measurement technique and filters
– Yields 10 cm (90% confidence) over 5 m
• Key is that the likelihood of a valid range estimate decreases with range and with the number of nodes
• Yielded surprising result on collision behavior
28
Shadowing
• All modern RF/acoustic time-of-flight systems encase the acoustic pulse in the radio packet
– Use the RF MAC to mediate the acoustic channel
– Associate the correct chirp with the correct packet
• Assumes collision => loss
– so do most wireless protocols
• Assumption is often false with FSK
– a slightly stronger, later packet can be received despite the collision
• Modified receiver processing
– scan for start symbol within packet (sketch below)
– recover the stronger, late packet
• Recover data from most collisions!
– significant change in network behavior
[Figure: timing diagram – acoustic chirp and RF packet vs. distance and time]
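A toy C sketch of the modified receive path: instead of synchronizing only at the start of a reception, the demodulated byte stream is continuously scanned for the start symbol, so a stronger packet that begins mid-collision can still be locked onto. The sync word and framing are invented for illustration.

    /* Sketch: scan the incoming byte stream for a start symbol at any offset.
       Sync word and framing are hypothetical. */
    #include <stdint.h>
    #include <stdbool.h>

    #define SYNC_WORD 0xCC33u     /* illustrative 16-bit start symbol */

    static uint16_t shift_reg = 0;
    static bool in_packet = false;

    void begin_packet(void);      /* reset packet buffer, start collecting payload bytes */
    void collect_byte(uint8_t b);

    /* Called for every byte the radio demodulates, even while "inside" a packet. */
    void on_rx_byte(uint8_t b)
    {
        shift_reg = (uint16_t)((shift_reg << 8) | b);
        if (shift_reg == SYNC_WORD) {
            /* a (possibly stronger, later) packet starts here: resynchronize to it */
            in_packet = true;
            begin_packet();
            return;
        }
        if (in_packet)
            collect_byte(b);
    }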
29
MobiLoc: Mobility Enhanced Localization
• Use range/angle to target, as measured by the default sensors, to better localize nodes
– reduce (eliminate?) anchor nodes
– how much processing is involved?
• Elegant, simple approximation (sketch below)
– when a target crosses the line through two nodes, consider the sum and difference of their range measurements
– the minimum of the sum (when the target is between them) equals the separation distance
– the maximum of the difference occurs when the target is outside the segment
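The approximation is easy to state in code: as a target moves past, keep the running minimum of r1 + r2; when the target crosses the segment between the two nodes that minimum approaches their separation. The bookkeeping below is an illustration with invented names, not the MobiLoc implementation.

    /* Sketch: estimate the distance between two nodes from ranges to a moving target.
       min(r1 + r2) over a pass -> node separation (target crosses the segment);
       max|r1 - r2| is reached when the target is outside the segment. */
    #include <stdint.h>

    static uint16_t min_sum  = 0xFFFF;
    static uint16_t max_diff = 0;

    /* Feed each simultaneous pair of range measurements from the two nodes. */
    void mobiloc_update(uint16_t r1, uint16_t r2)
    {
        uint16_t sum  = (uint16_t)(r1 + r2);
        uint16_t diff = (r1 > r2) ? (uint16_t)(r1 - r2) : (uint16_t)(r2 - r1);
        if (sum < min_sum)   min_sum = sum;
        if (diff > max_diff) max_diff = diff;
    }

    /* After the target's pass, min_sum approximates the inter-node separation. */
    uint16_t mobiloc_separation_estimate(void)
    {
        return min_sum;
    }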
30
NEST Final Demo Plans
• Redefine Pursuit-Evasion Game Scenario
– Road in the middle of the field is an evader safe-zone
– Pursuit occurs when evaders leave road
– Exposes and leverages NEST research
• Line in the sand
• Waking up Big Brother
• Sentry service
• Enhancements over NEST Midterm Demo
– Up to 5 robots: 3 pursuers, 2 evaders
• Multi-object tracking, Coordinated pursuit
– Larger field, more motes
– Estimate energy consumption versus evader activity
• Early challenges
– Select and freeze sensing modes
• Tied to decisions for the Extreme Scaling Demo
– Design nodes and network explicitly for multi-object detection
• Additional challenges and enhancements
– Secure links
– More distinct separation of composable services
– Packaging and deployment
• Consider solar-rechargeable motes
31
Questions?
32
DARPA Template Slides
33
Administrative
• Project Title:
• Program Manager: Vijay Raghavan
• PI Name(s):
• PI Phone Number(s):
• PI E-Mail Address(es):
• Company/Institution:
• Contract Number:
• AO Number:
• Award Start Date:
• Award End Date:
• Agent Name/Organization:
34
Subcontractors and Collaborators
• Subcontractors
–…
• Collaborators
–…
35
PI Name, Affiliation, Project Name
Problem and Challenge
– Problem and Challenge Description
New Ideas
– …
[Project Diagram Here]
– Additional Supporting Text…
Impact
– …
FY04 Schedule (Q1 / Q2 / Q3 / Q4)
– 1QFY03 • Tasks/Milestones
– 2QFY03 • Tasks/Milestones
– 3QFY03 • Tasks/Milestones
– 4QFY03 • Tasks/Milestones
36
Problem Description/Objective
• What specific technical problem(s) are you trying to solve?
• How does your project contribute to the goals and
current status of the NEST program?
• Identify specific success criteria for your project
–You may be informal at this point, but thinking along these lines
will be useful in figuring out program metrics, something that will
be done during the course of the PI meeting
• If you have many different results you can talk about, it would be nice to enumerate all of them in one chart but talk in detail about just one of them; this way, the audience gets an appreciation of your work without having to absorb a lot in a short amount of time
37
Project Status
• Identify current technical approach to reaching your
project's goals
• Describe changes in technical approach since last
NEST PI Meeting
• Describe progress made since last NEST PI Meeting
• Identify deliverables and publications
• Identify specific milestones accomplished
• Identify and update the status of delayed milestones
38
Goals and Success Criteria
• Give very concrete metrics that can be
measured
• Specify explicit Go/No-Go decision points
39
Project Plans
• Describe your project's plans for FY04, including
detailed plans for the next six months
• Identify specific performance goals
• Identify how your progress will be measured and
success determined
–Try to use very concrete metrics
–If at all possible, indicate how success can be measured on the
OEP
40
Project Schedule and Milestones
• Show the key 3-5 tasks in the project and the schedule for
achieving these tasks
• Be very specific on contracted milestones and demonstration
events
• Annotate timeline using Government Fiscal Years/Quarters only
• Start timeline at award and end timeline at award end
• Show completed milestones and schedule advances/slippages
• Describe project milestones that occur in next six months
41
Technology Transition/Transfer
• Provide information on any technology transition
activities your team has identified
• Provide details on the technology transition
opportunities, the relevant DoD organizations, and the
applicable areas of the program and your project
42
Program Issues
• Provide information on existing or anticipated problems
with the project or the overall NEST program that you’d
like to see addressed
43