7_tutorial_sensor


Sensor Networks
ACOE 422
Adapted from an IEEE tutorial on sensor networks
Sensing
Remote Sensing
In-situ Sensing
Networked Sensing
Remote Sensing
 As the term implies, sensors are not co-located with the phenomenon
 Generally, sensors detect electromagnetic radiation from the target
  passive: spectroscopes or magnetometers, camera or TV
  active: laser distance finders, radar scanners
 Locations
  near-surface (e.g. aerial photography), satellites
[From http://www.science.edu.sg/]
Remote Sensing Applications
Mapping Earth’s
Physical Properties
Agriculture:
Mapping Vegetation
Remote Sensing
Prospecting for minerals
and other resources
Urban Reconnaissance
Planetary Exploration
[From http://rst.gsfc.nasa.gov ]
Remote Sensing
Analysis and Systems
 Analysis: mostly image-processing related
  spectral analysis
  image filtering
  classification (maximum likelihood)
  principal components analysis
 Systems
  GIS: Geographic Information Systems
  flexibly combine various images

[From http://erg.usgs.gov/ ]
In-situ Sensing
 Sensors situated close to the phenomena
  accurate, microscopic observations
  … but with limited range
 General uses in engineering applications
  condition monitoring
  performance tuning
[From http://www.sensorland.com/ ]
State of In-Situ Sensing
 System architecture
  one or a small cluster of dumb sensors
  … wired to a data acquisition unit
  … or to a device controller
 Very impressive device engineering
  sensing accuracy
  miniaturization
  ruggedization
  calibration

[From http://www.sensorland.com/ ]
Current Applications
 Sectors: agriculture, aviation, automotive and highway systems, entertainment, manufacturing processes, railways, shipping, space, utilities
 Example uses
  engine pressure and oxygen monitoring
  suspension positioning of racing cars and motorcycles
  road noise measurements
  plant positioning
  precision hoeing
  humidity in compressed air
  tail-lift testing
  fan pitch monitoring
  calibrating bullet speeds
  rotational stability of Ferris wheels
  acoustic adjustments in symphony halls
  aircraft engine pressure
  braking control load distribution
  traction control in slippery conditions
  rudder positioning
  rocket engine valve positioning
  water distribution and storage
Industry
 Many, many sensor manufacturers
  Sensor Magazine’s buying guide lists 240 manufacturers of acceleration measurement sensors!
 Some names
  Agilent, Wilcoxon, Crossbow, GEMS, Penny and Giles, Delphi, Motorola, ScanTek, Bosch, National Instruments
 The applicability of sensors is vast …
 Companies are often stratified by
  industry segment (e.g. Delphi in automotive)
  application (e.g. vibration measurement, or pressure measurement)
 … and differentiate themselves by offering a wide range of products with different specifications or differing form factors
Standardization
 Fair amount of activity since the mid-1990s
  IEEE P1451.x
 Two thrusts:
  sensor board to processor interfaces
   wired/wireless bus, point-to-point
   both for data access and sensor self-identification
  object-oriented abstractions for sensor data and application interaction
[From “Plug-and-Play Sensors”, Sensors Magazine, December 2002]
Sensing
Remote Sensing
In-situ Sensing
Networked Sensing
Networked Sensing Enabler
 Small (coin- or matchbox-sized) nodes with
  processor
   8-bit processors to x86-class processors
  memory
   KBytes to MBytes range
  radio
   20-100 Kbps initially
 Battery powered
 Built-in sensors!
The Opportunity
 Large-scale, fine-grain in-situ sensing and actuation
  100s to 1000s of nodes
  5m to 50m spacing
 Inherently collaborative
  … sensors cannot act alone because they have a limited view
 Inherently distributed
  … since communication is energy-intensive (we’ll see this later)

Embedded (In-Situ) Networked Sensing
Applications
Application Areas
Seismic Structure
Response
Contaminant
Transport
Marine
Microorganisms
Ecosystems,
Biocomplexity
Structural
Condition Assessment
Seismic Structure Response
 Interaction between ground motions and structure/foundation response is not well understood.
 Current seismic networks are not spatially dense enough
  to monitor structure deformation in response to ground motion.
  to sample the wavefield without spatial aliasing.
A Wired Seismic Array
A Wireless Seismic Array
 Use motes for seismic data collection
  small scale (10 or so nodes)
  opportunity: validate against the existing wired infrastructure
 Experiments
  Factor building
  Four Seasons building
Condition Assessment
 Longer-term
 Challenges:
  detection of damage (cracks) in structures
  analysis of stress histories for damage prediction
 Applicable not just to buildings
  bridges, aircraft
Contaminant Transport
 Industrial effluent dispersal can be enormously damaging to the environment
  marine contaminants
  groundwater contaminants
[Chart: Responsible Party contributions (billions of dollars) for cleanup of “Superfund” sites, 1980-1995; source: U.S. EPA, 1996]
 Study of contaminant transport involves
  understanding the physical (soil structure), chemical (interaction with and impact on nutrients), and biological (effect on plants and marine life) aspects of contaminants
  modeling their transport
 Mature field!
  Fine-grain sensing can help
Lab-Scale Experiments
[From CENS Annual Technical Report, 2003]
 Use surrogates (e.g. heat transfer) to study contaminant transport
 Testbed
  tank with a heat source and embedded thermistors
  measure and model heat flow
Field-Level Experiments
 Nitrates in groundwater
 Application
  wastewater used for irrigating alfalfa
  wastewater has nitrates, nutrients for alfalfa
  over-irrigation can lead to nitrates in groundwater
  need a monitoring system; wells can be expensive
 Pilot study of a sensor network to monitor nitrate levels

Marine Micro-organism
Monitoring
 Algal blooms (red, brown, green tides) impact
  human life
  industries (fisheries and tourism)
 Causes poorly understood, mostly because
  measurement of these phenomena can be complex and time consuming
 Sensor networks can help
  measure, predict, mitigate
Lab-Scale Experimentation
 Build a tank testbed in which to study the factors that affect micro-organism growth
 Actuation is a central part of this
  can’t expect to deploy at the density we need
  mobile sensors can help sample at high frequency
 Initial study:
  thermocline detection
[Figure: tethered robot sample collectors in a 1m tank]
Ecosystem Monitoring
 Remote sensing can enable global assessment of ecosystems
 But ecosystem evolution is often decided by local variations
  development of canopy and nesting patterns is often decided by small local variations in temperature
 In-situ networked sensing can help us understand some of these processes
James Reserve
 Clustered architecture
 Weather-resistant housing design
 Sensors
  light, temperature, pressure, humidity
Great Duck Island
 Study nesting behavior of Leach’s storm petrels
 Clustered architecture:
  802.11 backbone
  multihop sensor cluster
 Now running for several months
[Architecture: sensor nodes form a sensor patch; a gateway connects the patch network over a transit network to a basestation; a base-remote link carries data over the Internet to a data service for client data browsing and processing]
Challenges and Goals
Networked Sensing Challenges
 Energy is a design constraint
  network lifetime now becomes a metric
 Interaction with the physical world
  a lot messier than what we’ve been used to
 Autonomous deployment
  we’re not used to building systems that can self-deploy
 A single user (or a small number of users)
  need a different model
Communication is Expensive
 The Communication/Computation Tradeoff
 Received power drops off as the fourth power of distance
 10 m: 5000 ops/transmitted bit
 100 m: 50,000,000 ops/transmitted bit
 Gets networking and distributed systems researchers excited!
 At short transmission ranges, reception costs are significant
 Implications
 Avoid communication over long distances
 Cannot assume global knowledge, or centralized solutions
 Can leverage data processing/aggregation inside the network
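The slide's numbers can be turned into a toy calculation. The sketch below is illustrative only: the constant k is arbitrary, and the exponent alpha=4 encodes the slide's assumption that received power drops off as the fourth power of distance.

```python
# Toy model of the communication/computation tradeoff.

def tx_energy_per_bit(distance_m, k=1.0, alpha=4):
    """Transmit energy per bit, proportional to distance**alpha."""
    return k * distance_m ** alpha

# One 100 m transmission costs 10^4 times one 10 m transmission:
ratio = tx_energy_per_bit(100) / tx_energy_per_bit(10)

# Relaying over ten 10 m hops is far cheaper than one 100 m hop,
# which is why long-distance transmissions are avoided:
multihop = 10 * tx_energy_per_bit(10)
savings = tx_energy_per_bit(100) / multihop
print(ratio, savings)  # 10000.0 1000.0
```

The same arithmetic motivates in-network aggregation: spending thousands of processor operations to avoid transmitting a single bit over a long distance is a net win.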
The Goal
An infrastructure that can be used by
many different sensing applications
Components of Infrastructure
Querying, Triggering
Data-centric Routing
Aggregation and Compression
Data-centric Storage
Monitoring
Collaborative Event Processing
Localization
Time Synchronization
Medium Access
Operating Systems
Processor Platforms
Radios
Sensors
Calibration
Security
Collaborative Signal Processing
Tutorial Overview
 Discuss components bottom-up
 Present a networking and systems view
 In each topic



Present relatively mature systems (to the extent
they exist) first
Then discuss systems research
Finally, present some theoretical underpinnings
Hardware: Platforms, Radios
and Sensors
Overview
 Processors
  Atmel
  StrongARM
  XScale
 Radios
  Chipcon CC1000
  Bluetooth
  Zigbee
 Sensors
 Platforms
  Mica2, Berkeley motes
  Stargate, Cerfcube
  iMote
  Mantis
Processors
 Architecture
  CISC vs. RISC
  Von Neumann vs. Harvard
  most embedded processors/MCUs are RISC Harvard architecture!
 Speed
 Cache/Memory
 Power dissipation
Atmel Atmega 128L
 Harvard 8-bit RISC
 Speed: 8MHz
 Memory:
  128KB program memory
  4KB SRAM
  4KB EEPROM
 Power draw:
  run mode: 16.5mW
  sleep mode: < 60µW
 Microcontroller used in Mica2
Intel StrongARM SA1100
 Harvard 32-bit RISC
 Speed: 206MHz
 Cache:
  16KB instruction cache
  8KB data cache
 Power draw:
  run mode: 800mW
  idle mode: 210mW
  sleep mode: 50µW
 Microprocessor used in the iPAQ H3700
Intel XScale PXA-250
 A successor to the StrongARM
 Harvard 32-bit RISC
 Speed: 200/300/400MHz
 Cache:
  32KB instruction cache
  32KB data cache
 Power draw:
  run mode: 400mW
  idle mode: 160mW
  sleep mode: 50µW
 Microprocessor used in the iPAQ H3900
Processor Comparison

              | Atmel ATmega128(L)      | StrongARM SA1100        | Intel XScale
Type          | Microcontroller         | Microprocessor          | Microprocessor
Architecture  | Harvard, 8-bit RISC     | Harvard, 32-bit RISC    | Harvard, 32-bit RISC
Speed         | 8MHz/16MHz              | 206MHz                  | 200/300/400 MHz
Cache/Memory  | 128KB program memory,   | 16KB instruction cache, | 32KB instruction cache,
              | 4KB data memory         | 8KB data cache          | 32KB data cache
Power draw    | run: 16.5mW             | run: 800mW              | run: 400mW
              | sleep: < 60µW           | idle: 210mW             | idle: 160mW
              |                         | sleep: 50µW             | sleep: 50µW
Example       | Mica2 mote              | Compaq iPAQ 3700        | Compaq iPAQ 3900
platforms     |                         |                         |
Radios
 Low power, short range a must
 Relevant criteria:
  frequency
  modulation scheme
  encoding scheme
  data rates
  frequency diversity
 Power considerations
  receive power
  transmit power
  for short-range communication, the two are comparable!
RFM TR1000
 OOK/ASK
 433/916 MHz
 Data rate up to 115.2Kbps
 Power draw:
  Rx: 3.8mA
  Tx: 12mA
 No spread spectrum support
 Used in earlier platforms
  now largely obsolete
[From http://www.rfm.com]
Chipcon CC1000
 FSK, up to 76.8 KBaud
 Frequency range 300-1000 MHz
  programmable frequency in 250Hz steps
 Encoding scheme: NRZ, Manchester
 Current draw:
  programmable; min 5.3 mA, max 26.7 mA
 Used in the Mica2
[From http://www.chipcon.com]
Bluetooth
 Modulation
  Gaussian Filtered FSK (GFSK)
  2.4GHz (ISM)
 Diversity coding
  Frequency Hopping Spread Spectrum
 Power consumption:
  transmit: 150 mW
  receive: 90 mW
 Gross data rate: 6-12 KBps
 Point-to-point and point-to-multipoint protocols
  up to 8 devices per piconet
IEEE 802.15.4
 Carriers and modulation
  868 MHz, 900 MHz ISM, 2.4 GHz ISM
  different modulations at different frequencies
 Diversity coding
  direct sequence
 MAC layer
  CSMA/CA
 Status
  standards complete, radios expected soon
 Zigbee Alliance is pushing for this
  home and industrial automation
  they have defined a simple topology construction/routing layer on top of this
Sensors
 We describe some of the sensor types commonly used in our applications
  theory of operation
  performance parameters
Temperature Sensor
 Operation
  change in resistance induced by temperature change
   semiconductor or metal
  differential changes in resistance
   thermocouples, thermopiles
 Parameters
  temperature range
  linearity
  resolution (sensitivity)
[From Sensors Magazine, “How to Select and Use the Right Temperature Sensor”]
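As a concrete instance of resistance-based temperature sensing, an NTC thermistor reading can be converted with the beta model. This is a generic sketch; the reference resistance, reference temperature and beta constant are hypothetical datasheet values, not from the tutorial.

```python
import math

def thermistor_temp_c(r_ohm, r0_ohm=10_000.0, t0_c=25.0, beta=3950.0):
    """Convert a measured NTC thermistor resistance to temperature (beta model).
    r0_ohm is the resistance at the reference temperature t0_c; beta
    characterizes the exponential resistance/temperature relationship."""
    t0_k = t0_c + 273.15
    inv_t_k = 1.0 / t0_k + math.log(r_ohm / r0_ohm) / beta
    return 1.0 / inv_t_k - 273.15

# At the reference resistance we recover the reference temperature, and the
# resistance drops as temperature rises (negative temperature coefficient).
```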
Photo Sensors
 Operation
  uses photoconductive material
  resistance decreases with increase in light
 Parameters
  peak sensitivity wavelength
  illuminance range
  ambient temperature
Accelerometer
 Operation
  capacitive
  piezoresistive
 Parameters
  single-axis, 2-axis, 3-axis
  acceleration range (in g)
  acceleration sensitivity (in mV/g)
  dynamic acceleration (vibration), static acceleration (gravity)
  shock survival limit (in g)
Displacement Sensors (LVDT)
 Operation
  iron core between primary and secondary coils
  displacement causes a voltage output
 Parameters
  range
  linearity
  sensitivity
Humidity Sensors
 Operation
  capacitive
  polymer dielectric that absorbs or releases water proportional to humidity
  change in capacitance is a measure of humidity
 Parameters
  range (e.g. 10% to 90%) and accuracy
  sensitivity
  ambient temperature range
  response time
Magnetic Field Sensor
 Operation
  ferromagnetic materials (e.g. iron, nickel, cobalt) change shape and size when placed in a magnetic field
 Parameters
  number of axes (one, two, three): direction of magnetic field
  range (Gauss)
  noise sensitivity
  linearity
  sensitivity (V/Gauss)
Pressure Sensors
 Operation
  pressure information is converted to displacement, which is measured using a displacement sensor
 Parameters
  range
  temperature sensitivity
  accuracy, resolution
  response time
Platforms
 Several platforms in the community
 Various combinations of the processors, radios and
sensors discussed so far
MICA 2
 Atmel processor
 Multi-channel radio: Chipcon CC1000
 Light, temperature, pressure, acceleration, acoustic, magnetic sensors
 Wireless reprogramming
 Software platform
  TinyOS
[http://www.xbow.com]
Intel Research Mote
 StrongARM processor
  12 MHz, 64KB SRAM, 512KB FLASH
 Bluetooth for communication
 Digital sensor interface
 UART, JTAG
 Link-layer reliability and security
 Battery life
  > 6 months with AA cells at a 1% duty cycle
 Software platform
  TinyOS
  abstraction layer for Bluetooth
[From Intel Corp.]
Stargate
 High-processing node (gateway)
 400MHz XScale processor, 64MB RAM, 32MB Flash
 3.5'' x 2.5''
 Ethernet, UART, JTAG, USB via daughter card
 Connectors for sensor boards
  standard Mica2 connector
 Software platform
  Linux
[From Crossbow Inc., http://www.xbow.com]
GNOME
 16-bit MSP chip
  12-bit ADC
  60k Flash
 Solar panel for rechargeable battery
 Sensors for temperature, humidity
 Compass and GPS
 Bluetooth support, RF radio, Ethernet
http://cmlab.rice.edu/projects/sensors
Medusa MK-2
 High-capability node
 Two microcontrollers
  8-bit RISC ATmega 128L: 4MHz, 32KB Flash, 4KB RAM, JTAG, UART
  32-bit RISC: 40MHz, 1MB Flash, 136KB RAM, JTAG, UART, GPS
  they communicate via UART
 On-board power management and tracking unit
Medusa MK-2
 Power consumption 200 mW (fully operational)
 RF radio compatible with Mica motes
 Two accessory boards for ultrasonic distance measurement
 Software platform
  PALOS (power-aware lightweight OS)
  event driven
  priority support
[http://nesl.ee.ucla.edu]
MANTIS
 Hardware: the Nymph
  Chipcon CC1000
  10-bit ADC
  GPS support
  UART and JTAG interface
  3.5 x 5.5 sq. cm
  support for up to 8 batteries
[http://mantis.cs.colorado.edu]
MANTIS
 MOS
  Unix-like development and runtime environment
  multithreaded with priority support, but round-robin (not event driven)
  new H/W added via a new H/W driver
  remote login and reprogramming via wired and wireless links
[http://mantis.cs.colorado.edu]
Components of Infrastructure
Collaborative Event Processing
Querying, Triggering
Aggregation and Compression
Data-centric Storage
Monitoring
Data-centric Routing
Collaborative Signal Processing
Localization
Time Synchronization
Medium Access
Operating Systems
Processor Platforms
Radios
Sensors
Calibration
Operating Systems
 Depending on the platform, various choices
  TinyOS [Hill et al. 2000]
  embedded Linux
  µOS [Shih et al. 2001]
 We focus on TinyOS
  most different from the *nix variants
TinyOS
 De-facto sensor programming platform
 Initially developed by UC-Berkeley
 History
 v0.5.1 – released at 10/19/2001
 v0.6.0 – released at 02/13/2002
 v0.6.1 – released at 05/10/2002
 v1.0.0 – released at 10/14/2002
 v1.1.0 – released at 09/23/2003
H/W Platforms using TinyOS

               | Crossbow Mica     | Crossbow Mica2      | Intel Mote
CPU            | ATmega128L        | ATmega128L (4MHz)   | ARM core (12MHz)
Program memory | 128k Flash memory | 128k Flash memory   | 512k Flash memory
Data memory    | 4k SRAM           | 4k SRAM             | 64k SRAM
Radio          | RFM TR1000        | ChipCon CC1000      | Bluetooth
Data rate      | 40 kbps           | 76.8 kBaud          | 1 Mbps
Frequency      | 916.50 MHz        | 315/433/868/915 MHz | 2.4GHz
TinyOS Programming Model
 Event-driven execution
  no polling, no blocking
 Concurrency-intensive operation
  multi-threaded style, but no long-running threads
 Component-based
  program := layering of components
 No dynamic memory allocation
  enables program analysis and code optimization [Gay03]
  easy migration from software to hardware
TinyOS Architecture
 A tiny scheduler + a graph of components
 Two-level tiny scheduler
  task: runs to completion (FIFO scheduling)
  event: performed immediately, preempts tasks
 Component
  frame
  tasks
  command handlers
  event handlers
[Figure: a component contains a frame, tasks, command handlers and event handlers; commands are called downward, events are signaled upward, and handlers may post tasks]
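The two-level scheduler can be modeled in a few lines. This is a single-threaded toy model under simplified assumptions: in real TinyOS, events originate from hardware interrupts and genuinely preempt the running task, which a sequential sketch cannot show.

```python
from collections import deque

class TinyScheduler:
    """Toy model of TinyOS's scheduler: events run immediately and may
    post tasks; tasks run to completion in FIFO order."""
    def __init__(self):
        self.task_queue = deque()
        self.log = []

    def post(self, task):              # analogous to nesC's `post`
        self.task_queue.append(task)

    def signal(self, handler, *args):  # events execute right away
        handler(self, *args)

    def run_tasks(self):               # FIFO, run-to-completion
        while self.task_queue:
            self.task_queue.popleft()(self)

def on_receive(sched, seq):
    # Event handler: do minimal work, defer the rest to a task.
    sched.post(lambda s, seq=seq: s.log.append(f"processed {seq}"))

sched = TinyScheduler()
for seq in (1, 2, 3):
    sched.signal(on_receive, seq)
sched.run_tasks()
print(sched.log)  # ['processed 1', 'processed 2', 'processed 3']
```

The split mirrors the slide: handlers stay short so interrupts are serviced quickly, while the deferred tasks absorb the longer computation.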
TinyOS-nesC
 Systems programming language [Gay03]
  to support the TinyOS programming model
 Component specification
  provides and uses interfaces
  an interface contains commands and events
 Component implementation
  module: provides code and implements interfaces
  configuration: connects the uses-interface of one component to the provides-interface of another component
 Support for concurrency
Component Specification
interface SendMsg {
  command result_t send(uint16_t, uint8_t, TOS_Msg *m);
  event result_t sendDone(TOS_Msg *m, result_t res);
}
interface ReceiveMsg {
  event result_t receive(TOS_Msg *m);
}
interface StdControl {
  command result_t init();
}

module AMStandard {
  provides {
    interface StdControl;
    interface SendMsg[uint8_t id];
    interface ReceiveMsg[uint8_t id];
  }
  uses {
    interface BareSendMsg as RadioSend;
    interface ReceiveMsg as RadioReceive;
  }
}

[Figure: AMStandardM provides SendMsg, ReceiveMsg and StdControl; its used RadioSend, RadioReceive and StdControl interfaces are wired to those provided by RadioCRCPacketM]
Component Implementation

Code (module):

module AMStandard {
  ....
}
implementation {
  bool state;
  TOS_Msg* buffer;

  command result_t SendMsg.send[uint8_t id](uint16_t addr,
                                            uint8_t length, TOS_Msg* data)
  { ....
    buffer = data;
    post sendTask();
    ....
  }

  event result_t RadioSend.sendDone(TOS_Msg* msg, result_t res)
  { ....
    signal SendMsg.sendDone[msg->type](msg, success);
    ....
  }

  event TOS_MsgPtr RadioReceive.receive(TOS_Msg* packet)
  { ....
    signal ReceiveMsg.receive[packet->type](packet);
    ....
  }

  task void sendTask()
  { ....
    call RadioSend.send(buffer);
    ....
  }
}

Wiring (configuration):

configuration GenericComm {
  ....
}
implementation {
  components AMStandard;
  components RadioCRCPacket as RadioPacket;
  components UARTFramedPacket as UARTPacket;

  Control = AMStandard.Control;
  SendMsg = AMStandard.SendMsg;
  ReceiveMsg = AMStandard.ReceiveMsg;
  sendDone = AMStandard.sendDone;

  AMStandard.UARTControl -> UARTPacket.Control;
  AMStandard.UARTSend -> UARTPacket.Send;
  AMStandard.UARTReceive -> UARTPacket.Receive;
  AMStandard.RadioControl -> RadioPacket.Control;
  AMStandard.RadioSend -> RadioPacket.Send;
  AMStandard.RadioReceive -> RadioPacket.Receive;
}
A TinyOS application: DIM
[Component stack: the DIM application sits on a routing layer (GPSR) together with Timer and Temperature components; GPSR uses the message layer (AM), which uses the packet layer (RadioCRCPacket, UARTFramedPacket); packets are built on the byte layer (CC1000RadioInt, UART); below that sit the hardware-presentation components (HPLClock, HPLADC) and the hardware itself (CC1000, UART, clocks, I2C). Everything above the hardware line is software.]
Common System Components
 AM (Active Message)
  messaging-layer implementation for packet de-muxing
 RadioCRCPacket
  provides a simple radio abstraction
  send/receive packets over the radio
  implemented by CC1000RadioInt for the Mica2 radio (76.8 KBaud) and MicaHighSpeedRadio for the Mica radio (40Kbps)
Common System Components
 UARTFramedPacket
  provides serial communication to a host PC
  19.2kbps (Mica/Mica2Dot), 57.6Kbps (Mica2)
 Timer
  provides periodic and one-shot timers
 ADC
  abstraction of the analog-to-digital converter
  used by sensing components (temperature, light sensors)
Components of Infrastructure
Collaborative Event Processing
Querying, Triggering
Aggregation and Compression
Data-centric Storage
Monitoring
Data-centric Routing
Collaborative Signal Processing
Localization
Time Synchronization
Medium Access
Operating Systems
Processor Platforms
Radios
Sensors
Calibration
MAC Layer Issues
 Energy-efficient MAC layers
 Topology control for higher energy-efficiency
 MAC and radio layer performance
Medium Access Control
 Important design considerations
 Collision avoidance
 Energy efficiency
 Scalability in node density
 Latency
 Fairness
 Throughput
 Bandwidth utilization
 Reduce idle listening, collisions, control overhead,
overhearing
MAC Design in TinyOS
[Timeline: after a MAC delay, the transmitter (TX) sends a start symbol followed by the encoded bits; the receiver (RX) searches for the start symbol, detects it, synchronizes, receives the individual bits, and returns an Ack]
 CSMA with collision avoidance
 Optional MAC-layer acknowledgement (Mica)
 Hill et al. 2002
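The CSMA step can be illustrated with a generic randomized-backoff sketch. The slot time and window sizes here are hypothetical, chosen only to show the mechanism; they are not the Mica MAC's actual constants.

```python
import random

def backoff_delay_ms(attempt, init_window=16, max_exponent=5, slot_ms=0.4):
    """Pick a random wait before re-sensing a busy channel; the window
    doubles with each failed attempt up to a cap (binary exponential
    backoff). Randomization de-synchronizes contending senders."""
    window = init_window << min(attempt, max_exponent)
    return random.randrange(window) * slot_ms

delay = backoff_delay_ms(attempt=0)  # somewhere in [0, 16 slots)
```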
Sensor-MAC (S-MAC)
 Tradeoffs
  higher latency, less fairness
  higher energy efficiency
 Major components in S-MAC
  periodic listen and sleep
  collision avoidance
  overhearing avoidance
  message passing
 Combines TDMA and contention-based protocols
 Ye et al., Infocom 2002
Collision Avoidance
 Solution: Similar to IEEE 802.11 ad hoc mode (DCF)
 Physical and virtual carrier sense
 Randomized backoff time
 RTS/CTS for hidden terminal problem
 RTS/CTS/DATA/ACK sequence
 Overhearing avoidance
 Reserve channel for duration of entire message (rather than a
fragment)
 … so that others can aggressively sleep to avoid overhearing
Periodic Listen and Sleep
[Timeline: each node alternates listen and sleep intervals; Node 1 and Node 2 may follow different schedules]
 Reduce long idle time
  reduce the duty cycle to ~10% (120ms on / 1.2s off)
  longer time-slots than TDMA, looser synchronization requirements
 Schedules can differ
  preferable if neighboring nodes have the same schedule
  easy broadcast & low control overhead
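The energy benefit of the listen/sleep cycle follows directly from the schedule. The power numbers below are hypothetical, chosen only to show the shape of the calculation; the 120ms/1.2s schedule is the one from the slide.

```python
def duty_cycle(listen_s, sleep_s):
    """Fraction of time the radio is on."""
    return listen_s / (listen_s + sleep_s)

def average_power_mw(p_listen_mw, p_sleep_mw, listen_s, sleep_s):
    """Time-weighted average power over one listen/sleep period."""
    d = duty_cycle(listen_s, sleep_s)
    return d * p_listen_mw + (1.0 - d) * p_sleep_mw

# The slide's schedule: 120 ms on, 1.2 s off -> roughly a 10% duty cycle.
d = duty_cycle(0.12, 1.2)
# With, say, 24 mW while listening and 0.03 mW asleep (made-up values),
# average power falls to a small fraction of the always-on figure:
avg = average_power_mw(24.0, 0.03, 0.12, 1.2)
```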
Coordinated Sleep
[Figure: nodes 1 and 2 lie on the border between Schedule 1 and Schedule 2]
 Nodes coordinate on sleep schedules
  nodes periodically broadcast their schedules
  a new node tries to follow an existing schedule
  nodes on the border of two schedules follow both
 Periodic neighbor discovery and synchronization
  the early part of the listen interval is devoted to this
Implementation on Testbed
 Platform
  Mica/Mica2 motes
  TinyOS
  used as a NIC for x86/XScale embedded Linux boxes
 Configurable S-MAC options
  low duty cycle with adaptive listen
  low duty cycle without adaptive listen
  fully active mode (no periodic sleeping)
S-MAC Performance
 Two-hop network (two sources, two sinks) at different traffic loads
 S-MAC consumes much less energy than an 802.11-like protocol without sleeping
  at heavy load, overhearing avoidance is the major factor in the energy savings
  at light load, periodic sleeping plays the key role
[Figure: average energy consumption (mJ) in the source nodes vs. message inter-arrival period (2-10 s); the 802.11-like protocol without sleep consumes the most (up to ~1800 mJ), overhearing avoidance reduces it, and S-MAC consumes the least]
Adaptive Topology Control
 Can we put nodes to sleep for long periods of time?
  more aggressively than S-MAC
  leverage redundant deployments
 Topology adapts to
  application activities
  environmental changes
  node density
 Extends system lifetime
 Reduces traffic collisions
 Complementary to topology control schemes that adjust transmit power levels
Example: ASCENT
 Nodes can be in an active or passive state
  active nodes forward data packets (using a routing mechanism that runs on the topology)
  passive nodes do not forward packets, but might sleep or collect network measurements
 Each node joins the network topology or sleeps according to the number of neighbors and the packet loss as measured locally
ASCENT State Transitions
[State diagram: Test -> Active after Tt; Test -> Passive if neighbors > NT (high ID breaks ties) or loss > the loss measured at T0; Passive -> Test if neighbors < NT and either loss > LT, or loss < LT and a help announcement is heard; Passive -> Sleep after Tp; Sleep -> Passive after Ts]
NT: neighbor threshold
LT: loss threshold
T?: state timer values (p: passive, s: sleep, t: test)
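The transition rules can be sketched as a small state function. This is a simplified reading of the diagram, not the full ASCENT protocol: timer-driven transitions are folded into an explicit timer_expired flag, and the tie-breaking and loss-at-T0 details are omitted.

```python
def ascent_next_state(state, neighbors, loss, help_heard,
                      NT, LT, timer_expired):
    """One transition of a simplified ASCENT node (sketch only)."""
    if state == "passive":
        # Topology looks too sparse or too lossy: try joining it.
        if neighbors < NT and (loss > LT or (loss < LT and help_heard)):
            return "test"
        return "sleep" if timer_expired else "passive"   # after Tp
    if state == "test":
        return "active" if timer_expired else "test"     # after Tt
    if state == "sleep":
        return "passive" if timer_expired else "sleep"   # after Ts
    return "active"   # active nodes keep forwarding
```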
Topology Control Schemes
 Empirical adaptation: each node adapts based on its measured operating region. ASCENT (Cerpa et al. 2002)
 Routing/geographic topology based: redundant links are removed. SPAN (Chen et al. 2001), GAF (Xu et al. 2001)
 Cluster based: workload is shared within clusters. CEC (Xu et al. 2002)
 Data/traffic driven: nodes start on demand using a paging channel. STEM (Tsiatsis et al. 2002)
Understanding Radio Vagaries
 Notoriously unpredictable
  variable environment noise
  device calibration
  non-linear signal strength decay
  multi-path effects
  transmission collisions
 Additional constraints for sensor networks
  energy efficiency (low-power radio)
  possibly high-density deployment
 High packet loss, asymmetry, high temporal variance
  Zhao et al.
 Impact on systems design
  hardware/physical layer
   modulation scheme
   base-band frequency
   encoding scheme
  MAC protocol
  reliable data delivery
  path selection in routing
  congestion control
  “soft-state” maintenance
Spatial Profile of Packet Delivery
[Figure: packet delivery vs. node position; 4B6B encoding, high Tx power, in-door, 2hrs (7200 pkts)]
 A “gray area” is evident in the communication range
Grey Area in Packet Loss
 Relatively large region of poor connectivity
  across a wide variety of environments (in-door, out-door, unobstructed)
  spanning as much as 30% of the effective transmission range
High Packet Loss
 4B6B encoding, high Tx power
 Heavy tail in the packet loss distributions for both in-door and habitat environments
 Note: nodes are not uniformly spaced; the CDF is slightly biased toward bad links.
Standard Deviation in Packet Loss
 Window size = 40; 4B6B encoding, high Tx power
 Variability over time, with a large dynamic range
Components of Infrastructure
Collaborative Event Processing
Querying, Triggering
Aggregation and Compression
Data-centric Storage
Monitoring
Data-centric Routing
Collaborative Signal Processing
Localization
Time Synchronization
Medium Access
Operating Systems
Processor Platforms
Radios
Sensors
Calibration
What is localization?
 Determining the location of a node in a global coordinate system
 Availability of location information is a fundamental need
  interpreting the data
  routing (GPSR)
  geo-spatial queries
  location-based addressing
Why not equip every node with GPS?
 GPS needs line of sight; it cannot be used
  in indoor environments
  in the presence of foliage
Early Schemes
 Active Bat (AT&T)
 People wear badges which emit ultra-sound pulse
 Receivers mounted in a regular grid on ceiling
 Time of flight based triangulation (centralized)
 CRICKET (MIT)
 Ultrasound ranging
 Fixed emitter infrastructure with known positions
 RADAR (Microsoft Research)
 Uses existing 802.11 LAN
 Signal/Noise ratio of the targets used for localization
Ad-hoc Localization
 Sensor nodes are randomly
scattered
 “Where am I”?
 Only a “small” fraction of
nodes have GPS
 Anchors
 The rest have to infer their
global positions somehow
General Approach
 Find distance to neighboring
nodes
 Ranging
 Neighbors of anchors fix
position relative to anchors
 Position fixing
 Other nodes fix their positions
relative to at least three
neighbors
 Iterative refinement
Ranging
 Radio
  received-signal-strength based
  use a path loss model to estimate range
  needs careful calibration for accuracy
   can get to within 10% of radio range
  examples: SpotON (Hightower et al.), Calamari (Whitehouse et al.)
 Acoustic
  use time of flight of sound (ultrasound)
  potentially high accuracy: 1% of radio range
  may need code spreading to counter multipath effects
  examples: Girod et al., Savvides et al.
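A minimal sketch of RSS-based ranging, assuming a log-distance path-loss model: the reference power p0_dbm, reference distance d0_m and path-loss exponent are hypothetical calibration values, not figures from the tutorial.

```python
def rss_to_range_m(rss_dbm, p0_dbm=-40.0, d0_m=1.0, path_loss_exp=3.0):
    """Invert the log-distance model
    rss = p0 - 10 * n * log10(d / d0) to estimate the range d."""
    return d0_m * 10.0 ** ((p0_dbm - rss_dbm) / (10.0 * path_loss_exp))

# With these calibration values, -40 dBm reads as 1 m and -70 dBm as 10 m.
```

Miscalibrating p0 or the exponent skews every estimate multiplicatively, which is why the slide stresses careful calibration.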
Position Fixing Taxonomy
 Topological schemes
  rely only on topology information
  can result in very inaccurate localization
  usually require fewer resources
 Geometric schemes
  use geometric techniques to determine location
  usually result in highly accurate position estimates
  may require more resources
Topological Schemes
 DV-Hop (Niculescu et al.)
  find the average distance per hop, davg
  distance = davg * (number of hops)
  requires no ranging!
 DV-Dist (Niculescu et al.)
  standard distance-vector algorithm, with range as the metric
 Refinement can help significantly
[Figure: a node’s distance to an anchor is estimated as d1+d2+d3, the sum of per-hop ranges along the path]
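DV-Hop's estimate reduces to one multiplication once the average hop length is known, and anchors that know their mutual positions can calibrate it. The numbers below are made up for illustration.

```python
def calibrate_avg_hop_m(anchor_dists_m, anchor_hop_counts):
    """An anchor estimates the average per-hop distance from the true
    distances and hop counts to the other anchors."""
    return sum(anchor_dists_m) / sum(anchor_hop_counts)

def dv_hop_distance_m(hops, avg_hop_m):
    """DV-Hop: distance ~= average hop length x hop count (no ranging)."""
    return avg_hop_m * hops

d_avg = calibrate_avg_hop_m([100.0, 60.0], [10, 6])  # 10 m per hop
est = dv_hop_distance_m(4, d_avg)                    # ~40 m to the anchor
```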
Geometric Schemes
 Savvides et al.
  each anchor defines a coordinate system, with the anchor as the origin
  nodes localize in this coordinate system using hop-by-hop lateration
  nodes maintain (nodeId, x, y) tuples
  3 such tuples can be used to localize
 Observations
  local coordinates may be translated, rotated or flipped versions of the global system
  distance from the anchor is invariant
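Lateration ultimately rests on solving circle equations from at least three reference points; subtracting the first circle equation from the other two linearizes the system. This is a generic sketch of that step, not Savvides et al.'s exact formulation.

```python
def trilaterate(anchors, ranges):
    """Position fix from three anchors: subtract the first circle equation
    (x-x1)^2 + (y-y1)^2 = r1^2 from the other two, leaving a 2x2 linear
    system A [x, y]^T = b solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2.0 * (x2 - x1), 2.0 * (y2 - y1)
    a21, a22 = 2.0 * (x3 - x1), 2.0 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero if the anchors are collinear
    return ((b1 * a22 - a12 * b2) / det, (a11 * b2 - a21 * b1) / det)

est = trilaterate([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
                  [5.0, 65.0 ** 0.5, 45.0 ** 0.5])  # node truly at (3, 4)
```

With noisy ranges, more than three anchors and a least-squares solve are used instead, which is where the iterative refinement on the earlier slide comes in.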
Comparison of existing schemes
 3 schemes (20% anchors, 1% ranging error)
  geometric (Savvides et al., Niculescu et al.)
  topological (DV-dist)
 Localization extent
  fraction of nodes localized to within 2m
 Topological scheme
  higher localization extent than the geometric schemes
  needs 10-11 neighbors
The State of Localization
 Lots of research in the area
 Probably far from deploying robust systems in the field
 Every component is hard and error-prone
  ranging
  position fixing
Components of Infrastructure
Collaborative Event Processing
Querying, Triggering
Aggregation and Compression
Data-centric Storage
Monitoring
Data-centric Routing
Collaborative Signal Processing
Localization
Time Synchronization
Medium Access
Operating Systems
Processor Platforms
Radios
Sensors
Calibration
Time Synchronization
 Critical piece of functionality
 Applications:
  sample-level correlation
  event-time correlation
 Wide variety of requirements
  microsecond level for acoustic localization
  perhaps less for events
  global?
  post-facto?
 Prior work
  NTP
 Research
  systems
   RBS (Elson et al.)
   TPSN (Ganeriwal et al.)
   DMTS (Ping Su)
   LTS (van Greunen et al.)
  theory
   optimal global synchronization (Karp et al. and Hu et al.)
One-Hop Synchronization
 Key step: determine the clock offset
 Observation: a timestamped message exchange can be used to infer the clock offset
  similar to the NTP algorithm
 Tricky to get right in practice
[Exchange: A sends at T1, B receives at T2; B replies at T3, A receives at T4]
offset = 0.5 * ((T2 - T1) - (T4 - T3))
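The exchange can be written out directly. Variable names follow the slide's T1-T4; the symmetric-delay assumption baked into the formula is part of what makes this tricky in practice.

```python
def clock_offset(t1, t2, t3, t4):
    """NTP-style offset of B's clock relative to A's, from one round trip:
    A sends at t1, B receives at t2 and replies at t3, A receives at t4.
    Assumes the two propagation delays are symmetric."""
    return 0.5 * ((t2 - t1) - (t4 - t3))

# B's clock is 5 s ahead and the one-way delay is 0.1 s (made-up numbers):
off = clock_offset(t1=0.0, t2=5.1, t3=5.2, t4=0.3)  # recovers 5.0
```

Any asymmetry between the two directions of the exchange lands directly in the estimate, which motivates the decomposition of latency on the next slide.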
Timing Components
 Send processing time: non-deterministic
 Access time: non-deterministic
 Propagation latency: negligible
 Transmission latency: deterministic
 Receive processing time: non-deterministic
RBS (Elson et al.)
 Inter-receiver synchronization
  based on a reference broadcast from a sender
  receivers exchange timestamps of the received messages
 Sender-side non-determinism eliminated
 Receive processing costs are Gaussian
  offset estimated by averaging
  clock drift estimated by linear regression
TPSN and LTS
 Sender-side synchronization
  uses the NTP algorithm: offset = 0.5 * ((T2 - T1) - (T4 - T3))
 TPSN
  timestamps packets as close to the radio as possible to remove non-determinism
 LTS
  uses RBS-style averaging to remove non-determinism
DMTS
 One-way, one-packet synchronization
 Receiver computes its offset from the sender’s clock
  does away with all sources of non-determinism by timestamping close to the radio layer
[Timeline: the sender timestamps after the non-deterministic send processing and access times, just before transmission; the receiver timestamps on reception, leaving only the deterministic transmission latency and the negligible propagation latency between the two timestamps]
Multihop Synchronization
 Goals
  synchronize a pair of nodes across multiple hops
  synchronize all nodes with a base-station
 Basic idea is the same in both cases
  successively synchronize nodes along a path
  in the latter case, do it over a spanning tree
 Error accumulates linearly with the number of hops
Reducing the Error
 One-hop synchronization estimates offsets between node pairs
 For global synchronization, we need:
  minimum-variance offset estimation
  consistent maximum-likelihood estimation of clock values
 Theoretical result: there exist estimators that jointly determine minimum-variance offsets giving maximum-likelihood times
  Karp et al. and Hu et al.
  both based on the observation that one can use information along multiple paths
  error grows logarithmically
Components of Infrastructure
Collaborative Event Processing
Querying, Triggering
Aggregation and Compression
Data-centric Storage
Monitoring
Data-centric Routing
Collaborative Signal Processing
Localization
Time Synchronization
Medium Access
Operating Systems
Processor Platforms
Radios
Sensors
Calibration