Transcript sn2007 6751
Smart Sleeping Policies for Wireless
Sensor Networks
Venu Veeravalli
ECE Department & Coordinated Science Lab
University of Illinois at Urbana-Champaign
http://www.ifp.uiuc.edu/~vvv
(with Jason Fuemmeler)
IPAM Workshop on Mathematical Challenges and
Opportunities in Sensor Networks, Jan 10, 2007
Saving Energy in Sensor Networks
Efficient source coding
Efficient Tx/Rx design
Efficient processor design
Power control
Efficient routing
Switching nodes between active and
sleep modes
[Figure: state diagram of a sensor switching between Active and Sleep modes]
External Activation
Paging channel to wake up sensors when needed
But power for paging channel is usually not negligible
compared to power consumed by active sensor
Passive RF-ID technology?
[Figure: Active/Sleep state diagram with external wake-up signal Z]
Practical Assumption
Sensor that is asleep cannot be communicated
with or woken up prematurely
⇒ sleep duration has to be chosen when the sensor
goes into sleep mode
Having sleeping sensors could result in
communication/sensing performance degradation
Design Problem
Find sleeping policies that optimize tradeoff
between energy consumption and performance
Sleeping Policies
[Figure: per-sensor timelines alternating between active and sleep periods]
Duty Cycle Policy
Sensor sleeps with deterministic or random (with
predetermined statistics) duty cycle
Synchronous or asynchronous across sensors
Duty cycle chosen to provide desired tradeoff
between energy and performance
Simple to implement, generic
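A randomized duty-cycle schedule of the kind described above can be sketched in a few lines; the function name and the `duty` parameter are illustrative assumptions, not from the talk.

```python
import random

def duty_cycle_schedule(duty, horizon, rng=random):
    """Awake/asleep flags for one sensor: at each time slot the
    sensor is awake independently with probability `duty`."""
    return [rng.random() < duty for _ in range(horizon)]

# The long-run fraction of awake slots matches the duty cycle.
schedule = duty_cycle_schedule(duty=0.2, horizon=10000, rng=random.Random(0))
print(sum(schedule) / len(schedule))  # close to 0.2
```

A deterministic variant would simply wake the sensor every 1/duty slots; the random version avoids synchronization across sensors.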
Smart (Adaptive) Policies
Use all available information about the
state of the sensor system to set sleep
time of sensor
Application specific
⇒ system-theoretic approach required
Potential energy savings over duty
cycle policies
Tracking in Dense Sensor Network
Sensor detects
presence of object
within close vicinity
Sensors switch
between active
and sleep modes
to save energy
Sensors need to wake up in order to detect the object
Design Problem
Having sleeping sensors could
result in tracking errors
Find sleeping policies that optimize
tradeoff between energy consumption
and tracking error
General Problem Description
Sensors distributed
in two-dimensional
field
Sensor that is
awake can
generate an
observation
Object follows
random (Markov)
path whose
statistics are
assumed to be
known
General Problem Description
Central controller
communicates with
sensors that are awake
Sensor that wakes up
remains awake for one
time unit, during which it:
reports its observation to
the central controller
receives new sleep time
from central controller
sets its sleep timer to new
sleep time and enters sleep
mode
Markov Decision Process
Markov model for object
movement with absorbing
terminal state when object
leaves system
State consists of two parts:
Position of object
Residual sleep times of
sensors
Control inputs:
New sleep times
Exogenous input:
Markov object movement
Partially Observable Markov Decision Process (POMDP)
The state of the system is only
partially observable at each time step
Object position not known
-- only have distribution
for where the object might
be
Can reformulate MDP
problem in terms of this
distribution (sufficient
statistic) and residual
sleep times
Sensing Model and Cost Structure
Sensing Model: Each
sensor that is awake
provides a noisy
observation related to
object location
Energy Cost: each
sensor that is awake
incurs cost of c
Tracking Cost: distance
measure d(.,.) between
actual and estimated
object location
Dynamic System Model
[Block diagram: sensor observations → nonlinear filter → posterior; the posterior drives the sleeping policy and the optimal location estimate b̂_k w.r.t. the distortion metric]
Simple Sensing, Object Movement, Cost Model
Sensors distributed in
two-dimensional field
Sensor that is awake
detects object without
error within its sensing
range
Sensing ranges cover
field of interest without
overlap
Object follows Markov
path from cell to cell
Tracking cost of 1 per
unit time that the object
is not seen
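The energy/tracking tradeoff in this simple model can be explored with a short simulation sketch; the function `simulate`, its parameters, and the example policies are illustrative assumptions rather than code from the talk.

```python
import random

def simulate(policy_awake, n=10, c=0.1, steps=1000, seed=1):
    """Simple model: the object does a unit random walk over n cells
    (one sensor per cell); a sensor sees the object iff it is awake
    and the object is in its cell. Energy cost c per awake sensor per
    step, tracking cost 1 per step the object is not seen. Returns
    (energy cost, tracking cost) per unit time. `policy_awake(t)` is
    the set of awake sensor indices at time t."""
    rng = random.Random(seed)
    pos = rng.randrange(n)
    energy = tracking = 0.0
    for t in range(steps):
        awake = policy_awake(t)
        energy += c * len(awake)
        if pos not in awake:
            tracking += 1.0  # object not seen this step
        pos = min(n - 1, max(0, pos + rng.choice((-1, 1))))
    return energy / steps, tracking / steps

# All sensors always awake: zero tracking cost, maximal energy cost.
e, tr = simulate(lambda t: set(range(10)))
print(e, tr)  # 1.0 0.0
```

Swapping in a duty-cycle or smart policy for `policy_awake` exposes the tradeoff curve discussed on the next slide.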
What Can Be Gained
[Plot: cost tradeoff vs. number of sensors awake per unit time (0 to n), comparing the Duty Cycle and Always Track policies]
Always Track Policy
Unit random walk movement of object
[Figure: n-cell network with the object performing a unit random walk]
Always Track Asymptotics
[Figures: n-sensor networks with unit random walk movement]
E[# awake per unit time] ~ O(log n)
E[# awake per unit time] ~ O(n^0.5)
Nonlinear filter (distribution update): the posterior at time k is mapped to the posterior at time k+1 via the Markov transition model and the time-(k+1) observations from the awake sensors (Bayes recursion).
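For the simple perfect-detection model, the distribution update can be sketched as a predict step through the Markov transition matrix followed by conditioning on which awake sensors did or did not detect the object; the function name and the example matrix `P` are hypothetical.

```python
def filter_update(p, P, awake, detection):
    """One step of the distribution update for the simple model.
    p: prior over cells, P: transition matrix with P[i][j] = Pr(i -> j),
    awake: set of awake sensor/cell indices,
    detection: index of the detecting sensor, or None.
    Sensors detect the object perfectly within their own cell."""
    n = len(p)
    # Predict: push the prior through the Markov model.
    pred = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    if detection is not None:
        # Perfect detection pins the object to one cell.
        return [1.0 if j == detection else 0.0 for j in range(n)]
    # No detection: the object cannot be in any awake sensor's cell.
    post = [0.0 if j in awake else pred[j] for j in range(n)]
    z = sum(post)
    return [x / z for x in post]

# Three cells; the object stays put or moves right with prob. 1/2.
P = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]
print(filter_update([1.0, 0.0, 0.0], P, awake={0}, detection=None))
# → [0.0, 1.0, 0.0]
```

With noisy sensing models the no-detection branch would weight `pred` by miss probabilities instead of zeroing cells outright.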
Optimal Solution via DP
Can write down the dynamic
programming (DP) / Bellman equations
that characterize the optimal policy
However, state space grows
exponentially with number of
sensors
DP solution is not tractable even for
relatively small networks
Separating the Problem
Problem separates into set of simpler
problems (one for each sensor) if:
Cost can be written as sum of costs under
control of each sensor (always true)
Other sensors’ actions do not affect state
evolution in future (only true if we make
additional unrealistic assumptions)
We make unrealistic assumptions only to
generate a policy, which can then be
applied to actual system
FCR Solution
At the time a sensor is set to sleep,
assume we will have no future
observations of the object (after the
sensor wakes up)
Policy is to wake up at first time
that expected tracking cost
exceeds expected energy cost
Thus termed First Cost Reduction
(FCR) solution
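One hedged reading of the FCR rule in the simple model: propagate the object distribution forward with no observations, and wake at the first step where the probability that the object is in this sensor's cell (the expected tracking cost a sleeping sensor would incur there) exceeds the per-step energy cost c. The function name and example are assumptions for illustration.

```python
def fcr_sleep_time(p, P, cell, c, max_sleep=100):
    """First Cost Reduction sleep time for the sensor covering `cell`:
    propagate the distribution p through transition matrix P assuming
    no future observations, and return the first step at which the
    expected tracking cost saved by waking (Pr(object in cell))
    exceeds the energy cost c."""
    n = len(p)
    q = list(p)
    for t in range(1, max_sleep + 1):
        q = [sum(q[i] * P[i][j] for i in range(n)) for j in range(n)]
        if q[cell] > c:
            return t
    return max_sleep

# Object starts in cell 0 and drifts right; sensor 2 waits until the
# chance of the object reaching its cell justifies the energy cost.
P = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]]
print(fcr_sleep_time([1.0, 0.0, 0.0], P, cell=2, c=0.3))  # → 3
```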
QMDP Solution
At the time a sensor is set to sleep,
assume we will know the location of the
object perfectly in the future (after the
sensor wakes up)
Can solve for policy with low
complexity
Assuming more information than is
actually available yields a lower
bound on the optimal cost!
Line Network Results
[Results figures omitted from transcript]
Two Dimensional Results
[Results figure omitted from transcript]
Offline Computation
Can compute policies on-line, but this
requires sufficient processing power and
could introduce delays
Policies need to be computed for each
sensor location and each possible
distribution for object location
Storage requirements for off-line computation
may be immense for large networks
Off-line computation is feasible if we
replace actual distribution with point mass
distribution
Storage required is n values per sensor
Point Mass Approximations
Two options for placing point mass:
Centroid of distribution
Nearest point to sensor on support of
distribution
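Both placement options can be sketched directly; the function names, the coordinate representation, and the tie-breaking are assumptions for illustration.

```python
def centroid(p, coords):
    """Centroid (mean position) of distribution p over cell coordinates."""
    x = sum(pi * c[0] for pi, c in zip(p, coords))
    y = sum(pi * c[1] for pi, c in zip(p, coords))
    return (x, y)

def nearest_support_point(p, coords, sensor):
    """Point on the support of p closest (Euclidean) to `sensor`."""
    support = [c for pi, c in zip(p, coords) if pi > 0]
    return min(support,
               key=lambda c: (c[0] - sensor[0]) ** 2 + (c[1] - sensor[1]) ** 2)

# Mass split between cells (0,0) and (2,0): the centroid lands on an
# empty cell, while the nearest-support option stays on the support.
coords = [(0, 0), (1, 0), (2, 0)]
p = [0.5, 0.0, 0.5]
print(centroid(p, coords))                       # → (1.0, 0.0)
print(nearest_support_point(p, coords, (2, 1)))  # → (2, 0)
```

The example shows why the nearest-point option pairs well with the support-only update mentioned on the next slide: it never needs the actual probability values.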
Distributed Implementation
Off-line computation also allows for
distributed implementation!
Partial Knowledge of Statistics
The support of the distribution of the object
position can be updated using only the support
of the conditional pdf of the Markov prior!
Thus “nearest point” point mass
approximation is robust to knowledge of
prior
Point Mass Approximation Results
[Results figures omitted from transcript]
Conclusions
Tradeoff between energy consumption and
tracking errors can be considerably
improved by using information about the
location of the object
Optimal solution to tradeoff problem is
intractable, but good suboptimal solutions
can be designed
Methodology can be applied to designing
smart sleeping for other sensing
applications, e.g., process monitoring,
change detection, etc.
Methodology can also be applied to other
control problems such as sensor selection
Future Work
More realistic sensing model
More realistic object movement models
Object localization using cooperation among
all awake sensors at each time step
Joint optimization of sensor sleeping policies
and nonlinear filtering for object tracking
Partially known or unknown statistics for
object movement
Decentralized implementation
Tracking multiple objects simultaneously