Consensus-Based Distributed
Least-Mean Square Algorithm
Using Wireless Ad Hoc Networks
Gonzalo Mateos, Ioannis Schizas and Georgios B. Giannakis
ECE Department, University of Minnesota
Acknowledgment: ARL/CTA grant no. DAAD19-01-2-0011
Motivation
 Estimation using ad hoc WSNs raises exciting challenges
 Communication constraints
Single-hop communications
 Limited power budget
 Lack of hierarchy / decentralized processing, calling for consensus
 Unique features
 Environment is constantly changing (e.g., WSN topology)
 Lack of statistical information at sensor-level
 Bottom line: algorithms are required to be
 Resource efficient
 Simple and flexible
 Adaptive and robust to changes
Prior Works
 Single-shot distributed estimation algorithms
 Consensus averaging [Xiao-Boyd ’05, Tsitsiklis-Bertsekas ’86, ’97]
 Incremental strategies [Rabbat-Nowak et al. ’05]
 Deterministic and random parameter estimation [Schizas et al. ’06]
 Consensus-based Kalman tracking using ad hoc WSNs
 MSE optimal filtering and smoothing [Schizas et al. ’07]
 Suboptimal approaches [Olfati-Saber ’05], [Spanos et al. ’05]
 Distributed adaptive estimation and filtering
 LMS and RLS learning rules [Lopes-Sayed ’06, ’07]
Problem Statement
 Ad hoc WSN with $J$ sensors, $\mathcal{J} = \{1, \ldots, J\}$
 Single-hop communications only. Sensor $j$'s neighborhood: $\mathcal{N}_j \subseteq \mathcal{J}$
 Connectivity information captured in the neighborhoods $\{\mathcal{N}_j\}_{j=1}^{J}$
 Zero-mean additive (e.g., Rx, quantization) noise
 Each sensor $j \in \mathcal{J}$, at time instant $t$
 Acquires a regressor $\mathbf{h}_j(t) \in \mathbb{R}^p$ and scalar observation $x_j(t)$
 Both zero-mean w.l.o.g. and spatially uncorrelated
 Least-mean squares (LMS) estimation problem of interest
$$\mathbf{s}^\star = \arg\min_{\mathbf{s}} \sum_{j=1}^{J} E\left[\left(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}\right)^2\right]$$
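A minimal sketch of this signal model, for later experimentation. All sizes and noise levels are illustrative assumptions, and the data are generated via the linear observation model that assumption (As2) formalizes on a later slide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the slides): J sensors, p-dim parameter, T steps.
J, p, T = 20, 4, 1000
s0 = rng.standard_normal(p)          # unknown parameter s_0 to be estimated

# Zero-mean regressors h_j(t) and scalar observations x_j(t) per sensor.
H = rng.standard_normal((T, J, p))
x = np.einsum('tjp,p->tj', H, s0) + 0.1 * rng.standard_normal((T, J))
```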
Centralized Approaches
 If $x_j(t)$, $\mathbf{h}_j(t)$ jointly stationary, the Wiener solution applies
$$\mathbf{s}^\star = \mathbf{R}_h^{-1}\,\mathbf{r}_{hx}, \qquad \mathbf{R}_h := \sum_{j=1}^{J} E\left[\mathbf{h}_j(t)\,\mathbf{h}_j^T(t)\right], \quad \mathbf{r}_{hx} := \sum_{j=1}^{J} E\left[\mathbf{h}_j(t)\,x_j(t)\right]$$
 If global (cross-)covariance matrices $\mathbf{R}_h$, $\mathbf{r}_{hx}$ available, steepest descent converges avoiding matrix inversion
$$\mathbf{s}(t+1) = \mathbf{s}(t) + \mu\left(\mathbf{r}_{hx} - \mathbf{R}_h\,\mathbf{s}(t)\right)$$
 If (cross-)covariance info. not available or time-varying, low complexity suggests (C-)LMS adaptation
$$\mathbf{s}(t+1) = \mathbf{s}(t) + \mu\sum_{j=1}^{J} \mathbf{h}_j(t+1)\left(x_j(t+1) - \mathbf{h}_j^T(t+1)\,\mathbf{s}(t)\right)$$
Goal: develop a distributed (D-) LMS algorithm for ad hoc WSNs
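As a sketch of the centralized baseline under the notation above (the function name and array shapes are assumptions), one (C-)LMS iteration on the instantaneous network-wide cost:

```python
import numpy as np

def clms_step(s, H_t, x_t, mu):
    """One (C-)LMS iteration:
    s(t+1) = s(t) + mu * sum_j h_j(t+1) * (x_j(t+1) - h_j(t+1)^T s(t))."""
    e = x_t - H_t @ s            # per-sensor prior errors, shape (J,)
    return s + mu * (H_t.T @ e)  # accumulate h_j * e_j over all sensors
```

With the toy data generated earlier, iterating `s = clms_step(s, H[t], x[t], mu)` over `t` drives `s` toward `s0`; the point of the rest of the talk is to match this behavior without any fusion center.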
A Useful Reformulation
 Introduce the bridge sensor subset $\mathcal{B} \subseteq \mathcal{J}$, required to satisfy
1) For all sensors $j \in \mathcal{J}$, there must exist $b \in \mathcal{B}$ such that $b \in \mathcal{N}_j$
2) For all $b, b' \in \mathcal{B}$, there must exist a sequence of bridge sensors joining $b$ to $b'$ through single-hop neighborhoods
 Consider the convex, constrained optimization
$$\min_{\{\mathbf{s}_j\},\{\bar{\mathbf{s}}_b\}}\ \sum_{j=1}^{J} E\left[\left(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}_j\right)^2\right] \quad \text{s.t. } \mathbf{s}_j = \bar{\mathbf{s}}_b\ \ \forall j \in \mathcal{J},\ b \in \mathcal{B}_j := \mathcal{N}_j \cap \mathcal{B}$$
 Proposition [Schizas et al. ’06]: For $\mathcal{B}$ satisfying 1)-2) and the WSN connected, the constrained problem is equivalent to the original one, i.e., $\mathbf{s}_j^\star = \mathbf{s}^\star$ for all $j \in \mathcal{J}$. (A simple greedy choice of $\mathcal{B}$ is sketched below.)
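The slides take the bridge subset as given, and indeed list its optimal selection as ongoing research on the closing slide. For experimentation only, a greedy cover aiming at condition 1) might look as follows (function name and the use of closed neighborhoods are assumptions):

```python
def greedy_bridge_set(nbrs):
    """Greedy cover: every sensor ends up with a bridge in its closed
    neighborhood. Condition 2) and optimality are NOT addressed here.
    nbrs: dict j -> set of j's single-hop neighbors."""
    uncovered = set(nbrs)
    bridges = set()
    while uncovered:
        # Pick the sensor whose closed neighborhood covers the most sensors.
        b = max(nbrs, key=lambda j: len((nbrs[j] | {j}) & uncovered))
        bridges.add(b)
        uncovered -= nbrs[b] | {b}
    return bridges

# Example: on a 6-node ring, two well-placed bridges cover all sensors.
ring = {j: {(j - 1) % 6, (j + 1) % 6} for j in range(6)}
print(greedy_bridge_set(ring))
```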
Algorithm Construction
 Problem of interest
$$\min_{\{\mathbf{s}_j\},\{\bar{\mathbf{s}}_b\}}\ \sum_{j=1}^{J} E\left[\left(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}_j\right)^2\right] \quad \text{s.t. } \mathbf{s}_j = \bar{\mathbf{s}}_b,\ b \in \mathcal{B}_j$$
 Two key steps in deriving D-LMS (a generic template follows this list)
1) Resort to the alternating-direction method of multipliers to gain the desired degree of parallelization
2) Apply stochastic approximation ideas to cope with the unavailability of statistical information
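For reference, the generic alternating-direction iteration for $\min_{\mathbf{y},\mathbf{z}}\ f(\mathbf{y}) + g(\mathbf{z})$ subject to $\mathbf{A}\mathbf{y} + \mathbf{B}\mathbf{z} = \mathbf{d}$, with augmented Lagrangian $\mathcal{L}_a$ and penalty coefficient $c$; the three-step process on the next slide is an instance of this template, with the dual step listed first:

$$\begin{aligned}
\mathbf{v}^{k+1} &= \mathbf{v}^{k} + c\left(\mathbf{A}\mathbf{y}^{k} + \mathbf{B}\mathbf{z}^{k} - \mathbf{d}\right) && \text{(dual ascent)}\\
\mathbf{y}^{k+1} &= \arg\min_{\mathbf{y}}\ \mathcal{L}_a\big(\mathbf{y}, \mathbf{z}^{k}, \mathbf{v}^{k+1}\big) && \text{(first primal block)}\\
\mathbf{z}^{k+1} &= \arg\min_{\mathbf{z}}\ \mathcal{L}_a\big(\mathbf{y}^{k+1}, \mathbf{z}, \mathbf{v}^{k+1}\big) && \text{(second primal block)}
\end{aligned}$$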
Derivation of Recursions
 Associated augmented Lagrangian
$$\mathcal{L}_a\big[\{\mathbf{s}_j\},\{\bar{\mathbf{s}}_b\},\{\mathbf{v}_j^b\}\big] = \sum_{j=1}^{J} E\left[\left(x_j(t) - \mathbf{h}_j^T(t)\,\mathbf{s}_j\right)^2\right] + \sum_{j=1}^{J}\sum_{b\in\mathcal{B}_j} (\mathbf{v}_j^b)^T(\mathbf{s}_j - \bar{\mathbf{s}}_b) + \frac{c}{2}\sum_{j=1}^{J}\sum_{b\in\mathcal{B}_j} \left\|\mathbf{s}_j - \bar{\mathbf{s}}_b\right\|^2$$
 Alternating-direction method of multipliers: three-step iterative update process
Step 1: Multipliers $\{\mathbf{v}_j^b\}$: dual iteration
Step 2: Local estimates $\{\mathbf{s}_j\}$: minimize $\mathcal{L}_a$ w.r.t. $\mathbf{s}_j$
Step 3: Bridge variables $\{\bar{\mathbf{s}}_b\}$: minimize $\mathcal{L}_a$ w.r.t. $\bar{\mathbf{s}}_b$
Multiplier Updates
 Recall the constraints $\mathbf{s}_j = \bar{\mathbf{s}}_b$, $b \in \mathcal{B}_j$
 Use a standard method-of-multipliers type of update
$$\mathbf{v}_j^b(t) = \mathbf{v}_j^b(t-1) + c\left(\mathbf{s}_j(t) - \bar{\mathbf{s}}_b(t)\right), \qquad b \in \mathcal{B}_j$$
 Requires $\{\bar{\mathbf{s}}_b(t)\}_{b\in\mathcal{B}_j}$ from the bridge neighborhood
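A one-line sketch of Step 1 for a single (sensor $j$, bridge $b$) pair; variable names are illustrative, not from the slides:

```python
def multiplier_update(v_jb, s_j, s_bar_b, c):
    """Dual ascent: v_j^b(t) = v_j^b(t-1) + c * (s_j(t) - s_bar_b(t))."""
    return v_jb + c * (s_j - s_bar_b)
```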
Local Estimate Updates
 Given by the local optimization
$$\mathbf{s}_j(t+1) = \arg\min_{\mathbf{s}_j}\ \mathcal{L}_a\big[\{\mathbf{s}_j\},\{\bar{\mathbf{s}}_b(t)\},\{\mathbf{v}_j^b(t)\}\big]$$
 First-order optimality condition involves the unavailable (cross-)covariances
 Proposed recursion inspired by the Robbins-Monro algorithm: replace the expected gradient by its instantaneous estimate
$$\mathbf{s}_j(t+1) = \mathbf{s}_j(t) + \mu_j\Big[\mathbf{h}_j(t+1)\,e_j(t+1) - \sum_{b\in\mathcal{B}_j}\big(\mathbf{v}_j^b(t) + c\,(\mathbf{s}_j(t) - \bar{\mathbf{s}}_b(t))\big)\Big]$$
1) $e_j(t+1) := x_j(t+1) - \mathbf{h}_j^T(t+1)\,\mathbf{s}_j(t)$ is the local prior error
2) $\mu_j$ is a constant step-size
 Requires
 Already acquired bridge variables $\{\bar{\mathbf{s}}_b(t)\}_{b\in\mathcal{B}_j}$
 Updated local multipliers $\{\mathbf{v}_j^b(t)\}_{b\in\mathcal{B}_j}$
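A sketch of Step 2 under the recursion above; the dict-based interface for the bridge neighborhood is an assumption for readability:

```python
def local_estimate_update(s_j, h_j, x_j, v_j, s_bar, mu_j, c):
    """Robbins-Monro-style stochastic gradient step on L_a w.r.t. s_j.
    v_j: dict b -> v_j^b(t); s_bar: dict b -> s_bar_b(t), for b in B_j."""
    e_j = x_j - h_j @ s_j                                    # local prior error
    consensus = sum(v_j[b] + c * (s_j - s_bar[b]) for b in v_j)
    return s_j + mu_j * (h_j * e_j - consensus)
```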
Bridge Variable Updates
 Similarly, minimizing $\mathcal{L}_a$ w.r.t. $\bar{\mathbf{s}}_b$ yields the closed form
$$\bar{\mathbf{s}}_b(t+1) = \frac{1}{|\mathcal{N}_b|}\sum_{j\in\mathcal{N}_b}\left[\frac{1}{c}\,\mathbf{v}_j^b(t) + \mathbf{s}_j(t+1)\right]$$
 Requires
 $\{\mathbf{v}_j^b(t)\}$ and $\{\mathbf{s}_j(t+1)\}$ from the neighborhood $\mathcal{N}_b$
 A one-time exchange from the neighborhood in a startup phase
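A sketch of Step 3, again with an assumed dict interface (the bridge averages multiplier-corrected local estimates over its neighborhood):

```python
def bridge_update(v_b, s_new, c):
    """s_bar_b(t+1) = (1/|N_b|) * sum_{j in N_b} [v_j^b(t)/c + s_j(t+1)].
    v_b: dict j -> v_j^b(t); s_new: dict j -> s_j(t+1), for j in N_b."""
    return sum(v_b[j] / c + s_new[j] for j in v_b) / len(v_b)
```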
D-LMS Recap and Operation
 In the presence of communication noise, for $t = 0, 1, \ldots$
Step 1: multiplier updates $\{\mathbf{v}_j^b(t)\}$
Step 2: local estimate updates $\{\mathbf{s}_j(t+1)\}$
Step 3: bridge variable updates $\{\bar{\mathbf{s}}_b(t+1)\}$
[Diagram: per-iteration message exchanges]
Steps 1, 2 (sensor $j$): Rx $\{\bar{\mathbf{s}}_b(t)\}$ from $b \in \mathcal{B}_j$, then Tx $\mathbf{s}_j(t+1)$ to $b \in \mathcal{B}_j$
Step 3 (bridge sensor $b$): Rx $\{\mathbf{s}_j(t+1)\}$ from $j \in \mathcal{N}_b$, then Tx $\bar{\mathbf{s}}_b(t+1)$ to $j \in \mathcal{N}_b$
 Simple, fully distributed, only single-hop exchanges needed (a full-loop sketch follows)
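Tying the three steps together, a self-contained simulation sketch. Topology, sizes, and constants are illustrative assumptions, and links are noiseless for simplicity, whereas the slide explicitly allows communication noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: ring topology, every other sensor doubles as a bridge.
J, p = 10, 4
s0 = rng.standard_normal(p)                      # true weight (static here)
mu, c = 0.05, 1.0                                # step-size, penalty coefficient
nbrs = {j: {(j - 1) % J, (j + 1) % J} for j in range(J)}
bridges = set(range(0, J, 2))
B_j = {j: (nbrs[j] | {j}) & bridges for j in range(J)}   # bridge neighborhoods

s = np.zeros((J, p))                                         # local estimates
v = {(j, b): np.zeros(p) for j in range(J) for b in B_j[j]}  # multipliers
s_bar = {b: np.zeros(p) for b in bridges}                    # bridge variables

for t in range(3000):
    h = rng.standard_normal((J, p))                  # regressors h_j(t+1)
    x = h @ s0 + 0.1 * rng.standard_normal(J)        # observations x_j(t+1)
    for j in range(J):
        for b in B_j[j]:                             # Step 1: multipliers
            v[j, b] += c * (s[j] - s_bar[b])
        e = x[j] - h[j] @ s[j]                       # local prior error
        grad_c = sum(v[j, b] + c * (s[j] - s_bar[b]) for b in B_j[j])
        s[j] = s[j] + mu * (h[j] * e - grad_c)       # Step 2: local estimates
    for b in bridges:                                # Step 3: bridge variables
        members = [j for j in range(J) if b in B_j[j]]
        s_bar[b] = sum(v[j, b] / c + s[j] for j in members) / len(members)

print("avg ||s_j - s0||:", np.mean(np.linalg.norm(s - s0, axis=1)))
```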
Further Insights
 Manipulating the recursions for $\mathbf{s}_j(t)$ and $\mathbf{v}_j^b(t)$ yields an equivalent form of the local update
 Introduce the instantaneous consensus error at sensor $j$: the deviation of $\mathbf{s}_j(t)$ from its bridge-neighborhood set-point
 The update of $\mathbf{s}_j$ becomes a superposition of two learning mechanisms (made explicit below)
 Purely local LMS-type of adaptation
 PI consensus loop that tracks the consensus set-point
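In the notation above, regrouping the Step-2 recursion makes the two mechanisms explicit (a reconstruction consistent with the slide's description; the paper's exact grouping may differ). The penalty sum acts proportionally on the current consensus error, while the multiplier sum integrates past errors:

$$\mathbf{s}_j(t+1) = \underbrace{\mathbf{s}_j(t) + \mu_j\,\mathbf{h}_j(t+1)\,e_j(t+1)}_{\text{local LMS adaptation}} \;-\; \mu_j\underbrace{\Big[\,c\sum_{b\in\mathcal{B}_j}\big(\mathbf{s}_j(t)-\bar{\mathbf{s}}_b(t)\big) + \sum_{b\in\mathcal{B}_j}\mathbf{v}_j^b(t)\Big]}_{\text{PI consensus loop (P: penalty term, I: accumulated multipliers)}}$$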
D-LMS Processor
[Block diagram: sensor $j$'s D-LMS processor, combining a local LMS algorithm with a PI-regulator consensus loop whose output is fed to the neighborhood]
 Network-wide information enters through the set-point
 Expect increased performance, with added flexibility
Mean Analysis
 Independence setting: signal assumptions
(As1) Regressors $\mathbf{h}_j(t)$ are zero-mean white random vectors with covariance $\mathbf{R}_{h_j} := E[\mathbf{h}_j(t)\,\mathbf{h}_j^T(t)]$, with spectral radius $\lambda_{\max}(\mathbf{R}_{h_j})$
(As2) Observations obey a linear model $x_j(t) = \mathbf{h}_j^T(t)\,\mathbf{s}_0 + \epsilon_j(t)$, where $\epsilon_j(t)$ is a zero-mean white noise
(As3) $\{\mathbf{h}_j(t)\}$ and $\{\epsilon_j(t)\}$ are statistically independent
 Define the local errors $\mathbf{y}_j(t) := \mathbf{s}_j(t) - \mathbf{s}_0$ and $\bar{\mathbf{y}}_b(t) := \bar{\mathbf{s}}_b(t) - \mathbf{s}_0$
 Goal: derive sufficient conditions under which $\lim_{t\to\infty} E[\mathbf{y}_j(t)] = \mathbf{0}$ for all $j \in \mathcal{J}$
Dynamics of the Mean
Lemma: Under (As1)-(As3), consider the D-LMS algorithm initialized with all local estimates, bridge variables, and multipliers set to zero. Then for $t \geq 0$, the stacked mean error $E[\mathbf{y}(t)] := [E[\mathbf{y}_1^T(t)], \ldots, E[\mathbf{y}_J^T(t)]]^T$ is given by a second-order recursion of the form
$$E[\mathbf{y}(t+1)] = \boldsymbol{\Phi}_1\,E[\mathbf{y}(t)] + \boldsymbol{\Phi}_2\,E[\mathbf{y}(t-1)]$$
with coefficient matrices (denoted here $\boldsymbol{\Phi}_1$, $\boldsymbol{\Phi}_2$) determined by $\{\mu_j\}$, $c$, $\{\mathbf{R}_{h_j}\}$, and the bridge neighborhoods $\{\mathcal{B}_j\}$.
 Equivalent first-order system by state concatenation: $\big[E[\mathbf{y}^T(t+1)]\ \ E[\mathbf{y}^T(t)]\big]^T = \boldsymbol{\Phi}\,\big[E[\mathbf{y}^T(t)]\ \ E[\mathbf{y}^T(t-1)]\big]^T$
First-Order Stability Result
Proposition: Under (As1)-(As3), the D-LMS algorithm whose positive step-sizes $\{\mu_j\}$ and relevant parameters are chosen such that the equivalent first-order mean-error system of the Lemma is stable, achieves consensus in the mean sense, i.e., $\lim_{t\to\infty} E[\mathbf{s}_j(t)] = \mathbf{s}_0$ for all $j \in \mathcal{J}$
 Step-size selection based on local information only (an illustrative rule follows)
 Local regressor statistics $\mathbf{R}_{h_j}$
 Bridge neighborhood size $|\mathcal{B}_j|$
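The explicit bound is not reproduced here, so the following rule is only a hypothetical stand-in illustrating "local information only": it consumes nothing beyond the local regressor covariance and the bridge neighborhood size. The specific combination below is an assumption, not the Proposition's condition:

```python
import numpy as np

def local_stepsize(R_hj, n_bridge, c, safety=0.5):
    """Hypothetical local step-size rule: shrink the classical LMS bound
    2 / lambda_max(R_hj) to account for the penalty contribution c*|B_j|.
    NOT the paper's exact condition; illustrative only."""
    lam_max = np.linalg.eigvalsh(R_hj)[-1]   # eigvalsh sorts ascending
    return safety * 2.0 / (lam_max + 2.0 * c * n_bridge)
```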
Simulations
 Setup: $J$-node WSN
 Regressors: $\mathbf{h}_j(t)$ i.i.d., zero-mean
 Observations: generated via the linear model (As2)
 D-LMS: run with constant step-sizes $\mu_j$
 True time-varying weight: $\mathbf{s}_0(t)$, varied over time to test tracking
Loop Tuning
 Adequately selecting the consensus-loop parameters actually does make a difference
 Compared figures of merit (empirical versions sketched below):
 MSE (learning curve): $\mathrm{MSE}(t) = \frac{1}{J}\sum_{j=1}^{J} E\left[e_j^2(t)\right]$
 MSD (normalized estimation error): $\mathrm{MSD}(t) = \frac{1}{J}\sum_{j=1}^{J} E\left[\left\|\mathbf{s}_j(t) - \mathbf{s}_0(t)\right\|^2\right]$
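Empirical versions of the two figures of merit, as a sketch; normalizing the MSD by the true-weight energy is an assumption about what "normalized" means here:

```python
import numpy as np

def mse_curve(E):
    """Learning curve: MSE(t) = (1/J) * sum_j e_j(t)^2; E has shape (T, J)."""
    return np.mean(np.asarray(E) ** 2, axis=1)

def msd_curve(S, s0):
    """Normalized estimation error:
    MSD(t) = (1/J) * sum_j ||s_j(t) - s0(t)||^2 / ||s0(t)||^2.
    S: (T, J, p) local estimates; s0: (T, p) true weight trajectory."""
    dev = S - s0[:, None, :]
    return np.mean(np.sum(dev ** 2, axis=2), axis=1) / np.sum(s0 ** 2, axis=1)
```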
Concluding Summary
 Developed a distributed LMS algorithm for general ad hoc WSNs
 Intuitive sensor-level processing
 Local LMS adaptation
 Tunable PI loop driving the local estimate to consensus
 Mean analysis under independence assumptions yields step-size selection rules based on local information
 Simulations validate mean-square sense (mss) convergence and tracking capabilities
 Ongoing research
 Stability and performance analysis under general settings
 Optimality: selection of bridge sensors
 D-RLS: estimation/learning performance vs. complexity tradeoff