A Prediction Problem
AGC
Adaptive Signal Processing
DSP
Problem: equalise, using an FIR filter, the distorting effect of a communication channel that may be changing with time.
If the channel were fixed, then a possible solution could be based on the Wiener filter approach.
In that case we need to know the correlation matrix of the transmitted signal and the cross-correlation vector between the input and the desired response.
When the filter is operating in an unknown environment, these required quantities need to be found from the accumulated data.
Professor A G Constantinides©
The problem is particularly acute when not only the environment is changing but also the data involved are non-stationary.
In such cases we need to follow the behaviour of the signals over time and adapt the correlation parameters as the environment changes.
This would essentially produce a temporally
adaptive filter.
A possible framework is:
[Figure: the input {x[n]} drives an adaptive filter with weights w, producing the estimate d̂[n]; the error e[n] = d[n] - d̂[n] between the desired response d[n] and the estimate drives the adaptation algorithm that updates w]
Applications are many:
Digital Communications
Channel Equalisation
Adaptive noise cancellation
Adaptive echo cancellation
System identification
Smart antenna systems
Blind system equalisation
And many, many others
Applications
Echo Cancellers in Local Loops
[Figure: a local loop with a hybrid at each end (Tx1/Rx1 on one side, Rx2 on the other); at each end an echo canceller driven by an adaptive algorithm subtracts its echo estimate from the received signal]
Adaptive Noise Canceller
[Figure: the primary signal (signal + noise) and a reference signal correlated with the noise; the reference drives an adaptive FIR filter whose output is subtracted from the primary signal, and the residual drives the adaptive algorithm]
System Identification
[Figure: the input signal drives the unknown system and an adaptive FIR filter in parallel; the difference between their outputs drives the adaptive algorithm]
System Equalisation
[Figure: the signal passes through the unknown system followed by the adaptive FIR filter; the filter output is compared with a delayed version of the input signal, and the difference drives the adaptive algorithm]
Adaptive Predictors
[Figure: a delayed version of the signal feeds an adaptive FIR filter that predicts the current sample; the prediction error drives the adaptive algorithm]
Adaptive Arrays
[Figure: an array of sensor outputs feeds an adaptive linear combiner whose weights are adjusted to suppress the interference]
Basic principles:
1) Form an objective function (performance criterion).
2) Find the gradient of the objective function with respect to the FIR filter weights.
3) There are several different approaches that can be used at this point.
4) Form a differential/difference equation from the gradient.
Let the desired signal be d[n], the input signal x[n], and the output y[n].
Now form the vectors
x[n] = [x[n]  x[n-1]  ...  x[n-m+1]]^T
h = [h[0]  h[1]  ...  h[m-1]]^T
so that
y[n] = x[n]^T h
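As a concrete illustration (the names and data here are hypothetical, not from the lecture), the tap-delay vector and the filter output can be formed as:

```python
# Hypothetical sketch (illustrative names and data): forming the tap-delay
# vector x[n] = [x[n], x[n-1], ..., x[n-m+1]]^T and the output y[n] = x[n]^T h.
def tap_vector(x, n, m):
    """Return [x[n], x[n-1], ..., x[n-m+1]], zero-padded before n = 0."""
    return [x[n - k] if n - k >= 0 else 0.0 for k in range(m)]

def fir_output(x, n, h):
    """Compute y[n] = x[n]^T h for tap weights h = [h[0], ..., h[m-1]]."""
    xn = tap_vector(x, n, len(h))
    return sum(xk * hk for xk, hk in zip(xn, h))

x = [1.0, 2.0, 3.0, 4.0]
h = [0.5, 0.25]              # m = 2 taps
y2 = fir_output(x, 2, h)     # 0.5*x[2] + 0.25*x[1] = 2.0
```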
Then form the objective function
J(h) = E{(d[n] - y[n])^2}
     = σ_d^2 - p^T h - h^T p + h^T R h
where
R = E{x[n] x[n]^T}
p = E{x[n] d[n]}
We wish to minimise this function at the instant n.
Using steepest descent we write
h[n+1] = h[n] - (1/2) μ ∂J(h[n])/∂h[n]
But
∂J(h)/∂h = -2p + 2Rh
So that the "weights update equation" becomes
h[n+1] = h[n] + μ (p - R h[n])
Since the objective function is quadratic, this iteration converges to the optimum for a suitably chosen step size.
The equation is not practical: if we knew R and p a priori, we could find the required solution (Wiener) directly as
h_opt = R^{-1} p
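The recursion above can be sketched numerically (R, p, and μ below are assumed values for illustration, not from the lecture):

```python
import numpy as np

# Sketch with assumed values: the steepest-descent iteration
# h[n+1] = h[n] + mu*(p - R h[n]) approaching the Wiener solution.
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # assumed correlation matrix
p = np.array([1.0, 0.4])          # assumed cross-correlation vector
h_opt = np.linalg.solve(R, p)     # Wiener solution h_opt = R^{-1} p

mu = 0.1                          # step size, below 2/lambda_max (~0.9 here)
h = np.zeros(2)
for _ in range(500):
    h = h + mu * (p - R @ h)      # deterministic gradient step

err = float(np.linalg.norm(h - h_opt))
```

With μ inside the stability bound the weight error shrinks geometrically, so after a few hundred iterations h matches h_opt essentially to machine precision.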
However, these matrices are not known.
Approximate expressions are obtained by ignoring the expectations in the earlier complete forms:
R̂[n] = x[n] x[n]^T
p̂[n] = x[n] d[n]
This is very crude. However, because the update equation accumulates such quantities progressively, we expect the crude form to improve.
The LMS Algorithm
Thus we have
h[n+1] = h[n] + μ x[n] (d[n] - x[n]^T h[n])
where the error is
e[n] = d[n] - x[n]^T h[n] = d[n] - y[n]
and hence we can write
h[n+1] = h[n] + μ x[n] e[n]
This is sometimes called stochastic gradient descent.
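A minimal sketch of the LMS recursion (the 2-tap system and white-noise input are assumed for illustration, not taken from the lecture):

```python
import random

# Sketch with an assumed 2-tap system and white-noise input:
# the LMS recursion h[n+1] = h[n] + mu*x[n]*e[n] identifying the system.
random.seed(0)
h_true = [0.8, -0.3]                  # "unknown" system to identify
mu = 0.05                             # small step size
h = [0.0, 0.0]                        # adaptive weights
xv = [0.0, 0.0]                       # tap-delay vector [x[n], x[n-1]]

for _ in range(5000):
    xv = [random.gauss(0.0, 1.0), xv[0]]            # shift in a new sample
    d = h_true[0] * xv[0] + h_true[1] * xv[1]       # desired response d[n]
    y = h[0] * xv[0] + h[1] * xv[1]                 # filter output y[n]
    e = d - y                                       # error e[n]
    h = [h[0] + mu * xv[0] * e,                     # LMS weight update
         h[1] + mu * xv[1] * e]
```

In this noise-free setting the weights settle very close to the true system; with observation noise they fluctuate around it, which is the gradient noise discussed later.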
Convergence
The parameter μ is the step size, and it should be selected carefully.
If it is too small, convergence takes too long; if it is too large, the algorithm can become unstable.
Write the autocorrelation matrix in the eigen-factorised form
R = Q^T Λ Q
where Q is orthogonal and Λ is diagonal, containing the eigenvalues.
The error in the weights with respect to their optimal values is given by (using the Wiener solution p = R h_opt)
h[n+1] - h_opt = h[n] - h_opt + μ (R h_opt - R h[n])
Writing e_h[n] = h[n] - h_opt, we obtain
e_h[n+1] = e_h[n] - μ R e_h[n]
Or equivalently
e_h[n+1] = (I - μ Q^T Λ Q) e_h[n]
i.e.
Q e_h[n+1] = Q (I - μ Q^T Λ Q) e_h[n]
           = (Q - μ Q Q^T Λ Q) e_h[n]
Thus we have
Q e_h[n+1] = (I - μ Λ) Q e_h[n]
Form a new variable
v[n] = Q e_h[n]
So that
v[n+1] = (I - μ Λ) v[n]
Thus each element of this new variable depends on its previous value through a scaling constant.
The equation therefore has an exponential form in the time domain, and the largest coefficient on the right-hand side dominates.
We require that
|1 - μ λ_max| < 1
or
0 < μ < 2 / λ_max
In practice we take a much smaller
value than this
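The bound can be checked numerically; a sketch under assumed conditions (white-noise input, sample estimate of R; none of the values come from the lecture):

```python
import numpy as np

# Sketch (signal is assumed white noise): estimate R from tap-delay vectors,
# take lambda_max, and form the stability bound 0 < mu < 2/lambda_max.
rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
m = 4
# Rows are tap-delay vectors x[n] = [x[n], x[n-1], ..., x[n-m+1]]
X = np.array([x[n - m + 1:n + 1][::-1] for n in range(m - 1, len(x))])
R_hat = (X.T @ X) / len(X)            # sample correlation matrix
lam_max = float(np.linalg.eigvalsh(R_hat).max())
mu_bound = 2.0 / lam_max              # upper limit on the LMS step size
```

For unit-variance white noise R ≈ I, so the bound comes out close to 2; a practical μ is taken well below it.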
Estimates
On taking expectations of both sides of the weight update equation we have
E{h[n+1]} = E{h[n]} + μ E{x[n] (d[n] - x[n]^T h[n])}
Then it can be seen that as n → ∞ the weight update equation yields E{h[n+1]} = E{h[n]}, or
0 = E{x[n] d[n] - x[n] x[n]^T h[n]}
Limiting forms
This indicates that the solution ultimately tends to the Wiener form; i.e., the estimate is unbiased.
Misadjustment
The misadjustment measures the excess mean square error in the objective function due to gradient noise.
The minimum of the objective function is
J_min = σ_d^2 - p^T h_opt
where σ_d^2 is the variance of the desired response; when the input and the desired response are uncorrelated, h_opt is zero and J_min = σ_d^2.
With the excess mean square error J_XS = J_LMS(∞) - J_min, the misadjustment is then defined as
M = J_XS / J_min = (J_LMS(∞) - J_min) / J_min
It can be shown that the misadjustment is given by
J_XS / J_min = Σ_{i=1}^{m} μ λ_i / (1 - μ λ_i)
where the λ_i are the eigenvalues of R.
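The sum is easy to evaluate for a given step size; a sketch with assumed eigenvalues (illustrative, not from the lecture):

```python
# Sketch with assumed eigenvalues: evaluating the misadjustment
# M = sum_{i=1}^{m} mu*lambda_i / (1 - mu*lambda_i).
eigs = [0.5, 1.0, 2.0]     # assumed eigenvalues of R
mu = 0.05                  # step size satisfying mu < 2/max(eigs)
M = sum(mu * lam / (1.0 - mu * lam) for lam in eigs)
# For small mu each term is roughly mu*lambda_i, so M is close to mu*trace(R)
```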
Normalised LMS
To make the step size respond to the signal we need
h[n+1] = h[n] + (μ / (1 + ‖x[n]‖^2)) x[n] e[n]
In this case
0 < μ < 1
and the misadjustment is proportional to the step size.
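The normalised update can be sketched as follows (the 2-tap system and input are assumed for illustration):

```python
import random

# Sketch with an assumed system: the normalised LMS update
# h[n+1] = h[n] + (mu / (1 + ||x[n]||^2)) * x[n] * e[n], with 0 < mu < 1.
random.seed(0)
h_true = [1.0, 0.5]                 # assumed "unknown" system
mu = 0.5
h = [0.0, 0.0]
xv = [0.0, 0.0]                     # tap-delay vector [x[n], x[n-1]]

for _ in range(2000):
    xv = [random.gauss(0.0, 1.0), xv[0]]
    d = h_true[0] * xv[0] + h_true[1] * xv[1]
    e = d - (h[0] * xv[0] + h[1] * xv[1])
    g = mu / (1.0 + xv[0] ** 2 + xv[1] ** 2)      # signal-dependent step
    h = [h[0] + g * xv[0] * e, h[1] + g * xv[1] * e]
```

Dividing by 1 + ‖x[n]‖^2 keeps the effective step bounded even when the input power fluctuates, which is the point of the normalisation.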
Transform based LMS
[Figure: the adaptive-filter framework of the earlier slide, with a transform applied to the input {x[n]} and an inverse transform at the output; the adaptive filter w produces d̂[n], which is compared with d[n] to form the error e[n] driving the algorithm]
Least Squares Adaptive
With
R[n] = Σ_{i=1}^{n} x[i] x[i]^T
p[n] = Σ_{i=1}^{n} x[i] d[i]
we have the least squares solution
h[n] = R[n]^{-1} p[n]
However, this is computationally very intensive to implement.
Alternative forms make use of recursive estimates of the matrices involved.
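The block solution can be sketched directly (the system and data below are assumed for illustration):

```python
import numpy as np

# Sketch with assumed data: the block least-squares solution
# h[n] = R[n]^{-1} p[n], with R[n] and p[n] accumulated over all samples.
rng = np.random.default_rng(0)
h_true = np.array([0.7, -0.2, 0.1])   # assumed unknown system
m = 3
x = rng.standard_normal(500)
# Rows are tap-delay vectors x[i] = [x[i], x[i-1], ..., x[i-m+1]]
X = np.array([x[n - m + 1:n + 1][::-1] for n in range(m - 1, len(x))])
d = X @ h_true                         # noise-free desired response
R = X.T @ X                            # R[n] = sum_i x[i] x[i]^T
p = X.T @ d                            # p[n] = sum_i x[i] d[i]
h_ls = np.linalg.solve(R, p)           # least squares weights
```

Re-solving this system from scratch at every new sample is what makes the block form expensive, motivating the recursive form below.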
Recursive Least Squares
Firstly we note that
p[n] = p[n-1] + x[n] d[n]
R[n] = R[n-1] + x[n] x[n]^T
We now use the Matrix Inversion Lemma (or the Sherman-Morrison formula).
Let
P[n] = R[n]^{-1}
and
k[n] = R[n-1]^{-1} x[n] / (1 + x[n]^T R[n-1]^{-1} x[n])
Then
P[n] = P[n-1] - k[n] x[n]^T P[n-1]
The quantity k[n] is known as the Kalman gain.
Now use k[n] = P[n] x[n] in the computation of the filter weights:
h[n] = P[n] p[n] = P[n] (p[n-1] + x[n] d[n])
From the earlier expression for the P[n] updates we have
P[n] p[n-1] = P[n-1] p[n-1] - k[n] x[n]^T P[n-1] p[n-1]
and hence
h[n] = h[n-1] + k[n] (d[n] - x[n]^T h[n-1])
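The three recursions above (gain, inverse-correlation update, weight update) can be sketched together; the system, data, and the large initial P are assumed for illustration:

```python
import numpy as np

# Sketch with assumed data: the RLS recursion built from the Kalman gain
# k[n], the inverse-correlation update P[n], and the weight update h[n].
rng = np.random.default_rng(0)
h_true = np.array([0.9, -0.4])     # assumed unknown system
m = 2
P = 1000.0 * np.eye(m)             # large initial P ~ R^{-1} (assumed init)
h = np.zeros(m)
xv = np.zeros(m)

for _ in range(300):
    xv = np.concatenate(([rng.standard_normal()], xv[:-1]))  # tap-delay shift
    d = float(h_true @ xv)                   # desired response d[n]
    Px = P @ xv
    k = Px / (1.0 + float(xv @ Px))          # Kalman gain k[n]
    P = P - np.outer(k, xv) @ P              # P[n] = P[n-1] - k x^T P[n-1]
    h = h + k * (d - float(xv @ h))          # h[n] = h[n-1] + k (d - x^T h)
```

Each step costs O(m^2) rather than the O(m^3) of re-inverting R[n], which is the point of the recursion.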
Kalman Filters
The Kalman filter is a sequential estimator, normally derived from either
the Bayes approach, or
the innovations approach.
Essentially both lead to the same equations as RLS, but the underlying assumptions are different.
The problem is normally stated as: given a sequence of noisy observations, estimate the sequence of state vectors of a linear system driven by noise.
Standard formulation:
x[n+1] = A x[n] + w[n]
y[n] = C[n] x[n] + ν[n]
Kalman filters may be seen as RLS with the following correspondence:

                       State space (Kalman)      RLS
State-update matrix    A[n]                      I
State-noise variance   Q[n] = E{w[n] w[n]^T}     0
Observation matrix     C[n]                      x[n]^T
Observations           y[n]                      d[n]
State estimate         x[n]                      h[n]
Cholesky Factorisation
In situations where storage and, to some extent, computational demand are at a premium, one can use the Cholesky factorisation technique for a positive-definite matrix.
Express R = L L^T, where L is lower triangular.
There are many techniques for determining the factorisation.
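A sketch of how the factor is used in practice (the matrix and right-hand side are assumed for illustration):

```python
import numpy as np

# Sketch with an assumed matrix: Cholesky factorisation R = L L^T,
# then solving R h = p with two triangular solves.
R = np.array([[4.0, 2.0, 0.6],
              [2.0, 5.0, 1.0],
              [0.6, 1.0, 3.0]])     # assumed positive-definite matrix
p = np.array([1.0, 0.5, 0.2])

L = np.linalg.cholesky(R)           # lower triangular factor
y = np.linalg.solve(L, p)           # forward solve:  L y = p
h = np.linalg.solve(L.T, y)         # back solve:     L^T h = y
```

Solving through the triangular factor avoids forming R^{-1} explicitly, and the same L can be reused across many right-hand sides.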