A Prediction Problem
Problem: given a sample set of a stationary process,

{x[n], x[n-1], x[n-2], ..., x[n-M]}

predict the value of the process some time into the future as

x[n+m] = f( x[n], x[n-1], x[n-2], ..., x[n-M] )

The function f may be linear or non-linear. We concentrate only on linear prediction functions.
Professor A G Constantinides©
Linear Prediction dates back to Gauss in the
18th century.
It is used extensively in DSP theory and applications (spectrum analysis, speech processing, radar, sonar, seismology, mobile telephony, financial systems, etc.).
The difference between the predicted and actual value at a specific point in time is called the prediction error.
The objective of prediction is: given the
data, to select a linear function that
minimises the prediction error.
The Wiener approach examined earlier may be cast into a predictive form in which the desired signal to be followed is the next sample of the given process.
Forward & Backward Prediction
If the prediction is written as

x̂[n] = f( x[n-1], x[n-2], ..., x[n-M] )

then we have a one-step forward prediction.
If the prediction is written as

x̂[n-M] = f( x[n], x[n-1], x[n-2], ..., x[n-M+1] )

then we have a one-step backward prediction.
Forward Prediction Problem
The forward prediction error is then

e_f[n] = x[n] - x̂[n]

Write the prediction equation as

x̂[n] = Σ_{k=1}^{M} w[k] x[n-k]

and, as in the Wiener case, we minimise the second-order norm of the prediction error.
Thus the solution accrues from

J = min_w E{ (e_f[n])^2 } = min_w E{ (x[n] - x̂[n])^2 }

Expanding, we have

J = min_w [ E{ (x[n])^2 } - 2 E{ x[n] x̂[n] } + E{ (x̂[n])^2 } ]

Differentiating with respect to the weight vector, we obtain

∂J/∂w_i = -2 E{ x[n] ∂x̂[n]/∂w_i } + 2 E{ x̂[n] ∂x̂[n]/∂w_i }
However,

∂x̂[n]/∂w_i = x[n-i]

and hence

∂J/∂w_i = -2 E{ x[n] x[n-i] } + 2 E{ x̂[n] x[n-i] }

or

∂J/∂w_i = -2 E{ x[n] x[n-i] } + 2 E{ Σ_{k=1}^{M} w[k] x[n-k] x[n-i] }
On substituting the corresponding correlation sequences, we have

∂J/∂w_i = -2 r_xx[i] + 2 Σ_{k=1}^{M} w[k] r_xx[i-k]

Setting this expression to zero for minimisation yields

Σ_{k=1}^{M} w[k] r_xx[i-k] = r_xx[i],   i = 1, 2, 3, ..., M
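As a quick numerical sketch, the normal equations can be solved directly. The function name and the AR(1)-style autocorrelation r_xx[k] = 0.8^|k| below are illustrative assumptions, not from the slides; for such a process only the first predictor weight should be non-zero.

```python
import numpy as np

def forward_predictor(r, M):
    """Solve the normal equations sum_k w[k] r_xx[i-k] = r_xx[i], i = 1..M."""
    # R is the M x M symmetric Toeplitz autocorrelation matrix, R[i,k] = r_xx[|i-k|]
    R = np.array([[r[abs(i - k)] for k in range(M)] for i in range(M)])
    return np.linalg.solve(R, r[1:M + 1])

# Assumed example: AR(1)-like process with r_xx[k] = 0.8**|k|
r = 0.8 ** np.arange(5)
w = forward_predictor(r, 3)
print(w)                     # ~ [0.8, 0, 0]: one past sample suffices for an AR(1) process
eps = r[0] - w @ r[1:4]      # minimum prediction-error power, here 1 - 0.8**2 = 0.36
```

The Toeplitz structure of R is what the Levinson-Durbin recursion later exploits to avoid the generic O(M^3) solve used here.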
These are the Normal Equations, or Wiener-Hopf, or Yule-Walker equations, structured for the one-step forward predictor.
In this specific case it is clear that we need only know the autocorrelation properties of the given process to determine the predictor coefficients.
Forward Prediction Filter
Set

a_M[0] = 1
a_M[m] = -w[m],   m = 1, ..., M
a_M[m] = 0,       m > M

and rewrite the earlier expression as

Σ_{m=0}^{M} a_M[m] r_xx[m-k] = ε_M   for k = 0
Σ_{m=0}^{M} a_M[m] r_xx[m-k] = 0     for k = 1, 2, ..., M

where ε_M = r_xx[0] - Σ_{m=1}^{M} w[m] r_xx[m] is the minimum prediction-error power.
These equations are sometimes known as the augmented forward prediction normal equations.
The prediction error is then given as

e_f[n] = Σ_{k=0}^{M} a_M[k] x[n-k]

This is an FIR filter, known as the prediction-error filter:

A_f(z) = 1 + a_M[1] z^-1 + a_M[2] z^-2 + ... + a_M[M] z^-M
Backward Prediction Problem
In a similar manner, for the backward prediction case we write

e_b[n] = x[n-M] - x̂[n-M]

and

x̂[n-M] = Σ_{k=1}^{M} w̃[k] x[n-k+1]

where we assume that the backward predictor filter weights are different from those of the forward case.
Thus, on comparing the forward and backward formulations with the Wiener least-squares conditions, we see that the desired signal is now x[n-M].
Hence the normal equations for the backward case can be written as

Σ_{m=1}^{M} w̃[m] r_xx[m-k] = r_xx[M-k+1],   k = 1, 2, 3, ..., M
This can be slightly adjusted as

Σ_{m=1}^{M} w̃[M-m+1] r_xx[k-m] = r_xx[k],   k = 1, 2, 3, ..., M

On comparing this equation with the corresponding forward case, it is seen that the two have the same mathematical form and

w[m] = w̃[M-m+1],   m = 1, 2, ..., M

or equivalently

w̃[m] = w[M-m+1],   m = 1, 2, ..., M
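This weight reversal can be checked numerically. A minimal sketch, with assumed autocorrelation values (the same symmetric Toeplitz matrix serves both sets of normal equations):

```python
import numpy as np

# Hypothetical autocorrelation values r_xx[0..3]
r = np.array([1.0, 0.7, 0.4, 0.2])
M = 3
R = np.array([[r[abs(i - k)] for k in range(M)] for i in range(M)])

w_fwd = np.linalg.solve(R, r[1:M + 1])       # forward: RHS is r_xx[1..M]
rhs_b = r[1:M + 1][::-1]                     # backward: RHS is r_xx[M-k+1], k = 1..M
w_bwd = np.linalg.solve(R, rhs_b)

print(np.allclose(w_bwd, w_fwd[::-1]))       # True: backward weights are the forward ones reversed
```

The reversal is a consequence of R being persymmetric (symmetric about its anti-diagonal), which holds for any Toeplitz autocorrelation matrix.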
Backward Prediction Filter
I.e. the backward prediction filter has the same weights as the forward case, but reversed:

A_b(z) = a_M[M] + a_M[M-1] z^-1 + a_M[M-2] z^-2 + ... + z^-M

This result is significant, and many properties of efficient predictors ensue from it.
Observe that the ratio of the backward prediction-error filter to the forward prediction-error filter is allpass.
This yields the lattice predictor structures. More on this later.
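The allpass property is easy to verify numerically. A sketch with assumed coefficient values for A_f; A_b is simply the reversed coefficient vector:

```python
import numpy as np

a = np.array([1.0, -0.9, 0.3])      # A_f coefficients (assumed illustrative values)
b = a[::-1]                         # A_b: the same coefficients reversed

omega = np.linspace(0, np.pi, 256)
E = np.exp(-1j * np.outer(omega, np.arange(len(a))))   # columns e^{-j*omega*k}
ratio = np.abs((E @ b) / (E @ a))   # |A_b(e^{jw}) / A_f(e^{jw})| on the unit circle
print(ratio.min(), ratio.max())     # both ~1: the ratio is allpass
```

For real coefficients, A_b(z) = z^-M A_f(1/z), so on the unit circle A_b is just a phase factor times the conjugate of A_f, which is why the magnitude ratio is identically one.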
Levinson-Durbin
Solution of the Normal Equations
The Durbin algorithm solves the following:

R_m w_m = r_m

where the right-hand side is a column of R_m, as in the normal equations.
Assume we have a solution for

R_k w_k = r_k,   1 ≤ k < m

where r_k = [r_1, r_2, r_3, ..., r_k]^T
For the next iteration the normal equations can be written as

R_{k+1} w_{k+1} = r_{k+1}

where

R_{k+1} = [ R_k        J_k r_k* ]        r_{k+1} = [ r_k     ]
          [ r_k^T J_k  r_0      ]                  [ r_{k+1} ]

Set

w_{k+1} = [ z_k ]
          [ α_k ]

where J_k is the k-order counteridentity matrix.
Multiplying out yields

z_k = R_k^-1 ( r_k - α_k J_k r_k* ) = w_k - α_k R_k^-1 J_k r_k*

Note that

R_k^-1 J_k = J_k R_k^-1

Hence

z_k = w_k - α_k J_k w_k*

I.e. the first k elements of w_{k+1} are adjusted versions of the previous solution.
The last element follows from the second equation of

[ R_k        J_k r_k* ] [ z_k ]   [ r_k     ]
[ r_k^T J_k  r_0      ] [ α_k ] = [ r_{k+1} ]

i.e. r_k^T J_k z_k + r_0 α_k = r_{k+1}, so that

α_k = ( r_{k+1} - r_k^T J_k z_k ) / r_0
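For a real-valued process the recursion can be sketched as follows. Eliminating z_k from the α_k equation gives α_k = (r_{k+1} - r_k^T J_k w_k) / (r_0 - r_k^T w_k), which is what the loop computes; the function name and the test autocorrelation are illustrative assumptions.

```python
import numpy as np

def durbin(r):
    """Durbin recursion (real case): solve R_m w_m = r_m for the Toeplitz R_m
    built from the autocorrelation r[0..m]; also return the alpha_k."""
    r = np.asarray(r, dtype=float)
    w = np.array([r[1] / r[0]])                # order-1 solution
    alphas = [w[0]]
    for k in range(1, len(r) - 1):
        rk = r[1:k + 1]                        # [r_1, ..., r_k]
        eps = r[0] - rk @ w                    # prediction-error power at order k
        alpha = (r[k + 1] - rk @ w[::-1]) / eps
        z = w - alpha * w[::-1]                # z_k = w_k - alpha_k J_k w_k
        w = np.append(z, alpha)                # w_{k+1} = [z_k ; alpha_k]
        alphas.append(alpha)
    return w, np.array(alphas)

# Assumed AR(1)-like autocorrelation: only the first reflection coefficient survives
w, alphas = durbin([1.0, 0.8, 0.64, 0.512])
print(w, alphas)                               # ~ [0.8, 0, 0] and [0.8, 0, 0]
```

Each order step costs O(k), so the full solve is O(m^2) rather than the O(m^3) of a generic linear solver.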
The parameters α_k are known as the reflection coefficients.
These are crucial from the signal processing point of view.
The Levinson algorithm solves the problem

R_m y = b

In the same way as for Durbin, we keep track of the solutions to the problems

R_k y_k = b_k
Thus, assuming w_k and y_k to be known at the k-th step, we solve at the next step the problem

[ R_k        J_k r_k* ] [ v_k ]   [ b_k     ]
[ r_k^T J_k  r_0      ] [ β_k ] = [ b_{k+1} ]
where

y_{k+1} = [ v_k ]
          [ β_k ]

Thus

v_k = R_k^-1 ( b_k - β_k J_k r_k* ) = y_k - β_k J_k w_k*

and

β_k = ( b_{k+1} - r_k^T J_k y_k ) / ( r_0 - r_k^T w_k* )
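A sketch of the joint recursion for a real-valued R, growing the Durbin solution w_k alongside y_k. The function name and test data are illustrative assumptions:

```python
import numpy as np

def levinson(r, b):
    """Solve R y = b for the symmetric Toeplitz R with first column r[0..m-1],
    carrying the Durbin solution w_k along (real case)."""
    r = np.asarray(r, dtype=float)
    b = np.asarray(b, dtype=float)
    m = len(b)
    w = np.array([r[1] / r[0]])                    # Durbin solution, order 1
    y = np.array([b[0] / r[0]])
    for k in range(1, m):
        rk = r[1:k + 1]
        eps = r[0] - rk @ w                        # r_0 - r_k^T w_k
        beta = (b[k] - rk @ y[::-1]) / eps
        y = np.append(y - beta * w[::-1], beta)    # y_{k+1} = [v_k ; beta_k]
        if k < m - 1:                              # grow the Durbin solution as well
            alpha = (r[k + 1] - rk @ w[::-1]) / eps
            w = np.append(w - alpha * w[::-1], alpha)
    return y

r = np.array([1.0, 0.5, 0.25])                     # assumed autocorrelation values
b = np.array([1.0, 2.0, 3.0])                      # assumed right-hand side
y = levinson(r, b)
R = np.array([[r[abs(i - k)] for k in range(3)] for i in range(3)])
print(np.allclose(R @ y, b))                       # True
```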
Lattice Predictors
Return to the lattice case. We write

T_M(z) = A_b(z) / A_f(z)

or

T_M(z) = ( a_M[M] + a_M[M-1] z^-1 + a_M[M-2] z^-2 + ... + z^-M ) / ( 1 + a_M[1] z^-1 + a_M[2] z^-2 + ... + a_M[M] z^-M )
The above transfer function is allpass of order M.
It can be thought of as the reflection coefficient of a cascade of lossless transmission lines, or acoustic tubes.
In this sense it can furnish a simple algorithm for the estimation of the reflection coefficients.
We start with the observation that the transfer function can be written in terms of another allpass filter embedded in a first-order allpass structure.
This takes the form

T_M(z) = ( γ_1 + z^-1 T_{M-1}(z) ) / ( 1 + γ_1 z^-1 T_{M-1}(z) )

where γ_1 is to be chosen to make T_{M-1}(z) of degree (M-1).
From the above we have

T_{M-1}(z) = z ( T_M(z) - γ_1 ) / ( 1 - γ_1 T_M(z) )
And hence

T_{M-1}(z) = ( a_{M-1}[M-1] + a_{M-1}[M-2] z^-1 + ... + z^-(M-1) ) / ( 1 + a_{M-1}[1] z^-1 + a_{M-1}[2] z^-2 + ... + a_{M-1}[M-1] z^-(M-1) )

where

a_{M-1}[r] = ( a_M[r] - γ_1 a_M[M-r] ) / ( 1 - γ_1 a_M[M] )

Thus, for a reduction in the order, the constant term in the numerator, which is also equal to the highest term in the denominator, must be zero.
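The order-reduction formula gives a step-down procedure for mapping prediction-error filter coefficients to reflection coefficients. A minimal sketch, with assumed coefficient values; the returned list starts with the coefficient extracted at order M:

```python
import numpy as np

def step_down(a):
    """Map A_f coefficients a = [1, a_M[1], ..., a_M[M]] to reflection
    coefficients via a_{M-1}[r] = (a_M[r] - g*a_M[M-r]) / (1 - g*a_M[M])."""
    a = np.asarray(a, dtype=float)
    gammas = []
    while len(a) > 1:
        g = a[-1]                                    # gamma = a_M[M]
        gammas.append(g)
        a = ((a - g * a[::-1]) / (1 - g * g))[:-1]   # highest term cancels, degree drops
    return gammas

print(step_down([1.0, -0.9, 0.2]))                   # [0.2, -0.75]
```

Note that the reduction fails if any extracted coefficient has magnitude one (division by 1 - g^2), which corresponds to a predictor on the stability boundary.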
This requirement yields γ_1 = a_M[M].
The realisation structure is:

[Figure: T_M(z) realised as a first-order lattice section with coefficient γ_1 and a delay z^-1 around the embedded allpass T_{M-1}(z).]
There are many rearrangements that can be made of this structure through the use of Signal Flow Graphs.
One such rearrangement would be to reverse the direction of signal flow for the lower path.
This would yield the standard Lattice Structure as found in several textbooks (viz. the Inverse Lattice).
The lattice structure and the above development are intimately related to the Levinson-Durbin Algorithm.
The form of lattice presented is not the usual approach to the Levinson algorithm, in that we have developed the inverse filter.
Since the denominator of the allpass is also the denominator of the AR process, the procedure can be seen as an AR-coefficient-to-lattice-structure mapping.
For the lattice-to-AR-coefficient mapping we follow the opposite route, i.e. we construct the allpass and read off its denominator.
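The opposite route can be sketched as a step-up recursion that rebuilds A_f by re-embedding one reflection coefficient at a time, inverting the order-reduction formula used earlier. Names and values are illustrative; the input is expected with the order-M coefficient first, matching the extraction order:

```python
import numpy as np

def step_up(gammas):
    """Rebuild A_f from reflection coefficients (order-M coefficient first):
    a_M = [a_{M-1}, 0] + gamma * [0, reversed(a_{M-1})]."""
    a = np.array([1.0])
    for g in reversed(gammas):                       # embed from the lowest order up
        a = np.append(a, 0.0) + g * np.append(0.0, a[::-1])
    return a

print(step_up([0.2, -0.75]))                         # ~ [1, -0.9, 0.2]
```

This is the exact inverse of the order reduction: applying the reduction to the rebuilt polynomial recovers the same reflection coefficients.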
PSD Estimation
It is evident that if the prediction error is white (its PSD is flat), then the input PSD multiplied by the squared magnitude response of the prediction-error filter yields a constant.
Therefore the input PSD is determined.
Moreover, the inverse prediction-error filter gives us a means to generate the process as the output of the filter when the input is white noise.
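A sketch of the resulting AR spectrum estimate: the model PSD is the minimum error power divided by |A_f(e^{jω})|^2. The autocorrelation r_xx[k] = 0.8^|k| below is a hypothetical example, for which the model reduces to a first-order AR spectrum:

```python
import numpy as np

r = 0.8 ** np.arange(4)                        # assumed autocorrelation, rho = 0.8
M = 3
R = np.array([[r[abs(i - k)] for k in range(M)] for i in range(M)])
w = np.linalg.solve(R, r[1:M + 1])             # predictor weights from the normal equations
a = np.concatenate(([1.0], -w))                # prediction-error filter A_f
eps = r[0] - w @ r[1:M + 1]                    # minimum prediction-error power

omega = np.linspace(0, np.pi, 512)
Af = np.exp(-1j * np.outer(omega, np.arange(M + 1))) @ a
psd = eps / np.abs(Af) ** 2                    # model PSD: eps / |A_f(e^{jw})|^2
print(psd[0])                                  # ~ 9 at w = 0 for this example
```

At ω = 0 the value is (1 - 0.8^2) / (1 - 0.8)^2 = 9, the true PSD of the matching AR(1) process, illustrating that the whitened residual power and A_f together recover the input spectrum.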