Kalman’s Beautiful Filter
(an introduction)
George Kantor
presented to
Sensor Based Planning Lab
Carnegie Mellon University
December 8, 2000
What does a Kalman Filter do, anyway?
Given the linear dynamical system:
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)

x(k) is the n-dimensional state vector (unknown)
u(k) is the m-dimensional input vector (known)
y(k) is the p-dimensional output vector (known, measured)
F(k), G(k), H(k) are appropriately dimensioned system matrices (known)
v(k), w(k) are zero-mean, white Gaussian noise with known
covariance matrices Q(k), R(k)
the Kalman Filter is a recursion that provides the
“best” estimate of the state vector x.
Kalman Filter Introduction
Carnegie Mellon University
December 8, 2000
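Before going further, it may help to see the system itself in code. A minimal simulation sketch; the constant-velocity model and all numbers here are hypothetical illustrations, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state constant-velocity model: state = [position, velocity]
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition matrix
G = np.array([[0.5], [1.0]])      # input (acceleration) matrix
H = np.array([[1.0, 0.0]])        # we only measure position
Q = 0.01 * np.eye(2)              # process noise covariance
R = np.array([[0.25]])            # measurement noise covariance

x = np.zeros(2)
for k in range(50):
    u = np.array([0.1])                              # known input
    v = rng.multivariate_normal(np.zeros(2), Q)      # process noise
    w = rng.multivariate_normal(np.zeros(1), R)      # measurement noise
    x = F @ x + G @ u + v                            # x(k+1) = F x(k) + G u(k) + v(k)
    y = H @ x + w                                    # y(k)   = H x(k) + w(k)

print(x.shape, y.shape)
```

The filter's job is to recover x from the y sequence, since x itself is never observed directly.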
What’s so great about that?
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)
• noise smoothing (improve noisy measurements)
• state estimation (for state feedback)
• recursive (computes the next estimate using only the most recent measurement)
How does it work?
x(k+1) = F(k) x(k) + G(k) u(k) + v(k)
y(k) = H(k) x(k) + w(k)

1. prediction based on last estimate:
x̂(k+1|k) = F(k) x̂(k|k) + G(k) u(k)
ŷ(k+1) = H(k) x̂(k+1|k)
2. calculate correction based on prediction and current measurement:
Δx = f( y(k+1), x̂(k+1|k) )
3. update prediction: x̂(k+1|k+1) = x̂(k+1|k) + Δx
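The three steps above can be sketched as a generic recursion in which the correction rule f is a plug-in; the specific choice of f is what the rest of the talk derives. The matrices below are invented placeholders:

```python
import numpy as np

F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]])

def filter_step(x_hat, u, y_next, correction):
    """One predict/correct/update cycle; `correction` plays the role of f."""
    x_pred = F @ x_hat + G @ u          # 1. prediction
    dx = correction(y_next, x_pred)     # 2. correction (to be designed)
    return x_pred + dx                  # 3. update

# With the zero correction the recursion just runs the model open loop.
x_hat = np.array([1.0, 0.0])
x_next = filter_step(x_hat, np.array([0.0]), np.array([0.0]),
                     lambda y, xp: np.zeros_like(xp))
print(x_next)
```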
Finding the correction (no noise!)
y = H x

Given prediction x̂ and output y, find Δx so that x̂⁺ = x̂ + Δx
is the "best" estimate of x.

[Figure: the prediction x̂ and the solution set {x | Hx = y}]

Δx = H^T (H H^T)^{-1} (y − H x̂)
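In code, this correction is the minimum-norm solution of H Δx = y − H x̂. A sketch with made-up numbers:

```python
import numpy as np

H = np.array([[1.0, 2.0]])          # 1x2: one measurement, two states (hypothetical)
x_hat = np.array([1.0, 1.0])        # prediction
y = np.array([5.0])                 # noiseless measurement y = H x

# Delta x = H^T (H H^T)^{-1} (y - H x_hat)
residual = y - H @ x_hat
dx = H.T @ np.linalg.solve(H @ H.T, residual)
x_new = x_hat + dx

# The corrected estimate now satisfies the measurement exactly, and dx is
# the smallest (Euclidean) correction that does so.
print(x_new, H @ x_new)
```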
A Geometric Interpretation
[Figure: the correction Δx carries the prediction x̂ orthogonally onto the hyperplane {x | Hx = y}]
A Simple State Observer
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k) = H x(k)

Observer:
1. prediction:
x̂(k+1|k) = F x̂(k|k) + G u(k)
2. compute correction:
Δx = H^T (H H^T)^{-1} ( y(k+1) − H x̂(k+1|k) )
3. update:
x̂(k+1|k+1) = x̂(k+1|k) + Δx
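The observer above translates line for line into numpy. A sketch on an assumed double-integrator system; note that this simple correction only guarantees consistency with the measurement, not convergence of unmeasured state components:

```python
import numpy as np

F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]])

def observer_step(x_hat, u, y_next):
    """One predict/correct/update cycle of the noiseless-output observer."""
    x_pred = F @ x_hat + G @ u                                  # 1. prediction
    residual = y_next - H @ x_pred
    dx = H.T @ np.linalg.solve(H @ H.T, residual)               # 2. correction
    return x_pred + dx                                          # 3. update

# Run the observer against the true system from a wrong initial guess.
x_true = np.array([1.0, -0.5])
x_hat = np.zeros(2)
for _ in range(20):
    u = np.array([0.2])
    x_true = F @ x_true + G @ u
    x_hat = observer_step(x_hat, u, H @ x_true)

# After each update the estimate agrees exactly with the measurement.
print(abs((H @ (x_true - x_hat))[0]))
```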
Estimating a distribution for x
Our estimate of x is not exact!
We can do better by estimating a joint Gaussian distribution p(x).
p(x) = 1 / ( (2π)^{n/2} |P|^{1/2} ) · exp( −(1/2) (x − x̂)^T P^{-1} (x − x̂) )

where P = E[ (x − x̂)(x − x̂)^T ] is the covariance matrix
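The density can be evaluated numerically. A quick sketch; the 2-D estimate and covariance are invented:

```python
import numpy as np

def gaussian_pdf(x, x_hat, P):
    """p(x) = exp(-0.5 (x - x_hat)^T P^{-1} (x - x_hat)) / ((2 pi)^{n/2} |P|^{1/2})"""
    n = x_hat.size
    e = x - x_hat
    norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(P))
    return float(np.exp(-0.5 * e @ np.linalg.solve(P, e)) / norm)

x_hat = np.array([0.0, 0.0])
P = np.array([[2.0, 0.0],
              [0.0, 0.5]])

# The density peaks at the estimate x_hat ...
p_peak = gaussian_pdf(x_hat, x_hat, P)
# ... and decays away from it.
p_off = gaussian_pdf(np.array([1.0, 1.0]), x_hat, P)
print(p_peak, p_off)
```

Here |P| = 1, so the peak value is exactly 1/(2π).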
Finding the correction (geometric intuition)
Given prediction x̂, covariance P, and output y, find Δx so that
x̂⁺ = x̂ + Δx is the "best" (i.e. most probable) estimate of x.

[Figure: level sets of p(x) centered at x̂, with Δx reaching the hyperplane {x | Hx = y}]

p(x) = 1 / ( (2π)^{n/2} |P|^{1/2} ) · exp( −(1/2) (x − x̂)^T P^{-1} (x − x̂) )

The most probable x is the one that:
1. satisfies x̂⁺ = x̂ + Δx with H x̂⁺ = y
2. minimizes Δx^T P^{-1} Δx
A new kind of distance
Suppose we define a new inner product on R^n to be:
⟨x₁, x₂⟩ = x₁^T P^{-1} x₂
(this replaces the old inner product x₁^T x₂)

Then we can define a new norm: ‖x‖² = ⟨x, x⟩ = x^T P^{-1} x

The x̂⁺ in {x | Hx = y} that minimizes ‖Δx‖ is the orthogonal projection of x̂
onto {x | Hx = y}, so Δx is orthogonal to the hyperplane, i.e. to null(H):
⟨z, Δx⟩ = z^T P^{-1} Δx = 0 for every z in null(H), which holds iff Δx ∈ column(P H^T)
Finding the correction (for real this time!)
Assuming that Δx is linear in y − H x̂:
Δx = P H^T K (y − H x̂) for some matrix K.

The condition y = H(x̂ + Δx) gives H Δx = y − H x̂.
Substitution yields:
H Δx = H P H^T K (y − H x̂) = y − H x̂, so K = (H P H^T)^{-1}

Δx = P H^T (H P H^T)^{-1} (y − H x̂)
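This covariance-weighted correction can be checked numerically. A sketch with invented matrices, which also confirms that it beats another feasible correction in Mahalanobis cost:

```python
import numpy as np

P = np.array([[1.0, 0.3],
              [0.3, 2.0]])        # hypothetical prediction covariance
H = np.array([[1.0, 1.0]])
x_hat = np.array([0.0, 0.0])
y = np.array([3.0])

# Delta x = P H^T (H P H^T)^{-1} (y - H x_hat)
dx = P @ H.T @ np.linalg.solve(H @ P @ H.T, y - H @ x_hat)
x_new = x_hat + dx

# The update lands exactly on the constraint H x = y ...
print(H @ x_new)

# ... and among corrections that do, it has minimal Mahalanobis cost
# dx^T P^{-1} dx (compare against another feasible correction):
dx_alt = np.array([3.0, 0.0])     # also satisfies H dx = y - H x_hat
cost = dx @ np.linalg.solve(P, dx)
cost_alt = dx_alt @ np.linalg.solve(P, dx_alt)
print(cost, cost_alt)
```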
A Better State Observer
We can create a better state observer following the same 3 steps, but now we
must also estimate the covariance matrix P.
We start with x̂(k|k) and P(k|k).
Step 1: Prediction
x̂(k+1|k) = F x̂(k|k) + G u(k)

What about P? From the definition:
P(k|k) = E[ (x(k) − x̂(k|k)) (x(k) − x̂(k|k))^T ]
and
P(k+1|k) = E[ (x(k+1) − x̂(k+1|k)) (x(k+1) − x̂(k+1|k))^T ]
Continuing Step 1
To make life a little easier, let's shift notation slightly (x_k for x(k), x̂_k for x̂(k|k)):

P_{k+1} = E[ (x_{k+1} − x̂_{k+1}) (x_{k+1} − x̂_{k+1})^T ]
  = E[ (F x_k + G u_k + v_k − (F x̂_k + G u_k)) (F x_k + G u_k + v_k − (F x̂_k + G u_k))^T ]
  = E[ (F (x_k − x̂_k) + v_k) (F (x_k − x̂_k) + v_k)^T ]
  = E[ F (x_k − x̂_k)(x_k − x̂_k)^T F^T + 2 F (x_k − x̂_k) v_k^T + v_k v_k^T ]
  = F E[ (x_k − x̂_k)(x_k − x̂_k)^T ] F^T + E[ v_k v_k^T ]
    (the cross term vanishes because v_k is zero mean and independent of the estimation error)
  = F P_k F^T + Q

P(k+1|k) = F P(k|k) F^T + Q
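The propagation rule can be sanity-checked by Monte Carlo: sample estimation errors and process noise, push them through the dynamics, and compare the empirical covariance with F P F^T + Q. The model matrices below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

F = np.array([[1.0, 0.2],
              [0.0, 1.0]])
Q = np.array([[0.1, 0.0],
              [0.0, 0.2]])
P = np.array([[0.5, 0.1],
              [0.1, 0.3]])        # covariance of the current estimation error

# Analytic propagation: P(k+1|k) = F P F^T + Q
P_pred = F @ P @ F.T + Q

# Empirical check: sample errors e ~ N(0, P) and noise v ~ N(0, Q),
# propagate them through e' = F e + v, and measure their covariance.
N = 200_000
e = rng.multivariate_normal(np.zeros(2), P, size=N)
v = rng.multivariate_normal(np.zeros(2), Q, size=N)
e_next = e @ F.T + v
P_emp = np.cov(e_next.T)

print(np.abs(P_emp - P_pred).max())
```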
Step 2: Computing the correction
From step 1 we get x̂(k+1|k) and P(k+1|k).
Now we use these to compute Δx:

Δx = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1} ( y(k+1) − H x̂(k+1|k) )

For ease of notation, define W = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1}
and ν = y(k+1) − H x̂(k+1|k), so that Δx = W ν.
Step 3: Update
x̂(k+1|k+1) = x̂(k+1|k) + W ν

P(k+1|k+1) = E[ (x_{k+1} − x̂_{k+1|k+1})(x_{k+1} − x̂_{k+1|k+1})^T ]
           = E[ (x_{k+1} − x̂_{k+1} − W ν)(x_{k+1} − x̂_{k+1} − W ν)^T ]

(just take my word for it…)

P(k+1|k+1) = P(k+1|k) − W ( H P(k+1|k) H^T ) W^T
Just take my word for it…
P(k+1|k+1) = E[ (x_{k+1} − x̂_{k+1} − W ν)(x_{k+1} − x̂_{k+1} − W ν)^T ]

With no output noise, ν = y_{k+1} − H x̂_{k+1} = H (x_{k+1} − x̂_{k+1}), so

 = E[ ( (x_{k+1} − x̂_{k+1}) − W H (x_{k+1} − x̂_{k+1}) ) ( (x_{k+1} − x̂_{k+1}) − W H (x_{k+1} − x̂_{k+1}) )^T ]
 = E[ (x_{k+1} − x̂_{k+1})(x_{k+1} − x̂_{k+1})^T ] − 2 W H E[ (x_{k+1} − x̂_{k+1})(x_{k+1} − x̂_{k+1})^T ]
   + W H E[ (x_{k+1} − x̂_{k+1})(x_{k+1} − x̂_{k+1})^T ] H^T W^T
 = P_{k+1} − 2 W H P_{k+1} + W ( H P_{k+1} H^T ) W^T
   (the two cross terms combine because W H P_{k+1} turns out to be symmetric)

Substituting W = P_{k+1} H^T ( H P_{k+1} H^T )^{-1} shows that

 W H P_{k+1} = P_{k+1} H^T ( H P_{k+1} H^T )^{-1} H P_{k+1} = W ( H P_{k+1} H^T ) W^T

so

 P(k+1|k+1) = P_{k+1} − 2 W ( H P_{k+1} H^T ) W^T + W ( H P_{k+1} H^T ) W^T
            = P_{k+1} − W ( H P_{k+1} H^T ) W^T
Better State Observer Summary
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k) = H x(k)

Observer:
1. Predict:
x̂(k+1|k) = F x̂(k|k) + G u(k)
P(k+1|k) = F P(k|k) F^T + Q

2. Correction:
W = P(k+1|k) H^T ( H P(k+1|k) H^T )^{-1}
Δx = W ( y(k+1) − H x̂(k+1|k) )

3. Update:
x̂(k+1|k+1) = x̂(k+1|k) + Δx
P(k+1|k+1) = P(k+1|k) − W ( H P(k+1|k) H^T ) W^T
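The summary above fits in a short numpy sketch. The system matrices are illustrative, and the output is taken noiseless as on this slide:

```python
import numpy as np

rng = np.random.default_rng(1)

F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.005], [0.1]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)

def predict(x_hat, P, u):
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def correct_update(x_pred, P_pred, y_next):
    S = H @ P_pred @ H.T                    # no measurement noise on this slide
    W = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + W @ (y_next - H @ x_pred)
    P_new = P_pred - W @ S @ W.T
    return x_new, P_new

# Track the true (noisy) system from a wrong initial guess.
x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), np.eye(2)
for _ in range(100):
    u = np.array([0.0])
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    x_pred, P_pred = predict(x_hat, P, u)
    x_hat, P = correct_update(x_pred, P_pred, H @ x_true)

print(np.abs(x_hat - x_true).max())
```

Unlike the earlier simple observer, the P-weighted gain also pulls the unmeasured velocity toward its true value.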
Finding the correction (with output noise)
y = H x + w

[Figure: the prediction x̂ and the now-uncertain constraint set {x | Hx = y}]

Since you don't have a hyperplane to aim for, you can't solve this with algebra!
You have to solve an optimization problem.

That's exactly what Kalman did! Here's his answer:

Δx = P H^T ( H P H^T + R )^{-1} ( y − H x̂ )
LTI Kalman Filter Summary
System:
x(k+1) = F x(k) + G u(k) + v(k)
y(k) = H x(k) + w(k)

Kalman Filter:
1. Predict:
x̂(k+1|k) = F x̂(k|k) + G u(k)
P(k+1|k) = F P(k|k) F^T + Q

2. Correction:
S = H P(k+1|k) H^T + R
W = P(k+1|k) H^T S^{-1}
Δx = W ( y(k+1) − H x̂(k+1|k) )

3. Update:
x̂(k+1|k+1) = x̂(k+1|k) + Δx
P(k+1|k+1) = P(k+1|k) − W S W^T
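The full recursion is only a few lines of numpy. A sketch on a made-up tracking problem, which also demonstrates the noise-smoothing claim from earlier: the filtered position error is smaller on average than the raw measurement error.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical constant-velocity tracker
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
G = np.array([[0.0], [0.0]])      # no input for this demo
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.5]])

def kalman_step(x_hat, P, u, y_next):
    # 1. Predict
    x_pred = F @ x_hat + G @ u
    P_pred = F @ P @ F.T + Q
    # 2. Correction
    S = H @ P_pred @ H.T + R
    W = P_pred @ H.T @ np.linalg.inv(S)
    dx = W @ (y_next - H @ x_pred)
    # 3. Update
    return x_pred + dx, P_pred - W @ S @ W.T

x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), 10.0 * np.eye(2)
raw_err, filt_err = [], []
for _ in range(300):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_hat, P = kalman_step(x_hat, P, np.array([0.0]), y)
    raw_err.append(abs(y[0] - x_true[0]))         # raw measurement error
    filt_err.append(abs(x_hat[0] - x_true[0]))    # filtered position error

print(np.mean(raw_err), np.mean(filt_err))
```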