Transcript ppt

Introduction to Kalman Filters
Modified from source: http://users.cecs.anu.edu.au/~hartley/Vision-Reading-Course/Kalman-filters.ppt
Overview
• What is a Kalman Filter?
• Conceptual Overview
• The Observer Problem
• The Theory of the Kalman Filter
• Example – falling object
• References
What is a Kalman Filter?
• Recursive data processing algorithm
• Generates an optimal estimate of desired quantities given a set of measurements
• Optimal?
– For a linear system with white Gaussian errors, the Kalman filter is the “best” estimate based on all previous measurements
– For non-linear systems, optimality is ‘qualified’
• Recursive?
– No need to store all previous measurements and reprocess all the data at each time step
Applications
• Tracking objects (e.g., missiles, faces, heads, hands)
• Economics
• Navigation
• Many computer vision applications
– Stabilizing depth measurements
– Feature tracking
– Cluster tracking
– Fusing data from radar, laser scanner and stereo cameras for depth and velocity measurements
Conceptual Overview
• Lost on a 1-dimensional line
• Position: x(t)
• Assume Gaussian-distributed measurements
Conceptual Overview
[Figure: Gaussian pdf of the measurement at t1, centered on z1]
• Sextant measurement at t1: mean = z1 and variance = σ²z1
• Optimal estimate of position: x̂(t1) = z1
• Variance of the error in the estimate: σ²x(t1) = σ²z1
• Boat in the same position at time t2 – predicted position is z1
Conceptual Overview
[Figure: Gaussian pdfs of the prediction x̂⁻(t2) and the measurement z(t2)]
• So we have the prediction x̂⁻(t2)
• GPS measurement at t2: mean = z2 and variance = σ²z2
• Need to correct the prediction with the measurement to get x̂(t2)
• The corrected estimate ends up closer to the more trusted (lower-variance) source
Conceptual Overview
[Figure: prediction x̂⁻(t2), measurement z(t2), and the corrected optimal estimate x̂(t2)]
Bayes’ rule:
p(x|z) = p(z|x)·p(x) / p(z)
⇒ p(x|z) ∝ p(measurement)·p(prior)
N(μz, σ²z) × N(μx, σ²x) ∝ N(μ, σ²)
• Corrected mean is the new optimal estimate of position
• New variance is smaller than either of the previous two variances
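This Gaussian-product step is easy to verify numerically. A minimal sketch (all numbers hypothetical): multiply the prediction and measurement pdfs pointwise on a grid, renormalize, and compare against the closed-form merged mean and variance.

```python
import numpy as np

# Hypothetical prediction and measurement (mean, variance)
mu_x, var_x = 40.0, 9.0   # prediction x̂⁻(t2)
mu_z, var_z = 55.0, 4.0   # measurement z(t2)

def gauss(x, mu, var):
    """Gaussian pdf evaluated on the grid x."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

xs = np.linspace(0.0, 100.0, 10_001)
dx = xs[1] - xs[0]

# Bayes: posterior ∝ p(measurement) · p(prior), then renormalize
post = gauss(xs, mu_z, var_z) * gauss(xs, mu_x, var_x)
post /= post.sum() * dx

mean_num = (xs * post).sum() * dx
var_num = ((xs - mean_num) ** 2 * post).sum() * dx

# Closed-form merge of two Gaussians
mean_cf = (var_z * mu_x + var_x * mu_z) / (var_x + var_z)
var_cf = var_x * var_z / (var_x + var_z)

print(mean_num, mean_cf)  # both ≈ 50.38
print(var_num, var_cf)    # both ≈ 2.77 – smaller than either 9.0 or 4.0
```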
Conceptual Overview
x̂ = [σ²z / (σ²x + σ²z)]·x̂⁻ + [σ²x / (σ²x + σ²z)]·z
  = x̂⁻ + [σ²x / (σ²x + σ²z)]·(z − x̂⁻)
  = x̂⁻ + K·(z − x̂⁻)

σ̂²x = σ²x − [σ²x / (σ²x + σ²z)]·σ²x = (1 − K)·σ²x

K = σ²x / (σ²x + σ²z)

where σ²x is the variance of the prediction and σ²z the variance of the measurement, and:

x̂⁻: prediction (a priori estimate)
x̂: update (a posteriori estimate)
K: Kalman gain
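As a sketch, the whole correction collapses to a few lines of code (the function name and signature are mine, not from the slides):

```python
def update_1d(x_prior, var_prior, z, var_z):
    """One 1-D Kalman measurement update.

    x_prior, var_prior: prediction x̂⁻ and its variance
    z, var_z:           measurement and its variance
    """
    K = var_prior / (var_prior + var_z)  # Kalman gain
    x = x_prior + K * (z - x_prior)      # blend prediction and residual
    var = (1.0 - K) * var_prior          # smaller than either input variance
    return x, var
```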
Conceptual Overview
• Lessons so far:
– Make a prediction based on previous data: x̂⁻, σ⁻
– Take a measurement: zk, σz
– Optimal estimate: x̂ = prediction + (Kalman gain)·(measurement − prediction)
– Variance of estimate = variance of prediction × (1 − Kalman gain) (worked example below)
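With hypothetical numbers, the blend looks like this:

```python
# Prediction x̂⁻ = 35.0 with variance 9.0; measurement z = 33.0 with variance 4.0
K = 9.0 / (9.0 + 4.0)          # Kalman gain ≈ 0.69
x = 35.0 + K * (33.0 - 35.0)   # ≈ 33.62 – pulled toward the lower-variance measurement
var = (1.0 - K) * 9.0          # ≈ 2.77 – smaller than both 9.0 and 4.0
```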
Conceptual Overview
[Figure: the estimate x̂(t2) and the naïve prediction x̂⁻(t3), the same density shifted to the right]
• At time t3, the boat moves with velocity dx/dt = u
• Naïve approach: shift the probability density to the right to predict
• This would work if we knew the velocity exactly (perfect model)
Conceptual Overview
[Figure: the naïve prediction and the actual prediction x̂⁻(t3), which is both shifted and spread out]
• Better to assume an imperfect model by adding Gaussian noise
• dx/dt = u + w
• The distribution for the prediction moves and spreads out
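A minimal sketch of this prediction step (the function name and the process-noise variance var_w are my own labels):

```python
def predict_1d(x, var, u, var_w, dt=1.0):
    """1-D motion prediction for dx/dt = u + w.

    The mean shifts by u*dt; the white noise w (variance var_w per unit
    time) spreads the distribution out by inflating the variance.
    """
    return x + u * dt, var + var_w * dt
```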
12
Conceptual Overview
[Figure: prediction x̂⁻(t3), measurement z(t3), and the corrected optimal estimate x̂(t3)]
• Now we take a measurement at t3
• Need to once again correct the prediction
• Same as before
Conceptual Overview
• Lessons learnt from the conceptual overview (a full predict–correct cycle is sketched below):
– Initial conditions (x̂k−1 and σk−1)
– Prediction (x̂⁻k, σ⁻k)
• Use the initial conditions and a model (e.g., constant velocity) to make a prediction
– Measurement (zk)
• Take a measurement
– Correction (x̂k, σk)
• Use the measurement to correct the prediction by ‘blending’ prediction and residual – always a case of merging just two Gaussians
• Optimal estimate with smaller variance
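Putting the cycle together, a self-contained 1-D sketch (all names and numbers are hypothetical, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

u, var_w = 1.0, 0.5        # model velocity and process-noise variance
var_z = 4.0                # measurement-noise variance
x_true = 0.0               # true position
x, var = 0.0, 1.0          # initial estimate and its variance

for k in range(10):
    # Truth moves with noisy velocity; we observe it with noise
    x_true += u + rng.normal(0.0, np.sqrt(var_w))
    z = x_true + rng.normal(0.0, np.sqrt(var_z))

    # Prediction: shift the mean, inflate the variance
    x, var = x + u, var + var_w

    # Correction: blend prediction and measurement
    K = var / (var + var_z)
    x += K * (z - x)
    var *= (1.0 - K)

    print(f"k={k}: truth={x_true:6.2f}  estimate={x:6.2f}  var={var:4.2f}")
```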
The Observer Problem
[Block diagram: external controls and system error sources drive the system (a black box); the system state is desired but not known directly. Measuring devices, subject to measurement error sources, produce the observed measurements, which an estimator turns into an optimal estimate of the system state.]
• The system state cannot be measured directly
• Need to estimate it “optimally” from the measurements
Theoretical Basis
• Process to be estimated (state space):
xk = A xk−1 + B uk + wk−1
zk = H xk + vk
Process noise (w) with covariance Q
Measurement noise (v) with covariance R
• Kalman filter
Prediction: x̂⁻k is the estimate based on measurements at previous time steps
x̂⁻k = A x̂k−1 + B uk
P⁻k = A Pk−1 Aᵀ + Q
Correction: x̂k has additional information – the measurement at time k
x̂k = x̂⁻k + K(zk − H x̂⁻k)
K = P⁻k Hᵀ (H P⁻k Hᵀ + R)⁻¹
Pk = (I − K H) P⁻k
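A compact NumPy sketch of the two steps above (generic helper functions of my own naming, not from the slides):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Time update: project the state and the error covariance ahead."""
    x_prior = A @ x + B @ u                # x̂⁻k = A x̂k−1 + B uk
    P_prior = A @ P @ A.T + Q              # P⁻k = A Pk−1 Aᵀ + Q
    return x_prior, P_prior

def kf_update(x_prior, P_prior, z, H, R):
    """Measurement update: blend the prediction with the measurement."""
    S = H @ P_prior @ H.T + R              # residual covariance
    K = P_prior @ H.T @ np.linalg.inv(S)   # K = P⁻k Hᵀ (H P⁻k Hᵀ + R)⁻¹
    x = x_prior + K @ (z - H @ x_prior)    # x̂k = x̂⁻k + K (zk − H x̂⁻k)
    P = (np.eye(P_prior.shape[0]) - K @ H) @ P_prior  # Pk = (I − K H) P⁻k
    return x, P
```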
Kalman gain / Blending factor, K
• If we are sure about the measurement:
– The measurement error covariance R approaches zero
– K weights the residual more heavily than the prediction:
lim (R→0) K = H⁻¹
• If we are sure about the prediction:
– The prediction error covariance P⁻k approaches zero
– K weights the prediction more heavily than the residual:
lim (P⁻k→0) K = 0
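In the scalar case with H = 1, K = P⁻/(P⁻ + R), and both limits are easy to check numerically (numbers hypothetical):

```python
P_prior = 1.0
for R in (10.0, 1.0, 0.1, 1e-6):
    print(R, P_prior / (P_prior + R))        # K → 1 = H⁻¹ as R → 0

R = 1.0
for P_prior in (10.0, 1.0, 0.1, 1e-6):
    print(P_prior, P_prior / (P_prior + R))  # K → 0 as P⁻k → 0
```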
Theoretical Basis
Parameter                                                | Udacity | Welch & Bishop
State                                                    | x       | x
Measurement                                              | z       | z
Control input / driving function                         | u       | u
State transition model                                   | F       | A
Control-input model                                      | –       | B
Observation model / measurement function                 | H       | H
Measurement error/noise covariance (covariance of vk)    | R       | R
Process noise covariance (covariance of wk)              | –       | Q
A priori estimate error covariance / uncertainty matrix  | P       | P⁻
A posteriori estimate error covariance                   | –       | P
Kalman gain / blending factor                            | K       | K
Theoretical Basis
Prediction (Time Update):
(1) Project the state ahead: x̂⁻k = A x̂k−1 + B uk
(2) Project the error covariance ahead: P⁻k = A Pk−1 Aᵀ + Q
Correction (Measurement Update):
(1) Compute the Kalman gain: K = P⁻k Hᵀ (H P⁻k Hᵀ + R)⁻¹
(2) Update the estimate with measurement zk: x̂k = x̂⁻k + K(zk − H x̂⁻k)
(3) Update the error covariance: Pk = (I − K H) P⁻k
Basic steps – prediction and update
Source: http://en.wikipedia.org/wiki/File:Basic_concept_of_Kalman_filtering.svg
Assumptions behind the Kalman Filter
• The process and measurement models must be LINEAR functions of the state
• A non-linear model can be linearized about a nominal point (EKF – Extended Kalman Filter)
• The model error and the measurement error (noise) must be Gaussian with zero mean
Example – falling object
• Consider an object falling under a constant gravitational field. Let y(t) denote the height of the object; then:

ÿ(t) = −g
ẏ(t) = ẏ(t0) − g·(t − t0)
y(t) = y(t0) + ẏ(t0)·(t − t0) − (g/2)·(t − t0)²

Discrete-time system with Δt = t − t0 = 1:

ẏ[k] = ẏ[k−1] − g
y[k] = y[k−1] + ẏ[k−1] − g/2
Example – falling object
• Exercise: construct the state space model from these equations, given that we can take measurements zk of the height:

ẏ[k] = ẏ[k−1] − g
y[k] = y[k−1] + ẏ[k−1] − g/2

That is, find A, B, uk and H in:

xk = A xk−1 + B uk
zk = H xk
Example – falling object
• Solution:

xk = A xk−1 + B u
zk = H xk

with state xk = [y[k], ẏ[k]]ᵀ:

[ y[k] ]   [ 1  1 ] [ y[k−1] ]   [ −0.5 ]
[ ẏ[k] ] = [ 0  1 ] [ ẏ[k−1] ] + [ −1   ]·g

zk = [ 1  0 ]·xk

that is, A = [[1, 1], [0, 1]], B = [−0.5, −1]ᵀ, u = g and H = [1, 0].
Example – falling object
MATLAB demo: ”kalman_demo.m”
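The MATLAB script itself is not reproduced in this transcript; a rough Python equivalent of the same setup (the noise covariances and initial values below are assumptions, not from the slides) might look like:

```python
import numpy as np

g = 9.81
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # state transition (Δt = 1)
B = np.array([-0.5, -1.0])        # control-input model, u = g
H = np.array([[1.0, 0.0]])        # we measure height only
Q = 0.01 * np.eye(2)              # process noise covariance (assumed)
R = np.array([[25.0]])            # measurement noise covariance (assumed)

rng = np.random.default_rng(1)

x_true = np.array([1000.0, 0.0])  # true initial height and velocity
x = np.array([950.0, 0.0])        # deliberately wrong initial estimate
P = 100.0 * np.eye(2)             # initial estimate uncertainty

for k in range(1, 11):
    # Simulate the falling object and a noisy height measurement
    x_true = A @ x_true + B * g
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # Prediction (time update)
    x = A @ x + B * g
    P = A @ P @ A.T + Q

    # Correction (measurement update)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

    print(f"k={k:2d}: true height={x_true[0]:8.1f}  estimated={x[0]:8.1f}")
```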
References
1. Kalman, R. E. 1960. “A New Approach to Linear Filtering and Prediction Problems”, Transactions of the ASME – Journal of Basic Engineering, pp. 35-45 (March 1960).
2. Maybeck, P. S. 1979. Stochastic Models, Estimation, and Control, Volume 1, Academic Press, Inc.
3. Welch, G. and Bishop, G. 2001. “An Introduction to the Kalman Filter”, http://www.cs.unc.edu/~welch/kalman/
4. Williams, Michael. “Introduction to Kalman Filters” (PowerPoint), http://users.cecs.anu.edu.au/~hartley/Vision-Reading-Course/Kalman-filters.ppt
5. Thrun, Sebastian, Udacity. “Artificial Intelligence for Robotics – Unit 2 – Kalman Filters”, https://www.udacity.com/course/cs373