Transcript Lecture 18
ASEN 5070: Statistical Orbit Determination I
Fall 2015
Professor Brandon A. Jones
Lecture 18: Minimum Variance Estimator and Sequential Processing
University of Colorado
Boulder
Exam 1 – Friday, October 9
Minimum Variance w/ A Priori
Sequential Processing w/ Minimum Variance
Minimum Variance w/ A Priori
With the least squares solution, we minimized the square of the residuals.
Instead, what if we want the estimate that gives us the highest confidence in the solution?
◦ What is the linear, unbiased, minimum variance estimate of the state x?
What is the linear, unbiased, minimum variance estimate of the state x?
◦ This encompasses three elements:
Linear,
Unbiased, and
Minimum Variance
We consider each of these to formulate a solution.
Turns out, we get the weighted, statistical least squares!
Hence, the linear least squares gives us the solution with the smallest variance, i.e., the highest confidence.
◦ Of course, this is predicated on all of our statistical/linearization assumptions
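This equivalence is easy to check numerically. The sketch below uses an invented linear observation model (all names hypothetical): the weighted least squares solution with weight W = R⁻¹ is the minimum variance estimate, with covariance P = (HᵀR⁻¹H)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear observation model: y = H x + noise, noise ~ N(0, R)
n, m = 3, 50                              # state size, number of observations
x_true = rng.standard_normal(n)
H = rng.standard_normal((m, n))           # mapping matrix
R = np.diag(rng.uniform(0.1, 1.0, m))     # observation error covariance
y = H @ x_true + rng.multivariate_normal(np.zeros(m), R)

# Weighted (statistical) least squares with W = R^-1:
#   x_hat = (H^T W H)^-1 H^T W y
W = np.linalg.inv(R)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)

# Estimate covariance: P = (H^T R^-1 H)^-1
P = np.linalg.inv(H.T @ W @ H)
```

With Gaussian errors, no other linear unbiased estimator produces a smaller variance than this one, which is the claim made above.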
To add a priori information to the least squares, we augment the cost function J(x) to include the minimization of the a priori error.
How do we control the weighting of the a priori solution and the observations in the cost function?
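The weighting is controlled by the inverse covariances: the observation residuals are weighted by R⁻¹ and the a priori error by the inverse of the a priori covariance. The slide's own equation did not survive extraction; the standard form of the augmented cost function, reconstructed in the notation used in this lecture, is:

```latex
J(\hat{\mathbf{x}}) = \frac{1}{2}(\mathbf{y} - H\hat{\mathbf{x}})^{T} R^{-1} (\mathbf{y} - H\hat{\mathbf{x}})
  + \frac{1}{2}(\bar{\mathbf{x}} - \hat{\mathbf{x}})^{T} \bar{P}^{-1} (\bar{\mathbf{x}} - \hat{\mathbf{x}})
```

A small a priori covariance means high confidence in the a priori, so it is weighted heavily; a large one lets the observations dominate.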
This is analogous to treating the a priori information as an observation of the estimated state at the epoch time.
Like the previous case, the statistical least squares w/ a priori is equivalent to the minimum variance estimator.
To use the least squares estimator, do I have to use a statistical description of the observation/state errors?
Least squares does not require a probabilistic definition of the weights/state.
The minimum variance estimator demonstrates that, for a Gaussian definition of the observation and state errors, the LS is the best solution.
Also known as the Best Linear Unbiased Estimator (BLUE)
Now, we can use the minimum variance estimator as a sequential estimator…
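The minimum variance estimate with a priori follows directly from its closed form, x̂ = (HᵀR⁻¹H + P̄⁻¹)⁻¹(HᵀR⁻¹y + P̄⁻¹x̄). The sketch below uses an invented two-state setup, not an example from the course:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 2-state estimate with a priori information
x_true = np.array([1.0, -2.0])
x_bar = x_true + np.array([0.3, -0.2])      # a priori state
P_bar = np.diag([0.25, 0.25])               # a priori covariance
H = rng.standard_normal((5, 2))             # mapping (observation) matrix
R = 0.04 * np.eye(5)                        # observation error covariance
y = H @ x_true + rng.multivariate_normal(np.zeros(5), R)

# Minimum variance (BLUE) estimate with a priori:
#   x_hat = (H^T R^-1 H + P_bar^-1)^-1 (H^T R^-1 y + P_bar^-1 x_bar)
Ri = np.linalg.inv(R)
Pbi = np.linalg.inv(P_bar)
Lam = H.T @ Ri @ H + Pbi                    # information matrix
x_hat = np.linalg.solve(Lam, H.T @ Ri @ y + Pbi @ x_bar)
P_hat = np.linalg.inv(Lam)                  # estimate covariance
```

Note that the observations can only shrink the covariance: the updated P_hat is never larger than the a priori P_bar.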
Minimum Variance and Sequential Processing
Batch – process all observations in a single run of the filter
Sequential – process each observation vector individually (usually as they become available over time)
How do we map our a priori forward in time?
Well, we’ve kind of covered this one before:
Note: Yesterday’s estimate can become today’s a priori…
◦ Not used much for the batch, but will be used for sequential processing
Now, we may map the previous estimate in time via the STM.
Can we leverage this information to sequentially process measurements in the minimum variance / least squares algorithm?
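A minimal sketch of this time update, using an invented constant-velocity STM in place of one integrated from the variational equations: the previous estimate and covariance map forward as x̄ = Φ x̂ and P̄ = Φ P Φᵀ.

```python
import numpy as np

# Time update: map the previous estimate and covariance forward via the STM.
# Phi here is a hypothetical constant-velocity STM over a step dt; in orbit
# determination it would come from integrating the variational equations.
dt = 10.0
Phi = np.array([[1.0, dt],
                [0.0, 1.0]])

x_hat_prev = np.array([100.0, -0.5])    # previous estimate (position, velocity)
P_prev = np.diag([4.0, 0.01])           # previous covariance

x_bar = Phi @ x_hat_prev                # propagated (a priori) state
P_bar = Phi @ P_prev @ Phi.T            # propagated (a priori) covariance
```

Note how propagation correlates the states: P_bar picks up off-diagonal terms even though P_prev was diagonal, because velocity uncertainty feeds into position.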
Given from a previous filter run:
We have a new observation and mapping matrix:
We can update the solution via:
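The slide's equations did not survive extraction, so here is a hedged sketch of the update this lecture implies: the new observation's information HᵀR⁻¹H is added to the a priori information P̄⁻¹, and the solution is re-solved (all names illustrative).

```python
import numpy as np

# Measurement update in information (normal equation) form, assuming the
# previous filter run supplies x_bar and P_bar at the current time t_k.
def measurement_update(x_bar, P_bar, H_k, y_k, R_k):
    """Fold one new observation vector into the a priori solution."""
    Ri = np.linalg.inv(R_k)
    Pbi = np.linalg.inv(P_bar)
    Lam = H_k.T @ Ri @ H_k + Pbi            # updated information matrix
    P_hat = np.linalg.inv(Lam)
    x_hat = P_hat @ (H_k.T @ Ri @ y_k + Pbi @ x_bar)
    return x_hat, P_hat

# A single scalar observation of a 2-state system: even with fewer
# observations than unknowns, the a priori term keeps Lam invertible.
x_bar = np.array([0.0, 0.0])
P_bar = np.eye(2)
H_k = np.array([[1.0, 0.0]])
y_k = np.array([0.5])
R_k = np.array([[0.25]])

x_hat, P_hat = measurement_update(x_bar, P_bar, H_k, y_k, R_k)
```

The observed component's variance shrinks while the unobserved component keeps its a priori variance, which is exactly the behavior expected of the minimum variance update.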
Two principal phases in any sequential estimator:
◦ Time Update
Map the previous state deviation and covariance matrix to the current time of interest
◦ Measurement Update
Update the state deviation and covariance matrix given the new observations at the time of interest
Jargon can change with communities:
◦ Forecast and analysis
◦ Prediction and fusion
◦ others…
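The two phases above can be sketched as a driver loop. Everything here is an illustrative skeleton under the assumptions of this lecture, not the course's own code; the two inner functions stand in for the time- and measurement-update equations.

```python
import numpy as np

def time_update(x_hat, P, Phi):
    """Map the previous estimate and covariance to the current time."""
    return Phi @ x_hat, Phi @ P @ Phi.T

def measurement_update(x_bar, P_bar, H, y, R):
    """Fold the new observations at the current time into the estimate."""
    Ri, Pbi = np.linalg.inv(R), np.linalg.inv(P_bar)
    P_hat = np.linalg.inv(H.T @ Ri @ H + Pbi)
    return P_hat @ (H.T @ Ri @ y + Pbi @ x_bar), P_hat

def run_filter(x0, P0, passes):
    """passes: iterable of (Phi, H, y, R), one tuple per observation time."""
    x, P = x0, P0
    for Phi, H, y, R in passes:
        x, P = time_update(x, P, Phi)               # forecast / prediction
        x, P = measurement_update(x, P, H, y, R)    # analysis / fusion
    return x, P
```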
To perform the measurement update, we only require one observation at tk.
Wait, but what if we have fewer observations than unknowns at tk?
◦ Do we have an underdetermined system?
The a priori x is based on independent analysis or a previous estimate
◦ Independent analysis could be a product of:
Expected launch vehicle performance
Previous analysis of the system
Initial orbit determination solution
We still have to invert an n × n matrix.
Can be computationally expensive for large n
◦ Gravity field estimation: ~n^2 + 2n - 3 coefficients!
May become sensitive to numeric issues
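One standard mitigation for both the cost and the numerical sensitivity (a sketch, not the course's prescription): avoid forming the explicit n × n inverse and instead solve the normal equations with a Cholesky factorization of the symmetric positive definite information matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical normal equations Lam x = b for a large state (e.g. a gravity
# field): Lam = A^T A is symmetric positive definite.
n = 200
A = rng.standard_normal((5 * n, n))
Lam = A.T @ A
b = rng.standard_normal(n)

# Explicit inverse: more expensive and numerically less stable
x_inv = np.linalg.inv(Lam) @ b

# Cholesky factorization Lam = L L^T, then two triangular solves
L = np.linalg.cholesky(Lam)
z = np.linalg.solve(L, b)           # forward substitution: L z = b
x_chol = np.linalg.solve(L.T, z)    # back substitution: L^T x = z
```

The two answers agree for a well-conditioned system, but the factorization route loses less precision as the conditioning degrades, and L can be reused for multiple right-hand sides.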
Is there a better sequential processing algorithm?
◦ YES! – The equations above may be manipulated to yield the Kalman filter (next week)