Transcript Document

Chapter 6
Point Estimation
Copyright (c) 2004 Brooks/Cole, a division of Thomson Learning, Inc.
6.1 General Concepts of Point Estimation
Point Estimator
A point estimator of a parameter θ is a single number that can be regarded as a sensible value for θ. A point estimator can be obtained by selecting a suitable statistic and computing its value from the given sample data.
Unbiased Estimator
A point estimator ˆ is said to be an
ˆ

E
(

) 
unbiased estimator of if
for every possible value of  . If ˆ is
not biased, the difference E (ˆ)   is
called the bias of ˆ .
[Figure: the pdf's of a biased estimator θ̂1 and an unbiased estimator θ̂2 for a parameter θ; the bias of θ̂1 is the distance between θ and the center of the pdf of θ̂1.]
Unbiased Estimator
When X is a binomial rv with parameters n and p, the sample proportion p̂ = X / n is an unbiased estimator of p.
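This fact can be illustrated by simulation. The sketch below is hypothetical (the values n = 50, p = 0.3, and the trial count are arbitrary choices, not from the text): it draws many binomial experiments and averages p̂ = X / n, which should come out close to p.

```python
import random

# Hypothetical illustration: estimate E(p_hat) by simulating many
# binomial experiments with n = 50 and p = 0.3 (arbitrary values).
random.seed(1)
n, p = 50, 0.3
trials = 20_000

# Each trial: X ~ Bin(n, p), simulated as a sum of Bernoulli draws.
p_hats = []
for _ in range(trials):
    x = sum(1 for _ in range(n) if random.random() < p)
    p_hats.append(x / n)

mean_p_hat = sum(p_hats) / trials
print(round(mean_p_hat, 3))  # close to 0.3, consistent with E(p_hat) = p
```

The average of the simulated p̂ values settles near the true p, as unbiasedness predicts.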
Principle of Unbiased Estimation
When choosing among several different estimators of θ, select one that is unbiased.
Unbiased Estimator
Let X1, X2,…,Xn be a random sample from a distribution with mean μ and variance σ². Then the estimator

σ̂² = S² = Σ(Xi − X̄)² / (n − 1)

is an unbiased estimator of σ².
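The role of the (n − 1) divisor can be seen in a hypothetical simulation (the sample size, σ² = 4, and trial count below are arbitrary): dividing the sum of squared deviations by n − 1 averages out near σ², while dividing by n systematically underestimates it.

```python
import random

# Hypothetical simulation: compare dividing by (n - 1) versus n when
# estimating sigma^2.  True distribution: normal with sigma^2 = 4.
random.seed(2)
n, trials = 5, 40_000

sum_s2_n1 = sum_s2_n = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 2) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_s2_n1 += ss / (n - 1)   # S^2, the unbiased estimator
    sum_s2_n += ss / n          # biased: underestimates sigma^2

print(round(sum_s2_n1 / trials, 2))  # near 4.0
print(round(sum_s2_n / trials, 2))   # near 3.2 = (n-1)/n * 4
```

The divide-by-n version is biased downward by the factor (n − 1)/n, which is exactly what the simulation shows.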
Unbiased Estimator
If X1, X2,…,Xn is a random sample from a distribution with mean μ, then X̄ is an unbiased estimator of μ. If in addition the distribution is continuous and symmetric, then the sample median X̃ and any trimmed mean are also unbiased estimators of μ.
Principle of Minimum Variance
Unbiased Estimation
Among all estimators of θ that are unbiased, choose the one that has the minimum variance. The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ.
[Figure: graphs of the pdf's of two different unbiased estimators, θ̂1 and θ̂2, both centered at θ.]
MVUE for a Normal Distribution
Let X1, X2,…,Xn be a random sample from a normal distribution with parameters μ and σ. Then the estimator μ̂ = X̄ is the MVUE for μ.
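A simulation can illustrate (though not prove) this result. In the hypothetical sketch below (n = 15 and the normal parameters are arbitrary), the sampling variance of the mean comes out smaller than that of the median, another unbiased estimator of μ for a symmetric distribution:

```python
import random
import statistics

# Hypothetical check: for normal data the sample mean varies less from
# sample to sample than the sample median, consistent with X-bar being
# the MVUE for mu.
random.seed(3)
n, trials = 15, 20_000
means, medians = [], []
for _ in range(trials):
    xs = [random.gauss(10, 1) for _ in range(n)]
    means.append(sum(xs) / n)
    medians.append(statistics.median(xs))

print(statistics.variance(means) < statistics.variance(medians))  # True
```

For normal samples the median's sampling variance is roughly π/2 times the mean's, so the comparison is not close.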
[Figure: a biased estimator that is preferable to the MVUE; shown are the pdf of θ̂1 (biased) and the pdf of θ̂2 (the MVUE), with θ marked.]
The Estimator for μ
Candidate estimators: the sample mean X̄, the sample median X̃, the midrange X̄e (the average of the two extreme observations), and the 10% trimmed mean X̄tr(10).
1. If the random sample comes from a normal distribution, then X̄ is the best estimator, since it has minimum variance among all unbiased estimators.
2. If the random sample comes from a Cauchy distribution, then X̃ is good (the MVUE is not known); X̄ and X̄e are quite bad.
3. If the underlying distribution is uniform, the best estimator is X̄e; this estimator is greatly influenced by outlying observations, but the lack of tails makes such observations impossible.
4. The trimmed mean X̄tr(10) works reasonably well in all three situations but is not the best for any of them.
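The three-way comparison above can be sketched by simulation. Everything below is a hypothetical setup (sample size, trial count, and the unit-scale distributions are arbitrary choices): for each distribution centered at μ = 0, we measure how much each estimator varies from sample to sample. The Cauchy draws use the standard inverse-cdf trick tan(π(U − 1/2)).

```python
import math
import random
import statistics

random.seed(4)
n, trials = 20, 5_000

def midrange(xs):
    return (min(xs) + max(xs)) / 2

def trimmed_mean_10(xs):
    k = max(1, int(0.1 * len(xs)))      # drop the k smallest and k largest
    ys = sorted(xs)[k:len(xs) - k]
    return sum(ys) / len(ys)

samplers = {
    "normal":  lambda: random.gauss(0, 1),
    "cauchy":  lambda: math.tan(math.pi * (random.random() - 0.5)),
    "uniform": lambda: random.uniform(-1, 1),
}

var = {}   # var[(dist, estimator)] = sampling variance over the trials
for dist, draw in samplers.items():
    results = {"mean": [], "median": [], "midrange": [], "trimmed": []}
    for _ in range(trials):
        xs = [draw() for _ in range(n)]
        results["mean"].append(sum(xs) / n)
        results["median"].append(statistics.median(xs))
        results["midrange"].append(midrange(xs))
        results["trimmed"].append(trimmed_mean_10(xs))
    for name, vals in results.items():
        var[(dist, name)] = statistics.variance(vals)

print(var[("normal", "mean")] < var[("normal", "median")])      # True
print(var[("cauchy", "median")] < var[("cauchy", "mean")])      # True
print(var[("uniform", "midrange")] < var[("uniform", "mean")])  # True
```

The pattern matches points 1-3: the mean beats the median for normal data, the median is far more stable than the mean for Cauchy data (the Cauchy mean occasionally takes huge values), and the midrange beats the mean for uniform data.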
Standard Error
The standard error of an estimator θ̂ is its standard deviation σθ̂ = √V(θ̂). If the standard error itself involves unknown parameters whose values can be estimated, substituting those estimates into σθ̂ yields the estimated standard error of the estimator, denoted σ̂θ̂ or sθ̂.
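As a concrete (hypothetical) instance: the standard error of X̄ is σ/√n, and replacing the unknown σ by the sample standard deviation s gives the estimated standard error s/√n. The data below are made up for illustration.

```python
import math
import statistics

# Made-up sample data (not from the text), used only to illustrate the
# estimated standard error of the sample mean: s / sqrt(n).
xs = [24.1, 25.3, 23.8, 26.0, 24.7, 25.5]
n = len(xs)
s = statistics.stdev(xs)        # sample standard deviation (n - 1 divisor)
se_hat = s / math.sqrt(n)       # estimated standard error of X-bar
print(round(se_hat, 3))         # 0.347
```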
6.2 Methods of Point Estimation
Moments
Let X1, X2,…,Xn be a random sample from a pmf or pdf f(x). For k = 1, 2,… the kth population moment, or kth moment of the distribution f(x), is E(X^k). The kth sample moment is

(1/n) Σ(i=1 to n) Xi^k.
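The sample moments are straightforward to compute. The values below are made up for illustration:

```python
# Hypothetical example: first two sample moments of a small made-up data set.
xs = [2.0, 3.0, 5.0, 10.0]
n = len(xs)
m1 = sum(xs) / n                     # first sample moment = sample mean
m2 = sum(x ** 2 for x in xs) / n     # second sample moment
print(m1, m2)  # 5.0 34.5
```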
Moment Estimators
Let X1, X2,…,Xn be a random sample from a distribution with pmf or pdf f(x; θ1,…,θm), where θ1,…,θm are parameters whose values are unknown. Then the moment estimators θ̂1,…,θ̂m are obtained by equating the first m sample moments to the corresponding first m population moments and solving for θ1,…,θm.
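A minimal worked case, under hypothetical values: for an exponential distribution E(X) = 1/λ, so equating the first population moment to the first sample moment gives the moment estimator λ̂ = 1/X̄.

```python
import random

# Hypothetical example: method of moments for exponential data.
# E(X) = 1/lambda, so setting X-bar = 1/lambda gives lambda_hat = 1/X-bar.
# true_lambda = 2.0 and the sample size are arbitrary choices.
random.seed(7)
true_lambda = 2.0
xs = [random.expovariate(true_lambda) for _ in range(10_000)]
lambda_hat = 1 / (sum(xs) / len(xs))
print(round(lambda_hat, 1))  # near 2.0
```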
Likelihood Function
Let X1, X2,…,Xn have joint pmf or pdf f(x1,…,xn; θ1,…,θm), where the parameters θ1,…,θm have unknown values. When x1,…,xn are the observed sample values and f is regarded as a function of θ1,…,θm, it is called the likelihood function.
Maximum Likelihood Estimators
The maximum likelihood estimates (mle's) θ̂1,…,θ̂m are those values of the θi's that maximize the likelihood function, so that

f(x1,…,xn; θ̂1,…,θ̂m) ≥ f(x1,…,xn; θ1,…,θm) for all θ1,…,θm.

When the Xi's are substituted in place of the xi's, the maximum likelihood estimators result.
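To make the maximization concrete, here is a hypothetical sketch for exponential data, where the likelihood is λⁿ·exp(−λΣx) and so the log-likelihood is n·log λ − λΣx. A coarse grid search for the maximizer should land essentially on the closed-form mle 1/x̄ (all numeric choices below are arbitrary).

```python
import math
import random

# Hypothetical example: maximum likelihood for exponential data.
# Log-likelihood: n*log(lam) - lam*sum(x); closed-form mle is n/sum(x).
random.seed(8)
xs = [random.expovariate(2.0) for _ in range(500)]
n, total = len(xs), sum(xs)

def log_lik(lam):
    return n * math.log(lam) - lam * total

grid = [i / 1000 for i in range(1, 5001)]   # lam in (0, 5], step 0.001
lam_grid = max(grid, key=log_lik)           # numeric maximizer
lam_closed = n / total                      # closed-form mle = 1 / x-bar

print(abs(lam_grid - lam_closed) < 0.001)   # True (within one grid step)
```

The agreement illustrates that maximizing the log-likelihood is equivalent to maximizing the likelihood itself, since log is strictly increasing.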
The Invariance Principle
Let ˆ1 ,...,ˆm be the mle’s of the
parameters 1 ,..., m Then the mle of
any function h(1 ,...,m ) of these
parameters is the function h(ˆ1,...,ˆm )
of the mle’s.
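A small hypothetical illustration, using the standard fact that for normal data the mle of σ² is Σ(xi − x̄)²/n (note the n divisor, not n − 1): by the invariance principle, the mle of σ = h(σ²) = √σ² is simply the square root of that estimate. The data are made up.

```python
import math

# Hypothetical illustration of the invariance principle with made-up data.
xs = [4.0, 6.0, 5.0, 7.0, 3.0]
n = len(xs)
xbar = sum(xs) / n
sigma2_mle = sum((x - xbar) ** 2 for x in xs) / n   # mle of sigma^2 (divides by n)
sigma_mle = math.sqrt(sigma2_mle)                   # invariance: mle of sigma
print(sigma2_mle, round(sigma_mle, 3))  # 2.0 1.414
```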
Desirable Property of the Maximum
Likelihood Estimate
Under very general conditions on the joint distribution of the sample, when the sample size n is large, the maximum likelihood estimator of any parameter θ is approximately unbiased [E(θ̂) ≈ θ] and has variance that is nearly as small as can be achieved by any estimator. That is, the mle θ̂ is approximately the MVUE of θ.
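This large-sample property can be glimpsed in a hypothetical simulation: the exponential mle λ̂ = 1/X̄ is biased for small n (its expectation is nλ/(n − 1)), but the bias shrinks as n grows. The values λ = 2 and the trial count are arbitrary.

```python
import random

# Hypothetical check: bias of the exponential mle lambda_hat = n/sum(x)
# shrinks as the sample size n grows.
random.seed(9)
lam, trials = 2.0, 5_000

means = {}
for n in (5, 50, 500):
    ests = []
    for _ in range(trials):
        xs = [random.expovariate(lam) for _ in range(n)]
        ests.append(n / sum(xs))
    means[n] = sum(ests) / trials
    print(n, round(means[n], 2))  # approaches 2.0 as n grows
```

For n = 5 the simulated mean sits near 2.5 (= 2·5/4), while for n = 500 it is essentially 2, consistent with the mle being approximately unbiased in large samples.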