Statistical Assumptions for SLR
Recall, the simple linear regression model is

    Y_i = β_0 + β_1 X_i + ε_i,   i = 1, …, n.
The assumptions for the simple linear regression model are:
1) E(ε_i) = 0
2) Var(ε_i) = σ²
3) the ε_i's are uncorrelated.
• These assumptions are also called the Gauss-Markov conditions.
• The above assumptions can be stated in terms of the Y_i's…
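A quick numerical sketch (not part of the notes; the parameter values are hypothetical) of generating data from the SLR model with errors satisfying the three Gauss-Markov conditions:

```python
# Simulate Y_i = beta0 + beta1 * X_i + eps_i where the eps_i have mean 0,
# constant variance sigma^2, and are drawn independently (hence uncorrelated).
import numpy as np

rng = np.random.default_rng(0)
n = 100
beta0, beta1, sigma = 2.0, 0.5, 1.0   # hypothetical true parameters

x = rng.uniform(0, 10, size=n)
eps = rng.normal(0.0, sigma, size=n)  # E(eps)=0, Var(eps)=sigma^2, independent draws
y = beta0 + beta1 * x + eps

print(round(eps.mean(), 2))           # sample mean of the errors is near 0
```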
Possible Violations of Assumptions
• Straight-line model is inappropriate…
• Var(Y_i) increases with X_i…
• Linear model is not appropriate for all the data…
Properties of Least Squares Estimates
• The least-squares estimates b_0 and b_1 are linear in the Y_i's. That is, there exist constants c_i and d_i such that

    b_0 = Σ c_i Y_i,   b_1 = Σ d_i Y_i.

• Proof: Exercise.
• The least-squares estimates are unbiased estimators for β_0 and β_1.
• Proof:…
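A numerical sketch of the linearity claim (the weights below are the standard ones from most regression texts, not given explicitly in the notes): with d_i = (x_i − x̄)/S_xx and c_i = 1/n − x̄·d_i, the linear combinations Σ c_i Y_i and Σ d_i Y_i reproduce the least-squares fit.

```python
# Verify that b0 and b1 are linear combinations of the Y_i's by comparing
# the weighted sums against an ordinary least-squares fit (simulated data).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 20)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 20)

xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)
d = (x - xbar) / Sxx            # weights for the slope
c = 1.0 / len(x) - xbar * d     # weights for the intercept

b1 = np.sum(d * y)              # linear combination of the Y_i's
b0 = np.sum(c * y)

# Agrees with the usual closed-form least-squares fit:
slope, intercept = np.polyfit(x, y, 1)
print(np.allclose([b0, b1], [intercept, slope]))  # True
```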
Gauss-Markov Theorem
• The least-squares estimates are BLUE (Best Linear Unbiased Estimators).
• Of all the possible linear, unbiased estimators of β_0 and β_1, the least-squares estimates have the smallest variance.
• The variance of the least-squares estimates is…
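A Monte Carlo sketch (the variance expression below is the textbook-standard one, stated here as an assumption since the notes elide it): under the Gauss-Markov conditions, Var(b_1) = σ²/S_xx, which a simulation can check numerically.

```python
# Repeatedly simulate SLR data with fixed x's and compare the empirical
# variance of the fitted slope against sigma^2 / Sxx (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, sigma = 1.0, 2.0, 1.0
x = np.linspace(0, 10, 25)
Sxx = np.sum((x - x.mean()) ** 2)

slopes = []
for _ in range(5000):
    y = beta0 + beta1 * x + rng.normal(0, sigma, x.size)
    slopes.append(np.polyfit(x, y, 1)[0])  # fitted slope b1

print(round(np.var(slopes), 4))       # empirical variance of b1
print(round(sigma ** 2 / Sxx, 4))     # theoretical sigma^2 / Sxx
```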
Estimation of Error Term Variance σ²
• The variance σ² of the error terms ε_i's needs to be estimated to obtain an indication of the variability of the probability distribution of Y.
• Further, a variety of inferences concerning the regression function and the prediction of Y require an estimate of σ².
• Recall, for a random variable Z, based on n realizations of Z, the estimates of the mean and variance of Z are….
• Similarly, the estimate of σ² is

    s² = (1/(n − 2)) Σ_{i=1}^{n} e_i²

• s² is called the MSE (Mean Square Error); it is an unbiased estimator of σ² (proof in Chapter 5).
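A short sketch of computing the MSE from residuals (simulated data with hypothetical parameters):

```python
# Fit the line, form residuals e_i = Y_i - (b0 + b1*X_i), and estimate
# sigma^2 by s^2 = sum(e_i^2) / (n - 2), the MSE.
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 50, 1.5
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, sigma, n)

b1, b0 = np.polyfit(x, y, 1)         # fitted slope and intercept
e = y - (b0 + b1 * x)                # residuals
s2 = np.sum(e ** 2) / (n - 2)        # MSE, unbiased for sigma^2

print(round(s2, 2))                  # should be near sigma^2 = 2.25
```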
Normal Error Regression Model
• In order to make inference we need one more assumption about the ε_i's.
• We assume that the ε_i's have a Normal distribution, that is, ε_i ~ N(0, σ²).
• The Normality assumption implies that the errors ε_i's are independent (since they are uncorrelated).
• Under the Normality assumption of the errors, the least-squares estimates of β_0 and β_1 are equivalent to their maximum likelihood estimators.
• This results in additional nice properties of MLE's: they are consistent, sufficient and MVUE.
Example: Calibrating a Snow Gauge
• Researchers wish to measure snow density in the mountains using gamma ray transmissions; the detector reading is called the “gain”.
• The measuring device needs to be calibrated. This is done with polyethylene blocks of known density.
• We want to know what density of snow results in particular readings from the gamma ray detector. The variables are: Y – gain, X – density.
• Data: 9 densities in g/cm³ and 10 measurements of gain for each.
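An illustrative sketch only: the notes describe 9 densities with 10 gain readings each but do not list the data, so the values below are simulated placeholders. The fit mirrors the SLR setup above (Y = gain, X = density).

```python
# Simulate a calibration data set with the stated design (9 densities,
# 10 gain readings each) and fit a straight line. All numbers are made up.
import numpy as np

rng = np.random.default_rng(4)
densities = np.linspace(0.1, 0.7, 9)           # 9 block densities in g/cm^3 (hypothetical grid)
X = np.repeat(densities, 10)                   # 10 gain readings per density
Y = 400 - 450 * X + rng.normal(0, 10, X.size)  # placeholder linear response

b1, b0 = np.polyfit(X, Y, 1)
print(round(b0, 1), round(b1, 1))              # fitted intercept and slope
```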