The Simple Regression Model
y = b0 + b1x + u
Prof. Dr. Rainer Stachuletz

Some Terminology
In the simple linear regression model, where y = b0 + b1x + u, we typically refer to y as the
- Dependent Variable, or
- Left-Hand Side Variable, or
- Explained Variable, or
- Regressand

Some Terminology, cont.
In the simple linear regression of y on x, we typically refer to x as the
- Independent Variable, or
- Right-Hand Side Variable, or
- Explanatory Variable, or
- Regressor, or
- Covariate, or
- Control Variable

A Simple Assumption
The average value of u, the error term, in the population is 0. That is,

E(u) = 0

This is not a restrictive assumption, since we can always use b0 to normalize E(u) to 0.

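A quick check of that claim (a worked rearrangement, not from the original slides): suppose instead that E(u) = a for some a ≠ 0. Then

y = b0 + b1x + u = (b0 + a) + b1x + (u − a)

and the redefined error u − a has mean zero, so any nonzero mean of u is simply absorbed into the intercept.
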
Zero Conditional Mean
We need to make a crucial assumption about how u and x are related. We want it to be the case that knowing something about x does not give us any information about u, so that they are completely unrelated. That is,

E(u|x) = E(u) = 0, which implies

E(y|x) = b0 + b1x

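To make this concrete, here is a minimal numpy sketch of a data-generating process that satisfies the assumption; the parameter values (b0 = 1, b1 = 0.5), the distributions, and the sample size are illustrative assumptions, not from the slides:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0, 10, n)
u = rng.normal(0, 2, n)   # drawn independently of x, so E(u|x) = E(u) = 0
b0, b1 = 1.0, 0.5         # illustrative population parameters
y = b0 + b1 * x + u

# E(y|x) should track b0 + b1*x: compare the mean of y near x = 5
near_5 = (x > 4.9) & (x < 5.1)
print(y[near_5].mean(), b0 + b1 * 5.0)  # both close to 3.5
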
[Figure: E(y|x) as a linear function of x, where for any x the distribution of y is centered about E(y|x). The conditional density f(y) is drawn at x1 and x2 around the line E(y|x) = b0 + b1x.]

Ordinary Least Squares
The basic idea of regression is to estimate the population parameters from a sample. Let {(xi, yi): i = 1, …, n} denote a random sample of size n from the population. For each observation in this sample, it will be the case that

yi = b0 + b1xi + ui

Population regression line, sample data points and the associated error terms
[Figure: the line E(y|x) = b0 + b1x with sample points (x1, y1), …, (x4, y4) and the errors u1, …, u4 shown as vertical deviations from the line.]

Deriving OLS Estimates
To derive the OLS estimates we need to realize that our main assumption of E(u|x) = E(u) = 0 also implies that

Cov(x,u) = E(xu) = 0

Why? Remember from basic probability that Cov(X,Y) = E(XY) − E(X)E(Y), so with E(u) = 0 the covariance reduces to Cov(x,u) = E(xu) − E(x)·0 = E(xu).

Deriving OLS continued
We can write our two restrictions just in terms of x, y, b0 and b1, since u = y − b0 − b1x:

E(y − b0 − b1x) = 0
E[x(y − b0 − b1x)] = 0

These are called moment restrictions.

Deriving OLS using M.O.M.
The method of moments approach to estimation implies imposing the population moment restrictions on the sample moments. What does this mean? Recall that for E(X), the mean of a population distribution, a sample estimator of E(X) is simply the arithmetic mean of the sample.

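For instance, a minimal numpy sketch of this idea (the exponential population with mean 3 is an arbitrary, illustrative choice):

import numpy as np

rng = np.random.default_rng(1)
draws = rng.exponential(scale=3.0, size=50_000)  # population with E(X) = 3

# Method of moments: estimate the population moment E(X)
# by the corresponding sample moment, the arithmetic mean
print(draws.mean())  # close to 3
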
More Derivation of OLS
We want to choose values of the parameters that will ensure that the sample versions of our moment restrictions are true. The sample versions are as follows (sums run over i = 1, …, n):

(1/n) Σ (yi − b̂0 − b̂1xi) = 0

(1/n) Σ xi(yi − b̂0 − b̂1xi) = 0

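These two conditions are linear in b̂0 and b̂1, so they can be solved as a 2-by-2 system; a sketch on simulated data (the data-generating values are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 2, 200)
n = len(x)

# The two sample moment conditions rearrange to the linear system
#   n*b0      + sum(x)*b1   = sum(y)
#   sum(x)*b0 + sum(x^2)*b1 = sum(x*y)
A = np.array([[n, x.sum()],
              [x.sum(), (x * x).sum()]])
c = np.array([y.sum(), (x * y).sum()])
b0_hat, b1_hat = np.linalg.solve(A, c)
print(b0_hat, b1_hat)  # close to the true 1.0 and 0.5
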
More Derivation of OLS
Given the definition of a sample mean, and properties of summation, we can rewrite the first condition as follows:

ȳ = b̂0 + b̂1x̄, or

b̂0 = ȳ − b̂1x̄

More Derivation of OLS
Substituting b̂0 = ȳ − b̂1x̄ into the second condition:

Σ xi(yi − (ȳ − b̂1x̄) − b̂1xi) = 0

Σ xi(yi − ȳ) = b̂1 Σ xi(xi − x̄)

Σ (xi − x̄)(yi − ȳ) = b̂1 Σ (xi − x̄)²

So the OLS estimated slope is

b̂1 = Σ (xi − x̄)(yi − ȳ) / Σ (xi − x̄)²

provided that Σ (xi − x̄)² > 0

Summary of OLS slope estimate
- The slope estimate is the sample covariance between x and y divided by the sample variance of x
- If x and y are positively correlated, the slope will be positive
- If x and y are negatively correlated, the slope will be negative
- We only need x to vary in our sample (so that Σ (xi − x̄)² > 0)

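A sketch verifying both forms of the slope on simulated data (values illustrative): the sum formula for b̂1 agrees with the ratio of the sample covariance to the sample variance of x.

import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 2, 200)

# Slope from the OLS formula, intercept from b0_hat = ybar - b1_hat*xbar
b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0_hat = y.mean() - b1_hat * x.mean()

# Same slope as sample covariance over sample variance of x
cov_over_var = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(b1_hat, cov_over_var)  # identical up to floating point
print(b0_hat)
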
More OLS
Intuitively, OLS fits a line through the sample points such that the sum of squared residuals is as small as possible, hence the term least squares. The residual, û, is an estimate of the error term, u, and is the difference between the sample point and the fitted line (sample regression function).

Sample regression line, sample data points and the associated estimated error terms
[Figure: the fitted line ŷ = b̂0 + b̂1x with sample points (x1, y1), …, (x4, y4) and the residuals û1, …, û4 shown as vertical deviations from the fitted line.]

Alternate approach to derivation
Given the intuitive idea of fitting a line, we can set up a formal minimization problem. That is, we want to choose our parameters such that we minimize the following:

Σ ûi² = Σ (yi − b̂0 − b̂1xi)²

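A sketch confirming that a generic numerical minimizer applied to this objective reproduces the closed-form estimates (scipy's availability and the simulated values are assumptions of the example):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 2, 200)

# Sum of squared residuals as a function of (b0, b1)
def ssr(b):
    return ((y - b[0] - b[1] * x) ** 2).sum()

numeric = minimize(ssr, x0=[0.0, 0.0]).x  # numerical minimizer

# Closed-form OLS estimates for comparison
b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
print(numeric, (b0, b1))  # should agree to several decimal places
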
Alternate approach, continued
If one uses calculus to solve the minimization problem for the two parameters, one obtains the following first order conditions, which are the same as we obtained before, multiplied by n:

Σ (yi − b̂0 − b̂1xi) = 0

Σ xi(yi − b̂0 − b̂1xi) = 0

Algebraic Properties of OLS
- The sum of the OLS residuals is zero; thus, the sample average of the OLS residuals is zero as well
- The sample covariance between the regressors and the OLS residuals is zero
- The OLS regression line always goes through the mean of the sample, the point (x̄, ȳ)

Algebraic Properties (precise)

Σ ûi = 0 and thus (1/n) Σ ûi = 0

Σ xiûi = 0

ȳ = b̂0 + b̂1x̄

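A sketch checking all three properties numerically (data-generating values illustrative):

import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 2, 200)

b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

print(np.isclose(resid.sum(), 0.0))              # residuals sum to zero
print(np.isclose((x * resid).sum(), 0.0))        # zero covariance with x
print(np.isclose(y.mean(), b0 + b1 * x.mean()))  # line passes through (xbar, ybar)
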
More Terminology
We can think of each observation as being made up of an explained part and an unexplained part, yi = ŷi + ûi. We then define the following:

- Σ (yi − ȳ)² is the total sum of squares (SST)
- Σ (ŷi − ȳ)² is the explained sum of squares (SSE)
- Σ ûi² is the residual sum of squares (SSR)

Then SST = SSE + SSR

Proof that SST = SSE + SSR

Σ (yi − ȳ)² = Σ [(yi − ŷi) + (ŷi − ȳ)]²
= Σ [ûi + (ŷi − ȳ)]²
= Σ ûi² + 2 Σ ûi(ŷi − ȳ) + Σ (ŷi − ȳ)²
= SSR + 2 Σ ûi(ŷi − ȳ) + SSE

and we know that Σ ûi(ŷi − ȳ) = 0

Goodness-of-Fit
How do we think about how well our sample regression line fits our sample data? We can compute the fraction of the total sum of squares (SST) that is explained by the model; call this the R-squared of the regression:

R² = SSE/SST = 1 − SSR/SST

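A sketch computing the decomposition and R-squared on simulated data (values illustrative):

import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 2, 200)

b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

sst = ((y - y.mean()) ** 2).sum()      # total sum of squares
sse = ((y_hat - y.mean()) ** 2).sum()  # explained sum of squares
ssr = ((y - y_hat) ** 2).sum()         # residual sum of squares

print(np.isclose(sst, sse + ssr))  # SST = SSE + SSR
print(sse / sst, 1 - ssr / sst)    # two equivalent R-squared computations
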
Using Stata for OLS regressions
Now that we've derived the formula for calculating the OLS estimates of our parameters, you'll be happy to know you don't have to compute them by hand. Regressions in Stata are very simple; to run the regression of y on x, just type

reg y x

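For readers working in Python rather than Stata, a rough equivalent using the statsmodels package (its availability, and the simulated data, are assumptions of the example):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 2, 200)

# statsmodels does not add an intercept automatically,
# so append the constant column before fitting y on x
X = sm.add_constant(x)
result = sm.OLS(y, X).fit()
print(result.params)    # estimated intercept and slope
print(result.rsquared)  # R-squared
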
Unbiasedness of OLS
- Assume the population model is linear in parameters: y = b0 + b1x + u
- Assume we can use a random sample of size n, {(xi, yi): i = 1, 2, …, n}, from the population model; thus we can write the sample model yi = b0 + b1xi + ui
- Assume E(u|x) = 0, and thus E(ui|xi) = 0
- Assume there is variation in the xi

Unbiasedness of OLS (cont)
In order to think about unbiasedness, we need to rewrite our estimator in terms of the population parameter. Start with a simple rewrite of the formula as

b̂1 = Σ (xi − x̄)yi / sₓ², where sₓ² ≡ Σ (xi − x̄)²

Unbiasedness of OLS (cont)
Substituting yi = b0 + b1xi + ui into the numerator:

Σ (xi − x̄)yi = Σ (xi − x̄)(b0 + b1xi + ui)
= b0 Σ (xi − x̄) + b1 Σ (xi − x̄)xi + Σ (xi − x̄)ui

Unbiasedness of OLS (cont)

Σ (xi − x̄) = 0 and Σ (xi − x̄)xi = Σ (xi − x̄)², so the numerator can be rewritten as

b1sₓ² + Σ (xi − x̄)ui, and thus

b̂1 = b1 + Σ (xi − x̄)ui / sₓ²

Unbiasedness of OLS (cont)

Let di = (xi − x̄), so that

b̂1 = b1 + (1/sₓ²) Σ diui; then

E(b̂1) = b1 + (1/sₓ²) Σ di E(ui) = b1

Unbiasedness Summary
- The OLS estimates of b1 and b0 are unbiased
- The proof of unbiasedness depends on our four assumptions; if any assumption fails, then OLS is not necessarily unbiased
- Remember, unbiasedness is a description of the estimator; in a given sample we may be "near" or "far" from the true parameter

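A Monte Carlo sketch of unbiasedness (all simulation settings are illustrative): across many random samples the individual slope estimates vary, but their average sits on the true b1.

import numpy as np

rng = np.random.default_rng(8)
b0, b1, n = 1.0, 0.5, 50

estimates = []
for _ in range(5_000):
    x = rng.uniform(0, 10, n)
    u = rng.normal(0, 2, n)  # E(u|x) = 0 holds by construction
    y = b0 + b1 * x + u
    b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    estimates.append(b1_hat)

print(np.mean(estimates))  # close to the true b1 = 0.5
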
Variance of the OLS Estimators
Now we know that the sampling distribution of our estimate is centered around the true parameter. We want to think about how spread out this distribution is. It is much easier to think about this variance under an additional assumption, so assume

Var(u|x) = s² (homoskedasticity)

Variance of OLS (cont)
Var(u|x) = E(u²|x) − [E(u|x)]², and E(u|x) = 0, so s² = E(u²|x) = E(u²) = Var(u). Thus s² is also the unconditional variance, called the error variance. s, the square root of the error variance, is called the standard deviation of the error. We can then say: E(y|x) = b0 + b1x and Var(y|x) = s²

Homoskedastic Case
[Figure: the conditional density f(y|x) has the same spread at x1 and x2 around the line E(y|x) = b0 + b1x.]

Heteroskedastic Case
[Figure: the conditional density f(y|x) fans out as x increases from x1 to x3 around the line E(y|x) = b0 + b1x.]

Variance of OLS (cont)

Var(b̂1) = Var(b1 + (1/sₓ²) Σ diui)
= (1/sₓ²)² Var(Σ diui)
= (1/sₓ²)² Σ di² Var(ui)
= (1/sₓ²)² Σ di² s²
= s² (1/sₓ²)² Σ di²
= s² (1/sₓ²)² sₓ²
= s²/sₓ²

so Var(b̂1) = s²/sₓ²

Variance of OLS Summary
- The larger the error variance, s², the larger the variance of the slope estimate
- The larger the variability in the xi, the smaller the variance of the slope estimate
- As a result, a larger sample size should decrease the variance of the slope estimate
- A problem is that the error variance is unknown

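A sketch checking the variance formula by simulation (settings illustrative): holding the xi fixed across replications, the empirical variance of b̂1 should match s²/sₓ².

import numpy as np

rng = np.random.default_rng(9)
b0, b1, s, n = 1.0, 0.5, 2.0, 50
x = rng.uniform(0, 10, n)  # keep the regressor values fixed across replications

estimates = []
for _ in range(20_000):
    y = b0 + b1 * x + rng.normal(0, s, n)  # homoskedastic errors, Var(u) = s**2
    b1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
    estimates.append(b1_hat)

theory = s ** 2 / ((x - x.mean()) ** 2).sum()  # s^2 / s_x^2
print(np.var(estimates), theory)               # should be close
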
Estimating the Error Variance
We don't know what the error variance, s², is, because we don't observe the errors, ui. What we observe are the residuals, ûi. We can use the residuals to form an estimate of the error variance.

Error Variance Estimate (cont)

ûi = yi − b̂0 − b̂1xi
= (b0 + b1xi + ui) − b̂0 − b̂1xi
= ui − (b̂0 − b0) − (b̂1 − b1)xi

Then an unbiased estimator of s² is

ŝ² = (1/(n − 2)) Σ ûi² = SSR/(n − 2)

Error Variance Estimate (cont)

ŝ = √ŝ² = standard error of the regression

Recall that sd(b̂1) = s/sₓ. If we substitute ŝ for s, then we have the standard error of b̂1:

se(b̂1) = ŝ / (Σ (xi − x̄)²)^(1/2)

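A closing sketch putting these pieces together on simulated data (values illustrative): compute ŝ, the standard error of the regression, and se(b̂1).

import numpy as np

rng = np.random.default_rng(10)
n = 200
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0, 2, n)

b1 = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)

s2_hat = (resid ** 2).sum() / (n - 2)  # unbiased estimator: SSR / (n - 2)
s_hat = np.sqrt(s2_hat)                # standard error of the regression
se_b1 = s_hat / np.sqrt(((x - x.mean()) ** 2).sum())  # se(b1_hat)
print(s_hat, se_b1)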