Chapter 3: Precision of the multiple regression coefficients (EC220)



Christopher Dougherty
EC220 - Introduction to econometrics
(chapter 3)
Slideshow: precision of the multiple regression coefficients
Original citation:
Dougherty, C. (2012) EC220 - Introduction to econometrics (chapter 3). [Teaching Resource]
© 2012 The Author
This version available at: http://learningresources.lse.ac.uk/129/
Available in LSE Learning Resources Online: May 2012
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. This license allows
the user to remix, tweak, and build upon the work even for commercial purposes, as long as the user
credits the author and licenses their new creations under the identical terms.
http://creativecommons.org/licenses/by-sa/3.0/
http://learningresources.lse.ac.uk/
PRECISION OF THE MULTIPLE REGRESSION COEFFICIENTS

$Y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + u \qquad\qquad \hat{Y} = b_1 + b_2 X_2 + b_3 X_3$

$\sigma_{b_2}^2 = \frac{\sigma_u^2}{\sum \left(X_{2i} - \bar{X}_2\right)^2} \times \frac{1}{1 - r_{X_2,X_3}^2} = \frac{\sigma_u^2}{n\,\mathrm{MSD}(X_2)} \times \frac{1}{1 - r_{X_2,X_3}^2}$
This sequence investigates the variances and standard errors of the slope coefficients in a
model with two explanatory variables.
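To see the variance formula at work, here is an illustrative simulation with made-up parameter values (not the lecture's data): holding the regressors fixed and redrawing only the disturbances, the empirical variance of b2 across replications should match the theoretical expression.

```python
import numpy as np

# Illustrative simulation of the variance formula (assumed values,
# not the lecture's Data Set 21).
rng = np.random.default_rng(0)
n, beta1, beta2, beta3, sigma_u = 50, 10.0, 2.0, -1.0, 3.0

# Fixed regressors; X3 is constructed to be correlated with X2.
X2 = rng.normal(10, 2, n)
X3 = 0.6 * X2 + rng.normal(0, 1.5, n)
X = np.column_stack([np.ones(n), X2, X3])

# Theoretical variance of b2 for fixed regressors.
r = np.corrcoef(X2, X3)[0, 1]
var_b2 = sigma_u**2 / np.sum((X2 - X2.mean())**2) / (1 - r**2)

# Re-estimate b2 many times, redrawing only the disturbance term.
R = 5000
U = rng.normal(0, sigma_u, (n, R))
Y = (beta1 + beta2 * X2 + beta3 * X3)[:, None] + U
B = np.linalg.lstsq(X, Y, rcond=None)[0]   # shape (3, R); row 1 holds b2
print(var_b2, B[1].var())                  # the two should be close
```

Redrawing only u while keeping X2 and X3 fixed mirrors the assumptions under which the formula is derived.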
The expression for the variance of b2 is shown above. The expression for the variance of b3
is the same, with the subscripts 2 and 3 interchanged.
The first factor in the expression is identical to that for the variance of the slope coefficient
in a simple regression model.
The variance of b2 depends on the variance of the disturbance term, the number of
observations, and the mean square deviation of X2 for exactly the same reasons as in a
simple regression model.
The difference is that in multiple regression analysis the expression is multiplied by a factor
which depends on the correlation between X2 and X3.
The higher the correlation between the explanatory variables, whether positive or negative, the greater the variance will be.
This is easy to understand intuitively. The greater the correlation, the harder it is to
discriminate between the effects of the explanatory variables on Y, and the less accurate
will be the regression estimates.
Note that the variance expression above is valid only for a model with two explanatory
variables. When there are more than two, the expression becomes much more complex and
it is sensible to switch to matrix algebra.
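As a sketch of that matrix route, using illustrative data rather than the lecture's Data Set 21: the estimated covariance matrix of the coefficient vector is $s_u^2 (X'X)^{-1}$, the standard errors are the square roots of its diagonal, and in the two-regressor case they coincide with the formula in this sequence.

```python
import numpy as np

# The matrix-algebra route, checked against the two-variable formula
# from this sequence (illustrative data, not the lecture's Data Set 21).
rng = np.random.default_rng(1)
n = 200
X2 = rng.normal(10, 2, n)
X3 = 0.5 * X2 + rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), X2, X3])
y = 1.0 + 2.0 * X2 - 0.5 * X3 + rng.normal(0, 2.0, n)

b = np.linalg.solve(X.T @ X, X.T @ y)      # OLS coefficients
e = y - X @ b                              # residuals
s2_u = e @ e / (n - 3)                     # s_u^2 with k = 3
cov_b = s2_u * np.linalg.inv(X.T @ X)      # s_u^2 (X'X)^{-1}
se_matrix = np.sqrt(np.diag(cov_b))[1]     # s.e.(b2) from the matrix route

# The two-variable formula gives the same number.
r = np.corrcoef(X2, X3)[0, 1]
se_formula = np.sqrt(s2_u / np.sum((X2 - X2.mean())**2) / (1 - r**2))
```

The agreement is exact (up to floating point), since the relevant diagonal element of $(X'X)^{-1}$ equals $1 / \left[\sum (X_{2i} - \bar{X}_2)^2 (1 - r^2)\right]$.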
$\text{s.d.}(b_2) = \sqrt{\frac{\sigma_u^2}{\sum \left(X_{2i} - \bar{X}_2\right)^2} \times \frac{1}{1 - r_{X_2,X_3}^2}}$
The standard deviation of the distribution of b2 is of course given by the square root of its
variance.
With the exception of the variance of u, we can calculate the components of the standard
deviation from the sample data.
$E\left[\frac{1}{n}\sum e_i^2\right] = \frac{n-k}{n}\,\sigma_u^2$
The variance of u has to be estimated. The mean square of the residuals provides a consistent estimator, but in a finite sample it is biased downwards by a factor (n – k) / n, where k is the number of parameters.
$s_u^2 = \frac{1}{n-k}\sum e_i^2$
Obviously we can obtain an unbiased estimator by dividing the sum of the squares of the residuals by n – k instead of n. We denote this unbiased estimator $s_u^2$.
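The arithmetic can be checked directly with the figures from the union-subsample regression reported in this sequence (RSS = 15501.9762, n = 101, k = 3); the results match Stata's Residual MS and Root MSE.

```python
# Unbiased estimate of the disturbance variance, using the residual sum
# of squares from the union-subsample regression in this sequence.
RSS, n, k = 15501.9762, 101, 3
s2_u = RSS / (n - k)          # = 158.18343, Stata's Residual MS
s_u = s2_u ** 0.5             # = 12.577,    Stata's Root MSE
print(s2_u, s_u)
```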
$\text{s.e.}(b_2) = s_u \sqrt{\frac{1}{\sum \left(X_{2i} - \bar{X}_2\right)^2} \times \frac{1}{1 - r_{X_2,X_3}^2}}$
Thus the estimate of the standard deviation of the probability distribution of b2, known as
the standard error of b2 for short, is given by the expression above.
. reg EARNINGS S EXP if COLLBARG==1

      Source |       SS       df       MS              Number of obs =     101
-------------+------------------------------           F(  2,    98) =    9.72
       Model |  3076.31726     2  1538.15863           Prob > F      =  0.0001
    Residual |  15501.9762    98   158.18343           R-squared     =  0.1656
-------------+------------------------------           Adj R-squared =  0.1486
       Total |  18578.2934   100  185.782934           Root MSE      =  12.577

------------------------------------------------------------------------------
    EARNINGS |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           S |   2.333846   .5492604     4.25   0.000     1.243857    3.423836
         EXP |   .2235095   .3389455     0.66   0.511    -.4491169    .8961358
       _cons |  -15.12427   11.38141    -1.33   0.187    -37.71031    7.461779
------------------------------------------------------------------------------
We will use this expression to analyze why the standard error of S is larger for the union
subsample than for the non-union subsample in earnings function regressions using Data
Set 21.
To select a subsample in Stata, you add an ‘if’ statement to a command. The COLLBARG
variable is equal to 1 for respondents whose rates of pay are determined by collective
bargaining, and it is 0 for the others.
Note that in tests for equality, Stata requires the = sign to be duplicated.
In the case of the union subsample, the standard error of S is 0.5493.
. reg EARNINGS S EXP if COLLBARG==0

      Source |       SS       df       MS              Number of obs =     439
-------------+------------------------------           F(  2,   436) =   57.77
       Model |  19540.1761     2  9770.08805           Prob > F      =  0.0000
    Residual |   73741.593   436  169.132094           R-squared     =  0.2095
-------------+------------------------------           Adj R-squared =  0.2058
       Total |  93281.7691   438  212.972076           Root MSE      =  13.005

------------------------------------------------------------------------------
    EARNINGS |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
           S |   2.721698   .2604411    10.45   0.000     2.209822    3.233574
         EXP |   .6077342   .1400846     4.34   0.000     .3324091    .8830592
       _cons |  -28.00805   4.643211    -6.03   0.000    -37.13391   -18.88219
------------------------------------------------------------------------------
In the case of the non-union subsample, the standard error of S is 0.2604, less than half as
large.
$\text{s.e.}(b_2) = s_u \times \sqrt{\frac{1}{n}} \times \sqrt{\frac{1}{\mathrm{MSD}(X_2)}} \times \sqrt{\frac{1}{1 - r_{X_2,X_3}^2}}$

Decomposition of the standard error of S

Component       s_u       n    MSD(S)   r(S,EXP)     s.e.
Union         12.577    101    6.2325    -0.4087   0.5493
Non-union     13.005    439    5.8666    -0.1784   0.2604

Factor          s_u   1/sqrt(n)   1/sqrt(MSD(S))   1/sqrt(1-r^2)   product
Union         12.577     0.0995           0.4006          1.0957    0.5493
Non-union     13.005     0.0477           0.4129          1.0163    0.2603
We will explain the difference by looking at the components of the standard error.
$s_u^2 = \frac{1}{n-k}\,\mathrm{RSS}$
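The four factors and their product can be reproduced from the component values reported in this sequence:

```python
import math

# Reproduce the factor decomposition of s.e.(b2) for schooling S, using
# the component values (s_u, n, MSD(S), r(S,EXP)) from this sequence.
components = {
    "union":     (12.577, 101, 6.2325, -0.4087),
    "non-union": (13.005, 439, 5.8666, -0.1784),
}

se = {}
for name, (s_u, n, msd, r) in components.items():
    factors = (s_u,
               1 / math.sqrt(n),            # sample-size factor
               1 / math.sqrt(msd),          # dispersion-of-S factor
               1 / math.sqrt(1 - r**2))     # multicollinearity factor
    se[name] = math.prod(factors)

print(se)   # union is about 0.549, non-union about 0.260
```

The products reproduce the standard errors in the regression output, apart from rounding in the last digit.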
We will start with s_u. For the union subsample, RSS (the residual sum of squares in the regression output above) is 15501.98.
There are 101 observations in the union subsample. k is equal to 3. Thus n – k is equal to 98.
RSS / (n – k) is equal to 158.183. To obtain su, we take the square root. This is 12.577.
We place this in the table, along with the number of observations.
Similarly, in the case of the non-union subsample, su is the square root of 169.132, which is
13.005. We also note that the number of observations in that subsample is 439.
We place these in the table.
We calculate the mean square deviation of S for the two subsamples from the sample data.
. cor S EXP if COLLBARG==1
(obs=101)

        |      S     EXP
--------+-----------------
      S |  1.0000
    EXP | -0.4087  1.0000

. cor S EXP if COLLBARG==0
(obs=439)

        |      S     EXP
--------+-----------------
      S |  1.0000
    EXP | -0.1784  1.0000
The correlation coefficients for S and EXP are –0.4087 and –0.1784 for the union and non-union subsamples, respectively. (Note that cor is the Stata command for computing correlations.)
These entries complete the top half of the table. We will now look at the impact of each item
on the standard error, using the mathematical expression at the top.
The s_u component needs no modification. It is a little larger for the non-union subsample, which has an adverse effect on that subsample's standard error.
The number of observations is much larger for the non-union subsample, so the second
factor is much smaller than that for the union subsample.
Perhaps surprisingly, the variance in schooling is a little larger for the union subsample.
The correlation between schooling and work experience is greater for the union subsample,
and this has an adverse effect on its standard error. Note that the sign of the correlation
makes no difference since it is squared.
Multiplying the four factors together, we obtain the standard errors. (The discrepancy in the
last digit of the non-union standard error has been caused by rounding error.)
We see that the reason the standard error is smaller for the non-union subsample is that it contains far more observations than the union subsample. Otherwise the standard errors would have been about the same.
The greater correlation between S and EXP has an adverse effect on the union standard
error, but this is just about offset by the smaller su and the larger variance of S.
Copyright Christopher Dougherty 2011.
These slideshows may be downloaded by anyone, anywhere for personal use.
Subject to respect for copyright and, where appropriate, attribution, they may be
used as a resource for teaching an econometrics course. There is no need to
refer to the author.
The content of this slideshow comes from Section 3.3 of C. Dougherty,
Introduction to Econometrics, fourth edition 2011, Oxford University Press.
Additional (free) resources for both students and instructors may be
downloaded from the OUP Online Resource Centre
http://www.oup.com/uk/orc/bin/9780199567089/.
Individuals studying econometrics on their own and who feel that they might
benefit from participation in a formal course should consider the London School
of Economics summer school course
EC212 Introduction to Econometrics
http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx
or the University of London International Programmes distance learning course
20 Elements of Econometrics
www.londoninternational.ac.uk/lse.