
Slide 1

Christopher Dougherty

EC220 - Introduction to econometrics
(chapter 12)
Slideshow: testing for autocorrelation
Original citation:
Dougherty, C. (2012) EC220 - Introduction to econometrics (chapter 12). [Teaching Resource]
© 2012 The Author
This version available at: http://learningresources.lse.ac.uk/138/
Available in LSE Learning Resources Online: May 2012
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License. This license allows
the user to remix, tweak, and build upon the work even for commercial purposes, as long as the user
credits the author and licenses their new creations under the identical terms.
http://creativecommons.org/licenses/by-sa/3.0/

http://learningresources.lse.ac.uk/


Slide 2

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals

$u_t = \rho u_{t-1} + \varepsilon_t$
$e_t = \rho e_{t-1} + \text{error}$

We will initially confine the discussion of the tests for autocorrelation to its most common
form, the AR(1) process. If the disturbance term follows the AR(1) process, it is reasonable
to hypothesize that, as an approximation, the residuals will conform to a similar process.
1


Slide 3

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals

$u_t = \rho u_{t-1} + \varepsilon_t$
$e_t = \rho e_{t-1} + \text{error}$

After all, provided that the conditions for the consistency of the OLS estimators are
satisfied, as the sample size becomes large, the regression parameters will approach their
true values, the location of the regression line will converge on the true relationship, and
the residuals will coincide with the values of the disturbance term.

2


Slide 4

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals

$u_t = \rho u_{t-1} + \varepsilon_t$
$e_t = \rho e_{t-1} + \text{error}$

Hence a regression of et on et–1 is sufficient, at least in large samples. Of course, there is
the issue that, in this regression, et–1 is a lagged dependent variable, but that does not
matter in large samples.
3
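
To make the procedure concrete, here is a minimal Python sketch (not part of the original slideshow, which uses EViews): it simulates the model used on the following slides, fits it by OLS, and then regresses the residuals on their own lagged values. numpy/statsmodels and all variable names are assumptions of the sketch.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
eps = rng.normal(size=T)
u = np.zeros(T)
for s in range(1, T):
    u[s] = 0.7 * u[s - 1] + eps[s]          # AR(1) disturbance with rho = 0.7
t = np.arange(T, dtype=float)
y = 10 + 2.0 * t + u                        # true model from the next slides

fit = sm.OLS(y, sm.add_constant(t)).fit()   # fit the original model
e = fit.resid                               # residuals e_t

# simple autoregression of the residuals: e_t = rho * e_{t-1} + error
ar = sm.OLS(e[1:], e[:-1]).fit()
print(ar.params[0], ar.bse[0])              # estimate of rho and its standard error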


Slide 5

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals
$Y_t = 10 + 2.0t + u_t$
$u_t = 0.7 u_{t-1} + \varepsilon_t$

[Figure: distributions of $\hat{\rho}$ from the residuals regression $e_t = \hat{\rho} e_{t-1}$, for T = 25, 50, 100, and 200; horizontal axis from −0.5 to 1, with the true value 0.7 marked.]

This is illustrated with the simulation shown in the figure. The true model is as shown, with $u_t$ being generated as an AR(1) process with $\rho$ = 0.7.
4


Slide 6

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals
$Y_t = 10 + 2.0t + u_t$
$u_t = 0.7 u_{t-1} + \varepsilon_t$

[Figure: distributions of $\hat{\rho}$ from the residuals regression $e_t = \hat{\rho} e_{t-1}$, for T = 25, 50, 100, and 200; horizontal axis from −0.5 to 1, with the true value 0.7 marked.]

The values of the parameters in the model for $Y_t$ make no difference to the distributions of the estimator of $\rho$.
5


Slide 7

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals
$Y_t = 10 + 2.0t + u_t$
$u_t = 0.7 u_{t-1} + \varepsilon_t$

  T    mean
 25    0.47
 50    0.59
100    0.65
200    0.68

[Figure: distributions of $\hat{\rho}$ from the residuals regression $e_t = \hat{\rho} e_{t-1}$, for T = 25, 50, 100, and 200; horizontal axis from −0.5 to 1, with the true value 0.7 marked.]

As can be seen, when $e_t$ is regressed on $e_{t-1}$, the distribution of the estimator of $\rho$ is left-skewed and heavily biased downwards for T = 25. The mean of the distribution is 0.47.
6


Slide 8

TESTS FOR AUTOCORRELATION

Simple autoregression of the residuals
$Y_t = 10 + 2.0t + u_t$
$u_t = 0.7 u_{t-1} + \varepsilon_t$

  T    mean
 25    0.47
 50    0.59
100    0.65
200    0.68

[Figure: distributions of $\hat{\rho}$ from the residuals regression $e_t = \hat{\rho} e_{t-1}$, for T = 25, 50, 100, and 200; horizontal axis from −0.5 to 1, with the true value 0.7 marked.]

However, as the sample size increases, the downwards bias diminishes and it is clear that it
is converging on 0.7 as the sample becomes large. Inference in finite samples will be
approximate, given the autoregressive nature of the regression.
7
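
A minimal sketch of a comparable Monte Carlo experiment (an illustration, not the slides' own code; numpy/statsmodels assumed): it repeats the residuals autoregression for each sample size and averages the estimates, which should be roughly comparable to the means in the table above.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

def rho_hat_once(T, rho=0.7):
    eps = rng.normal(size=T)
    u = np.zeros(T)
    for s in range(1, T):
        u[s] = rho * u[s - 1] + eps[s]
    t = np.arange(T, dtype=float)
    y = 10 + 2.0 * t + u                              # true model from the slide
    e = sm.OLS(y, sm.add_constant(t)).fit().resid
    return sm.OLS(e[1:], e[:-1]).fit().params[0]      # rho-hat from e_t on e_{t-1}

for T in (25, 50, 100, 200):
    draws = [rho_hat_once(T) for _ in range(2000)]
    print(T, round(float(np.mean(draws)), 2))         # compare with 0.47, 0.59, 0.65, 0.68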


Slide 9

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

The simple estimator of the autocorrelation coefficient depends on Assumption C.7 part (2)
being satisfied when the original model (the model for Yt) is fitted. Generally, one might
expect this not to be the case.
8


Slide 10

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

If the original model contains a lagged dependent variable as a regressor, or violates
Assumption C.7 part (2) in any other way, the estimates of the parameters will be
inconsistent if the disturbance term is subject to autocorrelation.
9


Slide 11

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

As a repercussion, a simple regression of $e_t$ on $e_{t-1}$ will produce an inconsistent estimate of $\rho$. The solution is to include all of the explanatory variables in the original model in the residuals autoregression.
10


Slide 12

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

If the original model is the first equation where, say, one of the X variables is Yt–1, then the
residuals regression would be the second equation.
11


Slide 13

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

The idea is that, by including the X variables, one is controlling for the effects of any
endogeneity on the residuals.
12


Slide 14

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

The underlying theory is complex and relates to maximum-likelihood estimation, as does
the test statistic. The test is known as the Breusch–Godfrey test.
13


Slide 15

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

Test statistic: $nR^2$, distributed as $\chi^2(1)$ when testing for first-order autocorrelation

Several asymptotically-equivalent versions of the test have been proposed. The most popular involves the computation of the Lagrange multiplier statistic $nR^2$ when the residuals regression is fitted, n being the actual number of observations in the regression.
14


Slide 16

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

Test statistic: $nR^2$, distributed as $\chi^2(1)$ when testing for first-order autocorrelation

Asymptotically, under the null hypothesis of no autocorrelation, $nR^2$ is distributed as a chi-squared statistic with one degree of freedom.
15
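
For concreteness, here is a minimal Python/statsmodels sketch (assumed tooling; the slides themselves use EViews) that computes $nR^2$ by hand from the residuals regression and checks it against the library's built-in Breusch–Godfrey routine. The data are a simulated stand-in for the original model.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 80
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.6 * u[t - 1] + rng.normal()    # AR(1) disturbance
y = 1.0 + 0.5 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()   # original model
e = res.resid

# residuals regression: e_t on an intercept, the X's, and e_{t-1}
Z = np.column_stack([res.model.exog[1:], e[:-1]])
aux = sm.OLS(e[1:], Z).fit()
lm = aux.nobs * aux.rsquared                # nR^2
print(lm, chi2.sf(lm, df=1))                # compare with chi^2(1)

# library version for comparison (handles the presample value internally)
print(acorr_breusch_godfrey(res, nlags=1)[:2])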


Slide 17

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

Alternatively, simple t test on the coefficient of $e_{t-1}$

A simple t test on the coefficient of et–1 has also been proposed, again with asymptotic
validity.
16


Slide 18

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \sum_{s=1}^{q} \rho_s e_{t-s}$

The procedure can be extended to test for higher order autocorrelation. If AR(q)
autocorrelation is suspected, the residuals regression includes q lagged residuals.
17


Slide 19

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \sum_{s=1}^{q} \rho_s e_{t-s}$

Test statistic: $nR^2$, distributed as $\chi^2(q)$

For the Lagrange multiplier version of the test, the test statistic remains $nR^2$ (with n smaller than before, the inclusion of the additional lagged residuals leading to a further loss of initial observations).
18


Slide 20

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \sum_{s=1}^{q} \rho_s e_{t-s}$

Test statistic: $nR^2$, distributed as $\chi^2(q)$

Under the null hypothesis of no autocorrelation, nR2 has a chi-squared distribution with q
degrees of freedom.
19


Slide 21

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \sum_{s=1}^{q} \rho_s e_{t-s}$

Alternatively, F test on the lagged residuals:
$H_0: \rho_1 = \dots = \rho_q = 0$, $H_1$: not $H_0$

The t test version becomes an F test comparing RSS for the residuals regression with RSS
for the same specification without the residual terms. Again, the test is valid only
asymptotically.
20
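
A minimal sketch of the higher-order test under the same assumptions (Python/statsmodels with simulated stand-in data; not the slides' own EViews code). The library routine returns both the $nR^2$ (chi-squared) version and the F version just described.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
u = np.zeros(n)
for t in range(2, n):
    u[t] = 0.5 * u[t - 1] - 0.3 * u[t - 2] + rng.normal()   # AR(2) disturbance
y = 1.0 + 0.5 * x + u

res = sm.OLS(y, sm.add_constant(x)).fit()
q = 2
lm, lm_pval, fval, f_pval = acorr_breusch_godfrey(res, nlags=q)
print(lm, lm_pval)     # nR^2 version, compared with chi^2(q)
print(fval, f_pval)    # F version of the same test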


Slide 22

TESTS FOR AUTOCORRELATION

Breusch–Godfrey test
$Y_t = \beta_1 + \sum_{j=2}^{k} \beta_j X_{jt} + u_t$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \rho e_{t-1}$

$e_t = \gamma_1 + \sum_{j=2}^{k} \gamma_j X_{jt} + \sum_{s=1}^{q} \rho_s e_{t-s}$

Test statistic: $nR^2$, distributed as $\chi^2(q)$, valid also for MA(q) autocorrelation

The Lagrange multiplier version of the test has been shown to be asymptotically valid for the case of MA(q) moving average autocorrelation.
21


Slide 23

TESTS FOR AUTOCORRELATION

Durbin–Watson test
$d = \dfrac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$

The first major test to be developed and popularised for the detection of autocorrelation
was the Durbin–Watson test for AR(1) autocorrelation based on the Durbin–Watson d
statistic calculated from the residuals using the expression shown.
22
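
A minimal sketch (assumed tooling: numpy/statsmodels; not from the original slides) computing d directly from the definition and comparing with the library's built-in. The residual values are purely illustrative.

import numpy as np
from statsmodels.stats.stattools import durbin_watson

def dw(e):
    """Durbin-Watson d from a vector of residuals."""
    e = np.asarray(e, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)   # numerator runs t = 2..T

e = np.array([0.012, 0.015, 0.009, -0.004, -0.011, -0.006, 0.003])
print(dw(e))                 # computed directly from the formula
print(durbin_watson(e))      # statsmodels' built-in agrees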


Slide 24

TESTS FOR AUTOCORRELATION

Durbin–Watson test
$d = \dfrac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$

In large samples: $d \to 2 - 2\rho$
No autocorrelation
Severe positive autocorrelation
Severe negative autocorrelation
It can be shown that in large samples d tends to $2 - 2\rho$, where $\rho$ is the parameter in the AR(1) relationship $u_t = \rho u_{t-1} + \varepsilon_t$.
23


Slide 25

TESTS FOR AUTOCORRELATION

Durbin–Watson test
$d = \dfrac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation
Severe negative autocorrelation
If there is no autocorrelation, $\rho$ is 0 and d should be distributed randomly around 2.

24


Slide 26

TESTS FOR AUTOCORRELATION

Durbin–Watson test
$d = \dfrac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation
If there is severe positive autocorrelation, $\rho$ will be near 1 and d will be near 0.

25


Slide 27

TESTS FOR AUTOCORRELATION

Durbin–Watson test
$d = \dfrac{\sum_{t=2}^{T} (e_t - e_{t-1})^2}{\sum_{t=1}^{T} e_t^2}$

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

Likewise, if there is severe negative autocorrelation, $\rho$ will be near −1 and d will be near 4.

26


Slide 28

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4, with positive autocorrelation toward 0, no autocorrelation around 2, and negative autocorrelation toward 4.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

Thus d behaves as illustrated graphically above.

27


Slide 29

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4, with critical values $d_{crit}$ marked on either side of 2; positive autocorrelation lies below the lower critical value, negative autocorrelation above the upper one.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

To perform the Durbin–Watson test, we define critical values of d. The null hypothesis is H0:
 = 0 (no autocorrelation). If d lies between these values, we do not reject the null
hypothesis.
28


Slide 30

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4, with critical values $d_{crit}$ marked on either side of 2; positive autocorrelation lies below the lower critical value, negative autocorrelation above the upper one.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

The critical values, at any significance level, depend on the number of observations in the
sample and the number of explanatory variables.
29


Slide 31

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4, with critical values $d_{crit}$ marked on either side of 2; positive autocorrelation lies below the lower critical value, negative autocorrelation above the upper one.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

Unfortunately, they also depend on the actual data for the explanatory variables in the
sample, and thus vary from sample to sample.
30


Slide 32

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4; on the positive side the critical value $d_{crit}$ lies between the bounds $d_L$ and $d_U$, with a corresponding $d_{crit}$ on the negative side.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

However Durbin and Watson determined upper and lower bounds, dU and dL, for the critical
values, and these are presented in standard tables.
31


Slide 33

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4; on the positive side the critical value $d_{crit}$ lies between the bounds $d_L$ and $d_U$, with a corresponding $d_{crit}$ on the negative side.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If d is less than dL, it must also be less than the critical value of d for positive
autocorrelation, and so we would reject the null hypothesis and conclude that there is
positive autocorrelation.
32


Slide 34

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4; on the positive side the critical value $d_{crit}$ lies between the bounds $d_L$ and $d_U$, with a corresponding $d_{crit}$ on the negative side.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If d is above $d_U$, it must also be above the critical value of d, and so we would not reject the null hypothesis. (Of course, if it were above 2, we should consider testing for negative autocorrelation instead.)
33


Slide 35

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4; on the positive side the critical value $d_{crit}$ lies between the bounds $d_L$ and $d_U$, with a corresponding $d_{crit}$ on the negative side.]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If d lies between dL and dU, we cannot tell whether it is above or below the critical value and
so the test is indeterminate.
34


Slide 36

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

Here are dL and dU for 45 observations and two explanatory variables, at the 5% significance
level.
35


Slide 37

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side and 2.38 and 2.57 on the negative side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

There are similar bounds for the critical value in the case of negative autocorrelation. They are not given in the standard tables because negative autocorrelation is uncommon, but it is easy to calculate them because they are located symmetrically to the right of 2.
36


Slide 38

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side and 2.38 and 2.57 on the negative side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

So if d < 1.43, we reject the null hypothesis and conclude that there is positive
autocorrelation.
37


Slide 39

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side and 2.38 and 2.57 on the negative side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If 1.43 < d < 1.62, the test is indeterminate and we do not come to any conclusion.

38


Slide 40

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side and 2.38 and 2.57 on the negative side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If 1.62 < d < 2.38, we do not reject the null hypothesis of no autocorrelation.

39


Slide 41

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side and 2.38 and 2.57 on the negative side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If 2.38 < d < 2.57, we do not come to any conclusion.

40


Slide 42

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.43 and $d_U$ = 1.62 on the positive side and 2.38 and 2.57 on the negative side (n = 45, k = 3, 5% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

If d > 2.57, we conclude that there is significant negative autocorrelation.

41
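
The decision rule can be summarized in a few lines of Python (an illustration, not part of the slides), using the 5% bounds quoted above; the bounds on the negative side are obtained by reflecting $d_L$ and $d_U$ about 2.

def dw_decision(d, dL=1.43, dU=1.62):
    # bounds test at a given significance level; the negative-side
    # bounds are the reflections 4 - dU and 4 - dL
    if d < dL:
        return "reject H0: positive autocorrelation"
    if d < dU:
        return "zone of indeterminacy"
    if d <= 4 - dU:
        return "do not reject H0: no autocorrelation"
    if d <= 4 - dL:
        return "zone of indeterminacy"
    return "reject H0: negative autocorrelation"

print(dw_decision(0.48))   # the d statistic reported later for the static FOOD regression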


Slide 43

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.24 and $d_U$ = 1.42 on the positive side and 2.58 and 2.76 on the negative side (n = 45, k = 3, 1% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

Here are the bounds for the critical values for the 1% test, again with 45 observations and
two explanatory variables.
42


Slide 44

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.24 and $d_U$ = 1.42 on the positive side and 2.58 and 2.76 on the negative side (n = 45, k = 3, 1% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

The Durbin-Watson test is valid only when all the explanatory variables are deterministic.
This is in practice a serious limitation since usually interactions and dynamics in a system
of equations cause Assumption C.7 part (2) to be violated.
43


Slide 45

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.24 and $d_U$ = 1.42 on the positive side and 2.58 and 2.76 on the negative side (n = 45, k = 3, 1% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

In particular, if the lagged dependent variable is used as a regressor, the statistic is biased
towards 2 and therefore will tend to under-reject the null hypothesis. It is also restricted to
testing for AR(1) autocorrelation.
44


Slide 46

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.24 and $d_U$ = 1.42 on the positive side and 2.58 and 2.76 on the negative side (n = 45, k = 3, 1% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

Despite these shortcomings, it remains a popular test and some major applications produce
the d statistic automatically as part of the standard regression output.
45


Slide 47

TESTS FOR AUTOCORRELATION

Durbin–Watson test
[Diagram: the d scale from 0 to 4 with $d_L$ = 1.24 and $d_U$ = 1.42 on the positive side and 2.58 and 2.76 on the negative side (n = 45, k = 3, 1% level).]

In large samples: $d \to 2 - 2\rho$
No autocorrelation: $d \approx 2$
Severe positive autocorrelation: $d \approx 0$
Severe negative autocorrelation: $d \approx 4$

It does have the appeal of the test statistic being part of standard regression output.
Further, it is appropriate for finite samples, subject to the zone of indeterminacy and the
deterministic regressor requirement.
46


Slide 48

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

Durbin proposed two tests for the case where the use of the lagged dependent variable as a regressor made the original Durbin–Watson test inapplicable. One was a precursor to the Breusch–Godfrey test.
47


Slide 49

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

The other is the Durbin h test, appropriate for the detection of AR(1) autocorrelation.

48


Slide 50

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

The Durbin h statistic is defined as shown, where $\hat{\rho}$ is an estimate of $\rho$ in the AR(1) process, $s_{b_{Y(-1)}}^2$ is an estimate of the variance of the coefficient of the lagged dependent variable, and n is the number of observations in the regression.

49


Slide 51

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

d  2  2

ˆ  1  0 . 5 d

There are various ways in which one might estimate $\rho$ but, since this test is valid only for large samples, it does not matter which is used. The most convenient is to take advantage of the fact that d tends to $2 - 2\rho$ in large samples. The estimator is then $1 - 0.5d$.
50


Slide 52

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

d  2  2

ˆ  1  0 . 5 d

The estimate of the variance of the coefficient of the lagged dependent variable is obtained
by squaring its standard error.
51


Slide 53

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

d  2  2

ˆ  1  0 . 5 d

Thus h can be calculated from the usual regression results. In large samples, under the
null hypothesis of no autocorrelation, h is distributed as a normal variable with zero mean
and unit variance.
52
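
A minimal sketch of the computation (illustrative Python, not the slides' own code). The example inputs are the d statistic, the standard error of the lagged dependent variable's coefficient, and n from the regression reported near the end of this slideshow.

import math

def durbin_h(d, se_lag, n):
    rho_hat = 1 - 0.5 * d              # estimate of rho from the d statistic
    denom = 1 - n * se_lag ** 2
    if denom <= 0:
        raise ValueError("h cannot be computed: n * s^2 >= 1")
    return rho_hat * math.sqrt(n / denom)

# values from the ADL regression for LGFOOD later in the slideshow
print(durbin_h(d=1.112437, se_lag=0.110178, n=44))   # about 4.3 (slides: 4.33 with rounded inputs)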


Slide 54

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

d  2  2

ˆ  1  0 . 5 d

An occasional problem with this test is that the h statistic cannot be computed if $n s_{b_{Y(-1)}}^2$ is greater than 1, which can happen if the sample size is not very large.

53


Slide 55

TESTS FOR AUTOCORRELATION

Durbin’s h test

h  ˆ

n
1  ns bY (  1 )
2

d  2  2

ˆ  1  0 . 5 d

An even worse problem occurs when $n s_{b_{Y(-1)}}^2$ is near to, but less than, 1. In such a situation the h statistic could be enormous, without there being any problem of autocorrelation.

54


Slide 56

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample: 1959 2003
Included observations: 45
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  2.236158     0.388193     5.760428   0.0000
LGDPI              0.500184     0.008793    56.88557    0.0000
LGPRFOOD          -0.074681     0.072864    -1.024941   0.3113
============================================================
R-squared            0.992009   Mean dependent var      6.021331
Adjusted R-squared   0.991628   S.D. dependent var      0.222787
S.E. of regression   0.020384   Akaike info criterion  -4.883747
Sum squared resid    0.017452   Schwarz criterion      -4.763303
Log likelihood       112.8843   Hannan-Quinn criter.   -4.838846
F-statistic          2606.860   Durbin-Watson stat      0.478540
Prob(F-statistic)    0.000000
============================================================

The output shown in the table gives the result of a logarithmic regression of expenditure on
food on disposable personal income and the relative price of food.
55


Slide 57

TESTS FOR AUTOCORRELATION
[Figure: residuals, static logarithmic regression for FOOD; annual values, 1959–2003, on a vertical scale from −0.04 to 0.06.]
The plot of the residuals is shown. All the tests indicate highly significant autocorrelation.

56


Slide 58

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
ELGFOOD(-1)        0.790169     0.106603     7.412228   0.0000
============================================================
R-squared            0.560960   Mean dependent var      3.28E-05
Adjusted R-squared   0.560960   S.D. dependent var      0.020145
S.E. of regression   0.013348   Akaike info criterion  -5.772439
Sum squared resid    0.007661   Schwarz criterion      -5.731889
Log likelihood       127.9936   Durbin-Watson stat      1.477337
============================================================

$\hat{e}_t = 0.79 e_{t-1}$

ELGFOOD in the regression above is the residual from the LGFOOD regression. A simple
regression of ELGFOOD on ELGFOOD(–1) yields a coefficient of 0.79 with standard error
0.11.
57


Slide 59

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
ELGFOOD(-1)        0.790169     0.106603     7.412228   0.0000
============================================================
R-squared            0.560960   Mean dependent var      3.28E-05
Adjusted R-squared   0.560960   S.D. dependent var      0.020145
S.E. of regression   0.013348   Akaike info criterion  -5.772439
Sum squared resid    0.007661   Schwarz criterion      -5.731889
Log likelihood       127.9936   Durbin-Watson stat      1.477337
============================================================

$\hat{e}_t = 0.79 e_{t-1}$

Technical note for EViews users: EViews places the residuals from the most recent
regression in a pseudo-variable called resid. resid cannot be used directly. So the
residuals were saved as ELGFOOD using the genr command:
genr ELGFOOD = resid

58


Slide 60

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.175732     0.265081     0.662936   0.5112
LGDPI             -7.36E-05     0.006180    -0.011917   0.9906
LGPRFOOD          -0.037373     0.049496    -0.755058   0.4546
ELGFOOD(-1)        0.805744     0.110202     7.311504   0.0000
============================================================
R-squared            0.572006   Mean dependent var      3.28E-05
Adjusted R-squared   0.539907   S.D. dependent var      0.020145
S.E. of regression   0.013664   Akaike info criterion  -5.661558
Sum squared resid    0.007468   Schwarz criterion      -5.499359
Log likelihood       128.5543   F-statistic             17.81977
Durbin-Watson stat   1.513911   Prob(F-statistic)       0.000000
============================================================

$\hat{e}_t = \dots + 0.81 e_{t-1}$
$R^2 = 0.5720$
$nR^2 = 44 \times 0.5720 = 25.17$
$\chi^2(1)_{0.1\%} = 10.83$

Adding an intercept, LGDPI and LGPRFOOD to the specification, the coefficient of the
lagged residuals becomes 0.81 with standard error 0.11. R2 is 0.5720, so nR2 is 25.17.
59


Slide 61

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.175732     0.265081     0.662936   0.5112
LGDPI             -7.36E-05     0.006180    -0.011917   0.9906
LGPRFOOD          -0.037373     0.049496    -0.755058   0.4546
ELGFOOD(-1)        0.805744     0.110202     7.311504   0.0000
============================================================
R-squared            0.572006   Mean dependent var      3.28E-05
Adjusted R-squared   0.539907   S.D. dependent var      0.020145
S.E. of regression   0.013664   Akaike info criterion  -5.661558
Sum squared resid    0.007468   Schwarz criterion      -5.499359
Log likelihood       128.5543   F-statistic             17.81977
Durbin-Watson stat   1.513911   Prob(F-statistic)       0.000000
============================================================

$\hat{e}_t = \dots + 0.81 e_{t-1}$
$R^2 = 0.5720$
$nR^2 = 44 \times 0.5720 = 25.17$
$\chi^2(1)_{0.1\%} = 10.83$

(Note that here n = 44. There are 45 observations in the regression in Table 12.1, and one
fewer in the residuals regression.) The critical value of chi-squared with one degree of
freedom at the 0.1 percent level is 10.83.
60


Slide 62

TESTS FOR AUTOCORRELATION
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic     54.78773   Probability   0.000000
Obs*R-squared   25.73866   Probability   0.000000
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.171665     0.258094     0.665124   0.5097
LGDPI              9.50E-05     0.005822     0.016324   0.9871
LGPRFOOD          -0.036806     0.048504    -0.758819   0.4523
RESID(-1)          0.805773     0.108861     7.401873   0.0000
============================================================
R-squared            0.571970   Mean dependent var     -1.85E-18
Adjusted R-squared   0.540651   S.D. dependent var      0.019916
S.E. of regression   0.013498   Akaike info criterion  -5.687865
Sum squared resid    0.007470   Schwarz criterion      -5.527273
Log likelihood       131.9770   F-statistic             18.26258
Durbin-Watson stat   1.514975   Prob(F-statistic)       0.000000
============================================================

Technical note for EViews users: one can perform the test simply by following the LGFOOD
regression with the command auto(1). EViews allows itself to use resid directly.
61


Slide 63

TESTS FOR AUTOCORRELATION
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic     54.78773   Probability   0.000000
Obs*R-squared   25.73866   Probability   0.000000
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.171665     0.258094     0.665124   0.5097
LGDPI              9.50E-05     0.005822     0.016324   0.9871
LGPRFOOD          -0.036806     0.048504    -0.758819   0.4523
RESID(-1)          0.805773     0.108861     7.401873   0.0000
============================================================
R-squared            0.571970   Mean dependent var     -1.85E-18
Adjusted R-squared   0.540651   S.D. dependent var      0.019916
S.E. of regression   0.013498   Akaike info criterion  -5.687865
Sum squared resid    0.007470   Schwarz criterion      -5.527273
Log likelihood       131.9770   F-statistic             18.26258
Durbin-Watson stat   1.514975   Prob(F-statistic)       0.000000
============================================================

The argument in the auto command relates to the order of autocorrelation being tested. At
the moment we are concerned only with first-order autocorrelation. This is why the
command is auto(1).

62


Slide 64

TESTS FOR AUTOCORRELATION
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic     54.78773   Probability   0.000000
Obs*R-squared   25.73866   Probability   0.000000
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.171665     0.258094     0.665124   0.5097
LGDPI              9.50E-05     0.005822     0.016324   0.9871
LGPRFOOD          -0.036806     0.048504    -0.758819   0.4523
RESID(-1)          0.805773     0.108861     7.401873   0.0000
============================================================
R-squared            0.571970   Mean dependent var     -1.85E-18
Adjusted R-squared   0.540651   S.D. dependent var      0.019916
S.E. of regression   0.013498   Akaike info criterion  -5.687865
Sum squared resid    0.007470   Schwarz criterion      -5.527273
Log likelihood       131.9770   F-statistic             18.26258
Durbin-Watson stat   1.514975   Prob(F-statistic)       0.000000
============================================================

When we performed the test, resid(–1), and hence ELGFOOD(–1), were not defined for the
first observation in the sample, so we had 44 observations from 1960 to 2003.
63


Slide 65

TESTS FOR AUTOCORRELATION
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic     54.78773   Probability   0.000000
Obs*R-squared   25.73866   Probability   0.000000
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.171665     0.258094     0.665124   0.5097
LGDPI              9.50E-05     0.005822     0.016324   0.9871
LGPRFOOD          -0.036806     0.048504    -0.758819   0.4523
RESID(-1)          0.805773     0.108861     7.401873   0.0000
============================================================
R-squared            0.571970   Mean dependent var     -1.85E-18
Adjusted R-squared   0.540651   S.D. dependent var      0.019916
S.E. of regression   0.013498   Akaike info criterion  -5.687865
Sum squared resid    0.007470   Schwarz criterion      -5.527273
Log likelihood       131.9770   F-statistic             18.26258
Durbin-Watson stat   1.514975   Prob(F-statistic)       0.000000
============================================================

EViews retains the first observation by assigning a value of zero to resid(–1) for that observation. Hence the test results are very slightly different.
64


Slide 66

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1960 2003
Included observations: 44 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.175732     0.265081     0.662936   0.5112
LGDPI             -7.36E-05     0.006180    -0.011917   0.9906
LGPRFOOD          -0.037373     0.049496    -0.755058   0.4546
ELGFOOD(-1)        0.805744     0.110202     7.311504   0.0000
============================================================
R-squared            0.572006   Mean dependent var      3.28E-05
Adjusted R-squared   0.539907   S.D. dependent var      0.020145
S.E. of regression   0.013664   Akaike info criterion  -5.661558
Sum squared resid    0.007468   Schwarz criterion      -5.499359
Log likelihood       128.5543   F-statistic             17.81977
Durbin-Watson stat   1.513911   Prob(F-statistic)       0.000000
============================================================

We can also perform the test with a t test on the coefficient of the lagged variable.

65


Slide 67

TESTS FOR AUTOCORRELATION
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic     54.78773   Probability   0.000000
Obs*R-squared   25.73866   Probability   0.000000
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.171665     0.258094     0.665124   0.5097
LGDPI              9.50E-05     0.005822     0.016324   0.9871
LGPRFOOD          -0.036806     0.048504    -0.758819   0.4523
RESID(-1)          0.805773     0.108861     7.401873   0.0000
============================================================
R-squared            0.571970   Mean dependent var     -1.85E-18
Adjusted R-squared   0.540651   S.D. dependent var      0.019916
S.E. of regression   0.013498   Akaike info criterion  -5.687865
Sum squared resid    0.007470   Schwarz criterion      -5.527273
Log likelihood       131.9770   F-statistic             18.26258
Durbin-Watson stat   1.514975   Prob(F-statistic)       0.000000
============================================================

Here is the corresponding output using the auto command built into EViews. The test is
presented as an F statistic. Of course, when there is only one lagged residual, the F
statistic is the square of the t statistic.

66


Slide 68

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample: 1959 2003
Included observations: 45
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  2.236158     0.388193     5.760428   0.0000
LGDPI              0.500184     0.008793    56.88557    0.0000
LGPRFOOD          -0.074681     0.072864    -1.024941   0.3113
============================================================
R-squared            0.992009   Mean dependent var      6.021331
Adjusted R-squared   0.991628   S.D. dependent var      0.222787
S.E. of regression   0.020384   Akaike info criterion  -4.883747
Sum squared resid    0.017452   Schwarz criterion      -4.763303
Log likelihood       112.8843   Hannan-Quinn criter.   -4.838846
F-statistic          2606.860   Durbin-Watson stat      0.478540
Prob(F-statistic)    0.000000
============================================================

dL = 1.24 (1% level, 2 explanatory variables, 45 observations)
The Durbin–Watson statistic is 0.48. dL is 1.24 for a 1 percent significance test (2
explanatory variables, 45 observations).
67


Slide 69

TESTS FOR AUTOCORRELATION

ut   1ut 1   2 ut  2   t

The Breusch–Godfrey test for higher-order autocorrelation is a straightforward extension of
the first-order test. If we are testing for order q, we add q lagged residuals to the right side
of the residuals regression. We will perform the test for second-order autocorrelation.
68


Slide 70

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1961 2003
Included observations: 43 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.071220     0.277253     0.256879   0.7987
LGDPI              0.000251     0.006491     0.038704   0.9693
LGPRFOOD          -0.015572     0.051617    -0.301695   0.7645
ELGFOOD(-1)        1.009693     0.163240     6.185318   0.0000
ELGFOOD(-2)       -0.289159     0.171960    -1.681548   0.1009
============================================================
R-squared            0.602010   Mean dependent var      0.000149
Adjusted R-squared   0.560117   S.D. dependent var      0.020368
S.E. of regression   0.013509   Akaike info criterion  -5.661981
Sum squared resid    0.006935   Schwarz criterion      -5.457191
Log likelihood       126.7326   F-statistic             14.36996
Durbin-Watson stat   1.892212   Prob(F-statistic)       0.000000
============================================================

$nR^2 = 43 \times 0.6020 = 25.89$
$\chi^2(2)_{0.1\%} = 13.82$

Here is the regression for ELGFOOD with two lagged residuals. The Breusch–Godfrey test statistic is 25.89. With two lagged residuals, the test statistic has a chi-squared distribution with two degrees of freedom under the null hypothesis. It is significant at the 0.1 percent level.

69


Slide 71

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1961 2003
Included observations: 43 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.071220     0.277253     0.256879   0.7987
LGDPI              0.000251     0.006491     0.038704   0.9693
LGPRFOOD          -0.015572     0.051617    -0.301695   0.7645
ELGFOOD(-1)        1.009693     0.163240     6.185318   0.0000
ELGFOOD(-2)       -0.289159     0.171960    -1.681548   0.1009
============================================================
R-squared            0.602010   Mean dependent var      0.000149
Adjusted R-squared   0.560117   S.D. dependent var      0.020368
S.E. of regression   0.013509   Akaike info criterion  -5.661981
Sum squared resid    0.006935   Schwarz criterion      -5.457191
Log likelihood       126.7326   F-statistic             14.36996
Durbin-Watson stat   1.892212   Prob(F-statistic)       0.000000
============================================================

We will also perform an F test, comparing the RSS with the RSS for the same regression
without the lagged residuals. We know the result, because one of the t statistics is very
high.
70


Slide 72

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample: 1961 2003
Included observations: 43
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.027475     0.412043     0.066680   0.9472
LGDPI             -0.001074     0.009986    -0.107528   0.9149
LGPRFOOD          -0.003948     0.076191    -0.051816   0.9589
============================================================
R-squared            0.000298   Mean dependent var      0.000149
Adjusted R-squared  -0.049687   S.D. dependent var      0.020368
S.E. of regression   0.020868   Akaike info criterion  -4.833974
Sum squared resid    0.017419   Schwarz criterion      -4.711100
Log likelihood       106.9304   F-statistic             0.005965
Durbin-Watson stat   0.476550   Prob(F-statistic)       0.994053
============================================================

Here is the regression for ELGFOOD without the lagged residuals. Note that the sample
period has been adjusted to 1961 to 2003, to make RSS comparable with that for the
previous regression.
71


Slide 73

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample: 1961 2003
Included observations: 43
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.027475     0.412043     0.066680   0.9472
LGDPI             -0.001074     0.009986    -0.107528   0.9149
LGPRFOOD          -0.003948     0.076191    -0.051816   0.9589
============================================================
R-squared            0.000298   Mean dependent var      0.000149
Adjusted R-squared  -0.049687   S.D. dependent var      0.020368
S.E. of regression   0.020868   Akaike info criterion  -4.833974
Sum squared resid    0.017419   Schwarz criterion      -4.711100
Log likelihood       106.9304   F-statistic             0.005965
Durbin-Watson stat   0.476550   Prob(F-statistic)       0.994053
============================================================

$F(2,38) = \dfrac{(0.017419 - 0.006935)/2}{0.006935/38} = 28.72$

$F_{crit,\,0.1\%}(2,35) = 8.47$

The F statistic is 28.72. This is significant at the 0.1 percent level: the critical value for F(2,35) is 8.47, and that for F(2,38) must be slightly lower.
72
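
The same computation in a minimal Python sketch (illustrative, not part of the slides; scipy is an assumed dependency), using the two RSS values from the tables above.

from scipy.stats import f

rss_restricted = 0.017419     # regression without the lagged residuals
rss_unrestricted = 0.006935   # regression with e(-1) and e(-2)
q, df = 2, 38                 # restrictions tested; residual df of the unrestricted fit

F = ((rss_restricted - rss_unrestricted) / q) / (rss_unrestricted / df)
print(F)               # 28.72
print(f.sf(F, q, df))  # p-value, far below 0.001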


Slide 74

TESTS FOR AUTOCORRELATION
============================================================
Breusch-Godfrey Serial Correlation LM Test:
============================================================
F-statistic     30.24142   Probability   0.000000
Obs*R-squared   27.08649   Probability   0.000001
============================================================
Test Equation:
Dependent Variable: RESID
Method: Least Squares
Presample missing value lagged residuals set to zero.
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.053628     0.261016     0.205460   0.8383
LGDPI              0.000920     0.005705     0.161312   0.8727
LGPRFOOD          -0.013011     0.049304    -0.263900   0.7932
RESID(-1)          1.011261     0.159144     6.354360   0.0000
RESID(-2)         -0.290831     0.167642    -1.734833   0.0905
============================================================
R-squared            0.601922   Mean dependent var     -1.85E-18
Adjusted R-squared   0.562114   S.D. dependent var      0.019916
S.E. of regression   0.013179   Akaike info criterion  -5.715965
Sum squared resid    0.006947   Schwarz criterion      -5.515225
Log likelihood       133.6092   F-statistic             15.12071
Durbin-Watson stat   1.894290   Prob(F-statistic)       0.000000
============================================================

Here is the output using the auto(2) command in EViews. The conclusions for the two
tests are the same.

73


Slide 75

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample (adjusted): 1960 2003
Included observations: 44 after adjustments
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.985780     0.336094     2.933054   0.0055
LGDPI              0.126657     0.056496     2.241872   0.0306
LGPRFOOD          -0.088073     0.051897    -1.697061   0.0975
LGFOOD(-1)         0.732923     0.110178     6.652153   0.0000
============================================================
R-squared            0.995879   Mean dependent var      6.030691
Adjusted R-squared   0.995570   S.D. dependent var      0.216227
S.E. of regression   0.014392   Akaike info criterion  -5.557847
Sum squared resid    0.008285   Schwarz criterion      -5.395648
Log likelihood       126.2726   Hannan-Quinn criter.   -5.497696
F-statistic          3222.264   Durbin-Watson stat      1.112437
Prob(F-statistic)    0.000000
============================================================

The table above gives the result of a parallel logarithmic regression with the addition of
lagged expenditure on food as an explanatory variable. Again, there is strong evidence that
the specification is subject to autocorrelation.
74


Slide 76

TESTS FOR AUTOCORRELATION
[Figure: residuals, ADL(1,0) logarithmic regression for FOOD; annual values, 1959–2003, on a vertical scale from −0.03 to 0.04.]
Here is a plot of the residuals.

75


Slide 77

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1961 2003
Included observations: 43 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
ELGFOOD(-1)        0.431010     0.143277     3.008226   0.0044
============================================================
R-squared            0.176937   Mean dependent var      0.000276
Adjusted R-squared   0.176937   S.D. dependent var      0.013922
S.E. of regression   0.012630   Akaike info criterion  -5.882426
Sum squared resid    0.006700   Schwarz criterion      -5.841468
Log likelihood       127.4722   Durbin-Watson stat      1.801390
============================================================

$\hat{e}_t = 0.43 e_{t-1}$

A simple regression of the residuals on the lagged residuals yields a coefficient of 0.43 with
standard error 0.14. We expect the estimate to be adversely affected by the presence of the
lagged dependent variable in the regression for LGFOOD.
76


Slide 78

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: ELGFOOD
Method: Least Squares
Sample(adjusted): 1961 2003
Included observations: 43 after adjusting endpoints
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.417342     0.317973     1.312507   0.1972
LGDPI              0.108353     0.059784     1.812418   0.0778
LGPRFOOD          -0.005585     0.046434    -0.120279   0.9049
LGFOOD(-1)        -0.214252     0.116145    -1.844700   0.0729
ELGFOOD(-1)        0.604346     0.172040     3.512826   0.0012
============================================================
R-squared            0.246863   Mean dependent var      0.000276
Adjusted R-squared   0.167586   S.D. dependent var      0.013922
S.E. of regression   0.012702   Akaike info criterion  -5.785165
Sum squared resid    0.006131   Schwarz criterion      -5.580375
Log likelihood       129.3811   F-statistic             3.113911
Durbin-Watson stat   1.867467   Prob(F-statistic)       0.026046
============================================================

$nR^2 = 43 \times 0.2469 = 10.62$
$\chi^2(1)_{0.1\%} = 10.83$

With an intercept, LGDPI, LGPRFOOD, and LGFOOD(–1) added to the specification, the
coefficient of the lagged residuals becomes 0.60 with standard error 0.17. R2 is 0.2469, so
nR2 is 10.62, not quite significant at the 0.1 percent level. (Note that here n = 43.)
77


Slide 79

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample (adjusted): 1960 2003
Included observations: 44 after adjustments
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.985780     0.336094     2.933054   0.0055
LGDPI              0.126657     0.056496     2.241872   0.0306
LGPRFOOD          -0.088073     0.051897    -1.697061   0.0975
LGFOOD(-1)         0.732923     0.110178     6.652153   0.0000
============================================================
R-squared            0.995879   Mean dependent var      6.030691
Adjusted R-squared   0.995570   S.D. dependent var      0.216227
S.E. of regression   0.014392   Akaike info criterion  -5.557847
Sum squared resid    0.008285   Schwarz criterion      -5.395648
Log likelihood       126.2726   Hannan-Quinn criter.   -5.497696
F-statistic          3222.264   Durbin-Watson stat      1.112437
Prob(F-statistic)    0.000000
============================================================

$h = \hat{\rho}\sqrt{\dfrac{n}{1 - n\,s_{b_{Y(-1)}}^2}} = 0.445\sqrt{\dfrac{44}{1 - 44 \times 0.1102^2}} = 4.33$

The Durbin–Watson statistic is 1.11. From this one obtains an estimate of $\rho$ as $1 - 0.5d = 0.445$. The standard error of the coefficient of the lagged dependent variable is 0.1102. Hence the h statistic is as shown.
78


Slide 80

TESTS FOR AUTOCORRELATION
============================================================
Dependent Variable: LGFOOD
Method: Least Squares
Sample (adjusted): 1960 2003
Included observations: 44 after adjustments
============================================================
Variable        Coefficient   Std. Error   t-Statistic   Prob.
============================================================
C                  0.985780     0.336094     2.933054   0.0055
LGDPI              0.126657     0.056496     2.241872   0.0306
LGPRFOOD          -0.088073     0.051897    -1.697061   0.0975
LGFOOD(-1)         0.732923     0.110178     6.652153   0.0000
============================================================
R-squared            0.995879   Mean dependent var      6.030691
Adjusted R-squared   0.995570   S.D. dependent var      0.216227
S.E. of regression   0.014392   Akaike info criterion  -5.557847
Sum squared resid    0.008285   Schwarz criterion      -5.395648
Log likelihood       126.2726   Hannan-Quinn criter.   -5.497696
F-statistic          3222.264   Durbin-Watson stat      1.112437
Prob(F-statistic)    0.000000
============================================================

$h = \hat{\rho}\sqrt{\dfrac{n}{1 - n\,s_{b_{Y(-1)}}^2}} = 0.445\sqrt{\dfrac{44}{1 - 44 \times 0.1102^2}} = 4.33$

Under the null hypothesis of no autocorrelation, the h statistic asymptotically has a
standardized normal distribution, so this value is above the critical value at the 0.1 percent
level, 3.29.
79


Slide 81

Copyright Christopher Dougherty 2011.
These slideshows may be downloaded by anyone, anywhere for personal use.
Subject to respect for copyright and, where appropriate, attribution, they may be
used as a resource for teaching an econometrics course. There is no need to
refer to the author.
The content of this slideshow comes from Section 12.2 of C. Dougherty,
Introduction to Econometrics, fourth edition 2011, Oxford University Press.
Additional (free) resources for both students and instructors may be
downloaded from the OUP Online Resource Centre
http://www.oup.com/uk/orc/bin/9780199567089/.
Individuals studying econometrics on their own and who feel that they might
benefit from participation in a formal course should consider the London School
of Economics summer school course
EC212 Introduction to Econometrics
http://www2.lse.ac.uk/study/summerSchools/summerSchool/Home.aspx
or the University of London International Programmes distance learning course
20 Elements of Econometrics
www.londoninternational.ac.uk/lse.

2012.02.23