Transcript Document

Dependent Variable: INCOME
Included observations: 30

Variable    Coefficient    Std. Error    t-Statistic    Prob.
C           -1001.869      520.7067      -1.924056      0.0654
AGE         8.846979       5.453205      1.622345       0.1168
EDU         95.16914       38.53614      2.469607       0.0204
INHER       1.514066       0.464237      3.261411       0.0031

R-squared             0.540692
S.E. of regression    353.2761
Sum squared resid     3244903.
F-statistic           10.20230
Prob(F-statistic)     0.000128
Dependent Variable: INCOME
Sample: 1 30
Included observations: 30

Variable    Coefficient    Std. Error    t-Statistic    Prob.
C           884.5000       90.11339      9.815412       0.0000

R-squared             0.000000
S.E. of regression    493.5713
Sum squared resid     7064768.
F = [(RSSR - RSSU)/m] / [RSSU/(n - k)], where
RSSR = RSS of the regression performed under the restrictions imposed
RSSU = RSS of the regression performed without any restriction
m = number of restrictions
n = total number of observations
k = number of parameters to be estimated
The numerator of the F-statistic has m
degrees of freedom.
The denominator of the F-statistic has
(n-k) degrees of freedom.
We reject H0 if F exceeds the critical
value corresponding to the level of
significance. Otherwise we retain H0.
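As a worked check against the output above, the restricted model is the intercept-only regression of INCOME (RSSR = 7064768), the unrestricted model adds AGE, EDU and INHER (RSSU = 3244903), so m = 3, n = 30 and k = 4. A minimal Python sketch of the calculation, assuming scipy is available for the critical value:

from scipy import stats

RSS_R = 7064768.0   # restricted model: intercept only
RSS_U = 3244903.0   # unrestricted model: C, AGE, EDU, INHER
m, n, k = 3, 30, 4  # restrictions, observations, parameters

F = ((RSS_R - RSS_U) / m) / (RSS_U / (n - k))
F_crit = stats.f.ppf(0.95, m, n - k)   # 5% critical value of F(3, 26)
print(F, F_crit)                       # F is about 10.20, matching the reported F-statistic

Since 10.20 comfortably exceeds the 5% critical value (roughly 2.98 with 3 and 26 degrees of freedom), H0 is rejected, consistent with Prob(F-statistic) = 0.000128.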
Series: INCOME
Sample: 1 30
Observations: 30

Mean         884.5000
Median       705.0000
Maximum      2000.000
Minimum      250.0000
Std. Dev.    493.5713
Skewness     0.948059
Kurtosis     2.811966
The Chow Test
Dependent Variable: QTYBEFORE

Variable       Coefficient    Std. Error    t-Statistic    Prob.
C              2.093668       2.375737      0.881271       0.3818
PRICEBEFORE    2.997041       0.152499      19.65284       0.0000

R-squared             0.869438
S.E. of regression    3.644955
Sum squared resid     770.5704
Dependent Variable: QTYAFTER

Variable      Coefficient    Std. Error    t-Statistic    Prob.
C             19.73596       2.172561      9.084191       0.0000
PRICEAFTER    3.036940       0.291535      10.41708       0.0000

R-squared             0.651684
S.E. of regression    3.619314
Sum squared resid     759.7673
Dependent Variable: QTYALL

Variable    Coefficient    Std. Error    t-Statistic    Prob.
C           29.38339       1.417390      20.73063       0.0000
PRICEALL    1.371748       0.116072      11.81805       0.0000

R-squared             0.542043
S.E. of regression    5.965826
Sum squared resid     4199.747
F = [(RSSR - RSSU)/k] / [RSSU/(n1 + n2 - 2k)], where
RSSR = RSS from the pooled data regression
RSS1 = RSS from the first regression
RSS2 = RSS from the second regression
RSSU = RSS1 + RSS2
n1 = total number of observations in the first dataset
n2 = total number of observations in the second dataset
k = number of parameters to be estimated
Under the null hypothesis, the F-statistic constructed above has an F-distribution with the numerator having k degrees of freedom and the denominator having (n1 + n2 - 2k) degrees of freedom.
If the null hypothesis is false, F will tend to be a "large" value and we conclude that a structural break has taken place. Otherwise, we do not reject H0.
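A sketch of this calculation using the residual sums of squares reported above (RSS1 = 770.5704 from the QTYBEFORE regression, RSS2 = 759.7673 from the QTYAFTER regression, pooled RSSR = 4199.747, k = 2). The sub-sample sizes n1 and n2 are not reported in the transcript, so the values below are placeholders:

from scipy import stats

RSS_R = 4199.747              # pooled regression (QTYALL on PRICEALL)
RSS_U = 770.5704 + 759.7673   # RSS1 + RSS2 from the two sub-sample regressions
k = 2                         # intercept and slope
n1, n2 = 30, 30               # placeholders: the sub-sample sizes are not given above

F = ((RSS_R - RSS_U) / k) / (RSS_U / (n1 + n2 - 2 * k))
F_crit = stats.f.ppf(0.95, k, n1 + n2 - 2 * k)   # 5% critical value
print(F, F_crit)              # reject H0 (no structural break) if F > F_crit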
[Scatter plot: PRICEALL (vertical axis, 0 to 20) against QTYALL (horizontal axis, 20 to 70)]
• Qualitative (indicator or dummy) Variables
• The number of dummy variables needed for a qualitative variable is the number of categories less one (see the sketch after this list).
• For dichotomous variables, such as gender, only one dummy variable is needed. There are two categories (female, male); c = 2; c - 1 = 1.
• Your office is located in which region of the country?
___ Northeast ___ Midwest ___ South ___ West
Number of dummy variables = c - 1 = 4 - 1 = 3
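A minimal illustration of the c - 1 rule, assuming pandas is available; the region labels follow the questionnaire above:

import pandas as pd

# Four regions -> c = 4, so c - 1 = 3 dummy variables are needed
region = pd.Series(["Northeast", "Midwest", "South", "West", "South"])
dummies = pd.get_dummies(region, drop_first=True)   # drops one baseline category
print(list(dummies.columns))                        # three dummy columns remain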
Dependent Variable: QTYALL

Variable    Coefficient    Std. Error    t-Statistic    Prob.
C           19.96464       1.083132      18.43233       0.0000
PRICEALL    3.005518       0.134288      22.38111       0.0000
DUMMY       -18.00042      1.260140      -14.28446      0.0000

R-squared             0.833105
S.E. of regression    3.616830
Sum squared resid     1530.531
F-statistic           292.0197
Prob(F-statistic)     0.000000
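Reading these estimates, the dummy shifts only the intercept, so the fitted relationship splits into two parallel lines (assuming DUMMY equals one for one of the two sub-samples and zero for the other):

QTYALL = 19.96 + 3.01 * PRICEALL                              when DUMMY = 0
QTYALL = (19.96 - 18.00) + 3.01 * PRICEALL = 1.96 + 3.01 * PRICEALL   when DUMMY = 1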
The Classical Linear Regression Model
Some Procedural Problems
1. Heteroskedasticity: the different random terms do not have the same variance σ² (as was assumed).
2. Autocorrelation amongst the random terms means that Cov(ei, ej) ≠ 0 for some i ≠ j.
Presence of either of these factors means that E(ee') ≠ σ²In.
3. Multicollinearity: some explanatory variables are not linearly independent, and so the matrix (X'X)⁻¹ does not exist (see the sketch after this list).
4. Explanatory variables correlated with the random term: the OLS estimators are no longer unbiased.
5. Some relevant explanatory variables are omitted: the OLS estimators are no longer unbiased.
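A small numpy sketch of point 3: when one regressor is an exact linear combination of another, X'X is rank deficient and cannot be inverted (the data here are made up purely for illustration):

import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = 2.0 * x1                       # x2 is perfectly collinear with x1
X = np.column_stack([np.ones(5), x1, x2])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))   # prints 2, not 3: X'X is singular, so (X'X)^-1 does not exist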