Testing and Estimating Variances


Linear Regression
Hypothesis testing and Estimation
Assume that we have collected data on two variables X and Y. Let

(x1, y1), (x2, y2), (x3, y3), …, (xn, yn)

denote the pairs of measurements on the two variables X and Y for n cases in a sample (or population).
The Statistical Model
Each y_i is assumed to be randomly generated from a normal distribution with

mean \mu_i = \alpha + \beta x_i and standard deviation \sigma

(\alpha, \beta and \sigma are unknown).

[Diagram: the line Y = \alpha + \beta X with intercept \alpha and slope \beta; for a given x_i, y_i is drawn from a normal distribution with mean \alpha + \beta x_i and standard deviation \sigma.]

The Data
The Linear Regression Model
• The data fall roughly about a straight line.

[Scatter plot: the data points scattered about the straight line Y = \alpha + \beta X.]
The Least Squares Line
Fitting the best straight line to “linear” data

Let
Y = a + bX
denote an arbitrary equation of a straight line, where a and b are known values. This equation can be used to predict, for each value of X, the value of Y. For example, if X = x_i (as for the i-th case) then the predicted value of Y is:

\hat{y}_i = a + b x_i

The residual

r_i = y_i - \hat{y}_i = y_i - (a + b x_i)

can be computed for each case in the sample:

r_1 = y_1 - \hat{y}_1,\; r_2 = y_2 - \hat{y}_2,\; \ldots,\; r_n = y_n - \hat{y}_n

The residual sum of squares (RSS) is

\mathrm{RSS} = \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{n}\left(y_i - a - b x_i\right)^2

a measure of the “goodness of fit” of the line Y = a + bX to the data.
The optimal choice of a and b will result in the residual sum of squares

\mathrm{RSS} = \sum_{i=1}^{n} r_i^2 = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{n}\left(y_i - a - b x_i\right)^2

attaining a minimum. If this is the case then the line
Y = a + bX
is called the Least Squares Line.
The equation for the least squares line

Let

S_{xx} = \sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2

S_{yy} = \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2

S_{xy} = \sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)
Computing Formulae:

S_{xx} = \sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 = \sum_{i=1}^{n} x_i^2 - \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n}

S_{yy} = \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 = \sum_{i=1}^{n} y_i^2 - \frac{\left(\sum_{i=1}^{n} y_i\right)^2}{n}

S_{xy} = \sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right) = \sum_{i=1}^{n} x_i y_i - \frac{\left(\sum_{i=1}^{n} x_i\right)\left(\sum_{i=1}^{n} y_i\right)}{n}
Then the slope of the least squares line can be shown to be:

\hat{\beta} = b = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}

and the intercept of the least squares line can be shown to be:

\hat{\alpha} = a = \bar{y} - b\bar{x} = \bar{y} - \frac{S_{xy}}{S_{xx}}\,\bar{x}
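As an illustration (not part of the original slides), a minimal NumPy sketch of these computations might look as follows; the function name and layout are assumptions made here for illustration.

import numpy as np

def least_squares_line(x, y):
    """Slope, intercept and summary quantities for the least squares line."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = Sxy / Sxx                        # slope
    a = y.mean() - b * x.mean()          # intercept
    s = np.sqrt((Syy - Sxy ** 2 / Sxx) / (n - 2))  # estimate of sigma (n - 2 df, see below)
    return a, b, s, Sxx, Syy, Sxy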
The residual sum of squares

\mathrm{RSS} = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{n}\left(y_i - (a + b x_i)\right)^2 = S_{yy} - \frac{S_{xy}^2}{S_{xx}} \quad \text{(computing formula)}

Estimating \sigma, the standard deviation in the regression model:

s = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{n-2}} = \sqrt{\frac{\sum_{i=1}^{n}\left(y_i - a - b x_i\right)^2}{n-2}} = \sqrt{\frac{1}{n-2}\left(S_{yy} - \frac{S_{xy}^2}{S_{xx}}\right)} \quad \text{(computing formula)}

This estimate of \sigma is said to be based on n – 2 degrees of freedom.
Sampling distributions of the estimators

The sampling distribution of the slope of the least squares line:

b = \frac{S_{xy}}{S_{xx}} = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}

It can be shown that b has a normal distribution with mean and standard deviation

\mu_b = \beta \quad \text{and} \quad \sigma_b = \frac{\sigma}{\sqrt{S_{xx}}} = \frac{\sigma}{\sqrt{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}}

Thus

z = \frac{b - \mu_b}{\sigma_b} = \frac{b - \beta}{\sigma/\sqrt{S_{xx}}}

has a standard normal distribution, and

t = \frac{b - \mu_b}{s_b} = \frac{b - \beta}{s/\sqrt{S_{xx}}}

has a t distribution with df = n – 2.
(1 – \alpha)100% Confidence Limits for the slope \beta:

\hat{\beta} \pm t_{\alpha/2}\,\frac{s}{\sqrt{S_{xx}}}

t_{\alpha/2} is the critical value for the t-distribution with n – 2 degrees of freedom.
Testing the slope

H_0: \beta = \beta_0 \quad \text{vs} \quad H_A: \beta \ne \beta_0

The test statistic is:

t = \frac{b - \beta_0}{s/\sqrt{S_{xx}}}

which has a t distribution with df = n – 2 if H0 is true.

The Critical Region
Reject H_0: \beta = \beta_0 in favour of H_A: \beta \ne \beta_0 if

t < -t_{\alpha/2} \quad \text{or} \quad t > t_{\alpha/2}, \qquad \text{df} = n - 2

This is a two-tailed test. One-tailed tests are also possible.
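A hedged Python sketch of this interval and test, using scipy.stats for the critical value (the helper name and argument list are assumptions for illustration):

import numpy as np
from scipy import stats

def slope_inference(b, s, Sxx, n, beta0=0.0, alpha=0.05):
    """Confidence limits for the slope and the t-test of H0: beta = beta0."""
    se_b = s / np.sqrt(Sxx)                      # standard error of the slope
    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)   # t_{alpha/2}, df = n - 2
    ci = (b - t_crit * se_b, b + t_crit * se_b)
    t_stat = (b - beta0) / se_b
    p_value = 2 * stats.t.sf(abs(t_stat), n - 2) # two-tailed p-value
    return ci, t_stat, p_value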
The sampling distribution of the intercept of the least squares line:

a = \bar{y} - b\bar{x} = \bar{y} - \frac{S_{xy}}{S_{xx}}\,\bar{x}

It can be shown that a has a normal distribution with mean and standard deviation

\mu_a = \alpha \quad \text{and} \quad \sigma_a = \sigma\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}}

Thus

z = \frac{a - \mu_a}{\sigma_a} = \frac{a - \alpha}{\sigma\sqrt{\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}}}

has a standard normal distribution, and

t = \frac{a - \mu_a}{s_a} = \frac{a - \alpha}{s\sqrt{\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}}}

has a t distribution with df = n – 2.

(1 – \alpha)100% Confidence Limits for the intercept \alpha:

\hat{\alpha} \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}

t_{\alpha/2} is the critical value for the t-distribution with n – 2 degrees of freedom.
Testing the intercept

H_0: \alpha = \alpha_0 \quad \text{vs} \quad H_A: \alpha \ne \alpha_0

The test statistic is:

t = \frac{a - \alpha_0}{s\sqrt{\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}}}

which has a t distribution with df = n – 2 if H0 is true.

The Critical Region
Reject H_0: \alpha = \alpha_0 in favour of H_A: \alpha \ne \alpha_0 if

t = \frac{a - \alpha_0}{s_a} < -t_{\alpha/2} \quad \text{or} \quad t > t_{\alpha/2}, \qquad \text{df} = n - 2
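The corresponding sketch for the intercept (again, the helper name is an assumption; x is the vector of observed x-values):

import numpy as np
from scipy import stats

def intercept_inference(a, s, x, Sxx, alpha0=0.0, alpha=0.05):
    """Confidence limits for the intercept and the t-test of H0: alpha = alpha0."""
    n = len(x)
    se_a = s * np.sqrt(1.0 / n + np.mean(x) ** 2 / Sxx)  # standard error of the intercept
    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
    ci = (a - t_crit * se_a, a + t_crit * se_a)
    t_stat = (a - alpha0) / se_a
    p_value = 2 * stats.t.sf(abs(t_stat), n - 2)
    return ci, t_stat, p_value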
Example
The following data show the per capita consumption of cigarettes per month (X) in various countries in 1930, and the death rates from lung cancer for men in 1950.
TABLE: Per capita consumption of cigarettes per month (Xi) in n = 11 countries in 1930, and the death rates, Yi (per 100,000), from lung cancer for men in 1950.
Country (i)      Xi     Yi
Australia         48     18
Canada            50     15
Denmark           38     17
Finland          110     35
Great Britain    110     46
Holland           49     24
Iceland           23      6
Norway            25      9
Sweden            30     11
Switzerland       51     25
USA              130     20

[Scatter plot: death rates from lung cancer (1950) against per capita consumption of cigarettes (1930), one labelled point per country.]
Fitting the Least Squares Line

\sum_{i=1}^{n} x_i = 664, \quad \sum_{i=1}^{n} y_i = 226, \quad \sum_{i=1}^{n} x_i y_i = 16{,}914, \quad \sum_{i=1}^{n} x_i^2 = 54{,}404, \quad \sum_{i=1}^{n} y_i^2 = 6{,}018

First compute the following three quantities:

S_{xx} = 54404 - \frac{(664)^2}{11} = 14322.55

S_{yy} = 6018 - \frac{(226)^2}{11} = 1374.73

S_{xy} = 16914 - \frac{(664)(226)}{11} = 3271.82
Computing estimates of the slope (b), intercept (a) and standard deviation (s):

b = \frac{S_{xy}}{S_{xx}} = \frac{3271.82}{14322.55} = 0.228

a = \bar{y} - b\bar{x} = \frac{226}{11} - 0.228\left(\frac{664}{11}\right) = 6.756

s = \sqrt{\frac{1}{n-2}\left(S_{yy} - \frac{S_{xy}^2}{S_{xx}}\right)} = 8.35
95% Confidence Limits for the slope \beta:

\hat{\beta} \pm t_{\alpha/2}\,\frac{s}{\sqrt{S_{xx}}} = 0.228 \pm (2.262)\,\frac{8.35}{\sqrt{14322.55}}

i.e. 0.0706 to 0.3862

(t_{.025} = 2.262 is the critical value for the t-distribution with 9 degrees of freedom.)

95% Confidence Limits for the intercept \alpha:

\hat{\alpha} \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}} = 6.756 \pm (2.262)(8.35)\sqrt{\frac{1}{11} + \frac{(664/11)^2}{14322.55}}

i.e. -4.34 to 17.85

(t_{.025} = 2.262 is the critical value for the t-distribution with 9 degrees of freedom.)
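These numbers can be checked in a few lines of NumPy (a sketch using the X and Y values from the table above):

import numpy as np

x = np.array([48, 50, 38, 110, 110, 49, 23, 25, 30, 51, 130], dtype=float)
y = np.array([18, 15, 17, 35, 46, 24, 6, 9, 11, 25, 20], dtype=float)

Sxx = np.sum((x - x.mean()) ** 2)                    # about 14322.55
Syy = np.sum((y - y.mean()) ** 2)                    # about 1374.73
Sxy = np.sum((x - x.mean()) * (y - y.mean()))        # about 3271.82
b = Sxy / Sxx                                        # about 0.228
a = y.mean() - b * x.mean()                          # about 6.756
s = np.sqrt((Syy - Sxy ** 2 / Sxx) / (len(x) - 2))   # about 8.35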
[Scatter plot: death rates from lung cancer (1950) against per capita consumption of cigarettes, with the fitted least squares line Y = 6.756 + 0.228 X.]

95% confidence limits for the slope: 0.0706 to 0.3862
95% confidence limits for the intercept: -4.34 to 17.85
Testing for a positive slope

H_0: \beta = 0 \quad \text{vs} \quad H_A: \beta > 0

The test statistic is:

t = \frac{b - 0}{s/\sqrt{S_{xx}}}

The Critical Region
Reject H_0: \beta = 0 in favour of H_A: \beta > 0 if

t > t_{0.05} = 1.833, \qquad \text{df} = 11 - 2 = 9

(a one-tailed test). Since

t = \frac{0.228}{8.35/\sqrt{14322.55}} = 3.27 > 1.833

we reject H_0: \beta = 0 and conclude H_A: \beta > 0.
Confidence Limits for Points on the Regression Line
• The intercept \alpha is a specific point on the regression line.
• It is the y-coordinate of the point on the regression line when x = 0.
• It is the predicted value of y when x = 0.
• We may also be interested in other points on the regression line, e.g. when x = x0.
• In this case the y-coordinate of the point on the regression line when x = x0 is \alpha + \beta x_0.

[Diagram: the line y = \alpha + \beta x with the point (x_0, \alpha + \beta x_0) marked.]

(1 – \alpha)100% Confidence Limits for \alpha + \beta x_0:

a + b x_0 \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}

t_{\alpha/2} is the \alpha/2 critical value for the t-distribution with n – 2 degrees of freedom.
Prediction Limits for new values of the dependent variable y
• An important application of the regression line is prediction.
• Knowing the value of x (x0), what is the value of y?
• The predicted value of y when x = x0 is the point on the line, \alpha + \beta x_0.
• This in turn is estimated by: \hat{y} = \hat{\alpha} + \hat{\beta} x_0 = a + b x_0.

The predictor \hat{y} = \hat{\alpha} + \hat{\beta} x_0 = a + b x_0
• gives only a single value for y.
• A more appropriate piece of information would be a range of values,
• a range of values that has a fixed probability of capturing the value for y:
• a (1 – \alpha)100% prediction interval for y.

(1 – \alpha)100% Prediction Limits for y when x = x_0:

a + b x_0 \pm t_{\alpha/2}\, s\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}

t_{\alpha/2} is the \alpha/2 critical value for the t-distribution with n – 2 degrees of freedom.
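A sketch combining the two intervals above (confidence limits for \alpha + \beta x_0 and prediction limits for a new y); the helper name is an assumption made here:

import numpy as np
from scipy import stats

def limits_at_x0(x0, a, b, s, x, Sxx, alpha=0.05):
    """Confidence limits for a + b*x0 and prediction limits for a new y at x0."""
    n = len(x)
    t_crit = stats.t.ppf(1 - alpha / 2, n - 2)
    fit = a + b * x0
    se_mean = s * np.sqrt(1.0 / n + (x0 - np.mean(x)) ** 2 / Sxx)      # for the line
    se_pred = s * np.sqrt(1 + 1.0 / n + (x0 - np.mean(x)) ** 2 / Sxx)  # for a new observation
    conf = (fit - t_crit * se_mean, fit + t_crit * se_mean)
    pred = (fit - t_crit * se_pred, fit + t_crit * se_pred)
    return conf, pred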
Example
In this example we are studying building fires in a city and are interested in the relationship between:
1. X = the distance between the closest fire hall and the building where the fire occurred, and
2. Y = the cost of the damage (in $1000s).
The data were collected on n = 15 fires.
The Data

Fire   Distance   Damage
 1        3.4       26.2
 2        1.8       17.8
 3        4.6       31.3
 4        2.3       23.1
 5        3.1       27.5
 6        5.5       36.0
 7        0.7       14.1
 8        3.0       22.3
 9        2.6       19.6
10        4.3       31.3
11        2.1       24.0
12        1.1       17.3
13        6.1       43.2
14        4.8       36.4
15        3.8       26.1

[Scatter plot: Damage ($1000s) against Distance (miles) for the 15 fires.]
Computations

\sum_{i=1}^{n} x_i = 49.2, \quad \sum_{i=1}^{n} x_i^2 = 196.16, \quad \sum_{i=1}^{n} y_i = 396.2, \quad \sum_{i=1}^{n} y_i^2 = 11376.5, \quad \sum_{i=1}^{n} x_i y_i = 1470.65
Computations Continued

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{49.2}{15} = 3.28 \qquad \bar{y} = \frac{\sum_{i=1}^{n} y_i}{n} = \frac{396.2}{15} = 26.4133

S_{xx} = \sum_{i=1}^{n} x_i^2 - \frac{\left(\sum x_i\right)^2}{n} = 196.16 - \frac{(49.2)^2}{15} = 34.784

S_{yy} = \sum_{i=1}^{n} y_i^2 - \frac{\left(\sum y_i\right)^2}{n} = 11376.5 - \frac{(396.2)^2}{15} = 911.517

S_{xy} = \sum_{i=1}^{n} x_i y_i - \frac{\left(\sum x_i\right)\left(\sum y_i\right)}{n} = 1470.65 - \frac{(49.2)(396.2)}{15} = 171.114

\hat{\beta} = b = \frac{S_{xy}}{S_{xx}} = \frac{171.114}{34.784} = 4.92

\hat{\alpha} = a = \bar{y} - b\bar{x} = 26.4133 - (4.92)(3.28) = 10.28

s = \sqrt{\frac{S_{yy} - S_{xy}^2/S_{xx}}{n-2}} = \sqrt{\frac{911.517 - (171.114)^2/34.784}{13}} = 2.316
95% Confidence Limits for the slope \beta:

\hat{\beta} \pm t_{\alpha/2}\,\frac{s}{\sqrt{S_{xx}}}: \quad 4.07 \text{ to } 5.77

95% Confidence Limits for the intercept \alpha:

\hat{\alpha} \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}: \quad 7.21 \text{ to } 13.35

(t_{.025} = 2.160 is the critical value for the t-distribution with 13 degrees of freedom.)
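As a check, a short NumPy sketch reproduces these quantities from the fire data above:

import numpy as np

x = np.array([3.4, 1.8, 4.6, 2.3, 3.1, 5.5, 0.7, 3.0, 2.6, 4.3, 2.1, 1.1, 6.1, 4.8, 3.8])
y = np.array([26.2, 17.8, 31.3, 23.1, 27.5, 36.0, 14.1, 22.3, 19.6, 31.3,
              24.0, 17.3, 43.2, 36.4, 26.1])

Sxx = np.sum((x - x.mean()) ** 2)                    # about 34.784
Syy = np.sum((y - y.mean()) ** 2)                    # about 911.517
Sxy = np.sum((x - x.mean()) * (y - y.mean()))        # about 171.114
b = Sxy / Sxx                                        # about 4.92
a = y.mean() - b * x.mean()                          # about 10.28
s = np.sqrt((Syy - Sxy ** 2 / Sxx) / (len(x) - 2))   # about 2.32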
Least Squares Line

[Scatter plot: Damage ($1000s) against Distance (miles) with the fitted line y = 4.92x + 10.28.]
(1 – \alpha)100% Confidence Limits for \alpha + \beta x_0:

a + b x_0 \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}

t_{\alpha/2} is the \alpha/2 critical value for the t-distribution with n – 2 degrees of freedom.

95% Confidence Limits for \alpha + \beta x_0:

x0    lower    upper
 1    12.87    17.52
 2    18.43    21.80
 3    23.72    26.35
 4    28.53    31.38
 5    32.93    36.82
 6    37.15    42.44

[Plot: the fitted line with the 95% confidence limits for \alpha + \beta x_0, Damage ($1000s) against Distance (miles).]
(1 – \alpha)100% Prediction Limits for y when x = x_0:

a + b x_0 \pm t_{\alpha/2}\, s\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}

t_{\alpha/2} is the \alpha/2 critical value for the t-distribution with n – 2 degrees of freedom.

95% Prediction Limits for y when x = x0:

x0    lower    upper
 1     9.68    20.71
 2    14.84    25.40
 3    19.86    30.21
 4    24.75    35.16
 5    29.51    40.24
 6    34.13    45.45

[Plot: the fitted line with the 95% prediction limits for y, Damage ($1000s) against Distance (miles).]
Linear Regression
Summary
Hypothesis testing and Estimation

(1 – \alpha)100% Confidence Limits for the slope \beta:

\hat{\beta} \pm t_{\alpha/2}\,\frac{s}{\sqrt{S_{xx}}}

(t_{\alpha/2} is the critical value for the t-distribution with n – 2 degrees of freedom.)

Testing the slope: H_0: \beta = \beta_0 vs H_A: \beta \ne \beta_0. The test statistic

t = \frac{b - \beta_0}{s/\sqrt{S_{xx}}}

has a t distribution with df = n – 2 if H0 is true.

(1 – \alpha)100% Confidence Limits for the intercept \alpha:

\hat{\alpha} \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}}

Testing the intercept: H_0: \alpha = \alpha_0 vs H_A: \alpha \ne \alpha_0. The test statistic

t = \frac{a - \alpha_0}{s\sqrt{\dfrac{1}{n} + \dfrac{\bar{x}^2}{S_{xx}}}}

has a t distribution with df = n – 2 if H0 is true.

(1 – \alpha)100% Confidence Limits for \alpha + \beta x_0:

a + b x_0 \pm t_{\alpha/2}\, s\sqrt{\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}

(1 – \alpha)100% Prediction Limits for y when x = x_0:

a + b x_0 \pm t_{\alpha/2}\, s\sqrt{1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{S_{xx}}}
Correlation

Definition
The statistic

r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}} = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sqrt{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2 \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}}

is called Pearson's correlation coefficient.
Properties
1. -1 ≤ r ≤ 1, |r| ≤ 1, r2 ≤ 1
2. |r| = 1 (r = +1 or -1) if the points
(x1, y1), (x2, y2), …, (xn, yn) lie along a
straight line. (positive slope for +1,
negative slope for -1)
The test for independence (zero correlation)
H0: X and Y are independent
HA: X and Y are correlated
The test statistic:

t = \sqrt{n-2}\,\frac{r}{\sqrt{1 - r^2}}

The Critical Region: Reject H0 if |t| > t_{\alpha/2} (df = n – 2).
This is a two-tailed critical region; the critical region could also be one-tailed.
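A sketch of this test in Python; scipy.stats.pearsonr returns the same r together with a two-sided p-value:

import numpy as np
from scipy import stats

def correlation_test(x, y):
    """Pearson's r and the t-test for zero correlation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Syy = np.sum((y - y.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    r = Sxy / np.sqrt(Sxx * Syy)
    t = np.sqrt(n - 2) * r / np.sqrt(1 - r ** 2)
    p = 2 * stats.t.sf(abs(t), n - 2)
    return r, t, p

# equivalently: r, p = stats.pearsonr(x, y)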
Example
In this example we are studying building fires in a city and are interested in the relationship between:
1. X = the distance between the closest fire hall and the building where the fire occurred, and
2. Y = the cost of the damage (in $1000s).
The data were collected on n = 15 fires.
The Data: the same 15 fires (Distance, Damage) and scatter plot as in the earlier example.
Computations (as before):

\sum x_i = 49.2, \quad \sum x_i^2 = 196.16, \quad \sum y_i = 396.2, \quad \sum y_i^2 = 11376.5, \quad \sum x_i y_i = 1470.65

\bar{x} = 3.28, \quad \bar{y} = 26.4133, \quad S_{xx} = 34.784, \quad S_{yy} = 911.517, \quad S_{xy} = 171.114
The correlation coefficient

r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}} = \frac{171.114}{\sqrt{(34.784)(911.517)}} = 0.961

The test for independence (zero correlation)
The test statistic:

t = \sqrt{n-2}\,\frac{r}{\sqrt{1 - r^2}} = \sqrt{13}\,\frac{0.961}{\sqrt{1 - 0.961^2}} = 12.525

We reject H0 (independence) if |t| > t_{0.025} = 2.160 (df = 13).
H0 (independence) is rejected.
Relationship between Regression and Correlation

Recall

r = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}

Also

\hat{\beta} = \frac{S_{xy}}{S_{xx}} = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}\sqrt{\frac{S_{yy}}{S_{xx}}} = r\sqrt{\frac{S_{yy}}{S_{xx}}} = r\,\frac{s_y}{s_x}

since

s_x = \sqrt{\frac{S_{xx}}{n-1}} \quad \text{and} \quad s_y = \sqrt{\frac{S_{yy}}{n-1}}

Thus the slope of the least squares line is simply the ratio of the standard deviations times the correlation coefficient.
The test for independence (zero correlation)
H0: X and Y are independent
HA: X and Y are correlated
uses the test statistic

t = \sqrt{n-2}\,\frac{r}{\sqrt{1 - r^2}}

Note:

\hat{\beta} = \sqrt{\frac{S_{yy}}{S_{xx}}}\; r \quad \text{and} \quad r = \sqrt{\frac{S_{xx}}{S_{yy}}}\; \hat{\beta}

The two tests
1. The test for independence (zero correlation)
   H0: X and Y are independent
   HA: X and Y are correlated
2. The test for zero slope
   H0: \beta = 0, HA: \beta \ne 0
are equivalent.

The test statistic for independence:

t = \sqrt{n-2}\,\frac{r}{\sqrt{1 - r^2}}
  = \sqrt{n-2}\,\frac{S_{xy}/\sqrt{S_{xx} S_{yy}}}{\sqrt{1 - \dfrac{S_{xy}^2}{S_{xx} S_{yy}}}}
  = \sqrt{n-2}\,\frac{S_{xy}}{\sqrt{S_{xx}}\sqrt{S_{yy} - \dfrac{S_{xy}^2}{S_{xx}}}}
  = \frac{S_{xy}/S_{xx}}{\sqrt{\dfrac{1}{n-2}\left(S_{yy} - \dfrac{S_{xy}^2}{S_{xx}}\right)}\Big/\sqrt{S_{xx}}}
  = \frac{\hat{\beta}}{s/\sqrt{S_{xx}}}

i.e. the same statistic used for testing for zero slope.
Regression (in general)
In many experiments we would have collected data on a single variable Y (the dependent variable) and on p (say) other variables X1, X2, X3, ..., Xp (the independent variables).
One is interested in determining a model that describes the relationship between Y (the response (dependent) variable) and X1, X2, …, Xp (the predictor (independent) variables).
This model can be used for
– Prediction
– Controlling Y by manipulating X1, X2, …, Xp
The Model:
is an equation of the form
Y = f(X1, X2, ..., Xp | \theta_1, \theta_2, ..., \theta_q) + \varepsilon
where \theta_1, \theta_2, ..., \theta_q are unknown parameters of the function f and \varepsilon is a random disturbance (usually assumed to have a normal distribution with mean 0 and standard deviation \sigma).
Examples:
1. Y = Blood Pressure, X = age
The model: Y = \alpha + \beta X + \varepsilon, thus \theta_1 = \alpha and \theta_2 = \beta.
This model is called the simple Linear Regression Model.

[Scatter plot: blood pressure against age with the fitted line Y = \alpha + \beta X.]

2. Y = average of five best times for running the 100m, X = the year
The model: Y = \alpha e^{-\beta X} + \gamma + \varepsilon, thus \theta_1 = \alpha, \theta_2 = \beta and \theta_3 = \gamma.
This model is called the exponential Regression Model.

[Plot: the times against year (1930 to 2010) with the fitted curve Y = \alpha e^{-\beta X} + \gamma.]
3. Y = gas mileage (mpg) of a car brand
X1 = engine size
X2 = horsepower
X3 = weight
The model: Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \varepsilon.
This model is called the Multiple Linear Regression Model.
The Multiple Linear Regression Model
In Multiple Linear Regression we assume the following model:

Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_p X_p + \varepsilon

This model is called the Multiple Linear Regression Model. Here \beta_0, \beta_1, \beta_2, ..., \beta_p are unknown parameters and \varepsilon is a random disturbance assumed to have a normal distribution with mean 0 and standard deviation \sigma.
The importance of the Linear model
1. It is the simplest form of a model in which each independent variable has some effect on the dependent variable Y.
– When fitting models to data one tries to find the simplest form of a model that still adequately describes the relationship between the dependent variable and the independent variables.
– The linear model is sometimes the first model to be fitted and is only abandoned if it turns out to be inadequate.
2. In many instances a linear model is the most appropriate model to describe the dependence relationship between the dependent variable and the independent variables.
– This will be true if the dependent variable increases at a constant rate as any of the independent variables is increased while holding the other independent variables constant.
3. Many non-linear models can be linearized (put into the form of a linear model by appropriately transforming the dependent variable and/or any or all of the independent variables).
– This important fact ensures the wide utility of the linear model (i.e. the fact that many non-linear models are linearizable).
An Example
The following data come from an experiment that investigated the source from which corn plants in various soils obtain their phosphorous.
– The concentration of inorganic phosphorous (X1) and the concentration of organic phosphorous (X2) were measured in the soil of n = 18 test plots.
– In addition, the phosphorous content (Y) of corn grown in the soil was also measured. The data are displayed below:
Inorganic          Organic            Plant Available
Phosphorous (X1)   Phosphorous (X2)   Phosphorous (Y)
  0.4                53                  64
  0.4                23                  60
  3.1                19                  71
  0.6                34                  61
  4.7                24                  54
  1.7                65                  77
  9.4                44                  81
 10.1                31                  93
 11.6                29                  93
 12.6                58                  51
 10.9                37                  76
 23.1                46                  96
 23.1                50                  77
 21.6                44                  93
 23.1                56                  95
  1.9                36                  54
 26.8                58                 168
 29.9                51                  99
Coefficients
Intercept   56.2510241 (b0)
X1           1.78977412 (b1)
X2           0.08664925 (b2)
Equation:
Y = 56.2510241 + 1.78977412 X1 + 0.08664925 X2
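These coefficients come from SPSS in the slides; as a cross-check (an illustration, not part of the original), an ordinary least squares fit with NumPy on the same data should give roughly the same values:

import numpy as np

X1 = np.array([0.4, 0.4, 3.1, 0.6, 4.7, 1.7, 9.4, 10.1, 11.6,
               12.6, 10.9, 23.1, 23.1, 21.6, 23.1, 1.9, 26.8, 29.9])
X2 = np.array([53, 23, 19, 34, 24, 65, 44, 31, 29,
               58, 37, 46, 50, 44, 56, 36, 58, 51], dtype=float)
Y  = np.array([64, 60, 71, 61, 54, 77, 81, 93, 93,
               51, 76, 96, 77, 93, 95, 54, 168, 99], dtype=float)

A = np.column_stack([np.ones_like(X1), X1, X2])    # design matrix [1, X1, X2]
b0, b1, b2 = np.linalg.lstsq(A, Y, rcond=None)[0]  # least squares coefficients
print(b0, b1, b2)                                  # expected to be close to 56.25, 1.790, 0.087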
Summary of the Statistics used in Multiple Regression

The Least Squares Estimates:
b_0, b_1, b_2, \ldots, b_p are the values that minimize

\mathrm{RSS} = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 = \sum_{i=1}^{n}\left(y_i - (b_0 + b_1 x_{1i} + b_2 x_{2i} + \cdots + b_p x_{pi})\right)^2

The Analysis of Variance Table Entries

a) Adjusted Total Sum of Squares (SSTotal):

SS_{\text{Total}} = \sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2, \qquad \text{d.f.} = n - 1

b) Residual Sum of Squares (SSError):

\mathrm{RSS} = SS_{\text{Error}} = \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2, \qquad \text{d.f.} = n - p - 1

c) Regression Sum of Squares (SSReg):

SS_{\text{Reg}} = SS(\beta_1, \beta_2, \ldots, \beta_p) = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2, \qquad \text{d.f.} = p

Note:

\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 = \sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2 + \sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2

i.e. SSTotal = SSReg + SSError
The Analysis of Variance Table

Source       Sum of Squares   d.f.        Mean Square                           F
Regression   SSReg            p           SSReg/p = MSReg                       MSReg/s²
Error        SSError          n - p - 1   SSError/(n - p - 1) = MSError = s²
Total        SSTotal          n - 1
Uses:
1. To estimate \sigma^2 (the error variance).
   - Use s² = MSError to estimate \sigma^2.
2. To test the hypothesis H0: \beta_1 = \beta_2 = ... = \beta_p = 0.
   Use the test statistic

   F = \frac{MS_{\text{Reg}}}{MS_{\text{Error}}} = \frac{MS_{\text{Reg}}}{s^2} = \frac{SS_{\text{Reg}}/p}{SS_{\text{Error}}/(n - p - 1)}

   - Reject H0 if F > F_{\alpha}(p, n - p - 1).
3. To compute other statistics that are useful in describing the relationship between Y (the dependent variable) and X1, X2, ..., Xp (the independent variables).

a) R² = the coefficient of determination = SSReg/SSTotal

   R^2 = \frac{\sum_{i=1}^{n}\left(\hat{y}_i - \bar{y}\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}

   = the proportion of variance in Y explained by X1, X2, ..., Xp.
   1 - R² = the proportion of variance in Y that is left unexplained by X1, X2, ..., Xp = SSError/SSTotal.

b) Ra² = "R² adjusted" for degrees of freedom
   = 1 - [the proportion of variance in Y that is left unexplained by X1, X2, ..., Xp, adjusted for d.f.]

   R_a^2 = 1 - \frac{MS_{\text{Error}}}{MS_{\text{Total}}} = 1 - \frac{SS_{\text{Error}}/(n - p - 1)}{SS_{\text{Total}}/(n - 1)} = 1 - \frac{(n-1)}{(n-p-1)}\,\frac{SS_{\text{Error}}}{SS_{\text{Total}}} = 1 - \frac{(n-1)}{(n-p-1)}\left(1 - R^2\right)

c) R = \sqrt{R^2} = the multiple correlation coefficient of Y with X1, X2, ..., Xp

   R = \sqrt{\frac{SS_{\text{Reg}}}{SS_{\text{Total}}}}

   = the maximum correlation between Y and a linear combination of X1, X2, ..., Xp.

Comment: The statistics F, R², Ra² and R are equivalent statistics.
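A sketch of how these ANOVA quantities could be computed for any design matrix A (with an intercept column) and response Y; the function name and return values are choices made here for illustration:

import numpy as np

def anova_summary(A, Y):
    """F statistic, R^2, adjusted R^2 and MSError for a multiple regression."""
    n, k = A.shape                                  # k = p + 1 (includes the intercept column)
    p = k - 1
    beta = np.linalg.lstsq(A, Y, rcond=None)[0]
    fitted = A @ beta
    SS_total = np.sum((Y - Y.mean()) ** 2)          # d.f. = n - 1
    SS_error = np.sum((Y - fitted) ** 2)            # d.f. = n - p - 1
    SS_reg = SS_total - SS_error                    # d.f. = p
    MS_reg, MS_error = SS_reg / p, SS_error / (n - p - 1)
    F = MS_reg / MS_error
    R2 = SS_reg / SS_total
    R2_adj = 1 - (n - 1) / (n - p - 1) * (1 - R2)
    return F, R2, R2_adj, MS_error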
Using Statistical Packages
To perform Multiple Regression
Using SPSS
Note: The use of another statistical package
such as Minitab is similar to using SPSS
After starting the SPSS program the following dialogue
box appears:
If you select Opening an existing file and press OK the
following dialogue box appears
The following dialogue box appears:
If the variable names are in the file ask it to read the
names. If you do not specify the Range the program will
identify the Range:
Once you “click OK”, two windows will appear
One that will contain the output:
The other containing the data:
To perform any statistical Analysis select the Analyze
menu:
Then select Regression and Linear.
The following Regression dialogue box appears
Select the Dependent variable Y.
Select the Independent variables X1, X2, etc.
If you select the Method - Enter.
All variables will be put into the equation.
There are also several other methods that can be
used :
1. Forward selection
2. Backward Elimination
3. Stepwise Regression
Forward selection
1. This method starts with no variables in the
equation
2. Carries out statistical tests on variables not in
the equation to see which have a significant
effect on the dependent variable.
3. Adds the most significant.
4. Continues until all variables not in the
equation have no significant effect on the
dependent variable.
Backward Elimination
1. This method starts with all variables in the
equation
2. Carries out statistical tests on variables in the
equation to see which have no significant
effect on the dependent variable.
3. Deletes the least significant.
4. Continues until all variables in the equation
have a significant effect on the dependent
variable.
Stepwise Regression (uses both forward and
backward techniques)
1. This method starts with no variables in the
equation
2. Carries out statistical tests on variables not in
the equation to see which have a significant
effect on the dependent variable.
3. It then adds the most significant.
4. After a variable is added it checks to see if any
variables added earlier can now be deleted.
5. Continues until all variables not in the
equation have no significant effect on the
dependent variable.
All of these methods are procedures for
attempting to find the best equation
The best equation is the equation that is the
simplest (not containing variables that are not
important) yet adequate (containing variables
that are important)
Once the dependent variable, the independent variables and the Method have been selected, pressing OK performs the Analysis.
The output will contain the following table:

Model Summary
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate
1       .822a   .676       .673                4.46
a. Predictors: (Constant), WEIGHT, HORSE, ENGINE

R² and adjusted R² measure the proportion of variance in Y that is explained by X1, X2, X3, etc. (67.6% and 67.3%).
R is the multiple correlation coefficient (the maximum correlation between Y and a linear combination of X1, X2, X3, etc.).
The next table is the Analysis of Variance Table:

ANOVA(b)
Model 1      Sum of Squares   df    Mean Square   F         Sig.
Regression   16098.158        3     5366.053      269.664   .000a
Residual     7720.836         388   19.899
Total        23818.993        391
a. Predictors: (Constant), WEIGHT, HORSE, ENGINE
b. Dependent Variable: MPG

The F test is testing whether the regression coefficients of the predictor variables are all zero, namely that none of the independent variables X1, X2, X3, etc. have any effect on Y.
The final table in the output:

Coefficients(a)
Model 1      Unstandardized B   Std. Error   Standardized Beta   t        Sig.
(Constant)   44.015             1.272                            34.597   .000
ENGINE       -5.53E-03          .007         -.074               -.786    .432
HORSE        -5.56E-02          .013         -.273               -4.153   .000
WEIGHT       -4.62E-03          .001         -.504               -6.186   .000
a. Dependent Variable: MPG

It gives the estimates of the regression coefficients, their standard errors and the t tests for testing whether they are zero.
Note: Engine size has no significant effect on Mileage.
The estimated equation from the table above is:

\text{Mileage} = 44.0 - \frac{5.53}{1000}\,\text{Engine} - \frac{5.56}{100}\,\text{Horse} - \frac{4.62}{1000}\,\text{Weight} + \text{Error}

Mileage decreases:
1. with increases in Engine Size (not significant, p = 0.432),
2. with increases in Horsepower (significant, p = 0.000),
3. with increases in Weight (significant, p = 0.000).
Logistic regression
Recall the simple linear regression model:

y = \beta_0 + \beta_1 x + \varepsilon

where we are trying to predict a continuous dependent variable y from a continuous independent variable x.
This model can be extended to the multiple linear regression model:

y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p + \varepsilon

Here we are trying to predict a continuous dependent variable y from several continuous independent variables x1, x2, …, xp.
Now suppose the dependent variable y is binary. It takes on two values, “Success” (1) or “Failure” (0).
We are interested in predicting y from a continuous independent variable x.
This is the situation in which Logistic Regression is used.
Example
We are interested in how the success (y) of a new antibiotic cream in curing “acne problems” depends on the amount (x) that is applied daily.
The values of y are 1 (Success) or 0 (Failure).
The values of x range over a continuum.

The Logistic Regression Model
Let p denote P[y = 1] = P[Success]. This quantity will increase with the value of x.
The ratio p/(1 - p) is called the odds ratio. This quantity will also increase with the value of x, ranging from zero to infinity.
The quantity ln(p/(1 - p)) is called the log odds ratio.

Example: odds ratio, log odds ratio
Suppose a die is rolled: Success = “roll a six”, p = 1/6.
The odds ratio:

\frac{p}{1-p} = \frac{1/6}{5/6} = \frac{1}{5}

The log odds ratio:

\ln\!\left(\frac{p}{1-p}\right) = \ln\!\left(\frac{1}{5}\right) = \ln(0.2) = -1.60944
The Logistic Regression Model
Assumes the log odds ratio is linearly related to x, i.e.:

\ln\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x

In terms of the odds ratio:

\frac{p}{1-p} = e^{\beta_0 + \beta_1 x}

Solving for p in terms of x:

p = e^{\beta_0 + \beta_1 x}\,(1 - p), \qquad p + p\,e^{\beta_0 + \beta_1 x} = e^{\beta_0 + \beta_1 x}

or

p = \frac{e^{\beta_0 + \beta_1 x}}{1 + e^{\beta_0 + \beta_1 x}}
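A small Python sketch of the model (function names are choices made here; they are not from the slides):

import numpy as np

def logistic_p(x, b0, b1):
    """P[y = 1] under the logistic regression model."""
    eta = b0 + b1 * np.asarray(x, dtype=float)   # the log odds b0 + b1*x
    return np.exp(eta) / (1 + np.exp(eta))

def odds(p):
    return p / (1 - p)                           # the odds ratio

def log_odds(p):
    return np.log(p / (1 - p))                   # the log odds ratio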
Interpretation of the parameter \beta_0 (determines the intercept)

[Graph of p against x: at x = 0 the curve has height e^{\beta_0}/(1 + e^{\beta_0}).]

Interpretation of the parameter \beta_1 (determines when p is 0.50, along with \beta_0)

p = \frac{e^{\beta_0 + \beta_1 x}}{1 + e^{\beta_0 + \beta_1 x}} = \frac{1}{2} \quad \text{when} \quad \beta_0 + \beta_1 x = 0, \text{ i.e. } x = -\frac{\beta_0}{\beta_1}

[Graph of p against x: the curve passes through 0.5 at x = -\beta_0/\beta_1.]

Also

\frac{dp}{dx} = \frac{d}{dx}\left(\frac{e^{\beta_0 + \beta_1 x}}{1 + e^{\beta_0 + \beta_1 x}}\right)
= \frac{\beta_1 e^{\beta_0 + \beta_1 x}\left(1 + e^{\beta_0 + \beta_1 x}\right) - e^{\beta_0 + \beta_1 x}\,\beta_1 e^{\beta_0 + \beta_1 x}}{\left(1 + e^{\beta_0 + \beta_1 x}\right)^2}
= \beta_1\,\frac{e^{\beta_0 + \beta_1 x}}{\left(1 + e^{\beta_0 + \beta_1 x}\right)^2}
= \frac{\beta_1}{4} \quad \text{when } x = -\frac{\beta_0}{\beta_1}

\beta_1/4 is the rate of increase in p with respect to x when p = 0.50.

Interpretation of the parameter \beta_1 (determines the slope when p is 0.50)

[Graph of p against x: at p = 0.50 the tangent to the curve has slope \beta_1/4.]
The data
The data will for each case consist of
1. a value for x, the continuous independent
variable
2. a value for y (1 or 0) (Success or Failure)
Total of n = 250 cases
case    x     y        case    x     y
  1     0.8   0         230    4.7   1
  2     2.3   1         231    0.3   0
  3     2.5   0         232    1.4   0
  4     2.8   1         233    4.5   1
  5     3.5   1         234    1.4   1
  6     4.4   1         235    4.5   1
  7     0.5   0         236    3.9   0
  8     4.5   1         237    0.0   0
  9     4.4   1         238    4.3   1
 10     0.9   0         239    1.0   0
 11     3.3   1         240    3.9   1
 12     1.1   0         241    1.1   0
 13     2.5   1         242    3.4   1
 14     0.3   1         243    0.6   0
 15     4.5   1         244    1.6   0
 16     1.8   0         245    3.9   0
 17     2.4   1         246    0.2   0
 18     1.6   0         247    2.5   0
 19     1.9   1         248    4.1   1
 20     4.6   1         249    4.2   1
                        250    4.9   1

(cases 21 to 229 are not shown)
Estimation of the parameters
The parameters are estimated by maximum likelihood estimation, which requires a statistical package such as SPSS.
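The slides use SPSS; a rough Python equivalent (an assumption, not part of the original) would use statsmodels' Logit on the same kind of (x, y) data. The arrays below are placeholders for the 250 observations in the table above:

import numpy as np
import statsmodels.api as sm

# x = np.array([...])          # amount applied daily (250 values)
# y = np.array([...])          # 1 = Success, 0 = Failure
# X = sm.add_constant(x)       # adds the intercept column
# fit = sm.Logit(y, X).fit()   # maximum likelihood estimation
# print(fit.params)            # estimates of b0 and b1
# print(fit.bse)               # their standard errors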
Using SPSS to perform Logistic regression
Open the data file:
Choose from the menu:
Analyze -> Regression -> Binary Logistic
The following dialogue box appears
Select the dependent variable (y) and the independent
variable (x) (covariate).
Press OK.
Here is the output.
The parameter estimates and their standard errors:

            B         S.E.
X           1.0309    0.1334    (b1)
Constant   -2.0475    0.332     (b0)
Interpretation of the parameter b0 (determines the intercept):

\text{intercept} = \frac{e^{b_0}}{1 + e^{b_0}} = \frac{e^{-2.0475}}{1 + e^{-2.0475}} = 0.1143

Interpretation of the parameter b1 (determines when p is 0.50, along with b0):

x = -\frac{b_0}{b_1} = \frac{2.0475}{1.0309} = 1.986

Another interpretation of the parameter b1: b1/4 is the rate of increase in p with respect to x when p = 0.50:

\frac{b_1}{4} = \frac{1.0309}{4} = 0.258
The Logistic Regression Model (recap)
The dependent variable y is binary, taking the values “Success” (1) or “Failure” (0); p = P[y = 1] and the model assumes

\ln\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 x, \qquad p = \frac{e^{\beta_0 + \beta_1 x}}{1 + e^{\beta_0 + \beta_1 x}}

[Graph of p against x: the S-shaped logistic curve.]
The Multiple Logistic Regression Model
Here we attempt to predict the outcome of a binary response variable Y from several independent variables X1, X2, … etc.

\ln\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p

or

p = \frac{e^{\beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p}}{1 + e^{\beta_0 + \beta_1 X_1 + \cdots + \beta_p X_p}}
Multiple Logistic Regression: an example
In this example we are interested in determining the risk of infants (who were born prematurely) developing BPD (bronchopulmonary dysplasia).
More specifically, we are interested in developing a predictive model which will determine the probability of developing BPD from X1 = gestational age and X2 = birth weight.
For n = 223 infants in a prenatal ward the following measurements were determined:
1. X1 = gestational age (weeks),
2. X2 = birth weight (grams), and
3. Y = presence of BPD.
The Data (the first 26 of the n = 223 cases)

case   Gestational Age   Birthweight   Presence of BPD
  1        28.6             1119            1
  2        31.5             1222            0
  3        30.3             1311            1
  4        28.9             1082            0
  5        30.3             1269            0
  6        30.5             1289            0
  7        28.5             1147            0
  8        27.9             1136            1
  9        30                972            0
 10        31               1252            0
 11        27.4              818            0
 12        29.4             1275            0
 13        30.8             1231            0
 14        30.4             1112            0
 15        31.1             1353            1
 16        26.7             1067            1
 17        27.4              846            1
 18        28               1013            0
 19        29.3             1055            0
 20        30.4             1226            0
 21        30.2             1237            0
 22        30.2             1287            0
 23        30.1             1215            0
 24        27                929            1
 25        30.3             1159            0
 26        27.4             1046            1
The results

Variables in the Equation (Step 1a)
                  B        S.E.    Wald     df   Sig.   Exp(B)
Birthweight       -.003    .001     4.885   1    .027   .998
GestationalAge    -.505    .133    14.458   1    .000   .604
Constant          16.858   3.642   21.422   1    .000   2.1E+07
a. Variable(s) entered on step 1: Birthweight, GestationalAge.

\ln\!\left(\frac{p}{1-p}\right) = 16.858 - 0.003\,BW - 0.505\,GA

\frac{p}{1-p} = e^{16.858 - 0.003\,BW - 0.505\,GA}

p = \frac{e^{16.858 - 0.003\,BW - 0.505\,GA}}{1 + e^{16.858 - 0.003\,BW - 0.505\,GA}}
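A small sketch using the fitted coefficients above to estimate the risk for a given infant (the rounded coefficients make the result approximate):

import numpy as np

def bpd_risk(birthweight, gestational_age):
    """Estimated P[BPD] from the fitted logistic model above."""
    log_odds = 16.858 - 0.003 * birthweight - 0.505 * gestational_age
    return np.exp(log_odds) / (1 + np.exp(log_odds))

# e.g. an infant weighing 1000 g at 28 weeks:
# bpd_risk(1000, 28)   # roughly 0.43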
Graph: Showing Risk of BPD vs GA and Birth Weight
[Plot: probability of BPD (0 to 1) against birth weight (700 to 1700 g), one curve for each of GA = 27, 28, 29, 30, 31, 32 weeks.]
Non-Parametric Statistics