Analysis of Variance and Experimental Design


Basic concepts of statistics:
• Measures of central tendency
• Measures of dispersion and variability
Arithmetic mean (= simple average)
• The best estimate of the population mean is the sample mean, $\bar{X}$:

$$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$$

where $X_i$ is a measurement in the population, $i$ is the index of the measurement, and $n$ is the sample size.
Measures of variability
All describe how “spread out” the data are.

1. Sum of squares (SS), the sum of squared deviations from the mean
• For a sample,

$$SS = \sum (X_i - \bar{X})^2$$
2. Average or mean sum of squares = variance, $s^2$
• For a sample,

$$s^2 = \frac{\sum (X_i - \bar{X})^2}{n - 1}$$
Why $n - 1$ in

$$s^2 = \frac{\sum (X_i - \bar{X})^2}{n - 1}\;?$$
$n - 1$ represents the degrees of freedom, $\nu$ (the Greek letter “nu”), or the number of independent quantities in the estimate $s^2$. Because

$$\sum_{i=1}^{n} (X_i - \bar{X}) = 0,$$

once $n - 1$ of the deviations are specified, the last deviation is already determined.
• Variance has squared measurement units – to regain the original units, take the square root.

3. Standard deviation, $s$
• For a sample,

$$s = \sqrt{\frac{\sum (X_i - \bar{X})^2}{n - 1}}$$
4. Standard error of the mean
• For a sample,

$$s_{\bar{X}} = \frac{s}{\sqrt{n}}$$

The standard error of the mean is a measure of variability among the means of repeated samples from a population.
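As an illustration (not from the original slides), here is a minimal Python sketch computing all four quantities for an arbitrary small sample:

```python
import math

# Arbitrary illustrative sample (any small data set works here)
x = [44, 46, 43, 44, 45]
n = len(x)

mean = sum(x) / n                          # sample mean, X-bar
ss = sum((xi - mean) ** 2 for xi in x)     # 1. sum of squares, SS
variance = ss / (n - 1)                    # 2. sample variance, s^2 (n - 1 df)
sd = math.sqrt(variance)                   # 3. standard deviation, s
sem = sd / math.sqrt(n)                    # 4. standard error of the mean
print(mean, ss, variance, sd, sem)
```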
Body Weight Data (kg)
A Population of Values (N = 28, μ = 44, σ² = 1.214):

44  46  43  44  45  44  42
44  46  44  43  44  44  42
43  44  43  43  46  45  44
43  44  45  46  43  44  44

Repeated random sampling, each sample with size n = 5 values:
• Sample 1: 43, 44, 45, 44, 44 → $\bar{X} = 44$
• Sample 2: 46, 44, 46, 45, 44 → $\bar{X} = 45$
• Sample 3: 42, 42, 43, 45, 43 → $\bar{X} = 43$
For a large enough number of large samples, the frequency distribution of the sample means (= the sampling distribution) approaches a normal distribution.
[Figure: the sampling distribution of the sample mean – a bell-shaped normal curve; x-axis: sample mean, y-axis: frequency]
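A quick simulation sketch (standard-library Python only; the population values are taken from the slide above) shows the sample means piling up around μ = 44 in a roughly bell-shaped frequency distribution:

```python
import random
from collections import Counter

# The N = 28 body weights (kg) from the population slide
population = [44, 46, 43, 44, 45, 44, 42, 44, 46, 44, 43, 44, 44, 42,
              43, 44, 43, 43, 46, 45, 44, 43, 44, 45, 46, 43, 44, 44]

random.seed(1)  # reproducible
# 10,000 repeated random samples, each of size n = 5, without replacement
means = [sum(random.sample(population, 5)) / 5 for _ in range(10_000)]

# Text histogram of the sampling distribution of the mean
for value, count in sorted(Counter(round(m, 1) for m in means).items()):
    print(f"{value:5.1f} {'*' * (count // 50)}")
```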
Testing statistical hypotheses between 2 means
1. State the research question in terms of
statistical hypotheses.
We always start with a statement that hypothesizes “no difference”, called the null hypothesis, H0.
E.g., H0: Mean bill length of female
hummingbirds is equal to mean bill length of
male hummingbirds
Then we formulate a statement that must be true if the null hypothesis is false, called the alternative hypothesis, HA.
E.g., HA: Mean bill length of female
hummingbirds is not equal to mean bill length
of male hummingbirds
If we reject H0 as a result of sample evidence,
then we conclude that HA is true.
2. Choose an appropriate statistical test that
would allow you to reject H0 if H0 were
false.
E.g., Student’s t test for hypotheses about means
William Sealy Gosset
(a.k.a. “Student”)
t statistic:

$$t = \frac{\bar{X}_1 - \bar{X}_2}{s_{\bar{X}_1 - \bar{X}_2}}$$

where $\bar{X}_1$ and $\bar{X}_2$ are the means of sample 1 and sample 2, and $s_{\bar{X}_1 - \bar{X}_2}$ is the standard error of the difference between the sample means.
To estimate $s_{\bar{X}_1 - \bar{X}_2}$, we must first know the relation between the two populations.
How to evaluate the success of this experimental design class
• Compare the scores in statistics and in experimental design for several students
• Compare the experimental design scores of several students from two serial classes
• Compare the experimental design scores of several students from two different classes
Comparing the scores in statistics and in experimental design for several students:
• Same students → dependent populations
• Different students → independent populations, with either identical variance or non-identical variance
Comparing the experimental design scores of several students from two serial classes:
• Different students → independent populations, with either identical variance or non-identical variance
Comparing the experimental design scores of several students from two different classes:
• Different students → independent populations, with either identical variance or non-identical variance
Relation between populations
• Dependent populations
• Independent populations
  1. Identical (homogeneous) variance
  2. Not identical (heterogeneous) variance
Dependent Populations

Sample test statistic:

$$t = \frac{\bar{d} - d_0}{SE_{\bar{d}}}$$

Null hypothesis: the mean difference is equal to $d_0$.

Compare against the null distribution: $t$ with $n - 1$ df, where $n$ is the number of pairs.

How unusual is this test statistic?
• P < 0.05 → reject H0
• P > 0.05 → fail to reject H0
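A minimal sketch of this paired test using SciPy (assuming SciPy is available; the scores are made up for illustration):

```python
from scipy import stats

# Hypothetical paired data: the same six students scored twice
before = [69, 64, 70, 67, 69, 69]
after  = [72, 66, 71, 66, 73, 70]

# Paired t-test: t = (d-bar - d0)/SE(d-bar) with n - 1 df, n = number of pairs
t, p = stats.ttest_rel(after, before)
print(t, p)   # reject H0 if p < 0.05, otherwise fail to reject H0
```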
Independent populations with homogeneous variances

Pooled variance:

$$s_p^2 = \frac{SS_1 + SS_2}{\nu_1 + \nu_2}$$

Then,

$$s_{\bar{X}_1 - \bar{X}_2} = \sqrt{\frac{s_p^2}{n_1} + \frac{s_p^2}{n_2}}$$
Independent populations with homogeneous variances

$$t = \frac{\bar{Y}_1 - \bar{Y}_2}{SE_{\bar{Y}_1 - \bar{Y}_2}}$$

where

$$SE_{\bar{Y}_1 - \bar{Y}_2} = \sqrt{s_p^2 \left( \frac{1}{n_1} + \frac{1}{n_2} \right)}, \qquad s_p^2 = \frac{df_1 s_1^2 + df_2 s_2^2}{df_1 + df_2}$$
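A sketch computing the pooled-variance t statistic straight from these formulas (the scores are the two classes tabulated later in these slides):

```python
import math

# Class A and class B scores (see the Pair-t-Test table below)
y1 = [69, 64, 70, 67, 69, 69]
y2 = [82, 78, 82, 81, 82, 77, 78]

n1, n2 = len(y1), len(y2)
m1, m2 = sum(y1) / n1, sum(y2) / n2
ss1 = sum((y - m1) ** 2 for y in y1)        # SS1
ss2 = sum((y - m2) ** 2 for y in y2)        # SS2

sp2 = (ss1 + ss2) / ((n1 - 1) + (n2 - 1))   # pooled variance s_p^2
se = math.sqrt(sp2 / n1 + sp2 / n2)         # SE of the difference in means
t = (m1 - m2) / se                          # t with n1 + n2 - 2 df
print(sp2, se, t)
```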
When sample sizes are small, the sampling distribution is described better by the t distribution than by the standard normal (Z) distribution. The shape of the t distribution depends on the degrees of freedom, $\nu = n - 1$.

[Figure: t distributions for $\nu$ = 1, 5, and 25, converging to the standard normal curve, $Z = t_{(\nu = \infty)}$]
The distribution of a test statistic is divided into an area of acceptance and an area of rejection. For $\alpha = 0.05$ in a two-tailed test, the area of acceptance is 0.95 and each tail holds an area of rejection of 0.025, bounded by the lower and upper critical values.

[Figure: t distribution centered at 0 with rejection areas of 0.025 in each tail and an acceptance area of 0.95 between the lower and upper critical values]

Critical t for a test about equality = $t_{\alpha(2),\, \nu}$.
Independent populations with heterogeneous variances

$$t = \frac{\bar{Y}_1 - \bar{Y}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$$

$$df = \frac{\left( \dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2} \right)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}$$
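In practice, the two cases are one flag apart in SciPy (assuming SciPy; the samples are invented): equal_var=True gives the pooled test of the previous slide, equal_var=False gives this Welch-type test with the df formula above.

```python
from scipy import stats

# Invented samples with visibly unequal spread
y1 = [12.1, 11.8, 12.4, 12.0, 11.9]
y2 = [10.2, 14.9, 8.7, 15.6, 11.0, 13.3]

print(stats.ttest_ind(y1, y2, equal_var=True))    # pooled-variance t-test
print(stats.ttest_ind(y1, y2, equal_var=False))   # unequal-variance (Welch) t-test
```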
Analysis of Variance (ANOVA)

Independent t-test
• Compares the means of one variable for TWO groups of cases.
• Statistical formula:

$$t_{\bar{X}_1 - \bar{X}_2} = \frac{\bar{X}_1 - \bar{X}_2}{S_{\bar{X}_1 - \bar{X}_2}}$$

Meaning: compare the ‘standardized’ mean difference.
• But this is limited to two groups. What if there are more than two groups?
  • Pairwise t-tests (previous example)
  • ANOVA (ANalysis Of Variance)
From t-test to ANOVA
1. Pairwise t-tests
If you compare three or more groups using t-tests with the usual 0.05 level of significance, you have to compare each pair (A to B, A to C, B to C), so the chance of getting at least one wrong result is:
1 − (0.95 × 0.95 × 0.95) = 14.3%
Multiple t-tests inflate the false-alarm (Type I error) rate, as the sketch below illustrates.
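The arithmetic generalizes to any number of groups; a short Python check:

```python
alpha = 0.05
for k in (3, 4, 5):
    m = k * (k - 1) // 2                          # number of pairwise t-tests
    print(k, m, round(1 - (1 - alpha) ** m, 3))   # chance of >= 1 false alarm
# 3 groups -> 3 tests -> 0.143, matching the 14.3% above
```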
From t-test to ANOVA
2. Analysis of Variance
The t-test uses the mean difference. Similarly, the ANOVA test uses the observed variance among group means.
The logic behind ANOVA:
• If the groups are from the same population, the variance among means will be small (note that the group means are not exactly the same).
• If the groups are from different populations, the variance among means will be large.
What is ANOVA?
• ANOVA (Analysis of Variance) is a procedure designed to determine if the manipulation of one or more independent variables in an experiment has a statistically significant influence on the value of the dependent variable.
• Assumptions:
  • Each independent variable is categorical (nominal scale). Independent variables are called factors and their values are called levels.
  • The dependent variable is numerical (ratio scale).
• The basic idea is that the “variance” of the dependent variable given the influence of one or more independent variables (the expected sum of squares for a factor) is checked to see if it is significantly greater than the “variance” of the dependent variable assuming no influence of the independent variables (also known as the mean square error, MSE).
Pair-t-Test

Class A              Class B
Amir        69       Budi       82
Abas        64       Berta      78
Abi         70       Bambang    82
Aura        67       Banu       81
Ana         69       Betty      82
Anis        69       Bagus      77
                     Berth      78
Average     68                  80
n            6                   7
Var. sample  4.8                 5.07
ANOVA TABLE OF 2 POPULATIONS

SV                    SS          DF                    Mean square (M.S.)
Between populations   SSbetween   1                     MSB = SSB / DFB
Within populations    SSwithin    (r1 - 1) + (r2 - 1)   MSW = SSW / DFW = s²
TOTAL                 SStotal     r1 + r2 - 1
Rationale for ANOVA
• We can break the total variance in a study into meaningful pieces that correspond to treatment effects and error. That’s why we call this Analysis of Variance.

$\bar{X}_G$: the grand mean, taken over all observations.
$\bar{X}_A$: the mean of any group.
$\bar{X}_{A_1}$: the mean of a specific group (group 1 in this case).
$X_i$: the observation or raw data for the i-th subject.
The ANOVA Model

$$X_i = \bar{X}_G + (\bar{X}_A - \bar{X}_G) + (X_i - \bar{X}_A)$$

i.e., trial $i$ = the grand mean + a treatment effect + error. Summing the squared pieces over all observations gives:

SS Total = SS Treatment + SS Error
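A numerical check of this decomposition on made-up groups (plain Python; the values are arbitrary):

```python
# Verify SS Total = SS Treatment + SS Error numerically
groups = [[5, 6, 7], [8, 9, 10], [1, 2, 3]]          # arbitrary data

all_x = [x for g in groups for x in g]
grand = sum(all_x) / len(all_x)                      # grand mean, X-bar_G
g_means = [sum(g) / len(g) for g in groups]          # group means, X-bar_A

ss_total = sum((x - grand) ** 2 for x in all_x)
ss_treat = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, g_means))
ss_error = sum((x - m) ** 2 for g, m in zip(groups, g_means) for x in g)

print(ss_total, ss_treat + ss_error)                 # equal (up to rounding)
```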
Analysis of Variance
• Analysis of Variance (ANOVA) can be used to test for the equality of three or more population means using data obtained from observational or experimental studies.
• Use the sample results to test the following hypotheses:
H0: μ1 = μ2 = μ3 = . . . = μk
Ha: Not all population means are equal
• If H0 is rejected, we cannot conclude that all population means are different.
• Rejecting H0 means that at least two population means have different values.
Assumptions for Analysis of Variance
• For each population, the response variable is normally distributed.
• The variance of the response variable, denoted σ², is the same for all of the populations.
• The effects of the independent variable are additive.
• The observations must be independent.
Analysis of Variance: Testing for the Equality of k Population Means
• Between-treatments estimate of population variance
• Within-treatments estimate of population variance
• Comparing the variance estimates: the F test
• ANOVA table
Between-Treatments Estimate of Population Variance
• A between-treatments estimate of σ² is called the mean square due to treatments (MSTR):

$$MSTR = \frac{\sum_{j=1}^{k} n_j (\bar{x}_j - \bar{\bar{x}})^2}{k - 1}$$

• The numerator of MSTR is called the sum of squares due to treatments (SSTR).
• The denominator of MSTR represents the degrees of freedom associated with SSTR.
Within-Treatments Estimate of Population Variance
• The estimate of σ² based on the variation of the sample observations within each treatment is called the mean square due to error (MSE):

$$MSE = \frac{\sum_{j=1}^{k} (n_j - 1) s_j^2}{n_T - k}$$

• The numerator of MSE is called the sum of squares due to error (SSE).
• The denominator of MSE represents the degrees of freedom associated with SSE.
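Both estimates can be computed directly from per-group summaries; a sketch (the summary triples here are the plant data from the Reed example later in these slides):

```python
def mstr_mse(summaries):
    """MSTR and MSE from per-group (n_j, mean_j, variance_j) summaries."""
    n_t = sum(n for n, _, _ in summaries)            # total observations
    k = len(summaries)                               # number of treatments
    grand = sum(n * m for n, m, _ in summaries) / n_t
    sstr = sum(n * (m - grand) ** 2 for n, m, _ in summaries)
    sse = sum((n - 1) * v for n, _, v in summaries)
    return sstr / (k - 1), sse / (n_t - k)

print(mstr_mse([(5, 55, 26.0), (5, 68, 26.5), (5, 57, 24.5)]))
# -> (245.0, 25.666...)
```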
Comparing the Variance Estimates: The F Test
• If the null hypothesis is true and the ANOVA assumptions are valid, the sampling distribution of MSTR/MSE is an F distribution with MSTR d.f. equal to k - 1 and MSE d.f. equal to nT - k.
• If the means of the k populations are not equal, the value of MSTR/MSE will be inflated because MSTR overestimates σ².
• Hence, we will reject H0 if the resulting value of MSTR/MSE appears to be too large to have been selected at random from the appropriate F distribution.
Test for the Equality of k Population Means
• Hypotheses:
H0: μ1 = μ2 = μ3 = . . . = μk
Ha: Not all population means are equal
• Test statistic:
F = MSTR/MSE
Test for the Equality of k Population Means
• Rejection rule:
Using test statistic: Reject H0 if F > Fα
Using p-value: Reject H0 if p-value < α
where the value of Fα is based on an F distribution with k - 1 numerator degrees of freedom and nT - k denominator degrees of freedom.
Sampling Distribution of MSTR/MSE
The figure below shows the rejection region associated with a level of significance equal to α, where Fα denotes the critical value.

[Figure: F distribution of MSTR/MSE with “Do Not Reject H0” to the left of the critical value Fα and “Reject H0” to the right]
ANOVA Table

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F
Treatment             SSTR             k - 1                MSTR          MSTR/MSE
Error                 SSE              nT - k               MSE
Total                 SST              nT - 1

SST divided by its degrees of freedom nT - 1 is simply the overall sample variance that would be obtained if we treated the entire nT observations as one data set:

$$SST = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{\bar{x}})^2 = SSTR + SSE$$
What does ANOVA tell us?
ANOVA will tell us whether we have sufficient evidence to say that measurements from at least one treatment differ significantly from at least one other.
It will not tell us which ones differ, or how many differ.
ANOVA vs t-test
ANOVA is like a t-test among multiple data sets simultaneously.
• t-tests can only be done between two data sets, or between one set and a “true” value.
ANOVA uses the F distribution instead of the t distribution.
ANOVA assumes that all of the data sets have equal variances.
• Use caution on close decisions if they don’t.
ANOVA – a Hypothesis Test
H0: There is no significant difference among the
results provided by treatments.
Ha: At least one of the treatments provides
results significantly different from at least one
other.
Linear Model

$$Y_{ij} = \mu + \tau_j + \varepsilon_{ij}$$

By definition,

$$\sum_{j=1}^{t} \tau_j = 0$$

The experiment produces (r × t) Yij data values. The analysis produces estimates of μ, τ1, …, τt. (We can then get estimates of the εij by subtraction.)
The data layout, with treatments 1, 2, …, t in columns and r replicates in rows:

Treatment:  1     2     3     4     5     6    …    t

            Y11   Y12   Y13   Y14   Y15   Y16  …    Y1t
            Y21   Y22   Y23   Y24   Y25   Y26  …    Y2t
            Y31   Y32   Y33   Y34   Y35   Y36  …    Y3t
            Y41   Y42   Y43   Y44   Y45   Y46  …    Y4t
            ⋮     ⋮     ⋮     ⋮     ⋮     ⋮         ⋮
            Yr1   Yr2   Yr3   Yr4   Yr5   Yr6  …    Yrt
            ───   ───   ───   ───   ───   ───       ───
            Ȳ•1   Ȳ•2   Ȳ•3   Ȳ•4   Ȳ•5   Ȳ•6  …    Ȳ•t

Ȳ•1, Ȳ•2, …, Ȳ•t are column means, and

$$\bar{Y}_{\bullet\bullet} = \frac{\sum_{j=1}^{t} \bar{Y}_{\bullet j}}{t} = \text{“grand mean”}$$

(assuming the same number of data points in each column; otherwise Ȳ•• = the mean of all the data).
MODEL: $Y_{ij} = \mu + \tau_j + \varepsilon_{ij}$

Ȳ•• estimates μ
Ȳ•j − Ȳ•• estimates τj (= μj − μ), for all j

These estimates are based on Gauss’ (1796) PRINCIPLE OF LEAST SQUARES and on COMMON SENSE.
MODEL: $Y_{ij} = \mu + \tau_j + \varepsilon_{ij}$

If you insert the estimates into the MODEL,

(1)  $Y_{ij} = \bar{Y}_{\bullet\bullet} + (\bar{Y}_{\bullet j} - \bar{Y}_{\bullet\bullet}) + \hat{\varepsilon}_{ij}$,

it follows that our estimate of εij is

(2)  $\hat{\varepsilon}_{ij} = Y_{ij} - \bar{Y}_{\bullet j}$

Then, $Y_{ij} = \bar{Y}_{\bullet\bullet} + (\bar{Y}_{\bullet j} - \bar{Y}_{\bullet\bullet}) + (Y_{ij} - \bar{Y}_{\bullet j})$,

or,

(3)  $(Y_{ij} - \bar{Y}_{\bullet\bullet}) = (\bar{Y}_{\bullet j} - \bar{Y}_{\bullet\bullet}) + (Y_{ij} - \bar{Y}_{\bullet j})$

TOTAL variability in Y = variability in Y associated with X + variability in Y associated with all other factors
If you square both sides of (3), and double-sum both sides (over i and j), you get, after some unpleasant algebra but lots of terms which “cancel”:

$$\sum_{j=1}^{t} \sum_{i=1}^{r} (Y_{ij} - \bar{Y}_{\bullet\bullet})^2 = r \cdot \sum_{j=1}^{t} (\bar{Y}_{\bullet j} - \bar{Y}_{\bullet\bullet})^2 + \sum_{j=1}^{t} \sum_{i=1}^{r} (Y_{ij} - \bar{Y}_{\bullet j})^2$$

TSS (TOTAL SUM OF SQUARES) = SSBC (SUM OF SQUARES BETWEEN COLUMNS) + SSW, also called SSE (SUM OF SQUARES WITHIN COLUMNS)
ANOVA TABLE

SV                               SS     DF          Mean square (M.S.)
Between columns (due to brand)   SSBC   t - 1       MSBC = SSBC / (t - 1)
Within columns (due to error)    SSW    (r - 1)·t   MSW = SSW / ((r - 1)·t)
TOTAL                            TSS    tr - 1
Hypotheses:
H0: τ1 = τ2 = • • • = τt = 0
H1: not all τj = 0
or, equivalently,
H0: μ1 = μ2 = • • • = μt (all column means are equal)
H1: not all μj are EQUAL

The probability law of MSBC/MSW = “Fcalc” is the F distribution with (t - 1, (r - 1)·t) degrees of freedom, assuming H0 is true. Compare Fcalc with the table value.
Example: Reed Manufacturing
Reed would like to know if the mean number of hours worked per week is the same for the department managers at her three manufacturing plants (Buffalo, Pittsburgh, and Detroit).
A simple random sample of 5 managers from each of the three plants was taken, and the number of hours worked by each manager for the previous week was recorded.
Example: Reed Manufacturing
• Sample data:

Observation       Plant 1: Buffalo   Plant 2: Pittsburgh   Plant 3: Detroit
1                 48                 73                    51
2                 54                 63                    63
3                 57                 66                    61
4                 54                 64                    54
5                 62                 74                    56
Sample Mean       55                 68                    57
Sample Variance   26.0               26.5                  24.5
Example: Reed Manufacturing
• Hypotheses:
H0: μ1 = μ2 = μ3
Ha: Not all the means are equal
where:
μ1 = mean number of hours worked per week by the managers at Plant 1
μ2 = mean number of hours worked per week by the managers at Plant 2
μ3 = mean number of hours worked per week by the managers at Plant 3
Example: Reed Manufacturing
• Mean square due to treatments:
Since the sample sizes are all equal, the overall sample mean is
$\bar{\bar{x}}$ = (55 + 68 + 57)/3 = 60
SSTR = 5(55 - 60)² + 5(68 - 60)² + 5(57 - 60)² = 490
MSTR = 490/(3 - 1) = 245
• Mean square due to error:
SSE = 4(26.0) + 4(26.5) + 4(24.5) = 308
MSE = 308/(15 - 3) = 25.667
Example: Reed Manufacturing
• F-test:
If H0 is true, the ratio MSTR/MSE should be near 1 because both MSTR and MSE are estimating σ².
If Ha is true, the ratio should be significantly larger than 1 because MSTR tends to overestimate σ².
Example: Reed Manufacturing
• Rejection rule:
Using test statistic: Reject H0 if F > 3.89
Using p-value: Reject H0 if p-value < .05
where F.05 = 3.89 is based on an F distribution with 2 numerator degrees of freedom and 12 denominator degrees of freedom.
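The critical value and the p-value for the observed F can be checked with SciPy (a sketch, assuming SciPy is installed):

```python
from scipy import stats

print(stats.f.ppf(0.95, 2, 12))   # critical value F.05(2, 12), ~3.885
print(stats.f.sf(9.55, 2, 12))    # p-value for the observed F = 9.55, ~0.0033
```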
Example: Reed Manufacturing
• Test statistic:
F = MSTR/MSE = 245/25.667 = 9.55
• Conclusion:
F = 9.55 > F.05 = 3.89, so we reject H0. The mean number of hours worked per week by department managers is not the same at each plant.
Example: Reed Manufacturing
• ANOVA table:

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F
Treatments            490              2                    245           9.55
Error                 308              12                   25.667
Total                 798              14
Using Excel’s Anova: Single Factor Tool
• Step 1: Select the Tools pull-down menu
• Step 2: Choose the Data Analysis option
• Step 3: Choose Anova: Single Factor from the list of Analysis Tools
Using Excel’s Anova: Single Factor Tool
• Step 4: When the Anova: Single Factor dialog box appears:
  Enter B1:D6 in the Input Range box
  Select Grouped By Columns
  Select Labels in First Row
  Enter .05 in the Alpha box
  Select Output Range
  Enter A8 (your choice) in the Output Range box
  Click OK
Using Excel’s Anova: Single Factor Tool
• Value worksheet (top portion), columns A–D, rows 1–6:

Observation   Buffalo   Pittsburgh   Detroit
1             48        73           51
2             54        63           63
3             57        66           61
4             54        64           54
5             62        74           56
Using Excel’s Anova: Single Factor Tool
• Value worksheet (bottom portion):

Anova: Single Factor

SUMMARY
Groups       Count   Sum   Average   Variance
Buffalo      5       275   55        26
Pittsburgh   5       340   68        26.5
Detroit      5       285   57        24.5

ANOVA
Source of Variation   SS    df   MS        F         P-value   F crit
Between Groups        490   2    245       9.54545   0.00331   3.88529
Within Groups         308   12   25.6667
Total                 798   14
Using Excel’s Anova: Single Factor Tool
• Using the p-value:
The value worksheet shows that the p-value is .00331.
The rejection rule is “Reject H0 if p-value < .05”.
Thus, we reject H0 because the p-value = .00331 < α = .05.
We conclude that the mean number of hours worked per week by the managers differs among the three plants.
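The same result comes from SciPy’s one-way ANOVA (a sketch, assuming SciPy is installed):

```python
from scipy import stats

buffalo    = [48, 54, 57, 54, 62]
pittsburgh = [73, 63, 66, 64, 74]
detroit    = [51, 63, 61, 54, 56]

f, p = stats.f_oneway(buffalo, pittsburgh, detroit)
print(f, p)   # F ~ 9.545, p ~ 0.00331, matching the worksheet
```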