Analysis of Variance and Experimental Design


Inferential Statistics
Research is about trying to make valid inferences.

Inferential statistics:
The part of statistics that allows researchers to generalize their findings beyond the data collected.

Statistical inference:
A procedure for making inferences or generalizations about a larger population from a sample of that population.
How Statistical Inference Works

Basic Terminology

Population (statistical population):
Any collection of entities that have at least one characteristic in common
A collection (an aggregate) of measurements about which an inference is desired
Everything you wish to study

Parameter:
The numbers that describe characteristics of scores in the population (mean, variance, standard deviation, correlation coefficient, etc.)
Body Weight Data (Kg): a population of values (N = 28, μ = 44, σ² = 1.214)
44 46 43 44 45 44 42 44 46 44 43 44 44 42 43 44 43 43 46 45 44 43 44 45 46 43 44 44
Basic Terminology

Sample:
A part of the population
A finite number of measurements chosen from a population

Statistics:
The numbers that describe characteristics of scores in the sample (mean, variance, standard deviation, correlation coefficient, reliability coefficient, etc.)
Body Weight Data (Kg): drawing a sample from the population of values
Values of X (student body weight) are selected one at a time, giving samples of n = 1, 2, 3, 4 and then 5 values:
x1 = 43, x2 = 44, x3 = 45, x4 = 44, x5 = 44

A Simple Random Sample: a sample that has been selected in such a way that all members of the population have an equal chance of being picked.
Basic concepts of statistics
Measures of central tendency
Measures of dispersion & variability
Measures of central tendency

Arithmetic mean (= simple average)
• The best estimate of the population mean is the sample mean, \bar{X}:

\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}

where X_i is the i-th measurement, i is the index of measurement, and n is the sample size.
Measures of variability

All describe how "spread out" the data are.

1. Sum of squares (SS), the sum of squared deviations from the mean
• For a sample:

SS = \sum (X_i - \bar{X})^2

2. Average or mean sum of squares = variance, s²
• For a sample:

s^2 = \frac{\sum (X_i - \bar{X})^2}{n - 1}

Why n − 1? The denominator n − 1 represents the degrees of freedom, ν (Greek letter "nu"), the number of independent quantities in the estimate s². Because

\sum_{i=1}^{n} (X_i - \bar{X}) = 0,

once n − 1 of the deviations are specified, the last deviation is already determined.
• Variance has squared measurement units; to regain the original units, take the square root.

3. Standard deviation, s
• For a sample:

s = \sqrt{\frac{\sum (X_i - \bar{X})^2}{n - 1}}

4. Standard error of the mean
• For a sample:

s_{\bar{X}} = \sqrt{\frac{s^2}{n}}

The standard error of the mean is a measure of variability among the means of repeated samples from a population.
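As a quick check of these definitions, the statistics above can be computed directly; the following is a minimal Python sketch (Python is not part of the original slides) using the n = 5 body-weight sample shown earlier (43, 44, 45, 44, 44).

```python
import math

# Sample of student body weights (kg) drawn from the slide's population
sample = [43, 44, 45, 44, 44]
n = len(sample)

# Arithmetic mean: X-bar = (sum of X_i) / n
mean = sum(sample) / n

# Sum of squares: SS = sum of (X_i - X-bar)^2
ss = sum((x - mean) ** 2 for x in sample)

# Sample variance: s^2 = SS / (n - 1), with n - 1 degrees of freedom
variance = ss / (n - 1)

# Standard deviation: s = sqrt(s^2)
sd = math.sqrt(variance)

# Standard error of the mean: s_Xbar = sqrt(s^2 / n)
sem = math.sqrt(variance / n)

print(mean, ss, variance, sd, sem)  # 44.0 2.0 0.5 0.7071... 0.3162...
```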
Basic Statistical Symbols
Body Weight Data (Kg): the same population of values (N = 28, μ = 44, σ² = 1.214)
Body Weight Data (Kg): repeated random samples from the population, each with sample size n = 5

First sample:  43 44 45 44 44  →  \bar{X} = 44
Second sample: 46 44 46 45 44  →  \bar{X} = 45
Third sample:  42 42 43 45 43  →  \bar{X} = 43
Summary

Sample              Sampling 1   Sampling 2   Sampling 3
First               43 (−1)      46 (+1)      42 (−1)
Second              44 (+0)      44 (−1)      42 (−1)
Third               45 (+1)      46 (+1)      43 (+0)
Fourth              44 (+0)      45 (+0)      45 (+2)
Fifth               44 (+0)      44 (−1)      43 (+0)
Average             44           45           43
Sum of squares      2            4            6
Mean square         0.50         1.00         1.50
Standard deviation  0.707        1.00         1.225
For a large enough number of large samples, the frequency distribution of the sample means (= the sampling distribution) approaches a normal distribution.

[Figure: frequency of the sample means plotted against the sample mean — a bell-shaped (normal) curve]
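A small simulation makes this concrete: draw many random samples of n = 5 from the body-weight population above and tabulate the sample means. This is only an illustrative sketch (the simulation itself is not on the slides); it uses the 28 population values listed earlier.

```python
import random
from collections import Counter

# The population of 28 body weights (kg) from the slides (mu = 44, sigma^2 = 1.214)
population = [44, 46, 43, 44, 45, 44, 42, 44, 46, 44, 43, 44, 44, 42,
              43, 44, 43, 43, 46, 45, 44, 43, 44, 45, 46, 43, 44, 44]

random.seed(1)
sample_means = []
for _ in range(10_000):                    # repeated random sampling
    sample = random.sample(population, 5)  # simple random sample, n = 5
    sample_means.append(sum(sample) / 5)

# Frequency distribution of the sample means = the sampling distribution
freq = Counter(round(m, 1) for m in sample_means)
for value in sorted(freq):
    print(f"{value:5.1f} | {'*' * (freq[value] // 100)}")
```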
Testing statistical hypotheses between 2 means

1. State the research question in terms of statistical hypotheses.
It always starts with a statement that hypothesizes "no difference", called the null hypothesis = H0.
H0: The mean height of female students is equal to the mean height of male students.
Then we formulate a statement that must be true if the null hypothesis is false, called the alternate hypothesis = HA.
HA: The mean height of female students is not equal to the mean height of male students.
If we reject H0 as a result of sample evidence, then we conclude that HA is true.
2. Choose an appropriate statistical test that would allow you to reject H0 if H0 were false.
E.g., Student's t test for hypotheses about means (William Sealy Gosset, "Student").

t statistic:

t = \frac{\bar{X}_1 - \bar{X}_2}{s_{(\bar{X}_1 - \bar{X}_2)}}

where \bar{X}_1 is the mean of sample 1, \bar{X}_2 is the mean of sample 2, and s_{(\bar{X}_1 - \bar{X}_2)} is the standard error of the difference between the sample means.

To estimate s_{(\bar{X}_1 - \bar{X}_2)}, we must first know the relation between both populations.
How to evaluate the success of this experimental design class

• Compare the scores in statistics and experimental design of several students
• Compare the scores in experimental design of several students from two serial classes
• Compare the scores in experimental design of several students from two different classes

1. Comparing the scores in statistics and experimental design of several students
   Same students → dependent populations
   Different students → independent populations, with identical variance or not identical variance

2. Comparing the scores in experimental design of several students from two serial classes
   Different students → independent populations, with identical variance or not identical variance

3. Comparing the scores in experimental design of several students from two different classes
   Different students → independent populations, with identical variance or not identical variance
Relation between populations

• Dependent populations
• Independent populations
  1. Identical (homogeneous) variance
  2. Not identical (heterogeneous) variance
Dependent Populations

Sample test statistic:

t = \frac{\bar{d} - d_0}{SE_{\bar{d}}}

Null hypothesis: the mean difference is equal to d_0.

Compare with the null distribution: t with n − 1 df, where n is the number of pairs.

How unusual is this test statistic?
P < 0.05: reject H0
P > 0.05: fail to reject H0
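A minimal sketch of this dependent-samples (paired) test using scipy's ttest_rel; the before/after scores below are hypothetical illustration values, not data from the slides, and d_0 is taken as 0.

```python
from scipy import stats

# Hypothetical paired scores: the same five students measured twice
before = [6, 8, 10, 6, 10]
after = [7, 9, 10, 8, 11]

# Paired t test: t = (d-bar - d0) / SE(d-bar), with n - 1 df (n = number of pairs)
t_stat, p_value = stats.ttest_rel(after, before)

if p_value < 0.05:
    print(f"t = {t_stat:.3f}, P = {p_value:.3f} < 0.05: reject H0")
else:
    print(f"t = {t_stat:.3f}, P = {p_value:.3f} > 0.05: fail to reject H0")
```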
Independent Populations with homogeneous variances

Pooled variance:

s_p^2 = \frac{df_1\, s_1^2 + df_2\, s_2^2}{df_1 + df_2} = \frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}

Then,

SE_{\bar{Y}_1 - \bar{Y}_2} = \sqrt{\frac{s_p^2}{n_1} + \frac{s_p^2}{n_2}}

and the test statistic is

t = \frac{\bar{Y}_1 - \bar{Y}_2}{SE_{\bar{Y}_1 - \bar{Y}_2}}
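A sketch of the pooled-variance test, computed both from the formulas above and with scipy's ttest_ind (equal_var=True selects the pooled test); the two score lists are the Group A and Group B values that appear later in the slides.

```python
import math
from scipy import stats

group_a = [6, 8, 10, 6, 10]  # mean 8, sample variance 4
group_b = [9, 4, 7, 5, 5]    # mean 6, sample variance 4

n1, n2 = len(group_a), len(group_b)
mean1, mean2 = sum(group_a) / n1, sum(group_b) / n2
s1_sq = sum((x - mean1) ** 2 for x in group_a) / (n1 - 1)
s2_sq = sum((x - mean2) ** 2 for x in group_b) / (n2 - 1)

# Pooled variance: s_p^2 = (df1*s1^2 + df2*s2^2) / (df1 + df2)
sp_sq = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)

# Standard error of the difference and the t statistic
se_diff = math.sqrt(sp_sq / n1 + sp_sq / n2)
t_manual = (mean1 - mean2) / se_diff

# The same test with scipy
t_scipy, p_value = stats.ttest_ind(group_a, group_b, equal_var=True)

print(t_manual, t_scipy, p_value)  # both t values are about 1.581 (df = 8)
```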
When sample sizes are small, the sampling distribution is described better by the t distribution than by the standard normal (Z) distribution.
The shape of the t distribution depends on the degrees of freedom, ν = n − 1.

[Figure: t distributions for ν = 1, 5, 25, and ν = ∞, which coincides with Z]
The distribution of a test statistic is divided into an area of acceptance and an area of rejection.

For α = 0.05: an area of rejection of 0.025 below the lower critical value, an area of acceptance of 0.95 around 0, and an area of rejection of 0.025 above the upper critical value.

The critical t for a two-tailed test about equality is t_{α(2), ν}.
Independent Populations with heterogeneous variances

t = \frac{\bar{Y}_1 - \bar{Y}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}

df = \frac{\left( \dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2} \right)^2}{\dfrac{(s_1^2 / n_1)^2}{n_1 - 1} + \dfrac{(s_2^2 / n_2)^2}{n_2 - 1}}
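When the two variances cannot be assumed equal, scipy's ttest_ind with equal_var=False applies Welch's version of the test, using the degrees-of-freedom formula above. A short sketch with hypothetical scores:

```python
from scipy import stats

# Hypothetical groups with clearly unequal spread
group_1 = [12, 15, 11, 14, 13, 16]
group_2 = [22, 9, 30, 5, 27, 12]

# equal_var=False gives Welch's t test (Welch-Satterthwaite df)
t_stat, p_value = stats.ttest_ind(group_1, group_2, equal_var=False)
print(t_stat, p_value)
```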
Analysis of Variance (ANOVA)

Independent t-test
• Compares the means of one variable for TWO groups of cases.
• Statistical formula:

t_{\bar{X}_1 - \bar{X}_2} = \frac{\bar{X}_1 - \bar{X}_2}{S_{\bar{X}_1 - \bar{X}_2}}

Meaning: compare the 'standardized' mean difference.
• But this is limited to two groups. What if there are more than 2 groups?
  • Pairwise t tests (previous example)
  • ANOVA (Analysis of Variance)
From t Test to ANOVA

1. Pairwise t tests
If you compare three or more groups using t tests at the usual 0.05 level of significance, you would have to compare each pair (A to B, A to C, B to C), so the chance of getting at least one falsely significant result would be:
1 − (0.95 × 0.95 × 0.95) = 14.3%
Multiple t tests will increase the false-alarm rate, as the sketch below illustrates.
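The 14.3% figure follows from the product rule, assuming the three comparisons are independent; a short sketch of that arithmetic and of how the familywise error grows with the number of groups (the helper name familywise_error is just for illustration):

```python
from math import comb

alpha = 0.05

# Three groups -> 3 pairwise tests at alpha = 0.05 each
print(1 - (1 - alpha) ** 3)  # 0.142625, i.e. about 14.3%

def familywise_error(k, alpha=0.05):
    """Chance of at least one false alarm over all C(k, 2) pairwise t tests."""
    return 1 - (1 - alpha) ** comb(k, 2)

print(familywise_error(4))  # 6 pairwise tests -> about 26.5%
```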
From t Test to ANOVA

2. Analysis of Variance
In the t test, the mean difference is used. Similarly, the ANOVA test compares the observed variance among means.
The logic behind ANOVA:
• If the groups are from the same population, the variance among the means will be small (note that the group means are not exactly the same).
• If the groups are from different populations, the variance among the means will be large.
What is ANOVA?
Analysis of Variance
A procedure designed to determine if the
manipulation of one or more independent
variables in an experiment has a statistically
significant influence on the value of the dependent
variable.
Assumption:
Each independent variable is categorical (nominal
scale). Independent variables are called Factors and
their values are called levels.
The dependent variable is numerical (ratio scale)
What is ANOVA?
The basic idea of ANOVA:
The “variance” of the dependent variable
given the influence of one or more
independent variables {Expected Sum of
Squares for a Factor} is checked to see if it is
significantly greater than the “variance” of
the dependent variable (assuming no
influence of the independent variables) {also
known as the Mean-Square-Error (MSE)}.
Pair-t-Test

Student        Score    Student     Score
Amir           6        Budi        9
Abas           8        Berta       4
Abi            10       Bambang     7
Aura           6        Banu        5
Ana            10       Betty       5

Average        8                    6
n              5                    5
Var. sample    4                    4
Pooled Var.    = 4

tcalc = 1.581    t-table = 2.306
ANOVA TABLE OF 2 POPULATIONS

SV                    SS          DF                Mean square (M.S.)
Between populations   SSbetween   1                 SSB / DFB = MSB
Within populations    SSwithin    (n1−1) + (n2−1)   SSW / DFW = MSW = s²
TOTAL                 SStotal     n1 + n2 − 1
ANOVA TABLE OF 2 POPULATIONS

SV                    SS    DF   Mean square (M.S.)
Between populations   10    1    10
Within populations    32    8    4
TOTAL                 42    9

Fcalc = 2.50    Ftable = 5.318
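For two groups, the ANOVA F statistic is the square of the pooled-variance t statistic (1.581² ≈ 2.50), which is what the table above shows. A sketch that reproduces both numbers from the Group A and Group B scores using scipy:

```python
from scipy import stats

group_a = [6, 8, 10, 6, 10]
group_b = [9, 4, 7, 5, 5]

# One-way ANOVA on the two groups
f_stat, p_anova = stats.f_oneway(group_a, group_b)

# Pooled-variance t test on the same data
t_stat, p_ttest = stats.ttest_ind(group_a, group_b, equal_var=True)

print(f_stat, t_stat ** 2)  # both about 2.50: F = t^2 when there are two groups
print(p_anova, p_ttest)     # identical p-values
```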
Rationale for ANOVA

• We can break the total variance in a study into meaningful pieces that correspond to treatment effects and error. That's why we call this Analysis of Variance.

\bar{X}_G : the grand mean, taken over all observations
\bar{X}_A : the mean of any group
\bar{X}_{A1} : the mean of a specific group (group 1 in this case)
X_i : the observation or raw data for the i-th subject
The ANOVA Model

X_i = \bar{X}_G + (\bar{X}_A - \bar{X}_G) + (X_i - \bar{X}_A)

Trial i = the grand mean + a treatment effect + error

SS Total = SS Treatment + SS Error
Analysis of Variance

• Analysis of Variance (ANOVA) can be used to test for the equality of three or more population means using data obtained from observational or experimental studies.
• Use the sample results to test the following hypotheses:
H0: μ1 = μ2 = μ3 = . . . = μk
Ha: Not all population means are equal
• If H0 is rejected, we cannot conclude that all population means are different.
• Rejecting H0 means that at least two population means have different values.
Assumptions for Analysis of Variance

For each population, the response variable is normally distributed.
The variance of the response variable, denoted σ², is the same for all of the populations.
The effect of the independent variable is additive.
The observations must be independent.
Analysis of Variance:
Testing for the Equality of k Population Means
• Between-Treatments Estimate of Population Variance
• Within-Treatments Estimate of Population Variance
• Comparing the Variance Estimates: The F Test
• ANOVA Table
Between-Treatments Estimate of Population Variance

• A between-treatments estimate of σ² is called the mean square due to treatments (MSTR):

MSTR = \frac{\sum_{j=1}^{k} n_j (\bar{x}_j - \bar{\bar{x}})^2}{k - 1}

• The numerator of MSTR is called the sum of squares due to treatments (SSTR).
• The denominator of MSTR represents the degrees of freedom associated with SSTR.
Within-Treatments Estimate of Population Variance

• The estimate of σ² based on the variation of the sample observations within each treatment is called the mean square due to error (MSE):

MSE = \frac{\sum_{j=1}^{k} (n_j - 1) s_j^2}{n_T - k}

• The numerator of MSE is called the sum of squares due to error (SSE).
• The denominator of MSE represents the degrees of freedom associated with SSE.
Comparing the Variance Estimates: The F Test

• If the null hypothesis is true and the ANOVA assumptions are valid, the sampling distribution of MSTR/MSE is an F distribution with MSTR d.f. equal to k − 1 and MSE d.f. equal to nT − k.
• If the means of the k populations are not equal, the value of MSTR/MSE will be inflated because MSTR overestimates σ².
• Hence, we will reject H0 if the resulting value of MSTR/MSE appears to be too large to have been selected at random from the appropriate F distribution.
Test for the Equality of k Population Means

Hypotheses
H0: μ1 = μ2 = μ3 = . . . = μk
Ha: Not all population means are equal

Test Statistic
F = MSTR/MSE
Test for the Equality of k Population Means

Rejection Rule
Using test statistic: Reject H0 if F > Fα
Using p-value: Reject H0 if p-value < α
where the value of Fα is based on an F distribution with k − 1 numerator degrees of freedom and nT − k denominator degrees of freedom.
Sampling Distribution of MSTR/MSE

The figure below shows the rejection region associated with a level of significance equal to α, where Fα denotes the critical value.

[Figure: the F distribution of MSTR/MSE; do not reject H0 for values below the critical value Fα, reject H0 for values above it]
ANOVA Table

Source of    Sum of    Degrees of   Mean
Variation    Squares   Freedom      Squares   F
Treatment    SSTR      k − 1        MSTR      MSTR/MSE
Error        SSE       nT − k       MSE
Total        SST       nT − 1
SST divided by its degrees of freedom nT − 1 is simply the overall sample variance that would be obtained if we treated the entire set of nT observations as one data set:

SST = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{\bar{x}})^2 = SSTR + SSE
What does ANOVA tell us?
ANOVA will tell us whether we have sufficient evidence to say that measurements from at least one treatment differ significantly from at least one other.
It will not tell us which ones differ, or how many differ.
ANOVA vs t-test
ANOVA is like a t-test among multiple data sets simultaneously.
• t-tests can only be done between two data sets, or between one set and a "true" value.
ANOVA uses the F distribution instead of the t distribution.
ANOVA assumes that all of the data sets have equal variances.
• Use caution on close decisions if they don't.
ANOVA – a Hypothesis Test
H0:
There is no significant difference among
the results provided by treatments.
Ha:
At least one of the treatments provides
results significantly different from at least
one other.
Linear Model

Y_{ij} = \mu + \tau_j + \epsilon_{ij}

By definition, \sum_{j=1}^{t} \tau_j = 0.

The experiment produces (r × t) Y_{ij} data values.
The analysis produces estimates of \mu, \tau_1, \tau_2, …, \tau_t (we can then get estimates of the \epsilon_{ij} by subtraction).

Data layout (columns 1, 2, 3, …, t are the treatments; rows are the r replicates):

Column:   1     2     3     …     t
          Y11   Y12   Y13   …     Y1t
          Y21   Y22   Y23   …     Y2t
          Y31   Y32   Y33   …     Y3t
          …     …     …     …     …
          Yr1   Yr2   Yr3   …     Yrt
          -----------------------------
Means:    Y•1   Y•2   Y•3   …     Y•t   (column means)

Y_{\bullet\bullet} = \sum_{j=1}^{t} Y_{\bullet j} / t = "GRAND MEAN"
(assuming the same number of data points in each column; otherwise, Y_{\bullet\bullet} = the mean of all the data)
MODEL: Y_{ij} = \mu + \tau_j + \epsilon_{ij}

Y_{\bullet\bullet} estimates \mu
Y_{\bullet j} - Y_{\bullet\bullet} estimates \tau_j (= \mu_j - \mu), for all j

These estimates are based on Gauss' (1796) PRINCIPLE OF LEAST SQUARES and on COMMON SENSE.
MODEL: Y_{ij} = \mu + \tau_j + \epsilon_{ij}

If you insert the estimates into the MODEL, it follows that

(1)  Y_{ij} = Y_{\bullet\bullet} + (Y_{\bullet j} - Y_{\bullet\bullet}) + \epsilon_{ij}

so our estimate of \epsilon_{ij} is

(2)  \epsilon_{ij} = Y_{ij} - Y_{\bullet j}

Then, Y_{ij} = Y_{\bullet\bullet} + (Y_{\bullet j} - Y_{\bullet\bullet}) + (Y_{ij} - Y_{\bullet j}),

or

(3)  (Y_{ij} - Y_{\bullet\bullet}) = (Y_{\bullet j} - Y_{\bullet\bullet}) + (Y_{ij} - Y_{\bullet j})

TOTAL VARIABILITY in Y = variability in Y associated with X + variability in Y associated with all other factors
If you square both sides of (3) and double-sum both sides (over i and j), you get, after some unpleasant algebra but with lots of terms which "cancel":

\sum_{j=1}^{t} \sum_{i=1}^{r} (Y_{ij} - Y_{\bullet\bullet})^2 = r \sum_{j=1}^{t} (Y_{\bullet j} - Y_{\bullet\bullet})^2 + \sum_{j=1}^{t} \sum_{i=1}^{r} (Y_{ij} - Y_{\bullet j})^2

TSS (TOTAL SUM OF SQUARES) = SSBC (SUM OF SQUARES BETWEEN COLUMNS) + SSW, also called SSE (SUM OF SQUARES WITHIN COLUMNS)
ANOVA TABLE

SV                                 SS     DF          Mean square (M.S.)
Among treatments (among columns)   SSAC   t − 1       SSAC / (t − 1) = MSAC
Within columns (due to error)      SSWC   (r − 1)·t   SSWC / ((r − 1)·t) = MSW
TOTAL                              TSS    tr − 1
Hypotheses:

H0: \tau_1 = \tau_2 = … = \tau_t = 0
H1: not all \tau_j = 0

or

H0: \mu_1 = \mu_2 = … = \mu_t (all column means are equal)
H1: not all \mu_j are equal

The probability law of Fcalc = MSAC / MSW is the F distribution with (t − 1, (r − 1)·t) degrees of freedom, assuming H0 is true; Fcalc is compared with the table (critical) value.
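A sketch that checks the decomposition TSS = SSBC + SSW numerically for an r × t table of Yij values; the data table here is hypothetical, chosen only to exercise the formulas.

```python
# Hypothetical r x t data: rows are replicates (r = 4), columns are treatments (t = 3)
Y = [
    [5.0, 7.0, 6.0],
    [6.0, 8.0, 5.0],
    [5.5, 7.5, 6.5],
    [6.5, 9.0, 6.0],
]
r, t = len(Y), len(Y[0])

col_means = [sum(Y[i][j] for i in range(r)) / r for j in range(t)]  # Y.j
grand_mean = sum(sum(row) for row in Y) / (r * t)                   # Y..

# TSS = sum over i, j of (Yij - Y..)^2
tss = sum((Y[i][j] - grand_mean) ** 2 for i in range(r) for j in range(t))

# SSBC = r * sum over j of (Y.j - Y..)^2
ssbc = r * sum((cm - grand_mean) ** 2 for cm in col_means)

# SSW = sum over i, j of (Yij - Y.j)^2
ssw = sum((Y[i][j] - col_means[j]) ** 2 for i in range(r) for j in range(t))

print(tss, ssbc + ssw)  # the two totals agree
```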
Example: Experimental Design

The Faculty of Agriculture, GMU, would like to know if the teaching quality of experimental design is similar among classes.
A simple random sample of 5 students from each of 3 classes was taken and their grades in experimental design were collected.
Example: Grade of experimental design

Sample Data

Observation        Advance   Broadway   Cindy
1                  6         9          4
2                  8         4          10
3                  10        7          10
4                  6         5          5
5                  10        5          6
Sample Mean        8         6          7
Sample Variance    4         4          8
Example: Experimental Design

Hypotheses
H0: μ1 = μ2 = μ3
Ha: Not all the means are equal
where:
μ1 = mean grade of the Advance class
μ2 = mean grade of the Broadway class
μ3 = mean grade of the Cindy class
Example: Experimental Design

• Mean Square Due to Treatments
Since the sample sizes are all equal, the grand mean is
\bar{\bar{x}} = (8 + 6 + 7)/3 = 7
SSTR = 5(8 − 7)² + 5(6 − 7)² + 5(7 − 7)² = 10
MSTR = 10/(3 − 1) = 5

• Mean Square Due to Error
SSE = 4(4) + 4(4) + 4(8) = 64
MSE = 64/(15 − 3) = 5.33
Example: Experimental Design

F-Test
If H0 is true, the ratio MSTR/MSE should be near 1 because both MSTR and MSE are estimating σ².
If Ha is true, the ratio should be significantly larger than 1 because MSTR tends to overestimate σ².
Example: Experimental Design

Rejection Rule
Using test statistic: Reject H0 if F > 3.89
Using p-value: Reject H0 if p-value < .05
where F.05 = 3.89 is based on an F distribution with 2 numerator degrees of freedom and 12 denominator degrees of freedom.
Example: Experimental Design

Test Statistic
F = MSTR/MSE = 5.00/5.33 = 0.938

Conclusion
F = 0.938 < F.05 = 3.89, so we fail to reject H0.
There is no significant difference in quality among the experimental design classes.
Example: Experimental Design
ANOVA Table

Source of        Sum of    Degrees of   Mean
Variation        Squares   Freedom      Square   Fcalc
Among classes    10        2            5.00     0.938
Within classes   64        12           5.33
Total            74        14
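The whole worked example can be reproduced with one scipy call; a minimal sketch (scipy reports the F statistic and the p-value, so the conclusion follows from comparing the p-value with α = .05):

```python
from scipy import stats

advance = [6, 8, 10, 6, 10]
broadway = [9, 4, 7, 5, 5]
cindy = [4, 10, 10, 5, 6]

f_stat, p_value = stats.f_oneway(advance, broadway, cindy)
print(round(f_stat, 3))   # about 0.938, as in the ANOVA table above
print(p_value > 0.05)     # True -> fail to reject H0
```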
Using Excel’s Anova:
Single Factor Tool



Step 1 Select the Tools pull-down menu
Step 2 Choose the Data Analysis option
Step 3 Choose Anova: Single Factor
from the list of Analysis Tools
Using Excel’s Anova:
Single Factor Tool

Step 4 When the Anova: Single Factor dialog box appears:
Enter B1:D6 in the Input Range box
Select Grouped By Columns
Select Labels in First Row
Enter .05 in the Alpha box
Select Output Range
Enter A8 (your choice) in the Output Range box
Click OK
Using Excel’s Anova:
Single Factor Tool

Value Worksheet (top portion)
1
2
3
4
5
6
7
Observation
1
2
3
4
5
Advance Broadway
6
9
8
4
10
7
6
5
10
5
Cindy
4
10
10
5
6
Using Excel’s Anova:
Single Factor Tool
10

11
12
13
14
15
16
17
18
19
20
21
22
23
24
Value Worksheet
(bottom portion)
Count
Sum Average Variance
SUMMARY
Groups
Advance
Broadway
Cindy
ANOVA
Source of Variation
Among Groups
Within Groups
Total
5
5
5
40
30
35
8
6
7
10
64
2
12
MS
5,000
5,333
74
14
SS
df
4
4
8
F
P-value F crit
0,9375 0,00331 3,88529
Using Excel’s Anova:
Single Factor Tool

Using the p-Value
The value worksheet shows that the p-value is .00331
The rejection rule is “Reject H0 if p-value < .05”
Thus, we reject H0 because the p-value = .00331 <
 = .05
We conclude that the quality of among experimental
design classes is similar