Educational Research
Chapter 13
Inferential Statistics
Gay, Mills, and Airasian
10th Edition
Topics Discussed in this Chapter

Concepts underlying inferential statistics
Types of inferential statistics
  Parametric
    T tests
    ANOVA
      One-way
      Factorial
    Post-hoc comparisons
    Multiple regression
    ANCOVA
  Nonparametric
    Chi square
Important Perspectives

Inferential statistics
  Allow researchers to generalize to a population of individuals based on information obtained from a sample of those individuals
  Assess whether the results obtained from a sample are the same as those that would have been calculated for the entire population
  Probabilistic nature of inferential analyses
Underlying Concepts

Sampling distributions
Standard error
Null and alternative hypotheses
Tests of significance
Type I and Type II errors
One-tailed and two-tailed tests
Degrees of freedom
Tests of significance
Sampling Distributions

A sampling distribution is the distribution you would get by taking multiple samples from your population, each with its own mean and standard deviation, so that your inferential statistics are based on the most representative numbers for your population. As you take more samples, you get better information.
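
As a rough illustration (not from the text), this Python sketch simulates a sampling distribution of the mean by drawing repeated samples from a made-up population; the population values, sample size, and number of samples are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.normal(loc=100, scale=15, size=100_000)  # hypothetical population

    # Draw many samples and record each sample mean
    sample_means = [rng.choice(population, size=30).mean() for _ in range(1_000)]

    print(np.mean(sample_means))  # clusters around the true population mean (about 100)
    print(np.std(sample_means))   # spread of the sampling distribution of the mean
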
Standard Error

Error that occurs at random because you used a sample rather than the whole population. The more information you have about the true sampling distribution (more subjects in your sample, or more samples from your population), the lower your standard error.
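
As a minimal sketch (with made-up scores), the standard error of the mean can be estimated as the sample standard deviation divided by the square root of the sample size:

    import numpy as np
    from scipy import stats

    scores = np.array([72, 85, 90, 68, 77, 95, 88, 73, 81, 79])  # hypothetical sample

    sem = scores.std(ddof=1) / np.sqrt(len(scores))  # s / sqrt(n)
    print(sem)
    print(stats.sem(scores))  # same value computed by SciPy

Note how a larger sample size in the denominator would shrink the standard error, matching the point above.
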
Null and Alternative Hypotheses

The null hypothesis represents a statistical tool important to inferential tests of significance
The alternative hypothesis usually represents the research hypothesis related to the study
Null and Alternative Hypotheses

Comparisons between groups
  Null: no difference between the mean scores of the groups
  Alternative: differences between the mean scores of the groups
Relationships between variables
  Null: no relationship exists between the variables being studied
  Alternative: a relationship exists between the variables being studied
Null and Alternative Hypotheses

Acceptance of the null hypothesis
  The difference between groups is too small to attribute it to anything but chance
  The relationship between variables is too small to attribute it to anything but chance
Rejection of the null hypothesis
  The difference between groups is so large it can be attributed to something other than chance (e.g., the experimental treatment)
  The relationship between variables is so large it can be attributed to something other than chance (e.g., a real relationship)
Tests of Significance

Statistical analyses that help decide whether to accept or reject the null hypothesis
Alpha level
  An established probability level that serves as the criterion for deciding whether to accept or reject the null hypothesis
  Common levels in education: .01, .05, .10
  Way of thinking: with an alpha of .10, there is a 10% probability that the result happened by chance.
Type I and Type II Errors

Correct decisions
  The null hypothesis is true and it is accepted
  The null hypothesis is false and it is rejected
Incorrect decisions
  Type I error: the null hypothesis is true and it is rejected (there really isn't anything going on, but you decide there is)
  Type II error: the null hypothesis is false and it is accepted (there really is something going on, but you decide there isn't)
Type I and Type II Errors

Power: the ability of a significance test to reject a null hypothesis that is false (i.e., to avoid a Type II error)
Control of Type I errors using the alpha level
  As alpha becomes smaller (.10, .05, .01, .001, etc.), there is less chance of a Type I error
  Meaning: as you move from a 10% possibility that the result is due to chance to a 1% possibility, you are less likely to be wrong when you say that something did not happen by chance.
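
As an illustration of power (not from the text), this Monte Carlo sketch estimates how often a t test correctly rejects a false null hypothesis; the alpha level, group size, and effect size are assumed values.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha, n, effect = 0.05, 30, 0.5   # assumed alpha, group size, effect size (in SD units)

    rejections = 0
    for _ in range(2_000):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(effect, 1.0, n)        # a true difference exists
        _, p = stats.ttest_ind(treatment, control)
        rejections += p < alpha

    print(rejections / 2_000)   # estimated power: proportion of correct rejections
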
One-Tailed and Two-Tailed Tests

One-tailed: an anticipated outcome in a specific direction
  The treatment group is significantly higher than the control group
  The treatment group is significantly lower than the control group
Two-tailed: the anticipated outcome is not directional (the null is that the treatment and control groups are equal; a difference in either direction counts)
You decide BEFORE you start your study whether you will do a one-tailed or two-tailed test
Ample justification is needed for using one-tailed tests (this may be previous studies, etc.)
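
As a small sketch with made-up scores, SciPy's independent-samples t test lets you choose the direction through its alternative argument:

    import numpy as np
    from scipy import stats

    treatment = np.array([82, 88, 75, 91, 84, 79, 86, 90])  # hypothetical scores
    control   = np.array([78, 80, 72, 85, 77, 74, 81, 79])

    # Two-tailed: is there a difference in either direction?
    print(stats.ttest_ind(treatment, control, alternative='two-sided'))

    # One-tailed: is the treatment group significantly higher than the control group?
    print(stats.ttest_ind(treatment, control, alternative='greater'))
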
Degrees of Freedom

The more things you want to measure (independent variables), the more likely you are to make an error in your model. Degrees of freedom are therefore used in the calculation (rather than the actual number) to correct for this chance of error. The same applies to samples, which is why you usually see expressions like df = N - 1 (where N is the sample size).
Degrees of freedom are used when entering statistical tables to establish the critical values of the test statistics.
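
Statistical software can replace the table lookup. As an assumed example, the critical t value for a two-tailed test at alpha = .05 with a sample of 20:

    from scipy import stats

    n = 20
    df = n - 1                                   # degrees of freedom for a one-sample t test
    critical_t = stats.t.ppf(1 - 0.05 / 2, df)   # two-tailed critical value at alpha = .05
    print(critical_t)                            # about 2.093, matching a printed t table
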
Tests of Significance

Two types
  Parametric: a type of statistical test with certain "assumptions" that must be met before you can use it.
  Nonparametric: good for ordinal or nominal scale data, for data where the "parametric assumptions" have been violated, or when you don't know whether the distribution is normal.
Tests of Significance

Four assumptions of parametric tests (a quick check of the first and last assumptions is sketched after this list)
  Normal distribution of the dependent variable
  Interval or ratio data
  Independence of subjects: the selection of one subject does not affect the selection of another (met by random sampling)
  Homogeneity of variance: the variances of the groups in the sample should be approximately equal
Advantages of parametric tests
  More statistically powerful
  More versatile
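
As a hedged sketch with made-up data, two common SciPy checks touch on these assumptions: Shapiro-Wilk for normality and Levene's test for homogeneity of variance.

    import numpy as np
    from scipy import stats

    group1 = np.array([74, 81, 69, 88, 77, 83, 72, 90])  # hypothetical group scores
    group2 = np.array([70, 79, 85, 73, 76, 82, 68, 80])

    # Normality of the dependent variable, checked per group
    print(stats.shapiro(group1))
    print(stats.shapiro(group2))

    # Homogeneity of variance across the groups
    print(stats.levene(group1, group2))
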
Tests of Significance

Assumptions of nonparametric tests
  No assumptions about the shape of the distribution of the dependent variable
  Ordinal or categorical data
Disadvantages of nonparametric tests
  Less statistically powerful
  Require large samples
  Cannot answer some research questions
Types of Inferential Statistics

Two issues discussed
  Steps involved in testing for significance
  Types of tests
Steps in Statistical Testing

State the null and alternative hypotheses
Set the alpha level
Identify the appropriate test of significance (Table 13.12 in your text)
Identify the sampling distribution
Identify the test statistic
Compute the test statistic
Steps in Statistical Testing

Identify the criteria for significance
  Decide what would be considered statistically significant for you.
  This may be done by deciding on your alpha level.
Compare the computed test statistic to the criteria for significance
  If using SPSS for Windows, compare the probability level of the observed test statistic to the alpha level
Steps in Statistical Testing

Accept or reject the null hypothesis
  Accept: the observed probability level of the observed statistic is larger than alpha
  Reject: the observed probability level of the observed statistic is smaller than alpha
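
As an illustration of these steps (not from the text), the sketch below runs an independent-samples t test on made-up scores, then compares the observed p value to an assumed alpha of .05 to accept or reject the null hypothesis.

    import numpy as np
    from scipy import stats

    alpha = 0.05                                                # step: set the alpha level
    experimental = np.array([84, 91, 78, 88, 95, 82, 87, 90])  # hypothetical data
    control      = np.array([79, 83, 74, 80, 85, 77, 81, 78])

    t_stat, p_value = stats.ttest_ind(experimental, control)   # compute the test statistic

    if p_value < alpha:
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}: reject the null hypothesis")
    else:
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}: accept (fail to reject) the null hypothesis")
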
Specific Statistical Tests

T tests
  Comparison of two means
  Example: examining the difference between the mean pretest scores for an experimental and control group
Specific Statistical Tests

Simple analysis of variance (ANOVA)
  Comparison of two or more means
  Example: examining the difference between posttest scores for two treatment groups and a control group
  See pages 341-342 for an overview of how ANOVA is calculated, and page 347 to see how to interpret it.
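
A minimal one-way ANOVA sketch, assuming made-up posttest scores for two treatment groups and a control group:

    import numpy as np
    from scipy import stats

    treatment_a = np.array([85, 90, 88, 92, 87])   # hypothetical posttest scores
    treatment_b = np.array([80, 84, 79, 86, 83])
    control     = np.array([75, 78, 74, 80, 77])

    f_stat, p_value = stats.f_oneway(treatment_a, treatment_b, control)
    print(f_stat, p_value)   # a small p value suggests at least one group mean differs
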
Specific Statistical Tests

Multiple comparisons
  Omnibus ANOVA results
    More than two sets of means are calculated (e.g., pretest, during test, posttest).
    A significant result indicates that a difference exists somewhere across the pairs of scores
    You still need to know which specific pairs are different
  Types of tests
    A priori contrasts (planned: you expect a difference between only two sets of means beforehand)
    Post-hoc comparisons (unplanned: you are trying to figure out where the differences are)
      Scheffe (a type of post-hoc comparison)
      Tukey HSD (a type of post-hoc comparison)
Specific Statistical Tests

Multiple comparisons (continued)
  Example: examining the difference between mean scores for Groups 1 & 2, Groups 1 & 3, and Groups 2 & 3
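
As a hedged sketch of a Tukey HSD post-hoc comparison (assuming a recent SciPy that includes tukey_hsd, and made-up group scores):

    import numpy as np
    from scipy import stats

    group1 = np.array([85, 90, 88, 92, 87])   # hypothetical scores for three groups
    group2 = np.array([80, 84, 79, 86, 83])
    group3 = np.array([75, 78, 74, 80, 77])

    # Tukey HSD compares every pair: 1 vs 2, 1 vs 3, 2 vs 3
    result = stats.tukey_hsd(group1, group2, group3)
    print(result)

In practice this post-hoc step is run only after a significant omnibus ANOVA result.
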
Specific Statistical Tests

Two-factor ANOVA
  Also known as factorial ANOVA
  Comparison of means when two independent variables are being examined
  Effects
    Two main effects, one for each independent variable
    One interaction effect for the simultaneous interaction of the two independent variables
Specific Statistical Tests

Two-factor ANOVA (continued)
  Example: examining the mean score differences for male and female students in an experimental or control group
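
A minimal factorial ANOVA sketch using statsmodels; the data frame, column names, and scores are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical data: score by gender (M/F) and group (experimental/control)
    df = pd.DataFrame({
        "score":  [82, 88, 75, 91, 70, 78, 73, 80, 85, 89, 72, 77],
        "gender": ["M", "M", "F", "F", "M", "M", "F", "F", "M", "F", "M", "F"],
        "group":  ["exp", "exp", "exp", "exp", "ctl", "ctl", "ctl", "ctl", "exp", "exp", "ctl", "ctl"],
    })

    # Two main effects (gender, group) plus their interaction
    model = smf.ols("score ~ C(gender) * C(group)", data=df).fit()
    print(anova_lm(model, typ=2))
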
Specific Statistical Tests

Analysis of covariance (ANCOVA)
  Comparison of two or more means with statistical control of an extraneous variable
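
A minimal ANCOVA sketch, assuming the pretest score is the extraneous variable to control; all column names and values are made up.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical data: posttest compared across groups while controlling for pretest
    df = pd.DataFrame({
        "posttest": [84, 91, 78, 88, 79, 83, 74, 80],
        "pretest":  [70, 75, 68, 73, 71, 74, 66, 72],
        "group":    ["exp", "exp", "exp", "exp", "ctl", "ctl", "ctl", "ctl"],
    })

    # The covariate (pretest) enters the model alongside the group factor
    model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
    print(anova_lm(model, typ=2))
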
Specific Statistical Tests

Multiple regression
  A correlational technique that uses multiple predictor variables to predict a single criterion variable
  Does not use variance or standard deviation to determine significance.
  You look for the relationships among variables, e.g., the relationship of gender, personality type, and previous scores to college GPA
Specific Statistical Tests

Multiple regression (continued)
  Example: predicting college freshmen's GPA on the basis of their ACT scores, high school GPA, and high school rank in class
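
A minimal multiple regression sketch for that example, using statsmodels with a hypothetical data frame (all values and column names are made up):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical freshmen data: predict college GPA from ACT, HS GPA, and class rank
    df = pd.DataFrame({
        "college_gpa": [3.1, 3.5, 2.8, 3.9, 2.5, 3.3, 3.7, 2.9],
        "act":         [24, 28, 21, 32, 19, 26, 30, 22],
        "hs_gpa":      [3.2, 3.6, 2.9, 4.0, 2.6, 3.4, 3.8, 3.0],
        "hs_rank":     [40, 20, 60, 5, 75, 30, 10, 55],   # percentile rank in class
    })

    model = smf.ols("college_gpa ~ act + hs_gpa + hs_rank", data=df).fit()
    print(model.summary())   # coefficients show each predictor's contribution
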
Specific Statistical Tests

Chi Square
  A nonparametric test in which observed proportions are compared to expected proportions
  Looks at frequency counts, percentages, or proportions
  Examples
    Is there a difference between the proportions of parents in favor of or opposed to an extended school year?
    Is there a difference between the proportions of husbands and wives who are in favor of or opposed to an extended school year?
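
As a sketch of the second example with made-up counts, a chi-square test of a 2 x 2 contingency table in SciPy:

    import numpy as np
    from scipy import stats

    # Hypothetical counts: rows are husbands and wives, columns are in favor / opposed
    observed = np.array([[45, 55],
                         [60, 40]])

    chi2, p_value, dof, expected = stats.chi2_contingency(observed)
    print(chi2, p_value)   # a small p value suggests the proportions differ

The expected array returned here holds the proportions the observed counts are compared against.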