Statistics in Research Planning and Evaluation


Statistics in Research Planning and Evaluation
KNES 510
Research Methods in Kinesiology
1
Probability
• Alpha (α) – a level of probability (of chance occurrence) set by the experimenter prior to the study; sometimes referred to as the level of significance
• Typically it is set at 0.05 or 0.01
• Alpha is used to control for a Type I error
• Type I error – rejection of the null hypothesis when the null hypothesis is true

2
Probability, cont’d
• Type II error – acceptance of the null hypothesis when the null hypothesis is false
• Truth table – a graphic representation of correct and incorrect decisions regarding Type I and Type II errors
• Beta (β) – the probability of a Type II error
3
Truth Table for the Null Hypothesis

            | H0 true               | H0 false
Accept H0   | Correct decision      | Type II error (beta)
Reject H0   | Type I error (alpha)  | Correct decision
4
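The meaning of α can be checked directly: when both samples come from the same population (H0 true), about 5% of t-tests at α = .05 will still reject, each rejection being a Type I error. A minimal simulation sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_tests = 2000
rejections = 0
for _ in range(n_tests):
    # Both samples are drawn from the same population, so H0 is true
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1  # any rejection here is a Type I error
rate = rejections / n_tests
print(f"Empirical Type I error rate: {rate:.3f}")  # close to alpha = .05
```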
Sampling for Null Hypothesis
[Figure] From Experimental Procedures for Behavioral Science, 3rd ed., by R.E. Kirk © 1995. Reprinted with permission of Brooks/Cole, an imprint of the Wadsworth Group, a division of Thomson Learning. Fax 800-730-2215.
5
Controlling Type II Error
1. Increase sample size
2. Increase the size of the treatment effects
3. Decrease experimental error
4. Use a more sensitive experimental design
6
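The first remedy, increasing sample size, can be illustrated with Python's statsmodels library: holding a hypothetical effect size of 0.5 and α = .05 fixed, the chance of avoiding a Type II error (power, covered on a later slide) rises steadily with n.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical scenario: effect size 0.5, alpha = .05; only n changes
analysis = TTestIndPower()
powers = {n: analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
          for n in (10, 20, 40, 80)}
for n, p in powers.items():
    print(f"n = {n:2d} per group -> power = {p:.2f}")
```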
Testing for Significant Differences Between Means

Example: Independent t-test

t = (X̄₁ − X̄₂) / s_(X̄₁ − X̄₂)

s_(X̄₁ − X̄₂) = √(SEM₁² + SEM₂²)

SEM = √(s² / n)
7
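The formula above can be checked against a library implementation. A sketch with hypothetical data for two groups, assuming NumPy and SciPy (SciPy's `equal_var=False` form uses the same SEM-based denominator as the slide):

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two independent groups
x1 = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.3])
x2 = np.array([10.8, 11.2, 10.5, 11.0, 11.6, 10.9])

# SEM = s / sqrt(n) for each group
sem1 = x1.std(ddof=1) / np.sqrt(len(x1))
sem2 = x2.std(ddof=1) / np.sqrt(len(x2))

# Standard error of the difference between means: sqrt(SEM1^2 + SEM2^2)
s_diff = np.sqrt(sem1**2 + sem2**2)

t = (x1.mean() - x2.mean()) / s_diff
print(f"t (by hand) = {t:.3f}")

# Cross-check against SciPy's independent t-test
t_scipy, p = stats.ttest_ind(x1, x2, equal_var=False)
print(f"t (scipy)   = {t_scipy:.3f}, p = {p:.4f}")
```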
What factors affect the likelihood of rejecting the null hypothesis (finding a significant difference between means) in the preceding test (independent t-test)?
• Look carefully at the formula for calculating t (difference scores)
8
Estimating Effect Size
• Effect size represents the standardized difference between two means.
• Formula: ES = (M1 − M2)/s
• ES allows comparison between studies using different dependent variables because it puts data in standard deviation units.
• An effect size of 0 is no difference; 0.2 is small, 0.5 medium, and 0.8 large.
9
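A minimal sketch of the ES computation in Python (the slide's formula leaves s unspecified; the pooled standard deviation used here is one common choice, and the data are hypothetical):

```python
import numpy as np

def cohens_d(x1, x2):
    """ES = (M1 - M2) / s, with s taken as the pooled standard deviation."""
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * np.var(x1, ddof=1) +
                  (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt(pooled_var)

# Hypothetical scores for two groups
treatment = [1, 2, 3, 4, 5]
control = [0, 1, 2, 3, 4]
print(f"ES = {cohens_d(treatment, control):.2f}")  # about 0.63, a medium effect
```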
Effect Size Examples of 0.5s and 1.0s [figure]
10
Power
• Power is the probability of rejecting the null hypothesis when the null hypothesis is false
• Power ranges from 0 to 1
11
Effect Size Curve to Estimate Sample Size When p = .05 [figure]
12
Effect Size Curve to Estimate Sample Size When p = .01 [figure]
13
Context of the Study
How do the findings from the study fit within the context of
• Theory
• Practice
14
Planning Research
Information needed in planning:
• Alpha
• Effect size
• Power
• Sample size
15
Using the Power Calculator When Reading a Research Study
When reading research, the sample size, means, and standard deviations are often supplied. You can calculate the effect size with the formula in chapter 7. Using these data and the power calculator (G*Power) at the Web site below, you can estimate the power to detect a difference or relationship.
http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/
16
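As an alternative to the Web calculator, the same estimate can be sketched with Python's statsmodels package (the sample size and effect size below are hypothetical stand-ins for values read from a study):

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical published study: n = 25 per group, and the effect size
# computed from the reported means and SDs works out to ES = 0.5
power = TTestIndPower().power(effect_size=0.5, nobs1=25, alpha=0.05)
print(f"Estimated power to detect the difference: {power:.2f}")
```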
Using the Power Calculator
to Plan Research
If you are planning your own research, you
can often estimate the effect size from
other studies. By setting your alpha (say
.05) and power (say .8), you can use the
Power Calculator to estimate the sample
size you need.
17
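The same planning step can be sketched with statsmodels' `solve_power`, using the example values from the text (α = .05, power = .8) and an effect size of 0.5 assumed from prior studies:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the missing quantity (sample size) given alpha, power, and ES
n_per_group = TTestIndPower().solve_power(effect_size=0.5,
                                          alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.1f}")  # about 64
```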
Reporting Statistical Data
(Summary From APA and APS)
• How was the power analysis done?
• Always report complications (screen your data).
• Select minimally sufficient statistical analyses.
• Report p values and confidence intervals.
• Report the magnitudes of effects.
• Control for multiple comparisons.
• Report variability using standard deviations.
• Report data to the appropriate level of precision.
18