Experimental Design - National Sun Yat-sen University


Introduction to Experimental Design

- Validity vs. Reliability

Why Use an Experiment?

- Quantitative questions
- Causal questions
  - Not just descriptive
  - Does the IV cause differences in the DV?
- Tighter control over the situation and relevant variables
- Rule out alternative explanations for the relationship between variables

Variables

- A variable is … a characteristic that varies!
- Types
  - Independent variable
    - Manipulated by the experimenter
    - Levels (2 or more)
  - Dependent variable
    - Measured by the experimenter
    - Affected by the IV
  - Extraneous variable
    - Related to, but not of interest to, the experiment

Definition of Concepts

- Hypothesis: A tentative statement, subject to empirical test, about the expected relationship between variables.
- Independent variable: The variable that is manipulated in an experiment. The independent variable is believed to have an impact on the dependent variable. (It has multiple levels.)
- Dependent variable: The variable measured in a study.

Experimental Research Design

- Experimental design: Research in which independent variables are manipulated and behavior is measured while all other variables (extraneous variables) are controlled for.
- Random sampling: Drawing from the population in a way that ensures an equal opportunity for every member to be included in one or more conditions of the experiment (a code sketch follows below).

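A minimal sketch (not from the slides) of random sampling in Python; the population frame and sample size below are invented purely for illustration.

```python
import random

# Hypothetical sampling frame: every member has an equal chance of being drawn.
population_frame = [f"member_{i}" for i in range(1000)]
sample_size = 50

random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(population_frame, sample_size)
print(sample[:5])
```
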
Experimental Research Design (cont.)

- Control group: A group of subjects in an experiment that does not receive the experimental treatment. The data from the control group are used as a baseline against which data from the experimental group are compared.

Validity

- Validity: The degree to which a research finding corresponds with reality
- Types
  - Internal: Explanatory relationships between IVs and DVs (causal relationship)
  - Construct: Do the results support the theory behind the research?
  - External: Can the findings be generalized to other settings and populations?
  - Statistical: Are the findings the result of chance processes?

Internal Validity

- The fundamental “logic” of the experiment
- Rule out alternative variables as explanations
- Confounding: co-varying variables
  - Can’t be completely eliminated
- Internal validity can be weak in quasi-experimental designs

Construct Validity

- The degree to which the research measures the theory
- Rule out alternative theories
- Auxiliary hypotheses needed to obtain inferences can often be used to save a theory
- Need for additional research

External Validity

- The degree to which findings generalize
- External validity can be most important in organizational and evaluation research

Statistical Validity

- Rule out chance and sampling variation
- Use statistical tests (a short sketch follows below)
- Sample correctly
- Meet test assumptions (e.g., independence)

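A hedged sketch of the “use statistical tests” point: an independent-samples t test on invented treatment and control scores, assuming independent observations (scipy is used here only as one common option).

```python
from scipy import stats

# Fictitious DV scores for two independent groups.
control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4, 3.7]
treatment = [4.9, 5.2, 4.7, 5.0, 4.6, 5.1, 4.8, 5.3]

# Independent-samples t test: a small p-value makes "chance alone" an unlikely explanation.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```
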
Threats to Internal Validity

- History: Events outside of the study/laboratory
- Maturation: Participants grow older, wiser, or more experienced between pre- and post-measurement
- Instrumentation: The effect observed is due to changes in the measuring instrument (changes in procedures)
- Mortality: Dropout of participants from the study
- Selection (Randomization): The nature of the participants in the group or groups being compared (randomization equalizes within “normal” limits)

Maximizing Internal Validity

- Random assignment (sketched in code below)
- Control of extraneous variables
- Elimination of confounds
  - Extraneous variables that covary with the IV

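A minimal sketch of random assignment (illustrative names only, not from the slides): shuffle the participant list, then split it between the experimental and control conditions.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 fictitious participants

random.seed(7)
random.shuffle(participants)  # the randomization step

half = len(participants) // 2
experimental_group = participants[:half]
control_group = participants[half:]
print("Experimental:", experimental_group)
print("Control:", control_group)
```
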
Threats to External Validity

- Other subjects (Ss)
- Other times
- Other settings

Internal vs. External Validity

- Generally these are opposing
- Internal validity
  - Does the design lend itself to testing the hypotheses?
- External validity
  - Are the results only applicable in the controlled setting, or can they be generalized to the real world?
  - More realistic situations tend to have less control
  - More variable sample
  - More variable situations
- One compromises the other.

Types of Experiments

- Experiment: Scientific investigation in which an investigator manipulates and controls one or more independent variables and observes the dependent variable for variation concomitant to the manipulation of the independent variables
- Laboratory Experiment: Research investigation in which the investigator creates a situation with exact conditions so as to control some, and manipulate other, variables
- Field Experiment: Research study in a realistic situation in which one or more independent variables are manipulated by the experimenter under as carefully controlled conditions as the situation will permit

Lab Experiment

- Question: Will the response be the same outside the laboratory?

Experimental Simulation

- Retains some realism of content, though the context is not real
- Examples: simulated grocery aisles; ad-testing facilities

Experimental Simulation

- Example: How does the size of a bottle and the amount left in it affect the amount used? (A rough tabulation of this design follows below.)
- Treatment: The size and fullness of bottles of blue water were varied.
- Women were given a bowl and asked to spray in the amount (the dependent variable) they would use when cleaning their toilet.

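A rough tabulation, with invented numbers rather than data from the study, of how the 2 x 2 design behind this example could be summarized: bottle size and fullness are the manipulated factors, and amount sprayed is the measured DV.

```python
# Invented amounts (ml) sprayed per cell of the 2 x 2 design (size x fullness).
cells = {
    ("small", "nearly_full"):  [12, 14, 13, 15],
    ("small", "nearly_empty"): [9, 8, 10, 9],
    ("large", "nearly_full"):  [18, 20, 19, 21],
    ("large", "nearly_empty"): [13, 12, 14, 13],
}

# Compare cell means to see how each factor appears to shift the DV.
for (size, fullness), amounts in cells.items():
    mean = sum(amounts) / len(amounts)
    print(f"size={size:<5} fullness={fullness:<12} mean amount used = {mean:.1f} ml")
```
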
Summary of Laboratory Experiment

- Experiments in which the experimental treatment is introduced in an artificial or laboratory setting
- Laboratory experiments tend to be artificial
- A testing effect exists, as respondents are aware of being in a test and may not respond naturally
- Results may not have external validity
- Least costly, and allow the experimenter greater control over the experiment
- Alternative explanations of results are reduced, increasing internal validity

Field Experiment

- Takes place in real settings
- Control is traded off for realism
- Example: Test marketing

Field Experiment (cont.)

- Example: PSAs about colon cancer are run in four cities; phone calls are made before and after to see if awareness and doctor visits have increased
- What could be the problem here?

Field Experiment (cont.)

- Often used to fine-tune marketing strategies and to determine sales volume

Summary of Field Experiment

- Research study in which one or more independent variables are manipulated by the experimenter under conditions as carefully controlled as the situation will permit
- Experimental treatment or intervention is introduced in a completely natural setting
- Responses tend to be natural
- Tends to have much greater external validity
- Difficult to control
- Competing explanations for the results exist

Threats to Construct Validity

- An indefinite number of theories may account for the results
- Loose connection between theory and experimental tasks and measurements
  - Use good measurements
  - Choose tasks carefully
- Ss’ understanding of the tasks and the experiment
  - Instructions
  - Hawthorne effect
  - “Good-subject” effect
  - Evaluation apprehension
  - Social desirability

Maximizing Construct Validity

- Measurement is a product of:
  - The construct of interest
  - Other constructs
  - Error
- Well-defined operationalizations
- Multiple measures
- Control of extraneous variables

Experimenter Bias

- Conscious fudging of data
- Unconscious bias
- Blind and double-blind procedures (a sketch of double-blind condition coding follows below)
- Standardize the procedure of the experiment

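An illustrative sketch (all names hypothetical) of double-blind condition coding: conditions are hidden behind arbitrary codes, the experimenter running sessions sees only the codes, and the key is kept separately until analysis.

```python
import random

random.seed(3)
conditions = ["treatment", "control"]
codes = ["A", "B"]
random.shuffle(codes)
code_for = dict(zip(conditions, codes))  # the sealed key, e.g. {"treatment": "B", "control": "A"}

participants = [f"P{i:02d}" for i in range(1, 11)]
random.shuffle(participants)
assignment = {p: conditions[i % 2] for i, p in enumerate(participants)}  # random assignment

# What the session experimenter sees: condition codes only, never the labels.
session_sheet = {p: code_for[assignment[p]] for p in participants}
print(session_sheet)
```
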
Reliability
[Figure: target-shooting analogy. Old Rifle (low reliability), New Rifle (high reliability), New Rifle with sun glare (reliable but not valid)]

Reliability (cont.)

- The extent to which a measure consistently discriminates among individuals at one time or over the course of time
- Test-retest reliability (stability)
- Internal consistency (reliability of components): Cronbach’s alpha coefficient (Cronbach, 1951); the higher, the better (both are sketched in code below)

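Two quick, illustrative computations on fictitious data: test-retest reliability as the correlation between two administrations, and Cronbach’s alpha from an item-score matrix using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import numpy as np

# Test-retest reliability: correlate the same people's scores at two times.
time1 = np.array([10, 12, 9, 15, 11, 13, 14, 10])
time2 = np.array([11, 12, 10, 14, 11, 14, 13, 9])
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Cronbach's alpha: rows = respondents, columns = items of one scale.
items = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"test-retest r = {test_retest_r:.2f}, Cronbach's alpha = {alpha:.2f}")
```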