Presentation of Selected Research Methodology
Activity – Evaluating and Interpreting Research Literature

After reading the materials titled “Evaluating and Interpreting Research Literature,” answer the following questions:

1. Have any of your peers, colleagues, or instructors ever stated that a study “proves” something? If so, briefly describe what he or she said. In light of the reading materials, would you be cautious about believing such a statement? Why? (Answer in one to two paragraphs.)
2. According to the reading materials, what are the primary and secondary purposes for preparing a literature review?
3. What do quantitatively oriented researchers emphasize when sampling that qualitatively oriented researchers do not?
4. Name two examples of common sampling flaws.
Activity – Evaluating and Interpreting Research Literature (cont’d)

1. Name a trait, other than the ones mentioned in this chapter, that you think is inherently difficult to measure. Why?
2. Briefly explain why a highly reliable measuring instrument can be invalid.
3. If a common, well-known IQ scale is considered to have adequate reliability and validity, does this mean the scale has no flaws? If not, briefly explain why.
4. To study causality, what do researchers need to do? Why?
5. If a difference is statistically significant, does this mean the difference is large? If not, what does the fact that a difference is statistically significant tell you? What else can you look at to indicate the magnitude of a difference? (A worked effect-size sketch follows this list.)
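As a point of reference for question 5: statistical significance only tells you the observed difference is unlikely under the null hypothesis; an effect size such as Cohen's d indexes magnitude. A minimal sketch, using made-up scores (the data here are hypothetical, chosen only for illustration):

```python
import statistics

def cohens_d(group_a, group_b):
    """Effect size: mean difference scaled by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_diff = statistics.mean(group_a) - statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return mean_diff / pooled_sd

# Hypothetical test scores for two groups
treatment = [78, 82, 85, 90, 74, 88]
control = [75, 80, 79, 83, 72, 81]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")  # ~0.2 small, ~0.5 medium, ~0.8 large
```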
Review

In your last assignment you developed a research question. Your research question guides the design of your research study methodology. This session will help you better formulate and articulate the research design and methodology section of your paper.
Brief Description of Methodology and Research Design

This section will define how you are going to address the research question or questions. You will be able to:
Present an overview of the methodology
Describe the appropriateness of the methodology
Explain the rationale for selecting the methodology
Research Design Overview
Nonexperimental Research Designs
Quasi-experimental Research Designs
Experimental Research Designs
Qualitative Research Designs
Program Evaluation
Nonexperimental Research Designs

Descriptive
Purpose is to create a detailed description of some phenomenon.
Example: Satisfaction levels of employers with business graduates’ job skills.

Causal-Comparative
Purpose is to compare two or more groups in order to explore possible causes or effects of a phenomenon.
Example: Effects of type of classroom (inclusion vs. non-inclusion) on academic achievement.
Nonexperimental Research Designs (cont’d)

Correlational
Purpose is to determine the strength and direction of the relationship between two variables. (See the sketch below.)
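A minimal sketch of a correlational analysis on hypothetical paired data, computing Pearson's r: the sign gives the direction of the relationship and the magnitude gives its strength. The variables and values are made up for illustration:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: covariance scaled by the product of the SDs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: hours studied vs. exam score
hours = [2, 4, 5, 7, 9]
scores = [55, 62, 70, 74, 85]
print(f"r = {pearson_r(hours, scores):.2f}")  # positive and near 1 => strong direct relationship
```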
Quasi-Experimental Design
Non-Equivalent Control Group

1. The treatment group is composed of students already in the program through self- or external selection; random assignment to treatment and control groups is not possible. Another group that is as similar to the treatment group as possible is selected as a control group. It does not receive the program.
2. Administer the pre-test to both groups. Initial differences can be adjusted later by statistical means.
3. Expose the treatment group to the program while withholding it from the control group.
4. Administer the post-test to both groups.
5. If there is a difference that favors the treatment group, you can be fairly confident (though less so than with random assignment) that it was due to the program. If there is no difference, i.e., if the scores for both groups remained the same or changed equally (up or down), that indicates the program is probably not effective. (A worked sketch of this comparison follows the list.)
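A minimal sketch of the step-5 comparison, using hypothetical pre/post scores: each group's gain is computed, and the program's effect is estimated as the difference in gains. This is one simple way to adjust for initial differences; ANCOVA is another common choice:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical pre- and post-test scores (not real data)
treatment_pre, treatment_post = [60, 65, 58, 70], [72, 78, 69, 83]
control_pre, control_post = [61, 63, 59, 68], [64, 66, 61, 71]

# Gain scores adjust for where each group started
treatment_gain = mean(treatment_post) - mean(treatment_pre)
control_gain = mean(control_post) - mean(control_pre)

# Difference in gains: the program's estimated effect
effect = treatment_gain - control_gain
print(f"Treatment gain: {treatment_gain:.1f}, control gain: {control_gain:.1f}")
print(f"Estimated program effect: {effect:.1f} points")
```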
Quasi-Experimental Designs: Non-Equivalent Control Group

The treatment group (T) gets the program (X); the control group (C) does not.

Treatment group (T), already determined:   Pre-Test → X (program) → Post-Test
Control group (C), selected to be similar: Pre-Test → (no program) → Post-Test
Controlling Non-Equated Variables Through Statistical Analysis (or Making Non-Equivalent Groups Similar)

Oftentimes we want to evaluate the effectiveness of a program that is already in place, and we are not able to construct a treatment and a control group. For example, suppose we wanted to evaluate the effectiveness of public vs. private schools on academic achievement, and we looked at the average NAEP math scores for 4th-grade students in public and private schools and found the following:
[Bar chart: average NAEP 4th-grade math scores, public vs. private; the private-school average is higher in this overall comparison.]
But what does the picture look like when we control for SES?

When we compare public and private students of the same SES, we find there is little difference in their achievement. But because there are more high-SES students in private schools, the overall comparison is misleading. (For the precise data on these comparisons, see Lubienski and Lubienski, “A New Look at Public and Private Schools,” Phi Delta Kappan, May 2005.)
[Bar chart: NAEP math scores for public vs. private students within low-, mid-, and high-SES groups, showing little public/private difference within each group.]
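A minimal sketch of why the overall comparison misleads, using entirely hypothetical numbers: if public and private students score about the same within each SES group, but private schools enroll proportionally more high-SES students, the unadjusted averages still diverge (a composition effect, sometimes called Simpson's paradox):

```python
# Hypothetical within-SES mean scores (public, private) and enrollment shares
scores = {"low": (215, 216), "mid": (230, 231), "high": (245, 246)}
shares = {"public": {"low": 0.4, "mid": 0.4, "high": 0.2},
          "private": {"low": 0.1, "mid": 0.3, "high": 0.6}}

for school, idx in (("public", 0), ("private", 1)):
    # Overall average = enrollment-weighted average of within-SES means
    overall = sum(shares[school][ses] * scores[ses][idx] for ses in scores)
    print(f"{school:>7}: overall mean = {overall:.1f}")
# Within every SES group the gap is 1 point (227.0 vs. 238.5 overall),
# so the unadjusted gap reflects enrollment mix, not school effectiveness.
```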
Quasi-Experimental Design:
Interrupted Time Series Design: multiple historical measures on a treatment group only, before and after its exposure to the program.

In situations where a control group is not possible, if (1) data on the treatment group can be obtained for several periods both before and after the participants are exposed to the program, (2) there is a change in scores immediately following the implementation of the program, and (3) the change continues in subsequent time periods, that is considered good evidence that the intervention produced the change.
Time                 1    2    3    4    5    6
Experimental group   O    O    O   X O  X O  X O

The new program (X) is introduced after the third measurement; O denotes an observation (measurement).
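A minimal sketch of the logic in conditions (2) and (3), using hypothetical scores: compare the level of the series before and after the program is introduced, and check that the post-program shift persists across the later periods:

```python
# Hypothetical measurements at times 1-6; program introduced after time 3
scores = [20, 21, 20, 31, 32, 33]
pre, post = scores[:3], scores[3:]

pre_mean = sum(pre) / len(pre)
post_mean = sum(post) / len(post)
shift = post_mean - pre_mean

# Condition (2): an immediate jump right after implementation
immediate_jump = post[0] - pre[-1]
# Condition (3): the change persists (every post score stays above the pre mean)
persists = all(s > pre_mean for s in post)

print(f"Level shift: {shift:.1f}, immediate jump: {immediate_jump}, persists: {persists}")
```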
Campbell’s Example of the Interrupted Time Series Design—1

[Chart: “Decline in Highway Deaths per Million Vehicle Miles Before and After Crackdown,” plotted from Year −2 through Year +3 around the crackdown year.]
Campbell’s Example of the Interrupted Time Series Design—2

[Chart: the same highway-death series, “Decline in Highway Deaths per Million Vehicle Miles Before and After Crackdown,” Year −2 through Year +3.]
Simple Before-After Scores

[Chart: scores across six testing times, with the program introduced mid-series.]
Time Series Showing No Program Impact

[Chart: scores across six time points; the pre-existing trend continues unchanged after the program is introduced.]
Time Series Showing Program Impact

[Chart: scores across six time points; a clear, sustained shift appears after the program is introduced.]
Experimental Research Designs

Involve the introduction of an intervention by the researcher to determine a cause-and-effect relationship.
Strongest type of design (pre → INT → post)!
To yield valid findings, these studies must be rigorous!
Experimental Research Designs (cont’d)

Single-Group Designs
All individuals in the study receive the treatment.
Example: pre → WebAchiever → post
Threats to validity must be considered.

Control-Group Designs
Strongest type of design!!
Experimental group: pre → INT → post
Control group: pre → no INT → post
Experimental Research Designs (cont’d)

Quasi-Experimental Designs
Random assignment of subjects is not possible (e.g., when using an entire classroom).
Biggest problem: the groups may not be equivalent at the outset.
Experimental Research Designs (cont’d)

Single-Case Designs
Involve the intense study of one individual (or of more than one individual treated as a single group).
Validity Issues

Can the change in the posttest be attributed only to the experimental treatment that was manipulated by the researcher? You must be able to control extraneous variables that could have undue influence!
Validity Issues (cont’d)

Internal Validity
The extent to which extraneous variables have been controlled by the researcher, so that any observed effect can be attributed to the treatment variable.

External Validity
The extent to which the findings can be applied to individuals and settings beyond those studied.
Qualitative Research Designs

Case Study
The researcher collects intensive data about particular instances of a phenomenon and seeks to understand each instance in its own terms and in its own context.

Historical Research
Understanding the present condition by shedding light on the past.
Program Evaluation

Making judgments about the merit, value, or worth of a program.
A valuable tool in program management and policy analysis.
Usually initiated by someone’s need for a decision to be made concerning policy, management, or political strategy.
Components Addressed in
Research Methodology
(See Syllabus for Details)
Participants
Instruments
Procedures
Limitations
Anticipated Outcomes
References
Participants

This section should include the following elements: (a) the target population or sample to which the findings are intended to apply, defined consistently with the Statement of the Problem and the Research Question(s); (b) the population from which the sample will actually be drawn, including demographic information such as age, gender, and ethnicity; (c) the procedures for selecting the sample, including justification for the sampling method; and (d) the implications for the generalizability of findings from the sample to the accessible population and then to the target population.
Procedures

The procedures section is based directly on the research questions. That is, this is the “how-to” section of the study: it introduces the design of the research and describes how the data will be collected based on the questions of interest. This section should include the approach (i.e., design) to conducting the research (e.g., experimental, quasi-experimental, survey, historical, or ethnographic) and the appropriate procedures to be followed. For example, for an experimental or quasi-experimental study, the proposal should indicate how participants will be assigned to treatments and how the research will be conducted to ensure internal and external validity. If an evaluation project is proposed, the model to be followed should be specified.
Instruments

Examples of data-gathering instruments include standardized tests, teacher-made tests, questionnaires, interview guides, psychological tests, and field-study logs.
Indicate the source (literature citation) of the instrument and cite it appropriately.
Include validity and reliability information.
Limitations

Limitations are conditions, restrictions, or constraints that may affect the validity of the project outcomes.
A limitation is a weakness or shortcoming in the project that could not be avoided or corrected and is acknowledged in the final report.
Common limitations are the lack of reliability of measuring instruments and the restriction of the project to a particular organizational setting.
Anticipated Outcomes

Description of the expected results of the study.
Detail the importance of conducting the study as well as its possible impact on practice and theory.
Research Question Activity

State your research question.
What are the key variables?
What are the conceptual definitions of your key variables?
What are the operational definitions of your key variables?
Conceptual and Operational Variables

Concepts (or constructs) are terms that refer to the characteristics of an event, situation, or group being studied. We need to clearly specify how we define each and every concept in our research studies.
Two Types of Definitions

Conceptual Definition
...the abstract, “dictionary-style” definition of a construct, stated in terms of other concepts.
Examples of Conceptual Definitions

Cognitive dissonance = “the unpleasant state of psychological arousal resulting from an inconsistency within one’s important attitudes, beliefs, or behaviors”
Self-esteem = “a person’s overall self-evaluation or sense of self-worth”
Aggression = “behavior intended to injure another”
Stereotype = “a belief about the personal attributes of a group of people”

Conceptual definitions offer general, abstract characterizations of psychological constructs. This is exactly why we need operational definitions!
Operational Definition
...the precise specification of how a concept is measured or manipulated in a particular study.
Operational Definitions

How can we operationalize “aggression”?
– punching another’s face?
– hitting another’s arm?
– spreading rumors about another?
– verbally insulting another?
– throwing glass at another?
– etc.
* The more specific, the better. (A code sketch of one possible operationalization follows.)
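A minimal sketch of what an operational definition buys you: the construct becomes a concrete, repeatable scoring rule. The behavior categories and weights below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical operationalization: "aggression score" = weighted count of
# specific coded behaviors observed during a fixed observation window.
BEHAVIOR_WEIGHTS = {
    "punch": 3,           # physical aggression weighted most heavily
    "hit": 2,
    "verbal_insult": 1,
    "rumor_spreading": 1,
}

def aggression_score(observed_behaviors):
    """Score a list of coded behavior labels; unknown labels are ignored."""
    return sum(BEHAVIOR_WEIGHTS.get(b, 0) for b in observed_behaviors)

# Two observers coding the same session should get the same score,
# which is what makes the definition public and replicable.
session = ["verbal_insult", "hit", "verbal_insult"]
print(aggression_score(session))  # 4
```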
Operational Definitions

Why are they necessary/important?
They force us to think carefully and empirically, in precise and specific terms.
They make the concept public; they allow for replication.
They make the concept measurable.