EVAL 6000: Foundations of Evaluation

Dr. Chris L. S. Coryn
Kristin A. Hobson
Fall 2011
Agenda
• Review
• Activity 2
• Lecture
– History of evaluation
– Basic principles and core concepts
– Shadish, Cook, & Leviton’s (1991) five
principles of program evaluation theory
• Questions and discussion
• Encyclopedia of Evaluation entries
Review
General Logic (Scriven)
1. Establish criteria
2. Construct standards
3. Measure performance and compare to standards
4. Synthesize into a judgment of merit or worth
Working Logic (Fournier)
– Phenomenon: Functional product
– Question: Is X a good/less good one of its type?
– Problem: Extent of performance
– Claim: Performance/value
Evaluating chocolate chip cookies using evaluation logic (see the sketch below)
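To make the four steps concrete, here is a minimal Python sketch of how the general logic might be applied to the chocolate chip cookie activity. The criteria, weights, standard, and scores are hypothetical illustrations, not values from the lecture.

# 1. Establish criteria of merit (hypothetical criteria and weights)
criteria_weights = {"taste": 0.5, "texture": 0.3, "appearance": 0.2}

# 2. Construct standards: minimum weighted score (on a 1-5 scale)
#    for a cookie to count as a "good" one of its type (assumed cutoff)
GOOD_STANDARD = 3.5

# 3. Measure performance on each criterion (hypothetical ratings, 1-5)
cookie_scores = {
    "Brand A": {"taste": 4, "texture": 5, "appearance": 3},
    "Brand B": {"taste": 3, "texture": 2, "appearance": 4},
}

# 4. Synthesize the measurements into a judgment of merit
for cookie, scores in cookie_scores.items():
    weighted = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    judgment = "good" if weighted >= GOOD_STANDARD else "less good"
    print(f"{cookie}: weighted score {weighted:.1f} -> {judgment} of its type")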
Activity 2
• Discussion questions
1. How would you describe your experience
in establishing criteria? What were some
of the things you discussed with your
group?
2. How did you determine what standards to
use? Was your group in agreement with
the standards? How did you reconcile
differences?
3. How comfortable are you with your final
judgment about which cookie was the
best and which you would recommend?
Historical evolution
of evaluation
“In the beginning God created the heaven and the earth, then God
stood back, viewed everything made, and proclaimed, ‘Behold, it is
very good.’ And the evening and the morning were the sixth day.
And on the seventh day God rested from all work.
God’s archangel came then, asking, ‘God, how do
you know that what you have created is “very
good”? What are your criteria? On what data do
you base your judgment? Just what results were
you expecting to attain? And aren’t you a little
close to the situation to make a fair and unbiased
evaluation?’
God thought about these questions all that day,
and God’s rest was greatly disturbed. On the
eighth day God said, ‘Lucifer, go to hell.’
Thus was evaluation born in a blaze of glory.”
— Michael Q. Patton
Ancient Practice, New Discipline
• Arguably, evaluation is the single most
important and sophisticated cognitive
process in the repertoire of human
reasoning and logic
• It is a natural, evolutionary process
without which we would not survive
• Earliest known examples
– Product evaluation
– Personnel evaluation
Early History in the United
States
• Tyler’s national “Eight Year Study”
(1933-1941)
– Involved 30 secondary schools and 300
colleges and universities and addressed
narrowness and rigidity in high school
curricula
• Mainly educational assessments during
the 1950s and early 1960s conducted
by social scientists and education
researchers
Early History in the United
States
• Johnson’s “War on Poverty” and
“Great Society” programs of the
1960s
– Head Start, Follow Through
• Evaluation clause in Elementary and
Secondary Education Act (ESEA)
– Evaluation became part of every federal
grant
Toward Professionalization
• Two U.S.-based professional
evaluation organizations emerged in
mid-1970s
– Evaluation Network (E-Net)
– Evaluation Research Society (ERS)
• In 1985, the two merged to form
what is now the American Evaluation
Association (AEA)
Growing Concerns for Use
• Through the 1970s and 1980s, growing
concerns were voiced about the utility of
evaluation findings in general and about
the use of experimental and
quasi-experimental designs in particular
Decreased Emphasis
• In the 1980s, huge cuts in social programs
resulted from Reagan's emphasis on less
government involvement
• The requirement for evaluation was removed
from, or lessened in, many federal programs
during this period
• During the 1980s, many school districts,
universities, private companies, state
departments of education, the Federal
Bureau of Investigation (FBI), the Food and
Drug Administration (FDA), and the General
Accounting Office (GAO) developed internal
evaluation units
Increased Emphasis
• In the 1990s, there was an increased
emphasis on government program
accountability and organizations’ efforts to
be lean, efficient, global, and more
competitive
• Evaluation was conducted not only to meet
government accountability but also to
enhance effectiveness
• In addition, it was during this period that an
increasing number of foundations created
internal evaluation units, provided support
for evaluation activities, or both
Recent Milestones
• The 2001 reauthorization of ESEA, which resulted
in the No Child Left Behind (NCLB) Act, is
considered the most sweeping education reform
since 1965
– It has redefined the federal role in K-12
education by focusing on closing the
achievement gap between disadvantaged and
minority students and their peers
• NCLB has had a profound influence on
evaluation design and methods by
emphasizing the use of randomized
controlled trials (RCT)
• To this day, the RCT debate is one of the
most pervasive in evaluation
Professionalization
• By 2010, there were
more than 65 national
and regional evaluation
organizations
throughout the world,
most in developing
countries
• Although specialized
training programs have
existed for several
decades, graduate
degree programs in
evaluation have
emerged only recently
– Australasia
– Africa
– Canada
– Central America
– Europe (not every country)
– Japan
– Malaysia
– United Kingdom
Definition
• Evaluation is the act or process of
determining the merit, worth, or
significance of something or the
product of that process
– Merit: Intrinsic quality; considered apart
from context and costs
– Worth: Synonymous with value; quality
under consideration of costs and context
– Significance: Synonymous with
importance; merit and worth in a
specific context
Competing Definitions
• Evaluation is “the use of social science
research procedures to systematically
investigate the effectiveness of social
intervention programs” (Rossi,
Freeman, & Lipsey).
• Proponents of theory-driven evaluation
approaches characterize evaluation as
explaining “how and why programs
work, for whom, and under what
conditions.”
Competing Definitions
• Advocates of the empowerment evaluation
movement portray evaluation as “the use of
evaluation concepts and techniques that
foster self-determination.”
• The Organisation for Economic Co-operation
and Development (OECD) defines
evaluation as “the systematic and objective
assessment of an on-going or completed
project, programme or policy, its design,
implementation and results…the aim is to
determine the relevance and fulfillment of
objectives, development efficiency,
effectiveness, impact and sustainability.”
Purposes
• Formative: To improve
• Summative: To inform decision making
• Developmental/proformative: To help
develop an intervention or program;
ongoing formative
• Accountability: To hold accountable; usually
under summative
• Monitoring: To assess implementation and
gauge progress toward a desired end
• Knowledge generation: To generate
knowledge about general patterns of
effectiveness
• Ascriptive: Merely for the sake of knowing
Functional Forms
• Process evaluation
– Assessment of everything that occurs
prior to true outcomes
• Outcome evaluation
– Assessment of an evaluand’s effects
• Cost evaluation
– Assessment of monetary and nonmonetary costs, direct and indirect
costs, and actual and opportunity costs
The 7 “P”s
• Program evaluation
• Policy analysis
• Personnel evaluation
• Portfolio evaluation
• Product evaluation
• Performance evaluation
• Proposal evaluation
Uses and Misuses
• Use
– 1. Ideal Use (legitimate use): instrumental use,
conceptual use, persuasive use
– 2. Misuse: Mistaken Use (incompetence, uncritical
acceptance, unawareness); Mischievous Use
(manipulation, coercion); Abuse (inappropriate
suppression of findings)
• Non-Use
– 3. Unjustified Non-Use (misuse): political non-use
– 4. Justified Non-Use (legitimate use): rational non-use
Professional Standards
• Utility
• Feasibility
• Propriety
• Accuracy
• Evaluation Accountability
Shadish, Cook, & Leviton’s
Elements of “Good Theory for
Social Program Evaluation”
1. Social programming
– Ways that social programs and policies develop, improve, and
change, especially in regard to social problems
2. Knowledge construction
– Ways researchers/evaluators construct knowledge claims
about social programs
3. Valuing
– Ways values can be attached to programs
4. Knowledge use
– Ways social science information is used to modify programs
and policies
5. Evaluation practice
– Tactics and strategies evaluators follow in their professional
work, especially given the constraints they face
Encyclopedia Entries
• Bias
• Causation
• Checklists
• Conceptual Use
• Consumer
• Effectiveness
• Efficiency
• Epistemology
• Evaluation Use
• Experimental Design
• Experimental Society
• Impartiality
• Independence
• Instrumental Use
• Intended Uses
• Judgment
• Merit
• Modus Operandi
• Ontology
• Outcomes
• Paradigm
• Positivism
• Postpositivism
• Process Use
• Quantitative Weight and Sum
• Recommendations
• Synthesis
• Transdiscipline
• Treatments
• Worth