Making Comparisons: Why Bother



Building Evaluation Capacity (BEC)
Beatriz Chu Clewell, Urban Institute
Patricia B. Campbell, Campbell-Kibler Associates, Inc.
BEC: The Background
Evaluation Capacity Building (ECB) is a system for
enabling organizations and agencies to develop the
mechanisms and structures that facilitate evaluation and
meet accountability requirements.
ECB differs from mainstream evaluation by being
continuous and sustained rather than episodic. It is
context-dependent; it operates on multiple levels; it is
flexible in responding to multiple purposes, requiring
continuous adjustments and refinements; and it requires a
wide variety of evaluation approaches and methodologies
(Stockdill et al., 2002).
BEC: The Project
The goal of BEC was to develop a model to build
evaluation capacity in three different organizations: the
National Science Foundation (NSF), the National Institutes
of Health (NIH), and the GE Foundation.
More specifically, the project’s intent was to test the
feasibility of developing models to facilitate the collection
of cross-project evaluation data for programs within these
organizations that focus on increasing the diversity of the
STEM workforce.
BEC Guide I:
Designing A Cross-Project Evaluation
• Evaluation design & identification of program goals
• Construction of logic models
• The evaluation approach, including:
  – Generation of evaluation questions
  – Setting of indicators
• Integration of evaluation questions and indicators
• Measurement strategies, including:
  – Selection of appropriate measures
  – The role of demographic variables
Highlights from Guide I:
Constructing a Logic Model
The basic components of a simplified logic model are:
1. Inputs (resources invested)
2. Outputs (activities implemented using the resources)
3. Outcomes/impact (results)
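To make these three components concrete, here is a minimal Python sketch of a logic model as a data structure; the field names and example entries are illustrative assumptions, not taken from the BEC guides.

from dataclasses import dataclass, field

@dataclass
class LogicModel:
    # Inputs: resources invested in the program
    inputs: list[str] = field(default_factory=list)
    # Outputs: activities implemented using the resources
    outputs: list[str] = field(default_factory=list)
    # Outcomes/impact: results the program aims to produce
    outcomes: list[str] = field(default_factory=list)

# Hypothetical example for a diversity-in-STEM program:
model = LogicModel(
    inputs=["grant funding", "program staff", "partner institutions"],
    outputs=["mentoring sessions held", "summer research placements"],
    outcomes=["higher STEM retention among under-represented students"],
)
print(model.outcomes)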
Highlights from Guide I:
Constructing a Logic Model (continued)
[Slide graphic not reproduced in this transcript.]
Highlights from Guide I: Questions & Indicators
[Slide graphic not reproduced in this transcript.]
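The integration of questions and indicators can be pictured as a simple mapping from each evaluation question to the indicators that would answer it. The entries below are hypothetical illustrations, not the guide's own examples.

questions_to_indicators = {
    "Is the program increasing the diversity of STEM enrollment?": [
        "percent of under-represented students among STEM majors, by year",
        "number of under-represented applicants to STEM programs",
    ],
    "Are participants persisting in STEM?": [
        "year-to-year STEM retention rate, participants vs. non-participants",
    ],
}

for question, indicators in questions_to_indicators.items():
    print(question, indicators)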
BEC Guide II:
Collecting and Using Cross-Project Evaluation Data
• The strengths and weaknesses of various types of
formats that can be used in data collection
• Data collection scheduling
• Data quality and methods of ensuring it
• Data unique to individual projects
• Confidentiality and the protection of human subjects in
data collection
• Ways of building data collection capacity among projects
• Rationales, sources, and measures of comparison data
• Issues inherent in the reporting and displaying of data
• The uses to which data might be put
Highlights From Guide II:
Data Collection Formats
[Slide graphic not reproduced in this transcript.]
Highlights From Guide II:
Available Information on Comparison Databases
For each comparison database, the guide records:
• URL
• Availability: public access or restricted use (fees/permission needed)
• Data format: web download or other electronic
• Student demographic variables: race/ethnicity, sex, disability, citizenship
• Data level: national, state, institution, student
• Student population: pre-college, college, graduate school, employment
• Survey population: first year; most recent year available
• Other variables: attitudes, course-taking, degrees, employment, etc.
Highlights From Guide II: Making Comparisons
[Chart: Percent of Under-Represented STEM Students in 17 Project Colleges, 2000–2004; y-axis 0%–20%.]
[Chart: Percent of Under-Represented STEM Students in 17 Project and 17 Comparison Colleges, 2000–2004; y-axis 0%–20%; series: Project Colleges, Comparison Colleges.]
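The paired charts make the case for comparison data: a rising trend in the project colleges is only interpretable next to the trend in matched comparison colleges. Below is a minimal matplotlib sketch of how such a chart could be produced; the yearly percentages are placeholders, not the BEC data.

import matplotlib.pyplot as plt

years = [2000, 2001, 2002, 2003, 2004]
# Placeholder values; substitute each cohort's actual yearly percentages.
project_pct = [8.0, 9.5, 11.0, 12.5, 14.0]
comparison_pct = [8.0, 8.5, 9.0, 9.0, 9.5]

plt.plot(years, project_pct, marker="o", label="Project Colleges")
plt.plot(years, comparison_pct, marker="s", label="Comparison Colleges")
plt.ylim(0, 20)
plt.yticks(range(0, 21, 5), [f"{t}%" for t in range(0, 21, 5)])
plt.ylabel("Percent of Under-Represented STEM Students")
plt.legend()
plt.show()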
Other Sources of Comparison Data
The WebCASPAR database (http://caspar.nsf.gov)
provides free access to institution-level data on students
from surveys such as the Integrated Postsecondary Education
Data System (IPEDS) & the Survey of Earned Doctorates.
The Engineering Workforce Commission
(http://www.ewc-online.org/) provides institution-level
data (for members) on bachelor's, master's, and doctoral
enrollees & recipients, by sex and race/ethnicity for US
students & by sex for foreign students.
Comparison institutions can be selected from the Carnegie
Foundation for the Advancement of Teaching's website
(http://www.carnegiefoundation.org/classifications/) based
on Carnegie Classification, location, private/public
designation, size, and profit/nonprofit status.
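One way to operationalize that selection is to filter a candidate list on the same attributes. A rough sketch follows; the Institution fields, the size tolerance, and the matching rule are assumptions for illustration, not the Carnegie site's own method.

from dataclasses import dataclass

@dataclass
class Institution:
    name: str
    carnegie_class: str
    public: bool
    enrollment: int

def find_comparisons(project, candidates, size_tolerance=0.25):
    """Return candidates whose profile matches the project institution.
    Matches on Carnegie Classification, public/private control, and
    enrollment within a tolerance band; location or profit status
    could be added as further filters in the same way."""
    low = project.enrollment * (1 - size_tolerance)
    high = project.enrollment * (1 + size_tolerance)
    return [
        c for c in candidates
        if c.name != project.name
        and c.carnegie_class == project.carnegie_class
        and c.public == project.public
        and low <= c.enrollment <= high
    ]

# Hypothetical usage:
project = Institution("Project U", "Master's Large", True, 12000)
candidates = [
    Institution("Candidate A", "Master's Large", True, 11000),
    Institution("Candidate B", "Doctoral", True, 30000),
]
print([c.name for c in find_comparisons(project, candidates)])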
Some Web-based Sources of Resources
OERL, the Online Evaluation Resource Library
http://oerl.sri.com/home.html
User Friendly Guide to Program Evaluation
http://www.nsf.gov/pubs/2002/nsf02057/start.htm
AGEP Collecting, Analyzing and Displaying Data
http://www.nsfagep.org/CollectingAnalyzingDisplayingData.pdf
American Evaluation Association
http://www.eval.org/resources.asp
Download the Guides
http://www.urban.org/publications/411651.html
or search Google for “Building Evaluation Capacity Campbell”