
Learning Outcomes Assessment: A National Perspective
George D. Kuh
Council of Graduate Schools
Washington, DC
November 22, 2010
Advance Organizers
• What kind of information about student learning is compelling and useful for (a) guiding improvement efforts and (b) responding to accountability demands?
• What can be done to prepare the next generation of faculty and motivate the current generation to collect and use assessment results to enhance student learning?
• And what about assessing learning in graduate school?!?
Context
• Global Competitiveness in Degree Attainment
• The New Majority and Demographic Gaps
• Questionable Levels of Student Performance
NILOA
“Colleges… do so little to measure what students learn between freshman and senior years. So doubt lurks: how much does a college education – the actual teaching and learning that happens on campus – really matter?”
David Leonhardt, New York Times, Sept. 27, 2009
Context
• Global Competitiveness in Degree Attainment
• The New Majority and Demographic Gaps
• Questionable Levels of Student Performance
• In a Most Challenging Fiscal Environment …
• We Need Higher Levels of Student Achievement
Assessment 2010
• Greater emphasis on student learning outcomes and evidence that student performance measures up
“It’s the Learning, Stupid”
Working Definition
Assess (v.): to examine carefully
“Assessment is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development.” (Palomba & Banta, 1999, p. 4)
Assessment Purposes
• Improvement
• Accountability
Indicators
• learning outcomes
• educational attainment (persistence, graduation)
• course retention
• transfer student success
• success in subsequent courses
• degree/certificate completion
• graduate school
• employment/employer evaluations
• capacity for lifelong learning
Assessment Tools
• Direct (outcomes) measures: evidence of what students have learned or can do
• Indirect (process) measures: evidence of effective educational activity by students and institutions
Occasional Paper #1
Assessment, Accountability, and Improvement
Peter T. Ewell
“Assessments of what students learn during college are typically used for either improvement or accountability, and occasionally both. Yet, since the early days of the ‘assessment movement’ in the US, these two purposes of outcomes assessment have not rested comfortably together.”
www.learningoutcomeassessment.org/OccasionalPapers.htm
Two Paradigms of Assessment (Continuous Improvement vs. Accountability)

Strategic dimensions:
• Purpose: Formative (improvement) vs. Summative (judgment)
• Orientation: Internal vs. External
• Motivation: Engagement vs. Compliance

Implementation dimensions:
• Instrumentation: Multiple/triangulation vs. Standardized
• Nature of evidence: Quantitative and qualitative vs. Quantitative
• Reference points: Over time, comparative, established goal vs. Comparative or fixed standard
• Communication of results: Multiple internal channels vs. Public communication, media
• Use of results: Multiple feedback loops vs. Reporting
Ewell, Peter T. (2007). Assessment and Accountability in America Today: Background and Context. In Assessing and Accounting
for Student Learning: Beyond the Spellings Commission. Victor M. H. Borden and Gary R. Pike, Eds. Jossey-Bass: San Francisco.
Assessment 2010
• Greater emphasis on student learning outcomes and evidence that student performance measures up
• Demands for comparative measures
• Increased calls for transparency: public disclosure of student and institutional performance
Templates
• APLU/AASCU Voluntary System of Accountability
• NAICU’s U-CAN
• College Navigator (NCES)
• Transparency by Design/College Choices for Adults (WCET)
• AACC (yet to be named)
• Degree Qualifications Inventory
• Alliance Guidelines
• NILOA Transparency Framework
Assessment 2010
• Greater emphasis on student learning outcomes and evidence that student performance measures up
• Demands for comparative measures
• Increased calls for transparency: public disclosure of student and institutional performance
• Assessment “technology” has improved markedly, but is still insufficient to document the learning outcomes most institutions claim
Sample Data Sources
• Locally developed measures
• National instruments
– National Survey of Student Engagement (NSSE)
– Beginning College Survey of Student Engagement (BCSSE)
– Faculty Survey of Student Engagement (FSSE)
– Cooperative Institutional Research Program (CIRP)
– Your First College Year (YFCY)
– College Student Experiences Questionnaire (CSEQ)
– Noel-Levitz Student Satisfaction Inventory
– ETS MAPP and Major Field Tests
– ACT Collegiate Assessment of Academic Proficiency
– Collegiate Learning Assessment (CLA)
• Institutional data: GPA, financial aid, transcripts, retention, certification tests, alumni surveys, satisfaction surveys…
• Electronic portfolios
Valid Assessment of Learning in Undergraduate Education (VALUE) Rubrics
AAC&U VALUE Project – 15 Rubrics:
• Inquiry and analysis
• Critical thinking
• Creative thinking
• Written communication
• Oral communication
• Reading
• Quantitative literacy
• Information literacy
• Teamwork
• Problem solving
• Civic knowledge and engagement
• Intercultural knowledge and competence
• Ethical reasoning and action
• Foundations and skills for lifelong learning
• Integrative learning
Measuring Quality in Higher Education
(Vic Borden & Brandi Kernel, 2010)
A web-based inventory of assessment resources hosted by AIR. Keywords can be used to search four categories:
• instruments (examinations, surveys, questionnaires, etc.);
• software tools and platforms;
• benchmarking systems and data resources;
• projects, initiatives, and services.
http://applications.airweb.org/surveys/Default.aspx
Do we measure what we value?
or
Do we value what we measure?
Wise decisions are needed about what to measure in the context of campus mission, values, and desired outcomes.
Summary
• Perhaps more assessment underway than some acknowledge or wish to believe
• More attention needed to using and reporting assessment results
• Involving faculty is a major challenge
• More investment likely needed to move from data to improvement
According to provosts, what is the driving force for assessment?
a. Institutional Commitment to Improvement
b. Accreditation (rated of “high importance” by 85% for regional and 80% for specialized accreditation)
c. Faculty & Staff Interest
d. Governing Board Mandate
Summary
• Perhaps more assessment underway than some acknowledge or wish to believe
• More attention needed to using and reporting assessment results
• Involving faculty is a major challenge
• More investment likely needed to move from data to improvement
• Accreditation is a major force shaping assessment
Regional accreditors cite deficiencies in student learning outcomes assessment with greater frequency:
• Middle States: two-thirds of institutions have follow-up reports, with assessment the number-one reason
• NEASC: 80% of institutions asked for follow-up on student learning outcomes assessment
• HLC: 7 out of 10 institutions are being monitored, the vast majority for student learning outcomes assessment
Looking Back: What’s Been Accomplished?
• Assessment Seen as Legitimate
• Goals for Learning Established
• A “Semi-Profession” for Assessment
• Much Better Instruments and Methods
Looking Back: What Remains to be Done?
• Authentic Faculty Ownership
• Assessment Still an “Add-On”
• Use of Information for Improvement is Underdeveloped
• Sincere Institutional Engagement with Accreditors in Assessment
Advance Organizers
• What kind of information about student learning is compelling and useful for (a) guiding improvement efforts and (b) responding to accountability demands?
• What can be done to prepare the next generation of faculty and motivate the current generation to collect and use assessment results to enhance student learning?
• Do we care about assessing learning in graduate school?!?
Questions & Discussion