Clinical Skills Verification Rater Training MODULE 2

Clinical Skills Verification Rater Training
MODULE 2
Training Faculty Evaluators of Clinical Skills:
Drivers of Change in Assessment
Joan Anzia, M.D.
Tony Rostain, M.D.
Outline
• Mini pre-test!
• Brief history of assessment in medical education
• Drivers of change in assessment in medical education
• Miller’s Pyramid
• Why is faculty training necessary?
• Methods to train faculty to evaluate clinical skills
• Post-test
Module 2
Pre-Test
1. A clinical skills exam of a trainee assesses whether he or she “knows how” according to Miller’s Pyramid.
a. True
b. False
Module 2
Pre-Test
2. Faculty evaluators in a group are preparing their individual evaluation scores for a videotaped trainee clinical skills exam, and comparing their scores with the scores of “expert” raters. This activity is called:
a. Behavioral Observation Training
b. Frame of Reference Training
c. Direct Observation of Competence Training
d. Performance Dimension Training
Brief history of assessment in medical education
• Through the 1950s: knowledge was evaluated through essays and open-ended questions graded by faculty. Clinical skill and judgment were tested with live oral examinations, sometimes after bedside data-gathering by the examinee.
• 1960s: multiple-choice exams introduced to test the knowledge base.
Clinical Skills Exams vs. Multiple Choice Question Exams
New technologies come on the scene
• The introduction of computers in the 1980s enabled large-scale testing with machine-scanned and machine-scored MCQs.
• Computers also allow the assessment of clinical decision-making through the use of interactive item formats.
• Advances in psychometrics allow shorter tests, reduction of bias, and identification of error sources.
Since the 1980s
• OSCEs (Objective Structured Clinical Exams) have been fine-tuned, with improved psychometric qualities.
• Assessment of clinical skills and performance has lagged behind: faculty are inexperienced, don’t share common standards, and have not been trained to apply those standards consistently.
Drivers of change in medical education
• Outcomes-based education: a focus on the “end product” rather than the process. What should a psychiatrist “look like” at the end of training?
• National initiatives in accountability, patient safety, and quality assurance: maintaining the public trust in the medical profession and improving the quality of healthcare.
Levels of assessment: Miller’s Pyramid
Miller’s Pyramid
• Knows: what a trainee “knows” in an area of competence. MCQ-based exam.
• Knows how: does the trainee know how to use the knowledge (acquire data, analyze and interpret findings)? An interactive reasoning exam.
• Shows how: can the trainee deliver a competent performance of the skill with a patient? Clinical skills exams.
• Does: does the clinician routinely perform at a competent level outside of a controlled testing environment? Performance-in-practice assessment, critical incident systems.
Why is faculty training necessary?
• Assessment methods based on observation are only as good as the individuals using them (Holmboe and Hawkins, 2008).
• Faculty sometimes don’t possess sufficient knowledge, skills, and attitudes in particular competencies.
• Competencies evolve over time, and faculty may not have been trained in specific competencies.
How do we train evaluators?
Empirically studied training methods:
• Behavioral Observation Training (BOT)
• Performance Dimension Training (PDT)
• Frame of Reference Training (FoRT)
• Direct Observation of Competence Training
Behavioral Observation Training
• Get faculty to increase the number of their observations of their trainees.
• Provide a form of observational aid that raters can use to record observations (a “behavioral diary”).
• Help faculty members learn how to prepare for an observation (determining goals, evaluator position, etc.).
Performance Dimension Training
• Designed to familiarize faculty with the appropriate performance dimensions used in the evaluation system.
• A critical element of all rater training programs: the goal is to define all the criteria for each dimension of performance.
• Faculty interact to further define criteria (what constitutes “superior performance,” etc.) and work towards consensus on the framework and specific criteria.
Frame of Reference Training
• Performance Dimension Training must be completed first.
• FoRT targets accuracy in rating: the goal is to achieve consistency.
• Minimal criteria for satisfactory performance are defined first, then marginal criteria.
• Faculty are given clinical vignettes describing performance in different ranges.
Frame of Reference Training (cont.)
• Faculty use the vignettes to provide ratings.
• The trainer provides feedback on what the “true” ratings should be, with an explanation for each rating.
• Discussion of discrepancies between faculty ratings and the “true” ratings from the trainer.
• Repeated practice: “calibration.”
Module 2
Post-Test
1. A clinical skills exam of a trainee assesses whether he or she “knows how” according to Miller’s Pyramid.
a. True
b. False
Module 2
Post-Test
1. A clinical skills exam of a trainee assesses whether he or she “knows how” according to Miller’s Pyramid.
b. False
A CSV exam assesses whether a resident can “show how.”
Module 2
Post-Test
2. Faculty evaluators in a group are preparing their individual evaluation scores for a videotaped trainee clinical skills exam, and comparing their scores with the scores of “expert” raters. This activity is called:
a. Behavioral Observation Training
b. Frame of Reference Training
c. Direct Observation of Competence Training
d. Performance Dimension Training
Module 2
Post-Test
2. Faculty evaluators in a group are preparing their individual evaluation scores for a videotaped trainee clinical skills exam, and comparing their scores with the scores of “expert” raters. This activity is called:
b. Frame of Reference Training