Interpreting the Student Feedback Questionnaire (SFQ) Results
Developing a feedback questionnaire: Principles and steps
Workshop for NHS staff
28 Dec 1999 (Tuesday)
Kam-Por Kwan, EDU
2766 6287
[email protected]
Workshop outline
Why use a feedback questionnaire?
How to develop a feedback questionnaire that gives useful and truthful information?
– What are the common problems?
– How to write good evaluation items?
– How to determine if the questionnaire is valid and reliable?
How to interpret student feedback in a meaningful way?
Developing a student feedback questionnaire on clinical experience
2
Why use a feedback
questionnaire?
Economical to administer to the whole group, both in terms of time and effort
Allow anonymity of responses
Allow respondents to control their own pace of response
but
• Less chance to probe for clarification
• Emphasis on the evaluator’s rather than the respondent’s perspectives
• Reliability, validity, and usefulness depend on the items included
3
Common problems
Feedback questionnaires often fail to provide true and useful information because:
– the items are constructed on an ad hoc basis, without any underlying theoretical framework
– they ask about things that the (student) raters cannot validly comment on
– the items are ambiguous to (student) raters and/or difficult to understand or interpret
– the interpretations of the items are not clear
– the items are too standardised to be useful
– respondents are not motivated to complete it seriously
4
A systematic approach
7 steps to developing a questionnaire:
– determining the focus of the evaluation
– identifying all underlying dimensions and subdimensions involved
– drafting the questionnaire items
– designing the questionnaire: instructions, sequencing, etc.
– pilot testing the questionnaire
– revising the questionnaire and items
– implementing the questionnaire
5
Identifying focus and dimensions
[Diagram: the focus ‘clinical supervision’ mapped to underlying dimensions such as clear guidelines, feedback, support, and learning, with example indicators: ‘providing useful comments on what and how to improve’ and ‘providing regular feedback on students’ clinical performance’]
6
Examining the dimensions
Task 1
– Examine the draft questionnaire and identify for each item:
• the underlying dimension that it purports to measure, and
• what kind of variable (presage, process, or product) is being measured.
– What other dimensions or variables do you think should be included in the questionnaire?
7
Problematic items
Task 2: What’s wrong with the items?
In groups, discuss the problems of including the
following items in a student feedback
questionnaire.
– The teacher seemed to have an up-to-date
knowledge of evidence-based nursing practice.
– The teacher worked hard to demonstrate clearly
to me the proper skills of history-taking.
– My progress was a major concern of the teacher.
8
More problematic items
– I was provided with informed and constructive feedback
on my performance by the teacher immediately after my
clinical practice.
– I found it difficult to apply the theory I learned at
university to my clinical practice.
– Every student was encouraged to participate in class
discussions.
– The teacher did not discourage me from using
techniques that are not evidence-based.
– Appropriate computer technology and AV aids were
used by the teacher to facilitate learning.
9
Principles of item writing (1)
Use simple English and simple sentence
structure
Avoid questions that the intended respondents
do not have the knowledge to comment on
Avoid ambiguous questions or wordings that may
have alternative interpretations
Avoid double-barreled questions (items containing more than one idea)
10
Principles of item writing (2)
Avoid unnecessary jargon that may not be understood by the intended respondents
Avoid words like “all the time”, “never”, “every”,
...
Avoid double negatives
Avoid questions with unwarranted underlying
assumptions
11
Using open-ended questions
Limitations of ratings:
– reflect evaluator’s rather than respondents’
perspectives
– suggest whether improvements are needed,
but not why or how
– give a false sense of objectivity and precision
Open-ended questions
– allow respondents’ perspectives to emerge
– offer chances for respondents to clarify
personal meanings and suggest changes
12
Optional questions
Standardised questions:
– allow comparisons across units or teachers
BUT
– cannot cater for individual needs
Optional questions:
– allow users to collect information on aspects
specific to the individual units or teachers
– useful for improvement purposes
13
Revising the draft questionnaire
Task 3
In groups, suggest how the draft questionnaire
might be further modified to make it more useful
and valid. You might consider:
– adding new items /deleting redundant items
– rewording the items as needed to make their
meaning clearer and less ambiguous
– inserting open-ended questions
– the possibility of allowing optional questions
to be included
14
Good design
as short and sharp as possible (a few short
questionnaires at different points may be better
than a long one at the end)
appeals to the intended respondents
with clear instruction
questions arranged in good psychological order,
from general to more specific
attractive and neat in appearance
clearly duplicated / printed
easy to code and interpret
15
Validity and Reliability (1)
Validity
– Does the questionnaire measure what it is
supposed to measure?
– Do the items together measure the most significant
aspects of the evaluation question?
– Improving validity by:
• judgment of a panel of experts
• pilot testing on a sample of intended
respondents
• relating to theory of teaching
16
Validity and Reliability (2)
Reliability
– Does the questionnaire give consistent results for what it is measuring?
– Do the items yield results that agree
with each other, and are they consistent
over time?
– Improving reliability by establishing the:
• internal reliability of the instrument, scales, and sub-scales
• test-retest reliability
17
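The two reliability checks above can be sketched numerically. The sketch below computes Cronbach’s alpha for internal consistency and a Pearson correlation for test-retest reliability; all rating data is invented for illustration, and the formulas are the standard textbook ones rather than anything specific to this workshop’s questionnaire.

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Internal consistency. `items` holds one list of ratings per
    questionnaire item, aligned by respondent."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
    item_var = sum(variance(i) for i in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

def pearson_r(x, y):
    """Test-retest reliability: correlate the same respondents' scores
    from two administrations of the questionnaire."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Five respondents rating three items on a 1-5 scale (made-up data)
ratings = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [3, 5, 2, 4, 3],  # item 3
]
print(round(cronbach_alpha(ratings), 2))  # ≈ 0.87: items hang together well

time1 = [4, 5, 3, 4, 2]  # scores at first administration
time2 = [4, 4, 3, 5, 2]  # same respondents, second administration
print(round(pearson_r(time1, time2), 2))  # ≈ 0.81: fairly stable over time
```

As a rough convention, an alpha of 0.7 or above is usually taken as acceptable internal consistency for a scale.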
Student feedback: what research says
Quite reliable and consistent
Reasonably valid
Relatively uncontaminated by variables seen as
sources of potential bias
Useful for a number of purposes
BUT
• An ‘imperfect’ measure of teaching
• Must be interpreted in contexts
• Useful only as one source of information among others
• Can be abused if not interpreted properly
18
Nature of student feedback
Subjective perceptions
Based on what students have directly experienced
Influenced by their own characteristics, such as prior knowledge, motivation, interest, etc.
Reflect students’ implicit theories of teaching and learning
19
Some pitfalls in interpretation
Treating student feedback as if it were a
totally objective, precise, and truthful
indicator of teaching
Over-interpreting small differences in ratings
Comparing ratings across units or teachers
without considering the context
Ranking units/teachers by their total scores
20
Interpretation guidelines
Avoid over-interpreting small differences: only ‘crude’ judgements can be made
Focus on the relative strengths and weaknesses as reflected in the profile of ratings, rather than the total scores
Interpret feedback in context: take into consideration the features of the centre and the students
Consider ratings from different classes, and over a number of years
Check student feedback against other sources of information
21
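The warning against over-interpreting small differences can be made concrete with standard errors. The sketch below compares two classes whose mean ratings differ by only 0.1 points; the class names and all ratings are invented, and the point is simply that the uncertainty around each mean swamps the difference between them.

```python
def mean_and_se(xs):
    """Mean and standard error of a list of ratings."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    se = (var / len(xs)) ** 0.5  # standard error of the mean
    return m, se

# Two classes rating the same teacher on a 1-5 scale (made-up data)
class_a = [4, 3, 5, 4, 3, 4, 5, 3, 4, 4]
class_b = [4, 4, 5, 3, 4, 5, 3, 4, 5, 3]

ma, sea = mean_and_se(class_a)
mb, seb = mean_and_se(class_b)
print(f"Class A: {ma:.1f} +/- {sea:.2f}")  # 3.9 +/- 0.23
print(f"Class B: {mb:.1f} +/- {seb:.2f}")  # 4.0 +/- 0.26
# The intervals overlap heavily, so the 0.1-point gap supports
# only a 'crude' judgement, not a ranking of the two classes.
```

The same logic applies item by item: differences within a rating profile are only meaningful when they clearly exceed the noise in the ratings.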
Some final words
Student feedback is a useful source of information, not a verdict
Student feedback cannot replace the professional judgment of the teacher
The teacher’s self-reflection on the feedback collected is the key to improvement
22