eXplorance Blue Presentation

Transcript: eXplorance Blue Presentation

Current status of Blue
• Implementation of Blue has been in effect since October.
• Presentations to Colleges/Schools and Departments are currently ongoing.
• The Electronic Course Assessment Implementation (ECAI) committee (a subcommittee of the FDAI committee) is supervising the implementation, in conjunction with the Provost's Office and OIT.
• ECAI is the Faculty Senate's point of contact for all issues concerning electronic evaluation.
• Information about online course evaluation is available at: https://www.uaf.edu/provost/blue/
Adoption of Blue
• What are the implications?
• What can be done to improve confidence in the survey for each class?
Implications of adopting Blue
Inter-system comparison
• Consistency of evaluation over time
• Quality
Intra-system comparison
• Representativeness
• Accuracy
• Quality
Inter-system comparison
• Q: How does my evaluation in Blue measure against IAS?
• Qualitatively, the two questionnaires are different ("effectiveness of teaching" is not assessed in Blue).
• Quantitatively, the scores are expected to differ because the structure and content of the surveys are different.
• Each Blue survey should be compared to campus-wide Blue aggregate data.
• A few rounds of evaluation will be needed before comparisons with IAS can be made, should they be required.
• Revise unit criteria for tenure and promotion.
Inter-system comparison: quality
• Q: Students who are strongly negative about the course or the instructor have been the most likely group to complete the online evaluation.
• Many studies on this topic have shown this to be a misconception: results from online evaluations are as trustworthy as those from paper-based evaluations (Liu, 2005; Thorpe, 2002; Johnson, 2002).
• A large-scale study of the Individual Development and Educational Assessment (IDEA) student rating system between 2002 and 2008 (Benton et al., 2010) examined 651,587 classes that used paper-based evaluations and 53,000 classes that used web-based evaluations, and found no meaningful differences between survey methods.
Intra-system comparison: representativeness
• Q: Those who responded to the survey may hold very different views from those who did not, so the survey results would not reflect the opinion of the population as a whole.
• A link between response rate and non-response bias has not been established (Marketing Research and Intelligence Association, 2003 and 2011; Curtin et al., 2000; Langer, 2003; Holbrook et al., 2005); the sketch below illustrates why the two are distinct.
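
By way of illustration only (a hypothetical sketch with invented numbers, not a result from the studies cited above), a short simulation shows that bias depends on whether respondents differ systematically from non-respondents, not on the response rate itself:

```python
import random

random.seed(1)

def simulate(class_size, base_rate, negative_more_likely):
    """Simulate one class: draw true ratings (1-5), then sample respondents.

    If negative_more_likely, dissatisfied students (rating <= 2) respond
    at twice the base rate -- a deliberately skewed, hypothetical scenario.
    """
    ratings = [random.randint(1, 5) for _ in range(class_size)]
    responses = [
        r for r in ratings
        if random.random()
        < min(base_rate * (2 if negative_more_likely and r <= 2 else 1), 1.0)
    ]
    true_mean = sum(ratings) / len(ratings)
    observed_mean = sum(responses) / max(len(responses), 1)
    return true_mean, observed_mean, len(responses) / class_size

# Low response rate, representative respondents: observed mean tracks the truth.
print(simulate(1000, 0.2, negative_more_likely=False))
# Higher response rate, skewed respondents: observed mean is biased downward.
print(simulate(1000, 0.4, negative_more_likely=True))
```

The skewed scenario pulls the observed mean below the true mean even though more students respond, while the representative low-rate scenario stays close to the truth.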
Intra-system comparison: accuracy
• Q: A small sample size leads to a greater margin of error in the results.
• The margin of error depends on the class size and on how skewed the opinions are, as the sketch below illustrates.
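
For a concrete sense of how class size and response counts interact (a minimal sketch with hypothetical numbers, not a UAF calculation), the margin of error for a survey proportion with a finite-population correction can be computed as follows:

```python
import math

def margin_of_error(respondents, class_size, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion in a class survey.

    Includes the finite-population correction (FPC), which shrinks the
    margin when respondents make up a large share of a small class.
    p=0.5 is the worst case; skewed opinions (p near 0 or 1) shrink it.
    """
    se = math.sqrt(p * (1 - p) / respondents)
    fpc = math.sqrt((class_size - respondents) / (class_size - 1))
    return z * se * fpc

# Hypothetical classes, both with a 50% response rate:
print(f"10 of 20 respond:   ±{margin_of_error(10, 20):.0%}")    # ~±22%
print(f"100 of 200 respond: ±{margin_of_error(100, 200):.0%}")  # ~±7%
```

At the same 50% response rate, the small class carries roughly three times the margin of error of the large one, which is the point of this slide.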
What can be done to increase confidence in this system
• Response analysis will reveal biases and potential correlations with the polled cohort.
• Results of the analysis will be shared with students and instructors.
• If the wording and/or content of one or more questions appears to skew the quality of responses, those questions will be re-evaluated and re-worded or eliminated/substituted.
What can be done to increase confidence in this system
• Response rates:
  – Showing that evaluation matters
  – Communication
  – Making it easy for students
  – Providing incentives
  – UAF evaluation portal: www.uaf.edu/inspireus
• Intensify the use of Blue:
  – Mid-term evaluations
  – Department-specific questions