Transcript Part I: The State of Online Learning

Online Assessment and Evaluation
Techniques in Corporate Settings
Dr. Curtis J. Bonk
President, CourseShare.com
Associate Professor, Indiana University
http://php.indiana.edu/~cjbonk,
[email protected]
Workshop Overview
• Part I: The State of Online Learning
• Part II: Evaluation Purposes, Approaches, and Frameworks
• Part III: Applying Kirkpatrick's 4 Levels
• Part IV: ROI and Online Learning
• Part V: Collecting Evaluation Data & Online Evaluation Tools
Sevilla & Wells (July 2001), e-learning:
We could be very productive by ignoring assessment altogether and assuming competence if the learner simply gets through the course.
Why Evaluate?
• Cost savings
– Becoming a less important reason to evaluate as more people recognize that the initial expense is balanced by long-term financial benefits
• Performance improvement
– A clear place to see the impact of online learning
• Competency advancement
16 Evaluation Methods
1. Formative Evaluation
2. Summative Evaluation
3. CIPP Model Evaluation
4. Objectives-Oriented Evaluation
5. Marshall & Shriver's 5 Levels of Evaluation
6. Bonk's 8-Part Evaluation Plan (& the Ridiculous Model)
7. Kirkpatrick's 4 Levels
8. Return on Investment (ROI)
9. Kirkpatrick Level 6: budget and stability of the e-learning team
10. Kirkpatrick Level 7: whether e-learning champion(s) are promoted
11. Cost/Benefit Analysis (CBA)
12. Time to Competency
13. Time to Market
14. Return on Expectation
15. AEIOU: Accountability, Effectiveness, Impact, Organizational Context, Unintended Consequences
16. Consumer-Oriented Evaluation
Part I. The State of Online Learning
Survey of 201 Trainers, Instructors,
Managers, Instructional Designers,
CEOs, CLOs, etc.
Survey Limitations
• Sample pool: e-PostDirect
• The Web is changing rapidly
• Lengthy survey, low response rate
• No password or keycode
• Many backgrounds: hard to generalize
• Does not address all issues (e.g., ROI
calculations, how trained & supported,
specific assessments)
Figure 2. Size of Respondent Organizations
[Bar chart: percent of respondents by number of employees, in ranges from 1-30 up to more than 100,001.]
Figure 12. Methods Used to Deliver Training in Organization
[Bar chart: number of respondents using each delivery method: Instructor-Led Classroom, Internet/Intranet, Multimedia, Videotape, Paper-Based Correspondence, Other.]
Why Interested in E-Learning?
• Mainly cost savings
• Reduced travel time
• Greater flexibility in delivery
• Timeliness of training
• Better allocation of resources, speed of delivery, convenience, course customization, lifelong learning options, personal growth, greater distribution of materials
Figure 25. Percent of Respondent Organizations Conducting Formal Evaluations of Web-Based Learning
[Pie chart: Yes 41%, No 59%.]
A Few Assessment Comments
Level 1 Comments: Reactions
“We assess our courses based on
participation levels and online surveys
after course completion. All of our courses
are asynchronous.”
“I conduct a post course survey of course
material, delivery methods and mode, and
instructor effectiveness. I look for
suggestions and modify each course based
on the results of the survey.”
“We use the Halo Survey process of asking
them when the course is concluding.”
Level 2 Comments: Learning
“We use online testing and simulation frequently for testing student knowledge.”
“Do multiple choice exams after each
section of the course.”
“We use online exams and use level 2
evaluation forms.”
Level 3 Comment: Job
Performance
“I feel strongly there is a need to measure the success of any training in terms of the implementation of the new behaviors on the job. Having said that, I find there is very limited [interest] by our clients in spending the dollars required…”
More Assessment Comments
Multiple Level Evaluation
“Using Level One Evaluations for each session followed by a summary evaluation. Thirty days post-training, conversations occur with learners’ managers to assess Level 2” (actually Level 3).
“We do Level 1 measurements to gauge student
reactions to online training using an online evaluation
form. We do Level 2 measurements to determine
whether or not learning has occurred…
“Currently, we are using online teaching and following
up with manager assessments that the instructional
material is being put to use on the job.”
Who is Evaluating Online
Learning?
• 59% of respondents said they did not
have a formal evaluation program
• At Reaction level: 79%
• At Learning level: 61%
• At Behavior/Job Performance level: 47%
• At Results or Return on Investment: 30%
Figure 26. How Respondent Organizations Measure Success of Web-Based Learning
[Bar chart: percent of respondents by Kirkpatrick evaluation level: learner satisfaction; change in knowledge, skill, attitude; job performance; ROI.]
Assessment Lacking or Too Early
“We are just beginning to use Web-based
technology for education of both
associates and customers, and do not
have the metric to measure our success.
However, we are putting together a
focus group to determine what to
measure (and) how.”
“We have no online evaluation for
students at this time.”
“We lack useful tools in this area.”
Limitations with Current System
“I feel strongly there is a need to measure the success of any training in terms of the implementation of the new behaviors on the job. Having said that, I find there is very limited [interest] by our clients in spending the dollars required…”
“We are looking for better ways to track
learner progress, learner satisfaction, and
retention of material.”
“Have had fairly poor ratings on reliability,
customer support, and interactivity…”
Pause…How and What Do You Evaluate…?
Readiness Checklist
1. ___ Is your organization undergoing significant change, in part related to e-learning?
2. ___ Is there pressure from senior management to measure the results of e-learning?
3. ___ Has your company experienced one or more training/learning disasters in the past?
4. ___ Is the image of the training/learning function lower than you want?
Part II
Evaluation Purposes,
Approaches and Frameworks
What is Evaluation???
“Simply put, an evaluation is concerned
with judging the worth of a program and
is essentially conducted to aid in the
making of decisions by stakeholders.”
(e.g., does it work as effectively as the standard
instructional approach).
(Champagne & Wisher, in press)
What is assessment?
• “Assessment refers to…efforts to obtain info about how and what students are learning in order to improve…teaching efforts and/or to demo to others the degree to which students have accomplished the learning goals for a course.” (Millar, 2001, p. 11).
• It is a way of using info obtained through various
types of measurement to determine a learner’s
performance or skill on some task or situation
(Rosenkrans, 2000).
Who are you evaluating for?
The level of evaluation will depend on the articulation of the stakeholders.
Stakeholders of evaluation in
corporate settings may range
from…???
Evaluation Purposes
• Determine learner progress
– What did they learn?
• Document learning impact
– How well do learners use what they learned?
– How much do learners use what they learn?
Evaluation Purposes
• Efficiency
– Was online learning more effective than
another medium?
– Was online learning more cost-effective than
another medium/what was the return on
investment (ROI)?
• Improvement
– How do we do this better?
Evaluation Purposes
“An evaluation plan can evaluate the
delivery of e-learning, identify ways to
improve the online delivery of it, and
justify the investment in the online
training package, program, or initiative.”
(Champagne & Wisher, in press)
Evaluation Plans
Does your company have a training
evaluation plan?
Steps to Developing an OL
Evaluation Program
• Select a purpose and framework
• Develop benchmarks
• Develop online survey instruments
– For learner reactions
– For learner post-training performance
– For manager post-training reactions
• Develop data analysis and management
plan
1. Formative Evaluation
• Formative evaluations focus on
improving the online learning experience.
• A formative focus will try to find out
what worked or did not work.
• Formative evaluation is particularly
useful for examining instructional design
and instructor performance.
Formative Questions
• How can we improve our OL program?
• How can we make our OL program more efficient?
• More effective?
• More accessible?
2. Summative Evaluation
• Summative evaluations focus on the
overall success of the OL experience
(should it be continued?).
• A summative focus will look at
whether or not objectives are met,
the training is cost-effective, etc.
Course Completion
• Jeanne Meister, Corporate University Xchange, found a 70 percent dropout rate compared to classroom rates of 15%.
• Perhaps need new metrics. Need to see if they can
test out.
• “Almost any measure would be better than
course completion, which is not a predictor of
anything.” Tom Kelly, Cisco, March 2002, eLearning.
What Can OL Evaluation
Measure?
• Categories of Evaluation Info (Woodley and Kirkwood, 1986):
– Measures of activity
– Measures of efficiency
– Measures of outcomes
– Measures of program aims
– Measures of policy
– Measures of organizations
Typical Evaluation
Frameworks for OL
• Commonly used frameworks include:
– CIPP Model
– Objectives-oriented
– Marshall & Shriver's 5 levels
– Kirkpatrick's 4 levels (plus a 5th level)
– AEIOU
– Consumer-oriented
3. CIPP Model Evaluation
• CIPP is a management-oriented model
– C = context
– I = input
– P = process
– P = product
• Examines the OL within its larger
system/context
CIPP & OL: Context
• Context: Addresses the environment in
which OL takes place.
• How does the real environment compare
to the ideal?
• Uncovers systemic problems that may
dampen OL success.
– Technology breakdowns
– Inadequate computer systems
CIPP & OL: Input
• Input: Examines what resources are put
into OL.
• Is the content right?
• Have we used the right combination of
media?
• Uncovers instructional design issues.
CIPP & OL: Process
• Process: Examines how well the
implementation works.
• Did the course run smoothly?
• Were there technology problems?
• Were the facilitation and participation as planned?
• Uncovers implementation issues.
CIPP & OL: Product
• Product: Addresses outcomes of the
learning.
• Did the learners learn? How do you
know?
• Does the online training have an effect on
workflow or productivity?
• Uncovers systemic problems.
4. Objectives-Oriented
Evaluation
• Examines OL training objectives as compared
to training results
• Helps determine if objectives are being met
• Helps determine if objectives, as formally
stated, are appropriate
• Objectives can be used as a comparative
benchmark between online and other training
methods
Evaluating Objectives & OL
• An objectives-oriented approach can
examine two levels of objectives:
– Instructional objectives for learners (did the
learners learn?)
– Systemic objectives for training (did the
training solve the problem?)
Objectives & OL
• Requires:
– A clear sense of what the objectives are
(always a good idea anyway)
– The ability to measure whether or not
objectives are met
• Some objectives may be implicit and hard
to state
• Some objectives are not easy to measure
5. Marshall & Shriver's
Five Levels of Evaluation
• Performance-based evaluation
framework
• Each level examines a different area of performance
• Requires demonstration of learning
Marshall & Shriver's 5 Levels
• Level I: Self (instructor)
• Level II: Course Materials
• Level III: Course Curriculum
• Level IV: Course Modules
• Level V: Learning Transfer
6. Bonk’s Evaluation
Plan…
Considerations in Evaluation Plan
1. Student
2. Instructor
3. Training
4. Task
5. Tech Tool
6. Course
7. Program
8. University or Organization
What to Evaluate?
1. Learner: attitudes, learning, use, performance.
2. Instructor: popularity, course enrollments.
3. Training: internal and external components.
4. Task: relevance, interactivity, collaboration.
5. Tool: usable, learner-centered, friendly, supportive.
6. Course: interactivity, participation, completion.
7. Program: growth, long-range plans.
8. Organization: cost-benefit, policies, vision.
RIDIC5-ULO3US Model of
Technology Use
4. Tasks (RIDIC):
– Relevance
– Individualization
– Depth of Discussion
– Interactivity
– Collaboration-Control-Choice-Constructivistic-Community
RIDIC5-ULO3US Model of
Technology Use
5. Tech Tools (ULOUS):
– Utility/Usable
– Learner-Centeredness
– Opportunities with Outsiders Online
– Ultra Friendly
– Supportive
7. Kirkpatrick’s 4 Levels
• A common training framework.
• Examines training on 4 levels.
• Not all 4 levels have to be included in
a given evaluation.
The 4 Levels
• Reaction
• Learning
• Behavior
• Results
8. Return on Investment
(ROI): A 5th Level
• Return on Investment is a 5th level
• It is related to results, but is more clearly
stated as a financial calculation
• How to calculate ROI is the big issue here
Is ROI the answer?
• Elise Olding of CLK Strategies suggests
that we shift from looking at ROI to
looking at time to competency.
• ROI may be easier to calculate since
concrete dollars are involved, but time to
competency may be more meaningful in
terms of actual impact.
Example: Call Center Training
• Traditional call center training can take 3
months to complete
• Call center employees typically quit
within one year
• When OL was implemented, the time to
train (time to competency) was reduced
• Benchmarks for success: time per call;
number of transfers
Example: Circuit City
• Circuit City provided online product/sales
training
• What is more useful to know:
– The overall ROI or break-even point?
– How much employees liked the training?
– How many employees completed the training?
– That employees who completed 80% of
the training saw an average increase of
10% in sales?
Matching Evaluation Levels
with Objectives Pretest
Instructions: For each statement below, indicate
the level of evaluation at which the objective is
aimed.
1.
___ Show a 15 percent decrease in errors
made on tax returns by staff accountants
participating in the e-learning certificate
program.
2. ___ Increase use of conflict resolution skills,
when warranted, by 80 percent of employees
who had completed the first eight modules of
the online training. (see handout for more)
9. A 6th Level?
Clark Aldrich (2002)
• Adding Level 6 which relates to the budget and
stability of the e-learning team.
– Just how respected and successful is the e-learning team?
– Have they won approval from senior management for their initiatives?
– Aldrich, C. (2002). Measuring success: In a post-Maslow/Kirkpatrick
world, which metrics matter? Online Learning, 6(2), 30 & 32.
10. And Even a 7th Level?
Clark Aldrich (2002)
• Level 7 asks whether the e-learning sponsor(s) or champion(s) are promoted in the organization.
• While both of these additional levels address
the people involved in the e-learning initiative
or plan, such recognitions will likely hinge on
the results of evaluation of the other five levels.
11. ROI Alternative:
Cost/Benefit Analysis (CBA)
• ROI may be ill-advised since not all impacts hit the bottom line, and those that do take time.
• Shifts the attention to more long-term results and to quantifying impacts with numeric values, such as:
– increased revenue streams,
– increased employee retention, or
– reduction in calls to a support center.
• Reddy, A. (2002, January). E-learning ROI calculations: Is a cost/benefit analysis a better approach? e-learning, 3(1), 30-32.
Cost/Benefit Analysis (CBA)
• Considers both qualitative and quantitative measures:
– job satisfaction ratings,
– new uses of technology,
– reduction in processing errors,
– quicker reactions to customer requests,
– reduction in customer call rerouting,
– increased customer satisfaction,
– enhanced employee perceptions of training,
– global post-test availability.
• Reddy, A. (2002, January). E-learning ROI calculations: Is a cost/benefit analysis a better approach? e-learning, 3(1), 30-32.
Cost/Benefit Analysis (CBA)
• In effect, CBA asks how the sum of the benefits compares to the sum of the costs.
• Yet, it often leads to or supports ROI and other more quantitatively-oriented calculations.
• Reddy, A. (2002, January). E-learning ROI calculations: Is a cost/benefit analysis a better approach? e-learning, 3(1), 30-32.
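To make the comparison concrete, here is a minimal cost/benefit sketch in Python; the categories and dollar figures are hypothetical placeholders, not figures from the workshop or the Reddy article.

```python
# Minimal cost/benefit tally: compare the sum of the benefits to the sum of the costs.
# All categories and dollar figures below are hypothetical placeholders.

costs = {
    "course development": 120_000,
    "LMS licensing": 40_000,
    "facilitation and support": 25_000,
}

benefits = {
    "reduced travel": 90_000,
    "salary savings from less classroom time": 110_000,
    "fewer support-center calls": 30_000,
}

total_costs = sum(costs.values())
total_benefits = sum(benefits.values())

print(f"Total costs:        ${total_costs:,}")
print(f"Total benefits:     ${total_benefits:,}")
print(f"Net benefit:        ${total_benefits - total_costs:,}")
print(f"Benefit/cost ratio: {total_benefits / total_costs:.2f}")
```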
Other ROI Alternatives
12. Time to competency (need benchmarks)
– online databases of frequently asked questions can help employees in call centers learn skills more quickly, without requiring temporary leaves from their positions for such training
13. Time to market
– might be measured by how e-learning speeds up the training of sales and technical support personnel, thereby expediting the delivery of a software product to the market
Raths, D. (2001, May). Measure of success. Online Learning, 5(5), 20-22, & 24.
Still Other ROI Alternatives
14. Return on Expectation
1. Asks employees a series of questions related to how
training met expectations of their job performance.
2. When questioning is complete, they place a $ figure
on that.
3. Correlate or compare such reaction data with business results, or supplement Level 1 data to include more pertinent info about the applicability of learning to the employee's present job situation.
– Raths, D. (2001, May). Measure of success. Online Learning, 5(5), 20-22, & 24.
15. AEIOU
• Provides a framework for looking at
different aspects of an online learning
program
• Fortune & Keith, 1992; Sweeney, 1995;
Sorensen, 1996
A = Accountability
• Did the training do what it set out to do?
• Data can be collected through
– Administrative records
– Counts of training programs (# of attendees,
# of offerings)
– Interviews or surveys of training staff
E = Effectiveness
• Is everyone satisfied?
– Learners
– Instructors
– Managers
• Were the learning objectives met?
I = Impact
• Did the training make a difference?
• Like Kirkpatrick’s level 4 (Results)
O = Organizational Context
• Did the organization’s structures and policies
support or hinder the training?
• Does the training meet the organization’s
needs?
• OC evaluation can help reveal when there is a mismatch between the training design and the organization
• Important when using third-party training or
content
U = Unintended Consequences
• Unintended consequences are often
overlooked in training evaluation
• May give you an opportunity to brag
about something wonderful that
happened
• Typically discovered via qualitative data
(anecdotes, interviews, open-ended
survey responses)
16. Consumer-Oriented
Evaluation
• Uses a consumer point-of-view
– Can be a part of vendor selection process
– Can be a learner-satisfaction issue
• Relies on benchmarks for comparison of
different products or different learning
media
Part III:
Applying Kirkpatrick’s
4 Levels to Online
Learning Evaluation
& Evaluation Design
Why Use the 4 Levels?
• They are familiar and understood
• Highly referenced in the training
literature
• Can be used with 2 delivery media
for comparative results
Conducting 4-Level
Evaluation
• You need not use every level
– Choose the level that is most
appropriate to your need and budget
• Higher levels will be more costly
and difficult to evaluate
• Higher levels will yield more meaningful results
Kirkpatrick Level 1:
Reaction
• Typically involves “Smile sheets” or
end-of-training evaluation forms.
• Easy to collect, but not always very
useful.
• Reaction-level data on online courses
has been found to correlate with ability
to apply learning to the job.
• Survey ideally should be Web-based,
keeping the medium the same as the
course.
Kirkpatrick Level 1:
Reaction
• Types of questions:
– Enjoyable?
– Easy to use?
– How was the instructor?
– How was the technology?
– Was it fast or slow enough?
Kirkpatrick Level 2:
Learning
• Typically involves testing
learners immediately following
the training
• Not difficult to do, but online
testing has its own challenges
– Did the learner take the test on
his/her own?
Kirkpatrick Level 2:
Learning
• Higher-order thinking skills (problem
solving, analysis, synthesis)
• Basic skills (articulate ideas in writing)
• Company perspectives and values
(teamwork, commitment to quality,
etc.)
• Personal development
Kirkpatrick Level 2:
Learning
• Might include:
– Essay tests.
– Problem solving exercises.
– Interviews.
– Written or verbal tests to assess
cognitive skills.
Shepard, C. (1999b, July). Evaluating online learning. TACTIX from
Fastrak Consulting. Retrieved February 10, 2002, from:
http://fastrak-consulting.co.uk/tactix/Features/evaluate/eval01.htm.
Kirkpatrick Level 3:
Behavior
• More difficult to evaluate than Levels 1 & 2
• Looks at whether learners can apply what
they learned (does the training change
their behavior?)
• Requires post-training follow-up to
determine
• Less common than levels 1 & 2 in practice
Kirkpatrick Level 3:
Behavior
• Might include:
– Direct observation by supervisors or coaches
(Wisher, Curnow, & Drenth, 2001).
– Questionnaires completed by peers,
supervisors, and subordinates related to work
performance.
– On the job behaviors, automatically logged
performances, or self-report data.
Shepard, C. (1999b, July). Evaluating online learning. TACTIX from
Fastrak Consulting. Retrieved February 10, 2002, from:
http://fastrak-consulting.co.uk/tactix/Features/evaluate/eval01.htm.
Kirkpatrick Level 4:
Results
• Often compared to return on investment
(ROI)
• In e-learning, it is believed that the
increased cost of course development
ultimately is offset by the lesser cost of
training implementation
• A new way of training may require a
new way of measuring impact
Kirkpatrick Level 4: Results
• Might Include:
– Labor savings (e.g., reduced duplication of
effort or faster access to needed information).
– Production increases (faster turnover of
inventory, forms processed, accounts opened,
etc.).
– Direct cost savings (e.g., reduced cost per
project, lowered overhead costs, reduction of
bad debts, etc.).
– Quality improvements (e.g., fewer accidents, fewer defects, etc.).
Horton, W. (2001). Evaluating e-learning. Alexandria, VA:
American Society for Training & Development.
Kirkpatrick + Evaluation
Design
• Kirkpatrick’s 4 Levels may be
achieved via various evaluation
designs
• Different designs help answer
different questions
Pre/Post Control Groups
• One group receives OL training and one
does not
• As a variation, try 3 groups
– No training (control)
– Traditional training
– OL training
• Recommended because it may help
neutralize contextual factors
• Relies on random assignment as much
as possible
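A minimal sketch of how the three-group assignment and pre/post gain comparison could be scripted follows; the group labels, fabricated scores, and simple mean-gain comparison are illustrative assumptions, not part of the workshop materials.

```python
import random
from statistics import mean

# Hypothetical roster of learners to be randomly assigned to three conditions.
learners = [f"learner_{i:03d}" for i in range(60)]
random.shuffle(learners)

groups = {
    "control (no training)": learners[0::3],
    "traditional training": learners[1::3],
    "online training": learners[2::3],
}

# Pre/post test scores would come from actual testing; these are fabricated
# placeholder values used only to make the sketch runnable.
pre_scores = {lid: random.gauss(60, 8) for lid in learners}
post_scores = {lid: pre_scores[lid] + random.gauss(5, 4) for lid in learners}

# Compare mean gain (post minus pre) across the three groups.
for name, members in groups.items():
    gains = [post_scores[lid] - pre_scores[lid] for lid in members]
    print(f"{name}: mean gain = {mean(gains):.1f} points (n={len(members)})")
```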
Multiple Baselines
• Can be used for a program that is
rolling out
• Each group serves as a control
group for the previous group
• Look for improvement in
subsequent groups
• Eliminates need for tight control of
control group
Time Series
• Looks at benchmarks before and
after training
• Practical and cost-effective
• Not considered as rigorous as
other designs because it doesn’t
control for contextual factors
Single Group Pre/Post
• Easy and inexpensive
• Criticized for lack of rigor (absence
of control)
• Needs to be pushed into
Kirkpatrick levels 3 and 4 to see if
there has been impact
Case Study
• A rigorous design in academic
practice, but often after-the-fact in
corporate settings
• Useful when no preliminary or
baseline data have been collected
Matching Evaluation Levels
with Objectives Posttest
Instructions: For each statement below, indicate the
level of evaluation at which the objective is aimed.
1. Union Pacific Railroad reported an increase in bottom-line performance (on-time delivery of goods) of over 35%, which equated to millions of dollars in increased revenues and savings.
2. They also reported that learners showed a 40%
increase in learning retention and improved
attitudes about management and jobs.
(see handout for more)
Part IV:
ROI and Online
Learning
The Importance of ROI
• OL requires a great amount of $$
and other resources up front
• It gives the promise of financial
rewards later on
• ROI is of great interest because of
the investment and the wait period
before the return
Calculating ROI
• Look at:
– Hard cost savings
– Hard revenue impact
– Soft competitive benefits
– Soft benefits to individuals
See: Calculating the Return on Your eLearning
Investment (2000) by Docent, Inc.
Possible ROI Objectives
• Better Efficiencies
• Greater Profitability
• Increased Sales
• Fewer Injuries on the Job
• Less Time off Work
• Faster Time to Competency
Hard Cost Savings
• Travel
• Facilities
• Printed material costs (printing,
distribution, storage)
• Reduction of costs of business
through increased efficiency
• Instructor fees (sometimes)
The Cost of E-learning
• Brandon-hall.com estimates that an
LMS system for 8,000 learners costs
$550,000
• This price doesn’t include the cost of buying or developing content
• Bottom line: getting started in e-learning isn’t cheap
Hard Revenue Impact
• Consider
– Opportunity cost of improperly trained or untrained personnel
– Shorter time to productivity through
shorter training times with OL
– Increased time on job (no travel
time)
– Ease of delivering same training to
partners and customers (for fee?)
Soft Competitive Benefits
• Just-in-time capabilities
• Consistency in delivery
• Certification of knowledge transfer
• Ability to track users and gather data easily
• Increased morale from simultaneous roll-out at different sites
Individual Values
• Less wasted time
• Support available as needed
• Motivation from being treated as
an individual
Talking about ROI
• As a percentage
– ROI = [(Payback - Investment) / Investment] * 100
• As a ratio
– ROI = Return / Investment
• As time to break even
– Break-even time = (Investment / Return) * Time Period
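A quick worked example of the three representations above; the payback, investment, and time-period figures are hypothetical.

```python
# Hypothetical figures: $200,000 invested, $260,000 paid back over a 12-month period.
investment = 200_000
payback = 260_000
time_period_months = 12

roi_percent = (payback - investment) / investment * 100        # ROI as a percentage
roi_ratio = payback / investment                               # ROI as a ratio
break_even_months = investment / payback * time_period_months  # time to break even

print(f"ROI = {roi_percent:.0f}%")                       # 30%
print(f"ROI ratio = {roi_ratio:.2f} : 1")                # 1.30 : 1
print(f"Break-even at ~{break_even_months:.1f} months")  # ~9.2 months
```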
What is ROI Good For?
• Prioritizing Investment
• Ensuring Adequate Financial
Support for Online Learning
Project
• Comparing Vendors
The Changing Face of ROI
• “Return-on-investment isn’t what
it used to be … The R is no longer
the famous bottom line and the I is
more likely a subscription fee than
a one-time payment” (Cross, 2001)
More Calculations
• Total Admin Costs of Former Program - Total Admin Costs of OL Program = Projected Net Savings
• Total Cost of Training / # of Students = Cost Per Student (CPS)
• Total Benefits * 100 / Total Program Cost = ROI%
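A short worked example of these additional calculations, again using hypothetical figures rather than data from the workshop.

```python
# Hypothetical administrative costs and training totals, for illustration only.
former_admin_costs = 180_000   # total admin costs of former (classroom) program
ol_admin_costs = 95_000        # total admin costs of the online learning program
projected_net_savings = former_admin_costs - ol_admin_costs

total_training_cost = 250_000
num_students = 1_000
cost_per_student = total_training_cost / num_students

total_benefits = 400_000
roi_percent = total_benefits * 100 / total_training_cost  # as given on the slide

print(f"Projected net savings: ${projected_net_savings:,}")           # $85,000
print(f"Cost per student (CPS): ${cost_per_student:,.2f}")            # $250.00
print(f"ROI% (benefits * 100 / program cost): {roi_percent:.0f}%")    # 160%
```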
Pause: How are costs
calculated in online programs?
ROI Calculators
Success Story #1
(Sitze, March 2002, Online Learning):
EDS and GlobalEnglish
Charge: Reduce spending on English training
Goal: 80% online in 3 months
Result: 12% use in 12 months
Prior Costs: $1,500-5,000/student
New Cost: $150-300/user
Notes: Email to participants was helpful in expanding use; now rolling out additional languages.
Success Story #2 (Overby, Feb 2002, CIO):
Dow Chemical and Offensive Email
Charge: Train 40,000 employees across 70
countries; 6 hours of training on workplace respect
and responsibility.
Specific Results: 40,000 passed
Savings: Saved $2.7 million ($162,000 on record
keeping, $300,000 on classrooms and trainers,
$1,000,000 on handouts, $1,200,000 in salary
savings due to less training time).
Success Story #3 (Overby, Feb 2002, CIO):
Dow Chemical and Safety/Health
Charge: Train 27,000 employees on
environmental health and safety work
processes.
Results: Saved $6 million; safety incidents have declined while the number of Dow employees has grown.
Success Story #4 (Overby, Feb 2002, CIO):
Dow Chemical and e-learning system
Charge: $1.3 million e-learning system
Savings: $30 million in savings ($850,000 in manual record-keeping, $3.1 million in training delivery costs, $5.2 million in reduced classroom materials, $20.8 million in salaries since Web-based training required 40-60% less training time).
Success Story #5 (Ziegler, e-learning, April 2002):
British Telecom & sales training
Charge: Train 17,000 sales professionals to sell
Internet services using Internet simulation.
Result: Customer service rep training
reduced from 15 days to 1 day; Sales
training reduced from 40 days to 9 days.
Savings: Millions of dollars saved; sales
conversion went up 102 percent; customer
satisfaction up 16 points.
At the End of the Day...
• Are all training results quantifiable?
• NO! Putting a price tag on some costs
and benefits can be very difficult
• NO! Some data may not have much
meaning at face value
– What if more courses are offered and annual
student training hours drop simultaneously?
Is this bad?
Evaluation Cases
(homework…)
1. General Electric Case
2. Financial Services Company
3. Circuit Board Manufacturing Plant
Safety
4. Computer Company Sales Force
5. National HMO Call Center
Part V:
Collecting
Evaluation Data
& Online
Evaluation Tools
Collecting Evaluation Data
• Learner Reaction
• Learner Achievement
• Learner Job Performance
• Manager Reaction
• Productivity Benchmarks
Forms of Evaluation
• Interviews and Focus Groups
• Self-Analysis
• Supervisor Ratings
• Surveys and Questionnaires
• ROI
• Document Analysis
• Data Mining (changes pre- and post-training; e.g., sales, productivity)
How to Collect Data?
• Direct Observation in Work Setting
– By supervisor, co-workers,
subordinates, clients
• Collect Data By Surveys,
Interviews, Focus Groups
– Supervisors, Co-workers,
Subordinates, Clients
• Self-Report by learners or teams
• Email and Chat
Learner Data
• Online surveys are the most effective way
to collect online learner reactions
• Learner performance data can be collected
via online tests
– Pre and post-tests can be used to
measure learning gains
• Learner post-course performance data can
be used for Level 3 evaluation
– May look at on-the-job performance
– May require data collection from
managers
Example: Naval Phys. Training
Follow-Up Evaluation
• A naval training unit uses an online
survey/database system to track
performance of recently trained
physiologists
• Learners self-report performance
• Managers report on learner
performance
• Unit heads report on overall
productivity
Learning System Data
• Many statistics are available, but which are useful?
– Number of course accesses
– Log-in times/days
– Time spent accessing course components
– Frequency of access for particular components
– Quizzes completed and quiz scores
– Learner contributions to discussion (if applicable)
Computer Log Data
Chen, G. D., Liu, C. C., & Liu, B. J. (2000). Discovering decision knowledge from Web log portfolio for managing classroom processes by applying decision tree and data cube technology. Journal of Educational Computing Research, 23(3), 305-332.
• In a corporate training situation, computer log data can correlate online course completions with:
– actual job performance improvements such as
• fewer violations of safety regulations,
• reduced product defects,
• increased sales, and
• timely call responses.
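As an illustration of that kind of analysis, here is a minimal sketch that pairs hypothetical LMS completion counts with a job performance metric and checks their correlation; the field names and values are assumptions, not data from the cited study.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical per-employee records: online course completions from an LMS log
# and a post-training job performance metric (safety violations per quarter).
records = [
    {"employee": "e01", "courses_completed": 5, "safety_violations": 0},
    {"employee": "e02", "courses_completed": 1, "safety_violations": 3},
    {"employee": "e03", "courses_completed": 4, "safety_violations": 1},
    {"employee": "e04", "courses_completed": 2, "safety_violations": 2},
    {"employee": "e05", "courses_completed": 6, "safety_violations": 0},
]

completions = [r["courses_completed"] for r in records]
violations = [r["safety_violations"] for r in records]

# A negative coefficient would suggest more completions go with fewer violations.
r = correlation(completions, violations)
print(f"Correlation between completions and safety violations: {r:.2f}")
```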
Learner System Data
• If learners are being evaluated based on the number and length of accesses, it is only fair that they be told
• Much time can be wasted analyzing
statistics that don’t tell much about the
actual impact of the training
• Bottom line: Easy data to collect, but
not always useful for evaluation
purposes
– Still useful for management purposes
Benchmark Data
• Companies need to develop benchmarks
for measuring performance
improvement
• Managers typically know the job areas
that need performance improvement
• Both pre-training and post-training data
need to be collected and compared
• Must also look for other contextual
factors
Online Survey Tools
for Assessment
Web-Based Survey
Advantages
• Faster collection of data
• Standardized collection format
• Computer graphics may reduce fatigue
• Computer-controlled branching and skip sections
• Easy to answer by clicking
• Wider distribution of respondents
Sample Survey Tools
• Zoomerang (http://www.zoomerang.com)
• IOTA Solutions (http://www.iotasolutions.com)
• QuestionMark (http://www.questionmark.com/home.html)
• SurveyShare (http://SurveyShare.com; from Courseshare.com)
• Survey Solutions from Perseus (http://www.perseusdevelopment.com/fromsurv.htm)
• Infopoll (http://www.infopoll.com)
Online Testing Tools
(see: http://www.indiana.edu/~best/)
Test Selection Criteria
(Hezel, 1999; Perry & Colon, 2001)
• Easy to Configure Items and Test
• Handle Symbols, Timed Tests
• Scheduling of Feedback (immediate?)
• Flexible Scoring and Reporting
– (first, last, average, by individual or group)
• Easy to Pick Items for Randomizing
• Randomize Answers Within a Question
• Weighting of Answer Options
Web Resource: http://www.indiana.edu/~best/
Tips on Authentication
• Check e-mail access against list
• Use password access
• Provide keycode, PIN, or ID #
• (Futuristic/Other: palm print, fingerprint, voice recognition, iris scanning, facial scanning, handwriting recognition, picture ID)
Ziegler, April 2002, e-Learning
“…the key is not to measure every
possible angle, but rather to
focus on metrics that are
pragmatic and relevant to both
human and business
performance at the same time.”
E-Learning Evaluation
Measures
So which of the 16 methods
would you use???
Something ridiculous???
Some Final Advice…
Or Maybe Some Questions???