Alternative Approaches for Gathering End User Data

Anne de Ridder
Sr. User Experience Architect
White Horse, Portland, OR
UW HCDE Presentation
February 26, 2010
Overview

Introduction

The Why and When of using alternate approaches

So what are these approaches anyway?
– Case Study 1: Remote Unmoderated User Testing
– Case Study 2: Low/High-Tech User Testing
– Case Study 3a: Online Focus Groups
– Case Study 3b: Survey + Moderated Interviews
– Case Study 4: Online Card Sort

Questions/Discussion
Introduction

MS in Technical Communication from UW HCDE in 1999
Professional experience in user experience design, usability, technical writing, and marketing communications
Worked with big & small companies as both employee and consultant
– IBM, Sharp Microelectronics, JPL, Tektronix, Mountain Hardwear, Trane Commercial Systems, several municipalities, environmental consulting firms, and many more!
Currently at White Horse
– Digital marketing agency specializing in the convergence of emerging and traditional media to create immersive Web experiences
• Web development, digital marketing, technical engineering, emerging media, audio/video production
– Persona-led Design
– Dedicated User Experience team
• If it is UX-related, we can do it!
The Why and When of using alternate approaches

Why the need for alternate approaches?
– Additional tools in the HCD toolbox

When to use alternate approaches?
– Defined by the project conditions:
• Project phase
– What is the question you are trying to answer? And why?
• Project sponsor
– Who are your “customers”?
– What is their objective?
• End user data requirements
– Type of data; volume of data; quality of data
– Geographic distribution and language requirements; interface being evaluated;
result objectives
• Timeframe
• Cost
So what are these approaches anyway?

Focus of this presentation is on online tools

A lot of options out there that address many needs
– Card sorts, navigation evaluation, surveys, user testing, and more

Variation among options within sub-categories
– Considerable differences in quality and depth of data collected

The option you select is based on what you're looking for
– Result objectives
– End user data requirements
Case Study 1: Remote Unmoderated User Testing

Testing objective:
– Validate that a redesigned corporate Web site addresses the task-based and informational needs of the business's 7 distinct user segments
– Collect data from a large, geographically dispersed (US) group of users from each segment in a short turnaround time at relatively "low" cost
• Minimum 75 completed tests (5 to 15 tests per segment); maximum 400 tests total

Recommendation: Remote Unmoderated User Testing
– WebEffective
• Users invited via email invitation and via intercept on the Web site
• Task-based test using the client's Web site
– Collection of both qualitative and quantitative data, though focus was on the qualitative
– Collection of user behavioral data including:
» Clicks, hovers, scrolling, text entries, form field selections
Case Study 1: Results

14 days of testing
Tests begun: 1,420; Reportable: 224 (funnel arithmetic sketched below)
– Disqualified: 585 (screener; filter); Dropped off: 611 (pre-background questions; 331 after background survey)

Biggest differences from traditional testing:
– It's a lot of data!
• 40 hours of data analysis; 70 slides of data summaries, and that was just capturing the high-level picture
– Need to "recreate" the user experience
– Need to recruit/include a greater number of participants to offset drop-offs, disqualifications, outliers, "no-navs", "give-ups"
– (Not so) accurate measurement of confidence ratings
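A minimal sketch of the funnel arithmetic behind these counts; the category totals are from the slide, while the rate calculations are added here for illustration:

```python
# Participation funnel from the counts reported on this slide.
tests_begun = 1420
disqualified = 585              # screener / filter
dropped_off = 611               # pre-background questions; after background survey
reportable = tests_begun - disqualified - dropped_off   # 224, matching the slide

qualified = tests_begun - disqualified
print(f"Qualified after screening: {qualified / tests_begun:.1%}")   # ~58.8%
print(f"Completed once qualified:  {reportable / qualified:.1%}")    # ~26.8%
print(f"Overall completion rate:   {reportable / tests_begun:.1%}")  # ~15.8%
```

An overall completion rate under 16% is why the recruiting pool had to be so much larger than the 75-to-400 reportable-test target.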
Case Study 1: Community Perspective

What does the overall research community say?
– Independent, informal study shows statistically comparable results from lab-based tests and remote unmoderated tests (see the comparison sketch below)
• Sauro, Tullis, Molich, Kirakowski, UPA 2009
• http://www.measuringusability.com/unmoderated-testing.php
• Study also highlights the need to collect larger numbers of completed tests in order to have a substantial subset of "qualifying" data
– "Beyond the Usability Lab: Conducting Large-scale Online User Experience Studies" by W. Albert, T. Tullis, and D. Tedesco, 2010
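The UPA study's raw data isn't reproduced here, but a two-proportion z-test is one common way to check whether a lab completion rate and a remote completion rate actually differ. A minimal sketch with hypothetical counts (none of these numbers come from the cited study):

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# Hypothetical counts: 9 of 12 lab participants completed a task,
# vs. 140 of 173 remote participants on the same task.
z, p = two_proportion_z(9, 12, 140, 173)
print(f"z = {z:.2f}, p = {p:.2f}")  # p well above 0.05: no detectable difference
```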
Case Study 2: “Low/High-Tech” Testing


User acceptance prototype testing
"Live" tests via online interface
– Project A:
• Low participant turnout on day 1 of listening labs
• Double-booked day 2 to supplement live sessions
• Used GoToMeeting to let users interact with the prototype and to record sessions (voice; mouse movements)
– Project B:
• 5-hour budget for user testing
• Three distinct user segments
• Used GoToMeeting to let users interact with the prototype and to record sessions (voice; mouse movements)
• Firewall problems prevented users from fully engaging with the prototype
Case Study 3: Online Focus Groups


Web site redesign to incorporate a more customer-centric approach for finding financial products and services

Outcome of Task Analysis and Modeling Phase
– Validate that interpretation and synthesis of past research matches actual user behavior
• Collect feedback on where experience and task flow diagrams match/miss current user behavior and needs
• 1,200 participants total (for statistical significance; see the sample-size sketch below), including both demographic and geographic segmentations
– Large-scale synchronous online focus groups
• Allow quicker, cost-effective collection of significant data across personas and geographic locations
• FacilitatePro (www.facilitate.com)
Online Focus Group (screenshot)
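The slide doesn't show how the 1,200-participant target was derived, but the standard sample-size formula for estimating a proportion illustrates the kind of arithmetic involved. A minimal sketch; the confidence level and margins of error here are assumptions, not values from the case study:

```python
# Standard sample-size formula for estimating a proportion:
#   n = z^2 * p * (1 - p) / e^2
# p = 0.5 is the most conservative assumption; z = 1.96 gives 95% confidence.

def sample_size(margin_of_error, p=0.5, z=1.96):
    """Participants needed to estimate a proportion within +/- margin_of_error."""
    return (z ** 2) * p * (1 - p) / margin_of_error ** 2

print(round(sample_size(0.05)))  # ~384 per independently analyzed group
print(round(sample_size(0.10)))  # ~96
```

At a 5% margin of error, roughly 384 participants are needed per independently analyzed group, so a handful of demographic and geographic segments quickly adds up to a four-figure total.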
Case Study 3b: Survey + Moderated Interviews

Design Concept Testing
– Goal:
• Gain insight into user expectations about design elements, interaction flows, and outside influences
• Gain a clear picture of the motivators behind target users' assessments of design concepts
• Testing outcome: final concept direction selected based on user data; inform content strategy
• 600 users
– Method: iModerate iMpact
• Collection of both qualitative and quantitative data for each target user group
• Survey to larger group (via any survey tool of choice)
• Percentage of users intercepted and "interviewed" by professional moderators
Case Study 4: Remote Card Sort


Web site redesign for a domain service provider
Recommended a complete shift in information architecture and recategorization of offerings to better meet the needs of site users
– Based on competitive benchmarking, buy flow analysis, and a customer survey

Segmentation: business/personal use; Web site maintenance experience

Remote open card sort (www.websort.net) recommended to determine (analysis sketch below):
– Market segment variances in user groupings of particular items and topic category naming
– Nomenclature participants used to describe the topic categories

Initial scope: 30 "recruited" participants
– Limited budget for user research: "How can we get user data in one day?"
– Additional segmentation: 4 geographic markets

Outcome:
– Customer felt confident in recommendations based on the competitive benchmarking report and internal stakeholder review
– Used card sort as an internal IA tool
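A minimal sketch of how open card sort results like these are commonly analyzed: count how often participants group each pair of cards together, convert the counts to distances, and cluster hierarchically. This mirrors the "cluster analysis & dendrograms" output some tools report; the card names and sorts below are invented for illustration, not data from this case study:

```python
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

cards = ["Register domain", "Renew domain", "Email hosting", "Web hosting"]
sorts = [                      # each participant's sort: groups of card indices
    [{0, 1}, {2, 3}],
    [{0, 1, 3}, {2}],
    [{0, 1}, {2}, {3}],
]

# Co-occurrence matrix: how often each pair of cards shares a group.
n = len(cards)
co = np.zeros((n, n))
for groups in sorts:
    for group in groups:
        for a, b in combinations(sorted(group), 2):
            co[a, b] += 1
            co[b, a] += 1

dist = 1 - co / len(sorts)     # always grouped together -> distance 0
np.fill_diagonal(dist, 0)
tree = linkage(squareform(dist), method="average")
leaves = dendrogram(tree, labels=cards, no_plot=True)["ivl"]
print(leaves)                  # leaf order suggests candidate categories
```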
Other Available Options
Remote User Testing Options

WebEffective
– Including mobile

UserTesting.com
– Data reported: tasks; participant videos; summaries; ability to "watch" keystrokes/mouse movements/clicks
– Use their panel or your own customers; $39 per user

Loop11
– Data reported: task completion rate; time per task; most common success page; most common fail page; most common first click; most common navigation path; detailed participant path analysis; number of page views to complete tasks (see the metrics sketch below)
– $350 up to 1,000 participants, unlimited tasks/questions
– http://konigi.com/tools/submissions/loop11-online-unmoderated-user-testing

UserZoom
– Including mobile
– Data reported: effectiveness ratios; efficiency ratios; click-stream paths; click-mapping; users' suggestions & feedback; satisfaction & perception indicators; cluster analysis & dendrograms
– www.userzoom.com
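Metrics like the task completion rate and time-on-task reported above are simple aggregations over session logs. A minimal sketch; the record format here is a made-up illustration, not any tool's actual export:

```python
# Aggregate raw session records into task completion rate and time-on-task.
from statistics import mean

sessions = [  # (participant, task, completed, seconds)
    ("p1", "find_pricing", True, 42.0),
    ("p2", "find_pricing", False, 95.5),
    ("p3", "find_pricing", True, 31.2),
]

attempts = [s for s in sessions if s[1] == "find_pricing"]
completion_rate = sum(s[2] for s in attempts) / len(attempts)
avg_time = mean(s[3] for s in attempts if s[2])  # successful attempts only

print(f"Completion rate: {completion_rate:.0%}")  # 67%
print(f"Avg time on task: {avg_time:.1f}s")       # 36.6s
```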
Online Focus Group Options

FacilitatePro
– http://www.facilitate.com/solutions/focus-groups.html

Artafact
– http://www.artafact.com/online-focus-groups.html#

Invoke
– http://www.invoke.com/index/products_online_focusgroups

Itracks
– http://www.itracks.com/Products/OnlineFocusGroup/tabid/73/Default.aspx

Qual-vu
– Online focus groups incorporating Video Diaries
– http://www.qualvu.com/video_diary
Survey Tools

iModerate
– http://www.imoderate.com/

ForeSee
– Ability to tie satisfaction ratings to ROI based on the proven American Customer Satisfaction Index
– Initial survey during unique user session
– Ability to track impact of satisfaction on future behaviors (true conversion numbers based on cookie tracking)
– http://foreseeresults.com/
IA Validation Tools

WebSort.net
– Online Card Sort

UserZoom
– Online Card Sort
– www.userzoom.com

Optimal Workshop
– Online Card Sort
– Site Map testing
– Task-based "click test" of mockup or screenshot
– http://www.optimalworkshop.com/
Case Study 1: Test Development and Implementation

Test Design:
– Test Kit development identical to traditional usability testing
• More consideration of user fatigue in completing "repetitive" follow-up questions
– Additional UX time to:
• Input tasks and questions into the test system and define test logic, including "minimum" participation requirements
• Coordinate implementation of technical code on the client site
• QA and optimize the test for the online experience
• 33 additional hours

Live Testing Period:
– Monitor participation on a daily basis at a minimum
• Quotas; "auto-disqualified" participants; drop-off
Case Study 1: Participant Data

Number of participants:
– Some navigations not captured
– Some unofficial task "give-ups" due to lack of interest/time/other?
– Inconsistent levels of participation (need to scrub the reportable data even further)

Reportable tests by segment:
– Segment 1: 0
– Segment 2: 2
– Segment 3: 25
– Segment 4: 173
– Segment 5: 11
– Segment 6: 10
– Segment 7: 3
(slide annotations: better reflection of trends; easy to identify patterns/trends; starting to see trends/patterns)

Jakob Nielsen knows how many users are enough (see the problem-discovery sketch below)
• Qualitative studies: http://www.useit.com/alertbox/20000319.html
• Quantitative studies: http://www.useit.com/alertbox/quantitative_testing.html
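The first Alertbox article cited above models the share of usability problems found by n users as 1 − (1 − L)^n, with L ≈ 31% as the average chance that a single user exposes a given problem. A quick sketch of that curve shows why small samples go a long way in qualitative testing, and why segments with only a handful of reportable tests are just "starting to see" patterns:

```python
# Nielsen's problem-discovery model (useit.com Alertbox, March 2000):
# share of usability problems found by n users = 1 - (1 - L)^n,
# where L ~ 0.31 is the average chance one user exposes a given problem.

L = 0.31
for n in (1, 3, 5, 10, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} users -> {found:.0%} of problems found")
# Five users already surface ~84% of problems, the basis for the
# "five users is enough" guidance for qualitative testing.
```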