Soarian™ User Interface
Evaluating Interface Designs
How do you know your design is any good?
When will you know?
Evaluating Interface Designs
Determinants of the evaluation plan
Design Stage (early, middle, late)
Novelty of the project (well defined vs. exploratory)
Number of expected users
Criticality of the interface (e.g., life-critical medical systems vs. museum exhibit support)
Costs of the product and the funds allocated for testing (typically 5% to 20% of the total project budget)
Time available
Experience of the design and evaluation team
Failure to perform and document testing can result in
Failed contract proposals
Malpractice lawsuits
Evaluating Interface Designs
Expert Reviews
Ask colleagues or customers for their feedback
Expert reviews can be conducted on short notice and with little time commitment
Can occur early or late in the design phase
Deliverable can be a formal report with identified problems and recommendations
Deliverable can also be an informal presentation to the development team and managers
Expert reviewers may need training in the task domain
Evaluating Interface Designs
Expert Reviews Methods
Heuristic Evaluation
http://www.youtube.com/watch?v=hWc0Fd2AS3s&feature=related
• Critique of the interface for conformance to a short list of heuristics; one way to record findings is sketched after the list
– Consistency
– Universal usability
– Informative feedback
– Closure
– Prevent errors
– Easy reversal of actions
– Internal locus of control (user as initiator)
– Reduce short-term memory load
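A minimal Python sketch (hypothetical, not from the slides) of recording heuristic-evaluation findings against the eight heuristics above; the screens, severities, and example findings are invented.

```python
# A minimal sketch of recording heuristic-evaluation findings.
from dataclasses import dataclass

HEURISTICS = [
    "Consistency", "Universal usability", "Informative feedback",
    "Closure", "Prevent errors", "Easy reversal of actions",
    "Internal locus of control", "Reduce short-term memory load",
]

@dataclass
class Finding:
    heuristic: str   # which rule the screen violates
    screen: str      # where the problem was observed
    severity: int    # 1 = cosmetic ... 4 = usability catastrophe
    note: str        # short description for the expert-review report

findings = [
    Finding("Informative feedback", "Order entry", 3,
            "No confirmation message after an order is submitted."),
    Finding("Prevent errors", "Login", 2,
            "No caps-lock warning on the password field."),
]

# Sanity-check the heuristic names, then print the most severe problems first.
assert all(f.heuristic in HEURISTICS for f in findings)
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"[severity {f.severity}] {f.heuristic} / {f.screen}: {f.note}")
```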
Evaluating Interface Designs
Expert Reviews Methods
Guidelines Review
• Based on organizational guidelines
Evaluating Interface Designs
Expert Reviews Methods
Consistency Inspection
• Terminology, fonts, colors, layout, input/output formats
Evaluating Interface Designs
Expert Reviews Methods
Cognitive Walkthrough
• Experts simulate users walking through the interface to carry out a typical task.
• Start with high-frequency tasks
• Critical tasks should definitely be evaluated
Evaluating Interface Designs
Expert Reviews Methods
Bird’s Eye View
• Study a complete set of UI screens on the floor (or pinned to walls)
• Provides an easy way to see fonts, colors, and terminology
Evaluating Interface Designs
Expert Reviews Methods
Expert-Review Report
• Can use the guidelines document to structure the report
• Comment on novice, intermittent and expert features
• Rank recommendations by importance and effort level
                           Effort Level
                           Low          High
User           Low         1, 3, 5      2, 4, 6
Importance     High        7, 9, 11     8, 10, 12
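A Python sketch of one conventional reading of the matrix above: high-importance, low-effort recommendations rank first. The recommendations themselves are invented.

```python
# Rank expert-review recommendations by importance, then by effort.
recommendations = [
    {"text": "Redesign the scheduling workflow", "importance": "High", "effort": "High"},
    {"text": "Fix inconsistent button labels",   "importance": "High", "effort": "Low"},
    {"text": "Tweak the splash-screen colors",   "importance": "Low",  "effort": "Low"},
]

IMPORTANCE = {"High": 0, "Low": 1}  # more important items first
EFFORT     = {"Low": 0, "High": 1}  # quick wins before large projects

ranked = sorted(recommendations,
                key=lambda r: (IMPORTANCE[r["importance"]], EFFORT[r["effort"]]))
for rank, r in enumerate(ranked, start=1):
    print(f"{rank}. {r['text']} ({r['importance']} importance, {r['effort']} effort)")
```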
Evaluating Interface Designs
Usability Testing and Laboratories
Controlled experiments
• Generally have at least two treatments
• Need to show statistically significant differences
• Goal is validation or rejection of a hypothesis
Usability tests
• Goal is to find flaws in the interface
• Fewer participants
• Outcome is a report
Both kinds of study include a carefully prepared set of tasks
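A minimal sketch of the controlled-experiment case: two treatments, and a test for a statistically significant difference. SciPy is assumed and the completion times (seconds) are invented.

```python
# Independent-samples t-test comparing two interface variants.
from scipy import stats

design_a = [48.2, 51.9, 44.7, 60.1, 55.3, 49.8, 47.0, 58.4]
design_b = [41.5, 39.8, 45.2, 38.9, 44.1, 40.7, 43.3, 37.6]

t, p = stats.ttest_ind(design_a, design_b)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant difference between the designs.")
else:
    print("No statistically significant difference found.")
```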
Evaluating Interface Designs
Usability Testing and Laboratories
Having a usability lab on site shows a commitment to customers, users, and employees
Generally contains two 10 x 10 rooms, divided by a half-silvered mirror
Staffed by one or more people
• Ideally have been involved in early task analysis or design reviews
Example – display-based phones
Evaluating Interface Designs
Usability Testing and Laboratories
Two to six weeks before the usability test
• Develop the detailed test plan (list of tasks, subjective satisfaction questions, debriefing questions)
• Identify the number, types and source of the participants
– Sources: Customer sites, personnel agencies, advertisements
• Conduct a pilot test one week ahead of testing
Participants
• Notify them that it is the software being evaluated, not them
• Inform them of the tasks they will be performing (e.g., ordering a product on a website)
• Inform them of how long they will be in the session (normally 1 to 3 hours)
• Obtain informed consent
Evaluating Interface Designs
Usability Testing and Laboratories
Informed consent
• I have freely volunteered to participate in this study
• I have been informed in advance of the tasks and procedures
• I have been given the opportunity to ask questions
• I am aware that I have the right to withdraw consent and to discontinue participation at any time, without prejudice to my future treatment
• My signature below may be taken as affirmation of all of the above statements; it was given prior to my participation in this study
Post-task feedback
• Participants can make general comments or suggestions, or respond to specific questions
Videotaping
• Reviewing can be tedious
• Log and annotate during the test
• Look for critical incidents
Evaluating Interface Designs
Usability Testing and Laboratories
Eye Tracking – Heat Maps
Evaluating Interface Designs
Usability Testing and Laboratories
Paper mockups
• Early in the design phase
• Get user reactions to wording, layout, and sequencing
Evaluating Interface Designs
Usability Testing and Laboratories
Discount usability testing
• Three to six participants (allows prompt revision and repeated testing)
• Formative evaluation – identifies problems that guide re-design
• Summative evaluation – provides evidence for product announcement
– “99% of our 100 testers completed their tasks without assistance”
Competitive usability testing
• Compares the new interface to previous versions or to similar products from competitors
• Within-subjects designs are the most powerful (see the sketch below)
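A sketch of why within-subjects designs are powerful: the same participants use both products, so a paired t-test cancels out between-person differences. SciPy is assumed; the timing data are invented.

```python
# Paired t-test for a within-subjects competitive usability test.
from scipy import stats

# Task-completion times (seconds) for six participants on each product.
new_interface = [52.1, 47.3, 60.0, 44.8, 55.6, 49.2]
competitor    = [58.4, 50.1, 66.3, 47.9, 61.0, 53.7]

t, p = stats.ttest_rel(new_interface, competitor)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```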
Think Aloud
• http://www.youtube.com/watch?v=tbKnFaW69e0&feature=related
Evaluating Interface Designs
Usability Testing and Laboratories
Field tests and portable labs
• Puts new interfaces to work in realistic environments for a fixed trial period
• Need portable labs with videotaping and logging facilities
Remote usability testing
• Web-based applications can be tested internationally, on-line
• Can recruit testers via email
• Less control over user behavior, and less chance to observe their reactions
• Usage logs and phone interviews are useful supplements
• UserWorks, Inc.
Can-you-break-this tests
• Destructive testing approach
• Users attempt to find fatal flaws
Evaluating Interface Designs
Usability Testing and Laboratories
Shortcomings
• Limited coverage of interface features
• Hard to predict success in long-term usage
• The lab environment is different from the real work environment
Evaluating Interface Designs
Survey Instruments
Often a companion to usability testing and expert reviews
Specify survey goals
• Ask users for their subjective impressions of specific aspects of the interface, e.g., the representation of:
– Task domain objects and actions
» E.g., appointments, PAT, treatment series
– Interface domain metaphors
» E.g., shopping cart
– Syntax of inputs and design of displays
» E.g., copy, add
• User specific information
– Background (e.g., age, gender, education, income)
– Experience with computers (e.g., software packages, length of time, depth of knowledge, TurboTax)
Evaluating Interface Designs
Survey Instruments
• User specific information
– Job responsibilities (e.g., in the trenches vs. manager)
– Personality type (e.g., introvert/extrovert, risk taking, early adopter)
– Reasons for not using an interface (e.g., too complex, too slow)
– Familiarity with features (e.g., printing, short-cuts, tutorials)
– Feelings about using the interface (e.g., confused vs. clear, frustrated vs. in control, bored vs. excited)
• Coleman and Williges (1985) – Bipolar Semantically Anchored Items
– Hostile        1 2 3 4 5 6 7   Friendly
– Vague          1 2 3 4 5 6 7   Specific
– Misleading     1 2 3 4 5 6 7   Beneficial
– Discouraging   1 2 3 4 5 6 7   Encouraging
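A minimal sketch of scoring such bipolar items, assuming each participant circles one value from 1 (left anchor) to 7 (right anchor); the responses are invented.

```python
# Mean rating per bipolar semantically anchored item.
from statistics import mean

responses = {
    "Hostile / Friendly":         [6, 5, 7, 6, 4],
    "Vague / Specific":           [4, 5, 3, 4, 5],
    "Misleading / Beneficial":    [5, 6, 6, 5, 7],
    "Discouraging / Encouraging": [6, 6, 5, 7, 6],
}

for item, scores in responses.items():
    print(f"{item:28s} mean = {mean(scores):.1f} / 7")
```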
Evaluating Interface Designs
Survey Instruments
Questionnaire for User Interaction Satisfaction (QUIS) – Shneiderman
– Readability of characters
– Layout of displays
– Meaningfulness of icons
– Interface actions (e.g., short-cuts)
– Terminology
– Screen sequencing
Evaluating Interface Designs
Survey Instruments
QUIS: General Content
• System experience (e.g., time spent on the application)
• Past experience (e.g., operating systems, devices, software)
• Overall reactions (e.g., terrible/wonderful; rigid/flexible)
• Screen objects (e.g., characters, highlighting, layouts, sequence)
• Terminology (e.g., error messages, amount of system feedback)
• Learning (e.g., getting started, time to learn advanced features)
• Exploration of features by trial and error
• Remembering names and use of commands
• Steps to complete a task are in a logical sequence
• System capabilities (e.g., speed, reliability)
• User manuals, online help, and tutorials
• Multimedia (quality of picture and sound)
• Teleconferencing (e.g., set-up, image quality, connector indicators)
• Software installation
Evaluating Interface Designs
Acceptance Tests
Used for software acceptance today
• Specific test cases, possibly with response-time requirements
Applied to usability acceptance
• Time to learn specific functions
• Speed of task completion
• Rates of errors
• User retention of commands
• Subjective user satisfaction
The goal is not to detect flaws, but to verify adherence to requirements
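A minimal sketch of verifying adherence to requirements; all thresholds and measured values below are invented acceptance criteria.

```python
# Check measured usability results against contractual acceptance criteria.
measured = {
    "learning_time_min": 28,    # time to learn the specified functions
    "task_time_sec":     95,    # mean speed of task completion
    "error_rate":        0.03,  # errors per task
    "satisfaction":      5.6,   # mean rating on a 1-7 scale
}

required = {
    "learning_time_min": ("<=", 30),
    "task_time_sec":     ("<=", 120),
    "error_rate":        ("<=", 0.05),
    "satisfaction":      (">=", 5.0),
}

OPS = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}

for name, (op, limit) in required.items():
    verdict = "PASS" if OPS[op](measured[name], limit) else "FAIL"
    print(f"{name:18s} measured {measured[name]:>6} (required {op} {limit}) -> {verdict}")
```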
Evaluating Interface Designs
Evaluation During Active Use
Major changes should be announced semi-annually or annually
Interviews and focus-groups
• One-on-one interviews yield comments that can then be discussed with a larger audience
Continuous user performance data logging
• The software can support the collection of the following (see the sketch after the list):
– Patterns of usage (e.g., new vs. existing patient)
– Speed of user performance
– Rate of errors
– Frequency of errors
» A frequent error flags a feature as a candidate for specific attention
– Access to help or support on an issue
– Frequently accessed features (candidates for simplified access)
– Rarely accessed features (why are they being avoided?)
– Potential privacy issues
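A sketch of mining such a log for error-prone features; the feature names loosely echo the examples above, and the log entries are invented.

```python
# Aggregate a continuous usage log to surface error-prone features.
from collections import Counter

log = [  # (feature, outcome) pairs as the application might record them
    ("new_patient", "ok"), ("new_patient", "error"), ("print", "ok"),
    ("new_patient", "error"), ("help_lookup", "ok"), ("print", "ok"),
    ("treatment_series", "error"), ("new_patient", "ok"),
]

uses   = Counter(feature for feature, _ in log)
errors = Counter(feature for feature, outcome in log if outcome == "error")

# A high error rate flags a feature as a candidate for specific attention.
for feature, n in uses.most_common():
    print(f"{feature:18s} used {n}x, error rate {errors[feature] / n:.0%}")
```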
Evaluating Interface Designs
Evaluation During Active Use
Online or telephone consultants
• Excellent source of information about problems users are having
• Source of suggested improvements
Blogs to discuss user problems
On-line suggestion box and email trouble reporting
Evaluating Interface Designs
Goal: an index similar to miles-per-gallon or energy-efficiency ratings, based on measures such as:
Learning time estimates
User satisfaction index
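The slides state only the goal; the function below is a purely hypothetical composite of the two measures named, with invented weighting and normalization.

```python
# Hypothetical 0-100 usability index from learning time and satisfaction.
def usability_index(learning_time_min: float, satisfaction_1_to_7: float,
                    worst_time: float = 120.0, weight: float = 0.5) -> float:
    time_score = max(0.0, 1.0 - learning_time_min / worst_time)  # faster is better
    sat_score = (satisfaction_1_to_7 - 1.0) / 6.0                # map 1-7 to 0-1
    return 100.0 * (weight * time_score + (1.0 - weight) * sat_score)

print(f"index = {usability_index(30, 5.5):.0f} / 100")
```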
Evaluating Interface Designs
Simple Designs?
INFOBAR C01 Japan’s Newest Android Phone
Evaluating Interface Designs
Controlled Experiments
The scientific method and HCI
• Deal with practical problems
• State a testable hypothesis
• Identify a small number of independent variables
• Identify the key dependent variables
• Judiciously select participants
• Control for biasing factors (participants, tasks)
• Apply appropriate statistical methods
• Resolve practical problems
A fraction of the users can be given improvements for a limited time and compared to a control group. Dependent measures may include (see the sketch after the list):
• Performance times
• User satisfaction
• Error rates
• User retention over time
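A sketch of comparing the improved fraction against the control group on one of the dependent measures above (error rate), assuming a chi-square test in SciPy; the counts are invented.

```python
# Chi-square test on error counts: improved interface vs. control group.
from scipy.stats import chi2_contingency

#            [errors, error-free tasks]
control  = [34, 466]
improved = [18, 482]

chi2, p, dof, _expected = chi2_contingency([control, improved])
print(f"chi2 = {chi2:.2f} (dof = {dof}), p = {p:.4f}")
```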