Transcript Document

Cognition
Human Factors
PSYC 2200
Michael J. Kalsher
Department of
Cognitive Science
Human Information
Processing System
• Perceptual Stage
– Bringing information in through senses (sensory memory)
and comparing it with existing knowledge from (long-term)
memory to give it meaning.
• Cognitive Stage
– Central processing (or thought stage) where we compare the
new information with current goals and memories.
– Transform the information, make inferences, solve problems,
consider responses.
• Action Stage
– Brain selects a response and then coordinates/sends motor
signals for action.
Generic Model for
Human Information Processing
Assumes that we
receive info from the
environment,
cognitively act on
that information, and
then emit some
response.
Assumes a linear
progression of
processing
Measuring Information
Processing
• Cognitive psychology (science of the mind)
began to replace behaviorism in the late
1950s/early 1960s
• Modeling “information processing” required
the use of special techniques (or Dependent
Variables) to infer what is going on “inside the
head.”
– Reaction time (simple; choice)
– Verbal Protocol analysis (asking people to tell you
what they’re thinking as they work on a task)
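As a rough illustration of reaction time as a dependent variable, the sketch below times a console key press with Python's standard library. This is only illustrative; real RT experiments use dedicated software and hardware with millisecond precision, and the function name here is made up for the example.

```python
# Minimal sketch: timing a simple reaction-time trial from the console.
# Real RT studies use calibrated hardware/software; this only illustrates
# the idea of reaction time as a dependent variable.
import random
import time

def simple_rt_trial() -> float:
    """Wait a random foreperiod, show a go signal, and time the key press."""
    input("Press Enter to start the trial...")
    time.sleep(random.uniform(1.0, 3.0))   # unpredictable foreperiod
    start = time.perf_counter()
    input("GO! Press Enter as fast as you can.")
    return time.perf_counter() - start     # reaction time in seconds

if __name__ == "__main__":
    rts = [simple_rt_trial() for _ in range(3)]
    print(f"Mean simple RT: {sum(rts) / len(rts):.3f} s")
```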
Memory: The Atkinson-Shiffrin Model
Sensory Memory
• Sensory Trace – temporary holding “bin”
– Iconic: visual
– Echoic: auditory
– Presumed also to be present for the other sensory modalities
• Characteristics
– Veridical: accurate
– Large storage capacity
– Brief time—decays and/or is interfered with by new
incoming info (clears for next input)
– Pre-categorical—before the information is interpreted (raw
form)
Sensory Memory
• Differences between Iconic & Echoic
– Iconic held for briefer time (less than 1 sec) than
echoic (2-3 sec)
– Iconic memory is more spatially oriented
– Echoic memory is more temporally oriented
Sensory Memory
• Backward Masking
Newer incoming information obscures (masks) older information held in the
visual (iconic) sensory store (e.g., priming studies).
Example: Presenting a “masking stimulus” immediately after
(less than 50 ms) a “target” visual stimulus makes it less likely that
people will consciously perceive the “target”.
• Suffix Effect
A weakening of the recency effect when an irrelevant item is
appended to a to-be-learned list of words (Morton, Crowder & Prussin,
1971).
Example: If asked to memorize a list of numbers, say: 1, 5, 7,
9, 15, you will remember 1 and 15 best. If an additional number
(e.g., 20) is added to the end of the list, but is not part of the to-be-remembered information, recall for #15 will be much weaker.
Sensory Memory:
Object & Pattern Perception
• Perceptual recognition: The process by which
sensory elements are transformed into a mental
representation or code and then compared with
stored information to categorize the information.
Involves “Many-to-One” Mapping
Perception gives meaning to sensory input by efficiently
reducing many simple information “bits” into fewer
representations (aggregation).
• Two important characteristics of this process:
– Perception by feature analysis (bottom-up processing)
– Simultaneous top-down & bottom-up processing
Feature Analysis: Bottom-up
Processing
• Involves analyzing a complex stimulus in terms of its
component parts.
• A three-stage process
– Break pattern into features
– Match features to stored patterns in long term
memory
– Decide which stored pattern is the best match
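As a toy illustration of the three-stage process above, the sketch below breaks recognition into stored feature sets, feature matching, and a best-match decision. The feature vocabulary and letter definitions are simplified assumptions for illustration, not an actual model of perception.

```python
# Toy sketch of bottom-up feature analysis: break a pattern into features,
# match against stored feature sets, and choose the best-matching letter.
# The feature names and letter definitions are simplified assumptions.
STORED_LETTERS = {
    "E": {"vertical", "horizontal_top", "horizontal_middle", "horizontal_bottom"},
    "F": {"vertical", "horizontal_top", "horizontal_middle"},
    "L": {"vertical", "horizontal_bottom"},
    "T": {"vertical", "horizontal_top"},
}

def recognize(observed_features: set[str]) -> str:
    """Return the stored letter whose features best match the input."""
    def match_score(letter: str) -> int:
        stored = STORED_LETTERS[letter]
        # Reward shared features, penalize features present in only one set.
        return 2 * len(stored & observed_features) - len(stored ^ observed_features)
    return max(STORED_LETTERS, key=match_score)

# A top bar, a middle bar, and a vertical stroke are best matched to "F".
print(recognize({"vertical", "horizontal_top", "horizontal_middle"}))  # -> F
```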
Feature Analysis:
Text Perception
• Features of incoming letters are compared with stored knowledge of
the features of known letters to find possible matches.
– Performed quickly and without awareness for familiar objects
(such as letters).
• Perception of print usually proceeds hierarchically:
– Features combined into letters, letters into words, words into
sentences
– Exception:
• Words seen frequently tend to be processed holistically rather
than as a set of letters.
• The transformation from feature analysis into more global
processing is called unitization.
Automaticity: Controlled vs.
Automatic Processing
• Controlled Processing (aka Bottom-up)
– Effortful cognitive processes that seem to require attention to
initiate and sustain them.
– Used when we must process relatively unfamiliar or complex
information.
• Automatic Processing (aka Top-Down)
– Process that can be initiated and run without any cognitive
demand on attentional resources.
– For a task to become automatic, it must exhibit consistent
mapping between its cognitively stored elements.
– When tasks are combined, there will be less cross-interference
in time-sharing to the extent that one or both tasks are automated.
Feature Analysis and Unitization
Implications for the Design of Textual Displays
• Feature compatibility
– Accuracy and speed of recognition will be greatest when the features of a display
are in a format compatible with features and units in memory.
• Upper and lower case
– For isolated words: upper case letters are more recognizable.
– In sentences: mixture of upper and lower case print is most easily perceived.
• Use print for text displays
– Print is more easily recognized and read than cursive writing.
• Minimize abbreviations
– Full words should be used for displays.
– The best abbreviation rule is truncation (use only the first few letters of the word).
• Space between words or character strings
– Ensuring there are gaps between words is crucial for accurate perception.
– Make use of “Chunking” (e.g., arrangement of alphanumeric characters on license plates).
Feature Analysis & Object
Perception
Biederman’s
Geon Theory
Irving Biederman suggested that
everyday objects are recognized
on the basis of combinations drawn
from a set of roughly 36 simple geometric shapes
(sub-objects) he termed geons.
According to Biederman, geons
are made up of defining features.
Object Perception and Geons
• Object Recognition Proceeds via 3 Steps:
– An object is broken into its component geons.
– Each geon categorized on the basis of feature matching.
– The object is identified on the basis of its component geons
and their configuration.
• Important assumptions of Biederman’s approach:
– Only edges of geons are critical for object recognition.
– Color and detail are only necessary if sub-object shapes are
similar for two or more objects and therefore cannot be
discriminated on the basis of component shape (e.g., an orange
and a basketball).
– Design implications (use of line drawings vs. photographs/pics).
Top-Down & Bottom-Up
Processing
Context is also important!
High level concepts and
information are sometimes
needed to process “low level”
perceptual features.
Use of top-down processing
occurs simultaneously and
automatically along with bottom-up processing.
Context effects are evident when stimuli are
unclear or incomplete (e.g., a pharmacist’s
prescription).
Whether we perceive a “6” or a “b”
is determined by context.
Top-Down Processing
• More critical for speech recognition than
reading printed words because of the
“fleeting” nature of speech (e.g., print remains;
speech decays)
• Verbal context around critical spoken words
greatly facilitates recognition.
– In a study of recognition of warnings among pilots,
the verbal warning “Your fuel is low” was more
comprehensible than “fuel low” (Simpson, 1976).
Information Processing Trade-offs
• Trade-offs
– When stimulus quality is high, bottom-up
processing will predominate.
– As stimulus quality degrades, more context and
redundancy are needed (top-down processing) to keep recognition levels high.
Display Guidelines for Design of
Text Displays & Icons
• Optimize Bottom-up Processing
– Optimize factors such as size and contrast and quality of the stimuli.
• Optimize Top-Down Processing
– Use actual words rather than random text strings
– Restrict overall message vocabulary (fewer possibilities = higher efficiency)
– Provide contextual cues to aid recognition/comprehension.
• Evaluate Tradeoffs
– Given limited space, evaluate environment for degraded viewing conditions
& availability of contextual cues to determine appropriate trade-off between
bottom-up and top-down processing.
• Usability Testing
– Evaluate recognition & comprehension of icons.
– Conduct usability testing using the actual environment and/or elements of
the task context.
Pictures and Icons:
Important Tradeoffs
Advantages:
– Universal/Not Language Dependent
– Takes advantage of dual-coding (analog spatial image & semantic/meaning)
– Useful for many applications (e.g., highway signs, computer displays, warnings)
Potential Disadvantages:
– Legibility
• May be difficult to discriminate under degraded viewing conditions
• Therefore, pictorial messages should be distinctive, sharing a minimal
number of features with other pictorials.
– Comprehension
• Standardization and training (e.g., ANSI; ISO).
• Whenever possible, words should be used with symbols/icons.
The GHS: Breaking All the Rules?
(Slide shows GHS pictograms labeled: Respiratory Hazard, Explosive, Hazardous to
Aquatic Environment, Flammable, Compressed Gas, Health Hazard (×2), Oxidizer)
The GHS: Considering Alternatives
Which of the preceding shapes best communicates URGENCY to you? Please choose one.
A: 33.33%   B: 51.28%   C: 5.13%   D: 10.26%   Not sure: 0.00%   Comments (if any): 0.00%
The GHS: Breaking All the Rules?
2. The following are possible examples of symbols for chemicals that are flammable.
Which of the preceding shapes best communicates a FLAMMABLE danger to you? Please choose one.
A: 31.58%   B: 26.32%   C: 5.26%   D: 5.26%   E: 26.32%   F: 5.26%   Not sure: 0.00%   Comments (if any): 0.00%
The GHS: Breaking All the Rules?
Which of the preceding shapes best communicates a RESPIRATORY danger to you? Please choose one.
A: 35.90%   B: 38.46%   C: 7.69%   D: 12.82%   E: 5.13%   F: 0.00%   Not sure: 0.00%   Comments (if any): 0.00%
The GHS: Breaking All the Rules?
A: 14.74%   B: 65.26%   C: 6.32%   D: 10.53%   E: 0.00%   Not sure: 3.16%   Comments (if any): 0.00%
Working Memory
• Term used to describe short-term store of info that is
currently active in central processing.
• Only a limited amount of info can be brought from
sensory register to working memory (1st bottleneck of the
information processing system).
• Relatively transient and limited in size.
• “Work bench” of consciousness in which we compare,
evaluate, and transform cognitive representations.
• Must be able to accommodate demands while supporting
active problem solving operations (e.g., hold a phone
number in memory until we have completed dialing it).
• Holds two different types of information
– Verbal & Spatial
Working Memory
• Information in working memory decays rapidly unless
actively rehearsed.
• Brown-Peterson paradigm: Showed that without
rehearsal, the ability to recall drops exponentially to an
asymptote.
Baddeley’s Model of Working Memory
• Three components
– Central executive acts as attention control system, coordinating info
from the two storage systems.
– Visuospatial sketch pad holds information in analog spatial form
(visual imagery) while being used. Info is from senses or retrieved
from LTM
– Phonological loop represents verbal information in acoustical form
(active rehearsal, either vocally or sub-vocally).
Limits of Working Memory
Our ability to keep (verbal or spatial)
information in working memory is
dependent on:
– Capacity (how much information must be kept active)
– Time (how long it must be kept active)
– Similarity (how similar the to-be-remembered material is
to other elements in working memory)
– Attention (how much attention is required to keep the
material active)
Capacity
• The upper limit of working memory capacity is
thought to be 7 +/- 2 chunks of information.
• A chunk is a unit of working memory space
that is defined jointly by the physical and
cognitive properties that bind items together.
– 8, 7, 5, 3 = 4 chunks
– 23, 27, 35, 46 = 4 chunks
Capacity: Chunking
• Sometimes cognitive binding can replace
physical binding
– Example:
• M T V N B C F B I C I A I Q = 14 chunks
• MTV, NBC, FBI, CIA, IQ = 5 chunks
• Extent to which units are “chunked” depends
on degree of user familiarity with groupings
• Chunking decreases the number of items held in
working memory and therefore increases the effective
capacity of working memory storage (see the sketch below).
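The sketch referenced above treats a hypothetical set of familiar acronyms as the "cognitive binding" and shows how the same 14 letters collapse into 5 chunks; the greedy matching is purely for illustration.

```python
# Sketch: chunking a letter stream against familiar groupings.
# KNOWN_CHUNKS stands in for familiar acronyms stored in long-term memory.
KNOWN_CHUNKS = {"MTV", "NBC", "FBI", "CIA", "IQ"}

def chunk(letters: str) -> list[str]:
    """Greedily group letters into the longest familiar chunk, else single letters."""
    chunks, i = [], 0
    while i < len(letters):
        for size in (3, 2):                      # try longer groupings first
            if letters[i:i + size] in KNOWN_CHUNKS:
                chunks.append(letters[i:i + size])
                i += size
                break
        else:                                    # no familiar grouping: one letter = one chunk
            chunks.append(letters[i])
            i += 1
    return chunks

stream = "MTVNBCFBICIAIQ"
print(len(stream), "letters ->", len(chunk(stream)), "chunks:", chunk(stream))
# 14 letters -> 5 chunks: ['MTV', 'NBC', 'FBI', 'CIA', 'IQ']
```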
Time
• Strength of information decays over time
unless it’s actively rehearsed (reactivated)
• Maintenance rehearsal is essentially a serial
process of sub-vocally articulating each item
• The more chunks, the longer it will take to
cycle through items in maintenance rehearsal
• “Half-life” of working memory: the delay after
which recall is reduced by one-half
– Estimated at approx. 7 seconds for a memory store of
3 chunks and approx. 70 seconds for one chunk (see the sketch below)
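Taking the half-life figures above as given, the strength of an unrehearsed trace can be approximated as strength(t) = 0.5^(t / half-life). A minimal sketch:

```python
# Sketch: exponential decay of an unrehearsed working-memory trace,
# using the half-life values quoted above (about 7 s for 3 chunks, 70 s for 1 chunk).
def trace_strength(t_seconds: float, half_life: float) -> float:
    """Relative strength remaining after t seconds without rehearsal."""
    return 0.5 ** (t_seconds / half_life)

for n_chunks, half_life in [(3, 7.0), (1, 70.0)]:
    remaining = trace_strength(14.0, half_life)
    print(f"{n_chunks} chunk(s): {remaining:.0%} of initial strength after 14 s")
# 3 chunks: ~25% after 14 s (two half-lives); 1 chunk: ~87%
```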
Attention and Similarity:
Interactions
Confusability and Similarity
Which list of letters is most likely to be retrieved correctly from working memory?
EGBDVC or ENWRUJ?
Why?
The second list, because the letters in the first list share similar acoustic
features and are easily confused with one another.
– This example demonstrates the dominant auditory aspect of the phonological
loop, since the memory confusion occurs whether the list is heard or seen.
1 - 800 - DOMINOS vs. 1 - 800 - 366 - 4667
Attention and Similarity
Working memory is resource-limited.
– If attentional resources are diverted to another task, rehearsal will stop and
memory decay will occur.
– If the competing activity uses the same resources, disruption will be worse
than if it does not (see multiple resource theory).
Human Factors Implications
• Minimize working memory load
– Items that operators need to retain in working memory during task performance
should be kept to a minimum
• Provide visual echoes
• Exploit chunking
– Visual presentation of phone numbers
– Optimize physical chunk size (optimal chunk size is 3 or 4 numbers or letters)
– Use meaningful sequences (letters are more meaningful than numbers)
– Keep numbers separate from letters (e.g., license plate sequencing)
• Minimize confusability
– More confusion occurs when lists sound similar; build in physical distinctions to reduce confusability.
• Exploit different working memory codes
– Working memory retains 2 qualitatively different types of information: visuospatial
and verbal/phonological
• Each system seems to process info somewhat independently
• If one code is being used, it will be disrupted more by processing that same type of info
than by processing info in the alternative code
Long Term Memory
LTM is conceptualized as having 3 basic stages:
– Encoding
Input from sensory memory, short-term memory, and attention
– Storage
Some memorial representation is laid down
– Retrieval
Remembering (the output)
(Output probably loops back into short-term memory)
Theories of LTM
• Levels of Processing
• Paivio’s Dual Code Theory
• Tulving’s episodic vs. semantic
memory theory
Levels of Processing
Memory strength depends on the strength of associations
made between to-be-remembered items and existing
information in LTM. Deep processing is superior to other,
more superficial types of processing.
Example
– Participants take part in a study employing the incidental memory
paradigm, in which they are unaware that their memory will be
tested later.
– They are given one of 3 tasks involving, say—an apple
Surface task (Is it red? Is it small?)
Phonological task (Does it rhyme with chapel?)
Semantic task (Is it edible? Is it a Fruit? Is it good for you?)
• People remember the information best when engaged in
semantic processing, as opposed to superficial processing.
Paivio’s Dual Code Theory
• Based on the fact that our memory for pictures is
better than our memory for words.
– Words are coded in memory by a verbal code
– Pictures are coded visually (by their appearance) and verbally.
• Dual coding theory helps explain why concrete words
(chair) are easier to remember than abstract words
(honor).
– Concrete words can easily be transformed to a pictorial image,
whereas abstract words cannot be transformed as easily.
Tulving’s Memory Theory: Types of Memory
Episodic
– Knowledge of specific experiences as we move
about the world.
Semantic (Jeopardy knowledge)
– Conceptual memory (knowledge of world and how it works)
– Devoid of memories of particular instances.
Procedural (“Muscle memory”)
– Implicit and skill-based knowledge that is usually
associated with performing an activity or
procedure (e.g., riding a bicycle).
Episodic Memory
Episodic memory represents our memory of experiences
and specific events in time in a serial form, from which we
can reconstruct the actual events that took place at any given
point in our lives.
It is the memory of autobiographical events (times, places,
associated emotions and other contextual knowledge) that
can be explicitly stated. Individuals tend to see themselves
as actors in these events, and the emotional charge and
the entire context surrounding an event is usually part of the
memory, not just the bare facts of the event itself.
Semantic Memory
• Semantic memory is a more structured record of facts,
meanings, concepts and knowledge about the external
world that we have acquired.
• It refers to general factual knowledge, shared with others and
independent of personal experience and of the
spatial/temporal context in which it was acquired. Semantic
memories may once have had a personal context, but now
stand alone as simple knowledge. It includes such things as
types of food, capital cities, social customs, functions of objects,
vocabulary, understanding of mathematics, etc. Much of
semantic memory is abstract and relational and is associated
with the meaning of verbal symbols.
Procedural Memory
• Procedural memory (“knowing how”) is the unconscious
memory of skills and how to do things, particularly the use of
objects or movements of the body, such as tying a shoelace,
playing a guitar or riding a bike.
• Typically acquired through repetition and practice, procedural
memory is composed of automatic sensorimotor behaviors that
are so deeply embedded that we are usually unaware of them.
Once learned, these "body memories" allow us to carry out
ordinary motor actions more or less automatically.
Memory and the Brain
• These different types of long-term memory are stored in different
regions of the brain and undergo quite different processes.
• Declarative memories (episodic and semantic) are encoded by
the hippocampus, entorhinal cortex and perirhinal cortex (all
within the medial temporal lobe of the brain), but are
consolidated and stored in the temporal cortex and elsewhere.
• Procedural memories, on the other hand, do not appear to
involve the hippocampus at all, and are encoded and stored by
the cerebellum, putamen, caudate nucleus and the motor
cortex, all of which are involved in motor control.
Long-Term Memory:
Basic Mechanisms
• Strength
– Determined by frequency and recency of use.
• Associations
– Retrieval of information in LTM dependent on the number and
richness of associations with other items. (related to depth of
processing)
• Forgetting
– Exponential function; most forgetting occurs quickly, then reaches
an asymptote (one common functional form is sketched after this list).
– Major reasons for memory retrieval failure:
• Weak strength due to low frequency or recency
• Weak or few associations with other information
• Interfering associations
• Recall vs. recognition
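One common way to write a forgetting curve with this shape (rapid early loss that levels off) is shown below; the symbols are illustrative and not taken from the lecture.

```latex
% R(t): proportion retained after delay t; a: asymptote; tau: time constant of forgetting.
R(t) = a + (1 - a)\, e^{-t/\tau}, \qquad 0 \le a < 1
% At t = 0 everything is retained (R = 1); as t grows, R approaches the asymptote a.
```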
Long-Term Memory:
How Information is Organized
Information in LTM is organized around central concepts
Schema is the term used to describe a person’s knowledge structure
about a particular topic.
– Example: The knowledge structure I have concerning my role as a college
professor.
– Schemas that describe a sequence of activities are called scripts
(e.g., interactive warnings; the goal is to interrupt existing “unsafe” scripts).
Mental Models are a type of schema that describes our understanding of
system components, how a system works, and how to use it.
Mental models that are shared by a large group of people are termed
population stereotypes.
Cognitive Maps are mental representations of spatial information (e.g.,
layout of a city, campus, workplace).
– Tolman’s work with rats
– Orientation usually reflects our viewpoint
– Re-orienting our perspective through mental rotation is effortful.
Tolman Procedure
3 groups of rats placed in a complex maze ending at a food box.
Group 1: Rewarded with food every time they got to the end of the maze.
Group 2: Day 1 – 10 = No reward; Day 11 – 17 = Rewarded.
Group 3: No reward
(Figure: average maze errors per day for each group. Group 2’s errors dropped
sharply once rewards began on Day 11, suggesting latent learning of a cognitive map.)
Long-Term Memory:
Implications for Design
1. Encourage regular use of information to increase
frequency (of use) and recency.
2. Encourage active verbalization or reproduction of the
to-be-remembered information (deep processing to form strong
associations is best!).
3. Standardization can reduce memory load (e.g., equipment,
environments, displays, symbols, operating procedures).
– Results in development of strong yet simple schemas and
mental models that are applicable to a wide variety of
circumstances.
Are there any downsides to standardization from a cognitive
perspective???
Long-Term Memory:
Implications for Design
4. Use Memory Aids
– Particularly important for infrequent or safety-critical tasks.
– Putting “knowledge in the world” so that people don’t have to rely on
“knowledge in the head” (LTM) (Norman, 1988).
5. Carefully design to-be-remembered information
– Info that must be remembered and retrieved unaided should have
characteristics such as:
• Be meaningful to the individual and semantically associated with other information
• Concrete rather than abstract words when possible
• Distinctive concepts and information (to reduce interference)
• Well-organized sets of info (grouped or otherwise associated)
• An item should be able to be guessed based on other info (top-down processing)
• Little technical jargon
6. Design to support development of correct mental models
– Apply the concept of visibility: users should immediately be able to
determine the state of a device and the alternatives for action.
Facilitating Memory:
Storage
Memory is better if the “to-be-remembered”
information is organized prior to memorization.
– Material induced organization
Organization is already provided in the stimuli
– Subjectively induced organization
Organized by having individuals use their own personal memories to
help process to-be-remembered information. That is, by integrating it
into existing mental structures, such as schemas, scripts, mental
models.
Facilitating Memory:
Encoding Specificity
Cues that are present when the to-be-remembered
event occurs are effective retrieval cues.
– The learning environment (studying and testing in
the same room is better than changing rooms).
– State-dependent memory (study wired; test
wired?)
– Mood-dependent memory.
Facilitating Memory:
Memory Mnemonics
Method of Loci
– Select a well-known location and mentally walk
through and visualize a specific number of objects at
that location (e.g., furniture in each of several rooms of your apartment or
house).
– Establish an ordered list of these items that you
commit to memory.
– Learn to attach to-be-remembered items to the list.
– Later, when you want to recall, take a walk back
through the location and “see” what was placed there.
Facilitating Memory:
Memory Mnemonics
Pegword Method
– Commit an ordered set of pegs to memory
• Rhyming: One is a bun, two is a shoe, three is a
tree…Well-known list: A is an apple, B is a bird, C is a
cat…
• New List:
– 1 = tree
– 2 = light switch
– 3 = stool, and so on ….
– Then visualize each to-be-remembered
item interacting with your pegs (e.g., a
grocery list, terms for a test, etc.).
• Important to use elaborate imagery. Humor often
helps!
Memory Failures: Episodic Memory for Events
• Personal knowledge of memory of a specific
event or episode is acquired by a single
experience (high school prom; traumatic event).
• Biases observed in episodic memory
– Similar to top-down biases in perception; guided by expectations, mood, …
• May be biased by plausible scenarios of how the episode in question might have been
expected to unfold
• Biases toward the “typical” may become more pronounced as time passes
• May be influenced by suggestion
• Applications
– Eye-witness testimony
• The Biggers criteria: a Supreme Court decision (Neil v. Biggers, 1972)
established 5 factors that should be considered when evaluating eyewitness
testimony accuracy: certainty, view, attention, description, and time.
– Accident investigation
Memory Failures: Sources of Bias at Each Stage
Encoding
– Attention: Witnesses tend to focus on the weapon used in a crime
(salience) instead of on the perpetrator’s face or other features.
– Expectancy: Witnesses may see what they expect to see.
Storage
– Degraded memory/schema modification: A degraded visual recollection may be
replaced by an LTM schema of what a “typical criminal” looks like, or by the
assumption that an auto crash typically occurs at high speed (causing the
person to overestimate the actual speed).
– Biasing events: A chance encounter with a suspect in handcuffs prior to the line-up.
Retrieval
– Recognition: Analogous to a signal detection task. We should maximize “hits” and
“correct rejections,” but should we also aim to minimize a guilty bias? (A small
worked example follows.)
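The signal detection framing in the Retrieval item can be made concrete with the small worked example below. The hit and false-alarm rates are hypothetical, and the d-prime index is standard signal detection theory rather than anything specific to the Biggers criteria.

```python
# Sketch: treating lineup recognition as signal detection.
# Hits = correctly identifying the perpetrator when present;
# false alarms = "identifying" an innocent filler. Numbers are hypothetical.
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index: z(hit rate) - z(false alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

print(f"d' = {d_prime(0.80, 0.20):.2f}")   # reasonably sensitive witness (~1.68)
print(f"d' = {d_prime(0.60, 0.45):.2f}")   # barely better than guessing (~0.38)
```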
Memory Failures:
Prospective Memory for Future Events
Failures of prospective memory: forgetting to do
something in the future.
– Reminder strategies
• Low tech - String around the finger? Post-its?
• High tech - PDAs
– Cognitive/behavioral strategies
• verbally stating or physically taking some action regarding the
required future activity the moment it is scheduled
Situation Awareness
Definitions:
– Users’ awareness of the meaning of dynamic
(unexpected) changes in their environment.
– The perception of the elements in the environment
within a volume of time and space, the
comprehension of their meaning, and the projection
of their status in the near future (Endsley, 1995).
Importance of SA for Human Factors:
Perception (What’s going on?)
Understanding (Interpret the meaning of existing cues)
Prediction (Predict the future implications of the available
information).
Attention and Mental Resources
• Selective attention allows us to process important
information. We usually attend to one (auditory)
channel at a given time.
• Focused attention allows us to filter out unwanted
information.
• Sustained attention (Vigilance tasks; infrequent signals).
• Divided Attention (Time-Sharing) allows us to perform
more than one cognitive task by attending to both at
once or by rapidly switching attention.
– Time sharing among multiple tasks usually causes performance
drop for one or more tasks, relative to baseline task performance.
– Four factors determine the extent to which two or more tasks can
be time-shared: Resource demand, Task Structure, Similarity,
Resource allocation (or task management).
Performance Improvement Guidelines:
Selective Attention Tasks
• Reduce number of channels that must be monitored.
• Emphasize relative importance of each channel to
effectively focus attention.
• Reduce stress to ensure appropriate channel sampling
and train to ensure optimal channel scanning patterns.
• Provide guidelines regarding probable location of signals.
• Arrange visual channels to optimize scanning efficiency.
• Prevent potential “masking” problems given multiple
auditory channels.
• Optimize rate of signal presentation (not too slow or too
fast) and where possible, allow personal control over rate.
Performance Improvement Guidelines:
Focused Attention Tasks
• Make competing channels distinct
(salience).
• Physically separate irrelevant channels
from channel of interest (organization).
• Reduce the number of competing
channels (information density).
• Make the channel of interest different
(salience).
Performance Improvement Guidelines:
Sustained Attention Tasks
• Vigilance tasks – human operators are very bad
at tasks requiring sustained attention in order to
detect infrequently occurring signals.
• Consider the following:
– Arrange appropriate rest-work schedules
– Maximize signal conspicuity
– Reduce uncertainty regarding signal
occurrence/location
– Arrange frequent training/feedback sessions
– Maintain environmental conditions at optimal levels
Performance Improvement Guidelines:
Divided Attention Tasks
• Minimize potential sources of information
• Prioritize task importance ahead of time
• Minimize task difficulty and select
dissimilar dual tasks
• Capitalize on “automatic processing” via
over-learning
Mental Workload
• Amount of cognitive resources available
in a person versus amount of resources
demanded by the task
• Measuring mental workload
– Primary task measurement
– Secondary task measurement
– Physiological measures
– Subjective measures
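As a concrete illustration of the secondary-task approach listed above, the sketch below expresses workload indirectly as how much secondary-task performance survives under dual-task conditions. The numbers and function name are hypothetical; validated tasks and instruments (e.g., NASA-TLX for subjective ratings) are used in practice.

```python
# Sketch: estimating spare capacity from secondary-task performance.
# The logic: the more resources the primary task demands, the worse a
# concurrent secondary task is performed relative to its single-task baseline.
# All numbers are hypothetical.
def spare_capacity(baseline_score: float, dual_task_score: float) -> float:
    """Fraction of secondary-task performance retained under dual-task load (0..1)."""
    return dual_task_score / baseline_score

for condition, score in [("easy primary task", 45.0), ("hard primary task", 20.0)]:
    print(f"{condition}: {spare_capacity(50.0, score):.0%} of baseline secondary-task performance")
# Less retained secondary-task performance implies higher workload from the primary task.
```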
Limitations
• Can’t assume that information will be
processed just because it is presented.
• People can only process so much
information per period of time.
• For complex displays, only a small amount
of info will be attended to at one time.
– The information most critical to task
performance must be provided in a way that
will attract attention (conspicuity/salience).
Multiple Resources
• Visual and auditory processing require
separate resources.
• There are multiple kinds of information-processing
resources that allow time-sharing to be more successful.
– Time-sharing (allocating resources optimally)
– Stages: Early vs. Late (resources for perceptual
processing/central processing separate from those used for
response selection/execution).
– Input Modalities: Visual vs. Auditory
– Processing Codes: Spatial vs. Verbal in Early
Processing and Manual vs. Vocal in Responding
Input Modalities
• Humans are generally better at dividing
attention between one visual and one
auditory input (cross-modal time-sharing) than
between 2 visual or 2 auditory channels
(intramodal time-sharing).
• Dual task interference is generally
reduced by spreading input across
visual and auditory modalities
Effect of presentation modality in direct-to-consumer
(DTC) prescription drug television advertisements
Wogalter, M.S., Shaver, E.F., & Kalsher, M.J. Applied Ergonomics, 45(5), 1330-6.
Direct-to-consumer (DTC) drug advertising markets medications requiring a
physician's script to the general public. In television advertising, risk disclosures
(such as side effects and contraindications) may be communicated in either
auditory (voice) or visual (text) or both in the commercials.
This research examines presentation modality factors affecting the
communication of the risk disclosures in DTC prescription drug television
commercials.
The results showed that risk disclosures presented either visually only or both
visually and auditorily increased recall and recognition compared to no
presentation. Risk disclosures presented redundantly in both the visual and
auditory modalities produced the highest recall and recognition. Visual only
produced better performance than auditory only. Simultaneous presentation of
non-risk information together with risk disclosures produced lower recall and
recognition compared to risk disclosures alone-without concurrent non-risk
information. Implications for the design of DTC prescription drug television
commercials and other audio-visual presentations of risk information including
on the Internet, are discussed.
Processing Codes
• Spatial and verbal processing (codes)
depend on distinct resources
• To the extent that any two tasks draw on
separate, rather than common
resources, time-sharing is more efficient
– Less likely to impact performance of
concurrent tasks
Confusion
• When the same resources are used for two
tasks, the amount of interference between
them depends on the similarity of the
information being processed.
• Similarity induces confusion.
• When two tasks are confused, it sometimes
produces cross-talk
– Output intended for one task inadvertently gets
delivered to the other.
Guidelines for maximizing
performance for concurrent tasks
1. Task re-design. Do a careful assessment of time-sharing requirements and make them less resource-demanding whenever possible.
2. Interface re-design. Off-load heavily demanded
resources (e.g., supplement with a voice display if the visual
display is too attention-demanding).
3. If we are performing a primary task that imposes a
heavy mental workload—it is unlikely we will be able
to add a second task successfully.
4. Training.
5. Automation.