Cognitive Architectures - ACT-R


Introduction to ACT-R 5.0
Tutorial
24th Annual Conference
Cognitive Science Society
Christian Lebiere
Human Computer Interaction Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
ACT-R Home Page:
http://act.psy.cmu.edu
Tutorial Overview
1. Introduction
2. Symbolic ACT-R
Declarative Representation: Chunks
Procedural Representation: Productions
ACT-R 5.0 Buffers: A Complete Model for Sentence Memory
3. Chunk Activation in ACT-R
Activation Calculations
Spreading Activation: The Fan Effect
Partial Matching: Cognitive Arithmetic
Noise: Paper Rocks Scissors
Base-Level Learning: Paired Associate
4. Production Utility in ACT-R
Principles and Building Sticks Example
5. Production Compilation
Principles and Successes
6. Predicting fMRI BOLD response
Principles and Algebra example
Motivations for a Cognitive Architecture
1. Philosophy: Provide a unified understanding of the
mind.
2. Psychology: Account for experimental data.
3. Education: Provide cognitive models for intelligent
tutoring systems and other learning environments.
4. Human Computer Interaction: Evaluate artifacts and
help in their design.
5. Computer Generated Forces: Provide cognitive agents to
inhabit training environments and games.
6. Neuroscience: Provide a framework for interpreting data
from brain imaging.
Approach: Integrated Cognitive Models

Cognitive model = computational process that thinks/acts like a person

Integrated cognitive models
[Diagram: a Driver Model combined with multiple User Models]
Study 1: Dialing Times
[Figure: total time to complete dialing, model predictions vs. human data]

Study 1: Lateral Deviation
[Figure: deviation from lane center (RMSE), model predictions vs. human data]
These Goals for Cognitive Architectures Require
1. Integration, not just of different aspects of higher level
cognition but of cognition, perception, and action.
2. Systems that run in real time.
3. Robust behavior in the face of error, the unexpected, and
the unknown.
4. Parameter-free predictions of behavior.
5. Learning.
History of the ACT-framework

Predecessor:
HAM (Anderson & Bower, 1973)

Theory versions:
ACT-E (Anderson, 1976)
ACT* (Anderson, 1983)
ACT-R (Anderson, 1993)
ACT-R 4.0 (Anderson & Lebiere, 1998)
ACT-R 5.0 (Anderson & Lebiere, 2001)

Implementations:
GRAPES (Sauers & Farrell, 1982)
PUPS (Anderson & Thompson, 1989)
ACT-R 2.0 (Lebiere & Kushmerick, 1993)
ACT-R 3.0 (Lebiere, 1995)
ACT-R 4.0 (Lebiere, 1998)
ACT-R/PM (Byrne, 1998)
ACT-R 5.0 (Lebiere, 2001)
Windows Environment (Bothell, 2001)
Macintosh Environment (Fincham, 2001)
~ 100 Published Models in ACT-R 1997-2002
I. Perception & Attention
1. Psychophysical Judgements
2. Visual Search
3. Eye Movements
4. Psychological Refractory Period
5. Task Switching
6. Subitizing
7. Stroop
8. Driving Behavior
9. Situational Awareness
10. Graphical User Interfaces

II. Learning & Memory
1. List Memory
2. Fan Effect
3. Implicit Learning
4. Skill Acquisition
5. Cognitive Arithmetic
6. Category Learning
7. Learning by Exploration and Demonstration
8. Updating Memory & Prospective Memory
9. Causal Learning

III. Problem Solving & Decision Making
1. Tower of Hanoi
2. Choice & Strategy Selection
3. Mathematical Problem Solving
4. Spatial Reasoning
5. Dynamic Systems
6. Use and Design of Artifacts
7. Game Playing
8. Insight and Scientific Discovery

IV. Language Processing
1. Parsing
2. Analogy & Metaphor
3. Learning
4. Sentence Memory

V. Other
1. Cognitive Development
2. Individual Differences
3. Emotion
4. Cognitive Workload
5. Computer Generated Forces
6. fMRI
7. Communication, Negotiation, Group Decision Making

Visit http://act.psy.cmu.edu/papers/ACT-R_Models.htm for the full list.
ACT-R 5.0
Intentional Module (not identified) → Goal Buffer (DLPFC)
Declarative Module (Temporal/Hippocampus) → Retrieval Buffer (VLPFC)
Productions (Basal Ganglia): Matching (Striatum), Selection (Pallidum), Execution (Thalamus)
Visual Module (Occipital/etc) → Visual Buffer (Parietal)
Manual Module (Motor/Cerebellum) → Manual Buffer (Motor)
The visual and manual modules interface with the Environment.
ACT-R: Knowledge Representation
Declarative-Procedural Distinction

Declarative Knowledge: Chunks
Configurations of small numbers of elements, e.g. an Addition-Fact with
addend1 Three, addend2 Four, and sum Seven.

Procedural Knowledge: Production Rules for retrieving chunks to solve
problems, e.g. in column addition (336 + 848):

IF the goal is to add the numbers in a column
and n1 + n2 are in the column
THEN retrieve the sum of n1 and n2.

Productions serve to coordinate the retrieval of information from
declarative memory and the environment (via the goal, retrieval, and
visual buffers) to produce transformations in the goal state.
ACT-R: Assumption Space
Performance:
- Declarative / Symbolic: retrieval of chunks
- Procedural / Symbolic: application of production rules
- Declarative / Subsymbolic: noisy activations control speed and accuracy
- Procedural / Subsymbolic: noisy utilities control choice

Learning:
- Declarative / Symbolic: encoding environment and caching goals
- Procedural / Symbolic: production compilation
- Declarative / Subsymbolic: Bayesian learning
- Procedural / Subsymbolic: Bayesian learning
Chunks: Example
(CHUNK-TYPE NAME SLOT1 SLOT2 ... SLOTN)

(FACT3+4
   isa ADDITION-FACT
   ADDEND1 THREE
   ADDEND2 FOUR
   SUM SEVEN)
Chunks: Example
(CLEAR-ALL)
(CHUNK-TYPE addition-fact addend1 addend2 sum)
(CHUNK-TYPE integer value)
(ADD-DM (fact3+4
isa addition-fact
addend1 three
addend2 four
sum seven)
(three
isa integer
value 3)
(four
isa integer
value 4)
(seven
isa integer
value 7))
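The slot-and-value structure of chunks can be mirrored in ordinary code. A minimal Python sketch (purely illustrative, not part of ACT-R; the class names are hypothetical):

```python
from dataclasses import dataclass

# A chunk is a typed bundle of slot-value pairs; slots of one chunk
# may point to other chunks, as fact3+4 points to three, four, seven.
@dataclass(frozen=True)
class Integer:
    value: int

@dataclass(frozen=True)
class AdditionFact:
    addend1: Integer
    addend2: Integer
    sum: Integer

three, four, seven = Integer(3), Integer(4), Integer(7)
fact3_4 = AdditionFact(addend1=three, addend2=four, sum=seven)
print(fact3_4.sum.value)  # 7
```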
Chunks: Example
[Diagram: FACT3+4 (isa ADDITION-FACT) with ADDEND1 THREE, ADDEND2 FOUR, and
SUM SEVEN; THREE, FOUR, and SEVEN are each an INTEGER with VALUE 3, 4, and 7.]
Chunks: Exercise I
Fact:
The cat sits on the mat.
Encoding:
proposition
(Chunk-Type proposition agent action object)
(Add-DM
(fact007
isa proposition
agent cat007
action sits_on
object mat)
)
[Diagram: fact007 (isa proposition) with agent cat007, action sits_on, object mat.]
Chunks: Exercise II
The black cat with 5 legs sits on the mat.
Chunks:

(Chunk-Type proposition agent action object)
(Chunk-Type cat legs color)

(Add-DM
 (fact007 isa proposition
          agent cat007
          action sits_on
          object mat)
 (cat007 isa cat
         legs 5
         color black))

[Diagram: fact007 (isa proposition) with agent cat007, action sits_on,
object mat; cat007 (isa cat) with legs 5, color black.]
Chunks: Exercise III
(Chunk-Type proposition agent action object)
(Chunk-Type prof money-status age)
(Chunk-Type house kind price status)
Fact
The rich young professor buys a
beautiful and expensive city
house.
Chunks:

(Add-DM
 (fact008 isa proposition
          agent prof08
          action buys
          object obj1001)
 (prof08 isa prof
         money-status rich
         age young)
 (obj1001 isa house
          kind city-house
          price expensive
          status beautiful))

[Diagram: fact008 (isa proposition) with agent prof08, action buys,
object obj1001; prof08 (isa prof) with money-status rich, age young;
obj1001 (isa house) with kind city-house, price expensive, status beautiful.]
A Production is
1. The greatest idea in cognitive science.
2. The least appreciated construct in cognitive science.
3. A 50 millisecond step of cognition.
4. The source of the serial bottleneck in an otherwise parallel
system.
5. A condition-action data structure with “variables”.
6. A formal specification of the flow of information from
cortex to basal ganglia and back again.
Productions
Key Properties:
- modularity
- abstraction
- goal/buffer factoring
- conditional asymmetry

Structure of productions:

(p name
   <specification of buffer tests>            ; condition part
==>                                           ; delimiter
   <specification of buffer transformations>  ; action part
)
ACT-R 5.0 Buffers
1. Goal Buffer (=goal, +goal)
-represents where one is in the task
-preserves information across production cycles
2. Retrieval Buffer (=retrieval, +retrieval)
-holds information retrieved from declarative memory
-seat of activation computations
3. Visual Buffers
-location (=visual-location, +visual-location)
-visual objects (=visual, +visual)
-attention switch corresponds to buffer transformation
4. Auditory Buffers (=aural, +aural)
-analogous to visual
5. Manual Buffers (=manual, +manual)
-elaborate theory of manual movement including feature
preparation, Fitts's law, and device properties
6. Vocal Buffers (=vocal, +vocal)
-analogous to manual buffers but less well developed
Model for Anderson (1974)
Participants read a story consisting of Active and Passive sentences.
Subjects are asked to verify either active or passive sentences.
All Foils are Subject-Object Reversals.
Predictions of ACT-R model are “almost” parameter-free.
DATA (Studied-form/Test-form, sec):

          Active-active  Active-passive  Passive-active  Passive-passive
Targets:  2.25           2.80            2.30            2.75
Foils:    2.55           2.95            2.55            2.95

Predictions:

          Active-active  Active-passive  Passive-active  Passive-passive
Targets:  2.36           2.86            2.36            2.86
Foils:    2.51           3.01            2.51            3.01

CORRELATION: 0.978
MEAN DEVIATION: 0.072
250 msec in the life of ACT-R:
Reading the Word “The”
Identifying Left-most Location
Time 63.900: Find-Next-Word Selected
Time 63.950: Find-Next-Word Fired
Time 63.950: Module :VISION running command FIND-LOCATION
Attending to Word
Time 63.950: Attend-Next-Word Selected
Time 64.000: Attend-Next-Word Fired
Time 64.000: Module :VISION running command MOVE-ATTENTION
Time 64.050: Module :VISION running command FOCUS-ON
Encoding Word
Time 64.050: Read-Word Selected
Time 64.100: Read-Word Fired
Time 64.100: Failure Retrieved
Skipping The
Time 64.100: Skip-The Selected
Time 64.150: Skip-The Fired
Attending to a Word in Two Productions
(P find-next-word
   =goal>
      ISA        comprehend-sentence
      word       nil                 ; no word currently being processed
==>
   +visual-location>
      ISA        visual-location
      screen-x   lowest              ; find left-most unattended location
      attended   nil
   =goal>
      word       looking             ; update state
)

(P attend-next-word
   =goal>
      ISA        comprehend-sentence
      word       looking             ; looking for a word
   =visual-location>
      ISA        visual-location     ; visual location has been identified
==>
   =goal>
      word       attending           ; update state
   +visual>
      ISA        visual-object
      screen-pos =visual-location    ; attend to object in that location
)
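The control flow of these two productions can be sketched as plain functions over buffer contents. This is a loose Python analogy, not the ACT-R API; the dictionary keys are hypothetical:

```python
def find_next_word(buffers):
    """IF no word is currently being processed, THEN request the left-most
    unattended location and mark the goal as looking (illustrative only)."""
    goal = buffers["goal"]
    if goal.get("word") is None:
        # request to the visual-location buffer (+visual-location>)
        buffers["visual-location"] = {"screen-x": "lowest", "attended": None}
        goal["word"] = "looking"   # goal-buffer modification (=goal>)
        return True                # production fired
    return False                   # condition did not match

buffers = {"goal": {"word": None}}
print(find_next_word(buffers), buffers["goal"]["word"])  # True looking
print(find_next_word(buffers))  # False: condition no longer matches
```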
Processing “The” in Two Productions
(P read-word
   =goal>
      ISA      comprehend-sentence
      word     attending          ; attending to a word
   =visual>
      ISA      text
      value    =word              ; word has been identified
      status   nil
==>
   =goal>
      word     =word              ; hold word in goal buffer
   +retrieval>
      ISA      meaning
      word     =word              ; retrieve word's meaning
)

(P skip-the
   =goal>
      ISA      comprehend-sentence
      word     "the"              ; the word is "the"
==>
   =goal>
      word     nil                ; set to process next word
)
Processing “missionary” in 450 msec.
Identifying left-most unattended Location
Time 64.150: Find-Next-Word Selected
Time 64.200: Find-Next-Word Fired
Time 64.200: Module :VISION running command FIND-LOCATION
Attending to Word
Time 64.200: Attend-Next-Word Selected
Time 64.250: Attend-Next-Word Fired
Time 64.250: Module :VISION running command MOVE-ATTENTION
Time 64.300: Module :VISION running command FOCUS-ON
Encoding Word
Time 64.300: Read-Word Selected
Time 64.350: Read-Word Fired
Time 64.550: Missionary Retrieved
Processing the First Noun
Time 64.550: Process-First-Noun Selected
Time 64.600: Process-First-Noun Fired
Processing the Word “missionary”
Missionary 0.000
isa MEANING
word "missionary"
(P process-first-noun
   =goal>
      ISA      comprehend-sentence
      agent    nil           ; neither agent nor action has been assigned
      action   nil
      word     =y
   =retrieval>
      ISA      meaning       ; word meaning has been retrieved
      word     =y
==>
   =goal>
      agent    =retrieval    ; assign meaning to agent
      word     nil           ; and set to process next word
)
Three More Words in the life of ACT-R: 950 msec.
Processing “was”
Time 64.600: Find-Next-Word Selected
Time 64.650: Find-Next-Word Fired
Time 64.650: Module :VISION running command FIND-LOCATION
Time 64.650: Attend-Next-Word Selected
Time 64.700: Attend-Next-Word Fired
Time 64.700: Module :VISION running command MOVE-ATTENTION
Time 64.750: Module :VISION running command FOCUS-ON
Time 64.750: Read-Word Selected
Time 64.800: Read-Word Fired
Time 64.800: Failure Retrieved
Time 64.800: Skip-Was Selected
Time 64.850: Skip-Was Fired
Processing “feared”
Time 64.850: Find-Next-Word Selected
Time 64.900: Find-Next-Word Fired
Time 64.900: Module :VISION running command FIND-LOCATION
Time 64.900: Attend-Next-Word Selected
Time 64.950: Attend-Next-Word Fired
Time 64.950: Module :VISION running command MOVE-ATTENTION
Time 65.000: Module :VISION running command FOCUS-ON
Time 65.000: Read-Word Selected
Time 65.050: Read-Word Fired
Time 65.250: Fear Retrieved
Time 65.250: Process-Verb Selected
Time 65.300: Process-Verb Fired
Processing “by”
Time 65.300: Find-Next-Word Selected
Time 65.350: Find-Next-Word Fired
Time 65.350: Module :VISION running command FIND-LOCATION
Time 65.350: Attend-Next-Word Selected
Time 65.400: Attend-Next-Word Fired
Time 65.400: Module :VISION running command MOVE-ATTENTION
Time 65.450: Module :VISION running command FOCUS-ON
Time 65.450: Read-Word Selected
Time 65.500: Read-Word Fired
Time 65.500: Failure Retrieved
Time 65.500: Skip-By Selected
Time 65.550: Skip-By Fired
Reinterpreting the Passive
(P skip-by
   =goal>
      ISA      comprehend-sentence
      word     "by"
      agent    =per
==>
   =goal>
      word     nil
      object   =per
      agent    nil
)
Two More Words in the life of ACT-R: 700 msec.
Processing “the”
Time 65.550: Find-Next-Word Selected
Time 65.600: Find-Next-Word Fired
Time 65.600: Module :VISION running command FIND-LOCATION
Time 65.600: Attend-Next-Word Selected
Time 65.650: Attend-Next-Word Fired
Time 65.650: Module :VISION running command MOVE-ATTENTION
Time 65.700: Module :VISION running command FOCUS-ON
Time 65.700: Read-Word Selected
Time 65.750: Read-Word Fired
Time 65.750: Failure Retrieved
Time 65.750: Skip-The Selected
Time 65.800: Skip-The Fired
Processing “cannibal”
Time 65.800: Find-Next-Word Selected
Time 65.850: Find-Next-Word Fired
Time 65.850: Module :VISION running command FIND-LOCATION
Time 65.850: Attend-Next-Word Selected
Time 65.900: Attend-Next-Word Fired
Time 65.900: Module :VISION running command MOVE-ATTENTION
Time 65.950: Module :VISION running command FOCUS-ON
Time 65.950: Read-Word Selected
Time 66.000: Read-Word Fired
Time 66.200: Cannibal Retrieved
Time 66.200: Process-Last-Word-Agent Selected
Time 66.250: Process-Last-Word-Agent Fired
Retrieving a Memory: 250 msec
Time 66.250: Retrieve-Answer Selected
Time 66.300: Retrieve-Answer Fired
Time 66.500: Goal123032 Retrieved
(P retrieve-answer
   =goal>
      ISA       comprehend-sentence
      agent     =agent
      action    =verb
      object    =object
      purpose   test             ; sentence processing complete
==>
   =goal>
      purpose   retrieve-test    ; update state
   +retrieval>
      ISA       comprehend-sentence
      action    =verb            ; retrieve sentence involving verb
      purpose   study
)
Generating a Response: 410 ms.
Time 66.500: Answer-No Selected
Time 66.700: Answer-No Fired
Time 66.700: Module :MOTOR running command PRESS-KEY
Time 66.850: Module :MOTOR running command PREPARATION-COMPLETE
Time 66.910: Device running command OUTPUT-KEY
(P answer-no
   =goal>
      ISA       comprehend-sentence
      agent     =agent
      action    =verb
      object    =object
      purpose   retrieve-test    ; ready to test
   =retrieval>
      ISA       comprehend-sentence
    - agent     =agent           ; retrieved sentence does not
      action    =verb            ; match agent or object
    - object    =object
      purpose   study
==>
   =goal>
      purpose   done             ; update state
   +manual>
      ISA       press-key
      key       "d"              ; indicate no
)
Subsymbolic Level
The subsymbolic level reflects an analytic characterization of
connectionist computations. These computations have been implemented
in ACT-RN (Lebiere & Anderson, 1993) but this is not a practical modeling
system.
1. Production Utilities are responsible for determining which productions get selected
when there is a conflict.
2. Production Utilities have been considerably simplified in ACT-R 5.0 over ACT-R 4.0.
3. Chunk Activations are responsible for determining which chunks (if any) get
retrieved and how long it takes to retrieve them.
4. Chunk Activations have been simplified in ACT-R 5.0 and a major step has been
taken towards the goal of parameter-free predictions by fixing a number of the
parameters.
As with the symbolic level, the subsymbolic level is not a static level, but is changing
in the light of experience. Subsymbolic learning allows the system to adapt to the
statistical structure of the environment.
Activation
A_i = B_i + Σ_j W_j·S_ji + Σ_k MP_k·Sim_kl + N(0,s)

[Diagram: chunk i is the addition fact with slots Addend1 Three, Addend2 Four,
Sum Seven; B_i is its base-level activation and the S_ji are the associative
strengths from the goal's sources.]

The production requesting the chunk supplies the sources and the match
conditions:

Conditions:
=goal>
   isa      write
   relation sum
   arg1     Three
   arg2     Four

Actions:
+retrieval>
   isa      addition-fact
   addend1  Three
   addend2  Four
Chunk Activation
activation = base activation
           + Σ (source activation × associative strength)
           + Σ (mismatch penalty × similarity value)
           + noise

A_i = B_i + Σ_j W_j·S_ji + Σ_k MP_k·Sim_kl + N(0,s)

Activation makes chunks available to the degree that past experiences
indicate that they will be useful at the particular moment:
Base-level: general past usefulness
Associative Activation: relevance to the general context
Matching Penalty: relevance to the specific match required
Noise: stochasticity is useful to avoid getting stuck in local minima
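The activation sum is easy to compute directly. A minimal Python sketch of the equation, with hypothetical parameter values (not ACT-R defaults):

```python
import random

def chunk_activation(base, sources, strengths, mismatches, noise_s):
    """Sketch of A_i = B_i + sum_j W_j*S_ji + sum_k MP_k*Sim_kl + N(0, s).
    All numeric inputs here are hypothetical, not ACT-R defaults."""
    spread = sum(w * s for w, s in zip(sources, strengths))   # spreading activation
    partial = sum(mp * sim for mp, sim in mismatches)         # partial-matching term
    return base + spread + partial + random.gauss(0.0, noise_s)

# Hypothetical retrieval of fact3+4 with two goal sources (Three, Four)
a = chunk_activation(base=0.5,
                     sources=[0.5, 0.5],       # W_j: attention split over goal slots
                     strengths=[2.0, 2.0],     # S_ji: associative strengths
                     mismatches=[(1.0, 0.0)],  # perfect match: similarity 0
                     noise_s=0.0)              # noise off for a deterministic value
print(a)  # 0.5 + (0.5*2.0 + 0.5*2.0) = 2.5
```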
Activation, Latency and Probability
• Retrieval time for a chunk is a negative
exponential function of its activation:

   Time_i = F·e^(−A_i)

• Probability of retrieval of a chunk follows the
Boltzmann (softmax) distribution:

   P_i = e^(A_i/t) / Σ_j e^(A_j/t),   with t = √2·s

• The chunk with the highest activation is retrieved,
provided that it reaches the retrieval threshold τ
• For purposes of latency and probability, the
threshold can be considered as a virtual chunk
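Both mappings can be computed in a few lines. A Python sketch with a hypothetical latency factor F and temperature t:

```python
import math

def retrieval_time(activation, F=1.0):
    """Latency: Time_i = F * exp(-A_i). F is a hypothetical latency factor."""
    return F * math.exp(-activation)

def retrieval_probs(activations, t=0.35):
    """Boltzmann (softmax) retrieval probabilities at temperature t."""
    exps = [math.exp(a / t) for a in activations]
    z = sum(exps)
    return [e / z for e in exps]

print(retrieval_time(2.5))   # high activation: fast retrieval
print(retrieval_time(0.0))   # A = 0 gives latency F = 1.0
ps = retrieval_probs([2.5, 1.5, 0.5])
print(ps)                    # the highest-activation chunk dominates
```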
Base-level Activation
activation = base activation:   A_i = B_i

The base-level activation B_i of chunk C_i reflects a context-independent
estimate of how likely C_i is to match a production, i.e.
B_i is an estimate of the log odds that C_i will be used.

Two factors determine B_i:
• frequency of using C_i
• recency with which C_i was used

   B_i = ln( P(C_i) / (1 − P(C_i)) )
Source Activation
+ Σ_j (source activation W_j × associative strength S_ji)

The source activations W_j reflect the amount of attention
given to elements, i.e. fillers, of the current goal. ACT-R
assumes a fixed capacity for source activation:
W = Σ_j W_j, and W reflects an individual difference parameter.
Associative Strengths
+ Σ_j (source activation W_j × associative strength S_ji)

The association strength S_ji between chunks C_j and C_i is a measure of
how often C_i was needed (retrieved) when C_j was an element of the goal,
i.e. S_ji estimates the log likelihood ratio of C_j being a source of
activation if C_i was retrieved:

   S_ji = ln( P(N_i|C_j) / P(N_i) ) ≈ S − ln(fan_j)

where fan_j is the number of chunks associated with C_j.
Application: Fan Effect
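The fan effect follows from these equations: the more chunks a source is associated with, the smaller P(N_i|C_j), the weaker each S_ji, and the slower retrieval. A Python sketch with hypothetical parameter values (S, W_j, B_i, and F are all illustrative):

```python
import math

def sji(S, fan):
    """Approximate associative strength: with fan_j chunks associated to
    source j, P(N_i|C_j) ~ 1/fan_j, so S_ji ~ S - ln(fan_j).
    S is a hypothetical maximum associative strength."""
    return S - math.log(fan)

# Two goal sources, each with W_j = 0.5; B_i = 0; latency factor F = 1.0
for fan in (1, 2, 3):
    a = 0.0 + 2 * (0.5 * sji(2.0, fan))          # A_i = B_i + sum_j W_j*S_ji
    latency = 1.0 * math.exp(-a)                 # Time_i = F*e^(-A_i)
    print(fan, round(a, 3), round(latency, 3))   # latency grows with fan
```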
Partial Matching
+ Σ_k (mismatch penalty MP_k × similarity value Sim_kl)

• The mismatch penalty is a measure of the amount of control over memory
retrieval: MP = 0 is free association; MP very large means perfect matching;
intermediate values allow some mismatching in search of a memory match.
• Similarity values Sim_kl are defined between the desired value k specified
by the production and the actual value l present in the retrieved chunk. This
provides generalization properties similar to those in neural networks; the
similarity value is essentially equivalent to the dot-product between
distributed representations.
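A Python sketch of the mismatch term, with a hypothetical mismatch penalty and similarity table; close numbers (as in cognitive arithmetic) receive a milder penalty than the default maximum dissimilarity:

```python
def match_score(probe, chunk, mp=1.5, sims=None):
    """Partial-matching sketch: add MP * Sim(k, l) for each slot where the
    probe value k differs from the chunk value l. Similarities are <= 0,
    with 0 meaning identical; mp and the table are hypothetical values."""
    score = 0.0
    for slot, k in probe.items():
        l = chunk[slot]
        if k != l:
            score += mp * sims.get((k, l), -1.0)  # default: maximally dissimilar
    return score

sims = {(3, 4): -0.1, (4, 3): -0.1}     # close numbers are similar
probe = {"addend1": 2, "addend2": 3}
print(match_score(probe, {"addend1": 2, "addend2": 3}, sims=sims))  # perfect match
print(match_score(probe, {"addend1": 2, "addend2": 4}, sims=sims))  # mild penalty
```

With a large `mp`, even the mild penalty would exclude the near-miss chunk; with `mp = 0`, any chunk could be retrieved regardless of mismatch.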
Application: Cognitive Arithmetic
Table 3.1
Data from Siegler & Shrager (1984) and ACT-R's Predictions

Data (probability of each answer; "Other" includes retrieval failure):

Problem    0     1     2     3     4     5     6     7     8   Other
1+1        -    .05   .86    -    .02    -    .02    -     -    .06
1+2        -    .04   .07   .75   .04    -    .02    -     -    .09
1+3        -    .02    -    .10   .75   .05   .01   .03    -    .06
2+2       .02    -    .04   .05   .80   .04    -    .05    -     -
2+3        -     -    .07   .09   .25   .45   .08   .01   .01   .06
3+3       .04    -     -    .05   .21   .09   .48    -    .02   .11

Predictions:

Problem    0     1     2     3     4     5     6     7     8   Other
1+1        -    .10   .75   .10   .01    -     -     -     -    .04
1+2        -    .01   .10   .75   .10    -     -     -     -    .04
1+3        -     -    .01   .10   .78   .06    -     -     -    .04
2+2        -     -    .0    .1    .82   .02    -     -     -    .04
2+3        -     -     -    .03   .32   .45   .06   .01    -    .13
3+3        -     -     -    .04   .04   .08   .61   .08   .01   .18
Noise
+ noise, ε ~ N(0,s)
• Noise provides the essential stochasticity of human behavior
• Noise also provides a powerful way of exploring the world
• Activation noise is composed of two noises:
• A permanent noise accounting for encoding variability
• A transient noise for moment-to-moment variation
Application: Paper Rocks Scissors
(Lebiere & West, 1999)
• Too little noise makes the system too deterministic.
• Too much noise makes the system too random.
• This is not limited to game-playing situations!
[Figure: Effect of Noise (Lag2 Against Lag2): score difference (high noise −
low noise) as a function of noise level (0.0–1.0), for noise = 0, 0.1, 0.25.]

[Figure: Effect of Noise (Lag2 Against Lag1): score difference (Lag2 − Lag1)
as a function of the noise level of the Lag1 model (0.0–1.0), for Lag2
noise = 0, 0.1, 0.25.]
Base-Level Learning
Based on the Rational Analysis of the Environment
(Schooler & Anderson, 1997)
Base-Level Activation reflects the log odds that a chunk
will be needed. In the environment, the odds that a fact
will be needed decay as a power function of the time since
it was last used, and the effects of multiple uses sum in
determining the odds of being needed.
Base-Level Learning Equation:

   B_i = ln( Σ_{j=1..n} t_j^(−d) )

where t_j is the time since the j-th use of chunk i. Retaining only the k
most recent of the n uses over a life of L:

   B_i = ln( Σ_{j=1..k} t_j^(−d) + (n − k)·(L^(1−d) − t_k^(1−d)) / ((1 − d)·(L − t_k)) )

     ≈ ln( n / (1 − d) ) − d·ln(L)

[Figure: activation level over 200 seconds, rising with each use and
decaying as a power function between uses.]

Note: The decay parameter d has been set to .5 in most ACT-R models.
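Both the exact sum and the approximation are easy to compute. A Python sketch with hypothetical use times:

```python
import math

def base_level(use_times, now, d=0.5):
    """Exact Base-Level Learning Equation: B_i = ln(sum_j t_j^(-d)),
    where t_j is the time since the j-th use. d = 0.5 as in most models."""
    return math.log(sum((now - u) ** -d for u in use_times))

def base_level_approx(n, L, d=0.5):
    """Approximation for n uses spread over a life of L seconds:
    B_i ~ ln(n / (1 - d)) - d*ln(L)."""
    return math.log(n / (1 - d)) - d * math.log(L)

uses = [10.0, 50.0, 90.0, 130.0]     # hypothetical use times (seconds)
print(round(base_level(uses, now=150.0), 3))
print(round(base_level_approx(4, 150.0), 3))  # in the neighborhood of the exact value
```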
Paired Associate: Study
Time 5.000: Find Selected
Time 5.050: Module :VISION running command FIND-LOCATION
Time 5.050: Find Fired
Time 5.050: Attend Selected
Time 5.100: Module :VISION running command MOVE-ATTENTION
Time 5.100: Attend Fired
Time 5.150: Module :VISION running command FOCUS-ON
Time 5.150: Associate Selected
Time 5.200: Associate Fired
(p associate
   =goal>
      isa     goal
      arg1    =stimulus
      step    attending      ; attending word during study
      state   study
   =visual>
      isa     text
      value   =response      ; visual buffer holds response
      status  nil
==>
   =goal>
      isa     goal
      arg2    =response      ; store response in goal with stimulus
      step    done
   +goal>
      isa     goal
      state   test           ; prepare for next trial
      step    waiting)
Paired Associate: Successful Recall
Time 10.000: Find Selected
Time 10.050: Module :VISION running command FIND-LOCATION
Time 10.050: Find Fired
Time 10.050: Attend Selected
Time 10.100: Module :VISION running command MOVE-ATTENTION
Time 10.100: Attend Fired
Time 10.150: Module :VISION running command FOCUS-ON
Time 10.150: Read-Stimulus Selected
Time 10.200: Read-Stimulus Fired
Time 10.462: Goal Retrieved
Time 10.462: Recall Selected
Time 10.512: Module :MOTOR running command PRESS-KEY
Time 10.512: Recall Fired
Time 10.762: Module :MOTOR running command PREPARATION-COMPLETE
Time 10.912: Device running command OUTPUT-KEY
Paired Associate: Successful Recall (cont.)
(p read-stimulus
=goal>
isa goal
step attending
state test
=visual>
isa text
value =val
==>
+retrieval>
isa goal
relation associate
arg1 =val
=goal>
isa goal
relation associate
arg1 =val
step testing)
(p recall
=goal>
isa goal
relation associate
arg1 =val
step testing
=retrieval>
isa goal
relation associate
arg1 =val
arg2 =ans
==>
+manual>
isa press-key
key =ans
=goal>
step waiting)
Paired Associate Example
Data                             Predictions

Trial  Accuracy  Latency         Trial  Accuracy  Latency
1      .000      0.000           1      .000      0.000
2      .526      2.156           2      .515      2.102
3      .667      1.967           3      .570      1.730
4      .798      1.762           4      .740      1.623
5      .887      1.680           5      .850      1.584
6      .924      1.552           6      .865      1.508
7      .958      1.467           7      .895      1.552
8      .954      1.402           8      .930      1.462
? (collect-data 10)  Note simulated runs show random fluctuation.
ACCURACY
(0.0 0.515 0.570 0.740 0.850 0.865 0.895 0.930)
CORRELATION: 0.996
MEAN DEVIATION: 0.053
LATENCY
(0 2.102 1.730 1.623 1.589 1.508 1.552 1.462)
CORRELATION: 0.988
MEAN DEVIATION: 0.112
NIL
Production Utility
Making Choices: Conflict Resolution
P is the expected probability of success
G is the value of the goal
C is the expected cost

   Expected Gain:  E = P·G − C

   Probability of choosing i = e^(E_i/t) / Σ_j e^(E_j/t)

where t reflects noise in evaluation and is like temperature in the
Boltzmann equation.

   P = Successes / (Successes + Failures)

   Successes = α + m     (α is prior successes, m is experienced successes)
   Failures  = β + n     (β is prior failures, n is experienced failures)
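A Python sketch of conflict resolution, with hypothetical values for G, C, the priors, and the temperature:

```python
import math

def expected_gain(p, g, c):
    """Expected gain E = P*G - C."""
    return p * g - c

def choose(productions, t=0.5):
    """Softmax conflict resolution: P(i) = e^(E_i/t) / sum_j e^(E_j/t).
    t is a hypothetical noise temperature."""
    exps = {name: math.exp(e / t) for name, e in productions.items()}
    z = sum(exps.values())
    return {name: v / z for name, v in exps.items()}

def success_probability(alpha, m, beta, n):
    """P = Successes / (Successes + Failures), with priors alpha and beta."""
    return (alpha + m) / (alpha + m + beta + n)

p = success_probability(1, 8, 1, 2)            # (1+8)/(1+8+1+2) = 0.75
e_under = expected_gain(p, g=20, c=1)          # hypothetical G and C
e_over = expected_gain(0.5, g=20, c=1)
print(choose({"undershoot": e_under, "overshoot": e_over}))
```

As experienced successes and failures accumulate, they dominate the priors and the choice probabilities track the environment's payoffs.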
Building Sticks Task (Lovett)
INITIAL STATE: a desired stick length, a current stick, and building sticks
a, b, and c.

Possible first moves:
- UNDERSHOOT: start with a stick shorter than the desired length and add.
- OVERSHOOT: start with a stick longer than the desired length and subtract.

Training conditions (number of problems; extreme-biased counts in parentheses):

                   Undershoot More Successful      Overshoot More Successful
Looks Undershoot   10 Undershoot / 0 Overshoot     10 (5) Undershoot / 10 (15) Overshoot
Looks Overshoot    10 (15) Undershoot / 10 (5) Overshoot   0 Undershoot / 10 Overshoot
Proportion Choice of More Successful Operator
(Lovett & Anderson, 1996)

[Figure: Observed Data: proportion choosing the more successful operator in
the Biased (2/3) and Extreme-Biased (5/6) conditions, as a function of test
problem bias: High Against, Low Against, Neutral, Low Toward, High Toward.]

[Figure: Predictions of Decay-Based ACT-R: the corresponding predicted
proportions for both conditions.]
Building Sticks Demo
Decide-Under
If the goal is to solve the BST task
and the undershoot difference is less
than the overshoot difference
Then choose undershoot.
Decide-Over
If the goal is to solve the BST task
and the overshoot difference is less
than the undershoot difference
Then choose overshoot.
Force-Under
If the goal is to solve the BST task
Then choose undershoot.
Force-Over
If the goal is to solve the BST task
Then choose overshoot.
Web Address:
ACT-R Home Page
Published ACT-R Models
Atomic Components of Thought
Chapter 4
Building Sticks Model
ACT-R model probabilities before and after
problem-solving experience in Experiment 3
(Lovett & Anderson, 1996)

Production                                         Prior        Final Value
                                                   Probability  67%        83%
                                                   of Success   Condition  Condition
Force-Under  (More Successful, Context Free)       .50          .60        .71
Force-Over   (Less Successful, Context Free)       .50          .38        .27
Decide-Under (More Successful, Context Sensitive)  .96          .98        .98
Decide-Over  (Less Successful, Context Sensitive)  .96          .63        .54
Decay of Experience
Note: Such temporal weighting is critical in the real world.
Production Compilation: The Basic Idea
(p read-stimulus
=goal>
isa goal
step attending
state test
=visual>
isa text
value =val
==>
+retrieval>
isa goal
relation associate
arg1 =val
arg2 =ans
=goal>
relation associate
arg1 =val
step testing)
(p recall
=goal>
isa goal
relation associate
arg1 =val
step testing
=retrieval>
isa goal
relation associate
arg1 =val
arg2 =ans
==>
+manual>
isa press-key
key =ans
=goal>
step waiting)
(p recall-vanilla
=goal>
isa goal
step attending
state test
=visual>
isa text
value "vanilla"
==>
+manual>
isa press-key
key "7"
=goal>
relation associate
arg1 "vanilla"
step waiting)
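The effect of compilation can be sketched outside ACT-R as collapsing two rules that communicate through a retrieval into one specialized rule with the retrieved value built in (illustrative Python, not the ACT-R mechanism):

```python
# Parent rule 1: request the associate of the stimulus from memory
def read_stimulus(memory, stimulus):
    return memory.get(("associate", stimulus))   # retrieval request

# Parent rule 2: act on the harvested retrieval
def recall(answer):
    return ("press-key", answer)

def compile_productions(memory, stimulus):
    """Builds the specialized rule: the retrieval is proceduralized out
    and its result becomes a constant in the new rule."""
    answer = memory[("associate", stimulus)]
    def compiled():
        return ("press-key", answer)             # no retrieval step remains
    return compiled

memory = {("associate", "vanilla"): "7"}
recall_vanilla = compile_productions(memory, "vanilla")
print(recall_vanilla())  # same behavior as read_stimulus + recall, one step
```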
Production Compilation: The Principles
1. Perceptual-Motor Buffers: Avoid compositions that will result in
jamming when one tries to build two operations on the same buffer
into the same production.
2. Retrieval Buffer: Except for failure tests proceduralize out and
build more specific productions.
3. Goal Buffers: Complex Rules describing merging.
4. Safe Productions: Production will not produce any result that the
original productions did not produce.
5. Parameter Setting:
Successes = P*initial-experience*
Failures = (1-P) *initial-experience*
Efforts = (Successes + Failures)*(C + *cost-penalty*)
Production Compilation: The Successes
1. Taatgen: Learning of inflection (English past and German plural).
Shows that production compilation can come up with generalizations.
2. Taatgen: Learning of air-traffic control task – shows that production compilation
can deal with complex perceptual motor skill.
3. Anderson: Learning of productions for performing paired associate task from
instructions. Solves mystery of where the productions for doing an experiment
come from.
4. Anderson: Learning to perform an anti-air warfare coordinator task from
instructions. Shows the same as 2 & 3.
5. Anderson: Learning in the fan effect that produces the interaction between fan
and practice. Justifies a major simplification in the parameterization of
productions – no strength separate from utility.
Note that all of these examples involve all forms of learning in ACT-R occurring
simultaneously – acquiring new chunks, acquiring new productions, activation
learning, and utility learning.
Predicting fMRI Bold Response from
Buffer Activity
Example: Retrieval buffer during equation-solving predicts activity
in left dorsolateral prefrontal cortex.
   BR(t) = Σ_i 0.344·D_i·(t − t_i)²·e^(−(t − t_i)/2)

where D_i is the duration of the i-th retrieval and t_i is the time of
initiation of the retrieval.
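A Python sketch of the predicted BOLD curve, with hypothetical retrieval events (the onsets and durations below are made up for illustration):

```python
import math

def bold_response(t, events, scale=0.344):
    """Sketch of BR(t) = sum_i 0.344 * D_i * (t - t_i)^2 * exp(-(t - t_i)/2),
    where D_i is the duration of the i-th retrieval and t_i its onset."""
    total = 0.0
    for onset, duration in events:
        if t > onset:                       # a retrieval contributes only after onset
            dt = t - onset
            total += scale * duration * dt ** 2 * math.exp(-dt / 2)
    return total

# Hypothetical retrievals: a short one at 3.2 s and a long one at 4.4 s
events = [(3.2, 0.08), (4.4, 0.6)]
curve = [round(bold_response(s * 1.5, events), 4) for s in range(1, 15)]
print(curve)  # rises after the retrievals, then decays over the trial
```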
21 Second Structure of fMRI Trial
Load (a=18, b=6, c=5) → Equation (cx+3=a) → Blank Period,
scanned throughout in 1.5-second scans.
Solving 5 x + 3 = 18
Time
Time 3.000: Find-Right-Term Selected
Time 3.050: Find-Right-Term Fired
Time 3.050: Module :VISION running command FIND-LOCATION
Time 3.050: Attend-Next-Term-Equation Selected
Time 3.100: Attend-Next-Term-Equation Fired
Time 3.100: Module :VISION running command MOVE-ATTENTION
Time 3.150: Module :VISION running command FOCUS-ON
Time 3.150: Encode Selected
Time 3.200: Encode Fired
Time 3.281: 18 Retrieved
Time 3.281: Process-Value-Integer Selected
Time 3.331: Process-Value-Integer Fired
Time 3.331: Module :VISION running command FIND-LOCATION
Time 3.331: Attend-Next-Term-Equation Selected
Time 3.381: Attend-Next-Term-Equation Fired
Time 3.381: Module :VISION running command MOVE-ATTENTION
Time 3.431: Module :VISION running command FOCUS-ON
Time 3.431: Encode Selected
Time 3.481: Encode Fired
Time 3.562: 3 Retrieved
Time 3.562: Process-Op1-Integer Selected
Time 3.612: Process-Op1-Integer Fired
Time 3.612: Module :VISION running command FIND-LOCATION
Time 3.612: Attend-Next-Term-Equation Selected
Time 3.662: Attend-Next-Term-Equation Fired
Time 3.662: Module :VISION running command MOVE-ATTENTION
Time 3.712: Module :VISION running command FOCUS-ON
Time 3.712: Encode Selected
Time 3.762: Encode Fired
Time 4.362: Inverse-of-+ Retrieved
Time 4.362: Process-Operator Selected
Solving 5 x + 3 = 18 (cont.)
Time 4.412: Process-Operator Fired
Time 5.012: F318 Retrieved
Time 5.012: Finish-Operation1 Selected
Time 5.062: Finish-Operation1 Fired
Time 5.062: Module :VISION running command FIND-LOCATION
Time 5.062: Attend-Next-Term-Equation Selected
Time 5.112: Attend-Next-Term-Equation Fired
Time 5.112: Module :VISION running command MOVE-ATTENTION
Time 5.162: Module :VISION running command FOCUS-ON
Time 5.162: Encode Selected
Time 5.212: Encode Fired
Time 5.293: 5 Retrieved
Time 5.293: Process-Op2-Integer Selected
Time 5.343: Process-Op2-Integer Fired
Time 5.943: F315 Retrieved
Time 5.943: Finish-Operation2 Selected
Time 5.993: Finish-Operation2 Fired
Time 5.993: Retrieve-Key Selected
Time 6.043: Retrieve-Key Fired
Time 6.124: 3 Retrieved
Time 6.124: Generate-Answer Selected
Time 6.174: Generate-Answer Fired
Time 6.174: Module :MOTOR running command PRESS-KEY
Time 6.424: Module :MOTOR running command PREPARATION-COMPLETE
Time 6.574: Device running command OUTPUT-KEY
("3" 3.574)
Left Dorsolateral Prefrontal Cortex

[Figure: BOLD Response for 2 Equation Types: percent activation change in
left dorsolateral prefrontal cortex over scans 1–14 (1.5 sec. each), for
5x + 3 = 18 and cx + 3 = a.]