Transcript Slide 1

Overview of ACT-R
Adaptive Control of Thought
ACT-R: What is it?

- It is a cognitive architecture: a theory for simulating and understanding human cognition.
- Like a programming language, it is a framework for creating models.
- The models embody ACT-R's view of cognition together with the assumptions of the researchers.
- The assumptions are tested by comparing the results of the model with the results of people performing the same task.
- The comparison is made with respect to the following measures:
  - Time to perform the task
  - Accuracy on the task
  - Neurological data obtained from fMRI
[Diagram: General assumptions about human cognition (drawn from a subset of psychology experiments) combine with assumptions about a particular domain to form an ACT-R model. The ACT-R model and a human subject run the same experiment; the model's predictions and the subject's quantitative measures (latency, accuracy, fMRI data) are compared: do they match?]
ACT-R 5.0 Architecture

It consists of:

- A set of modules
  - Perceptual module
  - Motor module
  - Imaginal module
  - Declarative module
  - Goal module
- A central production system
- Buffers
ACT-R 5.0 Architecture
Different Modules and Buffers

The ACT-R architecture consists of components called modules. ACT-R accesses its modules through buffers: for each module, a dedicated buffer serves as the interface with that module.

- Vision Module and Buffers (associated with the occipital and parietal lobes of the brain)
  - Visual Object Module and Buffer: used to identify objects in the visual field
  - Visual Location Module and Buffer: used to identify the location of objects in the visual field
- Aural Module and Buffers
  - Audio Object Module and Buffer: used to identify objects in the auditory field
  - Auditory Location Module and Buffer: used to identify the location of objects in the auditory field
Different Modules and Buffers

- Manual Module and Buffers
  - Responsible for controlling and monitoring hand movements
  - Associated with the motor and somatosensory cortical areas of the brain
- Imaginal Module and Buffers
  - Responsible for holding the model's internal representation of information
  - Associated with the parietal region of the brain
- Declarative Module and Buffers
  - Used to retrieve information from long-term memory
  - Associated with the hippocampus and the ventrolateral prefrontal cortex (VLPFC)
- Goal Module and Buffers
  - Keeps track of the partial results as well as the final result of a task
  - Associated with the dorsolateral prefrontal cortex (DLPFC), the bilateral parietal region, and the pre-motor cortex
Symbolic Components of ACT-R
ACT-R Production System

It uses two types of knowledge representation:

- Declarative Knowledge: represented as structures called chunks, defined by types and slots.

Example:

Action023
   isa      chase
   agent    dog
   object   cat

- Procedural Knowledge: represented by rules called productions, each a condition-action pair. The condition (the LHS) specifies the pattern of chunks that must be present in the buffers for the production to apply. The action (the RHS) specifies the actions to be taken when the production fires.

Example:

(P counting-example
   =goal>
      isa      count
      state    incrementing
      number   =num1
   =retrieval>
      isa      count-order
      first    =num1
      second   =num2
==>
   =goal>
      number   =num2
   +retrieval>
      isa      count-order
      first    =num2)
ACT-R Production System

A production fires when the current contents of the buffers match its condition. Firing a production involves the following stages:

- Matching: the available productions are matched against the current contents of the buffers. Associated with the striatum portion of the basal ganglia.
- Selection: among the productions that match, one is selected. One of the major bottlenecks of ACT-R is that only one production is selected to fire in each cycle. Associated with the pallidum portion of the basal ganglia.
- Execution: the selected production is executed. Associated with the thalamus portion of the basal ganglia.
Sample Model - Count

(clear-all)

(define-model count

(sgp :esc t :lf .05 :trace-detail high)

(chunk-type count-order first second)
(chunk-type count-from start end count)

(add-dm
   (b ISA count-order first 1 second 2)
   (c ISA count-order first 2 second 3)
   (d ISA count-order first 3 second 4)
   (e ISA count-order first 4 second 5)
   (f ISA count-order first 5 second 6)
   (first-goal ISA count-from start 2 end 4)
)

(p start
   =goal>
      ISA     count-from
      start   =num1
      count   nil
==>
   =goal>
      count   =num1
   +retrieval>
      ISA     count-order
      first   =num1
)
Sample Model - Count

(P increment
   =goal>
      ISA     count-from
      count   =num1
    - end     =num1
   =retrieval>
      ISA     count-order
      first   =num1
      second  =num2
==>
   =goal>
      count   =num2
   +retrieval>
      ISA     count-order
      first   =num2
   !output!   (=num1)
)

(P stop
   =goal>
      ISA     count-from
      count   =num
      end     =num
==>
   -goal>
   !output!   (=num)
)

(goal-focus first-goal)
)
ACT-R General Parameters

- The sgp function is used to set ACT-R's general parameters.
- These parameters specify how the system operates and do not affect the model's performance of the task.

Examples:

(sgp :esc t)

- To use the sub-symbolic components of ACT-R, this parameter is set to t.

(sgp :v t :needs-mouse nil :show-focus t :trace-detail high)

- If :v is t (the default), the trace is displayed; if :v is nil, the trace is not printed.
- The :needs-mouse parameter specifies whether the model needs to use the mouse cursor.
- The :show-focus parameter controls whether the red visual-attention ring is displayed in the window while the model performs the task.
- The :trace-detail parameter specifies the amount of trace that is printed to the screen.
ACT-R Production Parameters

- The spp function is used to set production parameters.
- Each production has a quantity associated with it called its utility. The default utility of a production is 0.
- The production utilities determine which production gets selected during the conflict-resolution process when more than one production matches the current buffer state.
- The utility of a production is set with the :u parameter.

Examples:

(spp start-report :u -2)
(spp end-report :u 10)
Perceiving the World

These are the mechanisms through which ACT-R perceives the outside world. The following set of modules provides a model with visual, motor, auditory, and vocal capabilities, as well as mechanisms for interfacing those modules to the world.

Visual Perception

- Vision Module: interacts with visible stimuli and provides a model with a means of acquiring information.
- Visual-Location Buffer: holds a chunk representing the location of an object on the screen.
- Visual Buffer: holds a chunk representing the object at the location specified by the above buffer.
Perceiving the World

Aural Perception

- Unlike visual perception, there is no support for hearing "real" sounds generated by the computer; all sounds are simulated using commands.
- Audio Module: provides a model with rudimentary audio-perception abilities and is very similar to the vision module.
- Audio-Location Buffer
- Audio Buffer

Motor Actions

- Motor Module: the actions of this module are limited to finger presses on a keyboard.
- The mouse has one button and is controlled by the model's right hand when used. The default devices accept input from the model's virtual keyboard and mouse and pass it along as if it came from a real user.
- The buffer for interacting with this module is called the manual buffer. It never holds a chunk; it is used only to issue commands and to query the state of the motor module.
Attention

- In ACT-R we are mainly concerned with visual attention.
- A model pays attention in the following two steps:
  - Requesting a location from the visual-location buffer
  - Moving attention to the element at that location and placing the element in the visual buffer
- The display is assumed to be two-dimensional.
- As in the visual case, there is an aural-location buffer to hold the location of an aural message and an aural buffer to hold the sound that is attended.
Trace of demo-2
0.000 GOAL SET-BUFFER-CHUNK GOAL GOAL REQUESTED NIL
0.000 VISION SET-BUFFER-CHUNK VISUAL-LOCATION VISUAL-LOCATION0-0 REQUESTED NIL
0.000 PROCEDURAL CONFLICT-RESOLUTION
0.000 PROCEDURAL PRODUCTION-SELECTED FIND-UNATTENDED-LETTER
0.000 PROCEDURAL BUFFER-READ-ACTION GOAL
0.050 PROCEDURAL PRODUCTION-FIRED FIND-UNATTENDED-LETTER
0.050 PROCEDURAL MOD-BUFFER-CHUNK GOAL
0.050 PROCEDURAL MODULE-REQUEST VISUAL-LOCATION
0.050 PROCEDURAL CLEAR-BUFFER VISUAL-LOCATION
0.050 VISION Find-location
0.050 VISION SET-BUFFER-CHUNK VISUAL-LOCATION VISUAL-LOCATION0-0
0.050 PROCEDURAL CONFLICT-RESOLUTION
0.050 PROCEDURAL PRODUCTION-SELECTED ATTEND-LETTER
0.050 PROCEDURAL BUFFER-READ-ACTION GOAL
0.050 PROCEDURAL BUFFER-READ-ACTION VISUAL-LOCATION
0.050 PROCEDURAL QUERY-BUFFER-ACTION VISUAL
0.100 PROCEDURAL PRODUCTION-FIRED ATTEND-LETTER
0.100 PROCEDURAL MOD-BUFFER-CHUNK GOAL
0.100 PROCEDURAL MODULE-REQUEST VISUAL
0.100 PROCEDURAL CLEAR-BUFFER VISUAL-LOCATION
0.100 PROCEDURAL CLEAR-BUFFER VISUAL
0.100 VISION Move-attention VISUAL-LOCATION0-0-1 NIL
0.100 PROCEDURAL CONFLICT-RESOLUTION
0.185 VISION Encoding-complete VISUAL-LOCATION0-0-1 NIL
0.185 VISION SET-BUFFER-CHUNK VISUAL TEXT0
0.185 PROCEDURAL CONFLICT-RESOLUTION
0.185 PROCEDURAL PRODUCTION-SELECTED ENCODE-LETTER
0.185 PROCEDURAL BUFFER-READ-ACTION GOAL
0.185 PROCEDURAL BUFFER-READ-ACTION VISUAL
0.235 PROCEDURAL PRODUCTION-FIRED ENCODE-LETTER
0.235 PROCEDURAL MOD-BUFFER-CHUNK GOAL
0.235 PROCEDURAL MODULE-REQUEST IMAGINAL
0.235 PROCEDURAL CLEAR-BUFFER VISUAL
0.235 PROCEDURAL CLEAR-BUFFER IMAGINAL
0.235 PROCEDURAL CONFLICT-RESOLUTION
0.435 IMAGINAL CREATE-NEW-BUFFER-CHUNK IMAGINAL ISA ARRAY
0.435 IMAGINAL SET-BUFFER-CHUNK IMAGINAL ARRAY0
0.435 PROCEDURAL CONFLICT-RESOLUTION
0.435 PROCEDURAL PRODUCTION-SELECTED RESPOND
0.435 PROCEDURAL BUFFER-READ-ACTION GOAL
0.435 PROCEDURAL BUFFER-READ-ACTION IMAGINAL
0.435 PROCEDURAL QUERY-BUFFER-ACTION MANUAL
0.485 PROCEDURAL PRODUCTION-FIRED RESPOND
0.485 PROCEDURAL MOD-BUFFER-CHUNK GOAL
0.485 PROCEDURAL MODULE-REQUEST MANUAL
0.485 PROCEDURAL CLEAR-BUFFER IMAGINAL
0.485 PROCEDURAL CLEAR-BUFFER MANUAL
0.485 MOTOR PRESS-KEY v
0.485 PROCEDURAL CONFLICT-RESOLUTION
0.735 MOTOR PREPARATION-COMPLETE
0.735 PROCEDURAL CONFLICT-RESOLUTION
0.785 MOTOR INITIATION-COMPLETE
0.785 PROCEDURAL CONFLICT-RESOLUTION
0.885 MOTOR OUTPUT-KEY #(4 5)
0.885 PROCEDURAL CONFLICT-RESOLUTION
0.970 VISION Encoding-complete VISUAL-LOCATION0-0-1 NIL
0.970 VISION No visual-object found
0.970 PROCEDURAL CONFLICT-RESOLUTION
1.035 MOTOR FINISH-MOVEMENT
1.035 PROCEDURAL CONFLICT-RESOLUTION
1.035 ------ Stopped because no events left to process
Request Parameters

- A request parameter acts like a slot of a chunk in the request.
- Request parameters supply general information to the module about the request which it may not be desirable to have in the resulting chunk that is placed into the buffer.
- A request parameter is specific to a particular buffer and always starts with a ":", which distinguishes it from an actual slot of the chunk-type.

Example:

- For a visual-location request, one can use the :attended request parameter to specify whether the vision module returns the location of an object the model has previously looked at (attended to) or not. If it is specified as nil, the request is for a location the model has not attended; if it is specified as t, the request is for a location that has been attended previously. There is also a third option, new, which means not only that the model has not attended to the location, but also that the object has recently appeared in the visual scene.
Finsts

- There is a limit to the number of objects that can be marked as :attended t.
- The number of finsts and the length of time they persist can be set with the :visual-num-finsts and :visual-finst-span parameters respectively.
- The default number of finsts is 4 and the default time span is 3 seconds. If 4 items are marked :attended t and the model wants to move attention to a fifth item, the oldest item is re-marked :attended nil and the fifth item is marked :attended t.
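The finst bookkeeping described above can be sketched in Python. This is a toy illustration only; the `FinstSet` class and its methods are invented names, not part of ACT-R:

```python
class FinstSet:
    """Toy sketch of ACT-R's visual finsts: at most `num` items can be
    marked :attended t, and each marker expires after `span` seconds."""

    def __init__(self, num=4, span=3.0):
        self.num, self.span = num, span
        self.marks = {}  # item -> time at which it was attended

    def attend(self, item, now):
        # Expired markers revert to :attended nil.
        self.marks = {i: t for i, t in self.marks.items()
                      if now - t < self.span}
        if len(self.marks) >= self.num:
            # All finsts in use: the oldest item becomes :attended nil.
            oldest = min(self.marks, key=self.marks.get)
            del self.marks[oldest]
        self.marks[item] = now

    def attended(self, item, now):
        return item in self.marks and now - self.marks[item] < self.span

f = FinstSet()
for k, obj in enumerate(["a", "b", "c", "d", "e"]):
    f.attend(obj, now=0.1 * k)
print(f.attended("a", now=0.5))  # oldest mark displaced by the fifth item
print(f.attended("e", now=0.5))
```

Attending a fifth item displaces the oldest finst, so "a" is no longer marked attended while "e" is.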
Sub-symbolic Components of ACT-R
Activation

- Every chunk in ACT-R's declarative memory has a numeric value called its activation.
- The activation reflects the degree to which past experience and the current context indicate that the chunk will be useful at a particular moment.
- A parameter called the retrieval threshold specifies the minimum activation a chunk must have to be retrieved:

(sgp :rt -0.5)

- The activation equation for the ith chunk is

Ai = Bi + ε
Base Level Activation

- Bi is called the base-level activation. It reflects the recency and frequency of practice of chunk i.
- For chunk i it is given by the equation

Bi = ln( Σj=1..n tj^(-d) )

- n is the number of presentations of chunk i
- tj is the time since the jth presentation
- d is the decay parameter, set using the :bll parameter
- Two types of events count as presentations of a chunk: the initial entry of the chunk into declarative memory, and a chunk being cleared from a buffer.
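The base-level equation can be computed directly. A minimal Python sketch (the function name and the example presentation times are invented for illustration):

```python
import math

def base_level(presentation_times, now, d=0.5):
    """Base-level activation Bi = ln( sum_j tj^(-d) ), where tj is the
    time since the j-th presentation and d is the decay (:bll) parameter."""
    return math.log(sum((now - tp) ** -d for tp in presentation_times))

# A chunk presented at t = 0 s and again at t = 5 s, evaluated at t = 10 s:
b = base_level([0.0, 5.0], now=10.0, d=0.5)
print(round(b, 3))
```

More recent and more frequent presentations raise Bi; as time passes without further presentations, Bi decays.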
Noise

Two sources:

- Permanent noise: associated with a chunk
- Instantaneous noise: recomputed at each retrieval attempt

Both noise values are generated from a logistic distribution controlled by a parameter s. The mean of the distribution is 0 and its variance σ² is related to s by the equation

σ² = π² s² / 3

- The permanent noise s is set using the :pas parameter and the instantaneous noise using the :ans parameter.
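The logistic noise and its variance relation can be checked numerically. A sketch (the `logistic_noise` helper is an invented name, and s = 0.25 is an arbitrary example value):

```python
import math
import random

def logistic_noise(s, rng=random.Random(0)):
    """Sample from a logistic distribution with mean 0 and scale s,
    whose variance is sigma^2 = pi^2 * s^2 / 3."""
    u = rng.random()
    return s * math.log(u / (1.0 - u))  # inverse-CDF of the logistic

s = 0.25
samples = [logistic_noise(s) for _ in range(200_000)]
var = sum(x * x for x in samples) / len(samples)
print(abs(var - math.pi ** 2 * s ** 2 / 3) < 0.01)
```

The empirical variance of the samples matches the σ² = π²s²/3 relation.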
Recall Probability

If a retrieval request is made and a matching chunk exists, it will be retrieved if its activation exceeds the retrieval threshold. If the amount of noise in the system is given by s, the recall probability is given by the equation

Recall probabilityi = 1 / (1 + e^((τ - Ai)/s))

- Ai is the activation of chunk i
- τ is the retrieval activation threshold
- s is the noise parameter
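The recall-probability equation in a short Python sketch (the function name and the threshold/noise values shown are illustrative assumptions):

```python
import math

def recall_probability(activation, threshold=-0.5, s=0.25):
    """P(recall) = 1 / (1 + e^((tau - A)/s)): the probability that the
    chunk's noisy activation exceeds the retrieval threshold tau (:rt)."""
    return 1.0 / (1.0 + math.exp((threshold - activation) / s))

print(round(recall_probability(-0.5), 2))  # A == tau: a 50% chance
print(recall_probability(1.0) > 0.99)      # well above threshold
```

When activation equals the threshold the chunk is retrieved half the time; activations well above the threshold are retrieved almost surely.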
Retrieval Latency

When a retrieval request is made, the time until the retrieved chunk is available in the retrieval buffer is given by the equation:

Time = F e^(-A)

The time it takes to signal a retrieval failure is given by the equation:

Time = F e^(-τ)

- F is the latency factor, set using the :lf parameter
- A is the activation of the chunk to be retrieved
- τ is the retrieval threshold
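Both latency equations in a small Python sketch (function names are invented; F = .05 follows the earlier sgp example, and the threshold value is illustrative):

```python
import math

def retrieval_time(activation, lf=0.05):
    """Time until a retrieved chunk is available: Time = F * e^(-A),
    with F the latency factor (:lf)."""
    return lf * math.exp(-activation)

def failure_time(threshold=-0.5, lf=0.05):
    """Time to signal a retrieval failure: Time = F * e^(-tau)."""
    return lf * math.exp(-threshold)

print(round(retrieval_time(0.0), 3))              # F seconds at A = 0
print(retrieval_time(2.0) < retrieval_time(0.0))  # higher activation, faster
```

Higher activation means faster retrieval; a failure takes the time corresponding to a chunk sitting exactly at the threshold.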
Spread of Activation

- The chunks in the buffers provide a context in which to perform the retrieval.
- Those chunks spread activation to the chunks in declarative memory based on the contents of their slots and their relation to the other chunks.
- The activation equation becomes

Ai = Bi + Σk Σj Wkj Sji + ε

- Activation spread is computed only if the :mas parameter is set.
Spread of Activation

- Bi: the base-level activation, a measure of prior learning reflecting the recency and frequency of use of the chunk
- k: the summation over k runs across all buffers
- j: the summation over j runs across the sources of activation, i.e. the chunks in the slots of the chunk in buffer k
- Wkj: the amount of activation from source j in buffer k
- Sji: the strength of association from source j to chunk i
- ε: the noise
Spread of Activation

[Diagram: the chunk in the goal buffer has two slots, Slot1 and Slot2, with source weights W1 and W2. The source chunks in those slots spread activation through association strengths S1 and S2 to Chunk1, Chunk2, and Chunk3 in declarative memory.]
Spread of Activation

- The weights Wkj of the activation spread default to an even distribution over the slots in a buffer. The total amount of source activation for a buffer is called Wk and is settable for each buffer. The Wkj values are then set to Wk / nk, where nk is the number of chunks in the slots of the chunk in buffer k.
- The strength of association Sji between two chunks j and i is 0 if chunk j is not in a slot of chunk i and j and i are not the same chunk. Otherwise it is set using this equation:

Sji = S - ln(fanj)

- S: the maximum associative strength (set with the :mas parameter)
- fanj: the number of chunks in declarative memory in which j is the value of a slot, plus one for chunk j being associated with itself
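The Sji equation and the fan effect can be sketched in Python. The declarative memory here is a hypothetical toy (chunk names, slot values, and S = 2.0 are all invented for illustration):

```python
import math

def association_strength(j, i, dm, S=2.0):
    """Sji = S - ln(fan_j) when j is in a slot of chunk i (or j == i),
    else 0. fan_j = number of chunks with j in a slot, plus one for
    chunk j being associated with itself."""
    if j != i and j not in dm[i]:
        return 0.0
    fan = 1 + sum(1 for slots in dm.values() if j in slots)
    return S - math.log(fan)

# Hypothetical declarative memory: chunk name -> slot values.
dm = {"fact1": ["dog", "cat"], "fact2": ["dog", "bone"]}
print(association_strength("cat", "fact1", dm) >
      association_strength("dog", "fact1", dm))  # "dog" has higher fan
```

Because "dog" appears in more chunks than "cat", its fan is larger and it spreads less activation to any single chunk: the fan effect.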
Utilities and Selecting Productions

- Each production has a utility associated with it. Utilities allow one production to be preferred over another in the conflict-resolution process.
- If a number of productions compete with expected utility values Uj, the probability of choosing production i is described by the formula

Probability(i) = e^(Ui/(√2·s)) / Σj e^(Uj/(√2·s))

- Utility is set using (spp <production-name> :u <utility-value>)
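The choice formula is a softmax over utilities. A Python sketch (the function name and the noise value s = 0.5 are illustrative assumptions; the utilities reuse the earlier spp example values):

```python
import math

def choice_probabilities(utilities, s=0.5):
    """P(i) = e^(Ui/(sqrt(2)*s)) / sum_j e^(Uj/(sqrt(2)*s)):
    a softmax over production utilities with utility noise s."""
    t = math.sqrt(2) * s
    exps = [math.exp(u / t) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

probs = choice_probabilities([10, -2])  # e.g. end-report vs start-report
print(probs[0] > 0.99)                  # the higher-utility production wins
print(abs(sum(probs) - 1.0) < 1e-9)
```

With low noise the highest-utility matching production is chosen almost deterministically; raising s makes selection more random.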
Utility Learning

The utilities of productions can also be learned as the model runs, based on the rewards the model receives. When utility learning is enabled, utilities are updated according to a simple integrator model. If Ui(n-1) is the utility of production i after its (n-1)th application and Ri(n) is the reward the production receives on its nth application, then its utility Ui(n) after its nth application will be

Ui(n) = Ui(n-1) + a [Ri(n) - Ui(n-1)]

where a is the learning rate and is typically set at 0.2.
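The integrator update in a few lines of Python (the function name and the reward value 10 are invented for illustration):

```python
def update_utility(u_prev, reward, alpha=0.2):
    """Ui(n) = Ui(n-1) + a * (Ri(n) - Ui(n-1)): the integrator update."""
    return u_prev + alpha * (reward - u_prev)

u = 0.0             # the default utility
for _ in range(5):  # the same reward of 10 received five times
    u = update_utility(u, reward=10.0)
print(round(u, 3))  # utility climbs toward the reward
```

Each application moves the utility a fraction a of the way toward the reward, so repeated rewards make the utility converge on the reward value.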
Production Rule Learning

- A model can acquire new production rules by collapsing two production rules that apply in succession into a single rule. This process of forming a new production is called production compilation.
- The basic idea is to combine the tests in the two conditions into a single set of tests that recognizes when the pair of productions would apply, and to combine the two sets of actions into a single set of actions that has the same overall effect.
Production Rule Learning - Example

(p read-probe
   =goal>
      isa      goal
      state    attending-probe
   =visual>
      isa      text
      value    =val
==>
   +imaginal>
      isa      pair
      probe    =val
   +retrieval>
      isa      pair
      probe    =val
   =goal>
      state    testing
)

(p recall
   =goal>
      isa      goal
      state    testing
   =retrieval>
      isa      pair
      answer   =ans
   ?manual>
      state    free
==>
   +manual>
      isa      press-key
      key      =ans
   =goal>
      state    read-study-item
   +visual>
      isa      clear
)
Production Rule Learning - Example

The learnt (compiled) production:

(P PRODUCTION0
   "READ-PROBE & RECALL - PAIR0-0"
   =GOAL>
      ISA      GOAL
      STATE    ATTENDING-PROBE
   =VISUAL>
      ISA      TEXT
      VALUE    "zinc"
   ?MANUAL>
      STATE    FREE
==>
   =GOAL>
      STATE    READ-STUDY-ITEM
   +VISUAL>
      ISA      CLEAR
   +MANUAL>
      ISA      PRESS-KEY
      KEY      "9"
   +IMAGINAL>
      ISA      PAIR
      PROBE    "zinc")
fMRI

- The ACT-R research group uses functional Magnetic Resonance Imaging (fMRI) to confirm and improve cognitive models.
- A typical brain imaging study finds activation in a large number of brain regions.
- The spatial resolution of fMRI is sufficient to allow identification of regions that have functional significance in the interpretation of complex cognition.
- There are methods that can be used to place such brain imaging data rigorously within the interpretive framework of an information-processing theory such as ACT-R.
Applications of ACT-R

- Used in modeling human-computer interaction to produce user models that can assess different computer interfaces
- Used in cognitive tutoring systems to provide focused help
- Used in computer-generated forces to provide cognitive agents that inhabit training environments
- Used by natural-language researchers to model several aspects of natural-language understanding and production
- Used to predict patterns of brain activation during imaging experiments
Conclusion

- Cognitive architectures should not be interpreted as the complete truth about the nature of the mind. ACT-R is very much a work in progress, but it ultimately points the way toward a better conception of the mind.
- The ACT-R architecture in the given figure is incomplete: it is missing certain modules, and the proposal that all neural processing passes through the basal ganglia is not correct.
- ACT-R starts from the observation that the mind consists of many independent modules doing their own work in parallel. The buffers hold the current state of mind, and a central production system can detect patterns in the buffers and take coordinated actions. This characterization of the integration of cognition gives some fundamental insight into the workings of the mind.
Thank You !