Syllabus P140C (68530) Cognitive Science

What is Cognitive Science?
… is the interdisciplinary study of mind and
intelligence, embracing philosophy, psychology,
artificial intelligence, neuroscience, linguistics,
and anthropology
(Stanford Encyclopedia of Philosophy)
http://plato.stanford.edu/entries/cognitive-science/
Most cognitive scientists are cognitive psychologists or
computer scientists…
(from: Schunn et al. 2005)
[Diagram: Cognitive Science at the center, the interdisciplinary study of intelligent behavior, connected to Computer Science/Artificial Intelligence (understanding computation), Neuroscience (to understand how the brain works), Philosophy (to understand the limits of theories), Linguistics (to understand the structure of language), and Experimental Cognitive Psychology (for human data in various tasks)]
We will focus mostly on insights from Cognitive Psychology
Areas of Study
• Cognition is about internal processes that are often
unobservable, e.g.:
Perception, Attention, Memory, Visual Imagery,
Language, Concept Learning, Reasoning
• Need converging evidence from different perspectives to
really understand cognitive processes
Information Processing
• Information-processing models resemble processing in computers – this resemblance helped make cognitive psychology popular
• Information made available by the environment is
processed by a series of processing systems
• Processing systems transform or alter the information in
various systematic ways
• The major goal of research is to specify these processes
and structures
Types of Processing
• Bottom-up processing
• Top-down processing
• Parallel processing
• Cascade processing
An early version of the information-processing approach → purely bottom-up or stimulus-driven
A Demonstration of Top-Down Processing
Top-down processing: later stages of processing affect earlier stages
→ can explain effects of knowledge, memory, expectations and context
Top-down processing: perception affected by knowledge of world
Why do we seem to have a fairly robust interpretation of which shapes
are concave and convex when the perceptual information is perfectly
ambiguous? -> perception affected by knowledge
(Kleffner & Ramachandran, ’92)
Top down processing: perception affected by memory
• The first time it is heard, sine-wave speech sounds incomprehensible (to most listeners)
http://psiexp.ss.uci.edu/research/teachingP140C/demos/sinewavespeech.aif
• After hearing the natural utterance, perception of the sine-wave speech seems quite different
"The steady drip is worse than a drenching rain."
http://psiexp.ss.uci.edu/research/teachingP140C/demos/naturalutterance.aif
(for more info: http://www.haskins.yale.edu/haskins/MISC/SWS/SWS.html)
Sound Induced Illusory Flashes
• Example of parallel and interactive processing:
perception of visual event affected by perception
of auditory events
• http://www.cns.atr.jp/~kmtn/soundInducedIllusoryFlash2/
McGurk Effect
Perception of auditory event affected by visual processing
AVI: http://psiexp.ss.uci.edu/research/teachingP140C/demos/McGurk_large.avi
MOV: http://psiexp.ss.uci.edu/research/teachingP140C/demos/McGurk_large.mov
Harry McGurk and John MacDonald in "Hearing lips and seeing voices", Nature 264, 746-748 (1976).
McGurk Effect
• Demonstrates parallel & interactive processing: speech
perception is based on multiple sources of information,
e.g. lip movements, auditory information.
• McGurk effect in video:
– lip movements = “ga”
– speech sound = “ba”
– speech perception = “da” (for 98% of adults)
• Brain makes reasonable assumption that both sources
are informative and “fuses” the information.
Four Main Approaches
• Experimental cognitive psychology
• Cognitive neuropsychology
• Computational cognitive science
• Cognitive neuroscience
COMPUTATIONAL COGNITIVE SCIENCE
Computer Models
• Computational modeling
– Programming computers to model or mimic some
aspects of human cognitive functioning. Modeling
natural intelligence.
• Artificial intelligence
– Constructing computer systems that produce intelligent outcomes
→ Simulations of behavior
Why do we need computational models?
• Provides the precision needed to specify complex theories; makes vague verbal terms specific
• Provides explanations
• Can lead to predictions
– just as meteorologists use computer models to predict
tomorrow’s weather, the goal of modeling human
behavior is to predict performance in novel settings
Production Systems
Connectionist Networks
• Also known as:
– PDP: parallel distributed processing approach
– Artificial Neural Networks
• Alternative to traditional information processing
models
• Connectionist models are networks of simple
processors that operate simultaneously
• Some biological plausibility
[Diagram: an idealized neuron (unit) – a set of inputs feeds a processor that sums them (Σ) and produces an output; an abstract, simplified description of a neuron]
Connectionist Networks
• Inspired by real neurons and brain organization
but are highly idealized
• Can spontaneously generalize beyond
information explicitly given to network
• Can retrieve information even when the network is damaged (graceful degradation)
• Networks can be taught: learning is possible by
changing weighted connections between nodes
• Diagram showing how the inputs from a number of units are combined to determine the overall input to unit i. Unit i has a threshold of 1, so if its net input exceeds 1 it will respond with +1, but if the net input is less than 1 it will respond with –1.
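To make this concrete, here is a minimal sketch of such a threshold unit in Python (the specific inputs and weights below are illustrative assumptions, not values from the course):

    # Minimal sketch of a threshold unit (illustrative values, not from the slides).
    # The net input is the weighted sum of the inputs; the unit responds with +1 if
    # the net input exceeds the threshold of 1, and with -1 otherwise.
    def threshold_unit(inputs, weights, threshold=1.0):
        net_input = sum(x * w for x, w in zip(inputs, weights))
        return 1 if net_input > threshold else -1

    # Hypothetical example: three input units with weights 0.8, 0.3, 0.9
    print(threshold_unit([1, -1, 1], [0.8, 0.3, 0.9]))  # net input 1.4 > 1, so +1
    print(threshold_unit([1, 1, -1], [0.8, 0.3, 0.9]))  # net input 0.2 < 1, so -1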
Different ways to represent information with
connectionist networks: localist representation
concept 1: 1 0 0 0 0 0
concept 2: 0 0 0 1 0 0
concept 3: 0 1 0 0 0 0
(activations of units; 0=off 1=on)
Each unit represents just one item → “grandmother” cells
Coarse Coding/ Distributed Representations
concept 1: 1 1 1 0 0 0
concept 2: 1 0 1 1 0 1
concept 3: 0 1 0 1 0 1
(activations of units; 0=off 1=on)
Each unit is involved in the representation of multiple items
Advantage of Distributed Representations
• Efficiency
– Solves the combinatorial explosion problem: with n binary units, 2^n different representations are possible (e.g., consider how many English words can be built from combinations of the 26 letters of the alphabet) – see the numerical sketch after this list
• Damage resistance
– Even if some units do not work, information is still
preserved – because information is distributed across
a network, performance degrades gradually as
function of damage
– (aka: robustness, fault-tolerance, graceful
degradation)
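A quick numerical illustration of the efficiency point (my own example, using the 6-unit patterns shown on these slides): a localist scheme with 6 units can represent only 6 concepts, while a distributed scheme can in principle distinguish 2^6 = 64 activation patterns.

    # Capacity of n binary units (n = 6, as in the example patterns above)
    n = 6
    localist_capacity = n          # one dedicated unit per concept
    distributed_capacity = 2 ** n  # every on/off pattern is a potential representation
    print(localist_capacity, distributed_capacity)  # prints: 6 64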
Suppose we lost unit 5
concept 1: 1 1 1 0 0 0
concept 2: 1 0 1 1 0 1
concept 3: 0 1 0 1 0 1
(activations of units; 0=off 1=on)
Can the three concepts still be discriminated?
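A short sketch (my own check, not part of the slides) that answers this question programmatically: drop unit 5 from each pattern and test whether the remaining patterns are still all distinct.

    # Distributed patterns from the slide (unit activations, 0=off 1=on)
    concepts = {
        "concept 1": [1, 1, 1, 0, 0, 0],
        "concept 2": [1, 0, 1, 1, 0, 1],
        "concept 3": [0, 1, 0, 1, 0, 1],
    }

    # Simulate damage: remove unit 5 (the fifth unit, index 4) from every pattern
    damaged = {name: p[:4] + p[5:] for name, p in concepts.items()}

    # The concepts remain discriminable if the damaged patterns are still all distinct
    patterns = [tuple(p) for p in damaged.values()]
    print(len(set(patterns)) == len(patterns))  # True: the concepts can still be told apart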
Multi-layered Connectionist Networks
• Activation flows from a layer of input units through a set of hidden units to output units
• Weights determine how input patterns are mapped to output patterns
• Network can learn to associate output patterns with input patterns by adjusting weights
• Hidden units tend to develop internal representations of the input-output associations
• Backpropagation is a common weight-adjustment algorithm (see the sketch below)
[Diagram: a three-layer network with input units at the bottom, hidden units in the middle, and output units at the top]
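Below is a minimal sketch of such a network in Python/NumPy. The layer sizes, learning rate, and XOR-style training patterns are illustrative assumptions, not the course's model; the point is only to show activation flowing forward through hidden units and backpropagation adjusting the weights.

    import numpy as np

    # Illustrative three-layer network: 2 input units, 8 hidden units, 1 output unit
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=1.0, size=(2, 8))  # input -> hidden weights
    b1 = np.zeros(8)
    W2 = rng.normal(scale=1.0, size=(8, 1))  # hidden -> output weights
    b2 = np.zeros(1)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
    y = np.array([[0], [1], [1], [0]], dtype=float)              # target output patterns (XOR)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(10000):
        # Forward pass: activation flows input -> hidden -> output
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass (backpropagation): propagate the output error back through the layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Adjust weights and biases to reduce the error
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # outputs should approach the target pattern [0, 1, 1, 0]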
Example: NETtalk
A connectionist network learns to pronounce English words, i.e., it learns spelling-to-sound relationships. Listen to this audio demo.
[Diagram (after Hinton, 1989): the NETtalk architecture – 7 groups of 29 input units encode a 7-letter window of text (here “_a_cat_”, with “c” as the target letter), feeding 80 hidden units and 26 output units; a teacher signal supplies the target output for the target letter, the phoneme /k/]
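For concreteness, here is a small sketch of how a 7-letter window might be encoded over 7 groups of 29 input units. The exact 29-symbol alphabet is an assumption on my part (26 letters plus a few word-boundary/punctuation symbols); only the 7-groups-of-29 layout comes from the diagram.

    # Sketch: encode a 7-letter window as 7 groups of 29 one-hot input units
    ALPHABET = "abcdefghijklmnopqrstuvwxyz_.,"  # assumed 29 symbols: letters plus '_', '.', ','

    def encode_window(window):
        assert len(window) == 7
        units = []
        for ch in window:
            group = [0] * len(ALPHABET)      # one group of 29 units per letter position
            group[ALPHABET.index(ch)] = 1    # exactly one unit on per group
            units.extend(group)
        return units                         # 7 * 29 = 203 input units in total

    x = encode_window("_a_cat_")             # the target letter is the middle one, "c"
    print(len(x), sum(x))                    # prints: 203 7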
Other demos of neural networks
Hopfield network
http://www.cbu.edu/~pong/ai/hopfield/hopfieldapplet.html
Backpropagation algorithm and competitive learning:
http://www.psychology.mcmaster.ca/4i03/demos/demos.html
Competitive learning:
http://www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/DemoGNG/GNG.html
Various networks:
http://diwww.epfl.ch/mantra/tutorial/english/
Optical character recognition:
http://sund.de/netze/applets/BPN/bpn2/ochre.html
Brain-wave simulator
http://www.itee.uq.edu.au/%7Ecogs2010/cmc/home.html
Limitations of Modeling Approach
• Computational models are rarely used to make
new predictions
• Connectionist models do not resemble the
human brain
• Numerous models can generally be found to
“explain” the same set of findings
• Computational models often fail to capture the
scope of cognitive phenomena
• Computational cognitive science may fail to
deliver a general unified theory of cognition