The Visual Pathway


The Visual Pathway

The (classical) receptive field: The region of a sensory surface (retina, skin) that, when stimulated, changes the membrane potential (firing rate, activity) of a neuron.

Retinal ganglion cell receptive field structure: ON-center/OFF-surround.

LGN receptive field structure: ON-center/OFF-surround.

1

Retinal Processing and Output

Ganglion cells are the sole source of visual input to the rest of the brain.

They have center-surround receptive fields.

Receptive field structure of an OFF-center retinal ganglion cell: (a) light (uniform illumination) across its entire RF, (b) dark spot in the center of the RF, (c) dark spot across the entire RF.

2

Retinal Processing and Output

As a result of their antagonistic RF structure, ganglion cells respond mostly to differences in illumination across their receptive fields.

Example: An OFF-center ganglion cell

This means that ganglion cell output does not reflect the absolute “amount of light or dark” but rather spatial differences in illumination. Ganglion cells enhance or exaggerate contrast at borders (luminance boundaries).
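A minimal sketch (not from the lecture; all parameter values are illustrative) of how an antagonistic center-surround receptive field, modeled here as a difference of Gaussians, responds to a luminance step: almost nothing over uniform regions, strongly at the border.

```python
import numpy as np

def dog_kernel(size=21, sigma_center=1.0, sigma_surround=3.0):
    """ON-center/OFF-surround profile: a narrow excitatory Gaussian minus a wider inhibitory one."""
    x = np.arange(size) - size // 2
    center = np.exp(-x**2 / (2 * sigma_center**2))
    surround = np.exp(-x**2 / (2 * sigma_surround**2))
    center /= center.sum()
    surround /= surround.sum()
    return center - surround          # sums to ~0, so uniform illumination gives ~no response

# A 1-D "retina": a dark region (0.2) next to a bright region (0.8)
luminance = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])
response = np.convolve(luminance, dog_kernel(), mode='same')

# Responses are near zero inside the uniform regions and peak at the border,
# i.e. the output signals spatial differences (contrast), not absolute light level.
print(response[40:60].round(3))
```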

3

4

Retinal Processing and Output

Does the gray box in the center have the same brightness for both stimuli?

5

Retinal Processing and Output

Do these two gray surfaces have the same brightness?

6

Retinal Processing and Output

Do these two gray surfaces have the same brightness?

7

The Visual Pathway

Layers of cortex and principal cell types: 6 layers: I (superficial), II, III, IV, V, VI (deep).

Layer IV is subdivided into IVA, IVB, IVC.

Axons from LGN mainly terminate in IVC.

Pyramidal cells are found in layers III, IVB, V, and VI.

8

Primary Visual (Striate) Cortex

spike = action potential

Responses of a typical neuron in the M channel (simple cell): orientation selectivity

Figure: bars presented at different orientations evoke different spike counts (0, 3, 5, and 15 spikes); the maximal response occurs at the cell’s “preferred” or optimal orientation. The receptive field is drawn with an ON center (+) and OFF surround (–).
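A hedged sketch, assuming a Gabor-filter model of the simple cell (a standard modeling choice, not something stated on the slide): the filter responds most to a bar at its preferred orientation and progressively less as the bar rotates away, mirroring the spike counts above.

```python
import numpy as np

def gabor(size=32, theta=0.0, wavelength=8.0, sigma=4.0):
    """2-D Gabor receptive field whose carrier varies along orientation theta (radians)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x_t / wavelength)

def bar(size=32, theta=0.0, width=2):
    """Bright bar (1s on a 0 background) through the center, same theta convention as gabor()."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    return (np.abs(x_t) < width).astype(float)

rf = gabor(theta=0.0)  # the cell's "preferred" orientation
for deg in (0, 30, 60, 90):
    drive = max(0.0, float(np.sum(rf * bar(theta=np.radians(deg)))))  # half-rectified drive
    print(f"bar rotated {deg:2d} deg from preferred -> response {drive:.1f}")
# The response is largest for the aligned bar and falls off with misalignment,
# just as the optimally oriented bar evokes the most spikes in the recording.
```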

9

Primary Visual (Striate) Cortex

Other cells in the M channel show direction selectivity.

10

The Visual Pathway

A cortical module in striate (primary visual) cortex

11

The Visual Pathway

A cortical module in striate cortex (from Amiram Grinvald).

12

The Visual Pathway

Map of orientation columns

Laboratory Stimuli versus Natural Images

Laboratory stimuli are often carefully controlled, context-free, and stationary after a “flashed” onset.

Is this a problem?

Let’s look at Jack Gallant et al. 1998…

14

Laboratory Stimuli versus Natural Images

A laboratory stimulus:

15

Laboratory Stimuli versus Natural Images

A “natural” stimulus:

16

Laboratory Stimuli versus Natural Images

“Natural vision” is different: 1. Natural images are “richer” stimuli, containing a broad spectrum of spatial frequencies, colors, and contrast. In addition, they fill the entire visual field.

2. Natural images are not stationary, in part due to motion of the observer (eye, body, head). (“Active vision”)

17

Laboratory Stimuli versus Natural Images

Gallant’s experiments:

- Awake monkey, free viewing of visual scenes; 62 cells (V1, V2, V4).

- Same cells recorded during controlled viewing.

- Comparison: lower activity levels during free viewing; extended image patches tended to suppress activation (non-classical surround).

18

Visual “Illusions”

The tilt illusion: you can actually “see” the effects of contextual features from outside the “classical” receptive field.

19

Visual “Illusions”

20

21 W. W. Norton

22 W. W. Norton

23

24

The Visual Pathway

Multiple cortical areas

Van Essen, 1990

25

The Visual Pathway

Visual hierarchy and receptive field size (Zeki, 1993).

26

Center-Surround Cells


27

Neural Computation


28

Orientation Detectors


29

Original Image

30

Output from Oriented Line Detection Cells

31

Output from Oriented Line Detection Cells – Larger Extent of Summation

32

Sum of Different Complex Cells at each retinotopic position

33

34

35


Fates of Different V1 Layers


36

Different Areas Analyze Different Aspects of a Stimulus

37

Overview of Processing


38

Projections to Higher Visual Cortical Areas

39

Blindsight

40

41

42

43 W. W. Norton

44 W. W. Norton

The Visual Pathway

Higher in the visual hierarchy …

- receptive field sizes of neurons increase
- visual topography becomes coarser
- neurons become specialized for higher-order features
- response latencies increase

45

Motion and Color: “Constructs” of the Brain

Component motion (represented in V1, V2) is converted into pattern motion (represented in V5/MT).

Wavelength (represented in V1) is converted into color (represented in V4).

46

Motion

The basic problem: component and pattern motion

V1 cell receptive field. V1 cells only “see” a small part of the visual field.

47

Motion

Is this movement any different from the previous one?

V1 cell receptive field

48

Motion

Now let’s reveal the objects….

This object moves to the left.

V1 cell receptive field

49

This object moves up.

Motion

V1 cell receptive field. The V1 cell cannot “tell the difference” between an object moving to the left and one moving up: V1 cells only detect component motion.
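A toy illustration of the aperture problem behind component motion (my construction, not the lecture’s): projecting different object velocities onto the normal of a local edge yields identical measurements, so a V1 cell with a small receptive field cannot distinguish them.

```python
import numpy as np

def component_motion(velocity, edge_orientation_deg):
    """Project a 2-D velocity onto the unit normal of an edge of the given orientation."""
    theta = np.radians(edge_orientation_deg)
    edge_normal = np.array([np.sin(theta), -np.cos(theta)])  # perpendicular to the edge
    return float(np.dot(velocity, edge_normal))

edge = 45.0                                   # a diagonal edge seen inside the small V1 RF
move_left = np.array([-1.0, 0.0])             # object translating leftward
move_up = np.array([0.0, 1.0])                # object translating upward

print(round(component_motion(move_left, edge), 6))  # both motions give the same perpendicular
print(round(component_motion(move_up, edge), 6))    # component, so the V1 cell "cannot tell the
                                                    # difference"; MT combines components into
                                                    # pattern motion over a larger field
```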

50

Motion

Cells in MT have large receptive fields and are selective for pattern motion.

MT cell receptive field

51

The Visual Pathway

Area MT (V5) and pattern motion (Zeki, 1993).

52

Motion

Neuroimaging of the human motion area (PET)

(Figure: stimulus and PET scan; posterior/anterior orientation indicated.)

The Visual Pathway

Mapping MT using high-resolution fMRI and surface reconstruction.

Sereno, 1999

54

Color

Area V4 is a component of the ventral stream. It is important for the perception of form and color.

But – wasn’t color already analyzed in V1 blobs?

Let’s take a look at the difference between color and wavelength.

The image above was constructed using only one shade of yellow.

55

Color

Edwin Land performed experiments on human color perception using Mondrian-like stimuli, which were illuminated by three beams of colored light (red, green, blue).

56

Color Constancy

57

58

Color

Neuroimaging of the human color area (PET)

(Figure: stimulus and PET scan; posterior/anterior orientation indicated; from Zeki, 1993.)

59

Visual “Illusions”

Do you see a three-dimensional object?

How about now?

After Gaetano Kanizsa

60

Visual “Illusions”

Do you see a triangle?

After Gaetano Kanizsa

61

62

“Enigma” by Isia Leviant

63

Visual “Illusions”

MT activation due to illusory motion (Semir Zeki, 1993).

64

Visual “Illusions”

MT activation due to static stimuli with implied motion

65

Visual “Illusions”

These are the famous “Neil Illusions”

66

Multistable Percepts

Spontaneous reversal and ambiguous figures: the Necker cube (1832).

67

Multistable Percepts

The “duck-rabbit” (1900)

68

Ambiguous Perception

“My wife and my mother-in-law” (1915)

Ambiguous Perception

70

Ambiguous Perception

71

M.C. Escher

Ambiguous Perception

72

Ambiguous Perception

73

Perceptual Rivalry

Different images are presented simultaneously (e.g. to the two eyes), but only one of them is perceived by the observer.

Binocular perceptual rivalry: What is competing?

Candidates: eyes (monocular cells in V1), representations (central, high-order), hemispheres (cortical).

Evidence: representations.

74

Perceptual Rivalry

Stimulus (does not change) Percept (alternating spontaneously)

75

76

Multistable Percepts and Perceptual Rivalry

What are possible brain mechanisms involved in multistable percepts?

Low level: competitive interactions between neural activity patterns, e.g. within V1.

High level: involvement of separate areas in switching; top-down influences.

Stable periods versus switching periods. Rate versus synchrony.

Experimental design: subjects report perceptual reversals, and data are acquired continuously.

77

The Auditory System

Sound = audible variations in air pressure.

Frequency (pitch); intensity (amplitude, loudness). Audible frequency range: 20-20,000 Hz.

78

Auditory pathways

The Auditory System

79

The Auditory System

Cochlea: the key component is the basilar membrane. Apex and base differ structurally.

Cochlea shown uncoiled

80

The Auditory System

Tonotopic maps on the basilar membrane and cochlear nucleus.

Frequency encoding (a toy illustration of the phase-locking limit follows below):
- Low frequencies: phase locking
- Intermediate frequencies: tonotopy + phase locking
- High frequencies: tonotopy
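As a toy illustration (assumptions mine, not the lecture’s numbers): a model unit that can fire at most once per refractory period can mark every cycle of a low-frequency tone but misses most cycles at high frequencies, which is roughly why place coding (tonotopy) must carry high-frequency information.

```python
import numpy as np

REFRACTORY = 0.001  # s; ~1 ms absolute refractory period (illustrative value)

def phase_locked_spikes(freq_hz, duration=0.1):
    """Spike times of a unit that tries to fire once per stimulus cycle but respects refractoriness."""
    cycle_peaks = np.arange(0.0, duration, 1.0 / freq_hz)
    spikes, last = [], -np.inf
    for t in cycle_peaks:
        if t - last >= REFRACTORY - 1e-9:   # small tolerance for floating-point spacing
            spikes.append(t)
            last = t
    return np.array(spikes)

for f in (250, 1000, 4000):
    n_cycles = int(0.1 * f)
    print(f"{f:4d} Hz: {len(phase_locked_spikes(f)):4d} spikes for {n_cycles} cycles")
# At 250 Hz every cycle is marked; at 4 kHz most cycles are missed, so the
# timing code degrades and frequency must be read off the place of activation.
```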

81

The Auditory System

Sound localization. An important cue: differences in the arrival times of sound at the two ears.

Duplex theory of sound localization (a sketch of the time-difference cue follows below):
- Interaural time differences
- Interaural intensity differences
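A hedged sketch of the interaural-time-difference cue using Woodworth’s spherical-head approximation, ITD ≈ (a/c)(θ + sin θ); the head radius and speed of sound below are nominal values, not figures from the lecture.

```python
import numpy as np

HEAD_RADIUS = 0.0875      # m, a typical adult head radius (illustrative)
SPEED_OF_SOUND = 343.0    # m/s in air

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head estimate of the interaural time difference."""
    theta = np.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD of roughly {itd_seconds(az) * 1e6:.0f} microseconds")
# ITDs are on the order of hundreds of microseconds; they dominate localization at
# low frequencies, while interaural intensity differences dominate at high frequencies.
```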

82

The Auditory System

Primary and secondary auditory areas, located on the superior temporal lobe. The inset shows tonotopic organization within primary auditory cortex (isofrequency bands).

83

The Auditory Cortex

Tonotopic map; isofrequency bands; columnar organization. Most neurons are frequency tuned, but there are also cells responding to clicks, sound bursts, and vocalizations.

Multiple segregated areas – e.g. Wernicke’s area.

84

Analogies Between Vision and Audition (?)

A “descending staircase” version of Shepard’s illusion; another version; Escher’s staircase and Shepard’s tones.

85

Interactions between Vision and Audition

The McGurk effect … here in a demonstration produced at Haskins Labs in the 1980s … and demonstrated by Pat Kuhl.

86

Higher Perceptual Functions

Object Recognition

- Segregation of function
- Visual hierarchy
- What and where (ventral and dorsal streams)
- Single cell coding and ensemble coding
- Distributed representations of object categories
- Face recognition
- Object recognition as a computational problem

87

Functional Segregation

Segregation of function exists already in the early visual system: M channel (magnocellular): from M-type retinal ganglion cells to magnocellular LGN layers to layer IVB of V1; wavelength-insensitive in LGN, orientation selectivity in V1 (“simple cells”), binocularity and direction selectivity in layer IVB; processing visual motion.

P channel (parvocellular): from P-type retinal ganglion cells to parvocellular LGN layers to interblob regions of layer III in V1; many cells in LGN show color opponency, cells in interblob regions of V1 have strong orientation selectivity and binocularity (“complex cells”), channel is also called P-IB; processing visual object shape.

88

Functional Segregation

Segregation of function can also be found at the cortical level:

- within each area: cells form distinct columns.
- multiple areas form the visual hierarchy …

89

The Visual Hierarchy

van Essen and Maunsell, 1983

90

The Visual Hierarchy

- functional segregation of visual features into separate (specialized) areas.
- increased complexity and specificity of neural responses.
- columnar groupings, horizontal integration within each area.
- larger receptive fields at higher levels.
- visual topography is less clearly defined at higher levels, or disappears altogether.
- longer response latencies at higher levels.
- large number of pathways linking each segregated area to other areas.
- existence of feedforward, as well as lateral and feedback connections between hierarchical levels.

92

The Architecture of Visual Cortex

Lesion studies in the macaque monkey suggest that there are two large-scale cortical streams of visual processing: the dorsal stream (“where”) and the ventral stream (“what”). (Mishkin and Ungerleider, 1983)

93

What and Where

Object discrimination task

Bilateral lesion of the temporal lobe leads to a behavioral deficit in a task that requires the discrimination of objects.

Landmark discrimination task

Bilateral lesion of the parietal lobe leads to a behavioral deficit in a task that requires the discrimination of locations (landmarks).

Mishkin and Ungerleider, 1983

94

The Architecture of Visual Cortex

Lateral views of the macaque monkey brain (labels: motion, form, color).

95

Single Cells and Recognition

What is the cellular basis for visual recognition (visual long-term memory)?

1. Where are the cellular representations localized?

2. What processes generate these representations?

3. What underlies their reactivation during recall and recognition?

96

Single Cells and Recognition

Visual recognition involves the inferior temporal cortex (multiple areas). These areas are part of a distributed network and are subject to both bottom-up (feature driven) and top-down (memory driven) influences.

Miyashita and Hayashi, 2000

97

Single Cells and Recognition

Characteristics of neural responses in IT:
1. Object-specific (tuned to object class), selective for general object features (e.g. shape)
2. Non-topographic (large RFs)
3. Long-lasting (hundreds of ms)
Columnar organization (“object feature columns”). Specificity often has a rather broad range (distributed response pattern).

98

Distributed Representations

Are there specific, dedicated modules (or cells) for each and every object category?

No. – Why not?

99

Distributed Representations

Evidence for a feature-based and widely distributed representation of objects across (ventral) temporal cortex.

What is a distributed representation?

100

Distributed Representations

Experiments conducted by Ishai et al.:

Experiment 1: (1) fMRI during passive viewing; (2) fMRI during delayed match-to-sample.

Experiment 2: (1) fMRI during delayed match-to-sample with photographs; (2) fMRI during delayed match-to-sample with line drawings.

Three categories: houses, faces, chairs.

101

Distributed Representations

Findings:

Experiment 1: Consistent topography in areas that respond most strongly to each of the three categories. Modules? No – responses are distributed (more so for non-face stimuli).

Experiment 2: Are low-level features (spatial frequency, texture, etc.) responsible for the representation? No – line drawings elicit similar distributions of responses.

102

Distributed Representations

From Ishai et al., 1999

103

Distributed Representations

From Ishai et al., 1999 (houses, faces, chairs).

104

Face Recognition

Face recognition achieves a very high level of specificity – hundreds, if not thousands of individual faces can be recognized.

Visual agnosia specific to faces: prosopagnosia.

High specificity of face cells: “grandmother cells”, “gnostic units”. Many face cells respond to faces only – and show very little response to other object stimuli.

105

Face Recognition

Typical neural responses in the primate inferior temporal cortex: Desimone et al., 1984

106

Face Recognition

Face cells (typically) do not respond to:
1. “jumbled” faces
2. “partial” faces
3. “single components” of faces (although some face-component cells have been found)
4. other “significant” stimuli

Face cells (typically) do respond to:
1. faces anywhere in a large bilateral visual field
2. faces with “reduced” feature content (e.g. black-and-white, low contrast)

Face cell responses can vary with facial expression and view orientation.

107

Face Recognition

Face cells are (to a significant extent) anatomically segregated from other cells selective for objects. They are found in multiple subdivisions across the inferior temporal cortex (in particular in or near the superior temporal sulcus)

108

Face Recognition

Faces versus objects in a recent fMRI study (Halgren et al. 1999)

109

Object Recognition: Why is it a Hard Problem?

Objects can be recognized over huge variations in appearance and context!

The ability to recognize objects in a great number of different ways: object constancy (stimulus equivalence).

Sources of variability:
- Object position/orientation
- Viewer position/orientation
- Illumination (wavelength/brightness)
- Groupings and context
- Occlusion/partial views

110

Object Recognition: Why is it a Hard Problem?

Examples of variability (field of view): translation invariance, rotation invariance.

111

Object Recognition: Why is it a Hard Problem?

More examples of variability (field of view): size invariance, color.

112

Object Recognition: Why is it a Hard Problem?

Variability in visual scenes (field of view): partial occlusion and the presence of other objects.

113

Object Recognition: Theories

Representation of visual shape (a set of locations):

Viewer-centered coordinate systems: frame of reference is the viewer (examples: retinotopic coordinates, head-centered coordinates); easily accessed, but very unstable …

Environment-centered coordinate systems: locations specified relative to the environment.

Object-centered coordinate systems: intrinsic to or fixed to the object itself (frame of reference: object); less accessible.

114

Object Recognition: Theories

A taxonomy:
1. Template matching models (viewer-centered; normalization stage and matching)
2. Prototype models
3. Feature analysis models
4. Recognition by components (object-centered)

115

Object Recognition: Geons

Theory proposed by Irv Biederman.

Objects have parts.

Objects can be described as configurations of a (relatively small) number of geometrically defined parts.

These parts (geons) form a recognition alphabet: 24 geons, defined by four basic properties that are viewpoint-invariant.

116

Object Recognition: Geons

How geons are constructed:

117

Object Recognition: Geons

Geons in IT?

Irv Biederman, JCN, 2001

118

How does Invariance Develop?

119

Higher Perceptual Functions: Agnosias

Deficits of feature perception (such as achromatopsia) generally do not cause an inability to recognize objects.

Failure of knowledge or recognition = “agnosia” (here: visual agnosia). In visual agnosias, feature processing and memory remain intact, and recognition deficits are limited to the visual modality. Alertness, attention, intelligence, and language are unaffected.

Other sensory modalities (touch, smell) may substitute for vision in allowing objects to be recognized.

120

Two Kinds of Agnosias

Apperceptive agnosia: a perceptual deficit that affects visual representations directly. Components of the visual percept are picked up but cannot be integrated; effects may be graded; unusual views of objects are often affected.

Associative agnosia: visual representations are intact but cannot be accessed or used in recognition. Lack of information about the percept. “Normal percepts stripped of their meaning” (Teuber).

This distinction was introduced by Lissauer (1890).

121

Apperceptive Agnosia

Diagnosis: the ability to recognize degraded stimuli is impaired.

Farah: many “apperceptive agnosias” are “perceptual categorization deficits” …

122

Apperceptive Agnosia

Studies by E. Warrington. Laterality in recognition deficits: patients with right-hemispheric lesions (parietal, temporal) showed lower performance on degraded images than controls or patients with left-hemispheric lesions.

Hypothesis: object constancy is disrupted (not contour perception). Experiment: unusual views of objects – patients with right-hemispheric lesions show a characteristic deficit for these views.

123

Apperceptive Agnosia

Is “perceptual categorization deficit” a general impairment of viewpoint-invariant object recognition?

1. Patients are not impaired in everyday life (unlike associative agnosics).

2. They are not impaired in matching different “normal” views of objects, only “unusual views”.

3. Impairment follows unilateral lesions, not bilateral (as would be expected if visual shape representations were generally affected).

124

Associative Agnosia

Patients do well on perceptual tests (degraded images, image segmentation), but cannot access names (“naming”) or other information (“recognition”) about objects. Agnosics fail to experience familiarity with the stimulus.

When given names of objects, they can (generally) give accurate verbal descriptions.

Warrington’s analysis places associative agnosia in left hemisphere.

125

Associative Agnosia

Associative agnosics can copy drawings of objects but cannot name them (evidence for intactness of perceptual representations…) but…

126

Agnosia Restricted to Specific Categories

Specific deficits in recognizing living versus non-living things.

Warrington and Shallice (1984): patients with bilateral temporal lobe damage showed loss of knowledge about living things (failures in visual identification and verbal knowledge).

Their interpretation: distinction between knowledge domains – functional significance (vase-jug) versus sensory properties (strawberry-raspberry).

Evolutionary explanation…

127

Agnosia Restricted to Specific Categories

Another view – Damasio (1990): many inanimate objects are manipulated by humans in characteristic ways.

Interpretation: inanimate objects will tend to evoke kinesthetic representations. In agreement with Warrington, the difficulty is not due to visual characteristics or visual discriminability.

128

Agnosia Restricted to Specific Categories

Yet another view – Gaffan and Heywood (1993): presented images (line drawings) of animate and inanimate objects to normal humans and normal monkeys, tachistoscopically (20 ms). Both subject groups made more errors in identifying animate than inanimate objects.

Interpretation: living things are more similar to each other than non-living things – hence “category-specific agnosia”.

129

How is Semantic Knowledge Organized?

Category-based system versus property-based system. Network model by Farah and McClelland (1991).

130

Prosopagnosia

Is face recognition “special”?

Anatomical localization; functional independence.

Associative visual agnosia (prosopagnosia): the lost ability to recognize familiar faces. Affects previously known faces as well as newly experienced faces (anterograde component).

Patients can recognize people by their voice, distinctive clothing, hairstyle etc.

131

Prosopagnosia

What is special about faces?
1. Higher specificity of categorization
2. Higher level of expertise
3. Higher degree of visual similarity
4. Evolutionary significance

Can face and object recognition be dissociated? Neuropsychological evidence suggests yes (study by McNeil and Warrington). Also, remember Ishai et al. (object category map).

132

Prosopagnosia

Prosopagnosics have difficulty recognizing face stimuli, but do equally well on non-face objects (Farah et al., 1995)

133

Prosopagnosia

Is prosopagnosia a deficit of evoking a specific context from a stimulus belonging to a class of visually similar objects (other examples: a bird-watcher unable to recognize birds, others unable to recognize car makes)?

Evidence (Gauthier et al., 2000): - Long-term expertise with birds and cars recruits face-selective areas of the brain. In other words, activation of a small area of cortex predicts “level of expertise”.

- Birds and cars are not alike visually!

- Claim: IT is NOT organized according to visual feature maps.

134

Attention - Overview

Definition
Theories of Attention
Neural Correlates of Attention
- Human neurophysiology and neuroimaging
- Single-cell physiology – cellular mechanisms
Deficits of Attention
- Unilateral neglect

135

Attention

Everyone knows what attention is. It is the taking possession of the mind in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others… – William James (1890). (Portrait of William James, circa 1880.)

136

Attention: Two Components

Tonic attention (vigilance): setting arousal level, detection efficiency, signal-to-noise ratio. – brainstem reticular formation, basal forebrain, locus coeruleus, etc.

Selective attention: space-, object-, modality selective attention. – temporal and parietal cortex.

137

What Does Attention Do?

1. Maintaining alertness and vigilance
2. Orienting to sensory events – overt vs. covert
3. Selection of sensory events – early vs. late selection
4. Detecting targets – limits on capacity, processing bottlenecks
5. Controlling access to memory and awareness

138

Change Blindness

A link between attention and awareness?

Change blindness in “everyday life” A second example….

Can you spot the difference?

How about this one?

Or this?

139

Where Does Attention Take Place?

Different sensory modalities (vision, audition etc.).

Attention is a distributed function.

Different processes – different anatomical substrates.

140

Attention as Competition for “Neural Resources”

Kastner and Ungerleider, 2000

141

Attention: A Covert Spotlight?

Helmholtz’ experiment: Attention selects information

142

The Cocktail Party Effect

Shadowing (selective listening); Cherry’s dichotic listening experiments.

143

The Filter Theory of Attention

Broadbent (1958). However, even unattended information can “break through” and produce a shift in attention or orienting – Treisman’s attenuation theory.

144

Where Does the Selection Occur (Early or Late)?

Early selection: before full analysis of the input. Late selection: at or after semantic encoding.

145

Attention and Orienting

Voluntary orienting (expectancy) results in faster reaction times (Posner et al., 1980). Attention affects perceptual information processing; attention is spatial – a “mental spotlight”.

146

Inhibition of Return

Localized exogenous cues (light flash) can lead to faster performance at that location (within 250 msec) – then there is an inhibitory aftereffect (inhibition of return).

147

Searching a Scene

“Pop-out” versus conjunction search (sequential spotlight).

148

Competition and Visual Search

Interpretation of Treisman’s results: Feature search requires look-up within one feature map (bottom-up saliency-based mechanisms).

Conjunction search requires coordination of multiple feature maps in register and serial search under the guidance of visual attention (top-down influences of spatial or object-based attention). “Multiple objects are competing for neural representation.”

149

The Saliency Map

Idea originally proposed by Koch and Ullman, 1985.

Saliency map = encoding “visual conspicuity”, or saliency.

Two mechanisms (a minimal sketch follows below):
- fast, parallel, pre-attentive extraction of visual features.
- slow, sequential focal attention: winner-take-all, inhibition of return.
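A minimal sketch in the spirit of Koch and Ullman’s two mechanisms (the map values, neighborhood size, and function names are my own): a winner-take-all pick of the most salient location, followed by inhibition of return so that attention shifts to the next-most-salient spot.

```python
import numpy as np

def attend_sequence(saliency, n_shifts=3, ior_radius=1):
    """Return the first n_shifts attended (row, col) locations on a 2-D saliency map."""
    s = saliency.astype(float).copy()
    visited = []
    for _ in range(n_shifts):
        r, c = (int(i) for i in np.unravel_index(np.argmax(s), s.shape))  # winner-take-all
        visited.append((r, c))
        s[max(r - ior_radius, 0):r + ior_radius + 1,       # inhibition of return:
          max(c - ior_radius, 0):c + ior_radius + 1] = 0   # suppress the attended neighborhood
    return visited

# Toy saliency map: three "conspicuous" spots of decreasing strength
smap = np.zeros((8, 8))
smap[1, 1], smap[4, 6], smap[6, 2] = 3.0, 2.0, 1.0
print(attend_sequence(smap))   # attended in order of decreasing saliency: (1, 1), (4, 6), (6, 2)
```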

150

The Saliency Map

151

The Saliency Map: Application

Street signs

152

The Saliency Map: A Movie

153

The Saliency Map: Application

“Replication” of Treisman’s experiments:

154

Attention – Neurophysiology

Hillyard’s experiments – dichotic listening: attention-dependent effect on ERP amplitude.

Early or late?

Study by Woldorff et al.: localization of an early (20-50 ms latency) attention effect using ERP(F) and MRI.

155

Attention – Neurophysiology

Woldorff et al., 1993:

156

Attention – Neurophysiology

Woldorff et al., 1993:

157

Attention – Neurophysiology

Woldorff et al., 1993: Localization: Heschl’s gyrus of auditory cortex

158

Attention – Neurophysiology

Voluntary focusing of spatial visual attention enhances visual ERP’s.

Recording from right lateral occipital cortex

159

Attention – Neuroimaging

Previous imaging studies revealed: changes in neural activity related to attentional shifts (parietal lobe) and attention-related specific activation of extrastriate areas (color, form, motion). No changes in V1.

Recent fMRI studies (e.g. Somers et al., 1999): - Selective visual attention modulates neural activity in extrastriate cortex, as well as in V1.

- Attentional modulations in V1 are spatially specific.

- “Window of attention can be spatially complex”, hints at object-selective attention.

160

Attention – Neuroimaging

Flattening of the occipital lobe (Somers et al., 1999)

161

Panels (a) and (b): stimulus; (c) and (d): topography; (e) and (f): attentional modulation.

162

Attention – Top-Down

Most “natural” visual scenes are composed of multiple objects.

Receptive fields in higher visual areas are large (up to 25 degrees) and typically contain multiple objects at one time.

This creates a problem for neurons encoding specific object features…

163

Attention – Top-Down

Ambiguous response

164

Attention – Top-Down

Ambiguity in the neural response can be reduced by (see the sketch below):
a) referencing spatial (retinal) location
b) attentional modulation of firing rate
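A hedged sketch of point (b), borrowing the biased-competition account (e.g. Reynolds, Chelazzi and Desimone, 1999) rather than anything specified on the slide: with two stimuli in the receptive field, the response is a weighted average of the single-stimulus responses, and attention biases the weights toward the attended stimulus. The firing rates below are illustrative.

```python
def paired_response(r_pref, r_nonpref, attn_gain_pref=1.0, attn_gain_nonpref=1.0):
    """Weighted average of single-stimulus responses; attention scales a stimulus's weight."""
    return ((attn_gain_pref * r_pref + attn_gain_nonpref * r_nonpref)
            / (attn_gain_pref + attn_gain_nonpref))

r_pref, r_nonpref = 40.0, 5.0  # spikes/s to each stimulus presented alone (illustrative)

print(paired_response(r_pref, r_nonpref))                         # unattended pair: 22.5, ambiguous
print(paired_response(r_pref, r_nonpref, attn_gain_pref=4.0))     # attend preferred: 33.0, nears 40
print(paired_response(r_pref, r_nonpref, attn_gain_nonpref=4.0))  # attend non-preferred: 12.0, nears 5
```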

165

Attention – Top-Down

Unambiguous response. Prediction.

166

Cellular Basis of Attention

Moran and Desimone, 1985. Note: the visual input does not change (fixation point); what changes is the focus of covert attention.

167

Cellular Basis of Attention

Other examples of attention-related modulations of neural activity: 1) Parietal (“where”) pathway: increased firing to attended stimuli (area 7a), and to remembered locations where stimuli had been present. Also, responses occur to inferred motion.

2) Temporal (“what”) pathway: increased firing to attended stimuli (IT), particularly during active discrimination, or to remembered stimuli (working memory).

The prevalence of these effects makes it difficult to distinguish state-dependent (endogenous) and input-driven (exogenous) components of “normal” neuronal responses. Are different cells specialized for each component?

168

Cellular Basis of Attention

Neuronal responses in IT during a delayed match-to-sample task.

Task: Chelazzi et al., Nature 363, 345, 1993.

169

Cellular Basis of Attention

Neuronal responses in IT (20 trial average, smoothed mean firing rate)

(Figure: Cell 1 and Cell 2; cue, delay, and choice periods; * = saccade onset. Chelazzi et al., Nature 363, 345, 1993.)

170

Model

Cellular Basis of Attention

Chelazzi et al., Nature 363, 345, 1993

171

Attention and Synchronization

Steinmetz et al., 2000: Task: Monkeys trained to switch attention between a visual (“dimming detection”) and a tactile (“raised letters”) task.

Recording: multiple neurons (neuron pairs) in SII (secondary somatosensory cortex), contralateral to the hand involved in the tactile task.

Results: most neurons in SII increase their firing rate with attention to the tactile task. A proportion of neuron pairs (17%) showed increased cross-correlation (synchrony) with attention.
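A small sketch of the kind of synchrony measure involved (my construction, not the paper’s analysis code): a cross-correlogram between two binned spike trains, whose zero-lag peak grows when the trains share synchronous spikes, as in the simulated “attended” condition below.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_correlogram(train_a, train_b, max_lag=10):
    """Coincidence counts between two 0/1 spike trains at lags -max_lag..+max_lag bins."""
    lags = np.arange(-max_lag, max_lag + 1)
    return lags, np.array([np.sum(train_a * np.roll(train_b, lag)) for lag in lags])

n_bins = 5000
shared = rng.random(n_bins) < 0.02        # common drive producing synchronous spikes
indep_a = rng.random(n_bins) < 0.03       # independent background spiking, neuron A
indep_b = rng.random(n_bins) < 0.03       # independent background spiking, neuron B

# "Attended": both neurons inherit the shared spikes; "unattended": independent spikes only
attended_a, attended_b = (shared | indep_a).astype(int), (shared | indep_b).astype(int)
unattended_a, unattended_b = indep_a.astype(int), indep_b.astype(int)

lags, cc_att = cross_correlogram(attended_a, attended_b)
_, cc_unatt = cross_correlogram(unattended_a, unattended_b)
print("zero-lag coincidences, attended:", cc_att[lags == 0][0])
print("zero-lag coincidences, unattended:", cc_unatt[lags == 0][0])
```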

172

Attention and Synchronization

(Figure: cross-correlograms for an SII neuron pair during the tactile and visual tasks; correlations are increased for the tactile task relative to the visual task and chance. Nature 404, 187, 2000.)

173

Attention and Synchronization

But are attentional effects on synchronization cell specific?

Experiments by Fries et al., 2001.

Simultaneous recordings of MUA and LFP in primate area V4. (Figure: receptive fields; blue = no attention, red = attention.)

174

Attention and Synchronization

Response histogram, showing stimulus-evoked responses. No clear attentional effects, either during stimulus period or during delay period.

175

Attention and Synchronization

(Figure: delay period and stimulus period; blue = no attention, red = attention.)

176

Two Disorders of Attention

Unilateral neglect Balint syndrome

177

Symptoms of Unilateral Neglect

- Left hemiparetic arm
- Anosognosia: unawareness / denial of illness
- Rightward gaze deviation
- No obvious hemianopia
- Visual extinction to double simultaneous stimulation (DSS)
- Tactile extinction to DSS
- Constructional apraxia: deficit in constructional and drawing tasks
- Apraxia: disorder of skilled movement
- Allesthesia: (gross) mislocalization of stimulation

178

Unilateral Neglect

A deficit in perceiving and responding to stimulation on one side.

Not a visual or motor defect (hemianopia or hemiparesis)!

Two components: spatial neglect, bodily neglect.

Typical lesion site: unilateral parietal-occipital junction, (dorsal) parietal cortex (Brodmann’s areas 7, 40).

The side opposite to the lesioned hemisphere (contralesional side) is affected.

179

Unilateral Neglect: Lesion Sites

Lesion sites (frontal and parietal) from 7 patients with left-sided neglect (Husain et al., Nature 385, 154, 1997).

180

Unilateral Neglect

Behavioral components of unilateral neglect: 1. Perceptual component: sensory events on one side have diminished impact on awareness (extinction).

2. Motor component: hemispatial exploratory weakness (manual exploration tasks).

3. Motivational (limbic) component: “nothing important is expected to be happening” on the affected side.

181

Unilateral Neglect

182

Unilateral Neglect

183

Unilateral Neglect

184

Unilateral Neglect

Eye movements from a patient with left unilateral neglect, during visual exploration

185

Disorders of Attention

Narrator: V.S. Ramachandran (UCSD), PBS NOVA, 11-23-01.

186

Unilateral Neglect: Frames of Reference

“On the side opposite to”: In what frame of reference does neglect occur (space, object, world)?

How do we define LEFT?

Reference frame: a system for representing locations relative to some standard coordinate system. Neglect affects multiple reference frames.

187

Unilateral Neglect: Frames of Reference

Neglect patient JM’s copying of a daisy presented in different orientations.

Spatial or object-centered?

188

Unilateral Neglect and Memory

Bisiach’s patient (unable to recall half of the Piazza del Duomo) – representations are affected, not just acute visual input (“unilateral neglect of representational space”).

189

What Causes Unilateral Neglect?

1. Neglect results from damage to the attentional orienting system. Attention is mostly deployed to the right.

2. Neglect is caused by a failure to construct a complete mental representation of contralesional space.

190

Unilateral Neglect: Patient J.R.

From Nature, 373, 1995, 521ff. The patient cannot completely cross out local components of global forms (Navon figures).

191

Unilateral Neglect: Patient J.R.

From Nature, 373, 1995, 521ff. However, the patient can adequately describe the figure shown in (a) and mark its corners; the patient then cannot cancel all the dots (b); the patient can reconstruct the figure from memory (c).

192

Unilateral Neglect: Patient J.R.

From Nature, 373, 1995, 521ff. The patient cannot cancel all imaginary components of a drawn square (a); performance is better without vision (blindfolded) (b).

Note the contrast between the exogenously (input) driven and endogenously (memory) driven tasks!

193

Unilateral Neglect: Patient J.R.

From Nature, 373, 1995, 521ff. The patient cannot cancel all dots in (a), but can reproduce a circle of dots (driven by an internal global representation) (b). After drawing the circle, dots again cannot be canceled on the left (c).

194

Unilateral Neglect: Patient J.R.

Marshall and Halligan summarize J.R.’s deficit as follows: “Conscious perception of the whole does not automatically lead to visual awareness of all the parts. […] J.R. can perceive the whole forest but cannot use that percept to search for and cut down the trees on the left thereof.”

195

Unilateral Neglect: Summary

- A unilateral attention deficit
- LH: strong right bias; RH: possible bilateral control (can direct attention left or right)
- Attention operates on representations; neglect can affect multiple representations
- The brain represents space in multiple frames of reference
- Posterior parietal cortex is critical for attention

196

Balint Syndrome

Main component: visual disorientation (simultanagnosia). Inability to attend to more than a very limited (and unstable) sector of the visual field (a single object) at any given moment (the rest is “out of focus”). Percept of a spatially coherent scene is lost.

Lesion: Most often, bilateral occipito-parietal lesions

197