Transcript Document

The Varieties of Conscious Seeing
Andy Clark
School of Philosophy, Psychology and
Language Sciences (PPLS)
University of Edinburgh, Scotland, UK
[email protected]
Thanks.
Thanks to Julian Kiverstein, Rob McIntosh, Matthew
Nudds, Tillmann Vierkant and the participants in the
Edinburgh University PPIG (Philosophy, Psychology
and Informatics Reading Group) for stimulating
discussion of some of these issues.
This talk was prepared thanks to support from the
AHRC, under the ESF Eurocores CNCC scheme, for
the CONTACT (Consciousness in Interaction) project,
AH/E511139/1.
Conscious Visual Experience.
What is conscious visual experience?
How can we tell what figures in conscious visual
experience? (It’s harder than it looks since asking people
is not good enough)
Is conscious visual experience all of one basic type, or are
there varieties of conscious seeing?
Is there some kind of information-processing profile
that is both distinctive of, and perhaps helps explain,
conscious visual experience itself?
Strategy:
Look at two familiar bodies of evidence that seem to
bear on the nature and extent of conscious visual
experience.
Change Blindness
Visual Form Agnosia
…while asking lots of difficult questions
…then offer a positive suggestion.
A General Issue to Pursue
Might we often be confusing the contents of visual
experience with something much more narrow, such as
the contents of noticed (Dretske (2006)), conceptualized
(Wallhagen (2007)), or world-describing (Jeannerod
(2007), Nudds (ms)) visual experience?
See also Block (2007).
Call this the ‘Narrow Vision of Conscious Vision Worry’
1. Change Blindness (Again)
2. Seeing and Noticing
3. Visual Form Agnosia (Again)
4. Two Types of Conscious Visual Content?
5. A Positive Story
What should we conclude from these kinds of demos?
A popular early suggestion (Dennett (1969) (1991), Ballard (1991),
O’Regan (1992), Churchland et al (1994), Clark (1997)):
• Minimal internal representation: you don’t notice the difference
because your brain never created a ‘rich internal representation’ of
the scene in the first place.
• Rather, we make do with partial (perhaps ‘gist’-oriented) internal
encodings and a capacity for rapid information-accessing saccades.
Problem:
it now seems that the internal representations are not so
minimal after all….
Evidence for persisting, and not especially sparse, representations of
the pre-change stimulus.
Hollingworth and Henderson (2002) show that as long as a target
object is fixated (i.e. directly targeted by foveated vision) and
attended both before and after the change, subjects are able to
detect and report even quite small and subtle alterations, such as the
change of one telephone to another. (See also Simons et al (2002))
+ even when the change is not explicitly noticed, evidence of covert
awareness. Hollingworth et al (op cit) showed that fixation
duration on the changed object (post-change) was longer than under
normal (no change) conditions
Silverman and Mack (2001) showed priming effects for 'unnoticed'
changes.
A Good Question:
If we have these rich(er) internal representations, why don’t we
always notice the changes?
Simons et al suggest we fail to compare the pre- and post-change
representations. But that merely pushes the question back a step.
Why don’t we?
Suggestion: what is at work here is a very general principle of
operation of the embodied, situated, human nervous system,
that I call “motor deference”. (Clark (forthcoming), Ferreira,
Apel and Henderson (submitted))
We tend to prefer to use a motor routine to fetch information
from the scene in front of us, even if we command a perfectly
good stored representation.
Why Defer?
Ferreira, Apel and Henderson (submitted):
• Safe…perhaps more reliable than memory.
• Lessens the moment-by-moment load on internal working
memory: just fetches what is needed when it is needed.
But one question still unanswered:
How much of the detail that we moment-by-moment encode, and
that is sometimes demonstrably preserved even when we don’t
notice changes to those very elements in the scene, makes it (at
the time of exposure) into conscious visual awareness?
Concretely: In typical CB experiments, do you consciously
visually experience the very items whose changed appearance you
later failed to notice?
2. Seeing and Noticing
[Figure: the augmented Müller-Lyer illusion. Fig courtesy of Jeroen Smeets]
According to Wallhagen (2007) this clever manipulation
merely reveals (allows us to notice) what was already
present to visual experience even in the standard
unaugmented Müller-Lyer.
It is just that in the standard case we do not notice that our
own conscious perception has both the veridical and the
non-veridical content too (i.e. there are multiple inconsistent
contents given in conscious visual experience).
Similar line in Cara Spencer, re Ebbinghaus illusion (2007,
p.315, my emphasis):
Maybe we correctly represent each disc size in experience
but “the distorting effect of the illusion partly serves to draw
the attention away from certain features of the experience,
such as its representation of each individual disc, so that
the subject does not notice how they compare”.
These moves require a kind of division of the agent
into what might be termed the bare experiencing
agent and the noticing agent.
On this model the bare experiencing agent is fully
consciously aware of much that the noticing
agent misses.
Does the augmented Müller-Lyer illusion really lend
support to any unrestricted version of this claim?
The augmentations create a state of conflict in the
noticing agent, and this is our best evidence that
under those special conditions multiple
inconsistent contents are indeed present in
experience.
It is unclear what, if anything, we should then infer
about the unaugmented case.
AC: Sometimes such a division may make
sense…perhaps I genuinely experience
but don’t notice your new haircut, that is, I
experience the shape of your hair, which
is new, while not noticing that it is new.
But surely parasitic…had my attention
been drawn to the shape of your hair, I
could have reported all its salient features.
This will matter later
A better try: Dretske (2006)
asks: How do we tell what enters into conscious visual
awareness at some moment in time?
Makes a strong case that this is much harder than it seems.
If someone reports that yes, they saw (visually experienced)
X, and we have no reason to think they are being dishonest,
that provides good (not indefeasible) reason to believe that X
did enter visual awareness.
But failure to report X, and even actively reporting failure
to see X, are both way less reliable, for a variety of reasons.
The subjects may simply not be able to issue verbal
reports: infants, non-human animals.
More interestingly, subjects who CAN issue reports
often fail to (a) know what they have seen, or even
(b) know that they have seen anything at all, even when they have.
At root, this all comes about for a simple, but
potentially rather important, reason: that seeing is
very different from believing!
Various examples will focus the ideas….
(a) Not Knowing What You Have Seen
To see an X requires little more (perhaps no more) than
being able, at that moment on demand, to pick X out by
pointing and asking “what’s that?”.
But to see an X as an X requires having the concept of X’s,
and somehow bringing that concept to bear on the visual
experience.
Dretske’s favourite old example: You might see a spy every
day, but not know you do, as you see Sarah but don’t know
Sarah is a spy, or you don’t even know what spies are.
[Even the fly you briefly fixate but pay no heed to on your
wall might be a spy, genetically engineered by the CIA]
The point of this simple observation is just to stress:
“the difference between awareness of a stimulus (an object of
some sort) and awareness of facts about it – including the fact that
one is aware of it” (Dretske (2006) p.147)
Fact-awareness versus Object-awareness
Concretely: “ignorance of the fact that one is seeing a spy does
not impair one’s vision [that is, one’s conscious experience] of
the [object that is a] spy” (148)
Object-awareness is in some sense cheaper than fact awareness
Sometimes, however, it seems plausible that visual stimuli can
affect us without even any conscious object-awareness…
Blindsight cases might be like this.
Here, even though subjects can perform above chance on some
forced choice tasks, they could never point to the stimulus
and ask “what’s that?”: so they fail even the minimal
condition on object-awareness that my Fly-Spy case meets.
A mistaken suggestion: it might seem that even if you don’t
know what you have seen (you lack fact-awareness of X) you
should (if you have vision-based object-awareness of X) at
least know that you have seen something.
But not so…conscious perception can occur even when we
think we did not see anything at all…how can this be?
(b) Not Knowing that you have seen anything at all
“suppose S looks at a scene in which there are seven people
gathered around a table. Each person is clearly visible. S gazes at
the scene for several seconds, runs her eye over (and, in the
process, foveates) each person at the table, but pays no particular
attention to any of them [ac: perhaps she has been given a
different task that incidentally requires this, eg, counting how
many are wearing red]. She then looks away. While S is looking
away, an additional person--call him Sam—joins the group. Sam
is clearly visible [ac: and not wearing red]. There are now eight
visible people. When S looks back, [ ac: and even after she
foveates each person in turn] she doesn't notice any difference.
Having no reason to suspect that a change has occurred, S thinks
she is looking at the same group of people. When asked whether
she sees a difference in the scene between the first and the second
observation, S says, "No." “ (Dretske, 162)
Sounds plausible to me. Simons and Levin have an old
case a bit like this, involving a photographer using a
manual focus camera. (But someone should do Dretske’s
experiment, to be sure!)
Assume that S honestly asserts that she saw no new
people at the second viewing.
Q/ Did S actually (consciously) see newbie Sam?
Q is not: was S aware of the fact that she saw something
different (Sam) on the second viewing. She clearly
wasn’t aware of any such fact. But did she have
object-awareness of Sam?
Seems hard to deny it. Dretske hammers it home well…
“S not only saw Sam, but… her experience of him was of the
same kind, a conscious experience, as was her experience
of the other seven people. She was aware of Sam in the same
way she was aware of each of the other seven people around
the table. She was aware of the person who made a difference
without being aware (realizing, noticing, or believing) that
there was this difference…It would be completely arbitrary
to say that S consciously sees only the same seven people
she saw the first time and that, therefore, her perception of
Sam, the new member of the group, is unconscious,
subliminal, or implicit. Why just Sam? Why not each of
the other seven people at the table?”
Dretske (2006) p.163
“One can be conscious of the objects that
constitute a visible difference and not be
conscious of the fact that one is conscious of
them” (163)
[+ meets minimal condition: she could, on the
basis of her current visual awareness, have, had
she wished to, pointed at Sam and asked “Who
is that?”
Enough object-info entered awareness for this to
be possible, unlike the blindsight case..164-5]
Important Caveat: Dretske’s claim is not that you are
consciously object-aware of all the elements in a complex
scene.
Just that “one can be consciously aware of more than one
realizes” (164)
Fits with his older work on ‘non-epistemic seeing’
I think Dretske is right, and this shows that at least one thing I
used to like to say is wrong…(imagine my surprise).
The original array will always comprise six cards of a similar
broad type: six face cards, or six assorted low-ranking cards
(between about 2 and 6) etc.
When the new, 5 card array appears, NONE of these cards will be
in the set. But the new 5 card array will be of the same type: all
face cards, low cards, whatever.
It looks as if ONLY YOUR CARD has been taken. But in fact
they are ALL DIFFERENT NOW (so no wonder we got yours!)
(me, 2004)
The brain knows that it can USUALLY get detailed information
about all the other cards just by looking.
So it encodes something minimal (e.g. ‘lots of royal cards’), leaving
all the detailed info out in the world.
This works fine until magicians exploit the laziness.
Me (now)
Perhaps we consciously saw much of the detail on each
card as it was (not just as ‘royal’) even though we did not
notice (perhaps due to motor deference) that they were
all new cards second time around.
Many CB cases might well be like these ones, viz, they show
only “that sometimes we do not notice some of the things we are
consciously aware of” (165)
That is, CB failure is not (or not always) a failure of conscious
awareness after all.
A reasonable worry:
Have we now made ‘visual experience of X’ into something
so slippery as to be empirically intractable?
Dehaene et al (2006)
Develop a taxonomy of states according to which the cases
Dretske cites would (I think) be classified as preconscious:
“potentially accessible (they could quickly gain access to
conscious report if they were attended) but they are not
consciously accessed at the moment” (207)
Later, they muse that whether such states are phenomenally
conscious but elude report due to being unattended, or are not
phenomenally conscious, “does not seem to be, at this stage,
a scientifically addressable question” (209)
But consider the famous Sperling (1960) experiments recently
discussed by Dretske (2006), Block (2007), Fodor (2007). For a
recent version of the experiments, see Landman et al (2003).
Subjects are briefly (50ms) shown a 3x3 grid of letters:
T D A
S R N
F Z B
After stimulus is gone, subjects can reliably report (‘full report’
condition) only about 4 letters.
But say they saw them all.
Should we trust them?
Sperling showed that if rapidly asked instead for the letters in any
given row (‘partial report condition’) subjects could often do this,
regardless of which row was chosen.
So information about each and every letter seems temporarily
available, if attention is rapidly so directed, even though the
selection of some letters renders the rest (then) unavailable.
The experience, various philosophers (Fodor, Block, Dretske) now
suggest, contains more information than any full report can
display.
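The partial-report logic can be sketched numerically (a hypothetical illustration of the inference, with invented round numbers, not Sperling's actual data):

```python
# Hypothetical sketch of Sperling's partial-report inference
# (illustrative numbers, not the actual experimental data).
rows, letters_per_row = 3, 3       # the 3x3 letter grid above
reported_per_cued_row = 3          # subjects can report almost any cued row

# If any row can be reported on demand, information about every
# letter must briefly have been available:
inferred_available = reported_per_cued_row * rows

full_report_limit = 4              # letters reportable without a cue

print(inferred_available, full_report_limit)  # 9 vs 4
```

The gap between the two numbers is the point: what is momentarily available outstrips what any single full report can capture.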
Landman et al (2003)
Subjects shown 8 oriented rectangles for half a second, then gray
screen, then the array of 8 but one rectangle may have changed
orientation
Able to keep track of the orientation of about four rectangles
from a group of eight (capacity measure = 4)
Yet they typically report seeing the specific orientation of all
eight rectangles.
BUT with pointer added on gray screen, can track almost all
rectangles (capacity measure up to 6 or 7)
Possible explanation: Briefly persisting ‘iconic’ experience whose
experiential content exceeds full report.
See Block (forthcoming) and, for some worries about the exact
way Block uses this data, Clark and Kiverstein (forthcoming).
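The ‘capacity measure’ figures can be illustrated with a standard change-detection capacity formula (a Cowan-style K; whether this exact formula matches Landman et al’s own analysis is an assumption here, and the hit/false-alarm rates below are invented to echo the reported capacities):

```python
def capacity_k(n_items, hit_rate, false_alarm_rate):
    """Cowan-style capacity estimate for single-probe change
    detection: K = N * (hit rate - false alarm rate)."""
    return n_items * (hit_rate - false_alarm_rate)

# Invented rates chosen to echo the reported capacity measures:
no_cue = capacity_k(8, 0.75, 0.25)    # ~4 of 8 rectangles, no cue
with_cue = capacity_k(8, 0.90, 0.09)  # ~6-7 with a pointer on the gray screen

print(no_cue, with_cue)
```

On this way of scoring, the pointer roughly doubles the estimated number of rectangles whose orientation is being tracked, which is what motivates the ‘iconic experience exceeds full report’ reading.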
Dretske imagines a similar ‘partial report’ probe applied to his
table case.
Soon after seeing the second group, someone points to the space
around the table where newbie Sam had been placed and asks
“was anyone sitting there?”.
If she can answer “yes”, we can provisionally conclude she was
indeed consciously object-aware of Sam, even though she did not
notice the fact that Sam was ‘extra’
3. Visual Form Agnosia (Again)
Another body of Evidence: Selective Brain Lesion Data
Visual Form Agnosia (DF) and the Dual Visual Streams
Story
DF. Ventrally compromised (carbon monoxide poisoned)
patient with Visual Form Agnosia. DF says that she has no
conscious experience of seeing the shapes of objects,
and she cannot name the shapes and orientations of
objects (nor can she e.g. pantomime the shapes of
objects or their orientations in space).
Not only can DF not recognize many everyday objects, or
faces, she cannot distinguish between simple line
drawings of squares, rectangles, triangles and circles
But dorsal stream intact.
DF can perform a variety of fluent actions. She can run
through a novel arena without hitting things (in this she
is said to be ‘indistinguishable from normal subjects’
(Goodale and Milner (2004) p28), raising her foot just
enough, just in time, to clear the obstacles).
She can, perhaps most famously, post a card through an
oriented slot.
But she cannot tell you the orientation of the slot, nor can
she mime it for you.
Verdict: “[DF] is able to use visual properties of
objects…to guide a range of skilled actions
despite having no conscious awareness of
those same visual properties…” Goodale and
Milner (2004) p.29
DF (ventral lesions) grasps shapes almost as well as a
control, but cannot recognize them.
Comparison with RV, an optic ataxic (dorsal stream lesions):
RV can recognize shape but cannot engage the object
appropriately.
Evidence from Illusions Again: the Ebbinghaus
Illusion
Recall that all centre circles are the same size.
In physical (poker chip) version by Aglioti et al (1995) pre-shaping
of precision finger grip is perfect whichever you reach for, even
though you (wrongly) judge the sizes to vary
So the ‘zombie’ motor system, unlike conscious visual perception
(?), seems unaffected by the illusion.
Recent Version: Uses the Hollow Face Illusion
Fast Flicking and the Hollow Face Illusion (Kroliczak et al
(2006)).
Again (as in Ebbinghaus) trajectories different from the
very start: effect not due to last second corrections.
But a much larger effect.
"This demonstrates that the visuomotor system can
use bottom-up sensory inputs…to guide movements
to the veridical locations of targets in the real
world, even when the [consciously] perceived
positions of the targets are influenced, or even
reversed, by top-down processing"
Milner and Goodale (2006) p.245
4. Two Types of Conscious Visual Content?
Another Kind of Worry:
Types of Conscious Content
Conscious versus Non-Conscious Contents
(see Jacob/Jeannerod, Nudds).
Descriptive contents (belief-like: represent how things
are) versus
Directive contents (Nudds) (desire-like: represent how
things should go)
Re DF/Illusions:
Ventral stream plausibly specializes in descriptive contents;
Dorsal stream (or some of it) in directive: the way we need
to move to engage that object.
But both descriptive and directive modes may
involve some representations that inform
conscious visual experience and some that
don’t.
Failures to report how things are shaped etc
(descriptive contents) need not imply lack of
any kind of visual perceptual experience at
all…
maybe there is some kind of visual experience
associated with the directive role too?
Compare: visual experience is very different to auditory
experience. But we would never suggest that the
auditory is non-conscious just because it is of a
radically different type.
(In the dorsal / ventral case, calling one kind ‘perceptual’
and the other ‘action-guiding’ tends to obscure the
possibility that the action-guiding stuff has a (different)
phenomenal character too)
Nudds:
Directive contents of visual *experience* may
guide action even if we are “not aware of an
object as being some way, and don’t have a
visual experience of the object as being that
way”
(from ms, “Seeing How to Move: Visually
Guided Action and the ‘Directive’ Content of
Visual Experience”)
Nudds: Experiences with directive contents
represent and determine bodily movements.
But they do not do so by means of informing
an agent’s reasoning or intentions (we do
not intend to move in just-such-and-such a
way).
Instead, visual experience is said to guide
action in what Nudds calls a ‘direct way’
Why say experience guides us here, rather than just say
that visual input guides us?
Idea looks to be that we do not just find our hands moving
in just-such-and-such a way. Instead, the action has the
character of something we do.
This is (Nudds suggests) because the agent is aware that
they are trying to move in such-and-such a way.
So I don’t just find my arm tracing a trajectory. I move it
thus.
Nudds:
So..DF may have visual experiences with directive
contents…even though [she] “will not be under the
impression that anything is any way, nor have any
basis for judging that anything is any way” (Nudds,
ms)
See also Kelly (unpub) on ‘motor intentionality’ (DF
said to have a ‘motor intentional understanding’ of
the orientation of that infamous slot..).
See also O’Regan and Noe’s comments on DF
+ a take on Ebbinghaus etc:
Vis Exp guides both the action and the verbal response.
But it is the directive content of vis exp that guides the
visuomotor action and the descriptive that guides the verbal
response.
So on this model, both the judgment and the action are
guided by visual experience but there is no inconsistency
in the content of visual experience here.
Just multiple kinds of content of visual experience.
Comment:
If there is any ‘directive’ phenomenal content here at all,
why think it is visual? (Compare Matthen on the ‘feeling of
presence’)
The kind of putative visual experiential content (directive
content) at issue is clearly elusive.
To have it is not thereby to be able to report it, or to form
intentions to act based on it.
Red flags?
5. A Positive Story
Towards a Positive Story
I am dubious about the claim that these directive contents (in
DF or in us) form part of conscious visual experience.
They seem too isolated and detached.
They break the link between conscious visual contents and
intentional agency.
Shouldn’t conscious contents be the kinds of thing that can
figure as reasons for our acts and choices, that can guide
deliberate action?
A positive suggestion, versions of which are found in Evans
(1982), Marcel (1983), Milner and Goodale (1995), Goodale and
Milner (2004), Hurley (1998), Clark (2001), Jacob and Jeannerod
(2003), Dretske (2006), Clark (2007):
Conscious perceptual experience occurs when, and only when,
information is poised, however briefly, for the guidance of (at
least minimal) rational action.
One version (see Campbell (2002), Clark (2001) (2007),
Dretske (2006))
Sensory transduction can sometimes simply channel
information so as to guide response, without providing the
agent herself with any reasons, justifications, or
rationales, for her action.
In such cases, successful behaviour whose success depends
on that very information will (c.p.) surprise the agent herself.
Eg Blindsight cases where agents claim to be “guessing”
Or DF and her initially self-surprising successful posting
behaviour?
Information got in, and made a difference, but was never (not
even briefly) poised so as to provide me with a reason for
my actions or choices.
At other times, the form or nature of the processing poises
transduced information in a way apt (if my attention is so
directed) to provide me with reasons and motivations for my
own actions and choices (what Dretske calls ‘justifying
reasons’ (168)).
Notice that unconceptualized (‘iconic’) encodings can
provide reasons for actions and choices and reports,
because (see e.g. Hurley (1997), Dretske (2006), Clark
(2007), Fodor (2007)) they still have contents, and those
contents can justify (make rational) perceptually-based
judgments, such as the judgment that the top line of the
Sperling grid contained a T a D and an A.
(epistemologists should probably care about this: see Fodor
(2007))
Upshot:
This looks like a more-or-less workable (?) criterion for
deciding when visual perceptual uptake is conscious
It is conscious when and only when it presents specific
contents (which may be of various types) in a form
suitable for the control of rational response: when and
only when its contents are poised to justify an agent in her
actions and responses, to reveal them as apt (see
Campbell (2004) p.275-276, also Ward, Roberts, and Clark
(submitted))
Most minimal case: the content is just that there is some
kind of object THERE, so you can at least be justified in
pointing at it and asking “what’s that?”.
There are usually all kinds of richer contents too, of course.
Application to Sperling Cases?
Information about the identity of each and every letter is there,
poised (albeit briefly) to guide and justify rational response.
For whichever row is selected, the visual information provides
a ‘justifying reason’ (Dretske) for the correct choice of
letters. The choice of letters is self-evidently apt to the chooser.
They do not feel they are guessing.
It is just that some other constraint, involving limits on STM, or
on the process of conceptualizing the information, or both,
results in any one such use (for row 1, say) obviating the
others.
But at the time, all the information was properly poised to
guide some form of rational (self-evidently apt) response.
A Virtue:
Relatively undemanding: No need to be a language-user.
“As long as the animal or child can do things for reasons,
as long as it can be motivated to act by having reasons to
act, we can have grounds for inferring that it is conscious…
even though it cannot think that it is” (Dretske 171)
Dretske: If seeing the cat climb the tree provides Fido with
a ‘justifying reason’ for barking, then good enough…Fido
was conscious of the cat.
Notice that this is not simply a matter of confidence in
your barking per se.
It is a matter of your barking being, as I’d put it,
self-evidently apt, or transparently reasonable, to you.
Campbell (2004) p.275-276
“ [the blindsighted subject] would still be guessing
no matter how much the subject practices, no
matter how confident, fast, and reliably accurate
she is in responding to the contents of the blind
field. The reason it is “guessing” is rather this:
when the subject reaches and grasps
successfully, she still does not know just why
the reaching and grasping has been
successful….[she] does not know why the
world has afforded just this and that…[she
lacks] perception of the reasons why the
object affords the various actions it does”
The sceptic, however, may still be unconvinced.
Why link phenomenal experience to the presence of such
‘justifying reasons’ or the self-evident aptness of
behaviours?
Because to deny this link, it seems to me, is to drive an
unwelcome wedge between the agent and her own
(putative) experience.
No isolated inner islands of perceptual experience:
experience is always the experience of an agent, and
should thus be in touch with her goals, plans and projects
Gareth Evans famously claims that an informational state
may underpin a conscious experience only if it (the
informational state) is in some sense input to a reasoning
subject.
To count as a conscious experience an informational state must
"[serve] as the input to a thinking, concept-applying and
reasoning system: so that the subject's thoughts, plans, and
deliberations are also systematically dependent on the
informational properties of the input. When there is such a
link we can say that the person, rather than some part of his
or her brain, receives and processes the information”
Evans (1982) p.158
I think this is almost right.
But (see Clark (2007)) the real point here is (or should be)
quite independent of Evans’s appeal to the subject as
concept-using.
As long as an animal can form (nonconceptualized) goals,
and can become aware of environmental opportunities
that allow the fulfillment of those (limited) goals and projects,
then transduced information can be, or fail to be, input to a
kind of reasoning subject.
Some Final Worries:
1. Sloman and Ballard’s Worry: The Complexity
of the Agent
“I think you make your own task harder by
assuming that it must be possible to ask and
answer questions relating to extraordinary
phenomena using ordinary language, with much
use of "I" and "we" in your questions instead of
talking about what's going on in the brain and its
virtual machines” (Sloman)
Eg by repeatedly asking what do I, the agent,
experience, as if that has a clean answer.
I agree that ‘we’ are fragmented agents whose
processing involves multiple, often inconsistent,
strands and sub-processes.
But I do think that some of those strands and
processes contribute their various contents to ‘our’
experience, while others do not.
At a given moment, ‘our’ experience just is the sum
of all the strands that do this (even if that sum is
inconsistent, and even if the here-and-now reporting
strands fail to notice a lot of it)
The suggestion is that for visual perceptual
awareness, the contributing strands all share one
feature, which is that the transduced information is
temporarily poised so that it could guide response
in a way that is self-evidently apt, so the
response is transparently reasonable ‘to the agent’.
Open and Difficult Question: “To the Agent”?
Since there are many possible responses, handled by
different strands in the architecture (eg saying you
see the car, pointing if you see the car, blinking if you
see the car: see Marcel), how do we decide which
constitutes self-evident aptness ‘to the agent’?
Suppose they come apart?
I suspect that the notion of a justified, or self-evidently
apt, response carries with it some requirement of scope:
that the transduced information be capable of figuring, if
our interest and attention was so directed, in any one of
an open sweep of personal-level plans and projects.
That’s what makes the experience the experience of a
distinct agent.
2. How to fully operationalize the key notion of a
‘justifying reason’ or of ‘self-evidently apt response’, so
as to apply it rigorously to Fido?
How do we know when an action is self-evidently apt
to an agent, once we downgrade the importance of
report?
Not easy.
We can track confidence in response (using signal
detection paradigms etc) but that isn’t enough.
Can try counterfactuals (Clark (2007)): e.g., is the
information available for use in an open-ended set of
doggy projects and purposes?
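The ‘confidence in response’ point connects to standard signal-detection measures: sensitivity d′ = z(hits) − z(false alarms) separates discrimination ability from response bias, which is exactly why above-chance performance (high d′) can coexist with the subject’s feeling of merely guessing, as in blindsight. A minimal sketch (the rates below are invented for illustration):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = z(hits) - z(false alarms),
    using the inverse normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Invented rates: reliable discrimination (d' near 2) is compatible
# with the subject reporting that she is merely guessing.
sensitivity = d_prime(0.84, 0.16)
print(round(sensitivity, 2))
```

This is why tracking confidence alone underdetermines the question: d′ measures what the visual system can discriminate, not whether the response is self-evidently apt to the agent.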
3. Is attention unnecessary for conscious visual
awareness?
Probably.
Koch and Tsuchiya argue for ‘attention without
consciousness’ and ‘consciousness in the near
absence of attention’
If today’s story is right, active attention, of the kind
that yields noticing, is not necessary for conscious
perceptual experience
(“Attention and Consciousness: Two Distinct Brain
Processes” Trends in Cognitive Sciences 11(1) (2007): 16-22)
But see also Cory Wright on the phenomenal status of
‘unattended visual stuff’ as a question not worth trying
to resolve, for now at least.
Concluding (1)
Overall, my biggest worry is Sloman and Ballard’s worry.
I agree that we are fragmented bags of processing, yet
(perhaps unlike DD) I think there are facts concerning the
(complex, multiple, perhaps even inconsistent) contents of
visual perceptual experience.
If there are such facts, how do we get at them, and what
explains them?
The problem gets worse once we see that report and
noticing are inadequate indicators.
Tried for a story!!
( + If there are no such facts, what use is the notion of
perceptual experience (as against mere perceptual pick-up)
at all? Should we just abandon it?)
Concluding (2)
1. CB cases do not reveal sparseness of internal
representations of the scene, so much as lack of
noticing, and the influence of ‘motor deference’
2. There is more in our perceptual experience than we
typically notice, and sometimes (Sperling cases)
more than we can (all at once) notice.
3. What is distinctive of visual perceptual experience is
that it provides for self-evidently apt responses
based on the visually transduced information.
Generally: Beware the ‘Narrow Vision of Conscious
Vision’ but don’t neglect the deep links between
conscious experience and agency.