Minds & Machines (Nov 17, 2014)
Minds and Machines
Introduction to Philosophy; Phil 11
Rasmus Grønfeldt Winther
November 18, 2014
Motivation
• HAL 9000: Mission Explained
• HAL 9000: Confrontation
• Lal & Data (Schooling + Philosophy) @ 13.50 – 15.50; (Feeling) @ 22.06 – 28.34
Practice
1. Will computers ever be able to feel pain?
2. Will computers ever become conscious?
Charles Babbage’s Analytical Engine
The Two Opposing Views
1. Computational Theory of the Mind (CTM) (Turing, Fodor)
2. Anti-CTM; mind ≠ computer (Searle, Dreyfus)
Alan Turing
Turing’s Hope: CTM
• "I believe that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. … I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted." (Turing in PBF, p. 290)
Alan Turing
The Turing Test: "The Imitation Game"
Alan Turing
The phrase "The Turing Test" is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think. According to Turing, the question whether machines can think is itself "too meaningless" to deserve discussion. However, if we consider the more precise—and somehow related—question whether a digital computer can do well in a certain kind of game that Turing describes ("The Imitation Game"), then—at least in Turing's eyes—we do have a question that admits of precise discussion. Moreover, as we shall see, Turing himself thought that it would not be too long before we did have digital computers that could "do well" in the Imitation Game. (Oppy and Dowe, SEP, Introduction)
We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the
interrogator decide wrongly as often when the game is played like this [with a machine instead of a man]
as he does when the game is played between a man and a woman? These questions replace our original,
“Can machines think?”
• Note 1: behavioral criteria
• Note 2: only looking at "cognitive" tasks: "we do not wish to penalize the machine for its inability to shine in beauty competitions, nor to penalize a man for losing in a race against an airplane." [also: is asking whether a machine can think as interesting as asking whether a submarine can swim? (Edsger Dijkstra)]
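To make the just-quoted setup concrete, here is a purely illustrative Python sketch of the game's structure; the canned players and questions are invented placeholders, not anything Turing specified.

```python
import random

def machine_player(question):
    """Hypothetical stand-in for the machine taking the part of A."""
    canned = {"What is 2 + 2?": "4", "Do you dream?": "Sometimes."}
    return canned.get(question, "I'd rather not say.")

def human_player(question):
    """Stand-in for the human contestant; here just another canned table."""
    canned = {"What is 2 + 2?": "Four.", "Do you dream?": "Yes, often."}
    return canned.get(question, "Let me think about that.")

def imitation_game(questions):
    # The interrogator sees only the labels X and Y, never the players.
    players = [machine_player, human_player]
    random.shuffle(players)
    transcripts = {label: [(q, play(q)) for q in questions]
                   for label, play in zip(("X", "Y"), players)}
    # From these transcripts alone, the interrogator must guess
    # which of X and Y is the machine.
    return transcripts

print(imitation_game(["What is 2 + 2?", "Do you dream?"]))
```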
The Turing Machine
Alan Turing
• Not a literal computer; it is a model of computation in general.
• "simple abstract computational devices intended to help investigate the extent and limitations of what can be computed." (Copeland, SEP, first para)
• "LCMs [logical computing machines: Turing's expression for Turing machines] can do anything that could be described as 'rule of thumb' or 'purely mechanical'." (Turing 1948: 7, from Copeland, SEP, Introduction)
• Universal Turing Machine
Alonzo Church
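What a Turing machine is can also be shown directly. Below is a minimal Python sketch, using a transition-table encoding chosen here for illustration (not Turing's own notation); the example machine simply flips 0s and 1s and then halts.

```python
def run_turing_machine(program, tape, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape Turing machine.

    `program` maps (state, symbol) -> (new_symbol, move, new_state),
    where move is "L" or "R". The machine halts in state "halt".
    """
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# A tiny machine that flips every 0 to 1 and every 1 to 0, then halts.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert, "010011"))   # -> 101100_
```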
Computational Theory of Mind
In a nutshell:
CTM = CAR + RTM
Mind = Reasoning + Representation :: Program = Algorithm + Data
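The analogy on this slide can be cashed out with a toy example of my own (not from the lecture): a "program" visibly decomposes into a data representation plus an algorithm defined over it, just as CTM decomposes mind into representation plus reasoning.

```python
# Data: a representation of some facts (person -> that person's parents).
parents = {"Alice": ["Bob", "Carol"], "Bob": ["Dave"]}

# Algorithm: a reasoning procedure defined over that representation.
def ancestors(person, family_tree):
    """Return everyone reachable by following parent links."""
    found = set()
    stack = list(family_tree.get(person, []))
    while stack:
        p = stack.pop()
        if p not in found:
            found.add(p)
            stack.extend(family_tree.get(p, []))
    return found

print(ancestors("Alice", parents))   # {'Bob', 'Carol', 'Dave'}
```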
Contemporaneous Critiques of Turing's CTM
• The mathematical objection: "There are a number of results of mathematical logic which can be used to show that there are limitations to the power of discrete state machines." (e.g., Gödel's Theorem) (Turing)
• Lady Lovelace's objection: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." (Turing citing Lady Lovelace)
Anti-CTM: The Chinese Room
John Searle
• Jack does not understand any Chinese. However, he inhabits a room which contains a book with detailed instructions about how to manipulate Chinese symbols. He does not know what the symbols mean, but he can distinguish them by their shape. If you pass a series of Chinese symbols into the room, Jack will manipulate them according to the instructions in the book, writing down some notes on scratch paper, and eventually will pass back a different set of Chinese symbols. This results in what appears to be an intelligible conversation in Chinese. (In fact, we can suppose that "the room" containing Jack and the book of instructions passes a Turing Test for understanding Chinese.) (from James Pryor's Harvard class, http://www.jimpryor.net/teaching/courses/mind/notes/searle.html)
YOU in a Chinese Room
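The rule book in the thought experiment can be caricatured as a lookup table. The sketch below is an invented illustration (not Searle's own example): it returns plausible-looking Chinese replies by pure shape-matching, with no access to what any symbol means.

```python
# The "book of instructions": purely formal rules from input shapes
# to output shapes. Nothing here encodes meaning.
rule_book = {
    "你好吗？": "我很好，谢谢。",
    "今天天气好吗？": "是的，天气很好。",
}

def chinese_room(symbols_in):
    """Jack's procedure: look up the incoming shapes and copy out
    whatever shapes the book dictates. No understanding required."""
    return rule_book.get(symbols_in, "对不起，我不明白。")

print(chinese_room("你好吗？"))   # appears to be a sensible reply
```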
Critiques of Searle's CR
Jim Pryor
• The most important of these is the Systems Reply. According to the systems reply, Jack does not himself implement the Chinese room software. He is only part of the machinery. The system as a whole--which includes Jack, the book of instructions, Jack's scratch paper, and so on--is what implements the Chinese room software. The functionalist is only committed to saying that this system as a whole understands Chinese. It is compatible with this that Jack does not understand Chinese. (Ibid., Pryor)
Critique of Turing as Computationalist
Stevan Harnad
• "So I do not believe that Turing was a computationalist: he did not think that thinking was just computation. He was perfectly aware of the possibility that in order to be able to pass the verbal TT (only symbols in and symbols out) the candidate system would have to be a sensorimotor robot, capable of doing a lot more than the verbal TT tests directly, and drawing on those dynamic capacities in order to successfully pass the verbal TT." (Harnad)
Conclusion 1: Substrate-Neutral?
CTM adopts functionalism:
• substrate-neutrality of software
• multiple realizability of software in hardware
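A toy illustration of multiple realizability (my own example, not from the lecture): one functional specification, two structurally different realizations that are indistinguishable from the outside.

```python
# One functional specification: map a list of numbers to their sum.
# Two different "substrates" realizing it.

def sum_iterative(numbers):
    """Realization 1: an explicit loop with a running total."""
    total = 0
    for n in numbers:
        total += n
    return total

def sum_recursive(numbers):
    """Realization 2: recursion, a structurally different mechanism."""
    return 0 if not numbers else numbers[0] + sum_recursive(numbers[1:])

# Functionally identical despite different inner workings, just as
# functionalism says the same mental software could run on brains or silicon.
assert sum_iterative([1, 2, 3]) == sum_recursive([1, 2, 3]) == 6
```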
Searle's Anti-CTM
• "perhaps Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll." (Searle in PBF, p. 306) (Searle in Conversations on Consciousness: "unlikely!")
Conclusion 2: Three Questions
1. Is the Mind Software to the Brain's Hardware? (Strong AI)
2. Is thinking/reasoning computing?
3. What is the relationship between the mind (or the brain) and a model (or simulation) of the mind (or the brain)? Identity, representation, mapping, none?
Note 1: Distinguish "is" from "represents"
Note 2: Consider Strong vs. Weak AI
Question 3: Modeling the Brain
Henry Markram builds a brain in a supercomputer (5:00)
Note that Modeling the Brain ≠ Modeling the Mind
Why?
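To give a rough sense of what simulating brain tissue involves at the smallest scale, here is a minimal sketch of a single leaky integrate-and-fire neuron (my own toy code, not Markram's model; real brain simulations wire together vast numbers of far more detailed units). Even a perfect simulation of neurons is a model of the brain; whether anything about the mind has thereby been modeled is exactly what question 3 asks.

```python
def leaky_integrate_and_fire(input_current, dt=1.0, tau=10.0,
                             threshold=1.0, reset=0.0):
    """Simulate one simplified neuron: the membrane potential leaks toward 0,
    is driven by input current, and emits a spike when it crosses threshold."""
    v = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        v += dt * (-v / tau + current)   # leak plus drive
        if v >= threshold:
            spikes.append(t)             # record a spike...
            v = reset                    # ...and reset the potential
    return spikes

# Constant drive produces a regular spike train.
print(leaky_integrate_and_fire([0.2] * 50))
```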
Philosophical Fields According to Minds and Machines
• Logic
• ????
• Political Philosophy
• ????
• Ethics
• Epistemology
• Philosophy of Mind
• Metaphysics
• Philosophy of Science