Transcript Document

Topics in Biological Physics
Seminar
Information and Computation – Session B:
A. Turing, 1950 – Computing Machinery and Intelligence.
J. von Neumann, 1951 – Design of Computers, Theory of
Automata and Numerical Analysis.
By: Adam Lampert
Can Computers Become as
Intelligent as We Are?
• Fundamental part – Are we simply
complicated computers?
• Practical part – How do we build
“intelligent” computers?
Fundamental Part - Are We
Simply Complicated
Computers?
What is a computer?
• Turing Machine (Computers)
– Finite Automata (Control)
– Infinite tape (Store)
– Executive unit
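A minimal sketch of these three parts in Python (the machine and its rule table are illustrative, not from the talk):

```python
# A tiny Turing machine simulator: a finite control (the `rules` table)
# plus an unbounded tape (a dict of cells) and an executive loop.
def run_tm(rules, tape_str, state="q0", blank="_"):
    tape = dict(enumerate(tape_str))        # the (conceptually infinite) tape
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)      # read the cell under the head
        state, write, move = rules[(state, symbol)]
        tape[head] = write                  # write, then move left or right
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example control: flip every bit, halt at the first blank.
rules = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm(rules, "10110"))               # prints 01001
```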
Universality of Turing Machines
All of the following are equivalent to TM:
• TM
• TM with many tapes
• Cellular automata
• Usual computers*
• Logic gates
• Neural networks
• Physical realizations
Church-Turing thesis (informally): any physically realized discrete
computation can be carried out by an equivalent Turing Machine.
Are We Simply Complicated
Computers Then?
• Conjecture (Turing): Computers may become indistinguishable from humans.
• Why should we agree with Turing?
– Universality of computational devices (Church-Turing thesis)
– Consistent with our intuition about the physical world
• Why shouldn't we agree with Turing?
– Consciousness
– Limitation of computation
– Continuous computation
– Self replication
Arguments Against Turing
Conjecture - Consciousness
• Argument: Computers, as automated devices, do not have
consciousness, and are therefore distinguishable from humans.
• Answer 1: Complicated enough computers might be able to be conscious.
• Answer 2: Even if computers cannot be conscious, that does not mean
they are empirically distinguishable from humans.
Arguments Against Turing Conjecture –
Computers Are Limited
• Background:
– Is there a problem that no computer can ever solve?
– YES!
– The halting problem.
– Gödel's incompleteness theorem.
• Argument: Computers are fundamentally limited, so we can do better than
them.
• Answer: we are also fundamentally limited!
Proof of the Halting Problem
• The problem: is there an algorithm that decides whether a given TM accepts a
given input or not?
• Define <M> as the (string) representation of a TM M.
• Assume that the algorithm exists: H(<M>, w) = True if M returns True on input w,
False otherwise.
• Define: S(<M>) = H(<M>, <M>)
• Define: T(<M>) = ¬S(<M>) = False if M returns True on input <M>, True otherwise.
• T(<T>) = False if T returns True on input <T>, True otherwise.
• A contradiction!
• Why did we get the contradiction?
– We have represented a TM in terms of its own language.
– Therefore, we could announce: “this statement is false”.
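The diagonal argument can be sketched in Python, assuming for contradiction that the decider H existed (the names follow the slide; H cannot actually be implemented):

```python
# Hypothetical decider: H(M, w) = True if machine M returns True on input w.
# No such algorithm exists -- assuming it does is the point of the proof.
def H(M, w):
    raise NotImplementedError

def S(M):
    return H(M, M)        # S(<M>) = H(<M>, <M>)

def T(M):
    return not S(M)       # T(<M>) = not S(<M>)

# Now run T on its own description <T>:
# T(<T>) returns True  iff  H says T does not return True on <T>.
# Either answer contradicts itself, so H cannot exist.
```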
Arguments Against Turing Conjecture – Continuous Computation
• Background:
– The Church-Turing thesis applies to discrete machines.
– Continuous machines may be capable of computations beyond the Turing
limit, and may solve the halting problem for Turing machines.
– Among such machines are certain (theoretical models of) neural networks
and chaotic systems.
• Argument: Our brain is continuous, capable of computations beyond
the Turing limit.
• Answer: Our world is noisy; nearby states of the brain are
indistinguishable.
Arguments Against Turing
Conjecture – Self-Replication
• Argument: if A constructs B, then A
must be more complicated than B.
• Therefore, a computer cannot self-replicate, but we can.
• Answer: A can be just as complicated
as B, and a computer can indeed self-replicate (von Neumann, 1951).
• Furthermore, this does not impose any
limitation on the rest of the machine's abilities.
Self-Replicating Machine –
von Neumann Construction
• Machine A – constructs machine T from its description I_T.
• Machine B – generates a copy of any given instruction I_T.
• Machine C – receives instruction I_T, operates A to create T,
operates B to copy I_T, and supplies T with I_T.
• Machine D is composed of the triplet A + B + C. It generates T + I_T from I_T.
• In particular, D generates D + I_D from I_D.
• Machine E is composed of D + I_D.
• E is self-replicating!
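The same separation of constructor and description shows up in software as a quine, a program that prints its own source. A minimal Python sketch, in which the string plays the role of the description I_D and the code plays the machines that copy and instantiate it:

```python
d = 'd = %r\nprint(d %% d)'
print(d % d)
```

Running it prints exactly its own two lines. As in the von Neumann construction, nothing here contains information about itself directly: the code manipulates the description, and the description encodes the code.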
Self-Replicating Machine - Langton
Loops - Background
• Cellular Automata – at each time step, the value of each cell is determined
by its own current value and the values of its neighbors.
(Diagrams: 1D and 2D cellular automata.)
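A one-dimensional example can be written in a few lines of Python (elementary rule 110 is chosen here purely as an illustration):

```python
# 1D cellular automaton: each cell's next value depends on itself
# and its two neighbors, encoded in the 8 bits of the rule number.
RULE = 110

def step(cells):
    n = len(cells)
    return [(RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 30 + [1] + [0] * 30       # start from a single live cell
for _ in range(15):
    print("".join(".#"[c] for c in row))
    row = step(row)
```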
Self-Replicating Machine - Langton
Loops
• Langton loops are loop-shaped objects within a certain 2D CA,
devised by Langton in 1984.
• These loops are self-replicating.
Cell Replication
• Very simplified description of cell replication:
– The DNA is composed of two strands.
– “DNA polymerase” and other enzymes separate them and build up a new
DNA out of each strand.
– A always goes with T, C always goes with G.
– Each strand contains all the information.
– Two identical DNAs are composed.
– Then, the cell is divided into two cells such that each one
contains a DNA and about half of the other cell contents.
• Computational perspective:
– Neither DNA nor DNA polymerase et al. contains any
information about itself.
– DNA polymerase can simply duplicate any given DNA.
– The DNA contains only the information on how to construct
DNA polymerase et al.
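The base-pairing step can be illustrated with a toy Python snippet (the sequence is made up):

```python
# Base pairing: A<->T, C<->G. Complementing each separated strand
# yields two identical double helices -- each strand carries all the information.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[base] for base in strand)

strand = "ATCGGTA"
print(complement(strand))              # TAGCCAT
print(complement(complement(strand)))  # ATCGGTA -- the original is recovered
```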
Practical Part - How Do We
Build “Intelligent”
Computers?
How Do We Examine If a Machine Is
Intelligent? Empirical Test
• Standard Turing test (diagram).
• Imitation game (diagram).
Weaknesses of the Turing test
• May not cover all aspects of intelligence.
• The interrogator must be quite skilled to detect sophisticated
computers.
• Many unintelligent human behaviors must also be
simulated by the machine.
• The originally proposed test is mostly textual, although some
aspects of intelligence may not be.
• Does not examine real-time responses.
Turing’s Claim (1950)
• “I believe that in about 50 years’
time it will be possible to
program computers … to make
them play the imitation game so
well that an average interrogator
will not have more than 70
percent chance of making the
right identification after five
minutes of questioning.”
Historical background
• 1936 – Turing machine.
• 1939-40 – The Bombe, a machine for Enigma decryption
during World War II.
• 1943 – J. Eckert and J. Mauchly begin construction of
ENIAC, considered the first general-purpose electronic computer; it was
used to calculate ballistic firing tables during World War II.
Historical Background - Continue
• Late 30s and 40s – Recent research in neurology had shown that the brain is an electrical
network of neurons that may fire in all-or-nothing pulses.
• 1943 – W. Pitts and W. McCulloch showed that networks of idealized artificial neurons
may perform logical functions.
• 1947 – The invention of the transistor (it later replaced the vacuum tube). Note that a
vacuum tube was estimated by von Neumann (1951) to be less efficient than a neuron
cell by a ratio of one to a million, comparing performance per unit volume and energy
consumption.
• 1948-1949 – G. Walter's analogue robots, capable of phototaxis: they found their way to
light.
• 1949 – EDSAC – inspired by J. von Neumann, constructed by M. Wilkes and his team in
England. It calculated arithmetic, differential equations, power series, etc.
Bottom line: computers were used mostly for numerical purposes, but
some inspiration could come from neurology.
A Bit More About EDSAC (1949)
• Weight: 1 ton
• Area: 45 m²
• Storage: 2K bytes
Human Intelligence vs. Machine
Intelligence
• In certain problems humans have a clear advantage over
today’s computers.
– Visual recognition.
– Language.
• In certain other problems today’s computers have a clear
advantage over today’s humans.
– Numerical calculations.
– Searching over large amounts of data.
Machine Learning
• Child computer
• Teacher
• Adult computer
Machine Learning – Example:
Generalization
• Goal: generate a computer that returns y = ax + b for any given x.
• Equip the child computer with “the answer is a straight line”.
This is called the inductive bias.
• The teacher or environment provides it with the value of y for two values
of x.
• The computer generalizes the answer and can calculate y for any x,
as in the sketch below.
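A minimal sketch of this generalization step in Python (the two training points are made up):

```python
# Inductive bias: "the answer is a straight line", i.e. y = a*x + b.
# Two teacher-supplied samples suffice to pin down a and b.
def learn_line(x1, y1, x2, y2):
    a = (y2 - y1) / (x2 - x1)   # slope
    b = y1 - a * x1             # intercept
    return lambda x: a * x + b  # the "adult computer": answers for any x

f = learn_line(0.0, 1.0, 2.0, 5.0)   # teacher provides y for two values of x
print(f(10.0))                        # generalizes to unseen x: prints 21.0
```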
Machine Learning – Example: The
Perceptron
• The perceptron is a very simple model of a neural
network.
• Goal: give the correct answer y for any input x.
• Learning program – gets x as input and δ as the
correct output, and changes the weights w and b
according to:
w_i ← w_i + α(δ − y)·x_i,  b ← b + α(δ − y)
(a runnable sketch follows Example 2 below).
• The perceptron is a linear classifier: it converges if
and only if the data set is linearly separable, in which case
it makes only a bounded number of mistakes.
• Today there are far more sophisticated models
of neural networks.
Perceptron – Example 1: And
• Initial randomized configuration: W1 = 0.22, W2 = 0.47, b = 0.51.
• Learning rule – for each wrong classification:
W1 = W1 ± α×X1
W2 = W2 ± α×X2
b = b ± α
Here, α = 0.1.
• First training sample: X1 = 1, X2 = 0, δ = 0.
• Second training sample: X1 = 0, X2 = 1, δ = 0.
(The slide diagrams show the updated weight values after each sample:
0.42, 0.37, 0.32 and 0.37, 0.51, 0.41.)
Perceptron – Example 2: Xor
Our perceptron is not able to represent the XOR function.
(The slide diagram shows a perceptron with weights 0.55, 0.65, 0.06.)
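A runnable sketch of the perceptron and the learning rule above, reusing the initial weights and α from Example 1 (the epoch count and truth-table encoding are my own choices):

```python
# Perceptron with the update rule w_i <- w_i + alpha*(delta - y)*x_i,
# b <- b + alpha*(delta - y); only wrong classifications change the weights.
def train(samples, alpha=0.1, epochs=20):
    w1, w2, b = 0.22, 0.47, 0.51   # initial configuration from Example 1
    for _ in range(epochs):
        for x1, x2, delta in samples:
            y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            w1 += alpha * (delta - y) * x1
            w2 += alpha * (delta - y) * x2
            b += alpha * (delta - y)
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]   # (X1, X2, delta)
XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

f = train(AND)
print([f(x1, x2) for x1, x2, _ in AND])  # converges: [0, 0, 0, 1]
g = train(XOR)
print([g(x1, x2) for x1, x2, _ in XOR])  # never matches XOR
```

AND is linearly separable, so the training converges; XOR is not, so no choice of W1, W2, b can reproduce it.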
Where Are We Today?
• The winner of the Loebner Prize in 2008 managed to fool 3 out of 12
judges into believing it was human, in a short textual conversation (you can
talk with “Frank” at www.artificial-solutions.com. Try also
http://www.abenteuermedien.de/jabberwock, winner of the 2003
prize).
• Great progress in many problems of AI.
• Humans are still much better at many tasks (recognition of objects
within pictures, translation of text).
• Today’s computers out-compete humans at games with few choices
and complete information (Checkers, Chess(?)).
• Expert humans are still better at games with many choices or
incomplete information (Go, Bridge).