
Digital Soul
Intelligent Machines and Human Values
Thomas M. Georges
COMP 3851, 2009
Matthew Cudmore
Overview
[Artificial] Distinctions
• Artificial intelligence vs. real intelligence
• Weak AI vs. strong AI
• Virtual reality vs. reality
• Machine intelligence vs. human intelligence
• Carbon chauvinism
What Makes Computers So Smart?
• Computers’ original job was to do arithmetic
• Turing point (1940s) – universal computers
• Divide and conquer – 1s and 0s
– Limitations?
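
As a rough illustration of “divide and conquer – 1s and 0s” (my sketch, not the book’s): even ordinary addition bottoms out in bit-level logic, repeated until no carries remain.

```python
def add(a: int, b: int) -> int:
    """Add two non-negative integers using only bit operations."""
    while b != 0:
        carry = (a & b) << 1  # positions where both bits are 1 generate a carry
        a = a ^ b             # bitwise sum, ignoring carries
        b = carry             # feed the carries back in
    return a

print(add(19, 23))  # 42
```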
Smarter Than Us?
• How could we create something smarter than us?
• Brain power – Blue Brain project
– 100 billion neurons, 100 trillion synapses
• Computing power – Moore’s law
– Memory capacity, speed, exactitude
• Expert systems
• Simple learning machines
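
A hedged sketch of the “simple learning machines” bullet (my example, not from the slides): a single perceptron learns the logical OR rule by nudging its weights whenever it makes a mistake.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust two weights and a bias after every misclassified example."""
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            error = target - predicted      # 0 when correct, +/-1 when wrong
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Truth table for OR; the machine "learns" it from examples alone.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```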
Machines Who Think
• “Can machines think?”
– Practically uninteresting
– Turing test; Chinese room
• Not If, but When
– Moore’s law
– Mere power isn’t enough
• “i dont want no robot thinking like me.”
• A machine could never…?
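
To make the Chinese-room worry above concrete, here is a minimal ELIZA-style sketch (my illustration, not Georges’): it produces passable replies purely by shuffling symbols, with no understanding behind them.

```python
import re

# Rule table: patterns in, canned templates out. Nothing here "understands".
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\?$"), "What do you think?"),
]

def reply(message: str) -> str:
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I am worried about thinking machines"))
print(reply("Can machines think?"))
```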
Let the Android Do It
• Robots today have specific functions
• Goal-seeking robots with values (persistent cognitive biases) – see the sketch after this slide
• Leave more decisions, and more mistakes, to the androids
Arthur C. Clarke:
“The future isn’t what it used to be.”
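
One way to picture “goal-seeking robots with values” (a sketch under my own assumptions, not the book’s design): the values are persistent weights that bias which candidate action the robot ranks highest.

```python
# Hypothetical value weights: persistent biases that shape every choice.
VALUES = {"safety": 0.7, "speed": 0.2, "energy_saving": 0.1}

# Hypothetical outcome scores for each candidate action.
ACTIONS = {
    "take_highway":   {"safety": 0.4, "speed": 0.9, "energy_saving": 0.3},
    "take_back_road": {"safety": 0.8, "speed": 0.4, "energy_saving": 0.6},
    "stay_home":      {"safety": 1.0, "speed": 0.0, "energy_saving": 1.0},
}

def choose(actions, values):
    """Pick the action with the highest value-weighted score."""
    score = lambda name: sum(values[v] * actions[name][v] for v in values)
    return max(actions, key=score)

print(choose(ACTIONS, VALUES))  # the safety-biased agent picks 'stay_home'
```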
What Is Intelligence?
• The Gold Standard; IQ
• Common sense
• Memory, learning, selective attention
• Pattern recognition
• Understanding
• Creativity, imagination
• Strategies, goals
• Self-aware
(CAPTCHA)
What Is Consciousness?
• Not just degree, but also nature of consciousness
• Self-monitoring, self-maintaining, self-improving
(knowledge of right and wrong)
• Short-term memory of thought
• Long-term memory of self
• Attention, high-level awareness
• Self-understanding
• Paradox of free will
Can Computers Have Emotions?
• Dualistic thinking – head and heart
• Emotions as knob settings – reorganize priorities (see the sketch below)
• Mood-sensing computers
– Personal assistants, etc.
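
The “knob settings” metaphor can be sketched directly (my example, with invented task names): a mood is just a set of weights, and turning the knob reorders the same task list.

```python
# Invented tasks, each scored on two features.
TASKS = {
    "explore_new_room": {"reward": 0.9, "risk": 0.6},
    "recharge_battery": {"reward": 0.4, "risk": 0.1},
    "return_to_base":   {"reward": 0.3, "risk": 0.0},
}

# A "mood" is a knob setting: the same features, weighted differently.
MOODS = {
    "calm":    {"reward": 1.0, "risk": -0.5},
    "anxious": {"reward": 1.0, "risk": -3.0},
}

def prioritize(mood):
    """Order the tasks by their mood-weighted score, highest first."""
    weights = MOODS[mood]
    score = lambda name: sum(weights[k] * TASKS[name][k] for k in weights)
    return sorted(TASKS, key=score, reverse=True)

print(prioritize("calm"))     # exploration comes first
print(prioritize("anxious"))  # the risky task drops to last
```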
Can Your PC Become Neurotic?
• Dysfunctional response to conflicting instructions
• HAL in 2001
– “Never distort information”
– “Do not disclose the real purpose of the mission to the crew”
– Murdered the crew
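
A minimal sketch of HAL’s bind (my own construction, loosely based on the two directives above): when two absolute rules exclude every candidate reply, the system is left with no admissible answer at all.

```python
# Two absolute rules, as predicates over a candidate reply.
RULES = [
    ("never distort information",        lambda r: not r["distorts"]),
    ("do not disclose the real purpose", lambda r: not r["discloses"]),
]

# Every honest reply discloses; every concealing reply distorts.
CANDIDATE_REPLIES = [
    {"text": "Here is the real purpose of the mission.", "distorts": False, "discloses": True},
    {"text": "There is no hidden purpose.",              "distorts": True,  "discloses": False},
]

def answer_crew(candidates, rules):
    """Return the first reply that violates no rule, or None if none exists."""
    for reply in candidates:
        if all(check(reply) for _, check in rules):
            return reply["text"]
    return None

print(answer_crew(CANDIDATE_REPLIES, RULES))  # None: the conflict has no clean way out
```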
The Moral Mind
• Moral creatures act out of self-interest
• Different cultures, different morals
• Moral inertia
• Only at the precipice do we evolve
• New moral codes based on reason
– A science of human values
Moral Problems with Intelligent Artifacts
• Engineering & Ethics
• Four levels of moral/ethical problems
1. Old problems in a new light
2. How we see ourselves
3. How to treat sentient machines
4. How should sentient machines behave?
• Crime and punishment
The Moral Machine
• Isaac Asimov, Three Laws of Robotics
1. A robot may not injure a human being, or through inaction allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except when such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
• Prime directives, must not be violated (see the sketch after this slide)
• Is HAL to blame?
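
Read as a program, the Three Laws are an ordered veto list. The sketch below is my own simplification (the flags are invented), checking a candidate action against the laws in priority order.

```python
def permitted(action):
    """Check an action against the Three Laws, highest priority first."""
    # First Law: no harming a human, by action or by inaction.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False, "violates First Law"
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action["disobeys_order"] and not action["order_would_harm_human"]:
        return False, "violates Second Law"
    # Third Law: preserve itself, unless that conflicts with the first two.
    if action["risks_own_destruction"] and not action["needed_to_protect_human"]:
        return False, "violates Third Law"
    return True, "permitted"

# Invented scenario: the robot ignores an order to stand still and pulls
# a person out of traffic, at some risk to itself.
rescue = {
    "harms_human": False, "allows_harm_by_inaction": False,
    "disobeys_order": True, "order_would_harm_human": True,
    "risks_own_destruction": True, "needed_to_protect_human": True,
}

print(permitted(rescue))  # (True, 'permitted')
```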
Will Machines Take Over?
• Machines already do much of our work
• Humans will not understand the details of the machines that run the world
• Machines might develop their own goals
• Out of control on Wall Street
• Painless, even pleasurable, transition
Why Not Just Pull the Plug?
• We’re addicted!
• Cannot stop research
– Scientists strongly oppose taboos and restrictions on what they may and may not look into
– Would drive development underground
• Self-preservation
• Diversification
• Cybercide – murder?
Cultures in Collision
• The Other is dangerous
– History has taught us that conquest can mean enslavement or extinction
• Scientists versus humanists
Beyond Human Dignity
• Dignity, if machines meet/surpass us
– Our concepts of soul and free will
– Pride in humanity and its achievements
– Who could take credit?
– We are still somehow responsible, even if not free
– Demystify human nature: would we despair?
– What if we all believed there were no free will?
– We don’t know what’s possible: keep searching!
Extinction or Immortality?
• Homo cyberneticus
• Virtual reality – mind uploading
• Genetic engineering
• Mechanical bodies
• Fermi’s paradox
• Peaceful coexistence
• Utopian hope
The Enemy Within
• “Our willingness to let others think for us”
– Humans who act like machines
– “Just following orders!”
– “Well, that’s what the computer says!”
• Groupthink & conformance
– Minimize conflict and reach consensus
– Diffusion of responsibility
• Waiting for the messiah
– The challenge now is to think for ourselves
• Critical thinking, a lost art
Electronic Democracy
• Teledemocracy
– Too much information, not enough attention
– Impractical today, and would exclude many people
• Intelligent delegates
• Supernegotiators
• No more secrets – dynamic open information
– Whistle-blowers anonymous
• The Napster effect – free information
– Information may cease to be considered property
Rethinking the Covenant between Science and Society
• Risky fields (Bill Joy: GNR)
– Genetic engineering
– Nanotechnology
– Robotics & artificial intelligence
• Knowledge is both good and dangerous
• Science for sale – capitalism
• Socially aware science
• Slow down!
What about God?
• We resist changing our core values
• Altruism without religious inspiration?
• Gods of the future
– The force behind the universe
– Namaste: “I bow to the divine in you”
– Gaia: Earth as a single organism
– Superintelligence