CS 5368: Artificial Intelligence
Fall 2010
Lecture 1: Introduction
8/26/2010
Mohan Sridharan
Slides adapted from Dan Klein
Course Information
http://www.cs.ttu.edu/~smohan/Teaching/Fall10_CS5368/index.html
 Book: Russell & Norvig, AI: A Modern Approach, 3rd Edition
 Book website: http://aima.cs.berkeley.edu
 Prerequisites:
 Linear algebra, probability, and calculus.
 There will be a significant amount of programming and mathematics.
 Work and Grading:
 Assignments (programming: ~5, responses: once/week): 50%
 Class participation: 10%
 Final project: 40%
 Pseudo-averaging for final grades.
 Academic integrity: zero tolerance policy.
 Pac-man domain!
Announcements
• Important tasks this week:
• P0: Python tutorial is online. Please do not neglect to do this task!
• Keep track of announcements on the course website.
• Return responses to the assignments by the due date.
Today
 What is artificial intelligence?
 What can AI do?
 What is this course?
Sci-Fi AI?
What is AI?
 The science of making machines that:
 Think like humans (Cognitive Modeling)
 Think rationally (Laws of Thought)
 Act like humans (Turing Test)
 Act rationally (the focus of this course)
What About the Brain?
 Brains (human minds) are very good at making rational decisions (but not perfect).
 “Brains are to intelligence as wings are to flight”.
 Brains aren’t as modular as software.
 Lessons learned: prediction and simulation are key to decision making.
Rational Decisions
We’ll use the term rational in a particular way:
 Maximizing your expected utility.
 Only concerns decisions, not the thought process behind them.
 Goals are expressed in terms of the utility of outcomes.
A better title for this course would be: Computational Rationality
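To make "maximizing expected utility" concrete, here is a minimal Python sketch of the decision rule EU(a) = sum over outcomes s of P(s | a) * U(s). The actions, outcome probabilities, and utilities below are invented for illustration, not taken from the course.

# Expected-utility maximization: a minimal sketch.
# The actions, probabilities, and utilities are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping an action name to its (probability, utility) list."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "take_umbrella":  [(0.3, 70), (0.7, 80)],   # (P(rain), utility), (P(sun), utility)
    "leave_umbrella": [(0.3, 0), (0.7, 100)],
}
print(rational_choice(actions))  # -> take_umbrella (EU 77 vs. 70)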
Maximize Your Expected Utility
 A valid goal in this course or beyond!
Related Coursework
 Mathematics:
 Formal logic, algorithms, constraint satisfaction (Math, CS).
 Psychology and Philosophy:
 Learning and memory (Psych, Phil). HCI, human factors (Psych, IE).
 Design and control:
 Multiagent control, optimal decision-making, design (ME).
 Architecture and design:
 Multi-core architectures, VLSI (ECE, CS).
 Applications:
 Robotics, computer vision, multiagent systems, RL (CS).
A (Short) History of AI
 1940-1950: Early days
 1943: McCulloch & Pitts: Boolean circuit model of the brain.
 1950: Turing's “Computing Machinery and Intelligence”.
 1950-70: Excitement!
 1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine.
 1956: Dartmouth meeting: “Artificial Intelligence” adopted.
 1965: Robinson's complete algorithm for logical reasoning.
 1970-88: Knowledge-based approaches
 1969-79: Early development of knowledge-based systems.
 1980-88: Expert systems industry booms.
 1988-93: Expert systems industry busts: “AI Winter”.
 1988-present: Statistical approaches
 Resurgence of probability, focus on uncertainty.
 General increase in technical depth.
 Agents and learning systems… “AI Spring”?
 2000-present: Where are we now?
What Can AI Do?
Quiz: Which of the following can be done at present?
 Play a decent game of table tennis?
 Drive safely along a curving mountain road?
 Drive safely along Marsha Sharp Freeway?
 Buy a week's worth of groceries on the web?
 Buy a week's worth of groceries at United?
 Discover and prove a new mathematical theorem?
 Converse successfully with another person for an hour?
 Perform a complex surgical operation?
 Unload a dishwasher and put everything away?
 Translate spoken Chinese into spoken English in real-time?
 Write an intentionally funny story?
 Translate Texan into English in real-time?
Unintentionally Funny Stories
 One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was. Irving told him there was a beehive in the oak tree. Joe walked to the oak tree. He ate the beehive. The End.
 Henry Squirrel was thirsty. He walked over to the river bank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned. The End.
 Once upon a time there was a dishonest fox and a vain crow. One day the crow was sitting in his tree, holding a piece of cheese in his mouth. He noticed that he was holding the piece of cheese. He became hungry, and swallowed the cheese. The fox walked over to the crow. The End.
[Schank, Tale-Spin System, 1984]
Natural Language
 Speech technologies
 Automatic speech recognition (ASR)
 Text-to-speech synthesis (TTS)
 Dialog systems
 Language processing technologies
 Machine translation
 Information extraction
 Information retrieval, question answering
 Text classification, spam filtering, etc…
Vision (Perception)
• Object recognition.
• Scene segmentation.
• Image classification.
Robotics
 Robotics:
 Part mechanical engineering, part AI.
 Reality is much harder than simulations!
 Technologies:
 Navigation.
 Disaster rescue.
 Automation.
 Soccer!
 We ignore:
 Mechanical aspects.
 Methods for control.
Images from stanfordracing.org, UT-AustinVilla, Honda ASIMO sites
Vision on Robots
Logic
 Logical systems:
 Theorem provers.
 NASA fault diagnosis.
 Question answering.
 Methods:
 Deduction systems.
 Constraint satisfaction.
 Satisfiability solvers.
 Ignored in this course!
Image from Bart Selman
Game Playing
 May '97: Deep Blue vs. Kasparov
 First match won against a reigning world champion.
 200 million board positions per second!
 Humans understood 99.9% of Deep Blue's moves.
 Can do the same now with a PC cluster.
 Open question:
 How does human cognition deal with the search space explosion of chess?
 Can humans compete with computers at all?
 1996: Kasparov beats Deep Blue: “I could feel --- I could smell --- a new kind of intelligence across the table.”
 1997: Deep Blue beats Kasparov: “Deep Blue hasn't proven anything.”
Text from Bart Selman, image from IBM’s Deep Blue pages
Decision Making
• Scheduling, e.g., airline routing, military operations.
• Route planning, e.g., MapQuest.
• Medical diagnosis.
• Automated help desks.
• Fraud detection.
• Spam classifiers.
• Web search engines.
• … Lots more!
Course Topics
 Part I: Making Decisions
 Fast search.
 Adversarial search.
 Part II: Modeling Uncertainty and Planning
 Probability theory, Markov decision processes (MDPs), and reinforcement learning (RL).
 Reasoning over time (HMMs, Kalman filters, particle filters). Machine learning.
 Classical planning and POMDPs.
 Applications
 Natural language, vision, robotics.
Designing Rational Agents
 An agent is an entity that perceives and acts.
 A rational agent selects actions to maximize utility.
 Percepts, environment, and action space dictate techniques for selecting rational actions.
 Perfect rationality is not always possible!
[Diagram: the agent receives percepts from the environment through sensors, an agent program ("?") chooses, and actuators deliver actions back to the environment]
 This course is about:
 General AI techniques for a variety of problems.
 Learning to recognize when and how a new problem can be solved with an existing technique.
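A minimal sketch of that perceive-decide-act loop in Python; the class and method names (Agent, get_action, observe, apply) are invented for illustration and are not the course's code.

# A minimal agent/environment loop, mirroring the percepts -> ? -> actions
# diagram above. All names here are illustrative, not from the course code.

class Agent:
    def get_action(self, percept):
        """The '?' box: map the current percept to an action."""
        raise NotImplementedError

def run(agent, environment, steps):
    for _ in range(steps):
        percept = environment.observe()      # sensors deliver a percept
        action = agent.get_action(percept)   # the agent program decides
        environment.apply(action)            # actuators change the environment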
PEAS Scheme
 Task environment consists of:
 Performance measure.
 Environment definition.
 Actuators define action choices.
 Sensors provide input for decision making.
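As a concrete illustration, here is one possible PEAS description for the Pacman domain; the entries are my own reading, not taken from the slides.

# A hypothetical PEAS description of the Pacman task environment
# (the entries are illustrative, not from the slides).
pacman_peas = {
    "performance": "food eaten, ghosts avoided, time taken",
    "environment": "a maze with food pellets, power pellets, and ghosts",
    "actuators":   ["North", "South", "East", "West", "Stop"],
    "sensors":     "positions of Pacman, the ghosts, food, and walls",
}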
Properties of Task Environment
 Fully observable vs. partially observable.
 Single agent vs. multiagent.
 Deterministic vs. stochastic.
 Episodic vs. sequential.
 Static vs. dynamic.
 Discrete vs. continuous.
 Known vs. unknown.
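To make the dimensions concrete, here is one plausible classification of classic Pacman; these judgments are mine, not the slides', and some depend on the exact variant of the game.

# One plausible classification of Pacman along these dimensions
# (my own reading; details vary with the game variant).
pacman_properties = {
    "observability": "fully observable: the whole maze is visible",
    "agents":        "multiagent: Pacman plus the ghosts",
    "dynamics":      "stochastic if the ghosts move randomly, else deterministic",
    "episodes":      "sequential: each move affects future states",
    "change":        "dynamic in real-time play",
    "state/actions": "discrete: grid positions and four-way moves",
    "knowledge":     "known: the rules of the game are given",
}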
Pacman as an Agent
[Diagram: Pacman as the agent, with sensors delivering percepts from the maze environment and actuators delivering actions back to it]
Agent Programs
 Simple reflex agents.
 Model-based reflex agents.
 Goal-based agents.
 Utility-based agents.
Reflex Agents
 Consider the past and present, but not future predictions, to select an action.
 Encode preferences as a function of the percepts and action.
[Diagram: sensors feed percepts into a preference function, which drives the actuators]
Reflex Agents
 Reflex agents:
 Choose an action based on the current percept (and maybe memory).
 May have memory or a model of the world’s current state.
 Do not consider the future consequences of their actions.
 Act on how the world IS.
 Can a reflex agent be rational?
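A minimal reflex-agent sketch in Python; the percept format (a dict from legal moves to what lies in each adjacent square) is invented for illustration. It rates actions by the current percept alone, with no lookahead.

# A minimal reflex agent (the percept format is illustrative): eat adjacent
# food if any is visible; otherwise move at random. No model of the future.
import random

def reflex_action(percept):
    """percept: dict mapping each legal move to the contents of that square."""
    for move, contents in percept.items():
        if contents == "food":              # immediate preference: visible food
            return move
    return random.choice(list(percept))     # nothing to eat: act arbitrarily

print(reflex_action({"East": "food", "South": "empty", "West": "empty"}))
# -> East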
Goal-Based Agents
 Goal-based agents:
 Plan ahead: ask “what if”.
 Base decisions on (hypothesized) consequences of actions.
 Must have a model of how the world evolves in response to actions.
 Act on how the world WOULD BE.
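By contrast, a goal-based agent consults a model before acting. A minimal sketch, with an invented one-step model and goal test: it simulates each action and returns one whose predicted outcome satisfies the goal.

# A minimal goal-based agent sketch (the model and goal test are illustrative).
# Unlike the reflex agent, it asks "what if": it simulates each action with a
# model of the world and keeps an action whose outcome reaches the goal.

def goal_based_action(state, actions, model, is_goal):
    """model(state, action) -> predicted next state; is_goal(state) -> bool."""
    for action in actions:
        if is_goal(model(state, action)):    # hypothesize, then test
            return action
    return None  # no single action reaches the goal; deeper search is needed

# Toy usage: states are integers, actions shift the state, the goal is 0.
model = lambda s, a: s + a
print(goal_based_action(2, [-1, -2, +1], model, lambda s: s == 0))  # -> -2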
Agent Programs
 Simple reflex agents.
 Model-based reflex agents.
 Goal-based agents.
 Utility-based agents.
 How do these transfer to learning agents? (Section 2.4.6)
Summary
 A broad introduction to AI.
 Some history and an overview.
 Some examples.
 Next chapter: search algorithms.