Transcript Document
CS 785, Fall 2001
George Tecuci
[email protected]
http://lalab.gmu.edu/
Learning Agents Laboratory
Department of Computer Science
George Mason University
Overview
1. Course objective and class introduction
2. Artificial Intelligence and intelligent agents
3. Sample intelligent agent: presentation and demo
4. Agent development:
Knowledge acquisition and problem solving
5. Overview of the course
1. Course Objective
Present principles and major methods of knowledge acquisition for
the development of knowledge bases and problem solving agents.
Major topics include: overview of knowledge engineering, general
problem solving methods, ontology design and development,
modeling of the problem solving process, learning strategies, rule
learning and rule refinement.
The course will emphasize the most recent advances in this area,
such as: knowledge reuse, agent teaching and learning, knowledge
acquisition directly from subject matter experts, and mixed-initiative knowledge base development. It will also discuss open
issues and frontier research.
The students will acquire hands-on experience with a complex,
state-of-the-art methodology and tool for the end-to-end
development of knowledge-based problem-solving agents.
2. Artificial Intelligence and intelligent agents
What is Artificial Intelligence
What is an intelligent agent
Characteristic features of intelligent agents
Sample tasks for intelligent agents
What is Artificial Intelligence
Central goals of Artificial Intelligence
Understanding the principles that make intelligence possible
(in humans, animals, and artificial agents)
Developing intelligent machines or agents
(whether or not they operate like humans)
Formalizing knowledge and mechanizing reasoning
in all areas of human endeavor
Making working with computers
as easy as working with people
Developing human-machine systems that exploit the
complementarity of human and automated reasoning
What is an intelligent agent
An intelligent agent is a system that:
• perceives its environment (which may be the physical
world, a user via a graphical user interface, a collection of
other agents, the Internet, or other complex environment);
• reasons to interpret perceptions, draw inferences, solve
problems, and determine actions; and
• acts upon that environment to realize a set of goals or
tasks for which it was designed.
[Diagram: the user/environment provides input through sensors to the Intelligent Agent, which returns output through effectors to the user/environment.]
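A minimal sketch of this perceive-reason-act cycle in Python, using the thermostat discussed on the next slide as the simplest possible instance (the class and method names are illustrative assumptions, not part of the course materials):

# Sense-reason-act loop of an agent; here the "agent" is a thermostat.
class ThermostatAgent:
    def __init__(self, low, high):
        self.low, self.high = low, high      # pre-defined temperature range

    def perceive(self, room):
        return room["temperature"]           # input/sensors

    def reason(self, temperature):
        if temperature < self.low:
            return "start heating"
        if temperature > self.high:
            return "stop heating"
        return None                          # within range: nothing to do

    def act(self, action, room):
        if action is not None:
            room["heating"] = (action == "start heating")   # output/effectors

agent = ThermostatAgent(low=18, high=22)
room = {"temperature": 15, "heating": False}
agent.act(agent.reason(agent.perceive(room)), room)
print(room)   # {'temperature': 15, 'heating': True}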
What is an intelligent agent (cont.)
Humans, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex, sophisticated control structures, are at the highest end of being an agent.
At the low end of being an agent is a thermostat.
It continuously senses the room temperature, starting or
stopping the heating system each time the current
temperature is out of a pre-defined range.
The intelligent agents we are concerned with are in
between. They are clearly not as capable as humans, but
they are significantly more capable than a thermostat.
What is an intelligent agent (cont.)
An intelligent agent interacts with a human or some other agents via some kind of agent communication language. It need not blindly obey commands: it may have the ability to modify requests, ask clarification questions, or even refuse to satisfy certain requests.
It can accept high-level requests indicating what the user
wants and can decide how to satisfy each request with
some degree of independence or autonomy, exhibiting
goal-directed behavior and dynamically choosing which
actions to take, and in what sequence.
What an intelligent agent can do
An intelligent agent can:
• collaborate with its user to improve the accomplishment of
his or her tasks;
• carry out tasks on the user’s behalf, employing some knowledge of the user’s goals or desires;
• monitor events or procedures for the user;
• advise the user on how to perform a task;
• train or teach the user;
• help different users collaborate.
Characteristic features of intelligent agents
Knowledge representation and reasoning
Transparency and explanations
Ability to communicate
Use of huge amounts of knowledge
Exploration of huge search spaces
Use of heuristics
Reasoning with incomplete or conflicting data
Ability to learn and adapt
Knowledge representation and reasoning
An intelligent agent contains an internal representation of its external
application domain, where relevant elements of the application
domain (objects, relations, classes, laws, actions) are represented
as symbolic expressions.
This mapping allows the agent to reason about the application
domain by performing reasoning processes in the domain model,
and transferring the conclusions back into the application domain.
[Figure: an Application Domain (a cup on a book on a table) and the Model of the Domain that represents it.
ONTOLOGY: BOOK, CUP, and TABLE are SUBCLASS-OF OBJECT; CUP1, BOOK1, and TABLE1 are instances (INSTANCE-OF), with CUP1 ON BOOK1 and BOOK1 ON TABLE1.
Domain law: if an object is on top of another object that is itself on top of a third object, then the first object is on top of the third object.
RULE: ∀x,y,z ∈ OBJECT, (ON x y) & (ON y z) ⇒ (ON x z)]
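A minimal sketch of how such facts and the transitivity rule might be encoded and applied by forward chaining (the representation is an illustrative assumption, not the one used by the course tools):

# ON facts from the model above, plus the rule
# (ON x y) & (ON y z) => (ON x z), applied until no new fact is derived.
facts = {("ON", "CUP1", "BOOK1"), ("ON", "BOOK1", "TABLE1")}

def apply_transitivity(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (_, x, y1) in list(derived):
            for (_, y2, z) in list(derived):
                if y1 == y2 and ("ON", x, z) not in derived:
                    derived.add(("ON", x, z))
                    changed = True
    return derived

print(apply_transitivity(facts))   # also derives ("ON", "CUP1", "TABLE1")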
Separation of knowledge from control

The problem solving engine implements a general method of interpreting the input problem based on the knowledge from the knowledge base.

[Diagram: the User/Environment sends input through sensors to the Intelligent Agent; inside the agent, the Problem Solving Engine consults the Knowledge Base (Ontology and Rules/Cases/Methods) and produces output through effectors.]

The knowledge base contains data structures that represent the objects from the application domain, general laws governing them, actions that can be performed with them, etc.
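A minimal sketch of this separation: the engine below is a fixed, domain-independent control loop, and all domain knowledge lives in the knowledge base passed to it (names and rules are illustrative assumptions):

# General problem solving engine: tries each KB rule against the input
# problem. Changing the KB changes the agent's competence, not the engine.
def problem_solving_engine(problem, knowledge_base):
    for rule in knowledge_base:
        solution = rule(problem)
        if solution is not None:
            return solution
    return "no solution found"

# A toy knowledge base (two rules); swapping it out leaves the engine untouched.
medical_kb = [
    lambda symptoms: "suspect heart problem" if "chest pain" in symptoms else None,
    lambda symptoms: "suspect flu" if "fever" in symptoms else None,
]

print(problem_solving_engine({"chest pain"}, medical_kb))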
Transparency and explanations
The knowledge possessed by the agent and its reasoning
processes should be understandable to humans.
The agent should have the ability to give explanations of
its behavior, what decisions it is making and why.
Without transparency it would be very difficult to accept,
for instance, a medical diagnosis performed by an
intelligent agent.
The need for transparency shows that the main goal of
artificial intelligence is to enhance human capabilities and
not to replace human activity.
Ability to communicate
An agent should be able to communicate with its users
or other agents.
The communication language should be as natural to
the human users as possible. Ideally, it should be unrestricted natural language.
The problem of natural language understanding and generation is very difficult, due to the ambiguity of words and sentences and to the paraphrases, ellipses, and references used in human communication.
Ambiguity of natural language
Words and sentences have multiple meanings
Diamond
• a mineral consisting of nearly pure carbon in crystalline form,
usually colorless, the hardest natural substance known;
• a gem or other piece cut from this mineral;
• a lozenge-shaped plane figure (◊);
• in Baseball, the infield or the whole playing field.
Visiting relatives can be boring.
• To visit relatives can be boring.
• The relatives that visit us can be boring.
She told the man that she hated to run alone.
• She told the man: I hate to run alone !
• She told the man whom she hated: run alone !
Other difficulties with natural language processing
Paraphrase: The same meaning may be expressed by many sentences.
Ann gave Bob a cat.
Bob was given a cat by Ann.
What Ann gave Bob was a cat.
Ann gave a cat to Bob.
A cat was given to Bob by Ann.
Bob received a cat from Ann.
Ellipsis: Use of sentences that appear ill-formed because they are
incomplete. Typically the parts that are missing have to be extracted from
the previous sentences.
Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: The beam?
John: 130
Reference: Entities may be referred to without giving their names.
Bob: What is the length of the ship USS J.F. Kennedy?
John: 1072
Bob: Who is her commander?
John: Captain Nelson.
Use of huge amounts of knowledge
In order to solve "real-world" problems, an intelligent agent
needs a huge amount of domain knowledge in its memory
(knowledge base).
Example of human-agent dialog:
User: The toolbox is locked.
Agent: The key is in the drawer.
In order to understand such sentences and to respond
adequately, the agent needs to have a lot of knowledge
about the user, including the goals the user might want to
achieve.
Use of huge amounts of knowledge (example)
User: The toolbox is locked.

Agent (reasoning): Why is he telling me this? I already know that the box is locked. I know he needs to get in. Perhaps he is telling me because he believes I can help. To get in requires a key. He knows it, and he knows I know it. The key is in the drawer. If he knew this, he would not tell me that the toolbox is locked. So he must not realize it. To make him know it, I can tell him. I am supposed to help him.

Agent: The key is in the drawer.
Exploration of huge search spaces
An intelligent agent usually needs to search huge spaces
in order to find solutions to problems.
Example 1: A search agent on the Internet
Example 2: A checkers-playing agent
[Figure: a checkers board with the 32 playable squares numbered 1 through 32.]
Exploration of huge search spaces: illustration
Determining the best move with minimax:
[Figure: a minimax game tree. Levels alternate between the player (I) and the Opponent; leaf positions are labeled win, lose, or draw, and these values are backed up the tree to determine the best move.]
Exploration of huge search spaces: illustration
The tree of possibilities is far too large to be fully generated and searched backward from the terminal nodes for an optimal move.
Size of the search space
A complete game tree for checkers has been estimated as having 10^40 nonterminal nodes. If one assumes that these nodes could be generated at a rate of 3 billion per second, the generation of the whole tree would still require around 10^21 centuries!
Checkers is far simpler than chess which, in turn, is generally
far simpler than business competitions or military games.
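A quick back-of-the-envelope check of these numbers in Python:

# 10^40 nodes generated at 3 billion nodes per second:
nodes = 10 ** 40
rate = 3e9                                   # nodes per second
seconds = nodes / rate
centuries = seconds / (3600 * 24 * 365.25 * 100)
print(f"{centuries:.1e} centuries")          # ~1.1e+21 centuries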
Use of heuristics
Intelligent agents generally attack problems for which
no algorithm is known or feasible, problems that require
heuristic methods.
A heuristic is a rule of thumb, strategy, trick, simplification,
or any other kind of device which drastically limits the
search for solutions in large problem spaces.
Heuristics do not guarantee optimal solutions. In fact they
do not guarantee any solution at all.
A useful heuristic is one that offers solutions which are good
enough most of the time.
Use of heuristics: illustration
1. Generate a partial game tree, starting from the node corresponding to the current board situation.
2. Estimate the values of the leaf nodes by using a static evaluation function.
3. Back-propagate the estimated values.

Heuristic function for board position evaluation: w1·f1 + w2·f2 + w3·f3 + …, where the wi are real-valued weights and the fi are board features (e.g., center control, total mobility, relative exchange advantage).
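A minimal sketch of this procedure: depth-limited minimax over a partial game tree, with a weighted-feature static evaluation at the leaves (the tiny tree and all feature values are invented for illustration):

# Step 1 is the tree below; step 2 is evaluate(); step 3 is the
# max/min back-propagation in minimax().
WEIGHTS = [1.0, 0.5, 2.0]                    # the wi

def evaluate(feats):
    # w1*f1 + w2*f2 + w3*f3 + ...
    return sum(w * f for w, f in zip(WEIGHTS, feats))

def minimax(node, maximizing=True):
    if "children" not in node:               # leaf of the partial tree
        return evaluate(node["features"])
    values = [minimax(c, not maximizing) for c in node["children"]]
    return max(values) if maximizing else min(values)

# A tiny partial game tree; features = (center control, total mobility,
# relative exchange advantage), all invented numbers.
tree = {"children": [
    {"children": [{"features": [2, 4, 0]}, {"features": [1, 6, -1]}]},
    {"children": [{"features": [0, 3, 1]}, {"features": [3, 2, 0]}]},
]}
print(minimax(tree))                         # value backed up to the root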
Reasoning with incomplete data
The ability to provide some solution even if not all the
data relevant to the problem is available at the time a
solution is required.
Examples:
The reasoning of a physician in an intensive care unit: “If the EKG test results are not available, but the patient is suffering chest pains, I might still suspect a heart problem.”
Planning a military course of action.
Reasoning with conflicting data
The ability to take into account data items that are more
or less in contradiction with one another (conflicting
data or data corrupted by errors).
Example:
The reasoning of a military intelligence analyst who has to cope with the deception actions of the enemy.
Ability to learn
The ability to improve its competence and efficiency.
An agent is improving its competence if it learns to
solve a broader class of problems, and to make fewer
mistakes in problem solving.
An agent is improving its efficiency if it learns to solve
more efficiently (for instance, by using less time or space
resources) the problems from its area of competence.
Illustration: concept learning
Learn the concept of ill cell by comparing examples of ill cells
with examples of healthy cells, and by creating a generalized
description of the similarities between the ill cells:
[Figure: concept learning from cell examples.
Positive examples (ill cells): ((1 light) (2 dark)), ((1 dark) (2 dark)), ((1 dark) (1 dark))
Negative examples (healthy cells): ((1 light) (2 light)), ((1 dark) (2 light))
Learned concept: ((1 ?) (? dark))]
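A minimal sketch of this generalization step, specific-to-general and attribute by attribute (a simplification of the learning methods covered later in the course):

# Keep an attribute value where all positive examples agree; otherwise
# generalize it to "?".
def generalize(examples):
    concept = list(examples[0])
    for ex in examples[1:]:
        concept = [tuple(a if a == b else "?" for a, b in zip(part, ex_part))
                   for part, ex_part in zip(concept, ex)]
    return concept

positives = [(("1", "light"), ("2", "dark")),
             (("1", "dark"), ("2", "dark")),
             (("1", "dark"), ("1", "dark"))]
print(generalize(positives))   # [('1', '?'), ('?', 'dark')]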
Ability to learn: classification
The learned concept is used to diagnose other cells
“Ill cell” concept: ((1 ?) (? dark))

[Figure: the learned concept applied to new cells.
Is this cell ill? ((1 light) (1 light)) → No
Is this cell ill? ((1 dark) (1 light)) → Yes]
This is an example of reasoning with incomplete information.
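A matching sketch to go with this classification step, treating "?" as a wildcard and letting each concept part match any part of the cell description (an assumption about the matcher, consistent with the Yes/No answers above):

# A cell is classified as ill if every part of the concept is matched
# by some part of the cell description.
def matches(pattern, part):
    return all(p == "?" or p == v for p, v in zip(pattern, part))

def is_ill(concept, cell):
    return all(any(matches(pat, part) for part in cell) for pat in concept)

concept = [("1", "?"), ("?", "dark")]
print(is_ill(concept, [("1", "light"), ("1", "light")]))   # False (No)
print(is_ill(concept, [("1", "dark"), ("1", "light")]))    # True (Yes)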
Extended agent architecture
The learning engine implements methods
for extending and refining the knowledge
in the knowledge base.
[Diagram: extended agent architecture. The User/Environment sends input through sensors to the Intelligent Agent; inside the agent, both the Problem Solving Engine and the Learning Engine use the Knowledge Base (Ontology and Rules/Cases/Methods); the agent produces output through effectors.]
Sample tasks for intelligent agents
Planning: Finding a set of actions that achieve a certain goal.
Example: Determine the actions that need to be performed in order to
repair a bridge.
Critiquing: Expressing judgments about something according to certain
standards.
Example: Critiquing a military course of action (or plan) based on the
principles of war and the tenets of army operations.
Interpretation: Inferring situation description from sensory data.
Example: Interpreting gauge readings in a chemical process plant to infer
the status of the process.
Sample tasks for intelligent agents (cont.)
Prediction: Inferring likely consequences of given situations.
Examples:
Predicting the damage to crops from some type of insect.
Estimating global oil demand from the current geopolitical world situation.
Diagnosis: Inferring system malfunctions from observables.
Examples:
Determining the disease of a patient from the observed symptoms.
Locating faults in electrical circuits.
Finding defective components in the cooling system of nuclear reactors.
Design: Configuring objects under constraints.
Example: Designing integrated circuit layouts.
Sample tasks for intelligent agents (cont.)
Monitoring: Comparing observations to expected outcomes.
Examples:
Monitoring instrument readings in a nuclear reactor to detect accident
conditions.
Assisting patients in an intensive care unit by analyzing data from the
monitoring equipment.
Debugging: Prescribing remedies for malfunctions.
Examples:
Suggesting how to tune a computer system to reduce a particular type of
performance problem.
Choosing a repair procedure to fix a known malfunction in a locomotive.
Repair: Executing plans to administer prescribed remedies.
Example: Tuning a mass spectrometer, i.e., setting the instrument's
operating controls to achieve optimum sensitivity consistent with correct
peak ratios and shapes.
Sample tasks for intelligent agents (cont.)
Instruction: Diagnosing, debugging, and repairing student behavior.
Examples:
Teaching students a foreign language.
Teaching students to troubleshoot electrical circuits.
Teaching medical students in the area of antimicrobial therapy selection.
Control: Governing overall system behavior.
Example:
Managing the manufacturing and distribution of computer systems.
Any useful task:
Information fusion.
Information assurance.
Travel planning.
Email management.
3. Sample intelligent agent: presentation and demo
Agent task: Course of action critiquing
Knowledge representation
Problem solving
Demo
Why are intelligent agents important
Critiquing
Critiquing means expressing judgments about
something according to certain standards.
Example:
Critique various aspects of a military Course of Action,
such as its viability (its suitability, feasibility, acceptability
and completeness), its correctness (which considers the
array of forces, the scheme of maneuver, and the
command and control), and its strengths and
weaknesses with respect to the principles of war and the
tenets of army operations.
Sample agent: Course of Action critiquer
Source: Challenge problem for DARPA’s High Performance
Knowledge Base (HPKB) program (FY97-99).
Background: A military course of action (COA) is a preliminary
outline of a plan for how a military unit might attempt to accomplish a
mission. After receiving orders to plan for a mission, a commander and
staff analyze the mission, conceive and evaluate potential COAs, select
a COA, and prepare a detailed plan to accomplish the mission based
on the selected COA. The general practice is for the staff to generate
several COAs for a mission, and then to make a comparison of those
COAs based on many factors including the situation, the commander’s
guidance, the principles of war, and the tenets of army operations. The
commander makes the final decision on which COA will be used to
generate his or her plan based on the recommendations of the staff and
his or her own experience with the same factors considered by the staff.
Agent task: Identify strengths and weaknesses in a COA, based on
the principles of war and the tenets of army operations.
COA Example – the sketch
Graphical depiction of a preliminary plan. It includes enough of the high-level structure and maneuver aspects of the plan to show how the actions
of each unit fit together to accomplish the overall purpose.
COA Example – the statement
Mission:
BLUE-BRIGADE2 attacks to penetrate RED-MECH-REGIMENT2 at 130600 Aug in order to enable the completion of seize
OBJ-SLAM by BLUE-ARMOR-BRIGADE1.
Close:
BLUE-TASK-FORCE1, a balanced task force (MAIN EFFORT) attacks to penetrate RED-MECH-COMPANY4, then clears RED-TANK-COMPANY2 in order to enable the completion of seize OBJ-SLAM by BLUE-ARMOR-BRIGADE1.
BLUE-TASK-FORCE2, a balanced task force (SUPPORTING EFFORT 1) attacks to fix RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 in order to prevent RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 from interfering with conducts of the MAIN-EFFORT1, then clears RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 and RED-TANK-COMPANY1.
…
Reserve:
The reserve, BLUE-MECH-COMPANY8, a mechanized infantry company, follows Main Effort, and is prepared to reinforce MAIN-EFFORT1.
Security:
SUPPORTING-EFFORT1 destroys RED-CSOP1 prior to begin moving across PL-AMBER by MAIN-EFFORT1 in order to
prevent RED-MECH-REGIMENT2 from observing MAIN-EFFORT1.
…
Deep:
Deep operations will destroy RED-TANK-COMPANY1 and RED-TANK-COMPANY2 and RED-TANK-COMPANY3.
Rear:
BLUE-MECH-PLT1, a mechanized infantry platoon secures the brigade support area.
Fires:
Fires will suppress RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 and RED-MECH-COMPANY4 and RED-MECH-COMPANY5 and RED-MECH-COMPANY6.
End State: At the conclusion of this operation, BLUE-BRIGADE2 will enable accomplishing conducts forward passage of lines through
BLUE-BRIGADE2 by BLUE-ARMOR-BRIGADE1.
MAIN-EFFORT1 will complete to clear RED-MECH-COMPANY4 and RED-TANK-COMPANY2.
SUPPORTING-EFFORT1 will complete to clear RED-MECH-COMPANY1 and RED-MECH-COMPANY2 and RED-MECH-COMPANY3 and RED-TANK-COMPANY1.
SUPPORTING-EFFORT2 will complete to clear RED-MECH-COMPANY5 and RED-MECH-COMPANY6 and RED-TANK-COMPANY3.
Explains what the units will do to accomplish the assigned mission.
COA critiquing task
Answer each of the following questions about the COA:
[Figure: critiquing questions, one per principle of war and per tenet of army operations. The principles of war provide general guidance for the conduct of war at the strategic, operational, and tactical levels. The tenets of army operations describe characteristics of successful operations.]
The Principle of Surprise (from FM100-5)
Strike the enemy at a time or place or in a
manner for which he is unprepared.
Surprise can decisively shift the balance of combat power.
By seeking surprise, forces can achieve success well out of
proportion to the effort expended. Rapid advances in
surveillance technology and mass communication make it
increasingly difficult to mask or cloak large-scale marshaling
or movement of personnel and equipment. The enemy need
not be taken completely by surprise but only become aware
too late to react effectively. Factors contributing to surprise
include speed, effective intelligence, deception, application
of unexpected combat power, operations security (OPSEC),
and variations in tactics and methods of operation. Surprise
can be in tempo, size of force, direction or location of main
effort, and timing. Deception can aid the probability of
achieving surprise.
Knowledge representation: object ontology
[Figure: fragment of the object ontology. <OBJECT> has subconcepts such as GEOGRAPHICAL-REGION, ACTION, PLAN, ORGANIZATION, MILITARY-MANEUVER, MILITARY-ATTACK, EQUIPMENT, MILITARY-EVENT, and MILITARY-TASK, with COMPLEX-MILITARY-TASK and PENETRATE-MILITARY-TASK below MILITARY-TASK in the SUBCLASS-OF hierarchy. PENETRATE1 is an INSTANCE-OF PENETRATE-MILITARY-TASK, with features such as RECOMMENDED-FORCE-RATIO 3, HAS-SURPRISE-FORCE-RATIO 6, OBJECT-ACTED-ON RED-MECH-COMPANY4, FORCE-RATIO 10.6, and IS-TASK-OF-OPERATION ATTACK2. Relations such as INDICATES-MISSION-TYPE and IS-OFFENSIVE-ACTION-FOR link tasks to “military offensive operation”.]
The ontology defines the objects from an application domain.
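A minimal sketch of how such an ontology fragment might be stored and queried, with feature values inherited along INSTANCE-OF and SUBCLASS-OF links (the dictionary format and the exact hierarchy are illustrative assumptions):

# Subclass links, instance links, and per-concept features.
subclass_of = {"PENETRATE-MILITARY-TASK": "COMPLEX-MILITARY-TASK",
               "COMPLEX-MILITARY-TASK": "MILITARY-TASK",
               "MILITARY-TASK": "OBJECT"}
instance_of = {"PENETRATE1": "PENETRATE-MILITARY-TASK"}
features = {"PENETRATE-MILITARY-TASK": {"RECOMMENDED-FORCE-RATIO": 3},
            "PENETRATE1": {"FORCE-RATIO": 10.6,
                           "OBJECT-ACTED-ON": "RED-MECH-COMPANY4"}}

def get_feature(name, feature):
    """Look up a feature, inheriting up the INSTANCE-OF/SUBCLASS-OF chain."""
    while name is not None:
        if feature in features.get(name, {}):
            return features[name][feature]
        name = instance_of.get(name) or subclass_of.get(name)
    return None

print(get_feature("PENETRATE1", "FORCE-RATIO"))              # 10.6
print(get_feature("PENETRATE1", "RECOMMENDED-FORCE-RATIO"))  # 3 (inherited)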
Knowledge representation: problem solving rules
R$ASWCER-001
IF the task to accomplish is:
ASSESS-SECURITY-WRT-COUNTERING-ENEMY-RECONNAISSANCE
FOR-COA ?O1
Question: Is an enemy recon unit present in ?O1 ?
Answer: Yes, the enemy unit ?O2 is performing the action
?O3 which is a reconnaissance action.
Condition:
?O1 IS COA-SPECIFICATION-MICROTHEORY
?O2 IS MODERN-MILITARY-UNIT--DEPLOYABLE
    SOVEREIGN-ALLEGIANCE-OF-ORG ?O4
    TASK ?O3
?O3 IS INTELLIGENCE-COLLECTION--MILITARY-TASK
?O4 IS RED--SIDE
Then accomplish the task:
ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT
FOR-COA ?O1
FOR-UNIT ?O2
FOR-RECON-ACTION ?O3
A rule is an ontology-based representation
of an elementary problem solving process.
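A minimal sketch of how the problem solving engine might apply this rule: bind the condition against the facts in the knowledge base and, if it holds, reduce the task to the one in the THEN part (the triple encoding is an assumption, and the RED--SIDE allegiance check is omitted for brevity):

# Facts about COA411, encoded as triples for illustration.
facts = {("IS", "COA411", "COA-SPECIFICATION-MICROTHEORY"),
         ("IS", "RED-CSOP1", "MODERN-MILITARY-UNIT--DEPLOYABLE"),
         ("TASK", "RED-CSOP1", "SCREEN1"),
         ("IS", "SCREEN1", "INTELLIGENCE-COLLECTION--MILITARY-TASK")}

def assess_security_wrt_countering_enemy_recon(coa):
    """Apply rule R$ASWCER-001: look for a deployable enemy unit whose
    task is an intelligence collection action."""
    for (rel, unit, cls) in facts:
        if rel != "IS" or cls != "MODERN-MILITARY-UNIT--DEPLOYABLE":
            continue
        for (rel2, u, action) in facts:
            if rel2 == "TASK" and u == unit and \
               ("IS", action, "INTELLIGENCE-COLLECTION--MILITARY-TASK") in facts:
                # THEN part: the reduced task, with its bindings
                return ("ASSESS-SECURITY-WHEN-ENEMY-RECON-IS-PRESENT",
                        coa, unit, action)
    return None

print(assess_security_wrt_countering_enemy_recon("COA411"))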
Illustration of the problem solving process
Assess COA wrt Principle of Surprise
for-coa COA411
Question: To what extent does COA411 conform to the Principle of Surprise? Does the COA assign appropriate surprise and deception actions?
Answer: I consider enemy recon.

Assess surprise wrt countering enemy reconnaissance
for-coa COA411
Question: Is an enemy reconnaissance unit present?
Answer: Yes, RED-CSOP1, which is performing the reconnaissance action SCREEN1.

Assess surprise when enemy recon is present
for-coa COA411
for-unit RED-CSOP1
for-recon-action SCREEN1
Question: Is the enemy reconnaissance unit destroyed?
Answer: Yes, RED-CSOP1 is destroyed by DESTROY1.

Report strength in surprise because of countering enemy recon
for-coa COA411
for-unit RED-CSOP1
for-recon-action SCREEN1
for-action DESTROY1
with-importance high

Conclusion: There is a strength with respect to surprise in COA411 because it contains aggressive security / counter-reconnaissance plans, destroying enemy intelligence collection units and activities. Intelligence collection by RED-CSOP1 will be disrupted by its destruction by DESTROY1.
COA critiquing demo
Why are intelligent agents important
Humans have limitations that agents may alleviate (e.g., memory for details that isn’t affected by stress, fatigue, or time constraints).
Humans and agents could engage in mixed-initiative
problem solving that takes advantage of their
complementary strengths and reasoning styles.
Why are intelligent agents important (cont)
The evolution of information technology makes
intelligent agents essential components of our future
systems and organizations.
Our future computers and most of the other systems
and tools will gradually become intelligent agents.
We have to be able to deal with intelligent agents either
as users, or as developers, or as both.
Intelligent agents: Conclusion
Intelligent agents are systems that can perform
tasks requiring knowledge and heuristic methods.
Intelligent agents are helpful, enabling us to do our
tasks better.
Intelligent agents are necessary to cope with the
increasing challenges of the information society.
Recommended reading
G. Tecuci, Building Intelligent Agents, Academic Press, 1998, pp. 1-12.
Tecuci G., Boicu M., Bowman M., and Marcu D., with a commentary by Murray Burke, “An Innovative Application from the DARPA Knowledge Bases Programs: Rapid Development of a High Performance Knowledge Base for Course of Action Critiquing,” invited paper for the special IAAI issue of AI Magazine, Vol. 22, No. 2, Summer 2001, pp. 43-61.
http://lalab.gmu.edu/publications/data/2001/COA-critiquer.pdf