Design of Multi-Agent Systems
Teacher
Bart Verheij
Student assistants
Albert Hankel
Elske van der Vaart
Web site
http://www.ai.rug.nl/~verheij/teaching/dmas/
(Nestor contains a link)
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Deductive Reasoning Agents
Decide what to do on the basis of a theory stating the best
action to perform in any given situation
ρ, Δ ⊢ Do(a), with a ∈ Ac
where
ρ is such a theory (typically a set of rules)
Δ is a logical database that describes the current state of the world
Ac is the set of actions the agent can perform
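As a rough illustration of this scheme (my own sketch, not from the slides), the action-selection step can be written in Python; prove is a deliberately naive stand-in for a real theorem prover, and rules are assumed to have the form (premises, conclusion):

def prove(rules, database, goal):
    # Placeholder "theorem prover": naive forward chaining over rules of
    # the form (premises, conclusion); a real agent would use a proper
    # prover here.
    facts = set(database)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return goal in facts

def select_action(rules, database, actions):
    # Look for an action a such that rules, database |- Do(a).
    for a in actions:
        if prove(rules, database, "Do(%s)" % a):
            return a
    return None  # no best action can be derived

# Example: select_action([(["Hungry"], "Do(eat)")], {"Hungry"}, ["eat", "sleep"])
# returns "eat".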
Deductive Reasoning Agents
But:
Theorem proving is in general neither fast nor efficient
Calculative rationality (rationality with respect to the
moment calculation started) requires a static environment
Encoding of perception & environment into logical symbols
isn’t straightforward
So:
Use a weaker logic
Use a symbolic, not logic-based representation
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Planning: STRIPS
– Only atoms and their negation
– Only represent changes
Blocks world (blocks + a robot arm)
Stack(x,y)
– Pre {Clear(y), Holding(x)}
– Del {Clear(y), Holding(x)}
– Add {ArmEmpty, On(x,y)}
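As an illustration (my own sketch, not part of the slides), a STRIPS operator and its application to a state of ground atoms can be written as follows:

from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    pre: frozenset     # atoms that must hold before the action
    delete: frozenset  # atoms removed by the action
    add: frozenset     # atoms added by the action

def apply_op(op, state):
    # STRIPS semantics: applicable iff all preconditions hold;
    # the successor state is (state - delete) plus add.
    if not op.pre <= state:
        return None
    return (state - op.delete) | op.add

stack_A_B = Operator("Stack(A,B)",
                     pre=frozenset({"Clear(B)", "Holding(A)"}),
                     delete=frozenset({"Clear(B)", "Holding(A)"}),
                     add=frozenset({"ArmEmpty", "On(A,B)"}))

state = frozenset({"Clear(B)", "Holding(A)", "On(B,Table)"})
print(apply_op(stack_A_B, state))  # contains ArmEmpty, On(A,B), On(B,Table)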
Problems with planning
Frame problem
Describe what does not change as a result of an action
Qualification problem
Describe all preconditions of an action
Ramification problem
Describe all consequences of an action
Prediction problem
Describe how long something remains true
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Agent-oriented programming
Agent0 (Shoham)
Key idea: directly programming agents in terms of
intentional notions like belief, commitment, and
intention
In other words, the intentional stance is used as an
abstraction tool for programming!
Agent-oriented programming
Shoham suggested that a complete AOP system will
have 3 components:
– a logic for specifying agents and describing their
mental states
– an interpreted programming language for
programming agents (example: Agent0)
– an ‘agentification’ process, for converting ‘neutral
applications’ (e.g., databases) into agents
Agent-oriented programming
Agents in Agent0 have four components:
– a set of capabilities (things the agent can do)
– a set of initial beliefs
– a set of initial commitments (things the agent will do)
– a set of commitment rules
Agent-oriented programming
Each commitment rule contains
– a message condition
– a mental condition
– an action
On each ‘agent cycle’…
– The message condition is matched against the messages the
agent has received
– The mental condition is matched against the beliefs of the agent
– If the rule fires, then the agent becomes committed to the action
(the action gets added to the agent’s commitment set)
A commitment rule in Agent0
COMMIT(
  ( agent, REQUEST, DO(time, action)
  ),                               ;;; msg condition
  ( B,
    [now, Friend agent] AND
    CAN(self, action) AND
    NOT [time, CMT(self, anyaction)]
  ),                               ;;; mental condition
  self,
  DO(time, action)
)
A commitment rule in Agent0
Meaning:
If I receive a message from agent which requests me
to do action at time, and I believe that:
– agent is currently a friend
– I can do the action
– At time, I am not committed to doing any other action
then I commit to doing action at time
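A rough Python rendering of this rule (my own illustration; Agent0 itself is Lisp-based, and the message and belief representations below are hypothetical simplifications):

from collections import namedtuple

Msg = namedtuple("Msg", "sender type time action")

def commitment_rule(msg, beliefs, capabilities, commitments, now):
    # Message condition: a REQUEST from some agent to DO(time, action).
    if msg.type != "REQUEST":
        return None
    # Mental condition: the sender is currently believed to be a friend,
    # I can do the action, and I have no commitment yet for that time.
    if (("Friend", msg.sender, now) in beliefs
            and msg.action in capabilities
            and not any(t == msg.time for (t, _) in commitments)):
        return (msg.time, msg.action)  # this pair becomes a new commitment
    return None

# Example: a friend's request leads to the commitment ("t5", "open_door").
beliefs = {("Friend", "alice", "t0")}
msg = Msg("alice", "REQUEST", "t5", "open_door")
print(commitment_rule(msg, beliefs, {"open_door"}, set(), "t0"))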
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Concurrent METATEM
Concurrent METATEM is a multi-agent language in
which each agent is programmed by giving it a
temporal logic specification of the behavior it
should exhibit
These specifications are executed directly in order to
generate the behavior of the agent
Temporal logic is classical logic augmented by modal
operators for describing how the truth of
propositions changes over time
Concurrent METATEM
□important(agents)
  it is now, and will always be true that agents are important
◊important(ConcurrentMetateM)
  sometime in the future, ConcurrentMetateM will be important
⧫important(Prolog)
  sometime in the past it was true that Prolog was important
(¬friends(us)) U apologize(you)
  we are not friends until you apologize
○apologize(you)
  tomorrow (in the next state), you apologize
Concurrent METATEM
MetateM is a framework for directly executing
temporal logic specifications
The root of the MetateM concept is Gabbay’s
separation theorem:
Any arbitrary temporal logic formula can be rewritten in a
logically equivalent past ⇒ future form.
This past ⇒ future form can be used as execution rules
Concurrent METATEM
A MetateM program is a set of such rules
Execution proceeds by a process of continually
matching rules against a “history”, and firing
those rules whose antecedents are satisfied
The instantiated future-time consequents become
commitments which must subsequently be
satisfied
Concurrent METATEM
Execution is thus a process of iteratively generating a
model for the formula made up of the program
rules
The future-time parts of instantiated rules represent
constraints on this model
x [ask( x)  give( x)]
All ‘asks’ at some time in the past are followed by a ‘give’ at
some time in the future
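A toy sketch of this execution cycle (my own illustration, far simpler than real MetateM): each rule pairs a past-time condition on the history with a future-time atom that becomes a commitment, and in every new state the interpreter fires the applicable rules and tries to satisfy an outstanding commitment:

def run(rules, steps):
    history = []         # one set of atoms per time point, oldest first
    commitments = set()  # future-time obligations not yet satisfied
    for _ in range(steps):
        state = set()
        # Fire every rule whose past-time antecedent holds of the history.
        for past_cond, future_atom in rules:
            if past_cond(history):
                commitments.add(future_atom)
        # Satisfy one outstanding commitment in the current state.
        if commitments:
            state.add(commitments.pop())
        history.append(state)
    return history

# Example: "an ask at some time in the past is followed by a give",
# plus a rule that asks in the initial state.
asked = lambda h: any("ask" in s for s in h)
initially = lambda h: len(h) == 0
print(run([(asked, "give"), (initially, "ask")], 4))
# [{'ask'}, {'give'}, {'give'}, {'give'}]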
Concurrent METATEM
ConcurrentMetateM provides an operational
framework through which societies of MetateM
processes can operate and communicate
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Practical reasoning
Practical reasoning is reasoning directed towards
actions — the process of figuring out what to do:
“Practical reasoning is a matter of weighing conflicting
considerations for and against competing options,
where the relevant considerations are provided by
what the agent desires/values/cares about and what
the agent believes.” (Bratman)
Practical reasoning is distinguished from theoretical
reasoning – theoretical reasoning is directed
towards beliefs
Practical reasoning
Human practical reasoning consists of two activities:
– deliberation
deciding what state of affairs we want to achieve
– means-ends reasoning
deciding how to achieve these states of affairs
The outputs of deliberation are intentions
Intentions in practical reasoning
1. Intentions pose problems for agents, who need to determine ways of achieving them.
   If I have an intention to φ, you would expect me to devote resources to deciding how to bring about φ.
2. Intentions provide a “filter” for adopting other intentions, which must not conflict.
   If I have an intention to φ, you would not expect me to adopt an intention ψ such that φ and ψ are mutually exclusive.
3. Agents track the success of their intentions, and are inclined to try again if their attempts fail.
   If an agent’s first attempt to achieve φ fails, then all other things being equal, it will try an alternative plan to achieve φ.
Intentions in practical reasoning
4. Agents believe their intentions are possible.
   That is, they believe there is at least some way that the intentions could be brought about. Otherwise: intention-belief inconsistency.
5. Agents do not believe they will not bring about their intentions.
   It would not be rational of me to adopt an intention to φ if I believed φ was not possible. Otherwise: intention-belief incompleteness.
6. Under certain circumstances, agents believe they will bring about their intentions.
   It would not normally be rational of me to believe that I would bring my intentions about; intentions can fail. Moreover, it does not make sense that if I believe φ is inevitable that I would adopt it as an intention.
Intentions in practical reasoning
7. Agents need not intend all the expected side effects of their intentions.
   If I believe φ ⇒ ψ and I intend that φ, I do not necessarily intend ψ also. (Intentions are not closed under implication.)
   This last problem is known as the side effect or package deal problem.
Intentions in practical reasoning
Intentions are stronger than mere desires:
– “My desire to play basketball this afternoon is merely a
potential influencer of my conduct this afternoon. It
must vie with my other relevant desires [. . . ] before it
is settled what I will do. In contrast, once I intend to
play basketball this afternoon, the matter is settled: I
normally need not continue to weigh the pros and
cons. When the afternoon arrives, I will normally just
proceed to execute my intentions.” (Bratman, 1990)
Practical reasoning (abstract)
Current beliefs and perception determine next beliefs:
  brf : ℘(Bel) × Per → ℘(Bel)
Current beliefs and intentions determine next desires:
  option : ℘(Bel) × ℘(Int) → ℘(Des)
Current beliefs, desires and intentions determine next intentions:
  filter : ℘(Bel) × ℘(Des) × ℘(Int) → ℘(Int)
Current beliefs, intentions and available actions determine a plan:
  plan : ℘(Bel) × ℘(Int) × ℘(Ac) → Plan
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Implementing practical reasoning agents
B := B_initial;
I := I_initial;
loop
  p := see;
  B := brf(B,p);      // Update world model
  I := deliberate(B);
  π := plan(B,I);     // Use means-ends reasoning
  execute(π);
end;
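The same loop as Python (a direct transcription; see, brf, deliberate, plan and execute are passed in as stubs rather than implemented here):

def agent(B, I, see, brf, deliberate, plan, execute):
    # Basic practical reasoning loop: perceive, revise beliefs,
    # deliberate, plan, act.
    while True:
        p = see()          # observe the environment
        B = brf(B, p)      # update world model
        I = deliberate(B)  # decide which states of affairs to achieve
        pi = plan(B, I)    # means-ends reasoning
        execute(pi)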
Interaction between deliberation and
planning
Both deliberation and planning take time, perhaps
too much time.
Even if deliberation is optimal (maximizes expected
utility), the resulting intention may no longer be
optimal when deliberation has finished.
(Calculative rationality)
Deliberation
How does an agent deliberate?
– Option generation
in which the agent generates a set of possible
alternatives
– Filtering
in which the agent chooses between competing
alternatives, and commits to achieving them.
Implementing practical reasoning agents
B := B_initial;
I := I_initial;
loop
  p := see;
  B := brf(B,p);
  D := option(B,I);   // Deliberate (1)
  I := filter(B,D,I); // Deliberate (2)
  π := plan(B,I);
  execute(π);
end;
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Commitment Strategies
The following commitment strategies are commonly discussed
in the literature of rational agents:
– Blind commitment
A blindly committed agent will continue to maintain an intention
until it believes the intention has actually been achieved. Blind
commitment is also sometimes referred to as fanatical
commitment.
– Single-minded commitment
A single-minded agent will continue to maintain an intention until
it believes that either the intention has been achieved, or else
that it is no longer possible to achieve the intention.
– Open-minded commitment
An open-minded agent will maintain an intention as long as it is
still believed possible.
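The three strategies differ only in when an intention is dropped. As a sketch (my own summary; succeeded and impossible stand for the agent's own belief tests):

def drop_intention(strategy, succeeded, impossible):
    # succeeded: the agent believes the intention has been achieved
    # impossible: the agent believes it can no longer be achieved
    if strategy == "blind":          # fanatical: drop only once achieved
        return succeeded
    if strategy == "single-minded":  # drop once achieved or unachievable
        return succeeded or impossible
    if strategy == "open-minded":    # drop as soon as no longer believed possible
        return impossible
    raise ValueError("unknown strategy: %s" % strategy)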
Commitment Strategies
An agent has commitment both to ends (i.e., the state of affairs it
wishes to bring about), and means (i.e., the mechanism via which the
agent wishes to achieve the state of affairs)
Currently, our agent control loop is overcommitted,
both to means and ends
Modification: replan if ever a plan goes wrong
B := B_initial;
I := I_initial;
loop
  p := see;
  B := brf(B,p);
  D := option(B,I);
  I := filter(B,D,I);
  π := plan(B,I);
  while not empty(π) do        // Start plan execution
    a := head(π);
    execute(a);
    π := tail(π);
    p := see;                  // Update world model
    B := brf(B,p);
    if not sound(π,B,I) then
      π := plan(B,I);          // Replan if necessary
    end;
  end;
end;
Commitment Strategies
Still overcommitted to intentions: Never stops to
consider whether or not its intentions are
appropriate
Modification: stop to determine whether intentions
have succeeded or whether they are impossible:
(Single-minded commitment)
B := B_initial;
I := I_initial;
loop
  p := see;
  B := brf(B,p);
  D := option(B,I);
  I := filter(B,D,I);
  π := plan(B,I);
  while not (empty(π) or succeeded(B,I) or impossible(B,I)) do
      // Check whether intentions succeeded and are still possible
    a := head(π);
    execute(a);
    π := tail(π);
    p := see;
    B := brf(B,p);
    if not sound(π,B,I) then
      π := plan(B,I);
    end;
  end;
end;
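One possible reading of the succeeded and impossible tests used above (an assumption, not prescribed by the slides), with beliefs and intentions as sets of ground atoms:

def succeeded(B, I):
    # The agent believes all of its intentions have been achieved.
    return all(i in B for i in I)

def impossible(B, I):
    # The agent believes some intention can no longer be achieved,
    # recorded here as an ("impossible", i) belief.
    return any(("impossible", i) in B for i in I)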
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration
Intention Reconsideration
Our agent gets to reconsider its intentions once every time
around the outer control loop, i.e., when:
– it has completely executed a plan to achieve its current
intentions; or
– it believes it has achieved its current intentions; or
– it believes its current intentions are no longer possible.
This is limited in the way that it permits an agent to reconsider
its intentions
Modification: Reconsider intentions after executing every
action
B := B_initial;
I := I_initial;
loop
  p := see;
  B := brf(B,p);
  D := option(B,I);
  I := filter(B,D,I);
  π := plan(B,I);
  while not (empty(π) or succeeded(B,I) or impossible(B,I)) do
    a := head(π);
    execute(a);
    π := tail(π);
    p := see;
    B := brf(B,p);
    D := option(B,I);     // Reconsider (1)
    I := filter(B,D,I);   // Reconsider (2)
    if not sound(π,B,I) then
      π := plan(B,I);
    end;
  end;
end;
Intention Reconsideration
But intention reconsideration is costly!
A dilemma:
– an agent that does not stop to reconsider its intentions
sufficiently often will continue attempting to achieve its intentions
even after it is clear that they cannot be achieved, or that there is
no longer any reason for achieving them
– an agent that constantly reconsiders its intentions may spend
insufficient time actually working to achieve them, and hence
runs the risk of never actually achieving them
Solution: incorporate an explicit meta-level control component,
that decides whether or not to reconsider
B := B_initial;
I := I_initial;
loop
  p := see;
  B := brf(B,p);
  D := option(B,I);
  I := filter(B,D,I);
  π := plan(B,I);
  while not (empty(π) or succeeded(B,I) or impossible(B,I)) do
    a := head(π);
    execute(a);
    π := tail(π);
    p := see;
    B := brf(B,p);
    if reconsider(B,I) then     // Decide whether to reconsider or not
      D := option(B,I);
      I := filter(B,D,I);
    end;
    if not sound(π,B,I) then
      π := plan(B,I);
    end;
  end;
end;
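The slides leave reconsider(B,I) open; one very simple (assumed) meta-level control is to reconsider only every k-th action, trading deliberation cost against reactivity:

def make_reconsider(k):
    # Return a reconsider(B, I) function that answers True once every k calls.
    state = {"count": 0}
    def reconsider(B, I):
        state["count"] += 1
        if state["count"] >= k:
            state["count"] = 0
            return True
        return False
    return reconsider

reconsider = make_reconsider(3)  # reconsider intentions every third action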
Overview
Deductive reasoning agents
– Planning
– Agent-oriented programming
– Concurrent MetateM
Practical reasoning agents
– Practical reasoning & intentions
– Implementation: deliberation
– Implementation: commitment strategies
– Implementation: intention reconsideration