Slide 1 - Yonsei


Software Agent - BDI Architecture

Outline

• BDI Agent
• AgentSpeak(L)
• Summary

1/39

Definition of BDI Architecture

• The Belief-Desire-Intention (BDI) architectures are examples of practical reasoning: the process of deciding, moment by moment, which action to perform in the furtherance of our goals
  – Two-front reasoning
• Agents not only strive to achieve their goals but must take time to reflect on whether their goals are still valid, and possibly revise them
• The BDI architecture is an example of balancing reactive behavior with goal-directed behavior
• BDI agents use two important processes
  – Deliberation: deciding what goals we want to achieve
  – Means-ends reasoning: deciding how we are going to achieve them
• To differentiate between these three concepts:
  – I believe that if I study hard I will pass this course
  – I desire to pass this course
  – I intend to study hard

2/39

[Diagram: BELIEFS and DESIRES feed into a SELECTION FUNCTION, which produces an INTENTION]

3/39

Examples of BDI Agents

• Hotel manager
  – Duties: room supervision (checking occupancy, making reservations), giving cleaning orders, giving repair orders, submitting needs
  – Models
    • Beliefs – room status and schedule
    • Desires – provide reservations to clients
    • Intentions – order cleaning / order repairs
• Hotel maid
  – Duties: cyclic room cleaning (changing bedclothes, tablecloths and towels, vacuuming, cleaning the bathroom, keeping cabinets stocked), room cleaning on the manager's demand, submitting defects, taking fresh supplies (bedclothes, towels, etc.) from the warehouse manager and returning used ones to him
  – Models
    • Beliefs – status of room supplies and devices
    • Desires – make the room clean / detect defects
    • Intentions – clean rooms / submit defects / exchange used supplies with the warehouse

4/39

Role of Intentions

• Intentions drive means-ends reasoning:
  – Having formed an intention, I will attempt to achieve it
  – This involves deciding how to achieve it
  – If one action fails, I will try another approach
• Intentions constrain future deliberation:
  – I will not entertain options that are inconsistent with my intentions
• Intentions persist:
  – I will not give up my intentions without good reason
  – Typically, they persist until either I believe they have been achieved, I believe they will never be achieved, or the reason I had the intention is no longer present
• Intentions influence the beliefs on which future practical reasoning is based:
  – If I adopt an intention, I will plan for the future on the assumption that I will achieve that intention

5/39

Balancing Reactivity and Goal-directed

• How often should an agent reconsider its intentions?
  – Bold agents: an agent that rarely stops to reconsider will continue attempting to achieve intentions even after they are no longer possible, or after it no longer has any reason for achieving them
  – Cautious agents: an agent that constantly reconsiders will spend insufficient time actually working to achieve its intentions, and hence may never achieve them
• The main factor affecting how these agents perform in different environments is the rate of world change, r:
  – If r is low, bold agents do well compared with cautious ones: while cautious agents waste time reconsidering their intentions, bold agents are busy working towards and achieving their goals
  – If r is high, cautious agents outperform bold agents: cautious agents can recognize when their intentions are doomed, and can also take advantage of serendipitous situations and new opportunities

6/39
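The trade-off above can be illustrated with a toy, deterministic simulation. This is an invented sketch (not the actual bold-vs-cautious experiment the slide alludes to): the world's current goal changes every C ticks, so a small C plays the role of a high rate of change r; achieving an intention takes W ticks of work; and re-deliberating costs a full tick, which a cautious agent pays every other tick while a bold agent only re-deliberates after completing an intention. All parameter values are arbitrary.

```python
def run(agent, T=1000, C=100, W=5):
    """Count completed intentions that still match the world's current goal.

    The world's goal is t // C (it changes every C ticks); finishing an
    intention takes W ticks of work; deliberation consumes a whole tick.
    """
    score = progress = intention = 0
    for t in range(T):
        goal = t // C
        if agent == "cautious" and t % 2 == 0:
            intention = goal          # deliberate: re-commit to the current goal,
            continue                  # at the cost of doing no work this tick
        progress += 1                 # bold agents just keep working
        if progress == W:
            if intention == goal:     # the achieved intention is still relevant
                score += 1
            progress = 0
            intention = goal          # both agents re-deliberate after finishing

    return score

print(run("bold"), run("cautious"))            # slow world: bold wins
print(run("bold", C=3), run("cautious", C=3))  # fast world: cautious wins
```

With C=100 the bold agent completes twice as many intentions as the cautious one; with C=3 the bold agent never finishes an intention that is still current, while the cautious agent still scores regularly.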

Schematic of BDI Architecture

• A belief revision function (brf)
• A set of current beliefs
• An option generation function
• A set of current desires (options)
• A filter function
• A set of current intentions
• An action selection function

7/39

Components of a BDI Agent (1)

• A set of current beliefs
  – These represent the information the agent has about its current environment
• A belief revision function (brf)
  – Takes the perceptual input and the agent's current beliefs and, using these, determines a new set of beliefs
• An option generation function
  – Determines the options available to the agent, using its current beliefs about the environment and its current intentions
• A set of current desires (options)
  – These represent possible courses of action available to the agent

8/39

Components of a BDI Agent (2)

• A filter function
  – Represents the agent's deliberation process and determines the agent's intentions on the basis of its current beliefs, desires and intentions
• A set of current intentions
  – These represent the agent's current focus – those states of affairs it has committed to trying to bring about
• An action selection function
  – Determines an action to perform on the basis of the current intentions

9/39

Formalization of a BDI Agent

• Let Bel, Des and Int be the sets of all possible beliefs, desires and intentions
  – We will not consider the contents of these sets – they depend on the purpose of the agent – but often they are logical formulas
• There must be a notion of consistency defined on these sets, e.g.:
  – Is an intention to achieve x consistent with the belief y?
• The internal state of a BDI agent at a given instant is a triple (B, D, I), where B ⊆ Bel, D ⊆ Des and I ⊆ Int
• The belief revision function is a mapping: brf : ℘(Bel) × P → ℘(Bel), where P is the set of percepts
  – On the basis of the current percept and the current beliefs, it determines a new set of beliefs

10/39

Functions

• The option generation function is a mapping: options : ℘(Bel) × ℘(Int) → ℘(Des)
  – It is responsible for the agent's means-ends reasoning – how to achieve its intentions
  – Some of the options feed back – recursively elaborating a hierarchical plan structure, until it reaches executable actions
• The agent's deliberation process (filter) is a mapping: filter : ℘(Bel) × ℘(Des) × ℘(Int) → ℘(Int)
  – Updates the agent's intentions
  – Drops impossible or non-beneficial intentions
  – Retains those not yet achieved and expected to be of benefit
  – Finally, it adopts new intentions, either to achieve existing intentions or to exploit new opportunities
• NB. Intentions must come from somewhere: filter(B, D, I) ⊆ I ∪ D, for all B, D and I

11/39

Action Selection Function

• The execute function simply returns any executable intention: execute : ℘(Int) → A
• The whole action function of a BDI agent, action : P → A, is simply:

  function action(p : P) : A
  begin
      B := brf(B, p);
      D := options(B, I);
      I := filter(B, D, I);
      return execute(I)
  end

12/39
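Assembled into code, the four functions above are exactly the agent's control loop. The following Python sketch transcribes the action function directly; the thermostat-style instantiation of brf/options/filter/execute at the bottom is an invented example, not from the slides:

```python
class BDIAgent:
    """The action function above: revise beliefs, generate options,
    deliberate, then act on the chosen intentions."""

    def __init__(self, brf, options, filt, execute, beliefs, intentions):
        self.brf, self.options, self.filt, self.execute = brf, options, filt, execute
        self.B, self.I = beliefs, intentions

    def action(self, percept):
        self.B = self.brf(self.B, percept)      # B := brf(B, p)
        D = self.options(self.B, self.I)        # D := options(B, I)
        self.I = self.filt(self.B, D, self.I)   # I := filter(B, D, I)
        return self.execute(self.I)             # return execute(I)

# A deliberately tiny instantiation: a thermostat-like agent whose beliefs
# are strings and whose only options are "heat" and "idle".
agent = BDIAgent(
    brf=lambda B, p: {b for b in B if not b.startswith("temp:")} | {p},
    options=lambda B, I: {"heat"} if "temp:cold" in B else {"idle"},
    filt=lambda B, D, I: D,           # adopt every current option
    execute=lambda I: sorted(I)[0],   # return any executable intention
    beliefs=set(), intentions=set(),
)
print(agent.action("temp:cold"))   # heat
print(agent.action("temp:ok"))     # idle
```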

Example: Passing the Course

• A student agent perceives the following beliefs:
  Beliefs₁ = brf(Beliefs₀, ρ₁)
           = { workHard ⇒ passCourse,
               workHard ⇔ attendLectures ∧ completeCoursework ∧ review }
• The agent has an initial intention to pass the course:
  Intentions₀ = { passCourse }
• The agent's desires are freshly generated each cycle (they do not persist). The option generation function leads to desires to pass the course and its consequences:
  Desires₁ = options(Beliefs₁, Intentions₀)
           = { passCourse, workHard, attendLectures, completeCoursework, review }

13/39

Example: Generating Intentions

• The filter function leads to some new intentions being added:
  Intentions₁ = filter(Beliefs₁, Desires₁, Intentions₀)
              = { passCourse, workHard, attendLectures, completeCoursework, review }
• One or more of these will then be executed before the agent's deliberation cycle recommences.

14/39

Example: Obtaining New Beliefs

• Suppose the agent perceives new information which leads to his beliefs being revised:
  Beliefs₂ = brf(Beliefs₁, ρ₂)
           = { cheat ⇒ passCourse,
               ¬(workHard ∧ cheat),
               workHard ⇒ passCourse,
               workHard ⇔ attendLectures ∧ completeCoursework ∧ review }

15/39

Example: Revising Desires and Intentions

• The agent recomputes his current desires:
  Desires₂ = options(Beliefs₂, Intentions₁) = { passCourse, cheat }
• And intentions:
  Intentions₂ = filter(Beliefs₂, Desires₂, Intentions₁) = { passCourse, cheat }
• The agent drops his original intention to work hard (and its consequences) and adopts a new one: to cheat

16/39

Example: Adding More Beliefs

• Subsequently, the agent perceives that if caught cheating, he will no longer pass the course. What's more, he is certain to be caught:
  Beliefs₃ = brf(Beliefs₂, ρ₃)
           = (Beliefs₂ \ { cheat ⇒ passCourse }) ∪ { cheat ⇒ caught, caught ⇒ ¬passCourse }
• Because the new beliefs lead to an inconsistency, the agent has had to drop his belief in cheat ⇒ passCourse

17/39

Example: Revising Desires and Intentions

• The agent recomputes his desires and intentions:
  Desires₃ = options(Beliefs₃, Intentions₂)
           = { passCourse, workHard, attendLectures, completeCoursework, review }
  Intentions₃ = filter(Beliefs₃, Desires₃, Intentions₂)
              = { passCourse, workHard, attendLectures, completeCoursework, review }
• Because it is no longer consistent to cheat (even though it may be preferable to working hard), the agent drops that intention and re-adopts workHard (and its consequences)

18/39
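The deliberation step in the last cycle – rejecting cheat because its believed consequences now defeat passCourse – can be mimicked with a few lines of forward chaining. The rule encoding below is a loose propositional paraphrase of the beliefs in this example (with `not_` marking negation), not the slides' exact formulas:

```python
# Beliefs_3, paraphrased: cheating gets you caught, being caught means
# not passing, working hard means passing.
RULES = {
    "cheat": ["caught"],
    "caught": ["not_passCourse"],
    "workHard": ["passCourse"],
}

def consequences(atom, rules):
    """Everything believed to follow from committing to `atom`."""
    seen, frontier = set(), [atom]
    while frontier:
        a = frontier.pop()
        if a not in seen:
            seen.add(a)
            frontier.extend(rules.get(a, []))
    return seen

def consistent(option, goal, rules):
    """A filter-style check: reject options whose consequences defeat the goal."""
    return "not_" + goal not in consequences(option, rules)

print(consistent("cheat", "passCourse", RULES))     # False: cheating dooms the course
print(consistent("workHard", "passCourse", RULES))  # True: workHard is re-adopted
```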

AgentSpeak(L)

• A model that shows a one-to-one correspondence between the model theory, proof theory and the abstract interpreter
  – An attempt to bridge the gap between theory and practice
  – A natural extension of logic programming for the BDI agent architecture
  – Provides an elegant abstract framework for programming BDI agents
  – Based on a restricted first-order language with events and actions
  – The behavior of the agent (i.e., its interaction with the environment) is dictated by the programs written in AgentSpeak(L)

19/39

AgentSpeak(L)

Basic Notions (1)

• A set of base beliefs: facts in the logic programming sense
• A set of plans: context-sensitive, event-invoked recipes that allow hierarchical decomposition of goals as well as the execution of actions with the purpose of accomplishing a goal
• Belief atom
  – A first-order predicate in the usual notation
  – Belief atoms or their negations are termed belief literals
• Goal: a state of the system which the agent wants to achieve
  – Achievement goals
    • predicates prefixed with the operator '!'
    • state that a state of the world where the associated predicate is true is to be achieved
    • in practice, these initiate the execution of subplans
  – Test goals
    • predicates prefixed with the operator '?'
    • return a unification for the associated predicate with one of the agent's beliefs; they fail if no unification is found

20/39

AgentSpeak(L)

Basic Notions (2)

• Triggering event
  – Defines which events may initiate the execution of a plan
  – An event can be
    • internal, when a subgoal needs to be achieved
    • external, when generated from belief updates as a result of perceiving the environment
  – Two types of triggering events: related to the addition ('+') and deletion ('-') of attitudes (beliefs or goals)
• Plans: refer to the basic actions that an agent is able to perform on its environment

  p ::= te : ct <- h

  where:
  – te – the triggering event (denoting the purpose of the plan)
  – ct – a conjunction of belief literals representing a context; the context must be a logical consequence of the agent's current beliefs for the plan to be applicable
  – h – a sequence of basic actions or (sub)goals that the agent has to achieve (or test) when the plan, if applicable, is chosen for execution

21/39

+concert(A, V) : likes(A) <- !book_tickets(A, V).
  (triggering event : context <- body)

+!book_tickets(A, V) : ¬busy(phone) <- call(V); …; !choose_seats(A, V).
  (triggered by the addition of an achievement goal; call(V) is a basic action)

22/39
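The anatomy of a plan – triggering event, context, body – and the later distinction between relevant and applicable plans can be made concrete with a small data structure. The Python encoding below is a hand-rolled sketch of the two concert plans above (beliefs as a plain dict, contexts as predicates; the string keys are an assumption of this sketch, not AgentSpeak(L) syntax):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Plan:
    te: str                        # triggering event
    ct: Callable[[Dict], bool]     # context, checked against current beliefs
    h: List[str]                   # body: basic actions and (sub)goals

plans = [
    Plan("+concert(A,V)", lambda b: b.get("likes(A)", False),
         ["!book_tickets(A,V)"]),
    Plan("+!book_tickets(A,V)", lambda b: not b.get("busy(phone)", False),
         ["call(V)", "...", "!choose_seats(A,V)"]),
]

def relevant(event):               # the triggering event matches...
    return [p for p in plans if p.te == event]

def applicable(event, beliefs):    # ...and the context follows from the beliefs
    return [p for p in relevant(event) if p.ct(beliefs)]

chosen = applicable("+concert(A,V)", {"likes(A)": True})
print([p.h for p in chosen])       # the body that posts the subgoal !book_tickets(A,V)
```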

AgentSpeak(L)

Basic Notions (3)

• Intentions
  – Plans the agent has chosen for execution
  – Intentions are executed one step at a time
  – A step can
    • query or change the beliefs
    • perform actions on the external world
    • suspend the execution until a certain condition is met
    • submit new goals
  – The operations performed by a step may generate new events, which, in turn, may start new intentions
  – An intention succeeds when all its steps have been completed; it fails when certain conditions are not met or actions being performed report errors

23/39

Syntax

ag ::= bs ps
bs ::= at1. … atn.            (n ≥ 0)
at ::= P(t1, …, tn)           (n ≥ 0)
ps ::= p1 … pn                (n ≥ 1)
p  ::= te : ct <- h.
te ::= +at | -at | +g | -g
ct ::= true | l1 & … & ln     (n ≥ 1)
h  ::= true | f1 ; … ; fn     (n ≥ 1)
l  ::= at | not (at)
f  ::= A(t1, …, tn) | g | u   (n ≥ 0)
g  ::= !at | ?at
u  ::= +at | -at

AgentSpeak(L)

24/39

AgentSpeak(L)

Informal Semantics (1)

• The interpreter for AgentSpeak(L) manages
  – a set of events
  – a set of intentions
  – three selection functions
• Events, which may start off the execution of plans that have relevant triggering events, can be:
  – External, when originating from perception of the agent's environment (i.e., additions and deletions of beliefs based on perception are external events); external events create new intentions
  – Internal, when generated from the agent's own execution of a plan (i.e., a subgoal in a plan generates an event of type "addition of achievement goal")
• Intentions are particular courses of action to which an agent has committed in order to handle certain events; each intention is a stack of partially instantiated plans

25/39

AgentSpeak(L)

Informal Semantics (2)

• SE (the event selection function)
  – Selects a single event from the set of events
• SO (the option selection function)
  – Selects an "option" (i.e., an applicable plan) from a set of applicable plans
• SI (the intention selection function)
  – Selects one particular intention from the set of intentions
• The selection functions are agent-specific, in the sense that they should make selections based on the agent's characteristics

26/39
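Putting the three selection functions together with the event and intention sets gives one rendering of the interpreter cycle. The sketch below is a heavily simplified, assumption-laden version: SE is FIFO, SO takes the first applicable plan, SI takes the first runnable intention, plan bodies are just lists of actions and subgoals, and events with no applicable plan are silently dropped (no failure handling):

```python
from collections import deque

def interpret(plans, external_events, beliefs, max_cycles=100):
    """plans: list of (trigger, context_fn, body); body items are
    ("act", name) for basic actions or ("!", trigger) for subgoals."""
    events = deque((e, None) for e in external_events)  # (trigger, parent intention)
    intentions, performed = [], []
    for _ in range(max_cycles):
        if events:
            trig, parent = events.popleft()             # SE: oldest event first
            appl = [p for p in plans if p[0] == trig and p[1](beliefs)]
            if appl:
                body = list(appl[0][2])                 # SO: first applicable plan
                if parent is None:
                    intentions.append([body])           # external event: new intention
                else:
                    parent.append(body)                 # internal event: push on stack
        runnable = [i for i in intentions if i]
        if not runnable:
            if not events:
                break
            continue
        stack = runnable[0]                             # SI: first runnable intention
        top = stack[-1]
        if not top:                                     # plan finished: pop it
            stack.pop()
            continue
        kind, arg = top.pop(0)                          # execute one formula
        if kind == "act":
            performed.append(arg)                       # act on the environment
        else:
            events.append((arg, stack))                 # subgoal posts an internal event
    return performed

plans = [
    ("have_tea", lambda b: True, [("!", "make_tea"), ("act", "drink")]),
    ("make_tea", lambda b: "kettle" in b, [("act", "boil"), ("act", "pour")]),
]
print(interpret(plans, ["have_tea"], {"kettle"}))   # ['boil', 'pour', 'drink']
```

Note how the subgoal's plan is pushed onto the same intention stack and runs to completion before the parent plan's remaining formulas execute.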

Informal Semantics: Overview (1)

AgentSpeak(L)

27/39

Informal Semantics: Overview (2)

AgentSpeak(L)

28/39

Example

ALICE:
• During lunch time, forward all calls to Carla.
• When I am busy, incoming calls from colleagues should be forwarded to Denise.

29/39

Beliefs

• user(alice).

• user(bob).

• user(carla).

• user(denise).

• ~status(alice, idle).

• status(bob, idle).

• colleague(bob).

• lunch_time(“11:30”).

Example

30/39

Example: Plans

user(alice).    user(bob).    user(carla).    user(denise).
~status(alice, idle).    status(bob, idle).
colleague(bob).    lunch_time("11:30").

"During lunch time, forward all calls to Carla":

+invite(X, alice) : lunch_time(T) <- !call_forward(alice, X, carla).    (p1)

"When I am busy, incoming calls from colleagues should be forwarded to Denise":

+invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise).    (p2)

+invite(X, Y) : true <- connect(X, Y).    (p3)

31/39

Example: Plans

user(alice).    user(bob).    user(carla).    user(denise).
~status(alice, idle).    status(bob, idle).
colleague(bob).    lunch_time("11:30").

+invite(X, alice) : lunch_time(T) <- !call_forward(alice, X, carla).    (p1)
+invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise).    (p2)
+invite(X, Y) : true <- connect(X, Y).    (p3)

+!call_forward(X, From, To) : invite(From, X)
    <- +invite(From, To); -invite(From, X).    (p4)

+!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle))
    <- +invite(From, To); -invite(From, Y).    (p5)

32/39

Example: Plans

The complete belief base and plan library:

user(alice).    user(bob).    user(carla).    user(denise).
~status(alice, idle).    status(bob, idle).
colleague(bob).    lunch_time("11:30").

+invite(X, alice) : lunch_time(T) <- !call_forward(alice, X, carla).    (p1)
+invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise).    (p2)
+invite(X, Y) : true <- connect(X, Y).    (p3)
+!call_forward(X, From, To) : invite(From, X) <- +invite(From, To); -invite(From, X).    (p4)
+!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y).    (p5)

33/39

Example: Execution 1

• A new event is sensed from the environment, +invite(bob, alice) (there is a call for Alice from Bob).
• There are three relevant plans for this event (p1, p2 and p3) – the event matches the triggering events of those three plans.

Relevant plans and unifiers:
p1: +invite(X, alice) : lunch_time(T) <- !call_forward(alice, X, carla).    {X = bob}
p2: +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise).    {X = bob}
p3: +invite(X, Y) : true <- connect(X, Y).    {Y = alice, X = bob}

34/39

Example: Execution 2

• Only the context of plan p2 is satisfied – colleague(bob) holds – so p2 is applicable.
• A new intention based on this plan is created in the set of intentions, because the event was external, generated from the perception of the environment.
• The plan starts to be executed. It adds a new event, this time an internal event: +!call_forward_busy(alice, bob, denise)

Intention 1 (stack):
  +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)    {X = bob}

35/39

Example: Execution 3

• A plan relevant to this new event is found (p5):

p5: +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle))
    <- +invite(From, To); -invite(From, Y).
Unifier: {From = bob, Y = alice, To = denise}

• p5's context condition is satisfied, so it becomes an applicable plan, and it is pushed on top of intention 1 (the event was generated internally).

Intention 1 (stack, top first):
  +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y)    {From = bob, Y = alice, To = denise}
  +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)    {X = bob}

36/39

Example: Execution 4

• A new internal event is created, +invite(bob, denise).
• Three relevant plans for this event are found: p1, p2 and p3.
• However, only plan p3 is applicable in this case, since the contexts of the others are not satisfied.
• The plan is pushed on top of the existing intention.

Intention 1 (stack, top first):
  +invite(X, Y) : true <- connect(X, Y)    {Y = denise, X = bob}
  +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y)    {From = bob, Y = alice, To = denise}
  +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)    {X = bob}

37/39

Example: Execution 5

• On top of the intention is a plan whose body contains an action.
• The action connect(bob, denise) is executed and removed from the intention.
• When all formulas in the body of a plan have been removed (i.e., have been executed), the whole plan is removed from the intention, and so is the achievement goal that generated it.

Intention 1 (stack, top first):
  +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- -invite(From, Y)    {From = bob, Y = alice, To = denise}
  +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise)    {X = bob}

• The only thing that remains to be done is -invite(bob, alice) (this belief is removed from the belief base).
• This ends a cycle of execution, and the process starts all over again, checking the state of the environment and reacting to events.

38/39
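The whole trace (Executions 1-5) can be reproduced in miniature. The sketch below hard-codes the five plans as Python functions, models beliefs as a set of tuples, and handles subgoals by direct recursion instead of an explicit intention stack – a deliberate simplification of the interpreter, with the SO choice fixed to "first applicable plan in the order p1..p5". The `is_lunch_time` check and the `("lunch_time_now",)` atom are hypothetical conveniences of this sketch:

```python
beliefs = {
    ("user", "alice"), ("user", "bob"), ("user", "carla"), ("user", "denise"),
    ("status", "bob", "idle"),     # ~status(alice, idle): modelled by absence
    ("colleague", "bob"),
    ("lunch_time", "11:30"),
}
performed = []                     # basic actions executed on the environment

def is_lunch_time():
    return ("lunch_time_now",) in beliefs   # hypothetical "is it lunch now?" test

def add_invite(frm, to):
    """+invite(From, To): update beliefs, then trigger the +invite plans."""
    beliefs.add(("invite", frm, to))
    # SO: first applicable plan among p1, p2, p3
    if to == "alice" and is_lunch_time():                    # p1
        call_forward("alice", frm, "carla")
    elif to == "alice" and ("colleague", frm) in beliefs:    # p2
        call_forward_busy("alice", frm, "denise")
    else:                                                    # p3
        performed.append(("connect", frm, to))

def call_forward(x, frm, to):      # p4: +!call_forward(X, From, To)
    if ("invite", frm, x) in beliefs:
        add_invite(frm, to)
        beliefs.discard(("invite", frm, x))

def call_forward_busy(y, frm, to): # p5: +!call_forward_busy(Y, From, To)
    if ("invite", frm, y) in beliefs and ("status", y, "idle") not in beliefs:
        add_invite(frm, to)
        beliefs.discard(("invite", frm, y))

add_invite("bob", "alice")         # external event: Bob calls Alice
print(performed)                   # [('connect', 'bob', 'denise')]
```

As in the slides, Bob's call to the busy Alice triggers p2, whose subgoal fires p5, whose +invite(bob, denise) update finally fires p3 and connects Bob to Denise; the invite(bob, alice) belief is gone by the end of the cycle.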

Summary

• BDI Agents
  – An example of practical reasoning: the process of deciding, moment by moment, which action to perform in the furtherance of our goals
  – An example of balancing reactive behavior with goal-directed behavior
• AgentSpeak(L) has many similarities with traditional logic programming, which would favor its becoming a popular language
  – It proves quite intuitive for those familiar with logic programming
  – It has a neat notation, thus providing quite elegant specifications of BDI agents

39/39