1. INTRODUCTION - University of South Carolina


Principles of Agents and Multiagent Systems
Munindar P. Singh
Michael N. Huhns
[email protected]
[email protected]
http://www.csc.ncsu.edu/faculty/mpsingh/
http://www.ece.sc.edu/faculty/Huhns/
© 1999 Singh & Huhns
Tremendous Interest in Agent Technology
Evidence:
• 400 people at Autonomous Agents 98
• 550 people at Agents World in Paris
Why?
• Vast information resources now accessible
• Ubiquitous processors
• New interface technology
• Problems in producing software
What is an Agent?
• The term agent in computing covers a wide range of
behavior and functionality.
• In general, an agent is an active computational entity
– with a persistent identity
– that can perceive, reason about, and initiate activities in
its environment
– that can communicate (with other agents)
The Agent Test
• “A system containing one or more reputed
agents should change substantively if
another of the reputed agents is added to
the system.”
Social Engineering for Agents
Computers are making more and more decisions autonomously:
• when airplanes land and take off (fuel vs. tax)
• how phone calls are routed
• how loads are controlled in an electrical grid
• when packages are delivered (pricing; choose carrier dynamically)
• which stocks are bought and sold
• electronic marketplaces
An Agent Should Act
• Benevolently
• Predictably
– consistent with its model of itself
– consistent with its model of other agents’ beliefs about
itself
Benevolence
“A Mattress in the Road”
[Figure: cars approaching a mattress in the road]
A Collective Store
• Benevolent agents might contribute information they have
retrieved, filtered, and refined to a collective store database
• Access to the collective store might be predicated on
contributions to it
[Figure: query agents contribute from the World Wide Web to a collective store database]
Agent Behavior Testbed (University of South Carolina)
[Figure: grid-world testbed. Legend: A_n = agent in cell, []_n = box in cell, (+)_n = target]
Agents in a Cooperative Information
System Architecture
[Figure: applications, an e-mail system, a web system, and a database system, each fronted by an agent (mediators, proxies, aides, wrappers)]
Agent Characteristics/1
• Locality: local or remote
• Uniqueness: homogeneous or heterogeneous
• Granularity: fine- or coarse-grained
• Persistence: transient or long-lived
• Level of Cognition: reactive or deliberative
• Sociability: autistic, aware, responsible, team player
• Friendliness: cooperative or competitive or antagonistic
• Construction: declarative or procedural
• Semantic Level: communicate what or how
• Mobility: stationary or itinerant
Agent Characteristics/2
• Autonomy: independent or controlled
• Adaptability: fixed or teachable or autodidactic
• Sharing: degree and flexibility with respect to
– communication: vocabulary, language, protocol
– intellect: knowledge, goals, beliefs, specific ontologies
– skills: procedures, "standard" behaviors, implementation languages
• Interactions: direct or via facilitators, mediators, or
“nonagents”
• Interaction Style/Quality/Nature: with each other or with
“the world”, or both?
• Do the agents model their environment, themselves, or
other agents?
Dimensions of CIS: System
• Scale is the number of agents: Individual → Committee → Society
• Interactions: Reactive → Planned
• Coordination (self interest): Antagonistic → Competitive → Cooperative → Collaborative → Benevolent → Altruistic
• Agent Heterogeneity: Identical → Unique
• Communication Paradigm: Point-to-Point → Multi-by-name/role → Broadcast
Dimensions of CIS: Agent
• Dynamism is the ability of an agent to learn: Fixed → Teachable → Autodidactic
• Autonomy: Controlled → Interdependent → Independent
• Interactions: Simple → Complex
• Sociability (awareness): Autistic → Committing → Collaborative
Challenges
• Doing the "right" thing
• Shades of autonomy
• Conventions: emergence and maintenance
• Coordination
• Collaboration
• Communication: semantics and pragmatics
• Interaction-oriented programming
BASIC CONCEPTS
Categories of Agent Research

             Human Intelligence                Ideal Intelligence
Reasoning    Agents that think like humans    Agents that think rationally
             (cognitive science)              (logic)
Behavior     Agents that act like humans      Agents that behave rationally
             (Turing test)                    (“do the right thing”)
Agent Environments
• Accessible vs. Inaccessible
• Deterministic vs. Nondeterministic
• Episodic vs. Nonepisodic
• Static vs. Dynamic
• Discrete vs. Continuous
Open information environments (e.g., InfoSleuth)
are inaccessible, nondeterministic, nonepisodic,
dynamic, and discrete
Agent Abstractions/1
• The traditional abstractions are from AI and are mentalistic
– beliefs: agent’s representation of the world
– knowledge: (usually) true beliefs
– desires: preferred states of the world
– goals: consistent desires
– intentions: goals adopted for action
Agent Abstractions/2
• The agent-specific abstractions are inherently interactional
– social: about collections of agents
– organizational: about teams and groups
– ethical: about right and wrong actions
– legal: about contracts and compliance
Agent Abstractions/3
Inherently interactional
Agents, when properly understood,
• lead naturally to multiagent systems
• provide a means to capture the fundamental abstractions that apply in all major applications but are otherwise ignored by system builders
Agents versus AI

            Traditional AI               Agents
Entities    Stand-alone                  Social: flexible autonomy,
                                         communities, responsibility
Actions     Cause and effect             Ethical concepts of right and wrong
Contracts   Simplistic (in terms of      Directed relationships capturing
            obligations)                 rights, duties, powers, and liabilities
How to Apply the Abstractions
Consider the components of any practical situation involving large and dynamic software systems:
– Dynamism => autonomy
– Openness and compliance => ability to enter into and obey contracts
– Trustworthiness => ethical behavior
Why Do These Abstractions Matter?
• Because modern applications demand going beyond traditional metaphors and models:
– virtual enterprises: manufacturing supply chains, autonomous logistics
– electronic commerce: utility management
– communityware: social user interfaces
– problem-solving by teams
A Rational Agent
Rationality depends on...
• The performance measure for success
• What the agent has perceived so far
• What the agent knows about the environment
• The actions the agent can perform
An ideal rational agent: for each possible percept
sequence, it acts to maximize its expected utility,
on the basis of its knowledge and the evidence
from the percept sequence
A Simple Reactive Agent
[Figure: sensors tell the agent what the world is like now; condition-action rules pick what action it should do now; effectors act on the environment]
A Simple Reactive Agent

function Simple-Reactive-Agent(percept)
    static: rules, a set of condition-action rules

    state <-- Interpret-Input(percept)
    rule <-- Rule-Matching(state, rules)
    action <-- Rule-Action(rule)
    return action
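The pseudocode above can be sketched as runnable Python. This is a minimal illustration, not the authors' implementation; the rule table, `interpret_input`, and the vacuum-world percepts are hypothetical.

```python
# Minimal sketch of the simple reactive agent above.
# Rules and percept names are illustrative assumptions.

def interpret_input(percept):
    # Trivial interpretation: the percept already names the state.
    return percept

def make_simple_reactive_agent(rules):
    """rules: maps an interpreted state to an action."""
    def agent(percept):
        state = interpret_input(percept)      # what the world is like now
        return rules.get(state, "no-op")      # condition-action lookup
    return agent

# Hypothetical vacuum-world rules: react to dirt, otherwise move on.
rules = {"dirty": "suck", "clean": "move"}
agent = make_simple_reactive_agent(rules)
print(agent("dirty"))   # suck
print(agent("clean"))   # move
```

Note that the agent is purely a lookup: it keeps no memory of past percepts, which is exactly the limitation the agent-with-state addresses.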
A Reactive Agent with State
[Figure: sensors plus an internal state, updated using models of how the world evolves and what my actions do, feed condition-action rules that pick what action to do now; effectors act on the environment]
A Reactive Agent with State

function Reactive-Agent-with-State(percept)
    static: rules, a set of condition-action rules
            state, a description of the current world

    state <-- Update-State(state, percept)
    rule <-- Rule-Matching(state, rules)
    action <-- Rule-Action(rule)
    state <-- Update-State(state, action)
    return action
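A Python sketch of the same idea: the agent folds each percept, and its own last action, into an internal world model before matching rules. The thermostat rules and state keys are illustrative assumptions.

```python
# Sketch of the reactive agent with state. The rule function and the
# thermostat scenario are illustrative assumptions.

class ReactiveAgentWithState:
    def __init__(self, rules):
        self.rules = rules        # condition-action rules (a function here)
        self.state = {}           # description of the current world

    def step(self, percept):
        self.state.update(percept)           # fold in what the world is like now
        action = self.rules(self.state)      # rule matching + action selection
        self.state["last_action"] = action   # record what my actions do
        return action

# Hypothetical thermostat rules over the internal state.
def thermostat_rules(state):
    return "heat" if state.get("temp", 20) < 18 else "idle"

agent = ReactiveAgentWithState(thermostat_rules)
print(agent.step({"temp": 15}))   # heat
print(agent.step({"temp": 21}))   # idle
```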
A Goal-Based Agent
[Figure: adds to the reactive agent with state a projection of what it will be like if I do action A; goals then determine what action to do now]
A Utility-Based Agent
[Figure: adds a utility measure of how happy I will be in such a state; the agent picks the action leading to the highest-utility state]
A Utility-Based Agent

function Utility-Based-Agent(percept)
    static: probs, a set of probabilistic beliefs about the state of the world

    Update-Probs-for-Current-State(probs, percept, old-action)
    Update-Probs-for-Actions(probs, actions)
    action <-- Select-Action-with-Highest-Utility(probs)
    return action
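A concrete sketch of this loop under simplifying assumptions: beliefs are a distribution over a handful of world states, updated Bayes-style from a percept likelihood, and the agent picks the action with highest expected utility. The weather scenario and all numbers are hypothetical.

```python
# Sketch of a utility-based agent: maintain probabilistic beliefs over
# world states, then choose the action with highest expected utility.
# States, actions, and numbers are illustrative assumptions.

def update_beliefs(beliefs, percept_likelihood):
    """Bayes-style update: weight each state by how well it explains the percept."""
    posterior = {s: p * percept_likelihood.get(s, 0.0) for s, p in beliefs.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

def best_action(beliefs, utility):
    """utility[action][state] -> payoff; choose the max-expected-utility action."""
    def expected_utility(action):
        return sum(p * utility[action].get(s, 0.0) for s, p in beliefs.items())
    return max(utility, key=expected_utility)

beliefs = {"rain": 0.5, "sun": 0.5}
beliefs = update_beliefs(beliefs, {"rain": 0.9, "sun": 0.2})  # percept: dark clouds
utility = {"umbrella":    {"rain": 10, "sun": 2},
           "no-umbrella": {"rain": -10, "sun": 5}}
print(best_action(beliefs, utility))   # umbrella
```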
5. INTERACTION AND
COMMUNICATION
Cognitive Economy
Prefer the simpler (more economical) explanation ("but not too
simple" - Einstein)
Implications of Cognitive Economy:
• Agents must represent their environment
• Agents must represent themselves
• Agents must represent other agents ad infinitum
• Zero-order model: other agents are the same as oneself
Coordination
A property of interaction among a set of agents performing some activity in a shared state. The degree of coordination is the extent to which they
• avoid extraneous activity
• reduce resource contention
• avoid livelock
• avoid deadlock
• maintain safety conditions
Cooperation is coordination among nonantagonistic agents. Typically,
• each agent must maintain a model of the other agents
• each agent must develop a model of future interactions
The Contract Net Protocol
An important generic protocol
• A manager announces the existence of tasks via a (possibly selective)
multicast
• Agents evaluate the announcement. Some of these agents submit bids
• The manager awards a contract to the most appropriate agent
• The manager and contractor communicate privately as necessary
Task Announcement Message
• Eligibility specification: criteria that a node must
meet to be eligible to submit a bid
• Task abstraction: a brief description of the task to
be executed
• Bid specification: a description of the expected
format of the bid
• Expiration time: a statement of the time interval
during which the task announcement is valid
Bid and Award Messages
• A bid consists of a node abstraction—a brief
specification of the agent’s capabilities that are
relevant to the task
• An award consists of a task specification—the
complete specification of the task
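One round of the Contract Net can be sketched in Python using the message fields from the preceding slides. This is a simplified illustration: the contractor set, the substring eligibility test, and cost-based award criterion are assumptions, not part of the protocol specification.

```python
# Sketch of one Contract Net round: announce, bid, award.
# Agents, eligibility test, and scoring are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TaskAnnouncement:
    task_abstraction: str   # brief description of the task to be executed
    eligibility: str        # criterion a node must meet to bid
    bid_spec: str           # expected format of the bid
    expiration: int         # time until which the announcement is valid

@dataclass
class Bid:
    bidder: str
    node_abstraction: str   # bidder's capabilities relevant to the task
    cost: float             # bid value, per bid_spec

def contract_net_round(announcement, agents, now=0):
    """Manager awards the contract to the cheapest eligible bidder."""
    if now > announcement.expiration:
        return None                                       # announcement expired
    bids = [Bid(name, caps, cost) for name, (caps, cost) in agents.items()
            if announcement.eligibility in caps]          # eligibility check
    if not bids:
        return None                                       # no bids submitted
    return min(bids, key=lambda b: b.cost).bidder         # award

# Hypothetical contractors: (capability string, cost estimate).
agents = {"a1": ("drill weld", 7.0), "a2": ("weld paint", 4.0), "a3": ("paint", 9.0)}
ann = TaskAnnouncement("weld chassis", eligibility="weld", bid_spec="cost", expiration=10)
print(contract_net_round(ann, agents))   # a2
```

After the award, the manager and contractor would continue privately, per the protocol slide.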
Applicability of Contract Net
The Contract Net is
• a high-level communication protocol
• a way of distributing tasks
• a means of self-organization for a group of agents
Best used when
• the application has a well-defined hierarchy of tasks
• the problem has a coarse-grained decomposition
• the subtasks minimally interact with each other, but cooperate when
they do
CONTROL
Goals for Multiagent Control
Develop Technologies for...
• Locating and allocating capabilities and resources
that are dispersed in the environment
• Predicting, avoiding, or resolving contentions over
capabilities and resources
• Mediating among more agents, with more
heterogeneity and more complex interactions
• Maintaining stability, coherence, and effectiveness
Control Challenges
What makes control difficult can be broken
down into several major characteristics of
the overall system, including:
• The Agents that comprise the system
• The Problems that those agents are solving
individually and/or collectively
• The Solution characteristics that are critical
Control Challenges: Agents
Control is harder as agents are:
• More numerous (quantity)
• More complex individually, e.g., more versatile (complexity)
• More heterogeneous in their capabilities, means of accomplishing capabilities, languages for describing capabilities, etc. (heterogeneity)
Control Challenges: Problems
Control is harder as the problems agents solve are:
• More interrelated (degree of interaction)
• Changing more rapidly, or pursued in an uncertain and changing world (volatility)
• More unforgiving of control failures, e.g., involving irreversible actions (severity of failure)
Control Challenges: Solutions
Control is harder as solutions to agent problems must be:
• Better for the circumstances, e.g., more efficient (quality/efficiency)
• More robust to changing circumstances (robustness)
• Cheaper/faster to develop individually and in concert (low overhead)
Technologies for Agent Control
• Broker-based
• Matchmaker-based
• Market-based; auctions
• BDI and commitment based
• Decision theoretic
• Workflow (procedural) based
• Standard operating procedures
• Learning / adaptive
• Coordinated planning
• Conventions / protocols
• Stochastic or physics-based
• Organizations: teams and coalitions
• Constraint satisfaction / optimization
Example Experiments: Capability Location
(1) Investigate matchmaking and distributed matchmaking complexities as a function of numbers of agents
[Plot: matchmaking activity vs. # of agents]
(2) Investigate brokering vs. matchmaking vs. direct interaction as a function of different task types and allocation mechanisms
[Plot: brokering and matchmaking vs. task type and allocation mechanism]
Example Experiments: Capability Allocation and Scheduling
(1) Investigate quality/cost of allocating scarce capabilities as the number of capabilities and their consumers rises
[Plot: allocation costs vs. # of agents]
(2) Investigate quality/cost of scheduling reusable/nonsharable capabilities as volatility/uncertainty in agents’ future needs rises
[Plot: capability utilization vs. volatility/uncertainty, by scheduling mechanism]
Parameters of Tasks and Experiments
• Number of tasks
• Types of tasks
– number of resources
– duration of resource need
– complementarity/substitutability
– sequencing of resource needs
• Resource contention/overlap in needs
• Types of resources
– reusable/sharable/scaleable
Dimensions of Control

Control how capabilities are: Allocated or scheduled
• Possibly by: markets, CSP, hierarchy, planning, teamwork
• Why difficult: many demands, scarce supply
• Why needed: avoid contention, use resources well

Control how capabilities are: Located
• Possibly by: brokers, matchmakers, markets, broadcast
• Why difficult: arrival/departure rate, variety, scale
• Why needed: must find resources to employ them

Control how capabilities are: Not wasted
• Possibly by: broker-monitored requests, caching, communication
• Why difficult: similar tasks arising in various places
• Why needed: efficiency

Control how capabilities are: Demanded
• Possibly by: reacting to prices, replanning, goal adjustment
• Why difficult: number of alternative choices, self interest
• Why needed: avoid contention

Control how capabilities are: Supplied
• Possibly by: reprogram or train agents, evolve/spawn, inject "friction"
• Why difficult: oscillations or chaos from sporadic demand, allocating processes
• Why needed: adapt/grow with need

Control how capabilities are: Described
• Possibly by: provide more or less detail, lump or differentiate capabilities
• Why difficult: rich space of capabilities, disagreement on the better description, propagating descriptions
• Why needed: description must follow use

Control how capabilities are: Initially allocated
• Possibly by: organization or role restructuring
• Why difficult: detecting need, assessing choices, propagating
• Why needed: continuous operation, agent adaptation

Control how capabilities are: Differentiated
• Possibly by: prevention (maintain consistency), response (enlarge capability language)
• Why difficult: differentiation inevitable
• Why needed: choice constrains quality of coordination
SOCIAL ABSTRACTIONS
Social Abstractions
• Commitments: social, joint, collective, ...
• Organizations and roles
• Teams and teamwork
• Mutual beliefs and problems
• Joint intentions
• Potential conflict with individual rationality
Coherence and Commitments
• Coherence is how well a system behaves as a unit.
It requires some form of organization, typically
hierarchical
• Social commitments are a means to achieve
coherence
Example: Electronic Commerce
• Define an abstract sphere of commitment
(SoCom) consisting of two roles: buyer and
seller, which require capabilities and
commitments about, e.g.,
– requests they will honor
– validity of price quotes
• To adopt these roles, agents must have the
capabilities and acquire the commitments.
[Figure: Buyer and Seller Agents. SoComs provide the context for the concepts represented & communicated.]
Example: Electronic Commerce
• Agents can join
– during execution—requires publishing the
definition of the commerce SoCom
– when configured by humans
• The agents then behave according to the
commitments
• Toolkit should help define and execute
commitments, and detect conflicts.
Virtual Enterprises (VE)
Two sellers come together with a new proxy agent called VE. Example of VE agent commitments:
• notify on change
• update orders
• guarantee the price
• guarantee delivery date
[Figure: VE and EC composed]
Social Commitments
• Operations on commitments (instantiated as social actions):
– create
– discharge (satisfy)
– cancel
– release (eliminate)
– delegate (change debtor)
– assign (change creditor)
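The six operations above can be sketched as functions over a small commitment record. The representation (debtor, creditor, condition, state) and the delivery scenario are assumptions for illustration, not the authors' formalization.

```python
# Sketch of the six operations on social commitments. The record layout
# and state names are illustrative assumptions.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Commitment:
    debtor: str      # who owes the condition
    creditor: str    # who is owed
    condition: str   # what must be brought about
    state: str = "active"

def create(debtor, creditor, condition):
    return Commitment(debtor, creditor, condition)

def discharge(c):                 # debtor satisfies the condition
    return replace(c, state="satisfied")

def cancel(c):                    # debtor reneges
    return replace(c, state="cancelled")

def release(c):                   # creditor eliminates the commitment
    return replace(c, state="released")

def delegate(c, new_debtor):      # change debtor
    return replace(c, debtor=new_debtor)

def assign(c, new_creditor):      # change creditor
    return replace(c, creditor=new_creditor)

c = create("seller", "buyer", "deliver goods")
c = delegate(c, "shipper")        # the shipper now owes the buyer
print(discharge(c).state)         # satisfied
```

Making the record immutable and returning a new commitment per operation keeps each social action an explicit, auditable event.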
Policies and Structure
• Spheres of commitment (SoComs)
– abstract specifications of societies
– made concrete prior to execution
• Policies apply on performing social actions
• Policies related to the nesting of SoComs
• Role conflicts can occur when agents play
multiple roles, e.g., because of nonunique
nesting.
ETHICAL ABSTRACTIONS
Ethical Abstractions
• Utilitarianism
• Consequentialism
• Obligations
• Deontic logic
• Paradoxes
Motivation
The ethical abstractions help us specify agents
who act appropriately.
• Intuitively, we think of ethics as just the
basic way of distinguishing right from
wrong.
• It is difficult to entirely separate ethics from
legal, social, or even economic
considerations
Right and Good
• Right: that which is right in itself
• Good: that which is good for someone or
for some end
Deontological vs Teleological
• Deontological theories
– right before good
– being good does not mean being right
– ends do not justify means
• Teleological theories
– good before right
– right maximizes good
– ends justify means
Deontological Theories
• Constraints
– negatively formulated
– narrowly framed
• e.g., lying is not not-telling-the-truth
– narrowly directed at the agent’s specific action
• not its occurrence by other means
• not the consequences that are not explicitly chosen
Teleological Theories
• Based on how actions satisfy various goals,
not their intrinsic rightness
• comparison-based
• preference-based
Consequentialism
Utilitarianism
This is the view that a moral action is one that is
useful
• must be good for someone
• good may be interpreted as
– pleasure: hedonism
– preference satisfaction: microeconomic rationalism
(assumes each agent knows its preferences)
– interest satisfaction: welfare utilitarianism
– aesthetic ideals: ideal utilitarianism
Obligations
• For deontological theories, obligations are actions that are impermissible to omit
Applying Ethics
• The deontological theories
– are narrower
– ignore practical considerations
– but are only meant as incomplete constraints (of all right actions, the agent can choose any)
• The teleological theories
– are broader
– include practical considerations
– but leave fewer options for the agent, who must always choose the best available alternative
LEGAL ABSTRACTIONS
Legal Abstractions
• Contracts
• Directed obligations
• Hohfeldian concepts: right, duty, power, liability, immunity, ...
• Following protocols
• Defining and testing compliance
UNDERSTANDING
COMMUNICATION
Interaction and Communication
• Interactions occur when agents exist and act in close proximity:
– resource contention, e.g., bumping into each other
• Communication occurs when agents send messages to one another with a view to influencing beliefs and intentions. Implementation details are irrelevant:
– can occur over communication links
– can occur through shared memory
– can occur because of shared conventions
Speech Act Theory
Speech act theory, initially meant for natural language, views
communications as actions. It considers three aspects of a
message:
• Locution, or how it is phrased, e.g.,
– "It is hot here" or "Turn on the cooler"
• Illocution, or how it is meant by the sender or understood
by the receiver, e.g.,
– a request to turn on the cooler or an assertion about the temperature
• Perlocution, or how it influences the recipient, e.g.,
– turns on the cooler, opens the window, ignores the speaker
Illocution is the main aspect.
Syntax, Semantics, Pragmatics
For message passing:
• Syntax: requires a common language to represent information and queries, or languages that are intertranslatable
• Semantics: requires a structured vocabulary and a shared framework of knowledge, i.e., a shared ontology
• Pragmatics:
– knowing whom to communicate with and how to find them
– knowing how to initiate and maintain an exchange
– knowing the effect of the communication on the recipient
KQML Semantics
• Each agent manages a virtual knowledge base (VKB)
• Statements in a VKB can be classified into beliefs and
goals
• Beliefs encode information an agent has about itself and its
environment
• Goals encode states of an agent’s environment that it will
act to achieve
• Agents use KQML to communicate about the contents of
their own and others’ VKBs
Semantics of Communications
What if the agents have
• different terms for the same concept?
• same term for different concepts?
• different class systems or schemas?
• differences in depth and breadth of coverage?
Common Ontologies
• A shared representation is essential to successful
communication and coordination
• For humans, this is provided by the physical, biological,
and social world
• For computational agents, this is provided by a common
ontology:
– terms used in communication can be coherently defined
– interaction policies can be shared
• Current efforts are
– Cyc
– DARPA ontology sharing project
– Ontology Base (ISI)
– WordNet (Princeton)
ECONOMIC ABSTRACTIONS
Motivation
The economic abstractions have a lot of
appeal as an existing approach to capture
complex systems of autonomous agents.
• By themselves they are incomplete
• Can provide a basis for achieving some of
the contractual behaviors, especially in
– helping an agent decide what to do
– helping agents negotiate.
Market-Oriented Programming
• An approach to distributed computation
based on market price mechanisms
• Effective for coordinating the activities of
many agents with minimal communication
• Goal: build computational economies to
solve problems of distributed resource
allocation
Benefits
• For agents, the state of the world is described completely
by current prices
• Agents do not need to consider the preferences or abilities
of others
• Communications are offers to exchange goods at various
prices
• Under certain conditions, a simultaneous equilibrium of
supply and demand across all of the goods is guaranteed to
exist, to be reachable via distributed bidding, and to be
Pareto optimal.
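The "reachable via distributed bidding" claim can be illustrated with a simple price-adjustment loop: raise the price of a good with excess demand, lower it with excess supply, until the market clears. The linear demand and supply curves and the step size are illustrative assumptions for a single good.

```python
# Sketch of price adjustment toward market equilibrium: move the price
# in proportion to excess demand. Curves and constants are assumptions.

def adjust_to_equilibrium(demand, supply, price=1.0, step=0.01, iters=10000):
    """Iteratively nudge price by the excess demand at the current price."""
    for _ in range(iters):
        excess = demand(price) - supply(price)   # positive -> raise price
        price += step * excess
    return price

# Hypothetical linear curves: demand falls with price, supply rises.
demand = lambda p: 10 - 2 * p
supply = lambda p: 1 + p
p_star = adjust_to_equilibrium(demand, supply)
print(round(p_star, 3))   # 3.0, where 10 - 2p == 1 + p
```

At the resulting price the agents need to know only that price, not each other's preferences, which is the benefit the slide emphasizes.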
Market Behavior
• Agents interact by offering to buy or sell
quantities of commodities at fixed unit
prices
• At equilibrium, the market has computed
the allocation of resources and dictates the
activities and consumptions of the agents
Agent Behavior
• Consumer agents: exchange goods
• Producer agents: transform some goods into
other goods
• Assume individual impact on market is
negligible
• Both types of agents bid so as to maximize
profits (or utility)
Principles of Negotiation
• Negotiation involves a small set of agents
• Actions are propose, counterpropose, support, accept,
reject, dismiss, retract
• Negotiation requires a common language and common
framework (an abstraction of the problem and its solution)
• RAD agents exchange DTMS justifications and class
information
• Specialized negotiation knowledge may be encoded in
third-party agents
• The only negotiation formalism is unified negotiation
protocol [Rosenschein, Hebrew U.]
Negotiation
• A deal is a joint plan between two agents that would satisfy both of
their goals
• The utility of a deal for an agent is the amount he is willing to pay
minus the cost to him of the deal
• The negotiation set is the set of all deals that have a positive utility for
every agent
The possible situations for interaction are
• conflict: the negotiation set is empty
• compromise: agents prefer to be alone, but will agree to a negotiated
deal
• cooperative: all deals in the negotiation set are preferred by both
agents over achieving their goals alone
[Rosenschein and Zlotkin, 1994]
Negotiation Mechanism
The agents follow a Unified Negotiation Protocol, which
applies to any situation. In this protocol,
• the agents negotiate on mixed-joint plans, i.e., plans that
bring the world to a new state that is better for both agents
• if there is a conflict, they "flip a coin" to decide which
agent gets to satisfy his goal
Negotiation Mechanism
Attributes
• Efficiency
• Stability
• Simplicity
• Distribution
• Symmetry
e.g., sharing book purchases, with cost
decided by coin flip
Third-Party Negotiation
• Resolves conflicts among antagonistic agents directly or
through a mediator
• Handles multiagent, multiple-issue, multiple-encounter
interactions using case-based reasoning and multiattribute
utility theory
• Agents exchange messages that contain
– the proposed compromise
– persuasive arguments
– agreement (or not) with the compromise or argument
– requests for additional information
– reasons for disagreement
– utilities / preferences for the disagreed-upon issues
[Sycara]
Negotiation in RAD
• Resolves conflicts among agents during problem solving
• To negotiate, agents exchange
– justifications, which are maintained by a DTMS
– class information, which is maintained by a frame system
• Maintains global consistency, but only where necessary for
problem solving
Negotiation among
Utility-Based Agents
Problem: How to design the rules of an
environment so that agents interact
productively and fairly, e.g.,
• Vickrey’s Mechanism: lowest bidder wins,
but gets paid second lowest bid (this
motivates telling the truth?? and is best for
the consumer??)
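Vickrey's mechanism as stated on the slide (the procurement form: lowest bidder wins but is paid the second-lowest bid) can be sketched directly; since the payment does not depend on the winner's own bid, bidding one's true cost is the dominant strategy. The bids below are hypothetical.

```python
# Sketch of the Vickrey mechanism in its procurement form, as on the
# slide: lowest bidder wins, paid the second-lowest bid. Bids are
# illustrative assumptions.

def vickrey_procurement(bids):
    """bids: {agent: bid}. Returns (winner, payment)."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    winner, _ = ordered[0]            # lowest bidder wins
    _, payment = ordered[1]           # ...but is paid the second-lowest bid
    return winner, payment

bids = {"a1": 120.0, "a2": 100.0, "a3": 150.0}
print(vickrey_procurement(bids))      # ('a2', 120.0)
```

Note the decoupling: underbidding below true cost risks winning at a loss, and overbidding only risks losing a profitable contract, which is why truth-telling is safe here.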
Problem Domain Hierarchy
Worth-Oriented Domains
State-Oriented Domains
Task-Oriented Domains
© 1999 Singh & Huhns
96
Task-Oriented Domains
• A TOD is a tuple <T, A, c>, where T is the
set of tasks, A is the set of agents, and c(X)
is a monotonic function for the cost of
executing the set of tasks X
• Examples
– delivery domain: c(X) is length of minimal path that visits X
– postmen domain: c(X) is length of minimal path plus return
– database queries: c(X) is minimal number of needed DB ops
TODs
• A deal is a redistribution of tasks
• Utility of deal d for agent k is U_k(d) = c(T_k) - c(d_k)
• The conflict deal, D, is no deal
• A deal d is individual rational if d > D
• Deal d dominates d’ if d is better for at least one agent and not worse for the rest
• Deal d is Pareto optimal if there is no d’ > d
• The set of all deals that are individual rational and Pareto optimal is the negotiation set, NS
Monotonic Concession Protocol
• Each agent proposes a deal
• If one agent matches or exceeds what the other demands,
the negotiation ends
• Else, the agents propose the same or more (concede)
• If no agent concedes, the negotiation ends with the conflict
deal
This protocol is simple, symmetric, distributed, and
guaranteed to end in a finite number of steps in any TOD.
What strategy should an agent adopt?
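The protocol above can be sketched over an explicit list of candidate deals. This is a simplified illustration: deals carry both agents' utilities, both agents concede symmetrically when neither offer is acceptable, and the deal list and numbers are assumptions.

```python
# Sketch of the monotonic concession protocol over an explicit deal
# list. Deals and utilities are illustrative assumptions.

def monotonic_concession(deals):
    """deals: list of (name, u1, u2), ordered from best-for-agent-1 to
    best-for-agent-2. Returns the agreed deal name, or 'conflict'."""
    i, j = 0, len(deals) - 1                  # current offers of agents 1 and 2
    while True:
        name_i, u1_i, u2_i = deals[i]
        name_j, u1_j, u2_j = deals[j]
        if u2_i >= u2_j:                      # 1's offer meets 2's demand
            return name_i
        if u1_j >= u1_i:                      # 2's offer meets 1's demand
            return name_j
        if i + 1 >= len(deals) or j - 1 < 0:
            return "conflict"                 # no further concession possible
        i, j = i + 1, j - 1                   # both concede one step

deals = [("d1", 5, 1), ("d2", 4, 2), ("d3", 3, 3), ("d4", 2, 4), ("d5", 1, 5)]
print(monotonic_concession(deals))            # d3
```

As the slide notes, the protocol terminates in finitely many steps; the interesting question is the concession strategy, which the Zeuthen strategy answers.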
Zeuthen Strategy
Offer the deal that is best for yourself among all deals in NS.
• Calculate the risks of self and opponent:
  R1 = (utility A1 loses by accepting A2’s offer) / (utility A1 loses by causing a conflict)
• If your risk is smaller than your opponent’s, offer the minimal sufficient concession (a sufficient concession makes the opponent’s risk less than yours); else offer your original deal
• If both use this strategy, they will agree on the deal that maximizes the product of their utilities (Pareto optimal)
• The strategy is not stable: when both should concede on the last step but it is sufficient for only one to concede, either one can benefit by dropping the strategy
Deception-Free Protocols
• The Zeuthen strategy requires full knowledge of
– tasks
– protocol
– strategies
– commitments
• Hidden tasks
• Phantom tasks
• Decoy tasks
[Figure: postmen example with P.O., A1, A1 (hidden), A2]
8. SYNTHESIS
Research Trends
• Economic: Sycara, Rosenschein, Sandholm, Lesser
• Social: organizational theory and open systems—Hewitt,
Gasser, Castelfranchi
• Ethical:
• Legal:
• Communication:
• Coordination:
• Collaboration:
• Formal Methods—Singh, Wooldridge, Jennings, Georgeff
Interaction-Oriented Software
Development
• Active modules, representing real-world
objects
• Declarative specification (“what,” not
“how”)
• Modules that volunteer
• Modules hold beliefs about the world,
especially about themselves and others
What is IOP?
• A collection of abstractions and techniques for programming MAS
• Classified into three layers of mechanisms:
– coordination: living in a shared environment
– commitment: organizational or social coherence (adds stability over time)
– collaboration: high-level interactions combining mental and social abstractions
IOP Contribution
• Enhances and formalizes ideas from different
disciplines
• Separates them out in an explicit conceptual
metamodel to use as a basis for programming and
for programming methodologies
• Makes them programmable
Benefits of IOP
• Like all conceptual modeling, IOP offers a
higher-level starting point than traditionally
available. Specifically:
– key concepts of coordination, commitment,
collaboration as first-class concepts that can be
applied directly
– aspects of the underlying infrastructure are
separated, leading to improved portability.
Representations for IOP
• Functionalities, which typically exist
– effected by humans in some unprincipled way
– hard-coded in applications
– buried in operating procedures and manuals
• Information, which typically exists
– in data stores
– in the environment or with interacting entities.
Problem: interactive aspects are not modeled.
Lessons
• Advanced abstractions
– must be simple
– must reflect true status
Challenges
• Formal semantics
• Operational semantics related to formal semantics
• Tools
• Design rules capturing useful patterns, but respecting the formal semantics
To Probe Further
• Readings in Agents (Huhns & Singh, eds.), Morgan
Kaufmann, 1997
http://www.mkp.com/books_catalog/1-55860-495-2.asp
• [email protected]
• Journal of Autonomous Agents and Multiagent Systems
• International Conference on Multiagent Systems (ICMAS)
• International Joint Conference on Artificial Intelligence
• International Workshop on Agent Theories, Architectures,
and Languages (ATAL)