Algorithmic Mechanism Design


Algorithmic Issues in Noncooperative (i.e., Strategic) Distributed Systems
Two Research Traditions

Theory of Algorithms: computational issues
• What can be feasibly computed?
• How much effort does it take to compute a solution?
• What is the quality of a computed solution?
• Centralized or distributed computational models

Game Theory: interaction between self-interested individuals
• What is the outcome of the interaction?
• Which social goals are compatible with selfishness?
Different Assumptions

Theory of Algorithms (in distributed systems):
• Processors are obedient, faulty, or adversarial
• Large systems, limited computational resources

Game Theory:
• Players are strategic (selfish)
• Small systems, unlimited computational resources
The Internet World

• Agents are often autonomous (users)
  • Users have their own individual goals
  • Network components are owned by providers
• Often involves "Internet" scales
  • Massive systems
  • Limited communication/computational resources

→ Both strategic and computational issues!
Fundamental Questions

• What are the computational aspects of a game?
• What does it mean to design an algorithm for a strategic (i.e., non-cooperative) distributed system?

Algorithmic Game Theory = Theory of Algorithms + Game Theory
Basics of Game Theory

A game consists of:
• A set of players
• A set of rules of encounter: who should act when, and what are the possible actions (strategies)
• A specification of payoffs for each combination of strategies
• A set of outcomes

Game Theory attempts to predict the outcome of the game (the solution) by taking into account the individual behavior of the players (agents).
Equilibrium

• Among the possible outcomes of a game, equilibria play a fundamental role.
• Informally, an equilibrium is a strategy combination in which individuals are not willing to change their state.
• When does a player not want to change his state? In the Homo Economicus model, when he has selected a strategy that maximizes his individual payoff, knowing that the other players are doing the same.
Roadmap

Nash Equilibria (NE)
• Does a NE always exist?
• Can a NE be feasibly computed, once it exists?
• What is the "quality" of a NE?
• How long does it take to converge to a NE?

Algorithmic Mechanism Design
• Which social goals can be (efficiently) implemented in a non-cooperative, selfish distributed system?
• VCG mechanisms and one-parameter mechanisms
• Mechanism design for some basic network design problems

FIRST PART: (Nash) Equilibria
Formal representation of a game: Normal Form

• N rational players
• Si = strategy set of player i
• The strategy combination (s1, s2, …, sN) gives payoff (p1, p2, …, pN) to the N players
• S1 × S2 × … × SN → payoff matrix
Types of games

• Cooperative versus non-cooperative
• Symmetric versus asymmetric
• Zero-sum versus non-zero-sum
• Simultaneous versus sequential
• Perfect information versus imperfect information
• One-shot versus repeated
A famous one-shot game: the Prisoner's Dilemma

                                Prisoner II
                       Don't Implicate   Implicate
Prisoner I
   Don't Implicate         -1, -1          -6, 0
   Implicate                0, -6          -5, -5

Rows and columns are the strategy sets of the two prisoners; each entry gives the payoffs (Prisoner I's payoff first).
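As a concrete illustration (not part of the original slides), the normal-form representation above can be stored as one payoff matrix per player, indexed by the strategy combination; the encoding of the Prisoner's Dilemma below is a hypothetical choice for illustration.

```python
import numpy as np

# Strategy 0 = Don't Implicate, 1 = Implicate
p1 = np.array([[-1, -6],    # Prisoner I's payoffs
               [ 0, -5]])
p2 = np.array([[-1,  0],    # Prisoner II's payoffs
               [-6, -5]])

s = (1, 1)                  # the strategy combination (Implicate, Implicate)
print(p1[s], p2[s])         # -> -5 -5
```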
Prisoner I's decision

                                Prisoner II
                       Don't Implicate   Implicate
Prisoner I
   Don't Implicate         -1, -1          -6, 0
   Implicate                0, -6          -5, -5

Prisoner I's decision:
• If II chooses Don't Implicate, then it is best to Implicate
• If II chooses Implicate, then it is best to Implicate
• It is best for I to Implicate, regardless of what II does: a Dominant Strategy
Prisoner II's decision

                                Prisoner II
                       Don't Implicate   Implicate
Prisoner I
   Don't Implicate         -1, -1          -6, 0
   Implicate                0, -6          -5, -5

Prisoner II's decision:
• If I chooses Don't Implicate, then it is best to Implicate
• If I chooses Implicate, then it is best to Implicate
• It is best for II to Implicate, regardless of what I does: a Dominant Strategy
Hence…

                                Prisoner II
                       Don't Implicate   Implicate
Prisoner I
   Don't Implicate         -1, -1          -6, 0
   Implicate                0, -6          -5, -5

• It is best for both to Implicate, regardless of what the other one does
• Implicate is a Dominant Strategy for both
• (Implicate, Implicate) is therefore the Dominant Strategy Equilibrium
• Note: if they could collude, it would be beneficial for both not to Implicate, but this is not an equilibrium, as both have an incentive to deviate
Dominant Strategy Equilibrium

• A Dominant Strategy Equilibrium is a strategy combination s* = (s1*, s2*, …, sN*) such that si* is a dominant strategy for each i, namely, for each s = (s1, s2, …, si, …, sN):

  pi(s1, s2, …, si*, …, sN) ≥ pi(s1, s2, …, si, …, sN)

• A dominant strategy is a best response to any strategy of the other players
• It is good for an agent, since it need not deliberate about the other agents' strategies
• Of course, not all games (only very few in practice!) have a dominant strategy equilibrium
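A minimal sketch (not from the slides) of checking for a (weakly) dominant strategy in a 2-player game, using the hypothetical payoff encoding above; on the Prisoner's Dilemma it reports Implicate as dominant for both players.

```python
import numpy as np

def dominant_strategy(payoff, player):
    """Return a (weakly) dominant strategy for `player` (0 = row, 1 = column),
    or None if there is none. A sketch for 2-player normal-form games."""
    p = payoff if player == 0 else payoff.T     # rows = the player's own strategies
    for s in range(p.shape[0]):
        if np.all(p[s, :] >= p.max(axis=0)):    # s is a best response to every opponent strategy
            return s
    return None

p1 = np.array([[-1, -6], [0, -5]])              # Prisoner I's payoffs
p2 = np.array([[-1,  0], [-6, -5]])             # Prisoner II's payoffs
print(dominant_strategy(p1, 0), dominant_strategy(p2, 1))   # -> 1 1 (Implicate, Implicate)
```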
A more relaxed solution concept: Nash Equilibrium [1951]

A Nash Equilibrium is a strategy combination s* = (s1*, s2*, …, sN*) such that, for each i, si* is a best response to (s1*, …, si-1*, si+1*, …, sN*), namely, for any possible alternative strategy si:

  pi(s*) ≥ pi(s1*, s2*, …, si, …, sN*)
Nash Equilibrium

• In a NE no agent can improve by unilaterally deviating from its strategy, given the others' strategies as fixed
• An agent has to deliberate about the strategies of the other agents
• If the game is played repeatedly and the players converge to a solution, then it has to be a NE
• Dominant Strategy Equilibrium ⇒ Nash Equilibrium (but the converse is not true)
Nash Equilibrium: the Battle of the Sexes (a coordination game)

                     Woman
              Stadium     Cinema
Man  Stadium    2, 1       0, 0
     Cinema     0, 0       1, 2

• (Stadium, Stadium) is a NE: the strategies are best responses to each other
• (Cinema, Cinema) is a NE: the strategies are best responses to each other
→ but they are not Dominant Strategy Equilibria… are we really sure they will eventually go out together????
A big game-theoretic issue: the existence of a NE

• Unfortunately, for pure-strategy games (such as those seen so far), it is easy to see that we cannot have a general existence result
• In other words, there may be no, one, or many NE, depending on the game
A conflictual game: Head or Tail

                  Player II
              Head        Tail
Player I
   Head      1, -1       -1, 1
   Tail     -1, 1         1, -1

Player I (the row player) prefers to do what Player II does, while Player II prefers to do the opposite of what Player I does!
→ In any configuration, one of the players prefers to change his strategy, and so on and so forth… thus, there is no NE!
On the existence of a NE (2)

• However, when a player can select his strategy stochastically, by using a probability distribution over his set of possible strategies (a mixed strategy), the following general result holds:
• Theorem (Nash, 1951): Any game with a finite set of players and a finite set of strategies has a NE in mixed strategies (i.e., a strategy combination in which no player can improve his expected payoff by unilaterally changing his probability distribution).
• Head or Tail game: if each player sets p(Head) = p(Tail) = 1/2, then the expected payoff of each player is 0, and this is a NE, since no player can improve on this by choosing a different randomization!
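A small numerical check of this claim (a sketch, not from the slides): against the uniform mix, every pure deviation of either player still yields expected payoff 0, so no unilateral change of randomization helps. The payoff encoding is a hypothetical one.

```python
import numpy as np

A = np.array([[1, -1], [-1, 1]])   # Player I's payoffs in Head or Tail
B = -A                             # zero-sum game: Player II's payoffs

x = np.array([0.5, 0.5])           # Player I mixes uniformly
y = np.array([0.5, 0.5])           # Player II mixes uniformly

print(x @ A @ y)                   # expected payoff of Player I at the mixed profile: 0.0
print(A @ y)                       # Player I's payoff from each pure deviation: [0, 0]
print(x @ B)                       # Player II's payoff from each pure deviation: [0, 0]
```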
Three big computational issues

1. Finding a NE, once it does exist
2. Establishing the quality of a NE, as compared to a cooperative system, i.e., a system in which the agents can cooperate (recall the Prisoner's Dilemma)
3. In a repeated game, establishing whether, and in how many steps, the system will eventually converge to a NE (recall the Battle of the Sexes)
On the computability of a NE for pure strategies

• By definition, an entry (p1, …, pN) of the payoff matrix is a NE if and only if, for each i = 1, …, N, pi is the maximum over the entries obtained by letting player i's strategy range over Si while keeping the other strategies fixed, i.e., pi = max{pi(s1, …, si-1, s, si+1, …, sN) : s ∈ Si}.
• Notice that, with N players, an explicit (i.e., normal-form) representation of the payoff functions is exponential in N
→ a brute-force search for a pure NE is then exponential in the number of players (even though it is still linear in the input size; the normal form, however, need not be a minimal-space representation of the input!)
→ Alternative, cheaper methods are sought (we will see that for some categories of games of our interest, a NE can be found in poly time w.r.t. the number of players)
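A brute-force pure-NE finder over an explicit normal-form representation, as a sketch (not from the slides) for the 2-player case; it simply applies the row/column maximum characterization above.

```python
import numpy as np
from itertools import product

def pure_nash_equilibria(p1, p2):
    """All pure NE of a 2-player game given the two payoff matrices.
    A minimal sketch of the brute-force search described above."""
    equilibria = []
    for i, j in product(range(p1.shape[0]), range(p1.shape[1])):
        row_best = p1[i, j] >= p1[:, j].max()   # player I cannot improve by changing row
        col_best = p2[i, j] >= p2[i, :].max()   # player II cannot improve by changing column
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Battle of the Sexes: two pure NE, (Stadium, Stadium) and (Cinema, Cinema)
p1 = np.array([[2, 0], [0, 1]])
p2 = np.array([[1, 0], [0, 2]])
print(pure_nash_equilibria(p1, p2))             # -> [(0, 0), (1, 1)]
```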
On the computability of a NE for mixed strategies

• How do we select the probability distribution? It looks like a problem over a continuum…
• …but actually it is not! It can be shown that such a distribution can be found by checking, for each player, all the (exponentially many) possible combinations of underlying pure strategies (supports)!
• In practice, the problem is solved by using a simplex-like technique called the Lemke–Howson algorithm, which, however, is exponential in the worst case
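A sketch (not from the slides) of this support-enumeration idea for 2-player games: for every pair of candidate supports, solve the indifference conditions as a linear system and keep the solutions that are valid probability vectors with no profitable deviation. The function and matrix names are assumptions for illustration.

```python
import itertools
import numpy as np

def support_enumeration(A, B, tol=1e-9):
    """Mixed NE of a 2-player game (payoff matrices A, B) by support enumeration.
    A minimal sketch: enumerate equal-size supports, solve the indifference
    conditions, keep solutions that are probabilities with no better deviation."""
    m, n = A.shape
    equilibria = []
    for k in range(1, min(m, n) + 1):
        for I in itertools.combinations(range(m), k):      # row player's support
            for J in itertools.combinations(range(n), k):  # column player's support
                rhs = np.zeros(k); rhs[-1] = 1.0
                try:
                    # y must make all rows in I equally good for the row player
                    My = np.vstack([A[I[0], list(J)] - A[i, list(J)] for i in I[1:]]
                                   + [np.ones(k)])
                    # x must make all columns in J equally good for the column player
                    Mx = np.vstack([B[list(I), J[0]] - B[list(I), j] for j in J[1:]]
                                   + [np.ones(k)])
                    y_s, x_s = np.linalg.solve(My, rhs), np.linalg.solve(Mx, rhs)
                except np.linalg.LinAlgError:
                    continue
                if (y_s < -tol).any() or (x_s < -tol).any():
                    continue                                # not valid probabilities
                x, y = np.zeros(m), np.zeros(n)
                x[list(I)], y[list(J)] = x_s, y_s
                # no pure deviation (inside or outside the support) may improve
                if (A @ y).max() <= x @ A @ y + tol and (x @ B).max() <= x @ B @ y + tol:
                    equilibria.append((x, y))
    return equilibria

A = np.array([[1, -1], [-1, 1]])    # Head or Tail
print(support_enumeration(A, -A))   # -> [(array([0.5, 0.5]), array([0.5, 0.5]))]
```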
Is finding a mixed-strategy NE NP-hard?

• W.l.o.g., we restrict ourselves to the 2-NASH problem: given a finite 2-player game, find a mixed NE.
• QUESTION: Is 2-NASH NP-hard?
• Reminder: a problem P is NP-hard if every problem P' in NP can be reduced to P in poly time, in such a way that:
  "yes"-instance of P' → "yes"-instance of P
  "no"-instance of P' → "no"-instance of P
• But each instance of 2-NASH is a "yes"-instance (since every game has a mixed NE), and so if 2-NASH were NP-hard, then NP = coNP (hard to believe!)
The complexity class PPAD

• Definition (Papadimitriou, 1994): roughly speaking, PPAD (Polynomial Parity Argument, Directed case) is the class of all problems whose solution space can be set up as the set of all sinks in a suitable directed graph (generated by the input instance), which, however, has an exponential number of vertices in the size of the input.
• Remark: it could very well be that PPAD = P ≠ NP…
• …but several PPAD-complete problems have resisted poly-time attacks for decades (e.g., finding Brouwer fixed points)
2-NASH is PPAD-complete!

• 3D-BROUWER is PPAD-complete (Papadimitriou, JCSS'94)
• 4-NASH is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou, STOC'06)
• 3-NASH is PPAD-complete (Daskalakis & Papadimitriou, ECCC'05; Chen & Deng, ECCC'05)
• 2-NASH is PPAD-complete!!! (Chen & Deng, FOCS'06)
On the quality of a NE

• How inefficient is a NE in comparison with an idealized situation in which the players strive to collaborate selflessly, with the common goal of maximizing the social welfare?
• Recall: in the Prisoner's Dilemma game, the DSE (which is also a NE) means a total of 10 years in jail for the players. However, if they did not implicate each other, they would serve a total of only 2 years in jail!
The price of anarchy

Definition (Koutsoupias & Papadimitriou, 1999): Given a game G and a social-choice minimization (resp., maximization) function f (e.g., the sum of all players' payoffs), let S be the set of NE, and let OPT be the outcome of G optimizing f. Then, the Price of Anarchy (PoA) of G w.r.t. f is:

  ρG(f) = sup_{s∈S} f(s)/f(OPT)   (resp., inf_{s∈S} f(s)/f(OPT))

Example: in the PD game, ρG(f) = -10/-2 = 5
A case study for the existence and quality of a NE: selfish routing on the Internet

• Internet components are made up of heterogeneous nodes and links, and the network architecture is open and dynamic
• Internet users behave selfishly: they generate traffic, and their only goal is to download/upload data as fast as possible!
• But the more a link is used, the slower it becomes, and there is no central authority "optimizing" the data flow…
• So, why does the Internet eventually work in such a jungle???
Modelling the flow problem

The Internet can be modelled using game theory:
  players    →  users
  strategies →  paths over which users can route their traffic

Non-atomic Selfish Routing:
• There is a large number of (selfish) users;
• All the traffic of a user is routed over a single path;
• Every user controls only a tiny fraction of the traffic.
Mathematical model

• A directed graph G = (V, E)
• A set of source–sink pairs (si, ti), for i = 1,…,k
• A rate ri ≥ 0 of traffic between si and ti, for each i = 1,…,k
• A set Pi of paths between si and ti, for each i = 1,…,k
• The set of all paths Π = ∪i=1,…,k Pi
• A flow vector f specifying a traffic routing:
  fP = amount of traffic routed on the si–ti path P
• A flow is feasible if, for every i = 1,…,k, ΣP∈Pi fP = ri
Mathematical model (2)

• For each e ∈ E, the amount of flow absorbed by e is:
  fe = ΣP: e∈P fP
• For each edge e, a latency function le(fe)
• Cost (total latency) of a path P:
  C(P) = Σe∈P fe · le(fe)
• Cost (total latency) of a flow f (social welfare):
  C(f) = ΣP∈Π C(P)
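As a small sketch (not from the slides), the edge flows and the total latency of a path flow can be computed directly from these definitions; the two-edge network, path names, and latency functions below are made-up illustrations.

```python
from collections import defaultdict

# A hypothetical instance: a Pigou-like network with two parallel s-t edges.
paths = {"upper": ["e1"], "lower": ["e2"]}        # each path is a list of edges
flow = {"upper": 1.0, "lower": 0.0}               # f_P: traffic routed on each path
latency = {"e1": lambda x: x, "e2": lambda x: 1}  # l_e(f_e) for each edge

# f_e = sum of f_P over the paths P that use edge e
edge_flow = defaultdict(float)
for P, edges in paths.items():
    for e in edges:
        edge_flow[e] += flow[P]

# C(f) = sum over paths P of C(P) = sum_{e in P} f_e * l_e(f_e)
cost = sum(edge_flow[e] * latency[e](edge_flow[e])
           for P, edges in paths.items() for e in edges)
print(dict(edge_flow), cost)                      # -> {'e1': 1.0, 'e2': 0.0} 1.0
```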
Flows and NE

QUESTION: Does a NE exist for non-atomic selfish routing on a given instance (G, r, l) of the min-latency flow problem, i.e., a flow (called a Nash flow) such that no agent can improve its latency by unilaterally changing its path? And, in the positive case, what is the PoA of such a Nash flow?
Example: Pigou's game [1920]

Two parallel edges from s to t, with traffic rate 1:
• upper edge: latency l(x) = x, depending on the congestion (x is the fraction of flow using the edge)
• lower edge: latency l(x) = 1, fixed

→ What is the NE of this game? Trivial: all the flow tends to travel on the upper edge → the cost of the flow is 1·1 + 0·1 = 1
→ What is the PoA of this NE? The optimal solution is the minimum of C(x) = x·x + (1-x)·1 → C'(x) = 2x - 1 → OPT = 1/2 → C(OPT) = 1/2·1/2 + (1-1/2)·1 = 0.75 → ρG(C) = 1/0.75 = 4/3
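A short numerical check of this computation (a sketch, not from the slides): minimize the social cost C(x) = x² + (1 − x) over a grid and compare it with the Nash cost of 1.

```python
import numpy as np

cost = lambda x: x * x + (1 - x) * 1.0   # social cost when a fraction x uses the upper edge

xs = np.linspace(0.0, 1.0, 10001)        # brute-force grid search over the split
opt_x = xs[np.argmin(cost(xs))]
nash_cost, opt_cost = cost(1.0), cost(opt_x)

print(opt_x, opt_cost)                   # -> ~0.5  0.75
print(nash_cost / opt_cost)              # PoA -> ~1.333 (= 4/3)
```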
The Braess paradox

Does it help to add edges in order to improve the PoA? NO! Let's have a look at the Braess paradox (1968).

Network: edge s→v with latency x, edge v→t with latency 1, edge s→w with latency 1, edge w→t with latency x; total rate 1, split as 1/2 on the path s→v→t and 1/2 on the path s→w→t.

Latency of each path = 0.5·0.5 + 0.5·1 = 0.75
Latency of the flow = 2·0.75 = 1.5 (notice this is optimal)
The Braess paradox (2)

To reduce the latency of the flow, we try adding a zero-latency road between v and w. Intuitively, this should not make things worse!

Network: as before, plus a new edge v→w with latency 0.
The Braess paradox (3)

However, each user is now incentivized to change its route, since the route s→v→w→t has less latency (indeed, x ≤ 1).

If only a single user changes its route, then its latency decreases to approximately 0.5. But the problem is that all the users will decide to change!
The Braess paradox (4)

• So, the new latency of the flow is now: 1·1 + 1·0 + 1·1 = 2 > 1.5
• Even worse, this is a NE!
• The optimal min-latency flow is equal to the one we had before adding the new road! So, the PoA is

  ρG(f) = 2/1.5 = 4/3

Notice: 4/3, as in Pigou's example.
Existence of a Nash flow

• Theorem (Beckmann et al., 1956): If x·l(x) is convex and continuously differentiable, then the Nash flow of (G, r, l(x)) exists, is unique, and is equal to the optimal min-latency flow of the instance (G, r, λ(x)), where λ(x) = [∫0^x l(t)dt]/x.
• Remark: the optimal min-latency flow can be computed in polynomial time through convex programming methods.
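A sketch (not from the slides) of this idea on Pigou's example: the Nash flow minimizes the Beckmann-style potential Σe ∫0^{fe} le(t)dt, which for the two-edge network is Φ(x) = x²/2 + (1 − x); a simple grid minimization recovers the Nash flow x = 1.

```python
import numpy as np

# Pigou's network: upper edge latency l(x) = x, lower edge latency l(x) = 1, rate 1.
# Potential: Phi(x) = integral_0^x t dt + integral_0^(1-x) 1 dt = x^2/2 + (1 - x)
phi = lambda x: x**2 / 2 + (1 - x)

xs = np.linspace(0.0, 1.0, 10001)
nash_x = xs[np.argmin(phi(xs))]
print(nash_x)                 # -> 1.0: the whole flow uses the upper edge, as claimed above
```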
Flows and Price of Anarchy

• Theorem 1: In a network with linear latency functions, the cost of a Nash flow is at most 4/3 times that of the minimum-latency flow.
• Theorem 2: In a network with general latency functions, the cost of a Nash flow is at most n/2 times that of the minimum-latency flow.
(Roughgarden & Tardos, JACM'02)
A bad example for non-linear latencies

Two parallel edges from s to t, with rate 1: the upper edge has latency l(x) = x^i with i >> 1, the lower edge has fixed latency 1. The Nash flow routes everything on the upper edge (1, 0); the optimal flow routes (1−ε, ε).

→ A Nash flow (of cost 1) is arbitrarily more expensive than the optimal flow (of cost close to 0)
Convergence towards a NE (in pure-strategy games)

• OK, we know that selfish routing is not so bad at its NE, but are we really sure this equilibrium point will eventually be reached?
• Convergence time: the number of moves made by the players to reach a NE from an arbitrary initial state
• Question: is the convergence time (polynomially) bounded in the number of players?
The potential function method

• (Rough) Definition: a potential function for a game (if any) is a real-valued function, defined on the set of possible outcomes of the game, such that the equilibria of the game are precisely the local optima of the potential function.
• Theorem: in any finite game admitting a potential function, best-response dynamics always converge to a NE in pure strategies.
• But how many steps are needed to reach a NE? It depends on the combinatorial structure of the players' strategy space…
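A sketch (not from the slides) of best-response dynamics on a tiny congestion-style game, which is a potential game: at every step one player switches to a best response, and the dynamics stop at a pure NE. The game instance below is a made-up illustration.

```python
# A hypothetical 2-player congestion game: each player picks one of two links,
# and the delay of a link equals the number of players using it (sharing is bad).
players = [0, 1]
links = ["a", "b"]

def delay(profile, player):
    """Delay experienced by `player` under a strategy profile (a tuple of chosen links)."""
    return sum(1 for p in players if profile[p] == profile[player])

def best_response_dynamics(profile):
    """Let players switch to better responses until nobody wants to move: a pure NE."""
    improved = True
    while improved:
        improved = False
        for p in players:
            for link in links:
                candidate = tuple(link if q == p else profile[q] for q in players)
                if delay(candidate, p) < delay(profile, p):
                    profile, improved = candidate, True
    return profile

print(best_response_dynamics(("a", "a")))   # -> ('b', 'a'): the players split onto different links
```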
Convergence towards the Nash flow

• Negative result: it is possible to show that there exist instances of the non-atomic selfish routing game for which the convergence time is exponential (unless finding a local optimum of any problem in the class PLS (Polynomial Local Search) can be done in polynomial time, against the common belief)
• Positive result: the non-atomic selfish routing game is a potential game (and, moreover, for many instances the convergence time is polynomial)