Transcript: Dynamic Stackelberg Problems
DYNAMIC STACKELBERG PROBLEMS
Recursive Macroeconomic Theory, Ljungqvist and Sargent, 3rd Edition, Chapter 19. Taylor Collins
BACKGROUND INFORMATION
• A new type of problem
• Optimal decision rules are no longer functions of the natural state variables
• A large agent and a competitive market
• A rational expectations equilibrium
• Recall the Stackelberg problem from game theory
• The cost of confirming past expectations
THE STACKELBERG PROBLEM
• Solving the problem: general idea
• Defining the Stackelberg leader and follower
• Defining the variables:
  • z_t is a vector of natural state variables
  • x_t is a vector of endogenous variables
  • u_t is a vector of government instruments
  • y_t is a stacked vector of z_t and x_t
THE STACKELBERG PROBLEM
• The government's one-period loss function is

  $$ r(y, u) = y' R y + u' Q u $$

• The government wants to maximize

  $$ -\sum_{t=0}^{\infty} \beta^t \, r(y_t, u_t) \qquad (1) $$

  subject to an initial condition for z_0, but not x_0.
• The government makes policy in light of the model

  $$ y_{t+1} = A y_t + B u_t, \qquad \begin{bmatrix} z_{t+1} \\ x_{t+1} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} + B u_t \qquad (2) $$

• The government maximizes (1) by choosing $\{u_t, x_t, z_{t+1}\}_{t=0}^{\infty}$ subject to (2).
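The objects above can be sketched numerically. The matrices below (β, A, B, R, Q) are hypothetical placeholders chosen only for illustration, not values from the chapter; they stack one natural state z_t and one endogenous variable x_t into y_t, with a scalar instrument u_t.

```python
import numpy as np

# Hypothetical example matrices (illustrative only, not from the chapter).
beta = 0.95
A = np.array([[0.9, 0.1],    # [[A11, A12],
              [0.2, 0.5]])   #  [A21, A22]]
B = np.array([[0.0],
              [1.0]])        # the instrument moves x directly
R = np.array([[1.0, 0.0],
              [0.0, 0.3]])
Q = np.array([[0.5]])

def loss(y, u):
    """One-period loss r(y, u) = y'Ry + u'Qu."""
    return float(y @ R @ y + u @ Q @ u)

def law_of_motion(y, u):
    """Equation (2): y_{t+1} = A y_t + B u_t."""
    return A @ y + B @ u

y0 = np.array([1.0, 0.5])    # stacked y = [z, x]'
u0 = np.array([0.2])
print(loss(y0, u0))            # 1.0*1 + 0.3*0.25 + 0.5*0.04 = 1.095
print(law_of_motion(y0, u0))   # [0.95, 0.65]
```

The discounted objective (1) is then the β-weighted sum of `loss` along a path generated by `law_of_motion`.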
THE STACKELBERG PROBLEM
• "The Stackelberg problem is to maximize (1) by choosing a sequence of decision rules, the time t component of which maps the time t history of the state z^t into the time t decision u_t of the Stackelberg leader."
• The Stackelberg leader commits to a sequence of decisions
• The optimal decision rule is history dependent
• Two sources of history dependence:
  • the government's ability to commit at time 0
  • the forward-looking behavior of the private sector
• Dynamics of the Lagrange multipliers:
  • the multipliers measure the cost today of honoring past government promises
  • set the multipliers equal to zero at time zero
  • the multipliers take nonzero values thereafter
SOLVING THE STACKELBERG PROBLEM
• 4-step algorithm:
  1. Solve an optimal linear regulator
  2. Use stabilizing properties of shadow prices
  3. Convert implementation multipliers into state variables
  4. Solve for x_0 and μ_{x0}
STEP 1: SOLVE AN O.L.R.
• Assume x_0 is given; this will be corrected for in step 3
• With this assumption, the problem has the form of an optimal linear regulator
• The optimal value function has the form

  $$ v(y) = -y' P y $$

  where P solves the Riccati equation.
• The linear regulator is

  $$ v(y_0) = -y_0' P y_0 = \max_{\{u_t, y_{t+1}\}_{t=0}^{\infty}} \; -\sum_{t=0}^{\infty} \beta^t \left( y_t' R y_t + u_t' Q u_t \right) $$

  subject to an initial y_0 and the law of motion from (2).
• Then, the Bellman equation is

  $$ -y' P y = \max_{u, \, y^*} \left\{ -y' R y - u' Q u - \beta \, y^{*\prime} P y^* \right\} \quad \text{s.t.} \quad y^* = A y + B u \qquad (3) $$
STEP 1: SOLVE AN O.L.R.
• Taking the first-order condition of the Bellman equation and solving gives us

  $$ u = -F y \quad \text{where} \quad F = \beta \left( Q + \beta B' P B \right)^{-1} B' P A \qquad (4) $$

• Plugging this back into the Bellman equation gives us

  $$ -y' P y = -y' R y - \bar{u}' Q \bar{u} - \beta \left( A y + B \bar{u} \right)' P \left( A y + B \bar{u} \right) $$

  such that $\bar{u}$ is optimal, as described by (4).
• Rearranging gives us the matrix Riccati equation

  $$ P = R + \beta A' P A - \beta^2 A' P B \left( Q + \beta B' P B \right)^{-1} B' P A $$

• Denote the solution to this equation as P*.
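Step 1 can be sketched by iterating the matrix Riccati equation to a fixed point P* and recovering F from (4). The matrices below are hypothetical placeholders, not values from the chapter.

```python
import numpy as np

# Hypothetical placeholder matrices (not from the chapter).
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0, 0.0], [0.0, 0.3]])
Q = np.array([[0.5]])

def solve_olr(A, B, R, Q, beta, tol=1e-12, max_iter=100_000):
    """Iterate P = R + beta A'PA - beta^2 A'PB (Q + beta B'PB)^{-1} B'PA
    to a fixed point P*, then recover F = beta (Q + beta B'PB)^{-1} B'PA."""
    P = R.copy()
    for _ in range(max_iter):
        K = np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
        P_next = R + beta * A.T @ P @ A - beta**2 * A.T @ P @ B @ K
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
    return P, F

P_star, F = solve_olr(A, B, R, Q, beta)
# u = -F y is the optimal rule; v(y) = -y'P*y is the value function.
```

Value-function iteration on P is the simplest way to find the fixed point; faster solvers for the same equation exist but are not needed for a sketch.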
STEP 2: USE THE SHADOW PRICE
• Decode the information in P*
• Adapt a method from Section 5.5 that solves a problem of the form (1)-(2)
• Attach a sequence of Lagrange multipliers to the sequence of constraints (2) and form the following Lagrangian:

  $$ L = -\sum_{t=0}^{\infty} \beta^t \left[ y_t' R y_t + u_t' Q u_t + 2 \beta \mu_{t+1}' \left( A y_t + B u_t - y_{t+1} \right) \right] $$

• Partition μ_t conformably with our partition of y
STEP 2: USE THE SHADOW PRICE
• We want to maximize L with respect to u_t and y_{t+1}:

  $$ \frac{\partial L}{\partial u_t} = 0 \;\Rightarrow\; 0 = Q u_t + \beta B' \mu_{t+1} \qquad (5) $$

  $$ \frac{\partial L}{\partial y_t} = 0 \;\Rightarrow\; \mu_t = R y_t + \beta A' \mu_{t+1} $$

• Solving (5) for u_t and plugging into (2) gives us

  $$ y_{t+1} = A y_t - \beta B Q^{-1} B' \mu_{t+1} $$

• Combining this with the first-order condition for y_t, we can write the system as

  $$ \begin{bmatrix} I & \beta B Q^{-1} B' \\ 0 & \beta A' \end{bmatrix} \begin{bmatrix} y_{t+1} \\ \mu_{t+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ -R & I \end{bmatrix} \begin{bmatrix} y_t \\ \mu_t \end{bmatrix}, \quad \text{or} \quad L^* \begin{bmatrix} y_{t+1} \\ \mu_{t+1} \end{bmatrix} = N \begin{bmatrix} y_t \\ \mu_t \end{bmatrix} \qquad (6) $$
STEP 2: USE THE SHADOW PRICE
• We now want to find a stabilizing solution to (6), i.e., a solution that satisfies

  $$ \sum_{t=0}^{\infty} \beta^t \, y_t' y_t < \infty $$

• In Section 5.5, it is shown that a stabilizing solution satisfies $\mu_0 = P^* y_0$
• Then, the solution replicates itself over time in the sense that

  $$ \mu_t = P^* y_t \qquad (7) $$
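Step 2 can be checked numerically: build L* and N from (6) and verify that the stabilizing multiplier μ_t = P* y_t propagates through (6) along the optimal path. The matrices are hypothetical placeholders; P* is obtained here from SciPy's discrete Riccati solver applied to the √β-scaled system, a standard reformulation of the discounted Riccati equation rather than the chapter's own method.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical placeholder matrices (not from the chapter).
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0, 0.0], [0.0, 0.3]])
Q = np.array([[0.5]])

# Discounted Riccati equation == undiscounted DARE with sqrt(beta)-scaled A, B.
sb = np.sqrt(beta)
P_star = solve_discrete_are(sb * A, sb * B, R, Q)
F = beta * np.linalg.solve(Q + beta * B.T @ P_star @ B, B.T @ P_star @ A)

# The system (6): L* [y_{t+1}; mu_{t+1}] = N [y_t; mu_t].
n = A.shape[0]
Qinv = np.linalg.inv(Q)
L_star = np.block([[np.eye(n), beta * B @ Qinv @ B.T],
                   [np.zeros((n, n)), beta * A.T]])
N = np.block([[A, np.zeros((n, n))],
              [-R, np.eye(n)]])

# Along the closed loop y_{t+1} = (A - BF) y_t, the stabilizing choice
# mu_t = P* y_t satisfies (6) exactly.
y0 = np.array([1.0, -0.3])
y1 = (A - B @ F) @ y0
w0 = np.concatenate([y0, P_star @ y0])   # [y_0; mu_0]
w1 = np.concatenate([y1, P_star @ y1])   # [y_1; mu_1]
print(np.max(np.abs(L_star @ w1 - N @ w0)))  # near machine zero
```

The stabilizing property shows up as the closed-loop matrix √β(A − BF) having all eigenvalues inside the unit circle, which is what makes ∑ β^t y_t'y_t finite.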
STEP 3: CONVERT IMPLEMENTATION MULTIPLIERS
• We now confront the inconsistency of our assumption on y_0: the multiplier is forced to be a jump variable
• Focus on the partitions of y and μ
• Convert the multipliers into state variables:
  • Write the last n_x equations of (7) as

    $$ \mu_{xt} = P_{21} z_t + P_{22} x_t $$

    paying attention to the partition of P.
  • Solving this for x_t gives us

    $$ x_t = P_{22}^{-1} \mu_{xt} - P_{22}^{-1} P_{21} z_t \qquad (8) $$
STEP 3: CONVERT IMPLEMENTATION MULTIPLIERS
• Using these modifications and (4) gives us

  $$ u_t = -F \begin{bmatrix} I & 0 \\ -P_{22}^{-1} P_{21} & P_{22}^{-1} \end{bmatrix} \begin{bmatrix} z_t \\ \mu_{xt} \end{bmatrix} \qquad (9) $$

• We now have a complete description of the Stackelberg problem:

  $$ y_{t+1} = A y_t + B u_t, \qquad \begin{bmatrix} z_t \\ \mu_{xt} \end{bmatrix} = \begin{bmatrix} I & 0 \\ P_{21} & P_{22} \end{bmatrix} \begin{bmatrix} z_t \\ x_t \end{bmatrix} \qquad (9') $$

  $$ \begin{bmatrix} z_{t+1} \\ \mu_{x,t+1} \end{bmatrix} = \begin{bmatrix} I & 0 \\ P_{21} & P_{22} \end{bmatrix} \left( A - B F \right) \begin{bmatrix} I & 0 \\ -P_{22}^{-1} P_{21} & P_{22}^{-1} \end{bmatrix} \begin{bmatrix} z_t \\ \mu_{xt} \end{bmatrix} \qquad (9'') $$
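Step 3's change of variables can also be verified numerically: assemble the transition matrix of (9'') and check that one step in (z_t, μ_{xt}) coordinates agrees with one step of the closed-loop law y_{t+1} = (A − BF) y_t. As before, the matrices are hypothetical placeholders and SciPy's DARE solver on the √β-scaled system stands in for the Riccati iteration.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical placeholder matrices (not from the chapter); z and x are scalars.
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0, 0.0], [0.0, 0.3]])
Q = np.array([[0.5]])

sb = np.sqrt(beta)
P = solve_discrete_are(sb * A, sb * B, R, Q)   # P*
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)

# Partition P* conformably with y = [z, x]' (1x1 blocks here).
P21, P22 = P[1:, :1], P[1:, 1:]
P22inv = np.linalg.inv(P22)

to_y  = np.block([[np.eye(1), np.zeros((1, 1))],   # maps [z; mu_x] -> [z; x], per (8)
                  [-P22inv @ P21, P22inv]])
to_mu = np.block([[np.eye(1), np.zeros((1, 1))],   # maps [z; x] -> [z; mu_x], per (7)
                  [P21, P22]])

T = to_mu @ (A - B @ F) @ to_y   # the transition matrix in (9'')

# Check: from any (z0, x0), one step of (9'') agrees with y1 = (A - BF) y0.
y0 = np.array([1.0, 0.4])
w0 = to_mu @ y0                  # [z0; mu_{x0}]
y1 = (A - B @ F) @ y0
assert np.allclose(T @ w0, to_mu @ y1)
```

The two change-of-basis matrices are inverses of each other, so (9'') is just the closed-loop law rewritten in the (z_t, μ_{xt}) state.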
STEP 4: SOLVE FOR x_0 AND μ_{x0}
• The value function satisfies

  $$ v(y_0) = -y_0' P^* y_0 = -z_0' P_{11}^* z_0 - 2 x_0' P_{21}^* z_0 - x_0' P_{22}^* x_0 $$

• Now, choose x_0 by equating to zero the gradient of v(y_0) with respect to x_0:

  $$ -2 P_{21}^* z_0 - 2 P_{22}^* x_0 = 0 \;\Rightarrow\; x_0 = -\left( P_{22}^* \right)^{-1} P_{21}^* z_0 $$

• Then, recall (8):

  $$ (8) \;\Rightarrow\; \mu_{x0} = 0 $$

• Finally, the Stackelberg problem is solved by plugging these initial conditions into (9), (9'), and (9'') and iterating the process to get $\{u_t, x_t, z_{t+1}\}_{t=0}^{\infty}$
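The whole four-step procedure can be sketched end to end: compute P* and F, set μ_{x0} = 0 so that x_0 = −(P*_{22})^{-1} P*_{21} z_0, and iterate to produce {u_t, x_t, z_{t+1}}. The matrices remain hypothetical placeholders, with SciPy's DARE solver on the √β-scaled system used for step 1.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical placeholder matrices (not from the chapter); z and x are scalars.
beta = 0.95
A = np.array([[0.9, 0.1], [0.2, 0.5]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0, 0.0], [0.0, 0.3]])
Q = np.array([[0.5]])

# Steps 1-2: P* and F.
sb = np.sqrt(beta)
P = solve_discrete_are(sb * A, sb * B, R, Q)
F = beta * np.linalg.solve(Q + beta * B.T @ P @ B, B.T @ P @ A)
P21, P22 = P[1:, :1], P[1:, 1:]

# Step 4: initial conditions mu_{x0} = 0, x_0 = -(P22*)^{-1} P21* z_0.
z0 = np.array([1.0])
x0 = -np.linalg.solve(P22, P21 @ z0)
y = np.concatenate([z0, x0])

# Iterate: u_t = -F y_t, y_{t+1} = (A - BF) y_t.
us, xs, zs = [], [], []
for t in range(200):
    u = -F @ y
    y = A @ y + B @ u
    us.append(float(u[0])); zs.append(float(y[0])); xs.append(float(y[1]))

# The multiplier mu_{xt} = P21 z_t + P22 x_t starts at zero and takes nonzero
# values thereafter, measuring the cost of honoring past promises.
mu_x0 = P21 @ z0 + P22 @ x0
print(float(mu_x0[0]))   # 0 up to rounding
```

Because the closed loop is stabilizing, the simulated paths of z_t and x_t decay, consistent with the boundedness requirement on ∑ β^t y_t'y_t.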
CONCLUSION
• Brief review
• Setup and goal of the problem
• 4-step algorithm
• Questions, comments, or feedback