TM 661 Engineering Economics for Managers

TM 732
Markov Chains
Motivation; Inventory

Consider an (S, s) inventory system with S = 3 and s = 0: whenever
inventory on hand falls to 0, an order arrives that brings it back up to 3.

[Figure: inventory level (0-4 units) plotted against time (periods 0-10)]

Let
Dt = demand in period t
Xt = inventory on hand at time t

Sample transitions:
- Inventory = 3, Dt = 2  →  ending inventory = 1
- Inventory = 2, Dt = 3  →  ending inventory = 0 (excess demand is lost)
- Inventory = 0, Dt = 2  →  the order up to 3 arrives, ending inventory = 3 - 2 = 1

The transition rule is

Xt+1 = max(Xt - Dt, 0) ,  Xt ≥ 1
Xt+1 = max(3 - Dt, 0)  ,  Xt = 0
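The recursion above can be sketched in a few lines of Python; the demand sequence below is illustrative only (it is not from the lecture):

```python
# A minimal sketch of the (S = 3, s = 0) inventory transition rule.

def next_inventory(x, d, S=3):
    """One period of the (S, s=0) system: serve demand, reorder at 0."""
    if x >= 1:
        return max(x - d, 0)   # serve demand from stock on hand
    return max(S - d, 0)       # stock was 0: the order up to S arrives first

x = 3
levels = []
for d in [2, 3, 2, 0, 1]:      # made-up demand sequence for illustration
    x = next_inventory(x, d)
    levels.append(x)
print(levels)                  # [1, 0, 1, 1, 0]
```

Note how the third period starts at 0, so the replenishment to 3 arrives before demand is served.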
Inventory Example

Recall, Xt = 0, 1, 2, 3.

Let
pij = P{End Inv. = j | Start Inv. = i}
    = P{Xt+1 = j | Xt = i}
Inventory Example

P = transition matrix giving the probability of moving
from state i to state j:

        p00  p01  p02  p03
P =     p10  p11  p12  p13
        p20  p21  p22  p23
        p30  p31  p32  p33

pij = P{Xt+1 = j | Xt = i}
Computing Transition Prob.

p10 = P{Xt+1 = 0 | Xt = 1}
    = P{Dt+1 ≥ 1}

Let's suppose that demand follows a Poisson
distribution with λ = 1:

P{Dt = x} = λ^x e^(-λ) / x! = e^(-1) / x! = 0.36788 / x!

Then

p10 = P{Dt+1 ≥ 1} = e^(-1)/1! + e^(-1)/2! + ... = e^(-1) Σ (x=1 to ∞) 1/x!

or, more simply,

p10 = 1 - P{Dt+1 = 0} = 1 - e^(-1)/0! = 0.632
Computing Transition Prob.

p20 = P{Xt+1 = 0 | Xt = 2}
    = P{Dt+1 ≥ 2}
    = 1 - P{Dt+1 ≤ 1}
    = 1 - [ e^(-1)/0! + e^(-1)/1! ]
    = 0.264
Computing Transition Prob.

Class Exercise:
Compute the following transition probabilities

a. p21    d. p00
b. p22    e. p01
c. p23
Computing Transition Prob.

Answers:
a. p21 = 0.368    d. p00 = 0.080
b. p22 = 0.368    e. p01 = 0.184
c. p23 = 0.000

p21 = P{Xt+1 = 1 | Xt = 2}
    = P{Dt+1 = 1}
    = e^(-1)/1!
    = 0.368
Computing Transition Prob.

p22 = P{Xt+1 = 2 | Xt = 2}
    = P{Dt+1 = 0}
    = e^(-1)/0!
    = 0.368
Computing Transition Prob.

p23 = P{Xt+1 = 3 | Xt = 2}
    = P{Dt+1 = -1}
    = 0
Computing Transition Prob.

p00 = P{Xt+1 = 0 | Xt = 0}

In this case, we assume that with Xt = 0 the order up to 3 comes in, so

p00 = P{Dt+1 ≥ 3} = 0.080
Computing Transition Prob.

p01 = P{Xt+1 = 1 | Xt = 0}

In this case, we assume that with Xt = 0 the order up to 3 comes in, so

p01 = P{Dt+1 = 2}
    = e^(-1)/2!
    = 0.184
Inventory Example

Transition Matrix

        0.080  0.184  0.368  0.368
P =     0.632  0.368  0.0    0.0
        0.264  0.368  0.368  0.0
        0.080  0.184  0.368  0.368
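This matrix can be reproduced directly from the Poisson(λ = 1) demand model; a minimal sketch (the helper names are mine, not from the lecture):

```python
from math import exp, factorial

lam = 1.0
def pois(x):
    """P{D = x} for Poisson demand with lambda = 1."""
    return lam**x * exp(-lam) / factorial(x)

P = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    level = 3 if i == 0 else i              # at X_t = 0 we order up to 3
    for j in range(4):
        if j == 0:
            # end at 0 whenever demand meets or exceeds the stock level
            P[i][j] = 1 - sum(pois(d) for d in range(level))
        elif j <= level:
            P[i][j] = pois(level - j)       # demand of exactly level - j
        # ending above the stock level is impossible: P[i][j] stays 0

for row in P:
    print([round(v, 3) for v in row])       # matches the matrix above
```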
Markovian Property

P{Xt+1 = j | X0 = k0, X1 = k1, ..., Xt = i}
  = P{Xt+1 = j | Xt = i}

In words,
Past history does not matter; the probability of
finding yourself in a given state depends only
on the previous state.
Transition Probabilities

P{Xt+1 = j | Xt = i} = P{X1 = j | X0 = i}

In words,
One-step transition probabilities are stationary:
they do not change over time. We write

pij = P{Xt+1 = j | Xt = i}
Transition Probabilities

n-step transition probabilities:

pij(n) = P{Xt+n = j | Xt = i}

n-step stationarity:

P{Xt+n = j | Xt = i} = P{Xn = j | X0 = i}
Transition Probabilities

Rules of Probability

1). pij(n) ≥ 0 , for all i, j
    The probability of transitioning from any state
    to any other state must be ≥ 0.

2). Σ (j=0 to N) pij(n) = 1 , for all i
    At any point in time, we must be in some state.
Markov Chain

Def: A stochastic process {Xt: t = 0, 1, 2, ...} is
a Markov chain if it has the Markovian property:

P{Xt+1 = j | X0 = k0, X1 = k1, ..., Xt = i}
  = P{Xt+1 = j | Xt = i}
Example; Weather

Let State 0 = dry, 1 = rain.

p00 = P{dry today | dry yesterday} = 0.7
p01 = P{rain today | dry yesterday} = 0.3
p10 = P{dry today | rain yesterday} = 0.5
p11 = P{rain today | rain yesterday} = 0.5
Example; Weather

Let State 0 = dry, 1 = rain.

        0.7  0.3
P =
        0.5  0.5
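As a quick check on the two-state weather chain, two-step probabilities follow by squaring P; a sketch in plain Python (no libraries assumed):

```python
# Two-step weather probabilities for the 2x2 chain: dry = 0, rain = 1.
P = [[0.7, 0.3],
     [0.5, 0.5]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = matmul(P, P)
print(P2[0][0])   # P{dry in 2 days | dry today} = 0.7*0.7 + 0.3*0.5 = 0.64
```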
Example; Weather II

Let State
0 = dry today and yesterday
1 = dry today and rain yesterday
2 = rain today and dry yesterday
3 = rain today and yesterday

p00 = P{dry today & yesterday | dry yesterday & day before} = 0.9
Example; Weather II

p01 = P{dry today & rain yesterday | dry yesterday & day before}
    = 0 (not possible: starting from state 0, tomorrow's state must
      record today as dry, so only states 0 and 2 are reachable)
Example; Weather II

State
0 = dry today and yesterday
1 = dry today and rain yesterday
2 = rain today and dry yesterday
3 = rain today and yesterday

        0.9  0.0  0.1  0.0
P =     0.6  0.0  0.4  0.0
        0.0  0.5  0.0  0.5
        0.0  0.3  0.0  0.7
Example; Gambling

A gambler bets $1 with each play. He wins $1
with probability p and loses $1 with probability
1 - p. The game ends when he wins $3 or goes
broke.

Let
Xt = money on hand = 0, 1, 2, 3

         1    0    0    0       0
P =     1-p   0    p    0       1
         0   1-p   0    p       2
         0    0    0    1       3
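One way to see what this chain does is to compute the probability of reaching $3 before going broke, starting from each transient state, by iterating the first-step equation h(i) = p·h(i+1) + (1-p)·h(i-1) with boundaries h(0) = 0, h(3) = 1. A sketch; the choice p = 0.5 is my assumption for illustration:

```python
# Probability of winning $3 before ruin, by fixed-point iteration
# on h(i) = p*h(i+1) + (1-p)*h(i-1), h(0) = 0, h(3) = 1.
p = 0.5
h = [0.0, 0.0, 0.0, 1.0]           # absorbing boundaries
for _ in range(1000):              # iterate to convergence
    h[1] = p * h[2] + (1 - p) * h[0]
    h[2] = p * h[3] + (1 - p) * h[1]
print(h[1], h[2])                  # 1/3 and 2/3 when p = 0.5
```

With a fair game the win probability is proportional to the starting stake, which the iteration recovers.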
2-step transitions

Suppose we start with 3 in inventory. What is
the probability we will have 1 in inventory after 2
time periods (2 steps)?

P{Xt+2 = 1 | Xt = 3}
  = P{Xt+2 = 1 | Xt+1 = 3} P{Xt+1 = 3 | Xt = 3}
  + P{Xt+2 = 1 | Xt+1 = 2} P{Xt+1 = 2 | Xt = 3}
  + P{Xt+2 = 1 | Xt+1 = 1} P{Xt+1 = 1 | Xt = 3}
  + P{Xt+2 = 1 | Xt+1 = 0} P{Xt+1 = 0 | Xt = 3}

By stationarity, P{Xt+2 = 1 | Xt+1 = k} = P{Xt+1 = 1 | Xt = k}, so

P{Xt+2 = 1 | Xt = 3}
  = p31 p33 + p21 p32 + p11 p31 + p01 p30
Chapman-Kolmogorov Eqs.

P{Xt+2 = 1 | Xt = 3}
  = p31 p33 + p21 p32 + p11 p31 + p01 p30

p31(2) = Σ (k=0 to 3) p3k(1) pk1(1)

In general,

pij(n) = Σ (k=0 to M) pik(m) pkj(n-m)
Aside; Matrix Multiplication

Multiplying the transition matrix by itself,

   p00 p01 p02 p03       p00 p01 p02 p03
   p10 p11 p12 p13   x   p10 p11 p12 p13
   p20 p21 p22 p23       p20 p21 p22 p23
   p30 p31 p32 p33       p30 p31 p32 p33

the (3,1) entry of the product is

a31 = p31 p33 + p21 p32 + p11 p31 + p01 p30
    = P{Xt+2 = 1 | Xt = 3}
Chapman-Kolmogorov Eqs.

pij(n) = Σ (k=0 to M) pik(m) pkj(n-m)

In matrix form,

P(n) = P^n
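Under the relation P(n) = P^n, the n-step probabilities of the inventory chain are just matrix powers; a sketch using the matrix computed earlier in the lecture (numpy assumed available):

```python
import numpy as np

# Inventory chain one-step transition matrix from the lecture.
P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.0,   0.0  ],
              [0.264, 0.368, 0.368, 0.0  ],
              [0.080, 0.184, 0.368, 0.368]])

P2 = np.linalg.matrix_power(P, 2)   # two-step probabilities, e.g. p31(2)
P8 = np.linalg.matrix_power(P, 8)   # by n = 8 the rows are nearly identical
print(P8.round(3))
```

The rows of P^8 agreeing with each other previews the steady-state behavior shown later for the irreducible inventory chain.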
Classification of States

Communicate
If pij(n) > 0 for some n, state j is accessible from
state i; states i and j communicate if each is
accessible from the other.

For gambler's ruin,

         1    0    0    0
P =     1-p   0    p    0
         0   1-p   0    p
         0    0    0    1
Classification of States

Communicate
If pij(n) > 0 and pji(m) > 0 for some n, m,
then i and j communicate.

Properties:
1. Any state communicates with itself.
2. If i communicates with j, then j communicates with i.
3. If i communicates with j, and j communicates with k,
   then i communicates with k.
Classification of States

Irreducible
All states communicate.

Inventory example,

        0.080  0.184  0.368  0.368      0
P =     0.632  0.368  0.0    0.0        1
        0.264  0.368  0.368  0.0        2
        0.080  0.184  0.368  0.368      3
Classification of States

Irreducible
All states communicate.

Inventory example,

          0.286  0.285  0.263  0.166
P(8) =    0.286  0.285  0.263  0.166
          0.286  0.285  0.263  0.166
          0.286  0.285  0.263  0.166
Classification of States

Irreducible
All states communicate.

Gambler's ruin (not irreducible: no state can be
reached from the absorbing states 0 and 3),

         1    0    0    0       0
P =     1-p   0    p    0       1
         0   1-p   0    p       2
         0    0    0    1       3
Classification of States

Recurrent
fii = P{ever return to state i | start in state i} = 1

In words,
Given a continuing process, once we leave a state,
it is certain that we will eventually return to it.

Transient
fii = P{ever return to state i | start in state i} < 1
Classification of States

Absorbing
A state is absorbing if its one-step transition
probability to itself equals 1.0:

pii = 1

         1    0    0    0       0
P =     1-p   0    p    0       1
         0   1-p   0    p       2
         0    0    0    1       3
Recurrence

E[# times system in state i] = 1 / (1 - fii)

For a recurrent state, fii = 1:

E[# times in i] = lim (fii → 1) 1/(1 - fii) = ∞
Recurrence

Let
In = 1 if Xn = i, 0 if Xn ≠ i, given X0 = i.

Then,

Σ (n=1 to ∞) (In | X0 = i) = # times i is visited
Recurrence

E[ Σ (n=1 to ∞) (In | X0) ] = Σ (n=1 to ∞) E[(In | X0)]
  = Σ (n=1 to ∞) P{Xn = i | X0 = i}
  = Σ (n=1 to ∞) pii(n)
Recurrence

E[ Σ (n=1 to ∞) (In | X0) ] = Σ (n=1 to ∞) pii(n)

But a recurrent state will be visited infinitely often, so
for a recurrent state,

Σ (n=1 to ∞) pii(n) = ∞
Recurrence Property
• All states in a class are either recurrent or
transient
• All states in an irreducible Markov Chain are
recurrent
Period

Gambler's ruin:

         1    0    0    0       0
P =     1-p   0    p    0       1
         0   1-p   0    p       2
         0    0    0    1       3

It is possible to re-enter state 1 only at times t = 2, 4, 6, ...
so state 1 has period = 2.
Period

The period of state i is the largest integer t such that
pii(n) = 0 for all values of n other than t, 2t, 3t, ...
(returns are possible only at multiples of t).

A state with period = 1 is aperiodic.
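The period can be checked numerically as the gcd of the return times with positive probability; a sketch for gambler's ruin with p = 0.5 (the cutoff nmax = 20 is an arbitrary choice, and numpy is assumed available):

```python
from math import gcd
import numpy as np

# Gambler's ruin with p = 0.5; states 0 and 3 are absorbing.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 0.0, 1.0]])

def period(P, i, nmax=20):
    """gcd of all n <= nmax with P^n[i, i] > 0 (sketch, not a proof)."""
    g = 0
    Pn = np.eye(len(P))
    for n in range(1, nmax + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            g = gcd(g, n)       # gcd(0, n) = n seeds the first return time
    return g

print(period(P, 1))   # 2: state 1 can only be re-entered at even times
```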
Definitions

Positive Recurrent
The expected time to re-enter state i is finite.

Null Recurrent
The expected time to re-enter state i is ∞.

Ergodic
A positive recurrent state with period = 1 (aperiodic).