
Al-Imam Mohammad Ibn Saud University

CS433 Modeling and Simulation Lecture 06 – Part 03

Discrete Markov Chains

12 Apr 2009

Dr. Anis Koubâa

Classification of States: 1

 A path is a sequence of states, where each transition has a positive probability of occurring.

 State j is reachable (or accessible) from state i (i → j) if there is a path from i to j; equivalently, P_ij(n) > 0 for some n ≥ 0, i.e., the probability of going from i to j in n steps is greater than zero.

 States i and j communicate (i ↔ j) if i is reachable from j and j is reachable from i. (Note: a state i always communicates with itself.)

 A set of states C is a communicating class if every pair of states in C communicates with each other, and no state in C communicates with any state not in C.
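The reachability and communication tests above can be sketched in a few lines of Python; the 3-state chain used here is a made-up illustration (state 2 is absorbing), not one from the slides.

```python
# Reachability: j is reachable from i if P_ij(n) > 0 for some n >= 0,
# i.e. there is a directed path of positive-probability transitions.

def reachable(P, i, j):
    """True if state j is reachable from state i."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        if s == j:
            return True
        for t, prob in enumerate(P[s]):
            if prob > 0 and t not in seen:
                seen.add(t)
                stack.append(t)
    return False

def communicate(P, i, j):
    """States i and j communicate if each is reachable from the other."""
    return reachable(P, i, j) and reachable(P, j, i)

# 0 and 1 communicate; state 2 is absorbing, reachable from 1 but not back.
P = [[0.5, 0.5, 0.0],
     [0.4, 0.3, 0.3],
     [0.0, 0.0, 1.0]]
print(communicate(P, 0, 1))  # True
print(communicate(P, 1, 2))  # False
```

Grouping states by this relation yields the communicating classes of the chain.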

Classification of States: 1

 A state i is said to be an absorbing state if p_ii = 1.

 A subset S of the state space X is a closed set if no state outside of S is reachable from any state in S (like an absorbing state, but with multiple states); this means p_ij = 0 for every i ∈ S and j ∉ S.

 A closed set S of states is irreducible if any state j ∈ S is reachable from every state i ∈ S.

 A Markov chain is said to be irreducible if the state space X is irreducible.
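The closed-set condition translates directly into code. A minimal check, on an assumed 3-state chain in which {1, 2} is closed:

```python
# A set S is closed if p_ij = 0 for every i in S and every j outside S.

def is_closed(P, S):
    """True if no transition leads from S to a state outside S."""
    n = len(P)
    return all(P[i][j] == 0 for i in S for j in range(n) if j not in S)

P = [[0.2, 0.8, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.5, 0.5]]

print(is_closed(P, {1, 2}))  # True: no transition leaves {1, 2}
print(is_closed(P, {0, 1}))  # False: p_12 = 0.5 leads outside
```

The whole state space is always closed, which is why irreducibility of X makes the entire chain irreducible.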

Example

[Figure: two transition diagrams. The first shows an irreducible Markov chain (states 0, 1, 2 with transitions p00, p01, p10, p12, p21 forming a single communicating class). The second shows a reducible Markov chain on states 0–4, with an absorbing state 4 reached via p14 and a closed irreducible set {2, 3} (transitions p22, p23, p32, p33).]

Classification of States: 2

 State i is a transient state if there exists a state j such that j is reachable from i but i is not reachable from j.

 A state that is not transient is recurrent. There are two types of recurrent states:
1. Positive recurrent: the expected time to return to the state is finite.
2. Null recurrent (less common): the expected time to return to the state is infinite (this requires an infinite number of states).

 A state i is periodic with period k > 1 if k is the largest number such that every path leading from state i back to state i has a number of transitions that is a multiple of k.

 A state is aperiodic if it has period k = 1.

 A state is ergodic if it is positive recurrent and aperiodic.
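The period of a state can be estimated numerically as the gcd of all path lengths n with P_ii(n) > 0, scanned up to a cutoff. A pure-Python sketch on two toy chains (both my own examples, not from the slides):

```python
# Period of state i: gcd of all n (up to nmax) with P_ii(n) > 0.
from math import gcd

def period(P, i, nmax=30):
    n = len(P)
    reach = [row[:] for row in P]      # reach[a][b] > 0 iff P_ab(k) > 0
    d = 0
    for k in range(1, nmax + 1):
        if k > 1:
            reach = [[sum(reach[a][b] * P[b][c] for b in range(n))
                      for c in range(n)] for a in range(n)]
        if reach[i][i] > 0:
            d = gcd(d, k)
    return d

# Two-state chain that alternates deterministically: all returns take an
# even number of steps, so the period is 2.
P2 = [[0.0, 1.0],
      [1.0, 0.0]]
print(period(P2, 0))   # 2

# Adding a self-loop allows a return in one step: period 1 (aperiodic).
P1 = [[0.5, 0.5],
      [1.0, 0.0]]
print(period(P1, 0))   # 1
```

The cutoff nmax is a practical compromise; for small finite chains the gcd stabilizes after a few powers of P.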

Classification of States: 2

Example from the book Introduction to Probability: Lecture Notes, D. Bertsekas and J. Tsitsiklis, Fall 2000.

Transient and Recurrent States

 We define the hitting time T_ij as the random variable that represents the time to go from state i to state j for the first time:

T_ij = min{ k ≥ 0 : X_k = j | X_0 = i }

where k is the number of transitions in a path from i to j; T_ij is the minimum number of transitions in a path from i to j.

 We define the recurrence time T_ii as the first time that the Markov chain returns to state i:

T_ii = min{ k > 0 : X_k = i | X_0 = i }

 The probability that the first recurrence to state i occurs at the nth step is

f_ii(n) = Pr{ T_ii = n } = Pr{ X_n = i, X_k ≠ i for k = 1, …, n−1 | X_0 = i }

 The probability of recurrence to state i is

f_i = Pr{ T_ii < ∞ } = Σ_{n=1..∞} f_ii(n)
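The first-recurrence probabilities f_ii(n) can be computed by a small dynamic program over paths that avoid state i until step n. The two-state chain below is an assumed example in which f_00(1) = f_00(2) = 0.5, so f_0 = 1 and state 0 is recurrent.

```python
# f_ii(n) via "taboo" probabilities: q[j] tracks the probability of being
# at j after k steps without having revisited i.

def first_return(P, i, nmax):
    """Return [f_ii(1), ..., f_ii(nmax)]."""
    n = len(P)
    q = [P[i][j] if j != i else 0.0 for j in range(n)]   # k = 1
    f = [P[i][i]]                                        # f_ii(1)
    for _ in range(2, nmax + 1):
        f.append(sum(q[j] * P[j][i] for j in range(n)))  # enter i now
        q = [sum(q[a] * P[a][b] for a in range(n)) if b != i else 0.0
             for b in range(n)]                          # stay away from i
    return f

P = [[0.5, 0.5],
     [1.0, 0.0]]
f = first_return(P, 0, 10)
print(f[:3])      # [0.5, 0.5, 0.0]
print(sum(f))     # f_0 = 1.0, so state 0 is recurrent
```

For this chain every excursion from state 0 returns within two steps, which is why f_ii(n) vanishes for n > 2.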

Transient and Recurrent States

 The mean recurrence time of state i is

M_i = E[ T_ii | X_0 = i ] = Σ_{n=1..∞} n f_ii(n)

 A state is recurrent if f_i = 1, i.e., Pr{ T_ii < ∞ | X_0 = i } = 1.
– If M_i < ∞, then it is said to be positive recurrent.
– If M_i = ∞, then it is said to be null recurrent.

 A state is transient if f_i < 1, i.e., Pr{ T_ii < ∞ | X_0 = i } < 1.

 If a state is transient, then 1 − f_i = Pr{ T_ii = ∞ } is the probability of never returning to state i.

Transient and Recurrent States

 We define N_i as the number of visits to state i given X_0 = i:

N_i = Σ_{n=0..∞} 1{ X_n = i },  where 1{ X_n = i } = 1 if X_n = i and 0 otherwise.

 Theorem: If N_i is the number of visits to state i given X_0 = i, then

E[ N_i | X_0 = i ] = Σ_{n=0..∞} P_ii(n) = 1 / (1 − f_i)

where P_ii(n) is the transition probability from state i to state i after n steps.
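The theorem E[N_i | X_0 = i] = 1/(1 − f_i) can be checked by Monte Carlo on an assumed two-state chain: from state 0 the chain stays with probability f_0 = 0.3 or jumps to an absorbing state with probability 0.7, so the expected number of visits to 0 should be 1/0.7 ≈ 1.43.

```python
# Monte Carlo check of E[N_0] = 1/(1 - f_0) for f_0 = 0.3.
import random

random.seed(1)
f0 = 0.3
trials = 100_000
total_visits = 0
for _ in range(trials):
    visits = 1                      # the initial visit X_0 = 0 counts
    while random.random() < f0:     # return to 0 with probability f_0
        visits += 1
    total_visits += visits

estimate = total_visits / trials
print(round(estimate, 2), round(1 / (1 - f0), 2))
```

Because f_0 < 1, state 0 is transient and the visit count is geometric with a finite mean; for a recurrent state (f_i = 1) the sum diverges and the state is visited infinitely often.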

Transient and Recurrent States

 The probability of reaching state j for the first time in n steps, starting from X_0 = i, is

f_ij(n) = Pr{ T_ij = n } = Pr{ X_n = j, X_k ≠ j for k = 1, …, n−1 | X_0 = i }

 The probability of ever reaching j starting from state i is

f_ij = Pr{ T_ij < ∞ } = Σ_{n=1..∞} f_ij(n)

Three Theorems

 If a Markov chain has a finite state space, then at least one of the states is recurrent.

 If state i is recurrent and state j is reachable from state i, then state j is also recurrent.

 If S is a finite closed irreducible set of states, then every state in S is recurrent.

Positive and Null Recurrent States

 Let M_i be the mean recurrence time of state i:

M_i = E[ T_ii ] = Σ_{k=1..∞} k Pr{ T_ii = k }

 A state is said to be positive recurrent if M_i < ∞. If M_i = ∞, then the state is said to be null recurrent.
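The mean recurrence time can be estimated by simulation. For the assumed two-state chain P = [[0.5, 0.5], [1, 0]], a first return to state 0 takes 1 step (stay) or 2 steps (via state 1), each with probability 0.5, so M_0 = 1.5.

```python
# Estimating M_0 = E[T_00] by simulating first returns to state 0.
import random

random.seed(7)
P = [[0.5, 0.5],
     [1.0, 0.0]]

def step(state):
    """One transition of the two-state chain."""
    return 0 if random.random() < P[state][0] else 1

trials = 50_000
total = 0
for _ in range(trials):
    state, k = step(0), 1           # first transition out of state 0
    while state != 0:
        state, k = step(state), k + 1
    total += k

print(total / trials)               # close to M_0 = 1.5
```

Since M_0 is finite, state 0 is positive recurrent; a null recurrent state would need an infinite state space, as noted earlier.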

Three Theorems

 If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.

 If S is a closed irreducible set of states, then either every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.

 If S is a finite closed irreducible set of states, then every state in S is positive recurrent.

Example

[Figure: the reducible chain from the earlier example. States 0 and 1 are transient states; the absorbing state 4 is a recurrent state; states 2 and 3 form a set of positive recurrent states.]

Periodic and Aperiodic States

 Suppose that the structure of the Markov chain is such that state i is visited only after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.

 If no such integer exists (i.e., d = 1), then the state is called aperiodic.

Example: a periodic state with d = 2. [Diagram: states 0, 1, 2; state 1 moves to 0 or 2 with probability 0.5 each, and states 0 and 2 return to 1 with probability 1.]

P =
| 0    1    0   |
| 0.5  0    0.5 |
| 0    1    0   |

Steady State Analysis

 Recall that the state probability, i.e., the probability of finding the MC at state i after the kth step, is given by

π_i(k) = Pr{ X_k = i },  i = 0, 1, …

 An interesting question is what happens in the "long run":

π_i = lim_{k→∞} π_i(k)

This is referred to as the steady state, equilibrium, or stationary probability.

 Questions:
– Do these limits exist?
– If they exist, do they converge to a legitimate probability distribution, i.e., Σ_i π_i = 1?
– How do we evaluate π_j, for all j?

Steady State Analysis

 Recall the recursive probability

π(k+1) = π(k) P

 If a steady state exists, then π(k+1) = π(k), and therefore the steady state probabilities are given by the solution to the equations

π = π P  and  Σ_i π_i = 1

 Even in an irreducible Markov chain, the presence of periodic states prevents the existence of a steady state probability.

Example (periodic.m): with

P =
| 0    1    0   |
| 0.5  0    0.5 |
| 0    1    0   |

and π(0) = (1, 0, 0), the state probabilities oscillate and never converge.

Steady State Analysis

 THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists such that π_j > 0 and

π_j = lim_{k→∞} π_j(k) = 1 / M_j

where M_j is the mean recurrence time of state j.

 The steady state vector π is determined by solving

π = π P  and  Σ_i π_i = 1

 Such a chain is called an ergodic Markov chain.
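The pair of equations π = πP, Σ_i π_i = 1 can be solved numerically by power iteration. The two-state aperiodic chain below is my own example; its stationary vector is (1/3, 2/3).

```python
# Power iteration: repeatedly apply pi <- pi P until it stops changing.
# The ergodic theorem guarantees convergence to the unique stationary pi.

P = [[0.50, 0.50],
     [0.25, 0.75]]

pi = [1.0, 0.0]                     # any initial distribution works
for _ in range(200):
    pi = [sum(pi[a] * P[a][b] for a in range(2)) for b in range(2)]

print([round(x, 4) for x in pi])    # -> [0.3333, 0.6667]
# Check pi_j = 1 / M_j: the mean recurrence time of state 0 is
# 1 / pi_0 = 3 steps.
```

For larger chains one would normally solve the linear system directly, but power iteration mirrors the recursion π(k+1) = π(k)P from the slides.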

Discrete Birth-Death Example

[Diagram: states 0, 1, 2, …; from each state i ≥ 1 the chain moves up with probability 1 − p and down with probability p; state 0 has a self-loop with probability p.]

P =
| p    1−p  0    0   … |
| p    0    1−p  0   … |
| 0    p    0    1−p … |
| …                    |

 Thus, to find the steady state vector π we need to solve

π = π P  and  Σ_i π_i = 1

Discrete Birth-Death Example

 In other words:

π_0 = π_0 p + π_1 p
π_j = π_{j−1} (1 − p) + π_{j+1} p,  j = 1, 2, …

 Solving these equations we get

π_1 = ((1 − p) / p) π_0

In general

π_{j+1} = ((1 − p) / p) π_j,  j = 1, 2, …,  e.g., π_2 = ((1 − p) / p)² π_0

 Summing all terms we get

π_0 Σ_{i=0..∞} ((1 − p) / p)^i = 1,  so  π_0 = 1 / Σ_{i=0..∞} ((1 − p) / p)^i

Discrete Birth-Death Example

 Therefore, for all states j we get

π_j = ((1 − p) / p)^j / Σ_{i=0..∞} ((1 − p) / p)^i

 If p < 1/2, then Σ_{i=0..∞} ((1 − p) / p)^i = ∞, so π_j = 0 for all j: all states are transient.

 If p > 1/2, then Σ_{i=0..∞} ((1 − p) / p)^i = p / (2p − 1), so

π_0 = (2p − 1) / p  and  π_j = ((2p − 1) / p) ((1 − p) / p)^j, for all j

All states are positive recurrent.

Discrete Birth-Death Example

 If p = 1/2, then ((1 − p) / p)^i = 1 for every i, so Σ_{i=0..∞} ((1 − p) / p)^i = ∞ and π_j = 0 for all j: all states are null recurrent.
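The closed-form result for p > 1/2 can be checked numerically; p = 0.7 below is an arbitrary choice. The truncated distribution should sum to 1 and satisfy the balance equation at state 0.

```python
# Check pi_j = ((2p-1)/p) * ((1-p)/p)**j for the birth-death chain, p = 0.7.

p = 0.7
r = (1 - p) / p                       # ratio (1-p)/p < 1 since p > 1/2
pi = [((2 * p - 1) / p) * r**j for j in range(200)]

print(round(sum(pi), 6))              # ~1.0 (geometric tail truncated)
# Balance at state 0: pi_0 = pi_0 * p + pi_1 * p
print(abs(pi[0] - (pi[0] * p + pi[1] * p)) < 1e-12)
```

For p ≤ 1/2 the same geometric series diverges, which is exactly why the normalization fails and no stationary distribution exists.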

Reducible Markov Chains

[Figure: a transient set T with arrows into two irreducible sets S1 and S2.]

In steady state, we know that the Markov chain will eventually end up in an irreducible set (where the previous analysis still holds) or in an absorbing state. The only question that arises, in case there are two or more irreducible sets, is the probability of ending up in each set.

Reducible Markov Chains

[Figure: a transient set T containing states i and r, and an irreducible set S with states s1, …, sn.]

 Suppose we start from state i ∈ T. Then there are two ways to reach S: in one step, or by moving to some state r ∈ T after the first step and then to S. Define

ρ_i(k) = Pr{ X_k ∈ S, X_{k−1} ∈ T, …, X_1 ∈ T | X_0 = i },  k = 1, 2, …

Reducible Markov Chains

 First consider the one-step transition:

ρ_i(1) = Pr{ X_1 ∈ S | X_0 = i } = Σ_{j∈S} p_ij

 Next consider the general case for k = 2, 3, …: the chain must move to some transient state r in the first step and then enter S for the first time at step k:

ρ_i(k) = Pr{ X_k ∈ S, X_{k−1} ∈ T, …, X_1 ∈ T | X_0 = i } = Σ_{r∈T} p_ir ρ_r(k−1)
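This recursion can be turned into a small fixed-point iteration that computes the probability f_i of ever entering S as the sum of ρ_i(k) over k. The 4-state gambler's-ruin-style chain below is an assumed example: transient set T = {0, 1}, target set S = {2}, and a second absorbing state 3.

```python
# rho_i(1) = sum_{j in S} p_ij; rho_i(k) = sum_{r in T} p_ir * rho_r(k-1);
# f_i = sum over k of rho_i(k) = probability of ever entering S from i.

P = [[0.0, 0.5, 0.5, 0.0],   # state 0: to 1, or into S = {2}
     [0.5, 0.0, 0.0, 0.5],   # state 1: to 0, or into the other trap 3
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
T, S = [0, 1], [2]

rho = {i: sum(P[i][j] for j in S) for i in T}     # rho_i(1)
f = dict(rho)
for _ in range(200):                              # accumulate rho_i(k)
    rho = {i: sum(P[i][r] * rho[r] for r in T) for i in T}
    f = {i: f[i] + rho[i] for i in T}

print(round(f[0], 4), round(f[1], 4))   # -> 0.6667 0.3333
```

Here f_0 = 2/3 and f_1 = 1/3: the closer the starting state is to S, the more likely the chain is absorbed there rather than in state 3.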