Lecture 3: Markov processes, Master equation

Outline:
• Preliminaries and definitions
• Chapman-Kolmogorov equation
• Wiener process
• Markov chains
• eigenvectors and eigenvalues
• detailed balance
• Monte Carlo
• master equation
Stochastic processes

Random function x(t).

Defined by a distribution functional P[x], or by all its moments

$$\langle x(t)\rangle, \qquad \langle x(t_1)\,x(t_2)\rangle, \qquad \langle x(t_1)\,x(t_2)\,x(t_3)\rangle, \;\dots$$

or by its characteristic functional:

$$G[k] = \left\langle \exp\!\left( i \int k(t)\, x(t)\, dt \right) \right\rangle
= 1 + i \int k(t)\, \langle x(t)\rangle\, dt
- \tfrac{1}{2} \iint k(t_1)\, \langle x(t_1)\, x(t_2)\rangle\, k(t_2)\, dt_1\, dt_2 + \dots$$
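For completeness, a standard identity not spelled out on the slide: the moments follow from $G[k]$ by functional differentiation at $k = 0$,

$$\langle x(t_1) \cdots x(t_n) \rangle = (-i)^n \left. \frac{\delta^n G[k]}{\delta k(t_1) \cdots \delta k(t_n)} \right|_{k=0}.$$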
Stochastic processes (2)

Cumulant generating functional:

$$\log G[k] = i \int k(t)\, \langle x(t)\rangle\, dt
- \tfrac{1}{2} \iint k(t_1)\, \langle\langle x(t_1)\, x(t_2)\rangle\rangle\, k(t_2)\, dt_1\, dt_2
- \tfrac{i}{3!} \iiint k(t_1)\, k(t_2)\, k(t_3)\, \langle\langle x(t_1)\, x(t_2)\, x(t_3)\rangle\rangle\, dt_1\, dt_2\, dt_3 + \dots$$

where

$$\langle\langle x(t_1)\, x(t_2)\rangle\rangle = \langle x(t_1)\, x(t_2)\rangle - \langle x(t_1)\rangle \langle x(t_2)\rangle$$

is the correlation function, and

$$\langle\langle x(t_1)\, x(t_2)\, x(t_3)\rangle\rangle = \langle x(t_1)\, x(t_2)\, x(t_3)\rangle
- \langle\langle x(t_1)\, x(t_2)\rangle\rangle \langle x(t_3)\rangle
- \langle\langle x(t_2)\, x(t_3)\rangle\rangle \langle x(t_1)\rangle
- \langle\langle x(t_3)\, x(t_1)\rangle\rangle \langle x(t_2)\rangle
- \langle x(t_1)\rangle \langle x(t_2)\rangle \langle x(t_3)\rangle,$$

etc.

Stochastic processes (3)

Gaussian process:

$$G[k] = \exp\!\left( i \int k(t)\, \langle x(t)\rangle\, dt
- \tfrac{1}{2} \iint k(t_1)\, \langle\langle x(t_1)\, x(t_2)\rangle\rangle\, k(t_2)\, dt_1\, dt_2 \right)$$

(no higher-order cumulants)

Conditional probabilities:

$$Q\big[ x(t_1), \dots, x(t_k) \,\big|\, x(t_{k+1}), \dots, x(t_m) \big],
\qquad t_1 \ge t_2 \ge \dots \ge t_k \ge t_{k+1} \ge \dots \ge t_m$$

= probability of x(t_1) … x(t_k), given x(t_{k+1}) … x(t_m)

Wiener-Khinchin theorem

Fourier analyze x(t):

$$\tilde{x}(\omega) = \frac{1}{\sqrt{T}} \int_{-T/2}^{T/2} dt\, e^{i\omega t}\, x(t)$$

Power spectrum:

$$P(\omega) = \big\langle |\tilde{x}(\omega)|^2 \big\rangle
= \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \!\int_{-T/2}^{T/2} dt\, dt'\, e^{i\omega t'} e^{-i\omega t}\, \langle x(t')\, x(t)\rangle$$

Substituting $t' = t + \tau$ and using stationarity, $\langle x(t+\tau)\, x(t)\rangle = C(\tau)$:

$$P(\omega) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} dt \int d\tau\, e^{i\omega(t+\tau)} e^{-i\omega t}\, \langle x(t+\tau)\, x(t)\rangle
= \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} dt \int d\tau\, e^{i\omega\tau}\, C(\tau)
= \int d\tau\, e^{i\omega\tau}\, C(\tau)$$

The power spectrum is the Fourier transform of the correlation function.
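A numerical illustration of the theorem (my sketch; the AR(1) process and all parameter values are arbitrary choices): the trial-averaged periodogram should match the Fourier transform of the measured correlation function, up to statistical fluctuations and finite-T effects.

```python
import numpy as np

rng = np.random.default_rng(0)
T, trials, a = 1024, 200, 0.9

spec = np.zeros(T)        # trial-averaged periodogram <|x~(w)|^2>
corr = np.zeros(T)        # trial-averaged correlation function C(tau)
for _ in range(trials):
    x = np.zeros(T)
    for t in range(1, T):                 # AR(1): x_t = a x_{t-1} + noise
        x[t] = a * x[t - 1] + rng.normal()
    xw = np.fft.fft(x) / np.sqrt(T)       # finite-T Fourier transform
    spec += np.abs(xw) ** 2 / trials
    corr += np.correlate(x, x, "full")[T - 1:] / T / trials

# Fourier transform of C(tau), using C(-tau) = C(tau) for a real process
wk = 2.0 * np.real(np.fft.fft(corr)) - corr[0]
print(spec[:4])
print(wk[:4])   # agrees with spec[:4] up to statistical fluctuations
```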

Markov processes

No information about the future from past values earlier than the latest available:

$$Q\big[ x(t_1), \dots, x(t_k) \,\big|\, x(t_{k+1}), \dots, x(t_m) \big] = Q\big[ x(t_1), \dots, x(t_k) \,\big|\, x(t_{k+1}) \big]$$

Can get the general distribution by iterating Q:

$$P\big[ x(t_n), \dots, x(t_0) \big] = Q\big[ x(t_n) \,|\, x(t_{n-1}) \big]\, Q\big[ x(t_{n-1}) \,|\, x(t_{n-2}) \big] \cdots Q\big[ x(t_1) \,|\, x(t_0) \big]\, P\big[ x(t_0) \big],$$

where $P[x(t_0)]$ is the initial distribution.

Integrate this over $x(t_{n-1}), \dots, x(t_1)$ to get

$$Q\big[ x(t_n) \,|\, x(t_0) \big] = \int dx(t_{n-1}) \cdots dx(t_1)\; Q\big[ x(t_n) \,|\, x(t_{n-1}) \big]\, Q\big[ x(t_{n-1}) \,|\, x(t_{n-2}) \big] \cdots Q\big[ x(t_1) \,|\, x(t_0) \big].$$

The case n = 2 is the Chapman-Kolmogorov equation.
Chapman-Kolmogorov equation

$$Q\big[ x(t_f) \,|\, x(t_i) \big] = \int dx(t')\; Q\big[ x(t_f) \,|\, x(t') \big]\, Q\big[ x(t') \,|\, x(t_i) \big] \qquad \text{(for any intermediate } t'\text{)}$$

Examples:

(1) Wiener process (Brownian motion/random walk):

$$Q(x_2, t_2 \,|\, x_1, t_1) = \frac{1}{\sqrt{2\pi (t_2 - t_1)}}\, \exp\!\left[ -\frac{(x_2 - x_1)^2}{2 (t_2 - t_1)} \right]$$

(2) (cumulative) Poisson process:

$$Q(n_2, t_2 \,|\, n_1, t_1) = \frac{(t_2 - t_1)^{n_2 - n_1}}{(n_2 - n_1)!}\, e^{-(t_2 - t_1)}$$
Markov chains

Both t and x discrete, assuming stationarity:

$$Q(x, t \,|\, x', t') = Q(n, t+1 \,|\, n', t) \equiv T_{nn'}$$

$$T_{nn'} \ge 0, \qquad \sum_n T_{nn'} = 1 \qquad \text{(because they are probabilities)}$$

Equation of motion:

$$P_n(t+1) = \sum_{n'} T_{nn'}\, P_{n'}(t)$$

Formal solution:

$$P(t) = T^t P(0)$$
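A minimal sketch of this machinery (the 3-state matrix below is an invented example, not from the lecture):

```python
import numpy as np

# Columns are probability distributions: T[n, n'] = Q(n, t+1 | n', t),
# so each column is nonnegative and sums to 1.
T = np.array([[0.9, 0.2, 0.1],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.6]])

P = np.array([1.0, 0.0, 0.0])          # initial distribution P(0)
for _ in range(50):
    P = T @ P                          # P_n(t+1) = sum_n' T_nn' P_n'(t)

print(P)                                                # P(50)
print(np.linalg.matrix_power(T, 50) @ [1.0, 0.0, 0.0])  # formal solution T^t P(0)
```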
Markov chains (2): properties of T

T has a left eigenvector $\langle 0| = (1, 1, \dots, 1)$ (because $\sum_n T_{nn'} = 1$). Its eigenvalue is 1.

The corresponding right eigenvector is

$$|0\rangle = \begin{pmatrix} p_1^0 \\ p_2^0 \\ \vdots \\ p_N^0 \end{pmatrix}$$

(the stationary state, because the eigenvalue is 1: $T|0\rangle = |0\rangle$).

For all other right eigenvectors $|j\rangle$, with components $p_m^j$, $j \ge 1$:

$$\sum_m p_m^j = 0$$

(because they must be orthogonal to $\langle 0|$: $\langle 0 | j \rangle = 0$).

All other eigenvalues satisfy $|\lambda_j| < 1$.
Detailed balance

If there is a stationary distribution $P^0$, with components $p_m^0$, and

$$T_{mn}\, p_n^0 = T_{nm}\, p_m^0,$$

one can prove (given ergodicity*) convergence to $P^0$ from any initial state.

Define

$$S_{mn} = \delta_{mn} \sqrt{p_m^0}$$

and make a similarity transformation:

$$R = S^{-1} T S, \qquad \text{i.e.,} \qquad R_{mn} = \frac{1}{\sqrt{p_m^0}}\, T_{mn}\, \sqrt{p_n^0}$$

By detailed balance, R is symmetric, so it has a complete set of eigenvectors $q^j$, with components $q_m^j$. (Its eigenvalues $\lambda_j$ are the same as those of T.)

* Can reach any state from any other, and no cycles.
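A sketch of this construction (assumed example: a two-state chain, for which detailed balance holds automatically):

```python
import numpy as np

T = np.array([[0.9, 0.2],
              [0.1, 0.8]])             # two-state, column-stochastic
p0 = np.array([2.0, 1.0]) / 3.0        # its stationary distribution

S = np.diag(np.sqrt(p0))
R = np.linalg.inv(S) @ T @ S           # R_mn = T_mn * sqrt(p_n^0 / p_m^0)
print(np.allclose(R, R.T))             # R is symmetric
print(np.linalg.eigvals(T))            # eigenvalues of T ...
print(np.linalg.eigvalsh(R))           # ... equal those of R
```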

Detailed balance (2)

$$R q^j = \lambda_j q^j \;\;\Rightarrow\;\; S^{-1} T S q^j = \lambda_j q^j \;\;\Rightarrow\;\; T S q^j = \lambda_j S q^j$$

Right eigenvectors of T: $p^j \equiv |j\rangle = S q^j$, or $p_m^j = \sqrt{p_m^0}\, q_m^j$.

Now look at the evolution:

$$P(t) = T^t P(0) = T^t \Big( a_0 |0\rangle + \sum_j a_j |j\rangle \Big)
= a_0 |0\rangle + \sum_j a_j \lambda_j^t |j\rangle
\;\xrightarrow{\; t \to \infty \;}\; a_0 |0\rangle = |0\rangle$$

(since $|\lambda_j| < 1$ for $j \ne 0$, and $a_0 = 1$ because $P(0)$ is normalized while the components of each $|j\rangle$, $j \ne 0$, sum to zero).
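The resulting convergence is easy to see numerically (same assumed two-state chain as above):

```python
import numpy as np

T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
p0 = np.array([2.0, 1.0]) / 3.0

P = np.array([0.0, 1.0])               # start far from the stationary state
for t in range(60):
    P = T @ P
print(P, p0)                           # P(t) -> |0>; deviation decays ~ 0.7^t
```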
Monte Carlo
an example of detailed balance

Ising model: binary "spins" $S_i(t) = \pm 1$

Dynamics: at every time step,
(1) choose a spin i at random,
(2) compute the "field" of its neighbors, $h_i(t) = \sum_j J_{ij} S_j(t)$, with $J_{ij} = J_{ji}$,
(3) set $S_i(t + \Delta t) = +1$ with probability

$$P(h_i) = \frac{e^{h_i}}{e^{h_i} + e^{-h_i}} = \frac{1}{1 + e^{-2 h_i}}$$

(equilibration of $S_i$, given the current values of the other S's)
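A minimal simulation sketch of this dynamics (my illustration: a ring of N spins with nearest-neighbor coupling J; all parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, J, steps = 50, 0.5, 20000

S = rng.choice([-1, 1], size=N)              # random initial spins
for _ in range(steps):
    i = rng.integers(N)                      # (1) choose a spin at random
    h = J * (S[(i - 1) % N] + S[(i + 1) % N])  # (2) field of neighbors
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h))    # (3) P(S_i = +1 | h_i)
    S[i] = 1 if rng.random() < p_up else -1

print(S.mean())   # magnetization after equilibration
```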
Monte Carlo (2)

In the language of Markov chains, the states (n) are $\mathbf{S} = (S_1, S_2, \dots, S_N)$.

Single-spin flips: transitions only between neighboring points on the hypercube,

$$|+\rangle = (S_1, S_2, \dots, S_{i-1}, +1, S_{i+1}, \dots, S_N)$$
$$|-\rangle = (S_1, S_2, \dots, S_{i-1}, -1, S_{i+1}, \dots, S_N)$$

T matrix elements:

$$\langle + | T | + \rangle = \langle + | T | - \rangle = \frac{e^{h_i}}{e^{h_i} + e^{-h_i}},
\qquad
\langle - | T | + \rangle = \langle - | T | - \rangle = \frac{e^{-h_i}}{e^{h_i} + e^{-h_i}};$$

all other $T_{mn} = 0$.

Note:

$$\frac{T_{+-}}{T_{-+}} = \frac{e^{h_i}}{e^{-h_i}} = \exp\Big( 2 \sum_j J_{ij} S_j \Big)$$
Monte Carlo (3)

T satisfies detailed balance:

$$\frac{T_{+-}}{T_{-+}} = \exp\Big( 2 \sum_j J_{ij} S_j \Big) = \frac{p_+^0}{p_-^0},$$

where $p^0$ is the Gibbs distribution:

$$p^0(\mathbf{S}) = Z^{-1} \exp\Big( \sum_{(ij)} J_{ij} S_i S_j \Big) \propto \exp\big( -E[\mathbf{S}] \big)$$

(the sum runs over pairs (ij)).

After many Monte Carlo steps, we converge to $p^0$:
⇒ the S's sample the Gibbs distribution.
Monte Carlo (3): Metropolis version

The foregoing was for "heat-bath" MC. Another possibility is the Metropolis algorithm:

If $h_i S_i < 0$: $S_i(t + \Delta t) = -S_i(t)$;
if $h_i S_i > 0$: $S_i(t + \Delta t) = -S_i(t)$ with probability $\exp(-2 h_i S_i)$.

Thus,

$$\begin{pmatrix} T_{++} & T_{+-} \\ T_{-+} & T_{--} \end{pmatrix}
= \begin{pmatrix} 1 - e^{-2 h_i} & 1 \\ e^{-2 h_i} & 0 \end{pmatrix}, \quad h_i > 0;
\qquad
\begin{pmatrix} T_{++} & T_{+-} \\ T_{-+} & T_{--} \end{pmatrix}
= \begin{pmatrix} 0 & e^{2 h_i} \\ 1 & 1 - e^{2 h_i} \end{pmatrix}, \quad h_i < 0.$$

In either case,

$$\frac{T_{+-}}{T_{-+}} = \exp(2 h_i) = \frac{p_+^0}{p_-^0},$$

i.e., detailed balance with the Gibbs $p^0$.
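The same sketch as before with the Metropolis rule in place of the heat-bath rule (same assumed ring of spins with arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
N, J, steps = 50, 0.5, 20000

S = rng.choice([-1, 1], size=N)
for _ in range(steps):
    i = rng.integers(N)
    h = J * (S[(i - 1) % N] + S[(i + 1) % N])
    # flip if it lowers the energy (h*S < 0), else with prob exp(-2 h S)
    if h * S[i] < 0 or rng.random() < np.exp(-2.0 * h * S[i]):
        S[i] = -S[i]
print(S.mean())
```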

Continuous-time limit: master equation

For a Markov chain: $P(t + \Delta t) = T P(t)$

$$\Rightarrow\; \Delta P(t) = (T - 1)\, P(t)$$

Differential equation:

$$\frac{dP(t)}{dt} = \lim_{\Delta t \to 0} \left( \frac{T - 1}{\Delta t} \right) P(t)$$

In components:

$$\frac{dP_m}{dt} = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \left( \sum_n T_{mn} P_n - P_m \right)
= \lim_{\Delta t \to 0} \frac{1}{\Delta t} \left( \sum_n T_{mn} P_n - \sum_n T_{nm} P_m \right)$$

(using the normalization of the columns of T: $\sum_n T_{nm} = 1$)

$$= \sum_{n \ne m} \big( W_{mn} P_n - W_{nm} P_m \big),$$

where

$$W_{mn} = \lim_{\Delta t \to 0} \frac{T_{mn}}{\Delta t}$$

is the transition rate matrix (expect $T_{mn} \propto \Delta t$ for $m \ne n$).

