Non-equilibrium Statistical Mechanics:
The Physics of Fluctuations and Noise
SCI-B1-0910
Blok 1, 2009
Aud M (NBI Blegdamsvej), kl 10-12 Mondays and Fridays

John Hertz
office: Kc-10 (NBI Blegdamsvej)
email: [email protected]
tel. 3532 5236 (office Kbh), +46 8 5537 8808 (office Sth), 2055 1874 (mobil)
http://www.nbi.dk/~hertz/noisecourse/coursepage.html
Source material

“text”: N G van Kampen, Stochastic Processes in Physics and Chemistry (North-Holland) [very clear, good on general formal methods, but little recent stuff. I will not follow it slavishly in the lectures, but I recommend you buy and read it.]

These two are good for anomalous diffusion:
L Vlahos et al, Normal and Anomalous Diffusion: a Tutorial, arXiv.org/abs/0805.0419
R Metzler and J Klafter, The Random Walker’s Guide to Anomalous Diffusion, Physics Reports 339, 1-77 (2000)

On first-passage-time problems:
S Redner, A Guide to First-Passage Processes (Cambridge U Press) [library reserve]

On finance-theoretical applications:
J-P Bouchaud and M Potters, Theory of Financial Risks: From Statistical Physics to Risk Management (Cambridge U Press) [library reserve]

(and more to be mentioned as we go along)
Lecture 1:
A random walk through the course

The ubiquity of noise (especially in biology): Changing conditions does not change states; rather, it changes the relative probabilities of states.

Example: protein conformational change

Potential energy:
[Figure: double-well potentials V1(x) and V2(x) vs x; the real story is the corresponding probability densities P1(x) and P2(x).]
From (equilibrium) stat mech:

P1,2(x) ∝ exp[−βV1,2(x)]

This course: how P(x) changes from P1(x) to P2(x):

Dynamics of P(x,t):

dP(x,t)/dt = L P   (L a linear operator)

www.nbi.dk/hertz/noisecourse/demos/Pseq.mat
www.nbi.dk/hertz/noisecourse/demos/runseq.m

or, equivalently, equations for moments:

d⟨x⟩/dt = …,   d⟨x(t1)x(t)⟩/dt = …,   etc.
Random walks

www.nbi.dk/~hertz/noisecourse/demos/brown.m
Random walks

XN = Σi xi (i = 1, …, N);   ⟨xi⟩ = 0;   ⟨xi²⟩ = a²;   ⟨xi xj⟩ = 0 for i ≠ j (independent steps)

⟨XN²⟩ = Σi ⟨xi²⟩ + Σi≠j ⟨xi xj⟩ = N a²

i.e., the rms distance is √⟨XN²⟩ = √N a
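A quick numerical check of the √N law (a Python sketch, not the course's brown.m demo; the values of a, N, and the number of walks are arbitrary choices here):

```python
import numpy as np

# Walks of N steps x_i = +/- a, so <x_i> = 0 and <x_i^2> = a^2;
# the rms endpoint distance should come out close to sqrt(N)*a.
rng = np.random.default_rng(0)
a, N, walks = 1.0, 400, 20_000
steps = rng.choice([-a, a], size=(walks, N))
X = steps.sum(axis=1)                  # X_N for each walk
rms = np.sqrt(np.mean(X**2))
print(rms, np.sqrt(N) * a)             # the two should agree closely
```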
Random walks and diffusion

Step length distribution χ(y):

∫dy χ(y) = 1;   ∫dy y χ(y) = 0;   ∫dy y² χ(y) = a²

Change in P from one step:

P(x, t+Δt) = ∫dy χ(y) P(x−y, t)
           = ∫dy χ(y) [P(x,t) − y ∂P/∂x + ½ y² ∂²P/∂x² + ⋯]
           = P(x,t) + ½ a² ∂²P/∂x²

Diffusion equation:

∂P/∂t = D ∂²P/∂x²,   D = a²/(2Δt): diffusion constant
Solution of the diffusion equation:

P(x,t) = (1/√(4πDt)) exp(−x²/4Dt)

Gaussian, spreading with time, variance 2Dt

http://www.nbi.dk/~hertz/noisecourse/gaussspread.m
Distribution obtained by simulating
20000 random walks:
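The experiment on this slide can be sketched as follows (a Python stand-in for the demo; uniform steps of variance a² and the sample sizes are assumptions of this sketch):

```python
import numpy as np

# Simulate 20000 random walks with zero-mean steps of variance a^2 and
# compare the endpoint distribution with the diffusion-equation Gaussian,
# whose variance is 2Dt = N a^2.
rng = np.random.default_rng(1)
a, N, walks = 1.0, 400, 20_000
steps = rng.uniform(-np.sqrt(3) * a, np.sqrt(3) * a, size=(walks, N))
X = steps.sum(axis=1)
sigma = np.sqrt(N) * a                  # predicted spread, variance N a^2
inside = np.mean(np.abs(X) < sigma)     # Gaussian prediction: ~0.683
print(X.var(), sigma**2)
print(inside)
```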
Anomalous diffusion

Normal diffusion: ⟨x²⟩ = 2Dt

An experimental counterexample: motion of lipid granules in yeast cells:

⟨x²⟩ ∝ t^0.75

Tolic-Nørrelykke et al, Phys Rev Lett 93, 078102 (2004)
Sub- and superdiffusion

⟨x²⟩ ∝ t^H

H: Hurst exponent
H < 1: subdiffusion
H > 1: superdiffusion

One way to get superdiffusion: long-time correlations between steps
One way to get subdiffusion: long-time anti-correlations between steps
Levy walks

Step length distribution χ(y):

∫dy χ(y) = 1;   ∫dy y χ(y) = 0;   ∫dy y² χ(y) = ∞

power-law tail in step length distribution:

χ(y) ~ y^(−a),   a ≤ 3

Example: Cauchy (Lorentz) distribution

χ(y) = (1/π) / (1 + y²)

http://www.nbi.dk/~hertz/noisecourse/levy.m   (a = 5/2)

Note: ⟨x²⟩ = ∞ for all t
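A sketch of drawing Cauchy steps by inverse-transform sampling (the seed and sample size are arbitrary; this is not the course's levy.m demo, which uses a = 5/2):

```python
import numpy as np

# Cauchy steps via y = tan(pi*(u - 1/2)) with u uniform on (0,1).
# The median (location) is well defined, but the empirical second moment
# never settles down as the sample grows, reflecting <y^2> = infinity.
rng = np.random.default_rng(2)
u = rng.uniform(size=100_000)
y = np.tan(np.pi * (u - 0.5))                # chi(y) = (1/pi)/(1+y^2)
print(np.median(y))                          # close to 0
print(np.mean(y[:1000]**2), np.mean(y**2))   # second moment keeps growing
```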
Brown vs Levy
Ising model
(an example of a system with many degrees of freedom)

Binary “spins” Si(t) = ±1

Dynamics: at every time step,
(1) choose a spin i at random
(2) compute the “field” of its neighbors: hi(t) = Σj Jij Sj(t)
(3) set Si(t + Δt) = +1 with probability

P(hi) = 1 / (1 + exp(−2hi))

Varying the interaction strength (Jneighbors = 0.25, 0.45, 0.65):
http://www.nbi.dk/~hertz/noisecourse/ising.m
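The three-step dynamics above can be sketched in a few lines (a minimal Python stand-in for ising.m, assuming a square lattice with periodic boundaries, nearest-neighbor coupling Jneighbors = 0.25, and β absorbed into h as on the slide):

```python
import numpy as np

# Glauber dynamics on an L x L Ising lattice.
rng = np.random.default_rng(3)
L, J, steps = 32, 0.25, 200_000
S = rng.choice([-1, 1], size=(L, L))
for _ in range(steps):
    i, j = rng.integers(L, size=2)                  # (1) random spin
    h = J * (S[(i + 1) % L, j] + S[(i - 1) % L, j]  # (2) neighbor field
             + S[i, (j + 1) % L] + S[i, (j - 1) % L])
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h))           # (3) P(S_i = +1)
    S[i, j] = 1 if rng.uniform() < p_up else -1
m = abs(S.mean())     # at J = 0.25 (weak coupling) the magnetization is small
print(m)
```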
Some basic concepts in probability theory

Random variable x
Probability distribution (“density”) P(x)
If x is discrete-valued: P(xn) or Pn

Normalization:   ∫P(x)dx = 1,   Σn P(xn) = 1

Averages:   ⟨A(x)⟩ = ∫A(x)P(x)dx

Moments:   ⟨xⁿ⟩ = ∫xⁿ P(x)dx

Mean: ⟨x⟩ (the first moment)
Some common distributions

Gaussian (normal):   P(x) = (1/√(2π)) e^(−x²/2)

Cauchy (Lorentzian):   P(x) = (1/π) · 1/(1 + x²)

(one-sided) exponential:   P(x) = θ(x) e^(−x)

Levy:   P(x) = θ(x) (1/√(2π)) x^(−3/2) exp(−1/(2x))

Poisson:   Pn = (aⁿ/n!) e^(−a)
Thin and fat tails
Characteristic functions

G(k) = ⟨exp(ikx)⟩ = ∫e^(ikx) P(x)dx

G(k) is the moment-generating function (expand the exponential):

G(k) = Σn (ik)ⁿ/n! ⟨xⁿ⟩

Note: G(0) = 1 (normalization)

Gaussian:      G(k) = e^(−k²/2)
Exponential:   G(k) = 1/(1 − ik)
Cauchy:        G(k) = e^(−|k|)
Levy:          G(k) = e^(−√(−2ik))
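A Monte Carlo sanity check of the table above, for the one-sided exponential (the sample size is an arbitrary choice of this sketch):

```python
import numpy as np

# Estimate G(k) = <exp(ikx)> by averaging over samples of
# P(x) = theta(x) e^{-x}, and compare with G(k) = 1/(1 - ik).
rng = np.random.default_rng(4)
x = rng.exponential(size=200_000)
for k in (0.0, 0.5, 1.0):
    G_mc = np.mean(np.exp(1j * k * x))
    G_exact = 1.0 / (1.0 - 1j * k)
    print(k, G_mc, G_exact)
```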
Cumulants

Expanding log G generates the cumulants κn:

log G(k) = Σn≥1 (ik)ⁿ/n! κn

κ1 = ⟨x⟩   (mean)
κ2 = ⟨x²⟩ − ⟨x⟩²   (variance)
κ3 = ⟨x³⟩ − 3⟨x²⟩⟨x⟩ + 2⟨x⟩³   (skewness: γ3 = κ3/κ2^(3/2))
κ4 = ⟨x⁴⟩ − 4⟨x³⟩⟨x⟩ − 3⟨x²⟩² + 12⟨x²⟩⟨x⟩² − 6⟨x⟩⁴   (kurtosis: κ4/κ2²)
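The cumulant formulas above, applied to a distribution with known moments (the unit exponential, for which ⟨xⁿ⟩ = n! and the cumulants come out as κn = (n−1)!):

```python
# Moments of the one-sided unit exponential: <x^n> = n!
m1, m2, m3, m4 = 1.0, 2.0, 6.0, 24.0
k1 = m1
k2 = m2 - m1**2
k3 = m3 - 3*m2*m1 + 2*m1**3
k4 = m4 - 4*m3*m1 - 3*m2**2 + 12*m2*m1**2 - 6*m1**4
print(k1, k2, k3, k4)                 # 1, 1, 2, 6 = (n-1)!
skewness = k3 / k2**1.5               # gamma_3 = kappa_3 / kappa_2^(3/2)
kurtosis = k4 / k2**2                 # kappa_4 / kappa_2^2
print(skewness, kurtosis)             # 2, 6
```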
Multivariate distributions

P(x1, …, xn), P(x)

marginal distributions:

P(x1) = ∫P(x1, …, xn) dx2 ⋯ dxn
P(x1, x2) = ∫P(x1, …, xn) dx3 ⋯ dxn
etc.

Independence:   P(x,y) = P(x)P(y)

Conditional probabilities P(y|x):   P(x,y) = P(x|y)P(y) = P(y|x)P(x)

Bayes’ rule:   P(y|x) = P(x|y)P(y)/P(x) = P(x,y)/P(x)
Adding random variables

x: P1(x);   y: P2(y)

z = x + y:

P(z) = ∫δ(z − x − y) P1(x) P2(y) dx dy = ∫P1(x) P2(z − x) dx

characteristic functions:   G(k) = G1(k) G2(k)
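A discrete sketch of the convolution rule (the die-plus-coin example is an arbitrary choice): since characteristic functions multiply, means and variances add.

```python
import numpy as np

# P(z) for z = die + coin is the convolution of the two distributions.
p_die = np.full(6, 1/6)                 # P1 on values 1..6
p_coin = np.array([0.5, 0.5])           # P2 on values 0..1
p_sum = np.convolve(p_die, p_coin)      # P(z) on values 1..7
vals_die, vals_coin, vals_sum = np.arange(1, 7), np.arange(2), np.arange(1, 8)
mean = lambda v, p: (v * p).sum()
var = lambda v, p: (v**2 * p).sum() - mean(v, p)**2
print(mean(vals_sum, p_sum), mean(vals_die, p_die) + mean(vals_coin, p_coin))
print(var(vals_sum, p_sum), var(vals_die, p_die) + var(vals_coin, p_coin))
```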
Change of variables:

x: Px(x),   y = f(x)

Py(y) = ∫Px(x) δ[y − f(x)] dx = Px(x) |dx/dy| = Px(f⁻¹(y)) |df⁻¹(y)/dy|

(or use Py(y)dy = Px(x)dx)

Multivariate case:

Py(y) = Px(f⁻¹(y)) |∂f⁻¹(y)/∂y|   (inverse of the Jacobian J, Jij = ∂yi/∂xj)
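A sketch of the rule for a concrete choice, y = f(x) = x² with x uniform on (0,1) (an assumption of this example):

```python
import numpy as np

# For y = x^2 with x uniform on (0,1), the rule gives
# P_y(y) = P_x(f^{-1}(y)) |df^{-1}(y)/dy| = 1/(2*sqrt(y)),
# checked against samples via the cumulative version P(y < c) = sqrt(c).
def P_y(y):
    return 1.0 / (2.0 * np.sqrt(y))    # P_x = 1 on (0,1), f^{-1}(y) = sqrt(y)

rng = np.random.default_rng(5)
y = rng.uniform(size=200_000) ** 2
for c in (0.25, 0.5, 0.81):
    print(c, np.mean(y < c), np.sqrt(c), P_y(c))
```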
Gaussian (normal) distribution

P(x) = (1/√(2πσ²)) exp[−(x − μ)²/2σ²]

characteristic function:   G(k) = exp(iμk − ½k²σ²)

cumulants:   κ1 = μ,   κ2 = σ²,   κm = 0 for m > 2

moments (μ = 0 case):

⟨x^(2n)⟩ = (2n − 1)!! ⟨x²⟩ⁿ = (2n − 1)(2n − 3) ⋯ 3·1 ⟨x²⟩ⁿ
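A Monte Carlo check of the (2n−1)!! moment formula (the sample size is an arbitrary choice of this sketch):

```python
import numpy as np

# For a zero-mean, unit-variance Gaussian, <x^{2n}> should be (2n-1)!!.
rng = np.random.default_rng(6)
x = rng.normal(size=1_000_000)
for n, double_fact in [(1, 1), (2, 3), (3, 15)]:   # (2n-1)!! = 1, 3, 15
    print(2 * n, np.mean(x ** (2 * n)), double_fact)
```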
Multivariate Gaussian

correlation matrix:   Cjk = ⟨(xj − ⟨xj⟩)(xk − ⟨xk⟩)⟩

P(x) = 1/[(2π)^(d/2) √det C] exp[−½ Σjk (xj − ⟨xj⟩)(C⁻¹)jk (xk − ⟨xk⟩)]

characteristic function:

G(k) = exp[i Σj kj⟨xj⟩ − ½ Σjk kj Cjk kk]

higher moments (Wick’s theorem):

⟨x1 x2 x3 x4⟩ = ⟨x1x2⟩⟨x3x4⟩ + ⟨x1x3⟩⟨x2x4⟩ + ⟨x1x4⟩⟨x2x3⟩

(sum of all pairwise contractions; etc. for higher orders)
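A sketch of Wick's theorem: sample a zero-mean correlated Gaussian via a Cholesky factor of a chosen C (the matrix below is an arbitrary positive-definite example) and check ⟨x1x2x3x4⟩ against the pairwise contractions:

```python
import numpy as np

# Rows of x have covariance C because Cov(z L^T) = L Cov(z) L^T = L L^T = C.
rng = np.random.default_rng(7)
C = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.5, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.4],
              [0.1, 0.2, 0.4, 1.0]])
L = np.linalg.cholesky(C)
x = rng.normal(size=(2_000_000, 4)) @ L.T
lhs = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])
rhs = C[0, 1]*C[2, 3] + C[0, 2]*C[1, 3] + C[0, 3]*C[1, 2]   # Wick
print(lhs, rhs)
```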
Central limit theorem

sum of N iid random variables:   y = (1/√N) Σi xi (i = 1, …, N)

distribution of xi: p(xi); assume zero mean and finite variance

characteristic function:

g(k) = exp[−½σ²k² − (i/3!)κ3k³ + (1/4!)κ4k⁴ + ⋯]

characteristic function of y:   G(k) = [g(k/√N)]^N

log G(k) = N log g(k/√N)
         = N[−σ²k²/(2N) − iκ3k³/(6N^(3/2)) + κ4k⁴/(24N²) + ⋯]
         → −½σ²k²   as N → ∞

⇒ y is Gaussian!
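A numerical illustration of the argument above, assuming uniform steps (N and the sample size are arbitrary choices of this sketch):

```python
import numpy as np

# y = N^{-1/2} * sum of N iid zero-mean uniform variables.  The variance
# of y stays at sigma^2 = 1/12 while the excess kurtosis shrinks like
# 1/N (a single uniform has excess kurtosis -1.2), so y becomes Gaussian.
rng = np.random.default_rng(8)
N, samples = 50, 100_000
x = rng.uniform(-0.5, 0.5, size=(samples, N))
y = x.sum(axis=1) / np.sqrt(N)
k2 = y.var()
k4 = np.mean((y - y.mean())**4) - 3 * k2**2
print(k2, 1 / 12)          # variance survives the limit
print(k4 / k2**2)          # ~ -1.2/N, small
```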
Stable distributions

N^(−1/2) × (sum of N Gaussian variables) has the same distribution as the original variables: stable

Are there distributions which are stable but with a different scaling factor N^(−1/α) (α ≠ 2) instead?

Require

[g(k/N^(1/α))]^N = g(k),   i.e.,   N log g(k/N^(1/α)) = log g(k)

Solution:

log g(k) = −c k^α,   or   g(k) = exp(−c k^α)   (α ≤ 2)

characteristic function for a stable distribution of order α
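The stability condition can be checked directly on known characteristic functions (the test values of k and N below are arbitrary):

```python
import numpy as np

# Check N log g(k/N^{1/alpha}) = log g(k) for given log g and alpha.
def check(log_g, alpha, k=1.7, N=50):
    return np.isclose(N * log_g(k / N ** (1 / alpha)), log_g(k))

print(check(lambda k: -abs(k), alpha=1))       # Cauchy, alpha = 1: stable
print(check(lambda k: -0.5 * k**2, alpha=2))   # Gaussian, alpha = 2: stable
print(check(lambda k: -abs(k), alpha=2))       # wrong alpha: fails
```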
Stable distributions

Stable distribution of order α:

Pα(x) = ∫ dk/2π gα(k) e^(−ikx) = ∫ dk/2π exp(−ikx − c k^α)

Asymptotic behaviour for large x:   P(x) ~ 1/x^(1+α)

Note:
Stable distributions have infinite variance for α < 2
Stable distributions have infinite mean for α < 1

For symmetric distributions, use

gα(k) = exp(−c|k|^α);   Pα(x) = ∫ dk/2π exp(−ikx − c|k|^α)
Stable distributions: examples

Special cases (where the Fourier inversion can be done analytically):

α = 1/2: Levy
α = 1:   Cauchy/Lorentzian
α = 3/2: Holtsmark
α = 2:   Gaussian
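The α = 1 inversion can also be done numerically as a check (c = 1 and the k grid are arbitrary choices of this sketch):

```python
import numpy as np

# Invert the symmetric stable characteristic function g(k) = exp(-|k|)
# numerically; for alpha = 1 this must give the Cauchy density
# (1/pi)/(1 + x^2).  The k grid is wide enough that exp(-|k|) is
# negligible at the endpoints.
k = np.linspace(-60.0, 60.0, 200_001)
dk = k[1] - k[0]
results = {}
for x in (0.0, 1.0, 3.0):
    integrand = np.exp(-1j * k * x - np.abs(k))
    results[x] = float(np.real(integrand.sum()) * dk / (2 * np.pi))
    print(x, results[x], (1 / np.pi) / (1 + x**2))
```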