Introduction and review of Matlab


Statistical Physics Notes

1. Probability

Discrete distributions

A variable x takes n discrete values, {x_i, i = 1, …, n} (e.g. throwing of a coin or a die). After N events, we get a distribution {N_1, N_2, …, N_n}. Probability:

$$P(x_i) = \lim_{N\to\infty} \frac{N_i}{N}$$

Normalization:

$$\sum_{i=1}^{n} P(x_i) = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{n} N_i = 1$$

Markovian assumption: events are independent.
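A quick Matlab illustration (a minimal sketch; the sample sizes are arbitrary) of the frequencies N_i/N converging to P(x_i) = 1/6 for a fair die:

```matlab
% Relative frequencies N_i/N of a fair die approach P(x_i) = 1/6 as N grows.
for N = [100 10000 1000000]
    throws = randi(6, N, 1);                    % N throws of a fair die
    freq = histcounts(throws, 0.5:1:6.5) / N;   % N_i / N for faces 1..6
    fprintf('N = %7d:  %s\n', N, sprintf('%.4f ', freq));
end
```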

Continuous distributions

A variable x can take any value in a continuous interval [a, b] (most physical variables: position, velocity, temperature, etc.). Partition the interval into small bins of width dx. If we measure dN(x) events in the interval [x, x + dx], the probability is

$$P(x)\,dx = \lim_{N\to\infty} \frac{dN(x)}{N}, \qquad P(x) = \lim_{N\to\infty} \frac{1}{N}\,\frac{dN}{dx}$$

Normalization:

$$\int P(x)\,dx = 1$$

Examples: Uniform probability distribution

$$P(x) = \begin{cases} 1/a, & 0 < x < a \\ 0, & x > a \end{cases}$$

Gaussian distribution

$$P(x) = A\, e^{-(x - x_0)^2/2s^2}$$

A: normalization constant; x_0: position of the maximum; s: width of the distribution.

[Figure: the uniform distribution of height 1/a on [0, a], and the Gaussian centered at x_0 with its width marked at s and 2s; the density falls to half its maximum, $e^{-x^2/2s^2} = 1/2$, near x ≈ 1.2 s.]

Normalization constant

$$\int_{-\infty}^{\infty} A\, e^{-(x - x_0)^2/2s^2}\, dx = 1$$

Substitute

$$y = \frac{x - x_0}{\sqrt{2}\,s}, \qquad dx = \sqrt{2}\,s\,dy$$

$$A\sqrt{2}\,s \int_{-\infty}^{\infty} e^{-y^2}\, dy = 1 \;\Rightarrow\; A = \frac{1}{\sqrt{2\pi}\,s}$$

Normalized Gaussian distribution:

$$P(x) = \frac{1}{\sqrt{2\pi}\,s}\, e^{-(x - x_0)^2/2s^2}$$
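A one-line numerical check of the normalization (x_0 and s are arbitrary):

```matlab
% Check that P(x) = exp(-(x-x0)^2/(2 s^2)) / (sqrt(2 pi) s) integrates to 1.
x0 = 1.0;  s = 2.0;                  % arbitrary center and width
P  = @(x) exp(-(x - x0).^2 ./ (2*s^2)) / (sqrt(2*pi)*s);
fprintf('integral of P(x) = %.6f\n', integral(P, -Inf, Inf));
```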

Properties of distributions

Average (mean):

$$\langle x\rangle = \sum_{i=1}^{n} x_i\, P(x_i) \;\;\text{(discrete)}, \qquad \langle x\rangle = \int x\, P(x)\, dx \;\;\text{(continuous)}$$

Median (or central) value: the 50% split point. Most probable value: where P(x) is maximum. For a symmetric distribution (e.g. Gaussian), all of the above are equal.

Mean value of an observable f(x):

$$\langle f\rangle = \sum_i f(x_i)\, P(x_i) \;\;\text{(discrete)}, \qquad \langle f\rangle = \int f(x)\, P(x)\, dx \;\;\text{(continuous)}$$

Variance: measures the spread of a distribution.

$$\mathrm{var}(x) = \left\langle (x - \langle x\rangle)^2 \right\rangle = \sum_i (x_i - \langle x\rangle)^2\, P(x_i)$$

$$= \sum_i x_i^2\, P(x_i) - 2\langle x\rangle \sum_i x_i\, P(x_i) + \langle x\rangle^2 \sum_i P(x_i) = \langle x^2\rangle - \langle x\rangle^2$$

Root mean square (RMS) deviation, or standard deviation:

$$\Delta x = \sqrt{\mathrm{var}(x)}$$

It has the same dimension as the mean and is used in error analysis: $x = \langle x\rangle \pm \Delta x$.
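The moment identity is easy to confirm on random samples (arbitrary mean and width):

```matlab
% Check var(x) = <x^2> - <x>^2 on samples with mean ~3 and std ~2.
x = 3 + 2*randn(1e5, 1);
fprintf('<(x - <x>)^2> = %.4f,  <x^2> - <x>^2 = %.4f\n', ...
        mean((x - mean(x)).^2), mean(x.^2) - mean(x)^2);
```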

Examples: Uniform probability distribution on [0, a]:

$$P(x) = \frac{1}{a}, \qquad \langle x\rangle = \frac{a}{2}, \qquad \mathrm{var}(x) = \frac{a^2}{12}, \qquad \Delta x = \frac{a}{2\sqrt{3}}$$

Gaussian distribution:

$$P(x) = \frac{1}{\sqrt{2\pi}\,s}\, e^{-(x - x_0)^2/2s^2}$$

For the variance, assume $x_0 = 0$ so that $\langle x\rangle = 0$, and let $y = x/\sqrt{2}\,s$:

$$\mathrm{var}(x) = \langle x^2\rangle = \frac{1}{\sqrt{2\pi}\,s} \int_{-\infty}^{\infty} x^2\, e^{-x^2/2s^2}\, dx = \frac{2 s^2}{\sqrt{\pi}} \int_{-\infty}^{\infty} y^2\, e^{-y^2}\, dy = s^2 \;\Rightarrow\; \Delta x = s$$

Detour: Gaussian integrals

Fundamental integral: $\int_{-\infty}^{\infty} e^{-x^2}\, dx = \sqrt{\pi}$. Introduce

$$I(b) = \int_{-\infty}^{\infty} e^{-bx^2}\, dx = \sqrt{\frac{\pi}{b}}$$

Differentiating with respect to b generates the even moments:

$$-\frac{dI}{db} = \int_{-\infty}^{\infty} x^2\, e^{-bx^2}\, dx = \frac{\sqrt{\pi}}{2}\, b^{-3/2} \;\Rightarrow\; \int_{-\infty}^{\infty} x^2\, e^{-x^2}\, dx = \frac{\sqrt{\pi}}{2}$$

$$\frac{d^2 I}{db^2} = \int_{-\infty}^{\infty} x^4\, e^{-bx^2}\, dx = \frac{1\cdot 3}{2^2}\,\sqrt{\pi}\; b^{-5/2} \;\Rightarrow\; \int_{-\infty}^{\infty} x^4\, e^{-x^2}\, dx = \frac{1\cdot 3}{2^2}\,\sqrt{\pi}$$

In general,

$$(-1)^n\, \frac{d^n I}{db^n} = \int_{-\infty}^{\infty} x^{2n}\, e^{-bx^2}\, dx = \frac{1\cdot 3 \cdots (2n-1)}{2^n}\,\sqrt{\pi}\; b^{-(2n+1)/2}$$

$$\int_{-\infty}^{\infty} x^{2n}\, e^{-x^2}\, dx = \frac{(2n-1)!!}{2^n}\,\sqrt{\pi}$$
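The moment formula can be spot-checked with numerical quadrature:

```matlab
% Verify int x^(2n) exp(-x^2) dx = (2n-1)!! sqrt(pi) / 2^n for small n.
for n = 1:4
    numeric = integral(@(x) x.^(2*n) .* exp(-x.^2), -Inf, Inf);
    formula = prod(2*n-1:-2:1) * sqrt(pi) / 2^n;   % (2n-1)!! sqrt(pi) / 2^n
    fprintf('n = %d:  numeric = %.6f,  formula = %.6f\n', n, numeric, formula);
end
```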

Addition and multiplication rules

Addition rule for exclusive events:

$$P(x_i \text{ or } x_j) = P(x_i) + P(x_j), \qquad i \neq j$$

Multiplication rule for independent variables x and y:

$$P_{\mathrm{joint}}(x_i, y_j) = P_1(x_i)\, P_2(y_j) \;\;\text{(discrete)}, \qquad P_{xy}(x, y)\, da = P(x)\,dx\; P(y)\,dy \;\;\text{(continuous)}$$

Examples: 2D-Gaussian distribution

$$P_{xy}(x, y)\, da = \frac{1}{2\pi s_x s_y}\, e^{-x^2/2s_x^2}\, e^{-y^2/2s_y^2}\, dx\, dy$$

If $s_x = s_y = s$:

$$P_{xy}(x, y)\, da = \frac{1}{2\pi s^2}\, e^{-(x^2 + y^2)/2s^2}\, dx\, dy$$

In cylindrical coordinates, this distribution becomes

$$P_{r\phi}(r, \phi)\, da = \frac{1}{2\pi s^2}\, e^{-r^2/2s^2}\, r\, dr\, d\phi$$

Since it is independent of φ, we can integrate it out:

$$P_r(r)\, dr = \frac{1}{2\pi s^2}\, e^{-r^2/2s^2}\, 2\pi r\, dr \;\Rightarrow\; P_r(r) = \frac{r}{s^2}\, e^{-r^2/2s^2}$$

Mean and variance:

$$\langle r\rangle = \sqrt{\frac{\pi}{2}}\, s, \qquad \langle r^2\rangle = 2 s^2, \qquad \mathrm{var}(r) = \left(2 - \frac{\pi}{2}\right) s^2$$

3D-Gaussian distribution with equal s's:

$$P_{xyz}(x, y, z)\, d^3r = \frac{1}{(2\pi)^{3/2} s^3}\, e^{-(x^2 + y^2 + z^2)/2s^2}\, d^3r$$

In spherical coordinates:

$$P_{r\theta\phi}(r, \theta, \phi)\, d^3r = \frac{1}{(2\pi)^{3/2} s^3}\, e^{-r^2/2s^2}\, r^2 \sin\theta\, dr\, d\theta\, d\phi$$

We can integrate out θ and φ:

$$P_r(r)\, dr = \frac{1}{(2\pi)^{3/2} s^3}\, e^{-r^2/2s^2}\, 4\pi r^2\, dr \;\Rightarrow\; P_r(r) = \sqrt{\frac{2}{\pi}}\, \frac{r^2}{s^3}\, e^{-r^2/2s^2}$$

Here r refers to the magnitude of a vector physical variable, e.g. position, velocity, etc.

Mean and variance of the 3D-Gaussian distribution:

$$\langle r\rangle = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3} \int_0^{\infty} r^3\, e^{-r^2/2s^2}\, dr = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3}\, 2 s^4 \int_0^{\infty} u\, e^{-u}\, du = \sqrt{\frac{8}{\pi}}\, s$$

(substitute $u = r^2/2s^2$; integration by parts gives 1)

$$\langle r^2\rangle = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3} \int_0^{\infty} r^4\, e^{-r^2/2s^2}\, dr = \sqrt{\frac{2}{\pi}}\, \frac{1}{s^3}\, 2^{5/2} s^5 \int_0^{\infty} y^4\, e^{-y^2}\, dy = 3 s^2$$

(substitute $y = r/\sqrt{2}\,s$; the integral gives $3\sqrt{\pi}/8$)

$$\mathrm{var}(r) = \left(3 - \frac{8}{\pi}\right) s^2 \approx 0.45\, s^2 \;\Rightarrow\; \Delta r \approx 0.67\, s$$

Most probable value for a 3D-Gaussian distribution: set $dP_r/dr = 0$ for

$$P_r(r) = \sqrt{\frac{2}{\pi}}\, \frac{r^2}{s^3}\, e^{-r^2/2s^2}$$

$$\frac{dP_r}{dr} \propto \left(2r - \frac{r^3}{s^2}\right) e^{-r^2/2s^2} = 2r \left(1 - \frac{r^2}{2s^2}\right) e^{-r^2/2s^2} = 0 \;\Rightarrow\; r^2 = 2 s^2 \;\Rightarrow\; r^* = \sqrt{2}\, s$$

Summary of the properties of a 3D-Gaussian distribution:

$$r^* = \sqrt{2}\, s \approx 1.4\, s, \qquad \langle r\rangle = \sqrt{\frac{8}{\pi}}\, s \approx 1.6\, s, \qquad \sqrt{\langle r^2\rangle} = \sqrt{3}\, s \approx 1.7\, s$$
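These three values are easy to confirm by sampling (a sketch with s = 1):

```matlab
% Sample r = |r| for a 3D Gaussian and check the three characteristic values.
s = 1.0;  N = 1e6;
r = sqrt(sum((s*randn(N,3)).^2, 2));
fprintf('<r>         = %.3f  (sqrt(8/pi)*s = %.3f)\n', mean(r), sqrt(8/pi)*s);
fprintf('sqrt(<r^2>) = %.3f  (sqrt(3)*s    = %.3f)\n', sqrt(mean(r.^2)), sqrt(3)*s);
[~, imax] = max(histcounts(r, 0:0.05:5));       % crude estimate of the mode
fprintf('r*          ~ %.3f  (sqrt(2)*s    = %.3f)\n', 0.05*(imax - 0.5), sqrt(2)*s);
```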

Binomial distribution

If the probability of throwing a head is p and a tail is q (p + q = 1), then the probability of throwing n heads out of N trials is given by the binomial distribution:

$$P(n) = \frac{N!}{n!\,(N-n)!}\, p^n q^{N-n}$$

The powers of p and q in the above equation are self-evident. The prefactor can be found from combinatorics; an explicit construction for the 1D random walk or tossing of coins is shown below.
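A direct Matlab comparison of the formula with simulated coin tosses (N, p, and the number of trials are arbitrary):

```matlab
% Binomial P(n) from the formula vs. simulated coin tosses.
N = 10;  p = 0.5;  trials = 1e5;
heads = sum(rand(trials, N) < p, 2);            % number of heads per trial
for n = 0:N
    Pn = nchoosek(N, n) * p^n * (1-p)^(N-n);    % N!/(n!(N-n)!) p^n q^(N-n)
    fprintf('n = %2d:  formula %.4f,  simulated %.4f\n', n, Pn, mean(heads == n));
end
```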

Explicit construction of the binomial distribution

[Diagram: the outcomes of successive trials mapped onto a 1D random walk of step L. Trial 1: T at x = -L, H at x = +L. Trial 2: TT at -2L; TH, HT at 0; HH at +2L. Trial 3: TTT at -3L; TTH, THT, HTT at -L; THH, HTH, HHT at +L; HHH at +3L. After N trials the positions run from -NL to +NL in steps of 2L.]

There is only 1 way to get all H or all T; N ways to get 1 T and (N-1) H (or vice versa); N(N-1)/2 ways to get 2 T and (N-2) H; and in general N(N-1)(N-2)⋯(N-n+1)/n! ways to get n T and (N-n) H (the binomial coefficient).

Properties of the binomial distribution:

Normalization follows from the binomial theorem:

$$S(p, q) \equiv (p + q)^N = \sum_{n=0}^{N} \frac{N!}{n!\,(N-n)!}\, p^n q^{N-n} \;\Rightarrow\; \sum_{n=0}^{N} P(n) = (p + q)^N = 1$$

Average number of heads:

$$\langle n\rangle = \sum_{n=0}^{N} n\, P(n) = \sum_{n=0}^{N} n\, \frac{N!}{n!\,(N-n)!}\, p^n q^{N-n} = p\, \frac{\partial S}{\partial p} = pN (p + q)^{N-1} = pN$$

For $p = 1/2$: $\langle n\rangle = N/2$.

Average position in a 1D random walk after N steps, with $x = (2n - N)L$:

$$\langle x\rangle = (2\langle n\rangle - N) L = (2p - 1) NL = (p - q) NL$$

$$\langle x\rangle = 0, \;\text{ if } p = q = 1/2$$

For large N, the probability of getting exactly $x = 0$ is actually quite small. To find the spread, calculate the variance.

$$\langle n^2\rangle = \sum_{n=0}^{N} n^2\, \frac{N!}{n!\,(N-n)!}\, p^n q^{N-n} = \left(p\, \frac{\partial}{\partial p}\right)^2 S$$

$$= p\, \frac{\partial}{\partial p}\left[pN (p + q)^{N-1}\right] = p\left[N (p + q)^{N-1} + pN(N - 1)(p + q)^{N-2}\right]$$

Evaluating at p + q = 1:

$$\langle n^2\rangle = pN\left[1 + pN - p\right] = pN (pN + q) = (pN)^2 + Npq$$

Hence the variance is

$$\mathrm{var}(n) = \langle n^2\rangle - \langle n\rangle^2 = Npq, \qquad \mathrm{var}(n) = N/4 \;\text{ if } p = q = 1/2$$

To find the spread in position, we use $x = (2n - N)L$, so $x^2 = (4n^2 - 4nN + N^2) L^2$:

$$\mathrm{var}(x) = \langle x^2\rangle - \langle x\rangle^2 = 4\left[\langle n^2\rangle - \langle n\rangle^2\right] L^2 = 4 Npq\, L^2$$

$$\mathrm{var}(x) = N L^2 \;\text{ if } p = q = 1/2, \qquad \mathrm{rms}(x) = \sqrt{N}\, L$$
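The √N growth of the spread shows up directly in simulation:

```matlab
% 1D random walk with p = q = 1/2: rms displacement approaches sqrt(N)*L.
L = 1;  walkers = 1e5;
for N = [100 400 1600]
    steps = 2*(rand(walkers, N) < 0.5) - 1;     % each step is +1 or -1
    x = L * sum(steps, 2);
    fprintf('N = %4d:  rms(x) = %6.2f,  sqrt(N)*L = %6.2f\n', ...
            N, sqrt(mean(x.^2)), sqrt(N)*L);
end
```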

Large N limit of the binomial distribution:

Mean collision times of molecules in liquids are of the order of picoseconds, so in macroscopic observations N is a very large number. To find the limiting form of P(n), Taylor expand its logarithm around the mean:

$$\ln P(n) = \ln N! - \ln n! - \ln(N - n)! + n \ln p + (N - n) \ln q$$

$$\ln P(n) \approx \ln P(\langle n\rangle) + (n - \langle n\rangle) \left.\frac{d}{dn} \ln P(n)\right|_{\langle n\rangle} + \frac{1}{2} (n - \langle n\rangle)^2 \left.\frac{d^2}{dn^2} \ln P(n)\right|_{\langle n\rangle} + \cdots$$

Stirling's formula for ln(n!) for large n:

$$\ln n! \approx n \ln n - n + \frac{1}{2} \ln(2\pi n), \qquad \frac{d}{dn} \ln n! \approx \ln n + 1 - 1 + \frac{1/2}{n} \approx \ln n$$

Using $\langle n\rangle = pN$ and $N - \langle n\rangle = qN$, the first derivative vanishes at the mean:

$$\left.\frac{d}{dn} \ln P(n)\right|_{\langle n\rangle} = \left[-\ln n + \ln(N - n) + \ln p - \ln q\right]_{\langle n\rangle} = \ln \frac{(N - \langle n\rangle)\, p}{\langle n\rangle\, q} = \ln \frac{qNp}{pNq} = \ln 1 = 0$$

$$\left.\frac{d^2}{dn^2} \ln P(n)\right|_{\langle n\rangle} = -\frac{1}{\langle n\rangle} - \frac{1}{N - \langle n\rangle} = -\frac{1}{N}\left(\frac{1}{p} + \frac{1}{q}\right) = -\frac{p + q}{Npq} = -\frac{1}{Npq}$$

Substitute the derivatives in the expansion of ln P(n):

$$\ln P(n) = \ln P(\langle n\rangle) - \frac{1}{2}\, \frac{(n - \langle n\rangle)^2}{Npq} \;\Rightarrow\; P(n) = P(\langle n\rangle)\, e^{-(n - \langle n\rangle)^2 / 2Npq}$$

Thus the large-N limit of the binomial distribution is the Gaussian distribution. Np is the mean value, and the width and normalization are

$$s = \sqrt{Npq}, \qquad P(\langle n\rangle) = \frac{1}{\sqrt{2\pi}\, s} = \frac{1}{\sqrt{2\pi Npq}}$$

For the position variable $x = (2n - N)L$, we have

$$x - \langle x\rangle = 2 (n - \langle n\rangle) L, \qquad s_x = 2\sqrt{Npq}\, L$$

$$P(x) = \frac{1}{\sqrt{2\pi}\, s_x}\, e^{-(x - \langle x\rangle)^2 / 2 s_x^2}, \qquad \langle x\rangle = (2\langle n\rangle - N) L = (2p - 1) NL = (p - q) NL$$

How good is the Gaussian approximation?

[Figure: bars show the binomial distribution with p = q = 1/2 for N = 4 and N = 14; solid lines show the Gaussian approximation.]

In terms of the displacement $j = 2n - N$,

$$P_j^N = \frac{N!}{\left(\frac{N+j}{2}\right)! \left(\frac{N-j}{2}\right)!}\; \frac{1}{2^N}, \qquad P(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \;\text{ (in scaled units)}$$
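The agreement can be quantified in a few lines of Matlab:

```matlab
% Compare the binomial (p = q = 1/2) with its Gaussian limit for N = 4, 14.
for N = [4 14]
    n = 0:N;
    Pbin = arrayfun(@(k) nchoosek(N, k), n) / 2^N;
    s = sqrt(N)/2;                              % s = sqrt(N p q)
    Pgauss = exp(-(n - N/2).^2 / (2*s^2)) / (sqrt(2*pi)*s);
    fprintf('N = %2d:  max |binomial - Gaussian| = %.4f\n', ...
            N, max(abs(Pbin - Pgauss)));
end
```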

2. Thermal motion

Ideal gas law: $PV = NkT$

Macroscopic observables: P: pressure, V: volume, T: temperature, N: number of molecules; k = 1.38 × 10⁻²³ J/K (Boltzmann constant). At room temperature (T_r = 298 K), kT_r = 4.1 × 10⁻²¹ J = 4.1 pN nm (kT_r provides a convenient energy scale for biomolecular systems). The combination NkT suggests that the kinetic energy of individual molecules is about kT. To link the macroscopic properties to molecular ones, we need an estimate of pressure at the molecular level.

Derivation of the average kinetic energy from the ideal gas law

Consider a cubic box of side L filled with N gas molecules. The pressure on the walls arises from the collisions of molecules. For one molecule bouncing between the two y-z walls:

Momentum transfer to the wall: $\Delta q = m v_x - (-m v_x) = 2 m v_x$

Average time between collisions with that wall: $\Delta t = 2L / v_x$

Force on the wall due to a single molecule's collisions: $f_x = \Delta q / \Delta t = m v_x^2 / L$

In general, velocities have a distribution, so we take an average.

Average force due to one molecule: $\langle f_x\rangle = \dfrac{m}{L} \langle v_x^2\rangle$

Average force due to N molecules: $F_x = N \langle f_x\rangle = \dfrac{Nm}{L} \langle v_x^2\rangle$

Pressure on the wall: $P = \dfrac{F_x}{A} = \dfrac{Nm}{AL} \langle v_x^2\rangle = \dfrac{N}{V}\, m \langle v_x^2\rangle$

Generalising to all walls: $PV = Nm\langle v_x^2\rangle = Nm\langle v_y^2\rangle = Nm\langle v_z^2\rangle = NkT$

Average kinetic energy: $\langle K\rangle = \frac{1}{2} m \langle v^2\rangle = \frac{3}{2} kT$

Equipartition theorem: the mean energy associated with each degree of freedom is $\frac{1}{2} kT$.

Distribution of speeds

[Figure: experimental setup with a velocity filter, and measured speed distributions for Tl atoms at T = 944 K (○) and T = 870 K (●), compared with the 3D-Gaussian distribution (solid line); v is plotted in reduced units of √(2kT/m).]

Velocities in a gas have a Gaussian distribution (Maxwell):

$$P(v_x) = \frac{1}{\sqrt{2\pi}\, s}\, e^{-v_x^2 / 2s^2}, \qquad \langle v_x^2\rangle = s^2$$

Since $m \langle v_x^2\rangle = kT$, the width is

$$s^2 = \frac{kT}{m}$$

Distribution of speeds (3D-Gaussian):

$$P_v(v)\, dv = \left(\frac{1}{\sqrt{2\pi}\, s}\right)^3 e^{-v^2/2s^2}\, 4\pi v^2\, dv \;\Rightarrow\; P_v(v) = \sqrt{\frac{2}{\pi}} \left(\frac{m}{kT}\right)^{3/2} v^2\, e^{-m v^2 / 2kT}$$

This is the probability of a molecule having speed v regardless of direction.

Example: the most common gas molecule, N₂:

$$s = \sqrt{\frac{kT}{m}} = \sqrt{\frac{4.1 \times 10^{-21}}{4.7 \times 10^{-26}}} \approx 300 \text{ m/s}$$

$$\sqrt{2}\, s \approx 420 \text{ m/s}, \qquad \langle v\rangle = \sqrt{\frac{8}{\pi}}\, s \approx 470 \text{ m/s}, \qquad \sqrt{\langle v^2\rangle} = \sqrt{3}\, s \approx 510 \text{ m/s}$$

Oxygen is 16/14 times heavier, so for the O₂ molecule, scale the above results by $\sqrt{7/8} \approx 0.94$. Hydrogen is 14 times lighter, so for H₂ scale the above results by $\sqrt{14} \approx 3.7$.
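The same numbers in a short Matlab script (masses approximate):

```matlab
% Characteristic speeds for N2, with O2 and H2 obtained by mass scaling.
kT  = 4.1e-21;                                  % J, room temperature
m   = 4.7e-26 * [1, 16/14, 1/14];               % N2, O2, H2 masses in kg
gas = {'N2', 'O2', 'H2'};
for i = 1:3
    s = sqrt(kT / m(i));                        % width of the velocity dist.
    fprintf('%s:  s = %3.0f,  v* = %3.0f,  <v> = %3.0f,  rms = %3.0f m/s\n', ...
            gas{i}, s, sqrt(2)*s, sqrt(8/pi)*s, sqrt(3)*s);
end
```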

Generalise the Maxwell distribution to N molecules (use the multiplication rule, assuming they move independently):

$$P(\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N) \propto e^{-m v_1^2/2kT}\, e^{-m v_2^2/2kT} \cdots e^{-m v_N^2/2kT} = e^{-m(v_1^2 + v_2^2 + \cdots + v_N^2)/2kT} = e^{-E_{\mathrm{kin}}/kT}$$

This is simply the Boltzmann distribution for N non-interacting particles.

In general, the particles interact, so there is also position dependence:

$$P(\mathbf{r}_1, \mathbf{v}_1, \mathbf{r}_2, \mathbf{v}_2, \ldots, \mathbf{r}_N, \mathbf{v}_N) \propto e^{-E_{\mathrm{tot}}/kT}, \qquad E_{\mathrm{tot}} = E_{\mathrm{kin}} + E_{\mathrm{pot}} = E(\mathbf{r}_1, \mathbf{v}_1, \ldots, \mathbf{r}_N, \mathbf{v}_N)$$

The universality of the Gaussian distribution arises from the quadratic nature of E.

Activation barriers and relaxation to equilibrium

[Figure: speed distribution for boiling water at two different temperatures.]

When water boils, molecules with sufficient kinetic energy evaporate; removing the most energetic molecules creates a non-equilibrium state.

Arrhenius rate law:

$$\text{rate} \propto e^{-E_{\mathrm{barrier}}/kT}$$

E_barrier: activation barrier. Those molecules with kinetic energy greater than E_barrier can escape.

The equilibrium state (i.e. the Gaussian distribution) is restored via molecular collisions. Injecting very fast molecules into a box of molecules results in an initial spike in the Gaussian distribution. Gas molecules collide like billiard balls (energy and momentum are conserved), so in each collision the fast molecules lose energy to the slower ones (friction).
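As an illustration of the Arrhenius-like sensitivity, one can integrate the Maxwell speed distribution above a threshold; the escape speed and mass below are made-up values for illustration, not data from the course:

```matlab
% Fraction of molecules above a hypothetical escape speed vesc, from the
% Maxwell speed distribution; note the strong sensitivity to T.
m = 3.0e-26;  k = 1.38e-23;                     % ~ water molecule mass; J/K
vesc = 2000;                                    % illustrative threshold, m/s
for T = [340 370]
    Pv = @(v) sqrt(2/pi) * (m/(k*T))^1.5 * v.^2 .* exp(-m*v.^2/(2*k*T));
    fprintf('T = %3d K:  fraction above vesc = %.2e\n', T, integral(Pv, vesc, Inf));
end
```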

3. Entropy, Temperature and Free Energy

Entropy is a measure of disorder in a closed system.

When a system goes from an ordered to a disordered state, entropy increases and information is lost. The two quantities are intimately linked, and it is sometimes easier to reason in terms of information loss or gain.

Consider any of the following 2-state systems:
• Tossing of N coins
• Random walk in 1D (N steps)
• Box of N gas molecules divided into 2 parts
• N spin-½ particles with magnetic moment m in a magnetic field B

Each of these systems can be described by a binomial distribution:

$$P(N_1, N_2) = \frac{N!}{N_1!\, N_2!}\, p_1^{N_1} p_2^{N_2}, \qquad p_1 + p_2 = 1, \qquad N_1 + N_2 = N$$

There are $2^N$ states in total, but only N + 1 are distinct.

Introduce the number of states with a given (N₁, N₂) as

$$\Omega(N_1, N_2) = \frac{N!}{N_1!\, N_2!}$$

We define the disorder (or information content) as

$$I = K \ln \Omega, \qquad K = \frac{1}{\ln 2} \approx 1.44 \qquad \text{(Shannon's formula)}$$

For the 2-state system, assuming large N and using Stirling's formula, we obtain

$$I = K\left[\ln N! - \ln N_1! - \ln N_2!\right] \approx K\left[N \ln N - N - N_1 \ln N_1 + N_1 - N_2 \ln N_2 + N_2\right]$$

$$= -K\left[N_1 \ln\!\left(\frac{N_1}{N}\right) + N_2 \ln\!\left(\frac{N_2}{N}\right)\right]$$

Thus the amount of disorder per event is

$$\frac{I}{N} = -K\left[p_1 \ln p_1 + p_2 \ln p_2\right]$$

I vanishes for either p₁ = 1 or p₂ = 1 (zero disorder, maximum information) and is maximum for p₁ = p₂ = 1/2 (maximum disorder, minimum information).

Generalization to m levels, with $\sum_i p_i = 1$ and $\sum_i N_i = N$:

$$P(N_1, N_2, \ldots, N_m) = \frac{N!}{N_1!\, N_2! \cdots N_m!}\, p_1^{N_1} p_2^{N_2} \cdots p_m^{N_m}, \qquad \Omega(N_1, N_2, \ldots, N_m) = \frac{N!}{N_1!\, N_2! \cdots N_m!}$$

$$I = K\left[\ln N! - \sum_i \ln N_i!\right] \;\Rightarrow\; \frac{I}{N} = -K \sum_{i=1}^{m} p_i \ln p_i$$

Max disorder when all p_i are equal, and zero disorder when one of them is 1.
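Shannon's formula in a few lines of Matlab (K = 1/ln 2, i.e. disorder in bits):

```matlab
% Disorder per event I/N = -K * sum(p_i ln p_i), with K = 1/ln(2).
K = 1 / log(2);
I = @(p) -K * sum(p(p > 0) .* log(p(p > 0)));
fprintf('p = [1 0]:       I/N = %.3f bits\n', I([1 0]));        % zero disorder
fprintf('p = [1/2 1/2]:   I/N = %.3f bits\n', I([0.5 0.5]));    % maximal: 1 bit
fprintf('uniform, m = 8:  I/N = %.3f bits\n', I(ones(1,8)/8));  % 3 bits
```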

Entropy

Statistical postulate: an isolated system evolves to thermal equilibrium. Equilibrium is attained when the probability distribution of microstates has the maximum disorder (i.e. entropy). The entropy of a physical system is defined as

$$S = k \ln \Omega(E, N, \ldots)$$

Entropy of an ideal gas

Total energy:

$$E = \sum_{i=1}^{N} \frac{1}{2} m v_i^2 = \frac{1}{2m} \sum_{i=1}^{N} \mathbf{p}_i^2 = \frac{1}{2m} \sum_{k=1}^{3N} p_k^2 \;\Rightarrow\; \sqrt{2mE} = \left(\sum_{k=1}^{3N} p_k^2\right)^{1/2}$$

so $\sqrt{2mE}$ is the radius of a sphere in the 3N-dimensional momentum space. The area of such a sphere is proportional to $r^{3N-1} \approx r^{3N}$, hence the accessible area in momentum space scales as $(2mE)^{3N/2}$. The number of allowed states is given by the phase-space integral

$$\Omega \propto \int d^3r_1 \cdots d^3r_N\; d^3p_1 \cdots d^3p_N = V^N (2mE)^{3N/2}$$

$$S = k \ln\left[C\, V^N (2mE)^{3N/2}\right] = Nk \ln\left[V E^{3/2}\right] + \text{const.}$$

(Sackur-Tetrode formula), where the constant

$$C = \frac{2 \pi^{3N/2}}{\left(\frac{3N}{2} - 1\right)!}\; \frac{1}{N!\, h^{3N}}$$

contains the area of a unit sphere in 3N dimensions and Planck's constant h.

Temperature

Consider an isolated system made of two parts, A and B. The total energy is conserved:

$$E = E_A + E_B$$

If the energies are not equal and we allow exchange of energy via a small membrane, how will the energy evolve? The entropy must reach its maximum (maximum disorder):

$$S(E_A) = k\left[N_A\left(\tfrac{3}{2} \ln E_A + \ln V_A\right) + N_B\left(\tfrac{3}{2} \ln(E - E_A) + \ln V_B\right)\right]$$

$$\frac{dS}{dE_A} = 0 \;\Rightarrow\; \frac{N_A}{E_A} - \frac{N_B}{E_B} = 0 \;\Rightarrow\; \frac{E_A}{N_A} = \frac{E_B}{N_B} = \frac{3}{2} kT$$

Example: N_A = N_B = 10. Fluctuations in energy are proportional to $s_E \propto \sqrt{N}$, so the relative fluctuations scale as $\sqrt{N}/N = 1/\sqrt{N}$.

Definition of temperature

At equilibrium:

$$\frac{dS}{dE_A} = \frac{3}{2} k \left(\frac{N_A}{E_A} - \frac{N_B}{E_B}\right) = \frac{1}{T_A} - \frac{1}{T_B} = 0 \;\Rightarrow\; T_A = T_B$$

(zeroth law of thermodynamics). In general,

$$\frac{1}{T} = \frac{dS}{dE} \qquad\text{or}\qquad T\, \frac{dS}{dE} = 1$$

Free energy of a microscopic system "a" in a thermal bath

$$F_a = \langle E_a\rangle - T S_a = -kT \ln Z$$

where the average energy and the partition function are

$$\langle E_a\rangle = \sum_i P_i E_i, \qquad Z = \sum_i e^{-E_i/kT}$$

Example: 1D harmonic oscillator in a heat bath

$$E_a(x, v) = \frac{1}{2} m v^2 + \frac{1}{2} K x^2$$

Both x and v are Gaussian distributed:

$$P_x(x) = \frac{1}{\sqrt{2\pi}\, s_x}\, e^{-x^2/2s_x^2}, \quad s_x = \sqrt{\frac{kT}{K}}, \qquad P_v(v) = \frac{1}{\sqrt{2\pi}\, s_v}\, e^{-v^2/2s_v^2}, \quad s_v = \sqrt{\frac{kT}{m}}$$

$$\langle E_a\rangle = \int dx\, dv\; P(x, v)\, E_a(x, v) = \frac{1}{2} m s_v^2 + \frac{1}{2} K s_x^2 = \frac{1}{2} kT + \frac{1}{2} kT = kT$$

In 3D:

$$\langle E_a\rangle = \frac{1}{2} m \langle v^2\rangle + \frac{1}{2} K \langle r^2\rangle = \frac{3}{2} kT + \frac{3}{2} kT = 3 kT$$

Equipartition of energy: each degree of freedom has kT/2 of energy on average.
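A Monte Carlo check of $\langle E_a\rangle = kT$ in 1D (all parameters arbitrary):

```matlab
% Sample x and v from their Gaussian (Boltzmann) distributions and
% average the oscillator energy; the result should approach kT.
kT = 1.0;  m = 2.0;  K = 5.0;  N = 1e6;         % arbitrary units
x = sqrt(kT/K) * randn(N, 1);                   % s_x = sqrt(kT/K)
v = sqrt(kT/m) * randn(N, 1);                   % s_v = sqrt(kT/m)
E = 0.5*m*v.^2 + 0.5*K*x.^2;
fprintf('<E_a> = %.4f  (kT = %.4f)\n', mean(E), kT);
```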

Free energy of the harmonic oscillator

$$E_a(x, p) = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2$$

$$Z = \frac{1}{h} \int dx\, dp\; e^{-E_a(x, p)/kT} = \frac{1}{h}\, 2\pi\, s_x s_p = \frac{kT}{\hbar \omega}$$

where

$$\int dx\; e^{-x^2/2s_x^2} = \sqrt{2\pi}\, s_x, \quad s_x = \sqrt{\frac{kT}{m\omega^2}}, \qquad \int dp\; e^{-p^2/2s_p^2} = \sqrt{2\pi}\, s_p, \quad s_p = \sqrt{kT m}$$

Free energy:

$$F_a = -kT \ln Z = -kT \ln\!\left(\frac{kT}{\hbar\omega}\right)$$

Entropy:

$$S_a = \frac{1}{T}\left(\langle E_a\rangle - F_a\right) = -\frac{\partial F_a}{\partial T} = \frac{\partial}{\partial T}\left[kT \ln\!\left(\frac{kT}{\hbar\omega}\right)\right] = k\left[1 + \ln\!\left(\frac{kT}{\hbar\omega}\right)\right]$$

Harmonic oscillator in quantum mechanics

Energy levels: $E_n = \left(n + \frac{1}{2}\right)\hbar\omega$

$$Z = \sum_{n=0}^{\infty} e^{-(n + 1/2)\hbar\omega/kT} = e^{-\hbar\omega/2kT} \sum_{n=0}^{\infty} x^n, \qquad x = e^{-\hbar\omega/kT}, \qquad \sum_{n=0}^{\infty} x^n = \frac{1}{1 - x}$$

$$Z = \frac{e^{-\hbar\omega/2kT}}{1 - e^{-\hbar\omega/kT}}$$

Free energy:

$$F_a = -kT \ln Z = \frac{\hbar\omega}{2} + kT \ln\!\left(1 - e^{-\hbar\omega/kT}\right)$$

Entropy:

$$S_a = -\frac{\partial F_a}{\partial T} = k\left[\frac{\hbar\omega}{kT}\, \frac{e^{-\hbar\omega/kT}}{1 - e^{-\hbar\omega/kT}} - \ln\!\left(1 - e^{-\hbar\omega/kT}\right)\right]$$

To calculate the average energy, let $\beta = 1/kT$:

$$\langle E_a\rangle = \frac{1}{Z} \sum_i E_i\, e^{-\beta E_i} = -\frac{1}{Z}\, \frac{\partial}{\partial \beta} \sum_i e^{-\beta E_i} = -\frac{1}{Z}\, \frac{\partial Z}{\partial \beta} = \frac{\hbar\omega}{2}\, \frac{1 + e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}$$

Using $S_a = \left(\langle E_a\rangle - F_a\right)/T$ yields the same entropy expression.

Classical limit: $kT \gg \hbar\omega$, i.e. $\beta\hbar\omega \ll 1$:

$$\langle E_a\rangle \to kT$$

$$F_a = \frac{\hbar\omega}{2} + kT \ln\!\left(1 - e^{-\hbar\omega/kT}\right) \to -kT \ln\!\left(\frac{kT}{\hbar\omega}\right)$$

$$S_a = k\left[\frac{\hbar\omega}{kT}\, \frac{e^{-\hbar\omega/kT}}{1 - e^{-\hbar\omega/kT}} - \ln\!\left(1 - e^{-\hbar\omega/kT}\right)\right] \to k\left[1 + \ln\!\left(\frac{kT}{\hbar\omega}\right)\right]$$

in agreement with the classical results above.
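A quick numerical look at the classical limit (units with ℏω = 1):

```matlab
% Quantum <E_a> of the oscillator approaches the classical value kT
% when kT >> hbar*omega.
hw = 1.0;                                       % hbar*omega = 1
for kT = [0.2 1 5 20]
    Eq = (hw/2) * (1 + exp(-hw/kT)) / (1 - exp(-hw/kT));
    fprintf('kT/hw = %4.1f:  quantum <E> = %7.3f,  classical kT = %7.3f\n', ...
            kT/hw, Eq, kT);
end
```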

 