Lecture 4. Macrostates and Microstates (Ch. 2 )


Chapter 2: The Second Law.
Start with combinatorics, probability, and multiplicity:
• Combinatorics and probability
• 2-state paramagnet and Einstein solid
• Multiplicity of a macrostate
  – concept of entropy
• Directionality of thermal processes (irreversibility)
  – overwhelmingly probable macrostates
Combinatorics and probability
Combinatorics is the branch of mathematics studying the
enumeration, combination, and permutation of sets of
elements and the mathematical relations that characterize
their properties.
Examples: random walk, two-state systems, …
Probability is the branch of mathematics that studies the
possible outcomes of given events together with the
outcomes' relative likelihoods and distributions. In common
usage, the word "probability" is used to mean the chance
that a particular event (or set of events) will occur.
Probability
An event (very loosely defined) – any possible outcome of some measurement.
An event is a statistical (random) quantity if the probability of its occurrence, P, in the
process of measurement is < 1.
The “sum” of two events: in the process of measurement, we observe one event or the other. Addition rule for mutually exclusive events:
P(i or j) = P(i) + P(j)
(mutually exclusive events – the two events cannot occur in the same measurement).
The “product” of two events: in the process of measurement, we observe both events.
Multiplication rule for independent events (one event does not change the probability for the occurrence of the other):
P(i and j) = P(i) × P(j)
Example:
What is the probability of the same face appearing on two successive throws of a die?
The probability of any specific combination, e.g., (1,1): 1/6 × 1/6 = 1/36 (multiplication rule). Hence, by the addition rule, P(same face) = P(1,1) + P(2,2) + ... + P(6,6) = 6 × 1/36 = 1/6.
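As a quick check, the die example can be verified by brute-force enumeration. A minimal Python sketch (not part of the lecture):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two successive throws of a die
outcomes = list(product(range(1, 7), repeat=2))

# Count the outcomes where both faces match, divide by the total
p_same = Fraction(sum(1 for a, b in outcomes if a == b), len(outcomes))
print(p_same)  # 1/6
```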
Expectation value of a macroscopic observable A (averaged over all accessible microstates):
⟨A⟩ = Σ P(ε₁, ..., ε_N) · A(ε₁, ..., ε_N)
Two model systems with fixed positions of particles
and discrete energy levels
- the models are attractive because they can be described in terms of discrete microstates which can be easily counted (for a continuum of microstates, as in the example with a freely moving particle, we still need to learn how to do this). This simplifies the calculation of Ω. On the other hand, the results will be applicable to many other, more complicated models.
Despite the simplicity of the models, they describe a number of
experimental systems in a surprisingly precise manner.
- two-state paramagnet
(“limited” energy spectrum)
- the Einstein model of a solid
(“unlimited” energy spectrum)
....
The Two-State Paramagnet
- a system of non-interacting magnetic dipoles in an external magnetic field B; each dipole can have only two possible orientations along the field, either parallel or anti-parallel to this axis (e.g., a particle with spin ½). There are no “quadratic” degrees of freedom (unlike in an ideal gas, where the kinetic energies of molecules are unlimited); the energy spectrum of the particles is confined within a finite interval of E (just two allowed energy levels).
A particular microstate (e.g., ↑↓↑↑↓...) is specified if the directions of all spins are specified. A macrostate is specified by the total # of dipoles that point “up”, N↑ (the # of dipoles that point “down” is N↓ = N − N↑).

The two energy levels (with an arbitrary choice of zero energy):
E₂ = +μB (μ anti-parallel to B), E₁ = −μB (μ parallel to B), i.e. εᵢ = −μᵢ·B

N↑ – the number of “up” spins; N↓ – the number of “down” spins; N = N↑ + N↓
μ – the magnetic moment of an individual dipole (spin)

The total magnetic moment (a macroscopic observable):
M = μ(N↑ − N↓) = [N↓ = N − N↑] = μ(2N↑ − N)

The energy of a macrostate:
U = −M·B = μB(N↓ − N↑) = μB(N − 2N↑)
Example
Consider two spins. There are four possible microstates: ↑↑, ↑↓, ↓↑, ↓↓, with total moments M = 2μ, 0, 0, −2μ.
In zero field, all these microstates have the same energy (degeneracy). Note that the two microstates with M = 0 have the same energy even when B ≠ 0: they belong to the same macrostate, which has multiplicity Ω = 2. The macrostates can be classified by their moment M and multiplicity Ω:

M = 2μ   0   −2μ
Ω =  1   2    1

For three spins, the macrostates have M = 3μ, μ, −μ, −3μ:

M = 3μ   μ   −μ   −3μ
Ω =  1   3    3    1
The Multiplicity of a Two-State Paramagnet
Each of the microstates is characterized by N numbers; the number of equally probable microstates is 2^N, and the probability of being in a particular microstate is 1/2^N.
For a two-state paramagnet in zero field, the energy of all macrostates is the same (0). A macrostate is specified by (N, N↑). Its multiplicity is the number of ways of choosing N↑ objects out of N:

Ω(N, 0) = 1
Ω(N, 1) = N
Ω(N, 2) = N(N − 1)/2
Ω(N, 3) = N(N − 1)(N − 2)/(3·2)
...
Ω(N, n) = N(N − 1)·...·(N − n + 1)/(1·2·...·n) = N!/(n!(N − n)!)

where n! = n·(n − 1)·...·3·2·1 is “n factorial”, and 0! = 1 (there is exactly one way to arrange zero objects).

The multiplicity of a macrostate of a two-state paramagnet with (N, N↑):
Ω(N, N↑) = N!/(N↑! N↓!) = N!/(N↑!(N − N↑)!)
Stirling’s Approximation for N! (N >> 1)
Multiplicity depends on N!, and we need an approximation for ln(N!):

ln N! = ln 1 + ln 2 + ln 3 + ... + ln N ≈ ∫₁^N ln x dx = [x ln x − x]₁^N ≈ N ln N − N

ln N! ≈ N ln N − N

More accurately:
N! ≈ N^N e^(−N) √(2πN),  or  N! ≈ (N/e)^N √(2πN)

Check:
ln N! ≈ N ln N − N + ½ ln N + ½ ln 2π ≈ N ln N − N
because ln N << N for large N.
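A quick numerical check of Stirling’s approximation, comparing the exact ln N! (via the log-gamma function) with the leading terms (an illustrative sketch):

```python
from math import lgamma, log

def ln_factorial(N):
    """Exact ln(N!) via the log-gamma function: ln N! = lgamma(N + 1)."""
    return lgamma(N + 1)

def stirling(N):
    """Leading Stirling terms: ln N! ≈ N ln N − N."""
    return N * log(N) - N

# The absolute error grows only like ln N, so the relative error shrinks
for N in (10, 100, 10_000):
    print(N, ln_factorial(N), stirling(N))
```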
The Probability of Macrostates of a Two-State PM (B = 0)

P(N, N↑) = Ω(N, N↑)/(# of all microstates) = Ω(N, N↑)/Ω(N, all N↑) = Ω(N, N↑)/2^N

P(N, N↑) = N!/(N↑!(N − N↑)! 2^N)
≈ N^N e^(−N)/(N↑^(N↑) e^(−N↑) (N − N↑)^(N−N↑) e^(−(N−N↑)) 2^N)
= N^N/(N↑^(N↑) (N − N↑)^(N−N↑) 2^N)

As the system becomes larger, the P(N, N↑) graph becomes more sharply peaked:
N = 1: Ω(1, N↑) = 1, 2^N = 2, P(1, N↑) = 0.5
[plots of P(1, N↑), P(15, N↑), and P(10²³, N↑) vs. N↑: the peak at N↑ = N/2 narrows as N grows]

- random orientation of spins in B = 0 is overwhelmingly more probable – 2nd law!
(http://stat-www.berkeley.edu/~stark/Java/Html/BinHist.htm)
Multiplicity (Entropy) and Disorder
In general, we can say that small multiplicity implies
“order”, while large multiplicity implies “disorder”. An
arrangement with large  could be achieved by a
random process with much greater probability than an
arrangement with small .
[figure: an ordered arrangement (small Ω) vs. a disordered arrangement (large Ω)]
The Einstein Model of a Solid
In 1907, Einstein proposed a model that reasonably predicted the thermal
behavior of crystalline solids (a 3D bed-spring model):
a crystalline solid containing N atoms behaves as if it contained
3N identical independent quantum harmonic oscillators, each of
which can store an integer number ni of energy units  = ħ.
We can treat a 3D harmonic oscillator as if it were oscillating
independently in 1D along each of the three axes:
classical:
E = ½mv² + ½kr² = (½mvx² + ½kx²) + (½mvy² + ½ky²) + (½mvz² + ½kz²)

quantum:
Eᵢ = ħω(n_{i,x} + ½) + ħω(n_{i,y} + ½) + ħω(n_{i,z} + ½) = Σ ε(nᵢ + ½)

the solid’s internal energy:
U = Σᵢ₌₁^{3N} ε(nᵢ + ½) = Σᵢ₌₁^{3N} ε nᵢ + (3N/2)ε

where (3N/2)ε is the zero-point energy; the effective internal energy:
U = Σᵢ₌₁^{3N} ε nᵢ

All oscillators are identical; the energy quanta are the same, ε = ħω.
The Einstein Model of a Solid (cont.)
At high T, kBT >> ħω (the classical limit of large nᵢ):

U = Σᵢ₌₁^{3N} ε nᵢ ≈ 3N × 2 × ½kBT = 3N kBT
dU/dT = 3N kB ≈ 24.9 J/K·mole – Dulong–Petit’s rule

Measured values:
solid      dU/dT, J/K·mole
Lead       26.4
Gold       25.4
Silver     25.4
Copper     24.5
Iron       25.0
Aluminum   26.4
To describe a macrostate of an Einstein solid, we have to specify N and U; a microstate – nᵢ for 3N oscillators.
Example: the “macrostates” of an Einstein model with only one atom (three oscillators):
Ω(1, 0) = 1,  Ω(1, 1) = 3,  Ω(1, 2) = 6,  Ω(1, 3) = 10
The Multiplicity of an Einstein Solid
The multiplicity of a state of N oscillators (N/3 atoms) with q energy quanta distributed among these oscillators:

Ω(N, q) = (q + N − 1)!/(q!(N − 1)!) = (q + N − 1 choose q)

Proof: consider N oscillators, schematically represented as q dots separated by N − 1 lines (● ● | ● | | ● ● ...), for a total of q + N − 1 symbols. For given q and N, the multiplicity is the number of ways of choosing q of the symbols to be dots, q.e.d.

In terms of the total internal energy U = qε:

Ω(N, U) = (U/ε + N − 1)!/((U/ε)!(N − 1)!)

Example: the multiplicity of an Einstein solid with three atoms (N = 9 oscillators) and eight units of energy shared among them:
Ω(9, 8) = (8 + 9 − 1)!/(8!(9 − 1)!) = 12,870
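The “dots and lines” formula is a single binomial coefficient, so it is easy to verify numerically. A short sketch (illustrative) reproducing the one-atom table and the 12,870 example:

```python
from math import comb

def omega_einstein(N, q):
    """Ω(N, q) = (q + N − 1 choose q): q quanta among N oscillators."""
    return comb(q + N - 1, q)

# One atom = 3 oscillators: reproduces Ω = 1, 3, 6, 10 for q = 0..3
print([omega_einstein(3, q) for q in range(4)])

# Three atoms (N = 9 oscillators) sharing q = 8 quanta:
print(omega_einstein(9, 8))  # 12870
```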
Multiplicity of a Large Einstein Solid (kBT >> ε)
q = U/ε = βN – the total # of energy quanta in a solid.
β = U/(εN) – the average # of quanta available per oscillator.

ln Ω(N, q) = ln[(q + N − 1)!/(q!(N − 1)!)] ≈ ln[(q + N)!/(q! N!)] = ln(q + N)! − ln q! − ln N!

Stirling approximation: ln N! ≈ N ln N − N

ln Ω ≈ (q + N) ln(q + N) − (q + N) − q ln q + q − N ln N + N
    = (q + N) ln(q + N) − q ln q − N ln N

High temperature limit: kBT >> ε ⇔ q >> N.
Indeed, U = qε together with Dulong–Petit’s rule U = NkBT gives qε = NkBT, so
kBT >> ε ⇒ NkBT >> Nε ⇒ qε >> Nε ⇒ q >> N
Multiplicity of a Large Einstein Solid (kBT >> ε), cont.
High temperatures (kBT >> ε, β >> 1, q >> N):

ln(q + N) = ln[q(1 + N/q)] ≈ ln q + N/q

ln Ω(N, q) ≈ (q + N)(ln q + N/q) − q ln q − N ln N
         = N ln q + N + N²/q − N ln N ≈ N ln(q/N) + N

Ω(N, q) ≈ e^(N ln(q/N)) e^N = (eq/N)^N = (eβ)^N

Ω(U, N) ≈ (eU/(Nε))^N = f(N) · U^N

General statement:
For any system with N “quadratic” degrees of freedom (“unlimited” spectrum), the multiplicity is proportional to U^(N/2).
Einstein solid: 2N degrees of freedom, so Ω ∝ U^(2N/2) = U^N.
Multiplicity of a Large Einstein Solid (kBT << )
low temperatures:
(kBT << ,  <<1, q << N )
 eN
 ( N , q)  
 q
q
 e
   
  
N
(Pr. 2.17)
Microstates of a system (e.g., an ideal gas)
The evolution of a system can be represented by a trajectory in the multidimensional (configuration, phase) space of microparameters. Each point in this space represents a microstate.
During its evolution, the system will only pass through accessible microstates – the ones that do not violate the conservation laws: e.g., for an isolated system, the total internal energy must be conserved.
Microstate: the state of a system specified by describing the quantum state of each molecule in the system. For a classical particle – 6 parameters (xᵢ, yᵢ, zᵢ, pxᵢ, pyᵢ, pzᵢ); for a macro system – 6N parameters.
Statistics  Probabilities of Macrostates
The
statistical
approach:
to connect the
macroscopic observables (averages) to the probability
for a certain microstate to appear along the system’s
trajectory in configuration space, P( 1,  2,..., N).
Macrostate: the state of a macro system specified
by its macroscopic parameters. Two systems with the
same values of macroscopic parameters are
thermodynamically indistinguishable. A macrostate tells
us nothing about a state of an individual particle.
For a given set of constraints (conservation laws), a
system can be in many macrostates.
The Phase Space vs. the Space of Macroparameters
[figure: some macrostate, a point (P, V, T) on the surface defined by an equation of state, corresponds to numerous microstates in a multi-dimensional configuration (phase) space]
Examples: Two-Dimensional Configuration Space
Motion of a particle in a one-dimensional box (−L ≤ x ≤ L): the “macrostates” are characterized by a single parameter, the kinetic energy K₀. In the (x, px) plane, the accessible microstates lie on the two lines px = ±√(2mK₀), K = K₀.
Another example: a one-dimensional harmonic oscillator, with potential energy U(r) and K + U = const; the accessible microstates lie on an ellipse in the (x, px) plane.
Each “macrostate” corresponds to a continuum of microstates, which are characterized by specifying the position and momentum.
The Fundamental Assumption of Statistical Mechanics
The ergodic hypothesis: an isolated system in an equilibrium state, evolving in time, will pass through all the accessible microstates (those which correspond to the same energy) at the same recurrence rate, i.e., all accessible microstates are equally probable.
The ensemble of all equi-energetic states is called a microcanonical ensemble.
The average over long times will equal the average over the ensemble of all equi-energetic microstates: if we take a snapshot of a system with N microstates, we will find the system in any of these microstates with the same probability.
Probability for a stationary system: many identical measurements on a single system = a single measurement on many copies of the system.
Probability of a Macrostate, Multiplicity

Probability of a particular microstate of a microcanonical ensemble = 1/(# of all accessible microstates)

The probability of a certain macrostate is determined by how many microstates correspond to this macrostate – the multiplicity of the given macrostate, Ω.

Probability of a particular macrostate = (# of microstates that correspond to the given macrostate)/(# of all accessible microstates) = Ω/Ω_total

This approach will help us to understand why some of the macrostates are more probable than others, and, eventually, by considering interacting systems, we will understand the irreversibility of processes in macroscopic systems.
Concepts of Statistical Mechanics
1. The macrostate is specified by a sufficient number of macroscopically measurable parameters (for an Einstein solid – N and U).
2. The microstate is specified by the quantum state of each particle in a system (for an Einstein solid – the # of quanta of energy for each of the oscillators).
3. The multiplicity is the number of microstates in a macrostate. For each macrostate, there is an extremely large number of possible microstates that are macroscopically indistinguishable.
4. The Fundamental Assumption: for an isolated system, all accessible microstates are equally likely.
5. The probability of a macrostate is proportional to its multiplicity. This will be sufficient to explain irreversibility.
Entropy and Temperature (Ch. 2 and a bit of 3)
Ideas:
Each accessible microstate of an isolated system is equally probable
(the fundamental assumption).
Every macrostate has a countable number of microstates (follows from
Q.M.).
The probability of a macrostate is proportional to its multiplicity. When
systems get large, multiplicities get outrageously large.
On this basis, we will introduce the concept of entropy and discuss the
Second Law of Thermodynamics.
Our plan:
As our point of departure, we’ll use the model of an Einstein solid. We have already discussed one advantage of this model – “discrete” degrees of freedom. Another advantage – by considering two interacting Einstein solids, we can learn about the energy exchange between these two systems, i.e., how thermal equilibrium is reached.
By using our statistical approach, we’ll identify the most probable
macrostate of a combined system of two interacting Einstein solids after
reaching an equilibrium;
We’ll introduce the entropy as a measure of the multiplicity of a given
macrostate
The Second Law of Thermodynamics
Two Interacting Einstein Solids, Macropartitions
Suppose that we bring two Einstein solids A and B (two sub-systems with
NA, UA and NB, UB) into thermal contact, to form a larger isolated system.
What happens to these solids (macroscopically) after they have been
brought into contact?
The combined system: N = NA + NB, U = UA + UB.
The macropartition of the combined system is defined by the macroparameter UA.
[figure: solids A (NA, UA) and B (NB, UB) exchanging energy]
Macropartition: a given pair of macrostates for sub-systems A and B that
are consistent with conservation of the total energy U = UA + UB.
Different macropartitions amount to different ways that the energy can be
macroscopically divided between the sub-systems.
Example: the pair of macrostates where UA = 2ε and UB = 4ε is one possible macropartition of the combined system with U = 6ε.
As time passes, the system of two solids will randomly shift between
different microstates consistent with the constraint that U = const.
Question: what would be the most probable macrostate for given NA,
NB , and U ?
The Multiplicity of Two Sub-Systems Combined
The probability of a macropartition is proportional to its multiplicity:

Ω_AB = Ω_A × Ω_B   (macropartition A+B; sub-system A; sub-system B)

Example: bring two one-atom “solids” (three oscillators each) into thermal contact, with the total U = 6ε. Possible macropartitions for NA = NB = 3, q = qA + qB = 6:

Macropartition   UA   UB   Ω_A   Ω_B   Ω_AB
0:6              0    6ε    1    28     28
1:5              1ε   5ε    3    21     63
2:4              2ε   4ε    6    15     90
3:3              3ε   3ε   10    10    100
4:2              4ε   2ε   15     6     90
5:1              5ε   1ε   21     3     63
6:0              6ε   0    28     1     28

Grand total # of microstates:
(q + N − 1)!/(q!(N − 1)!) = (6 + 6 − 1)!/(6!(6 − 1)!) = 462
Exercise: check the multiplicities of macrostates for NA= NB = 100, U = qA+qB= 200
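The macropartition table for two Einstein solids in contact can be generated directly from the multiplicity formula; this illustrative sketch reproduces the NA = NB = 3, q = 6 case and checks the grand total of 462:

```python
from math import comb

def omega(N, q):
    """Ω(N, q) = (q + N − 1 choose q) for an Einstein solid."""
    return comb(q + N - 1, q)

NA = NB = 3   # oscillators in each one-atom sub-system
U = 6         # total number of energy quanta

# One row per macropartition: (qA, qB, Ω_A, Ω_B, Ω_AB)
table = [(qA, U - qA, omega(NA, qA), omega(NB, U - qA),
          omega(NA, qA) * omega(NB, U - qA)) for qA in range(U + 1)]
for qA, qB, oA, oB, oAB in table:
    print(f"{qA}:{qB}  {oA:3d} {oB:3d} {oAB:4d}")

total = sum(row[-1] for row in table)
print(total, omega(NA + NB, U))  # both 462: the macropartitions exhaust the grand total
```

The same loop with NA = NB = 100 and U = 200 handles the exercise above without any change to the logic.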
Recall
The Probability of Macrostates of a Two-State PM (B = 0):
P(N, N↑) = Ω(N, N↑)/2^N – as the system becomes larger, the P(N, N↑) graph becomes more sharply peaked around N↑ = N/2, and a random orientation of spins in B = 0 is overwhelmingly more probable (2nd law).
The Multiplicity of Two Sub-Systems Combined
The probability of a macropartition is proportional to its multiplicity: Ω_AB = Ω_A × Ω_B.
In real systems, N ~ 10²³, U ~ 10²³ ε.
How do we count the multiplicity? (A spreadsheet fails.)
How do we find the maximum multiplicity?
Answer: analytic approximation.
Where is the Maximum? The Average Energy per Atom
Let’s explore how the macropartition multiplicity for two sub-systems A and B (NA, NB, εA = εB = ε) in thermal contact depends on the energy of one of the sub-systems. In the high-T limit (q >> N):

Ω_A(NA, UA) = (eUA/(NAε))^NA,  Ω_B(NB, UB) = (eUB/(NBε))^NB

Ω_AB = Ω_A(NA, UA) × Ω_B(NB, UB) = (eUA/(NAε))^NA (e(U − UA)/(NBε))^NB

Setting dΩ_AB/dUA = 0 (with UB = U − UA):

NA (eUA/(NAε))^(NA−1) (e/(NAε)) (e(U − UA)/(NBε))^NB
− (eUA/(NAε))^NA NB (e(U − UA)/(NBε))^(NB−1) (e/(NBε)) = 0

⇒ NA/UA = NB/UB,  i.e.  UA/NA = UB/NB

In general, for two systems in thermal contact, the equilibrium (most probable) macropartition of the combined system is the one where the average energy per atom in each system is the same (the basis for introducing the temperature).
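The equal-energy-per-atom condition can be confirmed numerically by scanning ln Ω_AB over all ways of splitting U. The values of NA, NB, U below are illustrative (not from the lecture), and ε = 1:

```python
from math import log

def ln_omega(N, U):
    """High-T Einstein-solid multiplicity: ln Ω = ln[(eU/Nε)^N], with ε = 1."""
    return N * (log(U / N) + 1)

NA, NB, U = 300, 100, 800
# Scan every integer split of U and keep the one maximizing ln Ω_A + ln Ω_B
best = max(range(1, U), key=lambda UA: ln_omega(NA, UA) + ln_omega(NB, U - UA))
print(best, best / NA, (U - best) / NB)  # the peak sits where U_A/N_A = U_B/N_B
```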
Simpler argument
A special case: two identical sub-systems (NA = NB); Ω_AB(UA) is peaked at UA = UB = ½U.
[figure: Ω_A falls and Ω_B rises with UA; their product Ω_AB peaks at UA = U/2, where UA/NA = UB/NB]
Take-home exercise: find the position of the maximum of Ω_AB(UA) for NA = 200, NB = 100, U = 180ε.
Sharpness of the Multiplicity Function
How sharp is the peak? Let’s consider small deviations from the maximum for two identical sub-systems (NA = NB = N):

UA = (U/2)(1 + x),  UB = (U/2)(1 − x)   (x << 1)

Ω_AB ∝ (UA)^N (UB)^N = (U/2)^2N (1 + x)^N (1 − x)^N = (U/2)^2N (1 − x²)^N

Example: N = 100,000, x = 0.01:
(1 − x²)^N = (0.9999)^100,000 ≈ 4.5·10⁻⁵ << 1

More rigorously (p. 65):

Ω_AB ∝ (U/2)^N (U/2)^N exp[−N(ΔU/(U/2))²]   – a Gaussian function, with ΔU = UA − U/2

The peak width: N(ΔU/(U/2))² = 1  ⇒  ΔU = U/(2√N)

When the system becomes large, the probability as a function of UA (macropartition) becomes very sharply peaked, i.e., the “fluctuation” is very small.
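The N = 100,000 example and the 1/√N peak width can be checked in a few lines (an illustrative sketch):

```python
from math import exp, sqrt

N = 100_000
x = 0.01  # a 1% relative deviation from the peak

# Multiplicity relative to its peak value: (1 − x²)^N ≈ exp(−N x²)
ratio = (1 - x**2) ** N
print(ratio, exp(-N * x**2))  # both ≈ 4.5e-5: a 1% fluctuation is already very rare

# Relative width of the peak: ΔU/(U/2) = 1/√N
print(1 / sqrt(N))
```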
Implications? Irreversibility!
The vast majority of microstates are in macropartitions close to the most
probable one (in other words, because of the “narrowness” of the
macropartition probability graph). Thus,
(a) If the system is not in the most probable macropartition, it will rapidly
and inevitably move toward that macropartition. The reason for this
“directionality” (irreversibility): there are far more microstates in that
direction than away. This is why energy flows from “hot” to “cold”
and not vice versa.
(b) It will subsequently stay at that macropartition (or very near to it), in
spite of the random shuffling of energy back and forth between the
two solids.
When two macroscopic solids are in thermal equilibrium with each other,
completely random and reversible microscopic processes (leading to
random shuffling between microstates) tend at the macroscopic level to
push the solids inevitably toward an equilibrium macropartition (an
irreversible macro behavior). Any random fluctuations away from the
most likely macropartition are extremely small !
Problem:
Consider the system consisting of two Einstein solids P and Q in thermal equilibrium. Assume that we know the number of atoms in each solid and ε. What do we know if we also know
(a) the quantum state of each atom in each solid?
(b) the total energy of each of the two solids?
(c) the total energy of the combined system?

Answers: (a) the system’s microstate; (b) the system’s macropartition; (c) the system’s macrostate (the macropartition is then known only to within a fluctuation).
Problem:
Imagine that you discover a strange substance whose multiplicity is
always 1, no matter how much energy you put into it. If you put an
object made of this substance (sub-system A) into thermal contact
with an Einstein solid having the same number of atoms but much
more energy (sub-system B), what will happen to the energies of
these sub-systems?
A. Energy flows from B to A until they have the same energy.
B. Energy flows from A to B until A has no energy.
C. No energy will flow from B to A at all.
Entropy
The entropy of a system in a given macrostate (N, U, V, ...):

S ≡ kB ln Ω(N, U, V, ...)    Units: J/K

The entropy is a state function, i.e., it depends on the macrostate alone and not on the path of the system to this macrostate.
Entropy is just another (more convenient) way of talking about multiplicity.
Convenience: it reduces ridiculously large numbers to manageable numbers.
Example: for N ~ 10²³, Ω ~ 10^(10²³), ln Ω ~ 10²³; being multiplied by kB ~ 10⁻²³ J/K, it gives S ~ 1 J/K.
The “inverse” procedure: the entropy of a certain macrostate is 4600 kB. What is the multiplicity of the macrostate?
Ω = e^(S/kB) = e^4600 ~ 10^2000
If a system contains two or more interacting sub-systems, each having its own distinct macrostate, the total entropy of the combined system in a given macropartition is the sum of the entropies of the sub-systems in that macropartition:
Ω_AB = Ω_A × Ω_B × Ω_C × ...  ⇔  S_AB = S_A + S_B + S_C + ...
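The bookkeeping behind S = kB ln Ω (huge Ω → ordinary S, and back) can be checked directly; an illustrative sketch:

```python
from math import exp, log10

k_B = 1.380649e-23  # J/K

# S = k_B ln Ω turns astronomically large multiplicities into ordinary numbers:
ln_omega = 1e23          # ln Ω ~ 10^23 for N ~ 10^23 particles
S = k_B * ln_omega
print(S)                 # ~1 J/K

# The inverse: a macrostate with S = 4600 k_B has Ω = e^4600, i.e. roughly
digits = 4600 * log10(exp(1))
print(digits)            # ≈ 2000 decimal digits, so Ω ~ 10^2000
```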
Problem:
Imagine that one macropartition of a combined system of two Einstein solids has an entropy of 1 J/K, while another (where the energy is more evenly divided) has an entropy of 1.001 J/K. How many times more likely are you to find the system in the second macropartition compared to the first?

Prob(mp2)/Prob(mp1) = Ω₂/Ω₁ = e^(S₂/kB)/e^(S₁/kB) = e^(0.72536·10²³)/e^(0.72464·10²³) = e^((S₂ − S₁)/kB) = e^(7.2·10¹⁹)

e^(7.2·10¹⁹) = 10^(7.2·10¹⁹ · log₁₀e) ≈ 10^(3·10¹⁹)
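The arithmetic of converting a tiny entropy difference into a probability ratio is worth checking (an illustrative sketch):

```python
from math import e, log10

k_B = 1.380649e-23   # J/K
dS = 0.001           # J/K, entropy difference between the two macropartitions

exponent = dS / k_B                   # ΔS/k_B ≈ 7.2e19
ratio_power_of_ten = exponent * log10(e)
print(exponent, ratio_power_of_ten)   # ratio = e^(7.2e19) ≈ 10^(3e19)
```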
The Second Law of Thermodynamics
An isolated system, being initially in a non-equilibrium state, will evolve
from macropartitions with lower multiplicity (lower probability, lower
entropy) to macropartitions with higher multiplicity (higher probability,
higher entropy). Once the system reaches the macropartition with the
highest multiplicity (highest entropy), it will stay there. Thus,
The entropy of an isolated system never decreases.
(one of the formulations of the second law of thermodynamics).
( Is it really true that the entropy of an isolated system never decreases? consider a
pair of very small Einstein solids. Why is this statement more accurate for large
systems than small systems? )
Whatever increases the number of microstates will happen if it is
allowed by the fundamental laws of physics and whatever constraints
we place on the system.
“Whatever” - energy exchange, particles exchange, expansion of a
system, etc.
Entropy and Temperature
To establish the relationship between S and T, let’s consider two sub-systems, A and B, isolated from the environment. The sub-systems are separated by a rigid membrane with finite thermal conductivity (Nᵢ and Vᵢ are fixed; thermal energy can flow between the sub-systems). The sub-systems have “quadratic” degrees of freedom (Ω ~ U^(fN/2)). For example, two identical Einstein solids (NA = NB = N) near the equilibrium macropartition (UA = UB = U/2):

Ω_AB = Ω_A(N, UA) Ω_B(N, UB) = (e/(Nε))^2N (UA)^N (UB)^N

S_AB = S_A + S_B = 2N kB ln(e/(Nε)) + N kB ln UA + N kB ln UB

Equilibrium (with UB = U − UA, so ∂S_B/∂UA = −∂S_B/∂UB):

∂S_AB/∂UA = ∂S_A/∂UA + ∂S_B/∂UA = ∂S_A/∂UA − ∂S_B/∂UB = 0  ⇒  ∂S_A/∂UA = ∂S_B/∂UB

Thus, when two solids are in equilibrium, the slope ∂S/∂U is the same for both of them. On the other hand, when two solids are in equilibrium, they have the same temperature.

The statistical-mechanical definition of the temperature:

1/T ≡ (∂S/∂U)_{V,N}

Units: T – K, S – J/K, U – J
1
1. Note that the partial derivative in the definition of T is
calculated at V=const and N=const.
 S 

T  
  U V , N
We have been considering the entropy changes in the processes where two interacting
systems exchanged the thermal energy but the volume and the number of particles in
these systems were fixed. In general, however, we need more than just one parameter
to specify a macrostate:
S  kB ln  U ,V , N 
The physical meaning of the other two partial derivatives of S will be considered in L.7.
2. The slope S / U is inversely proportional to T.
- the energy should flow from higher T to
lower T; in thermal equilibrium, TA and TB
should be the same.
S AB
The sub-system with a larger S/U (lower T)
should receive energy from the sub-system with
a smaller S/U (higher T), and this process
will continue until SA/UA and SB/UB become
the same.
SA
SB
U/2
UA
1
Problems
 S 

T  
  U V , N
Problem: An object whose multiplicity is always 1, no matter what its thermal
energy is has a temperature that: (a) is always 0; (b) is always fixed; (c) is
always infinite.
Problem: Imagine that you discover a strange substance whose
multiplicity is always 1, no matter how much energy you put into it. If
you put an object made of this substance (sub-system A) into thermal
contact with an Einstein solid having the same number of atoms but
much more energy (sub-system B), what will happen to the energies
of these sub-systems?
Problem: If an object has a multiplicity that decreases as its thermal energy
increases (e.g., a two-state paramagnetic over a certain U range), its
temperature would: (a) be always 0; (b) be always fixed; (c) be negative; (d)
be positive.
From S(N,U,V) to U(N,T,V)
Now we can get an (energy) equation of state U = f(T, V, N, ...) for any system for which we have an explicit formula for the multiplicity (entropy)!! Thus, we’ve bridged the gap between statistical mechanics and thermodynamics! The recipe:
1. Find Ω(U, V, N, ...) – the most challenging step
2. S(U, V, N, ...) = kB ln Ω(U, V, N, ...)
3. 1/T = (∂S(U, V, N, ...)/∂U)_{V,N}
4. Solve for U = f(T, V, N, ...)
Even if we cannot calculate S,
we can still measure it:
Measuring Entropy
For V = const and N = const:

dS = δQ/T = dU/T = CV(T) dT/T

- in L.6, we’ll see that this equation holds for all reversible (quasi-static) processes (even if V is changed in the process).
This is the “thermodynamic” definition of entropy, which Clausius introduced in 1854, long before Boltzmann gave his “statistical” definition S ≡ kB ln Ω.

S(T) − S(0) = ∫₀^T CV(T′) dT′/T′

By heating a cup of water (200 g, CV = 840 J/K) from 20°C to 100°C, we increase its entropy by

ΔS = ∫₂₉₃^³⁷³ (840 J/K) dT′/T′ ≈ 200 J/K

At the same time, the multiplicity of the system is increased by a factor of e^(1.5·10²⁵).
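The water example is a one-line integral, ΔS = C ln(T₂/T₁); an illustrative check:

```python
from math import log

C = 840.0               # J/K, heat capacity of the cup of water, taken as constant
T1, T2 = 293.0, 373.0   # 20 °C to 100 °C, in kelvin
k_B = 1.380649e-23      # J/K

dS = C * log(T2 / T1)   # ΔS = ∫ C dT/T
print(dS)               # ≈ 200 J/K
print(dS / k_B)         # ln(Ω_final/Ω_initial) ≈ 1.5e25
```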
An Einstein Solid: from S(N,U) to U(N,T) at high T
High temperatures (kBT >> ε, q >> N):

Ω(N, U) ≈ (eq/N)^N = (eU/(Nε))^N

S(U, N) = N kB ln(eU/(Nε)) = N kB ln U + N kB ln(e/(Nε))

1/T = ∂S/∂U = N kB/U  ⇒  U(N, T) = N kB T

- in agreement with the equipartition theorem: the total energy should be ½kBT times the number of degrees of freedom (here 2N, for N oscillators).

To compare with experiment, we can measure the heat capacity at constant volume:
CV = δQ/dT = (dU + PdV)/dT|_V = (∂U/∂T)_{V,N}
CV = d(N kB T)/dT = N kB
- in nice agreement with experiment.
An Einstein Solid: from S(N,U) to U(N,T) at low T
Low temperatures (kBT << ε, q << N):

Ω(N, U) ≈ (eN/q)^q = (eNε/U)^(U/ε)

S(N, U) = kB (U/ε) ln(eNε/U)

1/T = ∂S/∂U = (kB/ε) ln(Nε/U)

⇒ U(N, ε, T) = Nε e^(−ε/kBT)

- as T → 0, the energy goes to zero as expected (Pr. 3.5).
The low-T heat capacity (a more accurate result will be obtained on the basis of the Debye model of solids):

CV = ∂U/∂T = Nε (ε/kBT²) e^(−ε/kBT) = N kB (ε/kBT)² e^(−ε/kBT)
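The exponential freeze-out in the low-T heat capacity is easy to see numerically; an illustrative sketch with x = ε/(kBT):

```python
from math import exp

def c_v_over_NkB(x):
    """Low-T Einstein heat capacity in units of N·k_B, with x = ε/(k_B T):
    C_V/(N k_B) = x² e^(−x)."""
    return x**2 * exp(-x)

# As T → 0 (x → ∞), the exponential wins over the x² prefactor and C_V collapses
for x in (1, 2, 5, 10, 20):
    print(x, c_v_over_NkB(x))
```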
Example (Pr. 3.14, page 97)
For a mole of aluminum, CV = aT + bT³ at T < 50 K (a = 1.35·10⁻³ J/K², b = 2.48·10⁻⁵ J/K⁴). The linear term is due to mobile electrons, the cubic term due to the crystal-lattice vibrations. Find S(T) and evaluate the entropy at T = 1 K and 10 K.

S(T) = ∫₀^T CV(T′) dT′/T′ = ∫₀^T (a + bT′²) dT′ = aT + (b/3)T³

T = 1 K:
S(1 K) = 1.35·10⁻³ J/K² · 1 K + ⅓ · 2.48·10⁻⁵ J/K⁴ · (1 K)³ ≈ 1.36·10⁻³ J/K
- at low T, nearly all the entropy comes from the mobile electrons.

T = 10 K:
S(10 K) = 1.35·10⁻³ J/K² · 10 K + ⅓ · 2.48·10⁻⁵ J/K⁴ · 10³ K³ ≈ 2.18·10⁻² J/K
- most of the entropy comes from lattice vibrations.

S(1 K)/kB = 1.36·10⁻³ J/K / 1.38·10⁻²³ J/K ≈ 10²⁰
S(10 K)/kB = 2.18·10⁻² J/K / 1.38·10⁻²³ J/K ≈ 1.6·10²¹
- much less than the # of particles; most degrees of freedom are still frozen out.
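The two evaluations follow directly from S(T) = aT + (b/3)T³; an illustrative check:

```python
a = 1.35e-3   # J/K², electronic term
b = 2.48e-5   # J/K⁴, lattice term

def entropy(T):
    """S(T) = aT + (b/3)T³, from integrating C_V/T = a + bT²."""
    return a * T + (b / 3) * T**3

print(entropy(1.0))    # ≈ 1.36e-3 J/K (electrons dominate)
print(entropy(10.0))   # ≈ 2.18e-2 J/K (lattice dominates)
```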
Residual Entropy
Glasses aren’t really in equilibrium; their relaxation time is huge. They do not have a well-defined T or CV. Glasses have a particularly large entropy at T = 0.
[figure: S vs. T for the liquid, supercooled liquid, glass, and crystal; the glass retains a residual entropy as T → 0. After Debenedetti & Stillinger, Nature (2001)]
Entropy of an Ideal Gas
Now we will derive the equation(s) of state for an ideal gas from the principles of statistical mechanics, following the path prescribed by the “microcanonical” ensemble thinking:
1. Find Ω(U, V, N, ...) – the most challenging step
2. S(U, V, N, ...) = kB ln Ω(U, V, N, ...)
3. 1/T = (∂S(U, V, N, ...)/∂U)_{V,N}
4. Solve for U = f(T, V, N, ...)
So far we have treated quantum systems whose states in the configuration (phase) space may be enumerated. When dealing with classical systems with translational degrees of freedom, we need to learn how to calculate the multiplicity.
Multiplicity for a Single Particle
- is more complicated than that for an Einstein solid, because it depends on three rather than two macroparameters (e.g., N, U, V).
Example: a particle in a one-dimensional “box” (−L ≤ x ≤ L).
Quantum mechanics (the uncertainty principle) helps us to enumerate all the different states in the configuration (phase) space:
Δx · Δpx ≈ h
The total number of ways of filling up the cells in phase space is the product of the number of ways the “space” cells can be filled times the number of ways the “momentum” cells can be filled. The number of microstates:
Ω₁ = Ω_space × Ω_p ≈ (2L/Δx)(px/Δpx) ≈ 2L px/h
Multiplicity of a Monatomic Ideal Gas (simplified)
For a molecule in a three-dimensional box: the state of the molecule is a point in the 6D space – its position (x, y, z) and its momentum (px, py, pz). The number of “space” microstates is:

Ω_space = V/(Δx·Δy·Δz) = V/Δx³

For N molecules:
Ω_space = (V/Δx³)^N

There is some momentum distribution of molecules in an ideal gas (Maxwell), with a long “tail” that goes all the way up to p = (2mU)^½ (U is the total energy of the gas). However, the momentum vector of an “average” molecule is confined within a sphere of radius p ~ (2mU/N)^½ (U/N is the average energy per molecule). Thus, for a single “average” molecule:

Ω_p ∝ (4/3)πp³/(Δpx·Δpy·Δpz)

The total number of microstates for N molecules:

Ω = Ω_space Ω_p ≈ (V × Vp/(Δx³·Δpx³))^N = (V × Vp/h³)^N

However, we have over-counted the multiplicity, because we have assumed that the atoms are distinguishable. For indistinguishable quantum particles, the result should be divided by N! (the number of ways of arranging N identical atoms in a given set of “boxes”):

Ω_indistinguishable ≈ (1/N!)(V × Vp/h³)^N
More Accurate Calculation of Ω_N (I)
Momentum constraints (monatomic ideal gas: 3N degrees of freedom):

1 particle:   px² + py² + pz² = 2mU
2 particles:  p1x² + p1y² + p1z² + p2x² + p2y² + p2z² = 2mU

The accessible momentum volume for N particles is the "area" of a
3N-dimensional hypersphere of radius r = (2mU)^(1/2):

"area" = 2 π^(3N/2) r^(3N-1) / (3N/2 - 1)!

Check for N = 1: 2 π^(3/2) r² / (1/2)! = 4πr², using (1/2)! = √π/2.

Multiplying the "area" by the thickness 2Δp of the momentum shell and combining
with the spatial factor V^N/N!:

Ω_N ≈ (1/N!) · (V^N/(3N/2)!) · (2πmU/h²)^(3N/2)

The reason why m matters: for a given U, a molecule with a larger
mass has a larger momentum, thus a larger "volume" accessible in
momentum space.

With Stirling's approximation:

Ω_indistinguishable(U,V,N) ≈ [ e^(5/2) · (V/(h³N)) · (4πmU/(3N))^(3/2) ]^N

Ω(U) ∝ U^(3N/2) = U^(fN/2), where fN is the total number of "quadratic" degrees of freedom.
More Accurate Calculation of Ω_N (II)
For a particle in a box (L)³ (Appendix A):

E_{nx,ny,nz} = (px² + py² + pz²)/(2m) = (h²/(8mL²))(nx² + ny² + nz²),   nx, ny, nz > 0,   Δp = h/(2L)

If Δp << p, the total degeneracy (multiplicity) of 1 particle with energy U is:

Ω₁(U) = (1/2³) · S₃(p)Δp/(Δp)³ = (1/2³) · ((2L)³/h³) · S₃(p)Δp = (V/h³) · S₃(p)Δp

where S₃(p) = 4πp² is the surface area of a sphere of radius p in momentum space
(the factor 1/2³ restricts the count to positive nx, ny, nz).

If Δp << p, the total degeneracy (multiplicity) of N indistinguishable particles
with energy U is:

Ω_N(U) = (1/N!) · (V^N/h^(3N)) · S_{3N}(p)·Δp

Plugging in the "area" of the hypersphere and using Stirling's approximation:

Ω_indistinguishable(U,V,N) ≈ [ e^(5/2) · (V/(h³N)) · (4πmU/(3N))^(3/2) ]^N

(up to a factor involving the shell thickness 2Δp, which is negligible once we take the logarithm).
Entropy of an Ideal Gas
S(U,V,N) = kB ln Ω(U,V,N)

The Sackur-Tetrode equation (monatomic ideal gas):

S(N,V,U) = N kB { ln[ (V/N) · (4πmU/(3h²N))^(3/2) ] + 5/2 }

Equivalently,

S = N kB ln(V/N) + (3/2) N kB ln(U/N) + φ(N,m)

where V/N is the average volume per molecule and U/N is the average energy per molecule.

In general, for a gas of polyatomic molecules:

S(N,V,T) = N kB ln(V/N) + (f/2) N kB ln T + f(N,m)

f = 3 (monatomic), 5 (diatomic), 6 (polyatomic)
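As a quick numerical sanity check of the Sackur-Tetrode equation, the sketch below (Python; the physical-constant values and the helium example are assumptions of this illustration, not part of the slides) evaluates S for one mole of helium at room temperature and atmospheric pressure:

```python
import math

# Physical constants (SI); these values are assumptions of this sketch.
kB = 1.380649e-23   # Boltzmann constant, J/K
h  = 6.62607015e-34 # Planck constant, J*s
NA = 6.02214076e23  # Avogadro number, 1/mol

def sackur_tetrode(N, V, U, m):
    """S = N kB { ln[(V/N)(4 pi m U / (3 h^2 N))^(3/2)] + 5/2 }."""
    return N * kB * (math.log((V / N) * (4 * math.pi * m * U / (3 * h**2 * N))**1.5) + 2.5)

# One mole of helium at T = 300 K and P = 101325 Pa:
m_He = 4.0026 * 1.66054e-27      # mass of a He atom, kg
N = NA
T, P = 300.0, 101325.0
V = N * kB * T / P               # volume from PV = N kB T
U = 1.5 * N * kB * T             # monatomic: U = (3/2) N kB T

S = sackur_tetrode(N, V, U, m_He)
print(S)   # ~126 J/K per mole
```

The result, about 126 J/(mol·K), is close to the tabulated standard entropy of helium – a classic success of the Sackur-Tetrode formula.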
Problem
Two cylinders (V = 1 liter each) are connected by a valve. One cylinder
contains hydrogen (H2) at P = 10⁵ Pa, T = 20°C; the other contains
helium (He) at P = 3·10⁵ Pa, T = 100°C. Find the entropy change
after mixing and equilibrating.

For each gas:

ΔS(N,V,T) = N kB ln(V_f/V_i) + (f/2) N kB ln(T_f/T_i)

The temperature after mixing follows from energy conservation, U1(T1) + U2(T2) = U_total(T_f):

(5/2) N1 kB T1 + (3/2) N2 kB T2 = [ (5/2) N1 kB + (3/2) N2 kB ] T_f

T_f = [ (5/2) N1 T1 + (3/2) N2 T2 ] / [ (5/2) N1 + (3/2) N2 ] = (5P1 + 3P2) / (5P1/T1 + 3P2/T2)

(using N_i = P_i V/(kB T_i)). Each gas doubles its volume, so:

H2:  ΔS_H2 = N1 kB ln 2 + (5/2) N1 kB ln(T_f/T1)
He:  ΔS_He = N2 kB ln 2 + (3/2) N2 kB ln(T_f/T2)

ΔS_total = ΔS_H2 + ΔS_He = (N1 + N2) kB ln 2 + (kB/2) [ 5 N1 ln(T_f/T1) + 3 N2 ln(T_f/T2) ] ≈ 0.8 J/K
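The arithmetic of this problem can be checked numerically; a minimal sketch, assuming the usual SI value of kB and converting the given temperatures to kelvin:

```python
import math

kB = 1.380649e-23  # J/K (assumed constant value)

# Initial conditions from the problem:
V = 1e-3                 # each cylinder, m^3
P1, T1 = 1e5, 293.15     # H2 (f = 5), 20 C
P2, T2 = 3e5, 373.15     # He (f = 3), 100 C

N1 = P1 * V / (kB * T1)  # particle numbers from PV = N kB T
N2 = P2 * V / (kB * T2)

# Final temperature from energy conservation:
Tf = (2.5 * N1 * T1 + 1.5 * N2 * T2) / (2.5 * N1 + 1.5 * N2)

# Entropy change: each gas doubles its volume (ln 2 term) plus the temperature terms
dS = (N1 + N2) * kB * math.log(2) \
     + 0.5 * kB * (5 * N1 * math.log(Tf / T1) + 3 * N2 * math.log(Tf / T2))

print(Tf, dS)   # Tf is about 340 K; dS is about 0.8 J/K
```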
Entropy of Mixing
Consider two different ideal gases (N1,
N2) kept in two separate volumes (V1, V2)
at the same temperature. To calculate the
increase of entropy in the mixing process,
we can treat each gas as a separate
system. In the mixing process, U/N
remains the same (T will be the same
after mixing). The parameter that
changes is V/N:

S(N,V,U) = N kB { ln[ (V/N)(4πmU/(3h²N))^(3/2) ] + 5/2 }

ΔS_total/kB = ΔS_A/kB + ΔS_B/kB = N1 ln(V/V1) + N2 ln(V/V2),   V = V1 + V2

If N1 = N2 = N/2 and V1 = V2 = V/2:

ΔS_total/kB = (N/2) ln[V/(V/2)] + (N/2) ln[V/(V/2)] = N ln 2

The total entropy of the system is greater after mixing – thus,
mixing is irreversible.
Gibbs "Paradox"

ΔS_total/kB = ΔS_A/kB + ΔS_B/kB = N1 ln(V/V1) + N2 ln(V/V2)

– applies only if the two gases are different!

If the two mixing gases are of the same kind (indistinguishable molecules):

ΔS_total = kB [ (N1 + N2) ln( (V1+V2)/(N1+N2) ) − N1 ln(V1/N1) − N2 ln(V2/N2) ]
         = kB [ N1 ln( (V1+V2) N1 / ((N1+N2) V1) ) + N2 ln( (V1+V2) N2 / ((N1+N2) V2) ) ] = 0

if N1/V1 = N2/V2 = (N1+N2)/(V1+V2).

ΔS_total = 0 because U/N and V/N available for each molecule remain the same after mixing:

S(N,V,U) = N kB { ln[ (V/N)(4πmU/(3h²N))^(3/2) ] + 5/2 },   Ω_N = (1/N!) · (V^N/(3N/2)!) · (2πmU/h²)^(3N/2)

Quantum-mechanical indistinguishability is important! (even though this equation
applies only in the low-density limit, which is "classical" in the sense that the distinction
between fermions and bosons disappears)
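A small numeric illustration of the two cases (Python sketch; the particle numbers and volumes below are arbitrary example values chosen at equal density):

```python
import math

def dS_over_kB_different(N1, V1, N2, V2):
    # Different gases: each expands into the full volume V1 + V2
    V = V1 + V2
    return N1 * math.log(V / V1) + N2 * math.log(V / V2)

def dS_over_kB_same(N1, V1, N2, V2):
    # Identical gases: compare N ln(V/N) before and after mixing
    V, N = V1 + V2, N1 + N2
    return N * math.log(V / N) - N1 * math.log(V1 / N1) - N2 * math.log(V2 / N2)

# Equal densities N/V on both sides:
d_diff = dS_over_kB_different(1e22, 1e-3, 2e22, 2e-3)
d_same = dS_over_kB_same(1e22, 1e-3, 2e22, 2e-3)
print(d_diff)  # > 0: mixing different gases is irreversible
print(d_same)  # ~ 0 (up to floating-point rounding): no change for identical gases
```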
Problem
Two identical ideal gases with the same pressure P and the same number of
particles N, but with different temperatures T1 and T2, are confined in two vessels,
of volumes V1 and V2, which are then connected. Find the change in entropy after
the system has reached equilibrium.

Substituting U = (3/2) N kB T into the Sackur-Tetrode equation:

S(N,V,T) = N kB ln[ (V/N)(2πm kB T/h²)^(3/2) ] + (5/2) N kB

Before mixing (the m-dependent constants cancel in ΔS and are dropped below):

S_i = S1 + S2 = N kB ln[ (V1/N) T1^(3/2) ] + (5/2) N kB + N kB ln[ (V2/N) T2^(3/2) ] + (5/2) N kB

After mixing:

S_f = 2N kB ln[ ((V1+V2)/(2N)) T_f^(3/2) ] + 5 N kB,   with  T_f = (T1 + T2)/2   – prove it!

ΔS/(N kB) = ln[ (V1+V2)²/(4 V1 V2) ] + (3/2) ln[ (T1+T2)²/(4 T1 T2) ]

Since P and N are the same, V ∝ T (V1/V2 = T1/T2), so the two logarithms are equal:

ΔS = (5/2) N kB ln[ (T1+T2)²/(4 T1 T2) ]

At T1 = T2, ΔS = 0, as it should be (Gibbs paradox).
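The final formula is easy to probe numerically (a sketch; the temperatures below are arbitrary examples):

```python
import math

def dS_over_NkB(T1, T2):
    # Same P and N => V proportional to T, so the volume and temperature terms
    # combine into (5/2) ln[(T1+T2)^2 / (4 T1 T2)]
    return 2.5 * math.log((T1 + T2)**2 / (4 * T1 * T2))

s_equal   = dS_over_NkB(300.0, 300.0)
s_unequal = dS_over_NkB(300.0, 400.0)
print(s_equal)    # 0: no entropy change when the temperatures are equal
print(s_unequal)  # > 0: equilibration of different temperatures is irreversible
```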
An Ideal Gas: from S(N,V,U) to U(N,V,T)
Ideal gas (fN degrees of freedom):  Ω(U,V,N) ∝ f(N) · V^N · U^(fN/2)

S(U,V,N) = (f/2) N kB ln(U/N) + N kB ln(V/N) + φ(N,m)

1/T = ∂S/∂U = (f/2) · N kB/U   ⇒   U(N,V,T) = (f/2) N kB T   – the "energy" equation of state

– in agreement with the equipartition theorem: the total energy is ½kBT times
the number of degrees of freedom.

The heat capacity of an ideal gas at constant volume:

CV = (∂U/∂T)_{V,N} = (f/2) N kB   (= (3/2) N kB for a monatomic gas)
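A minimal numeric illustration of the energy equation of state and the resulting molar heat capacities (the constant values are assumptions of this sketch):

```python
kB = 1.380649e-23   # J/K (assumed value)
NA = 6.02214076e23  # 1/mol (assumed value)

def U(f, N, T):
    """Energy equation of state: U = (f/2) N kB T."""
    return 0.5 * f * N * kB * T

def Cv_molar(f):
    """Molar heat capacity at constant volume: (f/2) NA kB = (f/2) R."""
    return 0.5 * f * NA * kB

cv_mono = Cv_molar(3)
cv_di   = Cv_molar(5)
print(cv_mono)  # monatomic: (3/2)R, about 12.5 J/(mol K)
print(cv_di)    # diatomic:  (5/2)R, about 20.8 J/(mol K)
```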
Partial Derivatives of the Entropy
We have been considering entropy changes in processes where two
interacting systems exchanged thermal energy, but the volume and the number of
particles in these systems were fixed. In general, however, we need more than just
one parameter to specify a macrostate; e.g., for an ideal gas

S = S(U,V,N) = kB ln Ω(U,V,N)

When all macroscopic quantities U, V, N are allowed to vary:

dS = (∂S/∂U)_{N,V} dU + (∂S/∂V)_{N,U} dV + (∂S/∂N)_{U,V} dN

So far we are familiar with the physical meaning of only one partial derivative of the entropy:

(∂S/∂U)_{V,N} = 1/T

Today we will explore what happens if we let V vary, and analyze the physical
meaning of the other partial derivatives of the entropy, starting with

(∂S/∂V)_{U,N}
Mechanical Equilibrium and Pressure
Consider two sub-systems, A (U_A, V_A, N_A) and B (U_B, V_B, N_B), e.g. ideal gases,
separated by a movable membrane. Let's fix U_A, N_A and U_B, N_B, but allow the volumes
to vary (the membrane is insulating and impermeable to gas molecules, but its position is
not fixed). Following the same logic as for thermal equilibrium, spontaneous
"exchange of volume" between the sub-systems will drive the system towards mechanical
equilibrium (the membrane at rest). The equilibrium macropartition should have the
largest (by far) multiplicity Ω(U,V) and entropy S_AB(U,V).

In mechanical equilibrium (V_A + V_B fixed, so dV_B = −dV_A):

∂S_AB/∂V_A = ∂S_A/∂V_A − ∂S_B/∂V_B = 0   ⇒   ∂S_A/∂V_A = ∂S_B/∂V_B

For an ideal gas:  (∂S/∂V)_{U,N} = N kB/V = P/T

– the volume-per-molecule should be the same
for both sub-systems, or, if T is the same, P must
be the same on both sides of the membrane.

The statistical-physics definition of pressure:

P = T (∂S/∂V)_{U,N} = (∂S/∂V)_{U,N} / (∂S/∂U)_{V,N}
The "Pressure" Equation of State for an Ideal Gas
Ideal gas (fN degrees of freedom):

S(N,V,T) = N kB ln(V/N) + (f/2) N kB ln T + φ(N,m)

The "energy" equation of state (U ↔ T):

1/T = (∂S/∂U)_{V,N} = (f/2) N kB · (1/U)   ⇒   U = (f/2) N kB T

The "pressure" equation of state (P ↔ T):

P = T (∂S/∂V)_{U,N} = T · N kB/V   ⇒   P V = N kB T

– we have finally derived the equation of state of an ideal gas from first principles!
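The statistical definition P = T (∂S/∂V)_{U,N} can be verified directly by numerically differentiating the Sackur-Tetrode entropy (a sketch; the argon parameters and grid values are arbitrary choices of this example):

```python
import math

kB = 1.380649e-23   # J/K (assumed value)
h  = 6.62607015e-34 # J*s (assumed value)

def S(N, V, U, m):
    # Sackur-Tetrode entropy of a monatomic ideal gas
    return N * kB * (math.log((V / N) * (4 * math.pi * m * U / (3 * h**2 * N))**1.5) + 2.5)

# Check P = T (dS/dV)_{U,N} against PV = N kB T for argon:
m = 39.95 * 1.66054e-27   # kg
N, T = 1e22, 300.0
U = 1.5 * N * kB * T      # fixes U consistent with T
V = 1e-3                  # m^3

dV = 1e-9
P_stat = T * (S(N, V + dV, U, m) - S(N, V - dV, U, m)) / (2 * dV)  # central difference
P_ideal = N * kB * T / V

print(P_stat, P_ideal)  # the two values agree to high accuracy
```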
Thermodynamic Identity I
Let's assume N is fixed:

dS = (∂S/∂U)_{N,V} dU + (∂S/∂V)_{N,U} dV

thermal equilibrium:     (∂S/∂U)_{V,N} = 1/T
mechanical equilibrium:  (∂S/∂V)_{U,N} = P/T

⇒   dS = (1/T) dU + (P/T) dV,   i.e.   dU = T dS − P dV
Quasi-Static Processes

dU = T dS − P dV   (quasi-static processes with fixed N)
dU = δQ + δW       (all processes)

Thus, for quasi-static processes:   δQ = T dS,   i.e.   dS = δQ/T

Comment on state functions: δQ is not an exact differential, but dS = δQ/T
is (S is a state function). Thus, the factor 1/T converts δQ into an exact
differential for quasi-static processes.

Quasi-static adiabatic (δQ = 0) processes:  dS = 0, ΔS = 0  ⇒  isentropic processes

The quasi-static adiabatic process with an ideal gas:

P V^γ = const,   V T^(1/(γ−1)) = const

– we've derived these equations from the 1st Law and P V = N kB T.
On the other hand, from the Sackur-Tetrode equation for an isentropic process:

ΔS = 0   ⇒   V T^(f/2) = const

which is the same relation, since 1/(γ − 1) = f/2 for γ = (f + 2)/f.
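The equivalence of the two adiabatic invariants can be checked numerically (a sketch with arbitrary example values):

```python
# For a quasi-static adiabat, V T^(f/2) = const is equivalent to P V^gamma = const
# with gamma = (f + 2)/f (monatomic: gamma = 5/3).
kB = 1.380649e-23  # J/K (assumed value)
f = 3
gamma = (f + 2) / f

N = 1e22
V1, T1 = 1e-3, 300.0
V2 = 2e-3
T2 = T1 * (V1 / V2)**(2 / f)   # from V T^(f/2) = const

P1 = N * kB * T1 / V1
P2 = N * kB * T2 / V2

invariant1 = P1 * V1**gamma
invariant2 = P2 * V2**gamma
print(invariant1, invariant2)  # equal: the two invariants are consistent
```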
Problem (all the processes are quasi-static):
(a) Calculate the entropy increase of an ideal gas in an isothermal process.
(b) Calculate the entropy increase of an ideal gas in an isochoric process.

You should be able to do this using (a) the Sackur-Tetrode equation and (b) dS = δQ/T.

Ω(U,V,N) ∝ f(N) V^N U^(fN/2)   ⇒   S = N kB ln[ g(N) V T^(f/2) ]

(a)  ΔS_{T=const} = N kB ln(V_f/V_i)

(b)  δQ = dU + P dV = N kB [ (f/2) dT + T dV/V ],   so   dS = δQ/T = N kB [ (f/2) dT/T + dV/V ]

ΔS_{V=const} = (f/2) N kB ln(T_f/T_i)

Let's verify that we get the same result with approaches (a) and (b) (e.g., for T = const).
Since ΔU = 0,

Q = −W = ∫ from V_i to V_f of (N kB T/V) dV = N kB T ln(V_f/V_i)   ⇒   ΔS = Q/T = N kB ln(V_f/V_i)   (Pr. 2.34)
Problem:
A small body of mass M with heat capacity (per unit mass) C, initially at temperature
T0 + ΔT, is brought into thermal contact with a heat bath at temperature T0.
(a) Show that if ΔT << T0, the increase ΔS in the entropy of the entire system (body + heat
bath) when equilibrium is reached is proportional to (ΔT)².
(b) Find ΔS if the body is a bacterium of mass 10⁻¹⁵ kg with C = 4 kJ/(kg·K), T0 = 300 K,
ΔT = 0.03 K.
(c) What is the probability of finding the bacterium at its initial temperature T0 + ΔT for t = 10⁻¹² s over
the lifetime of the Universe (~10¹⁸ s)?

(a)
ΔS_body = ∫ from T0+ΔT to T0 of (M C/T) dT = M C ln[ T0/(T0 + ΔT) ] = −M C ln(1 + ΔT/T0) < 0

ΔS_heat bath = Q/T0 = (1/T0) ∫ from T0 to T0+ΔT of M C dT = M C ΔT/T0 > 0

ΔS_total = ΔS_body + ΔS_heat bath = M C [ ΔT/T0 − ln(1 + ΔT/T0) ]
         = M C [ ΔT/T0 − ΔT/T0 + (1/2)(ΔT/T0)² − (1/3)(ΔT/T0)³ + ... ] ≈ (M C/2)(ΔT/T0)²

(b)
ΔS_total ≈ (M C/2)(ΔT/T0)² = (4·10³ J/(kg·K) × 10⁻¹⁵ kg / 2) × (0.03/300)² = 2·10⁻²⁰ J/K
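The estimate in part (b) can be reproduced numerically, comparing the exact ΔS_total with its quadratic approximation (Python sketch using the numbers given in the problem):

```python
import math

# Numbers from part (b) of the problem:
M, C = 1e-15, 4e3          # mass (kg) and specific heat (J/(kg K))
T0, dT = 300.0, 0.03       # bath temperature and initial excess (K)
Ctot = M * C               # total heat capacity of the body, J/K

dS_body = Ctot * math.log(T0 / (T0 + dT))   # integral of C dT/T (negative)
dS_bath = Ctot * dT / T0                    # Q/T0 (positive)
dS_exact = dS_body + dS_bath
dS_approx = 0.5 * Ctot * (dT / T0)**2       # leading quadratic term

print(dS_exact, dS_approx)   # both close to 2e-20 J/K
```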
Problem (cont.)
(c)
The multiplicity Ω of the (non-equilibrium) state with T_bacterium = 300.03 K is smaller than Ω
of the equilibrium state with T_bacterium = 300 K by the factor

Ω(T0 + ΔT)/Ω(T0) = exp(−ΔS_total/kB) = exp(−2·10⁻²⁰ J/K / 1.38·10⁻²³ J/K) = e^(−1450) ≈ 10^(−630)

The number of "1 ps" trials over the lifetime of the Universe:

10¹⁸ s / 10⁻¹² s = 10³⁰

Thus, the probability of the event happening in 10³⁰ trials:

probability ≈ (# of trials) × (probability of occurrence per trial) ≈ 10³⁰ × 10^(−630) = 10^(−600) ≈ 0
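Because these numbers underflow ordinary floating point, the probability estimate is best reproduced in base-10 logarithms (a sketch using the values from the slide):

```python
import math

kB = 1.380649e-23  # J/K (assumed value)
dS = 2e-20         # entropy increase from the previous slide, J/K

# Multiplicity ratio Omega(noneq)/Omega(eq) = exp(-dS/kB); work in log10 to avoid underflow
log10_ratio = -dS / kB / math.log(10)
print(log10_ratio)        # about -629: the ratio is ~10^-630

trials = 1e18 / 1e-12     # number of 1-ps intervals in the age of the Universe
log10_prob = math.log10(trials) + log10_ratio
print(log10_prob)         # about -599: an utterly negligible probability
```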