Murthy
Statistical Mechanics
January 29, 2014
Springer
Contents
2.13.2 N! ≈ N^N exp(−N) √(2πN)
Problems
8.1.3 g3/2(λ) versus λ
8.1.4 Graphical inversion to determine fugacity
8.1.5 Treatment of the Singular Behaviour
8.1.6 Bose-Einstein Condensation Temperature
8.1.7 Grand Potential for Bosons
8.1.8 Average Energy of Bosons
8.1.9 Specific Heat Capacity of Bosons
8.1.10 Mechanism of Bose-Einstein Condensation
List of Figures
3.1 Binomial distribution: B(n) = N!/[n!(N−n)!] p^n (1−p)^{N−n} with N = 10; B(n) versus n, depicted as sticks; (Left) p = 0.5; (Right) p = 0.35
3.2 Poisson distribution: P(n) = (μ^n/n!) exp(−μ) with mean μ; P(n) versus n, depicted as sticks; Gaussian distribution: G(x) = [1/√(2πσ²)] exp[−(x − μ)²/(2σ²)] with mean μ and variance σ² = μ, continuous line; (Left) μ = 1.5; (Right) μ = 9.5. For large μ, Poisson and Gaussian coincide
4.1 Two ways of keeping a particle in a box divided into two equal parts
4.2 Four ways of keeping two distinguishable particles in a box divided into two equal halves
List of Tables
3.1
4.1
4.2
5.1 Micro states of three dice with the constraint that they add to six
5.2 A micro state with occupation number representation (2, 3, 4)
5.3 A few micro states with the same occupation number representation (2, 3, 4); there are 1260 micro states with the same occupation number representation
7.1
1 Why should we study statistical mechanics?
A quick answer: because it is one of the core courses, like Classical Mechanics, Quantum Mechanics, Electrodynamics, and Mathematical Physics, in your postgraduate curriculum. Fortunately, you have only one course in statistical mechanics, unlike quantum mechanics, electrodynamics, and mathematical physics!
1.2 S = kB ln Ω̂(E, V, N)
This is the first and the most important link between the microscopic and the macroscopic worlds; it was proposed by Boltzmann. S stands for entropy and belongs to the macro world described by thermodynamics. Ω̂ is the number of micro states of a macroscopic system. kB is the Boltzmann constant, which establishes the correspondence of the statistical entropy of Boltzmann to the thermodynamic entropy of Clausius.
1.2.1 S = −kB Σi pi ln(pi)
I would call this the Boltzmann-Gibbs-Shannon entropy. The sum is over all the micro states of a macroscopic system; the micro states are labelled by i. The probability of a micro state i is denoted by pi.
An interesting question: We resort to a probabilistic description to hide our ignorance, or to reconcile with our inability to keep track of the innumerable micro states through which an equilibrium macroscopic system goes, dictated by Newton's equations of motion and initial conditions.
In thermodynamics, entropy is a property of a system. However, in statistical mechanics entropy is defined in terms of the probabilities of the micro states. Does it imply that entropy is determined not only by the system but also by the ignorance, or the inability, of the observer? Looks paradoxical?
1.2.2 Q = Σi Ei dpi
This equation provides a microscopic description of heat. The sum runs over all the micro states of the macroscopic system. Ei is the energy of the system when it is in micro state i. The probability that the system can be found in micro state i is given by pi. We need to impose an additional constraint, that Σi dpi is zero, to ensure that the total probability is unity.
1.2.3 W = Σi pi dEi
This equation defines work in the vocabulary of the micro world; the sum runs over all micro states of the system.
1.2.4 F = −kB T ln Q(T, V, N)
Helmholtz free energy F (T, V, N ), defined in thermodynamics, is related to
the canonical partition function Q(T, V, N ) of statistical mechanics. This is
another important micro-macro connection.
1.2.5 σE² = kB T² CV
1.2.6 σN²/⟨N⟩² = (kB T/V) kT
The zeroth law, which tells of thermal equilibrium, provides a basis for the thermodynamic property we call temperature. It is the starting point for the game of thermodynamics.
The first law articulates, in a smart way, the law of conservation of energy; it provides a basis for the thermodynamic property called the internal energy. You can put energy into a system by heat or by work.
The second law tells that, come what may, the entropy increases; it provides a basis for the thermodynamic property called entropy. An engine can draw energy from the surroundings by work and deliver the same amount of energy by heat. On the other hand, if the machine draws energy from the surroundings by heat, then the energy it can deliver by work is invariably less. The second law is a statement of this basic asymmetry.
The third law tells that entropy vanishes at absolute zero. We can say that the third law provides the basis for absolute zero temperature on the entropy scale. The third law is also about the unattainability of absolute zero. You can go as close as you desire, but you can never reach it.
Of these, the second law is tricky. It breaks the time-symmetry present in the microscopic descriptors. Macroscopic behaviour is not time-reversal invariant. There is a definite direction of time - the direction of increasing entropy.
statement, perhaps, provides the raison d'être for the statistics in statistical mechanics.
In these lectures I shall not address the second issue - concerning the emergence of time asymmetry observed in macroscopic phenomena. I shall leave this question to the philosophers and/or better equipped theoretical physicists. Instead we shall concentrate on how to derive the macroscopic properties of a system from the properties of its microscopic constituents and their interactions.
the second law ... a demon that extracts work from an equilibrium system. If time permits, I shall discuss Maxwell's demon and its later incarnations.
Toward the end, and again if time permits, I shall discuss some recent developments in thermodynamics - work fluctuation theorems and second law violations.
that you do not notice that the book is written in awful English and at
several places, flawed.
James P Sethna, Entropy, Order Parameters, and Complexity, Clarendon Press, Oxford (2008).
James Sethna covers an astonishingly wide range of modern applications; a book useful not only to physicists, but also to biologists, engineers, and sociologists. I find the exercises and footnotes very interesting; often more interesting than the main text! However, thermodynamics gets bruised. Is entropy a property of the system, or a property of the (knowledge or ignorance of the) fellow observing the system?
C Kittel and H Krömer, Thermal Physics, W H Freeman (1980).
A good book; somewhat terse. I liked the parts dealing with entropy, temperature, chemical potential, and Boltzmann weight; contains a good collection of examples.
Daniel V Schroeder, An Introduction to Thermal Physics, Pearson (2000).
Schroeder has excellent writing skills. The book reads well. Contains plenty of examples. Somewhat idiosyncratic.
M Glazer and J Wark, Statistical Mechanics: A Survival Guide, Oxford University Press (2010).
This book gives a neat introduction to statistical mechanics; well organized; contains a good collection of worked-out problems; a thin book and hence does not threaten you!
H C Van Ness, Understanding Thermodynamics, Dover (1969).
This is an awesome book; easy to read and very insightful. In particular, I enjoyed reading the first chapter on the first law of thermodynamics, the second on reversibility, and the fifth and sixth on the second law. My only complaint is that Van Ness employs British Thermal Units. Another minor point: Van Ness takes the work done by the system as positive and that done on the system as negative. Engineers always do this. Physicists and chemists employ the opposite convention. For them the sign coincides with the sign of the change of internal energy caused by the work process. When the system does work, its internal energy decreases; hence the work done by the system is negative. When work is done on the system, its internal energy increases; hence work done on the system is positive.
H B Callen, Thermodynamics, John Wiley (1960).
A standard textbook. This book has influenced generations of teachers and students alike, all over the world. Callen is a household name in the community of physicists. The book avoids all the pitfalls of the historical development of thermodynamics by introducing a postulational formulation.
H B Callen, Thermodynamics and an Introduction to Thermostatistics, Second Edition, Wiley, India (2005).
Another classic from H B Callen. He has introduced statistical mechanics without undermining the inner strength of thermodynamics. In fact, the statistical mechanics he presents enhances the beauty of thermodynamics.
The simple toy problem with a red die (the closed system) and two white dice (the heat reservoir), with the restriction that the sum be a fixed number (conservation of total energy), explains beautifully the canonical formalism. The pre-gas model introduced for explaining the grand canonical ensemble of Fermions and Bosons is simply superb. I also enjoyed the discussions on the subtle mechanism underlying Bose condensation. I can go on listing several such examples. The book is full of beautiful insights.
A relatively inexpensive Wiley student edition of the book is available in the Indian market. Buy your copy now!
Gabriel Weinreich, Fundamental Thermodynamics, Addison Wesley (1968).
Weinreich is original; he has a distinctive style. Perhaps you will feel uneasy when you read his book for the first time. But very soon you will get used to Weinreich's idiosyncrasy, and you would love this book.
C B P Finn, Thermal Physics, Nelson Thornes (2001).
Beautiful; concise; develops thermodynamics from first principles. Finn
brings out the elegance and power of thermodynamics.
Max Planck, Treatise on Thermodynamics, Third revised edition, Dover; first published in the year 1897. Translated from the seventh German edition (1922).
A carefully scripted masterpiece; emphasises chemical equilibrium. I do not think anybody can explain irreversibility as clearly as Planck does. If you think the third law of thermodynamics is irrelevant, then read the last chapter. You may change your opinion.
E Fermi, Thermodynamics, Dover (1936).
A great book from a great master; concise; the first four chapters (on thermodynamic systems, the first law, the second law, and entropy) are superb. I also enjoyed the parts covering the Clapeyron and van der Waals equations.
J S Dugdale, Entropy and its physical meaning, Taylor and Francis
(1998).
An amazing book. Dugdale de-mystifies entropy. This book is not just
about entropy alone, as the name would suggest. It teaches you thermodynamics and statistical mechanics. A book that cleverly avoids unnecessary
rigour.
M W Zemansky and R H Dittman, Heat and Thermodynamics: an intermediate textbook, Sixth edition, McGraw-Hill (1981).
A good and dependable book for a first course in thermodynamics. I am not very excited about the problems given in the book. Most of them are routine and require uninteresting algebraic manipulations.
R Shanthini, Thermodynamics for the Beginners, Science Education Unit, University of Peradeniya (2009).
2 Experiment, outcomes, events, probabilities and ensemble
2.5 Probabilities
Probability is defined for an event.
What is the probability of H in a toss of a coin?
One-half. This would be your immediate response. The logic is simple. There are two outcomes: Heads and Tails. We have no reason to believe that the coin prefers Heads over Tails or vice versa. Hence we say both outcomes are equally probable.
What is the probability of having at least one H in a toss of two coins? The event corresponding to this statement is {HH, HT, TH} and contains three elements. The sample space contains four elements. The required probability is thus 3/4. All four outcomes are equally probable.
Thus, if all the outcomes are equally probable, then the probability of an event is the number of elements in that event divided by the total number of elements in the sample space. For example, for the event A of rolling an even number in a game of dice, P(A) = 3/6 = 0.5.
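The counting rule just stated can be checked by brute-force enumeration. The following sketch (illustrative only, not part of the text; the function name is arbitrary) lists the sample space explicitly and computes the probability of an event as the ratio of the two counts:

```python
from itertools import product

def prob(event, sample_space):
    # P(A) = |A| / |S|, valid when all outcomes are equally probable.
    return len(event) / len(sample_space)

# Two coin tosses: sample space {HH, HT, TH, TT}.
space = list(product("HT", repeat=2))
at_least_one_H = [w for w in space if "H" in w]
p_coins = prob(at_least_one_H, space)   # 3/4

# Event A: rolling an even number with a single die.
die = [1, 2, 3, 4, 5, 6]
A = [n for n in die if n % 2 == 0]
p_die = prob(A, die)                    # 3/6 = 0.5
```

The same enumeration works for any finite experiment with equally probable outcomes.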
The outcome can be a continuum. For example, the angle of scattering of a neutron is a real number between zero and π. We then define an interval (θ1, θ2), where 0 ≤ θi ≤ π for i = 1, 2, as an event. A measurable subset of a sample space is an event.
Physicists have a name for this. They call it the axiom (or hypothesis or assumption) of Ergodicity. Strictly speaking, ergodicity is not an assumption; it signifies the absence of an assumption.
In other words, f(x)dx is the probability of the event (measurable subset) that contains all the outcomes to which we have attached a real number between x and x + dx.
The mean of x is given by ⟨x⟩ = ∫ dx x f(x).
2.11 Counting of the number of elements in events of the sample space: coin tossing
Later we shall generalise the notion of a Maxwell ensemble and talk of an ensemble as a collection of identical copies of a macroscopic system. We shall call it a Gibbs ensemble.
The number of elements of this set Ω(N) is denoted by the symbol Ω̂(N). We have Ω̂(N) = 2^N.
Let Ω(n; N) denote a subset of Ω(N) containing only those outcomes with n Heads (and hence N − n Tails). How many outcomes are there in the set Ω(n; N)?
Let Ω̂(n; N) denote the number of elements in the event Ω(n; N). I shall tell you how to count the number of elements of this set.
Take one outcome belonging to Ω(n; N). There will be n Heads in that outcome. Imagine for a moment that all these Heads are distinguishable. If you like, you can label them as H1, H2, ..., Hn. Permute all the Heads and produce n! new configurations. From each of these new configurations, produce (N − n)! configurations by carrying out the permutations of the N − n Tails. Thus from one outcome belonging to the set Ω(n; N), we have produced n! × (N − n)! new configurations. Repeat the above for each element of the set Ω(n; N), and produce Ω̂(n; N) × n! × (N − n)! configurations. A moment of thought will tell you that this number should be the same as N!.
We thus have,
Ω̂(n; N) n! (N − n)! = N!   (2.3)
It follows, then,
Ω̂(n; N) = N! / [n! (N − n)!]   (2.4)
Summing over all n,
Σ_{n=0}^{N} Ω̂(n; N) = Ω̂(N) = 2^N   (2.5)
Σ_{n=0}^{N} N! / [n! (N − n)!] = 2^N   (2.6)
Let us denote this number by the symbol Ω̂_max(N). We have Ω̂_max(N) = Ω̂(n = N/2; N).
Thus we have
Ω̂(N) = Σ_{n=0}^{N} N! / [n! (N − n)!] = 2^N   (2.7)
For example, the given coin is a system. Let p denote the probability of Heads and q = 1 − p the probability of Tails. The coin can be in a micro state Heads or in a micro state Tails.
This means the values of p and q are the same for all the coins belonging to the ensemble.
If you want to estimate the probability of Heads in the toss of a single coin experimentally, then you have to toss a large number of identical coins. The larger the size of the ensemble, the more (statistically) accurate is your estimate.
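To see this footnote in action, one can simulate such an ensemble of identical p-coins. The snippet below is a hypothetical sketch (function name, ensemble sizes, and seed are arbitrary choices) using Python's random module:

```python
import random

def estimate_p(p_true, ensemble_size, seed=42):
    # Toss `ensemble_size` identical coins once each and report the
    # fraction of Heads as the estimate of p.
    rng = random.Random(seed)
    heads = sum(1 for _ in range(ensemble_size) if rng.random() < p_true)
    return heads / ensemble_size

small_ensemble = estimate_p(0.3, 100)
large_ensemble = estimate_p(0.3, 100_000)
# The larger ensemble gives the (statistically) more accurate estimate.
```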
You can find this in several ways. Just guess it. I am sure you would have guessed the answer as N/2. We know that the binomial coefficient is largest when n = N/2 if N is even, or when n equals the two integers closest to N/2 for N odd. That is it.
If you are more sophisticated, take the derivative of Ω̂(n; N) with respect to n and set it to zero; solve the resulting equation to get the value of n for which the function is an extremum. You may find it useful to take the derivative of the logarithm of Ω̂(n; N); employ the Stirling approximation for the factorials: ln(m!) ≈ m ln(m) − m for large m. The Stirling approximation to large factorials is described in the next section. You can also employ any other pet method of yours to show that for n = N/2 the function Ω̂(n; N) is a maximum. Take the second derivative and show that the extremum is a maximum.
Ω̂_max(N) = Ω̂(n = N/2; N) = N! / [(N/2)! (N/2)!]   (2.8)
Employing the Stirling approximation
N! ≈ N^N exp(−N) √(2πN)   (2.9)
we have
Ω̂_max(N) = Ω̂(n = N/2; N) ≈ N^N exp(−N) √(2πN) / [(N/2)^{N/2} exp(−N/2) √(2π(N/2))]² = 2^N √(2/(πN))   (2.10)
Let us evaluate the natural logarithm of both the quantities under discussion. Let
S_G = ln Ω̂(N) = N ln 2   (2.11)
S_B = ln Ω̂_max(N)   (2.12)
    ≈ N ln 2 − (1/2) ln N   (2.13)
N! = N × (N − 1) × ⋯ × 3 × 2 × 1
ln N! = ln 1 + ln 2 + ln 3 + ⋯ + ln N = Σ_{k=1}^{N} ln(k)
      ≈ ∫_1^N ln x dx = (x ln x − x)|_1^N = N ln N − N + 1 ≈ N ln N − N   (2.14)
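The quality of the approximation ln N! ≈ N ln N − N is easy to check numerically. A minimal sketch (not from the text; the sample values of N are arbitrary):

```python
import math

def ln_factorial(n):
    # Exact ln(n!) as a sum of logarithms.
    return sum(math.log(k) for k in range(1, n + 1))

def stirling(n):
    # Leading Stirling approximation: ln(n!) ~ n ln n - n.
    return n * math.log(n) - n

# The relative error shrinks as N grows.
rel_err = {N: abs(ln_factorial(N) - stirling(N)) / ln_factorial(N)
           for N in (10, 100, 1000)}
```

For the astronomically large N of statistical mechanics the neglected (1/2) ln(2πN) term is utterly negligible compared with N ln N − N.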
N! = ∫_0^∞ dx x^N exp(−x) = ∫_0^∞ dx exp[N ln(x) − x] = ∫_0^∞ dx exp[−F(x)]   (2.15)
Problems
2.1. Consider a coin with probability of Heads given by 0.3. The experiment
consists of tossing the coin once. Write down a possible ensemble of realisations
of the experiment.
2.2. Consider a p-coin; i.e. a coin for which p is the probability of Heads.
Consider an experiment of tossing the p-coin independently twice. Write down
a possible ensemble of realisations, for the following cases.
(a) p = 1/2
(b) p = 1/4
2.3. Let x = X(ω) denote a continuous random variable defined in the range 0 ≤ x ≤ 1 with a uniform probability density function. Find the mean and variance of x.
2.4. Let x = X(ω) denote a continuous random variable defined in the range 0 ≤ x < ∞, with an exponential probability density: f(x) = exp(−x). Let Mn denote the n-th moment of the random variable x. It is defined as
Mn = ∫_0^∞ x^n exp(−x) dx.
3 Binomial, Poisson, and Gaussian
The Binomial distribution is given by
B(n) = N! / [n! (N − n)!] p^n q^{N−n}   (3.1)
Figure (3.1) depicts the Binomial distribution for N = 10, p = 0.5 and 0.35. What is the average value of n? The average is also called the mean, the first moment, the expectation value, etc. Denote it by the symbol M1 or ⟨n⟩. It is given by
Fig. 3.1. Binomial distribution: B(n) = N!/[n!(N−n)!] p^n (1−p)^{N−n} with N = 10; B(n) versus n; depicted as sticks; (Left) p = 0.5; (Right) p = 0.35
M1 = ⟨n⟩ = Σ_{n=0}^{N} n B(n; N)
   = Σ_{n=0}^{N} n N!/[n! (N − n)!] p^n q^{N−n}
   = Np Σ_{n=1}^{N} (N−1)!/[(n−1)! ((N−1) − (n−1))!] p^{n−1} q^{(N−1)−(n−1)}
   = Np Σ_{n=0}^{N−1} (N−1)!/[n! ((N−1) − n)!] p^n q^{(N−1)−n}
   = Np Σ_{n=0}^{N−1} B(n; N−1)
   = Np   (3.2)
Thus the first moment (or the average) of the random variable n is Np. We can define higher moments. The k-th moment is defined as
Mk = ⟨n^k⟩ = Σ_{n=0}^{N} n^k B(n)   (3.3)
The variance is given by
σ² = Σ_{n=0}^{N} (n − M1)² B(n) = Σ_{n=0}^{N} n² B(n) − M1² = M2 − M1²   (3.4)
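The moment formulas above can be verified by direct summation over the distribution. The sketch below (illustrative; the parameter values N = 10, p = 0.5 are an arbitrary choice) reproduces M1 = Np and σ² = M2 − M1² = Npq:

```python
import math

def binomial_pmf(n, N, p):
    # B(n; N) = N!/(n!(N-n)!) p^n q^(N-n)
    return math.comb(N, n) * p**n * (1 - p)**(N - n)

def moment(k, N, p):
    # M_k = sum over n of n^k B(n)
    return sum(n**k * binomial_pmf(n, N, p) for n in range(N + 1))

N, p = 10, 0.5
M1 = moment(1, N, p)                 # N p = 5
variance = moment(2, N, p) - M1**2   # M2 - M1^2 = N p q = 2.5
```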
For the Binomial distribution
B(n) = N!/[n! (N − n)!] p^n q^{N−n}   (3.5)
the moment generating function is defined as
B̃(z) = Σ_{n=0}^{N} z^n B(n)   (3.6)
The first thing we notice is that B̃(z = 1) = 1. This guarantees that the probability distribution B(n) is normalized. The moment generating function is like a discrete transform of the probability distribution function. We transform the variable n to z.
Let us now take the first derivative of the moment generating function with respect to z. We have,
dB̃/dz = Σ_{n=0}^{N} n z^{n−1} B(n)   (3.7)
z dB̃/dz = Σ_{n=0}^{N} n z^n B(n)   (3.8)
The second derivative gives
d²B̃/dz² = Σ_{n=0}^{N} n(n−1) z^{n−2} B(n)   (3.9)
z² d²B̃/dz² = Σ_{n=0}^{N} n(n−1) z^n B(n)   (3.10)
For the Binomial random variable, we can derive the moment generating function:
B̃(z) = Σ_{n=0}^{N} z^n B(n) = Σ_{n=0}^{N} N!/[n! (N − n)!] (zp)^n q^{N−n} = (q + zp)^N   (3.11)
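One can check numerically that (q + zp)^N really behaves as the generating function should: it agrees with the direct sum Σ z^n B(n), equals unity at z = 1 (normalization), and its first derivative at z = 1 gives ⟨n⟩ = Np. A sketch under assumed parameter values (N = 10, p = 0.35, matching the figure):

```python
import math

def B_tilde(z, N, p):
    # Closed form: (q + z p)^N
    return (1.0 - p + z * p) ** N

def B_tilde_sum(z, N, p):
    # Direct definition: sum over n of z^n B(n)
    q = 1.0 - p
    return sum(z**n * math.comb(N, n) * p**n * q**(N - n)
               for n in range(N + 1))

N, p = 10, 0.35
norm = B_tilde(1.0, N, p)   # B~(1) = 1: normalization

# <n> = dB~/dz at z = 1, here approximated by a central difference.
h = 1e-6
mean = (B_tilde(1 + h, N, p) - B_tilde(1 - h, N, p)) / (2 * h)
```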
B(n; 10) = 10!/[n! (10 − n)!] (0.1)^n (0.9)^{10−n}   (3.12)
The table below gives the probabilities calculated from the Binomial distribution.
n : B(n; 10)      n : B(n; 10)
0 : 0.0001        6 : 0.2508
1 : 0.0016        7 : 0.2150
2 : 0.0106        8 : 0.1209
3 : 0.0425        9 : 0.0403
4 : 0.1115       10 : 0.0060
5 : 0.2007
Consider the same problem with v = 10⁻³ m³ and N = 10⁵. We have p = 10⁻⁴ and Np = 10. Immediately we recognize that the Binomial distribution is not appropriate for this problem. Calculation of the probability of finding n molecules in v involves evaluation of 100000!.
What is the right distribution for this problem and problems of this kind? To answer this question, consider what happens to the Binomial distribution in the limit of N → ∞, p → 0, with Np = μ, a constant. Note that Np = Nv/V = μ remains constant.
We shall show below that in this limit, the Binomial goes over to the Poisson distribution.
3.1.2 Poisson distribution
We start with
B̃(z) = (q + zp)^N   (3.13)
     = q^N (1 + zp/q)^N = (1 − p)^N (1 + zp/q)^N   (3.14)
In the limit N → ∞, p → 0, Np = μ,
B̃(z) → exp(−Np) exp(zNp/q) = exp(−μ) exp(zμ) ≡ P̃(z)   (3.15)
P̃(z) = exp[−μ(1 − z)]   (3.16)
The coefficient of z^n gives the Poisson distribution:
P(n) = (μ^n / n!) exp(−μ)   (3.17)
Alternatively, take the ratio of successive terms of the Binomial distribution:
B(n)/B(n−1) = [N! p^n q^{N−n} / (n! (N−n)!)] × [(n−1)! (N−n+1)! / (N! p^{n−1} q^{N−n+1})]
            = p(N − n + 1)/(nq) = [Np − p(n−1)]/(nq) → μ/n in the limit p → 0, Np = μ   (3.18)
Start with
B(n = 0; N) = q^N = (1 − p)^N → exp(−Np) = P(n = 0; μ) = exp(−μ)   (3.19)
We get
P(n = 1; μ) = μ exp(−μ)   (3.20)
P(n = 2; μ) = (μ²/2!) exp(−μ)   (3.21)
P(n = 3; μ) = (μ³/3!) exp(−μ)   (3.22)
and, in general,
P(n; μ) = (μ^n/n!) exp(−μ)   (3.23)
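The limit can be watched numerically: hold Np = μ fixed, let N grow, and the Binomial probabilities approach the Poisson probabilities. A minimal sketch (the value μ = 2 and the sample sizes are arbitrary choices):

```python
import math

def binomial_pmf(n, N, p):
    return math.comb(N, n) * p**n * (1 - p)**(N - n)

def poisson_pmf(n, mu):
    return mu**n * math.exp(-mu) / math.factorial(n)

mu = 2.0
max_diff = {}
for N in (10, 100, 1000):
    p = mu / N   # keep N p = mu constant
    max_diff[N] = max(abs(binomial_pmf(n, N, p) - poisson_pmf(n, mu))
                      for n in range(10))
# max_diff[N] decreases steadily as N grows.
```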
In radioactive decay, λ is called the decay constant. Problem: find how λ and the half-life of the radioactive substance are related to each other.
We can interpret P(n, t) as the probability that you get n counts over a duration t. Show that,
P(n, t) = P(n, t − Δt)[1 − λΔt] + P(n − 1, t − Δt) λΔt   (3.24)
In the limit Δt → 0 this gives the master equation
dP(n, t)/dt = −λ [P(n, t) − P(n − 1, t)]   (3.26)
The above equation can be solved by an easy method and an easier method.
3.1.5 Easy method
Write down the differential equation for n = 0 and solve it to get,
P(0, t) = exp(−λt)   (3.27)
where we have taken the initial condition as
P(n, t = 0) = δ_{n,0} = { 1 for n = 0 ; 0 for n ≠ 0 }   (3.28)
Write down the equation for n = 1. In the resulting equation substitute for P(0, t) and solve the resulting differential equation to get,
P(1, t) = λt exp(−λt)   (3.29)
Proceeding in the same way,
P(2, t) = [(λt)²/2!] exp(−λt)   (3.30)
P(3, t) = [(λt)³/3!] exp(−λt)   (3.31)
and, in general,
P(n, t) = [(λt)^n/n!] exp(−λt)   (3.32)
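As a cross-check, the master equation can also be integrated numerically; the forward-Euler sketch below (step count, cut-off n_max, and parameter values are arbitrary choices, not from the text) reproduces the Poisson solution:

```python
import math

def evolve(lam, t_final, steps, n_max=50):
    # Forward-Euler integration of dP(n,t)/dt = -lam [P(n,t) - P(n-1,t)]
    # with initial condition P(n,0) = delta_{n,0}.
    dt = t_final / steps
    P = [0.0] * (n_max + 1)
    P[0] = 1.0
    for _ in range(steps):
        new = P[:]
        for n in range(n_max + 1):
            inflow = P[n - 1] if n > 0 else 0.0
            new[n] = P[n] + dt * lam * (inflow - P[n])
        P = new
    return P

lam, t = 1.0, 3.0
P = evolve(lam, t, steps=5000)
# Exact solution: P(n,t) = (lam t)^n e^{-lam t} / n!
exact = [(lam * t)**n * math.exp(-lam * t) / math.factorial(n)
         for n in range(10)]
err = max(abs(P[n] - exact[n]) for n in range(10))
```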
Multiplying the master equation by z^n and summing over n, the generating function P̃(z, t) = Σ_{n=0}^{∞} z^n P(n, t) obeys
∂P̃(z, t)/∂t = −λ(1 − z) P̃(z, t)   (3.33)
Show that
P̃(z, t = 0) = Σ_{n=0}^{∞} z^n P(n, t = 0) = Σ_{n=0}^{∞} z^n δ_{n,0} = 1   (3.35)
and hence
P̃(z, t) = exp[−λ(1 − z)t]   (3.36)
The characteristic function of a random variable X is
Φ_X(k) = Σ_{n=0}^{∞} [(ik)^n/n!] ∫ dx x^n f(x) = Σ_{n=0}^{∞} [(ik)^n/n!] Mn   (3.38)
The cumulant generating function is the logarithm of the characteristic function:
Ψ_X(k) = ln Φ_X(k) = ln(1 + Σ_{n=1}^{∞} [(ik)^n/n!] Mn)   (3.39)
Employing ln(1 + η) = Σ_{n=1}^{∞} (−1)^{n+1} η^n/n with η = Σ_{m=1}^{∞} [(ik)^m/m!] Mm,   (3.40)
and collecting powers of (ik), we can write
Ψ_X(k) = Σ_{n=1}^{∞} [(ik)^n/n!] ζn   (3.41)
where ζn denotes the n-th cumulant.
Consider Y = (x1 + x2 + ⋯ + xN)/N. Its characteristic function is
Φ_Y(k) = ∫dx1 ∫dx2 ⋯ ∫dxN exp[ik (x1 + x2 + ⋯ + xN)/N] f(x1, x2, ⋯, xN)   (3.42)
For independent and identically distributed random variables, f(x1, ⋯, xN) = f(x1) f(x2) ⋯ f(xN), and
Φ_Y(k) = [∫ dx exp(ikx/N) f(x)]^N = [Φ_X(k → k/N)]^N = exp[N ln Φ_X(k → k/N)]
       = exp[N Σ_{n=1}^{∞} [(ik)^n/n!] (ζn/N^n)]
       = exp[Σ_{n=1}^{∞} [(ik)^n/n!] (ζn/N^{n−1})]
       = exp[ik ζ1 − (k²/2!)(ζ2/N) + O(1/N²)]
       → exp[ik μ − (k²/2!)(σ²/N)]   (3.43)
which is the characteristic function of a Gaussian with mean μ and variance σ²/N.
For the Poisson distribution, the characteristic function is P̃(k; μ) = exp[μ(exp(ik) − 1)]. Substitute the power series expansion of the exponential function and get,
P̃(k; μ) = exp[μ Σ_{n=1}^{∞} (ik)^n/n!]   (3.46)
Retaining only the first two terms, valid for large μ,
P̃(k) ≈ exp[ik μ − (k²/2!) μ]   (3.47)
The above is the Fourier transform, or the characteristic function, of a Gaussian random variable with mean μ and variance also μ.
Fig. 3.2. Poisson distribution: P(n) = (μ^n/n!) exp(−μ) with mean μ; P(n) versus n, depicted as sticks; Gaussian distribution: G(x) = [1/√(2πσ²)] exp[−(x − μ)²/(2σ²)] with mean μ and variance σ² = μ: continuous line. (Left) μ = 1.5; (Right) μ = 9.5. For large μ, Poisson and Gaussian coincide.
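The statement that for large μ the Poisson and the Gaussian coincide can be quantified numerically. A sketch (the two μ values and the n ranges are ad hoc choices) compares the Poisson probabilities with the Gaussian density of the same mean and variance:

```python
import math

def poisson(n, mu):
    # mu^n e^{-mu} / n!, computed in log space to avoid overflow.
    return math.exp(n * math.log(mu) - mu - math.lgamma(n + 1))

def gauss(x, mu):
    # Gaussian with mean mu and variance sigma^2 = mu.
    return math.exp(-(x - mu)**2 / (2 * mu)) / math.sqrt(2 * math.pi * mu)

def max_gap(mu, n_values):
    return max(abs(poisson(n, mu) - gauss(n, mu)) for n in n_values)

gap_small = max_gap(1.5, range(0, 10))       # noticeable disagreement
gap_large = max_gap(100.0, range(50, 151))   # near coincidence
```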
Problems
3.1. Derive an expression for ⟨n(n − 1)⟩ - called the first factorial moment. Find the variance of the random variable n.
3.2. Consider a system of two coins, each with P(H) = p = 0.6 and P(T) = q = 1 − p = 0.4.
(a) Write down the micro states of the experiment.
(b) Write down a possible ensemble of micro states describing the probabilities of the micro states.
3.3. Consider the moment generating function of the binomial random variable, given by B̃(z) = (q + zp)^N. Let ⟨n^k⟩ denote the k-th moment of n. By taking the derivatives with respect to z, calculate the first four moments of n. Let σ² = ⟨n²⟩ − ⟨n⟩² denote the variance of n. Calculate σ². Relative fluctuations of n are given by σ/⟨n⟩. How does this quantity vary with increase of N?
3.4. Consider a coin for which the probability of Heads is p and the probability of Tails is q = 1 − p. The experiment consists of tossing the coin until you get Heads for the first time. The experiment stops once you get Heads. Let n denote the number of tosses in an experiment.
1. What is the sample space or micro-state space underlying this experiment?
2. What is the discrete probability density function of n?
3. From the probability density function calculate the mean and variance of n.
4. Derive an expression for the moment generating function/partition function of the random variable n.
5. From the moment generating function calculate the mean and variance of the random variable n.
3.5. Consider a random walker starting from the origin of a one dimensional lattice. He tosses a coin; if Heads, he takes a step toward the right; if Tails, he steps to the left. Let p be the probability for Heads and q = 1 − p the probability for Tails. Let P(m, n) denote the probability that the random walker is at m after n steps. Derive an expression for P(m, n). Let nR be the number of right jumps and nL the number of left jumps. We have n = nR + nL and m = nR − nL. We have P(nR, nL; n) = [n!/(nR! nL!)] p^{nR} q^{nL}, etc.
3.6. Consider the problem with V = 10 m³, v = 10⁻³ m³ and N = 10⁵. The probability of finding n molecules in v is given by the Poisson distribution with μ = Np = 10. Plot the Poisson distribution.
3.7. Show that the mean and variance of a Poisson random variable are the same: M1 = Σ_{n=0}^{∞} n P(n; μ) = μ; M2 = Σ_{n=0}^{∞} n² P(n; μ) = ?; σ² = M2 − M1² = μ.
3.8. An exponential random variable is defined for x ≥ 0 and its probability density function is given by exp(−x). The n-th cumulant is (n − 1)!. In particular, its mean is unity and its variance is also unity. Its characteristic function is
Φ_X(k) = 1/(1 − ik).
Consider
Y = (1/N) Σ_{i=1}^{N} Xi,
where {Xi : i = 1, 2, ⋯, N} are independent and identically distributed exponential random variables. Show that the characteristic function of Y is given by
Φ_Y(k) = exp[−N ln(1 − ik/N)]
We have,
ln(1 − x) = −Σ_{n=1}^{∞} x^n/n.
Hence
Ψ_Y(k) = Σ_{n=1}^{∞} [(ik)^n/n!] (n − 1)!/N^{n−1}   (3.48)
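The claim that Y has mean unity and variance 1/N can be checked by simulation. A hypothetical sketch (sample sizes, N, and seed are arbitrary choices) using Python's random and statistics modules:

```python
import random
import statistics

def sample_Y(N, n_samples, seed=7):
    # Y = (1/N) sum of X_i, with X_i i.i.d. exponential (mean 1, variance 1).
    rng = random.Random(seed)
    return [sum(rng.expovariate(1.0) for _ in range(N)) / N
            for _ in range(n_samples)]

N = 50
ys = sample_Y(N, n_samples=20000)
mean_Y = statistics.fmean(ys)       # close to 1
var_Y = statistics.pvariance(ys)    # close to 1/N = 0.02
```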
4 Isolated system: Micro canonical ensemble
4.1 Preliminaries
We are going to study an isolated system of N particles confined to a volume V. The particles do not interact with each other. We will count the number of micro states, denoted by the symbol Ω̂, of the system. This will in general be a function of energy E, volume V, and the number of particles N. We shall do the counting for both classical and quantum particles. Before we address the full problem, we shall consider a simpler problem of counting the micro states taking into account only the spatial coordinates, neglecting completely the momentum coordinates. Despite this simplification, we shall discover that statistical mechanics helps you derive the ideal gas law.
I must tell you of a beautiful derivation of the ideal gas law by Daniel Bernoulli (1700-1782). It goes as follows. Bernoulli imagined air to be made of tiny billiard balls, all the time in motion, colliding with each other and with the walls of the container. When a billiard ball bounces off the wall, it transmits a certain momentum to the wall, and Bernoulli imagined it as pressure. It makes sense. First consider air contained in a cube of side one meter. There is a certain amount of pressure felt by the wall. Now imagine the cube length to be doubled without changing the speeds of the molecules. In modern language, this assumption is the same as keeping the temperature constant. The momentum transferred per collision remains the same. However, since each billiard ball molecule has to travel twice the distance between two collisions, the force on the wall should be smaller by a factor of two. Also, pressure is force per unit area. The area of the side of the cube is four times more now. Hence the pressure should be less by a further factor of four. Taking into account both these factors, we find the pressure should be eight times less. We also find the volume of the cube is now eight times more. Bernoulli concluded that the product of pressure and volume must be a constant when there is no change in the molecular speeds - a brilliant argument based on simple scaling ideas.
Fig. 4.1. Two ways of keeping a particle in a box divided into two equal parts.
Ω̂(V, N = 1, ΔV = V/2) = V/ΔV = 2   (4.1)
S = kB ln Ω̂ = kB ln(2)   (4.2)
Now consider two distinguishable particles in these two cells, each of volume ΔV = V/2; see the figure below.
Fig. 4.2. Four ways of keeping two distinguishable particles in a box divided into two equal halves.
We then have
Ω̂(V, N = 2, ΔV = V/2) = (V/ΔV)² = 4   (4.3)
S = kB ln Ω̂ = 2 kB ln(2)   (4.4)
For N particles we have,
Ω̂(V, N, ΔV = V/2) = (V/ΔV)^N = 2^N   (4.5)
S = kB ln Ω̂ = N kB ln(2)   (4.6)
Let us now divide the volume equally into V/ΔV parts and count the number of ways of organizing N (distinguishable) particles. We find
Ω̂(V, N) = (V/ΔV)^N   (4.7)
S = kB ln Ω̂ = N kB ln(V/ΔV) = N kB ln V − N kB ln ΔV   (4.8)
We will discover later that the above formula captures the volume dependence of entropy quite accurately.
From the above,
(∂S/∂V)_{E,N} = N kB / V   (4.10)
Identifying the thermodynamic relation
(∂S/∂V)_{E,N} = P/T   (4.11)
we get the ideal gas law, PV = N kB T. At constant energy we also have
dS = N kB dV / V   (4.12)
   = P dV / T   (4.13)
   = q / T   (4.14)
which shows that Boltzmann entropy and thermodynamic entropy are the same.
To see this, start from
dS = (∂S/∂U)_V dU + (∂S/∂V)_U dV = (1/T) dU + (P/T) dV,
so that
(∂S/∂V)_U = P/T.
Dividing Ω̂ by N!, the entropy becomes
S(V, N) = kB ln [Ω̂(V, N)/N!] = N kB ln(V/N) + N kB − N kB ln ΔV   (4.15)
which is extensive:
S(λV, λN) = λ S(V, N)   (4.16)
Time has come for us to count the micro states of an isolated system of N non-interacting point particles confined to a volume V, taking into consideration the positions and the momenta of all the particles.
Each particle requires six numbers for its specification: three positions and three momenta. The entire system can be specified by a string of 6N numbers. In a 6N dimensional phase space the system is specified by a point. The phase space point is all the time moving. We would be interested in determining the region of the phase space accessible to the system when it is in equilibrium.
This is called the Gibbs paradox. More precisely, Gibbs formulated the paradox in terms of the entropy of mixing of like and unlike gases. We shall see these in detail later when we consider closed systems described by canonical ensembles.
The remedy suggested by Boltzmann is only temporary. Non-extensivity of entropy points to a deeper malady in statistical mechanics based on the classical formalism. For the correct resolution of the non-extensivity paradox we have to wait for the arrival of Quantum Mechanics. We shall see these issues in detail later when we consider quantum statistics.
If we are able to count the phase space volume, then we can employ the first micro-macro connection proposed by Boltzmann and get an expression for entropy as a function of energy, volume, and the number of particles.
The system is isolated. It does not transact energy or matter with the surroundings. Hence its energy remains a constant. The potential energy is zero since the particles do not interact with each other. The kinetic energy is given by
E = Σ_{i=1}^{3N} pi²/(2m)   (4.17)
where ε > 0. Define
f(x; ε) = { 0 for x ≤ −ε/2 ; x/ε + 1/2 for −ε/2 ≤ x ≤ +ε/2 ; 1 for x ≥ +ε/2 }   (4.18)
Θ(x) = lim_{ε→0} f(x; ε).
Θ(x) is called the step function, Heaviside step function, unit step function or theta function. It is given by,
Θ(x) = { 0 for x < 0 ; 1 for 0 < x }   (4.19)
Oliver Heaviside (1850-1925); Paul Adrien Maurice Dirac (1902-1984).
g(x; ε) = df/dx = { 0 for x < −ε/2 or x > +ε/2 ; 1/ε for −ε/2 < x < +ε/2 }   (4.20)
δ(x) = lim_{ε→0} g(x; ε)   (4.21)
∫ g(x; ε) dx = 1 for all ε.   (4.22)
We find that the integral is the same for all values of ε. This gives us an important property of the Dirac-delta function:
∫_{−∞}^{+∞} dx δ(x) = 1   (4.23)
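This property is easy to verify numerically for the box function g(x; ε) before the limit is taken. A minimal sketch with a midpoint-rule integral (the grid size is an arbitrary choice):

```python
def g(x, eps):
    # Box approximation to the Dirac delta: height 1/eps on (-eps/2, eps/2).
    return 1.0 / eps if -eps / 2 < x < eps / 2 else 0.0

def integrate(f, a, b, n=200000):
    # Simple midpoint rule on [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The area under g(x; eps) is 1 for every eps, as the box narrows.
areas = [integrate(lambda x, e=e: g(x, e), -1.0, 1.0)
         for e in (0.5, 0.1, 0.01)]
```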
Let yi = xi/R for i = 1, 2. Then,
V2(R) = R² ∫dy1 ∫dy2 Θ(R² [1 − Σ_{i=1}^{2} yi²])   (4.25)
Since Θ(λx) = Θ(x) for λ > 0, therefore,
V2(R) = R² ∫dy1 ∫dy2 Θ(1 − Σ_{i=1}^{2} yi²)   (4.26)
      = R² V2(R = 1)   (4.27)
We have
V2(R = 1) R² = ∫dx1 ∫dx2 Θ(R² − Σ_{i=1}^{2} xi²)   (4.28)
Now differentiate both sides of the above equation with respect to the variable R. We have already seen that the derivative of a Theta function is the Dirac-delta function. Therefore
V2(R = 1) 2R = ∫dx1 ∫dx2 δ(R² − Σ_{i=1}^{2} xi²) 2R   (4.29)
Now multiply both sides of the above equation by exp(−R²) dR and integrate over the variable R from 0 to ∞. We get,
V2(R = 1) ∫_0^∞ exp(−R²) 2R dR = ∫dx1 ∫dx2 ∫_0^∞ dR exp(−R²) 2R δ(R² − Σ_{i=1}^{2} xi²)   (4.30)
On the left, substitute t = R²; on the right, the δ-function fixes R² = x1² + x2²:
V2(R = 1) ∫_0^∞ dt exp(−t) = ∫dx1 ∫dx2 exp[−(x1² + x2²)]   (4.31)
V2(R = 1) × 1 = [∫_{−∞}^{+∞} dx exp(−x²)]²   (4.32)
∫_{−∞}^{+∞} dx exp(−x²) = 2 ∫_0^∞ dx exp(−x²) = ∫_0^∞ dx x^{−1/2} exp(−x) = Γ(1/2)   (4.33)
V2(R = 1) = [Γ(1/2)]² = π   (4.34)
The same procedure works in $N$ dimensions. With $y_i = x_i/R$, we use

$$ \theta\!\left(R^2 \left[1 - \sum_{i=1}^{N} y_i^2\right]\right) = \theta\!\left(1 - \sum_{i=1}^{N} y_i^2\right) $$

We have,

$$ V_N(R) = R^N \int dy_1 \int dy_2 \cdots \int dy_N\; \theta\!\left(1 - \sum_{i=1}^{N} y_i^2\right) \qquad (4.36) $$
$$ = V_N(R=1)\, R^N \qquad (4.37) $$
$$ V_N(R=1)\, R^N = \int dx_1 \cdots \int dx_N\; \theta\!\left(R^2 - \sum_{i=1}^{N} x_i^2\right) \qquad (4.38) $$

Differentiate both sides of the above expression with respect to $R$ and get,

$$ N\, V_N(R=1)\, R^{N-1} = \int_{-\infty}^{+\infty} dx_1 \int_{-\infty}^{+\infty} dx_2 \cdots \int_{-\infty}^{+\infty} dx_N\; \delta\!\left(R^2 - \sum_{i=1}^{N} x_i^2\right) 2R \qquad (4.39) $$
Now, multiply both sides by $\exp(-R^2)\,dR$ and integrate over $R$ from $0$ to $\infty$. The left hand side:

$$ \text{LHS} = N\, V_N(R=1) \int_0^{\infty} dR\, \exp(-R^2)\, R^{N-1} \qquad (4.40) $$

With $t = R^2$ and $dt = 2R\, dR$, we get,

$$ \text{LHS} = \frac{N}{2}\, V_N(R=1) \int_0^{\infty} dt\; t^{(N/2)-1} \exp(-t) = \frac{N}{2}\, V_N(R=1)\, \Gamma\!\left(\frac{N}{2}\right) = V_N(R=1)\, \Gamma\!\left(\frac{N}{2}+1\right) \qquad (4.41) $$

The right hand side:

$$ \text{RHS} = \int dx_1 \int dx_2 \cdots \int dx_N\; \exp\!\left[-(x_1^2 + x_2^2 + \cdots + x_N^2)\right] \qquad (4.42) $$

$$ = \left[\int_{-\infty}^{+\infty} dx\, \exp(-x^2)\right]^N = \pi^{N/2} \qquad (4.43) $$
Thus we get

$$ V_N(R=1) = \frac{\pi^{N/2}}{\Gamma\!\left(\frac{N}{2}+1\right)} \qquad (4.44) $$

$$ V_N(R) = \frac{\pi^{N/2}}{\Gamma\!\left(\frac{N}{2}+1\right)}\, R^N \qquad (4.45) $$
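Equation (4.44) is easy to verify numerically. The sketch below (plain Python; the sample size and seed are arbitrary choices) compares the closed form against a Monte Carlo estimate obtained by sampling the enclosing cube $[-1,1]^N$:

```python
import math
import random

def ball_volume_exact(n):
    # V_N(R=1) = pi^(N/2) / Gamma(N/2 + 1), Eq. (4.44)
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def ball_volume_mc(n, samples=200000, seed=1):
    # fraction of points of the cube [-1,1]^n falling inside the unit ball,
    # times the cube volume 2^n
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(samples)
        if sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(n)) <= 1.0
    )
    return (hits / samples) * 2 ** n

for n in (2, 3, 5):
    print(n, ball_volume_exact(n), ball_volume_mc(n))
```

For $N = 2$ both numbers are close to $\pi$; for $N = 3$, close to $4\pi/3$.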
The energy of the ideal gas is

$$ E = \sum_{i=1}^{3N} \frac{p_i^2}{2m} \qquad (4.46) $$

so the accessible momenta lie within a $3N$-dimensional sphere of radius $\sqrt{2mE}$. The number of micro states with energy less than or equal to $E$ is related to the density of states by

$$ \widehat{\Omega}(E) = \int_0^{E} g(E')\, dE' \qquad (4.47) $$
$$ g(E, V, N) = \left(\frac{\partial \widehat{\Omega}(E, V, N)}{\partial E}\right)_{V,N} \qquad (4.48) $$

Let us take the partial derivative of $\widehat{\Omega}(E, V, N)$ with respect to $E$ and get,

$$ g(E, V, N) = \frac{V^N}{h^{3N}}\, \frac{(2\pi m)^{3N/2}}{\Gamma\!\left(\frac{3N}{2}+1\right)}\, \frac{3N}{2}\, E^{(3N/2)-1} \qquad (4.49) $$

Let us substitute $N = 1$ in the above and get the single particle density of states, $g(E, V)$, as,

$$ g(E, V) = \frac{V}{h^3}\, \frac{\pi}{4}\, (8m)^{3/2}\, E^{1/2} \qquad (4.50) $$
Consider the fraction of the volume of an $N$-dimensional sphere contained in a thin outer shell of thickness $\Delta R$:

$$ \frac{V_N(R) - V_N(R - \Delta R)}{V_N(R)} = 1 - \left(1 - \frac{\Delta R}{R}\right)^{N} \qquad (4.52) $$
$$ \to 1 \quad \text{for } N \to \infty \qquad (4.53) $$

Hence in the limit of $N \to \infty$ the number of micro states with energy less than or equal to $E$ is nearly the same as the number of micro states with energy between $E - \Delta E$ and $E$.
We shall follow Boltzmann's prescription and divide $\widehat{\Omega}(E, V, N)$, see Eq. (??), by $N!$:

$$ \widehat{\Omega}(E, V, N) = \frac{V^N}{h^{3N}}\, \frac{1}{N!}\, \frac{(2\pi m E)^{3N/2}}{\Gamma\!\left(\frac{3N}{2}+1\right)} \qquad (4.54) $$
$$ T = \frac{2E}{3N k_B} \qquad (4.57) $$
$$ E = \frac{3}{2}\, N k_B T \qquad (4.58) $$

The above is called the equipartition theorem: each quadratic term in the Hamiltonian carries an energy of $k_B T/2$.⁸
The pressure of an isolated system of ideal gas as a function of $E$, $V$, and $N$, is given by,

$$ \frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{E,N} = \frac{N k_B}{V} \qquad (4.59) $$

$$ P = \frac{N k_B T}{V} \qquad (4.60) $$

Substituting $T = 2E/(3N k_B)$,

$$ P = \frac{2E}{3V} \qquad (4.61) $$

⁸ For the ideal gas, the Hamiltonian is the sum of $3N$ quadratic terms, $\displaystyle\sum_{i=1}^{3N} \frac{p_i^2}{2m}$.
The chemical potential is given by,

$$ \frac{\mu}{T} = -\left(\frac{\partial S}{\partial N}\right)_{E,V} \qquad (4.62) $$

Therefore,

$$ \mu = -k_B T \ln\left(\frac{V}{N}\right) - \frac{3}{2}\, k_B T \ln\left(\frac{4\pi m E}{3N h^2}\right) \qquad (4.63) $$
Substituting in the above the expression for $T$ from Eq. (??), we get,

$$ \mu = -\frac{2E}{3N} \ln\left(\frac{V}{N}\right) - \frac{E}{N} \ln\left(\frac{4\pi m E}{3N h^2}\right) \qquad (4.64) $$

In the above expression for the micro canonical chemical potential, let us substitute $E = 3N k_B T/2$ and express the chemical potential in terms of $T$, $V$ and $N$. We get,

$$ \mu = -k_B T \ln\left(\frac{V}{N}\right) - \frac{3}{2}\, k_B T \ln\left(\frac{2\pi m k_B T}{h^2}\right) \qquad (4.65) $$
$$ = -k_B T \ln\left(\frac{V}{N}\right) + 3\, k_B T \ln(\Lambda) \qquad (4.66) $$

where $\Lambda = h/\sqrt{2\pi m k_B T}$ is the thermal (de Broglie) wavelength.
Let me end this section by saying that the micro canonical ensemble formalism leads to the ideal gas law: see the expression for $P$ given in Eq. (??). We have,

$$ P V = N k_B T \qquad (4.69) $$
For a single particle ($N = 1$),

$$ \widehat{\Omega}(E, V) = \frac{V}{h^3}\, \frac{(2\pi m E)^{3/2}}{\Gamma\!\left(\frac{3}{2}+1\right)} \qquad (4.70) $$
$$ H = -\frac{\hbar^2}{2m} \nabla^2 + U(q) $$

The first operator on the right is the kinetic energy and the second the potential energy.

$E$ in the Schrödinger equation is a scalar ... a real number ... called energy. It is an eigenvalue of the Hamiltonian operator: we call it the energy eigenvalue.

The Schrödinger equation is a partial differential equation. Once we impose boundary conditions on the solution, only certain discrete energies are permitted. We call these energy eigenvalues.

Energy Eigenvalues by Solving the Schrödinger Equation. Once we specify boundary conditions, a knowledge of the Hamiltonian is sufficient to determine its eigenvalues and the corresponding eigenfunctions. There will usually be several eigenvalues and corresponding eigenfunctions for a given system.
For a particle confined to a one-dimensional box of length $L$, the permitted wavelengths are

$$ \lambda_n = \frac{2L}{n} \ : \ n = 1, 2, \cdots \qquad (4.71) $$

and the permitted momenta are

$$ p_n = \frac{h}{\lambda_n} = \frac{h}{2L}\, n \ : \ n = 1, 2, \cdots \qquad (4.72) $$

This yields

$$ \epsilon_n = \frac{p_n^2}{2m} = \frac{h^2}{8mL^2}\, n^2 \ : \ n = 1, 2, \cdots $$

Consider a particle in an $L \times L \times L$ cube - a three dimensional infinite well. The energy of the system is given by

$$ \epsilon_{n_x, n_y, n_z} = \frac{h^2}{8mL^2}\, (n_x^2 + n_y^2 + n_z^2) $$

where $n_x = 1, 2, \cdots$, $n_y = 1, 2, \cdots$ and $n_z = 1, 2, \cdots$.

The ground state is $(n_x, n_y, n_z) = (1, 1, 1)$; it is non degenerate; the energy eigenvalue is

$$ \epsilon_{1,1,1} = \frac{3h^2}{8mL^2} \qquad (4.73) $$
(In one dimension the kinetic energy operator is $-\dfrac{\hbar^2}{2m}\dfrac{\partial^2}{\partial x^2}$, with eigenvalues $\dfrac{h^2}{8mL^2}\, n^2$; $n = 1, 2, \cdots$.)

The first excited state of the three dimensional well, e.g. $(2, 1, 1)$, has the energy eigenvalue

$$ \frac{3h^2}{4mL^2}. \qquad (4.74) $$
We start with,

$$ \epsilon = \frac{h^2}{8mL^2}\, (n_x^2 + n_y^2 + n_z^2) $$

Define $R^2 = 8mL^2\epsilon/h^2$. The number of micro states with energy less than or equal to $\epsilon$ is the number of points $(n_x, n_y, n_z)$ with positive integer coordinates inside the sphere of radius $R$; for large $R$ this is the volume of the positive octant, so we take one-eighth of the sphere volume. Let us denote this number by $\widehat{\Omega}(\epsilon)$. We have,

$$ \widehat{\Omega}(\epsilon) = \frac{1}{8}\, \frac{4\pi}{3}\, R^3 = \frac{\pi}{6} \left(\frac{8mL^2\epsilon}{h^2}\right)^{3/2} \qquad (4.75) $$

$$ \widehat{\Omega}(\epsilon, V) = \frac{V}{h^3}\, \frac{\pi}{6}\, (8m\epsilon)^{3/2} = \frac{V}{h^3}\, \frac{4\pi}{3}\, (2m\epsilon)^{3/2} $$

$$ = \frac{V}{h^3}\, \frac{(2\pi m \epsilon)^{3/2}}{(3/2)(1/2)\,\Gamma(1/2)} = \frac{V}{h^3}\, \frac{(2\pi m \epsilon)^{3/2}}{\Gamma\!\left(\frac{3}{2}+1\right)} \qquad (4.76) $$
The above is exactly the one we obtained by classical counting, see Eq. (??). Notice that in quantum counting of micro states, the term $h^3$ comes naturally, while in classical counting it is put in by hand¹¹.
The density of (energy) states is obtained by differentiating $\widehat{\Omega}(\epsilon, V)$ with respect to the variable $\epsilon$. We get,

$$ g(\epsilon, V) = \frac{V}{h^3}\, \frac{\pi}{4}\, (8m)^{3/2}\, \epsilon^{1/2} \qquad (4.77) $$
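The octant-volume approximation behind Eq. (4.75) can be checked by brute force: count the lattice points $(n_x, n_y, n_z)$ with $n_i \ge 1$ inside a sphere of radius $R$ and compare with $\pi R^3/6$. A small sketch (the radii below are arbitrary choices; the ratio approaches 1 as $R$ grows):

```python
import math

def count_states(R2):
    """Number of triples (nx, ny, nz), ni >= 1, with nx^2 + ny^2 + nz^2 <= R2."""
    R = math.isqrt(R2)
    return sum(
        1
        for nx in range(1, R + 1)
        for ny in range(1, R + 1)
        for nz in range(1, R + 1)
        if nx * nx + ny * ny + nz * nz <= R2
    )

for R2 in (100, 2500, 10000):
    exact = count_states(R2)
    approx = (math.pi / 6) * R2 ** 1.5   # one octant of the sphere of radius R
    print(R2, exact, round(approx), exact / approx)
```

The exact count lies slightly below the octant volume (a surface correction of relative size of order $1/R$), and the agreement improves with increasing $R$.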
Consider two distinguishable particles, $A$ and $B$, each of which can occupy the energy levels $0, \epsilon, 2\epsilon, \cdots$, with total energy $E = 2\epsilon$. The micro states are:

level $2\epsilon$ : $-$ ; $B$ ; $A$
level $\epsilon$ : $A, B$ ; $-$ ; $-$
level $0$ : $-$ ; $A$ ; $B$

We find that

$$ \widehat{\Omega}(E = 2\epsilon, N = 2) = 3. $$

Now add a particle, labelled $C$, such that the energy of the system does not change. In other words, the three-particle system has energy $2\epsilon$, which is the same as that of the two particle system. Let $\widehat{\Omega}(E = 2\epsilon, N = 3)$ denote the number of micro states of the three-particle system with a total energy of $E = 2\epsilon$. The table below gives the micro states.
particle) as filled with a non-overlapping, exhaustive set of such tiny cubes. We have to do all this because of Boltzmann! He told us that entropy is the logarithm of the number of micro states. We need to count the number of micro states.
level $2\epsilon$ : $C$ ; $A$ ; $B$ ; $-$ ; $-$ ; $-$
level $\epsilon$ : $-$ ; $-$ ; $-$ ; $B, C$ ; $C, A$ ; $A, B$
level $0$ : $A, B$ ; $B, C$ ; $C, A$ ; $A$ ; $B$ ; $C$

We find

$$ \widehat{\Omega}(E = 2\epsilon, N = 3) = 6. $$

$$ S(E = 2\epsilon, N = 3) = k_B \ln \widehat{\Omega}(E = 2\epsilon, N = 3). $$
The chemical potential is

$$ \mu = \left(\frac{\partial U}{\partial N}\right)_{S,V}. $$

In other words, $\mu$ is the change in energy of the system when one particle is added in such a way that the entropy and volume of the system remain unchanged. We find $S(E = 2\epsilon, N = 3) > S(E = 2\epsilon, N = 2)$; to keep the entropy unchanged we must remove $\epsilon$ amount of energy from the three particle system. In other words we demand $S(E = \epsilon, N = 3) = S(E = 2\epsilon, N = 2)$. Thus $\mu = -\epsilon$. This problem shows that the chemical potential of a system of non-interacting particles is negative.
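The counts above can be reproduced by brute-force enumeration. A short sketch (energies measured in units of $\epsilon$; the level set $\{0, 1, 2\}$ suffices because the total energy never exceeds $2\epsilon$):

```python
from itertools import product

def count_microstates(n_particles, total, levels=(0, 1, 2)):
    """Count assignments of distinguishable particles to energy levels
    (in units of epsilon) whose energies add up to `total`."""
    return sum(
        1 for state in product(levels, repeat=n_particles)
        if sum(state) == total
    )

print(count_microstates(2, 2))   # Omega(E = 2 eps, N = 2) -> 3
print(count_microstates(3, 2))   # Omega(E = 2 eps, N = 3) -> 6
print(count_microstates(3, 1))   # Omega(E = eps,  N = 3) -> 3
```

The last line confirms that removing $\epsilon$ from the three-particle system restores the original count of 3 micro states.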
The entropy is given by,

$$ S(E, V, N) = k_B \ln \widehat{\Omega}(E, V, N) $$

$$ S(E, V, N) = N k_B \left[ \ln\left(\frac{V}{N}\right) + \frac{3}{2} \ln\left(\frac{4\pi m}{3h^2}\, \frac{E}{N}\right) + \frac{5}{2} \right] \qquad (4.78) $$

and the chemical potential by

$$ \frac{\mu}{T} = -\left(\frac{\partial S}{\partial N}\right)_{E,V} $$
Problems

4.1. Consider an isolated system of $N$ non-interacting point particles occupying two states of energies $-\epsilon$ and $+\epsilon$. The energy of the system is $E$. Define $x = \dfrac{E}{N\epsilon}$. Show that the entropy of the system is given by

$$ S(x) = -\frac{N k_B}{2} \left[ (1+x) \ln\left(\frac{1+x}{2}\right) + (1-x) \ln\left(\frac{1-x}{2}\right) \right] $$

Also show that

$$ \frac{1}{T} = \frac{k_B}{2\epsilon} \ln\left(\frac{1-x}{1+x}\right) $$

4.2. Sketch the function $f(x; \epsilon)$ for $\epsilon = 2, 1, 1/2, 1/4$; sketch also the function in the limit of $\epsilon \to 0$.

4.3. Sketch $g(x; \epsilon)$ for $\epsilon = 2, 1, 1/2, 1/4$. How does it look in the limit $\epsilon \to 0$?
$$ \int_{-\infty}^{+\infty} dx\, \phi(x)\, \delta(x - x_0) = \phi(x_0) $$
4.11. Carry out classical counting of the number of micro states of an isolated system of $N$ particles confined to a length $L$ with energy $E$. Since the particles are confined to a line we require $N$ position and $N$ momentum coordinates to specify a micro state. Calculate $\widehat{\Omega}(E, L, N)$ - the number of micro states with energy less than or equal to $E$. Substitute $N = 1$. Differentiate the resulting expression with respect to $E$ and verify whether you get the same answer as that of Problem No. 4.9.
4.12. Show that we must remove $\epsilon$ of energy from the system to restore the entropy to its original value. In other words show that $\widehat{\Omega}(E = \epsilon, N = 3) = \widehat{\Omega}(E = 2\epsilon, N = 2) = 3$.
4.13. Derive an expression for $\mu$ as a function of $E$, $V$ and $N$. Substitute in the expression¹² $E = \frac{3}{2} N k_B T$. Now add a particle to the system such that the energy changes from $E$ to $E + \mu$ and the entropy changes from $S(E, V, N)$ to $S(E + \mu, V, N + 1)$. To derive an expression for $S(E + \mu, V, N + 1)$ replace in Eq. (??): $E$ by $E + \mu$ and $N$ by $N + 1$. Show that for this value of $\mu$ we get $S(E + \mu, V, N + 1) = S(E, V, N)$. In the derivation, assume $N$ to be large and the magnitude of $\mu$ to be small compared to the total energy, i.e. $|\mu| \ll E$.
12 Equipartition theorem: every quadratic term in the Hamiltonian carries an energy of $k_B T/2$.
13 Read G. Cook and R. H. Dickerson, Understanding the chemical potential, American Journal of Physics 63(8), 737 (1995).
5
Closed system : Canonical ensemble

A heat bath is one which transacts energy with the system, but its temperature does not change. Example: keep a cup of hot coffee at a temperature of, say, 60° C in a room; let the room be at a temperature of 30° C. The coffee cools to 30° C. The temperature of the room does not increase. Maybe I should say the temperature of the room increases by an extremely small amount, which for all practical purposes can be considered as zero. We idealize and say the room is a heat bath, and its temperature does not change when it transacts energy with the cup of coffee.

H. B. Callen, Thermodynamics and an Introduction to Thermostatistics, Wiley Student Edition (2005)
Table 5.1. Micro states of three dice with the constraint that they add to six
$P(2) = 0.3$; $P(3) = 0.2$; $P(4) = 0.1$; $P(5) = P(6) = 0$. The important point is that the micro states of the system are not equi-probable. In other words, if the system interacts thermally with the surroundings, then the probability differs from one micro state to another.

What is the probability of a micro state in a closed system? Let us calculate the probability in the next section by a fairly straightforward procedure involving a Taylor expansion of $S(E)$. I learnt of this first from the book of Balescu³.
For the isolated system, all micro states are equally probable. Thus we can say that the probability of finding the closed system in its micro state $C$ is given by

$$ P(C) = \frac{\widehat{\Omega}(E - E(C))}{\widehat{\Omega}_t} $$

where we have denoted the total number of micro states of the isolated system as $\widehat{\Omega}_t$.

We have $S(E - E(C)) = k_B \ln \widehat{\Omega}(E - E(C))$. Therefore

$$ \widehat{\Omega}(E - E(C)) = \exp\left[\frac{1}{k_B}\, S(E - E(C))\right] $$

Also since $E(C) \ll E$, we can Taylor expand $S(E - E(C))$ around $E$ retaining only the first two terms. We get,

$$ S(E - E(C)) = S(E) - E(C) \left(\frac{\partial S}{\partial E}\right)_{E} = S(E) - \frac{1}{T}\, E(C) $$

Therefore,

$$ P(C) \propto \exp[-\beta E(C)] $$

Normalizing,

$$ P(C) = \frac{1}{Q} \exp[-\beta E(C)], \qquad Q(T, V, N) = \sum_{C} \exp[-\beta E(C)] $$
the surroundings. The system and the surroundings interact at the boundaries, and hence there shall exist micro states of the isolated system which cannot be neatly viewed as a system micro state juxtaposed with a surroundings micro state. Such micro states are so few in number that we shall ignore them.
The sum over micro states can be organized as a sum over energies:

$$ Q = \sum_{E} \widehat{\Omega}(E) \exp(-\beta E) $$

where $\widehat{\Omega}(E)$ is the degeneracy of the eigenvalue $E$. In other words $\widehat{\Omega}(E)$ is the number of micro states of the equilibrium closed system with energy $E$.

If energy is a continuous variable then we consider $g(E)$, the density of (energy) states. Then $g(E)\,dE$ denotes the number of micro states with energy between $E$ and $E + dE$. The canonical partition function can be expressed as an integral

$$ Q(T, V, N) = \int_0^{\infty} dE\, g(E) \exp[-\beta E] \qquad (5.1) $$
The above is just a transform of $g(E)$ to $Q(T)$, where we have traded the energy $E$ in favour of $T$ - the temperature⁵. Physically the transform means that we are going from a micro canonical description with independent variables $E$ ($V$ and $N$) to a canonical description with independent variables $T$ ($V$ and $N$). In other words we are going from an isolated system with energy as an independent variable to a closed system with temperature as an independent variable.
Let us imagine that the isolated system represented by a big cube is divided
into a set of small cubes of equal volumes by means of imaginary walls. Each
cube represents a macroscopic part of the isolated system.
Each small cube is, in its own right, a macroscopic system with a volume V. Since the walls of a small cube permit molecules and energy to move across, the number of molecules in a cube is not fixed. It shall fluctuate around
some mean value; the fluctuations, however, are extremely small. The above
observations hold good for energy also. Let A denote the number of cubes
contained in the big cube.
The isolated system - the big cube - has a certain amount of energy, say E, a certain number of molecules N and a certain volume V, and these quantities are constants.
You can immediately see that what we have is a grand canonical ensemble
of open systems - each cube represents an open system. Each cube is a member of a grand canonical ensemble. All the members are identical as far as
their macroscopic properties are concerned. This is to say the volume V , the
temperature T and chemical potential are all the same for all the members.
Now, let us imagine that the walls are made impermeable to the movement of molecules across them. A cube cannot exchange matter with its neighbouring
cubes. Let us also assume that each cube contains exactly N molecules. Energy in a cube is however not fixed. Energy can flow from one cube to its
neighbouring cubes. This constitutes a canonical ensemble6 .
Aim :
To find the probability for the closed system to be in its micro state i.
First we list down all the micro states of the equilibrium closed system. Let
us denote the micro states as {1, 2, }. Note that the macroscopic properties
T , V , and N are the same for all the micro states. In fact the system switches
from one micro state to another all the time. Let Ei denote the energy of the
system when it is in micro state i. The energy can vary from one micro state
to another. However the fluctuations of energy from its mean value are very
small for an equilibrium macroscopic system.
6 To each cube, we can attach an index i. The index i denotes the micro state of the closed system with fixed T, V and N. An ordered set of A indices uniquely specifies a micro state of the isolated system.
Let us take an example. Let the micro states of the closed system be
denoted by the indices { 1,2,3}. There are only three micro states. Let us
represent the isolated system by a big square and construct nine small squares,
each of which represents a member of the ensemble. Each square is attached
with an index which can be 1, 2 or 3. Thus we have a micro state of the
isolated system represented by

3 2 2
1 3 3
2 3 1

In the above micro state, there are two squares with index 1, three with index 2 and four with index 3. Let $\{a_1 = 2, a_2 = 3, a_3 = 4\}$ be the occupation number representation of the micro state. There are several micro states having the same occupation number representation. I have given below a few of them.
1 3 3   2 1 2   3 2 3   2 3 3   3 1 2
3 2 1   1 2 1   2 3 2   2 3 2   1 1 2
2 2 3   3 3 3   1 2 3   1 2 3   2 3 3

Table 5.3. A few micro states with the same occupation number representation of (2, 3, 4). There are 1260 micro states with the same occupation number representation.

Notice that all the micro states given above have the same occupation number string $\{2, 3, 4\}$. How many micro states are there with this occupation number string? We have

$$ \widehat{\Omega}(2, 3, 4) = \frac{9!}{2!\, 3!\, 4!} = 1260 $$

I am not going to list all the 1260 micro states belonging to the occupation number string $\{2, 3, 4\}$.
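The multinomial count can be verified by enumerating all $3^9 = 19683$ index strings directly:

```python
from itertools import product
from math import factorial

A = 9
target = (2, 3, 4)   # occupation numbers a1, a2, a3

# brute force: count length-9 strings over {1, 2, 3} with the given occupation
brute = sum(
    1 for s in product((1, 2, 3), repeat=A)
    if (s.count(1), s.count(2), s.count(3)) == target
)

# closed form: A! / (a1! a2! a3!)
multinomial = factorial(A) // (factorial(2) * factorial(3) * factorial(4))

print(brute, multinomial)   # 1260 1260
```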
Let me generalize and say that a string (of occupation numbers) is denoted by the symbol $\tilde{a} = \{a_1, a_2, \cdots\}$, where $a_1 + a_2 + \cdots = A$. We also have an additional constraint, namely $a_1 E_1 + a_2 E_2 + \cdots = E$.

Let $\widehat{\Omega}(\tilde{a}) = \widehat{\Omega}(a_1, a_2, \cdots)$ denote the number of micro states of the isolated system belonging to the string $\tilde{a}$. For a given string, we can define the probability for the closed system to be in its micro state indexed by $i$ as

$$ p_i(\tilde{a}) = \frac{a_i(\tilde{a})}{A} \qquad (5.2) $$

The constraints read

$$ \sum_{i} a_i(\tilde{a}) = A \quad \forall\ \text{strings } \tilde{a} \qquad (5.3) $$
$$ \sum_{i} a_i(\tilde{a})\, E_i = E \quad \forall\ \text{strings } \tilde{a} \qquad (5.4) $$
Note that the value of $p_i$ varies from one string to another. It is reasonable to obtain the average value of $p_i$ over all possible strings $\tilde{a}$. We have

$$ P_i = \sum_{\tilde{a}} \frac{a_i(\tilde{a})}{A}\, \mathcal{P}(\tilde{a}) \qquad (5.5) $$

where $\mathcal{P}(\tilde{a})$ is the number of micro states of the isolated system in the string $\tilde{a}$ divided by the total number of micro states of the isolated system: all micro states of an isolated system are equally probable. We have,

$$ \mathcal{P}(\tilde{a}) = \frac{\widehat{\Omega}(\tilde{a})}{\sum_{\tilde{a}} \widehat{\Omega}(\tilde{a})} \qquad (5.6) $$

where,

$$ \widehat{\Omega}(\tilde{a}) = \frac{A!}{a_1!\, a_2! \cdots} \qquad (5.7) $$

Therefore,

$$ P_i = \frac{1}{A}\, \frac{\sum_{\tilde{a}} a_i(\tilde{a})\, \widehat{\Omega}(\tilde{a})}{\sum_{\tilde{a}} \widehat{\Omega}(\tilde{a})} \qquad (5.8) $$

7 By taking $A \to \infty$ we can always ensure $a_i(\tilde{a}) \to \infty\ \forall\ i$. In this limit, the size of the ensemble is arbitrarily large. It should be large enough so that even a micro state of smallest probability is present in the ensemble.
In the limit $A \to \infty$ the sums are dominated by the largest term, corresponding to the string $\tilde{a}^{\star}$ that maximizes $\widehat{\Omega}(\tilde{a})$. Then,

$$ P_i = \frac{a_i(\tilde{a}^{\star})\, \widehat{\Omega}(\tilde{a}^{\star})}{A\, \widehat{\Omega}(\tilde{a}^{\star})} \qquad (5.9) $$
$$ = \frac{a_i(\tilde{a}^{\star})}{A} \qquad (5.10) $$
$$ = \frac{a_i^{\star}}{A} \qquad (5.11) $$

The maximization is subject to the constraints

$$ \sum_{j} a_j(\tilde{a}) = A \qquad (5.12) $$
$$ \sum_{j} a_j(\tilde{a})\, E_j = E \qquad (5.13) $$
$$ dh = \frac{\partial h}{\partial x}\, dx + \frac{\partial h}{\partial y}\, dy = 0 \qquad (5.14) $$

Since $dx$ and $dy$ are independent, we require

$$ \frac{\partial h}{\partial x} = 0 \qquad (5.15) $$
$$ \frac{\partial h}{\partial y} = 0 \qquad (5.16) $$

We have two equations and two unknowns. In principle we can solve the above two equations and obtain $(x^{\star}, y^{\star})$ at which $h$ is maximum.
Now imagine there is a road on the mountain which does not necessarily pass through the peak of the mountain. If you are travelling on the road, then what is the highest point you will pass through? In the equation

$$ dh = \frac{\partial h}{\partial x}\, dx + \frac{\partial h}{\partial y}\, dy = 0 \qquad (5.17) $$

the infinitesimals $dx$ and $dy$ are not independent. You can choose only one of them independently. The other is determined by the constraint, which says that you have to be on the road.

Let the projection of the mountain-road on the plane be described by the curve $g(x, y) = 0$. This gives us a constraint

$$ \frac{\partial g}{\partial x}\, dx + \frac{\partial g}{\partial y}\, dy = 0 \qquad (5.18) $$

from which

$$ dy = -\frac{\partial g/\partial x}{\partial g/\partial y}\, dx \qquad (5.19) $$
We then have,

$$ dh = \frac{\partial h}{\partial x}\, dx + \frac{\partial h}{\partial y}\, dy = 0 \qquad (5.20) $$
$$ = \frac{\partial h}{\partial x}\, dx - \frac{\partial h}{\partial y}\, \frac{\partial g/\partial x}{\partial g/\partial y}\, dx \qquad (5.21) $$
$$ = \left[\frac{\partial h}{\partial x} - \lambda\, \frac{\partial g}{\partial x}\right] dx = 0 \qquad (5.22) $$

Since $dx$ is arbitrary,

$$ \frac{\partial h}{\partial x} - \lambda\, \frac{\partial g}{\partial x} = 0 \qquad (5.23) $$
where

$$ \lambda = \frac{\partial h/\partial y}{\partial g/\partial y} \qquad (5.24) $$

Thus we have the pair of equations

$$ \frac{\partial h}{\partial x} - \lambda\, \frac{\partial g}{\partial x} = 0 \qquad (5.25) $$
$$ \frac{\partial h}{\partial y} - \lambda\, \frac{\partial g}{\partial y} = 0 \qquad (5.26) $$

We can solve these and get $x^{\star} \equiv x^{\star}(\lambda)$ and $y^{\star} = y^{\star}(\lambda)$. The values of $x$ and $y$ at which $h(x, y)$ is maximum under the constraint $g(x, y) = 0$ can thus be found in terms of the unknown Lagrange multiplier $\lambda$.

Of course we can determine the value of $\lambda$ by substituting the solution $(x^{\star}(\lambda), y^{\star}(\lambda))$ in the constraint equation: $g(x^{\star}(\lambda), y^{\star}(\lambda)) = 0$.
The generalization to many variables is straightforward. For an extremum of $h(x_1, x_2, \cdots, x_N)$ we need

$$ \sum_{i=1}^{N} \frac{\partial h}{\partial x_i}\, dx_i = 0 \qquad (5.27) $$

In the set $\{dx_1, dx_2, \cdots, dx_N\}$, not all are independent. They are related by the constraint

$$ \sum_{i=1}^{N} \frac{\partial g}{\partial x_i}\, dx_i = 0 \qquad (5.28) $$

Solving for one of the differentials, say $dx_{\mu}$,

$$ dx_{\mu} = -\frac{1}{\partial g/\partial x_{\mu}} \sum_{i=1, i \neq \mu}^{N} \frac{\partial g}{\partial x_i}\, dx_i \qquad (5.29) $$

Substituting this in Eq. (5.27) we get

$$ \sum_{i=1;\, i \neq \mu}^{N} \left[\frac{\partial h}{\partial x_i} - \lambda\, \frac{\partial g}{\partial x_i}\right] dx_i = 0 \qquad (5.30) $$
where

$$ \lambda = \frac{\partial h/\partial x_{\mu}}{\partial g/\partial x_{\mu}} \qquad (5.31) $$

The remaining $dx_i$ ($i \neq \mu$) are independent, so

$$ \frac{\partial h}{\partial x_i} - \lambda\, \frac{\partial g}{\partial x_i} = 0 \quad \forall\ i \neq \mu \qquad (5.32) $$

and, by the definition of $\lambda$,

$$ \frac{\partial h}{\partial x_{\mu}} - \lambda\, \frac{\partial g}{\partial x_{\mu}} = 0 \qquad (5.33) $$

Thus,

$$ \frac{\partial h}{\partial x_i} - \lambda\, \frac{\partial g}{\partial x_i} = 0 \quad \forall\ i = 1, N \qquad (5.34) $$

There are $N$ equations and $N$ unknowns. In principle we can solve the equations and get

$$ x_i^{\star} \equiv x_i^{\star}(\lambda) \quad \forall\ i = 1, N, $$

where the function $h$ is maximum under the constraint $g(x_1, x_2, \cdots, x_N) = 0$. The value of the undetermined multiplier $\lambda$ can be obtained by substituting the solution in the constraint equation.
If we have more than one constraint we introduce a separate Lagrange multiplier for each constraint. Let there be $m \le N$ constraints, given by

$$ g_i(x_1, x_2, \cdots, x_N) = 0 \quad \forall\ i = 1, m. $$

We introduce $m$ Lagrange multipliers, $\lambda_i : i = 1, m$, and write

$$ \frac{\partial h}{\partial x_i} - \lambda_1 \frac{\partial g_1}{\partial x_i} - \lambda_2 \frac{\partial g_2}{\partial x_i} - \cdots - \lambda_m \frac{\partial g_m}{\partial x_i} = 0 \quad \forall\ i = 1, N $$

where $m \le N$.
Let us return to our problem: maximize

$$ \widehat{\Omega}(\tilde{a}) = \frac{A!}{a_1!\, a_2! \cdots} $$

subject to the constraints

$$ \sum_{j} a_j(\tilde{a}) = A, \qquad \sum_{j} a_j(\tilde{a})\, E_j = E $$

For convenience we extremize $\ln \widehat{\Omega}(\tilde{a})$. Using Stirling's approximation,

$$ \ln \widehat{\Omega}(a_1, a_2, \cdots) = \ln A! - \sum_{j} \left(a_j \ln a_j - a_j\right) $$

Introducing Lagrange multipliers $\alpha$ and $\beta$ for the two constraints, we demand

$$ \frac{\partial}{\partial a_i} \left[ \ln \widehat{\Omega} - \alpha \sum_{j} a_j - \beta \sum_{j} a_j E_j \right] = 0 $$
Solving, we get $a_j^{\star} \propto \exp(-\beta E_j)$. Thus we get the probability that a closed system shall be found in its micro state $j$ in terms of the constant $\alpha$, which can be expressed as a function of $\beta$, the Lagrange multiplier for the constraint on the total energy of the isolated system.

The task now is to evaluate the constants $\alpha$ and $\beta$. The constant $\alpha$ can be evaluated by imposing the normalization condition: $\sum_j P_j = 1$. The closed system has to be in one of its micro states with unit probability. Thus we have,

$$ P_j = \frac{1}{Q} \exp(-\beta E_j) $$

$$ Q(\beta, V, N) = \sum_{j} \exp(-\beta E_j) = \sum_{i} \exp[-\beta E_i(V, N)] $$

where $\beta = 1/(k_B T)$, and the sum runs over all the micro states of a closed system at temperature $T$, volume $V$ and number of particles $N$. Let $\widehat{\Omega}(E, V, N)$ denote the density of (energy) states. In other words $\widehat{\Omega}(E, V, N)\, dE$ is the number of micro states having energy between $E$ and $E + dE$. The canonical partition function can be written as an integral over energy,
$$ Q(\beta, V, N) = \int_0^{\infty} dE\, \widehat{\Omega}(E, V, N) \exp[-\beta E(V, N)] $$

The integrand is sharply peaked at the mean energy $U$; replacing the integral by its maximum term, we get

$$ \ln Q = \ln \widehat{\Omega}(U) - \beta U $$
$$ -k_B T \ln Q = U - T\, k_B \ln \widehat{\Omega} = U - TS $$

We identify the right hand side of the above as the (Helmholtz) free energy:

$$ F(T, V, N) = U(T, V, N) - T\, S(T, V, N). $$

Thus we get a relation between (the microscopic description enshrined in) the canonical partition function (of statistical mechanics) and (the macroscopic description given in terms of) the (Helmholtz) free energy (of thermodynamics):

$$ F(T, V, N) = -k_B T \ln Q(T, V, N) $$
Statistical mechanics aims to connect the micro world (of, say, atoms and molecules) to the macro world (of solids and liquids). In other words it helps you calculate the macroscopic properties of a system, say a solid, in terms of the properties of its microscopic constituents (atoms and molecules) and their interactions.

Boltzmann started the game of statistical mechanics by first proposing a micro-macro connection for an isolated system, in the famous formula engraved on his tomb:

$$ S = k_B \ln \widehat{\Omega} $$

You will come across several micro-macro connections during this course on statistical mechanics. The formula

$$ F(T, V, N) = -k_B T \ln Q(T, V, N), $$

provides a micro-macro connection for a closed system.
Consider again

$$ \widehat{\Omega} = \frac{A!}{a_1!\, a_2! \cdots} $$

The aim is to find $\{a_i : i = 1, 2, \cdots\}$ that maximizes $\widehat{\Omega}$. For convenience we maximize $\ln \widehat{\Omega}$. Let us consider the limit $a_i \to \infty\ \forall\ i$. Also consider the variables $p_i = a_i/A$. Then
$$ \ln \widehat{\Omega} = A \ln A - A - \sum_{i} \left(a_i \ln a_i - a_i\right) $$
$$ = \sum_{i} a_i \ln A - \sum_{i} a_i \ln a_i $$
$$ = -A \sum_{i} \frac{a_i}{A} \ln\left(\frac{a_i}{A}\right) = -A \sum_{i} p_i \ln p_i $$

$$ \frac{\ln \widehat{\Omega}}{A} = -\sum_{i} p_i \ln p_i $$

The above is the entropy of one of the $A$ closed systems constituting the isolated assembly. Thus, $-\sum_i p_i \ln p_i$ provides a natural formula for the entropy of a system whose micro states are not equi-probable.
Physicists would prefer to measure entropy in units of Joules per Kelvin. For that is what they have learnt from Clausius, who defined

$$ dS = \frac{q}{T}, $$
We start with

$$ \frac{S}{k_B} = -\beta (F - U) $$

Using

$$ p_i = \frac{1}{Q} \exp(-\beta E_i), \qquad F = -k_B T \ln Q, \qquad U = \langle E \rangle = \frac{1}{Q} \sum_i E_i \exp(-\beta E_i), $$

we write,

$$ \frac{S}{k_B} = \beta \left[ k_B T \ln Q + \frac{1}{Q} \sum_i E_i \exp(-\beta E_i) \right] $$
$$ = \ln Q\, \frac{1}{Q} \sum_i \exp(-\beta E_i) + \frac{\beta}{Q} \sum_i E_i \exp(-\beta E_i) $$
$$ = -\sum_i \frac{1}{Q} \exp(-\beta E_i)\, \ln\left[\frac{\exp(-\beta E_i)}{Q}\right] $$
$$ \frac{S}{k_B} = -\sum_i p_i \ln p_i \qquad (5.35) $$

where $p_i$ is the probability of the micro state $i$ and $E_i$ is the energy of the system when in micro state $i$. For a closed system

$$ p_i = \frac{1}{Q} \exp(-\beta E_i) \qquad (5.36) $$

We have,

$$ \langle E \rangle = \frac{\sum_i E_i \exp(-\beta E_i)}{\sum_i \exp(-\beta E_i)} \qquad (5.38) $$
$$ = \frac{1}{Q} \sum_i E_i \exp(-\beta E_i) \qquad (5.39) $$
(5.39)
1 Q
Q
(5.40)
ln Q
(5.41)
We identify hEi with the internal energy, usually denoted by the symbol U in
thermodynamics.
We have,
U =
1 Q
Q
1 2Q
+
Q 2
(5.42)
1 Q
Q
= hE 2 i hEi2
2
= E
2
(5.43)
(5.44)
(5.45)
Now write

$$ \frac{\partial U}{\partial \beta} = \frac{\partial U}{\partial T}\, \frac{\partial T}{\partial \beta} \qquad (5.46) $$
$$ = -C_V\, (k_B T^2) \qquad (5.47) $$

We get the relation between the fluctuations of energy of an equilibrium system and the reversible heat required to raise the temperature of the system by one degree Kelvin:

$$ \sigma_E^2 = k_B T^2\, C_V. \qquad (5.48) $$
The left hand side of the above equation represents the fluctuations of energy when the system is in equilibrium. The right hand side is about how the system would respond when you heat it⁸. Note $C_V$ is the amount of reversible heat you have to supply to the system at constant volume to raise its temperature by one degree Kelvin. The equilibrium fluctuations in energy are related to the linear response, i.e. the response of the system to a small perturbation⁹.
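Equation (5.48) can be illustrated on a two-level system, for which both sides are easy to compute. A minimal sketch, in units where $k_B = 1$ and the level spacing is 1 (the temperature is an arbitrary choice); $C_V$ is obtained from a numerical derivative of $U$:

```python
import math

def canonical_stats(levels, beta):
    """Mean energy and energy variance in the canonical ensemble."""
    weights = [math.exp(-beta * e) for e in levels]
    Q = sum(weights)
    mean = sum(e * w for e, w in zip(levels, weights)) / Q
    mean2 = sum(e * e * w for e, w in zip(levels, weights)) / Q
    return mean, mean2 - mean * mean

levels = [0.0, 1.0]          # two-level system; energies in units of the spacing
T = 0.7
U, varE = canonical_stats(levels, 1.0 / T)

# C_V = dU/dT via a central difference
dT = 1e-6
U_plus, _ = canonical_stats(levels, 1.0 / (T + dT))
U_minus, _ = canonical_stats(levels, 1.0 / (T - dT))
CV = (U_plus - U_minus) / (2 * dT)

print(varE, T * T * CV)      # the two sides of Eq. (5.48), with kB = 1
```

The two printed numbers coincide to the accuracy of the finite difference.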
For a classical ideal gas of $N$ particles in a volume $V$,

$$ Q = \frac{V^N}{N!}\, \frac{1}{h^{3N}} \int_{-\infty}^{+\infty} dp_1 \cdots \int_{-\infty}^{+\infty} dp_{3N}\, \exp\left[-\frac{\beta}{2m} \left(p_1^2 + p_2^2 + \cdots + p_{3N}^2\right)\right] $$

$$ = \frac{V^N}{N!}\, \frac{1}{h^{3N}} \left[\int_{-\infty}^{+\infty} dp\, \exp\left(-\frac{1}{2}\, \frac{p^2}{m k_B T}\right)\right]^{3N} $$
Consider the integral

$$ I = \int_{-\infty}^{+\infty} dp\, \exp\left(-\frac{1}{2}\, \frac{p^2}{m k_B T}\right) $$

Let

$$ x = \frac{p^2}{2m k_B T}; \qquad dx = \frac{p}{m k_B T}\, dp; \qquad dp = \sqrt{\frac{m k_B T}{2}}\, \frac{1}{x^{1/2}}\, dx $$

Therefore,

$$ I = \sqrt{2m k_B T} \int_0^{\infty} dx\, x^{(1/2)-1} \exp(-x) = \sqrt{2m k_B T}\ \Gamma\!\left(\frac{1}{2}\right) = \sqrt{2\pi m k_B T} $$

since $\Gamma(1/2) = \sqrt{\pi}$. Thus,

$$ Q(T, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, (2\pi m k_B T)^{3N/2} $$
Alternatively, the partition function can be obtained from the density of states. We have

$$ \widehat{\Omega} = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, \frac{(2\pi m E)^{3N/2}}{\Gamma\!\left(\frac{3N}{2}+1\right)} $$

$$ g(E) = \frac{\partial \widehat{\Omega}}{\partial E} = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, \frac{(2\pi m)^{3N/2}}{\Gamma\!\left(\frac{3N}{2}+1\right)}\, \frac{3N}{2}\, E^{\frac{3N}{2}-1} $$
$$ = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, \frac{(2\pi m)^{3N/2}}{\Gamma\!\left(\frac{3N}{2}\right)}\, E^{\frac{3N}{2}-1} $$

where we have used $\Gamma\!\left(\frac{3N}{2}+1\right) = \frac{3N}{2}\, \Gamma\!\left(\frac{3N}{2}\right)$.
The partition function is obtained as a transform of the density of states, where the variable $E$ is transformed to the variable $\beta$:

$$ Q(\beta, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, \frac{(2\pi m)^{3N/2}}{\Gamma\!\left(\frac{3N}{2}\right)} \int_0^{\infty} dE\, \exp(-\beta E)\, E^{\frac{3N}{2}-1} $$

Consider the integral

$$ I = \int_0^{\infty} dE\, \exp(-\beta E)\, E^{\frac{3N}{2}-1} $$

Let $x = \beta E$; then $dx = \beta\, dE$ and

$$ I = \frac{1}{\beta^{3N/2}} \int_0^{\infty} dx\, x^{\frac{3N}{2}-1} \exp(-x) = \frac{\Gamma\!\left(\frac{3N}{2}\right)}{\beta^{3N/2}} $$

Substituting the above in the expression for the partition function we get,

$$ Q(T, V, N) = \frac{V^N}{N!}\, \frac{1}{h^{3N}}\, (2\pi m k_B T)^{3N/2} $$
We have $U = \sum_i p_i E_i$. This suggests that the internal energy of a closed system can be changed in two ways.

1. Change $\{E_i : i = 1, 2, \cdots\}$ keeping $\{p_i : i = 1, 2, \cdots\}$ the same. This we call work.
2. Change $\{p_i : i = 1, 2, \cdots\}$ keeping $\{E_i : i = 1, 2, \cdots\}$ the same. The changes in $p_i$ should be done in such a way that $\sum_i p_i = 1$. This we call heat.

Thus we have,

$$ dU = \sum_i p_i\, dE_i + \sum_i{}^{\!\star}\; E_i\, dp_i $$

where the superscript $\star$ on the second sum should remind us that the $dp_i$ are not all independent; they should add up to zero.

In the first sum we change $E_i$ by $dE_i\ \forall\ i$ keeping $p_i\ \forall\ i$ unchanged. In the second sum we change $p_i$ by $dp_i\ \forall\ i$ keeping $E_i$ unchanged for all $i$, and ensuring $\sum_i dp_i = 0$.
The first sum gives the work:

$$ \sum_i p_i\, dE_i = \sum_i p_i\, \frac{\partial E_i}{\partial V}\, dV = \frac{\partial}{\partial V}\left(\sum_i p_i E_i\right) dV = \frac{\partial \langle E \rangle}{\partial V}\, dV = \frac{\partial U}{\partial V}\, dV = -P\, dV = dW \qquad (5.50) $$
For the second sum, start from

$$ dS = -k_B\, d\left(\sum_i p_i \ln p_i\right) \qquad (5.51) $$
$$ = -k_B \sum_i \left[1 + \ln p_i\right] dp_i \qquad (5.52) $$

Using $\sum_i dp_i = 0$ and $\ln p_i = -\beta E_i - \ln Q$,

$$ T\, dS = dq = -k_B T \sum_i \ln p_i\, dp_i = k_B T \sum_i \left[\beta E_i + \ln Q\right] dp_i = \sum_i E_i\, dp_i \qquad (5.53) $$
Problems

5.1. Find the probability of the micro states of the red die under the condition that the three dice add to 12.

5.2. Maximise $A(x_1, x_2) = x_1 x_2$ under the constraint $g(x_1, x_2) = x_1 + x_2 - 10 = 0$. We have

$$ \frac{\partial A}{\partial x_1} - \lambda\, \frac{\partial g}{\partial x_1} = 0 $$
$$ \frac{\partial A}{\partial x_2} - \lambda\, \frac{\partial g}{\partial x_2} = 0 $$

For the given problem we have,

$$ x_2 - \lambda = 0 $$
$$ x_1 - \lambda = 0 $$

The constraint $x_1 + x_2 = 10$ gives $\lambda = 5$. Thus for $x_1 = x_2 = 5$ the function $A$ is maximum under the constraint $x_1 + x_2 = 10$. For a fixed perimeter, the area of a rectangle is maximum only if the sides are equal.

5.3. Maximise $x^3 y^5$ under the constraint $x + y = 8$. Answer: $x = 3$; $y = 5$.
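Problem 5.3 can be checked numerically by eliminating the constraint and scanning; the stationarity condition $3y = 5x$ (from the two Lagrange equations $3x^2 y^5 = \lambda$ and $5x^3 y^4 = \lambda$) can then be verified at the maximiser. A sketch with an ad hoc grid:

```python
def target(x):
    # maximise x^3 y^5 with y = 8 - x (constraint eliminated directly)
    y = 8.0 - x
    return x ** 3 * y ** 5

# scan the interior of the feasible interval on a fine grid
best_x = max((i * 1e-4 for i in range(1, 80000)), key=target)
best_y = 8.0 - best_x

print(best_x, best_y)            # near x = 3, y = 5
print(3 * best_y, 5 * best_x)    # Lagrange condition 3y = 5x: both near 15
```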
$$ \frac{S}{k_B} = -\sum_i p_i \ln p_i $$

In the above the right hand side can be interpreted as $-\langle \ln p \rangle$, where the angular bracket denotes an average over $\{p_i : i = 1, 2, \cdots\}$. Consider a closed system for which $p_i = \exp(-\beta E_i)/Q$. Show that $S/k_B = \beta U + \ln Q$. From this deduce that $F = -k_B T \ln Q$.
5.11. See S. B. Cahn, G. D. Mahan, and B. E. Nadgorny, A Guide to Physics Problems Part 2: Thermodynamics, Statistical Physics, and Quantum Mechanics, Plenum (1997), problem No. 4.45, page 24.

Consider a system composed of a very large number $N$ of distinguishable particles at rest. The particles do not interact with each other. Each particle has only two non-degenerate energy levels: $0$ and $\epsilon > 0$. Let $E$ denote the total energy of the system. Note that $E$ is a random variable; it varies, in general, from one micro state of the system to another. Let $\bar{\epsilon} = E/N$ denote the energy per particle.

1. Assume that the system is not necessarily in thermal equilibrium. What is the maximum possible value of $\bar{\epsilon}$?
2. Let the system be in thermal equilibrium at temperature $T$. The canonical ensemble average of $E$ is the thermodynamic energy, denoted by $U$, i.e. $U = \langle E \rangle$, where $\langle \cdot \rangle$ denotes an average over a canonical ensemble of micro states¹⁰. Let $\bar{\epsilon} = U/N$ denote the (thermodynamic, equilibrium) energy per particle. Derive an expression for $\bar{\epsilon}$ as a function of temperature.
consistent with the equipartition theorem, which says that each degree of freedom (each quadratic term in the Hamiltonian) carries an energy of $k_B T/2$.
5.16. Helmholtz free energy and the canonical partition function are related:

$$ F(T, V, N) = -k_B T \ln Q(T, V, N) $$

Derive an expression for the free energy of an ideal gas of $N$ molecules confined to a volume $V$ at temperature $T$.
5.17. From thermodynamics we know that

$$ dF = -P\, dV - S\, dT + \mu\, dN. $$

Consider the expression for the free energy of an ideal gas, see the last problem. Take the partial derivative of $F(T, V, N)$ with respect to $V$ and show that it leads to the ideal gas law $PV = N k_B T$.
5.18. R. K. Pathria, Statistical Mechanics, Second Edition, Butterworth-Heinemann (1996) p. 87; Problem 3.32.

The quantum states available to a given physical system are
(i) a group of $g_1$ equally likely states with a common energy $\epsilon_1$,
(ii) a group of $g_2$ equally likely states with a common energy $\epsilon_2 \neq \epsilon_1$.

Show that the entropy of the system is given by,

$$ S = -k_B \left[ p_1 \ln\left(\frac{p_1}{g_1}\right) + p_2 \ln\left(\frac{p_2}{g_2}\right) \right] \qquad (5.54) $$
5.19. The canonical partition function of an ideal gas is

$$ Q = \frac{1}{h^{3N}}\, \frac{V^N}{N!}\, (2\pi m k_B T)^{3N/2} \qquad (5.57) $$

From the partition function derive an expression for the entropy. The expression for the entropy is called the Sackur-Tetrode equation. Consider a process in which there is no change in entropy: $\Delta S = 0$. The volume and temperature of the gas change during such an isentropic (iso-entropic, constant-entropy) process. Show that $T V^{2/3}$ is a constant during such a process. Combined with the equation of state for an ideal gas ($PV$ is a constant at constant $N$ and $T$), this gives the formula for an adiabatic process: $P V^{5/3}$ is a constant.
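The claim $TV^{2/3} = \text{const}$ at fixed entropy can be verified directly from the Sackur-Tetrode expression, Eq. (4.78), substituting $E = \frac{3}{2} N k_B T$. A sketch; the units ($m = h = k_B = 1$) and the state points are arbitrary choices:

```python
import math

def entropy(E, V, N, m=1.0, h=1.0, kB=1.0):
    # Sackur-Tetrode entropy, Eq. (4.78)
    return N * kB * (math.log(V / N)
                     + 1.5 * math.log(4 * math.pi * m * E / (3 * N * h ** 2))
                     + 2.5)

N, kB = 1000.0, 1.0
T1, V1 = 2.0, 1.0
T2 = 3.0
V2 = V1 * (T1 / T2) ** 1.5    # T V^(2/3) = const  =>  V2 = V1 (T1/T2)^(3/2)

S1 = entropy(1.5 * N * kB * T1, V1, N)
S2 = entropy(1.5 * N * kB * T2, V2, N)
print(S1, S2)                 # equal: the process is isentropic
```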
5.20. Adiabatic Process: A microscopic view.
A. M. Glazer and J. S. Wark, Statistical Mechanics: A Survival Guide, Oxford (2001), sec. 4.5, pages 53-56.

The internal energy and its change are

$$ U = \sum_i p_i\, \epsilon_i, \qquad dU = \sum_i \epsilon_i\, dp_i + \sum_i p_i\, d\epsilon_i $$

Heat corresponds to changing $\{p_i : i = 1, 2, \cdots\}$ keeping $\{\epsilon_i : i = 1, 2, \cdots\}$ the same, with $\sum_i p_i = 1$. The energy levels are

$$ \epsilon = \frac{h^2}{8mL^2}\, (n_1^2 + n_2^2 + n_3^2), \qquad n_i = 1, 2, 3, \cdots \ \text{ for } i = 1, 2, 3 $$

We can change $L$ keeping $\{p_i : i = 1, 2, \cdots\}$ the same. From these considerations show that for an adiabatic process $P V^{5/3}$ is a constant for a mono-atomic ideal gas.
6
Grand canonical ensemble

An open system is one which exchanges energy and matter with its surroundings. The surroundings act as a heat bath as well as a particle (or material) bath.

A heat bath transacts energy with the system. The temperature of the heat bath does not change because of the transaction of energy.

A material (or particle) bath transacts matter (or particles) with the system. The chemical potential of the material bath does not change because of the transaction of matter.

The system is in thermal equilibrium with the surroundings¹. The system is also in diffusional equilibrium with the surroundings².

The system can be described by its temperature³ $T$, volume $V$ and chemical potential⁴ $\mu$. Notice that the temperature $T$, the chemical potential $\mu$, and the volume $V$ are independent properties of an open system.

Since the system is not isolated, its micro states are not equi-probable.

Aim: To calculate the probability of a micro state of the open system.

Let us take the open system, its boundary and surroundings and construct an isolated system. We are interested in constructing an isolated system because we want to start with the only assumption we make in statistical mechanics: all the micro states of an isolated system are equally probable. We have called this the ergodic hypothesis.
4 $\displaystyle \mu = \left(\frac{\partial U}{\partial N}\right)_{S,V}$
Since $E(c) \ll E$ and $N(c) \ll N$, we can Taylor-expand $S$ retaining only the first two terms. We have
5 The picture I have is the following. I am visualizing a micro state of the isolated system as consisting of two parts. One part holds the signature of the open system; the other holds the signature of the surroundings. For example, a string of positions and momenta of all the particles in the isolated system defines a micro state. This string consists of two parts. The first part contains the string of positions and momenta of all the particles in the open system and the second part contains the positions and momenta of all the particles in the surroundings. Since the system is open, the length of the system-string is a fluctuating quantity, and so is the length of the bath-string. However, the string of the isolated system is of fixed length. I am neglecting those micro states of the isolated system which hold the signature of the interaction between the system and the surroundings at the boundaries.
$$ S\big(E - E(c),\; V,\; N - N(c)\big) = S(E, V, N) - E(c) \left(\frac{\partial S}{\partial E}\right)_{V,N} - N(c) \left(\frac{\partial S}{\partial N}\right)_{E,V} \qquad (6.2) $$

We have $S \equiv S(E, V, N)$, so that

$$ dS = \left(\frac{\partial S}{\partial E}\right)_{V,N} dE + \left(\frac{\partial S}{\partial V}\right)_{E,N} dV + \left(\frac{\partial S}{\partial N}\right)_{E,V} dN \qquad (6.7) $$
$$ = \frac{1}{T}\, dE + \frac{P}{T}\, dV - \frac{\mu}{T}\, dN \qquad (6.6) $$

from which

$$ \left(\frac{\partial S}{\partial E}\right)_{V,N} = \frac{1}{T}, \qquad \left(\frac{\partial S}{\partial V}\right)_{E,N} = \frac{P}{T}, \qquad \left(\frac{\partial S}{\partial N}\right)_{E,V} = -\frac{\mu}{T} $$

Therefore,

$$ S\big(E - E(c),\; V,\; N - N(c)\big) = S(E, V, N) - \frac{E(c)}{T} + \frac{\mu\, N(c)}{T} \qquad (6.5) $$

The probability of a micro state $c$ of the open system is

$$ P(c) = \frac{\widehat{\Omega}\big(E - E(c),\; N - N(c)\big)}{\widehat{\Omega}_{\text{Total}}} \qquad (6.12) $$
We are able to write the above because of the postulate of ergodicity: all micro states of an isolated system are equally probable. We have,

$$ P(c) = \frac{1}{\widehat{\Omega}_{\text{Total}}} \exp\left[\frac{1}{k_B}\, S\big(E - E(c),\; N - N(c)\big)\right] \qquad (6.14) $$

$$ = \frac{\exp[S(E, V, N)/k_B]}{\widehat{\Omega}_{\text{Total}}}\, \exp\left[-\frac{E(c)}{k_B T} + \frac{\mu\, N(c)}{k_B T}\right] \qquad (6.15) $$

Normalizing,

$$ P(c) = \frac{1}{\mathcal{Q}} \exp\big[-\beta\{E(c) - \mu N(c)\}\big] \qquad (6.17) $$

where the grand canonical partition function is

$$ \mathcal{Q}(T, V, \mu) = \sum_c \exp\big[-\beta\{E(c) - \mu N(c)\}\big] \qquad (6.18) $$

and the sum runs over all the micro states of the open system.
Let $\lambda = \exp(\beta\mu)$, the fugacity; then we can write the grand canonical partition function as,

$$ \mathcal{Q}(T, V, \lambda) = \sum_c \lambda^{N(c)} \exp[-\beta E(c)] \qquad (6.19) $$

Collect those micro states of a grand canonical ensemble with a fixed value of $N$. These micro states constitute a canonical ensemble described by the canonical partition function $Q(T, V, N)$; see also the footnote⁶. Thus we can write the grand canonical partition function as,

$$ \mathcal{Q}(T, V, \lambda) = \sum_N \lambda^N\, Q(T, V, N) \qquad (6.20) $$
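Equation (6.20) can be illustrated on a toy model: a lattice gas of $M$ sites, each empty or singly occupied with energy $\epsilon$. The canonical partition function at fixed $N$ is $\binom{M}{N} e^{-\beta N \epsilon}$, and the fugacity-weighted sum collapses, by the binomial theorem, into a product over independent sites. All parameter values below are arbitrary choices:

```python
import math

M, eps, beta, mu = 5, 0.3, 1.2, -0.4   # toy lattice gas parameters
lam = math.exp(beta * mu)              # fugacity

def Q_canonical(N):
    # canonical partition function at fixed N: C(M, N) configurations, energy N*eps
    return math.comb(M, N) * math.exp(-beta * N * eps)

# grand partition function as the fugacity-weighted sum, Eq. (6.20)
grand_sum = sum(lam ** N * Q_canonical(N) for N in range(M + 1))

# same quantity as a direct product over independent sites
grand_direct = (1 + lam * math.exp(-beta * eps)) ** M

print(grand_sum, grand_direct)
```

The two numbers coincide, which is just the binomial theorem at work.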
6 We can further collect all those micro states of the canonical ensemble with a fixed energy. These micro states constitute a micro canonical ensemble. We shall experience this while trying to evaluate the canonical partition function for Fermions and Bosons. We will not be able to carry out the sum over occupation numbers.
$$ T = \left(\frac{\partial U}{\partial S}\right)_{V,N} \qquad (6.21) $$

$$ \mu = \left(\frac{\partial U}{\partial N}\right)_{S,V} \qquad (6.23) $$
G = kB T ln Q
We follow the same method we employed for establishing the connection between Helmholtz free energy and canonical partition function. We have,
Q(T, V, μ) = Σ_i exp[−β(Ei − μNi)]   (6.24)
where the sum runs over all the microstates i of the open system. Ei is the
energy of the micro state i, and Ni is the number of particles in the system
when it is in its micro state i.
We replace the sum over micro states by sum over energy and number of
particles. Let g(E, N ) denote the density of states. We have then,
Q(T, V, μ) = ∫dE ∫dN g(E, N) exp[−β(E − μN)]   (6.25)
The contribution to the integrals comes overwhelmingly from a single term, at E = ⟨E⟩ and N = ⟨N⟩. We then get,

Q(T, V, μ) = g(⟨E⟩, ⟨N⟩) exp[−β(⟨E⟩ − μ⟨N⟩)]   (6.26)

−kB T ln Q = −T [kB ln g(E, N)] + E − μN   (6.27)
           = E − TS − μN   (6.28)
           = G   (6.29)
The same result follows from

Q(T, V, λ) = Σ_{N=0}^{∞} λ^N Q(T, V, N)   (6.31)

The sum is dominated by its largest term. Retaining only that term,

ln Q = N ln λ + ln Q(T, V, N) = μN/(kB T) + ln Q(T, V, N)   (6.32–6.34)

−kB T ln Q = −μN − kB T ln Q(T, V, N)   (6.35)
           = F − μN   (6.36)
           = U − TS − μN   (6.37)
           = G   (6.38)
Our next task is to show that G(T, V, μ) = −P V. To this end, let me tell you of a beautiful formula proposed by Euler, in the context of homogeneous functions.
The internal energy U(S, V, N) is a first order homogeneous function of its extensive arguments:

U(λS, λV, λN) = λ U(S, V, N)   (6.40)

Differentiating both sides with respect to λ and then setting λ = 1,

S (∂U/∂S) + V (∂U/∂V) + N (∂U/∂N) = U(S, V, N)   (6.42)

T S − P V + μN = U   (6.43)
6.3 P V = kB T ln Q
We proceed as follows. From the Euler relation derived above we have

P V = −(U − T S − μN) = −G   (6.45)
    = kB T ln Q(T, V, λ)   (6.46)

Thus,

P V = kB T ln Q(T, V, λ)   (6.47)
The Gibbs–Duhem relation follows. Differentiating U = T S − P V + μN,

dU = T dS + S dT − P dV − V dP + μ dN + N dμ   (6.50)

Comparing with the first law, dU = T dS − P dV + μ dN, we get S dT − V dP + N dμ = 0, i.e.

dμ = −(S/N) dT + (V/N) dP = −s dT + v dP   (6.52)
where s is the specific entropy (entropy per particle) and v is the specific volume (volume per particle).
Q(T, V, μ) = Σ_c exp[−β{E(c) − μN(c)}]   (6.54)
In the above,
c denotes a micro state of the open system;
E(c) denotes the energy of the open system when in micro state c;
N(c) denotes the number of particles of the open system when in micro state c.
Let us now take the partial derivative of all the terms in the above equation,
with respect to the variable , keeping the temperature and volume constant.
We have,
(∂Q/∂μ)_{T,V} = β Σ_c N(c) exp[−β{E(c) − μN(c)}] = β ⟨N⟩ Q(T, V, μ)   (6.55)

Differentiating once more with respect to μ,

(∂²Q/∂μ²)_{T,V} = β [ ⟨N⟩ (∂Q/∂μ)_{T,V} + Q (∂⟨N⟩/∂μ)_{T,V} ]   (6.56)

The left hand side of the above equation equals

β² Σ_c [N(c)]² exp[−β{E(c) − μN(c)}] = β² ⟨N²⟩ Q.

Substituting this in the above, we get,
β² ⟨N²⟩ Q = β² ⟨N⟩² Q + β Q (∂⟨N⟩/∂μ)_{T,V}   (6.57)

σ² = ⟨N²⟩ − ⟨N⟩² = kB T (∂⟨N⟩/∂μ)_{T,V}   (6.58)

We shall show below that

(∂⟨N⟩/∂μ)_{T,V} = (⟨N⟩²/V) kT ,   (6.61)
where kT denotes the isothermal compressibility, an experimentally measurable property, defined as

kT = −(1/V) (∂V/∂P)_T
From the Gibbs–Duhem relation at constant temperature, dμ = v dP, so that

(∂μ/∂⟨N⟩)_{T,V} = v (∂P/∂⟨N⟩)_{T,V} = v (∂P/∂v)_T (∂v/∂⟨N⟩)_{T,V}   (6.62)

With v = V/⟨N⟩, we have (∂v/∂⟨N⟩)_{T,V} = −V/⟨N⟩², and from the definition of kT, (∂P/∂v)_T = −1/(v kT). Therefore,

(∂⟨N⟩/∂μ)_{T,V} = (⟨N⟩²/V) kT   (6.65)

Finally we get,

σ² = ⟨N²⟩ − ⟨N⟩² = kB T (∂⟨N⟩/∂μ)_{T,V} = kB T (⟨N⟩²/V) kT   (6.66)

σ²/⟨N⟩² = kB T kT / V   (6.68)
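As a sanity check, the relation can be evaluated for a classical ideal gas, where the number statistics are Poissonian (σ² = ⟨N⟩) and kT = 1/P. The numbers below are merely illustrative:

```python
from math import isclose

# Sketch: check sigma^2/<N>^2 = kB*T*kT/V for a classical ideal gas,
# where sigma^2 = <N> (Poisson statistics) and kT = 1/P.
kB, T, V = 1.380649e-23, 300.0, 1e-3        # SI units; illustrative values
N_avg = 1e20                                 # average particle number (assumption)

P = N_avg * kB * T / V                       # ideal-gas equation of state
k_T = 1.0 / P                                # isothermal compressibility of ideal gas
rhs = kB * T * k_T / V                       # kB T kT / V
lhs = N_avg / N_avg**2                       # sigma^2/<N>^2 with sigma^2 = <N>

assert isclose(lhs, rhs, rel_tol=1e-12)
```

Both sides reduce to 1/⟨N⟩, in line with the expectation that relative fluctuations vanish in the thermodynamic limit.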
6.7 Alternate derivation of the relation : σ_N²/⟨N⟩² = kB T kT /V
Start from the Gibbs–Duhem relation at constant temperature, dμ = v dP, which gives

(∂μ/∂⟨N⟩)_{T,V} = (V/⟨N⟩) (∂P/∂⟨N⟩)_{T,V}   (6.73–6.74)

Introduce the particle density ρ = ⟨N⟩/V, the number of particles per unit volume. We have,

(∂P/∂⟨N⟩)_{V,T} = (∂P/∂ρ)_T (∂ρ/∂⟨N⟩)_{V,T} = (1/V) (∂P/∂ρ)_T   (6.76–6.77)

(∂P/∂V)_{⟨N⟩,T} = (∂P/∂ρ)_T (∂ρ/∂V)_{⟨N⟩,T} = −(⟨N⟩/V²) (∂P/∂ρ)_T   (6.79–6.81)

Eliminating (∂P/∂ρ)_T between the two relations,

(∂P/∂⟨N⟩)_{V,T} = −(V/⟨N⟩) (∂P/∂V)_{⟨N⟩,T}   (6.82–6.83)

Then we get,

σ_N² = kB T (∂⟨N⟩/∂μ)_{T,V} = −kB T (⟨N⟩²/V²) (∂V/∂P)_{T,⟨N⟩}   (6.84)
     = kB T (⟨N⟩²/V) kT   (6.85)
Problems
6.1. Start with the Helmholtz free energy F ≡ F(T, V, N). F is an extensive thermodynamic variable: a first order homogeneous function of the extensive thermodynamic variables V and N. Note that F also depends on the intensive variable T. Therefore we can write F(T, λV, λN) = λ F(T, V, N). Employ Euler's formula and derive the Gibbs–Duhem relation.

6.2. Start with the enthalpy H ≡ H(S, P, N). H is an extensive thermodynamic variable: a first order homogeneous function of the extensive thermodynamic variables S and N. Note that H also depends on the intensive variable P. Therefore we can write H(λS, P, λN) = λ H(S, P, N). Employ Euler's formula and show that H = T S + μN. Also derive the Gibbs–Duhem relation.

6.3. Start with the Gibbs free energy G ≡ G(T, P, N). Employ Euler's formula and show that G = μN. Derive the Gibbs–Duhem relation.
6.4. Let f be a second order homogeneous function of the variables x1, x2, …, xN. By this we mean

f(λx1, λx2, …, λxN) = λ² f(x1, x2, …, xN).

Show that

2 f(x1, x2, …, xN) = Σ_{i=1}^{N} xi (∂f/∂xi),

and, more generally, for an n-th order homogeneous function,

n f(x1, x2, …, xN) = Σ_{i=1}^{N} xi (∂f/∂xi).
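Euler's theorem is easy to verify numerically. The sketch below uses an example function of my own choosing (not from the text) and central finite differences:

```python
# Sketch: numerically verify Euler's theorem n*f = sum_i x_i * df/dx_i for a
# homogeneous function of degree 2. The function f below is an illustrative choice.
def f(x1, x2):
    return x1**2 + 3 * x1 * x2 + x2**2

def partial(g, args, i, h=1e-6):
    # central finite-difference estimate of dg/dx_i
    up = list(args); dn = list(args)
    up[i] += h; dn[i] -= h
    return (g(*up) - g(*dn)) / (2 * h)

x = (1.7, -0.4)
euler_sum = sum(x[i] * partial(f, x, i) for i in range(2))
assert abs(euler_sum - 2 * f(*x)) < 1e-6   # degree n = 2
```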
Show that the pressure of an open system is given by

P = kB T (∂ ln Q/∂V)_{λ,T} = kB T (1/V) ln Q.

Derive the grand canonical distribution by maximizing the entropy subject to the constraints

Σ_i p_i = 1 ;   Σ_i p_i E_i = ⟨E⟩ = U ;   Σ_i p_i N_i = ⟨N⟩,

and show that

p_i = (1/Q) exp[−β(E_i − μN_i)],

where β and −βμ are the Lagrange multipliers for the second and third constraints respectively. The first constraint gives rise to the grand canonical partition function Q. The energy of the system when in micro state i is E_i; the number of particles in the system when in micro state i is N_i.
6.8. Start with

Q(β, V, λ) = Σ_c λ^{N(c)} exp[−βE(c)],

where the sum runs over all the micro states of an open system. The average number of particles is given by

⟨N⟩ = (1/Q) Σ_c N(c) exp[−β{E(c) − μN(c)}].

Show that

σ² = ⟨N²⟩ − ⟨N⟩² = kB T (∂⟨N⟩/∂μ)_{T,V}.
6.9. Donald A. McQuarrie, Statistical Mechanics, Harper and Row (1976), page 65, Problem 3-4. Show that the pressure of an open system, in the grand canonical ensemble, is given by

P = kB T (∂ ln Q/∂V)_{λ,T}   (6.86)

Use Euler's theorem for first order homogeneous functions and show that

P = (kB T/V) ln Q(T, V, λ).
7
Quantum Statistics
The canonical partition function of the quantum system is

Q(T, V, N) = Σ*_{{n1,n2,…}} exp[−β(n1 ε1 + n2 ε2 + ⋯)]   (7.1)

where the sum runs over all possible strings of occupation numbers (i.e. micro states) obeying the constraint

Σ_i n_i = N.

To remind us of this constraint I have put a star over the summation sign.
Q(T, V, λ) = Σ_{N=0}^{∞} λ^N Q(β, V, N)   (7.2)

           = Σ_{N=0}^{∞} λ^N Σ*_{{n1,n2,…}} exp[−β(n1 ε1 + n2 ε2 + ⋯)]   (7.3)
I shall discuss formally the grand canonical ensemble in full glory in some later lectures. For the present it is sufficient to consider the grand canonical partition function as a transform of the canonical partition function, with the variable N transformed to the fugacity λ, or equivalently the chemical potential μ = kB T ln λ.
Q(T, V, λ) = Σ_{N=0}^{∞} Σ*_{{n1,n2,…}} [exp{−β(ε1 − μ)}]^{n1} [exp{−β(ε2 − μ)}]^{n2} ⋯   (7.6–7.7)

           = Σ_{N=0}^{∞} Σ*_{{n1,n2,…}} x1^{n1} x2^{n2} ⋯   (7.8)

where x_i = λ exp(−βε_i) = exp[−β(ε_i − μ)].
We have a restricted sum over strings of occupation numbers; the restriction is that the occupation numbers constituting a string should add to N. We then take a sum over N from 0 to ∞, which removes the restriction. To appreciate this, see Donald A. McQuarrie, Statistical Mechanics, Harper and Row (1976), p. 77, Problem 4-6. I have worked out this problem below.
Consider first the sum over restricted sums:

I1 = Σ_{N=0}^{∞} Σ*_{{n1,n2}} x1^{n1} x2^{n2}   (7.9)

where the star over the summation sign reminds us of the restriction n1 + n2 = N. Also let us assume n_i can be 0, 1 or 2, for i = 1, 2. We can write down I1 as,

I1 = 1 + x1 + x2 + x1² + x2² + x1 x2 + x1² x2 + x1 x2² + x1² x2²   (7.10)

Consider next the unrestricted product

Π_{i=1}^{2} Σ_{ni=0}^{2} x_i^{ni} = (1 + x1 + x1²)(1 + x2 + x2²)   (7.11–7.12)

= 1 + x2 + x2² + x1 + x1 x2 + x1 x2² + x1² + x1² x2 + x1² x2²   (7.13–7.14)

= I1   (7.15)
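The identity worked out above can be confirmed numerically. This sketch evaluates both sides for arbitrary values of x1 and x2:

```python
from itertools import product as cartesian
from math import isclose

# Sketch: verify that the restricted double sum over occupation strings,
# summed over all N, equals the unrestricted product (McQuarrie, Problem 4-6).
x1, x2 = 0.3, 0.5          # arbitrary values
max_occ = 2                # each n_i in {0, 1, 2}, as in the worked example

# Left side: sum over N of the restricted sums with n1 + n2 = N.
lhs = sum(
    x1**n1 * x2**n2
    for N in range(2 * max_occ + 1)
    for n1, n2 in cartesian(range(max_occ + 1), repeat=2)
    if n1 + n2 == N
)

# Right side: unrestricted product of single-state sums.
rhs = sum(x1**n for n in range(max_occ + 1)) * sum(x2**n for n in range(max_occ + 1))

assert isclose(lhs, rhs)
```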
We can now write the grand canonical partition function of quantum particles occupying single-particle quantum states determined by the volume V:

Q(T, V, λ) = Π_i Σ_{ni=0}^{all} x_i^{ni} = Π_i Σ_{ni=0}^{all} [λ exp(−βε_i)]^{ni}   (7.16)

           = Π_i Σ_{ni=0}^{all} [exp{−β(ε_i − μ)}]^{ni}   (7.17)

where "all" denotes the maximum allowed occupancy of a single-particle state.
The nine terms of I1 correspond to the occupation strings:

n1  n2   x1^{n1} x2^{n2}
0   0    1
1   0    x1
0   1    x2
2   0    x1²
0   2    x2²
1   1    x1 x2
2   1    x1² x2
1   2    x1 x2²
2   2    x1² x2²
For Fermions (n_i = 0, 1) and Bosons (n_i = 0, 1, 2, …) we get,

Q_FD = Π_i (1 + exp[−β(ε_i − μ)])   (7.18)

Q_BE = Π_i 1/(1 − exp[−β(ε_i − μ)])   (μ < ε_i for all i)   (7.19)
We then have

Q_CS(T, V, λ) = Σ_{N=0}^{∞} λ^N Σ*_{{n1,n2,…}} (N!/(n1! n2! ⋯)) [exp(−βε1)]^{n1} [exp(−βε2)]^{n2} ⋯   (7.20)

              = Σ_{N=0}^{∞} λ^N [Σ_i exp(−βε_i)]^N = Σ_{N=0}^{∞} λ^N [Q1(T, V)]^N   (7.21)
QMB = Σ_{N=0}^{∞} λ^N Σ*_{{n1,n2,…}} (1/(n1! n2! ⋯)) exp[−β(n1 ε1 + n2 ε2 + ⋯)]

    = Σ_{N=0}^{∞} Σ*_{{n1,n2,…}} ([λ exp(−βε1)]^{n1}/n1!) ([λ exp(−βε2)]^{n2}/n2!) ⋯

    = Σ_{N=0}^{∞} Σ*_{{n1,n2,…}} (x1^{n1}/n1!) (x2^{n2}/n2!) ⋯

    = (Σ_{n1=0}^{∞} x1^{n1}/n1!) (Σ_{n2=0}^{∞} x2^{n2}/n2!) ⋯

    = exp(x1) exp(x2) ⋯ = Π_i exp(x_i) = Π_i exp[λ exp(−βε_i)] = Π_i exp[exp{−β(ε_i − μ)}]   (7.22)
We can also express the grand canonical partition function for the classical indistinguishable ideal gas as,

QMB = exp(x1) exp(x2) ⋯ = exp(Σ_i x_i) = exp[λ Σ_i exp(−βε_i)] = exp[λ Q1(T, V)]   (7.23–7.24)
7.6.1 QMB(T, V, N) → QMB(T, V, λ)

We could have obtained the above in a simple manner, by recognizing that

QMB(T, V, N) = [QMB(T, V, N = 1)]^N / N! = Q1^N / N!

Recall that

QMB(T, V, λ) = Σ_{N=0}^{∞} λ^N QMB(T, V, N)   (7.26)

Then we have,

QMB(T, V, λ) = Σ_{N=0}^{∞} λ^N Q1^N/N! = exp(λ Q1)

For comparison, the Bose–Einstein result is the product

Q = Π_i 1/(1 − exp[−β(ε_i − μ)])   (Bose–Einstein)   (7.25)

7.7 Grand canonical partition function, grand potential, and thermodynamic properties of an open system
Recall from thermodynamics that the grand potential G is obtained as a Legendre transform of U(S, V, N): S → T; N → μ; and U → G.
G(T, V, μ) = U − TS − μN   (7.27)

T = (∂U/∂S)_{V,N}   (7.28)

μ = (∂U/∂N)_{S,V}   (7.29)
It follows,

P(T, V, μ) = −(∂G/∂V)_{T,μ}   (7.31)

S(T, V, μ) = −(∂G/∂T)_{V,μ}   (7.32)

N(T, V, μ) = −(∂G/∂μ)_{T,V}   (7.33)
If we have an open system of particles obeying Maxwell–Boltzmann, Bose–Einstein, or Fermi–Dirac statistics at temperature T and chemical potential μ, in a volume V, then the above formulae help us calculate the pressure, entropy and average number of particles in the system. In fact, in the last section on the grand canonical ensemble, we derived formal expressions for the mean and fluctuations of the number of particles in an open system; we related the fluctuations to the isothermal compressibility, an experimentally measurable property.
The grand potential for the three statistics is given by,

G(T, V, μ) = −kB T ln Q   (7.34)

           = −kB T Σ_i exp[−β(ε_i − μ)]          (Maxwell–Boltzmann)
           = +kB T Σ_i ln[1 − exp{−β(ε_i − μ)}]  (Bose–Einstein)   (7.35)
           = −kB T Σ_i ln[1 + exp{−β(ε_i − μ)}]  (Fermi–Dirac)
For Maxwell–Boltzmann statistics,

G(T, V, μ) = −kB T ln Q = −kB T Σ_i exp[−β(ε_i − μ)]   (7.37)

⟨N⟩ = −(∂G/∂μ)_{T,V} = Σ_i exp[−β(ε_i − μ)] = λ Σ_i exp(−βε_i) = λ Q1(T, V)   (7.38–7.39)

The same results follow directly from the partition function:

Q(T, V, λ) = Σ_{N=0}^{∞} λ^N Q(T, V, N) = Σ_{N=0}^{∞} λ^N Q1^N/N! = exp(λ Q1)   (7.40–7.43)

G(T, V, μ) = −kB T λ Q1   (7.44)

⟨N⟩ = −(∂G/∂μ)_{T,V} = λ Q1(T, V)   (7.45)
For Bose–Einstein statistics,

Q = Π_i 1/(1 − exp[−β(ε_i − μ)])   (7.46)

ln Q = −Σ_i ln[1 − exp{−β(ε_i − μ)}]   (7.47)

⟨N⟩ = −(∂G/∂μ)_{T,V} = Σ_i exp[−β(ε_i − μ)]/(1 − exp[−β(ε_i − μ)])   (7.48)

For Fermi–Dirac statistics,

Q = Π_i (1 + exp[−β(ε_i − μ)])   (7.49)

G = −kB T ln Q = −kB T Σ_i ln[1 + exp{−β(ε_i − μ)}]   (7.50)

⟨N⟩ = −(∂G/∂μ)_{T,V} = Σ_i exp[−β(ε_i − μ)]/(1 + exp[−β(ε_i − μ)])   (7.51)
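The three ⟨N⟩ formulae above can be cross-checked against a numerical derivative of ln Q with respect to μ. The spectrum and parameters below are illustrative assumptions:

```python
from math import exp, log, isclose

# Sketch: check <N> = (1/beta) d(ln Q)/d(mu) numerically for the three
# statistics, on a small made-up spectrum (kB = 1 units).
energies = [0.0, 0.5, 1.3, 2.0]
beta, mu = 1.0, -0.7          # mu below the lowest level keeps BE well defined

def lnQ(mu, stat):
    if stat == "MB":
        return sum(exp(-beta * (e - mu)) for e in energies)
    if stat == "BE":
        return -sum(log(1 - exp(-beta * (e - mu))) for e in energies)
    if stat == "FD":
        return sum(log(1 + exp(-beta * (e - mu))) for e in energies)

def N_formula(mu, a):
    # <n_k> = 1/(exp[beta(e - mu)] + a): a = +1 FD, 0 MB, -1 BE
    return sum(1.0 / (exp(beta * (e - mu)) + a) for e in energies)

h = 1e-6
for stat, a in [("MB", 0), ("BE", -1), ("FD", +1)]:
    N_deriv = (lnQ(mu + h, stat) - lnQ(mu - h, stat)) / (2 * h) / beta
    assert isclose(N_deriv, N_formula(mu, a), rel_tol=1e-5)
```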
7.9 All the three statistics are the same at high temperature and/or low densities
Define

x_i = λ exp(−βε_i) = exp[−β(ε_i − μ)]   (7.52)

In terms of the x_i, the grand canonical partition functions read

Q = Π_i 1/(1 − x_i)   (Bose–Einstein) ;   Q = Π_i (1 + x_i)   (Fermi–Dirac)   (7.53)

For the classical (Maxwell–Boltzmann) ideal gas, the canonical partition function is

Q(T, V, N) = (V^N/N!) (1/Λ^{3N})   (7.54)

Λ = h/√(2πm kB T)   (7.55)

F = −kB T ln Q(T, V, N) = −kB T [N ln V − 3N ln Λ − N ln N + N]   (7.57)
When you take the partial derivative of the free energy with respect to N, keeping temperature and volume constant, you get the chemical potential (see the footnote below). Therefore,

μ = (∂F/∂N)_{T,V} = −kB T [ln V − 3 ln Λ − ln N − 1 + 1]   (7.62)
  = kB T [3 ln Λ + ln(N/V)]   (7.63)
  = kB T ln(ρΛ³)   (7.64)

μ/(kB T) = ln(ρΛ³)   (7.65)

where ρ = N/V is the number density, i.e. the number of particles per unit volume. Thus we get,

β(ε_i − μ) = βε_i − ln(ρΛ³)   (7.66)
In thermodynamics we have

F = U − TS   (7.58)

dF = dU − T dS − S dT   (7.59)

From the first law of thermodynamics we have dU = T dS − P dV + μ dN. Substituting this in the expression for dF above we get,

dF = −P dV + μ dN − S dT   (7.60)

Therefore,

μ = (∂F/∂N)_{T,V}   (7.61)
In the limit x_i → 0 the three grand canonical partition functions coincide:

Q = Π_i 1/(1 − x_i) ;   1/(1 − x_i) → 1 + x_i    (Bose–Einstein)   (7.67)
Q = Π_i exp(x_i) ;      exp(x_i) → 1 + x_i       (Maxwell–Boltzmann)   (7.68)
Q = Π_i (1 + x_i)                                (Fermi–Dirac)

For all the three statistics, the grand canonical partition function takes the same expression, Q → Π_i (1 + x_i). Bosons, Fermions and classical indistinguishable particles behave the same way when ρΛ³ → 0.
When do we get ρΛ³ → 0? Note that Λ is inversely proportional to the square root of the temperature:

Λ = h/√(2πm kB T)

Hence Λ → 0 when T → ∞. For a fixed temperature (Λ constant), ρΛ³ → 0 when ρ → 0. For a fixed ρ, ρΛ³ → 0 when T → ∞ (the same as Λ → 0). Classical behaviour obtains at low densities and/or high temperatures. Quantum effects manifest only at low temperatures and/or high densities.
7.9.2 Easier Method : λ → 0

Another simple way to show that the three statistics are identical in the limit of high temperatures and low densities is to recognise (see below) that ρΛ³ → 0 implies λ → 0. Here λ = exp(βμ) is the fugacity. Let us show this first. We have shown that

μ/(kB T) = ln(ρΛ³)

Therefore the fugacity is given by

λ = exp(μ/kB T) = exp(ln[ρΛ³]) = ρΛ³   (7.69–7.70)

Thus ρΛ³ → 0 implies λ → 0.
In the limit λ → 0 we have, for Maxwell–Boltzmann statistics,

QMB = Π_i exp[λ exp(−βε_i)] → Π_i (1 + λ exp(−βε_i))   (7.71–7.72)

and for Bose–Einstein statistics,

QBE = Π_i 1/(1 − λ exp(−βε_i)) → Π_i [1 + λ exp(−βε_i)]   (7.73–7.74)

which is precisely the Fermi–Dirac form, QFD = Π_i [1 + λ exp(−βε_i)]   (7.75). Thus in the limit of high temperatures and low densities, Maxwell–Boltzmann and Bose–Einstein statistics go over to Fermi–Dirac statistics.
7.9.3 Easiest Method : Ω̂(n1, n2, …) = 1

We could have shown easily that in the limit of high temperature and low density the three statistics are identical by considering the degeneracy factor

Ω̂ = 1   (Bose–Einstein and Fermi–Dirac statistics) ;   Ω̂ = 1/(n1! n2! ⋯)   (Maxwell–Boltzmann)   (7.76)

When the temperature is high, the number of quantum states that become available for occupation is very large; when the density is low, the number of particles in the system is small. Thus we have very few particles occupying a very large number of quantum states: the number of quantum states is very large compared to the number of particles. In this dilute limit almost every occupation number is 0 or 1, for which n1! n2! ⋯ = 1, and hence Ω̂ = 1 for all the three statistics.
For Bose–Einstein and Fermi–Dirac statistics, the mean occupancy of the single-particle state k is

⟨n_k⟩ = [ Π_{i≠k} Σ_n x_i^n ] [ Σ_n n x_k^n ] / ( [ Π_{i≠k} Σ_n x_i^n ] [ Σ_n x_k^n ] ) = Σ_n n x_k^n / Σ_n x_k^n

where the sums run over all allowed occupation numbers n. For Fermi–Dirac statistics (n = 0, 1),

⟨n_k⟩ = x_k/(1 + x_k) = 1/(x_k^{−1} + 1) = 1/(exp[β(ε_k − μ)] + 1)

For Bose–Einstein statistics (n = 0, 1, 2, …), using Σ_n n x_k^n = x_k/(1 − x_k)² and Σ_n x_k^n = 1/(1 − x_k),

⟨n_k⟩ = x_k/(1 − x_k) = 1/(x_k^{−1} − 1) = 1/(exp[β(ε_k − μ)] − 1)
For Maxwell–Boltzmann statistics,

⟨n_k⟩_MB = [ Π_{i≠k} Σ_n x_i^n/n! ] [ Σ_n n x_k^n/n! ] / ( [ Π_{i≠k} Σ_n x_i^n/n! ] [ Σ_n x_k^n/n! ] )

         = [ Σ_{n=0}^{∞} n x_k^n/n! ] / [ Σ_{n=0}^{∞} x_k^n/n! ]
In the above, the summations in the numerator and the denominator are evaluated analytically as follows. Consider

S(x) = 1 + x + x² + ⋯ = 1/(1 − x)

dS/dx = 1 + 2x + 3x² + 4x³ + ⋯ = 1/(1 − x)²

x dS/dx = x + 2x² + 3x³ + ⋯ = x/(1 − x)²

We start with the definition,
Σ_{n=0}^{∞} x^n/n! = exp(x)

Differentiate both sides of the above equation with respect to x. You get

Σ_{n=0}^{∞} n x^{n−1}/n! = exp(x)  ⟹  Σ_{n=0}^{∞} n x^n/n! = x exp(x)

Therefore,

⟨n_k⟩_MB = x_k exp(x_k)/exp(x_k) = x_k = exp[−β(ε_k − μ)] = 1/exp[β(ε_k − μ)]

The results for the three statistics can be put in one single formula:

⟨n_k⟩ = 1/(exp[β(ε_k − μ)] + a),   with  a = +1 (Fermi–Dirac),  a = 0 (Maxwell–Boltzmann),  a = −1 (Bose–Einstein)
Variation of ⟨n_k⟩ with energy is shown in the figure, see next page. Note that the x axis is (ε_k − μ)/(kB T) and the y axis is ⟨n_k⟩.

7.11.1 Fermi-Dirac Statistics

We see that for Fermi–Dirac statistics the occupation number never exceeds unity. When ε_k − μ is negative and |ε_k − μ| is large, the value of ⟨n_k⟩ tends to unity. For ε_k = μ and T ≠ 0, we have ⟨n_k⟩ = 1/2.
In the limit (ε_k − μ)/(kB T) → ∞, all the three statistics coincide. We have already seen that at high temperatures classical behaviour obtains. Then, the only way (ε_k − μ)/(kB T) can become large at high temperature (note that in the expression T is in the denominator) is for μ to be negative, with a magnitude that increases with increase of temperature. Thus for all the statistics, at high temperature the chemical potential is negative and its magnitude is large. Essentially, for Bosons the chemical potential is negative at all temperatures; it approaches zero at and below a critical temperature called the Bose–Einstein condensation temperature. For classical indistinguishable particles the chemical potential is negative at high temperature, positive at low temperatures and zero at zero temperature.
Fig. 7.1. Average occupation number ⟨n_k⟩ of a quantum state versus (ε_k − μ)/(kB T), under Bose-Einstein, Fermi-Dirac, and Maxwell-Boltzmann statistics
μ/(kB T) = ln(ρΛ³).

This means that ρΛ³ << 1 for classical behaviour to emerge: note that ln(x) = 0 for x = 1 and is negative for x < 1; as x goes from 1 to 0, the quantity ln(x) goes from 0 to −∞. This is in complete agreement with our earlier surmise that classical behaviour obtains at low ρ and/or high T. Hence all the approaches are consistent with each other and all the issues fall in place.

In this lecture we saw the first order statistics (the mean/average/expectation/first moment) of the occupation number in Fermi–Dirac, Bose–Einstein, and Maxwell–Boltzmann statistics. In the next lecture I shall tell you of the second order statistics: the fluctuations of the occupation number.
variable because it takes values that are, in general, different for different micro
states. For Bose-Einstein and Fermi-Dirac statistics, a string of occupation
numbers specifies completely a micro state. We found that for Bosons and
Fermions, the average value of nk can be expressed as,
⟨n_k⟩ = Σ_{n} n x_k^n / Σ_{n} x_k^n

where,

x_k = exp[−β(ε_k − μ)]

Formally,

⟨n_k⟩ = Σ_{n} n P(n)

In the above, P(n) ≡ P(n_k = n) is the probability that the random variable n_k takes the value n. Comparing the above with the first equation, we find

P(n) ≡ P(n_k = n) = x_k^n / Σ_{m} x_k^m
For Fermi–Dirac statistics (n = 0 or 1),

P(n) = 1/(1 + x_k)  for n = 0 ;    P(n) = x_k/(1 + x_k)  for n = 1

We have

⟨n_k⟩ = Σ_{n=0}^{1} n P(n) = x_k/(1 + x_k) ≡ ζ

⟨n_k²⟩ = Σ_{n=0}^{1} n² P(n) = x_k/(1 + x_k) = ζ

σ² = ⟨n_k²⟩ − ⟨n_k⟩² = ζ(1 − ζ)

The relative fluctuation of the random variable n_k is defined as the standard deviation σ divided by the mean ζ. Let us denote the relative fluctuation by the symbol η. For Fermi–Dirac statistics we have,

η_FD = σ/ζ = √(1/ζ − 1)

For Bose–Einstein statistics, P(n) ∝ x_k^n with n = 0, 1, 2, …, and

⟨n_k⟩ = Σ_{n=0}^{∞} n P(n) = x_k/(1 − x_k) ≡ ζ,
consistent with the result obtained earlier. Inverting the above, we get,

x_k = ζ/(1 + ζ),  so that  P(n) = ζ^n/(1 + ζ)^{n+1}

Consider the generating function

P̃(z) = Σ_{n=0}^{∞} z^n P(n) = [1/(1 + ζ)] Σ_{n=0}^{∞} [ζz/(1 + ζ)]^n

     = [1/(1 + ζ)] × 1/(1 − ζz/(1 + ζ)) = 1/(1 + ζ(1 − z))

Let us now differentiate P̃(z) with respect to z and in the resulting expression set z = 1. We shall get ⟨n_k⟩, see below.

∂P̃/∂z = ζ/[1 + ζ(1 − z)]² ;    (∂P̃/∂z)|_{z=1} = ζ
see Problem No. 7 (Problem set 2, page 4). Let me recollect: the simplest problem in which the geometric distribution arises is coin tossing. Take a p-coin, i.e. a coin for which the probability of Heads is p and that of Tails is q = 1 − p. Toss the coin until the side Heads appears. The number of tosses is a random variable with a geometric distribution P(n) = q^{n−1} p. We can write this distribution in terms of ζ = ⟨n⟩ = 1/p and get P(n) = (ζ − 1)^{n−1}/ζ^n.
Similarly, from the second derivative of P̃(z) at z = 1,

⟨n_k²⟩ = 2ζ² + ζ

σ² = ⟨n_k²⟩ − ⟨n_k⟩² = ζ² + ζ

η_BE = σ/ζ = √(1/ζ + 1)
For doing the problem, you will need the following tricks.

S(x) = Σ_{n=0}^{∞} x^n = 1 + x + x² + x³ + ⋯ = 1/(1 − x)

dS/dx = Σ_{n=1}^{∞} n x^{n−1} = 1 + 2x + 3x² + 4x³ + ⋯ = 1/(1 − x)²

x dS/dx = Σ_{n=1}^{∞} n x^n = x + 2x² + 3x³ + 4x⁴ + ⋯ = x/(1 − x)²

(d/dx)[x dS/dx] = Σ_{n=1}^{∞} n² x^{n−1} = 1 + 2²x + 3²x² + 4²x³ + ⋯ = 2x/(1 − x)³ + 1/(1 − x)²

x (d/dx)[x dS/dx] = Σ_{n=1}^{∞} n² x^n = x + 2²x² + 3²x³ + 4²x⁴ + ⋯ = 2x²/(1 − x)³ + x/(1 − x)²

You can employ the above trick to derive a power series for ln(1 − x), see below.

∫ dx/(1 − x) = −ln(1 − x) = x + x²/2 + x³/3 + x⁴/4 + ⋯
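The series identities above are easy to verify numerically. A minimal sketch, at a sample value of x:

```python
from math import isclose

# Sketch: numerically check the series identities quoted above at a sample x.
x, terms = 0.3, 200   # |x| < 1; 200 terms is ample at this accuracy

s1 = sum(n * x**n for n in range(1, terms))
assert isclose(s1, x / (1 - x) ** 2, rel_tol=1e-12)

s2 = sum(n**2 * x**n for n in range(1, terms))
assert isclose(s2, 2 * x**2 / (1 - x) ** 3 + x / (1 - x) ** 2, rel_tol=1e-12)
```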
For Maxwell–Boltzmann statistics,

P(n) = (x_k^n/n!) (1/exp(x_k)) = (ζ^n/n!) exp(−ζ)

The random variable n_k has a Poisson distribution; the variance equals the mean. Thus the relative fluctuation is given by

η_MB = σ/ζ = 1/√ζ

We can now write the relative fluctuations for the three statistics in one single formula as,

η = √(1/ζ − a),   with  a = +1 (Fermi–Dirac),  a = 0 (Maxwell–Boltzmann),  a = −1 (Bose–Einstein)
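The unified fluctuation formula can be checked directly against the three distributions P(n) derived above. A sketch, with an arbitrary value of x_k:

```python
from math import exp, factorial, isclose, sqrt

# Sketch: compute the relative fluctuation eta = sigma/<n> directly from P(n)
# for each statistics, and compare with eta = sqrt(1/zeta - a).
x = 0.4                      # x_k = exp[-beta(e_k - mu)], arbitrary value < 1

def moments(probs):
    mean = sum(n * p for n, p in enumerate(probs))
    second = sum(n * n * p for n, p in enumerate(probs))
    return mean, second - mean**2

# Fermi-Dirac: n in {0, 1}
pFD = [1 / (1 + x), x / (1 + x)]
# Bose-Einstein: geometric distribution, truncated at large n
pBE = [(1 - x) * x**n for n in range(200)]
# Maxwell-Boltzmann: Poisson with mean x
pMB = [exp(-x) * x**n / factorial(n) for n in range(60)]

for probs, a in [(pFD, +1), (pBE, -1), (pMB, 0)]:
    zeta, var = moments(probs)
    assert isclose(sqrt(var) / zeta, sqrt(1 / zeta - a), rel_tol=1e-6)
```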
Consider the ratio

r = P(n)/P(n − 1).

For Maxwell–Boltzmann statistics, r = ζ/n. The ratio r is inversely proportional to n. This is the normal behaviour; the inverse dependence of r on n is what we should expect when the events are uncorrelated. Recall the discussions we had on the Poisson process: Problem 22, Assignment 6.

On the other hand, for Bose–Einstein statistics, the ratio is given by

r = P(n)/P(n − 1) = ζ/(ζ + 1)

r is independent of n. This means a new particle will get into any of the quantum states with equal probability, irrespective of how abundantly or how sparsely that particular quantum state is already populated. An empty quantum state has the same probability of acquiring an extra particle as an abundantly populated quantum state.
Thus, compared to classical particles obeying Maxwell–Boltzmann statistics, Bosons exhibit a tendency to bunch together. By nature, Bosons like to be together. Note that this bunching tendency is not due to interaction between Bosons; we are considering ideal Bosons. The bunching is purely a quantum mechanical effect; it arises from the symmetry property of the wave function.

For Fermions, the situation is quite the opposite. There is what we may call an aversion to bunching; call it anti-bunching if you like. No Fermion would like to have another Fermion in its quantum state. A Fermion behaves like a dog in the manger.
Problems

7.1. Consider Fermi-Dirac statistics at T = 0. Show how the graph of ⟨n_k⟩ versus (ε_k − μ)/(kB T) looks in this limit.
7.6. For Bose-Einstein statistics, with P(n) = ζ^n/(1 + ζ)^{n+1}, verify first that ⟨n_k⟩ = Σ_{n=0}^{∞} n P(n) = ζ. Then calculate the second moment, ⟨n_k²⟩, the hard way:

⟨n_k²⟩ = [1/(1 + ζ)] Σ_{n=0}^{∞} n² [ζ/(1 + ζ)]^n

Calculate the relative fluctuations and show that your results agree with the ones obtained employing the generating function technique.

7.7. Show that the chemical potential of Bosons is negative at all temperatures; at best it can become zero at very low temperatures. Consider the expression

⟨n_k⟩ = 1/(exp[β(ε_k − μ)] − 1)

Take the lowest energy quantum state to be of energy zero, i.e. ε_0 = 0. Show that a positive μ is unphysical.
8
Bose Einstein Condensation
The grand canonical partition function for ideal Bosons is

Q(T, V, λ) = Π_i 1/(1 − λ exp(−βε_i))   (8.1)

ln Q = −Σ_i ln[1 − λ exp(−βε_i)]   (8.2)
Recall, from thermodynamics, that G is obtained by a Legendre transform of U(S, V, N): S → T; N → μ; and U → G.

G(T, V, μ) = U − TS − μN   (8.4)

T = (∂U/∂S)_{V,N}   (8.5)

μ = (∂U/∂N)_{S,V}   (8.6)
P(T, V, μ) = −(∂G/∂V)_{T,μ}   (8.8)

S(T, V, μ) = −(∂G/∂T)_{V,μ}   (8.9)

N(T, V, μ) = −(∂G/∂μ)_{T,V}   (8.10)
8.1.1 ⟨N⟩ = Σ_k ⟨n_k⟩

For Bosons, we found that the average occupancy of a (single-particle) quantum state k is given by,

⟨n_k⟩ = λ exp(−βε_k)/(1 − λ exp(−βε_k))   (8.13)

⟨N⟩ = Σ_k λ exp(−βε_k)/(1 − λ exp(−βε_k))

8.1.2 Σ_k → ∫ g(ε) dε
Let us now convert the sum over quantum states to an integral over energy. To this end we need an expression for the number of quantum states in an infinitesimal interval dε centered at ε. Let us denote this quantity by g(ε)dε. We call g(ε) the density of (energy) states. Thus we have,

⟨N⟩ = ∫₀^∞ g(ε) λ exp(−βε)/(1 − λ exp(−βε)) dε   (8.14)

We need an expression for the density of states. We have done this exercise earlier; in fact we carried out both classical counting and quantum counting and found that both lead to the same result. The density of states is given by,

g(ε) = 2πV (2m/h²)^{3/2} ε^{1/2}   (8.15)
We then have,

⟨N⟩ = 2πV (2m/h²)^{3/2} ∫₀^∞ [λ exp(−βε)/(1 − λ exp(−βε))] ε^{1/2} dε   (8.16)

We note that 0 < λ < 1. This suggests that the integrand in the above can be expanded in powers of λ. To this end we write

1/(1 − λ exp(−βε)) = Σ_{k=0}^{∞} λ^k exp(−kβε)   (8.17)

This gives us

λ exp(−βε)/(1 − λ exp(−βε)) = Σ_{k=0}^{∞} λ^{k+1} exp[−(k + 1)βε] = Σ_{k=1}^{∞} λ^k exp(−kβε)   (8.18–8.19)
⟨N⟩ = 2πV (2m/h²)^{3/2} Σ_{k=1}^{∞} λ^k ∫₀^∞ exp(−kβε) ε^{1/2} dε   (8.20)

Substituting x = kβε,

∫₀^∞ exp(−kβε) ε^{1/2} dε = (kβ)^{−3/2} ∫₀^∞ exp(−x) x^{1/2} dx = (kβ)^{−3/2} Γ(3/2)   (8.21–8.23)

With Γ(3/2) = (1/2)Γ(1/2) = √π/2,

⟨N⟩ = 2πV (2mkB T/h²)^{3/2} (√π/2) Σ_{k=1}^{∞} λ^k/k^{3/2}

    = V (2πmkB T/h²)^{3/2} Σ_{k=1}^{∞} λ^k/k^{3/2}   (8.24–8.27)
In an earlier lecture, we defined the thermal wave length, denoted by the symbol Λ. This is the de Broglie wave length associated with a particle having thermal energy of the order of kB T; it is also called the quantum wavelength. It is given by (see earlier notes)

Λ = h/√(2πm kB T)   (8.28)

The sum over k in the expression for ⟨N⟩ is usually denoted by the symbol g_{3/2}(λ):

g_{3/2}(λ) = Σ_{k=1}^{∞} λ^k/k^{3/2} = λ + λ²/2^{3/2} + λ³/3^{3/2} + ⋯   (8.29–8.30)
Thus we get,

⟨N⟩ = (V/Λ³) g_{3/2}(λ)   (8.31)

We can write the above as,

⟨N⟩Λ³/V = g_{3/2}(λ)   (8.32)

The fugacity is small at high temperature. For small λ we can replace g_{3/2}(λ) by λ. We get

⟨N⟩ = λ V/Λ³   (8.33)
This is consistent with direct evaluation in the small-λ (Maxwell–Boltzmann) limit:

⟨N⟩ = Σ_k ⟨n_k⟩ = Σ_k λ exp(−βε_k)   (8.34)

    = 2πλV (2m/h²)^{3/2} ∫₀^∞ ε^{1/2} exp(−βε) dε   (8.35)

    = 2πλV (2mkB T/h²)^{3/2} Γ(3/2) = 2πλV (2mkB T/h²)^{3/2} (√π/2)   (8.37–8.38)

    = λ V (2πmkB T/h²)^{3/2} = λ V/Λ³   (8.39–8.41)
140
4
3.5
3
2.5
2
1.5
g3/2 ()
1
0.5
0.2
0.4
0.6
0.8
At λ = 1 the sum takes its largest value:

g_{3/2}(λ = 1) = Σ_{k=1}^{∞} 1/k^{3/2}   (8.42)
              = ζ(3/2)   (8.43)
              = 2.612   (8.44)

where ζ(n) denotes the Riemann zeta function,

ζ(n) = Σ_{k=1}^{∞} 1/k^n.   (8.45)
⟨N⟩ = λ/(1 − λ) + Σ_{k≠0} λ exp(−βε_k)/(1 − λ exp(−βε_k))   (8.47)

In the above, we have separated the ground state occupancy and the occupancy of all the excited states. Let ⟨N₀⟩ denote the ground state occupancy. It is given by the first term,

⟨N₀⟩ = λ/(1 − λ)   (8.48)

The occupancy of all the excited states is given by the second term, where the sum is taken only over the indices k representing the excited states. Let ⟨N_e⟩ denote the occupancy of the excited states.

Footnote: The chemical potential approaches the energy of the ground state. Without loss of generality, we can set the ground state at zero energy.
⟨N_e⟩ = Σ_{k≠0} λ exp(−βε_k)/(1 − λ exp(−βε_k))   (8.49)

In the above, the sum over k can be replaced by an integral over energy. In the integral over energy, we can still keep the lower limit of integration as zero, since the density of states, which supplies the weight factor, vanishes at zero energy. Accordingly we write

⟨N⟩ = ⟨N₀⟩ + ⟨N_e⟩ = λ/(1 − λ) + (V/Λ³) g_{3/2}(λ)   (8.50–8.51)

We thus have,

⟨N⟩Λ³/V = (Λ³/V) λ/(1 − λ) + g_{3/2}(λ)   (8.52)

Let us define the number density (the number of particles per unit volume), denoted by the symbol ρ. It is given by

ρ = ⟨N⟩/V   (8.53)
The function λ/(1 − λ) diverges at λ = 1, as you can see from the figure below. Hence the relevant curve for carrying out graphical inversion is the one that depicts the sum of the singular part (which takes care of the occupancy of the ground state) and the regular part (which takes care of the occupancy of the excited states). For a value of Λ³/V = 0.05 we have plotted both the curves and their sum in the figure below. Thus for any value of ρΛ³ we can now determine the fugacity by graphical inversion. We carry out such an exercise, obtain the values of λ for various values of ρΛ³, and the figure below depicts the results. It is clear that when ρΛ³ > 2.612, the fugacity is close to unity. How close can it approach unity?
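The graphical inversion described above can equally well be done numerically. The sketch below uses bisection on the monotonic total curve, with the same Λ³/V = 0.05 used for the plot:

```python
# Sketch: determine the fugacity by numerical (rather than graphical) inversion
# of rho*Lambda^3 = (Lambda^3/V)*lam/(1-lam) + g_{3/2}(lam), using bisection.
def g32(lam, terms=10000):
    return sum(lam**k / k**1.5 for k in range(1, terms))

def total(lam, l3_over_V=0.05):
    return l3_over_V * lam / (1.0 - lam) + g32(lam)

def invert(rho_l3, lo=0.0, hi=1.0 - 1e-12):
    # bisection: total(lam) is monotonically increasing in lam on (0, 1)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total(mid) < rho_l3:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = invert(3.0)          # a value of rho*Lambda^3 above 2.612
assert abs(total(lam) - 3.0) < 1e-6
assert lam > 0.9           # fugacity close to unity past the transition point
```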
Let us postulate

λ = 1 − a/N,

where a is a number. To determine a we proceed as follows.

Footnote: We have reasons to postulate λ = 1 − a/N; this is related to the mechanism underlying Bose-Einstein condensation, whose details we shall discuss later. In fact, following Donald A. McQuarrie, Statistical Mechanics, Harper and Row (1976), p. 173, we can make the postulate λ = 1 − a/V. This should also lead to the same conclusions.
[Figure: λ/(1 − λ) versus λ; the function diverges as λ → 1.]
We have,

λ/(1 − λ) = (1 − a/N)/(a/N) = N/a − 1 ≈ N/a   if N >> a   (8.54–8.55)

We start with,

ρΛ³ = (Λ³/V) λ/(1 − λ) + g_{3/2}(λ)   (8.56)

For λ = 1 − a/N this becomes

ρΛ³ = (Λ³/V)(N/a) + g_{3/2}(1) = ρΛ³/a + g_{3/2}(1)   (8.57)
Fig. 8.3. ρΛ³ versus λ. The singular part [Λ³/V][λ/(1 − λ)] (the bottom-most curve), the regular part g_{3/2}(λ) (the middle curve), and the total (ρΛ³) are plotted. For this plot we have taken Λ³/V as 0.05
Thus we get,

a = ρΛ³/(ρΛ³ − g_{3/2}(1))   (8.58)

The point ρΛ³ = g_{3/2}(1) = 2.612 is a special point indeed. What is the physical significance of this point? To answer this question, consider the quantity ρΛ³ as a function of temperature with ρ kept at a constant value:

ρΛ³ = ρ h³/(2πmkB T)^{3/2}   (8.59)

The temperature dependence of this quantity is shown below. At high temperature, for which ρΛ³ < g_{3/2}(1) = 2.612, we can determine the value of λ from the equation g_{3/2}(λ) = ρΛ³ by graphical or numerical inversion.
Fig. 8.4. Fugacity λ versus ρΛ³, obtained by inversion of ρΛ³ = [Λ³/V][λ/(1 − λ)] + g_{3/2}(λ) with Λ³/V = 0.05; the fugacity approaches unity beyond ρΛ³ = 2.612
At low temperatures, for which ρΛ³ > g_{3/2}(1),

a = ρΛ³/(ρΛ³ − g_{3/2}(1))   (8.60–8.61)

N₀ = N/a   (8.62)

N₀/N = 1/a   (8.63)
     = 1 − g_{3/2}(1)/(ρΛ³)   (8.64)
Define the Bose-Einstein condensation temperature T_BEC by ρΛ³_BEC = g_{3/2}(1). Therefore,

N₀/N = 1/a   (8.66)
     = 1 − g_{3/2}(1)/(ρΛ³)   (8.67)
     = 1 − Λ³_BEC/Λ³   (8.68)
     = 1 − (T/T_BEC)^{3/2}   (8.69–8.70)
Fig. 8.5. Ground state occupation N₀/N = 1 − (T/T_BEC)^{3/2} as a function of T/T_BEC
At T = T_BEC we have ρΛ³ = 2.612:   (8.71)

ρ [h²/(2πm kB T_BEC)]^{3/2} = 2.612

kB T_BEC = (h²/2πm) (N/(2.612 V))^{2/3}   (8.72)
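The formula for T_BEC is easy to evaluate numerically. The sketch below uses rough, illustrative numbers for a liquid-helium-4-like system (the mass and density are approximate inputs of my own, not from the text):

```python
from math import pi

# Sketch: Bose-Einstein condensation temperature from
#   kB*T_BEC = (h^2/(2*pi*m)) * (rho/2.612)^(2/3),
# with rough liquid-helium-4 numbers (illustrative assumptions).
h  = 6.62607015e-34      # Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
m  = 6.646e-27           # mass of a He-4 atom, kg (approximate)
rho = 2.2e28             # number density, m^-3 (rough liquid-helium value)

T_BEC = (h**2 / (2 * pi * m * kB)) * (rho / 2.612) ** (2.0 / 3.0)

# Condensate fraction below the transition: N0/N = 1 - (T/T_BEC)^(3/2)
def condensate_fraction(T):
    return 1.0 - (T / T_BEC) ** 1.5 if T < T_BEC else 0.0

assert 1.0 < T_BEC < 5.0              # comes out near ~3 K for these inputs
assert condensate_fraction(0.0) == 1.0
```

For these inputs T_BEC lands near 3 K, in the right neighbourhood of the lambda transition of liquid helium, though the ideal-gas formula of course ignores interactions.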
The grand potential for Bosons is

G = −kB T ln Q = −kB T ln Π_k 1/(1 − λ exp(−βε_k))   (8.73–8.74)

  = kB T Σ_k ln[1 − λ exp(−βε_k)]   (8.75)

Now we shall be careful and separate the singular part and the regular part to get,

G = kB T ln(1 − λ) + kB T Σ_{k≠0} ln[1 − λ exp(−βε_k)]   (8.76)

We have

ln[1 − λ exp(−βε_k)] = −Σ_{j=1}^{∞} (λ^j/j) exp(−jβε_k)   (8.77)

We proceed by converting the sum over the excited states into an integral over energy with the density of states:

Σ_{k≠0} (·) → 2πV (2m/h²)^{3/2} ∫₀^∞ (·) ε^{1/2} dε   (8.78)
G = kB T ln(1 − λ) + kB T Σ_{k≠0} ln[1 − λ exp(−βε_k)]   (8.79)

  = kB T ln(1 − λ) − kB T 2πV (2m/h²)^{3/2} Σ_{j=1}^{∞} (λ^j/j) ∫₀^∞ ε^{1/2} exp(−jβε) dε   (8.80)

  = kB T ln(1 − λ) − kB T 2πV (2mkB T/h²)^{3/2} Γ(3/2) Σ_{j=1}^{∞} λ^j/j^{5/2}   (8.81–8.82)

  = kB T ln(1 − λ) − kB T (V/Λ³) Σ_{j=1}^{∞} λ^j/j^{5/2}   (8.83–8.84)

Thus we have,

G(T, V, λ) = kB T ln(1 − λ) − kB T (V/Λ³) g_{5/2}(λ)   (8.85)
Consider the grand canonical partition function

Q(T, V, μ) = Σ_i exp[−β(E_i − μN_i)],   (8.86)

where E_i is the energy of the open system when in micro state i and N_i is the number of particles in the open system when in micro state i. Let λ = exp(βμ). Then we get,

Q(β, V, λ) = Σ_i exp(−βE_i) λ^{N_i}   (8.87)

−(∂Q/∂β)_{λ,V} = Σ_i E_i exp(−βE_i) λ^{N_i}   (8.88)

−(1/Q)(∂Q/∂β)_{λ,V} = −(∂ ln Q/∂β)_{λ,V} = ⟨E⟩ = U   (8.89–8.90)

ln Q = −Σ_i ln[1 − λ exp(−βε_i)]   (8.91)

U = −(∂ ln Q/∂β)_{λ,V} = Σ_i ε_i λ exp(−βε_i)/(1 − λ exp(−βε_i))   (8.92)
Let us now go to the continuum limit by converting the sum over states into an integral over energy, and get,

U = (3/2) kB T (V/Λ³) g_{5/2}(λ)   (8.93)
Let us now investigate the energy of the system at T > T_BEC. When the temperature is high, the number of Bosons in the ground state is negligibly small, and the total energy of the system is the same as the one given above. For temperatures less than T_BEC, the ground state gets populated anomalously; the Bosons in the ground state do not contribute to the energy. Hence for T < T_BEC we have λ ≈ 1 and

U = (3/2) kB T (V/Λ³) g_{5/2}(1)   (8.94)
  = (3/2) ⟨N⟩ kB T [g_{5/2}(1)/g_{3/2}(1)] (T/T_BEC)^{3/2}   (8.95)

Thus we have,

U = (3/2) kB T (V/Λ³) g_{5/2}(λ)   for T > T_BEC ;
U = (3/2) kB T (V/Λ³) g_{5/2}(1)   for T < T_BEC   (8.96)
We have,

⟨N⟩ = N₀ + N_e   (8.97)

N₀ = λ/(1 − λ)   (8.98)

N_e = (V/Λ³) g_{3/2}(λ)   (8.99)

For T > T_BEC the ground state occupancy is negligible, so ⟨N⟩ = (V/Λ³) g_{3/2}(λ). Eliminating V/Λ³ in favour of ⟨N⟩ gives

U = (3/2) ⟨N⟩ kB T g_{5/2}(λ)/g_{3/2}(λ)                          for T > T_BEC
  = (3/2) ⟨N⟩ kB T [g_{5/2}(1)/g_{3/2}(1)] (T/T_BEC)^{3/2}        for T < T_BEC   (8.101)

For T > T_BEC, the heat capacity is obtained from

CV/(⟨N⟩kB) = (∂/∂T) [ (3T/2) g_{5/2}(λ)/g_{3/2}(λ) ]   (8.102)
To carry out this differentiation we need three relations.

First Relation:

(∂/∂T) g_{3/2}(λ) = −(3/(2T)) g_{3/2}(λ)   (8.103)

Proof: For T > T_BEC, at fixed ⟨N⟩ and V, we have g_{3/2}(λ) = ρΛ³. Hence

(∂/∂T)[g_{3/2}(λ)] = ρ (∂Λ³/∂T) = 3ρΛ² (∂Λ/∂T)   (8.104–8.105)

Λ = h/√(2πm kB T)  ⟹  ∂Λ/∂T = −Λ/(2T)   (8.106–8.108)

(∂/∂T)[g_{3/2}(λ)] = −(3/(2T)) ρΛ³ = −(3/(2T)) g_{3/2}(λ)   (8.109–8.110)

Q.E.D.
Second Relation:

(∂/∂λ)[g_{n/2}(λ)] = (1/λ) g_{(n/2)−1}(λ)

Proof: We have, by definition, g_{n/2}(λ) = Σ_{k=1}^{∞} λ^k/k^{n/2}. Therefore,

(∂/∂λ)[g_{n/2}(λ)] = Σ_{k=1}^{∞} k λ^{k−1}/k^{n/2}   (8.111–8.112)
                   = (1/λ) Σ_{k=1}^{∞} λ^k/k^{(n/2)−1}   (8.113)
                   = (1/λ) g_{(n/2)−1}(λ)   (8.114)

Q.E.D.
Third Relation:

(1/λ)(dλ/dT) = −(3/(2T)) g_{3/2}(λ)/g_{1/2}(λ)   (8.115)

Proof: We proceed as follows:

(d/dT)[g_{3/2}(λ)] = (∂g_{3/2}/∂λ)(dλ/dT) = (1/λ) g_{1/2}(λ) (dλ/dT)   (8.116)

Equating this to −(3/(2T)) g_{3/2}(λ), as required by the first relation, gives

(1/λ)(dλ/dT) = −(3/(2T)) g_{3/2}(λ)/g_{1/2}(λ)   (8.117)

Q.E.D.
We can now carry out the differentiation:

CV/(⟨N⟩kB) = (∂/∂T) [ (3T/2) g_{5/2}(λ)/g_{3/2}(λ) ]

           = (3/2) g_{5/2}(λ)/g_{3/2}(λ)
           + (3T/2) [ (1/g_{3/2}(λ)) (∂g_{5/2}/∂T) − (g_{5/2}(λ)/g²_{3/2}(λ)) (∂g_{3/2}/∂T) ]

Using the second and third relations,

∂g_{5/2}/∂T = (1/λ) g_{3/2}(λ) (dλ/dT) = −(3/(2T)) g²_{3/2}(λ)/g_{1/2}(λ),

and, by the first relation, ∂g_{3/2}/∂T = −(3/(2T)) g_{3/2}(λ). Substituting these,

CV/(⟨N⟩kB) = (3/2) g_{5/2}(λ)/g_{3/2}(λ) − (9/4) g_{3/2}(λ)/g_{1/2}(λ) + (9/4) g_{5/2}(λ)/g_{3/2}(λ)

           = (15/4) g_{5/2}(λ)/g_{3/2}(λ) − (9/4) g_{3/2}(λ)/g_{1/2}(λ)   (8.118)
Now let us consider the case T < T_BEC. We have,

U/(⟨N⟩kB) = (3/2) T [g_{5/2}(1)/g_{3/2}(1)] (T/T_BEC)^{3/2}   (8.119)

Thus U ∝ T^{5/2}, and

CV/(⟨N⟩kB) = (1/(⟨N⟩kB)) (∂U/∂T) = (15/4) [g_{5/2}(1)/g_{3/2}(1)] (T/T_BEC)^{3/2}   (8.120–8.121)
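The two branches of the heat capacity can be evaluated numerically. This is a sketch only: it truncates the g-series and fixes λ above the transition by bisecting g_{3/2}(λ) = 2.612 (T_BEC/T)^{3/2}:

```python
# Sketch: specific heat of ideal Bosons around T_BEC, from
#   CV/(N kB) = (15/4) g52/g32 - (9/4) g32/g12          for T > T_BEC
#   CV/(N kB) = (15/4) [g52(1)/g32(1)] (T/T_BEC)^{3/2}  for T < T_BEC
def g(s, lam, terms=50000):
    # truncated series for g_s(lambda); adequate away from the slow-converging tail
    return sum(lam**k / k**s for k in range(1, terms))

def cv_over_NkB(t):                      # t = T/T_BEC
    if t <= 1.0:
        return (15.0 / 4) * (g(2.5, 1.0) / g(1.5, 1.0)) * t**1.5
    target = g(1.5, 1.0) * t**-1.5       # g32(lam) = 2.612/t^{3/2}
    lo, hi = 0.0, 1.0 - 1e-9             # bisection for lambda
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(1.5, mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return (15.0 / 4) * g(2.5, lam) / g(1.5, lam) - (9.0 / 4) * g(1.5, lam) / g(0.5, lam)

assert abs(cv_over_NkB(1.0) - 1.925) < 0.02   # cusp value 15 zeta(5/2)/(4 zeta(3/2))
assert abs(cv_over_NkB(5.0) - 1.5) < 0.1      # approaches the classical value 3/2
```

The cusp at T = T_BEC, above the classical value 3/2, is exactly the feature shown in the figure below.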
The specific heat is plotted against temperature in the figure below; it rises to a cusp at T = T_BEC and approaches the classical value 3NkB/2 at high temperature.

Fig. 8.6. Heat capacity in the neighbourhood of the Bose - Einstein condensation temperature
For a particle in a cubic box of side L, the ground state energy is ε₀ = 3h²/(8mL²).
The ground state occupancy is

⟨N₀⟩ = 1/(exp[(ε₀ − μ)/(kB T)] − 1)   (8.122)

so that

(ε₀ − μ)/(kB T) = ln(1 + 1/⟨N₀⟩)   (8.124–8.125)

The largest value that N₀ can take is N, i.e. when all the particles condense into the ground state. In other words, the smallest value that 1/N₀ can take is 1/N. Hence (ε₀ − μ)/[kB T] cannot be smaller than ln(1 + 1/N) ≈ 1/N; the smallest possible value it can take is 1/N, the inverse of the average number of particles in the entire system:

(ε₀ − μ)/(kB T) ≥ 1/N   (8.126)

Footnote: For large T, the numerator is large; but the denominator is also large. Note that μ(T) is negative and large in magnitude for large T. In fact the denominator goes to infinity faster than the numerator.
In a sense, the ground state forbids the chemical potential to come close to any energy level other than the ground state energy; it sort of guards all the excited states from a close visit of μ. As T → 0, the number of Bosons in the ground state increases. This precisely is the subtle mechanism underlying Bose-Einstein condensation.
9
Statistical Mechanics of Harmonic Oscillators
The Hamiltonian of a single harmonic oscillator is

H = p²/(2m) + (1/2) m ω² q²   (9.1)

where q is the distance between the current position of the harmonic oscillator and its mean position, and p its momentum; ω is the characteristic frequency of the oscillator and m its mass.
We have,

Q₁(T) = (1/h) ∫_{−∞}^{+∞} dq ∫_{−∞}^{+∞} dp exp[−β(p²/(2m) + (1/2) m ω² q²)]   (9.2)

      = (1/h) ∫_{−∞}^{+∞} dq exp[−q²/(2 kB T/(mω²))] ∫_{−∞}^{+∞} dp exp[−p²/(2m kB T)]   (9.3–9.4)

The two factors are Gaussian integrals.
Let $\sigma_1$ and $\sigma_2$ denote the standard deviations of the two zero-mean Gaussian distributions. These are given by,
$$\sigma_1 = \sqrt{\frac{k_BT}{m\omega^2}} \eqno(9.5)$$
$$\sigma_2 = \sqrt{mk_BT} \eqno(9.6)$$
$$\sigma_1\,\sigma_2 = \frac{k_BT}{\omega} \eqno(9.7)$$
For a zero-mean Gaussian,
$$\int_{-\infty}^{+\infty} dx\;\exp\left(-\frac{x^2}{2\sigma^2}\right) = \sigma\sqrt{2\pi} \eqno(9.8)$$
Therefore,
$$Q_1(T) = \frac{2\pi}{h}\,\sigma_1\,\sigma_2 = \frac{k_BT}{\hbar\omega} \eqno(9.9)$$
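Equation (9.9) is easy to verify numerically. A minimal sketch (the parameter values below are arbitrary choices of mine, in units with $\hbar = 1$): evaluate the two Gaussian integrals by the trapezoidal rule and compare $Q_1$ with $k_BT/(\hbar\omega)$.

```python
import math

def gaussian_integral(sigma, half_width=8.0, n=20000):
    """Trapezoidal rule for the integral of exp(-x^2/(2 sigma^2)) over [-8s, 8s]."""
    a = -half_width * sigma
    h = 2 * half_width * sigma / n
    s = math.exp(-half_width ** 2 / 2)  # f(a) = f(b), so 0.5*(f(a)+f(b)) = f(a)
    for i in range(1, n):
        x = a + i * h
        s += math.exp(-x * x / (2 * sigma * sigma))
    return s * h                        # approximates sigma * sqrt(2*pi)

# hypothetical parameter values in units with hbar = 1
m, omega, kBT = 1.0, 2.0, 1.5
hbar = 1.0
h_planck = 2 * math.pi * hbar

sigma_q = math.sqrt(kBT / (m * omega ** 2))   # eq. (9.5)
sigma_p = math.sqrt(m * kBT)                  # eq. (9.6)

Q1 = gaussian_integral(sigma_q) * gaussian_integral(sigma_p) / h_planck
```

With these values $Q_1$ should agree with $k_BT/(\hbar\omega) = 0.75$ to high accuracy, since the trapezoidal rule converges rapidly for smooth, rapidly decaying integrands.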
If all the oscillators are identical, i.e. they all have the same characteristic frequency of oscillation, then
$$Q_{3N}(T) = \left(\frac{k_BT}{\hbar\omega}\right)^{3N} \eqno(9.10)$$
If instead the $3N$ harmonic oscillators have $3N$ distinct characteristic frequencies, then
$$Q_{3N}(T) = \prod_{i=1}^{3N}\frac{k_BT}{\hbar\omega_i} \eqno(9.11)$$
$$F = -k_BT\,\ln Q_{3N}(T) = 3Nk_BT\,\ln\left(\frac{\hbar\omega}{k_BT}\right) \eqno(9.16)$$
$$dF = dU - T\,dS - S\,dT \eqno(9.17)$$
$$= -S\,dT - P\,dV + \mu\,dN\quad\text{(why?)} \eqno(9.18)$$
$$\mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} \eqno(9.20)$$
$$= 3k_BT\,\ln\left(\frac{\hbar\omega}{k_BT}\right) \eqno(9.21)$$
$$S = -\left(\frac{\partial F}{\partial T}\right)_{V,N} \eqno(9.22\text{-}9.23)$$
$$= 3Nk_B\left[\ln\left(\frac{k_BT}{\hbar\omega}\right) + 1\right] \eqno(9.24)$$
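The entropy formula (9.24) can be checked against a numerical derivative of the free energy (9.16). A small sketch (parameter values are hypothetical, in units where $\hbar = k_B = 1$):

```python
import math

# hypothetical parameters in units with hbar = kB = 1
N, omega, kB, hbar = 2, 1.0, 1.0, 1.0

def F(T):
    # eq. (9.16): free energy of 3N identical classical oscillators
    return 3 * N * kB * T * math.log(hbar * omega / (kB * T))

def S_numeric(T, dT=1e-5):
    # S = -(dF/dT), central finite difference
    return -(F(T + dT) - F(T - dT)) / (2 * dT)

def S_formula(T):
    # eq. (9.24): S = 3N kB [ln(kB T/(hbar omega)) + 1]
    return 3 * N * kB * (math.log(kB * T / (hbar * omega)) + 1)
```

The two evaluations agree to the accuracy of the finite difference, confirming the sign and the additive constant in (9.24).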
We also have, for $3N$ oscillators with distinct frequencies,
$$F(T,V,N) = -k_BT\sum_{i=1}^{3N}\ln\left(\frac{k_BT}{\hbar\omega_i}\right) \eqno(9.13)$$
where the frequencies are distributed with a density $g(\omega)$ normalized as
$$\int_0^{\infty} g(\omega)\,d\omega = 3N \eqno(9.15)$$
$$U = -\left(\frac{\partial\ln Q}{\partial\beta}\right)_{V,N} \eqno(9.25)$$
$$= 3Nk_BT, \eqno(9.26)$$
(9.26)
consistent with equipartition theorem which says each quadratic term in the
Hamiltonian carries kB T /2 of energy. The Hamiltonian of a single harmonic
oscillator has two quadratic terms - one in position q and the other in momentum p.
We also find that the results are consistent with the law of Dulong and Petit, which says that the heat capacity at constant volume is independent of temperature:
$$C_V = \left(\frac{\partial U}{\partial T}\right)_V \eqno(9.27)$$
$$= 3Nk_B = 3nR \eqno(9.28)$$
$$\frac{C_V}{n} = 3R \approx 6\ \text{calories (mole)}^{-1}\,\text{(Kelvin)}^{-1} \eqno(9.29)$$
More importantly, the heat capacity is the same for all materials: it depends only on the number of molecules, or the number of moles, of the substance, and not on what the substance is. The heat capacity per mole is approximately 6 calories per Kelvin.
The canonical partition function of a single quantum harmonic oscillator is
$$Q_1(T) = \sum_{n=0}^{\infty}\exp\left[-\beta\hbar\omega\left(n+\frac{1}{2}\right)\right] \eqno(9.31)$$
$$= \frac{\exp(-\beta\hbar\omega/2)}{1-\exp(-\beta\hbar\omega)} \eqno(9.32)$$
For $3N$ identical oscillators,
$$Q_{3N}(T) = \frac{\exp(-3N\beta\hbar\omega/2)}{\left[1-\exp(-\beta\hbar\omega)\right]^{3N}} \eqno(9.33)$$
The free energy is
$$F = -k_BT\,\ln Q_{3N}(T) \eqno(9.35)$$
$$= 3N\left[\frac{1}{2}\hbar\omega + k_BT\,\ln\left\{1-\exp(-\beta\hbar\omega)\right\}\right] \eqno(9.36)$$
See the footnote below for an expression for the free energy of $3N$ independent harmonic oscillators with different frequencies. We can obtain the thermodynamic properties of the system from the free energy. We get,
If the harmonic oscillators are all of different frequencies, the partition function is given by
$$Q(T) = \prod_{i=1}^{3N}\frac{\exp(-\beta\hbar\omega_i/2)}{1-\exp(-\beta\hbar\omega_i)} \eqno(9.34)$$
and the corresponding free energy is
$$F = \sum_{i=1}^{3N}\left[\frac{\hbar\omega_i}{2} + k_BT\,\ln\left\{1-\exp(-\beta\hbar\omega_i)\right\}\right] \eqno(9.37)$$
$$= \int_0^{\infty} d\omega\left[\frac{\hbar\omega}{2} + k_BT\,\ln\left\{1-\exp(-\beta\hbar\omega)\right\}\right]g(\omega) \eqno(9.38)$$
where the density of frequencies is normalized as
$$\int_0^{\infty} d\omega\; g(\omega) = 3N \eqno(9.39)$$
$$\mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} \eqno(9.40)$$
$$= 3\left[\frac{1}{2}\hbar\omega + k_BT\,\ln\left\{1-\exp(-\beta\hbar\omega)\right\}\right] \eqno(9.41)$$
$$P = -\left(\frac{\partial F}{\partial V}\right)_{T,N} = 0 \eqno(9.42\text{-}9.43)$$
$$S = -\left(\frac{\partial F}{\partial T}\right)_{V,N} \eqno(9.44)$$
$$= 3Nk_B\left[\frac{\beta\hbar\omega}{\exp(\beta\hbar\omega)-1} - \ln\left\{1-\exp(-\beta\hbar\omega)\right\}\right] \eqno(9.45)$$
$$U = -\frac{\partial\ln Q}{\partial\beta} \eqno(9.46)$$
$$= 3N\left[\frac{\hbar\omega}{2} + \frac{\hbar\omega}{\exp(\beta\hbar\omega)-1}\right] \eqno(9.47)$$
The expression for $U$ tells us that the equipartition theorem is the first victim of quantum mechanics: quantum harmonic oscillators do not obey the equipartition theorem. The average energy per oscillator is higher than the classical value of $k_BT$. Only for $T\to\infty$, where $k_BT \gg \hbar\omega$, do the quantum results coincide with the classical results.
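This statement can be made quantitative with a few lines of code. A sketch (working in units where $\hbar\omega = k_B = 1$ is my own choice) comparing the quantum mean energy per oscillator, Eq. (9.47) divided by $3N$, with the classical value $k_BT$:

```python
import math

HBAR_OMEGA = 1.0   # units where hbar*omega = 1 and kB = 1 (assumption)

def u_quantum(T):
    """Mean energy per oscillator: hw/2 + hw/(exp(hw/kT) - 1), from eq. (9.47)."""
    x = HBAR_OMEGA / T
    return HBAR_OMEGA * (0.5 + 1.0 / math.expm1(x))
```

At low $T$ the energy saturates at the zero-point value $\hbar\omega/2$ instead of vanishing; for all $T$ it exceeds $k_BT$, and the classical limit is recovered only when $k_BT \gg \hbar\omega$.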
The heat capacity at constant volume is given by
$$C_V = \left(\frac{\partial U}{\partial T}\right)_{V,N} \eqno(9.48)$$
$$= 3Nk_B\left(\frac{\hbar\omega}{k_BT}\right)^2\frac{\exp(\beta\hbar\omega)}{\left[\exp(\beta\hbar\omega)-1\right]^2} \eqno(9.49)$$
The second victim of quantum mechanics is the law of Dulong and Petit. The heat capacity depends on temperature and on the oscillator frequency. The heat capacity per mole will change from substance to substance because of its dependence on the oscillator frequency. Only in the limit $T\to\infty$ (equivalently $\hbar\omega/(k_BT)\to 0$) do we get the classical results.
To appreciate the above statement, consider a classroom wherein the chairs are already arranged with constant spacing along the length and breadth of the room. The students occupy these chairs and form a regular structure. This corresponds to a situation wherein each student is bound independently to his chair.
Now consider a situation wherein the students are mutually bound to each other. Let us say the students interact with each other in the following way: each is required to keep an arm's length from his four neighbours. If the distance between two neighbouring students is less, they are pushed outward; if more, they are pulled inward. Such mutual interactions lead to the students organizing themselves into a two-dimensional regular array.
I shall leave it to you to visualize how such mutual nearest-neighbour interactions can give rise to three-dimensional arrays.
$$V(x_1,x_2,\ldots,x_{3N}) = V(\bar{x}_1,\bar{x}_2,\ldots,\bar{x}_{3N}) + \sum_{i=1}^{3N}\left(\frac{\partial V}{\partial x_i}\right)_{\bar{x}_1,\bar{x}_2,\ldots,\bar{x}_{3N}}(x_i-\bar{x}_i)$$
$$+\ \frac{1}{2}\sum_{i=1}^{3N}\sum_{j=1}^{3N}\left(\frac{\partial^2 V}{\partial x_i\,\partial x_j}\right)_{\bar{x}_1,\bar{x}_2,\ldots,\bar{x}_{3N}}(x_i-\bar{x}_i)(x_j-\bar{x}_j) + \cdots \eqno(9.50\text{-}9.52)$$
The first term gives the minimum energy of the solid when all its atoms are in their equilibrium positions. We denote this energy by $V_0$.
The second set of terms, involving the first-order partial derivatives of the potential, are all identically zero by definition: $V$ has a minimum at $\{x_i = \bar{x}_i,\ i = 1,\ldots,3N\}$.
The third set of terms, involving the second-order partial derivatives, describes harmonic vibrations. We neglect the terms involving higher-order derivatives; this is justified if only small oscillations are present in the crystalline solid.
Thus, under the harmonic approximation, we can write the Hamiltonian as
Thus under harmonic approximations we can write the Hamiltonian as,
H = V0 +
2
3N
X
1 di
i=1
dt
3N
3N X
X
i,j i j
(9.53)
i=1 j=1
where
= xi xi
i,j
1
=
2
(9.54)
2V
xi xj
(9.55)
x
1 ,
x2 , ,
x3N
3N
X
1
i=1
m q2 + i2 qi2
(9.56)
Thus we can describe the system in terms of independent harmonic oscillators by defining a normal coordinate system, in which the equations of motion are decoupled. If there are $N$ atoms in the crystal, there are $3N$ degrees of freedom. Three of the degrees of freedom are associated with the translation of the whole crystal, and three with its rotation. Thus, there are strictly $3N-6$ normal-mode oscillations. If $N$ is of the order of $10^{25}$ or so, it doesn't matter that the number of normal modes is $3N-6$ and not $3N$.
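The decoupling into normal modes can be seen in the smallest non-trivial example. A toy sketch (not from the text; the setup and names are mine): two equal masses tied to two walls and to each other by springs of stiffness $k$, whose dynamical matrix has the in-phase and out-of-phase motions as eigenvectors.

```python
import math

m, k = 1.0, 4.0                    # hypothetical mass and spring stiffness

# dynamical matrix D[i][j] = (1/m) d^2V/dx_i dx_j for the wall-mass-mass-wall chain
a, b = 2.0 * k / m, -k / m         # D = [[a, b], [b, a]]

# a symmetric matrix [[a, b], [b, a]] has eigenvalues a + b and a - b,
# with eigenvectors (1, 1)/sqrt(2) and (1, -1)/sqrt(2)
omega2_inphase = a + b             # = k/m  : both masses move together
omega2_outphase = a - b            # = 3k/m : masses move oppositely

# verify the eigenvalue equation D v = omega^2 v for the in-phase mode
v = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))
residual = a * v[0] + b * v[1] - omega2_inphase * v[0]
```

In the normal coordinates (the sum and difference of the two displacements) the Hamiltonian splits into two independent oscillators with frequencies $\sqrt{k/m}$ and $\sqrt{3k/m}$, exactly as in Eq. (9.56).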
We can write the canonical partition function as
$$Q = \prod_{i=1}^{3N}\frac{\exp(-\beta\hbar\omega_i/2)}{1-\exp(-\beta\hbar\omega_i)} \eqno(9.57)$$
We have,
$$\ln Q = -\sum_{i=1}^{3N}\left[\frac{\beta\hbar\omega_i}{2} + \ln\left\{1-\exp(-\beta\hbar\omega_i)\right\}\right] \eqno(9.59)$$
$$= -\int_0^{\infty}\left[\frac{\beta\hbar\omega}{2} + \ln\left\{1-\exp(-\beta\hbar\omega)\right\}\right]g(\omega)\,d\omega \eqno(9.60)$$
The problem reduces to finding the function $g(\omega)$. Once we know $g(\omega)$, we can calculate the thermodynamic properties of the crystal. In particular, we can calculate the internal energy $U$ and the heat capacity; see below.
$$U = \int_0^{\infty}\left[\frac{\hbar\omega}{2} + \frac{\hbar\omega\,\exp(-\beta\hbar\omega)}{1-\exp(-\beta\hbar\omega)}\right]g(\omega)\,d\omega \eqno(9.61)$$
$$= \int_0^{\infty}\left[\frac{\hbar\omega}{2} + \frac{\hbar\omega}{\exp(\beta\hbar\omega)-1}\right]g(\omega)\,d\omega \eqno(9.62)$$
$$C_V = k_B\int_0^{\infty}\frac{(\beta\hbar\omega)^2\,\exp(\beta\hbar\omega)}{\left[\exp(\beta\hbar\omega)-1\right]^2}\,g(\omega)\,d\omega \eqno(9.63)$$
The problem of determining the function $g(\omega)$ is a non-trivial task; it is precisely here that the difficulties lie. However, there are two well-known approximations to $g(\omega)$: one due to Einstein and the other due to Debye.
Einstein assumed all the $3N$ frequencies to be the same, say $\omega_E$:
$$g(\omega) = 3N\,\delta(\omega-\omega_E) \eqno(9.64)$$
Define
$$\Theta_E = \frac{\hbar\omega_E}{k_B} \eqno(9.66)$$
and call $\Theta_E$ the Einstein temperature. Verify that this quantity has the unit of temperature. In terms of the Einstein temperature we have,
$$C_V = 3Nk_B\left(\frac{\Theta_E}{T}\right)^2\frac{\exp(\Theta_E/T)}{\left[\exp(\Theta_E/T)-1\right]^2} \eqno(9.67)$$
In the limit $T\to 0$,
$$C_V \longrightarrow 3Nk_B\left(\frac{\Theta_E}{T}\right)^2\exp(-\Theta_E/T) \eqno(9.68)$$
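Equation (9.67) is easy to evaluate. A brief sketch (the function name and the choice of units with $\Theta_E = 1$ are my own), showing the crossover from the classical plateau to the exponential suppression of Eq. (9.68):

```python
import math

def cv_einstein(T, theta_E=1.0, N=1):
    """Einstein heat capacity, eq. (9.67), in units of k_B.

    Rewritten with exp(-x) to avoid overflow at low temperature.
    """
    x = theta_E / T
    return 3.0 * N * x * x * math.exp(-x) / (1.0 - math.exp(-x)) ** 2
```

The function approaches $3N$ for $T \gg \Theta_E$ (Dulong-Petit) and vanishes as $(\Theta_E/T)^2 e^{-\Theta_E/T}$ for $T \to 0$.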
Normalizing $g(\omega) = \alpha\omega^2$ over $0 \le \omega \le \omega_D$, i.e. setting $\int_0^{\omega_D}\alpha\omega^2\,d\omega = 3N$, we get $\alpha = 9N/\omega_D^3$. Thus we have
$$g(\omega) = \frac{9N}{\omega_D^3}\,\omega^2\quad\text{for } \omega \le \omega_D \eqno(9.70)$$
$$g(\omega) = 0\quad\text{for } \omega > \omega_D \eqno(9.71)$$
Let
$$x = \beta\hbar\omega \eqno(9.72)$$
and define the Debye temperature
$$\Theta_D = \frac{\hbar\omega_D}{k_B} \eqno(9.73)$$
Then
$$C_V = 9Nk_B\left(\frac{T}{\Theta_D}\right)^3\int_0^{\Theta_D/T}\frac{x^4\,\exp(x)}{\left[\exp(x)-1\right]^2}\,dx \eqno(9.74)$$
Denote the integral by
$$I = \int_0^{\Theta_D/T}\frac{x^4\,\exp(x)}{\left[\exp(x)-1\right]^2}\,dx \eqno(9.75)$$
(Each mode is transverse or longitudinal; the transverse mode is doubly degenerate and the longitudinal mode is non-degenerate, etc.)
(To integrate by parts, take $u(x) = x^4$ and $dv(x) = \exp(x)\,dx/[\exp(x)-1]^2$, so that $v(x) = -1/[\exp(x)-1]$.)
Integrating by parts,
$$I = -\left(\frac{\Theta_D}{T}\right)^4\frac{1}{\exp(\Theta_D/T)-1} + 4\int_0^{\Theta_D/T}\frac{x^3}{\exp(x)-1}\,dx \eqno(9.76)$$
In the high-temperature limit ($T \gg \Theta_D$) the integrand of $I$ reduces to $x^2$ and we recover the classical result,
$$C_V \xrightarrow{\;T\gg\Theta_D\;} 3Nk_B \eqno(9.79\text{-}9.80)$$
In the low-temperature limit, the upper limit of the integral may be taken to infinity, and
$$C_V \xrightarrow{\;T\to 0\;} \frac{12\pi^4}{5}\,Nk_B\left(\frac{T}{\Theta_D}\right)^3 \eqno(9.83\text{-}9.84)$$
The integral $\int_0^\infty x^3/(\exp(x)-1)\,dx$ equals $\Gamma(4)\,\zeta(4)$, where $\Gamma(\cdot)$ is the gamma function and $\zeta(\cdot)$ is the Riemann zeta function; $\Gamma(4) = 3! = 6$ and $\zeta(4) = \pi^4/90$. See e.g. G. B. Arfken and H. J. Weber, Mathematical Methods for Physicists, Fourth Edition, Academic Press / Prism Books PVT LTD (1995).
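The full Debye integral (9.74) can also be evaluated by straightforward quadrature, and both limits then emerge numerically. A sketch (Simpson's rule; the names and step counts are my own):

```python
import math

def cv_debye(T, theta_D=1.0, N=1, n=4000):
    """Debye heat capacity, eq. (9.74), in units of k_B, via Simpson's rule."""
    upper = theta_D / T

    def f(x):
        # integrand x^4 e^x/(e^x - 1)^2, rewritten with exp(-x) to avoid overflow
        if x < 1e-8:
            return x * x              # small-x limit of the integrand
        return x ** 4 * math.exp(-x) / (1.0 - math.exp(-x)) ** 2

    h = upper / n                     # n must be even for Simpson's rule
    s = f(0.0) + f(upper)
    for i in range(1, n):
        s += f(i * h) * (4 if i % 2 else 2)
    integral = s * h / 3.0
    return 9.0 * N * (T / theta_D) ** 3 * integral
```

At $T \gg \Theta_D$ the result settles on the Dulong-Petit value $3Nk_B$, while at $T \ll \Theta_D$ it follows the $T^3$ law (9.83-9.84).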
where $\zeta(\cdot)$ is the Riemann zeta function, see below. Note that $\zeta(2) = \pi^2/6$ and $\zeta(4) = \pi^4/90$, etc.
The Riemann zeta function is defined as
$$\zeta(p) = \sum_{n=1}^{\infty}\frac{1}{n^p} \eqno(9.87)$$
Its convergence follows from the integral test:
$$\int_1^{\infty} x^{-p}\,dx = \begin{cases}\left.\dfrac{x^{-p+1}}{-p+1}\right|_1^{\infty} & \text{for } p \ne 1\\[10pt] \left.\ln x\,\right|_1^{\infty} & \text{for } p = 1\end{cases} \eqno(9.88)$$
The integral, and hence the series, is divergent for $p \le 1$ and convergent for $p > 1$.
9.7.1 Bernoulli Numbers
The Bernoulli numbers $B_n$ are defined by the series
$$\frac{x}{\exp(x)-1} = \sum_{n=0}^{\infty}B_n\,\frac{x^n}{n!} \eqno(9.89)$$
The even-indexed Bernoulli numbers are related to the zeta function:
$$B_{2n} = (-1)^{n-1}\,\frac{2\,(2n)!}{(2\pi)^{2n}}\sum_{p=1}^{\infty}\frac{1}{p^{2n}},\qquad n = 1,2,3,\ldots \eqno(9.91)$$
$$= (-1)^{n-1}\,\frac{2\,(2n)!}{(2\pi)^{2n}}\,\zeta(2n),\qquad n = 1,2,3,\ldots \eqno(9.92)$$
The first few even zeta values are
$$\zeta(2) = \frac{\pi^2}{6},\qquad \zeta(4) = \frac{\pi^4}{90} \eqno(9.93)$$
$$\zeta(6) = \frac{\pi^6}{945},\qquad \zeta(8) = \frac{\pi^8}{9450} \eqno(9.94)$$
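The generating function (9.89) hides a simple recursion, $\sum_{j=0}^{n} \binom{n+1}{j} B_j = 0$ for $n \ge 1$, which gives a quick way to compute $B_n$ and, through Eq. (9.92), the even zeta values. A sketch (the helper names are mine):

```python
from math import comb, factorial, pi

def bernoulli(nmax):
    """B_0 .. B_nmax from the recursion sum_{j=0}^{n} C(n+1, j) B_j = 0,
    which is equivalent to the generating function x/(e^x - 1), eq. (9.89)."""
    B = [1.0]
    for n in range(1, nmax + 1):
        s = sum(comb(n + 1, j) * B[j] for j in range(n))
        B.append(-s / (n + 1))
    return B

def zeta_even(n, B):
    """zeta(2n) from eq. (9.92) inverted:
    zeta(2n) = (-1)^(n-1) (2 pi)^(2n) B_{2n} / (2 (2n)!)."""
    return (-1) ** (n - 1) * (2 * pi) ** (2 * n) * B[2 * n] / (2 * factorial(2 * n))
```

The first values come out as $B_1 = -1/2$, $B_2 = 1/6$, $B_4 = -1/30$, and inverting (9.92) reproduces $\zeta(2) = \pi^2/6$, $\zeta(4) = \pi^4/90$, $\zeta(8) = \pi^8/9450$ as in (9.93)-(9.94).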