
Quantum Mechanics: bits and pieces 2

Notes by Sergei Winitzki


DRAFT July 18, 2005
Contents

1 Calculations in Hilbert spaces
  1.1 Matrix elements of operators
    1.1.1 Definition
    1.1.2 Examples
    1.1.3 Product of operators in terms of matrix elements
    1.1.4 Einstein summation convention
  1.2 Trace
    1.2.1 Definition
    1.2.2 Properties
  1.3 Eigenvectors and eigenvalues
  1.4 Generalized eigenbases
    1.4.1 Eigenbasis of the position operator
    1.4.2 Calculations in the x eigenbasis
    1.4.3 Relation between x and p eigenbases
  1.5 Calculations with operators
    1.5.1 Functions of operators
    1.5.2 Algebraic calculations
    1.5.3 Commutators
    1.5.4 Canonically conjugate operators
    1.5.5 Coordinate representation
1 Calculations in Hilbert spaces

In this chapter we study some methods of performing calculations with specific operators and vectors in a Hilbert space. We shall use three examples of Hilbert spaces: a finite-dimensional space $\mathbb{C}^n$, the space $L^2$ of square-integrable functions on a domain, and the space $l^2$ of square-summable infinite sequences.
1.1 Matrix elements of operators

1.1.1 Definition

The matrix element of an operator $\hat A$ with respect to a vector $|v\rangle$ and a dual vector $\langle w|$ is by definition the number $\langle w|\hat A|v\rangle$.

Suppose that an orthonormal basis $|e_l\rangle$, $l = 1, 2, ...$ is chosen and $\hat A$ is an operator. Then the matrix element of $\hat A$ with respect to the $l$-th and the $m$-th basis vectors is denoted by

    A^m_l \equiv \langle e_l | \hat A | e_m \rangle .    (1)

Thus the operator $\hat A$ is represented as an infinite matrix in the given basis. It follows that

    \hat A = \sum_{m,l} A^m_l \, |e_l\rangle\langle e_m| .    (2)
1.1.2 Examples

1. Consider the operator

    \hat A = |e_1\rangle\langle e_1| + |e_1\rangle\langle e_2| .    (3)

The matrix elements of $\hat A$ are: $A^1_1 = 1$, $A^1_2 = 0$, $A^2_1 = 1$, $A^2_2 = 0$, and all other $A^m_l = 0$ (for $l \geq 3$ or $m \geq 3$).

2. In the Hilbert space $l^2$ of infinite square-summable sequences $|z_1, z_2, ...\rangle$, consider the operator $\hat S_1$ that shifts the sequence to the right,

    \hat S_1 |z_1, z_2, z_3, ...\rangle = |0, z_1, z_2, z_3, ...\rangle .    (4)

In the natural basis $|e_m\rangle = |0, ..., 0, 1, 0, ...\rangle$ (the 1 is at the $m$-th place), the matrix elements of $\hat S_1$ are

    (\hat S_1)^m_l = \begin{cases} 1, & m = l - 1 \\ 0, & m \neq l - 1 \end{cases} .    (5)

3. In the same space $l^2$, consider the left shift operator $\hat S_{-n}$ defined by

    \hat S_{-n} |z_1, z_2, ...\rangle = |z_{n+1}, z_{n+2}, ...\rangle .    (6)

The matrix elements of $\hat S_{-n}$ are

    (\hat S_{-n})^m_l = \begin{cases} 1, & m = l + n \\ 0, & m \neq l + n \end{cases} .    (7)
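These matrix elements are easy to check numerically by truncating $l^2$ to finitely many components (a sketch; the truncation size and the test sequence are my choices):

```python
import numpy as np

N = 6  # truncate l^2 to N components (illustrative)

# Right shift S_1: matrix element (S_1)^m_l = 1 when m = l - 1.
# Storing (S_1)^m_l as the entry in row l, column m puts ones on the
# first subdiagonal (0-based indexing below).
S1 = np.zeros((N, N))
for l in range(1, N):
    S1[l, l - 1] = 1.0

z = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])
print(S1 @ z)  # -> [0. 1. 2. 3. 0. 0.], the sequence shifted right

# The left shift S_{-1} is the transpose of S_1, and S_{-1} S_1 = 1
# on the truncated space (compare the trace discussion in Sec. 1.2.2).
S_left = S1.T
print(np.allclose(S_left @ (S1 @ z), z))  # True
```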
1.1.3 Product of operators in terms of matrix elements

The matrix elements of a product $\hat A\hat B$ are given by the familiar formula for matrix multiplication,

    (\hat A\hat B)^l_k = \sum_{m=1}^{\infty} A^m_k B^l_m .    (8)

This can be proved by using the decomposition of unity,

    \langle e_k|\hat A\hat B|e_l\rangle = \langle e_k|\hat A \left( \sum_{m=1}^{\infty} |e_m\rangle\langle e_m| \right) \hat B|e_l\rangle = \sum_{m=1}^{\infty} A^m_k B^l_m .    (9)
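In a finite-dimensional space, Eq. (8) is ordinary matrix multiplication; a quick NumPy sanity check (illustrative, with the convention that $A^m_k$ is the entry in row $k$, column $m$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # a finite-dimensional example instead of an infinite matrix
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Eq. (8): (AB)^l_k = sum_m A^m_k B^l_m, i.e. sum_m A[k, m] * B[m, l].
AB_explicit = np.zeros((n, n))
for k in range(n):
    for l in range(n):
        AB_explicit[k, l] = sum(A[k, m] * B[m, l] for m in range(n))

print(np.allclose(AB_explicit, A @ B))  # True
```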
1.1.4 Einstein summation convention

The Einstein summation convention is widely used in physics. Any repeated index (conventionally, one subscript and one superscript) implies a sum over its values. Thus, $\sum_{m=1}^{\infty} A^m_k B^l_m$ is written simply as $A^m_k B^l_m$, omitting the $\sum$ symbol; the presence of the repeated index $m$ in the formula $A^m_k B^l_m$ indicates that a summation over all values of $m$ is being performed.

With the Einstein summation convention, the expansion of a vector in the basis is written as

    |v\rangle = v^j |e_j\rangle ,    (10)

the decomposition of unity is

    \hat 1 = |e_k\rangle\langle e_k| ,    (11)

and the derivation in Eq. (9) looks like

    \langle e_k|\hat A\hat B|e_l\rangle = \langle e_k|\hat A|e_m\rangle\langle e_m|\hat B|e_l\rangle = A^m_k B^l_m .    (12)
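NumPy's `einsum` implements exactly this convention: any index repeated in its subscript string is summed over. A brief illustration (the arrays are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
v = rng.standard_normal(3)

# The repeated index m is summed automatically, as in A^m_k B^l_m:
AB = np.einsum('km,ml->kl', A, B)   # same as A @ B
print(np.allclose(AB, A @ B))       # True

# Expansion |v> = v^j |e_j> with the standard basis vectors e_j:
e = np.eye(3)                        # rows are the basis vectors
w = np.einsum('j,jk->k', v, e)       # sum over the repeated index j
print(np.allclose(w, v))             # True
```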
1.2 Trace

1.2.1 Definition

The trace of a linear operator $\hat A$ is defined as the sum of the diagonal matrix elements in some basis $\{|e_j\rangle\}$,

    \mathrm{Tr}\,\hat A \equiv \sum_{j=1}^{\infty} \langle e_j|\hat A|e_j\rangle \equiv A^m_m ,    (13)

provided that this sum converges. Thus, the trace is defined only for a certain subset of linear operators $\hat A$. For example, the trace of the identity operator $\hat 1$ is undefined since

    \sum_{j=1}^{\infty} \langle e_j|\hat 1|e_j\rangle = \sum_{j=1}^{\infty} 1 = \infty .    (14)

The value of the trace defined by Eq. (13) does not depend on the choice of the basis $\{|e_j\rangle\}$; the basis does not need to be orthonormal. (We shall not prove this statement.)
1.2.2 Properties

These properties are easily proved using matrix element computations.

1. The trace is linear in the operator $\hat A$, namely

    \mathrm{Tr}(\hat A + \hat B) = \mathrm{Tr}\,\hat A + \mathrm{Tr}\,\hat B ,    (15)

provided that all three traces are well-defined.

2. If $\mathrm{Tr}(\hat A\hat B)$ and $\mathrm{Tr}(\hat B\hat A)$ are both well-defined, then

    \mathrm{Tr}(\hat A\hat B) = \mathrm{Tr}(\hat B\hat A) .    (16)

3. The trace of an operator $\hat A$ is equal to the sum of all eigenvalues of $\hat A$ (when it is well-defined). This can be derived from the definition by choosing the eigenbasis of $\hat A$ as the basis $|e_j\rangle$.

Note that it may happen that $\mathrm{Tr}\,\hat A$ and $\mathrm{Tr}\,\hat B$ are well-defined, but $\mathrm{Tr}(\hat A\hat B)$ is undefined. For example, the trace of a shift operator $\hat S_{\pm n}$ (see Sec. 1.1.2) is zero; however, $\hat S_{-1}\hat S_1 = \hat 1$ and therefore $\mathrm{Tr}(\hat S_{-1}\hat S_1)$ is undefined. (The trace of $\hat S_1\hat S_{-1}$ is also undefined.)
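Properties 1-3 can be confirmed numerically for finite matrices, where all traces are automatically well-defined (an illustrative check):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Linearity (15) and cyclicity (16) of the trace:
print(np.isclose(np.trace(A + B), np.trace(A) + np.trace(B)))  # True
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))            # True

# Property 3: trace = sum of eigenvalues (which may be complex,
# but for a real matrix their sum is real up to roundoff):
eigvals = np.linalg.eigvals(A)
print(np.isclose(np.trace(A), eigvals.sum().real))             # True
```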
1.3 Eigenvectors and eigenvalues

If $\hat A$ is a Hermitian operator, then (under certain assumptions) it is diagonalizable, all its eigenvalues are real, and any two eigenvectors with different eigenvalues are orthogonal. For an operator with a discrete spectrum, one has a set of eigenvectors $|e_j\rangle$ such that

    \hat A = \sum_{j=1}^{\infty} \lambda_j |e_j\rangle\langle e_j| , \quad \lambda_j \in \mathbb{R} .    (17)

Under certain conditions on the operator $\hat A$, the vectors $|e_j\rangle$ are an orthonormal basis in the Hilbert space. (We shall not need to go into detail about these conditions.)

In a finite-dimensional space there is a standard method for computing the full set of eigenvectors and eigenvalues of operators. One first sets up the characteristic equation $\det(\hat A - \lambda\hat 1) = 0$, then computes its roots $\lambda_j$ and determines the corresponding eigenvectors $v_j$. In the infinite-dimensional case, the determinant is undefined and there is no general method for computing the eigenvalues and eigenvectors of operators. Therefore one has to find tricks that work for each particular operator.

For example, consider the one-dimensional Laplace operator $\partial^2/\partial x^2$ acting in the space $L^2[0,1]$ of functions of $x$. The eigenvectors are functions $\psi(x)$ such that

    \frac{\partial^2}{\partial x^2} \psi(x) = \lambda \psi(x) , \quad \lambda \in \mathbb{C} .    (18)

This is a differential equation for $\psi(x)$ which has the general solution

    \psi(x) = A e^{\sqrt{\lambda}\,x} + B e^{-\sqrt{\lambda}\,x} , \quad A, B \in \mathbb{C} .    (19)

Therefore the Laplace operator has a continuous spectrum of eigenvalues $\lambda$. It is often the case that physically relevant functions $\psi(x)$ obey boundary conditions such as $\psi(0) = \psi(1) = 0$. In that case, Eq. (18) has solutions only if $\lambda = -\pi^2 n^2$ for integer $n$, namely

    \psi(x) = A \sin(\pi n x) , \quad A \in \mathbb{C} , \ n \in \mathbb{N} .    (20)

Therefore the Laplace operator has a discrete spectrum in the space of square-integrable functions with zero boundary conditions at $x = 0$ and $x = 1$.
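One way to see this discrete spectrum numerically is to replace $\partial^2/\partial x^2$ by a finite-difference matrix with zero boundary conditions; its least negative eigenvalues should approach $-\pi^2 n^2$ (a sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

N = 400                      # number of interior grid points (arbitrary)
h = 1.0 / (N + 1)            # grid spacing on [0, 1]

# Tridiagonal finite-difference Laplacian with psi(0) = psi(1) = 0:
L = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

# Sort so the least negative eigenvalue (n = 1) comes first:
eigvals = np.sort(np.linalg.eigvalsh(L))[::-1]
for n in range(1, 4):
    print(eigvals[n - 1], -(np.pi * n) ** 2)  # numerically close pairs
```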
1.4 Generalized eigenbases

In this section we shall consider the Hilbert space $L^2$ of square-integrable functions on some interval $[a, b]$.

1.4.1 Eigenbasis of the position operator

Some operators have no eigenvectors within the Hilbert space. For example, the operator $\hat x$ which acts as multiplication by $x$,

    \hat x : \psi(x) \mapsto x\, \psi(x) ,    (21)

has no eigenvectors because in that space there are no nonzero functions $\psi(x)$ such that $x\, \psi(x) = \lambda \psi(x)$ for a constant $\lambda$. Nevertheless, we can consider a larger space of "generalized functions" where this equation would have solutions. For instance, the generalized eigenvector of $\hat x$ corresponding to the eigenvalue $\lambda$ is the Dirac delta function $\delta(x - \lambda)$ since

    x\, \delta(x - \lambda) = \lambda\, \delta(x - \lambda) .    (22)

Let us denote this vector by $|\delta_\lambda\rangle$. This generalized eigenvector is defined with respect to the subspace of continuous functions in $L^2$. It is convenient to represent vectors in that subspace by expansions in the generalized vectors $|\delta_\lambda\rangle$, namely

    |\psi\rangle = \int d\lambda\, \psi(\lambda)\, |\delta_\lambda\rangle ,    (23)

where $\psi(\lambda)$ is a function that describes the components of the vector $|\psi\rangle$ in the basis $|\delta_\lambda\rangle$,

    \psi(\lambda) = \langle \delta_\lambda | \psi \rangle .    (24)

We expect that the decomposition of unity is

    \hat 1 = \int d\lambda\, |\delta_\lambda\rangle\langle \delta_\lambda| .    (25)
1.4.2 Calculations in the x eigenbasis

For brevity, the generalized eigenvectors of the position operator $\hat x$ with eigenvalues $x_1$, $x_2$, etc., will be denoted simply by $|x_1\rangle$, $|x_2\rangle$, ..., instead of $|\delta_{x_1}\rangle$ etc. The decomposition of unity is then rewritten as

    \hat 1 = \int dx\, |x\rangle\langle x| .    (26)

In the $|x\rangle$ eigenbasis, all vectors $|\psi\rangle$ are represented by functions $\psi(x) = \langle x|\psi\rangle$. Consider a linear operator $\hat A$ acting on a vector $|\psi\rangle$: the result is a new vector $|\phi\rangle \equiv \hat A|\psi\rangle$. In general, we may determine the relation between the functions $\psi(x)$ and $\phi(x)$ as follows,

    \phi(x) = \langle x|\phi\rangle = \langle x|\hat A|\psi\rangle = \langle x|\hat A \int dy\, |y\rangle\langle y|\psi\rangle = \int dy\, A^y_x\, \psi(y) ,    (27)

where the matrix element $A^y_x$ is defined by

    A^y_x \equiv \langle x|\hat A|y\rangle .    (28)

Thus, linear operators $\hat A$ can be characterized by functions $A(x, y) \equiv A^y_x$ which are integrated with $\psi(y)$ to yield the new function $\phi(x)$.

The matrix elements of some simple operators in the $|x\rangle$ basis are

    \langle x|\hat 1|y\rangle = \langle x|y\rangle = \delta(x - y) ,    (29)
    \langle x|\hat x|y\rangle = x\, \delta(x - y) ,    (30)
    \langle x|\partial_x|y\rangle = \delta'(x - y) ,    (31)
    \langle x|\exp(\alpha\,\partial_x)|y\rangle = \delta(x - y + \alpha) .    (32)

None of these are ordinary functions; recall that we are dealing with generalized vectors $|x\rangle$ which do not belong to the Hilbert space of square-integrable functions. Accordingly, the scalar product $\langle x|x\rangle$ is undefined. Instead, the generalized vectors $|x\rangle$ are normalized to the delta function as shown in Eq. (29).
1.4.3 Relation between x and p eigenbases

The momentum operator $\hat p$ is related to $\hat x$ via the commutation relation $[\hat x, \hat p] = i\hbar$. We can define the generalized eigenbasis of eigenvectors $|p\rangle$ similarly to the eigenbasis $|x\rangle$. The relation between these bases is summarized by

    \langle x|p\rangle = \frac{1}{\sqrt{2\pi\hbar}} \exp\left( \frac{ipx}{\hbar} \right) .    (33)

This relation can be derived from Eq. (58) below. Namely, from Eq. (66) it follows that $|p\rangle$ can be obtained as

    |p\rangle = C_1 \exp(ip\hat x/\hbar)\, |p = 0\rangle ,    (34)

where $|p = 0\rangle$ is the eigenvector of zero momentum and $C_1$ is a normalization constant to be determined. (Note that the eigenvector $|p\rangle$ is defined up to an arbitrary phase which could be a function of $p$, but Eq. (34) defines that phase in a particularly convenient way.) Similarly,

    |x\rangle = C_2 \exp(-i\hat p x/\hbar)\, |x = 0\rangle .    (35)

Therefore we can compute the matrix element $\langle x|p\rangle$ as

    \langle x|p\rangle = \langle x|\, C_1 \exp\left( \frac{i}{\hbar} p\hat x \right) |p = 0\rangle = C_1 e^{ipx/\hbar}\, \langle x|p = 0\rangle
                      = C_1 e^{ipx/\hbar}\, \langle x = 0|\, C_2^* \exp(i\hat p x/\hbar)\, |p = 0\rangle
                      = C_1 C_2^*\, e^{ipx/\hbar}\, \langle x = 0|p = 0\rangle .    (36)

The combination $C_1 C_2^* \langle x = 0|p = 0\rangle$ is independent of $p$ and $x$ and is thus a normalization constant that we may now denote $C$, therefore

    \langle x|p\rangle = C e^{ipx/\hbar} .    (37)

The constant $C$ is found from the normalization condition,

    \int dp\, \langle x|p\rangle\langle p|x'\rangle = \langle x|x'\rangle = \delta(x - x') .    (38)

After a standard integration, using the relation

    \int_{-\infty}^{+\infty} e^{iax}\, dx = 2\pi\, \delta(a) ,    (39)

we obtain Eq. (33).
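A finite-dimensional analogue of this normalization (my illustration, not part of the notes) is the unitarity of the discrete Fourier matrix: its columns are plane waves, normalized by $1/\sqrt{N}$ in place of $1/\sqrt{2\pi\hbar}$, and they satisfy the discrete counterpart of Eq. (38):

```python
import numpy as np

N = 64
k = np.arange(N)

# Finite counterpart of <x|p> = C e^{ipx/hbar}: a matrix of plane waves
# with the normalization C = 1/sqrt(N).
F = np.exp(2 * np.pi * 1j * np.outer(k, k) / N) / np.sqrt(N)

# Discrete analogue of Eq. (38): sum over p of <x|p><p|x'> = delta_{x x'}.
overlap = F @ F.conj().T
print(np.allclose(overlap, np.eye(N)))  # True
```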
1.5 Calculations with operators

In this section we consider some methods of calculation that do not involve matrix elements or bases but only operators. For the rest of this section, we shall denote operators by unaccented letters $A$ instead of $\hat A$; the identity operator $\hat 1$ will be simply 1. Scalars will be denoted by Greek letters $\alpha$, $\beta$, $\lambda$, etc.
1.5.1 Functions of operators

If $A$ is an operator, one can compute a polynomial function of $A$ such as $A^2 - A + 1$. However, more general functions of operators, for example $\exp(A)$ or $\sqrt{1 + A}$, are also quite useful in calculations.

If $f(x)$ is an ordinary function, we may define the operator-valued function $f(A)$ in two ways:

1. Suppose that $A$ is a diagonalizable operator and its eigenbasis $\{|e_j\rangle\}$ and eigenvalues $\lambda_j$ are known. Then we can write

    A = \sum_{j=1}^{\infty} \lambda_j\, |e_j\rangle\langle e_j|    (40)

and define $f(A)$ as the operator

    f(A) \equiv \sum_{j=1}^{\infty} f(\lambda_j)\, |e_j\rangle\langle e_j| .    (41)

This is a well-defined operator as long as all $f(\lambda_j)$ are finite. An operator function $f(A)$ is undefined if $f(\lambda_j)$ does not exist for some eigenvalue $\lambda_j$ of $A$. For example, if $A$ has a zero eigenvalue then $A^{-1}$ does not exist.

2. Suppose that $f(x)$ is an analytic function that admits an everywhere convergent expansion

    f(x) = \sum_{n=0}^{\infty} c_n x^n .    (42)

Then we define $f(A)$ by the operator-valued series

    f(A) \equiv \sum_{n=0}^{\infty} c_n A^n .    (43)

In most cases, calculations with operator-valued functions are quite analogous to calculations with ordinary functions. For example,

    \sqrt{1 - A}\, \sqrt{1 + A} = \sqrt{1 - A^2} ,    (44)
    \sin^2 A + \cos^2 A = 1 ,    (45)
    \frac{d}{d\lambda} \exp(\lambda A) = A \exp(\lambda A) .    (46)

These statements can be justified by applying either of the two definitions of operator-valued functions.

It follows from either of the definitions that the operator $f(A)$ always commutes with $A$, i.e. $f(A)A = Af(A)$. However, one has to be careful to preserve the order of non-commuting operators in operator calculations. For example, if $A$ and $B$ are such that $AB \neq BA$, then in general

    e^A e^B \neq e^{A+B} .    (47)
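The two definitions agree whenever both apply; for instance, $\exp(A)$ of a real symmetric matrix can be computed either from the eigenbasis (definition 1) or from the power series (definition 2). A numerical sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2        # symmetric, hence diagonalizable with real eigenvalues

# Definition 1: f(A) = sum_j f(lambda_j) |e_j><e_j|, here with f = exp.
lam, V = np.linalg.eigh(A)
expA_eig = V @ np.diag(np.exp(lam)) @ V.T

# Definition 2: the power series exp(A) = sum_n A^n / n!.
expA_series = np.zeros_like(A)
term = np.eye(4)
for n in range(1, 60):
    expA_series += term      # add A^{n-1} / (n-1)!
    term = term @ A / n

print(np.allclose(expA_eig, expA_series))  # True
```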
1.5.2 Algebraic calculations

Sometimes an operator satisfies an algebraic equation, for example $A^2 = \alpha^2$ or $A^2 = A$, where $\alpha$ is a complex constant (we abbreviate $\alpha \cdot 1$ to $\alpha$). In that case, most calculations involving $A$ can be performed explicitly, without assuming that $A$ is diagonalizable and without computing the eigenvalues and the eigenvectors of $A$.

Some examples:

1. If $A^2 = \alpha^2$, then what is $(1 + A)^{-1}$? Solution: normally one would expect $(1 + A)^{-1} = 1 - A + A^2 - ...$ to be a power series in $A$; however, the condition $A^2 = \alpha^2$ simplifies all power series to just two terms, $\beta A + \gamma$. Therefore $(1 + A)^{-1} = \beta A + \gamma$ with some unknown constants $\beta$ and $\gamma$. A direct substitution yields

    (1 + A)^{-1} = \frac{1}{1 - \alpha^2}\, (1 - A) .    (48)

2. If $A^2 = A + \gamma$, then what is $A^{-1}$? Solution: we again guess that $A^{-1}$ might be of the form $\beta A + \delta$ with unknown $\beta$ and $\delta$. Then a substitution into $A^{-1}A = 1$ yields $A^{-1} = \gamma^{-1}(A - 1)$. Note that the solution is invalid for $\gamma = 0$, in which case it is impossible to express $A^{-1}$ through $A$. Indeed, an operator $A$ satisfying $A^2 = A$ may not have an inverse. (An inverse operator $A^{-1}$ does not exist if $A$ has a zero eigenvalue.)

3. More generally: suppose that an operator $A$ satisfies an algebraic equation of the form

    A^n + a_1 A^{n-1} + ... + a_{n-1} A + a_n = 0 ,    (49)

then any function $f(A)$ is reduced to a polynomial in $A$ of degree at most $n - 1$. In such cases it is usually easy to compute $\sqrt{1 + A}$, $\ln(1 + A)$, $\exp A$ and other elementary functions of $A$. The problem of computing $f(A)$ is thus reduced to the easier problem of computing $n$ scalars (so if necessary, one could perform numerical calculations on a computer). Note that any finite-dimensional operator satisfies its characteristic equation $P_A(A) = 0$, where $P_A(x) \equiv \det(A - x)$ is the characteristic polynomial (the Hamilton-Cayley theorem). Therefore all functions of finite-dimensional operators are always expressible as polynomials in those operators.

4. If the operator $A$ satisfies an operator-valued equation of the form $f(A) = 0$, where $f$ is some function, then all eigenvalues $\lambda_j$ of $A$ also satisfy the equation $f(\lambda_j) = 0$. (This directly follows from the first definition of $f(A)$, see Sec. 1.5.1.)

5. Given that $A^n = 0$ for some $n$, prove that $\mathrm{Tr}\,A = 0$. Solution: the trace is the sum of all eigenvalues; all eigenvalues $\lambda_j$ of $A$ must satisfy $\lambda_j^n = 0$ and therefore all $\lambda_j = 0$, thus the trace vanishes.

In infinite-dimensional spaces it is seldom the case that an operator satisfies an algebraic equation. For example, the position and momentum operators $\hat x$, $\hat p$ do not satisfy any algebraic equations. However, the spin and the angular momentum operators are effectively finite-dimensional and satisfy algebraic equations; for instance, $\hat\sigma_x \hat\sigma_x = 1$.
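Example 1 can be verified with a concrete operator: $A = \alpha\hat\sigma_x$ satisfies $A^2 = \alpha^2$, so Eq. (48) must reproduce the exact inverse (the matrices below are my choice for illustration):

```python
import numpy as np

alpha = 0.3
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
A = alpha * sigma_x                      # then A^2 = alpha^2 * identity

print(np.allclose(A @ A, alpha**2 * np.eye(2)))                # True

# Eq. (48): (1 + A)^{-1} = (1 - A) / (1 - alpha^2)
inv_formula = (np.eye(2) - A) / (1 - alpha**2)
print(np.allclose(inv_formula, np.linalg.inv(np.eye(2) + A)))  # True

# Example 5: a nilpotent operator (A^n = 0) has zero trace.
Nil = np.array([[0.0, 1.0], [0.0, 0.0]])  # Nil @ Nil = 0
print(np.trace(Nil))                       # -> 0.0
```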
1.5.3 Commutators

In general, two operators $A$, $B$ do not commute, i.e. $AB \neq BA$. Therefore one needs to take care when multiplying linear combinations of operators, e.g.

    (A + B)(A - 2B) = A^2 + BA - 2AB - 2B^2 .    (50)

The commutator of two operators $A$, $B$ is by definition the operator

    [A, B] \equiv AB - BA .    (51)

The commutator has the following properties that readily follow from the definition: for all operators $A$, $B$, $C$,

    [A, 1] = 0 ,    (52)
    [A, B] = -[B, A] ,    (53)
    [A, BC] = B[A, C] + [A, B]C ,    (54)
    [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0 .    (55)

A commutation relation is an equation of the form

    [A, B] = f(A, B) ,    (56)

where $f$ is some known function (given in a form other than $AB - BA$). For example, $[\hat x, \hat p] = i\hbar$ is a commutation relation.

In calculations, it is often required to transform operator expressions so that one operator is always positioned to the right of another; thus one needs to "pull" an operator from left to right through other operators. For example, suppose that the operators $A$ and $B$ satisfy the commutation relation $[A, B] = 2A$, and we need to simplify the expression $AB^2$ by pulling the operator $A$ through the operator $B$, so that the resulting formula has all $A$'s to the right of all $B$'s. The calculation proceeds like this:

    AB = [A, B] + BA = 2A + BA, so
    AB^2 = (AB)B = (2A + BA)B = (2 + B)AB = (2 + B)(2A + BA) = 4A + 4BA + B^2 A .
1.5.4 Canonically conjugate operators

Two operators $A$, $B$ are a canonically conjugate pair if

    [A, B] = \alpha \cdot 1 ,    (57)

where $\alpha \in \mathbb{C}$ is a constant. For example, the coordinate operator $\hat q$ and the momentum operator $\hat p$ are a canonically conjugate pair since $[\hat q, \hat p] = i\hbar$.

Two operators from a canonically conjugate pair have rather special algebraic properties, and many calculations involving these operators are much easier.

The main properties of a canonically conjugate pair are:

1. The commutator with $A$ acts like a derivative with respect to $B$, and vice versa. Namely, for any function $f(x)$, if $[A, B] = \alpha$ then

    [A, f(B)] = \alpha f'(B) ; \quad [f(A), B] = \alpha f'(A) .    (58)

This property can be derived from the second definition in Sec. 1.5.1 by first proving that

    [A, B^n] = \alpha n B^{n-1} , \quad [A^n, B] = \alpha n A^{n-1} ,    (59)

using induction in $n$. It follows in particular that $[A, \exp(B)] = \alpha \exp(B)$.

2. More generally, if $[A, B] = \alpha$ then

    e^A f(B) = f(B + \alpha)\, e^A .    (60)

(We shall prove this in Sec. 1.5.5.) A particular case is the useful formula

    e^A e^B = e^{\alpha}\, e^B e^A .    (61)

Finally, the Hausdorff formula

    e^A e^B = e^{A+B}\, e^{\frac{1}{2}\alpha}    (62)

can be derived by computing the derivative $\partial/\partial\lambda$ of the auxiliary function $f(\lambda) \equiv e^{\lambda A} e^{\lambda B}$,

    f' = (A + B + \alpha\lambda)\, f , \quad f(\lambda = 0) = 1 .    (63)

This differential equation has the solution

    f(\lambda) = \exp\left( \lambda A + \lambda B + \frac{1}{2}\alpha\lambda^2 \right)    (64)

that yields Eq. (62) upon setting $\lambda = 1$.

3. If $A$ has an eigenvector $|\lambda\rangle$ with eigenvalue $\lambda$, then it has a continuous spectrum of eigenvalues. Indeed, a simple calculation starting with

    \left[ A, \exp\left( \mu\alpha^{-1} B \right) \right] |\lambda\rangle    (65)

shows that the vector

    \exp\left( \mu\alpha^{-1} B \right) |\lambda\rangle    (66)

is an eigenvector of $A$ with eigenvalue $\lambda + \mu$. Note that the expression $\exp(\mu\alpha^{-1}B)\,|\lambda\rangle$ might diverge and then it must be understood as a generalized vector.
1.5.5 Coordinate representation

Operators $A$ and $B$ from a canonically conjugate pair $[A, B] = \alpha$ admit the "coordinate representation" similar to the $x$- and $p$-representations for the operators of coordinate $\hat x$ and momentum $\hat p$.

The coordinate representation operates in a space of functions $\psi(x)$ of the formal variable $x$. The operator $B$ is represented as multiplication by $x$, and the operator $A$ is represented as the derivative operator $\alpha\,\partial/\partial x$:

    A : \psi(x) \mapsto \alpha \frac{\partial}{\partial x} \psi(x) ,    (67)
    B : \psi(x) \mapsto x\, \psi(x) .    (68)

It is easy to see that the commutation relation $[A, B] = \alpha$ is satisfied.
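This can be checked numerically by implementing $A$ as a finite-difference derivative (a sketch; `np.gradient` is only a second-order approximation, so the identity holds up to discretization error away from the endpoints):

```python
import numpy as np

alpha = 0.5
x = np.linspace(-3, 3, 2001)
psi = np.exp(-x**2) * np.sin(2 * x)   # an arbitrary smooth test function

def A(f):
    return alpha * np.gradient(f, x)  # A = alpha * d/dx (finite differences)

def B(f):
    return x * f                      # B = multiplication by x

# (AB - BA) psi should equal alpha * psi away from the grid endpoints:
commutator = A(B(psi)) - B(A(psi))
interior = slice(10, -10)
err = np.max(np.abs(commutator[interior] - alpha * psi[interior]))
print(err)  # small (discretization error only)
```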
Using the coordinate representation, a computation with operators $A$ and $B$ is reduced to differential equations. For instance, to find the eigenfunctions of $A$, we write

    A\, \psi(x) = \alpha \frac{\partial\psi}{\partial x} = \lambda\, \psi(x) ,    (69)

thus $\psi(x) \propto \exp(\lambda x/\alpha)$.

Finally, we shall derive the identity (60) using the coordinate representation. The operator $\exp(A)$ becomes the shift operator that acts on functions $\psi(x)$ by

    \exp\left( \alpha \frac{\partial}{\partial x} \right) \psi(x) = \psi(x + \alpha) .    (70)

Therefore the product $\exp(A)\, f(B)$ acts on a function $\psi(x)$ by

    \exp\left( \alpha \frac{\partial}{\partial x} \right) \left[ f(x)\, \psi(x) \right] = f(x + \alpha)\, \psi(x + \alpha) = f(x + \alpha) \exp\left( \alpha \frac{\partial}{\partial x} \right) \psi(x) .    (71)

Therefore

    e^A f(B)\, \psi(x) = f(B + \alpha)\, e^A\, \psi(x)    (72)

for all $\psi(x)$, which is exactly the identity (60).