
Physics 70007, Fall 2009

Answers to HW set #2

October 16, 2009

1. (Sakurai 1.2)
Suppose that a 2x2 matrix $X$ (not necessarily Hermitian or unitary) is written as
$$X = a_0 + \vec{\sigma}\cdot\vec{a}$$
where $a_0$ and $a_{1,2,3}$ are numbers.


(a) How are $a_0$ and $a_k$ ($k = 1, 2, 3$) related to $\mathrm{tr}(X)$ and $\mathrm{tr}(\sigma_k X)$?

Since the matrices $\sigma_k$ are traceless, while the trace of the 2x2 identity matrix is 2, we find $\mathrm{tr}(X) = 2a_0$, and so
$$a_0 = \frac{1}{2}\,\mathrm{tr}(X)$$
To multiply in a factor of $\sigma_k$, we first write out the dot product in the definition of $X$ explicitly:
$$X = a_0 I + a_1\sigma_1 + a_2\sigma_2 + a_3\sigma_3$$
where $I$ represents the 2x2 identity matrix. Then we can use the fundamental relation $\sigma_i\sigma_j = \delta_{ij} I + i\epsilon_{ijk}\sigma_k$ to multiply in a Pauli matrix: for example,
$$\sigma_1 X = a_0(\sigma_1) + a_1(I) + a_2(i\sigma_3) + a_3(-i\sigma_2)$$
When we take the trace, only the term involving $I$ gives a non-zero contribution (as above): we obtain $\mathrm{tr}(\sigma_1 X) = 2a_1$, or $a_1 = \frac{1}{2}\mathrm{tr}(\sigma_1 X)$. We obtain similar results for $k = 2, 3$; so in general
$$a_k = \frac{1}{2}\,\mathrm{tr}(\sigma_k X)$$

(b) Obtain $a_0$ and $a_k$ in terms of the matrix elements $X_{ij}$.


We now write out the various traces obtained in part (a): using
$$X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}$$
and
$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
we have
$$a_0 = \frac{1}{2}\,\mathrm{tr}(X) = \frac{X_{11} + X_{22}}{2}$$
and
$$\sigma_1 X = \begin{pmatrix} X_{21} & X_{22} \\ X_{11} & X_{12} \end{pmatrix} \;\Longrightarrow\; a_1 = \frac{1}{2}\,\mathrm{tr}(\sigma_1 X) = \frac{X_{21} + X_{12}}{2}$$
$$\sigma_2 X = \begin{pmatrix} -iX_{21} & -iX_{22} \\ iX_{11} & iX_{12} \end{pmatrix} \;\Longrightarrow\; a_2 = \frac{1}{2}\,\mathrm{tr}(\sigma_2 X) = i\,\frac{X_{12} - X_{21}}{2}$$
$$\sigma_3 X = \begin{pmatrix} X_{11} & X_{12} \\ -X_{21} & -X_{22} \end{pmatrix} \;\Longrightarrow\; a_3 = \frac{1}{2}\,\mathrm{tr}(\sigma_3 X) = \frac{X_{11} - X_{22}}{2}$$
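As a sanity check (not part of the original solution), here is a minimal numerical sketch, assuming Python with numpy, that builds $X$ from arbitrarily chosen coefficients and recovers them from the traces derived above:

```python
# Quick numerical check of parts (a) and (b); assumes numpy is available.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# pick arbitrary (possibly complex) coefficients and build X = a0 I + a . sigma
a0, a = 0.7 - 0.2j, np.array([1.1, -0.3j, 2.0 + 0.5j])
X = a0 * I2 + a[0] * s1 + a[1] * s2 + a[2] * s3

# recover the coefficients from traces
assert np.isclose(np.trace(X) / 2, a0)
for ak, sk in zip(a, (s1, s2, s3)):
    assert np.isclose(np.trace(sk @ X) / 2, ak)
```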

2. (Sakurai 1.3)
Show that the determinant of a 2x2 matrix $\vec{\sigma}\cdot\vec{a}$ is invariant under
$$\vec{\sigma}\cdot\vec{a} \;\to\; \vec{\sigma}\cdot\vec{a}\,' \equiv \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)(\vec{\sigma}\cdot\vec{a})\,\exp\!\left(\frac{-i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)$$
Find $a'_k$ in terms of $a_k$ when $\hat{n}$ is in the positive z-direction and interpret your result.
Since the determinant of a product of matrices equals the product of the determinants of the individual matrices, what we are being asked to show is that
$$\det\!\left[\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)\right]\det\!\left[\exp\!\left(\frac{-i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)\right] = 1$$
Now it would be sufficient to simply note that the matrices in question are inverses of each other (to see this, apply the Baker-Campbell-Hausdorff formula for $e^A e^B$, where we set $B = -A$ and therefore all the commutators in the formula become zero), and so the above relation is trivially true. However, it turns out that each individual determinant equals one, all by itself:
$$\det\!\left[\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)\right] = 1$$

Showing this is an instructive exercise, so we will work through it in detail. First we write out the exponential as a power series:
$$\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = I + \frac{1}{1!}\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)^{\!1} + \frac{1}{2!}\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)^{\!2} + \ldots$$
(This defines what we mean by the exponential of a matrix.) Then we separate out the matrix and constant pieces of each term:
$$\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = I + \frac{1}{1!}\left(\frac{i\phi}{2}\right)^{\!1}(\vec{\sigma}\cdot\hat{n}) + \frac{1}{2!}\left(\frac{i\phi}{2}\right)^{\!2}(\vec{\sigma}\cdot\hat{n})^{2} + \ldots$$
Each term involves a power of $\vec{\sigma}\cdot\hat{n}$. We can simplify these greatly by noting that $(\vec{\sigma}\cdot\hat{n})^{2} = I$:
$$(\vec{\sigma}\cdot\hat{n})^{2} = \sigma_i n_i\,\sigma_j n_j = n_i n_j\,(\delta_{ij} I + i\epsilon_{ijk}\sigma_k) = (\hat{n}\cdot\hat{n})\,I + 0 = I$$
(in the third equality, the product of a term symmetric in $i$ and $j$ and a corresponding anti-symmetric term gives zero.) So, for an arbitrary power, we have
$$(\vec{\sigma}\cdot\hat{n})^{2j} = \left[(\vec{\sigma}\cdot\hat{n})^{2}\right]^{j} = I \qquad\text{and}\qquad (\vec{\sigma}\cdot\hat{n})^{2j+1} = \vec{\sigma}\cdot\hat{n}$$
This lets us collect the even and odd powers in our power series separately:
$$\begin{aligned}
\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) &= I\left(1 + \frac{1}{2!}\left(\frac{i\phi}{2}\right)^{\!2} + \frac{1}{4!}\left(\frac{i\phi}{2}\right)^{\!4} + \ldots\right) + (\vec{\sigma}\cdot\hat{n})\left(\frac{1}{1!}\left(\frac{i\phi}{2}\right)^{\!1} + \frac{1}{3!}\left(\frac{i\phi}{2}\right)^{\!3} + \ldots\right) \\
&= I\left(1 - \frac{1}{2!}\left(\frac{\phi}{2}\right)^{\!2} + \frac{1}{4!}\left(\frac{\phi}{2}\right)^{\!4} - \ldots\right) + i\,(\vec{\sigma}\cdot\hat{n})\left(\frac{1}{1!}\left(\frac{\phi}{2}\right)^{\!1} - \frac{1}{3!}\left(\frac{\phi}{2}\right)^{\!3} + \ldots\right) \\
&= I\cos\frac{\phi}{2} + i\,(\vec{\sigma}\cdot\hat{n})\sin\frac{\phi}{2}
\end{aligned}$$

At this point, the simplest way to take the determinant is to write out the matrix explicitly: writing $c \equiv \cos\frac{\phi}{2}$ and $s \equiv \sin\frac{\phi}{2}$, we have
$$\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) = \begin{pmatrix} c + isn_3 & s\,(in_1 + n_2) \\ s\,(in_1 - n_2) & c - isn_3 \end{pmatrix}$$
and so
$$\begin{aligned}
\det\!\left[\exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)\right] &= c^2 + s^2 n_3^2 - s^2\left(-n_1^2 - n_2^2\right) \\
&= c^2 + s^2\,(\hat{n}\cdot\hat{n}) \\
&= \cos^2\frac{\phi}{2} + \sin^2\frac{\phi}{2} \\
&= 1.
\end{aligned}$$

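The resummation and the unit determinant are easy to spot-check numerically. Below is a minimal sketch (not part of the original solution), assuming Python with numpy and scipy.linalg.expm, that compares the matrix exponential against the closed form $I\cos\frac{\phi}{2} + i(\vec{\sigma}\cdot\hat{n})\sin\frac{\phi}{2}$ for arbitrarily chosen angles:

```python
# Numerical sanity check of the closed form for exp(i sigma.n phi/2) and of det = 1.
# Assumes numpy and scipy are available; the angles are arbitrary sample values.
import numpy as np
from scipy.linalg import expm

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

beta, alpha, phi = 0.9, 0.4, 1.3          # arbitrary angles
n = np.array([np.sin(beta) * np.cos(alpha),
              np.sin(beta) * np.sin(alpha),
              np.cos(beta)])              # unit vector n-hat
sig_n = sum(ni * si for ni, si in zip(n, s))

U = expm(1j * sig_n * phi / 2)
closed = np.cos(phi / 2) * np.eye(2) + 1j * np.sin(phi / 2) * sig_n

assert np.allclose(U, closed)             # series resummation
assert np.isclose(np.linalg.det(U), 1.0)  # unit determinant
```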
Now we take $\hat{n} = \hat{z}$ and determine $\vec{a}\,'$ in terms of $\vec{a}$. Here, $(n_1, n_2, n_3) = (0, 0, 1)$, so
$$\begin{aligned}
\vec{\sigma}\cdot\vec{a}\,' &= \exp\!\left(\frac{i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right)(\vec{\sigma}\cdot\vec{a})\,\exp\!\left(\frac{-i\vec{\sigma}\cdot\hat{n}\,\phi}{2}\right) \\
&= \begin{pmatrix} c + is & 0 \\ 0 & c - is \end{pmatrix}\begin{pmatrix} a_3 & a_1 - ia_2 \\ a_1 + ia_2 & -a_3 \end{pmatrix}\begin{pmatrix} c - is & 0 \\ 0 & c + is \end{pmatrix} \\
&= \begin{pmatrix} a_3\,(c + is)(c - is) & (a_1 - ia_2)(c + is)^2 \\ (a_1 + ia_2)(c - is)^2 & -a_3\,(c - is)(c + is) \end{pmatrix} \\
&= \begin{pmatrix} a_3 & (a_1 - ia_2)\,e^{i\phi} \\ (a_1 + ia_2)\,e^{-i\phi} & -a_3 \end{pmatrix}
\end{aligned}$$
We can now use our work from part (b) of the first problem to determine $\vec{a}\,'$. We call this matrix $X$ and write it as $a'_0 + \vec{\sigma}\cdot\vec{a}\,'$; then (noting that $X_{12} = X_{21}^{*}$)

$$a'_0 = \frac{X_{11} + X_{22}}{2} = 0$$
$$a'_1 = \frac{1}{2}\,(X_{12} + X_{21}) = \mathrm{Re}\,X_{12} = a_1\cos\phi + a_2\sin\phi$$
$$a'_2 = \frac{i}{2}\,(X_{12} - X_{21}) = -\mathrm{Im}\,X_{12} = -a_1\sin\phi + a_2\cos\phi$$
$$a'_3 = \frac{1}{2}\,(X_{11} - X_{22}) = a_3$$
Thus the vector $\vec{a}\,'$ ends up being the vector $\vec{a}$ rotated around the z-axis by an angle $\phi$. It is reasonable to assume (and is in fact the case) that an arbitrary $\hat{n}$ will lead to a rotation of $\vec{a}$ around the $\hat{n}$-axis. This result will become more meaningful once we get into rotations and angular momentum in chapter 3 of Sakurai.

3. (Sakurai 1.8)
Using the orthonormality of $|+\rangle$ and $|-\rangle$, prove
$$[S_i, S_j] = i\epsilon_{ijk}\hbar S_k, \qquad \{S_i, S_j\} = \frac{\hbar^2}{2}\,\delta_{ij}$$
where
$$S_x = \frac{\hbar}{2}\,\big(|+\rangle\langle-| + |-\rangle\langle+|\big)$$
$$S_y = \frac{i\hbar}{2}\,\big(-|+\rangle\langle-| + |-\rangle\langle+|\big)$$
$$S_z = \frac{\hbar}{2}\,\big(|+\rangle\langle+| - |-\rangle\langle-|\big).$$

The properties of the basis $|+\rangle, |-\rangle$ which we will use are orthonormality:
$$\langle+|+\rangle = \langle-|-\rangle = 1, \qquad \langle+|-\rangle = \langle-|+\rangle = 0$$
and completeness:
$$|+\rangle\langle+| + |-\rangle\langle-| = I.$$
An efficient way to do the problem is to first write out all possible products of two spin operators. Just to give you the idea, I'll write out only the ones which don't involve $S_z$:
$$\begin{aligned}
S_x^2 &= \frac{\hbar}{2}\,\big(|+\rangle\langle-| + |-\rangle\langle+|\big)\,\frac{\hbar}{2}\,\big(|+\rangle\langle-| + |-\rangle\langle+|\big) \\
&= \left(\frac{\hbar}{2}\right)^{\!2}\big(|+\rangle\langle-|+\rangle\langle-| + |+\rangle\langle-|-\rangle\langle+| + |-\rangle\langle+|+\rangle\langle-| + |-\rangle\langle+|-\rangle\langle+|\big) \\
&= \frac{\hbar^2}{4}\,\big(0 + |+\rangle\langle+| + |-\rangle\langle-| + 0\big) \\
&= \frac{\hbar^2}{4}\,I
\end{aligned}$$

$$\begin{aligned}
S_x S_y &= \frac{\hbar}{2}\,\big(|+\rangle\langle-| + |-\rangle\langle+|\big)\,\frac{i\hbar}{2}\,\big(-|+\rangle\langle-| + |-\rangle\langle+|\big) \\
&= i\left(\frac{\hbar}{2}\right)^{\!2}\big(-|+\rangle\langle-|+\rangle\langle-| + |+\rangle\langle-|-\rangle\langle+| - |-\rangle\langle+|+\rangle\langle-| + |-\rangle\langle+|-\rangle\langle+|\big) \\
&= i\left(\frac{\hbar}{2}\right)^{\!2}\big(0 + |+\rangle\langle+| - |-\rangle\langle-| + 0\big) \\
&= i\,\frac{\hbar}{2}\,S_z
\end{aligned}$$

$$\begin{aligned}
S_y S_x &= \frac{i\hbar}{2}\,\big(-|+\rangle\langle-| + |-\rangle\langle+|\big)\,\frac{\hbar}{2}\,\big(|+\rangle\langle-| + |-\rangle\langle+|\big) \\
&= i\left(\frac{\hbar}{2}\right)^{\!2}\big(-|+\rangle\langle-|+\rangle\langle-| - |+\rangle\langle-|-\rangle\langle+| + |-\rangle\langle+|+\rangle\langle-| + |-\rangle\langle+|-\rangle\langle+|\big) \\
&= i\left(\frac{\hbar}{2}\right)^{\!2}\big(0 - |+\rangle\langle+| + |-\rangle\langle-| + 0\big) \\
&= -i\,\frac{\hbar}{2}\,S_z
\end{aligned}$$

$$\begin{aligned}
S_y^2 &= \frac{i\hbar}{2}\,\big(-|+\rangle\langle-| + |-\rangle\langle+|\big)\,\frac{i\hbar}{2}\,\big(-|+\rangle\langle-| + |-\rangle\langle+|\big) \\
&= -\left(\frac{\hbar}{2}\right)^{\!2}\big(|+\rangle\langle-|+\rangle\langle-| - |+\rangle\langle-|-\rangle\langle+| - |-\rangle\langle+|+\rangle\langle-| + |-\rangle\langle+|-\rangle\langle+|\big) \\
&= -\frac{\hbar^2}{4}\,\big(0 - |+\rangle\langle+| - |-\rangle\langle-| + 0\big) \\
&= \frac{\hbar^2}{4}\,I
\end{aligned}$$
With this work done, we can quickly write down the commutators and anti-commutators:
$$[S_x, S_x] = [S_y, S_y] = 0 \quad\text{(trivially)}$$
$$[S_x, S_y] = S_x S_y - S_y S_x = i\,\frac{\hbar}{2}\,S_z - \left(-i\,\frac{\hbar}{2}\,S_z\right) = i\hbar S_z$$
$$[S_y, S_x] = -[S_x, S_y] = -i\hbar S_z$$
$$\{S_x, S_x\} = S_x S_x + S_x S_x = \frac{\hbar^2}{2}\,I$$
$$\{S_x, S_y\} = S_x S_y + S_y S_x = i\,\frac{\hbar}{2}\,S_z + \left(-i\,\frac{\hbar}{2}\,S_z\right) = 0$$
$$\{S_y, S_x\} = \{S_x, S_y\} = 0$$
$$\{S_y, S_y\} = S_y S_y + S_y S_y = \frac{\hbar^2}{2}\,I$$

These agree with the general formulas we are trying to establish. The relations involving $S_z$ follow analogously.
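Although the problem asks for a proof from orthonormality, the relations can also be spot-checked by brute force in the familiar 2x2 matrix representation with respect to the $\{|+\rangle, |-\rangle\}$ basis. A minimal sketch (not part of the original solution), assuming numpy and working in units with $\hbar = 1$:

```python
# Brute-force check of the commutation / anti-commutation relations
# in the 2x2 matrix representation; assumes numpy, hbar = 1.
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
S = [Sx, Sy, Sz]

eps = np.zeros((3, 3, 3))                 # Levi-Civita symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    for j in range(3):
        comm = S[i] @ S[j] - S[j] @ S[i]
        anti = S[i] @ S[j] + S[j] @ S[i]
        rhs = sum(1j * eps[i, j, k] * hbar * S[k] for k in range(3))
        assert np.allclose(comm, rhs)                              # [Si, Sj] = i eps_ijk hbar Sk
        assert np.allclose(anti, (hbar**2 / 2) * (i == j) * np.eye(2))  # {Si, Sj} = (hbar^2/2) delta_ij
```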

4. (Sakurai 1.9)
Construct $|\vec{S}\cdot\hat{n}; +\rangle$ such that
$$\vec{S}\cdot\hat{n}\,|\vec{S}\cdot\hat{n}; +\rangle = \frac{\hbar}{2}\,|\vec{S}\cdot\hat{n}; +\rangle$$
where $\hat{n}$ is a unit vector with polar angle $\beta$ and azimuthal angle $\alpha$ with respect to the z-axis.

The vector $\hat{n}$ has Cartesian components
$$(\sin\beta\cos\alpha,\; \sin\beta\sin\alpha,\; \cos\beta)$$
so we can construct the matrix for the operator $\vec{S}\cdot\hat{n}$:
$$\begin{aligned}
\vec{S}\cdot\hat{n} &= \frac{\hbar}{2}\,(\sigma_x n_x + \sigma_y n_y + \sigma_z n_z) \\
&= \frac{\hbar}{2}\left[\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\sin\beta\cos\alpha + \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}\sin\beta\sin\alpha + \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\cos\beta\right] \\
&= \frac{\hbar}{2}\begin{pmatrix} \cos\beta & e^{-i\alpha}\sin\beta \\ e^{i\alpha}\sin\beta & -\cos\beta \end{pmatrix}
\end{aligned}$$
The eigenvalues of this matrix are found by solving $\det\!\big(\vec{S}\cdot\hat{n} - \lambda I\big) = 0$:
$$0 = \left(\frac{\hbar}{2}\cos\beta - \lambda\right)\left(-\frac{\hbar}{2}\cos\beta - \lambda\right) - \left(\frac{\hbar}{2}\right)^{\!2}\sin^2\beta = -\left(\frac{\hbar}{2}\right)^{\!2} + \lambda^2$$
yielding $\lambda = \pm\frac{\hbar}{2}$. In this problem, we are interested in the normalized eigenket corresponding to the positive eigenvalue; writing it as $|\vec{S}\cdot\hat{n}; +\rangle \equiv A\,|+\rangle + B\,|-\rangle$, we have
$$\frac{\hbar}{2}\begin{pmatrix} \cos\beta & e^{-i\alpha}\sin\beta \\ e^{i\alpha}\sin\beta & -\cos\beta \end{pmatrix}\begin{pmatrix} A \\ B \end{pmatrix} = \frac{\hbar}{2}\begin{pmatrix} A \\ B \end{pmatrix}$$

The two equations contained in this matrix equation are redundant, so we can consider either one: for example, the top equation is
$$(\cos\beta - 1)\,A + e^{-i\alpha}\sin\beta\; B = 0$$
This gives us one condition on $A$ and $B$; we need a second one to fix the actual values. This is given by the requirement that the ket be normalized:
$$|A|^2 + |B|^2 = 1$$
Taking the modulus of the first condition and solving for $|B|$, we find
$$|B| = \frac{1 - \cos\beta}{\sin\beta}\,|A|$$
Substituting into the second condition and solving for $|A|$ gives
$$|A| = \cos\frac{\beta}{2}$$

where we have used the trigonometric identities $\sin\beta = 2\sin\frac{\beta}{2}\cos\frac{\beta}{2}$ and $1 - \cos\beta = 2\sin^2\frac{\beta}{2}$. We choose the phase of $A$ to be real and positive, i.e.
$$A = \cos\frac{\beta}{2}$$
and then our first condition gives the value of $B$:
$$B = e^{i\alpha}\,\frac{1 - \cos\beta}{\sin\beta}\,A = e^{i\alpha}\sin\frac{\beta}{2}$$
using our trig identities again. So the desired normalized eigenket is
$$|\vec{S}\cdot\hat{n}; +\rangle = \cos\frac{\beta}{2}\,|+\rangle + e^{i\alpha}\sin\frac{\beta}{2}\,|-\rangle$$
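A quick numerical sketch (not part of the original solution; assumes numpy, units with $\hbar = 1$, and arbitrary sample angles) confirming that this ket is a normalized eigenvector of $\vec{S}\cdot\hat{n}$ with eigenvalue $+\hbar/2$:

```python
# Check that (cos(beta/2), e^{i alpha} sin(beta/2)) is the +hbar/2 eigenket of S.n;
# assumes numpy, hbar = 1, arbitrary sample angles.
import numpy as np

beta, alpha = 1.1, 0.6
Sn = 0.5 * np.array([[np.cos(beta), np.exp(-1j * alpha) * np.sin(beta)],
                     [np.exp(1j * alpha) * np.sin(beta), -np.cos(beta)]])

ket = np.array([np.cos(beta / 2), np.exp(1j * alpha) * np.sin(beta / 2)])

assert np.allclose(Sn @ ket, 0.5 * ket)    # eigenvalue +hbar/2
assert np.isclose(np.vdot(ket, ket), 1.0)  # normalized
```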

5. (Sakurai 1.10)
The Hamiltonian operator for a two-state system is given by
$$H = a\,\big(|1\rangle\langle1| - |2\rangle\langle2| + |1\rangle\langle2| + |2\rangle\langle1|\big)$$
where $a$ is a number with the dimension of energy. Find the energy eigenvalues and the corresponding eigenkets (as linear combinations of $|1\rangle$ and $|2\rangle$).

The matrix of $H$ is
$$H = \begin{pmatrix} a & a \\ a & -a \end{pmatrix}$$
The sum of the eigenvalues $\lambda_1, \lambda_2$ of $H$ equals its trace, and their product equals its determinant: i.e.
$$\lambda_1 + \lambda_2 = 0, \qquad \lambda_1\lambda_2 = -2a^2$$
which are easily solved to give $\lambda_1 = a\sqrt{2}$, $\lambda_2 = -a\sqrt{2}$.
For $\lambda_1 = a\sqrt{2}$, we write the eigenket as $c_1|1\rangle + c_2|2\rangle$; then
$$a\begin{pmatrix} 1 - \sqrt{2} & 1 \\ 1 & -1 - \sqrt{2} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0$$
so we have $(1 - \sqrt{2})c_1 + c_2 = 0$, plus normalization: $|c_1|^2 + |c_2|^2 = 1$. Choosing $c_1$ to be real and positive, we find for the normalized eigenket
$$\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \frac{1}{\sqrt{4 - 2\sqrt{2}}}\begin{pmatrix} 1 \\ \sqrt{2} - 1 \end{pmatrix} \approx \begin{pmatrix} 0.924 \\ 0.383 \end{pmatrix}$$

For $\lambda_2 = -a\sqrt{2}$, we write the eigenket as $d_1|1\rangle + d_2|2\rangle$; then
$$a\begin{pmatrix} 1 + \sqrt{2} & 1 \\ 1 & -1 + \sqrt{2} \end{pmatrix}\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = 0$$
so we have $(1 + \sqrt{2})d_1 + d_2 = 0$, plus normalization: $|d_1|^2 + |d_2|^2 = 1$. Choosing $d_1$ to be real and positive, we find for the normalized eigenket
$$\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \frac{1}{\sqrt{4 + 2\sqrt{2}}}\begin{pmatrix} 1 \\ -\sqrt{2} - 1 \end{pmatrix} \approx \begin{pmatrix} 0.383 \\ -0.924 \end{pmatrix}$$
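These eigenvalues and eigenkets are easy to verify numerically; a minimal sketch (not part of the original solution), assuming numpy and setting the energy scale $a = 1$:

```python
# Numerical check of the eigenvalues and eigenkets found above; assumes numpy, a = 1.
import numpy as np

a = 1.0
H = a * np.array([[1.0, 1.0], [1.0, -1.0]])

vals, vecs = np.linalg.eigh(H)            # H is real symmetric, so eigh applies
assert np.allclose(np.sort(vals), [-a * np.sqrt(2), a * np.sqrt(2)])

c = np.array([1.0, np.sqrt(2) - 1]) / np.sqrt(4 - 2 * np.sqrt(2))
assert np.allclose(H @ c, a * np.sqrt(2) * c)      # eigenket for +a*sqrt(2)

d = np.array([1.0, -np.sqrt(2) - 1]) / np.sqrt(4 + 2 * np.sqrt(2))
assert np.allclose(H @ d, -a * np.sqrt(2) * d)     # eigenket for -a*sqrt(2)
```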

6. (Sakurai 1.11)
A two-state system is characterized by the Hamiltonian
$$H = H_{11}\,|1\rangle\langle1| + H_{22}\,|2\rangle\langle2| + H_{12}\,\big[|1\rangle\langle2| + |2\rangle\langle1|\big]$$
where $H_{11}$, $H_{22}$, and $H_{12}$ are real numbers with the dimension of energy, and $|1\rangle$ and $|2\rangle$ are eigenkets of some observable ($\neq H$). Find the energy eigenkets and corresponding energy eigenvalues. Make sure that your answer makes good sense for $H_{12} = 0$.

The eigenvalues are found in the usual manner:
$$0 = \det(H - \lambda I) = \begin{vmatrix} H_{11} - \lambda & H_{12} \\ H_{12} & H_{22} - \lambda \end{vmatrix}$$
which gives, after a little algebra,
$$\lambda_\pm = \frac{1}{2}\left[H_{11} + H_{22} \pm \sqrt{(H_{11} - H_{22})^2 + 4H_{12}^2}\right]$$
(If $H_{12} = 0$, the eigenvalues simplify to just $H_{11}$ and $H_{22}$, which makes sense since the Hamiltonian becomes diagonal w.r.t. the $|1\rangle, |2\rangle$ basis in this situation.)
The eigenkets then follow as usual (I won't bother to normalize them; doing so is not hard, but the resulting normalization constant is rather messy.) For $\lambda_+$, we write the eigenket as $c_1|1\rangle + c_2|2\rangle$; then
$$\begin{pmatrix} H_{11} & H_{12} \\ H_{12} & H_{22} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \lambda_+\begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$
Choosing $c_1 = 1$, we find from the bottom equation that $c_2 = \frac{H_{12}}{\lambda_+ - H_{22}}$, so the (unnormalized) eigenket is specified. Note that when $H_{12} = 0$ (and so $\lambda_+ = H_{11}$), then $c_2$ becomes zero, and the eigenket is $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ as expected.

Similarly, for $\lambda_-$, we write the eigenket as $d_1|1\rangle + d_2|2\rangle$; then
$$\begin{pmatrix} H_{11} & H_{12} \\ H_{12} & H_{22} \end{pmatrix}\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \lambda_-\begin{pmatrix} d_1 \\ d_2 \end{pmatrix}$$
Choosing $d_2 = 1$, we find from the top equation that $d_1 = \frac{H_{12}}{\lambda_- - H_{11}}$, so the (unnormalized) eigenket is specified. Note that when $H_{12} = 0$ (and so $\lambda_- = H_{22}$), then $d_1$ becomes zero, and the eigenket is $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ as expected.
