
Mike & Ike

Chapter 2 Exercises
Last Updated: 2018/07/24, at 23:44:42, EST, USA
Foobanana (the real one)

No I’m not posting the LaTeX code; write up your own solutions, you bum.
2.1 Show that (1, −1), (1, 2) and (2, 1) are linearly dependent.
Solution: See that 1·(1, −1)ᵀ + 1·(1, 2)ᵀ − 1·(2, 1)ᵀ = (0, 0)ᵀ.
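As a quick numerical sanity check (a numpy sketch, not part of the original solution), the coefficient vector (1, 1, −1) found above sends the three vectors to zero, and the matrix with the vectors as columns has rank only 2:

```python
import numpy as np

# Columns are the vectors (1, -1), (1, 2), (2, 1) from the exercise.
V = np.array([[1, 1, 2],
              [-1, 2, 1]])

coeffs = np.array([1, 1, -1])            # the combination found above
print(V @ coeffs)                        # [0 0]
print(np.linalg.matrix_rank(V))          # 2 < 3, so the vectors are dependent
```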

2.2 Suppose V is a vector space with basis vectors |0i and |1i, and A is a linear operator
from V to V such that A|0i = |1i and A|1i = |0i. Give a matrix representation for A,
with respect to the input basis |0i, |1i, and the output basis |0i, |1i. Find input and
output bases which give rise to a different matrix representation of A.
Solution: Notice that under this A,

|0⟩ ↦ |1⟩ = 0·|0⟩ + 1·|1⟩
|1⟩ ↦ |0⟩ = 1·|0⟩ + 0·|1⟩

Thus the representation of A we are looking for is A = [0 1; 1 0]. Now let us keep the
input basis |0⟩, |1⟩ but take the vectors |0⟩, (|0⟩ + |1⟩)/√2 as the output basis. Then we see that

|0⟩ ↦ |1⟩ = −1·|0⟩ + √2·((|0⟩ + |1⟩)/√2)
|1⟩ ↦ |0⟩ = 1·|0⟩ + 0·((|0⟩ + |1⟩)/√2)

Therefore the representation of A with respect to these bases is A = [−1 1; √2 0].

2.3 Suppose A : V → W and B : W → X are linear operators with vector spaces V ,W ,X.
Let |vi i, |wj i, and |xk i be bases for the vector spaces V ,W , and X, respectively. Show
that the matrix representation for the linear transformation BA is the matrix product
of the matrix representations for B and A, with respect to the appropriate bases.
Solution: The entries of A are described by A|v_i⟩ = Σ_j A_{ji}|w_j⟩ and the entries of
B are described by B|w_j⟩ = Σ_k B_{kj}|x_k⟩. With this in place, we can see that

BA|v_i⟩ = B(A|v_i⟩) = B(Σ_j A_{ji}|w_j⟩)
        = Σ_j A_{ji}(B|w_j⟩)                        (linearity)
        = Σ_j Σ_k B_{kj}A_{ji}|x_k⟩ = Σ_k (BA)_{ki}|x_k⟩

so the matrix representing BA has entries (BA)_{ki} = Σ_j B_{kj}A_{ji}, which is exactly the matrix product of the representations of B and A.

2.4 Show that the identity operator on a vector space V has a matrix representation which
is one along the diagonal and zero everywhere else, if the matrix representation is taken
with respect to the same input and output bases.
Solution: If we take some basis |v_i⟩ for V , then we see that this identity operator,
call it I, must be such that I|v_i⟩ = |v_i⟩. Then, as |v_j⟩ = I|v_j⟩ = Σ_i I_{ij}|v_i⟩, we must have
that I_{jj} = 1 for all j, and I_{ij} = 0 for distinct i, j. Thus the identity operator has a matrix
representation with 1 on the diagonal and 0 everywhere else.
2.5 Verify that (·, ·) shown below is an inner product on Cⁿ.

((y_1, ..., y_n), (z_1, ..., z_n)) = Σ_i y_i* z_i

i.e. the row vector (y_1*, ..., y_n*) times the column vector (z_1, ..., z_n)ᵀ.

Solution:
(a) To see that (·, ·) is linear in the second argument, notice that

((y_1, ..., y_n), Σ_j a_j (z_1^j, ..., z_n^j)) = Σ_i y_i*(a_1 z_i^1 + ... + a_n z_i^n)
  = a_1 Σ_i y_i* z_i^1 + ... + a_n Σ_i y_i* z_i^n
  = Σ_j a_j (Σ_i y_i* z_i^j)
  = Σ_j a_j ((y_1, ..., y_n), (z_1^j, ..., z_n^j))

(b) To see that (·, ·) is conjugate symmetric, notice that for complex numbers z1 =
(a + bi), z2 = (c + di) that
z1∗ z2 = (a − bi)(c + di) = (ac + bd) + i(ad − bc)
= ((ac + bd) − i(ad − bc))∗ = ((c − di)(a + bi))∗ = (z2∗ z1 )∗
Thus, we have that

((y_1, ..., y_n), (z_1, ..., z_n)) = Σ_i y_i* z_i
  = Σ_i (z_i* y_i)*
  = (Σ_i z_i* y_i)*
  = ((z_1, ..., z_n), (y_1, ..., y_n))*

(c) Finally, to see that (·, ·) is positive definite, notice that for z = a + bi we have z*z = a² + b², and so, writing y_i = a_i + b_i i,

((y_1, ..., y_n), (y_1, ..., y_n)) = Σ_i y_i* y_i = Σ_i (a_i² + b_i²)

This is always ≥ 0, and ((y_1, ..., y_n), (y_1, ..., y_n)) = 0 if and only if a_i² + b_i² = 0 for each i, which happens if and only if a_i = b_i = 0 for each i, i.e. if and only if (y_1, ..., y_n) = 0.

2.6 Show that any inner product (·, ·) is conjugate-linear in the first argument.
Using conjugate symmetry and linearity in the second argument, we find that
(Σ_i λ_i|w_i⟩, |v⟩) = (|v⟩, Σ_i λ_i|w_i⟩)*              (conjugate symmetry)
  = (Σ_i λ_i(|v⟩, |w_i⟩))*                               (linearity in 2nd argument)
  = Σ_i λ_i*(|v⟩, |w_i⟩)*
  = Σ_i λ_i*(|w_i⟩, |v⟩)                                  (conjugate symmetry)

2.7 Verify that |wi = (1, 1) and |vi = (1, −1) are orthogonal. What are the normalized
forms of these vectors?
Solution: See that (|w⟩, |v⟩) = 1*·1 + 1*·(−1) = 1 − 1 = 0, and so |w⟩ and
|v⟩ are orthogonal. The normalized forms of these vectors are |w⟩/‖|w⟩‖ = (1/√2)(1, 1) and
|v⟩/‖|v⟩‖ = (1/√2)(1, −1).

2.8 Prove that the Gram-Schmidt procedure produces an orthonormal basis for V .
Solution: Take |wi i to be a basis for V and |vi i to be the set generated by the
Gram-Schmidt process. By definition we can see that the vectors |vi i are unit length.
To see orthogonality, take distinct p ≠ q, with p < q without loss of generality, and let
N = ‖|w_q⟩ − Σ_{i=1}^{q−1} ⟨v_i|w_q⟩|v_i⟩‖ so that things don’t look hideous. Since
|v_q⟩ = (|w_q⟩ − Σ_{i=1}^{q−1} ⟨v_i|w_q⟩|v_i⟩)/N, we may see that

(|v_p⟩, |v_q⟩) = (1/N)[⟨v_p|w_q⟩ − Σ_{i=1}^{q−1} ⟨v_i|w_q⟩⟨v_p|v_i⟩]
  = (1/N)[⟨v_p|w_q⟩ − (0 + ... + ⟨v_p|w_q⟩·1 + ... + 0)]
  = (1/N)[⟨v_p|w_q⟩ − ⟨v_p|w_q⟩] = 0
Finally, to see that this set |vi i forms a basis, we notice by definition that each |wj i ∈
Span(|vi i), and since there are as many |vi i as there are |wi i, we have thus found an
independent set of size equal to the size of our basis, and so |vi i is thus a basis for V .

2.9 The Pauli matrices (Figure 2.2 on page 65) can be considered as operators with respect
to an orthonormal basis |0i, |1i for a two-dimensional Hilbert space. Express each of
the Pauli operators in the outer product notation.
Solution: It is easy to verify that

I = [1 0; 0 1] = |0⟩⟨0| + |1⟩⟨1|
X = [0 1; 1 0] = |0⟩⟨1| + |1⟩⟨0|
Y = [0 −i; i 0] = −i|0⟩⟨1| + i|1⟩⟨0|
Z = [1 0; 0 −1] = |0⟩⟨0| − |1⟩⟨1|

2.10 Suppose |vi i is an orthonormal basis for an inner product space V . What is the matrix
representation for the operator |vj ihvk |, with respect to the |vi i basis?
   
Solution: If |v_j⟩ has components v_j^m and |v_k⟩ has components v_k^n in some basis, then in that basis the outer product has the matrix representation

(|v_j⟩⟨v_k|)_{mn} = v_j^m (v_k^n)*

With respect to the |v_i⟩ basis itself the components are v_j^m = δ_{jm} and v_k^n = δ_{kn}, so the matrix representation of |v_j⟩⟨v_k| with respect to the |v_i⟩ basis is the matrix with a 1 in the (j, k) entry and 0 everywhere else.

2.11 Find the eigenvectors, eigenvalues, and diagonal representations of the Pauli matrices
X, Y , and Z.

Solution: I is trivial, and Z is already done in the book, so we will only expand
on X and Y . For the operator X, we see that
 
det(X − λI) = det[−λ 1; 1 −λ] = λ² − 1

Thus the eigenvalues of X are λ = ±1. Using these eigenvalues we find that when:

λ = 1:  [−1 1 | 0; 1 −1 | 0] → [−1 1 | 0; 0 0 | 0], so x_1 = x_2 and thus x = (1, 1)
λ = −1: [1 1 | 0; 1 1 | 0] → [1 1 | 0; 0 0 | 0], so x_1 = −x_2 and thus x = (1, −1)

After normalization our eigenvectors are (1/√2)(1, 1) and (1/√2)(1, −1). With this information we find that

Σ_i λ_i|i⟩⟨i| = 1·(1/2)[1 1; 1 1] + (−1)·(1/2)[1 −1; −1 1]
            = [1/2 1/2; 1/2 1/2] + [−1/2 1/2; 1/2 −1/2]
            = [0 1; 1 0] = X
Thus we confirm that we have diagonalized X. To diagonalize Y , we find that
 
det(Y − λI) = det[−λ −i; i −λ] = λ² − 1

Thus the eigenvalues of Y are λ = ±1. Using these eigenvalues we find that when:

λ = 1:  [−1 −i | 0; i −1 | 0] → [−1 −i | 0; 0 0 | 0], so x_1 = −ix_2 and thus x = (−i, 1)
λ = −1: [1 −i | 0; i 1 | 0] → [1 −i | 0; 0 0 | 0], so x_1 = ix_2 and thus x = (i, 1)

After normalization our eigenvectors are (1/√2)(−i, 1) and (1/√2)(i, 1). With this information we find that (keeping in mind that ⟨ψ| = (|ψ⟩*)ᵀ)

Σ_i λ_i|i⟩⟨i| = 1·(1/2)[1 −i; i 1] + (−1)·(1/2)[1 i; −i 1]
            = [0 −i; i 0] = Y
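As a numerical cross-check of these eigenpairs (a numpy sketch, not part of the original write-up), np.linalg.eigh returns the eigenvalues ±1 and eigenvectors from which X and Y can be rebuilt as Σ_i λ_i|i⟩⟨i|:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

for name, A in [("X", X), ("Y", Y)]:
    vals, vecs = np.linalg.eigh(A)       # Hermitian eigendecomposition
    # Rebuild A as sum_i lambda_i |i><i| from its eigenpairs.
    rebuilt = sum(l * np.outer(v, v.conj()) for l, v in zip(vals, vecs.T))
    print(name, vals, np.allclose(rebuilt, A))   # eigenvalues [-1.  1.], True
```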

2.12 Prove that the matrix [1 0; 1 1] is not diagonalizable.
Observe that

A†A = [1 1; 0 1][1 0; 1 1] = [2 1; 1 1] ≠ [1 1; 1 2] = [1 0; 1 1][1 1; 0 1] = AA†

Since A is not normal, it is not diagonalizable (in the book's sense of having a diagonal representation with respect to an orthonormal set of eigenvectors) by the spectral theorem.
2.13 If |wi and |vi are any two vectors, show that (|wihv|)† = |vihw|.
Solution: Let |w⟩, |v⟩ ∈ V and consider |w⟩⟨v|. Then we may see that for any |x⟩, |y⟩ ∈ V,

(|x⟩, (|w⟩⟨v|)|y⟩) = (|x⟩, ⟨v|y⟩|w⟩)
  = ⟨v|y⟩(|x⟩, |w⟩)              (linearity in 2nd argument)
  = ⟨v|y⟩⟨x|w⟩
  = ⟨w|x⟩*(|v⟩, |y⟩)             (since ⟨x|w⟩ = ⟨w|x⟩* and (|v⟩, |y⟩) = ⟨v|y⟩)
  = (⟨w|x⟩|v⟩, |y⟩)              (Exercise 2.6)
  = ((|v⟩⟨w|)|x⟩, |y⟩)

As this holds for all |x⟩, |y⟩ ∈ V, the definition of the adjoint gives (|w⟩⟨v|)† = |v⟩⟨w|, as desired.
2.14 Show that the adjoint operation is anti-linear.
Solution: Observe that

((Σ_i a_iA_i)†|v⟩, |w⟩) = (|v⟩, (Σ_i a_iA_i)|w⟩)          (definition of adjoint)
  = Σ_i a_i(|v⟩, A_i|w⟩)                                   (linearity in second argument)
  = Σ_i a_i(A_i†|v⟩, |w⟩)                                  (definition of adjoint)
  = ((Σ_i a_i*A_i†)|v⟩, |w⟩)                               (Exercise 2.6)

As this holds for all |v⟩, |w⟩ ∈ V we reference Corollary 1 at the back of this solution
set to get (as we wished to show) that

(Σ_i a_iA_i)† = Σ_i a_i*A_i†

2.15 Show that (A† )† = A.


Utilizing conjugate symmetry we may observe that
(|vi, A|wi) = (A† |vi, |wi) (definition of adjoint)
= (|wi, A† |vi)∗ (conjugate symmetry)
= ((A† )† |vi, |wi)∗ (definition of adjoint)
= ((|vi, (A† )† |wi)∗ )∗ (conjugate symmetry)
= (|vi, (A† )† |wi)
Here we again reference Corollary 1 at the back of this solution set to get that A =
(A† )† , as desired.
2.16 Show that any projector P satisfies the equation P 2 = P .
Solution: Let P = Σ_{i=1}^{k} |i⟩⟨i|. Then we find that if we act on an element |x⟩ ∈ V,

((|i⟩⟨i|)(|j⟩⟨j|))(|x⟩) = (|i⟩⟨i|)[⟨j|x⟩|j⟩]
  = ⟨j|x⟩[(|i⟩⟨i|)(|j⟩)]
  = ⟨j|x⟩[⟨i|j⟩|i⟩]
  = ⟨j|x⟩·[0·|i⟩] = 0                     if i ≠ j
  = ⟨j|x⟩·[1·|i⟩] = (|i⟩⟨i|)(|x⟩)         if i = j

Thus we have that

P² = (Σ_{i=1}^{k} |i⟩⟨i|)² = Σ_{i,j=1}^{k} (|i⟩⟨i|)(|j⟩⟨j|) = Σ_{i=1}^{k} |i⟩⟨i| = P

2.17 Show that a normal matrix is Hermitian if and only if it has real eigenvalues.
Solution: First suppose that A has real eigenvalues. Since it is normal, it has
spectral decomposition A = Σ_i λ_i|i⟩⟨i|, where by our assumption each λ_i ∈ R. Applying
Exercise 2.13 we find that (|i⟩⟨i|)† = |i⟩⟨i| for all |i⟩. So with this we see that A is thus
Hermitian, since

A† = (Σ_i λ_i|i⟩⟨i|)† = Σ_i λ_i*(|i⟩⟨i|)† = Σ_i λ_i|i⟩⟨i| = A

Now, assume that A is Hermitian, that is, that A† = A. Then we find that

Σ_i λ_i*|i⟩⟨i| = Σ_i λ_i*(|i⟩⟨i|)† = (Σ_i λ_i|i⟩⟨i|)† = A† = A = Σ_i λ_i|i⟩⟨i|

This tells us that

0 = Σ_i λ_i*|i⟩⟨i| − Σ_i λ_i|i⟩⟨i| = Σ_i (λ_i* − λ_i)|i⟩⟨i|

As the vectors |i⟩ form an orthonormal basis, the operators |i⟩⟨i| are linearly independent, and
so each λ_i* − λ_i = 0 necessarily, which can only happen if λ_i is real valued; thus we
have shown that all eigenvalues λ_i of A are real if A is Hermitian.
2.18 Show that all eigenvalues of a unitary matrix have modulus 1, that is, can be written
in the form eiθ for some real θ.
Solution: Suppose that λ ∈ C is an eigenvalue of U with eigenvector |v⟩ ≠ 0, that is, U|v⟩ = λ|v⟩. We also
know that since U is unitary it preserves inner products, that is, (U|v⟩, U|w⟩) = (|v⟩, |w⟩); then we may see that

(|v⟩, |v⟩) = (U|v⟩, U|v⟩) = (λ|v⟩, λ|v⟩) = λ*λ(|v⟩, |v⟩) = |λ|²(|v⟩, |v⟩)

Since (|v⟩, |v⟩) > 0, this implies that |λ|² = 1, and thus λ has modulus 1, i.e. λ = e^{iθ} for some real θ.
2.19 Show that the Pauli matrices are Hermitian and unitary.
To see that the Pauli matrices are Hermitian, observe that

I† = (I*)ᵀ = [1 0; 0 1]ᵀ = [1 0; 0 1] = I
X† = (X*)ᵀ = [0 1; 1 0]ᵀ = [0 1; 1 0] = X
Y† = (Y*)ᵀ = [0 i; −i 0]ᵀ = [0 −i; i 0] = Y
Z† = (Z*)ᵀ = [1 0; 0 −1]ᵀ = [1 0; 0 −1] = Z

Then we will use the fact that they are Hermitian to show that A†A = I, i.e. that
they are unitary, as shown below.

I†I = I,  X†X = [0 1; 1 0][0 1; 1 0] = I
Y†Y = [0 −i; i 0][0 −i; i 0] = I,  Z†Z = [1 0; 0 −1][1 0; 0 −1] = I
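The same checks can be run numerically (a small numpy sketch, not part of the original solutions):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

for name, M in [("I", I), ("X", X), ("Y", Y), ("Z", Z)]:
    hermitian = np.allclose(M, M.conj().T)        # M† = M
    unitary = np.allclose(M.conj().T @ M, I)      # M†M = I
    print(name, hermitian, unitary)               # all True
```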

2.20 Suppose A0 and A00 are matrix representations of an operator A on a vector space V
with respect to two different orthonormal bases, |vi i and |wi i. Then the elements of A0
and A00 are A0ij = hvi |A|vj i and A00ij = hwi |A|wj i. Characterize the relationship between
A0 and A00 .
Solution: Expanding each |w_j⟩ in the |v_i⟩ basis, A''_{ij} = ⟨w_i|A|w_j⟩ = Σ_{k,l} ⟨w_i|v_k⟩⟨v_k|A|v_l⟩⟨v_l|w_j⟩, so A'' = U†A'U, where U is the unitary change-of-basis matrix with entries U_{kj} = ⟨v_k|w_j⟩. That is, A' and A'' are related by a unitary similarity transformation.
2.21 Repeat the proof of the spectral decomposition in Box 2.2 for the case when M is
Hermitian, simplifying the proof wherever possible.
Solution: Let M be Hermitian, so M† = M. The case d = 1 is trivial. Let λ be an
eigenvalue of M, P the projector onto the λ eigenspace, and Q the projector onto the
orthogonal complement. Then M = (P + Q)M(P + Q) = PMP + QMP + PMQ + QMQ.
Obviously PMP = λP. Furthermore, QMP = 0, as M takes the subspace P projects onto
into itself. We claim that PMQ = 0 also, since PMQ = (QM†P)† = (QMP)† = 0. Thus
M = PMP + QMQ. Next, note that QMQ is Hermitian, and hence normal, since
(QMQ)† = Q†M†Q† = QMQ. By induction, QMQ is diagonal with respect to some
orthonormal basis for the subspace Q projects onto, and PMP is already diagonal with
respect to some orthonormal basis for the subspace P projects onto. It follows that
M = PMP + QMQ is diagonal with respect to some orthonormal basis for the total
vector space. This completes our simplified proof.
2.22 Prove that two eigenvectors of a Hermitian operator with different eigenvalues are
necessarily orthogonal.
Solution: We know that Hermitian operators are normal, and by the spectral
theorem normal operators are diagonalizable, so A = Σ_i λ_i|i⟩⟨i|, where the |i⟩ are an
orthonormal set of eigenvectors of A. An eigenvector of A with eigenvalue λ lies in the
span of those |i⟩ with λ_i = λ, while an eigenvector with a different eigenvalue μ ≠ λ lies
in the span of those |i⟩ with λ_i = μ. These two sets of basis vectors are disjoint and
mutually orthogonal, so the two eigenvectors are necessarily orthogonal.
2.23 Show that the eigenvalues of a projector P are all either 0 or 1.
Solution: We know that P = Σ_{i=1}^{k} |i⟩⟨i| for some orthonormal vectors |i⟩. Extending
these vectors to a full orthonormal basis, we may write P = Σ_i a_i|i⟩⟨i| over all values of i,
where each coefficient a_i is either 0 or 1. This is a diagonal representation of P, and since
the coefficients a_i are the eigenvalues of the operator, we thus know that any projector P
has only 0 or 1 as eigenvalues.
2.24 Show that a positive operator is necessarily Hermitian. (Hint: Show that an arbitrary
operator A can be written A = B + iC where B and C are Hermitian.)
Solution: UNFINISHED WORK

2.25 Show that for any operator A, A† A is positive.
Solution: We know by Exercise 2.15 that (A†)† = A. Let A be a linear operator and see that for all |v⟩ ∈ V,

(|v⟩, A†A|v⟩) = (|v⟩, A†(A|v⟩)) = ((A†)†|v⟩, A|v⟩) = (A|v⟩, A|v⟩)

Since we know (|w⟩, |w⟩) ≥ 0 for all vectors |w⟩ ∈ V, this in particular holds for the vector
A|v⟩ ∈ V. Thus we have that (|v⟩, A†A|v⟩) ≥ 0 for all |v⟩ ∈ V, and therefore A†A is a
positive operator for any operator A, as A was arbitrary.

2.26 Let |ψi = (|0i + |1i)/ 2. Write out |ψi⊗2 and |ψi⊗3 explicitly, both in terms of tensor
products like |0i|1i, and using the Kronecker product.
Solution:
 " #
√1 1
" # " #  √12 2
√1 
2
√1 √1

⊗2
 2
1
2 2
|ψi = |ψi ⊗ |ψi = √1
⊗ √1
= " # = 2
1
 √1  2
2 2  √1 2  1
2 √1 2
2

1
 
  1  √
2 2
2  √1 
  1  2 2
 1   √1  21   √1 
 2   2 2
" #
1 2  2   1 
√ 1 1   √ 
|ψi⊗3
  
2 212
= |ψi ⊗ |ψi ⊗ |ψi = √1 ⊗  2 = 
1 
 2 
1  =  √ 
2 2  2  2 2
1  1  1   √1 
2  √  21  2 2
 2    √1 
2 2 2
1
2 √1
2 2

Since these all have every outcome with the same probability, we may express these
states as
1
|ψi⊗2 = (|00i + |01i + |10i + |11i)
2
1
|ψi⊗3 = √ (|000i + |001i + |010i + |011i + |100i + |101i + |110i + |111i)
2 2
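A short numerical confirmation (a numpy sketch, not part of the original solution):

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)             # |psi> = (|0> + |1>)/sqrt(2)
psi2 = np.kron(plus, plus)                       # |psi> tensor |psi>
psi3 = np.kron(psi2, plus)
print(psi2)                                      # [0.5 0.5 0.5 0.5]
print(np.allclose(psi3, np.full(8, 1 / (2 * np.sqrt(2)))))   # True
```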
2.27 Calculate the matrix representation of the tensor products of the Pauli operators (a)
X and Z; (b) I and X; (c) X and I. Is the tensor product commutative?
Solution: With calculation, we evaluate said tensor products as shown below (4×4 matrices written row by row):

(a) X ⊗ Z = [0·Z 1·Z; 1·Z 0·Z] = [0 0 1 0; 0 0 0 −1; 1 0 0 0; 0 −1 0 0]

(b) I ⊗ X = [1·X 0·X; 0·X 1·X] = [0 1 0 0; 1 0 0 0; 0 0 0 1; 0 0 1 0]

(c) X ⊗ I = [0·I 1·I; 1·I 0·I] = [0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0]

Clearly the tensor product is not commutative, because for the pair X, I we found
above that X ⊗ I ≠ I ⊗ X.

2.28 Show that the transpose, complex conjugation, and adjoint operations distribute over
the tensor product.
Solution:

(A ⊗ B)ᵀ = [A_{11}B ... A_{1n}B; ...; A_{m1}B ... A_{mn}B]ᵀ = [A_{11}Bᵀ ... A_{m1}Bᵀ; ...; A_{1n}Bᵀ ... A_{mn}Bᵀ] = Aᵀ ⊗ Bᵀ

(A ⊗ B)* = [A_{11}B ... A_{1n}B; ...; A_{m1}B ... A_{mn}B]* = [(A_{11}B)* ... (A_{1n}B)*; ...; (A_{m1}B)* ... (A_{mn}B)*]
         = [A_{11}*B* ... A_{1n}*B*; ...; A_{m1}*B* ... A_{mn}*B*] = A* ⊗ B*

Lastly we just note that since the matrix representation of A† is (A*)ᵀ and we have
proven that transposition and conjugation distribute over the tensor product, we thus have
that (A ⊗ B)† = A† ⊗ B†.

2.29 Show that the tensor product of two unitary operators is unitary.
Solution: Let U and V be two unitary operators, that is, U † U = I = V † V . Keep
in mind that the columns of a unitary matrix form an orthonormal basis, and so if
U = [c1 , ..., cn ] is composed of columns c1 , ..., cn then hci |cj i = δij . With this, we see

that

(U ⊗ V)†(U ⊗ V) = (U† ⊗ V†)(U ⊗ V)            (Exercise 2.28)

Writing this as a block matrix, the (i, j) block of the product is

Σ_k (U†)_{ik}U_{kj} V†V = (Σ_k U_{ki}*U_{kj}) V†V = ⟨c_i|c_j⟩I = δ_{ij}I

so (U ⊗ V)†(U ⊗ V) is the identity on the whole tensor product space, and U ⊗ V is unitary.
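The statement can also be spot-checked numerically with random unitaries (a numpy sketch, not part of the original solutions; random_unitary is a helper introduced here just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR decomposition of a random complex matrix yields a unitary Q.
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    Q, _ = np.linalg.qr(A)
    return Q

U, V = random_unitary(2), random_unitary(3)
W = np.kron(U, V)
print(np.allclose(W.conj().T @ W, np.eye(6)))    # True: U ⊗ V is unitary
```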

2.30 Show that the tensor product of two Hermitian operators is Hermitian.
Solution: Let U and V be Hermitian, that is U † = U and V † = V . Then, taking
note of Exercise 2.28 for the first equality, we find that

(U ⊗ V )† = U † ⊗ V † = U ⊗ V (replacement, since U and V are Hermitian)

2.31 Show that the tensor product of two positive operators is positive.
Solution: Let A and B be positive, so that for all |vi ∈ V we have that (|vi, A|vi)
is nonnegative and real, and similarly for all |wi ∈ W , (|wi, B|wi) is nonnegative and
real. With this, and noting our inner product defined on V ⊗ W , we see that for all
|vi ⊗ |wi ∈ V ⊗ W that

(|vi ⊗ |wi, (A ⊗ B)(|vi ⊗ |wi)) = (|vi ⊗ |wi, A|vi ⊗ B|wi) = hv|A|vihw|B|wi

Since ⟨v|A|v⟩ = (|v⟩, A|v⟩) and ⟨w|B|w⟩ = (|w⟩, B|w⟩) are both nonnegative and real,
their product must be also. For a general vector |u⟩ of V ⊗ W (not necessarily a product
vector), write A = Σ_i a_i|i⟩⟨i| and B = Σ_j b_j|j⟩⟨j| with nonnegative eigenvalues; then
⟨u|(A ⊗ B)|u⟩ = Σ_{i,j} a_i b_j |(⟨i| ⊗ ⟨j|)|u⟩|² ≥ 0, and so A ⊗ B is again a positive
operator if both A and B are, as we wished to show.

2.32 Show that the tensor product of two projectors is a projector.


Solution: Let V and W be the component spaces of our tensor product, V ⊗ W .
We know V has some orthonormal basis set |ii and W has basis set |ji; call these basis
sets BV and BW for better notation. Let SV ⊂ BV and SW ⊂ BW , and then we will
define projectors X X
PV = |iihi|, and PW = |jihj|
|ii∈SV |ji∈SW

We wish to show that PV ⊗ PW is a projector on V ⊗ W . To do this, we need to show
that it is a sum of elements vv † where v is an element of a basis for V ⊗ W . We know
because of this that v = |ii ⊗ |ji where |ii and |ji are elements of orthonormal bases
for V and W respectively (I’ve proven this before, but can’t remember where in the
Mike & Ike this is). So, to start we will observe that
   
P_V ⊗ P_W = (Σ_{|i⟩∈S_V} |i⟩⟨i|) ⊗ (Σ_{|j⟩∈S_W} |j⟩⟨j|) = Σ_{|i⟩∈S_V} Σ_{|j⟩∈S_W} (|i⟩⟨i|) ⊗ (|j⟩⟨j|)        (bilinearity of tensor product)

Thus, it only remains to be shown that (|i⟩⟨i|) ⊗ (|j⟩⟨j|) = (|i⟩ ⊗ |j⟩)(⟨i| ⊗ ⟨j|). We
may see that this is true by comparing entries of the two Kronecker products: writing i_a for
the components of |i⟩ and j_b for those of |j⟩, the entry of (|i⟩⟨i|) ⊗ (|j⟩⟨j|) in row (a, b)
and column (c, d) is (i_a i_c*)(j_b j_d*), while the corresponding entry of
(|i⟩ ⊗ |j⟩)(⟨i| ⊗ ⟨j|) is (i_a j_b)(i_c j_d)* = i_a i_c* j_b j_d*, and these agree. Hence
P_V ⊗ P_W = Σ (|i⟩ ⊗ |j⟩)(⟨i| ⊗ ⟨j|) is a sum of outer products of the orthonormal vectors
|i⟩ ⊗ |j⟩ of V ⊗ W, and so is a projector on V ⊗ W.

2.33 The Hadamard operator on one qubit may be written as


H = (1/√2)[(|0⟩ + |1⟩)⟨0| + (|0⟩ − |1⟩)⟨1|]

Show explicitly that the Hadamard transform on n qubits, H^{⊗n}, may be written as

H^{⊗n} = (1/√(2ⁿ)) Σ_{x,y} (−1)^{x·y} |x⟩⟨y|

Write out an explicit matrix representation for H ⊗n .
Solution: UNFINISHED WORK
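While the derivation is left unfinished above, the claimed identity can at least be checked numerically for small n (a numpy sketch under the convention that the first tensor factor is the most significant bit of x; this is only a sanity check, not the requested proof):

```python
import numpy as np
from itertools import product

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def hadamard_sum(n):
    # (1/sqrt(2^n)) * sum_{x,y} (-1)^(x.y) |x><y| over n-bit strings x, y
    dim = 2 ** n
    M = np.zeros((dim, dim))
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            row = int("".join(map(str, x)), 2)
            col = int("".join(map(str, y)), 2)
            M[row, col] = (-1) ** sum(a * b for a, b in zip(x, y))
    return M / np.sqrt(dim)

for n in (1, 2, 3):
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)
    print(n, np.allclose(Hn, hadamard_sum(n)))   # True for each n
```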
2.34 Find the square root and logarithm of the matrix A = [4 3; 3 4].
Solution: First we must find the eigenvalues and eigenvectors of A for our spectral
decomposition. See that det(A − λI) = λ² − 8λ + 7 = (λ − 1)(λ − 7), and so we have
two eigenvalues 1 and 7. For the eigenvectors,

λ = 1: [3 3 | 0; 3 3 | 0] → [3 3 | 0; 0 0 | 0], so x_1 = −x_2 and thus x = (1, −1)
λ = 7: [−3 3 | 0; 3 −3 | 0] → [−3 3 | 0; 0 0 | 0], so x_1 = x_2 and thus x = (1, 1)

Normalizing these to (1, −1)/√2 and (1, 1)/√2, we have that

A = Σ_i λ_i|i⟩⟨i| = 1·(1/2)[1 −1; −1 1] + 7·(1/2)[1 1; 1 1]

With this representation of A, we can calculate √A and ln(A) as shown below:

√A = √1·(1/2)[1 −1; −1 1] + √7·(1/2)[1 1; 1 1] = (1/2)[1+√7  −1+√7; −1+√7  1+√7]
ln(A) = ln(1)·(1/2)[1 −1; −1 1] + ln(7)·(1/2)[1 1; 1 1] = (1/2)[ln 7  ln 7; ln 7  ln 7]
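These can be checked numerically by applying functions to the eigenvalues of A (a numpy sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[4.0, 3.0], [3.0, 4.0]])
vals, vecs = np.linalg.eigh(A)                   # eigenvalues [1., 7.]

def apply(f):
    # f(A) = sum_i f(lambda_i) |i><i| for Hermitian A
    return (vecs * f(vals)) @ vecs.T

sqrtA, logA = apply(np.sqrt), apply(np.log)
s7 = np.sqrt(7)
print(np.allclose(sqrtA @ sqrtA, A))                                        # True
print(np.allclose(sqrtA, 0.5 * np.array([[1 + s7, -1 + s7],
                                          [-1 + s7, 1 + s7]])))             # True
print(np.allclose(logA, (np.log(7) / 2) * np.ones((2, 2))))                 # True
```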
2.35 Let ~v be any real, three-dimensional unit vector and θ a real number. Prove that
exp(iθ~v · ~σ ) = cos(θ)I + i sin(θ)~v · ~σ
Solution: UNFINISHED WORK
2.36 Show that the Pauli matrices except for I have trace zero.
Solution: Observe with calculation that

tr(I) = tr[1 0; 0 1] = 1 + 1 = 2,   tr(X) = tr[0 1; 1 0] = 0 + 0 = 0
tr(Y) = tr[0 −i; i 0] = 0 + 0 = 0,   tr(Z) = tr[1 0; 0 −1] = 1 − 1 = 0
2.37 If A and B are two linear operators show that tr(AB) = tr(BA)
Solution: Let A be m × n and B be n × m. We may observe that

tr(AB) = Σ_{i=1}^{m} (AB)_{ii} = Σ_{i=1}^{m} Σ_{j=1}^{n} a_{ij}b_{ji} = Σ_{j=1}^{m} Σ_{i=1}^{n} a_{ji}b_{ij}        (just switching the labels i and j)
       = Σ_{i=1}^{n} (Σ_{j=1}^{m} b_{ij}a_{ji}) = Σ_{i=1}^{n} (BA)_{ii} = tr(BA)

2.38 If A and B are two linear operators, and if z is an arbitrary complex number show
that tr(zA + B) = ztr(A) + tr(B).
Solution: We may see easily that

z·tr(A) + tr(B) = z Σ_i A_{ii} + Σ_i B_{ii} = Σ_i (zA_{ii} + B_{ii}) = tr(zA + B)

2.39 The set LV of linear operators on a Hilbert space V is obviously a vector space - the
sum of two linear operators is a linear operator, zA is a linear operator if A is a linear
operator and z is a complex number, and there is a zero element 0. An important
additional result is that the vector space LV can be given a natural inner product
structure, turning it into a Hilbert space.

(a) Show that the function (·, ·) on LV × LV defined by (A, B) ≡ tr(A† B) is an inner
product function. This inner product is known as the Hilbert–Schmidt or trace
inner product.
(b) If V has d dimensions show that LV has dimension d2 .
(c) Find an orthonormal basis of Hermitian matrices for the Hilbert space LV .

Solution:

(a) Suppose that A is m × n. To see that positivity holds, see that

tr(A†A) = Σ_{j=1}^{n} Σ_{i=1}^{m} a_{ij}*a_{ij}

So, we notice that since a_{ij}*a_{ij} ≥ 0, as it is the squared modulus of a_{ij}, our
function is positive. Notice also that as this is the sum over all 1 ≤ i ≤ m and all
1 ≤ j ≤ n, tr(A†A) = 0 if and only if a_{ij} = 0 for all i, j, that is, if and only if A ≡ 0.
Next, since the trace is linear, the inner product is linear in the second argument:

(A, Σ_i c_iB_i) = tr(A†(Σ_i c_iB_i)) = tr(Σ_i c_iA†B_i) = Σ_i c_i tr(A†B_i) = Σ_i c_i(A, B_i)

and a similar computation, together with the anti-linearity of the adjoint shown in
Exercise 2.14, gives conjugate-linearity in the first argument.


conjugate symmetry (too lazy to do it atm tbh)
(b)
(c)

UNFINISHED WORK

2.40 Verify the commutation relations [X, Y ] = 2iZ, [Y, Z] = 2iX, [Z, X] = 2iY .
Solution: With computation,

[X, Y] = XY − YX = [i 0; 0 −i] − [−i 0; 0 i] = [2i 0; 0 −2i] = 2iZ
[Y, Z] = YZ − ZY = [0 i; i 0] − [0 −i; −i 0] = [0 2i; 2i 0] = 2iX
[Z, X] = ZX − XZ = [0 1; −1 0] − [0 −1; 1 0] = [0 2; −2 0] = 2iY

2.41 Verify the anti-commutation relations {σi , σj } = 0 where i 6= j are both chosen from
the set {1, 2, 3}. Also verify for i = 1, 2, 3 that σi2 = I.
Solution: We saw computationally that σ_i² = I in Exercise 2.19. To see the rest of
the problem, we will notice that {σ_i, σ_j} = σ_iσ_j + σ_jσ_i = σ_jσ_i + σ_iσ_j = {σ_j, σ_i}, and so we
only need to make three calculations.

{σ_1, σ_2} = σ_1σ_2 + σ_2σ_1 = [i 0; 0 −i] + [−i 0; 0 i] = 0
{σ_2, σ_3} = σ_2σ_3 + σ_3σ_2 = [0 i; i 0] + [0 −i; −i 0] = 0
{σ_1, σ_3} = σ_1σ_3 + σ_3σ_1 = [0 −1; 1 0] + [0 1; −1 0] = 0
2.42 Verify that AB = ([A, B] + {A, B})/2.
Solution: By definition we have [A, B] = AB − BA and {A, B} = AB + BA, and
so adding these we get

[A, B] + {A, B} = (AB − BA) + (AB + BA) = 2AB

Therefore, we divide by 2 to get AB = ([A, B] + {A, B})/2.

2.43 Show that for j, k = 1, 2, 3,

σ_jσ_k = δ_{jk}I + i Σ_{ℓ=1}^{3} ε_{jkℓ}σ_ℓ

Solution: We will notice a couple of things for this problem (all of which stem
from Exercise 2.41).

• If j = k then ε_{jkℓ} = 0, and so we would need to show σ_j² = I, which we have done in Exercise 2.41 for j = 1, 2, 3.
• We also verified in Exercise 2.41 that σ_iσ_j = −σ_jσ_i for i ≠ j, and since ε_{jkℓ} = −ε_{kjℓ}, it will suffice to show the equation holds by computing σ_1σ_2, σ_2σ_3, σ_1σ_3.
• Lastly, note by our computations in Exercise 2.41 that we have shown σ_1σ_2 = iσ_3, σ_2σ_3 = iσ_1, and σ_1σ_3 = −iσ_2.

With these facts recorded, we may see that

j = 1, k = 2:  δ_{12}I + i Σ_ℓ ε_{12ℓ}σ_ℓ = 0 + i(0σ_1 + 0σ_2 + 1σ_3) = iσ_3 = σ_1σ_2
j = 2, k = 3:  δ_{23}I + i Σ_ℓ ε_{23ℓ}σ_ℓ = 0 + i(1σ_1 + 0σ_2 + 0σ_3) = iσ_1 = σ_2σ_3
j = 1, k = 3:  δ_{13}I + i Σ_ℓ ε_{13ℓ}σ_ℓ = 0 + i(0σ_1 − 1σ_2 + 0σ_3) = −iσ_2 = σ_1σ_3
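The full product identity can also be verified numerically (a numpy sketch, not part of the original solution; eps is a small helper for the Levi-Civita symbol):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(j, k, l):
    # Levi-Civita symbol, with indices 0, 1, 2 standing for 1, 2, 3
    if (j, k, l) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (j, k, l) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

ok = True
for j in range(3):
    for k in range(3):
        rhs = (j == k) * I2 + 1j * sum(eps(j, k, l) * sigma[l] for l in range(3))
        ok = ok and np.allclose(sigma[j] @ sigma[k], rhs)
print(ok)   # True
```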

2.44 Suppose [A, B] = 0, {A, B} = 0, and A is invertible. Show that B must be 0.


Solution: If [A, B] = 0 and {A, B} = 0 then we have that AB − BA = 0 = AB + BA,
and subtracting these gives 2BA = 0. Dividing by 2 and right multiplying by A⁻¹ then gives us
that B = 0, as we desired.

2.45 Show that [A, B]† = [B † , A† ].


Solution: [A, B]† = (AB − BA)† = (AB)† − (BA)† = B † A† − A† B † = [B † , A† ]

2.46 Show that [A, B] = −[B, A].


Solution: [A, B] = AB − BA = −(BA − AB) = −[B, A]

2.47 Suppose A and B are Hermitian. Show that i[A, B] is Hermitian.


Solution: If A and B are Hermitian, we have that A† = A and B† = B. Then we may
see, remembering that the adjoint consists (in part) of complex conjugation, that

(i[A, B])† = (iAB − iBA)† = (iAB)† − (iBA)† = (−iB † A† ) + (iA† B † )


= i(A† B † − B † A† ) = i(AB − BA) = i[A, B]

2.48 What is the polar decomposition of a positive matrix P? Of a unitary matrix U? Of
a Hermitian matrix H?
Solution: The polar decomposition of a positive matrix P is P = IP = PI, where the
unitary “U” is the identity matrix. Similarly, the polar decomposition of a unitary
matrix U is U = UI = IU, where the positive matrices “J” and “K” are both the
identity, as J = √(U†U) = √I = I, and similarly for K. Finally, for a Hermitian matrix H
we have J = K = √(H†H) = √(H²) = |H|, the positive operator whose eigenvalues are the
absolute values |λ_i| of the eigenvalues of H; the unitary may be taken to act as +1 on the
eigenvectors of H with λ_i ≥ 0 and as −1 on those with λ_i < 0, giving H = U|H| = |H|U.
(Only when H is itself positive do we get J = K = H and U = I.)

2.49 Express the polar decomposition of a normal matrix in the outer product representa-
tion.
Solution: UNFINISHED WORK

2.50 Find the left and right polar decompositions of the matrix [1 0; 1 1].
Solution: I can do this but it will be ugly; I might use Wolfram Alpha and skip
some steps. UNFINISHED WORK

2.51 Verify that the Hadamard gate H is unitary.


Solution: Observe that

H†H = ((1/√2)[1 1; 1 −1])†((1/√2)[1 1; 1 −1]) = (1/2)[1 1; 1 −1][1 1; 1 −1] = (1/2)[2 0; 0 2] = I

2.52 Verify that H 2 = I.


Solution: This was done in Exercise 2.51.

2.53 What are the eigenvalues and eigenvectors of H?


Solution: We find that det(H − λI) = λ² − 1, and so H has eigenvalues λ = ±1,
and we may see that

λ = 1:  [1/√2 − 1, 1/√2 | 0; 1/√2, −1/√2 − 1 | 0] → [1/√2 − 1, 1/√2 | 0; 0 0 | 0], so x_1 = (1 + √2)x_2 and thus x = (1 + √2, 1)
λ = −1: [1/√2 + 1, 1/√2 | 0; 1/√2, −1/√2 + 1 | 0] → [1/√2 + 1, 1/√2 | 0; 0 0 | 0], so x_1 = (1 − √2)x_2 and thus x = (1 − √2, 1)

2.54 Suppose A and B are commuting Hermitian operators. Prove that exp(A) exp(B) =
exp(A + B). (Hint: Use the results of Section 2.1.9.)
Solution: Note that since A and B commute, [A, B] = AB − BA = 0, and so by the
simultaneous diagonalization theorem we may write A = Σ_i a_i|i⟩⟨i| and B = Σ_i b_i|i⟩⟨i|,
where |i⟩ is a common orthonormal eigenbasis. Then, using j as a second dummy index
over the same basis, we may see that

exp(A) exp(B) = exp(Σ_i a_i|i⟩⟨i|) exp(Σ_j b_j|j⟩⟨j|)
  = Σ_{i,j} exp(a_i) exp(b_j)|i⟩⟨i|j⟩⟨j|
  = Σ_{i,j} δ_{ij} exp(a_i) exp(b_j)|i⟩⟨j|
  = Σ_i exp(a_i + b_i)|i⟩⟨i|
  = exp(Σ_i (a_i + b_i)|i⟩⟨i|)
  = exp(A + B)
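As a numerical spot check (a numpy sketch, not part of the original solution), two Hermitian matrices built to be diagonal in a common basis do commute and satisfy exp(A)exp(B) = exp(A + B); expm_hermitian is a small helper that exponentiates through the spectral decomposition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two commuting Hermitian matrices: both diagonal in the same (random) basis Q.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
a, b = rng.normal(size=3), rng.normal(size=3)
A = Q @ np.diag(a) @ Q.conj().T
B = Q @ np.diag(b) @ Q.conj().T

def expm_hermitian(M):
    # exp(M) = sum_i exp(m_i) |i><i| from the spectral decomposition of Hermitian M
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(vals)) @ vecs.conj().T

print(np.allclose(A @ B, B @ A))                                      # True
print(np.allclose(expm_hermitian(A) @ expm_hermitian(B),
                  expm_hermitian(A + B)))                             # True
```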
2.55 Prove that U(t_1, t_2) = exp[−iH(t_2 − t_1)/ℏ] is unitary.
Solution: UNFINISHED WORK
2.56 Use the spectral decomposition to show that K ≡ −i log(U ) is Hermitian for any
unitary U , and thus U = exp(iK) for some Hermitian K.
Solution: Since U is unitary we know from Exercise 2.18 that all its eigenvalues have
modulus 1, and so are expressible as e^{iθ} where θ ∈ R. So, take the spectral decomposition
U = Σ_k λ_k|k⟩⟨k| where λ_k = e^{iθ_k}. Then, we get that

K = −i log(U) = −i log(Σ_k e^{iθ_k}|k⟩⟨k|) = −i Σ_k iθ_k|k⟩⟨k| = Σ_k θ_k|k⟩⟨k|

Thus we have, as θ_k ∈ R for all k, that

K† = (Σ_k θ_k|k⟩⟨k|)† = Σ_k θ_k*(|k⟩⟨k|)† = Σ_k θ_k|k⟩⟨k| = K

Therefore, K is a Hermitian operator as desired, and thus U = exp(iK) for some
Hermitian K.
2.57 Suppose {L` } and {Mm } are two sets of measurement operators. Show that a measure-
ment defined by the measurement operators {L` } followed by a measurement defined
by the measurement operators {Mm } is physically equivalent to a single measurement
defined by measurement operators {N`m } with the representation N`m = Mm L` .
Solution: Let us first consider what happens when we apply L_ℓ and then M_m.
Applying L_ℓ to |ψ⟩ we get the state |φ⟩ = L_ℓ|ψ⟩/√p(ℓ) with probability p(ℓ) = ⟨ψ|L_ℓ†L_ℓ|ψ⟩.
Applying M_m to the new state |φ⟩ created after applying L_ℓ, we then get the new state

|ζ⟩ = M_m|φ⟩/√p(m) = M_m(L_ℓ|ψ⟩/√p(ℓ))/√p(m) = M_mL_ℓ|ψ⟩/√(p(m)p(ℓ))

This happens with probability

p(m) = (L_ℓ|ψ⟩/√p(ℓ))† M_m†M_m (L_ℓ|ψ⟩/√p(ℓ)) = (1/p(ℓ)) ⟨ψ|L_ℓ†M_m†M_mL_ℓ|ψ⟩

Notice from this we get that p(ℓ)p(m) = ⟨ψ|L_ℓ†M_m†M_mL_ℓ|ψ⟩.
Now suppose that we apply N_{ℓm} to |ψ⟩. Then we get the state |ξ⟩ = N_{ℓm}|ψ⟩/√p(ℓm) = M_mL_ℓ|ψ⟩/√p(ℓm)
with probability

p(ℓm) = ⟨ψ|N_{ℓm}†N_{ℓm}|ψ⟩ = ⟨ψ|(M_mL_ℓ)†(M_mL_ℓ)|ψ⟩ = ⟨ψ|L_ℓ†M_m†M_mL_ℓ|ψ⟩

Notice then that as p(ℓ)p(m) = p(ℓm), the outcome probabilities are the same,
and thus both the output states |ζ⟩ and |ξ⟩ are the same, and so we have successfully
shown that a measurement defined by the measurement operators {L_ℓ} followed by a
measurement defined by the measurement operators {M_m} is physically equivalent to a
single measurement defined by measurement operators {N_{ℓm}} with the representation
N_{ℓm} = M_mL_ℓ.

2.58 Suppose we prepare a quantum system in an eigenstate |ψi of some observable M ,


with corresponding eigenvalue m. What is the average observed value of M , and the
standard deviation?
Solution: Remembering that the eigenstate |ψ⟩ is normalized, we can see that
⟨M⟩ = ⟨ψ|M|ψ⟩ = m⟨ψ|ψ⟩ = m·1 = m, since M|ψ⟩ = m|ψ⟩ as m is the eigenvalue of M
corresponding to the eigenvector |ψ⟩. Writing the spectral decomposition M = Σ_{m'} m'P_{m'},
we may then also see that

M² = (Σ_{m'} m'P_{m'})² = Σ_{m',n} m'n P_{m'}P_n = Σ_{m'} m'²P_{m'}² = Σ_{m'} m'²P_{m'}

since P_{m'}P_n = δ_{m'n}P_{m'} and P_{m'}² = P_{m'}, as each P_{m'} is a projector. Then, since
P_{m'}|ψ⟩ = δ_{m'm}|ψ⟩, we can see that

(ΔM)² = ⟨M²⟩ − ⟨M⟩² = ⟨ψ|(Σ_{m'} m'²P_{m'})|ψ⟩ − m² = m² − m² = 0

Thus we get that ⟨M⟩ = m with standard deviation ΔM = 0.

2.59 Suppose we have a qubit in the state |0⟩, and we measure the observable X. What is
the average value of X? What is the standard deviation of X?

Solution: See with computation that

⟨X⟩ = ⟨0|X|0⟩ = (1, 0)[0 1; 1 0](1, 0)ᵀ = (1, 0)·(0, 1)ᵀ = 0

We may also see that since X² = I, ⟨X²⟩ = ⟨I⟩ = ⟨0|I|0⟩ = ⟨0|0⟩ = 1, and thus
that ΔX = √(⟨X²⟩ − ⟨X⟩²) = √(⟨I⟩ − ⟨X⟩²) = √(1 − 0²) = 1.
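The same two numbers fall out of a short numpy check (a sketch, not part of the original solution):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)

mean = np.real(ket0.conj() @ X @ ket0)           # <0|X|0>
mean_sq = np.real(ket0.conj() @ X @ X @ ket0)    # <0|X^2|0>
print(mean, np.sqrt(mean_sq - mean ** 2))        # 0.0 1.0
```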

2.60 Show that ~v · ~σ where ~v is a unit vector has eigenvalues ±1, and that the projectors
onto the corresponding eigenspaces are given by P± = (I ± ~v · ~σ )/2.
Solution: First, to see that the eigenvalues of ~v · ~σ are ±1, see that

det(~v · ~σ − λI) = det[v_3 − λ, v_1 − iv_2; v_1 + iv_2, −v_3 − λ]
  = (v_3 − λ)(−v_3 − λ) − (v_1 − iv_2)(v_1 + iv_2)
  = −v_3² + λ² − (v_1² + v_2²)
  = λ² − (v_1² + v_2² + v_3²)
  = λ² − 1

Thus we get that the eigenvalues are λ = ±1.
Consider the case when λ = λ_1 = 1. Then, we can see (with the row reduction
R_2 → R_2 − ((v_1 + iv_2)/(v_3 − 1))R_1) that

[v_3 − 1, v_1 − iv_2 | 0; v_1 + iv_2, −v_3 − 1 | 0] → [v_3 − 1, v_1 − iv_2 | 0; 0, −(v_1² + v_2²)/(v_3 − 1) − v_3 − 1 | 0]
  = [v_3 − 1, v_1 − iv_2 | 0; 0, −(v_1² + v_2² + v_3² − 1)/(v_3 − 1) | 0]
  = [v_3 − 1, v_1 − iv_2 | 0; 0, 0 | 0]

So our eigenvector is |λ_1⟩ = (1, (1 − v_3)/(v_1 − iv_2)). Then we may compute its normalization:

‖|λ_1⟩‖² = ⟨λ_1|λ_1⟩ = 1 + (1 − v_3)²/((v_1 + iv_2)(v_1 − iv_2)) = 1 + (1 − v_3)²/(v_1² + v_2²)
  = 1 + (1 − v_3)²/(1 − v_3²) = 1 + (1 − v_3)/(1 + v_3) = 2/(1 + v_3)

Thus we get that ‖|λ_1⟩‖ = √(2/(1 + v_3)), and with this we can see that

(1/‖|λ_1⟩‖²)|λ_1⟩⟨λ_1| = ((1 + v_3)/2)[1, (1 − v_3)/(v_1 + iv_2); (1 − v_3)/(v_1 − iv_2), (1 − v_3)²/(v_1² + v_2²)]
  = (1/2)[1 + v_3, v_1 − iv_2; v_1 + iv_2, 1 − v_3] = (I + ~v · ~σ)/2 = P_+

Now, in the case where λ = λ_{−1} = −1, we may similarly get that |λ_{−1}⟩ = (1, (−1 − v_3)/(v_1 − iv_2)),
and again with calculation find that the normalized version of this eigenvector is such
that (1/‖|λ_{−1}⟩‖²)|λ_{−1}⟩⟨λ_{−1}| = (I − ~v · ~σ)/2 = P_−.

2.61 Calculate the probability of obtaining the result +1 for a measurement of ~v · ~σ , given
that the state prior to measurement is |0i. What is the state of the system after the
measurement if +1 is obtained?
Solution: With calculation we find that, as P_+ is a projector,

P(+1) = ⟨0|P_+†P_+|0⟩ = ⟨0|P_+|0⟩ = (1, 0)·(1/2)[1 + v_3, v_1 − iv_2; v_1 + iv_2, 1 − v_3]·(1, 0)ᵀ = (1 + v_3)/2

So, we may then also find that the state after measurement is

P_+|0⟩/√P(+1) = √(2/(1 + v_3))·(1/2)(1 + v_3, v_1 + iv_2) = √((1 + v_3)/2)·(1, (v_1 + iv_2)/(1 + v_3))
  = √((1 + v_3)/2)·(1, (1 − v_3)/(v_1 − iv_2)) = |λ_1⟩/‖|λ_1⟩‖

So, the post-measurement state is the normalized version of the eigenvector corresponding to the outcome.
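A numerical illustration of both exercises (a numpy sketch, not part of the original solution; the particular unit vector v below is just an arbitrary example):

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
v = v / np.linalg.norm(v)                        # an arbitrary example unit vector

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
vs = v[0] * X + v[1] * Y + v[2] * Z              # v . sigma

P_plus = (np.eye(2) + vs) / 2
print(np.allclose(np.linalg.eigvalsh(vs), [-1, 1]))   # eigenvalues are +-1
print(np.allclose(P_plus @ P_plus, P_plus))           # P_+ is a projector
print(np.allclose(vs @ P_plus, P_plus))               # it projects onto the +1 eigenspace

ket0 = np.array([1, 0], dtype=complex)
p = np.real(ket0.conj() @ P_plus @ ket0)
print(np.isclose(p, (1 + v[2]) / 2))                  # P(+1) = (1 + v3)/2
```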

2.62 Show that any measurement where the measurement operators and the POVM ele-
ments coincide is a projective measurement.
Solution: Suppose we have measurement operators {M_m} and POVM elements
E_m = M_m†M_m, and suppose M_m = E_m for each m. Then M_m = M_m†M_m, and so

M_m† = (M_m†M_m)† = M_m†M_m = M_m    and    M_m² = M_m†M_m = M_m

so each M_m is a Hermitian projector. Since {M_m} is a measurement we also have the
completeness relation Σ_m M_m†M_m = Σ_m M_m = I, the measurement probabilities are
p(m) = ⟨ψ|M_m†M_m|ψ⟩ = ⟨ψ|M_m|ψ⟩, and the post-measurement states are M_m|ψ⟩/√p(m).
These are exactly the statistics and post-measurement states of the projective measurement
of the observable Σ_m mM_m with projectors M_m, so the measurement is a projective
measurement.

2.63 Suppose a measurement is described by measurement operators M_m. Show that there
exist unitary operators U_m such that M_m = U_m√(E_m), where E_m is the POVM associated
to the measurement.
Solution: UNFINISHED WORK
2.64 Suppose Bob is given a quantum state chosen from a set |ψ1 i, ..., |ψm i of linearly in-
dependent states. Construct a POVM {E1 , ..., Em+1 } such that if outcome Ei occurs,
1 ≤ i ≤ m, then Bob knows with certainty that he was given the state |ψi i. (The
POVM must be such that hψi |Ei |ψi i > 0 for each i.)
Solution: UNFINISHED WORK
2.65 Express the states (|0⟩ + |1⟩)/√2 and (|0⟩ − |1⟩)/√2 in a basis in which they are not
the same up to a relative phase shift.
Solution: Notice that if we take |+⟩ = (|0⟩ + |1⟩)/√2 and |−⟩ = (|0⟩ − |1⟩)/√2 as
our basis, then we are able to express each state as |ψ⟩ = α|+⟩ + β|−⟩. Notice that

|+⟩ = 1·|+⟩ + 0·|−⟩,    |−⟩ = 0·|+⟩ + 1·|−⟩

So, for the two states to differ only by relative phases we would need some real θ such that
1 = 0·e^{iθ} or 0 = 1·e^{iθ}, but no such real θ exists, and thus in this basis |+⟩ and |−⟩ are not
the same up to a relative phase shift.
2.66 Show that the average value of the
√ observable X1 Z2 for a two qubit system measured
in the state |ψi = (|00i + |11i)/ 2 is zero.
Solution: The matrix representation of our given state is

(|00⟩ + |11⟩)/√2 = (1/√2)[(1, 0, 0, 0)ᵀ + (0, 0, 0, 1)ᵀ] = (1/√2)(1, 0, 0, 1)ᵀ

Remember now that in a tensor product of operators each factor acts on a different
subsystem of the composite system, and so our operator X_1Z_2 is represented by

X ⊗ Z = [0·Z 1·Z; 1·Z 0·Z] = [0 0 1 0; 0 0 0 −1; 1 0 0 0; 0 −1 0 0]

With this we may see that the average value of our observable measured in the given
state is

⟨ψ|X ⊗ Z|ψ⟩ = (1/2)(1, 0, 0, 1)(X ⊗ Z)(1, 0, 0, 1)ᵀ = (1/2)(1, 0, 0, 1)·(0, −1, 1, 0)ᵀ = (1/2)(0) = 0
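A one-line numerical check (a numpy sketch, not part of the original solution):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

print(np.real(psi.conj() @ np.kron(X, Z) @ psi))           # 0.0
```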

2.67 Suppose V is a Hilbert space with a subspace W . Suppose U : W → V is a linear
operator which preserves inner products, that is, for any |w1 i and |w2 i in W ,

hw1 |U † U |w2 i = hw1 |w2 i

Prove that there exists a unitary operator U 0 : V → V which extends U . That is,
U 0 |wi = U |wi for all |wi ∈ W , but U 0 is defined on the entire space V . Usually we
omit the prime symbol 0 and just write U to denote the extension.
Solution: UNFINISHED WORK

2.68 Prove that |ψ⟩ = (|00⟩ + |11⟩)/√2 is such that for all single qubit states |a⟩ and |b⟩,
|ψ⟩ ≠ |a⟩|b⟩.
Solution: Suppose that there is some |a⟩ = α|0⟩ + β|1⟩ and some |b⟩ = γ|0⟩ + δ|1⟩
such that |ψ⟩ = |a⟩|b⟩. To derive a contradiction, we will notice that mathematically,

|ψ⟩ = (|00⟩ + |11⟩)/√2 = (1/√2)(1, 0, 0, 1)ᵀ
|a⟩|b⟩ = (α|0⟩ + β|1⟩)(γ|0⟩ + δ|1⟩) = (α, β)ᵀ ⊗ (γ, δ)ᵀ = (αγ, αδ, βγ, βδ)ᵀ

Then, we can see that if |ψ⟩ = |a⟩|b⟩ then αδ = 0 = βγ while αγ = βδ = 1/√2. If αδ = 0 then one of α or
δ must be 0, but if α = 0 then 1/√2 = αγ = 0, and if δ = 0 then 1/√2 = βδ = 0. Thus,
as either case yields a contradiction, we have shown that there can be no such single
qubit states |a⟩ and |b⟩ where |ψ⟩ = |a⟩|b⟩.

2.69 Verify that the Bell basis forms an orthonormal basis for the two qubit state space.
Solution: In matrix form, the Bell states are

(|00⟩ + |11⟩)/√2 = (1/√2)(1, 0, 0, 1)ᵀ,    (|00⟩ − |11⟩)/√2 = (1/√2)(1, 0, 0, −1)ᵀ
(|01⟩ + |10⟩)/√2 = (1/√2)(0, 1, 1, 0)ᵀ,    (|01⟩ − |10⟩)/√2 = (1/√2)(0, 1, −1, 0)ᵀ

Then, any two-qubit state (α, β, γ, δ)ᵀ can be formed as a linear combination of these as

(α, β, γ, δ)ᵀ = ((α + δ)/√2)·(1/√2)(1, 0, 0, 1)ᵀ + ((α − δ)/√2)·(1/√2)(1, 0, 0, −1)ᵀ
              + ((β + γ)/√2)·(1/√2)(0, 1, 1, 0)ᵀ + ((β − γ)/√2)·(1/√2)(0, 1, −1, 0)ᵀ

Confirming that these four vectors are unit vectors and pairwise orthogonal is just a matter of computation,
and so I leave that to the reader.
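(That leftover computation is easy to confirm numerically; below is a numpy sketch, not part of the original solution, checking that the Gram matrix of the four Bell states is the identity.)

```python
import numpy as np

bell = np.array([[1, 0, 0, 1],
                 [1, 0, 0, -1],
                 [0, 1, 1, 0],
                 [0, 1, -1, 0]], dtype=complex) / np.sqrt(2)   # rows are the Bell states

gram = bell.conj() @ bell.T                      # gram[i, j] = <bell_i|bell_j>
print(np.allclose(gram, np.eye(4)))              # True: orthonormal, hence a basis of C^4
```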

2.70 Suppose E is any positive operator acting on Alice’s qubit. Show that hψ|E ⊗ I|ψi
takes the same value when |ψi is any of the four Bell states. Suppose some malevolent
third party (‘Eve’) intercepts Alice’s qubit on the way to Bob in the superdense coding
protocol. Can Eve infer anything about which of the four possible bit strings 00, 01,
10, 11 Alice is trying to send? If so, how, or if not, why not?
Solution: UNFINISHED WORK

2.71 Let ρ be a density operator. Show that tr(ρ2 ) ≤ 1, with equality if and only if ρ is a
pure state.
Solution: If ρ is a density operator then for some ensemble {p_i, |ψ_i⟩} we have that
ρ = Σ_i p_i|ψ_i⟩⟨ψ_i|. Keeping in mind that the trace of an outer product is an inner product,
i.e. that tr(|ψ_i⟩⟨ψ_j|) = ⟨ψ_j|ψ_i⟩, we may see that

tr(ρ²) = tr((Σ_i p_i|ψ_i⟩⟨ψ_i|)²) = tr(Σ_{i,j} p_ip_j|ψ_i⟩⟨ψ_i|ψ_j⟩⟨ψ_j|)
  = Σ_{i,j} p_ip_j⟨ψ_i|ψ_j⟩ tr(|ψ_i⟩⟨ψ_j|)
  = Σ_{i,j} p_ip_j⟨ψ_i|ψ_j⟩⟨ψ_j|ψ_i⟩ = Σ_{i,j} p_ip_j|⟨ψ_i|ψ_j⟩|²

Notice now that 0 ≤ p_i, p_j ≤ 1 since they are probabilities, and that since
|⟨ψ_i|ψ_j⟩|² is UNFINISHED WORK

2.72 The Bloch sphere picture for pure states of a single qubit was introduced in Section
1.2. This description has an important generalization to mixed states as follows.

(a) Show that an arbitrary density matrix for a mixed state qubit may be written as

ρ = (I + ~r · ~σ)/2

where ~r is a real three-dimensional vector such that ‖~r‖ ≤ 1. This vector is known
as the Bloch vector for the state ρ.

(b) What is the Bloch vector representation for the state ρ = I/2?
(c) Show that ρ is pure if and only if ||~r|| = 1
(d) Show that for pure states the description of the Bloch vector we have given coin-
cides with that in Section 1.2.

Solution: UNFINISHED WORK

2.73 Let ρ be a density operator. A minimal ensemble for ρ is an ensemble {pi , |ψi i}
containing a number of elements equal to the rank of ρ. Let |ψi be any state in the
support of ρ. (The support of a Hermitian operator A is the vector space spanned
by the eigenvectors of A with non-zero eigenvalues.) Show that there is a minimal
ensemble for ρ that contains |ψ⟩, and moreover that in any such ensemble |ψ⟩ must
appear with probability

p_i = 1/⟨ψ_i|ρ⁻¹|ψ_i⟩
where ρ−1 is defined to be the inverse of ρ, when ρ is considered as an operator acting
only on the support of ρ. (This definition removes the problem that ρ may not have
an inverse.)
Solution: UNFINISHED WORK

2.74 Suppose a composite of systems A and B is in the state |ai|bi, where |ai is a pure
state of system A, and |bi is a pure state of system B. Show that the reduced density
operator of system A alone is a pure state.
Solution: We can see that since ρAB = |aiha| ⊗ |bihb| that

ρA = trB (ρAB ) = |aiha|tr(|bihb|) = |aiha| (hb|bi) = |aiha|

Then, since tr((ρ^A)²) = tr(|a⟩⟨a|a⟩⟨a|) = tr(|a⟩⟨a|) = 1, we have that ρ^A is a pure
state by Exercise 2.71.

2.75 For each of the four Bell states, find the reduced density operator for each qubit.
Solution: For the state |ψ⟩ = (|00⟩ + |11⟩)/√2, we see that ρ = |ψ⟩⟨ψ| = (|00⟩⟨00| + |00⟩⟨11| + |11⟩⟨00| + |11⟩⟨11|)/2.
From there,

ρ¹ = tr_2(ρ)
  = [tr_2(|00⟩⟨00|) + tr_2(|00⟩⟨11|) + tr_2(|11⟩⟨00|) + tr_2(|11⟩⟨11|)]/2
  = [|0⟩⟨0|⟨0|0⟩ + |0⟩⟨1|⟨1|0⟩ + |1⟩⟨0|⟨0|1⟩ + |1⟩⟨1|⟨1|1⟩]/2
  = (|0⟩⟨0| + |1⟩⟨1|)/2 = I/2

Notice that ρ² = ρ¹, since the state treats the first and second qubits in exactly the same way.
Next, for |ψ⟩ = (|00⟩ − |11⟩)/√2, see that ρ = |ψ⟩⟨ψ| = (|00⟩⟨00| − |00⟩⟨11| − |11⟩⟨00| + |11⟩⟨11|)/2, and so

ρ¹ = tr_2(ρ)
  = [|0⟩⟨0|⟨0|0⟩ − |0⟩⟨1|⟨1|0⟩ − |1⟩⟨0|⟨0|1⟩ + |1⟩⟨1|⟨1|1⟩]/2
  = (|0⟩⟨0| + |1⟩⟨1|)/2 = I/2

Again ρ² = ρ¹ for the same reason.
Next, for |ψ⟩ = (|01⟩ + |10⟩)/√2, see that ρ = |ψ⟩⟨ψ| = (|01⟩⟨01| + |01⟩⟨10| + |10⟩⟨01| + |10⟩⟨10|)/2, and so

ρ¹ = tr_2(ρ) = [|0⟩⟨0|⟨1|1⟩ + |0⟩⟨1|⟨0|1⟩ + |1⟩⟨0|⟨1|0⟩ + |1⟩⟨1|⟨0|0⟩]/2 = (|0⟩⟨0| + |1⟩⟨1|)/2 = I/2
ρ² = tr_1(ρ) = [|1⟩⟨1|⟨0|0⟩ + |1⟩⟨0|⟨1|0⟩ + |0⟩⟨1|⟨0|1⟩ + |0⟩⟨0|⟨1|1⟩]/2 = (|0⟩⟨0| + |1⟩⟨1|)/2 = I/2

Finally, we notice that for the state |ψ⟩ = (|01⟩ − |10⟩)/√2 the terms which have their
sign flipped in the computation of ρ¹, ρ² are exactly those which vanish anyway, and so again we
will get that ρ¹ = ρ² = I/2 for this state as well.
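These four partial traces can be confirmed with a short numpy helper (a sketch, not part of the original solution; reduced is a helper written here just for the check):

```python
import numpy as np

def reduced(rho, keep):
    # Partial trace of a two-qubit density matrix.
    # keep = 0 returns rho^1 (trace out qubit 2); keep = 1 returns rho^2 (trace out qubit 1).
    r = rho.reshape(2, 2, 2, 2)                  # indices (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

bell = [np.array([1, 0, 0, 1]), np.array([1, 0, 0, -1]),
        np.array([0, 1, 1, 0]), np.array([0, 1, -1, 0])]

for psi in bell:
    psi = psi / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    print(np.allclose(reduced(rho, 0), np.eye(2) / 2),
          np.allclose(reduced(rho, 1), np.eye(2) / 2))   # True True for every Bell state
```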
2.76 Extend the proof of the Schmidt decomposition to the case where A and B may have
state spaces of different dimensionality.
Solution: UNFINISHED WORK
2.77 Suppose ABC is a three component quantum system. Show by example that there are
quantum states |ψi of such systems which can not be written in the form
X
|ψi = λi |iA i|iB i|iC i
i

where λi are real numbers and |iA i, |iB i, |iC i are orthonormal bases of their respective
systems.
Solution: UNFINISHED WORK

2.78 Prove that a state |ψi of a composite system AB is a product state if and only if it
has Schmidt number 1. Prove that |ψi is a product state if and only if ρA (and thus
ρB ) are pure states.
Solution: UNFINISHED WORK
2.79 Consider a composite system consisting of two qubits. Find the Schmidt decompositions of the states

(|00⟩ + |11⟩)/√2,    (|00⟩ + |01⟩ + |10⟩ + |11⟩)/2,    and    (|00⟩ + |01⟩ + |11⟩)/√3

Solution: Note that the Schmidt bases are not unique, so answers may vary. For the
first state, |0⟩, |1⟩ already serve as Schmidt bases for both qubits:

(|00⟩ + |11⟩)/√2 = (1/√2)|0⟩|0⟩ + (1/√2)|1⟩|1⟩

The second state is a product state, so its Schmidt decomposition has a single term: with |+⟩ = (|0⟩ + |1⟩)/√2,

(|00⟩ + |01⟩ + |10⟩ + |11⟩)/2 = |+⟩|+⟩

The third state is not already in Schmidt form, since the expansion (1/√3)(|0⟩|0⟩ + |0⟩|1⟩ + |1⟩|1⟩) reuses the
vector |0⟩ on the first qubit in two terms. Its reduced density operator on the first qubit is
ρ¹ = (1/3)[2 1; 1 1], with eigenvalues (3 ± √5)/6, so the Schmidt coefficients are
√((3 ± √5)/6) = (√5 ± 1)/(2√3), and the Schmidt decomposition is Σ_i λ_i|i_A⟩|i_B⟩ with
|i_A⟩, |i_B⟩ the corresponding eigenvectors of ρ¹ and ρ².
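The Schmidt coefficients can be read off numerically from the singular values of the reshaped amplitude matrix (a numpy sketch, not part of the original solution):

```python
import numpy as np

states = {
    "(|00>+|11>)/sqrt(2)":      np.array([1, 0, 0, 1]) / np.sqrt(2),
    "(|00>+|01>+|10>+|11>)/2":  np.array([1, 1, 1, 1]) / 2,
    "(|00>+|01>+|11>)/sqrt(3)": np.array([1, 1, 0, 1]) / np.sqrt(3),
}

for name, psi in states.items():
    # Reshape the amplitudes into a 2x2 matrix; its singular values are the Schmidt coefficients.
    coeffs = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
    print(name, np.round(coeffs, 4))
# -> [0.7071 0.7071], [1. 0.], and [0.9342 0.3568] = [(sqrt(5)+1)/(2 sqrt(3)), (sqrt(5)-1)/(2 sqrt(3))]
```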
2.80 Suppose |ψi and |φi are two pure states of a composite quantum system with com-
ponents A and B, with identical Schmidt coefficients. Show that there are unitary
transformations U on system A and V on system B such that |ψi = (U ⊗ V )|φi.
Solution: UNFINISHED WORK
2.81 Let |AR1 i and |AR2 i be two purifications of a state ρA to a composite system AR.
Prove that there exists a unitary transformation UR acting on system R such that
|AR1 i = (IA ⊗ UR )|AR2 i.
Solution: UNFINISHED WORK
2.82 Suppose {p_i, |ψ_i⟩} is an ensemble of states generating a density matrix ρ = Σ_i p_i|ψ_i⟩⟨ψ_i|
for a quantum system A. Introduce a system R with orthonormal basis |i⟩.

(a) Show that Σ_i √(p_i)|ψ_i⟩|i⟩ is a purification of ρ.
(b) Suppose we measure R in the basis |ii, obtaining outcome i. With what prob-
ability do we obtain the result i, and what is the corresponding state of system
A?
(c) Let |ARi be any purification of ρ to the system AR. Show that there exists an
orthonormal basis |ii in which R can be measured such that the corresponding
post-measurement state for system A is |ii with probability pi .
Solution: UNFINISHED WORK

Corollaries / General Results:

• Corollary 1: If (A|v⟩, |w⟩) = (B|v⟩, |w⟩) for all |v⟩, |w⟩ ∈ V, then A = B.
Proof: Suppose (A|vi, |wi) = (B|vi, |wi) for all |vi, |wi ∈ V . Then we have that

0 = (A|vi, |wi) − (B|vi, |wi) = ((A − B)|vi, |wi)

Since this is true for all |wi ∈ V , pick |wi = (A − B)|vi; then we find that

((A − B)|vi, (A − B)|vi) = 0

This necessarily implies that (A − B)|vi = 0. As this holds for all |vi ∈ V , necessarily
A − B = 0 the zero operator, and so A = B as desired.

