arXiv:quant-ph/0402107v1 16 Feb 2004
Coins Make Quantum Walks Faster
Andris Ambainis
School of Mathematics,
Institute for Advanced Study,
Princeton, NJ 08540
ambainis@ias.edu
Julia Kempe
CNRS-LRI UMR 8623
Université de Paris-Sud
91405 Orsay, France
and UC Berkeley, Berkeley, CA 94720
kempe@lri.fr
Alexander Rivosh
Institute of Mathematics and Computer Science
University of Latvia
Raina bulv.29, Riga, LV-1459, Latvia
vbug@solutions.lv
May 26, 2006
Abstract

We show how to search N items arranged on a $\sqrt{N}\times\sqrt{N}$ grid in time $O(\sqrt{N}\log N)$.
1 Introduction
Quantum walks are quantum counterparts of classical random walks. Classical random walks have many applications in randomized algorithms [MR95] and we hope that quantum walks will have similar applications in quantum algorithms. Both discrete-time [Mey96, AAKV01, ABN+01] and continuous-time [FG98, CFG02] quantum walks have been introduced¹. The definitions of the two are quite different. In continuous time, one can directly define the walk on the vertices of the graph. In discrete time, it is necessary to introduce an extra coin register storing the direction in which the walk is moving.

Because of this difference in the definitions, it has been open what the relation between the discrete and the continuous walk is. In the classical world, the continuous walk is the limit of the discrete walk when the length of the time step approaches 0. In the quantum case, this is no longer true. Even if we make the time steps of the discrete walk smaller and smaller, the coin register remains. Therefore, the limit cannot be the continuous walk without the coin register. This means that one variant of quantum walks could be more powerful than the other in some context, but so far all known examples have given similar behavior of the two walks (see e.g. [CFG02, Kem03b, CCD+03]).

¹ For an introduction to quantum walks see [Kem03a].
In this paper, we present the first example where the discrete walk (with coin) outperforms the continuous walk (with no coin). Our example is the spatial search [Ben02, AA03] variant of Grover's search problem. In the usual Grover's search problem [Gro96], we have N items, one of which is marked, and we can find the marked item in $O(\sqrt{N})$ steps. In the spatial search, the N items are arranged on a $\sqrt{N}\times\sqrt{N}$ grid. This was first studied by Benioff [Ben02] who observed that the usual Grover's search algorithm takes $\Omega(N)$ steps: it uses $\Theta(\sqrt{N})$ queries, but each query must be preceded by moving across the grid, which takes $\Theta(\sqrt{N})$ time steps. Thus, the total time becomes $\Theta(N)$ and the quantum speedup disappears. Aaronson and Ambainis [AA03] fixed this problem by giving an algorithm for searching the 2-dimensional grid in $O(\sqrt{N}\log^2 N)$ total steps² (counting both queries and moving steps) and the 3-dimensional grid in $O(\sqrt{N})$ steps. The continuous-time quantum walk achieves time $O(\sqrt{N})$ in 5 and more dimensions, but not in 2 or 3 dimensions, where the continuous walk takes $\Omega(N)$ and $\Omega(N^{5/6})$, respectively. In 4 dimensions the continuous-time walk algorithm performs as $O(\sqrt{N}\log N)$.

² With more than one marked item, the running time for the 2-dimensional grid becomes $O(\sqrt{N}\log^3 N)$ for the algorithm of [AA03] and $O(\sqrt{N}\log^2 N)$ for the algorithm which we present in this paper. For 3 and higher dimensions, the general case can be reduced to the one-item case with just a constant factor increase [AA03]; thus, the asymptotic running times stay the same.

In this paper, we use discrete-time quantum walks to design an algorithm that searches the 2-dimensional grid in $O(\sqrt{N}\log N)$ steps. A somewhat surprising feature of our algorithm is how strongly its performance depends on the choice of the coin transformation. One natural choice, discovered numerically by Neil Shenvi [She03], leads to our
algorithm, while some other natural choices fail to produce a good algorithm. Thus, the coin transformation could be a resource which affects the algorithm profoundly. We give both upper and lower bounds for the performance of some natural choices of the coin. Surprisingly, we show that in the case of the 2-dimensional grid only 2 (and not the standard 4) coin degrees of freedom are sufficient to achieve the quantum speed-up. The insights gained from our study might aid in the design of future discrete-quantum-walk-based algorithms. Several such algorithms have recently been discovered [CCD+03, Amb03, MSS03, CE03, Sze04].

Our presentation allows a fairly general approach to quantum walk search algorithms on graphs. In particular it simplifies the proof of [SKW03], where the relevant eigenvectors had to be guessed. We also give a discrete walk search algorithm on the complete graph, show its equivalence to Grover's algorithm, and outline several generalizations of our results.
2 Preliminaries and Notation
2.1 Model
Our model is similar to the one in [AA03]. We have an undirected graph G = (V, E). Each vertex v stores a variable $a_v \in \{0, 1\}$. Our goal is to find a vertex v for which $a_v = 1$ (assuming such a vertex exists). We will often call such vertices marked and vertices for which $a_v = 0$ unmarked.
In one step, an algorithm can examine the current vertex or move to a neighboring vertex in the graph G. The goal is to find a marked vertex in as few steps as possible.
More formally, a quantum algorithm is a sequence of unitary transformations on a Hilbert space $H_i \otimes H_V$. $H_V$ is a Hilbert space spanned by states $|v\rangle$ corresponding to vertices of G. $H_i$ represents the algorithm's internal state and can be of arbitrary fixed dimension. A t-step quantum algorithm is a sequence $U_1, U_2, \ldots, U_t$ where each $U_i$ is either a query or a local transformation. A query $U_i$ consists of two transformations $(U_i^0, U_i^1)$: $U_i^0 \otimes I$ is applied to all $H_i \otimes |v\rangle$ for which $a_v = 0$ and $U_i^1 \otimes I$ is applied to all $H_i \otimes |v\rangle$ for which $a_v = 1$.
A local transformation can be defined in several ways [AA03]. In this paper, we require them to be Z-local. A transformation $U_i$ is Z-local if, for any $v \in V$ and $|\psi\rangle \in H_i$, the state $U_i(|\psi\rangle \otimes |v\rangle)$ is contained in the subspace $H_i \otimes H_{(v)}$, where $H_{(v)} \subseteq H_V$ is spanned by the state $|v\rangle$ and the states $|v'\rangle$ for all $v'$ adjacent to v. Our results also apply if the local transformations are C-local (another locality definition introduced in [AA03]).
The algorithm starts in a fixed starting state $|\psi_{start}\rangle$ and applies $U_1, \ldots, U_t$. This results in a final state $|\psi_{final}\rangle = U_t U_{t-1} \cdots U_1 |\psi_{start}\rangle$, which we then measure. The algorithm succeeds if measuring the $H_V$ part of the final state gives $|g\rangle$ such that $a_g = 1$.
For more details on this model, see [AA03].
2.2 Search by quantum walk
In what follows we will assume that G is undirected and d-regular, i.e. has constant degree d. To each vertex we associate a labeling $1, \ldots, d$ of the d edges (directions) adjacent to it and an auxiliary coin Hilbert space $H_d$ spanned by $|1\rangle, \ldots, |d\rangle$. Let $H_N$ be the Hilbert space spanned by the vertices of the graph; the walk then takes place in the joint space of coin and graph, $H = H_d \otimes H_N$.
Definition 1 [Discrete Quantum Walk on G:] The discrete quantum walk is an alternation of coin flip and moving step: $U = S \cdot C$, where S is a shift controlled by the coin register,

$$S : |i\rangle \otimes |x\rangle \;\to\; |\sigma(i)\rangle \otimes |\bar{x}\rangle \qquad (1)$$

for $i = 1, \ldots, d$ and $x, \bar{x} \in V$, where $x$ and $\bar{x}$ are connected by the edge labelled $i$ on $x$'s side and $\sigma$ is a permutation of the d basis states of the coin space $H_d$; and the coin is $C = C_0 \otimes I_N$, where $I_N$ acts as the identity on $H_N$ and $C_0$ is a coin flip acting on $H_d$:

$$C_0 = 2|s\rangle\langle s| - I_d \quad\text{where}\quad |s\rangle = \frac{1}{\sqrt{d}}\sum_{i=1}^{d}|i\rangle. \qquad (2)$$
For a given $i$, $S$ permutes the vertices of the graph, hence $S$ is a unitary operation. The permutation $\sigma$ allows us to specify shift operations that act differently on the coin space $H_d$. Note that the coin is symmetric in that it treats all d directions equally, and among all such coins $C_0$ is the one farthest from the identity.
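The coin of Eq. (2) is easy to sanity-check numerically. The following sketch (Python with NumPy, taking d = 4 as for the 2-dimensional grid) verifies that $C_0$ is unitary, fixes $|s\rangle$, and acts as $-I$ on the orthogonal complement of $|s\rangle$:

```python
import numpy as np

d = 4
s = np.ones(d) / np.sqrt(d)            # |s>: uniform superposition of directions
C0 = 2 * np.outer(s, s) - np.eye(d)    # C_0 = 2|s><s| - I_d, Eq. (2)

assert np.allclose(C0 @ C0.T, np.eye(d))   # unitary (and real symmetric)
assert np.allclose(C0 @ s, s)              # fixes |s>
v = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)
assert np.allclose(C0 @ v, -v)             # acts as -I orthogonally to |s>
print(np.round(C0, 2))
```

Since $C_0$ is $-I$ plus a rank-one correction, it treats all d directions symmetrically, as claimed above.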
Remark: The uniform superposition $|\psi_0\rangle = \frac{1}{\sqrt{dN}}\sum_{i=1}^{d}\sum_{x=1}^{N}|i\rangle|x\rangle$ is an eigenvector of U with eigenvalue 1 ($U|\psi_0\rangle = |\psi_0\rangle$); if we start the walk in $|\psi_0\rangle$, it never leaves this state.
To introduce a marked item into the graph we create an inhomogeneity in the quantum walk, using the coin to mark a vertex v. This gives rise to the following:

Definition 2 [Perturbed Quantum Walk:] The perturbed walk with marked vertex $v$ and marking coin $C_1 = -I_d$ is $U' = S \cdot C'$, where

$$C' = C_0 \otimes (I - |v\rangle\langle v|) + C_1 \otimes |v\rangle\langle v| = C - (C_0 - C_1) \otimes |v\rangle\langle v|. \qquad (3)$$

We will think of $U'$ as the random walk with one (or several) marked coins. This means that instead of one coin for all nodes, $C_0 \otimes I$, we have a different coin $C_1$ on the marked state. Numerical data shows that other marked coins exhibit similar properties to $C_1 = -I$, but we will use this coin, which simplifies the analysis. Then $C_0 - C_1 = 2|s\rangle\langle s|$, and $U' = U - 2S|s,v\rangle\langle s,v| = U(I_{dN} - 2|s,v\rangle\langle s,v|)$, using $C_0|s\rangle = |s\rangle$.
The quantum walk U gives rise to a search algorithm on a graph G in the following way:

Quantum Walk Search Algorithm
1. Initialise the quantum system in the uniform superposition $|\psi_0\rangle$.
2. Do T times: apply the marked walk $U'$.
3. Measure the position register.
4. Check if the measured vertex is the marked item.
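The algorithm above can be simulated directly. The sketch below assumes the flip-flop shift (move and invert the direction coin) and the marking coin $C_1 = -I$ of Definition 2; the grid size, the marked vertex and the coin ordering are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

n = 16                                    # grid side; N = n*n items
N = n * n
C0 = 0.5 * np.ones((4, 4)) - np.eye(4)    # Grover coin 2|s><s| - I_4
marked = (3, 7)                           # hypothetical marked vertex

def step(psi):
    """One step U' = S_ff C' of the perturbed walk; psi[c, x, y], c = R,L,U,D."""
    out = np.tensordot(C0, psi, axes=(1, 0))                       # C_0 everywhere
    out[:, marked[0], marked[1]] = -psi[:, marked[0], marked[1]]   # C_1 = -I at v
    new = np.empty_like(out)
    new[1] = np.roll(out[0], 1, axis=0)    # |R>|x,y> -> |L>|x+1,y>
    new[0] = np.roll(out[1], -1, axis=0)   # |L>|x,y> -> |R>|x-1,y>
    new[3] = np.roll(out[2], 1, axis=1)    # |U>|x,y> -> |D>|x,y+1>
    new[2] = np.roll(out[3], -1, axis=1)   # |D>|x,y> -> |U>|x,y-1>
    return new

psi = np.full((4, n, n), 1.0 / np.sqrt(4 * N))    # uniform state |psi_0>
T = int(2 * np.sqrt(N * np.log(N)))               # run past ~sqrt(N log N) steps
best = 0.0
for _ in range(T):
    psi = step(psi)
    best = max(best, float(np.sum(psi[:, marked[0], marked[1]] ** 2)))
assert abs(float(np.sum(psi ** 2)) - 1.0) < 1e-9  # the walk is unitary
assert best > 10.0 / N                            # far above uniform guessing
print("peak probability on the marked vertex:", round(best, 3))
```

The probability on the marked vertex peaks after about $\sqrt{N\log N}$ steps, in line with the $\Omega(1/\log N)$ success probability derived in Section 6.1.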
An item on a vertex of the graph could be marked by setting an auxiliary qubit to $|1\rangle$, whereas the unmarked items have this qubit set to $|0\rangle$. Then this auxiliary qubit can control the coin to be C for the unmarked items and $C'$ for the marked item. On the complete graph, where the coin space has dimension N and the shift S simply swaps the coin and vertex registers, this gives $C' = C_0 \otimes (I - 2|v\rangle\langle v|)$, where $|v\rangle$ is the marked state. Note that $C_0 = 2|s\rangle\langle s| - I_N$ is the reflection-around-the-mean operator of Grover's (standard) algorithm and $I - 2|v\rangle\langle v| =: R_v$ the phase flip of the oracle. Recall that Grover's algorithm is of the form $(R_v C_0)^T|s\rangle$. The initial state for the random walk based algorithm is the uniform superposition $|\psi_0\rangle = |s\rangle \otimes |s\rangle$. Now $U'|\psi_0\rangle = S\,C'|\psi_0\rangle = R_v|s\rangle \otimes C_0|s\rangle$, $C'U'|\psi_0\rangle = (C_0R_v)|s\rangle \otimes (R_vC_0)|s\rangle$ and $U'^2|\psi_0\rangle = (R_vC_0)|s\rangle \otimes (C_0R_v)|s\rangle$. So we see that the random walk in this scenario gives exactly Grover's algorithm on both the coin space and the vertex space, at the expense of a factor of 2 in the number of applications.
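This correspondence can be checked numerically. The sketch below models two things that the text leaves implicit, both of which are our assumptions for illustration: on the complete graph the shift S swaps the coin and vertex registers, and the marked-coin walk factorizes as $C' = C_0 \otimes R_v$:

```python
import numpy as np

Nv = 8
s = np.ones(Nv) / np.sqrt(Nv)
C0 = 2 * np.outer(s, s) - np.eye(Nv)    # reflection around the mean
Rv = np.eye(Nv); Rv[0, 0] = -1.0        # oracle phase flip, marked item v = 0

# assumed shift on the complete graph: swap coin and vertex registers
SWAP = np.zeros((Nv * Nv, Nv * Nv))
for i in range(Nv):
    for j in range(Nv):
        SWAP[j * Nv + i, i * Nv + j] = 1.0

U = SWAP @ np.kron(C0, Rv)              # one step U' = S C', with C' = C_0 (x) R_v
psi0 = np.kron(s, s)                    # |psi_0> = |s> (x) |s>
one = U @ psi0
two = U @ one
assert np.allclose(one, np.kron(Rv @ s, C0 @ s))
assert np.allclose(two, np.kron(Rv @ C0 @ s, C0 @ Rv @ s))
print("two walk steps give one Grover iteration on each register")
```

Two applications of $U'$ thus reproduce one Grover iteration on each register, matching the factor-of-2 overhead noted above.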
3 Results in 2 dimensions
We give several upper and lower bounds for the discrete quantum walk on the grid. The N memory locations are arranged on a $\sqrt{N}\times\sqrt{N}$ grid with periodic boundary conditions. The natural coin space is 4-dimensional. We will label the edges emanating from each vertex with $\rightarrow, \leftarrow, \uparrow, \downarrow$, indicating the positive and negative x and y directions.

As it turns out, the choice of the coin transformation (or, equivalently, of the permutation $\sigma$ in Eq. (1)) is crucial for the performance of the random walk. We will show that using a flip-flop shift $S_{ff}$, which moves the walk and then inverts the direction stored in the coin, gives a search algorithm that succeeds in $O(\sqrt{N}\log N)$ total steps.
Proof of Corollary 1 (sketch): The initial state $|\psi_0\rangle$ can be generated with $O(\sqrt{N})$ local transformations. Since we only have an estimate for T up to a constant factor, we need to repeat the random walk an appropriate (constant) number of times. To amplify the success probability, the algorithm uses amplitude amplification [BHMT02], achieving a total time of $O(\sqrt{N}\log N)$; the details are given in Section 6.

For the walk with a two-dimensional coin, let $|{\rightarrow}\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ and $|{\leftarrow}\rangle = \frac{1}{\sqrt{2}}|0\rangle - \frac{1}{\sqrt{2}}|1\rangle$ be the Hadamard basis. If there are no marked coins, one step of the quantum walk U with the two-dimensional coin consists of:

1. Move up/down:
$|0\rangle\,|x\rangle|y\rangle \to |1\rangle\,|x\rangle|y-1\rangle$,
$|1\rangle\,|x\rangle|y\rangle \to |0\rangle\,|x\rangle|y+1\rangle$.

2. Move left/right:
$|{\leftarrow}\rangle\,|x\rangle|y\rangle \to |{\rightarrow}\rangle\,|x-1\rangle|y\rangle$,
$|{\rightarrow}\rangle\,|x\rangle|y\rangle \to |{\leftarrow}\rangle\,|x+1\rangle|y\rangle$.
If there is a marked coin at vertex $v$, we define the quantum walk $U'$ analogously to Definition 2, with the superposition $\frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle$ playing the role of $|s\rangle$.

Theorem 3 The associated quantum walk search algorithm finds the marked item in $O(\sqrt{N\log N})$ steps.
4 Results in 3 and more dimensions
In more than 2 dimensions the flip-flop based quantum walk search algorithm achieves its optimal performance of $O(\sqrt{N})$ steps. The grid is now $\sqrt[d]{N}\times\cdots\times\sqrt[d]{N}$, with periodic boundary conditions, as before, and states are labelled as $|x_1, \ldots, x_d\rangle$.
Theorem 4 Let G be the d-dimensional grid with N vertices, $d \geq 3$. Then the associated quantum walk with one marked coin takes $O(\sqrt{N})$ steps and finds the marked item with constant probability.

5 The abstract search algorithm

We first recall Grover's algorithm. It operates in the 2-dimensional subspace spanned by $|0\rangle$, the uniform superposition over all N items, and $|1\rangle$, the state orthogonal to $|0\rangle$ in the span of $|0\rangle$ and the marked item; in this basis the marked item is $\frac{1}{\sqrt{N}}|0\rangle + \sqrt{\frac{N-1}{N}}|1\rangle$. Grover's algorithm corresponds to the transformation (with $\sin\theta = \frac{2\sqrt{N-1}}{N}$)

$$U = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}. \qquad (5)$$
The two eigenvectors of U are $|w_\pm\rangle = \frac{1}{\sqrt{2}}(|0\rangle \mp i|1\rangle)$ with eigenvalues $e^{\pm i\theta}$. The initial state is a uniform superposition of the two eigenvectors, $|\psi\rangle = \frac{1}{\sqrt{2}}(|w_+\rangle + |w_-\rangle) = |0\rangle$. After T applications of U, with T chosen such that $T\theta = \frac{\pi}{2}$, we have

$$U^T|\psi\rangle = U^T\frac{1}{\sqrt{2}}(|w_+\rangle + |w_-\rangle) = \frac{1}{\sqrt{2}}(i|w_+\rangle - i|w_-\rangle) = |1\rangle,$$

which has an overlap of $\sqrt{\frac{N-1}{N}}$ with the marked state $|v\rangle$.
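A quick numerical check of this two-dimensional picture (with the sign convention of Eq. (5) as reconstructed here):

```python
import numpy as np

N = 10 ** 4
theta = np.arcsin(2 * np.sqrt(N - 1) / N)
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
wp = np.array([1, -1j]) / np.sqrt(2)          # |w_+> = (|0> - i|1>)/sqrt(2)
wm = np.array([1,  1j]) / np.sqrt(2)          # |w_->
assert np.allclose(U @ wp, np.exp(1j * theta) * wp)
assert np.allclose(U @ wm, np.exp(-1j * theta) * wm)

T = int(round(np.pi / (2 * theta)))           # T*theta ~ pi/2
final = np.linalg.matrix_power(U, T) @ np.array([1.0, 0.0])
assert abs(final[1]) > 0.999                  # essentially |1>
print("T =", T, "final state:", np.round(final, 3))
```

After $T\approx\frac{\pi}{2\theta}\approx\frac{\pi}{4}\sqrt{N}$ rotations, the state is essentially $|1\rangle$, i.e. almost all amplitude sits on the marked item.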
In the random walk algorithm the transformation $I - 2|s,v\rangle\langle s,v|$ is a counterpart of $R_v$ and the transformation U is an imperfect counterpart of the reflection $R_{|\psi\rangle}$ around the initial state. Abstractly, we are given the reflection $U_1 = I - 2|\psi_{good}\rangle\langle\psi_{good}|$ and a real unitary $U_2$ whose only 1-eigenvector is the starting state $|\psi_{start}\rangle$; one step of the search is $U' = U_2U_1$. We will first show that, with an appropriate choice of coin (as in Thms. 1, 3, and 4), the resulting transformation still acts approximately as a rotation in a 2-dimensional subspace. Since $U_2$ is a real unitary matrix, its eigenvalues other than 1 come in pairs of complex conjugate numbers. Denote them by $e^{\pm i\theta_1}, \ldots, e^{\pm i\theta_m}$, and let $\theta_{min} = \min(\theta_1, \ldots, \theta_m)$.
Lemma 1 Define the arc $\mathcal{A}$ as the set of $e^{i\theta}$ for all real $\theta$ satisfying $-\theta_{min} < \theta < \theta_{min}$. Then $U'$ has at most two eigenvalues in $\mathcal{A}$.³

Let $|\Phi^+_j\rangle$ and $|\Phi^-_j\rangle$ be the eigenvectors of $U_2$ with eigenvalues $e^{i\theta_j}$ and $e^{-i\theta_j}$, respectively. We express $|\psi_{good}\rangle$ as a superposition of the eigenvectors of $U_2$:

$$|\psi_{good}\rangle = a_0|\psi_{start}\rangle + \sum_{j=1}^{m}\left(a^+_j|\Phi^+_j\rangle + a^-_j|\Phi^-_j\rangle\right). \qquad (6)$$
Lemma 2 It is possible to select $|\Phi^+_j\rangle$ and $|\Phi^-_j\rangle$ so that $a^+_j = a^-_j$ and $a^+_j$ is a real number.

In the next lemmas, we assume that this is the case and denote $a^+_j = a^-_j$ simply as $a_j$.
Lemma 3 The eigenvalues of $U'$ in $\mathcal{A}$ are $e^{\pm 2i\alpha}$ where

$$\alpha = \Theta\left(\frac{1}{\sqrt{\sum_j \frac{a_j^2}{a_0^2}\cdot\frac{1}{1-\cos\theta_j}}}\right). \qquad (7)$$
Let $|w_+\rangle$ and $|w_-\rangle$ be the eigenvectors of $U'$ with eigenvalues $e^{2i\alpha}$ and $e^{-2i\alpha}$, and let $|w'_{start}\rangle = \frac{1}{\sqrt{2}}|w_+\rangle - \frac{1}{\sqrt{2}}|w_-\rangle$, $|w_{start}\rangle = \frac{1}{\|w'_{start}\|}|w'_{start}\rangle$. We claim that $|w_{start}\rangle$ is close to the starting state $|\psi_{start}\rangle$. This is quantified by the following lemma.
Lemma 4 Assume that $\alpha < \frac{1}{2}\theta_{min}$. Then,

$$\langle\psi_{start}|w_{start}\rangle \geq 1 - O\left(\alpha^4\sum_j \frac{a_j^2}{a_0^2}\cdot\frac{1}{(1-\cos\theta_j)^2}\right).$$
The last lemma shows that, after repeating $U_2U_1$ a certain number of times, the state has significant overlap with $|\psi_{good}\rangle$. Say we apply $(U_2U_1)^{\lceil\pi/4\alpha\rceil}$ to the state $\frac{1}{\sqrt{2}}|w_+\rangle - \frac{1}{\sqrt{2}}|w_-\rangle$. Then, we get the state

$$\frac{1}{\sqrt{2}}e^{2i\alpha\lceil\pi/4\alpha\rceil}|w_+\rangle - e^{-2i\alpha\lceil\pi/4\alpha\rceil}\frac{1}{\sqrt{2}}|w_-\rangle = i\left(\frac{1}{\sqrt{2}}|w_+\rangle + \frac{1}{\sqrt{2}}|w_-\rangle\right) =: i\,|w_{good}\rangle$$

plus a state of norm $O(\alpha)$ (because $\lceil\pi/4\alpha\rceil$ and $\frac{\pi}{4\alpha}$ differ by an amount which is less than 1, the phases $\pm2\alpha\lceil\pi/4\alpha\rceil$ differ from $\pm\frac{\pi}{2}$ by less than $2\alpha$).
Lemma 5 Assume that $\alpha < \frac{1}{2}\theta_{min}$ and let $|w_{good}\rangle = \frac{1}{\sqrt{2}}|w_+\rangle + \frac{1}{\sqrt{2}}|w_-\rangle$. Then,

$$|\langle\psi_{good}|w_{good}\rangle| = \Omega\left(\min\left(\frac{1}{\sqrt{\sum_j a_j^2\cot^2\frac{\theta_j}{2}}}\,,\; 1\right)\right).$$
³ The next lemma implies that there are exactly two eigenvalues in $\mathcal{A}$.
Corollary 2 Assume that $\alpha < \frac{1}{2}\theta_{min}$. Then

$$\left|\langle\psi_{good}|(U_2U_1)^{\lceil\pi/4\alpha\rceil}|w_{start}\rangle\right| \geq \Omega\left(\min\left(\frac{1}{\sqrt{\sum_j a_j^2\cot^2\frac{\theta_j}{2}}}\,,\;1\right)\right) - O(\alpha).$$
These three lemmas are the basis of our proofs. In each of our positive results, we first find a subspace $H$ such that the search algorithm restricted to this subspace is a special case of an abstract search algorithm. Then, we apply Lemma 4 to show that the starting state is close to $|w_{start}\rangle$, and Lemma 3 and Corollary 2 to show that it evolves to a state having significant overlap with $|\psi_{good}\rangle$.
6 Proofs of the main results
6.1 Theorem 1
Let us first determine the eigenspectrum of $U = S_{ff}\,(C_0 \otimes I_N)$.
Claim 6 [Spectrum of U:] U has eigenvalues $\lambda_{kl}$ with corresponding eigenvectors of the form $|v_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ for all $k,l = 0,\ldots,\sqrt{N}-1$, where $|\chi_k\rangle = \frac{1}{\sqrt[4]{N}}\sum_{j=0}^{\sqrt{N}-1}\omega^{kj}|j\rangle$ with $\omega = e^{2\pi i/\sqrt{N}}$, and $\lambda_{kl}$ and $|v_{kl}\rangle$ satisfy the equation

$$C_{kl}|v_{kl}\rangle = \begin{pmatrix}0&\omega^k&0&0\\\omega^{-k}&0&0&0\\0&0&0&\omega^l\\0&0&\omega^{-l}&0\end{pmatrix}C_0\,|v_{kl}\rangle = \lambda_{kl}|v_{kl}\rangle. \qquad (8)$$

The four eigenvalues $\lambda_{kl}$ of $C_{kl}$ are 1, $-1$ and $e^{\pm i\theta_{kl}}$ where $\cos\theta_{kl} = \frac{1}{2}\left(\cos\frac{2\pi k}{\sqrt{N}}+\cos\frac{2\pi l}{\sqrt{N}}\right)$. Let $|v^1_{kl}\rangle$, $|v^{-1}_{kl}\rangle$ and $|v^\pm_{kl}\rangle$ be the vectors $|v_{kl}\rangle$ for the eigenvalues 1, $-1$ and $e^{\pm i\theta_{kl}}$, respectively. Then, $|v^1_{kl}\rangle$ is orthogonal to $|s\rangle$ for $(k,l)\neq(0,0)$ and $|v^{-1}_{kl}\rangle$ is orthogonal to $|s\rangle$ for all $(k,l)$, including $(0,0)$.
Proof: Apply U to a vector of the form $|v_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ and note that $S_{ff}\,|{\rightarrow}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle = \omega^{-k}\,|{\leftarrow}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ and $S_{ff}\,|{\leftarrow}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle = \omega^{k}\,|{\rightarrow}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$, and similarly for the y-coordinate, which gives Eq. (8). Solving the equation $|C_{kl} - \lambda I| = 0$ for $\lambda$ gives the eigenvalues. For $(k,l)\neq(0,0)$ the 1-eigenvector $|v^1_{kl}\rangle$ is proportional to $(\omega^k(\omega^l-1),\, 1-\omega^l,\, \omega^l(1-\omega^k),\, \omega^k-1)$ and hence orthogonal to $|s\rangle = \frac{1}{2}(1,1,1,1)$. The $-1$-eigenvector $|v^{-1}_{kl}\rangle$ is proportional to $(\omega^l+1,\, \omega^k(\omega^l+1),\, -(\omega^k+1),\, -\omega^l(\omega^k+1))$ and hence also orthogonal to $|s\rangle$. For $(k,l) = (0,0)$, $|s\rangle$ is a 1-eigenvector of $C_{00}$.

For $(k,l) = (0,0)$, the eigenvalue 1 occurs 3 times. Thus, there is a 3-dimensional 1-eigenspace. Since $|s\rangle$ is orthogonal to $|v^{-1}_{00}\rangle$, $|s\rangle$ belongs to this eigenspace. We choose $|v^1_{00}\rangle = |s\rangle$ and the other eigenvectors $|v_{00}\rangle$ orthogonal to $|s\rangle$.
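Claim 6 can be verified numerically by diagonalizing $C_{kl}$ for all (k,l) on a small grid. The ordering of the $\omega^{\pm k}$ entries follows our reading of Eq. (8); the eigenvalue set is insensitive to that choice:

```python
import numpy as np

sqrtN = 6
w = np.exp(2j * np.pi / sqrtN)
s = 0.5 * np.ones(4)
C0 = 2 * np.outer(s, s) - np.eye(4)
max_err = 0.0
for k in range(sqrtN):
    for l in range(sqrtN):
        D = np.array([[0, w**k, 0, 0],
                      [w**(-k), 0, 0, 0],
                      [0, 0, 0, w**l],
                      [0, 0, w**(-l), 0]])
        ev = np.linalg.eigvals(D @ C0)
        cos_t = 0.5 * (np.cos(2 * np.pi * k / sqrtN)
                       + np.cos(2 * np.pi * l / sqrtN))
        # eigenvalues should be {1, -1, exp(+-i theta_kl)}: real parts
        # {1, -1, cos_t, cos_t}, all moduli equal to 1
        expected = np.sort(np.array([1.0, -1.0, cos_t, cos_t]))
        max_err = max(max_err,
                      float(np.max(np.abs(np.sort(ev.real) - expected))),
                      float(np.max(np.abs(np.abs(ev) - 1.0))))
assert max_err < 1e-8
print("spectrum {1, -1, exp(+-i theta_kl)} confirmed; max error", max_err)
```

The comparison uses real parts and moduli, which are robust to the numerical ordering of complex eigenvalues.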
Let $H_0$ be the space spanned by the eigenvectors $|v^\pm_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$, $(k,l)\neq(0,0)$, and $|\psi_0\rangle = |v^1_{00}\rangle\otimes|\chi_0\rangle\otimes|\chi_0\rangle$. Notice that all other eigenvectors of U are orthogonal to $|s,v\rangle$, by Claim 6. Therefore, $|s,v\rangle$ is in $H_0$. Moreover, applying $U'$ keeps the state in $H_0$, as shown by

Claim 7 We have $U'(H_0) = H_0$. Furthermore, $U'$ has no eigenvector with eigenvalue 1 in $H_0$.
Proof: For the first part, notice that it suffices to show $U(H_0) = H_0$ and $(I - 2|s,v\rangle\langle s,v|)(H_0) = H_0$. The first equality is true because $H_0$ has a basis consisting of eigenvectors of U; each of those eigenvectors gets mapped to a multiple of itself, which is in $H_0$. Therefore, $U(H_0) = H_0$. The second equality follows because $(I - 2|s,v\rangle\langle s,v|)|\varphi\rangle = |\varphi\rangle - 2\langle s,v|\varphi\rangle|s,v\rangle$. This is a linear combination of $|\varphi\rangle$ and $|s,v\rangle$ and, if $|\varphi\rangle \in H_0$, it is in $H_0$.
For the second part, assume $|\psi'_0\rangle$ is an eigenvector of $U'$ with eigenvalue 1 in $H_0$. Using $\langle\psi_0|U = \langle\psi_0|$,

$$\langle\psi_0|\psi'_0\rangle = \langle\psi_0|U'|\psi'_0\rangle = \langle\psi_0|U(I - 2|s,v\rangle\langle s,v|)|\psi'_0\rangle = \langle\psi_0|\psi'_0\rangle - 2\langle\psi_0|s,v\rangle\langle s,v|\psi'_0\rangle.$$

This implies $\langle\psi_0|s,v\rangle\langle s,v|\psi'_0\rangle = 0$ and, since $\langle\psi_0|s,v\rangle = \frac{1}{\sqrt{N}} \neq 0$, that $\langle s,v|\psi'_0\rangle = 0$ and $|\psi'_0\rangle = U'|\psi'_0\rangle = U|\psi'_0\rangle$, which in turn implies that $|\psi'_0\rangle$ is an eigenvector of U with eigenvalue 1. Since $|\psi'_0\rangle$ has zero overlap with $|s,v\rangle$ and precisely the 1-eigenvectors of U orthogonal to $|\psi_0\rangle$ have zero overlap with $|s,v\rangle$, it follows that $\langle\psi_0|\psi'_0\rangle = 0$; but such eigenvectors are orthogonal to all of $H_0$, which contradicts $|\psi'_0\rangle \in H_0$.
The above shows that the random walk algorithm starting in $|\psi_0\rangle$ is restricted to a subspace $H_0$ of the Hilbert space. Since $|\psi_0\rangle$ is the only 1-eigenvector of U in $H_0$, we have an instance of the abstract search algorithm on the space $H_0$, with $U_1 = I - 2|s,v\rangle\langle s,v|$, $U_2 = U$, $|\psi_{good}\rangle = |s,v\rangle$ and $|\psi_{start}\rangle = |\psi_0\rangle$.
As described in Section 5, we study the 2-dimensional subspace spanned by $|w_+\rangle$ and $|w_-\rangle$. First, we bound $\alpha$ using Lemma 3. We need to expand $|\psi_{good}\rangle = |s,v\rangle$ in the basis of eigenvectors of U. Define $|\Phi^+_{kl}\rangle = |v^+_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$, and let $|\Phi^-_{kl}\rangle$ be the vector obtained by replacing every amplitude in $|\Phi^+_{kl}\rangle$ by its complex conjugate. Then, $|\Phi^-_{kl}\rangle = |v^-_{-k,-l}\rangle\otimes|\chi_{-k}\rangle\otimes|\chi_{-l}\rangle$. (This follows from two observations. First, replacing every amplitude by its conjugate in $|\chi_k\rangle$ gives $|\chi_{-k}\rangle$. Therefore, $|\Phi^-_{kl}\rangle = |\bar v\rangle\otimes|\chi_{-k}\rangle\otimes|\chi_{-l}\rangle$, where $|\bar v\rangle$ is the conjugate of $|v^+_{kl}\rangle$. Second, since $U|\Phi^+_{kl}\rangle = e^{i\theta_{kl}}|\Phi^+_{kl}\rangle$, we have $U|\Phi^-_{kl}\rangle = e^{-i\theta_{kl}}|\Phi^-_{kl}\rangle$, implying that $|\bar v\rangle = |v^-_{-k,-l}\rangle$.) From Lemma 2, we have

$$|s,v\rangle = a_0|\psi_0\rangle + \sum_{(k,l)\neq(0,0)}a_{kl}\left(|\Phi^+_{kl}\rangle + |\Phi^-_{kl}\rangle\right)$$

where $|\Phi^+_{kl}\rangle$ and $|\Phi^-_{kl}\rangle$ appear with the same real coefficient $a_{kl} = \langle s,v|\Phi^+_{kl}\rangle = \langle s,v|\Phi^-_{kl}\rangle$.

Claim 8 $a_{kl} = \frac{1}{\sqrt{2N}}$.
Proof: We have $\left|\langle v|\left(|\chi_k\rangle\otimes|\chi_l\rangle\right)\right| = \frac{1}{\sqrt{N}}$ (since each of the N locations has an equal amplitude in $|\chi_k\rangle\otimes|\chi_l\rangle$). It remains to show that $|\langle s|v^+_{kl}\rangle| = \frac{1}{\sqrt{2}}$. For that, we first notice that $|s\rangle$ is a superposition of $|v^\pm_{kl}\rangle$ (since $|v^1_{kl}\rangle$ and $|v^{-1}_{kl}\rangle$ are orthogonal to $|s\rangle$). By direct calculation, $\langle s|C_{kl}|s\rangle = \frac{1}{2}\left(\cos\frac{2\pi k}{\sqrt{N}} + \cos\frac{2\pi l}{\sqrt{N}}\right)$. We have

$$\langle s|C_{kl}|s\rangle = e^{i\theta_{kl}}\langle s|v^+_{kl}\rangle\langle v^+_{kl}|s\rangle + e^{-i\theta_{kl}}\langle s|v^-_{kl}\rangle\langle v^-_{kl}|s\rangle.$$

This is possible only if $|\langle s|v^+_{kl}\rangle| = |\langle s|v^-_{kl}\rangle| = \frac{1}{\sqrt{2}}$.
Therefore,

$$|s,v\rangle = \frac{1}{\sqrt{N}}|\psi_0\rangle + \frac{1}{\sqrt{2N}}\sum_{(k,l)\neq(0,0)}\left(|\Phi^+_{kl}\rangle + |\Phi^-_{kl}\rangle\right). \qquad (9)$$

By Lemma 3, $\alpha = \Theta\left(\frac{1}{\sqrt{\sum_{(k,l)\neq(0,0)}\frac{1}{1-\cos\theta_{kl}}}}\right)$. The following claim implies that $\alpha = \Theta\left(\frac{1}{\sqrt{N\log N}}\right)$.
Claim 9 $\sum_{(k,l)\neq(0,0)}\frac{1}{1-\cos\theta_{kl}} = \Theta(N\log N)$.
Proof of Claim 9: Recall from Claim 6 that the eigenvalues corresponding to $|v_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ are $e^{\pm i\theta_{kl}} = \cos\theta_{kl} \pm i\sin\theta_{kl}$, where $\cos\theta_{kl} = \frac{1}{2}\left(\cos\frac{2\pi k}{\sqrt{N}} + \cos\frac{2\pi l}{\sqrt{N}}\right)$. For $x \in [-\pi,\pi]$, we have

$$1 - \frac{x^2}{2} \leq \cos x \leq 1 - \frac{2x^2}{\pi^2}. \qquad (10)$$

Therefore,

$$1 - \frac{\pi^2}{N}(k^2+l^2) \leq \cos\theta_{kl} \leq 1 - \frac{4}{N}(k^2+l^2), \qquad \frac{4}{N}(k^2+l^2) \leq 1-\cos\theta_{kl} \leq \frac{\pi^2}{N}(k^2+l^2). \qquad (11)$$

This means that it suffices to show

$$N\sum_{k,l}\frac{1}{k^2+l^2} = \Theta(N\log N), \qquad (12)$$

where the summation is over all $k,l \in \{0,\ldots,\sqrt{N}-1\}$ with $(k,l)\neq(0,0)$, i.e. we have to show $\sum_{k,l}\frac{1}{k^2+l^2} = \Theta(\log N)$. A simple way to see this is to sum over points that lie on m-rectangles with the four corners $(\pm m,\pm m)$. The term $\frac{1}{k^2+l^2}$ for $(k,l)$ on an m-rectangle is bounded as $\frac{1}{2m^2} \leq \frac{1}{k^2+l^2} \leq \frac{1}{m^2}$, and there are at most 8m such points on each m-rectangle. Hence

$$\sum_{m=1}^{\sqrt{N}-1} 8m\,\frac{1}{2m^2} \;\geq\; \sum_{k,l}\frac{1}{k^2+l^2}, \qquad \sum_{k,l}\frac{1}{k^2+l^2} \;\leq\; \sum_{m=1}^{\sqrt{N}-1} 8m\,\frac{1}{m^2}. \qquad (13)$$
The claim now follows from $\sum_{m=1}^{\sqrt{N}-1}\frac{1}{m} = \frac{1}{2}\ln N\,(1+o(1))$.
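The growth rate in Claim 9 is easy to confirm numerically; the ratio of the sum to $N\log N$ should stay bounded between positive constants:

```python
import numpy as np

def claim9_sum(sqrtN):
    """sum over (k,l) != (0,0) of 1/(1 - cos(theta_kl)) on a sqrtN x sqrtN grid."""
    k = np.arange(sqrtN)
    ck = np.cos(2 * np.pi * k / sqrtN)
    cos_t = 0.5 * (ck[:, None] + ck[None, :])
    mask = np.ones_like(cos_t, dtype=bool)
    mask[0, 0] = False                       # exclude (k,l) = (0,0)
    return float(np.sum(1.0 / (1.0 - cos_t[mask])))

ratios = []
for sqrtN in (32, 64, 128, 256):
    N = sqrtN * sqrtN
    ratios.append(claim9_sum(sqrtN) / (N * np.log(N)))
assert min(ratios) > 0
assert max(ratios) / min(ratios) < 3         # Theta(N log N): bounded ratio
print([round(r, 3) for r in ratios])
```

The same vectorized sum with the denominator squared gives a quick check of Claim 10's $\Theta(N^2)$ behaviour as well.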
Next, we use Lemma 4 to bound the overlap between $|\psi_0\rangle$ and $|w_{start}\rangle = \frac{1}{\sqrt{2}}|w_+\rangle - \frac{1}{\sqrt{2}}|w_-\rangle$.

Claim 10 $\sum_{(k,l)\neq(0,0)}\frac{1}{(1-\cos\theta_{kl})^2} = \Theta(N^2)$.
Proof of Claim 10: Using the bounds from the proof of Claim 9,

$$\frac{1}{(1-\cos\theta_{kl})^2} = \Theta\left(\frac{N^2}{(k^2+l^2)^2}\right).$$

Therefore, it suffices to bound $N^2\sum_{(k,l)\neq(0,0)}\frac{1}{(k^2+l^2)^2}$. Again, we sum over points (k,l) on rectangles with corners $(\pm m,\pm m)$. Each rectangle has at most 8m points, each of which contributes a term of order $\frac{1}{m^4}$ to the sum. Since $\sum_m 8m\,\frac{1}{m^4} = \sum_m\frac{8}{m^3}$ is bounded by a constant, the claim follows.
This means that the overlap between the starting state and $|w_{start}\rangle$ is $1 - O(\alpha^4 N^2) = 1 - O\left(\frac{1}{\log^2 N}\right)$. Equivalently, $|\psi_0\rangle = |w_{start}\rangle + |\psi_{rem}\rangle$ with $\||\psi_{rem}\rangle\| = O\left(\frac{1}{\log N}\right)$. After $\lceil\frac{\pi}{4\alpha}\rceil$ repetitions, the state becomes $|w_{good}\rangle + |\psi'_{rem}\rangle$ with $\||\psi'_{rem}\rangle\| = O\left(\frac{1}{\log N}\right) + O(\alpha) = O\left(\frac{1}{\log N}\right)$. Finally, we bound $\langle w_{good}|s,v\rangle$ using Lemma 5. Since all $a_{kl}$ are equal to $\frac{1}{\sqrt{2N}}$ and $\cot x \leq \frac{1}{x}$, we have

$$\frac{1}{\sqrt{\sum_{k,l}a_{kl}^2\cot^2\frac{\theta_{kl}}{2}}} \;\geq\; \frac{1}{\sqrt{\frac{1}{2N}\sum_{k,l}\frac{4}{\theta_{kl}^2}}}. \qquad (14)$$

From the proof of Claim 9, we know that $\frac{1}{\theta_{kl}^2}$ is bounded from below and above by a constant times $\frac{1}{1-\cos\theta_{kl}}$. Therefore, Claim 9 implies $\sum_{k,l}\frac{1}{\theta_{kl}^2} = \Theta(N\log N)$. Thus, the expression of Eq. (14) is of order $\Omega\left(\frac{1}{\sqrt{\log N}}\right)$.
To conclude the proof of the theorem, the overlap of the state of the algorithm after $\lceil\frac{\pi}{4\alpha}\rceil$ steps and $|s,v\rangle$ is

$$|\langle w_{good}|s,v\rangle + \langle\psi'_{rem}|s,v\rangle| \;\geq\; |\langle w_{good}|s,v\rangle| - |\langle\psi'_{rem}|s,v\rangle|.$$

The first term is of order $\Omega\left(\frac{1}{\sqrt{\log N}}\right)$ and the second term is of lower order ($O\left(\frac{1}{\log N}\right)$). Therefore, the result is of order $\Omega\left(\frac{1}{\sqrt{\log N}}\right)$. Hence a measurement gives the marked location $|v\rangle$ with probability $p \geq \frac{c}{\log N}$. This completes the proof of Theorem 1.
Proof of Corollary 1: First, note that it is possible to generate the initial state $|\psi_0\rangle$ with $2\sqrt{N}$ local transformations. We start with the state concentrated in one point (say $|0, 0, 0\rangle$) and first spread the amplitude along the x-axis in $\sqrt{N}$ steps: rotate the coin register to $\frac{1}{\sqrt{N}}|0\rangle + \sqrt{\frac{N-1}{N}}|1\rangle$, followed by a $|1\rangle$-controlled shift in the x-direction, followed by a rotation of the coin register back to $|0\rangle$ in the vertex (0,0). Similarly we repeat this procedure to move $\sqrt{\frac{N-2}{N}}$ of the amplitude from (1,0) to (2,0), and so on. After spreading the amplitude along the x-axis, we repeat the procedure in the y-direction.

From the proof of Theorem 1 we only know the number of steps $T = \lceil\frac{\pi}{4\alpha}\rceil = O(\sqrt{N\log N})$ up to a constant factor; denote the upper bound by $T_{max}$. To get $\epsilon$-close to the state $U'^T|\psi_0\rangle$ we use a standard trick and run the walk for times $T_{min}, (1+\epsilon)T_{min}, (1+\epsilon)^2T_{min}, \ldots$ until we reach $T_{max}$. One of these times is within a factor of $(1\pm\epsilon)$ of T and hence our state and final measurement probability will be $\epsilon$-close to those at time T. We can choose $\epsilon$ to be some small constant. The total time including all repetitions (a geometric series bounded by $\frac{1+\epsilon}{\epsilon}T_{max}$) is still $O(T) = O(\sqrt{N\log N})$.

Finally, to amplify the success probability we use amplitude amplification [BHMT02], which is a succession of steps consisting of a reflection around the mean $|\psi_0\rangle$ and a run of the algorithm. The intermediate reflection around the mean, $I - 2|\psi_0\rangle\langle\psi_0|$, can be implemented in $4\sqrt{N}$ local transformations, similarly to the preparation of $|\psi_0\rangle$. Since the success probability of a single run is $\Omega\left(\frac{1}{\log N}\right)$, $O(\sqrt{\log N})$ rounds of amplitude amplification suffice, for a total running time of $O(\sqrt{N}\log N)$.
6.2 Theorem 2
Proof: The key difference between this walk using $S_m$ and the walk from Theorem 1 using $S_{ff}$ is that the initial state $|\psi_0\rangle$ now has very large overlap with the eigenspace of eigenvalue 1 of U and $U'$. This means that the walk (nearly) does not move at all and the state at any time T has overlap with $|\psi_0\rangle$ close to 1. The difference becomes apparent in the eigenspectrum of U:
Claim 6′ [Spectrum of U:] U has eigenvalues $\lambda_{kl}$ with corresponding eigenvectors of the form $|v_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ for all $k,l = 0,\ldots,\sqrt{N}-1$, where $|\chi_k\rangle = \frac{1}{\sqrt[4]{N}}\sum_{j=0}^{\sqrt{N}-1}\omega^{kj}|j\rangle$ with $\omega = e^{2\pi i/\sqrt{N}}$, and $\lambda_{kl}$ and $|v_{kl}\rangle$ satisfy the equation

$$C_{kl}|v_{kl}\rangle = \begin{pmatrix}\omega^k&0&0&0\\0&\omega^{-k}&0&0\\0&0&\omega^l&0\\0&0&0&\omega^{-l}\end{pmatrix}C_0\,|v_{kl}\rangle = \lambda_{kl}|v_{kl}\rangle. \qquad (15)$$

The four eigenvalues $\lambda_{kl}$ of $C_{kl}$ are 1, $-1$ and $e^{\pm i\theta_{kl}}$, where $\cos\theta_{kl} = -\frac{1}{2}\left(\cos\frac{2\pi k}{\sqrt{N}} + \cos\frac{2\pi l}{\sqrt{N}}\right)$. For the eigenvector $|v^1_{kl}\rangle$ corresponding to eigenvalue 1, we have

$$|\langle v^1_{kl}|s\rangle| \;\geq\; \frac{1 + \cos\frac{2\pi k}{\sqrt{N}} + \cos\frac{2\pi l}{\sqrt{N}} + \cos\frac{2\pi(k+l)}{\sqrt{N}}}{4}.$$
Proof: The first part is by straightforward calculation, as before (Claim 6). For the second part, the eigenvector corresponding to eigenvalue 1 is $|v^1_{kl}\rangle = \frac{|u^1_{kl}\rangle}{\|u^1_{kl}\|}$ with

$$|u^1_{kl}\rangle = \left(\omega^k(1+\omega^l),\; 1+\omega^l,\; \omega^l(1+\omega^k),\; 1+\omega^k\right).$$

We have $\|u^1_{kl}\| \leq 4$ because each of the 4 components of $|u^1_{kl}\rangle$ is at most 2 in absolute value. It remains to bound $\langle s|u^1_{kl}\rangle$. We have

$$|\langle s|v^1_{kl}\rangle| = \frac{|\langle s|u^1_{kl}\rangle|}{\|u^1_{kl}\|} \;\geq\; \frac{1}{4}\left|\frac{1}{2}\sum_i (u^1_{kl})_i\right| = \frac{1}{4}\left|(1+\omega^k)(1+\omega^l)\right|,$$

and the real part of $(1+\omega^k)(1+\omega^l)$ is $1 + \cos\frac{2\pi k}{\sqrt{N}} + \cos\frac{2\pi l}{\sqrt{N}} + \cos\frac{2\pi(k+l)}{\sqrt{N}}$. This implies the claim.
Let $H_1$ be the 1-eigenspace of U, spanned by the vectors $|\Psi^1_{kl}\rangle = |v^1_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ for $k,l = 0,\ldots,\sqrt{N}-1$, with $|\Psi^1_{00}\rangle = |\psi_0\rangle$. Write

$$|s,v\rangle = \sum_{k,l}\beta_{kl}|\Psi^1_{kl}\rangle + |s^\perp\rangle$$

where $|s^\perp\rangle$ is orthogonal to $H_1$. Then $U'$ has a 1-eigenvector $|\Psi\rangle$ such that

$$|\langle\psi_0|\Psi\rangle|^2 = 1 - \frac{|\beta_{00}|^2}{\sum_{i,j}|\beta_{ij}|^2}.$$
Proof: Let $\gamma_{kl} = \frac{\beta_{kl}}{\sqrt{\sum_{i,j}|\beta_{ij}|^2}}$ and let $|\varphi\rangle = \sum_{k,l}\gamma_{kl}|\Psi^1_{kl}\rangle$ be the normalized projection of $|s,v\rangle$ on $H_1$. Since $\langle\psi_0|\varphi\rangle = \gamma_{00}$, we can write

$$|\psi_0\rangle = \gamma_{00}|\varphi\rangle + \sqrt{1-|\gamma_{00}|^2}\,|\varphi'\rangle$$

where $|\varphi'\rangle$ is also in $H_1$ and orthogonal to $|\varphi\rangle$. We claim that $|\varphi'\rangle$ is a 1-eigenvector of $U'$. The state $|\varphi'\rangle$ is orthogonal to $|s,v\rangle$ because $|\varphi'\rangle$ belongs to $H_1$ and is orthogonal to $|\varphi\rangle$, which is the projection of $|s,v\rangle$ to that subspace. Therefore, $U'|\varphi'\rangle = U(I - 2|s,v\rangle\langle s,v|)|\varphi'\rangle = U|\varphi'\rangle = |\varphi'\rangle$, because $|\varphi'\rangle$ belongs to $H_1$. Taking $|\Psi\rangle = |\varphi'\rangle$ gives $|\langle\psi_0|\Psi\rangle|^2 = 1 - |\gamma_{00}|^2$, as claimed.

We have $\beta_{00} = \frac{1}{\sqrt{N}}$. We will show that there are $\Omega(N)$ other $\beta_{kl}$ of order $\Omega\left(\frac{1}{\sqrt{N}}\right)$. Then the overlap is

$$1 - \frac{\beta_{00}^2}{\sum_{i,j}|\beta_{ij}|^2} = 1 - O\left(\frac{1}{N}\right).$$
Claim 6′ gives the desired bound on the $\beta_{kl}$:

$$|\beta_{kl}| = \left|\langle s,v|\left(|v^1_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle\right)\right| = \left|\langle v|\left(|\chi_k\rangle\otimes|\chi_l\rangle\right)\right|\cdot|\langle s|v^1_{kl}\rangle| = \frac{|\langle s|v^1_{kl}\rangle|}{\sqrt{N}} \;\geq\; \frac{1+\cos\frac{2\pi k}{\sqrt{N}}+\cos\frac{2\pi l}{\sqrt{N}}+\cos\frac{2\pi(k+l)}{\sqrt{N}}}{4\sqrt{N}}.$$
The range for $\frac{2\pi k}{\sqrt{N}}$ and $\frac{2\pi l}{\sqrt{N}}$ is $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$. Therefore, for half of all k (resp. half of all l) we have $\left|\frac{2\pi k}{\sqrt{N}}\right| \leq \frac{\pi}{4}$ (resp. $\left|\frac{2\pi l}{\sqrt{N}}\right| \leq \frac{\pi}{4}$). For the $\frac{N}{4}$ pairs (k,l) that satisfy both of those conditions we have $\left|\frac{2\pi(k+l)}{\sqrt{N}}\right| \leq \frac{\pi}{2}$ and hence

$$1+\cos\frac{2\pi k}{\sqrt{N}}+\cos\frac{2\pi l}{\sqrt{N}}+\cos\frac{2\pi(k+l)}{\sqrt{N}} \;\geq\; 1+\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}+0 = 1+\sqrt{2}.$$

Thus, for at least $\frac{N}{4}$ pairs (k,l),

$$|\beta_{kl}| = \left|\langle s,v|\left(|v^1_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle\right)\right| \;\geq\; \frac{1+\sqrt{2}}{4\sqrt{N}} \;>\; \frac{1}{2\sqrt{N}}.$$
6.3 Theorem 3
Proof: The proof for this random walk algorithm with a 2-dimensional coin proceeds in close analogy to the proof of Theorem 1; we will emphasize and prove the points that differ.
Claim 6″ [Spectrum of U:] U has eigenvalues $\lambda_{kl}$ with corresponding eigenvectors of the form $|v_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ for all $k,l = 0,\ldots,\sqrt{N}-1$, where $|\chi_k\rangle = \frac{1}{\sqrt[4]{N}}\sum_{j=0}^{\sqrt{N}-1}\omega^{kj}|j\rangle$ with $\omega = e^{2\pi i/\sqrt{N}}$, and $\lambda_{kl}$ and $|v_{kl}\rangle$ satisfy the equation

$$C_{kl}|v_{kl}\rangle = \begin{pmatrix}\omega^l\cos\frac{2\pi k}{\sqrt{N}} & -i\,\omega^l\sin\frac{2\pi k}{\sqrt{N}}\\ -i\,\omega^{-l}\sin\frac{2\pi k}{\sqrt{N}} & \omega^{-l}\cos\frac{2\pi k}{\sqrt{N}}\end{pmatrix}|v_{kl}\rangle = \lambda_{kl}|v_{kl}\rangle. \qquad (16)$$

The two eigenvalues $\lambda_{kl}$ of $C_{kl}$ are $e^{\pm i\theta_{kl}}$ where $\cos\theta_{kl} = \frac{1}{2}\left(\cos\frac{2\pi(k+l)}{\sqrt{N}}+\cos\frac{2\pi(k-l)}{\sqrt{N}}\right)$.
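A numerical check of Claim 6″ (the signs of the off-diagonal entries in Eq. (16) are our reconstruction; the eigenphases depend only on the trace, so the check is insensitive to them):

```python
import numpy as np

sqrtN = 8
w = np.exp(2j * np.pi / sqrtN)
worst = 0.0
for k in range(sqrtN):
    for l in range(sqrtN):
        a = 2 * np.pi * k / sqrtN
        C = np.array([[w**l * np.cos(a), -1j * w**l * np.sin(a)],
                      [-1j * w**(-l) * np.sin(a), w**(-l) * np.cos(a)]])
        ev = np.linalg.eigvals(C)
        cos_t = 0.5 * (np.cos(2 * np.pi * (k + l) / sqrtN)
                       + np.cos(2 * np.pi * (k - l) / sqrtN))
        worst = max(worst,
                    abs(np.sum(ev).real - 2 * cos_t),    # trace = 2 cos(theta_kl)
                    abs(np.sum(ev).imag),
                    float(np.max(np.abs(np.abs(ev) - 1.0))))
assert worst < 1e-9
print("eigenvalues exp(+-i theta_kl) with the claimed cos(theta_kl) confirmed")
```

Since $C_{kl}$ is unitary with determinant 1 and real trace $2\cos\theta_{kl}$, its eigenvalues are forced to be $e^{\pm i\theta_{kl}}$, which is exactly what the assertions verify.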
As a corollary, we have that there are exactly two eigenvectors with eigenvalue 1, both of them of the form $|v_{00}\rangle\otimes|\chi_0\rangle\otimes|\chi_0\rangle$. Since the coin space is 2-dimensional, the two vectors $|v_{00}\rangle$ span it and, therefore, $|v\rangle\otimes|\chi_0\rangle\otimes|\chi_0\rangle$ is an eigenvector for any $|v\rangle$. In particular, we can take $|v^+_{00}\rangle = |s\rangle$ and $|v^-_{00}\rangle = |s^\perp\rangle$, where $|s^\perp\rangle$ is orthogonal to $|s\rangle$. Let $H_0$ be the space spanned by $|v^+_{00}\rangle\otimes|\chi_0\rangle\otimes|\chi_0\rangle$ and the vectors $|v^\pm_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ for $(k,l)\neq(0,0)$. Similarly to Claim 7, $H_0$ is mapped to itself by $U'$.
Let $|\Phi^+_{kl}\rangle = |v^+_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ and $|\Phi^-_{kl}\rangle = |v^-_{-k,-l}\rangle\otimes|\chi_{-k}\rangle\otimes|\chi_{-l}\rangle$. We can express the state $|s,v\rangle$ as

$$|s,v\rangle = a_0|\psi_0\rangle + \sum_{(k,l)\neq(0,0)}a_{kl}\left(|\Phi^+_{kl}\rangle + |\Phi^-_{kl}\rangle\right), \qquad (17)$$

where the equality of the coefficients of $|\Phi^+_{kl}\rangle$ and $|\Phi^-_{kl}\rangle$ follows from the proof of Lemma 2 (and $|\Phi^+_{kl}\rangle$ and $|\Phi^-_{kl}\rangle$ being complex conjugates).
We now have to bound the sum of Eq. (7). We claim that replacing all $\frac{|a_{kl}|^2}{|a_0|^2}$ by $\frac{1}{2}$ does not change the order of the sum. To see that, we first notice that $\theta_{kl} = \theta_{-k,-l}$. Therefore, replacing $|a_{kl}|^2$ and $|a_{-k,-l}|^2$ by $\frac{|a_{kl}|^2+|a_{-k,-l}|^2}{2}$ does not change the sum. Furthermore, $|a_{kl}|^2 + |a_{-k,-l}|^2 = \frac{1}{N}$. (We have $|a_{kl}| = |\langle\Phi^+_{kl}|s,v\rangle|$ and $|a_{-k,-l}| = |\langle\Phi^+_{-k,-l}|s,v\rangle| = |\langle\Phi^-_{-k,-l}|s,v\rangle|$. The vectors $|\Phi^+_{kl}\rangle$ and $|\Phi^-_{-k,-l}\rangle$ are the same as $|v^+_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$ and $|v^-_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$. Therefore, $|a_{kl}|^2 + |a_{-k,-l}|^2$ equals the squared projection of $|s,v\rangle$ onto the span of $|v^\pm_{kl}\rangle\otimes|\chi_k\rangle\otimes|\chi_l\rangle$, which is equal to $\left|\langle v|\left(|\chi_k\rangle\otimes|\chi_l\rangle\right)\right|^2 = \frac{1}{N}$.) Also, we still have $a_0 = \frac{1}{\sqrt{N}}$ and $|a_0|^2 = \frac{1}{N}$. Therefore, Eq. (7) simplifies to

$$\alpha = \Theta\left(\frac{1}{\sqrt{\sum_{(k,l)\neq(0,0)}\frac{1}{2(1-\cos\theta_{kl})}}}\right),$$

just as in the proof of Theorem 1. We get $\alpha = \Theta\left(\frac{1}{\sqrt{N\log N}}\right)$, and the rest of the proof proceeds as in Theorem 1.
6.4 Theorem 4
Proof: Let us call the 2d directions on the d-dimensional grid $i^+$ and $i^-$ for $i = 1,\ldots,d$. The flip-flop shift acts as

$$S_{ff}: |i^\pm\rangle\otimes|x_1\ldots x_i\ldots x_d\rangle \;\to\; |i^\mp\rangle\otimes|x_1\ldots x_i\pm1\ldots x_d\rangle.$$

Let us recapitulate what the key elements in the proof of Theorem 1 are and how they generalize to the d-dimensional case.
Claim 6‴ U has eigenvalues $\lambda_{k_1k_2\ldots k_d}$ with corresponding eigenvectors of the form $|v_{k_1k_2\ldots k_d}\rangle\otimes|\chi_{k_1}\rangle\otimes\cdots\otimes|\chi_{k_d}\rangle$ (for $k_i = 0,\ldots,\sqrt[d]{N}-1$), where $|\chi_k\rangle = \frac{1}{\sqrt{\sqrt[d]{N}}}\sum_{j=0}^{\sqrt[d]{N}-1}\omega^{kj}|j\rangle$ with $\omega = e^{2\pi i/\sqrt[d]{N}}$, and $\lambda_{k_1k_2\ldots k_d}$ and $|v_{k_1k_2\ldots k_d}\rangle$ satisfy the equation

$$C_{k_1k_2\ldots k_d}|v_{k_1k_2\ldots k_d}\rangle = \begin{pmatrix}0&\omega^{k_1}&0&0&\\\omega^{-k_1}&0&0&0&\\0&0&0&\omega^{k_2}&\\0&0&\omega^{-k_2}&0&\\&&&&\ddots\end{pmatrix}C_0\,|v_{k_1k_2\ldots k_d}\rangle = \lambda_{k_1k_2\ldots k_d}|v_{k_1k_2\ldots k_d}\rangle. \qquad (18)$$

The 2d eigenvalues $\lambda_{k_1k_2\ldots k_d}$ of $C_{k_1k_2\ldots k_d}$ are 1 and $-1$ with multiplicity $d-1$ each, and $e^{\pm i\theta_{k_1k_2\ldots k_d}}$ where $\cos\theta_{k_1k_2\ldots k_d} = \frac{1}{d}\sum_{i=1}^{d}\cos\frac{2\pi k_i}{\sqrt[d]{N}}$. All $|v^1_{k_1k_2\ldots k_d}\rangle$ corresponding to eigenvalue 1 are orthogonal to $|s\rangle$ for $(k_1k_2\ldots k_d)\neq(0,0,\ldots,0)$. All $|v^{-1}_{k_1k_2\ldots k_d}\rangle$ corresponding to eigenvalue $-1$ are orthogonal to $|s\rangle$ for all $(k_1k_2\ldots k_d)$.
Proof: Eq. (18) is obtained in the same way as in the proof of Claim 6. $C_{k_1k_2\ldots k_d}$ consists of 2 parts, the 2-block-diagonal matrix (call it $D_{k_1k_2\ldots k_d}$) and $C_0$. Each block in $D_{k_1k_2\ldots k_d}$ has eigenvalues $\pm1$ with eigenvectors $(\omega^{k_i/2}, \pm\omega^{-k_i/2})$, so the matrix $D_{k_1k_2\ldots k_d}$ itself has eigenspaces of 1 and $-1$ of dimension d each. In each of these two eigenspaces we can find $d-1$ orthogonal vectors orthogonal to $|s\rangle$. For those vectors $C_0 = 2|s\rangle\langle s| - I$ just flips their sign, so the $-1$ eigenvectors of $D_{k_1k_2\ldots k_d}$ become $+1$ eigenvectors of $C_{k_1k_2\ldots k_d}$ and vice versa. Call the remaining two eigenvectors of $D_{k_1k_2\ldots k_d}$ (of eigenvalue $+1$ and $-1$) $|e^+\rangle$ and $|e^-\rangle$. Then

$$\langle s|C_{k_1k_2\ldots k_d}|s\rangle = \langle s|D_{k_1k_2\ldots k_d}|s\rangle = \frac{1}{d}\sum_{i=1}^{d}\cos\frac{2\pi k_i}{\sqrt[d]{N}} = \cos\theta_{k_1k_2\ldots k_d}$$

implies $|\langle e^+|s\rangle|^2 - |\langle e^-|s\rangle|^2 = \cos\theta_{k_1k_2\ldots k_d}$. Let $|\mu\rangle$ be an eigenvector of $C_{k_1k_2\ldots k_d}$ not orthogonal to $|s\rangle$ and expand $|\mu\rangle = \mu_+|e^+\rangle + \mu_-|e^-\rangle$. A direct calculation, using $|e^+\rangle = \eta_+|s\rangle + \nu|s^\perp\rangle$ and $|e^-\rangle = \eta_-|s\rangle - \nu|s^\perp\rangle$ with $|s^\perp\rangle$ orthogonal to $|s\rangle$ in the space spanned by $|e^\pm\rangle$, then shows that the two remaining eigenvalues are $\cos\theta_{k_1k_2\ldots k_d} \pm i\sin\theta_{k_1k_2\ldots k_d} = e^{\pm i\theta_{k_1k_2\ldots k_d}}$.

For $(k_1k_2\ldots k_d) = (0,0,\ldots,0)$ there are $d+1$ 1-eigenvectors; we set $|v^1_{0\ldots0}\rangle = |s\rangle$. Then, the other 1-eigenvectors are orthogonal to $|s\rangle$.
Similarly to Theorem 1, we restrict to the subspace $H_0$ spanned by the vectors $|v^\pm_{k_1k_2\ldots k_d}\rangle\otimes|\chi_{k_1}\rangle\otimes\cdots\otimes|\chi_{k_d}\rangle$ and $|v^1_{00\ldots0}\rangle\otimes|\chi_0\rangle\otimes\cdots\otimes|\chi_0\rangle$. As in Theorem 1, we have $U'(H_0) = H_0$. Further,

$$|s,v\rangle = \frac{1}{\sqrt{N}}|\psi_0\rangle + \frac{1}{\sqrt{2N}}\sum_{(k_1,k_2,\ldots,k_d)\neq(0,0,\ldots,0)}\left(|\Phi^+_{k_1k_2\ldots k_d}\rangle + |\Phi^-_{k_1k_2\ldots k_d}\rangle\right),$$

where $|\Phi^+_{k_1k_2\ldots k_d}\rangle = |v^+_{k_1k_2\ldots k_d}\rangle\otimes|\chi_{k_1}\rangle\otimes\cdots\otimes|\chi_{k_d}\rangle$ and $|\Phi^-_{k_1k_2\ldots k_d}\rangle = |v^-_{-k_1,\ldots,-k_d}\rangle\otimes|\chi_{-k_1}\rangle\otimes\cdots\otimes|\chi_{-k_d}\rangle$. We use a modified Claim 9:
Claim 9′ $\sum_{(k_1,k_2,\ldots,k_d)\neq(0,0,\ldots,0)}\frac{1}{1-\cos\theta_{k_1k_2\ldots k_d}} = \Theta(N)$.
Proof: The proof follows along the lines of the proof of Claim 9. By Claim 6‴,

$$\frac{1}{1-\cos\theta_{k_1k_2\ldots k_d}} = \frac{d}{\sum_{j=1}^{d}\left(1-\cos\frac{2\pi k_j}{N^{1/d}}\right)}.$$

Similarly to Claim 9, this is bounded from above and below by a constant times $\frac{N^{2/d}}{k_1^2+k_2^2+\cdots+k_d^2}$. Thus, we have to estimate

$$N^{2/d}\sum_{k_1,k_2,\ldots,k_d}\frac{1}{k_1^2+k_2^2+\cdots+k_d^2} \qquad (19)$$

where the summation is over all $k_i \in \{0,\ldots,\sqrt[d]{N}-1\}$ with $(k_1,\ldots,k_d)\neq(0,\ldots,0)$. We sum over the points with $\max_i k_i = m$ (the $m^{th}$ slice): the sum of $\frac{1}{k_1^2+k_2^2+\cdots+k_d^2}$ over the $m^{th}$ slice is of order $m^{d-1}\frac{1}{m^2} = m^{d-3}$. Since there are $N^{1/d}$ slices and, for each of them, the sum is $m^{d-3} \leq N^{(d-3)/d}$, the sum $\sum_{k_1,\ldots,k_d}\frac{1}{k_1^2+k_2^2+\cdots+k_d^2}$ over all $(k_1,\ldots,k_d)\neq(0,\ldots,0)$ is of order at most $N^{1/d}\,N^{(d-3)/d} = N^{(d-2)/d}$. This implies that Eq. (19) is of order at most N. It is also of order at least N since each individual term $N^{2/d}\frac{1}{k_1^2+k_2^2+\cdots+k_d^2}$ is at least a constant.
Therefore, applying Lemma 3 gives a sharper bound $\alpha = \Theta\left(\frac{1}{\sqrt{N}}\right)$. Notice that we still fulfil the requirement $\alpha < \frac{1}{2}\theta_{min}$ needed for Lemmas 4 and 5. The reason for that is that all $\theta_i$ are at least $\Omega\left(\frac{1}{N^{1/d}}\right)$.
Similarly to Claim 9′, we can show

$$\sum_{(k_1,k_2,\ldots,k_d)\neq(0,0,\ldots,0)}\frac{1}{(1-\cos\theta_{k_1k_2\ldots k_d})^2} = \Theta(N)$$

and

$$\sum_{(k_1,k_2,\ldots,k_d)\neq(0,0,\ldots,0)}\cot^2\frac{\theta_{k_1k_2\ldots k_d}}{2} = \Theta(N).$$
By combining these two equalities with Lemmas 4 and 5, we get that the overlap between the starting state $|\psi_0\rangle$ and $|w_{start}\rangle = \frac{1}{\sqrt{2}}|w_+\rangle - \frac{1}{\sqrt{2}}|w_-\rangle$ is $1 - O\left(\frac{1}{N}\right)$, and the overlap between $|w_{good}\rangle = \frac{1}{\sqrt{2}}|w_+\rangle + \frac{1}{\sqrt{2}}|w_-\rangle$ and $|s,v\rangle$ is $\Omega(1)$. This implies that the search algorithm's final state has a constant overlap with $|s,v\rangle$.
6.5 Theorem 5
We first discuss generalizing the positive results (Theorems 1, 4 and 5) to the case with two marked items. The main issue is to state them as instances of the abstract search. Assume there are $k$ marked locations $v_1,\ldots,v_k$. Then, one step of the search algorithm is $U' = \left(I - 2\sum_{i=1}^k|s,v_i\rangle\langle s,v_i|\right)U$. Currently, we are not able to analyze instances of the abstract search where the query transformation flips the sign on more than a 1-dimensional subspace.
For the $k = 2$ case, we can avoid this problem via a reduction to $k = 1$. Define $|s'\rangle = \frac{1}{\sqrt2}|s,v_1\rangle + \frac{1}{\sqrt2}|s,v_2\rangle$. We claim that applying $(U')^T$ to the starting state $|\Phi_0\rangle$ gives the same final state as applying $(U'')^T$, where $U'' = (I - 2|s'\rangle\langle s'|)U$.
To show that, let $T$ be a symmetry of the grid such that $T(v_1) = v_2$ and $T(v_2) = v_1$. (For the 2-dimensional grid, if $v_1 = (x_1, y_1)$ and $v_2 = (x_2, y_2)$, then $T(x,y) = (x_1+x_2-x,\ y_1+y_2-y)$.) We identify $T$ with the unitary mapping $|c,v\rangle$ to $|c,T(v)\rangle$.
Claim 12 For any $t \ge 0$, $T(U')^t|\Phi_0\rangle = (U')^t|\Phi_0\rangle$.
Proof: By induction. For the base case, we have to show $T|\Phi_0\rangle = |\Phi_0\rangle$. This follows since $|\Phi_0\rangle$ is a uniform superposition of the states $|s,v\rangle$ and $T$ just permutes the locations $v$.
For the inductive case, notice that $T$ commutes with both $I - 2|s,v_1\rangle\langle s,v_1| - 2|s,v_2\rangle\langle s,v_2|$ and $U$. Therefore, $TU' = U'T$. If $T(U')^t|\Phi_0\rangle = (U')^t|\Phi_0\rangle$ is true, then we also have
$$T(U')^{t+1}|\Phi_0\rangle = U'T(U')^t|\Phi_0\rangle = (U')^{t+1}|\Phi_0\rangle,$$
completing the induction step.
Let $|s''\rangle = \frac{1}{\sqrt2}|s,v_1\rangle - \frac{1}{\sqrt2}|s,v_2\rangle$. Then, $(U')^t|\Phi_0\rangle$ is orthogonal to $|s''\rangle$ because $T|s''\rangle = -|s''\rangle$.
We have $|s,v_1\rangle\langle s,v_1| + |s,v_2\rangle\langle s,v_2| = |s'\rangle\langle s'| + |s''\rangle\langle s''|$. Together with $(U')^t|\Phi_0\rangle \perp |s''\rangle$, this means
$$\left(I - 2|s,v_1\rangle\langle s,v_1| - 2|s,v_2\rangle\langle s,v_2|\right)(U')^t|\Phi_0\rangle = \left(I - 2|s'\rangle\langle s'|\right)(U')^t|\Phi_0\rangle.$$
Thus, $U'$ can be replaced by $U''$ at every step.
The rest of the proof is now similar to the case of 1 marked location, except that $|s,v\rangle$ is replaced by $|s'\rangle$ everywhere. Theorem 2 also follows similarly to the 1 marked item case, with $|s'\rangle$ instead of $|s,v\rangle$.
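The operator identity behind this reduction, $|s,v_1\rangle\langle s,v_1| + |s,v_2\rangle\langle s,v_2| = |s'\rangle\langle s'| + |s''\rangle\langle s''|$, and the resulting equality of the two reflections on any state orthogonal to $|s''\rangle$, can be checked numerically. A small sketch (the dimension and the choice of basis vectors standing in for $|s,v_1\rangle$ and $|s,v_2\rangle$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Two orthonormal states standing in for |s,v1> and |s,v2>.
v1 = np.zeros(dim); v1[0] = 1.0
v2 = np.zeros(dim); v2[1] = 1.0
s_plus  = (v1 + v2) / np.sqrt(2)   # |s'>
s_minus = (v1 - v2) / np.sqrt(2)   # |s''>

def reflect(vecs, psi):
    """Apply I - 2 * sum_i |u_i><u_i| to psi (u_i orthonormal)."""
    out = psi.copy()
    for u in vecs:
        out -= 2 * u * (u @ psi)
    return out

# Any state orthogonal to |s''> ...
psi = rng.standard_normal(dim)
psi -= s_minus * (s_minus @ psi)

# ... is reflected identically by the two operators.
assert np.allclose(reflect([v1, v2], psi), reflect([s_plus], psi))
```

Since the walk state $(U')^t|\Phi_0\rangle$ stays orthogonal to $|s''\rangle$ by Claim 12, this is exactly why the two-item reflection can be traded for the one-dimensional reflection about $|s'\rangle$.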
7 Proofs of the technical lemmas
In this section, we prove Lemmas 1, 2, 3, 4 and 5. We will repeatedly use the following result, which can be found in many linear algebra textbooks.
Fact 1 The eigenvectors of a real unitary matrix either have eigenvalue $\pm1$ or else they appear in conjugated pairs with eigenvalues $e^{\pm i\theta}$ and eigenvectors $|\Psi^{\pm}\rangle = \frac{1}{\sqrt2}\left(|R\rangle \mp i|I\rangle\right)$, where $|R\rangle$ and $|I\rangle$ are real normalised vectors and $\langle R|I\rangle = 0$.
Proof of Lemma 1: Let $g = 1 - \mathrm{Re}(e^{i\theta_{min}})$. Then, for any $e^{i\theta}\in\mathcal{A}$, $\mathrm{Re}(e^{i\theta}) > 1-g$ and, for every eigenvector $|\Psi\rangle$ with an eigenvalue in $\mathcal{A}$, $\mathrm{Re}\,\langle\Psi|U'|\Psi\rangle > 1-g$.
If there were more than two eigenvectors of $U'$ in $\mathcal{H}_0$ with eigenvalues on the arc $\mathcal{A}$, we could construct a linear combination $|a\rangle$ of them such that $|a\rangle \perp |\Phi_0\rangle, |s,v\rangle$. Since $|a\rangle$ is a linear combination of vectors $|\Psi\rangle$ with $\mathrm{Re}\,\langle\Psi|U'|\Psi\rangle > 1-g$, we have $\mathrm{Re}\,\langle a|U'|a\rangle > 1-g$. On the other hand, $|a\rangle\perp|s,v\rangle$ implies $U'|a\rangle = U|a\rangle$ and, since also $|a\rangle\perp|\Phi_0\rangle$, $|a\rangle$ is a linear combination of eigenvectors of $U$ with eigenvalues at least $g$ away from 1. Hence $\mathrm{Re}\,\langle a|U|a\rangle \le 1-g$, which gives a contradiction.
Proof of Lemma 2: Let $|\Psi\rangle$ be the vector obtained by replacing every amplitude in $|\Phi_j^+\rangle$ by its complex conjugate. Since $U$ is a real unitary matrix, $U|\Phi_j^+\rangle = e^{i\theta_j}|\Phi_j^+\rangle$ implies $U|\Psi\rangle = e^{-i\theta_j}|\Psi\rangle$.
Therefore, we can assume that $|\Phi_j^-\rangle$ is the complex conjugate of $|\Phi_j^+\rangle$. The coefficients $a_j^+$ and $a_j^-$ are equal to the inner products $\langle\Phi_j^+|\Phi_{start}\rangle$ and $\langle\Phi_j^-|\Phi_{start}\rangle$. Since $|\Phi_{start}\rangle$ is a real vector, these two inner products are complex conjugates and $a_j^+ = (a_j^-)^*$. By multiplying $|\Phi_j^+\rangle$ and $|\Phi_j^-\rangle$ with appropriate constants, we can achieve $a_j^+ = a_j^-$.
Proof of Lemma 3: First, we express $|\Phi_{good}\rangle$ in the basis consisting of eigenvectors of $U$:
$$|\Phi_{good}\rangle = a_0|\Phi_0\rangle + \sum_{j=1}^m a_j\left(|\Phi_j^+\rangle + |\Phi_j^-\rangle\right), \qquad (20)$$
where $|\Phi_0\rangle = |\Phi_{start}\rangle$. We define, for real $\alpha$,
$$|w_\alpha\rangle = a_0\cot\frac{\alpha}{2}\,|\Phi_0\rangle + \sum_j a_j\left(\cot\frac{\alpha-\theta_j}{2}\,|\Phi_j^+\rangle + \cot\frac{\alpha+\theta_j}{2}\,|\Phi_j^-\rangle\right). \qquad (21)$$
Similarly to Claim 2 in [Amb03], we have
Lemma 13 If $|w_\alpha\rangle$ is orthogonal to $|\Phi_{good}\rangle$, then $|\Phi_\alpha\rangle = |\Phi_{good}\rangle + i|w_\alpha\rangle$ is an eigenvector of $U'$ with eigenvalue $e^{i\alpha}$ and $|\Phi_{-\alpha}\rangle = |\Phi_{good}\rangle + i|w_{-\alpha}\rangle$ is an eigenvector of $U'$ with eigenvalue $e^{-i\alpha}$.
Proof: The proof is similar to [Amb03], but we include it for completeness.
Apply $U' = U\left(I - 2|\Phi_{good}\rangle\langle\Phi_{good}|\right)$ to $|\Phi_\alpha\rangle$. Since $|w_\alpha\rangle\perp|\Phi_{good}\rangle$,
$$U'|\Phi_\alpha\rangle = U\left(-|\Phi_{good}\rangle + i|w_\alpha\rangle\right) = a_0\left(-1+i\cot\frac{\alpha}{2}\right)|\Phi_0\rangle + \sum_j a_j\left[e^{i\theta_j}\left(-1+i\cot\frac{\alpha-\theta_j}{2}\right)|\Phi_j^+\rangle + e^{-i\theta_j}\left(-1+i\cot\frac{\alpha+\theta_j}{2}\right)|\Phi_j^-\rangle\right].$$
In this equation, every coefficient is equal to the corresponding coefficient in $e^{i\alpha}\left(|\Phi_{good}\rangle + i|w_\alpha\rangle\right)$. Namely, for the coefficient of $|\Phi_0\rangle$, we have
$$-1+i\cot\frac{\alpha}{2} = \frac{e^{i\left(\frac{\pi}{2}+\frac{\alpha}{2}\right)}}{\sin\frac{\alpha}{2}} = e^{i\alpha}\,\frac{e^{i\left(\frac{\pi}{2}-\frac{\alpha}{2}\right)}}{\sin\frac{\alpha}{2}} = e^{i\alpha}\left(1+i\cot\frac{\alpha}{2}\right).$$
For the coefficient of $|\Phi_j^+\rangle$, we have
$$e^{i\theta_j}\left(-1+i\cot\frac{\alpha-\theta_j}{2}\right) = e^{i\theta_j}\,\frac{e^{i\left(\frac{\pi}{2}+\frac{\alpha-\theta_j}{2}\right)}}{\sin\frac{\alpha-\theta_j}{2}} = e^{i\alpha}\,\frac{e^{i\left(\frac{\pi}{2}-\frac{\alpha-\theta_j}{2}\right)}}{\sin\frac{\alpha-\theta_j}{2}} = e^{i\alpha}\left(1+i\cot\frac{\alpha-\theta_j}{2}\right)$$
and, similarly, the conditions for the coefficients of $|\Phi_j^-\rangle$ are satisfied.
By Eqs. (20) and (21), $\langle s,v|w_\alpha\rangle = 0$ is equivalent to
$$a_0^2\cot\frac{\alpha}{2} + \sum_{j=1}^m a_j^2\left(\cot\frac{\alpha+\theta_j}{2} + \cot\frac{\alpha-\theta_j}{2}\right) = 0. \qquad (22)$$
Let $\theta_{min}$ be the smallest of $\theta_1,\ldots,\theta_m$. Then, this equation has exactly one solution $\alpha$ in $[0,\theta_{min}]$ and one solution in $[-\theta_{min},0]$. The reason for that is that the cot function is decreasing (except for $x = k\pi$, where it goes to $-\infty$ as $x\to k\pi$ from below and to $+\infty$ as $x\to k\pi$ from above). Therefore, the whole left-hand side is decreasing in $\alpha$, except if one of $\frac{\alpha}{2}$, $\frac{\alpha+\theta_j}{2}$, $\frac{\alpha-\theta_j}{2}$ becomes a multiple of $\pi$. This happens for $\alpha = 0$ and $\alpha = \pm\theta_{min}$. Since $\theta_{min}$ is the smallest of the $\theta_j$, $[-\theta_{min},0]$ and $[0,\theta_{min}]$ contain no values of $\alpha$ for which one of the cot becomes infinite. On the interval $[0,\theta_{min}]$ the left-hand side of (22) goes to $+\infty$ if $\alpha\to0$, to $-\infty$ if $\alpha\to\theta_{min}$, and is 0 for exactly one value of $\alpha$ between 0 and $\theta_{min}$. This means that the two eigenvectors of $U'$ in $\mathcal{H}_0$ with eigenvalues closest to 1 are $|\Phi_{\pm\alpha}\rangle$ with $|w_{\pm\alpha}\rangle$ as in Eq. (21).
Next, let us determine this $\alpha$. Since
$$\cot x + \cot y = \frac{\cos x}{\sin x} + \frac{\cos y}{\sin y} = \frac{\cos x\sin y + \cos y\sin x}{\sin x\sin y} = \frac{\sin(x+y)}{\sin x\sin y} = \frac{2\sin(x+y)}{\cos(x-y) - \cos(x+y)},$$
Eq. (22) is equivalent to $a_0^2\cot\frac{\alpha}{2} + \sum_j 2a_j^2\frac{\sin\alpha}{\cos\theta_j - \cos\alpha} = 0$ which, in turn, is equivalent to
$$a_0^2\,\frac{\cot\frac{\alpha}{2}}{\sin\alpha} = \sum_j 2a_j^2\,\frac{1}{\cos\alpha - \cos\theta_j}.$$
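Both the cotangent identity and the resulting form of Eq. (22) are easy to confirm numerically; a small sketch:

```python
import math

cot = lambda x: math.cos(x) / math.sin(x)

def pair_sum(x, y):                 # cot x + cot y
    return cot(x) + cot(y)

def folded(x, y):                   # 2 sin(x+y) / (cos(x-y) - cos(x+y))
    return 2 * math.sin(x + y) / (math.cos(x - y) - math.cos(x + y))

for x, y in [(0.3, 0.4), (1.1, 0.25), (0.7, -0.2)]:
    assert math.isclose(pair_sum(x, y), folded(x, y), rel_tol=1e-12)

# Specialization used above, with x = (alpha+theta)/2, y = (alpha-theta)/2:
#   cot((alpha+theta)/2) + cot((alpha-theta)/2)
#     = 2 sin(alpha) / (cos(theta) - cos(alpha)).
alpha, theta = 0.2, 1.3
lhs = cot((alpha + theta) / 2) + cot((alpha - theta) / 2)
rhs = 2 * math.sin(alpha) / (math.cos(theta) - math.cos(alpha))
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```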
For $\alpha = o(1)$, $\sin\alpha = (1-o(1))\alpha$ and $\cot\frac{\alpha}{2} = (1+o(1))\frac{2}{\alpha}$. Therefore,
$$(1+o(1))\frac{a_0^2}{\alpha^2} = \sum_j a_j^2\,\frac{1}{\cos\alpha - \cos\theta_j} \ge \sum_j a_j^2\,\frac{1}{1 - \cos\theta_j}. \qquad (23)$$
This implies
$$\alpha \le \frac{1+o(1)}{\sqrt{\sum_j \frac{a_j^2}{a_0^2}\,\frac{1}{1-\cos\theta_j}}}. \qquad (24)$$
It remains to lower bound $\alpha$.
Assume that $\alpha < \frac12\theta_{min}$. (Otherwise, the lower bound of the lemma is true.) Then, we have
$$\cos\alpha - \cos\theta_j \ge \cos\frac{\theta_j}{2} - \cos\theta_j \ge \frac12\left(1 - \cos\theta_j\right). \qquad (25)$$
The first inequality follows from $\alpha < \frac{\theta_{min}}{2} \le \frac{\theta_j}{2}$ and $\cos$ being decreasing on $[0,\pi]$. The second inequality is equivalent to $\cos\frac{\theta_j}{2} \ge \frac12\left(1 + \cos\theta_j\right)$, which follows from $1 + \cos\theta_j = 2\cos^2\frac{\theta_j}{2} \le 2\cos\frac{\theta_j}{2}$.
Eqs. (23) and (25) together imply
$$(1+o(1))\frac{a_0^2}{\alpha^2} \le 2\sum_j a_j^2\,\frac{1}{1-\cos\theta_j}, \qquad (26)$$
which implies the lower bound on $\alpha$.
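The two-sided bound $\alpha = \Theta\big(1/\sqrt{\sum_j (a_j/a_0)^2(1-\cos\theta_j)^{-1}}\big)$ given by Eqs. (24) and (26) can be tested directly on a made-up spectrum (the values of $a_0$, $a_j$, $\theta_j$ below are ours; Eq. (22) is homogeneous in the $a$'s, so their normalization is irrelevant):

```python
import math

a0 = 0.01
pairs = [(0.1, t) for t in (0.5, 1.0, 1.5, 2.0, 2.5)]   # (a_j, theta_j)
theta_min = min(t for _, t in pairs)

cot = lambda x: math.cos(x) / math.sin(x)

def lhs22(alpha):
    """Left-hand side of Eq. (22)."""
    return a0**2 * cot(alpha / 2) + sum(
        aj**2 * (cot((alpha + tj) / 2) + cot((alpha - tj) / 2))
        for aj, tj in pairs)

# lhs22 decreases from +inf (alpha -> 0) to -inf (alpha -> theta_min),
# so bisection finds the unique root in (0, theta_min).
lo, hi = 1e-12, theta_min - 1e-12
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if lhs22(mid) > 0 else (lo, mid)
alpha = (lo + hi) / 2

bound = 1 / math.sqrt(sum((aj / a0)**2 / (1 - math.cos(tj))
                          for aj, tj in pairs))
assert alpha < theta_min / 2            # the regime assumed in the proof
assert 0.7 * bound <= alpha <= 1.01 * bound
```

The root sits within a small constant factor of the bound, as Lemma 3 asserts.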
Proof of Lemma 4: We will show that the starting state is close to the normalized state $|\hat w_{start}\rangle = \frac{1}{\|w_{start}\|}|w_{start}\rangle$, where
$$|w_{start}\rangle = \frac{1}{\sqrt2}|w_\alpha\rangle - \frac{1}{\sqrt2}|w_{-\alpha}\rangle.$$
start
) =
2a
0
cot
2
i[
start
) +
2a
j
_
cot
+
j
2
cot
j
2
_
i
_
[
+
j
) [
j
)
_
.
We have
start
[w
start
) =
start|w
start
start
2a
0
cot
2
w
start
start
|. We
have
$$\|w_{start}\|^2 = 2a_0^2\cot^2\frac{\alpha}{2} + \sum_j a_j^2\left(\cot\frac{\theta_j+\alpha}{2} - \cot\frac{\theta_j-\alpha}{2}\right)^2.$$
Since $\cot x = (1+o(1))\frac{1}{x}$ for $x\to0$, the first term is $2(1+o(1))a_0^2\,\frac{4}{\alpha^2} = \Theta\left(a_0^2/\alpha^2\right)$. Similarly to the previous lemma, we have
$$\cot\frac{\theta_j+\alpha}{2} - \cot\frac{\theta_j-\alpha}{2} = -\frac{2\sin\alpha}{\cos\alpha - \cos\theta_j} = -\frac{(2+o(1))\,\alpha}{\cos\alpha - \cos\theta_j}.$$
Since $\alpha < \frac12\theta_{min}$, we have $\cos\alpha - \cos\theta_j \ge \frac12(1-\cos\theta_j)$ (similarly to Eq. (25)). Therefore,
$$\frac{\|w_{start}\|^2}{2a_0^2\cot^2\frac{\alpha}{2}} \le 1 + \frac{\sum_j a_j^2\left(\frac{(2+o(1))\,\alpha}{\frac12(1-\cos\theta_j)}\right)^2}{\Theta\left(a_0^2/\alpha^2\right)} = 1 + O\left(\sum_j \frac{a_j^2}{a_0^2}\,\frac{\alpha^4}{\left(1-\cos\theta_j\right)^2}\right).$$
This means that
$$\langle\Phi_{start}|\hat w_{start}\rangle = \frac{\sqrt2\,a_0\cot\frac{\alpha}{2}}{\|w_{start}\|} = 1 - O\left(\sum_j \frac{a_j^2}{a_0^2}\,\frac{\alpha^4}{\left(1-\cos\theta_j\right)^2}\right).$$
Proof of Lemma 5: Let $|\hat w_{good}\rangle = \frac{1}{\|w_{good}\|}|w_{good}\rangle$, where $|w_{good}\rangle = |\Phi_\alpha\rangle + |\Phi_{-\alpha}\rangle$. We have
$$|w_{good}\rangle = 2a_0|\Phi_{start}\rangle + \sum_{j=1}^m a_j\left[\left(2 + i\cot\frac{\alpha-\theta_j}{2} + i\cot\frac{-\alpha-\theta_j}{2}\right)|\Phi_j^+\rangle + \left(2 + i\cot\frac{\alpha+\theta_j}{2} + i\cot\frac{-\alpha+\theta_j}{2}\right)|\Phi_j^-\rangle\right].$$
Also, $\langle\Phi_{good}|\hat w_{good}\rangle = \frac{\langle\Phi_{good}|w_{good}\rangle}{\|w_{good}\|}$. Furthermore,
$$\langle\Phi_{good}|w_{good}\rangle = 2a_0^2 + \sum_{j=1}^m 4a_j^2 = 2\,\big\||\Phi_{good}\rangle\big\|^2 = 2.$$
(The imaginary terms cancel out because $\cot\frac{-\alpha+\theta_j}{2} = -\cot\frac{\alpha-\theta_j}{2}$.) It remains to bound $\|w_{good}\|$.
We have
$$\|w_{good}\|^2 = 4a_0^2 + \sum_{j=1}^m 8a_j^2 + \sum_{j=1}^m 2a_j^2\left(\cot\frac{\theta_j+\alpha}{2} + \cot\frac{\theta_j-\alpha}{2}\right)^2 = 4 + \sum_{j=1}^m 2a_j^2\left(\cot\frac{\theta_j+\alpha}{2} + \cot\frac{\theta_j-\alpha}{2}\right)^2.$$
Since $\alpha < \frac12\theta_{min}$, both $\cot\frac{\theta_j\pm\alpha}{2}$ are at most $\cot\frac{\theta_j}{4}$ in absolute value, so this sum is at most
$$4 + \sum_{j=1}^m 2a_j^2\left(2\cot\frac{\theta_j}{4}\right)^2 = 4 + O\left(\sum_{j=1}^m a_j^2\cot^2\frac{\theta_j}{4}\right).$$
Therefore, $\|w_{good}\| = O\left(\max\left(\sqrt{\sum_j a_j^2\cot^2\frac{\theta_j}{4}},\ 1\right)\right)$ and
$$\langle\Phi_{good}|\hat w_{good}\rangle = \Omega\left(\min\left(\frac{1}{\sqrt{\sum_j a_j^2\cot^2\frac{\theta_j}{4}}},\ 1\right)\right).$$
8 General graphs
The approach and methods we have presented are amenable to analyzing quantum walk algorithms on other graphs $G$. All we need is the eigenspectrum of the unperturbed walk $U$, an appropriate subspace $\mathcal{H}_0$ containing no 1-eigenvectors of $U$ other than $|\Phi_0\rangle$ (which is equivalent to proving that all but one 1-eigenvector of $U$ are orthogonal to $|s\rangle$), and the sums in Lemmas 3, 4 and 5 involving the eigenvalues of $U$ (which give the angle $\alpha$ and the overlaps). Then we can apply Lemmas 3-5 to get the desired result.
Hypercube: For instance, we can derive the performance of the random walk search algorithm on the hypercube, given in [SKW03], without having to guess the form of the eigenvectors. [SKW03] showed that the random walk search algorithm after time $T = \frac{\pi}{2}\sqrt{N}$ gives a probability of $\frac12$ to measure the marked state. For the $d$-dimensional hypercube (with $N = 2^d$ vertices), the transformation $U$ has $d\,2^d$ eigenvectors. An argument similar to Claim 7 shows that the quantum walk stays in the $2^d$-dimensional subspace spanned by $2^d$ eigenvectors with eigenvalues $e^{i\theta_k}$ [MR02] with
$$\cos\theta_k = 1 - \frac{2k}{d}$$
for $k = 0\ldots d$, each with degeneracy $\binom{d}{k}$. Among those, $|\Phi_0\rangle$ is the only eigenvector with eigenvalue 1, thus we have an instance of the abstract search. We can now apply Lemmas 3-5. For Lemma 3, we need to compute the sum of inverse gaps of all eigenvalues of $U$, as in Lemma 9, which is now of the form
$$\sum_{k=1}^d\binom{d}{k}\frac{1}{1-\cos\theta_k} = \frac{d}{2}\sum_{k=1}^d\binom{d}{k}\frac{1}{k} = 2^d(1+o(1)) = N(1+o(1)).$$
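The asymptotics of this binomial sum are easy to confirm numerically; a small sketch:

```python
import math

def ratio(d):
    """((d/2) * sum_k C(d,k)/k) / 2**d  -- should tend to 1 as d grows."""
    s = sum(math.comb(d, k) / k for k in range(1, d + 1))
    return (d / 2) * s / 2 ** d

# The 1 + o(1) correction shrinks as d grows.
assert abs(ratio(40) - 1) < abs(ratio(10) - 1)
assert 0.9 < ratio(40) < 1.2
```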
This gives a time $T = \Theta(\sqrt{N})$ for the perturbed walk $U'$, in accordance with [SKW03].
More generally, for walks on lattices the relevant spectrum can be read off from the matrix $A = \frac{1}{d}\sum_{i=1}^d S_i$, where the $S_i$ are the shift operators. The eigenvalues of $A$ are just sums of eigenvalues of shifts (which are the Fourier coefficients). For instance, in the case of the grid, the eigenvalues of $A$ are of the form $\frac12\left(\cos\frac{2\pi k}{\sqrt N} + \cos\frac{2\pi l}{\sqrt N}\right) = \frac14\left(\omega^k + \omega^{-k} + \omega^l + \omega^{-l}\right)$ for $k,l = 0\ldots\sqrt N - 1$, where $\omega = e^{2\pi i/\sqrt N}$. In this way, the sums $\sum\frac{1}{1-\cos\theta}$ (with $e^{i\theta}$ an eigenvalue of $U$) which determine the speed of the algorithm can be estimated from graph properties alone (given a successful choice of coin).
Related work: Analyzing quantum walks on general graphs has been recently considered by Szegedy [Sze04], who has shown the following general result. If an $\varepsilon$ fraction of all vertices of a graph is marked and all eigenvalues of the graph differ from 1 by at least $\delta$, then $O\left(\frac{1}{\sqrt{\delta\varepsilon}}\right)$ steps of quantum walk suffice. This contributes to the same goal as our paper: developing general tools for analyzing quantum walks and using them in quantum algorithms.
The power of the two methods (ours and [Sze04]) seems to be incomparable. The strength of our method is that it is able to exploit a finer structure of the graph. For example, consider the 2-dimensional grid with a unique marked item. Applying the theorem of [Sze04] gives a running time of $O(N^{3/4})$, compared to $O(\sqrt{N}\log N)$. Our approach uses a quantity $\left(\sum\frac{1}{1-\cos\theta}\right)$ that involves all eigenvalues, capturing the fact that most eigenvalues are not close to 1.
The strength of Szegedy's [Sze04] analysis is that it allows one to handle multiple marked locations with no extra effort, which we have not been able to achieve with our approach. It might be interesting to combine the methods so that both advantages can be achieved at the same time.
9 Conclusion
We have shown that the discrete quantum walk can search the 2D grid in time $O(\sqrt{N}\log N)$ and higher-dimensional grids in time $O(\sqrt{N})$.

[Sze04] M. Szegedy. Spectra of quantized walks and a $\sqrt{\delta\epsilon}$ rule, quant-ph/0401053.
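The 2D search walk itself is simple to simulate. A minimal numerical sketch (our own illustration of the coined walk: Grover coin at unmarked vertices, $-I$ coin at the marked vertex, flip-flop shift on an $n\times n$ torus); the probability at the marked vertex rises far above the uniform value $1/N$:

```python
import numpy as np

# Directions: 0:+x, 1:-x, 2:+y, 3:-y; the flip-flop shift flips the direction.
n = 8                                    # grid side; N = n*n vertices
N = n * n
marked = (0, 0)

grover = np.full((4, 4), 0.5) - np.eye(4)          # Grover coin 2|s><s| - I
psi = np.full((4, n, n), 1.0 / np.sqrt(4 * N))     # uniform start |Phi_0>

def step(psi):
    # Coin: Grover everywhere, -I at the marked vertex.
    out = np.einsum('cd,dxy->cxy', grover, psi)
    out[:, marked[0], marked[1]] = -psi[:, marked[0], marked[1]]
    # Flip-flop shift: |d, v> -> |dbar, v + e_d>.
    new = np.empty_like(out)
    new[1] = np.roll(out[0], 1, axis=0)   # moved +x, direction becomes -x
    new[0] = np.roll(out[1], -1, axis=0)
    new[3] = np.roll(out[2], 1, axis=1)
    new[2] = np.roll(out[3], -1, axis=1)
    return new

peak = 0.0
for _ in range(200):
    psi = step(psi)
    peak = max(peak, float(np.sum(np.abs(psi[:, 0, 0]) ** 2)))

assert peak > 4.0 / N    # well above the uniform probability 1/N
```

Tracking the peak over a window of steps reflects the fact that the success probability oscillates; the analysis above tells at which time the first peak occurs.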