
Lecture 7

October 26

7.1 Quantum Hamiltonian Complexity


A Hamiltonian H is an observable, which means it is a Hermitian matrix, H† = H.

In classical statistical physics, we have an energy function E, e.g., in the Ising model E(x) = Σ_{(i,j)∈G} (−x_i x_j).
In thermal equilibrium, the Boltzmann distribution is

p(x) = e^{−βE(x)} / Z.
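As a minimal numeric sketch of the Boltzmann distribution (the 3-spin chain, edge set, and β below are illustrative choices, not from the lecture):

```python
# Sketch: Boltzmann distribution for a small Ising chain,
# E(x) = -sum over edges (i,j) of x_i * x_j, with x_i in {-1, +1}.
import itertools
import math

edges = [(0, 1), (1, 2)]          # a 3-site chain (illustrative choice)
beta = 1.0                        # inverse temperature

def energy(x):
    return -sum(x[i] * x[j] for i, j in edges)

configs = list(itertools.product([-1, 1], repeat=3))
weights = [math.exp(-beta * energy(x)) for x in configs]
Z = sum(weights)                  # partition function
p = [w / Z for w in weights]

# The aligned configurations (-1,-1,-1) and (+1,+1,+1) minimize E,
# so they carry the largest probability.
assert abs(sum(p) - 1.0) < 1e-12
assert max(p) == p[0] == p[-1]
```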
In quantum systems, we have the Hamiltonian H. We measure an energy which is an eigenvalue of H. The
density matrix is
ρ_eq = e^{−βH} / Z.
Furthermore, we have the Schrödinger equation

∂|ψ⟩/∂t = −(i/ℏ) H |ψ⟩,

so |ψ⟩_t = e^{−itH/ℏ} |ψ⟩_0.
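A sketch of this time evolution (in units with ℏ = 1; the choice H = Pauli-X and the initial state are illustrative):

```python
import numpy as np

# Sketch: |psi_t> = e^{-i t H} |psi_0> via eigendecomposition (hbar = 1).
H = np.array([[0.0, 1.0], [1.0, 0.0]])      # Pauli-X
psi0 = np.array([1.0, 0.0], dtype=complex)  # start in |0>

def evolve(psi, H, t):
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * t * evals)) @ evecs.conj().T
    return U @ psi

psi_t = evolve(psi0, H, np.pi / 2)
# e^{-i(pi/2)X} = -iX, so |0> evolves to -i|1>: all amplitude on |1>.
assert abs(abs(psi_t[1]) - 1.0) < 1e-12
# Unitarity: the norm is preserved.
assert abs(np.linalg.norm(psi_t) - 1.0) < 1e-12
```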

Let the eigenvalues of H be λ_0 ≤ λ_1 ≤ ··· ≤ λ_r. The eigenvalue λ_0 is the ground state energy, and |ψ_0⟩ is the corresponding eigenvector.

Definition 7.1 (Hamiltonian Problem). We are given a Hamiltonian H and 0 ≤ a < b (b ≥ a + 1). If
λ0 ≤ a, output “YES”. If λ0 ≥ b, output “NO”.

Definition 7.2 (k-Local Hamiltonian Problem). We are given Hermitian matrices H_1, …, H_r, each acting on k qubits (i.e., on (C^2)^{⊗k}), H = Σ_{i=1}^r (H_i ⊗ I), and 0 ≤ a < b (b ≥ a + 1). If λ_0 ≤ a, output “YES”. If λ_0 ≥ b, output “NO”.

We can now look at ground states of local Hamiltonians.

k-LH is a quantum generalization of k-CSP. For example, to encode 3-SAT, we can take a clause such as


(x1 ∨ x̄2 ∨ x3 ), which is False only if (x1 , x2 , x3 ) = (0, 1, 0), so we encode it as the Hamiltonian
diag(0, 0, 1, 0, 0, 0, 0, 0),

the 8×8 diagonal projector with a single 1 at the position indexed by the falsifying assignment, i.e., the basis state |010⟩.
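A sketch of this encoding in code (the function names are hypothetical): the clause Hamiltonian assigns energy 0 to every satisfying assignment and energy 1 to the single falsifying one.

```python
import numpy as np

# Sketch: the clause (x1 OR NOT x2 OR x3) as a diagonal projector onto its
# unique falsifying assignment (x1,x2,x3) = (0,1,0), i.e. basis index 0b010 = 2.
def clause_hamiltonian():
    H = np.zeros((8, 8))
    H[2, 2] = 1.0                 # penalize the only falsifying assignment
    return H

H = clause_hamiltonian()

def clause_value(x1, x2, x3):
    return x1 or (not x2) or x3

for idx in range(8):
    x1, x2, x3 = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
    e = np.zeros(8); e[idx] = 1.0
    energy = e @ H @ e            # diagonal entry = clause penalty
    assert energy == (0.0 if clause_value(x1, x2, x3) else 1.0)
```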

k-LH is QMA-complete. Recall that L ∈ NP if there exists a Turing machine V such that for all x ∈ L, there exists y ∈ {0,1}^{poly(|x|)} such that V(x, y) = 1, and for all x ∉ L and all y′ ∈ {0,1}^{poly(|x|)}, V(x, y′) = 0. We say that L ∈ QMA if there exists a quantum polynomial-time verifier V such that for all x ∈ L, there exists a witness |y⟩ ∈ (C^2)^{⊗poly(|x|)} such that P{V(x, |y⟩) = 1} ≥ 2/3, and for all x ∉ L and all states |y′⟩, P{V(x, |y′⟩) = 1} ≤ 1/3.

Theorem 7.3. k-LH for k ≥ 2 is QMA-hard.

It is believed that there is no subexponential classical witness.

Due to the difficulty of the computational problem, we will consider gapped Hamiltonians. A Hamiltonian H is gapped if λ_1 ≥ λ_0 + gap, where the gap is Ω(1) or Ω(1/poly(n)).

7.1.1 Tensor Networks


A tensor is a map T : [d1 ] × · · · × [dn ] → C.

We will now introduce the concept of a tensor network. In a simple example of a tensor network, a tensor
T is connected to four free edges i1 , i2 , i3 , i4 . We define composition rules for tensor networks:

1. (tensor product) Given two tensor networks T_1 and T_2 with 4 and 3 free edges respectively, the tensor network containing both networks is T(i_1, …, i_7) = T_1(i_1, …, i_4) T_2(i_5, …, i_7). The tensor product of two tensor networks u and v, each with a single free edge, is uv^T, the tensor network with T(i, j) = u(i)v(j).
2. (edge contraction) If we connect two tensor networks T_1(i_1, i_2, k) and T_2(k, i_3, i_4) by an edge k, then the resulting tensor network is T(i_1, i_2, i_3, i_4) = Σ_{k∈[d]} T_1(i_1, i_2, k) T_2(k, i_3, i_4).

Example 7.4. If we connect the edges of two vectors u and v, we get the inner product ⟨u, v⟩, i.e., α = Σ_{i∈[d]} u(i) v(i).

Example 7.5. Similarly, if we connect the corresponding edges of two matrices A and B, we get ⟨A, B⟩.

Example 7.6. If we connect the two edges of a matrix M together, we get Σ_{i∈[d]} M(i, i) = tr M.

Example 7.7. If we connect M(i, k) and N(k, j) along the edge k, we get T = MN, since T(i, j) = Σ_{k∈[d]} M(i, k) N(k, j).
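The contractions in Examples 7.4–7.7 can all be checked numerically with np.einsum (a sketch; the shapes are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: the four contraction examples as einsum expressions.
rng = np.random.default_rng(0)
u, v = rng.standard_normal(4), rng.standard_normal(4)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
M, N = rng.standard_normal((3, 4)), rng.standard_normal((4, 5))

# Example 7.4: contracting the single edges of u and v gives <u, v>.
assert np.isclose(np.einsum('i,i->', u, v), u @ v)
# Example 7.5: contracting both edge pairs of A and B gives <A, B> = tr(A^T B).
assert np.isclose(np.einsum('ij,ij->', A, B), np.trace(A.T @ B))
# Example 7.6: contracting a matrix's two edges together gives the trace.
assert np.isclose(np.einsum('ii->', A), np.trace(A))
# Example 7.7: contracting one shared edge gives the matrix product.
assert np.allclose(np.einsum('ik,kj->ij', M, N), M @ N)
```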

7.1.2 Matrix Product States (MPS)


Motivation: For n = 2, (Σ_{i=0}^1 α_{1i}|i⟩)(Σ_{j=0}^1 α_{2j}|j⟩) = Σ_{i,j=0}^1 α_{1i} α_{2j} |ij⟩.

Definition 7.8 (MPS). Consider a set of n objects A_1^{j_1}, …, A_n^{j_n}. At the boundaries (sites 1 and n), these are vectors in C^D; in the middle, they are matrices in C^{D×D}. We say that |ψ⟩ is an MPS if

|ψ⟩ = Σ_{j_1,…,j_n=0}^{d−1} A_1^{j_1} ··· A_n^{j_n} |j_1 ··· j_n⟩.

This requires O(n d D^2) parameters to describe. For D = 1, the state is a product (unentangled) state; for D = 2^{Ω(n)}, MPS can represent maximally entangled states.
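A sketch of evaluating MPS amplitudes (the tensors here are random illustrative data; the bracketing of boundary vectors and middle matrices follows the definition above):

```python
import numpy as np

# Sketch: psi(j1..jn) = A1[j1] @ A2[j2] @ ... @ An[jn], with boundary
# vectors in C^D and middle matrices in C^{D x D}.
d, D, n = 2, 3, 4
rng = np.random.default_rng(1)

# A[0]: d vectors of shape (D,); A[1..n-2]: d matrices (D, D); A[n-1]: d vectors (D,)
tensors = ([rng.standard_normal((d, D))] +
           [rng.standard_normal((d, D, D)) for _ in range(n - 2)] +
           [rng.standard_normal((d, D))])

def amplitude(js):
    vec = tensors[0][js[0]]                   # left boundary vector
    for site in range(1, n - 1):
        vec = vec @ tensors[site][js[site]]   # multiply through the chain
    return vec @ tensors[-1][js[-1]]          # close with the right boundary

# Build the full d^n state vector; each amplitude costs O(n D^2).
psi = np.array([amplitude([(idx >> (n - 1 - k)) & 1 for k in range(n)])
                for idx in range(d ** n)])
assert psi.shape == (16,)
assert np.linalg.norm(psi) > 0
```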

7.1.3 Density Matrix Renormalization Group (DMRG) (1D Heuristic)


We want to solve

min_{ψ ∈ C^{2^n}} ψ* H ψ / (ψ* ψ).

Instead, we restrict to MPS with small bond dimension D and solve

min_{|ψ⟩ MPS with small D} ⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩.

We can represent the numerator and denominator by tensor networks. For λ > 0, the Lagrangian is min_ψ {⟨ψ|H|ψ⟩ − λ⟨ψ|ψ⟩}.

(DMRG) For i = 1, …, n, fix every tensor except the one at site i, and optimize over {A_i^{j_i}}. It turns out that differentiating the tensor network just removes a node; if we let H′ and N be the linear maps representing the rest of the tensor network acting on the free tensor x, then this becomes a generalized eigenvalue problem H′x = λNx.
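The local step can be sketched numerically. Here H′ and N are random Hermitian / positive-definite stand-ins for the environment maps (not actual DMRG environments); with N positive definite, the generalized problem reduces to a standard one via the Cholesky factorization N = LL*:

```python
import numpy as np

# Sketch of the local DMRG step: minimizing x* H' x / x* N x is the
# generalized eigenproblem H' x = lambda N x.
rng = np.random.default_rng(2)
k = 6
A = rng.standard_normal((k, k)); Hp = (A + A.T) / 2              # Hermitian stand-in for H'
B = rng.standard_normal((k, k)); Nmat = B @ B.T + k * np.eye(k)  # positive-definite stand-in for N

L = np.linalg.cholesky(Nmat)
Linv = np.linalg.inv(L)
evals, y = np.linalg.eigh(Linv @ Hp @ Linv.T)  # standard problem for y = L^T x
x = Linv.T @ y[:, 0]                           # eigenvector of the smallest eigenvalue
lam = evals[0]

# Check the generalized eigenvalue equation H' x = lambda N x.
assert np.allclose(Hp @ x, lam * (Nmat @ x), atol=1e-8)
# It attains the minimal Rayleigh quotient among random trial vectors.
for _ in range(20):
    z = rng.standard_normal(k)
    assert (z @ Hp @ z) / (z @ Nmat @ z) >= lam - 1e-9
```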

7.1.4 Area Laws & AGSP


Let H = Σ_{i=1}^r H_i, where each H_i is 2-local (spatially local). Assume that the ground state |ψ_0⟩ is unique, 0 ≤ ‖H_i‖ ≤ 1, H_i^2 = H_i (each H_i is a projector), and λ_0(H) = 0. The gap is λ_1(H) − λ_0(H) = ε = Ω(1).

Complexity (entropy) of a region should scale as the surface area (instead of the volume).

Conjecture (Area Law): If |ψ_0⟩ is the ground state of a local Hamiltonian and (A, Ā) is a partition of the qubits on which H acts, then S(A)_{|ψ_0⟩} scales as the size of ∂A, where S(A)_{|ψ_0⟩} is the von Neumann entropy of the reduced state on A and ∂A is the set of qubits in A that interact with Ā through H.

Result: For 1D local Hamiltonians that satisfy the assumptions above, the area law holds.
Easy case: Assume that the H_i commute. Then P = Π_{i=1}^r (1 − H_i) projects onto the ground state.
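A sketch of the commuting case with diagonal projectors (which trivially commute; the particular H_i are illustrative):

```python
import numpy as np

# Sketch: for commuting projectors H_i, P = prod_i (1 - H_i) projects
# onto the common zero-energy (ground) space.
H1 = np.diag([0.0, 1.0, 0.0, 1.0])    # penalizes basis states 1 and 3
H2 = np.diag([0.0, 0.0, 1.0, 1.0])    # penalizes basis states 2 and 3
H = H1 + H2
I = np.eye(4)

P = (I - H1) @ (I - H2)
assert np.allclose(P @ P, P)          # P is a projector
# P projects exactly onto the ground space: here the single state |0>.
assert np.allclose(P, np.diag([1.0, 0.0, 0.0, 0.0]))
assert np.allclose(H @ P, np.zeros((4, 4)))   # range(P) has energy 0
```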

Definition 7.9. The entanglement rank of |ψi is the number of non-zero coefficients in the Schmidt
decomposition.

Approximate ground state projection: K is a (D, ∆)-AGSP if:

1. K|ψ_0⟩ = |ψ_0⟩ (invariance);
2. if |ψ^⊥⟩ is orthogonal to the ground space, then K|ψ^⊥⟩ is also orthogonal to the ground space, and ‖K|ψ^⊥⟩‖² ≤ ∆ (shrinkage);

3. if |ψ⟩ has entanglement rank L, then K|ψ⟩ has entanglement rank ≤ DL.

If there exists a (D, ∆)-AGSP with D∆ < 1/2, then an area law holds. The proof proceeds in two stages:
1. Assume that there exists a (D, ∆)-AGSP with D∆ < 1/2. Then there exists a product state |φ⟩ = |L⟩|R⟩ such that

|⟨φ|ψ_0⟩| = µ ≥ 1/√(2D).

2. Assuming the above, we obtain an area law for the 1D case by applying K^ℓ to |φ⟩.
How do we construct AGSPs? First attempt:

K = (1 − H/‖H‖)^ℓ.

For this,

∆ = (1 − ε/‖H‖)^ℓ ≈ e^{−εℓ/‖H‖}.
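The invariance and shrinkage of this first attempt can be verified on a small diagonal example (the spectrum below is an illustrative choice):

```python
import numpy as np

# Sketch: K = (1 - H/||H||)^l fixes the ground state and shrinks every
# excited eigenstate by at least (1 - eps/||H||)^l ~ e^{-eps l / ||H||}.
evals = np.array([0.0, 0.5, 1.0, 3.0])   # lambda_0 = 0, gap eps = 0.5
H = np.diag(evals)
norm_H, eps, l = evals[-1], evals[1], 40

K = np.linalg.matrix_power(np.eye(4) - H / norm_H, l)

ground = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(K @ ground, ground)        # invariance: K|psi0> = |psi0>

delta = (1 - eps / norm_H) ** l               # shrinkage factor
for i in range(1, 4):
    e = np.zeros(4); e[i] = 1.0
    assert np.linalg.norm(K @ e) <= delta + 1e-12
```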

Hamiltonian truncation: Look at the s qubits surrounding the boundary of the cut A. Call the combined Hamiltonian on all qubits to the left of these s qubits H_L, and similarly call the Hamiltonian on all qubits to their right H_R. So H = H_L + H_1 + ··· + H_s + H_R. Let H′ = H_L^{≤t} + H_1 + ··· + H_s + H_R^{≤t}, where the notation H^{≤t} means we truncate the eigenvalues that are greater than t.

Proposition 7.10.

1. ‖H′‖ ≤ s + 2t.
2. H′ has a constant gap.
3. H′|ψ_0⟩ = 0.

So, now consider

K = (1 − H′/‖H′‖)^ℓ

and

∆ ≈ e^{−εℓ/‖H′‖}.

Unfortunately, this is still not good enough.

The Chebyshev polynomials are given by T0 (x) = 1, T1 (x) = x, Td (x) = 2xTd−1 (x) − Td−2 (x). They have
the property that they are bounded in absolute value by 1 in the interval [−1, 1], and any other degree-d
polynomial with this property grows slower than Td outside of [−1, 1].
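The recurrence, boundedness on [−1, 1], and fast growth outside it can be checked directly (the degree and test point below are illustrative):

```python
import numpy as np

# Sketch: Chebyshev polynomials via T_0 = 1, T_1 = x, T_d = 2x T_{d-1} - T_{d-2}.
def chebyshev(d, x):
    t_prev, t = np.ones_like(x), x            # T_0, T_1
    if d == 0:
        return t_prev
    for _ in range(d - 1):
        t_prev, t = t, 2 * x * t - t_prev     # the recurrence
    return t

xs = np.linspace(-1, 1, 1001)
assert np.max(np.abs(chebyshev(10, xs))) <= 1 + 1e-9   # bounded by 1 on [-1, 1]

# Outside [-1, 1], T_d grows faster than other degree-d polynomials bounded
# by 1 on [-1, 1]; compare against x^d, which is one such polynomial.
x0 = np.array([1.5])
assert chebyshev(10, x0)[0] > x0[0] ** 10
```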

We define

C_ℓ(x) = T_ℓ((‖H′‖ + ε − 2x)/(‖H′‖ − ε)) / T_ℓ((‖H′‖ + ε)/(‖H′‖ − ε))

and K = C_ℓ(H′). Then C_ℓ(0) = 1 and ∆ ≈ 4 e^{−4ℓ√(ε/‖H′‖)}.

Now we analyze the entanglement rank. We can write K = Σ_{i=0}^ℓ c_i (H′)^i, and

(H′)^ℓ = (H_L^{≤t} + H_1 + ··· + H_s + H_R^{≤t})^ℓ = Σ_{i_1,…,i_ℓ} H_{i_1} ··· H_{i_ℓ},

where each index i_j ranges over the s + 2 terms of H′.

For a single monomial, the entanglement rank across a random cut A′ is ≈ 4^{ℓ/s}, so the entanglement rank across A is at most ≈ 4^{ℓ/s} · 4^s = 4^{ℓ/s+s}. If we group the terms intelligently and sum, we can get D = 4^{ℓ/s+s}.

Now D∆ ≈ 4^{ℓ/s+s} e^{−4ℓ√(ε/(s+2t))}, so if we take

ℓ = O(s²), s = O(log²(d)/ε),

then we will have D∆ ≤ 1/2.

Note that the two-dimensional area law is still an open problem. What we have shown is that

S_{|ψ_0⟩}(A) ≤ O(log²(d)/ε).

If we could instead show O(log(d)/ε), then by reducing the two-dimensional case to the one-dimensional case we would prove S_{|ψ_0⟩}(A) ≤ O(r/ε) (where r is the radius of the system), which is the two-dimensional area law. In fact, if we could prove O(log^{2−α}(d)/ε) for any α > 0, then we could prove a sub-area law, which would already be interesting.
