
Eigenvalue problems

Name/Surname: Dionysios Zelios

Email: dionisis.zel@gmail.com

Course: Computational Physics (FK8002)

Date of submission: 14/04/2014

CONTENTS

Description of the problem
  i.   Introduction
  ii.  Finite difference method
         Five-point method
  iii. Diagonalization
         Inverse power iteration
         Inverse power iteration with shift

Results
  i.   Standard routine
  ii.  Inverse power iteration with shift routine
  iii. Time duration of our methods

References

Description of the problem

i. Introduction
When we want to treat a quantum mechanical system, we usually have to solve an eigenvalue problem $H\psi = E\psi$, where H is the Hamilton operator. For a one-dimensional problem:

$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + V(x)$$
As a test case we will use the one-dimensional harmonic oscillator Hamiltonian. This problem can be solved analytically, and we will use that to check our numerical approach. The importance of the harmonic oscillator problem stems from the fact that whenever there is a local potential minimum, the harmonic oscillator model gives the first approximation to the physics. If the potential V(x) has a minimum at $x = x_0$, we can expand it in a Taylor series around the minimum:
$$V(x) = V(x_0) + V'(x_0)(x - x_0) + \frac{1}{2}V''(x_0)(x - x_0)^2 + \dots \approx V(x_0) + \frac{1}{2}k(x - x_0)^2$$

where we have used that $V'(x_0) = 0$ since $x = x_0$ is a minimum. We have further put $V''(x_0) = k$, where $V''(x_0) > 0$ (which follows since we are in a minimum).


A classical harmonic oscillator (a spring for example) is governed by a restoring force $F = -k(x - x_0)$ and its potential energy is $V(x) = \frac{1}{2}k(x - x_0)^2$. Such an oscillator will oscillate with period $2\pi\sqrt{m/k}$. Using $\omega = \sqrt{k/m}$ instead of k, we can rewrite the Hamilton operator, where we have put $x_0$ in the origin. Hence we have:

$$H\psi = \left(-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} + \frac{1}{2}m\omega^2 x^2\right)\psi = E\psi$$

This can be transformed to a dimensionless system by setting $z = \sqrt{\frac{m\omega}{\hbar}}\,x$; thus we will have:

$$\left(-\frac{1}{2}\frac{\partial^2}{\partial z^2} + \frac{1}{2}z^2\right)\psi = \frac{E}{\hbar\omega}\psi,$$

but we also know that $E_n = \left(n + \frac{1}{2}\right)\hbar\omega$, so:

$$\left(-\frac{1}{2}\frac{\partial^2}{\partial z^2} + \frac{1}{2}z^2\right)\psi = \left(n + \frac{1}{2}\right)\psi \qquad (1)$$

with n = 0, 1, 2, ...

ii. Finite difference method


There are several ways to solve the eigenvalue equation with this Hamiltonian. We shall start with a so-called finite difference approach.
In principle there is an infinite number of eigenstates, $\psi(x)$, of H, and these can extend to $x \to \pm\infty$. However, we are usually only interested in a finite number of states. We search for states with rather low energy which are essentially confined to a region of space close to x = 0.
We span a space which we think is appropriate for what we are interested in with a
linear grid from Xmin to Xmax. Outside these boundaries we assume that the wave
function is zero. In principle this means that we have put our harmonic oscillator in a
potential well. At the well boundaries we assume that the potential goes to infinity,
and thus the eigenstates have to go to zero there.
The purpose of discretization is to obtain a problem that can be solved by a finite
procedure. By discretizing we may rewrite derivatives in terms of finite differences,
which eventually enables us to formulate the original problem in terms of a matrix
equation that can be solved by inverse iteration. In this section we will use a very
intuitive approach to find expressions of different accuracy for the finite-difference
operator representing the second derivative. The formulas we derive here to
calculate second order derivatives by means of finite difference will always include
the point at which the derivative is found, since in the discrete regime it seems
reasonable to assume that this point stores valuable information about the function.
Five-point method:
We deduce the five-point method in order to obtain good precision. We approximate the second-order derivative of a function f at a grid point using two neighboring points to the left and two to the right on the grid. We take $f_i = f(x_i)$, $x_i = ih$, $i = 0, \pm 1, \pm 2$, and we look for the constants $c_0, c_{\pm 1}, c_{\pm 2}$ so that:

$$\frac{\partial^2 f}{\partial x^2} \approx c_0 f_i + c_1 f_{i+1} + c_{-1} f_{i-1} + c_2 f_{i+2} + c_{-2} f_{i-2}$$

The constants are determined by Taylor expanding around $x_i$; after a few calculations and solving the resulting system of equations, we find:

$$f''(x) = c_0 f(x) + c_1 f(x+h) + c_{-1} f(x-h) + c_2 f(x+2h) + c_{-2} f(x-2h) + O(h^4)$$

$$f''(x) = \frac{1}{12h^2}\Big(-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)\Big) + O(h^4)$$
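As a quick numerical sanity check (our own illustration, not part of the derivation), the formula can be tested in MATLAB on a function whose second derivative is known, e.g. sin(x):

```matlab
% Check the five-point formula on f(x) = sin(x), where f''(x) = -sin(x).
h  = 0.1;
x0 = 1.3;                                   % arbitrary test point
d2 = (-sin(x0+2*h) + 16*sin(x0+h) - 30*sin(x0) ...
      + 16*sin(x0-h) - sin(x0-2*h)) / (12*h^2);
err = abs(d2 + sin(x0));                    % should scale as O(h^4)
fprintf('five-point error at h = %.2f: %.2e\n', h, err)
```

Halving h should reduce the error by roughly a factor of 16, confirming the $O(h^4)$ behavior.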
Considering that our equation (1) has a factor (−1/2), multiplying each coefficient with this and taking into account the above equations, we have:

$$c_0 = \frac{15}{12h^2}, \quad c_1 = -\frac{8}{12h^2}, \quad c_2 = \frac{1}{24h^2}, \quad c_{-1} = c_1, \quad c_{-2} = c_2,$$

where h is the step size that we are using in our model.

We can now set up the matrix eigenvalue equation: HX=EX


where

$$H = \begin{pmatrix}
c_0 + V(x_1) & c_1 & c_2 & 0 & 0 & 0 & \cdots \\
c_1 & c_0 + V(x_2) & c_1 & c_2 & 0 & 0 & \cdots \\
c_2 & c_1 & c_0 + V(x_3) & c_1 & c_2 & 0 & \cdots \\
0 & c_2 & c_1 & c_0 + V(x_4) & c_1 & c_2 & \cdots \\
0 & 0 & c_2 & c_1 & c_0 + V(x_5) & c_1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix}$$

and

$$X = \begin{pmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ f(x_4) \\ \vdots \end{pmatrix}.$$

The matrix above is banded and it is also symmetric since $c_n = c_{-n}$.
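A minimal MATLAB sketch of how this banded matrix can be assembled (the variable names and the use of diag are our own choices; the grid, potential and coefficients are those defined above):

```matlab
% Assemble the banded Hamiltonian for the five-point discretization
% of the dimensionless harmonic oscillator.
h  = 0.1;
x  = (-7:h:7).';                      % linear grid from Xmin to Xmax
N  = numel(x);
c0 = 15/(12*h^2);                     % stencil coefficients, -1/2 factor included
c1 = -8/(12*h^2);
c2 =  1/(24*h^2);
V  = 0.5*x.^2;                        % harmonic oscillator potential
H  = diag(c0 + V) ...
   + c1*(diag(ones(N-1,1),1) + diag(ones(N-1,1),-1)) ...
   + c2*(diag(ones(N-2,1),2) + diag(ones(N-2,1),-2));
```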

H is a Hermitian operator. We know then that the eigenstates corresponding to different eigenvalues should be orthogonal:

$$\int \psi_i^*(x)\,\psi_j(x)\,dx = \delta_{ij},$$

where the normalization is assumed.

Further, the eigenstates of a Hermitian operator form a complete set. The matrix H we just obtained by discretization of H is a Hermitian matrix (this means that it is self-adjoint: $H = H^\dagger = (H^T)^*$) and its eigenvectors will be orthogonal in a similar manner:

$$X_i^\dagger X_j = \delta_{ij}, \quad \text{when } E_i \neq E_j.$$

Our matrix above actually belongs to the subclass of Hermitian matrices that are real and symmetric. Then the eigenvectors $X_i$ will also be real. If H is an n×n matrix, there will be n eigenvectors. These eigenvectors, like the eigenvectors of the Hamilton operator, form a complete set. The difference is that while the eigenfunctions of the operator H can span any function $\psi(x)$ defined anywhere on the real x-axis, the finite set of eigenvectors of our matrix H can span only functions defined on our grid.

iii. Diagonalization
There are several methods to solve the matrix eigenvalue equation. One method is to diagonalize and find all eigenvalues and (optionally) all eigenvectors. The scheme is then to find the similarity transformation of the matrix H such that $X^{-1}HX = D$, where D is a diagonal matrix. The eigenvalues are then found on the diagonal of D and the eigenvectors are the columns of X. A method that uses the fact that the matrix is banded and symmetric is much faster than a general diagonalizer. Another possibility is to find a few eigenvalues (the lowest, the highest, or those in a specified energy region) and their eigenvectors with an iterative method. Methods of this type are often faster when we want a particular solution and have a sparse matrix. One such method is the (Inverse) Power Iteration method.
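In MATLAB the full diagonalization is a one-line call (a sketch, assuming the matrix H built above):

```matlab
% Diagonalize: eigenvalues on the diagonal of D, eigenvectors as columns of X.
[X, D] = eig(H);
[E, idx] = sort(diag(D));    % sort energies in ascending order
X = X(:, idx);
disp(E(1:4))                 % lowest energies, expected near n + 1/2
```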

Inverse power iteration


For solutions of the Schrödinger equation we are more interested in the smallest eigenvalue than in the largest. For this we use the inverse iteration scheme, which formally corresponds to performing power iteration with $A^{-1}$ (where A is an n×n matrix). Since the computation of the inverse of a matrix is as time-consuming as the full eigenvalue problem, the practical calculation proceeds, however, through a different path. The eigenvalues of $A^{-1}$ are $\lambda_n^{-1}$, if $\lambda_n$ are the eigenvalues of A. Hence:

$$AX_n = \lambda_n X_n \;\Rightarrow\; A^{-1}X_n = \lambda_n^{-1}X_n$$

We will now solve the system of linear equations $AY_2 = Y_1$, where $Y_1$ is our first guess for an eigenvector. In the next step we put the solution $Y_2$ on the right-hand side and solve again, so we have an iterative scheme $AY_{i+1} = Y_i$. To analyze the situation we note that any vector can be expanded in eigenvectors of the matrix. After the first step we have, for instance:

$$Y_2 = A^{-1}Y_1 = A^{-1}\sum_n c_n X_n = \sum_n \lambda_n^{-1} c_n X_n.$$

It is clear that in the iterative procedure the solution $Y_{i+1}$ will converge towards the eigenvector with the largest value of $\lambda_n^{-1}$, i.e. towards the eigenvector with the smallest eigenvalue. At every step in the iteration the current approximation of the inverse of the smallest eigenvalue is given by:

$$\frac{Y_i^\dagger Y_{i+1}}{Y_i^\dagger Y_i} = \frac{Y_i^\dagger A^{-1} Y_i}{Y_i^\dagger Y_i} \longrightarrow \frac{1}{\min_n |\lambda_n|}$$

Also here it is a good idea to normalize in every step. Finally, at every step in the iteration we solve a system of linear equations $AY_{i+1} = Y_i$. The left-hand side matrix is the same every time, but the right-hand side changes.

This is a typical situation where it is an advantage to first perform an LU decomposition, A = LU, for fast solutions in the following iterations.
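A sketch of the resulting scheme (assuming a matrix A and a starting vector of ones; MATLAB's lu returns the factorization P*A = L*U):

```matlab
% Inverse power iteration with a single LU factorization.
[L, U, P] = lu(A);               % factorize once: P*A = L*U
Y = ones(size(A,1), 1);          % arbitrary starting vector
for it = 1:20
    Y = U \ (L \ (P*Y));         % solves A*Ynew = Y via two triangular solves
    Y = Y / norm(Y);             % normalize in every step
end
lambda_min = Y' * (A*Y);         % Rayleigh quotient: smallest eigenvalue
```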

Inverse Power Iteration with Shift


If we shift the matrix by a scalar constant $\sigma$, $A \to A - \sigma I$, the power iterations will converge to the largest (smallest) eigenvalue of the shifted matrix, i.e. to $\max_n |\lambda_n - \sigma|$, or to $1/\min_n |\lambda_n - \sigma|$. In this way it is possible to find more than one eigenvalue. The shift can also be used to improve the convergence, since the rate of convergence depends on $|\lambda_1/\lambda_2|$, where $\lambda_1$ is the largest and $\lambda_2$ the second largest eigenvalue, or vice versa for the inverse power iteration.
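Compared with the unshifted sketch above, the only change is that the shifted matrix is factorized (a sketch, assuming H from the finite-difference section; sigma and E_conv are illustrative names):

```matlab
% Inverse power iteration with shift sigma: converges to the
% eigenvalue of H closest to the shift.
sigma = 1.5;                          % e.g. target the first excited state
N = size(H, 1);
[L, U, P] = lu(H - sigma*eye(N));     % factorize the shifted matrix once
Y = ones(N, 1);
for it = 1:10
    Y = U \ (L \ (P*Y));
    Y = Y / norm(Y);
end
E_conv = Y' * (H*Y);                  % converged eigenvalue of H
```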

Results
To begin with, we take a linear grid x = [-7, 7] and a step size h = 0.1. Our potential is given by the formula $V(x) = \frac{1}{2}x^2$. Below we plot the potential as a function of distance:

i. Standard routine
Then, we use the built-in MATLAB command eig in order to get the eigenvalues and the eigenvectors of our Hamiltonian. We check whether the Hamiltonian matrix is symmetric by calculating the quantity H(i,j) − H(j,i), where i, j are the row and column indices of the matrix respectively. We find that this quantity is zero, hence it is indeed symmetric. We also check that the eigenvectors we have found are correct by inserting them into the eigenvalue equation.
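Both checks are compact in MATLAB (a sketch, assuming H and the sorted X, E from the diagonalization above):

```matlab
% Symmetry check: maximum of |H(i,j) - H(j,i)| should be zero.
asym = max(max(abs(H - H.')));
% Eigenpair check: residual of H*x = E*x for the ground state.
res = norm(H*X(:,1) - E(1)*X(:,1));
fprintf('asymmetry: %.1e, residual: %.1e\n', asym, res)
```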
For the lowest energies we notice that equation (1) holds, since, as we can see in the table below, the energies (eigenvalues) are described by the formula (n + 1/2), where n = 0, 1, 2, 3:
State (n)    Energy
    0        0.5
    1        1.5
    2        2.5
    3        3.4999

We also present the eigenfunctions for the quantum harmonic oscillator for the first
4 states.

We notice that the wave functions for higher n have more humps within the
potential well. This corresponds to a shorter wavelength and therefore by the de
Broglie relation, they may be seen to have a higher momentum and therefore higher
energy.

Below, we present the two highest-energy solutions, with E = 284.1615. We expect the higher-energy solutions to be unphysical and to depend on the approximations, for instance the finite box, the grid and the representation of the derivative.

Doubling our grid to [-14, 14], we plot the highest-energy solutions, with E = 353.57515.

ii. Inverse Power Iteration with shift routine


We also write an "Inverse Power Iteration with shift" routine to obtain the first few eigenvalues and eigenvectors of the matrix (as described above). As an initial guess, we take a constant vector (of ones) with the same length as the dimension of the Hamiltonian matrix. The shift is set to zero in order to get the smallest eigenvalue and its corresponding eigenvector. First, we plot the ground state for a few iterations until we find the correct answer. We conclude that after 3 iterations we get the desired value.

The plot is given below:

Below, we present the lowest odd solution, obtained after 3 iterations with the shift set to 1.5.

In addition, we present the lowest even solution, obtained after five iterations with the shift set to 2.

We conclude that the convergence rate depends on the value of the shift. For instance, if we set the shift in the previous case to 2.5, we get the right answer with one iteration, while we need 5 iterations if we set it to 2.
In order to examine the shift dependence further, we present a table varying the shift from 1.9 to 2.5 for the lowest odd solution and note the number of iterations needed to find the exact solution.
Shift    Iterations
1.9          6
2            5
2.1          4
2.2          3
2.3          2
2.4          2
2.5          1

As expected, the closer the shift is to the true value, the fewer iterations our routine needs in order to return the right eigenvalue and its corresponding eigenvector.

iii. Time duration of our methods


We now want to check how time-consuming the two methods are. Using the built-in commands tic and toc, we read the elapsed time from MATLAB's stopwatch. Below we present our results in a table, concluding that the inverse power iteration with shift method is much faster than the standard routine.

Step size   Matrix elements   Built-in command (sec)   Routine, 1 it. (sec)   2 iterations (sec)   3 iterations (sec)
0.1         141×141           2.183901                 0.016805               0.018968             0.022248
0.01        1401×1401         26.217114                0.774165               0.937791             0.963312

For step size 0.1, our routine needs about 130 times less time to compute the eigenvalue and the corresponding eigenvector, while for step size 0.01 it is about 33 times faster than the standard routine.
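The timing pattern behind the table is roughly the following (a sketch of our use of tic and toc; the measured numbers above are the actual results):

```matlab
% Time the built-in diagonalizer against the iterative routine.
tic;
[X, D] = eig(H);                 % standard routine
t_eig = toc;

tic;                             % inverse power iteration, shift = 0, 3 iterations
[L, U, P] = lu(H);
Y = ones(size(H,1), 1);
for it = 1:3
    Y = U \ (L \ (P*Y));
    Y = Y / norm(Y);
end
t_inv = toc;
fprintf('eig: %.4f s, 3 iterations: %.4f s\n', t_eig, t_inv)
```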

Last but not least, we want to compare our calculation with the analytical solutions of the harmonic oscillator problem. We can see that for the lower states we get the half-integer values (n + 1/2) (since we work in a dimensionless model), as expected from our theory. However, from the fourth eigenvalue on, we start to see a small deviation from the expected value in the third decimal. This can be explained by the fact that the second-order derivative is approximated with a finite difference formula, which introduces a truncation error. Moreover, we force our solution to be zero outside the last point in our grid, since this is necessary to make it possible to normalize the wave function.
Our solutions are normalized and tend to be orthogonal to each other: their inner products are almost, but not exactly, zero. This can also be explained by the approximation used for the second derivative, so a more accurate method could perhaps be used to get better results.
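The near-orthonormality can be quantified directly (a sketch, assuming the sorted eigenvector matrix X from above):

```matlab
% Overlap matrix of the first four numerical eigenvectors;
% ideally this equals the 4x4 identity.
S = X(:,1:4)' * X(:,1:4);
dev = max(max(abs(S - eye(4))));   % deviation from orthonormality
fprintf('max deviation from orthonormality: %.1e\n', dev)
```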

References

1. Arfken, Weber, Harris, Mathematical Methods for Physicists
2. Lennart Edsberg, Introduction to Computation and Modeling for Differential Equations
3. Eva Lindroth, Lecture notes, Computational Physics course (FK8002)
4. Press, Teukolsky, Vetterling, Flannery, Numerical Recipes
