Complex Numbers
Complex numbers¹ give us a way of saying that the square of a number can be negative. To that end, we introduce the imaginary number i such that

$$ i^2 = -1. \tag{1} $$

A complex number then has the form z = a + ib, where a and b are the usual real numbers we use every day.
Every complex number also has a complex conjugate. This number is made by going through a complex number and replacing i with -i, and is represented in physics by z* = a - ib. One important property of this conjugation is

$$ z^* z = (a - ib)(a + ib) = a^2 + b^2. \tag{2} $$

Since a and b are the usual real numbers, z* z is not only real, but positive.
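This property is easy to check numerically. A minimal sketch using Python's built-in complex type (the sample values a = 3, b = 4 are arbitrary):

```python
# z* z = a^2 + b^2 for z = a + ib, using Python's built-in complex numbers
z = 3 + 4j                 # a = 3, b = 4
zz = z.conjugate() * z
print(zz)                  # (25+0j): real and positive, equal to 3**2 + 4**2
```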
Exponentiation
A very important operation for complex numbers and many physical applications is the exponential function². We define it with an infinite sum

$$ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1} + \frac{x^2}{2 \cdot 1} + \frac{x^3}{3 \cdot 2 \cdot 1} + \cdots . \tag{3} $$

² http://simple.wikipedia.org/wiki/Exponential_function
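The partial sums of Eq. (3) converge quickly. A small sketch comparing the first twenty terms against the library exponential (the cutoff of 20 terms and the point x = 2 are arbitrary):

```python
import math

x = 2.0
# Partial sum of e^x = sum over n of x^n / n!  (20 terms is plenty here)
partial = sum(x**n / math.factorial(n) for n in range(20))
print(partial, math.exp(x))   # both ~ 7.389056
```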
It has the important property that if you take its derivative you get back the same function³:

$$ \frac{d}{dx} e^x = e^x. \tag{4} $$
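Equation (4) can be checked with a finite difference (a sketch; the sample point 1.3 and step size are arbitrary):

```python
import math

# Central finite difference of e^x at an arbitrary point
x, h = 1.3, 1e-6
deriv = (math.exp(x + h) - math.exp(x - h)) / (2 * h)
print(deriv, math.exp(x))   # agree to ~1e-9
```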
Now, this function has the particular property that if instead of x you use an imaginary number, you obtain

$$ e^{i\theta} = \cos\theta + i \sin\theta. \tag{5} $$

As an exercise, compute $\frac{d}{dt} e^{it}$. Remember to change i to -i when conjugating, so z* = r e^{-iθ}, and also remember that e^x e^y = e^{x+y}.
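Equation (5) is easy to verify numerically with Python's `cmath` module (the angle 0.7 is arbitrary):

```python
import cmath
import math

theta = 0.7                      # an arbitrary angle
lhs = cmath.exp(1j * theta)
rhs = math.cos(theta) + 1j * math.sin(theta)
print(lhs, rhs)                  # equal up to rounding

# Conjugation flips the sign of the angle: (e^{i theta})* = e^{-i theta}
assert abs(lhs.conjugate() - cmath.exp(-1j * theta)) < 1e-12
```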
Linear algebra
Linear algebra is at the heart of quantum mechanics. It deals with
vectors and matrices in many dimensions.
Vectors
A column vector, denoted |v⟩, is defined as just an ordered set of numbers in a column:

$$ |v\rangle = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{pmatrix}. \tag{6} $$
The number N is called the dimension. We can define the corresponding row vector ⟨v|, and it looks like

$$ \langle v| = \begin{pmatrix} z_1^* & z_2^* & \cdots & z_N^* \end{pmatrix}. \tag{7} $$

Notice how in making the column vector into a row vector we took the complex conjugate of each entry. This is very important, since now we will define the inner product of two vectors |v⟩ and |w⟩ as ⟨v|w⟩ (just replace z with w to obtain the entries of |w⟩), where for the above we get

$$ \langle v|w\rangle = z_1^* w_1 + z_2^* w_2 + \cdots + z_N^* w_N. \tag{8} $$
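This inner product, conjugation included, is what `numpy.vdot` computes (a sketch with arbitrary two-dimensional vectors, assuming numpy is available):

```python
import numpy as np

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])

# np.vdot conjugates its first argument, matching <v|w> = sum_i z_i* w_i
inner = np.vdot(v, w)
by_hand = np.conj(v[0]) * w[0] + np.conj(v[1]) * w[1]
print(inner, by_hand)   # both (-1-2j)
```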
Matrices
Now, what makes linear algebra linear is that we act on vectors with matrices that have the property that if M is a matrix, |v⟩ and |w⟩ are vectors, and a, b are just numbers, then

$$ M\big( a|v\rangle + b|w\rangle \big) = a\, M|v\rangle + b\, M|w\rangle. \tag{9} $$

Such a matrix⁴ is an array of numbers,

$$ M = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1N} \\ m_{21} & m_{22} & \cdots & m_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ m_{N1} & m_{N2} & \cdots & m_{NN} \end{pmatrix}. $$

⁴ Strictly speaking, this is a square matrix; in general, the numbers of rows and columns need not be the same, but most of our use will be with square matrices.
Matrices can be multiplied, but the procedure for this is similar to that of the inner product and not how one might first guess. To develop it, consider two ways of thinking of a matrix: as a row vector of column vectors, or as a column vector of row vectors. The first column and first row of M are

$$ |m_1\rangle = \begin{pmatrix} m_{11} \\ m_{21} \\ \vdots \\ m_{N1} \end{pmatrix}, \qquad \langle m_1| = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1N} \end{pmatrix}, $$

and so on, so that

$$ M = \begin{pmatrix} |m_1\rangle & |m_2\rangle & \cdots & |m_N\rangle \end{pmatrix} = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix}. \tag{10} $$

(Here the rows ⟨m_i| are taken as they stand, without complex conjugation.) The product of two matrices M and N⁵ is then the matrix of all the inner products of the rows of M with the columns of N:

$$ MN = \begin{pmatrix} \langle m_1| \\ \vdots \\ \langle m_N| \end{pmatrix} \begin{pmatrix} |n_1\rangle & \cdots & |n_N\rangle \end{pmatrix} = \begin{pmatrix} \langle m_1|n_1\rangle & \langle m_1|n_2\rangle & \cdots & \langle m_1|n_N\rangle \\ \langle m_2|n_1\rangle & \langle m_2|n_2\rangle & \cdots & \langle m_2|n_N\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle m_N|n_1\rangle & \langle m_N|n_2\rangle & \cdots & \langle m_N|n_N\rangle \end{pmatrix}. \tag{11} $$

In the same way, a matrix acts on a column vector as

$$ M|v\rangle = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix} |v\rangle = \begin{pmatrix} \langle m_1|v\rangle \\ \langle m_2|v\rangle \\ \vdots \\ \langle m_N|v\rangle \end{pmatrix}, \tag{12} $$

and on a row vector as

$$ \langle v|M = \langle v| \begin{pmatrix} |m_1\rangle & |m_2\rangle & \cdots & |m_N\rangle \end{pmatrix} = \begin{pmatrix} \langle v|m_1\rangle & \langle v|m_2\rangle & \cdots & \langle v|m_N\rangle \end{pmatrix}. \tag{13} $$

⁵ http://simple.wikipedia.org/wiki/Matrix_(mathematics)
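The row-times-column rule can be checked directly (a sketch with arbitrary 2×2 matrices, assuming numpy):

```python
import numpy as np

M = np.array([[1, 2], [3, 4]])
N = np.array([[5, 6], [7, 8]])

# Entry (i, j) of MN is the inner product of row i of M with column j of N.
prod = np.array([[M[i, :] @ N[:, j] for j in range(2)] for i in range(2)])
print(prod)
assert (prod == M @ N).all()
```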
Another important operation is the adjoint (dagger), which conjugates every entry and transposes:

$$ M^\dagger = \begin{pmatrix} m_{11}^* & m_{21}^* & \cdots & m_{N1}^* \\ m_{12}^* & m_{22}^* & \cdots & m_{N2}^* \\ \vdots & \vdots & \ddots & \vdots \\ m_{1N}^* & m_{2N}^* & \cdots & m_{NN}^* \end{pmatrix}. \tag{14} $$

Notice how every entry is complex conjugated and flipped across the diagonal⁶. A matrix is hermitian if M† = M. Note that a hermitian matrix doesn't necessarily have all real entries, just that the entries above the diagonal are conjugate to those below the diagonal, as defined by m_ij = m_ji*.
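In numpy the adjoint is just a conjugate plus a transpose (a sketch with arbitrary entries):

```python
import numpy as np

M = np.array([[1 + 1j, 2j], [3, 4 - 1j]])
M_dag = np.conj(M).T          # complex conjugate, flipped across the diagonal
print(M_dag)

# A hermitian matrix equals its own adjoint (its diagonal entries must be real).
H = np.array([[2, 1 - 1j], [1 + 1j, 5]])
assert np.allclose(np.conj(H).T, H)
```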
There is also a special matrix called the identity matrix, which has 1s on the diagonal and 0s everywhere else:

$$ I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}. \tag{15} $$

This has the property that MI = M and IM = M for any matrix M (check this as an exercise).
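The exercise can also be done numerically (a sketch with an arbitrary 2×2 matrix, assuming numpy):

```python
import numpy as np

M = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)          # the 2x2 identity matrix
print(M @ I)           # M, unchanged
print(I @ M)           # M, unchanged
```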
Eigenvalues and eigenvectors

An eigenvector |E⟩ of a matrix M with eigenvalue E satisfies

$$ M|E\rangle = E|E\rangle. \tag{16} $$

For a hermitian matrix the eigenvalues are real, and the eigenvectors form a basis: any vector can be expanded in them. In three dimensions, for example,

$$ |v\rangle = v_1 |E_1\rangle + v_2 |E_2\rangle + v_3 |E_3\rangle, \tag{17} $$

and in general

$$ |v\rangle = \sum_{i=1}^{N} v_i |E_i\rangle. \tag{18} $$

Moreover, the eigenvectors are orthogonal. To see this, act with M both to the right and (using the reality of the eigenvalues) to the left:

$$ \langle E_i| M |E_j\rangle = E_i \langle E_i|E_j\rangle = E_j \langle E_i|E_j\rangle, \tag{19} $$

so that

$$ (E_i - E_j)\,\langle E_i|E_j\rangle = 0, \tag{20} $$

and whenever E_i ≠ E_j the inner product ⟨E_i|E_j⟩ must vanish⁷.

⁷ Sometimes E_i = E_j for i ≠ j, and this is called a "degeneracy". A modified version of this property holds in that case.
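This orthogonality can be seen numerically with `numpy.linalg.eigh`, which is built for hermitian matrices (the sample matrix is arbitrary):

```python
import numpy as np

H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])   # a hermitian matrix
E, V = np.linalg.eigh(H)    # eigenvalues (ascending), eigenvectors as columns

print(E)                    # real eigenvalues: [1. 4.]

# M|E_i> = E_i|E_i>, and eigenvectors with different E_i are orthogonal.
assert np.allclose(H @ V[:, 0], E[0] * V[:, 0])
assert abs(np.vdot(V[:, 0], V[:, 1])) < 1e-12
```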
Fourier transforms

Functions can be treated like vectors, and a particularly useful basis is the set of plane waves

$$ f(x) = e^{ikx}, \tag{21} $$

one for each value of k. A function g(x) can be written as a sum (an integral) over these basis functions,

$$ g(x) = \int_{-\infty}^{\infty} \tilde{g}(k)\, e^{ikx}\, \frac{dk}{2\pi}. \tag{23} $$

This decomposition into basis functions is called a Fourier decomposition or an inverse Fourier transform. The reason it is an inverse Fourier transform is because we can actually find the function $\tilde{g}(k)$ (called the Fourier transform) by taking

$$ \tilde{g}(k) = \int_{-\infty}^{\infty} g(x)\, e^{-ikx}\, dx. \tag{24} $$
To check this is the case, we can substitute Eq. (24) into Eq. (23), changing the dummy variable x to y inside the transform, to see

$$ g(x) = \int \left[ \int g(y)\, e^{ik(x-y)}\, dy \right] \frac{dk}{2\pi}. \tag{25} $$

Now the k-integral can be done, and it gives a new function we call the δ-function:

$$ \delta(x - y) = \int e^{ik(x-y)}\, \frac{dk}{2\pi}. \tag{26} $$

This function has the property that if we do the k-integral in Eq. (25) we see

$$ g(x) = \int g(y)\, \delta(x - y)\, dy. \tag{27} $$

Equation (27) is really what defines the δ-function, but you can think of δ(x − y) as a function that is zero everywhere except when x − y = 0, where it is infinitely large¹⁰.
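One way to make the δ-function concrete is to cut the k-integral in Eq. (26) off at |k| < K, which gives a "nascent" δ-function that sharpens as K grows (a numerical sketch assuming numpy; K, the grid, and the test point are arbitrary):

```python
import numpy as np

# Cutting the k-integral off at |k| < K gives the nascent delta function
# sin(K u) / (pi u), which approaches delta(u) as K -> infinity.
K = 50.0
y = np.linspace(-10, 10, 20001)
dy = y[1] - y[0]
g = np.exp(-y**2)                                # an arbitrary test function

x = 0.5
u = x - y
delta_K = (K / np.pi) * np.sinc(K * u / np.pi)   # = sin(K u)/(pi u), safe at u = 0

approx = np.sum(g * delta_K) * dy                # integral of g(y) delta_K(x - y) dy
print(approx, np.exp(-x**2))                     # both ~ 0.7788
```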
There are many more relations between functions and linear algebra; in fact, the two are intimately connected. Throughout the course, we will find different basis functions by solving eigenvalue equations for functions, and much of the machinery here will carry over to those problems as well. For functions, the analogue of the inner product of Eq. (8) is

$$ \langle f|g\rangle = \int f(x)^*\, g(x)\, dx. $$

¹⁰ wikipedia.org/wiki/Dirac_delta_function
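This inner product can be approximated on a grid, and as with vectors, ⟨f|f⟩ comes out real and positive (a sketch with an arbitrary complex function, assuming numpy):

```python
import numpy as np

x = np.linspace(-5, 5, 10001)
dx = x[1] - x[0]
f = np.exp(1j * x) * np.exp(-x**2 / 2)    # an arbitrary complex function

# <f|f> = integral of f(x)* f(x) dx: real and positive, here sqrt(pi)
norm = (np.sum(np.conj(f) * f) * dx).real
print(norm)
```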
Gaussian integrals
An important integral for physics is the Gaussian integral. Written out in full glory, it is

$$ \int_{-\infty}^{\infty} e^{-a x^2 + b x}\, dx = \sqrt{\frac{\pi}{a}}\, e^{b^2/(4a)}. \tag{31} $$

With this identity we can actually see that the Fourier transform of a Gaussian is a Gaussian. Take the function g(x) = e^{-ax²}; then we can write

$$ \tilde{g}(k) = \int_{-\infty}^{\infty} e^{-a x^2}\, e^{-ikx}\, dx. \tag{32} $$
Thus, if we let b = -ik, the right-hand side of Eq. (32) can be evaluated to be

$$ \tilde{g}(k) = \sqrt{\frac{\pi}{a}}\, e^{-k^2/(4a)}. \tag{33} $$

As an exercise, perform the inverse Fourier transform to recover g(x) = e^{-ax²}.
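The result in Eq. (33) can be confirmed numerically (a sketch assuming numpy; the values of a and k are arbitrary):

```python
import numpy as np

a, k = 1.5, 0.8                          # arbitrary parameters (a > 0)
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]

# Numerical Fourier transform of g(x) = exp(-a x^2)
gk = np.sum(np.exp(-a * x**2) * np.exp(-1j * k * x)) * dx
exact = np.sqrt(np.pi / a) * np.exp(-k**2 / (4 * a))
print(gk.real, exact)                    # agree; the imaginary part is ~ 0
```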