http://biomachina.org/courses/structures/01.html
Complex Numbers: Review
A complex number is one of the
form:
a + bi
where
i = √−1
a: real part
b: imaginary part
Complex Arithmetic
When you add two complex numbers, the real and
imaginary parts add independently:
(a + bi) + (c + di) = (a + c) + (b + d)i
p1(x) = 3x² + 2x + 4
p2(x) = 2x² + 5x + 1
[Figure: the unit circle in the complex plane, with real axis (±1) and imaginary axis (±i)]
Magnitude and Phase
The length of a complex number is its magnitude:
|a + bi| = √(a² + b²)
The angle from the real-number axis is its phase:
∠(a + bi) = tan⁻¹(b / a)
When you multiply two complex numbers, their magnitudes
multiply
|z1z2| = |z1||z2|
And their phases add
∠(z1z2) = ∠(z1) + ∠(z2)
The Complex Plane: Magnitude and Phase
[Figure: complex plane showing z1, z2, and the product z1z2, whose magnitude is the product of the magnitudes and whose phase is the sum of the phases]
Complex Conjugates
If z = a + bi is a complex number, then its complex conjugate is:
z* = a - bi
The complex conjugate z* has the same magnitude but opposite phase
When you add z to z*, the imaginary parts cancel and you get a real number:
(a + bi) + (a - bi) = 2a
When you multiply z by z*, you get the real number |z|²:
(a + bi)(a − bi) = a² − (bi)² = a² + b²
Complex Division
If z1 = a + bi, z2 = c + di, z = z1 / z2,
the division can be accomplished by multiplying the numerator and
denominator by the complex conjugate of the denominator:
z = (a + bi)(c − di) / [(c + di)(c − di)] = (ac + bd)/(c² + d²) + i (bc − ad)/(c² + d²)
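A quick check of these rules with Python's built-in complex type; the values z1 and z2 are our own examples, not from the slides:

```python
import cmath
import math

# Hypothetical example values (not from the slides).
z1 = 3 + 4j
z2 = 1 - 2j

# Magnitudes multiply and phases add under multiplication.
assert math.isclose(abs(z1 * z2), abs(z1) * abs(z2))
assert math.isclose(cmath.phase(z1 * z2), cmath.phase(z1) + cmath.phase(z2))

# z times its conjugate is the real number |z|^2.
assert z1 * z1.conjugate() == abs(z1) ** 2

# Division via the conjugate of the denominator.
manual = (z1 * z2.conjugate()) / (abs(z2) ** 2)
assert cmath.isclose(z1 / z2, manual)
print(z1 / z2)  # (-1+2j)
```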
Euler's Formula
e^(iθ) = cos(θ) + i sin(θ)
so any complex number z = a + bi can be written |z| e^(iθ), where:
a = |z| cos(θ)
b = |z| sin(θ)
[Figure: z in the complex plane, with θ measured from the real axis]
Powers of Complex Numbers
Suppose that we take a complex number
z = |z| e^(i∠(z))
and raise it to some power
z^n = [|z| e^(i∠(z))]^n = |z|^n e^(i n∠(z))
z^n has magnitude |z|^n and phase n∠(z)
Powers of Complex Numbers: Example
What is i^n for various n?
[Figure: powers of i on the unit circle: i^n = 1 for n = 0, 4, 8; i for n = 1, 5; −1 for n = 2, 6; −i for n = 3, 7]
Powers of Complex Numbers: Example
What is (e^(iπ/4))^n for various n?
[Figure: (e^(iπ/4))^n steps around the unit circle in increments of 45°; n = 0 and n = 8 coincide at 1]
Harmonic Functions
What does x(t) = e^(it) look like?
x(t) is a harmonic function (a building block for later analysis)
[Figure: e^(it) plotted over time, tracing the unit circle in the complex plane]
Harmonic Functions as Sinusoids
[Figure: the real part cos(t) and the imaginary part sin(t) of e^(it)]
Questions: Complex Numbers
Convolution
Convolution of an input x(t) with the impulse response h(t) is
written as
x(t) * h(t)
That is to say,
x(t) * h(t) = ∫ x(τ) h(t − τ) dτ
Convolution of Discrete Functions
For a discrete function x[j] and impulse response h[j]:
x[j] * h[j] = Σ_k x[k] h[j − k]
One Way to Think of Convolution
x(t) * h(t) = ∫ x(τ) h(t − τ) dτ
x[j] * h[j] = Σ_k x[k] h[j − k]
Think of it this way:
Shift a copy of h to each position t (or discrete position k)
Multiply by the value at that position x(t) (or discrete sample
x[k])
Add shifted, multiplied copies for all t (or discrete k)
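The recipe above can be sketched directly in code (an illustrative implementation, names our own); the input is the example worked on the following slide:

```python
def convolve(x, h):
    """Discrete convolution by the shift-scale-add recipe:
    shift a copy of h to each position k, scale it by x[k], add them all."""
    out = [0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for j, hj in enumerate(h):
            out[k + j] += xk * hj  # copy of h shifted by k, scaled by x[k]
    return out

print(convolve([1, 4, 3, 1, 2], [1, 2, 3, 4, 5]))
# [1, 6, 14, 23, 34, 39, 25, 13, 10]
```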
Example: Convolution One Way
x[j] = [ 1 4 3 1 2 ]
h[j] = [ 1 2 3 4 5 ]
x[0] h[j − 0] = [ 1 2 3 4 5 __ __ __ __ ]
x[1] h[j − 1] = [ __ 4 8 12 16 20 __ __ __ ]
x[2] h[j − 2] = [ __ __ 3 6 9 12 15 __ __ ]
x[3] h[j − 3] = [ __ __ __ 1 2 3 4 5 __ ]
x[4] h[j − 4] = [ __ __ __ __ 2 4 6 8 10 ]
x[j] * h[j] = [ 1 6 14 23 34 39 25 13 10 ]
Another Way to Look at Convolution
x[j] * h[j] = Σ_k x[k] h[j − k]
Think of it this way:
Flip the function h around zero
Shift a copy to output position j
Point-wise multiply, for each position k, the value of the function x and the flipped, shifted copy of h
Add over all k and write that value at position j
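The flip-and-shift view can be sketched the same way; this hypothetical convolve_flip implements the sum Σ_k x[k] h[j − k] literally and gives the same answer as the first method:

```python
def convolve_flip(x, h):
    """Convolution computed the second way: flip h, shift it to output
    position j, point-wise multiply against x, and sum over k."""
    n = len(x) + len(h) - 1
    out = []
    for j in range(n):
        s = 0
        for k in range(len(x)):
            if 0 <= j - k < len(h):
                s += x[k] * h[j - k]  # h flipped and shifted to position j
        out.append(s)
    return out

print(convolve_flip([1, 4, 3, 1, 2], [1, 2, 3, 4, 5]))
# [1, 6, 14, 23, 34, 39, 25, 13, 10]
```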
Convolution in Higher Dimensions
In one dimension:
x(t) * h(t) = ∫ x(τ) h(t − τ) dτ
In two dimensions:
I(x, y) * h(x, y) = ∫∫ I(x′, y′) h(x − x′, y − y′) dx′ dy′
Or, in discrete form:
I[x, y] * h[x, y] = Σ_j Σ_k I[j, k] h[x − j, y − k]
Example: Two-Dimensional Convolution
1 1 2 2       1 1 1       __ __ __ __ __ __
1 1 2 2   *   1 2 1   =   __ __ __ __ __ __
1 1 2 2       1 1 1       __ __ __ __ __ __
1 1 2 2                   __ __ __ __ __ __
                          __ __ __ __ __ __
                          __ __ __ __ __ __
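A sketch of full 2-D discrete convolution that can fill in blanks like those above; it assumes (our reading of the slide) that the image is a 4×4 block with rows [1 1 2 2], and the function and variable names are our own:

```python
def convolve2d(image, kernel):
    """Full 2-D convolution: out[x][y] = sum_j sum_k I[j][k] * h[x-j][y-k],
    built shift-and-add style like the 1-D case."""
    N, M = len(image), len(image[0])
    P, Q = len(kernel), len(kernel[0])
    out = [[0] * (M + Q - 1) for _ in range(N + P - 1)]
    for j in range(N):
        for k in range(M):
            for p in range(P):
                for q in range(Q):
                    out[j + p][k + q] += image[j][k] * kernel[p][q]
    return out

# Image and kernel from the example above (image assumed to be 4x4).
image = [[1, 1, 2, 2]] * 4
kernel = [[1, 1, 1], [1, 2, 1], [1, 1, 1]]
for row in convolve2d(image, kernel):
    print(row)
```

As a sanity check, the sum of all output values equals (sum of image) × (sum of kernel).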
Derivative:
d/dt (f * g) = f' * g = f * g'
Convolution has the same mathematical properties as multiplication
(This is no coincidence)
Useful Functions
Square: Π_a(t)
Triangle: Λ_a(t)
Gaussian: G(t, σ)
Step: u(t)
Impulse/Delta: δ(t)
Π_a(t) = 1 if |t| ≤ a, 0 otherwise
[Figure: box of height 1 between t = −a and t = a]
What does f(t) * Π_a(t) do to a signal f(t)?
What is Π_a(t) * Π_a(t)?
Triangle
Λ_a(t) = 1 − |t|/a if |t| ≤ a, 0 otherwise
[Figure: triangle of height 1 between t = −a and t = a]
Gaussian
Gaussian (maximum value 1/(√(2π)σ)):
G(t, σ) = (1/(√(2π)σ)) e^(−t²/(2σ²))
Convolving a Gaussian with another Gaussian:
G(t, σ1) * G(t, σ2) = G(t, √(σ1² + σ2²))
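The rule that convolving two Gaussians gives a Gaussian whose widths add in quadrature can be checked numerically; in this sketch the grid spacing, range, and tolerance are our own choices, and we compare the convolution at t = 0 against G(0, √(σ1² + σ2²)):

```python
import math

def gaussian(t, sigma):
    # Normalized Gaussian G(t, sigma) as defined above.
    return math.exp(-t * t / (2 * sigma * sigma)) / (math.sqrt(2 * math.pi) * sigma)

# Sample two Gaussians on a fine symmetric grid.
dt = 0.01
ts = [i * dt for i in range(-1000, 1001)]   # t in [-10, 10]
s1, s2 = 1.0, 1.5
g1 = [gaussian(t, s1) for t in ts]
g2 = [gaussian(t, s2) for t in ts]

# (g1 * g2)(0) = integral of g1(tau) g2(-tau) dtau, as a Riemann sum.
center = sum(a * b for a, b in zip(g1, reversed(g2))) * dt
expected = gaussian(0.0, math.hypot(s1, s2))   # G(0, sqrt(s1^2 + s2^2))
assert abs(center - expected) < 1e-6
```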
Step Function
u(t) = 1 if t ≥ 0, 0 otherwise
[Figure: unit step at t = 0]
Delta (Impulse) Function
δ(t − k) = ∞ if t = k, 0 otherwise (with unit area: ∫ δ(t) dt = 1)
What is a function f(t) convolved with δ(t)?
What is a function f(t) convolved with δ(t − k)?
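In discrete form, these questions can be explored directly: the unit impulse is the identity for convolution, and a shifted impulse shifts the signal (the helper and sample values below are our own):

```python
def convolve(x, h):
    # Plain discrete convolution (shift-scale-add).
    out = [0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for j, hj in enumerate(h):
            out[k + j] += xk * hj
    return out

f = [3, 1, 4, 1, 5]
delta = [1]            # discrete unit impulse: identity for convolution
shifted = [0, 0, 1]    # impulse at position 2: shifts f by 2

assert convolve(f, delta) == f
assert convolve(f, shifted) == [0, 0, 3, 1, 4, 1, 5]
```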
Comb (Shah) Function
A set of equally-spaced impulses: also called an impulse train
comb_h(t) = Σ_k δ(t − hk)
h is the spacing
What is f(t) * combh(t)?
[Figure: impulses at −3h, −2h, −h, 0, h, 2h, 3h]
Convolution Filtering
Convolution is useful for modeling the behavior of filters
It is also useful to apply convolution ourselves to produce a desired effect
When we do it ourselves, we get to choose the function that the input will be convolved with
This function that is convolved with the input is called the convolution kernel
Convolution Filtering: Averaging
Can use a square function (box filter) or Gaussian to locally
average the signal/image
Square (box) function: uniform averaging
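A minimal sketch of box-filter averaging, written as a sliding window rather than an explicit convolution; the edge handling (shrinking the window at the borders) is one choice among several, and the names are our own:

```python
def box_filter(signal, width):
    """Uniform (box) averaging: each output sample is the mean of up to
    `width` input samples centered on it; windows shrink at the edges."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        lo = max(0, i - half)
        hi = min(len(signal), i + half + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

noisy = [1, 9, 1, 9, 1, 9, 1, 9]
print(box_filter(noisy, 3))  # alternating spikes are smoothed toward 5
```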
http://www.physics.gatech.edu/gcuo/UltrafastOptics/PhysicalOptics/
http://www.cs.sfu.ca/~hamarneh/courses/cmpt340_04_1
Frequency Analysis
To use transfer functions, we must first decompose a signal into its component
frequencies
Basic idea: any signal can be written as the sum of phase-shifted sines and
cosines of different frequencies
www.dai.ed.ac.uk/HIPR2/fourier.htm
General Idea of Transforms
Given an orthonormal (orthogonal, unit-length) basis set of vectors {e_k}:
Any vector in the space spanned by this basis set can be represented as a weighted sum of those basis vectors:
v = Σ_k a_k e_k
To get a vector's weight relative to a particular basis vector e_k:
a_k = v · e_k
Thus, the vector can be transformed into the weights a_k
Dot product: u · v = Σ_j u[j] v[j]
Can't we do the same thing with functions?
f · g = ∫ f(x) g*(x) dx
Functions satisfy all of the linear-algebraic requirements of vectors
Transforms with Functions
Just as we transformed vectors, we can also transform functions:
Inverse:   v = Σ_k a_k e_k   ↔   f(t) = Σ_k a_k e_k(t)
Basis Set: Generalized Harmonics
The set of generalized harmonics we discussed earlier form an
orthonormal basis set for functions:
{e^(i2πst)}
Remember:
e^(i2πst) = cos(2πst) + i sin(2πst)
Transform:  a_k = f · e_k = ∫ f(t) e_k*(t) dt = ∫ f(t) e^(−i2πs_k t) dt
Inverse:    f(t) = Σ_k a_k e_k(t) = Σ_k a_k e^(i2πs_k t)
The Fourier Transform
Most tasks need an infinite number of basis functions (frequencies), each with its own weight F(s):
Transform:  a_k = ∫ f(t) e^(−i2πs_k t) dt   →   F(s) = ∫ f(t) e^(−i2πst) dt
Inverse:    f(t) = Σ_k a_k e^(i2πs_k t)   →   f(t) = ∫ F(s) e^(i2πst) ds
The Fourier Transform
To get the weights (the amount of each frequency) F(s):
F(s) = ∫ f(t) e^(−i2πst) dt
F(s) is the Fourier Transform of f(t): F(f(t)) = F(s)
To recover the signal, the inverse transform is:
f(t) = ∫ F(s) e^(i2πst) ds
Magnitude: |F| = √(Re(F)² + Im(F)²)
Phase: ∠(F) = arctan(Im(F) / Re(F))
Discrete frequencies: 1/N, 2/N, 3/N, …, N/N
Discrete Fourier Transform (DFT)
If we treat a discrete signal with N samples as one period of an
infinite periodic signal, then
F[s] = (1/N) Σ_{t=0}^{N−1} f[t] e^(−i2πst/N)
and
f[t] = Σ_{s=0}^{N−1} F[s] e^(i2πst/N)
Note: For a periodic function, the discrete Fourier transform is the same as
the continuous transform
We give up nothing in going from a continuous to a discrete
transform as long as the function is periodic
Normalizing DFTs: Conventions
Basis Function / Transform / Inverse:

1. Basis e^(i2πst/N):
   Transform  F[s] = (1/N) Σ_{t=0}^{N−1} f[t] e^(−i2πst/N)
   Inverse    f[t] = Σ_{s=0}^{N−1} F[s] e^(i2πst/N)

2. Basis (1/√N) e^(i2πst/N):
   Transform  F[s] = (1/√N) Σ_{t=0}^{N−1} f[t] e^(−i2πst/N)
   Inverse    f[t] = (1/√N) Σ_{s=0}^{N−1} F[s] e^(i2πst/N)

3. Basis (1/N) e^(i2πst/N):
   Transform  F[s] = Σ_{t=0}^{N−1} f[t] e^(−i2πst/N)
   Inverse    f[t] = (1/N) Σ_{s=0}^{N−1} F[s] e^(i2πst/N)
Discrete Fourier Transform (DFT)
F[s] = (1/N) Σ_{t=0}^{N−1} f[t] e^(−i2πst/N)
f[t] = Σ_{s=0}^{N−1} F[s] e^(i2πst/N)
Questions:
What would the code for the discrete Fourier transform look
like?
What would its computational complexity be?
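One possible answer, as a sketch: a direct translation of the two sums above, using the 1/N-in-the-forward-transform convention (function names are our own). The two nested summations make the cost O(N²):

```python
import cmath

def dft(f):
    """Direct discrete Fourier transform: F[s] = (1/N) sum_t f[t] e^{-i2pi st/N}."""
    N = len(f)
    return [sum(f[t] * cmath.exp(-2j * cmath.pi * s * t / N) for t in range(N)) / N
            for s in range(N)]

def idft(F):
    """Inverse: f[t] = sum_s F[s] e^{+i2pi st/N}."""
    N = len(F)
    return [sum(F[s] * cmath.exp(2j * cmath.pi * s * t / N) for s in range(N))
            for t in range(N)]

# The round trip recovers the signal (up to floating-point error).
x = [1, 4, 3, 1, 2]
y = idft(dft(x))
assert all(abs(a - b) < 1e-9 for a, b in zip(x, y))
```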
Fast Fourier Transform
Developed by Cooley and Tukey in 1965
If we let
W_N = e^(−i2π/N)
then, for a signal of length N = 2M,
F[s] = (1/2M) Σ_{t=0}^{2M−1} f[t] W_2M^(st)
Fast Fourier Transform
Separating out the M even and M odd terms,
F[s] = (1/2) [ (1/M) Σ_{t=0}^{M−1} f[2t] W_2M^(s(2t)) + (1/M) Σ_{t=0}^{M−1} f[2t+1] W_2M^(s(2t+1)) ]
Notice that
W_2M^(s(2t)) = e^(−i2πs(2t)/2M) = e^(−i2πst/M) = W_M^(st)
and
W_2M^(s(2t+1)) = e^(−i2πs(2t)/2M) e^(−i2πs/2M) = W_M^(st) W_2M^s
So,
F[s] = (1/2) [ (1/M) Σ_{t=0}^{M−1} f[2t] W_M^(st) + (1/M) Σ_{t=0}^{M−1} f[2t+1] W_M^(st) W_2M^s ]
Fast Fourier Transform
F[s] = (1/2) [ (1/M) Σ_{t=0}^{M−1} f[2t] W_M^(st) + (1/M) Σ_{t=0}^{M−1} f[2t+1] W_M^(st) W_2M^s ]
Can be written as
F[s] = (1/2) { F_even(s) + F_odd(s) W_2M^s }
We can use this for the first M terms of the Fourier transform of 2M items, then we can re-use these values to compute the last M terms as follows:
F[s + M] = (1/2) { F_even(s) − F_odd(s) W_2M^s }
Fast Fourier Transform
If M is itself a multiple of 2, do it again!
If N is a power of 2, recursively subdivide until you have one
element, which is its own Fourier Transform
ComplexSignal FFT(ComplexSignal f) {
    if (length(f) == 1) return f;
    M = length(f) / 2;
    W_2M = e^(-I * 2 * Pi / (2 * M));  // A complex value: e^(-i 2 pi / N) with N = 2M.
    even = FFT(EvenTerms(f));
    odd  = FFT(OddTerms(f));
    for (s = 0; s < M; s++) {
        result[s  ] = even[s] + W_2M^s * odd[s];
        result[s+M] = even[s] - W_2M^s * odd[s];
    }
    return result;  // Scale each level by 1/2 if using the 1/N normalization above.
}
Fast Fourier Transform
Computational Complexity:
Each level does O(N) work, and repeatedly halving N gives log₂ N levels, so the FFT costs O(N log N) versus O(N²) for the direct DFT
Remember: The FFT is just a faster algorithm for computing the DFT; it does not produce a different result
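A sketch of the recursive radix-2 FFT matching the derivation above (the 1/2 scaling at each level reproduces the 1/N forward normalization used in these slides); it assumes the length is a power of two, and the names are our own:

```python
import cmath

def fft(f):
    """Recursive radix-2 FFT; len(f) must be a power of two.
    Each level applies the butterfly F[s], F[s+M] with a factor 1/2,
    so the result matches the (1/N)-normalized DFT."""
    N = len(f)
    if N == 1:
        return list(f)
    even = fft(f[0::2])
    odd = fft(f[1::2])
    M = N // 2
    W = cmath.exp(-2j * cmath.pi / N)   # W_2M = e^{-i 2 pi / (2M)}
    result = [0] * N
    for s in range(M):
        result[s] = (even[s] + W ** s * odd[s]) / 2
        result[s + M] = (even[s] - W ** s * odd[s]) / 2
    return result

# Constant signal: all energy at frequency 0.
print(fft([1, 1, 1, 1]))  # ~[1, 0, 0, 0]
```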
Fourier Pairs
Use the Fourier Transform, denoted F, to get the weights for each harmonic component in a signal:
F(s) = F(f(t)) = ∫ f(t) e^(−i2πst) dt
cos(2πωt) ↔ (1/2)[δ(s + ω) + δ(s − ω)]
1 ↔ δ(s)
a ↔ a δ(s)
http://www.cis.rit.edu/htbooks/nmr/chap-5/chap-5.htm
Delta (Impulse) Function
δ(t) ↔ 1
http://www.cis.rit.edu/htbooks/nmr/chap-5/chap-5.htm
Square Pulse
Π_a(t) ↔ 2a sinc(2as) = sin(2πas)/(πs)
F(s) = ∫_{−a}^{a} e^(−i2πst) dt
= (1/(i2πs)) [cos(2πas) + i sin(2πas) − cos(2πas) + i sin(2πas)]
= (1/(i2πs)) [2i sin(2πas)] = (1/(πs)) sin(2πas) = 2a sinc(2as)
Triangle
Λ_a(t) ↔ a sinc²(as)
Example: Λ_{1/2}(t) ↔ (1/2) sinc²(s/2)
[Figure: triangle of height 1 on −1/2 ≤ t ≤ 1/2 and its transform]
Comb (Shah) Function
http://www.cis.rit.edu/htbooks/nmr/chap-5/chap-5.htm
Gaussian
e^(−πt²) ↔ e^(−πs²)
(a Gaussian transforms to another Gaussian)
http://www.med.harvard.edu/JPNM/physics/didactics/improc/intro/fourier3.html
Common Fourier Transform Pairs
Spatial Domain: f(t)                 Frequency Domain: F(s)
Cosine          cos(2πωt)            Shifted Deltas   (1/2)[δ(s + ω) + δ(s − ω)]
Sine            sin(2πωt)            Shifted Deltas   (1/2)[δ(s + ω) − δ(s − ω)]i
Unit Function   1                    Delta Function   δ(s)
Constant        a                    Delta Function   a δ(s)
Delta Function  δ(t)                 Unit Function    1
Comb            comb_h(t)            Comb             (1/h) comb_{1/h}(s)
Square Pulse    Π_a(t)               Sinc Function    2a sinc(2as)
Triangle        Λ_a(t)               Sinc Squared     a sinc²(as)
Gaussian        e^(−πt²)             Gaussian         e^(−πs²)
FT Properties: Addition Theorem
Adding two functions together adds their Fourier Transforms:
F(f + g) = F(f) + F(g)
Convolution Theorem: f * g = F⁻¹(F(f) F(g))
2-D Fourier Transform
F(u, v) = ∫∫ f(x, y) e^(−i2π(ux + vy)) dx dy
Same process for the inverse transform:
f(x, y) = ∫∫ F(u, v) e^(i2π(ux + vy)) du dv
2-D Discrete Fourier Transform
For an N × M image, the basis functions are:
h_{u,v}[x, y] = e^(i2πux/N) e^(i2πvy/M) = e^(i2π(ux/N + vy/M))
F[u, v] = (1/NM) Σ_{x=0}^{N−1} Σ_{y=0}^{M−1} f[x, y] e^(−i2π(ux/N + vy/M))
Same process for the inverse transform:
f[x, y] = Σ_{u=0}^{N−1} Σ_{v=0}^{M−1} F[u, v] e^(i2π(ux/N + vy/M))
2D and 3D Fourier Transforms
The point (u, v) in the frequency domain corresponds to the basis function with frequency √(u² + v²) and orientation tan⁻¹(v / u)
Shift
Scaling
Rayleigh's Theorem
Convolution Theorem
Rotation
Rotating a 2D function rotates its Fourier Transform
If
f2 = rotate(f1, θ) = f1(x cos θ − y sin θ, x sin θ + y cos θ)
F1 = F(f1)
F2 = F(f2)
then F2 is F1 rotated by the same angle:
F2(u, v) = F1(u cos θ − v sin θ, u sin θ + v cos θ)
[Figure: a rotated image and its correspondingly rotated Fourier transform; the example needs more boundary padding]
http://mail.udlap.mx/~oldwall/docencia/IMAGENES/chapter2/image_232_IS548.html
Transforms of Separable Functions
If the 2-D basis function separates into 1-D factors, the transform can be computed as nested 1-D sums:
F[u, v] = (1/M) Σ_{y=0}^{M−1} [ (1/N) Σ_{x=0}^{N−1} f[x, y] e^(−i2πux/N) ] e^(−i2πvy/M)
Likewise for higher dimensions!
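The nested-1-D identity can be checked numerically; this sketch (all names our own) computes a small 2-D DFT both directly and as row transforms followed by column transforms:

```python
import cmath

def dft(f):
    # 1-D DFT with the 1/N forward normalization.
    N = len(f)
    return [sum(f[t] * cmath.exp(-2j * cmath.pi * s * t / N) for t in range(N)) / N
            for s in range(N)]

def dft2_direct(img):
    # Direct 2-D DFT with 1/(NM) normalization.
    N, M = len(img), len(img[0])
    return [[sum(img[x][y] * cmath.exp(-2j * cmath.pi * (u * x / N + v * y / M))
                 for x in range(N) for y in range(M)) / (N * M)
             for v in range(M)] for u in range(N)]

def dft2_separable(img):
    # 1-D transform of every row, then of every column of the result.
    rows = [dft(row) for row in img]
    cols = [dft(list(c)) for c in zip(*rows)]   # transpose, transform columns
    return [list(r) for r in zip(*cols)]        # transpose back

img = [[1, 2, 3, 4], [4, 3, 2, 1], [1, 1, 2, 2]]
A = dft2_direct(img)
B = dft2_separable(img)
assert all(abs(A[u][v] - B[u][v]) < 1e-9 for u in range(3) for v in range(4))
```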
Convolution using FFT
Convolution theorem says
f * g = F⁻¹(F(f) F(g))
Can do either:
Direct Space Convolution
FFT, multiplication, and inverse FFT
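A sketch of the second option: zero-pad to a power of two, transform, multiply point-wise, inverse-transform. The FFT here is unnormalized (no 1/N), so the inverse divides by the length; all names are our own:

```python
import cmath

def fft(f, inverse=False):
    """Radix-2 FFT, unnormalized; the caller divides by N for the inverse."""
    N = len(f)
    if N == 1:
        return list(f)
    sign = 2j if inverse else -2j
    even = fft(f[0::2], inverse)
    odd = fft(f[1::2], inverse)
    W = cmath.exp(sign * cmath.pi / N)  # e^{-+ i 2 pi / N}
    return [even[s % (N // 2)] + W ** s * odd[s % (N // 2)] for s in range(N)]

def conv_fft(x, h):
    """Linear convolution via FFT: pad, multiply spectra, inverse FFT."""
    n = len(x) + len(h) - 1
    size = 1
    while size < n:
        size *= 2                      # next power of two >= n
    X = fft(x + [0] * (size - len(x)))
    H = fft(h + [0] * (size - len(h)))
    Y = [a * b for a, b in zip(X, H)]  # convolution theorem
    y = [v / size for v in fft(Y, inverse=True)]
    return [round(v.real, 6) for v in y[:n]]

print(conv_fft([1, 4, 3, 1, 2], [1, 2, 3, 4, 5]))
# [1.0, 6.0, 14.0, 23.0, 34.0, 39.0, 25.0, 13.0, 10.0]
```

This matches the direct-space result from the earlier example, but in O(N log N) time.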
Correlation is
f(t) * g(t) = ∫ f(τ) g(t + τ) dτ
Correlation in the Frequency Domain
[Figure: convolution vs. correlation of a target image with a probe]
http://www.reindeergraphics.com/tutorial/chap4/fourier11.html
Particle Picking
Use spherical, or rotationally averaged, probes
Goal: maximize correlation between target and probe image
[Figure: target image and probe]
http://www.reindeergraphics.com/tutorial/chap4/fourier11.html
Autocorrelation
Autocorrelation is the correlation of a function with itself:
f(t) * f(−t)
Text and figures for this lecture were adapted in part from the following source, in
agreement with the listed copyright statements:
http://web.engr.oregonstate.edu/~enm/cs519
© 2003 School of Electrical Engineering and Computer Science, Oregon State University, Dearborn Hall, Corvallis, Oregon, 97331
Resources
Textbooks:
Kenneth R. Castleman, Digital Image Processing, Chapters 9 and 10
John C. Russ, The Image Processing Handbook, Chapter 5