Fourier Series Expansions

How do I make one?

You may well be wondering how I got the relative amplitudes of the sine waves to produce our square
wave approximation. Well, that's what we're going to dive into here. There are two ways of doing it:
standard solutions from HLT, and deriving it from Fourier integrals.
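
As a taster, here's a minimal numpy sketch (my own illustration, using the standard result - derived later - that a unit square wave needs only odd sine harmonics, with amplitudes 4/(nπ)):

```python
import numpy as np

# Partial Fourier sum for a unit square wave: odd harmonics only,
# each with amplitude 4/(n*pi).
def square_wave_partial_sum(t, n_terms=10, w0=1.0):
    x = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1                      # odd harmonic number
        x += (4 / np.pi) * np.sin(n * w0 * t) / n
    return x

t = np.linspace(0, 4 * np.pi, 1000)
approx = square_wave_partial_sum(t)
# approx hovers near +/-1, with ripples at each jump (the Gibbs phenomenon)
```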

Odd and Even Functions

An important point to note, before you dive into the maths, is the difference between odd and even functions. For odd functions, the function is inverted on the other side of the y-axis. That is to say:

\[ x(-t) = -x(t) \]

sin is an odd function. So, from this, you should be able to see that an odd function is made up of sin functions only. And any combination of sin functions will produce odd functions.

Similarly, cos is an even function (mirrored about the y-axis, so that x(-t) = x(t)), and so combinations of cos functions produce even functions.

This is all stated rather obliquely in HLT p11. Of course, some functions are neither wholly odd nor even: they are asymmetric. HLT miserably fails to produce any graphical examples of this, but it should (I hope) be fairly obvious that asymmetric functions are made up of both sin and cos functions, as the decomposition below shows.
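
To see why, note that any function splits uniquely into an even part (which the cos terms build) plus an odd part (which the sin terms build):

\[ x(t) = \underbrace{\tfrac{1}{2}\left[x(t) + x(-t)\right]}_{\text{even part}} + \underbrace{\tfrac{1}{2}\left[x(t) - x(-t)\right]}_{\text{odd part}} \]
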
Enough Messing Around

Okay, so you have an arbitrary function which isn't in HLT. Or you're in an exam and you want to prove it from first principles.

What we're hoping for is a series of the form:

\[ x(t) = \frac{a_0}{2} + \sum_{i=1}^{\infty}\left(a_i \cos i\omega_0 t + b_i \sin i\omega_0 t\right) \]

or, writing ω0 = 2π/T explicitly,

\[ x(t) = \frac{a_0}{2} + \sum_{i=1}^{\infty}\left(a_i \cos\frac{2\pi i t}{T} + b_i \sin\frac{2\pi i t}{T}\right) \]

where a_i and b_i are constants we can find. From the previous section's discussion of odd and even functions, you should be able to see that the a_i will be zero for odd functions, and the b_i zero for even functions.

For ease of reference, I'm going to tell you the answer to the question now, and then explain it:

\[ a_i = \frac{2}{T}\int_{0}^{T} x(t)\cos i\omega_0 t \, dt \qquad b_i = \frac{2}{T}\int_{0}^{T} x(t)\sin i\omega_0 t \, dt \]

Why? It has to do with the orthogonality of sin and cos functions. I refuse to talk much about this; suffice it to say that, for integer i and j:

\[ \int_{0}^{T} \cos i\omega_0 t \cos j\omega_0 t \, dt = \int_{0}^{T} \sin i\omega_0 t \sin j\omega_0 t \, dt = \begin{cases} 0 & i \neq j \\ T/2 & i = j \neq 0 \end{cases} \]

\[ \int_{0}^{T} \sin i\omega_0 t \cos j\omega_0 t \, dt = 0 \quad \text{for all integer } i, j \]

Obviously, therefore, when you multiply the series by cos jω0t (or sin jω0t) and integrate over a single period, every constituent term pairs off to zero except the one whose frequency matches. (If you don't believe me, try it, or see my explanation.) You are then left with just one term: the one corresponding to a_i (or b_i if you're doing sines). So, if you just integrate the original function lots of times, each time multiplying it by cos(iω0t), with i going up from 0, you will get the coefficients of the cos terms by:

\[ \int_{0}^{T} x(t)\cos i\omega_0 t \, dt = a_i\,\frac{T}{2} \]

which then yields the equation above for a_i and b_i.

Easy-peasy lemon squeezy. So to speak. You can see these equations in HLT.
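
If you'd rather check a result numerically than grind through the integrals, here's a small Python sketch (the function names and sample count are my own choices, not anything from HLT) that approximates the coefficient integrals with sums over one period:

```python
import numpy as np

def fourier_coefficients(x, T, n_max, n_samples=10000):
    """Approximate a_i = (2/T) * integral over one period of x(t) cos(i w0 t) dt,
    and b_i likewise with sin, using simple Riemann sums."""
    w0 = 2 * np.pi / T
    t = np.linspace(0, T, n_samples, endpoint=False)
    dt = T / n_samples
    xt = x(t)
    a = [2 / T * np.sum(xt * np.cos(i * w0 * t)) * dt for i in range(n_max + 1)]
    b = [2 / T * np.sum(xt * np.sin(i * w0 * t)) * dt for i in range(n_max + 1)]
    return a, b

# Sanity check on a square wave: expect b_i ~ 4/(i*pi) for odd i, ~0 otherwise
a, b = fourier_coefficients(lambda t: np.sign(np.sin(t)), T=2 * np.pi, n_max=5)
```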

Even More Complexity

Alert readers will say, "Hang on - haven't we seen combinations of sin and cos functions before?"

Well, combinations of sin and cos functions were found in 2nd order differential equations last year, and you may remember we got them from complex exponentials:

\[ \cos\theta = \frac{e^{j\theta} + e^{-j\theta}}{2} \qquad \sin\theta = \frac{e^{j\theta} - e^{-j\theta}}{2j} \]

So, if we fit these into our expression for the Fourier series, we get a new one in terms of j, with complex exponentials and complex coefficients c_k. Oh joy.

\[ x(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{jk\omega_0 t} \]

where

\[ c_0 = \frac{a_0}{2}, \qquad c_k = \frac{a_k - jb_k}{2}, \qquad c_{-k} = \frac{a_k + jb_k}{2} \quad (k > 0) \]

Notice that now, instead of starting at zero, our counter starts at minus infinity. So, in summary, you can get a complex Fourier series from a real one quite easily. Or you can get c_k straight away:

\[ c_k = \frac{1}{T}\int_{0}^{T} x(t)\, e^{-jk\omega_0 t}\, dt \]

If you refer to HLT p10, you should be able to see the function X(ω), which looks remarkably similar to our expression for c_k earlier. This will lead us into the next big area: Fourier transforms. But for now: you can transform any periodic function (that you can integrate). Even better: easy functions like simple combinations of sin and cos functions can be transformed into a Fourier series immediately. It shouldn't take too much effort to imagine doing it for linear combinations of complex exponentials either. Wahey!
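
The same numerical trick from before works for the complex coefficients too; this sketch (again, my own names) approximates the c_k integral directly:

```python
import numpy as np

def complex_coefficients(x, T, k_max, n_samples=10000):
    """Approximate c_k = (1/T) * integral over one period of x(t) e^{-j k w0 t} dt."""
    w0 = 2 * np.pi / T
    t = np.linspace(0, T, n_samples, endpoint=False)
    dt = T / n_samples
    xt = x(t)
    ks = np.arange(-k_max, k_max + 1)
    cs = np.array([np.sum(xt * np.exp(-1j * k * w0 * t)) * dt / T for k in ks])
    return ks, cs

# Square wave again: expect c_k = -2j/(k*pi) for odd k, and c_{-k} = conj(c_k)
ks, cs = complex_coefficients(lambda t: np.sign(np.sin(t)), T=2 * np.pi, k_max=5)
```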

Exampling

So, because this is so easy to do, let's have some really simple examples.

The easy way of obtaining the Fourier series of some functions is to reduce them to a sum of sine and cosine terms, then just read off the coefficients. For example, take the function

\[ x(t) = e^{j200t} \]

This is equal to

\[ x(t) = \cos 200t + j\sin 200t \]

This is a simple Fourier series, with ω0 = 200, a_1 = 1, and b_1 = j. Generating the complex coefficients is even easier: c_1 = 1 (from the coefficient of e^{jω0t}).

Where did I get ω0 from? The answer is quite simple: in cases like this, it's the lowest angular frequency present - in this case, 200. In general, however, ω0 = 2π / T, where T is the minimum time between repeats of the function (the period).

To illustrate this, let's take an easy function:

\[ x(t) = \cos 4t + \sin 8t \]

Here ω0 = 4, a_1 = 1, and b_2 = 1. (The function repeats after π/2, giving ω0 = 2π ÷ (π/2) = 4.)

Our last example will demonstrate the integration method of determining coefficients. The function I'm going to transform has period 2, with x(t) = e^{-t} for |t| < 1.

Substituting into our expression for c_k:

\[ c_k = \frac{1}{T}\int_{-1}^{1} e^{-t}\, e^{-jk\omega t}\, dt \]

And at this point, the Fourier analysis is finished! We now have an expression for each of the coefficients of the complex Fourier series, and the rest is just algebraic simplification.

Knowing that ωT = 2π, and T = 2, we get ω = π, and:

\[ c_k = \frac{1}{2}\int_{-1}^{1} e^{-(1+jk\pi)t}\, dt = \frac{e^{1+jk\pi} - e^{-(1+jk\pi)}}{2(1+jk\pi)} \]

If you work out e^{-jkπ} and e^{jkπ} you will notice that:

\[ e^{jk\pi} = e^{-jk\pi} = (-1)^k \]

for integral k. Thus, this equation becomes:

\[ c_k = \frac{(-1)^k\left(e - e^{-1}\right)}{2(1+jk\pi)} = \frac{(-1)^k \sinh 1}{1 + jk\pi} \]

Note that although the c_k terms are complex, the series results in a wholly real function x(t). This is because:

\[ c_{-k} = c_k^{\,*} \]

Terms on each side of 0 are conjugate, and so the complex parts will cancel.
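
Explicitly, pairing the +k and -k terms:

\[ c_k e^{jk\omega t} + c_{-k} e^{-jk\omega t} = c_k e^{jk\omega t} + \left(c_k e^{jk\omega t}\right)^{*} = 2\,\mathrm{Re}\left\{c_k e^{jk\omega t}\right\} \]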

An Example: Automotives in Africa

Due to certain geological conditions, a road in Africa develops an uneven surface depth governed by the
equation:

i) Show that this can be represented by a Fourier series containing only odd harmonic sine terms, and find
an expression for the coefficients.

Since the surface is periodic, and has odd half- and even quarter-wave symmetry, we can conclude that it
can be represented by such a series.

To find the Fourier coefficients, we must perform the integral:

If ω = 2π/L, this yields

where φ = π/4.

Now, the Fourier analysis is done.

Next, we'll talk about non-periodic functions and the Fourier transform.

Fourier Transforms

So, we now have equations that define the amplitude of harmonics in a periodic wave. In fact, this is
called the frequency spectrum of the wave, and tells us all about the frequency content.

Now it's time to move onto aperiodic functions, and in fact to processing arbitrary functions. If we are
sampling a signal, we don't really want to be concerned about what the basic unit of the wave is - we just
want to feed the function in and get a frequency spectrum out. And this is where the functions in HLT
come in.

Towards a Less Periodic Signal

To derive the Fourier transform of an aperiodic function x(t), defined over −τ ≤ t ≤ τ, we will treat it as a periodic function x_P(t) with period T (where T > τ).

If we consider one frequency within the spectrum, ω = kω0 (= 2πk / T), then as T increases, ω0 decreases; but we will let k increase so that the frequency we are examining stays the same.

From the expression for c_k earlier, we have:

\[ c_k = \frac{1}{T}\int_{-\tau}^{\tau} x(t)\, e^{-jk\omega_0 t}\, dt \]

Remember that the integrand is zero outside −τ ≤ t ≤ τ, so we don't have any problems here. This integral is going to have the same value whatever the value of T, so long as T > τ. Therefore, if we increase T to infinity (k going to infinity with it, so that ω still equals kω0), this becomes

\[ c_k T = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt = X(\omega) \]

where we have replaced c_k T by X(ω). This definition of X(ω) is familiar if you look in HLT p10.

The converse is obtained from our familiar Fourier series expression for x(t), writing an expression for x_P(t) in terms of c_k:

\[ x_P(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{jk\omega_0 t} = \sum_{k=-\infty}^{\infty} \frac{X(\omega)}{T}\, e^{j\omega t} \]

where we have replaced c_k by X(ω)/T, with ω = kω0 once more. The difference between successive values of ω in the summation is ω0, so we will instead call it δω.

Substituting T = 2π/ω0 = 2π/δω, and ω = kδω:

\[ x_P(t) = \sum_{k=-\infty}^{\infty} \frac{X(\omega)}{2\pi}\, e^{j\omega t}\, \delta\omega \]

Now, here comes the trick. If we fix ω, as we take T towards infinity, ω0 will go towards zero. Therefore, it is reasonable to assume that δω goes to dω, and the summation becomes an integral:

\[ x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\, e^{j\omega t}\, d\omega \]

Since we now have an exact expression, not a periodic approximation, x_P(t) has become x(t), and this should be familiar from HLT as well.

The upshot of all this is that a frequency spectrum of any signal can be derived from these integrals. This
is quite fantastic, since we can now dispense with all that theory and just use these integrals!

An Example: A Pie By Any Other Name or The Kitchen Functions

True to form, mathematics has half a dozen uses for each Greek letter, and π is no exception. More commonly known as 3.1415 (approximately), or in its capital form (Π) as the product operator (like Σ), there is also a function Π( t/τ ): a rectangular pulse of unit height and width τ, centred on the origin:

\[ \Pi(t/\tau) = \begin{cases} 1, & |t| < \tau/2 \\ 0, & \text{otherwise} \end{cases} \]

(See HLT p12.)

So, let's derive the frequency spectrum for this nice and simple function. Remember that:

\[ X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt \]

In this case, x(t) is only non-zero in the interval [−τ/2, τ/2], and so these become our limits of integration, and the integral becomes

\[ X(\omega) = \int_{-\tau/2}^{\tau/2} e^{-j\omega t}\, dt \]

which we can now integrate to

\[ X(\omega) = \left[\frac{e^{-j\omega t}}{-j\omega}\right]_{-\tau/2}^{\tau/2} = \frac{e^{j\omega\tau/2} - e^{-j\omega\tau/2}}{j\omega} = \frac{2\sin(\omega\tau/2)}{\omega} = \tau\,\mathrm{sinc}\!\left(\frac{\omega\tau}{2\pi}\right) \]

This has therefore given us a function of the form (sin x) / x, which is written in terms of the sinc function in HLT.

Note that sinc x = sin(πx) / (πx).
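
If you want to convince yourself numerically, here's a brute-force sketch (the names, truncation limit and sample count are my own choices) that evaluates the transform integral as a sum and compares it with the sinc result:

```python
import numpy as np

def fourier_transform_numeric(x, ws, t_lim=20.0, n_samples=200001):
    """Brute-force X(w) = integral of x(t) e^{-jwt} dt, truncated to [-t_lim, t_lim]."""
    t = np.linspace(-t_lim, t_lim, n_samples)
    dt = t[1] - t[0]
    xt = x(t)
    return np.array([np.sum(xt * np.exp(-1j * w * t)) * dt for w in ws])

tau = 2.0
rect = lambda t: np.where(np.abs(t) < tau / 2, 1.0, 0.0)
ws = np.linspace(-10, 10, 101)
X = fourier_transform_numeric(rect, ws)
X_exact = tau * np.sinc(ws * tau / (2 * np.pi))  # np.sinc(x) = sin(pi*x)/(pi*x)
# np.abs(X - X_exact).max() should be small
```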

Next, we'll look at the properties of these transforms.


Fourier Transform Properties

The Dreaded Laplace Transformation

If you look at the definition of a Laplace transform:

\[ X(s) = \int_{0}^{\infty} x(t)\, e^{-st}\, dt \]

and the Fourier transform:

\[ X(\omega) = \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt \]

it may occur to you that the two look remarkably similar. In fact, if you put s = jω, they are identical, apart from limits. This means that all the Laplace properties suddenly hold for Fourier transforms as well, with little change! This includes the cool way of doing differentiation:

\[ \mathcal{L}\left\{\frac{dx}{dt}\right\} = sX(s) \qquad\qquad \mathcal{F}\left\{\frac{dx}{dt}\right\} = j\omega X(\omega) \]

The difference between the two has a large amount to do with the limits of the integrals. Laplace
transforms often depend on the initial value of the function; Fourier transforms are independent of the
initial value. In fact, the above expression for the Laplace differential only holds when the initial value is
zero, while the Fourier one always holds. Also, the transforms are only the same if the function is the
same both sides of the y-axis (so the unit step function is different). However, we can still do lots of the
same sort of things, including convolution, time-shifting and so on without a lot of difference.
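
To see where the jω comes from in the Fourier case, integrate by parts (assuming x(t) → 0 as t → ±∞, so the boundary term vanishes):

\[ \int_{-\infty}^{\infty} \frac{dx}{dt}\, e^{-j\omega t}\, dt = \Big[\, x(t)\, e^{-j\omega t} \Big]_{-\infty}^{\infty} + j\omega \int_{-\infty}^{\infty} x(t)\, e^{-j\omega t}\, dt = j\omega X(\omega) \]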

Discrete-Time Systems

Sampling

When we were looking at the frequency-shifting and modulation properties, I promised you that I'd tell
you what the example was for.

Well, it all has to do with discrete-time sampling. Computers and digital systems typically don't work in
continuous time; they operate on a clock that causes them to look at a real-world input only once every so
often. This precludes ordinary integration techniques, and so everything has to be expressed as sums and
we need to do some tricks to emulate what we see in the real continuous-time world.

Another problem with this approach is that our maths is all geared up to work with continuous signals...
and not very well with discrete time. Thus, we need some way of modelling the sampling discipline in
continuous time. The way this is done is to multiply the function by a periodic sampling function, that
models the input behaviour of the sampler.

The obvious sampling function is an infinite series of Dirac impulse functions - and you should now be able
to see where that example comes in useful. It demonstrates that by sampling, we have introduced
additional frequencies into the function, at intervals of the sampling frequency. Therefore, if we want to
obtain the original frequency distribution, we must use a low-pass filter that removes the sampling noise.
This filter is placed at half the sampling frequency. It should be obvious, therefore, that the sampling rate must be high enough that this filter does not remove any of the original signal:

\[ \omega_s > 2\omega_0 \]

where ω0 is the maximum frequency in the signal, and ωs is the sampling frequency. The frequency 2ω0 is called the Nyquist rate of the signal, and the above statement is the Nyquist sampling theorem.
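
Here's a quick sketch of what goes wrong when the theorem is violated (the frequencies are arbitrary choices of mine): sampled at 10 Hz, a 13 Hz sine produces exactly the same samples as a 3 Hz sine.

```python
import numpy as np

fs = 10.0          # sampling frequency, Hz
f_ok = 3.0         # below fs/2, so recoverable
f_alias = 13.0     # above fs/2: aliases onto 13 - 10 = 3 Hz
n = np.arange(32)  # sample indices

x_ok = np.sin(2 * np.pi * f_ok * n / fs)
x_alias = np.sin(2 * np.pi * f_alias * n / fs)
print(np.allclose(x_ok, x_alias))  # True: the two are indistinguishable once sampled
```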

In practice, of course, filters are not ideal. In fact, filters with excellent frequency response often have
appalling phase response, and vice-versa. In order to avoid phase distortion, therefore, filters which have
worse frequency response are used - so the sampling rate must be higher to avoid roll-off of the audible
frequency spectrum. This is termed oversampling.

However, we can't really do a lot with sampling unless we have a discrete, computable algorithm for
Fourier transformation. This is where the discrete Fourier transform (DFT) comes in, and its more efficient
counterpart, the fast Fourier transform (FFT).

The Discrete Fourier Transform


In order to create an approximation of the Fourier transform for discrete, sampled data, we must make
two major steps:

• Move from limits of +/- infinity to limits of the amount of sampled data;
• Move from a continuous integral to a discrete summation.

The first step we can overcome by assuming the function is periodic outside the range which we are
considering. Therefore, ω0 = 2π/T. Since a periodic function only has harmonics that are multiples of ω0,
we can also say that ω = mω0 = 2πm/T, where m is integral.

The second is a fairly simple thing to achieve: we can make the approximation that the spacing between time samples, Ts, is small, and so convert the integral into a summation. Now, t = nTs, and the period T is NTs, where N is the number of samples:

\[ X(\omega) \approx \sum_{n=0}^{N-1} x(nT_s)\, e^{-j\omega nT_s}\, T_s \]

We can now simplify - substituting ω = 2πm/(NTs) so that ωt = 2πmn/N, and dropping the Ts scale factor - and say that X is a discrete function of m, and x a discrete function of n:

\[ X[m] = \sum_{n=0}^{N-1} x[n]\, e^{-j2\pi mn/N} \]

This equation gives the discrete Fourier transform of the group of samples x[0..N-1]. Nyquist's sampling theorem dictates that there are N distinct values in the discrete transform: [0..N-1].
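
The sum above is directly computable. Here's a naive O(N²) sketch (my own code), checked against numpy's FFT, which evaluates exactly the same sum far more efficiently:

```python
import numpy as np

def dft(x):
    """Direct evaluation of X[m] = sum_n x[n] e^{-j 2 pi m n / N}."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    m = n.reshape(-1, 1)                   # column vector of output indices
    return np.exp(-2j * np.pi * m * n / N) @ x

x = np.random.rand(64)
assert np.allclose(dft(x), np.fft.fft(x))  # numpy's FFT computes the same sum
```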

A similar derivation leads to the inverse DFT:

\[ x[n] = \frac{1}{N}\sum_{m=0}^{N-1} X[m]\, e^{\,j2\pi mn/N} \]

which is very similar, apart from the sign of the exponent and the 1/N factor.
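
And the matching sketch for the inverse, confirming that a forward-then-inverse round trip recovers the original samples:

```python
import numpy as np

def idft(X):
    """Inverse DFT: x[n] = (1/N) sum_m X[m] e^{+j 2 pi m n / N}."""
    X = np.asarray(X, dtype=complex)
    N = len(X)
    n = np.arange(N)
    m = n.reshape(-1, 1)
    return np.exp(2j * np.pi * m * n / N) @ X / N

x = np.random.rand(16)
assert np.allclose(idft(np.fft.fft(x)), x)  # round trip recovers the samples
```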
