
MATHEMATICS 257

Mathematics for Electronics Engineering


Technology III

XXX XXX XXX


Contents

Part 1. Essential Calculus

Chapter 1. Review of MATH 147
Chapter 2. Partial Fractions
Chapter 3. Improper Integrals

Part 2. Ordinary Differential Equations (ODEs)

Chapter 4. Introduction
4.1. Solutions and Initial Value Problems (IVPs)
Chapter 5. First Order Linear ODEs
5.1. An Integral Formula
5.2. An Introduction to Undetermined Coefficients
5.3. Modelling with First Order Linear ODEs
Chapter 6. Second Order ODEs with Constant Coefficients
6.1. The Characteristic Equation and the AHE
6.2. Undetermined Coefficients Revisited
6.3. Modelling with Second Order ODEs
6.4. Variation of Parameters

Part 3. Fourier Analysis

Chapter 7. Fourier Series
7.1. Infinite Series, Convergence and Divergence
7.2. Periodic Signals and the Exponential Form
7.3. Alternate Forms and Symmetry
7.4. Properties of Fourier Series
7.5. Spectra and Parseval's Theorem
7.6. Applications of Fourier Series
Chapter 8. The Continuous Fourier Transform (CFT)
8.1. Aperiodic Signals
8.2. Operational Properties
8.3. Amplitude Modulation
Chapter 9. The Discrete Fourier Transform

Part 4. Laplace Transforms

9.1. The Laplace Transform and the Inverse Laplace Transform
9.2. Operational Properties and Tables of Laplace Transforms
9.3. Solving IVPs with the Laplace Transform
9.4. Discontinuous Forcing Functions
9.5. Transfer Functions and PID Controllers
9.6. Convolution Theorem
Appendix A. Complex Numbers
Part 1

Essential Calculus
CHAPTER 1

Review of MATH 147

You should be able to differentiate and integrate. DUH. Rules for derivatives
and examples. Rules for Integrals and examples. Make sure to include
various forms of the FTC. Product rule, quotient rule and chain rule for
differentiation. IBS and IBP for integration. Show how these techniques
come from chain and product rule respectively.

CHAPTER 2

Partial Fractions

You need to do distinct linear, repeated linear and quadratic plus a mixture
of the cases.

CHAPTER 3

Improper Integrals

Do both type I and type II. Make sure to quote definitions of the CFT and
the Laplace transform as motivation for studying these integrals.

Part 2

Ordinary Differential Equations


(ODEs)
CHAPTER 4

Introduction

Definition 4.1 (ODE, PDE). An ordinary differential equation (ODE) is an equation that involves the derivatives of one or more dependent variables with respect to a single independent variable. If the equation contains derivatives with respect to two or more independent variables, then it is called a partial differential equation (PDE).

The most general ODE can be written in the form

(4.1) F(y^(n)(x), y^(n-1)(x), . . . , y''(x), y'(x), y(x), x) = 0,

where F is a known function. In this case, x is the independent variable and y is the dependent variable. One usually assumes that equation (4.1) can be solved for y^(n) and written in the form

(4.2) y^(n)(x) = G(y^(n-1)(x), . . . , y''(x), y'(x), y(x), x)

for a known function G.
Example 4.2. The equation

(4.3) d²θ/dt² + (g/l) sin(θ) = f(t)

is an ODE. The independent variable is t and the dependent variable is θ. The function f represents an external force. This equation provides a model of the oscillations of a pendulum.

The equation

(4.4) ∂²u/∂t² − c² ∂²u/∂x² = f(x, t)

is a PDE. The independent variables are x and t. The dependent variable is u. The variable u represents the transverse vibrations (waves) traveling on a flexible wire. Once again, f represents an external force.

The pair of equations

(4.5) dx/dt = αx − βxy,
      dy/dt = δxy − γy

comprises a 2 × 2 system of ODEs. The independent variable is t and the dependent variables are x and y. This system models the numbers of two species, a predator y and a prey x, in a closed ecological system.

Given a differential equation, an important objective is to find all the functions that satisfy the equation. However, the generality of equations (4.1) and (4.2) makes this impossible in most circumstances. In such cases, we have to use numerical techniques to estimate the functions that satisfy equations (4.1) and (4.2).

However, there do exist solution techniques that work for special cases
of (4.1) and (4.2). Selection of the correct solution technique is critical.
The selection process depends on our ability to classify the ODE that we
are trying to solve. We begin by defining the order of a differential equation.
Definition 4.3 (order). The order of a differential equation is the order of
the highest order derivative in the equation.
Example 4.4. The orders of equations (4.1) and (4.2) are n. The orders of equations (4.3) and (4.4) are two. The orders of equations (4.5) are one, while the order of the PDE

∂²u/∂t² = α² ∂⁴u/∂x⁴

is four.
Definition 4.5 (linear, homogeneous). An nth order ODE is said to be linear if it can be written in the form

(4.6) a_n(x)y^(n)(x) + a_{n-1}(x)y^(n-1)(x) + · · · + a_1(x)y'(x) + a_0(x)y(x) = f(x).

Otherwise, the equation is said to be nonlinear.

When the function f(x) ≡ 0 the equation is said to be homogeneous. Otherwise the equation is said to be nonhomogeneous.

It is important to notice that the coefficient functions ak (x) and the non-
homogeneity f (x) depend on the independent variable only. In particular,
if ak = ak (x, y) for any k or f = f (x, y), then ODE (4.6) is nonlinear.
Example 4.6. The equation (4.3) is nonlinear and nonhomogeneous. The Van der Pol equation

(4.7) ẍ + µ(x² − 1)ẋ + x = 0

is nonlinear and homogeneous, as is the Schrödinger equation

(4.8) jψ_t = (1/2)ψ_xx + V(x)ψ + K|ψ|²ψ.

The equations (4.5) are nonlinear and homogeneous as well.

The equation (4.4) is linear and nonhomogeneous. The equation

(4.9) mẍ + βẋ + kx = 0

is linear and homogeneous. Equation (4.9) represents the unforced vibrations of a spring-mass-damper system.

4.1. Solutions and Initial Value Problems (IVPs)

Definition 4.7 (solution). A function ψ(x) is said to be a solution of ODE (4.6) if

a_n(x)ψ^(n)(x) + a_{n-1}(x)ψ^(n-1)(x) + · · · + a_1(x)ψ'(x) + a_0(x)ψ(x) ≡ f(x).

That is, the function ψ(x) reduces equation (4.6) to an identity. We will call ψ(x) the general solution of (4.6) if it contains all possible solutions of (4.6).

Recall the difference between identities and equations. The trigonometric expression cos²(θ) + sin²(θ) ≡ 1 is an identity in the sense that the equation is true for all values of θ. On the other hand, x² − 9 = 0 is an equation. The equation is true only when x = ±3.
Example 4.8. Show that y(x) = C1 cos(2x) + C2 sin(2x) is a solution of

(4.10) y''(x) + 4y(x) = 0

for any constants C1, C2 ∈ R. Since

y'(x) = −2C1 sin(2x) + 2C2 cos(2x),
y''(x) = −4C1 cos(2x) − 4C2 sin(2x),

it follows that y'' + 4y ≡ 0 and y(x) is a solution for arbitrary constants C1 and C2. It is important to note that there are an infinite number of solutions. For instance,

y(x) = 4 sin(2x),
y(x) = 2 cos(2x) − 3 sin(2x),
y(x) = 7 cos(2x),
y(x) ≡ 0,
...

are all solutions. Notice that the trivial solution ψ(x) ≡ 0 is also a solution. This is true as long as (4.6) is homogeneous.
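As a quick numerical sanity check, the claim of Example 4.8 can be tested by approximating y'' with a central difference and confirming that the residual y'' + 4y vanishes. The Python sketch below is illustrative only; the constants 3 and −2 are arbitrary choices of C1 and C2.

```python
import math

def second_derivative(f, x, h=1e-4):
    # Central-difference approximation of f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def y(x, c1=3.0, c2=-2.0):
    # An arbitrary member of the family C1*cos(2x) + C2*sin(2x).
    return c1 * math.cos(2.0 * x) + c2 * math.sin(2.0 * x)

# The residual y'' + 4y should vanish (up to discretization error)
# at every point, for every choice of C1 and C2.
residuals = [abs(second_derivative(y, x) + 4.0 * y(x)) for x in [-1.0, 0.0, 0.5, 2.0]]
max_residual = max(residuals)
print(max_residual)  # close to zero
```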
Example 4.9. Show that i(t) = exp(−t) is not a solution of the ODE di/dt + i = t. Since di/dt + i = 0 ≢ t, i(t) = exp(−t) is not a solution of di/dt + i = t. However, i(t) = C1 exp(−t) + t − 1 is a solution of di/dt + i = t for any C1 ∈ R.

Definition 4.10 (particular solution). We will call any solution ψp(x) of (4.6) a particular solution of (4.6) if it is free of arbitrary constants.

Example 4.11. ip(t) = exp(−t) + t − 1 and ip(t) = t − 1 are particular solutions of the ODE di/dt + i = t, while yp(x) = 7 cos(2x), yp(x) = 2 cos(2x) − 3 sin(2x) and yp(x) ≡ 0 are particular solutions of (4.10).

The solution structure of linear equations is very specific. This structure is important as it determines solution methodology and leads to a complete theory for linear ODEs. We shall make use of the following definitions to describe this structure.
Definition 4.12 (associated homogeneous equation). The associated ho-
mogeneous equation (AHE) of equation (4.6) is given by
(4.11) an (x)y (n) (x) + an (x)y (n) (x) + · · · + a1 (x)y 0 (x) + a0 (x)y(x) = 0.
We will use yh (x) to denote the general solution of (4.11).
Example 4.13. The AHE of the ODE

y'''(x) + sin(x)y''(x) + 2y'(x) + x cos(x) = 0

is

y'''(x) + sin(x)y''(x) + 2y'(x) = 0,

while the AHE of y''(x) + 2y(x) = 0 is simply y''(x) + 2y(x) = 0. This last result is due to the fact that the ODE is homogeneous.
Theorem 4.14. The general solution of equation (4.6) can be written in the
form
(4.12) y(x) = yh (x) + yp (x),
where yh (x) is the general solution of (4.11) and yp (x) is any particular
solution of (4.6).
Theorem 4.15. The general solution of (4.11) can always be written in the form

(4.13) yh(x) = C1 y1(x) + C2 y2(x) + · · · + Cn yn(x),

where yk, k = 1, 2, . . . , n are distinct solutions of (4.11) and Ck ∈ R, k = 1, 2, . . . , n are arbitrary constants.

It follows that the general solution of (4.6) can be written in the form
(4.14) y(x) = C1 y1 (x) + C2 y2 (x) + · · · + Cn yn (x) + yp (x).
Example 4.16. The general solution of di/dt + i = t is i(t) = C1 exp(−t) + t − 1 and the general solution of y''(x) + 4y(x) = 0 is y(x) = C1 cos(2x) + C2 sin(2x). The general solution of the third order equation y'''(x) + y'(x) = sin(x) is

y(x) = C1 cos(x) + C2 sin(x) + C3 − (x/2) sin(x).

ODEs provide models of physical systems, and one of their primary uses is the prediction of the evolution of the physical system under examination. With respect to prediction, the solution (4.14) is of limited value. This is due to the arbitrary constants Ck ∈ R, k = 1, 2, . . . , n, which make the prediction of a specific value for (4.14) impossible. To address this problem we introduce n side conditions, one for each arbitrary constant. These side conditions are known as initial conditions.
Definition 4.17 (initial conditions). A set of initial conditions (ICs) for equation (4.6) is a set of conditions of the form

(4.15) y(0) = y0, y'(0) = y1, . . . , y^(n-1)(0) = y_{n-1},

where yk ∈ R for k = 0, 1, . . . , n − 1. We note that there is one IC for each arbitrary constant. The ICs allow us to compute a particular solution yp(x). Equivalently, the ICs determine specific values for the constants Ck ∈ R for k = 1, 2, . . . , n.
Definition 4.18. An ODE along with its ICs is called an initial value problem (IVP). The IVP corresponding to (4.6) takes the form

(4.16) a_n(x)y^(n)(x) + a_{n-1}(x)y^(n-1)(x) + · · · + a_1(x)y'(x) + a_0(x)y(x) = f(x),
       y(0) = y0, y'(0) = y1, . . . , y^(n-1)(0) = y_{n-1}.
Example 4.19. Solve the IVP

i'(t) + i(t) = t,
i(0) = 2.

Since the general solution is i(t) = C1 exp(−t) + t − 1 and

i(0) = C1 − 1 = 2,

the solution of the IVP is i(t) = 3 exp(−t) + t − 1.

The IVP

y'''(x) + y'(x) = sin(x),
y(0) = 1, y'(0) = −1, y''(0) = 0

leads to the 3 × 3 linear system

y(0) = C1 + C3 = 1,
y'(0) = C2 = −1,
y''(0) = −C1 − 1 = 0.

It follows that the solution of the IVP is

y(x) = −cos(x) − sin(x) + 2 − (x/2) sin(x).
CHAPTER 5

First Order Linear ODEs

5.1. An Integral Formula

In this section, we derive an integral formula for the solution of the first order linear ODE

(5.1) a1(x)y'(x) + a0(x)y(x) = f(x).

Under the assumption a1(x) ≢ 0, equation (5.1) is usually written in the standard form

(5.2) y'(x) + P(x)y(x) = Q(x).

Definition 5.1. Let φ(x) = exp(∫ P(x) dx); then φ(x) is called an integrating factor of equation (5.2). We note that the fundamental theorem of calculus ensures that φ'(x) = P(x)φ(x).

If we multiply both sides of (5.2) by φ we obtain

φy' + Pφy = φQ,

or equivalently

d/dx (φy) = φQ.

If we integrate both sides of the last equation we find that the general solution is given by

(5.3) y(x) = (1/φ(x)) ( ∫ φ(x)Q(x) dx + C ),

where C is a constant of integration. The solution (5.3) can be written in the form (4.12). In particular, yh(x) = C/φ(x) and yp(x) = (1/φ(x)) ∫ φ(x)Q(x) dx.

Example 5.2. Consider the IVP

(5.4) ẋ + 2tx = t,
      x(0) = −1.

Since P(t) = 2t, the integrating factor is φ(t) = exp(∫ 2t dt) = e^{t²}. The formula provides

x(t) = e^{−t²} ( ∫ t e^{t²} dt + C )

and the substitution u = t² yields the general solution

x(t) = 1/2 + Ce^{−t²}.

From the IC, x(0) = 1/2 + C = −1, so C = −3/2 and the solution of the IVP is x(t) = 1/2 − (3/2)e^{−t²}. We point out that the general solution of the AHE is xh(t) = Ce^{−t²} while xp(t) = 1/2 is a particular solution.
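The integral formula (5.3) can also be evaluated numerically when the integrals are inconvenient by hand. The following Python sketch (illustrative; the step counts are arbitrary choices) applies (5.3) to the IVP of Example 5.2 using trapezoidal quadrature and cross-checks the value against a Runge-Kutta integration of the same IVP.

```python
import math

# Integral formula (5.3) for x' + 2 t x = t, x(0) = -1:
# x(t) = (∫_0^t phi(s) * s ds + C) / phi(t), C = x(0) * phi(0) = -1.

def phi(t):
    return math.exp(t * t)                     # integrating factor exp(∫ 2s ds)

def x_formula(t, n=4000):
    # Trapezoidal approximation of the integral in (5.3).
    h = t / n
    integral = sum(0.5 * h * (phi(k * h) * (k * h) + phi((k + 1) * h) * ((k + 1) * h))
                   for k in range(n))
    return (integral - 1.0) / phi(t)

def x_rk4(t_end, n=4000):
    # Classical Runge-Kutta for x' = t - 2 t x, x(0) = -1.
    f = lambda t, x: t - 2.0 * t * x
    h, t, x = t_end / n, 0.0, -1.0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

print(abs(x_formula(1.0) - x_rk4(1.0)))        # should be tiny
```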

The formula (5.3) can lead to some tedious integrals to evaluate analytically. For instance, if P(x) = 1 and Q(x) = sin(x), then we need to evaluate the integral

I = ∫ e^x sin(x) dx,

which requires two integrations by parts.

5.2. An Introduction to Undetermined Coefficients

As we have seen, the formula (5.3) can lead to integrals that are tedious to evaluate. Since yh(x) = C exp(−∫ P(x) dx), we can find the general solution of (5.2) as long as we have a method to determine a particular solution yp(x). Undetermined coefficients is such a method. The method works for constant coefficient ODEs and specific forms of the right hand side Q(x). We refer the reader to Table 1.

Q(x)                                                 Guess for yp(x)
α sin(ωx), β cos(ωx), α sin(ωx) + β cos(ωx)          A sin(ωx) + B cos(ωx)
α exp(τx)                                            A exp(τx)
α_n x^n + α_{n-1} x^{n-1} + · · · + α_1 x + α_0      A_n x^n + A_{n-1} x^{n-1} + · · · + A_1 x + A_0

Table 1. Initial Guesses for Undetermined Coefficients

Example 5.3. Solve the IVP

y'(x) + y(x) = sin(x),
y(0) = 1.

Given P(x) = 1, yh(x) = Ce^{−x}. Since Q(x) = sin(x), according to Table 1, we seek a particular solution of the form yp(x) = A sin(x) + B cos(x). The coefficients A and B need to be determined.

Now

yp'(x) + yp(x) = A cos(x) − B sin(x) + A sin(x) + B cos(x)
               = (A − B) sin(x) + (A + B) cos(x) = sin(x)

and, matching the amplitudes of sin(x) and cos(x), we obtain the 2 × 2 linear system

A − B = 1,
A + B = 0.

It follows that yp(x) = (1/2) sin(x) − (1/2) cos(x) and the general solution of the ODE is y(x) = Ce^{−x} + (1/2) sin(x) − (1/2) cos(x). The IC gives y(0) = C − 1/2 = 1, so C = 3/2 and the solution of the IVP is y(x) = (3/2)e^{−x} + (1/2) sin(x) − (1/2) cos(x).

Example 5.4. Find the general solution of the ODE ẋ(t) + 2x(t) = t². Since ∫ 2 dt = 2t, xh(t) = C exp(−2t). Now, according to Table 1, xp(t) = At² + Bt + C.

It follows that

2At + B + 2At² + 2Bt + 2C = 2At² + (2A + 2B)t + (B + 2C) = t²,

which leads to the 3 × 3 system

2A = 1,
2A + 2B = 0,
B + 2C = 0.

The particular solution is xp(t) = (1/2)t² − (1/2)t + 1/4 and the general solution is x(t) = Ce^{−2t} + (1/2)t² − (1/2)t + 1/4.
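The triangular structure of the 3 × 3 system above makes it easy to solve by back-substitution. The short Python sketch below (illustrative only) does so and confirms that the resulting xp(t) leaves zero residual in the ODE.

```python
# Back-substitute the triangular system from matching coefficients:
#   2A = 1,  2A + 2B = 0,  B + 2C = 0.
A = 1.0 / 2.0
B = -2.0 * A / 2.0
C = -B / 2.0

def xp(t):
    # Candidate particular solution A t^2 + B t + C.
    return A * t * t + B * t + C

def residual(t):
    # x_p'(t) + 2 x_p(t) - t^2, with x_p'(t) = 2At + B computed exactly.
    return (2.0 * A * t + B) + 2.0 * xp(t) - t * t

print(A, B, C)                                                # 0.5 -0.5 0.25
print(max(abs(residual(t)) for t in [-2.0, 0.0, 1.0, 3.0]))   # 0 up to round-off
```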

5.3. Modelling with First Order Linear ODEs

Recall that the current-voltage relationships for analog components are vR(t) = Ri(t), vL(t) = L di/dt and vC(t) = (1/C) ∫ i(t) dt, and consider a series RC-circuit. From Kirchhoff's voltage law, the current i(t) flowing in this circuit obeys the equation Ri(t) + (1/C) ∫ i(t) dt = v(t). By the Fundamental Theorem of Calculus, the last equation is equivalent to the ODE Ri'(t) + (1/C)i(t) = v'(t). See (5.5) for the corresponding IVP.

[Figure: series RC-circuit driven by a source v]

(5.5) Ri'(t) + (1/C)i(t) = v'(t),
      i(0) = i0.

Example 5.5. Consider IVP (5.5) and suppose that R = 1000 Ω and C = 1 F, so that 1/(RC) = 1/1000. Let v(t) = cos(2t) and i0 = 0. In standard form, the ODE is i'(t) + (1/1000)i(t) = −(1/500) sin(2t). We have P(t) = 1/1000 and Q(t) = −(1/500) sin(2t). The general solution of the AHE is ih(t) = Ce^{−t/1000}.

Now, using undetermined coefficients, the particular solution takes the form ip(t) = A sin(2t) + B cos(2t) and

ip'(t) + (1/1000) ip(t) = 2A cos(2t) − 2B sin(2t) + (A/1000) sin(2t) + (B/1000) cos(2t)
  = ((2000A + B)/1000) cos(2t) + ((A − 2000B)/1000) sin(2t)
  = −(1/500) sin(2t).

Equating like amplitudes, we obtain the 2 × 2 linear system

2000A + B = 0,
A − 2000B = −2,

which leads to the particular solution ip(t) = −(2/4000001) sin(2t) + (4000/4000001) cos(2t). The general solution is i(t) = Ce^{−t/1000} − (2/4000001) sin(2t) + (4000/4000001) cos(2t) and, with i(0) = 0, the solution of the IVP is i(t) = −(4000/4000001)e^{−t/1000} − (2/4000001) sin(2t) + (4000/4000001) cos(2t).

The part of the solution that tends to zero as t → ∞ is known as the transient of the system modelled by the corresponding ODE. In a physically realistic system, the general solution of the AHE, ih(t) = Ce^{−t/1000}, is transient. The part of the solution that remains after the transient has vanished is known as the steady state. The particular solution represents the steady state of the system. In this case, the steady state is ip(t) = −(2/4000001) sin(2t) + (4000/4000001) cos(2t).

In a similar fashion, we find that the current flowing in an RL-circuit follows the ODE Li'(t) + Ri(t) = v(t). The corresponding IVP is given in (5.6).

[Figure: series RL-circuit driven by a source v]

(5.6) Li'(t) + Ri(t) = v(t),
      i(0) = i0.

Now consider a mass m that is dropped from a height h at time t = 0. The force due to gravity that acts on the mass is given by Fg = −mg, while the force due to air resistance is given by Fa = −αv, where α is a constant of proportionality. Recall that acceleration is the time rate of change of velocity. That is, a(t) = v'(t). Newton's second law yields the ODE mv'(t) + αv(t) = −mg. Since the mass is initially at rest, the corresponding IVP is (5.7).

[Figure: mass m released from x(0) = h, acted on by gravity Fg = −mg and air resistance Fa = −αv(t)]

(5.7) mv'(t) + αv(t) = −mg,
      v(0) = 0.

Example 5.6. Solve the IVP (5.7). In standard form, the ODE is v'(t) + (α/m)v(t) = −g, so P(t) = α/m and Q(t) = −g. The general solution of the AHE is vh(t) = C exp(−(α/m)t). Undetermined coefficients yields vp(t) = −(m/α)g, so the general solution is v(t) = C exp(−(α/m)t) − (m/α)g.

We apply the IC v(0) = 0 to obtain C = (m/α)g, so the solution of the IVP is v(t) = (m/α)g exp(−(α/m)t) − (m/α)g. Since x'(t) = v(t), we can find the position x(t) by integrating the solution of the IVP with respect to time t. We leave this as an exercise for the reader. We note that lim_{t→∞} v(t) = −(m/α)g, which is the so-called terminal velocity of the mass m. The terminal velocity is the steady state of the ODE v'(t) + (α/m)v(t) = −g.
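The approach of v(t) to the terminal velocity can also be observed numerically. In the Python sketch below the values of m, α and g are arbitrary illustrative choices, not values from the text.

```python
# Forward-Euler simulation of m v' + alpha v = -m g, showing v(t) -> -m g / alpha.
m, alpha, g = 2.0, 0.5, 9.81          # illustrative parameter values
v_terminal = -m * g / alpha

v, t, h = 0.0, 0.0, 1e-3
while t < 60.0:                       # many time constants m/alpha = 4 s
    v += h * (-g - (alpha / m) * v)   # v' = -g - (alpha/m) v
    t += h

print(v, v_terminal)                  # v has essentially reached the terminal value
```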
CHAPTER 6

Second Order ODEs with Constant Coefficients

We now turn our attention to ODEs of the form

(6.1) ay''(x) + by'(x) + cy(x) = f(x),

where a, b and c ∈ R. The associated IVP is

(6.2) ay''(x) + by'(x) + cy(x) = f(x),
      y(0) = y0, y'(0) = y1.

The general solution of (6.1) is a special case of the general solution of (4.6).
In particular, let y1 (x) and y2 (x) be two distinct solutions of the AHE
(6.3) ay 00 (x) + by 0 (x) + cy(x) = 0,
and suppose that yp (x) is a particular solution of (6.1). The general solution
of (6.1) takes the form
(6.4) y(x) = C1 y1 (x) + C2 y2 (x) + yp (x),
where C1 , C2 ∈ R are arbitrary.

As before, the ICs in (6.2) are used to determine the constants C1 and C2. In the current case, the ICs always lead to the 2 × 2 linear system of equations

(6.5) C1 y1(0) + C2 y2(0) = y0 − yp(0),
      C1 y1'(0) + C2 y2'(0) = y1 − yp'(0).

From Cramer's rule, the constants C1 and C2 are given by

C1 = [ (y0 − yp(0)) y2'(0) − (y1 − yp'(0)) y2(0) ] / W(y1, y2),
C2 = [ (y1 − yp'(0)) y1(0) − (y0 − yp(0)) y1'(0) ] / W(y1, y2),

and system (6.5) will have a unique solution (C1, C2) if and only if the Wronskian determinant is not zero. That is, for ICs imposed at x = x0,

(6.6) W(y1, y2) = y1(x0) y2'(x0) − y1'(x0) y2(x0) ≠ 0.

Consequently, (6.2) has a unique solution if and only if (6.6) holds for x0 = 0.
Example 6.1. The general solution of the ODE y''(x) + y(x) = 3 is y(x) = C1 sin(x) + C2 cos(x) + 3 and, for arbitrary x0, the Wronskian of y1(x) and y2(x) is

W(y1, y2) = sin(x0)(−sin(x0)) − cos(x0) cos(x0) = −sin²(x0) − cos²(x0) = −1 ≠ 0.

It follows that the IVP

y''(x) + y(x) = 3,
y(x0) = y0, y'(x0) = y1

has a unique solution for any x0 ∈ R.

6.1. The Characteristic Equation and the AHE

Consider the AHE (6.3) and recall that, for the ODE y'(x) + P(x)y(x) = Q(x), yh(x) = C/φ(x) = C exp(−∫ P(x) dx). Since yh(x) = Ce^{−λx} when P(x) = λ, we seek solutions of (6.3) in the form y(x) = Ce^{λx}, where, in general, λ ∈ C.

We assume C ≠ 0 and substitute y(x) = Ce^{λx} into (6.3) to obtain

C(aλ² + bλ + c)e^{λx} = 0.

Since e^{λx} ≢ 0, we demand that

(6.7) aλ² + bλ + c = 0.

Equation (6.7) is known as the characteristic equation (CE) of the ODE (6.3)
and y(x) = Ceλx is a solution of (6.3) as long as λ is a root of (6.7). The
CE is a quadratic equation and, depending on the sign of the discriminant
∆ = b2 − 4ac, we consider three distinct cases for the roots of the CE.

6.1.1. Distinct Real Roots. If ∆ > 0, then the CE has two distinct real roots λ1 ≠ λ2. It follows that y1(x) = e^{λ1 x} and y2(x) = e^{λ2 x} are distinct solutions of the AHE. Furthermore, the general solution of (6.3) is of the form yh(x) = C1 e^{λ1 x} + C2 e^{λ2 x}.

In this case, the Wronskian of the AHE solutions is

(6.8) W(y1, y2) = e^{λ1 x0} · λ2 e^{λ2 x0} − λ1 e^{λ1 x0} · e^{λ2 x0} = (λ2 − λ1) e^{(λ1 + λ2) x0} ≠ 0

and we conclude that a unique solution of (6.2) exists for any λ1 ≠ λ2 and x0 ∈ R.
Example 6.2. Find the general solution of the homogeneous ODE

y''(x) + 5y'(x) + 6y(x) = 0.

The characteristic equation of the ODE is λ² + 5λ + 6 = (λ + 2)(λ + 3) = 0. The CE has the distinct real roots λ1,2 = −2, −3, so the general solution is

y(x) = C1 e^{−2x} + C2 e^{−3x}.

6.1.2. Repeated Real Roots. Now assume that ∆ = 0. In this case there is a single real root of the CE, λ = λ1 = λ2. Consequently the AHE has only one distinct solution y1(x) = e^{λx}. The method of reduction of order can be used to show that a second distinct solution is given by y2(x) = xe^{λx}. The general solution of (6.3) is y(x) = C1 e^{λx} + C2 xe^{λx}.

The Wronskian of the AHE solutions is

(6.9) W(y1, y2) = e^{λx0}(e^{λx0} + λx0 e^{λx0}) − λe^{λx0} · x0 e^{λx0} = e^{2λx0} ≠ 0,

so (6.2) has a unique solution for any λ and any x0 ∈ R.
Example 6.3. Solve the IVP

y''(x) + 4y'(x) + 4y(x) = 0,
y(0) = 1, y'(0) = −1.

The CE is λ² + 4λ + 4 = (λ + 2)² = 0, so the general solution is

y(x) = C1 e^{−2x} + C2 xe^{−2x}.

Since y'(x) = −2C1 e^{−2x} − 2C2 xe^{−2x} + C2 e^{−2x}, y(0) = C1 = 1 and y'(0) = −2C1 + C2 = −1, so C2 = 1. The solution of the IVP is

y(x) = e^{−2x} + xe^{−2x}.

6.1.3. Complex Roots. When ∆ < 0 the roots of the CE are complex
and λ1,2 = α ± βj. It can be shown that distinct solutions of the AHE are
given by y1 (x) = eαx sin(βx) and y2 (x) = eαx cos(βx). It follows that the
general solution of (6.3) is y(x) = C1 eαx sin(βx) + C2 eαx cos(βx).

The Wronskian of the AHE solutions, computed with the complex pair e^{λx} and e^{λ* x}, is

(6.10) W = e^{λx0} · λ* e^{λ* x0} − λ e^{λx0} · e^{λ* x0} = (λ* − λ) e^{(λ + λ*) x0} = −2βj e^{2αx0} ≠ 0,

where λ* is the complex conjugate of λ. It follows that (6.2) has a unique solution for any λ and any x0 ∈ R.
Example 6.4. Find the general solution of the ODE y''(x) + 4y'(x) + 13y(x) = 0. We use the quadratic formula and find that λ1,2 = −2 ± 3j, so the general solution is

y(x) = e^{−2x}(C1 sin(3x) + C2 cos(3x)).
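The three cases can be collected into one small routine: compute the discriminant, take a complex square root, and label the case. A Python sketch follows; the function name and case labels are our own, not from the text.

```python
import cmath

def characteristic_roots(a, b, c):
    # Roots of a*lambda^2 + b*lambda + c = 0, plus the case label used
    # in the text: distinct real / repeated real / complex.
    disc = b * b - 4.0 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2.0 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2.0 * a)
    if disc > 0:
        case = "distinct real"
    elif disc == 0:
        case = "repeated real"
    else:
        case = "complex"
    return r1, r2, case

print(characteristic_roots(1.0, 5.0, 6.0))    # roots -2 and -3 (Example 6.2)
print(characteristic_roots(1.0, 4.0, 4.0))    # repeated root -2 (Example 6.3)
print(characteristic_roots(1.0, 4.0, 13.0))   # roots -2 +/- 3j (Example 6.4)
```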


6.2. Undetermined Coefficients Revisited

We now examine how undetermined coefficients can be used to compute the particular solution yp(x) for the ODE (6.1).
Example 6.5. Solve the IVP

y''(x) + 3y'(x) + 2y(x) = 2e^{−3x},
y(0) = 0, y'(0) = 2.

The CE is λ² + 3λ + 2 = (λ + 1)(λ + 2) = 0, so the general solution of the AHE is

yh(x) = C1 e^{−x} + C2 e^{−2x}.

According to Table 1, we seek a particular solution of the form yp(x) = Ae^{−3x}. Since

yp''(x) + 3yp'(x) + 2yp(x) = 9Ae^{−3x} − 9Ae^{−3x} + 2Ae^{−3x} = 2Ae^{−3x} = 2e^{−3x},

it follows that A = 1 and the general solution of the ODE is

y(x) = C1 e^{−x} + C2 e^{−2x} + e^{−3x}.

The ICs lead to the 2 × 2 linear system

y(0) = C1 + C2 + 1 = 0,
y'(0) = −C1 − 2C2 − 3 = 2,

which implies C1 = 3 and C2 = −4. The solution of the IVP is

y(x) = 3e^{−x} − 4e^{−2x} + e^{−3x}.
Example 6.6. Consider the IVP

y''(x) + 3y'(x) + 2y(x) = 2e^{−3x},
y(0) = y0, y'(0) = y1,

and find, if possible, y0, y1 so that y(x) = e^{−3x}. As before, we require A = 1. It follows that the general solution is

y(x) = C1 e^{−x} + C2 e^{−2x} + e^{−3x}

and the ICs yield

C1 + C2 + 1 = y0,
−C1 − 2C2 − 3 = y1,

which gives C2 = −y0 − y1 − 2 and C1 = y0 − C2 − 1 = 2y0 + y1 + 1.

The solution of the IVP is

y(x) = (2y0 + y1 + 1)e^{−x} + (−y0 − y1 − 2)e^{−2x} + e^{−3x}

and y(x) = e^{−3x} requires

2y0 + y1 + 1 = 0,
y0 + y1 + 2 = 0.

It follows that y0 = 1 and y1 = −3, and therefore y(x) = e^{−3x}. Indeed, y(x) = e^{−3x} satisfies y(0) = 1 and y'(0) = −3 directly.


Example 6.7. Solve the IVP

y''(x) + 4y'(x) + 13y(x) = sin(x) + cos(x),
y(0) = 0, y'(0) = 0.

We have already seen that the general solution of the AHE is yh(x) = e^{−2x}(C1 sin(3x) + C2 cos(3x)). According to Table 1 the particular solution is of the form yp(x) = A sin(x) + B cos(x). It follows that

yp''(x) + 4yp'(x) + 13yp(x) = −A sin(x) − B cos(x) + 4A cos(x) − 4B sin(x) + 13A sin(x) + 13B cos(x)
  = (12A − 4B) sin(x) + (4A + 12B) cos(x) = sin(x) + cos(x).

We obtain the 2 × 2 linear system of equations

12A − 4B = 1,
4A + 12B = 1,

which implies A = 1/10 and B = 1/20, so the particular solution is

yp(x) = (1/10) sin(x) + (1/20) cos(x)

and the general solution is

y(x) = e^{−2x}(C1 sin(3x) + C2 cos(3x)) + (1/10) sin(x) + (1/20) cos(x).

Since

y'(x) = −2e^{−2x}(C1 sin(3x) + C2 cos(3x)) + e^{−2x}(3C1 cos(3x) − 3C2 sin(3x)) + (1/10) cos(x) − (1/20) sin(x),

the ICs imply

y(0) = C2 + 1/20 = 0,
y'(0) = −2C2 + 3C1 + 1/10 = 0.

It follows that C1 = −1/15, C2 = −1/20 and the solution of the IVP is

y(x) = −(1/15)e^{−2x} sin(3x) − (1/20)e^{−2x} cos(3x) + (1/10) sin(x) + (1/20) cos(x).

When yp (x) is a solution of the AHE, the method of undetermined coeffi-


cients will fail. In these cases, we multiply our initial guess yp (x) by x until
no part of yp (x) satisfies the AHE. A few examples will clarify this idea. It
is important to remember that this technique depends on the solution of the
AHE.
Example 6.8. Find a particular solution for the ODE ẍ(t) + 2ẋ(t) + x(t) = e^{−t}. According to Table 1 our initial guess should be xp(t) = Ae^{−t}. However, since λ² + 2λ + 1 = (λ + 1)² = 0, the general solution of the AHE is xh(t) = C1 e^{−t} + C2 te^{−t}. Accordingly, we modify the initial guess for xp(t) to obtain xp(t) = At²e^{−t}.

Since

ẍp(t) + 2ẋp(t) + xp(t) = 2Ae^{−t} = e^{−t},

A = 1/2 and xp(t) = (1/2)t²e^{−t}.
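A finite-difference check confirms that the modified guess of Example 6.8 works; this Python sketch is illustrative only.

```python
import math

def xp(t):
    # Modified guess A t^2 e^{-t} with A = 1/2.
    return 0.5 * t * t * math.exp(-t)

def d1(f, t, h=1e-4):
    # Central-difference first derivative.
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    # Central-difference second derivative.
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

# Residual of x'' + 2x' + x - e^{-t}; it should be near zero everywhere.
residuals = [abs(d2(xp, t) + 2.0 * d1(xp, t) + xp(t) - math.exp(-t))
             for t in [0.2, 1.0, 3.0]]
print(max(residuals))  # close to zero
```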
Example 6.9. Find a particular solution of the ODE i''(t) + i(t) = 3 sin(t). Since the general solution of the AHE is ih(t) = C1 sin(t) + C2 cos(t), we seek a particular solution of the form ip(t) = At sin(t) + Bt cos(t). Since

ip''(t) + ip(t) = 2A cos(t) − 2B sin(t) = 3 sin(t),

we have A = 0 and B = −3/2. It follows that ip(t) = −(3/2)t cos(t).

6.3. Modelling with Second Order ODEs

As we have already seen, the current-voltage relationships for analog components are vR(t) = Ri(t), vL(t) = L di/dt and vC(t) = (1/C) ∫ i(t) dt. Consider the series RLC-circuit. From Kirchhoff's voltage law we obtain the ODE Li''(t) + Ri'(t) + (1/C)i(t) = v'(t). The corresponding IVP is given in (6.11).

[Figure: series RLC-circuit driven by a source v]

(6.11) Li''(t) + Ri'(t) + (1/C)i(t) = v'(t),
       i(0) = i0, i'(0) = i1.

Example 6.10. Suppose that L = 1 (H), R = 4 (Ω), C = 1/20 (F) and v(t) = sin(t). Assume the circuit starts at rest so i(0) = i'(0) = 0.

Now the CE is λ² + 4λ + 20 = 0, so the general solution of the AHE is

ih(t) = e^{−2t}(C1 sin(4t) + C2 cos(4t)).

The particular solution is of the form ip(t) = A sin(t) + B cos(t). It can be shown that

ip(t) = (4/377) sin(t) + (19/377) cos(t),

so the solution of the IVP is

i(t) = −e^{−2t}((21/754) sin(4t) + (19/377) cos(4t)) + (4/377) sin(t) + (19/377) cos(t).

In this case, ih(t) represents the transient current flow of the circuit while ip(t) is the steady state current flow. Consequently

i(t) → (4/377) sin(t) + (19/377) cos(t)

as t → ∞.
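The amplitudes A and B in Example 6.10 come from the 2 × 2 system 19A − 4B = 0, 4A + 19B = 1, obtained by matching the sin(t) and cos(t) amplitudes exactly as in Example 5.5. The Python sketch below (illustrative only) solves that system by Cramer's rule and recovers 4/377 and 19/377.

```python
# Cramer's rule for the amplitude-matching system of Example 6.10:
#   19A - 4B = 0
#    4A + 19B = 1
det = 19.0 * 19.0 - (-4.0) * 4.0          # = 377
A = (0.0 * 19.0 - (-4.0) * 1.0) / det     # = 4/377
B = (19.0 * 1.0 - 4.0 * 0.0) / det        # = 19/377
print(A, B)                               # the coefficients of i_p(t)
```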

[Figure: vibration isolation; sensitive equipment of mass m = m_isolated + m_sensitive rests on an isolated slab at x = x1(t), connected through a spring k and damper β to the shop floor at x = x2(t), with x = 0 at the floor datum]

(6.12) mẍ1(t) + β(ẋ1(t) − ẋ2(t)) + k(x1(t) − x2(t) − h) = 0,
       x2(t) = B sin(ωt),
       x1(0) = x0, ẋ1(0) = 0.

Example 6.11. Consider the IVP (6.12). This system is a spring-mass-damper system that is designed to isolate sensitive equipment from vibrations in a shop floor. If we assume that the shop floor vibrates according to x2(t) = B sin(ωt), then the isolated slab vibrates according to the IVP

(6.13) mẍ1(t) + βẋ1(t) + kx1(t) = kh + kB sin(ωt) + βωB cos(ωt),
       x1(0) = x0, ẋ1(0) = 0,

which is a model of a forced spring-mass system. We have assumed that the uncompressed spring is of length h and that the isolated slab is initially at rest at a distance x0 from x = 0. The objective is to choose the parameters β and k so that the steady state amplitude of the vibrations of the isolated slab is minimized.

Let xp1(t) be the steady state (particular) solution of (6.13). Under the assumption that xp1(t) = h + A sin(ωt + φ), it can be shown that

(6.14) A² = (k² + β²ω²) / ((k − mω²)² + β²ω²) · B².

As a particular example, we let m = 500 kg, k = 10 kg/s² and β = 5 kg/s. Figure 1 depicts a plot of A/B as a function of ω. We see that A²/B² is a bandpass filter with a center frequency of ωC ≈ 0.1412 rad/s. As a result, if the frequency of the shop floor is significantly different from ωC, then the amplitude of vibrations A in the isolated slab will be attenuated.
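The center frequency can be estimated by scanning (6.14) over a grid of frequencies. The Python sketch below is illustrative only; the grid resolution is an arbitrary choice.

```python
import math

# Scan the steady-state amplitude ratio (6.14) over frequency and locate
# the peak; for m = 500, k = 10, beta = 5 it sits near 0.141 rad/s.
m, k, beta = 500.0, 10.0, 5.0

def ratio_sq(w):
    # (A/B)^2 from equation (6.14).
    return (k * k + beta * beta * w * w) / ((k - m * w * w) ** 2 + beta * beta * w * w)

ws = [i * 1e-4 for i in range(1, 5001)]      # 0.0001 .. 0.5 rad/s
w_peak = max(ws, key=ratio_sq)
print(w_peak, math.sqrt(ratio_sq(w_peak)))   # peak frequency and A/B there
```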

Figure 1. Plot of A/B.

6.4. Variation of Parameters

We have already seen that the method of undetermined coefficients works for specific forms of the right hand side (Q(x) or f(x)). The method does not work for a general right hand side. For example, undetermined coefficients does not work when f(x) = 1/(1 + x²). The method of variation of parameters works for a general right hand side, say f(x).



Let y1(x) and y2(x) be distinct solutions of (6.3) and recall that (6.6) is the Wronskian of y1(x) and y2(x). In variation of parameters one seeks a particular solution of the form

(6.15) yp(x) = u1(x)y1(x) + u2(x)y2(x),

where the functions u1(x) and u2(x) are to be determined.

Under the assumption that

u1'(x)y1(x) + u2'(x)y2(x) = 0,

it can be shown that

(6.16) u1'(x) = −y2(x)f(x) / W(y1, y2),
       u2'(x) = y1(x)f(x) / W(y1, y2).
Example 6.12. Consider the ODE

(6.17) y''(x) + 2y'(x) + y(x) = √x.

Distinct solutions of the AHE are y1(x) = e^{−x} and y2(x) = xe^{−x}. It follows that

W(y1, y2) = e^{−x}(e^{−x} − xe^{−x}) − (−e^{−x})(xe^{−x}) = e^{−2x}

and consequently

u1'(x) = −xe^{−x}√x / e^{−2x} = −x^{3/2} e^x,
u2'(x) = e^{−x}√x / e^{−2x} = √x e^x.

By the fundamental theorem of calculus

u1(x) = −∫ x^{3/2} e^x dx,
u2(x) = ∫ √x e^x dx.

Unfortunately the preceding integrals cannot be evaluated by hand and we must state the particular solution in terms of unevaluated integrals. In particular,

yp(x) = −( ∫ x^{3/2} e^x dx ) e^{−x} + ( ∫ √x e^x dx ) xe^{−x}.

Variation of parameters also works in cases where undetermined coefficients is applicable. However, we often have to evaluate tedious integrals to find yp(x). In the following example, it is simpler to use undetermined coefficients to find yp(x).

Example 6.13. Use variation of parameters to find a particular solution for
the ODE y''(x) + 9y(x) = 2 + 3x. The distinct solutions of the AHE are
y1(x) = sin(3x) and y2(x) = cos(3x). The corresponding Wronskian is given
by

W(y1, y2) = det[ sin(3x), cos(3x); 3 cos(3x), −3 sin(3x) ]
          = −3 sin²(3x) − 3 cos²(3x) = −3

and it follows that

u1'(x) = cos(3x)(2 + 3x)/3,
u2'(x) = −sin(3x)(2 + 3x)/3.

We use integration by parts and find that

u1(x) = (2/9) sin(3x) + (1/9) cos(3x) + (x/3) sin(3x),
u2(x) = (2/9) cos(3x) − (1/9) sin(3x) + (x/3) cos(3x),

which implies

yp(x) = sin(3x)[ (2/9) sin(3x) + (1/9) cos(3x) + (x/3) sin(3x) ]
      + cos(3x)[ (2/9) cos(3x) − (1/9) sin(3x) + (x/3) cos(3x) ]
      = 2/9 + x/3.
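As a quick numerical sanity check on Example 6.13, the residual of yp(x) = 2/9 + x/3 in y'' + 9y = 2 + 3x can be evaluated with a central difference approximation of the second derivative. A short sketch in Python (illustrative only, not part of the course materials):

```python
# Numerical check (a sketch): verify that yp(x) = 2/9 + x/3 satisfies
# y'' + 9y = 2 + 3x by approximating y'' with a central difference.

def yp(x):
    return 2.0 / 9.0 + x / 3.0

def second_derivative(f, x, h=1e-4):
    # Central difference approximation of f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h ** 2

for x in [-1.0, 0.0, 0.5, 2.0]:
    residual = second_derivative(yp, x) + 9.0 * yp(x) - (2.0 + 3.0 * x)
    assert abs(residual) < 1e-6, residual
```

Since yp is linear its second derivative vanishes, so the residual reduces to 9yp(x) − (2 + 3x), which is zero by construction.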
Part 3

Fourier Analysis
CHAPTER 7

Fourier Series

What is this chapter about??

7.1. Infinite Series, Convergence and Divergence

define convergence and divergence informally and give some examples. Make
them responsible for geometric series.

Introduce this section

Definition 7.1 (Sigma Notation). We shall use

(7.1) Σ_{k=k1}^{k2} ak = a_{k1} + a_{k1+1} + · · · + a_{k2−1} + a_{k2}

to denote the sum of the numbers ak , k = k1 , . . . , k2 . Here

(1) k is the index of summation,


(2) k1 and k2 are the lower and upper limits of summation respectively
and
(3) the ak , k = k1 , . . . , k2 are the terms of the summation.

If one or both of the limits of summation is infinite, then we will call (7.1)
an infinite series.

Example 7.2.

Σ_{k=−2}^{2} k² = (−2)² + (−1)² + 0² + 1² + 2² = 10,

Σ_{k=1}^{∞} (−1)^k (1/k) = −1/1 + 1/2 − 1/3 + · · · = −ln(2),

Σ_{k=1}^{∞} 1/k = 1/1 + 1/2 + 1/3 + · · · does not exist,

Σ_{k=0}^{4} sin(k) = sin(0) + sin(1) + sin(2) + sin(3) + sin(4) ≈ 1.1351,

Σ_{k=1}^{∞} 1/k² = π²/6,

Σ_{k=1}^{∞} k = 1 + 2 + 3 + · · · does not exist.

Definition 7.3 (Partial Sums). Suppose that

S = Σ_{k=1}^{∞} ak = a1 + a2 + a3 + · · ·

We define the sequence of partial sums {Sn | n = 1, 2, 3, . . .} by

(7.2) Sn = Σ_{k=1}^{n} ak = a1 + a2 + a3 + · · · + an.

Definition 7.4 (Convergence). Suppose that the limit of the sequence of


partial sums {Sn |n = 1, 2, 3, . . .}
lim Sn = S < ∞
n→∞
exists, then the series

X
ak = a1 + a2 + a3 + · · ·
k=1
is said to be convergent with sum

X
S= ak .
k=1
Otherwise, the series is said to be divergent.

We note that, for large n, S ≈ Sn . This means that we can use partial sums
to estimate the sum S in cases where S cannot be computed explicitly.
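The estimate S ≈ Sn is easy to carry out numerically. A sketch in Python (illustrative only), using two of the sums quoted in Example 7.2:

```python
import math

# Partial sums S_n (a sketch): estimate infinite sums numerically.
def partial_sum(term, n):
    # S_n = term(1) + term(2) + ... + term(n)
    return sum(term(k) for k in range(1, n + 1))

# Alternating harmonic series: sum of (-1)^k / k equals -ln(2).
S_n = partial_sum(lambda k: (-1) ** k / k, 100000)
assert abs(S_n - (-math.log(2))) < 1e-4

# Basel series: sum of 1/k^2 equals pi^2 / 6.
S_n = partial_sum(lambda k: 1 / k ** 2, 100000)
assert abs(S_n - math.pi ** 2 / 6) < 1e-4
```

Note how slowly both series converge: even 100000 terms give only about four correct digits.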

Theorem 7.5 (p-Series). Let p > 0, then the series

(7.3) S = Σ_{k=1}^{∞} 1/k^p

converges for any p > 1. That is, S < ∞. The series diverges when 0 < p ≤ 1.
Theorem 7.6 (Alternating Series). Suppose the numbers ak > 0 are such
that

(1) ak+1 < ak and


(2) limk→∞ ak = 0,

then the alternating series

(7.4) S = Σ_{k=1}^{∞} (−1)^k ak

converges with S < ∞.

In addition, if we let S = Sn +Rn , then it can be shown that |Rn | = |S−Sn | <
an+1 . Consequently, the error in the estimate S ≈ Sn is no more than the
first neglected term an+1 .
Theorem 7.7 (Geometric Series). Let a ≠ 0 and r ∈ R with r ≠ 1, then

(7.5) Sn = Σ_{k=0}^{n} ar^k = a(1 − r^{n+1})/(1 − r).

It follows that, for |r| < 1, the series

(7.6) S = Σ_{k=0}^{∞} ar^k

converges with

S = lim_{n→∞} Sn = a/(1 − r).
The series (7.6) diverges when |r| ≥ 1.
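The closed form (7.5) and the limit a/(1 − r) are easy to verify numerically. A sketch in Python (illustrative only):

```python
# Geometric series (a sketch): compare the closed form
# S_n = a(1 - r^(n+1)) / (1 - r) with a direct sum, and check the limit.
a, r = 3.0, 0.5

def S_closed(n):
    return a * (1 - r ** (n + 1)) / (1 - r)

def S_direct(n):
    return sum(a * r ** k for k in range(n + 1))

assert abs(S_closed(20) - S_direct(20)) < 1e-12
# For |r| < 1 the partial sums approach a / (1 - r) = 6.
assert abs(S_closed(200) - a / (1 - r)) < 1e-12
```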
Theorem 7.8 (Divergence). Suppose that the infinite series

S = Σ_{k=1}^{∞} ak < ∞

converges, then

lim_{k→∞} ak = 0.

Equivalently, if

lim_{k→∞} ak ≠ 0,

then

S = Σ_{k=1}^{∞} ak

diverges.
Example 7.9. Let

S = Σ_{k=1}^{∞} 1/k⁴,

then S is a convergent p-series with p = 4. The theorem does not tell us
what S is. However, we can use a partial sum to estimate S. In particular,
S ≈ S10 = 1.082036583.
Example 7.10. Let

S = Σ_{k=1}^{∞} 1/√k,

then S is a divergent p-series with p = 1/2.
Example 7.11. Let

S = Σ_{k=1}^{∞} (−1)^k (1/√k),

then S is an alternating series. Since

(1) k + 1 > k ⇒ √(k+1) > √k ⇒ 1/√(k+1) < 1/√k and
(2) lim_{k→∞} ak = 0,

the series converges with S < ∞.

Now, S ≈ S24 = −0.5038991421 and |S − S24| < a25 = 1/5.
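The error bound |S − Sn| < a_{n+1} can be checked numerically. A sketch in Python (illustrative only), where a far-out average of two consecutive partial sums stands in for the exact sum S:

```python
import math

# Alternating series error bound (a sketch): for S = sum of (-1)^k / sqrt(k),
# the error of the estimate S ~ S_n is less than the first neglected term.
def S_n(n):
    return sum((-1) ** k / math.sqrt(k) for k in range(1, n + 1))

# Averaging consecutive partial sums damps the oscillation, giving a
# stand-in for the exact sum S.
S_estimate = (S_n(100000) + S_n(100001)) / 2

# S lies between consecutive partial sums, and |S - S_24| < a_25 = 1/5.
assert S_n(25) < S_estimate < S_n(24)
assert abs(S_estimate - S_n(24)) < 1 / math.sqrt(25)
```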


Example 7.12. Let

S = Σ_{k=0}^{∞} x^k,

then S is a geometric series with a = 1 and r = x. It follows that the series
converges for |x| < 1 with

S = 1/(1 − x).
Example 7.13. The series

S = Σ_{k=0}^{∞} sin^k(x)

is geometric with a = 1 and r = sin(x). It follows that

S = 1/(1 − sin(x))

for all x such that |sin(x)| < 1. For example, the series converges when
x ∈ (−π/2, π/2) and diverges when x = ±π/2.
Example 7.14. Consider the infinite series

S = Σ_{k=1}^{∞} k/(k + 1).

Since

lim_{k→∞} k/(k + 1) = 1 ≠ 0,

the series S diverges.

7.2. Periodic Signals and the Exponential Form

Definition 7.15 (Periodic Function). Suppose that there is a number T > 0
such that the signal (function) s(t) satisfies

(7.7) s(t + T) = s(t)

for all t ∈ R, then we say that s(t) is T-periodic with period T. Note
that (7.7) implies

s(t + kT) = s(t)

for any k ∈ Z.
Definition 7.16 (General Sinusoid). Recall that the general sinusoid is a
signal of the form

(7.8) s(t) = A cos(ωt + φ),

where

(1) A > 0 is the amplitude,
(2) ω > 0 (rads/s) is the angular frequency,
(3) φ ∈ [−π, π) (rads) is the phase,
(4) f = ω/(2π) (Hz) is the frequency in Hertz,
(5) ts = −φ/ω (s) is the time shift and
(6) T = 1/f = 2π/ω (s) is the period.

Example 7.17 (General Sinusoid). Suppose s(t) = −15 sin(100πt), then
the angular frequency of s(t) is ω = 100π (rads/s). It follows that the
period of s(t) is T = 2π/(100π) = 1/50 (s). The amplitude of s(t) is A = 15 and,
since cos(t + π/2) = −sin(t), the phase of s(t) is φ = π/2. We conclude that
s(t) = 15 cos(100πt + π/2).
Example 7.18 (General Sinusoid). Construct a general sinusoid with an
amplitude of A = 10, a frequency of f = 200 (Hz) and that is shifted
right (delayed) by ts = 1/10 (s). We have ω = 2πf = 400π (rads/s) and
φ = −ωts = −400π/10 = −40π (rads). Since φ must lie in [−π, π) and −40π is
equivalent to 0 modulo 2π, we take φ = 0 (rads) and find that

s(t) = 10 cos(400πt).
Example 7.19 (General Sinusoid). Let s(t) = 3 cos(2πt) + 4 sin(2πt) and
write s(t) in the form s(t) = A cos(ωt + φ). Since

cos(α + β) = cos(α) cos(β) − sin(α) sin(β),

we have

A cos(φ) = 3,
A sin(φ) = −4.

It follows that

A² = (3)² + (−4)² = 25,
tan(φ) = −4/3

and A = 5, φ = −0.927295218 rads. Therefore s(t) = 5 cos(2πt − 0.927295218).
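The same conversion can be done with the two-argument arctangent, which handles the quadrant automatically. A sketch in Python (illustrative only):

```python
import math

# Converting a*cos(wt) + b*sin(wt) to A*cos(wt + phi) (a sketch):
# A = sqrt(a^2 + b^2) and phi = atan2(-b, a), as in Example 7.19.
a, b = 3.0, 4.0
A = math.hypot(a, b)
phi = math.atan2(-b, a)

assert abs(A - 5.0) < 1e-12
assert abs(phi - (-0.927295218)) < 1e-9

# Spot-check the identity A*cos(wt + phi) = a*cos(wt) + b*sin(wt).
w = 2 * math.pi
for t in [0.0, 0.1, 0.37]:
    lhs = A * math.cos(w * t + phi)
    rhs = a * math.cos(w * t) + b * math.sin(w * t)
    assert abs(lhs - rhs) < 1e-12
```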
Theorem 7.20 (Fourier Series). Suppose that the signal s(t) is T-periodic
with

(7.9) ∫₀^T |s(t)|² dt < ∞,

then there exist scalars Ck ∈ C such that

(7.10) s(t) = Σ_{k=−∞}^{∞} Ck exp(jkω1t),

where ω1 = 2π/T.

The Ck are the (exponential) Fourier coefficients of s(t) and are computed
according to the formula

(7.11) Ck = (1/T) ∫₀^T exp(−jkω1t) s(t) dt.

Proof. We delay a justification of Theorem 7.20 until Section 7.5. 

We refer to (7.10) as the exponential form of the Fourier series of s(t). It
is worth noting that we can change the interval of integration, in (7.11), to
any interval of length T. In particular,

Ck = (1/T) ∫_τ^{τ+T} exp(−jkω1t) s(t) dt

for any τ ∈ R.

Theorem 7.20 motivates the following definition.



Definition 7.21 (Harmonics). Let the signal s(t) be T-periodic.

(1) The angular frequency

(7.12) ω1 = 2π/T

is called the fundamental frequency of s(t).
(2) The angular frequency

(7.13) ωk = kω1,

for k ∈ Z, is called the kth harmonic frequency of s(t).
(3) The complex sinusoid

(7.14) sk(t) = Ck exp(jkω1t),

k ∈ Z, is called the kth harmonic of s(t).
(4) The coefficient C0 is called the DC term or DC offset of s(t) and
can be regarded as the zeroth harmonic s0(t). According to for-
mula (7.11), the DC term is the average < s > of s(t) over [0, T].
That is,

(7.15) C0 = < s > = (1/T) ∫₀^T s(t) dt.

Note that, since s(t) is T-periodic, we can write

C0 = < s > = (1/T) ∫_τ^{τ+T} s(t) dt

for any τ ∈ R. The preceding formula does not hold when s(t) is
aperiodic (not periodic).

The coefficient Ck ∈ C can be regarded as the amplitude of the harmonic (7.14)


at frequency ωk = kω1 , k ∈ Z. The series (7.10) builds up the signal s(t)
in terms of complex sinusoids oscillating at the harmonic frequencies. This
process is often referred to as Fourier synthesis. Likewise, the calculation of
the coefficients Ck is often called Fourier analysis.

Since T = 2π/ω1, equation (7.13) implies that the period of the kth harmonic
sk(t) is

(7.16) Tk = 2π/(kω1)

for k ∈ Z. Equation (7.16) reveals two properties of the definition (7.13)
that are problematic. When k = 0, the period of the DC term is undefined.
On the other hand, according to the definition (7.7), the zeroth harmonic is
periodic with an arbitrary period. Without loss of generality, we assume the
period of C0 is the same as the period of s(t). Now, Tk < 0 when k ∈ Z−.
Since periods are times, they must satisfy 0 < Tk < ∞, so a negative Tk
does not make physical sense.

In practical circumstances we combine ω±k = ±kω1 , k ∈ Z+ , to obtain the


single positive frequency ωk = kω1 . Although the problem with the DC term
remains, we can address the problem of negative frequencies. In Section 7.3
we shall see two alternate forms of Fourier series that dispense with the need
for negative frequencies.

Example 7.22 (Full Wave Rectified Sine Wave). Let s(t) = 10|sin(4πt)| and
compute the exponential form of the Fourier series of s(t). The fundamental
frequency of s(t) is ω1 = 4π (rads/s) so the period is T = 1/2 (s). We have

Ck = 20 ∫₀^{1/2} exp(−j4πkt)|sin(4πt)| dt
   = 20 ∫_{−1/4}^{1/4} exp(−j4πkt)|sin(4πt)| dt,

where we have used the fact that the integrand is T-periodic.

If we let u = 4πt, then

Ck = (5/π) ∫_{−π}^{π} exp(−jku)|sin(u)| du.

Recall that, by Euler's Identity,

e^{−jθ} = cos(θ) − j sin(θ).

Since cos(u) and |sin(u)| are even and sin(u) is odd on [−π, π], it follows
that

Ck = (10/π) ∫₀^{π} cos(ku) sin(u) du,

where we have used the fact that sin(u) ≥ 0 on [0, π].

Since

cos(β) sin(α) = (1/2)[sin(α + β) + sin(α − β)],

the integrand can be written as

cos(ku) sin(u) = (1/2)[sin((1 + k)u) + sin((1 − k)u)].

We have

Ck = −(5/π) [ cos((1 + k)u)/(1 + k) + cos((1 − k)u)/(1 − k) ] evaluated from 0 to π
   = −(5/π) [ cos((1 + k)π)/(1 + k) + cos((1 − k)π)/(1 − k) ]
     + (5/π) [ 1/(1 + k) + 1/(1 − k) ]
   = (5/π) (1 + (−1)^k) [ 1/(1 + k) + 1/(1 − k) ]
   = 10((−1)^k + 1)/(π(1 − k²))

and, since (−1)^{−k} = (−1)^k, it follows that

Ck = −10((−1)^k + 1)/(π(k² − 1)),

which is valid for k ≠ ±1.

The special case k = ±1 must be computed separately. Since
cos(u) sin(u) = (1/2) sin(2u) integrates to zero over [0, π], it follows that
C±1 = 0 and consequently

(7.17) s(t) = −(10/π) Σ_{k=−∞, k≠±1}^{∞} ((−1)^k + 1)/(k² − 1) · exp(j4πkt).

In practice, we cannot add up an infinite number of terms. Accordingly, we
approximate the Fourier series of s(t) with the symmetric partial sums

(7.18) SK(t) = Σ_{k=−K}^{K} Ck exp(jkω1t).

Figure 1 depicts the signal s(t) = 10|sin(4πt)| and the partial sums S2 and
S8. In general, the partial sum approximation s(t) ≈ SK(t) improves as K
increases.

Figure 1. Full Wave Sine and Partial Sums
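The closed-form coefficients above can be cross-checked against a direct numerical evaluation of (7.11). A sketch in Python (illustrative only), approximating the integral with a Riemann sum:

```python
import cmath, math

# Numerical Fourier analysis (a sketch): approximate
# C_k = (1/T) * integral of exp(-j k w1 t) s(t) dt for s(t) = 10|sin(4 pi t)|
# with a Riemann sum, and compare with the closed form derived above.
def s(t):
    return 10 * abs(math.sin(4 * math.pi * t))

def Ck_numeric(k, T=0.5, N=20000):
    w1 = 2 * math.pi / T
    dt = T / N
    return sum(cmath.exp(-1j * k * w1 * (n * dt)) * s(n * dt)
               for n in range(N)) * dt / T

def Ck_formula(k):
    return -10 * ((-1) ** k + 1) / (math.pi * (k ** 2 - 1))

for k in [0, 2, 3, 4, 5]:          # the formula is valid for k != +-1
    assert abs(Ck_numeric(k) - Ck_formula(k)) < 1e-3
assert abs(Ck_numeric(1)) < 1e-3   # the special case C_1 = 0
```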



Example 7.23 (Square Wave). Let s(t) = sgn(sin(2πt)), where

sgn(x) = { 1 if x > 0; 0 if x = 0; −1 if x < 0 }

is the signum function. The signal s(t) is a square wave with period T = 1
(s) (see Figure 2) and fundamental frequency ω1 = 2π (rads/s).

The Fourier coefficients of s(t) are

Ck = ∫₀^{1} exp(−j2πkt) s(t) dt
   = ∫₀^{1/2} exp(−j2πkt) dt − ∫_{1/2}^{1} exp(−j2πkt) dt
   = −(1/(j2πk)) [ exp(−j2πkt) evaluated from 0 to 1/2
                 − exp(−j2πkt) evaluated from 1/2 to 1 ]
   = −(1/(j2πk)) [ 2 cos(πk) − 2 ]
   = ((−1)^k − 1) j / (πk).

The preceding formula for the Ck is valid only when k ≠ 0. We have
already seen that the DC term C0 is simply the average of s(t) over [0, T].
Consequently, C0 = 0.

We now have

s(t) = (j/π) Σ_{k=−∞, k≠0}^{∞} ((−1)^k − 1)/k · exp(j2πkt)

and note that the Ck can be complex numbers even when the signal s(t) is
real-valued. Plots of s(t), S2(t) and S8(t) are given in Figure 2.

Figure 2. Square Wave and Partial Sums



Before we finish this example, it is worth noting that Fourier series can often
be simplified. This simplification typically arises as a result of symmetries in
the signal s(t). In the current case, the square wave s(t) is an odd function
of t and, as a result, the even indexed terms of the Fourier series vanish.

Let k = 2p + 1, then k is an odd integer whenever p ∈ Z. The Fourier series
of the square wave becomes

s(t) = (j/π) Σ_{p=−∞}^{∞} ((−1)^{2p+1} − 1)/(2p + 1) · exp(j2π(2p + 1)t)
     = −(2j/π) Σ_{p=−∞}^{∞} 1/(2p + 1) · exp(j2π(2p + 1)t).

Further simplification requires the use of Euler's identity. As it turns out,
the preceding series can be written entirely in terms of sine functions. This
is due to the fact that the square wave is an odd function of t. We leave the
corresponding theory and practice for Section 7.3.
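The synthesis of the square wave from its harmonics can be checked numerically with the symmetric partial sums (7.18). A sketch in Python (illustrative only):

```python
import cmath, math

# Fourier synthesis (a sketch): sum the symmetric partial sum S_K(t) for the
# square wave, using C_k = ((-1)^k - 1) j / (pi k), and check that it
# approaches sgn(sin(2 pi t)) at points away from the jumps.
def S_K(t, K):
    total = 0
    for k in range(-K, K + 1):
        if k == 0:
            continue
        Ck = 1j * ((-1) ** k - 1) / (math.pi * k)
        total += Ck * cmath.exp(1j * 2 * math.pi * k * t)
    return total.real

for t in [0.1, 0.2, 0.35]:        # points inside the +1 half-period
    assert abs(S_K(t, 2001) - 1.0) < 0.01
for t in [0.6, 0.85]:             # points inside the -1 half-period
    assert abs(S_K(t, 2001) + 1.0) < 0.01
```

Near the discontinuities the partial sums overshoot (the Gibbs phenomenon), so the check is only made away from the jumps.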

Example 7.24 (Saw Tooth Wave). Let s(t) = (2/π) arctan(tan(πt)), then s(t)
is a sawtooth wave with T = 1 (s) and ω1 = 2π (rads/s) (See Figure 3).

To compute the Fourier coefficients, we use the fact that s(t) = 2t for
t ∈ (−1/2, 1/2).

Integration by parts with u = t and dv = exp(−j2πkt) dt provides

Ck = j(−1)^k/(πk)

for k ≠ 0. Since < s(t) > is zero, C0 = 0 and

s(t) = (j/π) Σ_{k=−∞, k≠0}^{∞} ((−1)^k/k) exp(j2πkt).

Plots of s(t), S2(t) and S8(t) are given in Figure 3.

7.3. Alternate Forms and Symmetry

When using the exponential form of Fourier series, we have to deal with
negative frequencies and complex numbers even when the signal s(t) is real-
valued. There are two alternate forms of Fourier series that avoid these
difficulties.

Figure 3. Saw Tooth Wave and Partial Sums

Theorem 7.25 (Sine-Cosine Form). Let s(t) be T-periodic, then there are
scalars ak, bk ∈ R such that

(7.19) s(t) = a0 + Σ_{k=1}^{∞} [ak cos(kω1t) + bk sin(kω1t)],

where a0 = C0 and

(7.20) ak = 2Re(Ck),
       bk = −2Im(Ck).

Proof. We assume that s(t) is real-valued; then (7.11) implies C−k = Ck*,
where * denotes complex conjugation. As a consequence, we can rewrite (7.10)
as

s(t) = C0 + Σ_{k=1}^{∞} [ Ck exp(jkω1t) + C−k exp(−jkω1t) ]
     = C0 + Σ_{k=1}^{∞} [ Ck exp(jkω1t) + (Ck exp(jkω1t))* ]
     = C0 + Σ_{k=1}^{∞} 2Re( Ck exp(jkω1t) ).

Let Ck = αk + jβk, then by Euler's identity,

Ck exp(jkω1t) = (αk + jβk)( cos(kω1t) + j sin(kω1t) )
              = αk cos(kω1t) − βk sin(kω1t)
                + j( αk sin(kω1t) + βk cos(kω1t) ).

It follows that ak = 2Re(Ck) = 2αk and bk = −2Im(Ck) = −2βk as required.
Example 7.26 (Full Wave Rectified Sine). Find the sine-cosine form of the
Fourier series (7.17). Since

Ck = −10((−1)^k + 1)/(π(k² − 1))

for k ≠ ±1, bk = 0 and

ak = −20((−1)^k + 1)/(π(k² − 1))

when k ≠ ±1. When k = 0,

C0 = 20/π

and therefore

s(t) = 20/π − (20/π) Σ_{k=2}^{∞} ((−1)^k + 1)/(k² − 1) · cos(4πkt).

It is worth noting that the odd indexed terms in the preceding series vanish.
Let k = 2p. The simplified series is

s(t) = 20/π − (20/π) Σ_{p=1}^{∞} ((−1)^{2p} + 1)/((2p)² − 1) · cos(4π(2p)t)
     = 20/π − (40/π) Σ_{p=1}^{∞} cos(8πpt)/(4p² − 1).

Example 7.27 (Complex Ck). The signal s(t) has a period of T = 5 (s)
with Fourier coefficients

Ck = 1/k³ + j(−1)^k √3/k³

when k ≠ 0 and

C0 = −5.

Find the sine-cosine form of the Fourier series. For k > 0, the coefficients
for the sine-cosine form are

ak = 2/k³,
bk = (−1)^{k+1} 2√3/k³

and

a0 = −5.

Since ω1 = 2π/5, it follows that

s(t) = −5 + Σ_{k=1}^{∞} [ (2/k³) cos((2π/5)kt) + ((−1)^{k+1} 2√3/k³) sin((2π/5)kt) ].
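The conversions (7.20) are purely mechanical and can be spot-checked numerically. A sketch in Python (illustrative only), using the coefficients of Example 7.27:

```python
import math

# From exponential to sine-cosine form (a sketch): a_k = 2 Re(C_k) and
# b_k = -2 Im(C_k), applied to the coefficients of Example 7.27.
def C(k):
    return 1 / k ** 3 + 1j * (-1) ** k * math.sqrt(3) / k ** 3

for k in range(1, 6):
    a_k = 2 * C(k).real
    b_k = -2 * C(k).imag
    assert abs(a_k - 2 / k ** 3) < 1e-12
    assert abs(b_k - (-1) ** (k + 1) * 2 * math.sqrt(3) / k ** 3) < 1e-12
```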

Example 7.28 (Imaginary Ck). Given

s(t) = Σ_{k=1}^{∞} ((−1)^k/k³) sin(kt),

find the exponential form of the Fourier series. We have C0 = a0 = 0 and,
from (7.20),

Ck = ak/2 − (bk/2) j = −((−1)^k/(2k³)) j.

It follows that

s(t) = (j/2) Σ_{k=−∞, k≠0}^{∞} ((−1)^{k+1}/k³) exp(jkt).

Theorem 7.29 (Magnitude-Phase Form). Let s(t) be T-periodic, then there
are scalars Ak > 0 and φk ∈ [−π, π) such that

(7.21) s(t) = A0 cos(φ0) + Σ_{k=1}^{∞} Ak cos(kω1t + φk),

where the amplitudes are given by

(7.22) Ak = 2|Ck| = √(ak² + bk²) for k ≠ 0, and A0 = |C0|

and the phases are given by

(7.23) φk = ∠Ck = −Arctan(ak, bk) for k ≠ 0;
       φ0 = 0 if C0 > 0, and φ0 = π if C0 < 0.

The function Arctan(x, y) is defined by

Arctan(x, y) = arctan(y/x) if x > 0,
               arctan(y/x) + π if x < 0 and y > 0,
               arctan(y/x) − π if x < 0 and y < 0.

Proof. Since

cos(α + β) = cos(α) cos(β) − sin(α) sin(β),

we have

Ak cos(kω1t + φk) = Ak cos(kω1t) cos(φk) − Ak sin(kω1t) sin(φk).

It follows, from (7.19), that

Ak cos(φk) = ak,
Ak sin(φk) = −bk,

when k > 0, and therefore

Ak = √(ak² + bk²) = 2|Ck|

and

tan(φk) = −bk/ak.

When k = 0,

A0 cos(φ0) = a0

as required.
Example 7.30 (Complex Ck). Find the magnitude-phase form of the Fourier
series in example 7.27. Since

Ck = 1/k³ + j(−1)^k √3/k³,

we have

Ak = 2 √( (1/k³)² + ((−1)^k √3/k³)² )
   = 2 √( 1/k⁶ + 3/k⁶ )
   = 4/k³,

for k > 0. Also

A0 = |C0| = 5.

Now, for k > 0,

tan(φk) = ((−1)^k √3/k³) / (1/k³) = (−1)^k √3.

Since Re(Ck) > 0, it follows that

φk = (−1)^k π/3.

Now, C0 < 0 and therefore φ0 = π. It follows that

s(t) = 5 cos(π) + Σ_{k=1}^{∞} (4/k³) cos( (2π/5)kt + (−1)^k π/3 ).
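The amplitudes and phases above can be cross-checked by computing |Ck| and ∠Ck directly; the two-argument function Arctan(x, y) of Theorem 7.29 corresponds to `math.atan2(y, x)`. A sketch in Python (illustrative only):

```python
import cmath, math

# Magnitude-phase form (a sketch): A_k = 2|C_k| and phi_k = angle(C_k),
# computed for the coefficients of Example 7.27 / 7.30.
def C(k):
    return 1 / k ** 3 + 1j * (-1) ** k * math.sqrt(3) / k ** 3

for k in range(1, 6):
    A_k = 2 * abs(C(k))
    phi_k = cmath.phase(C(k))   # same as math.atan2(Im, Re)
    assert abs(A_k - 4 / k ** 3) < 1e-12
    assert abs(phi_k - (-1) ** k * math.pi / 3) < 1e-12
```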

Example 7.31. Find the magnitude-phase form of the Fourier series in
example 7.24. Since C0 = 0, we have A0 = 0 and φ0 = 0. For k > 0,

Ak = 2 |j(−1)^k/(πk)| = 2/(πk).

Since Re(Ck) = 0,

φk = (−1)^k π/2

and

s(t) = (2/π) Σ_{k=1}^{∞} (1/k) cos( 2πkt + (−1)^k π/2 ).

Example 7.32. Given

s(t) = Σ_{k=1}^{∞} (4k/(1 + k⁴)) cos( 7πkt + (−1)^k π/4 ),

find the coefficients Ck, ak and bk.

First, a0 = C0 = φ0 = 0. Now, for k > 0,

Ck = |Ck| exp(j∠Ck) = (Ak/2) e^{jφk}
   = (2k/(1 + k⁴)) exp( j(−1)^k π/4 )
   = (k/(1 + k⁴)) (√2 + (−1)^k √2 j)

and

ak = 2√2 k/(1 + k⁴),
bk = (−1)^{k+1} 2√2 k/(1 + k⁴).

7.3.1. Phasor Transforms. Could spend some time relating the mag-
nitude phase form to phasor transforms P.

7.4. Properties of Fourier Series

Theorem 7.33 (Operational Properties). Let s(t) be T-periodic with fun-
damental frequency ω1 and Fourier coefficients Ck. Suppose that:

(1) ŝ(t) = αs(t), then ŝ(t) has period T̂ = T and fundamental fre-
quency ω̂1 = ω1. The Fourier coefficients are Ĉk = αCk.
(2) ŝ(t) = s(αt), then ŝ(t) is periodic with T̂ = T/α and ω̂1 = αω1. The
Fourier coefficients are Ĉk = Ck.
(3) ŝ(t) = s(t − α), then ŝ(t) is T-periodic with fundamental frequency
ω̂1 = ω1. The Fourier coefficients are Ĉk = exp(−jkω1α)Ck.
(4) ŝ(t) = s'(t), then ŝ(t) is periodic with T̂ = T and ω̂1 = ω1. The
Fourier coefficients are Ĉk = jkω1Ck.
(5) ŝ(t) = ∫₀^t s(τ) dτ and C0 = 0, then ŝ(t) is periodic with T̂ = T and
ω̂1 = ω1. The Fourier coefficients are Ĉk = Ck/(jkω1) for k ≠ 0. The
DC term is Ĉ0 = −Σ_{k≠0} Ck/(jkω1) and the Fourier series is

ŝ(t) = −Σ_{k=−∞, k≠0}^{∞} Ck/(jkω1) + Σ_{k=−∞, k≠0}^{∞} (Ck/(jkω1)) e^{jkω1t}.
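Property (3) (time shift) is easy to verify numerically. A sketch in Python (illustrative only), using the sawtooth of Example 7.24, whose coefficients are Ck = j(−1)^k/(πk):

```python
import cmath, math

# Time-shift property (a sketch): the coefficients of s(t - alpha) should
# equal exp(-j k w1 alpha) C_k. We check this against a direct Riemann sum
# for the sawtooth of Example 7.24, where C_k = j(-1)^k / (pi k).
def s(t):
    # Sawtooth: s(t) = 2t on [-1/2, 1/2), extended with period T = 1.
    t = (t + 0.5) % 1.0 - 0.5
    return 2 * t

def Ck_numeric(signal, k, T=1.0, N=20000):
    w1 = 2 * math.pi / T
    dt = T / N
    return sum(cmath.exp(-1j * k * w1 * n * dt) * signal(n * dt)
               for n in range(N)) * dt / T

alpha, w1 = 0.2, 2 * math.pi
for k in [1, 2, 3]:
    Ck = 1j * (-1) ** k / (math.pi * k)
    shifted = Ck_numeric(lambda t: s(t - alpha), k)
    assert abs(shifted - cmath.exp(-1j * k * w1 * alpha) * Ck) < 1e-3
```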

Example 7.34. Let s(t) = (2/π) arcsin(sin(2πt)), then s(t) is a triangular wave
with T = 1. It can be shown that the Fourier coefficients are

Ck = −(4 sin(πk/2)/(π²k²)) j for k ≠ 0, and C0 = 0.

Suppose σ(t) = 2s(3t), then σ(t) is a triangular wave with period T̂ = 1/3 (s)
and fundamental frequency ω̂1 = 6π (rads/s). See Figure 4. If Ck are the
Fourier coefficients of s(t), then the Fourier coefficients of σ(t) are Ĉk = 2Ck
or

Ĉk = −(8 sin(πk/2)/(π²k²)) j for k ≠ 0, and Ĉ0 = 0.

Figure 4. Scaled Triangular Wave σ(t) and Partial Sum Σ2 (t)

Example 7.35. Let s(t) be the triangular wave from example 7.34 and
define σ(t) = s'(t − 1/8). The signal σ(t) has period T̂ = 1 (s) and fun-
damental frequency ω̂1 = 2π (rads/s). The Fourier coefficients are Ĉk =
jkω1 exp(−jkω1α)Ck with α = 1/8, or

Ĉk = exp(−j(π/4)k) · 8 sin(πk/2)/(πk) for k ≠ 0, and Ĉ0 = 0.


See Figure 5. We observe that the derivative of the triangular wave is the
square wave σ(t). The square wave has been translated to the right by 1/8 (s).

Figure 5. Derivative of Triangular Wave and Partial Sum Σ8 (t)

Example 7.36. Let s(t) be the triangular wave from example 7.34 and define
σ(t) = ∫₀^t s(τ) dτ. The signal σ(t) has period T̂ = 1 (s) and fundamental
frequency ω̂1 = 2π (rads/s). The Fourier coefficients are

Ĉk = −2 sin(πk/2)/(π³k³) for k ≠ 0,
Ĉ0 = (2/π³) Σ_{k≠0} sin(πk/2)/k³ = 1/8.

Refer to Figure 6.

Figure 6. Integral of Triangular Wave and Partial Sum Σ8 (t)

Recall that the function f (x) is said to be even if f (x) = f (−x) for all x.
On the other hand, f (x) is said to be odd if f (x) = −f (−x) for all x.
Theorem 7.37 (Even and Odd Symmetry). Let s(t) be T -periodic, then

Figure 7. Integral of Triangular Wave and Partial Sum Σ8 (t)

(1) If s(t) is even, then from (7.11), the coefficients Ck are purely real
and even in the index k with

(7.24) Ck = (2/T) ∫₀^{T/2} s(t) cos(kω1t) dt.

It follows that equation (7.10) can be written in the form

s(t) = C0 + 2 Σ_{k=1}^{∞} Ck cos(kω1t).

That is, s(t) has an expansion in terms of cos(kω1t) which are even,
real-valued functions of t.
(2) If s(t) is odd, then from (7.11), the coefficients Ck are purely imag-
inary and odd in the index k with

Ck = −(2j/T) ∫₀^{T/2} s(t) sin(kω1t) dt.

Since the average of s(t) must be zero, it follows that equa-
tion (7.10) can be written in the form

s(t) = −2 Σ_{k=1}^{∞} Im(Ck) sin(kω1t).

That is, s(t) has an expansion in terms of sin(kω1t) which are odd,
real-valued functions of t.
Example 7.38 (Even Symmetry). Let

fe(t) = 16t² for −1/4 ≤ t < 1/4, and fe(t) = 0 otherwise,

and extend fe(t) to R via

se(t) = Σ_{k=−∞}^{∞} fe(t − k).

The function se (t) is even and periodic with a period of T = 1.

Let Cke be the Fourier coefficients of se(t). It can be shown that

Cke = [ (π²k² − 8) sin(πk/2) + 4πk cos(πk/2) ] / (π³k³) for k > 0,
C0e = 1/6

so

se(t) = 1/6 + 2 Σ_{k=1}^{∞} [ (π²k² − 8) sin(πk/2) + 4πk cos(πk/2) ] / (π³k³) · cos(2πkt).

Refer to Figure 8. Depicts some even stuff...

Figure 8. Even Symmetric Signal s(t) and Partial Sums


S2 (t) and S8 (t)

The expression for the coefficients Cke is complicated. However, if we account
for the parity of Cke in the index k, then we obtain a significant simplification.
Let k = 2p, p ∈ Z+, then

C2pe = [ (4π²p² − 8) sin(πp) + 8πp cos(πp) ] / (8π³p³)
     = (−1)^p/(π²p²).

When k = 2p − 1,

C(2p−1)e = [ (π²(2p − 1)² − 8) sin(π(2p − 1)/2) + 4π(2p − 1) cos(π(2p − 1)/2) ] / (π³(2p − 1)³)
         = (π²(2p − 1)² − 8)(−1)^{p+1} / (π³(2p − 1)³)
         = (−1)^{p+1}/(π(2p − 1)) + 8(−1)^p/(π³(2p − 1)³).
Example 7.39 (Odd Symmetry). Let

fo(t) = 8t³ for −1/2 ≤ t < 1/2, and fo(t) = 0 otherwise,

and extend fo(t) to R via

so(t) = Σ_{k=−∞}^{∞} fo(t − k).

The function so(t) is odd and periodic with a period of T = 1.

Let Cko be the Fourier coefficients of so(t). It can be shown that

Cko = (−1)^k j (π²k² − 6)/(π³k³) for k > 0, and C0o = 0

so

so(t) = −2 Σ_{k=1}^{∞} (−1)^k (π²k² − 6)/(π³k³) · sin(2πkt).
k=1

Refer to Figure 9. Depicts some odd stuff

Figure 9. Odd Symmetric Signal s(t) and Partial Sums


S2 (t) and S8 (t)

7.5. Spectra and Parseval’s Theorem

Definition 7.40. Let s(t) be T-periodic.

(1) We call

(7.25) < s² > = (1/T) ∫₀^T |s(t)|² dt

the average power of the signal s(t).
(2) The number

(7.26) ||s|| = √(< s² >)

is called the root mean square (RMS) value of s(t).
Example 7.41. Compute the average power of the general sinusoid (7.8).
Since

< s² > = (1/T) ∫₀^T |A cos(ωt + φ)|² dt

with ω = 2π/T, the change of variable u = ωt + φ provides

< s² > = (A²/(2π)) ∫₀^{2π} cos²(u) du
       = (A²/(2π)) ∫₀^{2π} (1 + cos(2u))/2 du
       = A²/2.

The corresponding RMS value is ||s|| = A/√2.

Theorem 7.42 (Parseval's Theorem). Let the T-periodic signal s(t) satisfy

(1/T) ∫_α^{α+T} |s(t)|² dt < ∞

for any α. Then the average power of s(t) is

< s² > = Σ_{k=−∞}^{∞} |Ck|².

Obvious here. Downplay phase spectrum. Apply this to filtering. Define


Bode plots.
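Parseval's theorem can be checked numerically for the square wave of Example 7.23, whose average power in the time domain is exactly 1. A sketch in Python (illustrative only):

```python
import math

# Parseval check (a sketch): for the square wave of Example 7.23,
# <s^2> = 1 and C_k = ((-1)^k - 1) j / (pi k), so the sum of |C_k|^2
# over all k should also be 1.
power_time = 1.0                       # (1/T) * integral of |sgn(...)|^2 dt
power_freq = 0.0
for k in range(1, 200001):
    Ck_sq = abs(((-1) ** k - 1) / (math.pi * k)) ** 2
    power_freq += 2 * Ck_sq            # k and -k contribute equally; C_0 = 0
assert abs(power_freq - power_time) < 1e-4
```

Only the odd harmonics contribute, and Σ 8/(π²k²) over odd k recovers the average power 1.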

7.6. Applications of Fourier Series

Do a circuit with a sawtooth input. Derive the Fourier Coefficients of the


output. This will be steady-state only. How do we get the transient.

Example 7.43 (LR Circuit). Consider the problem of finding the steady
state current flow ip(t) for the following IVP.

Li'(t) + Ri(t) = v(t),  i(0) = i0.
Previously, we solved this problem by using undetermined coefficients. This
method can be extended to the case where ip (t) and v(t) are represented by
Fourier series. The transient current flow ih (t) can be found by considering
the AHE Li0 (t) + Ri(t) = 0.
Assume v(t) = Σ_k Vk exp(jkω1t) and ip(t) = Σ_k Ip,k exp(jkω1t). We want
to find the coefficients Ip,k in terms of the Vk. Since the ODE is linear we
can restrict our attention to the kth harmonics ip,k(t) = Ip,k exp(jkω1t) and
vk(t) = Vk exp(jkω1t).

We substitute ip,k(t) and vk(t) into the ODE to obtain

(R + Lkω1 j) Ip,k exp(jkω1t) = Vk exp(jkω1t)

and, since exp(jkω1t) is never zero,

Ip,k = Vk/(R + Lkω1 j).
It follows that

ip(t) = Σ_{k=−∞}^{∞} Vk/(R + Lkω1 j) · exp(jkω1t).

Consider the particular case L = 0.1 (H), R = 1.0 (Ω) and let v(t) =
(2/π) arcsin(sin(2πt)) be the triangular wave from Example 7.34. It follows
that

Vk = −(4 sin(πk/2)/(π²k²)) j for k ≠ 0, and V0 = 0,

and

ip(t) = −(4j/π²) Σ_{k=−∞, k≠0}^{∞} sin(πk/2)/(k²(1.0 + 0.2πkj)) · exp(j2πkt).

Refer to Figure 10 .....
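The steady-state series above can be synthesized directly. A sketch in Python (illustrative only), building ip(t) from the first few hundred harmonics:

```python
import cmath, math

# Steady-state synthesis (a sketch): build i_p(t) for the LR circuit from
# I_k = V_k / (R + j L k w1), with the triangular-wave V_k of Example 7.34.
L, R, w1 = 0.1, 1.0, 2 * math.pi

def V(k):
    return -4j * math.sin(math.pi * k / 2) / (math.pi ** 2 * k ** 2)

def ip(t, K=200):
    total = 0
    for k in range(-K, K + 1):
        if k == 0:
            continue
        Ik = V(k) / (R + 1j * L * k * w1)
        total += Ik * cmath.exp(1j * k * w1 * t)
    return total.real

# The output is real and periodic with period 1.
assert abs(ip(0.3) - ip(1.3)) < 1e-9
# Each harmonic is attenuated (|H(jkw1)| < 1), so the response stays below
# the input amplitude of 1.
assert max(abs(ip(n / 200)) for n in range(200)) < 1.0
```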


Example 7.44 (LCR Circuit). We repeat example 7.43 in the case of an
LCR circuit. Consider the IVP

Li''(t) + Ri'(t) + (1/C)i(t) = v'(t),  i(0) = i0, i'(0) = i1

and assume that v(t) = Σ_k Vk exp(jkω1t) and ip(t) = Σ_k Ip,k exp(jkω1t).

Figure 10. Steady State Current Flow in an LR-Circuit

It follows that

Ip,k = jkω1 Vk / (1/C − Lk²ω1² + Rkω1 j)

and

ip(t) = Σ_{k=−∞, k≠0}^{∞} jkω1 Vk / (1/C − Lk²ω1² + Rkω1 j) · exp(jkω1t).

Suppose that L = 1 (H), C = 1 (F), R = 1 (Ω) and let v(t) = (2/π) arcsin(sin(2πt)).
We have

ip(t) = (8/π) Σ_{k=−∞, k≠0}^{∞} sin(πk/2) / (k(1 − 4π²k² + 2πkj)) · exp(j2πkt).

Refer to Figure 11 .....

Figure 11. Steady State Current Flow in an LCR-Circuit

Example 7.45 (Audio Effects). We refer the reader to the block diagram
depicted in Figure 12. This diagram summarizes a number of popular audio
effects including: phase shifting, flanging and chorus effect. For simplicity
we assume that the feedback gain is α2 = 1 and set α1 = α.

Figure 12. Block diagram of the effect: the input x(t) passes through an
LFO-controlled delay τ(t) and gain α1 to the output y(t), with feedback
gain α2.

The output of the effect is given by the equation

y(t) = Σ_{p=0}^{∞} α^p x(t − pτ(t)),

where we assume that |α| < 1. The delay τ(t) is controlled by a low
frequency oscillator (LFO) and is usually a broadband signal like a square
wave. For definiteness, we take

τ(t) = A0 sgn(sin(2πf0t)).

Assume that x(t) is periodic. If f1 (Hz) is the fundamental frequency of x(t),
then f0 ≪ f1. If x(t) is periodic with

x(t) = Σ_{k=−∞}^{∞} Ck exp(jk2πf1t),

then, with ω1 = 2πf1,

y(t) = Σ_{k=−∞}^{∞} Ck exp(jkω1t) Σ_{p=0}^{∞} α^p exp(−jkpω1τ)
     = Σ_{k=−∞}^{∞} Ck/(1 − α exp(−jkω1τ)) · exp(jkω1t).
Refer to Figure 13 .....
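For a frozen value of the delay τ, the factor multiplying each harmonic is the comb-filter response of the geometric series derived above. A sketch in Python (illustrative only; the numbers α = 0.5, f1 = 100 Hz and τ = 1 ms are made up for the demonstration):

```python
import cmath, math

# Comb-filter response (a sketch): with a frozen delay value tau, the k-th
# harmonic of the input is scaled by H_k = 1 / (1 - alpha exp(-j k w1 tau)),
# the geometric-series closed form derived above.
alpha, f1 = 0.5, 100.0          # hypothetical gain and fundamental (Hz)
w1 = 2 * math.pi * f1

def H(k, tau):
    return 1 / (1 - alpha * cmath.exp(-1j * k * w1 * tau))

# |H| ranges between 1/(1+alpha) and 1/(1-alpha): notches and peaks.
mags = [abs(H(k, tau=1e-3)) for k in range(1, 41)]
assert min(mags) >= 1 / (1 + alpha) - 1e-12
assert max(mags) <= 1 / (1 - alpha) + 1e-12

# Verify the closed form against a truncated geometric sum for one harmonic.
k, tau = 3, 1e-3
direct = sum(alpha ** p * cmath.exp(-1j * k * w1 * p * tau)
             for p in range(200))
assert abs(direct - H(k, tau)) < 1e-12
```

Sweeping τ(t) with the LFO moves the notches up and down in frequency, which is what produces the characteristic phasing/flanging sound.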

Figure 13. Audio Input and Output


CHAPTER 8

The Continuous Fourier Transform (CFT)

Define aperiodic signals and give the definition and the inverse.

8.1. Aperiodic Signals

8.2. Operational Properties

8.3. Amplitude Modulation

CHAPTER 9

The Discrete Fourier Transform

Part 4

Laplace Transforms
9.1. The Laplace transform and the Inverse Laplace Transform

9.2. Operational Properties and Tables of Laplace Transforms

9.3. Solving IVPs with the Laplace Transform

9.4. Discontinuous Forcing Functions

9.5. Transfer functions and PID Controllers

9.6. Convolution Theorem


APPENDIX A

Complex Numbers

The MECH guys need this chapter

APPENDIX B

MATH 257 Formulae and Tables

B.1. Ordinary Differential Equations

B.1.1. Linear First Order. The general solution of

(B.1) y 0 (x) + P (x)y(x) = Q(x)

is

(B.2) y(x) = (1/φ(x)) [ ∫ φ(x)Q(x) dx + C ],

where

φ(x) = exp( ∫ P(x) dx ).

B.1.2. Linear Second Order. The characteristic equation of

(B.3) a2 y 00 (x) + a1 y 0 (x) + a0 y(x) = 0

is

(B.4) a2 λ2 + a1 λ + a0 = 0 .

General Solution: If λ1 , λ2 are the roots of (B.4) and C1 , C2 ∈ R are


arbitrary then:

(1) If λ1 6= λ2 with λ1 , λ2 ∈ R, then

(B.5) y(x) = C1 e λ1 x + C2 e λ2 x

(2) If λ = λ1 = λ2 with λ ∈ R, then

(B.6) y(x) = C1 e λx + C2 xe λx

(3) If λ = α ± βj, α, β ∈ R, then


 
(B.7) y(x) = e αx C1 cos(βx) + C2 sin(βx)

B.2. Fourier Series

If s(t) is T -periodic, then ω0 = 2π/T and



(B.8) s(t) = Σ_{k=−∞}^{∞} Ck exp(jkω0t),

where

(B.9) Ck = (1/T) ∫₀^T exp(−jkω0t) s(t) dt.

B.2.1. Properties. We use the notation s(t) ←FS→ {Ck} to indicate
that {Ck} are the Fourier coefficients of s(t). Let s(t) and u(t) be T-periodic
with s(t) ←FS→ {Ck}, u(t) ←FS→ {Dk}. Let a, b ∈ R.

Linearity: as(t) + bu(t) ←FS→ {aCk + bDk}
Time Shift: Shifting in time changes the phase of Ck: s(t − ts) ←FS→
{exp(−jkω0ts)Ck}.
Time Scaling: Assume a > 0. The fundamental frequency of s(at)
is ω̃0 = aω0. The Fourier coefficients are unchanged.
Amplitude Scaling: The Fourier coefficients of As(t) are ACk.
Differentiation: s'(t) ←FS→ {jkω0Ck} and s''(t) ←FS→ {−k²ω0²Ck}.
Parseval's Theorem: The average power of s(t) is

(B.10) < s² > = (1/T) ∫₀^T |s(t)|² dt = Σ_{k=−∞}^{∞} |Ck|² = P0 + Σ_{k=1}^{∞} Pk,

where

(B.11) Pk = |C0|² for k = 0, and Pk = 2|Ck|² for k = 1, 2, 3, . . .

Note: 2|Ck|² = Ak²/2 = (1/2)(ak² + bk²), k ≥ 1.

B.2.2. Sine - Cosine Form.



(B.12) s(t) = C0 + Σ_{k=1}^{∞} [ak cos(kω0t) + bk sin(kω0t)],

where ak = 2Re(Ck ) and bk = −2Im(Ck ).



B.2.3. Magnitude - Phase Form.



(B.13) s(t) = C0 + Σ_{k=1}^{∞} Ak cos(kω0t + φk),

where Ak = √(ak² + bk²) = 2|Ck| and φk = ∠Ck.

B.2.4. Symmetries.

Even Symmetry: If s(t) = s(−t), then bk = 0 for all k = 1, 2, 3 . . .


Odd Symmetry: If s(t) = −s(−t), then C0 = 0 and ak = 0 for all
k = 1, 2, 3 . . .
Half-wave Symmetry: If s(t) = −s(t − T /2), then C2p = 0 for all
p ∈ Z.

B.2.5. Common Signals. In what follows ω0 = 2π.

B.2.5.1. Full Wave Rectified Sine.


Signal: s(t) = |sin(πt)|
Coefficients: Ck = 2/π for k = 0, and Ck = 2/(π(1 − 4k²)) for k ≠ 0
Simplified series: s(t) = 2/π + (4/π) Σ_{k=1}^{∞} cos(k2πt)/(1 − 4k²)

B.2.5.2. Half Wave Rectified Sine.


Signal: s(t) = (1/2)|sin(2πt)| + (1/2) sin(2πt)
Coefficients: Ck = 1/π for k = 0; Ck = −j/4 for k = ±1;
Ck = ((−1)^k + 1)/(2π(1 − k²)) otherwise
Simplified series: s(t) = 1/π + sin(2πt)/2 + (2/π) Σ_{k=1}^{∞} cos(k4πt)/(1 − 4k²)

B.2.5.3. Square Wave.



Signal: s(t) = sgn(sin(2πt))
Coefficients: Ck = 0 for k = 0, and Ck = (((−1)^k − 1)/(kπ)) j for k ≠ 0
Simplified series: s(t) = (4/π) Σ_{k=1}^{∞} sin((2k − 1)2πt)/(2k − 1)

B.2.5.4. Sawtooth Wave.


Signal: s(t) = 2(t − floor(t) − 1/2) = −(2/π) arctan(cot(πt))
Coefficients: Ck = 0 for k = 0, and Ck = j/(πk) otherwise
Simplified series: s(t) = −(2/π) Σ_{k=1}^{∞} sin(k2πt)/k

B.2.5.5. Triangular Wave.


Signal: s(t) = 1 − |4t − 4 floor(t + 1/4) − 1| = (2/π) arcsin(sin(2πt))
Coefficients: Ck = 0 for k = 0, and Ck = −(4 sin(kπ/2)/(k²π²)) j for k ≠ 0
Simplified series: s(t) = −(8/π²) Σ_{k=1}^{∞} (−1)^k sin((2k − 1)2πt)/(2k − 1)²
k=1

B.3. Continuous Fourier Transform

Continuous Fourier Transform (CFT):

(B.14) ŝ(ω) = ∫_{−∞}^{∞} e^{−jωt} s(t) dt

Inverse Continuous Fourier Transform:

(B.15) s(t) = (1/(2π)) ∫_{−∞}^{∞} e^{jωt} ŝ(ω) dω

Parseval's Relation:

(B.16) ∫_{−∞}^{∞} |s(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |ŝ(ω)|² dω

B.4. Discrete Fourier Transform

Discrete Fourier Transform (DFT):

(B.17) S[l] = Σ_{k=0}^{N−1} exp(−j(2π/N)kl) s[k] for l = 0, 1, . . . , N − 1

Inverse Discrete Fourier Transform (IDFT):

(B.18) s[k] = (1/N) Σ_{l=0}^{N−1} exp(j(2π/N)kl) S[l] for k = 0, 1, . . . , N − 1

Parseval's Relation:

(B.19) Σ_{k=0}^{N−1} |s[k]|² = (1/N) Σ_{k=0}^{N−1} |S[k]|²

B.5. Power Series

The Taylor series for the function f(x) about x = a is given by

(B.20) f(x) = f(a) + f'(a)(x − a) + (f''(a)/2!)(x − a)² + · · ·
            + (f^(n)(a)/n!)(x − a)^n + · · ·

When a = 0, we obtain the MacLaurin series as a special case.

B.5.1. MacLaurin Series.


(B.21) e^x = 1 + x + x²/2! + x³/3! + x⁴/4! + · · ·  x ∈ R
(B.22) sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + x⁹/9! − · · ·  x ∈ R
(B.23) cos(x) = 1 − x²/2! + x⁴/4! − x⁶/6! + x⁸/8! − · · ·  x ∈ R
(B.24) 1/(1 − x) = 1 + x + x² + x³ + x⁴ + · · ·  |x| < 1
(B.25) ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + x⁵/5 − · · ·  |x| < 1
(B.26) (1 + x)^α = 1 + αx + (α(α − 1)/2!)x² + (α(α − 1)(α − 2)/3!)x³
                 + (α(α − 1)(α − 2)(α − 3)/4!)x⁴ + · · ·  |x| < 1

B.6. Laplace Transform

Laplace Transform:

(B.27) F(s) = L{f}(s) = ∫₀^∞ e^{−st} f(t) dt.

Note: fˆ(ω) = F (jω).


Inverse Laplace Transform:

(B.28) f(t) = L⁻¹{F}(t) = (1/(2πj)) ∫_{c−j∞}^{c+j∞} e^{st} F(s) ds.

Note: Re(sp ) < c for all poles sp of F .

B.6.1. Table of Laplace Transforms.



f = L⁻¹{F}, t ≥ 0  |  F = L{f}, s > α

1. 1  |  1/s, s > 0
2. t^n, n = 0, 1, 2 . . .  |  n!/s^{n+1}, s > 0
3. e^{at}  |  1/(s − a), s > a
4. te^{at}  |  1/(s − a)², s > a
5. sin at  |  a/(s² + a²), s > 0
6. cos at  |  s/(s² + a²), s > 0
7. t sin at  |  2as/(s² + a²)², s > 0
8. t cos at  |  (s² − a²)/(s² + a²)², s > 0
9. e^{bt} sin at  |  a/((s − b)² + a²), s > b
10. e^{bt} cos at  |  (s − b)/((s − b)² + a²), s > b
11. te^{bt} sin at  |  2a(s − b)/((s − b)² + a²)², s > b
12. te^{bt} cos at  |  ((s − b)² − a²)/((s − b)² + a²)², s > b
17. δ(t − a)  |  e^{−as}
18. uc(t)  |  e^{−cs}/s, s > 0
21. t^p, p > −1  |  Γ(p + 1)/s^{p+1}
22. exp(−1/(4t))/√(πt)  |  exp(−√s)/√s
23. ϕ(t) = Σ_{k∈Z} e^{−k²πt}  |  1/(√s tanh √s)

B.6.2. Properties.

f = L⁻¹{F}, t ≥ 0  |  F = L{f}, s > α

13. af(t) + bg(t)  |  aF(s) + bG(s)
14. f'(t)  |  sF(s) − f(0)
15. f''(t)  |  s²F(s) − sf(0) − f'(0)
16. f^(n)(t), n = 0, 1, 2 . . .  |  s^n F(s) − Σ_{k=1}^{n} s^{k−1} f^{(n−k)}(0)
17. t^n f(t), n = 0, 1, 2 . . .  |  (−1)^n F^(n)(s)
19. uc(t)f(t − c)  |  e^{−cs}F(s)
20. e^{ct}f(t)  |  F(s − c)
5. ∫₀^t f(τ) dτ  |  F(s)/s
6. ∫₀^t f(t − τ)g(τ) dτ  |  F(s)G(s)
4. f(at), a > 0  |  (1/a)F(s/a)

B.6.3. Notes.

(1) Γ(z) is the gamma function

Γ(z) = ∫₀^∞ e^{−x} x^{z−1} dx, Re(z) > 0.
(2) u(t) is the unit step or Heaviside function
(
1 t>0
u(t) =
0 t≤0
and uc (t) = u(t − c).
(3) δ is the unit impulse or Dirac delta function. Formally δ(t) = u0 (t).
