
Lecture notes for the course

Differential Equations

based on the works of
Rosario Muñoz Marín,
Juan José Saameño Rodríguez
and
Francisco J. Rodríguez Sánchez¹

This book is protected under a Creative Commons license:

Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
http://creativecommons.org/licenses/by-nc-sa/4.0/.

Degrees in Engineering of
Electronic Systems,
Telecommunication Systems,
Telematics,
and Sound and Image

Universidad de Málaga

¹ Dpto. Matemática Aplicada, Universidad de Málaga.

Table of Contents

1. Ordinary Differential Equations (ODE)
   1.1. Introduction and definitions
        1.1.1. Solutions of an ODE
   1.2. First Order Differential Equations
        1.2.1. Equations with Separated Variables
        1.2.2. Homogeneous Equations
        1.2.3. Exact Differential Equations
        1.2.4. Linear Differential Equations
   1.3. Integrating ODEs of higher order
        1.3.1. Linear ODEs
        1.3.2. Second order Linear ODEs
        1.3.3. Linear ODEs of order n
   1.4. Systems of Linear Differential Equations
        1.4.1. First Order Systems
   Exercises

2. Fourier Transform and Laplace Transform
   2.1. Periodic Functions and Fourier Series
        2.1.1. Fourier Series for Periodic Functions with Period other than 2π
        2.1.2. Complex Notation
   2.2. Fourier Integral Transform
        2.2.1. Definitions
        2.2.2. Properties of the Fourier transform and its inverse
        2.2.3. Convolution
        2.2.4. Fourier Transforms of elementary functions
        2.2.5. Distributions and their Fourier transforms
        2.2.6. Fourier transform applied to differential equations
        2.2.7. Table of Fourier transforms
   2.3. Laplace Integral Transform
        2.3.1. Definitions
        2.3.2. Properties of the Laplace Operator
        2.3.3. Laplace Transform Table
        2.3.4. Inverse Laplace Transform
        2.3.5. Laplace Method for Solving ODEs
   Exercises

3. Partial Differential Equations
   3.1. Introduction
        3.1.1. Linear and quasilinear PDEs
        3.1.2. Some classical examples of PDEs
   3.2. First order PDEs
        3.2.1. Direct solution
        3.2.2. Method of separation of variables
        3.2.3. Method of characteristics
   3.3. Second order partial differential equations
        3.3.1. The wave equation
        3.3.2. The heat diffusion equation
   Exercises

4. Complex Variable I (Differentiation and Integration)
   4.1. Complex Differentiation
        4.1.1. Accumulation Points and Limits
        4.1.2. Differentiability and Holomorphicity
        4.1.3. The Cauchy-Riemann Equations
   4.2. Integration
        4.2.1. Definition and Basic Properties
        4.2.2. Homotopies
        4.2.3. Cauchy's Integral Formula
        4.2.4. Extension of Cauchy's Formula
        4.2.5. Fundamental Theorem of Algebra
        4.2.6. Fundamental Theorems of Calculus
   Exercises

5. Complex Variable II (Poles and the Residue Theorem)
   5.1. Taylor and Laurent Series
        5.1.1. Power series
        5.1.2. Taylor Series
        5.1.3. Laurent Series
   5.2. Poles and the Residue Theorem
        5.2.1. Isolated Singularities
        5.2.2. Residues
   Exercises

A. Complex Numbers
   A.1. Algebraic Definition
   A.2. Number i. Rectangular and Polar Forms
   A.3. Complex Conjugates
   Exercises

B. Elementary Complex Functions
   B.1. Exponential Function
   B.2. Trigonometric Functions
   B.3. Hyperbolic Trig Functions
   B.4. Logarithms
   B.5. Exponential with any non-null complex base
   Exercises

C. Computing Some Real Integrals
   C.1. Integrals of the form ∫₀^(2π) R(sin x, cos x) dx
   C.2. Improper Integrals
   Exercises

Chapter 1

Ordinary Differential Equations (ODE)

1.1. Introduction and definitions

A differential equation is any equation which contains derivatives, either ordinary derivatives (only one independent variable; ODE for short) or partial derivatives (several independent variables; PDE for short). Differential equations play a very important and useful role in mathematics, physics and engineering. Many mathematical and numerical methods have been developed for the solution of differential equations.
Examples of differential equations are:

a) xy' + y²x = 1   (ODE)

b) x ∂z/∂x − y² ∂z/∂y = x   (PDE)

c) x²y''' − xy' = e^x   (ODE)

d) ∂²z/∂x² + ∂²z/∂x∂y − ∂²z/∂y² = y   (PDE)

The order of a differential equation is the order of the highest derivative in the equation.

In the literature there are very important differential equations; some examples are:

Newton's Second Law: F = m d²s/dt², where the force F = F(t), the mass m is a constant, and s = s(t) is the position.

Simple Pendulum Motion: d²θ/dt² + (g/L)θ = 0, where the angle θ = θ(t), and the gravity g and the length of the pendulum L are constants.

Electric Circuit Equation: L d²Q/dt² + R dQ/dt + (1/C)Q = E, where the charge Q = Q(t), the voltage E = E(t), and the inductance L, resistance R and capacity C are constants.

Heat equation: k ∂²u/∂x² = ∂u/∂t, where k is a constant (thermal diffusivity) and u = u(x, t) is the temperature along a one-dimensional wire at distance x and time t.

1.1.1. Solutions of an ODE

Given an ordinary differential equation of order n written as

F(x, y, y', ..., y^(n)) = 0

we say that y = g(x) is a solution in an interval I ⊆ ℝ if g is at least n times differentiable and

F(x, g(x), g'(x), ..., g^(n)(x)) = 0

for every x ∈ I.

Often, finding solutions of an ODE involves computing anti-derivatives of functions. For this reason, solving an ODE is also called integrating the equation.
Example 1.1.1. The function y = x⁴ is a solution of the ODE

y' = 4x√y.

But this equation has infinitely many solutions: the parametric family of functions y = (x² + C)² for any constant C. This ODE also has the trivial solution y = 0.
A solution not involving an arbitrary constant is called a particular solution. A parametric family of functions which contains every particular solution is called a general solution. Sometimes the general solution does not contain all solutions; in that case we say that the ODE has singular solutions. In Example 1.1.1 above, the family y = (x² + C)² is a general solution and the trivial solution y = 0 is a singular solution.
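As a quick sanity check, the general solution family of Example 1.1.1 can be verified numerically. The following Python sketch (the helper names are our own; we restrict to x ≥ 0 and C ≥ 0 so that √y = x² + C) confirms that every sampled member of the family satisfies y' = 4x√y:

```python
import math

# Sketch: check that members y = (x**2 + C)**2 of the general solution
# family of Example 1.1.1 satisfy y' = 4*x*sqrt(y), taking x >= 0 and
# C >= 0 so that sqrt(y) = x**2 + C.
def y(x, C):
    return (x**2 + C) ** 2

def dy(x, C):
    # derivative computed by hand: d/dx (x^2 + C)^2 = 4*x*(x^2 + C)
    return 4 * x * (x**2 + C)

for C in (0.0, 1.0, 2.5):
    for x in (0.0, 0.5, 1.0, 2.0):
        assert abs(dy(x, C) - 4 * x * math.sqrt(y(x, C))) < 1e-9

print("general solution family verified")
```

The same loop with C < 0 would fail for small x, which is consistent with √y requiring y ≥ 0.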
Curve solutions

It is very common that solutions of an ODE are expressed as curves instead of functions.

Example 1.1.2. The first order differential equation

yy' + x = 1

has as general solution the following family of curves (circles centered at (1, 0)):

x² + y² − 2x = C,  with C ≥ −1.

You can check this fact using implicit differentiation.

1.2. First Order Differential Equations

As we know, this is an expression F(x, y, y') = 0.

When it is possible to solve for y' as a function of x and y, we say that the ODE is expressed in normal form:

y' = f(x, y)

The easiest way of integrating an ODE is to compute an integral directly, as in

y' = 2x  ⟹  y = ∫ 2x dx  ⟹  y = x² + C
In general, solving an ODE is a very complicated problem. Moreover, sometimes we need the solution passing through a given point (x₀, y₀).

Definition 1.2.1 (Cauchy's Problem). This is the problem of finding a function y = g(x), solution of an ODE y' = f(x, y), which verifies y₀ = g(x₀). It can be represented as

    y' = f(x, y)
    y(x₀) = y₀

The next theorem gives a sufficient condition for existence and uniqueness of the solution of a Cauchy problem.
Theorem 1.2.2. Let (x₀, y₀) be a point where the scalar field f(x, y) is continuous and the partial derivative ∂f/∂y exists and is continuous in an open ball around the point (x₀, y₀). Then there exists an ε > 0 for which the Cauchy problem

    y' = f(x, y)
    y(x₀) = y₀

has a unique solution y = g(x) for x ∈ (x₀ − ε, x₀ + ε).

Proof. Beyond the scope of these notes.
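When no closed-form solution is at hand, a Cauchy problem can also be approximated numerically. Below is a minimal Euler-method sketch in Python (the function name and the step count are our own choices, not part of the notes):

```python
# Minimal Euler-method sketch for a Cauchy problem y' = f(x, y), y(x0) = y0.
def euler(f, x0, y0, x_end, n=1000):
    h = (x_end - x0) / n          # constant step size
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)          # follow the tangent line given by the ODE
        x += h
    return y

# y' = 2x with y(0) = 0 has the exact solution y = x**2, so y(2) = 4.
approx = euler(lambda x, y: 2 * x, 0.0, 0.0, 2.0)
print(approx)  # close to 4.0
```

Halving the step size roughly halves the error, the expected first-order behaviour of Euler's method.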
Example 1.2.3. The Cauchy problem

    y' = x/y²
    y(2) = 0

has the solution

y = (3x²/2 − 6)^(1/3),

but f(x, y) = x/y² is not continuous at (2, 0), so the theorem does not apply; this shows that the hypotheses of the theorem are sufficient but not necessary.
Example 1.2.4. The Cauchy problem

    yy' + x = 1
    y(2) = 0

has no solution: the solution curve x² + y² − 2x = 0 (see Example 1.1.2) is not a function of the form y = g(x) around the point (2, 0) (recall the Implicit Function Theorem).
Example 1.2.5. The Cauchy problem

    xy' = 2y
    y(0) = 0

has solutions, but not a unique one. In fact, it has infinitely many solutions y = Cx².

Sometimes a first order ODE is expressed in a form equivalent to the normal form, but with a different notation:

P(x, y) dy = Q(x, y) dx  ⟺  y' = dy/dx = Q(x, y)/P(x, y) = f(x, y)

Figure 1.1: Through the point (0, 0) pass infinitely many solutions (Example 1.2.5).


Orthogonal trajectories

Many problems in Physics involve the calculation of families of curves orthogonal to other families of curves. We say that two curves are orthogonal if their tangent lines are orthogonal at the intersection point.
Example 1.2.6. We are going to calculate the family orthogonal to the parabolas

y = x² + c.

This family verifies the differential equation dy = 2x dx. Then, considering that orthogonal slopes are negative reciprocals of each other, the orthogonal family verifies

dx = −2x dy  ⟹  dy/dx = −1/(2x),

therefore the orthogonal family is

y = C − (ln|x|)/2

(see Figure 1.2).

Figure 1.2: Family of curves (in blue) orthogonal to the family y = x2 + c (in black).
Example 1.2.7. Let us calculate the family orthogonal to the hyperbolas

x² − y² = 2cx.

Differentiating, 2x dx − 2y dy = 2c dx, hence

2x dx − 2y dy = ((x² − y²)/x) dx,

and the orthogonal family is obtained from 2x² dy + 2xy dx = (x² − y²) dy, equivalent to

2xy dx + (x² + y²) dy = 0;

its solution will be given in Example 1.2.16.

Now we present some known methods for integrating first order ODEs.

1.2.1. Equations with Separated Variables

This is an ODE which can be expressed in the form

g(y)y' = f(x), equivalent to g(y) dy = f(x) dx.

Integrating, ∫ g(y) dy = ∫ f(x) dx; therefore, if G(y) = ∫ g(y) dy and F(x) = ∫ f(x) dx, the general solution is the parametric curve

G(y) = F(x) + C
Example 1.2.8. To solve y' = sin x / y³ we express

y³ dy = sin x dx  ⟹  ∫ y³ dy = ∫ sin x dx  ⟹  y⁴/4 = −cos x + C  ⟹

y⁴ + 4 cos x = C₁  ⟹  y = (C₁ − 4 cos x)^(1/4)
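This solution admits a quick numerical sanity check. The Python sketch below (C = 6 is our own choice, made so the radicand stays positive) compares a finite-difference derivative of y = (C − 4 cos x)^(1/4) against sin x / y³:

```python
import math

# Sketch: numerical check of Example 1.2.8. With C = 6 the curve
# y = (C - 4*cos(x))**(1/4) should satisfy y' = sin(x)/y**3; we compare a
# central finite difference against the right-hand side.
C = 6.0
y = lambda x: (C - 4 * math.cos(x)) ** 0.25
h = 1e-6
for x in (0.3, 1.0, 2.0):
    dy = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    assert abs(dy - math.sin(x) / y(x) ** 3) < 1e-5
print("separated-variables solution verified")
```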

1.2.2. Homogeneous Equations

Definition 1.2.9. A scalar field f is said to be homogeneous of order k ≥ 0 if

f(tx, ty) = t^k f(x, y) for every t.

Example 1.2.10. Check that the scalar field f(x, y) = x² − 2xy + 2y² is homogeneous of second order.

We say that a first order ODE in normal form y' = f(x, y) is homogeneous if f(x, y) is homogeneous of order 0, that is, f(tx, ty) = f(x, y) for every t. This is equivalent to saying that an ODE in the form P(x, y) dy = Q(x, y) dx is homogeneous if P and Q are homogeneous of the same order.
Proposition 1.2.11. A homogeneous first order ODE can be converted into an equation with separated variables by the change of variable y = ux, where u is a new variable depending on x.

Proof. If y' = f(x, y) and y = ux, then u'x + u = f(x, ux) = f(1, u) ⟹ u'x = f(1, u) − u, therefore

du/(f(1, u) − u) = dx/x.

If the ODE is P(x, y) dy = Q(x, y) dx and y = ux, then P(x, ux)(x du + u dx) = Q(x, ux) dx ⟹ P(1, u)(x du + u dx) = Q(1, u) dx ⟹ xP(1, u) du = (Q(1, u) − uP(1, u)) dx, therefore

P(1, u) du/(Q(1, u) − uP(1, u)) = dx/x,

which has separated variables.


Example 1.2.12. Integrate x dy = (y + √(x² − y²)) dx.

We observe that it is homogeneous (P and Q are homogeneous of first order). Then we set y = ux, therefore

x(x du + u dx) = (ux + √(x² − u²x²)) dx  ⟹  x du + u dx = (u + √(1 − u²)) dx  ⟹

du/√(1 − u²) = dx/x.

Then arcsin u = ln|x| + C ⟹ arcsin(y/x) = ln|x| + C. We can also write y = x sin(ln|x| + C).
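Again the result can be sanity-checked numerically. In this sketch C = 0 is our own choice, and we stay in a range of x > 0 where cos(ln x + C) ≥ 0 so the positive square-root branch applies; the check is x·y' = y + √(x² − y²):

```python
import math

# Sketch: check the solution of Example 1.2.12. With C = 0 and x in a range
# where cos(ln x + C) >= 0, the curve y = x*sin(ln x + C) should satisfy
# x*y' = y + sqrt(x**2 - y**2).
C = 0.0
y = lambda x: x * math.sin(math.log(x) + C)
h = 1e-6
for x in (1.0, 2.0, 3.0):
    dy = (y(x + h) - y(x - h)) / (2 * h)   # numerical derivative
    assert abs(x * dy - (y(x) + math.sqrt(x**2 - y(x)**2))) < 1e-4
print("homogeneous-equation solution verified")
```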

Figure 1.3: Orthogonal trajectories of the circles x² + y² = cx are the circles x² + y² = cy (Example 1.2.13).

Example 1.2.13. Calculate the orthogonal trajectories of the family of circles

x² + y² = cx.

Check that the solution is another family of circles, x² + y² = cy (see Figure 1.3).

1.2.3. Exact Differential Equations

We say that a first order ODE in the form

P(x, y) dx + Q(x, y) dy = 0        (1.1)

is exact if the vector field F(x, y) = (P(x, y), Q(x, y)) is conservative, i.e. there exists a scalar field U(x, y) such that

∇U = F, that is, ∂U/∂x = P(x, y) and ∂U/∂y = Q(x, y).

In this case the differential equation (1.1) can be rewritten dU = 0; hence the parametric family of curves

U(x, y) = C

is a general solution of the equation.
From calculus we know that if a vector field F = (P, Q) has null curl, ∇×F = ∂Q/∂x − ∂P/∂y = 0, then F is conservative. Hence:

Proposition 1.2.14. If the ODE P(x, y) dx + Q(x, y) dy = 0 verifies

∂P/∂y = ∂Q/∂x,        (1.2)

then it is an exact differential equation.
Example 1.2.15. Integrate the equation

(4x³y² + x − 1) dx + (2x⁴y − y + 2) dy = 0.

Let P = 4x³y² + x − 1 and Q = 2x⁴y − y + 2. Check that the ODE is exact.

In a similar way as we did in calculus,

U = ∫ P dx = ∫ (4x³y² + x − 1) dx = x⁴y² + x²/2 − x + φ(y)

∂U/∂y = Q  ⟹  2x⁴y + φ'(y) = 2x⁴y − y + 2

φ'(y) = −y + 2  ⟹  φ(y) = ∫ (−y + 2) dy = 2y − y²/2 + C,

therefore the general solution curve is

x⁴y² + x²/2 − x + 2y − y²/2 = C.
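Both the exactness condition (1.2) and the computed potential can be double-checked with finite differences. A short Python sketch (the sample points are arbitrary choices of ours):

```python
# Sketch: finite-difference check that the ODE of Example 1.2.15 is exact,
# i.e. dP/dy == dQ/dx, and that the potential U has gradient (P, Q).
P = lambda x, y: 4 * x**3 * y**2 + x - 1
Q = lambda x, y: 2 * x**4 * y - y + 2
U = lambda x, y: x**4 * y**2 + x**2 / 2 - x + 2 * y - y**2 / 2
h = 1e-6
for (x, y) in [(1.0, 2.0), (0.5, -1.0), (2.0, 0.3)]:
    Py = (P(x, y + h) - P(x, y - h)) / (2 * h)
    Qx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    assert abs(Py - Qx) < 1e-3                 # exactness condition (1.2)
    Ux = (U(x + h, y) - U(x - h, y)) / (2 * h)
    Uy = (U(x, y + h) - U(x, y - h)) / (2 * h)
    assert abs(Ux - P(x, y)) < 1e-3 and abs(Uy - Q(x, y)) < 1e-3
print("exactness and potential verified")
```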

Example 1.2.16. The family orthogonal to the family of hyperbolas

x² − y² = 2cx

is obtained, as we saw in Example 1.2.7, from the ODE

2xy dx + (x² + y²) dy = 0.

This ODE is exact, because Py = 2x and Qx = 2x. Now we calculate the potential:

Ux = P  ⟹  U = ∫ 2xy dx = x²y + φ(y)

Uy = Q  ⟹  x² + φ'(y) = x² + y²  ⟹  φ(y) = ∫ y² dy = y³/3.

Therefore the orthogonal family is

x²y + y³/3 = C.

Integration Factors

An integration factor is a function μ(x, y) chosen to convert an inexact differential equation into an exact one:

P(x, y) dx + Q(x, y) dy = 0                          (inexact ODE)
μ(x, y)P(x, y) dx + μ(x, y)Q(x, y) dy = 0            (exact ODE)

Because the second equation is exact, it verifies

∂(μ(x, y)P(x, y))/∂y = ∂(μ(x, y)Q(x, y))/∂x,

in other words,

P ∂μ/∂y + μ ∂P/∂y = Q ∂μ/∂x + μ ∂Q/∂x        (1.3)

This means that μ(x, y) is a solution of the partial differential equation (1.3), generally much more difficult to solve than the original equation.

Sometimes equation (1.3) can be simplified by imposing additional conditions on μ(x, y). We will see some of these.

Integration factor not depending on y, i.e. μ = μ(x). In this case equation (1.3) can be written μPy = μ'Q + μQx. This implies that

μ'/μ = (Py − Qx)/Q = φ(x)

is a function depending only on x. Then finding the integration factor is easy, because

ln μ = ∫ φ(x) dx  ⟹  μ = e^(∫φ(x) dx).

Example 1.2.17. Integrate the ODE (4x⁵y³ + x) dx + (3x⁶y² − x²) dy = 0.

It is not exact, but it has an integration factor μ = μ(x), because

μ'/μ = (Py − Qx)/Q = (12x⁵y² − (18x⁵y² − 2x))/(3x⁶y² − x²) = (−6x⁵y² + 2x)/(3x⁶y² − x²) = −2/x,

hence μ(x) = e^(−2∫(1/x) dx) = x⁻². The equivalent ODE

(4x³y³ + 1/x) dx + (3x⁴y² − 1) dy = 0

is exact, and calculating the potential function, its general solution is

U = x⁴y³ + ln|x| − y = C.
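The effect of the integration factor can be confirmed numerically: after multiplying by μ = x⁻², the partial derivatives ∂P/∂y and ∂Q/∂x should agree. A Python sketch (sample points are arbitrary):

```python
# Sketch: check that multiplying the ODE of Example 1.2.17 by the integration
# factor mu = x**-2 makes it exact (dP/dy == dQ/dx for the new P, Q).
mu = lambda x: x ** -2
P = lambda x, y: mu(x) * (4 * x**5 * y**3 + x)        # = 4x^3 y^3 + 1/x
Q = lambda x, y: mu(x) * (3 * x**6 * y**2 - x**2)     # = 3x^4 y^2 - 1
h = 1e-6
for (x, y) in [(1.0, 1.0), (2.0, -0.5), (0.7, 2.0)]:
    Py = (P(x, y + h) - P(x, y - h)) / (2 * h)
    Qx = (Q(x + h, y) - Q(x - h, y)) / (2 * h)
    assert abs(Py - Qx) < 1e-3
print("integration factor makes the equation exact")
```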
Integration factor not depending on x, i.e. μ = μ(y). In this case equation (1.3) can be written Pμ' + μPy = μQx. This implies that

μ'/μ = (Qx − Py)/P = φ(y)

is a function depending only on y. Then finding the integration factor is easy, because

ln μ = ∫ φ(y) dy  ⟹  μ = e^(∫φ(y) dy).

Example 1.2.18. Integrate the ODE y(1 + xy) dx − x dy = 0 using an integration factor μ = μ(y).
Other integration factors. Assuming μ = μ(z), where z = h(x, y), equation (1.3) can be written

P μ'(z) hy + μPy = Q μ'(z) hx + μQx.

This implies that

μ'/μ = (Py − Qx)/(Q hx − P hy) = φ(z)

is a function depending only on z. Then finding the integration factor is easy, because

ln μ = ∫ φ(z) dz  ⟹  μ = e^(∫φ(z) dz) = μ(h(x, y)).

Example 1.2.19. Integrate the ODE (4xy − 3x² − y) dx + (2x − y − x²) dy = 0 using an integration factor of the form μ = μ(x² − y).

Using equation (1.3) and z = x² − y (so hx = 2x, hy = −1), we can write

(4xy − 3x² − y)(−1)μ'(z) + (4x − 1)μ = (2x − y − x²)(2x)μ'(z) + (2 − 2x)μ

(6x − 3)μ = (2xy − y − 2x³ + x²)μ'(z)

μ'/μ = (6x − 3)/(2xy − y − 2x³ + x²) = 3(2x − 1)/(y(2x − 1) − x²(2x − 1)) = 3/(y − x²) = −3/z.

Hence

μ = e^(−3∫(1/z) dz) = z⁻³ = 1/(x² − y)³.

Multiplying the equation by this integration factor we obtain

((4xy − 3x² − y)/(x² − y)³) dx + ((2x − y − x²)/(x² − y)³) dy = 0

and the general solution curve is

(x − y)/(x² − y)² = C

Example 1.2.20. Integrate the ODE (3xy² − 4y) + (3x − 4x²y)y' = 0 using an integration factor depending on x^m y^n (Exercise 1.2).
1.2.4. Linear Differential Equations

A first order ordinary linear differential equation is an equation that can be expressed in the form

a₁(x)y' + a₀(x)y = g(x), or equivalently y' + P(x)y = Q(x).

We show two methods for integrating linear equations.

Using an integration factor: for y' + P(x)y = Q(x) we can multiply by the integration factor μ = e^(∫P(x) dx). So

y' e^(∫P(x) dx) + yP(x) e^(∫P(x) dx) = Q(x) e^(∫P(x) dx)  ⟹  d/dx (y e^(∫P(x) dx)) = Q(x) e^(∫P(x) dx)

y e^(∫P(x) dx) = ∫ Q(x) e^(∫P(x) dx) dx

y = (∫ Q(x) e^(∫P(x) dx) dx) / e^(∫P(x) dx)
Example 1.2.21. Integrate the equation y' + y/x = x.

We have μ = e^(∫(1/x) dx) = e^(ln x) = x, then

xy' + y = x²  ⟹  d/dx (xy) = x²  ⟹  xy = ∫ x² dx = x³/3 + C  ⟹  y = x²/3 + C/x

Using a particular solution: we use a general solution of the so-called associated homogeneous linear equation y' + P(x)y = 0. This solution is found by separation of variables:

yh = C e^(−∫P(x) dx),

with C a constant.

For finding a particular solution yp we use the so-called Lagrange method of variation of constants. It consists in replacing the constant C of the homogeneous solution by a function C(x), i.e. yp = C(x) e^(−∫P(x) dx). Then

yp' + P(x)yp = Q(x)

C'(x) e^(−∫P(x) dx) − P(x)C(x) e^(−∫P(x) dx) + P(x)C(x) e^(−∫P(x) dx) = Q(x)

C'(x) = Q(x) e^(∫P(x) dx)

Therefore the general solution is

y = yp + yh = (∫ Q(x) e^(∫P(x) dx) dx) e^(−∫P(x) dx) + C e^(−∫P(x) dx)

Remark. For equations with constant coefficients it is faster to use the method of indeterminate coefficients, which will be explained further on in this document (page 16).
Example 1.2.22. Let us obtain the general solution of Example 1.2.21, y' + y/x = x, using a particular solution.

First, compute the general solution of the associated homogeneous equation:

yh' + yh/x = 0  ⟹  yh'/yh = −1/x  ⟹  ln yh = −ln x + K  ⟹  yh = e^(−ln x + K) = Cx⁻¹ = C/x

Now we look for a particular solution using the method of variation of constants, yp = C(x)/x:

(xC'(x) − C(x))/x² + C(x)/x² = x  ⟹  C'(x)/x = x  ⟹  C'(x) = x²  ⟹  C(x) = x³/3

Hence, the general solution is

y = yp + yh = x²/3 + C/x
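The general solution just obtained can be checked directly, since both y and y' are known in closed form. A Python sketch (sample values of C and x are our own):

```python
# Sketch: verify with a hand-computed derivative that y = x**2/3 + C/x solves
# y' + y/x = x (Examples 1.2.21 and 1.2.22) for several constants C.
def y(x, C):
    return x**2 / 3 + C / x

def dy(x, C):
    # d/dx (x^2/3 + C/x) = 2x/3 - C/x^2
    return 2 * x / 3 - C / x**2

for C in (-1.0, 0.0, 2.0):
    for x in (0.5, 1.0, 3.0):
        assert abs(dy(x, C) + y(x, C) / x - x) < 1e-12
print("general solution of the linear equation verified")
```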

Two important equations reported in the literature can be solved by converting them into linear equations.

Equation of Bernoulli

The Bernoulli ODE has the form

y' + yP(x) = y^n Q(x), with n ≠ 0, n ≠ 1,

and may be solved using the change of variable z = 1/y^(n−1), for which z' = (1 − n)y'/y^n:

y' + yP(x) = y^n Q(x)  ⟹  y'/y^n + (1/y^(n−1))P(x) = Q(x)  ⟹  z'/(1 − n) + zP(x) = Q(x),

which is a linear equation.

Equation of Riccati

The Riccati ODE has the form

y' + yP(x) + y²R(x) = Q(x).

This equation can be reduced to a Bernoulli equation if we know a particular solution yp, doing the change of variable y = z + yp. Indeed,

z' + yp' + zP(x) + ypP(x) + z²R(x) + 2z yp R(x) + yp²R(x) = Q(x)

z' + zP(x) + z²R(x) + 2z yp R(x) = 0

z' + z(P(x) + 2yp R(x)) = −z²R(x)

Example 1.2.23. Integrate the Riccati equation y' + y² + y/x = 1/x². The function yp = 1/x is a particular solution (verify this). We change the variable y = z + 1/x; then

z' − 1/x² + z² + 2z/x + 1/x² + z/x + 1/x² = 1/x²

z' + (3/x)z = −z².

The last equation is a Bernoulli ODE, so we change the variable u = 1/z ⟹ u' = −z'/z². Then

z'/z² + 3/(xz) = −1  ⟹  u' − (3/x)u = 1   (linear ODE).

We have uh = Cx³ and up = C(x)x³, with

C'(x)x³ + 3x²C(x) − 3C(x)x² = 1  ⟹  C'(x) = x⁻³  ⟹  C(x) = −1/(2x²).

Hence u = Cx³ − x/2 = 1/z ⟹ z = 2/(2Cx³ − x). Therefore the general solution is

y = z + 1/x = 2/(2Cx³ − x) + 1/x = (2Cx² + 1)/(2Cx³ − x)
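Both the particular solution yp = 1/x and the general solution of Example 1.2.23 can be sanity-checked numerically. A Python sketch (C = 1 is our own test value; the sample points avoid the zeros of the denominator):

```python
# Sketch: check that yp = 1/x and the general solution
# y = (2C*x**2 + 1)/(2C*x**3 - x), here with C = 1, both satisfy the Riccati
# equation y' + y**2 + y/x = 1/x**2 of Example 1.2.23.
h = 1e-6
yp = lambda x: 1 / x
yg = lambda x: (2 * x**2 + 1) / (2 * x**3 - x)     # general solution, C = 1
for f in (yp, yg):
    for x in (1.0, 2.0, 3.0):
        df = (f(x + h) - f(x - h)) / (2 * h)       # numerical derivative
        assert abs(df + f(x)**2 + f(x) / x - 1 / x**2) < 1e-4
print("Riccati solutions verified")
```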

1.3. Integrating ODEs of higher order

The order of an ordinary differential equation is the highest derivative order n appearing in the equation F(x, y, y', ..., y^(n)) = 0.

The Cauchy problem of order n consists in finding a solution of

    y^(n) = f(x, y, y', ..., y^(n−1))
    y(x₀) = y₀
    y'(x₀) = y₀'
    ...
    y^(n−1)(x₀) = y₀^(n−1)

Similarly to the first order case, it is possible to prove that the Cauchy problem has a unique solution when f is continuous and all the partial derivatives ∂f/∂y, ∂f/∂y', ..., ∂f/∂y^(n−1) are continuous.

Sometimes it is possible to reduce an ODE of order n to an equivalent ODE of order less than n.

Equations in the form F(x, y^(k), y^(k+1), ..., y^(n)) = 0. We do the change of variable u = y^(k).

Example 1.3.1. Solve y'' + (1/x)y' = 3x.

The change u = y' reduces it to the first order equation u' + (1/x)u = 3x. Hence

y' = u = x² + c₁/x  ⟹  y = x³/3 + c₁ ln x + c₂
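Since y' and y'' are easy to compute by hand here, the reduced-order result can be verified directly. A Python sketch (sample constants are ours):

```python
# Sketch: verify Example 1.3.1. With y = x**3/3 + c1*ln(x) + c2 one has
# y' = x**2 + c1/x and y'' = 2x - c1/x**2, so y'' + y'/x should equal 3x
# for any constants c1, c2 (c2 drops out of the equation entirely).
def check(x, c1):
    dy = x**2 + c1 / x
    d2y = 2 * x - c1 / x**2
    return d2y + dy / x

for c1 in (-2.0, 0.0, 1.5):
    for x in (0.5, 1.0, 4.0):
        assert abs(check(x, c1) - 3 * x) < 1e-12
print("reduced-order solution verified")
```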

Equations in the form F(y, y', y'', ..., y^(n)) = 0. The independent variable does not appear. We consider the function p(y) = y'; then we have

y' = p
y'' = dp/dx = (dp/dy)(dy/dx) = p dp/dy
y''' = dy''/dx = (dy''/dy)p = p((dp/dy)² + p d²p/dy²)
...

Example 1.3.2. The second order ODE y'' + (y')² = 2y'e^(−y) changes into

p dp/dy + p² = 2p e^(−y)  ⟹  dp/dy + p = 2e^(−y)   (first order linear)

with solution

p = y' = (2y + c₁)e^(−y)  ⟹  ∫ e^y/(2y + c₁) dy = x + c₂

Note: we do not know how to express the last integral in elementary terms.


F homogeneous in the dependent variable and its derivatives. In these cases we know that

F(x, ty, ty', ..., ty^(n)) = t^k F(x, y, y', ..., y^(n)).

We do a change of variable z = z(x) such that y = exp(∫z dx), and then

y = exp(∫z dx)
y' = z exp(∫z dx)
y'' = z² exp(∫z dx) + z' exp(∫z dx) = (z² + z') exp(∫z dx)
y''' = (z³ + 3z'z + z'') exp(∫z dx)
...

Replacing in the equation, we get the new ODE

F(x, e^(∫z dx), z e^(∫z dx), (z² + z') e^(∫z dx), ...) = (e^(∫z dx))^k F(x, 1, z, z² + z', ...) = 0,

which produces an equation of lower order.

Example 1.3.3. To solve yy'' + (y')² = 0 we use the previous change of variable and we obtain the equation

z² + z' + z² = 0  ⟹  z' + 2z² = 0  ⟹  z⁻²z' = −2  ⟹  1/z = 2x + c₁.

Therefore y = exp(∫ dx/(2x + c₁)) = exp(ln(2x + c₁)/2 + k) = c₂√(2x + c₁).

Note. Observe that this equation could be solved differently:

yy'' + (y')² = 0  ⟹  d/dx (yy') = 0  ⟹  yy' = c₁  ⟹  y²/2 = c₁x + c₂

y² = 2c₁x + 2c₂   (equivalent to the above solution)

1.3.1. Linear ODEs

A linear ODE of order n is

a₀(x)y^(n) + a₁(x)y^(n−1) + ... + a_(n−1)(x)y' + a_n(x)y = p(x)

where a₀(x), a₁(x), ..., a_n(x) and p(x) are functions of x (independent of y). If p(x) = 0 then it is a homogeneous linear ODE.

The D-operator. It is useful to change notation in the following sense: we write Dy = y'; then D(Dy) = D²y = y'', D³y = y''', ..., D^n y = y^(n). Calling L(D) = a₀(x)D^n + a₁(x)D^(n−1) + ... + a_(n−1)(x)D + a_n(x), the linear equation can be written

L(D)y = p(x), or L(D)y = 0

for the associated homogeneous linear equation.

Proposition 1.3.4. The operator L(D) is linear, i.e. for constants c₁, c₂ it verifies

L(D)(c₁y₁ + c₂y₂) = c₁L(D)y₁ + c₂L(D)y₂

Proof. Trivial from the properties of derivatives.

Corollary 1.3.5. If y₁, y₂, ..., y_k are solutions of the homogeneous linear differential equation L(D)y = 0, then any linear combination c₁y₁ + ... + c_k y_k is another solution.

Proof. L(D)(Σᵢ cᵢyᵢ) = Σᵢ cᵢL(D)yᵢ = Σᵢ cᵢ·0 = 0.

Proposition 1.3.6. Let yp be a particular solution of a linear differential equation, L(D)yp = p(x), and yh a general solution of the associated homogeneous linear ODE, L(D)yh = 0. Then the general solution of L(D)y = p(x) is y = yp + yh.

Proof. L(D)y = L(D)(yp + yh) = L(D)yp + L(D)yh = p(x) + 0 = p(x).

1.3.2. Second order Linear ODEs

They can be written in the form y'' + a(x)y' + b(x)y = p(x). By Proposition 1.3.6, if we know a particular solution, we only need to solve the associated homogeneous equation

y'' + a(x)y' + b(x)y = 0.

Assume we have two solutions y₁, y₂ of the homogeneous equation and we look for a solution satisfying the initial conditions y(x₀) = y₀ and y'(x₀) = y₀'. By Corollary 1.3.5, any combination c₁y₁ + c₂y₂ is a solution, and imposing the initial conditions gives the linear system

c₁y₁(x₀) + c₂y₂(x₀) = y₀
c₁y₁'(x₀) + c₂y₂'(x₀) = y₀'

This system has a unique solution (c₁, c₂) when the following determinant, called the Wronskian, is nonzero:

W(y₁, y₂)(x₀) = y₁(x₀)y₂'(x₀) − y₂(x₀)y₁'(x₀) ≠ 0.

A pair of nonzero solutions y₁, y₂ whose Wronskian W(y₁, y₂) = y₁y₂' − y₂y₁' is nonzero is called a Fundamental System of Solutions.
Proposition 1.3.7. {y₁, y₂} is a fundamental system of solutions if and only if y₁ and y₂ are linearly independent.

Proof. If y₁, y₂ are linearly dependent, there exist constants c₁, c₂, not both zero, such that

c₁y₁ + c₂y₂ = 0, and hence c₁y₁' + c₂y₂' = 0.

So the linear system

c₁y₁ + c₂y₂ = 0
c₁y₁' + c₂y₂' = 0

has a nontrivial solution (c₁, c₂), and therefore the Wronskian is null.

Conversely,

W(y₁, y₂) = 0  ⟹  y₁y₂' − y₂y₁' = 0  ⟹  y₂'/y₂ = y₁'/y₁  ⟹  y₂ = ky₁,

and they are linearly dependent.


If {y₁, y₂} is a fundamental system of solutions of a second order homogeneous linear ODE, then the general solution is

y = c₁y₁ + c₂y₂, with c₁, c₂ constants.

Example 1.3.8. The homogeneous linear equation x²y'' − 2xy' + 2y = 0 has solutions y = x and y = x². The Wronskian

W(x, x²) = x·2x − x²·1 = x²

is nonzero. Therefore {x, x²} is a fundamental system of solutions and the general solution is y = c₁x + c₂x².
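The Wronskian computation of Example 1.3.8 can be reproduced with a small generic helper (a sketch; the helper name and sample points are our own):

```python
# Sketch: compute the Wronskian of y1 = x, y2 = x**2 (Example 1.3.8);
# W(x, x**2) = x*2x - x**2*1 = x**2, nonzero whenever x != 0.
def wronskian(y1, dy1, y2, dy2, x):
    return y1(x) * dy2(x) - y2(x) * dy1(x)

W = lambda x: wronskian(lambda t: t, lambda t: 1.0,
                        lambda t: t**2, lambda t: 2 * t, x)

for x in (-2.0, 1.0, 3.0):
    assert abs(W(x) - x**2) < 1e-12
    assert W(x) != 0.0
print("fundamental system confirmed")
```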
Homogeneous Linear ODEs with constant coefficients

We suppose that the equation is in the form

y'' + ay' + by = 0  ⟺  (D² + aD + b)y = 0, with a, b real constants.

The polynomial λ² + aλ + b is called the characteristic polynomial. We know that it always has two complex roots r, s (equal or different).

Theorem 1.3.9. Let r, s be the roots of the characteristic polynomial of the homogeneous linear equation y'' + ay' + by = 0. Then:

1. If r ≠ s, then {e^(rx), e^(sx)} is a fundamental system of solutions.

2. If r is the unique (double) root, then {e^(rx), xe^(rx)} is a fundamental system of solutions.

Proof. In the first case, the function e^(rx) is a solution (and similarly e^(sx)), because

(D − r)(D − s)e^(rx) = (D − r)(re^(rx) − se^(rx)) = r²e^(rx) − rse^(rx) − r²e^(rx) + rse^(rx) = 0.

Moreover {e^(rx), e^(sx)} is a fundamental system because

W(e^(rx), e^(sx)) = e^(rx)·se^(sx) − e^(sx)·re^(rx) = (s − r)e^((r+s)x) ≠ 0.

In the second case, xe^(rx) is a solution,

(D − r)²(xe^(rx)) = (D − r)(rxe^(rx) + e^(rx) − rxe^(rx)) = (D − r)e^(rx) = re^(rx) − re^(rx) = 0,

and

W(e^(rx), xe^(rx)) = e^(rx)(rxe^(rx) + e^(rx)) − xe^(rx)·re^(rx) = e^(2rx) ≠ 0,

which proves the theorem.

Following what we have seen before, the general solution is

y = c₁e^(rx) + c₂e^(sx)   or   y = (c₁ + c₂x)e^(rx).

A special situation in the first case arises for non-real roots r = α + iβ, s = α − iβ:

y = c₁e^(αx)e^(iβx) + c₂e^(αx)e^(−iβx) = e^(αx)(A cos βx + B sin βx)
(Figure: a mass m on a spring oscillates about its idle state following x = L cos ωt; the period T and 2T are marked on the time axis.)

Example 1.3.10 (Simple Harmonic Motion). It is typified by the motion of a mass m on a spring when it is subject to a linear elastic restoring force F given by Hooke's Law, F = −kx, with k a constant that depends on the spring and x the displacement at time t.

The equation is

mx''(t) = −kx(t),

which is a second order homogeneous linear ODE. The characteristic polynomial is

λ² + k/m = (λ − i√(k/m))(λ + i√(k/m)) = (λ − iω)(λ + iω), where ω = √(k/m).

Therefore the solution is

x(t) = c₁ cos ωt + c₂ sin ωt

Considering the initial conditions x(0) = L and v₀ = x'(0) = 0, then

L = c₁ cos 0 + c₂ sin 0
0 = −c₁ω sin 0 + c₂ω cos 0
⟹  c₁ = L, c₂ = 0.

Hence

x(t) = L cos ωt.
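The harmonic solution can be checked against the equation of motion with a second-order finite difference. A Python sketch (m, k and L are our own test values):

```python
import math

# Sketch: finite-difference check that x(t) = L*cos(omega*t) solves
# m*x'' = -k*x (Example 1.3.10) for sample parameter values.
m, k, L = 2.0, 8.0, 0.5
omega = math.sqrt(k / m)                  # omega = sqrt(k/m) = 2.0
x = lambda t: L * math.cos(omega * t)
h = 1e-4
for t in (0.0, 0.3, 1.0):
    x2 = (x(t + h) - 2 * x(t) + x(t - h)) / h**2   # approximate x''(t)
    assert abs(m * x2 + k * x(t)) < 1e-4
print("simple harmonic motion verified")
```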
Second order non homogeneous linear ODEs with constant coefficients

As seen in Proposition 1.3.6, when the general solution yh of the associated homogeneous equation is known, to solve the non homogeneous linear equation

y'' + ay' + by = p(x)

we only need to find a particular solution yp; then the general solution is y = yp + yh.

There are several methods for finding a particular solution; only two of them will be seen here.

Indeterminate coefficients method. It consists in looking for a particular solution "similar" to the function p(x), computing some indeterminate coefficients. For example, if p(x) is a polynomial of degree 2, we look for another polynomial of degree 2 (perhaps higher), yp = Ax² + Bx + C, with indeterminate coefficients A, B, C.
Example 1.3.11. To solve the equation y'' + 3y' + 2y = x, first we find a general solution of the associated homogeneous equation through the characteristic polynomial λ² + 3λ + 2 = (λ + 1)(λ + 2); then

yh = c₁e^(−x) + c₂e^(−2x).

To find a particular solution we suppose yp = Ax + B (because p(x) is a polynomial of first degree). So

(Ax + B)'' + 3(Ax + B)' + 2(Ax + B) = x
3A + 2Ax + 2B = x
2A = 1, 3A + 2B = 0  ⟹  A = 1/2, B = −3/4

The general solution of the non homogeneous equation is

y = yp + yh = x/2 − 3/4 + c₁e^(−x) + c₂e^(−2x).
Sometimes there is no particular solution of the same form as the function p(x).

Example 1.3.12. The equation y'' + y' = x has no particular solution in the form of a first degree polynomial Ax + B:

(Ax + B)'' + (Ax + B)' = x  ⟹  A = x,

and this is impossible because A is a constant.

You can check that there is a particular solution in the form of a second degree polynomial yp = Ax² + Bx:

(Ax² + Bx)'' + (Ax² + Bx)' = x  ⟹  2A + 2Ax + B = x  ⟹  A = 1/2, B = −1.

Therefore

y = x²/2 − x + c₁e^(−x) + c₂

is the general solution.


Table 1.1 gives a list of proposed forms for the particular solution, depending on the
term p(x) in the non homogeneous linear equation.

Table 1.1: Pm(x) and Qm(x) represent polynomials of degree m; in the trigonometric
rows, k = max(m, n).

    p(x) is of the form                Roots of characteristic        Form of the particular
                                       polynomial                     solution
    ---------------------------------  -----------------------------  ---------------------------------------
    Pm(x)                              λ = 0 is not a root            Q̃m(x)
                                       λ = 0 is a root with           x^s Q̃m(x)
                                       multiplicity s
    e^{rx} Pm(x)                       λ = r is not a root            e^{rx} Q̃m(x)
                                       λ = r is a root with           x^s e^{rx} Q̃m(x)
                                       multiplicity s
    Pm(x) cos βx + Qn(x) sin βx        λ = ±iβ is not a root          P̃k(x) cos βx + Q̃k(x) sin βx
                                       λ = ±iβ is a root              x (P̃k(x) cos βx + Q̃k(x) sin βx)
    e^{αx}(Pm(x) cos βx +              λ = α ± iβ is not a root       e^{αx}(P̃k(x) cos βx + Q̃k(x) sin βx)
      Qn(x) sin βx)                    λ = α ± iβ is a root           x e^{αx}(P̃k(x) cos βx + Q̃k(x) sin βx)
Example 1.3.13. Integrate the ODE 2y'' - y' - y = x² eˣ.

Factoring the characteristic polynomial, 2λ² - λ - 1 = 2(λ - 1)(λ + 1/2), so

    yh = c1 eˣ + c2 e^{-x/2}.

Now, according to Table 1.1, since λ = 1 is a simple root we look for a particular
solution yp = x eˣ (Ax² + Bx + C):

    2(x eˣ(Ax² + Bx + C))'' - (x eˣ(Ax² + Bx + C))' - x eˣ(Ax² + Bx + C) = x² eˣ
    eˣ (9Ax² + (12A + 6B)x + (4B + 3C)) = x² eˣ

    { 9A = 1
    { 12A + 6B = 0     ⟹  A = 1/9, B = -2/9, C = 8/27,
    { 4B + 3C = 0

so

    yp = (x³/9 - 2x²/9 + 8x/27) eˣ

and

    y = yp + yh = (x³/9 - 2x²/9 + 8x/27) eˣ + c1 eˣ + c2 e^{-x/2}.

Variation of constants method. Suppose a non homogeneous linear equation

    y'' + ay' + by = p(x).

Similarly to what we have seen in subsection 1.2.4, we can use the general solution of
the associated homogeneous equation, yh = c1 y1 + c2 y2, to find a particular solution of
the non homogeneous linear equation of the form yp = C1 y1 + C2 y2, varying the
constants into functions C1 = C1(x) and C2 = C2(x). We have

    yp' = (C1 y1 + C2 y2)' = (C1' y1 + C2' y2) + (C1 y1' + C2 y2').

To compute the second derivative of yp we do not want to involve the second derivatives
of the functions C1 and C2, hence we impose

    C1' y1 + C2' y2 = 0                                                (1.4)

and the second derivative is

    yp'' = (C1 y1' + C2 y2')' = (C1 y1'' + C1' y1') + (C2 y2'' + C2' y2').

Now, imposing that yp is a solution, we have

    (C1 y1'' + C1' y1') + (C2 y2'' + C2' y2') + a(C1 y1' + C2 y2') + b(C1 y1 + C2 y2) = p(x)
    ⟹ C1 (y1'' + a y1' + b y1) + C2 (y2'' + a y2' + b y2) + C1' y1' + C2' y2' = p(x),

and since y1, y2 solve the homogeneous equation, the first two terms vanish, leaving

    C1' y1' + C2' y2' = p(x).                                          (1.5)

Joining equations (1.4) and (1.5) we obtain the system

    { C1' y1 + C2' y2 = 0
    { C1' y1' + C2' y2' = p(x)

which has a solution because the Wronskian W(y1, y2) ≠ 0. Finally, integrating, we
obtain C1 and C2, and therefore yp.

The best way to understand this method, as usual, is by solving exercises.
Example 1.3.14. We are going to use the method of variation of constants to solve the
same problem as Example 1.3.13, 2y'' - y' - y = x² eˣ, i.e. y'' - y'/2 - y/2 = x² eˣ / 2.

From the characteristic polynomial, yh = c1 eˣ + c2 e^{-x/2}. The particular solution
yp = C1 eˣ + C2 e^{-x/2} gives the linear system

    { C1' eˣ + C2' e^{-x/2} = 0
    { C1' eˣ - (1/2) C2' e^{-x/2} = x² eˣ / 2

with Wronskian

    W = | eˣ        e^{-x/2}       | = -(3/2) e^{x/2}.
        | eˣ   -(1/2) e^{-x/2}     |

Cramer's method for solving linear systems gives

    C1' = | 0            e^{-x/2}      | / (-(3/2) e^{x/2}) = x²/3
          | x² eˣ / 2  -(1/2) e^{-x/2} |

    ⟹ C1 = ∫ x²/3 dx = x³/9,

    C2' = | eˣ   0         | / (-(3/2) e^{x/2}) = -(x²/3) e^{3x/2}
          | eˣ   x² eˣ / 2 |

    ⟹ C2 = -∫ (x²/3) e^{3x/2} dx = -((18x² - 24x + 16)/81) e^{3x/2}.

Therefore the general solution of the non homogeneous equation is

    y = (x³/9 - 2x²/9 + 8x/27 - 16/81) eˣ + c1 eˣ + c2 e^{-x/2}.
1.3.3. Linear EDOs of order n

Similarly to the second order equations, they can be written in the form

    y^(n) + a1(x) y^(n-1) + ··· + an(x) y = p(x)

with every ai(x) and p(x) functions depending on x. The associated homogeneous
equation is written y^(n) + a1(x) y^(n-1) + ··· + an(x) y = 0.

A system of n nonzero solutions y1, y2, ..., yn which has Wronskian

    W(y1, y2, ..., yn) = | y1         y2         ...  yn         |
                         | y1'        y2'        ...  yn'        |  ≠ 0
                         | ...        ...        ...  ...        |
                         | y1^(n-1)   y2^(n-1)   ...  yn^(n-1)   |

is called a Fundamental System of Solutions.

Proposition 1.3.15. {y1, y2, ..., yn} is a fundamental system of solutions if and only if
they are linearly independent.

If {y1, y2, ..., yn} is a fundamental system of solutions of a homogeneous linear ODE,
then the general solution is

    y = c1 y1 + c2 y2 + ··· + cn yn,  with ci constants.
Homogeneous Linear ODEs with constant coefficients

We suppose that the equation is in the form

    y^(n) + a1 y^(n-1) + ··· + an y = 0,  equivalently  (D^n + a1 D^{n-1} + ··· + an) y = 0,

with ai real constants. The polynomial p(λ) = λ^n + a1 λ^{n-1} + ··· + an is called the
characteristic polynomial. We know that it always has n complex roots ri (equal or
different). Now we distinguish between the different situations of the roots to construct
a fundamental system of solutions:

    If r is a simple root of p(λ), we take the function e^{rx}.

    If r is a double root of p(λ), we take the functions e^{rx} and x e^{rx}.

    In general, if r is a root of p(λ) with multiplicity k, we take the set of functions
    {e^{rx}, x e^{rx}, x² e^{rx}, ..., x^{k-1} e^{rx}}.

All these functions together form a fundamental system of solutions and provide the
general solution of the homogeneous linear ODE.
Example 1.3.16. Find the general solution of y⁽⁴⁾ - 5y'' + 4y = 0.

The characteristic polynomial is p(λ) = λ⁴ - 5λ² + 4 = (λ - 2)(λ + 2)(λ - 1)(λ + 1).
It has four simple roots, so the general solution is

    y = c1 e^{2x} + c2 e^{-2x} + c3 eˣ + c4 e^{-x}.

Example 1.3.17. Find the general solution of y⁽⁴⁾ - 8y'' + 16y = 0.

The characteristic polynomial is p(λ) = λ⁴ - 8λ² + 16 = (λ - 2)²(λ + 2)². It has two
real double roots, r1 = 2 and r2 = -2, so the general solution is

    y = (c1 + c2 x) e^{2x} + (c3 + c4 x) e^{-2x}.
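Since plugging y = e^{rx} into a constant-coefficient equation reduces it to the characteristic polynomial times e^{rx}, checking the roots is enough to verify these fundamental systems. A minimal check for the two examples:

```python
# Plugging y = e^{rx} into y'''' - 5y'' + 4y = 0 gives (r^4 - 5r^2 + 4) e^{rx},
# so each root of the characteristic polynomial yields a solution:
for r in (1, -1, 2, -2):
    assert r**4 - 5 * r**2 + 4 == 0

# For y'''' - 8y'' + 16y = 0, r = 2 and r = -2 are roots of (r^2 - 4)^2:
for r in (2, -2):
    assert (r**2 - 4)**2 == 0
```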

Example 1.3.18. Find the general solution of y⁽⁴⁾ - 2y''' + 2y'' - 2y' + y = 0.

The characteristic polynomial is p(λ) = λ⁴ - 2λ³ + 2λ² - 2λ + 1 = (λ - 1)²(λ² + 1).
It has a real double root r1 = 1 and two non real simple roots r2 = i, r3 = -i, so the
general solution is

    y = (c1 + c2 x) eˣ + c3 e^{ix} + c4 e^{-ix}
      = (c1 + c2 x) eˣ + d1 cos x + d2 sin x.

Example 1.3.19. Find the general solution of y⁽⁴⁾ + 8y'' + 16y = 0.

The characteristic polynomial is p(λ) = λ⁴ + 8λ² + 16 = (λ² + 4)². It has two non real
double roots r1 = 2i, r2 = -2i, so the general solution is

    y = c1 e^{2ix} + c2 x e^{2ix} + c3 e^{-2ix} + c4 x e^{-2ix}
      = c1 cos 2x + i c1 sin 2x + c2 x cos 2x + i c2 x sin 2x
        + c3 cos(2x) - i c3 sin(2x) + c4 x cos(2x) - i c4 x sin(2x)
      = (d1 + d2 x) cos 2x + (d3 + d4 x) sin 2x.
Non homogeneous linear ODEs with constant coefficients

A particular solution is found in a similar way as in the case of second-order equations.
The methods of indeterminate coefficients and variation of constants remain valid for
equations of order higher than two.

1.4. Systems of Linear Differential Equations

A system of differential equations of order r expresses relations between n
functions x1(t), x2(t), ..., xn(t) and their successive derivatives:

    F1(t, x1, ..., xn, x1', ..., xn', x1'', ..., xn'', ..., x1^(r), ..., xn^(r)) = 0
    F2(t, x1, ..., xn, x1', ..., xn', x1'', ..., xn'', ..., x1^(r), ..., xn^(r)) = 0
    ...
    Fk(t, x1, ..., xn, x1', ..., xn', x1'', ..., xn'', ..., x1^(r), ..., xn^(r)) = 0
The next problem is solved by a system of differential equations.

[Figure 1.4: Coupled harmonic oscillators — two masses m1, m2 in a line, joined by
springs with constants k1, k3 (between the masses) and k2; displacements x1 and x2.]

Example 1.4.1 (Coupled harmonic oscillators). Suppose two objects joined by three
springs moving along a line. The springs verify Hooke's law with respective constants
k1, k2, k3 (Figure 1.4). The time-dependent functions x1(t) and x2(t) measure the
distance of each object from its rest position. The motion is governed by the following
second order system of differential equations:

    m1 d²x1/dt² = -k1 x1 + k3 (x2 - x1)
    m2 d²x2/dt² = -k2 x2 + k3 (x1 - x2)

1.4.1. First Order Systems

Although a first order system of ODEs may take a more general form, they are usually
written

    dx1/dt = f1(t, x1, x2, ..., xn)
    dx2/dt = f2(t, x1, x2, ..., xn)
    ...
    dxn/dt = fn(t, x1, x2, ..., xn)

and, denoting x = (x1, x2, ..., xn) and f = (f1, f2, ..., fn), this system can be written
dx/dt = f(t, x). General solutions express a parametric family of curves of R^n. The
so-called initial value problem

    dx/dt = f(t, x),  x(t0) = x0 ∈ R^n

determines a unique curve (particular solution) when f is a continuous function with
∂fi/∂xj continuous for every i, j.
Sometimes a system of differential equations is expressed in symmetric form:

    dx1/P1(t, x1, ..., xn) = dx2/P2(t, x1, ..., xn) = ··· = dxn/Pn(t, x1, ..., xn) = dt/Q(t, x1, ..., xn)
First Order Linear Systems

A first order system is linear if it can be expressed as

    dx1/dt = a11(t) x1 + a12(t) x2 + ··· + a1n(t) xn + b1(t)
    dx2/dt = a21(t) x1 + a22(t) x2 + ··· + a2n(t) xn + b2(t)
    ...
    dxn/dt = an1(t) x1 + an2(t) x2 + ··· + ann(t) xn + bn(t)

In matrix form it is

    x'(t) = A(t) x(t) + b(t).

A Cauchy problem consists of finding solutions of the system which verify the so-called
initial conditions x1(t0) = x̄1, x2(t0) = x̄2, ..., xn(t0) = x̄n. It is expressed

    x'(t) = A(t) x(t) + b(t),  with x1(t0) = x̄1, x2(t0) = x̄2, ..., xn(t0) = x̄n.    (1.6)

Theorem 1.4.2. Let A(t) and b(t) be, respectively, an n×n matrix and an n×1 vector of
functions, continuous in an open interval (a, b) ⊆ R. For t0 ∈ (a, b), the Cauchy
problem (1.6) has a unique solution.

Every first order linear system with n variables is equivalent to a linear ODE of order
n. To see this, consider the ODE

    y^(n) + a1(x) y^(n-1) + ··· + a_{n-1}(x) y' + an(x) y = p(x)

and the change of variables y1 = y, y2 = y', ..., yn = y^(n-1), which produces the system

    y1' = y2
    y2' = y3
    ...
    yn' = p(x) - an(x) y1 - ··· - a2(x) y_{n-1} - a1(x) yn

The converse can be done by successive derivatives and replacements.

Example 1.4.3. Express as a single ODE the following system:

    { x1' = t x1 - x2 + t²
    { x2' = (1 - t) x1 + t² x2,

i.e.  (x1', x2')ᵀ = [[t, -1], [1-t, t²]] (x1, x2)ᵀ + (t², 0)ᵀ.                     (1.7)

Differentiate the first equation and replace x2' from the second equation to obtain

    x1'' = x1 + t x1' - x2' + 2t  ⟹  x1'' = t x1' + t x1 - t² x2 + 2t.

Finally, using the first equation again, eliminate x2 from this, and you obtain the
desired second order linear ODE

    x1'' - (t² + t) x1' + (t³ - t) x1 = -t⁴ + 2t

equivalent to the system (1.7).
First Order Linear Systems with constant coefficients

A first order linear system with constant coefficients is expressed as

    dx1/dt = a11 x1 + a12 x2 + ··· + a1n xn + b1(t)
    dx2/dt = a21 x1 + a22 x2 + ··· + a2n xn + b2(t)
    ...
    dxn/dt = an1 x1 + an2 x2 + ··· + ann xn + bn(t)

where the aij are real constants. In matrix form,

    ( x1'(t) )   ( a11  a12  ...  a1n ) ( x1(t) )   ( b1(t) )
    ( x2'(t) ) = ( a21  a22  ...  a2n ) ( x2(t) ) + ( b2(t) )
    (  ...   )   ( ...                ) (  ...  )   (  ...  )
    ( xn'(t) )   ( an1  an2  ...  ann ) ( xn(t) )   ( bn(t) )

in short

    x'(t) = A x(t) + b(t).                                             (1.8)

To solve it, we generalize the known method for first order equations. First, we solve
the associated homogeneous system x' = Ax. It can be shown that its general solution is

    xh = e^{At} c̄

where the exponential matrix is defined as e^{At} = Σ_{k=0}^∞ (At)^k / k!, and
c̄ = (c1, c2, ..., cn)ᵀ is a column matrix of constants.

In a second step, by any method, we calculate a particular solution xp of system (1.8),
and hence the general solution we seek is

    x = xh + xp.
How to compute the exponential matrix e^{At} for n = 2?

    If matrix A is diagonalizable, A = P D P^{-1}, where D = [[λ1, 0], [0, λ2]] is the
    diagonal matrix of (real or complex) eigenvalues and P = (v | w) is the change of
    basis matrix (v, w eigenvectors, of course). Hence

        xh = e^{At} c̄ = P e^{Dt} P^{-1} c̄ = P e^{Dt} c = c1 e^{λ1 t} v + c2 e^{λ2 t} w,

    absorbing the constant vector P^{-1} c̄ into a new constant vector c = (c1, c2)ᵀ.

    If matrix A is not diagonalizable, we know that there exist a unique (real)
    eigenvalue λ and a unique independent eigenvector v. It is possible to prove that
    there exist a matrix J = [[λ, 1], [0, λ]] (called a Jordan matrix) and an invertible
    matrix P = (v | w), w being a vector verifying Aw = v + λw (w is found by solving
    the algebraic system (A - λI)w = v), such that A = P J P^{-1}.

    Matrix J can be expressed as J = [[λ, 0], [0, λ]] + [[0, 1], [0, 0]] = D + N.
    Therefore

        xh = e^{At} c̄ = P e^{Jt} P^{-1} c̄ = P e^{Dt} e^{Nt} c.

    But

        e^{Dt} = [[e^{λt}, 0], [0, e^{λt}]]   and   e^{Nt} = Σ_{k=0}^∞ (Nt)^k / k! = [[1, t], [0, 1]]

    because (Nt)² = 0, so

        xh = P e^{Dt} (c1 + c2 t, c2)ᵀ = (c1 + c2 t) e^{λt} v + c2 e^{λt} w.

How to find a particular solution xp? Methods similar to those of the one-equation
case are used. The particular case of a system of two equations will be studied below.

We present three examples in different situations.
Example 1.4.4 (Non-homogeneous, diagonalizable matrix). Solve the Cauchy problem

    { dx/dt = 4x - 2y + 1
    { dy/dt = 3x - y + t

with x(0) = 1, y(0) = 0.

First step. Solve the associated homogeneous system

    { x' = 4x - 2y
    { y' = 3x - y       i.e.  (x', y')ᵀ = [[4, -2], [3, -1]] (x, y)ᵀ.

The characteristic polynomial is det [[4-λ, -2], [3, -1-λ]] = λ² - 3λ + 2 = (λ-2)(λ-1).
The eigenvectors are:

    For λ = 2:  [[2, -2], [3, -3]] (v1, v2)ᵀ = 0  ⟹  v = (1, 1).
    For λ = 1:  [[3, -2], [3, -2]] (w1, w2)ᵀ = 0  ⟹  w = (2, 3).

The general solution of the associated homogeneous system is

    xh(t) = e^{At} c̄ = c1 e^{t} (2, 3)ᵀ + c2 e^{2t} (1, 1)ᵀ,

therefore

    { x1h(t) = 2c1 e^t + c2 e^{2t}
    { x2h(t) = 3c1 e^t + c2 e^{2t}.

Second step. Find a particular solution using the indeterminate coefficients method.
Suppose

    xp = (x1p, x2p) = (at + b, ct + d)

is a solution; then, substituting into the system,

    { a = (4a - 2c)t + 4b - 2d + 1
    { c = (3a - c + 1)t + 3b - d       ⟹  a = -1, b = -1, c = -2, d = -1.

Third step. The general solution of the system is

    { x1(t) = 2c1 e^t + c2 e^{2t} - t - 1
    { x2(t) = 3c1 e^t + c2 e^{2t} - 2t - 1.

Replacing the initial conditions (t = 0) produces the linear algebraic system

    { 2c1 + c2 - 1 = 1
    { 3c1 + c2 - 1 = 0       ⟹  c1 = -1, c2 = 4

and, therefore, the solution is

    x1(t) = 4e^{2t} - 2e^t - t - 1
    x2(t) = 4e^{2t} - 3e^t - 2t - 1.
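A numeric sketch checking the final answer of Example 1.4.4 (initial conditions and both equations, derivatives by central differences):

```python
import math

def x1(t):
    return 4 * math.exp(2 * t) - 2 * math.exp(t) - t - 1

def x2(t):
    return 4 * math.exp(2 * t) - 3 * math.exp(t) - 2 * t - 1

# initial conditions x(0) = 1, y(0) = 0
assert abs(x1(0) - 1) < 1e-12 and abs(x2(0)) < 1e-12

def d(f, t, h=1e-6):
    # first derivative by central differences
    return (f(t + h) - f(t - h)) / (2 * h)

# the system x1' = 4x1 - 2x2 + 1, x2' = 3x1 - x2 + t
for t in [0.0, 0.4, 1.1]:
    assert abs(d(x1, t) - (4 * x1(t) - 2 * x2(t) + 1)) < 1e-6
    assert abs(d(x2, t) - (3 * x1(t) - x2(t) + t)) < 1e-6
```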

Example 1.4.5 (Homogeneous, two different complex eigenvalues). Consider the system of
differential equations

    { x' - 2x + y = 0
    { y' - x - 2y = 0

and solve the Cauchy problem with x(0) = y(0) = 2.

This system is homogeneous and its matrix is [[2, -1], [1, 2]], with eigenvalues:

    λ1 = 2 - i, eigenvector v = (1, i);
    λ2 = 2 + i, eigenvector w = (1, -i).

The general solution is

    x(t) = c1 e^{(2-i)t} + c2 e^{(2+i)t} = e^{2t} [(c1 + c2) cos t + i(c2 - c1) sin t]
    y(t) = i c1 e^{(2-i)t} - i c2 e^{(2+i)t} = e^{2t} [(c1 + c2) sin t + i(c1 - c2) cos t].

To solve the Cauchy problem, replace the initial conditions; then

    { c1 + c2 = 2
    { i c1 - i c2 = 2       ⟹  c1 = 1 - i, c2 = 1 + i

and hence

    x(t) = 2e^{2t} (cos t - sin t)
    y(t) = 2e^{2t} (cos t + sin t).
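Despite the complex intermediate constants, the final solution is real, and it can be checked numerically (finite differences sketch):

```python
import math

def x(t):
    return 2 * math.exp(2 * t) * (math.cos(t) - math.sin(t))

def y(t):
    return 2 * math.exp(2 * t) * (math.cos(t) + math.sin(t))

def d(f, t, h=1e-6):
    # first derivative by central differences
    return (f(t + h) - f(t - h)) / (2 * h)

# initial conditions x(0) = y(0) = 2
assert abs(x(0) - 2) < 1e-12 and abs(y(0) - 2) < 1e-12

for t in [0.0, 0.7, 1.3]:
    assert abs(d(x, t) - (2 * x(t) - y(t))) < 1e-6   # x' = 2x - y
    assert abs(d(y, t) - (x(t) + 2 * y(t))) < 1e-6   # y' = x + 2y
```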
Example 1.4.6 (Homogeneous, non-diagonalizable matrix). Find the general solution of
the system of differential equations

    { dx/dt = x - y
    { dy/dt = x + 3y

Try to diagonalize the matrix [[1, -1], [1, 3]]: its characteristic polynomial is
λ² - 4λ + 4 = (λ - 2)².

The unique eigenvalue is λ = 2 and the unique independent eigenvector is v = (1, -1).
To compute the second vector w, solve the algebraic system

    (A - 2I) w = v  ⟹  [[-1, -1], [1, 1]] (w1, w2)ᵀ = (1, -1)ᵀ  ⟹  w1 + w2 = -1.

For simplicity, we choose w = (-1, 0); then

    xh = (c1 + c2 t) e^{2t} (1, -1)ᵀ + c2 e^{2t} (-1, 0)ᵀ.

Hence

    x = (c1 + c2 t) e^{2t} - c2 e^{2t}
    y = -(c1 + c2 t) e^{2t}.
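Again a quick numeric verification (finite differences; the constants c1, c2 below are arbitrary assumptions):

```python
import math

c1, c2 = 0.8, -1.5   # arbitrary constants for the check

def x(t):
    return (c1 + c2 * t) * math.exp(2 * t) - c2 * math.exp(2 * t)

def y(t):
    return -(c1 + c2 * t) * math.exp(2 * t)

def d(f, t, h=1e-6):
    # first derivative by central differences
    return (f(t + h) - f(t - h)) / (2 * h)

for t in [0.0, 0.5, 1.2]:
    assert abs(d(x, t) - (x(t) - y(t))) < 1e-6       # dx/dt = x - y
    assert abs(d(y, t) - (x(t) + 3 * y(t))) < 1e-6   # dy/dt = x + 3y
```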

Exercises

Exercise 1.1
Find the general solution of the following differential equations:

1. (y')² = x + y + 8.

2. (y + xy²) dx + (x - x²y) dy = 0, using μ = μ(xy).

3. (3x + y - 2) + y'(x - 1) = 0.

4. y' = y/x - x² y⁸.

5. y' + y cot x = 5 e^{cos x}.

6. y' = (y - 2x)^{1/3} + 2, y(1) = 2. Does it have a unique solution?

7. y' = (x cos y + sin 2y)^{-1}.

Exercise 1.2
Integrate the ODE (3xy² - 4y) + (3x - 4x²y) y' = 0, using an integrating factor
μ(x, y) = x^m y^n.

Exercise 1.3
Find the curve that passes through the origin of coordinates, O, and such that the area
enclosed between the curve and the chord OA, up to a point A of the curve, is
proportional to the square of the abscissa of the point A.

Exercise 1.4
1. Find the functions such that the tangent line at each point (x, y) of their graph
cuts the OY axis at the point (0, y/2).
2. Find the family of curves orthogonal to the solutions of the previous item.

Exercise 1.5
The midpoints of the tangent segments to a curve, between the point of tangency and the
OX axis, describe the parabola y² = x. Find the curve, knowing that it passes through
the point (1, 2).

Exercise 1.6
The amount of radioactive material x(t) varies with time following the Cauchy problem

    { x'(t) = -k x(t)
    { x(0) = x0

where k is a constant depending on the material and x0 is the initial amount of matter.
The half-life T is the time it takes for the amount of radioactive material to reduce
to one half. Prove that T = (1/k) ln 2, which shows that the half-life does not depend
on the amount of radioactive material.
Exercise 1.7
A body in damped fall through a fluid follows the differential equation

    m y''(t) = -mg - k y'

where y is the height as a function of time, m is the mass of the body, g is gravity
(constant) and k > 0 is a resistance constant depending on the fluid. Compute the
function y assuming that the body starts at rest (y'(0) = 0) from height y(0) = h.
What can we say about the falling velocity y' when the time t tends to infinity?

Exercise 1.8
Analogously to the previous exercise, we can state the damped simple harmonic motion,
assuming the setting of Example 1.3.10 but with a damping force proportional to the
velocity, with friction constant R ≥ 0:

    m x''(t) = -k x(t) - R x'(t)

where m is the mass and k is the spring constant (used in Hooke's law). Compute the
general solution and check that there are different types of solutions depending on the
value of the friction constant R, each following one of the models in the corresponding
graphics. Compute the relations between the constants m, k, R for the solution to
correspond to each of the types.
Exercise 1.9
Reduce the order and solve, if possible, the following differential equations:

1. y'' = (y')² - y (y')³.

2. y''' = (y'')².

3. x² y'' = (y')² - 2x y' + 2x².

4. y⁽⁵⁾ - (1/x) y⁽⁴⁾ = 0.

5. y'' - y' tan x = (1/2) sin 2x.

6. y y'' - (y')² = 6x y².
Exercise 1.10
Find the differential equation whose set of solutions is:

1. y = C eˣ + D e^{2x}.

2. y e^{Cx} = 1.

3. y = Ax² + Bx + C + D sin x + E cos x.
Exercise 1.11
Let {eˣ, cos x, sin x} be a fundamental system of solutions of a homogeneous linear
differential equation. Find the particular solution that satisfies the initial
conditions y(0) = 3, y'(0) = 4, y''(0) = 1.

Exercise 1.12
The roots of the characteristic polynomial of a higher order ODE are:

    λ1 = 0 (simple), λ2 = 2 (triple), λ3 = 1 + i, λ4 = 1 - i (double).

Determine the general solution of the differential equation.
Exercise 1.13
Find the solution of:

1. y'' - 3y' + 2y = (x² + x) e^{2x}.

2. y'' + 3y' + 2y = 1/(1 + eˣ).

3. y''' - 3y'' + 5y' - 2y = 0.

4. y⁽⁴⁾ - 2y'' + y' - y = 0 with y(0) = y'(0) = 1.

5. y'' - 2y' + y = ln x.

6. y'' + 5y' + 6y = 3e^{-2x}.

7. y''' + y' = tan x.

8. y⁽⁴⁾ - y = 8eˣ with y(0) = 1, y'(0) = 0, y''(0) = 1, y'''(0) = 0.

Exercise 1.14
Find the general solution of the ODE x² y'' - x y' - 3y = 5x⁴, using the change of
variable x = e^t.
(Note: this change reduces the equation to constant coefficients.)
Exercise 1.15
Reduce the following second order equation to a first order linear equation

    x y'' - x y' - y = x eˣ

with y = x^n eˣ, following these steps:

1. Find a value of n such that the function u = x^n eˣ is a solution of the associated
homogeneous equation.

2. For this value of n, make the change y = u ∫ z dx, where z is an unknown function,
and expand y' and y''.

3. Substitute the values of y'', y', y into the original equation and check that the
result is another equation in z of reduced order.
Exercise 1.16
Solve the following Cauchy problem:

    { y''' = 3 y y'
    { y(0) = 1, y'(0) = 1, y''(0) = 3/2.
Exercise 1.17
Find the general solution of the ODE

    (cos x - sin x) y'' + 2y' sin x - (sin x + cos x) y = eˣ (cos x - sin x)²

knowing that the functions y1 = sin x, y2 = eˣ are solutions of the associated
homogeneous equation.
Exercise 1.18
Solve the following systems of differential equations:

1. { x' = 3x + 5y
   { y' = 2x - 8y

2. { 5x'' + y' + 2x = 4 cos t
   { 3x' + y = 8t cos t

3. { x' + y = sin 2t
   { y' - x = cos 2t
   with x(0) = 2, y(0) = 5

4. { dx/dt = y
   { dy/dt = y²/x

5. { dx/dt = y²/x
   { dy/dt = x²/y

6. dx/(x(y - z)) = dy/(y(z - x)) = dz/(z(x - y))

7. { dx/dt = y
   { dy/dt = x
   { dz/dt = z
   with x(0) = y(0) = z(0) = 1

Chapter 2

Fourier Transform and Laplace Transform

2.1. Periodic Functions and Fourier Series

A function f is said to be a periodic function with period T > 0 if f(x + nT) = f(x)
for every integer n.

[Figure 2.1: Periodic function, repeating between x0 and x0 + T.]

Expanding a function as a trigonometric series is sometimes more advantageous than
expanding it as a power series. In particular, astronomical phenomena are usually
periodic, as are electromagnetic waves and vibrating strings, so it makes sense to
express them in terms of periodic functions.
Definition 2.1.1. Let f(x) be a periodic function with period 2π. We say that f admits
a trigonometric expansion in Fourier series if there exist sequences {an}, n = 0, 1, ...
and {bn}, n = 1, 2, ..., called Fourier coefficients, such that

    f(x) = a0/2 + Σ_{k=1}^∞ [ak cos(kx) + bk sin(kx)]                  (2.1)

We start by assuming that the trigonometric series converges and has a continuous
function as its sum on the interval [0, 2π]. If we integrate both sides of Equation
(2.1) and assume that it is permissible to integrate the series term-by-term, we get

    ∫₀^{2π} f(x) dx = ∫₀^{2π} a0/2 dx + Σ_{k=1}^∞ ak ∫₀^{2π} cos(kx) dx + Σ_{k=1}^∞ bk ∫₀^{2π} sin(kx) dx,

but ∫₀^{2π} cos(kx) dx = ∫₀^{2π} sin(kx) dx = 0 because k is an integer. So

    a0 = (1/π) ∫₀^{2π} f(x) dx.

[Figure 2.2: Piecewise continuous function, with jump discontinuities at the partition
points.]


To determine an for we multiply both sides of Equation (2.1) by cos(nx) and integrate
term-by-term from 0 to 2:
Z

f (x) cos(nx) dx
0

Z
Z 2
Z 2



X
a0 2
 X
dx


cos(nx)
dx
+
a
cos
(kx)
cos(nx)
dx
+
b
sin
(kx)
cos(nx)
k
k

2 0 
0
0 
k=1 |
|  {z
} k=1
{z
}


=0
=0

Z 2
Z 2
n1



X Z 2
X


2
 dx

cos(nx)
=
ak
cos(kx)
cos (nx) dx +
ak
cos(kx)

cos(nx) dx + an


k=1
{z
}
{z
} k=n+1
{z
}
|0 
|0
| 0






=0

=0

= an .

Hence
2

1
an =

f (x) cos(nx) dx
0

and, similarly,
bn =

f (x) sin(nx) dx
0

give expressions for the fourier coefficients.


Notice that we are not saying f (x) is equal to its Fourier series. Later we will discuss
conditions under which that is actually true. For now we are just saying that is true for
any periodic function with period 2 and piecewise continuous function on [0, 2].
Definition 2.1.2. A function f (x) is piecewise continuous on a finite interval [a, b]
provided there exists a partition a = x0 < . . . < xn = b of the interval [a, b] and functions
f1 , f2 , . . . , fn continuous on [a, b] such that for t not a partition point

f1 (x) x0 < x < x1 ,


..
..
f (x) =
(2.2)
.
.

f (x) x
<x<x .
n

n1

The values of f at partition points t0 , t1 , . . . , xn are undecided by this equation (A.1).


In particular, equation (A.1) implies that f (x) has one-sided limits at each point of
a < x < b and appropriate one-sided limits at the endpoints. Therefore, f has at worst
a jump discontinuity at each partition point. See Figure 2.2.
32

Example 2.1.3 (Square wave function). Find the Fourier coefficients and Fourier series
of the function defined by

    f(x) = { 0  if -π ≤ x < 0,
           { 1  if 0 ≤ x < π,       and f(x + 2π) = f(x).

This is piecewise continuous and periodic with period 2π; its graph is a train of unit
pulses of width π.

Using the formulas for the Fourier coefficients, we have

    a0 = (1/π) ∫₀^{2π} f(x) dx = (1/π) ( ∫₀^{π} 1 dx + ∫_{π}^{2π} 0 dx ) = 1

and, for n ≥ 1,

    an = (1/π) ∫₀^{2π} f(x) cos(nx) dx = (1/π) ∫₀^{π} cos(nx) dx = (1/π) [sin nx / n]₀^{π} = 0,

    bn = (1/π) ∫₀^{2π} f(x) sin(nx) dx = (1/π) ∫₀^{π} sin(nx) dx = (1/π) [-cos nx / n]₀^{π}
       = { 0        if n even,
         { 2/(nπ)   if n odd.

Therefore the Fourier series is

    1/2 + (2/π) sin x + (2/(3π)) sin 3x + (2/(5π)) sin 5x + ···
        = 1/2 + Σ_{k=1}^∞ 2/((2k-1)π) sin((2k-1)x).

1
0

(a) For k = 1

(b) For k = 2

1
0

(c) For k = 3

(d) For k = 6

Figure 2.3: Here some graphics.

33
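The coefficients can be reproduced numerically. The following sketch approximates bn = (1/π)∫₀^{2π} f(x) sin(nx) dx with a midpoint rule (the number of subintervals N is an arbitrary assumption):

```python
import math

def f(x):
    # one period of the square wave: 1 on [0, pi), 0 on [pi, 2*pi)
    return 1.0 if x % (2 * math.pi) < math.pi else 0.0

def coeff(n, N=20000):
    # midpoint rule for (1/pi) * integral of f(x) sin(nx) over [0, 2*pi]
    h = 2 * math.pi / N
    s = sum(f((j + 0.5) * h) * math.sin(n * (j + 0.5) * h) for j in range(N))
    return s * h / math.pi

assert abs(coeff(1) - 2 / math.pi) < 1e-3       # b_1 = 2/pi
assert abs(coeff(2)) < 1e-3                     # even coefficients vanish
assert abs(coeff(3) - 2 / (3 * math.pi)) < 1e-3 # b_3 = 2/(3*pi)
```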

Theorem 2.1.4 (Dirichlet). If f is a periodic function with period 2π and f and f' are
piecewise continuous on [0, 2π], then the Fourier series is convergent.

The sum of the Fourier series is equal to f(x) at all numbers x where f is continuous.
At the numbers x where f is not continuous, the sum of the Fourier series is the
average of the right and left limits, that is,

    (f(x⁺) + f(x⁻)) / 2.

Proof. Out of our scope.

We use the notation

    f(x) ∼ a0/2 + Σ_{k=1}^∞ [ak cos(kx) + bk sin(kx)]

to represent this situation. The symbol ∼ means = for x such that f(x) is continuous,
but this is not true at discontinuity points.

2.1.1. Fourier Series for Periodic Functions with other Period than 2π

We can find the Fourier series of such functions by making a change of variable. In the
field of engineering it is usual to use the real variable t (time) for functions.
Suppose f(t) has period T, that is, f(t + T) = f(t) for all t. Let x = 2πt/T and

    f̃(x) = f(Tx/(2π));

then f̃ is a function with period 2π, and t = T corresponds to x = 2π. Indeed,

    f̃(x + 2π) = f(T(x + 2π)/(2π)) = f(Tx/(2π) + T) = f(Tx/(2π)) = f̃(x).

So the Fourier series of f(t) can be obtained from the Fourier series of f̃(x):

    f̃(x) ∼ a0/2 + Σ_{k=1}^∞ [ak cos(kx) + bk sin(kx)]
    f(t) ∼ a0/2 + Σ_{k=1}^∞ [ak cos(2πkt/T) + bk sin(2πkt/T)]

and the Fourier coefficients

    a0 = (1/π) ∫₀^{2π} f̃(x) dx,  an = (1/π) ∫₀^{2π} f̃(x) cos(nx) dx,  bn = (1/π) ∫₀^{2π} f̃(x) sin(nx) dx.

Changing variables,

    a0 = (2/T) ∫₀^{T} f(t) dt,  an = (2/T) ∫₀^{T} f(t) cos(2πnt/T) dt,  bn = (2/T) ∫₀^{T} f(t) sin(2πnt/T) dt.

It is easy to see that it is possible to choose any interval [a, a + T] instead of
[0, T]. To get simpler formulas we express them in terms of the frequency ω = 2π/T:

    f(t) ∼ a0/2 + Σ_{k=1}^∞ [ak cos(kωt) + bk sin(kωt)]

with Fourier coefficients

    an = (2/T) ∫₀^{T} f(t) cos(nωt) dt;   bn = (2/T) ∫₀^{T} f(t) sin(nωt) dt.

In the Fourier series we find that the frequencies appear as multiples of the basic
frequency 1/T. The basic frequency is called the fundamental, while the multiples are
called harmonics. Fourier analysis is often called harmonic analysis. A periodic signal
may then be described by its fundamental and harmonics.
Example 2.1.5 (Triangle wave function). Find the Fourier series of the function defined
by

    f(t) = |t| if -1 ≤ t ≤ 1,  and f(t + 2) = f(t) for all t.

The function f(t) is periodic with period T = 2 and ω = π. Choose the interval [-1, 1]
and calculate the Fourier coefficients:

    a0 = (2/2) ∫_{-1}^{1} |t| dt = ∫_{-1}^{0} (-t) dt + ∫_{0}^{1} t dt = 1,

    an = (2/2) ∫_{-1}^{1} |t| cos(nπt) dt = (2 cos(nπ) - 2)/(n²π²)
       = { 0            if n is even,
         { -4/(n²π²)    if n is odd,

    bn = (2/2) ∫_{-1}^{1} |t| sin(nπt) dt = 0.

Therefore

    f(t) = 1/2 - (4/π²) cos(πt) - (4/(9π²)) cos(3πt) - (4/(25π²)) cos(5πt) - ...   (2.3)

[Figure 2.4: Note the very fast convergence of the Fourier series. In the graphic,
the first two terms already give a very good approximation to the function.]
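The coefficients an can again be reproduced by numeric integration (midpoint rule sketch; N is an arbitrary assumption):

```python
import math

def a(n, N=20000):
    # midpoint rule for (2/2) * integral of |t| cos(n*pi*t) over [-1, 1]
    h = 2.0 / N
    return sum(abs(-1 + (j + 0.5) * h) * math.cos(n * math.pi * (-1 + (j + 0.5) * h))
               for j in range(N)) * h

assert abs(a(0) - 1) < 1e-4                      # a_0 = 1
assert abs(a(1) + 4 / math.pi**2) < 1e-4         # a_1 = -4/pi^2
assert abs(a(2)) < 1e-4                          # even coefficients vanish
assert abs(a(3) + 4 / (9 * math.pi**2)) < 1e-4   # a_3 = -4/(9 pi^2)
```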

Example 2.1.6. Using the previous example, we can show that

    1 + 1/3² + 1/5² + 1/7² + 1/9² + ··· = π²/8

just by setting t = 0 in (2.3).
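A quick numeric confirmation of this identity (the tail of the partial sum after N odd terms is about 1/(4N), so a large N gives good accuracy):

```python
import math

# partial sum of 1 + 1/3^2 + 1/5^2 + ...
N = 200000
s = sum(1 / (2 * k + 1)**2 for k in range(N))
assert abs(s - math.pi**2 / 8) < 1e-5
```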

2.1.2. Complex Notation

By using the complex notation for the sine and cosine functions,

    cos θ = (e^{iθ} + e^{-iθ}) / 2,   sin θ = (e^{iθ} - e^{-iθ}) / (2i),

we may write the formula for the Fourier series in a more compact way:

    f(t) = a0/2 + Σ_{k=1}^∞ [ ak (e^{ikωt} + e^{-ikωt})/2 + bk (e^{ikωt} - e^{-ikωt})/(2i) ]
         = a0/2 + Σ_{k=1}^∞ (ak/2 - i bk/2) e^{ikωt} + Σ_{k=1}^∞ (ak/2 + i bk/2) e^{-ikωt}.

Calling c0 = a0/2, ck = ak/2 - i bk/2 and c_{-k} = ak/2 + i bk/2 for k > 0, the function
f(t) can be written in a more compact way:

    f(t) = Σ_{k=-∞}^{∞} ck e^{ikωt}   with   cn = (1/T) ∫₀^{T} f(t) e^{-inωt} dt.

This is called the complex Fourier series. Please note that the summation now also
covers negative indexes: we have negative frequencies.

2.2. Fourier Integral Transform

2.2.1. Definitions

For a complex function f(t), defined for all time t, i.e. -∞ < t < ∞, and absolutely
integrable, i.e. ∫_{-∞}^{∞} |f(t)| dt < ∞, we define the Fourier transform F(f(t)) by

    F[f(t)] = f̂(ω) = ∫_{-∞}^{∞} f(t) e^{-iωt} dt.

The function f̂ is a complex-valued function of the variable ω (frequency), and is
defined for all frequencies. As the function is complex, it may be described by a real
and an imaginary part, or by magnitude and phase (polar form), as with any complex
number.

Warning. Our definition of the Fourier transform is a standard one, but it is not the
only one¹.
Examples

Example 2.2.1. Given the time signal function (rectangle function)

    Π_a(t) = { 1  for |t| < a/2,
             { 0  elsewhere,

the Fourier transform F(Π_a(t)) is

    Π̂_a(ω) = ∫_{-a/2}^{a/2} e^{-iωt} dt = (1/(-iω)) (e^{-iωa/2} - e^{iωa/2}) = 2 sin(ωa/2) / ω.

In this example the Fourier transform is a real function, but this does not always
happen, as the next example shows.

¹ Often in circuit design or signal processing the alternative definition

    F(f(t)) = ∫_{-∞}^{∞} f(t) e^{-2πiνt} dt

is useful.

[Figure 2.5: Graphics of Example 2.2.1. (a) Time signal Π_a(t), a pulse on (-a/2, a/2);
(b) its Fourier transform Π̂_a(ω).]
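The closed form 2 sin(ωa/2)/ω can be checked against a direct numeric evaluation of the defining integral (midpoint rule sketch; the value a = 2 and the number of subintervals are arbitrary assumptions):

```python
import cmath
import math

def ft_rect(w, a=2.0, N=20000):
    # midpoint rule for the integral of e^{-i w t} over [-a/2, a/2]
    h = a / N
    return sum(cmath.exp(-1j * w * (-a / 2 + (j + 0.5) * h)) for j in range(N)) * h

for w in [0.5, 1.0, 3.0]:
    exact = 2 * math.sin(2.0 * w / 2) / w     # 2 sin(a*w/2)/w with a = 2
    assert abs(ft_rect(w) - exact) < 1e-6
```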


Exercise 2.2.2. Compute the Fourier transform of the triangle function

    Λ(t) = { 1 - |t|   if |t| < 1,
           { 0         otherwise.

(Solution: Λ̂(ω) = (2 - 2 cos ω) / ω².)
Example 2.2.3. The time signal

    f(t) = { e^{-at} sin bt   for t ≥ 0,
           { 0                for t < 0,

with a > 0, has the following complex Fourier transform:

    f̂(ω) = b / (a² + b² - ω² + 2iaω)

and this can be expressed in rectangular form as

    f̂(ω) = b(a² + b² - ω²) / [(ω² - a² - b²)² + 4a²ω²] - i · 2abω / [(ω² - a² - b²)² + 4a²ω²].
Inverse Fourier transform

Theorem 2.2.4 (Fourier integral theorem). Let f(t) be a function defined for all time
t, i.e. -∞ < t < ∞, which is continuous except for a discrete set of points
{t1, t2, ..., tn, ...} such that the lateral limits from the right (f(t⁺)) and from the
left (f(t⁻)) exist. If in addition f is laterally differentiable everywhere, then

    (f(t⁺) + f(t⁻)) / 2 = (1/(2π)) ∫_{-∞}^{∞} ( ∫_{-∞}^{∞} f(u) e^{iω(t-u)} du ) dω.

The proof of this theorem is beyond our purposes.


Observe that if f (t) is a continuous function which verifies conditions of Fourier
integral theorem, we obtain the next equality

Z
Z Z
1
1
it
iu
f()e d =
f (u)e
du eit d =
2
2


Z Z
1
i(tu)
=
f (u)e
du d =
2

= f (t)
Hence, if the Fourier transform exists, there is an inverse transform formula.
37

[Figure 2.6: Graphics of Example 2.2.3. (a) Time signal f(t); (b) real part of f̂(ω);
(c) imaginary part of f̂(ω).]


Theorem 2.2.5. If f(t) is a function verifying the hypothesis of the Fourier integral
theorem, then there exists the inverse transform:

    F^{-1}(f̂(ω)) = f(t) = (1/(2π)) ∫_{-∞}^{∞} f̂(ω) e^{iωt} dω.

2.2.2. Properties of the Fourier transform and its inverse

Linearity

Proposition 2.2.6. Let f1(t) and f2(t) be functions whose Fourier transforms exist and
let c1 and c2 be complex constants; then

    F(c1 f1(t) + c2 f2(t)) = c1 F(f1(t)) + c2 F(f2(t)).

Proof.

    F(c1 f1(t) + c2 f2(t)) = ∫_{-∞}^{∞} (c1 f1(t) + c2 f2(t)) e^{-iωt} dt
                           = c1 ∫_{-∞}^{∞} f1(t) e^{-iωt} dt + c2 ∫_{-∞}^{∞} f2(t) e^{-iωt} dt
                           = c1 f̂1(ω) + c2 f̂2(ω).
Translations

Proposition 2.2.7. Let f(t) be a function whose Fourier transform f̂(ω) exists, and let
a be a real number; then

    F(f(t - a)) = e^{-iaω} f̂(ω).

Proof. With the change of variable u = t - a,

    F(f(t - a)) = ∫_{-∞}^{∞} f(t - a) e^{-iωt} dt = ∫_{-∞}^{∞} f(u) e^{-iωu} e^{-iaω} du = e^{-iaω} f̂(ω).

Observe that the Fourier transform of a function and that of its translation (delayed
in time) have the same absolute value:

    |F(f(t - a))| = |e^{-iaω}| |f̂(ω)| = |f̂(ω)|.

Proposition 2.2.8 (Inverse translation). If f̂(ω) = F(f(t)), then, for every real
number k,

    F(e^{ikt} f(t)) = f̂(ω - k).

Proof. Exercise 7.
Rescaling

Proposition 2.2.9. Let a ≠ 0 be a real constant. If F(f(t)) = f̂(ω), then

    F(f(at)) = (1/|a|) f̂(ω/a).

Proof. If a is a positive real number, with the change at = u,

    F(f(at)) = ∫_{-∞}^{∞} f(at) e^{-iωt} dt = ∫_{-∞}^{∞} f(u) e^{-i(ω/a)u} du/a = (1/a) f̂(ω/a).

If a is a negative real number, the same change of variable reverses the limits of
integration, so

    F(f(at)) = ∫_{-∞}^{∞} f(at) e^{-iωt} dt = -(1/a) ∫_{-∞}^{∞} f(u) e^{-i(ω/a)u} du = (1/|a|) f̂(ω/a).
Fourier transform of derivatives

Proposition 2.2.10. If the functions f(t) and f'(t) are both absolutely integrable on R
and lim_{t→±∞} f(t) = 0, then

    F(f'(t)) = iω F(f(t)).

Proof. Using integration by parts:

    F(f'(t)) = lim_{K→∞} ∫_{-K}^{K} f'(t) e^{-iωt} dt
             = lim_{K→∞} ( [f(t) e^{-iωt}]_{-K}^{K} + ∫_{-K}^{K} f(t) iω e^{-iωt} dt )
             = lim_{K→∞} f(K) e^{-iωK} - lim_{K→∞} f(-K) e^{iωK} + iω ∫_{-∞}^{∞} f(t) e^{-iωt} dt
             = iω F(f(t)).

Hence, under the necessary hypotheses about the existence of the integrals and the
limits at infinity of the derivatives, by induction

    F(f^{(n)}(t)) = (iω)^n F(f(t)).                                    (2.4)
Other properties

Proposition 2.2.11. If f̂(ω) = F(f(t)), then

    F(f̂(t)) = 2π f(-ω).

Proof. Exercise 6.

Proposition 2.2.12. If f̂(ω) = F(f(t)), then

    F(f(-t)) = f̂(-ω).

Proof. With the change of variable u = -t,

    F(f(-t)) = ∫_{-∞}^{∞} f(-t) e^{-iωt} dt = ∫_{-∞}^{∞} f(u) e^{iωu} du = f̂(-ω).

Proposition 2.2.13. A function f(t) is real if and only if its Fourier transform verifies f̂(−ω) = \overline{\hat f(\omega)}.

Proof. Suppose f(t) ∈ R. Then

$$\hat f(-\omega) = \int_{-\infty}^{\infty} f(t)\, e^{i\omega t}\, dt = \int_{-\infty}^{\infty} f(t)\cos(\omega t)\, dt + i \int_{-\infty}^{\infty} f(t)\sin(\omega t)\, dt = \overline{\int_{-\infty}^{\infty} f(t)\cos(\omega t)\, dt - i \int_{-\infty}^{\infty} f(t)\sin(\omega t)\, dt} = \overline{\hat f(\omega)}.$$

Conversely, suppose f̂(−ω) = \overline{\hat f(\omega)} and write f̂(ω) = û(ω) + i v̂(ω). Using the inverse Fourier transform:

$$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat f(\omega)\, e^{i\omega t}\, d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} (\hat u(\omega) + i\hat v(\omega))(\cos(\omega t) + i\sin(\omega t))\, d\omega =$$
$$= \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \hat u(\omega)\cos(\omega t) - \hat v(\omega)\sin(\omega t) \right) d\omega + \frac{i}{2\pi} \int_{-\infty}^{\infty} \left( \hat u(\omega)\sin(\omega t) + \hat v(\omega)\cos(\omega t) \right) d\omega.$$

But, by hypothesis, û(−ω) + i v̂(−ω) = û(ω) − i v̂(ω), so û is an even function and v̂ is an odd function. Hence û(ω) sin(ωt) + v̂(ω) cos(ωt) is an odd function of ω, the integral in the imaginary part vanishes, and f(t) is real.
Example 2.2.14. Let us find the Fourier transform of the two-sided exponential decay

$$f(t) = e^{-a|t|}, \qquad \text{with } a \text{ a positive constant.}$$

We could find the transform by plugging directly into the formula for the Fourier transform (exercise). Instead, we compute it using some of the properties above. Recall that for

$$g(t) = \begin{cases} e^{-t} & \text{if } t > 0 \\ 0 & \text{if } t < 0 \end{cases}$$

we have

$$\hat g(\omega) = \int_0^{\infty} e^{-t}\, e^{-i\omega t}\, dt = \frac{1}{i\omega + 1}.$$

Also, for h(t) = g(t) + g(−t), we have²

$$\hat h(\omega) = F(g(t)) + F(g(-t)) = \frac{1}{i\omega + 1} + \frac{1}{-i\omega + 1} = \frac{2}{\omega^2 + 1}.$$

Now observe that f(t) is almost equal to h(at). In fact, they agree except at the origin, where f(0) = 1 and h(0) = g(0) + g(0) = 2; this is irrelevant for integration. Therefore

$$\hat f(\omega) = F(h(at)) = \frac{1}{a} \cdot \frac{2}{(\omega/a)^2 + 1} = \frac{2a}{\omega^2 + a^2}.$$
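The result can be checked numerically (a sketch; the grid and the value a = 1.5 are arbitrary choices made for the check, not taken from the text):

```python
# Numerical check of Example 2.2.14: F(e^{-a|t|})(w) = 2a / (w^2 + a^2).
import numpy as np

t = np.linspace(-80, 80, 400001)
dt = t[1] - t[0]
a = 1.5
f = np.exp(-a * np.abs(t))

for w in (0.0, 0.7, 2.0):
    Fw = np.sum(f * np.exp(-1j * w * t)) * dt   # transform on the grid
    print(Fw.real, 2*a / (w**2 + a**2))         # the two columns agree
```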

2.2.3.  Convolution

Let f(t) and g(t) be functions. We call the convolution product (or simply convolution) of f and g the function

$$(f * g)(t) = \int_{-\infty}^{\infty} f(u)\, g(t-u)\, du.$$

The next proposition is immediate.

Proposition 2.2.15. For any constant a and functions f and g, we have

$$(af) * g = f * (ag) = a(f * g).$$

Proposition 2.2.16. Convolution is commutative, i.e. (f * g)(t) = (g * f)(t).

Proof. Exercise 11.

Proposition 2.2.17. Convolution is associative, i.e. ((f * g) * h)(t) = (f * (g * h))(t).

² Function h is not defined at t = 0, but this is not relevant.
Proof.

$$((f * g) * h)(t) = \int_{-\infty}^{\infty} (f * g)(u)\, h(t-u)\, du = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(v)\, g(u-v)\, dv \right) h(t-u)\, du =$$
$$= \int_{-\infty}^{\infty} f(v) \left( \int_{-\infty}^{\infty} g(u-v)\, h(t-u)\, du \right) dv \overset{w=u-v}{=} \int_{-\infty}^{\infty} f(v) \left( \int_{-\infty}^{\infty} g(w)\, h(t-v-w)\, dw \right) dv =$$
$$= \int_{-\infty}^{\infty} f(v)\, (g * h)(t-v)\, dv = (f * (g * h))(t).$$
Proposition 2.2.18. Convolution is distributive, i.e.

$$(f * (g + h))(t) = (f * g)(t) + (f * h)(t).$$

Proof. Exercise (immediate from the linearity of the integral).

Example 2.2.19. Let us prove that the a-triangle function

$$\Lambda_a(t) = \begin{cases} a - |t| & -a < t < a \\ 0 & \text{otherwise} \end{cases}$$

is the convolution of rectangle functions: Λ_a(t) = (Π_a * Π_a)(t). Remember the rectangle function is defined as

$$\Pi_a(t) = \begin{cases} 1 & -\frac{a}{2} < t < \frac{a}{2} \\ 0 & \text{otherwise.} \end{cases}$$

We have

$$(\Pi_a * \Pi_a)(t) = \int_{-\infty}^{\infty} \Pi_a(u)\, \Pi_a(t-u)\, du = \int_{-a/2}^{a/2} \Pi_a(t-u)\, du = \int_{t-a/2}^{t+a/2} \Pi_a(v)\, dv.$$

Thus:

For t ≤ −a we have t + a/2 ≤ −a/2. Hence (Π_a * Π_a)(t) = ∫_{t−a/2}^{t+a/2} Π_a(v) dv = 0.

For −a < t ≤ 0 we have t − a/2 ≤ −a/2 < t + a/2 ≤ a/2. Hence (Π_a * Π_a)(t) = ∫_{−a/2}^{t+a/2} dv = a + t.

For 0 < t < a we have −a/2 < t − a/2 < a/2 < t + a/2. Hence (Π_a * Π_a)(t) = ∫_{t−a/2}^{a/2} dv = a − t.

For a ≤ t we have t − a/2 ≥ a/2. Hence (Π_a * Π_a)(t) = ∫_{t−a/2}^{t+a/2} Π_a(v) dv = 0.

So Λ_a(t) = (Π_a * Π_a)(t).
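The case analysis above can also be confirmed numerically (a sketch; the grid spacing is an arbitrary choice) by sampling the rectangle and convolving it with itself:

```python
# Numerical check of Example 2.2.19: (Pi_a * Pi_a)(t) equals the triangle
# Lambda_a(t) = max(a - |t|, 0), up to discretization error.
import numpy as np

dt = 0.001
t = np.arange(-3, 3, dt)
a = 1.0
rect = np.where(np.abs(t) < a/2, 1.0, 0.0)        # Pi_a(t)
tri = np.clip(a - np.abs(t), 0.0, None)           # Lambda_a(t)

conv = np.convolve(rect, rect, mode='same') * dt  # (Pi_a * Pi_a)(t)
print(np.max(np.abs(conv - tri)))                 # small discretization error
```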
Convolution and the Fourier transform

Theorem 2.2.20. Let f(t) and g(t) be functions with Fourier transforms f̂(ω) and ĝ(ω) respectively. Then

$$F((f * g)(t)) = \hat f(\omega)\, \hat g(\omega).$$

Proof. By definition, and changing the order of integration,

$$F((f * g)(t)) = \int_{-\infty}^{\infty} (f * g)(t)\, e^{-i\omega t}\, dt = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(u)\, g(t-u)\, du \right) e^{-i\omega t}\, dt =$$
$$= \int_{-\infty}^{\infty} f(u) \left( \int_{-\infty}^{\infty} g(t-u)\, e^{-i\omega t}\, dt \right) du \overset{2.2.7}{=} \int_{-\infty}^{\infty} f(u)\, F(g(t-u))\, du =$$
$$= \int_{-\infty}^{\infty} f(u)\, e^{-i\omega u}\, \hat g(\omega)\, du = \left( \int_{-\infty}^{\infty} f(u)\, e^{-i\omega u}\, du \right) \hat g(\omega) = \hat f(\omega)\, \hat g(\omega).$$
This allows us to compute the inverse Fourier transform of a product of transforms.

Corollary 2.2.21. F^{-1}(f̂(ω) ĝ(ω)) = (f * g)(t).

Example 2.2.22. Using convolution we can calculate the Fourier transform of the a-triangle function and compare with exercise 2.2.2. By example 2.2.1, F(Π_a(t)) = 2 sin(ωa/2)/ω, so

$$F(\Lambda_a(t)) = F((\Pi_a * \Pi_a)(t)) = \hat\Pi_a(\omega)\, \hat\Pi_a(\omega) = \frac{2\sin(\omega a/2)}{\omega} \cdot \frac{2\sin(\omega a/2)}{\omega} = \frac{4\sin^2(\omega a/2)}{\omega^2} = \frac{2 - 2\cos(\omega a)}{\omega^2}.$$

We can use the Fourier transform and convolution to solve some differential equations.

Example 2.2.23. Find an expression for the solutions of the classic second order ODE

$$u'' - u = f.$$

Take the Fourier transform of both sides:

$$(i\omega)^2 \hat u - \hat u = \hat f \qquad\Longrightarrow\qquad \hat u = -\hat f\, \frac{1}{1 + \omega^2}.$$

Take the inverse Fourier transform of both sides:

$$u = -f * F^{-1}\left( \frac{1}{\omega^2 + 1} \right).$$

By example 2.2.14 we know this inverse transform, thus

$$u(t) = -f(t) * \frac{1}{2} e^{-|t|} = -\frac{1}{2} \int_{-\infty}^{\infty} f(u)\, e^{-|t-u|}\, du.$$
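The solution formula can be tested numerically (a sketch; the right-hand side f(t) = e^{-t²} and the grid are arbitrary choices). We build u by discrete convolution and check the residual of the ODE with finite differences:

```python
# Numerical check of Example 2.2.23: u = -(1/2) f * e^{-|t|} solves u'' - u = f.
import numpy as np

dt = 0.01
t = np.arange(-15, 15, dt)
f = np.exp(-t**2)                                # test right-hand side
kernel = 0.5 * np.exp(-np.abs(t))
u = -np.convolve(f, kernel, mode='same') * dt    # candidate solution

upp = (u[2:] - 2*u[1:-1] + u[:-2]) / dt**2       # finite-difference u''
resid = upp - u[1:-1] - f[1:-1]                  # u'' - u - f
print(np.max(np.abs(resid[200:-200])))           # near zero in the interior
```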
Theorem 2.2.24 (Parseval's identity). If f̂(ω) is the Fourier transform of f(t), then

$$\int_{-\infty}^{\infty} |\hat f(\omega)|^2\, d\omega = 2\pi \int_{-\infty}^{\infty} |f(t)|^2\, dt.$$

Proof. We know

$$F^{-1}\left( \hat f(\omega)\, \overline{\hat f(\omega)} \right) = \frac{1}{2\pi} \int_{-\infty}^{\infty} |\hat f(\omega)|^2\, e^{i\omega t}\, d\omega. \tag{2.5}$$

On the other hand, by proposition 2.2.12, \overline{\hat f(\omega)} = F(g(t)) with g(t) = \overline{f(-t)}, so

$$F^{-1}\left( \hat f(\omega)\, \overline{\hat f(\omega)} \right) = F^{-1}\left( \hat f(\omega)\, F(g(t)) \right) = f(t) * g(t) = \int_{-\infty}^{\infty} f(u)\, g(t-u)\, du. \tag{2.6}$$

Matching (2.5) and (2.6) at t = 0,

$$\frac{1}{2\pi} \int_{-\infty}^{\infty} |\hat f(\omega)|^2\, d\omega = \int_{-\infty}^{\infty} f(u)\, g(-u)\, du = \int_{-\infty}^{\infty} f(u)\, \overline{f(u)}\, du = \int_{-\infty}^{\infty} |f(u)|^2\, du,$$

which proves the theorem.
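Parseval's identity can be checked on the two-sided exponential of Example 2.2.14 (a sketch; the particular f is my choice, not the text's):

```python
# Parseval check for f(t) = e^{-|t|}, whose transform is 2/(1 + w^2):
# integral of |F|^2 dw must equal 2*pi times the integral of |f|^2 dt.
import numpy as np
from scipy.integrate import quad

lhs = 2 * quad(lambda w: (2 / (1 + w**2))**2, 0, np.inf)[0]       # = 2*pi
rhs = 2 * np.pi * 2 * quad(lambda t: np.exp(-2*t), 0, np.inf)[0]  # = 2*pi
print(lhs, rhs)
```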

2.2.4.  Fourier transforms of elementary functions

Rectangles

The (a, b)-rectangle function is defined as

$$\Pi_{(a,b)}(t) = \begin{cases} 1 & a < t < b \\ 0 & \text{otherwise.} \end{cases}$$

Its Fourier transform is the complex function (exercise)

$$F(\Pi_{(a,b)}(t)) = \frac{e^{-ia\omega} - e^{-ib\omega}}{i\omega}.$$

In particular, Π_a(t) = Π_{(−a/2, a/2)}(t) verifies F(Π_a(t)) = 2 sin(ωa/2)/ω (Example 2.2.1).
Exponential functions

Let c be a complex number with Re(c) > 0.

The function f(t) = e^{−ct} Π_{(a,b)}(t), i.e.

$$f(t) = \begin{cases} e^{-ct} & a < t < b \\ 0 & \text{otherwise,} \end{cases}$$

has Fourier transform

$$F\left( e^{-ct}\, \Pi_{(a,b)}(t) \right) = \int_a^b e^{-ct}\, e^{-i\omega t}\, dt = \frac{e^{-ia\omega - ac} - e^{-ib\omega - bc}}{i\omega + c}.$$

The function f(t) = e^{−ct} Π_{(0,∞)}(t) has Fourier transform

$$F\left( e^{-ct}\, \Pi_{(0,\infty)}(t) \right) = \frac{1}{i\omega + c}, \qquad F\left( e^{-c|t|} \right) = \frac{2c}{\omega^2 + c^2}. \quad \text{(See Example 2.2.14.)}$$
The Gauss function f(t) = e^{−at²}, with a > 0, has Fourier transform

$$\hat f(\omega) = \int_{-\infty}^{\infty} e^{-at^2}\, e^{-i\omega t}\, dt, \qquad \frac{d}{d\omega}\hat f(\omega) = -i \int_{-\infty}^{\infty} t\, e^{-at^2}\, e^{-i\omega t}\, dt.$$

Integrating by parts with u = e^{−iωt} and dv = t e^{−at²} dt, and applying the limits,

$$\frac{d}{d\omega}\hat f(\omega) = -\frac{\omega}{2a} \int_{-\infty}^{\infty} e^{-at^2}\, e^{-i\omega t}\, dt = -\frac{\omega}{2a}\, \hat f(\omega).$$

This is an elementary ordinary differential equation with solution

$$\hat f(\omega) = \hat f(0)\, e^{-\omega^2/4a}.$$

But we know f̂(0) = ∫_{−∞}^{∞} e^{−at²} dt = √(π/a), hence

$$F\left( e^{-at^2} \right) = \sqrt{\frac{\pi}{a}}\, e^{-\omega^2/4a}.$$

Remark. To compute I = ∫_{−∞}^{∞} e^{−at²} dt, we consider I². It does not matter what we call the variable of integration, so

$$I^2 = \left( \int_{-\infty}^{\infty} e^{-ax^2}\, dx \right) \left( \int_{-\infty}^{\infty} e^{-ay^2}\, dy \right) = \iint_{\mathbb{R}^2} e^{-a(x^2+y^2)}\, dx\, dy.$$

Changing variables to polar coordinates (ρ, θ),

$$I^2 = \int_0^{2\pi} \int_0^{\infty} \rho\, e^{-a\rho^2}\, d\rho\, d\theta = \frac{\pi}{a}.$$
The function f(t) = 1/(t² + c²), with Re(c) > 0

As usual, write f̂(ω) = F(1/(t² + c²)). Apply proposition 2.2.11 (duality) to g(t) = e^{−c|t|}, whose transform is ĝ(ω) = 2c/(ω² + c²):

$$F\left( \frac{2c}{t^2 + c^2} \right) = F(\hat g(t)) = 2\pi\, g(-\omega) = 2\pi\, e^{-c|\omega|}.$$

Hence

$$\hat f(\omega) = \frac{\pi}{c}\, e^{-c|\omega|}.$$
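Both closed forms above can be sketched numerically (the grid, a = 2 and c = 1 are arbitrary choices; the imaginary parts vanish by symmetry, so only the cosine integral is computed):

```python
# Numerical check: F(e^{-a t^2}) = sqrt(pi/a) e^{-w^2/(4a)} and
# F(1/(t^2+c^2)) = (pi/c) e^{-c|w|}.
import numpy as np

t = np.linspace(-200, 200, 2000001)
dt = t[1] - t[0]
a, c = 2.0, 1.0
gauss = np.exp(-a * t**2)
lorentz = 1.0 / (t**2 + c**2)

for w in (0.0, 1.0, 3.0):
    Fg = np.sum(gauss * np.cos(w * t)) * dt
    Fl = np.sum(lorentz * np.cos(w * t)) * dt
    print(Fg, np.sqrt(np.pi / a) * np.exp(-w**2 / (4*a)))
    print(Fl, (np.pi / c) * np.exp(-c * abs(w)))
```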

2.2.5.  Distributions and their Fourier transforms

Dirac delta distribution

A frequently used concept in transform theory is the Dirac delta, which is somewhat abstractly defined by

$$\delta(t) = 0 \text{ for } t \neq 0 \qquad \text{and} \qquad \int_{-\infty}^{\infty} \delta(t)\, dt = 1.$$

The Dirac delta is not a function but a so-called distribution (beyond the scope of this course). It can be understood, roughly speaking, as a function that is very tall and very thin. One often uses the translated Dirac delta δ(t − a) for some real a (see figure 2.8a).

This distribution is often defined as the object satisfying

$$\int_{-\infty}^{\infty} f(t)\, \delta(t)\, dt = f(0),$$

and it can also be seen as the limit of families of functions with certain properties, for example

Gaussian functions: δ_n(t) = √(n/π) e^{−nt²} for n = 1, 2, 3, ...

Lorentz functions: δ_n(t) = (1/π) · n/(1 + n²t²) for n = 1, 2, 3, ...

and others; that is, δ_n(t) → δ(t) as n → ∞.

We can apply the definition of the Fourier transform to the distribution δ(t − a):

$$F(\delta(t-a)) = \int_{-\infty}^{\infty} \delta(t-a)\, e^{-i\omega t}\, dt = e^{-ia\omega},$$

and, in particular, F(δ(t)) = 1.

On the other hand, applying proposition 2.2.11,

$$F(e^{-iat}) = 2\pi\, \delta(-\omega - a) = 2\pi\, \delta(\omega + a).$$

In particular, F(1) = F(e⁰) = 2π δ(ω).
Figure 2.7: Gaussian functions δ_n(t) converge to the Dirac delta δ(t).
Remark. The distribution δ(t − a) is often called an impulse at a and, if c is a complex constant, c δ(t − a) is called an impulse at a weighted by c.

Proposition 2.2.25. We have the following Fourier transform formulas (Exercise 13):

1. F(δ^{(n)}(t)) = (iω)^n.
2. F(t) = 2πi δ'(ω).
3. F(t^n) = 2π i^n δ^{(n)}(ω).
Sign function

Define the sign function as

$$\operatorname{sgn}(t) = \begin{cases} 1 & t > 0 \\ -1 & t < 0, \end{cases}$$

undefined for t = 0. It is usual to set sgn(−∞) = −1, and then this function has the property

$$\operatorname{sgn}(t) - \operatorname{sgn}(-\infty) = \begin{cases} 2 & t > 0 \\ 0 & t < 0. \end{cases}$$

Furthermore,

$$\int_{-\infty}^{t} 2\delta(x)\, dx = \begin{cases} 2 & t \geq 0 \\ 0 & t < 0. \end{cases}$$

Matching both functions, except for t = 0, we have ∫_{−∞}^{t} 2δ(x) dx = sgn(t) − sgn(−∞). Hence d/dt sgn(t) = 2δ(t). By proposition 2.2.10, F(2δ(t)) = iω F(sgn(t)), and we can compute the Fourier transform of the sign function:

$$F(\operatorname{sgn}(t)) = \frac{2}{i\omega}.$$
Heaviside unit step function H(t − a)

We call the unit step function or Heaviside function

$$H(t) = \begin{cases} 1 & \text{for } t \geq 0 \\ 0 & \text{elsewhere.} \end{cases}$$

Figure 2.8: (a) Dirac delta at t = a; (b) Heaviside unit step at t = a.

This is a piecewise continuous function. It is usual to consider the unit step function at t = a, written H(t − a) (see figure 2.8b).

From

$$\int_{-\infty}^{t} \delta(x-a)\, dx = \lim_{n\to\infty} \int_{-\infty}^{t} \delta_n(x-a)\, dx = \begin{cases} 0 & \text{if } t < a \\ 1 & \text{if } t \geq a \end{cases} = H(t-a)$$

we can interpret δ as the derivative³ of the Heaviside function:

$$\frac{d}{dt} H(t-a) = \delta(t-a).$$

Furthermore, H(t) = ½ (1 + sgn(t)), so

$$F(H(t)) = \pi\, \delta(\omega) + \frac{1}{i\omega}, \qquad F(H(t-a)) = e^{-ia\omega} \left( \pi\, \delta(\omega) + \frac{1}{i\omega} \right).$$
Proposition 2.2.26. We have the following Fourier transform formulas (Exercise 13):

1. F(1/t) = −πi sgn(ω) = πi − 2πi H(ω).

2. F(1/t^{n+1}) = ((−iω)^n / n!) (πi − 2πi H(ω)).

The Fourier transform of sine and cosine

We can combine the results above to find the Fourier transforms of the sine and cosine. From

$$F\left( \frac{\delta(t-a) + \delta(t+a)}{2} \right) = \frac{e^{-ia\omega} + e^{ia\omega}}{2} = \cos(a\omega),$$

duality therefore gives

$$F(\cos(at)) = \frac{2\pi}{2} \left( \delta(-\omega-a) + \delta(-\omega+a) \right) = \pi \left( \delta(\omega+a) + \delta(\omega-a) \right).$$

Analogously, F((δ(t+a) − δ(t−a))/(2i)) = (e^{iaω} − e^{−iaω})/(2i) = sin(aω), and therefore

$$F(\sin(at)) = \frac{2\pi}{2i} \left( \delta(-\omega+a) - \delta(-\omega-a) \right) = \frac{\pi}{i} \left( \delta(\omega-a) - \delta(\omega+a) \right).$$

³ Obviously H(t − a) is not a continuous function at a, and therefore is not differentiable in the classical sense.

2.2.6.  Fourier transform applied to differential equations

As we saw in example 2.2.23, Fourier transforms can be applied to the solution of differential equations.

Consider the following ordinary differential equation (ODE):

$$a_n x^{(n)}(t) + a_{n-1} x^{(n-1)}(t) + \dots + a_1 x'(t) + a_0 x(t) = g(t), \tag{2.7}$$

assuming that the solution and all its derivatives approach zero as t → ±∞. Applying the Fourier transform we obtain

$$\left( a_n (i\omega)^n + a_{n-1} (i\omega)^{n-1} + \dots + a_1 (i\omega) + a_0 \right) \hat x(\omega) = \hat g(\omega).$$

Calling

$$F(\omega) = \frac{1}{a_n (i\omega)^n + a_{n-1} (i\omega)^{n-1} + \dots + a_1 (i\omega) + a_0}$$

and f(t) = F^{-1}(F(ω)), we obtain

$$\hat x(\omega) = F(\omega)\, \hat g(\omega),$$

and the solution is

$$x(t) = f(t) * g(t).$$

If the Fourier transform of the right side of (2.7) is known, we can apply this to solve the differential equation.

Example 2.2.27. Use the Fourier transform to find a solution of the ODE

$$x' - x = 2\cos t.$$

Applying the Fourier transform,

$$(i\omega)\hat x - \hat x = 2\pi \left( \delta(\omega+1) + \delta(\omega-1) \right)$$
$$\hat x = \frac{2\pi\, \delta(\omega+1)}{-1 + i\omega} + \frac{2\pi\, \delta(\omega-1)}{-1 + i\omega}.$$

Because the Dirac delta δ(t) is 0 for t ≠ 0, we have

$$\hat x = \frac{2\pi\, \delta(\omega+1)}{-1 - i} + \frac{2\pi\, \delta(\omega-1)}{-1 + i},$$

and taking the inverse transform,

$$x(t) = \frac{1}{-1-i}\, e^{-it} + \frac{1}{-1+i}\, e^{it} = \frac{-1+i}{2} (\cos t - i\sin t) + \frac{-1-i}{2} (\cos t + i\sin t) =$$
$$= \frac{1}{2} \left( -\cos t + i\sin t + i\cos t + \sin t - \cos t - i\sin t - i\cos t + \sin t \right) = \sin t - \cos t.$$
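The candidate solution of Example 2.2.27 can be confirmed symbolically (a sketch):

```python
# Check that x(t) = sin t - cos t really solves x' - x = 2 cos t.
import sympy as sp

t = sp.symbols('t')
x = sp.sin(t) - sp.cos(t)
print(sp.simplify(sp.diff(x, t) - x))   # 2*cos(t)
```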

2.2.7.  Table of Fourier transforms

f(t)                                          f̂(ω) = F(f(t))

Π_{(a,b)}(t)                                  (e^{−iaω} − e^{−ibω}) / (iω)

Π_a(t) = Π_{(−a/2, a/2)}(t)                   2 sin(ωa/2) / ω

e^{−ct} Π_{(a,b)}(t),  Re(c) > 0              (e^{−iaω−ac} − e^{−ibω−bc}) / (iω + c)

e^{−ct} Π_{(0,∞)}(t),  Re(c) > 0              1 / (iω + c)

e^{−c|t|},  Re(c) > 0                         2c / (ω² + c²)

e^{−at²},  a > 0                              √(π/a) e^{−ω²/4a}

1 / (t² + c²),  Re(c) > 0                     (π/c) e^{−c|ω|}

δ(t − a)                                      e^{−iaω}

e^{−iat}                                      2π δ(ω + a)

t^n                                           2π i^n δ^{(n)}(ω)

sgn(t)                                        2 / (iω)

H(t)                                          π δ(ω) + 1/(iω)

1 / t^{n+1}                                   ((−iω)^n / n!)(πi − 2πi H(ω))

cos(at)                                       π (δ(ω + a) + δ(ω − a))

sin(at)                                       (π/i) (δ(ω − a) − δ(ω + a))

2.3.  Laplace Integral Transform

2.3.1.  Definitions

Definition 2.3.1. The (direct) Laplace transform of a real function f(t) defined for 0 ≤ t < ∞ is the ordinary calculus integral

$$F(s) = \int_0^{\infty} f(t)\, e^{-st}\, dt,$$

where s is a real number. The function F(s) is usually denoted L(f(t)), and L is called the Laplace transform operator.

Example 2.3.2. We illustrate the definition by calculating the Laplace transform of some functions.

1. f(t) = 1.

$$F(s) = \int_0^{\infty} 1 \cdot e^{-st}\, dt = \left[ \frac{-e^{-st}}{s} \right]_{t=0}^{\infty} = \frac{1}{s}, \quad \text{assuming } s > 0.$$

Then L(1) = 1/s for s > 0.

2. f(t) = t. Integrating by parts (u = t, dv = e^{−st} dt),

$$F(s) = \int_0^{\infty} t\, e^{-st}\, dt = \left[ \frac{-t\, e^{-st}}{s} - \frac{e^{-st}}{s^2} \right]_{t=0}^{\infty} = \frac{1}{s^2}, \quad \text{assuming } s > 0.$$

An alternative method is to observe that t e^{−st} = −(d/ds) e^{−st}, and

$$L(t) = \int_0^{\infty} -\frac{d}{ds} e^{-st}\, dt = -\frac{d}{ds} \int_0^{\infty} e^{-st}\, dt = -\frac{d}{ds} L(1) = \frac{1}{s^2},$$

assuming s > 0.

Exercise 2.3.3. Use (d^n/ds^n) e^{−st} = (−1)^n t^n e^{−st} to prove

$$L(t^n) = \frac{n!}{s^{n+1}}, \quad \text{assuming } s > 0.$$

Example 2.3.4. We know the Heaviside unit step function

$$H(t) = \begin{cases} 1 & \text{for } t \geq 0 \\ 0 & \text{elsewhere.} \end{cases}$$

This is a piecewise continuous function with L(H(t)) = 1/s for s > 0 (see example 2.3.2). Now we calculate the Laplace transform of the function H(t − a) with a > 0, which represents a unit step at t = a:

$$L(H(t-a)) = \int_0^{\infty} H(t-a)\, e^{-st}\, dt = \int_a^{\infty} e^{-st}\, dt \overset{u=t-a}{=} \int_0^{\infty} e^{-s(u+a)}\, du = e^{-as} L(1) = \frac{e^{-as}}{s}.$$
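These transforms can be cross-checked with sympy (a sketch; sympy's `laplace_transform` returns a tuple `(F(s), abscissa, conditions)`, so we take the first component):

```python
# Cross-check of Examples 2.3.2 and 2.3.4 with sympy.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.Symbol('a', positive=True)

print(sp.laplace_transform(1, t, s)[0])                     # 1/s
print(sp.laplace_transform(t, t, s)[0])                     # s**(-2)
print(sp.laplace_transform(sp.Heaviside(t - a), t, s)[0])   # exp(-a*s)/s
```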

Existence of the transform

The Laplace integral ∫_0^∞ f(t) e^{−st} dt is understood in the sense of the improper integral

$$\int_0^{\infty} f(t)\, e^{-st}\, dt = \lim_{N\to\infty} \int_0^{N} f(t)\, e^{-st}\, dt,$$

and the issue is to determine classes of functions f for which convergence is guaranteed. The next theorem gives a sufficient condition for the existence of the Laplace transform.

Theorem 2.3.5 (Existence of L(f)). Let f(t) be piecewise continuous on every finite interval in t ≥ 0 and satisfy |f(t)| ≤ M e^{αt} for some constants M and α. Then L(f(t)) exists for s > α and

$$\lim_{s\to\infty} L(f(t)) = 0. \tag{2.8}$$

Proof. It has to be shown that the Laplace integral of f is finite for s > α. Advanced calculus implies that it suffices to show that the integrand is absolutely bounded above by an integrable function g(t). Take g(t) = M e^{−(s−α)t}. Then g(t) ≥ 0, and g is integrable because

$$\int_0^{\infty} g(t)\, dt = \frac{M}{s-\alpha}.$$

The inequality |f(t)| ≤ M e^{αt} implies that the absolute value of the Laplace transform integrand f(t) e^{−st} is estimated by

$$\left| f(t)\, e^{-st} \right| \leq M e^{\alpha t}\, e^{-st} = g(t).$$

The limit statement follows from |L(f(t))| ≤ ∫_0^∞ g(t) dt = M/(s − α), because the right side of this inequality has limit zero as s → ∞. The proof is complete.

Property (2.8) in the previous theorem gives a criterion to decide when a function can be the Laplace transform of another one. For example, polynomial functions are not the Laplace transform of anything. In contrast, the function F(s) = arctan(1/s) for s > 0 could be a Laplace transform, as we confirm in example 2.3.26.

2.3.2.  Properties of the Laplace Operator

Linearity

Proposition 2.3.6. Let f1(t) and f2(t) be functions whose Laplace transforms exist, and let c1 and c2 be real constants. Then

$$L(c_1 f_1(t) + c_2 f_2(t)) = c_1 L(f_1(t)) + c_2 L(f_2(t)).$$

Proof.

$$L(c_1 f_1(t) + c_2 f_2(t)) = \int_0^{\infty} (c_1 f_1(t) + c_2 f_2(t))\, e^{-st}\, dt = c_1 \int_0^{\infty} f_1(t)\, e^{-st}\, dt + c_2 \int_0^{\infty} f_2(t)\, e^{-st}\, dt = c_1 L(f_1(t)) + c_2 L(f_2(t)).$$

Translations

Proposition 2.3.7. Let f(t) be a function, H(t) the Heaviside unit step function defined in Example 2.3.4, and g(t) = H(t − a) f(t − a), i.e.

$$g(t) = \begin{cases} f(t-a) & \text{for } t > a \\ 0 & \text{for } t < a, \end{cases}$$

with a > 0. Then

$$L(g(t)) = e^{-as}\, L(f(t)).$$

Proof.

$$L(g(t)) = \int_0^{\infty} g(t)\, e^{-st}\, dt = \int_a^{\infty} f(t-a)\, e^{-st}\, dt,$$

and substituting u = t − a,

$$L(g(t)) = \int_0^{\infty} f(u)\, e^{-s(u+a)}\, du = e^{-as} \int_0^{\infty} f(u)\, e^{-su}\, du = e^{-as}\, L(f(t)).$$

Example 2.3.8. To calculate the Laplace transform of the step function

$$f(t) = \begin{cases} 1 & \text{for } a \leq t < b \\ 0 & \text{elsewhere,} \end{cases}$$

observe that f(t) = H(t − a) − H(t − b), where H(t) is the Heaviside unit step function. Then

$$L(f(t)) = L(H(t-a)) - L(H(t-b)) = e^{-as} L(1) - e^{-bs} L(1) = \frac{e^{-as} - e^{-bs}}{s}.$$

Proposition 2.3.9. If L(f(t)) = F(s) for s > c, then L(e^{at} f(t)) = F(s − a) for s > a + c.

Proof. Easy; start by developing F(s − a).

Exercise 2.3.10. Use the above propositions to prove that L((t − 1) e^{3t}) = (4 − s)/(s − 3)² for s > 3.
Rescaling

Proposition 2.3.11. If L(f(t)) = F(s), then, for a > 0,

$$L(f(at)) = \frac{1}{a}\, F\left( \frac{s}{a} \right).$$

Proof. Substituting u = at,

$$L(f(at)) = \int_0^{\infty} f(at)\, e^{-st}\, dt = \int_0^{\infty} f(u)\, e^{-\frac{s}{a}u}\, \frac{du}{a} = \frac{1}{a}\, F\left( \frac{s}{a} \right).$$

Laplace transform of derivatives

t-derivative rule

Theorem 2.3.12. If f(t) is continuous, lim_{t→∞} f(t) e^{−st} = 0 for all large values of s, and f'(t) is piecewise continuous, then L(f'(t)) exists for all large s and

$$L(f'(t)) = s\, L(f(t)) - f(0).$$

Proof. L(f(t)) already exists, because f is of exponential order and continuous. On an interval [a, b] where f' is continuous, integration by parts using u = e^{−st}, dv = f'(t) dt gives

$$\int_a^b f'(t)\, e^{-st}\, dt = \left[ f(t)\, e^{-st} \right]_{t=a}^{b} + s \int_a^b f(t)\, e^{-st}\, dt = f(b)\, e^{-bs} - f(a)\, e^{-as} + s \int_a^b f(t)\, e^{-st}\, dt.$$

On any interval [0, N] there are finitely many intervals [a, b] on each of which f' is continuous. Adding the above equality across these finitely many intervals [a, b], the boundary values on adjacent intervals match and the integrals add, giving

$$\int_0^{N} f'(t)\, e^{-st}\, dt = f(N)\, e^{-Ns} - f(0)\, e^{0} + s \int_0^{N} f(t)\, e^{-st}\, dt.$$

Take the limit of this equality as N → ∞. The right side has limit −f(0) + s L(f(t)), because of the existence of L(f(t)) and lim_{t→∞} f(t) e^{−st} = 0 for large s. Therefore the left side has a limit, and by definition L(f'(t)) exists with L(f'(t)) = −f(0) + s L(f(t)).

Similarly we have

$$L(f''(t)) = s\, L(f'(t)) - f'(0) = s\left( s L(f(t)) - f(0) \right) - f'(0) = s^2 L(f(t)) - s f(0) - f'(0),$$

and furthermore L(f'''(t)) = s³ L(f(t)) − s² f(0) − s f'(0) − f''(0). In general,

$$L\left( f^{(n)}(t) \right) = s^n L(f(t)) - s^{n-1} f(0) - s^{n-2} f'(0) - \dots - f^{(n-1)}(0).$$
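A quick symbolic spot-check of the t-derivative rule (a sketch; f(t) = cos t is an arbitrary test function):

```python
# Check L(f'(t)) = s L(f(t)) - f(0) for f(t) = cos t.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.cos(t)
lhs = sp.laplace_transform(sp.diff(f, t), t, s)[0]
rhs = s * sp.laplace_transform(f, t, s)[0] - f.subs(t, 0)
print(sp.simplify(lhs - rhs))   # 0
```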
s-derivative rule

Proposition 2.3.13.

$$\frac{d^n}{ds^n} L(f(t)) = (-1)^n\, L(t^n f(t)).$$

Proof. Proceed by induction on n.

For n = 1:

$$\frac{d}{ds} L(f(t)) = \frac{d}{ds} \int_0^{\infty} f(t)\, e^{-st}\, dt = \int_0^{\infty} -t\, f(t)\, e^{-st}\, dt = -L(t f(t)).$$

Induction hypothesis: (d^n/ds^n) L(f(t)) = (−1)^n L(t^n f(t)). Then

$$\frac{d^{n+1}}{ds^{n+1}} L(f(t)) = \frac{d}{ds}\left( \frac{d^n}{ds^n} L(f(t)) \right) = \frac{d}{ds}\left[ (-1)^n L(t^n f(t)) \right] =$$
$$= (-1)^n \int_0^{\infty} \frac{\partial}{\partial s}\left( t^n f(t)\, e^{-st} \right) dt = (-1)^n \int_0^{\infty} -t^{n+1} f(t)\, e^{-st}\, dt = (-1)^{n+1} L\left( t^{n+1} f(t) \right),$$

which proves the claim.

Laplace transform of integrals

When ∫_0^t f(u) du is a function of t satisfying the conditions for the existence of its Laplace transform, we have:

Proposition 2.3.14.

$$L\left( \int_0^t f(u)\, du \right) = \frac{L(f(t))}{s}.$$

Proof. L(∫_0^t f(u) du) = ∫_0^∞ (∫_0^t f(u) du) e^{−st} dt. Integration by parts using u = ∫_0^t f(u) du and dv = e^{−st} dt gives

$$L\left( \int_0^t f(u)\, du \right) = \left[ -\int_0^t f(u)\, du\; \frac{e^{-st}}{s} \right]_{t=0}^{\infty} + \int_0^{\infty} f(t)\, \frac{e^{-st}}{s}\, dt = \frac{1}{s}\, L(f(t)),$$

since the boundary term vanishes at both limits.

Proposition 2.3.15. If lim_{t→0⁺} f(t)/t exists and L(f(t)) = F(s), then

$$L\left( \frac{f(t)}{t} \right) = \int_s^{\infty} F(u)\, du.$$

Proof. Omitted.
Laplace transform of the Dirac delta distribution

From

$$\int_{-\infty}^{t} \delta(x-a)\, dx = \lim_{n\to\infty} \int_{-\infty}^{t} \delta_n(x-a)\, dx = \begin{cases} 0 & \text{if } t < a \\ 1 & \text{if } t \geq a \end{cases} = H(t-a)$$

we can interpret

$$\frac{d}{dt} H(t-a) = \delta(t-a),$$

and so, using the t-derivative rule (theorem 2.3.12), we obtain the Laplace transform of the Dirac delta (for a > 0):

$$L(\delta(t-a)) = s\, L(H(t-a)) - H(0-a) = e^{-as}.$$

2.3.3.  Laplace Transform Table

Proposition 2.3.16. L(e^{at}) = 1/(s − a), assuming s > a.

Proof.

$$L(e^{at}) = \int_0^{\infty} e^{(a-s)t}\, dt = \left[ \frac{e^{(a-s)t}}{a-s} \right]_{t=0}^{\infty} = \frac{1}{s-a} \quad \text{for } s > a.$$

Proposition 2.3.17. L(sin at) = a/(s² + a²), assuming s > 0.

Proof. First we calculate L(sin t):

$$L(\sin t) = L\left( -\frac{d\cos t}{dt} \right) = -s\, L(\cos t) + 1 = -s\, L\left( \frac{d\sin t}{dt} \right) + 1 = -s^2 L(\sin t) + 1.$$

Hence L(sin t) = 1/(s² + 1). Rescaling (proposition 2.3.11),

$$L(\sin at) = \frac{1}{a} \cdot \frac{1}{\left( \frac{s}{a} \right)^2 + 1} = \frac{a}{s^2 + a^2}.$$

Proposition 2.3.18. L(cos at) = s/(s² + a²), assuming s > 0.

Proof. Analogous.

Proposition 2.3.19. L(cosh at) = s/(s² − a²), assuming s > |a|.

Proof. Exercise. Hint: use cosh at = (e^{at} + e^{−at})/2.

Proposition 2.3.20. L(sinh at) = a/(s² − a²), assuming s > |a|.

Proof. Analogous.

Table 2.1(a) shows the most important Laplace transforms.

2.3.4.  Inverse Laplace Transform

Definition 2.3.21. We say that f(t) is an inverse Laplace transform of F(s) when L(f(t)) = F(s), and we then write

$$L^{-1}(F(s)) = f(t).$$

Observe that the inverse Laplace transform is not unique.

Example 2.3.22. The functions f1(t) = e^t and

$$f_2(t) = \begin{cases} 0 & \text{for } t = 2 \\ e^t & \text{for } t \neq 2 \end{cases}$$

verify

$$L(f_1(t)) = L(f_2(t)) = \frac{1}{s-1},$$

therefore both functions are inverse Laplace transforms of the same function F(s) = 1/(s − 1).

However, there are conditions for the uniqueness of the inverse transform, as established by the next theorem, which we state without proof.

Theorem 2.3.23 (Lerch). If f1(t) and f2(t) are continuous, of exponential order, and L(f1(t)) = L(f2(t)) for all s > s0, then f1(t) = f2(t) for all t ≥ 0.

Table 2.1(b) shows the most important inverse Laplace transforms, an immediate consequence of table 2.1(a).

(a) Direct transform

f(t)            F(s) = L(f(t))
1               1/s,             s > 0
t               1/s²,            s > 0
t^n             n!/s^{n+1},      s > 0
e^{at}          1/(s − a),       s > a
sin at          a/(s² + a²),     s > 0
cos at          s/(s² + a²),     s > 0
sinh at         a/(s² − a²),     s > |a|
cosh at         s/(s² − a²),     s > |a|
δ(t − a)        e^{−as}

(b) Inverse transform

F(s)            f(t) = L^{−1}(F(s))
1/s             1
1/s²            t
1/s^{n+1}       t^n / n!
1/(s − a)       e^{at}
1/(s² + a²)     (sin at)/a
s/(s² + a²)     cos at
1/(s² − a²)     (sinh at)/a
s/(s² − a²)     cosh at
e^{−as}         δ(t − a)

Table 2.1: Direct and inverse Laplace transforms of some functions.

Properties of the Inverse Laplace Transform

Basic properties. The following properties are deduced from section 2.3.2.

1. Linearity. Let F1(s) and F2(s) be functions and let c1 and c2 be real constants. Then
   L^{−1}(c1 F1(s) + c2 F2(s)) = c1 L^{−1}(F1(s)) + c2 L^{−1}(F2(s)).

2. Translations. If L^{−1}(F(s)) = f(t), then L^{−1}(F(s − a)) = e^{at} f(t).

3. Rescaling. If L^{−1}(F(s)) = f(t), then L^{−1}(F(as)) = (1/a) f(t/a).

4. Derivative rule. If L^{−1}(F(s)) = f(t), then L^{−1}(F^{(n)}(s)) = (−1)^n t^n f(t).

5. Integral rule. If L^{−1}(F(s)) = f(t), then L^{−1}(∫_s^∞ F(u) du) = f(t)/t.

Example 2.3.24. The inverse Laplace transform of

$$X(s) = \frac{s \sin\varphi + \omega \cos\varphi}{s^2 + \omega^2}$$

is x(t) = sin(ωt + φ). Rearranging terms in the fraction,

$$X(s) = (\sin\varphi)\, \frac{s}{s^2 + \omega^2} + (\cos\varphi)\, \frac{\omega}{s^2 + \omega^2}.$$

We can now take the inverse Laplace transform using table 2.1(b):

$$x(t) = (\sin\varphi)\, L^{-1}\left( \frac{s}{s^2+\omega^2} \right) + (\cos\varphi)\, L^{-1}\left( \frac{\omega}{s^2+\omega^2} \right) = (\sin\varphi)(\cos\omega t) + (\sin\omega t)(\cos\varphi) = \sin(\omega t + \varphi).$$

Exercise 2.3.25. Prove that the inverse Laplace transform of F(s) = (s + b)/((s + a)² + ω²) is

$$f(t) = e^{-at}\left( \cos\omega t + \frac{b-a}{\omega}\, \sin\omega t \right).$$

Example 2.3.26. The inverse Laplace transform of F(s) = arctan(1/s) is f(t) = (sin t)/t.

Indeed, the derivative is F'(s) = −1/(s² + 1), and using the derivative rule L^{−1}(F'(s)) = −t f(t), we obtain

$$f(t) = -\frac{1}{t}\, L^{-1}\left( \frac{-1}{s^2+1} \right) = \frac{1}{t}\, L^{-1}\left( \frac{1}{s^2+1} \right) = \frac{\sin t}{t}.$$
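Example 2.3.26 can be checked numerically (a sketch; s = 2 is an arbitrary choice) by evaluating the defining Laplace integral:

```python
# Check L(sin t / t)(s) = arctan(1/s) at s = 2.
# np.sinc(x) = sin(pi x)/(pi x), so np.sinc(t/pi) = sin(t)/t.
import numpy as np
from scipy.integrate import quad

s = 2.0
val = quad(lambda t: np.sinc(t / np.pi) * np.exp(-s * t), 0, np.inf)[0]
print(val, np.arctan(1 / s))   # both approximately 0.4636
```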
Convolution property

Definition 2.3.27 (Convolution). Let f(t) and g(t) be piecewise continuous functions of exponential order with f(t) = 0 and g(t) = 0 for t < 0. We call the convolution product (or simply convolution) of f and g the function

$$(f * g)(t) = \int_0^t f(u)\, g(t-u)\, du = \int_{-\infty}^{\infty} f(u)\, g(t-u)\, du.$$

Exercise 2.3.28. Prove that the convolution is commutative, i.e. (f * g)(t) = (g * f)(t).

Proposition 2.3.29. Convolution is associative, i.e. ((f * g) * h)(t) = (f * (g * h))(t).

Proof.

$$((f * g) * h)(t) = \int_{-\infty}^{\infty} (f * g)(u)\, h(t-u)\, du = \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(v)\, g(u-v)\, dv \right) h(t-u)\, du =$$
$$= \int_{-\infty}^{\infty} f(v) \left( \int_{-\infty}^{\infty} g(u-v)\, h(t-u)\, du \right) dv \overset{w=u-v}{=} \int_{-\infty}^{\infty} f(v) \left( \int_{-\infty}^{\infty} g(w)\, h(t-v-w)\, dw \right) dv =$$
$$= \int_{-\infty}^{\infty} f(v)\, (g * h)(t-v)\, dv = (f * (g * h))(t).$$

Theorem 2.3.30. If L^{−1}(F(s)) = f(t) and L^{−1}(G(s)) = g(t), then

$$L^{-1}(F(s)\, G(s)) = f(t) * g(t).$$

Proof. Using Fubini's theorem,

$$F(s)\, G(s) = \left( \int_0^{\infty} f(u)\, e^{-su}\, du \right) \left( \int_0^{\infty} g(v)\, e^{-sv}\, dv \right) = \iint_{[0,\infty)\times[0,\infty)} f(u)\, g(v)\, e^{-s(u+v)}\, du\, dv.$$

We make the change of variables u = y, v = t − y, with Jacobian

$$\left| \det \frac{\partial(u,v)}{\partial(t,y)} \right| = \left| \det \begin{pmatrix} 0 & 1 \\ 1 & -1 \end{pmatrix} \right| = 1,$$

under which the (u, v)-region [0, ∞) × [0, ∞) is transformed into the (t, y)-region {(t, y) : y ≥ 0 and t ≥ y}. Hence

$$F(s)\, G(s) = \int_{t=0}^{\infty} \int_{y=0}^{t} f(y)\, g(t-y)\, e^{-st}\, dy\, dt = \int_{t=0}^{\infty} e^{-st} \left( \int_{y=0}^{t} f(y)\, g(t-y)\, dy \right) dt = \int_0^{\infty} e^{-st}\, (f*g)(t)\, dt = L((f*g)(t)),$$

therefore L^{−1}(F(s) G(s)) = (f * g)(t).
Example 2.3.31. Consider a linear time-invariant system with transfer function

$$F(s) = \frac{1}{(s+a)(s+b)}.$$

The impulse response is simply the inverse Laplace transform of this transfer function, f(t) = L^{−1}(F(s)). To evaluate this inverse transform, we use the convolution property. That is, the inverse of

$$F(s) = \frac{1}{(s+a)(s+b)} = \frac{1}{s+a} \cdot \frac{1}{s+b}$$

is

$$f(t) = L^{-1}\left( \frac{1}{s+a} \right) * L^{-1}\left( \frac{1}{s+b} \right) = e^{-at} * e^{-bt} = \int_0^t e^{-ax}\, e^{-b(t-x)}\, dx = \frac{e^{-at} - e^{-bt}}{b-a}.$$

Exercise 2.3.32. Use the method of partial fraction expansion to evaluate the inverse Laplace transform f(t) = L^{−1}(F(s)) with

$$F(s) = \frac{1}{(s+a)(s+b)} = \frac{A}{s+a} + \frac{B}{s+b},$$

as used in Example 2.3.31 above.
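The impulse response of Example 2.3.31 can be confirmed symbolically (a sketch; the concrete rates a = 2, b = 5 are arbitrary choices of mine):

```python
# Check L^{-1}( 1/((s+a)(s+b)) ) = (e^{-at} - e^{-bt})/(b-a) for a=2, b=5.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b = 2, 5
F = 1 / ((s + a) * (s + b))
f = sp.inverse_laplace_transform(F, s, t)
expected = (sp.exp(-a*t) - sp.exp(-b*t)) / (b - a)
print(sp.simplify(f - expected))   # 0
```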

2.3.5.  Laplace Method for Solving Ordinary Differential Equations (ODEs)

The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions.

Example 2.3.33. We use the Laplace method to solve the linear ODE

$$y'' + y' - 2y = x \qquad \text{with } y(0) = 2,\; y'(0) = -1.$$

First observe that x is the independent variable, so

$$L(y'') + L(y') - 2L(y) = L(x),$$

and using the x-derivative rule,

$$\left( s^2 L(y) - s\, y(0) - y'(0) \right) + \left( s L(y) - y(0) \right) - 2L(y) = \frac{1}{s^2}$$
$$\left( s^2 L(y) - 2s + 1 \right) + \left( s L(y) - 2 \right) - 2L(y) = \frac{1}{s^2}$$
$$(s^2 + s - 2)\, L(y) - 2s - 1 = \frac{1}{s^2}$$
$$(s^2 + s - 2)\, L(y) = \frac{1}{s^2} + 2s + 1 = \frac{2s^3 + s^2 + 1}{s^2}.$$

Hence

$$L(y) = \frac{2s^3 + s^2 + 1}{s^2 (s^2 + s - 2)} = \frac{2s^3 + s^2 + 1}{s^2 (s-1)(s+2)}.$$

Using the partial fraction method,

$$L(y) = \frac{2s^3 + s^2 + 1}{s^2 (s-1)(s+2)} = \frac{-1/2}{s^2} + \frac{-1/4}{s} + \frac{4/3}{s-1} + \frac{11/12}{s+2}.$$

Applying inverse transforms according to table 2.1,

$$y = -\frac{1}{2}\, L^{-1}\left( \frac{1}{s^2} \right) - \frac{1}{4}\, L^{-1}\left( \frac{1}{s} \right) + \frac{4}{3}\, L^{-1}\left( \frac{1}{s-1} \right) + \frac{11}{12}\, L^{-1}\left( \frac{1}{s+2} \right) =$$
$$= -\frac{x}{2} - \frac{1}{4} + \frac{4}{3}\, e^{x} + \frac{11}{12}\, e^{-2x} = \frac{16 e^{x} + 11 e^{-2x} - 6x - 3}{12}.$$
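A symbolic verification of Example 2.3.33 (a sketch) confirms both the equation and the initial conditions:

```python
# Check that y = (16 e^x + 11 e^{-2x} - 6x - 3)/12 satisfies
# y'' + y' - 2y = x with y(0) = 2 and y'(0) = -1.
import sympy as sp

x = sp.symbols('x')
y = (16*sp.exp(x) + 11*sp.exp(-2*x) - 6*x - 3) / 12
print(sp.simplify(sp.diff(y, x, 2) + sp.diff(y, x) - 2*y))  # x
print(y.subs(x, 0), sp.diff(y, x).subs(x, 0))               # 2 -1
```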

Example 2.3.34 (Damped oscillator). Solve by Laplace's method the initial value problem

$$x'' + 2x' + 2x = 0, \qquad x(0) = 1,\; x'(0) = -1.$$

Solution: x = e^{−t} cos t.

Taking Laplace transforms, L(x'') + 2L(x') + 2L(x) = 0, hence

$$s^2 L(x) - s\, x(0) - x'(0) + 2\left( s L(x) - x(0) \right) + 2L(x) = 0$$
$$s^2 L(x) - s + 1 + 2\left( s L(x) - 1 \right) + 2L(x) = 0$$
$$(s^2 + 2s + 2)\, L(x) = s + 1.$$

From here

$$L(x) = \frac{s+1}{s^2 + 2s + 2} = \frac{s+1}{(s+1)^2 + 1}$$

and

$$x = L^{-1}\left( \frac{s+1}{(s+1)^2 + 1} \right) = e^{-t}\, L^{-1}\left( \frac{s}{s^2 + 1} \right) = e^{-t} \cos t.$$
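The same initial value problem can be cross-checked with sympy's ODE solver (a sketch):

```python
# Solve x'' + 2x' + 2x = 0, x(0) = 1, x'(0) = -1 with dsolve.
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
sol = sp.dsolve(sp.Eq(x(t).diff(t, 2) + 2*x(t).diff(t) + 2*x(t), 0),
                x(t), ics={x(0): 1, x(t).diff(t).subs(t, 0): -1})
print(sol.rhs)   # e^{-t} cos t (possibly printed in an equivalent form)
```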

Example 2.3.35. Solve the initial value problem

$$\begin{cases} x' + x + 2y = 0 \\ y' + 2x - 2y = \sin t \end{cases} \qquad \text{with} \qquad \begin{cases} x(0) = 1 \\ y(0) = 0. \end{cases}$$

Applying the Laplace transform,

$$\begin{cases} L(x') + L(x) + 2L(y) = 0 \\ L(y') + 2L(x) - 2L(y) = L(\sin t) \end{cases} \qquad\Longrightarrow\qquad \begin{cases} (s+1) L(x) + 2L(y) = 1 \\ 2L(x) + (s-2) L(y) = \dfrac{1}{s^2+1}. \end{cases}$$

Solving this algebraic linear system,

$$L(x) = \frac{s^3 - 2s^2 + s - 4}{s^4 - s^3 - 5s^2 - s - 6}, \qquad L(y) = \frac{-2s^2 + s - 1}{s^4 - s^3 - 5s^2 - s - 6}.$$

To take the inverse Laplace transform of L(x), use partial fractions:

$$L(x) = \frac{22}{25(s+2)} + \frac{4}{25(s-3)} - \frac{s-7}{25(s^2+1)}.$$

From here

$$x = \frac{22}{25}\, e^{-2t} + \frac{4}{25}\, e^{3t} - \frac{\cos t - 7\sin t}{25}.$$

The function y is evaluated similarly (exercise).
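The computed x(t) can be verified symbolically (a sketch): the first equation determines y = −(x' + x)/2, and we then check the second equation and the initial conditions.

```python
# Verify the solution of Example 2.3.35.
import sympy as sp

t = sp.symbols('t')
x = (sp.Rational(22, 25)*sp.exp(-2*t) + sp.Rational(4, 25)*sp.exp(3*t)
     - (sp.cos(t) - 7*sp.sin(t))/25)
y = -(sp.diff(x, t) + x) / 2                                # first equation
print(sp.simplify(sp.diff(y, t) + 2*x - 2*y - sp.sin(t)))   # 0
print(x.subs(t, 0), sp.simplify(y.subs(t, 0)))              # 1 0
```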

Exercises

Exercise 2.1

1. Sketch the graph of

   f(x) = { 1 if x ∈ [2kπ, (2k+1)π);  2 if x ∈ [(2k+1)π, (2k+2)π) },  k ∈ Z,

   and check that it can be expanded in a Fourier series.

2. Find the Fourier series expansion of f(x). Sketch the first three partial sums.

3. Use the previous result to sum the series 1 − 1/3 + 1/5 − 1/7 + ...

Exercise 2.2

1. Expand f(x) = x, x ∈ [0, π], in a cosine series. Expand the same function in a sine series.

2. Use the above to sum the series ∑_{n=1}^{∞} 1/n²,  1 − 1/3 + 1/5 − 1/7 + 1/9 − ...,  and ∑_{n=1}^{∞} 1/n⁴.

Exercise 2.3

Sketch the graph and find the Fourier series expansion of the following periodic functions of period 2π:

1. f(x) = { 0 if x ∈ [−π, −π/2);  1 if x ∈ [−π/2, π/2];  0 if x ∈ (π/2, π] }

2. f(x) = |x|,  x ∈ (−π, π]

3. f(x) = x²,  x ∈ [−π, π]

4. f(x) = { x − x² if x ∈ [0, π);  x² − x if x ∈ [π, 2π) }

Exercise 2.4

Sketch the graph of the periodic function of period 2π

   f(x) = { −cos x if −π < x ≤ 0;  cos x if 0 < x ≤ π }.

Decide whether it admits a Fourier series expansion and, if so, find it.

Exercise 2.5

Let f(x) = sin(x/2) with 0 ≤ x ≤ 2π, periodic with period 2π. Find its Fourier series in complex form.

Exercise 2.6

Prove that if f̂(ω) = F(f(t)), then F(f̂(t)) = 2π f(−ω).

Exercise 2.7

(Inverse translation) Prove that if f̂(ω) = F(f(t)), then, for every real number k, F(e^{ikt} f(t)) = f̂(ω − k).

Exercise 2.8

For a > 0 and b ∈ R, find the Fourier transforms:

1. F( e^{ibt} / (a² + t²) ).
2. F( cos(bt) / (a² + t²) ).
3. F( (1 − t²) Π₂(t) ).

Exercise 2.9

Apply the definition of the Fourier transform in the second question of exercise 2.8 to find the value of the integral

   ∫_{−∞}^{∞} cos²(bt) / (1 + t²) dt.

Exercise 2.10

Use the solution of question 3 of exercise 2.8 to find the value of the integral

   ∫_0^{∞} ((x cos x − sin x)/x³) cos(x/2) dx.

Exercise 2.11

Prove that convolution is commutative, i.e. f * g = g * f.

Exercise 2.12

Use convolution to find the inverse transform f(t) = F^{−1}( sin ω / (ω(iω + 1)) ).

Exercise 2.13

Prove the following Fourier transform formulas:

1. F(δ^{(n)}(t)) = (iω)^n.
2. F(t) = 2πi δ'(ω).
3. F(t^n) = 2π i^n δ^{(n)}(ω).
4. F(1/t) = −πi sgn(ω) = πi − 2πi H(ω).
5. F(1/t^{n+1}) = ((−iω)^n / n!) (πi − 2πi H(ω)).

Exercise 2.14

Find the inverse Fourier transforms:

1. F^{−1}( 1/(−ω² + iω + 2) ).
2. F^{−1}( 1/(ω² − 2iω − 1) ).

Exercise 2.15

Justify the equality δ(t) = (1/π) ∫_0^{∞} cos(tu) du.

Exercise 2.16

Use the Fourier transform to solve the ODE x'' + 3x' + 2x = e^{−t}.

Exercise 2.17

Find the Laplace transform of each of the following functions:

1. f(t) = { 3 for 0 < t < 5;  0 for t > 5 }.

2. f(t) = e^{2t} cos² 3t − 3t² e^{3t}.
   Hint: you can use the equality 2 cos² a = 1 + cos 2a.

3. f(t) = cos(t − 2π/3) H(t − 2π/3) = { cos(t − 2π/3) for t > 2π/3;  0 for t < 2π/3 }.

Exercise 2.18

Prove that L( ∫_0^t (sin u)/u du ) = (1/s) arctan(1/s).
Hint: use propositions 2.3.14 and 2.3.15.

Exercise 2.19

If L(f(t)) = (s² − s + 1) / ((2s + 1)²(s − 1)), compute L(f(2t)).

Exercise 2.20

Prove L(t cos at) = (s² − a²) / (s² + a²)².

Exercise 2.21

Knowing L(f''(t)) = arctan(1/s) and f(0) = 2, f'(0) = 1, find L(f(t)).

Exercise 2.22

Let a, b be constants, b ≠ 0. Prove

   L(e^{at} f(bt)) = (1/b) F((s − a)/b)   with L(f(t)) = F(s).

Exercise 2.23

Compute the inverse Laplace transform of:

1. F(s) = (6s − 4) / (s² − 4s + 20)
2. F(s) = (s + 5) / ((s − 2)³(s + 3))
3. F(s) = 1 / (s²(s² + 3s − 4))
4. F(s) = s / ((s − 1)²(s² + 2s + 5))

Exercise 2.24

Use the convolution rule to find the following inverse Laplace transforms:

1. L^{−1}( s / (s² + a²)² )
2. L^{−1}( 1 / (s²(s + 1)²) )

Exercise 2.25

Solve the following ODEs using the Laplace method:

1. x'' + 4x = 9t with x(0) = 0 and x'(0) = 7.
2. x''' − x = e^t with x(0) = x'(0) = x''(0) = 0.
3. x'' + 4x = f(t) with x(0) = 0, x'(0) = 1 and f(t) = { 1 for 0 < t < 1;  0 for t > 1 }.
4. (1 − t)x' − tx = t with x(0) = 1.
   Hint: make the change y = (1 − t)x and study the new equation.

Exercise 2.26

Use the Laplace method to solve the following systems of differential equations:

1. { x' + y' = t;  x'' − 2y = e^t }  with x(0) = 3, x'(0) = 2, y(0) = 0.
2. { 3x' + y + 2x = 1;  x' + 4y' + 3y = 0 }  with x(0) = y(0) = 0.

Chapter 3

Partial Differential Equations


3.1.

Introduccin

Definicin. Sea U Rn un conjunto abierto y conexo. Se llama ecuacin en derivadas


parciales (EDP) a aquella ecuacin en la que la incgnita es un campo escalar definido
en U y en la que aparecen sus derivadas parciales de cualquier orden.
En este curso estudiaremos el caso para dos variables independientes (que las denotaremos por x e y), y la funcin incgnita la notaremos por u (u = u(x, y) con
(x, y) U R2 ).
Con esta definicin las EDP se pueden expresar de la siguiente forma:

F


u u 2 u 2 u 2 u
,
,
,
,
, . . . = 0 con (x, y) U R2
x, y, u,
x y x2 y 2 xy

o, escribiendo las parciales de forma simplificada:


F (x, y, u, ux , uy , uxx , uyy , uxy , . . . ) = 0 con (x, y) U R2

$$F(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}, \dots) = 0 \tag{3.1}$$

where F is the function that relates all these quantities to one another.

The order of a PDE is the highest order of partial differentiation that appears in the equation.

Note. In physical applications the independent variable y is often identified with time and denoted by t. In this way, for example, $u_t - a^2 u_{xx} = 0$ is a second-order partial differential equation known as the heat equation.

Example. A simple first-order PDE is the equation
$$\frac{\partial u}{\partial x} = 0 \tag{3.2}$$

Solutions. A solution of a PDE of order k is a scalar field u = f(x, y) of class $C^k$ that satisfies the equation when u and its partial derivatives are substituted into it. Finding the solutions is usually called integrating the equation.

Continuing with example (3.2) above, if u were a function of a single variable (an ordinary differential equation, ODE), its solution would be u(x) = c, with c constant; but if we regard u as a function of two variables, the constant c is no longer a true constant: we must understand it as a function that does not depend on the variable x. That is, integrating (3.2) we obtain the general solution
$$u(x, y) = f(y)$$
We thus observe that, just as arbitrary constants appeared in the general solution of an ODE, arbitrary functions appear in the solution of a PDE. The solutions of a PDE are restricted by the so-called boundary conditions or by initial conditions. For example, if we impose on equation (3.2) the restriction
$$u(x, x) = x$$
we are setting a boundary condition on the bisector of the first and third quadrants. In this case, the unique solution of the PDE (3.2) is
$$u(x, y) = y$$
In the literature, two types of restrictions or boundary conditions are usually given:

Cauchy conditions: generally given for partial differential equations in which time is involved. Thus, if u and $u_t$ are prescribed at t = 0, they are called initial conditions.

Dirichlet conditions: solutions u are sought in a given region $U \subset \mathbb{R}^2$ that take prescribed values at each point of the boundary $\partial U$ of the region.

3.1.1. Linear and quasilinear PDEs

We say that equation (3.1) is linear if the function F is linear with respect to u and to all the partial derivatives of u that appear in it.

A linear first-order PDE can then be written in the form
$$A_1(x,y)u + A_2(x,y)u_x + A_3(x,y)u_y = f(x,y)$$
where the $A_i$ are scalar fields defined on U.

Analogously, a linear second-order PDE can be written
$$A_1(x,y)u + A_2(x,y)u_x + A_3(x,y)u_y + A_4(x,y)u_{xx} + A_5(x,y)u_{xy} + A_6(x,y)u_{yy} = f(x,y)$$

We say that a linear PDE has constant coefficients if all the functions $A_i(x,y) = C_i$ are constant.

A linear PDE is homogeneous if every term of F contains u or one of its derivatives or, equivalently, if when written in one of the forms above it has f(x,y) = 0.

We say that equation (3.1) is quasilinear of first order if it can be written in the form
$$P(x,y,u)u_x + Q(x,y,u)u_y = R(x,y,u)$$
where P, Q, R are scalar fields defined on a region of $\mathbb{R}^3$. Analogously, quasilinear second-order equations have the form
$$P_1(x,y,u)u_x + P_2(x,y,u)u_y + P_3(x,y,u)u_{xx} + P_4(x,y,u)u_{xy} + P_5(x,y,u)u_{yy} = R(x,y,u)$$
where the $P_i$ are scalar fields defined on $U \subset \mathbb{R}^3$.

Note that linear PDEs are a particular case of quasilinear ones.

3.1.2. Some classical examples of PDEs

Transport equation. The one-dimensional transport equation
$$\frac{\partial u}{\partial t} + c\,\frac{\partial u}{\partial x} = 0,$$
where u is a function of time t and position x, is linear, of first order and homogeneous.

Light propagation equation. The equation
$$\left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial u}{\partial z}\right)^2 = n(x,y,z)$$
describes the propagation of light rays in a non-homogeneous medium with refractive index n(x,y,z). It is a first-order, nonlinear, non-homogeneous equation with three independent variables.

Cauchy-Riemann equations. A system of two linear, homogeneous first-order equations:
$$\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} = 0, \qquad \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} = 0,$$
which characterize the complex functions f(x+iy) = u(x,y) + iv(x,y) that are differentiable in the complex sense.

Wave equation. The equation
$$u_{tt} = c^2 u_{xx}$$
is satisfied by a function u(x,t) representing the oscillations of a string. It is a linear, homogeneous second-order equation that we will study in Section 3.3.1.

Heat diffusion equation. The equation
$$u_t = c^2 u_{xx}$$
describes the evolution of the temperature of a homogeneous bar of constant cross-section. It is also a linear, homogeneous second-order equation. We will study it in Section 3.3.2.

Laplace equation. The equation
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0$$
is satisfied by the potential u of the electric field in charge-free regions. It is another example of a linear, homogeneous second-order equation.

3.2. Solving first-order partial differential equations

In this section we study some methods of integration for first-order PDEs, that is, PDEs of the form
$$F(x, y, u, u_x, u_y) = 0$$

3.2.1. Direct integration

Example. Let us solve the equation $\dfrac{\partial u}{\partial x} = 3x^2 + 2y^2 - 1$. A single integration leads to
$$u(x,y) = x^3 + 2xy^2 - x + f(y)$$
where f(y) is a function of y to be determined.

Example. Let us solve the equation $\dfrac{\partial u}{\partial y} + u = e^{xy}$. Since only the partial derivative with respect to y appears, we can solve this PDE as if it were an ODE (in y), $u' + u = e^{xy}$, which is linear and non-homogeneous and has general solution
$$u(x,y) = \frac{e^{xy}}{x+1} + c(x)e^{-y}.$$
Observe that the constant that appears when solving the ODE becomes a function depending on x when we solve the PDE.
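As a quick sanity check (not part of the original notes), both general solutions obtained by direct integration can be verified symbolically with Python's sympy, using arbitrary symbolic functions for the "constants" f(y) and c(x):

```python
# A minimal sketch: verifying the two direct-integration examples with sympy.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')   # arbitrary function of y
c = sp.Function('c')   # arbitrary function of x

# First example: u_x = 3x^2 + 2y^2 - 1
u1 = x**3 + 2*x*y**2 - x + f(y)
assert sp.simplify(sp.diff(u1, x) - (3*x**2 + 2*y**2 - 1)) == 0

# Second example: u_y + u = e^{xy}
u2 = sp.exp(x*y)/(x + 1) + c(x)*sp.exp(-y)
assert sp.simplify(sp.diff(u2, y) + u2 - sp.exp(x*y)) == 0
print("both general solutions check out")
```

Note that the arbitrary functions pass through the check untouched, which is exactly the point: any choice of f and c yields a solution.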

3.2.2. Method of separation of variables

The idea of the method of separation of variables is to assume that the solution u(x,y) of a PDE in two variables is a product of two one-variable functions,
$$u(x,y) = \phi(x)\,\psi(y)$$
Sometimes a simple substitution leads to two ordinary differential equations, one in $\phi(x)$ and one in $\psi(y)$, which can be solved and allow us to reconstruct the sought solution.

Example. Let us solve the following PDE with a Cauchy condition:
$$\frac{\partial u}{\partial x} = 2\,\frac{\partial u}{\partial y}, \qquad u(0,y) = e^{y}$$
Assuming separated variables we have
$$\phi'(x)\psi(y) = 2\phi(x)\psi'(y) \;\Longrightarrow\; \frac{\phi'(x)}{\phi(x)} = 2\,\frac{\psi'(y)}{\psi(y)}$$
Since this equality holds for every x and y, both sides must be equal to a constant k; therefore
$$\left.\begin{aligned} \frac{\phi'(x)}{\phi(x)} &= k \;\Longrightarrow\; \phi(x) = C_1 e^{kx} \\ \frac{\psi'(y)}{\psi(y)} &= \frac{k}{2} \;\Longrightarrow\; \psi(y) = C_2 e^{\frac{k}{2}y} \end{aligned}\right\} \Longrightarrow u(x,y) = \phi(x)\psi(y) = C e^{k\left(x+\frac{y}{2}\right)}$$
and from the Cauchy condition $u(0,y) = C e^{k\frac{y}{2}} = e^{y} \Rightarrow C = 1$ and $k = 2$, hence
$$u(x,y) = e^{2x+y}$$
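A short sympy check (an addition to the notes, not part of them) confirms the separated-variables solution above:

```python
# Minimal sketch: verify u(x, y) = exp(2x + y) against the PDE u_x = 2 u_y
# and the Cauchy condition u(0, y) = e^y.
import sympy as sp

x, y = sp.symbols('x y')
u = sp.exp(2*x + y)

pde_residual = sp.simplify(sp.diff(u, x) - 2*sp.diff(u, y))
cauchy_ok = sp.simplify(u.subs(x, 0) - sp.exp(y)) == 0

assert pde_residual == 0 and cauchy_ok
print("u(x, y) = exp(2x + y) satisfies the PDE and the Cauchy condition")
```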

3.2.3. Method of characteristics

To solve a quasilinear first-order PDE in two variables
$$P(x,y,u)u_x + Q(x,y,u)u_y = R(x,y,u)$$
consider the level surfaces in space z = u(x,y). From a geometric point of view, the equation can be interpreted as saying that the vector (P(x,y,z), Q(x,y,z), R(x,y,z)) is orthogonal to the vector $(u_x, u_y, -1)$, which is the gradient of the scalar field f(x,y,z) = u(x,y) - z. It follows that the vector (P(x,y,z), Q(x,y,z), R(x,y,z)) is proportional to the tangent vectors of the curves contained in the level surface z = u(x,y); in other words,
$$\frac{dx}{P(x,y,z)} = \frac{dy}{Q(x,y,z)} = \frac{dz}{R(x,y,z)}$$
The method therefore consists of determining the curves tangent to the vector field F = (P, Q, R), called characteristic curves, and finding the field u(x,y) that these curves define. Writing
$$\frac{dx}{P(x,y,z)} = \frac{dy}{Q(x,y,z)} = \frac{dz}{R(x,y,z)} = dt$$
leads to the system of differential equations
$$\frac{dx}{dt} = P(x,y,z), \qquad \frac{dy}{dt} = Q(x,y,z), \qquad \frac{dz}{dt} = R(x,y,z)$$
Solving the system gives the parametric solution of the equation, or general parametric solution of the system:
$$x = x(c_1, c_2, c_3, t), \qquad y = y(c_1, c_2, c_3, t), \qquad z = z(c_1, c_2, c_3, t)$$
where $c_1, c_2, c_3$ are constants of integration.

Generally, from the initial conditions one can describe a curve, called the directrix curve, (f(s), g(s), h(s)) in terms of another parameter $s \in I$; imposing the initial conditions at t = 0 allows us to define the constants in terms of s, that is,
$$x(c_1, c_2, c_3, 0) = f(s), \qquad y(c_1, c_2, c_3, 0) = g(s), \qquad z(c_1, c_2, c_3, 0) = h(s)$$
Eliminating the constants leaves an expression
$$x = x(s,t), \qquad y = y(s,t), \qquad z = z(s,t)$$
which represents the solution in parametric form. Eliminating t from the first two equations, for x and y, gives an expression $\Phi(x, y, s) = 0$ for the projection of the characteristic curves; finally, eliminating the parameter s yields z = u(x,y), an explicit form of the solution of the PDE we posed.

Note. Sometimes, from the first equation $\frac{dx}{P(x,y,z)} = \frac{dy}{Q(x,y,z)}$ one can obtain the projections of the characteristics onto the XY plane in the form
$$g(x,y) = s$$
and, with the other equation, find an expression u(x, y, h(s)) = z depending on x, on y and on a certain function h(s) of s which, as we said, also depends on x, y.
Example. Let us solve the equation $x\,\dfrac{\partial u}{\partial x} + y\,\dfrac{\partial u}{\partial y} = 3u$ by this method. We have to solve the system of differential equations
$$\frac{dx}{x} = \frac{dy}{y} = \frac{dz}{3z} = dt$$
Solving it, we obtain the characteristic curves:
$$\frac{dx}{dt} = x \Rightarrow x = c_1 e^{t}, \qquad \frac{dy}{dt} = y \Rightarrow y = c_2 e^{t}, \qquad \frac{dz}{dt} = 3z \Rightarrow z = c_3 e^{3t}$$
From the first two equations we see that $y = x \cdot \text{const.}$, so the projections of the characteristic curves are straight lines. We therefore take the arbitrary directrix curve x = 1, y = s, z = h(s); at t = 0 this gives $c_1 = 1$, $c_2 = s$ and $c_3 = h(s)$. Hence
$$\left.\begin{aligned} x &= e^{t} \\ y &= s\,e^{t} \\ z &= h(s)\,e^{3t} \end{aligned}\right\} \Longrightarrow \begin{aligned} y &= sx \\ z &= h(s)\,x^3 \end{aligned} \;\Longrightarrow\; z = h\!\left(\frac{y}{x}\right) x^3$$
so the general solution of the PDE obtained by this method is
$$u(x,y) = h\!\left(\frac{y}{x}\right) x^3$$

Exercise. Check that $u(x,y) = h\!\left(\frac{y}{x}\right)x^3$ is a solution of the equation $x u_x + y u_y = 3u$.
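The exercise can also be settled symbolically; a sketch (not from the notes) with sympy, keeping h as an arbitrary symbolic function:

```python
# Minimal sketch: u = h(y/x) * x**3 solves x*u_x + y*u_y = 3u for any h.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
h = sp.Function('h')

u = h(y/x) * x**3
residual = sp.simplify(x*sp.diff(u, x) + y*sp.diff(u, y) - 3*u)
assert residual == 0
print("x*u_x + y*u_y - 3u simplifies to 0")
```

The terms containing the derivative of h cancel identically, which is why the check works without specifying h.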
Example. Let us solve the equation $u_x - u_y = 1$ with the Cauchy condition $u(x,0) = \sin x$. In this case,
$$dx = -dy = dz$$
so the characteristic curves are
$$x = c_1 + t, \qquad y = c_2 - t, \qquad z = c_3 + t$$
From the initial conditions we take the directrix curve x = s, y = 0, $z = u(x,0) = \sin s$; hence, at t = 0, $c_1 = s$, $c_2 = 0$, $c_3 = \sin s$. Substituting,
$$x = s + t, \qquad y = -t, \qquad z = \sin s + t$$
Eliminating t from the first two equations gives the projection onto the XY plane of the characteristic curves, x + y = s, which are straight lines. Finally, substituting t and s into the last equation we obtain the solution z = u(x,y) of the equation:
$$u(x,y) = \sin(x+y) - y$$

Exercise. Check that the field $u(x,y) = \sin(x+y) - y$ is a solution of the Cauchy problem $u_x - u_y = 1$, with $u(x,0) = \sin x$.
Example. Let us solve the following PDE with a Dirichlet condition:
$$y\,u_x + x\,u_y = xy, \qquad u(x,y) = 0 \text{ on the circle of radius } 1.$$
We then have
$$\frac{dx}{y} = \frac{dy}{x} = \frac{dz}{xy}$$
From the first equation, $x\,dx = y\,dy$, which integrates to $x^2 = y^2 + s$. From the last one, $dz = y\,dy$, so $z = \frac{y^2}{2} + h(s)$. Therefore the general solution is
$$u(x,y) = \frac{y^2}{2} + h(x^2 - y^2)$$
The Dirichlet condition lets us find the function h, since u(x,y) = 0 when $x^2 + y^2 = 1$; from this,
$$\frac{y^2}{2} + h(1 - y^2 - y^2) = 0 \;\Longrightarrow\; h(1 - 2y^2) = -\frac{y^2}{2}$$
Changing the variable to $w = 1 - 2y^2$ gives $h(w) = \frac{w-1}{4}$, and substituting we obtain the sought solution
$$u(x,y) = \frac{y^2}{2} + \frac{x^2 - y^2 - 1}{4} = \frac{x^2 + y^2 - 1}{4}$$
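A sympy sketch (not part of the notes) verifying both the PDE and the Dirichlet condition for the solution just found:

```python
# Minimal sketch: u = (x**2 + y**2 - 1)/4 solves y*u_x + x*u_y = x*y
# and vanishes on the unit circle (parametrized by cos t, sin t).
import sympy as sp

x, y, t = sp.symbols('x y t')
u = (x**2 + y**2 - 1)/4

residual = sp.simplify(y*sp.diff(u, x) + x*sp.diff(u, y) - x*y)
on_circle = sp.simplify(u.subs({x: sp.cos(t), y: sp.sin(t)}))
assert residual == 0 and on_circle == 0
print("PDE and boundary condition verified")
```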

3.3. Second-order partial differential equations

In this section we focus on (linear) second-order partial differential equations. More concretely, we will study the wave equation and the heat diffusion equation, equations of great importance in physics.

The method we will use to solve both the wave equation and the heat diffusion equation is the method of separation of variables. The idea of this method, as we saw in the previous section, is to look for a solution of the form u(x,y) = f(x)g(y). We distinguish the following steps:

STEP 1. Obtain two ordinary differential equations.

STEP 2. Find the solutions of the two ordinary differential equations obtained in STEP 1 that satisfy the boundary conditions.

STEP 3. Form an appropriate linear combination of the solutions found in STEP 2 so that the initial conditions of the problem are satisfied.

3.3.1. The wave equation

The equation
$$u_{xx} - \frac{1}{c^2}u_{tt} = 0 \tag{3.3}$$
models the vibrations of a string stretched between two points x = 0 and x = ℓ; for example, a guitar string. The motion takes place in the xy plane in such a way that each point of the string moves perpendicularly to the x axis. Let u(x,t) be the displacement of the string, measured from the x axis, at time t > 0, with the boundary conditions
$$u(0,t) = 0, \qquad u(\ell,t) = 0 \quad \text{for all } t, \tag{3.4}$$
that is, our string is held fixed at the endpoints x = 0, x = ℓ. In addition, we consider the following initial conditions (at time t = 0):
$$u(x,0) = \varphi(x) \;\text{(initial shape of the string)}, \qquad u_t(x,0) = \psi(x) \;\text{(initial velocity of the string)} \tag{3.5}$$

STEP 1.
We look for a solution of the form u(x,t) = f(x)g(t) other than the trivial one. Substituting $u_{xx}(x,t) = f''(x)g(t)$ and $u_{tt}(x,t) = f(x)g''(t)$ into equation (3.3) we obtain
$$f''(x)g(t) = \frac{1}{c^2}g''(t)f(x),$$
which allows us to write
$$\frac{f''(x)}{f(x)} = \frac{1}{c^2}\,\frac{g''(t)}{g(t)} = -\lambda$$
with λ constant (the minus sign is chosen for convenience).

This expression transforms into the following ordinary differential equations:
$$f''(x) + \lambda f(x) = 0, \qquad g''(t) + c^2\lambda\, g(t) = 0,$$
which, imposing the boundary conditions (3.4),
$$u(0,t) = f(0)g(t) = 0 \;\forall t \Rightarrow f(0) = 0, \qquad u(\ell,t) = f(\ell)g(t) = 0 \;\forall t \Rightarrow f(\ell) = 0,$$
become
$$f''(x) + \lambda f(x) = 0, \quad f(0) = f(\ell) = 0 \text{ and } f \neq 0; \qquad g''(t) + c^2\lambda\, g(t) = 0, \quad g \neq 0. \tag{3.6}$$

STEP 2.
Let us determine the solutions of (3.6) that satisfy the boundary conditions. We begin by finding the values of the parameter λ for which the equation
$$f'' + \lambda f = 0, \qquad f(0) = f(\ell) = 0, \tag{3.7}$$
has nontrivial solutions. By analogy with what is studied in linear algebra, λ will be called an eigenvalue and the nontrivial solutions eigenfunctions. This problem is known as a Sturm-Liouville problem.

Next we study the solutions of (3.7) according to the eigenvalues λ:

For λ < 0, the general solution of (3.7) has the form
$$f(x) = C_1 e^{\sqrt{-\lambda}\,x} + C_2 e^{-\sqrt{-\lambda}\,x}$$
Imposing the boundary conditions we obtain
$$f(0) = C_1 + C_2 = 0, \qquad f(\ell) = C_1 e^{\sqrt{-\lambda}\,\ell} + C_2 e^{-\sqrt{-\lambda}\,\ell} = 0,$$
whence $C_1 = C_2 = 0$, and the only solution is the trivial one.

For λ = 0, the general solution of (3.7) is $f(x) = C_1 x + C_2$. Again, imposing the boundary conditions yields $C_1 = C_2 = 0$, which gives the trivial solution.

For λ > 0, the general solution of (3.7) is
$$f(x) = C_1\cos(\sqrt{\lambda}\,x) + C_2\sin(\sqrt{\lambda}\,x).$$
We impose the boundary conditions:
$$f(0) = C_1 = 0, \qquad f(\ell) = C_2\sin(\sqrt{\lambda}\,\ell) = 0.$$
We will have solutions other than the trivial one when
$$\sin(\sqrt{\lambda}\,\ell) = 0 \;\Longrightarrow\; \sqrt{\lambda}\,\ell = n\pi \;\Longrightarrow\; \lambda = \lambda_n = \left(\frac{n\pi}{\ell}\right)^2$$

Therefore, the nontrivial solutions of (3.7) are given by the eigenfunctions
$$f_n(x) = \sin\left(\frac{n\pi}{\ell}x\right), \quad \text{for the eigenvalues } \lambda_n = \left(\frac{n\pi}{\ell}\right)^2.$$

Setting $\lambda = \lambda_n = \left(\frac{n\pi}{\ell}\right)^2$, we obtain the solutions of the second equation of (3.6):
$$g_n(t) = A_n\cos\left(\frac{n\pi c}{\ell}t\right) + B_n\sin\left(\frac{n\pi c}{\ell}t\right).$$
Multiplying both, we obtain solutions of the wave equation we are looking for:
$$u_n(x,t) = f_n(x)g_n(t) = \left(A_n\cos\left(\frac{n\pi c}{\ell}t\right) + B_n\sin\left(\frac{n\pi c}{\ell}t\right)\right)\sin\left(\frac{n\pi}{\ell}x\right).$$
Since the wave equation (3.3) is a linear homogeneous equation, the linear combination
$$u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos\left(\frac{n\pi c}{\ell}t\right) + B_n\sin\left(\frac{n\pi c}{\ell}t\right)\right)\sin\left(\frac{n\pi}{\ell}x\right) \tag{3.8}$$
will also be a solution of (3.3) satisfying the boundary conditions.
STEP 3.
We require that (3.8) satisfy the initial conditions (3.5):
$$u(x,0) = \varphi(x) = \sum_{n=1}^{\infty} u_n(x,0) = \sum_{n=1}^{\infty} A_n\sin\left(\frac{n\pi}{\ell}x\right), \tag{3.9}$$
$$u_t(x,0) = \psi(x) = \sum_{n=1}^{\infty}\frac{\partial u_n}{\partial t}(x,0) = \sum_{n=1}^{\infty}\frac{n\pi c}{\ell}B_n\sin\left(\frac{n\pi}{\ell}x\right). \tag{3.10}$$

Figure 3.1: Solution u(x,t) for c = 2.

Observe that (3.9) and (3.10) are Fourier sine expansions. Therefore, if φ(x) and ψ(x) admit Fourier sine expansions, the initial conditions will be satisfied if
$$\varphi(x) = \sum_{n=1}^{\infty} A_n\sin\left(\frac{n\pi}{\ell}x\right) \;\Longrightarrow\; A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\left(\frac{n\pi}{\ell}x\right)dx,$$
and
$$\psi(x) = \sum_{n=1}^{\infty}\frac{n\pi}{\ell}c\,B_n\sin\left(\frac{n\pi}{\ell}x\right) \;\Longrightarrow\; B_n = \frac{2}{n\pi c}\int_0^{\ell}\psi(x)\sin\left(\frac{n\pi}{\ell}x\right)dx.$$

Among other applications, the wave equation governs the propagation of pressure waves (sound) and of electromagnetic waves, so it appears frequently in science and engineering.
Example. Let us solve the following wave equation problem:
$$u_{xx} - \frac{1}{c^2}u_{tt} = 0, \qquad 0 < x < \pi, \quad t \geq 0$$
$$\text{B.c.: } u(0,t) = 0, \quad u(\pi,t) = 0, \qquad t \geq 0$$
$$\text{I.c.: } u(x,0) = 0, \quad u_t(x,0) = 2, \qquad 0 < x < \pi$$
Substituting ℓ = π in the expressions obtained above we have
$$u_n(x,t) = \left(A_n\cos(nct) + B_n\sin(nct)\right)\sin(nx),$$
with
$$A_n = \frac{2}{\pi}\int_0^{\pi}\varphi(x)\sin(nx)\,dx, \qquad B_n = \frac{2}{n\pi c}\int_0^{\pi}\psi(x)\sin(nx)\,dx$$
Moreover, since $u(x,0) = 0 = \varphi(x)$ and $u_t(x,0) = 2 = \psi(x)$, we have
$$A_n = \frac{2}{\pi}\int_0^{\pi}\varphi(x)\sin(nx)\,dx = 0$$
$$B_n = \frac{2}{n\pi c}\int_0^{\pi}2\sin(nx)\,dx = \frac{4}{n\pi c}\left[-\frac{\cos(nx)}{n}\right]_0^{\pi} = \frac{4}{n^2\pi c}\left(-\cos(n\pi)+1\right) = \frac{4}{n^2\pi c}\left(1-(-1)^n\right)$$
Therefore, the required solution is
$$u(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos(nct) + B_n\sin(nct)\right)\sin(nx) = \sum_{n=1}^{\infty}\frac{4}{n^2\pi c}\left(1-(-1)^n\right)\sin(nct)\sin(nx)$$
See Figure 3.1.
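A numerical sanity check (not from the notes) of the coefficients just computed: differentiating the series term by term at t = 0 gives $u_t(x,0) = \sum n c B_n \sin(nx)$, which should reproduce the constant initial velocity 2 on (0, π):

```python
# Minimal sketch: partial sum of u_t(x, 0) for the series solution with
# B_n = 4*(1 - (-1)**n)/(n**2 * pi * c); should approximate psi(x) = 2.
import math

c = 2.0  # the value used in Figure 3.1 (any c > 0 works here)

def ut_at_zero(x, terms=20001):
    """Partial sum of u_t(x, 0) = sum_n n*c*B_n*sin(n*x)."""
    total = 0.0
    for n in range(1, terms):
        B = 4*(1 - (-1)**n)/(n**2 * math.pi * c)
        total += n*c*B*math.sin(n*x)
    return total

approx = ut_at_zero(math.pi/2)
assert abs(approx - 2.0) < 1e-2
print(f"u_t(pi/2, 0) ~ {approx:.4f} (expected 2)")
```

At x = π/2 the sum reduces to the Leibniz series $(8/\pi)(1 - 1/3 + 1/5 - \cdots) = 2$, so convergence is easy to observe.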

Example. A guitar string of length ℓ is fixed at its endpoints. The string is plucked at x = a, displacing it a distance h. Find the shape of the string at any instant after the pluck.

We must solve the wave equation
$$u_{xx} - \frac{1}{c^2}u_{tt} = 0,$$
with initial velocity $u_t(x,0) = \psi(x) = 0$ and with the initial shape of the string a triangle with vertices at x = 0, x = a and x = ℓ. This situation is expressed mathematically by saying that at time t = 0 the shape of the string is given by the function
$$u(x,0) = \varphi(x) = \begin{cases} \dfrac{hx}{a} & \text{if } 0 \leq x \leq a, \\[2mm] \dfrac{h(\ell-x)}{\ell-a} & \text{if } a \leq x \leq \ell. \end{cases}$$
According to what we saw before, the solution is
$$u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos\left(\frac{n\pi}{\ell}ct\right) + B_n\sin\left(\frac{n\pi}{\ell}ct\right)\right)\sin\left(\frac{n\pi}{\ell}x\right),$$
with
$$A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\left(\frac{n\pi x}{\ell}\right)dx, \qquad B_n = \frac{2}{n\pi c}\int_0^{\ell}\psi(x)\sin\left(\frac{n\pi x}{\ell}\right)dx.$$
Now, since in our case ψ(x) = 0, all the coefficients $B_n$ are zero. For the $A_n$ we have
$$A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\left(\frac{n\pi x}{\ell}\right)dx = \frac{2}{\ell}\int_0^{a}\frac{hx}{a}\sin\left(\frac{n\pi x}{\ell}\right)dx + \frac{2}{\ell}\int_a^{\ell}\frac{h(\ell-x)}{\ell-a}\sin\left(\frac{n\pi x}{\ell}\right)dx$$
and, integrating by parts and simplifying,
$$A_n = \frac{2h\ell^2}{a(\ell-a)\pi^2 n^2}\sin\left(\frac{n\pi a}{\ell}\right)$$
Therefore, the shape of the string at time t is given by
$$u(x,t) = \frac{2h\ell^2}{a(\ell-a)\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\sin\left(\frac{n\pi a}{\ell}\right)\cos\left(\frac{n\pi ct}{\ell}\right)\sin\left(\frac{n\pi x}{\ell}\right)$$
See Figure 3.2.

Figure 3.2: Solution u(x,t) for c = 2, with ℓ = 4, a = 1.5 and h = 1.
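The closed form for the coefficients of the plucked string can be checked numerically (a sketch added to the notes, using the parameter values of Figure 3.2):

```python
# Minimal sketch: compare A_n = 2*h*l**2/(a*(l-a)*pi**2*n**2)*sin(n*pi*a/l)
# with a midpoint-rule integration of (2/l)*phi(x)*sin(n*pi*x/l).
import math

l, a, h = 4.0, 1.5, 1.0  # values used in Figure 3.2

def phi(x):
    """Initial triangular shape of the plucked string."""
    return h*x/a if x <= a else h*(l - x)/(l - a)

def A_closed(n):
    return 2*h*l**2/(a*(l - a)*math.pi**2*n**2) * math.sin(n*math.pi*a/l)

def A_numeric(n, steps=100000):
    dx = l/steps
    return (2/l)*sum(phi((k + 0.5)*dx)*math.sin(n*math.pi*(k + 0.5)*dx/l)*dx
                     for k in range(steps))

for n in (1, 2, 3):
    assert abs(A_closed(n) - A_numeric(n)) < 1e-6
print("closed-form A_n matches numerical integration for n = 1, 2, 3")
```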

3.3.2. The heat diffusion equation

In this section we find a nontrivial solution of the equation
$$u_t = c^2 u_{xx}, \tag{3.11}$$
which describes the temperature of a homogeneous bar of constant cross-section. We assume that the surface of the bar is insulated, so that heat flows only lengthwise, and we place the bar along the OX axis. We look for a solution with the boundary conditions
$$u(0,t) = 0, \qquad u(\ell,t) = 0, \quad \text{for all } t,$$
that is, the ends of the bar are kept at temperature 0. We also consider the initial condition $u(x,0) = \varphi(x)$, which gives the initial temperature distribution in the bar.

STEP 1.
We look for a solution of the form
$$u(x,t) = f(x)g(t),$$
other than the trivial one. Substituting into equation (3.11) gives
$$\frac{1}{c^2}\,\frac{g'(t)}{g(t)} = \frac{f''(x)}{f(x)} = -\lambda, \qquad \text{with } \lambda \text{ constant},$$
whence, rearranging,
$$f''(x) + \lambda f(x) = 0, \quad f(0) = f(\ell) = 0, \qquad g'(t) + c^2\lambda\, g(t) = 0. \tag{3.12}$$

STEP 2.
Once again we face the Sturm-Liouville problem
$$f'' + \lambda f = 0, \qquad f(0) = f(\ell) = 0.$$
From the calculations of the previous section we know that this problem has nontrivial solutions for
$$\lambda_n = \left(\frac{n\pi}{\ell}\right)^2, \quad n = 1, 2, \ldots$$
with solutions $f_n(x) = \sin\left(\frac{n\pi}{\ell}x\right)$. Substituting this value of $\lambda_n$ into equation (3.12) gives
$$g' + c^2\left(\frac{n\pi}{\ell}\right)^2 g = 0$$
with solution $g_n(t) = A_n e^{-c^2\left(\frac{n\pi}{\ell}\right)^2 t}$. Therefore, the functions
$$u_n(x,t) = A_n e^{-c^2\left(\frac{n\pi}{\ell}\right)^2 t}\sin\left(\frac{n\pi}{\ell}x\right)$$
are particular solutions of equation (3.11) which satisfy the boundary conditions.

Figure 3.3: (a) Solution (3.13). (b) Solution (3.14).

STEP 3.
We consider
$$u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty} A_n\sin\left(\frac{n\pi}{\ell}x\right)e^{-c^2\left(\frac{n\pi}{\ell}\right)^2 t}$$
and, imposing the initial condition u(x,0) = φ(x), we obtain
$$u(x,0) = \sum_{n=1}^{\infty} u_n(x,0) = \sum_{n=1}^{\infty} A_n\sin\left(\frac{n\pi}{\ell}x\right) = \varphi(x).$$
Identifying the $A_n$ with the coefficients of the Fourier sine expansion of the function φ(x) on the interval (0, ℓ), we have
$$A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\left(\frac{n\pi}{\ell}x\right)dx$$
The heat equation also governs the spontaneous demagnetization of magnetized materials and the dissolution of a solute in a solvent.
Example. Let us solve the heat equation for the function
$$u_t = c^2 u_{xx}, \qquad 0 < x < \pi, \qquad u(x,0) = \varphi(x) = \sin(2x) + 5\sin(6x).$$
Taking into account that in this case ℓ = π, the expression obtained for φ(x) becomes
$$\varphi(x) = \sum_{n=1}^{\infty} A_n\sin\left(\frac{n\pi}{\pi}x\right) = \sum_{n=1}^{\infty} A_n\sin(nx) = \sin(2x) + 5\sin(6x)$$
and we deduce that all the coefficients $A_n$ are 0 except $A_2 = 1$ and $A_6 = 5$. Therefore, the solution of the heat equation is the function
$$u(x,t) = \sum_{n=1}^{\infty} A_n\sin(nx)\,e^{-c^2 n^2 t} = \sin(2x)e^{-4c^2 t} + 5\sin(6x)e^{-36c^2 t} \tag{3.13}$$
See Figure 3.3a.
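A sympy check (an addition, not part of the notes) that (3.13) is indeed a solution:

```python
# Minimal sketch: verify that (3.13) satisfies u_t = c**2 * u_xx and the
# initial condition u(x, 0) = sin(2x) + 5*sin(6x).
import sympy as sp

x, t, c = sp.symbols('x t c')
u = sp.sin(2*x)*sp.exp(-4*c**2*t) + 5*sp.sin(6*x)*sp.exp(-36*c**2*t)

residual = sp.simplify(sp.diff(u, t) - c**2*sp.diff(u, x, 2))
initial = sp.simplify(u.subs(t, 0) - (sp.sin(2*x) + 5*sp.sin(6*x)))
assert residual == 0 and initial == 0
print("(3.13) satisfies the heat equation and u(x, 0)")
```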


Example. Let us find the temperature u(x,t) in a rod of length π if its ends are kept at temperature zero for all time and the initial temperature distribution is given by
$$u(x,0) = \varphi(x) = \begin{cases} 0 & \text{if } 0 < x < \frac{\pi}{2}, \\ 1 & \text{if } \frac{\pi}{2} < x < \pi. \end{cases}$$
Take the thermal diffusivity of the rod to be $c^2 = 1$.

We know that u(0,t) = u(π,t) = 0, and that
$$A_n = \frac{2}{\pi}\int_0^{\pi}\varphi(x)\sin\left(\frac{n\pi}{\pi}x\right)dx = \frac{2}{\pi}\int_{\pi/2}^{\pi}\sin(nx)\,dx = \frac{2}{\pi}\left[-\frac{\cos(nx)}{n}\right]_{\pi/2}^{\pi} = \frac{2}{n\pi}\left(\cos\left(\frac{n\pi}{2}\right) - \cos(n\pi)\right)$$
Therefore
$$u(x,t) = \sum_{n=1}^{\infty}\frac{2}{n\pi}\left(\cos\left(\frac{n\pi}{2}\right) - \cos(n\pi)\right)\sin(nx)\,e^{-n^2 t} \tag{3.14}$$
is the solution to our problem. See Figure 3.3b.
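The coefficients of (3.14) can be cross-checked numerically (a sketch, not from the notes) against a direct integration of the step-shaped initial temperature:

```python
# Minimal sketch: A_n = (2/(n*pi))*(cos(n*pi/2) - cos(n*pi)) versus a
# midpoint-rule integration of (2/pi)*phi(x)*sin(n*x).
import math

def phi(x):
    return 0.0 if x < math.pi/2 else 1.0

def A_closed(n):
    return 2/(n*math.pi)*(math.cos(n*math.pi/2) - math.cos(n*math.pi))

def A_numeric(n, steps=100000):
    dx = math.pi/steps
    return (2/math.pi)*sum(phi((k + 0.5)*dx)*math.sin(n*(k + 0.5)*dx)*dx
                           for k in range(steps))

for n in range(1, 6):
    assert abs(A_closed(n) - A_numeric(n)) < 1e-5
print("closed-form A_n matches numerical integration for n = 1..5")
```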

Exercises

Exercise 3.1
Eliminate the constants from the following two-parameter families of surfaces to obtain a partial differential equation having each family as a solution:
1. $z = ax + by + ab$.
2. $z = (x-a)^2 + (y-b)^2$.

Exercise 3.2
Find the linear partial differential equation that has as solution:
1. $\Phi\!\left(y - x^2,\; \dfrac{y+x}{z}\right) = 0$.
2. $z^2 = x^2 + \Phi(y^2 - x^2)$.
where Φ is an arbitrary function, differentiable with respect to its arguments.

Exercise 3.3
Find the general solution of the following partial differential equations:
1. $x\,\dfrac{\partial z}{\partial x} + y\,\dfrac{\partial z}{\partial y} = 3z$.
2. $(x^2+y^2)\,\dfrac{\partial z}{\partial x} + 2xy\,\dfrac{\partial z}{\partial y} = 0$.
3. $(x+y)\,\dfrac{\partial z}{\partial x} + (x-y)\,\dfrac{\partial z}{\partial y} = \dfrac{y^2 - 2xy - x^2}{x}$.

Exercise 3.4
Find the surface that satisfies $4yz\,\dfrac{\partial z}{\partial x} + \dfrac{\partial z}{\partial y} + 2y = 0$ and contains the curve $x^2 + z^2 = 2$, $y + z = 1$.

Exercise 3.5
Find the equation of all the surfaces whose tangent planes pass through the point (0, 0, 1).

Exercise 3.6
Find the equation of the surface such that, at each point P, the normal vector is orthogonal to the vector joining P to the origin, and which contains the curve $\begin{cases} z = 1 \\ x^2 + y^2 = 1 \end{cases}$.

Exercise 3.7
Find the general solution of the partial differential equations:
1. $\dfrac{\partial^2 z}{\partial x^2} - \dfrac{\partial^2 z}{\partial x\,\partial y} - 6\,\dfrac{\partial^2 z}{\partial y^2} = 0$.
2. $\dfrac{\partial^2 z}{\partial x^2} - 2\,\dfrac{\partial^2 z}{\partial x\,\partial y} + \dfrac{\partial^2 z}{\partial y^2} + 2\,\dfrac{\partial z}{\partial x} + 2\,\dfrac{\partial z}{\partial y} = 0$.
3. $\dfrac{\partial^2 z}{\partial x^2} - 2\,\dfrac{\partial^2 z}{\partial x\,\partial y} + \dfrac{\partial^2 z}{\partial y^2} + 2\,\dfrac{\partial z}{\partial x} + 2\,\dfrac{\partial z}{\partial y} = 4xe^{2y}$.

Exercise 3.8
Use the method of separation of variables to find solutions of the equation:
$$4\,\frac{\partial^2 z}{\partial x^2} + \frac{\partial^2 z}{\partial y^2} - 8\,\frac{\partial z}{\partial x} = 3e^{x+2y}$$

Exercise 3.9
Find a nontrivial solution of the problems:
1. $\dfrac{\partial^2 z}{\partial t^2} = 4\,\dfrac{\partial^2 z}{\partial x^2}$, with $z(x,0) = 0$, $z_t(x,0) = 4x^3$.
2. $\dfrac{\partial^2 z}{\partial t^2} = 4\,\dfrac{\partial^2 z}{\partial x^2}$, with $z(x,0) = 0$, $z_t(x,0) = \frac{3}{40}\sin x - \frac{1}{40}\sin 3x$, $z(0,t) = z(\pi,t) = 0$.
3. $\dfrac{\partial z}{\partial t} = 2\,\dfrac{\partial^2 z}{\partial x^2}$, $t > 0$, with $z(x,0) = 6\sin x$, $z(0,t) = 0$.

Exercise 3.10
Use the Laplace transform to find the solution of:
1. $\dfrac{\partial^2 z}{\partial t\,\partial x} + \sin t = 0$, with $z(0,t) = 0$, $z(x,0) = x$.
2. $\dfrac{\partial^2 z}{\partial t\,\partial x} + \dfrac{\partial z}{\partial t} = 2t$, with $z(0,t) = t^2$, $z(x,0) = x$.

Exercise 3.11
Use the Laplace transform to find the solution of
$$\frac{\partial z}{\partial t} = 2\,\frac{\partial^2 z}{\partial x^2}, \qquad 0 < x < 3, \; t > 0,$$
with $z_x(0,t) = z_x(3,t) = 0$ and $z(x,0) = 4\cos\dfrac{2\pi x}{3} - 2\cos\dfrac{4\pi x}{3}$.

Chapter 4

Complex Variable I (Differentiation and Integration)

Definitions and basic properties of the complex numbers can be found in Appendix A, and the elementary complex functions in Appendix B.

4.1. Complex Differentiation

4.1.1. Accumulation Points and Limits

Let $z_0$ be a complex number. We call the set $\{z \in \mathbb{C} : |z - z_0| < \varepsilon\}$ the disk centered at $z_0$ of radius ε. A set $G \subseteq \mathbb{C}$ is called an open set if every point of G is the center of a disk completely contained in G. A set $G \subseteq \mathbb{C}$ is called a closed set if its complement is open.

An accumulation point of a set G of complex numbers is a complex number $z_0$ such that every disk centered at $z_0$ contains infinitely many elements of G different from $z_0$. In other words, for every ε > 0 there exist infinitely many numbers $z \in G$ such that $0 < |z - z_0| < \varepsilon$.

An accumulation point of $G \subseteq \mathbb{C}$ can also be interpreted as a number $z_0 \in \mathbb{C}$ for which there exists a sequence $\{z_n\}$ of elements of G converging to $z_0$, i.e. $z_n \to z_0$. It may belong to G or not.

On the opposite side of the accumulation point is the isolated point. This is a point $z_0 \in G$ for which there exists a disk centered at $z_0$ containing no points of G other than $z_0$ itself. In other words, $z_0 \in G$ is isolated if there exists ε > 0 such that $\{z \in G : 0 < |z - z_0| < \varepsilon\} = \emptyset$.

The definition of the limit of a function $f : G \subseteq \mathbb{C} \to \mathbb{C}$ is the same as the one found in most calculus books.

Definition 4.1.1. Suppose f is a complex function with domain G and $z_0$ is an accumulation point of G. Suppose there is a complex number $\omega_0$ such that for every ε > 0 we can find δ > 0 so that for all $z \in G$ satisfying $0 < |z - z_0| < \delta$ we have $|f(z) - \omega_0| < \varepsilon$. Then $\omega_0$ is the limit of f as z approaches $z_0$; in short,
$$\lim_{z \to z_0} f(z) = \omega_0$$
This definition does not require $z_0$ to be in the domain G of f, but we must be able to approach the point $z_0$ as closely as we want through points of G, on which the function f is well defined.

Example 4.1.2. The number $z_0 = i$ is not in $G = \mathbb{C} \setminus \{i, -i\}$, the domain of the function $f(z) = \frac{z-i}{z^2+1}$, but it is an accumulation point of G, and we can compute the limit
$$\lim_{z \to i}\frac{z-i}{z^2+1} = \lim_{z \to i}\frac{1}{z+i} = \frac{1}{2i} = -\frac{i}{2}$$

Example 4.1.3. The number $z_0 = 0$ is an accumulation point of the domain of $f(z) = \frac{\bar z}{z}$, but $\lim_{z \to 0}\frac{\bar z}{z}$ does not exist.
To see this, we try to compute this "limit" as z → 0 on the real and on the imaginary axis:
$$\lim_{z \to 0}\frac{\bar z}{z} = \lim_{x \to 0}\frac{x}{x} = 1, \qquad \lim_{z \to 0}\frac{\bar z}{z} = \lim_{yi \to 0}\frac{-yi}{yi} = -1$$
Hence, obviously, the limit does not exist.
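Example 4.1.3 is easy to reproduce numerically (a sketch, not from the notes): the quotient $\bar z/z$ approaches different values along different rays, so no single limit can exist at 0.

```python
# Minimal sketch: conj(z)/z along the real axis vs. the imaginary axis.
def quotient(z: complex) -> complex:
    return z.conjugate() / z

real_dir = quotient(1e-9 + 0j)   # approach along the real axis
imag_dir = quotient(1e-9j)       # approach along the imaginary axis

assert abs(real_dir - 1) < 1e-12
assert abs(imag_dir - (-1)) < 1e-12
print(real_dir, imag_dir)
```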
Definition 4.1.4. A complex function $f : G \subseteq \mathbb{C} \to \mathbb{C}$ is divergent at an accumulation point $z_0$ if for every M > 0 there exists δ > 0 such that for all $z \in G$ with $0 < |z - z_0| < \delta$ we have $|f(z)| > M$. This is represented as
$$\lim_{z \to z_0} f(z) = \infty$$

Example 4.1.5. To evaluate $\lim_{z \to 0}\frac{z}{|z^2|}$ we consider z in polar form, $z = re^{i\theta}$, so
$$\lim_{z \to 0}\frac{z}{|z^2|} = \lim_{r \to 0}\frac{re^{i\theta}}{r^2} = \lim_{r \to 0}\frac{e^{i\theta}}{r} = \infty$$
because $\frac{1}{r} \to +\infty$ and $|e^{i\theta}| = 1$, hence bounded, for every $\theta \in \mathbb{R}$.
We can also prove that f is divergent at 0 by writing
$$\lim_{z \to 0}\frac{z}{|z^2|} = \lim_{z \to 0}\frac{z}{z\bar z} = \lim_{z \to 0}\frac{1}{\bar z} = \infty$$
The following properties of limits are similar to those for real functions, and we leave the proof to the reader.

Proposition 4.1.6. Let f and g be complex functions and $c, z_0 \in \mathbb{C}$. If $\lim_{z \to z_0} f(z)$ and $\lim_{z \to z_0} g(z)$ exist, then:
1. $\lim_{z \to z_0}\left(f(z) + g(z)\right) = \lim_{z \to z_0} f(z) + \lim_{z \to z_0} g(z)$.
2. $\lim_{z \to z_0}\left(c\, f(z)\right) = c\,\lim_{z \to z_0} f(z)$.
3. $\lim_{z \to z_0}\left(f(z)\, g(z)\right) = \lim_{z \to z_0} f(z) \cdot \lim_{z \to z_0} g(z)$.
4. If $\lim_{z \to z_0} g(z) \neq 0$, then $\lim_{z \to z_0}\dfrac{f(z)}{g(z)} = \dfrac{\lim_{z \to z_0} f(z)}{\lim_{z \to z_0} g(z)}$.
Continuity

Definition 4.1.7. Suppose f is a complex function. If $z_0$ is in the domain of the function and either $z_0$ is an isolated¹ point of the domain or
$$\lim_{z \to z_0} f(z) = f(z_0)$$
then f is continuous at $z_0$. More generally, f is continuous on $G \subseteq \mathbb{C}$ if f is continuous at every $z \in G$.

¹ Note that a function defined at an isolated point is continuous at that point.

Just as in the real case, we can take the limit inside a continuous function:

Proposition 4.1.8. If f is continuous at an accumulation point $\omega_0$ and $\lim_{z \to z_0} g(z) = \omega_0$, then
$$\lim_{z \to z_0} f(g(z)) = f(\omega_0).$$
In other words,
$$\lim_{z \to z_0} f(g(z)) = f\!\left(\lim_{z \to z_0} g(z)\right).$$
This proposition implies that direct substitution is allowed when f is continuous at the limit point; in particular, if f is continuous at $\omega_0$ then $\lim_{\omega \to \omega_0} f(\omega) = f(\omega_0)$.

4.1.2. Differentiability and Holomorphicity

Definition 4.1.9. Suppose $f : G \subseteq \mathbb{C} \to \mathbb{C}$ is a complex function and $z_0$ is an interior point of G. The derivative of f at $z_0$ is defined as
$$f'(z_0) = \lim_{h \to 0}\frac{f(z_0+h) - f(z_0)}{h}$$
(note: here h runs over complex numbers), provided this limit exists. In this case, f is called differentiable at $z_0$.

Definition 4.1.10. If f is differentiable at all points of an open disk centered at $z_0$, then f is called holomorphic² at $z_0$. The function f is holomorphic on the open set $G \subseteq \mathbb{C}$ if it is differentiable (and hence holomorphic) at every point of G.
Functions which are differentiable (and hence holomorphic) on the whole complex plane $\mathbb{C}$ are called entire.

As with real functions, differentiability implies continuity.

Theorem 4.1.11. Let f be a complex function. If f is differentiable at $z_0$ then f is continuous at $z_0$.

Proof. We need to prove $\lim_{z \to z_0} f(z) = f(z_0)$; setting $h = z - z_0$, this is equivalent to proving
$$\lim_{h \to 0}\left(f(z_0+h) - f(z_0)\right) = 0.$$
We have
$$\lim_{h \to 0}\left(f(z_0+h) - f(z_0)\right) = \lim_{h \to 0} h\,\frac{f(z_0+h) - f(z_0)}{h} = 0 \cdot f'(z_0) = 0$$

Example 4.1.12. The complex function $f(z) = z^2$ is entire, because
$$\lim_{h \to 0}\frac{(z+h)^2 - z^2}{h} = \lim_{h \to 0}\frac{2zh + h^2}{h} = 2z.$$
Example 4.1.13. The function $f(z) = \bar z^{\,2}$ is differentiable at 0 and nowhere else (in particular, f is not holomorphic at 0). Writing $h = re^{i\theta}$,
$$\lim_{h \to 0}\frac{\overline{(z+h)}^{\,2} - \bar z^{\,2}}{h} = \lim_{h \to 0}\frac{2\bar z\,\bar h + \bar h^{2}}{h} = \lim_{r \to 0}\frac{2\bar z\, re^{-i\theta} + r^{2}e^{-2i\theta}}{re^{i\theta}} = \lim_{r \to 0}\left(2\bar z\, e^{-2i\theta} + re^{-3i\theta}\right) = 2\bar z\, e^{-2i\theta}$$
and this limit does not exist when $z \neq 0$ (it depends on θ) and is 0 when z = 0.

² Some authors use the term analytic instead of holomorphic. Technically these two terms are synonymous, although they have different definitions.
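The behavior in Example 4.1.13 can be seen numerically (a sketch added to the notes): the difference quotients of $f(z) = \bar z^{\,2}$ shrink in every direction at z = 0, but at z = 1 they approach $2e^{-2i\theta}$, which depends on the direction of approach.

```python
# Minimal sketch: direction-dependent difference quotients of f(z) = conj(z)**2.
import cmath

def f(z: complex) -> complex:
    return z.conjugate()**2

def quotient(z, h):
    return (f(z + h) - f(z)) / h

r = 1e-6
dirs = [cmath.exp(1j*theta) for theta in (0.0, 1.0, 2.0)]

# At z = 0 the quotient has modulus r, so it tends to 0 in every direction.
assert all(abs(quotient(0, r*d)) < 1e-5 for d in dirs)

# At z = 1 the quotients approach 2*exp(-2i*theta): clearly different values.
vals = [quotient(1, r*d) for d in dirs]
assert abs(vals[0] - vals[1]) > 1
print([complex(round(v.real, 3), round(v.imag, 3)) for v in vals])
```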

Example 4.1.14. The function $f(z) = \bar z$ is nowhere differentiable, because the limit
$$\lim_{h \to 0}\frac{\overline{(z+h)} - \bar z}{h} = \lim_{h \to 0}\frac{\bar h}{h}$$
never exists, independently of z (see Example 4.1.3).

The basic properties of derivatives are similar to those we know from real calculus.

Proposition 4.1.15. Suppose f and g are differentiable at $z_0 \in \mathbb{C}$, and that $c \in \mathbb{C}$, $n \in \mathbb{Z}^+$. Then:
1. $(f+g)'(z_0) = f'(z_0) + g'(z_0)$.
2. $(f\,g)'(z_0) = f'(z_0)g(z_0) + f(z_0)g'(z_0)$.
3. $\left(\dfrac{f}{g}\right)'(z_0) = \dfrac{f'(z_0)g(z_0) - f(z_0)g'(z_0)}{g(z_0)^2}$ (if $g(z_0) \neq 0$).
4. $(z^n)' = nz^{n-1}$.

Proposition 4.1.16 (chain rule). If f and g are complex functions such that g is differentiable at $z_0$ and f is differentiable at $g(z_0)$, then $f \circ g$ is differentiable at $z_0$ and
$$(f \circ g)'(z_0) = \left(f(g(z_0))\right)' = f'(g(z_0))\,g'(z_0)$$

Proposition 4.1.17. Suppose G and H are open sets in $\mathbb{C}$, $f : G \to H$ is a bijection, $f^{-1} : H \to G$ is the inverse function of f, and $z_0 \in H$. If f is differentiable at $f^{-1}(z_0)$ with $f'(f^{-1}(z_0)) \neq 0$, and $f^{-1}$ is continuous at $z_0$, then $f^{-1}$ is differentiable at $z_0$ with derivative
$$\left(f^{-1}\right)'(z_0) = \frac{1}{f'\left(f^{-1}(z_0)\right)}$$
Constant functions

The derivative of a constant complex function f(z) = c defined on an open set G is 0 everywhere:
$$f'(z) = \lim_{h \to 0}\frac{f(z+h) - f(z)}{h} = \lim_{h \to 0}\frac{c-c}{h} = 0$$
The converse is not completely true. As a counterexample, let D(0,1) be the (open) disk centered at z = 0 with radius 1 and D(2,1) the (open) disk centered at z = 2 with radius 1. The function $f : D(0,1) \cup D(2,1) \to \mathbb{C}$ defined by
$$f(z) = \begin{cases} 1 & \text{if } z \in D(0,1) \\ -1 & \text{if } z \in D(2,1) \end{cases}$$
has derivative 0 but is not a constant function. The trouble is that the domain of f is not connected. What does that mean?

Curves, Connected Sets and Regions. Let $I = [a,b] \subseteq \mathbb{R}$ be a closed interval. A curve in $\mathbb{C}$ is a continuous function $\gamma : I \to \mathbb{C}$. The first point of the curve is $z_1 = \gamma(a)$ and the last point is $z_2 = \gamma(b)$; we say the curve goes from $z_1$ to $z_2$. A curve is closed when $z_1 = z_2$; otherwise the curve is open. A curve is called a scaled curve if it is continuous and formed by horizontal and vertical segments.

Two sets $X, Y \subseteq \mathbb{C}$ are separated if there are disjoint open sets A and B such that $X \subseteq A$ and $Y \subseteq B$. A set $G \subseteq \mathbb{C}$ is connected if it is impossible to find two separated non-empty sets whose union is equal to G. (The set $G = X \cup Y$ represented in the figure beside this text is not connected.)

It is hard to use the definition to show that a set is connected. One type of connected set that we will use frequently is a curve. Moreover, if G is a connected subset of $\mathbb{C}$ then any two points of G may be joined by a curve in G; in fact, if G is a connected open set we can join any two points of G by a scaled curve of horizontal and vertical segments lying in G.
Example 4.1.18. A circle in the complex plane is connected, but it is impossible to join two different points of it by a scaled curve (inside the circle). This happens because the circle is not open; in fact it is closed.

Example 4.1.19. The set $G = \mathbb{C} \setminus \{0\}$ is open and connected, but $G = \mathbb{C} \setminus \{z : z \text{ is real}\}$ is open and not connected.

Definition 4.1.20. A region is a connected open set.

Theorem 4.1.21. If the domain of a complex function f is a region $G \subseteq \mathbb{C}$ and $f'(z) = 0$ for all z in G, then f is a constant function.

4.1.3. The Cauchy-Riemann Equations

The relationship between the complex derivative and partial derivatives is very strong and is a powerful computational tool. It is described by the Cauchy-Riemann equations, named after the French mathematician Augustin L. Cauchy (1789-1857) and the German mathematician Georg F. B. Riemann (1826-1866), though the equations first appeared in works of d'Alembert and Euler.

Writing complex numbers in rectangular form z = x + iy, a complex function $f : G \to \mathbb{C}$ can be expressed in terms of its real and imaginary parts,
$$f(z) = f(x+iy) = u(x,y) + iv(x,y)$$
where u(x,y) and v(x,y) are real-valued functions of two variables, $u, v : G \to \mathbb{R}$.

Theorem 4.1.22.
(a) Suppose f = u + iv is differentiable at $z_0 = x_0 + iy_0$. Then the partial derivatives of f satisfy
$$\frac{\partial f}{\partial x}(z_0) = -i\,\frac{\partial f}{\partial y}(z_0).$$
This expression can be written as the system known as the Cauchy-Riemann equations:
$$\begin{cases} u_x(x_0, y_0) = v_y(x_0, y_0) \\ v_x(x_0, y_0) = -u_y(x_0, y_0). \end{cases} \tag{4.1}$$
(b) Suppose f is a complex function such that the partial derivatives $f_x$ and $f_y$ exist on a disk centered at $z_0$ and are continuous at $z_0$. If these partial derivatives satisfy the Cauchy-Riemann equations (4.1), then f is differentiable at $z_0$ and
$$f'(z_0) = \frac{\partial f}{\partial x}(z_0)$$
Proof.
(a) If f is differentiable at z0 then f′(z0) = lim_{h→0} (f(z0+h) − f(z0))/h, and the limit
is the same for any direction of h = h1 + ih2.

If h2 = 0 then

    f′(z0) = lim_{h1→0} (f(z0+h1) − f(z0))/h1 = lim_{h1→0} (f(x0+h1, y0) − f(x0, y0))/h1 = ∂f/∂x (z0).

If h1 = 0 then

    f′(z0) = lim_{h2→0} (f(z0+ih2) − f(z0))/(ih2) = (1/i) lim_{h2→0} (f(x0, y0+h2) − f(x0, y0))/h2 = −i ∂f/∂y (z0).

Therefore ∂f/∂x (z0) = −i ∂f/∂y (z0). Hence

    u_x(x0, y0) + iv_x(x0, y0) = −i(u_y(x0, y0) + iv_y(x0, y0)) = v_y(x0, y0) − iu_y(x0, y0)

and matching real and imaginary parts we obtain equations (4.1).
(b) Suppose h = h1 + ih2. First we rearrange the quotient:

    (f(z0+h) − f(z0))/h = (f(z0+h) − f(z0+h1))/h + (f(z0+h1) − f(z0))/h
    = (h2/h) · (f((z0+h1) + ih2) − f(z0+h1))/h2 + (h1/h) · (f(z0+h1) − f(z0))/h1.

Second we rearrange the partial derivative, using that (4.1) gives f_y(z0) = i f_x(z0):

    f_x(z0) = (h/h) f_x(z0) = (h1/h) f_x(z0) + i(h2/h) f_x(z0) = (h1/h) f_x(z0) + (h2/h) f_y(z0).

Now,

    lim_{h→0} [ (f(z0+h) − f(z0))/h − f_x(z0) ]
    = lim_{h→0} (h1/h) [ (f(z0+h1) − f(z0))/h1 − f_x(z0) ]                        (4.2)
    + lim_{h→0} (h2/h) [ (f((z0+h1) + ih2) − f(z0+h1))/h2 − f_y(z0) ].            (4.3)

Considering |h1/h| ≤ 1 and h1 → 0 when h → 0, limit (4.2) is zero.
On the other hand, |h2/h| ≤ 1 and h → 0 implies h1, h2 → 0, and therefore limit (4.3)
is zero.
This proves that f is differentiable at z0 with f′(z0) = f_x(z0).

If f(z) = u(x, y) + iv(x, y) satisfies the Cauchy–Riemann equations on a disk centered
at z0, then f is holomorphic at z0. Also, if f satisfies the C–R equations on an open set G,
then f is holomorphic on G.
Definition 4.1.23 (Harmonic Functions). A function u : R² → R with continuous second
partials satisfying the partial differential equation called the Laplace³ equation,

    u_xx + u_yy = 0,

on a region G is called harmonic on G.
If f is holomorphic in an open set G then the partials of any order of u and v exist;
hence the real and imaginary parts of a function which is holomorphic on an open set are
harmonic on that set. Such functions u, v are called conjugate harmonic.

4.2. Integration

4.2.1. Definition and Basic Properties

For a continuous complex-valued function γ : [a, b] ⊆ R → C, we define the integral

    ∫_a^b γ(t) dt = ∫_a^b Re(γ(t)) dt + i ∫_a^b Im(γ(t)) dt.

For a function which takes complex numbers as arguments, we integrate over a smooth
curve γ in C. Let f be a complex function defined in a domain G ⊆ C; if the curve γ is
parametrized by γ(t), a ≤ t ≤ b, with γ(t) ∈ G for all t ∈ [a, b] and f is continuous
on γ, we call the integral of f over γ

    ∫_γ f = ∫_γ f(z) dz = ∫_a^b f(γ(t)) γ′(t) dt.

This definition extends naturally to piecewise smooth curves: if c ∈ [a, b], γ is not
differentiable at c, and γ1 = γ : [a, c] → C, γ2 = γ : [c, b] → C, then

    ∫_γ f(z) dz = ∫_{γ1} f(z) dz + ∫_{γ2} f(z) dz.

Let's see an example.

Example 4.2.1. Let γ be the curve formed by the consecutive segments from −1 to i and
from i to 1, and let f(z) = z². We are going to calculate ∫_γ f.
First, we need a parametrization of γ. It is piecewise differentiable in two pieces:

    γ(t) = t + (1 − |t|)i,    −1 ≤ t ≤ 1,

with γ′(t) = 1 + i if −1 < t < 0 and γ′(t) = 1 − i if 0 < t < 1. Therefore

    ∫_γ f(z) dz = ∫_{−1}^{0} (t + (1+t)i)² (1+i) dt + ∫_{0}^{1} (t + (1−t)i)² (1−i) dt
    = ∫_{−1}^{0} [(−2t² − 4t − 1) + i(2t² − 1)] dt + ∫_{0}^{1} [(−2t² + 4t − 1) + i(1 − 2t²)] dt
    = (1 − i)/3 + (1 + i)/3 = 2/3.

³In honor of the French mathematician Pierre Simon Laplace (1749–1827).
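As a quick numerical sanity check of the value 2/3 (an illustrative sketch, not part of the original notes), the two path integrals can be approximated with a midpoint Riemann sum. The helper name `path_integral` is introduced here for illustration only.

```python
def path_integral(f, gamma, dgamma, a, b, n=4000):
    # Midpoint-rule approximation of the path integral  ∫_a^b f(γ(t)) γ'(t) dt
    h = (b - a) / n
    return sum(f(gamma(a + (k + 0.5) * h)) * dgamma(a + (k + 0.5) * h)
               for k in range(n)) * h

# Piece 1: from -1 to i, γ(t) = t + (1+t)i on [-1, 0], γ'(t) = 1+i
I1 = path_integral(lambda z: z * z, lambda t: t + (1 + t) * 1j,
                   lambda t: 1 + 1j, -1.0, 0.0)
# Piece 2: from i to 1, γ(t) = t + (1-t)i on [0, 1], γ'(t) = 1-i
I2 = path_integral(lambda z: z * z, lambda t: t + (1 - t) * 1j,
                   lambda t: 1 - 1j, 0.0, 1.0)
print(I1 + I2)  # ≈ 2/3 + 0i
```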


Proposition 4.2.2. The value of the integral does not change under a reparametrization
of the curve preserving the orientation. However, if the orientation is reversed, the
integral changes sign.
Proof. Suppose τ : [c, d] → [a, b] is differentiable with τ′(s) ≠ 0 for all s ∈ [c, d]. Then
σ = γ ∘ τ : [c, d] → C is another parametrization of the same curve and, with the change
of variable t = τ(s),

    ∫_σ f(z) dz = ∫_c^d f(γ(τ(s))) γ′(τ(s)) τ′(s) ds = ∫_{τ(c)}^{τ(d)} f(γ(t)) γ′(t) dt.

Hence:
If τ preserves the orientation, i.e. τ′(s) > 0, then τ(c) = a, τ(d) = b and

    ∫_σ f(z) dz = ∫_a^b f(γ(t)) γ′(t) dt = ∫_γ f(z) dz.

If τ reverses the orientation, i.e. τ′(s) < 0, then τ(c) = b, τ(d) = a and

    ∫_σ f(z) dz = ∫_b^a f(γ(t)) γ′(t) dt = −∫_γ f(z) dz.

Usually the curve with reversed orientation is denoted by −γ, so

    ∫_{−γ} f(z) dz = −∫_γ f(z) dz.

A curve γ is simple if it is defined by an injective parametrization, i.e. γ(t1) ≠ γ(t2)
for distinct t1, t2 ∈ [a, b]. A curve γ ⊆ C is a closed curve if γ(a) = γ(b) for any
parametrization γ : [a, b] → C. An integral over a closed curve is represented by
∮_γ f(z) dz.
A very useful closed curve is the counterclockwise circle centered at ω with radius r,
represented by |z − ω| = r. A parametrization of this curve is

    Cr(t) = ω + re^{it}  with  −π ≤ t ≤ π.

Lemma 4.2.3. The integral of 1/(z − ω) over the circle of radius r centered at ω with
positive orientation (counterclockwise) is 2πi.

Proof.

    ∮_{|z−ω|=r} 1/(z − ω) dz = ∫_{−π}^{π} (1/(re^{it})) (rie^{it}) dt = i(π + π) = 2πi.
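The lemma can be illustrated numerically (a sketch added here, not part of the original notes): a midpoint sum over the parametrization Cr(t) = ω + re^{it} reproduces 2πi. The helper name `circle_integral` and the sample center/radius are choices made for illustration.

```python
import cmath
import math

def circle_integral(f, w, r, n=1000):
    # Midpoint-rule approximation of  ∮_{|z-w|=r} f(z) dz  with z = w + r e^{it}
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        t = -math.pi + (k + 0.5) * h
        z = w + r * cmath.exp(1j * t)
        total += f(z) * (1j * r * cmath.exp(1j * t)) * h
    return total

w = 1 + 2j  # sample center; any ω and r > 0 give the same result
I = circle_integral(lambda z: 1 / (z - w), w, 0.5)
print(I)  # ≈ 2πi ≈ 6.2832j
```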

Proposition 4.2.4. Suppose γ is a smooth curve, f and g are complex functions which
are continuous on γ, and c ∈ C.

1. ∫_γ (f + g) = ∫_γ f + ∫_γ g.

2. ∫_γ cf = c ∫_γ f.

3. If γ1 and γ2 are curves so that γ2 starts where γ1 ends, define the curve γ1γ2
by following γ1 to its end and then continuing on γ2 to its end. Then
∫_{γ1γ2} f = ∫_{γ1} f + ∫_{γ2} f.

Proof. Items 1 and 2 follow directly from properties of real integration.
For 3, let γ1 : [a1, b1] → C and γ2 : [a2, b2] → C be parametrizations; then

    γ(t) = γ1(t)              if a1 ≤ t ≤ b1,
    γ(t) = γ2(t − b1 + a2)    if b1 ≤ t ≤ b1 + b2 − a2,

is a parametrization of γ1γ2. Obviously γ is piecewise differentiable and, with s = t − b1 + a2,

    ∫_{γ1γ2} f(z) dz = ∫_{a1}^{b1+b2−a2} f(γ(t)) γ′(t) dt
    = ∫_{a1}^{b1} f(γ1(t)) γ1′(t) dt + ∫_{b1}^{b1+b2−a2} f(γ2(t − b1 + a2)) γ2′(t − b1 + a2) dt
    = ∫_{a1}^{b1} f(γ1(t)) γ1′(t) dt + ∫_{a2}^{b2} f(γ2(s)) γ2′(s) ds
    = ∫_{γ1} f(z) dz + ∫_{γ2} f(z) dz.

4.2.2. Homotopies

Figure 4.1: Homotopic curves.

Suppose γ0 and γ1 are closed curves in an open set G ⊆ C parametrized by γ0 : [0, 1] → C
and γ1 : [0, 1] → C. Then we say γ0 is G-homotopic to γ1, in symbols γ0 ∼_G γ1, if there
is a continuous function h(t, s) : [0, 1] × [0, 1] → G such that

    h(t, 0) = γ0(t),
    h(t, 1) = γ1(t),
    h(0, s) = h(1, s)  for all s.

The function h(t, s) is called a homotopy and represents a closed curve γs for each fixed s.
The first curve is γ0 and the last curve is γ1. A homotopy can be interpreted as a continuous
deformation from γ0 to γ1 (see Figure 4.1).
Theorem 4.2.5 (Cauchy's Theorem). Suppose G ⊆ C is open, f is holomorphic in G,
and γ0 ∼_G γ1 via a homotopy with continuous second partials. Then

    ∮_{γ0} f(z) dz = ∮_{γ1} f(z) dz.


Figure 4.2: G-contractible curve.


Proof. Suppose h is the given homotopy from γ0 to γ1. For 0 ≤ s ≤ 1, let γs be the curve
parametrized by h(t, s), 0 ≤ t ≤ 1. Consider the function

    I(s) = ∮_{γs} f(z) dz = ∫_0^1 f(h(t, s)) ∂h/∂t (t, s) dt

as a function of s ∈ [0, 1]. We will show that I is constant with respect to s, and hence
the statement of the theorem follows with I(0) = I(1). Consider the derivative of I,

    d/ds I(s) = ∫_0^1 ∂/∂s [ f(h(t, s)) ∂h/∂t (t, s) ] dt
    = ∫_0^1 [ f′(h(t, s)) ∂h/∂s ∂h/∂t + f(h(t, s)) ∂²h/∂s∂t ] dt
    = ∫_0^1 ∂/∂t [ f(h(t, s)) ∂h/∂s (t, s) ] dt
    = [ f(h(t, s)) ∂h/∂s (t, s) ]_{t=0}^{t=1}
    = f(h(1, s)) ∂h/∂s (1, s) − f(h(0, s)) ∂h/∂s (0, s) = 0,

where the last equality holds because h(0, s) = h(1, s). Hence I is constant.
An important special case is the one where a curve γ is G-homotopic to a point, that
is, a constant curve (see Figure 4.2 for an example). In this case we simply say γ is
G-contractible, in symbols γ ∼_G 0.
Corollary 4.2.6. Suppose G ⊆ C is open, f is holomorphic in G, and γ ∼_G 0 via a
homotopy with continuous second partials. Then

    ∮_γ f = 0.

Corollary 4.2.7. If f is entire and γ is any smooth closed curve, then ∮_γ f = 0.

4.2.3. Cauchy's Integral Formula

First we need some considerations about the length of a curve.

Definition 4.2.8. The length of a smooth curve γ parametrized as γ : [a, b] → C is

    length(γ) = ∫_a^b |γ′(t)| dt.

Example 4.2.9. The length of the circle of radius R is 2πR. To compute it, we parametrize
the circle as γ(t) = Re^{it}, 0 ≤ t ≤ 2π, and

    length(γ) = ∫_0^{2π} |Rie^{it}| dt = ∫_0^{2π} R dt = 2πR.
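The same arc-length formula can be evaluated numerically (an illustrative sketch added here, not from the original notes); the helper name `curve_length` is an assumption of this example.

```python
import cmath
import math

def curve_length(dgamma, a, b, n=1000):
    # Midpoint-rule approximation of  length(γ) = ∫_a^b |γ'(t)| dt
    h = (b - a) / n
    return sum(abs(dgamma(a + (k + 0.5) * h)) for k in range(n)) * h

R = 3.0  # sample radius
L = curve_length(lambda t: R * 1j * cmath.exp(1j * t), 0.0, 2 * math.pi)
print(L)  # ≈ 2πR ≈ 18.8496
```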


Figure 4.3: There is a circle Cr with center ω and radius r homotopic to γ.


Lemma 4.2.10. Suppose γ is a smooth curve and f is a complex function which is
continuous on γ. Then

    | ∫_γ f(z) dz | ≤ max_{z∈γ} |f(z)| · length(γ).

Proof.

    | ∫_γ f(z) dz | = | ∫_a^b f(γ(t)) γ′(t) dt | ≤ ∫_a^b |f(γ(t))| |γ′(t)| dt
    ≤ max_{z∈γ} |f(z)| ∫_a^b |γ′(t)| dt = max_{z∈γ} |f(z)| · length(γ).

Theorem 4.2.11 (Cauchy's Integral Formula). Suppose f is holomorphic on the region
G, ω ∈ G, and γ is a positively oriented, simple, closed, smooth, G-contractible curve
such that ω is inside γ. Then

    ∮_γ f(z)/(z − ω) dz = 2πi f(ω).

Proof. There is a counterclockwise circle |z − ω| = r, in short named Cr, with center ω
and radius r, homotopic to γ in G ∖ {ω} (see Figure 4.3). Since f(z)/(z − ω) is
holomorphic in G ∖ {ω}, Cauchy's Theorem 4.2.5 gives

    ∮_γ f(z)/(z − ω) dz = ∮_{Cr} f(z)/(z − ω) dz.

Moreover, using ∮_{Cr} 1/(z − ω) dz = 2πi from Lemma 4.2.3,

    | ∮_γ f(z)/(z − ω) dz − 2πi f(ω) | = | ∮_{Cr} f(z)/(z − ω) dz − f(ω) ∮_{Cr} 1/(z − ω) dz |
    = | ∮_{Cr} (f(z) − f(ω))/(z − ω) dz |
    ≤ max_{z∈Cr} | (f(z) − f(ω))/(z − ω) | · length(Cr) = max_{z∈Cr} |f(z) − f(ω)| · (2πr)/r
    = 2π max_{z∈Cr} |f(z) − f(ω)|,

and letting r → 0, because f is continuous, we deduce the theorem.
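Cauchy's integral formula lends itself to a direct numerical check (an illustrative sketch, not part of the original notes): with f = exp and a point ω inside the circle, the normalized contour integral should return f(ω). The helper name `cauchy_formula` and the sample values are assumptions of this example.

```python
import cmath
import math

def cauchy_formula(f, w, center, r, n=2000):
    # Midpoint-rule approximation of  (1/2πi) ∮_{|z-center|=r} f(z)/(z-w) dz
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * h
        z = center + r * cmath.exp(1j * t)
        total += f(z) / (z - w) * (1j * r * cmath.exp(1j * t)) * h
    return total / (2j * math.pi)

w = 0.3 - 0.4j                      # sample point inside |z| = 2
val = cauchy_formula(cmath.exp, w, 0j, 2.0)
print(val, cmath.exp(w))            # the two values agree
```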



Discussion. Suppose f is holomorphic on G, ω ∈ G and γ is a closed curve contractible in
G. For solving ∮_γ f(z)/(z − ω) dz, Cauchy's integral formula brings us to the following
discussion:

If ω is inside γ, then ∮_γ f(z)/(z − ω) dz = 2πi f(ω) (Theorem 4.2.11).

Example.

    ∮_{|z|=1} (2z + i)/(2z − i) dz = ∮_{|z|=1} (z + i/2)/(z − i/2) dz = 2πi (i/2 + i/2) = −2π.

If ω is outside γ, then ∮_γ f(z)/(z − ω) dz = 0 (Corollary 4.2.6).

Example.

    ∮_{|z|=1} 1/(z + 1 − i) dz = 0.

If ω is a point of the curve γ, then the integral ∮_γ f(z)/(z − ω) dz is not defined.

Example. Let us compute ∮_{|z|=1} 1/(z + i) dz. Parametrizing z = e^{it},

    ∮_{|z|=1} 1/(z + i) dz = ∫_0^{2π} ie^{it}/(e^{it} + i) dt = ∫_0^{2π} (i cos t − sin t)/(cos t + i(sin t + 1)) dt
    = ∫_0^{2π} cos t/(2 sin t + 2) dt + i ∫_0^{2π} 1/2 dt
    = πi + ∫_0^{2π} cos t/(2 sin t + 2) dt.

But observe that the function cos t/(2 sin t + 2) is not bounded on [0, 2π] (the denominator
vanishes at t = 3π/2), so the integral ∫_0^{2π} cos t/(2 sin t + 2) dt is improper:

    ∫_0^{2π} cos t/(2 sin t + 2) dt
    = lim_{ε→0} ( ∫_0^{3π/2−ε} cos t/(2 sin t + 2) dt + ∫_{3π/2+ε}^{2π} cos t/(2 sin t + 2) dt )
    = lim_{ε→0} ( [ (1/2) ln|2 sin t + 2| ]_0^{3π/2−ε} + [ (1/2) ln|2 sin t + 2| ]_{3π/2+ε}^{2π} )
    = lim_{ε→0} ( (1/2) ln|2 sin(3π/2 − ε) + 2| − (1/2) ln 2 + (1/2) ln 2 − (1/2) ln|2 sin(3π/2 + ε) + 2| )
    = lim_{ε→0} ( (1/2) ln|−2 cos(ε) + 2| − (1/2) ln|−2 cos(ε) + 2| ) = 0.

Hence ∮_{|z|=1} 1/(z + i) dz = πi (as an improper integral).

Example 4.2.12. Let γ_r be the circle centered at 2i with radius r, oriented counterclockwise.
We compute

    ∮_{γ_r} dz/(z² + 1).

Solution. The denominator factors as z² + 1 = (z − i)(z + i), hence there are two relevant
points, z = i and z = −i, at distances 1 and 3 from the center 2i. See Figure 4.4.

For 0 < r < 1, f(z) = 1/(z² + 1) is holomorphic inside γ_r, so

    ∮_{γ_r} dz/(z² + 1) = 0.

For 1 < r < 3, the function f(z) = 1/(z + i) is holomorphic inside γ_r, so

    ∮_{γ_r} dz/(z² + 1) = ∮_{γ_r} [1/(z + i)]/(z − i) dz = 2πi · 1/(i + i) = π.

For r > 3, there are two conflictive points inside γ_r. Introducing a new path⁴ we
obtain two counterclockwise curves γ1 and γ2 separating i and −i according to
Figure 4.4. Thus

    ∮_{γ_r} dz/(z² + 1) = ∮_{γ1} [1/(z + i)]/(z − i) dz + ∮_{γ2} [1/(z − i)]/(z + i) dz
    = 2πi · 1/(i + i) + 2πi · 1/((−i) − i) = π − π = 0.

For r = 1 and r = 3 the integral is not defined.

Figure 4.4
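The three cases can be verified numerically (an illustrative sketch added here, not from the original notes), approximating the circle integrals by midpoint sums; the helper name `circle_integral` is a name chosen for this example.

```python
import cmath
import math

def circle_integral(f, center, r, n=4000):
    # Midpoint-rule approximation of the integral over the counterclockwise circle
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * h
        z = center + r * cmath.exp(1j * t)
        total += f(z) * (1j * r * cmath.exp(1j * t)) * h
    return total

f = lambda z: 1 / (z * z + 1)
small = circle_integral(f, 2j, 0.5)   # 0 < r < 1: no singularity inside, ≈ 0
mid   = circle_integral(f, 2j, 2.0)   # 1 < r < 3: only z = i inside,      ≈ π
big   = circle_integral(f, 2j, 4.0)   # r > 3: both ±i inside,             ≈ 0
print(small, mid, big)
```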

4.2.4. Extension of Cauchy's Formula

Theorem 4.2.11 gives (if its conditions are met) a new expression for f:

    f(ω) = (1/2πi) ∮_γ f(z)/(z − ω) dz.

We also have expressions for the derivatives of f.
Theorem 4.2.13. Suppose f is holomorphic on the region G, ω ∈ G, and γ is a positively
oriented, simple, closed, smooth, G-contractible curve such that ω is inside γ. Then

    f′(ω) = (1/2πi) ∮_γ f(z)/(z − ω)² dz,    f″(ω) = (1/πi) ∮_γ f(z)/(z − ω)³ dz,

and, more generally,

    f^{(n)}(ω) = (n!/2πi) ∮_γ f(z)/(z − ω)^{n+1} dz.

⁴Integration over the new path is performed twice in opposite directions, so the integral over it is zero.


Proof. We can rewrite the derivative quotient as follows:

    (f(ω + h) − f(ω))/h = (1/2πih) [ ∮_γ f(z)/(z − ω − h) dz − ∮_γ f(z)/(z − ω) dz ]
    = (1/2πi) ∮_γ f(z)/((z − ω − h)(z − ω)) dz,

hence

    | (f(ω + h) − f(ω))/h − (1/2πi) ∮_γ f(z)/(z − ω)² dz |
    = | (1/2πi) ∮_γ [ f(z)/((z − ω − h)(z − ω)) − f(z)/(z − ω)² ] dz |
    = (|h|/2π) | ∮_γ f(z)/((z − ω − h)(z − ω)²) dz |
    ≤ (|h|/2π) max_{z∈γ} | f(z)/((z − ω − h)(z − ω)²) | · length(γ).            (4.4)

Since ω ∉ γ, we have |z − ω| ≥ k for some k > 0 and, with M = max_{z∈γ} |f(z)|,

    | f(z)/((z − ω − h)(z − ω)²) | ≤ M/((|z − ω| − |h|)|z − ω|²) ≤ M/((k − |h|)k²) → M/k³  as h → 0.

In conclusion, length(γ) is constant and f(z)/((z − ω − h)(z − ω)²) is bounded, therefore
expression (4.4) goes to 0 as h → 0 and

    f′(ω) = (1/2πi) ∮_γ f(z)/(z − ω)² dz.

The proofs of the remaining formulas are performed similarly.
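The derivative formulas can be checked numerically (an illustrative sketch, not part of the original notes) with f = exp, whose derivatives of every order are exp itself; the helper name `nth_derivative` is introduced here for illustration.

```python
import cmath
import math

def nth_derivative(f, w, r, order, n=3000):
    # f^(order)(w) = (order!/2πi) ∮_{|z-w|=r} f(z)/(z-w)^(order+1) dz, midpoint rule
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * h
        z = w + r * cmath.exp(1j * t)
        total += f(z) / (z - w) ** (order + 1) * (1j * r * cmath.exp(1j * t)) * h
    return math.factorial(order) * total / (2j * math.pi)

w = 0.5 + 0.2j  # sample point
d1 = nth_derivative(cmath.exp, w, 1.0, 1)
d2 = nth_derivative(cmath.exp, w, 1.0, 2)
print(d1, d2, cmath.exp(w))  # all three values agree
```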
From this theorem an important consequence is deduced:
Corollary 4.2.14. If a complex function is differentiable then it is infinitely differentiable.
Example 4.2.15. To compute ∮_{|z|=1} tan z / z³ dz we check that tan z is holomorphic
inside the circle of radius 1 (its nearest singularities, ±π/2, lie outside). Then, by
Theorem 4.2.13,

    ∮_{|z|=1} tan z / z³ dz = πi [ d²/dz² tan z ]_{z=0} = πi · 2 sec²(0) tan(0) = 0.
Example 4.2.16. Compute

    ∮_{|z|=1} 1/(z²(2z − 1)²) dz.

The function has two singularities, z = 0 and z = 1/2, both inside the circle |z| = 1.
Introduce a path which separates 0 and 1/2, obtaining counterclockwise curves γ1
around 0 and γ2 around 1/2 (Figure 4.5), and note that (2z − 1)² = 4(z − 1/2)². Then

    ∮_{|z|=1} dz/(z²(2z − 1)²) = ∮_{γ1} [1/(2z − 1)²]/z² dz + ∮_{γ2} [1/(4z²)]/(z − 1/2)² dz
    = 2πi [ d/dz 1/(2z − 1)² ]_{z=0} + 2πi [ d/dz 1/(4z²) ]_{z=1/2}
    = 2πi · 4 + 2πi · (−4) = 0.
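Because the two contributions 8πi and −8πi cancel, the total integral is 0; this can be confirmed numerically (an illustrative sketch, not from the original notes), with the helper name `circle_integral` chosen for this example.

```python
import cmath
import math

def circle_integral(f, center, r, n=4000):
    # Midpoint-rule approximation over the counterclockwise circle
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        t = (k + 0.5) * h
        z = center + r * cmath.exp(1j * t)
        total += f(z) * (1j * r * cmath.exp(1j * t)) * h
    return total

I = circle_integral(lambda z: 1 / (z ** 2 * (2 * z - 1) ** 2), 0j, 1.0)
print(abs(I))  # ≈ 0: the contributions around 0 and 1/2 cancel
```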

Figure 4.5: Example 4.2.16.

4.2.5. Fundamental Theorem of Algebra

A well-known result on polynomials is the Fundamental Theorem of Algebra, which we
state below. First, we need a corollary of Theorem 4.2.13 due to the French mathematician
Joseph Liouville (1809–1882).
Theorem 4.2.17 (Liouville's Theorem). Every bounded entire function is constant.
Proof. Suppose |f(z)| ≤ M for all z ∈ C. For any radius R > 0, consider the circle C_R
centered at ω:

    |f′(ω)| = | (1/2πi) ∮_{|z−ω|=R} f(z)/(z − ω)² dz | ≤ (1/2π) max_{z∈C_R} | f(z)/(z − ω)² | · length(C_R)
    ≤ (1/2π) (M/R²) (2πR) = M/R,

which is arbitrarily small as R → ∞. Therefore f′(ω) = 0 on the connected region C and,
by Theorem 4.1.21, f is constant.
Theorem 4.2.18 (Fundamental Theorem of Algebra). Every polynomial of degree greater
than or equal to one has a root in C.
Proof. We argue by way of contradiction. Suppose the polynomial p has no roots; then
f(z) = 1/p(z) is entire. Because

    lim_{|z|→∞} f(z) = 0,

f is bounded. By Liouville's Theorem f is constant, so p is constant, which is impossible.

We know that if z0 is a root of a polynomial p(z) of degree n, then q(z) = p(z)/(z − z0)
is another polynomial of degree n − 1, and iterating this theorem we obtain the following
result.

Corollary 4.2.19. Any non-constant polynomial of degree n has exactly n complex roots
(not necessarily all different).

4.2.6. Fundamental Theorems of Calculus

As in the real case, we call a primitive of a complex function f(z) on G a holomorphic
function F on G such that F′(z) = f(z). So we can state the following theorem.

Theorem 4.2.20 (Second Fundamental Theorem of Calculus). Suppose G ⊆ C is a
region. Let γ ⊆ G be a smooth curve with parametrization γ(t), a ≤ t ≤ b. If F is any
primitive of f on G, then

    ∫_γ f(z) dz = F(γ(b)) − F(γ(a)).

Proof. Doing the change of variable u = γ(t),

    ∫_γ f(z) dz = ∫_a^b f(γ(t)) γ′(t) dt = ∫_{γ(a)}^{γ(b)} f(u) du = F(γ(b)) − F(γ(a)).

Definition 4.2.21. A region G ⊆ C is simply connected if every simple closed curve in G
is G-contractible. That is, any simple closed curve γ in G has the interior of γ in C
completely contained in G.

Loosely, simply connected means G has no holes. An example of a non simply-connected
region is C ∖ {z0} (see Figure 4.6).

Figure 4.6: Non simply connected region G.

If a region G is simply connected and there is a non-closed simple curve from z1 to z2
inside G, we can close this curve by adding a new path from z2 back to z1, and the
resulting curve is G-contractible. This lets us state the following corollary:
Corollary 4.2.22. If f is holomorphic on a simply connected region G, then ∫_γ f is
independent of the path γ in G between γ(a) and γ(b).

Corollary 4.2.23. Suppose G ⊆ C is open, γ is a smooth closed curve in G, and f has
a primitive on G. Then

    ∫_γ f(z) dz = 0.

So, for example, from ∮_{|z|=r} (1/z) dz = 2πi ≠ 0 we deduce that the function f(z) = 1/z
has no primitive in the region 0 < |z| < r.

We state this well-known theorem.

Theorem 4.2.24 (First Fundamental Theorem of Calculus). Suppose G ⊆ C is a region,
and fix some basepoint z0 ∈ G. For each point z ∈ G, let γ_z denote a smooth curve in G
from z0 to z. Let f : G → C be a continuous function such that, for any simple closed
curve γ in G, ∫_γ f = 0. Then the function F : G → C defined by

    F(z) = ∫_{γ_z} f(ξ) dξ

is holomorphic in G with F′(z) = f(z).


Finally, this theorem produces two important consequences.

Corollary 4.2.25. Every holomorphic function on a simply-connected region has a
primitive.

Corollary 4.2.26 (Morera's Theorem). Suppose f is continuous in the region G and

    ∫_γ f = 0

for all smooth closed paths γ in G. Then f is holomorphic in G.



Exercises
Exercise 4.1
Evaluate the following limits or explain why they do not exist.

1. lim_{z→i} (iz³ − 1)/(z + i).

2. lim_{z→−1} (|z| − 1)/(z + 1).

3. lim_{z→1−i} (x + i(2x + y)).

Exercise 4.2
Apply the definition of the derivative to give a direct proof that f′(z) = −1/z² when
f(z) = 1/z.

Exercise 4.3
Find the derivative of the function T(z) = (az + b)/(cz + d), where a, b, c, d ∈ C and
ad − bc ≠ 0. When is T′(z) = 0?

Exercise 4.4
If u(x, y) and v(x, y) are differentiable does it follow that f (z) = u(x, y) + iv(x, y) is
differentiable? If not, provide a counterexample.
Exercise 4.5
Where are the following functions differentiable? Where are they holomorphic? Determine
their derivatives at the points where they are differentiable.

1. f(z) = e^x e^{−iy}.
2. f(z) = 2x + ixy².
3. f(z) = x² + iy².
4. f(z) = e^x e^{iy}.
5. f(z) = cos x cosh y − i sin x sinh y.
6. f(z) = Im z.
7. f(z) = |z|² = x² + y².
8. f(z) = z Im z.
9. f(z) = (ix + 1)/y.
10. f(z) = 4(Re z)(Im z) − i(z̄)².
11. f(z) = 2xy − i(x + y)².
12. f(z) = z² − z̄².

Exercise 4.6
Consider the function

    f(z) = xy(x + iy)/(x² + y²)  if z ≠ 0,    f(0) = 0.

(As always, z = x + iy.) Show that f satisfies the Cauchy–Riemann equations at the
origin z = 0, yet f is not differentiable at the origin. Why doesn't this contradict
Theorem 4.1.22 (b)?
Exercise 4.7
Prove: if f is holomorphic in the region G ⊆ C and always real valued, then f is
constant in G. (Hint: use the Cauchy–Riemann equations to show that f′ = 0.)


Exercise 4.8
Prove: if f(z) and its complex conjugate are both holomorphic in the region G ⊆ C,
then f(z) is constant in G.
Exercise 4.9
Suppose that f = u + iv is holomorphic. Find v given u:

1. u = x² − y².
2. u = cosh y sin x.
3. u = 2x² + x + 1 − 2y².
4. u = x/(x² + y²).

Exercise 4.10
Suppose f (z) is entire, with real and imaginary parts u(x, y) and v(x, y) satisfying
u(x, y)v(x, y) = 3 for all z. Show that f is constant.
Exercise 4.11
The general real homogeneous quadratic function of (x, y) is

    u(x, y) = ax² + bxy + cy²,

where a, b and c are real constants.
1. Show that u is harmonic if and only if a = −c.
2. If u is harmonic, show that it is the real part of a function of the form f(z) = Az²,
where A is a complex constant. Give a formula for A in terms of the constants a, b and c.

Exercise 4.12
Use the definition of length to find the length of the following curves:

1. γ(t) = 3t + i for −1 ≤ t ≤ 1.
2. γ(t) = i + e^{iπt} for 0 ≤ t ≤ 1.
3. γ(t) = i sin(t) for −π ≤ t ≤ π.
4. γ(t) = t + it² for 0 ≤ t ≤ 2.

Exercise 4.13
Evaluate ∫_γ (1/z) dz where γ(t) = sin t + i cos t, 0 ≤ t ≤ 2π.
Exercise 4.14
Integrate the following functions over the circle |z| = 2, oriented counterclockwise:

1. z + z̄.
2. z² − 2z + 3.
3. 1/z⁴.
4. xy.

Exercise 4.15
Evaluate the integrals ∫_γ x dz, ∫_γ y dz, ∫_γ z dz and ∫_γ z̄ dz along each of the following
paths. Note that you can get the last two integrals very easily after you calculate the
first two, by writing z and z̄ as x ± iy.

1. γ is the line segment from 0 to 1 − i.
2. γ is the counterclockwise circle |z| = 1.
3. γ is the counterclockwise circle |z − a| = r. Use γ(t) = a + re^{it}.
Exercise 4.16
Evaluate ∫_γ e^{3z} dz for each of the following paths:
1. The straight line segment from 1 to i.
2. The circle |z| = 3.
3. The parabola y = x² from x = 0 to x = 1.
Exercise 4.17
Evaluate ∫_γ z² dz where γ is the parabola with parametric equation γ(t) = t + it²,
0 ≤ t ≤ 1.
Exercise 4.18
Compute ∫_γ z̄ dz where γ is the semicircle from 1 through i to −1.

Exercise 4.19
Compute ∫_γ e^z dz where γ is the line segment from 0 to z0.

Exercise 4.20
Compute ∫_γ 1/(z + 1/2) dz where γ is parametrized by γ(t), 0 ≤ t ≤ 1, and satisfies
Im γ(t) > 0, γ(0) = −4 + i and γ(1) = 6 + 2i.

Exercise 4.21
Find ∫_γ sin z dz where γ is parametrized by γ(t), 0 ≤ t ≤ 1, and satisfies γ(0) = i and
γ(1) = π.
Exercise 4.22
Show that ∫_γ zⁿ dz = 0 for any closed smooth γ and any integer n ≠ −1. [If n is
negative, assume that γ does not pass through the origin, since otherwise the integral is
not defined.]
Exercise 4.23
Compute the real integral

    ∫_0^{2π} dθ/(2 + sin θ)

by writing the sine function in terms of the exponential function and making the
substitution z = e^{iθ} to turn the real integral into a complex one.
Exercise 4.24
Find ∮_{|z+1|=2} z²/(4 − z²) dz.


Exercise 4.25
What is ∮_{|z|=1} sin z / z dz?

Exercise 4.26
Evaluate ∮_{|z|=2} e^z/(z(z − 3)) dz and ∮_{|z|=4} e^z/(z(z − 3)) dz.

Exercise 4.27
Compute the following integrals, where C is the boundary of the square with corners
at ±4 ± 4i:

1. ∮_C e^z/z³ dz.
2. ∮_C e^z/(z − πi)² dz.
3. ∮_C sin(2z)/(z − π)² dz.
4. ∮_C e^z cos z/(z − π)³ dz.
Exercise 4.28
Integrate the following functions over the circle |z| = 3, oriented counterclockwise:

1. Log(z − 4i).
2. 1/(z − 1/2).
3. 1/(z² − 4).
4. exp z / z³.
5. (cos z / z)².
6. i^{z−3}.
7. sin z / (z² + 1/2)².
8. 1/((z + 4)(z² + 1)).
9. exp z / (z − ω)², where ω is any fixed complex number with |ω| ≠ 3.

Exercise 4.29
Evaluate

    ∮_{|z|=3} e^{2z} dz / ((z − 1)²(z − 2)).

Chapter 5

Complex Variable II (Poles and the Residue Theorem)

5.1. Taylor and Laurent Series

5.1.1. Power series

Sequences and series

As in the real case, a (complex) sequence is a function from the nonnegative integers
to the complex numbers. Its values are usually denoted by a_n and we commonly denote
the sequence by {a_n}.
Definition 5.1.1. Suppose {a_n} is a sequence.
(i) If there is a ∈ C such that for all ε > 0 there is an integer N such that for all n ≥ N
we have |a_n − a| < ε, then the sequence {a_n} is convergent and a is its limit; in
symbols

    lim_{n→∞} a_n = a.

(ii) If for every real number K > 0 there is an integer N such that for all n ≥ N we
have |a_n| > K, then the sequence {a_n} is divergent; in symbols

    lim_{n→∞} a_n = ∞.
Example 5.1.2.
1. The sequence a_n = iⁿ/n converges to 0 because |iⁿ/n − 0| = 1/n → 0 as n → ∞.

2. The sequence a_n = 2n + i/n diverges because |a_n| ≥ |2n| − |i/n| = 2n − 1/n → ∞.

3. The sequence a_n = iⁿ is neither convergent nor divergent.

Properties of convergent and divergent complex sequences are the same as for real
sequences.

Series. A series Σ_{n≥0} a_n is a sequence {b_n} whose members are of the form

    b_n = Σ_{k=0}^{n} a_k = a_0 + a_1 + ··· + a_n.

A series converges to a if b_n converges to a; in symbols Σ_{n=0}^{∞} a_n = a.
Sometimes we represent a convergent series by writing Σ_{k≥0} a_k < ∞.

Example 5.1.3. The series Σ_{n≥1} 1/n^p converges for p > 1 and diverges for p ≤ 1.

There is a notion of convergence that is special to series.

Definition 5.1.4. We say that Σ_{k≥0} a_k converges absolutely if Σ_{k≥0} |a_k| converges.

Proposition 5.1.5. If a series converges absolutely then it converges.

The converse is not true.

Example 5.1.6. The alternating harmonic series Σ_{n≥1} (−1)ⁿ/n converges, but not
absolutely.

Sequences and Series of Functions

We say that a sequence of functions {f_n} converges at z0 if the sequence (of complex
numbers) {f_n(z0)} converges. If a sequence of functions converges at all z in some subset
G ⊆ C then we say that {f_n} converges pointwise on G.

Definition 5.1.7. Suppose {f_n} and f are functions defined on G ⊆ C. If for all ε > 0
there is an integer N such that for all z ∈ G and for all n ≥ N we have

    |f_n(z) − f(z)| < ε,

then the sequence {f_n} converges uniformly in G to f.

Pointwise convergence does not preserve continuity, in contrast with uniform convergence.

Figure 5.1: Continuous functions f_n(x) = sinⁿ(x) in [0, π] converge pointwise to a
discontinuous function.

Proposition 5.1.8. If {f_n} is a sequence of continuous functions on a region G converging
uniformly to f, then f is continuous in G.

Uniform convergence also behaves well under integration:

Proposition 5.1.9. Suppose f_n are continuous on the smooth curve γ and converge
uniformly on γ to f. Then

    lim_{n→∞} ∫_γ f_n = ∫_γ f.

Proof. Given ε > 0, for n > N we have max_{z∈γ} |f_n(z) − f(z)| < ε/length(γ). Hence

    | ∫_γ f_n − ∫_γ f | = | ∫_γ (f_n − f) | ≤ max_{z∈γ} |f_n(z) − f(z)| · length(γ) < ε,

and this proves the proposition.


Pointwise and uniform convergence carry over to series of functions Σ_{n≥0} f_n.
The next theorem is due to Weierstrass (Germany, 1815–1897).

Theorem 5.1.10 (Weierstrass M-Test). Suppose f_n are continuous on the region G,
|f_n(z)| ≤ M_n for all z ∈ G, and Σ_{n≥0} M_n = M converges. Then Σ_{n≥0} f_n converges
absolutely and uniformly in G.

Proof. For each z we have |Σ f_n(z)| ≤ Σ |f_n(z)| ≤ Σ M_n = M, so Σ f_n converges
absolutely and there exists the function

    f(z) = Σ_{n≥0} f_n(z).

To see that Σ f_n converges uniformly to f, suppose ε > 0. Since Σ M_n converges,
there is an integer N such that M − Σ_{n=0}^{k} M_n < ε for all k ≥ N. Then for all z ∈ G,
if k ≥ N,

    | Σ_{n=0}^{k} f_n(z) − f(z) | = | Σ_{n>k} f_n(z) | ≤ Σ_{n>k} |f_n(z)| ≤ Σ_{n>k} M_n = M − Σ_{n=0}^{k} M_n < ε,

and this satisfies the definition of uniform convergence.


Power series: Radius of Convergence

A very important example of a series of functions is the power series.

Definition 5.1.11. A power series centered at z0 is a series of functions of the form

    Σ_{k=0}^{∞} c_k (z − z0)^k.

An example of a power series is the so-called geometric series Σ_{k≥0} z^k. Now we are
going to study where a power series converges.

Lemma 5.1.12. The geometric series Σ_{k≥0} z^k converges absolutely in the open disk
|z| < 1 to the function 1/(1 − z), and it diverges in the set |z| ≥ 1.
The convergence is uniform on any set of the form D_r = {z ∈ C : |z| ≤ r} for any
r < 1.

Proof. Let a_n = Σ_{k=0}^{n} z^k = 1 + z + z² + ··· + zⁿ. Then

    z a_n + 1 = z + z² + ··· + zⁿ + z^{n+1} = a_n + z^{n+1},  so  a_n = (1 − z^{n+1})/(1 − z).

It is easy to show that lim_{n→∞} z^{n+1} = 0 if |z| < 1 and |z^{n+1}| → ∞ if |z| > 1;
therefore

    Σ_{k≥0} z^k = lim a_n = 1/(1 − z)  if |z| < 1,  and the series diverges if |z| > 1.

For |z| = 1, the series of absolute values Σ |z|^k = 1 + 1 + 1 + ··· diverges.
On the other hand, for z ∈ D_r we have |z^k| ≤ r^k = M_k, and by the Weierstrass M-Test
(Theorem 5.1.10), Σ_{k≥0} z^k converges uniformly in D_r.
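A quick numerical illustration of the lemma (a sketch added here, not part of the original notes): a long partial sum at a sample point with |z| < 1 matches 1/(1 − z) to machine precision.

```python
z = 0.3 + 0.4j                         # sample point with |z| = 0.5 < 1
partial = sum(z ** k for k in range(60))  # partial sum of the geometric series
exact = 1 / (1 - z)
print(partial, exact)  # the two values agree (tail is bounded by 0.5**60)
```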
Theorem 5.1.13. For any power series Σ_{k≥0} c_k(z − z0)^k there exists 0 ≤ R ≤ ∞,
called the radius of convergence, such that

(a) If r < R then Σ_{k≥0} c_k(z − z0)^k converges absolutely and uniformly on the
closed disk |z − z0| ≤ r of radius r centered at z0.

(b) If |z − z0| > R then the series Σ_{k≥0} c_k(z − z0)^k diverges.

For 0 < R < ∞ the open disk |z − z0| < R is called the region of convergence. For R = ∞
the region of convergence is the entire complex plane C. For R = 0 the region of
convergence is the empty set.
All the tests for finding the radius of convergence studied in Real Analysis remain valid
in Complex Analysis.
Proof. Omitted.
From this theorem we know that a power series is continuous on its region of convergence,
and from Proposition 5.1.9 we obtain the following property of power series:

Corollary 5.1.14. Suppose the curve γ is contained in the region of convergence of the
power series. Then

    ∫_γ Σ_{k=0}^{∞} c_k(z − z0)^k dz = Σ_{k=0}^{∞} c_k ∫_γ (z − z0)^k dz.

In particular, if γ is closed, ∮_γ Σ_{k=0}^{∞} c_k(z − z0)^k dz = 0.

Moreover, as a consequence of Morera's Theorem (Corollary 4.2.26), power series are
holomorphic.

Theorem 5.1.15. Suppose f(z) = Σ_{k≥0} c_k(z − z0)^k has positive radius of convergence
R. Then f is holomorphic in |z − z0| < R and

    f′(z) = Σ_{k≥1} k c_k (z − z0)^{k−1}

is another power series, and its radius of convergence is also R.



Proof. Since f is holomorphic, taking C_r the circle of radius r < R centered at z0,
the Cauchy integral formula for the derivative (Theorem 4.2.13) gives

    f′(z) = (1/2πi) ∮_{C_r} f(ξ)/(ξ − z)² dξ = (1/2πi) ∮_{C_r} [ Σ_{k≥0} c_k(ξ − z0)^k ]/(ξ − z)² dξ
    = Σ_{k=0}^{∞} c_k (1/2πi) ∮_{C_r} (ξ − z0)^k/(ξ − z)² dξ
    = Σ_{k=0}^{∞} c_k [ d/dξ (ξ − z0)^k ]_{ξ=z} = Σ_{k=0}^{∞} c_k k (z − z0)^{k−1}.

The radius of convergence of f′(z) is at least R (since we have shown that the series
converges whenever |z − z0| < R), and it cannot be larger than R by comparison to the
series for f(z), since the coefficients of (z − z0)f′(z) are bigger than the corresponding
ones for f(z).

5.1.2. Taylor Series

A complex function which can be expressed as a power series f(z) = Σ_{k≥0} c_k(z − z0)^k
on a disk centered at z0 is called analytic at z0. Theorem 5.1.15 says that a function
analytic at z0 is holomorphic at z0. Moreover, f has derivatives of every order at z0:

    f^{(n)}(z) = Σ_{k≥n} k(k−1)···(k−n+1) c_k (z − z0)^{k−n},

and setting z = z0 we have f^{(n)}(z0) = n! c_n.


The converse is also true: all holomorphic function is analytic.
Theorem 5.1.16. Suppose f is a function which is holomorphic in D = {z C :
|z z0 | < R}. Then f can be represented in D as a power series centered at z0 with a
radius of convergence at least R:
I
X
1
f ()
k
f (z) =
ck (z z0 )
with
ck =
d
2i ( z0 )k+1
k0

where is any positively oriented, simple, closed, smooth curve in D for which z0 is
inside .
Proof. Let g(z) = f(z + z0), so g is holomorphic in |z| < R. Fix 0 < r < R; by Cauchy's
integral formula, with the circle |ξ| = r positively oriented,

    g(z) = (1/2πi) ∮_{|ξ|=r} g(ξ)/(ξ − z) dξ = (1/2πi) ∮_{|ξ|=r} g(ξ) (1/ξ) · 1/(1 − z/ξ) dξ
    = (1/2πi) ∮_{|ξ|=r} g(ξ) (1/ξ) Σ_{k≥0} (z/ξ)^k dξ
    = Σ_{k≥0} [ (1/2πi) ∮_{|ξ|=r} g(ξ)/ξ^{k+1} dξ ] z^k.

Hence, doing the change of variable ξ → ξ − z0,

    f(z) = g(z − z0) = Σ_{k≥0} [ (1/2πi) ∮_{|ξ|=r} g(ξ)/ξ^{k+1} dξ ] (z − z0)^k
    = Σ_{k≥0} [ (1/2πi) ∮_{|ξ−z0|=r} f(ξ)/(ξ − z0)^{k+1} dξ ] (z − z0)^k.

Since γ is homotopic to the circle |ξ − z0| = r within the open disk D minus the point
z0, where each integrand f(ξ)/(ξ − z0)^{k+1} is holomorphic, the theorem is proved.
Summarizing, a function holomorphic at z0 can be expressed as a power series, called
the Taylor series expansion of f at z0:

    f(z) = Σ_{k=0}^{∞} [ f^{(k)}(z0)/k! ] (z − z0)^k.

Example 5.1.17. The Taylor series expansion of exp(z) at z0 = 0 is exp(z) = Σ_{k≥0} z^k/k!.

Example 5.1.18. The Taylor series expansion of sin z at z0 = 0 is

    sin z = (1/2i)(exp(iz) − exp(−iz)) = (1/2i) [ Σ_{k≥0} (iz)^k/k! − Σ_{k≥0} (−iz)^k/k! ]
    = (1/2i) [ (1 + iz + (iz)²/2! + (iz)³/3! + (iz)⁴/4! + ···) − (1 − iz + (iz)²/2! − (iz)³/3! + ···) ]
    = (1/2i) [ 2iz + 2(iz)³/3! + 2(iz)⁵/5! + ··· ] = z + i²z³/3! + i⁴z⁵/5! + ···
    = z − z³/3! + z⁵/5! − z⁷/7! + ···
    = Σ_{k≥0} (−1)^k z^{2k+1}/(2k+1)!.
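The expansion can be verified numerically (an illustrative sketch, not from the original notes): a truncated sum of the series matches the built-in sine at a sample complex point. The helper name `sin_series` is a choice made here.

```python
import cmath
import math

def sin_series(z, terms=20):
    # Partial sum of  Σ (-1)^k z^(2k+1)/(2k+1)!
    return sum((-1) ** k * z ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

z = 1 - 2j
print(sin_series(z), cmath.sin(z))  # the two values agree
```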

5.1.3. Laurent Series

We introduce "power series" with negative exponents.

Definition 5.1.19. We call a double series an expression

    Σ_{k∈Z} a_k = Σ_{k=−∞}^{∞} a_k = Σ_{k≤−1} a_k + Σ_{k≥0} a_k

with a_k complex numbers.

A double series converges if and only if both of its defining series do. Absolute and
uniform convergence are defined analogously.

Definition 5.1.20. A Laurent series centered at z0 is a double series of the form

    Σ_{k∈Z} c_k(z − z0)^k.

Any power series Σ_{k≥0} c_k(z − z0)^k is a Laurent series (with c_k = 0 for k < 0).


A Laurent series has two radii of convergence; indeed,

    Σ_{k∈Z} c_k(z − z0)^k = Σ_{k≥1} c_{−k} · 1/(z − z0)^k + Σ_{k≥0} c_k(z − z0)^k.

The first series converges for |1/(z − z0)| < 1/R1, i.e. for |z − z0| > R1, and the
second converges for |z − z0| < R2, so both series converge on the annulus

    R1 < |z − z0| < R2.

Obviously the Laurent series does not converge anywhere if R1 ≥ R2.
The previous theorems show that a Laurent series is holomorphic in its region of
convergence R1 < |z − z0| < R2 if R1 < R2. The fact that we can conversely represent any
function holomorphic in such an annulus by a Laurent series is the substance of the next
theorem.
Theorem 5.1.21. Suppose f is a function which is holomorphic in D = {z ∈ C : R1 <
|z − z0| < R2}. Then f can be represented in D as a Laurent series centered at z0:

    f(z) = Σ_{k∈Z} c_k(z − z0)^k  with  c_k = (1/2πi) ∮_γ f(ξ)/(ξ − z0)^{k+1} dξ,

where γ is any positively oriented, simple, closed, smooth curve in the annulus D.
Proof. Omitted.
Example 5.1.22. The function exp(1/z) is not holomorphic at z = 0, but it is holomorphic
in the annulus 0 < |z| < ∞. We evaluate its Laurent series centered at 0:

    exp(1/z) = Σ_{k≥0} (1/z)^k/k! = Σ_{k≥0} (1/k!) z^{−k} = ··· + (1/3!) z^{−3} + (1/2!) z^{−2} + z^{−1} + 1.
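This Laurent expansion is easy to test numerically (an illustrative sketch, not from the original notes): a truncated sum of the negative powers matches exp(1/z) at any sample point z ≠ 0.

```python
import cmath
import math

z = 0.5 + 0.5j  # sample point z ≠ 0
# Partial sum of  Σ_{k≥0} (1/k!) z^{-k}
partial = sum(z ** (-k) / math.factorial(k) for k in range(25))
print(partial, cmath.exp(1 / z))  # the two values agree
```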

Example 5.1.23. Let f(z) = (z³ − z)/(z − 1). This function is holomorphic for z ≠ 1.
Then:

1. The Laurent series of f(z) centered at z = 0 is the Taylor series

    f(z) = z(z + 1)(z − 1)/(z − 1) = z + z²

and its radius of convergence is R = 1 (region of convergence |z| < 1).

2. Laurent series of f(z) centered at z = 1. Writing ξ = z − 1,

    f(z) = ((ξ + 1)³ − (ξ + 1))/ξ = ξ² + 3ξ + 2 = 2 + 3(z − 1) + (z − 1)².

The region of convergence is |z − 1| > 0 (also written 0 < |z − 1| < ∞, to express both
radii).

3. The Laurent series centered at z = i is the Taylor series

    f(z) = z + z² = (z − i + i) + (z − i + i)² = i + (z − i) + (z − i)² + 2i(z − i) − 1
    = (−1 + i) + (1 + 2i)(z − i) + (z − i)²

with radius of convergence R = |i − 1| = √2 (region of convergence |z − i| < √2).
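The expansion around z = 1 in item 2 can be sanity-checked numerically (an illustrative sketch, not from the original notes): since the series terminates, it should equal f exactly at any z ≠ 1.

```python
z = 1.7 - 0.3j                       # any sample point z ≠ 1
f = (z ** 3 - z) / (z - 1)
series = 2 + 3 * (z - 1) + (z - 1) ** 2  # the finite Laurent/Taylor expansion at 1
print(f, series)  # identical up to rounding
```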



Example 5.1.24. Find the first three terms of the Laurent series of cot z centered at z = 0.

We know

cot z = cos z / sin z = (1 − z²/2! + z⁴/4! − z⁶/6! + …) / (z − z³/3! + z⁵/5! − …)

and doing the long division of the numerator by the denominator we obtain the successive quotient terms 1/z, −z/3, −z³/45, …, so we have

f(z) = 1/z − z/3 − z³/45 − …

with region of convergence 0 < |z| < π.
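A numerical check of the terms obtained by the division (a Python illustration added to these notes; the tolerance accounts for truncating after the z³ term):

```python
import cmath

def cot(z):
    return cmath.cos(z) / cmath.sin(z)

z = 0.1
series = 1/z - z/3 - z**3/45   # first three terms of the Laurent series
assert abs(cot(z) - series) < 1e-6
```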

5.2. Poles and the Residue Theorem

5.2.1. Isolated Singularities

An isolated singularity of a complex function f : U ⊆ ℂ → ℂ is a number z0 ∈ U such that there exists a disk D = {z ∈ ℂ : |z − z0| < ε} where f is holomorphic at every point of D except z0 itself.

Some functions have singularities which are not isolated. Examples of such functions are the logarithmic branches. You can check that the principal logarithm Log z has infinitely many singularities, one at each point of the ray x ≤ 0, but none of these singularities is isolated.

For example, z0 = 0 is an isolated singularity of f(z) = 1/z, and also of f(z) = sin z / z, or f(z) = exp(1/z), but each singularity is of a different nature.
Definition 5.2.1. An isolated singularity z0 of a function f is called

a) removable if there exists an open disk D = {z ∈ ℂ : |z − z0| < ε} and a function g holomorphic in D such that f = g in {z ∈ ℂ : 0 < |z − z0| < ε}. By continuity the value of g(z0) is the limit of f at z0,

   g(z0) = lim_{z→z0} f(z).

b) a pole if f grows in absolute value to infinity near z0, i.e.

   lim_{z→z0} |f(z)| = ∞.

c) essential if it is neither removable nor a pole.


Example 5.2.2.    1. The function f(z) = z / sin z has a removable singularity at z0 = 0, because

   lim_{z→0} z / sin z = 1.


So, using the Taylor series of sin z at 0 and long division, we obtain the Laurent series of f(z) at 0,

g(z) = 1 + z²/6 + 7z⁴/360 + …

which is holomorphic in |z| < π.


2. The function f(z) = 1/z has a pole at 0 because

   lim_{z→0} |1/z| = lim_{r→0} 1/|re^{iθ}| = lim_{r→0} 1/r = ∞.

3. The function f(z) = exp(1/z) has an essential singularity at 0, because

   lim_{x→0⁺} e^{1/x} = +∞    and    lim_{x→0⁻} e^{1/x} = lim_{x→0⁺} 1/e^{1/x} = 0,

   so lim_{z→0} e^{1/z} does not exist.
The next proposition classifies non-essential singularities.

Proposition 5.2.3. Suppose z0 is a non-essential isolated singularity of f. Then there exists an integer n ≥ 0 such that

lim_{z→z0} (z − z0)^{n+1} f(z) = 0.      (5.1)

The order of the singularity is the smallest integer n which verifies (5.1). Therefore removable singularities have order 0 and poles have order n ≥ 1.
Proof. We distinguish two cases.

Case n = 0: Suppose z0 is a removable singularity; then

lim_{z→z0} (z − z0) f(z) = lim_{z→z0} (z − z0) g(z) = 0 · g(z0) = 0.

Conversely, if lim_{z→z0} (z − z0) f(z) = 0, z0 is a singularity of f and f is holomorphic in 0 < |z − z0| < R, then the new function

φ(z) = (z − z0)² f(z)  if z ≠ z0,    φ(z0) = 0

is holomorphic on |z − z0| < R, since φ′(z0) = lim_{z→z0} (φ(z) − φ(z0))/(z − z0) = lim_{z→z0} (z − z0) f(z) = 0. Therefore the Taylor series expansion of φ at z0 is

φ(z) = 0 + 0(z − z0) + c2 (z − z0)² + c3 (z − z0)³ + ⋯ = (z − z0)² ∑_{k≥2} c_k (z − z0)^{k−2}.

Hence g(z) = ∑_{k≥2} c_k (z − z0)^{k−2} is holomorphic on |z − z0| < R and f(z) = g(z) on 0 < |z − z0| < R, therefore z0 is removable.
Case n > 0: Suppose z0 is a pole of f; then lim_{z→z0} 1/f(z) = 0, and the function 1/f(z) is holomorphic on 0 < |z − z0| < R and has a removable singularity at z0. The function

φ(z) = 1/f(z)  if z ≠ z0,    φ(z0) = 0

is holomorphic on |z − z0| < R, hence it has a Taylor series expansion at z0, φ(z) = ∑_{k≥0} c_k (z − z0)^k. Let n be the smallest index such that c_n ≠ 0. Obviously n > 0, because z0 is a zero of φ, and g(z) = ∑_{k≥n} c_k (z − z0)^{k−n} verifies g(z0) ≠ 0. Then

lim_{z→z0} (z − z0)^{n+1} f(z) = lim_{z→z0} (z − z0)^{n+1} / (∑_{k≥0} c_k (z − z0)^k) = lim_{z→z0} (z − z0)^{n+1} / ((z − z0)^n g(z)) = lim_{z→z0} (z − z0)/g(z) = 0.

Conversely, if lim_{z→z0} (z − z0)^{n+1} f(z) = 0, with n the smallest possible, then (z − z0)^n f(z) has a removable singularity at z0. Let g(z) be the holomorphic function on |z − z0| < R such that g(z) = (z − z0)^n f(z) on 0 < |z − z0| < R. We notice that lim_{z→z0} g(z) = c ≠ 0 because otherwise n would not be the smallest. So,

lim_{z→z0} |f(z)| = lim_{z→z0} |g(z)| / |z − z0|^n = ∞,

and z0 is a pole.

Remark. Sometimes, for functions of the form f(z) = g(z)/h(z), to find poles we study the values where h(z) = 0. Suppose z0 is such that g(z0) ≠ 0 and h(z0) = 0. Then z0 is a pole and its order is the multiplicity¹ of z0 as a zero of h.

Example 5.2.4. The function f(z) = (1 + z)/(z + i)³ has a unique singularity at z = −i. This singularity is a pole of order 3. Indeed,

lim_{z→−i} (z + i)⁴ f(z) = lim_{z→−i} (z + i)(1 + z) = 0

and lim_{z→−i} (z + i)^n f(z) ≠ 0 (or diverges) for n ≤ 3.

Example 5.2.5. The function f(z) = sin z / z³ has a pole of order 2 at 0 (in spite of 0 being a zero of multiplicity 3 of z³):

lim_{z→0} sin z / z³ = ∞    and    lim_{z→0} z³ · (sin z / z³) = lim_{z→0} sin z = 0  (with n = 2 the smallest).
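The order can be confirmed numerically from (5.1) (a Python sketch added here, not part of the original text): z² f(z) tends to 1 ≠ 0, while z³ f(z) tends to 0, so n = 2 is the smallest exponent that works.

```python
import cmath

def f(z):
    return cmath.sin(z) / z**3

z = 1e-4
# n = 1 fails: z^2 f(z) -> 1 != 0
assert abs(z**2 * f(z) - 1) < 1e-6
# n = 2 works: z^3 f(z) = sin z -> 0
assert abs(z**3 * f(z)) < 1e-3
```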

The following result classifies singularities according to their Laurent series expansion.

Proposition 5.2.6. Suppose z0 is an isolated singularity of f with Laurent series

f(z) = ∑_{k=−∞}^{∞} c_k (z − z0)^k    with 0 < |z − z0| < R.

Then

a) z0 is removable if and only if there are no negative exponents (that is, the Laurent series is a power series),

b) z0 is a pole if and only if there are finitely many negative exponents, and the order of the pole is the largest n such that c_{−n} ≠ 0, and

c) z0 is essential if and only if there are infinitely many negative exponents.

¹ The multiplicity of a zero z0 of g(z) is the smallest positive integer n such that there exists a holomorphic function φ(z) with φ(z0) ≠ 0 and g(z) = (z − z0)^n φ(z).


Proof. Exercise.
Example 5.2.7.    1. We know from Example 5.2.5 that 0 is a pole of order 2 of f(z) = sin z / z³. Furthermore,

f(z) = sin z / z³ = (z − z³/6 + z⁵/120 − …) / z³ = z^{−2} − 1/6 + z²/120 − …

2. The Laurent series expansion of exp(1/z) is

exp(1/z) = ⋯ + (1/3!) z^{−3} + (1/2!) z^{−2} + z^{−1} + 1    (see Example 5.1.22)

which has infinitely many negative exponents.

5.2.2. Residues

Suppose z0 is an isolated singularity of f(z), holomorphic on 0 < |z − z0| < R, and let γ be a counterclockwise circle around z0 of radius less than R. Consider the Laurent series expansion of f at z0,

f(z) = ⋯ + c_{−2} (z − z0)^{−2} + c_{−1} (z − z0)^{−1} + c0 + c1 (z − z0) + c2 (z − z0)² + …

Hence, by Cauchy's Theorem, Corollary 4.2.6, and Cauchy's Integral Formulas, Theorems 4.2.11 and 4.2.13, we have

∮_γ f(z) dz = ⋯ + c_{−2} ∮_γ dz/(z − z0)² + c_{−1} ∮_γ dz/(z − z0) + ∮_γ (c0 + c1 (z − z0) + c2 (z − z0)² + …) dz,

where every integral on the right vanishes except ∮_γ dz/(z − z0) = 2πi. From this it follows that the integral depends only on the coefficient c_{−1} of the Laurent series:

∮_γ f(z) dz = 2πi c_{−1}.

This coefficient c_{−1} is called the residue of f(z) at the singularity z0 and it will be represented Res(f(z), z0).

How to Calculate Residues
How to Calculate Residues
Most often it is not necessary to find the whole Laurent series to calculate a residue. The following propositions provide methods for this.
Proposition 5.2.8. Suppose z0 is a removable singularity of f . Then Res(f (z), z0 ) = 0.
Proof. It is a consequence of the fact that the Laurent series of f at z0 is a power series.
Proposition 5.2.9. Suppose z0 is a pole of f of order n. Then

Res(f(z), z0) = (1/(n−1)!) lim_{z→z0} d^{n−1}/dz^{n−1} [(z − z0)^n f(z)].

Proof. By Proposition 5.2.6, the Laurent series expansion of f at z0 is

f(z) = ∑_{k=−n}^{∞} c_k (z − z0)^k,    with c_{−n} ≠ 0    ⟹

(z − z0)^n f(z) = ∑_{k=−n}^{∞} c_k (z − z0)^{n+k} = c_{−n} + c_{−n+1} (z − z0) + ⋯ + c_{−1} (z − z0)^{n−1} + ∑_{k=0}^{∞} c_k (z − z0)^{n+k}.

Then the (n−1)-th derivative of (z − z0)^n f(z) is

d^{n−1}/dz^{n−1} [(z − z0)^n f(z)] = (n−1)! c_{−1} + ∑_{k=0}^{∞} c_k (n+k)(n+k−1) ⋯ (k+2) (z − z0)^{k+1}

and, hence,

lim_{z→z0} d^{n−1}/dz^{n−1} [(z − z0)^n f(z)] = (n−1)! c_{−1}.

From here we get the result.
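The formula can be checked numerically. This Python sketch (an added illustration, not from the notes) approximates the (n−1)-th derivative with a central difference for f(z) = 1/(z(z−2)²), whose pole at z0 = 2 has order n = 2 and residue d/dz[1/z] at z = 2, that is −1/4:

```python
def f(z):
    return 1 / (z * (z - 2)**2)

z0, n, h = 2.0, 2, 1e-5
g = lambda z: (z - z0)**n * f(z)            # g(z) = 1/z, holomorphic near z0
residue = (g(z0 + h) - g(z0 - h)) / (2*h)   # central difference: g'(z0) / (n-1)!
assert abs(residue - (-0.25)) < 1e-8
```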


In particular, we have an easier way to compute the residue at a pole of order 1 of a function of the form f(z) = n(z)/d(z).

Proposition 5.2.10. Suppose z0 is a pole of order 1 of f(z) = n(z)/d(z), with n and d holomorphic, and z0 is a zero of multiplicity 1 of d(z). Then

Res(n(z)/d(z), z0) = n(z0)/d′(z0).

Proof. Since z0 is a zero of multiplicity 1 of d, we can write d(z) = (z − z0) φ(z) with φ holomorphic at z0 and φ(z0) ≠ 0. Then

f(z) = (1/(z − z0)) · (n(z)/φ(z))

and the residue of f(z) is the first term of the Taylor series expansion of n/φ at z0, that is n(z0)/φ(z0). On the other hand, d′(z) = φ(z) + (z − z0) φ′(z), therefore φ(z0) = d′(z0) and the residue of f at z0 is

Res(f(z), z0) = n(z0)/d′(z0).

Example 5.2.11. To compute the residue of f(z) = e^{iz}/(cos z sin z) at z0 = π/2, we observe that z0 is a zero of multiplicity 1 of cos z, then

Res(f(z), π/2) = (e^{iπ/2}/sin(π/2)) / (−sin(π/2)) = −e^{iπ/2} = −i.

Another way to compute the residue is

Res(f(z), π/2) = lim_{z→π/2} (z − π/2) e^{iz} / (cos z sin z) = −i.
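Both computations can be reproduced numerically; the following Python snippet (an added sketch, not part of the original notes) applies the n(z0)/d′(z0) rule and the limit definition:

```python
import cmath
from math import pi, sin

def f(z):
    return cmath.exp(1j * z) / (cmath.cos(z) * cmath.sin(z))

z0 = pi / 2
# n(z0)/d'(z0) with n = e^{iz}/sin z and d = cos z (d' = -sin z):
res_formula = (cmath.exp(1j * z0) / sin(z0)) / (-sin(z0))
# limit (z - z0) f(z) as z -> z0, sampled close to z0:
res_limit = (z0 + 1e-6 - z0) * f(z0 + 1e-6)
assert abs(res_formula - (-1j)) < 1e-12
assert abs(res_limit - (-1j)) < 1e-4
```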

Residue Theorem
Theorem 5.2.12 (Residue Theorem). Suppose f is holomorphic in the region G, except for isolated singularities, and γ is a positively oriented, simple, closed, smooth, G-contractible curve which avoids the singularities of f. Then

∮_γ f(z) dz = 2πi ∑_i Res(f(z), z_i)

where the sum is taken over all singularities z_i inside γ.

Figure 5.2: Proof of the Residue Theorem (singularities z1, z2, z3 encircled inside γ).


Proof. Suppose first that there is only one singularity z0 inside γ. Then, as described at the beginning of the section, since γ is contractible to a circle around the singularity, we have

∮_γ f = 2πi Res(f(z), z0).      (5.2)

For several isolated singularities, draw two circles around each of them inside γ, one with positive and another one with negative orientation, as pictured in Figure 5.2. Each of these pairs cancels when we integrate over them. Now connect the circles with negative orientation with γ. This gives a curve which is contractible in the region of holomorphicity of f. But this means that we can replace γ by the positively oriented circles; now all we need to do is sum expressions like (5.2) over every singularity.
Example 5.2.13. Let's calculate the integral

∮_{|z|=1} z/(e^z sin(4z)) dz.

The singularities of f(z) = z/(e^z sin(4z)) inside the circle |z| < 1 are z1 = −π/4, z2 = 0 and z3 = π/4. We compute the residue at each of them:

– z1 = −π/4 is a pole of order 1, and Res(f(z), −π/4) = (−π/4) e^{π/4} / (4 cos(4 · (−π/4))) = π e^{π/4}/16.

– z2 = 0 is removable, so Res(f(z), 0) = 0.

– z3 = π/4 is a pole of order 1, and Res(f(z), π/4) = (π/4) e^{−π/4} / (4 cos(4 · π/4)) = −π e^{−π/4}/16.

Therefore,

∮_{|z|=1} z/(e^z sin(4z)) dz = 2πi (π e^{π/4}/16 − π e^{−π/4}/16) = (π² sinh(π/4)/4) i.
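The result can be verified by direct numerical integration over the circle (a Python illustration added here; the trapezoidal rule is extremely accurate for smooth periodic integrands):

```python
import cmath
from math import pi, sinh

def f(z):
    return z / (cmath.exp(z) * cmath.sin(4 * z))

# Parametrize |z| = 1 as z = e^{it}, t in [0, 2*pi), so dz = i e^{it} dt.
N = 4096
total = 0j
for k in range(N):
    t = 2 * pi * k / N
    z = cmath.exp(1j * t)
    total += f(z) * 1j * z * (2 * pi / N)

expected = (pi**2 * sinh(pi / 4) / 4) * 1j
assert abs(total - expected) < 1e-6
```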

Exercises
Exercise 5.1
For each of the following series, determine where the series converges absolutely/uniformly:
1. ∑_{k≥2} k(k−1) z^{k−2}.

2. ∑_{k≥0} (1/(2k+1)!) z^{2k+1}.

3. ∑_{k≥0} (1/(z−3))^k.
What functions are represented by the series in the previous exercise?


Exercise 5.2
Find the power series centered at 1 for exp z.
Exercise 5.3
By integrating a series for 1/(1+z²) term by term, find a power series for arctan(z). What is its radius of convergence?

Exercise 5.4
Find the terms through third order and the radius of convergence of the power series for each of the following functions, centered at z0. Do not find the general form for the coefficients.

1. f(z) = 1/(1 + z²), z0 = 1.

2. f(z) = 1/(e^z + 1), z0 = 0.

3. f(z) = √(1 + z), z0 = 0 (use the principal branch).

4. f(z) = e^{z²}, z0 = i.

Exercise 5.5
Find a Laurent series for 1/((z−1)(z+1)) centered at z = 1 and specify the region in which it converges.

Exercise 5.6
Find a Laurent series for 1/(z(z−2)²) centered at z = 2 and specify the region in which it converges.

Exercise 5.7
Find a Laurent series for (z−2)/(z+1) centered at z = −1 and specify the region in which it converges.

Exercise 5.8
Find the first five terms in the Laurent series for 1/sin z centered at z = 0.

Exercise 5.9
Find the first four non-zero terms in the power series expansion of tan z centered at
the origin. What is the radius of convergence?


Exercise 5.10
1. Find the power series representation for e^{az} centered at 0, where a is any constant.

2. Show that e^z cos(z) = (e^{(1+i)z} + e^{(1−i)z})/2.

3. Find the power series expansion for e^z cos(z) centered at 0.

Exercise 5.11
Show that (z−1)/(z−2) = ∑_{k≥0} 1/(z−1)^k for |z − 1| > 1.

Exercise 5.12
1. Find the Laurent series for cos z / z² centered at z = 0.

2. Prove that

   f(z) = (cos z − 1)/z²  if z ≠ 0,    f(0) = −1/2

   is entire.

Exercise 5.13
Find the Laurent series for sec z centered at the origin.
Exercise 5.14
Find the three Laurent series of f(z) = 3/((1−z)(z+2)), centered at 0, which are defined on the three domains |z| < 1, 1 < |z| < 2, and 2 < |z|, respectively. Hint: use partial fraction decomposition.
Exercise 5.15
Find the poles of the following, and determine their orders:

1. (z² + 1)^{−3} (z − 1)^{−4}.

2. z cot(z).

3. z^{−5} sin(z).

4. 1/(1 − e^z).

5. z/(1 − e^z).

Exercise 5.16
1. Find a Laurent series for 1/((z² − 4)(z − 2)) centered at z = 2 and specify the region in which it converges.

2. Compute ∮_γ dz/((z² − 4)(z − 2)), where γ is the positively oriented circle centered at 2 of radius 1.

Exercise 5.17
Verify that if f is holomorphic at ω then the residue of f(z)/(z − ω) at ω is f(ω).

Exercise 5.18
Verify that if f is holomorphic at ω then the residue of f(z)/(z − ω)^n at ω is f^{(n−1)}(ω)/(n−1)!.

Exercise 5.19
Evaluate the following integrals for γ(t) = 3e^{it}, 0 ≤ t ≤ 2π:

1. ∫_γ cot z dz.

2. ∫_γ z³ cos(3/z) dz.

3. ∫_γ dz/((z + 4)(z² + 1)).

4. ∫_γ z² exp(1/z) dz.

5. ∫_γ (exp z / sinh z) dz.

6. ∫_γ (iz + 4)/(z² + 16)² dz.
Exercise 5.20
1. Find the power series of exp z centered at z = −1.

2. Find ∫_γ exp z/(z + 1)^{34} dz, where γ is the circle |z + 2| = 2, positively oriented.

Exercise 5.21
Suppose f has a simple pole (i.e., a pole of order 1) at z0 and g is holomorphic at z0 .
Prove that
Res((f g)(z), z0 ) = g(z0 ) Res(f (z), z0 ).

Exercise 5.22
Find the residue of each function at 0:

1. z^{−3} cos z.

2. csc z.

3. (z² + 4z + 5)/(z² + z).

4. e^{1 − 1/z}.

5. (e^{4z} − 1)/sin² z.

Exercise 5.23
Use residues to evaluate the following:

1. ∮_γ dz/(z⁴ + 4), where γ is the circle |z + 1 − i| = 1.

2. ∮_γ dz/(z(z² + z − 2)), where γ is the circle |z − i| = 2.

3. ∮_γ e^z dz/(z³ + z), where γ is the circle |z| = 2.

4. ∮_γ dz/(z² sin z), where γ is the circle |z| = 1.

Exercise 5.24
Suppose f has an isolated singularity at z0.

1. Show that f′ also has an isolated singularity at z0.

2. Find Res(f′, z0).

Exercise 5.25
Given R > 0, let γR be the half circle defined by γR(t) = Re^{it}, 0 ≤ t ≤ π, and σR be the closed curve composed of γR and the line segment [−R, R].

1. Compute ∮_{σR} dz/(1 + z²)².

2. Prove that lim_{R→∞} ∫_{γR} dz/(1 + z²)² = 0.

3. Combine 1. and 2. to evaluate the real integral ∫_{−∞}^{∞} dx/(1 + x²)².


Appendix A

Complex Numbers
A.1. Algebraic Definition

The complex numbers can be defined as pairs of real numbers,


C = {(x, y) : x, y R},
equipped with the addition and the multiplication

(x, y) + (a, b) = (x + a, y + b)
(x, y) · (a, b) = (xa − yb, xb + ya).

Both binary operations in ℂ are extensions of the corresponding binary operations defined in ℝ, in the sense that the complex numbers of the form (x, 0) behave just like real numbers; that is,

(x, 0) + (y, 0) = (x + y, 0)    and    (x, 0) · (y, 0) = (xy, 0).

So we can think of the real numbers as being embedded in ℂ as those complex numbers whose second coordinate is zero.

Both operations are associative and commutative, and multiplication distributes over addition:

(x, y) · ((a, b) + (c, d)) = (x, y) · (a, b) + (x, y) · (c, d).

Furthermore (0, 0) is the neutral element for addition and (1, 0) is the neutral element for multiplication.
Exercise A.1.1. Prove the next statements:

1. The opposite element of (x, y) is (−x, −y), i.e.

   (x, y) + (−x, −y) = (0, 0).

2. The inverse element of (x, y) ≠ (0, 0) is (x/(x² + y²), −y/(x² + y²)), i.e.

   (x, y) · (x/(x² + y²), −y/(x² + y²)) = (1, 0).

The above properties establish that (ℂ, +, ·) is a field. As such it is an algebraic structure with notions of addition, subtraction, multiplication, and division.

A.2. Number i. Rectangular and Polar Forms

The definition of multiplication implies the identity

(0, 1) · (0, 1) = (−1, 0).      (A.1)

This identity, together with the fact that (a, 0) · (x, y) = (ax, ay), implies

(x, y) = (x, 0) + (0, y) = (x, 0) · (1, 0) + (y, 0) · (0, 1),

which allows an alternative notation for complex numbers.

Rectangular Form
As before, thinking of 1 = (1, 0), x = (x, 0) and y = (y, 0) as real numbers and giving (0, 1) a special name, say i, the complex number is represented by

(x, y) = x + yi.

The number x is called the real part and y the imaginary part of the complex number x + yi, often denoted as Re(x + iy) = x and Im(x + iy) = y. The identity (A.1) then reads

i² = −1.

A complex number written in the form x + iy, where x and y are both real numbers, is in rectangular form.

The complex number i is called a square root of −1 and also the imaginary unit. Hence the polynomial x² + 1 = 0 has roots, but only in ℂ.

Polar Form
Let's for a moment return to the (x, y)-notation of complex numbers. It suggests that one can think of a complex number as a two-dimensional real vector. When plotting these vectors in the plane ℝ², we will call the x-axis the real axis and the y-axis the imaginary axis.

On the other hand, a vector can be determined by its length and the angle it encloses with, say, the positive real axis; let's define these concepts thoroughly. The absolute value (sometimes also called the modulus) r = |z| ∈ ℝ of z = x + iy is

r = |z| = √(x² + y²),

and an argument of z = x + iy is a number θ ∈ ℝ such that

x = r cos θ    and    y = r sin θ.

A given complex number z = x + iy has infinitely many possible arguments θ + 2kπ, where k is any integer number.
Proposition A.2.1. Let z1, z2 ∈ ℂ be two complex numbers, thought of as vectors in ℝ², and let d(z1, z2) denote the distance between the two vectors in ℝ². Then

d(z1, z2) = |z2 − z1| = |z1 − z2|.

Proof. Let z1 = x1 + iy1 and z2 = x2 + iy2. By definition of distance,

d(z1, z2) = √((x2 − x1)² + (y2 − y1)²)

and this expression is equal to |z2 − z1| = |(x2 − x1) + i(y2 − y1)|. Finally, it is obvious that |z2 − z1| = |z1 − z2|.
The complex number cos θ + i sin θ is written in short as e^{iθ}. Initially this expression should not be interpreted as an exponential, but rather as an abbreviation. Later we will see that it verifies the properties of the exponential function and can be understood in such a manner.

Definition A.2.2. The complex number z = x + iy with absolute value r and argument θ is expressed as

z = x + iy = r(cos θ + i sin θ) = re^{iθ}.

The right-hand side of this expression is named the polar form of the complex number z.

Because the argument (angle) is not unique, the polar form is not unique either: for any k ∈ ℤ,

re^{iθ} = re^{i(θ + 2kπ)}.

Figure A.1: Geometric addition and multiplication of complex numbers.


Principal argument. In order to establish a unique expression for every complex number we define the principal argument as the angle −π < θ ≤ π.

Remark. Sometimes it may be interesting to define the principal argument as the angle 0 ≤ θ < 2π.

The polar form is useful for products, quotients, powers and roots of complex numbers.
Proposition A.2.3. For any z, ω ∈ ℂ, ω ≠ 0, expressed as z = re^{iθ} and ω = se^{iφ}:

1. zω = re^{iθ} se^{iφ} = rs e^{i(θ+φ)} (see Figure A.1).

2. ω^{−1} = 1/(se^{iφ}) = (1/s) e^{−iφ}.

3. z/ω = (r/s) e^{i(θ−φ)}.

4. z^n = (re^{iθ})^n = r^n e^{inθ}, for all n ∈ ℤ⁺.

5. The n-th roots of a complex number are exactly n values:

   for all n ∈ ℤ⁺,  ⁿ√z = ⁿ√(re^{iθ}) = ⁿ√r · e^{i(θ+2kπ)/n}  with k = 0, 1, 2, …, n−1.
Proof.
1.  zω = r(cos θ + i sin θ) · s(cos φ + i sin φ)
      = rs ((cos θ cos φ − sin θ sin φ) + i(cos θ sin φ + sin θ cos φ))
      = rs (cos(θ + φ) + i sin(θ + φ)) = rs e^{i(θ+φ)}.

2.  ω^{−1} = 1/(s(cos φ + i sin φ)) = (1/s) · (cos φ − i sin φ)/((cos φ + i sin φ)(cos φ − i sin φ))
      = (1/s) · (cos φ − i sin φ)/(cos² φ + sin² φ) = (1/s)(cos(−φ) + i sin(−φ)) = (1/s) e^{−iφ}.

3.  z/ω = z ω^{−1} = re^{iθ} (1/s) e^{−iφ} = (r/s) e^{i(θ−φ)}.

4.  We use induction: z¹ = z, obviously, and for n > 1 we suppose z^{n−1} = r^{n−1} e^{i(n−1)θ}. Then

    z^n = z^{n−1} z = r^{n−1} e^{i(n−1)θ} re^{iθ} = r^n e^{inθ}.

5.  For any k ∈ ℤ,

    (ⁿ√r · e^{i(θ+2kπ)/n})^n = r e^{i(θ+2kπ)} = z.

    And the reason why there are exactly n roots is the equivalence of angles: (θ + 2kπ)/n and (θ + 2(k+n)π)/n differ by 2π, so the only different angles are obtained for k = 0, 1, 2, …, n−1.
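Item 5 translates directly into code. This Python helper (an added sketch, not from the notes) computes the n roots via the polar form and verifies each of them:

```python
import cmath
from math import pi

def nth_roots(z, n):
    """All n-th roots of z: r^(1/n) e^{i(theta + 2 k pi)/n}, k = 0..n-1."""
    r, theta = cmath.polar(z)
    return [r**(1/n) * cmath.exp(1j * (theta + 2*k*pi) / n) for k in range(n)]

roots = nth_roots(-16, 4)
assert len(roots) == 4
for w in roots:
    assert abs(w**4 - (-16)) < 1e-9   # each root, raised to n, recovers z
```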

Example A.2.4. The fifth roots of unity ⁵√1 are the following complex numbers:

⁵√1 = ⁵√(e^{i2kπ}) = e^{i 2kπ/5}

and, then,

for k = 0,  z0 = e^{i0} = 1;
for k = 1,  z1 = e^{i 2π/5};
for k = 2,  z2 = e^{i 4π/5};
for k = 3,  z3 = e^{i 6π/5};
for k = 4,  z4 = e^{i 8π/5};
for k = 5,  e^{i 10π/5} = e^{i2π} = z0, …

For k = −1, −2, … all values are also repeated.

Exercise A.2.5. Compute and represent the sixth roots of −1, i.e. ⁶√(−1). Also, express such roots in rectangular form.

A.3. Complex Conjugates

Definition A.3.1. For each complex number z = x + iy we define the conjugate of z as

z̄ = x − iy.

It is easy to see that the absolute value can be expressed from z and its conjugate (exercise):

|z| = √(z z̄),

and, hence, when z ≠ 0,

z^{−1} = 1/z = z̄/|z|².

Geometrically, conjugating z means reflecting the vector corresponding to z with respect to the real axis.

The following collects some basic properties of the conjugate. Their easy proofs are left as exercises.

Proposition A.3.2. For any z, z1, z2 ∈ ℂ,

1. \overline{z1 ± z2} = z̄1 ± z̄2.

2. \overline{z1 · z2} = z̄1 · z̄2.

3. \overline{z1/z2} = z̄1/z̄2.

4. \overline{z̄} = z.

5. |z̄| = |z|.

6. z̄ = z iff z is real.

7. Re(z) = (z + z̄)/2.

8. Im(z) = (z − z̄)/(2i).

9. \overline{e^{iθ}} = e^{−iθ}.

A famous geometric inequality (which holds for vectors in ℝⁿ) is the triangle inequality. Complex numbers verify this inequality.

Proposition A.3.3. For z1, z2 ∈ ℂ, |z1 + z2| ≤ |z1| + |z2|.

Proof.

|z1 + z2|² = (z1 + z2) \overline{(z1 + z2)} = (z1 + z2)(z̄1 + z̄2)
           = z1 z̄1 + z1 z̄2 + z2 z̄1 + z2 z̄2 = |z1|² + z1 z̄2 + \overline{z1 z̄2} + |z2|²
           = |z1|² + 2 Re(z1 z̄2) + |z2|².      (A.2)

Finally, since Re(z) ≤ |z| for all z, we have Re(z1 z̄2) ≤ |z1 z̄2| = |z1||z2|, and from (A.2)

|z1 + z2|² ≤ |z1|² + 2|z1||z2| + |z2|² = (|z1| + |z2|)²,

which is equivalent to our claim.

There are several variants of the triangle inequality:

Corollary A.3.4. For z1, z2 ∈ ℂ, we have the following inequalities:

1. |z1 ± z2| ≤ |z1| + |z2| (the triangle inequality).

2. |z1 ± z2| ≥ ||z1| − |z2|| (the reverse triangle inequality).

Proof. Exercise.

Exercises
Exercise 1.1
Let z = 1 + 2i and ω = 2 − i. Compute:

1. z + 3ω.

2. z̄.

3. z³.

4. Re(ω² + ω).

5. z² + z̄ + i.

Exercise 1.2
Find the real and imaginary parts of each of the following:

1. ((−1 + i√3)/2)³.

2. (3 + 5i)/(7i + 1).

3. (z − a)/(z + a)  (a ∈ ℝ).

4. iⁿ for any n ∈ ℤ.

Exercise 1.3
Find the absolute value and conjugate of each of the following:

1. −2 + i.

2. (2 + i)(4 + 3i).

3. (3 − i)/(√2 + 3i).

4. (1 + i)⁶.

Exercise 1.4
Write in both polar and rectangular form:

1. 2i.

2. 1 + i.

3. −3 + √3 i.

4. −i.

5. (2 − i)².

6. |3 − 4i|.

7. √5 − i.

8. ((1 − i)/√3)⁴.

9. (d/dφ) e^{φ + iθ}.

10. 34 e^{iπ/2}.

11. −e^{i250π}.

12. 2 e^{4πi}.

13. −2i.

14. e^{(ln 5) i}.

15. e^{1 + iπ/2}.

16. √2 e^{i3π/4}.

Exercise 1.5
Prove that the quadratic formula works for complex numbers, regardless of whether the discriminant is negative. That is, prove that the roots of the equation az² + bz + c = 0, where a, b, c ∈ ℂ, are (−b ± √(b² − 4ac))/(2a), as long as a ≠ 0.

Exercise 1.6
Find all solutions to the following equations:

1. z² + 25 = 0.

2. 2z² + 2z + 5 = 0.

3. 5z² + 4z + 1 = 0.

4. z² − z = 1.

5. z² = 2z̄.

6. z⁶ = 1.

7. z⁴ = −16.

8. z⁶ = −9.

9. z⁶ − z³ − 2 = 0.

10. z² + 2z + (1 − i) = 0.

11. z⁴ + iz = 2i.

Exercise 1.7
Show that:

1. |z| = 1 if and only if 1/z = z̄.

2. z is a real number if and only if z̄ = z.

3. z is either real or purely imaginary if and only if (z̄)² = z².

Exercise 1.8
Use operations in polar form to derive the triple angle formulas:

1. cos 3θ = cos³ θ − 3 cos θ sin² θ.

2. sin 3θ = 3 cos² θ sin θ − sin³ θ.

Exercise 1.9
Sketch the following sets in the complex plane:

1. {z ∈ ℂ : |z − 1 + i| = 2}.

2. {z ∈ ℂ : |z − 1 + i| ≤ 2}.

3. {z ∈ ℂ : Re(z + 2 − 2i) = 3}.

4. {z ∈ ℂ : |z − i| + |z + i| = 3}.

5. {z ∈ ℂ : |z| = |z + 1|}.

6. {z ∈ ℂ : 2|z| ≥ |z + i|}.

7. {z ∈ ℂ : |z + 3| < 2}.

8. {z ∈ ℂ : |Im z| < 1}.

9. {z ∈ ℂ : 1 ≤ |z − 1| < 2}.

10. {z ∈ ℂ : |z − 1| + |z + 1| < 3}.

Exercise 1.10
Use the triangle inequality to show that |1/(z² − 1)| ≤ 1/3 for every z on the circle z = 2e^{iθ}.

Appendix B

Elementary Complex Functions


B.1. Exponential Function

The complex exponential function is defined for z = x + iy as


exp(z) = e^x e^{iy} = e^x (cos y + i sin y).
This definition specializes to the real exponential function: for x ∈ ℝ,

exp(x) = exp(x + i0) = e^x e^{i0} = e^x.
Furthermore all exponential rules which we are used to from real numbers carry over to the
complex case.
Proposition B.1.1. For all z, z1, z2 ∈ ℂ,

1. exp(z) ≠ 0.

2. exp(z1 + z2) = exp(z1) exp(z2).

3. exp(−z) = 1/exp(z).

4. exp(z) is entire and (exp(z))′ = exp(z).

Specific rules for the complex exponential which differ from the real exponential are:

5. exp(z + 2πi) = exp(z), i.e. the complex exponential is periodic of period 2πi.

6. |exp(z)| = exp(Re z).
Proof.
1. Suppose z0 = x0 + iy0 is such that exp(z0) = 0. Since e^x > 0 for all real x, we would have

   e^{x0}(cos y0 + i sin y0) = 0 ⟹ cos y0 = sin y0 = 0,

   but this is impossible.

2. From Proposition A.2.3 and the known property of the real exponential,

   exp(z1 + z2) = e^{x1+x2} e^{i(y1+y2)} = e^{x1} e^{x2} e^{iy1} e^{iy2} = exp(z1) exp(z2).

3. Also from Proposition A.2.3: exp(−z) = e^{−x} e^{−iy} = 1/(e^x e^{iy}) = 1/exp(z).

4. Use the Cauchy-Riemann equations for exp(z) = u(x, y) + iv(x, y) with u(x, y) = e^x cos y and v(x, y) = e^x sin y. Furthermore,

   (exp(z))′ = ∂(e^x e^{iy})/∂x = e^x e^{iy} = exp(z).

5. Trivial, because cos and sin are periodic functions with period 2π.

6. |exp(z)| = e^x |e^{iy}| = e^x.

Remark. Note that we write the complex exponential function as exp z and not e^z because, as we will see in Section B.5, the expression e^z is not strictly a function.
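The rules of Proposition B.1.1 can be spot-checked with Python's `cmath` module (an added illustration, not part of the original notes):

```python
import cmath
from math import pi, exp

z = 0.7 - 1.3j
# periodicity with period 2*pi*i
assert abs(cmath.exp(z + 2j*pi) - cmath.exp(z)) < 1e-12
# |exp z| = exp(Re z)
assert abs(abs(cmath.exp(z)) - exp(z.real)) < 1e-12
# exp(-z) = 1/exp(z)
assert abs(cmath.exp(-z) - 1/cmath.exp(z)) < 1e-12
```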


Figure B.1: Images of exp(z) for varying z = x + iy. (a) exp(x0 + iy) for fixed values of x0 produces circles centered at the origin. (b) exp(x + iy0) for fixed values of y0 produces (infinite) rays from the origin.

B.2. Trigonometric Functions

The complex exponential function allows us to define the trigonometric functions. The complex sine and cosine are defined respectively as

sin z = (e^{iz} − e^{−iz})/(2i)    and    cos z = (e^{iz} + e^{−iz})/2.

Because exp z is entire, so are sin z and cos z. Furthermore:

Proposition B.2.1.

(sin z)′ = cos z,    (cos z)′ = −sin z.

Proof. Exercise.
As with the exponential function, we should first make sure that we are not redefining the real sine and cosine: if x ∈ ℝ then

sin(x + i0) = (e^{ix} − e^{−ix})/(2i) = ((cos x + i sin x) − (cos x − i sin x))/(2i) = (2i sin x)/(2i) = sin x,

cos(x + i0) = (e^{ix} + e^{−ix})/2 = ((cos x + i sin x) + (cos x − i sin x))/2 = (2 cos x)/2 = cos x.

We know the real sin and cos are bounded functions, but this is not true for the corresponding complex functions.

Proposition B.2.2. The complex function sin z (resp. cos z) is not bounded.

Proof. |sin(iy)| = |(e^{−y} − e^{y})/(2i)| = (e^y − e^{−y})/2 = (1/2)e^y − (1/2)e^{−y}, which diverges to ∞ as y → ∞. Similarly for cos z.
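Numerically (a Python sketch added to these notes), |sin(iy)| indeed equals (e^y − e^{−y})/2 and grows without bound:

```python
import cmath
from math import exp

# |sin(iy)| = (e^y - e^{-y})/2 grows without bound as y -> infinity
for y in (1.0, 5.0, 10.0):
    assert abs(abs(cmath.sin(1j*y)) - (exp(y) - exp(-y))/2) < 1e-9 * exp(y)
assert abs(cmath.sin(10j)) > 11000   # sinh(10) is already about 11013
```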
The tangent and cotangent are defined as

tan z = sin z / cos z = −i (exp(2iz) − 1)/(exp(2iz) + 1)    and    cot z = cos z / sin z = i (exp(2iz) + 1)/(exp(2iz) − 1),

respectively.
Proposition B.2.3.

(a) tan z is holomorphic at every complex number z ≠ (2k+1)π/2, k ∈ ℤ.

(b) cot z is holomorphic at every complex number z ≠ kπ, k ∈ ℤ.

Proof.
(a) By Proposition 4.1.15, tan z is differentiable where cos z ≠ 0, but

cos z = 0 ⟹ e^{iz} = −e^{−iz} ⟹ e^{−y}(cos x + i sin x) = e^{y}(−cos x + i sin x).

This makes y ≠ 0 impossible, and therefore z = x must be real. We know cos x = 0 exactly where x = (2k+1)π/2.

(b) Similarly for cot z.

Theorem B.2.4 (Fundamental Theorem of Trigonometry). For all z ∈ ℂ,

sin² z + cos² z = 1.

Proof.

sin² z + cos² z = ((e^{iz} − e^{−iz})/(2i))² + ((e^{iz} + e^{−iz})/2)²
                = (−(e^{iz} − e^{−iz})² + (e^{iz} + e^{−iz})²)/4
                = ((e^{iz} + e^{−iz}) + (e^{iz} − e^{−iz}))((e^{iz} + e^{−iz}) − (e^{iz} − e^{−iz}))/4
                = (2e^{iz})(2e^{−iz})/4 = 4 e^{iz} e^{−iz}/4 = 1.
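The identity holds for genuinely complex arguments too, as this quick Python check (an added illustration) confirms:

```python
import cmath

# sin^2 z + cos^2 z = 1 at several complex sample points
for z in (0.3 + 0.7j, -2 + 1j, 1.5 - 3j):
    val = cmath.sin(z)**2 + cmath.cos(z)**2
    assert abs(val - 1) < 1e-9
```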

All rules for the real trigonometric functions are also satisfied by the complex functions:

Proposition B.2.5. For all z, z1, z2 ∈ ℂ,

1. sin(z + 2π) = sin z and cos(z + 2π) = cos z (both are periodic functions with period 2π).

2. tan(z + π) = tan z and cot(z + π) = cot z (both are periodic functions with period π).

3. sin(z1 ± z2) = sin z1 cos z2 ± cos z1 sin z2.

4. cos(z1 ± z2) = cos z1 cos z2 ∓ sin z1 sin z2.

5. sin(2z) = 2 sin z cos z and cos(2z) = cos² z − sin² z.

6. sin(−z) = −sin z and cos(−z) = cos z.

7. sin(z + π/2) = cos z and cos(z + π/2) = −sin z.

B.3. Hyperbolic Trig Functions

The hyperbolic sine, cosine, tangent, and cotangent are defined as in the real case:

sinh z = (e^z − e^{−z})/2,        cosh z = (e^z + e^{−z})/2,

tanh z = sinh z / cosh z = (exp(2z) − 1)/(exp(2z) + 1),        coth z = cosh z / sinh z = (exp(2z) + 1)/(exp(2z) − 1).
They satisfy rules like the homologous real functions, especially:

Proposition B.3.1. For all z ∈ ℂ,

(a) cosh² z − sinh² z = 1.

(b) cosh(−z) = cosh z and sinh(−z) = −sinh z.

(c) (sinh z)′ = cosh z and (cosh z)′ = sinh z.

Proof. Exercise.

Moreover, they are related to the trigonometric functions via the following useful identities:

Proposition B.3.2. For all z ∈ ℂ,

sinh(iz) = i sin z    and    cosh(iz) = cos z.

Proof. Exercise.

B.4. Logarithms

Classically, the logarithm function is the inverse of the exponential function. For the real function e^x, its inverse is called the natural¹ logarithm ln x, so the following identities are verified:

e^{ln x} = x    and    ln(e^x) = x.

This is possible because e^x : ℝ → ℝ⁺ (and therefore ln x : ℝ⁺ → ℝ) is a bijection. However, the complex exp(z) : ℂ → ℂ is not a bijection, because it is a periodic function (period 2πi), and this means there exist many inverse functions of the exponential, called logarithmic branches log z.

Definition B.4.1. Let z ≠ 0 be a non-null complex number with argument arg z, and let φ ∈ ℝ be a fixed angle. We call logarithmic branch the function

log z = ln |z| + i arg z,    where arg z ∈ (φ, φ + 2π] or arg z ∈ [φ, φ + 2π).

Thus we have an infinite number of logarithmic branches. As an example, for z = −1, considering arg(z) = π, we have log(−1) = ln 1 + i(π + 2kπ) = i(π + 2kπ), with k any integer.

Proposition B.4.2. Every logarithmic branch verifies exp(log z) = z. In general, log(exp z) ≠ z (see Example B.4.3); however, if z = x + iy with y ∈ (φ, φ + 2π] and log is the corresponding branch, then log(exp z) = z.
Proof. We have

exp(log z) = exp(ln |z| + i arg z) = e^{ln |z|} e^{i arg z} = |z| e^{i arg z} = z.      (B.1)

On the other hand, let z = x + iy with φ < y ≤ φ + 2π; then arg(exp z) = y and

log(exp z) = log(e^x e^{iy}) = ln(e^x) + i arg(exp z) = x + iy = z.

Although it is usual to consider the argument of a complex number in [0, 2π), this is not the principal choice; the principal branch considers the argument in (−π, π]. This principal branch is represented by Log z; more concretely,

Log(z) = ln |z| + i arg z,    −π < arg z ≤ π.

If z = x + iy ≠ 0, then Log z = ln(x² + y²)/2 + i arctan(y/x), considering arctan from −π to π.

Example B.4.3. For z = 2πi,

Log(exp z) = Log(exp 2πi) = Log(1) = 0 ≠ z.
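Python's `cmath.log` implements the principal branch Log, so both facts can be checked directly (an added illustration, not part of the original notes):

```python
import cmath
from math import pi

z = -1 + 1j
L = cmath.log(z)          # principal branch: ln|z| + i Arg z, Arg in (-pi, pi]
assert abs(cmath.exp(L) - z) < 1e-12          # exp(Log z) = z always
assert abs(L.imag - 3*pi/4) < 1e-12           # Arg(-1+i) = 3*pi/4
# but Log(exp z) != z in general:
w = 2j * pi
assert abs(cmath.log(cmath.exp(w))) < 1e-12   # Log(exp(2*pi*i)) = Log 1 = 0
```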
Proposition B.4.4. Every logarithmic branch log z determined by φ < arg z ≤ φ + 2π is continuous at every complex number except at the points of the ray Rφ = {re^{iφ} : r ≥ 0}.

In particular, the principal logarithm Log is not continuous on the negative real semiaxis, i.e. the ray {x + 0i : x ≤ 0}.

Proof. Consider z0 ≠ 0 in the ray Rφ and let z approach z0. Let θ be the argument of z.

If θ → φ⁺, i.e. z approaches z0 with θ > φ, then

lim_{z→z0} log z = ln |z0| + iφ.

But if θ → φ⁻, i.e. z approaches z0 with θ < φ, then

lim_{z→z0} log z = ln |z0| + i(φ + 2π).

Therefore the limit at z0 does not exist and log z is not continuous at the ray Rφ. Obviously, log z is continuous at the points that are not in the ray Rφ.

¹ Also called the Naperian logarithm in honor of the Scottish mathematician John Napier (1550–1617).
Theorem B.4.5. For z not in the ray Rφ defined above, the corresponding logarithmic branch is differentiable and

(log z)′ = 1/z.

Proof. Using Proposition 4.1.17 with log z = exp^{−1}(z) and (B.1),

(log z)′ = 1/(exp)′(log z) = 1/exp(log z) = 1/z.

B.5. Exponential with any non-null complex base

Definition B.5.1. Let a ≠ 0 be a complex number. For z ∈ ℂ we define the exponential

a^z = exp(z log a),

which is not a function because it is not unique. Observe that a^z takes many values, as many as there are logarithmic branches. To avoid this, we define the principal value of a^z as

a^z = exp(z Log a),

which has a unique value. With these definitions e^z ≠ exp(z), but equality holds if we consider e^z as the principal value.

Example B.5.2. Let us calculate √1 = 1^{1/2} using the above definition.

√1 = exp((1/2) log 1) = exp((2kπi)/2) = exp(kπi) = cos(kπ) + i sin(kπ),    k ∈ ℤ.

Therefore √1 has only two values, √1 = 1 (for k even) and √1 = −1 (for k odd)². Furthermore Log 1 = 0 and the principal value of √1 is 1.

Example B.5.3. The power of an imaginary number raised to another imaginary number may be a real number:

i^i = exp(i log i) = exp(−(π/2 + 2kπ)),    k ∈ ℤ.

The principal value of i^i is e^{−π/2} ≈ 0.2079.

² We already know that.
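The principal value of i^i can be computed with the principal logarithm, as in this added Python sketch:

```python
import cmath
from math import pi, exp

# principal value of i^i = exp(i * Log i) = exp(i * (i*pi/2)) = e^{-pi/2}
principal = cmath.exp(1j * cmath.log(1j))
assert abs(principal - exp(-pi/2)) < 1e-12
assert abs(exp(-pi/2) - 0.2079) < 1e-4
```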

Exercises
Exercise 2.1
Describe the images of the following sets under the exponential function:

1. the line segment defined by z = iy, 0 ≤ y ≤ 2π.

2. the line segment defined by z = 1 + iy, 0 ≤ y ≤ 2π.

3. the rectangle {z = x + iy ∈ ℂ : 0 ≤ x ≤ 1, 0 ≤ y ≤ 2π}.

Exercise 2.2
Describe the image under exp of the line with equation y = x.

Exercise 2.3
Prove that sin(z̄) equals the conjugate of sin(z), and cos(z̄) equals the conjugate of cos(z).

Exercise 2.4
Find the expressions u(x, y) + iv(x, y) of the functions sin z and cos z.

Exercise 2.5
Let z = x + iy and show that
1. |sin z|² = sin²x + sinh²y = cosh²y − cos²x.
2. |cos z|² = cos²x + sinh²y = cosh²y − sin²x.
3. If cos x = 0 then |cot z|² = (cosh²y − 1)/cosh²y ≤ 1.
4. If |y| ≥ 1 then |cot z|² ≤ (sinh²y + 1)/sinh²y = 1 + 1/sinh²y ≤ 1 + 1/sinh²1 ≤ 2.
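Before proving parts 1 and 2, the identities are easy to test numerically at a few (arbitrarily chosen) sample points — a sanity check, not part of the notes:

```python
import cmath, math

# Check |sin z|^2 = sin^2(x) + sinh^2(y) = cosh^2(y) - cos^2(x) at sample points
for x, y in [(0.3, 0.7), (-1.2, 2.0), (2.5, -0.4)]:
    z = complex(x, y)
    lhs = abs(cmath.sin(z)) ** 2
    mid = math.sin(x) ** 2 + math.sinh(y) ** 2
    rhs = math.cosh(y) ** 2 - math.cos(x) ** 2
    print(lhs, mid, rhs)    # the three columns agree at every point
```

The proof itself follows from sin(x + iy) = sin x cosh y + i cos x sinh y.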

Exercise 2.6
Evaluate the value(s) of the following expressions, giving your answers in the form x + iy.
1. e^{πi}.
2. e^{π}.
3. i^{1−i}.
4. e^{sin i}.
5. exp(Log(3 + 4i)).
6. √(1 + i).
7. √(3(1 − i)).
8. ((i + 1)/√2)⁴.

Exercise 2.7
Find the principal values of
1. log i.
2. (−1)^i.
3. log(1 + i).

Exercise 2.8
Is there a difference between the set of all values of log(z²) and the set of all values of 2 log z?
(Try some fixed numbers for z.)


Exercise 2.10
For each of the following functions, determine all complex numbers for which the function is
holomorphic. If you run into a logarithm, use the principal value (unless stated otherwise).


1. z².
2. sin(z)/(z³ + 1).
3. exp(z̄).
4. log(z − 2i + 1), where log(z) = ln |z| + i arg(z) with 0 ≤ arg(z) < 2π.
5. (z − 3)^i.
6. i^{z−3}.
7. 1/Log(z).

Exercise 2.11
Find all solutions to the following equations:
1. Log(z) = (π/2)i.
2. Log(z) = (3π/2)i.
3. exp(z) = πi.
4. sin(z) = cosh 4.
5. cos(z) = 0.
6. sinh(z) = 0.
7. exp(iz) = exp(iz̄).
8. z^{1/2} = 1 + i.
9. cosh(z) = −1.

Exercise 2.12
Fix c ∈ ℂ∖{0}. Find the derivative of f(z) = z^c.

Exercise 2.13
Prove that a^b is single-valued if and only if b is an integer. (Note that this means that
complex exponentials don't clash with monomials z^n.) What can you say if b is rational?


Appendix C

Computing Some Real Integrals


We have seen that the residue theorem solves a lot of complex integrals, but it also solves some
real integrals which would otherwise require complicated methods of resolution.
Within the scope of this course we will see just some of the methods that are standard in
mathematics texts, for the sole purpose of showing the power of the method and giving us an
idea of how to proceed in other cases.
In general, to calculate a real integral we look for an integral of a complex function along a
closed, simple, smooth, counterclockwise curve which can be evaluated by the method of
residues and, if necessary, we decompose this integral into appropriate pieces of the curve.
Obviously, the results must be the same and, equating them, we solve the integral we intend
to compute.

C.1. Integrals of the form ∫₀^{2π} R(sin x, cos x) dx

Inside the integral, R is a rational function. Making the change of variable z = e^{ix} we obtain

    sin x = (z − z⁻¹)/(2i),   cos x = (z + z⁻¹)/2,   dz = i e^{ix} dx = iz dx,

and therefore the real integral becomes

    ∫₀^{2π} R(sin x, cos x) dx = ∫_Γ R((z − z⁻¹)/(2i), (z + z⁻¹)/2) dz/(iz),

where Γ(t) = e^{it}, with 0 ≤ t ≤ 2π, is the circle of radius 1 around the number 0,
parametrized counterclockwise.
For the proper operation of this method it is necessary that the resulting complex function
in the second integral has no poles on the curve Γ.
Example C.1.1. Compute ∫₀^{2π} dx/(2 + cos x)².
Using the change of variable described above we have

    ∫₀^{2π} dx/(2 + cos x)² = ∫_Γ 1/(2 + (z + z⁻¹)/2)² · dz/(iz) = (4/i) ∫_Γ z/(z² + 4z + 1)² dz,

Γ being the counterclockwise circle |z| = 1.
The function f(z) = z/(z² + 4z + 1)² has two poles of order 2 at z₀ = √3 − 2 and z₁ = −√3 − 2,
but only z₀ is inside the circle Γ. Then

    Res(f(z), z₀) = (1/1!) lim_{z→z₀} d/dz [(z − z₀)² · z/((z − z₀)²(z − z₁)²)]
                  = lim_{z→z₀} (−z − z₁)/(z − z₁)³ = 1/(6√3).

And therefore, using the Residue Theorem,

    ∫₀^{2π} dx/(2 + cos x)² = (4/i) · 2πi · (1/(6√3)) = 4π/(3√3).
C.2. Improper Integrals

Example C.2.1. Compute ∫_{−∞}^{∞} x²/((x² + 1)(x² + 4)) dx.
We note that the singularities of f(z) = z²/((z² + 1)(z² + 4)) are not on the real axis. Then
consider the closed curve Γ composed of the upper semicircle C_R of radius R and the segment
[−R, R], according to the following drawing:

Figure C.1: the segment [−R, R] closed by the upper semicircle C_R.

For R sufficiently large, the only (simple) poles of f(z) inside the curve Γ are z₀ = i and
z₁ = 2i (see figure C.1) and their residues are

    Res(f(z), z₀) = i²/((i + i)(i² + 4)) = −1/(6i),
    Res(f(z), z₁) = (2i)²/(((2i)² + 1)(2i + 2i)) = 1/(3i).

Therefore the complex integral is

    ∫_Γ z²/((z² + 1)(z² + 4)) dz = 2πi (−1/(6i) + 1/(3i)) = π/3

and also

    ∫_Γ z²/((z² + 1)(z² + 4)) dz
        = ∫_{C_R} z²/((z² + 1)(z² + 4)) dz + ∫_{[−R,R]} x²/((x² + 1)(x² + 4)) dx.      (C.1)

Now, parametrizing C_R by Re^{it} with t ∈ [0, π], we have

    |∫_{C_R} z²/((z² + 1)(z² + 4)) dz| ≤ πR · max_{0≤t≤π} R²|e^{2it}|/(|R²e^{2it} + 1| |R²e^{2it} + 4|)
                                       ≤ πR · R²/((R² − 1)(R² − 4)),

which verifies that ∫_{C_R} z²/((z² + 1)(z² + 4)) dz → 0 when R → ∞.
Taking the limit in (C.1) when R goes to infinity, we have

    ∫_{−∞}^{∞} x²/((x² + 1)(x² + 4)) dx = π/3.
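A brute-force numerical check is also possible: the integrand decays like 1/x², so truncating at ±L leaves a tail of order 2/L (a sanity check, not part of the notes):

```python
import math

def f(x):
    return x * x / ((x * x + 1) * (x * x + 4))

# Composite trapezoid rule on [-L, L]; the neglected tails contribute about 2/L.
L, n = 1000.0, 200000
h = 2 * L / n
approx = h * (0.5 * (f(-L) + f(L)) + sum(f(-L + k * h) for k in range(1, n)))

print(approx, math.pi / 3)
```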

Example C.2.2. Compute ∫₀^{∞} cos x/(x² + 1) dx.
Since the integrand is an even function, we have
∫₀^{∞} cos x/(x² + 1) dx = (1/2) ∫_{−∞}^{∞} cos x/(x² + 1) dx. On the other hand, we consider

    f(z) = e^{iz}/(z² + 1)

defined inside the closed curve Γ described in example C.2.1 above. For R sufficiently large,
the only pole inside Γ is z₀ = i and its residue is
Res(f(z), z₀) = lim_{z→i} e^{iz}/(z + i) = e^{i·i}/(2i) = 1/(2ei). Then

    ∫_Γ e^{iz}/(z² + 1) dz = 2πi · 1/(2ei) = π/e.

When C_R is the upper semicircle,

    π/e = ∫_{C_R} e^{iz}/(z² + 1) dz + ∫_{−R}^{R} cos x/(x² + 1) dx + i ∫_{−R}^{R} sin x/(x² + 1) dx.

Before taking the limit when R → ∞ we observe:

    lim_{R→∞} ∫_{C_R} e^{iz}/(z² + 1) dz = 0, because |e^{iz}| = e^{−R sin t} ≤ 1 on C_R and

        |∫_{C_R} e^{iz}/(z² + 1) dz| ≤ ∫_{C_R} |e^{iz}|/|z² + 1| |dz| ≤ πR · 1/(R² − 1) → 0 when R → ∞.

    The principal value of ∫_{−∞}^{∞} sin x/(x² + 1) dx is 0 because sin x/(x² + 1) is an odd function.

Hence

    ∫₀^{∞} cos x/(x² + 1) dx = π/(2e).
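This value can again be sanity-checked numerically; after one integration by parts the tail beyond L is O(1/L²), so a truncated trapezoid rule suffices (not part of the notes):

```python
import math

def f(x):
    return math.cos(x) / (x * x + 1)

# Composite trapezoid rule on [0, L]; the oscillatory tail beyond L is O(1/L^2).
L, n = 1000.0, 400000
h = L / n
approx = h * (0.5 * (f(0.0) + f(L)) + sum(f(k * h) for k in range(1, n)))

print(approx, math.pi / (2 * math.e))   # both ≈ 0.5779
```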
We end these applications of the residue theorem by computing a real improper integral which
needs to avoid a singularity of the complex function on the curve.
To compute the example below we use the following result:
Proposition C.2.3 (Jordan's inequality). For every R > 0 we have

    ∫₀^{π} e^{−R sin t} dt < π/R.

Proof. As we observe in figure C.2, the segment from (0, 0) to (π/2, 1) lies under the graph
of the sine function, i.e.

    (2/π) t ≤ sin t   when 0 ≤ t ≤ π/2.

Figure C.2: the line y = (2/π)t under the curve y = sin t on [0, π/2].

Then, when R > 0 and 0 ≤ t ≤ π/2, we have −R sin t ≤ −(2R/π) t and therefore

    ∫₀^{π/2} e^{−R sin t} dt ≤ ∫₀^{π/2} e^{−2Rt/π} dt = (π/(2R)) (1 − e^{−R}) < π/(2R).

On the other hand, making the change of variable s = t − π/2, we have
∫_{π/2}^{π} e^{−R sin t} dt = ∫₀^{π/2} e^{−R cos s} ds, and 1 − (2/π)s ≤ cos s with 0 ≤ s ≤ π/2,
i.e. −R cos s ≤ −R + (2R/π)s, therefore

    ∫₀^{π/2} e^{−R cos s} ds ≤ e^{−R} ∫₀^{π/2} e^{2Rs/π} ds = e^{−R} (π/(2R)) (e^{R} − 1)
                             = (π/(2R)) (1 − e^{−R}) < π/(2R).

Adding the two estimates proves the proposition.
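The inequality is easy to test numerically for a few values of R (a sanity check, not part of the notes):

```python
import math

def lhs(R, n=20000):
    # Trapezoid approximation of the integral of exp(-R*sin(t)) over [0, pi]
    h = math.pi / n
    f = lambda t: math.exp(-R * math.sin(t))
    return h * (0.5 * (f(0.0) + f(math.pi)) + sum(f(k * h) for k in range(1, n)))

for R in (0.5, 1.0, 10.0, 100.0):
    print(R, lhs(R), math.pi / R)   # the middle column is always smaller
```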

Example C.2.4. Compute ∫₀^{∞} (sin x)/x dx.
For R > ε > 0, consider the counterclockwise closed curve Γ composed of the following open
curves:
1. Semicircle C_R parametrized by Re^{it} with t ∈ [0, π].
2. Segment [−R, −ε] on the real axis.
3. Clockwise semicircle C_ε parametrized by εe^{i(π−t)} with t ∈ [0, π].
4. Segment [ε, R] on the real axis.

Figure C.3: the contour Γ: the large semicircle C_R, the segment [−R, −ε], the small
clockwise semicircle C_ε, and the segment [ε, R].

The function f(z) = e^{iz}/z is holomorphic inside the curve Γ, so ∫_Γ f(z) dz = 0. Furthermore

    0 = ∫_Γ f(z) dz = ∫_{C_R} e^{iz}/z dz + ∫_{−R}^{−ε} e^{ix}/x dx + ∫_{C_ε} e^{iz}/z dz + ∫_{ε}^{R} e^{ix}/x dx.

But the real functions (cos x)/x and (sin x)/x are, respectively, odd and even, hence,
writing C_ε counterclockwise from now on,

    0 = ∫_{C_R} e^{iz}/z dz − ∫_{C_ε} e^{iz}/z dz + 2i ∫_{ε}^{R} (sin x)/x dx.      (C.2)

Now,

    |∫_{C_R} e^{iz}/z dz| = |∫₀^{π} (e^{iR(cos t + i sin t)}/(Re^{it})) iRe^{it} dt|
                          ≤ ∫₀^{π} |e^{iR cos t}| e^{−R sin t} dt = ∫₀^{π} e^{−R sin t} dt

and, using proposition C.2.3 (Jordan's inequality), we have

    |∫_{C_R} e^{iz}/z dz| < π/R → 0 when R → ∞.

The Laurent series of the function e^{iz}/z is

    e^{iz}/z = 1/z + i + i²z/2! + i³z²/3! + ··· = 1/z + g(z),

g(z) being holomorphic everywhere; in particular there exists a constant M such that
|g(z)| ≤ M for |z| ≤ 1, and

    ∫_{C_ε} e^{iz}/z dz = ∫_{C_ε} (1/z) dz + ∫_{C_ε} g(z) dz.

But

    ∫_{C_ε} (1/z) dz = ∫₀^{π} (1/(εe^{it})) iεe^{it} dt = πi   and   |∫_{C_ε} g(z) dz| ≤ Mπε.

Hence

    lim_{ε→0⁺} ∫_{C_ε} e^{iz}/z dz = πi.

Then, taking limits when ε → 0 and R → ∞ in (C.2), we obtain the required result:

    ∫₀^{∞} (sin x)/x dx = π/2.
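The Dirichlet integral converges only conditionally, so for a numerical check it helps to truncate at X = (2m + 1/2)π, where the leading tail term cos X / X vanishes (a sanity check, not part of the notes):

```python
import math

def f(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

# Truncate at X = (2m + 1/2)*pi so that cos(X) = 0 and the tail is O(1/X^2).
m = 50
X = (2 * m + 0.5) * math.pi
n = 400000
h = X / n
approx = h * (0.5 * (f(0.0) + f(X)) + sum(f(k * h) for k in range(1, n)))

print(approx, math.pi / 2)   # both ≈ 1.5708
```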

Exercises
Exercise 3.1
Use residues to evaluate the following:
1. ∫₀^{π} (cos 2θ)/(5 − 3 cos θ) dθ.
2. ∫₀^{2π} dθ/(1 + a cos θ) with |a| < 1.

Exercise 3.2
Evaluate ∫₀^{∞} √x/(1 + x²) dx.
Hint: Use the change of variable x = u² to convert it into a rational integral and apply the
residues method.

Exercise 3.3
Evaluate ∫₀^{∞} (cos x² − sin x²)/(1 + x⁴) dx.
Hint: Use the complex function f(z) = e^{iz²}/(1 + z⁴).
