Ecuaciones Diferenciales
based on the works of
Rosario Muñoz Marín,
Juan José Saameño Rodríguez
and
Francisco J. Rodríguez Sánchez
Degrees in Electronic Systems Engineering,
Telecommunication Systems Engineering,
Telematics Engineering
and
Sound and Image Engineering
Universidad de Málaga
Índice general

1. Ordinary Differential Equations (ODE) . . . . . . . . . . . . . . 1
   1.1. Introduction and definitions . . . . . . . . . . . . . . . . 1
        1.1.1. Solutions of an ODE . . . . . . . . . . . . . . . . . 1
   1.2. First Order Differential Equation . . . . . . . . . . . . . 2
        1.2.1. Equations with Separated Variables . . . . . . . . . 5
        1.2.2. Homogeneous Equations . . . . . . . . . . . . . . . . 5
        1.2.3. Exact Differential Equation . . . . . . . . . . . . . 6
        1.2.4. Linear Differential Equations . . . . . . . . . . . . 9
   1.3. Integrating ODEs of higher order . . . . . . . . . . . . . . 11
        1.3.1. Linear ODEs . . . . . . . . . . . . . . . . . . . . . 13
        1.3.2. Second order Linear ODEs . . . . . . . . . . . . . . 13
        1.3.3. Linear ODEs of order n . . . . . . . . . . . . . . . 19
   1.4. Systems of Linear Differential Equations . . . . . . . . . . 20
        1.4.1. First Order Systems . . . . . . . . . . . . . . . . . 21
   Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Chapter 1
Ordinary Differential
Equations (ODE)
1.1. Introduction and definitions

A differential equation is an equation relating an unknown function and its derivatives. If the unknown function depends on a single variable, it is an ordinary differential equation (ODE); if it depends on several variables and partial derivatives appear, it is a partial differential equation (PDE). For example:

a) xy' + y^2 x = 1 (ODE)

b) x ∂z/∂x − y^2 ∂z/∂y = x (PDE)

c) x^2 y − xy' = e^x (ODE)

d) ∂^2 z/∂x^2 + ∂^2 z/(∂x ∂y) − ∂^2 z/∂y^2 = y (PDE)
The order of a differential equation is the order of the highest derivative appearing in the equation.

Some very important differential equations appear in the literature; a few examples are:

Newton's Second Law: F = m d^2 s/dt^2, where the force F = F(t) and the space s = s(t) are functions of time and the mass m is a constant.

Simple Pendulum Motion: d^2 θ/dt^2 + (g/L) θ = 0, where the angle θ = θ(t) is a function of time, and the gravity g and the length of the pendulum L are constants.

RLC circuit: L d^2 Q/dt^2 + R dQ/dt + (1/C) Q = E, where the charge Q = Q(t) and the voltage E = E(t) are functions of time, and the inductance L, resistance R and capacity C are constants.

Heat equation: k ∂^2 u/∂x^2 = ∂u/∂t, where k is a constant (thermal diffusivity) and u = u(x, t) is the temperature along a one-dimensional wire at distance x and time t.
1.1.1. Solutions of an ODE

Example 1.1.1. It is easy to check that the functions y = (x^2 + C)^2 solve the first order ODE

    y' = 4x √y.

Hence this equation has infinitely many solutions: the parametric family y = (x^2 + C)^2, for any constant C. Moreover, this ODE has the trivial solution y = 0.

A solution not involving an arbitrary constant is called a particular solution. A parametric family of functions which contains every particular solution is called a general solution. Sometimes the general solution does not contain all the solutions; then we say that the ODE has singular solutions. In Example 1.1.1 above, the family y = (x^2 + C)^2 is a general solution and the trivial solution y = 0 is a singular solution.
Curve solutions

It is very common that the solutions of an ODE are expressed as curves instead of functions.

Example 1.1.2. The first order differential equation

    yy' + x = 1

has as general solution the following family of curves (circles centered at (1, 0)):

    x^2 + y^2 − 2x = C,  with C ≥ −1.
1.2. First Order Differential Equation

Example 1.2.3. The Cauchy problem

    y' = x/y^2,  y(2) = 0

has the solution

    y = ((3/2) x^2 − 6)^(1/3),

but f(x, y) = x/y^2 is not continuous at (2, 0). This shows that the continuity hypotheses of the existence theorem are sufficient but not necessary.
Example 1.2.4. The Cauchy problem

    yy' + x = 1,  y(2) = 0

has no solution: the curve x^2 + y^2 − 2x = 0 satisfies the equation (see Example 1.1.2), but it does not define a function in the form y = g(x) around the point (2, 0) (remember the Implicit Function Theorem).
Example 1.2.5. The Cauchy problem

    xy' = 2y,  y(0) = 0

has solutions, but not a unique one. In fact, it has infinitely many solutions y = Cx^2.
Sometimes a first order ODE is expressed in a form equivalent to the normal form y' = f(x, y), but in a different notation:

    P(x, y) dy = Q(x, y) dx  ⟺  y' = dy/dx = Q(x, y)/P(x, y) = f(x, y).
Differentiating y = x^2 + c gives the slope y' = 2x, so the orthogonal curves satisfy

    dy/dx = −1/(2x)  ⟹  y = −(1/2) ln|x| + C.

Figure 1.2: Family of curves (in blue) orthogonal to the family y = x^2 + c (in black).
Example 1.2.7. Let us compute the family orthogonal to the hyperbolas

    x^2 − y^2 = 2cx.

Differentiating, 2x dx − 2y dy = 2c dx = ((x^2 − y^2)/x) dx, hence the family satisfies

    (x^2 + y^2) dx = 2xy dy,

and the orthogonal family is obtained by replacing the slope by its negative reciprocal, which gives the equivalent equation

    2xy dx + (x^2 + y^2) dy = 0,

whose solution will be given in Example 1.2.16.

Now we give some well-known methods for integrating first order ODEs.
1.2.1. Equations with Separated Variables
1.2.2. Homogeneous Equations

Example. Consider the homogeneous equation xy' = y + √(x^2 − y^2) with x > 0. With the change u = y/x, i.e. y = ux, it becomes x u' = √(1 − u^2), that is,

    du/√(1 − u^2) = dx/x.

Then arcsin u = ln|x| + C, i.e. arcsin(y/x) = ln|x| + C. We can also write y = x sin(ln|x| + C).
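The solution above can be tested numerically. The following sketch assumes the equation has the form xy' = y + √(x^2 − y^2) inferred from the substitution, and compares a central finite difference of y = x sin(ln|x| + C) against the right-hand side at a few points where cos(ln x + C) ≥ 0:

```python
import math

def y(x, C=0.3):
    # candidate general solution y = x * sin(ln|x| + C)
    return x * math.sin(math.log(abs(x)) + C)

def rhs(x, yv):
    # right-hand side of y' = y/x + sqrt(1 - (y/x)^2)
    u = yv / x
    return u + math.sqrt(1 - u * u)

h = 1e-6
for x in [1.1, 1.5, 2.0]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)   # central difference
    assert abs(dydx - rhs(x, y(x))) < 1e-5
print("homogeneous solution ok")
```

The check holds on the branch where cos(ln|x| + C) is nonnegative, which is where the square root in the equation picks the positive sign.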
1.2.3. Exact Differential Equation

The differential equation

    P(x, y) dx + Q(x, y) dy = 0                        (1.1)

is exact if the vector field F(x, y) = (P(x, y), Q(x, y)) is conservative, i.e. there exists a scalar field U(x, y) such that ∇U = F:

    ∂U/∂x = P(x, y),  ∂U/∂y = Q(x, y).

In this case the differential equation (1.1) can be rewritten dU = 0, hence the parametric family of curves

    U(x, y) = C

is a general solution of the equation.

From calculus we know that if a vector field F = (P, Q) has zero curl, then F is conservative (on simply connected domains). Hence (1.1) is exact when

    ∂P/∂y = ∂Q/∂x.                                     (1.2)
Example. Integrate the exact ODE (4x^3 y^2 + x − 1) dx + (2x^4 y − y + 2) dy = 0.

    U = ∫ P dx = ∫ (4x^3 y^2 + x − 1) dx = x^4 y^2 + x^2/2 − x + φ(y)

    ∂U/∂y = Q  ⟹  2x^4 y + φ'(y) = 2x^4 y − y + 2

    φ'(y) = −y + 2  ⟹  φ(y) = ∫ (−y + 2) dy = 2y − y^2/2,

therefore the general curve solution is

    x^4 y^2 + x^2/2 − x + 2y − y^2/2 = C.
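The potential found in the example can be verified by finite differences: the reconstructed P, Q and U below should satisfy both the exactness condition (1.2) and ∇U = (P, Q):

```python
def P(x, y): return 4*x**3*y**2 + x - 1
def Q(x, y): return 2*x**4*y - y + 2
def U(x, y): return x**4*y**2 + x**2/2 - x + 2*y - y**2/2

h = 1e-6
for (x, y) in [(0.7, -1.2), (1.3, 0.4)]:
    Py = (P(x, y + h) - P(x, y - h)) / (2*h)
    Qx = (Q(x + h, y) - Q(x - h, y)) / (2*h)
    assert abs(Py - Qx) < 1e-4              # exactness condition (1.2)
    Ux = (U(x + h, y) - U(x - h, y)) / (2*h)
    Uy = (U(x, y + h) - U(x, y - h)) / (2*h)
    assert abs(Ux - P(x, y)) < 1e-4         # grad U = F
    assert abs(Uy - Q(x, y)) < 1e-4
print("exact")
```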
Example 1.2.16. Integrate 2xy dx + (x^2 + y^2) dy = 0, the equation obtained in Example 1.2.7.

    U_x = P  ⟹  U = ∫ 2xy dx = x^2 y + φ(y)

    U_y = Q  ⟹  x^2 + φ'(y) = x^2 + y^2  ⟹  φ(y) = ∫ y^2 dy = y^3/3.

The general solution is

    x^2 y + y^3/3 = C.
Integrating Factors

An integrating factor is a function μ(x, y) chosen to convert an inexact differential equation into an exact one:

    P(x, y) dx + Q(x, y) dy = 0                        (inexact ODE)
    μ(x, y) P(x, y) dx + μ(x, y) Q(x, y) dy = 0        (exact ODE)

Imposing the exactness condition (1.2) on the new equation,

    (∂μ/∂y) P + μ (∂P/∂y) = (∂μ/∂x) Q + μ (∂Q/∂x).     (1.3)

In other words, μ(x, y) is a solution of the partial differential equation (1.3), which in general is much more difficult than the original equation.

Sometimes equation (1.3) can be simplified by imposing additional conditions on μ(x, y). We will see some of these.
Integrating factor not depending on y, i.e. μ = μ(x). In this case equation (1.3) can be written μ P_y = μ' Q + μ Q_x. This implies that

    μ'/μ = (P_y − Q_x)/Q = φ(x)

must be a function of x only. Then finding the integrating factor is easy, because

    ln μ = ∫ φ(x) dx  ⟹  μ = e^{∫ φ(x) dx}.
Example 1.2.17. Integrate the ODE (4x^5 y^3 + x) dx + (3x^6 y^2 − x^2) dy = 0.

This equation is not exact, but it admits an integrating factor μ = μ(x), because

    μ'/μ = (P_y − Q_x)/Q = (12x^5 y^2 − (18x^5 y^2 − 2x))/(3x^6 y^2 − x^2)
         = (−6x^5 y^2 + 2x)/(3x^6 y^2 − x^2) = −2/x,

hence μ(x) = e^{−2 ∫ dx/x} = 1/x^2, and

    (4x^3 y^3 + 1/x) dx + (3x^4 y^2 − 1) dy = 0

is exact.
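The effect of the integrating factor μ = 1/x^2 can be confirmed numerically: after multiplying, the cross partial derivatives must agree. A quick finite-difference sketch:

```python
def P(x, y): return 4*x**5*y**3 + x
def Q(x, y): return 3*x**6*y**2 - x**2
def mu(x):  return 1 / x**2          # integrating factor found above

h = 1e-6
for (x, y) in [(1.2, 0.5), (0.8, -0.3)]:
    # d/dy of mu*P and d/dx of mu*Q must coincide (exactness)
    muP_y = (mu(x)*P(x, y + h) - mu(x)*P(x, y - h)) / (2*h)
    muQ_x = (mu(x + h)*Q(x + h, y) - mu(x - h)*Q(x - h, y)) / (2*h)
    assert abs(muP_y - muQ_x) < 1e-4
print("exact after multiplying by mu")
```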
Integrating factor not depending on x, i.e. μ = μ(y). Analogously,

    μ'/μ = (Q_x − P_y)/P = ψ(y)

must be a function of y only. Then finding the integrating factor is easy, because

    ln μ = ∫ ψ(y) dy  ⟹  μ = e^{∫ ψ(y) dy}.

Example 1.2.18. Integrate the ODE y(1 + xy) dx − x dy = 0, using an integrating factor μ = μ(y).
Other integrating factors. Assuming μ = μ(z), where z = h(x, y), equation (1.3) can be written

    μ_z h_y P + μ P_y = μ_z h_x Q + μ Q_x.

This implies that

    μ_z/μ = (P_y − Q_x)/(Q h_x − P h_y) = ψ(z)

must be a function of z only. Then finding the integrating factor is easy, because

    ln μ = ∫ ψ(z) dz  ⟹  μ = e^{∫ ψ(z) dz} = μ(h(x, y)).
8
Example 1.2.19. Integrate the ODE (4xy 3x2 y) dx + (2x y x2 )dy = 0 using an
integral factor in the form = (x2 y).
Using equation (1.3) an z = x2 y, we can write
(4xy 3x2 y)(1)z + (4x 1) = (2x y x2 )(2x)z + (2 2x)
(6x 3) = (2xy y 2x3 + x2 )z
z
6x 3
3(2x 1)
3
3
=
=
=
= .
3
2
2
2
2xy y 2x + x
y(2x 1) x (2x 1)
yx
z
Hence
= e3
1
z
dz
= z 3 =
(x2
1
y)3
1.2.4. Linear Differential Equations

A first order ordinary linear differential equation is an equation that can be expressed in the form

    a_1(x) y' + a_0(x) y = g(x),  or equivalently  y' + P(x) y = Q(x).
Using an integrating factor: Multiplying by e^{∫P(x) dx},

    y' e^{∫P(x) dx} + y P(x) e^{∫P(x) dx} = Q(x) e^{∫P(x) dx}  ⟹  d/dx (y e^{∫P(x) dx}) = Q(x) e^{∫P(x) dx},

hence

    y e^{∫P(x) dx} = ∫ Q(x) e^{∫P(x) dx} dx  ⟹  y = (∫ Q(x) e^{∫P(x) dx} dx) / e^{∫P(x) dx}.

Example. Integrate the linear equation

    y' + y/x = x.

The integrating factor is e^{∫ dx/x} = e^{ln x} = x, then

    xy' + y = x^2  ⟹  d/dx (xy) = x^2  ⟹  xy = ∫ x^2 dx = x^3/3 + C,

hence

    y = x^2/3 + C/x.
Using a particular solution: We use the general solution of the associated homogeneous linear equation y' + P(x) y = 0, found by separation of variables:

    y_h = C e^{−∫P(x) dx}.

To find a particular solution y_p we use Lagrange's method of variation of constants. It consists in replacing the constant C of the homogeneous solution by a function C(x), i.e. y_p = C(x) e^{−∫P(x) dx}. Substituting in the equation,

    C'(x) e^{−∫P(x) dx} = Q(x)  ⟹  C'(x) = Q(x) e^{∫P(x) dx}.

Example. For y' + y/x = x we have y_h = C/x and y_p = C(x)/x, hence

    (x C'(x) − C(x))/x^2 + C(x)/x^2 = x  ⟹  C'(x)/x = x  ⟹  C'(x) = x^2  ⟹  C(x) = x^3/3,

and therefore

    y = x^2/3 + C/x.
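The closed-form solution of y' + y/x = x can also be compared against a direct numerical integration. The sketch below runs a classical fourth-order Runge-Kutta scheme from y(1) = 1 (an initial condition chosen here for illustration, which forces C = 2/3) up to x = 2:

```python
def f(x, y):
    # right-hand side of y' = x - y/x (the linear equation in normal form)
    return x - y / x

h, y = 0.001, 1.0              # step size and initial condition y(1) = 1
for i in range(1000):          # integrate from x = 1 to x = 2 with RK4
    x = 1.0 + i * h
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6

# y(1) = 1 gives C = 2/3 in the general solution y = x^2/3 + C/x
exact = 2.0**2/3 + (2.0/3)/2.0
assert abs(y - exact) < 1e-9
print("RK4 matches the closed-form solution")
```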
Two important equations reported in the literature can be solved by converting them into linear equations.

Equation of Bernoulli

The Bernoulli ODE has the form

    y' + y P(x) = y^n Q(x)  with n ≠ 0, n ≠ 1,

and may be solved using the change of variable

    z = 1/y^{n−1}  ⟹  z' = (1 − n) y'/y^n.

Dividing the equation by y^n,

    y'/y^n + P(x)/y^{n−1} = Q(x)  ⟹  z'/(1 − n) + z P(x) = Q(x),

which is a linear equation.
Equation of Riccati

The Riccati ODE has the form

    y' + y P(x) + y^2 R(x) = Q(x).

This equation can be reduced to a Bernoulli equation if we know a particular solution y_p, doing the change of variable y = z + y_p. Indeed,

    z' + y_p' + z P(x) + y_p P(x) + z^2 R(x) + 2z y_p R(x) + y_p^2 R(x) = Q(x),

and, since y_p is a solution,

    z' + z P(x) + z^2 R(x) + 2z y_p R(x) = 0  ⟹  z' + z (P(x) + 2 y_p R(x)) = −z^2 R(x),

which is a Bernoulli equation with n = 2.
Example 1.2.23. Integrate the Riccati equation

    y' + y^2 + y/x = 1/x^2.

The function y_p = 1/x is a particular solution (verify this). We change the variable y = z + 1/x; then

    z' − 1/x^2 + z^2 + 2z/x + 1/x^2 + z/x + 1/x^2 = 1/x^2  ⟹  z' + (3/x) z = −z^2,

a Bernoulli equation. With u = 1/z, u' = −z'/z^2, dividing by −z^2:

    −z'/z^2 − 3/(xz) = 1  ⟹  u' − (3/x) u = 1    (linear ODE).

We have u_h = C x^3 and u_p = C(x) x^3, with

    C'(x) x^3 + 3x^2 C(x) − 3 C(x) x^2 = 1  ⟹  C'(x) = x^{−3}  ⟹  C(x) = −1/(2x^2).

Hence u = C x^3 − x/2 = (2C x^3 − x)/2, so z = 1/u = 2/(2C x^3 − x). Therefore the general solution is

    y = z + 1/x = 2/(2C x^3 − x) + 1/x = (2C x^2 + 1)/(2C x^3 − x).

1.3. Integrating ODEs of higher order
The order of an ordinary differential equation F(x, y, y', ..., y^{(n)}) = 0 is the order n of the highest derivative appearing in the equation.

The Cauchy problem of order n consists in finding a solution of

    y^{(n)} = f(x, y, y', y'', ..., y^{(n−1)})
    y(x_0) = y_0
    y'(x_0) = y'_0
    ...
    y^{(n−1)}(x_0) = y_0^{(n−1)}

Similarly to the first order case, it is possible to prove that the Cauchy problem has a unique solution when f and all the partial derivatives ∂f/∂y, ∂f/∂y', ..., ∂f/∂y^{(n−1)} are continuous.
Equations in the form F(x, y', y'', ..., y^{(n)}) = 0. The unknown y itself does not appear, so the substitution p = y' lowers the order by one. For example, if the reduced equation yields

    p = y' = x^2 + c_1/x,  then  y = x^3/3 + c_1 ln x + c_2.
Equations in the form F(y, y', y'', ..., y^{(n)}) = 0. The independent variable does not appear. We consider the function p(y) = y'; then we have

    y' = p
    y'' = dp/dx = (dp/dy)(dy/dx) = p dp/dy
    y''' = dy''/dx = (dy''/dy) p = p d/dy (p dp/dy) = p ((dp/dy)^2 + p d^2p/dy^2)
    ...
Example 1.3.2. The second order ODE y'' + (y')^2 = 2y' e^{−y} changes to

    p dp/dy + p^2 = 2p e^{−y}  ⟹  dp/dy + p = 2e^{−y},

with solution

    p = y' = (2y + c_1) e^{−y}  ⟹  ∫ e^y/(2y + c_1) dy = x + c_2.
Equations F(x, y, y', y'', ...) = 0 homogeneous in y and its derivatives. The change y = e^{∫ z dx} gives

    F(x, e^{∫z dx}, z e^{∫z dx}, (z^2 + z') e^{∫z dx}, ...) = e^{k ∫z dx} F(x, 1, z, z^2 + z', ...) = 0.

Example. The equation y y'' + (y')^2 = 0 is homogeneous of degree 2 in y, y', y''. The change above gives

    (z^2 + z') + z^2 = 0  ⟹  z' = −2z^2  ⟹  1/z = 2x + c_1.

Therefore

    y = exp(∫ dx/(2x + c_1)) = exp((1/2) ln(2x + c_1) + k) = c_2 √(2x + c_1).

Note.- Observe that this equation could be solved differently:

    y y'' + (y')^2 = 0  ⟹  d/dx (y y') = 0  ⟹  y y' = c_1  ⟹  y^2/2 = c_1 x + c_2,

so y^2 = 2c_1 x + 2c_2 (equivalent to the above solution).
1.3.1. Linear ODEs
1.3.2. Second order Linear ODEs

They can be written in the form y'' + a(x) y' + b(x) y = p(x). By Proposition 1.3.6, if we know a particular solution, we only need to solve the associated homogeneous equation

    y'' + a(x) y' + b(x) y = 0.

Two nonzero solutions y_1, y_2 of the homogeneous equation are proportional exactly when

    y_2'/y_2 = y_1'/y_1  ⟹  y_2 = k y_1.
Theorem 1.3.9. Let r, s be the roots of the characteristic polynomial of the homogeneous linear equation y'' + ay' + by = 0. Then:

1. If r ≠ s, then {e^{rx}, e^{sx}} is a fundamental system of solutions.

2. If r is the unique (double) root, then {e^{rx}, x e^{rx}} is a fundamental system of solutions.

Proof. In the first case, the function e^{rx} is a solution (similarly e^{sx}), because

    (D − r)(D − s) e^{rx} = (D − r)(r e^{rx} − s e^{rx}) = r^2 e^{rx} − r^2 e^{rx} − rs e^{rx} + rs e^{rx} = 0.

Moreover {e^{rx}, e^{sx}} is a fundamental system because

    W(e^{rx}, e^{sx}) = det[ e^{rx}, e^{sx} ; r e^{rx}, s e^{sx} ] = (s − r) e^{(r+s)x} ≠ 0.

In the second case, x e^{rx} is a solution:

    (D − r)^2 (x e^{rx}) = (D − r)(x r e^{rx} + e^{rx} − r x e^{rx}) = (D − r)(e^{rx}) = r e^{rx} − r e^{rx} = 0,

and

    W(e^{rx}, x e^{rx}) = det[ e^{rx}, x e^{rx} ; r e^{rx}, r x e^{rx} + e^{rx} ] = e^{2rx} det[ 1, x ; r, rx + 1 ] = e^{2rx} ≠ 0,

so the general solution is y = c_1 e^{rx} + c_2 x e^{rx}, or y = (c_1 + c_2 x) e^{rx}.
Example 1.3.10 (Simple Harmonic Motion). It is typified by the motion of a mass m attached to a spring when it is subject to a linear elastic restoring force F given by Hooke's Law F = −kx, where k is a constant depending on the spring and x is the displacement from the idle state at time t. The equation is

    m x''(t) = −k x(t),

which is a second order homogeneous linear ODE. The characteristic polynomial is

    λ^2 + k/m = (λ − i √(k/m))(λ + i √(k/m)) = (λ − ωi)(λ + ωi),

where ω = √(k/m). Therefore the solution is

    x(t) = c_1 cos ωt + c_2 sin ωt.
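The harmonic motion formula can be checked with a second-order finite difference: m x'' + k x should vanish, and the motion should repeat with period 2π/ω. A small sketch (with illustrative values of m and k):

```python
import math

m, k = 2.0, 8.0
w = math.sqrt(k / m)          # angular frequency, here w = 2
c1, c2 = 1.0, -0.5

def x(t):
    return c1*math.cos(w*t) + c2*math.sin(w*t)

h = 1e-5
for t in [0.0, 0.7, 1.9]:
    xpp = (x(t + h) - 2*x(t) + x(t - h)) / h**2   # second difference
    assert abs(m*xpp + k*x(t)) < 1e-3             # m x'' = -k x

T = 2*math.pi/w                                   # period of the motion
assert abs(x(1.23 + T) - x(1.23)) < 1e-9
print("SHM ok")
```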
Example. Find the general solution of y'' + y' = x. The characteristic roots are 0 and −1, and a particular solution can be sought of the form y_p = Ax^2 + Bx:

    (Ax^2 + Bx)'' + (Ax^2 + Bx)' = x  ⟹  2A + 2Ax + B = x  ⟹  A = 1/2, B = −1.

Therefore

    y = x^2/2 − x + c_1 e^{−x} + c_2.
Table 1.1: Trial particular solutions for the method of undetermined coefficients.

    Right-hand side p(x)               Characteristic roots                     Trial solution y_p
    -------------------------------    -------------------------------------    ----------------------------------------
    P_m(x) (polynomial)                λ = 0 is not a root                      Q_m(x)
                                       λ = 0 is a root with multiplicity s      x^s Q_m(x)
    P_m(x) e^{rx}                      λ = r is not a root                      e^{rx} Q_m(x)
                                       λ = r is a root with multiplicity s      x^s e^{rx} Q_m(x)
    P_k(x) cos βx + Q_k(x) sin βx      λ = ±βi is not a root                    A_k(x) cos βx + B_k(x) sin βx
                                       λ = ±βi is a root                        x^s (A_k(x) cos βx + B_k(x) sin βx)
    e^{αx} (P_k(x) cos βx + ...)       λ = α ± βi is not a root                 e^{αx} (A_k(x) cos βx + B_k(x) sin βx)
                                       λ = α ± βi is a root                     x^s e^{αx} (A_k(x) cos βx + B_k(x) sin βx)
Example. Solve 2y'' − y' − y = x^2 e^x. The characteristic polynomial 2λ^2 − λ − 1 = (λ − 1)(2λ + 1) has roots 1 and −1/2, so

    y_h = c_1 e^x + c_2 e^{−x/2}.

Since r = 1 is a (simple) root, according to Table 1.1 we look for a particular solution y_p = x e^x (Ax^2 + Bx + C):

    2(x e^x (Ax^2 + Bx + C))'' − (x e^x (Ax^2 + Bx + C))' − x e^x (Ax^2 + Bx + C) = x^2 e^x

    e^x (9Ax^2 + (12A + 6B)x + (4B + 3C)) = x^2 e^x

    9A = 1, 12A + 6B = 0, 4B + 3C = 0  ⟹  A = 1/9, B = −2/9, C = 8/27.

Hence

    y_p = (x^3/9 − 2x^2/9 + 8x/27) e^x

and

    y = y_p + y_h = (x^3/9 − 2x^2/9 + 8x/27) e^x + c_1 e^x + c_2 e^{−x/2}.
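Assuming the equation here is 2y'' − y' − y = x^2 e^x (as the coefficient computation suggests), the particular solution can be checked with finite differences:

```python
import math

def yp(x):
    # candidate particular solution (x^3/9 - 2x^2/9 + 8x/27) e^x
    return math.exp(x) * (x**3/9 - 2*x**2/9 + 8*x/27)

h = 1e-5
for x in [0.5, 1.0, -0.8]:
    d1 = (yp(x + h) - yp(x - h)) / (2*h)
    d2 = (yp(x + h) - 2*yp(x) + yp(x - h)) / h**2
    residual = 2*d2 - d1 - yp(x) - x**2*math.exp(x)   # 2y'' - y' - y - x^2 e^x
    assert abs(residual) < 1e-3
print("particular solution ok")
```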
Variation of constants. For y'' + a y' + b y = p(x) we seek y = C_1(x) y_1 + C_2(x) y_2, where {y_1, y_2} is a fundamental system of the homogeneous equation, imposing the condition

    C_1' y_1 + C_2' y_2 = 0.                           (1.4)

Substituting in the equation,

    C_1 (y_1'' + a y_1' + b y_1) + C_2 (y_2'' + a y_2' + b y_2) + C_1' y_1' + C_2' y_2' = p(x),

and since both parentheses vanish,

    C_1' y_1' + C_2' y_2' = p(x).                      (1.5)
Example. Solve 2y'' − y' − y = x^2 e^x by variation of constants. In normal form the equation is y'' − y'/2 − y/2 = x^2 e^x / 2, with fundamental system y_1 = e^x, y_2 = e^{−x/2} and Wronskian

    W = det[ e^x, e^{−x/2} ; e^x, −(1/2) e^{−x/2} ] = −(3/2) e^{x/2}.

Cramer's method for solving the linear system (1.4)-(1.5) gives

    C_1' = det[ 0, e^{−x/2} ; x^2 e^x/2, −(1/2) e^{−x/2} ] / W = (−x^2 e^{x/2}/2) / (−(3/2) e^{x/2}) = x^2/3

    C_1 = ∫ x^2/3 dx = x^3/9,

    C_2' = det[ e^x, 0 ; e^x, x^2 e^x/2 ] / W = (x^2 e^{2x}/2) / (−(3/2) e^{x/2}) = −(x^2/3) e^{3x/2}

    C_2 = −∫ (x^2/3) e^{3x/2} dx = −(18x^2 − 24x + 16) e^{3x/2} / 81.

Therefore the general solution of the non homogeneous equation is

    y = C_1 y_1 + C_2 y_2 + c_1 e^x + c_2 e^{−x/2}
      = (x^3/9 − 2x^2/9 + 8x/27 − 16/81) e^x + c_1 e^x + c_2 e^{−x/2}.
1.3.3. Linear ODEs of order n

Similarly to the second order equations, they can be written in the form

    y^{(n)} + a_1(x) y^{(n−1)} + ... + a_n(x) y = p(x),

with all the a_i(x) and p(x) functions of x. The associated homogeneous equation is

    y^{(n)} + a_1(x) y^{(n−1)} + ... + a_n(x) y = 0.

A fundamental system is a set of n nonzero solutions y_1, y_2, ..., y_n with nonvanishing Wronskian:

    W(y_1, y_2, ..., y_n) = det[ y_1, y_2, ..., y_n ; y_1', y_2', ..., y_n' ; ... ; y_1^{(n−1)}, y_2^{(n−1)}, ..., y_n^{(n−1)} ] ≠ 0.
1.4. Systems of Linear Differential Equations

A system of k differential equations in n unknown functions x_1(t), ..., x_n(t) has the general form

    F_1(t, x_1, ..., x_n, x_1', ..., x_n', x_1'', ..., x_n'', ..., x_1^{(r)}, ..., x_n^{(r)}) = 0
    F_2(t, x_1, ..., x_n, x_1', ..., x_n', x_1'', ..., x_n'', ..., x_1^{(r)}, ..., x_n^{(r)}) = 0
    ...
    F_k(t, x_1, ..., x_n, x_1', ..., x_n', x_1'', ..., x_n'', ..., x_1^{(r)}, ..., x_n^{(r)}) = 0
The next problem is solved by a system of differential equations: two masses m_1 and m_2, coupled by springs with constants k_1, k_3, k_2 and with displacements x_1 and x_2 measured from equilibrium, obey

    m_1 d^2 x_1/dt^2 = −k_1 x_1 + k_3 (x_2 − x_1)
    m_2 d^2 x_2/dt^2 = −k_2 x_2 + k_3 (x_1 − x_2)
1.4.1. First Order Systems

A first order system in normal form is

    dx_1/dt = f_1(t, x_1, x_2, ..., x_n)
    dx_2/dt = f_2(t, x_1, x_2, ..., x_n)
    ...
    dx_n/dt = f_n(t, x_1, x_2, ..., x_n)

and it has a unique solution through each initial condition when the f_i and the partial derivatives ∂f_i/∂x_j are continuous. Such a system is sometimes written in the symmetric form

    dx_1/P_1(t, x_1, ..., x_n) = dx_2/P_2(t, x_1, ..., x_n) = ... = dx_n/P_n(t, x_1, ..., x_n) = dt/Q(t, x_1, ..., x_n).
First Order Linear Systems

A first order system is linear if it can be expressed as

    dx_1/dt = a_11(t) x_1 + a_12(t) x_2 + ... + a_1n(t) x_n + b_1(t)
    dx_2/dt = a_21(t) x_1 + a_22(t) x_2 + ... + a_2n(t) x_n + b_2(t)
    ...
    dx_n/dt = a_n1(t) x_1 + a_n2(t) x_2 + ... + a_nn(t) x_n + b_n(t)      (1.6)

Every first order linear system in n variables is equivalent to a linear ODE of order n. To see this, consider the ODE

    y^{(n)} + a_1(x) y^{(n−1)} + ... + a_{n−1}(x) y' + a_n(x) y = p(x);

the changes of variable y_1 = y, y_2 = y', ..., y_n = y^{(n−1)} produce the system

    y_1' = y_2
    y_2' = y_3
    ...
    y_n' = p(x) − a_n(x) y_1 − ... − a_2(x) y_{n−1} − a_1(x) y_n

The converse is obtained by successive differentiation and substitution.
Example 1.4.3. Express as a single ODE the system

    x_1' = t x_1 − x_2 + t^2
    x_2' = (1 − t) x_1 + t^2 x_2,

that is, in matrix form,

    (x_1 ; x_2)' = [ t, −1 ; 1 − t, t^2 ] (x_1 ; x_2) + (t^2 ; 0).      (1.7)

Differentiate the first equation and replace x_2' from the second equation to obtain

    x_1'' = x_1 + t x_1' − x_2' + 2t  ⟹  x_1'' = t x_1' + t x_1 − t^2 x_2 + 2t.

Finally, use the first equation again to eliminate x_2 (x_2 = t x_1 − x_1' + t^2), and you obtain the desired second order linear ODE

    x_1'' − (t^2 + t) x_1' + (t^3 − t) x_1 = −t^4 + 2t,

equivalent to the system (1.7).
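The equivalence can be tested numerically: integrate the first order system, then check that the resulting x_1(t) satisfies the scalar second order equation. The following sketch uses RK4 and finite differences (initial values chosen arbitrarily for illustration):

```python
def f(t, x1, x2):
    # the first order system of Example 1.4.3
    return (t*x1 - x2 + t**2, (1 - t)*x1 + t**2*x2)

h = 1e-3
ts, x1s = [0.0], [1.0]
t, x1, x2 = 0.0, 1.0, 0.0          # illustrative initial condition
for i in range(500):               # RK4 from t = 0 to t = 0.5
    k1 = f(t, x1, x2)
    k2 = f(t + h/2, x1 + h*k1[0]/2, x2 + h*k1[1]/2)
    k3 = f(t + h/2, x1 + h*k2[0]/2, x2 + h*k2[1]/2)
    k4 = f(t + h, x1 + h*k3[0], x2 + h*k3[1])
    x1 += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
    x2 += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    t = (i + 1)*h
    ts.append(t)
    x1s.append(x1)

# check x1'' - (t^2+t) x1' + (t^3-t) x1 = -t^4 + 2t at an interior node
i = 250
t = ts[i]
d1 = (x1s[i+1] - x1s[i-1]) / (2*h)
d2 = (x1s[i+1] - 2*x1s[i] + x1s[i-1]) / h**2
assert abs(d2 - (t**2 + t)*d1 + (t**3 - t)*x1s[i] - (-t**4 + 2*t)) < 1e-3
print("equivalent")
```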
First Order Linear Systems with constant coefficients

A first order linear system has constant coefficients when all the a_ij are constants:

    dx_1/dt = a_11 x_1 + ... + a_1n x_n + b_1(t)
    ...
    dx_n/dt = a_n1 x_1 + ... + a_nn x_n + b_n(t),

in short

    x' = A x + b(t),                                   (1.8)

where A = (a_ij) is a constant matrix. For solving it, we generalize the known method for first order equations. First, we solve the associated homogeneous system x' = Ax. It can be shown that the general solution is

    x_h = e^{At} c,

where e^{At} = Σ_{k=0}^∞ (At)^k / k! and c = (c_1, c_2, ..., c_n)^T is a column matrix of constants.

In a second step, by any method, we compute a particular solution x_p of the system (1.8), and then the general solution we are seeking is

    x = x_h + x_p.
How to compute the exponential matrix e^{At} for n = 2?

If the matrix A is diagonalizable, A = P D P^{−1}, where D = diag(λ_1, λ_2) is the diagonal matrix of (real or complex) eigenvalues and P = (v w) is the change of basis matrix (v, w eigenvectors, of course). Hence, renaming the constants,

    x_h = e^{At} c = P e^{Dt} P^{−1} c = P e^{Dt} c = c_1 e^{λ_1 t} v + c_2 e^{λ_2 t} w.

If the matrix A is not diagonalizable, there is a unique (real) eigenvalue λ and a unique independent eigenvector v. It is possible to prove that there exist a matrix J = [ λ, 1 ; 0, λ ] and an invertible matrix P = (v w), w being a vector verifying Aw = v + λw, such that A = P J P^{−1}. The matrix J can be expressed as

    J = [ λ, 0 ; 0, λ ] + [ 0, 1 ; 0, 0 ] = D + N.

Therefore

    x_h = e^{At} c = P e^{Jt} P^{−1} c = P e^{Dt} e^{Nt} c.

But

    e^{Dt} = [ e^{λt}, 0 ; 0, e^{λt} ]  and  e^{Nt} = Σ_{k=0}^∞ (Nt)^k / k! = [ 1, t ; 0, 1 ],

because (Nt)^2 = 0, so

    x_h = P e^{Dt} (c_1 + c_2 t ; c_2) = (c_1 + c_2 t) e^{λt} v + c_2 e^{λt} w.
Example 1.4.4. Solve the system

    dx/dt = 4x − 2y + 1
    dy/dt = 3x − y + t

with initial conditions x(0) = 1, y(0) = 0.

The matrix of the system is A = [ 4, −2 ; 3, −1 ], with characteristic polynomial

    det(A − λI) = λ^2 − 3λ + 2 = (λ − 2)(λ − 1).

The eigenvectors are:

    For λ = 2:  [ 2, −2 ; 3, −3 ] (v_1 ; v_2) = 0  ⟹  v = (1, 1).
    For λ = 1:  [ 3, −2 ; 3, −2 ] (w_1 ; w_2) = 0  ⟹  w = (2, 3).

The general solution of the associated homogeneous system is

    x_h(t) = e^{At} c = c_1 e^{2t} (1 ; 1) + c_2 e^t (2 ; 3),

therefore

    x_1h(t) = c_1 e^{2t} + 2 c_2 e^t
    x_2h(t) = c_1 e^{2t} + 3 c_2 e^t

Second step. Find a particular solution using the method of undetermined coefficients. Suppose

    x_p = (x_1p ; x_2p) = (a t + b ; c t + d)

is a solution; then

    a = (4a − 2c) t + 4b − 2d + 1
    c = (3a − c + 1) t + 3b − d

which gives a = −1, b = −1, c = −2, d = −1.

Third step. The general solution of the system is

    x_1(t) = c_1 e^{2t} + 2 c_2 e^t − t − 1
    x_2(t) = c_1 e^{2t} + 3 c_2 e^t − 2t − 1.

Replacing the initial conditions (t = 0) produces the linear algebraic system

    c_1 + 2c_2 − 1 = 1
    c_1 + 3c_2 − 1 = 0  ⟹  c_1 = 4, c_2 = −1,

and therefore the solution is

    x_1(t) = 4 e^{2t} − 2 e^t − t − 1
    x_2(t) = 4 e^{2t} − 3 e^t − 2t − 1
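The final solution of the non-homogeneous system can be verified directly: it must satisfy both equations and the initial conditions. A finite-difference sketch:

```python
import math

def x1(t): return 4*math.exp(2*t) - 2*math.exp(t) - t - 1
def x2(t): return 4*math.exp(2*t) - 3*math.exp(t) - 2*t - 1

assert x1(0) == 1.0 and x2(0) == 0.0     # initial conditions
h = 1e-6
for t in [0.0, 0.5, 1.0]:
    d1 = (x1(t + h) - x1(t - h)) / (2*h)
    d2 = (x2(t + h) - x2(t - h)) / (2*h)
    assert abs(d1 - (4*x1(t) - 2*x2(t) + 1)) < 1e-3   # x' = 4x - 2y + 1
    assert abs(d2 - (3*x1(t) - x2(t) + t)) < 1e-3     # y' = 3x - y + t
print("system ok")
```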
Example 1.4.5 (Homogeneous, two different complex eigenvalues). Solve the differential equations system

    x' − 2x + y = 0
    y' − x − 2y = 0

and solve the Cauchy problem with x(0) = y(0) = 2.

This differential equations system is homogeneous and its matrix is [ 2, −1 ; 1, 2 ], with eigenvalues:

    λ_1 = 2 − i, eigenvector v = (1, i).
    λ_2 = 2 + i, eigenvector w = (1, −i).

The general solution is

    x(t) = c_1 e^{(2−i)t} + c_2 e^{(2+i)t} = e^{2t} [(c_1 + c_2) cos t + i (c_2 − c_1) sin t]
    y(t) = i c_1 e^{(2−i)t} − i c_2 e^{(2+i)t} = e^{2t} [(c_1 + c_2) sin t + i (c_1 − c_2) cos t].

To solve the Cauchy problem, replace the initial conditions; then

    c_1 + c_2 = 2,  i c_1 − i c_2 = 2  ⟹  c_1 = 1 − i,  c_2 = 1 + i,

and hence

    x(t) = 2 e^{2t} (cos t − sin t)
    y(t) = 2 e^{2t} (cos t + sin t)
Example 1.4.6 (Homogeneous, non diagonalizable matrix). Find the general solution of the differential equations system

    dx/dt = x − y
    dy/dt = x + 3y

The matrix [ 1, −1 ; 1, 3 ] has characteristic polynomial λ^2 − 4λ + 4 = (λ − 2)^2. The unique eigenvalue is λ = 2 and the unique independent eigenvector is v = (1, −1). To compute the second vector w, solve the algebraic system

    (A − 2I) w = v  ⟹  [ −1, −1 ; 1, 1 ] (w_1 ; w_2) = (1 ; −1)  ⟹  −w_1 − w_2 = 1.

For simplicity, we choose w = (−1, 0); then

    x_h = (c_1 + c_2 t) e^{2t} (1 ; −1) + c_2 e^{2t} (−1 ; 0).

Hence

    x = (c_1 + c_2 t) e^{2t} − c_2 e^{2t}
    y = −(c_1 + c_2 t) e^{2t}
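The formulas for the defective-matrix case can be checked against the system with finite differences (the signs of the eigenvector and of w are reconstructed above, so this is a useful sanity check):

```python
import math

c1, c2 = 0.7, -1.3           # arbitrary constants

def x(t): return (c1 + c2*t)*math.exp(2*t) - c2*math.exp(2*t)
def y(t): return -(c1 + c2*t)*math.exp(2*t)

h = 1e-6
for t in [0.0, 0.4, 1.1]:
    dx = (x(t + h) - x(t - h)) / (2*h)
    dy = (y(t + h) - y(t - h)) / (2*h)
    assert abs(dx - (x(t) - y(t))) < 1e-4     # dx/dt = x - y
    assert abs(dy - (x(t) + 3*y(t))) < 1e-4   # dy/dt = x + 3y
print("defective-matrix case ok")
```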
Exercises

Exercise 1.1
Find the general solution of the following differential equations:

1. (y')^2 = x + y + 8.

2. (y + xy^2) dx + (x − x^2 y) dy = 0, using μ = μ(xy).

3. (3x + y − 2) + y'(x − 1) = 0.

4. y' = x^2 y^8 − y/x.

5. y' + y cot x = 5 e^{cos x}.
Exercise 1.2
Integrate the ODE (3xy^2 − 4y) + (3x − 4x^2 y) y' = 0, using an integrating factor μ(x, y) = x^m y^n.
Exercise 1.3
Find the curve passing through the origin of coordinates O such that the area enclosed between the curve and the chord OA, for any point A of the curve, is proportional to the square of the abscissa of A.
Exercise 1.4
1. Find the functions such that the tangent line at each point (x, y) of their graph cuts the OY axis at the point (0, y/2).

2. Find the family of curves orthogonal to the solutions of the previous item.
Exercise 1.5
The midpoints of the segments of tangent lines to a curve, between the point of tangency and the OX axis, describe the parabola y^2 = x. Find the curve, knowing that it passes through the point (1, 2).
Exercise 1.6
The amount of radioactive material x(t) varies with time following the Cauchy problem

    x'(t) = −k x(t),  x(0) = x_0,

where k > 0 is a constant depending on the material and x_0 is the initial amount of matter. The half-life T is the time it takes for the amount of radioactive material to reduce to one half. Prove that T = (1/k) ln 2, which shows that the half-life does not depend on the amount of radioactive material.
Exercise 1.7
A body in damped fall through a fluid follows the differential equation

    m y''(t) = −m g − k y'(t),

where y is the height as a function of time, m is the mass of the body, g is the (constant) gravity and k > 0 is a resistance constant depending on the fluid. Find the function y assuming the body starts from rest (y'(0) = 0) at height y(0) = h. What can we say about the falling velocity y' as time t tends to infinity?
Exercise 1.8
Analogously to the previous exercise, we can consider damped simple harmonic motion: the situation of Example 1.3.10 but with a damping force proportional to the velocity, with friction constant R ≥ 0:

    m x''(t) = −k x(t) − R x'(t),

where m is the mass and k is the spring constant (used in Hooke's law). Find the general solution x and check that there are two types of solutions depending on the value of the friction constant R, each essentially following one of the models in the corresponding figures. Find the relations between the constants m, k, R for the solution to correspond to each type.
Exercise 1.9
Reduce the order and solve, if possible, the following differential equations:

1. y'' = (y')^2 − y (y')^3.

2. y''' = (y'')^2.

3. x^2 y'' = (y')^2 − 2x y' + 2x^2.

4. y^{(5)} − (1/x) y^{(4)} = 0.

5. y'' − y' tan x = (1/2) sin 2x.

6. y y'' − (y')^2 = 6x y^2.
Exercise 1.10
Find the differential equation whose set of solutions is:

1. y = C e^x + D e^{2x}.

2. y e^{Cx} = 1.

3. y = A x^2 + B x + C + D sin x + E cos x.
Exercise 1.11
Let {e^x, cos x, sin x} be a fundamental system of solutions of a homogeneous linear differential equation. Find the particular solution satisfying the initial conditions y(0) = 3, y'(0) = 4, y''(0) = 1.
Exercise 1.12
The roots of the characteristic polynomial of a higher order ODE are:

    λ_1 = 0 (simple), λ_2 = 2 (triple), λ_3 = 1 + i, λ_4 = 1 − i (double).

Determine the general solution of the differential equation.
Exercise 1.13
Find the solution of:

1. y'' − 3y' + 2y = (x^2 + x) e^{2x}.

2. y'' + 3y' + 2y = 1/(1 + e^x).

3. y''' − 2y'' + y' − 2y = 0.

4. y^{(4)} − 3y'' + 5y' = 0, with y(0) = y'(0) = 1.

5. y'' − 2y' + y = ln x.

6. y'' + 5y' + 6y = 3 e^{−2x}.

7. y''' + y' = tan x.
Exercise 1.14
Find the general solution of the ODE x^2 y'' − x y' − 3y = 5x^4, using the change of variable x = e^t.
(Note.- This change reduces the equation to constant coefficients.)
Exercise 1.15
Reduce the following second order equation to a first order linear equation

    x y'' − x y' − y = x e^x

with y = x^n e^x, following these steps:

1. Find a value of n such that the function u = x^n e^x is a solution of the associated homogeneous equation.

2. For that value of n, make the change y = u ∫ z dx, where z is an unknown function, and expand y' and y''.

3. Substitute the values of y'', y', y in the original equation and check that the result is another equation in z of reduced order.
Exercise 1.16
Solve the Cauchy problem

    y''' = 3 y y',  with y(0) = 1, y'(0) = 1, y''(0) = 3/2.
Exercise 1.17
Find the general solution of the ODE

    (cos x − sin x) y'' + 2 y' sin x − (sin x + cos x) y = e^x (cos x − sin x)^2,

knowing that the functions y_1 = sin x, y_2 = e^x are solutions of the associated homogeneous equation.
Exercise 1.18
Solve the following systems of differential equations:

1. x' + y = sin 2t,  y' − x = cos 2t.

3. x' = 3x + 5y,  y' = 2x − 8y.

4. dx/dt = y,  dy/dt = y^2/x.

5. dx/dt = y^2/x,  dy/dt = x^2/y.

6. dx/(x(y − z)) = dy/(y(z − x)) = dz/(z(x − y)).

7. dx/dt = y,  dy/dt = x,  dz/dt = z,  with x(0) = y(0) = z(0) = 1.
Chapter 2
A function f is said to be periodic with period T > 0 if f(x + nT) = f(x) for every integer n.
A trigonometric series for f has the form

    f(x) = a_0/2 + Σ_{k=1}^∞ (a_k cos(kx) + b_k sin(kx)).          (2.1)

We start by assuming that the trigonometric series converges and has a continuous function as its sum on the interval [0, 2π]. If we integrate both sides of Equation (2.1) and assume that it is permissible to integrate the series term by term, we get

    ∫_0^{2π} f(x) dx = ∫_0^{2π} a_0/2 dx + Σ_{k=1}^∞ a_k ∫_0^{2π} cos(kx) dx + Σ_{k=1}^∞ b_k ∫_0^{2π} sin(kx) dx,

but ∫_0^{2π} cos(kx) dx = ∫_0^{2π} sin(kx) dx = 0 because k is an integer. So

    a_0 = (1/π) ∫_0^{2π} f(x) dx.
Similarly, multiplying (2.1) by cos(nx), n ≥ 1, and integrating term by term,

    ∫_0^{2π} f(x) cos(nx) dx
      = (a_0/2) ∫_0^{2π} cos(nx) dx + Σ_{k=1}^∞ a_k ∫_0^{2π} cos(kx) cos(nx) dx + Σ_{k=1}^∞ b_k ∫_0^{2π} sin(kx) cos(nx) dx.

The first and last integrals vanish, and by orthogonality ∫_0^{2π} cos(kx) cos(nx) dx = 0 for k ≠ n, so only the term k = n survives:

    ∫_0^{2π} f(x) cos(nx) dx = a_n ∫_0^{2π} cos^2(nx) dx = a_n π.

Hence

    a_n = (1/π) ∫_0^{2π} f(x) cos(nx) dx

and, similarly,

    b_n = (1/π) ∫_0^{2π} f(x) sin(nx) dx.
These formulas make sense when f is piecewise continuous, that is, continuous on each subinterval x_{n−1} < x < x_n of a partition of the period interval.
Example 2.1.3 (square wave function). Find the Fourier coefficients and Fourier series of the function defined by

    f(x) = 0 if −π ≤ x < 0,  f(x) = 1 if 0 ≤ x < π,  and f(x + 2π) = f(x).

This is piecewise continuous and periodic with period 2π. Its coefficients are

    a_0 = (1/π) ∫_0^π dx = 1,

and for n ≥ 1,

    a_n = (1/π) ∫_0^{2π} f(x) cos(nx) dx = (1/π) ∫_0^π cos(nx) dx = (1/π) [sin(nx)/n]_0^π = 0

    b_n = (1/π) ∫_0^{2π} f(x) sin(nx) dx = (1/π) ∫_0^π sin(nx) dx = (1/π) [−cos(nx)/n]_0^π
        = 0 if n is even,  2/(nπ) if n is odd.

Therefore the Fourier series is

    1/2 + Σ_{k=1}^∞ (2/((2k − 1)π)) sin((2k − 1)x).

Figure: partial sums of the series for (a) k = 1, (b) k = 2, (c) k = 3 and (d) k = 6.
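The behaviour of the partial sums can be observed numerically: away from the jump they approach 0 or 1, while at the jump x = 0 every partial sum equals the average 1/2:

```python
import math

def S(x, N):
    # N-term partial sum of the square-wave Fourier series
    return 0.5 + sum(2/((2*k - 1)*math.pi) * math.sin((2*k - 1)*x)
                     for k in range(1, N + 1))

assert abs(S(1.0, 20000) - 1.0) < 1e-3    # converges to 1 on (0, pi)
assert abs(S(-1.0, 20000) - 0.0) < 1e-3   # converges to 0 on (-pi, 0)
assert S(0.0, 50) == 0.5                  # average value at the jump
print("square wave ok")
```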
Theorem 2.1.4 (Dirichlet). If f is a periodic function with period 2π and f and f' are piecewise continuous on [0, 2π], then the Fourier series is convergent. The sum of the Fourier series is equal to f(x) at all numbers x where f is continuous. At the numbers x where f is not continuous, the sum of the Fourier series is the average of the right and left limits, that is,

    (f(x^+) + f(x^−))/2.

Proof. Beyond the scope of this course.

We use the notation

    f(x) ∼ a_0/2 + Σ_{k=1}^∞ (a_k cos(kx) + b_k sin(kx))

to represent this situation. The symbol ∼ means = for the x such that f is continuous at x, but this is not true at the discontinuity points.
2.1.1. Functions of arbitrary period

Suppose f(t) has period T, that is f(t + T) = f(t) for all t. We can find its Fourier series by making a change of variable; in engineering it is usual to use the real variable t (time). Let x = 2πt/T and define

    g(x) = f(Tx/(2π)).

Then g has period 2π:

    g(x + 2π) = f(T(x + 2π)/(2π)) = f(Tx/(2π) + T) = f(Tx/(2π)) = g(x).

So the Fourier series of f(t) can be obtained from the Fourier series of g(x):

    g(x) ∼ a_0/2 + Σ_{k=1}^∞ (a_k cos(kx) + b_k sin(kx))

    f(t) ∼ a_0/2 + Σ_{k=1}^∞ (a_k cos(2πkt/T) + b_k sin(2πkt/T)),

or, writing ω = 2π/T,

    f(t) ∼ a_0/2 + Σ_{k=1}^∞ (a_k cos(kωt) + b_k sin(kωt)),

with coefficients

    a_n = (2/T) ∫_0^T f(t) cos(nωt) dt,  b_n = (2/T) ∫_0^T f(t) sin(nωt) dt.
In the Fourier series the frequencies appear as multiples of the basic frequency 1/T. The basic frequency is called the fundamental, while the multiples are called harmonics; Fourier analysis is often called harmonic analysis. A periodic signal may then be described by its fundamental and harmonics.
Example 2.1.5 (Triangle wave function). Find the Fourier series of the function defined by

    f(t) = |t| if −1 ≤ t ≤ 1,  and f(t + 2) = f(t) for all t.

The function f(t) is periodic with period T = 2, hence ω = π. Choose the interval [−1, 1] and compute the Fourier coefficients:

    a_0 = (2/2) ∫_{−1}^1 |t| dt = ∫_{−1}^0 (−t) dt + ∫_0^1 t dt = 1,

    a_n = (2/2) ∫_{−1}^1 |t| cos(nπt) dt = (2 cos(nπ) − 2)/(n^2 π^2) = 0 if n is even,  −4/(n^2 π^2) if n is odd,

    b_n = (2/2) ∫_{−1}^1 |t| sin(nπt) dt = 0.

Therefore

    f(t) = 1/2 − (4/π^2) cos(πt) − (4/(9π^2)) cos(3πt) − (4/(25π^2)) cos(5πt) − ...      (2.3)

Figure 2.4: Note the very fast convergence of the Fourier series. In the above graphic the first two terms give a very good approximation to the function.

Evaluating (2.3) at t = 0, where f(0) = 0, we obtain the remarkable identity

    1 + 1/3^2 + 1/5^2 + 1/7^2 + 1/9^2 + ... = π^2/8.
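The identity for the sum of reciprocal odd squares can be confirmed numerically by truncating the series:

```python
import math

# partial sum of 1 + 1/3^2 + 1/5^2 + ... (reciprocal odd squares)
s = sum(1/(2*k - 1)**2 for k in range(1, 200001))
assert abs(s - math.pi**2/8) < 1e-5
print("sum of reciprocal odd squares matches pi^2/8")
```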
2.1.2. Complex Notation

Using

    cos θ = (e^{iθ} + e^{−iθ})/2,  sin θ = (e^{iθ} − e^{−iθ})/(2i),

we may write the formula for the Fourier series in a more compact way:

    f(t) = a_0/2 + Σ_{k=1}^∞ (a_k (e^{ikωt} + e^{−ikωt})/2 + b_k (e^{ikωt} − e^{−ikωt})/(2i))
         = a_0/2 + Σ_{k=1}^∞ ((a_k/2 − i b_k/2) e^{ikωt} + (a_k/2 + i b_k/2) e^{−ikωt})
         = Σ_{k=−∞}^∞ c_k e^{ikωt},

with

    c_n = (1/T) ∫_0^T f(t) e^{−inωt} dt.

This is called the complex Fourier series. Please note that the summation now also covers negative indices: we have negative frequencies.
2.2. The Fourier Transform

2.2.1. Definition and examples

The Fourier transform of a function f(t) is

    F[f(t)] = f^(ω) = ∫_{−∞}^∞ f(t) e^{−iωt} dt.

The function f^ is a complex-valued function of the variable ω, the frequency, and is defined for all frequencies. As the function is complex, it may be described by a real and an imaginary part, or with magnitude and phase (polar form), as with any complex number.

Warning. Our definition of the Fourier transform is a standard one, but it is not the only one in use.

Examples

Example 2.2.1. Given the time signal function (rectangle function)

    Π_a(t) = 1 for |t| < a/2,  0 elsewhere,

the Fourier transform F(Π_a(t)) is

    Π^_a(ω) = ∫_{−a/2}^{a/2} e^{−iωt} dt = (1/(−iω)) (e^{−iωa/2} − e^{iωa/2}) = 2 sin(ωa/2)/ω.
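The closed form can be compared with a direct numerical evaluation of the defining integral (here with a midpoint quadrature rule, since the integrand is supported on [−a/2, a/2]):

```python
import cmath, math

def rect_ft(w, a, n=20000):
    # midpoint rule for the integral of e^{-i w t} over [-a/2, a/2]
    h = a / n
    return sum(cmath.exp(-1j*w*(-a/2 + (j + 0.5)*h)) for j in range(n)) * h

a = 2.0
for w in [0.5, 1.0, 3.0]:
    closed = 2*math.sin(w*a/2)/w
    assert abs(rect_ft(w, a) - closed) < 1e-6
print("rectangle transform ok")
```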
In the above example the Fourier transform is a real function, but this does not always happen, as the next example shows.
Figure: the rectangle function Π_a(t) and its transform Π^_a(ω).

Example 2.2.2. Compute the Fourier transform of the triangle function

    f(t) = 1 − |t| if |t| < 1,  0 otherwise.

    (Solution: f^(ω) = (2 − 2 cos ω)/ω^2.)
Example 2.2.3. The time signal f(t) = e^{−at} sin(bt) for t ≥ 0, f(t) = 0 for t < 0 (with a > 0), has the following complex Fourier transform:

    f^(ω) = b/(a^2 + b^2 − ω^2 + 2iaω)
          = b(a^2 + b^2 − ω^2)/((ω^2 − a^2 − b^2)^2 + 4a^2ω^2) − i · 2abω/((ω^2 − a^2 − b^2)^2 + 4a^2ω^2).
Inverse Fourier transform

Theorem 2.2.4 (Fourier integral theorem). Let f(t) be a function defined for all time t, i.e. −∞ < t < ∞, which is continuous except for a discrete set of points {t_1, t_2, ..., t_n, ...} at which the lateral limits from the right (f(t^+)) and from the left (f(t^−)) exist. If, in addition, f is laterally differentiable everywhere, then

    (f(t^+) + f(t^−))/2 = (1/(2π)) ∫_{−∞}^∞ (∫_{−∞}^∞ f(u) e^{iω(t−u)} du) dω,

and at every point t where f is continuous

    (1/(2π)) ∫_{−∞}^∞ (∫_{−∞}^∞ f(u) e^{iω(t−u)} du) dω = f(t).

Hence, if the Fourier transform exists, there is an inverse transform formula.
Re f()
f (t)
Im f()
2.2.2.
1
(f()) = f (t) =
2
f()eit d.
Linearity

Proposition 2.2.6. Let $f_1(t)$ and $f_2(t)$ be functions whose Fourier transforms exist, and let $c_1$ and $c_2$ be complex constants. Then

\[ \mathcal{F}(c_1f_1(t)+c_2f_2(t)) = c_1\mathcal{F}(f_1(t)) + c_2\mathcal{F}(f_2(t)). \]

Proof.

\[ \int_{-\infty}^{\infty}\big(c_1f_1(t)+c_2f_2(t)\big)e^{-i\omega t}\,dt = c_1\int_{-\infty}^{\infty}f_1(t)e^{-i\omega t}\,dt + c_2\int_{-\infty}^{\infty}f_2(t)e^{-i\omega t}\,dt = c_1\hat f_1(\omega) + c_2\hat f_2(\omega). \]
Translations

Proposition 2.2.7. Let $f(t)$ be a function whose Fourier transform $\hat f(\omega)$ exists, and let $a$ be a real number. Then

\[ \mathcal{F}(f(t-a)) = e^{-i\omega a}\,\hat f(\omega). \]

Proof.

\[ \mathcal{F}(f(t-a)) = \int_{-\infty}^{\infty} f(t-a)e^{-i\omega t}\,dt \overset{u=t-a}{=} \int_{-\infty}^{\infty} f(u)e^{-i\omega(u+a)}\,du = e^{-i\omega a}\int_{-\infty}^{\infty} f(u)e^{-i\omega u}\,du = e^{-i\omega a}\hat f(\omega). \]

Observe that the Fourier transforms of a function and of a translated (delayed in time) copy of it have the same absolute value:

\[ |\mathcal{F}(f(t-a))| = |e^{-i\omega a}|\,|\hat f(\omega)| = |\hat f(\omega)|. \]

Proposition 2.2.8 (Inverse translation). If $\hat f(\omega) = \mathcal{F}(f(t))$, then, for every real number $k$,

\[ \mathcal{F}(e^{ikt}f(t)) = \hat f(\omega-k). \]

Proof. Exercise 7.
Rescaling

Proposition 2.2.9. Let $a \neq 0$ be a real constant. If $\mathcal{F}(f(t)) = \hat f(\omega)$, then

\[ \mathcal{F}(f(at)) = \frac{1}{|a|}\,\hat f\!\left(\frac{\omega}{a}\right). \]

Proof. If $a$ is a positive real,

\[ \mathcal{F}(f(at)) = \int_{-\infty}^{\infty} f(at)e^{-i\omega t}\,dt \overset{at=u}{=} \frac{1}{a}\int_{-\infty}^{\infty} f(u)e^{-i\frac{\omega}{a}u}\,du = \frac{1}{a}\,\hat f\!\left(\frac{\omega}{a}\right). \]

If $a$ is a negative real, the change of variable also reverses the limits of integration:

\[ \mathcal{F}(f(at)) = \int_{-\infty}^{\infty} f(at)e^{-i\omega t}\,dt = -\frac{1}{a}\int_{-\infty}^{\infty} f(u)e^{-i\frac{\omega}{a}u}\,du = -\frac{1}{a}\,\hat f\!\left(\frac{\omega}{a}\right) = \frac{1}{|a|}\,\hat f\!\left(\frac{\omega}{a}\right). \]

Derivation

Proposition 2.2.10. If $f$ is differentiable and $f(t) \to 0$ as $t\to\pm\infty$, then

\[ \mathcal{F}(f'(t)) = i\omega\,\mathcal{F}(f(t)). \]

Proof. Integrating by parts,

\[ \int_{-\infty}^{\infty} f'(t)e^{-i\omega t}\,dt = \big[f(t)e^{-i\omega t}\big]_{-\infty}^{\infty} + i\omega\int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt = i\omega\,\mathcal{F}(f(t)). \]

Hence, under the necessary hypotheses about the existence of the integrals and the limits at infinity of the derivatives, induction gives

\[ \mathcal{F}\big(f^{(n)}(t)\big) = (i\omega)^n\,\mathcal{F}(f(t)). \tag{2.4} \]
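The derivation rule is easy to confirm numerically on a rapidly decaying function (the Gaussian below and the helper `ft` are choices made for this illustration):

```python
import cmath, math

def ft(f, w, lo=-10.0, hi=10.0, n=40000):
    # midpoint-rule approximation of ∫ f(t) e^{-iωt} dt
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * cmath.exp(-1j * w * (lo + (k + 0.5) * h))
               for k in range(n)) * h

f = lambda t: math.exp(-t * t)             # decays at ±∞, so (2.4) applies
df = lambda t: -2 * t * math.exp(-t * t)   # f'(t)

w = 1.3
lhs = ft(df, w)          # F(f')(ω)
rhs = 1j * w * ft(f, w)  # iω F(f)(ω)
```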
Other properties

Proposition 2.2.11. If $\hat f(\omega) = \mathcal{F}(f(t))$, then

\[ \mathcal{F}(\hat f(t)) = 2\pi f(-\omega). \]

Proof. Exercise 6.

Proposition 2.2.12. If $\hat f(\omega) = \mathcal{F}(f(t))$, then

\[ \mathcal{F}(f(-t)) = \hat f(-\omega). \]

Proof.

\[ \mathcal{F}(f(-t)) = \int_{-\infty}^{\infty} f(-t)e^{-i\omega t}\,dt \overset{u=-t}{=} \int_{-\infty}^{\infty} f(u)e^{i\omega u}\,du = \int_{-\infty}^{\infty} f(u)e^{-i(-\omega)u}\,du = \hat f(-\omega). \]
Proposition 2.2.13. A function $f(t)$ is a real function if and only if its Fourier transform verifies $\hat f(-\omega) = \overline{\hat f(\omega)}$.

Proof. Suppose $f(t)\in\mathbb{R}$. Then

\[ \hat f(-\omega) = \int f(t)e^{i\omega t}\,dt = \int f(t)\cos(\omega t)\,dt + i\int f(t)\sin(\omega t)\,dt = \overline{\int f(t)\cos(\omega t)\,dt - i\int f(t)\sin(\omega t)\,dt} = \overline{\hat f(\omega)}. \]

Conversely, suppose $\hat f(-\omega) = \overline{\hat f(\omega)}$, write $\hat f(\omega) = \hat u(\omega) + i\hat v(\omega)$ with $\hat u, \hat v$ real, and let $f(t) = u(t)+iv(t)$. Using the inverse Fourier transform:

\[ f(t) = \frac{1}{2\pi}\int \hat f(\omega)e^{i\omega t}\,d\omega = \frac{1}{2\pi}\int\big(\hat u(\omega)+i\hat v(\omega)\big)\big(\cos(\omega t)+i\sin(\omega t)\big)\,d\omega = \]
\[ = \frac{1}{2\pi}\int\big(\hat u(\omega)\cos(\omega t) - \hat v(\omega)\sin(\omega t)\big)\,d\omega + \frac{i}{2\pi}\int\big(\hat u(\omega)\sin(\omega t) + \hat v(\omega)\cos(\omega t)\big)\,d\omega. \]

But, by hypothesis, $\hat u(-\omega)+i\hat v(-\omega) = \hat u(\omega)-i\hat v(\omega)$; then $\hat u$ is an even function and $\hat v$ is an odd function. Hence $\hat u(\omega)\sin(\omega t)+\hat v(\omega)\cos(\omega t)$ is an odd function of $\omega$, the integral in the imaginary part is null, and so $f(t)$ is real.
Example 2.2.14. Let's find the Fourier transform of the two-sided exponential decay $f(t) = e^{-a|t|}$, with $a$ a positive constant.

We could find the transform by plugging directly into the formula for the Fourier transform (exercise). However, we are going to compute it using some of the above properties. Recall that for

\[ g(t) = \begin{cases} e^{-t} & \text{if } t>0,\\ 0 & \text{if } t<0, \end{cases} \]

we have

\[ \hat g(\omega) = \int_0^{\infty} e^{-t}e^{-i\omega t}\,dt = \frac{1}{i\omega+1}. \]

Consider $h(t) = g(t)+g(-t)$. By Proposition 2.2.12,

\[ \hat h(\omega) = \mathcal{F}(g(t)) + \mathcal{F}(g(-t)) = \frac{1}{1+i\omega} + \frac{1}{1-i\omega} = \frac{2}{1+\omega^2}. \]

And now observe that $f(t)$ is almost equal to $h(at)$. In fact, they agree except at the origin, where $f(0) = 1$ and $h(0) = g(0)+g(0) = 2$; but this is not important for integration. Therefore

\[ \hat f(\omega) = \mathcal{F}(h(at)) = \frac{1}{a}\,\hat h\!\left(\frac{\omega}{a}\right) = \frac{1}{a}\,\frac{2}{(\omega/a)^2+1} = \frac{2a}{\omega^2+a^2}. \]
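The result $2a/(\omega^2+a^2)$ can be verified numerically at a few frequencies (the helper `ft`, the value of $a$ and the truncation window are choices made for this illustration):

```python
import cmath, math

def ft(f, w, lo=-40.0, hi=40.0, n=80000):
    # midpoint-rule approximation of ∫ f(t) e^{-iωt} dt
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * cmath.exp(-1j * w * (lo + (k + 0.5) * h))
               for k in range(n)) * h

a = 1.5
f = lambda t: math.exp(-a * abs(t))

err = max(abs(ft(f, w) - 2 * a / (w * w + a * a)) for w in (0.0, 0.8, 2.0))
```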
2.2.3. Convolution

Let $f(t)$ and $g(t)$ be functions. The convolution product (or simply convolution) of $f$ and $g$ is

\[ (f*g)(t) = \int_{-\infty}^{\infty} f(u)\,g(t-u)\,du. \]
Proposition. Convolution is associative: $((f*g)*h)(t) = (f*(g*h))(t)$.

Proof.

\[ ((f*g)*h)(t) = \int (f*g)(u)\,h(t-u)\,du = \int\!\!\int f(v)g(u-v)\,dv\; h(t-u)\,du = \]
\[ = \int f(v)\int g(u-v)h(t-u)\,du\,dv \overset{\{w=u-v\}}{=} \int f(v)\int g(w)h(t-v-w)\,dw\,dv = \]
\[ = \int f(v)\,(g*h)(t-v)\,dv = (f*(g*h))(t). \]
Example. Let us compute the convolution of the rectangle function with itself:

\[ (\Pi_a \star \Pi_a)(t) = \int_{-\infty}^{\infty}\Pi_a(u)\,\Pi_a(t-u)\,du = \int_{-a/2}^{a/2}\Pi_a(t-u)\,du = \int_{t-a/2}^{t+a/2}\Pi_a(v)\,dv. \]

Thus:

For $t \le -a$ we have $t+\frac a2 \le -\frac a2$. Hence $(\Pi_a\star\Pi_a)(t) = \int_{t-a/2}^{t+a/2}\Pi_a(v)\,dv = 0$.

For $-a < t \le 0$ we have $-\frac a2 \le t+\frac a2 \le \frac a2$. Hence $(\Pi_a\star\Pi_a)(t) = \int_{-a/2}^{t+a/2}dv = a+t$.

For $0 < t < a$ we have $-\frac a2 \le t-\frac a2 \le \frac a2 < t+\frac a2$. Hence $(\Pi_a\star\Pi_a)(t) = \int_{t-a/2}^{a/2}dv = a-t$.

For $a \le t$ we have $\frac a2 \le t-\frac a2$. Hence $(\Pi_a\star\Pi_a)(t) = \int_{t-a/2}^{t+a/2}\Pi_a(v)\,dv = 0$.

So $\Lambda_a(t) = (\Pi_a\star\Pi_a)(t)$.
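The case analysis above can be cross-checked by convolving the rectangle with itself numerically and comparing against the triangle (the helper `conv` and the sample points are choices made for this illustration):

```python
def rect(t, a=2.0):
    return 1.0 if abs(t) < a / 2 else 0.0

def conv(f, g, t, lo=-5.0, hi=5.0, n=20000):
    # midpoint-rule approximation of (f*g)(t) = ∫ f(u) g(t-u) du
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * g(t - (lo + (k + 0.5) * h)) for k in range(n)) * h

a = 2.0
tri = lambda t: max(a - abs(t), 0.0)   # the triangle Λ_a(t)

err = max(abs(conv(rect, rect, t) - tri(t)) for t in (-3.0, -1.2, 0.0, 0.7, 2.5))
```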
Convolution for the Fourier transform

Theorem 2.2.20. Let $f(t)$ and $g(t)$ be functions with Fourier transforms $\hat f(\omega)$ and $\hat g(\omega)$ respectively. Then

\[ \mathcal{F}((f*g)(t)) = \hat f(\omega)\,\hat g(\omega). \]

Proof.

\[ \mathcal{F}((f*g)(t)) = \int\!\!\int f(u)g(t-u)\,du\;e^{-i\omega t}\,dt = \int f(u)\int g(t-u)e^{-i\omega t}\,dt\,du = \int f(u)\,\mathcal{F}(g(t-u))\,du \overset{2.2.7}{=} \]
\[ = \int f(u)\,e^{-i\omega u}\hat g(\omega)\,du = \left(\int f(u)e^{-i\omega u}\,du\right)\hat g(\omega) = \hat f(\omega)\,\hat g(\omega). \]
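Theorem 2.2.20 can be checked numerically on a pair of Gaussians (the helpers `ft` and `conv` and the parameter values are choices made for this illustration):

```python
import cmath, math

def ft(f, w, lo=-10.0, hi=10.0, n=1500):
    # midpoint-rule approximation of ∫ f(t) e^{-iωt} dt
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * cmath.exp(-1j * w * (lo + (k + 0.5) * h))
               for k in range(n)) * h

def conv(f, g, t, lo=-10.0, hi=10.0, n=1500):
    # midpoint-rule approximation of (f*g)(t)
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * g(t - (lo + (k + 0.5) * h)) for k in range(n)) * h

f = lambda t: math.exp(-t * t)
g = lambda t: math.exp(-2 * t * t)

w = 0.9
lhs = ft(lambda t: conv(f, g, t), w)   # F(f*g)(ω)
rhs = ft(f, w) * ft(g, w)              # f̂(ω) ĝ(ω)
```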
We can use the Fourier transform and convolution to solve some differential equations.

Example 2.2.23. Find an expression for the solutions of the next classic second order ODE:

\[ u'' - u = f. \]

Take the Fourier transform of both sides:

\[ (i\omega)^2\hat u - \hat u = \hat f \quad\Longrightarrow\quad \hat u = -\frac{1}{1+\omega^2}\,\hat f. \]

Take the inverse Fourier transform of both sides:

\[ u = -f \star \mathcal{F}^{-1}\!\left(\frac{1}{\omega^2+1}\right). \]

By Example 2.2.14 we know the inverse transform, thus

\[ u(t) = -f(t)\star\frac{1}{2}e^{-|t|} = -\frac{1}{2}\int_{-\infty}^{\infty} f(u)\,e^{-|t-u|}\,du. \]
Theorem 2.2.24 (Parseval's identity). If $\hat f(\omega)$ is the Fourier transform of $f(t)$, then

\[ \int_{-\infty}^{\infty}\big|\hat f(\omega)\big|^2\,d\omega = 2\pi\int_{-\infty}^{\infty}|f(t)|^2\,dt. \]

Proof. We know

\[ \mathcal{F}^{-1}\big(\hat f(\omega)\overline{\hat f(\omega)}\big) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\big|\hat f(\omega)\big|^2 e^{i\omega t}\,d\omega. \tag{2.5} \]

On the other hand, let $g(t) = \overline{f(-t)}$; by Proposition 2.2.12, $\overline{\hat f(\omega)} = \mathcal{F}(g(t))$, so

\[ \mathcal{F}^{-1}\big(\hat f(\omega)\overline{\hat f(\omega)}\big) = \mathcal{F}^{-1}\big(\mathcal{F}(f(t))\,\mathcal{F}(g(t))\big) = (f*g)(t) = \int_{-\infty}^{\infty} f(u)g(t-u)\,du. \tag{2.6} \]

Equating (2.5) and (2.6) at $t=0$ gives

\[ \frac{1}{2\pi}\int_{-\infty}^{\infty}\big|\hat f(\omega)\big|^2\,d\omega = \int_{-\infty}^{\infty} f(u)\overline{f(u)}\,du = \int_{-\infty}^{\infty}|f(u)|^2\,du. \]
2.2.4. Fourier transforms of some common functions

Rectangles

The $(a,b)$-rectangle function is defined as

\[ \chi_{(a,b)}(t) = \begin{cases} 1 & a<t<b,\\ 0 & \text{otherwise.} \end{cases} \]

Then its Fourier transform is the complex function (exercise)

\[ \mathcal{F}(\chi_{(a,b)}(t)) = \frac{e^{-i\omega a}-e^{-i\omega b}}{i\omega}. \]

For the symmetric rectangle $\Pi_a(t) = \chi_{(-a/2,\,a/2)}(t)$ this reduces to $\dfrac{2\sin(\omega a/2)}{\omega}$ (Example 2.2.1).

Exponential function

Let $c$ be a complex number with $\operatorname{Re}(c)>0$. The function

\[ f(t) = \begin{cases} e^{-ct} & a<t<b,\\ 0 & \text{otherwise,} \end{cases} \]

i.e. $f(t) = e^{-ct}\chi_{(a,b)}(t)$, has Fourier transform

\[ \mathcal{F}\big(e^{-ct}\chi_{(a,b)}(t)\big) = \int_a^b e^{-ct}e^{-i\omega t}\,dt = \frac{e^{-ia\omega-ac}-e^{-ib\omega-bc}}{i\omega+c}. \]

In particular, for $(a,b) = (0,\infty)$ the transform is $\dfrac{1}{i\omega+c}$, and for the two-sided decay $e^{-c|t|}$ it is $\dfrac{2c}{\omega^2+c^2}$. See Example 2.2.14.
The Gaussian

For $f(t) = e^{-at^2}$ with $a>0$,

\[ \hat f(\omega) = \int_{-\infty}^{\infty} e^{-at^2}e^{-i\omega t}\,dt, \qquad \frac{d}{d\omega}\hat f(\omega) = -i\int_{-\infty}^{\infty} t\,e^{-at^2}e^{-i\omega t}\,dt. \]

Doing integration by parts with $u = e^{-i\omega t}$ and $dv = t\,e^{-at^2}\,dt$, and applying limits,

\[ \frac{d}{d\omega}\hat f(\omega) = -\frac{\omega}{2a}\int_{-\infty}^{\infty} e^{-at^2}e^{-i\omega t}\,dt = -\frac{\omega}{2a}\,\hat f(\omega), \]

which is an elementary ordinary differential equation with solution

\[ \hat f(\omega) = \hat f(0)\,e^{-\omega^2/4a}, \qquad \hat f(0) = \int_{-\infty}^{\infty} e^{-at^2}\,dt = \sqrt{\frac{\pi}{a}}, \]

hence

\[ \mathcal{F}\big(e^{-at^2}\big) = \sqrt{\frac{\pi}{a}}\,e^{-\omega^2/4a}. \]

Remark. For computing $I = \int_{-\infty}^{\infty} e^{-at^2}\,dt$, we consider $I^2$, and it doesn't matter what we call the variable of integration, so

\[ I^2 = \int e^{-ax^2}\,dx \int e^{-ay^2}\,dy = \int\!\!\int e^{-a(x^2+y^2)}\,dx\,dy = \int_0^{2\pi}\!\!\int_0^{\infty} e^{-a\rho^2}\rho\,d\rho\,d\theta = \frac{\pi}{a}. \]
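The Gaussian pair $e^{-at^2} \leftrightarrow \sqrt{\pi/a}\,e^{-\omega^2/4a}$ is easy to verify numerically (the helper `ft` and the chosen $a$ and frequencies are illustration choices):

```python
import cmath, math

def ft(f, w, lo=-12.0, hi=12.0, n=24000):
    # midpoint-rule approximation of ∫ f(t) e^{-iωt} dt
    h = (hi - lo) / n
    return sum(f(lo + (k + 0.5) * h) * cmath.exp(-1j * w * (lo + (k + 0.5) * h))
               for k in range(n)) * h

a = 0.7
g = lambda t: math.exp(-a * t * t)

err = max(abs(ft(g, w) - math.sqrt(math.pi / a) * math.exp(-w * w / (4 * a)))
          for w in (0.0, 1.0, 2.3))
```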
The function $f(t) = \dfrac{1}{t^2+c^2}$, with $\operatorname{Re}(c)>0$

As usual, write $\hat f(\omega) = \mathcal{F}\!\left(\dfrac{1}{t^2+c^2}\right)$. By Proposition 2.2.11,

\[ \mathcal{F}(\hat f(t)) = 2\pi f(-\omega) = \frac{2\pi}{\omega^2+c^2} = \frac{\pi}{c}\,\frac{2c}{\omega^2+c^2} = \frac{\pi}{c}\,\mathcal{F}\big(e^{-c|t|}\big). \]

Hence

\[ \hat f(\omega) = \frac{\pi}{c}\,e^{-c|\omega|}. \]
2.2.5. The Dirac delta

The Dirac delta is not a function but a concept called a distribution (outside the scope of this course). It can be understood, roughly speaking, as a function that is very tall and very thin. One usually uses the translated Dirac delta $\delta(t-a)$ for some real $a$ (see Figure 2.8a).

Often this distribution is defined as the "function" satisfying

\[ \int_{-\infty}^{\infty} f(t)\,\delta(t)\,dt = f(0), \]

and it can also be seen as the limit of families of functions with certain properties, for example

Gaussian functions: $\displaystyle \delta_n(t) = \sqrt{\frac{n}{\pi}}\,e^{-nt^2}$ for $n = 1,2,3,\dots$

$\displaystyle \delta_n(t) = \frac{1}{\pi}\,\frac{n}{1+n^2t^2}$ for $n = 1,2,3,\dots$

and others; that is, $\delta_n(t)\to\delta(t)$ for $n\to\infty$.

We can apply the definition of the Fourier transform to the distribution $\delta(t-a)$:

\[ \mathcal{F}(\delta(t-a)) = \int_{-\infty}^{\infty}\delta(t-a)e^{-i\omega t}\,dt = e^{-i\omega a}, \]

and, in particular, $\mathcal{F}(\delta(t)) = 1$.

On the other hand, applying Proposition 2.2.11,

\[ \mathcal{F}(e^{-iat}) = 2\pi\delta(-\omega-a) = 2\pi\delta(\omega+a). \]

In particular $\mathcal{F}(1) = \mathcal{F}(e^{0}) = 2\pi\delta(\omega)$.

Figure 2.7: Gaussian functions $\delta_n(t)$ converge to the Dirac delta $\delta(t)$.

Remark. The distribution $\delta(t-a)$ is often called an impulse at $a$ and, if $c$ is a complex constant, $c\,\delta(t-a)$ is called an impulse at $a$ weighted by $c$.

Proposition 2.2.25. We have the following Fourier transform formulas (Exercise 13):

1. $\mathcal{F}(\delta^{(n)}(t)) = (i\omega)^n$.

2. $\mathcal{F}(t) = 2\pi i\,\delta'(\omega)$.

3. $\mathcal{F}(t^n) = 2\pi i^n\,\delta^{(n)}(\omega)$.
Sign function

Define the sign function as

\[ \operatorname{sgn}(t) = \begin{cases} 1 & t>0,\\ -1 & t<0, \end{cases} \]

undefined for $t=0$. It is usual to represent $\operatorname{sgn}(-\infty) = -1$, and so this function has the property

\[ \operatorname{sgn}(t)-\operatorname{sgn}(-\infty) = \begin{cases} 2 & t>0,\\ 0 & t<0. \end{cases} \]

Furthermore,

\[ \int_{-\infty}^{t} 2\delta(x)\,dx = \begin{cases} 2 & t\ge 0,\\ 0 & t<0. \end{cases} \]

Matching both functions, except for $t=0$, we have $\int_{-\infty}^{t}2\delta(x)\,dx = \operatorname{sgn}(t)-\operatorname{sgn}(-\infty)$. Hence $\frac{d}{dt}\operatorname{sgn}(t) = 2\delta(t)$. By Proposition 2.2.10, $\mathcal{F}(2\delta(t)) = i\omega\,\mathcal{F}(\operatorname{sgn}(t))$, and we can compute the Fourier transform of the sign function:

\[ \mathcal{F}(\operatorname{sgn}(t)) = \frac{2}{i\omega}. \]

Since the Heaviside unit step verifies $H(t) = \frac12\big(1+\operatorname{sgn}(t)\big)$, then

\[ \mathcal{F}(H(t)) = \pi\delta(\omega) + \frac{1}{i\omega} \]

and

\[ \mathcal{F}(H(t-a)) = e^{-i\omega a}\left(\pi\delta(\omega)+\frac{1}{i\omega}\right). \]

Proposition 2.2.26. We have the following Fourier transform formulas (Exercise 13):

1. $\displaystyle \mathcal{F}\!\left(\frac1t\right) = -i\pi\operatorname{sgn}(\omega) = i\pi - 2i\pi H(\omega)$.

2. $\displaystyle \mathcal{F}\!\left(\frac{1}{t^{n+1}}\right) = \frac{(-i\omega)^n}{n!}\big(i\pi - 2i\pi H(\omega)\big)$.
The Fourier transform of sine and cosine

We can combine the results above to find the Fourier transforms of the sine and cosine. Since

\[ \mathcal{F}\!\left(\frac{\delta(t-a)+\delta(t+a)}{2}\right) = \frac{e^{-i\omega a}+e^{i\omega a}}{2} = \cos(\omega a), \]

applying duality (Proposition 2.2.11),

\[ \mathcal{F}(\cos(at)) = 2\pi\,\frac{\delta(-\omega-a)+\delta(-\omega+a)}{2} = \pi\big(\delta(\omega+a)+\delta(\omega-a)\big). \]

Analogously, $\mathcal{F}\!\left(\dfrac{\delta(t+a)-\delta(t-a)}{2i}\right) = \dfrac{e^{i\omega a}-e^{-i\omega a}}{2i} = \sin(\omega a)$, and therefore

\[ \mathcal{F}(\sin(at)) = \frac{2\pi}{2i}\big(\delta(-\omega+a)-\delta(-\omega-a)\big) = i\pi\big(\delta(\omega+a)-\delta(\omega-a)\big). \]
2.2.6. Application to differential equations

As we have seen in Example 2.2.23, Fourier transforms can be applied to the solution of differential equations.

Consider the following Ordinary Differential Equation (ODE):

\[ a_nx^{(n)}(t) + a_{n-1}x^{(n-1)}(t) + \cdots + a_1x'(t) + a_0x(t) = g(t), \tag{2.7} \]

assuming that the solution and all its derivatives approach zero as $t\to\pm\infty$. Applying the Fourier transform we obtain

\[ \big(a_n(i\omega)^n + a_{n-1}(i\omega)^{n-1} + \cdots + a_1(i\omega) + a_0\big)\,\hat x(\omega) = \hat g(\omega). \]

Calling

\[ F(\omega) = \frac{1}{a_n(i\omega)^n + a_{n-1}(i\omega)^{n-1} + \cdots + a_1(i\omega) + a_0}, \]

we get $\hat x(\omega) = F(\omega)\,\hat g(\omega)$, and the solution is recovered with the inverse transform. For instance, if $F(\omega) = \dfrac{1}{1+i\omega}$ and $\hat g(\omega) = 2\pi\big(\delta(\omega+1)+\delta(\omega-1)\big)$, then

\[ \hat x(\omega) = \frac{2\pi\,\delta(\omega+1)}{1-i} + \frac{2\pi\,\delta(\omega-1)}{1+i}, \qquad x(t) = \frac{e^{-it}}{1-i} + \frac{e^{it}}{1+i} = \cos t + \sin t. \]
2.2.7. Table of Fourier transforms

\[
\begin{array}{ll}
f(t) & \hat f(\omega)\\[4pt]
\chi_{(a,b)}(t) & \dfrac{e^{-i\omega a}-e^{-i\omega b}}{i\omega}\\[6pt]
\Pi_a(t) = \chi_{(-\frac a2,\frac a2)}(t) & \dfrac{2\sin(\omega a/2)}{\omega}\\[6pt]
e^{-ct}\chi_{(a,b)}(t) & \dfrac{e^{-ia\omega-ac}-e^{-ib\omega-bc}}{i\omega+c}\\[6pt]
e^{-at^2},\quad a>0 & \sqrt{\dfrac{\pi}{a}}\,e^{-\omega^2/4a}\\[6pt]
e^{-c|t|},\quad \operatorname{Re}(c)>0 & \dfrac{2c}{\omega^2+c^2}\\[6pt]
\dfrac{1}{t^2+c^2} & \dfrac{\pi}{c}\,e^{-c|\omega|}\\[6pt]
\delta(t-a) & e^{-i\omega a}\\[4pt]
e^{-iat} & 2\pi\delta(\omega+a)\\[4pt]
t^n & 2\pi i^n\,\delta^{(n)}(\omega)\\[4pt]
\operatorname{sgn}(t) & \dfrac{2}{i\omega}\\[6pt]
H(t) & \pi\delta(\omega)+\dfrac{1}{i\omega}\\[6pt]
\dfrac{1}{t^{n+1}} & \dfrac{(-i\omega)^n}{n!}\big(i\pi-2i\pi H(\omega)\big)\\[6pt]
\cos(at) & \pi\big(\delta(\omega+a)+\delta(\omega-a)\big)\\[4pt]
\sin(at) & i\pi\big(\delta(\omega+a)-\delta(\omega-a)\big)
\end{array}
\]
2.3. The Laplace Transform

2.3.1. Definitions

Definition 2.3.1. The (direct) Laplace transform of a real function $f(t)$ defined for $0 \le t < \infty$ is the ordinary calculus integral

\[ F(s) = \int_0^{\infty} f(t)\,e^{-st}\,dt, \]

where $s$ is a real number. The function $F(s)$ is usually denoted $\mathcal{L}(f(t))$, and $\mathcal{L}$ is called the Laplace transform operator.
Example 2.3.2. We'll illustrate the definition by calculating the Laplace transform of some functions.

1. $f(t) = 1$.

\[ F(s) = \int_0^{\infty} 1\cdot e^{-st}\,dt = \left[\frac{-e^{-st}}{s}\right]_{t=0}^{\infty} = \frac{1}{s}, \quad \text{assumed } s>0 \]

(the integral diverges when $s \le 0$). Then $\mathcal{L}(1) = \dfrac1s$ for $s>0$.

2. $f(t) = t$. Integration by parts gives

\[ \mathcal{L}(t) = \int_0^{\infty} t\,e^{-st}\,dt = \left[\frac{-t\,e^{-st}}{s}\right]_{t=0}^{\infty} + \frac1s\int_0^{\infty} e^{-st}\,dt = \frac{1}{s^2}, \quad \text{assumed } s>0. \]

An alternative method is to observe that $t\,e^{-st} = -\frac{d}{ds}e^{-st}$, and

\[ \mathcal{L}(t) = \int_0^{\infty} t\,e^{-st}\,dt = -\frac{d}{ds}\int_0^{\infty} 1\cdot e^{-st}\,dt = -\frac{d}{ds}\mathcal{L}(1) = \frac{1}{s^2}, \quad \text{assumed } s>0. \]

Exercise 2.3.3. Use $\dfrac{d^n}{ds^n}e^{-st} = (-t)^ne^{-st}$ to prove that

\[ \mathcal{L}(t^n) = \frac{n!}{s^{n+1}}, \quad \text{assumed } s>0. \]
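A quick numerical check of the formula $\mathcal{L}(t^n) = n!/s^{n+1}$ for a concrete case (the helper `laplace`, the truncation window and the chosen $n$, $s$ are illustration choices):

```python
import math

def laplace(f, s, hi=40.0, n=40000):
    # midpoint-rule approximation of ∫_0^∞ f(t) e^{-st} dt, truncated at t = hi
    h = hi / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n)) * h

s = 2.0
approx = laplace(lambda t: t ** 3, s)
exact = math.factorial(3) / s ** 4      # n!/s^{n+1} with n = 3
```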
Example 2.3.4 (Heaviside unit step). For $a \ge 0$, the Heaviside unit step function is

\[ H(t-a) = \begin{cases} 1 & t \ge a,\\ 0 & t < a, \end{cases} \]

and

\[ \mathcal{L}(H(t-a)) = \int_0^{\infty} H(t-a)e^{-st}\,dt = \int_a^{\infty} e^{-st}\,dt \overset{u=t-a}{=} \int_0^{\infty} e^{-s(u+a)}\,du = e^{-as}\mathcal{L}(1) = \frac{e^{-as}}{s}. \]

The Laplace transform is an improper integral,

\[ \int_0^{\infty} f(t)e^{-st}\,dt = \lim_{N\to\infty}\int_0^N f(t)e^{-st}\,dt, \]
and the issue is to determine classes of functions $f$ for which the convergence is guaranteed. The next theorem gives us a sufficient condition for the existence of Laplace transforms.

Theorem 2.3.5 (Existence of $\mathcal{L}(f)$). Let $f(t)$ be piecewise continuous on every finite interval in $t \ge 0$ and satisfy $|f(t)| \le Me^{\alpha t}$ for some constants $M$ and $\alpha$. Then $\mathcal{L}(f(t))$ exists for $s > \alpha$ and

\[ \lim_{s\to\infty}\mathcal{L}(f(t)) = 0. \tag{2.8} \]

Proof. It has to be shown that the Laplace integral of $f$ is finite for $s > \alpha$. Advanced calculus implies that it is sufficient to show that the integrand is absolutely bounded above by an integrable function $g(t)$. Take $g(t) = Me^{-(s-\alpha)t}$. Then $g(t) \ge 0$. Furthermore, $g$ is integrable, because

\[ \int_0^{\infty} g(t)\,dt = \frac{M}{s-\alpha}. \]

The inequality $|f(t)| \le Me^{\alpha t}$ implies that the absolute value of the Laplace transform integrand $f(t)e^{-st}$ is estimated by

\[ \big|f(t)e^{-st}\big| \le Me^{\alpha t}e^{-st} = g(t). \]

The limit statement follows from $|\mathcal{L}(f(t))| \le \int_0^{\infty} g(t)\,dt = \frac{M}{s-\alpha}$, because the right side of this inequality has limit zero as $s\to\infty$. The proof is complete.

Property (2.8) in the previous theorem gives us a criterion to determine when a function can be the Laplace transform of another one. For example, nonzero polynomial functions are not Laplace transforms of such functions. Instead, the function $F(s) = \arctan(1/s)$ for $s>0$ could be a Laplace transform, as we confirm in Example 2.3.26.
2.3.2. Properties

Linearity

Proposition 2.3.6. Let $f_1(t)$ and $f_2(t)$ be functions whose Laplace transforms exist and let $c_1$ and $c_2$ be real constants. Then

\[ \mathcal{L}(c_1f_1(t)+c_2f_2(t)) = c_1\mathcal{L}(f_1(t)) + c_2\mathcal{L}(f_2(t)). \]

Proof.

\[ \int_0^{\infty}\big(c_1f_1(t)+c_2f_2(t)\big)e^{-st}\,dt = c_1\int_0^{\infty}f_1(t)e^{-st}\,dt + c_2\int_0^{\infty}f_2(t)e^{-st}\,dt. \]
Translations

Proposition 2.3.7. Let $f(t)$ be a function, $H(t)$ the Heaviside unit step function defined in Example 2.3.4, and $g(t) = H(t-a)f(t-a)$, i.e.

\[ g(t) = \begin{cases} f(t-a) & \text{for } t>a,\\ 0 & \text{for } t<a, \end{cases} \]

with $a>0$. Then

\[ \mathcal{L}(g(t)) = e^{-as}\mathcal{L}(f(t)). \]

Proof.

\[ \mathcal{L}(g(t)) = \int_0^{\infty} g(t)e^{-st}\,dt = \int_a^{\infty} f(t-a)e^{-st}\,dt, \]

and doing $u = t-a$,

\[ \mathcal{L}(g(t)) = \int_0^{\infty} f(u)e^{-s(u+a)}\,du = e^{-as}\int_0^{\infty} f(u)e^{-su}\,du = e^{-as}\mathcal{L}(f(t)). \]

Example 2.3.8. To calculate the Laplace transform of the step function

\[ f(t) = \begin{cases} 1 & \text{for } a \le t < b,\\ 0 & \text{elsewhere,} \end{cases} \]

observe that $f(t) = H(t-a)-H(t-b)$, where $H(t)$ is the Heaviside unit step function. Then

\[ \mathcal{L}(f(t)) = \mathcal{L}(H(t-a)) - \mathcal{L}(H(t-b)) = e^{-as}\mathcal{L}(1) - e^{-bs}\mathcal{L}(1) = \frac{e^{-as}-e^{-bs}}{s}. \]
Proposition 2.3.9. If $\mathcal{L}(f(t)) = F(s)$ for $s>c$, then $\mathcal{L}(e^{at}f(t)) = F(s-a)$ for $s>a+c$.

Proof. It is easy; start by developing $F(s-a)$.

Exercise 2.3.10. Use the above propositions to prove that $\mathcal{L}\big((t-1)e^{3t}\big) = \dfrac{4-s}{(s-3)^2}$ for $s>3$.

Rescaling

Proposition 2.3.11. If $\mathcal{L}(f(t)) = F(s)$, then $\mathcal{L}(f(at)) = \dfrac1a\,F\!\left(\dfrac sa\right)$ for $a>0$.

Proof.

\[ \mathcal{L}(f(at)) = \int_0^{\infty} f(at)e^{-st}\,dt \overset{at=u}{=} \frac1a\int_0^{\infty} f(u)e^{-\frac sa u}\,du = \frac1a\,F\!\left(\frac sa\right). \]
t-derivative rule

Theorem 2.3.12 (t-derivative rule). Let $f(t)$ be continuous, of exponential order, with $f'(t)$ piecewise continuous. Then

\[ \mathcal{L}(f'(t)) = s\mathcal{L}(f(t)) - f(0). \]

Proof. $\mathcal{L}(f(t))$ already exists, because $f$ is of exponential order and continuous. On an interval $[a,b]$ where $f'$ is continuous, integration by parts using $u = e^{-st}$, $dv = f'(t)dt$ gives

\[ \int_a^b f'(t)e^{-st}\,dt = \big[f(t)e^{-st}\big]_{t=a}^{b} + s\int_a^b f(t)e^{-st}\,dt = f(b)e^{-bs} - f(a)e^{-as} + s\int_a^b f(t)e^{-st}\,dt. \]

On any interval $[0,N]$ there are finitely many intervals $[a,b]$ on each of which $f'$ is continuous. Add the above equality across these finitely many intervals $[a,b]$. The boundary values on adjacent intervals match and the integrals add to give

\[ \int_0^N f'(t)e^{-st}\,dt = f(N)e^{-Ns} - f(0)e^{0} + s\int_0^N f(t)e^{-st}\,dt. \]

Take the limit across this equality as $N\to\infty$. Then the right side has limit $-f(0) + s\mathcal{L}(f(t))$, because of the existence of $\mathcal{L}(f(t))$ and $\lim_{t\to\infty}f(t)e^{-st} = 0$ for large $s$. Therefore the left side has a limit, and by definition $\mathcal{L}(f'(t))$ exists and $\mathcal{L}(f'(t)) = -f(0) + s\mathcal{L}(f(t))$.

Similarly we have:

\[ \mathcal{L}(f''(t)) = s\mathcal{L}(f'(t)) - f'(0) = s\big(s\mathcal{L}(f(t)) - f(0)\big) - f'(0) = s^2\mathcal{L}(f(t)) - sf(0) - f'(0), \]

and furthermore $\mathcal{L}(f'''(t)) = s^3\mathcal{L}(f(t)) - s^2f(0) - sf'(0) - f''(0)$. In general,

\[ \mathcal{L}\big(f^{(n)}(t)\big) = s^n\mathcal{L}(f(t)) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0). \]
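The t-derivative rule can be checked numerically on a concrete pair such as $f = \sin$, $f' = \cos$ (the helper `laplace`, truncation window and value of $s$ are illustration choices; $\mathcal{L}(\sin t) = 1/(s^2+1)$ is also compared against Table 2.1):

```python
import math

def laplace(f, s, hi=60.0, n=60000):
    # midpoint-rule approximation of ∫_0^∞ f(t) e^{-st} dt, truncated at t = hi
    h = hi / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n)) * h

s = 1.5
lhs = laplace(math.cos, s)            # L(f') with f = sin, f' = cos
rhs = s * laplace(math.sin, s) - 0.0  # sL(f) - f(0), and f(0) = sin 0 = 0
```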
s-derivative rule

Proposition 2.3.13.

\[ \frac{d^n}{ds^n}\mathcal{L}(f(t)) = (-1)^n\,\mathcal{L}\big(t^nf(t)\big). \]

Proof. Proceed by induction on $n$.

For $n=1$:

\[ \frac{d}{ds}\mathcal{L}(f(t)) = \frac{d}{ds}\int_0^{\infty} f(t)e^{-st}\,dt = -\int_0^{\infty} tf(t)e^{-st}\,dt = -\mathcal{L}(tf(t)). \]

Induction hypothesis: $\dfrac{d^n}{ds^n}\mathcal{L}(f(t)) = (-1)^n\mathcal{L}(t^nf(t))$. Then

\[ \frac{d^{n+1}}{ds^{n+1}}\mathcal{L}(f(t)) = \frac{d}{ds}\,\frac{d^n}{ds^n}\mathcal{L}(f(t)) = \frac{d}{ds}\big[(-1)^n\mathcal{L}(t^nf(t))\big] = \]
\[ = (-1)^n\int_0^{\infty}\frac{\partial}{\partial s}\,t^nf(t)e^{-st}\,dt = (-1)^{n+1}\int_0^{\infty} t^{n+1}f(t)e^{-st}\,dt = (-1)^{n+1}\,\mathcal{L}\big(t^{n+1}f(t)\big). \]
t-integral rule

Proposition 2.3.14.

\[ \mathcal{L}\!\left(\int_0^t f(u)\,du\right) = \frac1s\,\mathcal{L}(f(t)). \]

Proof. Integrating by parts,

\[ \int_0^{\infty}\left(\int_0^t f(u)\,du\right)e^{-st}\,dt = \left[-\frac{e^{-st}}{s}\int_0^t f(u)\,du\right]_{t=0}^{\infty} + \frac1s\int_0^{\infty} f(t)e^{-st}\,dt = \frac1s\,\mathcal{L}(f(t)), \]

since the boundary term vanishes at both ends.

s-integral rule

Proposition 2.3.15.

\[ \mathcal{L}\!\left(\frac{f(t)}{t}\right) = \int_s^{\infty} F(u)\,du. \]

Proof. Omitted.
Laplace Transform for Dirac Delta Distribution

From

\[ \int_{-\infty}^{t}\delta(x-a)\,dx = \lim_{n\to\infty}\int_{-\infty}^{t}\delta_n(x-a)\,dx = \begin{cases} 0 & \text{if } t<a,\\ 1 & \text{if } t\ge a, \end{cases} = H(t-a), \]

we can interpret

\[ \frac{d}{dt}H(t-a) = \delta(t-a), \]

and so, using the t-derivative rule (Theorem 2.3.12), we obtain the Laplace transform of the Dirac delta (for $a>0$):

\[ \mathcal{L}(\delta(t-a)) = s\,\mathcal{L}(H(t-a)) - H(0-a) = s\,\frac{e^{-as}}{s} - 0 = e^{-as}. \]
2.3.3. Laplace transforms of elementary functions

Proposition 2.3.16. $\mathcal{L}(e^{at}) = \dfrac{1}{s-a}$, assumed $s>a$.

Proof.

\[ \mathcal{L}(e^{at}) = \int_0^{\infty} e^{at}e^{-st}\,dt = \int_0^{\infty} e^{(a-s)t}\,dt = \left[\frac{e^{(a-s)t}}{a-s}\right]_{t=0}^{\infty} = \frac{1}{s-a} \quad\text{for } s>a \]

(the integral diverges for $s \le a$).

Proposition 2.3.17. $\mathcal{L}(\sin at) = \dfrac{a}{s^2+a^2}$, assumed $s>0$.

Proof. A double integration by parts gives $\mathcal{L}(\sin t) = \dfrac{1}{s^2+1}$. Rescaling (Proposition 2.3.11),

\[ \mathcal{L}(\sin at) = \frac1a\,\frac{1}{\left(\frac sa\right)^2+1} = \frac{a}{s^2+a^2}. \]

Proposition 2.3.18. $\mathcal{L}(\cos at) = \dfrac{s}{s^2+a^2}$, assumed $s>0$.

Proof. Analogous.

Proposition 2.3.19. $\mathcal{L}(\cosh at) = \dfrac{s}{s^2-a^2}$, assumed $s>|a|$.

Proof. Use $\cosh at = \dfrac{e^{at}+e^{-at}}{2}$ and linearity.

Proposition 2.3.20. $\mathcal{L}(\sinh at) = \dfrac{a}{s^2-a^2}$, assumed $s>|a|$.

Proof. Analogous.

Table 2.1(a) shows the most important Laplace transforms.
2.3.4. Inverse Laplace transform

Definition 2.3.21. We say that $f(t)$ is an inverse Laplace transform of $F(s)$ when $\mathcal{L}(f(t)) = F(s)$, and then we write

\[ \mathcal{L}^{-1}(F(s)) = f(t). \]

Observe that the inverse Laplace transform is not unique.

Example 2.3.22. The functions $f_1(t) = e^t$ and

\[ f_2(t) = \begin{cases} 0 & \text{for } t=2,\\ e^t & \text{for } t\neq 2, \end{cases} \]

verify

\[ \mathcal{L}(f_1(t)) = \mathcal{L}(f_2(t)) = \frac{1}{s-1}, \]

therefore both functions are inverse Laplace transforms of the same function $F(s) = \frac{1}{s-1}$.

However, there are conditions for the uniqueness of the inverse transform, as established in the next theorem, which we give without proof.

Theorem 2.3.23 (Lerch). If $f_1(t)$ and $f_2(t)$ are continuous, of exponential order, and $\mathcal{L}(f_1(t)) = \mathcal{L}(f_2(t))$ for all $s>s_0$, then $f_1(t) = f_2(t)$ for all $t \ge 0$.

Table 2.1(b) shows the most important inverse Laplace transforms, an immediate consequence of Table 2.1(a).
\[
\begin{array}{llc}
f(t) & F(s) = \mathcal{L}(f(t)) & \\[4pt]
1 & \dfrac1s & s>0\\[6pt]
t & \dfrac{1}{s^2} & s>0\\[6pt]
t^n & \dfrac{n!}{s^{n+1}} & s>0\\[6pt]
e^{at} & \dfrac{1}{s-a} & s>a\\[6pt]
\sin at & \dfrac{a}{s^2+a^2} & s>0\\[6pt]
\cos at & \dfrac{s}{s^2+a^2} & s>0\\[6pt]
\sinh at & \dfrac{a}{s^2-a^2} & s>|a|\\[6pt]
\cosh at & \dfrac{s}{s^2-a^2} & s>|a|\\[6pt]
\delta(t-a) & e^{-as} &
\end{array}
\]

Table 2.1: Direct and inverse Laplace transforms for some functions (read left to right for $\mathcal{L}$ and right to left for $\mathcal{L}^{-1}$).
Example 2.3.24. The inverse Laplace transform of

\[ X(s) = \frac{s\sin\varphi + \omega\cos\varphi}{s^2+\omega^2} \]

is $x(t) = \sin(\omega t+\varphi)$.

Rearranging terms in the fraction, $X(s) = (\sin\varphi)\,\dfrac{s}{s^2+\omega^2} + (\cos\varphi)\,\dfrac{\omega}{s^2+\omega^2}$.

We are now able to take the inverse Laplace transforms of Table 2.1(b):

\[ x(t) = (\sin\varphi)\,\mathcal{L}^{-1}\!\left(\frac{s}{s^2+\omega^2}\right) + (\cos\varphi)\,\mathcal{L}^{-1}\!\left(\frac{\omega}{s^2+\omega^2}\right) = (\sin\varphi)(\cos\omega t) + (\sin\omega t)(\cos\varphi) = \sin(\omega t+\varphi). \]
Exercise 2.3.25. Prove that the inverse Laplace transform of $F(s) = \dfrac{s+b}{(s+a)^2+\omega^2}$ is

\[ f(t) = e^{-at}\left(\cos\omega t + \frac{b-a}{\omega}\sin\omega t\right). \]

Example 2.3.26. The inverse Laplace transform of $F(s) = \arctan\dfrac1s$ is $f(t) = \dfrac{\sin t}{t}$.

The derivative is $F'(s) = \dfrac{-1}{s^2+1}$, and using the derivative rule, $\mathcal{L}^{-1}(F'(s)) = -t\,f(t)$, so we obtain

\[ f(t) = -\frac1t\,\mathcal{L}^{-1}\!\left(\frac{-1}{s^2+1}\right) = \frac{\sin t}{t}. \]
Convolution property

Definition 2.3.27 (Convolution). Let $f(t)$ and $g(t)$ be piecewise continuous functions of exponential order with $f(t) = 0$ and $g(t) = 0$ for $t<0$. The convolution product (or simply convolution) of $f$ and $g$ is

\[ (f*g)(t) = \int_0^t f(u)g(t-u)\,du = \int_{-\infty}^{\infty} f(u)g(t-u)\,du. \]

Exercise 2.3.28. Prove that the convolution is commutative, i.e. $(f*g)(t) = (g*f)(t)$.

Proposition 2.3.29. Convolution is associative, i.e. $((f*g)*h)(t) = (f*(g*h))(t)$.
Proof.

\[ ((f*g)*h)(t) = \int (f*g)(u)\,h(t-u)\,du = \int\!\!\int f(v)g(u-v)\,dv\;h(t-u)\,du = \]
\[ = \int f(v)\int g(u-v)h(t-u)\,du\,dv \overset{\{w=u-v\}}{=} \int f(v)\int g(w)h(t-v-w)\,dw\,dv = \]
\[ = \int f(v)\,(g*h)(t-v)\,dv = (f*(g*h))(t). \]
Theorem 2.3.30 (Convolution rule). If $F(s) = \mathcal{L}(f(t))$ and $G(s) = \mathcal{L}(g(t))$, then

\[ \mathcal{L}((f*g)(t)) = F(s)\,G(s). \]

Proof.

\[ F(s)G(s) = \int_0^{\infty} f(u)e^{-su}\,du \int_0^{\infty} g(v)e^{-sv}\,dv = \iint_{[0,\infty)\times[0,\infty)} f(u)g(v)\,e^{-s(u+v)}\,du\,dv. \]

We do the change of variable $u = y$, $v = t-y$, with Jacobian

\[ \operatorname{abs}\frac{\partial(u,v)}{\partial(t,y)} = \operatorname{abs}\begin{vmatrix} 0 & 1\\ 1 & -1 \end{vmatrix} = 1, \]

and the $(u,v)$-region $[0,\infty)\times[0,\infty)$ of integration is transformed into the $(t,y)$-region $\{(t,y): y\ge 0 \text{ and } t\ge y\}$. Hence

\[ F(s)G(s) = \int_{t=0}^{\infty}\int_{y=0}^{t} e^{-st}f(y)g(t-y)\,dy\,dt = \int_{t=0}^{\infty} e^{-st}(f*g)(t)\,dt = \mathcal{L}((f*g)(t)), \]

therefore $\mathcal{L}^{-1}(F(s)G(s)) = (f*g)(t)$.
Example 2.3.31. Consider a linear time-invariant system with transfer function

\[ F(s) = \frac{1}{(s+a)(s+b)}. \]

The impulse response is simply the inverse Laplace transform of this transfer function, $f(t) = \mathcal{L}^{-1}(F(s))$.

To evaluate this inverse transform, we use the convolution property. That is, the inverse of

\[ F(s) = \frac{1}{(s+a)(s+b)} = \frac{1}{s+a}\cdot\frac{1}{s+b} \]

is

\[ f(t) = \mathcal{L}^{-1}\!\left(\frac{1}{s+a}\right) * \mathcal{L}^{-1}\!\left(\frac{1}{s+b}\right) = e^{-at}*e^{-bt} = \int_0^t e^{-ax}e^{-b(t-x)}\,dx = \frac{e^{-at}-e^{-bt}}{b-a}. \]
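Both the closed form of Example 2.3.31 and the convolution rule behind it can be checked numerically (the helpers `laplace` and `conv`, the truncation windows and the parameter values are choices made for this illustration):

```python
import math

def laplace(f, s, hi=25.0, n=1500):
    # midpoint-rule approximation of ∫_0^∞ f(t) e^{-st} dt, truncated at t = hi
    h = hi / n
    return sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(n)) * h

def conv(f, g, t, n=800):
    # midpoint-rule approximation of (f*g)(t) = ∫_0^t f(u) g(t-u) du
    h = t / n
    return sum(f((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n)) * h

a, b = 1.0, 3.0
f = lambda t: math.exp(-a * t)
g = lambda t: math.exp(-b * t)

# impulse response from the convolution vs. the closed form of Example 2.3.31
t = 0.8
err_conv = abs(conv(f, g, t) - (math.exp(-a * t) - math.exp(-b * t)) / (b - a))

# convolution rule: L(f*g)(s) = F(s) G(s) = 1/((s+a)(s+b))
s = 2.0
err_rule = abs(laplace(lambda u: conv(f, g, u), s) - 1 / ((s + a) * (s + b)))
```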
Exercise 2.3.32. Use the method of partial fraction expansion to evaluate the inverse Laplace transform $f(t) = \mathcal{L}^{-1}(F(s))$, with

\[ F(s) = \frac{1}{(s+a)(s+b)} = \frac{A}{s+a} + \frac{B}{s+b}. \]
2.3.5. Application to differential equations

The Laplace transform can be used in some cases to solve linear differential equations with given initial conditions.

Example 2.3.33. We use the Laplace method to solve the linear ODE

\[ y'' + y' - 2y = x \quad\text{with } y(0) = 2,\; y'(0) = -1. \]

First observe that $x$ is the independent variable, so

\[ \mathcal{L}(y'') + \mathcal{L}(y') - 2\mathcal{L}(y) = \mathcal{L}(x), \]

and using the $x$-derivative rule,

\[ \big(s^2\mathcal{L}(y) - sy(0) - y'(0)\big) + \big(s\mathcal{L}(y) - y(0)\big) - 2\mathcal{L}(y) = \frac{1}{s^2}, \]
\[ \big(s^2\mathcal{L}(y) - 2s + 1\big) + \big(s\mathcal{L}(y) - 2\big) - 2\mathcal{L}(y) = \frac{1}{s^2}, \]
\[ (s^2+s-2)\mathcal{L}(y) - 2s - 1 = \frac{1}{s^2}, \]
\[ (s^2+s-2)\mathcal{L}(y) = 2s + 1 + \frac{1}{s^2} = \frac{2s^3+s^2+1}{s^2}. \]

Hence

\[ \mathcal{L}(y) = \frac{2s^3+s^2+1}{s^2(s^2+s-2)} = \frac{2s^3+s^2+1}{s^2(s-1)(s+2)} = \frac{-1/4}{s} + \frac{-1/2}{s^2} + \frac{4/3}{s-1} + \frac{11/12}{s+2}, \]

and therefore

\[ y(x) = -\frac14 - \frac x2 + \frac43\,e^{x} + \frac{11}{12}\,e^{-2x}. \]
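The solution of Example 2.3.33 can be verified by direct substitution, differentiating the closed form by hand and checking the initial conditions and the residual of the equation:

```python
import math

def y(x):
    return -1 / 4 - x / 2 + (4 / 3) * math.exp(x) + (11 / 12) * math.exp(-2 * x)

def dy(x):   # y'(x)
    return -1 / 2 + (4 / 3) * math.exp(x) - (11 / 6) * math.exp(-2 * x)

def d2y(x):  # y''(x)
    return (4 / 3) * math.exp(x) + (11 / 3) * math.exp(-2 * x)

ic_ok = abs(y(0) - 2) < 1e-12 and abs(dy(0) + 1) < 1e-12   # y(0)=2, y'(0)=-1
res = max(abs(d2y(x) + dy(x) - 2 * y(x) - x) for x in (0.0, 0.5, 1.3))
```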
Example 2.3.34 (Damped oscillator). Solve by Laplace's method the initial value problem

\[ x'' + 2x' + 2x = 0, \qquad x(0) = 1,\; x'(0) = -1. \]

Solution: $x = e^{-t}\cos t$.

Taking Laplace transforms we have $\mathcal{L}(x'') + 2\mathcal{L}(x') + 2\mathcal{L}(x) = 0$, hence

\[ s^2\mathcal{L}(x) - s\,x(0) - x'(0) + 2\big(s\mathcal{L}(x) - x(0)\big) + 2\mathcal{L}(x) = 0, \]
\[ s^2\mathcal{L}(x) - s + 1 + 2\big(s\mathcal{L}(x) - 1\big) + 2\mathcal{L}(x) = 0, \]
\[ (s^2+2s+2)\mathcal{L}(x) = s+1. \]

From here

\[ \mathcal{L}(x) = \frac{s+1}{s^2+2s+2} = \frac{s+1}{(s+1)^2+1} \]

and

\[ x = \mathcal{L}^{-1}\!\left(\frac{s+1}{(s+1)^2+1}\right) = e^{-t}\,\mathcal{L}^{-1}\!\left(\frac{s}{s^2+1}\right) = e^{-t}\cos t. \]
Example (System of ODEs). The Laplace method also applies to systems. Consider

\[ \begin{cases} x' + x + 2y = 0,\\ 2x + y' - 2y = \sin t, \end{cases} \qquad\text{with } x(0) = 1,\; y(0) = 0. \]

Transforming both equations,

\[ (s+1)\mathcal{L}(x) + 2\mathcal{L}(y) = 1, \qquad 2\mathcal{L}(x) + (s-2)\mathcal{L}(y) = \frac{1}{s^2+1}, \]

and solving the linear system,

\[ \mathcal{L}(x) = \frac{s^3-2s^2+s-4}{s^4-s^3-5s^2-s-6}, \qquad \mathcal{L}(y) = \frac{-2s^2+s-1}{s^4-s^3-5s^2-s-6}. \]

Since $s^4-s^3-5s^2-s-6 = (s+2)(s-3)(s^2+1)$, a partial fraction expansion gives

\[ \mathcal{L}(x) = \frac{22}{25(s+2)} + \frac{4}{25(s-3)} - \frac{s-7}{25(s^2+1)}. \]

From here

\[ x = \frac{22}{25}\,e^{-2t} + \frac{4}{25}\,e^{3t} - \frac{\cos t - 7\sin t}{25}. \]
Exercises
Exercise 2.1

1. Sketch the graph of

\[ f(x) = \begin{cases} 1 & \text{if } x\in[2k\pi,(2k+1)\pi),\\ -2 & \text{if } x\in[(2k+1)\pi,(2k+2)\pi), \end{cases} \qquad k\in\mathbb{Z}, \]

and check that it can be expanded in Fourier series.

2. Find the Fourier series expansion of $f(x)$. Sketch the first three partial sums.

3. Use the above result to sum the series $1 - \dfrac13 + \dfrac15 - \dfrac17 + \cdots$.
Exercise 2.2

1. Expand $f(x) = x$, $x\in[0,\pi]$, in a cosine series. Expand the same function in a sine series.

2. Use the above to sum the series $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^2}$, $\;1-\dfrac13+\dfrac15-\dfrac17+\dfrac19-\cdots$, $\;\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^4}$.
Exercise 2.3

Sketch the graph and find the Fourier series expansion of the following periodic functions of period $2\pi$:

1. $f(x) = \begin{cases} 0 & \text{if } x\in[-\pi,-\frac{\pi}{2}),\\ 1 & \text{if } x\in[-\frac{\pi}{2},\frac{\pi}{2}],\\ 0 & \text{if } x\in(\frac{\pi}{2},\pi] \end{cases}$

2. $f(x) = x$, $x\in(-\pi,\pi]$

3. $f(x) = x^2$, $x\in[-\pi,\pi]$

4. $f(x) = \begin{cases} \pi x - x^2 & \text{if } x\in[0,\pi),\\ x^2 - \pi x & \text{if } x\in[\pi,2\pi) \end{cases}$
Exercise 2.4

Sketch the graph of the periodic function of period $2\pi$:

\[ f(x) = \begin{cases} -\cos x & \text{if } -\pi<x\le 0,\\ \cos x & \text{if } 0<x\le\pi. \end{cases} \]

Check whether it can be expanded in a Fourier series and find the expansion (if it exists).
Exercise 2.5

Let $f(x) = \sin\frac{x}{2}$ with $0\le x\le 2\pi$, periodic of period $2\pi$. Find its Fourier expansion in complex form.
Exercise 2.6

Prove that if $\hat f(\omega) = \mathcal{F}(f(t))$, then $\mathcal{F}(\hat f(t)) = 2\pi f(-\omega)$.

Exercise 2.7

(Inverse translation) Prove that if $\hat f(\omega) = \mathcal{F}(f(t))$, then, for every real number $k$, $\mathcal{F}(e^{ikt}f(t)) = \hat f(\omega-k)$.
Exercise 2.8

For $a>0$ and $b\in\mathbb{R}$, find the Fourier transforms:

1. $\mathcal{F}\!\left(\dfrac{e^{ibt}}{a^2+t^2}\right)$.

2. $\mathcal{F}\!\left(\dfrac{\cos bt}{a^2+t^2}\right)$.

3. $\mathcal{F}\big((1-t^2)\,\Pi_2(t)\big)$.

Exercise 2.9

Apply the definition of the Fourier transform in the second question of Exercise 2.8 to find the value of the integral

\[ \int_{-\infty}^{\infty}\frac{\cos^2 bt}{1+t^2}\,dt. \]

Exercise 2.10

Use the solution of question 3 in Exercise 2.8 to find the value of the integral

\[ \int_0^{\infty}\frac{x\cos x-\sin x}{x^3}\,\cos\frac{x}{2}\,dx. \]
Exercise 2.11

Prove that convolution is commutative, i.e. $f*g = g*f$.

Exercise 2.12

Use convolution to find the inverse transform $f(t) = \mathcal{F}^{-1}\!\left(\dfrac{\sin\omega}{\omega(i\omega+1)}\right)$.

Exercise 2.13

Prove the following Fourier transform formulas:

1. $\mathcal{F}(\delta^{(n)}(t)) = (i\omega)^n$.

2. $\mathcal{F}(t) = 2\pi i\,\delta'(\omega)$.

3. $\mathcal{F}(t^n) = 2\pi i^n\,\delta^{(n)}(\omega)$.

4. $\mathcal{F}\!\left(\dfrac1t\right) = -i\pi\operatorname{sgn}(\omega) = i\pi - 2i\pi H(\omega)$.

5. $\mathcal{F}\!\left(\dfrac{1}{t^{n+1}}\right) = \dfrac{(-i\omega)^n}{n!}\big(i\pi - 2i\pi H(\omega)\big)$.
Exercise 2.14

Find the inverse Fourier transforms:

1. $\mathcal{F}^{-1}\!\left(\dfrac{1}{-\omega^2+i\omega+2}\right)$

2. $\mathcal{F}^{-1}\!\left(\dfrac{1}{-\omega^2-2i\omega-1}\right)$

Exercise 2.15

Justify the equality $\delta(t) = \dfrac{1}{\pi}\displaystyle\int_0^{\infty}\cos tu\,du$.

Exercise 2.16

Use the Fourier transform to solve the ODE $x'' + 3x' + 2x = e^{-t}$.
Exercise 2.17

Find the Laplace transform of each of the following functions:

1. $f(t) = \begin{cases} 3 & \text{for } 0<t<5,\\ 0 & \text{for } t>5. \end{cases}$

2. $f(t) = e^{2t}\cos^2 3t - 3t^2e^{3t}$.
Hint: You can use the equality $2\cos^2 a = 1+\cos 2a$.

3. $f(t) = \cos\big(t-\tfrac{2\pi}{3}\big)\,H\big(t-\tfrac{2\pi}{3}\big) = \begin{cases} \cos\big(t-\tfrac{2\pi}{3}\big) & \text{for } t>\tfrac{2\pi}{3},\\ 0 & \text{for } t<\tfrac{2\pi}{3}. \end{cases}$

Exercise 2.18

Prove that

\[ \mathcal{L}\!\left(\int_0^t\frac{\sin u}{u}\,du\right) = \frac1s\arctan\frac1s. \]

Hint: Use Propositions 2.3.14 and 2.3.15.
Exercise 2.19

If $\mathcal{L}(f(t)) = \dfrac{s^2-s+1}{(2s+1)^2(s-1)}$, compute $\mathcal{L}(f(2t))$.

Exercise 2.20

Prove that $\mathcal{L}(t\cos at) = \dfrac{s^2-a^2}{(s^2+a^2)^2}$.

Exercise 2.21

Knowing $\mathcal{L}(f''(t)) = \arctan\dfrac1s$, compute $\mathcal{L}(f(t))$.
Exercise 2.22

Let $a, b$ be constants, $b\neq 0$. Prove that

\[ \mathcal{L}(e^{at}f(bt)) = \frac1b\,F\!\left(\frac{s-a}{b}\right), \quad\text{with } \mathcal{L}(f(t)) = F(s). \]
Exercise 2.23

Compute the inverse Laplace transform of:

1. $F(s) = \dfrac{6s-4}{s^2-4s+20}$

2. $F(s) = \dfrac{s+5}{(s-2)^3(s+3)}$

3. $F(s) = \dfrac{1}{s^2(s^2+3s-4)}$

4. $F(s) = \dfrac{s}{(s-1)^2(s^2+2s+5)}$
Exercise 2.24

Use the convolution rule to find the following inverse Laplace transforms:

1. $\mathcal{L}^{-1}\!\left(\dfrac{s}{(s^2+a^2)^2}\right)$

2. $\mathcal{L}^{-1}\!\left(\dfrac{1}{s^2(s+1)^2}\right)$

Exercise 2.25

Solve the following ODEs using the Laplace method:

1. $x'' + 4x = 9t$ with $x(0) = 0$ and $x'(0) = 7$.

2. $x''' - x = e^t$ with $x(0) = x'(0) = x''(0) = 0$.

3. $x'' + \cdots = f(t)$ with $x'(0) = 1$ and $f(t) = \begin{cases} 1 & \text{for } 0<t<1,\\ 0 & \text{for } t>1. \end{cases}$
Chapter 3

Introduction

A partial differential equation (PDE) is a relation of the form

\[ F\!\left(x, y, u, \frac{\partial u}{\partial x}, \frac{\partial u}{\partial y}, \frac{\partial^2u}{\partial x^2}, \frac{\partial^2u}{\partial y^2}, \frac{\partial^2u}{\partial x\partial y}, \dots\right) = 0 \quad\text{with } (x,y)\in U\subseteq\mathbb{R}^2. \tag{3.1} \]

Consider, for example,

\[ \frac{\partial u}{\partial x} = 0. \tag{3.2} \]

Continuing with the above example (3.2): if $u$ were a function of a single variable (an ordinary differential equation, ODE), it would have solution $u(x) = c$, with $c$ constant; but if we consider it as a function of two variables, the constant $c$ is not really a constant: we understand it to be a function that does not depend on the variable $x$. That is, integrating (3.2) we obtain a general solution

\[ u(x,y) = f(y). \]

We thus observe that, in the same way that arbitrary constants appeared in the general solution of an ODE, arbitrary functions appear in the solution of a PDE. The solutions of a PDE are restricted by the so-called boundary conditions or by the initial conditions. For example, if we impose on equation (3.2) the restriction

\[ u(x,x) = x, \]

we are establishing a boundary condition on the bisector of the first and third quadrants. In this case, the only solution of the PDE (3.2) is

\[ u(x,y) = y. \]

In the literature, two types of restrictions or boundary conditions are usually given:

Cauchy conditions: generally given for partial differential equations in which time is involved. Thus, if $u$ and $u_t$ are given for $t=0$, they are called initial conditions.

Dirichlet conditions: in which one seeks solutions $u$ on a certain region $U\subseteq\mathbb{R}^n$ that take given values at each point of the boundary $\partial U$ of the region.
3.1.1.

3.1.2. Some classical equations

The equation

\[ \left(\frac{\partial u}{\partial x}\right)^2 + \left(\frac{\partial u}{\partial y}\right)^2 + \left(\frac{\partial u}{\partial z}\right)^2 = n(x,y,z). \]

Cauchy-Riemann equations. The equations

\[ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}, \]

which represent the complex functions $f(x+iy) = u(x,y)+iv(x,y)$ that are differentiable in the complex field.

Wave equation. The equation

\[ u_{tt} = c^2u_{xx} \]

is satisfied by a function $u(x,t)$ representing the oscillations of a string. It is a linear, homogeneous second order equation which we will study in Section 3.3.1.

Heat equation. The equation

\[ u_t = c^2u_{xx} \]

describes the evolution of the temperature of a homogeneous bar of constant cross-section. It is also a linear, homogeneous second order equation. We will study it in Section 3.3.2.

Laplace's equation. The equation

\[ \frac{\partial^2u}{\partial x^2} + \frac{\partial^2u}{\partial y^2} + \frac{\partial^2u}{\partial z^2} = 0 \]

is satisfied by the potential $u$ of the electric field in regions containing no charges. It is another example of a linear, homogeneous second order equation.
3.2.

3.2.1. Direct resolution

Example. Let us solve the equation $\dfrac{\partial u}{\partial x} = 3x^2+2y^2-1$. A simple integration leads to

\[ u(x,y) = x^3 + 2xy^2 - x + f(y), \]

where $f(y)$ is a function of $y$ to be determined.

Example. Let us solve the equation $\dfrac{\partial u}{\partial y} + u = e^{xy}$. Since only the partial derivative with respect to $y$ appears, we can solve this PDE as if it were an ODE (in $y$), $u' + u = e^{xy}$, which is linear non-homogeneous and has general solution

\[ u(x,y) = \frac{e^{xy}}{x+1} + c(x)\,e^{-y}. \]

Observe that the constant that appears when solving the ODE becomes a function depending on $x$ when we solve the PDE.
3.2.2. Separation of variables

We look for solutions of the form $u(x,y) = \varphi(x)\psi(y)$. Consider, for instance, the equation $u_x = 2u_y$ with the condition $u(0,y) = e^{-y}$. Substituting $u = \varphi(x)\psi(y)$ gives $\varphi'(x)\psi(y) = 2\varphi(x)\psi'(y)$, i.e.

\[ \frac{\varphi'(x)}{\varphi(x)} = 2\,\frac{\psi'(y)}{\psi(y)}. \]

Since the above equality holds for every $x, y$, both sides must be constant; therefore

\[ \frac{\varphi'(x)}{\varphi(x)} = k \;\Rightarrow\; \varphi(x) = C_1e^{kx}, \qquad 2\,\frac{\psi'(y)}{\psi(y)} = k \;\Rightarrow\; \psi(y) = C_2e^{\frac k2 y}, \]

so that

\[ u(x,y) = \varphi(x)\psi(y) = Ce^{k\left(x+\frac y2\right)}. \]

Imposing $u(0,y) = e^{-y}$ gives $C = 1$ and $k = -2$, hence $u(x,y) = e^{-2x-y}$.
3.2.3.
Para resolver una EDP cuasilineal con una EDP en dos variables
P (x, y, u)ux + Q(x, y, u)uy = R(x, y, u)
consideremos las superficies de nivel en el espacio z = u(x, y), entonces, enfocando el
problema desde el punto de vista geomtrico, la ecuacin se puede interpretar como
que el vector (P (x, y, z), Q(x, y, z), R(x, y, z)) es ortogonal al vector (ux , uy , 1) que
es el gradiente del campo f (x, y, z) = u(x, y) z. Esto nos lleva a que dicho vector
(P (x, y, z), Q(x, y, z), R(x, y, z)) es proporcional a los vectores tangentes a las curvas
contenidas en la superficie de nivel z = u(x, y), dicho de otro modo
dx
dy
dz
=
=
P (x, y, z)
Q(x, y, z)
R(x, y, z)
El mtodo consiste, por tanto, en determinar las curvas tangentes al campo vectorial
F = (P, Q, R) llamadas curvas caractersticas y encontrar el campo u(x, y) que definen
estas curvas.
dx
dy
dz
=
=
= dt
P (x, y, z)
Q(x, y, z)
R(x, y, z)
lo que nos lleva al sistema de ecuaciones diferenciales
dx
= P (x, y, z)
dt
dy
= Q(x, y, z)
dt
dz
= R(x, y, z)
dt
Resolviendo el sistema se obtiene la solucin paramtrica de la ecuacin, o solucin
general paramtrica del sistema:
x = x(c1 , c2 , c3 , t)
y = y(c1 , c2 , c3 , t)
z = z(c1 , c2 , c3 , t)
donde c1 , c2 , c3 son constantes de integracin.
Generalmente, a partir de las condiciones iniciales se puede describir una curva llamada curva directriz (f (s), g(s), h(s)) en funcin de otro parmetro s I, que, imponiendo las condiciones iniciales en t = 0, nos permiten definir las constantes en funcin
de s, es decir
x(c1 , c2 , c3 , 0) = f (s)
y(c1 , c2 , c3 , 0) = g(s)
z(c1 , c2 , c3 , 0) = h(s)
y por eliminacin de las constantes nos queda una expresin
x(s, t) = x
y(s, t) = y
z(s, t) = z
que representa la solucin en forma paramtrica. Eliminando t de las dos primeras
ecuaciones de x e y se obtiene (x, y, s) = 0, espresin de la proyeccin de las curvas
caractersticas, y, por ltimo, se elimina el parmetro s para obtener z = u(x, y) que nos
determina una forma explcita de la solucin de la EDP que plantebamos.
69
dx
Nota. En ocasiones a partir de la primera ecuacin P (x,y,z)
=
las proyecciones de las caractersticas sobre XY de la forma
dy
Q(x,y,z)
se pueden obtener
g(x, y) = s
y, con la otra ecuacin encontrar una expresin de u(x, y, h(s)) = z que depende de x,
de y y de una cierta funcin de s, h(s), que, como hemos dicho, tambin dependiente de
x, y.
Example. Let us solve the equation $x\,\dfrac{\partial u}{\partial x} + y\,\dfrac{\partial u}{\partial y} = 3u$ by this method. We have to solve the system of differential equations

\[ \frac{dx}{x} = \frac{dy}{y} = \frac{dz}{3z} = dt. \]

Solving, we obtain the characteristic curves

\[ \frac{dx}{dt} = x \Rightarrow x = c_1e^t, \qquad \frac{dy}{dt} = y \Rightarrow y = c_2e^t, \qquad \frac{dz}{dt} = 3z \Rightarrow z = c_3e^{3t}. \]

From the first two equations we see that $y = x\cdot\text{const.}$, so the projections of the characteristic curves are straight lines. We therefore take the arbitrary directrix curve $x=1$, $y=s$, $z=h(s)$, and for $t=0$ we get $c_1 = 1$, $c_2 = s$ and $c_3 = h(s)$. From here

\[ x = e^t, \qquad y = se^t = sx, \qquad z = h(s)e^{3t} = h(s)\,x^3, \]

so the solution is

\[ z = h\!\left(\frac yx\right)x^3. \]

Example. Let us solve the equation $y\,\dfrac{\partial u}{\partial x} + x\,\dfrac{\partial u}{\partial y} = xy$. Proceeding analogously, one obtains

\[ u = \frac{y^2}{2} + h(x^2-y^2). \]
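Both solutions can be verified by substitution; the sketch below does so numerically with finite-difference partial derivatives and an arbitrary choice of $h$ (the helpers and sample points are illustration choices):

```python
import math

h = math.sin   # any differentiable h will do

# first example: u = h(y/x) x^3 should satisfy x u_x + y u_y = 3u
u1 = lambda x, y: h(y / x) * x ** 3
# second example: u = y^2/2 + h(x^2 - y^2) should satisfy y u_x + x u_y = xy
u2 = lambda x, y: y * y / 2 + h(x * x - y * y)

def ux(u, x, y, d=1e-6):
    # central-difference approximation of ∂u/∂x
    return (u(x + d, y) - u(x - d, y)) / (2 * d)

def uy(u, x, y, d=1e-6):
    # central-difference approximation of ∂u/∂y
    return (u(x, y + d) - u(x, y - d)) / (2 * d)

x, y = 1.3, 0.7
r1 = x * ux(u1, x, y) + y * uy(u1, x, y) - 3 * u1(x, y)
r2 = y * ux(u2, x, y) + x * uy(u2, x, y) - x * y
```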
3.3.

3.3.1. The wave equation

The following equation,

\[ u_{xx} - \frac{1}{c^2}u_{tt} = 0, \tag{3.3} \]

models the vibrations of a string stretched between two points $x=0$ and $x=\ell$; for example, a guitar string. The motion takes place in the $xy$ plane in such a way that each point of the string moves perpendicularly to the $x$ axis. Let $u(x,t)$ be the displacement of the string at time $t>0$, measured from the $x$ axis, with the boundary conditions

\[ u(0,t) = 0, \qquad u(\ell,t) = 0, \tag{3.4} \]

that is, our string is fixed at the ends $x=0$, $x=\ell$. Moreover, we will consider the following initial conditions (at time $t=0$):

\[ u(x,0) = \varphi(x), \qquad u_t(x,0) = \psi(x). \tag{3.5} \]
STEP 1. We look for a solution of the form $u(x,t) = f(x)g(t)$ other than the trivial one. Substituting $u_{xx}(x,t) = f''(x)g(t)$ and $u_{tt}(x,t) = f(x)g''(t)$ into equation (3.3), we obtain

\[ f''(x)g(t) = \frac{1}{c^2}g''(t)f(x) \quad\Longrightarrow\quad \frac{f''(x)}{f(x)} = \frac{1}{c^2}\frac{g''(t)}{g(t)} = -\lambda, \]

with $\lambda$ constant, which splits into the two ODEs

\[ f'' + \lambda f = 0,\ f(0) = f(\ell) = 0,\ f\neq 0; \qquad g''(t) + \lambda c^2g(t) = 0,\ g\neq 0. \tag{3.6} \]

STEP 2. Let us determine the solutions of (3.6) that satisfy the boundary conditions. We begin by finding the values of the parameter $\lambda$ for which the problem

\[ f'' + \lambda f = 0, \qquad f(0) = f(\ell) = 0, \tag{3.7} \]

has non-trivial solutions. For $\lambda<0$, $f(x) = C_1e^{\sqrt{-\lambda}\,x} + C_2e^{-\sqrt{-\lambda}\,x}$, and the conditions $f(0) = C_1+C_2 = 0$ and $f(\ell) = C_1e^{\sqrt{-\lambda}\,\ell}+C_2e^{-\sqrt{-\lambda}\,\ell} = 0$ force $f=0$. For $\lambda>0$, $f(x) = C_1\cos(\sqrt{\lambda}\,x)+C_2\sin(\sqrt{\lambda}\,x)$, with $f(0) = C_1 = 0$ and

\[ f(\ell) = C_2\sin(\sqrt{\lambda}\,\ell) = 0 \;\Longrightarrow\; \sqrt{\lambda}\,\ell = n\pi \;\Longrightarrow\; \lambda = \lambda_n = \left(\frac{n\pi}{\ell}\right)^2. \]

Therefore, the non-trivial solutions of (3.7) are given by the eigenfunctions $f_n(x) = \sin\frac{n\pi}{\ell}x$ of the eigenvalues $\lambda_n = \left(\frac{n\pi}{\ell}\right)^2$.

Setting $\lambda = \lambda_n = \left(\frac{n\pi}{\ell}\right)^2$, we obtain the solutions of the second equation in (3.6):

\[ g_n(t) = A_n\cos\frac{n\pi c}{\ell}t + B_n\sin\frac{n\pi c}{\ell}t. \]

By superposition,

\[ u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi c}{\ell}t + B_n\sin\frac{n\pi c}{\ell}t\right)\sin\frac{n\pi}{\ell}x \tag{3.8} \]

will also be a solution of (3.3) that satisfies the boundary conditions.
STEP 3. We impose that expression (3.8) fulfil the initial conditions (3.5):

\[ u(x,0) = \varphi(x) = \sum_{n=1}^{\infty} u_n(x,0) = \sum_{n=1}^{\infty} A_n\sin\frac{n\pi}{\ell}x, \tag{3.9} \]

\[ u_t(x,0) = \psi(x) = \sum_{n=1}^{\infty}\frac{\partial u_n}{\partial t}(x,0) = \sum_{n=1}^{\infty}\frac{n\pi c}{\ell}B_n\sin\frac{n\pi}{\ell}x. \tag{3.10} \]

Identifying these with the Fourier sine series of $\varphi$ and $\psi$ on $(0,\ell)$:

\[ \varphi(x) = \sum_{n=1}^{\infty}A_n\sin\frac{n\pi}{\ell}x \;\Rightarrow\; A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\frac{n\pi}{\ell}x\,dx, \]

and

\[ \psi(x) = \sum_{n=1}^{\infty}\frac{n\pi}{\ell}cB_n\sin\frac{n\pi}{\ell}x \;\Rightarrow\; B_n = \frac{2}{n\pi c}\int_0^{\ell}\psi(x)\sin\frac{n\pi}{\ell}x\,dx. \]
Example. Let us solve

\[ u_{xx} - \frac{1}{c^2}u_{tt} = 0, \quad 0<x<\pi,\ t\ge 0, \]

with boundary conditions $u(0,t) = 0$, $u(\pi,t) = 0$ for $t\ge 0$, and initial conditions $u(x,0) = 0$, $u_t(x,0) = 2$ for $0<x<\pi$.

Here $\ell = \pi$, so

\[ A_n = \frac{2}{\pi}\int_0^{\pi}\varphi(x)\sin(nx)\,dx, \qquad B_n = \frac{2}{n\pi c}\int_0^{\pi}\psi(x)\sin(nx)\,dx. \]

Moreover, since $u(x,0) = 0 = \varphi(x)$ and $u_t(x,0) = 2 = \psi(x)$, we have

\[ A_n = \frac{2}{\pi}\int_0^{\pi}\varphi(x)\sin(nx)\,dx = 0, \]

\[ B_n = \frac{2}{n\pi c}\int_0^{\pi}2\sin(nx)\,dx = \frac{4}{n\pi c}\left[\frac{-\cos(nx)}{n}\right]_0^{\pi} = \frac{4}{n^2\pi c}\big(-\cos(n\pi)+1\big) = \frac{4}{n^2\pi c}\big((-1)^{n+1}+1\big). \]

Therefore, the required solution is

\[ u(x,t) = \sum_{n=1}^{\infty}\frac{4\big((-1)^{n+1}+1\big)}{n^2\pi c}\sin(nct)\sin(nx) = \sum_{k=0}^{\infty}\frac{8}{(2k+1)^2\pi c}\sin\big((2k+1)ct\big)\sin\big((2k+1)x\big). \]
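As a check, differentiating the series term by term at $t=0$ should recover the initial velocity $\psi(x) = 2$ (note that $c$ cancels); the sketch below evaluates the truncated series at one interior point (the truncation length and sample point are illustration choices):

```python
import math

K = 100000   # number of series terms kept

def u_t0(x):
    # term-by-term ∂u/∂t at t = 0: the factor c cancels,
    # leaving Σ 8 sin((2k+1)x) / ((2k+1)π)
    return sum(8 * math.sin((2 * k + 1) * x) / ((2 * k + 1) * math.pi) for k in range(K))

val = u_t0(math.pi / 2)   # should reproduce the initial velocity ψ(x) = 2
```

Convergence is slow (the coefficients decay like $1/n$), so many terms are needed for a tight comparison.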
Ejemplo. Una cuerda de guitarra, de longitud `, est sujeta por sus extremos. Se tae
la cuerda en x = a, desplazndola una distancia h. Hllese la forma de la cuerda en
cualquier instante posterior al taido.
Hemos de resolver la Ecuacin de ondas
uxx
1
utt = 0,
c2
x=a
x=`
hx
a
u(x, 0) = (x) =
h(`
x)
`a
si 0 x a,
si a x `.
un (x, t) =
n=1
X
An cos
n=1
n
n
n
ct + Bn sen
ct
sen
x ,
`
`
`
con
2
An =
`
(x) sen
0
nx
`
dx,
2
Bn =
nc
(x) sen
0
nx
`
dx.
Now, since in the case at hand $\psi(x) = 0$, the coefficients $B_n$ are all zero. For the $A_n$ we have
$$A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\frac{n\pi x}{\ell}\,dx = \frac{2}{\ell}\int_0^{a}\frac{hx}{a}\sin\frac{n\pi x}{\ell}\,dx + \frac{2}{\ell}\int_a^{\ell}\frac{h(\ell-x)}{\ell-a}\sin\frac{n\pi x}{\ell}\,dx$$
$$= \frac{2h}{\ell a}\int_0^{a}x\sin\frac{n\pi x}{\ell}\,dx + \frac{2h}{\ell-a}\int_a^{\ell}\sin\frac{n\pi x}{\ell}\,dx - \frac{2h}{\ell(\ell-a)}\int_a^{\ell}x\sin\frac{n\pi x}{\ell}\,dx.$$
Integrating by parts, $\int x\sin(kx)\,dx = -\frac{x\cos(kx)}{k} + \frac{\sin(kx)}{k^2}$, and evaluating the brackets, all terms cancel except
$$A_n = \frac{2h\ell^2}{a(\ell-a)\pi^2 n^2}\sin\frac{n\pi a}{\ell}.$$
Hence the shape of the string is
$$u(x,t) = \frac{2h\ell^2}{a(\ell-a)\pi^2}\sum_{n=1}^{\infty}\frac{1}{n^2}\sin\frac{n\pi a}{\ell}\cos\frac{n\pi ct}{\ell}\sin\frac{n\pi x}{\ell}.$$
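A short numerical check (not from the original text) that the series above reproduces the triangular initial shape at $t = 0$. The parameter values $\ell$, $a$, $h$ below are arbitrary choices for the test:

```python
import math

# Assumed parameters for the check: length, pluck position and pluck height.
ell, a, h = 1.0, 0.3, 0.05

def phi(x):
    # triangular initial shape of the plucked string
    return h * x / a if x <= a else h * (ell - x) / (ell - a)

def A(n):
    # A_n = 2 h ell^2 / (a (ell - a) pi^2 n^2) * sin(n pi a / ell)
    return 2 * h * ell**2 / (a * (ell - a) * math.pi**2 * n**2) \
        * math.sin(n * math.pi * a / ell)

def u0(x, terms=5000):
    # series solution evaluated at t = 0
    return sum(A(n) * math.sin(n * math.pi * x / ell) for n in range(1, terms))

for x in (0.1, a, 0.8):
    assert abs(u0(x) - phi(x)) < 1e-3
```

The coefficients decay like $1/n^2$, so the partial sums converge to the corner-shaped profile, though slowly near $x = a$.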
3.3.2. The Heat Equation

We now consider the heat equation
$$u_t = c^2 u_{xx}, \tag{3.11}$$
which describes the temperature of a homogeneous bar of constant cross-section. We assume the surface of the bar is insulated, so that heat flows only lengthwise; we place the bar along the OX axis. We look for a solution with the boundary conditions
$$u(0,t) = 0, \qquad u(\ell,t) = 0, \quad \text{for all } t,$$
that is, the ends of the bar are kept at temperature $0$. We also impose the initial condition $u(x,0) = \varphi(x)$, which gives the initial distribution of the temperature in the bar.
STEP 1. We look for a solution of the form
$$u(x,t) = f(x)g(t),$$
other than the trivial one. Substituting into Equation (3.11) gives
$$\frac{1}{c^2}\,\frac{g'(t)}{g(t)} = \frac{f''(x)}{f(x)} = -\lambda, \quad \text{with } \lambda \text{ constant},$$
whence, separating,
$$f''(x) + \lambda f(x) = 0, \quad f(0) = f(\ell) = 0, \qquad g'(t) + \lambda c^2 g(t) = 0. \tag{3.12}$$
STEP 2. Again we face the Sturm–Liouville problem
$$f'' + \lambda f = 0, \qquad f(0) = f(\ell) = 0.$$
From the computations in the previous section, we know this problem has nontrivial solutions for
$$\lambda_n = \left(\frac{n\pi}{\ell}\right)^2, \qquad n = 1, 2, \ldots,$$
with eigenfunctions $f_n(x) = \sin\left(\frac{n\pi}{\ell}x\right)$. Substituting this value $\lambda_n$ into Equation (3.12),
$$g' + c^2\left(\frac{n\pi}{\ell}\right)^2 g = 0,$$
with solution $g_n(t) = A_n e^{-c^2(n\pi/\ell)^2 t}$. Therefore the functions
$$u_n(x,t) = A_n e^{-c^2(n\pi/\ell)^2 t}\sin\left(\frac{n\pi}{\ell}x\right)$$
are particular solutions of Equation (3.11), and they satisfy the boundary conditions.
STEP 3. We consider
$$u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi}{\ell}x\right)e^{-c^2(n\pi/\ell)^2 t},$$
and impose the initial condition
$$u(x,0) = \sum_{n=1}^{\infty} u_n(x,0) = \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi}{\ell}x\right) = \varphi(x).$$
Identifying the $A_n$ with the coefficients of the Fourier sine series of the function $\varphi(x)$ on the interval $(0,\ell)$, we get
$$A_n = \frac{2}{\ell}\int_0^{\ell}\varphi(x)\sin\left(\frac{n\pi}{\ell}x\right)dx.$$
Example. Solve the heat equation on $(0,\pi)$ with $u(0,t) = u(\pi,t) = 0$ and initial temperature
$$u(x,0) = \varphi(x) = \sin(2x) + 5\sin(6x), \qquad 0 < x < \pi.$$
Here $\ell = \pi$ and
$$u(x,0) = \sum_{n=1}^{\infty} A_n\sin\left(\frac{n\pi}{\ell}x\right) = \sum_{n=1}^{\infty} A_n\sin(nx) = \sin(2x) + 5\sin(6x),$$
so $A_2 = 1$, $A_6 = 5$ and all the other coefficients vanish. Hence
$$u(x,t) = \sum_{n=1}^{\infty} A_n\sin(nx)\,e^{-c^2 n^2 t} = \sin(2x)\,e^{-4c^2 t} + 5\sin(6x)\,e^{-36c^2 t}. \tag{3.13}$$
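The identification of the $A_n$ with Fourier sine coefficients can be checked numerically (this check is not in the original text): integrating $\varphi(x)\sin(nx)$ should recover $A_2 = 1$, $A_6 = 5$ and zero for every other index.

```python
import math

def phi(x):
    # initial temperature from the example above
    return math.sin(2 * x) + 5 * math.sin(6 * x)

def sine_coeff(n, m=4000):
    # A_n = (2/pi) * integral_0^pi phi(x) sin(n x) dx  (midpoint rule)
    h = math.pi / m
    s = sum(phi((j + 0.5) * h) * math.sin(n * (j + 0.5) * h) for j in range(m))
    return 2 / math.pi * s * h

coeffs = [sine_coeff(n) for n in range(1, 9)]     # A_1 .. A_8
assert abs(coeffs[1] - 1) < 1e-6                  # A_2 = 1
assert abs(coeffs[5] - 5) < 1e-6                  # A_6 = 5
assert all(abs(c) < 1e-6 for i, c in enumerate(coeffs) if i not in (1, 5))
```

The midpoint rule is essentially exact here because $\varphi$ is a finite sine sum, so the discrete orthogonality of the sampled sines holds.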
Example. With $\ell = \pi$ and $c = 1$, take as initial temperature $\varphi(x) = 1$ for $\pi/2 < x < \pi$ and $\varphi(x) = 0$ for $0 < x \le \pi/2$. Then
$$A_n = \frac{2}{\pi}\int_{\pi/2}^{\pi}\sin(nx)\,dx = \frac{2}{\pi}\left[-\frac{\cos(nx)}{n}\right]_{\pi/2}^{\pi} = \frac{2}{n\pi}\left(\cos\frac{n\pi}{2} - \cos(n\pi)\right).$$
Therefore
$$u(x,t) = \sum_{n=1}^{\infty}\frac{2}{n\pi}\left(\cos\frac{n\pi}{2} - \cos(n\pi)\right)\sin(nx)\,e^{-n^2 t}. \tag{3.14}$$
Exercises

Exercise 3.1
Eliminate the constants from the following two-parameter families of surfaces to obtain a partial differential equation that has the family as a solution:
1. $z = ax + by + ab$.
2. $z = (x-a)^2 + (y-b)^2$.

Exercise 3.2
Find the linear partial differential equation that has as solution:
1. $\Phi\!\left(y - x^2,\ \dfrac{y+x}{z}\right) = 0$.
2. $z^2 = x^2 + \phi(y^2 - x^2)$,
where $\Phi$ (resp. $\phi$) is an arbitrary function, differentiable with respect to its arguments.

Exercise 3.3
Find the general solution of the following partial differential equations:
1. $x\dfrac{\partial z}{\partial x} + y\dfrac{\partial z}{\partial y} = 3z$.
2. $(x^2 + y^2)\dfrac{\partial z}{\partial x} + 2xy\dfrac{\partial z}{\partial y} = 0$.
3. $(x+y)\dfrac{\partial z}{\partial x} + (x-y)\dfrac{\partial z}{\partial y} = y^2 - 2xy - x^2$.
Exercise 3.4
Find the surface that satisfies $4yz\dfrac{\partial z}{\partial x} + \dfrac{\partial z}{\partial y} + 2y = 0$ and contains the curve $x + z = 2$, $y^2 + z^2 = 1$.

Exercise 3.5
Find the equation of all surfaces whose tangent planes pass through the point $(0,0,1)$.
Exercise 3.6
Find the equation of the surface such that, at each point $P$, the normal vector is orthogonal to the vector joining $P$ to the origin, and which contains the curve $\{z = 1,\ x^2 + y^2 = 1\}$.
Exercise 3.7
Find the general solution of the partial differential equations:
1. $\dfrac{\partial^2 z}{\partial x^2} - \dfrac{\partial^2 z}{\partial x\,\partial y} - 6\dfrac{\partial^2 z}{\partial y^2} = 0$.
2. $\dfrac{\partial^2 z}{\partial x^2} - 2\dfrac{\partial^2 z}{\partial x\,\partial y} + \dfrac{\partial^2 z}{\partial y^2} + 2\dfrac{\partial z}{\partial x} - 2\dfrac{\partial z}{\partial y} = 0$.
3. $\dfrac{\partial^2 z}{\partial x^2} - 2\dfrac{\partial^2 z}{\partial x\,\partial y} + \dfrac{\partial^2 z}{\partial y^2} + 2\dfrac{\partial z}{\partial x} - 2\dfrac{\partial z}{\partial y} = 4xe^{2y}$.

Exercise 3.8
Use the method of separation of variables to find solutions of the equation:
$$4\frac{\partial^2 z}{\partial x^2} + \frac{\partial^2 z}{\partial y^2} - 8\frac{\partial z}{\partial x} = 3e^{x+2y}.$$
Exercise 3.9
Find a nontrivial solution of each problem:
1. $\dfrac{\partial^2 z}{\partial t^2} = 4\dfrac{\partial^2 z}{\partial x^2}$, with $z(x,0) = 0$, $z_t(x,0) = 4x^3$.
2. $\dfrac{\partial^2 z}{\partial t^2} = 4\dfrac{\partial^2 z}{\partial x^2}$, with $z(x,0) = 0$, $z_t(x,0) = \dfrac{3}{40}\sin x - \dfrac{1}{40}\sin 3x$, $z(0,t) = z(\pi,t) = 0$.
3. $\dfrac{\partial z}{\partial t} = 2\dfrac{\partial^2 z}{\partial x^2}$, $t > 0$, with $z(x,0) = 6\sin x$, $z(0,t) = 0$.
Exercise 3.10
Use the Laplace transform to find the solution of:
1. $\dfrac{\partial^2 z}{\partial t\,\partial x} + \sin t = 0$, with $z(0,t) = 0$, $z(x,0) = x$.
2. $\dfrac{\partial^2 z}{\partial t\,\partial x} + \dfrac{\partial z}{\partial t} = 2t$, with $z(0,t) = t^2$, $z(x,0) = x$.

Exercise 3.11
Use the Laplace transform to find the solution of
$$\frac{\partial z}{\partial t} = 2\frac{\partial^2 z}{\partial x^2}, \qquad 0 < x < 3,\ t > 0,$$
with $z_x(0,t) = z_x(3,t) = 0$ and
$$z(x,0) = 4\cos\frac{2\pi x}{3} - 2\cos\frac{4\pi x}{3}.$$
Chapter 4

4.1. Complex Differentiation

4.1.1. Accumulation Points and Limits

The definition of the limit $\lim_{z\to z_0} f(z)$ does not require $z_0$ to be in the domain $G$ of $f$, but we must be able to approach the point $z_0$ as closely as we want through points of $G$, for which the function $f$ is well defined.
For example, writing $z = re^{i\theta}$,
$$\lim_{z\to 0}\frac{z}{|z^2|} = \lim_{r\to 0}\frac{re^{i\theta}}{r^2} = \lim_{r\to 0}\frac{e^{i\theta}}{r} = \infty.$$
The following properties of limits are similar to those for real functions, and we leave the proof to the reader.

Proposition 4.1.6. Let $f$ and $g$ be complex functions and $c, z_0 \in \mathbb{C}$. If $\lim_{z\to z_0} f(z)$ and $\lim_{z\to z_0} g(z)$ exist, then:
1. $\lim_{z\to z_0}\bigl(f(z) + g(z)\bigr) = \lim_{z\to z_0} f(z) + \lim_{z\to z_0} g(z)$.
2. $\lim_{z\to z_0} c\,f(z) = c\lim_{z\to z_0} f(z)$.
3. $\lim_{z\to z_0} f(z)\,g(z) = \lim_{z\to z_0} f(z)\cdot\lim_{z\to z_0} g(z)$.
4. $\lim_{z\to z_0}\dfrac{f(z)}{g(z)} = \dfrac{\lim_{z\to z_0} f(z)}{\lim_{z\to z_0} g(z)}$, provided $\lim_{z\to z_0} g(z) \ne 0$.
Continuity

Definition 4.1.7. Suppose $f$ is a complex function. Then $f$ is continuous at $z_0$ if $z_0$ is in the domain of the function and either $z_0$ is an isolated point of the domain or
$$\lim_{z\to z_0} f(z) = f(z_0).$$

Just as in the real case, we can take the limit inside a continuous function:

Proposition 4.1.8. If $f$ is continuous at an accumulation point $w_0$ and $\lim_{z\to z_0} g(z) = w_0$, then
$$\lim_{z\to z_0} f(g(z)) = f(w_0).$$
In other words,
$$\lim_{z\to z_0} f(g(z)) = f\left(\lim_{z\to z_0} g(z)\right).$$
This proposition implies that direct substitution is allowed when $f$ is continuous at the limit point. In particular, if $f$ is continuous at $w_0$ then $\lim_{w\to w_0} f(w) = f(w_0)$.
4.1.2. Differentiability and Holomorphicity

The derivative of $f$ at $z_0$ is
$$f'(z_0) = \lim_{h\to 0}\frac{f(z_0+h) - f(z_0)}{h},$$
provided this limit exists. A function differentiable at $z_0$ is continuous there; indeed,
$$\lim_{h\to 0}\bigl(f(z_0+h) - f(z_0)\bigr) = \lim_{h\to 0} h\cdot\frac{f(z_0+h) - f(z_0)}{h} = 0\cdot f'(z_0) = 0.$$
Example 4.1.13. The function $f(z) = \bar{z}^2$ is differentiable at $0$ and nowhere else (in particular, $f$ is not holomorphic at $0$):
$$\lim_{h\to 0}\frac{\overline{(z+h)}^2 - \bar{z}^2}{h} = \lim_{h\to 0}\frac{2\bar{z}\bar{h} + \bar{h}^2}{h} = \lim_{r\to 0}\frac{2\bar{z}\,re^{-i\theta} + r^2e^{-2i\theta}}{re^{i\theta}} = \lim_{r\to 0}\left(2\bar{z}\,e^{-2i\theta} + re^{-3i\theta}\right),$$
and this limit does not exist when $z \ne 0$ (it depends on $\theta$), and is $0$ when $z = 0$.

Some authors use the term analytic instead of holomorphic. Technically these two terms are synonymous, although they have different definitions.
If $f$ is invertible near $z_0$ with differentiable inverse, the derivative of the inverse is
$$(f^{-1})'(z_0) = \frac{1}{f'(f^{-1}(z_0))}.$$

Constant functions

The derivative of a constant complex function $f(z) = c$ defined on an open set $G$ is $0$ everywhere:
$$f'(z) = \lim_{h\to 0}\frac{f(z+h) - f(z)}{h} = \lim_{h\to 0}\frac{c - c}{h} = 0.$$
The converse is not completely true. As a counterexample, let $D(0,1)$ be the (open) disk centered at $z = 0$ with radius $1$ and $D(2,1)$ the (open) disk centered at $z = 2$ with radius $1$. The function $f\colon D(0,1)\cup D(2,1) \to \mathbb{C}$ defined by
$$f(z) = \begin{cases} 1 & \text{if } z \in D(0,1),\\ -1 & \text{if } z \in D(2,1), \end{cases}$$
has derivative $0$ but is not a constant function. The trouble is that the domain of $f$ is not connected. What is that?

Curves, Connected Sets and Regions. Let $I = [a,b] \subset \mathbb{R}$ be a closed interval. A curve in $\mathbb{C}$ is a continuous function $\gamma\colon I \to \mathbb{C}$. The first point of the curve is $z_1 = \gamma(a)$ and the last point is $z_2 = \gamma(b)$; we say the curve goes from $z_1$ to $z_2$. A curve is closed when $z_1 = z_2$; otherwise the curve is open. A curve is called a staircase curve if it is continuous and formed by horizontal and vertical segments.
4.1.3. The Cauchy–Riemann Equations

The relationship between the complex derivative and partial derivatives is very strong and is a powerful computational tool. It is described by the Cauchy–Riemann equations, named after the French mathematician Augustin L. Cauchy (1789–1857) and the German mathematician Georg F. B. Riemann (1826–1866), though the equations first appeared in works of d'Alembert and Euler.

Writing complex numbers in rectangular form $z = x + iy$, a complex function $f\colon G \to \mathbb{C}$ can be expressed in terms of its real and imaginary parts,
$$f(z) = f(x+iy) = u(x,y) + iv(x,y),$$
where $u(x,y)$ and $v(x,y)$ are two real-valued functions $u, v\colon G \to \mathbb{R}$.
Theorem 4.1.22.
(a) Suppose $f = u + iv$ is differentiable at $z_0 = x_0 + iy_0$. Then the partial derivatives of $f$ satisfy
$$\frac{\partial f}{\partial x}(z_0) = -i\,\frac{\partial f}{\partial y}(z_0).$$
This identity can be written out as the Cauchy–Riemann equations:
$$\begin{cases} u_x(x_0,y_0) = v_y(x_0,y_0),\\ v_x(x_0,y_0) = -u_y(x_0,y_0). \end{cases} \tag{4.1}$$
(b) Suppose $f$ is a complex function such that the partial derivatives $f_x$ and $f_y$ exist in a disk centered at $z_0$ and are continuous at $z_0$. If these partial derivatives satisfy the Cauchy–Riemann equations (4.1), then $f$ is differentiable at $z_0$ and
$$f'(z_0) = \frac{\partial f}{\partial x}(z_0).$$
Proof.
(a) If $f$ is differentiable at $z_0$ then $f'(z_0) = \lim_{h\to 0}\frac{f(z_0+h)-f(z_0)}{h}$, and this limit does not depend on the direction of $h = h_1 + ih_2$. So:

if $h_2 = 0$ then
$$f'(z_0) = \lim_{h_1\to 0}\frac{f(x_0+h_1, y_0) - f(x_0,y_0)}{h_1} = \frac{\partial f}{\partial x}(z_0);$$
if $h_1 = 0$ then
$$f'(z_0) = \lim_{h_2\to 0}\frac{f(x_0, y_0+h_2) - f(x_0,y_0)}{ih_2} = \frac{1}{i}\lim_{h_2\to 0}\frac{f(x_0, y_0+h_2) - f(x_0,y_0)}{h_2} = -i\,\frac{\partial f}{\partial y}(z_0);$$
therefore $\frac{\partial f}{\partial x}(z_0) = -i\frac{\partial f}{\partial y}(z_0)$. Hence
$$u_x(x_0,y_0) + iv_x(x_0,y_0) = -i\bigl(u_y(x_0,y_0) + iv_y(x_0,y_0)\bigr) = v_y(x_0,y_0) - iu_y(x_0,y_0),$$
and matching real and imaginary parts we obtain the equations (4.1).
(b) Write $h = h_1 + ih_2$. First we rearrange the difference quotient:
$$\frac{f(z_0+h)-f(z_0)}{h} = \frac{f(z_0+h)-f(z_0+h_1)}{h} + \frac{f(z_0+h_1)-f(z_0)}{h}$$
$$= \frac{h_2}{h}\cdot\frac{f((z_0+h_1)+ih_2)-f(z_0+h_1)}{h_2} + \frac{h_1}{h}\cdot\frac{f(z_0+h_1)-f(z_0)}{h_1}.$$
Second, we rearrange the partial derivative, using $i f_x(z_0) = f_y(z_0)$ (which is (4.1)):
$$f_x(z_0) = \frac{h}{h}f_x(z_0) = \frac{h_1+ih_2}{h}f_x(z_0) = \frac{h_1}{h}f_x(z_0) + \frac{h_2}{h}\,i f_x(z_0) = \frac{h_1}{h}f_x(z_0) + \frac{h_2}{h}f_y(z_0).$$
Now,
$$\frac{f(z_0+h)-f(z_0)}{h} - f_x(z_0) = \frac{h_1}{h}\left(\frac{f(z_0+h_1)-f(z_0)}{h_1} - f_x(z_0)\right) \tag{4.2}$$
$$\qquad\qquad + \frac{h_2}{h}\left(\frac{f((z_0+h_1)+ih_2)-f(z_0+h_1)}{h_2} - f_y(z_0)\right). \tag{4.3}$$
Since $\left|\frac{h_1}{h}\right| \le 1$ and $h_1 \to 0$ when $h \to 0$, the term (4.2) tends to zero. On the other hand, $\left|\frac{h_2}{h}\right| \le 1$ and $h \to 0$ implies $h_1, h_2 \to 0$, so by the continuity of the partial derivatives the term (4.3) also tends to zero. This proves that $f$ is differentiable at $z_0$ with $f'(z_0) = f_x(z_0)$.
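The Cauchy–Riemann condition $f_x = -i f_y$ is easy to probe numerically. The following sketch (an illustration added here, not an example from the text) checks it for the holomorphic function $f(z) = z^2$ and shows how it fails for $f(z) = \bar{z}$:

```python
def partials(f, x, y, eps=1e-6):
    # numerical partial derivatives of f(x + iy) with respect to x and y
    fx = (f(complex(x + eps, y)) - f(complex(x - eps, y))) / (2 * eps)
    fy = (f(complex(x, y + eps)) - f(complex(x, y - eps))) / (2 * eps)
    return fx, fy

def cauchy_riemann_defect(f, x, y):
    # f can be differentiable at x + iy only if f_x = -i f_y there
    fx, fy = partials(f, x, y)
    return abs(fx - (-1j) * fy)

assert cauchy_riemann_defect(lambda z: z**2, 1.3, -0.7) < 1e-6        # holomorphic
assert cauchy_riemann_defect(lambda z: z.conjugate(), 1.3, -0.7) > 1  # defect = 2
```

For $\bar{z}$ the defect is exactly $2$ at every point, consistent with Example 4.1.13's theme that conjugation destroys complex differentiability.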
4.2. Integration

4.2.1. Definition and Basic Properties

For a function which takes complex numbers as arguments, we integrate over a smooth curve $\gamma$ in $\mathbb{C}$. Let $f$ be a complex function defined on a domain $G \subseteq \mathbb{C}$, and let the curve be parametrized by $\gamma(t)$, $a \le t \le b$, with $\gamma(t) \in G$ for all $t \in [a,b]$ and $f$ continuous on $\gamma$. We call the integral of $f$ over $\gamma$
$$\int_\gamma f = \int_\gamma f(z)\,dz = \int_a^b f(\gamma(t))\,\gamma'(t)\,dt.$$
This definition can be naturally extended to piecewise smooth curves: if $\gamma$ is not differentiable at $c \in [a,b]$, with $\gamma_1 = \gamma\colon [a,c] \to \mathbb{C}$ and $\gamma_2 = \gamma\colon [c,b] \to \mathbb{C}$, then
$$\int_\gamma f(z)\,dz = \int_{\gamma_1} f(z)\,dz + \int_{\gamma_2} f(z)\,dz.$$
For example, let $\gamma$ be the polygonal path from $-i$ to $1$ and then from $1$ to $i$, and $f(z) = \bar{z}^2$. Parametrizing the first segment by $\gamma_1(t) = t - (1-t)i$ and the second by $\gamma_2(t) = (1-t) + ti$, $0 \le t \le 1$,
$$\int_\gamma \bar{z}^2\,dz = \int_0^1 \overline{\gamma_1(t)}^{\,2}(1+i)\,dt + \int_0^1 \overline{\gamma_2(t)}^{\,2}(-1+i)\,dt$$
$$= \int_0^1\Bigl[(2t^2-1) + i(-2t^2+4t-1)\Bigr]dt + \int_0^1\Bigl[(-2t^2+4t-1) + i(1-2t^2)\Bigr]dt = \frac{i-1}{3} + \frac{1+i}{3} = \frac{2i}{3}.$$
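A numerical check of the piecewise computation above (added here as an illustration): approximate each segment's integral with a midpoint Riemann sum and compare with $2i/3$.

```python
def contour_integral(f, gamma, dgamma, n=20000):
    # midpoint Riemann sum for the integral of f(gamma(t)) * gamma'(t) over [0, 1]
    h = 1.0 / n
    return sum(f(gamma((k + 0.5) * h)) * dgamma((k + 0.5) * h) for k in range(n)) * h

f = lambda z: z.conjugate() ** 2
# segment from -i to 1, then segment from 1 to i
seg1 = contour_integral(f, lambda t: t - (1 - t) * 1j, lambda t: 1 + 1j)
seg2 = contour_integral(f, lambda t: (1 - t) + t * 1j, lambda t: -1 + 1j)
assert abs((seg1 + seg2) - 2j / 3) < 1e-6
```

Because $\bar{z}^2$ is not holomorphic, this value depends on the path, not just on the endpoints.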
Proposition 4.2.2. The value of the integral does not change if we reparametrize the curve preserving the orientation. However, if the orientation is reversed, the integral changes sign.

Proof. Suppose $\tau\colon [c,d] \to [a,b]$ is differentiable for all $s \in [c,d]$. Then $\sigma = \gamma\circ\tau\colon [c,d] \to \mathbb{C}$ is another parametrization of the same curve, and, substituting $t = \tau(s)$,
$$\int_\sigma f(z)\,dz = \int_c^d f(\gamma(\tau(s)))\,\gamma'(\tau(s))\,\tau'(s)\,ds = \int_{\tau(c)}^{\tau(d)} f(\gamma(t))\,\gamma'(t)\,dt.$$
Hence:

If $\tau$ preserves the orientation, i.e. $\tau'(s) > 0$, then $\tau(c) = a$, $\tau(d) = b$ and
$$\int_\sigma f(z)\,dz = \int_a^b f(\gamma(t))\,\gamma'(t)\,dt = \int_\gamma f(z)\,dz.$$
If $\tau$ reverses the orientation, i.e. $\tau'(s) < 0$, then $\tau(c) = b$, $\tau(d) = a$ and
$$\int_\sigma f(z)\,dz = \int_b^a f(\gamma(t))\,\gamma'(t)\,dt = -\int_a^b f(\gamma(t))\,\gamma'(t)\,dt = -\int_\gamma f(z)\,dz.$$
Lemma 4.2.3. For any $w \in \mathbb{C}$ and $r > 0$,
$$\oint_{|z-w|=r}\frac{1}{z-w}\,dz = 2\pi i.$$
Proof. Parametrizing the circle as $z(t) = w + re^{it}$, $t \in [-\pi,\pi]$,
$$\oint_{|z-w|=r}\frac{dz}{z-w} = \int_{-\pi}^{\pi}\frac{1}{re^{it}}\,(rie^{it})\,dt = i(\pi+\pi) = 2\pi i.$$
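The lemma can be confirmed numerically with a trapezoidal sum over the circle (a check added here, not part of the original proof):

```python
import cmath, math

def circle_integral(f, center, r, n=5000):
    # trapezoidal sum for the integral of f(z) dz over |z - center| = r,
    # traversed counterclockwise; dz = i r e^{it} dt
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = center + r * cmath.exp(1j * k * h)
        total += f(z) * 1j * r * cmath.exp(1j * k * h)
    return total * h

w = 0.5 - 0.25j          # arbitrary center for the check
I = circle_integral(lambda z: 1 / (z - w), w, 2.0)
assert abs(I - 2j * math.pi) < 1e-9
```

Here the integrand $f(z(t))\,z'(t)$ is the constant $i$, so the sum is exact up to rounding, independent of $w$ and $r$.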
Proposition 4.2.4. Suppose $\gamma$ is a smooth curve, $f$ and $g$ are complex functions which are continuous on $\gamma$, and $c \in \mathbb{C}$. Then:
1. $\int_\gamma (f+g) = \int_\gamma f + \int_\gamma g$.
2. $\int_\gamma c\,f = c\int_\gamma f$.
Moreover, if $\gamma_1\colon [a_1,b_1] \to \mathbb{C}$ and $\gamma_2\colon [a_2,b_2] \to \mathbb{C}$ are curves with $\gamma_1(b_1) = \gamma_2(a_2)$, and $\gamma$ denotes the concatenated curve defined on $[a_1,\ b_1+b_2-a_2]$, then
$$\int_\gamma f(z)\,dz = \int_{a_1}^{b_1} f(\gamma_1(t))\,\gamma_1'(t)\,dt + \int_{b_1}^{b_1+b_2-a_2} f(\gamma_2(t-b_1+a_2))\,\gamma_2'(t-b_1+a_2)\,dt$$
$$= \int_{a_1}^{b_1} f(\gamma_1(t))\,\gamma_1'(t)\,dt + \int_{a_2}^{b_2} f(\gamma_2(s))\,\gamma_2'(s)\,ds = \int_{\gamma_1} f(z)\,dz + \int_{\gamma_2} f(z)\,dz.$$

4.2.2. Homotopies
The key computation: if $f$ is holomorphic and $h(t,s)$ is a smooth homotopy of closed curves, then
$$\frac{\partial}{\partial s}\int_0^1 f(h(t,s))\,\frac{\partial h}{\partial t}(t,s)\,dt = \int_0^1\left(f'(h(t,s))\,\frac{\partial h}{\partial s}\,\frac{\partial h}{\partial t} + f(h(t,s))\,\frac{\partial^2 h}{\partial s\,\partial t}\right)dt$$
$$= \int_0^1\frac{\partial}{\partial t}\left(f(h(t,s))\,\frac{\partial h}{\partial s}(t,s)\right)dt = \left[f(h(t,s))\,\frac{\partial h}{\partial s}(t,s)\right]_{t=0}^{t=1}.$$
4.2.3. Cauchy's Theorem and Cauchy's Integral Formula

If $f$ is holomorphic in $G$ and $\gamma$ is a closed curve contractible in $G$, then $\oint_\gamma f = 0$.

Example 4.2.9. The length of the circle of radius $R$ is $2\pi R$. To compute it, we parametrize the circle as $\gamma(t) = Re^{it}$, $0 \le t \le 2\pi$, and
$$\text{length}(\gamma) = \int_0^{2\pi}\left|Rie^{it}\right|dt = \int_0^{2\pi} R\,dt = 2\pi R.$$
If $f$ is continuous on the smooth curve $\gamma$, then
$$\left|\int_\gamma f(z)\,dz\right| = \left|\int_a^b f(\gamma(t))\,\gamma'(t)\,dt\right| \le \int_a^b |f(\gamma(t))|\,|\gamma'(t)|\,dt \le \max_{z\in\gamma}|f(z)|\int_a^b|\gamma'(t)|\,dt = \max_{z\in\gamma}|f(z)|\cdot\text{length}(\gamma).$$

Theorem 4.2.11 (Cauchy's Integral Formula). If $f$ is holomorphic and $C_r$ is a positively oriented circle with $\omega$ in its interior, then
$$\oint_{C_r}\frac{f(z)}{z-\omega}\,dz = 2\pi i\,f(\omega).$$
z
Moreover, using $\oint_{C_r}\frac{1}{z-\omega}\,dz = 2\pi i$ from Lemma 4.2.3,
$$\left|\oint_{C_r}\frac{f(z)}{z-\omega}\,dz - 2\pi i f(\omega)\right| = \left|\oint_{C_r}\frac{f(z)}{z-\omega}\,dz - f(\omega)\oint_{C_r}\frac{dz}{z-\omega}\right| = \left|\oint_{C_r}\frac{f(z)-f(\omega)}{z-\omega}\,dz\right|$$
$$\le \max_{z\in C_r}\left|\frac{f(z)-f(\omega)}{z-\omega}\right|\cdot\text{length}(C_r) = \max_{z\in C_r}\frac{|f(z)-f(\omega)|}{r}\cdot 2\pi r = 2\pi\max_{z\in C_r}|f(z)-f(\omega)|,$$
which tends to $0$ as $r \to 0$ by the continuity of $f$.
Discussion. Suppose $f$ is holomorphic on $G$, $\omega \in G$, and $\gamma$ is a closed curve contractible in $G$. For solving $\oint_\gamma \frac{f(z)}{z-\omega}\,dz$, Cauchy's integral formula brings us to the following discussion:

If $\omega$ is inside $\gamma$, then $\oint_\gamma\dfrac{f(z)}{z-\omega}\,dz = 2\pi i f(\omega)$ (Theorem 4.2.11).

Example.
$$\oint_{|z|=1}\frac{2z+i}{2z-i}\,dz = \oint_{|z|=1}\frac{z+i/2}{z-i/2}\,dz = 2\pi i\left(\frac{i}{2}+\frac{i}{2}\right) = -2\pi.$$

If $\omega$ is outside $\gamma$, then $\oint_\gamma\dfrac{f(z)}{z-\omega}\,dz = 0$ (Corollary 4.2.6).

Example.
$$\oint_{|z|=1}\frac{1}{z+1-i}\,dz = 0,$$
since $-1+i$, with $|-1+i| = \sqrt{2} > 1$, lies outside the unit circle.
If $\omega$ lies on $\gamma$, then $\oint_\gamma\dfrac{f(z)}{z-\omega}\,dz$ is not defined. For example, $-i$ lies on the circle $|z| = 1$, and
$$\oint_{|z|=1}\frac{1}{z+i}\,dz = \int_0^{2\pi}\frac{ie^{it}}{e^{it}+i}\,dt = \int_0^{2\pi}\frac{i\cos t - \sin t}{\cos t + i(\sin t + 1)}\,dt = \int_0^{2\pi}\frac{\cos t}{2\sin t + 2}\,dt + i\int_0^{2\pi}\frac{1}{2}\,dt = i\pi + \int_0^{2\pi}\frac{\cos t}{2\sin t+2}\,dt.$$
The remaining real integral is improper, since the integrand blows up at $t = 3\pi/2$. Computing it symmetrically,
$$\int_0^{2\pi}\frac{\cos t}{2\sin t+2}\,dt = \lim_{\epsilon\to 0^+}\left(\int_0^{\frac{3\pi}{2}-\epsilon}\frac{\cos t}{2\sin t+2}\,dt + \int_{\frac{3\pi}{2}+\epsilon}^{2\pi}\frac{\cos t}{2\sin t+2}\,dt\right)$$
$$= \lim_{\epsilon\to 0^+}\left(\left[\frac{\ln|2\sin t+2|}{2}\right]_0^{\frac{3\pi}{2}-\epsilon} + \left[\frac{\ln|2\sin t+2|}{2}\right]_{\frac{3\pi}{2}+\epsilon}^{2\pi}\right)$$
$$= \lim_{\epsilon\to 0^+}\left(\frac{\ln\left|2\sin\left(\frac{3\pi}{2}-\epsilon\right)+2\right| - \ln 2}{2} + \frac{\ln 2 - \ln\left|2\sin\left(\frac{3\pi}{2}+\epsilon\right)+2\right|}{2}\right) = \lim_{\epsilon\to 0^+}\frac{\ln|2-2\cos\epsilon| - \ln|2-2\cos\epsilon|}{2} = 0.$$
Hence $\oint_{|z|=1}\dfrac{1}{z+i}\,dz = i\pi$ (as an improper integral).
Example 4.2.12. Let $\gamma_r$ be the circle centered at $2i$ with radius $r$, oriented counterclockwise. We compute
$$\oint_{\gamma_r}\frac{dz}{z^2+1}.$$
Solution. The denominator factors as $z^2+1 = (z-i)(z+i)$, hence there are two relevant points, $z = i$ and $z = -i$. See Figure 4.4.

For $0 < r < 1$, the function $f(z) = \dfrac{1}{z^2+1}$ is holomorphic inside $\gamma_r$, so
$$\oint_{\gamma_r}\frac{dz}{z^2+1} = 0.$$
For $1 < r < 3$, only $z = i$ lies inside $\gamma_r$, and
$$\oint_{\gamma_r}\frac{dz}{z^2+1} = \oint_{\gamma_r}\frac{\frac{1}{z+i}}{z-i}\,dz = 2\pi i\left[\frac{1}{z+i}\right]_{z=i} = 2\pi i\cdot\frac{1}{i+i} = \pi.$$
For $r > 3$, there are two conflicting points inside $\gamma_r$. Introducing a new path, we obtain two counterclockwise curves $\gamma_1$ and $\gamma_2$ separating $i$ and $-i$, according to Figure 4.4. Thus
$$\oint_{\gamma_r}\frac{dz}{z^2+1} = \oint_{\gamma_1}\frac{\frac{1}{z+i}}{z-i}\,dz + \oint_{\gamma_2}\frac{\frac{1}{z-i}}{z+i}\,dz = 2\pi i\cdot\frac{1}{i+i} + 2\pi i\cdot\frac{1}{(-i)-i} = 0.$$

Figure 4.4: the circles $r = 1$ and $r = 3$ centered at $2i$, with the points $i$ and $-i$.
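All three cases of Example 4.2.12 can be verified numerically (a check added here, not part of the text) by integrating over circles of radius $0.5$, $2$ and $4$ centered at $2i$:

```python
import cmath, math

def circle_integral(f, center, r, n=4000):
    # trapezoidal rule for the integral of f(z) dz over |z - center| = r,
    # counterclockwise; spectrally accurate since the integrand is periodic in t
    h = 2 * math.pi / n
    return sum(f(center + r * cmath.exp(1j * k * h)) * 1j * r * cmath.exp(1j * k * h)
               for k in range(n)) * h

f = lambda z: 1 / (z * z + 1)
small = circle_integral(f, 2j, 0.5)   # no poles inside  -> 0
mid   = circle_integral(f, 2j, 2.0)   # only z = i inside -> pi
big   = circle_integral(f, 2j, 4.0)   # both poles inside -> 0
assert abs(small) < 1e-9
assert abs(mid - math.pi) < 1e-9
assert abs(big) < 1e-9
```

Note the middle value is the real number $\pi$: the two factors of $i$ in $2\pi i/(2i)$ cancel.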
4.2.4. Cauchy's Integral Formula for Derivatives

If $f$ is holomorphic, the integral formula extends to derivatives of all orders:
$$f^{(n)}(w) = \frac{n!}{2\pi i}\oint_\gamma\frac{f(z)}{(z-w)^{n+1}}\,dz.$$
Proof (for $n = 1$). For small $h$, an algebraic rearrangement gives
$$\left|\frac{f(w+h)-f(w)}{h} - \frac{1}{2\pi i}\oint_\gamma\frac{f(z)}{(z-w)^2}\,dz\right| = \left|\frac{h}{2\pi i}\oint_\gamma\frac{f(z)}{(z-w-h)(z-w)^2}\,dz\right| \le \frac{|h|}{2\pi}\max_{z\in\gamma}\left|\frac{f(z)}{(z-w-h)(z-w)^2}\right|\,\text{length}(\gamma). \tag{4.4}$$
Since $w \notin \gamma$, we have $|z-w| \ge k$ for some $k$, and, with $M = \max_{z\in\gamma}|f(z)|$,
$$\left|\frac{f(z)}{(z-w-h)(z-w)^2}\right| \le \frac{|f(z)|}{(|z-w|-|h|)\,|z-w|^2} \le \frac{M}{(k-|h|)\,k^2} \longrightarrow \frac{M}{k^3} \quad\text{as } h\to 0.$$
In conclusion, $\text{length}(\gamma)$ is constant and $\frac{f(z)}{(z-w-h)(z-w)^2}$ is bounded, therefore expression (4.4) goes to $0$ as $h \to 0$ and
$$f'(w) = \frac{1}{2\pi i}\oint_\gamma\frac{f(z)}{(z-w)^2}\,dz.$$
The proofs of the remaining formulas are performed similarly.
From this theorem an important consequence follows:

Corollary 4.2.14. If a complex function is differentiable, then it is infinitely differentiable.

Example 4.2.15. To compute $\oint_{|z|=1}\frac{\tan z}{z^3}\,dz$ we check that $\tan z$ is holomorphic inside the circle of radius $1$; then
$$\oint_{|z|=1}\frac{\tan z}{z^3}\,dz = \frac{2\pi i}{2!}\left.\frac{d^2\tan z}{dz^2}\right|_{z=0} = \pi i\cdot 2\sec^2(0)\tan(0) = 0.$$
Example 4.2.16. Compute
$$\oint_{|z|=1}\frac{1}{z^2(2z-1)^2}\,dz.$$
The function has two singularities, $z = 0$ and $z = \frac12$, both inside the circle $|z| = 1$. Writing $(2z-1)^2 = 4\left(z-\frac12\right)^2$ and introducing a path which separates $0$ and $\frac12$,
$$\oint_{|z|=1}\frac{dz}{z^2(2z-1)^2} = \oint_{\gamma_1}\frac{\frac{1}{4(z-\frac12)^2}}{z^2}\,dz + \oint_{\gamma_2}\frac{\frac{1}{4z^2}}{(z-\frac12)^2}\,dz$$
$$= 2\pi i\left.\frac{d}{dz}\frac{1}{4(z-\frac12)^2}\right|_{z=0} + 2\pi i\left.\frac{d}{dz}\frac{1}{4z^2}\right|_{z=\frac12} = 2\pi i\left.\frac{-1}{2(z-\frac12)^3}\right|_{z=0} + 2\pi i\left.\frac{-1}{2z^3}\right|_{z=\frac12}$$
$$= 2\pi i\cdot 4 + 2\pi i\cdot(-4) = 0.$$
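Both examples can be double-checked numerically (a check added here): the contour integrals over $|z|=1$ of $\tan(z)/z^3$ and $1/(z^2(2z-1)^2)$ should each vanish.

```python
import cmath, math

def circle_integral(f, center, r, n=6000):
    # trapezoidal rule for the integral of f(z) dz over |z - center| = r, counterclockwise
    h = 2 * math.pi / n
    return sum(f(center + r * cmath.exp(1j * k * h)) * 1j * r * cmath.exp(1j * k * h)
               for k in range(n)) * h

I1 = circle_integral(lambda z: cmath.tan(z) / z**3, 0, 1)          # Example 4.2.15
I2 = circle_integral(lambda z: 1 / (z**2 * (2*z - 1)**2), 0, 1)    # Example 4.2.16
assert abs(I1) < 1e-9
assert abs(I2) < 1e-9
```

The second result also follows from a general fact: for a rational function decaying at least like $1/z^2$ at infinity, the integral over a curve enclosing all its poles is $0$.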
4.2.5. Liouville's Theorem and the Fundamental Theorem of Algebra

Corollary 4.2.19. Any nonconstant polynomial of degree $n$ has exactly $n$ complex roots (not necessarily all different).
4.2.6. Antiderivatives

As in the real case, we call an antiderivative (or primitive) of a complex function $f(z)$ on $G$ a holomorphic function $F$ on $G$ such that $F'(z) = f(z)$.

The function $\dfrac1z$ has no antiderivative on $\mathbb{C}\setminus\{0\}$, since $\oint_{|z|=r}\dfrac{1}{z}\,dz = 2\pi i \ne 0$.
Exercises

Exercise 4.1
Evaluate the following limits or explain why they do not exist.
1. $\lim_{z\to i}\dfrac{iz^3-1}{z+i}$.
2. $\lim_{z\to 1}\dfrac{|z|-1}{z+1}$.
3. $\lim_{z\to 1-i}\bigl(x + i(2x+y)\bigr)$.

Exercise 4.2
Apply the definition of the derivative to give a direct proof that $f'(z) = -\dfrac{1}{z^2}$ when $f(z) = \dfrac1z$.

Exercise 4.3
Find the derivative of the function $T(z) = \dfrac{az+b}{cz+d}$, where $a, b, c, d \in \mathbb{C}$ and $ad - bc \ne 0$. When is $T'(z) = 0$?
Exercise 4.4
If $u(x,y)$ and $v(x,y)$ are differentiable, does it follow that $f(z) = u(x,y) + iv(x,y)$ is differentiable? If not, provide a counterexample.

Exercise 4.5
Where are the following functions differentiable? Where are they holomorphic? Determine their derivatives at points where they are differentiable.
1. $f(z) = e^{x}e^{iy}$.
2. $f(z) = 2x + ixy^2$.
3. $f(z) = x^2 + iy^2$.
4. $f(z) = e^{x}e^{-iy}$.
6. $f(z) = \operatorname{Im} z$.
7. $f(z) = |z|^2 = x^2 + y^2$.
8. $f(z) = z\operatorname{Im} z$.
9. $f(z) = \dfrac{ix+1}{y}$.
12. $f(z) = z^2 - \bar{z}^2$.
Exercise 4.6
Consider the function
$$f(z) = \begin{cases}\dfrac{xy(x+iy)}{x^2+y^2} & \text{if } z \ne 0,\\[4pt] 0 & \text{if } z = 0.\end{cases}$$
(As always, $z = x+iy$.) Show that $f$ satisfies the Cauchy–Riemann equations at the origin $z = 0$, yet $f$ is not differentiable at the origin. Why doesn't this contradict Theorem 4.1.22 (b)?

Exercise 4.7
Prove: If $f$ is holomorphic in the region $G \subseteq \mathbb{C}$ and always real valued, then $f$ is constant in $G$. (Hint: Use the Cauchy–Riemann equations to show that $f' = 0$.)
Exercise 4.8
Prove: If $f(z)$ and $\overline{f(z)}$ are both holomorphic in the region $G \subseteq \mathbb{C}$, then $f(z)$ is constant in $G$.

Exercise 4.9
Suppose that $f = u + iv$ is holomorphic. Find $v$ given $u$:
1. $u = x^2 - y^2$.
2. $u = \cosh y\,\sin x$.
3. $u = 2x^2 + x + 1 - 2y^2$.
4. $u = \dfrac{x}{x^2+y^2}$.

Exercise 4.10
Suppose $f(z)$ is entire, with real and imaginary parts $u(x,y)$ and $v(x,y)$ satisfying $u(x,y)\,v(x,y) = 3$ for all $z$. Show that $f$ is constant.

Exercise 4.11
The general real homogeneous quadratic function of $(x,y)$ is
$$u(x,y) = ax^2 + bxy + cy^2,$$
where $a$, $b$ and $c$ are real constants.
1. Show that $u$ is harmonic if and only if $a = -c$.
2. If $u$ is harmonic, show that it is the real part of a function of the form $f(z) = Az^2$, where $A$ is a complex constant. Give a formula for $A$ in terms of the constants $a$, $b$ and $c$.
Exercise 4.12
Use the definition of length to find the length of the following curves:
1. $\gamma(t) = 3t + i$ for $-1 \le t \le 1$.

Exercise 4.13
Evaluate $\int_\gamma \frac1z\,dz$ where $\gamma(t) = \sin t + i\cos t$, $0 \le t \le 2\pi$.

Exercise 4.14
Integrate the following functions over the circle $|z| = 2$, oriented counterclockwise:
1. $z + \bar{z}$.
2. $z^2 - 2z + 3$.
3. $\dfrac{1}{z^4}$.
4. $xy$.

Exercise 4.15
Evaluate the integrals $\int_\gamma x\,dz$, $\int_\gamma y\,dz$, $\int_\gamma z\,dz$ and $\int_\gamma \bar{z}\,dz$ along each of the following paths. Note that you can get the second two integrals very easily after you calculate the first two, by writing $z$ and $\bar{z}$ as $x \pm iy$.
1. $\gamma$ is the line segment from $0$ to $1-i$.
Exercise 4.23
Evaluate
$$\int_0^{2\pi}\frac{d\theta}{2+\sin\theta}$$
by writing the sine function in terms of the exponential function and making the substitution $z = e^{i\theta}$ to turn the real integral into a complex one.

Exercise 4.24
Find $\oint_{|z+1|=2}\dfrac{z^2}{4-z^2}\,dz$.
Exercise 4.25
What is $\oint_{|z|=1}\dfrac{\sin z}{z}\,dz$?

Exercise 4.26
Evaluate $\oint_{|z|=2}\dfrac{e^z}{z(z-3)}\,dz$ and $\oint_{|z|=4}\dfrac{e^z}{z(z-3)}\,dz$.

Exercise 4.27
Compute the following integrals, where $C$ is the boundary of the square with corners at $\pm4\pm4i$:
1. $\oint_C \dfrac{e^z}{z^3}\,dz$.
2. $\oint_C \dfrac{e^z}{(z-\pi i)^2}\,dz$.
3. $\oint_C \dfrac{\sin(2z)}{(z-\pi)^2}\,dz$.
4. $\oint_C \dfrac{e^z\cos z}{(z-\pi)^3}\,dz$.
Exercise 4.28
Integrate the following functions over the circle $|z| = 3$, oriented counterclockwise:
1. $\operatorname{Log}(z-4i)$.
2. $\dfrac{1}{z-\frac12}$.
4. $\dfrac{\exp z}{z^3}$.
5. $\dfrac{\cos(z^2)}{z}$.
6. $\dfrac{i}{z^3}$.
7. $\dfrac{1}{z^2-4}$.
8. $\dfrac{\sin z}{\left(z^2+\frac12\right)^2}$.
9. $\dfrac{1}{(z+4)(z^2+1)}$.

Exercise 4.29
Evaluate
$$\oint_{|z|=3}\frac{e^{2z}\,dz}{(z-1)^2(z-2)}.$$

Exercise 4.30
Evaluate $\oint_{|z|=3}\dfrac{\exp z}{(z-\omega)^2}\,dz$, where $\omega$ is any fixed complex number with $|\omega| \ne 3$.
Chapter 5

(ii) and for every real number $K > 0$ there is an integer $N$ such that for all $n \ge N$ we have $|a_n| > K$. Then the sequence $\{a_n\}$ is divergent; in symbols,
$$\lim_{n\to\infty} a_n = \infty.$$

Example 5.1.2.
1. The sequence $a_n = \dfrac{i^n}{n}$ converges to $0$ because $\left|\dfrac{i^n}{n} - 0\right| = \dfrac1n \to 0$ as $n \to \infty$.
2. The sequence $a_n = 2n + \dfrac{i}{n}$ diverges because $|a_n| \ge \left|2n\right| - \left|\dfrac{i}{n}\right| = 2n - \dfrac1n \to \infty$.
Given a sequence $(a_n)$, the series $\sum_{n\ge0} a_n$ is the sequence of partial sums
$$b_n = \sum_{k=0}^{n} a_k = a_0 + a_1 + \cdots + a_n.$$
We write $\sum_{n=0}^{\infty} a_n = a$ when $\lim_{n\to\infty} b_n = a$, and we say the series converges absolutely when $\sum_{k\ge0}|a_k| < \infty$.

Example. The series $\sum_{n\ge1}\dfrac{1}{n^p}$ converges for $p > 1$ and diverges for $p \le 1$. The alternating series $\sum_{n\ge1}\dfrac{(-1)^n}{n}$ converges, but not absolutely.
Figure 5.1: Continuous functions $f_n(x) = \sin^n(x)$ on $[0,\pi]$ converge pointwise to a discontinuous function.

Proposition 5.1.9. Suppose the $f_n$ are continuous on the smooth curve $\gamma$ and converge uniformly on $\gamma$ to $f$. Then
$$\lim_{n\to\infty}\int_\gamma f_n = \int_\gamma f.$$
Proof. Given $\epsilon > 0$, for $n > N$ we have $\max_{z\in\gamma}|f_n(z) - f(z)| < \dfrac{\epsilon}{\text{length}(\gamma)}$. Hence
$$\left|\int_\gamma f_n - \int_\gamma f\right| = \left|\int_\gamma (f_n - f)\right| \le \max_{z\in\gamma}|f_n(z)-f(z)|\cdot\text{length}(\gamma) < \epsilon.$$
Theorem 5.1.10 (Weierstrass $M$-test). Suppose $|f_n(z)| \le M_n$ for all $z \in G$ and $\sum M_n$ converges, say to $M$. Then the series
$$f(z) = \sum_{n\ge0} f_n(z)$$
converges absolutely and uniformly on $G$.

Proof (of uniformity). To see that $\sum f_n$ converges uniformly to $f$, suppose $\epsilon > 0$. Since $\sum M_n$ converges, there is an integer $N$ such that $M - \sum_{n=0}^{k} M_n < \epsilon$ for all $k \ge N$. Then for all $z \in G$, if $k \ge N$,
$$\left|\sum_{n=0}^{k} f_n(z) - f(z)\right| = \left|\sum_{n>k} f_n(z)\right| \le \sum_{n>k}|f_n(z)| \le \sum_{n>k} M_n = M - \sum_{n=0}^{k} M_n < \epsilon.$$

A power series centered at $z_0$ is a series of the form $\sum_{k\ge0} c_k(z-z_0)^k$. Its prototype is the geometric series $\sum_{k\ge0} z^k$.
Lemma 5.1.12. The geometric series $\sum_{k\ge0} z^k$ converges absolutely in the open disk $|z| < 1$ to the function $\frac{1}{1-z}$, and it diverges in the closed set $|z| \ge 1$. The convergence is uniform on any set of the form $\bar{D}_r = \{z\in\mathbb{C} : |z| \le r\}$ for any $r < 1$.

Proof. Let $a_n = \sum_{k=0}^n z^k = 1 + z + z^2 + \cdots + z^n$. Then $(1-z)a_n = 1 - z^{n+1}$, so for $z \ne 1$,
$$a_n = \frac{1-z^{n+1}}{1-z} \longrightarrow \frac{1}{1-z} \quad\text{if } |z| < 1,$$
while $a_n$ has no limit if $|z| > 1$; therefore $\sum_{k\ge0} z^k = \lim_n a_n = \frac{1}{1-z}$ for $|z| < 1$, and the series diverges for $|z| > 1$. For $|z| = 1$, the series of absolute values, $\sum_k |z|^k = 1 + 1 + 1 + \cdots$, diverges. On the other hand, for $z \in \bar{D}_r$ we have $|z^k| \le r^k = M_k$, and by the Weierstrass $M$-test, Theorem 5.1.10, $\sum_{k\ge0} z^k$ converges uniformly on $\bar{D}_r$.
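The geometric bound $|a_n - \frac{1}{1-z}| \le \frac{|z|^{n+1}}{|1-z|}$ implied by the proof can be observed numerically; the point $z$ below is an arbitrary choice with $|z| = 0.5$:

```python
def geometric_partial_sum(z, n):
    # a_n = 1 + z + z^2 + ... + z^n
    s, term = 0j, 1 + 0j
    for _ in range(n + 1):
        s += term
        term *= z
    return s

z = 0.3 + 0.4j                       # |z| = 0.5 < 1, inside the disk of convergence
limit = 1 / (1 - z)
err = abs(geometric_partial_sum(z, 60) - limit)
assert err < 1e-12                   # tail is bounded by |z|^61 / |1 - z|
```

For $|z| \ge 1$ the terms do not tend to $0$, so no partial-sum experiment will ever settle down there.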
Theorem 5.1.13. For any power series $\sum_{k\ge0} c_k(z-z_0)^k$ there exists $0 \le R \le \infty$, called the radius of convergence, such that:
(a) If $r < R$ then $\sum_{k\ge0} c_k(z-z_0)^k$ converges absolutely and uniformly on the closed disk $|z-z_0| \le r$ of radius $r$ centered at $z_0$.
(b) If $|z-z_0| > R$ then the series $\sum_{k\ge0} c_k(z-z_0)^k$ diverges.
For $0 < R < \infty$ the open disk $|z-z_0| < R$ is called the region of convergence. For $R = \infty$ the region of convergence is the entire complex plane $\mathbb{C}$. For $R = 0$ the region of convergence is the empty set.

All the tests for finding the radius of convergence studied in Real Analysis remain valid in Complex Analysis.

Proof. Omitted.
From this theorem we know that power series are continuous on their region of convergence, and from Proposition 5.1.9 we have the following property of power series:

Corollary 5.1.14. Suppose the curve $\gamma$ is contained in the region of convergence of the power series. Then
$$\int_\gamma \sum_{k=0}^{\infty} c_k(z-z_0)^k\,dz = \sum_{k=0}^{\infty} c_k\int_\gamma (z-z_0)^k\,dz.$$
In particular, if $\gamma$ is closed,
$$\oint_\gamma \sum_{k=0}^{\infty} c_k(z-z_0)^k\,dz = 0.$$
Theorem 5.1.15. In its region of convergence $|z-z_0| < R$, a power series $f(z) = \sum_{k\ge0} c_k(z-z_0)^k$ is holomorphic, with
$$f'(z) = \sum_{k\ge1} k\,c_k(z-z_0)^{k-1},$$
and this series has the same radius of convergence $R$.

Proof. Since $f$ is holomorphic, with $C_r$ the circle of radius $r < R$ centered at $z_0$, Cauchy's integral formula gives
$$f'(z) = \frac{1}{2\pi i}\oint_{C_r}\frac{f(\zeta)}{(\zeta-z)^2}\,d\zeta = \frac{1}{2\pi i}\oint_{C_r}\frac{\sum_{k\ge0} c_k(\zeta-z_0)^k}{(\zeta-z)^2}\,d\zeta = \sum_{k=0}^{\infty} c_k\,\frac{1}{2\pi i}\oint_{C_r}\frac{(\zeta-z_0)^k}{(\zeta-z)^2}\,d\zeta$$
$$= \sum_{k=0}^{\infty} c_k\left.\frac{d}{d\zeta}(\zeta-z_0)^k\right|_{\zeta=z} = \sum_{k=0}^{\infty} c_k\,k(z-z_0)^{k-1}.$$
The radius of convergence of $f'(z)$ is at least $R$ (since we have shown that the series converges whenever $|z-z_0| < R$), and it cannot be larger than $R$ by comparison to the series for $f(z)$, since the coefficients of $(z-z_0)f'(z)$ are bigger than the corresponding ones for $f(z)$.
5.1.2. Taylor Series

A complex function which can be expressed as a power series $f(z) = \sum_{k\ge0} c_k(z-z_0)^k$ on a disk centered at $z_0$ is called analytic at $z_0$. Theorem 5.1.15 says that a function analytic at $z_0$ is holomorphic at $z_0$. Moreover, $f$ has derivatives of every order at $z_0$:
$$f^{(n)}(z) = \sum_{k\ge n} k(k-1)\cdots(k-n+1)\,c_k(z-z_0)^{k-n}.$$

Conversely, if $f$ is holomorphic on a disk $D$ centered at $z_0$, then $f$ is analytic at $z_0$, with
$$f(z) = \sum_{k\ge0}\left(\frac{1}{2\pi i}\oint_\gamma\frac{f(\zeta)}{(\zeta-z_0)^{k+1}}\,d\zeta\right)(z-z_0)^k,$$
where $\gamma$ is any positively oriented, simple, closed, smooth curve in $D$ for which $z_0$ is inside $\gamma$.
Proof. Let $g(z) = f(z+z_0)$, so $g$ is a function holomorphic in $|z| < R$. Fix $0 < r < R$; by Cauchy's integral formula, with $|\zeta| = r$ positively oriented and $|z| < r$,
$$g(z) = \frac{1}{2\pi i}\oint_{|\zeta|=r}\frac{g(\zeta)}{\zeta-z}\,d\zeta = \frac{1}{2\pi i}\oint_{|\zeta|=r}g(\zeta)\,\frac1\zeta\cdot\frac{1}{1-\frac{z}{\zeta}}\,d\zeta = \frac{1}{2\pi i}\oint_{|\zeta|=r}g(\zeta)\,\frac1\zeta\sum_{k\ge0}\left(\frac{z}{\zeta}\right)^k d\zeta$$
$$= \sum_{k\ge0}\left(\frac{1}{2\pi i}\oint_{|\zeta|=r}\frac{g(\zeta)}{\zeta^{k+1}}\,d\zeta\right)z^k.$$
Translating back by $z_0$,
$$f(z) = \sum_{k\ge0}\left(\frac{1}{2\pi i}\oint_{|\zeta-z_0|=r}\frac{f(\zeta)}{(\zeta-z_0)^{k+1}}\,d\zeta\right)(z-z_0)^k.$$
Combining this with Cauchy's formula for derivatives, the coefficients are $c_k = \frac{f^{(k)}(z_0)}{k!}$, and we obtain the Taylor series
$$f(z) = \sum_{k=0}^{\infty}\frac{f^{(k)}(z_0)}{k!}(z-z_0)^k.$$
For example,
$$\exp z = \sum_{k\ge0}\frac{z^k}{k!}.$$
Similarly,
$$\sin z = \frac{1}{2i}\bigl(\exp(iz) - \exp(-iz)\bigr) = \frac{1}{2i}\left(\sum_{k\ge0}\frac{(iz)^k}{k!} - \sum_{k\ge0}\frac{(-iz)^k}{k!}\right)$$
$$= \frac{1}{2i}\left(\left(1 + iz + \frac{i^2z^2}{2!} + \frac{i^3z^3}{3!} + \frac{i^4z^4}{4!} + \cdots\right) - \left(1 - iz + \frac{i^2z^2}{2!} - \frac{i^3z^3}{3!} + \cdots\right)\right)$$
$$= \frac{1}{2i}\left(2iz + \frac{2i^3z^3}{3!} + \frac{2i^5z^5}{5!} + \cdots\right) = z + \frac{i^2z^3}{3!} + \frac{i^4z^5}{5!} + \cdots$$
$$= z - \frac{z^3}{3!} + \frac{z^5}{5!} - \frac{z^7}{7!} + \cdots = \sum_{k\ge0}(-1)^k\frac{z^{2k+1}}{(2k+1)!}.$$
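The series can be compared directly against the built-in complex sine (a numerical illustration added here):

```python
import cmath, math

def sin_series(z, terms=25):
    # partial sum of sum_{k>=0} (-1)^k z^(2k+1) / (2k+1)!
    return sum((-1)**k * z**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

for z in (0.7 - 0.2j, 2 + 1j, -3j):
    assert abs(sin_series(z) - cmath.sin(z)) < 1e-12
```

Since the radius of convergence is infinite, the partial sums converge for every complex argument, though more terms are needed for larger $|z|$.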
5.1.3. Laurent Series

A Laurent series centered at $z_0$ is a series of the form
$$\sum_{k=-\infty}^{\infty} c_k(z-z_0)^k = \sum_{k\le-1} c_k(z-z_0)^k + \sum_{k\ge0} c_k(z-z_0)^k = \sum_{k\ge1}\frac{c_{-k}}{(z-z_0)^k} + \sum_{k\ge0} c_k(z-z_0)^k.$$
The first series converges for $\left|\frac{1}{z-z_0}\right| < \frac{1}{R_1}$, i.e. for $|z-z_0| > R_1$, and the second converges for $|z-z_0| < R_2$; then both series converge on the annulus
$$R_1 < |z-z_0| < R_2.$$
Obviously the Laurent series does not converge anywhere if $R_1 \ge R_2$.

The previous theorems show that a Laurent series is holomorphic in its region of convergence $R_1 < |z-z_0| < R_2$ if $R_1 < R_2$. The fact that we can conversely represent any function holomorphic in such an annulus by a Laurent series is the substance of the next theorem.
Theorem 5.1.21. Suppose $f$ is a function which is holomorphic in $D = \{z\in\mathbb{C} : R_1 < |z-z_0| < R_2\}$. Then $f$ can be represented in $D$ as a Laurent series centered at $z_0$:
$$f(z) = \sum_{k\in\mathbb{Z}} c_k(z-z_0)^k \qquad\text{with}\qquad c_k = \frac{1}{2\pi i}\oint_\gamma\frac{f(\zeta)}{(\zeta-z_0)^{k+1}}\,d\zeta,$$
where $\gamma$ is any positively oriented, simple, closed, smooth curve in the annulus $D$.
Proof. Omitted.
Example 5.1.22. The function $\exp(1/z)$ is not holomorphic at $z = 0$, but it is holomorphic in the annulus $0 < |z| < \infty$. We evaluate its Laurent series centered at $0$:
$$\exp\frac1z = \sum_{k\ge0}\frac{(1/z)^k}{k!} = \sum_{k\ge0}\frac{1}{k!}\,z^{-k} = \cdots + \frac{1}{3!}z^{-3} + \frac{1}{2!}z^{-2} + z^{-1} + 1.$$
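The coefficient formula of Theorem 5.1.21 can be tested numerically on this example (a check added here): for $\exp(1/z)$ we expect $c_{-1} = 1$, $c_{-3} = 1/3!$, and no positive-power coefficients.

```python
import cmath, math

def laurent_coeff(f, k, r=1.0, n=1000):
    # c_k = 1/(2 pi i) * integral of f(z)/z^(k+1) dz over |z| = r (trapezoidal rule)
    h = 2 * math.pi / n
    total = 0j
    for j in range(n):
        z = r * cmath.exp(1j * j * h)
        total += f(z) / z**(k + 1) * 1j * z    # dz = i z dt on the circle
    return total * h / (2j * math.pi)

f = lambda z: cmath.exp(1 / z)
assert abs(laurent_coeff(f, -1) - 1) < 1e-9
assert abs(laurent_coeff(f, -3) - 1 / math.factorial(3)) < 1e-9
assert abs(laurent_coeff(f, 1)) < 1e-9           # no positive powers
```

On the circle the trapezoidal sum is a discrete Fourier transform, so the extracted coefficients are essentially exact.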
Example 5.1.23. Consider $f(z) = \dfrac{z^3-z}{z-1}$ centered at $z_0 = 1$. Since $z^3 - z = z(z+1)(z-1)$,
$$f(z) = \frac{z(z+1)(z-1)}{z-1} = z + z^2,$$
and, writing $w = z-1$,
$$f(z) = \frac{(w+1)^3 - (w+1)}{w} = (w+1)(w+2) = 2 + 3w + w^2 = 2 + 3(z-1) + (z-1)^2.$$
Example 5.1.24. Find the first three terms of the Laurent series of $\cot z$ centered at $z = 0$. We know
$$\cot z = \frac{\cos z}{\sin z} = \frac{1 - \frac{z^2}{2!} + \frac{z^4}{4!} - \frac{z^6}{6!} + \cdots}{z - \frac{z^3}{3!} + \frac{z^5}{5!} - \cdots},$$
and performing the long division we obtain
$$f(z) = z^{-1} - \frac{z}{3} - \frac{z^3}{45} - \cdots$$

5.2. Isolated Singularities and Residues

5.2.1. Classification of Singularities
Example. The singularity of $f(z) = \dfrac{z}{\sin z}$ at $z_0 = 0$ is removable, because
$$\lim_{z\to0}\frac{z}{\sin z} = 1.$$
So, using the Taylor series of $\sin z$ at $0$ and long division, we obtain the Laurent series of $f(z)$ at $0$:
$$g(z) = 1 + \frac{z^2}{6} + \frac{7z^4}{360} + \cdots,$$
which is in fact a power series, so $f$ extends holomorphically across $0$ with value $g(0) = 1$.
Example. The function $e^{1/z}$ has an essential singularity at $z = 0$: along the positive real axis $\lim_{x\to0^+} e^{1/x} = \infty$, while along the negative real axis
$$\lim_{x\to0^-} e^{1/x} = \lim_{x\to0^+}\frac{1}{e^{1/x}} = 0,$$
so $\lim_{z\to0} e^{1/z}$ does not exist.
The next proposition gives a characterization of non-essential singularities.

Proposition 5.2.3. Suppose $z_0$ is a non-essential isolated singularity of $f$. Then there exists an integer $n \ge 0$ such that
$$\lim_{z\to z_0}(z-z_0)^{n+1}f(z) = 0. \tag{5.1}$$

Proof. If $z_0$ is removable, then $f$ is bounded near $z_0$ and $n = 0$ works. If $z_0$ is a pole, then $\varphi = 1/f$ (with $\varphi(z_0) = 0$) is holomorphic on $|z-z_0| < R$, hence has a Taylor series expansion at $z_0$, $\varphi(z) = \sum_{k\ge0} c_k(z-z_0)^k$. Let $n$ be the smallest index such that $c_n \ne 0$. Obviously $n > 0$, because $z_0$ is a zero of $\varphi$, and $g(z) = \sum_{k\ge n} c_k(z-z_0)^{k-n}$ verifies $g(z_0) \ne 0$. Then
$$\lim_{z\to z_0}(z-z_0)^{n+1}f(z) = \lim_{z\to z_0}\frac{(z-z_0)^{n+1}}{(z-z_0)^n\,g(z)} = \lim_{z\to z_0}\frac{z-z_0}{g(z)} = 0.$$
Conversely, note that
$$|f(z)| = \frac{1}{|g(z)|\,|z-z_0|^n} \longrightarrow \infty \quad\text{as } z\to z_0,$$
and $z_0$ is a pole (of order $n$).
Remark. Sometimes, for functions of the form $f(z) = \dfrac{g(z)}{h(z)}$, to find the poles we study the values where $h(z) = 0$. Suppose $z_0$ is such that $g(z_0) \ne 0$ and $h(z_0) = 0$. Then $z_0$ is a pole of $f$, and its order is the multiplicity¹ of $z_0$ as a zero of $h$.

Example 5.2.4. The function $f(z) = \dfrac{1+z}{(z+i)^3}$ has a unique singularity at $z = -i$. This singularity is a pole of order $3$. Indeed,
$$\lim_{z\to-i}(z+i)^4 f(z) = \lim_{z\to-i}(z+i)(1+z) = 0.$$

Example 5.2.5. The function $f(z) = \dfrac{\sin z}{z^3}$ has a pole of order $2$ at $0$ (in spite of $0$ being a zero of multiplicity $3$ of $z^3$):
$$\lim_{z\to0}\frac{\sin z}{z^3} = \infty \qquad\text{and}\qquad \lim_{z\to0} z^3\,\frac{\sin z}{z^3} = 0 \quad (n = 2 \text{ is the smallest}).$$
Theorem. Suppose $z_0$ is an isolated singularity of $f$ with Laurent series $f(z) = \sum_{k=-\infty}^{\infty} c_k(z-z_0)^k$. Then:
a) $z_0$ is removable if and only if there are no negative exponents (that is, the Laurent series is a power series);
b) $z_0$ is a pole if and only if there are finitely many negative exponents, and the order of the pole is the largest $n$ such that $c_{-n} \ne 0$; and
c) $z_0$ is essential if and only if there are infinitely many negative exponents.

¹The multiplicity of a zero $z_0$ of $g(z)$ is the smallest positive integer $n$ such that there exists a holomorphic function $\varphi(z)$ with $\varphi(z_0) \ne 0$ and $g(z) = (z-z_0)^n\varphi(z)$.
Proof. Exercise.

Example 5.2.7.
1. We know from Example 5.2.5 that $0$ is a pole of order $2$ of $f(z) = \dfrac{\sin z}{z^3}$. Furthermore,
$$f(z) = \frac{\sin z}{z^3} = \frac{z - \frac{z^3}{6} + \frac{z^5}{120} - \cdots}{z^3} = \frac{1}{z^2} - \frac16 + \frac{z^2}{120} - \cdots,$$
in agreement with the theorem.
2. The Laurent series of $\exp(1/z)$ at $0$, computed in Example 5.1.22, is $\cdots + \frac{1}{3!}z^{-3} + \frac{1}{2!}z^{-2} + z^{-1} + 1$, with infinitely many negative exponents, so $0$ is an essential singularity.
5.2.2. Residues

Suppose $z_0$ is an isolated singularity of $f$ with Laurent series $\sum_k c_k(z-z_0)^k$, and $\gamma$ is a small positively oriented circle around $z_0$. Integrating the Laurent series term by term,
$$\oint_\gamma f(z)\,dz = \cdots + c_{-2}\oint_\gamma\frac{dz}{(z-z_0)^2} + c_{-1}\underbrace{\oint_\gamma\frac{dz}{z-z_0}}_{2\pi i} + c_0\oint_\gamma dz + c_1\oint_\gamma(z-z_0)\,dz + \cdots,$$
and every term vanishes except the one with exponent $-1$. From this it follows that the integral depends only on the coefficient $c_{-1}$ of the Laurent series:
$$\oint_\gamma f(z)\,dz = 2\pi i\,c_{-1}.$$
This coefficient $c_{-1}$ is called the residue of $f(z)$ at the singularity $z_0$, and it will be denoted $\operatorname{Res}(f(z), z_0)$.
How to Calculate Residues

Most often it is not necessary to find the Laurent series to calculate residues. The following propositions provide methods for this.

Proposition 5.2.8. Suppose $z_0$ is a removable singularity of $f$. Then $\operatorname{Res}(f(z), z_0) = 0$.

Proof. It is a consequence of the fact that the Laurent series of $f$ at $z_0$ is a power series.

Proposition 5.2.9. Suppose $z_0$ is a pole of $f$ of order $n$. Then
$$\operatorname{Res}(f(z), z_0) = \frac{1}{(n-1)!}\lim_{z\to z_0}\frac{d^{n-1}}{dz^{n-1}}\bigl((z-z_0)^n f(z)\bigr).$$
Proof. The Laurent series at $z_0$ has the form
$$f(z) = \sum_{k=-n}^{\infty} c_k(z-z_0)^k, \quad\text{with } c_{-n} \ne 0,$$
hence
$$(z-z_0)^n f(z) = \sum_{k=-n}^{\infty} c_k(z-z_0)^{n+k} = c_{-n} + c_{-n+1}(z-z_0) + \cdots + c_{-1}(z-z_0)^{n-1} + \sum_{k\ge0} c_k(z-z_0)^{n+k}.$$
Differentiating $n-1$ times,
$$\frac{d^{n-1}}{dz^{n-1}}\bigl((z-z_0)^n f(z)\bigr) = (n-1)!\,c_{-1} + \sum_{k\ge0} c_k(n+k)(n+k-1)\cdots(k+2)(z-z_0)^{k+1},$$
and, hence,
$$\lim_{z\to z_0}\frac{d^{n-1}}{dz^{n-1}}\bigl((z-z_0)^n f(z)\bigr) = (n-1)!\,c_{-1}.$$
Proposition 5.2.10. Suppose $f(z) = \dfrac{n(z)}{d(z)}$, where $n$ and $d$ are holomorphic at $z_0$, with $n(z_0) \ne 0$, $d(z_0) = 0$ and $d'(z_0) \ne 0$ (so $z_0$ is a simple pole). Then
$$\operatorname{Res}(f(z), z_0) = \frac{n(z_0)}{d'(z_0)}.$$

Example. For the residue of $f(z) = \dfrac{e^{iz}}{\cos z\,\sin z}$ at $z_0 = \pi/2$, we observe that $n(\pi/2) = e^{i\pi/2} = i \ne 0$ and $d'(z) = \cos^2 z - \sin^2 z$, so
$$\operatorname{Res}\left(f(z), \frac\pi2\right) = \frac{e^{i\pi/2}}{-\sin^2\frac\pi2} = -i.$$
Another way to compute the residue is
$$\operatorname{Res}\left(f(z), \frac\pi2\right) = \lim_{z\to\pi/2}\frac{\left(z-\frac\pi2\right)e^{iz}}{\cos z\,\sin z} = -i.$$
Residue Theorem

Theorem 5.2.12 (Residue Theorem). Suppose $f$ is holomorphic in the region $G$, except for isolated singularities, and $\gamma$ is a positively oriented, simple, closed, smooth, $G$-contractible curve which avoids the singularities of $f$. Then
$$\oint_\gamma f(z)\,dz = 2\pi i\sum_{k}\operatorname{Res}(f(z), z_k),$$
where the sum runs over the singularities $z_k$ inside $\gamma$.

Proof. For several isolated singularities, draw two circles around each of them inside $\gamma$, one with positive and another one with negative orientation, as pictured in Figure 5.2. Each of these pairs cancels when we integrate over them. Now connect the circles with negative orientation with $\gamma$. This gives a curve which is contractible in the region of holomorphicity of $f$. But this means that we can replace $\gamma$ by the positively oriented circles; now all we need to do is sum the expression (5.2) for every singularity.

Figure 5.2: the curve $\gamma$ with the singularities $z_1$, $z_2$, $z_3$ encircled.
Example 5.2.13. Let us calculate the integral
$$\oint_{|z|=1}\frac{z}{e^z\sin(4z)}\,dz.$$
The singularities of $f(z) = \dfrac{z}{e^z\sin(4z)}$ inside the circle $|z| = 1$ are $z_1 = -\pi/4$, $z_2 = 0$ and $z_3 = \pi/4$. We compute the residue at each of them:

$z_1 = -\dfrac\pi4$ is a pole of order $1$, and $\operatorname{Res}\left(f(z), -\dfrac\pi4\right) = \dfrac{-\pi/4}{e^{-\pi/4}\,4\cos(-\pi)} = \dfrac{\pi\exp(\pi/4)}{16}$.

$z_2 = 0$ is a removable singularity ($\lim_{z\to0} f(z) = \frac14$), so $\operatorname{Res}(f(z), 0) = 0$.

$z_3 = \dfrac\pi4$ is a pole of order $1$, and $\operatorname{Res}\left(f(z), \dfrac\pi4\right) = \dfrac{\pi/4}{e^{\pi/4}\,4\cos(\pi)} = -\dfrac{\pi\exp(-\pi/4)}{16}$.

Therefore,
$$\oint_{|z|=1}\frac{z}{e^z\sin(4z)}\,dz = 2\pi i\left(\frac{\pi e^{\pi/4}}{16} - \frac{\pi e^{-\pi/4}}{16}\right) = \frac{\pi^2\sinh\frac\pi4}{4}\,i.$$
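This residue computation can be confirmed by integrating numerically over the unit circle (a check added here, not part of the original example):

```python
import cmath, math

def circle_integral(f, r=1.0, n=4000):
    # trapezoidal rule for the integral of f(z) dz over |z| = r, counterclockwise
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = r * cmath.exp(1j * k * h)
        total += f(z) * 1j * z
    return total * h

f = lambda z: z / (cmath.exp(z) * cmath.sin(4 * z))
expected = (math.pi**2 / 4) * math.sinh(math.pi / 4) * 1j
assert abs(circle_integral(f) - expected) < 1e-9
```

The removable singularity at $0$ causes no numerical trouble, since the contour never passes through the points where $\sin(4z) = 0$.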
Exercises

Exercise 5.1
For each of the following series, determine where the series converges absolutely/uniformly:
1. $\sum_{k\ge2} k(k-1)z^{k-2}$.
2. $\sum_{k\ge0}\dfrac{1}{(2k+1)!}z^{2k+1}$.
3. $\sum_{k\ge0}\left(\dfrac{1}{z-3}\right)^k$.
Exercise 5.4
Find the terms through third order and the radius of convergence of the power series for each of the following functions, centered at $z_0$. Do not find the general form for the coefficients.
1. $f(z) = \dfrac{1}{1+z^2}$, $z_0 = 1$.
2. $f(z) = \dfrac{1}{e^z+1}$, $z_0 = 0$.
4. $f(z) = e^z$, $z_0 = i$.

Exercise 5.5
Find a Laurent series for $\dfrac{1}{(z-1)(z+1)}$ and specify the region in which it converges.

Exercise 5.6
Find a Laurent series for $\dfrac{1}{z(z-2)^2}$ centered at $z = 2$ and specify the region in which it converges.

Exercise 5.7
Find a Laurent series for $\dfrac{z-2}{z+1}$ centered at $z = -1$ and specify the region in which it converges.

Exercise 5.8
Find the first five terms in the Laurent series for $\dfrac{1}{\sin z}$ centered at $z = 0$.

Exercise 5.9
Find the first four non-zero terms in the power series expansion of $\tan z$ centered at the origin. What is the radius of convergence?
Exercise 5.10
1. Find the power series representation for $e^{az}$ centered at $0$, where $a$ is any constant.
2. Derive the power series for $e^z\cos z$ centered at $0$ using the fact that $e^z\cos z = \dfrac{e^{(1+i)z}+e^{(1-i)z}}{2}$.

Exercise 5.11
Show that
$$\frac{z-1}{z-2} = \sum_{k\ge0}\frac{1}{(z-1)^k}$$
for $|z-1| > 1$.

Exercise 5.12
1. Find the Laurent series for $\dfrac{\cos z}{z^2}$ centered at $z = 0$.
2. Prove that
$$f(z) = \begin{cases}\dfrac{\cos z - 1}{z^2} & \text{if } z \ne 0,\\[4pt] -\dfrac12 & \text{if } z = 0,\end{cases}$$
is entire.

Exercise 5.13
Find the Laurent series for $\sec z$ centered at the origin.

Exercise 5.14
Find the three Laurent series of $f(z) = \dfrac{3}{(1-z)(z+2)}$, centered at $0$, but which are defined on the three domains $|z| < 1$, $1 < |z| < 2$, and $2 < |z|$, respectively. Hint: Use partial fraction decomposition.
Exercise 5.15
Find the poles of the following functions, and determine their orders:
1. $(z^2+1)^{-3}(z-1)^{-4}$.
2. $z\cot(z)$.
3. $z^{-5}\sin(z)$.
4. $\dfrac{1}{1-e^z}$.
5. $\dfrac{z}{1-e^z}$.

Exercise 5.16
1. Find a Laurent series for $\dfrac{1}{(z^2-4)(z-2)}$ centered at $z = 2$ and specify the region in which it converges.
2. Compute $\oint_\gamma\dfrac{dz}{(z^2-4)(z-2)}$, where $\gamma$ is the positively oriented circle centered at $2$ of radius $1$.

Exercise 5.17
Verify that if $f$ is holomorphic at $\omega$, then the residue of $\dfrac{f(z)}{z-\omega}$ at $\omega$ is $f(\omega)$.
Exercise 5.18
Verify that if $f$ is holomorphic at $\omega$, then the residue of $\dfrac{f(z)}{(z-\omega)^n}$ at $\omega$ is $\dfrac{f^{(n-1)}(\omega)}{(n-1)!}$.

Exercise 5.19
Evaluate the following integrals for $\gamma(t) = 3e^{it}$, $0 \le t \le 2\pi$:
1. $\oint_\gamma \cot z\,dz$.
2. $\oint_\gamma z^3\cos\left(\dfrac3z\right)dz$.
3. $\oint_\gamma \dfrac{dz}{(z+4)(z^2+1)}$.
4. $\oint_\gamma z^2\exp\left(\dfrac1z\right)dz$.
5. $\oint_\gamma \dfrac{\exp z}{\sinh z}\,dz$.
6. $\oint_\gamma \dfrac{iz+4}{(z^2+16)^2}\,dz$.

Exercise 5.20
1. Find the power series of $\exp z$ centered at $z = -1$.
2. Find $\oint_\gamma\dfrac{\exp z}{(z+1)^{34}}\,dz$, where $\gamma$ is the circle $|z+2| = 2$, positively oriented.
Exercise 5.21
Suppose $f$ has a simple pole (i.e., a pole of order $1$) at $z_0$ and $g$ is holomorphic at $z_0$. Prove that
$$\operatorname{Res}((fg)(z), z_0) = g(z_0)\operatorname{Res}(f(z), z_0).$$

Exercise 5.22
Find the residue of each function at $0$:
1. $z^{-3}\cos z$.
2. $\csc z$.
3. $\dfrac{z^2+4z+5}{z^2+z}$.
4. $e^{1-\frac1z}$.
5. $\dfrac{e^{4z}-1}{\sin^2 z}$.

Exercise 5.23
Use residues to evaluate the following:
1. $\oint_\gamma\dfrac{dz}{z^4+4}$, where $\gamma$ is the circle $|z+1-i| = 1$.
2. $\oint_\gamma\dfrac{dz}{z(z^2+z-2)}$, where $\gamma$ is the circle $|z-i| = 2$.
3. $\oint_\gamma\dfrac{e^z\,dz}{z^3+z}$, where $\gamma$ is the circle $|z| = 2$.
4. $\oint_\gamma\dfrac{dz}{z^2\sin z}$, where $\gamma$ is the circle $|z| = 1$.
Exercise 5.24
Suppose f has an isolated singularity at z0 .
1. Show that $f'$ also has an isolated singularity at $z_0$.
2. Find $\operatorname{Res}(f',z_0)$.
Exercise 5.25
Given $R>0$, let $\gamma_R$ be the half circle defined by $\gamma_R(t)=Re^{it}$, $0\le t\le\pi$, and let $\sigma_R$ be the closed curve composed of $\gamma_R$ and the line segment $[-R,R]$.
1. Compute $\displaystyle\int_{\sigma_R}\frac{dz}{(1+z^2)^2}$.
2. Prove that $\displaystyle\lim_{R\to\infty}\int_{\gamma_R}\frac{dz}{(1+z^2)^2}=0$.
3. Combine 1 and 2 to evaluate the real integral $\displaystyle\int_{-\infty}^{\infty}\frac{dx}{(1+x^2)^2}$.
Appendix A
Complex Numbers
A.1. Algebraic Definition

$$(0,1)\cdot(0,1)=(-1,0)\tag{A.1}$$

A.2. Rectangular Form
As before, thinking of $1=(1,0)$, $x=(x,0)$ and $y=(y,0)$ as real numbers, and giving $(0,1)$ a special name, say $i$, the complex number $(x,y)$ is represented by
$$(x,y)=x+yi.$$
The number $x$ is called the real part and $y$ the imaginary part of the complex number $x+yi$, often denoted $\operatorname{Re}(x+iy)=x$ and $\operatorname{Im}(x+iy)=y$. The identity (A.1) then reads
$$i^2=-1.$$
A complex number written in the form $x+iy$, where $x$ and $y$ are both real numbers, is in rectangular form.
The complex number $i$ is called a square root of $-1$, and also the imaginary unit. The polynomial $x^2+1=0$ thus has roots, but only in $\mathbb{C}$.
Polar Form
Let us for a moment return to the $(x,y)$-notation of complex numbers. It suggests that one can think of a complex number as a two-dimensional real vector. When plotting these vectors in the plane $\mathbb{R}^2$, we call the $x$-axis the real axis and the $y$-axis the imaginary axis.
On the other hand, a vector can be determined by its length and the angle it encloses with, say, the positive real axis; let us define these concepts thoroughly. The absolute value (sometimes also called the modulus) $r=|z|\in\mathbb{R}$ of $z=x+iy$ is
$$r=|z|=\sqrt{x^2+y^2},$$
and an argument of $z=x+iy$ is a number $\theta\in\mathbb{R}$ such that
$$x=r\cos\theta\qquad\text{and}\qquad y=r\sin\theta.$$
[Figure: the vector $z$ with modulus $r$, horizontal component $x=r\cos\theta$ and vertical component $y=r\sin\theta$.]
A given complex number $z=x+iy$ has infinitely many possible arguments: $\theta+2\pi k$, where $k$ is any integer.
Proposition A.2.1. Let $z_1,z_2\in\mathbb{C}$ be two complex numbers, thought of as vectors in $\mathbb{R}^2$, and let $d(z_1,z_2)$ denote the distance between the two vectors in $\mathbb{R}^2$. Then
$$d(z_1,z_2)=|z_2-z_1|=|z_1-z_2|.$$
Proof. Let $z_1=x_1+iy_1$ and $z_2=x_2+iy_2$. By definition of distance,
$$d(z_1,z_2)=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2},$$
and this expression equals $|z_2-z_1|=|(x_2-x_1)+i(y_2-y_1)|$. Finally, it is obvious that $|z_2-z_1|=|z_1-z_2|$.
The complex number $\cos\theta+i\sin\theta$ is written in short as $e^{i\theta}$. Initially this expression should not be interpreted as an exponential, but rather as an abbreviation. Later we will see that it verifies the properties of the exponential function and can be understood in that manner.
Definition A.2.2. The complex number $z=x+iy$ with absolute value $r$ and argument $\theta$ is expressed as
$$z=x+iy=r(\cos\theta+i\sin\theta)=re^{i\theta}.$$
The right-hand side of this expression is called the polar form of the complex number $z$.
Because the argument (angle) is not unique, the polar form is not unique either: for any $k\in\mathbb{Z}$,
$$re^{i\theta}=re^{i(\theta+2\pi k)}.$$
Proposition A.2.3. Let $z=re^{i\theta}$ and $w=se^{i\varphi}$ be two complex numbers in polar form, with $s\neq 0$ where it appears in a denominator. Then:
1. $z\cdot w=rs\,e^{i(\theta+\varphi)}$.
2. $\dfrac{1}{w}=\dfrac{1}{s}\,e^{-i\varphi}$.
3. $\dfrac{z}{w}=\dfrac{r}{s}\,e^{i(\theta-\varphi)}$.
4. $z^n=\left(re^{i\theta}\right)^n=r^n e^{in\theta}$, for all $n\in\mathbb{Z}^+$.
5. For all $n\in\mathbb{Z}^+$, $\sqrt[n]{z}=\sqrt[n]{re^{i\theta}}=\sqrt[n]{r}\,e^{i\frac{\theta+2\pi k}{n}}$ with $k=0,1,2,\dots,n-1$.
Proof.
1.
$$z\cdot w=r(\cos\theta+i\sin\theta)\,s(\cos\varphi+i\sin\varphi)=rs\big((\cos\theta\cos\varphi-\sin\theta\sin\varphi)+i(\cos\theta\sin\varphi+\sin\theta\cos\varphi)\big)=rs\big(\cos(\theta+\varphi)+i\sin(\theta+\varphi)\big)=rs\,e^{i(\theta+\varphi)}.$$
2.
$$\frac{1}{w}=\frac{1}{s(\cos\varphi+i\sin\varphi)}=\frac{1}{s}\,\frac{\cos\varphi-i\sin\varphi}{(\cos\varphi+i\sin\varphi)(\cos\varphi-i\sin\varphi)}=\frac{1}{s}\,\frac{\cos\varphi-i\sin\varphi}{\cos^2\varphi+\sin^2\varphi}=\frac{1}{s}\big(\cos(-\varphi)+i\sin(-\varphi)\big)=\frac{1}{s}\,e^{-i\varphi}.$$
3.
$$\frac{z}{w}=z\cdot w^{-1}=re^{i\theta}\,\frac{1}{s}\,e^{-i\varphi}=\frac{r}{s}\,e^{i(\theta-\varphi)}.$$
4. We use induction: $z^1=z$, obviously, and for $n>1$ we suppose $z^{n-1}=r^{n-1}e^{i(n-1)\theta}$. Then
$$z^n=z^{n-1}z=r^{n-1}e^{i(n-1)\theta}\,re^{i\theta}=r^n e^{in\theta}.$$
5. For any $k\in\mathbb{Z}$,
$$\left(\sqrt[n]{r}\,e^{i\frac{\theta+2\pi k}{n}}\right)^n=r\,e^{i(\theta+2\pi k)}=z.$$
The reason there are exactly $n$ roots is the equivalence of angles
$$\frac{\theta+2\pi(k+n)}{n}=\frac{\theta+2\pi k}{n}+2\pi,$$
so the only different angles are obtained for $k=0,1,2,\dots,n-1$.
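The polar rules above are easy to sanity-check numerically. The following sketch is ours, not part of the original notes; it uses Python's `cmath.rect` to build $re^{i\theta}$, and all sample values and variable names are arbitrary choices:

```python
import cmath
import math

# Two sample numbers in polar form: z = r e^{i*theta}, w = s e^{i*phi}
r, theta = 2.0, 0.7
s, phi = 3.0, -1.2
z = cmath.rect(r, theta)  # r*(cos theta + i sin theta)
w = cmath.rect(s, phi)

# Rule 1: z*w = r*s * e^{i(theta+phi)}
prod_ok = abs(z * w - cmath.rect(r * s, theta + phi)) < 1e-9

# Rule 4: z^n = r^n * e^{i*n*theta}
n = 7
pow_ok = abs(z**n - cmath.rect(r**n, n * theta)) < 1e-6

# Rule 5: all n candidate n-th roots really satisfy u^n = z
roots = [cmath.rect(r ** (1.0 / n), (theta + 2 * math.pi * k) / n) for k in range(n)]
roots_ok = all(abs(u**n - z) < 1e-9 for u in roots)
```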
Example A.2.4. The fifth roots of unity $\sqrt[5]{1}$ are the following complex numbers:
$$\sqrt[5]{1}=\sqrt[5]{e^{i2\pi k}}=e^{i\frac{2\pi k}{5}}$$
and then:
For $k=0$, $z_0=e^{i0}=1$.
For $k=1$, $z_1=e^{i\frac{2\pi}{5}}$.
For $k=2$, $z_2=e^{i\frac{4\pi}{5}}$.
For $k=3$, $z_3=e^{i\frac{6\pi}{5}}$.
For $k=4$, $z_4=e^{i\frac{8\pi}{5}}$.
For $k=5$, $e^{i\frac{10\pi}{5}}=e^{i2\pi}=z_0$, and the values repeat.
[Figure: the five roots $z_0,\dots,z_4$ equally spaced on the unit circle, with consecutive angles $\frac{2\pi}{5}$.]
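The fifth roots of unity can be checked directly with Python's `cmath` (a sketch of ours; the names are arbitrary):

```python
import cmath
import math

# The five fifth roots of unity: e^{i*2*pi*k/5}, k = 0..4
roots = [cmath.exp(1j * 2 * math.pi * k / 5) for k in range(5)]

# each one satisfies z^5 = 1
all_fifth = all(abs(z**5 - 1) < 1e-12 for z in roots)

# k = 5 gives e^{i*2*pi} = z_0 again: the values repeat
repeats = abs(cmath.exp(1j * 2 * math.pi) - roots[0]) < 1e-12

# the five roots are pairwise distinct (five different arguments)
distinct = len({round(cmath.phase(z), 6) for z in roots}) == 5
```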
A.3. Complex Conjugates
The complex conjugate of $z=x+iy$ is $\bar z=x-iy$. It satisfies
$$|z|^2=z\bar z,$$
and hence, when $z\neq 0$,
$$z^{-1}=\frac{1}{z}=\frac{\bar z}{|z|^2}.$$
Proposition A.3.2. For $z,z_1,z_2\in\mathbb{C}$:
1. $\overline{z_1\pm z_2}=\bar z_1\pm\bar z_2$.
2. $\overline{z_1\cdot z_2}=\bar z_1\cdot\bar z_2$.
3. $\overline{\left(\dfrac{z_1}{z_2}\right)}=\dfrac{\bar z_1}{\bar z_2}$.
4. $\bar{\bar z}=z$.
5. $|\bar z|=|z|$.
6. $\bar z=z$ iff $z$ is real.
7. $\operatorname{Re}(z)=\dfrac{z+\bar z}{2}$.
8. $\operatorname{Im}(z)=\dfrac{z-\bar z}{2i}$.
9. $\overline{e^{i\theta}}=e^{-i\theta}$.
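The conjugation rules are cheap to verify numerically; this check (ours, with sample values chosen arbitrarily) exercises several of them at once:

```python
# Sample complex numbers
z1, z2 = 1 + 2j, 2 - 1j

checks = [
    abs((z1 * z2).conjugate() - z1.conjugate() * z2.conjugate()) < 1e-12,  # rule 2
    abs((z1 / z2).conjugate() - z1.conjugate() / z2.conjugate()) < 1e-12,  # rule 3
    z1.conjugate().conjugate() == z1,                                      # rule 4
    abs(abs(z1.conjugate()) - abs(z1)) < 1e-12,                            # rule 5
    abs((z1 + z1.conjugate()) / 2 - z1.real) < 1e-12,                      # rule 7
    abs((z1 - z1.conjugate()) / 2j - z1.imag) < 1e-12,                     # rule 8
]
all_ok = all(checks)
```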
A famous geometric inequality (which holds for vectors in $\mathbb{R}^n$) is the triangle inequality. Complex numbers satisfy this inequality too.
Proposition A.3.3. For $z_1,z_2\in\mathbb{C}$, $|z_1+z_2|\le|z_1|+|z_2|$.
Proof.
$$|z_1+z_2|^2=(z_1+z_2)\overline{(z_1+z_2)}=(z_1+z_2)(\bar z_1+\bar z_2)=z_1\bar z_1+z_1\bar z_2+z_2\bar z_1+z_2\bar z_2=|z_1|^2+z_1\bar z_2+\overline{z_1\bar z_2}+|z_2|^2=|z_1|^2+2\operatorname{Re}(z_1\bar z_2)+|z_2|^2.\tag{A.2}$$
Finally, since $\operatorname{Re}(z)\le|z|$ for all $z$, we have $\operatorname{Re}(z_1\bar z_2)\le|z_1\bar z_2|=|z_1||z_2|$, and from (A.2),
$$|z_1+z_2|^2\le|z_1|^2+2|z_1||z_2|+|z_2|^2=\big(|z_1|+|z_2|\big)^2.$$
Exercises
Exercise 1.1
Let $z=1+2i$ and $\omega=2-i$. Compute:
1. $z+3\omega$.
2. $\bar\omega-z$.
3. $z^3$.
4. $\operatorname{Re}(\omega^2+\omega)$.
5. $z^2+\bar z+i$.
Exercise 1.2
Find the real and imaginary parts of each of the following:
1. $\left(\dfrac{-1+i\sqrt{3}}{2}\right)^3$.
2. $\dfrac{z-a}{z+a}$ ($a\in\mathbb{R}$).
3. $\dfrac{3+5i}{7i+1}$.
4. $i^n$ for any $n\in\mathbb{Z}$.
Exercise 1.3
Find the absolute value and conjugate of each of the following:
1. $-2+i$.
2. $(2+i)(4+3i)$.
3. $\dfrac{3-i}{\sqrt{2}+3i}$.
4. $(1+i)^6$.
Exercise 1.4
Write in both polar and rectangular form:
1. $2i$.
2. $1+i$.
3. $-3+\sqrt{3}\,i$.
4. $-i$.
5. $(2-i)^2$.
6. $|3-4i|$.
7. $\sqrt{5}-i$.
8. $\sqrt{3}$.
9. $\dfrac{4}{1-i}$.
10. $34e^{i\pi/2}$.
11. $e^{i250\pi}$.
12. $2e^{4\pi i}$.
13. $-2i$.
14. $e^{\ln(5)i}$.
15. $e^{1+i\pi/2}$.
16. $\sqrt{2}\,e^{i3\pi/4}$.
17. $\dfrac{d}{d\varphi}e^{\varphi+i\varphi}$.
Exercise 1.5
Prove that the quadratic formula works for complex numbers, regardless of whether the discriminant is negative. That is, prove that the roots of the equation $az^2+bz+c=0$, where $a,b,c\in\mathbb{C}$, are
$$z=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
as long as $a\neq 0$.
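A numerical sanity check (ours, not a proof) illustrates the statement: `cmath.sqrt` returns one square root of the complex discriminant, and the two resulting candidates do annihilate the polynomial. The coefficients below are arbitrary sample values:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*z^2 + b*z + c = 0 via the quadratic formula (a != 0)."""
    d = cmath.sqrt(b * b - 4 * a * c)  # one of the two square roots
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

a, b, c = 1 + 1j, 2 - 3j, -1 + 0.5j
r1, r2 = quadratic_roots(a, b, c)

# residual of the polynomial at each computed root
residual = max(abs(a * r1**2 + b * r1 + c), abs(a * r2**2 + b * r2 + c))
```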
Exercise 1.6
Find all solutions to the following equations:
1. $z^2+25=0$.
2. $2z^2+2z+5=0$.
3. $5z^2+4z+1=0$.
4. $z^2-z=1$.
5. $z^2=2\bar z$.
6. $z^6=1$.
7. $z^4=-16$.
8. $z^6=-9$.
9. $z^6-z^3-2=0$.
10. $z^2+2z+(1-i)=0$.
11. $z^4+iz=2i$.
Exercise 1.7
Show that:
1. $|z|=1$ if and only if $\dfrac{1}{z}=\bar z$.
Exercise 1.8
Use operations in polar form to derive the triple angle formulas:
1. $\cos 3\theta=\cos^3\theta-3\cos\theta\sin^2\theta$.
Exercise 1.9
Sketch the following sets in the complex plane:
1. $\{z\in\mathbb{C}:|z-1+i|=2\}$.
2. $\{z\in\mathbb{C}:|z-1+i|\le 2\}$.
4. $\{z\in\mathbb{C}:|z-i|+|z+i|=3\}$.
5. $\{z\in\mathbb{C}:|z|=|z+1|\}$.
6. $\{z\in\mathbb{C}:2|z|\ge|z+i|\}$.
7. $\{z\in\mathbb{C}:|z+3|<2\}$.
9. $\{z\in\mathbb{C}:1\le|z-1|<2\}$.
Exercise 1.10
Use the triangle inequality to show that $\left|\dfrac{1}{z^2-1}\right|\le\dfrac{1}{3}$ for every $z$ on the circle $|z|=2$.
Appendix B
Exponential Function

3. $\exp(-z)=\dfrac{1}{e^x e^{iy}}=\dfrac{1}{\exp(z)}$; in particular $\exp(z)\neq 0$.
4. Use the Cauchy–Riemann equations for $\exp(z)=u(x,y)+iv(x,y)$ with $u(x,y)=e^x\cos y$ and $v(x,y)=e^x\sin y$. Furthermore,
$$(\exp(z))'=\frac{\partial(e^x e^{iy})}{\partial x}=e^x e^{iy}=\exp(z).$$
5. Trivial, because $\cos$ and $\sin$ are periodic functions with period $2\pi$.
6. $|\exp(z)|=e^x|e^{iy}|=e^x$.
Remark. Note that the notation for the complex exponential function is $\exp z$ and not $e^z$ because, as we will see in Section B.5, the expression $e^z$ is not strictly a function.
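The basic behaviour of the complex exponential can be confirmed numerically (a sketch of ours; the sample point is arbitrary):

```python
import cmath
import math

# exp(x + iy) = e^x (cos y + i sin y)
x, y = 0.8, 2.3
z = complex(x, y)

rect_form = math.exp(x) * complex(math.cos(y), math.sin(y))
ok_rect = abs(cmath.exp(z) - rect_form) < 1e-12

# |exp z| = e^x, independent of y
ok_mod = abs(abs(cmath.exp(z)) - math.exp(x)) < 1e-12

# exp is periodic with period 2*pi*i
ok_period = abs(cmath.exp(z + 2j * math.pi) - cmath.exp(z)) < 1e-12
```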
B.2. Trigonometric Functions
The complex exponential function allows us to define the trigonometric functions. The complex sine and cosine are defined respectively as
$$\sin z=\frac{e^{iz}-e^{-iz}}{2i}\qquad\text{and}\qquad\cos z=\frac{e^{iz}+e^{-iz}}{2}.$$
Proposition B.2.1. $(\sin z)'=\cos z$ and $(\cos z)'=-\sin z$.
Proof. Exercise.
As with the exponential function, we should first make sure that we are not redefining the real sine and cosine: if $x\in\mathbb{R}$ then
$$\sin(x+i0)=\frac{e^{ix}-e^{-ix}}{2i}=\frac{\cos x+i\sin x-\cos(-x)-i\sin(-x)}{2i}=\frac{2i\sin x}{2i}=\sin x,$$
$$\cos(x+i0)=\frac{e^{ix}+e^{-ix}}{2}=\frac{\cos x+i\sin x+\cos(-x)+i\sin(-x)}{2}=\frac{2\cos x}{2}=\cos x.$$
We know the real $\sin$ and $\cos$ functions are bounded, but this is not true for the corresponding complex functions.
Proposition B.2.2. The complex function $\sin z$ (resp. $\cos z$) is not bounded.
Proof. For $y>0$,
$$|\sin(iy)|=\left|\frac{e^{-y}-e^{y}}{2i}\right|=\frac{1}{2}e^{y}-\frac{1}{2}e^{-y},$$
which diverges to $\infty$ as $y\to\infty$. Similarly for $\cos z$.
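The growth along the imaginary axis is easy to observe numerically (a check of ours): $|\sin(iy)|=\sinh y$, which quickly leaves the real bound $|\sin x|\le 1$ behind.

```python
import cmath
import math

ys = (1.0, 5.0, 10.0)
vals = [abs(cmath.sin(1j * y)) for y in ys]

# strictly increasing along the imaginary axis
growing = vals[0] < vals[1] < vals[2]

# |sin(iy)| equals sinh(y)
matches_sinh = all(abs(abs(cmath.sin(1j * y)) - math.sinh(y)) < 1e-9 * math.sinh(y)
                   for y in ys)

# already far beyond the real bound |sin x| <= 1
exceeds_one = vals[2] > 1.0
```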
The tangent and cotangent are defined as
$$\tan z=\frac{\sin z}{\cos z}=-i\,\frac{\exp(2iz)-1}{\exp(2iz)+1}\qquad\text{and}\qquad\cot z=\frac{\cos z}{\sin z}=i\,\frac{\exp(2iz)+1}{\exp(2iz)-1},$$
respectively.
Proposition B.2.3.
(a) $\tan z$ is holomorphic at every complex number $z\neq\frac{2k+1}{2}\pi$, $k\in\mathbb{Z}$.
(b) $\cot z$ is holomorphic at every complex number $z\neq k\pi$, $k\in\mathbb{Z}$.
Proposition B.2.4. $\sin^2 z+\cos^2 z=1$.
All rules for real trigonometric functions are satisfied by the complex functions:
Proposition B.2.5. For all $z,z_1,z_2\in\mathbb{C}$:
1. $\sin(z+2\pi)=\sin z$ and $\cos(z+2\pi)=\cos z$ (both are periodic functions with period $2\pi$).
2. $\tan(z+\pi)=\tan z$ and $\cot(z+\pi)=\cot z$ (both are periodic functions with period $\pi$).
3. $\sin(z_1\pm z_2)=\sin z_1\cos z_2\pm\cos z_1\sin z_2$.
4. $\cos(z_1\pm z_2)=\cos z_1\cos z_2\mp\sin z_1\sin z_2$.
5. $\sin(2z)=2\sin z\cos z$ and $\cos(2z)=\cos^2 z-\sin^2 z$.
6. $\sin(-z)=-\sin z$ and $\cos(-z)=\cos z$.
7. $\sin\!\left(z+\frac{\pi}{2}\right)=\cos z$ and $\cos\!\left(z+\frac{\pi}{2}\right)=-\sin z$.
B.3. Hyperbolic Functions
The hyperbolic sine, cosine, tangent, and cotangent are defined as in the real case:
$$\sinh z=\frac{e^z-e^{-z}}{2},\qquad\cosh z=\frac{e^z+e^{-z}}{2},$$
$$\tanh z=\frac{\sinh z}{\cosh z}=\frac{\exp(2z)-1}{\exp(2z)+1}\qquad\text{and}\qquad\coth z=\frac{\cosh z}{\sinh z}=\frac{\exp(2z)+1}{\exp(2z)-1}.$$
They are closely related to the trigonometric functions:
$$\sinh(iz)=i\sin z\qquad\text{and}\qquad\cosh(iz)=\cos z.$$
Proof. Exercise.
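These links between the hyperbolic and trigonometric families can be spot-checked numerically (a sketch of ours; the sample point is arbitrary):

```python
import cmath

z = 0.6 - 1.1j

# cosh(iz) = cos(z) and sinh(iz) = i sin(z)
ok_cosh = abs(cmath.cosh(1j * z) - cmath.cos(z)) < 1e-12
ok_sinh = abs(cmath.sinh(1j * z) - 1j * cmath.sin(z)) < 1e-12

# tanh z = (exp(2z) - 1)/(exp(2z) + 1)
ok_tanh = abs(cmath.tanh(z) - (cmath.exp(2 * z) - 1) / (cmath.exp(2 * z) + 1)) < 1e-12
```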
B.4. Logarithms
Classically, the logarithm function is the inverse of the exponential function. For the real function $e^x$ its inverse is called the natural¹ logarithm $\ln x$, and the following identities hold:
$$e^{\ln x}=x\qquad\text{and}\qquad\ln(e^x)=x.\tag{B.1}$$
Although it is usual to consider the argument of a complex number in $[0,2\pi)$, the principal form instead takes the argument in $(-\pi,\pi]$. This principal branch is represented by $\operatorname{Log}z$; more concretely,
$$\operatorname{Log}(z)=\ln|z|+i\arg z,\qquad -\pi<\arg z\le\pi.$$
If $z=x+iy\neq 0$ with $x>0$, then $\operatorname{Log}z=\dfrac{\ln(x^2+y^2)}{2}+i\arctan\dfrac{y}{x}$.
¹Also called Napierian logarithm in honor of the Scottish mathematician John Napier (1550–1617).

Therefore the limit as $z\to z_0$ does not exist, and $\log z$ is not continuous at the points of the branch ray. Obviously, $\log z$ is continuous at the points that are not in the ray.
Theorem B.4.5. For $z$ not in the ray defined above, the corresponding logarithmic branch is differentiable and
$$(\log z)'=\frac{1}{z}.$$
Proof. Using Proposition 4.1.17 with $\log z=\exp^{-1}(z)$ and (B.1),
$$(\log z)'=\frac{1}{(\exp)'(\log z)}=\frac{1}{\exp(\log z)}=\frac{1}{z}.$$

B.5. Complex Powers

For example, for $\sqrt{1}=1^{1/2}$:
$$\sqrt{1}=\exp\left(\tfrac{1}{2}\log 1\right)=\exp\left(\tfrac{2k\pi i}{2}\right)=\exp(k\pi i)=\cos(k\pi)+i\sin(k\pi),\qquad k\in\mathbb{Z}.$$
Therefore $\sqrt{1}$ has only two values, $\sqrt{1}=1$ (for $k$ even) and $\sqrt{1}=-1$ (for $k$ odd). Furthermore, $\operatorname{Log}1=0$ and the principal value of $\sqrt{1}$ is $1$.
Example B.5.3. A power of an imaginary number raised to another imaginary number may be a real number:
$$i^i=\exp\left(i\log i\right)=\exp\left(-\frac{\pi+4k\pi}{2}\right),\qquad k\in\mathbb{Z}.$$
The principal value is $i^i=e^{-\pi/2}\approx 0.2079$.
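The multivalued computation of $i^i$ can be reproduced with Python's `cmath` (a sketch of ours; Python's `**` uses the principal branch of the logarithm):

```python
import cmath
import math

# Principal value: i^i = exp(i * Log i) = e^{-pi/2}, a real number
principal = 1j ** 1j
ok_real = abs(principal.imag) < 1e-12
ok_value = abs(principal.real - math.exp(-math.pi / 2)) < 1e-12

# Other branches: exp(i * i*(pi/2 + 2*pi*k)) = exp(-(pi + 4*pi*k)/2),
# all of them real numbers
others = [cmath.exp(1j * 1j * (math.pi / 2 + 2 * math.pi * k)) for k in (-1, 0, 1)]
all_real = all(abs(w.imag) < 1e-12 for w in others)
```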
Exercises
Exercise 2.1
Describe the images of the following sets under the exponential function:
1. the line segment defined by z = iy, 0 y 2.
2. the line segment defined by z = 1 + iy, 0 y 2.
3. the rectangle {z = x + iy C : 0 x 1, 0 y 2}.
Exercise 2.2
Describe the image under exp of the line with equation y = x.
Exercise 2.3
Prove that $\sin(\bar z)=\overline{\sin(z)}$ and $\cos(\bar z)=\overline{\cos(z)}$.
Exercise 2.4
Find the expressions $u(x,y)+iv(x,y)$ of the functions $\sin z$ and $\cos z$.
Exercise 2.5
Let $z=x+iy$ and show that:
1. $|\sin z|^2=\sin^2 x+\sinh^2 y=\cosh^2 y-\cos^2 x$.
2. $|\cos z|^2=\cos^2 x+\sinh^2 y=\cosh^2 y-\sin^2 x$.
3. If $\cos x=0$ then $|\cot z|^2=\dfrac{\cosh^2 y-1}{\cosh^2 y}\le 1$.
4. If $|y|\ge 1$ then $|\cot z|^2\le\dfrac{\sinh^2 y+1}{\sinh^2 y}=1+\dfrac{1}{\sinh^2 y}\le 1+\dfrac{1}{\sinh^2 1}\le 2$.
Exercise 2.6
Evaluate the value(s) of the following expressions, giving your answers in the form $x+iy$:
1. $e^{i\pi}$.
2. $e^{\pi}$.
3. $i^{1-i}$.
4. $e^{\sin i}$.
5. $\exp(\operatorname{Log}(3+4i))$.
6. $\sqrt{1+i}$.
7. $\sqrt{3(1-i)}$.
8. $\left(\dfrac{1+i}{\sqrt{2}}\right)^4$.
Exercise 2.7
Find the principal values of:
1. $\log i$.
2. $(-1)^i$.
3. $\log(1+i)$.
Exercise 2.8
Is there a difference between the set of all values of log(z 2 ) and the set of all values of 2 log z?
(Try some fixed numbers for z.)
Exercise 2.10
For each of the following functions, determine all complex numbers for which the function is holomorphic. If you run into a logarithm, use the principal value (unless stated otherwise).
1. $z^2$.
2. $\dfrac{\sin z}{z^3+1}$.
3. $\exp(z)$.
6. $iz^{-3}$.
7. $\dfrac{1}{\operatorname{Log}z}$.
Exercise 2.11
Find all solutions to the following equations:
1. $\operatorname{Log}(z)=\frac{\pi}{2}i$.
2. $\operatorname{Log}(z)=\frac{3\pi}{2}i$.
3. $\exp(z)=\pi i$.
4. $\sin z=\cosh 4$.
5. $\cos z=0$.
6. $\sinh z=0$.
7. $\exp(iz)=\overline{\exp(iz)}$.
8. $z^{1/2}=1+i$.
9. $\cosh z=-1$.
Exercise 2.12
Fix $c\in\mathbb{C}\setminus\{0\}$. Find the derivative of $f(z)=z^c$.
Exercise 2.13
Prove that $a^b$ is single-valued if and only if $b$ is an integer. (Note that this means that complex exponentials don't clash with monomials $z^n$.) What can you say if $b$ is rational?
Appendix C
C.1. Trigonometric Integrals
Consider integrals of the form
$$\int_0^{2\pi}R(\sin x,\cos x)\,dx,$$
where the inner part of the integral, $R$, is a rational function. Doing the change of variable $z=e^{ix}$ we obtain
$$\sin x=\frac{z-z^{-1}}{2i},\qquad\cos x=\frac{z+z^{-1}}{2},\qquad dz=ie^{ix}\,dx=iz\,dx.$$
Example C.1.1. Compute $\displaystyle\int_0^{2\pi}\frac{dx}{(2+\cos x)^2}$.
With $z=e^{ix}$ the integral becomes
$$\int_\gamma\frac{1}{\left(2+\frac{z+z^{-1}}{2}\right)^2}\,\frac{dz}{iz}=\frac{4}{i}\int_\gamma\frac{z}{(z^2+4z+1)^2}\,dz,$$
where $\gamma$ is the positively oriented unit circle. The function $f(z)=\dfrac{z}{(z^2+4z+1)^2}$ has two poles of order 2, at $z_0=\sqrt{3}-2$ and $z_1=-\sqrt{3}-2$, but only $z_0$ is inside the circle $\gamma$. Then
$$\operatorname{Res}(f(z),z_0)=\frac{1}{1!}\lim_{z\to z_0}\frac{d}{dz}\left((z-z_0)^2\,\frac{z}{(z-z_0)^2(z-z_1)^2}\right)=\lim_{z\to z_0}\frac{-z-z_1}{(z-z_1)^3}=\frac{1}{6\sqrt{3}}$$
and
$$\int_0^{2\pi}\frac{dx}{(2+\cos x)^2}=\frac{4}{i}\,2\pi i\,\frac{1}{6\sqrt{3}}=\frac{4\pi}{3\sqrt{3}}.$$
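As an independent numerical check (ours, not part of the original notes), a midpoint rule over one period reproduces the residue-computed value $\frac{4\pi}{3\sqrt3}$; the step count `N` is an arbitrary choice:

```python
import math

# Midpoint rule for the 2*pi-periodic integrand 1/(2+cos x)^2;
# for periodic analytic integrands this converges extremely fast.
N = 20000
h = 2 * math.pi / N
approx = sum(h / (2 + math.cos((k + 0.5) * h)) ** 2 for k in range(N))

exact = 4 * math.pi / (3 * math.sqrt(3))
err = abs(approx - exact)
```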
C.2. Improper Integrals
Example C.2.1. Compute $\displaystyle\int_{-\infty}^{\infty}\frac{x^2}{(x^2+1)(x^2+4)}\,dx$.
We note that the singularities of $f(z)=\dfrac{z^2}{(z^2+1)(z^2+4)}$ are not on the real axis. Then consider the closed curve $\gamma$ composed of the upper semicircle $C_R$ of radius $R$ and the segment $[-R,R]$, according to the following drawing:
[Figure C.1: the upper semicircle $C_R$ together with the segment $[-R,R]$.]
For $R$ sufficiently large, the only (simple) poles of $f(z)$ inside the curve are $z_0=i$ and $z_1=2i$ (see Figure C.1), and their residues are
$$\operatorname{Res}(f(z),z_0)=\frac{i^2}{(i+i)(i^2+4)}=-\frac{1}{6i},\qquad\operatorname{Res}(f(z),z_1)=\frac{(2i)^2}{((2i)^2+1)(2i+2i)}=\frac{1}{3i}.$$
Then
$$\int_\gamma\frac{z^2}{(z^2+1)(z^2+4)}\,dz=2\pi i\left(-\frac{1}{6i}+\frac{1}{3i}\right)=\frac{\pi}{3}$$
and also
$$\int_\gamma\frac{z^2}{(z^2+1)(z^2+4)}\,dz=\int_{C_R}\frac{z^2}{(z^2+1)(z^2+4)}\,dz+\int_{[-R,R]}\frac{x^2}{(x^2+1)(x^2+4)}\,dx.$$
Since the integral over $C_R$ tends to $0$ as $R\to\infty$ (on $C_R$ the integrand is bounded by a constant times $1/R^2$, over a path of length $\pi R$), we conclude
$$\int_{-\infty}^{\infty}\frac{x^2}{(x^2+1)(x^2+4)}\,dx=\frac{\pi}{3}.\tag{C.1}$$
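The value $\pi/3$ can be confirmed with a truncated midpoint rule (a rough numerical check of ours; the cutoff `L` and step count `N` are arbitrary choices, and truncating at $\pm L$ leaves a tail of roughly $2/L$):

```python
import math

L, N = 4000.0, 400000
h = 2 * L / N

def f(x):
    return x * x / ((x * x + 1) * (x * x + 4))

# midpoint rule on [-L, L]
approx = h * sum(f(-L + (k + 0.5) * h) for k in range(N))
err = abs(approx - math.pi / 3)
```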
Example C.2.2. Compute $\displaystyle\int_0^{\infty}\frac{\cos x}{x^2+1}\,dx$.
Since the integrand is an even function, we have $\displaystyle\int_0^{\infty}\frac{\cos x}{x^2+1}\,dx=\frac{1}{2}\int_{-\infty}^{\infty}\frac{\cos x}{x^2+1}\,dx$. Consider
$$f(z)=\frac{e^{iz}}{z^2+1}$$
defined inside the closed curve described in Example C.2.1 above. For radius $R$ sufficiently large, the only pole inside is $z_0=i$, and its residue is
$$\operatorname{Res}(f(z),z_0)=\lim_{z\to i}\frac{e^{iz}}{z+i}=\frac{e^{i\cdot i}}{i+i}=\frac{1}{2ei}.$$
Then
$$\int_\gamma\frac{e^{iz}}{z^2+1}\,dz=2\pi i\,\frac{1}{2ei}=\frac{\pi}{e}$$
and
$$\frac{\pi}{e}=\int_\gamma\frac{e^{iz}}{z^2+1}\,dz=\int_{C_R}\frac{e^{iz}}{z^2+1}\,dz+\int_{-R}^{R}\frac{\cos x}{x^2+1}\,dx+i\int_{-R}^{R}\frac{\sin x}{x^2+1}\,dx.$$
Before taking the limit as $R\to\infty$ we observe:
$$\lim_{R\to\infty}\int_{C_R}\frac{e^{iz}}{z^2+1}\,dz=0\quad\text{because}\quad\left|\int_{C_R}\frac{e^{iz}}{z^2+1}\,dz\right|\le\int_{C_R}\frac{|e^{iz}|}{|z^2+1|}\,|dz|\le\frac{1}{R^2-1}\,\pi R\to 0\ \text{as }R\to\infty.$$
The principal value of $\displaystyle\int_{-\infty}^{\infty}\frac{\sin x}{x^2+1}\,dx$ is $0$ because $\dfrac{\sin x}{x^2+1}$ is an odd function.
Hence
$$\int_0^{\infty}\frac{\cos x}{x^2+1}\,dx=\frac{\pi}{2e}.$$
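This result, too, is easy to confirm numerically (our check; integration by parts shows the tail beyond the cutoff `L` is of order $1/L^2$, so a moderate cutoff suffices):

```python
import math

L, N = 400.0, 400000
h = L / N

# midpoint rule for cos(x)/(x^2+1) on [0, L]
approx = h * sum(math.cos(x) / (x * x + 1)
                 for x in ((k + 0.5) * h for k in range(N)))

err = abs(approx - math.pi / (2 * math.e))
```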
We end these applications of the residue theorem by computing a real improper integral which requires avoiding a singularity of the complex function on the curve. To compute the example below we use the following result.
Proposition C.2.3 (Jordan's inequality). For every $R>0$,
$$\int_0^{\pi}e^{-R\sin t}\,dt<\frac{\pi}{R}\left(1-e^{-R}\right)<\frac{\pi}{R}.$$
Proof. Observe that
$$\frac{2}{\pi}\,t\le\sin t\quad\text{when }0\le t\le\frac{\pi}{2}.$$
[Figure C.2: the graphs of $y=\frac{2}{\pi}t$ and $y=\sin t$ on $[0,\frac{\pi}{2}]$.]
Then, when $R>0$ and $0\le t\le\frac{\pi}{2}$, we have $e^{-R\sin t}\le e^{-2Rt/\pi}$ and therefore
$$\int_0^{\pi/2}e^{-R\sin t}\,dt\le\int_0^{\pi/2}e^{-2Rt/\pi}\,dt=\frac{\pi}{2R}\left(1-e^{-R}\right)<\frac{\pi}{2R}.$$
On the other hand, doing the change of variable $s=t-\frac{\pi}{2}$ and using $\cos s\ge 1-\frac{2}{\pi}s$ on $[0,\frac{\pi}{2}]$, we have
$$\int_{t=\pi/2}^{\pi}e^{-R\sin t}\,dt=\int_{s=0}^{\pi/2}e^{-R\cos s}\,ds\le e^{-R}\int_0^{\pi/2}e^{2Rs/\pi}\,ds=e^{-R}\,\frac{\pi}{2R}\left(e^{R}-1\right)=\frac{\pi}{2R}\left(1-e^{-R}\right)<\frac{\pi}{2R}.$$
Adding the two bounds proves the proposition.
Example C.2.4. Compute $\displaystyle\int_0^{\infty}\frac{\sin x}{x}\,dx$.
For $R>\varepsilon>0$, consider the counterclockwise closed curve $\gamma$ composed of the following open curves: the upper semicircle $C_R$ of radius $R$, the segment $[-R,-\varepsilon]$, the upper semicircle $C_\varepsilon$ of radius $\varepsilon$, and the segment $[\varepsilon,R]$.
[Figure C.3: the contour formed by $C_R$, $[-R,-\varepsilon]$, $C_\varepsilon$ and $[\varepsilon,R]$.]
$f(z)=\dfrac{e^{iz}}{z}$ is holomorphic inside the curve $\gamma$, so $\displaystyle\int_\gamma f(z)\,dz=0$. Furthermore,
$$0=\int_\gamma f(z)\,dz=\int_{C_R}\frac{e^{iz}}{z}\,dz+\int_{-R}^{-\varepsilon}\frac{e^{ix}}{x}\,dx-\int_{C_\varepsilon}\frac{e^{iz}}{z}\,dz+\int_{\varepsilon}^{R}\frac{e^{ix}}{x}\,dx=$$
$$=\int_{C_R}\frac{e^{iz}}{z}\,dz-\int_{C_\varepsilon}\frac{e^{iz}}{z}\,dz+\int_{-R}^{-\varepsilon}\frac{\cos x}{x}\,dx+\int_{\varepsilon}^{R}\frac{\cos x}{x}\,dx+i\int_{-R}^{-\varepsilon}\frac{\sin x}{x}\,dx+i\int_{\varepsilon}^{R}\frac{\sin x}{x}\,dx,$$
where $C_\varepsilon$ carries its own counterclockwise orientation (inside $\gamma$ it is traversed clockwise, hence the minus sign). But the real functions $\frac{\cos x}{x}$ and $\frac{\sin x}{x}$ are, respectively, odd and even, hence
$$0=\int_{C_R}\frac{e^{iz}}{z}\,dz-\int_{C_\varepsilon}\frac{e^{iz}}{z}\,dz+2i\int_{\varepsilon}^{R}\frac{\sin x}{x}\,dx.\tag{C.2}$$
Now,
$$\left|\int_{C_R}\frac{e^{iz}}{z}\,dz\right|=\left|\int_0^{\pi}\frac{e^{iR(\cos t+i\sin t)}}{Re^{it}}\,Rie^{it}\,dt\right|\le\int_0^{\pi}\left|e^{iR\cos t}\right|e^{-R\sin t}\,dt=\int_0^{\pi}e^{-R\sin t}\,dt$$
and, using Proposition C.2.3, Jordan's inequality, we have
$$\left|\int_{C_R}\frac{e^{iz}}{z}\,dz\right|<\frac{\pi}{R}\to 0\quad\text{as }R\to\infty.$$
On the other hand,
$$\frac{e^{iz}}{z}=\frac{1}{z}+i+\frac{i^2z}{2!}+\frac{i^3z^2}{3!}+\cdots=\frac{1}{z}+g(z),$$
where $g(z)$ is holomorphic everywhere; in particular there exists a constant $M$ such that $|g(z)|\le M$ for $|z|\le 1$, and
$$\int_{C_\varepsilon}\frac{e^{iz}}{z}\,dz=\int_{C_\varepsilon}\frac{dz}{z}+\int_{C_\varepsilon}g(z)\,dz.$$
But
$$\int_{C_\varepsilon}\frac{dz}{z}=\int_0^{\pi}\frac{1}{\varepsilon e^{it}}\,i\varepsilon e^{it}\,dt=i\pi\qquad\text{and}\qquad\left|\int_{C_\varepsilon}g(z)\,dz\right|\le\pi\varepsilon M.$$
Hence
$$\lim_{\varepsilon\to 0^+}\int_{C_\varepsilon}\frac{e^{iz}}{z}\,dz=i\pi,$$
and taking limits in (C.2),
$$\int_0^{\infty}\frac{\sin x}{x}\,dx=\frac{\pi}{2}.$$
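The Dirichlet integral converges only conditionally, so a plain truncation approaches $\pi/2$ slowly; cutting at a multiple of $\pi$ keeps the alternating tail small. This numerical check is ours, with arbitrary cutoff and step count:

```python
import math

# midpoint rule for sin(x)/x on [0, L], L a multiple of pi
L = 2000 * math.pi
N = 500000
h = L / N
approx = h * sum(math.sin(x) / x for x in ((k + 0.5) * h for k in range(N)))

err = abs(approx - math.pi / 2)
```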
Exercises
Exercise 3.1
Use residues to evaluate the following:
1. $\displaystyle\int_0^{2\pi}\frac{\cos 2\theta}{5-3\cos\theta}\,d\theta$.
2. $\displaystyle\int_0^{2\pi}\frac{d\theta}{1+a\cos\theta}$, with $|a|<1$.
Exercise 3.2
Evaluate $\displaystyle\int_0^{\infty}\frac{\sqrt{x}}{1+x^2}\,dx$.
Hint: use the change of variable $x=u^2$ to convert it into a rational integral and apply the residue method.
Exercise 3.3
Evaluate $\displaystyle\int_0^{\infty}\frac{\cos x^2-\sin x^2}{1+x^4}\,dx$.
Hint: consider $f(z)=\dfrac{e^{iz^2}}{1+z^4}$.