
Foreword

The present book follows the first-semester analytic curriculum of the course 'Mathematics for Engineers' at the Faculty of Engineering in Foreign Languages (FILS), University Politehnica of Bucharest; this is why the combination of its two principal parts, Systems of Differential Equations (SDEs) and Complex Analysis, may seem a little unusual.
The book intends to bring together all the elements necessary for a profound understanding of the selected mathematical notions, both in theory and in applications.
In the first five chapters, the interested reader finds the definitions of the notions, properties, theorems with complete proofs, suggestive examples, completely solved exercises, and exercises with or without a hint (solving these exercises is a good way to verify one's understanding of the material). The last chapter is an invitation to the interested reader to use specialized mathematical software, like MATLAB or MAPLE, for practical applications of the notions presented in the previous chapters.
The book aims to be a convenient source for studying linear, homogeneous and non-homogeneous SDEs, complex differentiation and integration of functions of a complex variable, and residue theory with its applications, sparing the reader searches on the Internet or numerous trips to the library.
Even if the present book is mainly addressed to students in the 1st or 2nd year of study in an engineering faculty, it may be a useful tool for students attending Master's studies or for students who want to participate in a mathematics competition, like 'Traian Lalescu'.
The authors would like to express warm thanks to the referees who read the material and contributed to its improvement, and to all the colleagues for their precious suggestions and valuable observations. Very special thanks go to Laurenţiu Toader, who designed a significant part of the figures.

Contents

Foreword i

1 Systems of Differential Equations (SDEs) 1
1.1 Linear and Homogeneous SDEs . . . 2
1.1.1 First Order Linear SDEs . . . 2
1.1.2 Linear and Homogeneous SDEs with Constant Coefficients . . . 6
1.2 Linear and Non-homogeneous SDEs . . . 13
1.2.1 The fundamental matrix . . . 13
1.2.2 Non-homogeneous SDEs . . . 18
1.2.3 Linear Control Systems . . . 23
1.3 Stability . . . 23
1.3.1 Asymptotic stability . . . 23
1.3.2 The Routh-Hurwitz Criterion . . . 27
1.3.3 The Lyapunov Equation . . . 28
1.3.4 Stability of Nonlinear Systems . . . 31
1.4 Exercises . . . 34

2 Functions of a Complex Variable 61
2.1 The Field C . . . 61
2.2 The Topology on C̃ . . . 67
2.3 Examples of Functions of a Complex Variable . . . 69
2.4 Exercises . . . 76

3 Complex Differentiation 83
3.1 Analytic Functions . . . 83
3.2 Harmonic Conjugates . . . 87
3.3 Analytic Continuation . . . 88
3.4 Determination of Analytic Functions . . . 89
3.5 Exercises . . . 94

4 Complex Integration 107
4.1 The Complex Line Integral . . . 107
4.2 Cauchy's Theorems . . . 111
4.3 Taylor and Laurent Series . . . 121
4.3.1 Taylor Series . . . 122
4.3.2 Laurent Series . . . 125
4.4 Classification of Singular Points . . . 129
4.5 Exercises . . . 133

5 Residue Theory 141
5.1 The Residue Theorem . . . 142
5.2 Computation of Residues at Poles . . . 144
5.3 Jordan's Lemmas . . . 146
5.4 Applications of the Residue Theorem . . . 149
5.5 Exercises . . . 155

6 SOFTWARE 171
6.1 Stability with MATLAB . . . 171
6.2 MATLAB and MAPLE Commands for Complex Numbers and for Functions of a Complex Variable . . . 178
6.2.1 MATLAB . . . 178
6.2.2 MAPLE . . . 181

Bibliography 193

Index 197

Chapter 1

Systems of Differential Equations (SDEs)

Controlling the speed of an electric motor in electrical engineering, describing how drugs propagate through the body in biology, or defining inventory models in economics can all be mathematically described and solved using the theory of systems of differential equations (SDEs). Like single ODEs (ordinary differential equations), SDEs are classified as linear or nonlinear, homogeneous or non-homogeneous, with constant or variable coefficients.

Józef Hoene-Wroński, Thomas Muir [22] and Giuseppe Peano [26] are just a few of the many mathematicians who developed this important branch of mathematics. About the life and the mathematical contributions of Wroński, one should consult [30] (translated from Polish).

The definition of a stable SDE is due to Henri Poincaré [28], [29] (in French) and Aleksandr Lyapunov [17] (translated from Russian into French by Édouard Davaux and from French into English by A. T. Fuller). They observed that a system is stable only if the initial conditions eventually lose their influence.

Aspects of the qualitative theory of differential equations are presented in [7], [9, Chapter 5], [10], [11], [12], [13], [14, Chapters 1-5], [24].

1.1 Linear and Homogeneous SDEs
1.1.1 First Order Linear SDEs
A linear system of differential equations (an SDE, for short) has the normal form

$$\begin{cases} y_1' = a_{11}(x)y_1 + a_{12}(x)y_2 + \cdots + a_{1n}(x)y_n + b_1(x) \\ y_2' = a_{21}(x)y_1 + a_{22}(x)y_2 + \cdots + a_{2n}(x)y_n + b_2(x) \\ \cdots \\ y_n' = a_{n1}(x)y_1 + a_{n2}(x)y_2 + \cdots + a_{nn}(x)y_n + b_n(x) \end{cases} \tag{1.1}$$

Assume that $a_{ij} : I \to \mathbb{R}$ and $b_i : I \to \mathbb{R}$ are continuous functions on an interval $I \subset \mathbb{R}$, $\forall i, j = \overline{1, n}$.
The system is said to be homogeneous if $b_i \equiv 0$, $\forall i = \overline{1, n}$, and it will be denoted (1.1′).
Let us consider the matrix $A(x)$ and the vectors $Y(x)$ and $B(x)$,

$$A(x) = \begin{pmatrix} a_{11}(x) & a_{12}(x) & \cdots & a_{1n}(x) \\ a_{21}(x) & a_{22}(x) & \cdots & a_{2n}(x) \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1}(x) & a_{n2}(x) & \cdots & a_{nn}(x) \end{pmatrix}, \quad Y(x) = \begin{pmatrix} y_1(x) \\ y_2(x) \\ \vdots \\ y_n(x) \end{pmatrix}, \quad B(x) = \begin{pmatrix} b_1(x) \\ b_2(x) \\ \vdots \\ b_n(x) \end{pmatrix}.$$

Then the system (1.1) can be written in the form

$$Y'(x) = A(x)Y(x) + B(x), \tag{1.2}$$

and the homogeneous one (1.1′) has the form

$$Y'(x) = A(x)Y(x).$$

A solution is a vector function $Y : I \to \mathbb{R}^n$ which verifies (1.1). The general solution is a solution $Y(x, C_1, C_2, \ldots, C_n)$, where $C_1, C_2, \ldots, C_n$ are arbitrary constants, such that any solution of (1.1) can be obtained from it by a suitable choice of the constants $C_1, C_2, \ldots, C_n$.

An initial condition is an equality of the form

$$Y(x_0) = Y_0, \tag{1.3}$$

where $x_0 \in I$ and $Y_0 \in \mathbb{R}^n$. The problem of finding the solution of (1.1) or (1.1′) which verifies (1.3) is called the initial value problem.

Theorem 1.1.1 (Existence and Uniqueness Theorem). Consider the linear SDE given by $Y'(x) = A(x)Y(x) + B(x)$. Then

i) if $A : I \to \mathbb{R}^{n \times n}$ is continuous on $I$, then for any $x_0 \in I$ and $Y_0 \in \mathbb{R}^n$ there exists a solution $Y(x)$ of the initial value problem (1.1′), (1.3), and this solution is unique;

ii) if, moreover, $B$ is continuous on $I$, then the initial value problem (1.1), (1.3) also has a unique solution.

Proof. An idea for the proof in case i) is to consider the sequence of vector functions $(Y_k)_{k \in \mathbb{N}}$ defined by

$$Y_0(x) = Y_0, \quad Y_k(x) = Y_0 + \int_{x_0}^{x} A(t)\,Y_{k-1}(t)\,dt, \quad k \ge 1.$$

It can be proved that this sequence is uniformly convergent on any compact interval $J \subset I$.
Let $Y$ be the limit of the sequence $(Y_k)_{k \in \mathbb{N}}$ on $J$. Then the sequence $(AY_k)_{k \in \mathbb{N}}$ is uniformly convergent to $AY$ on $J$ and we can take the limit, as $k \to \infty$, in the equality

$$Y_k(x) = Y_0 + \int_{x_0}^{x} A(t)\,Y_{k-1}(t)\,dt. \tag{1.4}$$

One obtains that $Y(x) = Y_0 + \int_{x_0}^{x} A(t)\,Y(t)\,dt$. By differentiating, it follows that $Y'(x) = A(x)Y(x)$, and for $x = x_0$, $Y(x_0) = Y_0$; hence the limit $Y$ is the solution of the initial value problem in case i).
Similarly, one can prove case ii) by considering the sequence

$$Y_0(x) = Y_0, \quad Y_k(x) = Y_0 + \int_{x_0}^{x} \left[ A(t)\,Y_{k-1}(t) + B(t) \right] dt, \quad k \ge 1.$$

Now, if $Y_1$ and $Y_2$ are two solutions of (1.1′) with $Y_1(x_0) = Y_2(x_0)$, then $Y(x) = Y_1(x) - Y_2(x)$ verifies $Y(x_0) = 0$ and

$$\|Y(x)\| = \left\| \int_{x_0}^{x} A(t)\left[ Y_1(t) - Y_2(t) \right] dt \right\|.$$

Using some elementary computations and estimates, one can prove that $\|Y(x)\|$ is 0, so the solution is unique.
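To see the iteration at work numerically, here is a minimal sketch (an added illustration, not part of the original text): the test matrix, the number of quadrature nodes and the number of iterations are assumed choices, and the integral is approximated by the trapezoidal rule.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # assumed test matrix
Y0 = np.array([1.0, 0.0])            # initial condition Y(x0) = Y0
x0, x = 0.0, 1.0
N = 400                              # quadrature nodes on [x0, x]

ts = np.linspace(x0, x, N)
dt = ts[1] - ts[0]
Yk = np.tile(Y0, (N, 1))             # Y_0(t) = Y0 for all t

for _ in range(20):                  # twenty Picard iterations
    AY = Yk @ A.T                    # row i holds A @ Y_k(t_i)
    # cumulative trapezoidal integral of A Y_k from x0 up to each t_i
    integ = np.vstack([np.zeros(2),
                       np.cumsum((AY[1:] + AY[:-1]) / 2, axis=0) * dt])
    Yk = Y0 + integ

# for this A the exact solution is (cos x, -sin x); compare at x = 1
print(Yk[-1], np.array([np.cos(1.0), -np.sin(1.0)]))
```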

Proposition 1.1.2. Any linear combination of solutions of the homogeneous system (1.1′) is a solution of (1.1′).

Proof. Let $Y_i$, $i = \overline{1, k}$, be solutions of (1.1′), i.e. $Y_i' = AY_i$, and let $C_i$, $i = \overline{1, k}$, be real constants. Consider the linear combination $Y = \sum_{i=1}^{k} C_i Y_i$. Then

$$Y' = \sum_{i=1}^{k} C_i Y_i' = \sum_{i=1}^{k} C_i A Y_i = A\left( \sum_{i=1}^{k} C_i Y_i \right) = AY,$$

hence $Y$ is a solution of the system (1.1′).

It follows that the set of the solutions of a homogeneous system is a real linear space. Obviously, Proposition 1.1.2 remains valid in the case of complex constants $C_i$.
The solutions $Y_1, Y_2, \ldots, Y_k$ of (1.1′) are linearly independent on $I \subset \mathbb{R}$ if from $C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_k Y_k(x) = 0$, $\forall x \in I$, one can infer that $C_1 = C_2 = \cdots = C_k = 0$.

Proposition 1.1.3. The solutions $Y_1, Y_2, \ldots, Y_k$ of (1.1′) are linearly independent on $I \subset \mathbb{R}$ if and only if the vectors $Y_1(x_0), Y_2(x_0), \ldots, Y_k(x_0)$ are linearly independent, where $x_0 \in I$ is an arbitrary point.

Proof. Obviously, if $Y_1(x_0), Y_2(x_0), \ldots, Y_k(x_0)$ are linearly independent vectors and $C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_k Y_k(x) = 0$, $\forall x \in I$, then this equality holds for $x = x_0$ and, by hypothesis, $C_1 = C_2 = \cdots = C_k = 0$; hence $Y_1(x), Y_2(x), \ldots, Y_k(x)$ are linearly independent solutions.
Conversely, assume that $Y_1(x), Y_2(x), \ldots, Y_k(x)$ are linearly independent and $C_1 Y_1(x_0) + C_2 Y_2(x_0) + \cdots + C_k Y_k(x_0) = 0$. It follows by Proposition 1.1.2 that $Y = C_1 Y_1 + C_2 Y_2 + \cdots + C_k Y_k$ is a solution of (1.1′) which verifies $Y(x_0) = 0$. But the null function $Z(x) \equiv 0$ obviously verifies (1.1′) and $Z(x_0) = 0$. By the Existence and Uniqueness Theorem (see Theorem 1.1.1) we have that $Y(x) \equiv Z(x) \equiv 0$, i.e. $C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_k Y_k(x) = 0$, $\forall x \in I$; hence $C_1 = C_2 = \cdots = C_k = 0$. It follows that $Y_1(x_0), Y_2(x_0), \ldots, Y_k(x_0)$ are linearly independent.

Let us consider $n$ solutions of the homogeneous SDE (1.1′),

$$Y_1 = \begin{pmatrix} y_{11} \\ y_{21} \\ \vdots \\ y_{n1} \end{pmatrix}, \quad Y_2 = \begin{pmatrix} y_{12} \\ y_{22} \\ \vdots \\ y_{n2} \end{pmatrix}, \quad \ldots, \quad Y_n = \begin{pmatrix} y_{1n} \\ y_{2n} \\ \vdots \\ y_{nn} \end{pmatrix}.$$

The determinant with the columns $Y_1, Y_2, \ldots, Y_n$ is called the Wronskian of the solutions $Y_1, Y_2, \ldots, Y_n$ and is denoted $W(x)$. Hence

$$W(x) = \begin{vmatrix} y_{11}(x) & y_{12}(x) & \cdots & y_{1n}(x) \\ y_{21}(x) & y_{22}(x) & \cdots & y_{2n}(x) \\ \cdots & \cdots & \cdots & \cdots \\ y_{n1}(x) & y_{n2}(x) & \cdots & y_{nn}(x) \end{vmatrix}.$$

Proposition 1.1.4. The solutions $Y_1, Y_2, \ldots, Y_n$ of (1.1′) are linearly independent on $I \subset \mathbb{R}$ if and only if $W(x) \neq 0$, $\forall x \in I$.

Proof. The solutions $Y_1, Y_2, \ldots, Y_n$ of (1.1′) are linearly independent on $I$ if from $C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_n Y_n(x) = 0$, $\forall x \in I$, we obtain that $C_1 = C_2 = \cdots = C_n = 0$.
This is equivalent (by the componentwise equality of the above vectors) to the fact that the linear homogeneous system with the indeterminates $C_1, C_2, \ldots, C_n$,

$$\begin{cases} C_1 y_{11}(x) + C_2 y_{12}(x) + \cdots + C_n y_{1n}(x) = 0 \\ C_1 y_{21}(x) + C_2 y_{22}(x) + \cdots + C_n y_{2n}(x) = 0 \\ \cdots \\ C_1 y_{n1}(x) + C_2 y_{n2}(x) + \cdots + C_n y_{nn}(x) = 0 \end{cases} \tag{1.5}$$

has only the trivial solution $C_1 = C_2 = \cdots = C_n = 0$.
This is equivalent to the fact that, for any $x \in I$, the determinant of the system (1.5) is different from 0, i.e. $W(x) \neq 0$, $\forall x \in I$.

By Proposition 1.1.3, it follows that $W(x) \neq 0$, $\forall x \in I$ $\iff$ $\exists x_0 \in I$ such that $W(x_0) \neq 0$.
The Wronskian is given by Liouville's formula

$$W(x) = W(x_0)\, e^{\int_{x_0}^{x} \operatorname{Tr} A(t)\,dt}, \tag{1.6}$$

where $\operatorname{Tr} A$ is the trace of the matrix $A$, i.e. $\operatorname{Tr} A = \sum_{i=1}^{n} a_{ii}$; formula (1.6) underlines the above equivalence.

Definition 1.1.5. A set of $n$ solutions $Y_1, Y_2, \ldots, Y_n$ of (1.1′) with a nonzero Wronskian is called a fundamental system of solutions of the homogeneous SDE (1.1′).

It follows that $Y_1, Y_2, \ldots, Y_n$ is a fundamental system of solutions if and only if they are linearly independent as elements of the linear space of continuous $n$-vector functions on $I$. This means that they form a basis of the $n$-dimensional subspace of the solutions of (1.1′), i.e. any solution $Y(x)$ can be written uniquely as

$$Y(x) = C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_n Y_n(x), \tag{1.7}$$

where $C_1, C_2, \ldots, C_n \in \mathbb{R}$. It follows that, if $Y_1, Y_2, \ldots, Y_n$ is a fundamental system of solutions, then the general solution of (1.1′) is (1.7), where $C_1, C_2, \ldots, C_n$ are arbitrary real constants.

1.1.2 Linear and Homogeneous SDEs with Constant Coefficients

Consider the system

$$Y'(x) = AY(x), \tag{1.8}$$

where $A$ is a constant $n \times n$ real matrix.

In this case, the general solution of the SDE can be obtained by using the eigenvalues, the eigenvectors and the generalized eigenvectors of $A$.
Let us recall the necessary notions from Linear Algebra.
The characteristic polynomial of the matrix $A$ is $\det(sI_n - A)$. The roots $\lambda \in \mathbb{C}$ of the characteristic polynomial are called the eigenvalues of $A$.
The set of the eigenvalues of $A$ is called the spectrum of $A$ and it is denoted by $\sigma(A)$.
A vector $v \in \mathbb{C}^n$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda$ if

$$Av = \lambda v \quad \text{and} \quad v \neq 0. \tag{1.9}$$

For an eigenvalue $\lambda$, the set formed by the null vector and by all corresponding eigenvectors is called the eigenspace of $\lambda$ and it is denoted $S_\lambda(A)$. The geometric multiplicity of $\lambda$, denoted by $m_g(\lambda)$, is the dimension of $S_\lambda(A)$ (hence $m_g(\lambda)$ is the maximum number of linearly independent eigenvectors which correspond to $\lambda$). The algebraic multiplicity of $\lambda$, $m_a(\lambda)$, is the multiplicity of $\lambda$ as a root of $\det(sI_n - A)$. Obviously, $m_a(\lambda) \ge m_g(\lambda)$ for every eigenvalue $\lambda$.
If $m_a(\lambda) = m_g(\lambda)$, let us denote this number by $m$; hence $\lambda$ occupies $m$ places in $\sigma(A)$ and there are $m$ eigenvectors $v_1, v_2, \ldots, v_m$ that verify

$$Av_1 = \lambda v_1, \quad Av_2 = \lambda v_2, \quad \ldots, \quad Av_m = \lambda v_m. \tag{1.10}$$

Consider a basis $B$ of the space $\mathbb{C}^n$ obtained by completing the set of these vectors, i.e. $B = \{v_1, v_2, \ldots, v_m, u_1, u_2, \ldots, u_{n-m}\}$, and let $T$ be the transition matrix having as columns the vectors of $B$. Due to (1.10), the matrix $\widetilde{A} = T^{-1}AT$, i.e. the matrix in the new basis $B$, has the structure

$$\widetilde{A} = \begin{pmatrix} A_\lambda & 0 \\ 0 & B \end{pmatrix}, \quad \text{where} \quad A_\lambda = \begin{pmatrix} \lambda & 0 & \cdots & 0 \\ 0 & \lambda & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & \lambda \end{pmatrix}.$$

In other words, if $m_g(\lambda) = m_a(\lambda) = m$, the eigenvalue $\lambda$ appears in $\widetilde{A}$ in $m$ Jordan cells of dimension 1, $J_1 = [\lambda]$.
If, for some $\lambda \in \sigma(A)$, $m_g(\lambda) < m_a(\lambda)$, then, in order to construct a new basis $B$, the $m_g(\lambda)$ linearly independent eigenvectors are completed to a number of $m_a(\lambda)$ vectors by chains of generalized eigenvectors. Such a chain $\{v_1, v_2, \ldots, v_k\}$ starts with an eigenvector, say $v_1$, and verifies the relations

$$Av_1 = \lambda v_1, \; Av_2 = \lambda v_2 + v_1, \; \ldots, \; Av_i = \lambda v_i + v_{i-1}, \; \ldots, \; Av_k = \lambda v_k + v_{k-1}. \tag{1.11}$$

The vectors $v_2, \ldots, v_k$ are also called principal vectors. Then, by the above construction, due to (1.11), this chain will provide, on the block diagonal of the matrix $\widetilde{A}$, a Jordan cell of dimension $k$:

$$J = \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda & 1 & \cdots & 0 & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & 0 & \cdots & \lambda & 1 \\ 0 & 0 & 0 & \cdots & 0 & \lambda \end{pmatrix}.$$

Now, let us determine the general solution of the linear homogeneous SDE given by (1.8). We distinguish two cases, according to whether the matrix $A$ is diagonalizable or not.

Case I: The matrix $A$ is diagonalizable.

By the previous discussion, this happens if and only if $m_g(\lambda) = m_a(\lambda)$ for every $\lambda \in \sigma(A)$; hence the matrix $A$ has $n$ linearly independent eigenvectors $v_1, v_2, \ldots, v_n$, which correspond to the eigenvalues (counted with multiplicities) $\lambda_1, \lambda_2, \ldots, \lambda_n$, i.e., by (1.9),

$$Av_i = \lambda_i v_i, \quad v_i \neq 0, \quad \forall i = \overline{1, n}. \tag{1.12}$$

Proposition 1.1.6. The vector functions

$$Y_1(x) = v_1 e^{\lambda_1 x}, \; \ldots, \; Y_i(x) = v_i e^{\lambda_i x}, \; \ldots, \; Y_n(x) = v_n e^{\lambda_n x} \tag{1.13}$$

form a fundamental system of solutions for the system (1.8).

Proof. Since the column vectors $v_1, v_2, \ldots, v_n$ are linearly independent, we have $\det[v_1, v_2, \ldots, v_n] \neq 0$.
Let us introduce the vector function $Y_i(x) = v_i e^{\lambda_i x}$ in both members of (1.8). We first have that

$$Y_i'(x) = \left( v_i e^{\lambda_i x} \right)' = \lambda_i v_i e^{\lambda_i x},$$

and, using (1.12), we obtain that

$$AY_i(x) = A\left( v_i e^{\lambda_i x} \right) = (Av_i)\, e^{\lambda_i x} = \lambda_i v_i e^{\lambda_i x}.$$

Therefore $Y_i'(x) = AY_i(x)$; hence $Y_i(x) = v_i e^{\lambda_i x}$ is a solution of the system (1.8), $\forall i = \overline{1, n}$.
The Wronskian of these solutions is

$$W(x) = \det\left[ Y_1(x), Y_2(x), \ldots, Y_n(x) \right] = \det\left[ v_1 e^{\lambda_1 x}, v_2 e^{\lambda_2 x}, \ldots, v_n e^{\lambda_n x} \right] = e^{\lambda_1 x} e^{\lambda_2 x} \cdots e^{\lambda_n x} \det\left[ v_1, v_2, \ldots, v_n \right].$$

The last equality is true since $e^{\lambda_i x}$ is a common factor in the $i$th column of the determinant.
Due to the fact that $e^{\lambda_i x} \neq 0$ and $\det[v_1, v_2, \ldots, v_n] \neq 0$, it follows that $W(x) \neq 0$, so we deduce that (1.13) is a fundamental system of solutions.

Using (1.12), (1.13) and (1.7), we can summarize the method of solving a linear homogeneous SDE.

Algorithm 1.1.7 (The diagonalizable case).

[Stage 1]: Solve the characteristic equation of the matrix $A$,

$$\det(sI_n - A) = 0,$$

find the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and determine their algebraic multiplicities $m_a(\lambda_i)$;
[Stage 2]: For each eigenvalue $\lambda_i$, solve the linear homogeneous algebraic system

$$(A - \lambda_i I_n)v = 0$$

and determine the corresponding eigenvectors $v_i$. If $m_g(\lambda) = m_a(\lambda)$ for all $\lambda \in \sigma(A)$, go to [Stage 3]; otherwise, go to [Stage 2] of Algorithm 1.1.11;
[Stage 3]: Write the general solution as

$$Y(x) = C_1 v_1 e^{\lambda_1 x} + C_2 v_2 e^{\lambda_2 x} + \cdots + C_n v_n e^{\lambda_n x}. \tag{1.14}$$
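As a numerical companion to Algorithm 1.1.7 (an added sketch, not part of the original text; the matrix and the initial condition are assumed test data), the three stages map directly onto numpy's eigendecomposition:

```python
import numpy as np

A = np.array([[-3.0, 1.0, 1.0],
              [1.0, -3.0, 1.0],
              [1.0, 1.0, -1.0]])     # assumed diagonalizable test matrix

lam, V = np.linalg.eig(A)            # Stages 1-2: eigenvalues, eigenvectors

def general_solution(x, C):
    # Stage 3: Y(x) = sum_i C_i v_i e^{lambda_i x}
    return V @ (C * np.exp(lam * x))

# fix the constants by an initial condition Y(0) = Y0, i.e. V C = Y0
Y0 = np.array([1.0, 0.0, 0.0])
C = np.linalg.solve(V, Y0)
print(general_solution(0.5, C))
```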

Remark 1.1.8. Since $m_g(\lambda) = m_a(\lambda)$, $\forall \lambda \in \sigma(A)$, to any $\lambda$ there corresponds a number of linearly independent eigenvectors equal to the multiplicity of $\lambda$ in the spectrum of $A$; hence the total number of linearly independent eigenvectors is $n$ (the dimension of the matrix $A$).

Remark 1.1.9. If one considers the basis matrix $T = [v_1, v_2, \ldots, v_n]$, where $v_1, v_2, \ldots, v_n$ are linearly independent eigenvectors, the matrix $\widetilde{A} = T^{-1}AT$ is a diagonal matrix with the eigenvalues $\lambda_i$ on the main diagonal,

$$\widetilde{A} = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}.$$

Case II: The matrix $A$ is not diagonalizable.

Therefore $\exists \lambda \in \sigma(A)$ with $m_a(\lambda) > m_g(\lambda)$, and there exists a chain (or there exist several chains) of generalized eigenvectors corresponding to $\lambda$.
Consider the chain $\{v_1, v_2, \ldots, v_m\}$ with

$$Av_1 = \lambda v_1, \; v_1 \neq 0, \quad Av_2 = \lambda v_2 + v_1, \quad \ldots, \quad Av_k = \lambda v_k + v_{k-1}, \quad \ldots, \quad Av_m = \lambda v_m + v_{m-1}. \tag{1.15}$$

We have the following result.
Proposition 1.1.10. The following vector functions corresponding to the chain (1.15),

$$Y_1(x) = v_1 e^{\lambda x},$$
$$Y_2(x) = \left( \frac{x}{1!} v_1 + v_2 \right) e^{\lambda x},$$
$$Y_3(x) = \left( \frac{x^2}{2!} v_1 + \frac{x}{1!} v_2 + v_3 \right) e^{\lambda x},$$
$$\cdots$$
$$Y_k(x) = \left( \frac{x^{k-1}}{(k-1)!} v_1 + \frac{x^{k-2}}{(k-2)!} v_2 + \cdots + \frac{x}{1!} v_{k-1} + v_k \right) e^{\lambda x}, \quad k \le m, \tag{1.16}$$

are linearly independent solutions of (1.8).

Proof. By Proposition 1.1.6, since $v_1$ is an eigenvector, we have that $Y_1(x)$ is a solution.
Next we introduce $Y_i(x)$ in both members of (1.8). By differentiating the terms in (1.16), one obtains, for instance, that

$$\left( \frac{x^{k-1}}{(k-1)!} v_1 \right)' = \frac{(k-1)x^{k-2}}{(k-1)!} v_1 = \frac{x^{k-2}}{(k-2)!} v_1,$$

hence

$$Y_k'(x) = \left( \frac{x^{k-2}}{(k-2)!} v_1 + \frac{x^{k-3}}{(k-3)!} v_2 + \cdots + \frac{1}{1!} v_{k-1} \right) e^{\lambda x} + \left( \frac{x^{k-1}}{(k-1)!} v_1 + \frac{x^{k-2}}{(k-2)!} v_2 + \cdots + \frac{x}{1!} v_{k-1} + v_k \right) \lambda e^{\lambda x}.$$

On the other hand, the equalities (1.15) imply that

$$AY_k(x) = \left( \frac{x^{k-1}}{(k-1)!} Av_1 + \cdots + \frac{x}{1!} Av_{k-1} + Av_k \right) e^{\lambda x} = \left( \frac{x^{k-1}}{(k-1)!} \lambda v_1 + \cdots + \frac{x}{1!} (\lambda v_{k-1} + v_{k-2}) + (\lambda v_k + v_{k-1}) \right) e^{\lambda x},$$

hence $Y_k'(x) = AY_k(x)$, i.e. $Y_k(x)$ is a solution of (1.8).
Since $v_1 \neq 0$, the solutions $Y_1(x), Y_2(x), \ldots, Y_i(x), \ldots$ are vector polynomials of degrees $0, 1, \ldots, i-1, \ldots$ respectively, multiplied by $e^{\lambda x} \neq 0$; hence they are linearly independent solutions. The eigenvectors which correspond to distinct eigenvalues are linearly independent, and the same is true for the eigenvectors which start different chains corresponding to the same eigenvalue. It follows that the solutions of the form (1.16) corresponding to different chains or eigenvalues are linearly independent; their number is $n$ and they form a fundamental system of solutions.

The general solution of the SDE (1.8) can be obtained in this case by the
following algorithm, in which an eigenvector without generalized eigenvectors
will be considered as a chain of length 1.

Algorithm 1.1.11 (The general case).

[Stage 1]: Solve the characteristic equation of the matrix $A$,

$$\det(sI_n - A) = 0,$$

find the eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and determine their algebraic multiplicities $m_a(\lambda_i)$;
[Stage 2]: For each $\lambda_i$, determine chains of generalized eigenvectors by solving the algebraic systems

$$(A - \lambda_i I_n)v = 0$$

for chains of length 1, and

$$(A - \lambda_i I_n)v_1 = 0, \quad (A - \lambda_i I_n)v_2 = v_1, \quad (A - \lambda_i I_n)v_3 = v_2, \quad \ldots, \quad (A - \lambda_i I_n)v_m = v_{m-1}$$

for chains of length $m > 1$;
[Stage 3]: Write the general solution of (1.8),

$$Y(x) = C_1 Y_1(x) + C_2 Y_2(x) + \cdots + C_k Y_k(x) + \cdots + C_n Y_n(x),$$

where

$$Y_k(x) = v_k e^{\lambda_k x}$$

for chains of length 1, and

$$Y_k(x) = \left( \frac{x^{k-1}}{(k-1)!} v_1 + \frac{x^{k-2}}{(k-2)!} v_2 + \cdots + \frac{x}{1!} v_{k-1} + v_k \right) e^{\lambda_k x}$$

for chains of length greater than 1, with $C_1, C_2, \ldots, C_n \in \mathbb{R}$.
It follows from (1.13), (1.14) and (1.16) that the solutions $Y(x)$ of the initial value problem $Y'(x) = AY(x)$ (1.1′), $Y(x_0) = Y_0$ (1.3) are vectors whose components are linear combinations of functions of the form $x^j e^{\lambda x}$, where $\lambda \in \sigma(A)$ and $j \ge 0$. More precisely, one can state the following result, which will be useful in the study of the stability of the SDE (1.1′) (see Section 1.3.3).

Theorem 1.1.12. Let $\lambda$ be an eigenvalue of the matrix $A$.

i) If $m_a(\lambda) = m_g(\lambda)$, then in the solutions $Y(x)$ corresponding to $\lambda$ only functions of the form $e^{\lambda x}$ can occur;

ii) If $m_a(\lambda) > m_g(\lambda)$, then there exist some solutions $Y(x)$ whose components contain functions of the form $x^j e^{\lambda x}$, where $j \ge 1$.

1.2 Linear and Non-homogeneous SDEs

1.2.1 The fundamental matrix

Consider the initial value problem given by (1.1′) and (1.3):

$$Y'(x) = A(x)Y(x), \quad Y(x_0) = Y_0,$$

where $A : I \subset \mathbb{R} \to \mathbb{R}^{n \times n}$ is continuous on $I$, and the initial moment $x_0 \in I$ and the initial position $Y_0 \in \mathbb{R}^n$ are given.
Let us denote by $Y_1(x, x_0), Y_2(x, x_0), \ldots, Y_n(x, x_0)$ the solutions of the following initial value problems:

$$Y_1'(x, x_0) = A(x)Y_1(x, x_0), \quad Y_1(x_0, x_0) = e_1 = (1, 0, 0, \ldots, 0)^T,$$
$$Y_2'(x, x_0) = A(x)Y_2(x, x_0), \quad Y_2(x_0, x_0) = e_2 = (0, 1, 0, \ldots, 0)^T,$$
$$\cdots$$
$$Y_n'(x, x_0) = A(x)Y_n(x, x_0), \quad Y_n(x_0, x_0) = e_n = (0, 0, 0, \ldots, 1)^T.$$

Definition 1.2.1. The matrix formed by the columns $Y_1, Y_2, \ldots, Y_n$ is called the fundamental matrix of the SDE (1.1′) (or of the matrix $A(x)$) and is denoted $\Phi(x, x_0)$; hence

$$\Phi(x, x_0) = [Y_1(x, x_0), Y_2(x, x_0), \ldots, Y_n(x, x_0)]. \tag{1.17}$$

Theorem 1.2.2. The fundamental matrix $\Phi(x, x_0)$ has the following properties:

i) $\dfrac{d}{dx}\Phi(x, x_0) = A(x)\Phi(x, x_0)$, $\forall x, x_0 \in I$;

ii) $\Phi(x_0, x_0) = I_n$, $\forall x_0 \in I$;

iii) The solution of the initial value problem (1.1′), (1.3) is $\widetilde{Y}(x) = \Phi(x, x_0)Y_0$;

iv) $\Phi(x, x_1)\Phi(x_1, x_0) = \Phi(x, x_0)$, $\forall x, x_0, x_1 \in I$. We say that the fundamental matrix satisfies the semigroup property;

v) $\Phi(x, x_0)^{-1} = \Phi(x_0, x)$;

vi) We have the so-called Peano-Baker formula¹:

$$\Phi(x, x_0) = I_n + \int_{x_0}^{x} A(t_1)\,dt_1 + \int_{x_0}^{x} A(t_1)\int_{x_0}^{t_1} A(t_2)\,dt_2\,dt_1 + \cdots + \int_{x_0}^{x} A(t_1)\int_{x_0}^{t_1} A(t_2)\cdots\int_{x_0}^{t_{k-1}} A(t_k)\,dt_k\cdots dt_2\,dt_1 + \cdots; \tag{1.18}$$

vii) If $A$ is a constant matrix, then $\Phi(x, x_0) = e^{A(x - x_0)}$, where the exponential matrix is defined by

$$e^{A(x - x_0)} = I_n + \frac{A(x - x_0)}{1!} + \frac{A^2(x - x_0)^2}{2!} + \cdots + \frac{A^k(x - x_0)^k}{k!} + \cdots.$$

Proof. i) The derivative of a matrix is the matrix of the derivatives of all its entries. Therefore, the derivative can be calculated, for instance, by taking the derivatives of all the columns $Y_i(x, x_0)$ and by using (1.1′):

$$\frac{d}{dx}\Phi(x, x_0) = [Y_1'(x, x_0), \ldots, Y_n'(x, x_0)] = [A(x)Y_1(x, x_0), \ldots, A(x)Y_n(x, x_0)] = A(x)[Y_1(x, x_0), \ldots, Y_n(x, x_0)] = A(x)\Phi(x, x_0).$$

¹The integral of a matrix $A$ is the matrix whose entries are the integrals of the entries of $A$.

ii) By the definition of $\Phi(x, x_0)$ (see Definition 1.2.1), one obtains that

$$\Phi(x_0, x_0) = [Y_1(x_0, x_0), Y_2(x_0, x_0), \ldots, Y_n(x_0, x_0)] = [e_1, e_2, \ldots, e_n] = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = I_n.$$

iii) Introduce $\widetilde{Y}(x) = \Phi(x, x_0)Y_0$ in (1.1′) and in (1.3) and use i) and ii). We get that

$$\widetilde{Y}'(x) = \left( \frac{d}{dx}\Phi(x, x_0) \right) Y_0 = \left( A(x)\Phi(x, x_0) \right) Y_0 = A(x)\left( \Phi(x, x_0)Y_0 \right) = A(x)\widetilde{Y}(x),$$

hence $\widetilde{Y}(x)$ is a solution of (1.1′). Also we obtain that

$$\widetilde{Y}(x_0) = \Phi(x_0, x_0)Y_0 = I_n Y_0 = Y_0,$$

hence $\widetilde{Y}(x)$ verifies the initial condition (1.3).

Remark 1.2.3. If $C = (C_1, C_2, \ldots, C_n)^T$, where $C_1, C_2, \ldots, C_n$ are arbitrary constants, then $Y_h(x) = \Phi(x, x_0)C$ is the general solution of the SDE (1.1′).
Remark 1.2.4. If a matrix $M(x, x_0) \in \mathbb{R}^{n \times n}$ has the properties i) and ii), then $\Phi(x, x_0) = M(x, x_0)$.

Proof. Indeed, if $\frac{d}{dx}M(x, x_0) = A(x)M(x, x_0)$ and $M(x_0, x_0) = I_n$, then, by the proof of iii), the vector function $\widehat{Y}(x) = M(x, x_0)Y_0$ is a solution of the initial value problem (1.1′), (1.3). But, by the Existence and Uniqueness Theorem (see Theorem 1.1.1), the solution of the initial value problem is unique; hence $\Phi(x, x_0)Y_0 = M(x, x_0)Y_0$, $\forall Y_0 \in \mathbb{R}^n$, i.e. $\Phi(x, x_0) = M(x, x_0)$.

Now we go back to the proof of Theorem 1.2.2 and we continue with the properties iv) to vii).
iv) Consider the moments $x_0$, $x_1$ and $x$ and the corresponding values of the solution of the initial value problem (1.1′), (1.3), respectively $Y(x_0) = Y_0$, $Y(x_1)$ and $Y(x)$ (Figure 1.1).

Figure 1.1: The Values of the Solution of the Initial Value Problem

Now we write, using iii), the solutions corresponding to the intervals $[x_0, x]$, $[x_0, x_1]$ and $[x_1, x]$:

$$Y(x) = \Phi(x, x_0)Y_0, \quad Y(x_1) = \Phi(x_1, x_0)Y_0, \quad Y(x) = \Phi(x, x_1)Y(x_1).$$

It follows that $Y(x) = \Phi(x, x_1)\Phi(x_1, x_0)Y_0$; but again the solution is unique, hence

$$\Phi(x, x_0)Y_0 = \Phi(x, x_1)\Phi(x_1, x_0)Y_0, \quad \forall Y_0 \in \mathbb{R}^n.$$

Therefore $\Phi(x, x_0) = \Phi(x, x_1)\Phi(x_1, x_0)$.
v) A matrix $D$ is the inverse of $C \in \mathbb{R}^{n \times n}$ if and only if $DC = I_n$. Let us consider $C = \Phi(x, x_0)$ and $D = \Phi(x_0, x)$. By iv) and ii), one obtains that

$$\Phi(x_0, x)\Phi(x, x_0) = \Phi(x_0, x_0) = I_n,$$

which implies that $\Phi(x_0, x) = \Phi(x, x_0)^{-1}$.
vi) By using the Weierstrass criterion (see pp. 115-121 in [14]), one can show that the series in (1.18) is uniformly convergent on any interval $[x_0, x_1] \subset I$; hence the series has a sum $M(x, x_0)$ and it can be differentiated term by term. By using the formula

$$\left( \int_{x_0}^{x} f(t)\,dt \right)' = f(x),$$

one obtains that

$$\frac{d}{dx}M(x, x_0) = O_n + A(x) + A(x)\int_{x_0}^{x} A(t_2)\,dt_2 + \cdots + A(x)\int_{x_0}^{x} A(t_2)\int_{x_0}^{t_2} A(t_3)\cdots\int_{x_0}^{t_{k-1}} A(t_k)\,dt_k\cdots dt_2 + \cdots$$
$$= A(x)\left[ I_n + \int_{x_0}^{x} A(t_2)\,dt_2 + \cdots + \int_{x_0}^{x} A(t_2)\int_{x_0}^{t_2} A(t_3)\cdots\int_{x_0}^{t_{k-1}} A(t_k)\,dt_k\cdots dt_2 + \cdots \right] = A(x)\,M(x, x_0)$$

and

$$M(x_0, x_0) = I_n + \int_{x_0}^{x_0} A(t_1)\,dt_1 + \int_{x_0}^{x_0} A(t_1)\int_{x_0}^{t_1} A(t_2)\,dt_2\,dt_1 + \cdots = I_n + O_n + O_n + \cdots = I_n.$$

Hence $M(x, x_0)$ has the properties i) and ii), and by Remark 1.2.4 we have that $\Phi(x, x_0) = M(x, x_0)$, which is equivalent to the fact that $\Phi(x, x_0)$ is the sum of the series (1.18). So the Peano-Baker formula is proved.
vii) If $A$ is a constant matrix, the Peano-Baker formula (1.18) becomes

$$\Phi(x, x_0) = I_n + A\int_{x_0}^{x} dt_1 + A^2\int_{x_0}^{x}\int_{x_0}^{t_1} dt_2\,dt_1 + \cdots + A^k\underbrace{\int_{x_0}^{x}\int_{x_0}^{t_1}\cdots\int_{x_0}^{t_{k-1}}}_{k \text{ integrals}} dt_k\cdots dt_2\,dt_1 + \cdots. \tag{1.19}$$

In the following we evaluate the above integrals:

$$\int_{x_0}^{x} dt_1 = t_1\Big|_{x_0}^{x} = \frac{x - x_0}{1!},$$
$$\int_{x_0}^{x}\int_{x_0}^{t_1} dt_2\,dt_1 = \int_{x_0}^{x}\left( \int_{x_0}^{t_1} dt_2 \right) dt_1 = \int_{x_0}^{x} \frac{t_1 - x_0}{1!}\,dt_1 = \frac{(t_1 - x_0)^2}{2 \cdot 1!}\Big|_{x_0}^{x} = \frac{(x - x_0)^2}{2!}.$$

We finish the proof of vii) by induction. Let us assume that such a multiple integral with $k - 1$ integrals has the value $\dfrac{(x - x_0)^{k-1}}{(k-1)!}$. Then we get that

$$\int_{x_0}^{x}\int_{x_0}^{t_1}\cdots\int_{x_0}^{t_{k-1}} dt_k\cdots dt_2\,dt_1 = \int_{x_0}^{x}\left( \int_{x_0}^{t_1}\cdots\int_{x_0}^{t_{k-1}} dt_k\cdots dt_2 \right) dt_1 = \int_{x_0}^{x} \frac{(t_1 - x_0)^{k-1}}{(k-1)!}\,dt_1 = \frac{(t_1 - x_0)^k}{k(k-1)!}\Big|_{x_0}^{x} = \frac{(x - x_0)^k}{k!},$$

hence the evaluation is true for every $k \ge 1$. By (1.19), we obtain that the fundamental matrix is

$$\Phi(x, x_0) = I_n + \frac{A(x - x_0)}{1!} + \frac{A^2(x - x_0)^2}{2!} + \cdots + \frac{A^k(x - x_0)^k}{k!} + \cdots = e^{A(x - x_0)}.$$

This matrix is called the exponential matrix of the matrix $A$, by analogy with the exponential function $e^x = 1 + \dfrac{x}{1!} + \dfrac{x^2}{2!} + \cdots + \dfrac{x^n}{n!} + \cdots$.
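As a quick numerical sanity check (an added illustration with an assumed 2×2 test matrix), the partial sums of this series agree with scipy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])         # assumed test matrix
x, x0 = 0.7, 0.0

# partial sums of I + A(x-x0)/1! + A^2(x-x0)^2/2! + ...
S = np.eye(2)
term = np.eye(2)
for k in range(1, 25):
    term = term @ (A * (x - x0)) / k   # term now holds A^k (x-x0)^k / k!
    S = S + term

print(np.allclose(S, expm(A * (x - x0))))   # True
```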

1.2.2 Non-homogeneous SDEs

Consider the linear non-homogeneous SDE given by (1.1) or by (1.2) and the corresponding homogeneous SDE given by (1.1′). Bear in mind that (1.1) and (1.2) are equivalent forms of the same SDE.

Proposition 1.2.5. The general solution of the SDE (1.1) has the form

$$Y = Y_h + Y_p, \tag{1.20}$$

where $Y_h$ is the general solution of the homogeneous SDE (1.1′) and $Y_p$ is a particular solution of the non-homogeneous SDE (1.1).

Proof. Firstly, let us show that (1.20) is a solution of the SDE (1.1). We have that

$$Y'(x) = Y_h'(x) + Y_p'(x) = A(x)Y_h(x) + \left( A(x)Y_p(x) + B(x) \right) = A(x)\left( Y_h(x) + Y_p(x) \right) + B(x) = A(x)Y(x) + B(x),$$

hence $Y(x)$ verifies the SDE (1.1).
Now, let $Y$ be an arbitrary solution of (1.1) and let us consider the vector function $Y - Y_p$. Then

$$Y'(x) - Y_p'(x) = \left( A(x)Y(x) + B(x) \right) - \left( A(x)Y_p(x) + B(x) \right) = A(x)\left( Y(x) - Y_p(x) \right).$$

Thus $Y - Y_p$ is a solution of (1.1′), so we have that $Y - Y_p = Y_h$ for suitable constants $C_i$ in the expression of the general solution $Y_h$ of (1.1′). We conclude that $Y = Y_h + Y_p$.

Let us focus now on methods for finding particular solutions $Y_p$.

Method 1: Variation of parameters

[Stage 1]: Determine a fundamental system of solutions for the SDE (1.1′) and denote it by $Y_1, Y_2, \ldots, Y_n$. It follows that

$$Y_i'(x) = A(x)Y_i(x), \; \forall i = \overline{1, n}, \quad \text{and} \quad W(x) = |Y_1(x), Y_2(x), \ldots, Y_n(x)| \neq 0, \; \forall x \in I.$$

Then the general solution of (1.1′) is

$$Y_h = C_1 Y_1 + C_2 Y_2 + \cdots + C_n Y_n,$$

where $C_1, C_2, \ldots, C_n$ are arbitrary constants.
Next we look for a solution of the non-homogeneous SDE (1.1) of the form

$$Y_p(x) = C_1(x)Y_1(x) + C_2(x)Y_2(x) + \cdots + C_n(x)Y_n(x), \tag{1.21}$$

where $C_1(x), C_2(x), \ldots, C_n(x)$ are functions of class $C^1(I)$. This represents the so-called variation of parameters.
Let us introduce the vector function (1.21) in the system (1.1). We will use the fact that $Y_i'(x) = A(x)Y_i(x)$, $\forall i = \overline{1, n}$. We have that

$$Y_p' = C_1 Y_1' + \cdots + C_n Y_n' + C_1' Y_1 + \cdots + C_n' Y_n = C_1 AY_1 + \cdots + C_n AY_n + C_1' Y_1 + \cdots + C_n' Y_n.$$

On the other hand,

$$AY_p + B = C_1 AY_1 + C_2 AY_2 + \cdots + C_n AY_n + B.$$

From $Y_p' = AY_p + B$, by cancelling the terms $C_i AY_i$, we obtain the equality

$$C_1' Y_1 + C_2' Y_2 + \cdots + C_n' Y_n = B,$$

which can be considered as a linear algebraic system

$$\begin{bmatrix} Y_1 & Y_2 & \cdots & Y_n \end{bmatrix} \cdot \begin{pmatrix} C_1' \\ C_2' \\ \vdots \\ C_n' \end{pmatrix} = B, \tag{1.22}$$

where the matrix $[Y_1\; Y_2\; \cdots\; Y_n]$ is nonsingular (since $W(x) \neq 0$);
[Stage 2]: Solve the system (1.22). One gets that

$$\begin{pmatrix} C_1' \\ C_2' \\ \vdots \\ C_n' \end{pmatrix} = \begin{bmatrix} Y_1 & Y_2 & \cdots & Y_n \end{bmatrix}^{-1} \cdot B; \tag{1.23}$$

[Stage 3]: Integrate (1.23) in order to obtain the functions $C_i(x)$:

$$\begin{pmatrix} C_1(x) \\ C_2(x) \\ \vdots \\ C_n(x) \end{pmatrix} = \int \begin{bmatrix} Y_1(x) & Y_2(x) & \cdots & Y_n(x) \end{bmatrix}^{-1} \cdot B(x)\,dx;$$

[Stage 4]: Replace $C_i(x)$, $i = \overline{1, n}$, in (1.21) and write a particular solution of the SDE (1.1);
[Stage 5]: Write the general solution of the SDE (1.1): $Y = Y_h + Y_p$.

Method 2: Variation of parameters formula

[Stage 1]: Determine the fundamental matrix of the SDE (1.1′),

$$\Phi(x, x_0) = \begin{bmatrix} Y_1(x, x_0) & Y_2(x, x_0) & \cdots & Y_n(x, x_0) \end{bmatrix},$$

and write the general solution of the SDE (1.1′),

$$Y_h(x) = \Phi(x, x_0)C,$$

where $C = (C_1, C_2, \ldots, C_n)^T$ is an arbitrary constant $n$-vector (see Remark 1.2.3).
We apply the variation of parameters method and look for a solution of (1.1) of the form

$$Y_p(x) = \Phi(x, x_0)C(x), \tag{1.24}$$

where $C(x) = \begin{pmatrix} C_1(x) & C_2(x) & \cdots & C_n(x) \end{pmatrix}^T$ is a $C^1(I)$ vector function.
We introduce $Y_p(x)$ given by (1.24) in the system (1.1) and use property i) from Theorem 1.2.2. Then

$$Y_p'(x) = \left( \frac{d}{dx}\Phi(x, x_0) \right) C(x) + \Phi(x, x_0)C'(x) = A(x)\Phi(x, x_0)C(x) + \Phi(x, x_0)C'(x),$$

and

$$A(x)Y_p(x) + B(x) = A(x)\Phi(x, x_0)C(x) + B(x).$$

From $Y_p'(x) = A(x)Y_p(x) + B(x)$, one obtains the algebraic system

$$\Phi(x, x_0)C'(x) = B(x), \tag{1.25}$$

where, by property v) of Theorem 1.2.2, the fundamental matrix $\Phi(x, x_0)$ is nonsingular and $\Phi(x, x_0)^{-1} = \Phi(x_0, x)$;
[Stage 2]: Solve the system (1.25) by multiplying it with $\Phi(x, x_0)^{-1} = \Phi(x_0, x)$. Therefore

$$C'(x) = \Phi(x_0, x)B(x); \tag{1.26}$$

[Stage 3]: Integrate (1.26) on an interval $[x_0, x] \subset I$. So

$$C(x) = \int_{x_0}^{x} \Phi(x_0, t)B(t)\,dt;$$

[Stage 4]: Replace $C(x)$ in (1.24) and use property iv) of Theorem 1.2.2 (the semigroup property). One obtains a particular solution of the system (1.1) as

$$Y_p(x) = \Phi(x, x_0)\int_{x_0}^{x} \Phi(x_0, t)B(t)\,dt = \int_{x_0}^{x} \Phi(x, t)B(t)\,dt;$$

[Stage 5]: Write the general solution of (1.1), $Y = Y_h + Y_p$, hence

$$Y(x) = \Phi(x, x_0)C + \int_{x_0}^{x} \Phi(x, t)B(t)\,dt. \tag{1.27}$$

Now, let us consider the initial value problem (1.1) with the initial condition (1.3). By setting $x = x_0$ in (1.27), one gets

$$Y_0 = Y(x_0) = \Phi(x_0, x_0)C + \int_{x_0}^{x_0} \Phi(x_0, t)B(t)\,dt = I_n C = C,$$

hence $C = Y_0$ and (1.27) becomes the variation of parameters formula, which gives the solution of the initial value problem:

$$Y(x) = \Phi(x, x_0)Y_0 + \int_{x_0}^{x} \Phi(x, t)B(t)\,dt. \tag{1.28}$$

By property vii) of Theorem 1.2.2, if $A$ is a constant matrix, then (1.28) becomes, for $x_0 = 0$,

$$Y(x) = e^{Ax}Y_0 + \int_{0}^{x} e^{A(x - t)}B(t)\,dt. \tag{1.29}$$
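Formula (1.29) can be evaluated directly; the sketch below (an added illustration; the matrix and the forcing term B(t) are assumed test data) approximates the integral with the trapezoidal rule.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])              # assumed constant matrix
B = lambda t: np.array([0.0, np.sin(t)])  # assumed forcing term B(t)
Y0 = np.array([1.0, 0.0])
x = 2.0

ts = np.linspace(0.0, x, 2001)
dt = ts[1] - ts[0]
# integrand e^{A(x-t)} B(t) at the quadrature nodes
F = np.array([expm(A * (x - t)) @ B(t) for t in ts])
integral = ((F[1:] + F[:-1]) / 2).sum(axis=0) * dt

Y = expm(A * x) @ Y0 + integral           # formula (1.29)
print(Y)
```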

Method 3: Undetermined coefficients

If the elements of the vector $B(x)$ are quasipolynomials, i.e. sums of functions of the form

$$b_i(x) = P_{i1}(x)e^{\alpha x}\cos\beta x + P_{i2}(x)e^{\alpha x}\sin\beta x, \quad \forall i = \overline{1, n},$$

where $P_{i1}(x)$, $P_{i2}(x)$ are polynomials, we search for a solution $Y_p$ of the non-homogeneous system (1.1) whose elements are sums of quasipolynomials

$$y_i(x) = Q_{i1}(x)e^{\alpha x}\cos\beta x + Q_{i2}(x)e^{\alpha x}\sin\beta x, \quad \forall i = \overline{1, n},$$

where $Q_{i1}(x)$ and $Q_{i2}(x)$ have the following degrees:

$$\deg Q_{ij} = \max_{1 \le i \le n,\, 1 \le j \le 2} \{\deg P_{ij}\},$$

if $\alpha \pm i\beta$ is not an eigenvalue of the matrix $A$, and

$$\deg Q_{ij} = \max_{1 \le i \le n,\, 1 \le j \le 2} \{\deg P_{ij}\} + k,$$

if $\alpha \pm i\beta$ is an eigenvalue of the matrix $A$ with a Jordan cell of maximal dimension $k$.

1.2.3 Linear Control Systems

A linear continuous-time control system $\Sigma$ has the following state space representation:

$$\dot{x}(t) = A(t)x(t) + B(t)u(t) \quad \text{(the state equation)} \tag{1.30}$$
$$y(t) = C(t)x(t) + D(t)u(t) \quad \text{(the output equation)} \tag{1.31}$$

Here $A(t)$, $B(t)$, $C(t)$, $D(t)$ are respectively $n \times n$, $n \times m$, $p \times n$, $p \times m$ continuous real matrices; $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$ and $y(t) \in \mathbb{R}^p$ are respectively the state, the input (or control) and the output of the system $\Sigma$, and $t \in \mathbb{R}$ is the time.
Consider the initial condition $x(t_0) = x_0$, for an initial moment $t_0$ and an initial state $x_0$. Let $\Phi(t, t_0)$ be the fundamental matrix of the matrix $A(t)$.
The variation of parameters formula (1.28) gives (with $Y(x)$, $Y_0$, $B(x)$ replaced respectively by $x(t)$, $x_0$, $B(t)u(t)$) the formula of the state of the system $\Sigma$ at the moment $t$:

$$x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, s)B(s)u(s)\,ds. \tag{1.32}$$

By replacing the state $x(t)$ given by (1.32) in the output equation (1.31), one obtains the input-output map (or the general response) of the system $\Sigma$:

$$y(t) = C(t)\Phi(t, t_0)x_0 + \int_{t_0}^{t} C(t)\Phi(t, s)B(s)u(s)\,ds + D(t)u(t). \tag{1.33}$$

The system $\Sigma$ is called linear time-invariant (LTI) if $A$, $B$, $C$, $D$ are constant matrices. In this case $\Phi(t, t_0) = e^{A(t - t_0)}$, and (1.32) and (1.33) become, for $t_0 = 0$,

$$x(t) = e^{At}x_0 + \int_{0}^{t} e^{A(t - s)}Bu(s)\,ds, \tag{1.34}$$
$$y(t) = Ce^{At}x_0 + \int_{0}^{t} Ce^{A(t - s)}Bu(s)\,ds + Du(t). \tag{1.35}$$
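For LTI systems, the response (1.34)-(1.35) is exactly what scipy.signal.lsim simulates, so it provides a convenient cross-check (an added sketch with assumed illustrative matrices and input):

```python
import numpy as np
from scipy.signal import lsim

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

t = np.linspace(0.0, 5.0, 501)
u = np.sin(t)                        # assumed scalar input u(t)
x0 = np.array([1.0, 0.0])            # initial state

tout, y, x = lsim((A, B, C, D), u, t, X0=x0)
print(y[-1])                         # output y(t) at t = 5
```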

1.3 Stability

1.3.1 Asymptotic stability

Consider the initial value problem

$$\Sigma: \quad Y'(x) = AY(x), \quad Y(0) = Y_0, \tag{1.36}$$

where $A$ is a constant $n \times n$ matrix.
By Theorem 1.2.2 iii) and vii), the solution of the initial value problem (1.36) is

$$Y(x) = e^{Ax}Y_0. \tag{1.37}$$

If $Y_0 = 0$, then the corresponding solution (1.37) is $Y(x) = 0$, $\forall x \in \mathbb{R}$; hence the vector $Y_0 = 0$ is an equilibrium position.

Definition 1.3.1. The system is said to be stable (or the zero solution is stable) if for any $\varepsilon > 0$ there exists $\delta > 0$ such that, for any $Y_0 \in \mathbb{R}^n$ with $\|Y_0\| < \delta$, the corresponding solution verifies $\|Y(x)\| < \varepsilon$, for any $x \ge 0$. If the system is not stable, it is called unstable.
The system is said to be asymptotically stable if it is stable and, for any $Y_0 \in \mathbb{R}^n$, $\lim_{x \to \infty} Y(x) = 0$.

These situations are illustrated in Figure 1.2 (for $n = 1$).

Figure 1.2: The Three Possible States of an Equilibrium Position ((a) Stable, (b) Unstable, (c) Asymptotically Stable)
The next theorem establishes the conditions for the possible situations concerning the stability of linear homogeneous SDEs with constant coefficients (the LTI systems).

Theorem 1.3.2. Consider the system $\Sigma$ given by (1.36).

a) The system $\Sigma$ is asymptotically stable if and only if all the eigenvalues of the matrix $A$ have negative real parts ($\sigma(A) \subset \mathbb{C}^-$, where we have put $\mathbb{C}^- = \{z \in \mathbb{C} : \operatorname{Re} z < 0\}$);

b) If there exists an eigenvalue with $\operatorname{Re}\lambda > 0$, then $\Sigma$ is unstable;

c) If all the eigenvalues of the matrix $A$ verify $\operatorname{Re}\lambda \le 0$, there exists at least one eigenvalue with $\operatorname{Re}\lambda = 0$, and $m_a(\lambda) = m_g(\lambda)$ holds for any such eigenvalue $\lambda$, then $\Sigma$ is stable, but not asymptotically stable;

d) If there exists an eigenvalue $\lambda$ with $\operatorname{Re}\lambda = 0$ and $m_a(\lambda) > m_g(\lambda)$, then $\Sigma$ is unstable.

Proof. Let

$$\widehat{A} = \begin{pmatrix} J_1 & & & & \\ & \ddots & & & \\ & & J_i & & \\ & & & \ddots & \\ & & & & J_s \end{pmatrix}$$

be the Jordan canonical form of the matrix $A$. This means that there exists a nonsingular (transition) matrix $T$ such that $A = T\widehat{A}T^{-1}$, $\widehat{A}$ has the structure above and the $J_i$ are Jordan cells,

$$J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix},$$

where $J_i \in \mathcal{M}_{m_i}$, with $m_i > 1$ if $m_a(\lambda_i) > m_g(\lambda_i)$, and $J_i = [\lambda_i]$ if $m_a(\lambda_i) = m_g(\lambda_i)$.
In this case, the solution is $Y(x) = e^{Ax}Y_0 = Te^{\widehat{A}x}T^{-1}Y_0$ and

$$e^{\widehat{A}x} = \begin{pmatrix} e^{J_1 x} & & & & \\ & \ddots & & & \\ & & e^{J_i x} & & \\ & & & \ddots & \\ & & & & e^{J_s x} \end{pmatrix},$$

with

$$e^{J_i x} = e^{\lambda_i x}\begin{pmatrix} 1 & \frac{x}{1!} & \frac{x^2}{2!} & \cdots & \frac{x^{m_i - 1}}{(m_i - 1)!} \\ & 1 & \frac{x}{1!} & \ddots & \frac{x^{m_i - 2}}{(m_i - 2)!} \\ & & 1 & \ddots & \frac{x^2}{2!} \\ & & & \ddots & \frac{x}{1!} \\ & & & & 1 \end{pmatrix}$$

if $m_a(\lambda_i) > m_g(\lambda_i)$, and $e^{J_i x} = e^{\lambda_i x}$ if $m_a(\lambda_i) = m_g(\lambda_i)$.
Then, using the equality $|e^z| = e^{\operatorname{Re} z}$, one obtains:
a) The system $\Sigma$ is asymptotically stable if and only if

$$\lim_{x \to \infty} Y(x) = 0 \in \mathbb{R}^n,$$

which is equivalent to

$$\lim_{x \to \infty} x^k e^{\lambda_i x} = 0, \quad \forall i = \overline{1, s} \text{ and } \forall k = \overline{0, m_i - 1}.$$

The last statement is equivalent to

$$\lim_{x \to \infty} |e^{\lambda_i x}| = 0 \iff \lim_{x \to \infty} e^{\operatorname{Re}(\lambda_i)x} = 0 \iff \operatorname{Re}(\lambda_i) < 0, \quad \forall \lambda_i \in \sigma(A).$$

In this case, one says that $A$ is a stable matrix.
b) If there exists $\lambda_i \in \sigma(A)$ with $\operatorname{Re}(\lambda_i) > 0$, then it follows that

$$\lim_{x \to \infty} |x^k e^{\lambda_i x}| = \lim_{x \to \infty} x^k e^{\operatorname{Re}(\lambda_i)x} = \infty,$$

hence there exist solutions $Y(x)$ with components which verify $\lim_{x \to \infty} y_i(x) = \infty$; therefore the system is unstable.
c) For the eigenvalues $\lambda$ of the matrix $A$ with negative real parts, we get that

$$\lim_{x \to \infty} |x^k e^{\lambda x}| = \lim_{x \to \infty} x^k e^{\operatorname{Re}(\lambda)x} = 0,$$

hence the functions $x^k e^{\lambda x}$ are bounded, and for the eigenvalues which have null real part, $\lambda = i\beta$, with $m_a(\lambda) = m_g(\lambda)$, the corresponding terms in the solutions $Y(x)$ are of the form $e^{i\beta x}$, hence $|e^{i\beta x}| = 1$. Therefore, the entries of the exponential matrix $e^{Ax}$ remain bounded, so $\|e^{Ax}\| \le M$ for some $M > 0$, and the solution verifies $\|Y(x)\| \le M\|Y_0\|$. For an arbitrarily chosen $\varepsilon > 0$, take $\delta = \dfrac{\varepsilon}{M}$. Then, for any $Y_0$ with $\|Y_0\| < \delta$, we have that

$$\|Y(x)\| \le M\|Y_0\| < M \cdot \frac{\varepsilon}{M} = \varepsilon,$$

hence $\|Y(x)\| < \varepsilon$, i.e. the system $\Sigma$ is stable. But the system $\Sigma$ is not asymptotically stable, since

$$\lim_{x \to \infty} |e^{i\beta x}| = 1 \neq 0.$$

d) For a multiple eigenvalue $\lambda$ of the form $\lambda = i\beta$ with $m_a(\lambda) > m_g(\lambda)$, some solutions $Y(x)$ will contain functions of the form $x^k e^{i\beta x}$, and since

$$\lim_{x \to \infty} |x^k e^{i\beta x}| = \lim_{x \to \infty} x^k = \infty,$$

it follows that the system $\Sigma$ is unstable.

As we have said before, if $\sigma(A) \subset \mathbb{C}^-$, then $A$ is said to be a stable matrix.
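Numerically, the test of Theorem 1.3.2 amounts to inspecting the real parts of σ(A). The helper below (an added sketch) decides the strict cases a) and b); the borderline cases c) and d) also need the multiplicities of the eigenvalues with Re λ = 0, which floating-point computations can only suggest.

```python
import numpy as np

def classify(A, tol=1e-9):
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"          # case a)
    if np.any(re > tol):
        return "unstable"                       # case b)
    return "marginal: check multiplicities"     # cases c)/d)

print(classify(np.array([[0.0, 1.0],
                         [-2.0, -3.0]])))       # asymptotically stable
```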

1.3.2 The Routh-Hurwitz Criterion

Definition 1.3.3. The polynomial $p(z) = a_n z^n + a_{n-1}z^{n-1} + \cdots + a_0$ is called a Hurwitz polynomial if all the leading principal minors of the associated Hurwitz matrix

$$H_n = \begin{pmatrix} a_{n-1} & a_n & 0 & 0 & 0 & 0 & \cdots & 0 \\ a_{n-3} & a_{n-2} & a_{n-1} & a_n & 0 & 0 & \cdots & 0 \\ a_{n-5} & a_{n-4} & a_{n-3} & a_{n-2} & a_{n-1} & a_n & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 0 & 0 & 0 & 0 & 0 & 0 & \cdots & a_0 \end{pmatrix}$$

are positive, i.e.

$$\Delta_1 = a_{n-1} > 0, \quad \Delta_2 = \begin{vmatrix} a_{n-1} & a_n \\ a_{n-3} & a_{n-2} \end{vmatrix} > 0, \quad \ldots, \quad \Delta_n = \det(H_n) > 0. \tag{1.38}$$

Let us denote by $N^-$, $N^+$, $N^0$ the number of roots $\lambda$ of $p(z)$ with $\operatorname{Re}\lambda < 0$, $\operatorname{Re}\lambda > 0$ and $\operatorname{Re}\lambda = 0$, respectively.
Hurwitz proved that if $\Delta_i \neq 0$, $\forall i = \overline{1, n}$, then $N^0 = 0$ and $N^+$ is equal to the number of sign changes (denoted by $N_{sc}$) in the finite sequence

$$1, \; \Delta_1, \; \frac{\Delta_2}{\Delta_1}, \; \ldots, \; \frac{\Delta_n}{\Delta_{n-1}}.$$

Using this, one obtains the following result:

Theorem 1.3.4 (Routh-Hurwitz Criterion). The system $\Sigma$ is asymptotically stable if and only if the characteristic polynomial of the matrix $A$ is a Hurwitz polynomial.

Proof. Let $p(z) = a_n z^n + a_{n-1}z^{n-1} + \cdots + a_0$ be the characteristic polynomial of the matrix $A$, where $a_n > 0$. If $\Delta_1 > 0$, $\Delta_2 > 0$, ..., $\Delta_n > 0$, then $N^0 = 0$ and, since $1 > 0$, we have that $N_{sc} = 0$. Therefore $N^+ = 0$ and all the eigenvalues of the matrix $A$ have negative real parts. According to Theorem 1.3.2, the system $\Sigma$ is asymptotically stable.
Conversely, if the system $\Sigma$ is asymptotically stable, then $N^+ = 0$; hence $1 > 0$, $\Delta_1 > 0$, $\frac{\Delta_2}{\Delta_1} > 0$, $\frac{\Delta_3}{\Delta_2} > 0$, ..., $\frac{\Delta_n}{\Delta_{n-1}} > 0$, thus $\Delta_j > 0$ for every $j \in \{1, 2, \ldots, n\}$.

Remark 1.3.5. If there exists a sign change in the sequence of the coefficients of $p(z)$: $a_n, a_{n-1}, \ldots, a_1, a_0$ (i.e. the entries on the main diagonal of the matrix $H_n$), then not all the roots of $p(z)$ have negative real parts, hence it is not a Hurwitz polynomial.
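The criterion is easy to mechanize (an added sketch; the indexing H_{ij} = a_{n-2i+j}, with a_k = 0 outside 0..n, reproduces the matrix displayed in Definition 1.3.3):

```python
import numpy as np

def hurwitz_minors(coeffs):
    """coeffs = [a_n, a_{n-1}, ..., a_0], with a_n > 0."""
    n = len(coeffs) - 1
    a = lambda k: coeffs[n - k] if 0 <= k <= n else 0.0
    H = np.array([[a(n - 2 * i + j) for j in range(1, n + 1)]
                  for i in range(1, n + 1)])
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

# p(z) = z^3 + 4z^2 + 5z + 2 = (z + 1)^2 (z + 2) is a Hurwitz polynomial
print(hurwitz_minors([1.0, 4.0, 5.0, 2.0]))    # all minors positive
```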

1.3.3 The Lyapunov Equation

There is a more general method for evaluating the stability of a linear homogeneous SDE, without the study of the characteristic polynomial of the matrix $A$.
An $n \times n$ complex matrix $P = [p_{ij}]$ is said to be Hermitian if $P = P^*$, where $P^* = \overline{P}^T = [\overline{p_{ji}}]$. If $P$ is a real matrix, this condition becomes $P = P^T$, i.e. $P$ is symmetric.
An $n \times n$ complex matrix $P = [p_{ij}]$ is said to be positive definite, and one writes $P > 0$, if it is Hermitian and $\nu^* P \nu > 0$, $\forall \nu \in \mathbb{C}^n$, $\nu \neq 0$. If the matrix $P$ is real, then this condition is equivalent to $P = P^T$ and $\nu^T P \nu > 0$, $\forall \nu \in \mathbb{R}^n$, $\nu \neq 0$. For a Hermitian/symmetric matrix $P$, the following statements are equivalent:
i) $P$ is positive definite;
ii) all eigenvalues of $P$ are positive real numbers;
iii) all leading principal minors of $P$ are positive.

Theorem 1.3.6. The system $\Sigma$ is asymptotically stable if and only if the algebraic equation (called the Lyapunov equation)

$$A^T P + P A = -Q \tag{1.39}$$

has a real, symmetric, positive definite solution $P$, for a real, symmetric, positive definite matrix $Q$.

Proof. Necessity. Assume that the system is asymptotically stable. For any $Q > 0$, let us consider the matrix

$$P = \int_{0}^{\infty} e^{A^T x} Q e^{Ax}\,dx. \tag{1.40}$$

The entries of the matrix $e^{A^T x} Q e^{Ax}$ are linear combinations of functions of the form $x^k e^{\lambda x}$, where $k \ge 0$, $\lambda = \lambda_i + \lambda_j$, $\lambda_i, \lambda_j \in \sigma(A)$ (see the proof of Theorem 1.1.12). Since $A$ is a stable matrix, we have that $\operatorname{Re}(\lambda_i) < 0$, $\forall \lambda_i \in \sigma(A)$. Then the improper integral (1.40) is convergent, due to the fact that its entries are linear combinations of convergent integrals of the form

$$\int_{0}^{\infty} x^k e^{\lambda x}\,dx = \frac{1}{(-\lambda)^{k+1}}\,\Gamma(k + 1) = \frac{k!}{(-\lambda)^{k+1}}.$$

Moreover, $\lim_{x \to \infty} e^{A^T x} Q e^{Ax} = O_n$, since $\lim_{x \to \infty} x^k e^{\lambda x} = 0$. It follows that

$$A^T P + P A = \int_{0}^{\infty} A^T e^{A^T x} Q e^{Ax}\,dx + \int_{0}^{\infty} e^{A^T x} Q e^{Ax} A\,dx = \int_{0}^{\infty} \left[ \frac{d}{dx}\left( e^{A^T x} \right) Q e^{Ax} + e^{A^T x} Q \frac{d}{dx}\left( e^{Ax} \right) \right] dx$$
$$= \int_{0}^{\infty} \frac{d}{dx}\left( e^{A^T x} Q e^{Ax} \right) dx = \lim_{x \to \infty} e^{A^T x} Q e^{Ax} - \left. e^{A^T x} Q e^{Ax} \right|_{x = 0} = -Q,$$
since $e^{A \cdot 0} = I_n$.
Obviously $P^T = P$, i.e. $P$ is symmetric. For any $Y \in \mathbb{R}^n$, $Y \neq 0$, it follows that $Z = e^{Ax}Y \neq 0$ (since the matrix $e^{Ax}$ is nonsingular); then

$$Y^T P Y = \int_{0}^{\infty} Z^T Q Z\,dx > 0,$$

therefore $P > 0$.
Sufficiency. Let us assume that there exists a matrix $P > 0$ such that $A^T P + P A = -Q$ for some matrix $Q > 0$.
Let $\lambda$ be an eigenvalue of $A$ and consider an eigenvector $v$ corresponding to $\lambda$, i.e. $Av = \lambda v$ and $v \neq 0$. Taking conjugates, $A\bar{v} = \bar{\lambda}\bar{v}$ (since the matrix $A$ is real), hence

$$\bar{v}^T A^T = \bar{\lambda}\bar{v}^T, \quad \text{i.e.} \quad v^* A^T = \bar{\lambda} v^*.$$

We obtain that

$$v^*(A^T P + P A)v = -v^* Q v \;\Rightarrow\; \bar{\lambda}\, v^* P v + v^* P v\, \lambda = -v^* Q v \;\Rightarrow\; (\lambda + \bar{\lambda})\, v^* P v < 0,$$

which implies that $2\operatorname{Re}(\lambda) = \lambda + \bar{\lambda} < 0$ (since $P > 0$). In conclusion, $\operatorname{Re}(\lambda) < 0$, hence the system $\Sigma$ is asymptotically stable.

Corollary 1.3.7. The system (1.36) is asymptotically stable if and only if there exists a quadratic form $V(Y)$ (called a Lyapunov function) such that:

i) $V(Y)$ is positive definite;

ii) $\dot{V}(Y(x)) < 0$ for any solution $Y(x) \neq 0$ of the system $\Sigma$.

Proof. Let us consider a positive definite matrix $Q$ and the solution $P$ of the Lyapunov equation (1.39). Then $V(Y) = Y^T P Y$ is a positive definite quadratic form. For any solution $Y(x) \neq 0$ of the system $\Sigma$, one obtains

$$\dot{V}(Y(x)) = \frac{d}{dx}\left( Y^T(x) P Y(x) \right) = \dot{Y}^T(x) P Y(x) + Y^T(x) P \dot{Y}(x) = Y^T(x)\left( A^T P + P A \right)Y(x) = -Y^T(x) Q Y(x) < 0,$$

hence $V(Y) = Y^T P Y$ is a Lyapunov function for the system $\Sigma$.

Remark 1.3.8. If the matrix $Q$ in Theorem 1.3.6 is only positive semi-definite, i.e. $v^T Q v \ge 0$, $\forall v \in \mathbb{R}^n$, then $\dot{V}(Y(x)) \le 0$ in Corollary 1.3.7, so the system (1.36) is only stable.

Remark 1.3.9. The Lyapunov equation (1.39) can be written as a linear equation of the form

$$L\widetilde{P} = -\widetilde{Q}, \tag{1.41}$$

where $\widetilde{P}$ is the $n^2$-dimensional column vector obtained from the columns of the matrix $P$, $\widetilde{Q}$ is obtained in a similar way from $Q$, and $L$ is a linear operator $L : \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ of the form $L = A^T \otimes I + I \otimes A^T$ (here $A \otimes B := [a_{ij}B]_{1 \le i,j \le n}$ is the Kronecker product of the matrices $A$ and $B$).
One can show that the eigenvalues of the matrix $L$ are $\mu_{ij} = \lambda_i + \lambda_j$, where $\lambda_i, \lambda_j \in \sigma(A)$, $i, j = \overline{1, n}$; hence, if $A$ is stable, $\mu_{ij} \neq 0$ since $\operatorname{Re}(\lambda_i) < 0$ for any $\lambda_i \in \sigma(A)$, and $L$ is nonsingular. Therefore (1.41) has a unique solution for all $Q > 0$.
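In practice one need not assemble L by hand: scipy solves the continuous Lyapunov equation directly, which gives a ready-made implementation of the test in Theorem 1.3.6 (an added sketch with an assumed stable test matrix). Note that scipy's solver handles a x + x aᴴ = q, so Aᵀ and -Q are passed to match (1.39).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])         # assumed stable test matrix
Q = np.eye(2)

# scipy solves a x + x a^H = q; with a = A^T, q = -Q this is (1.39)
P = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(P, P.T))                  # P is symmetric
print(np.all(np.linalg.eigvalsh(P) > 0))    # P is positive definite
```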

1.3.4 Stability of Nonlinear Systems

Lyapunov's First Method

Consider the time-invariant system $Y' = f(Y)$, where $f \in C^2$ is an $n$-dimensional vector function and $f$ has the equilibrium position $\overline{Y}$, i.e. $f(\overline{Y}) = 0$.
Using the substitution $Z(x) = Y(x) - \overline{Y}$ and denoting $F(Z) := f(Z + \overline{Y})$, one obtains $Z' = Y' = f(Y) = f(Z + \overline{Y}) = F(Z)$ and $F(0) = f(\overline{Y}) = 0$; hence the problem of the stability of the equilibrium position $\overline{Y}$ of the system $Y' = f(Y)$ is equivalent to the problem of the stability of the equilibrium position $0$ of the system $Z' = F(Z)$. Therefore, Definition 1.3.1, concerning stability, instability and asymptotic stability of linear systems, can be extended to nonlinear systems of the form

$$Y' = f(Y), \quad f(0) = 0. \tag{1.42}$$

Definition 1.3.10. The system (1.42) is said to be stable (or the zero solution is stable) if for any $\varepsilon > 0$ there exists $\delta > 0$ such that, for any $Y_0 \in \mathbb{R}^n$ with $\|Y_0\| < \delta$, the corresponding solution verifies $\|Y(x)\| < \varepsilon$, for any $x \ge x_0$. If the system is not stable, it is called unstable.
The system (1.42) is called asymptotically stable in a neighborhood $N$ of the origin if it is stable and, for any $Y_0 \in N$, the corresponding solution $Y(x)$ verifies $\lim_{x \to \infty} Y(x) = 0$.
If $N$ is a bounded set, the asymptotic stability is local; if $N = \mathbb{R}^n$, it is global.

There exists a connection between the stability of the system (1.42) and the stability of its linearized system with respect to the equilibrium position.
One applies Taylor's formula in a neighborhood of the origin. Let us denote by $f'(Y)$ the Jacobian matrix of the vector function $f = [f_1, f_2, \ldots, f_n]^T$, where $f_i = f_i(Y_1, Y_2, \ldots, Y_n)$. So,

$$f'(Y) = \begin{pmatrix} \dfrac{\partial f_1}{\partial Y_1} & \dfrac{\partial f_1}{\partial Y_2} & \cdots & \dfrac{\partial f_1}{\partial Y_n} \\ \dfrac{\partial f_2}{\partial Y_1} & \dfrac{\partial f_2}{\partial Y_2} & \cdots & \dfrac{\partial f_2}{\partial Y_n} \\ \cdots & \cdots & \cdots & \cdots \\ \dfrac{\partial f_n}{\partial Y_1} & \dfrac{\partial f_n}{\partial Y_2} & \cdots & \dfrac{\partial f_n}{\partial Y_n} \end{pmatrix}.$$

One says that a real function $h$ is of order $o(x)$, and one writes $h(x) = o(x)$, if $\lim_{x \to 0} \dfrac{h(x)}{x} = 0$. Then the Taylor formula of $f$ near $0$ can be written as

$$f(Y) = f(0) + f'(0)Y + g(Y),$$

hence $f(Y) = f'(0)Y + g(Y)$, and the linearization of (1.42) is the system

$$Y'(x) = AY(x) + g(Y(x)), \tag{1.43}$$

where $A = f'(0)$ and $\|g(Y)\| = o(\|Y\|)$.
One can establish the following method to check the asymptotic stability of the nonlinear system (1.42).

Theorem 1.3.11. The following statements are equivalent:

i) The system (1.42) is asymptotically stable in a neighborhood of the origin;

ii) The system (1.43) is asymptotically stable in a neighborhood of the origin;

iii) The matrix A is stable.
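A small sketch of this test (an added illustration; the nonlinear system f below is an assumed example): linearize at the origin with a finite-difference Jacobian and inspect σ(A).

```python
import numpy as np

def f(Y):                            # assumed nonlinear system Y' = f(Y)
    y1, y2 = Y
    return np.array([-y1 + y2**2, -2.0 * y2 + y1 * y2])

def jacobian(f, Y, h=1e-6):
    """Central-difference approximation of the Jacobian f'(Y)."""
    n = len(Y)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(Y + e) - f(Y - e)) / (2 * h)
    return J

A = jacobian(f, np.zeros(2))                  # A = f'(0)
print(np.all(np.linalg.eigvals(A).real < 0))  # True: asymptotically stable
```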

Lyapunov's Second Method

Consider the system

$$x' = f(x), \quad f(0) = 0, \tag{1.44}$$

with $f(x) = (f_1(x), \ldots, f_n(x))$.
The problem of checking the stability of the system (1.44) can be reduced to the problem of the existence, and of the finding, of some functions with special properties, called Lyapunov functions, without solving the system.
Let $O$ be an open set, $0 \in O \subset \mathbb{R}^n$. Assume that $f$ is continuous on $O$.

Definition 1.3.12. A functional $V : O \to \mathbb{R}$ is said to be a Lyapunov function for the system (1.44) if it has the following properties:

i) $V$ is positive definite on $O$, i.e. $V \in C^1(O)$, $V(0) = 0$ and $V(x) > 0$ for any $x \in O \setminus \{0\}$;

ii) $\dot{V}(x(t)) \equiv \operatorname{grad} V \cdot f(x) \le 0$ on $O$;

where the derivative of $V$ by virtue of the system is

$$\dot{V}(x(t)) = \sum_{k=1}^{n} \frac{\partial V}{\partial x_k}(x(t))\,\dot{x}_k(t) = \sum_{k=1}^{n} \frac{\partial V}{\partial x_k}(x(t))\,f_k(x(t)) = \operatorname{grad} V \cdot f(x(t)).$$

Theorem 1.3.13 (Local Stability). If there exists a Lyapunov function for the system (1.44), then the system is stable.

Theorem 1.3.14 (Local Asymptotic Stability). If there exists a Lyapunov function for the system (1.44) and $\dot{V}(x(t)) \equiv \operatorname{grad} V \cdot f(x) < 0$ on $O \setminus \{0\}$, then the system is asymptotically stable.

Theorem 1.3.15 (Instability). If in some neighborhood $\widetilde{O} \subset O$ of the origin there exists $V$, a positive definite functional on $\widetilde{O}$, with $\dot{V}(x(t)) \equiv \operatorname{grad} V \cdot f(x) > 0$ on $\widetilde{O} \setminus \{0\}$, then the system (1.44) is unstable.

For the proofs of these theorems, see [2].

1.4 Exercises

E 1. Determine the general solution of the following SDEs, $Y' = AY$ (the matrix $A$ is diagonalizable), if

$$\text{a) } A = \begin{pmatrix} -3 & 1 & 1 \\ 1 & -3 & 1 \\ 1 & 1 & -1 \end{pmatrix}; \quad \text{b) } A = \begin{pmatrix} 2 & 1 & 1 \\ 2 & 3 & 2 \\ 3 & 3 & 4 \end{pmatrix}; \quad \text{c) } A = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & -1 & 3 \end{pmatrix}.$$
Solution. a) [Stage 1]: Determine the eigenvalues of the matrix $A$.

$$\det(A - \lambda I_3) = 0 \iff \begin{vmatrix} -3-\lambda & 1 & 1 \\ 1 & -3-\lambda & 1 \\ 1 & 1 & -1-\lambda \end{vmatrix} = 0.$$

One can expand this determinant using the triangle rule, the Sarrus rule or the 'zero' rule. It is advisable to use the last one.
By subtracting column 2 from column 1, one obtains

$$\begin{vmatrix} -4-\lambda & 1 & 1 \\ 4+\lambda & -3-\lambda & 1 \\ 0 & 1 & -1-\lambda \end{vmatrix} = 0,$$

hence $4 + \lambda$ is a common factor and the determinant becomes

$$(4+\lambda)\begin{vmatrix} -1 & 1 & 1 \\ 1 & -3-\lambda & 1 \\ 0 & 1 & -1-\lambda \end{vmatrix} = 0.$$

By adding row 1 to row 2, one gets

$$(4+\lambda)\begin{vmatrix} -1 & 1 & 1 \\ 0 & -2-\lambda & 2 \\ 0 & 1 & -1-\lambda \end{vmatrix} = 0.$$

So we have that

$$\det(A - \lambda I_3) = 0 \iff -(4+\lambda)\left[ (2+\lambda)(1+\lambda) - 2 \right] = 0 \iff (4+\lambda)(\lambda^2 + 3\lambda) = 0.$$

We conclude that we have three distinct eigenvalues: $\lambda_1 = -4$, $\lambda_2 = 0$ and $\lambda_3 = -3$.

[Stage 2]: Determine the corresponding eigenvectors for every eigenvalue $\lambda$.
1. $(A - \lambda_1 I_3)v = 0 \iff \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 3 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. This is equivalent to the system

$$\begin{cases} a + b + c = 0 \\ a + b + 3c = 0 \end{cases},$$

hence $c = 0$, $a = -b$ and, for $b = 1$, $(-1, 1, 0)^T$ is an eigenvector corresponding to $\lambda_1 = -4$.
2. $(A - \lambda_2 I_3)v = 0 \iff \begin{pmatrix} -3 & 1 & 1 \\ 1 & -3 & 1 \\ 1 & 1 & -1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. This is equivalent to the system

$$\begin{cases} -3a + b + c = 0 \\ a - 3b + c = 0 \\ a + b - c = 0 \end{cases},$$

hence $c = 2b$, $a = b$ and, for $b = 1$, $(1, 1, 2)^T$ is an eigenvector corresponding to $\lambda_2 = 0$.
3. $(A - \lambda_3 I_3)v = 0 \iff \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. Again we obtain a system, in this case

$$\begin{cases} b + c = 0 \\ a + c = 0 \\ a + b + 2c = 0 \end{cases},$$

hence $b = -c$, $a = -c$ and, for $c = 1$, $(-1, -1, 1)^T$ is an eigenvector corresponding to $\lambda_3 = -3$.
[Stage 3]: Determine a fundamental system of solutions for the SDE:

$$Y_1 = e^{-4x}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \quad Y_2 = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}, \quad Y_3 = e^{-3x}\begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix}$$

is a fundamental system of solutions by Proposition 1.1.6.
[Stage 4]: Write the general solution of the SDE:

$$Y = C_1 Y_1 + C_2 Y_2 + C_3 Y_3 = C_1 e^{-4x}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + C_2\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix} + C_3 e^{-3x}\begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix},$$

where $C_1, C_2, C_3$ are arbitrary real constants.
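The hand computation can be cross-checked numerically (an added illustration, not part of the original solution): numpy should recover the spectrum {-4, -3, 0}, and each vector found above should be mapped by A to the corresponding multiple of itself.

```python
import numpy as np

A = np.array([[-3.0, 1.0, 1.0],
              [1.0, -3.0, 1.0],
              [1.0, 1.0, -1.0]])

print(np.sort(np.linalg.eigvals(A).real))    # approx. [-4, -3, 0]

for lam, v in [(-4, [-1, 1, 0]), (0, [1, 1, 2]), (-3, [-1, -1, 1])]:
    v = np.array(v, dtype=float)
    print(np.allclose(A @ v, lam * v))       # True for each eigenpair
```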


b) [Stage 1]: Determine the eigenvalues of the matrix $A$.

$$\det(A - \lambda I_3) = 0 \iff \begin{vmatrix} 2-\lambda & 1 & 1 \\ 2 & 3-\lambda & 2 \\ 3 & 3 & 4-\lambda \end{vmatrix} = 0.$$

Again we can expand this determinant using the triangle rule, the Sarrus rule or the 'zero' rule. It is advisable to use the last one.
By subtracting column 2 from column 3, one obtains

$$\begin{vmatrix} 2-\lambda & 1 & 0 \\ 2 & 3-\lambda & -1+\lambda \\ 3 & 3 & 1-\lambda \end{vmatrix} = 0,$$

hence $1 - \lambda$ is a common factor and the determinant becomes

$$(1-\lambda)\begin{vmatrix} 2-\lambda & 1 & 0 \\ 2 & 3-\lambda & -1 \\ 3 & 3 & 1 \end{vmatrix} = 0.$$

By adding row 2 to row 3, we get that

$$(1-\lambda)\begin{vmatrix} 2-\lambda & 1 & 0 \\ 2 & 3-\lambda & -1 \\ 5 & 6-\lambda & 0 \end{vmatrix} = 0.$$

So we have that

$$\det(A - \lambda I_3) = 0 \iff (1-\lambda)\left[ (2-\lambda)(6-\lambda) - 5 \right] = 0 \iff (1-\lambda)(\lambda^2 - 8\lambda + 7) = 0.$$

We conclude that we have two distinct eigenvalues, $\lambda_1 = 1$ and $\lambda_2 = 7$, with algebraic multiplicities $m_a(\lambda_1) = 2$ and $m_a(\lambda_2) = 1$.
[Stage 2]: Determine the corresponding eigenvectors for every eigenvalue $\lambda$.
1. $(A - \lambda_1 I_3)v = 0 \iff \begin{pmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. This is equivalent to

$$a + b + c = 0,$$

a system with three unknowns and only one equation, hence $a = -b - c$. Since

$$\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} -b-c \\ b \\ c \end{pmatrix} = b\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + c\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix},$$

it follows that the geometric multiplicity of $\lambda_1$ is 2, and $(-1, 1, 0)^T$, $(-1, 0, 1)^T$ are linearly independent eigenvectors corresponding to $\lambda_1 = 1$ (for $b = 1$, $c = 0$ and $b = 0$, $c = 1$ respectively).
2. $(A - \lambda_2 I_3)v = 0 \iff \begin{pmatrix} -5 & 1 & 1 \\ 2 & -4 & 2 \\ 3 & 3 & -3 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. This is equivalent to the system

$$\begin{cases} -5a + b + c = 0 \\ 2a - 4b + 2c = 0 \\ 3a + 3b - 3c = 0 \end{cases},$$

hence $b = 2a$, $c = 3a$ and, for $a = 1$, $(1, 2, 3)^T$ is an eigenvector corresponding to $\lambda_2 = 7$.
[Stage 3]: Determine a fundamental system of solutions for the SDE:

$$Y_1 = e^{x}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \quad Y_2 = e^{x}\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix}, \quad Y_3 = e^{7x}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$

is a fundamental system of solutions by Proposition 1.1.6.
[Stage 4]: Write the general solution of the SDE:

$$Y = C_1 Y_1 + C_2 Y_2 + C_3 Y_3 = C_1 e^{x}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} + C_2 e^{x}\begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} + C_3 e^{7x}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},$$

where $C_1, C_2, C_3$ are arbitrary real constants.


c) [Stage 1]: Determine the eigenvalues of the matrix $A$.

$$\det(A - \lambda I_3) = 0 \iff (3-\lambda)\left[ (3-\lambda)^2 + 1 \right] = 0.$$

We obtain three distinct eigenvalues: one is real, $\lambda_1 = 3$, and the other two are complex conjugate roots, $\lambda_2 = 3 + i$ and $\lambda_3 = 3 - i$. Thus $m_a(\lambda) = 1$ for every eigenvalue $\lambda$.
[Stage 2]: Determine the corresponding eigenvectors for every eigenvalue $\lambda$.
1. $(A - \lambda_1 I_3)v = 0 \iff \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$. This is equivalent to the system

$$\begin{cases} b = 0 \\ c = 0 \end{cases},$$

hence $a \in \mathbb{R}$ and $b = c = 0$. Thus $(1, 0, 0)^T$ is an eigenvector corresponding to the real eigenvalue $\lambda_1 = 3$ (for $a = 1$).
2. In the case of the complex eigenvalues, since they are conjugate, it is not necessary to use both.
$(A - \lambda_2 I_3)v = 0 \iff \begin{pmatrix} -i & 1 & 0 \\ 0 & -i & 1 \\ 0 & -1 & -i \end{pmatrix}\begin{pmatrix} a+bi \\ c+di \\ e+fi \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$, where $a, b, c, d, e, f$ are arbitrary real constants. This is equivalent to the system

$$\begin{cases} -ai + b + c + di = 0 \\ -ci + d + e + fi = 0 \\ -c - di - ei + f = 0 \end{cases} \iff \begin{cases} b + c = 0 \\ -a + d = 0 \\ d + e = 0 \\ -c + f = 0 \end{cases},$$

so $(a + bi, \; -b + ai, \; -a - bi)^T$, where $a, b \in \mathbb{R}$, describes the complex eigenspace corresponding to the complex eigenvalue $\lambda_2$.
[Stage 3 and Stage 4]: Determine a fundamental system of solutions for the SDE and write the general solution.
First of all, $Y_1(x) = C_1 e^{3x}(1, 0, 0)^T$, where $C_1 \in \mathbb{R}$, is a solution of the SDE.
If one considers²

$$Y(x) = e^{(3+i)x}\begin{pmatrix} a+bi \\ -b+ai \\ -a-bi \end{pmatrix} = e^{3x}(\cos x + i\sin x)\begin{pmatrix} a+bi \\ -b+ai \\ -a-bi \end{pmatrix} = e^{3x}\begin{pmatrix} a\cos x - b\sin x + i(b\cos x + a\sin x) \\ -b\cos x - a\sin x + i(a\cos x - b\sin x) \\ -a\cos x + b\sin x + i(-b\cos x - a\sin x) \end{pmatrix},$$

both the real and the imaginary part of $Y(x)$ are solutions of the SDE.
The general solution, if one considers the real part of $Y(x)$, is

$$Y(x) = Y_1(x) + \operatorname{Re} Y(x) = e^{3x}\begin{pmatrix} C_1 + a\cos x - b\sin x \\ -b\cos x - a\sin x \\ -a\cos x + b\sin x \end{pmatrix},$$

where $C_1, a, b \in \mathbb{R}$.

²The complex exponential will be studied in Chapter 2.

W 1. Determine the general solution of the following SDEs, $Y' = AY$ (the matrix $A$ is diagonalizable), if

$$\text{a) } A = \begin{pmatrix} 4 & -2 & 2 \\ -5 & 7 & -5 \\ -6 & 6 & -4 \end{pmatrix}; \quad \text{b) } A = \begin{pmatrix} 1 & 0 & 3 \\ 2 & 1 & 2 \\ 3 & 0 & 1 \end{pmatrix};$$

$$\text{c) } A = \begin{pmatrix} 7 & 4 & -1 \\ 4 & 7 & -1 \\ -4 & -4 & 4 \end{pmatrix}; \quad \text{d) } A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.$$

E 2. Determine the general solution of the SDEs, Y′ = AY, in the following situations (the matrix A is not diagonalizable):
$$a)\ A=\begin{pmatrix}2&1&2\\-1&0&-2\\1&1&2\end{pmatrix};\quad b)\ A=\begin{pmatrix}1&-3&3\\-2&-6&13\\-1&-4&8\end{pmatrix}.$$
Solution. a) [Stage 1]: Determine the eigenvalues of the matrix A.
$$\det(A-\lambda I_3)=0 \Leftrightarrow \begin{vmatrix}2-\lambda&1&2\\-1&-\lambda&-2\\1&1&2-\lambda\end{vmatrix}=0 \Leftrightarrow (1-\lambda)^2(2-\lambda)=0,$$
hence one obtains the distinct eigenvalues λ1 = 1 and λ2 = 2, with ma(λ1) = 2 and ma(λ2) = 1.
[Stage 2]: Determine the corresponding eigenvectors and principal vectors for every eigenvalue λ.
1. $(A-\lambda_1 I_3)v=0 \Leftrightarrow \begin{pmatrix}1&1&2\\-1&-1&-2\\1&1&1\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$. This is equivalent to the system
$$\begin{cases}a+b+2c=0\\a+b+c=0\end{cases},$$
hence c = 0, a = −b and, for b = 1, $v_1=(-1,1,0)^T$ is an eigenvector corresponding to λ1 = 1. Since mg(λ1) = 1, it follows that one needs to obtain one principal vector vp1. This is due to the fact that ma(λ1) − mg(λ1) = 2 − 1 = 1. So,
    
$(A-\lambda_1 I_3)v_{p1}=v_1 \Leftrightarrow \begin{pmatrix}1&1&2\\-1&-1&-2\\1&1&1\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}-1\\1\\0\end{pmatrix}$. This is equivalent to the system
$$\begin{cases}a+b+2c=-1\\a+b+c=0\end{cases},$$
hence c = −1 and a = 1 − b. Since
$$\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}1-b\\b\\-1\end{pmatrix}=b\begin{pmatrix}-1\\1\\0\end{pmatrix}+\begin{pmatrix}1\\0\\-1\end{pmatrix},$$
it follows that a principal vector needed is $v_{p1}=(1,0,-1)^T$ (for b = 0).
    
2. $(A-\lambda_2 I_3)v=0 \Leftrightarrow \begin{pmatrix}0&1&2\\-1&-2&-2\\1&1&0\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$. This is equivalent to the system
$$\begin{cases}b+2c=0\\-a-2b-2c=0\\a+b=0\end{cases},$$
hence b = −2c, a = 2c and, for c = 1, $v_2=(2,-2,1)^T$ is an eigenvector corresponding to λ2 = 2. Since ma(λ2) = mg(λ2) = 1, there are no corresponding principal vectors for λ2 = 2.
[Stage 3]: Determine a fundamental system of solutions for the SDEs.
$$Y_1=e^x\begin{pmatrix}-1\\1\\0\end{pmatrix},\quad Y_2=e^x\left[\frac{x}{1!}\begin{pmatrix}-1\\1\\0\end{pmatrix}+\begin{pmatrix}1\\0\\-1\end{pmatrix}\right],\quad Y_3=e^{2x}\begin{pmatrix}2\\-2\\1\end{pmatrix}$$
is a fundamental system of solutions by Proposition 1.1.10.
[Stage 4]: Write the general solution of the SDEs.
$$Y=C_1Y_1+C_2Y_2+C_3Y_3=C_1e^x\begin{pmatrix}-1\\1\\0\end{pmatrix}+C_2e^x\begin{pmatrix}-x+1\\x\\-1\end{pmatrix}+C_3e^{2x}\begin{pmatrix}2\\-2\\1\end{pmatrix},$$
where C1, C2, C3 are arbitrary real constants.
b) [Stage 1]: Determine the eigenvalues of the matrix A.
$$\det(A-\lambda I_3)=0 \Leftrightarrow \begin{vmatrix}1-\lambda&-3&3\\-2&-6-\lambda&13\\-1&-4&8-\lambda\end{vmatrix}=0 \Leftrightarrow (\lambda-1)^3=0,$$
hence one obtains λ = 1, with ma(λ) = 3.


[Stage 2]: Determine the corresponding eigenvectors and principal vectors for every eigenvalue λ.
$(A-\lambda I_3)v=0 \Leftrightarrow \begin{pmatrix}0&-3&3\\-2&-7&13\\-1&-4&7\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$. This is equivalent to the system
$$\begin{cases}-3b+3c=0\\-2a-7b+13c=0\\-a-4b+7c=0\end{cases},$$
hence b = c and a = 3c. Putting c = 1, it follows that $v_1=(3,1,1)^T$ is an eigenvector corresponding to λ = 1. Since ma(λ) = 3 and mg(λ) = 1, one needs to obtain two principal vectors, vp1 and vp2.
First we have that
$(A-\lambda I_3)v_{p1}=v_1 \Leftrightarrow \begin{pmatrix}0&-3&3\\-2&-7&13\\-1&-4&7\end{pmatrix}\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}3\\1\\1\end{pmatrix}$. This is equivalent to the system
$$\begin{cases}-3b+3c=3\\-2a-7b+13c=1\\-a-4b+7c=1\end{cases},$$
hence c = 1 + b and a = 3b + 6. Since
$$\begin{pmatrix}a\\b\\c\end{pmatrix}=\begin{pmatrix}3b+6\\b\\b+1\end{pmatrix}=b\begin{pmatrix}3\\1\\1\end{pmatrix}+\begin{pmatrix}6\\0\\1\end{pmatrix},$$
it follows that a first principal vector needed is $v_{p1}=(6,0,1)^T$ (for b = 0).
For finding the secondprincipal vector,
one solves
 
0 −3 3 a 6
(A − λI3 )vp2 = vp1 ⇔ −2 −7 13
   b = 0 . This is equivalent
 
−1 −4 7 c 1
to the system

−3b + 3c = 6

−2a − 7b + 13c = 0 ,

−a − 4b + 7c = 1

     
a 3b + 13 3
hence c = 2 + b and a = 3b + 13. Since  b  =  b  = b 1  +
  c b+2  1
13 13
 0 , it follows that the second principal vector can be vp2 =  0 .
2 2
[Stage 3]: Determine a fundamental system of solutions for the SDEs.
$$Y_1=e^x\begin{pmatrix}3\\1\\1\end{pmatrix},\quad Y_2=e^x\left[\frac{x}{1!}\begin{pmatrix}3\\1\\1\end{pmatrix}+\begin{pmatrix}6\\0\\1\end{pmatrix}\right],\quad Y_3=e^x\left[\frac{x^2}{2!}\begin{pmatrix}3\\1\\1\end{pmatrix}+\frac{x}{1!}\begin{pmatrix}6\\0\\1\end{pmatrix}+\begin{pmatrix}13\\0\\2\end{pmatrix}\right]$$
is a fundamental system of solutions by Proposition 1.1.10.
[Stage 4]: Write the general solution of the SDEs.
$$Y=C_1Y_1+C_2Y_2+C_3Y_3=C_1e^x\begin{pmatrix}3\\1\\1\end{pmatrix}+C_2e^x\begin{pmatrix}3x+6\\x\\x+1\end{pmatrix}+C_3e^x\begin{pmatrix}\frac{3x^2}{2}+6x+13\\[2pt]\frac{x^2}{2}\\[2pt]\frac{x^2}{2}+x+2\end{pmatrix},$$
where C1, C2, C3 are arbitrary real constants.
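The eigenvector/principal-vector chain above is exactly a Jordan chain, and a computer algebra system can confirm it. A minimal sketch with sympy (the chain is not unique, so jordan_form may report different vectors, but the chain conditions below are exact):

import sympy as sp

A = sp.Matrix([[1, -3, 3], [-2, -6, 13], [-1, -4, 8]])
N = A - sp.eye(3)                       # nilpotent part for the eigenvalue 1

v1  = sp.Matrix([3, 1, 1])
vp1 = sp.Matrix([6, 0, 1])
vp2 = sp.Matrix([13, 0, 2])

# Chain conditions: N v1 = 0, N vp1 = v1, N vp2 = vp1
print(N*v1, N*vp1 - v1, N*vp2 - vp1)    # three zero vectors

P, J = A.jordan_form()                  # J is a single 3x3 Jordan cell for 1
print(J)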

W 2. Determine the general solution of the following SDEs, Y′ = AY (the matrix A is not diagonalizable):
$$a)\ A=\begin{pmatrix}0&1&-1\\2&1&1\\0&-1&1\end{pmatrix};\quad b)\ A=\begin{pmatrix}2&-1&2\\5&-3&3\\-1&0&-2\end{pmatrix};\quad c)\ A=\begin{pmatrix}4&-1&1\\0&2&2\\0&-2&6\end{pmatrix};\quad d)\ A=\begin{pmatrix}4&1&1\\0&2&2\\0&1&3\end{pmatrix}.$$

E 3. Find $e^{Ax}$ for the following diagonalizable matrices:
$$a)\ A=\begin{pmatrix}1&5\\5&1\end{pmatrix};\quad b)\ A=\begin{pmatrix}2&1&1\\2&3&2\\3&3&4\end{pmatrix}.$$
Solution. a) We first find the eigenvalues of the matrix A by solving the equation
$$\det(A-\lambda I_2)=0 \Leftrightarrow \begin{vmatrix}1-\lambda&5\\5&1-\lambda\end{vmatrix}=0.$$
We get that the eigenvalues are λ1 = −4 and λ2 = 6, both with algebraic multiplicity equal to 1, hence they are equal to the corresponding geometric multiplicities.
A corresponding eigenvector for λ1 = −4 is v1 = (1, −1)^T and for λ2 = 6 it is v2 = (1, 1)^T.
We write the transition matrix $C=[v_1\ v_2]=\begin{pmatrix}1&1\\-1&1\end{pmatrix}$, which has the inverse $C^{-1}=\frac{1}{2}\begin{pmatrix}1&-1\\1&1\end{pmatrix}$. We obtain that the diagonal form of matrix A is
$$D=C^{-1}AC=\begin{pmatrix}-4&0\\0&6\end{pmatrix},$$
hence
$$e^{Dx}=\begin{pmatrix}e^{-4x}&0\\0&e^{6x}\end{pmatrix}.$$
It follows that
$$e^{Ax}=Ce^{Dx}C^{-1}=\frac{1}{2}\begin{pmatrix}e^{-4x}+e^{6x}&-e^{-4x}+e^{6x}\\-e^{-4x}+e^{6x}&e^{-4x}+e^{6x}\end{pmatrix}.$$

b) From E 1. b) we already know that the eigenvalues of A are λ1 = 1, with ma(λ1) = 2, and λ2 = 7, with ma(λ2) = 1. We have also determined the corresponding eigenvectors: for λ1 = 1 we have the linearly independent eigenvectors $v_1=(-1,1,0)^T$ and $v_2=(-1,0,1)^T$; for λ2 = 7 we have the eigenvector $v_3=(1,2,3)^T$. We write the matrix $C=\begin{pmatrix}-1&-1&1\\1&0&2\\0&1&3\end{pmatrix}$, which has the inverse $C^{-1}=\frac{1}{6}\begin{pmatrix}-2&4&-2\\-3&-3&3\\1&1&1\end{pmatrix}$.
Since
$$e^{Dx}=\begin{pmatrix}e^x&0&0\\0&e^x&0\\0&0&e^{7x}\end{pmatrix},$$
it follows that
$$e^{Ax}=Ce^{Dx}C^{-1}=\frac{1}{6}\begin{pmatrix}5e^x+e^{7x}&-e^x+e^{7x}&-e^x+e^{7x}\\-2e^x+2e^{7x}&4e^x+2e^{7x}&-2e^x+2e^{7x}\\-3e^x+3e^{7x}&-3e^x+3e^{7x}&3e^x+3e^{7x}\end{pmatrix}.$$
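These matrix exponentials can also be obtained symbolically; sympy's Matrix.exp diagonalizes the matrix internally. A minimal sketch for the 2 × 2 matrix of part a):

import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[1, 5],
               [5, 1]])

E = sp.simplify((A * x).exp())   # symbolic e^{Ax}
print(E)
# Matches (1/2)*[[exp(-4x)+exp(6x), -exp(-4x)+exp(6x)],
#                [-exp(-4x)+exp(6x),  exp(-4x)+exp(6x)]]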

W 3. Find $e^{Ax}$ for the following diagonalizable matrices:
$$a)\ A=\begin{pmatrix}0&-1\\-1&0\end{pmatrix};\quad b)\ A=\begin{pmatrix}3&1&1\\1&3&1\\1&1&3\end{pmatrix}.$$

E 4. Find $e^{Ax}$ for the following non-diagonalizable matrices:
$$a)\ A=\begin{pmatrix}3&1\\-1&5\end{pmatrix};\quad b)\ A=\begin{pmatrix}4&0&1\\1&3&0\\1&0&4\end{pmatrix}.$$

Solution. a) We start by computing the eigenvalues of the matrix A. We have that
$$\det(A-\lambda I_2)=0 \Leftrightarrow \begin{vmatrix}3-\lambda&1\\-1&5-\lambda\end{vmatrix}=0,$$
which implies that λ = 4, with ma(λ) = 2.
The eigenvector corresponding to λ is v = (1, 1)^T and the needed principal vector is vp = (0, 1)^T.
We build the matrix $C=\begin{pmatrix}1&0\\1&1\end{pmatrix}$, which has the inverse $C^{-1}=\begin{pmatrix}1&0\\-1&1\end{pmatrix}$.
We have that the matrix $J=C^{-1}AC=\begin{pmatrix}4&1\\0&4\end{pmatrix}$ has one Jordan cell of dimension 2 corresponding to the eigenvalue λ = 4, hence $e^{Jx}=e^{4x}\begin{pmatrix}1&x\\0&1\end{pmatrix}$.
In conclusion,
$$e^{Ax}=Ce^{Jx}C^{-1}=e^{4x}\begin{pmatrix}1-x&x\\-x&1+x\end{pmatrix}.$$

b) Again we start by determining the eigenvalues of the matrix A. We have that
$$\det(A-\lambda I_3)=0 \Leftrightarrow (3-\lambda)^2(5-\lambda)=0,$$
hence λ1 = 3, with ma(λ1) = 2, and λ2 = 5, with ma(λ2) = 1, are the eigenvalues.
For λ1 one can find as an eigenvector v1 = (0, 1, 0)^T and as a principal vector vp = (1, 0, −1)^T, and for λ2 a corresponding eigenvector is v2 = (2, 1, 2)^T.
As before, we build the matrix $C=\begin{pmatrix}0&1&2\\1&0&1\\0&-1&2\end{pmatrix}$, which has the inverse $C^{-1}=\frac{1}{4}\begin{pmatrix}-1&4&-1\\2&0&-2\\1&0&1\end{pmatrix}$.
We have that the matrix $J=C^{-1}AC=\begin{pmatrix}3&1&0\\0&3&0\\0&0&5\end{pmatrix}$ has one Jordan cell of dimension 2 corresponding to the eigenvalue λ1, $J_{\lambda_1=3}=\begin{pmatrix}3&1\\0&3\end{pmatrix}$, and one Jordan cell of dimension 1, $J_{\lambda_2=5}=[5]$.
For $J_{\lambda_1=3}$ we have that $e^{J_{\lambda_1}x}=e^{3x}\begin{pmatrix}1&x\\0&1\end{pmatrix}$ and for $J_{\lambda_2=5}$ we obtain that $e^{J_{\lambda_2}x}=e^{5x}[1]$, hence
$$e^{Jx}=\begin{pmatrix}e^{3x}&xe^{3x}&0\\0&e^{3x}&0\\0&0&e^{5x}\end{pmatrix}$$
and
$$e^{Ax}=Ce^{Jx}C^{-1}=\frac{1}{4}\begin{pmatrix}2e^{3x}+2e^{5x}&0&-2e^{3x}+2e^{5x}\\(2x-1)e^{3x}+e^{5x}&4e^{3x}&(-2x-1)e^{3x}+e^{5x}\\-2e^{3x}+2e^{5x}&0&2e^{3x}+2e^{5x}\end{pmatrix}.$$

W 4. Find $e^{Ax}$ for the following non-diagonalizable matrices:
$$a)\ A=\begin{pmatrix}2&1\\-1&4\end{pmatrix};\quad b)\ A=\begin{pmatrix}2&0&-6\\1&-1&1\\1&0&-3\end{pmatrix};\quad c)\ A=\begin{pmatrix}3&1&0\\0&1&-2\\0&2&5\end{pmatrix};\quad d)\ A=\begin{pmatrix}0&1&0\\-4&4&0\\-2&1&2\end{pmatrix}.$$

E 5. Solve the following initial value problems:
a) $\begin{cases}y_1'=-2y_1+2y_2\\y_2'=y_1-y_2+e^{-3x}\end{cases}$, y1(0) = 2, y2(0) = 1;
b) $\begin{cases}y_1'=y_1-3y_2+3y_3+3\\y_2'=-2y_1-6y_2+13y_3-1\\y_3'=-y_1-4y_2+8y_3\end{cases}$, y1(0) = 0, y2(0) = 0, y3(0) = 2;
c) Y′(x) = AY(x) + B(x), where
$$A=\begin{pmatrix}4&-2&2\\-5&7&-5\\-6&6&-4\end{pmatrix},\quad B(x)=\begin{pmatrix}1\\x\\0\end{pmatrix},\quad Y(0)=\begin{pmatrix}1\\1\\1\end{pmatrix}.$$

Solution. We solve all three exercises using Method 2, the variation of parameters formula (see formula (1.28)).
a) We have that $A=\begin{pmatrix}-2&2\\1&-1\end{pmatrix}$, $B(x)=\begin{pmatrix}0\\e^{-3x}\end{pmatrix}$, $x_0=0$ and $Y_0=\begin{pmatrix}2\\1\end{pmatrix}$.
[Stage 1]: We need to find $e^{Ax}$. The eigenvalues of the matrix A are λ1 = 0 and λ2 = −3, hence we have a diagonalizable matrix (see E 3. a)).
We get that
$$e^{Dx}=\begin{pmatrix}1&0\\0&e^{-3x}\end{pmatrix}$$
and
$$e^{Ax}=\frac{1}{3}\begin{pmatrix}1+2e^{-3x}&2-2e^{-3x}\\1-e^{-3x}&2+e^{-3x}\end{pmatrix}.$$
[Stage 2]: Now we compute $Y_h=e^{Ax}Y_0$, the solution of the associated homogeneous system. We obtain that
$$Y_h=\frac{1}{3}\begin{pmatrix}1+2e^{-3x}&2-2e^{-3x}\\1-e^{-3x}&2+e^{-3x}\end{pmatrix}\begin{pmatrix}2\\1\end{pmatrix}=\frac{1}{3}\begin{pmatrix}4+2e^{-3x}\\4-e^{-3x}\end{pmatrix}.$$

[Stage 3]: We proceed to determining a particular solution of the initial system: $Y_p=\int_0^x e^{A(x-t)}B(t)\,dt$. We have that
$$Y_p=\frac{1}{3}\int_0^x\begin{pmatrix}1+2e^{-3(x-t)}&2-2e^{-3(x-t)}\\1-e^{-3(x-t)}&2+e^{-3(x-t)}\end{pmatrix}\begin{pmatrix}0\\e^{-3t}\end{pmatrix}dt=\frac{1}{3}\int_0^x\begin{pmatrix}2e^{-3t}-2e^{-3x}\\2e^{-3t}+e^{-3x}\end{pmatrix}dt$$
$$=\frac{1}{3}\begin{pmatrix}-\frac{2e^{-3t}}{3}\Big|_0^x-2e^{-3x}\,t\Big|_0^x\\[4pt]-\frac{2e^{-3t}}{3}\Big|_0^x+e^{-3x}\,t\Big|_0^x\end{pmatrix}=\frac{1}{3}\begin{pmatrix}\frac{2}{3}-\frac{2}{3}e^{-3x}-2xe^{-3x}\\[4pt]\frac{2}{3}-\frac{2}{3}e^{-3x}+xe^{-3x}\end{pmatrix}.$$
[Stage 4]: Now we add up the results from the previous two stages: Y = Yh + Yp. In conclusion,
$$Y=\frac{1}{3}\begin{pmatrix}\frac{14}{3}+\frac{4}{3}e^{-3x}-2xe^{-3x}\\[4pt]\frac{14}{3}-\frac{5}{3}e^{-3x}+xe^{-3x}\end{pmatrix}.$$
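The closed form is easy to verify by substituting back into the system. A minimal sketch with sympy, checking Y′ = AY + B and the initial condition Y(0) = (2, 1)^T:

import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[-2, 2], [1, -1]])
B = sp.Matrix([0, sp.exp(-3*x)])

Y = sp.Rational(1, 3) * sp.Matrix([
    sp.Rational(14, 3) + sp.Rational(4, 3)*sp.exp(-3*x) - 2*x*sp.exp(-3*x),
    sp.Rational(14, 3) - sp.Rational(5, 3)*sp.exp(-3*x) + x*sp.exp(-3*x),
])

print(sp.simplify(Y.diff(x) - (A*Y + B)))   # Matrix([[0], [0]])
print(Y.subs(x, 0))                          # Matrix([[2], [1]])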

   
b) We have that $A=\begin{pmatrix}1&-3&3\\-2&-6&13\\-1&-4&8\end{pmatrix}$, $B(x)=\begin{pmatrix}3\\-1\\0\end{pmatrix}$, $x_0=0$ and $Y_0=\begin{pmatrix}0\\0\\2\end{pmatrix}$.
[Stage 1]: We need to find $e^{Ax}$. The only eigenvalue of the matrix A is λ = 1, with ma(λ) = 3 (see E 2. b)). The matrix A is not diagonalizable and one can find as an eigenvector $v=(3,1,1)^T$ and two principal vectors $v_{p1}=(6,0,1)^T$, $v_{p2}=(13,0,2)^T$ (see E 2. b)). We construct the matrix $C=\begin{pmatrix}3&6&13\\1&0&0\\1&1&2\end{pmatrix}$, which has the inverse $C^{-1}=\begin{pmatrix}0&1&0\\-2&-7&13\\1&3&-6\end{pmatrix}$. We have that
$$J=J_{\lambda=1}=\begin{pmatrix}1&1&0\\0&1&1\\0&0&1\end{pmatrix},\qquad e^{Jx}=e^x\begin{pmatrix}1&x&\frac{x^2}{2}\\0&1&x\\0&0&1\end{pmatrix}.$$
It follows that
$$e^{Ax}=e^x\begin{pmatrix}\frac{3x^2}{2}+1&\frac{9x^2}{2}-3x&-9x^2+3x\\[2pt]\frac{x^2}{2}-2x&\frac{3x^2}{2}+1-7x&13x-3x^2\\[2pt]\frac{x^2}{2}-x&\frac{3x^2}{2}-4x&-3x^2+7x+1\end{pmatrix}.$$

[Stage 2]: Now we compute $Y_h=e^{Ax}Y_0$, the solution of the associated homogeneous system. We obtain that
$$Y_h=e^{Ax}\begin{pmatrix}0\\0\\2\end{pmatrix}=2e^x\begin{pmatrix}-9x^2+3x\\13x-3x^2\\-3x^2+7x+1\end{pmatrix}.$$
[Stage 3]: We continue with determining a particular solution of the initial system: $Y_p=\int_0^x e^{A(x-t)}B(t)\,dt$. We have that
$$Y_p=\int_0^x e^{x-t}\begin{pmatrix}3+3(x-t)\\x-t-1\\x-t\end{pmatrix}dt.$$
Since $\int_0^x e^{x-t}(x-t)\,dt=xe^x+1-e^x$ (using integration by parts), it follows that
$$Y_p=\begin{pmatrix}3xe^x\\xe^x-2e^x+2\\xe^x+1-e^x\end{pmatrix}.$$

[Stage 4]: Now we add up the results from the previous two stages: Y = Yh + Yp. In conclusion,
$$Y=\begin{pmatrix}(9x-18x^2)e^x\\(27x-6x^2-2)e^x+2\\(15x-6x^2+1)e^x+1\end{pmatrix},$$
which indeed satisfies Y(0) = (0, 0, 2)^T.

   
c) We have that $A=\begin{pmatrix}4&-2&2\\-5&7&-5\\-6&6&-4\end{pmatrix}$, $B(x)=\begin{pmatrix}1\\x\\0\end{pmatrix}$, $x_0=0$ and $Y_0=\begin{pmatrix}1\\1\\1\end{pmatrix}$.
[Stage 1]: We determine first $e^{Ax}$. The eigenvalues of the matrix A are λ1 = 2, with ma(λ1) = 2, and λ2 = 3, with ma(λ2) = 1. The matrix A is diagonalizable since mg(λ1) = 2 and mg(λ2) = 1. A pair of eligible eigenvectors (for λ1) is $v_1=(1,1,0)^T$ and $v_2=(0,1,1)^T$. For λ2 one can choose the eigenvector $v_3=(-2,5,6)^T$. For the above selection, we have $C=\begin{pmatrix}1&0&-2\\1&1&5\\0&1&6\end{pmatrix}$, which has
   
as the inverse the matrix $C^{-1}=\begin{pmatrix}-1&2&-2\\6&-6&7\\-1&1&-1\end{pmatrix}$. Since $D=\begin{pmatrix}2&0&0\\0&2&0\\0&0&3\end{pmatrix}$ and $e^{Dx}=\begin{pmatrix}e^{2x}&0&0\\0&e^{2x}&0\\0&0&e^{3x}\end{pmatrix}$, it follows that
$$e^{Ax}=Ce^{Dx}C^{-1}=\begin{pmatrix}-e^{2x}+2e^{3x}&2e^{2x}-2e^{3x}&-2e^{2x}+2e^{3x}\\5e^{2x}-5e^{3x}&-4e^{2x}+5e^{3x}&5e^{2x}-5e^{3x}\\6e^{2x}-6e^{3x}&-6e^{2x}+6e^{3x}&7e^{2x}-6e^{3x}\end{pmatrix}.$$

[Stage 2]: Now we compute $Y_h=e^{Ax}Y_0$, the solution of the associated homogeneous system. We have that
$$Y_h=\begin{pmatrix}-e^{2x}+2e^{3x}\\6e^{2x}-5e^{3x}\\7e^{2x}-6e^{3x}\end{pmatrix}.$$

[Stage 3]: Next we determine a particular solution of the initial system: $Y_p=\int_0^x e^{A(x-t)}B(t)\,dt$. We get that
$$Y_p=\int_0^x\begin{pmatrix}e^{2(x-t)}(2t-1)+2e^{3(x-t)}(1-t)\\e^{2(x-t)}(5-4t)+5e^{3(x-t)}(t-1)\\6e^{2(x-t)}(1-t)+6e^{3(x-t)}(t-1)\end{pmatrix}dt=\begin{pmatrix}-\frac{x}{3}-\frac{4}{9}+\frac{4}{9}e^{3x}\\[2pt]\frac{x}{3}-\frac{7}{18}+\frac{3}{2}e^{2x}-\frac{10}{9}e^{3x}\\[2pt]x-\frac{1}{6}+\frac{3}{2}e^{2x}-\frac{4}{3}e^{3x}\end{pmatrix}.$$
[Stage 4]: Now we add up the results from the previous two stages: Y = Yh + Yp.

W 5. Solve the following initial value systems:
a) $\begin{cases}y_1'=5y_1+2y_2+x\\y_2'=2y_1+5y_2+x\end{cases}$, y1(0) = 0, y2(0) = 1;
b) $\begin{cases}y_1'=4y_1+y_2+e^x\\y_2'=-y_1+2y_2\end{cases}$, y1(0) = 1, y2(0) = 1;
c) $\begin{cases}y_1'=2y_1-6y_3+x\\y_2'=y_1-y_2+y_3\\y_3'=y_1-3y_3+x\end{cases}$, y1(0) = 1, y2(0) = 1, y3(0) = 2;
d) $\begin{cases}y_1'=2y_1+y_2+y_3\\y_2'=y_1-2y_2+1\\y_3'=3y_1+3y_2+y_3+e^x\end{cases}$, y1(0) = 0, y2(0) = 0, y3(0) = 2;
e) Y′(x) = AY(x) + B(x), where
$$A=\begin{pmatrix}4&0&1\\1&5&0\\1&0&4\end{pmatrix},\quad B(x)=\begin{pmatrix}0\\x\\0\end{pmatrix},\quad Y(0)=\begin{pmatrix}1\\4\\0\end{pmatrix};$$
f) Y′(x) = AY(x) + B(x), where
$$A=\begin{pmatrix}0&1&1\\1&0&1\\1&1&0\end{pmatrix},\quad B(x)=\begin{pmatrix}e^x\\x\\0\end{pmatrix},\quad Y(0)=\begin{pmatrix}1\\1\\2\end{pmatrix}.$$

E 6. Check the stability of the system Y′ = AY using three methods if:
$$a)\ A=\begin{pmatrix}-2&1\\0&-2\end{pmatrix},\quad Q=\begin{pmatrix}16&0\\0&2\end{pmatrix};\qquad b)\ A=\begin{pmatrix}-3&1&0\\-1&-3&0\\0&0&-1\end{pmatrix},\quad Q=\begin{pmatrix}8&7&0\\7&10&0\\0&0&2\end{pmatrix}.$$

Solution. a) Method 1 (using Theorem 1.3.11). We compute the eigen-


values of the matrix A:

$$\det(A-\lambda I_2)=0 \Leftrightarrow (\lambda+2)^2=0,$$

so λ = −2 is the only eigenvalue. Since Re λ = −2 < 0, it follows that the


system is asymptotically stable.

Method 2 (using Theorem 1.3.4, the Routh-Hurwitz Criterion). We have that the Hurwitz polynomial is
$$p(z)=\det(zI_2-A)=z^2+4z+4,$$
hence a2 = 1, a1 = 4, a0 = 4. The associated Hurwitz matrix is
$$H_2=\begin{pmatrix}a_1&a_2\\0&a_0\end{pmatrix}=\begin{pmatrix}4&1\\0&4\end{pmatrix}.$$
Since ∆1 = 4 > 0 and ∆2 = det H2 = 16 > 0, it follows that the system is asymptotically stable.
Method 3 (using Theorem 1.3.6, the Lyapunov equation). Consider $P=\begin{pmatrix}a&b\\b&c\end{pmatrix}$. Then the Lyapunov equation $A^TP+PA=-Q$ becomes
$$\begin{pmatrix}-4a&a-4b\\a-4b&2b-4c\end{pmatrix}=\begin{pmatrix}-16&0\\0&-2\end{pmatrix},$$
hence a = 4, b = 1 and c = 1. As ∆1 = 4 > 0 and ∆2 = det P = 3 > 0, the matrix P is positive definite and the system is asymptotically stable.
b) Method 1 (using Theorem 1.3.11). We compute the eigenvalues of the matrix A:
$$\det(A-\lambda I_3)=0 \Leftrightarrow (-1-\lambda)(\lambda^2+6\lambda+10)=0,$$
so λ1 = −1, λ2 = −3 − i and λ3 = −3 + i. Since Re λ1 = −1 < 0, Re λ2 = −3 < 0 and Re λ3 = −3 < 0, the system is asymptotically stable.
Method 2 (using Theorem 1.3.4, the Routh-Hurwitz Criterion). We have that the Hurwitz polynomial is
$$p(z)=\det(zI_3-A)=z^3+7z^2+16z+10,$$
hence a3 = 1, a2 = 7, a1 = 16 and a0 = 10. The associated Hurwitz matrix is
$$H_3=\begin{pmatrix}a_2&a_3&0\\a_0&a_1&a_2\\0&0&a_0\end{pmatrix}=\begin{pmatrix}7&1&0\\10&16&7\\0&0&10\end{pmatrix}.$$
Since ∆1 = 7 > 0, ∆2 = 7 · 16 − 10 · 1 = 102 > 0 and ∆3 = det H3 = 1020 > 0, it follows that the system is asymptotically stable.
Method 3 (using Theorem 1.3.6, the Lyapunov equation). Consider $P=\begin{pmatrix}a&b&c\\b&d&e\\c&e&f\end{pmatrix}$. Then the Lyapunov equation $A^TP+PA=-Q$ becomes
$$\begin{pmatrix}-6a-2b&a-6b-d&-4c-e\\a-6b-d&2b-6d&c-4e\\-4c-e&c-4e&-2f\end{pmatrix}=\begin{pmatrix}-8&-7&0\\-7&-10&0\\0&0&-2\end{pmatrix},$$
hence a = 1, b = 1, c = 0, d = 2, e = 0 and f = 1. As ∆1 = 1 > 0, ∆2 = 1 > 0 and ∆3 = det P = 1 > 0, it follows that the system is asymptotically stable.
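In practice, Lyapunov equations are solved numerically. A minimal sketch for part b) using scipy; scipy.linalg.solve_continuous_lyapunov solves MX + XMᵀ = C, so passing M = Aᵀ and C = −Q yields exactly AᵀP + PA = −Q:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-3.,  1.,  0.],
              [-1., -3.,  0.],
              [ 0.,  0., -1.]])
Q = np.array([[8.,  7., 0.],
              [7., 10., 0.],
              [0.,  0., 2.]])

P = solve_continuous_lyapunov(A.T, -Q)
print(P)                                  # [[1, 1, 0], [1, 2, 0], [0, 0, 1]]
print(np.all(np.linalg.eigvals(P) > 0))   # True, so P is positive definite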

W 6. Check the stability of the system Y′ = AY using three methods if:
$$a)\ A=\begin{pmatrix}-3&1\\-1&-3\end{pmatrix},\quad Q=\begin{pmatrix}12&1\\1&24\end{pmatrix};\qquad b)\ A=\begin{pmatrix}-3&0&1\\0&-3&-1\\0&0&-4\end{pmatrix},\quad Q=\begin{pmatrix}6&3&-\frac{1}{2}\\3&6&\frac{1}{2}\\-\frac{1}{2}&\frac{1}{2}&2\end{pmatrix};$$
$$c)\ A=\begin{pmatrix}-5&-1&0\\1&-5&0\\0&0&-3\end{pmatrix},\quad Q=\begin{pmatrix}2&2&0\\2&8&0\\0&0&6\end{pmatrix}.$$

E 7. Determine the solution of the Lyapunov equation for
$$A=\begin{pmatrix}-3&1\\0&-3\end{pmatrix}\quad\text{and}\quad Q=\begin{pmatrix}12&4\\4&10\end{pmatrix},$$
using the formula
$$P=\int_0^\infty e^{A^Tx}Qe^{Ax}\,dx.$$
Solution. Since A is a Jordan cell, it follows that $e^{Ax}=e^{-3x}\begin{pmatrix}1&x\\0&1\end{pmatrix}$.
Hence
$$P=\int_0^\infty e^{-6x}\begin{pmatrix}1&0\\x&1\end{pmatrix}\begin{pmatrix}12&4\\4&10\end{pmatrix}\begin{pmatrix}1&x\\0&1\end{pmatrix}dx=\int_0^\infty e^{-6x}\begin{pmatrix}12&12x+4\\12x+4&12x^2+8x+10\end{pmatrix}dx=\begin{pmatrix}2&1\\1&2\end{pmatrix}.$$
Therefore the system is asymptotically stable, since ∆1 = 2 > 0 and ∆2 = 4 − 1 = 3 > 0.

W 7. Determine the solution of the Lyapunov equation for
$$A=\begin{pmatrix}-1&1&0\\0&-1&0\\0&0&-2\end{pmatrix}\quad\text{and}\quad Q=\begin{pmatrix}4&0&0\\0&2&0\\0&0&12\end{pmatrix},$$
using the formula
$$P=\int_0^\infty e^{A^Tx}Qe^{Ax}\,dx.$$

E 8. Determine a Lyapunov function for the system Y′ = AY, where
$$A=\begin{pmatrix}-1&0\\1&-3\end{pmatrix},$$
using the positive definite matrix
$$Q=\begin{pmatrix}2&4\\4&24\end{pmatrix}.$$
Solution. For the chosen matrix Q, the Lyapunov equation has the solution
$$P=\begin{pmatrix}3&2\\2&4\end{pmatrix}.$$
Hence a Lyapunov function is
$$V(y)=y^TPy=[y_1,y_2]\begin{pmatrix}3&2\\2&4\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}=3y_1^2+4y_1y_2+4y_2^2.$$
Indeed, V(y) = 2y1² + (y1 + 2y2)² > 0 for every y ≠ 0 and, since y1′ = −y1 and y2′ = y1 − 3y2, we have that
$$V'(y)=6y_1y_1'+4y_1'y_2+4y_1y_2'+8y_2y_2'=-2(y_1^2+4y_1y_2+12y_2^2)=-2\big((y_1+2y_2)^2+8y_2^2\big)<0,\ \text{for every }y\neq 0.$$
Therefore, the system is asymptotically stable.

W 8. Determine a Lyapunov function for the system Y′ = AY, where
$$A=\begin{pmatrix}-3&1\\0&-3\end{pmatrix},$$
using the positive definite matrix
$$Q=\begin{pmatrix}12&4\\4&10\end{pmatrix}.$$

E 9. Study the stability of the non-linear system
$$\begin{cases}y_1'=-y_1+y_2^2\\y_2'=-0.5y_1^2-y_2^3+2y_1y_2-3y_1+2.5\end{cases}$$
in a neighborhood of the equilibrium position y1 = 1, y2 = 1.


Solution. The system has the form
$$\begin{cases}y_1'=f_1(y_1,y_2)\\y_2'=f_2(y_1,y_2)\end{cases}.$$
We have that
$$f_1(1,1)=-1+1=0,\qquad f_2(1,1)=-0.5-1+2-3+2.5=0,$$
hence indeed Y = [1, 1]^T is an equilibrium position.
The linearized system (1.44) has the matrix
$$A=\begin{pmatrix}\dfrac{\partial f_1}{\partial y_1}(1,1)&\dfrac{\partial f_1}{\partial y_2}(1,1)\\[6pt]\dfrac{\partial f_2}{\partial y_1}(1,1)&\dfrac{\partial f_2}{\partial y_2}(1,1)\end{pmatrix}=\begin{pmatrix}-1&2\\-2&-1\end{pmatrix}.$$
The characteristic polynomial of A is
$$\det(\lambda I_2-A)=\lambda^2+2\lambda+5,$$
so the eigenvalues of A are λ1 = −1 + 2i and λ2 = −1 − 2i, with Re λ1,2 = −1 < 0. We have that A is a stable matrix and, by Theorem 1.3.11, the nonlinear system is asymptotically stable.
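The linearization step can be automated: the matrix A is just the Jacobian of (f1, f2) evaluated at the equilibrium. A minimal sketch with sympy:

import sympy as sp

y1, y2 = sp.symbols('y1 y2')
f1 = -y1 + y2**2
f2 = -sp.Rational(1, 2)*y1**2 - y2**3 + 2*y1*y2 - 3*y1 + sp.Rational(5, 2)

J = sp.Matrix([f1, f2]).jacobian([y1, y2])
A = J.subs({y1: 1, y2: 1})
print(A)              # Matrix([[-1, 2], [-2, -1]])
print(A.eigenvals())  # {-1 - 2*I: 1, -1 + 2*I: 1}, so Re < 0 and stable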

W 9. Study the stability of the non-linear system
$$\begin{cases}y_1'=-y_2\\y_2'=y_1^4+y_1+ay_2+y_2^4\end{cases}$$
in a neighborhood of the origin.

E 10. Study the stability of the movement of a point mass described in


Figure 1.3.

Figure 1.3: Stability of the Movement of a Point Mass

Solution. By Newton's second law, the system is characterized by the following relations:
$$ma_x=-mg\sin\theta,\qquad a_x=l\dot v,$$
where v is the angular velocity, v = θ̇(t), hence
$$\dot v=\frac{a_x}{l}=-\frac{g}{l}\sin\theta.$$

One obtains the differential equation
$$\ddot\theta(t)=-\frac{g}{l}\sin\theta(t).$$
By choosing y1(t) = θ(t) and y2(t) = θ̇(t), the differential equation is transformed into the following nonlinear SDEs:
$$\begin{cases}\dot y_1=y_2\\\dot y_2=-\dfrac{g}{l}\sin y_1\end{cases}.$$
Obviously, the origin is an equilibrium state of the system. The matrix of the linearized system is
$$A=\begin{pmatrix}0&1\\-\frac{g}{l}\cos y_1&0\end{pmatrix}\bigg|_{y_1=0,\,y_2=0}=\begin{pmatrix}0&1\\-\frac{g}{l}&0\end{pmatrix}.$$
The eigenvalues of A are $\lambda_{1,2}=\pm i\sqrt{\frac{g}{l}}$. Since Re λ1,2 = 0 and ma(λ1,2) = mg(λ1,2) = 1, the system is stable, but not asymptotically stable.

E 11. Study the stability of the RLC network having the state space representation
$$\begin{pmatrix}\dot y_1\\\dot y_2\end{pmatrix}=\begin{pmatrix}0&-1\\1&R\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}.$$
Solution. The characteristic polynomial of matrix A is λ² − Rλ + 1. When we solve the equation λ² − Rλ + 1 = 0, the discriminant is ∆ = R² − 4, hence we have the following discussion:

1. if |R| < 2, the eigenvalues are $\lambda_{1,2}=\dfrac{R}{2}\pm i\dfrac{\sqrt{4-R^2}}{2}$, so we have three possible cases:
1.1 if R ∈ (−2, 0), Re λ1,2 = R/2 < 0, so the system is asymptotically stable;
1.2 if R ∈ (0, 2), Re λ1,2 = R/2 > 0, so the system is unstable;
1.3 if R = 0, Re λ1,2 = 0 and ma(λ1,2) = mg(λ1,2) = 1, so the system is stable, but not asymptotically stable.
2. if |R| ≥ 2, then $\lambda_{1,2}=\dfrac{R}{2}\pm\dfrac{\sqrt{R^2-4}}{2}$ are real; since their product is 1 and their sum is R, for R ≥ 2 both eigenvalues are positive and the system is unstable, while for R ≤ −2 both are negative and the system is asymptotically stable.

Many interesting examples and applications can be found in [6], [15], [25],
[31], with emphasis on problems given at the Students Mathematical Contest
’Traian Lalescu’ in [3] and [34].
For applications with Matlab and Maple, see [1], [4], [16].

Chapter 2

Functions of a Complex
Variable

The theory of complex numbers starts with the imaginary number i. In 1572, in 'Algebra', the Italian engineer Rafael Bombelli suggested that the equation x² + 1 = 0 has a root, namely √−1, which is something imaginary. Many mathematicians of the time disapproved of the theory. René Descartes said that 'imaginary' was a kind of insult, but, a few years later, in his book 'La géométrie', he used the terms 'real' and 'imaginary'.
The symbol i, as we know it today for √−1, was introduced in the eighteenth century by Leonhard Euler. Carl Friedrich Gauss gave in 1831 the geometrical representation of a complex number and, in 1833, William Hamilton expressed a complex number as a pair of real numbers (a, b).
The theory of complex numbers found large applicability in relativity theory, alternating currents, fluid dynamics, quantum mechanics, signal processing etc.
The theory of complex functions is a natural consequence of the devel-
opment of the complex numbers and has applications in various engineering
fields (nuclear, aerospace, mechanical and electrical engineering).

2.1 The Field C


One denotes by C the set R × R ={(x, y) : x, y ∈ R}. The pair z = (x, y),
x, y ∈ R is called a complex number .
On C we define the following operations: addition and multiplication.

The addition is denoted by ’+’ and it is defined as

(x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 ), ∀(x1 , y1 ), (x2 , y2 ) ∈ C

and the multiplication is denoted by ’·’ and it is defined as

(x1 , y1 ) · (x2 , y2 ) = (x1 x2 − y1 y2 , x1 y2 + x2 y1 ), ∀(x1 , y1 ), (x2 , y2 ) ∈ C.

Proposition 2.1.1. The triplet (C, +, ·) is a field.

Proof. The addition has the following properties:

i) associativity: (z1 + z2 ) + z3 = z1 + (z2 + z3 ), ∀z1 , z2 , z3 ∈ C. Indeed,


using the associativity of the addition of the real numbers, one obtains

(z1 + z2 ) + z3 = (x1 + x2 , y1 + y2 ) + (x3 , y3 )


= ((x1 + x2 ) + x3 , (y1 + y2 ) + y3 )
= (x1 + (x2 + x3 ), y1 + (y2 + y3 ))
= (x1 , y1 ) + (x2 + x3 , y2 + y3 )
= z1 + (z2 + z3 );

ii) the complex number (0, 0), which is simply denoted by 0, is the null
element: z + 0 = 0 + z = z, ∀z ∈ C. Obviously, z + 0 = (x, y) + (0, 0) =
(x + 0, y + 0) = (x, y) = z and similarly 0 + z = z;

iii) for every z ∈ C there exists an opposite −z: z + (−z) = (−z) + z = 0.


We simply take −z = (−x, −y);

iv) commutativity: z1 + z2 = z2 + z1 , ∀z1 , z2 ∈ C. This simply follows by


the commutativity in R:

z1 + z2 = (x1 , y1 ) + (x2 , y2 ) = (x1 + x2 , y1 + y2 )


= (x2 + x1 , y2 + y1 ) = (x2 , y2 ) + (x1 , y1 )
= z2 + z1 .

The multiplication has the following properties:

v) associativity: z1 · (z2 · z3 ) = (z1 · z2 ) · z3 , ∀z1 , z2 , z3 ∈ C. This is proved


using the associativity of the multiplication of the real numbers;

vi) the complex number (1, 0), which is simply denoted by 1, is the unity:
z ·1 = 1·z = z, ∀z ∈ C. It is obvious that z ·1 = (x·1−y ·0, x·0+y ·1) =
(x, y) = z;

vii) for every z ∈ C\{0} there exists an inverse z⁻¹: z · z⁻¹ = z⁻¹ · z = 1. Consider z = (x, y) ≠ (0, 0). Then x² + y² > 0 and
$$z^{-1}=\left(\frac{x}{x^2+y^2},\frac{-y}{x^2+y^2}\right).$$
Indeed,
$$z\cdot z^{-1}=(x,y)\cdot\left(\frac{x}{x^2+y^2},\frac{-y}{x^2+y^2}\right)=\left(\frac{x^2+y^2}{x^2+y^2},\frac{xy-yx}{x^2+y^2}\right)=(1,0)=1,$$
and similarly z⁻¹ · z = 1;

viii) commutativity: z1 · z2 = z2 · z1 , ∀z1 , z2 ∈ C. This follows also by the


commutativity in R.
The last property we need to check is
ix) distributivity of multiplication over addition: z1 · (z2 + z3 ) = (z1 · z2 ) +
(z1 · z3 ) and (z1 + z2 ) · z3 = (z1 · z3 ) + (z2 · z3 ), ∀z1 , z2 , z3 ∈ C. This is
left as an exercise for the reader.
Therefore (C, +, ·) fulfills the field axioms.

Proposition 2.1.2. The set C1 := {(x, 0) : x ∈ R} is a subfield of C.


Proof. We first prove that C1 is closed with respect to addition and multi-
plication. Indeed, for every (x1 , 0), (x2 , 0) ∈ C1 , we have that

(x1 , 0) + (x2 , 0) = (x1 + x2 , 0) ∈ C1 ,

and

(x1 , 0) · (x2 , 0) = (x1 x2 − 0 · 0, x1 · 0 + x2 · 0) = (x1 x2 , 0) ∈ C1 .

Also it is obvious that if z = (x, 0) ∈ C1, then −z = (−x, 0) ∈ C1, and if z ≠ 0, then
$$z^{-1}=\left(\frac{x}{x^2+0},\frac{-0}{x^2+0}\right)=\left(\frac{1}{x},0\right)\in C_1.$$

Proposition 2.1.3. The fields R and C1 are isomorphic.


Proof. Consider the natural map f : R → C1 , f (x) = (x, 0). Then f has the
following properties:
i) f is injective: for every x1, x2 ∈ R with x1 ≠ x2, we have f(x1) = (x1, 0) ≠ (x2, 0) = f(x2);
ii) f is surjective: for every z ∈ C1 , there exists x ∈ R such that z = (x, 0),
i.e. f (x) = z;
iii) for every x1 , x2 ∈ R, we have that
f (x1 + x2 ) = (x1 + x2 , 0) = (x1 , 0) + (x2 , 0) = f (x1 ) + f (x2 );

iv) for every x1 , x2 ∈ R, we have that


f (x1 x2 ) = (x1 x2 , 0) = (x1 , 0) · (x2 , 0) = f (x1 ) · f (x2 ).

From i) and ii) we obtain that f is bijective and iii) and iv) imply that f
is a morphism of fields, so f is an isomorphism of fields. It follows that the
fields R and C1 are isomorphic.

As a consequence of Proposition 2.1.3, one can identify the fields R and C1 by identifying x ≅ (x, 0). Then it follows by Proposition 2.1.2 that R is a subfield of C.
Let us denote by i the pair (0, 1) ∈ C. Then
i² = i · i = (0, 1) · (0, 1) = (0 · 0 − 1 · 1, 0 · 1 + 1 · 0) = (−1, 0) = −1,
hence i2 = −1. Again, by using the identification x = (x, 0), we obtain, for
any z ∈ C that
$$z=(x,y)=(x,0)+(0,y)=(x,0)+(0\cdot y-1\cdot 0,\ 0\cdot 0+1\cdot y)=(x,0)+(0,1)\cdot(y,0)=x+iy.$$

Therefore, the isomorphism R ≅ C1 allows a second writing for the complex number z = (x, y), namely
z = x + iy. (2.1)
The form (2.1) is called the algebraic form of a complex number. Also x is
called the real part of the complex number z and y the imaginary part of z.
They are denoted by Re z and Im z, respectively. The x axis is called the
real axis and the y axis the imaginary axis.
We define for every complex number z = x + iy the conjugate to be
z̄ := x − iy. A very useful remark is that z z̄ = x2 + y 2 ∈ R.
Now, one associates to the complex number z = x + iy or z = (x, y)
the point M (x, y) in the xOy plane, which will be called the complex plane.
Obviously, the correspondence x + iy 7→ M (x, y) is a bijective map between
C and the set of points in the complex plane.
Also, to any point M (x, y) in the complex plane, we can associate the
polar coordinates (ρ, θ), where ρ is the distance between the point and the
origin of xOy and θ is the angle between the radius vector of the point and
the positive direction of the real axis.

Figure 2.1: The Polar Coordinates of a Complex Number z

In the right triangle M ON (Figure 2.1), we have that x = ρ cos θ and


y = ρ sin θ, hence z = x + iy becomes

z = ρ(cos θ + i sin θ), (2.2)

which is called the trigonometric form of the complex number z. We


denote ρ by |z|, which represents the absolute value of z and θ by Arg z,
−∞ < θ < ∞, which is called the argument of z.
Again, in the right triangle MON, we get that
$$\rho=\sqrt{x^2+y^2}$$
and
$$\theta=\arctan\frac{y}{x}.$$
We denote by arg z the value of the argument of z for θ ∈ [0, 2π). Since
cos θ and sin θ are periodic functions of period 2π, it follows that

Arg z = arg z + 2kπ, k ∈ Z.

Now, by Euler’s formula eiθ = cos θ + i sin θ, the trigonometric form (2.2) can
be written as
z = ρeiθ , (2.3)
which is called the exponential form of the complex number z.
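The three representations are easy to convert between with Python's standard cmath module; note that cmath.polar returns the argument in (−π, π], whereas arg z below is taken in [0, 2π). A minimal sketch:

import cmath

z = 1 - 1j
rho, theta = cmath.polar(z)             # rho = |z|, theta in (-pi, pi]
print(rho, theta)                       # sqrt(2) and -pi/4
print(rho * cmath.exp(1j * theta))      # exponential form recovers z
print(rho * (cmath.cos(theta) + 1j*cmath.sin(theta)))  # trigonometric form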
Since a complex number is a pair of real numbers, many concepts of real
analysis can be extended to complex analysis. For instance, one can use
|z1 − z2 | as the distance between the points which correspond to z1 and z2 in
the complex plane. Then one can define as the open disk of radius R centered
at z0 ∈ C the set
{z ∈ C : |z − z0 | < R}.
This disk is also called an R-neighborhood of z0 . Using this, one can define
the notion of convergence of a sequence of complex numbers.

Definition 2.1.4. We say that the sequence (zn)n≥1 ⊂ C is convergent and has the limit z0 ∈ C if ∀ε > 0, ∃N ∈ N such that for ∀n ∈ N, n > N, we have that
$$|z_n-z_0|<\varepsilon.$$

For some problems it is necessary to extend the complex plane with one
point.

Definition 2.1.5. A sequence (zn)n≥1 ⊂ C is said to be indefinitely increasing if ∀ε > 0, ∃N ∈ N such that for ∀n ∈ N, n > N, we have that
$$|z_n|>\varepsilon.$$

It follows that the above sequence has no limit. One adopts the convention
that all indefinitely increasing sequences have the limit z = ∞. This is called
the point at infinity and it represents the analogue of ±∞ from R.
Each complex number z ≠ 0 is characterized by a pair (ρ, θ). But z = 0 is characterized by ρ = 0 and an undetermined θ. Since the point at infinity is the limit of all indefinitely increasing sequences, one can say that these sequences tend to ∞, hence the point at infinity is characterized by ρ = ∞ (the 'real' infinity, i.e. the limit of all increasing divergent real sequences) and an undetermined θ.
The set C̃ = C ∪ {∞} is called the extended complex plane.

2.2 The Topology on C̃
Since a complex number is a pair of real numbers, the elements of the usual topology on R can be extended to C̃. As we have mentioned before, one uses the fact that for z1, z2 ∈ C, |z1 − z2| represents the distance between the corresponding points in the complex plane.
For z0 ∈ C and ε > 0, the set Nε(z0) = {z ∈ C : |z − z0| < ε} is said to be an ε-neighborhood of z0. A set Nε(∞) = {z ∈ C : |z| > ε} is called an ε-neighborhood of the point at infinity. Using these two concepts we have the following definition.

Definition 2.2.1. Consider a set S ⊂ C̃.
1. A point z0 is said to be an interior point of S if there exists Nε(z0) ⊂ S. The set of the interior points of S is called the interior of S and it is denoted by S̊. If S = S̊, then S is said to be an open set;
2. A point z0 is said to be an exterior point of S if there exists Nε(z0) ⊂ ∁S (where ∁S is the complement of the set S);
3. A point z0 is said to be a boundary point of S if ∀ε > 0, Nε(z0) ∩ S ≠ ∅ and Nε(z0) ∩ ∁S ≠ ∅. The set of the boundary points of S is called the boundary of S and it is denoted by ∂S.

In the following we are going to introduce topological notions dealing with


connectedness, but we will focus on open sets, even if the general definition
of connectivity can be given for arbitrary sets.

Definition 2.2.2. Consider a subset D ⊂ C̃. Then:

1. we say that D is a connected set if it is not possible to find two disjoint


non-empty open sets D1 and D2 such that D = D1 ∪ D2 ;

2. we say that D is path-wise connected if any two points in D can be
joined by a (piecewise-smooth) curve L ⊂ D;

3. we say that D is a domain if D is an open and path-wise connected


set;

4. a closed contour Γ ⊂ D is a closed piecewise smooth curve without


points of self-intersection;

5. the domain D is called simply connected if any simple closed curve can
be shrunk to a point by continuous deformations without crossing the
boundary of D.
Remark 2.2.3. Regarding the last definition we can make the following
observations:
1. there is an equivalent definition of connectedness for open sets in terms
of curves: an open set D is connected if and only if any two points in
D can be joined by a curve L ⊂ D;

2. an equivalent definition for D to be a simply connected domain involves the notion of homotopy: loosely speaking, two curves are homotopic if one curve can be deformed into the other by a continuous
volves the notion of homotopy: loosely speaking, two curves are ho-
motopic if one curve can be deformed into the other by a continuous
transformation without ever leaving D; a domain D is simply connected
if any two pairs of curves in D with the same end-points are homotopic
(for more explanations on this topic, one should consult [35], pp. 93-
97);

3. another equivalent definition: D is a simply connected domain if and only if both D and its complement in C̃ are connected;
only if both D and its complement in C e are connected;

4. geometrically speaking, a simply connected domain in our setting is


just a domain without holes in it; as basic examples we have disks,
strips ({z ∈ C : a < Im z < b}); counterexamples: the punctured plane
(C \ {0}), the annulus ({z ∈ C : a < |z| < b}, a > 0).
A domain D is called an n-uply connected domain, n ∈ N, if its boundary is made up of n + 1 closed contours as in Figure 2.2 (d) below. If n = 0, then D is simply connected (Figure 2.2 (a)). If n = 1, then D is called a doubly connected domain (Figure 2.2 (b)); if n = 2, then D is called a triply connected domain (Figure 2.2 (c)) and so on.

Figure 2.2: Different Types of Connected Domains: (a) a simply connected domain; (b) a doubly connected domain; (c) a triply connected domain; (d) an n-uply connected domain

The sets l1, l2, . . . , ln, . . ., which are not included in the domain, are called lacunas.

2.3 Examples of Functions of a Complex Variable
Let D ⊂ C be a domain. A function f : D → C is said to be a single-valued complex function. We can use the different representations of complex numbers to describe the function f. For instance, if z ∈ D, then z can be written as z = (x, y) or z = x + iy or z = ρe^{iθ}. Similarly, the values Z = f(z) can be written as Z = (u, v), where u and v are real functions which depend on the real variables x and y (i.e. u = u(x, y), v = v(x, y)), or Z = u + iv or Z = Re^{iΘ}.
The usual choice is z = x + iy and Z = u(x, y) + iv(x, y), therefore a
complex function has the representation

f (z) = u(x, y) + iv(x, y). (2.4)

The functions u and v are called the real and the imaginary part of f, respectively, and they are denoted by u = Re f and v = Im f.
Since C̃ is endowed with a topology, the properties of functions defined on a topological space can be adapted for complex functions.
For instance, the notions of limit and continuity are the following.

Definition 2.3.1. Consider a function f : D ⊂ C → C and z0 ∈ D′. We say that
1. the function f has the limit l ∈ C at the point z0 ≠ ∞ if ∀ε > 0, there exists δ = δ(ε) > 0 such that for every z ∈ D with |z − z0| < δ, it follows that |f(z) − l| < ε. One writes $\lim_{z\to z_0}f(z)=l$;
2. the function f has the limit ∞ at the point z0 ≠ ∞ if ∀ε > 0, there exists δ = δ(ε) > 0 such that for every z ∈ D with |z − z0| < δ, it follows that |f(z)| > ε. One writes $\lim_{z\to z_0}f(z)=\infty$;

If we would like to encompass the case when z0, l ∈ C̃, then we have that l is the limit of the function f at the point z0 if and only if for every sequence (zn)n ⊂ D such that $\lim_{n\to\infty}z_n=z_0$, it follows that $\lim_{n\to\infty}f(z_n)=l$.

Definition 2.3.2. The function f is called continuous at z0 if the limit l exists, it is finite and $f(z_0)=\lim_{z\to z_0}f(z)$. A function f is called continuous on D if it is continuous at every point z0 ∈ D.

Now we are going to study the standard examples of complex functions.


1. The linear function f : C → C,

f (z) = az + b,

where a, b ∈ C. By writing z = x + iy, a = a1 + ia2 , b = b1 + ib2 , one obtains

f (z) = f (x + iy) = a1 x − a2 y + b1 + i(a2 x + a1 y + b2 ),

hence u = Re f = a1x − a2y + b1 and v = Im f = a2x + a1y + b2. Being polynomials, both u and v are continuous real functions, hence f is a continuous function of a complex variable on C. If we would like to extend f to C̃, then we put f(∞) := ∞ and we get that f is continuous on C̃.

2. The inverted function f : C∗ → C,
$$f(z)=\frac{1}{z},$$
where C∗ = C \ {0}. For z = x + iy ∈ C∗, one obtains by multiplication with the conjugate x − iy,
$$f(z)=\frac{1}{x+iy}=\frac{x-iy}{x^2+y^2},$$
hence $\operatorname{Re}f=\dfrac{x}{x^2+y^2}$ and $\operatorname{Im}f=\dfrac{-y}{x^2+y^2}$.
We can use the last form indicated at the beginning of the paragraph, namely z = ρe^{iθ}, Z = Re^{iΘ}. We get that
$$Z=f(z)=\frac{1}{z}=\frac{1}{\rho e^{i\theta}}=\frac{1}{\rho}e^{-i\theta},$$
i.e. the modulus of f(z) is R = 1/ρ and the argument is Θ = −θ (Figure 2.3).

Figure 2.3: The Inverse of the Complex Number z

If we would like to extend f to C̃, then we put f(0) := ∞ and f(∞) := 0.
3. The exponential function f : C → C,

3. The exponential function f : C → C,

f (z) = ez .

For z = x + iy and using Euler’s formula eiy = cos y + i sin y, one obtains

f (z) = ex (cos y + i sin y),

which means that Re f = ex cos y and Im f = ex sin y.


Since cos y and sin y are periodic functions with period 2π, we have that

f (z + 2πi) = ex (cos(y + 2π) + i sin(y + 2π)) = ex (cos y + i sin y) = f (z),

hence the exponential function f (z) = ez is periodic of period 2πi.


The previous examples represented single-valued functions of a complex
variable.There are a multiple-valued functions too, as we will see in the fol-
lowing examples.
$$f(z)=\sqrt[n]{z-a},$$
where a ∈ C. If we would like to extend f to C̃, then we put f(∞) := ∞. By the n-th root formula, for the trigonometric form z − a = ρ(cos θ + i sin θ), the n-th root is
$$\sqrt[n]{z-a}=\sqrt[n]{\rho}\left(\cos\frac{\theta+2k\pi}{n}+i\sin\frac{\theta+2k\pi}{n}\right),\quad k=\overline{0,n-1}.$$
Using the exponential form, one obtains for z − a = ρe^{iθ},
$$f(z)=\sqrt[n]{\rho}\,e^{i\frac{\theta+2k\pi}{n}},\quad k=\overline{0,n-1}.$$
Therefore, instead of a single-valued function f we have n distinct functions f0, f1, . . . , fn−1, called the branches of the multiple-valued function f, where
$$f_k(z)=f_k(\rho e^{i\theta}+a)=\sqrt[n]{\rho}\,e^{i\frac{\theta+2k\pi}{n}}. \tag{2.5}$$

Let us consider the branch fk calculated at a point z = ρe^{iθ} + a, hence fk has the form (2.5). If z describes a circle of radius ρ centered at a, in the trigonometric sense (see Figure 2.4), finally we get that
$$z=\rho e^{i(\theta+2\pi)}+a,$$
which implies that
$$f_k(z)=\sqrt[n]{\rho}\,e^{i\frac{(\theta+2\pi)+2k\pi}{n}}=\sqrt[n]{\rho}\,e^{i\frac{\theta+2(k+1)\pi}{n}}=f_{k+1}(z),$$
hence the branch fk goes to fk+1.
In the same manner, we can prove that fn−1 goes to f0, since
$$e^{i\frac{\theta+2\pi+2(n-1)\pi}{n}}=e^{i\frac{\theta}{n}+i2\pi}=e^{i\frac{\theta}{n}}.$$
The points a with this property are called algebraic branch points.
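The n branches fk and the jump fk → fk+1 can be observed numerically. A minimal sketch of (2.5) for a = 0, using Python's cmath:

import cmath

def branch(z, n, k):
    """The branch f_k(z) of the n-th root of z, as in (2.5) with a = 0."""
    rho, theta = cmath.polar(z)
    return rho**(1/n) * cmath.exp(1j * (theta + 2*cmath.pi*k) / n)

z = 1 + 1j
roots = [branch(z, 4, k) for k in range(4)]
print(roots)                   # the four values of the 4-th root of z
print([r**4 for r in roots])   # each root raised to the 4-th power gives z back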

Figure 2.4: The Circle Described by z

In order to prevent passing from one branch to another, one removes from the complex plane the cut B, that is a half-line starting at a. Therefore, the branches fk : C \ B → C are separated and well defined.
If we replace z by 1/u, the point at infinity becomes u = 0. Then
$$f\left(\frac{1}{u}\right)=\sqrt[n]{\frac{1}{u}-a}=\frac{\sqrt[n]{1-ua}}{\sqrt[n]{u}}.$$
It follows that u = 0 is another branch point of f, and a single half-line is enough to remove both branch points z = a and z = ∞.
Generally, a branch cut can be any continuous curve going from a to infinity (Figure 2.5).

Figure 2.5: A General Branch Cut



Usually, for $\sqrt[n]{z}$ one considers the cut on the positive real axis (Figure 2.6).
Figure 2.6: The Cut on the Positive Real Axis

The branch $f_0(z)=\sqrt[n]{\rho}\,e^{i\frac{\theta}{n}}$, which is the extension of the real radical function $f(x)=\sqrt[n]{x-a}$ to the complex plane, is called the principal branch of the multiple-valued function.
5. The logarithmic function. This will be defined for every z ∈ C∗ and
it will be denoted by Ln z. By definition, this function is the inverse of the
exponential function. But since the exponential function is periodic, with
period 2πi, hence it is not injective, one considers restrictions to horizontal
strips

Sk = {z = x + iy : x ∈ (−∞, +∞), y ∈ (2kπ, 2(k + 1)π), k ∈ Z}.

One can consult Figure 2.7.

On each strip Sk, the exponential is injective. The image of each horizontal straight line y = 2kπ is
$$\{e^{x+2k\pi i}:x\in(-\infty,+\infty)\}=\{e^x:x\in(-\infty,+\infty)\}=(0,\infty).$$
Therefore, on each strip Sk, e^z is bijective with the range C \ [0, ∞) and it has an inverse.

Figure 2.7: The Horizontal Strips Sk

The logarithmic function is defined by
$$\operatorname{Ln}z=Z \Leftrightarrow e^Z=z,\quad z\in\mathbb{C}^*.$$
Let Z = u + iv and z = ρe^{iθ}, hence e^{u+iv} = ρe^{iθ}, so
$$\begin{cases}e^u=\rho\\v=\theta+2k\pi\end{cases} \Leftrightarrow \begin{cases}u=\ln\rho\\v=\theta+2k\pi\end{cases} \Leftrightarrow Z=\ln\rho+i(\theta+2k\pi).$$
This implies that the logarithmic function has the form
$$\operatorname{Ln}z=\ln\rho+i(\theta+2k\pi),\ k\in\mathbb{Z},\ z=\rho e^{i\theta}, \tag{2.6}$$
meaning that Ln z is a multiple-valued function which has an infinity of branches fk : C \ R+ → Sk, fk(z) = ln ρ + i(θ + 2kπ), k ∈ Z.

As in the case of the radical function, if z describes a circle of radius ρ centered at 0 (or any closed curve that winds around 0), we will have that z = ρe^{i(θ+2π)}, hence the branch fk would jump to fk+1. The point 0 is said to be a logarithmic branch point for Ln z.
For z = 1/u, the point at infinity corresponds to u = 0 and one obtains Ln z = −Ln u, hence u = 0 is a branch point, i.e. the point at infinity is another logarithmic branch point for Ln z.
Also as in the case of the radical function, the half-line [0, ∞) is a branch cut that forbids the movement of z around 0, hence the branches fk are separated and well defined.
Similarly, the function Ln(z − a), a ∈ C, has the branch points z = a and z = ∞.
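A sketch of the branches fk of Ln z in Python; cmath.log returns one principal value (argument in (−π, π] rather than [0, 2π)), and the remaining branches differ from it by 2kπi:

import cmath

def Ln(z, k=0):
    """The branch f_k of Ln z: the principal logarithm shifted by 2*k*pi*i."""
    return cmath.log(z) + 2j * cmath.pi * k

z = 1 - 1j
for k in (-1, 0, 1):
    w = Ln(z, k)
    print(k, w, cmath.exp(w))   # e^{Ln z} = z holds on every branch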

2.4 Exercises
E 12. Find the real part of the complex number z (i.e. Re z), if
a) $z=2e^{i\frac{\pi}{2}}+\operatorname{Ln}(1-i)$;
b) $z=(1+i)^2e^{i(\pi+i)}$.
Solution. a) We have that $e^{i\frac{\pi}{2}}=\cos\frac{\pi}{2}+i\sin\frac{\pi}{2}=i$, so $2e^{i\frac{\pi}{2}}=2i$. On the other hand,
$$\operatorname{Ln}(1-i)=\ln|1-i|+i(\arg(1-i)+2k\pi),\ k\in\mathbb{Z}.$$
As $|1-i|=\sqrt{2}$ and $\arg(1-i)=2\pi-\frac{\pi}{4}=\frac{7\pi}{4}$ (one associates to the complex number 1 − i the point in the plane M(1, −1), which belongs to the 4th quadrant), it follows that
$$\operatorname{Ln}(1-i)=\ln\sqrt{2}+i\left(\frac{7\pi}{4}+2k\pi\right),\ k\in\mathbb{Z}.$$
Since $z=2i+\ln\sqrt{2}+i\left(\frac{7\pi}{4}+2k\pi\right)$, we have that $\operatorname{Re}z=\ln\sqrt{2}$.
b) First of all we get that (1 + i)² = 1 + 2i − 1 = 2i. At the same time we have that $e^{i(\pi+i)}=e^{-1}(\cos\pi+i\sin\pi)=-e^{-1}$. Then
$$z=(1+i)^2e^{i(\pi+i)}=-2ie^{-1},$$
so Re z = 0.

W 10. Find the imaginary part of the complex number z (i.e. Im z), if
a) $z=e^{-1+2\pi i}-2ie^{1-i\frac{\pi}{2}}$;
b) $z=\sqrt[3]{\dfrac{1+i\sqrt{3}}{2}}$;
c) $z=(-i)^{-i}(2i+1)$.
Answer. a) Im z = 0;
b) $\operatorname{Im}z=\sin\left(\dfrac{\pi}{9}+\dfrac{2k\pi}{3}\right)$, k ∈ {0, 1, 2};
c) $(-i)^{-i}=e^{(-i)\operatorname{Ln}(-i)}$, so $\operatorname{Im}z=2e^{\frac{3\pi}{2}+2k\pi}$, k ∈ Z.

E 13. Determine the imaginary part of function f (i.e. Im f), if
a) $f(z)=z^2-1+e^{\bar z}$;
b) $f(z)=\dfrac{e^{iz}+e^{-iz}}{z+1}$;
c) $f(z)=\sqrt[3]{z}-\operatorname{Ln}z$.
Solution. a) By taking z = x + iy, f(z) = u(x, y) + iv(x, y) and Im f = v(x, y), we have that
$$f(z)=z^2-1+e^{\bar z}=(x+iy)^2-1+e^{x-iy}=x^2-y^2+2xyi-1+e^x(\cos y-i\sin y)=x^2-y^2-1+e^x\cos y+i(2xy-e^x\sin y),$$
hence Im f = v(x, y) = 2xy − e^x sin y.


b) By taking z = x + iy, one obtains
$$f(z)=\frac{e^{i(x+iy)}+e^{-i(x+iy)}}{x+iy+1}=\frac{e^{ix-y}+e^{-ix+y}}{(x+1)+iy}=\frac{e^{-y}(\cos x+i\sin x)+e^y(\cos x-i\sin x)}{(x+1)+iy}=\frac{\cos x\,(e^y+e^{-y})+i\sin x\,(e^{-y}-e^y)}{(x+1)+iy}.$$
Using $\cosh y=\dfrac{e^y+e^{-y}}{2}$ and $\sinh y=\dfrac{e^y-e^{-y}}{2}$ (the hyperbolic cosine and sine), it follows that
$$f(z)=\frac{2\cos x\cosh y-2i\sin x\sinh y}{(x+1)+iy}.$$
By amplifying with the conjugate of (x + 1) + iy, one finds
$$f(z)=\frac{2[(x+1)-iy](\cos x\cosh y-i\sin x\sinh y)}{(x+1)^2+y^2},$$
thus
$$\operatorname{Im}f=2\,\frac{-(x+1)\sin x\sinh y-y\cos x\cosh y}{(x+1)^2+y^2}.$$

c) By taking the trigonometric form of the complex number z = ρ(cos θ + i sin θ), ρ = |z| and θ = arg z ∈ [0, 2π), it results that f(z) = u(ρ, θ) + iv(ρ, θ). Since
$$\sqrt[3]{z}=\sqrt[3]{\rho}\left(\cos\frac{\theta+2k\pi}{3}+i\sin\frac{\theta+2k\pi}{3}\right),\ k\in\{0,1,2\}$$
and
$$\operatorname{Ln}z=\ln\rho+i(\theta+2l\pi),\ l\in\mathbb{Z},$$
one obtains
$$f(z)=\sqrt[3]{z}-\operatorname{Ln}z=\sqrt[3]{\rho}\cos\frac{\theta+2k\pi}{3}-\ln\rho+i\left(\sqrt[3]{\rho}\sin\frac{\theta+2k\pi}{3}-\theta-2l\pi\right),$$
so
$$\operatorname{Im}f=\sqrt[3]{\rho}\sin\frac{\theta+2k\pi}{3}-\theta-2l\pi,\quad k\in\{0,1,2\},\ l\in\mathbb{Z}.$$

W 11. Determine the real part of function f (i.e. Re f), if
a) $f(z)=\dfrac{e^z}{z^2}$;
b) $f(z)=z^2+|z^2-1|-e^{iz}$;
c) $f(z)=z\cdot\operatorname{Ln}\bar z$.
Answer. a) $\operatorname{Re}f(x,y)=e^x\,\dfrac{(x^2-y^2)\cos y+2xy\sin y}{(x^2-y^2)^2+4x^2y^2}$;
b) $\operatorname{Re}f(x,y)=x^2-y^2+\sqrt{(x^2-y^2-1)^2+4x^2y^2}-e^{-y}\cos x$;
c) $\operatorname{Re}f(\rho,\theta)=\rho\big[\ln\rho\cos\theta-(2k\pi-\theta)\sin\theta\big]$, k ∈ Z.

E 14. Solve the following equations in C:
a) z² + (1 + i)z + i = 0;
b) $z^4-i\sqrt{3}=0$;
c) $e^{iz}-e^{-iz}=-4$.
Solution. a) We will present two methods.
Method 1. We couple the terms of the equation in order to obtain a common factor. It follows that
$$z^2+(1+i)z+i=z^2+z+i(z+1)=z(z+1)+i(z+1)=(z+1)(z+i)=0,$$
so the solutions are z1 = −1 and z2 = −i.
Method 2. By using the formula of the discriminant, we have that
$$\Delta=(1+i)^2-4i=-2i=(1-i)^2,$$
hence
$$z_1=\frac{-i-1+1-i}{2}=-i\quad\text{and}\quad z_2=\frac{-i-1-1+i}{2}=-1.$$

b) We first observe that $z^4-i\sqrt{3}=0 \Leftrightarrow z^4=i\sqrt{3}$. By extracting the root of order 4, one obtains
$$z=\sqrt[4]{|i\sqrt{3}|}\left(\cos\frac{\theta+2k\pi}{4}+i\sin\frac{\theta+2k\pi}{4}\right),\ k=\overline{0,3}.$$
We have that $\theta=\arg(i\sqrt{3})=\dfrac{\pi}{2}$, so
$$z=\sqrt[8]{3}\left(\cos\frac{\frac{\pi}{2}+2k\pi}{4}+i\sin\frac{\frac{\pi}{2}+2k\pi}{4}\right).$$
The solutions are obtained by taking k equal to 0, 1, 2 or 3:
$$z_1=\sqrt[8]{3}\left(\cos\frac{\pi}{8}+i\sin\frac{\pi}{8}\right),\ k=0;\qquad z_2=\sqrt[8]{3}\left(\cos\frac{5\pi}{8}+i\sin\frac{5\pi}{8}\right),\ k=1;$$
$$z_3=\sqrt[8]{3}\left(\cos\frac{9\pi}{8}+i\sin\frac{9\pi}{8}\right),\ k=2;\qquad z_4=\sqrt[8]{3}\left(\cos\frac{13\pi}{8}+i\sin\frac{13\pi}{8}\right),\ k=3.$$
Remark. The algebraic forms of the solutions are obtained by using standard trigonometric formulas and the fact that
$$\cos\frac{\pi}{8}=\sqrt{\frac{1+\cos\frac{\pi}{4}}{2}}=\sqrt{\frac{2+\sqrt{2}}{4}}.$$
c) Let us denote t = e^{iz}. It follows that $t-\dfrac{1}{t}=-4$, so t² + 4t − 1 = 0. Since the discriminant is ∆ = 20, we have that the solutions are $t_1=-2+\sqrt{5}$ and $t_2=-2-\sqrt{5}$.
From $e^{iz}=t_1=-2+\sqrt{5}$ we deduce that
$$iz=\operatorname{Ln}(-2+\sqrt{5})=\ln(\sqrt{5}-2)+i\cdot 2k\pi,\ k\in\mathbb{Z},$$
hence $z=2k\pi-i\ln(\sqrt{5}-2)$.
From $e^{iz}=t_2=-2-\sqrt{5}$ we deduce that
$$iz=\operatorname{Ln}(-2-\sqrt{5})=\ln(\sqrt{5}+2)+i(\pi+2k\pi),\ k\in\mathbb{Z},$$
hence $z=(2k+1)\pi-i\ln(\sqrt{5}+2)$.

W 12. Solve the following equations in C:
a) z⁴ + 4z² + 4 = 0;
b) $z^3+\sqrt{2}\,e^{i\frac{3\pi}{4}}=0$;
c) $\dfrac{e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})}=4i$.
Answer. a) z⁴ + 4z² + 4 = (z² + 2)² = 0, so $z=\pm i\sqrt{2}$ (each a double root);
b) $\sqrt{2}\,e^{i\frac{3\pi}{4}}=\sqrt{2}\left(\cos\frac{3\pi}{4}+i\sin\frac{3\pi}{4}\right)=-1+i$, hence the equation becomes z³ = 1 − i and
$$z=\sqrt[6]{2}\left(\cos\frac{\frac{7\pi}{4}+2k\pi}{3}+i\sin\frac{\frac{7\pi}{4}+2k\pi}{3}\right),\ k=\overline{0,2};$$
c) $\dfrac{e^{iz}-e^{-iz}}{i(e^{iz}+e^{-iz})}=4i \Leftrightarrow 5e^{iz}+3e^{-iz}=0 \Rightarrow e^{2iz}=-\dfrac{3}{5}$, hence
$$z=-\frac{i}{2}\ln\frac{3}{5}+(2k+1)\frac{\pi}{2},\ k\in\mathbb{Z}.$$

81
82
Chapter 3

Complex Differentiation

In 1821, in 'Cours d'analyse de l'École polytechnique', Augustin Louis Cauchy studied the convergence of series in the complex plane C. This is why he is considered the creator of the theory of analytic (holomorphic) functions. Karl Weierstrass and Bernhard Riemann brought major contributions to the development of the theory. Their points of view are somewhat different from Cauchy's (Riemann's is geometrical, Weierstrass's is arithmetical), but it was proved that they represent facets of the same thing.
Even if there are similarities between complex differentiability and real differentiability, in fact there are major differences between these two theories. Holomorphic functions are infinitely differentiable. On the contrary, even if a real function is n times differentiable, it is not necessarily n + 1 times differentiable. All holomorphic functions satisfy the stronger condition of analyticity, while even infinitely differentiable real functions can fail to be analytic.
Complex differentiation is one of the most important branches of complex
analysis, having numerous applications in theoretical physics, mechanics and
technology.

3.1 Analytic Functions


Let D ⊂ C be a domain and f : D → C a function of a complex variable.
Using the representation of a complex function given by (2.4) we have that
f (z) = u(x, y) + iv(x, y), z = x + iy.

Definition 3.1.1. We say that the function f is Fréchet differentiable at z0 ∈ D if there exist
- a linear and continuous functional Lz0 : C → C, i.e. Lz0(h) = ah, a ∈ C;
- a function ω : D × C → C with $\lim_{h\to 0}\dfrac{\omega(z_0,h)}{h}=0$,
such that for any h ∈ C, with z0 + h ∈ D, we get that
$$f(z_0+h)-f(z_0)=ah+\omega(z_0,h). \tag{3.1}$$
The number a is denoted by f′(z0) and it is called the derivative of the function f at the point z0.

Theorem 3.1.2. The function f is Fréchet differentiable at z0 = x0 +iy0 ∈ D


if and only if the real functions u and v are Fréchet differentiable at (x0 , y0 )
and they verify the Cauchy-Riemann conditions at (x0 , y0 )

∂u ∂v ∂u ∂v
(x0 , y0 ) = (x0 , y0 ), (x0 , y0 ) = − (x0 , y0 ) (3.2)
∂x ∂y ∂y ∂x

Proof. We have that a, h and ω(z, h) are complex numbers, hence they have the form
$$a=a_1+ia_2,\quad h=h_1+ih_2,\quad \omega(z,h)=\omega_1(x,y,h_1,h_2)+i\omega_2(x,y,h_1,h_2).$$
Since $|\omega|=\sqrt{\omega_1^2+\omega_2^2}$, we obtain that $0\le\left|\frac{\omega_1}{h}\right|\le\left|\frac{\omega}{h}\right|$ and $0\le\left|\frac{\omega_2}{h}\right|\le\left|\frac{\omega}{h}\right|$. This implies that
$$\lim_{h\to 0}\frac{\omega_1}{h}=0,\qquad \lim_{h\to 0}\frac{\omega_2}{h}=0. \tag{3.3}$$

Using Definition 3.1.1, (3.1), (3.3) and the fact that two complex numbers are equal if and only if their real and imaginary parts are respectively equal, one obtains the following chain of equivalent statements:
f is differentiable at z0 ⇔ f(z0 + h) − f(z0) = ah + ω(z0, h) ⇔
$$u(x_0+h_1,y_0+h_2)+iv(x_0+h_1,y_0+h_2)-u(x_0,y_0)-iv(x_0,y_0)=(a_1+ia_2)(h_1+ih_2)+\omega_1+i\omega_2 \Leftrightarrow$$
$$\begin{cases}u(x_0+h_1,y_0+h_2)-u(x_0,y_0)=a_1h_1-a_2h_2+\omega_1\\v(x_0+h_1,y_0+h_2)-v(x_0,y_0)=a_1h_2+a_2h_1+\omega_2.\end{cases}$$
We have that the previous equalities are equivalent to the fact that the real functions u and v are Fréchet differentiable at (x0, y0) and
$$\frac{\partial u}{\partial x}(x_0,y_0)=a_1,\quad \frac{\partial u}{\partial y}(x_0,y_0)=-a_2,\quad \frac{\partial v}{\partial x}(x_0,y_0)=a_2,\quad \frac{\partial v}{\partial y}(x_0,y_0)=a_1. \tag{3.4}$$
The equalities (3.4) are equivalent to
$$\frac{\partial u}{\partial x}(x_0,y_0)=a_1=\frac{\partial v}{\partial y}(x_0,y_0),\qquad \frac{\partial u}{\partial y}(x_0,y_0)=-a_2=-\frac{\partial v}{\partial x}(x_0,y_0).$$

Corollary 3.1.3. The derivative of the Fréchet differentiable function f at the point z0 can be calculated by one of the following formulas:
$$\begin{aligned}
i)\ f'(z_0)&=\frac{\partial u}{\partial x}(x_0,y_0)+i\frac{\partial v}{\partial x}(x_0,y_0);\\
ii)\ f'(z_0)&=\frac{\partial u}{\partial x}(x_0,y_0)-i\frac{\partial u}{\partial y}(x_0,y_0);\\
iii)\ f'(z_0)&=\frac{\partial v}{\partial y}(x_0,y_0)+i\frac{\partial v}{\partial x}(x_0,y_0);\\
iv)\ f'(z_0)&=\frac{\partial v}{\partial y}(x_0,y_0)-i\frac{\partial u}{\partial y}(x_0,y_0).
\end{aligned} \tag{3.5}$$
Proof. By definition, f′(z0) = a = a1 + ia2, and a1, a2 can be replaced by the partial derivatives of u and v from (3.4).

Definition 3.1.4. A function f : D → C is said to be analytic (holomorphic)


on the domain D if the function f is Fréchet differentiable at any point z ∈ D.
By Theorem 3.1.2 one obtains the following characterization of analytic
functions.

Theorem 3.1.5. A function f : D → C, f (z) = u(x, y) + iv(x, y) is analytic
on D if and only if the real functions u and v are Fréchet differentiable on
D and they verify the Cauchy-Riemann conditions on D:

$$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y},\qquad \frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}.$$

One knows from Calculus that if a real-valued function of several variables


has first order partial derivatives, then it is differentiable at every point where
these are continuous. This gives us the following sufficient condition.

Theorem 3.1.6. If the real functions u, v : D → R have continuous first order partial derivatives on D and they verify the Cauchy-Riemann conditions, then the complex function f = u + iv is analytic on D.

Remark 3.1.7. We will show later that an analytic function f is differen-


tiable to all orders and its real and imaginary parts u and v have continuous
partial derivatives to any order (u and v are of class C n , n ∈ N).

Remark 3.1.8. One can prove that the usual properties of derivatives are true for complex functions. For instance, if f and g are analytic functions on the domain D, then af + bg (a, b ∈ C), fg and f/g (for g ≠ 0) are analytic on D and
$$(af+bg)'=af'+bg',\quad (fg)'=f'g+fg',\quad \left(\frac{f}{g}\right)'=\frac{f'g-fg'}{g^2},$$
where a, b ∈ C.
If f : D1 → D2 is analytic on the domain D1 and g : D2 → C is analytic on D2, then
$$(g\circ f)'(z)=g'(f(z))f'(z).$$
The last formula is known as the chain rule.
If f : D1 → D2 is analytic on the domain D1 and |f′(z)| ≠ 0 in a neighborhood of a point z0 ∈ D1, then the inverse function f⁻¹ is well defined and analytic on a neighborhood of the point Z0 = f(z0) ∈ D2 and
$$(f^{-1})'(Z_0)=\frac{1}{f'(z_0)}.$$

Remark 3.1.9. One can show that the extensions to C of the elementary real functions are analytic and verify similar formulas. For instance,
$$(z^n)'=nz^{n-1},\quad (e^z)'=e^z,\quad (\sin z)'=\cos z,\quad (\cos z)'=-\sin z,$$
where $\sin z=\dfrac{e^{iz}-e^{-iz}}{2i}$ and $\cos z=\dfrac{e^{iz}+e^{-iz}}{2}$.
Also we have that $(f_k(z))'=\dfrac{1}{z}$, for any branch fk of Ln z.
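The Cauchy-Riemann conditions and the derivative formulas (3.5) are easy to verify symbolically. A minimal sketch with sympy for f(z) = e^z:

import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(x + sp.I*y)                 # e^z with z = x + iy
u, v = sp.re(f), sp.im(f)              # u = e^x cos y, v = e^x sin y

# Cauchy-Riemann conditions (3.2): u_x = v_y and u_y = -v_x
print(sp.simplify(u.diff(x) - v.diff(y)))   # 0
print(sp.simplify(u.diff(y) + v.diff(x)))   # 0

# Formula (3.5) i): f'(z) = u_x + i*v_x, which equals e^z again
print(sp.simplify(u.diff(x) + sp.I*v.diff(x) - f))   # 0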

3.2 Harmonic Conjugates


Definition 3.2.1. A function f : D → R, where D is an open subset of R², is said to be harmonic if it is of class C²(D) and it verifies Laplace's equation on D:
$$\Delta f=\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}=0.$$
The same definition holds for a complex function f : D → C, where D is an open subset of C.

Proposition 3.2.2. Let f : D → C, D ⊂ C be a complex function, f (z) =


u(x, y)+iv(x, y), z = x+iy. If f is analytic on D, then u and v are harmonic
functions in D.
Proof. Assume that u and v are of class C² (this is true based on Remark 3.1.7). Since u is of class C²(D), the mixed second order partial derivatives are continuous and, by Schwarz's Commutativity Criterion, we have that
$$\frac{\partial^2 u}{\partial x\partial y}=\frac{\partial^2 u}{\partial y\partial x}.$$
Due to the fact that f is analytic, u and v verify the Cauchy-Riemann conditions, so one obtains
$$\frac{\partial^2 u}{\partial x^2}=\frac{\partial}{\partial x}\left(\frac{\partial u}{\partial x}\right)=\frac{\partial}{\partial x}\left(\frac{\partial v}{\partial y}\right)=\frac{\partial}{\partial y}\left(\frac{\partial v}{\partial x}\right)=\frac{\partial}{\partial y}\left(-\frac{\partial u}{\partial y}\right)=-\frac{\partial^2 u}{\partial y^2},$$
hence $\Delta u=\dfrac{\partial^2 u}{\partial x^2}+\dfrac{\partial^2 u}{\partial y^2}=0$. Similarly, ∆v = 0.

Obviously, since u and v are harmonic, the function f = u + iv is harmonic too.
It was shown that if f = u + iv is an analytic function, then u and v are related by the Cauchy-Riemann conditions (see Theorem 3.1.5) $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$, $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$, and they are harmonic functions. In this case one says that v is a harmonic conjugate of u (and u is a harmonic conjugate of v). It can be proved that if v is a harmonic conjugate of u, then v + a is a harmonic conjugate of u, ∀a ∈ R. This is left as an exercise for the reader.

3.3 Analytic Continuation


One can deduce the following result (from a more general framework).

Theorem 3.3.1 (Theorem of Analytic Continuation). Given a continuous


function f : [a, b] ⊂ R → R, then, in some domain D ⊂ C, with [a, b] ⊂ D,
there exists only one analytic function F : D → C such that F (x) = f (x),
∀x ∈ [a, b].

The function F (z) is called the analytic continuation of the function f (x)
to the domain D.
This theorem allows us to extend elementary real functions to the complex plane. For instance, e^z = e^x(cos y + i sin y) is analytic and for z = x ∈ R, i.e. for y = 0, we have that e^z = e^x(cos 0 + i sin 0) = e^x. Therefore, e^z is the analytic continuation of e^x to C. Then, starting with Euler's formulas for the real trigonometric functions $\sin x=\dfrac{e^{ix}-e^{-ix}}{2i}$ and $\cos x=\dfrac{e^{ix}+e^{-ix}}{2}$, one can determine their analytic continuations, i.e. the complex trigonometric functions defined by
$$\sin z=\frac{e^{iz}-e^{-iz}}{2i},\qquad \cos z=\frac{e^{iz}+e^{-iz}}{2}.$$
One can easily prove that the fundamental formula sin² z + cos² z = 1 remains valid for any complex number z.

3.4 Determination of Analytic Functions
One can prove the following result.

Theorem 3.4.1. For any real harmonic function u in a simply connected


domain D, there exists a real harmonic function v in D, unique up to addition
of a constant, such that the function f = u + iv is analytic on D.

Based upon this theorem, we can solve the following problems:


Problem 1. Given a harmonic function u in a simply connected domain
D and two points z0 ∈ D and Z0 ∈ C, determine the analytic function f on
D such that
Re f = u and f (z0 ) = Z0 .
Method 1. Let z0 = x0 + iy0. Since f = u + iv is analytic, we have that u and v verify the Cauchy-Riemann conditions: $\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}$, $\frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}$.
We integrate the first equation with respect to y and we obtain that
$$v(x,y)=\int_{y_0}^{y}\frac{\partial u}{\partial x}(x,t)\,dt+C(x).$$
By imposing the second equation and by using the fact that u is a harmonic function, we can show that
$$\frac{\partial u}{\partial y}(x,y)=-\frac{\partial v}{\partial x}(x,y)=-\int_{y_0}^{y}\frac{\partial^2 u}{\partial x^2}(x,t)\,dt-C'(x)=\int_{y_0}^{y}\frac{\partial^2 u}{\partial y^2}(x,t)\,dt-C'(x),$$
hence
$$\frac{\partial u}{\partial y}(x,y)=\frac{\partial u}{\partial y}(x,y)-\frac{\partial u}{\partial y}(x,y_0)-C'(x).$$
This implies that $C'(x)=-\dfrac{\partial u}{\partial y}(x,y_0)$, so
$$C(x)=-\int_{x_0}^{x}\frac{\partial u}{\partial y}(t,y_0)\,dt+C_1,$$
where C1 is an arbitrary constant. Therefore,
$$v(x,y)=\int_{y_0}^{y}\frac{\partial u}{\partial x}(x,t)\,dt-\int_{x_0}^{x}\frac{\partial u}{\partial y}(t,y_0)\,dt+C_1$$
and v(x0, y0) = C1, which shows that v is unique up to addition of a constant C1.
From f(z) = u(x, y) + iv(x, y) one obtains Z0 = f(z0) = u(x0, y0) + iC1, hence Z0 − u(x0, y0) = iC1 and
$$f(z)=u(x,y)-u(x_0,y_0)+i\left(\int_{y_0}^{y}\frac{\partial u}{\partial x}(x,t)\,dt-\int_{x_0}^{x}\frac{\partial u}{\partial y}(t,y_0)\,dt\right)+Z_0.$$

Method 2. We will divide this method into several steps, so we can regard it as an actual algorithm.
Step 1. Calculate $f'(z)=\dfrac{\partial u}{\partial x}(x,y)-i\dfrac{\partial u}{\partial y}(x,y)$ (by (3.5) ii));
Step 2. Replace y with 0 in f′(z) (i.e. determine the restriction of f′ to R). We get that
$$f'(x)=\frac{\partial u}{\partial x}(x,0)-i\frac{\partial u}{\partial y}(x,0)=:g(x);$$
Step 3. Integrate f′(x) with respect to x. It follows that f(x) = G(x) + C, where G is a primitive of g and C is a constant;
Step 4. Replace x by z (i.e. use the analytic continuation of f(x)), so f(z) = G(z) + C;
Step 5. Replace z with z0 and obtain that Z0 = f(z0) = G(z0) + C, thus C = Z0 − G(z0).
The solution is f(z) = G(z) + Z0 − G(z0).
In the following we will show how both methods work on an example.

Example 3.4.2. Consider u(x, y) = ex cos y + x2 − y 2 . Find an analytic


function f = u + iv such that f (0) = i.
Solution. No matter what method we use we should verify that u is a
harmonic function. This is an exercise for the reader.

Method 1. As f(0) = i, it follows that x0 = 0, y0 = 0 and Z0 = i. Then
$$\begin{aligned}
f(z)&=u(x,y)-u(x_0,y_0)+i\left(\int_{y_0}^{y}\frac{\partial u}{\partial x}(x,t)\,dt-\int_{x_0}^{x}\frac{\partial u}{\partial y}(t,y_0)\,dt\right)+Z_0\\
&=e^x\cos y+x^2-y^2-u(0,0)+i\left(\int_{0}^{y}(e^x\cos t+2x)\,dt-\int_{0}^{x}(-e^t\sin 0-2\cdot 0)\,dt\right)+i\\
&=e^x\cos y+x^2-y^2-1+i\big((e^x\sin t+2xt)\big|_0^y-0\big)+i\\
&=e^x\cos y+x^2-y^2-1+i(e^x\sin y+2xy)+i\\
&=e^x(\cos y+i\sin y)+x^2-y^2+2ixy-1+i\\
&=e^z+z^2-1+i.
\end{aligned}$$

Method 2. We follow the algorithm presented before.
Step 1. Calculate
$$f'(z)=\frac{\partial u}{\partial x}(x,y)-i\frac{\partial u}{\partial y}(x,y)=e^x\cos y+2x-i(-e^x\sin y-2y);$$
Step 2. Replace y with 0 in f′(z). Then z = x + iy becomes z = x and
$$f'(x)=e^x\cos 0+2x-i(-e^x\sin 0-2\cdot 0)=e^x+2x;$$
Step 3. Integrate f′(x) with respect to x. It follows that
$$f(x)=\int(e^x+2x)\,dx=e^x+x^2+C,$$
where C is a constant;
Step 4. Replace x by z, thus f(z) = e^z + z² + C;
Step 5. Replace z with 0, so 1 + C = i and C = i − 1.
The analytic function is f(z) = e^z + z² + i − 1.
Problem 2. Given a harmonic function v in a simply connected domain
D and two points z0 , Z0 ∈ C, determine the analytic function f on D such
that
Im f = v and f (z0 ) = Z0 .
Similarly, by adapting Method 1, one obtains
Z x Z y
∂v ∂v
u(x, y) = (t, y0 )dt − (x, t)dt + C1 .
x0 ∂y y0 ∂x

91
For Method 2, we change Step 1 with the following:
∂v ∂v
Step 1. Calculate f 0 (z) = (x, y) + i (x, y) (by (3.5 iii)).
∂y ∂x
y
Example 3.4.3. Consider v(x, y) = − 2xy, (x, y) 6= (0, 0). Find an
x2 + y 2
analytic function f = u + iv such that f (i) = i.
Solution. No matter what method we use we should verify that v is a
harmonic function. This is an exercise for the reader.
Method 1. We have that
∂v x2 + y 2 − y · 2y x2 − y 2
(x, y) = − 2x = − 2x,
∂y (x2 + y 2 )2 (x2 + y 2 )2

and
∂v −y · 2x −2xy
(x, y) = 2 − 2y = − 2y.
∂x (x + y 2 )2 (x2 + y 2 )2
∂u ∂v
From the second Cauchy-Riemann condition, = − , it follows that
∂y ∂x

∂u −2xy
= 2 − 2y.
∂y (x + y 2 )2

Integrating with respect to the y variable, we get that


Z  
2xy 1
u(x, y) = 2 2 2
+ 2y dy + C(x) = −x · 2 2
+ y 2 + C(x).
(x + y ) x +y

∂u ∂v
The first Cauchy-Riemann condition, = , leads to
∂x ∂y

x2 + y 2 − 2x2 x2 − y 2
 
0
− + C (x) = 2 − 2x,
(x2 + y 2 )2 (x + y 2 )2

which is equivalent to

x2 − y 2 0 x2 − y 2
+ C (x) = − 2x,
(x2 + y 2 )2 (x2 + y 2 )2

so C 0 (x) = −2x, hence C(x) = −x2 + C1 .

92
−x
We find that u(x, y) = + y 2 − x2 + C1 and
x2
+ y2
 
−x 2 2 y
f (z) = u + iv = 2 + y − x + C1 + i − 2xy
x + y2 x2 + y 2
x − iy
=− 2 + y 2 − x2 − 2ixy + C1
x + y2
1
= − − z 2 + C1 .
z
1
By imposing the condition f (i) = i, one can find − − i2 + C1 = i, so
i
C1 = −1.
1
The analytic function is f (z) = − − z 2 − 1, z 6= 0.
z
Method 2. We follow the algorithm presented before.
Step 1. Calculate

∂v ∂v
f 0 (z) = (x, y) + i (x, y).
∂y ∂x

We have that
∂v x2 + y 2 − y · 2y x2 − y 2
(x, y) = − 2x = − 2x,
∂y (x2 + y 2 )2 (x2 + y 2 )2

and
∂v −y · 2x −2xy
(x, y) = 2 − 2y = − 2y,
∂x (x + y 2 )2 (x2 + y 2 )2
so
x2 − y 2
 
0 −2xy
f (z) = 2 − 2x + i − 2y ;
(x + y 2 )2 (x2 + y 2 )2
Step 2. Replace y with 0 in f 0 (z). Then z = x + iy becomes z = x and

x2 1
f 0 (x) = 4
− 2x + i · 0 = 2 − 2x;
x x
Step 3. Integrate f 0 (x) with respect to x. It follows that
Z  
1 1
f (x) = 2
− 2x dx = − − x2 + C,
x x

93
where C is a constant;
1
Step 4. Replace x by z, thus f (z) = − − z 2 + C;
z
1 2
Step 5. Replace z with i, so − − i + C = i and C = −1.
i
1
The analytic function is f (z) = − − z 2 − 1, z 6= 0.
z

3.5 Exercises
E 15. Find the real part of the complex number z (i.e. Re z), if
a) z = (1 + i)2 sin(π + i);
2
 π
b) z = cos i + .
4
Solution. a) We have that (1 + i)2 = 1 + 2i − 1 = 2i and

ei(π+i) − e−i(π+i) eiπ−1 − e−iπ+1


sin(π + i) = =
2i 2i
e−1 (cos π + i sin π) − e1 (cos π − i sin π))
=
2i
−1
e−e
= .
2i
e − e−1
Then z = (1 + i)2 sin(π + i) = 2i = e − e−1 ∈ R, so Re z = z = e − e−1 ;
2i
b) We have that
 i(i+ π ) π 2  −1+i π 2  −2+i π
1−i π4 2−i π2
e 4 + e−i(i+ 4 )

e 4 + e e 2 + e +2
z= = =
2 2 4
−2 π π 2 π π
e (cos 2 + i sin 2 ) + e (cos 2 − i sin 2 ) + 2
=
4
ie−2 − ie2 + 2 e−2 − e2

1
= = +i ,
4 2 4
1
so Re z = .
2

W 13. Find the imaginary part of the complex number z (i.e. Im z), if

94
 π π 
a) z = (1 − 2i) cos − sin +i ;
6 4

2
 π 2
 π
b) z = cos i + − sin i + .
6 6

Answer. a) We have that


√ π π
!
3 ei( 4 +i) − e−i( 4 +i)
z = (1 − 2i) −
2 2i
√ π π
 −1 π
 π
 !
3 cos 4
+ i sin 4
e − cos − 4
+ i sin − 4
e
= (1 − 2i) −
2 2i
√ √
2 −1

2 −1
!
3 (e − e) + i (e + e)
= (1 − 2i) − 2 2
,
2 2i

√ √
√ 3 2 −1 2
hence Im z = − 3 + e + e;
4 4
b) We have that
π π
e−2+i 3 + e2−i 3
z= ,
2

3
so Im z = − sinh 2.
2

E 16. Determine the imaginary part of function f (i.e. Im f ), if

2 cos z
a) f (z) = , z 6= −2;
z+2

b) f (z) = sin(iz̄) − cos(2z).

Solution. a) We replace first the expression of cos z into f , so

eiz + e−iz
2 cos z 2 eiz + e−iz
f (z) = = 2 = .
z+2 z+2 z+2

95
By taking z = x + iy, one obtains

ei(x+iy) + e−i(x+iy) eix−y + e−ix+y


f (z) = =
x + iy + 2 (x + 2) + iy
−y
e (cos x + i sin x) + ey (cos x − i sin x)
=
(x + 2) + iy
cos x(e + e ) + i sin x(e−y − ey )
y −y
= .
(x + 2) + iy
ey + e−y ey − e−y
Using cosh y = and sinh y = (hyperbolic sine and cosine),
2 2
it follows that
2 cos x cosh y − 2i sin x sinh y
f (z) = .
(x + 2) + iy
By amplifying with the conjugate of (x + 2) + iy, one finds
2[(x + 2) − iy](cos x cosh y − sin x sinh y)
f (z) = ,
(x + 2)2 + y 2
so
−(x + 2) sin x sinh y − y cos x cosh y
Im f = 2 ;
(x + 2)2 + y 2

b) Using the definitions of complex functions sin and cos, it follows that

e−z − ez e2iz + e−2iz


f (z) = − .
2i 2
By taking z = x + iy, then z̄ = x − iy and by replacing both into f , we
get that
e−x+iy − ex−iy e−2y+2ix + e2y−2ix
f (x, y) = −
2i 2
e−x (cos y + i sin y) − ex (cos y − i sin y)
=
2i
−2y
e (cos(2x) + i sin(2x) + e2y (cos(2x) − i sin(2x)

2
sin y(e−x + ex ) − i cos y(e−x − ex )
=
2
cos(2x)(e−2y + e2y + i sin(2x)(e−2y − e2y )
− ,
2

96
hence
cos y(e−x − ex ) sin(2x)(e−2y − e2y
Im f = − −
2 2
= cos y sinh x + sin(2x) sinh(2y).

W 14. Determine the imaginary part of function f (i.e. Im f ), if

a) f (z) = i sin z + cos(z̄ + i);


sin z
b) f (z) = , z 6= 0.
z2
Answer. a) Im f = sin x(− sinh y + sinh(1 − y));
eiz − e−iz − cos x sinh y + i sin x cosh y
b) f (z) = = , hence
2iz 2 −2xy + i(x2 − y 2 )

−2xy sin x cosh y + (x2 − y 2 ) cos x sinh y


Im f = .
(x2 + y 2 )2

E 17. Solve the following equations in C:

a) sin z = 2i;

b) cos z + i sin z = 4i.

Solution. a) Using the definition of sin z, one obtains

eiz − e−iz
= 2i ⇔ eiz − e−iz = −4.
2i
1
Denote t = eiz . It follows that t − = −4, so t2 + 4t − 1 = 0. Since
t√ √
∆ = 20, we have the solutions
√ t1 = −2 + 5 and t2 = −2 − 5.
From eiz = t1 = −2 + 5 we get that
√ √
iz = Ln (−2 + 5) = ln( 5 − 2) + i · 2kπ, k ∈ Z,

hence z = 2kπ − i ln( 5 − 2).

97

From eiz = t2 = −2 − 5 we get that
√ √
iz = Ln (−2 − 5) = ln( 5 + 2) + i(π + 2kπ), k ∈ Z,

hence z = (2k + 1)π − i ln( 5 + 2);
b) Using the definitions of sin and cos, the equation becomes

eiz + e−iz eiz − e−iz


+i = 4i,
2 2i
π
which leads to eiz = 4i and to the solution z = −i ln 4 + + 2kπ, k ∈ Z.
2

W 15. Solve the following equations in C:

a) cos z = 2;
π
b) tan z = 4i, z 6= + kπ, k ∈ Z.
2

Answer. a) z = −i ln(2 ± 3) + 2kπ, k ∈ Z;
eiz − e−iz 3
b) We have that iz −iz
= 4i ⇔ 5eiz + 3e−iz = 0 ⇒ e2iz = − ,
i(e + e ) 5
hence
1 3 π
z = − i ln + (2k + 1) , k ∈ Z.
2 5 2

E 18. Verify if the following functions are analytic (holomorphic) in C:

a) f (z) = zez ;

b) f (z) = cos(z̄).

Solution. For both exercises, in order to study the analiticity of f in C,


we can verify if the real and the imaginary parts of f are differentiable and
if the Cauchy-Riemann conditions (3.2) are fulfilled:

∂u ∂v ∂u ∂v
(x0 , y0 ) = (x0 , y0 ), (x0 , y0 ) = − (x0 , y0 )
∂x ∂y ∂y ∂x

at every (x0 , y0 ) ∈ R2 .

98
a) Taking z = x + iy, we have that

f (z) = (x + iy)ex (cos y + i sin y) = ex (x cos y − y sin y) + iex (x sin y + y cos y),

hence
u(x, y) = Re f = ex (x cos y − y sin y)
and
v(x, y) = Im f = ex (x sin y + y cos y).
Obviously, u and v are differentiable on R2 .
Now let us compute the partial derivatives necessary for Cauchy-Riemann
conditions (3.2) at an arbitrary point (x, y) ∈ R2 . We obtain that

∂u
= ex (x cos y − y sin y) + ex cos y = ex (x cos y + cos y − y sin y),
∂x
∂v
= ex (x cos y + cos y − y sin y),
∂y
∂u
= ex (−x sin y − sin y − y cos y),
∂y
∂v
= ex (x sin y + y cos y) + ex sin y = ex (x sin y + sin y + y cos y).
∂x
∂u ∂v
From the first two equalities it follows that = (i.e. the first
∂x ∂y
Cauchy-Riemann condition is satisfied) and from the last two, one obtains
∂u ∂v
+ = 0 (i.e. the second Cauchy-Riemann condition is true). Since
∂y ∂x
both Cauchy-Riemann conditions are verified at every point (x, y) ∈ R2 , the
function f is analytic in C;
b) Taking z = x + iy, we have that

ei(x−iy) + e−i(x−iy)
f (z) = cos(x + iy) = cos(x − iy) =
2
eix+y + e−ix−y ey (cos x + i sin x) + e−y (cos x − i sin x)
= =
2 2
ey + e−y ey − e−y
= cos x + i sin x = cos x cosh y + i sin x sinh y,
2 2

99
hence

u(x, y) = Re f = cos x cosh y, v(x, y) = Im f = sin x sinh y,

and both are differentiable on R2 .


∂u ∂v
If one calculates = − sin x cosh y and = sin x cosh y, obviously
∂x ∂y
∂u ∂v
6= , so the first Cauchy-Riemann condition is not verified at every
∂x ∂y
point (x, y) ∈ R2 , hence the function f is not analytic in C.

W 16. Verify if the following functions are analytic (holomorphic) in C:


2
a) f (z) = ez + cos z;

b) f (z) = z − sinh(z + 2z̄).


Answer. a) f is analytic in C;
b) f is not analytic in C.

E 19. Determine the analytic (holomorphic) function f , if


a) Re f = x cos y cosh x − y sin y sinh x, f (0) = 1;
y
b) Im f = y + arctan , (x, y) 6= (0, 0), f (i) = i.
x
Solution. For both exercises we will use Method 2 from Section 3.4 (since
it is faster). One is invited to solve the same exercises using also Method 1
(from the same section).
As a consequence of Theorem 3.4.1, one cannot determine an analytic
function f unless the given functions Re f , respectively Im f are harmonic;
so, first of all, we have to check this.
a) Let us prove that u(x, y) = Re f = x cos y cosh x − y sin y sinh x is a
∂ 2u ∂ 2u
harmonic function. For that, we calculate ∆u = + . Since
∂x2 ∂y 2
∂u
= cos y(cosh x + x sinh x) − y sin y cosh x,
∂x
∂u
= −x sin y cosh x − sinh x(sin y + y cos y),
∂y

100
∂ 2u
= 2 cos y sinh x + x cos y cosh x − y sin y sinh x,
∂x2
∂ 2u
= −x cos y cosh x − 2 cos y sinh x + y sin y sinh x,
∂y 2
we get that ∆u = 0, hence the necessary condition that u is harmonic is
fulfilled.
∂u ∂u
Step 1. Calculate f 0 (z) = (x, y) − i (x, y) (by (3.5) ii)). We have
∂x ∂y
that
f 0 (z) = cos y(cosh x + x sinh x) − y sin y cosh x−
= i(−x sin y cosh x − sinh x(sin y + y cos y));
Step 2. Replace y with 0 in f 0 (z).Then z = x + iy becomes z = x and
f 0 (x) = cosh x + x sinh x − i · 0 = cosh x + x sinh x.
Step 3. Integrate f 0 (x) with respect to x. It follows that
Z Z
f (x) = (cosh x + x sinh x)dx = sinh x + x sinh xdx.

Using integration by parts for the integral above, we obtain that


Z Z
x sinh xdx = x(cosh x)0 dx
Z
= x cosh x − cosh xdx

= x cosh x − sinh x + C,
where C is a constant. So
f (x) = sinh x + x cosh x − sinh x + C = x cosh x + C;
Step 4. Replace x by z, thus f (z) = z cosh z + C;
Step 5. Replace z with 1, so C = 1.
The analytic function is f (z) = z cosh z + 1.
y
b) Let us prove that v(x, y) = Im f = y + arctan is a harmonic
x
2 2
∂ v ∂ v
function. For that, we calculate ∆v = + . Since
∂x2 ∂y 2
∂v 1  y 0 −y
=  y 2 · = 2 ,
∂x x x x + y2
1+
x

101
∂v 1  y 0 x
=1+  y 2 · =1+ 2 ,
∂y x y x + y2
1+
x
∂ 2v y 2xy
2
= 2 2 2
· 2x = 2 ,
∂x (x + y ) (x + y 2 )2
∂ 2v −x −2xy
2
= 2 2 2
· 2y = 2 ,
∂y (x + y ) (x + y 2 )2
we get that ∆v = 0, hence the necessary condition that v is harmonic is
fulfilled.
∂v ∂v
Step 1. Calculate f 0 (z) = (x, y) + i (x, y) ((3.5) iii)). We have that
∂y ∂x
x y
f 0 (z) = 1 + −i 2 ;
x2 +y 2 x + y2

Step 2. Replace y with 0 in f 0 (z). Then z = x + iy becomes z = x and

1
f 0 (x) = 1 + ;
x
Step 3. Integrate f 0 (x) with respect to x. It follows that
Z  
1
f (x) = 1+ dx = x + ln x + C,
x

where C is a constant;
Step 4. Replace x by z, thus f (z) = z + Ln z + C;
Step 5. Replace z with i, so C = −Ln i.
The analytic function is f (z) = z + Ln z − Ln i, z 6= 0.

W 17. Determine the analytic (holomorphic) function f , if

a) Re f = 3x2 y − y 3 + ln(x2 + y 2 ), f (1) = 0;

b) Re f = x sin y cosh x + y cos y sinh x, f (0) = −i;


2 −y 2
c) Re f = ex cos(2xy), f (0) = 0;
ay
d) Im f = , (x, y) 6= (0, 0), a 6= 0, f (i) = 1;
x2 + y2

102
e) Im f = ex+y sin(x − y), f (0) = i;

f) Im f = ex (2xy cos y + (x2 − y 2 ) sin y), f (0) = 0;

g) Re f = ϕ(x2 − y 2 ), ϕ ∈ C 2 ;
y
h) Im f = ϕ , (x, y) 6= (0, 0), ϕ ∈ C 2 ;
x
i) Re f = x2 − y 2 + ex ϕ(y), ϕ ∈ C 2 ;

j) Re f + Im f = ϕ(x2 + y 2 ), ϕ ∈ C 2 ;

k) Re f + ϕ(Im f ) = x2 − y 2 , ϕ ∈ C 2 .

Answer. a) f (z) = 2Ln z − iz 3 − 1, if one considers the principal branch


of the logarithm;
b) f (z) = −i(z sinh z + 1);
2
c) f (z) = ez ;
a
d) f (z) = − + 1 − ai;
z
e) Since
ex (sin x + cos x)
Z
ex cos dx =
2
and
ex (sin x − cos x)
Z
ex sin dx = ,
2
one can find
ez (sin z − cos z) ez (sin z + cos z)
f (z) = (1 + i) + (i − 1) + i + 1;
2 2

f) f (z) = z 2 ez ;
g) Denote by t = x2 − y 2 . Let us impose u(x, y) = Re f to be a harmonic
function. We have that
∂u ∂u
= 2xϕ0 (t), = −2yϕ0 (t),
∂x ∂y

103
and
∂ 2u 2 00 0 ∂ 2u
= 4x ϕ (t) + 2ϕ (t), = 4y 2 ϕ00 (t) − 2ϕ0 (t).
∂x2 ∂y 2
This implies that ∆u = 4tϕ00 (t) = 0, hence ϕ00 (t) = 0 and integrating with
respect to t, it follows that ϕ(t) = c1 t + c2 , where c1 and c2 are arbitrary real
constants.
If the function ϕ has not the specified above form, we can not find the
analytic function f .
If ϕ(t) = c1 t + c2 , then u(x, y) = c1 (x2 − y 2 ) + c2 and one can find
f (z) = c1 z 2 + C, C ∈ C;
y
h) Denote t = . Let us impose v(x, y) = Imf to be a harmonic function.
x
Then
∂v  y  0 ∂v 1
= − 2 ϕ (t), = ϕ0 (t)
∂x x ∂y x
and
∂ 2v y 2 00 2y 0 ∂ 2v 1
2
= 4 ϕ (t) + 3 ϕ (t), 2
= 2 ϕ00 (t).
∂x x x ∂y x
2
1 y
  2y
This implies that ∆u = 2 2 + 1 ϕ00 (t) + 3 ϕ0 (t) = 0 or (t2 + 1)ϕ00 (t) +
x x x
2tϕ0 (t) = 0.
ϕ00 (t) 2t
The last equation is equivalent to 0 =− 2 and integrating with
ϕ (t) t +1
respect to t, it follows that ln ϕ0 (t) = − ln(t2 + 1) + c1 , hence ln ϕ0 (t) =
c1 c1
ln 2 and ϕ0 (t) = 2 . Integrating once again with respect to t, one
t +1 t +1
obtains ϕ(t) = c1 arctan t + c2 , where c1 , c2 are arbitrary real constants.
If the function ϕ has not the specified above form, we can not find the
analytic function f .
y
If ϕ(t) = c1 arctan t + c2 , then v(x, y) = c1 arctan + c2 and one can find
x
f (z) = c1 Ln z + C, C ∈ C;
i) Let us impose u(x, y) = Re f to be a harmonic function. It follows that
∆u = ex (ϕ00 (y) + ϕ(y)) = 0, hence ϕ00 (y) + ϕ(y) = 0.
One recognizes this form as a differential equation of order 2, linear,
homogenous and with constant coefficients and having the solution ϕ(y) =
c1 cos y + c2 sin y, where c1 , c2 are arbitrary real constants.
If the function ϕ has not the specified above form, we can not find the
analytic function f .

104
If ϕ(y) = c1 cos y + c2 sin y, it follows that
u(x, y) = x2 − y 2 + ex (c1 cos y + c2 sin y)
and f (z) = z 2 + ez (c1 − ic2 ) + C, C ∈ C;
j) Denote t = x2 + y 2 . Using the hypotheses u(x, y) + v(x, y) = ϕ(t), one
can determine the following:
∂u ∂v ∂u ∂v
+ = 2xϕ0 (t), + = 2yϕ0 (t),
∂x ∂x ∂y ∂y
∂ 2u ∂ 2v 2 00 0 ∂ 2u ∂ 2v
+ = 4x ϕ (t) + 2ϕ (t), + = 4y 2 ϕ00 (t) + 2ϕ0 (t).
∂x2 ∂x2 ∂y 2 ∂y 2
It follows that ∆u + ∆v = 4tϕ00 (t) + 4ϕ0 (t). But, the functions u and v are
both harmonic functions, so ∆u = 0 and ∆v = 0, hence
ϕ00 (t) 1
tϕ00 (t) + ϕ0 (t) = 0 ⇔ 0
= − ⇔ ln ϕ0 (t) = − ln t + c1 ,
ϕ (t) t
so
c1
ϕ0 (t) = ⇒ ϕ(t) = c1 ln t + c2 ,
t
where c1 , c2 are arbitrary real constants.
So, u + v = c1 ln(x2 + y 2 ) + c2 . We derive this with respect to x and with
respect to y. One obtains
∂u ∂v 2c1 x

 + = 2
x + y2

∂x ∂x
∂u ∂v 2c1 y

 + = 2
∂y ∂y x + y2
Since the function f is analytic, using the Cauchy-Riemann conditions
(3.2), it follows that
∂u ∂u 2c1 x

 − = 2
x + y2

∂x ∂y
∂u ∂u 2c1 y

 + = 2
∂y ∂x x + y2
∂u ∂u
By solving this system with the unknowns and , one obtains
∂x ∂y
∂u c1 (x + y) ∂u c1 (y − x)
= 2 2
, = 2
∂x x +y ∂y x + y2

105
and
∂u ∂u c1 (x + y) c1 (y − x)
f 0 (z) = −i = 2 2
−i 2 .
∂x ∂y x +y x + y2
It follows that f (z) = c1 (1 + i)Ln z + C, C ∈ C;
c1
k) f (z) = (1 + i)z 2 2 + C, c1 ∈ R, C ∈ C.
c1 + 1

E 20. Prove that there is no analytic (holomorphic) function f such that


2 2
Im f = ex −y (x2 − y 2 ).

Solution. We will prove that ∆v 6= 0, v = Im f . Since


∂v 2 2 ∂v 2 2
= 2xex −y (x2 − y 2 + 1), = −2yex −y (x2 − y 2 + 1),
∂x ∂y

∂ 2v 2 2

2
= 2ex −y (2x4 − 2x2 y 2 + 5x2 − y 2 + 1),
∂x
∂ 2v 2 2

2
= 2ex −y (−2y 4 + 2x2 y 2 + 5y 2 − x2 − 1),
∂y
2 2
one obtains ∆v = 4ex −y (x2 +y 2 )(x2 −y 2 +2) 6= 0, if x2 −y 2 +2 6= 0, hence the
function u is not harmonic on R2 and, as a consequence of Theorem 3.4.1,
one can not determine an analytic function f , unless Im f is a harmonic
function.

106
Chapter 4

Complex Integration
Z
Complex integrals f (z)dz are line integrals in the complex plane, where
Γ
f is a single-valued complex function defined on an open subset of C and it
is integrated over a piecewise smooth curve in the complex plane.
The most important relations between the analiticity of a function and
the value of a specific integral over a closed contour are given by the Cauchy
theorems (1814, 1825, 1831).
Complex line integrals are used, for example, in quantum mechanics, in
determining probability amplitudes in quantum scattering theory etc..

4.1 The Complex Line Integral


Let D be an open subset of C and f : D ⊂ C → C be a single-valued
complex function. According to (2.4) we have that f (z) = u(x, y) + iv(x, y),
z = x + iy.
Consider Γ ⊂ D to be a piecewise smooth curve with the parametrization
Γ : (x = x(t), y = y(t)), t ∈ [a, b]. The functions x(t) and y(t) are piecewise
C 1.
Since a complex number can be written z = (x, y) or z = x + iy, then Γ
has the equivalent parametrization z = z(t) = x(t) + iy(t), t ∈ [a, b].
Consider a partition δ = (a = t0 < t1 < · · · < tj−1 < tj < tj+1 < · · · <
tn = b) of the interval [a, b] and a set (τj ) of intermediary points τj ∈ [tj−1 , tj ],
j = 1, n.

107
We associate to f , δ and (τj ) the integral sum
n
X
σδ (f ) = f (ζj )(zj − zj−1 ),
j=1

where
zj = z(tj ) = x(tj ) + iy(tj ) =: xj + iyj
and
ζj = z(τj ) = x(τj ) + iy(τj ) =: ξj + iηj .

Figure 4.1: The Definition of the Line Integral

The norm of the partition δ is defined to be ν(δ) = max (tj − tj−1 ). A


1≤j≤n
partition δ 0 is said to be finer than δ, and one writes δ 0  δ, if δ 0 contains all
the points tj of δ.
Now let us consider sequences of partitions (δk )k∈N∗ with lim ν(δk ) = 0
k→∞
and such that δk+1  δk , ∀k ∈ N∗ .
Definition 4.1.1. If for any sequence of partitions (δk )k∈N∗ and for any sets
of intermediary points (τjk )k∈N∗ the sequence of the integral sums σδk (f ) has
a finite limit I, then f is said Zto be integrable over the curve Γ.
The limit I is denoted by f (z)dz and it is called the (line) integral of
Γ
the function f over the curve Γ.

108
Theorem 4.1.2. If f is piecewise continuous function on Γ, then f is inte-
grable over Γ and
Z Z Z
f (z)dz = udx − vdy + i vdx + udy. (4.1)
Γ Γ Γ

Proof. Let us calculate the integral sums; the indices k are omitted, except
in δk . We have that
n
X
σδk (f ) = f (ζj )(zj − zj−1 )
j=1
Xn
= [u(ξj , ηj ) + iv(ξj , ηj )][xj − xj−1 + i(yj − yj−1 )]
j=1
Xn
= [u(ξj , ηj )(xj − xj−1 ) − v(ξj , ηj )((yj − yj−1 )]+
j=1
Xn
+i [u(ξj , ηj )(yj − yj−1 ) + v(ξj , ηj )((xj − xj−1 )].
j=1

Here we have the integral sums for two real line integrals over Γ. Since
u and v are piecewise continuous on Γ, they are integrable over Γ, hence the
limits of the
Z integral sums as Z k → ∞ exist, are finite and they are the line
integrals udx − vdy and vdx + udy. It follows that lim σδk (f ) exists,
Γ k→∞
Z Γ
is finite and it is equal to f (z)dz.
Γ
One obtains (4.1) by taking the limit as k → ∞ in the equalities above.

Remark 4.1.3. Relation (4.1) shows that the complex integral is a pair
of real line integrals. This implies that the complex integral inherits the
properties of the real one. For instance we have that
Z Z Z
f (z)dz = f (z)dz + f (z)dz,
Γ1 ∪Γ2 Γ1 Γ2
Z Z Z
(af (z) + bg(z))dz = a f (z)dz + b g(z)dz,
Γ Γ Γ

109
where a, b ∈ C, Z Z
f (z)dz = − f (z)dz,
BA AB
where AB is an arc of a curve.
The length of a curve Γ is denoted by lΓ and it is defined as the supremum
of the lengths of all inscribed polygonal paths. One can show that
Z b
lΓ = |z(t)|dt,
a

for Γ a piecewise C 1 curve, z = z(t), t ∈ [a, b].


Proposition 4.1.4. If Γ is a piecewise C 1 curve, f is an integrable function
over Γ and there exists M > 0 such that |f (z)| ≤ M , for every z ∈ Γ, then
Z

f (z)dz ≤ M lΓ . (4.2)

Γ

Proof. Consider a sequence of integral sums σδk (f ). Since τj ∈ Γ and


|f (z)| ≤ M,
then

Xn
|σδk (f )| = f (ζj )(zj − zj−1 )


j=1
n
X
≤ |f (ζj )| |zj − zj−1 |
j=1
n
X
≤M |zj − zj−1 | .
j=1

Using the fact that |zj − zj−1 | is the distance between the points zj and
n
X
zj−1 , we have that |zj − zj−1 | is the length of the polygonal line corre-
j=1
sponding to the partition δk (see Figure 4.2).
These lengths form an increasing sequence and their limit is lΓ . Taking
the limit as k → ∞ (with δk+1  δk , ∀k ∈ N∗ and lim ν(δk ) = 0), one
k→∞
obtains (4.2) from the above inequality.

110
Figure 4.2: The Partition δ

Proposition 4.1.5. If Γ is a piecewise class C 1 curve and (fn )n∈N∗ is a


sequence of piecewise class C 1 functions on Γ which converge uniformly on
Γ to the function f , then
Z Z
lim fn (z)dz = f (z)dz. (4.3)
n→∞ Γ Γ

Proof. Let Mn := max |fn (z) − f (z)|. Since fn → f uniformly on Γ, then


zj ∈Γ
lim Mn = 0. Then, using (4.2), we get that
n→∞
Z Z Z

f (z)dz − fn (z)dz = (fn (z) − f (z))dz ≤ M lΓ ,

Γ Γ Γ

and (4.2) is obtained by taking the limit as n → ∞.

4.2 Cauchy’s Theorems


Two important relations between the analiticity of a function and the
value of a specific integral over a closed contour are pointed out in this
section.
Theorem 4.2.1 (Cauchy’s Fundamental Theorem). If f is an analytic func-
tion on a simply connected domain D and Γ ⊂ D is a closed contour, then
I
f (z)dz = 0. (4.4)
Γ

111
Proof. Let f (z) = u(x, y) + iv(x, y), z = x + iy, u, v ∈ C 1 (D) and ∆ the
domain bounded by Γ. (Figure 4.3)

Figure 4.3: The Domain ∆ Bounded by Γ

One considers the trigonometric sense on any closed contour. By (4.1)


we have that
I I I
f (z)dz = udx − vdy + i vdx + udy.
Γ Γ Γ

We will use the Green-Riemann formula,


I ZZ  
∂Q ∂P
P dx + Qdy = − dxdy,
Γ ∆ ∂x ∂y
for both real line integrals (the condition that u and v are of class C 1 on a
neighborhood of ∆ holds), hence
I ZZ   ZZ  
∂v ∂u ∂u ∂v
f (z)dz = − − dxdy + i − dxdy.
Γ ∆ ∂x ∂y ∆ ∂x ∂y
By hypothesis, f is analytic, hence u and v verify the Cauchy-Riemann
conditions on ∆ ⊂ D:
∂v ∂u ∂v ∂u
=− , = ,
∂x ∂y ∂y ∂x
which imply that
∂v ∂u ∂u ∂v
− − = 0, − = 0.
∂x ∂y ∂x ∂y
I
Therefore both double integrals are equal to 0 and f (z)dz = 0.
Γ

112
The following similar result holds true.

Proposition 4.2.2. If f is an analytic function in a simply connected do-


main D bounded by the closed
I contour Γ and it is continuous on the closed
domain D = D ∪ Γ, then f (z)dz = 0.
Γ

Now let us consider the case of n-uply connected domains (Figure 4.5
(a)).

Theorem 4.2.3 (Cauchy’s Theorem for Multiply Connected Domains). If


f is an analytic function in a multiply connected domain D bounded from
without by the closed contour Γ and from within by the contours Γ1 , Γ2 , . . . , Γn
and f is continuous on the closed domain D, then
I n I
X
f (z)dz = f (z)dz. (4.5)
Γ j=1 Γj

Proof. We will use an induction type argument. First consider the case n = 1
(Figure 4.4 (a)).

(a) The Doubly Connected Do- (b) The Simply Connected Domain
main D D \ AB

Figure 4.4: Transformation of a Doubly Connected Domain into a Simply


Connected One

Since D is a doubly connected domain, we cannot use Theorem 4.2.1


(Cauchy’s Fundamental Theorem). We transform D into the simply con-
nected domain D \ AB by considering a cut AB (an arc of a curve AB
(Figure 4.4 (b)).

113
Instead of Γ, the boundary of D \ AB is ∂(D \ AB) = Γ ∪ AB ∪ Γ−1 ∪ BA,

where the sign − on Γ1 indicates that Γ1 is traversed in clockwise sense
(because the trigonometric sense for ∂(D \ AB) is the sense for which the
domain is always on the left).
Using Proposition 4.2.2, Iwe can now apply Theorem 4.2.1 (Cauchy’s Fun-
damental Theorem), hence f (z)dz = 0, i.e.
Γ
I Z I Z
f (z)dz + f (z)dz + f (z)dz + f (z)dz = 0.
Γ AB Γ1 − BA
Z Z I I
Since f (z)dz = − f (z)dz and f (z)dz = − f (z)dz, one
BA AB Γ1 − Γ1
obtains I I
f (z)dz = f (z)dz,
Γ Γ1

so (4.5) is true for n = 1.


Assume that formula (4.5) is true for n − 1 and let us consider the case
of n inner contours Γj (Figure 4.5 (a)). We will apply an induction type
argument, thus we begin by isolating the curve Γn by a cut AB (Figure 4.5
(b)). Z
Since Γ = BM A ∪ AN B and we can add and subtract f (z)dz and
AB
we obtain that
I Z Z
f (z)dz = f (z)dz + f (z)dz
Γ ZBM A ZAN B Z Z
= f (z)dz + f (z)dz + f (z)dz + f (z)dz
IBM A AB
I BA AN B

= f (z)dz + f (z)dz.
BM AB BAN B

Finally, we can apply the induction assumption for the first integral and
the case n = 1 for the second integral, hence
I n−1 I
X I
f (z)dz = f (z)dz + f (z)dz
Γ j=1 Γj Γn

Xn I
= f (z)dz.
j=1 Γj

114
(a) The n-uply Connected Domain D

(b) The Simply Connected Domain D \ AB

Figure 4.5: Transformation of a n-uply Connected Domain into a Simply


Connected One

I
Example 4.2.4. Let us calculate the integrals In = (z − a)n dz, where
Γ
n ∈ Z and a ∈ C is an interior point of the domain bounded by the closed
contour Γ (Figure 4.6 (a)).
Case I. If n ≥ 0, then the function (z − a)n is a polynomial, hence an
analytic function in
I C. By Theorem 4.2.1 (Cauchy’s Fundamental Theorem),
the integral In = (z − a)n dz = 0.
Γ I
1 1
Case II. If n = −1, i.e. I1 = dz, then the function is
Γ z −a z−a
analytic in the domain D \ {a} which is doubly connected ({a} is a lacuna).
Consider a circle of radius r centred at a, denotedI Γ1 (Figure 4.6
I (b)).
1 1
By Theorem 4.2.3, case n = 1, we have that dz = dz.
Γ z −a Γ1 z − a

115
(a) The Point a is an In- (b) A Circle Centred in a, Included
terior Point of D in D

Figure 4.6: Suggestive Drawings for Computing the Integrals In

Being a circle, Γ1 has the parametrization z − a = reiθ , θ ∈ [0, 2π), hence


Z 2π Z 2π
rieiθ
I
1
dz = dθ = i dθ = 2πi,
Γ z −a 0 reiθ 0

which is a very important constant that appears in many formulas.


Case III. If n < −1, then let us denote −n by m > 1. Then, as in Case
II, we have that
I I I
n dz dz
In = (z − a) dz = m
= m
Γ Γ (z − a) Γ1 (z − a)
Z 2π Z 2π
rieiθ dθ i
= m (eiθ )m
= m−1 e−i(m−1)θ dθ
0 r r 0
i 1 2π
i(1−m)θ 1
= m−1 · e = m−1 (ei(1−m)2π − e0 ).
r i(1 − m) 0 r (1 − m)
Since the exponential eiz is periodic of period 2π, we have that ei(1−m)2π = e0 ,
so In = 0.
We will see that analiticity is a strong property which establishes a con-
nection between the values of an analytic function on a closed contour and
the values of the function and its derivatives of any order in the domain
bounded by that contour.
Theorem 4.2.5 (Cauchy’s Integral Formula). Let f be an analytic function
in a simply connected domain D and Γ ⊂ D a closed contour. Denote by ∆
the domain bounded by Γ. Then, for every a ∈ ∆,

116
I
1 f (z)
f (a) = dz. (4.6)
2πi Γ z−a

Proof. Consider an arbitrary constant  > 0, fixed for the entire proof.
Since f is analytic in D, it is continuous
 in D, hence f is continuous at
the point a ∈ D. Then there exists δ = δ > 0 such that ∀z ∈ D with


|z − a| < δ we have that |f (z) − f (a)| < .

Let r be arbitrary and fixed such that 0 < r < δ and let Γr be the circle
of radius r centred in a (Figure 4.7 (b)).

(a) The Point a Is an Interior Point of


∆⊂D

(b) The Circle Γr of Radius r Centered in


a

Figure 4.7: Suggestive Drawings for the Proof of the Theorem 4.2.5

Then, for any z ∈ Γr , we obtain that |z −a| = r < δ, hence |f (z)−f (a)| <

. It follows that

f (z) − f (a) 
z − a < 4πr .

117
Using Proposition 4.1.4 we have that
Z
f (z) − f (a) ≤  lΓr =  2πr =  < .

dz

Γr z−a 4πr 4πr 2

Let us summarize what we have done so far. We have proved the following
statement: ∀ > 0, ∃δ > 0 such that ∀r > 0 with |r − 0| < δ we have that
I
f (z) − f (a)
dz − 0 < ,

Γr z−a

f (z) − f (a)
I
that is lim dz = 0.
r→0 Γ
r
z−a
f (z)
Now the function is analytic in the doubly connected domain given
z−a
by ∆ \ {a}. Using Theorem 4.2.3 we obtain that

f (z) − f (a) + f (a)


I I I
f (z) f (z)
dz = dz = dz
Γ z −a Γr z − a Γr z−a
f (z) − f (a)
I I
1
= dz + f (a) dz.
Γr z−a Γr z − a

The last integral is equal to 2πi (see Example 4.2.4), hence, by taking the
limit as r → 0, we get that

f (z) − f (a)
I I
f (z)
lim dz = lim dz + lim 2πif (a),
r→0 Γ z − a r→0 Γ z−a r→0
r

so I
f (z)
dz = 0 + 2πif (a) = 2πif (a). (4.7)
Γ z−a
The last formula is equivalent to (4.6) and the proof is done.

Remark 4.2.6. Cauchy’s Integral Formula (4.6) shows that the values of an
analytic function f over a closed contour Γ determine the values of f in the
whole domain bounded by Γ.
From a practical point of view, (4.7) gives a method to calculate integrals
with the indicated structure simply by multiplying the value of the numerator
at a with 2πi.

118
Remark 4.2.7. The function f being analytic in D, it is continuous in D.
Therefore lim f (u) = f (a). By (4.7) we have that
u→a
I I
f (z) f (z)
lim dz = 2πi lim f (u) = 2πif (a) = dz,
u→a Γ z−u u→a Γ z−a
i.e. Cauchy’s integral is continuous at a.

Theorem 4.2.8 (Cauchy’s Integral Formula for Derivatives). Let f be an


analytic function in a simply connected domain D and Γ ⊂ D a closed con-
tour. Denote by ∆ the domain bounded by Γ. Then, ∀a ∈ ∆, ∀n ∈ N∗ we
get that I
(n) n! f (z)
f (a) = dz. (4.8)
2πi Γ (z − a)n+1
Proof. We will perform the proof by induction. First we consider the case
when n = 1. By Definition 3.1.1, the derivative of f at a verifies (3.1), i.e.

f (a + h) − f (a) = f 0 (a)h + ω(a, h),

ω(a, h)
where lim = 0. Thus the derivative can be calculated by the usual
h→0 h
formula
f (a + h) − f (a)
f 0 (a) = lim .
h→0 h
Let us calculate the above ratio by using the integral representation (4.6)
of f . We obtain the following:
 
f (a + h) − f (a)
I I
1 1 f (z) 1 f (z)
= dz − dz
h h 2πi Γ z − a − h 2πi Γ z − a
I  
1 1 1
= f (z) − dz
2πih Γ z−a−h z−a
I
1 h
= f (z) dz.
2πih Γ (z − a)(z − a − h)

Taking the limit as h → 0 (the limit exists since f is analytic (see Remark
4.2.7)) we have that
I
0 1! h
f (a) = f (z) dz,
2πi Γ (z − a)2

119
i.e. (4.8) is true for n = 1.
Now we prove the induction step. Let us assume that (4.8) is true for
k − 1 and we will show that it is true for k. Since f (k) (z) = (f (k−1) (z))0 , as
in case n = 1, f (k) (z) is the limit, as h → 0, of the ratio
f (k−1) (a + h) − f (k−1) (a)
.
h
Using the induction assumption, this ratio is equal to
 
1 (k − 1)! (k − 1)!
I I
f (z) f (z)
k
dz − k
dz =
h 2πi Γ (z − a − h) 2πi Γ (z − a)
 
(k − 1)!
I
1 1
= f (z) − dz.
2πih Γ (z − a − h)k (z − a)k
By the binomial formula
   
k k k k−1 k
(A − B) = A − A B + ··· + (−1)k B k
1 k
for ((z − a) − h)k one obtains
1 1 N
k
− k
= ,
(z − a − h) (z − a) (z − a − h)k (z − a)k
where
     
k kk k−1 k k−2 2 k
N = (z−a) −(z−a) + (z−a) h− (z−a) h +· · ·+ (−1)k hk ,
1 2 k
 
k
k−1 2
i.e. N = k(z − a) h + h C(h), where lim C(h) = − (z − a)k−2 . We can
h→0 2
reduce h and we obtain that
f (k−1) (a + h) − f (k−1) (a)
=
h
f (z)k(z − a)k−1
I 
(k − 1)!
I
f (z)
= k k
dz + hC k k
dz .
2πi Γ (z − a − h) (z − a) Γ (z − a − h) (z − a)
Taking the limit as a → 0, one obtains (since k(k − 1)! = k!)
I
(k) k! f (z)
f (a) = dz,
2πi Γ (z − a)k+1
hence (4.8) is true for n ≥ 1.

120
Remark 4.2.9. The above theorem shows that an analytic function f has
derivatives of any order and the values of the function over a contour Γ
determine the values of all derivatives in the whole domain bounded by Γ.
From a practical point of view, (4.8) gives the following formula for inte-
grals of this structure
I
f (z) n! (n)
n+1
dz = f (a). (4.9)
Γ (z − a) 2πi

4.3 Taylor and Laurent Series


Because the basic theory of series of complex numbers and series of func-
tions of a complex variable is largely similar to the corresponding theory for
their counterparts from Calculus (see [33]), we will not present this theory,
but we will use some of its elements such as the Cauchy-Hadamard theorem.

ExampleX 4.3.1. A very important example is the well known geometric


series zn.
n≥0
The sequence of the partial sums is (Sn )n≥0 , where

1 − z n+1
Sn = 1 + z + z 2 + · · · + z n = , for z 6= 1.
1−z
For z = 1, Sn = n + 1 is a divergent sequence, hence the series is divergent.
Consider the sequence (z n )n≥1 . We have the following three possibilities:

- if |z| = ρ < 1, then lim |z|n = lim ρn = 0, hence lim z n = 0;


n→∞ n→∞ n→∞

- if |z| = ρ > 1, then lim |z|n = lim ρn = ∞, hence lim z n = ∞ (the


n→∞ n→∞ n→∞
point at infinity);

- if |z| = ρ = 1, then z = eiθ and z n = eniθ , θ 6= 0, so the limit does not


exist.

Therefore, for |z| < 1 the geometric series is convergent and its sum is

1 − z n+1 1
S = lim Sn = lim = .
n→∞ n→∞ 1 − z 1−z

121
We have obtained that X 1
zn = . (4.10)
n≥0
1−z
For |z| ≥ 1 the geometric series is divergent.
n n
Xany r ∈ (0, 1), in the disk
Moreover, for X|z| ≤ r we have that |z | ≤ r
and the series rn is convergent, hence z n is uniformly convergent in
n≥0 n≥0
any disk |z| ≤ r, thus the geometric series can be derived or integrated term
by term.
A result that is specific to the complex framework is the following theo-
rem.
Theorem 4.3.2 (Weierstrass). If the functions fn (z), n ∈ N∗ are analytic in

X
a domain D and the series fn (z) is uniformly convergent to the function
n=1
f (z) in any closed subdomain D1 of D, then
i) f (z) is analytic in the domain D;

X
ii) f (m)
(z) = fn(m) (z), ∀m ∈ N∗ ;
n=1


X
iii) the series fn(m) (z) is uniformly convergent in any closed subdomain
n=1
D1 of D.

4.3.1 Taylor Series


Let us denote by DR (a) the disk of radius R > 0 centred at the point
a ∈ C, i.e. DR (a) = {z ∈ C : |z − a| < R} (Figure 4.8 (a)).
If f is an analytic function in a disk DR (a), then a is said to be an ordinary
point of f .
Theorem 4.3.3. If f is an analytic function in DR (a), then there exists
cn ∈ C, n ∈ N, such that

X
f (z) = cn (z − a)n , ∀z ∈ DR (a). (4.11)
n=0

122
Moreover,
f (n) (a)
cn = , n ∈ N. (4.12)
n!
Proof. Let z ∈ DR (a). Consider a disk Dρ (a) with ρ < R and let Γρ be its
boundary, i.e. the circle of radius ρ centred at a (Figure 4.8 (b)).

(a) The Disk DR (a) (b) The Circle Γρ

(c) The Closed Contour Γ inside DR (a)

Figure 4.8: The Taylor Series Coefficients Deduction

Since f is analytic, we can represent the value f (z) using Theorem 4.2.5
(Cauchy’s Integral Formula). We get that
I
1 f (u)
f (z) = du. (4.13)
2πi Γρ u − z

Since u ∈ Γρ and z ∈ Dρ (a), the distance to a verifies |z − a| < |u − a|, hence

123

z − a
< 1 and we can use the geometric series 1 =
X

u − a q n with the
1−q n=0
z−a
convergence condition |q| < 1, where q = .
u−a
1
We will expand in a power series. We have that
u−z
1 1 1 1
= = ·
u−z u − a − (z − a) u−a 1− −a z
u−a
∞  n X ∞
1 X z−a (z − a)n
= = n+1
.
u − a n=0 u − a n=0
(u − a)
∞  n
z − a |z − a| X |z − a|
Since = < 1, the real series is convergent
u − a ρ n=0
ρ
∞  n
X z−a
(again using the geometric series) and it dominates the series ,
n=0
u − a
hence the latter is uniformly convergent for u ∈ Γρ . Then we can replace

1 X (z − a)n
by n+1
in (4.13) and we can integrate term by term. One
u−z n=0
(u − a)
obtains
I
1 f (u)
f (z) = du
2πi Γρ u − z

(z − a)n
I
1 X
= f (u) du
2πi Γρ n=0
(u − a)n+1
∞ I !
X 1 f (u)
= n+1
du (z − a)n .
n=0
2πi Γρ (u − a)
I
1 f (u)
By denoting cn = du, one obtains
2πi Γρ (u − a)n+1

X
f (z) = cn (z − a)n .
n=0

By Theorem 4.2.8 (Cauchy’s Integral Formula for Derivatives),


I
(n) n! f (z)
f (a) = dz,
2πi Γ (z − a)n+1

124
f (n) (a)
hence cn = .
n!

Remark 4.3.4. We obtained the formula for the coefficients


I
1 f (z)
cn = dz.
2πi Γρ (z − a)n+1

f (z)
The function is analytic on the so called punctured disk given
(z − a)n+1
by DR (a) \ {a}, which is a simply connected domain. Then, for any closed
contour Γ ⊂ DR (a) such that a belongs to the domain bounded by Γ (Figure
4.8 (c)), one obtains from Theorem 4.2.3 (Cauchy’s Theorem for Multiply
Connected Domains) the following formula:
I
1 f (z)
cn = dz. (4.14)
2πi Γ (z − a)n+1

The main examples of Taylor series are the following:



1 X
1. = z n , for |z| < 1;
1−z n=0


z
X zn
2. e = , z ∈ C;
n=0
n!


X (−1)n 2n+1
3. sin z = z , z ∈ C;
n=0
(2n + 1)!


X (−1)n
4. cos z = z 2n , z ∈ C.
n=0
(2n)!

4.3.2 Laurent Series


Consider f an analytic function in a punctured disk DR (a) \ {a}. In this
case, when f is not analytic at a, the point a is said to be an isolated singular
point (or singularity) of the function f (Figure 4.9 (a)).

125
Theorem 4.3.5. If f is an analytic function in DR (a)\{a}, then there exists
cn ∈ C, n ∈ Z, such that

X
f (z) = cn (z − a)n , ∀z ∈ DR (a) \ {a}. (4.15)
n=−∞

Proof. Let z be any point in DR (a) \ {a}. Consider an annulus Aρ,r (a)
bounded by the circles Γρ and Γr centred at a, with r < ρ < R such that
z ∈ Aρ,r (a) (Figure 4.9 (b)).
y y
DR (a) DR (a)
Aρ,r (a)
Γρ
× × Γr
a a
z

x x
0 0
(a) An Isolated Singular Point (b) The Annulus

y y
DR (a) DR (a)

Γρ Γρ
z × Γr Γr ×
Γ
a ∆ a
B
A

x x
0 0
(c) Transformation into a Simply Con- (d) A Arbitrary Curve in the Annulus
nected Domain

Figure 4.9: Steps in Determining the Coefficients of the Laurent Series

The annulus Aρ,r (a) is a doubly connected domain for f . In order to


apply Theorem 4.2.5 (Cauchy’s Integral Formula), we transform it into the
simply connected domain ∆ = Aρ,r (a) \ AB by considering a cut AB (Figure
4.9 (c)). The boundary of ∆ is ∂∆ = Γρ ∪ AB ∪ Γ− r ∪ BA. We have that
I I Z I Z !
1 f (u) 1
f (z) = du = + − + ,
2πi ∂4 u − z 2πi Γρ AB Γr BA

126
hence I I
1 f (u) 1 f (u)
f (z) = du − du. (4.16)
2πi Γρ u−z 2πi Γr u−z
From the proof of Theorem 4.3.3 we get that
I ∞
1 f (u) X
du = cn (z − a)n ,
2πi Γρ u − z n=0

where I
1 f (z)
cn = dz (4.17)
2πi Γ (z − a)n+1
and Γ is an arbitrary closed contour which encircles the point a (see Figure
4.9 (d)).
1
For the second integral in (4.16) we will write the ratio − as a
u−z
series (using the geometric
series).
Since u ∈ Γr and z ∈ Aρ,r (a), then
u − a
|u − a| < |z − a|, hence < 1.
z − a
u−a
The convergence condition, |q| < 1 for q = , holds and
z−a
1 1 1 1 1
− = = =
u−z z−u z − a − (u − a) z −a1− u−a
z−a
∞  n X ∞ n
1 X u−a (u − a)
= = .
z − a n=0 z − a n=0
(z − a)n+1

In order to unify the results that we have obtained so far with (4.17), we
change the index of summation by defining m to be −n − 1. So we have that
m = −(n + 1), m = −1 for n = 0 and m → −∞ when n → ∞. Therefore
−∞ −1 −1
1 X (u − a)−(m+1) X (z − a)m X (z − a)n
− = = = .
u − z m=−1 (z − a)−m m=−∞
(u − a)m+1 n=−∞ (u − a)n+1

1
If we replace − in (4.16) and if we integrate term by term, then
u−z
I −1  I 
1 f (u) X 1 f (u)
du = n+1
du (z − a)n ,
2πi Γr u − z n=−∞
2πi Γr (u − a)

127
hence
I −1
1 f (u) X
− du = cn (z − a)n ,
2πi Γr u−z n=−∞

where I
1 f (z)
cn = dz. (4.18)
2πi Γr (z − a)n+1

I By Theorem 4.2.3I(Cauchy’s Theorem for Multiply Connected Domains),


f (z) f (z)
n+1
dz = n+1
dz for any closed contour Γ as in Figure
Γr (z − a) Γ (z − a)
4.9 (d). By (4.16), (4.17) and (4.18) we obtain that

X −1
X
n
f (z) = cn (z − a) + cn (z − a)n , (4.19)
n=0 n=−∞

so ∞
X
f (z) = cn (z − a)n , (4.20)
n=−∞

where I
1 f (z)
cn = dz, n ∈ Z. (4.21)
2πi Γ (z − a)n+1
Remember that Γ is any closed contour such that a is the unique singular
point of f in the domain bounded by Γ.

The series (4.20) is called the Laurent Series of the function f in a punc-
tured neighborhood of a (or about a).
X∞ X−1
n
The series cn (z − a) and cn (z − a)n in (4.19) are called the
n=0 n=−∞
Taylorian Part and the Principal Part of the Laurent series, respectively.
With respect to the Taylorian part and the principal part we will be a able
to classify singular points. This follows in the next section.

128
4.4 Classification of Singular Points
Let a be a singular point of function f , i.e. f is not analytic at a.

Removable singular points


Definition 4.4.1. The point a is called a removable singular point of f if
lim f (z) is finite.
z→a
From (4.20) we have that this is equivalent to cn = 0 for all negative
indices n, i.e. the principal part of the Laurent series is null. In this case,
lim f (z) = c0 and f can be extended to an analytic function on DR (a) by
z→a
f (a) = c0 .
sin z
Example 4.4.2. The function f (z) = has z = 0 as a removable point
z
sin z
since lim = 1. The Laurent series expansion is
z→0 z

1 z z3 z5 2n+1
 
sin z n z
f (z) = = − + + · · · + (−1) + ···
z z 1 3! 5! (2n + 1)!
z2 z4 z 2n
=1− + + · · · + (−1)n + ··· .
3! 5! (2n + 1)!
Therefore the principal part is null.

Poles
Definition 4.4.3. The point a is called a pole of order k of f , k ∈ N∗ , if
there exists a function h, analytic in a disk DR (a) with h(a) 6= 0 such that
h(z)
f (z) = .
(z − a)k
A pole of order k = 1 is said to be a simple pole.
Since h is analytic, it has a Taylor series expansion on DR (a), so
h(z) = c−k + c−k+1 (z − a) + · · · + c−1 (z − a)k−1 + c0 (z − a)k + c1 (z − a)k+1 + · · ·
and 0 6= h(a) = c−k . Then the Laurent series of f is
c−k c−k+1 c−1
f (z) = + + · · · + + c0 + c1 (z − a) + · · · ,
(z − a)k (z − a)k−1 (z − a)
hence the principal part of the Laurent series has a finite number of terms.

129
Essential Singular Points
Definition 4.4.4. The point a is called an essential singular point of f if it
is neither a removable singular point, nor a pole of f , hence if and only if the
principal part of the Laurent series of f has an infinite number of nonzero
terms.
1
Example 4.4.5. The function f (z) = e z−3 has z = 3 as an isolated singular

1 z
X zn
point. By replacing z with into the exponential series e = , one
z−3 n=0
n!
1
obtains the Laurent series with the coefficients c−n = , n ∈ N∗ given by
n!
1 1 1 1
e z−3 = 1 + + 2
+ ··· + + ··· .
1!(z − 3) 2!(z − 3) n!(z − 3)n
It is obvious that the principal part contains an infinite number of nonzero
terms, hence z = 3 is an essential singular point of f .
There exist functions with non-isolated singular points. For instance, the
1 1 1 1
function f (z) = π has the singular points 0, 1, 2 , . . . , n , . . . and n→∞
lim =
sin n
z
0, therefore there is no punctured disk DR (0) \ {0} so that f is analytic in
it, i.e. the singular point 0 is not an isolated point.
Remark 4.4.6. So far we have discussed about Laurent series in a punctured
neighborhood of a singular point a, which can be considered as an annulus
AR,0 (a) = {z ∈ C : 0 < |z − a| < R}. The previous discussion can be
extended to different annuli and one obtains different Laurent series.
1
Example 4.4.7. Let us consider the function f (z) = 2 and
z (z − 1)(z − 2)
split this ratio into simple fractions. So
1 A B C D
= + 2+ + ,
z 2 (z − 1)(z − 2) z z z−1 z−2
3 1 1
where A = , B = , C = −1 and D = .
4 2 4
We will find Laurent series expansion in several situation using the Taylor
1
series expansion of = 1 + q + q 2 + · · · + q n + · · · , with the convergence
1−q
condition |q| < 1.

130
1. If |z| < 1, i.e. z ∈ A1,0 (0), we have that

3 1 1 1 1 1 1
f (z) = · + · 2− + ·
4 z 2 z z−1 4 z−2
3 1 1 1 1 1 1
= · + · 2+ − ·  .
4 z 2 z 1−z 4 2 1− z
2

z z |z| 1
Considering q = z, respectively q = , so = < < 1, it follows

2 2 2 2
that

zn
 
3 1 1 1 n 1
f (z) = · + · 2 + 1 + · · · + z + · · · − 1 + ··· + n + ···
4 z 2 z 8 2
 
3 1 1 1 7 15 1
= · + · 2+ + · z + · · · + 1 − n+3 z n + · · · .
4 z 2 z 8 16 2

2. If 1 < |z| < 2, i.e. z ∈ A2,1 (0), we obtain by taking in consideration


the convergence condition of the Taylor series expansion, that

3 1 1 1 1 1 1
f (z) = · + · 2− + ·
4 z 2 z z−1 4 z−2
3 1 1 1 1 1 1
= · + · 2−  − ·  .
4 z 2 z 1 4 2 1− z
z 1− 2
z

z |z|
1 1 z
For q = , so 1 < |z| ⇔ < 1, respectively q = , so = <1

z z 2 2 2
and we obtain that
 
3 1 1 1 1 1 1 1
f (z) = · + · 2 − 1 + + 2 + ··· + n + ··· −
4 z 2 z z z z z
2 n
 
1 z z z
− 1 + + 2 + ··· + n + ···
8 2 2 2
1 1 1 1 1 1 1
= · · · − n+1 − n − · · · − 3 − · 2 − ·
z z z 2 z 4 z
1 1 1 n
− − · z − · · · − n+3 · z − · · · .
8 16 2

131
3. If |z| > 2, i.e. z ∈ A∞,2 (0), we have that

3 1 1 1 1 1 1
f (z) = · + · 2− + ·
4 z 2 z z−1 4 z−2
3 1 1 1 1 1 1
= · + · 2−  + ·  .
4 z 2 z 1 4 2
z 1− z 1−
z z

1 1 1
Now, since |z| > 2, we can take q = , so < < 1, respectively
z z 2
2 2
q = , so < 1, one obtains
z z
 
3 1 1 1 1 1 1 1
f (z) = · + · 2 − 1 + + 2 + ··· + n + ··· +
4 z 2 z z z z z
2 n
 
1 2 2 2
+ 1 + + 2 + · · + n + ···
4z z z z
1 1 1 1
= · · · + (2n−2 − 1) n+1 + (2n−3 − 1) n + · · · + 0 · 2 + 0 · .
z z z z

Conversely, a function f can have different Laurent series, each of them


being convergent in another annulus.
By using Theorem 4.3.2 and the Cauchy-Hadamard Theorem (see [33]),
one can prove the following result:

X
Theorem 4.4.8. The Laurent series cn (z − a)n is convergent in the
n=−∞
1
annulus AR,r (a) = {z ∈ C : r < |z − a| < R}, where r = lim sup |c−n | n and
n→∞

1 X
R = 1 . If r < R, then the function f (z) = cn (z − a)n is
lim sup |cn | n
n=−∞
n→∞
analytic in AR,r (a).

132
4.5 Exercises
E 21. Determine the Laurent series expansion of function f in the following
situations:
1 1
a) f (z) = + 2 , if |z| < 2;
z−2 z
2
b) f (z) = , if |z − 2i| > 4;
z2 + 4
2
ez
c) f (z) = 5 , around a = 0;
z
cosh z
d) f (z) = , if |z| < 1.
z−1
Solution. a) The Laurent series expansion of the function f in the case
1
|z| < 2 is obviously around a = 0. We have that 2 is already a term
z
1
of this expansion belonging to the principal part and for we will use
z−2
the geometric series. We notice that the condition of convergence of the

1 X
geometric expansion = q n is |q| < 1. Since we have the condition
1−q
z n=0
|z| < 2, it follows that < 1, so we can use the geometric expansion for

2
z
q = . Hence
2
z z2 zn
 
1 1 1 1
=− · =− 1 + + 2 + ··· + n + ··· .
z−2 2 1 − z2 2 2 2 2
We obtain that
1 1 z zn
f (z) = − − − · · · − − ··· ,
z2 2 4 2n+1
so the conclusion is that the principal part of the Laurent series expansion
of the function f has only one term and the Taylor part has an infinity of
terms, under the specified conditions;
b) Taking into consideration the restriction |z −2i| > 4, it follows that the
Laurent series expansion of the function f is to be found around the point

133

1 X
at infinity. We use the alternate geometric expansion = (−1)n q n ,
1+q n=0
if |q| < 1. We first split the function into simple fractions; so
2 2 A B
f (z) = 2 = = + ,
z +4 (z − 2i)(z + 2i) z − 2i z + 2i
i i
where A = − and B = , so
2 2
 
i 1 1
f (z) = − .
2 z + 2i z − 2i
1
We have now two simple fractions, but is a term of the principal
z − 2i
1
part of the Laurent series expansion. For we want to use the alternate
z + 2i
geometric expansion, but we need to see for what q this is possible. The
4 4i
condition |z − 2i| > 4 leads to < 1, or < 1. Since the
|z − 2i| z − 2i
4i
condition of convergence is |q| < 1, it follows that a suitable q is q = .
z − 2i
1
We transform the ratio as follows
z + 2i
1 1 1 1
= = · .
z + 2i (z − 2i) + 4i z − 2i 4i
1+
z − 2i
Now, using the alternate geometric expansion, we get that
1 1 1
= ·
z + 2i z − 2i 4i
1+
z − 2i
42 i2 (−1)n 4n in
 
1 4i
= 1− + − ··· + + ···
z − 2i z − 2i (z − 2i)2 (z − 2i)n
1 4i (−1)n 4n in
= − + · · · + + ··· ,
z − 2i (z − 2i)2 (z − 2i)n+1
so
(−1)n 4n in
 
i 4i
f (z) = − + ··· + + ···
2 (z − 2i)2 (z − 2i)n+1
2 (−1)n 4n in+1
= + · · · + + ··· .
(z − 2i)2 2(z − 2i)n+1

134
Let us notice that in this situation there is no Taylor part, only a principal
part.
2
Another method is to isolate the singularity by writing f (z) = ·
z − 2i
1
and to multiply the expansion of the second fraction by the first one;
z + 2i
c) Consider the Taylor series expansion around a = 0 of the exponential
z z2 zn 2
function ez = 1 + + + ··· + + · · · , ∀z ∈ C. By replacing z with ,
1! 2! n! z
one obtains
2 2 22 2n
ez = 1 + + 2 + ··· + n + ···
z1! z 2! z n!
and
2
ez 1 2 22 2n
f (z) = = + + + · · · + + ··· ,
z5 z 5 z 6 1! z 7 2! z n+5 n!
which is the expansion needed, again only with an infinity of terms in the
principal part, but none into the Taylor part.
1
d) We write the function f as f (z) = cosh z · .
z−1
ez + e−z
Since cosh z = and using the Taylor series expansion of ez around
2
0, it follows that

zn (−1)n z n
   
z z
1 + + ··· + + ··· + 1 − + ··· + + ···
1! n! 1! n!
cosh z =
2
z2 z 2n
2+2· + ··· + 2 · + ···
2! (2n)!
=
2
z2 z 2n
=1+ + ··· + + · · · , ∀z ∈ C.
2! (2n)!

1
On the other hand, if |z| < 1, we have that admits a Taylor series
z−1
expansion around 0,

1 1
=− = −1 − z − z 2 − · · · − z n − · · · .
z−1 1−z

135
Hence
z2 z 2n
 
−1 − z − z 2 − · · · − z n − · · · .

f (z) = 1+ + ··· + + ···
2! (2n)!
1 1
It follows that cn = −1 − ··· − − · · · = − cosh 1, c−n = 0, ∀n ∈ Z,
2! (2n)!
n > 0 and c0 = −1.
We have obtained that the Taylor part has an infinite number of terms
and the principal part has none (since f is analytic as being the product of
two functions which are both analytic for |z| < 1).

W 18. Determine the Laurent series expansion of the function f in the


following situations:
1 2
a) f (z) = + , if 2 < |z − 2| < 8;
z+6 z−4
2
b) f (z) = , if |z − 2i| < 4;
z2
+4
 
1
sin
z
c) f (z) = , around a = 0;
z
ez
d) f (z) = , if |z| > 1;
z−1
1
e) f (z) = , around a = π.
sin z
Answer. a) We have that
1 2
f (z) = +
(z − 2) + 8 (z − 2) − 2
1 1 1 1
= · + ·
8 z−2 z−2 2
1+ 1−
8 z−2
2n 2n−1 2 1
= ··· + + + · · · + +
(z − 2)n+1 (z − 2)n (z − 2)2 (z − 2)
n
1 z−2 (−1)
+ − 2 + · · · · + n+1 (z − 2)n + · · · ;
8 8 8

136
b) We obtain that
(4i)2 (4i)3 (−1)n (4i)n
 
i 4i
f (z) = − + − + ··· + + ··· ;
2 (z − 2i)2 (z − 2i)3 (z − 2i)4 (z − 2i)n+1

c) We get that
1 1 1 (−1)n
f (z) = − + − · · · + + ··· ;
z 2 3!z 4 5!z 6 (2n + 1)!z 2n+2

d) In this case we have that


z n−1
   
1 z 1 1 1
f (z) = + 1 + + ··· + + ··· · 1 + + 2 + ··· + n + ··· ;
z 2! n! z z z

e) We have that
1 1 1
= = .
sin z sin ((z − π) + π) − sin(z − π)
By replacing z with z − π in the Taylor series expansion of sin z, one
z − π (z − π)3 (z − π)2n+1
obtains sin(z − π) = − + · · · + (−1)n + · · · , hence
1! 3! (2n + 1)!

1 −1 (z − π)3 (z − π)2n+1
= − + · · · + (−1)n + ···
sin z (z − π) 3! (2n + 1)!
1 1
=− · 2 2n .
z−π (z − π) n (z − π)
1− + · · · + (−1) + ···
3! (2n + 1)!
Since z = π is a first order pole of function f , it follows that
1
2
(z − π) (z − π)2n
1− + · · · + (−1)n + ···
3! (2n + 1)!
has to be the Taylor series expansion, so
1
2 =
(z − π) (z − π)2n
1− + · · · + (−1)n + ···
3! (2n + 1)!

137
= a0 + a1 (z − π) + a2 (z − π)2 + · · · + an (z − π)n + · · · ,
hence
(a0 + a1 (z − π) + · · · + an (z − π)n + · · · ) ·

(z − π)2 2n
 
n (z − π)
· 1− + · · · + (−1) + ··· = 1
3! (2n + 1)!
and identifying the coefficients, one obtains
a0
a0 = 1, a1 = 0, a2 − = 0, . . . .
3!

I
E 22. Calculate f (z)dz in the following cases:
Γ

cosh z
a) f (z) = , Γ : |z| = 2;
(z − 1)2

e2z + z 2
b) f (z) = , Γ : |z − i| = 1.
z2 + 3

Solution. Let us remember that |z − z0 | = r, z0 = x0 + iy0 , r > 0 is the


circle centred at M (x0 , y0 ) of radius r and it is in fact the boundary of the
disk Dr (z0 ) = {z ∈ C : |z − z0 | < r}.
a) We have that |z| = 2 is the circle of radius 2, centred at the origin.
Also D2 (0) is a simply connected domain, bounded by Γ.
Since g(z) = cosh z is an analytic function in D2 (0) and a = 1 belongs to
D2 (0), one can use (4.8) and obtain that
I
cosh z 2πi 0
dz = g (1) = 2πi sinh z = 2πi sinh 1;

(z − 1) 2 1!

|z|=2 z=1

b) We have that |z − i| = 1 is the circle of radius 1, centred at the point


M (0, 1). Also D1 (i) is a simply connected domain, bounded by √Γ.
The isolated singular
√ points of the function√f are z = ±i 3. The first
one, let’s say
√ z = i 3, corresponds to√M1 (0, 3) ∈ D1 (i) and the second
one, z = −i 3, corresponds to M1 (0, − 3) ∈ / D1 (i).

138
e2z + z 2 g(z)
Considering g(z) = √ , one can write f (z) = √ . Since g is
z+i 3 z−i 3
an analytic function in D1 (i), using (4.7), one obtains
I I
g(z) √
f (z)dz = √ dz = 2πig(i 3)
|z−i|=1 |z−i|=1 z − i 3

e2i 3 − 3 π √
= 2πi √ = √ (e2i 3 − 3).
2i 3 3

139
140
Chapter 5

Residue Theory

If a complex function f is not analytic at a point a, one can evaluate the


residues of f . The residue theorem generalizes Cauchy’s theorems, giving a
more powerful formula to evaluate complex line integrals of analytic functions
over different closed paths. The residue theorem also provides an efficient
method to compute definite and improper real integrals. This computations
turn out to be very useful in practice; for example, in the Fourier analysis of
signals.
Let a be an isolated singular point of the function f .
1
Definition 5.0.1. The coefficient c−1 of in the Laurent series expan-
z−a
sion of the function f in a neighborhood of a is called the residue of f at a
and it is denoted by res(f, a).

By (4.21), the coefficients of the Laurent series are


I
1 f (z)
cn = dz, n ∈ Z,
2πi Γ (z − a)n+1

hence for n = −1 one obtains


I
1
res (f, a) = f (z)dz, (5.1)
2πi Γ

where a is the unique singular point of f in the domain bounded by Γ (Figure


5.1).

141
Figure 5.1: The Unique Singular Point a in the Domain Bounded by Γ

5.1 The Residue Theorem


Theorem 5.1.1. Let f be an analytic function in a closed domain D, except
for a finite number of isolated singular points a1 , a2 , . . . , an lying inside D
and let Γ be the boundary of D. Then
I Xn
f (z)dz = 2πi res(f, aj ). (5.2)
Γ j=1

Proof. Let Γj ⊂ D, j = 1, n be closed contours such that there is a unique


singular point aj in the domain bounded by Γj (Figure 5.2 (a)).

(a) The Unique Singular Point aj in (b) The Simple Poles b1 , b2 , . . . , bj on


the Domain Bounded by Γj the Boundary of Γ

Figure 5.2: The Possible Cases of Singular Points in the Residue Theorems

By Definition 5.0.1 we have that


I
1
res (f, aj ) = f (z)dz,
2πi Γj

142
hence I
f (z)dz = 2πi · res (f, aj ).
Γj

By Theorem 4.2.3 (Cauchy’s Theorem for Multiply Connected Domains),


one obtains
I n I
X n
X
f (z)dz = f (z)dz = 2πi res (f, aj ).
Γ j=1 Γj j=1

Remark 5.1.2. If f has in Γ a finite number of simple poles b1 , b2 , . . . , bm ,


then (Figure 5.2 (b))
I n
X m
X
f (z)dz = 2πi res (f, aj ) + πi res (f, bk ). (5.3)
Γ j=1 k=1

Residue at the Point at Infinity


If z = ∞ is an isolated singular point of the function f , then the residue
1
of f at ∞ is the number −c−1 , where c−1 is the coefficient of in the
z−a
Laurent series expansion of the function f in a neighborhood of ∞, NR (∞) =
{z ∈ C : |z − a| > R}, for some a ∈ C. The sign ’−’ is justified since from the
point of view of ∞, the trigonometric sense on a closed contour is clockwise.
As in the case of finite singular points, one obtains the formula
I
1
res (f, ∞) = − f (z)dz, (5.4)
2πi Γ
where Γ is any closed contour outside of which ’∞’ is the unique singular
point of f .
Let a1 , a2 , . . . , an be the finitely many isolated singular points of the func-
tion f (Figure 5.3).
By (5.2) and (5.4) one obtains
n
1 X
res (f, ∞) = − · 2πi res (f, aj ),
2πi j=1

hence the sum of all residues of a function f is


Xn
res (f, ∞) + res (f, aj ) = 0.
j=1

143
Figure 5.3: The Isolated Singular Points of f

5.2 Computation of Residues at Poles


Let a be a pole of order k of the function f . This implies the existence of
a function h which is holomorphic on a disk DR (a) with h(a) 6= 0 such that
h(z)
f (z) = . Then, by (5.1), one gets
(z − a)k
I I
1 1 h(z)
res (f, a) = f (z)dz = dz.
2πi Γ 2πi Γ (z − a)k
Cauchy’s Integral Formula for Derivatives (4.8) gives
(k − 1)!
I
1 h(z) 1
res (f, a) = · (k−1)+1
dz = h(k−1) (a),
(k − 1)! 2πi Γ (z − a) (k − 1)!
but h(z) = f (z)(z−a)k , hence the formula for the computation of the residues
of a function f at a pole of order k is
1
res (f, a) = lim [(z − a)k f (z)](k−1) . (5.5)
(k − 1)! z→a

Now, let a be a simple pole of the function f . Then, for k = 1, (5.5)


becomes
res (f, a) = lim (z − a)f (z). (5.6)
z→a

ez + z
Example 5.2.1. Let us consider f (z) = . We have that z = 1 is
z−1
a simple pole of the function f since h(z) = ez + z is analytic in C and
h(1) = e + 1 6= 0. We obtain that
res (f, 1) = lim(z − 1)f (z) = lim(ez + z) = e + 1.
z→1 z→1

144
ez + z
Example 5.2.2. Now, let us take f (z) = . We have that z = 2 is
(z − 2)2
a pole of order 2 of the function f since h(z) = ez + z is analytic in C and
h(2) = e2 + 2 6= 0. We obtain that
1
res (f, 2) = lim [(z − 2)2 f (z)]0 = lim(ez + z)0 = lim(ez + 1) = e2 + 1.
1! z→2 z→2 z→1

It is not necessary for a function to have just one isolated singular point
of pole type.
ez + z
Example 5.2.3. Consider the function f (z) = . Be-
(z − 2)(z 2 − 3z + 2)
cause we have that z 2 − 3z + 2 = (z − 1)(z − 2), it follows that f has two
isolated singular points z1 = 1 and z2 = 2.
ez + z
Since z = 1 is a simple order pole of f (h(z) = is analytic in
(z − 2)2
DR (1), 0 < R < 1, h(1) = e + 1 6= 0), we obtain that
ez + z e+1
res (f, 1) = lim(z − 1)f (z) = lim = = e + 1.
z→1 z→1 (z − 2)2 1
ez + z
On the other hand, z = 2 is a pole of order 2 of function f (h(z) =
z−1
is analytic in DR (2), 0 < R < 1, h(2) = e2 + 2 6= 0), so
1
res (f, 2) = lim [(z − 2)2 f (z)]0
1! 
z→1
0
ez + z
= lim
z→2 z − 1

(ez + 1)(z − 1) − (ez + z) · 1


= lim
z→2 (z − 1)2
z z
ze − 2e − 1
= lim
z→2 (z − 1)2
= −1.

Consider the case of a simple pole a of a function f(z) = P(z)/Q(z), where P and Q are analytic functions in a disk D_R(a). Then there exists a function Q₁, analytic in D_R(a) and satisfying Q₁(a) ≠ 0, such that Q(z) = Q₁(z)(z − a). Differentiating, Q'(z) = Q₁'(z)(z − a) + Q₁(z), and for z = a we get Q'(a) = Q₁(a).
By (5.6) we have
\[
\operatorname{res}(f,a) = \lim_{z\to a}(z-a)f(z)
= \lim_{z\to a}(z-a)\frac{P(z)}{Q(z)}
= \lim_{z\to a}(z-a)\frac{P(z)}{Q_1(z)(z-a)}
= \frac{P(a)}{Q_1(a)}
= \frac{P(a)}{Q'(a)},
\]
hence another formula for the residue at a simple pole is
\[
\operatorname{res}(f,a) = \frac{P(a)}{Q'(a)}. \qquad (5.7)
\]

Example 5.2.4. The function f(z) = tan z = sin z / cos z has the poles z_k = π/2 + kπ, k ∈ Z (the roots of cos z = 0).
Since (cos z)' = −sin z and sin(π/2 + kπ) = (−1)^k ≠ 0, the z_k are simple poles. Here (5.7) is clearly preferable to (5.6), and
\[
\operatorname{res}(f,z_k) = \left.\frac{\sin z}{(\cos z)'}\right|_{z_k=\frac{\pi}{2}+k\pi} = \left.\frac{\sin z}{-\sin z}\right|_{z_k=\frac{\pi}{2}+k\pi} = -1.
\]
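A quick symbolic confirmation via the limit form (5.6), again assuming the Symbolic Math Toolbox, at the particular pole z₀ = π/2:

syms z
limit((z - sym(pi)/2)*tan(z), z, sym(pi)/2)   % expected -1, in agreement with (5.7)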

5.3 Jordan's Lemmas

The Residue Theorem (5.1.1) provides an efficient method to compute real definite integrals. This method relies on a couple of lemmas which give sufficient conditions for the convergence to 0 of certain complex line integrals.
Let f be an analytic function in C, with the exception of a finite number of isolated singular points.

Lemma 5.3.1. Let Γ_R be an arc of a circle of radius R, centred at a point a (Figure 5.4).

Figure 5.4: Oriented Arc of a Circle of Radius R, Centered at a

i) If \(\lim_{z\to a}(z-a)f(z) = 0\), then \(\lim_{R\to 0}\int_{\Gamma_R} f(z)\,dz = 0\);

ii) If \(\lim_{|z-a|\to\infty}(z-a)f(z) = 0\), then \(\lim_{R\to\infty}\int_{\Gamma_R} f(z)\,dz = 0\).

Proof. i) Let ε > 0 be an arbitrary number, fixed for the entire proof. Since \(\lim_{z\to a}(z-a)f(z) = 0\), there exists δ > 0, δ = δ(ε/(4π)), such that for every z with |z − a| < δ we have |(z − a)f(z)| < ε/(4π).
Let R be an arbitrary number, fixed for the rest of the proof, such that 0 < R < δ. Then every z ∈ Γ_R satisfies |z − a| = R < δ, which implies |(z − a)f(z)| < ε/(4π). It follows that for every z ∈ Γ_R,
\[
|f(z)| = \frac{|(z-a)f(z)|}{|z-a|} < \frac{\varepsilon}{4\pi R}.
\]
Using (4.2) we get
\[
\Bigl|\int_{\Gamma_R} f(z)\,dz\Bigr| \le \frac{\varepsilon}{4\pi R}\cdot l_{\Gamma_R} \le \frac{\varepsilon}{4\pi R}\cdot 2\pi R = \frac{\varepsilon}{2} < \varepsilon.
\]
We have proved that for every ε > 0 there exists δ > 0 such that for every R > 0 with |R − 0| < δ we have \(\bigl|\int_{\Gamma_R} f(z)\,dz - 0\bigr| < \varepsilon\), i.e.
\[
\lim_{R\to0}\int_{\Gamma_R} f(z)\,dz = 0.
\]
ii) The proof is similar to i), with the changes |z − a| > δ and R > δ, since R = |z − a| → ∞.
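The decay in part ii) can also be observed numerically. For f(z) = 1/(1 + z²) we have (z − a)f(z) → 0 as |z| → ∞ (with a = 0), and the integral over the upper semicircle of radius R shrinks accordingly; a small MATLAB sketch (the choice of f and of the radii is ours):

f = @(z) 1./(1 + z.^2);
for R = [10 100 1000]
    I = integral(@(t) f(R*exp(1i*t)).*1i*R.*exp(1i*t), 0, pi);  % integral over the semicircle
    fprintf('R = %6g   |integral| = %g\n', R, abs(I));           % decays roughly like 1/R
end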

The following figures are the key to understanding the next result (Lemma 5.3.2).

Figure 5.5: All the Possible Constructions of Γ_R: (a) Γ_R in the upper half-plane; (b) Γ_R in the lower half-plane; (c) Γ_R in the right half-plane; (d) Γ_R in the left half-plane

Lemma 5.3.2. i) Let Γ_R be the semicircle |z| = R in the upper half-plane given by Im z ≥ 0 (Figure 5.5 (a)). If \(\lim_{|z|\to\infty} f(z) = 0\) uniformly with respect to θ = arg z ∈ [0, π], then
\[
\lim_{R\to\infty}\int_{\Gamma_R} f(z)e^{i\alpha z}\,dz = 0, \quad \forall\,\alpha > 0.
\]
ii) Let Γ_R be the semicircle |z| = R in the lower half-plane given by Im z ≤ 0 (Figure 5.5 (b)). If \(\lim_{|z|\to\infty} f(z) = 0\) uniformly with respect to θ = arg z ∈ [π, 2π], then
\[
\lim_{R\to\infty}\int_{\Gamma_R} f(z)e^{-i\alpha z}\,dz = 0, \quad \forall\,\alpha > 0.
\]
For the proof one should consult [33].

Similar statements hold for the factors e^{−αz} and e^{αz}, α > 0, in the right half-plane Re z ≥ 0 and the left half-plane Re z ≤ 0, respectively (Figures 5.5 (c) and 5.5 (d)).

5.4 Applications of the Residue Theorem

Integrals of the form \(\int_{-\infty}^{\infty} f(x)\,dx\)

Proposition 5.4.1. If the real function f(x) has an analytic continuation f(z) to the upper half-plane which has a finite number of singular points a_j, j = 1, ..., n, in the upper half-plane, has no singular points on the real axis and \(\lim_{|z|\to\infty} zf(z) = 0\), then the improper integral \(\int_{-\infty}^{\infty} f(x)\,dx\) exists and
\[
\int_{-\infty}^{\infty} f(x)\,dx = 2\pi i\sum_{j=1}^{n}\operatorname{res}(f,a_j). \qquad (5.8)
\]

Proof. We consider the complex line integral \(\oint_\Gamma f(z)\,dz\), where the closed contour Γ consists of the semicircle BCA of radius R, centred at 0, and the segment AB, with R > max_{1≤j≤n} |a_j| (Figure 5.6).
By Theorem 5.1.1 (the Residue Theorem) we have
\[
\oint_\Gamma f(z)\,dz = 2\pi i\sum_{j=1}^{n}\operatorname{res}(f,a_j).
\]

Figure 5.6: The Construction of the Closed Contour Γ in the Plane

Since Γ = AB ∪ BCA, we write the following decomposition:
\[
\oint_\Gamma f(z)\,dz = \int_{AB} f(z)\,dz + \int_{BCA} f(z)\,dz = 2\pi i\sum_{j=1}^{n}\operatorname{res}(f,a_j). \qquad (5.9)
\]
Using the fact that AB has the parametrization z = x, x ∈ (−R, R), we get
\[
\int_{AB} f(z)\,dz = \int_{-R}^{R} f(x)\,dx,
\]
and taking the limit as R → ∞, we obtain \(\int_{-\infty}^{\infty} f(x)\,dx\).
Since \(\lim_{|z|\to\infty} zf(z) = 0\), we get \(\lim_{R\to\infty}\int_{BCA} f(z)\,dz = 0\) by Lemma 5.3.1 ii). The right-hand side of (5.9) remains constant as R → ∞, since |a_j| < R for all j = 1, ..., n, so the sum includes all the residues of f from the beginning. Therefore, one obtains (5.8) as the limit of (5.9) as R → ∞.
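For instance, f(x) = 1/(1 + x²) satisfies the hypotheses, with the single singular point i in the upper half-plane and res(f, i) = 1/(2i), so (5.8) predicts the well-known value π. This can be checked directly, a one-line sketch using MATLAB's built-in numerical integrator:

f = @(x) 1./(1 + x.^2);
I = integral(f, -Inf, Inf)   % approximately 3.1416 = 2*pi*1i*(1/(2*1i))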

Integrals of the form \(\int_{-\infty}^{\infty} f(x)\cos(\alpha x)\,dx\) and \(\int_{-\infty}^{\infty} f(x)\sin(\alpha x)\,dx\), α > 0

Proposition 5.4.2. If the real function f(x) has an analytic continuation f(z) to the upper half-plane which has a finite number of singular points a_j, j = 1, ..., n, in the upper half-plane, has no singular points on the real axis and \(\lim_{|z|\to\infty} f(z) = 0\) uniformly in θ = arg z ∈ [0, π], then the improper integrals above exist and
\[
I_1 = \int_{-\infty}^{\infty} f(x)\cos(\alpha x)\,dx = \operatorname{Re}\Bigl(2\pi i\sum_{j=1}^{n}\operatorname{res}\bigl(f(z)e^{i\alpha z},a_j\bigr)\Bigr),
\]
\[
I_2 = \int_{-\infty}^{\infty} f(x)\sin(\alpha x)\,dx = \operatorname{Im}\Bigl(2\pi i\sum_{j=1}^{n}\operatorname{res}\bigl(f(z)e^{i\alpha z},a_j\bigr)\Bigr). \qquad (5.10)
\]

Proof. One considers the complex number
\[
I_1 + iI_2 = \int_{-\infty}^{\infty} f(x)\bigl(\cos(\alpha x) + i\sin(\alpha x)\bigr)\,dx = \int_{-\infty}^{\infty} f(x)e^{i\alpha x}\,dx,
\]
and one applies the method from the previous case to the last integral. Using Lemma 5.3.2 i), we get
\[
I_1 + iI_2 = 2\pi i\sum_{j=1}^{n}\operatorname{res}\bigl(f(z)e^{i\alpha z},a_j\bigr),
\]
i.e. (5.10) is true.

Remark 5.4.3. If α < 0, one uses Lemma 5.3.2 ii) (see Figure 5.5 (b)) and one obtains, due to the (clockwise) trigonometric sense, the following formulas:
\[
I_1 = \int_{-\infty}^{\infty} f(x)\cos(\alpha x)\,dx = -\operatorname{Re}\Bigl(2\pi i\sum_{j=1}^{n}\operatorname{res}\bigl(f(z)e^{i\alpha z},b_j\bigr)\Bigr),
\]
\[
I_2 = \int_{-\infty}^{\infty} f(x)\sin(\alpha x)\,dx = -\operatorname{Im}\Bigl(2\pi i\sum_{j=1}^{n}\operatorname{res}\bigl(f(z)e^{i\alpha z},b_j\bigr)\Bigr), \qquad (5.11)
\]
where b_j are the singular points of f in the lower half-plane.

Integrals of the form \(\int_0^{2\pi} f(\cos x,\sin x)\,dx\)

In this case f is a rational function. One uses the change of variable z = e^{ix}. Since |z| = 1 and x ∈ [0, 2π), z describes the unit circle (Figure 5.7).

Figure 5.7: The Unit Circle and the Interior Singular Points of f

Using Euler's formulas cos x = (e^{ix} + e^{−ix})/2 and sin x = (e^{ix} − e^{−ix})/(2i), one obtains
\[
\cos x = \frac{z+\frac1z}{2} = \frac{z^2+1}{2z},\qquad
\sin x = \frac{z-\frac1z}{2i} = \frac{z^2-1}{2iz},
\]
\[
dz = ie^{ix}\,dx = iz\,dx \ \Rightarrow\ dx = \frac{dz}{iz}.
\]
It follows that
\[
\int_0^{2\pi} f(\cos x,\sin x)\,dx
= \oint_{|z|=1} f\Bigl(\frac{z^2+1}{2z},\frac{z^2-1}{2iz}\Bigr)\frac{dz}{iz}
= 2\pi i\sum_{j=1}^{n}\operatorname{res}(g(z),a_j),
\]
where g(z) = f((z²+1)/(2z), (z²−1)/(2iz))·1/(iz) is a rational function and a_j are the singular points of g inside the unit circle.

Example 5.4.4. Let us compute \(I = \int_{-\infty}^{\infty}\frac{1}{(3+x^2)^4}\,dx\).
The complex function f(z) = 1/(3 + z²)⁴ has two isolated singular points, z = ±i√3 (the solutions of the equation z² + 3 = 0). Only z = i√3 is located in the upper half-plane, since Im(i√3) = √3 > 0. As i√3 is a pole of order 4 of the function f, it follows that
\[
\operatorname{res}(f,i\sqrt3)
= \frac{1}{3!}\lim_{z\to i\sqrt3}\Bigl[(z-i\sqrt3)^4\frac{1}{(3+z^2)^4}\Bigr]'''
= \frac16\lim_{z\to i\sqrt3}\Bigl[\frac{1}{(z+i\sqrt3)^4}\Bigr]'''
\]
\[
= \frac16\lim_{z\to i\sqrt3}\Bigl[\frac{-4}{(z+i\sqrt3)^5}\Bigr]''
= \frac16\lim_{z\to i\sqrt3}\Bigl[\frac{20}{(z+i\sqrt3)^6}\Bigr]'
= \frac16\lim_{z\to i\sqrt3}\frac{-120}{(z+i\sqrt3)^7}
= \frac16\cdot\frac{-120}{(2i\sqrt3)^7}
= -\frac{5}{32\cdot27\cdot\sqrt3}\,i,
\]
so I = 2πi·res(f, i√3) = \(\dfrac{5\pi}{432\sqrt3}\).
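A numerical cross-check of this value (≈ 0.0210), a short sketch with MATLAB's integral:

f = @(x) 1./(3 + x.^2).^4;
I = integral(f, -Inf, Inf)           % approximately 0.0210
err = abs(I - 5*pi/(432*sqrt(3)))    % approximately 0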
432 3
Example 5.4.5. Let us compute \(I = \int_0^{\infty}\frac{\cos 3x}{(x^2+9)(x^2+1)}\,dx\).
First, let us notice that the function f(x) = cos 3x/((x²+9)(x²+1)) is even, so
\[
I = \int_0^{\infty}\frac{\cos 3x}{(x^2+9)(x^2+1)}\,dx = \frac12\int_{-\infty}^{\infty}\frac{\cos 3x}{(x^2+9)(x^2+1)}\,dx.
\]
The complex function f(z) = e^{3iz}/((z²+9)(z²+1)) has four isolated singular points, z_{1,2} = ±i and z_{3,4} = ±3i, but only i and 3i are located in the upper half-plane.
The value of the integral is
\[
\int_{-\infty}^{\infty}\frac{\cos 3x}{(x^2+9)(x^2+1)}\,dx = \operatorname{Re}\bigl(2\pi i(\operatorname{res}(f,i)+\operatorname{res}(f,3i))\bigr).
\]
Since i and 3i are simple poles, we have
\[
\operatorname{res}(f,i) = \left.\frac{\frac{e^{3iz}}{z^2+9}}{(z^2+1)'}\right|_{z=i} = \left.\frac{e^{3iz}}{2z(z^2+9)}\right|_{z=i} = \frac{e^{-3}}{16i},
\]
\[
\operatorname{res}(f,3i) = \left.\frac{\frac{e^{3iz}}{z^2+1}}{(z^2+9)'}\right|_{z=3i} = \left.\frac{e^{3iz}}{2z(z^2+1)}\right|_{z=3i} = -\frac{e^{-9}}{48i}.
\]
We obtain
\[
\int_{-\infty}^{\infty}\frac{\cos 3x}{(x^2+9)(x^2+1)}\,dx = \operatorname{Re}\Bigl(2\pi i\Bigl(\frac{e^{-3}}{16i}-\frac{e^{-9}}{48i}\Bigr)\Bigr) = \frac{\pi(3e^{-3}-e^{-9})}{24},
\]
hence I = \(\dfrac{\pi(3e^{-3}-e^{-9})}{48}\).
48
Example 5.4.6. Let us calculate \(I = \int_0^{2\pi}\frac{\cos x}{5+3\sin x}\,dx\).
Using the change of variable z = e^{ix}, we have:
x ∈ [0, 2π) → |z| = 1 (the unit circle);
cos x = (z + 1/z)/2 = (z² + 1)/(2z);
sin x = (z − 1/z)/(2i) = (z² − 1)/(2iz);
dz = ie^{ix} dx = iz dx ⇒ dx = dz/(iz).
The integral becomes
\[
I = \oint_{|z|=1}\frac{\frac{z^2+1}{2z}}{5+3\,\frac{z^2-1}{2iz}}\cdot\frac{dz}{iz}
= \oint_{|z|=1}\frac{z^2+1}{z(3z^2+10iz-3)}\,dz.
\]
Denoting by g(z) = (z² + 1)/(z(3z² + 10iz − 3)), one finds that the function g has three singular points: z = 0, z = −3i and z = −i/3 (the solutions of the equation z(3z² + 10iz − 3) = 0). We need to compute the residues only at the points inside the disk D₁(0): z = 0 and z = −i/3. Both are poles of order 1.
Since
\[
\operatorname{res}(g,0) = \left.\frac{z^2+1}{3z^2+10iz-3}\right|_{z=0} = -\frac13,
\]
\[
\operatorname{res}\Bigl(g,-\frac i3\Bigr)
= \left.\frac{z^2+1}{(3z^3+10iz^2-3z)'}\right|_{z=-\frac i3}
= \left.\frac{z^2+1}{9z^2+20iz-3}\right|_{z=-\frac i3}
= \frac13,
\]
the integral is I = 2πi(res(g, 0) + res(g, −i/3)) = 0.
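The result can again be verified numerically (a sketch):

I = integral(@(x) cos(x)./(5 + 3*sin(x)), 0, 2*pi)   % approximately 0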

5.5 Exercises

E 23. Calculate the residues of the following functions:
a) f(z) = sin(πz)/(z(z² + 4));
b) f(z) = e^{iz}/(z³ − 3z + 2);
c) f(z) = cos(1/z)/(z² + 1);
d) f(z) = (z + 1) cos(πz/(z + 1)).

Solution. a) The isolated singular points of the function f(z) = sin(πz)/(z(z² + 4)) are z = 0, z = 2i and z = −2i (these points are obtained from the equation z(z² + 4) = 0).
Since \(\lim_{z\to0}\frac{\sin(\pi z)}{z(z^2+4)}\) produces the indeterminate case 0/0, we first need to eliminate the indeterminacy, using the remarkable limit \(\lim_{z\to0}\frac{\sin z}{z} = 1\). We arrange the terms of the limit as
\[
\lim_{z\to0}\frac{\sin(\pi z)}{z(z^2+4)} = \lim_{z\to0}\frac{\sin(\pi z)}{\pi z}\cdot\lim_{z\to0}\frac{\pi}{z^2+4} = 1\cdot\frac{\pi}{4},
\]
which is a finite number, so z = 0 is a removable singular point of the function f and res(f, 0) = 0.
In the case of \(\lim_{z\to2i}\frac{\sin(\pi z)}{z(z^2+4)}\), the limit is of the form (≠0)/0 = ∞. We count how many zeros the denominator has at z = 2i; in this case there is only one, hence z = 2i is a simple pole. It follows that
\[
\operatorname{res}(f,2i) = \left.\frac{\sin(\pi z)}{[z(z^2+4)]'}\right|_{z=2i} = \left.\frac{\sin(\pi z)}{3z^2+4}\right|_{z=2i} = \frac{\sin(2\pi i)}{-8}.
\]
Similarly, z = −2i is a simple pole and
\[
\operatorname{res}(f,-2i) = \left.\frac{\sin(\pi z)}{3z^2+4}\right|_{z=-2i} = \frac{\sin(-2\pi i)}{-8} = \frac{\sin(2\pi i)}{8}.
\]

Remark: The poles z = 2i and z = −2i are complex conjugates and their residues are also conjugate numbers.

b) We notice that z³ − 3z + 2 = (z − 1)²(z + 2) (one can use Horner's rule to decompose it), so the isolated singular points of f are z = 1 and z = −2.
Since \(\lim_{z\to1}\frac{e^{iz}}{z^3-3z+2} = \lim_{z\to1}\frac{e^{iz}}{(z-1)^2(z+2)}\) is of the form (≠0)/0 = ∞, one obtains that z = 1 is a pole of order 2 and
\[
\operatorname{res}(f,1) = \frac{1}{1!}\lim_{z\to1}\Bigl[(z-1)^2\frac{e^{iz}}{(z-1)^2(z+2)}\Bigr]'
= \lim_{z\to1}\Bigl(\frac{e^{iz}}{z+2}\Bigr)'
= \lim_{z\to1}\frac{ie^{iz}(z+2)-e^{iz}}{(z+2)^2}
= \frac{e^{i}(3i-1)}{9}.
\]
Since \(\lim_{z\to-2}\frac{e^{iz}}{(z-1)^2(z+2)}\) is of the form (≠0)/0 = ∞, z = −2 is a simple pole and
\[
\operatorname{res}(f,-2) = \lim_{z\to-2}(z+2)\frac{e^{iz}}{(z-1)^2(z+2)} = \lim_{z\to-2}\frac{e^{iz}}{(z-1)^2} = \frac{e^{-2i}}{9}.
\]

Remark: To compute the residue at a simple pole one can use either (5.6) or (5.7).

c) The isolated singular points of f are z = 0 (inside the cosine there is a ratio with denominator z) and z = ±i (from the equation z² + 1 = 0).
When we evaluate \(\lim_{z\to0}\frac{\cos\frac1z}{z^2+1}\), we find that the limit does not exist, so z = 0 is an essential singularity. This is the most difficult situation for computing the residue, since we need to find the coefficient c_{−1} of the Laurent series expansion of f around the essential singularity.
In our case, using the expansion of cos w with w = 1/z and the alternating geometric series 1/(1 + z²) = 1 − z² + z⁴ − ⋯, valid for |z| < 1, we get
\[
\frac{\cos\frac1z}{z^2+1} = \cos\frac1z\cdot\frac{1}{z^2+1}
= \Bigl(1-\frac{1}{2!z^2}+\frac{1}{4!z^4}-\cdots+\frac{(-1)^n}{(2n)!z^{2n}}+\cdots\Bigr)\cdot\bigl(1-z^2+z^4-\cdots+(-1)^nz^{2n}+\cdots\bigr).
\]
We know that c_{−1} is the coefficient of 1/z obtained from the above product; since only even powers of z occur, it is 0. It follows that res(f, 0) = 0.
Both singular points z = i and z = −i are simple poles and, using (5.6) or (5.7), one finds res(f, i) = cos i/(2i) and res(f, −i) = −cos i/(2i);

d) We have that z = −1 is the only isolated singular point of f and it is an essential singularity. In order to find the Laurent series expansion of f around z = −1, we use
\[
\cos\frac{\pi z}{z+1} = \cos\frac{\pi(z+1-1)}{z+1} = \cos\Bigl(\pi-\frac{\pi}{z+1}\Bigr) = -\cos\frac{\pi}{z+1}.
\]
Using the Taylor series expansion of cos w with w = π/(z + 1), one obtains
\[
f(z) = -(z+1)\Bigl(1-\frac{\pi^2}{2!(z+1)^2}+\cdots+\frac{(-1)^n\pi^{2n}}{(2n)!(z+1)^{2n}}+\cdots\Bigr)
= -(z+1)+\frac{\pi^2}{2!(z+1)}+\cdots+(-1)^{n+1}\frac{\pi^{2n}}{(2n)!(z+1)^{2n-1}}+\cdots,
\]
hence c_{−1} = π²/2! and also res(f, −1) = π²/2!.

W 19. Calculate the residues of the following functions:

a) f(z) = (z⁴ + 1)/((z − 1)(z² − 1));

b) f(z) = cosh(1/z)/(z² − 3z + 2);

c) f(z) = (e^{−1/z} + z)/(z − 1)²;

d) f(z) = 1/(5 − 4 cos z);

e) f(z) = z²/(1 − cos z);

f) f(z) = z/(z¹⁰⁰ + 2¹⁰⁰).

Answer. a) We have that z = 1 is a pole of order 2 and
\[
\operatorname{res}(f,1) = \lim_{z\to1}\Bigl(\frac{z^4+1}{z+1}\Bigr)' = \frac32,
\]
and z = −1 is a simple pole and
\[
\operatorname{res}(f,-1) = \frac12;
\]

b) The isolated singular points are z = 1 (simple pole), z = 2 (simple pole) and z = 0 (essential singular point). We obtain that
\[
\operatorname{res}(f,1) = -\cosh 1,\qquad \operatorname{res}(f,2) = \cosh\frac12
\]
and since
\[
\frac{\cosh\frac1z}{z^2-3z+2} = \cosh\frac1z\Bigl(\frac{1}{z-2}-\frac{1}{z-1}\Bigr)
= \Bigl(1+\frac{1}{2!z^2}+\cdots+\frac{1}{(2n)!z^{2n}}+\cdots\Bigr)
\cdot\Bigl(\Bigl(1-\frac12\Bigr)+\Bigl(1-\frac{1}{2^2}\Bigr)z+\cdots+\Bigl(1-\frac{1}{2^{n+1}}\Bigr)z^n+\cdots\Bigr),
\]
it follows that
\[
\operatorname{res}(f,0) = \frac{1}{2!}\Bigl(1-\frac{1}{2^2}\Bigr)+\frac{1}{4!}\Bigl(1-\frac{1}{2^4}\Bigr)+\cdots+\frac{1}{(2n)!}\Bigl(1-\frac{1}{2^{2n}}\Bigr)+\cdots
\]
\[
= \Bigl(\frac{1}{2!}+\cdots+\frac{1}{(2n)!}+\cdots\Bigr)-\Bigl(\frac{1}{2!\,2^2}+\frac{1}{4!\,2^4}+\cdots+\frac{1}{(2n)!\,2^{2n}}+\cdots\Bigr)
= (\cosh 1-1)-\Bigl(\cosh\frac12-1\Bigr) = \cosh 1-\cosh\frac12;
\]

c) We get that z = 1 is a pole of order 2 and
\[
\operatorname{res}(f,1) = \lim_{z\to1}\bigl[e^{-\frac1z}+z\bigr]' = \lim_{z\to1}\Bigl(\frac{1}{z^2}e^{-\frac1z}+1\Bigr) = e^{-1}+1,
\]
but z = 0 is an essential singularity. Since
\[
\frac{e^{-\frac1z}+z}{(z-1)^2} = \bigl(e^{-\frac1z}+z\bigr)\cdot\Bigl(\frac{1}{1-z}\Bigr)'
= \Bigl(z+1-\frac{1}{1!z}+\frac{1}{2!z^2}-\cdots+\frac{(-1)^n}{n!z^n}+\cdots\Bigr)\cdot\bigl(1+2z+3z^2+\cdots+(n+1)z^n+\cdots\bigr),
\]
we obtain that
\[
\operatorname{res}(f,0) = -\frac{1}{1!}+\frac{2}{2!}-\frac{3}{3!}+\cdots+\frac{(-1)^n n}{n!}+\cdots
= -1+\frac{1}{1!}-\frac{1}{2!}+\frac{1}{3!}-\cdots+\frac{(-1)^n}{(n-1)!}+\cdots = -e^{-1};
\]

d) From the equation 5 − 4 cos z = 0 one obtains z_k = ±i ln 2 + 2kπ, k ∈ Z, all simple poles, so by (5.7)
\[
\operatorname{res}(f,z_k) = \left.\frac{1}{(5-4\cos z)'}\right|_{z=z_k} = \frac{1}{4\sin z_k};
\]

e) By solving the equation 1 − cos z = 0, we obtain the solutions z_k = 2kπ, k ∈ Z. Since \(\lim_{z\to0} f(z) = 2\) is finite, it follows that z = 0 is a removable singular point and res(f, 0) = 0. On the other hand, z_k = 2kπ, k ∈ Z*, are poles of order 2, since they are double zeros of 1 − cos z (both 1 − cos z and (1 − cos z)' = sin z vanish there, while (1 − cos z)'' = cos z does not). We will use the Laurent series expansion to find the residues at these points.
Since
\[
\frac{z^2}{1-\cos z}
= \frac{(z-2k\pi)^2+4k\pi(z-2k\pi)+4k^2\pi^2}{\dfrac{(z-2k\pi)^2}{2!}-\dfrac{(z-2k\pi)^4}{4!}+\cdots}
= \frac{1}{(z-2k\pi)^2}\cdot\frac{(z-2k\pi)^2+4k\pi(z-2k\pi)+4k^2\pi^2}{\dfrac{1}{2!}-\dfrac{(z-2k\pi)^2}{4!}+\cdots}
\]
\[
= \frac{1}{(z-2k\pi)^2}\bigl(a_0+a_1(z-2k\pi)+a_2(z-2k\pi)^2+\cdots\bigr),
\]
it follows that
\[
\operatorname{res}(f,2k\pi) = a_1 = 8k\pi,\quad k\in\mathbb{Z}^*;
\]

f) The equation z¹⁰⁰ = −2¹⁰⁰ has one hundred solutions, namely
\[
z_k = 2\Bigl(\cos\frac{(2k+1)\pi}{100}+i\sin\frac{(2k+1)\pi}{100}\Bigr),\quad k = \overline{0,99},
\]
which are all simple poles. We get that
\[
\operatorname{res}(f,z_k) = \frac{z_k}{100z_k^{99}} = \frac{z_k^2}{100z_k^{100}} = -\frac{z_k^2}{100\cdot2^{100}}.
\]

E 24. Calculate \(\oint_\Gamma f(z)\,dz\) in the following cases:

a) f(z) = cosh z/(z − 1)², Γ : |z| = 2;

b) f(z) = (e^{2z} + z²)/(z² + 3), Γ : |z − i| = 1;

c) f(z) = sin z/(z²(z − 3)), Γ : |z − 2| = 2;

d) f(z) = z²/(z² + 1)³, Γ : |z| = 4;

e) f(z) = e^{−1/z}/(z² + z), Γ : x² + 4y² = 4.

Solution. The first and the second integrals have already been solved in the previous chapter, using Theorem 4.2.8 and formula (4.8) (Cauchy's Integral Formula for Derivatives), respectively Theorem 4.2.5 and formula (4.7) (Cauchy's Integral Formula).
a) We have that |z| = 2 is the circle of radius 2, centred at the origin. Also, D₂(0) is a simply connected domain, bounded by Γ. We obtain that z = 1 is a pole of order 2 of the function f in D₂(0) and
\[
\operatorname{res}(f,1) = \lim_{z\to1}\Bigl[(z-1)^2\frac{\cosh z}{(z-1)^2}\Bigr]' = \lim_{z\to1}(\cosh z)' = \lim_{z\to1}\sinh z = \sinh 1,
\]

hence, from (5.2), we get that
\[
\oint_{|z|=2}\frac{\cosh z}{(z-1)^2}\,dz = 2\pi i\operatorname{res}(f,1) = 2\pi i\sinh 1;
\]

b) We have that |z − i| = 1 is the circle of radius 1, centred at the point M(0, 1). Also, D₁(i) is a simply connected domain, bounded by Γ.
The isolated singular points of the function f are z = ±i√3. The first one, z = i√3, corresponds to M₁(0, √3) ∈ D₁(i), and the second one, z = −i√3, corresponds to M₂(0, −√3) ∉ D₁(i).
We get that z = i√3 is a simple pole of the function f and that
\[
\operatorname{res}(f,i\sqrt3) = \left.\frac{e^{2z}+z^2}{2z}\right|_{z=i\sqrt3} = \frac{e^{2i\sqrt3}-3}{2i\sqrt3}.
\]
Since z = −i√3 ∉ D₁(i), from (5.2) one obtains
\[
\oint_{|z-i|=1}\frac{e^{2z}+z^2}{z^2+3}\,dz = 2\pi i\operatorname{res}(f,i\sqrt3) = \frac{\pi}{\sqrt3}\bigl(e^{2i\sqrt3}-3\bigr);
\]

c) We have that |z − 2| = 2 is the circle of radius 2, centred at the point M(2, 0). Also, D₂(2) is a simply connected domain, bounded by Γ.
The function f(z) = sin z/(z²(z − 3)) admits two isolated singular points, z = 0 and z = 3.
Since \(\lim_{z\to0}\frac{\sin z}{z^2} = \lim_{z\to0}\frac{\sin z}{z}\cdot\frac1z = \infty\), while zf(z) has a finite limit, it follows that z = 0 is a simple pole and
\[
\operatorname{res}(f,0) = \lim_{z\to0}z\cdot\frac{\sin z}{z^2(z-3)} = \lim_{z\to0}\frac{\sin z}{z}\cdot\frac{1}{z-3} = -\frac13.
\]
Obviously z = 3 is also a simple pole, with res(f, 3) = sin 3/9. Since z = 3 belongs to D₂(2) and z = 0 belongs to Γ, using (5.3) it follows that
\[
\oint_{|z-2|=2}\frac{\sin z}{z^2(z-3)}\,dz = 2\pi i\operatorname{res}(f,3)+\pi i\operatorname{res}(f,0) = \frac{2\pi i\sin3}{9}-\frac{\pi i}{3};
\]

d) We have that |z| = 4 is the circle of radius 4, centred at the origin. Also, D₄(0) is a simply connected domain, bounded by Γ.
The isolated singular points of f are z = ±i, both poles of order 3 and both belonging to D₄(0). We get that
\[
\operatorname{res}(f,i) = \frac{1}{2!}\lim_{z\to i}\Bigl[(z-i)^3\frac{z^2}{(z^2+1)^3}\Bigr]''
= \frac12\lim_{z\to i}\Bigl[\frac{z^2}{(z+i)^3}\Bigr]''
= \frac12\lim_{z\to i}\Bigl[\frac{2zi-z^2}{(z+i)^4}\Bigr]'
= \frac12\lim_{z\to i}\frac{(2i-2z)(z+i)-4(2zi-z^2)}{(z+i)^5}
= \frac{1}{16i}
\]
and
\[
\operatorname{res}(f,-i) = \frac{1}{2!}\lim_{z\to -i}\Bigl[(z+i)^3\frac{z^2}{(z^2+1)^3}\Bigr]''
= \frac12\lim_{z\to -i}\Bigl[\frac{z^2}{(z-i)^3}\Bigr]''
= \frac12\lim_{z\to -i}\Bigl[\frac{-2zi-z^2}{(z-i)^4}\Bigr]'
= \frac12\lim_{z\to -i}\frac{(-2i-2z)(z-i)+4(2zi+z^2)}{(z-i)^5}
= -\frac{1}{16i},
\]
hence, using (5.2), we obtain that
\[
\oint_{|z|=4}\frac{z^2}{(z^2+1)^3}\,dz = 2\pi i\bigl[\operatorname{res}(f,i)+\operatorname{res}(f,-i)\bigr] = 0;
\]

e) We have that x² + 4y² = 4 ⇔ x²/4 + y² = 1 is an ellipse of semi-axes a = 2 and b = 1, centred at the origin. Let Δ be the domain bounded by this ellipse, which is a simply connected set.
We get that z = −1 and z = 0 are the isolated singular points of f, both belonging to Δ.
It is obvious that z = −1 is a simple pole and that
\[
\operatorname{res}(f,-1) = \left.\frac{e^{-\frac1z}}{z}\right|_{z=-1} = -e,
\]
while z = 0 is an essential singular point (\(\lim_{z\to0}e^{-\frac1z}\) does not exist in C), hence res(f, 0) = c_{−1}, the coefficient of 1/z from the Laurent series expansion of the function f around 0. If 0 < |z| < 1, then
\[
f(z) = \frac{e^{-\frac1z}}{z}\cdot\frac{1}{z+1}
= \Bigl(\frac1z-\frac{1}{1!z^2}+\cdots+\frac{(-1)^n}{n!z^{n+1}}+\cdots\Bigr)\cdot\bigl(1-z+z^2-\cdots+(-1)^nz^n+\cdots\bigr).
\]
The coefficient of 1/z in this product collects, for each n ≥ 0, the term (−1)ⁿ/(n! z^{n+1}) from the first factor multiplied by the term (−1)ⁿzⁿ from the second, hence
\[
c_{-1} = \sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\,(-1)^n = \sum_{n=0}^{\infty}\frac{1}{n!} = e.
\]
(Consistently, res(f, ∞) = 0, since f(z) behaves like 1/z² at infinity, so the two finite residues must cancel each other.) We have obtained that
\[
\oint_{x^2+4y^2=4}\frac{e^{-\frac1z}}{z^2+z}\,dz = 2\pi i\bigl[\operatorname{res}(f,-1)+\operatorname{res}(f,0)\bigr] = 2\pi i(-e+e) = 0.
\]
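The value can be confirmed numerically by integrating over the parametrized ellipse (a sketch; z(t) = 2 cos t + i sin t is the standard parametrization of this ellipse):

g   = @(z) exp(-1./z)./(z.^2 + z);
zt  = @(t) 2*cos(t) + 1i*sin(t);     % the ellipse x^2 + 4y^2 = 4
dzt = @(t) -2*sin(t) + 1i*cos(t);    % its derivative
I = integral(@(t) g(zt(t)).*dzt(t), 0, 2*pi)   % approximately 0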

W 20. Calculate \(I = \oint_\Gamma f(z)\,dz\) in the following cases:

a) f(z) = z/(z⁴ − 1), Γ : |z| = 2;

b) f(z) = (cos(2z) − 1)/z², Γ : |z| = R > 0;

c) f(z) = 1/(z³ + 8), Γ : |z − 2| = R > 0;

d) f(z) = 1/(z² + 1)ⁿ, n ∈ Z, n ≥ 1, Γ : |z| = 2;

e) f(z) = e^{1/z²}/(z − 1), Γ : |z| = R > 0;

f) f(z) = z/sin²z, Γ : |z| = 4.

Answer. a) We get that
\[
\oint_\Gamma f(z)\,dz = 2\pi i\bigl[\operatorname{res}(f,1)+\operatorname{res}(f,-1)+\operatorname{res}(f,i)+\operatorname{res}(f,-i)\bigr] = 0;
\]

b) Since z = 0 ∈ D_R(0) is a removable singular point, the integral is null;

c) The function f has three isolated singular points, z₁ = −2 and z₂,₃ = 1 ± i√3. Since |z₁ − 2| = 4 and |z₂,₃ − 2| = 2, it is necessary to discuss the following situations:
1. if R < 2, then the integral is null, due to Theorem 4.2.1 (Cauchy's Fundamental Theorem);
2. if R = 2, then I = πi[res(f, 1 + i√3) + res(f, 1 − i√3)] = −πi/12;
3. if 2 < R < 4, then I = 2πi[res(f, 1 + i√3) + res(f, 1 − i√3)] = −πi/6;
4. if R = 4, then I = 2πi[res(f, 1 + i√3) + res(f, 1 − i√3)] + πi·res(f, −2) = −πi/12;
5. if R > 4, then I = 2πi[res(f, 1 + i√3) + res(f, 1 − i√3) + res(f, −2)] = 0;

d) (If n = 0, then I = 0 due to Theorem 4.2.1, Cauchy's Fundamental Theorem.) If n ≥ 1, then ±i are poles of order n. Since
\[
\operatorname{res}(f,i) = \frac{1}{(n-1)!}\lim_{z\to i}\Bigl[\frac{1}{(z+i)^n}\Bigr]^{(n-1)} = \frac{(-1)^{n-1}(2n-2)!}{((n-1)!)^2}\cdot\frac{1}{(2i)^{2n-1}},
\]
\[
\operatorname{res}(f,-i) = \frac{1}{(n-1)!}\lim_{z\to -i}\Bigl[\frac{1}{(z-i)^n}\Bigr]^{(n-1)} = -\frac{(-1)^{n-1}(2n-2)!}{((n-1)!)^2}\cdot\frac{1}{(2i)^{2n-1}},
\]
it follows that I = 0;

e) We have the following situations:
1. if R < 1, then I = 2πi·res(f, 0) = 2πi(1 − e);
2. if R = 1, then I = 2πi·res(f, 0) + πi·res(f, 1) = πi(2 − e);
3. if R > 1, then I = 2πi(res(f, 0) + res(f, 1)) = 2πi;

f) We have that I = 2πi(res(f, −π) + res(f, 0) + res(f, π)) = 6πi.

E 25. Calculate the following real integrals, using residues:

a) \(I = \int_0^{2\pi}\frac{\cos x}{(5+4\sin x)^2}\,dx\);

b) \(I = \int_0^{\infty}\frac{x^2+2}{x^4+5x^2+4}\,dx\);

c) \(I = \int_{-\infty}^{\infty}\frac{x\sin(\pi x)}{2x^2-2x+1}\,dx\);

d) \(I = \int_0^{\infty}\frac{x\sin x+\cos x}{(x^2+1)^2}\,dx\).

Solution. a) Using the change of variable z = e^{ix}, one obtains:
x ∈ [0, 2π) → |z| = 1 (the unit circle);
cos x = (z + 1/z)/2 = (z² + 1)/(2z);
sin x = (z − 1/z)/(2i) = (z² − 1)/(2iz);
dz = ie^{ix} dx = iz dx ⇒ dx = dz/(iz).
The integral I becomes
\[
I = \oint_{|z|=1}\frac{\frac{z^2+1}{2z}}{\Bigl(5+4\,\frac{z^2-1}{2iz}\Bigr)^2}\cdot\frac{dz}{iz}
= -\frac{1}{2i}\oint_{|z|=1}\frac{z^2+1}{(2z^2+5iz-2)^2}\,dz.
\]
Denoting by g(z) = (z² + 1)/(2z² + 5iz − 2)², one finds that the function g has two isolated singular points, z = −2i and z = −i/2 (the solutions of the equation 2z² + 5iz − 2 = 0). We only need to take into consideration the points inside the disk D₁(0), so we choose z = −i/2, which is a pole of order 2.
Since 2z² + 5iz − 2 = 2(z + i/2)(z + 2i), we get
\[
\operatorname{res}\Bigl(g,-\frac i2\Bigr)
= \frac{1}{1!}\lim_{z\to-\frac i2}\Bigl[\Bigl(z+\frac i2\Bigr)^2\frac{z^2+1}{4\bigl(z+\frac i2\bigr)^2(z+2i)^2}\Bigr]'
= \frac14\lim_{z\to-\frac i2}\Bigl[\frac{z^2+1}{(z+2i)^2}\Bigr]'
= \frac14\lim_{z\to-\frac i2}\frac{4iz-2}{(z+2i)^3} = 0,
\]
and the integral is I = −(1/(2i))·2πi·res(g, −i/2) = 0;
b) First of all, one notices that the function f(x) = (x² + 2)/(x⁴ + 5x² + 4) is an even function (one verifies that f(−x) = f(x), ∀x ∈ R), hence
\[
I = \frac12\int_{-\infty}^{\infty}\frac{x^2+2}{x^4+5x^2+4}\,dx.
\]
The complex function f(z) = (z² + 2)/(z⁴ + 5z² + 4) has four isolated singular points, z = ±i and z = ±2i (the solutions of the equation z⁴ + 5z² + 4 = 0 ⇔ (z² + 1)(z² + 4) = 0). In the upper half-plane there are only z = i and z = 2i, both simple poles.
Since
\[
\operatorname{res}(f,i) = \left.\frac{z^2+2}{4z^3+10z}\right|_{z=i} = \frac{1}{6i}
\quad\text{and}\quad
\operatorname{res}(f,2i) = \left.\frac{z^2+2}{4z^3+10z}\right|_{z=2i} = \frac{1}{6i},
\]
it follows that I = (1/2)·2πi[res(f, i) + res(f, 2i)] = π/3;

c) Consider the integral I = Im J, where
\[
J = \int_{-\infty}^{\infty}\frac{xe^{i\pi x}}{2x^2-2x+1}\,dx.
\]
−∞ 2x − 2x + 1

The complex function f(z) = ze^{iπz}/(2z² − 2z + 1) has two isolated singular points, z = (1 ± i)/2, the solutions of the equation 2z² − 2z + 1 = 0. Only z = (1 + i)/2 is in the upper half-plane, and it is a simple pole.
Since
\[
\operatorname{res}\Bigl(f,\frac{1+i}{2}\Bigr) = \left.\frac{ze^{i\pi z}}{4z-2}\right|_{z=\frac{1+i}{2}} = \frac{1+i}{4i}\,e^{i\frac\pi2-\frac\pi2},
\]
it follows that
\[
J = 2\pi i\cdot\operatorname{res}\Bigl(f,\frac{1+i}{2}\Bigr) = \frac\pi2(1+i)e^{i\frac\pi2-\frac\pi2},
\]
hence
\[
I = \operatorname{Im}J = \operatorname{Im}\Bigl(\frac\pi2(1+i)e^{i\frac\pi2}e^{-\frac\pi2}\Bigr)
= \operatorname{Im}\Bigl(\frac\pi2(1+i)e^{-\frac\pi2}\bigl(\cos\frac\pi2+i\sin\frac\pi2\bigr)\Bigr)
= \operatorname{Im}\Bigl(\frac\pi2(i-1)e^{-\frac\pi2}\Bigr) = \frac\pi2 e^{-\frac\pi2};
\]
d) Let us split the integral: \(I = \int_0^{\infty}\frac{x\sin x}{(x^2+1)^2}\,dx + \int_0^{\infty}\frac{\cos x}{(x^2+1)^2}\,dx\). Since f₁(x) = x sin x/(x² + 1)² and f₂(x) = cos x/(x² + 1)² are both even functions, we have that
\[
I = \frac12\Bigl(\int_{-\infty}^{\infty}\frac{x\sin x}{(x^2+1)^2}\,dx + \int_{-\infty}^{\infty}\frac{\cos x}{(x^2+1)^2}\,dx\Bigr).
\]
Let us denote \(J_1 = \int_{-\infty}^{\infty}\frac{xe^{ix}}{(x^2+1)^2}\,dx\). The complex function f₁(z) = ze^{iz}/(z² + 1)² has in the upper half-plane only the isolated singular point z = i, which is a pole of order 2.
Since
\[
\operatorname{res}(f_1,i) = \lim_{z\to i}\Bigl[(z-i)^2\frac{ze^{iz}}{(z^2+1)^2}\Bigr]'
= \lim_{z\to i}\Bigl[\frac{ze^{iz}}{(z+i)^2}\Bigr]'
= \lim_{z\to i}\frac{e^{iz}\bigl((1+zi)(z+i)-2z\bigr)}{(z+i)^3}
= \frac{e^{-1}}{4},
\]
it follows that \(J_1 = 2\pi i\cdot\frac{e^{-1}}{4} = \frac{\pi ie^{-1}}{2}\) and
\[
\int_{-\infty}^{\infty}\frac{x\sin x}{(x^2+1)^2}\,dx = \operatorname{Im}(J_1) = \frac{\pi e^{-1}}{2}.
\]
Similarly, taking \(J_2 = \int_{-\infty}^{\infty}\frac{e^{ix}}{(x^2+1)^2}\,dx\), one obtains J₂ = πe^{−1} and
\[
\int_{-\infty}^{\infty}\frac{\cos x}{(x^2+1)^2}\,dx = \operatorname{Re}(J_2) = \pi e^{-1}.
\]
From the last two equalities we get that
\[
I = \frac12\Bigl(\frac{\pi e^{-1}}{2}+\pi e^{-1}\Bigr) = \frac{3\pi e^{-1}}{4}.
\]
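A numerical confirmation of I = 3πe⁻¹/4 ≈ 0.8668 (a sketch; the integrand is absolutely convergent, so MATLAB's integral handles the infinite interval):

I = integral(@(x) (x.*sin(x) + cos(x))./(x.^2 + 1).^2, 0, Inf)   % approximately 0.8668
3*pi*exp(-1)/4                                                    % the residue-based value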

W 21. Calculate the following real integrals, using residues:

a) \(I = \int_0^{2\pi}\frac{1}{\sin x+2\cos x+3}\,dx\);

b) \(I = \int_0^{2\pi}\frac{1}{(5-4\cos x)^2}\,dx\);

c) \(I = \int_0^{2\pi}\frac{1}{5-4\cos(nx)}\,dx\), n ∈ N*;

d) \(I = \int_0^{2\pi}\frac{1}{a+\cos x}\,dx\), a ∈ R, |a| > 1;

e) \(I = \int_0^{2\pi}\frac{(1-a^2)e^{inx}}{1-2a\cos x+a^2}\,dx\), n ∈ Z₊, a ∈ R, |a| ≠ 1;

f) \(I = \int_{-\infty}^{\infty}\frac{x^2+x+1}{1+x^4}\,dx\);

g) \(I = \int_0^{\infty}\frac{1}{(2x^2+1)^3}\,dx\);

h) \(I = \int_{-\infty}^{\infty}\frac{x^4-x^3+1}{1+x^{12}}\,dx\);

i) \(I = \int_0^{\infty}\frac{x^3\sin x}{(x^2+4)^2}\,dx\);

j) \(I = \int_{-\infty}^{\infty}\frac{(x+1)\cos(\pi x)}{x^2+a^2}\,dx\), a ∈ R, a ≠ 0;

k) \(I = \int_0^{\infty}\frac{\cos^2 x}{x^2+1}\,dx\);

l) \(I = \int_0^{\infty}\frac{x\sin(ax)+\cos(bx)}{x^2+c^2}\,dx\), a, b, c ∈ R*₊.

Answer. a) I = π; b) I = 10π/27; c) I = 2π/3; d) I = 2π sgn(a)/√(a² − 1);
e) We need to discuss the values of a. If 0 < |a| < 1, then I = 2πaⁿ, and for |a| > 1 one obtains I = −2π/aⁿ. In the case a = 0, we have to discuss the values of n: if n ≥ 1 the integral is null, and for n = 0 one obtains I = 2π;
f) I = π√2; g) I = 3π√2/32; h) I = (2π/3)(cos(π/12) + sin(π/12));
i) I = 0; j) I = (π/|a|)e^{−|a|π};
k) \(I = \int_0^{\infty}\frac{1+\cos(2x)}{2(x^2+1)}\,dx = \frac12\Bigl(\int_0^{\infty}\frac{dx}{x^2+1}+\int_0^{\infty}\frac{\cos(2x)}{x^2+1}\,dx\Bigr)\). Since \(\int_0^{\infty}\frac{dx}{x^2+1} = \frac\pi2\) and \(\int_0^{\infty}\frac{\cos(2x)}{x^2+1}\,dx = \frac12\int_{-\infty}^{\infty}\frac{\cos(2x)}{x^2+1}\,dx = \frac{\pi e^{-2}}{2}\), we easily get that I = (π/4)(1 + e^{−2});
l) \(I = \frac\pi2\Bigl(e^{-ac}+\frac{e^{-bc}}{c}\Bigr)\).
W 22. Let f and g be two complex functions.

a) Determine f analytic, f(z) = u(x, y) + iv(x, y), z = x + iy, if u(x, y) = φ(x² − y²) and f(0) = 0, f(i) = −1;

b) For g(z) = f(z)·e^{2/z}/(z² − z), compute res(g, 1), res(g, 0) and \(\oint_{|z|=2} g(z)\,dz\).

W 23. Let F : D = R × (−π/2, π/2) → R,
F(x, y) = e^x(ax cos y − by sin y), a, b ∈ R.

a) Determine a relation between a and b such that F is a harmonic function in D;

b) Find an analytic function f, if Re f = F(x, y) + φ(x² − y²), φ ∈ C².

Some topics which are not included in this book for lack of space can be found in the Bibliography. For instance, limits and continuity of functions of a complex variable, complex sequences and series, and conformal mappings are presented in [5], [20, Chapters 1 and 5], [18], [21], [24], [33], [35], [37].

Chapter 6

SOFTWARE

6.1 Stability with MATLAB

The Exponential Matrix
The linear continuous-time system with constant coefficients Y'(t) = AY(t), t ∈ R, with the initial condition Y(0) = Y₀, has the solution Y(t) = e^{At}Y₀.
The syntax is B = expm(A); it computes the exponential of the matrix A.
In the case of asymptotically stable systems, the speed of convergence of the exponential matrix to the null matrix depends on the distance between the eigenvalues (in the left half-plane) and the imaginary axis (compare Examples 6.1.1-6.1.4). The eigenvalues −5, −6, −6.9 and −20 of the drift matrices A below lie at increasing distances from the imaginary axis, and the entries of the corresponding exponential matrices at t = 1 decrease accordingly (from 0.0067 to 10⁻⁸ × 0.2061).

Example 6.1.1. A =[-5 1;0 -5]; t = 0:0.1:1;


for i=1:11
EA=expm(A*t(i))
end
The answer is, for i = 11 (i.e. t(i) = 1):

EA =
0.0067 0.0067
0 0.0067

Example 6.1.2. A = [-6 1;0 -6];
answer:
EA =
0.0025 0.0025
0 0.0025
Example 6.1.3. A = [-6.9 1;0 -6.9];
answer:
EA =
0.0010 0.0010
0 0.0010
Example 6.1.4. A = [-20 1;0 -20];
answer:
EA = 1.0e − 008∗
0.2061 0.2061
0 0.2061
Example 6.1.5. A = [-5 1;0 -5]; t = 0:1:10;
for i=1:11
EA=expm(A*t(i))
end
answer:
EA = 1.0e-020 *
0.0193 0.1930
0 0.0193
Example 6.1.6. Solve, for t = 3, the initial value problem Y'(t) = AY(t), Y(0) = Y₀,
\[
A = \begin{pmatrix} -2 & 6\\ 3 & -4 \end{pmatrix},\qquad Y_0 = \begin{pmatrix} 1\\ 5 \end{pmatrix}.
\]
Solution. A = [-2 6;3 -4]; Y0 = [1 5]'; Y3 = expm(A*3)*Y0;
answer
\[
Y3 = \begin{pmatrix} 239.0996\\ 133.8519 \end{pmatrix}.
\]
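The same initial value problem can be cross-checked with a numerical ODE solver (a sketch; ode45 is standard MATLAB, and its default tolerances reproduce the matrix-exponential answer to a few digits):

A = [-2 6; 3 -4]; Y0 = [1; 5];
[t, Y] = ode45(@(t, y) A*y, [0 3], Y0);   % integrate Y' = AY on [0, 3]
Y(end, :)'                                % approximately [239.0996; 133.8519]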

Stability
The commands eig and lyap can be used to verify the stability of linear continuous-time systems with constant coefficients Y'(x) = AY(x), x ∈ R, where A is a real n × n matrix (see Theorem 1.3.2).

The syntax is one of the following:

1. Λ = eig(A);

2. [T, D] = eig(A);

3. [T, D] = eig(A, ’nobalance’).

Action: determines the eigenvalues and the eigenvectors of the matrix A.


The first command, Λ = eig(A), determines the vector Λ = {λ1 , . . . , λn }
of the eigenvalues of the matrix A.
The second command, [T, D] = eig(A), computes the diagonal matrix
D, which has on the main diagonal the eigenvalues of the matrix A and the
matrix T , which has as its columns the eigenvectors corresponding to these
eigenvalues (hence A ∗ T = T ∗ D). If all the eigenvalues λ of A have the
geometric multiplicity mg (λ) equal to the algebraic multiplicity ma (λ), then
the matrix T is nonsingular and it is the transition matrix. If mg (λ) < ma (λ),
then the columns of T which correspond to eigenvalues from a Jordan cell
of A are linearly dependent and depend on the eigenvector corresponding to
that Jordan cell.
By Theorem 1.3.2, a linear continuous SDEs is asymptotically stable if
and only if all the eigenvalues λ of the drift matrix A verify Re λ < 0.
Now we are going to present some examples. We have to check the sta-
bility of the continuous-time LTIs with the following drift matrices.

Example 6.1.7. A = [0 1;-2 -3]; eig(A)


ans = −1, −2;
Answer: asymptotically stable.

Example 6.1.8. A = [2 1;-10 -4]; eig(A)


ans = −1.0000 + 1.0000i, −1.0000 − 1.0000i;
Answer: asymptotically stable.

Example 6.1.9. A = [-8 -2;20 5]; eig(A)


ans = −3, 0;
Answer: stable, but not asymptotically stable (why?).

Example 6.1.10. A = [-21.5000 -6.0000;60.5000 17.0000]; eig(A)


ans = −5.0000, 0.5000;
Answer: unstable.

Now we have to determine the eigenvalues and the eigenvectors of the following drift matrices (for continuous-time systems).

Example 6.1.11. A = [-8 -6 6;6 2 -3;1 -1 0];
Solution. [T, D] = eig(A); dT = det(T)
Answer:
T =
 0.4516  0.2673 -0.0000
-0.7903 -0.8018  0.7071
-0.4140 -0.5345  0.7071
D =
-3.0000  0       0
 0      -2.0000  0
 0       0      -1.0000
dT = -0.0142.
Conclusion: The eigenvalues −1, −2, −3 verify the condition Re λ < 0, hence the system is asymptotically stable. The eigenvectors (the columns of T) are linearly independent (since det(T) ≠ 0). We can also observe that the eigenvalues are distinct.
Example 6.1.12. A = [1 6 -10 -8;-1 -3 -3 5;1 2 -7 -1;0 1 -4 -2];
Solution. [T, D] = eig(A)
Answer:
T =
 0.9049  0.4264  0.4264 -0.4264
-0.0312 -0.8528 -0.8528  0.8528
 0.2496 -0.2132 -0.2132  0.2132
 0.3432 -0.2132 -0.2132  0.2132
D =
-5  0  0  0
 0 -2  0  0
 0  0 -2  0
 0  0  0 -2

Conclusion: The eigenvalues −5, −2, −2, −2 verify the condition Re λ < 0, hence the system is asymptotically stable.
The second column of T is an eigenvector v₂ corresponding to the multiple eigenvalue λ = −2 of A (with m_g(λ) = 1 < 3 = m_a(λ)). The third and the fourth columns are v₂ and −v₂ respectively, hence the matrix T is singular. This situation corresponds to the following Jordan canonical form of A:
J =
-5  0  0  0
 0 -2  1  0
 0  0 -2  1
 0  0  0 -2
where the 3 × 3 Jordan cell (the second cell) is associated with the eigenvector v₂.

Example 6.1.13. A = [0 5 0 0;-5 0 0 0;0 0 0 -5;0 0 5 0];
Solution. [T, D] = eig(A); dT = det(T)
Answer:
T =
  0.0000 - 0.7071i   0.0000 + 0.7071i   0.0000 + 0.0000i   0.0000 + 0.0000i
  0.7071 + 0.0000i   0.7071 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
  0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 - 0.7071i   0.0000 + 0.7071i
  0.0000 + 0.0000i   0.0000 + 0.0000i  -0.7071 + 0.0000i  -0.7071 + 0.0000i
D =
  0.0000 + 5.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
  0.0000 + 0.0000i   0.0000 - 5.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i
  0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 5.0000i   0.0000 + 0.0000i
  0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 + 0.0000i   0.0000 - 5.0000i
dT = 1.
Conclusion: The eigenvalues λ₁ = 5i and λ₂ = −5i have algebraic multiplicities m_a(λ_i) = 2, i ∈ {1, 2}. The corresponding eigenvectors are linearly independent, therefore m_g(λ_i) = 2 = m_a(λ_i), ∀i ∈ {1, 2}. The system is stable, but not asymptotically stable, since Re(λ_{1,2}) = 0.

The Lyapunov Equation

The command lyap solves the Lyapunov equation AP + PA' = −Q, where A' denotes the transpose of A. The syntax is P = lyap(A, Q).
In the following examples we will solve Lyapunov's equation and we are going to check the stability of the SDEs.

Example 6.1.14. A = [-1 0 0;1 -2 0;2 3 -1]; Q = [2 -4 1;-4 10 -4;1 -4 6];
P = lyap(A, Q)
eig(P)
The answer is
P =
 1 -1  0
-1  2  0
 0  0  3
ans = 0.3820, 2.6180, 3.0000.
The eigenvalues of the matrix P are positive, hence the matrix P is positive definite. It follows that A is a stable matrix (i.e. the system is asymptotically stable).

Example 6.1.15. A = [6.0000 2.0000;-16.0000 -5.0000]; Q = [1 -1;-1 2];
P = lyap(A, Q)
eig(P )
The answer is the following:
 
−3.7500 11.0000
P =
11.0000 −35.0000
and the eigenvalues are −38.4837 and −0.2663.
The eigenvalues of the matrix P are negative, hence the matrix P is
negative definite. It follows that A is an unstable matrix.

Now we will solve the Lyapunov equation using formula (1.41). We will use the command kron, which has the syntax K = kron(A, B). Action: if A = [a_{ij}] is an m × n matrix and B is a p × q matrix, then K = kron(A, B) returns the Kronecker product of the matrices A and B, i.e. the mp × nq matrix K = A ⊗ B = [a_{ij}B].
Example 6.1.16. A = [-1 2;3 -4]; B = [1 0 1;2 5 -2];
K = kron(A, B)
The answer is the following:
 
−1 0 −1 2 0 2
 −2 −5 2 4 10 −4 
K= .
 3 0 3 −4 0 −4 
6 15 −6 −8 −20 8

Example 6.1.17. A = [0 1;-2 -3]; n = size(A); Q = [4 1;1 2];
I = eye(n); % the identity matrix of order n
L = kron(A', I) + kron(I, A'); % the matrix L = A' ⊗ I + I ⊗ A'
q = Q(:); % the vector formed by stacking the columns of the matrix Q
p = -inv(L)*q; % solves the linear system L*p = -q
P = reshape(p, 2, 2); % rebuilds the matrix P from the vector p
D1 = det(P(1,1)); % the principal minors of the matrix P
D2 = det(P(1:2,1:2)) % or D2 = det(P)
The answer is:
\[
L = \begin{pmatrix} 0 & -2 & -2 & 0\\ 1 & -3 & 0 & -2\\ 1 & 0 & -3 & -2\\ 0 & 1 & 1 & -6 \end{pmatrix},
\]
\[
q = \begin{pmatrix} 4\\ 1\\ 1\\ 2 \end{pmatrix},\quad
p = \begin{pmatrix} 3.3333\\ 1.0000\\ 1.0000\\ 0.6667 \end{pmatrix},\quad
P = \begin{pmatrix} 3.3333 & 1.0000\\ 1.0000 & 0.6667 \end{pmatrix},
\]
D1 = 3.3333 > 0, D2 = 1.2222 > 0. The matrix P is positive definite, hence the system is asymptotically stable.
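Note that the matrix L above encodes the equation A'P + PA = −Q. Assuming the Control System Toolbox is available, the same P is therefore returned, up to rounding, by the built-in solver applied to the transposed matrix:

P2 = lyap(A', Q)   % solves A'*P2 + P2*A = -Q; P2 should coincide with P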

Stability of linear discrete-time systems

For linear discrete-time systems Y(x + 1) = AY(x), x ∈ Z₊, where A is an n × n matrix, one can prove a result similar to Theorem 1.3.2, in which Re λ < 0, Re λ > 0 and Re λ = 0 are replaced by |λ| < 1, |λ| > 1 and |λ| = 1, respectively.
In the following examples we will check the stability of the discrete-time linear time-invariant systems (LTI systems) with the given drift matrices.

Example 6.1.18. A = [2.5 0.6;-11.5 -2.80]; eig(A)


ans = 0.2, −0.5;
Answer: asymptotically stable.

Example 6.1.19. A = [5.0000 1.3750;-19.0000 -5.1250]; eig(A)


ans = −0.0625 + 0.7043i, −0.0625 − 0.7043i;
z(1) = −0.0625 + 0.7043i; z(2) = −0.0625 − 0.7043i;
for k = 1 : 2
mod(k) = abs(z(k)) & computes the modulus of the eigenvalues
end mod = 0.7071, mod = 0.7071;
Answer: asymptotically stable.
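A more compact test (a one-line sketch using only built-in functions) computes the spectral radius of the matrix A above directly:

r = max(abs(eig(A)))   % r = 0.7071 < 1, hence asymptotically stable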

Example 6.1.20. A = [3 1;-13 -4]; eig(A)


ans = −0.5000 + 0.8660i, −0.5000 − 0.8660i;
z(1) = −0.5000 + 0.8660i; z(2) = −0.5000 − 0.8660i;
for k = 1 : 2
mod(k) = abs(z(k))
end mod = 1.0000, mod = 1.0000;
Answer: stable but not asymptotically stable (why?).

Example 6.1.21. A = [10.8000 2.4000;-36.4000 -8.2000]; eig(A)


ans = 3.0000, −0.4000;
Answer: unstable.

Discrete-time Lyapunov equation
The command which solves the discrete-time Lyapunov equation

AP A0 − P = −Q

has the syntax P = dlyap(A, Q).


Solve the following discrete-time Lyapunov equations and check the sta-
bility of the discrete-time LTIs with the following drift matrices.

Example 6.1.22. A = [0 1; -1/6 5/6]; Q = [1 1/3;1/3 19/36];


P = dlyap(A, Q)
The answer is the following
 
2 1
P =
1 1

The principal minors of P are ∆1 = 2 > 0 and ∆2 = 1 > 0, hence the


matrix P is positive definite. It follows that A is a stable matrix (i.e. the
discrete-time system is asymptotically stable).
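The solution can be verified by substituting it back into the discrete-time Lyapunov equation (a sketch):

A = [0 1; -1/6 5/6]; Q = [1 1/3; 1/3 19/36];
P = dlyap(A, Q);
norm(A*P*A' - P + Q)   % approximately 0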

6.2 MATLAB and MAPLE Commands for Complex Numbers and for Functions of a Complex Variable

6.2.1 MATLAB
Creating Complex Numbers
The basic imaginary unit is represented in MATLAB by either the letter i or the letter j. One way of creating a complex value in MATLAB is to write the real part plus the imaginary part times i: z = x + yi or z = x + y*i (example: z = 3 + 5i). Another way is using the complex function: z = complex(3, 5); answer: z = 3.0000 + 5.0000i. If A and B are matrices of the same size, then complex(A, B) returns A + B*i.

Example 6.2.1. A = complex([1 -1], [2 3]);
Answer: A = 1.0000 + 2.0000i  -1.0000 + 3.0000i.

Functions
Let z be a complex number and Z a complex array.
The function real, which is used as x = real(z) or X = real(Z) returns
the real part of z or the real part of the elements of Z.

Example 6.2.2. z = complex(−7, 3); x = real(z): x = −7;


Z = [2 − 3i 1 + 5i]; X = real(Z): X = 2 1.

The function imag, which is used as y = imag(z) or Y = imag(Z) returns


the imaginary part of z or the imaginary part of the elements of Z.

Example 6.2.3. z = π ∗ i; y = imag(z): y = 3.1416;


Z = [2 − 3i 1 + 5i]; Y = imag(Z): Y = −3 5.

The function abs, which is used as r = abs(z) or R = abs(Z) returns the


absolute value (complex modulus) of z or of the corresponding elements of
Z.

Example 6.2.4. z = 6 − 8i; r = abs(z): r = 10;


Z = [2 − 3i 1 + 5i]; R = abs(Z): R = 3.6056 5.0990.

The function angle, which is used as t = angle(z) or T = angle(Z)


returns the phase angle, in radians, of z or of the corresponding elements of
Z. The angle lies between −π and π. The statement Z = R ∗ exp(i ∗ T )
converts back to the original Z.

Example 6.2.5. z = sqrt(3) − i; r = abs(z): r = 2.0000;


t = angle(z): t = −0.5236; z = r ∗ exp(i ∗ t): z = 1.7321 − 1.000i.

The function conj, which is used as Zbar = conj(Z) returns the complex
conjugate of the elements of Z.

Example 6.2.6. z = 2 + 7i; zbar = conj(z): zbar = 2.0000 − 7.0000i;

Example 6.2.7. syms x y; Z = [3-i x-y ∗ i; 5 4+9i]; Zbar = conj(Z):


Zbar = [3 + i, conj(x) + conj(y) ∗ i]
[ 5, 4 − 9 ∗ i].
Now, let us declare x and y as real variables:
syms x y real; Z = [3-i x-y ∗ i;5 4+9i]; Zbar = conj(Z): Zbar = [3+i, x+y∗i]
[ 5, 4 − 9 ∗ i].

Taylor Series Expansion
The command is taylor and the syntax is the following:
syms z;
taylor(f(z), z, 'ExpansionPoint', a, 'Order', n),
where n is the truncation order specified by the name-value pair argument Order, and a is the expansion point specified by the name-value pair argument ExpansionPoint.
If we write taylor(f(z), z, a), the default truncation order is n = 6, and if we also omit a, writing taylor(f(z), z), then the default expansion point is a = 0.

Example 6.2.8. syms z; f = exp(z);


Ts = taylor(f, z, ’ExpansionPoint’, 1, ’Order’, 5):
Ts = exp(1)+exp(1)∗(z −1)+(exp(1)∗(z −1)ˆ2)/2+(exp(1)∗(z −1)ˆ3)/
6 + (exp(1) ∗ (z − 1)ˆ4)/24.

Example 6.2.9. syms z; f = exp(z); Ts(z) = taylor(f, z):


Ts(z) = zˆ5/120 + zˆ4/24 + zˆ3/6 + zˆ2/2 + z + 1;
Ts(1 + i) = 22/15 + (23 ∗ i)/10.

Example 6.2.10. syms z; f = 1/(1 − z);


F (z) = taylor(f, z, ’ExpansionPoint’, 0, ’Order’, 10):
F (z) = zˆ9 + zˆ8 + zˆ7 + zˆ6 + zˆ5 + zˆ4 + zˆ3 + zˆ2 + z + 1
>> F (2i) = 205 + 410 ∗ i.

Example 6.2.11. Establish an (approximate) expansion for −log(1 − z), by integrating F(z) and f(z) term by term and equating the integrals.
>> int(F(z), z);
>> int(F (z), z);
ans = zˆ10/10 + zˆ9/9 + zˆ8/8 + zˆ7/7 + zˆ6/6 + zˆ5/5 + zˆ4/4 + zˆ3/
3 + zˆ2/2 + z
>> int(f (z), z); ans = − log(z − 1).
Actually, the geometric series is convergent for |z| < 1 and the precise
expansion is

− log(1 − z) = z + zˆ2/2 + zˆ3/3 + · · · + zˆn/n + · · · .

Example 6.2.12. F (z) = taylor(cos(z), z): F (z) = zˆ8/40320 − zˆ6/720 +


zˆ4/24 − zˆ2/2 + 1.

Example 6.2.13. F (z) = taylor(sin(z), z): F (z) = zˆ9/362880−zˆ7/5040+
zˆ5/120 − zˆ3/6 + z.

Example 6.2.14. F (z) = taylor(cosh(z), z): F (z) = zˆ8/40320+ zˆ6/720+


zˆ4/24 + zˆ2/2 + 1.

Example 6.2.15. F (z) = taylor(sinh(z), z): F (z) = zˆ9/362880 + zˆ7/


5040 + zˆ5/120 + zˆ3/6 + z.

Example 6.2.16. Expand in Laurent series, about a point a, the function G(z) = sinh(1/(z − a)).
First we replace z by 1/(z − a) in the expansion of F(z) from Example 6.2.15:
syms a; F(z) = z^9/362880 + z^7/5040 + z^5/120 + z^3/6 + z;
G(z) = F(1/(z − a)):
G(z) = −1/(a − z) − 1/(6*(a − z)^3) − 1/(120*(a − z)^5) − 1/(5040*(a − z)^7) − 1/(362880*(a − z)^9).

6.2.2 MAPLE
The basic imaginary unit is represented in MAPLE by the capital letter
I.

Functions
The function Complex is similar to the function complex in MATLAB.

Example 6.2.17. z = Complex (−3, 5): z = −3 + 5 I;


z = −3 + 5I: z = −3 + 5 I.

Example 6.2.18. Matrix([[1, −2], [0, 3]]) + Matrix([[Pi, 12], [5, 0]])·I:
\[
\begin{pmatrix} 1+I\pi & -2+12\,I\\ 5\,I & 3 \end{pmatrix}.
\]

The functions Re, Im, abs, argument, conjugate are similar to the
functions real, imag, abs, angle, conj in MATLAB.

Example 6.2.19. Re(1 − I·sqrt(Pi)): 1.

Example 6.2.20. Im(1 − I·sqrt(Pi)): −√π.

Example 6.2.21. abs(1 − I·sqrt(Pi)): √(1 + π).

Example 6.2.22. argument(1 − I·sqrt(Pi)): −arctan(√π).

Example 6.2.23. argument(1 − I·sqrt(3)): −π/3.

Example 6.2.24. conjugate(1 − I·sqrt(Pi)): 1 + I√π.

Example 6.2.25. Re(sin(4 + 7·I)): sin(4) cosh(7).

Example 6.2.26. Im(sin(4 + 7·I)): cos(4) sinh(7).

Example 6.2.27. Re(cos(4 + 7·I)): cos(4) cosh(7).

Example 6.2.28. Im(cos(4 + 7·I)): −sin(4) sinh(7).


Example 6.2.29.
Re(sin(Pi/4 + (Pi/3)·I)): \(\frac{\sqrt2}{2}\cosh\bigl(\frac13\pi\bigr)\);
Im(sin(Pi/4 + (Pi/3)·I)): \(\frac{\sqrt2}{2}\sinh\bigl(\frac13\pi\bigr)\);
abs(sin(Pi/4 + (Pi/3)·I)): \(\sqrt{\Bigl(\frac{\sqrt2}{2}\cosh\bigl(\frac13\pi\bigr)\Bigr)^2+\Bigl(\frac{\sqrt2}{2}\sinh\bigl(\frac13\pi\bigr)\Bigr)^2}\);
argument(sin(Pi/4 + (Pi/3)·I)): \(\arctan\Bigl(\frac{\sinh(\frac13\pi)}{\cosh(\frac13\pi)}\Bigr)\);
conjugate(sin(Pi/4 + (Pi/3)·I)): \(\sin\bigl(\frac14\pi-\frac13 I\pi\bigr)\).

Example 6.2.30.
Re(Matrix([[−1 + I, 3], [7I, −I·Pi]])): \(\begin{pmatrix} -1 & 3\\ 0 & 0 \end{pmatrix}\);
Im(Matrix([[−1 + I, 3], [7I, −I·Pi]])): \(\begin{pmatrix} 1 & 0\\ 7 & -\pi \end{pmatrix}\);
abs(Matrix([[−1 + I, 3], [7I, −I·Pi]])): \(\begin{pmatrix} \sqrt2 & 3\\ 7 & \pi \end{pmatrix}\).
The functions argument and conjugate do not apply to matrices.
In order to convert a complex number to the polar (exponential) form we use the function polar. The syntax is polar(z), where z is a complex expression; the result is polar(r, t), where r and t are real expressions. So polar converts the complex-valued expression z to its representation in polar coordinates, where r is the modulus of z and t is the argument of z.
The point at infinity (complex infinity, denoted in MAPLE by cs-infinity) has the algebraic form infinity + I·infinity = ∞ + ∞I and the canonical (polar) form ∞e^{undefined·I} (i.e. r = ∞ and t = undefined).

 
Example 6.2.31. z := sqrt(3) − I: √3 − I; polar(z): polar(2, −π/6);
> polar(∞ + I·∞): polar(∞, undefined).

Roots of an Analytic Function


The RootFinding package is a suite of advanced commands for finding
roots numerically. The top-level commands used for analytic functions are
Analytic and AnalyticZerosFound. These commands determine the zeros
of an analytic function of one complex variable.
The syntax is the following:
Analytic(f, z, a + c ∗ I..b + d ∗ I, . . .)
Analytic(f, z, re = a..b, im = c..d, . . .)
AnalyticZerosFound
In the previous syntax f represents an analytic function of the variable
z (or an equation defining such), z is an unknown (optional) and a, b, c, d
are real constants. Analytic finds all complex zeros of the analytic func-
tion f (z) within the rectangular region a ≤ Re z ≤ b, c ≤ Im z ≤ d in C.
AnalyticZerosFound returns a sequence of the zeros which have been lo-
cated. These may be accessed after Analytic returns, or if its computation
is interrupted.
The remaining arguments are interpreted as options.
digits := n; indicates that the accuracy of the zeros computed is usually
less than n digits; the minimum is 5.
iterations=n; indicates the number of iterations of Newton’s method to be
applied for each starting point; the default is 50.
plot; returns a plot of the zeros instead of the zeros.
When plotting, the zeros will be reduced modulo a in the real direction and modulo b in the imaginary direction. With the option 'modulo' they will be reduced to the region 0 ≤ Re z ≤ a, 0 ≤ Im z ≤ b. With the option 'modulo_s' they will be reduced to the region −a/2 ≤ Re z ≤ a/2, −b/2 ≤ Im z ≤ b/2.
Example 6.2.32. Load the package RootFinding.
with(RootFinding)
f := z⁴ + 2·z³ + z² − 2·z − 2
  z⁴ + 2z³ + z² − 2z − 2
Analytic(f, z, −1 − I..1 + I)
  −1.00000000000000, −1.00000000000000 − 1.00000000000000 I, −1.00000000000000 + 1.00000000000000 I

Example 6.2.33. f := z² − 2·z − 2·I + 1
  z² − 2z + 1 − 2I
zeros := Analytic(f, z, −1.5 − 1.5·I..2 + 1.5·I)
  −1. I, 2.00000000000000 + 1.00000000000000 I
plots[complexplot]([zeros], style = "point", axes = boxed)

Example 6.2.34. Determine a polynomial with given roots and verify the result.
zeros := 1 − I, 2 + 3·I, Pi/2, I·Pi:
f := mul(z − z0, z0 = zeros)
  (z − 1 + I)(z − 2 − 3I)(z − π/2)(z − Iπ)
Analytic(f, 3·(−I)..4 + 4·I)
  1.57079632679490, 3.14159265358983 I, 1.00000000000000 − 1.00000000000000 I, 2.00000000000000 + 3.00000000000000 I

Definite and Indefinite Integration
The command int performs definite and indefinite integration. The syntax is the following:
int(expression, x) computes \(\int expression\,dx\)
int(expression, x = a..b) computes \(\int_a^b expression\,dx\)
int(expression, [x, y])
int(expression, x = a..b, opt)
int(expression, [x = a..b, y = c..d, ...], opt)
int(f, a..b, opt)
int(f, [a..b, c..d, ...], opt)
The above parameters mean the following:

expression - algebraic expression; the integrand
x, y - names; the variables of integration
a, b, c, d - the endpoints of the intervals on which the integral is taken
f - function; the integrand
opt - (optional) a sequence of one or more options

If a and b are complex numbers, then the int routine computes the definite integral over the straight line from a to b.

Example 6.2.35. int(exp(2·z), z)
\[
\frac12 e^{2z}
\]
Example 6.2.36. int(exp((2 − 3·I)·z), z)
\[
\Bigl(\frac{2}{13}+\frac{3}{13}I\Bigr)e^{(2-3I)z}
\]
Example 6.2.37. int(cos((Pi − I)·z), z)
\[
\frac{\sin((\pi-I)z)}{\pi-I}
\]
Example 6.2.38. int(x³, x = 2..3)
\[
\frac{65}{4}
\]
Example 6.2.39. int(x³, x = 2 + I..3 + 4·I)
\[
-130-90\,I
\]
Example 6.2.40. f := log(z + 1):
int(f, z = 0..1)
\[
-1+2\ln(2)
\]
Example 6.2.41. f := log(z + 1):
int(f, z = I..1 − 3·I)
\[
-\frac12\ln(2)-\frac14 I\pi-\frac12 I\ln(2)+\frac14\pi-1+4\,I+\ln(13)-2\,I\arctan\Bigl(\frac32\Bigr)-\frac32 I\ln(13)-3\arctan\Bigl(\frac32\Bigr)
\]
Example 6.2.42. int(f, z = I..1 − 2·I)
\[
\frac52\ln(2)-\frac34 I\pi-\frac72 I\ln(2)-\frac14\pi-1+3\,I
\]

Analytic Continuation, Branch Cuts and Singularities

FunctionAdvisor/analytic_extension returns the definition of the analytic extension of a function, typically to the entire complex plane. If this is not available, the FunctionAdvisor command returns NULL. The syntax is the following:
FunctionAdvisor(analytic_extension, math_function)
For most functions the domain of the classical definition is the entire complex plane.

Example 6.2.43. FunctionAdvisor(definition, GAMMA(x))
\[
\Gamma(x) = \int_0^{\infty}\frac{\_k1^{x-1}}{e^{\_k1}}\,d\_k1,\qquad \text{And}(0<\Re(x))
\]
FunctionAdvisor(analytic_extension, GAMMA(z))
\[
\Gamma(z) = \frac{\pi}{\sin(\pi z)\,\Gamma(1-z)},\qquad \text{And}(\Re(z)<0)
\]

FunctionAdvisor/branch_cuts returns the branch cuts of a function. The syntax is the following:
FunctionAdvisor(branch_cuts, math_function_name)
It returns the branch cuts of the function, if any, or the string "No branch cuts". If the requested information is not available, the FunctionAdvisor command returns NULL.
FunctionAdvisor/branch_points returns the branch points of a function. The syntax is the following:
FunctionAdvisor(branch_points, math_function)
It returns the branch points of the function, if any, or the string "No branch points". If the requested information is not available, the FunctionAdvisor command returns NULL.

Example 6.2.44. FunctionAdvisor(branch_cuts, arcsin)
  [arcsin(z), z ≤ −1, 1 ≤ z]
FunctionAdvisor(branch_points, arcsin)
  [arcsin(z), z ∈ [−1, 1, ∞ + ∞I]]

Example 6.2.45. FunctionAdvisor(branch_cuts, arctan)
  [arctan(z), z ∈ ComplexRange(−∞I, −I), z ∈ ComplexRange(I, ∞I)]
FunctionAdvisor(branch_points, arctan)
  [arctan(z), z ∈ [−I, I]]

Example 6.2.46. FunctionAdvisor(branch_cuts, sin(z))
  [sin(z), "No branch cuts"]
FunctionAdvisor(branch_points, sin(z))
  [sin(z), "No branch points"]

The branch cuts are sometimes compactly expressed in terms of RealRange or ComplexRange; in these cases, converting the result to a relation may help figure out where the cuts are.

Example 6.2.47. FunctionAdvisor(branch_cuts, arccot)
  [arccot(z), z ∈ ComplexRange(−∞I, −I), z ∈ ComplexRange(I, ∞I)]
> convert(%, relation)
  [arccot(z), And(ℜ(z) = 0, −∞ ≤ ℑ(z), ℑ(z) ≤ −1), And(ℜ(z) = 0, 1 ≤ ℑ(z), ℑ(z) ≤ ∞)]

The command FunctionAdvisor/singularities returns the poles and also the essential singularities of a function. The syntax is the following:
FunctionAdvisor(singularities, math_function)

Example 6.2.48. FunctionAdvisor(singularities, arctan(z))
  [arctan(z), "No isolated singularities"]

Example 6.2.49. FunctionAdvisor(singularities, exp(z))
  [e^z, z = ∞ + ∞I]

But the answer can be misleading for composite functions, as we can see from the following.

Example 6.2.50. FunctionAdvisor(singularities, exp(1/(z − 1)))
  [e^{1/(z−1)}, 1/(z − 1) = ∞ + ∞I]
The correct answer is: z = 1 is an essential singularity.

Example 6.2.51. FunctionAdvisor(singularities, sin(1/(z − 1)))
  [sin(1/(z − 1)), 1/(z − 1) = ∞ + ∞I]
The same remark applies here: z = 1 is an essential singularity.

Taylor and Laurent Series Expansion
For the Taylor series expansion we use the command taylor. The syntax is the following:
taylor(expression, x = a, n)
The taylor command computes the order-n Taylor series expansion of expression, with respect to the variable x (or z), about the point a.

Example 6.2.52. taylor(exp(x), x = 2, 6)
\[
e^2 + e^2(x-2) + \frac12 e^2(x-2)^2 + \frac16 e^2(x-2)^3 + \frac1{24}e^2(x-2)^4 + \frac1{120}e^2(x-2)^5 + O\bigl((x-2)^6\bigr)
\]
taylor(exp(z), z = I, 5)
\[
e^I + e^I(z-I) + \frac12 e^I(z-I)^2 + \frac16 e^I(z-I)^3 + \frac1{24}e^I(z-I)^4 + O\bigl((z-I)^5\bigr)
\]
taylor(exp(z), z = 1 + I, 4)
\[
e^{1+I} + e^{1+I}(z-1-I) + \frac12 e^{1+I}(z-1-I)^2 + \frac16 e^{1+I}(z-1-I)^3 + O\bigl((z-1-I)^4\bigr)
\]

Example 6.2.53. taylor(exp(1/z), z = 1, 5)
\[
e - e(z-1) + \frac32 e(z-1)^2 - \frac{13}{6}e(z-1)^3 + \frac{73}{24}e(z-1)^4 + O\bigl((z-1)^5\bigr)
\]

Example 6.2.54. The following answer is provided when the function is not analytic at a.
taylor(exp(1/z), z = 0, 5)
Error, (in series/exp) unable to compute series

Example 6.2.55. taylor(sin(z), z = 0, 10)
\[
z - \frac16 z^3 + \frac1{120}z^5 - \frac1{5040}z^7 + \frac1{362880}z^9 + O\bigl(z^{10}\bigr)
\]
Example 6.2.56. taylor(cos(z), z = 0, 10)
\[
1 - \frac12 z^2 + \frac1{24}z^4 - \frac1{720}z^6 + \frac1{40320}z^8 + O\bigl(z^{10}\bigr)
\]
Example 6.2.57. taylor(1/(1 − z), z = 0, 5)
\[
1 + z + z^2 + z^3 + z^4 + O(z^5)
\]
Example 6.2.58. taylor(1/(1 − z), z = 1 − I, 5)
\[
-I - (z-1+I) + I(z-1+I)^2 + (z-1+I)^3 - I(z-1+I)^4 + O\bigl((z-1+I)^5\bigr)
\]
Example 6.2.59. taylor(1/(1 − z)², z = 0, 5)
\[
1 + 2z + 3z^2 + 4z^3 + 5z^4 + O\bigl(z^5\bigr)
\]
Example 6.2.60. taylor(sqrt(1 + z), z = 0, 6)
\[
1 + \frac12 z - \frac18 z^2 + \frac1{16}z^3 - \frac{5}{128}z^4 + \frac{7}{256}z^5 + O\bigl(z^6\bigr)
\]
Example 6.2.61. The answer that we get when a is a branch point:
taylor(sqrt(1 + z), z = −1, 6)
Error, does not have a taylor expansion, try series()

When we need to determine the Laurent series expansion of a function we use numapprox[laurent]. The syntax is the following:
laurent(f, x = a, n)
laurent(f, x, n)

The function laurent computes, within the package numapprox, the Laurent series expansion of the function f, with respect to the variable x (or z), about the point a, up to order n (optional).
If the result of the series function applied to the specified arguments is a Laurent series with finite principal part (i.e. only a finite number of negative powers appear in the series), then this result is returned; otherwise, an error occurs.

Example 6.2.62. with(numapprox):
f := z → 1/(1 − z)
laurent(f(z), z = 0)
\[
1 + z + z^2 + z^3 + z^4 + z^5 + O\bigl(z^6\bigr)
\]
Example 6.2.63. laurent(1/(z·sin(z)), z = 0)
\[
z^{-2} + \frac16 + \frac{7}{360}z^2 + O\bigl(z^4\bigr)
\]
Example 6.2.64. laurent(1/(z·cos(z − 1)), z = 1)
\[
1 - (z-1) + \frac32(z-1)^2 - \frac32(z-1)^3 + \frac{41}{24}(z-1)^4 - \frac{41}{24}(z-1)^5 + O\bigl((z-1)^6\bigr)
\]
Example 6.2.65. laurent(1/((z − 1)³·sin(z − 1)), z = 1, 8)
\[
(z-1)^{-4} + \frac16(z-1)^{-2} + \frac{7}{360} + \frac{31}{15120}(z-1)^2 + \frac{127}{604800}(z-1)^4 + O\bigl((z-1)^5\bigr)
\]
Example 6.2.66. laurent(1 + 1/(z − 1), z = 1, 4)
\[
(z-1)^{-1} + 1
\]
Example 6.2.67. laurent((z³ + 4·z⁴ − z + 3)/((z − 1)³·(z + 2)·(z − 2)), z = 1, 3)
\[
-\frac73(z-1)^{-3} - \frac{68}{9}(z-1)^{-2} - \frac{400}{27}(z-1)^{-1} - \frac{1463}{81} - \frac{4450}{243}(z-1) - \frac{13289}{729}(z-1)^2 + O\bigl((z-1)^3\bigr)
\]

f := laurent((z³ + 4·z⁴ − z + 3)/((z − 1)³·(z + 2)·(z − 2)), z = −2, 4)
\[
\frac{61}{108}(z+2)^{-1} - \frac{163}{432} + \frac{167}{5184}(z+2) + \frac{7711}{186624}(z+2)^2 + O\bigl((z+2)^3\bigr)
\]


Computation of Residues
The command residue computes the residue of a function f at an isolated singular point a. The syntax is residue(f, z = a).

Example 6.2.68. f := z → (z + 1)/z
residue(f(z), z = 0): 1
residue(f(z), z = infinity): −1
We have that 1 + (−1) = 0, in agreement with the fact that the sum of all the residues vanishes.

Example 6.2.69.
residue((4·z⁴ + z³ − z + 3)/((z − 1)³·(z + 2)·(z − 2)), z = 1): −400/27
residue((4·z⁴ + z³ − z + 3)/((z − 1)³·(z + 2)·(z − 2)), z = 2): 73/4
residue((4·z⁴ + z³ − z + 3)/((z − 1)³·(z + 2)·(z − 2)), z = −2): 61/108
residue((4·z⁴ + z³ − z + 3)/((z − 1)³·(z + 2)·(z − 2)), z = infinity): −4
We have that −400/27 + 73/4 + 61/108 − 4 = 0.

Example 6.2.70. residue(tan(z), z = Pi/2): −1

Example 6.2.71. residue(sin(z)/z, z = 0): 0 (z = 0 is a removable singular point)

Example 6.2.72. residue(sin(z)/z², z = 0): 1

Example 6.2.73. residue(sin(z)/z⁴, z = 0): −1/6
z 6

Bibliography

[1] Abell, M., Braselton, J. P.: Differential Equations with MAPLE


V, Academic Press Professional, London, 1994.

[2] Ahmed, N. U.: Elements of Finite-Dimensional Systems and Control


Theory, Longman Scientific & Technical, London - New York, 1988.

[3] Balan, V., Pı̂rvan, M.: Matematici avansate pentru ingineri - Probleme date la Concursul Ştiinţific Studenţesc ”Traian Lalescu”, Matematică, anii 2002-2014, Ed. Politehnica Press, Bucureşti, 2014.

[4] Breaz, N., Crăciun, M., Gaşpar, P., Miroiu, M., Paraschiv-
Munteanu, I.: Modelarea matematică prin Matlab, Ed. StudIS, Iaşi,
2013.

[5] Breaz, D., Suciu, N., Gaşpar, P., Barbu, G., Pı̂rvan, M.,
Prepeliţă, V., Breaz, N.: Transformări integrale şi funcţii com-
plexe cu aplicaţii ı̂n tehnică, Vol. 1 - Funcţii complexe cu aplicaţii ı̂n
tehnică, Ed. StudIS, Iaşi, 2013.

[6] Câşlaru, C., Prepeliţă, V., Drăguşin, C: Matematici Avansate.


Teorie şi aplicaţii, Ed. Fair Partners, Bucureşti, 2007.

[7] Cernea, A.: Elemente de teoria ecuaţiilor diferenţiale, Ed. Univer-


sităţii Bucureşti, Bucureşti, 2010.

[8] Dym, H.: Linear Algebra in Action, American Mathematical Society,


Providence, Rhode Island, 2013.

[9] Iosifescu, M., Stănăşilă, O., Ştefănoiu, D. eds.: Enciclopedie


Matematică, Ed. AGIR, Bucureşti, 2010.

[10] Halanay Andrei, Mateescu, M.: Elemente din teoria ecuaţiilor
diferenţiale şi a ecuaţiilor integrale, Ed. Matrix Rom, Bucureşti, 2010.

[11] Halanay Aristide: Teoria calitativă a ecuaţiilor diferenţiale, Edi-


tura Academiei R.S.R., Bucureşti, 1963.

[12] Halanay Aristide: Ecuaţii diferenţiale, Editura Didactică şi Peda-


gogică, Bucureşti, 1972.

[13] Hastings, S. P., McLeod, J. B.: Classical Methods in Ordinary


Differential Equations, American Mathematical Society, Providence,
Rhode Island, 2012.

[14] Kecs, W. W., Toma, A.: Calcul diferenţial şi integral, Editura
Edyro Press, Petroşani, 2010.

[15] Kreyszig, E.: Advanced Engineering Mathematics, Wiley, Singapore,


2006.

[16] Lee, H. J., Schiesser, W. E.: Ordinary and Partial Differential


Equations Routines in C, C++, Fortran, Java, Maple, and MATLAB,
Chapman & Hall/CRC, Boca Raton London New York Washington
D.C., 2004.

[17] Lyapunov, A. M.: The general problem of the stability of motion,


Int. J. Control 55 (1992), 531-773.

[18] Marsden, J. E., Hoffman, M. J.: Basic Complex Analysis, W. H.


Freeman, New York, 1987.

[19] Mauch, S.: Introduction to Methods of Applied Mathematics or Ad-


vanced Mathematical Methods for Scientists and Engineers, http:
//physics.bgu.ac.il/~gedalin/Teaching/Mater/am.pdf, 2004.

[20] Meghea, C., Meghea, I.: Treatise of Differential Calculus and


Integral Calculus, Old City Publishing, Philadelphia & Éditions des
Archives Contemporaines, Paris, 2013.

[21] Mocică, C. G.: Funcţii de o variabilă complexă, Ed. Printech, Bu-


cureşti, 2004.

[22] Muir, T.: A Treatise on the Theory of Determinants, Macmillan,
London, 1882.

[23] Olariu, V., Prepeliţă, V.: Matematici speciale, Editura Didactică


şi Pedagogică, Bucureşti, 1985.

[24] Olariu, V., Prepeliţă, V.: Teoria distribuţiilor, funcţii complexe


şi aplicaţii, Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1986.

[25] Olariu, V., Olteanu C.: Linear Differential Equations. Classical


and Modern Theory, LAP Lambert Academic Publishing, Saarbrücken,
2016.

[26] Peano, G.: Sur le déterminant wronskien, Mathesis IX (1889), 75-76,


110-112.

[27] Petroşanu, D. M.: Matematici speciale: elemente teoretice şi


aplicaţii, Ed. Matrixrom, Bucureşti, 2016.

[28] Poincaré, H.: Mémoire sur les courbes définies par une équation
différentielle (I), J. Math. Pures Appl. 7 (1881), 375-422.

[29] Poincaré, H.: Mémoire sur les courbes définies par une équation différentielle (II), J. Math. Pures Appl. 8 (1882), 251-296.

[30] Pragacz, P.: Algebraic Cycles, Sheaves, Shtukas, and Moduli, Birkhäuser Verlag, Basel, 1-20, 2007.

[31] Prepeliţă, V., Vasilache, T., Doroftei, M.: Control Theory, UPB Dep. Eng. Sci., Bucureşti, 1997.

[32] Rudner, V., Nicolescu, C.: Probleme de matematici speciale, Ed. Didactică şi Pedagogică, Bucureşti, 1982.

[33] Sarason, D.: Complex Function Theory, American Mathematical Society, Providence, Rhode Island, 2007.

[34] Stănăşilă, N. O., Pîrvan, M., Olteanu, M. et al.: Teme şi probleme pentru concursurile studenţeşti de matematică, Vol. 3 - Concursuri Naţionale, Ed. StudIS, Iaşi, 2013.

[35] Stein, E., Shakarchi, R.: Princeton Lectures in Analysis. II Com-
plex Analysis, Princeton University Press, Princeton and Oxford, 2003.

[36] Stroud, K. A., Booth, D. J.: Advanced Engineering Mathematics, Palgrave Macmillan, Hampshire & New York, 2003.

[37] Sveshnikov, A., Tikhonov, A.: The Theory of Functions of a Complex Variable, MIR Publishers, Moscow, 1973.

[38] Teschl, G.: Ordinary Differential Equations and Dynamical Systems, American Mathematical Society, Providence, Rhode Island, 2012.

Index

absolute value of a complex number, 65, 179
algebraic branch point, 73
algebraic form of a complex number, 65
analytic (holomorphic) function, 85, 98, 100
analytic continuation of a function, 88
annulus, 68
argument of a complex number, 65, 179
asymptotically stable in a neighborhood of a SDEs
    global, 32
    local, 32
asymptotically stable SDEs, 24, 25, 28–30
boundary of a set, 67
boundary point, 67
branches of a multiple-valued function, 72, 187
Cauchy's Fundamental Theorem, 111
Cauchy's Integral Formula, 116
Cauchy's Integral Formula for Derivatives, 119
Cauchy's Theorem for Multiply Connected Domains, 113
Cauchy-Riemann conditions, 86
characteristic
    equation, 9, 12
    polynomial, 7, 28, 57, 58
closed contour, 68
complex number, 61, 178
complex plane, 65
conjugate of a complex number, 65, 179
connected set, 67
continuity of a complex function, 70
control of a system, 23
convergent sequence, 66
derivative of a complex function, 84, 85
diagonalizable matrix, 8, 9, 34, 40, 44, 45
domain, 68
doubly connected domain, 68
eigenspace, 7, 39
eigenvalue, 7–12, 25, 29, 34, 36, 38, 40, 42, 44, 45, 52, 53, 171, 173–175
eigenvector, 7–11, 35, 37, 38, 40, 42, 173, 174
equilibrium position, 24, 31, 32, 56
Euler's formula, 72
exponential complex function, 71, 88
exponential form of a complex number, 66
extended complex plane, 67
exterior point, 67
Fréchet differentiable function, 84
fundamental
    matrix, 13, 14, 18, 20, 21, 23
    system of solutions, 6, 8, 9, 11, 19, 36, 38, 39, 41, 43
general response, 23
generalized eigenvector, 7, 8, 11, 12
geometric series, 121
Green-Riemann formula, 112
harmonic function, 87
Hermitian matrix, 28
homogenous SDEs, 2, 4
homogenous system, 5
Hurwitz matrix, 27, 53
Hurwitz polynomial, 27, 28, 53
imaginary axis, 65
imaginary part of a complex function, 70, 77, 95, 97
imaginary part of a complex number, 65, 77, 94, 179
indefinitely increasing sequence, 66
initial condition of a SDEs, 3
initial value problem, 3
input of a system, 23
input-output map, 23
integral of a function over a curve, 108
interior of a set, 67
interior point, 67
inverted complex function, 71
isolated singular point, 125
    essential, 130
    pole of order k, 129
    removable, 129
Jordan cell, 7, 8, 22, 25, 46, 54, 173, 174
Jordan's lemmas, 146
Kronecker product, 31, 176
lacuna, 69
Laurent series, 128, 133, 136, 190
limit of a complex function, 70
linear and homogenous SDEs, 8
linear and homogenous SDEs with constant coefficients, 6, 171, 172
linear complex function, 70
linear control system, 23
linear SDEs, 2
linear time-invariant system, 23, 177
linearization, 32
Liouville's formula, 6
logarithmic branch point, 76
logarithmic complex function, 75
Lyapunov equation, 29, 31, 53–55, 175, 176
Lyapunov function, 30, 55, 56
Lyapunov function (for a linearized system), 33
method of undetermined coefficients, 22
multiplicity
    algebraic, 7, 9, 12, 44, 173, 175
    geometric, 7, 37, 44, 173
n-uply connected domain, 68
neighborhood of a point, 66
neighborhood of the point at infinity, 67
non-diagonalizable matrix, 10, 40, 44, 45, 47
nonlinear time invariant system, 31
open disk, 66
open set, 67
ordinary point, 122
output equation, 23
output of a system, 23
partition, 108
path-wise connected set, 68
Peano-Baker formula, 14, 17
point at infinity, 66
polar coordinates, 65
positive definite functional, 33
positive definite matrix, 28
positive definite quadratic form, 30
principal branch, 74
principal vector, 8, 40–42, 49
quasipolynomials, 22
real axis, 65
real part of a complex function, 70, 78
real part of a complex number, 65, 76, 94, 179
residue at the point at infinity, 143
residue of a function at a point, 141, 155, 157, 192
Residue theorem, 142
Routh-Hurwitz criterion, 28, 53
semigroup property, 14
simple pole, 129
simply connected set, 68
single-valued complex function, 69
singularity, 125
solution of a SDEs, 2, 7
spectrum, 7, 10
stable matrix, 27, 29, 57, 175, 178
stable nonlinear time invariant system, 31
stable SDEs, 24, 25
state equation, 23
state of a system, 23
Taylor series, 122, 125, 189
the radical complex function, 72
Theorem
    Existence and Uniqueness of solutions of an SDEs, 3
    of Analytic Continuation, 88
    of Weierstrass, 122
    on Instability, 33
    on Local Asymptotic Stability, 33
    on Local Stability, 33
trigonometric complex functions, 88
trigonometric form of a complex number, 65
triply connected domain, 68
unstable nonlinear time invariant system, 31
unstable SDEs, 24, 25
variation of parameters, 19, 21
variation of parameters formula, 20, 22, 23, 47
Weierstrass criterion, 16
Wronskian, 5, 6, 9
