
Elements of Linear Algebra

Course notes
December 15, 2014

Cristian Neculaescu
Bucharest University of Economic Studies, Faculty of Cybernetics, Department of
Mathematics, Cybernetics Building (Calea Dorobanți 15-17, sector 1), room 2625. Office
hours: to be announced.
E-mail address: algecon2011@gmail.com
URL: roedu4you.ro, math4you section
Dedicated to the memory of S. Bach.

2000 Mathematics Subject Classification. Primary 05C38, 15A15; Secondary 05A15, 15A18

The Author thanks V. Exalted.



Contents

Preface

Part 1. Introduction to Linear Algebra

Prerequisites

Chapter 1. Vector Spaces and Vector Subspaces
1.1. Introductory Definitions
  Terminological Differences
  Changing the scalar multiplication may change the properties of the vector space
  1.1.1. Lengths and Angles
1.2. Properties of Vector Spaces
1.3. Vector Spaces Examples
1.4. Exercises
1.5. Representation in Vector Spaces
1.6. Representation of Vectors
1.7. Operations with Subspaces
1.8. The Lattice of Subspaces

Chapter 2. Linear Transformations
2.1. Examples of Linear Transformations
2.2. Properties of Linear Transformations
2.3. Representations of Linear Transformations
  2.3.1. Representations of Linear Functionals
2.4. The Factor Vector Space
  2.4.1. Factor Space attached to a Subspace
2.5. The Isomorphism Theorems
  2.5.1. Projection Operators
  2.5.2. Other Isomorphism Theorems
2.6. Introduction to Spectral Theory
  2.6.1. Decomplexification of the Complex Jordan Canonical Form
  2.6.2. The Jordan Canonical Form Procedure
  2.6.3. Examples
2.7. Bilinear and Quadratic Forms

Chapter 3. Inner Product Vector Spaces
3.1. Orthogonality
3.2. The Projection of a Vector over a Subspace
3.3. The Least Squares Method
  3.3.1. Operations with Columns
  3.3.2. Operations with Lines
3.4. Special Types of Operators
  3.4.1. Projection Operators
  3.4.2. The Adjoint Operator
  3.4.3. The Isometric, Unitary and Orthogonal Operators
  3.4.4. The Normal Operator
  3.4.5. The Autoadjoint Operator
  3.4.6. The Antiautoadjoint Operator
3.5. The Leontief Model
3.6. Vector Spaces over Complex Numbers

Chapter 4. Affine Spaces
4.1. Definitions
  4.1.1. Properties

Part 2. Linear Algebra Software Products

Chapter 5. Geogebra
Chapter 6. CARMetal

Part 3. Appendices

Chapter 7. Reviews
  Binary Logic
7.1. Predicate Logic
7.2. Sets
  7.2.1. Operations with Sets
  7.2.2. Relations
  7.2.3. Functions
  7.2.4. Operations with Functions
7.3. Usual Sets of Numbers
  7.3.1. The Decimal Structure of Numbers
  7.3.2. The Set of Real Numbers
  7.3.3. The Set of Complex Numbers
7.4. Cardinality of the Usual Sets of Numbers
7.5. General Matrix Topics
7.6. Matrix Operations
7.7. Other Matrix Operations
7.8. Partitioned Matrices
7.9. The Moore-Penrose Inverse of a Matrix. The Singular Value Decomposition
7.10. Algebraic Structures

Chapter 8. Examples of Exams from Previous Years
8.1. Written paper from 26.11.2014
8.2. Classical Solution Topics
8.3. Short Topics or Fixed-form Topics
8.4. Problems from various sources
8.5. The Structure of the Semester Paper
8.6. Conditions for Exam
8.7. Old Structure of the Exam

Bibliography


Preface

Part 1

Introduction to Linear Algebra

Prerequisites
These notes are a continuation of the topics learned at the high school level. In particular, the following topics are supposed to be known:
- Elements of Set Theory
- Important Sets of Numbers: N, Z, Q, R, C (and their basic properties)
- Elements of Binary Logic
- Polynomials
- Exponents
- Rational and Radical Expressions
- Functions: definition, injectivity (one-to-one functions), surjectivity (onto functions), bijective (one-to-one and onto) and invertible functions
- Elementary functions:
  - Linear functions
  - Quadratic functions
  - Polynomial functions
  - Rational functions
  - Exponential functions
  - Logarithmic functions
  - Trigonometric and Inverse Trigonometric functions
  - Graphs of Elementary functions
- Complex numbers: modulus and argument, trigonometric form, De Moivre's theorem, nth roots of complex numbers
- The Principle of Mathematical Induction
- Basic Discrete Mathematics: Permutations and Combinations, The Binomial Theorem
- 2D Analytic Geometry: equation of a line, slope, the circle, the ellipse, the hyperbola, the parabola
- Linear and Nonlinear Equations and Inequalities
- Basic Linear Algebra: matrix algebra, rank, determinants (of arbitrary dimension), the inverse of a matrix; systems of linear equations; Cramer's Rule; the Rouché-Capelli [Kronecker-Capelli] Theorem
- Basic Abstract Algebra: operation (law), closure, monoids, groups, rings, fields, morphisms.
0.0.1. Exercise. Show that in a monoid the neutral element is unique.
0.0.2. Solution. Consider a monoid $(M, \ast)$ and suppose there are two neutral elements with respect to "$\ast$", denoted by $e$ and $f$. Then (by the definition of the neutral element):
(1) $\forall x \in M$, $x \ast e = e \ast x = x$, and
(2) $\forall x \in M$, $x \ast f = f \ast x = x$.
Then, by replacing $x = f$ in (1) and $x = e$ in (2), we get:
$f \ast e = e \ast f = f$ and $e \ast f = f \ast e = e$, from which it follows that $e = f$. This means that the neutral element, if it exists, is unique.


0.0.3. Exercise. Show that in a monoid, if an element is symmetrizable, then its symmetric element is unique.
0.0.4. Solution. Consider a monoid $(M, \ast)$ with neutral element denoted by $e$ and a symmetrizable element denoted by $x'$. By the definition of the monoid, "$\ast$" is associative.
Suppose there are two symmetric elements, denoted by $x''$ and $x'''$. Then (by the definition of the symmetric element):
$x' \ast x'' = x'' \ast x' = e$, and
$x' \ast x''' = x''' \ast x' = e$.
Then $x'' = x'' \ast e = x'' \ast (x' \ast x''') = (x'' \ast x') \ast x''' = e \ast x''' = x'''$. So the symmetric element is unique.
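As a concrete illustration (an addition to these notes, not part of the original text), the uniqueness of the neutral element can be checked by brute force in a small finite monoid, for example $(\mathbb{Z}_6, \cdot \bmod 6)$; a minimal Python sketch:

```python
# Brute-force check in the finite monoid (Z_6, multiplication mod 6):
# the neutral element exists and is unique.
M = range(6)
op = lambda a, b: (a * b) % 6

neutrals = [e for e in M if all(op(x, e) == x == op(e, x) for x in M)]
print(neutrals)  # [1]: exactly one neutral element
```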

CHAPTER 1

Vector Spaces and Vector Subspaces

1.1. Introductory Definitions

1.1.1. Definition. (vector space) Given:
The sets:
- $\mathbb{V} \neq \emptyset$ (the elements of $\mathbb{V}$ are called vectors)¹;
- $\mathbb{K} \neq \emptyset$ (the elements of $\mathbb{K}$ are called scalars);
The functions:
- $+_{\mathbb{K}}(\cdot, \cdot) : \mathbb{K} \times \mathbb{K} \to \mathbb{K}$. The function $+_{\mathbb{K}}$ is a composition law over $\mathbb{K}$ called scalar addition and will be denoted shortly by "+";
- $\cdot_{\mathbb{K}}(\cdot, \cdot) : \mathbb{K} \times \mathbb{K} \to \mathbb{K}$. The function $\cdot_{\mathbb{K}}$ is a composition law over $\mathbb{K}$ called scalar multiplication and will be denoted shortly by "$\cdot$";
- $+_{\mathbb{V}}(\cdot, \cdot) : \mathbb{V} \times \mathbb{V} \to \mathbb{V}$. The function $+_{\mathbb{V}}$ is a composition law over $\mathbb{V}$ called vector addition and will be denoted shortly by "+";
- $\cdot(\cdot, \cdot) : \mathbb{K} \times \mathbb{V} \to \mathbb{V}$. The function $\cdot$ is a composition law called multiplication of vectors with scalars and will be denoted by "$\cdot$". For each fixed $\alpha \in \mathbb{K}$, the partial operation $\mathbb{V} \ni x \mapsto \alpha \cdot x \in \mathbb{V}$ may be called homothety (dilatation or dilation) of parameter $\alpha$ [changing this operation may lead to changing the specifics of the vector space; see the second subsection].
The pair $(\mathbb{V}, \mathbb{K})$ (together with the above operations) is called a vector space if the following conditions are met:
(1) $(\mathbb{K}, +_{\mathbb{K}}, \cdot_{\mathbb{K}})$ is a commutative field;
(2) $(\mathbb{V}, +_{\mathbb{V}})$ is an Abelian group;
(3) $\forall a, b \in \mathbb{K}$, $\forall x, y \in \mathbb{V}$, we have:
(a) $(a +_{\mathbb{K}} b) \cdot x = a \cdot x +_{\mathbb{V}} b \cdot x$ (distributivity of "$\cdot$" with respect to "$+_{\mathbb{K}}$");
(b) $a \cdot (x +_{\mathbb{V}} y) = a \cdot x +_{\mathbb{V}} a \cdot y$ (distributivity of "$\cdot$" with respect to "$+_{\mathbb{V}}$");
(c) $a \cdot (b \cdot x) = (a \cdot_{\mathbb{K}} b) \cdot x$ (mixed associativity);
(d) $1_{\mathbb{K}} \cdot x = x$.
¹ In the strict sense, the elements of $\mathbb{V}$ are position vectors. The names vector, position vector, point should be used with care. At the end of this section a subsection on this topic is included.

1.1.2. Remark. The distinction between different operations will be made from context:
- The element $0_{\mathbb{V}}$ is the neutral element with respect to vector addition and it will be denoted by $0$;
- The element $0_{\mathbb{K}}$ is the neutral element with respect to scalar addition and it will be denoted by $0$;
- The element $1_{\mathbb{K}}$ is the neutral element with respect to scalar multiplication and it will be denoted by $1$.
Notation conventions:
- vectors will be denoted by small latin letters ($a$, $b$, $c$, $u$, $v$, $w$, $x$, $y$, $z$, etc.),
- scalars will be denoted by small greek letters ($\alpha$, $\beta$, $\gamma$, $\delta$, etc.),
- sets will be denoted by "doubled" (blackboard bold font) capital letters ($\mathbb{A}$, $\mathbb{B}$, $\mathbb{C}$, $\mathbb{D}$, $\mathbb{K}$, $\mathbb{L}$, $\mathbb{M}$, $\mathbb{N}$, $\mathbb{R}$, $\mathbb{X}$, $\mathbb{V}$, etc.),
- all the above may have subscripts ($\alpha_0$, $\mathbb{V}_1$, etc.) and/or superscripts ($U_1^{(2)}$, $x^3$, $y_4$, etc.), depending on the context.


The following are examples of common situations in which vector spaces are used. The 2D and 3D examples are perhaps among the simplest, but simple is not always good: this simplicity has both good and bad aspects, in that these examples offer geometrical intuitions, which is helpful for two and three dimensions but devastating for arbitrary dimensions. Moreover, even in two or three dimensions the pictures may be misleading, because they suggest a certain measurement for distances, which is not always appropriate.
1.1.3. Example (A beautiful business situation). As the first example in which vector spaces are present, I choose a beautiful old problem (in fact, a set of two problems) [found in [1], page 60, problems 41 and 42].
The prices for bushels of wheat and rye are $p_w$ and $p_r$. The market demands for bushels of wheat and rye are $D_w = 4 - 10p_w + 7p_r$ and $D_r = 3 + 7p_w - 5p_r$. The market supplies of bushels of wheat and rye are $S_w = 7 + p_w - p_r$ and $S_r = -27 - p_w + 2p_r$.
1. Find the equilibrium prices (the prices which equate demand and supply for both goods).
A tax $t_w$ per bushel is imposed on wheat producers and a tax $t_r$ per bushel is imposed on rye producers.
2. Find the new prices as functions of the taxes.
3. Find the increase of prices due to the taxes.
4. Show that a tax on wheat alone reduces both prices.
5. Show that a tax on rye alone increases both prices, with the increase in the rye price being greater than the tax on rye.
1.1.4. Solution. Equilibrium takes place when demand meets supply:
$$\begin{cases} D_w = S_w \\ D_r = S_r \end{cases} \iff \begin{cases} 4 - 10p_w + 7p_r = 7 + p_w - p_r \\ 3 + 7p_w - 5p_r = -27 - p_w + 2p_r \end{cases}$$
with solution $p_w^0 = \frac{219}{13} \approx 16.84615385$ and $p_r^0 = \frac{306}{13} \approx 23.53846154$.
So 1. the equilibrium prices are $p_w^0 = \frac{219}{13}$ (for wheat) and $p_r^0 = \frac{306}{13}$ (for rye). [At these equilibrium prices, demand and supply are equal and are $\frac{4}{13}$ (for wheat) and $\frac{42}{13}$ (for rye).]
When taxes are imposed on producers, the supply of both goods will take place at prices lowered by the taxes:
the demand remains unchanged: $D_w = 4 - 10p_w + 7p_r$ and $D_r = 3 + 7p_w - 5p_r$,
while the supply takes place at prices lowered by the taxes: $S_w = 7 + (p_w - t_w) - (p_r - t_r)$ and $S_r = -27 - (p_w - t_w) + 2(p_r - t_r)$.
Equilibrium again happens when demand meets supply:
$$\begin{cases} D_w = S_w \\ D_r = S_r \end{cases} \iff \begin{cases} 4 - 10p_w + 7p_r = 7 + (p_w - t_w) - (p_r - t_r) \\ 3 + 7p_w - 5p_r = -27 - (p_w - t_w) + 2(p_r - t_r) \end{cases}$$
with solution:
$$\begin{cases} p_w = \frac{9}{13}t_r - \frac{1}{13}t_w + \frac{219}{13} \\[4pt] p_r = \frac{14}{13}t_r - \frac{3}{13}t_w + \frac{306}{13} \end{cases}$$
2. The new prices as functions of the taxes:
$$\begin{bmatrix} p_w \\ p_r \end{bmatrix} = \begin{bmatrix} \frac{219}{13} \\[4pt] \frac{306}{13} \end{bmatrix} + t_r \begin{bmatrix} \frac{9}{13} \\[4pt] \frac{14}{13} \end{bmatrix} - t_w \begin{bmatrix} \frac{1}{13} \\[4pt] \frac{3}{13} \end{bmatrix}$$
3. The increase of prices due to the taxes is:
$$t_r \begin{bmatrix} \frac{9}{13} \\[4pt] \frac{14}{13} \end{bmatrix} - t_w \begin{bmatrix} \frac{1}{13} \\[4pt] \frac{3}{13} \end{bmatrix}$$
4. A tax on wheat alone means that $t_r = 0$; the prices in this situation are
$$\begin{bmatrix} p_w \\ p_r \end{bmatrix} = \begin{bmatrix} \frac{219}{13} \\[4pt] \frac{306}{13} \end{bmatrix} - t_w \begin{bmatrix} \frac{1}{13} \\[4pt] \frac{3}{13} \end{bmatrix}$$
so a tax on wheat alone reduces both prices.
5. A tax on rye alone means that $t_w = 0$; the prices in this situation are
$$\begin{bmatrix} p_w \\ p_r \end{bmatrix} = \begin{bmatrix} \frac{219}{13} \\[4pt] \frac{306}{13} \end{bmatrix} + t_r \begin{bmatrix} \frac{9}{13} \\[4pt] \frac{14}{13} \end{bmatrix}$$
so a tax on rye alone increases both prices.
Moreover, the rise in the rye price is $\frac{14}{13}t_r$, which is bigger than the tax on rye.
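As a numerical cross-check of point 1 (an addition to these notes), the equilibrium system can be solved with exact rational arithmetic using Python's standard fractions module:

```python
from fractions import Fraction

# Dw = Sw and Dr = Sr reduce to the linear system
#   11*pw - 8*pr = -3
#    8*pw - 7*pr = -30
# solved here by Cramer's rule with exact fractions.
a11, a12, b1 = 11, -8, -3
a21, a22, b2 = 8, -7, -30

det = a11 * a22 - a12 * a21              # = -13
pw = Fraction(b1 * a22 - b2 * a12, det)  # = 219/13
pr = Fraction(a11 * b2 - a21 * b1, det)  # = 306/13
print(pw, pr, float(pw), float(pr))      # 219/13 306/13 16.846... 23.538...
```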

1.1.5. Example. A farmer has 45 ha on which to grow wheat and rye. She may sell a maximum amount of 140 tons of wheat and 120 tons of rye. She produces 5 tons of wheat and 4 tons of rye per hectare and she may sell the production for 30 €/ton (wheat) and 50 €/ton (rye). She needs 6 labor hours (wheat) and 10 labor hours (rye) per hectare to harvest the crop, for which she pays 10 €/lh, and no more than 350 lh are available. Find the maximum profit she may obtain and the necessary strategy for obtaining it.
$x_w$ = hectares used for wheat
$x_r$ = hectares used for rye
profit: $30 \cdot 5 \cdot x_w + 50 \cdot 4 \cdot x_r - 10 \cdot 6 \cdot x_w - 10 \cdot 10 \cdot x_r$
constraints: $x_w + x_r \le 45$, $5x_w \le 140$, $4x_r \le 120$, $6x_w + 10x_r \le 350$, $x_w, x_r \ge 0$
$\Rightarrow$ the problem:
maximize: $90x_w + 100x_r$
subject to:
$$\begin{cases} x_w + x_r \le 45 \\ x_w \le 28 \\ x_r \le 30 \\ 3x_w + 5x_r \le 175 \\ x_w, x_r \ge 0 \end{cases}$$
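The notes do not solve this linear program here; as an illustrative sketch (an addition, assuming scipy is available), it can be solved numerically:

```python
from scipy.optimize import linprog

# maximize 90*xw + 100*xr  <=>  minimize -(90*xw + 100*xr)
c = [-90, -100]
A_ub = [[1, 1],    # xw + xr <= 45
        [1, 0],    # xw <= 28
        [0, 1],    # xr <= 30
        [3, 5]]    # 3*xw + 5*xr <= 175
b_ub = [45, 28, 30, 175]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # expected optimum: xw = 25, xr = 20, profit 4250
```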

1.1.6. Example (2D). Consider the following objects:
- the vector space $(\mathbb{R}^2, \mathbb{R})$ (the Euclidean plane); within this environment, a vector is identified with the position vector of the corresponding point; for example, the vector $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ is represented graphically by the position vector with the origin at the point $O(0,0)$ and with the edge (or terminal point) at the point $P_1(1,2)$;
- the vectors $v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $v_2 = \begin{bmatrix} 3 \\ 1 \end{bmatrix}$, $v_1, v_2 \in \mathbb{R}^2$;
- the sum $v_1 + v_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \end{bmatrix} = v_3$;
- the opposite vector of $v_2$: $-v_2 = v_4 = \begin{bmatrix} -3 \\ -1 \end{bmatrix}$;
- the multiplication of a vector by a scalar: $v_6 = 2.6 \, v_1$;
- the subtraction of two vectors, which is the addition with the opposite of the second vector: $v_1 - v_2 = v_1 + (-v_2) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \end{bmatrix} = v_5$.

Picture 1.1.1: Vectors and Operations in 2D

1.1.7. Example. (3D) Consider the following objects:
- the vector space $(\mathbb{R}^3, \mathbb{R})$ (the 3D Euclidean space); within this environment, a vector is identified by the position vector of the corresponding point; for example, the vector $\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$ is represented by the position vector starting at $O(0,0,0)$ and ending at $P_1(1,2,1)$;
- the vectors $v_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$, $v_2 = \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix}$, $v_1, v_2 \in \mathbb{R}^3$;
- the sum $v_1 + v_2 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \\ 3 \end{bmatrix} = v_3$;
- the opposite vector of $v_2$: $-v_2 = v_4 = \begin{bmatrix} -3 \\ -1 \\ -2 \end{bmatrix}$;
- the multiplication of a vector by a scalar: $v_6 = 2.6 \, v_1$;
- the subtraction $v_1 - v_2 = v_1 + (-v_2) = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} - \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ -1 \end{bmatrix} = v_5$.

Picture 1.1.2: Vectors and operations in 3D

1.1.8. Remark. There are various similarities and differences between 2D and 3D:
- The points and the position vectors are represented by ordered lists of numbers.
- There is a certain ambiguity regarding the notions "point", "vector", "position vector". In order to clear these ambiguities we would have to cover another chapter, named "Affine Geometry" [see, for example, [26]]. For now it is enough to observe that in an environment called "vector space", a "vector" which geometrically would have as origin some point other than the origin of the coordinate system simply doesn't exist.
- Ambiguities may be found not only in common language, but also in scientific language: for example, the expression "$x = 0$" in $\mathbb{R}^1$ means the point with coordinates $(0)$, in $\mathbb{R}^2$ it means a line (the $Oy$ axis), in $\mathbb{R}^3$ it means a plane (the $yOz$ plane), and so on.
- A 2D line may be represented as "the set of all solutions $(x_0, y_0)$ of the equation $ax + by + c = 0$" [with not all of $a$, $b$, $c$ null]. This representation may be considered convenient for 2D, but in a 3D environment the similar representation "the set of all solutions $(x_0, y_0, z_0)$ of the equation $ax + by + cz + d = 0$" [with not all of $a$, $b$, $c$, $d$ null] represents a plane and not a line; a line could be viewed as an intersection of two planes (algebraically, as the set of all solutions of a system of two equations with three variables), and this could be generalized to $n$ dimensions (a line as the set of all solutions of a system of $n - 1$ equations with $n$ variables), but this is not that convenient anymore.
An alternative convenient representation is the parametric representation:
Consider the points $P_1$ and $P_3$ with the corresponding position vectors $v_1$ and $v_3$. The line between $P_1$ and $P_3$ is the set of all points corresponding to the position vectors $v(\lambda) = \lambda v_3 + (1 - \lambda) v_1 = v_1 + \lambda (v_3 - v_1)$, $\lambda \in \mathbb{R}$.
Consider for example the points $P_1(1,2)$ and $P_3(4,3)$; the line between them has the equation:
$$\frac{x - 1}{4 - 1} = \frac{y - 2}{3 - 2} \Rightarrow x - 1 = 3(y - 2) \Rightarrow x - 3y + 5 = 0;$$
observe that "$x - 3y + 5 = 0$" is a linear system with one equation and two variables, with the solution
$$\begin{cases} x = 3t - 5 \\ y = t \end{cases}, \quad t \in \mathbb{R}.$$
The position vectors of the points are $v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $v_3 = \begin{bmatrix} 4 \\ 3 \end{bmatrix}$;
the points of the line $P_1P_3$ are exactly those points with position vectors given by
$$v(\lambda) = v_1 + \lambda v_2 \ [= v_1 + \lambda (v_3 - v_1)] \ [= (1 - \lambda) v_1 + \lambda v_3] = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \lambda \left( \begin{bmatrix} 4 \\ 3 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix} \right) = \begin{bmatrix} 3\lambda + 1 \\ \lambda + 2 \end{bmatrix}, \ \lambda \in \mathbb{R},$$
which is just another way to describe the general solution $\begin{cases} x = 3t - 5 \\ y = t \end{cases}$, $t \in \mathbb{R}$:
$$t = \lambda + 2 \Rightarrow y = t = \lambda + 2, \quad x = 3t - 5 = 3(\lambda + 2) - 5 = 3\lambda + 1.$$
Some positions of interest on this line:

The Point | Remarks
$P_1$ | "the origin of the line": $v_1 = v(0)$
$P_3$ | $v_1 + v_2 = v(1)$
$P_2$ | not on the line; $v_2$ is "the direction of the line"
$P_4$ | $v_4 = v_1 - v_2 = v(-1)$
$P_7$ | $v_7 = v(1/2)$: "the middle point of the segment $P_1P_3$"
$P_8$ | $v_8 = v(-1/3)$: "$(P_1P_3) \cap Oy$" [the point on the line with null first coordinate] [the y-intercept point]
$P_9$ | $v_9 = v(-2)$: "$(P_1P_3) \cap Ox$" [the point on the line with null second coordinate] [the x-intercept point]

Picture 1.1.3: Points on a 2D line
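The positions in the table above can be reproduced numerically; a small sketch (an addition to these notes):

```python
import numpy as np

v1 = np.array([1.0, 2.0])   # position vector of P1
v3 = np.array([4.0, 3.0])   # position vector of P3
v2 = v3 - v1                # the direction of the line

def v(lam):
    """Position vector of the point v(lambda) on the line P1P3."""
    return v1 + lam * v2

print(v(0), v(1), v(0.5))   # P1, P3 and the middle of the segment
print(v(-1/3))              # first coordinate 0: the y-intercept point
print(v(-2))                # second coordinate 0: the x-intercept point
```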


In the pictures 1.1.1 and 1.1.2, the fact that the points $P_3$, $P_1$ and $P_5$ are collinear is no coincidence. These points correspond to the vectors $v_3 = v_1 + 1 \cdot v_2$, $v_1 = v_1 + 0 \cdot v_2$, $v_5 = v_1 + (-1) \cdot v_2$, which are all of the form $v_1 + \lambda v_2$, so they are on the same line (their edges are collinear).

1.1.9. Example. Consider the set
$$\mathbb{R}^n = \{(x_1, x_2, \dots, x_n);\ x_i \in \mathbb{R},\ i = \overline{1,n}\}.$$
The set $\mathbb{R}^n$ is a real vector space with respect to the "componentwise operations":
Addition:
$$(x_1, x_2, \dots, x_n) + (y_1, y_2, \dots, y_n) \overset{\text{Def}}{=} (x_1 + y_1, x_2 + y_2, \dots, x_n + y_n), \quad \forall (x_1, \dots, x_n), (y_1, \dots, y_n) \in \mathbb{R}^n$$
(the "+" sign on the left-hand side refers to $\mathbb{R}^n$ addition while the "+" signs on the right-hand side refer to $\mathbb{R}$ addition; we use the same graphical sign for different operations and the distinction should be made from context).
Multiplication of a vector by a scalar:
$$\alpha (x_1, x_2, \dots, x_n) \overset{\text{Def}}{=} (\alpha x_1, \alpha x_2, \dots, \alpha x_n), \quad \forall \alpha \in \mathbb{R}.$$
$(\mathbb{R}, +, \cdot)$ is a commutative field; $(\mathbb{R}^n, +)$ is an Abelian group (componentwise addition is associative, commutative, has a neutral element and each element has a symmetric, because of the similar properties of $\mathbb{R}$ addition); the neutral element is $0_{\mathbb{R}^n} = (0, 0, \dots, 0)$, while the symmetric vector of $(x_1, x_2, \dots, x_n)$ is $(-x_1, -x_2, \dots, -x_n)$ (the opposite vector). Moreover, when the operations meet, we have the distributivity properties:
$$(\alpha + \beta)(x_1, \dots, x_n) = ((\alpha + \beta)x_1, \dots, (\alpha + \beta)x_n) = (\alpha x_1 + \beta x_1, \dots, \alpha x_n + \beta x_n) =$$
$$= (\alpha x_1, \dots, \alpha x_n) + (\beta x_1, \dots, \beta x_n) = \alpha (x_1, \dots, x_n) + \beta (x_1, \dots, x_n);$$
$$\alpha ((x_1, \dots, x_n) + (y_1, \dots, y_n)) = \alpha (x_1 + y_1, \dots, x_n + y_n) = (\alpha(x_1 + y_1), \dots, \alpha(x_n + y_n)) =$$
$$= (\alpha x_1 + \alpha y_1, \dots, \alpha x_n + \alpha y_n) = \alpha (x_1, \dots, x_n) + \alpha (y_1, \dots, y_n);$$
$$(\alpha \beta)(x_1, \dots, x_n) = ((\alpha \beta)x_1, \dots, (\alpha \beta)x_n) = (\alpha(\beta x_1), \dots, \alpha(\beta x_n)) = \alpha (\beta (x_1, \dots, x_n)).$$
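The axioms verified above can also be spot-checked numerically; a minimal sketch (an addition to these notes) for the componentwise operations:

```python
import random

# Componentwise operations on R^n, following the definitions above.
def add(x, y):
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(a, x):
    return tuple(a * xi for xi in x)

# Spot-check the distributivity axioms on random data (floating point,
# so compare within a tolerance).
random.seed(0)
n = 4
x = tuple(random.uniform(-5, 5) for _ in range(n))
y = tuple(random.uniform(-5, 5) for _ in range(n))
a, b = random.uniform(-5, 5), random.uniform(-5, 5)

def close(u, v):
    return all(abs(ui - vi) < 1e-9 for ui, vi in zip(u, v))

assert close(scale(a + b, x), add(scale(a, x), scale(b, x)))
assert close(scale(a, add(x, y)), add(scale(a, x), scale(a, y)))
assert close(scale(a * b, x), scale(a, scale(b, x)))
print("axioms hold on this sample")
```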

1.1.10. Example (Vector spaces over finite fields). This last example mentions a special topic which is the beginning of "Coding Theory" and uses Chapters 3 and 4 from [27]. Theorem T. 3.3.3, page 26 [27] proves that for each prime $p$ and for each positive integer $n$ there is a unique finite field with $p^n$ elements (and characteristic $p$), denoted by $\mathbb{F}_{p^n}$. If $\emptyset \neq \mathbb{V}$ is such that $(\mathbb{V}, \mathbb{F}_{p^n})$ is a vector space, then it is a vector space over a finite field, which is a framework used in "Coding Theory" and "Cryptography". The finite field may be, for example, $(\mathbb{Z}_2, +_2, \cdot_2)$ ($\mathbb{Z}_2 = \{0, 1\}$ and the laws are addition and multiplication mod 2) and the vector space may be $(\mathbb{Z}_2^k, \mathbb{Z}_2)$, which is the vector space of all the linear codes of length $k$ over $\mathbb{Z}_2$.
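A minimal sketch (an addition to these notes) of the componentwise operations over $\mathbb{Z}_2$:

```python
# Vectors of length k over the finite field Z2: componentwise mod-2 arithmetic.
def add_mod2(x, y):
    return tuple((xi + yi) % 2 for xi, yi in zip(x, y))

x = (1, 0, 1, 1)
y = (0, 1, 1, 0)
print(add_mod2(x, y))  # (1, 1, 0, 1)
print(add_mod2(x, x))  # (0, 0, 0, 0): over Z2 every vector is its own opposite
```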

1.1.11. Definition. (vector subspace) Consider a vector space $(\mathbb{V}, \mathbb{K})$ and $\mathbb{S} \subseteq \mathbb{V}$ a subset of vectors. $\mathbb{S}$ is called a vector subspace of $(\mathbb{V}, \mathbb{K})$ if:
(1) $\forall x, y \in \mathbb{S}$, $x + y \in \mathbb{S}$;
(2) $\forall \alpha \in \mathbb{K}$, $\forall x \in \mathbb{S}$, $\alpha x \in \mathbb{S}$.

1.1.12. Example. The set $\{\alpha (1, 1, 2);\ \alpha \in \mathbb{R}\}$ is a vector subspace of $(\mathbb{R}^3, \mathbb{R})$.
1.1.13. Example. The set $\{(1, 1, 1) + \alpha (1, 1, 2);\ \alpha \in \mathbb{R}\}$ is not a vector subspace of $(\mathbb{R}^3, \mathbb{R})$.

1.1.14. Example. The set $\mathbb{S} = \{(x_1, x_2, x_3) \in \mathbb{R}^3;\ x_2 = 0\}$ is a vector subspace of $(\mathbb{R}^3, \mathbb{R})$.
Consider $x, y \in \mathbb{S} \Rightarrow x = (x_1, 0, x_3)$ and $y = (y_1, 0, y_3)$;
we have $x + y = (x_1, 0, x_3) + (y_1, 0, y_3) = (x_1 + y_1, 0, x_3 + y_3) \in \mathbb{S}$,
and for $\alpha \in \mathbb{R}$, $\alpha x = \alpha (x_1, 0, x_3) = (\alpha x_1, 0, \alpha x_3) \in \mathbb{S}$.
From the definition of a subspace it follows that $\mathbb{S}$ is a subspace of $(\mathbb{R}^3, \mathbb{R})$. The set may be described in the following way:
$(x_1, 0, x_3) = (x_1, 0, 0) + (0, 0, x_3) = x_1 (1, 0, 0) + x_3 (0, 0, 1)$, so $\mathbb{S} = \{\alpha (1, 0, 0) + \beta (0, 0, 1);\ \alpha, \beta \in \mathbb{R}\}$.
1.1.15. Definition. (Linear combination) For $p \in \mathbb{N}$, $i = \overline{1,p}$, $x^i \in \mathbb{V}$ and $\alpha_i \in \mathbb{K}$, the vector $x = \sum_{i=1}^{p} \alpha_i x^i$ is called the linear combination of the vectors $x^i$ with the scalars $\alpha_i$.
If, moreover, the scalar field $\mathbb{K} = \mathbb{R}$ and $\alpha_i \ge 0$, $\forall i = \overline{1,p}$, and $\sum_{i=1}^{p} \alpha_i = 1$, then the linear combination is called a convex linear combination.
1.1.16. Example. $2 (1, 0, 0) + 3 (0, 1, 0)$ is a linear combination in $\mathbb{R}^3$; its value is $(2, 3, 0)$.
1.1.17. Example. A 3D picture which contains the following objects:
- the points $O(0,0,0)$, $P_1(1,1,0)$, $P_2(0,1,1)$, $P_3(1,0,1)$,
- the vectors $v_1 = \overrightarrow{OP_1}$, $v_2 = \overrightarrow{OP_2}$, $v_3 = \overrightarrow{OP_3}$,
- the linear combinations $v_4 = 2v_1 + 2v_2$, $v_5 = 2v_1 + 2v_3$, $v_6 = 2v_2 + 2v_3$,
- the planes (with different colors): $p_1$ (the set of all linear combinations of $v_1$ and $v_2$), $p_2$ (the set of all linear combinations of $v_1$ and $v_3$), $p_3$ (the set of all linear combinations of $v_2$ and $v_3$),
- the lines: $l_1: (0,0,0) + \alpha (1,1,0)$, $l_2: (0,0,0) + \alpha (0,1,1)$, $l_3: (0,0,0) + \alpha (1,0,1)$.
1.1.18. Example. Several special linear combinations:
- the sum of two vectors: $v_1 + v_2 = 1 \cdot v_1 + 1 \cdot v_2$;
- the subtraction of two vectors: $v_1 - v_2 = 1 \cdot v_1 - 1 \cdot v_2$;
- the null vector: $0 \cdot v_1 + 0 \cdot v_2$;
- the vectors which are "similar" with the vector $v$: $\alpha v$ (they share the same direction; they are dilatations or contractions of the original vector);
- linear combinations with a single vector: $\alpha v$ [represents a line passing through the origin] [a 1-dimensional vector subspace²];
- linear combinations with two vectors: $\alpha v_1 + \beta v_2$ [represents a plane through the origin] [a 2-dimensional vector subspace];
- linear combinations with 3 vectors: $\alpha v_1 + \beta v_2 + \gamma v_3$ [represents a 3-dimensional hyperplane through the origin] [a 3-dimensional vector subspace].
² For dimension, see Definition 1.5.10.


1.1.19. Definition. (Linear covering) If $A \subseteq \mathbb{V}$ is a set of vectors, the set
$$\operatorname{span}_{\mathbb{K}}(A) \overset{\text{Def}}{=} \left\{ \sum_{i=1}^{p} \alpha_i x^i;\ p \in \mathbb{N},\ \alpha_i \in \mathbb{K},\ x^i \in A,\ i = \overline{1,p} \right\}$$
of all the possible linear combinations with vectors from $A$ is the linear covering of $A$ (or the set spanned by $A$) (or the set generated by $A$).
1.1.20. Example. In $(\mathbb{R}_2[X], \mathbb{R})$, for $A = \{1, X\}$,
$$\operatorname{span}_{\mathbb{R}} A = \{a \cdot 1 + b \cdot X;\ a, b \in \mathbb{R}\}$$
is the set of all linear combinations with the polynomials $1$ and $X$, and it is the set $\mathbb{R}_1[X]$.
1.1.21. Remark. The set $\operatorname{span}_{\mathbb{K}}(A)$ depends on the field $\mathbb{K}$ of scalars: $\mathbb{R}^3$ is a vector space over each of the fields $\mathbb{R}$ and $\mathbb{Q}$, but the vector spaces $(\mathbb{R}^3, \mathbb{R})$ and $(\mathbb{R}^3, \mathbb{Q})$ behave differently; for example, the same set of vectors spans differently over $\mathbb{R}$ and over $\mathbb{Q}$:
$$(\sqrt{3}, 1, 0) \in \operatorname{span}_{\mathbb{R}}(\{(1, 0, 0), (0, 1, 0)\}),$$
$$(\sqrt{3}, 1, 0) \notin \operatorname{span}_{\mathbb{Q}}(\{(1, 0, 0), (0, 1, 0)\}).$$

1.1.22. Remark. Consider a finite set of vectors $\{x^i;\ i = \overline{1,p}\}$ and the vector equation $\sum_{i=1}^{p} \alpha_i x^i = 0$ (with unknowns the $p$ scalars $\alpha_i$). This equation is always compatible, as it always has the solution $\alpha_i = 0$, $i = \overline{1,p}$ (the null solution). Depending on the set of vectors, the null solution may be unique or not. The following notions will distinguish between these two cases.
1.1.23. Definition. (Linear dependence and independence) A finite set of vectors $\{x^i;\ i = \overline{1,p}\}$ is called linearly dependent if at least one vector is a linear combination of the other vectors. The situation where the set is not linearly dependent is called linearly independent.
$\{x^i;\ i = \overline{1,p}\}$ linearly dependent: $\exists i_0 \in \overline{1,p}$, $\exists \alpha_i \in \mathbb{K}$, $i \in \{1, \dots, p\} \setminus \{i_0\}$, such that $x^{i_0} = \sum_{i=1,\, i \neq i_0}^{p} \alpha_i x^i$.
Equivalent: $\exists i_0 \in \overline{1,p}$, $x^{i_0} \in \operatorname{span}_{\mathbb{K}}(\{x^i;\ i = \overline{1,p}\} \setminus \{x^{i_0}\})$.
Equivalent: the vector equation $\sum_{i=1}^{p} \alpha_i x^i = 0$ has some solutions other than the null solution.
$\{x^i;\ i = \overline{1,p}\}$ linearly independent: $\forall i_0 \in \overline{1,p}$, $x^{i_0} \notin \operatorname{span}_{\mathbb{K}}(\{x^i;\ i = \overline{1,p}\} \setminus \{x^{i_0}\})$.
Equivalent: the vector equation $\sum_{i=1}^{p} \alpha_i x^i = 0$ does not have any solution other than the null solution.
1.1.24. Example. The set of $\mathbb{R}^4$ vectors $\{(1, 2, 2, 1), (1, 3, 4, 0), (2, 5, 6, 1)\}$ is linearly dependent because $(1, 2, 2, 1) + (1, 3, 4, 0) = (2, 5, 6, 1)$.
1.1.25. Example. The set of $\mathbb{R}^4$ vectors $\{(1, 2, 2, 1), (1, 3, 4, 0), (3, 5, 2, 1)\}$ is linearly independent because, when considering the vector equation
$$\alpha_1 (1, 2, 2, 1) + \alpha_2 (1, 3, 4, 0) + \alpha_3 (3, 5, 2, 1) = (0, 0, 0, 0),$$
this translates into the linear homogeneous system
$$\begin{cases} \alpha_1 + \alpha_2 + 3\alpha_3 = 0 \\ 2\alpha_1 + 3\alpha_2 + 5\alpha_3 = 0 \\ 2\alpha_1 + 4\alpha_2 + 2\alpha_3 = 0 \\ \alpha_1 + 0 \cdot \alpha_2 + \alpha_3 = 0 \end{cases}$$
which has only the null solution.
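Anticipating the matrix criterion of Section 1.6 (the rank equals the number of vectors if and only if the set is linearly independent), both examples can be checked numerically; a sketch added here:

```python
import numpy as np

A = np.array([[1, 2, 2, 1],
              [1, 3, 4, 0],
              [3, 5, 2, 1]]).T   # the vectors of Example 1.1.25 as columns
print(np.linalg.matrix_rank(A))  # 3: the columns are linearly independent

B = np.array([[1, 2, 2, 1],
              [1, 3, 4, 0],
              [2, 5, 6, 1]]).T   # the vectors of Example 1.1.24 as columns
print(np.linalg.matrix_rank(B))  # 2: the columns are linearly dependent
```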

1.1.26. Remark. (Equivalent definitions for linear independence) A finite set of vectors $\{x^i;\ i = \overline{1,p}\}$ is linearly independent if and only if one of the following statements is true:
(1) The vector equation $\sum_{i=1}^{p} \alpha_i x^i = 0$ has only the null solution (the null vector is a linear combination only with null scalars);
(2) $\forall \alpha_i \in \mathbb{K}$, $i = \overline{1,p}$: $\left( \exists i_0 \in \{1, \dots, p\},\ \alpha_{i_0} \neq 0 \right) \Rightarrow \sum_{i=1}^{p} \alpha_i x^i \neq 0$ (each linear combination with at least one nonnull scalar is nonnull).
Proof. The two statements are connected by the result from Logic which states $(p \to q) \equiv (\bar{q} \to \bar{p})$ (see 7.0.21).

1.1.27. Remark. (Procedure to study the nature of a finite set of vectors)
Consider the finite vector set $\{x^i;\ i = \overline{1,p}\}$ in the vector space $(\mathbb{V}, \mathbb{K})$.
Step 1. Consider the vector equation $\sum_{i=1}^{p} \alpha_i x^i = 0$, with unknowns $\alpha_1, \dots, \alpha_p$.
Step 2. Find all the solutions of the Step 1 equation.
Step 3. Depending on the findings from Step 2, draw the conclusion:
- [the null solution is unique] the vector set is linearly independent;
- [the null solution is not unique] the vector set is linearly dependent; find a linear dependency of the vectors by using a nonnull solution.

1.1.28. Example. In the vector space $(\mathbb{R}^3, \mathbb{R})$, study the nature of the set $\{v^1(m), v^2(m), v^3(m)\}$ with respect to the parameter $m \in \mathbb{R}$, where $v^1(m) = (m, 1, 1)$, $v^2(m) = (1, m, 1)$, $v^3(m) = (1, 1, m)$.
Step 1: Consider the vector equation $\alpha_1 v^1(m) + \alpha_2 v^2(m) + \alpha_3 v^3(m) = 0$, with the unknowns $\alpha_1$, $\alpha_2$, $\alpha_3$, which becomes:
$$\alpha_1 (m, 1, 1) + \alpha_2 (1, m, 1) + \alpha_3 (1, 1, m) = (0, 0, 0),$$
$$(m\alpha_1 + \alpha_2 + \alpha_3,\ \alpha_1 + m\alpha_2 + \alpha_3,\ \alpha_1 + \alpha_2 + m\alpha_3) = (0, 0, 0).$$
We get the linear homogeneous system:
$$\begin{cases} m\alpha_1 + \alpha_2 + \alpha_3 = 0 \\ \alpha_1 + m\alpha_2 + \alpha_3 = 0 \\ \alpha_1 + \alpha_2 + m\alpha_3 = 0 \end{cases}$$
Step 2: Solve the system and obtain the complete solution (dependent on the parameter $m$):
$$S(m) = \begin{cases} \{(0, 0, 0)\}, & m \in \mathbb{R} \setminus \{-2, 1\} \\ \{(-a - b, a, b);\ a, b \in \mathbb{R}\}, & m = 1 \\ \{(a, a, a);\ a \in \mathbb{R}\}, & m = -2 \end{cases}$$
Step 3: Conclusion:
- for $m \in \mathbb{R} \setminus \{-2, 1\}$ the vector set is linearly independent;
- for $m = -2$ the vector set is linearly dependent and a linear dependency is $v^1(-2) + v^2(-2) + v^3(-2) = 0$;
- for $m = 1$ the vector set is linearly dependent and a linear dependency is $-2v^1(1) + v^2(1) + v^3(1) = 0$.
In the conclusion we get two types of information:
- a qualitative type (about the nature of the vector set: dependent/independent);
- a quantitative type (for the linearly dependent situation, we get a linear dependency).
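The critical values of the parameter can also be found from the determinant of the matrix having the three vectors as columns; a sketch (an addition, assuming sympy is available):

```python
import sympy as sp

m = sp.symbols('m')
M = sp.Matrix([[m, 1, 1],
               [1, m, 1],
               [1, 1, m]])   # columns are v1(m), v2(m), v3(m)

# The set is linearly dependent exactly when the determinant vanishes.
print(sp.factor(M.det()))              # (m - 1)**2 * (m + 2)
print(sp.solve(sp.Eq(M.det(), 0), m))  # [-2, 1]
```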

1.1.29. Definition. (Set of generators) Consider $(\mathbb{V}, \mathbb{K})$ and the vector set $\mathbb{X} \subseteq \mathbb{V}$. We say that the finite set of vectors $\{x^i;\ i = \overline{1,p}\}$ is a set of generators for (or generates) the set $\mathbb{X}$ when $\mathbb{X} \subseteq \operatorname{span}(\{x^i;\ i = \overline{1,p}\})$.
If the set $\mathbb{X}$ is not specified, we consider that $\mathbb{X} = \mathbb{V}$ (the set generates the whole vector space).
1.1.30. Example. In the vector space $(\mathbb{R}_2[X], \mathbb{R})$, for $A = \{1, X\}$,
$$\operatorname{span}_{\mathbb{R}} A = \{a \cdot 1 + b \cdot X;\ a, b \in \mathbb{R}\}$$
is the set of all linear combinations with the polynomials $1$ and $X$, so $A$ generates the set $\mathbb{R}_1[X]$.
Terminological Differences. The terms "vector", "position vector" and "point", although usually used interchangeably to designate an element of a vector space, have, strictly speaking, different meanings.
The term "point" refers to an element of the set $\mathbb{V}$ and designates a place in space. The set $\mathbb{V}$ is regarded as an affine space and the only accepted operation with points is subtraction, which gives a vector.
The term "vector" refers to an object defined by two points, the first one understood as the origin of the vector and the other one understood as the edge of the vector.
The term "position vector" refers to those vectors for which the origin is the origin (the null point) of the space, and this is the term most suited for vector spaces.
1.1.31. Example. The points $P = (2, 5)$ and $Q = (6, 2)$ are identified with the position vectors $\overrightarrow{OP}$ and $\overrightarrow{OQ}$.
The operation $P - Q$ has as its result not the vector $\overrightarrow{QP}$ (with the origin the point $Q$ and the edge $P$, which in vector spaces does not exist) but the vector $\overrightarrow{OR}$, with $R = P - Q = (-4, 3)$ (an operation with points). Although the vectors $\overrightarrow{QP}$ and $\overrightarrow{OR}$ may be seen (in other settings) as congruent, they have different origins and only one of them is a position vector (namely $\overrightarrow{OR}$); the other one, $\overrightarrow{QP}$, is not an element of the vector space.

Changing the scalar multiplication may change the properties of the vector space. Consider the set $\mathbb{R}(X)$ of all fractions of polynomials with real coefficients, $\frac{p(X)}{q(X)}$.
The set $\mathbb{R}(X)$ is a field together with addition and multiplication.
The structure $(\mathbb{R}(X), \mathbb{R}(X))$ is a vector space over itself (and in this case the scalar multiplication is the usual multiplication of fractions of polynomials), with dimension 1 (see Section 1.5 for dimension).
Consider an alternate "scalar multiplication" $\ast : \mathbb{R}(X) \times \mathbb{R}(X) \to \mathbb{R}(X)$:
$$(\alpha(X), v(X)) \mapsto \alpha(X) \ast v(X) = \alpha(X^2)\, v(X).$$
Because $(\alpha + \beta)(X^2) = \alpha(X^2) + \beta(X^2)$ and $(\alpha \beta)(X^2) = \alpha(X^2)\, \beta(X^2)$ and $1 \in \mathbb{R}(X)$ with $1(X^2) = 1$, all the axioms are met.
The structure $(\mathbb{R}(X), \mathbb{R}(X))$ with the usual multiplication has dimension 1 (and a basis is the polynomial $1$), because $v(X) = v(X) \cdot 1$, $\forall v \in \mathbb{R}(X)$.
The structure $(\mathbb{R}(X), \mathbb{R}(X))$ with "$\ast$" as scalar multiplication:
Consider a vector $v(X) = \frac{p(X)}{q(X)}$, with $p$ and $q$ polynomials. For each polynomial, group the even and odd exponents of $X$:
$$p(X) = \sum_{k=0}^{n} a_k X^k = \sum_{i=0}^{[n/2]} a_{2i} X^{2i} + \sum_{j=0}^{[(n-1)/2]} a_{2j+1} X^{2j+1} = p_1(X^2) + p_2(X^2)\, X,$$
where
$$p_1(X) = \sum_{i=0}^{[n/2]} a_{2i} X^i, \qquad p_2(X) = \sum_{j=0}^{[(n-1)/2]} a_{2j+1} X^j,$$
and similarly $q(X) = q_1(X^2) + q_2(X^2)\, X$. Then:
$$v(X) = \frac{p(X)}{q(X)} = \frac{p_1(X^2) + p_2(X^2)X}{q_1(X^2) + q_2(X^2)X} = \frac{(p_1(X^2) + p_2(X^2)X)(q_1(X^2) - q_2(X^2)X)}{q_1^2(X^2) - q_2^2(X^2)X^2} =$$
$$= \frac{p_1(X^2)q_1(X^2) - p_2(X^2)q_2(X^2)X^2}{q_1^2(X^2) - q_2^2(X^2)X^2} + \frac{p_2(X^2)q_1(X^2) - p_1(X^2)q_2(X^2)}{q_1^2(X^2) - q_2^2(X^2)X^2}\, X.$$
Observe that the fraction $v(X) = \frac{p(X)}{q(X)}$ may be written as a linear combination of the polynomials $1$ and $X$ with the scalars
$$\alpha_1(X) = \frac{p_1(X)q_1(X) - p_2(X)q_2(X)X}{q_1^2(X) - q_2^2(X)X} \quad \text{and} \quad \alpha_2(X) = \frac{p_2(X)q_1(X) - p_1(X)q_2(X)}{q_1^2(X) - q_2^2(X)X},$$
meaning that
$$v = \alpha_1 \ast 1 + \alpha_2 \ast X,$$
which shows that $\{1, X\}$ is a generating set. Since the set $\{1, X\}$ is also linearly independent (exercise), the structure with "$\ast$" has dimension 2; it follows that the structures $(\mathbb{R}(X), \mathbb{R}(X), +, \cdot)$ and $(\mathbb{R}(X), \mathbb{R}(X), +, \ast)$ are different, because of the different scalar multiplication.
1.1.1. Lengths and Angles. We briefly mention the notions³ connecting the vector space with Euclidean Geometry; later there is an entire dedicated chapter (Chapter 3).
For two vectors $x, y \in \mathbb{R}^n$, the expression
$$\langle x, y \rangle = x \cdot y = \sum_{i=1}^{n} x_i y_i$$
is called the scalar product.
The vectors are called perpendicular (orthogonal) [written $x \perp y$] when $\langle x, y \rangle = 0$.
The length of a vector is $\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^{n} x_i^2}$.
A vector is called a versor (unit vector) when $\|v\| = 1$. The versor of a nonzero vector $v$ is $\frac{v}{\|v\|}$.
When two vectors are perpendicular, Pythagoras' Theorem takes place: $v \perp w \Rightarrow \|v + w\|^2 = \|v\|^2 + \|w\|^2$.
Proof. $\sum_{i=1}^{n} v_i w_i = 0 \Rightarrow \|v + w\|^2 = \sum_{i=1}^{n} (v_i + w_i)^2 = \sum_{i=1}^{n} v_i^2 + \sum_{i=1}^{n} w_i^2 + 2\sum_{i=1}^{n} v_i w_i = \|v\|^2 + \|w\|^2$.
1.1.32. Remark. When $v \perp w$:
$$\|v - w\|^2 = \sum_{i=1}^{n} (v_i - w_i)^2 = \sum_{i=1}^{n} v_i^2 + \sum_{i=1}^{n} w_i^2 - 2\sum_{i=1}^{n} v_i w_i = \sum_{i=1}^{n} v_i^2 + \sum_{i=1}^{n} w_i^2 = \|v\|^2 + \|w\|^2 = \|v + w\|^2.$$
[The parallelogram generated by the two vectors is a rectangle, so the two diagonals have the same length.]
For two nonzero vectors, the cosine of the angle between their directions is $\cos(\widehat{v, w}) = \dfrac{\langle v, w \rangle}{\|v\|\, \|w\|}$.
³ These are the default notions, in the sense that they are considered valid when no other specification is made.
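A short numerical illustration of these notions (an addition to these notes):

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([-2.0, 1.0])   # <v, w> = 0, so v is perpendicular to w

print(np.dot(v, w))          # 0.0: the vectors are orthogonal

# Pythagoras: ||v + w||^2 == ||v||^2 + ||w||^2 for orthogonal vectors
print(np.linalg.norm(v + w)**2, np.linalg.norm(v)**2 + np.linalg.norm(w)**2)

u = np.array([1.0, 0.0])
cos_angle = np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u))
print(cos_angle)             # the cosine of the angle between v and u
```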

1.2. Properties of vector spaces

1.2.1. Proposition. (Algebraic rules in a vector space)
(1) $\forall \alpha, \beta \in \mathbb{K}$, $\forall x, y \in \mathbb{V}$, we have: $\alpha(x - y) = \alpha x - \alpha y$ and $(\alpha - \beta)x = \alpha x - \beta x$;
(2) $\forall \alpha \in \mathbb{K}$: $\alpha \cdot 0_{\mathbb{V}} = 0_{\mathbb{V}}$;
(3) $\forall x \in \mathbb{V}$: $0_{\mathbb{K}} \cdot x = 0_{\mathbb{V}}$;
(4) $\forall x \in \mathbb{V}$: $(-1_{\mathbb{K}}) \cdot x = -x$;
(5) $\alpha \cdot x = 0_{\mathbb{V}} \Rightarrow \alpha = 0_{\mathbb{K}}$ or $x = 0_{\mathbb{V}}$.
Proof. Consider $\alpha, \beta \in \mathbb{K}$ and $x, y \in \mathbb{V}$ arbitrarily chosen.
(1) $\alpha x = \alpha((x - y) + y) = \alpha(x - y) + \alpha y \Rightarrow \alpha(x - y) = \alpha x - \alpha y$; $\alpha x = ((\alpha - \beta) + \beta)x = (\alpha - \beta)x + \beta x \Rightarrow (\alpha - \beta)x = \alpha x - \beta x$.
(2) $\alpha \cdot 0_{\mathbb{V}} = \alpha(x - x) = \alpha x - \alpha x = 0_{\mathbb{V}}$. [alternative: $\alpha \cdot 0_{\mathbb{V}} = \alpha(0_{\mathbb{V}} + 0_{\mathbb{V}}) = \alpha \cdot 0_{\mathbb{V}} + \alpha \cdot 0_{\mathbb{V}}$; add $-\alpha \cdot 0_{\mathbb{V}}$ to both sides to get $\alpha \cdot 0_{\mathbb{V}} = 0_{\mathbb{V}}$.]
(3) $0_{\mathbb{K}} \cdot x = (\alpha - \alpha)x = \alpha x - \alpha x = 0_{\mathbb{V}}$.
(4) $0_{\mathbb{V}} = 0_{\mathbb{K}} \cdot x = (1_{\mathbb{K}} - 1_{\mathbb{K}})x = (1_{\mathbb{K}} + (-1_{\mathbb{K}}))x = 1_{\mathbb{K}} x + (-1_{\mathbb{K}})x \Rightarrow (-1_{\mathbb{K}})x = -x$.
(5) $\alpha \neq 0 \Rightarrow \exists \alpha^{-1} \in \mathbb{K}$ and $x = 1_{\mathbb{K}} x = (\alpha^{-1} \alpha)x = \alpha^{-1}(\alpha x) = \alpha^{-1} \cdot 0_{\mathbb{V}} = 0_{\mathbb{V}}$.
Exercise: Study this result as a theoretical exercise.


1.2.2. Remark (Properties of the linear covering of a set).
- A set $A$ is linearly dependent $\iff$ $\exists v \in A$, $v \in \operatorname{span}(A \setminus \{v\})$.
- A set $A$ is linearly independent $\iff$ $\forall v \in A$, $v \notin \operatorname{span}(A \setminus \{v\})$.
- $\operatorname{span}(\{v\}) = \{\alpha v;\ \alpha \in \mathbb{K}\}$.
- $\emptyset \neq A \subseteq \mathbb{V} \Rightarrow A \subseteq \operatorname{span}(A)$ [proof: $\forall v \in A$, $v = 1 \cdot v$, which is a linear combination].
- $\emptyset \neq A \subseteq \mathbb{V} \Rightarrow 0 \in \operatorname{span}(A)$ [proof: $\forall v \in A$, $0 = 0 \cdot v$, which is a linear combination].
- $\emptyset \neq A \subseteq \mathbb{V} \Rightarrow \operatorname{span}(A) = \operatorname{span}(A \cup \{0\})$.
- $\emptyset \neq A_1 \subseteq A_2 \subseteq \mathbb{V} \Rightarrow \operatorname{span}(A_1) \subseteq \operatorname{span}(A_2)$.

1.2.3. Remark. $\operatorname{span}((x_i)_{i=\overline{1,n}})$ is a vector subspace. [In fact, the linear covering of any set, not necessarily finite, is a vector subspace, with a similar argument; with the convention $\operatorname{span}(\emptyset) = \{0\}$, all the possibilities are covered.]
Proof. Consider $v_1, v_2 \in \operatorname{span}((x_i)_{i=\overline{1,n}})$ and $\alpha \in \mathbb{K}$ $\Rightarrow$ $\exists \alpha_i^1, \alpha_i^2 \in \mathbb{K}$, $i = \overline{1,n}$, such that $v_j = \sum_{i=1}^{n} \alpha_i^j x_i$, $j = 1, 2$; then $v_1 + v_2 = \sum_{i=1}^{n} (\alpha_i^1 + \alpha_i^2) x_i \in \operatorname{span}((x_i)_{i=\overline{1,n}})$ and $\alpha v_1 = \sum_{i=1}^{n} (\alpha \alpha_i^1) x_i \in \operatorname{span}((x_i)_{i=\overline{1,n}})$.

1.2.4. Remark. Consider a vector subspace $\mathbb{V}_0$ in $(\mathbb{V}, \mathbb{K})$. Then
$$\forall n \in \mathbb{N},\ \forall x_i \in \mathbb{V}_0,\ \forall \alpha_i \in \mathbb{K},\ i = \overline{1,n}: \quad \sum_{i=1}^{n} \alpha_i x_i \in \mathbb{V}_0$$
(a subspace contains any linear combination of its elements) $(\operatorname{span}(\mathbb{V}_0) \subseteq \mathbb{V}_0)$.
(A slightly stronger statement actually takes place: if $\mathbb{V}_0$ is a subspace, the linear covering of any subset of the subspace is included in the subspace: $\forall \mathbb{V}_1 \subseteq \mathbb{V}_0$, $\operatorname{span}(\mathbb{V}_1) \subseteq \mathbb{V}_0$.)
Proof. By induction over $n \in \mathbb{N}$: for $n = 1$, from axiom 2 of the definition of a subspace it follows that for any $x_1 \in \mathbb{V}_0$ and for any scalar $\alpha_1 \in \mathbb{K}$ we have $\alpha_1 x_1 \in \mathbb{V}_0$. Suppose that the property takes place for $n \in \mathbb{N}$ and consider $n + 1$ arbitrary vectors and scalars $x_i \in \mathbb{V}_0$, $\alpha_i \in \mathbb{K}$, $i = \overline{1, n+1}$. Then:
$$\sum_{i=1}^{n+1} \alpha_i x_i = \sum_{i=1}^{n} \alpha_i x_i + \alpha_{n+1} x_{n+1} \in \mathbb{V}_0,$$
because $\sum_{i=1}^{n} \alpha_i x_i \in \mathbb{V}_0$ (the property for $n$), $\alpha_{n+1} x_{n+1} \in \mathbb{V}_0$ (the property for $n = 1$), and the subspace is closed under addition.

1.2.5. Remark. $\operatorname{span}((x_i)_{i=\overline{1,n}}) = \bigcap\limits_{\substack{\mathbb{V}_0 \text{ subspace} \\ \mathbb{V}_0 \supseteq (x_i)_{i=\overline{1,n}}}} \mathbb{V}_0$ (the linear covering of a set is the intersection of all vector subspaces containing the set) (the linear covering of a set is the smallest, in the sense of inclusion, subspace which contains the set) (the word "smallest" is used with respect to the inclusion relation; "smallest" in this context means that any subspace which includes the set also includes the linear covering of the set).
Proof. $\operatorname{span}((x_i)_{i=\overline{1,n}}) \subseteq \bigcap\limits_{\substack{\mathbb{V}_0 \text{ subspace} \\ \mathbb{V}_0 \supseteq (x_i)_{i=\overline{1,n}}}} \mathbb{V}_0$, because when a subspace includes the set, it also includes the linear covering of the set (Remark 1.2.4); the other inclusion follows from Remark 1.2.3: $\operatorname{span}((x_i)_{i=\overline{1,n}})$ is itself a subspace including the set, so it is within the family of subspaces which include the set, from where it follows that $\bigcap \mathbb{V}_0 \subseteq \operatorname{span}((x_i)_{i=\overline{1,n}})$, because the intersection is included in each set which is intersected.


1.2.6. Remark. $A_1 \subseteq A_2 \subseteq \operatorname{span}(A_1) \Rightarrow \operatorname{span}(A_2) = \operatorname{span}(A_1)$.
1.2.7. Remark. $\operatorname{span}(\operatorname{span}(A)) = \operatorname{span}(A)$.


1.2.8. Remark (Exchange Property). $v \in \operatorname{span}(A \cup \{w\}) \setminus \operatorname{span}(A) \Rightarrow w \in \operatorname{span}(A \cup \{v\})$ [if a vector $v$ is a linear combination of vectors from $A$ together with $w$, but not of vectors from $A$ only, then $w$ is also a linear combination of $A$ together with $v$].
Proof. $v \in \operatorname{span}(A \cup \{w\}) \setminus \operatorname{span}(A) \Rightarrow v \in \operatorname{span}(A \cup \{w\})$ and $v \notin \operatorname{span}(A)$.
Since $v \in \operatorname{span}(A \cup \{w\})$: $\exists n \in \mathbb{N}$, $\exists \alpha \in \mathbb{K}$, $\exists \alpha_i \in \mathbb{K}$, $\exists v_i \in A$, $i = \overline{1,n}$, such that $v = \alpha w + \sum_{i=1}^{n} \alpha_i v_i$.
The scalar $\alpha$ is nonzero, because otherwise, if $\alpha = 0$, then $v = \sum_{i=1}^{n} \alpha_i v_i \in \operatorname{span}(A)$, which is a contradiction.
By dividing by $\alpha$, we get $w = \frac{1}{\alpha} v - \sum_{i=1}^{n} \frac{\alpha_i}{\alpha} v_i \in \operatorname{span}(A \cup \{v\})$.

1.3. Vector Spaces Examples

1.3.1. Example. The space $\mathbb{K}^n$ of all the finite sequences with $n$ components from the field $\mathbb{K}$. The elements of the set $\mathbb{K}^n$ are $(x_1, x_2, \dots, x_n)$; addition and multiplication by a scalar are done componentwise. Further examples are obtained for $\mathbb{K} = \mathbb{R}$, $\mathbb{K} = \mathbb{C}$, $\mathbb{K} = \mathbb{Q}$ and for finite fields.
1.3.2. Example. Consider a set $X \neq \emptyset$ and the set of all functions with $X$ as definition domain and with finitely many nonzero real values, $\mathcal{F}(X) = \{f(\cdot) : X \to \mathbb{R};\ f^{-1}(\mathbb{R}^*) \text{ is finite}\}$. The set $\mathcal{F}(X)$, together with the usual algebraic operations (with functions), is a vector space. [Interesting particular cases: $X = \{1, \dots, n\}$; $X = \mathbb{N}$.]
1.3.3. Example. The set of matrices with elements from a field $\mathbb{K}$ and with $m$ lines and $n$ columns. The set is denoted by $\mathcal{M}_{m \times n}(\mathbb{K})$; the vectors are matrices, vector addition is the matrix addition and the multiplication by a scalar is the multiplication of a matrix by a scalar.
1.3.4. Example. The set of all real sequences. The set is denoted by $\mathbb{R}^{\mathbb{N}}$; a vector is a sequence of real numbers, understood as the ordered and infinite succession of the elements of the sequence. Vector addition is the componentwise addition of the sequences,
$$(a_n)_{n \in \mathbb{N}} + (b_n)_{n \in \mathbb{N}} = (a_n + b_n)_{n \in \mathbb{N}}$$
(a new sequence, whose terms are obtained by adding the corresponding terms of the sequences), and the multiplication of a vector by a scalar is
$$\alpha (a_n)_{n \in \mathbb{N}} = (\alpha a_n)_{n \in \mathbb{N}}.$$
1.3.5. Example. The set of sequences $(a_n)_{n \in \mathbb{N}}$ for which the series $\sum_{n=0}^{\infty} a_n^2$ converges [this means that the limit $\lim_{n \to \infty} \sum_{k=0}^{n} a_k^2$ exists and is finite], together with the usual sequence addition and multiplication by a scalar.
1.3.6. Example. The set of all Cauchy rational sequences (a sequence $(a_n)_{n \in \mathbb{N}} \subseteq \mathbb{Q}$ is called a Cauchy sequence when $\forall \varepsilon > 0$, $\exists n_\varepsilon \in \mathbb{N}$, $\forall n, m \ge n_\varepsilon$, $|a_n - a_m| < \varepsilon$), with the usual sequence operations, is a vector space.

1.3.7. Example. The set of all real polynomials in the unknown $t$, denoted by $\mathbb{R}[t]$. When $p(t) = a_0 + a_1 t + \dots + a_n t^n$ and $q(t) = b_0 + b_1 t + \dots + b_n t^n$ are two polynomials from $\mathbb{R}[t]$, with the operations
$$p(t) + q(t) = (a_0 + b_0) + (a_1 + b_1)t + \dots + (a_n + b_n)t^n,$$
$$\alpha p(t) = \alpha a_0 + \alpha a_1 t + \dots + \alpha a_n t^n,$$
$$0 = 0 \ \text{(the null polynomial)},$$
we obtain a vector space structure (when adding two polynomials with different degrees, the polynomial with the lowest degree is completed with null coefficients up to the bigger degree; the null polynomial is considered to have degree $-\infty$).
1.3.8. Example. The set of all real polynomials in the unknown $t$ and with degree (in $t$) at most $n$, denoted by $\mathbb{R}_n[t]$, with the usual polynomial operations.
1.3.9. Example. The set of all functions $f(\cdot) : \mathbb{R} \to \mathbb{R}$ of class $C^1$ which are solutions of the differential equation $f'(t) + af(t) = 0$, $\forall t \in \mathbb{R}$, together with the usual function operations.
1.3.10. Example. The set $D^{\infty}(\mathbb{R}, \mathbb{R})$ of real functions which are indefinitely differentiable.
1.3.11. Example. The set of all real functions with domain $[a, b]$ and codomain $\mathbb{R}$, denoted by $\mathcal{F}([a, b], \mathbb{R})$.
1.3.12. Example. The set of all Lipschitzian functions from $\mathcal{F}([a, b], \mathbb{R})$ (functions $f(\cdot) : [a, b] \to \mathbb{R}$ such that $\exists k_f > 0$ with $|f(x) - f(y)| \le k_f |x - y|$, $\forall x, y \in [a, b]$), denoted by $\mathcal{L}([a, b], \mathbb{R})$.


1.4. Exercises

1.4.1. Example. The set of vectors $\{(1, 1, 1), (1, 2, 3), (3, 2, 1)\}$ generates the vector space $(\mathbb{R}^3, \mathbb{R})$.
1.4.2. Exercise. Consider a real vector space $(\mathbb{V}, \mathbb{R})$ and the operations $+ : (\mathbb{V} \times \mathbb{V}) \times (\mathbb{V} \times \mathbb{V}) \to \mathbb{V} \times \mathbb{V}$, defined by
$$(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2),$$
and $\cdot : \mathbb{C} \times (\mathbb{V} \times \mathbb{V}) \to \mathbb{V} \times \mathbb{V}$, defined by
$$(\alpha + i\beta)(x, y) = (\alpha x - \beta y,\ \beta x + \alpha y).$$
Show that $(\mathbb{V} \times \mathbb{V}, \mathbb{C})$ with the above operations is a complex vector space (this vector space is called the complexification of the real space $(\mathbb{V}, \mathbb{R})$).
1.4.3. Exercise. Show that the set of the nondifferentiable functions over $[a, b]$ is not a vector space.
1.4.4. Exercise. Show that the union of two vector subspaces is generally not a subspace.
1.4.5. Exercise. Show that the set $\mathbb{V}_0 = \{x \in \mathbb{R}^n;\ Ax = 0\}$ is a vector subspace, where $A \in \mathcal{M}_{m \times n}(\mathbb{R})$.
1.4.6. Exercise. Consider the subspaces $\mathbb{A} = \{(0, a, b, 0);\ a, b \in \mathbb{R}\}$, $\mathbb{B} = \{(0, 0, a, b);\ a, b \in \mathbb{R}\}$. Determine $\mathbb{A} + \mathbb{B}$.
1.4.7. Exercise. Show that if the linear operator $U(\cdot) : \mathbb{R}^n \to \mathbb{R}^n$ satisfies $U^2(\cdot) + U(\cdot) + I(\cdot) = O(\cdot)$, then $U(\cdot)$ is bijective.
1.5. Representation in Vector Spaces

1.5.1. Proposition. (Equivalent definitions for a basis) Consider a finite set of vectors denoted by $B$, from the vector space $(\mathbb{V}, \mathbb{K})$. The following statements are equivalent:
(1) $B$ is linearly independent and it is maximal with this property (in the sense that $\forall v \in \mathbb{V} \setminus B$, the set $B \cup \{v\}$ is not linearly independent anymore);
(2) $B$ generates $\mathbb{V}$ and it is minimal with this property (in the sense that $\forall v \in B$, the set $B \setminus \{v\}$ doesn't generate $\mathbb{V}$ anymore);
(3) $B$ generates $\mathbb{V}$ and it is linearly independent.
Proof. The proof will follow the steps: (1)$\Rightarrow$(2)$\Rightarrow$(3)$\Rightarrow$(1).
(1)$\Rightarrow$(2)
Assume $B$ is linearly independent and maximal (with this property); we prove by contradiction that the set $B$ generates $\mathbb{V}$:
Assume that $B$ does not generate $\mathbb{V}$; then $\exists v_0 \in \mathbb{V} \setminus \operatorname{span}(B)$ and the new set $B_1 = B \cup \{v_0\}$ is linearly independent (because otherwise $v_0$ would be a linear combination of elements of $B$) and strictly includes $B$, which is a contradiction with the maximality of $B$ (with respect to the linear independence property). The contradiction originates from the hypothesis that $B$ doesn't generate $\mathbb{V}$, so, by contradiction, we get that $B$ generates $\mathbb{V}$.
We prove, also by contradiction, that $B$ is minimal (as a generating set for $\mathbb{V}$):
Assume by contradiction that $B$, as a generating set of $\mathbb{V}$, is not minimal. Then there is at least one element $v_1 \in B$ such that $B \setminus \{v_1\}$ still generates $\mathbb{V}$ (at least one element may be removed from $B$ without affecting the generating property); since $v_1$ is then a linear combination of $B \setminus \{v_1\}$, it follows that the initial set $B$ is not linearly independent, which is a contradiction with the hypothesis. Since the contradiction originates from the hypothesis that $B$ as a generating set is not minimal, it follows that $B$ is minimal (as a generating set of $\mathbb{V}$).
(2)$\Rightarrow$(3)
Since $B$ generates $\mathbb{V}$, suppose by contradiction that the set $B$ is not linearly independent: $\exists v_2 \in B$ such that $v_2 \in \operatorname{span}(B \setminus \{v_2\})$; then the set $B \setminus \{v_2\} \subsetneq B$ still holds the generating property (because each vector of the space is a linear combination of vectors from $B$; if $v_2$ participates in this linear combination, by replacing it with its linear combination from $B \setminus \{v_2\}$ we get a new linear combination only with vectors from $B \setminus \{v_2\}$), which is a contradiction with the minimality of $B$.
(3)$\Rightarrow$(1)
Since $B$ is linearly independent, if $B$ is not maximal then there is a vector $v' \in \mathbb{V} \setminus B$ such that $B \cup \{v'\}$ is still linearly independent. This means that $v' \notin \operatorname{span}(B)$, which contradicts that $B$ generates $\mathbb{V}$.
1.5.2. Definition. In a vector space $(\mathbb{V}, \mathbb{K})$, a set $B \subseteq \mathbb{V}$ is called a basis when $B$ satisfies one of the equivalent statements from Proposition 1.5.1. A basis for which the order of the vectors also matters is called an ordered basis.
1.5.3. Definition. A vector space with at least one basis which is a finite set is called a vector space of finite type.
A vector space which has no finite basis is called a vector space of infinite type.
1.5.4. Theorem. (The sufficiency of maximality in a generating set) If the finite set $S$ generates $\mathbb{V}$ and the set $B_0 \subseteq S$ is linearly independent, then there is a set $B$ such that $B_0 \subseteq B \subseteq S$ and $B$ is a basis for $\mathbb{V}$.
(The proof will show that when $S$ is finite and generates $\mathbb{V}$, the maximality of $B$ in $S$ as a linearly independent set is enough for the maximality of $B$ in $\mathbb{V}$ as a linearly independent set.)
Proof. We inductively obtain a set $B$ with the properties:
a) the set $B$ is linearly independent,
b) $B_0 \subseteq B \subseteq S$,
c) the set $B$ is maximal in $S$ with the above properties.
The procedure for obtaining the set $B$:
Initially, $B \leftarrow B_0$ (the set $B_0$ is linearly independent and it may or may not be maximal in $S$ with respect to this property).
1.1. If $B$ is maximal in $S$, then the procedure stops.
1.2. If $B$ is not maximal in $S$, then there is a vector $x \in S \setminus B$ which may be added to $B$ without losing the linear independence property.
2. Consider the new set $B \leftarrow B \cup \{x\}$, with $x$ as in step 1.2.
3. Repeat the procedure for the new set.
Because of the finitude of the set $S$, the procedure stops in a finite number of steps.
From the procedure we get a set $B$ satisfying the properties a), b), c), which is not necessarily unique, but satisfies the following maximality property in $S$: if $B_1$ is linearly independent and $B \subseteq B_1 \subseteq S$, then $B = B_1$.
From the procedure for obtaining $B$ it also follows that $|B| \le |S|$ (the number of elements of $B$ is smaller than the number of elements of $S$).
The set $B$ is a basis for $\mathbb{V}$ (the set $B$ is maximal in $\mathbb{V}$ with respect to the linear independence property):
Assume by contradiction that $B$ is not a basis; then there is a vector $x^0 \in \mathbb{V}$ such that $x^0 \notin \operatorname{span}(B)$; since $S$ generates $\mathbb{V}$, we have $x^0 \in \operatorname{span}(S)$, which means that there are $k$ elements $y_1, \dots, y_k$ from $S$ and $k$ scalars $\alpha_1, \dots, \alpha_k$ such that $x^0 = \sum_{i=1}^{k} \alpha_i y_i$. If each element $y_i$, $i = \overline{1,k}$, were in $\operatorname{span}(B)$, then $x^0 \in \operatorname{span}(B)$ also; so there is at least one element among $y_i \in S$, $i = \overline{1,k}$, which is not in $\operatorname{span}(B)$, which we denote by $y^0$; then the set $B' \overset{\text{Def}}{=} B \cup \{y^0\} \subseteq S$ is linearly independent and strictly includes $B$, which is a contradiction with the maximality in $S$ of the set $B$.


1.5.5. Corollary. From any finite set which generates the space, a basis may be extracted.
Proof. Consider a finite set $S$ generating the space $\mathbb{V}$. In the set $S$ there is at least one nonzero vector, denoted by $v^0$ (if all the vectors were zero, then $S$ would not generate $\mathbb{V}$). Then:
$\{v^0\}$ is a linearly independent set and $\{v^0\} \subseteq S$,
and we apply the previous result to obtain a set $B$ which is a basis for $\mathbb{V}$ such that $\{v^0\} \subseteq B \subseteq S$.

1.5.6. Theorem. (The Steinitz exchange lemma) Consider a linearly independent set $B = \{v_1, \dots, v_r\}$ and a generating set $S = \{u_1, \dots, u_n\}$, both in $\mathbb{V}$. Then $r \le n$ and, maybe with a reordering of the vectors, $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$ generates $\mathbb{V}$. (Any set of linearly independent vectors has fewer vectors than any generating set, and the linearly independent set may replace the same number of vectors from the generating set while preserving the generating property.)
Proof. Induction over the number of linearly independent vectors, $j = \overline{1,r}$.
For $j = 1$: $\exists \alpha_i \in \mathbb{K}$, $i = \overline{1,n}$, such that
$$v_1 = \sum_{i=1}^{n} \alpha_i u_i;$$
if all the scalars $\alpha_i$ were zero, then $v_1$ would be zero (a contradiction with the linear independence property of the set $\{v_1, \dots, v_r\}$); so at least one scalar is nonzero and, maybe with a reordering of the vectors from the generating set, it may be assumed that $\alpha_1 \neq 0$; then $u_1$ may be written as a linear combination of the vectors $v_1, u_2, \dots, u_n$:
$$u_1 = \frac{1}{\alpha_1} v_1 - \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1} u_i. \tag{1.5.1}$$
Consider $v \in \mathbb{V}$ arbitrary; $\exists \beta_i \in \mathbb{K}$, $i = \overline{1,n}$, such that $v = \sum_{i=1}^{n} \beta_i u_i$; but from (1.5.1) it follows:
$$v = \beta_1 u_1 + \sum_{i=2}^{n} \beta_i u_i = \beta_1 \left( \frac{1}{\alpha_1} v_1 - \sum_{i=2}^{n} \frac{\alpha_i}{\alpha_1} u_i \right) + \sum_{i=2}^{n} \beta_i u_i = \frac{\beta_1}{\alpha_1} v_1 + \sum_{i=2}^{n} \left( \beta_i - \frac{\beta_1 \alpha_i}{\alpha_1} \right) u_i,$$
so that $v$ is a linear combination of $v_1, u_2, \dots, u_n$, which means that $\{v_1, u_2, \dots, u_n\}$ generates $\mathbb{V}$.
Assume now that the statement is true for $j - 1 \le n$ replacements.
If $r - 1 = n$ (so $r = n + 1$), then a contradiction is obtained from the existence of another vector $v_{n+1}$ such that $\{v_1, \dots, v_n, v_{n+1}\}$ is linearly independent:
since $n$ vectors from the linearly independent set $B$ already replaced $n$ vectors (which means all the vectors) from the generating set $S$, the set $\{v_1, \dots, v_n\}$ is a linearly independent and generating set, which means it is also maximal with respect to the linear independence property, a contradiction with the existence of another vector $v_{n+1}$ such that the set $\{v_1, \dots, v_n, v_{n+1}\}$ is still linearly independent.
So $r - 1 < n$, and $\{v_1, \dots, v_{r-1}, u_r, \dots, u_n\}$ generates $\mathbb{V}$ $\Rightarrow$ $\exists \alpha_i \in \mathbb{K}$:
$$v_r = \sum_{i=1}^{r-1} \alpha_i v_i + \sum_{i=r}^{n} \alpha_i u_i;$$
if all the scalars $\alpha_i \in \mathbb{K}$, $i = \overline{r,n}$, were zero, then $v_r = \sum_{i=1}^{r-1} \alpha_i v_i$, which contradicts the linear independence of the set $\{v_1, \dots, v_r\}$; so there is an index $i_0 \in \{r, \dots, n\}$ such that $\alpha_{i_0} \neq 0$; assume by reordering that $\alpha_r \neq 0$; then $u_r$ may be written as a linear combination of the vectors $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$:
$$u_r = \frac{1}{\alpha_r} v_r - \sum_{i=1}^{r-1} \frac{\alpha_i}{\alpha_r} v_i - \sum_{i=r+1}^{n} \frac{\alpha_i}{\alpha_r} u_i. \tag{1.5.2}$$
Because $\{v_1, \dots, v_{r-1}, u_r, \dots, u_n\}$ generates $\mathbb{V}$: $\forall v \in \mathbb{V}$, $\exists \beta_i \in \mathbb{K}$, $i = \overline{1,n}$,
$$v = \sum_{i=1}^{r-1} \beta_i v_i + \beta_r u_r + \sum_{i=r+1}^{n} \beta_i u_i,$$
and from (1.5.2) it follows that
$$v = \sum_{i=1}^{r-1} \left( \beta_i - \frac{\beta_r \alpha_i}{\alpha_r} \right) v_i + \frac{\beta_r}{\alpha_r} v_r + \sum_{i=r+1}^{n} \left( \beta_i - \frac{\beta_r \alpha_i}{\alpha_r} \right) u_i,$$
which means that each vector from $\mathbb{V}$ is a linear combination of the set $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$ (the set generates $\mathbb{V}$).
1.5.7. Corollary. In a finite-type vector space, any linearly independent set may be completed up to a basis.
Proof. Because the space is of finite type, we have a finite generating set; by using the Steinitz exchange lemma, we may replace some vectors of the generating set with the vectors of the linearly independent set, while preserving the generating property.
Now we have a linearly independent set included in a finite generating set, so we are in position to apply Theorem 1.5.4 to obtain a basis which contains the initial linearly independent set (and which may be considered as "completing" the initial independent set up to a basis).
1.5.8. Corollary. In a finite-type vector space, the number of vectors of any linearly independent set is smaller than the number of vectors of any generating set.
Proof. From the Steinitz exchange lemma.
1.5.9. Corollary. In a finite-type vector space, any two bases have the same number of elements.
Proof. Both bases may be viewed as linearly independent sets and as generating sets.
1.5.10. Definition. The number of vectors of any basis of a finite-type vector space is called the dimension of the vector space $(\mathbb{V}, \mathbb{K})$ and is denoted by $\dim_{\mathbb{K}} \mathbb{V}$.
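The extraction procedure in the proof of Theorem 1.5.4 can be illustrated numerically: scan a finite generating set and keep each vector that increases the rank. A sketch (an addition, assuming sympy is available):

```python
import sympy as sp

# Extract a basis of R^3 from a finite generating set by greedily keeping
# the vectors that increase the rank (Theorem 1.5.4 / Corollary 1.5.5).
S = [sp.Matrix([1, 1, 1]), sp.Matrix([2, 2, 2]),
     sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 1]), sp.Matrix([0, 0, 1])]

B = []
for x in S:
    candidate = B + [x]
    if sp.Matrix.hstack(*candidate).rank() == len(candidate):
        B = candidate   # x is independent of the current B: keep it

print([list(v) for v in B])   # 3 vectors: a basis extracted from S
```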
1.5.11. Proposition. Given a basis in a finite-type vector space, any vector is a linear combination of the basis. Moreover, the scalars participating in the linear combination are uniquely determined by the basis.
Proof. Consider a basis $B = \{u_1, \dots, u_n\}$ and $v \in \mathbb{V}$.
Because $B$ generates $\mathbb{V}$, the vector $v$ is a linear combination of $B$ (which proves the existence of the scalars).
Moreover, the scalars are unique:
Assume two linear combinations: $\exists \alpha_i, \beta_i \in \mathbb{K}$, $i = \overline{1,n}$,
$$v = \sum_{i=1}^{n} \alpha_i u_i = \sum_{i=1}^{n} \beta_i u_i. \quad \text{Then} \quad \sum_{i=1}^{n} (\alpha_i - \beta_i) u_i = 0,$$
and because $B$ is a linearly independent set, it follows that $\alpha_i = \beta_i$, $\forall i = \overline{1,n}$, which means that the scalars are uniquely determined by the basis.

1.5.12. Definition. Given the vector space $(\mathbb{V}, \mathbb{K})$, the basis (ordered basis) $B = \{u_1, \dots, u_n\}$ and the vector $v \in \mathbb{V}$, the scalars from the previous result are called the coordinates (the representation) of the vector $v$ in the basis $B$.
This will be denoted by $[v]_B$ and the coordinates will be considered (by convention) as columns:
$$[v]_B = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} \ \text{[matrix-form representation of } v \text{ in } B\text{]}, \qquad v = \sum_{i=1}^{n} \alpha_i u_i \ \text{[vector-form representation of } v \text{ in } B\text{]}.$$

1.5.13. Remark (Canonical bases). Each vector space has an infinity of possible bases. As a convention, for commonly used vector spaces, if there is no mention of the basis in which a representation takes place, then it is assumed that the representation is done in some (conventionally) special basis, usually called the standard basis or the canonical basis.
[The standard basis for $\mathbb{R}^n$] The set $E = \{e_1, \dots, e_n\}$, where the vectors $e_j$, $j = \overline{1,n}$, have each $n$ components, $e_1 = (1, 0, \dots, 0)$, $e_2 = (0, 1, 0, \dots, 0)$, ..., $e_n = (0, \dots, 0, 1)$; in general, $e_j = (\delta_{1j}, \dots, \delta_{jj}, \dots, \delta_{nj})$, $j = \overline{1,n}$ ($\delta_{ij}$ is Kronecker's symbol).
[The standard basis for $\mathbb{R}^2$] The set $E = \{e_1, e_2\} = \{(1, 0), (0, 1)\}$.
[The standard basis for $\mathbb{R}^3$] The set $E = \{e_1, e_2, e_3\} = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$.
[The standard basis for $\mathbb{R}_n[t]$] [the set of all polynomials with real coefficients and degree at most $n$ in the unknown $t$] The set $E = \{1, t^1, \dots, t^n\}$.
[The standard basis for $\mathbb{R}[t]$] [the set of all real polynomials in the unknown $t$] The set $E = \{1, t^1, \dots, t^n, \dots\}$.
[The standard basis for $\mathcal{M}_{m \times n}(\mathbb{R})$] [the set of all matrices with $m$ lines and $n$ columns, with real entries] The set $E = \{E_{11}, \dots, E_{mn}\}$, where $E_{i_0 j_0} \in \mathcal{M}_{m \times n}(\mathbb{R})$ are matrices with the general term
$$a_{ij} = \begin{cases} 1, & (i, j) = (i_0, j_0) \\ 0, & (i, j) \neq (i_0, j_0). \end{cases}$$
[The standard basis for $\mathcal{M}_{3,2}(\mathbb{R})$] The set $E = \{E_{11}, E_{12}, E_{21}, E_{22}, E_{31}, E_{32}\}$ with the matrices
$$E_{11} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},\ E_{12} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},\ E_{21} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{bmatrix},\ E_{22} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix},\ E_{31} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{bmatrix},\ E_{32} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}.$$

1.5.14. Remark. Consider the vector space $(\mathbb{V}, \mathbb{K})$, a fixed ordered basis $B = (e_1, \dots, e_n) \subseteq \mathbb{V}$, the vectors $u = \sum_{i=1}^{n} \gamma_i e_i$, $u_1 = \sum_{i=1}^{n} \alpha_i e_i$, $u_2 = \sum_{i=1}^{n} \beta_i e_i \in \mathbb{V}$ and the scalars $\alpha, \beta \in \mathbb{K}$. Then the vector relation
$$u = \alpha u_1 + \beta u_2$$
takes place if and only if the corresponding matrix relation between the representations of the vectors takes place:
$$[u]_B = \alpha [u_1]_B + \beta [u_2]_B.$$
Proof. The coordinates of the vectors in $B$ are:
$$[u]_B = \begin{bmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{bmatrix} \iff u = \sum_{i=1}^{n} \gamma_i e_i, \qquad [u_1]_B = \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} \iff u_1 = \sum_{i=1}^{n} \alpha_i e_i, \qquad [u_2]_B = \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_n \end{bmatrix} \iff u_2 = \sum_{i=1}^{n} \beta_i e_i.$$
Then:
$$u = \alpha u_1 + \beta u_2 \iff \sum_{i=1}^{n} \gamma_i e_i = \alpha \sum_{i=1}^{n} \alpha_i e_i + \beta \sum_{i=1}^{n} \beta_i e_i \iff \sum_{i=1}^{n} \gamma_i e_i = \sum_{i=1}^{n} (\alpha \alpha_i + \beta \beta_i) e_i \iff$$
$$\iff \gamma_i = \alpha \alpha_i + \beta \beta_i,\ \forall i = \overline{1,n} \ \text{(by the unicity of the representation in a basis)} \iff$$
$$\iff \begin{bmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{bmatrix} = \begin{bmatrix} \alpha \alpha_1 + \beta \beta_1 \\ \vdots \\ \alpha \alpha_n + \beta \beta_n \end{bmatrix} = \alpha \begin{bmatrix} \alpha_1 \\ \vdots \\ \alpha_n \end{bmatrix} + \beta \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_n \end{bmatrix} \iff [u]_B = \alpha [u_1]_B + \beta [u_2]_B.$$

1.6. Representation of Vectors

Consider a finite-type vector space $\mathbb{V}$ over the field $\mathbb{K}$ and the ordered basis $E = (e_1, \dots, e_n) \subseteq \mathbb{V}$. The coordinates of the vectors $e_i$ with respect to $E$ are:
$$[e_1]_E = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad [e_2]_E = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \dots,\quad [e_n]_E = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.$$

Consider the arbitrary finite ordered set $B_1 = (v_1, \dots, v_m)$, with coordinates in $E$ as follows:
$$[v_1]_E = \begin{bmatrix} \alpha_{11} \\ \alpha_{21} \\ \alpha_{31} \\ \vdots \\ \alpha_{n1} \end{bmatrix},\quad [v_2]_E = \begin{bmatrix} \alpha_{12} \\ \alpha_{22} \\ \alpha_{32} \\ \vdots \\ \alpha_{n2} \end{bmatrix},\quad \dots,\quad [v_m]_E = \begin{bmatrix} \alpha_{1m} \\ \alpha_{2m} \\ \alpha_{3m} \\ \vdots \\ \alpha_{nm} \end{bmatrix} \quad \text{[matrix form]},$$
$$v_j = \sum_{i=1}^{n} \alpha_{ij} e_i,\ \forall j = \overline{1,m} \quad \text{[vector form]}.$$
Denote by $[M(B_1)]_E \in \mathcal{M}_{n \times m}(\mathbb{K})$ the matrix with the coordinates of $v_j$ as column $j$:
$$[M(B_1)]_E = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1m} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2m} \\ \alpha_{31} & \alpha_{32} & \cdots & \alpha_{3m} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nm} \end{bmatrix}.$$
The following properties of the set $B_1$ may be studied with this matrix:
- The vectors are considered as $\mathbb{K}^n$ elements (given by the number of lines);
- The number of vectors is $m$ (the number of columns);
- The set $B_1$ is a linearly independent set if and only if $\operatorname{rank}[M(B_1)]_E = m$ (a consequence: the set $B_1$ cannot be linearly independent when its number of vectors is bigger than the dimension of the environment); [the matrix rank cannot be bigger than $m$; the matrix rank is smaller than $m$ if and only if (at least) a column is a linear combination of the others, which means that the set $B_1$ is linearly dependent]
- The set $B_1$ generates the environment $\mathbb{V}$ if and only if $\operatorname{rank}[M(B_1)]_E = n$ (a consequence: the set $B_1$ cannot generate $\mathbb{V}$ when the number of vectors is smaller than the environment dimension); [the set $B_1$ generates $\mathbb{V}$ if and only if the nonhomogeneous linear system $[M(B_1)]_E\, x = [v]_E$ is a compatible system (has at least a solution) for each possible vector $v$; when $\operatorname{rank}[M(B_1)]_E < n$, the set $B_1$ may be completed with at least a certain additional vector $v$ such that the rank of the matrix attached to the new set is strictly bigger than the rank of the old matrix, and so for such a vector $v$ the initial linear system is incompatible]
- The set $B_1$ is a basis if and only if $n = m$ and $\operatorname{rank}[M(B_1)]_E = n$ [from the previous remarks].
When the set $B_1$ is an ordered basis, any vector from the environment may be represented from both perspectives: the initial perspective $E$ and the new perspective $B_1$.
In the following we will find the connections between the two perspectives.


The representation of the new basis B1 with respect to the old basis E is (in matrix form):
2

6
6
6
6
6
[M (B1 )]E = 6
6
6
6
6
4

11

12

1n

21

22

2n

31

32

..
.

..
.

n1

n2

3n

...

..
.
nn

7
7
7
7
7
7:
7
7
7
7
5

This matrix has as columns the representations in the old basis of the new basis vectors.
In the new basis, the vectors of the new basis B1 will be represented as:
2 3
2 3
2 3
1
0
6 7
6 7
6 0 7
6 7
6 7
6 7
6 0 7
6 1 7
6 0 7
6 7
6 7
6 7
6 7
6 7
6 7
7 ; [v2 ] = 6 0 7 ; :::; [vn ] = 6 ... 7 ;
[v1 ]B1 = 6
0
B
B
6 7
6 7
6 7
1
1
6 . 7
6 . 7
6 7
6 . 7
6 . 7
6 7
6 . 7
6 . 7
6 0 7
4 5
4 5
4 5
0
0
1

34

1. VECTOR SPACES AND VECTOR SUBSPACES

while the vectors of the old basis E will be represented as


2
3
2
3
[e1 ]B1

0
11

6
6
6
6
6
=6
6
6
6
6
4

7
6
7
6
7
6
7
6
7
6
7 ; [e2 ] = 6
0
B1
6
31 7
6
.. 7
7
6
. 7
6
5
4
0
21

7
6
7
6
7
6
7
6
7
6
7 ; :::; [en ] = 6
0
B1
6
32 7
6
.. 7
7
6
. 7
6
5
4
0
22

0
n2

0
n1

n
X

ei =

0
12

0
1n
0
2n
0
3n

..
.
0
nn

7
7
7
7
7
7;
7
7
7
7
5

0
ji vj

j=1

Denote by [M (E)]B1 the matrix with columns given by the coordinates of the old basis E in the new basis
B1 ,

[M (E)]B1

6
6
6
6
6
=6
6
6
6
6
4

0
11

0
12

0
1n

0
21

0
22

0
2n

0
31

0
32

0
3n

..
.

..
.

0
n1

0
n2

..

..
.

0
nn

7
7
7
7
7
7:
7
7
7
7
5

Consider an arbitrary vector x, represented in the old basis E with coordinates


2
3
6 x1 7
6
7
6 x2 7
6
7
6
7
6
[x]E = 6 x3 7
7;
6 . 7
6 . 7
6 . 7
4
5
xn
and in the new basis B1 with coordinates
2

[x]B1

3
0
x
6 1 7
6 0 7
6 x 7
6 2 7
6
7
0 7:
=6
x
6 3 7
6 . 7
6 . 7
6 . 7
4
5
x0n

The connection between the two representations is given by:


x=

n
P

j=1

x0j vj =

n
P

i=1

n
P

j=1
n
P

j=1

x0j

x0j

n
P

i=1
!

ij ei

ij ei

n
P

j=1

n
P

i=1

n
P

j=1

x0j

n
P

x0j
i=1!
ij

ij ei

ei ;

1.6. REPRESENTATION OF VECTORS

but x =

n
P

xi ei , so that xi =

i=1

n
P

j=1

x0j

ij ,

matrix form of this relation is:


2 P
n
x0
2
3 6 j=1 j
6
6
6 x1 7 6
6
7 6 n
6
7 6 P 0
6
7 6
xj
6
7 6
6 x 7 6 j=1
6 2 7 6
6
7 6
6
7 6
6
7 6 P
n
6
7=6
x0j
6
7 6
6 x3 7 6 j=1
6
7 6
6
7 6
6
7 6
6
7 6
..
6 . 7 6
.
6 .. 7 6
6
7 6
4
5 6
6
xn
6 n
4 P 0
xj
j=1

1j

2j

3j

nj

35

8i = 1; n (from the unicity of representation in a basis). The


3

7 2
7
7
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7=6
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7 6
7 4
7
7
5

11

12

13

21

22

23

31

32

33

..
.

..
.

..
.

n1

n2

n3

1n

:::

2n

3n

...

..
.
nn

32
76
76
76
76
76
76
76
76
76
76
76
76
76
76
76
76
76
76
76
76
54

x01

x02

x03
..
.
x03

7
7
7
7
7
7
7
7
7
7
7;
7
7
7
7
7
7
7
7
7
5

and so we obtain:
[x]E = [M (B1 )]E [x]B1
(The matrix [M (B1 )]E is the changeofbasis matrix from B1 to E) The changeofbasis matrix from the
new basis to the old basis has as columns the coordinates in the new basis of the vectors from the old
basis. In a similar way we have the connection
[x]B1 = [M (E)]B1 [x]E
(The matrix [M (B1 )]E is the changeofbasis matrix from E to B1 ) which means that for any vector x,
we have the equality
[M (E)]B1 [x]E = ([M (B1 )]E )

[x]E ;

which means that the matrices are equal[M (E)]B1 = [M (B1 )]E 1 .
[M (B1 )]E
B1

[M (B1 )]E 1
A diagram of the matrix connections
between two bases

36

1. VECTOR SPACES AND VECTOR SUBSPACES

Given two ordered bases B1 and B2 , both represented in the initial ordered basis E, passing from B1
to B2 is accomplished by pivoting on E: by using the previous formulas, we get:
[x]B1 = [M (B1 )]E 1 [x]E [x]B2 = [M (B2 )]E 1 [x]E
[x]E = [M (B2 )]E [x]B2

[x]E = [M (B1 )]E [x]B1


from where we obtain the relations:

[x]B1 = [M (B1 )]E 1 [x]E = [M (B1 )]E 1 [M (B2 )]E [x]B2

[M (B2 )]E 1 [M (B1 )]E


[M (B1 )]E
B1

[M (B1 )]E 1

[M (B2 )]E
E

B2

[M (B2 )]E

[M (B1 )]E 1 [M (B2 )]E


A diagram of the matrix connections between two noninitial bases
both split by the initial basis and direct
A practical procedure to obtain the necessary calculations is given by the Gaussian Elimination Procedure.
The Gaussian Elimination Procedure generalizes the row reduction procedure for solving linear systems.
Given a linear system, the procedure chooses an equation and an unknown (which has a nonzero coe cient
in the chosen equation) and eliminate that unknown from all the other equations, by using appropriate
equation multiplications and equations additions (elementary transformations). The procedure uses the
compatibilities between the equality relation and the vector space operations, which are:
Multiplication with the same scalar of both terms of an equality preserves the equality.
Addition of the same object in both terms of an equality preserves the equality.
Replacing an object with an equal quantity preserves the equality.
[While these statements may seem superuous, a word of warning has to be said about using them
in a computational/numerical framework: it may happen that the replacement of the exact and symbolic
objects from the statements with their numerical counterparts might have undesired consequences and the
statements may not hold]
These principles may be used to formulate a systematic procedure for nding the general solution of a
linear system and we will describe this procedure.
Consider the system:

1.6. REPRESENTATION OF VECTORS

8
>
>
a11 x1 + a12 x2 +
>
>
>
>
>
>
a21 x1 + a22 x2 +
>
>
>
>
>
<

>
>
ai1 x1 + ai2 x2 +
>
>
>
>
>
>
>
>
>
>
>
: a x + a x +
n1 1
n2 2

37

a1j xj +

a1m xm

b1

a2j xj +

a2m xm

b2

aij xj +

aim xm

bi

anj xj +

anm xm

bn

in which the coe cient aij is nonzero. We want to eliminate the unknown xj from all the equations
except the ith equation (in other words, the coe cients of xj will become zero in all equations except the
equation i) and for equation i we want the coe cient to become 1. In order to obtain these goals, we will
perform the following operations:
divide the equation i with aij and replace the equation i with the results;
for each equation k = 1; n, k 6= i, add the equation k with the equation i multiplied by ( akj )
and replace the equation k with the result;
After these row operations, the new system will have the unknown xj only within the ith equation.
The operations are systematized with the following rules: the element aij is called PIVOT and the
new coe cients will be obtained by using THE PIVOT RULE:
(1) The places (l; j)l=1;n l6=i (the pivot column) are becoming zero while the place (i; j) (the pivot
place) becomes 1.
aik
(the pivot line is divided by the pivot).
aij
(3) The other places (neither the line nor the column of the pivot) (k; l)k=1;m k6=j are calculated by
(2) The places (i; k)k=1;m k6=j (the pivot line) are becoming

l=1;n l6=i

using
THE RECTANGLE RULE:
column j
line i

aij

column l

ail

j
line k

akj

akl

For each place (k; l), consider the rectangle with edges (i; j), (i; l), (k; j), (k; l) (specic for each
place (k; l)); the new value for akl is obtained by the formula: the product on the pivot diagonal
minus the product on the other diagonal and everything divided with the pivot:
a0kl =

aij akl

akj ail
aij

38

1. VECTOR SPACES AND VECTOR SUBSPACES

where a0kl stand for the new value of the place (k; l).

All the calculations are placed in a table and, from a vector space perspective, they have the following
meaning:

The rst column from the left is holding, for each step, the names of the current basis (representation system) (including the order of the vector basis, from upthe rst one, until downthe last
one)4.
The rst line represents the names of the included vectors5.
The (numerical) columns are the coordinates of the vectors (with names from the rst line) in the
current basis (given by the rst column).
The rst table represents the initial representation, usually done in the standard basis E.
The following tables are giving the coordinates in intermediary bases
The last table gives the representations in the nal basis.
The choice of the pivot aij means that the new basis is obtained by replacing in the old basis the
vector from the line i with the vector from the column j.

The initial table (with all the details)

4 This
5 This

v1

vj

vm

e1

ei

en

e1

a11

a1j

a1m

b1

e2
..
.

a21
..
.

a2j
..
.

a2m
..
.

0
..
.

0
..
.

0
..
.

b2
..
.

ei
..
.

ai1
..
.

aij #
..
.

aim
..
.

0
..
.

1
..
.

0
..
.

bi
..
.

en

an1

anj

anm

bn

column is a metadata column: it gives information about the names the current basis vectors and their order.
line is a metadata line which gives informations about the names of the vectors.

1.6. REPRESENTATION OF VECTORS

39

The second table, obtained with the pivot aij


v1

vj

vm

e1

e1

a011

a01m

e2
..
.

a021
..
.

0
..
.

a02m
..
.

0
..
.

ei

a0(i

1)1

a0(i

1)m

ei

ai1
aij

aim
aij

ei+1
..
.

a0(i+1)1
..
.

0
..
.

a0(i+1)m
..
.

0
..
.

en

a0n1

a0nm

a1j
aij

b01

a2j
aij

0
..
.

b02
..
.

..
.
a(i 1)j
aij

vj

en

1
aij
a(i+1)j
aij

..
.
anj
aij

b0i

bi
aij

0
..
.

b0i+1
..
.

b0n

The procedure may be used to nd the complete solution of a nonhomogeneous linear system
Ax = b
in the following way:

(1) Write the initial table.


(2) Apply the pivot rule until it is not possible anymore (once a pivot has been chosen on a line,
dont chose another pivot on that line anymore). With maybe a renaming of the columns or a
reordering of the lines, the nal table will be the following:

..
.

A012

b01

..
.

b02

(3) In the nal table:


the columns corresponding to the identity matrix I (the columns with previous pivots) are
called the main unknowns while the remaining unknowns are called the secondary unknowns;
the equations (lines) corresponding to the identity matrix I (the lines with previous pivots)
are called the main equations, while the other equations are called the secondary equations; the
secondary equations have all of them the lefthand side with all coe cients zero, because otherwise
some other pivots would be possible to choose (the table would not be the last one)

40

1. VECTOR SPACES AND VECTOR SUBSPACES

(4) The last table may be written in a system form like this:
2

I
6
6 ..
6 .
4
0
which means

..
.

A012

..
.

3 2
3
2
0
7 xP
b
74
5=4 1 5
7
5 x
b02
S

8
< I xP + A xS = b0 [main equations]
1
: 0 x + 0 x = b0 [secondary equations]
P
S
2

(5) The8secondary equations are used to decide the compatibility of the system
< b0 = 0 ) compatible system
2

: b0 6= 0 ) incompatible system
2
after which the secondary equations are becoming redundant and they may be discarded;

(6) When the system is compatible, the secondary unknowns are considered parameters while the
new system, viewed as
xP = b01

A012 xS ;

becomes a Cramer system: for each possible value of the parameters, the system has a unique
solution, which is already present in the way the system is written.

1.6.1. Example. Solve the systems:

(1)

(2)

(3)

8
>
4x + 3x2 + 3x3 = 14
>
>
< 1

>
>
>
:
8
>
>
>
<

>
>
>
:
8
>
>
>
<
>
>
>
:

6 7
6 7
3x1 + 2x2 + 5x3 = 13 (the solution is: 6 1 7)
4 5
2x1 + x2 + 8x3 = 13
1
2
4x1 + 3x2 + 3x3 = 6
3 + 12
6
6
(the solution is: 6 5
3x1 + 2x2 + x3 = 8
14
4
11x1 + 8x2 + 7x3 = 20
x1 + x2 + x3 = 10

2x1 + x2 + x3 = 16

(No solution)

3x1 + 2x2 + 2x3 = 24

1.6.2. Solution.

(1) The pivot procedure applied to the system is:

7
7
7,
5

2 R)

1.6. REPRESENTATION OF VECTORS

x1

x2

x3

14

13

13

1
0
0

3
4
1
4
1
2

3
4
11
4
13
2

j
j
j
j

41

7
2
5
2
6

j
1

11

10

11

j
1

1
2

x1

6
7 6 7
6
7 6 7
The solution may be read from the column b of the last table and it is 6 x2 7 = 6 1 7.
4
5 4 5
x3
1
In matrix form, the operations for each pivot are the following (by using elementary matrices):
0

10
1
1 0
1
3
3 7
0 0
1
B 4
C B4 3 3 14C B
4
4 2C
B 3
CB
B
1
11
5C
C
B
C B3 2 5 13C = B0
C,
1
0
B 4
C@
B
C
A
4
4
2
@ 2
A
A
@
1 13
2 1 8 13
0 1
0
6
0 4
10
1 0 2 2
1
3
3 7
1
1
3
0
1
0
9
11
B
CB
4
4 2C B
C
B
CB
1 11 5 C
C
B0
B
C
C=B
B
4
0
0
0 1
11
10C,
B
CB
C
@
A
4 4 2A
@
4 A@
1 13
0
1
0 0 1
1
0
6
2
2
2
0
10
1 0
1
1 0
9
1 0 9
11
1 0 0 2
B
CB
C B
C
B
CB
C B
C
=
B0 1 11 C B0 1
C
B
11
10
0 1 0 1C.
@
A@
A @
A
0 0 1
0 0 1
1
0 0 1 1

42

1. VECTOR SPACES AND VECTOR SUBSPACES

1
1
0
1 0 0 2
4 3 3 14
C
C
B
B
C
C
B
B
The matrix identity from the initial matrix B3 2 5 13C up to the nal matrix B0 1 0 1C
A
A
@
@
0 0 1 1
2 1 8 13
0
1
1
0
0
1
1
0
1
0 0
1 0
9 B1 3 0C B
4
3
3
14
CB
B
CB
C
C B 43
CB
B
CB
C
C
C
B
(by using elementary matrices) is: B0 1 11 C B0
B
4 0C B
1 0C 3 2 5 13C =
@
A@
A
@
4
A
4 A@ 2
0
1
0 0 1
2 1 8 13
0 1
2
4
0
1
1 0 0 2
B
C
B
C
B0 1 0 1C, or with
@
A
0 0 1 1
10
1 0
0
10
1
1
0 0
1 0
9 B1 3 0 C B
11
21
9
C B
B
CB
C
C B 43
C B
B
CB
C
C
B
C
=
B0 1 11 C B0
4 0C B
1 0C B 14
26 11 C and
@
A@
@
A
4
A
4 A@ 2
0
1
0 0 1
1
2 1
0 1
2
41
0
0
1
0

11 21
9
4 3 3
B
B
C
C
B
B
C
C
B 14
26 11 C = B3 2 5C,
@
@
A
A
1
2 1
2 1 8
in
0 the form:
10
1 0
4 3
4 3 3
1 0 0 2
B
CB
C B
B
CB
C B
B3 2 5C B0 1 0 1C = B3 2
@
A@
A @
2 1
2 1 8
0 0 1 1
[this is a factorization also known as

The extended form of the tables:

1
3 14
C
C
5 13C
A
8 13
LU decomposition, see [37], Section 2.6, page 83]

1.6. REPRESENTATION OF VECTORS

x1

x2

x3

43

e1

e2

e3

e1

4 #

14

e2

13

e3

13

3
4
1
- #
4
1
2

3
4
11
4
13
2

j
x1

e2

e3

j
j
j
j

7
2
5
2
6

j
j
j
j

1
4
3
4
1
2

x1

x2

11

10

e3

1 #

11

x1

x2

14

26 11

x3

11

21

In
pivot are (by using elementary1matrices):
0 matrix form,
1 0 the operations for each
1 0
1
3
3 7 1
0 0
0 0
1
B 4
C B4 3 3 14 1 0 0C B
C
4
4 2 4
B 3
CB
B
C
1
11
5
3
C
B
C B3 2 5 13 0 1 0C = B0
C
1
0
1
0
B 4
C@
B
C
A
4
4
2
4
@ 2
A
A
@
1 13
1
2 1 8 13 0 0 1
0 1
0
6
0 1
2
0 4
10
1 02 2
1
3
3 7 1
1
0 0
1
3
0
1
0
9
11
2
3
0
B
CB
C B
4
4 2 4
C
B
CB
C B
1 11 5
3
C
B
B0
C
C = B0 1
4
0
0
1
0
11
10 3
4 0C
B
CB
C
@
A
4 4 2
4
@
A
4 A@
1 13
1
0
1
0 0 1
1
1
2 1
0
6
0 1
2
2
2
2
0
10
1 0
1
1 0
9
1 0 9
11
2 3 0
1 0 0 2
11 21
9
B
CB
C B
C
B
CB
C B
C
=
B0 1 11 C B0 1
C
B
11
10 3
4 0
0 1 0 1 14
26 11 C
@
A@
A @
A
0 0 1
0 0 1
1
1
2 1
0 0 1 1 1
2 1

44

1. VECTOR SPACES AND VECTOR SUBSPACES

1
4 3 3 14 1 0 0
C
B
C
B
The matrix identity starting from the initial matrix B3 2 5 13 0 1 0C and ending with
A
@
2 1 8 13 0 0 1
0
1
1 0 0 2
11 21
9
B
C
B
C
the nal matrix B0 1 0 1 14
26 11 C (by using elementary matrices) is:
@
A
0 0 1 1 1
2 1
10
10
1
0
10
1
0 0
4
3
3
14
1
0
0
1 0
9 B1 3 0 C B
CB
C
B
CB
C B 43
CB
C
B
CB
C
B
C
B0 1 11 C B0
1 0C B3 2 5 13 0 1 0C =
4 0C B
A
@
A@
4
A@
4 A@ 2
0
1
2 1 8 13 0 0 1
0 0 1
0 1
2
41
0
1 0 0 2
11 21
9
B
C
B
C
= B0 1 0 1 14
26 11 C
@
A
0 0 1 1 1
2 1
0

10
1 0
10
1
0 0
1 0
9 B1 3 0 C B
C B
4
B
CB
B
C
C B
3
B
CB
B
C=B
By using B0 1 11 C B0
4 0C
1
0
CB 4
C @
@
A@
A
4 A@ 2
0
1
0 0 1
0 1
2
4
1
0
1 1 0
4 3 3
11 21
9
C
B
B
C
C
B
B
C
=
B
B 14
C
3 2 5C
26 11
A
@
@
A
2 1 8
1
2 1
the corresponding LUdecomposition is:
0
10
1 0
4 3 3
1 0 0 2
11 21
9
4 3 3
B
CB
C B
B
CB
C B
B3 2 5C B0 1 0 1 14
26 11 C = B3 2 5
@
A@
A @
2 1 8
2 1 8
0 0 1 1 1
2 1
Various verications from the table:
0

4
B
B
B3
@
2
0
4
B
B
B3
@
2
0
4
B
B
B3
@
2

3
2
1
3
2
1
3
2
1

10
1 0
1
3
11 21
9
1 0 0
CB
C B
C
CB
C B
C
=
C
B
C
B
5
14
26 11
0 1 0C
A@
A @
A
8
1
2 1
0 0 1
10 1 0 1
3
2
14
CB C B C
CB C B C
5C B1C = B13C
A@ A @ A
8
1
13
10
1 0 1
3
11
1
CB
C B C
CB
C B C
5C B 14 C = B0C
A@
A @ A
8
1
0

11
14
1

1
9
C
C
26 11 C and
A
2 1

21

1
14 1 0 0
C
C
13 0 1 0C
A
13 0 0 1

1.6. REPRESENTATION OF VECTORS

45

1 0 1
10
0
21
4 3 3
C B C
CB
B
C B C
CB
B
B3 2 5C B 26C = B1C
A @ A
A@
@
0
2
2 1 8
0
10 1 0 1
4 3 3
9
0
B
CB C B C
B
CB C B C
B3 2 5C B 11 C = B0C
@
A@ A @ A
2 1 8
1
1
0

(2) The pivot method for this system has the following form:

x1

x2

x3

11

20

3
4
1
4
1
4

3
4
5
4
5
4

1
0
0

j
j
j
j

3
2
7
2
7
2

j
3

12

14

The procedure cannot be continued anymore since the last line from the last table (corresponding with the unknowns) is zero. Because the element corresponding with the column b is
also zero, the system is compatible and 1undetermined (it has 2 main unknowns and 1 secondary
unknown).
82
>
> 12 + 3
6
7 >
<6
6
7
6
The complete solution of the system is 6 x2 7 2 6 14 5
4
5 >
4
>
>
:
x3
2

x1

7
7
7;
5

9
>
>
>
=
2R
>
>
>
;

In matrix form, the operations for pivot are the following (by using elementary matrices):

46

1. VECTOR SPACES AND VECTOR SUBSPACES

0
B
B
B
B
@

0
B
B
B
@

10
1 0
1
0 0
1
CB 4 3 3 6 C B
4
C
B
3
B
C
B 3 2 1 8 C=B
1 0 C
0
C
@
A B
4
A
@
11
11 8 7 20
0 1
0
4
1
0
1
0
3 3
3
1 3 0 B 1
1
4
4 2 C
CB
B
C
5
7
1
C
C=B
B 0
0
0
4 0 CB
AB
@
4
4 2 C
A
@
1
5 7
0
1 1
0
0
4
4 2

3
4
1
4
1
4

3
4
5
4
5
4

3
2
7
2
7
2

C
C
C
C
A

12

C
C
14 C
A
0

The matrix identity from the initial matrix up to the nal matrix (by using elementary matrices) is:

B
B
B 0
@
0

10
3 0 B
CB
C
4 0 CB
AB
@
1 1

10
1
0 0
CB 4 3 3 6
4
CB
3
1 0 C
3 2 1 8
CB
@
4
A
11
11 8 7 20
0 1
4

1 0
10 1
0 0
1 3 0 B
C B
B
CB 4
C B
3
B
CB
By using B 0
1 0 C
4 0 CB
C=B
@
A@
4
A @
11
0
1 1
0 1
4
0
1 1 0
1
2 3 0
4 3 0
B
C
B
C
B
C
B
C
=
B 3
C
B
4 0
3 2 0 C,
@
A
@
A
2
1 1
11 8 1
the attached LUdecomposition is:
0
10
1 0
4 3 0
1 0
3 12
4 3
B
CB
C B
B
CB
C B
B 3 2 0 CB 0 1 5
14 C = B 3 2
@
A@
A @
11 8 1
0 0 0
0
11 8
(3) The pivot method applied to this system is:
0

1 0

C B
C B
C=B 0 1
A @
0 0

5
0

C
C
4 0 C and
A
1 1

3
2

C
C
1 8 C.
A
7 20

12

C
C
14 C;
A
0

1.6. REPRESENTATION OF VECTORS

x1

x2

x3

10

16

24

47

j
1

10

) The system is incompatible.

j
1

In matrix form, the operations (with elementary matrices) are:


0
B
B
B
@

1
B
B
B 0
@
0

3 0
1
1
1

1 1
1 10
1 1 1 10
CB
C B
C
CB
C B
C
1
1
4 C
0 C B 2 1 1 16 C = B 0
A@
A @
A
1
3 2 2 24
0
1
1
6
1
10
1 0
1 0 0 6
0
1 1
1 10
C
CB
C B
C
CB
C B
0 CB 0
1
1
4 C=B 0 1 1 4 C
A
A@
A @
0 0 0
2
1
0
1
1
6

0 0

2 1

10

The matrix identity from the initial matrix up to the nal matrix is:
0

1
B
B
B 0
@
0

10

CB
CB
1 0 CB
A@
1 1

0 0

10

1 1 1 10
CB
CB
2 1 0 C B 2 1 1 16
A@
3 0 1
3 2 2 24

By using
0
10
1 0
1 1 0
1 0 0
1
B
CB
C B
C B
B
CB
B 0
1 0 CB 2 1 0 C = B 2
@
A@
A @
0
1 1
3 0 1
1
0
1 1 0
1
1 1 0
1 1 0
B
C
B
C
B
C
B
C
B 2
1 0 C = B 2 1 0 C,
@
A
@
A
3 2 1
1
1 1
the LUdecomposition is:

1 0 0
C B
C B
C=B 0 1 1
A @
0 0 0
0

C
C
1 0 C and
A
1 1

C
C
4 C
A
2

48

1. VECTOR SPACES AND VECTOR SUBSPACES

1 1 0

B
B
B 2 1 0
@
3 2 1

10

1 0 0

1 1 1 10

C
C B
C
C B
=
C
B
2 1 1 16 C.
4
A
A @
3 2 2 24
2

CB
CB
CB 0 1 1
A@
0 0 0

The matrix inverse may be obtained by using the same procedure, with the corresponding tables in
the form:
A

A
0

2 3

B
B
1.6.3. Example. Find the inverse of the matrix B 1 2
@
1 1

C
C
1 C.
A
2

1.6.4. Solution. The pivot table is the following:


2

1
0
0

3
2
1
2
1
2

1
2
1
2
3
2

j
j
j
j

1
2
1
2
1
2

j
1

3
5 1
2
2 2
1 3
1
0
1
0
j
2 2
2
1
1
1
0
0
1
j
2
2
2
From
the
last
three
columns
and
the
last
0
1 three lines we read the inverse matrix:
0
1 1
3
5 1
2 3
1
B 2
C
B
C
B 1 32 21 C
B
C
C
B 1 2
1 C =B
B 2 2
@
A
2 C
@ 1
1
1 A
1 1
2
2
2
2
In matrix form, the pivot operations are (with elementary matrices):
1

1.6. REPRESENTATION OF VECTORS

10

3
2
1
2
1
2

1
C B
B
C
0
2
1 0 1 0 C=B
A B
@
1
2 0 0 1
0
1 0
1 1
3
0 0
C B 1 0
2
2 2
C B
1
1
1
1 0 C
0 1
C=B
@
2
2
2
A
1
3
1
0 0
0 1
2
2
2
0
1
1 0
0 1
2
3 0
C B
B
C
0 1
1
1
1 2 0 C=B
A B
@
0
2
1 1 1
0 0
1
0
0
1
5 1
3
1 B
1
2 2 C
B
CB 2
C
1 3
1 C B
C
=B 0
1 CB
@
AB
2 C
A
@ 12 21
1
2
0
2
2
2
The LUdecomposition
in this situation is: 1
0
0
10
5 1
3
2 3
2 3
1 B 1 0 0
2
2 2 C
B
B
CB
C
1 C B
1 3
B
C
=B 1 2
B 1 2
0 1 0
1 CB
B
@
@
A@
2 2
2 C
A
1
1
1
1 1
1 1
2
0 0 1
2
2
2
0
10
10
1 0

2
CB
CB
1=2 1 0 C B 1
A@
1=2 0 1
1
0
0
1
1
3 0 B 1
B
CB
B
C
B 0 2 0 CB
0
@
AB
@
0 1 1
0
0
10
1 0 1=2
1
B
CB
B
CB
B 0 1
1=2 C B 0
@
A@
0 0
1=2
0
0
2 3
B
B
Verication: B 1 2
@
1 1
B
B
B
@

1=2

0 0

1 1 0 0

1 0 1=2
1
3 0
B
CB
B
CB
B 0 1
1=2 C B 0 2 0
@
A@
0 0
1=2
0 1 1
0
1 1 0
3
5 1
2
B 2
C
B
B 1 32 21 C
B
C =B
B 1
B 2 2
@
2 C
@ 1
A
1
1
1
2
2
2

CB
CB
CB
A@
3
2
1

1=2

0 0

C B
C B
1=2 1 0 C = B
A B
@
1=2 0 1
1

49

1
1
0 0
C
2
C
1
1 0 C
C
2
A
1
0 1
2
1
1
2
3 0
C
C
1
1 2 0 C
A
2
1 1 1
1
5 1
3
0
2
2 2 C
1 C
1 3
C
0
2 2
2 C
1
1
1 A
1
21 2
2
0 0
C
C
1 0 C
A
0 1
1
2
1
2
3
2

1 1 0 0

C
C
1 0 1 0 C, with
A
2 0 0 1
1
3
5 1
2
2 2 C
1 3
1 C
C and
2 2
2 C
1
1 A
1
2
2
2

C
C
1 C.
A
2

By using the pivot method and similar tables for calculations most of the required Linear Algebra
calculations may be obtained (as well as obtain them with a computer), in the same time keeping track of
the perspective interpretations in terms of bases. The theoretical framework is given by

1.6.5. Theorem. (The substitution Lemma) Consider a nitetype vector space (V; K), V1 the subm
P
space generated by the ordered linear independent set B = (e1 ;
; em ), and v =
i ei 2 V1 . If
i=1

6= 0 then B 0 = (e1 ;

ej 1 ; v; ej+1 ;

; em ) is a new ordered basis of V1 ; the connection between the

50

1. VECTOR SPACES AND VECTOR SUBSPACES

6
6 .
old [x]B = 6 ..
4
j

0
i

0
1

7
6
7
7
6 . 7
7 and new [x]B 0 = 6 .. 7 coordinates are given by of an arbitrary vector x 2 V1 is
5
4
5
0
m

0
j

i j

. Reciprocally, if B 0 is a linear independent ordered set, then

6= 0.

Proof. As

6= 0, from x =
x

other vectors from B: ej =

m
P

i ei

the vector ej may be written as a linear combination of x and the

i=1

m
P

i
j

i=1;i6=j

ei ) B 0 generates V1 (because B generates V1 and any linear

combination with ej may be replaced by a linear combination from B 0 ).


The set B 0 is a basis in V1 because it has the same number of vectors as B, which means that B 0 is
minimal as a generating set.
Alternative proof:
Let
=0)

8
<

such that

m
P

= 0 (si

6= 0)

i + j i = 0; i 6= j
0
) B is a basis in V1
m
m
P
P
Moreover, x =
e
=
i i

jv

i=1;i6=j

i=1

i ei

=0)

i ei

m
P

m
P

i ei +

i
i

j
j

; i 6= j;

j
j

0
j

m
P

i=1;i6=j

m
P

=0 )

i ) ei

i
j

i=1;i6=j

ei

m
P

i
j

i=1;i6=j

1.6.6. Example. Study the nature of the vector set fv1 ; v2 ; v3 ; v4 g, with vectors:
2

6
6
v1 = 6
4

2 3
2 3
2 3
2
1
1
7
6 7
6 7
6 7
6 7
6 7
6 7
7
0 7 ; v2 = 617 ; v3 = 617 ; v4 = 6 2 7
5
4 5
4 5
4 5
1
3
1
0
1

1.6.7. Solution. Consider the vector equation in the unknowns


1 v1

2 v2

3 v3

4 v4

1,

2,

3,

4:

= 0.

We have to nd all the scalars

1,

j ej

i=1;i6=j

= 0, 8i

i ei + j ej =
0
i

i ei

i=1

i=1;i6=j

i=1;i6=j

) the new coordinates are

m
P

2,

3,

satisfying the vector equation.

By replacing in the vector equation the coordinates of the vectors (in the standard basis) we get:
[v1 ]E + 2 [v2 ]E + 3 [v3 ]E + 4 [v4 ]E = [0]E )
2 3
2 3
2 3
2 3 2 3
1
2
1
1
0
6 7
6 7
6 7
6 7 6 7
6 7
6 7
6 7
6 7 6 7
) 1 6 0 7 + 2 617 + 3 617 + 4 6 2 7 = 6 0 7,
4 5
4 5
4 5
4 5 4 5
3
1
0
0
1
which is the same with the linear homogeneous system (with unknowns
1

1,

2,

3,

4 ):

ei +

1.6. REPRESENTATION OF VECTORS

8
>
>
>
<
>
>
>
:

+2

+2

2
3

51

=0

=0

+3 2+ 3 =0
Use the pivot method to solve the system and to keep the bases interpretations:
1

v1

v2

v3

v4

j
1 #

e2

e3

e2
e3

e3

1 #

1
2
1

v1

v2

3 #

e3

e2

j
v1

e1

e1

11

2
1
5

2
1
1
4
j
j
0
3
3
3
3
5
1
2 1
v2
j
0
1
0
j
j
0
3
3
3 3
11
1 5
1
v3
j
0
0
1
j
j
0
3
3 3
3
In matrix form, the pivot operations are the following (with elementary matrices):
0
10
1 0
1
1 0 0
1 2 1
1 1 0 0
1 2 1
1 1 0 0
B
CB
C B
C
CB
C B
C
B
B 0 1 0 CB 0 1 1 2 0 1 0 C = B 0 1 1 2 0 1 0 C
@
A@
A @
A
1 0 1
1 3 1 0 0 0 1
0 5 2
1 1 0 1
0
10
1 0
1
1
2 0
1 2 1
1 1 0 0
1 0
1
5 1
2 0
B
CB
C B
C
B
CB
C B
C
B 0 1 0 CB 0 1 1 2 0 1 0 C = B 0 1 1
2 0 1 0 C
@
A@
A @
A
0
5 1
0 5 2
1 1 0 1
0 0
3
11 1
5 1
0
10
0
1
1
4 2
1
1
1 0 0
1 0
1
0
1
5
1
2
0
B
3 C
3 3
3
3
C B
B
CB
5 1
2 1
C B
B 0 1 1 CB
B
B 0 1 1
2 0 1 0 C=B 0 1 0
B
@
A @
3 C
3 3
3 3
@
A
1
11
1 5
1
0 0
3
11 1
5 1
0 0
0 0 1
3
3
3 3
3
Verication:
v1

1
C
C
C
C
A

52

1. VECTOR SPACES AND VECTOR SUBSPACES

1 0
1
2
1
1
1
0
0
3
3
3 C B
B
C
1
2 1 C
B
B
C
C
= B 0 1 0 C.
B
C
@
A
3
3 3 A @
1 5
1
0 0 1
3 3
3
Conclusion: the set fv1 ; v2 ; v3 ; v4 g is linear dependent, with the subset fv1 ; v2 ; v3 g linear independent.
0

10
1 2 1 B
CB
C
0 1 1 CB
AB
@
1 3 1

1.6.8. Example. Consider the vectors:


0 1
0 1
0 1
0
1
m
B C
B C
B C
B C
B C
B C
v1 = B2C ; v2 (m) = B m C ; v3 (m) = B 0 C ; m 2 R:
@ A
@ A
@ A
1
1
1
Discuss the nature of the set fv1 ; v2 ; v3 g with respect the parameter m.
1.6.9. Solution. Consider the system (with unknowns
1 v1

2 v2

(m) +

3 v3

1,

2,

and with the parameter m):

(m) = 0.

1.6.10. Remark. The current presentation of the pivot method is far from complete:
the situation when the pivot cannot be taken from the main diagonal has not been covered,
numerical aspects (and approximate results) of the procedure are not covered,
obtaining the results by using software products and big/huge examples are not covered
other applications (in other disciplines) are not covered.
We hope that other texts will be able to cover these aspects.
1.6.11. Example (Economic Theory Application adapted from [[8]], page 108, Example 1). An American rm has a gross prot in amount of 100000 USD. The rm accepts to donate 10% from the net prot
for The Red Cross. The rm has to pay a state tax of 5% from the prot (after donation) and a federal
tax of 40% from the prot (after the donation and after the state tax).
Which are the taxes and how much does the rm donate?
What is the real value of the donation?
1.6.12. Solution. Denote by D the donation, by S the state tax and by F the federal tax.
The net prot is 100000 S F ;
1
(100000 S F ) ) 10D + S + F = 100000;
D=
10
5
S=
(100000 D) ) D + 20S = 100000;
100
40
F =
(100000 D S) ) 2D + 2S + 5F = 200000 )
100
Solve the linear system:

1.6. REPRESENTATION OF VECTORS

53

8
8
11400000
>
>
>
F =
35736:67712;
10D + S + F = 100000
>
>
>
>
319
<
<
1500000
, the solution is:
S=
4702:194357;
D + 20S = 100000
>
>
319
>
>
>
>
>
:
: D = 1900000 5956:112853:
2D + 2S + 5F = 200000
319
When there is no donation, the taxes would be bigger:
5
D=0)S=
100000 = 5000, and
100
40
(100000 5000) = 38000
F =
100
The dierence between the taxes without donation and the taxes with donation is:
1500000 11400000
817000
5000 + 38000
=
2561:128527
319
319
319
The real value of the donation is the dierence between the donation made and the tax excess when
the donation is absent:
1900000 817000
1083000
=
= 3394:984326.
319
319
319

54

1. VECTOR SPACES AND VECTOR SUBSPACES

1.7. Operations with Subspaces


1.7.1. Proposition (Prop. 5.11, [19]). If (V; K) is a nitetype vector space and V0

V is a vector

subspace, then:
(1) (V0 ; K) is also of nitetype;
(2) 8B0 basis for V0 , 9B basis for V such that B0

B (each basis of a subspace may be extended up

to a basis of the space)


(3) dim V0

dim V;

(4) dim V0 = dim V () V0 = V.


1.7.2. Proposition. The intersection of any family of subspaces is a subspace (and a nonvoid set).
Proof. If (V; K) is a vector space and (Vi )i2I are vector subspaces in (V; K), then for V0 :=
have:

i2I

Vi we

8i 2 I, 0 2 Vi ) 0 2 V0 () V0 6= ;);
1. x, y 2 V0 ) 8i 2 I x, y 2 Vi ) 8i 2 I, x + y 2 Vi ) x + y 2 V0
2. x 2 V0 ,

2 K ) 8i 2 I, x 2 Vi and

2 K ) 8i 2 I, x 2 Vi ) x 2 V0 .

1.7.3. Example. Consider the vectors: v1 = (2; 2; 2), v2 = (2; 2; 1), v3 = ( 3; 3; 2), v4 = (2; 5; 1) and
the subspaces V1 = span (fv1 ; v2 g), V2 = span (fv3 ; v4 g).
Describe the intersection V1 \ V2 .
1.7.4. Solution. v 2 V1 \ V2 ) v is simultaneously a linear combination of v1 , v2 and of v3 , v4 so we get
the vector relation:
v=

1 v1

[v]E =

2 v2

4 v4 ,

which written in the standard basis is:

[v2 ]E = 3 [v3 ]E +
2 3
2
2
6 7
6
6 7
6
By replacing [v1 ]E = 6 2 7, [v2 ]E = 6
4 5
4
2
we obtain the system:
8
>
2 1+2 2 = 3 3+2 4
>
>
<
2 1 + 2 2 = 3 3 + 5 4 , with the
>
>
>
:
2 1
2 = 2 3+ 4
1

[v1 ]E +

3 v3

We get [v]E =

[v1 ]E +

[v2 ]E =

7
12

[v4 ]E .
3
2
2
3
7
6
7
6
2 7, [v3 ]E = 6 3
5
4
1
2

solution:

2
6 7
6 7
6 2 7+
4 5
2

8
>
>
>
>
>
>
>
<
>
>
>
>
>
>
>
:

7
6

7
12

7
6

1
2

2 R:
2
3
2
6
6
7
6
6
7
6 2 7= 6
6
4
5
4
1
2

7
6 7
7
6 7
7, [v4 ]E = 6 5 7,
5
4 5
1

7
2
7
2
0

3
7
7
7
7
5

1.7. OPERATIONS WITH SUBSPACES

The vector v (from the intersection) is: v =


(2; 5; 1) =

7 7
; ;0
2 2

7
12

(2; 2; 2)+ 76 (2; 2; 1) =

55
7 7
; ;0
2 2

or v =

1
2

( 3; 3; 2)+

The intersection of the two subspaces is:


V1 \ V2 =

7 7
; ;0 ;
2 2

2R

= f (1; 1; 0) ;

2 Rg :

Remarks:

8
>
2 1+2 2 = 3
>
>
<
The system
2 1 + 2 2 = 3 has no solution, so that v3 62 V1 .
>
>
>
:
2 1
2 = 2
8
>
2 1+2 2 =2
>
>
<
The system
2 1 + 2 2 = 5 has no solution, so that v4 62 V1 .
>
>
>
:
2 1
2 = 1
8
>
2= 3 3+2 4
>
>
<
The system
2 = 3 3 + 5 4 has no solution, so that v1 62 V2 .
>
>
>
:
2=2 3+ 4
8
>
2= 3 3+2 4
>
>
<
The system
2 = 3 3 + 5 4 has no solution, so that v2 62 V2 .
>
>
>
:
1=2 3+ 4
Because there is no basis mentioned, we assume that the initial representation is done by using the

standard ordered basis E = (e1 ; e2 ; e3 ), where e1 = (1; 0; 0), e2 = (0; 1; 0), e3 = (0; 0; 1).
In the ordered basis E the objects are represented as following:
2 3
2 3
2 3
2 3
2
3
2
3
1
0
0
2
2
3
6 7
6 7
6 7
6 7
6
6
7
7
6 7
6 7
6 7
6 7
6
6
7
7
[e1 ]E = 6 0 7, [e2 ]E = 6 1 7, [e3 ]E = 6 0 7, [v1 ]E = 6 2 7, [v2 ]E = 6 2 7, [v3 ]E = 6 3 7,
4 5
4 5
4 5
4 5
4
5
4
5
0
0
1
2
1
2
2 3
2 3
7
2
6 7
6 2 7
6 7
6 7
[v4 ]E = 6 5 7, [v]E = 6 72 7.
4 5
4 5
1
0
2
3
2 2
6
7
6
7
The set B1 = (v1 ; v2 ) is linear independent (because the matrix 6 2 2 7 has rank 2) and generates
4
5
2
1
V1 , so that B1 is a basis for V1 , while the dimension of V1 is 2, and the objects within V1 are represented
like this:

56

1. VECTOR SPACES AND VECTOR SUBSPACES

Figure 1. Intersection of two subspaces


2

[v1 ]B1 = 4

5, [v2 ] = 4
B1

5, @ [v3 ] , @ [v4 ] , [v] =


B1
B1
B1

1
0
in B1 because it is from the intersection).

2
4

7
12
7
6

5 (the vector v may be represented

The set B2 = (v3 ; v4 ) is a basis in V2 , the dimension of V2 is 2, and the objects from V2 are represented
as:

[v3 ]B2 = 4

5, [v4 ] = 4
B2

5, @ [v1 ] , @ [v2 ] , [v] =


B2
B2
B2

0
1
in B2 because it is from the intersection).

2
4

1
2

5 (the vector v may be represented

In the picture there are the following objects (in terms of both Linear Algebra and Analytic Geometry):
The points:
O (0; 0; 0), P 1 (2; 2; 2), P 2 (2; 2; 1), P 3 ( 3; 3; 2), P 4 (2; 5; 1);
The position vectors:
!
!
!
!
OP 1, OP 2, OP 3, OP 4 (the vectors v1 , v2 , v3 , v4 );
The planes:
(P L1) :

x + y = 0 (the subspace V1 ),

(P L2) :

x+y

3z = 0 (the subspace V2 );

the intersecting line for the planes (P L1) and (P L2), (P L12): x = (0; 0; 0) + s (1; 1; 0) (the subspace
V1 \ V2 = f (1; 1; 0) ;

2 Rg, with dimension 1). The picture was obtained with the software product

"Archimedes Geo3D, version 1.3.6", www.raumgeometrie.de.

1.7. OPERATIONS WITH SUBSPACES

57

1.7.5. Denition. Given a family of subspaces (Vi )i=1;k , the set


k
X

Def

Vi =

i=1

( k
X
i=1

vi ; vi 2 Vi ; 8i = 1; k

is called the sum of subspaces.


1.7.6. Proposition. The sum of subspaces is a subspace.
k
P

Proof. xj 2
k
P

i=1

(vi1 + vi2 ) 2

k
P

i=1

Vi .

i=1

Similar, for

Vi ; j = 1; 2 ) 8i = 1; k, 9vij 2 Vi ; j = 1; 2 such that xj =

k
P

2 K; x1 =

i=1

( vi1 ) 2

k
P

i=1

k
P

i=1

vij ) x1 + x2 =

Vi .

1.7.7. Proposition. The sum of subspaces is the covering of their union


!
k
k
X
[
Vi = span
Vi
i=1

i=1

and it is the smallest space which contains the union (in terms of inclusion).
k
P

Proof. Consider x 2

i=1

Vi ) 8i 2 I, 9vi 2 Vi

Conversely, consider x 2 span


that x =

m
P

j vj .

k
S

i=1

Vi

k
S

i=1

Vi , such that x =

) 9m 2 N , 9

k
P

i=1

2 K, j = 1; m, 9vj 2

j=1

For each j = 1; m, 9ij 2 I, vj = uij 2 Vij )

j u ij

vi ) x 2 span

2 Vij an then x =

m
P

k
S

i=1

j u ij

j=1

k
S

i=1

Vi .

Vi , j = 1; m, such

i2I

Vi [for two

dierent indices or more j1 , j2 it may happen that the indices ij1 and ij2 are the same and in this
situation

j1 uij1

j2 uij2

2 Vij1 ].

1.7.8. Proposition. max (dim Vi )


i=1;k

dim

k
P

i=1

Vi

k
P

Proof. For the rst inequality we have 8i = 1; k; Vi


max (dim Vi )
i=1;k

k
P

i=1

dim

k
P

i=1

dim (Vi ).

i=1

Vi .

k
P

i=1

Vi ) 8i = 1; k; dim Vi

dim

k
P

i=1

For the second inequality, consider a basis for each space and their union; the union generates the set
k
P
Vi and the number of its vectors is smaller than
dim (Vi );
i=1

because any basis has at most as many vectors as a generating set, we get that dim

k
P

i=1
k
P

Vi

dim (Vi ).

Vi

i=1

1.7.9. Proposition. (Equivalent denitions for the direct sum of two subspaces) Consider two
subspaces V1 and V2 , and their sum V1 + V2 in the vector space (V; K). The following statements are
equivalent:

58

1. VECTOR SPACES AND VECTOR SUBSPACES

(1) 8v 2 V1 + V2 , 9!v1 2 V1 , 9!v2 2 V2 , v = v1 + v2 [Each vector admits a unique decomposition as a


sum between a vector from V1 and a vector from V2 ].
(2) V1 \ V2 = f0V g [The intersection between the two spaces is the null vector subspace].
(3) For any basis B1 of V1 , and for any basis B2 of V2 , the set B1 [ B2 is a basis for V1 + V2 .
(4) dim (V1 + V2 ) = dim V1 + dim V2 [The dimension of the sum subspace equals the sum of the
dimensions of the component subspaces].
Proof. We will prove the equivalence of the statements 1. and 2., 2 and 3., 3 and 4.
1.)2.
By contradiction:
non (V1 \ V2 = f0V g)

(9x 2 V1 \ V2 ; x 6= 0V ).

Consider v 2 V1 + V2 and x 2 V1 \ V2 , x 6= 0V .
v 2 V1 + V2 ) 9v1 2 V1 , v2 2 V2 , such that v = v1 + v2 .
But since x 2 V1 \V2 , v = (v1

x)+(v2 + x), with v1 x 2 V1 and v2 +x 2 V2 , so v = (v1

x)+(v2 + x)

is another decomposition for v, which is distinct because x 6= 0V .


This means: (9x 2 V1 \ V2 ; x 6= 0V ) ) (v = v1 + v2 = (v1

x) + (v2 + x)) [the decomposition is not

unique]
So (non(2:) ) non(1:))

(1: ) 2:)

2.)1.
By contradiction:
If the decomposition is not unique, then 9u1 ; v1 2 V1 , 9u2 ; v2 2 V2 , such that u1 6= v1 or u2 6= v2 and
v = v1 + v2 = u1 + u2 .
Then 0V 6= v1

u1 = v2

So (non (1:) ) non (2:))

u2 2 V1 \ V2 ) (9x (= v1

u1 = v2

u2 ) 2 V1 \ V2 ; x 6= 0V )

(2: ) 1:)

2.,3. Consider a basis (ei )i=1;k1

V1 , (fj )j=1;k2

V2 in each subspace.

)
Assume that V1 \ V2 = f0V g.
The union set (ei )i=1;k1 [ (fj )j=1;k2 is a basis for V1 + V2 :
The set (ei )i=1;k1 [ (fj )j=1;k2 generates V1 + V2 , because
span (ei )i=1;k1 = V1 and span (fj )j=1;k2 = V2 )
) span (ei )i=1;k1 [ (fj )j=1;k2

span (ei )i=1;k1 + span (fj )j=1;k2 = V1 + V2 .

The set (ei )i=1;k1 [ (fj )j=1;k2 is linear independent:


Consider a null linear combination
k1
k2
P
P
i ei +
j fj = 0 )
i=1

j=1

1.7. OPERATIONS WITH SUBSPACES

)
)
)

k1
P

i=1
k1
P

i=1
k2
P

i ei

k2
P

j=1
i ei
j fj

j=1

j fj

59

2 V1 \ V2 = f0V g

= 0V ) 8i = 1; k1 ,
= 0V ) 8j = 1; k2 ,

= 0 [the set (ei )i=1;k1 is linear independent]

= 0 [the set (fj )j=1;k2 is linear independent]

Thus the set (ei )i=1;k1 [ (fj )j=1;k2 is linear independent.


"("
Assume that for any basis B1 of V1 , and for any basis B2 of V2 , the set B1 [ B2 is a basis for V1 + V2 .
Assume by contradiction that there is x 2 V1 \ V2 , x 6= 0V .
Then the nonzero vector x has two distinct representations: one in B1 and another one in B2 , which
is a contradiction with the linear independence of the set B1 [ B2 .
3. () 4.
")"
When the statement 3. happens, then B1 \ B2 = ;, because otherwise if v 2 B1 \ B2 then by taking the
new set B10 = (B1 n fvg) [ f2vg [by replacing in B1 the vector v with the vector 2v] the set B1 is another
basis for V1 ; the sets B1 [ B2 and B10 [ B2 cannot have the same number of elements while they are both
bases of V1 + V2 , which is a contradiction.
So jB1 [ B2 j = jB1 j + jB2 j, which means that dim (V1 + V2 ) = dim V1 + dim V2 .
"("
Assume that dim (V1 + V2 ) = dim V1 + dim V2 and consider a basis B1 for V1 and a basis B2 for V2 .
Since span (B1 ) = V1 and span (B2 ) = V2 , span (B1 [ B2 )

V1 +V2 [the set B1 [B2 generates V1 +V2 ]

If B1 \ B2 6= ;, then jB1 [ B2 j < jB1 j + jB2 j ) dim (V1 + V2 ) < dim V1 + dim V2 contradiction, so
B1 \ B2 = ;.
If B1 \ B2 = ; but the set B1 [ B2 is not linear independent, then dim (V1 + V2 ) < jB1 [ B2 j =
jB1 j + jB2 j, which is again a contradiction.
So for any basis B1 of V1 , and for any basis B2 of V2 , the set B1 [ B2 is a basis for V1 + V2 .
1.7.10. Denition. The sum V1 + V2 of two subspaces V1 and V2 is called direct sum when any condition
from the proposition 1.7.9 takes place. The direct sum of two subspaces is denoted by V1

V2 .

[the direct sum concept may be seen as a generalization of linear independence; the direct sum may be
seen as the linear independence of a set of subspaces]
1.7.11. Denition. Consider the subspace V2 in (V; K). If V1 is another subspace of (V; K) such that
V = V1
6 The

V2 , then V1 is called a complement6 of V2 in V.

terminology used is dierent for various existing schools. The AngloSaxon school uses the term "complementary
subspaces", while the French school uses the term "sousespaces supplmentaires"; moreover, the dierences remain over the

60

1. VECTOR SPACES AND VECTOR SUBSPACES

1.7.12. Proposition. If V1 is a subspace of (V; K) then there is a subspace V2 such that V = V1

V2 .

[Each subspace has at least a complement]


Proof. Consider a basis (ei )i=1;k1 of V1 ; since this basis is a linear independent set of V, [by Corollary
1.5.7, page 29] it may be completed up to a basis in V with some vectors denoted by (fj )j=1;k2 . Then the
set (ei )i=1;k1 [ (fj )j=1;k2 is a basis for V and the set V2 = span (fj )j=1;k2 is a subspace and a complement
of V1 in V:
x 2 V1 \ V2 ) the vector x may be represented in both sets as linear combinations,
k2
k1
P
P
x=
j fj , so that
i ei =
j=1

i=1

k1
P

i ei

k2
P

j fj

= 0,

j=1

i=1

and since the above expression is a null linear combination of the set (ei )i=1;k1 [ (fj )j=1;k2 which is
a basis, all the scalars are zero, which means that any element of the intersection is null, which in turn
means that the sum of the subspaces is direct.
1.7.13. Remark. It may be observed from the proof that, since a linear independent set may be completed
in many ways up to a basis, the complement (of a proper subspace) is not unique7.
1.7.14. Example. If V1 = f (1; 0) ;

2 Rg, then the set f(1; 0)g may be completed up to a basis in

R2 with each of the vectors (0; 1), (1; 1), (1; 1) so that each of the subspaces V2 = f (0; 1) ;
V3 = f (1; 1) ;
V1

2 Rg, V4 = f (1; 1) ;

2 Rg is a complement of V1 in R2 : R2 = V1

V2 = V1

2 Rg,
V3 =

V4 .

1.7.15. Remark. If V1 is a subspace in V such that dim V1 = k and dim V = n, then any complement of
V1 in V has dimension n

k. The dimension of the complement is also called the codimension of V1 .

1.7.16. Theorem. [Equivalent denitions for the direct sum of more than two subspaces]
k
P
Consider in (V; K) k subspaces (Vi )i=1;k such that V =
Vi . The following statements are equivalent:
i=1

k
P

(1) 8v 2 V, 8i = 1; k, 9!vi 2 Vi such that v =


vi .
i=1
!
k
P
(2) 8j = 1; k; Vj \
Vi = f0V g.
i=1;i6=j

(3) For i = 1; k, 8Bi basis in Vi , the set


(4)

k
P

i=1

dim Vi = dim V.

k
S

Bi is linear independent.

i=1

translations (an English text translated from French uses the term "supplementary subspaces"). When we consider other
schools (such as the Russian school or the German school) and related translations, we nd a certain ambiguity even in the
same language.
7 The vector space (V; K) has two improper subspaces: f0g (the null subspace) and V (the whole space). Each improper
subspace has a unique complement, namely the other improper subspace.

1.7. OPERATIONS WITH SUBSPACES

61

Proof. The following parts will be proved: 1 ) 2, 2 ) 1, 2 ) 3, 3 ) 2, "3.)4." and


"4.)3."
1 ) 2

e2 )e1

k
P

Vi
Suppose by contradiction that there is an index j 2 1; k such that Vj \
i=1;i6=j
!
k
k
P
P
Vi and x 6= 0.
Vi n f0g ) x 2 Vj and x 2
Then 9x 2 Vj \

6= f0g.

i=1;i6=j

i=1;i6=j

k
P
Vi we have:
vi0 , vi 2 Vi , i 6= j and for an arbitrary vector v 2
i=1
i=1;i6=j
!
!
k
k
k
k
k
k
P
P
P
P
P
P
v=
vi =
vi +vj =
vi x +(vj + x) =
vi
vi0 +(vj + x) =
(vi

Then x =

i=1

k
P

i=1;i6=j

i=1;i6=j

i=1;i6=j

i=1;i6=j

vi0 )+

i=1;i6=j

(vj + x), which is another decomposition for x, which is distinct from the rst one because x 6= 0, which
is a contradiction with the unicity of the decomposition.
2 ) 1 e1 )e2
If an element has two decompositions,

k
P

vi =

i=1

if there is an index j such that vj

k
P

i=1

vj0 6= 0, then vj0

vi0 , then
vj =

(vi

i=1
k
P

(vi

i=1;i6=j

f0V g, which is a contradiction.


It follows that 8j = 1; k, vj

k
P

vi0 ) = 0;
vi0 ) 6= 0 ) 9j; Vj \

k
P

i=1;i6=j

Vi

6=

vj0 = 0 which means that the two decomposition are identical.

2)3
Consider a basis Bi for each subspace Vi . The union of these bases

k
S

Bi generates the sum, and from

i=1

the condition 2. we also get the linear independence of the union:


k
S
If
Bi is not independent, then there is an index j and a nonzero vector in Vj which is a sum of
i=1
!
k
P
vectors from the other subspaces, which means Vj \
Vi 6= f0V g, which is a contradiction.
i=1;i6=j

It follows that the set

k
S

Bi is independent, which is 3.

i=1

3)2

Consider a basis Bi for each subspace Vi . The union of these bases


independent and also generates the sum.
k
P

k
S

Bi is assumed to be linear

i=1

If there is an index j such that Vj \


Vi 6= f0g then there is a vector x0 such that 0 6= x0 2
i=1;i6=j
!
k
k
P
S
Vj \
Vi and for a certain arbitrary vector v its representation in terms of
Bi may be modied
i=1

i=1;i6=j

by means of x0 to obtain another distinct representation, by using the distinct representations of x0 in Vj


k
P
and in
Vi :
i=1;i6=j
!
kj0
kj
k
P
P
P
j0 j0
j j
x0 =
)
i ei =
i ei
i=1

j=1;j6=j0

i=1

62

1. VECTOR SPACES AND VECTOR SUBSPACES

)v=

k
P

j=1

kj
P

i=1

j j
i ei

kj0
P

j0 j0
i ei +

i=1

k
P

j=1;j6=j0

kj
P

i=1

j j
i ei

which is a contradiction with the linear independence of

kj0
P

i=1
k
S

j0
i

j0
i

eji 0 +

k
P

j=1;j6=j0

Bi .

kj
P

i=1

j
i

j
i

eji ,

i=1

3.)4.
If for i = 1; k, 8Bi basis in Vi , the set
1
0
k
S
8j = 1; k, Bj \ @ Bi A = ;

k
S

Bi is linear independent, then

i=1

i=1
i6=j

because otherwise, by altering the basis Bj (such that the above condition is satised), we would be

able to obtain alternative unions which are linear independent but with dierent numbers of elements,
which is a contradiction with the fact that each basis has the same number of elements.
It follows that all the possible intersections of the sets Bi are void, which means that
k
k
k
k
P
P
P
S
jBi j ) dim
Vi =
dim Vi .
Bi =
i=1

i=1

i=1

i=1

4.)3.

Assume that dim

k
P

i=1

The union

k
S

Vi

Bi generates

i=1

k
P

i=1
k
P

dim Vi and consider a basis Bi for each Vi .

Vi ; if the set

i=1
k
P

have strictly less elements than

i=1

k
S

Bi is not linear independent, then a basis of

i=1

k
P

i=1

dim Vi , which contradicts the assumption.

Vi will

1.7.17. Denition. The sum of a set of subsets (Vi )i=1;k is called direct when any condition from the
Theorem 1.7.16 is satised.
The notation used for this special summation is

k
L

i=1

is direct).

Vi (and it means that the sum

k
P

i=1

Vi of the subspaces

1.7.18. Theorem. (The Grassmann Formula) For any two subspaces V1 and V2 of (V; K) we have:
dim V1 + dim V2 = dim (V1 + V2 ) + dim (V1 \ V2 ) :
Proof. Consider
V01 a complement of V1 \ V2 in V1 : V1 = (V1 \ V2 )
V02 a complement of V1 \ V2 in V2 : V2 = (V1 \ V2 )
It
8 follows:
>
V01 \ V2 = V1 \ V01 \ V2 = (V1 \ V2 ) \ V01 = f0g
>
>
| {z }
>
<
0
=V1

>
>
V02 \ V1 = V2 \ V02 \ V1 = (V1 \ V2 ) \ V02 = f0g :
>
>
| {z }
:
=V02

8
< (V1 \ V2 ) \ V0 = f0g;
1
0
V1 )
: V0
V1 :
1
8
< (V1 \ V2 ) \ V0 = f0g;
2
V02 )
: V0
V:
2

1.8. THE LATTICE OF SUBSPACES

63

We show that the sum from the righthand part is direct: V1 +V2 = (V1 \ V2 )+V01 +V02 = (V1 \ V2 )
V01

V02 by using the Theorem 1.7.16.


We have to prove the relations:
(V1 \ V2 ) \ (V01 + V02 ) = f0g
x 2 (V1 \ V2 ) \ (V01 + V02 ) ) x 2 V1 ; x 2 V2 ; x 2 V01 + V02 )
x 2 V1 ; x 2 V2 ; x = u1 + u2 ; ui 2 V0i ) u1 2 V01
u1 = x

V1 and

u2 2 V2 ) u1 2 V1 \ V2 \ V01 = f0g ) u1 = 0;

similar, u2 = 0 and so x = 0.
V01 \ ((V1 \ V2 ) + V02 ) = V01 \ V2 = f0g.
V02 \ ((V1 \ V2 ) + V01 ) = V02 \ V1 = f0g.
So the sum (V1 \ V2 ) + V01 + V02 is direct which means that the following relation between dimensions
takes place:
dim (V1 + V2 ) = dim (V1 \ V2 ) + dim V01 + dim V02 ;
because dim V0i = dim Vi

dim (V1 \ V2 ) it follows that


dim (V1 + V2 ) = dim V1 + dim V2

dim (V1 \ V2 ) ;

which concludes the proof.

1.8. The Lattice of Subspaces


Consider a vector space (V; K), with dim V =n, and the set of all its vector subspaces, denoted by
SL (V)

P (V).

Consider on SL (V) the operations:


Intersection "\"; properties of the intersection:
[SL (V) is a closed part of P (V) with respect to intersection] 8W1 ;W2 2SL(V) , W1 \ W2 2 SL (V)
[idempotency of intersection] 8W2SL(V) , W \ W = W
[commutativity of intersection] 8W1 ;W2 2SL(V) , W1 \ W2 = W2 \ W1
[associativity of intersection] 8W1 ;W2 ;W3 2SL(V) , (W1 \ W2 ) \ W3 = W1 \ (W2 \ W3 )
[neutral element of intersection, V] 8W2SL(V) , V \ W = W
[rst element, f0V g] 8W2SL(V) , f0V g \ W = f0V g
[there is no inverse with respect to intersection]
Sum "+"; properties of the sum:
[SL (V) is a stable part of P (V) with respect to sum] 8W1 ;W2 2SL(V) , W1 + W2 2 SL (V)
[idempotency of sum] 8W2SL(V) , W + W = W

64

1. VECTOR SPACES AND VECTOR SUBSPACES

[commutativity of sum] 8W1 ;W2 2SL(V) , W1 + W2 = W2 + W1


[associativity of sum] 8W1 ;W2 ;W3 2SL(V) , (W1 + W2 ) + W3 = W1 + (W2 + W3 )
[neutral element of sum, f0V g] 8W2SL(V) , f0V g + W = W
[last element, V] 8W2SL(V) , V + W = V
[there is no inverse element with respect to sum]
Properties when the intersection and sum are meeting:
[absorption] W0 \ (W0 + W) = W0 + (W0 \ W) = W0
[the operations are not distributive (one with respect to the other one)]
9W1 ;W2 W3 2SL(V) , W1 \ (W2 + W3 ) 6= (W1 \ W2 ) + (W1 \ W3 )
for example, 3 distinct lines passing through origin and contained in the same plane
W1 \ (W2 + W3 ) = W1 and (W1 \ W2 ) + (W1 \ W3 ) = f0V g
9W1 ;W2 W3 2SL(V) , W1 + (W2 \ W3 ) 6= (W1 + W2 ) \ (W1 + W3 )
W1 + (W2 \ W3 ) = W1 and (W1 + W2 ) \ (W1 + W3 ) = W1 + W2 = W1 + W3 = the containing plane.
[the complement of an element, which is not unique] 8W2SL(V) , 9W0 2SL(V) , W + W0 = V and W \ W0 =
f0V g.
Consider on SL (V) the relation " " [inclusion].
Inclusion has the following properties:
[reexivity] 8W2SL(V) , W

[antisymmetry] 8W1 ;W2 2SL(V) , W1

W2 and W2

W1 ) W1 = W2

[transitivity] 8W1 ;W2 ;W3 2SL(V) , W1

W2 and W2

W3 ) W1

W3 .

Inclusion is a partial relation [some elements are incomparable]


e (W1

W2 ) ) W2

W1 OR (W1 and W2 are incomparable)

Properties when inclusion meets with intersection and sum:


[the consistency of inclusion with respect to intersection] W1
[the consistency of inclusion with respect to sum] W1

W2 () W1 \ W2 = W1

W2 () W1 + W2 = W2

8W0 2SL(V) , 8W2SL(V) , the function W 7! W0 \ W is increasing (nonstrictly) (isotone) [when W1 and W2
are comparable and W1

W2 , then, for each W0 , W1 \ W0 and W2 \ W0 are also comparable and the

inclusion relation remains the same: W1 \ W0

W2 \ W0 ]

8W0 2SL(V) , 8W2SL(V) , the function W 7! W0 + W is increasing (isotone) [if W1 and W2 are comparable
and W1

W2 , then, for each W0 , W1 + W0 and W2 + W0 are also comparable and the inclusion relation

remains the same: W1 + W0

W2 + W0 ]

[distributivity inclusions, which may be strict, because of the lack of distributivity]


(W1 \ W2 ) + (W1 \ W3 )

W1 \ (W2 + W3 )

1.8. THE LATTICE OF SUBSPACES

W1 + (W2 \ W3 )

65

(W1 + W2 ) \ (W1 + W3 )

[SL (V) is modular] 8W1 ;W2 ;W3 2SL(V) with W1

W3 , we have: W1 + (W2 \ W3 ) = (W1 + W2 ) \ W3

Proof:
" "
W1

W3

and
W1

W1 + W2

and
W2 \ W3

9
>
>
>
=
>
>
>
;

W2

and
W2 \ W3 W3
) W1 + (W2 \ W3 )

) W1

9
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
=

W3 \ (W1 + W2 )

9
W1 + W2 >
>
>
=
>
>
>
;

) W2 \ W3

W3 \ (W1 + W2 )

>
>
>
>
>
>
>
>
>
>
W3 \ (W1 + W2 ) >
>
>
>
>
>
;

" "

9
8
8
9
>
>
>
>
x 2 W3
x 2 W3
>
>
>
>
>
>
>
>
=
<
<
=
)
)
x 2 W3 \ (W1 + W2 ) )
and 9x1 2 W1 ; 9x2 2 W2 ;
and
>
>
>
>
>
>
>
>
>
>
>
>
;
:
:
;
x = x1 + x2
x 2 W1 + W2
8
9
>
>
x
2
W
\
W
(
)
>
>
2
3
>
>
< 2
=
)
) x = x1 + x2 2 W1 + (W2 \ W3 )
and
>
>
>
>
>
>
:
;
x1 2 W1
( ) because x2 2 W2 and x2 = x x1 2 W3 , because x 2 W3 and x1 2 W1 W3
Def: A subset of SL (V) is called a chain when it is totally ordered [doesnt contain incomparable
elements]; when the chain has a nite number k of elements, the number k

1 is called the length of the

chain.
Def: For an element W 2 SL (V), the superior margin of all the chains between f0V g and W is called
the height of W. The elements with height 1 are called atoms or points.
Remark: Any subset of a chain is also a chain.
Remark: Any chain from SL (V) is of nite length, of at most the dimension of the embedding vector
space
Remark: Any chain from SL (V), with length k is isomorphic with the set f1;
there is a bijection between the chain and the set f1;
i

; kg [in the sense that

; kg which preserves the order: ' (i)

' (j) ()

j]
Def: If W1

W2 , the set [W1 ; W2 ] = fW 2SL (V) ; W1

W2 g is called the interval between W1

and W2 . When the interval [W1 ; W2 ] = fW1 ; W2 g [it doesnt contain intermediary elements] the interval
is called prime and we say that W2 covers W1 .

66

1. VECTOR SPACES AND VECTOR SUBSPACES

Remark: SL (V) is an interval: SL (V) = [f0V g ; V].


Remark: An interval may have an innite number of elements [for example, when W is a plane, the
interval [f0V g ; W] contains all the lines of the plane passing through origin]
Remark: The elements of an interval are not necessarily comparable [the previous example or [f0V g ; V] =
SL (V)]
Def: Two intervals are called transposed when they may be written as [W1 \ W2 ; W1 ] and [W2 ; W1 + W2 ],
for a suitable choice of the subspaces W1 and W2 .
Def: Two intervals are called projective when there is a nite sequence of intervals two by two transposed.
Remark: The dimension as a vector subspace of an element from SL (V) is a function d ( ) : SL (V) !
f0; 1;

; Lg with the properties:

* W1

W2 (strict inclusion) ) d (W1 ) < d (W2 )

* when W2 is immediate superior to W1 , d (W2 ) = d (W1 ) + 1


The function d ( ) induces on SL (V) a graduated partial ordered structure, which satises the Jordan
Dedekind condition: All the maximal chains with the same edges have the same nite length
Def: A function v ( ) : SL (V) ! R is called a valuation when v (W1 ) + v (W2 ) = v (W1 + W2 ) +
v (W1 \ W2 ). The valuation is called isotone [increasing] when W1
positive [strictly increasing] when W1

W2 ) v (W1 )

v (W2 ) and is called

W2 ) v (W1 ) < v (W2 ).

Remark: Given a strictly increasing valuation v ( ), the value of an interval [W1 ; W2 ] is dened as
v ([W1 ; W2 ]) = v (W2 )

v (W1 ).

Remark: d ( ) is a strictly increasing valuation.


The set SL (V) has an innite number of elements (for example an innity of 1dimensional vector
subspaces).
Remark: When W1 6= W2 and they are both immediate superior to W0 , then W1 + W2 is immediate
superior for both W1 and W2 . Dually, when W0 is immediate superior for both W1 and W2 , then both
W1 and W2 are immediate superior for W1 \ W2 .
Remark: The functions 'W1 ( ) : [W2 ; W1 + W2 ] ! [W1 \ W2 ; W1 ] dened by 'W1 (W) = W \ W1
and

W2

( ) : [W1 \ W2 ; W1 ] ! [W2 ; W1 + W2 ] dened by

W2

(U) = U + W are isomorphisms which

are inverse to each other. Moreover, the intervals [W1 \ W2 ; W1 ] and [W1 ; W1 + W2 ] are transported in
transposed isomorphic intervals by the function

W2

( ), respectively by 'W1 ( ).

Remark: The projective intervals are isomorphic.


Remark: Any subspace W is the sum of the lines passing through origin which are included in W.
Remark: The sub lattice generated by two subspaces W1 and W2 is fW1 ; W2 ; W1 \ W2 ; W1 + W2 g.

1.8. THE LATTICE OF SUBSPACES

67

Remark: The sublattice generated by three subspaces


Remark: When two subspaces W1 and W2 are comparable (in the sense that W1

W2 or W2

W1 ),

and there is a subspace W0 such that W0 \ W1 = W0 \ W2 and W0 + W1 = W0 + W2 , then W1 = W2 .


Remark: For each interval [W1 ; W2 ]

SL (V), and for each element of the interval there is a comple-

ment with respect to the interval. For W 2 [W1 ; W2 ], the complement of W with respect to [W1 ; W2 ] is
the subspace (W0 \ W2 ) + W1 .
Remark: Given a strictly increasing valuation v ( ), no interval can be projective with respect to a
proper subinterval.
Remark: Given a strictly increasing valuation v ( ), all the projective intervals have the same value.
Remark: Any valuation associates a unique value for each class of prime projective intervals.
Remark: If p (W) is the number of prime intervals of a chain between f0V g and W, then p ( ) is a
valuation.
Remark: Any linear combination of valuations is a valuation.
P
Remark: The structure of a valuation: v (W) = v (f0V g) +

projective intervals,

p p (W),

where, for each class of prime

is a value attached with the class and p ( ) is the number of prime projective

intervals with that class, in any maximal chain between f0V g and W.

CHAPTER 2

Linear Transformations
2.0.1. Denition. (Linear Transformation) If (V1 ; K) and (V2 ; K) are vector spaces (over the same scalar
eld), a function U ( ) : V1 ! V2 is called vector spaces morphism (or linear transformation) if:
(1) U (x + y) = U (x) + U (y) ; 8x; y 2 V1 ( U ( ) is a group morphism (additivity));
(2) U ( x) = U (x) ; 8x 2 V1 ; 8 2 K (U ( ) is homogeneous).
[additivity and homogeneity together are also called linearity]
The set of all vector spaces morphisms between (V1 ; K) and (V2 ; K) is denoted by LK (V1 ; V2 ).
2.0.2. Example. Consider the vector spaces (R3 ; R) and (R2 [X] ; R) (Rn [X] is the set of all polynomials
of degree at most n, with the unknown denoted by X and with real coe cients).
The function U ( ) : P2 [X] ! R3 dened by U (P ) = xP 2 R3 (for a polynomial P (X) = aX 2 +bX +c 2
R2 [X] we attach the vector xP = (a; b; c)) is a vector spaces morphism.
The operations in (R2 [X] ; R) are (the denitions should be known from the highschool):
Def

P ( ) + Q ( ) = (P + Q) ( ) ; where: (P + Q) (X) = P (X) + Q (X) ;


Def

P ( ) = ((

P ) ( )) ; where: (

P ) (X) =

P (X) :

P (X) = aX 2 + bX + c 2 R2 [X] ) U (P ) = (a; b; c) 2 R3 ;


Q (X) = a1 X 2 + b1 X + c1 2 R2 [X] ) U (Q) = (a1 ; b1 ; c1 ) 2 R3 :
(P + Q) ( ) is dened by:
(P + Q) (X) = (a + a1 ) X 2 + (b + b1 ) X + (c + c1 ) 2 R2 [X] )
) U ((P + Q)) = (a + a1 ; b + b1 ; c + c1 ) = (a; b; c) + (a1 ; b1 ; c1 ) = U (P ) + U (Q)
( P ) ( ) is dened by:
( P ) (X) = aX 2 + bX + c )
) U (( P )) = ( a; b; c) =

(a; b; c) = U (P ) :

2.0.3. Denition. A linear transformation between two vector spaces which is bijective is called isomorphism.
2.0.4. Denition. (Isomorphic spaces) Two vector spaces are called isomorphic if between them there is
an isomorphism. We denote this situation by (V1 ; K) = (V2 ; K).

70

2. LINEAR TRANSFORMATIONS

2.0.5. Remark. Intuitively speaking, when two vector spaces are isomorphic, the vector space algebraic
structure does not distinguish between the spaces. Still the spaces may be dierent and they may be distinguished from other perspectives, like other abstract structures and/or other nonmathematical reasons.
An example for this type of situation is the study of the vector spaces (R2 ; R) and (C; R).

2.0.6. Proposition. Any nitetype vector space (V; K) with dimK V = n is isomorphic with the vector
space (Kn ; K) 2.0.4.

Proof. Consider a finite-type vector space (V, K) and a basis B = {u1, ..., un} in (V, K).
Consider the function φ(·) : V → Kⁿ defined by φ(v) = [v]B.
The function φ(·) is linear:
for v1 = Σ_{i=1}^n αi ui with [v1]B = (α1, α2, ..., αn)^T and v2 = Σ_{i=1}^n α′i ui with [v2]B = (α′1, α′2, ..., α′n)^T, we have
v1 + v2 = Σ_{i=1}^n (αi + α′i) ui, so that
[v1 + v2]B = (α1 + α′1, α2 + α′2, ..., αn + α′n)^T = (α1, ..., αn)^T + (α′1, ..., α′n)^T = [v1]B + [v2]B,
so φ(v1 + v2) = φ(v1) + φ(v2) (from the uniqueness of the representation in a basis).
Similarly, for α ∈ K we have αv1 = Σ_{i=1}^n (ααi) ui ⇒ [αv1]B = (αα1, αα2, ..., ααn)^T = α(α1, α2, ..., αn)^T = α[v1]B,
so φ(αv1) = αφ(v1), which means that the function is linear.
The function is bijective: the injectivity results from the linear independence property of the basis (the uniqueness of the coordinates), while the surjectivity results from the generating property of the basis.

2.0.7. Exercise. Show that the morphism from the previous example is bijective.
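A numerical sketch of the coordinate isomorphism φ(v) = [v]B (added here; the basis below is an arbitrary choice): the coordinates are obtained by solving a linear system, and linearity and bijectivity can be checked on concrete vectors.

import numpy as np

# the columns of B are the basis vectors (any invertible matrix gives a basis of R^3)
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

def phi(v):
    # coordinates of v in the basis given by the columns of B
    return np.linalg.solve(B, v)

v1 = np.array([2.0, -1.0, 3.0])
v2 = np.array([0.0, 5.0, 1.0])

assert np.allclose(phi(v1 + v2), phi(v1) + phi(v2))   # additivity
assert np.allclose(phi(3.0 * v1), 3.0 * phi(v1))      # homogeneity
assert np.allclose(B @ phi(v1), v1)                   # reconstruction (phi is invertible)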


2.0.8. Definition. (Linear functional) Any linear transformation between the vector spaces (V, K) and (K, K) is called a linear functional over (V, K) (any element of the set LK(V, K)). The set LK(V, K) is also denoted by V′ (= LK(V, K)) and is called the algebraic dual of the vector space (V, K).

2.0.9. Exercise. Show that U(·) : R2[X] → R, defined by U(P) = P(1), is a linear functional.

2.1. Examples of Linear Transformations


2.1.1. Example. The differentiation operator, over the vector space of all infinitely differentiable functions of a single variable: D(·) : C^∞(R) → C^∞(R), D(f(·)) = f′(·).
The differentiation operator is linear: D((f + g)(·)) = (f + g)′(·) = f′(·) + g′(·); D((αf)(·)) = (αf)′(·) = αf′(·).

2.1.2. Example. The integration operator, defined over the vector space F of all integrable functions: I(·) : F → F, I(f(·))(t) = ∫_a^t f(τ)dτ.
The operator is linear because of the properties of the integral.

2.1.3. Example. Consider the vector spaces (R3, R) and (R2[X], R) (Rn[X] is the set of all polynomials of degree at most n, in the unknown X and with real coefficients).
The function U(·) : R2[X] → R3 defined by U(P(·)) = xP(·) ∈ R3 (to a polynomial P(X) = aX² + bX + c ∈ R2[X] we attach the vector xP(·) = (a, b, c)) is a morphism of vector spaces.


2.2. Properties of Linear Transformations


2.2.1. Proposition. (Properties of linear transformations) Consider two vector spaces over the same field of scalars (V1, K), (V2, K) and a linear transformation U(·) : V1 → V2. Then:
(1) U(·) is linear ⇔ ∀α, β ∈ K, ∀x1, x2 ∈ V1, U(αx1 + βx2) = αU(x1) + βU(x2);
(2) [[19], Prop. 6.1, page 91] U(·) is linear ⇔ GU(·) = {(x, U(x)); x ∈ V1} is a vector subspace in (V1 × V2, K);
(3) U(·) is linear ⇒
(a) U(0_V1) = 0_V2;
(b) ∀ vector subspace V′1 ⊆ V1, U(V′1) = {U(v); v ∈ V′1} is a vector subspace in V2;
(c) ∀ vector subspace V′2 ⊆ V2, U⁻¹(V′2) = {v ∈ V1; U(v) ∈ V′2} is a vector subspace in V1.
Note: for points 3.b and 3.c we use the direct image of a set by a function and the preimage of a set by a function; see Definition 7.2.18 and Remark 7.2.19.

Proof. Let U(·) : V1 → V2 be a linear transformation.
(1) "⇒" When U(·) is a linear transformation and x1, x2 ∈ V1, α, β ∈ K, we have:
U(αx1 + βx2) = (by additivity) U(αx1) + U(βx2) = (by homogeneity) αU(x1) + βU(x2).
"⇐" When U(·) is a function which satisfies the relation
∀α, β ∈ K, ∀x1, x2 ∈ V1, U(αx1 + βx2) = αU(x1) + βU(x2),
for β = 0 we get U(αx1) = αU(x1) [homogeneity],
for α = β = 1 we get U(x1 + x2) = U(x1) + U(x2) [additivity],
which means that the function U(·) is a linear transformation.


(2) When (V1, K) and (V2, K) are two vector spaces over the same field of scalars, the set V1 × V2 may be viewed as a vector space (the product vector space) with the operations:
(v1, v2) + (w1, w2) = (v1 + w1, v2 + w2) (the addition on place i is the addition of the vector space Vi),
α(v1, v2) = (αv1, αv2) (the multiplication by a scalar on place i is the multiplication by a scalar of the vector space Vi).
Because of the vector space properties of the structures (V1, K) and (V2, K), the structure (V1 × V2, K) is also a vector space.
"⇒" Assume U(·) is linear and choose two vectors w1, w2 ∈ GU(·). Then there are vectors v1, v2 ∈ V1 such that w1 = (v1, U(v1)) and w2 = (v2, U(v2)). Then
w1 + w2 = (v1, U(v1)) + (v2, U(v2)) = (v1 + v2, U(v1) + U(v2)) = (by additivity) (v1 + v2, U(v1 + v2)) ∈ GU(·).
Also, for α ∈ K,
αw1 = α(v1, U(v1)) = (αv1, αU(v1)) = (by homogeneity) (αv1, U(αv1)) ∈ GU(·).
"⇐" Assume that GU(·) is a vector subspace in (V1 × V2, K) and consider two vectors v1, v2 ∈ V1.
Then all the pairs (v1, U(v1)), (v2, U(v2)), (v1 + v2, U(v1 + v2)) belong to GU(·), which is a subspace, so that (v1, U(v1)) + (v2, U(v2)) = (v1 + v2, U(v1) + U(v2)) ∈ GU(·).
Since the set GU(·) is the graph of a function,
(v1 + v2, U(v1) + U(v2)) ∈ GU(·) and (v1 + v2, U(v1 + v2)) ∈ GU(·) ⇒ U(v1) + U(v2) = U(v1 + v2)
[because otherwise there would be an element v1 + v2 for which the function would have two images, which contradicts the definition of a function, see 7.2.16, page 205].
In a similar way, if (v1, U(v1)) ∈ GU(·), then also α(v1, U(v1)) = (αv1, αU(v1)) ∈ GU(·), and by the same argument as above we get U(αv1) = αU(v1).


(3) When U(·) is linear:
(a) U(0_V1) = U(x − x) = U(x) − U(x) = 0_V2.
(b) Consider α, β ∈ K, y1, y2 ∈ U(V′1) ⇒ ∃x1, x2 ∈ V′1 such that U(xi) = yi;
V′1 subspace ⇒ αx1 + βx2 ∈ V′1, and U(·) linear ⇒ U(αx1 + βx2) = αU(x1) + βU(x2) = αy1 + βy2 ∈ U(V′1).
(c) Consider α, β ∈ K, x1, x2 ∈ U⁻¹(V′2) ⇒ U(x1), U(x2) ∈ V′2;
V′2 subspace ⇒ V′2 ∋ αU(x1) + βU(x2) = U(αx1 + βx2) ⇒ αx1 + βx2 ∈ U⁻¹(V′2).

2.2.2. Definition. The codomain subspace U(V1) is called the image of the linear transformation and is denoted by Im U(·); its dimension is also called the rank of the linear transformation and is denoted by dim Im U(·) = rank U(·).
2.2.3. Definition. The domain subspace U⁻¹({0_V2}) is called the kernel of the linear transformation and is denoted by ker U(·); its dimension is also called the nullity of the linear transformation and is denoted by dim ker U(·) = null U(·).
2.2.4. Theorem (The rank-nullity theorem for linear transformations). Consider two vector spaces (V1, K) and (V2, K), with V1 of finite type. For a linear transformation U(·) : V1 → V2 we have:
dim V1 = dim(ker U(·)) + dim(Im U(·)).


Proof. Consider a basis {w1 = U(u1), ..., wk = U(uk)} ⊆ Im U(·) ⊆ V2 and the attached vector set B1 = {u1, ..., uk} ⊆ V1.
Consider a basis B2 = {v1, ..., vp} ⊆ ker U(·) ⊆ V1.
Then the set B = B1 ∪ B2 is a basis for V1:
[observe that B1 ∩ B2 = ∅, since v ∈ B1 ∩ B2 ⇒ U(v) = 0, which cannot hold for a vector whose image is a basis vector].
Consider x ∈ V1 ⇒ U(x) ∈ Im U(·) ⇒ ∃α1, ..., αk ∈ K such that U(x) = Σ_{i=1}^k αi wi = Σ_{i=1}^k αi U(ui) = U(Σ_{i=1}^k αi ui).
Denote u = Σ_{i=1}^k αi ui ∈ V1 and consider the decomposition x = (x − u) + u.
Then U(x − u) = U(x) − U(u) = U(x) − U(x) = 0, so that x − u ∈ ker U(·) ⇒ ∃β1, ..., βp ∈ K such that x − u = Σ_{j=1}^p βj vj.
We get that x = Σ_{j=1}^p βj vj + Σ_{i=1}^k αi ui, so that V1 = span B.
Consider a second linear combination x = Σ_{j=1}^p β′j vj + Σ_{i=1}^k α′i ui; then
Σ_{j=1}^p (βj − β′j) vj = Σ_{i=1}^k (α′i − αi) ui, and by applying U(·) we get [since the left-hand term is a linear combination of kernel elements]:
0 = U(Σ_{j=1}^p (βj − β′j) vj) = Σ_{i=1}^k (α′i − αi) U(ui) = Σ_{i=1}^k (α′i − αi) wi ⇒ [since the set {w1, ..., wk} is a basis] α′i = αi, ∀i = 1, ..., k ⇒
⇒ Σ_{j=1}^p (βj − β′j) vj = 0 ⇒ [since the set B2 is a basis] βj = β′j, ∀j = 1, ..., p,
so that the representation of the vector x is unique and B is a basis for V1.
Because k = dim Im U(·), p = dim ker U(·) and |B| = k + p, we get the result.
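A numerical check of the rank-nullity theorem (an added sketch; the matrix is an arbitrary example with a nontrivial kernel):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # a dependent row, so the kernel is nontrivial
              [0.0, 1.0, 1.0]])

n = A.shape[1]                        # dim V1 (the domain)
rank = np.linalg.matrix_rank(A)       # dim Im U(.)
nullity = n - rank                    # dim ker U(.), by the theorem
print(rank, nullity, rank + nullity == n)   # 2 1 True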
2.2.5. Remark. When between the spaces V1 and V2 there is a bijective linear transformation, the dimensions of the spaces are equal.
Proof. When the linear function U(·) : V1 → V2 is bijective, then ker U(·) = {0} (from injectivity) and Im U(·) = V2 (from surjectivity); from the previous Theorem 2.2.4 we get that
dim V1 = dim(ker U(·)) + dim(Im U(·)) = dim V2.
The next two propositions will show that inversion (when possible) and composition (when allowed) preserve the linearity of the involved functions.
2.2.6. Proposition. Consider two vector spaces (V1, K), (V2, K) over the same field and a bijective linear transformation U(·) : V1 → V2.
Then the inverse function U⁻¹(·) : V2 → V1 is also a linear transformation (the inverse of an invertible linear transformation is linear).


Proof. We know that U(·) satisfies
U(u + v) = U(u) + U(v) and U(αu) = αU(u).
Consider x, y ∈ V2 and U⁻¹(x) = u, U⁻¹(y) = v;
then U(u) = x, U(v) = y and U(u + v) = U(u) + U(v) ⇒ U(u + v) = x + y ⇒ U⁻¹(x + y) = u + v ⇒
⇒ U⁻¹(x + y) = U⁻¹(x) + U⁻¹(y), which means that the inverse function is additive.
For α ∈ K, x ∈ V2, U⁻¹(x) = v ⇒ U(v) = x and
U(αv) = αU(v) ⇒ U(αv) = αx ⇒ αv = U⁻¹(αx) ⇒ αU⁻¹(x) = U⁻¹(αx).

2.2.7. Proposition. Consider three vector spaces over the same field and the linear transformations U1(·) : V1 → V2, U2(·) : V2 → V3.
Then the function U(·) : V1 → V3 defined by U(v) = U2(U1(v)), ∀v ∈ V1, is also linear (composition preserves linearity of functions).
Proof. Let v1, v2 ∈ V1; we have U(v1 + v2) = U2(U1(v1 + v2)) = (by the additivity of U1(·)) U2(U1(v1) + U1(v2)) = (by the additivity of U2(·)) U2(U1(v1)) + U2(U1(v2)) = U(v1) + U(v2).
Homogeneity is proven similarly.
2.2.8. Remark. When the domain and the codomain of the linear transformation are the same, U(·) : V → V, we may talk about the powers of the linear operator U(·):
U²(·) = (U ∘ U)(·), ..., Uⁿ(·) = (U ∘ ... ∘ U)(·) (n times).
The identity on V, IV(·) : V → V, IV(v) = v, is a linear operator.


2.2.9. Remark. When p(x) = Σ_{k=0}^n ak x^k is an arbitrary polynomial (with real or complex coefficients), we may talk about the polynomial operator, which is a new operator (from Proposition 2.2.7) with the form given by p(U(·)) = Σ_{k=0}^n ak U^k(·), where U⁰(·) = IV(·) (the identity operator on V).

2.2.10. Remark. When p(·) and q(·) are two polynomials, the composition of the two attached operator polynomials is commutative: p(U(·)) ∘ q(U(·)) = q(U(·)) ∘ p(U(·)).
This means that, while in general function composition is not commutative, in the special case of polynomial operators of the same operator the composition is commutative. The proof relies on the fact that any two powers of the same operator commute, (U^k ∘ U^j)(·) = (U^j ∘ U^k)(·) = U^(k+j)(·), and is left as an exercise.
2.2.11. Remark. The relation ≅ (Definition 2.0.4) is an equivalence relation between vector spaces over the same field (it is reflexive, symmetric and transitive) (the relation is defined over a set of vector spaces and it establishes equivalence classes).


Proof. Reflexivity follows from the fact that the identity operator IV1(·) : V1 → V1, IV1(v) = v, is linear and bijective, so V1 ≅ V1.
Symmetry follows from Proposition 2.2.6: when V1 ≅ V2, then ∃U(·) : V1 → V2 linear and bijective ⇒ U⁻¹(·) : V2 → V1 is linear and bijective ⇒ V2 ≅ V1.
Transitivity follows from Proposition 2.2.7: when V1 ≅ V2 and V2 ≅ V3, then there are isomorphisms U(·) : V1 → V2 and V(·) : V2 → V3, and the new function (V ∘ U)(·) : V1 → V3 preserves by composition both the linearity and the bijectivity properties, so that V1 ≅ V3.

2.2.12. Remark. The set LK(V1, V2) together with the usual algebraic operations with functions
(U1(·) + U2(·))(x) := U1(x) + U2(x),
(αU1(·))(x) := αU1(x),
has a vector space structure over the field K (in particular the algebraic dual (Definition 2.0.8) is a vector space over the field K).
Proof. (LK(V1, V2), +) has a group structure (from the properties of addition over V2), with neutral element the null operator O(·) : V1 → V2, O(v) ≡ 0.

2.3. Representations of Linear Transformations


2.3.1. Remark. Consider a linear transformation U(·) : V1 → V2 and an ordered basis Bd = (e1, ..., en) for the domain V1.
Then the function U(·) is uniquely determined by its values on the basis, (U(e1), ..., U(en)) (any linear transformation is uniquely determined by its values on a basis of the domain).
The proof relies on the remark that x ∈ V1 ⇒ ∃αi ∈ K, x = Σ_{i=1}^n αi ei ⇒ U(x) = U(Σ_{i=1}^n αi ei) = Σ_{i=1}^n αi U(ei).
Consider now a linear transformation U(·) : V1 → V2 between two finite-type vector spaces over the same field K.
Assume dim V1 = n, dim V2 = m and fix the ordered bases Bd = (e1, ..., en) in V1 and Bc = (f1, ..., fm) in V2.
Consider the representations in the codomain of the images of the vectors of the basis from the domain:

U(ej) = Σ_{i=1}^m aij fi ∈ V2, j = 1, ..., n (vector representation);
[U(ej)]Bc = (a1j, a2j, ..., amj)^T (coordinate representation).
The matrix which has as columns [U(ej)]Bc, namely
[U(·)]^Bd_Bc = [aij]_{i=1,...,m; j=1,...,n} = ( [U(e1)]Bc ... [U(ej)]Bc ... [U(en)]Bc ),
is called the matrix associated with U(·) in the bases Bd and Bc.
Conversely, for each matrix A ∈ M_{m,n}(R) and for each possible choice of ordered bases in both the domain and the codomain there is a unique associated linear transformation, defined by the formula [U(x)]Bc = A[x]Bd. In other words, the function U(·) : Rⁿ → Rᵐ defined by U(x) = Ax is a linear transformation.
When x ∈ V1 with coordinates in Bd given by [x]Bd = (x1, ..., xn)^T, then
U(x) = U(Σ_{j=1}^n xj ej) = Σ_{j=1}^n xj U(ej) = Σ_{j=1}^n Σ_{i=1}^m aij xj fi = Σ_{i=1}^m (Σ_{j=1}^n aij xj) fi,
therefore the representation of U(x) in Bc is
[U(x)]Bc = (Σ_{j=1}^n a1j xj, Σ_{j=1}^n a2j xj, ..., Σ_{j=1}^n amj xj)^T = [aij](x1, x2, ..., xn)^T,
that is,
[U(x)]Bc = [U(·)]^Bd_Bc [x]Bd.
The representation matrix [U(·)]^Bd_Bc is unique because for two matrices A and B the following holds: if ∀x ∈ Rⁿ, Ax = Bx, then A = B.
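A sketch of this construction (added; the operator and the basis order are choices made here): take U = d/dX on R2[X] with the ordered basis (1, X, X²) in both the domain and the codomain; the j-th column of the associated matrix is the coordinate vector of the image of the j-th basis vector.

import numpy as np

# a polynomial c0 + c1 X + c2 X^2 is encoded by its coordinates (c0, c1, c2) in (1, X, X^2)
def deriv(coords):
    # d/dX (c0 + c1 X + c2 X^2) = c1 + 2 c2 X, in the same coordinates
    c0, c1, c2 = coords
    return np.array([c1, 2.0 * c2, 0.0])

# build the matrix column by column from the images of the basis vectors
E = np.eye(3)
M = np.column_stack([deriv(E[:, j]) for j in range(3)])
print(M)                               # [[0. 1. 0.], [0. 0. 2.], [0. 0. 0.]]

p = np.array([3.0, -1.0, 5.0])         # 3 - X + 5X^2
assert np.allclose(M @ p, deriv(p))    # [U(x)]_Bc = [U(.)]^Bd_Bc [x]_Bd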

2.3.2. Proposition. The linear transformation U(·) is bijective if and only if its attached matrix (for a certain choice of bases) is invertible.
Proof. "⇒" When U(·) is bijective, from Remark 2.2.5 the two spaces have equal dimensions (so that the matrix is square) and for each y ∈ V2 the system Ax = y has a unique solution (the uniqueness comes from injectivity while the existence comes from surjectivity).
Suppose by contradiction that the matrix is not invertible; then det A = 0, which means that the columns of the matrix are linearly dependent. Consider a nonzero linear combination giving the null vector; in matrix form, this means a nonzero solution of the system Ax = 0, which contradicts the uniqueness of the null solution. In conclusion the matrix A⁻¹ exists.
"⇐" When the matrix A is invertible then, keeping the same bases for the two spaces, the function V(·) : V2 → V1 defined by V(y) = A⁻¹y is exactly the inverse function of U(·), because
U(V(y)) = U(A⁻¹y) = A(A⁻¹y) = y = 1_V2(y) and V(U(x)) = V(Ax) = A⁻¹(Ax) = x = 1_V1(x),
which means that U(·) is invertible, and so it is bijective.

2.3.3. Remark.
(1) The rank of the linear transformation (Def. 2.2.2, page 73) equals the rank of the matrix representing the linear transformation (for a certain choice of bases in both the domain and the codomain).
(2) A linear transformation is injective if and only if its rank equals the dimension of the domain.
(3) A linear transformation is surjective if and only if its rank equals the dimension of the codomain.

2.3.4. Remark. When the basis in V1 changes from Bd to B′d and the basis in V2 changes from Bc to B′c, the representation of the linear transformation changes in the following way:
[U(x)]Bc = [U(·)]^Bd_Bc [x]Bd;  [U(x)]B′c = [U(·)]^B′d_B′c [x]B′d;
[x]B′d = [M(Bd)]B′d [x]Bd;  [y]B′c = [M(Bc)]B′c [y]Bc;
and so
[U(x)]B′c = [M(Bc)]B′c [U(x)]Bc = [M(Bc)]B′c [U(·)]^Bd_Bc [x]Bd = [M(Bc)]B′c [U(·)]^Bd_Bc ([M(Bd)]B′d)⁻¹ [x]B′d,
which means that [U(·)]^B′d_B′c = [M(Bc)]B′c [U(·)]^Bd_Bc ([M(Bd)]B′d)⁻¹.
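A numerical sketch of the change-of-bases formula (added; the matrices are generic random choices, which are invertible with probability one):

import numpy as np
rng = np.random.default_rng(0)

A = rng.normal(size=(3, 3))    # [U]^Bd_Bc : the old representation
M = rng.normal(size=(3, 3))    # [M(Bd)]_B'd : old -> new coordinates in the domain
N = rng.normal(size=(3, 3))    # [M(Bc)]_B'c : old -> new coordinates in the codomain

A_new = N @ A @ np.linalg.inv(M)   # [U]^B'd_B'c, by the formula above

x_old = rng.normal(size=3)         # [x]_Bd
x_new = M @ x_old                  # [x]_B'd

# the two representations of U(x) agree after changing coordinates in the codomain:
assert np.allclose(A_new @ x_new, N @ (A @ x_old))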

2.3.1. Representations of Linear Functionals. Given a basis Bd = {v1, ..., vn} in (V, K) and a basis Bc = {w} in (K, K), any linear functional f(·) : V → K (which is a particular type of linear transformation) admits a unique representation:
f(v) = f(Σ_{i=1}^n αi vi) = Σ_{i=1}^n αi f(vi),
so that the value of the linear functional at v is uniquely determined by the values of the linear functional at the basis vectors and by the coordinates of the vector. In matrix form:
[f(v)]Bc = ( f(v1) f(v2) ... f(vn) ) (α1, α2, ..., αn)^T = ( f(v1) f(v2) ... f(vn) ) [v]Bd.
Moreover, when for a fixed basis E in V with dim V = n we denote by πi(·) : V → K the i-th coordinate of the vector, then for each i the function πi(·) is a linear functional and the set of linear functionals (πi(·))_{i=1,...,n} is a basis in the dual space V′ [called the dual basis of E in V]. It follows that for finite-type vector spaces the dual space is isomorphic with the space: V ≅ V′.

2.4. The Factor Vector Space


2.4.1. Factor Space attached to a Subspace.
2.4.1. Remark. Consider a vector space (V, K) and V0 a vector subspace. Consider on V the relation
u ~ v (mod V0) :⇔ u − v ∈ V0.
This is an equivalence relation on V.
Proof. Reflexivity: ∀v ∈ V, v ~ v (mod V0), because v − v = 0 ∈ V0.
Symmetry: u ~ v (mod V0) ⇒ u − v ∈ V0 vector subspace ⇒ v − u = −(u − v) ∈ V0 ⇒ v ~ u (mod V0).
Transitivity: u ~ v (mod V0) and v ~ w (mod V0) ⇒ u − v and v − w ∈ V0 ⇒ u − w = (u − v) + (v − w) ∈ V0 ⇒ u ~ w (mod V0).
The equivalence relation "~ (mod V0)" generates on V equivalence classes with respect to V0 (mod V0); they will be denoted by x̂ (mod V0); x̂ is the set of all elements of V which are equivalent with x mod V0:
x̂ (mod V0) = {v ∈ V; x ~ v (mod V0)} = x + V0 = {x + v0; v0 ∈ V0}.
The dimension of an equivalence class is considered by definition to be equal to the dimension of V0.
Two equivalence classes may only be identical or disjoint:
Proof. ∅ ≠ x̂ ∩ ŷ ⇔ ∃z0 ∈ x̂ ∩ ŷ ⇔ ∃v0, u0 ∈ V0, z0 = x + v0 = y + u0.
Let v ∈ x̂ ⇒ ∃v1 ∈ V0, v = x + v1 ⇒ v = y + (u0 − v0 + v1) ∈ ŷ ⇒ x̂ ⊆ ŷ.
Similarly, we also get ŷ ⊆ x̂, so that two equivalence classes which are not disjoint are identical.

The set of all equivalence classes mod V0 is a partition of the set V:
Proof. x ∈ V ⇒ x ∈ x̂ [each element belongs to at least one equivalence class mod V0 and, as two distinct classes are disjoint, x belongs to exactly one class].

2.4.2. Definition. The set of all equivalence classes is called the factor set mod V0; this set is denoted by
V/V0 = {x̂ (mod V0); x ∈ V}
(it is a set of equivalence classes, so it is a set of sets).
2.4.3. Remark. For each fixed x ∈ V, the function τ(·) : V0 → (x̂ (mod V0)) defined by τ(v) = x + v is a bijection.
Proof. Injectivity: let v1, v2 ∈ V0 such that τ(v1) = τ(v2) ⇒ x + v1 = x + v2 ⇒ v1 = v2.
Surjectivity: y ∈ x̂ (mod V0) ⇒ ∃vy ∈ V0, y = x + vy ⇒ τ(vy) = x + vy = y.

2.4.4. Proposition. With the elements of the set V/V0 we may define vector space operations:
Addition mod V0: (x̂ (mod V0)) + (ŷ (mod V0)) := (x + y)^ (mod V0);
Multiplication by a scalar mod V0: α(x̂ (mod V0)) := (αx)^ (mod V0).
Proof. The addition x̂ + ŷ := (x + y)^ is well defined (it doesn't depend on representatives) because, when x̂ = x̂1 and ŷ = ŷ1, then x − x1 and y − y1 ∈ V0 ⇒ (x + y) − (x1 + y1) ∈ V0, so that (x + y)^ = (x1 + y1)^.
Associativity is a consequence of the associativity of the operation on V:
(x̂ + ŷ) + ẑ = (x + y)^ + ẑ = ((x + y) + z)^ = (x + (y + z))^ = x̂ + (y + z)^ = x̂ + (ŷ + ẑ);
the neutral element is 0̂ (= V0) and the opposite vector is −x̂ = (−x)^.
Multiplication by a scalar: αx̂ := (αx)^ is a well defined operation, because when x̂ = x̂1 then x − x1 ∈ V0 and so α(x − x1) ∈ V0, which means (αx)^ = (αx1)^.
Moreover, we have the distributivity properties:
(α + β)x̂ = ((α + β)x)^ = (αx + βx)^ = (αx)^ + (βx)^ = αx̂ + βx̂;
α(x̂ + ŷ) = (α(x + y))^ = (αx + αy)^ = (αx)^ + (αy)^ = αx̂ + αŷ;
α(βx̂) = (α(βx))^ = ((αβ)x)^ = (αβ)x̂;  1x̂ = (1x)^ = x̂.
So (V/V0, K) has a vector space structure (together with the operations between classes defined above).

2.4.5. Remark. The function π(·) : V → (V/V0) given by π(x) = x̂ is a (surjective) vector space morphism and ker π(·) = V0.
Proof. π(x + y) = (x + y)^ = x̂ + ŷ = π(x) + π(y) and π(αx) = (αx)^ = αx̂ = απ(x), so the function is a morphism.
The kernel of the linear transformation is ker π(·) = {x ∈ V; π(x) = 0̂ ∈ V/V0} = V0:
x0 ∈ V0 ⇒ π(x0) = x̂0 = 0̂ ⇒ V0 ⊆ ker π(·);
π(x) = 0̂ ⇒ x̂ = 0̂ ⇒ x ∈ V0.

2.4.6. Definition. The vector space V/V0 is called the factor space of V with respect to V0.
2.4.7. Theorem. (The dimension of the factor space) Consider a finite-type vector space (V, K) and a subspace V0. Then
dim(V/V0) = dim V − dim V0.
Proof. Choose a basis x1, ..., xk in V0 and complete it up to a basis x1, ..., xk, y1, ..., yr in V.
Consider the set ŷ1, ..., ŷr in V/V0;
because x1, ..., xk, y1, ..., yr is a basis in V, we have: ∀v ∈ V, ∃αi, βj ∈ K such that v = Σ_{i=1}^k αi xi + Σ_{j=1}^r βj yj, where Σ_{i=1}^k αi xi ∈ V0, which means v̂ = Σ_{j=1}^r βj ŷj, and so ŷ1, ..., ŷr generates V/V0.
Consider Σ_{j=1}^r βj ŷj = 0̂ ⇒ v0 := Σ_{j=1}^r βj yj ∈ V0;
if 0 ≠ v0 ∈ V0, since v0 may also be represented as a linear combination of the xi, it would follow that the set x1, ..., xk, y1, ..., yr, while a basis, would admit a null linear combination with nonzero scalars, which is a contradiction;
so we get that v0 = 0 and, as y1, ..., yr are linearly independent in V, we get that all the scalars βj are zero.
This means that ŷ1, ..., ŷr is a basis for the factor space V/V0 and dim V/V0 = r, which means dim V/V0 = dim V − dim V0.

2.5. The Isomorphisms Theorems


2.5.1. Theorem. (The First Isomorphism Theorem) Consider two vector spaces over the same field (V1, K) and (V2, K) and a linear transformation U(·) : V1 → V2. Then the vector spaces V1/ker U(·) and Im U(·) are isomorphic.
Proof. Define the function Ũ(·) : V1/ker U(·) → Im U(·) by Ũ(x̂) = U(x);
Ũ(·) is well defined (the definition doesn't depend on the representatives) because when x̂ = ŷ then x − y ∈ ker U(·), which means U(x − y) = 0 ⇔ U(x) = U(y).
Ũ(·) is a morphism:
additivity: Ũ(x̂1 + x̂2) = Ũ((x1 + x2)^) = U(x1 + x2) = U(x1) + U(x2) = Ũ(x̂1) + Ũ(x̂2);
homogeneity: Ũ(αx̂) = Ũ((αx)^) = U(αx) = αU(x) = αŨ(x̂).
Surjectivity of Ũ(·): y ∈ Im U(·) ⇒ ∃xy ∈ V1, U(xy) = y ⇒ Ũ(x̂y) = y.
Injectivity of Ũ(·): Ũ(x̂1) = Ũ(x̂2) ⇒ U(x1) = U(x2) ⇒ x1 − x2 ∈ ker U(·) ⇒ x̂1 = x̂2.
So Ũ(·) is a vector space isomorphism.

2.5.1. Projection Operators.
2.5.2. Definition. Consider a vector space (V, K) and two vector subspaces V1 and V2 such that V = V1 ⊕ V2. Then, for each x ∈ V, ∃!x1 ∈ V1 and ∃!x2 ∈ V2 such that x = x1 + x2.
The vector x1 is called the projection of x on V1 in the direction V2.
2.5.3. Definition. A linear transformation p(·) : V → V is called a projection when p(p(x)) = p(x), ∀x ∈ V [the idempotency property].
2.5.4. Theorem. For a projection p(·) we have: V = p(V) ⊕ ker p(·).
Proof. Let v ∈ V ⇒ v = p(v) + (v − p(v)), with p(v) ∈ p(V) and
p(v − p(v)) = p(v) − p²(v) = p(v) − p(v) = 0, so v − p(v) ∈ ker(p(·)) and V = p(V) + ker p(·).
The sum is direct because v ∈ p(V) ∩ ker p(·) ⇒ 0 = p(v) and ∃u, v = p(u) ⇒ 0 = p(v) = p²(u) = p(u) = v, so p(V) ∩ ker p(·) = {0}.
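A numerical sketch of the theorem (added; the projector below is the orthogonal projector onto a column space, one particular choice of projection):

import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
# orthogonal projector onto the column space of B
P = B @ np.linalg.inv(B.T @ B) @ B.T

assert np.allclose(P @ P, P)     # idempotency: p(p(x)) = p(x)

v = np.array([3.0, -1.0, 4.0])
u, w = P @ v, v - P @ v          # u in p(V), w in ker p(.)
assert np.allclose(P @ w, 0.0)   # w is killed by p
assert np.allclose(u + w, v)     # the decomposition V = p(V) (+) ker p(.)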
2.5.5. Proposition. If p(·) : V → V is a projection, then (1V − p)(·) : V → V is also a projection.
Moreover, ker(1V − p)(·) = Im p(·) and Im(1V − p)(·) = ker p(·).
Proof. Idempotency: (1V − p)((1V − p)(x)) = (x − p(x)) − p(x − p(x)) = x − p(x) − p(x) + p(p(x)) = x − p(x) = (1V − p)(x), so (1V − p)(·) is a projection.
x ∈ ker(1V − p)(·) ⇒ x − p(x) = 0 ⇒ x = p(x) ∈ Im p(·).
x ∈ Im p(·) ⇒ ∃y, x = p(y) ⇒ p(x) = p(p(y)) = p(y) = x ⇒ x − p(x) = 0 ⇒ x ∈ ker(1V − p)(·).
x ∈ Im(1V − p)(·) ⇒ ∃y, x = (1V − p)(y) = y − p(y) ⇒ p(x) = p(y) − p(p(y)) = p(y) − p(y) = 0 ⇒ x ∈ ker p(·).
x ∈ ker p(·) ⇒ p(x) = 0 ⇒ x = x − p(x) = (1V − p)(x) ⇒ x ∈ Im(1V − p)(·).

2.5.6. Proposition. Consider a vector space (V, K) and a subspace V0. Any complement of V0 in V (i.e. a subspace V1 such that V0 ⊕ V1 = V) is isomorphic with V/V0; moreover, any two complements are isomorphic.
2.5.2. Other Isomorphism Theorems.
2.5.7. Theorem. (The Second Isomorphism Theorem) Consider a vector space (V, K) and V1 and V2 subspaces in V. Then the vector spaces V1/(V1 ∩ V2) and (V1 + V2)/V2 are isomorphic.
Proof. Define the function φ(·) : V1 → (V1 + V2)/V2 by φ(x) = x̂ = x + V2.
The function is a morphism:
φ(x + y) = (x + y)^ = x̂ + ŷ = φ(x) + φ(y), and φ(αx) = (αx)^ = αx̂ = αφ(x).
Injectivity: x ∈ ker φ(·) ⇔ (x ∈ V1 and x̂ = 0̂ ⇔ x ∈ V2), so that ker φ(·) = V1 ∩ V2.
Surjectivity: consider y ∈ (V1 + V2)/V2 ⇒ ∃x1 ∈ V1, ∃x2 ∈ V2, y = x1 + x2 + V2 = x1 + V2 = x̂1 ⇒ φ(x1) = y ⇒ the function is surjective.
From Theorem 2.5.1, page 81, it follows that the spaces V1/ker φ(·) and Im φ(·) are isomorphic, which means that the vector spaces V1/(V1 ∩ V2) and (V1 + V2)/V2 are isomorphic.
2.5.8. Corollary. Consider a vector space (V, K) and two subspaces V1 and V2. Then
dim V1 + dim V2 = dim(V1 + V2) + dim(V1 ∩ V2).
Proof. From the previous theorem we know that V1/(V1 ∩ V2) and (V1 + V2)/V2 are isomorphic, so they have the same dimension. Then
dim(V1/(V1 ∩ V2)) = dim((V1 + V2)/V2)
and from Theorem 2.4.7, page 81, we know that
dim(V1/(V1 ∩ V2)) = dim V1 − dim(V1 ∩ V2)
and
dim((V1 + V2)/V2) = dim(V1 + V2) − dim V2,
so we get
dim V1 − dim(V1 ∩ V2) = dim(V1 + V2) − dim V2.
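A numerical check of the corollary (added; the generating sets are arbitrary choices): dim(V1 + V2) is the rank of the stacked generators, and dim(V1 ∩ V2) is then recovered from the formula.

import numpy as np

# the columns generate the subspaces V1, V2 of R^4
V1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0],
               [0.0, 0.0]])
V2 = np.array([[1.0, 0.0],
               [0.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0]])

d1 = np.linalg.matrix_rank(V1)                       # dim V1 = 2
d2 = np.linalg.matrix_rank(V2)                       # dim V2 = 2
d_sum = np.linalg.matrix_rank(np.hstack([V1, V2]))   # dim (V1 + V2) = 3
d_cap = d1 + d2 - d_sum                              # dim (V1 ∩ V2) = 1, from the corollary
print(d1, d2, d_sum, d_cap)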

2.5.9. Theorem. (The Third Isomorphism Theorem) Consider a vector space (V, K) and V1 and V2 subspaces in V such that V ⊇ V1 ⊇ V2.
Then the vector spaces ((V/V2)/(V1/V2)) and V/V1 are isomorphic.
Proof. Denote by x̂ = x + V2 the class of x with respect to V2 (that is, an element of V/V2) and by x̃ = x + V1 the class of x with respect to V1 (that is, an element of V/V1);
define the function φ(·) : (V/V2) → (V/V1) by φ(x̂) = x̃.
The function φ(·) is well defined, because x̂1 = x̂2 ⇒ x1 − x2 ∈ V2 ⇒ x1 − x2 ∈ V1 ⇒ x̃1 = x̃2.
The function φ(·) is a morphism because φ(x̂1 + x̂2) = φ((x1 + x2)^) = (x1 + x2)~ = x̃1 + x̃2 = φ(x̂1) + φ(x̂2) and φ(αx̂) = φ((αx)^) = (αx)~ = αx̃ = αφ(x̂).
Im φ(·) = V/V1 (the function is surjective), while its kernel is V1/V2:
x̂ ∈ ker φ(·) ⇔ φ(x̂) = 0̃ ⇔ x̃ = 0̃ ⇔ x ∈ V1, so x̂ ∈ ker φ(·) ⇔ (x̂ = x + V2 and x ∈ V1), which means x̂ ∈ V1/V2 ⊆ V/V2.
Apply Theorem 2.5.1, page 81, to obtain that (V/V2)/ker φ(·) and Im φ(·) are isomorphic, which means that ((V/V2)/(V1/V2)) and V/V1 are isomorphic.


2.5.10. Corollary. Consider two nitetype vector spaces (V1 ; K), (V2 ; K) and U ( ) : V1 ! V2 a morphism between them. Then:
(1) dim V1 = dim (ker U ( )) + dim (Im U ( ));
(2) U ( ) injective , dim V1 = dim (Im U ( ));
(3) U ( ) surjective , dim V2 = dim (Im U ( )).
Proof. 1. Use the Theorem 2.5.1 and the Theorem 2.4.7.
The vector spaces V1 = ker U ( ) and Im U ( ) are isomorphic, so that their dimensions are equal: dim (V1 = ker U
dim (Im U ( )).
Because dim (V1 = ker U ( )) = dim (V1 )
dim (V1 )

dim (ker U ( )), we get:

dim (ker U ( )) = dim (Im U ( )) ) dim (V1 ) = dim (ker U ( )) + dim (Im U ( )).

2. U ( ) injective , ker U ( ) = f0g

()

dim (ker U ( )) = 0 and from 1.

() dim (V1 ) =

dim (Im U ( )).


3. U ( ) surjective , Im U ( ) = V2 () dim V2 = dim (Im U ( )) (the last equivalence takes place
because Im U ( ) is a subspace of V2 ; while their dimensions are equal, because the spaces are of nitetype,
they are equal as sets).
2.5.11. Theorem. (Sard's Quotient Theorem, finite-dimensional form) Consider three vector spaces X, Y, Z over the same field K and the linear transformations S(·) : X → Y and T(·) : X → Z.
If S(·) is surjective and ker S(·) ⊆ ker T(·), then there is a unique linear transformation R(·) : Y → Z such that T = R ∘ S.
Proof. Consider the linear transformations Ŝ(·) : X/ker S(·) → Y and T̃(·) : X/ker T(·) → Z, defined by
Ŝ(x̂) = S(x) and T̃(x̃) = T(x),
where x̂ denotes the class of x mod ker S(·) and x̃ the class of x mod ker T(·).
The functions Ŝ(·) and T̃(·) are well defined (Exercise!).
The functions Ŝ(·) and T̃(·) are linear transformations (Exercise!).
The functions Ŝ(·) and T̃(·) are injective (Exercise!).
Ŝ(·) is surjective (because S(·) is surjective) (Exercise!).
⇒ Ŝ(·) is bijective ⇒ ∃Ŝ⁻¹(·) : Y → X/ker S(·), an isomorphism.
Consider the function P(·) : X/ker S(·) → X/ker T(·), defined by P(x̂) = x̃.
P(·) is well defined (Exercise!).
P(·) is a linear transformation (Exercise!).
Define R(·) : Y → Z by R(·) = (T̃ ∘ P ∘ Ŝ⁻¹)(·). Then
(R ∘ S)(x) = R(S(x)) = (T̃ ∘ P ∘ Ŝ⁻¹)(S(x)) = T̃(P(x̂)) = T̃(x̃) = T(x).

2.5.12. Remark. Conversely, if X, Y, Z are vector spaces over the same field of scalars K and S(·) : X → Y and T(·) : X → Z are linear transformations such that there is another linear transformation R(·) : Y → Z with T = R ∘ S, then it follows that ker S(·) ⊆ ker T(·) (Exercise!).


2.6. Introduction to Spectral Theory


2.6.1. Definition. Consider a vector space (V, K), a subspace V0 ⊆ V and a linear operator U(·) : V → V.
The subspace V0 is called invariant with respect to U(·) (or under U(·), or U(·)-invariant) if
U(V0) ⊆ V0.

2.6.2. Remark. When a linear operator over an n-dimensional vector space has an m-dimensional invariant subspace, the matrix corresponding to a basis in which the first m vectors form a basis for the invariant subspace has the form
[ A11  A12 ]
[  0   A22 ] ∈ M_{n,n},  with A11 ∈ M_{m,m}, A22 ∈ M_{n-m,n-m} and A21 = 0 ∈ M_{n-m,m}.
[When the basis is such that the sub-basis is on the last places, then the matrix will have A12 = 0.]
Proof.¹ Consider a vector space (V, K) with dimension dimK V = n and V0 a subspace in V, with dimK V0 = m.
Choose a basis of V such that its first m vectors are a basis for V0:
start with the set B0 = {v1, v2, ..., vm}, which is a basis for V0, and consider a vector v_{m+1} ∈ V \ V0.
Then the set B0 ∪ {v_{m+1}} = {v1, v2, ..., vm, v_{m+1}} is a linearly independent set (by contradiction: if the set were linearly dependent, then v_{m+1} would be a linear combination of B0, a contradiction with v_{m+1} ∈ V \ V0).
Repeat the procedure a finite number of times (n − m times) to get a basis B in V for which the first m vectors are a basis for V0.
We get the basis B = {v1, v2, ..., vm, v_{m+1}, ..., vn} = B0 ∪ {v_{m+1}, ..., vn} for V, in which B0 = {v1, v2, ..., vm} is a basis for V0.
In this basis, the vectors of the subspace are represented as columns with zeros on the last n − m places.
Consider a linear operator U(·) : V → V for which U(V0) ⊆ V0 and let the basis B, with the properties above, be fixed in V, considered both as the domain and as the codomain.
The matrix which represents U(·) is the matrix which has as columns [U(vj)]B:
U(v1), U(v2), ..., U(vm), U(v_{m+1}), ..., U(vn) are the images by the linear operator of the basis vectors and they are represented in the codomain with respect to the same basis.
Since V0 is invariant with respect to U(·), the images of the set B0 remain in V0:
{U(v1), U(v2), ..., U(vm)} ⊆ V0, which means that the representation of each of them is a column with zeros on the last n − m positions:
U(vj) = Σ_{i=1}^m aij vi,  [U(vj)]B = (a1j, a2j, ..., amj, 0, ..., 0)^T, ∀j = 1, ..., m.
The matrix of U(·) therefore has the form
[ a11 ... a1m  a1(m+1) ... a1n ]
[ ...     ...  ...         ... ]
[ am1 ... amm  am(m+1) ... amn ]
[ 0   ... 0    a(m+1)(m+1) ... a(m+1)n ]
[ ...     ...  ...         ... ]
[ 0   ... 0    an(m+1) ... ann ],
which is of the form
[ A11 (m×m)      A12 (m×(n−m))     ]
[ 0 ((n−m)×m)    A22 ((n−m)×(n−m)) ].
¹Proof completed after a question from Alexandru Grădinaru, 2013-2014.

2.6.3. Remark. When the space may be represented as a direct sum of U(·)-invariant subspaces, then the matrix representation of U(·) in a suitable basis (a basis obtained as a union of bases of the invariant subspaces) has the form
[ A11  0   ...  0  ]
[ 0   A22  ...  0  ]
[ ...      ...     ]
[ 0    0   ... Akk ],
which is called a pseudodiagonal form.
Proof. For two subspaces, consider V = V1 ⊕ V2 with U(V1) ⊆ V1 and U(V2) ⊆ V2, and consider a basis attached to this direct sum:
f1, ..., f_{k1}, g1, ..., g_{k2};
then the last k2 coordinates of U(fj) are zero (because the U(fj) are representable only on V1) while the first k1 coordinates of U(gj) are zero (because the U(gj) are representable only on V2).
We get the pseudodiagonal matrix form
[ A11  0  ]
[ 0   A22 ].
For more invariant subspaces the pseudodiagonal form is obtained by induction.
2.6.4. Remark ([42]). When V0 is U(·)-invariant, it is possible for a complement of V0 not to be U(·)-invariant; moreover, it is possible that no complement of V0 is U(·)-invariant.
2.6.5. Definition. The U(·)-invariant one-dimensional subspaces are also called U(·)-invariant directions.
Any nonzero vector of a U(·)-invariant direction is called an eigenvector.
2.6.6. Remark. The vectors x and U(x) are collinear if and only if there is a scalar λ such that
U(x) = λx.
The scalar λ is called the eigenvalue of the eigenvector.

2.6.7. Remark. The eigenvalue corresponding to an eigenvector is unique.
Proof. When U(x) = λ1x and U(x) = λ2x with λ1 ≠ λ2, then (λ1 − λ2)x = 0 ⇒ x = 0, which is a contradiction with the requirement that an eigenvector be nonzero.
2.6.8. Remark. When an eigenvector x corresponds to an eigenvalue λ, any vector αx with α ≠ 0 is another eigenvector corresponding to the same eigenvalue.
Proof. When U(x) = λx, by multiplying by α we get αU(x) = αλx ⇒ U(αx) = λ(αx), so that the vector αx is also an eigenvector for the eigenvalue λ.


2.6.9. Remark. Consider a set of eigenvectors, each of them corresponding to the same linear operator but to a different eigenvalue.
Then the set is linearly independent.
Proof. Consider the linear operator U(·) and v1, ..., vm eigenvectors corresponding to the distinct eigenvalues λ1, ..., λm.
By induction: for m = 1, v1 ≠ 0, so that the set {v1} is linearly independent.
For m = 2, suppose α1v1 + α2v2 = 0; then by applying the operator we get
0 = U(0) = U(α1v1 + α2v2) = α1U(v1) + α2U(v2) = α1λ1v1 + α2λ2v2.
Multiplying the first equality by λ2 and subtracting it from the second, we get α1(λ1 − λ2)v1 = 0; since λ1 ≠ λ2 and v1 ≠ 0, it follows that α1 = 0; then α2v2 = 0 and v2 ≠ 0 give α2 = 0, so the set {v1, v2} is linearly independent.
Let the statement be true for m − 1 eigenvectors corresponding to distinct eigenvalues; then from
α1v1 + ... + αmvm = 0, by applying U(·), we get α1λ1v1 + ... + αmλmvm = 0.
Multiply the first equality by λm and subtract the two equalities, to get
α1(λ1 − λm)v1 + ... + α_{m−1}(λ_{m−1} − λm)v_{m−1} = 0; since this is a relation involving m − 1 eigenvectors corresponding to distinct eigenvalues, we may apply the induction hypothesis to get that ∀j = 1, ..., m − 1, αj(λj − λm) = 0 ⇒ αj = 0; then αmvm = 0 and vm ≠ 0 give αm = 0.

2.6.10. Remark. Any linear operator over a finite-type vector space has a number of distinct eigenvalues at most equal to the dimension of the domain.
Proof. For each distinct eigenvalue we have at least one eigenvector, and since their set is linearly independent, the number of vectors in the set cannot exceed the dimension of the vector space.
2.6.11. Remark. Several eigenvectors may correspond to the same eigenvalue and their set may still be linearly independent.
2.6.12. Remark. Consider a fixed eigenvalue λ of the linear operator U(·). The set of all eigenvectors attached to λ (the eigenset) V_λ = {v ∈ V; U(v) = λv} is a vector subspace.
Proof. U(x1) = λx1 and U(x2) = λx2 ⇒ U(α1x1 + α2x2) = α1U(x1) + α2U(x2) = λ(α1x1 + α2x2), so that any linear combination of eigenvectors attached to λ remains in V_λ.


2.6.13. Definition. The dimension dim V_λ of the eigenset is called the geometric multiplicity of λ.
2.6.14. Remark. The eigenset V_λ is the kernel of the linear operator N(·) : V → V, N(v) = U(v) − λv.
Proof. V_λ = {v ∈ V; U(v) = λv} = {v ∈ V; U(v) − λv = 0} = {v ∈ V; N(v) = 0} = N⁻¹({0}) = ker N(·).

2.6.15. Remark. For the linear operator U(·) : V → V consider the same basis B both in the domain and the codomain.
When we write the relation U(v) = λv in matrix form in terms of the B representation, in which the linear operator is represented by the matrix A, we get
A[v] = λ[v] ⇒ A[v] = λIn[v] ⇒ (A − λIn)[v] = 0,
which is a homogeneous system with the coordinates of the vector v as unknowns and the scalar λ as parameter. The necessary and sufficient condition for the system to admit nonzero solutions is that the determinant of the system matrix be zero, that is,
det(A − λIn) = 0.
2.6.16. Definition. 1. The equation det(A − λIn) = 0 (in the unknown λ) is called the characteristic equation of the linear operator U(·) / matrix A.
2. The polynomial λ ↦ det(A − λIn) is called the characteristic polynomial of the linear operator U(·) / matrix A (it is a polynomial with degree equal to the dimension of the vector space).
3. The roots of the characteristic polynomial are called the eigenvalues of the linear operator / matrix.
4. The multiplicity of an eigenvalue as a root of the characteristic polynomial is called the algebraic multiplicity of the eigenvalue.
5. The set of all distinct roots of the characteristic polynomial is called the spectrum of the linear operator and is denoted by σ(A) or σ(U(·)).

2.6.17. Remark. An equivalent form is det(λIn − A) = (−1)ⁿ det(A − λIn).
2.6.18. Remark. The polynomial λ ↦ det(A − λI) has the following structure:
det(A − λI) = (−1)ⁿλⁿ + (−1)^(n−1) Tr(A) λ^(n−1) + ... + det(A).
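A numerical check of this structure (added; note that numpy's np.poly returns the coefficients of the monic polynomial det(λIn − A) = (−1)ⁿ det(A − λIn), so the trace appears with a minus sign):

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(A)    # det(lambda*I - A) = lambda^2 - Tr(A)*lambda + det(A) for n = 2
print(coeffs)          # [ 1. -5.  6.]
assert np.isclose(coeffs[1], -np.trace(A))
assert np.isclose(coeffs[-1], np.linalg.det(A))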

2.6.19. Remark. The characteristic polynomial does not depend on the basis in which the linear operator is represented (the basis in which the linear operator is represented by the matrix A).
Proof. When a linear operator is represented in two different bases by matrices A and B, these matrices are connected by the formula B = T⁻¹AT, where T is the change-of-bases matrix; we have
det(T⁻¹AT − λI) = det(T⁻¹AT − λT⁻¹T) = det(T⁻¹(A − λI)T) = det T⁻¹ · det(A − λI) · det T = det(A − λI).

2.6.20. Example. 1. When the matrix representing the linear operator is diagonal, A = diag(d1, ..., dn), the characteristic polynomial is λ ↦ Π_{i=1}^n (di − λ).
2. When the matrix representing the linear operator is upper triangular (which means that the elements below the main diagonal are zero, i.e. aij = 0 ∀i, j with i > j), then the characteristic polynomial is λ ↦ Π_{i=1}^n (aii − λ) (the aii are the elements of the main diagonal).
3. When the matrix representing the linear operator has a pseudodiagonal form (Remark 2.6.3),
A = [ A11 0 ... 0 ; 0 A22 ... 0 ; ... ; 0 0 ... App ], with Aii ∈ M_{ki,ki} and Σ_{i=1}^p ki = n, then
det(A − λIn) = Π_{i=1}^p det(Aii − λI_{ki})
(with the identity matrices of the corresponding dimensions) (the characteristic polynomial is the product of the characteristic polynomials corresponding to the linear operators attached to the submatrices of the pseudodiagonal).

2.6.21. Remark. We consider mainly linear operators over the field of scalars R. Solving the characteristic equation may lead to the following possible situations:
(1) All the roots belong to R and they are distinct.
In this case, there is a basis of eigenvectors corresponding to the eigenvalues, all the eigensets have dimension 1, and the matrix of the linear operator in this basis (for both the domain and the codomain) has diagonal form, with the eigenvalues on the main diagonal, in the order given by the order of the eigenvectors in the basis.
(2) All the roots belong to R but some of them are not distinct.
In this case, for a given eigenvalue with (the algebraic) multiplicity greater than 1, the question is whether the geometric multiplicity of the eigenvalue equals the algebraic multiplicity of the eigenvalue. There are two subcases:
(a) For each eigenvalue, the geometric and the algebraic multiplicities are equal.
(b) For at least one eigenvalue, the geometric and the algebraic multiplicities are not equal.
(3) Some of the eigenvalues are not from R (they are complex, from C).
Even if the linear operator is represented by a real matrix, some of the eigenvalues may be complex, and in this case their associated eigenvectors will also have complex coordinates.
To handle this difficulty we will follow an indirect approach: even if the linear operator is initially considered over the vector space (Rⁿ, R), it will be considered from the beginning to be defined over the complex vector space (Cⁿ, C); in this context the (complex) Jordan canonical form will be obtained, after which a certain decomplexification procedure will be applied to obtain the real counterpart of the Jordan canonical form.
Consider a finite-dimensional complex vector space (V, C) and a linear operator U(·) : V → V, represented by the matrix A in a certain fixed basis, with the characteristic polynomial det(A − λIn) = Π_{i=1}^k (λi − λ)^{ni} (where some of the eigenvalues may be complex numbers).


2.6.22. Definition. The linear operator U(·) : V → V is called diagonalizable when there is a basis of the vector space V in which the attached matrix is diagonal.
2.6.23. Remark. The linear operator is diagonalizable ⇔ ∃P an invertible matrix (which is the change-of-basis matrix from the old basis to the new basis) such that the matrix P⁻¹AP is diagonal.
2.6.24. Theorem. The linear operator U(·) : V → V is diagonalizable ⇔ there is a basis in V consisting only of eigenvectors of U(·).
Proof. "⇒"
Suppose that the representation of U(·) in the basis B = {v1, ..., vn} (both in the domain and in the codomain) is denoted by D and is a diagonal matrix:
D = diag(d1, ..., dn).
In coordinates, [vk]B = (δ1k, ..., δnk)^T (the column k of the identity matrix, with 1 on line k and 0 on the rest of the lines); then
[U(vk)]B = D[vk]B = dk[vk]B,
so that [U(vk)]B = dk[vk]B, which means that vk is an eigenvector corresponding to the eigenvalue dk.
This means that each vector of the basis is an eigenvector, while the elements of the main diagonal of the matrix D are eigenvalues of U(·).
"⇐"
When the basis B = {v1, ..., vn} has only vectors which are eigenvectors of U(·), then for each k = 1, ..., n, U(vk) = λkvk, and because B is a basis we may write this in matrix form:
[vk]B = (δ1k, ..., δnk)^T and [U(vk)]B = (λkδ1k, ..., λkδnk)^T.
For an arbitrary vector v = Σ_{k=1}^n αkvk (represented in the basis B), [v]B = (α1, ..., αn)^T and
U(v) = U(Σ_{k=1}^n αkvk) = Σ_{k=1}^n αkU(vk) = Σ_{k=1}^n αkλkvk,
so that
[U(v)]B = (λ1α1, ..., λnαn)^T = diag(λ1, ..., λn)[v]B,
which means that the matrix attached to U(·) in the basis B is diagonal.
In other words, if we may find a basis of eigenvectors, then in this basis the linear operator has a diagonal matrix with the eigenvalues on the diagonal.

2.6.25. Example. The linear operator represented in the standard basis by the matrix
A = [  4  0  1 ]
    [ -1 -6 -2 ]
    [  5  0  0 ]
has:
the characteristic polynomial: λ ↦ det(A − λI3) = -λ³ - 2λ² + 29λ + 30;
the characteristic equation: -λ³ - 2λ² + 29λ + 30 = 0;
the roots of the characteristic equation (the eigenvalues): λ1 = 5, λ2 = -6, λ3 = -1;
the eigenvectors attached to the eigenvalue λ1 = 5 are the nonzero solutions of the system (A − 5I3)(α1, α2, α3)^T = 0, which means
-α1 + α3 = 0, -α1 - 11α2 - 2α3 = 0, 5α1 - 5α3 = 0 ⇔ α3 = α1, α2 = -(3/11)α1,
so V_{λ1} = {α(1, -3/11, 1)^T; α ∈ R};
the eigenvectors attached to the eigenvalue λ2 = -6 are V_{λ2} = {α(0, 1, 0)^T; α ∈ R};
the eigenvectors attached to the eigenvalue λ3 = -1 are V_{λ3} = {α(-1/5, -9/25, 1)^T; α ∈ R}.
We know from 2.6.9 that eigenvectors corresponding to distinct eigenvalues are linearly independent, so that if we choose an eigenvector for each eigenvalue we get 3 eigenvectors which form a linearly independent set; as it is also maximal (the embedding space has dimension 3), it is a basis.
If the basis is {(1, -3/11, 1)^T, (0, 1, 0)^T, (-1/5, -9/25, 1)^T}, the change-of-basis matrix is (the matrix with the vectors as columns)
T = [ 1 0 -1/5 ; -3/11 1 -9/25 ; 1 0 1 ], its inverse is T⁻¹ = [ 5/6 0 1/6 ; -4/55 1 19/55 ; -5/6 0 5/6 ], while
T⁻¹AT = [ 5 0 0 ; 0 -6 0 ; 0 0 -1 ],
which is a diagonal form.
When the eigenvectors are placed in a different order, B = {(-1/5, -9/25, 1)^T, (1, -3/11, 1)^T, (0, 1, 0)^T}, then
T = [ -1/5 1 0 ; -9/25 -3/11 1 ; 1 1 0 ], T⁻¹ = [ -5/6 0 5/6 ; 5/6 0 1/6 ; -4/55 1 19/55 ] and
T⁻¹AT = [ -1 0 0 ; 0 5 0 ; 0 0 -6 ]
(also a diagonal form, but on the diagonal the order of the eigenvalues is changed).
If we choose different eigenvectors, B = {(11, -3, 11)^T, (0, 5, 0)^T, (-5, -9, 25)^T}, then
T = [ 11 0 -5 ; -3 5 -9 ; 11 0 25 ], T⁻¹ = [ 5/66 0 1/66 ; -4/275 1/5 19/275 ; -1/30 0 1/30 ] and
T⁻¹AT = [ 5 0 0 ; 0 -6 0 ; 0 0 -1 ]
(the same diagonal form)
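The computations in this example can be cross-checked numerically (an added sketch; numpy normalizes its eigenvectors, so they match the vectors above only up to scaling):

import numpy as np

A = np.array([[ 4.0,  0.0,  1.0],
              [-1.0, -6.0, -2.0],
              [ 5.0,  0.0,  0.0]])

vals, vecs = np.linalg.eig(A)
print(np.sort(vals))                  # [-6. -1.  5.]

D = np.linalg.inv(vecs) @ A @ vecs    # diagonal, with the eigenvalues of A
assert np.allclose(D, np.diag(vals), atol=1e-9)

# the explicit eigenvector basis from the example also diagonalizes A:
T = np.array([[ 1.0,   0.0, -1/5 ],
              [-3/11,  1.0, -9/25],
              [ 1.0,   0.0,  1.0 ]])
print(np.linalg.inv(T) @ A @ T)       # diag(5, -6, -1) up to rounding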

2.6.26. Theorem. (The Hamilton-Cayley Theorem) Consider a linear operator U(·) which is represented by the matrix A in a certain basis and has the characteristic polynomial P(λ) = det(A − λI).
Then P(A) = 0 ∈ Mn(C) (the null matrix) and P(U(·)) = O(·) (the null linear operator).

2.6. INTRODUCTION TO SPECTRAL THEORY

Proof. Consider the matrix A

95

I; this matrix is invertible when

2
=

(A) and in this case we

consider the inverse matrix:


(A

I)

1
det (A

I)

B =

1
B ;
P( )

where B is the matrix with elements Aji (the cofactors (algebraic complements) of the matrix (A
which are polynomials of degree at most n

I)),

1 in .

Then we may write


B = B0 + B1 +

+ Bn

n 1

where Bk are square matrices of dimension n. We get


P ( ) I = (A

I) B

and we rewrite this relation with respect to the increasing exponents of ;


with the notation P ( ) =
(

) I = (A

the previous equality becomes:


I) B0 + B1 +

+ Bn

n 1

where we multiply on the righthand term and organize by the increasing exponents of , to obtain:
0I

1I

nI

+ (ABn

= AB0 + (AB1
1

Bn 2 )

n 1

B0 ) + (AB2
+ ( Bn 1 )

B1 )

because this is an equality of two polynomials, their corresponding coe cients should be equal. We get
the equalities:
0I

AB0

1I

AB1

B0

A (to the left)

2I

AB2

B1

A2 (to the left)

=
n 1I
nI

= ABn

Bn

Bn

j
j

An

(to the left)

An (to the left)

96

2. LINEAR TRANSFORMATIONS

we multiply conveniently to the left in order to obtain the cancellation of the terms to the right, and we
get
0I

AB0

1A

A2 B1

AB0

2
2A

A3 B2

A2 B1

=
n 1
n 1A
n
nA

= An Bn

An 1 Bn

An Bn

n 1

from where by addition it follows


0I

1A

2
2A

n 1A

n
nA

= 0;

which means P (A) = 0 (the null matrix) and correspondingly P (U ( )) = O ( ) (the null operator)
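A direct numerical check of the theorem (added): np.poly gives the coefficients of det(λI − A), which differs from det(A − λI) only by the factor (−1)ⁿ, so it also annihilates A; the polynomial is evaluated at the matrix by a Horner scheme.

import numpy as np

A = np.array([[ 4.0,  0.0,  1.0],
              [-1.0, -6.0, -2.0],
              [ 5.0,  0.0,  0.0]])

c = np.poly(A)          # monic characteristic polynomial det(lambda*I - A)
n = A.shape[0]

P_of_A = np.zeros_like(A)
for coef in c:          # Horner evaluation on matrices
    P_of_A = P_of_A @ A + coef * np.eye(n)

assert np.allclose(P_of_A, 0.0, atol=1e-8)   # P(A) is the null matrix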
2.6.27. Remark. The set Mn(R) of the square matrices of dimension n is a vector space of dimension n², so that for each matrix A ∈ Mn(R), span{A^k; k ∈ N} has dimension at most n². The Hamilton-Cayley Theorem improves this bound: since Aⁿ (and, inductively, every higher power) is a linear combination of I, A, ..., A^(n−1), we get dim span{A^k; k ∈ N} ≤ n. In this inequality equality generally does not take place; the least number s for which the powers A⁰, A¹, ..., A^s are linearly dependent gives a dependence of the form Σ_{k=0}^s αk A^k = 0, which corresponds to a certain polynomial m(λ) = Σ_{k=0}^s αk λ^k, called the minimal polynomial of the matrix A.
2.6.28. Remark. (The greatest common divisor of several polynomials) A property which will be used for the Jordan canonical form is property (2.6.3), which may be seen in particular forms in (2.6.1) and (2.6.2).
For two polynomials p1(x) and p2(x) (with real or complex coefficients), their greatest common divisor is a new polynomial d(x) (with coefficients from the same field) which divides the two polynomials and, with this property, has maximal degree (there is no other polynomial of bigger degree which divides both polynomials). The existence and the finding of the polynomial d(x) may be obtained by factorization (a procedure limited by the impossibility of effectively finding the factorization of a polynomial) or by using the Euclid Algorithm for finding the greatest common divisor of polynomials, as the last nonzero remainder; the uniqueness of d(x) may be obtained with the extra condition that the polynomial have leading coefficient 1². Some of the properties of the polynomial d(x) = gcd(p1(·), p2(·)) are:
if d1(x) | p1(x) and d1(x) | p2(x), then d1(x) | d(x) (if a polynomial d1(·) divides both p1(·) and p2(·), then d1(·) also divides their greatest common divisor d(·));
gcd(p1(·), p2(·)) = gcd(p2(·), p1(·)) (in the Euclid algorithm it doesn't matter which polynomial is the first);
if gcd(p1(·), q(·)) = 1, then gcd(p1(·), p2(·)) = gcd(p1(·), p2(·)q(·));
d(·) is the least-degree polynomial with the property: there are polynomials h1(·) and h2(·) such that
(2.6.1) d(·) = h1(·)p1(·) + h2(·)p2(·);
for three or more polynomials, the greatest common divisor is defined recursively:
gcd(p1(·), p2(·), p3(·)) = gcd(gcd(p1(·), p2(·)), p3(·)),
while property (2.6.1) may be extended for three polynomials in the following way:
(2.6.2) gcd(p1(·), p2(·), p3(·)) = gcd(gcd(p1(·), p2(·)), p3(·)) = h′1(·) gcd(p1(·), p2(·)) + h′2(·)p3(·) = h′1(·)(h″1(·)p1(·) + h″2(·)p2(·)) + h′2(·)p3(·) = h1(·)p1(·) + h2(·)p2(·) + h3(·)p3(·);
for a certain number k of polynomials, the greatest common divisor is defined similarly:
gcd(p1(·), p2(·), ..., pk(·)) = gcd(gcd(p1(·), ..., p_(k−1)(·)), pk(·)),
while property (2.6.2) may be extended for k polynomials in the following way: there are polynomials hi(·), i = 1, ..., k, such that
(2.6.3) gcd(p1(·), p2(·), ..., pk(·)) = Σ_{i=1}^k hi(·)pi(·).
²The leading coefficient is the coefficient of the monomial with the greatest degree of the polynomial. These polynomials are also called monic polynomials.
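A sketch of the Euclid Algorithm for polynomials (added; np.polydiv performs division with remainder on coefficient arrays, highest degree first, and the tolerance used to discard numerical noise is a choice made here):

import numpy as np

def poly_gcd(p, q, tol=1e-9):
    # Euclid's algorithm: the gcd is the last nonzero remainder, normalized to be monic
    p = np.trim_zeros(np.atleast_1d(p).astype(float), 'f')
    q = np.trim_zeros(np.atleast_1d(q).astype(float), 'f')
    while q.size > 0:
        _, r = np.polydiv(p, q)      # division with remainder
        r[np.abs(r) < tol] = 0.0     # discard numerical noise
        p, q = q, np.trim_zeros(r, 'f')
    return p / p[0]                  # leading coefficient 1 (monic)

# gcd of (x-1)^2 (x+2) and (x-1)(x+3): expect x - 1
p1 = np.polymul(np.polymul([1, -1], [1, -1]), [1, 2])
p2 = np.polymul([1, -1], [1, 3])
print(poly_gcd(p1, p2))              # approximately [ 1. -1.]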

2.6.29. Definition. A linear operator N(·) is called nilpotent when there is r ∈ N such that N^r(·) = O(·) (the null operator).

2.6.30. Remark. If N(·) is nilpotent, f ≠ 0 is a nonzero vector and k + 1 = min{s; N^s(f) = 0}, then the set {f, N(f), ..., N^k(f)} is linearly independent.
Proof. Suppose Σ_{i=0}^k αi N^i(f) = 0; by applying N^k(·) we get α0 N^k(f) = 0 ⇒ α0 = 0;
in a similar way, by applying N^(k−j)(·) we get that ∀j = 1, ..., k, αj = 0.


2.6.31. Theorem. (Reducing a complex linear operator to nilpotent linear operators) Consider a linear operator U(·) : Cⁿ → Cⁿ with the characteristic polynomial P(λ) = Π_{j=1}^k (λ - λj)^{nj}.
For each j = 1, ..., k, consider the set V_{λj} = ker((U - λjI)^{nj}(·)). Then:
(1) For each j = 1, ..., k, V_{λj} is an U(·)-invariant subspace.
(2) V = ⊕_{j=1}^k V_{λj}.
(3) The restriction of U(·) to each subspace V_{λj}, denoted by Uj(·) : V_{λj} → V_{λj} and defined by Uj(x) = U(x), ∀x ∈ V_{λj}, has the structure Uj(·) = (Nj + λjI)(·), with Nj(·) a nilpotent linear operator over V_{λj} and I(·) the unit linear operator over V_{λj}.
Proof.
(1) The set V_{λj} is a subspace, since it is the kernel of a certain linear operator.
Take v ∈ V_{λj} = ker((U - λjIn)^{nj}(·)); then (U - λjI)^{nj}(v) = 0.
We have to prove that U(v) ∈ V_{λj}, which means (U - λjI)^{nj}(U(v)) = 0:
(U - λjI)^{nj}(U(v)) = (U - λjI)^{nj}((U - λjI)(v) + λjv) = (U - λjI)((U - λjI)^{nj}(v)) + λj(U - λjI)^{nj}(v) = (U - λjI)(0) + λj · 0 = 0 ⇒ U(v) ∈ V_{λj},
so the subspace V_{λj} is U(·)-invariant.
(2) We have P(λ) = Π_{j=1}^k (λ - λj)^{nj}, and for j = 1, ..., k we consider the polynomials Pj(λ) = (λ - λj)^{nj} and Qj(λ) = Π_{i=1, i≠j}^k (λ - λi)^{ni} = P(λ)/Pj(λ).
We have gcd(Q1(·), ..., Qk(·)) = 1, so from Remark 2.6.28 and property (2.6.3) there are some polynomials hj(·) such that Σ_{j=1}^k hj(λ)Qj(λ) = 1.
Moreover, we have P(λ) = Pj(λ)Qj(λ), ∀j = 1, ..., k; we write these relations for the attached linear operator polynomials and we use the Hamilton-Cayley Theorem:
O(·) = P(U(·)) = Pj(U(·)) ∘ Qj(U(·)) ⇒
⇒ Pj(U(·))((Qj(U(·)))(v)) = 0, ∀v ∈ V ⇒ (Qj(U(·)))(v) ∈ V_{λj}, ∀v ∈ V,
and Σ_{j=1}^k hj(U(·)) ∘ Qj(U(·)) = I(·), which means, taking into account (from Remark 2.2.10) that hj(U(·)) ∘ Qj(U(·)) = Qj(U(·)) ∘ hj(U(·)), that
v = Σ_{j=1}^k hj(U(·))((Qj(U(·)))(v)) = Σ_{j=1}^k (Qj(U(·)))((hj(U(·)))(v)), ∀v ∈ V;
because (Qj(U(·)))(v) ∈ V_{λj} and V_{λj} is U(·)-invariant, we get that (Qj(U(·)))((hj(U(·)))(v)) ∈ V_{λj}, so that any vector may be written as a sum of vectors from V_{λj}, j = 1, ..., k, and so we have Σ_{j=1}^k V_{λj} = V.
The sum is direct: take vj ∈ V_{λj} such that Σ_{j=1}^k vj = 0; then we have Pi(U(·))(vi) = 0 and Qi(U(·))(vj) = 0, ∀j = 1, ..., k, j ≠ i ⇒
Qi(U(·))(vi) = -Qi(U(·))(Σ_{l=1, l≠i}^k vl) = 0;
since the polynomials Pi(·) and Qi(·) are relatively prime, from (2.6.1) we get the existence of polynomials R1(·) and R2(·) such that
R1(λ)Pi(λ) + R2(λ)Qi(λ) = 1, which means, by passing to linear operator polynomials, that
vi = R1(U(·))((Pi(U(·)))(vi)) + R2(U(·))((Qi(U(·)))(vi)) = 0;
we get that the decomposition is unique and the sum is direct.
(3) Consider the linear operator Uj(·) : V_{λj} → V_{λj}, Uj(v) = U(v), ∀v ∈ V_{λj} (the restriction of U(·) to V_{λj}).
Because V_{λj} = ker((U - λjI)^{nj}(·)), we have (U - λjI)^{nj}(v) = 0, ∀v ∈ V_{λj}, so that (Uj - λjI)^{nj}(v) = 0, ∀v ∈ V_{λj}. This means that the linear operator Nj(·) = (Uj - λjI)(·) is nilpotent, and since Uj(·) = (Uj - λjI)(·) + λjI(·) = Nj(·) + λjI(·), we get the structure from the statement.
2.6.32. Theorem. (The Jordan canonical form for nilpotent operators) Consider a nilpotent
linear operator N ( ) : V ! V over a nitetype vector space V. Then there is a basis for which the matrix
representation of N ( ) is a Jordan block attached to the zero eigenvalue.
Proof. N ( ) nilpotent ) 9!r 2 N such that ker N r ( ) = V and ker N r

( ) & V (r is the smallest

exponent k for which N k ( ) = O ( )),


Not

r = min dim N k ( ) = dim V = n


k

(the existence comes from the nilpotency of N ( ), while the unicity is a result of r being the smallest);
moreover, we have x 2 ker N k ( ) ) N k (x) = 0 ) N N k (x) = 0 ) x 2 ker N k+1 ( ) so that:
ker N ( )

ker N k ( )

ker N 2 ( )

ker N r ( ) = V

(the kernels of the successive powers of N ( ) are forming a chain, which is nite (because of the nite
dimension of V) and equal with V from the exponent r above) the resulting chain is:
f0g = ker N 0 ( )

ker N 1 ( )

ker N r

ker N 2 ( )
Not

()

6=

ker N r ( ) = V

and consider their dimension, denoted by mk = dim ker N k ( ), so that


k = 0; r ) 0 = m0

m1

m2

mr

< mr = n.

(v) =

100

2. LINEAR TRANSFORMATIONS

For each $k = \overline{1,r}$ we have $\ker N^k(\cdot) = \ker N^{k-1}(\cdot) \oplus Q_k$, with $Q_1 = \ker N(\cdot)$ and $q_k = \dim Q_k$, while $V = \ker N^r(\cdot) = \ker N^{r-1}(\cdot) \oplus Q_r$; we have $q_k = m_k - m_{k-1}$, $k = \overline{1,r}$, and also denote $\dim Q_r = q_r = m_r - m_{r-1} \overset{\text{Not}}{=} p_1$.

Choose a basis in $Q_r$, denoted by $\{f_1, \ldots, f_{p_1}\}$; this set is linearly independent and $V = \ker N^r(\cdot) = \ker N^{r-1}(\cdot) \oplus Q_r = \ker N^{r-1}(\cdot) \oplus \operatorname{span}\{f_1, \ldots, f_{p_1}\}$. Moreover, $\sum_{i=1}^{p_1} \alpha_i f_i \in \ker N^{r-1}(\cdot) \Rightarrow \alpha_i = 0$, $\forall i = \overline{1,p_1}$ (because of the direct sum). We have the decomposition
$$V = \ker N^r(\cdot) = \operatorname{span}\{f_1, \ldots, f_{p_1}\} \oplus \ker N^{r-1}(\cdot).$$

By applying $N(\cdot)$, we get that the vectors $N(f_1), \ldots, N(f_{p_1})$ all belong to $\ker N^{r-1}(\cdot)$ (because $f_i \in \ker N^r(\cdot) \Rightarrow N^r(f_i) = 0 \Rightarrow N^{r-1}(N(f_i)) = 0 \Rightarrow N(f_i) \in \ker N^{r-1}(\cdot)$). Moreover, if some linear combination of these vectors belonged to $\ker N^{r-2}(\cdot)$, then $N^{r-2}\big(\sum_{i=1}^{p_1} \alpha_i N(f_i)\big) = 0$, which means that $N^{r-1}\big(\sum_{i=1}^{p_1} \alpha_i f_i\big) = 0$, from where we get $\sum_{i=1}^{p_1} \alpha_i f_i \in \ker N^{r-1}(\cdot) \Rightarrow \alpha_i = 0$, $\forall i$; this means that the set $\{N(f_1), \ldots, N(f_{p_1})\}$ is linearly independent and $\{N(f_1), \ldots, N(f_{p_1})\} \subseteq \ker N^{r-1}(\cdot) \setminus \ker N^{r-2}(\cdot)$, so $\{N(f_1), \ldots, N(f_{p_1})\} \subseteq Q_{r-1}$, while $\dim Q_{r-1} = q_{r-1} = m_{r-1} - m_{r-2} \geq q_r = p_1$ (because $Q_{r-1}$ includes a linearly independent set with $p_1$ vectors). Denote $p_2 \overset{\text{Not}}{=} q_{r-1} - p_1$ and complete the linearly independent set $\{N(f_1), \ldots, N(f_{p_1})\}$ in $Q_{r-1}$ up to a basis of $Q_{r-1}$ with the vectors $f_{p_1+1}, \ldots, f_{p_1+p_2}$.

The set $\{N(f_1), \ldots, N(f_{p_1}), f_{p_1+1}, \ldots, f_{p_1+p_2}\}$ is a basis in $Q_{r-1}$ and we have the decomposition
$$V = \ker N^r(\cdot) = \operatorname{span}\{f_1, \ldots, f_{p_1}\} \oplus \ker N^{r-1}(\cdot) = \operatorname{span}\{f_1, \ldots, f_{p_1}\} \oplus Q_{r-1} \oplus \ker N^{r-2}(\cdot) =$$
$$= \operatorname{span}\{f_1, \ldots, f_{p_1}\} \oplus \operatorname{span}\{N(f_1), \ldots, N(f_{p_1}), f_{p_1+1}, \ldots, f_{p_1+p_2}\} \oplus \ker N^{r-2}(\cdot).$$

Apply $N(\cdot)$ to the set $\{N(f_1), \ldots, N(f_{p_1}), f_{p_1+1}, \ldots, f_{p_1+p_2}\}$ to get the set $\{N^2(f_1), \ldots, N^2(f_{p_1}), N(f_{p_1+1}), \ldots, N(f_{p_1+p_2})\}$, which is a linearly independent set (with similar arguments as above) in $Q_{r-2}$.

Moreover, $q_{r-2} \geq q_{r-1} \geq q_r$; denote $p_3 \overset{\text{Not}}{=} q_{r-2} - q_{r-1} = q_{r-2} - p_1 - p_2$. In a finite number of steps, we obtain $q_r \leq q_{r-1} \leq \cdots \leq q_2 \leq q_1$ and the structure:
$$\begin{array}{lll}
f_1, \ldots, f_{p_1}; & & \text{basis in } Q_r;\\
N(f_1), \ldots, N(f_{p_1}),\ f_{p_1+1}, \ldots, f_{p_1+p_2}; & & \text{basis in } Q_{r-1};\\
N^2(f_1), \ldots, N^2(f_{p_1}),\ N(f_{p_1+1}), \ldots, N(f_{p_1+p_2}),\ f_{p_1+p_2+1}, \ldots, f_{p_1+p_2+p_3}; & & \text{basis in } Q_{r-2};\\
\cdots & & \\
N^{r-1}(f_1), \ldots, N^{r-1}(f_{p_1}),\ N^{r-2}(f_{p_1+1}), \ldots, N^{r-2}(f_{p_1+p_2}),\ \ldots,\ f_{p_1+\cdots+p_{r-1}+1}, \ldots, f_{p_1+\cdots+p_{r-1}+p_r}; & & \text{basis in } Q_1.
\end{array}$$
The last line has only eigenvectors, all attached to the zero eigenvalue.

Each column of the table is a linearly independent set which determines an $N(\cdot)$-invariant subspace; the first $p_1$ subspaces have dimension $r$, the next $p_2$ subspaces have dimension $r-1$, and so on; the last $p_r$ subspaces have dimension $1$. The entire space $V$ is a direct sum of the subspaces on the columns; for the first column, choose as basis the set $N^{r-1}(f_1), N^{r-2}(f_1), \ldots, N(f_1), f_1$ (in this order).

In this basis the restriction of $N(\cdot)$ is given by the values of $N(\cdot)$ at the vectors forming the basis:
$$f_1 \in V = \ker N^r(\cdot) \Rightarrow N\big(N^{r-1}(f_1)\big) = N^r(f_1) = 0 = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\qquad
N\big(N^{r-2}(f_1)\big) = N^{r-1}(f_1) = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\ \ldots,\qquad
N(f_1) = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \end{bmatrix}$$
$$\Rightarrow N|_{\text{inv. subspace}}(x) = \begin{bmatrix} 0 & 1 & & 0 \\ & \ddots & \ddots & \\ & & 0 & 1 \\ 0 & & & 0 \end{bmatrix} x \overset{\text{Not}}{=} J_0(r)\, x;$$
finally, we get for the matrix representation of $N(\cdot)$ the block-diagonal structure
$$\begin{bmatrix}
J_0(r) & & & & & & \\
& \ddots & & & & & \\
& & J_0(r) & & & & \\
& & & J_0(r-1) & & & \\
& & & & \ddots & & \\
& & & & & J_0(1) & \\
& & & & & & \ddots
\end{bmatrix}$$
with $p_1$ cells $J_0(r)$ of order $r$, then $p_2$ cells $J_0(r-1)$ of order $r-1$, and so on, up to $p_r$ cells $J_0(1)$ of order $1$. $\square$
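The counting argument in the proof can be cross-checked numerically. The following sketch (Python with NumPy; an illustration added here, not part of the cited sources, and the matrix `N` is a hypothetical example) computes $m_k = \dim \ker N^k = n - \operatorname{rank}(N^k)$ and derives the number of Jordan cells of each order:

```python
import numpy as np

def nilpotent_cell_orders(N):
    """For a nilpotent matrix N, return {cell order: number of cells},
    using m_k = dim ker N^k = n - rank(N^k) and q_k = m_k - m_{k-1}."""
    n = N.shape[0]
    m = [0]                               # m_0 = 0
    P = np.eye(n)
    while m[-1] < n:                      # stops at the exponent r with ker N^r = V
        P = P @ N
        m.append(n - np.linalg.matrix_rank(P))
    r = len(m) - 1
    q = [m[k] - m[k - 1] for k in range(1, r + 1)] + [0]   # q_1, ..., q_r, q_{r+1} = 0
    # q_k counts the cells of order >= k (in the notation above, p_1 = q_r),
    # so exactly q_k - q_{k+1} cells have order k
    return {k: q[k - 1] - q[k] for k in range(1, r + 1) if q[k - 1] - q[k] > 0}

# hypothetical test: one cell of order 3 and one cell of order 1
N = np.zeros((4, 4)); N[0, 1] = N[1, 2] = 1.0
print(nilpotent_cell_orders(N))           # {3: 1, 1: 1}
```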

2.6.33. Definition. An $r \times r$ matrix is called an $r$-th order Jordan cell attached to the scalar $\lambda$ when it has the form:
$$J_\lambda(r) = \begin{bmatrix} \lambda & 1 & & 0 \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ 0 & & & \lambda \end{bmatrix}.$$

2.6.34. Definition. A Jordan block of order $(r_1, \ldots, r_k)$ attached to the scalar $\lambda$ is a matrix of dimension $(r_1 + \cdots + r_k) \times (r_1 + \cdots + r_k)$ with the form:
$$J_\lambda(r_1, \ldots, r_k) = \begin{bmatrix} J_\lambda(r_1) & 0 & \cdots & 0 \\ 0 & J_\lambda(r_2) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_\lambda(r_k) \end{bmatrix}.$$

2.6.35. Definition. A Jordan matrix attached to the scalars $(\lambda_1, \ldots, \lambda_s)$ and the orders $r_1^i, \ldots, r_{k_i}^i$, $i = \overline{1,s}$, is a matrix with the form:
$$J = \begin{bmatrix} J_{\lambda_1}\big(r_1^1, \ldots, r_{k_1}^1\big) & 0 & \cdots & 0 \\ 0 & J_{\lambda_2}\big(r_1^2, \ldots, r_{k_2}^2\big) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & J_{\lambda_s}\big(r_1^s, \ldots, r_{k_s}^s\big) \end{bmatrix}.$$
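Definitions 2.6.33 and 2.6.34 translate directly into code; a minimal sketch (Python with NumPy, added here for illustration; the function names are ours, not from the sources):

```python
import numpy as np

def jordan_cell(lam, r):
    """r-th order Jordan cell attached to the scalar lam (Definition 2.6.33)."""
    return lam * np.eye(r) + np.diag(np.ones(r - 1), k=1)

def jordan_block(lam, orders):
    """Jordan block of orders (r_1, ..., r_k) attached to lam (Definition 2.6.34)."""
    n = sum(orders)
    J = np.zeros((n, n))
    i = 0
    for r in orders:
        J[i:i + r, i:i + r] = jordan_cell(lam, r)
        i += r
    return J

print(jordan_block(2.0, [2, 1]))   # block-diagonal: a 2x2 cell and a 1x1 cell for lambda = 2
```

A Jordan matrix (Definition 2.6.35) is then just the block-diagonal assembly of such blocks, one per scalar.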

2.6.1. Decomplexification of the Complex Jordan Canonical Form. When the field of scalars is real, the eigenvalues are, generally speaking, complex numbers, and so they don't belong to the field of scalars; moreover, the attached eigenvectors have complex coordinates and thus they don't belong to the vector space. The following procedure obtains a pseudo-diagonal form for this situation.

Consider the space $(\mathbb{R}^n, \mathbb{R})$ and a linear operator $U(\cdot): \mathbb{R}^n \to \mathbb{R}^n$ with the representation (in the standard basis) given by the matrix $A$ $[\,[U(x)]_E = A[x]_E\,]$. Then the characteristic polynomial $P(\lambda) = \det(A - \lambda I)$ is a polynomial with real coefficients. Consider a complex eigenvalue $\lambda = \alpha + i\beta \in \mathbb{C} \setminus \mathbb{R}$, $\beta > 0$, with multiplicity $m$. Then $\bar\lambda = \alpha - i\beta$ is also an eigenvalue with the same multiplicity (because when a polynomial with real coefficients has a complex root, it also has as root its complex conjugate).

The same matrix $A$ defines a new linear operator over $\mathbb{C}^n$, given by $[U(x)]_E = A[x]_E$. Over the field $\mathbb{C}$ this linear operator admits a Jordan basis in which to the complex eigenvalue $\lambda$ there correspond $m$ basis vectors (some of them eigenvectors) with complex coordinates, denoted by $f_1, \ldots, f_m$; then their complex conjugates $\bar f_1, \ldots, \bar f_m$ are the corresponding vectors for the eigenvalue $\bar\lambda$: if $v$ is an eigenvector for $A$, $Av = \lambda v$, then $A(\operatorname{Re} v + i \operatorname{Im} v) = (\operatorname{Re}\lambda + i \operatorname{Im}\lambda)(\operatorname{Re} v + i \operatorname{Im} v)$ and, by conjugating, $A\bar v = \bar\lambda \bar v$ (since the matrix $A$ has real components), so that $\bar v$ is an eigenvector for $\bar\lambda$.

The Jordan basis corresponding to the eigenvalue $\lambda$ is given by the relations:
$$\begin{array}{lll}
A f_1^1 = \lambda f_1^1, & \ldots, & A f_1^q = \lambda f_1^q\\
A f_2^1 = f_1^1 + \lambda f_2^1, & \ldots, & A f_2^q = f_1^q + \lambda f_2^q\\
\cdots & & \\
A f_{n_1}^1 = f_{n_1-1}^1 + \lambda f_{n_1}^1, & \ldots, & A f_{n_q}^q = f_{n_q-1}^q + \lambda f_{n_q}^q.
\end{array}$$
The vectors $\bar f_j^k$ are linearly independent and they form the corresponding part of the Jordan basis for the eigenvalue $\bar\lambda$.

Starting from the vectors attached to these two complex conjugate eigenvalues we may build a basis with real coordinates by replacing each pair of complex conjugate vectors $f_j^k$, $\bar f_j^k$ with the pair of real vectors
$$g_j^k = \tfrac{1}{2}\big(f_j^k + \bar f_j^k\big), \qquad h_j^k = \tfrac{1}{2i}\big(f_j^k - \bar f_j^k\big).$$
From the relations $A f_j^k = f_{j-1}^k + \lambda f_j^k$, $A \bar f_j^k = \bar f_{j-1}^k + \bar\lambda \bar f_j^k$ and from the relations
$$\lambda f + \bar\lambda \bar f = (\operatorname{Re}\lambda + i\operatorname{Im}\lambda)(\operatorname{Re} f + i\operatorname{Im} f) + (\operatorname{Re}\lambda - i\operatorname{Im}\lambda)(\operatorname{Re} f - i\operatorname{Im} f) = 2\operatorname{Re}\lambda\,\operatorname{Re} f - 2\operatorname{Im}\lambda\,\operatorname{Im} f = 2(\operatorname{Re}\lambda \cdot g - \operatorname{Im}\lambda \cdot h),$$
$$\lambda f - \bar\lambda \bar f = (\operatorname{Re}\lambda + i\operatorname{Im}\lambda)(\operatorname{Re} f + i\operatorname{Im} f) - (\operatorname{Re}\lambda - i\operatorname{Im}\lambda)(\operatorname{Re} f - i\operatorname{Im} f) = 2i\operatorname{Re}\lambda\,\operatorname{Im} f + 2i\operatorname{Im}\lambda\,\operatorname{Re} f,$$
it follows that
$$A g_j^k = g_{j-1}^k + \operatorname{Re}\lambda \cdot g_j^k - \operatorname{Im}\lambda \cdot h_j^k, \qquad A h_j^k = h_{j-1}^k + \operatorname{Im}\lambda \cdot g_j^k + \operatorname{Re}\lambda \cdot h_j^k,$$
so that the replacements for each pair of complex conjugate cells (for $\lambda = \alpha + i\beta$) are:
$$\begin{bmatrix} \lambda & 0 \\ 0 & \bar\lambda \end{bmatrix} \longrightarrow \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}, \qquad
\begin{bmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 0 & 0 \\ 0 & 0 & \bar\lambda & 1 \\ 0 & 0 & 0 & \bar\lambda \end{bmatrix} \longrightarrow \begin{bmatrix} \alpha & \beta & 1 & 0 \\ -\beta & \alpha & 0 & 1 \\ 0 & 0 & \alpha & \beta \\ 0 & 0 & -\beta & \alpha \end{bmatrix}, \qquad \ldots$$
and so on for pairs of conjugate cells of higher order: each occurrence of $\lambda$ on the diagonal becomes a real $2 \times 2$ block $\begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix}$, and each $1$ above the diagonal becomes a $2 \times 2$ identity block.
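A quick numerical sanity check of the replacement (a minimal sketch, with hypothetical values $\alpha = 2$, $\beta = 3$; not part of the original notes): the real $2 \times 2$ cell has exactly the eigenvalues $\alpha \pm i\beta$.

```python
import numpy as np

a, b = 2.0, 3.0                # alpha, beta for lambda = alpha + i*beta
R = np.array([[a,  b],
              [-b, a]])        # the real replacement cell
print(np.linalg.eigvals(R))    # [2.+3.j  2.-3.j]
```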

2.6.2. The Jordan Canonical Form Procedure.

(1) Find the matrix of the linear operator.

(2) Solve (in $\mathbb{C}$) the characteristic equation $\det(A - \lambda I) = 0$ and find all the roots $\lambda_j \in \mathbb{C}$, $j = \overline{1,k}$, with their multiplicity orders $n_j$, $\sum_{j=1}^{k} n_j = n$.

For each root $\lambda_j$ (with multiplicity $n_j$) obtained at step (2):

(3) Find the set $V_{\lambda_j} = \ker\big((U - \lambda_j I)^{n_j}(\cdot)\big)$.

(4) Find the restriction $U_j(\cdot)$ of the linear operator $U(\cdot)$ to the set $V_{\lambda_j}$, $U_j(\cdot): V_{\lambda_j} \to V_{\lambda_j}$, $U_j(x) = U(x)$, $\forall x \in V_{\lambda_j}$.

(5) Find the nilpotent linear operator $N_j(\cdot): V_{\lambda_j} \to V_{\lambda_j}$, defined by $N_j(x) = U_j(x) - \lambda_j I(x) = (U_j - \lambda_j I)(x)$, $\forall x \in V_{\lambda_j}$.

(6) Find the chain of kernels
$$\{0\} = \ker N_j^0(\cdot) \subseteq \ker N_j(\cdot) \subseteq \ker N_j^2(\cdot) \subseteq \cdots \subseteq \ker N_j^{r_j - 1}(\cdot) \subsetneq \ker N_j^{r_j}(\cdot) = V_{\lambda_j};$$
find $r_j = \min\{k:\ \dim \ker N_j^k(\cdot) = \dim V_{\lambda_j} = n_j\}$ and, for each $k = \overline{1,r_j}$, consider the decomposition $\ker N_j^k(\cdot) = \ker N_j^{k-1}(\cdot) \oplus Q_k^j$.

(7) Find $m_k^j = \dim \ker N_j^k(\cdot)$, $k = \overline{0,r_j}$.

(8) Find $q_k^j = \dim Q_k^j = m_k^j - m_{k-1}^j$, $k = \overline{1,r_j}$.

(9) Find
$$p_1^j = m_{r_j}^j - m_{r_j - 1}^j = q_{r_j}^j, \qquad p_2^j = q_{r_j - 1}^j - p_1^j, \qquad p_3^j = q_{r_j - 2}^j - q_{r_j - 1}^j = q_{r_j - 2}^j - p_1^j - p_2^j, \quad \ldots$$

(10) Choose a basis in $Q_{r_j}^j$, denoted by $f_1^j, \ldots, f_{p_1^j}^j$.

(11) Find the vectors $f_{p_1^j + 1}^j, \ldots, f_{p_1^j + p_2^j}^j$ such that they complete the set $N_j(f_1^j), \ldots, N_j(f_{p_1^j}^j)$ up to a basis in $Q_{r_j - 1}^j$.

(12) Continue this procedure until the following structure is obtained:
$$\begin{array}{lll}
f_1^j, \ldots, f_{p_1^j}^j; & & \text{basis in } Q_{r_j}^j;\\
N_j(f_1^j), \ldots, N_j(f_{p_1^j}^j),\ f_{p_1^j+1}^j, \ldots, f_{p_1^j+p_2^j}^j; & & \text{basis in } Q_{r_j - 1}^j;\\
N_j^2(f_1^j), \ldots, N_j^2(f_{p_1^j}^j),\ N_j(f_{p_1^j+1}^j), \ldots, N_j(f_{p_1^j+p_2^j}^j),\ f_{p_1^j+p_2^j+1}^j, \ldots, f_{p_1^j+p_2^j+p_3^j}^j; & & \text{basis in } Q_{r_j - 2}^j;\\
\cdots & & \\
N_j^{r_j - 1}(f_1^j), \ldots, N_j^{r_j - 1}(f_{p_1^j}^j),\ N_j^{r_j - 2}(f_{p_1^j+1}^j), \ldots, N_j^{r_j - 2}(f_{p_1^j+p_2^j}^j),\ \ldots,\ f_{p_1^j+\cdots+p_{r_j - 1}^j+1}^j, \ldots, f_{p_1^j+\cdots+p_{r_j}^j}^j; & & \text{basis in } Q_1^j.
\end{array}$$
The Jordan basis is obtained by ordering the above vectors in the following way: take the vectors on the columns, from below to above (for each column, from the biggest exponent to the smallest exponent); for example, for the first column:
$$N_j^{r_j - 1}(f_1^j),\ N_j^{r_j - 2}(f_1^j),\ \ldots,\ N_j(f_1^j),\ f_1^j.$$

(13) Obtain in this basis a matrix for the linear operator which has $p_1^j$ cells of order $r_j$, $p_2^j$ cells of order $r_j - 1$, and so on.
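In practice the whole procedure can be cross-checked with a computer algebra system; a minimal sketch (not part of the original notes) using SymPy's `jordan_form`, which returns a change-of-basis matrix $P$ and the Jordan matrix $J$ with $A = PJP^{-1}$. The test matrix is the one from Example 2.6.36 below, used here only as an input:

```python
from sympy import Matrix

A = Matrix([[1, 1, 1, 0],
            [-1, 3, 0, 1],
            [-1, 0, -1, 1],
            [0, -1, -1, 1]])
P, J = A.jordan_form()          # exact (rational) arithmetic
print(J)                        # one cell of order 3 and one of order 1, both for lambda = 1
assert A == P * J * P.inv()     # the decomposition A = P J P^{-1}
```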

2.6.3. Examples.

2.6.36. Example (One eigenvalue with multiplicity 4). Consider $U(\cdot): \mathbb{R}^4 \to \mathbb{R}^4$ with the matrix in the standard basis given by
$$A = \begin{bmatrix} 1 & 1 & 1 & 0 \\ -1 & 3 & 0 & 1 \\ -1 & 0 & -1 & 1 \\ 0 & -1 & -1 & 1 \end{bmatrix};$$
determine the Jordan canonical form and a Jordan basis.

Step 1: the matrix is $A$ above.

Step 2: The eigenvalues: $\det(A - \lambda I_4) = (\lambda - 1)^4 = 0 \Rightarrow \lambda_1 = 1$, $n_1 = 4$.

Step 3: The dimension of the eigenset attached to the eigenvalue $\lambda_1 = 1$. Consider the linear operator $(U - \lambda_1 1_{\mathbb{R}^4})^4(\cdot) = (U - 1_{\mathbb{R}^4})^4(\cdot)$; its matrix is $(A - I_4)^4$:
$$A - I_4 = \begin{bmatrix} 0 & 1 & 1 & 0 \\ -1 & 2 & 0 & 1 \\ -1 & 0 & -2 & 1 \\ 0 & -1 & -1 & 0 \end{bmatrix} \overset{\text{Not.}}{=} B,$$
$$(A - I_4)^2 = B^2 = \begin{bmatrix} -2 & 2 & -2 & 2 \\ -2 & 2 & -2 & 2 \\ 2 & -2 & 2 & -2 \\ 2 & -2 & 2 & -2 \end{bmatrix}, \qquad (A - I_4)^3 = B^3 = O_4, \qquad (A - I_4)^4 = B^4 = O_4.$$
So $V_{\lambda_1} = \ker(U - \lambda_1 I)^{n_1}(\cdot) = \ker(U - 1_{\mathbb{R}^4})^4(\cdot) = \mathbb{R}^4$.

Step 4: The restriction $U_1(\cdot)$ of $U(\cdot)$ to $V_{\lambda_1}$, $U_1(\cdot): V_{\lambda_1} \to V_{\lambda_1}$, $U_1(x) = U(x)$, $\forall x \in V_{\lambda_1}$, is $U_1(\cdot): \mathbb{R}^4 \to \mathbb{R}^4$, $U_1(x) = U(x)$ ($U_1(\cdot)$ is identical with $U(\cdot)$).

Step 5: The nilpotent linear operator $N_1(\cdot): \mathbb{R}^4 \to \mathbb{R}^4$, defined by $N_1(x) = U_1(x) - \lambda_1 I_4(x)$, has the matrix $B$ above.

Step 6: Find
$$r_1 = \min\{k:\ \dim \ker N_1^k(\cdot) = \dim V_{\lambda_1} = n_1 = 4\},$$
from where we get $r_1 = 3$. The chain of kernels:


$$\{0\} = \ker N_1^0(\cdot) \subseteq \ker N_1(\cdot) \subseteq \ker N_1^2(\cdot) \subseteq \ker N_1^{r_1 - 1}(\cdot) \subsetneq \ker N_1^{r_1}(\cdot) = V_{\lambda_1}.$$
$\ker N_1(\cdot)$ is the set of solutions of the system
$$\begin{bmatrix} 0 & 1 & 1 & 0 \\ -1 & 2 & 0 & 1 \\ -1 & 0 & -2 & 1 \\ 0 & -1 & -1 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix},$$
which means all the vectors of the form $\begin{bmatrix} 2a + b \\ a \\ -a \\ b \end{bmatrix}$.

$\ker N_1^2(\cdot)$ is the set of solutions of the system $B^2 x = 0$, i.e. of $-x_1 + x_2 - x_3 + x_4 = 0$, which means all the vectors of the form $\begin{bmatrix} a - b + c \\ a \\ b \\ c \end{bmatrix}$.
The situation so far:
$$\begin{array}{c|c|c|c|c}
\text{linear operator} & \text{matrix} & \text{kernel} & m_k^1 & q_k^1 \\ \hline
N_1^0(\cdot) & I_4 & \{0\} & m_0^1 = 0 & \\[2pt]
N_1(\cdot) & B & \operatorname{span}\left\{\begin{bmatrix} 2 \\ 1 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\} & m_1^1 = 2 & q_1^1 = m_1^1 - m_0^1 = 2 \\[2pt]
N_1^2(\cdot) & B^2 & \operatorname{span}\left\{\begin{bmatrix} 1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\} & m_2^1 = 3 & q_2^1 = m_2^1 - m_1^1 = 1 \\[2pt]
N_1^3(\cdot) & B^3 = O_4 & \mathbb{R}^4 & m_3^1 = 4 & q_3^1 = m_3^1 - m_2^1 = 1
\end{array}$$

$p_1^1 = q_3^1 = 1 \Rightarrow$ the Jordan matrix has 1 cell of order $r_1 = 3$;
$p_2^1 = q_2^1 - q_3^1 = 0 \Rightarrow$ the Jordan matrix has 0 cells of order $r_1 - 1 = 2$;
$p_3^1 = q_1^1 - q_2^1 = 2 - 1 = 1 \Rightarrow$ the Jordan matrix has 1 cell of order $r_1 - 2 = 1$.

So the Jordan matrix is
$$\begin{bmatrix} 1 & 1 & 0 & \vert & 0 \\ 0 & 1 & 1 & \vert & 0 \\ 0 & 0 & 1 & \vert & 0 \\ \hline 0 & 0 & 0 & \vert & 1 \end{bmatrix}.$$

Step 10 [Finding the Jordan basis]: For each $k = \overline{1,r_1} = \overline{1,3}$ consider the decomposition $\ker N_1^k(\cdot) = \ker N_1^{k-1}(\cdot) \oplus Q_k^1$. Find the subspaces $Q_k^1$:
$$\{0\} = \ker N_1^0(\cdot) \underset{\dim = 2}{\subseteq} \ker N_1(\cdot) \underset{\dim = 3}{\subseteq} \ker N_1^2(\cdot) \underset{\dim = 4}{\subseteq} \ker N_1^3(\cdot) = \mathbb{R}^4,$$
$$\ker N_1^2(\cdot) = \ker N_1(\cdot) \oplus Q_2^1, \qquad \mathbb{R}^4 = \ker N_1^3(\cdot) = \ker N_1^2(\cdot) \oplus Q_3^1.$$

$\ker N_1^2(\cdot) = \operatorname{span}\{(1,1,0,0), (-1,0,1,0), (1,0,0,1)\}$; from $\ker N_1^3(\cdot) = \mathbb{R}^4 = \ker N_1^2(\cdot) \oplus Q_3^1$, complete the basis of $\ker N_1^2(\cdot)$ up to a basis of $\mathbb{R}^4$; the completion may be done with any vector from $\mathbb{R}^4 \setminus \ker N_1^2(\cdot)$, for example with the vector $v_1 = (1,0,0,0)$.

The set $\{(1,1,0,0), (-1,0,1,0), (1,0,0,1), (1,0,0,0)\}$ is linearly independent, so it is a basis of $\mathbb{R}^4 = \ker N_1^3(\cdot)$ in which the first three vectors are a basis for $\ker N_1^2(\cdot)$; so $Q_3^1 = \operatorname{span}\{(1,0,0,0)\}$.

$v_1 = (1,0,0,0) \in \mathbb{R}^4 = \ker N_1^3(\cdot) \Rightarrow N_1^3(v_1) = 0$; $v_1 \notin \ker N_1^2(\cdot) \Rightarrow N_1^2(v_1) \neq 0$. From $N_1^3(v_1) = 0$ we get $N_1(N_1^2(v_1)) = 0$, so that $N_1^2(v_1) \in \ker N_1(\cdot)$; similarly $N_1^2(N_1(v_1)) = 0$, so that $N_1(v_1) \in \ker N_1^2(\cdot)$:
$$\{0\} = \ker N_1^0(\cdot) \subseteq \underset{\ni\, N_1^2(v_1)}{\ker N_1(\cdot)} \subseteq \underset{\ni\, N_1(v_1)}{\ker N_1^2(\cdot)} \subseteq \underset{\ni\, v_1}{\ker N_1^3(\cdot) = \mathbb{R}^4}.$$
Compute:
$$v_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad N_1(v_1) = B v_1 = \begin{bmatrix} 0 \\ -1 \\ -1 \\ 0 \end{bmatrix}, \qquad N_1^2(v_1) = B^2 v_1 = \begin{bmatrix} -2 \\ -2 \\ 2 \\ 2 \end{bmatrix};$$
the vectors $v_1$, $N_1(v_1)$, $N_1^2(v_1)$ correspond to the cell of order 3.

To complete the basis, another vector has to be chosen to correspond to the cell of order 1, which means from $\ker N_1(\cdot)$; it should be linearly independent with the vector already chosen from $\ker N_1(\cdot)$, which means with $N_1^2(v_1)$. We have
$$\ker N_1(\cdot) = \operatorname{span}\left\{\begin{bmatrix} 2 \\ 1 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\} \quad\text{and}\quad N_1^2(v_1) = \begin{bmatrix} -2 \\ -2 \\ 2 \\ 2 \end{bmatrix} = -2\begin{bmatrix} 2 \\ 1 \\ -1 \\ 0 \end{bmatrix} + 2\begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \in \ker N_1(\cdot).$$
Choose $v_2 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}$.

The Jordan basis is
$$J = \{N_1^2(v_1), N_1(v_1), v_1, v_2\} = \left\{\begin{bmatrix} -2 \\ -2 \\ 2 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ -1 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\}.$$
The matrix of $N_1(\cdot)$ in this basis has as columns the representations of the images through $N_1(\cdot)$ of the basis vectors:
$$N_1(N_1^2(v_1)) = 0 \Rightarrow [N_1(N_1^2(v_1))]_J = (0,0,0,0)^t; \qquad N_1(N_1(v_1)) = N_1^2(v_1) \Rightarrow [N_1(N_1(v_1))]_J = (1,0,0,0)^t;$$
$$N_1(v_1) = N_1(v_1) \Rightarrow [N_1(v_1)]_J = (0,1,0,0)^t; \qquad N_1(v_2) = 0 \Rightarrow [N_1(v_2)]_J = (0,0,0,0)^t.$$
So the matrix of $N_1(\cdot)$ in the basis $J$ is
$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
The connection between the nilpotent linear operator and the restriction of the original linear operator (in this case, exactly the original linear operator) is $N_1(x) = (U_1 - \lambda_1 I_4)(x) \Rightarrow U_1(x) = (N_1 + \lambda_1 I_4)(x)$, so the matrix of the original linear operator in $J$ is:

$$\begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} + 1 \cdot \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
The change-of-basis matrix from the standard basis to the basis $J$ is:
$$C = \begin{bmatrix} -2 & 0 & 1 & 1 \\ -2 & -1 & 0 & 0 \\ 2 & -1 & 0 & 0 \\ 2 & 0 & 0 & 1 \end{bmatrix} \ \Rightarrow\ C^{-1} = \begin{bmatrix} 0 & -\frac{1}{4} & \frac{1}{4} & 0 \\ 0 & -\frac{1}{2} & -\frac{1}{2} & 0 \\ 1 & -1 & 1 & -1 \\ 0 & \frac{1}{2} & -\frac{1}{2} & 1 \end{bmatrix},$$
and the Jordan canonical form is:
$$J = C^{-1} A C = \begin{bmatrix} 0 & -\frac{1}{4} & \frac{1}{4} & 0 \\ 0 & -\frac{1}{2} & -\frac{1}{2} & 0 \\ 1 & -1 & 1 & -1 \\ 0 & \frac{1}{2} & -\frac{1}{2} & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 & 0 \\ -1 & 3 & 0 & 1 \\ -1 & 0 & -1 & 1 \\ 0 & -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} -2 & 0 & 1 & 1 \\ -2 & -1 & 0 & 0 \\ 2 & -1 & 0 & 0 \\ 2 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
[the Jordan decomposition of the initial matrix is $A = C J C^{-1}$].

We observe the structure of the Jordan cells:
$$\begin{bmatrix} 1 & 1 & 0 & \vert & 0 \\ 0 & 1 & 1 & \vert & 0 \\ 0 & 0 & 1 & \vert & 0 \\ \hline 0 & 0 & 0 & \vert & 1 \end{bmatrix} \quad\text{or}\quad \begin{bmatrix} 1 & \vert & 0 & 0 & 0 \\ \hline 0 & \vert & 1 & 1 & 0 \\ 0 & \vert & 0 & 1 & 1 \\ 0 & \vert & 0 & 0 & 1 \end{bmatrix}$$
(the ordering of the cells depends on the ordering of the vectors in the Jordan basis).
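The decomposition can be re-checked numerically; a minimal sketch (Python with NumPy, added for illustration), assuming the matrices $A$ and $C$ as computed above:

```python
import numpy as np

A = np.array([[1, 1, 1, 0], [-1, 3, 0, 1], [-1, 0, -1, 1], [0, -1, -1, 1]])
C = np.array([[-2, 0, 1, 1], [-2, -1, 0, 0], [2, -1, 0, 0], [2, 0, 0, 1]])
J = np.linalg.inv(C) @ A @ C
print(np.round(J, 10))
# [[1. 1. 0. 0.]
#  [0. 1. 1. 0.]
#  [0. 0. 1. 0.]
#  [0. 0. 0. 1.]]
```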

2.6.37. Example. Use the previous Jordan decomposition to solve the linear homogeneous ordinary differential equations system
$$\begin{cases} x_1' = x_1 + x_2 + x_3 \\ x_2' = 3x_2 - x_1 + x_4 \\ x_3' = x_4 - x_3 - x_1 \\ x_4' = x_4 - x_3 - x_2. \end{cases}$$
The matrix form of the system is:
$$\begin{bmatrix} x_1' \\ x_2' \\ x_3' \\ x_4' \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 0 \\ -1 & 3 & 0 & 1 \\ -1 & 0 & -1 & 1 \\ 0 & -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}.$$
Replace the matrix with its Jordan decomposition $A = C J C^{-1}$ (from the previous example), multiply the equality with $C^{-1}$ to the left, and apply the change of variables $y = C^{-1} x$ (so that $y' = C^{-1} x'$); the system in the new variables $y$ is:
$$\begin{bmatrix} y_1' \\ y_2' \\ y_3' \\ y_4' \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} y_1 + y_2 \\ y_2 + y_3 \\ y_3 \\ y_4 \end{bmatrix}.$$
Solve the system in $y$ (from the last equations upwards):
$$y_4(t) = c_4 e^t, \quad y_3(t) = c_3 e^t, \quad y_2(t) = (c_3 t + c_2) e^t, \quad y_1(t) = \Big(c_3 \frac{t^2}{2} + c_2 t + c_1\Big) e^t,$$
that is,
$$\begin{bmatrix} y_1(t) \\ y_2(t) \\ y_3(t) \\ y_4(t) \end{bmatrix} = \begin{bmatrix} e^t & t e^t & \frac{1}{2} t^2 e^t & 0 \\ 0 & e^t & t e^t & 0 \\ 0 & 0 & e^t & 0 \\ 0 & 0 & 0 & e^t \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix}.$$
Return to the original variables, $x = C y$:
$$\begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \end{bmatrix} = \begin{bmatrix} -2 & 0 & 1 & 1 \\ -2 & -1 & 0 & 0 \\ 2 & -1 & 0 & 0 \\ 2 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} e^t & t e^t & \frac{1}{2} t^2 e^t & 0 \\ 0 & e^t & t e^t & 0 \\ 0 & 0 & e^t & 0 \\ 0 & 0 & 0 & e^t \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} c_4 e^t - 2c_1 e^t + c_3 (e^t - t^2 e^t) - 2 c_2 t e^t \\ -c_3 (t^2 e^t + t e^t) - c_2 (e^t + 2 t e^t) - 2 c_1 e^t \\ c_3 (t^2 e^t - t e^t) - c_2 (e^t - 2 t e^t) + 2 c_1 e^t \\ 2 c_1 e^t + c_4 e^t + 2 t c_2 e^t + t^2 c_3 e^t \end{bmatrix}.$$
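The same solution can be obtained from the matrix exponential, since $x(t) = e^{At} x(0)$ and $e^{At} = C e^{Jt} C^{-1}$, with $e^{Jt}$ computed cell by cell exactly as above. A small SymPy sketch (the initial condition is a hypothetical example, not from the notes):

```python
from sympy import Matrix, symbols, simplify

t = symbols('t')
A = Matrix([[1, 1, 1, 0], [-1, 3, 0, 1], [-1, 0, -1, 1], [0, -1, -1, 1]])
expAt = (A * t).exp()          # SymPy computes e^{At} via the Jordan form internally
x0 = Matrix([1, 0, 0, 0])      # hypothetical initial condition
print(simplify(expAt * x0))    # each component is a combination of e^t, t e^t, t^2 e^t
```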

2.6.38. Example (Two eigenvalues with multiplicity 2). [Problem 2.23, pag. 50, [17]] Consider $U(\cdot): \mathbb{R}^4 \to \mathbb{R}^4$ with the matrix representation in the standard basis
$$A = \begin{bmatrix} 0 & 2 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix};$$
determine the Jordan canonical form and a Jordan basis.

Step 1: the matrix is $A$ above.

Step 2: The eigenvalues: $\det(A - \lambda I_4) = \lambda^4 - 2\lambda^2 + 1 = (\lambda - 1)^2 (\lambda + 1)^2 = 0 \Rightarrow \lambda_1 = 1$, $n_1 = 2$, and $\lambda_2 = -1$, $n_2 = 2$.

Step 3: Find the dimension of the eigenset for $\lambda_1 = 1$. Consider the linear operator $(U - \lambda_1 1_{\mathbb{R}^4})^2(\cdot) = (U - 1_{\mathbb{R}^4})^2(\cdot)$; its matrix is $(A - I_4)^2$:
$$A - I_4 = \begin{bmatrix} -1 & 2 & 0 & -1 \\ 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix} \overset{\text{Not.}}{=} B \text{ (the matrix has rank 3)},$$
$$(A - I_4)^2 = B^2 = \begin{bmatrix} 3 & -4 & -1 & 2 \\ -2 & 3 & 0 & -1 \\ 1 & -2 & 1 & 0 \\ 0 & 1 & -2 & 1 \end{bmatrix} \text{ (the matrix has rank 2)}$$
(by raising to further powers, the rank of the matrix $B^k$ stays the same).

The set $V_{\lambda_1} = \ker(U - \lambda_1 I)^{n_1}(\cdot) = \ker(U - 1_{\mathbb{R}^4})^2(\cdot)$ is the set of solutions of the system $B^2 x = 0$, meaning
$$x_1 = 3a - 2b, \quad x_2 = 2a - b, \quad x_3 = a, \quad x_4 = b;$$
with $v_1 = (3,2,1,0)^t$ and $v_2 = (2,1,0,-1)^t$ we get $V_{\lambda_1} = \operatorname{span}(v_1, v_2)$, while the set $B_1 = \{v_1, v_2\}$ is a basis for $V_{\lambda_1}$.

Step 4: Determine the restriction $U_1(\cdot)$ of $U(\cdot)$ to $V_{\lambda_1}$, $U_1(\cdot): V_{\lambda_1} \to V_{\lambda_1}$, $U_1(x) = U(x)$, $\forall x \in V_{\lambda_1}$:
$$U(v_1) = A v_1 = (4,3,2,1)^t = 2 v_1 + (-1) v_2 \ \Rightarrow\ [U(v_1)]_{B_1} = \begin{bmatrix} 2 \\ -1 \end{bmatrix}$$
(the scalars $2$ and $-1$ may be found from the system $U(v_1) = a v_1 + b v_2$);
$$U(v_2) = A v_2 = (3,2,1,0)^t = 1 \cdot v_1 + 0 \cdot v_2 \ \Rightarrow\ [U(v_2)]_{B_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
(the scalars $1$ and $0$ may be found from the system $U(v_2) = a v_1 + b v_2$). We get $U_1(\cdot): V_{\lambda_1} \to V_{\lambda_1}$,
$$[U_1(x)]_{B_1} = \begin{bmatrix} 2 & 1 \\ -1 & 0 \end{bmatrix} [x]_{B_1} \quad \text{in the basis } B_1 = \{v_1, v_2\} \text{ of } V_{\lambda_1}.$$

Step 5: The nilpotent linear operator $N_1(\cdot): V_{\lambda_1} \to V_{\lambda_1}$ (attached to the eigenvalue $\lambda_1$) is the restriction over $V_{\lambda_1}$ of $(U - \lambda_1 I_4)(\cdot)$ (or $(U_1 - \lambda_1 I_2)(\cdot)$ over $V_{\lambda_1}$). In the basis $B_1$, $N_1(\cdot)$ has the matrix
$$\begin{bmatrix} 2 & 1 \\ -1 & 0 \end{bmatrix} - 1 \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}.$$
Remark that the matrix is nilpotent: $\begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix}^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.

Step 6: Find the chain of kernels (in $B_1$-coordinates): $\{0\} = \ker N_1^0(\cdot) \subseteq \ker N_1(\cdot) \subseteq \ker N_1^2(\cdot) = V_{\lambda_1}$:
$$\begin{array}{c|c|c|c}
\text{linear operator} & \text{matrix} & \text{kernel} & \text{dimension} \\ \hline
N_1^0(\cdot) & I_2 & \{0\} & m_0^1 = 0 \\[2pt]
N_1(\cdot) & \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} & \operatorname{span}\left\{\begin{bmatrix} 1 \\ -1 \end{bmatrix}\right\} & m_1^1 = 1 \\[2pt]
N_1^2(\cdot) & \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} & \operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\} = \mathbb{R}^2 & m_2^1 = 2
\end{array}$$
The kernel of $N_1(\cdot)$ is the set of all solutions of the system
$$\begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
which means all the vectors of the form $\begin{bmatrix} a \\ -a \end{bmatrix}$. Calculate
$$r_1 = \min\{k:\ \dim \ker N_1^k(\cdot) = \dim V_{\lambda_1} = n_1 = 2\}$$
to get $r_1 = 2$, and for each $k = \overline{1,r_1}$ consider the decomposition $\ker N_1^k(\cdot) = \ker N_1^{k-1}(\cdot) \oplus Q_k^1$:
$$\{0\} = \ker N_1^0(\cdot) \subseteq \ker N_1(\cdot) \subseteq \ker N_1^2(\cdot) = \mathbb{R}^2, \qquad \ker N_1^2(\cdot) = \ker N_1(\cdot) \oplus Q_2^1.$$
Complete the set $\left\{\begin{bmatrix} 1 \\ -1 \end{bmatrix}\right\}$ (which is a basis for $\ker N_1(\cdot)$) up to a basis for $\ker N_1^2(\cdot) = \mathbb{R}^2$, for example with the vector $[u_2]_{B_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ (any vector from $\ker N_1^2(\cdot) \setminus \ker N_1(\cdot)$ may be chosen). Then
$$[u_1]_{B_1} = [N_1(u_2)]_{B_1} = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \in \ker N_1(\cdot).$$
Remark: if the chosen vector were, for example, $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$, then $N_1\left(\begin{bmatrix} 1 \\ 1 \end{bmatrix}\right) = \begin{bmatrix} 1 & 1 \\ -1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ -2 \end{bmatrix}$, which is from $\ker N_1(\cdot)$, but is not exactly the same vector $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$, only a certain linear combination of it.

The Jordan basis in $V_{\lambda_1}$ for $N_1(\cdot)$ is $\{u_1, u_2\}$, where:
$$N_1(u_1) = 0 \Rightarrow [N_1(u_1)]_{\{u_1,u_2\}} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad N_1(u_2) = u_1 \Rightarrow [N_1(u_2)]_{\{u_1,u_2\}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$
We get $[N_1(x)]_{\{u_1,u_2\}} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} [x]_{\{u_1,u_2\}}$, which means that $\{u_1, u_2\}$ is the Jordan basis for $N_1(\cdot)$ while $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ is the Jordan canonical form for $N_1(\cdot)$.

Repeat the previous steps for the next eigenvalue, $\lambda_2 = -1$, with multiplicity $n_2 = 2$.

Step 3: Find the dimension of the set attached to $\lambda_2 = -1$.

Consider $(U - \lambda_2 1_{\mathbb{R}^4})^2(\cdot) = (U + 1_{\mathbb{R}^4})^2(\cdot)$; its matrix is $(A + I_4)^2$:
$$A + I_4 = \begin{bmatrix} 1 & 2 & 0 & -1 \\ 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix} \overset{\text{Not.}}{=} C \text{ (rank 3)},$$
$$(A + I_4)^2 = C^2 = \begin{bmatrix} 3 & 4 & -1 & -2 \\ 2 & 3 & 0 & -1 \\ 1 & 2 & 1 & 0 \\ 0 & 1 & 2 & 1 \end{bmatrix} \text{ (rank 2)}$$
(if we continue raising to successive powers, the corresponding rank remains the same).

The set $V_{\lambda_2} = \ker(U - \lambda_2 I)^{n_2}(\cdot) = \ker(U + 1_{\mathbb{R}^4})^2(\cdot)$ is the set of all solutions of the system $C^2 x = 0$:
$$x_1 = 3a + 2b, \quad x_2 = -2a - b, \quad x_3 = a, \quad x_4 = b;$$
with $v_3 = (3,-2,1,0)^t$ and $v_4 = (2,-1,0,1)^t$ we get $V_{\lambda_2} = \operatorname{span}(v_3, v_4)$; the set $B_2 = \{v_3, v_4\}$ is a basis in $V_{\lambda_2}$.

Step 4: Find the restriction $U_2(\cdot)$ of $U(\cdot)$ to $V_{\lambda_2}$, $U_2(\cdot): V_{\lambda_2} \to V_{\lambda_2}$, $U_2(x) = U(x)$, $\forall x \in V_{\lambda_2}$:
$$U(v_3) = A v_3 = (-4,3,-2,1)^t = -2 v_3 + v_4 \ \Rightarrow\ [U(v_3)]_{B_2} = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$
(the scalars $-2$ and $1$ may be found from the system $U(v_3) = a v_3 + b v_4$);
$$U(v_4) = A v_4 = (-3,2,-1,0)^t = -v_3 + 0 \cdot v_4 \ \Rightarrow\ [U(v_4)]_{B_2} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$$
(the scalars $-1$ and $0$ may be found from the system $U(v_4) = a v_3 + b v_4$). We get $U_2(\cdot): V_{\lambda_2} \to V_{\lambda_2}$, $[U_2(x)]_{B_2} = \begin{bmatrix} -2 & -1 \\ 1 & 0 \end{bmatrix} [x]_{B_2}$, with $B_2 = \{v_3, v_4\}$ a basis of $V_{\lambda_2}$.

Step 5: The nilpotent linear operator $N_2(\cdot): V_{\lambda_2} \to V_{\lambda_2}$ (attached to the eigenvalue $\lambda_2$) is the restriction over $V_{\lambda_2}$ of the linear operator $(U - \lambda_2 I_4)(\cdot)$ (or the linear operator $(U_2 - \lambda_2 I_2)(\cdot)$ over $V_{\lambda_2}$). In the basis $B_2$ the linear operator $N_2(\cdot)$ has the matrix
$$\begin{bmatrix} -2 & -1 \\ 1 & 0 \end{bmatrix} + 1 \cdot \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix}.$$
Observe that the matrix is nilpotent: $\begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix}^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.

Step 6: Find the chain of kernels $\{0\} = \ker N_2^0(\cdot) \subseteq \ker N_2(\cdot) \subseteq \ker N_2^2(\cdot) = V_{\lambda_2}$:
$$\begin{array}{c|c|c|c}
\text{linear operator} & \text{matrix} & \text{kernel} & \text{dimension} \\ \hline
N_2^0(\cdot) & I_2 & \{0\} & m_0^2 = 0 \\[2pt]
N_2(\cdot) & \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} & \operatorname{span}\left\{\begin{bmatrix} 1 \\ -1 \end{bmatrix}\right\} & m_1^2 = 1 \\[2pt]
N_2^2(\cdot) & \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} & \operatorname{span}\left\{\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right\} = \mathbb{R}^2 & m_2^2 = 2
\end{array}$$
The kernel of $N_2(\cdot)$ is the set of all solutions of the system $\begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$, which means all the vectors of the form $\begin{bmatrix} a \\ -a \end{bmatrix}$. Find
$$r_2 = \min\{k:\ \dim \ker N_2^k(\cdot) = \dim V_{\lambda_2} = n_2 = 2\},$$
from where we get $r_2 = 2$, and for each $k = \overline{1,r_2}$ consider the decomposition $\ker N_2^k(\cdot) = \ker N_2^{k-1}(\cdot) \oplus Q_k^2$:
$$\{0\} = \ker N_2^0(\cdot) \subseteq \ker N_2(\cdot) \subseteq \ker N_2^2(\cdot) = \mathbb{R}^2, \qquad \ker N_2^2(\cdot) = \ker N_2(\cdot) \oplus Q_2^2.$$
Complete the set $\left\{\begin{bmatrix} 1 \\ -1 \end{bmatrix}\right\}$ (which is a basis for $\ker N_2(\cdot)$) up to a basis of $\ker N_2^2(\cdot) = \mathbb{R}^2$, for example with the vector $[u_4]_{B_2} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ (any vector from $\ker N_2^2(\cdot) \setminus \ker N_2(\cdot)$ may be chosen). Then
$$[u_3]_{B_2} = [N_2(u_4)]_{B_2} = \begin{bmatrix} -1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \end{bmatrix} \in \ker N_2(\cdot).$$
The Jordan basis in $V_{\lambda_2}$ for the linear operator $N_2(\cdot)$ is $\{u_3, u_4\}$, where:
$$N_2(u_3) = 0 \Rightarrow [N_2(u_3)]_{\{u_3,u_4\}} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad N_2(u_4) = u_3 \Rightarrow [N_2(u_4)]_{\{u_3,u_4\}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}.$$
We get $[N_2(x)]_{\{u_3,u_4\}} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} [x]_{\{u_3,u_4\}}$, which means that $\{u_3, u_4\}$ is a Jordan basis for $N_2(\cdot)$ while $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ is the Jordan canonical form for the linear operator $N_2(\cdot)$.

Assemble all the obtained results in the initial basis of $\mathbb{R}^4$ for both eigenvalues.

For $\lambda_1$: $v_1 = (3,2,1,0)^t$, $v_2 = (2,1,0,-1)^t$, $B_1 = \{v_1, v_2\}$ is a basis of $V_{\lambda_1}$, $[u_1]_{B_1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$, $[u_2]_{B_1} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$; $\{u_1, u_2\}$ is a Jordan basis in $V_{\lambda_1}$.

For $\lambda_2$: $v_3 = (3,-2,1,0)^t$, $v_4 = (2,-1,0,1)^t$, $B_2 = \{v_3, v_4\}$ is a basis in $V_{\lambda_2}$, $[u_3]_{B_2} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$, $[u_4]_{B_2} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$; $\{u_3, u_4\}$ is a Jordan basis in $V_{\lambda_2}$.

The Jordan basis of the initial linear operator $U(\cdot)$ is:
$$u_1 = 1 \cdot v_1 - 1 \cdot v_2 = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 0 \end{bmatrix} - \begin{bmatrix} 2 \\ 1 \\ 0 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \qquad u_2 = 1 \cdot v_1 + 0 \cdot v_2 = \begin{bmatrix} 3 \\ 2 \\ 1 \\ 0 \end{bmatrix},$$
$$u_3 = (-1) \cdot v_3 + 1 \cdot v_4 = -\begin{bmatrix} 3 \\ -2 \\ 1 \\ 0 \end{bmatrix} + \begin{bmatrix} 2 \\ -1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 1 \\ -1 \\ 1 \end{bmatrix}, \qquad u_4 = 1 \cdot v_3 + 0 \cdot v_4 = \begin{bmatrix} 3 \\ -2 \\ 1 \\ 0 \end{bmatrix}.$$
The change-of-basis matrix (denote it $S$, to avoid a clash with the notation $C = A + I_4$ above) and its inverse are:
$$S = \begin{bmatrix} 1 & 3 & -1 & 3 \\ 1 & 2 & 1 & -2 \\ 1 & 1 & -1 & 1 \\ 1 & 0 & 1 & 0 \end{bmatrix}, \qquad S^{-1} = \begin{bmatrix} -\frac{1}{4} & 0 & \frac{3}{4} & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{4} & -\frac{1}{4} & -\frac{1}{4} \\ \frac{1}{4} & 0 & -\frac{3}{4} & \frac{1}{2} \\ \frac{1}{4} & -\frac{1}{4} & -\frac{1}{4} & \frac{1}{4} \end{bmatrix}.$$
Verification (the Jordan decomposition of the initial matrix, $A = S J S^{-1}$):
$$J = S^{-1} A S = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{bmatrix}.$$
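As before, the result can be cross-checked with SymPy (a minimal sketch; note that the ordering of the eigenvalues inside $J$ returned by the library may differ from the ordering chosen above):

```python
from sympy import Matrix

A = Matrix([[0, 2, 0, -1],
            [1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0]])
P, J = A.jordan_form()
print(J)   # a 2x2 cell for lambda = -1 and a 2x2 cell for lambda = 1
```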

2.6.39. Example. Usage of the previous decomposition for solving the attached linear homogeneous system of ordinary differential equations:
$$\begin{cases} \dot x_1 = 2 x_2 - x_4 \\ \dot x_2 = x_1 \\ \dot x_3 = x_2 \\ \dot x_4 = x_3. \end{cases}$$
The matrix form:
$$\begin{bmatrix} \dot x_1 \\ \dot x_2 \\ \dot x_3 \\ \dot x_4 \end{bmatrix} = \begin{bmatrix} 0 & 2 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}$$
$\Rightarrow$ use the Jordan decomposition $A = S J S^{-1}$ from the previous example: $\dot x = S J S^{-1} x \Rightarrow S^{-1} \dot x = J (S^{-1} x)$.

Change of variables: $y = S^{-1} x$ (so that $\dot y = S^{-1} \dot x$); the initial system becomes:
$$\begin{bmatrix} \dot y_1 \\ \dot y_2 \\ \dot y_3 \\ \dot y_4 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 1 \\ 0 & 0 & 0 & -1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} y_1 + y_2 \\ y_2 \\ y_4 - y_3 \\ -y_4 \end{bmatrix}$$
$\Rightarrow$ decoupling over the eigenvalues:
$$\begin{bmatrix} \dot y_1 \\ \dot y_2 \end{bmatrix} = \begin{bmatrix} y_1 + y_2 \\ y_2 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} \dot y_3 \\ \dot y_4 \end{bmatrix} = \begin{bmatrix} y_4 - y_3 \\ -y_4 \end{bmatrix}.$$
Solve the systems:
$$y_2 = k_2 e^t; \quad \dot y_1 = y_1 + k_2 e^t \Rightarrow \dot y_1 e^{-t} - y_1 e^{-t} = k_2 \Rightarrow (y_1 e^{-t})' = k_2 \Rightarrow y_1 e^{-t} = k_2 t + k_1 \Rightarrow y_1 = (k_2 t + k_1) e^t;$$
$$y_4 = k_4 e^{-t}; \quad \dot y_3 = -y_3 + k_4 e^{-t} \Rightarrow \dot y_3 e^t + y_3 e^t = k_4 \Rightarrow (y_3 e^t)' = k_4 \Rightarrow y_3 e^t = k_4 t + k_3 \Rightarrow y_3 = (k_4 t + k_3) e^{-t}.$$
Obtain the matrix form of the solution:
$$\begin{bmatrix} y_1(t) \\ y_2(t) \\ y_3(t) \\ y_4(t) \end{bmatrix} = \begin{bmatrix} (k_2 t + k_1) e^t \\ k_2 e^t \\ (k_4 t + k_3) e^{-t} \\ k_4 e^{-t} \end{bmatrix} = \begin{bmatrix} e^t & t e^t & 0 & 0 \\ 0 & e^t & 0 & 0 \\ 0 & 0 & e^{-t} & t e^{-t} \\ 0 & 0 & 0 & e^{-t} \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix};$$
because $y = S^{-1} x$, we get $x = S y$, and the solution:
$$\begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \end{bmatrix} = \begin{bmatrix} 1 & 3 & -1 & 3 \\ 1 & 2 & 1 & -2 \\ 1 & 1 & -1 & 1 \\ 1 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} e^t & t e^t & 0 & 0 \\ 0 & e^t & 0 & 0 \\ 0 & 0 & e^{-t} & t e^{-t} \\ 0 & 0 & 0 & e^{-t} \end{bmatrix} \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix} = \begin{bmatrix} k_1 e^t + k_2 (3 e^t + t e^t) - k_3 e^{-t} + k_4 (3 e^{-t} - t e^{-t}) \\ k_1 e^t + k_2 (2 e^t + t e^t) + k_3 e^{-t} - k_4 (2 e^{-t} - t e^{-t}) \\ k_1 e^t + k_2 (e^t + t e^t) - k_3 e^{-t} + k_4 (e^{-t} - t e^{-t}) \\ k_1 e^t + t k_2 e^t + k_3 e^{-t} + t k_4 e^{-t} \end{bmatrix}.$$

2.6.40. Example (Two eigenvalues, one with multiplicity 2). Consider the linear operator $U(\cdot)$ given by the matrix
$$A = \begin{bmatrix} 1 & 1 & 1 \\ 3 & -4 & -3 \\ -4 & 7 & 6 \end{bmatrix} \in \mathcal{M}_{3 \times 3}(\mathbb{R})$$
in the standard basis. Find the Jordan canonical form and a Jordan basis.

The eigenvalues are the solutions of the characteristic equation: $\det(A - \lambda I_3) = 0 \Rightarrow -(\lambda + 1)(\lambda - 2)^2 = 0$.

For $\lambda = -1$, the eigenset is $V_1 = \{x \in \mathbb{R}^3;\ (A + I_3) x = 0\} = \{\alpha (0,1,-1);\ \alpha \in \mathbb{R}\}$, while the algebraic and geometric dimensions of the eigenvalue are both 1.

For $\lambda = 2$, the eigenset is $V_2 = \{\alpha (1,0,1);\ \alpha \in \mathbb{R}\}$ and the algebraic dimension of the eigenvalue is 2 while the geometric dimension is 1, so that the linear operator is not diagonalizable.

Consider the linear operator $N_2(\cdot)$ with the matrix in the standard basis
$$B_2 = A - 2 I_3 = \begin{bmatrix} -1 & 1 & 1 \\ 3 & -6 & -3 \\ -4 & 7 & 4 \end{bmatrix}.$$
The kernel of $N_2(\cdot)$ is $\ker N_2(\cdot) = V_2 = \{\alpha (1,0,1);\ \alpha \in \mathbb{R}\}$.

The linear operator $N_2^2(\cdot)$ has the matrix
$$B_2^2 = \begin{bmatrix} 0 & 0 & 0 \\ -9 & 18 & 9 \\ 9 & -18 & -9 \end{bmatrix}$$
and the kernel $\ker N_2^2(\cdot) = \{\alpha (1,0,1) + \beta (0,-1,2);\ \alpha, \beta \in \mathbb{R}\}$.

The linear operator $N_2^3(\cdot)$ has the matrix
$$B_2^3 = \begin{bmatrix} 0 & 0 & 0 \\ 27 & -54 & -27 \\ -27 & 54 & 27 \end{bmatrix}$$
and its kernel is $\ker N_2^3(\cdot) = \ker N_2^2(\cdot)$, so that the chain of kernels $\ker N_2(\cdot) \subseteq \ker N_2^2(\cdot) = \ker N_2^3(\cdot)$ stops (stabilizes) at the exponent 2:
$$\{0\} \subseteq \ker N_2(\cdot) \subseteq \ker N_2^2(\cdot) = \ker N_2^3(\cdot).$$

Choose a complement $Q_2^2$ of $\ker N_2(\cdot)$ in $\ker N_2^2(\cdot)$, for example $Q_2^2 = \{\alpha (0,-1,2);\ \alpha \in \mathbb{R}\}$, and a basis in $Q_2^2$, for example $v' = (0,-1,2)$. Then
$$N_2(v') = \begin{bmatrix} -1 & 1 & 1 \\ 3 & -6 & -3 \\ -4 & 7 & 4 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \in \ker N_2(\cdot).$$
The Jordan basis is $\{(0,1,-1), (1,0,1), (0,-1,2)\}$; the change-of-basis matrices are
$$C = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & -1 \\ -1 & 1 & 2 \end{bmatrix} \quad \text{and} \quad C^{-1} = \begin{bmatrix} -1 & 2 & 1 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{bmatrix}.$$
The Jordan canonical form is
$$C^{-1} A C = \begin{bmatrix} -1 & 2 & 1 \\ 1 & 0 & 0 \\ -1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \\ 3 & -4 & -3 \\ -4 & 7 & 6 \end{bmatrix} \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & -1 \\ -1 & 1 & 2 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{bmatrix}.$$
In the Jordan basis, the matrix is a Jordan matrix with two blocks:
$$\begin{bmatrix} -1 & \vert & 0 & 0 \\ \hline 0 & \vert & 2 & 1 \\ 0 & \vert & 0 & 2 \end{bmatrix}.$$

Remarks:

The linear operator $N_2(\cdot)$ has the matrix $B_2 = A - 2I_3$ above, and the linear operator $N_2(\cdot)$ is not nilpotent.

The kernel $\ker N_2^2(\cdot)$ is $N_2(\cdot)$-invariant and has the basis $\{(1,0,1), (0,-1,2)\}$, because:
$$\begin{bmatrix} -1 & 1 & 1 \\ 3 & -6 & -3 \\ -4 & 7 & 4 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = 0 \cdot (1,0,1) + 0 \cdot (0,-1,2) \quad \text{and} \quad \begin{bmatrix} -1 & 1 & 1 \\ 3 & -6 & -3 \\ -4 & 7 & 4 \end{bmatrix} \begin{bmatrix} 0 \\ -1 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} = 1 \cdot (1,0,1) + 0 \cdot (0,-1,2).$$
The restriction of the linear operator $N_2(\cdot)$ to $\ker N_2^2(\cdot)$, $N_2^r(\cdot): \ker N_2^2(\cdot) \to \ker N_2^2(\cdot)$, has the matrix $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ in the basis $\{(1,0,1), (0,-1,2)\}$. The linear operator $N_2^r(\cdot)$ is nilpotent, because $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}^2 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.
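A final cross-check of this example (a minimal SymPy sketch, using the matrices computed above):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1],
            [3, -4, -3],
            [-4, 7, 6]])
C = Matrix([[0, 1, 0],
            [1, 0, -1],
            [-1, 1, 2]])
print(C.inv() * A * C)   # Matrix([[-1, 0, 0], [0, 2, 1], [0, 0, 2]])
```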

Bibliography
[1] Allen, Roy George Douglas: "Mathematical Analysis for Economists", Macmillan and Co., London, 1938.
[2] Andreescu, Titu; Andrica, Dorin: "Complex Numbers from A to...Z", Birkhäuser, Boston, 2006.
[3] Anton, Howard; Rorres, Chris: "Elementary Linear Algebra - Applications version", Tenth edition, Wiley, 2010.
[4] Bădin, Luiza; Cărpuşcă, Mihaela; Ciurea, Grigore; Şerban, Radu: "Algebră Liniară: Culegere de Probleme", Editura ASE, 1999.
[5] Bellman, Richard: "Introducere în analiza matricială" (traducere din limba engleză), Editura Tehnică, Bucureşti, 1969. (Original title: "Introduction to Matrix Analysis", McGraw-Hill Book Company, Inc., 1960.)
[6] Benz, Walter: "Classical Geometries in Modern Contexts - Geometry of Real Inner Product Spaces", Second Edition, Springer, 2007.
[7] Blair, Peter D.; Miller, Ronald E.: "Input-Output Analysis: Foundations and Extensions", Cambridge University Press, 2009.
[8] Blume, Lawrence; Simon, Carl P.: "Mathematics for Economists", W. W. Norton & Company Inc., 1994.
[9] Bourbaki, N.: "Éléments de mathématique", Paris, Acta Sci. Ind., Hermann & Cie, 1953.
[10] Buşneag, Dumitru; Chirteş, Florentina; Piciu, Dana: "Probleme de Algebră Liniară", Craiova, 2002.
[11] Burlacu, V.; Cenuşă, Gh.; Săcuiu, I.; Toma, M.: "Curs de Matematici", Academia de Studii Economice, Facultatea de Planificare şi Cibernetică Economică, remultiplicare, uz intern, Bucureşti, 1982.
[12] Chiriţă, Stan: "Probleme de Matematici Superioare", Editura Didactică şi Pedagogică, Bucureşti, 1989.
[13] Chiţescu, I.: "Spaţii de funcţii", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1983.
[14] Colojoară, I.: "Analiză matematică", Editura Didactică şi Pedagogică, Bucureşti, 1983.
[15] Crăciun, V. C.: "Exerciţii şi probleme de analiză matematică", Tipografia Universităţii Bucureşti, 1984.
[16] Cristescu, R.: "Analiză funcţională", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1983.
[17] Drăguşin, C.; Drăguşin, L.; Radu, C.: "Aplicaţii de algebră, geometrie şi matematici speciale", Editura Didactică şi Pedagogică, Bucureşti, 1991.
[18] Glazman, I. M.; Liubici, I. U.: "Analiză liniară pe spaţii finit dimensionale", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1980.
[19] Golan, Jonathan S.: "The Linear Algebra a Beginning Graduate Student Ought to Know", Third edition, Springer, 2012.
[20] Greene, William H.: "Econometric Analysis", Sixth edition, Prentice Hall, 2003.
[21] Guerrien, B.: "Algèbre linéaire pour économistes", Economica, Paris, 1991.
[22] Halanay, Aristide; Olaru, Valter Vasile; Turbatu, Stelian: "Analiză Matematică", Editura Didactică şi Pedagogică, Bucureşti, 1983.
[23] Holmes, Richard B.: "Geometric Functional Analysis and its Applications".
[24] Ion, D. Ion; Radu, N.: "Algebră", Editura Didactică şi Pedagogică, Bucureşti, 1991.
[25] Kurosh, A.: "Cours d'algèbre supérieure", Éditions MIR, Moscou, 1980.
[26] Leung, Kam-Tim: "Linear Algebra and Geometry", Hong Kong University Press, 1974.
[27] Ling, San; Xing, Chaoping: "Coding Theory: A First Course", Cambridge University Press, 2004.
[28] McFadden, Daniel: Course notes for Economics 240B (Econometrics), Second Half, 2001 (class website, PDF).
[29] Monk, J. D.: "Mathematical Logic", Springer-Verlag, 1976.
[30] Pavel, Matei: "Algebră liniară şi Geometrie analitică: culegere de probleme", UTCB, 2007.
[31] Rădulescu, M.; Rădulescu, S.: "Teoreme şi probleme de Analiză Matematică", Editura Didactică şi Pedagogică, Bucureşti, 1982.
[32] Rockafellar, R. Tyrrell: "Convex Analysis", Princeton University Press, Princeton, New Jersey, 1970.
[33] Roman, Steven: "Advanced Linear Algebra", Third Edition, Springer, 2008.
[34] Saporta, G.; Ştefănescu, M. V.: "Analiza datelor şi informatică - cu aplicaţii la studii de piaţă şi sondaje de opinie", Editura Economică, 1996.
[35] Şabac, I. Gh.: "Matematici speciale", vol. I, II, Editura Didactică şi Pedagogică, Bucureşti, 1981.
[36] Şilov, G. E.: "Analiză matematică (Spaţii finit dimensionale)", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1983.
[37] Strang, Gilbert: "Introduction to Linear Algebra", Third Edition, Springer, 2003.
[38] Tarrida, Agustí Reventós: "Affine Maps, Euclidean Motions and Quadrics", Springer, 2011.
[39] Thompson, Anthony C.: "Minkowski Geometry", Cambridge University Press, 1996.
[40] Treil, Sergei: "Linear Algebra Done Wrong", http://www.math.brown.edu/~treil/papers/LADW/book.pdf, last accessed on 18.12.2011.
[41] Weintraub, Steven H.: "Jordan Canonical Form: Application to Differential Equations", Morgan & Claypool, 2008.
[42] Weintraub, Steven H.: "Jordan Canonical Form: Theory and Practice", Morgan & Claypool, 2009.
