Professional Documents
Culture Documents
OUTLINE
SERIES
THEORY aiMl>ROBLEMS
of
MATRICES
by
FRANK AYRES,
JR.
including
Completely Solved
in
Detail
SCHAUM PUBLISHING
NEW YORK
CO.
SCHAVM'S OUTLINE OF
THEORY
AI\[D
OF
PROBLEMi;
MATRICES
BY
FRANK AYRES,
JR., Ph.D.
Department of Mathematics
Dickinson College
^C^O
-^
('opyright
AH
Rights Reserved.
Printed in the
be reproduced,
United
States
No part
of this publication
may
02656
Preface
Elementary matrix algebra has now become an integral part of the mathematical background necessary for such diverse fields as electrical engineering and education, chemistry and sociology, as well as for statistics and pure mathematics. This book, in presenting the more
rial,
is
essential matedesigned primarily to serve as a useful supplement to current texts and as a handy reference book for those working in the several fields which require some knowledge of matrix theory. Moreover, the statements of theory and principle are sufficiently complete that the book could be used as a text by itself.
The material has been divided into twenty-six chapters, since the logical arrangement is thereby not disturbed while the usefulness as a reference book is increased. This also permits a separation of the treatment of real matrices, with which the majority of readers will be concerned, from that of matrices with complex elements. Each chapter contains a statement of pertinent definitions, principles, and theorems, fully illustrated by examples. These, in turn, are followed by a carefully selected set of solved problems and a considerable number of supplementary exercises.
The beginning student in matrix algebra soon finds that the solutions of numerical exercises are disarmingly simple. Difficulties are likely to arise from the constant round of definition, theorem, proof. The trouble here is essentially a matter of lack of mathematical maturity,'
and
normally to be expected, since usually the student's previous work in mathematics has been concerned with the solution of numerical problems while precise statements of principles and proofs of theorems have in large part been deferred for later courses. The aim of the present
book
lems
is
in
if he persists through the introductory paragraphs and solved probany chapter, to develop a reasonable degree of self-assurance about the material.
The solved problems, in addition to giving more variety to the examples illustrating the theorems, contain most of the proofs of any considerable length together with representative shorter proofs. The supplementary problems call both for the solution of numerical exercises
and
for proofs.
Some
more important, however, are the many theorems whose proofs require but a few lines. Some are of the type frequently misnamed "obvious" while others will be found to call for considerable
None should be treated lightly, however, for it is due precisely to the abundance of such theorems that elementary matrix algebra becomes a natural first course for those seeking to attain a degree of mathematical maturity. While the large number of these problems in any chapter makes it impractical to solve all of them before moving to the next, special attention
ingenuity.
is
much
directed to the supplementary problems of the first two chapters. mastery of these will do to give the reader confidence to stand on his own feet thereafter.
Schaum
Company
Frank Ayres,
CarHsle, Pa.
Jr.
October, 1962
CONTENTS
Page
Chapter
MATRICES
Equal matrices. Products by partitioning.
Matrices.
Sums
of matrices.
Products of matrices.
Chapter
10
Triangular matrices.
Chapter
20
Minors
Chapter
EVALUATION OF DETERMINANTS
Expansion along a row or column. The Laplace expansion. Expansion along the first row and column. Determinant of a product. Derivative
of a determinant.
32
Chapter
EQUIVALENCE
Non-singular and singular matrices. Elementary transformations. Inverse of an elementary transformation. Equivalent matrices. Row canonical form. Normal form. Elementary matrices. Canonical sets under equivalence. Rank of a product.
39
Rank
of a matrix.
Chapter
49
adjoint.
The adjoint
of a product.
Minor of an
Chapter
55
Chapter
FIELDS
Number
fields.
64
General
fields.
Sub-fields.
Matrices over a
field.
CONTENTS
Page
Chapter
67
Chapter
10
LINEAR EQUATIONS
rule.
75
System of non-homogeneous equations. Solution using matrices. Cramer's Systems of homogeneous equations.
Chapter
11
VECTOR SPACES
Vector spaces.
section
85
space.
Sub-spaces. Basis and dimension. Sum space. InterNull space of a matrix. Sylvester's laws of nullity.
Chapter
12
LINEAR TRANSFORMATIONS
Singular and non-singular transformations. space. Permutation matrix.
94
Change of
basis.
Invariant
Chapter
13
100
Chapter
14
110
Complex numbers.
Chapter
15
CONGRUENCE
Congruent symmetric matrices. Canonical forms of real symmetric, skew-symmetric, Hermitian, skew-Hermitian matrices under congruence.
Congruent matrices.
115
Chapter
16
BILINEAR FORMS
Matrix form.
formations.
125
Chapter
17
QUADRATIC FORMS
Matrix form. Transformations. Canonical forms. Lagrange reduction. Sylvester's law of inertia. Definite and semi-definite forms. Principal minors. Regular form. Kronecker's reduction. Factorable forms.
131
CONTENTS
Page
Chapter
lo
HERMITIAN FORMS
Matrix form.
definite forms.
146
Canonical forms.
Definite
Transformations.
and semi-
Chapter lif
149
Chapter
20
SIMILARITY
Similar matrices.
156
Reduction to triangular form.
Diagonable matrices.
Chapter
21
163
Chapter
22
172
common
divisor.
Least
common
of polynomials. multiple.
Unique factorization.
Chapter
2o
LAMBDA MATRICES
The X-matrix or matrix polynomial. Sums, products, and quotients. Remainder theorem. Cayley-Hamilton theorem. Derivative of a matrix.
179
Chapter
24
188
Elementary
divisors.
Chapter
25
196
and
non-
derogatory matrices.
Minimum
Derogatory
Chapter
26
203
INDEX
215
INDEX OF SYMBOLS
219
'
chapter
Matrices
l'
3
1
(a)
1
and
(b)
-1
5 is
called a matrix.
The matrix
(a) could
be
or
i2x + 3y = 7
-
\x-
y = 5
Later,
trix (b)
we
shall see
how
the matrix
may be used
or
systems.
The ma-
we might consider
its
(2, 1,4),
origin or
such questions as whether or not the three points on the same line through the origin.
lie in the
In the matrix
^11 ^12
'^IS
'
(1.1)
the numbers or functions a^- are called its elements. In the double subscript notation, the first
subscript indicates the row and the second subscript indicates the column in which the element
stands.
Thus,
ra"
all
as
first
matrix of
said to be of
"m by
We
or
mxra.
(
(In
),
||
||,
are
sometimes
used.
"
or
"the
mxn
4".
matrix A =
When
we
n, (1.1) is
elements
a^,
022.
its
diagonal elements.
The sum
of the diagonal
MATRICES
[CHAP.
[bij]
B)
if
and only
if
they have the same order and each element of one is equal to the corresponding element of the
and only
^^J
if
'V
if
a
and only
= 1,2,
if
ro;
1,
n)
one
is a
ZERO MATRIX. A
which
is zero, is
When ^
is a zero
we
mxn
SUMS OF MATRICES.
is
If
4 = [a^A and S =
matrix
[fe.^-]
are two
mxn
matrices, their
is the
sum sum
(difference),
A B,
defined as the
mxn
A
corresponding elements of
C =
(difference) of the
'\
31
1
Example
1.
It
14
and =
2
I
0'
-12
2
1)
1
then
5
A
and
+B
Lo+(-
+3 +2
+ 5j
1-2
O-(-l)
2-3
1-2
3-0 4-5
or
Two
subtraction.
(a)
Two
and
matrices of different orders cannot be added or subtracted. (b) above are non-conformable for addition and subtraction.
The sum
of k matrices
is
A and each
of its elements is k
We
define:
If
tinguish it from [k] which is a 1x1 matrix) then by kA = Ak A by multiplying each of its elements by k.
Example
2.
If
[:
'11
I
then
A+A + A
and
-2 3A
A-3
3
-5A
r-5(l)
-5(-2)"|
r-5
L-10
10-1
L-5(2)
In particular,
-5(3) J
-15j
by -A, called the negative of /4, is meant the matrix obtained from A by mulby -1 or by simply changing the sign of all of its elements. For every A, we have A +(-A) = 0, where indicates the zero matrix of the same order as A.
tiplying each of its elements
Assuming
(a)
(b) (c)
for addition,
we
state:
(d)
A + B = B + A (commutative law) A + (B+C) = (A + B)+C (associative law) k(A + B) = kA + kB = (A + B)k, A- a scalar There exists a matrix D such that A + D = B.
These laws are a result of the laws of elementary algebra governing the addition of numbers and polynomials. They show, moreover,
1.
of addition
CHAP.
1]
MATRICES
MULTIPLICATION. By
the product
fell
AB
Ixm
a^m]
and
the
mxl
^31
matrix
fi
is
fesi
+ aimfemi]
fell
fe^i
That
is,
[an
Oi
'^imj
L^ii
fell
"*"
^12
feji
imi]
-|^^2
a^^fe^^J
femi
Note that the operation is row by column; each element of the row is multiplied into the corresponding element of the column and then the products are summed.
1'
Example
3.
(a)
[2
4]
[7]
L
(b)
2.
-2 [3
-1
4]
[-6 - 6+ 12]
3J
By
is
the product
AB
mxp
p
xn matrix B =
=
[bij]
meant the
mxn
matrix
C =
\c;;'\
<-
where
"-ip i>,pj
-'
Hj
^^^"ikbkj
(f
1, 2,
,m-;
1,
re).
Think of A as consisting of m rows and B as consisting of n columns. In forming C = AB each row of A is multiplied once and only once into each column of B. The element c^ of C is then the product of the ith row of A and the /th column of B.
Example
4.
^11
*^ 1
'011611 + 012621
B
."31
[611
O32
621
612I 622J
021611+022^21
."31611+032621
^12 +
'^22
^22
"31612+002600
The product ^S is defined or A is conformable to B for multiplication only when the number columns of A is equal to the number of rows of S. If ^ is conformable to B for multiplication {AB is defined), B is not necessarily conformable to A for multiplication (BA may or may not be
of
^^^ined)-
See Problems
that
3-4.
Assuming
(e)
(/")
A,B,C
=
A(B + C) = AB
(A +
B)C
AC
AC BC
(g)
A(BC) = (AB)C
(associative law)
However,
(A) (i) (/)
AB AB AB
i=
BA, generally,
= =
AC
0,
See Problems
3-8.
MATRICES
[CHAP.
PRODUCTS BY PARTITIONING.
into n matrices of order pxl.
Let
A=
[a^J]
be of order
mxp
and
8=
m
[b^j]
be of order pxn.
In
For example,
A and 6 be
parti-
(pixnj)
(p2Xra2)
(p3Xre2)
(mixpi)
(m]^xp2) (m,2Xp2)
(m-Lxps)
(P2X%)
Jmgxpi)
I
(m2Xp3)_
(psxni)
Am
A^i
A^2
I
Aig "21
A,
I
O22
"32
"31
In
in exactly the
necessary that the columns of A and the rows of 6 be partitioned m^, mg, re^, rig may be any non-negative (including 0) integers and rei+ ^2 = n. Then
it
is
^8
^llOll + '4i2021
""
'4l3"31
A-2_iiB^^
A-^-^B-12
+ A-^qBqq + '^13832
Cii
C12
c
A21B11 + 422^21 +
_C21
1^22.
1110
and B
1
Examples.
Compute
/4S, given
/I
3
1
110
12
fill
Partitioning so that
11
^12
2
3
'0
11
A21 A22
and
B
B21
^12
111
2 2
1
1
10
S22
3
1
I
we have
AB
[A^-iB-i^i
+ A-12B21
A-ij^B'12 + A-i,
=
1
AqiB^^ + ^22^21
1 1 1 1 1 1
1"
^21^12 *
-^2^
B,
2
1
i4'
+ [l][2
'
1]
[2]
[3
[1
3]
0]
[?
+ [l][2]
[1
0]
2
3-1
[4
To
o]
To]
Tol
4
7
4 7
3
3 5
5 4
5JL0J
2] [2]
5 2
2
_[l
1]
[2
1]
[0]+[2]
[3
9.
(pixpi)
(P2X pi)
(piX P2)
^" ^ (P2X Ps)
All
A21
A-^2
'
""
' j
(P2XPS)
I
A22
T
...
I
.(PsXpi)
(psXp2)
(PsXPs)
/loo
and
let
8, C,
...
be partitioned in exactly the same manner. Then sums, differences, and products the matrices A^i, A^2< : Sii. 5i2. - ^n, C12
CHAP.
1]
MATRICES
SOLVED PROBLEMS
1
2-10
2
1 2_
1.
(a)
4
2
'l
-5
2
2-2 3-1
15
-4
2I
3
=
ri + 3
2+(-4)
-1 +
1
4+1
2
+2
0+5 -5 + (-2)
2+0
+3
0+2 1+3
2 + (-l)
4-202 5 2 4 5 4-741
-2
-1
2
O'
1
3-412
1
(b)
2-5
1
12
2
2-2 3-1
3
1
0-2
1-3 2+1
-2
2
3-5
-3 -2
-1
6-3
6
3
6,
(c)
4
2
"l
12
.
-5
2
2.
-15
-1
2
o"
1
(d)
4
2
-5
-1 -2 1 -4 -2 -1 -2 5 -1 -2
-3
2.
If
3 5
4 6
and 6
-2 -5
3
P
find
1
s
D =
r
t
such that A + B -
T)
0.
1-3-p
If
A-\-B-D
3+1-r
_5+4-
2-p
-
-9
"0
0"
4-r 9-
-1-s
9-u_
=
.0
-2-p
and p = -2,
4-r
and
= 4,
Then D
=
-
4
9
-1
9.
A +
3. (a)
[4
6]
[17]
2(4)
(b)
2(5)
2(6)-
8 =
10
15
12 18
[4
e]
3(4)
3(5)
3(6)
12
[J
4
(c)
-1(4)
-1(5)
-1(6).
.-4 -5 -6
[1
3]
5
[
-6 -7
8
6"
10
-11 -8
1
1 (6)
+ 2(7) + 3(-8)]
[19 4
-4 -4]
r2(l) + 3(2)+4(3)1
_
'
poT
L29J
l(-4) + 2(5) + 1(2)]
(rf)
C
l]
(e)
"gl
2J
2J
_2
8] [s -I2J
{2
.
-1
1
1"
Let
<4
=
.1
2
1.
Then
'2
A""
-1
1
\2
-1
1
r
2
1-
"5
-3
1
r
4
2.
5
and
2 3
-3 f
1
-1
1
1'
11
=
=
1
2
1
2 _3
4
2
.1
2
1.
8-18
-8
b
show
-1
-1
.8-43
The reader
will
that
MATRICES
[CHAP.
5.
Show
2
that:
? 2
(b)
1=17=1
2
a-J
j=i 1=1
b^hCj^j)
^
22
3
a,-,-,
(c)
2:
k=i
2 (")
aife(
S
h=t
h=i
=
2 (2
k=i
aif,bkh)chj
^
J,
ife(*fei +'^fej)
"il^^i +
2
'^lj)
+ '^t2(*2i+'^2i)
2
(il^i+''i2*2i)
(il<^17+'^i2'^2i)
2 (6)
2 2
t=i j=i
a-''
2
i=i
(an + 0^2 +
(Oil + O21)
2 2
a^g)
+ (a^t + 092 +
<223)
+ (12 + "22)
2
+ (ai3 + "23)
2
i=l
a-
"
2
i
a + a. = l ^2 i = i 13
=22
3
o--
j = ii = i
This is simply the statement that in summing all of the elements of a matrix, one may sum elements of each row or the elements of each column.
2
3
first
the
2 =
(c)
^_j
^-^
(oil =
k=l
3
,^
222
a-,(bi, c
tfe^
fei
ij
6,
&,
fee
2JI
fe3
O 37'
fell
oifefefei)cii
^
a{fefefe2)c2f
-^
(2
fe=i
fe=i
ai<^b^^)c3j **
-^
/.l/li^^^'fe'^^'^'^r
6.
is of order
mxn
of
and
if
S =
[fe^-] ^
and C =
nxp, then
/}(B +
C)
The elements
feijj+cij,
of the
j'th
row
are a^^
a^^,
...
+C
sum
are
is Of
62J+
c'2j
fenj
+%j- Then
A(B + C)
the
%i)
S^^iifefefej +^2^aifeC;^j,
7.
mxn,
if
S =
and
if
C =
[c.-A is of order
-^
pxo
The elements
p
P
of the
j
throw
of /4 areai^.a^g, ...,a-
of
BC
are
2
h=
i
6ih c^
P
b^f^Cf^-
'iJ'
2
P
b^j^c^j-,
P + tn
ait^^b^chj +ai2^^b2h%j
=
J^
*n/i<;/ij
P n ^^^^j_"ik''kh)<^hj
n
=
n
+ (X^aif^bf^2)<:QJ +
n
+ (^^"ikbkp)<^pj
(2^^0ifefefei)cij
This
is the
j'th
(AB)C.
8.
in
= =
and then
(/), (e),
Using
and then
(A+B)(C+D) (A+B)(C+D)
+B)C
+ (A
+B)D
BC
BD.
CHAP.
1]
MATRICES
"l
o'
1
1
l'
"l
10
1 1 1 1
'l
o'
1 1
"4
2"
9. (a)
1 1
2 3
_3
[3
2]
2 3
4
6
6 9
3
3
4
7
2_
'1
10
2
'
10
1
0'
'1
10
1
1
10
1
o"
[0]
2
:]
10
4
i
oil
1
(b)
!
10 3 10
5 6
:o]
[0
1
[0:
[0]
3]
]
j2
1
10
10
[0]
3
1
[0
"][o
2 3
12
10
18
"1
0'
4
5 6 7 6 5
5 6
2 3
4'
4
5
5
6_
1 1
-4- i
0^0
1
2 3
3|4
4 15
5
1
4 3
1
2 3
1 1
3
1
2
1
7 8
5
3
1
2
1
3 4
9
2
1
1
5 6
6 7 6
7"
2
1 1
(c)
1
2
1
4
9 8
6
7
2
1
5
8.
2
1
8 5
2
1
_0
+i
-^1
8 7
[l]-[8 7]
[l]-[6 5 4]
9
11'
"13"
[1]-[1]
'7
10 31
11
13
13
16
39"
19
41"
20
13
33 22
13
"35
24
.13
37 26
13
5
31
28
13
4]
30
.13.
20
13
8
33 22
13
7
10 35 24
13
6
13
16
19
37 26
13
5
39 41 28 30
13 13
1
[6
[l]
%i
10.
Hiyi+
os yi +
"1272
11^1
.J1
0]^2^2
Let
%2 %3
be a
O21 Zi +
h-2
% 2 72
new coordinates
(z^, zg).
The
result of applying
=
~
(o-Li
6^j +
"*"
aj^2
621) Zi
1) ^1
+
"*"
(on ii2 +
(^2 ("3
1
'^12^22) ^2
^2 2 ^22) ^2
Z2
-.
((^2 1 C^l 1
^2 2 ^2
^12
"*"
X3
("31^11 + ^32^21) Zl
*i
%1
"21
^3
1
012
"22
r-
Vl
Vr,
'^3
The
022
"11
"2 Os
1
%2
"^2 ^^3
pri ii2
^2
1
-1
/,
^',
2 2
622
in n variables
,
'
MATRICES
[CHAP.
SUPPLEMENTARY PROBLEMS
"l
-3'
2
,
-1
2
2I
4
,
2I 2
3_
Given A =
5
.1
4
_2
5
3_
and C =
-1
1_
J -2
-3
I
4
(a)
-f
7
4j
,
-51
Compute: A + B =
9
_3
A-C
=
_
5-3
t-2j
-1
-2 -4
(6)
Compute:
-2-4
-10
-2
(c)
(d)
-4 -2
0-S
Verify:
A + {B~C)
(A+B)-C.
that
D such
1
A+D^B.
1
Verify that
=B-A
-(A-B).
-11
-1
2
1
6 12 6
12.
Given A =
generally.
-3
-1
and B
2
1
4
2
6
3
compute
AB
= Q
and
BA
-22 -11
-2
-1 -2 -1
Hence, ABi^BA
\ -3
13.
2
,
Given A =
-3 -3 -1
1
111 1-212
C
1
14
10
and C
2 3
1-1-2
-2 -1 -1
show
that
AB
AC. Thus, AB
AC
2-5-1
-1
3
,
3 2
,
14.
Given A =
2 3
and C
^["12
[2
3 -4"!
ij
-1
-1
0-2
=
show
that
(AB)C
A(BC).
15.
11,
show
that
A(B + C)
+ B^
AB
AC
and
(A+B)C
AC +BC.
(ABf ^ A^
2AB
and A^
- B^ ^ (A-B)(A+B).
-3 -5
5
17.
Given A =
-14
1
-13
B
= 0,
1
-2 -4
4
-3 -5
5
and
-3 -4
=
-13
AC
= A, that
-13
1
-2 -3
(a) (b)
show
that
AB
BA
CA
= C. =
(a) to
show
ACB
CS^,
A'^
B"^ =
(A
-B)(A +
B),
(A
Bf
= A'^ + b'
18.
Given
Ans.
where
2
i
powers of A
I,
A, -I,
-A according
as n
=4p,
4p-l-l,
ip +
2,
4p-^3, where
/=
n
[0
ij-
19.
Show
[
any two
or
more
-'"'[;;][-:;] [rM'i
-"]
[0 -]- ["0 3-
[? 0]
20.
Given the matrices A of order mx, B of order nxp, and C of order rx^, under what conditions on p, 9, and r would the matrices be conformable for finding the products and what is the order of each" (a) ABC (b)ACB, (c)A(B + C)?
Ans.
(a)
p =
r;
mx-q
(b) r = n = g;
x p
(c)
= n, p = q;
x g
CHAP.
1]
MATRICES
21.
10
(a)
11
1
j
10
and
10
j
B
1
"
1 1 1
Ans.
1 1
2
1
0^
1
o"
1
(b)
and
1
B
--1
Ans.
m
4
1
12
1
10
'
10
'2
(c)
A
1
and
1
I
B
1
1
i i
Ans
2
10
2
22. Prove:
(a) trace (A
+ B) = trace A + trace B,
A.
h V,
=
-2
1
1 1
2n+r.-3y.
722"]
Y^l
^;^^^l^
y,\
[2
-3
[2
-3
2 2
-1
3
r -zi +
[-221 - 622J'
24. If
-4
= [aiji
m
=
x n
and
if
show
,n).
that
(/4+B)C
=^C
the
+ BC.
25. Let
/4
= [a^-] and
[fej,-;,]
where
(i
1,
m;
1,
p;
A;
1, 2, ...
Denote by
/S-
sum
of
Pi
182
j)^.
Show
Pi
is the
sum
of the
to
in
Problems 12 and
26.
13.
Determinative
Reflexive
(II)
(iii)
Symmetric
Transitive
(iv)
If a is in
c.
is called an
equivalence relation.
similarity of triangles,
Show
relations.
Show
27.
Show
that conform ability for addition of matrices is an equivalence relation while con form ability for multi-
plication is not.
28. Prove:
If .4, B,
AC
CA and BC
CB,
then
(AB
BA)C
C(AB
BA).
chapter 2
Some Types
of Matrices
for
i>j
is called
upper triangu-
a^=
for
i<j
is called
lower triangular.
Thus
"12
tip o
"13
tZo o
"2n
is
03S
is
lower triangular.
The matrix D
Ogj
which
is
is call-
...
a,
ed a diagonal matrix.
It
'
^nn )
See Problem
If
1.
D above,
Oi^ = Ogs =
ci
k,
D
'1
is
if.
and
is
denoted by
For example
t"]
When
/+/ +
...
and
1
1
the order is evident or immaterial, an identity matrix will be denoted by /. Clearly, to p terms = p / = diag (p,p,p = I-I ... to p factors = /. Identity map) and
[1 .
2
c
3I
(5
,
then l2-A =
A.L
hAh
A, as the reader
may
readily show.
10
CHAP.
2]
11
If
A and B
AB
is a
is
any ra-square
2.
commutes with
L
See Problem
If
that
AB
-BA,
A,
k-t-i
th^ matrices
A:
A and 6
for
which A
,fe+i
where
k is
A,
k.
i =
1,
See Problems
3-4.
matrix
for
which
4=0,
aP
where p
0,
is
If
p is the
5-6.
ther
is said to
be nilpotent of index
See Problems
If A and B are square matrices such that AB = BA = I, then B is called the inverse of A and we write B = A'^ (B equals A inverse). The matrix B also has A as its inverse and we may write A = B~^
1
3 3
-2 -3
1
1
o'
/,
Example
1.
Since
-1
=
( 1
12
the other.
-1
We
shall find later (Chapter?) that not every square matrix has an inverse. A has an inverse then that inverse is unique.
We can show
See Problem
7.
If A and B are square matrices of the same order with inverses A^^ and B~^ respectively, then (AB)''^ = B'^-A~^, that is,
1. The inverse of the product of verse order of these inverses.
tijvo
See Problem
8.
A
utory.
matrix
A such
An
An
matrix of order
an mxn matrix A
is
nxm obtained by interchanging the rows and A and is denoted by A' (A transpose). For
4'
^[-e]
IS
i' =
5
6.
aIr
in the ith
row
in the /th
A and
B,
and
(b)
if
A:
is
(A'Y =
and
(kAy = kA'
In
Problems 10 and
II.
11,
we
prove:
The transpose
of the
sum
sum
(A +
BY
A'+ S'
12
[CHAP. 2
and
in.
The transpose
i.e.,
of the product of
two matrices
=
B'-A'
is the
transposes,
(AB)'
SYMMETRIC MATRICES. A
matrix
1
A =
2
a^j]
is
square matrix A such that A'= A is called symmetric. symmetric provided o^-- = a,for all values of i and ,^
,
/.
4-5 3-5 6
2 In
is
for
any scalar
k.
Problem
IV.
If
13,
we prove
an ra-square matrix, then A + A'
is
is
symmetric.
A square matrix A such that A'= -A is called skew-symmetric. Thus, a square matrix A is skew-symmetric provided a^.- = -Uj^ for all values of i and /. Clearly, the diagonal elements are
"
-2
2
3"
zeroes.
For example,
is
is
kA
for
any scalar
k.
-3 -4
With only minor changes in Problem 13, we can prove
V.
If
is
A' is
skew-symmetric.
Let a and b be real numbers and let i = V-1 then, z = a+ hi is The complex numbers a+bi and a-bi are called conjugates, each the other. If z = a+ bi, its conjugate is denoted by z = a+ hi.
;
z^
= a + bi
and
Z2
z^
a~
bi,
then
Z2 =
z-^
a-
bi
= a+
bi,
z.i=a+bi and
z-L
Z2
= c+
di,
then
z^
+ Zg = (a+c) + (b+d)i
and
of
z^
= (a+c) - (b+d)i =
is the
(a-bi) + (c-di)
of their conjugates.
sum
sum
(ii)
z^- Zg
= (ac-bd) + (ad+bc)i
z^^
numbers
When i is a matrix having complex numbers as elements, the matrix obtained from A by replacing each element by its conjugate is called the conjugate of /4 and is denoted by A (A conjugate).
l
+
3
2i
-21
3 2
Example 2. When A
then
=
3i
2-3i
If
A and B
is
= A
and
(X4) = ~k-A
Using
(i)
and
(ii)
above,
we may prove
CHAP.
2]
13
_VII. The conjugate of the sum of two matrices is the sum of their conjugates, (A + B) = A + B.
i.e.,
same
order, of
is
It
is
sometimes written as
A*.
A, i.e.,
of the conjugate of
is
Example
3.
Prom Example
1
- 2i
-i
(AY
=
2+
3i
while
+
i
2i
A'
2
and
[l-2i
(A')
3 2
1
=
(AY
Zi
-i
+ 3iJ
HERMITIAN MATRICES. A
is
square matrix Hermitian provided a^j = uj^ for Hermitian matrix are real numbers.
1
^ =
all
[0^^]
values of
A'
= A
is called Herraitian.
Thus,
/I
1-i
3
2
i
+j
2
is Hermitian.
-I
Is
kA Hermitian
square matrix
if
^ =
such that
for all
1= -A
i
is
called skew-Hermitian.
/.
Thus, ^
is
skew-
Hermitian provided
Clearly, the diagonal elements of a skew-Hermitian matrix are either zeroes or pure imaginaries.
i
a^^ = -o^--
values of
and
l-i
3i
i
2
i
-1-t
-2
is
skew-Hermitian.
Is
kA skew-Hermitian
it
k is any real
By making
X.
If
we may
prove
Z is
Hermitian and
A-T is
skew-Hermitian.
DIRECT SUM.
ization
Let A^, A^
,ms.
The general-
^1 A^
...
...
diag(^i,^2
...
As)
A,
sum
of the Ai
14
[CHAP. 2
-1
3
Examples. Let
/4i=[2], ^5
B:}
and An
2 4
1
-2
2
12
The
direct
sum
of
A^,A^,Aq
3
is
diag(^3^,
/42, /4g)
12-1
2
1-2
1,
illustrates
...,Ag)
s),
the
and
S = diag(Si, S2
= diag(^iSj^, A^B^
^s).
then
AB
^s^s)-
SOLVED PROBLEMS
ii
Or,
"12
"22
'm
11^1
22 21
1.
Since
2i
2n
OooOoo
"^22 ''2n
the product
AB
of
''ml
"7)12
"tn
''WB "Ml
''wm"m2
^mm"nn
an m-square diagonal matrix A = diag(oii, a^^ an) and any mxn matrix B is obtained by multiplying the first row of B by %i, the second row of B by a^^- and so on.
2.
Show
and
K
P^+*'^
commute
for all
values of
o, 6, c, J.
\"
^1
"^l
'"^
'^l
["
*1
2
3.
-2 -4
4
is idempotent.
Show
that
-13
1
-2 -3_
2
-2 -4'
4
'
-2 -4"
4
-2
3
-4
4
-13
_ 1
-13
.
-2 -3_
-2 -3_
-2
-3
4.
Show
that if
AB
=
=
A and BA =
A-A
=
B,
then i and
= .4(34) =
are idempotent.
=
.45.4 =
(AB)A
A^ and
ABA
AB
then 4^ =
.4
and 4
is idempotent.
Use BAB
to
show
that
is idempotent.
CHAP.
2]
15
1 5.
3 6
is nilpotent of order 3.
Show
that
2
1
-1
3
-3
1 3" 1 1 1
5
-2
3 -1
and
A^.A
-1 -3
-2 -1 -3
-1 -3
-1 -1 -3
-2 -1 -3
6. If
is
nilpotent of index
0,
2,
show
=
that
= 0.
AOAf=
for n
=
Since /? =
A^ =
a''
...
^"
Then A{IAf
A{lnA)
AnA^
= A.
7.
Let
C.
A,B,C
Thus,
5 = C= ^
be square^ matrices such that AB = 1 and CA = 1. Then ^ is the unique inverse of ^. (What is S~^?)
iCA)B =
CUB)
so that B
8.
Prove:
(AB)^ =
definition
B~^-A'^
By
{ABf{AB)
(B' -A
(AB)(ABf
=
/.
Now
B^-I-B
=
)AB
b\a''-A)B
=
B-'.B
=
/
and
AB(B~'^-A~'-)
A(B-B''^)A~^
A-A
By Problem
7,
(AB)^
B-'.A-'
9.
Prove:
matrix
is involutory if
and only
if
(IA)(I+ A) =
I
0.
and
/I
is involutory.
Supposed
is involutory; then
A^ =
and (I-A)(I+A) =
I-A^
I-I
= 0.
10.
Prove:
Let
A'.
(A+B)' =
A' +
B'.
A = [ay] and B
= [6^].
We need
bjj_.
only check that the element in the ith row and /th column of
bj^.
and aj^+
11.
Prove:
(AB)' =
S'i'.
[6y] be
of order
is
nxp
=
then
AB
[cy]
is of order
mxp.
The
AB
cy
iy, b^j
bnj
Thus,
(AB)' =
B'A'.
12.
Prove:
(ABC)' = CB'A'.
mite
ABC
(AB)C.
Then by Problem
,
1 1
C'B'A'.
2g
[CHAP.
13.
Show
that if
A =
[o^j]
is n-square, then
B =
[bij] =
+ A' is symmetric.
First Proof.
The element
bij = aij + a^i.
oj^j]
in the ith
.4
is aij
/I'
is aji;
hence,
.4'
The element
= a^^ + aij.
is
hence,
bji
Thus,
bij = bji
and B
is
symmetric.
Second Proof.
By Problem
10,
(A+Ay
A' +
(A')'
A'+A
A + A' and
(^ +.4') is symmetric.
14. Prove:
If
A and B
AB
B'A' =
is
symmetric
if
and only
if
A and B
commute.
Suppose A and B commute so that Suppose
trices
AB
BA.
Then (AB)' =
BA
AB and AB
;
is
symmetric.
=
AB
is
Now
(AB)' = B'A' =
BA
hence,
AB
BA and
the ma-
A and B commute.
15. Prove:
P'AP
is
.4
if
is of order
mxn then B =
If If
is is
P'AP
and B
is
symmetric.
,4
-P'AP
and B is skew-symmetric.
16. Prove:
If
A and B are
A and 5 commute
if
and only
if
A~
kl and
B-
kl
commute
for
every scalar
AB
=
= B.4
and
+ k^I + k^I
=
{A-kI)(B-kI)
AB - k(A+B)
BA - k(A+B)
(B-kr)(A-kl)
Thus,
Akl
and B kl commute.
(A-kI)(B-kI)
AB - k(A+B)
BA - k(A+B)
+ k^I + k^I
=
(B
-kr\(A-kI)
AB
CHAP.
21
17
SUPPLEMENTARY PROBLEMS
17.
Show
is
BA
of an
mxn
matrix
S and ^
diag(aii,a22
a).
Hint.
See Problem
1.
19.
Show
A:
can be written as
Wand that kA
klA =Aia.g(k,k
k) A,
row order of A
a'^ a'^ = a'^
re-square,
show
that
A^ where
2-3-5
21. (a)
Show
that
-14
1
-13
and
_3 _5
3
are idempotent.
-3
-4J
[-1
(b)
Using A and B, show that the converse of Problem 4 does not hold.
is idempotent,
22.
If
show
2
that
I-A
is
AB
BA
= 0.
2
1
23. (a)
If
2
1.
show
that
.4
4.A
5! =
0.
2
2 (b) If
2
1
3 2
1
,
1
1
-1
2
show
that
-4
- 2A
- 94
0.
but
-4^
2/1
9/ /
0.
-1 -1 -1"
24.
2
1
o"
3
1
Show
that
_
1 1
-1 -1
1
=
-1
/.
-1 -1
-2 -6
2
25.
Show
that
-3
is periodic, of period 2.
0-3
-3 -4
4
is nilpotent.
26.
Show
that
-13
1
-3 -4
'12
27.
-1 -6
2 9
Show
that
(a)
and
commute.
-1 ~1 -1
_-l -1 -4_
2/3
"112'
(b)
-1/3'
and
-3/5
2/5
1/5
commute.
-12
28.
7/15 -1/5
1/15
Show
that
Bf
A^ +
B'^
29.
Show
that each of
30. Prove:
Find
all matrices
a^^).
Ans.
(a) diag(a,fe, c)
are arbitrary.
18
[CHAP. 2
3 7
I
"3-2-1
is the inverse of
32.
Show
that
(a)
-4
2
"
-1
1.
-2 -4 -5
3
is the inverse of
(b)
10 2 10 4 2 10
-2100
0-2
_ 8
10
1.
-2311
33. Set
-1 -1
[3:][:^E3
[3
4]-
^"'-
[3/2
-1/2]
are different from zero, is a
34.
Show
A and
in the
same
order.
Thus, the
-1
4 4
3"|
35.
Show
that
4-3 3-3
and
-1 0-1 -4 -4 4-3J
are involutory.
10
36. Let
10
-1
I2
/j.
by partitioning.
/I21
Show
that
A
/s
a
c
'2
0-1
(b) (kA)" = kA',
37. Prove:
(a)(A')'=A,
a positive integer.
38. Prove:
(ABCy
C'^B-^A-'^.
Hint. Write
ABC=iAB)C.
(c) (A'^Y^ =
39. Prove:
(a) (A-^)'^ =
A,
integer.
40.
Show
41. Prove:
(a)(A)=A,
1
(b) (A +
B)
I
A + B
i
(c)(kA)=kA,
(d)
{AB) = A B.
1
J
+
2
+ 3
42. Show:
(a)
2- 3
1
i
-i
is
Hermitian,
1
i
+
2i
(b)
-1 +
is
skew-Hermitian,
-2-3J
(c)
(<f )
-1
iB is Hermitian,
is Hermitian
and B is skew-Hermitian.
43.
If
is n-square,
show
that
(a)
are symmetric,
(6) A-irA',
44. Prove:
If
is
HA
is Hermitian.
45. Prove:
is real
skew-symmetric.
46. Prove:
(a)
and C
47. Prove: 48.
Every skew-Hermitian matrix A can be written as A = B + iC where B is real and skew-symmetric (6) A' A is real if and only if B and C anti-commute.
A' and
If
B\
if
Show
that for
m and
n positive integers, ^4
A and 5 commute.
CHAP.
2]
19
A A
49.
1
A
1
nA
,ra
Show
raA
(a)
(6)
2n('-l)A n-i
nX
A"
A
and
50. Prove:
If
is
symmetric
or
are symmetric.
51. Prove: If
is
symmetric so also is
a-4
+6/1^
+.-.+/
where
a, 6
integer.
52. Prove:
bfe
written as
/I
B +C where B
is
53. Prove:
is real
and skew-symmetric or
if
is
54.
Show
that the theorem of Problem 52 can be stated: Every square matrix A can be written as A =B+iC where B and C are Hermitian.
55. Prove:
If
A and B
(c)
idempotent,
BA
then
and
A'B"= B\
(b) A"
56.
If^
is involutory.
show
kO-A)
,
j(I+A)
^(I-A) =
0.
57. If the
(a)
show:
(A
Y=
(A')
(b)
(A)^ = A-\
of
(c)
(r)-^=(^-V
(A'''-)
Hint, (a)
AA~^
I.
obtain
as the inverse of
/!'
58.
Find
all
i, 2, 3),
(6)
diag(l.
1, 2, 2).
b. c
are scalars.
59. If
A^.A^
A^
mj^.m^
m^.
which commute
with
diag(^i, ^2
Ans. dia.g(B^.B^
-85
60. If
AB
= 0,
where A and B are non-zero n-square matrices, then A and B are called divisors of zero. A and B of Problem 21 are divisors of zero.
A^)
Show
61. If
...,
A
s),
diae(Ai.A2
and
di&giB^.B^
B^)
(J
1, 2,
show
=
that
A+B
AB
=
trace
diag(^i+Si,^2 + S2
=
-^s
+ Bs)
diag(^iBi. /I2S2
A^B^)
...
AB
+ trace A^B^.
62. Prove: If
^ and B
A
AB
is
symmetric
if
and only
if
A and S commute.
63. Prove: If
is n-square
and
A and B commute.
Prove that
Ci =
64.
Let A and B he n-square matrices and let ri, rg, si, S2 be scalars such that ri4+siB, C2 = r^A+SQB commute if and only if A and B commute.
risj
7^
rssi.
65.
Show
(6)
/I
that the n-square matrix A will not have an inverse when (a) A has a row (column) of zero elements or has two identical rows (columns) or (c) ^ has arow(column)whichis the sum of two other rows(columns).
66. If
A and B
(A+B)A~'^(A-B)
(A-B)A''^(A+B)
chapter 3
Determinant of a Square Matrix
PERMUTATIONS. Consider
(3.1)
123
4!
132
213
231
312
321
1234 1324
If in
2134 2314
3124 3214
4123 4213
we say that there is an a given permutation a larger integer precedes a smaller one, the permutation is (odd), even is inversion. If in a given permutation the number of inversions there is no inversince even called even (odd). For example, in (3.1) the permutation 123 is permutation the 312 is even since in sion, the permutation 132 is odd since in it 3 precedes 2,
it
2,
and 3 precedes 2. In (3.2) the permutation 4213 4 precedes 1, 4 precedes 3, and 2 precedes 1.
precedes
1
is
even since in
it
4 precedes
"12 "IS
*
Uo 1 Oo p ^03
(3.3)
-'ni
"ne "n3
and a product
(3.4)
^j, "'^L ~^3 1 ~-'2 3Jo
'^" "^n
comes from any row and one of n of its elements, selected so that one and only one element of convenience, the factors matter as a column. In from any (3.4), comes one element and only
re; the have been arranged so that the sequence of first subscripts is the natural order 1,2 of the intepermutations re! of the one some then is subscripts second sequence j^, j^ / of (Facility will be gained if the reader will parallel the work of this section bere. gers 1,2
ginning with a product arranged so that the sequence of second subscripts is in natural order.) For a given permutation /i,/2, ...,^ of the second subscripts, define ^j^j^....j^ = +1 or -1
according as the permutation is even or odd and form the signed product
(3.5)
-W-2-
k'^k ^^h
'^Jn
By
signed prodthe determinant of A, denoted by U|, is meant the sum of all the different Ul, which can be formed from the elements of A; thus.
le Jlj2p=n\
The determinant
M
re
'^h
a determinant of order
re.
20
CHAP.
3]
21
(3.6)
we have
for n
and n = 3,
CE-j^2^21
"12
12
(3.7)
^11^22
"'"
21 '^12^^21
^21
"22
11^22
and
'll
"12
"13
P*^
(3.8)
21 "31
"?
^123
ail0223S +
231
^132 Oll02s'^32
^213 Oi221<%3
"32
"S3
%2 02331
^11023032
312 Ol3'%lC'32
^321
%3 ^^2 %!
flu 022 %3
012^1033
%l(22 033 - 02S32) " "12(021 "ss " ^S^Sl) + ^IS (a2lOS2 - 022^31)
022
"^32
^3 ^3
O21
"31
053
21
<^2
^^32
+
<^33
Oi
'^Sl
Example
1
1.
21
1-4
(a) 3 2
(fc)
2-3
-2
-11
2-0
(-1)3
+ 3
3 2 3 5 10
1 1
(O
11
01
2
11
+
2
11
2(0-0
2
(d)
1
2(-l)
- 3(-2) +
5(1)
-3 -4
0-2
-5 -6
O-o!
20
-18
See Problem
1.
PROPERTIES OF DETERMINANTS.
nant
is the
Ul
is
given by (3.6).
and we have
I.
Suppose that every element of the sth row (every element of the/th column) is zero. Since every term of (3.6) contains one element from this row (column), every term in the sum is zero
If
is zero, then U| =0. Consider the transpose A' of A. It can be seen readily that every term of (3.6) can be obtained from A' by choosing properly the factors in order from the first, second, ... columns. Thus, II. If 4 is a square matrix then U'l = \A\; that is, for every theorem concerning the rows of a determinant there is a corresponding theorem concerning the columns and vice versa.
a scalar k.
fth
Denote by B the matrix obtained by multiplying each of the elements of the ith row of A by Since each term in the expansion of |5| contains one and only one element from its row, that is, one and only one element having A; as a factor,
\B\
1
P
\: , , a. , n^n \ *%'2---i'iii2J2----anj^!
.
Jc\A\
Thus,
22
[CHAP.
every element of a row (column) of a determinant U| is multiplied by a scalar k, the determinant is multiplied by k; if every element of a row (column) of a determinant \A\ has k as
III.
If
a factor then k
a 7']^]^
O\
OrQ
1
\A\
For example,
CI
t*
QO
f (X'
product in (3.6) of
(3.6) is the
Let S denote the matrix obtained from A by interchanging its ith and (i+l)st rows. Each \A\ is a product of |s|, and vice versa; hence, except possibly for signs,
expansion of
i
\b\.
In
term of
its
\b\,
before i+1 in the row subscripts is an inversion; thus, each product of (3.6) with
|s|
and
\e\
= -
\A\.
Hence,
is
obtained from A by interchanging any two adjacent rows (columns), then \b\ =
- \a\.
As a consequence
V.
VI.
|s|
If
of
Theorem
IV,
we have
its
is
\b\
= -\A\.
If
is
its ith
= (-i)^UI.
VII.
If
\A\
row of A
is
expressed as a binomial
%,
^li
b-^:
c-^j,
= 1,2,
...,n).
Then
^Jd2
in
ii2
O22
^^^ii ^
""in
Os-,-
p^kk^ Jn ^^k'^k'^k-h,. 21
"njn^
Os,
"Jn
^13
^m
02n
'In
"23
Oni
In general,
VIII.
If
n2
"na
'^nn
-^n2
'-"ns
every element of the ith row (column) of A is the sum of p terms, then
\A\
can
be expressed as the sum of p determinants. The elements in the ith rows (columns) of these p determinants are respectively the first, second, ..., pth terms of the sums and all other rows
is
is obtained from
A by adding
to the
elements of
its ith
\b\
Cfrii
T"/CCEog
{-
^22
Clf=n
Clnr
Clr^r,
Clf,',
Karin.
Cfg ^
H'Qri^
^Q'2
"^
i^Clr
Zgg +
ka^^
When from A
the elements of its ith row and /th column are removed, the determinant
or of \a\
and denoted by
\M^j\.
CHAP.
3]
23
More frequently,
cofactor of
a^
it is
and
is
The signed
minor,
(-if'*'^ \Mij\
is
called the
"V
11
12
'^SS
''13
Example
2.
If
"21
"'31
"23
''SS
^^32
M 11
and
22
"23
"21 "23
ki2
"32 "33
"31
21
22
kic
"33
"31
"32
1+1
I
1 + 21
(-1)
kll
1^11
ai2
I
(-1)
\m.
1^12!
14-31
(-1)
ki3
Then
(3.8) is
Ml
= =
aiilA/iil
O12IM12I
+
aislMigl
"11*11
ai20ii2
"130^13
In
Problem
9,
we prove
of the determinant U|, where A is the matrix of (3.3), is the sum of the products obtained by multiplying each element of a row (column) of U| by its cofactor, i.e..
X.
The value
n
(3.9)
Ul
aiiflii
a^g a^g
^in * in
^ aik'kk
n
fe?i"fei*j
(3.10)
Ml
"^if^if +
021*2/ +
a^jd^j
(ij, = 1,2
n)
Using Theorem
XI.
VII,
we can prove
of
ra-square
The sum of the products formed by multiplying the elements of a row (column) matrix A by the corresponding cofactors of another row (column) of A is zero.
If
an
Example
3.
is the matrix of
Example
2,
we have
+ +
"320^32
and
while
"310^31
+
+
"330^33
= =
Ul
'4
I
I
"12*12
"22*22
"32*32
and
"31*21 "12*13
+
+
"32*22
"22*23
+
+
"SS*23 "32*33
= =
MINORS AND ALGEBRAIC COMPLEMENTS. Consider the matrix (3.3). Let i^ arranged in h, i^ order of magnitude, be m, (1 < m < ra), of the row indices 1,2 n and let j^, j^ /^ arranged in order of magnitude, be m of the column indices. Let the remaining row and cofumn indi,
i^,^,,
i^
a separation of the row and column indices determines uniquely two matrices
a.,
,
and /+;+2
Such
'l'J'2
'^i.
Jm
.Jl.j2.-
im
(3.11)
i2.ii
''2.
72
i2 :h
^n
"^m Ji
%J2
^n
24
[CHAP.
and
''m+i'J'm+i
'-m
+ i'Jffl+s
n.7n
^ra+2>Jn
.7ra
+ i'Jm+2'
Jn
''m
+ S'Jm +
^m+2>7m+2
(3.12)
^^Jm-i-i
^n'j7ii+2
^ri'Jn
called sub-matrices of ^.
The determinant
A and the
pair of minors
Jm
J'm-'-i'Jn^-Q
Jn
''r
and
^m+l'
'-m-^2
are called
complementary minors
5 -square matrix
of A,
[iij.
"I'Z
%4
^^34
'^15
'^SS
I
i
1,3 '^2,5
'^21
I
'^23
2,4,5
~
I I
and
f^Sl
'
'^l, 3.4-
"32 "42
63
"44
^^45
Let
(3.13)
U +
q
In
+ in +
h
+
i
and
(3.14)
i-n
+1 +
fm+2 +
+ *n +
^ra
+ /m+2 + " + In
Ji, is.
J,
(-1)
A,-
complement
of
Jm-'-i'Jn-i-2
7(4-1.
'to+2
J+i'Jm,+2
7n is called
and
(-l)'^
/I.
''m,+
l>^m+2> ''-
Jm
^n
H>h
Exainple4. For the minors of Examples.
I
2+-5 + 14-31
13
Ap' g
(-1)
I
=
I I
,1.3
/i^
I 1
is
Of
,2 4 5 A.{3\i
and
l(-3 + 4-^2l-446
(-1)
.2,4,5 A-^ 34
-I
2 4 5 ^ijg',4
is
1.3
^^is
the same.
Is this
always true?
.Ji
Ji
When m =
l,
(3.11)
becomes
J2.J3.
A
Jn
is
OL-Lj.
[%ii]
Ml
i
I
and
"HJi
an element of A.
The
complementary minor
algebraic complement is the cofactor
is
called a principal
of A; the alge-
minor of A.
The complement of a principal minor of A is also a principal minor braic complement of a principal minor is its complement.
CHAP.
3]
25
H'
"?2
"11
"24
044
(^25
"15
^1.3
and
31 S3
A,
,t
0^2
045
52
are a pair of complementary principal minors of
.
"54
055
A What is the algebraic complement of each ? The terms minor, complementary minor, algebraic complement, and principal minor as defined above for a square matrix A will also be used without change in connection with U
|
SOLVED PROBLEMS
1- (")
1-1
1
!l 4|
2
2-4
- 3(-l)
11
4
(b)
5l
3
5
4 6
5 7 6
(1)
6
5 7
3 5
4
(l)(4-7
5-6)
=
+ 2(3-6
4-5)
71
-2-4
-18
-6
(c)
3
5
1
4
6
15
21
1(4-21
11
id)
3
1
5 3
1(3-3
5-1)
2.
first
of the other
columns,
1
1 1 1
-4
1
1
1 1
-4
1
1
-4
-4
1 1
1 1
-4
1
-4
1
-4
-4
by Theorem
I.
3.
to the third,
Vn
1
a
b c
h
c
1 1 1
a
b
c
1 1 1
1 1
+a
a+h
a+b+c a+b+c
(a + b + c)
1 1
4.
Adding to the third row the first and second rows, then removing the common factor 2; subtracting the second row from the third; subtracting the third row from the first; subtracting 'the first row from the second; finally carrying the third row over the other rows
26
[CHAP.
Oj^+fej
a^+b^
62 + ^2
ag+63
&3+^3
"s+^s+'^s
b-i
6^+C-L
62+ ^2
C2 + a2
^3+^3
i^+Ci
a5^+5]^+Cj^
bi + c^
b^+c^
b^+Cr
Cj+a^
C3+a3
a2+fe2+"^2
62
bi
63
62 C2
^3 Cg
a^^
02
^2 C2
flg
6^ +
C-L
62+^2
02
^3+^3
'^3
Cl
a-,
bi
C]^
63 Cg
a^
Oo 03
Oi
5.
%
^2
1
1 1
Ul
02
-(Oj^
- 02) (02 ~
'^s) ('^s
%)
Og
Og
first;
then
Oi + a2
1 1
l-2
2
fl-j^
^2
ar,
02
2
(a^
2 02)
a2
2
O2
by Theorem
III
Og
Og
Similarly,
Og
ag
and
are factors.
Now M|
letters; hence.
(i) \a\
k(a^^-a^)(a^-as)(as-a^)
A:
The product of the diagonal elements. 0^02. is a term of \a\ and, from (i), the term is -ka^a^. Thus, -l and \a\ = -{a^-a^){a^-a2){as-a^). Note that U vanishes if and only if two of the a^, og. os are
|
equal.
6.
Prove:
If
is
is
skew-symmetric and of odd order 2pskew-symmetric, -4'= -A; then Ut = -U! and Ul = 0.
1,
then Ml =
Since A
\-A
(-if^-^-UI
II,
Ul
Ul; hence,
7.
Prove:
If
is
is
Since A
and
=
\a\ =
U'| =
U| by Theorem
II.
But
'^
if
Ul
then
^iij2...j"yi%
%;
+ *'
Ul
=
|eii;2---Jn^iii"2i?--^"in
"
"
^'
Now Ul
Ul
requires 6 = 0; hence.
U|
is a real
number.
2 3
8.
2 2
12
a,,
=
(-ir'\l 12
(-ly
,2+1
?1
2 3
2,
ai2 =
(-1) 1+2
2
2 3 2
-2,
Mi3 =
1+3 (-ly
|i
11
2I
2 2
2 2 2 3
a22 = (-1)
2+2I
-1,
a,23
(-1)
2+3
1 1
2 3 2
(-ly
3+
age
(-1) 3+2I
3 2I
a 33
(-1)
3+3I
31
CHAP.
3]
27
elements
in
in the display as the element, Write the display of signs for a 5-square matrix.
whose cofactor
is required, oc-
9.
Prove: The value of the determinant U| of an re-square matrix A is the sum of the products obtained by multiplying each element of a row (column) of A by its cofactor.
We
The terms
Now
e.
kh---Jn~
^jbjb- -in
^i"''^ i"
^ P"'^tin
lJi..fe.
-in.
the
is in natural order.
Then
(a)
may
be written as
(6)
where the summation extends over the cr= (n-i)! permutations of the integers 2,3
022
(c)
n,
and hence, as
2S
<2n
"an
"ii
"11
'twill
^n2
"ns
Consider the matrix B obtained from A by moving its sth column over the first s-l columns. By Theorem \B\ = (-1) U|. Moreover, the element standing in the first row and first column of B is a^s and the minor of a^^ in B is precisely the minor \M^\ of a^s in A. By the argument leading to (c), the terms of ais mis\ are all the terms of \b\ having a^s as a factor and, thus, all the terms of (-1)^"^ having a.^
VI.
a factor.
Then
U|
as
the terms of
ais!(-if
M^/isli
Thus
(3.15)
+
"11*11
since (-1)
,s+i = (-1)
+ ai5!(-lf||/,3|! + +
...
+ ai !(-!)'*" M,
+ ai2i2
+ ain*in
We have
(3.9) with
We
U| along
its first
ments. Let
m.r,JWlTr\T B be the
the first
umn over
"' ''^ '^ ^^^^^ '^' ^^-^^ '' ' = '^ '^ '''^i"^'J by f^P^^ting the above arguJ'*' ^iT^ matrix obtained from A by moving its rth row over the first r-1 rows and then its .th col^
T~l
(-1)
s-l
(-!)
\a
(-1)
u
the minor of a^^ in ^^
first
first
column of 5
is
a and
the terms of
is yreciseiy i precisely
Then
r+fel
,l,-rkU-l)
and we have (3.9)
for
j
M.rk\
2
k=i
a^j^a. rk^rk
r.
28
[CHAP.
10.
When
oLij
A =
"12
[a^j]
show
that
%,f-i
(h,j-i
"'1
%,j4-i
02
i +
^
^'i.n
*2n
(i)
fC^GC^-j
"T
fCi^OL^j
"T
itj^dj^j
'^i
^2
"W,j-i
<%,
'^n
In making these with k^ 0^7 with Atj, This relation follows from (3.10) by replacing a^j with k^, 027 ^2J appearing is affected since none contains an element (t^j replacements none of the cofactors OLij.QL^j.
, ,
from the
th
column of A
VII, the determinant in {i) is
(i) is
I
By Theorem
(r
= 1,2 n
and
s
^
/).
j).
By Theorems Vin,
A
are replaced
when
Ay
= 1,2
and
by
/ci,/c2.
3 4
5 3
(e)
28
25
38 65
83
11. Evaluate:
(a)
04
-5
1 1
(c)
2
5
42 38
56 47
-2
2
((f)
-4
4 8
(b)
-2 1 5
-3 2 4
(a)
3 2
Cl-^^fX^Q
"f"
022^22 ^ ^32 32
ai2 +
a22 +
(-5)o;g2
-5
34.2I -5(-l)-'
1 2
I
5(4-6)
-10
3 4l
(b) Subtracting
twice the second column from the third (see Theorem IX)
1
4 8
4
1
-2 15
-3 2 4
2
3
4
1
-2 -3
4
2
1 3
3(-l)''
-3
-3(14)
first
to the third
5 3
3-3(1) 4-3(2)
1
5-3(3)
3 =
1
-2 -4
2
9 3
-2 -4
9
2
-2 5
-4
-2 +
2(1)
+ 2(2)
-4+
2(3)
(-4 + 36)
-32
in (c)
-4
3
5-11
4
-2 -3
1
27
8
-11 -41 -2 - 11
27
CHAP.
3]
29
(e)
first
column, then using TheoremIX to reduce the elements in the remaining columns
2
28
25
38
14
25
38
65
14
2
3
42 38 65
56
38
47 83
47 83
2
4 3
-2
5
14
-12
6
9
1
-14
-1
6
4-13
12.
-14(-l-54)
1
770
-1
Show
that p and q, given by (3.13) and (3.14), are either both even or both odd.
is
+ 2+-+7J) +
2+-"+n)
2-kn{n + l)
n(n +1)
or both odd.
Now
(-1)
p+q
= (-1)^
even (either n or n + 1 is even); hence, p and q are either both even and only one need be computed.
is
Thus,
12
6
13.
3 8
4
9
[..]
11
12 17 22
13
14 19
10 15
the algebraic
complement
of
Ao's
is
16
.21
18
23
24
20 25
13
16
21 18
20 25
23
'4^'4^'5|
is
I12 141
SUPPLEMENTARY PROBLEMS
14.
Show
1, 2, 3. 4,
is even,
24135
is odd,
41532
is even,
53142
is
15.
1,
2,3,4, taken together; show that half are even and half are odd.
16.
Let the elements of the diagonal of a 5-square matrix A be a.b.cd.e. diagonal, upper triangular, or lower triangular then \a\ = abcde.
Show, using
(3.6), that
when ^
is
17.
Given
-4
[j J] each product is 4.
and B = [^
6J
^^^^
^^^^
AB^BA^ A'b
4 AB' ^
a'b'
^ B'a'
18.
Evaluate, as in Problem
2
(a)
1,
-1
2-2
3 =
2 3
2 4
3
27
(b)
12
2 3
(c)
-1
-2 -3 -4
30
[CHAP.
1 2
10
9
19.
(a)
Evaluate
\A
2 3
4 5 11
(b)
Denote by B Evaluate \B
| \
-4
its
second column by
5.
Theorem
III.
(c)
Denote by
I
.4
by interchanging
its first
Evaluate
to verify
Theorem V.
1
2 7
2 3
(d)
Show
that
2 3 5
2 3 4 4 5 3
4 5 8
7
3
(e)
Obtain from \A
the determinant
|o| =
2 3
first
5-1
Evaluate
j
to verify
Theorem
IX.
subtract twice the first row from the second and four times the first row from the third. the resulting determinant.
I
U
I
Evaluate
(g) In
/I
I
it
\A
I
Compare with
(e).
Do
20.
If -4 is
is a scalar,
U^
/t^M
|.
21.
Prove: (a)
If
U
A
= k.
then
\a\ = k =
\a'\.
|
(b) If
is skew-Hermitian, then
-4
number.
22.
(a)
of adjacent
in
Theorem V
(b)
Theorem
VI.
23.
Prove Theorem VII. Hint: Interchange the identical rows and use Theorem V.
Prove:
If
24.
,4
o.
25.
Use Theorems
26.
in
Problem
11.
a b
27.
Use
(3.6) to evaluate
\A\
d
e
;
".P/.
Thus,
if
= diag(A-^. A^).
where
g h
A^,
U4i|
-1/3
28.
Show
2/3 2/3
is that element.
1/3
-3 -3
1
29.
Show
is the
numbered column.
30.
Prove: (a)
If
A A
is is
symmetric then
<Xij=
0^j^
when
j
Ot--^
(b) If
aij = (-1)
"P when
CHAP.
3]
31
31.
show
that
(b)
12
_'^13
(^^22
32
Otgg
AC = [.
known.
^23
(c) explain
why
known as soon as
a
a^
(a) is
be
32.
6^
c^
be
ca
c2
52
respectively by a,b.c
ab
ab
ca be
ca
be
the rows to
show
that
ab ca
ab
a^
33.
a
6 e
1 1 I 1
bed
a-^
a^ a
6^
aed abd
abc
(a
e^
b)(a
c^
c)(a
d)(b
c)(i
c'^
d)(e
- d).
1 1
d^ d
d d^ d
1 1
1 ... 1
1 1
1
1 1
1 1
1 1 1
1...1
1
34.
Show
a\
0...1
("-!)
1
1 1 1
n-1
(-1)
(n~l).
1 1
1 1
...
...
1
1 1
...0
re-i
n-2
a^
1
ra-i
n-2
ar, 1
35.
Prove:
S(i
2)(i
as).
(ai-a^H(a2- 03X02-04)...
(02
-a^j
7i-i
- "ni
n-i
ra-2
na^+b-^
36.
na^+b^
nb^ + c^
na^+bg nb^+Cg
ncQ + a^
(n
O]^
02
62
O3 is
Cg
nb-j^+e^
nc-i^+a.^
+ l)(rf--n +
1)
61 Ci
nCQ+a^
Cg
X
37. Without expanding,
xb x-c
has
as a root.
show
x-i-a
x+b +6
a
38.
x+c
a
a
a+ b
Prove
,n-i
b
{na + b).
+6
chapter 4
Evaluation of Determinants
determinants of orders two and three are found in Chapters. In Problem 11 of that chapter, two uses of Theorem IX were illustrated: (a) to obtain an element 1 or 1 if the given determinant contains no such element, (b) to replace an element of a given
determinant with
0.
For determinants of higher orders, the general procedure is to replace, by repeated use of Theorem IX, Chapters, the given determinant U! by another |b| -= \bij\ having the property that all elements, except one, in some row (column) are zero. If &>, is this non-zero element
'Pq
and ^p^
is its cofactor.
bpq
dp^
i-ir
^Pq
minor of
b.
'pq
Then
the minor of bp
is treated in similar
Example
1.
3-2 4 3-2 12
2 3 2
2 + 2(3)
+ 2(-2)
-2
+ 2(1)
1
4 + 2(2) 2
8-10
3-2
-6
8
1
8 2
-2
2-3(-2)
4
3 4
3-3(3)
3-3(1)
4-3(2)
5
0-2
-2405
=
-2
-2405
8
8-1
(-1)"
-6 -2
8
+ 8(-l) -1
8(8) 8(4) 8 4
+ 8(-l)
0-10
58
8
-2
5
-6 +
-2 +
'^
37
-2 + 8(8)
5
62
37
+ 8(4)
30
-(-l)- = (-l)P'
-286
See Problems 1-3
30
may be used:
For determinants having elements of the type in Example 2 below, the following variation divide the first row by one of its non-zero elements and proceed to obtain zero
or
elements in a row
Example
0.921
2.
column.
0.185
0.476
0.527 0.637
0.841
0.614
0.138
0.921
0.201
0.517
0.667
0.138
0.921
0.201
0.517
0.667
0.782
0.157 0.484
0.555
0.782
0.157
0.484 0.555
0.527
0.637
0.841
0.123 -0.384
0.872
0.312
0.799
0.448
0.872
0.312
0.799 0.448
0.309
0.492
0.196
0.217
0.240
1
0.680
0.123 -0.384
0.921
-0.320
0.921(-0.384) 0.309
0.492
1
0.309
0.492
0.196
0.217
0.196
0.217
0.680
0.240
0.680
0.240
0.309
0.265
0.757
0.921(-0.384)
0.309
0.492
0.265 0.757
=
0.217 0.240
0.921(-0.384)
0.492
0.921(-0.384)(0.104)
-0.037
32
CHAP.
4]
EVALUATION OF DETERMINANTS
33
The expansion
of a determinant
i^
Instead of selecting one row of \a\, let when arranged in order of magnitude, be selected. Prom these
n(n-
l)...(ra
1-2
....
m + l) m
minors
can be formed by making all possible selections of m columns from the n columns. minors and their algebraic complements, we have the Laplace expansion
>
Using these
Jrr.
(4.1)
|(-1)^
Jn+i' jm+2
.
Jn
i.
i.
^m4-i'% + 2
^n
+h
Evaluate
3-2 4 2 3-2 12
,
first
two rows.
3 4
-2405
Prom
(4.1),
1+2+1+21
(-1)
1 + 2+1 + SI
U1.2
+
U-5aI ^3,4
1
(-1)"
,1,31
2,41
_l + 2 + l + 4|
1,4
(-1)
+
,,1+2 + 2+4. .2,4 .13 , (-1) Mi'jl-Us'.
(-1)
^1,:
(-ly
1 + 2+3+4
.34
*l,2r 1^3, 4I
2 3
4
5
2 3
-2
1
2 4
-2
+
3
+
5 3 4 2
2 3
4 2 3
3
-2
4
5
-2
_2
_2
-2
+
D
-2
1
-2 4
(-13)(15)
(14)(6)
+ (-8)(16)
-286
See Problems 4-6
DETERMINANT OF A PRODUCT.
(4.2)
If
A and B
Us
Ul-lsl
See Problem 7
If
^ = [aid
n
n
is
n-square, then
ii
^11 ^1
i=2
j,-=2
IJ
IJ
Where
a^
is the
cofactor of
o^ and
a^^-is the
"ii^iil
ii Otj
I
of^.
A =
[a^,-] ^tj.
Then
34
EVALUATION OF DETERMINANTS
[CHAP. 4
\A\, of \A\ with respect to x is the sum of n determinants obtained dx by replacing in all possible ways the elements of one row (column) of by their derivatives with respect to x.
I.
The
derivative,
Example
4.
x^
x^\
3
x""
2x =
1
x2 x +
x^
x+ +
1
3
x-"
dx
2x-\
X
2x-\
X
3^^
2x-i
-2
5
-2
X 6x
-2
4x
12x^
See Problem 8
SOLVED PROBLEMS
2 3
7
1.
4 2 4
-2 4 -3 10
3
-2
3
2 3 3
3-2
-2
2
1
4
2
2(2)
3
4
5
-286
4
5 3
4
5
(See Example 1)
-2
-2
There are, of course, many other ways of obtaining an element +1 or -1; for example, subtract the column from the second, the fourth column from the second, the first row from the second, etc.
first
-1
3 2 2 5
-1 +
3
-2
1
2+2
2 5
2-2(1) -2-2(2)
1-2(2)
10
2 2 3 3
2
3
4
1
2
3 3
4
1
+2
4-6 4-3
-6-2(-3)
-5
4
-3
+ 3 -3-2(3)
18-9
-4
-6
=
3-2(4)
4
4-2(4)
8-3(4)
4
1
4-3
8
4-3
4-3
-9 -4
=
1-3(4)
-72
-9-3(-3)
-11 -4
-5
3
-11 -4
1+
2j
3.
Evaluate
\A\
2-3i
2 +
: i
1-
2i
2J;
then
+j
+i
1+2j
1+2;
+J
14j
2J 5j
(l+J)(l+2j)Ml
(-1+3j)|-^|
2
5
5-J
-4 + 7i
1+i
l
2
1
5-J
-4 + 7j -10 +
+
18
2J
1
8-
25 -
-4 + 7J -10 + 2i
2j
I
-6
and
,4
14i
25 -
5j
CHAP.
4]
EVALUATION OF DETERMINANTS
35
4.
\A\
\aij\
m<n
in
ji'h
Jrn
A;
H-''^
1
%\
of 1^1 in
row
after
by
interchanges of adjacent rows of |^ the row numbered ii can be brought row. by i^- 2 interchanges of adjacent rows the tow numbered is can be brought into the second -m interchanges of adjacent rows the row numbered % can be brought into the mth row. Thus.
i;
Now
by
(I'l- 1)
I'l,
+ (igi^
2)
numbered
first
+ 'm-2'"('"+l) interchanges of adjacent rows the rows m rows. Similarly, after /i + j^ + + /^ - \m(m+l)
interchanges of adjacent columns, the columns numbered /i,/2 occupy the position of the first m col/jj umns. As a result of the interchanges of adjacent rows and adjacent columns, the minor selected above occupies the upper left corner and its complement occupies the lower right comer of the determinant; moreover.
I
cr
[1
+ ^2+ +
in
/i
= j- + i + + + /i + + ii + + in changes. Thus
Jm+i'Jn-i-2
'm-H'''m +2-
/2
+ /-"('" +1)
times which
is
equivalent to
Ji.J2'
Jm
'-m
Jn yields
' Jn
m!(ra
AH'''^
or
Ji~h(a)
Jn
Jn-H' Jm+2'
"TOJ-l'
(-ir
n+2'
yields
w!(n-
n(n~l)...(n-m + l) different m- square l'2....m m\{n m)\ minors may be selected. Each of these minors when multiplied by its algebraic complement yields m!(/j-m)' terms of U|. Since, by their formation, there are no duplicate terms of U| among these products.
Let
I'l,
12
in be held fixed.
S(-i/
where
/i, 72
Jn
I'm
Jm-1' Jvn
Jn
I
,
i^
+ i^+
+ i^ +j^ + j^ +
+ in
different selections
5.
Evaluate
12 3 4 12 1 11 3 4 12
2
(-1)^
21
1
I
|1
ll
+
2 2
(-ir
21 41
2
1
ll
+
3 ll
(-If
2
3
11
13
ll
4
1
41
(-3)(1)
(-2)(1)
(5)(-l)
6. li
C B
Prom the
plement
is
first n
B
Its
|s|.
rows of |P| only one non-zero n-square minor, U|, can be formed. Hence, by the Laplace expansion, |p| = |.4|.|b|.
algebraic com-
7.
Prove
\AB\ =
A
| |
S
and
Suppose Problem 6
= [a^j]
= [bij]
are n -square.
Let
= [c^j] =
AB
so that
^Hkhj-
i^rom
36
EVALUATION OF DETERMINANTS
[CHAP. 4
ail
012
"m
2n
021
022
"ni
n2
"nn
6ii
-1 -1
612
..
hi
?,2
621
^22
..
-1
To
the (n+l)st column of \P\ add fen times the first column, 621 times the second column
6i times
we have
On
"21
012
"m
2ra
Cii
"22
C21
"ni
"n2
"nn
Cm
612
-1 -1
h,
62
622
..
Next, to the
(n
fei2
times the
first
We have
in 2n
Cii
C12
C21
C22
"nn
Cm
Cn2
^13
^m
b^n
*23
A
\P\
C
.
Prom
Its
Hence,
\p\
(-i)Vlf" ^"^"lc|
and
\c\
\ab\ = U1.|b|.
Oil
8.
%2 %S
where
Oif
Let A =
aij(x),
(i, j
= 1,2,3),
Then
%l"22*33 +
and, denoting -ra;:
122Sa31
%s"S2"21
"
(^ix"^"^ "
0:l202lOs3
"
(hs^^O'Sl
by
a,',-,
dx
^^
^J
CHAP.
4]
EVALUATION OF DETERMINANTS
37
dx
'^ll"22''S3
+ "l2"23"31 + 023012131
+ O31O12O23
+ %3032021
f
021013032
f
Oj^jCggOgg
f
>i23'^n''S2
OgjOj^jOgg
a^OQiOg^
0210-12033
Og^a^a^^
air^a^^a^.^
02205^303^
Ogj^o^gOjg
Oil
0^11
o-L20^i2
"*"
OisOi^g + a^-^dj^x
022(^22
+ 023(^23
"^
Og-LCXgi
+ 032(^32 + 033(^33
11
O12 Ol3
O22
O32
'11
"12
"13
022
023
21
O23
i2i
"22 "23
032 033
31
O33
'31
"32 "33
3.
SUPPLEMENTARY PROBLEMS
9.
Evaluate:
3
2
(o)
11
-2 -1
3
156
ic)
2 3
1
-4 4 -3 -4 -5
3
5 3
1
-304
-4 -2
-1
1
-2 -2
3
1
{b)
4
2
16 12 9
2 7
2
1
41
(d)
1
1
118
-4 -3 -2 -5 -2 2 2 -2
10. If
Is n-square,
show
that \A^A
is real
and non-negative.
11.
Evaluate the determinant of Problem 9(o) using minors from the first two columns.
first
12. (o)
Let
aJ"'
4B| Use |^B|
=
"^
and B =
''
'^
bjj
(< + 02)(6i+62) (oi
[^-02 ojj
|.4|"|B| i.4|-|B| to
1^-62
show
that
0000
61 + 163
(0161-0262)
+ (0261+0163)
(6)
Let
+ t03
a2+iaA
I
tOj^
and
62 + 164
-Og + ja^
Oi-iogJ
inii. \ts\
to
Use \AB
222222,2,2
[-62 + 864
6i-j6gJ
2
13.
1
1
Evaluate
4
5 6 5 4
3
3
2
2 2
2
first three
rows.
Ans. -720
1 1
1
3
3
38
EVALUATION OF DETERMINANTS
[CHAP. 4
112 12
14.
Evaluate
first
two columns.
.4ns. 2
11
square matrices, use the Laplace expansion to prove
|diag(4i,.42
As)\
=
15. If
^1,^2
^^s
^"^^
Uil-U,!
....
a^ a^ Og a^
*1
16.
Expand
a-^
*2 ^3 *4
first
a^
*4
*1
^3
a2 62 60
6..
K
17.
62
63
to
show
A B C
where
is fe-square, is
zero when
> 2"\A\
to = aiiOiii + ai2ai2 + OisO^is
18. In
+ ai4i4.
OL^a. tti*
along its
first col-
umn
show
^11^11
ti
.-^
1=2 J=2
where
ffi,-
is the
Of
U
show
that the bordered determinant
19. If a^j
[a^,-],
"12
ire
Pi
?i
92
'^12
?n
22
"2ra
P2
Pi "11
"
"m
Pilj .^ 'J t=i j=i
Cti
^V
"ni n2
li
"nra Pre
Pn "ni
"712
"nn
92
In
Hint.
Use
(4 .3).
20.
|.4|
X
()
(b)
I
2*:
2
1
;<:
a:
-1
4
a;
x-l
3
a:
1
2a:
x^
(c)
+5
2x
Zx +
3-2
-4ns.
x^+l
12a;^
x+l
15a:*,
x^
(a)
2a:
+ 9a:^-
8a;=^
,
(6)
6a:
+ 21a:^ +
(c) 6a:^
5*"^
- 28x^ +
=
9a:^
20a;
21.
Prove
If
A and B are
A non-singular and
if
ff
4 + iS
is Hermitian, then
chapter 5
Equivalence
r if
any, is zero.
zero matrix is
'l
2 3 5
2 3
Example
1.
The rank
of
2 3
4
7
is
r=
since
2
-1^0
while
= 0.
See Problem
1.
An
A
is
re-square matrix
called singular.
that is,
if
A ^
0.
Otherwise,
Prom
I.
AB\
A\-\B\ follows
of
The product
two
or
is non-singular;
the prod-
uct of two or more re-square matrices is singular if at least one of the matrices is singular.
ELEMENTARY TRANSFORMATIONS.
on a matrix do not change either
(1)
by Hij;
(2)
The multiplication
by a non-zero scalar
k,
denoted by H^(k);
The
(3)
multiplication of every element of the ith column by a non-zero scalar k, denoted by Ki(k).
to the
k,
to the elements of the ith column of ments of the /th column, denoted by K^j(k)
k,
the transformations
are
is clear that
Problem
2, it is
the rows (columns) of a an elementary transformation cannot alter the shown that an elementary transformation does not alter its
inverse of an elementary transformawhich undoes the effect of the elementary transformation; that is, after A has been subjected to one of the elementary transformations and then the resulting matrix has been subjected to the inverse of that elementary transformation, the final result is the matrix A.
39
40
EQUIVALENCE
[CHAP.
Example
2.
Let A
4 5 6
7 8 9
"l
3"
The The
H2i(-2)
is to
produce B
2
.7
10
8 9
effect of the elementary row transformation ff2i(+ 2) on B is to produce A again Thus, ff2i(-2) and H2x(+2) are inverse elementary row transformations.
are
-1
^ij
Hi\k) = H^(i/k)
H--(k)
(3')
= H^A-k)
K^jik)
= Kij(-k)
We have
II.
The inverse
same
type.
matrices A and B are called equivalent, A'^B, from the other by a sequence of elementary transformations.
if
Equivalent matrices have the same order and the same rank.
Examples. Applying
in turn the elementary transformations
1
-1
3
4 5
12-14
5-3
-1 -2
6
-1
5 5
12-1
'^
2 -1
-2
-7
-7
-3 -3
5-3
S
I
I
t^
0,
the rank of
is 2
hence,
^ is 2. This procedure of obtaining from A an eauivalent matrix B from which the rank is evident by inspection is to be compared with that of computing the various minors of -4.
See Problem
3.
ROW EQUIVALENCE.
lone,
If
is said to
a matrix A is reduced to B by the use of elementary row transformations abe row equivalent to A and conversely. The matrices A and B of Example 3
Any non-zero
(a)
matrix
of rank
r is
all other
rows have
this
(C) ]\
=1,2, ...,r), the first non-zero element element stands be numbered 't
;'.
is 1;
let the
column
in
which
<
/2
<
<
j^.
j^, (i
=1,2
r),
is the
element
of
CHAP.
5]
EQUIVALENCE
41
To reduce A
(ii) (is) If
to C,
7^
suppose
/i is
the
to
number of the
it
first
non-zero column of A.
aij 171
0,
use
but
//i(l/oi,iji
reduce
ffij,
to
1,
when necessary.
(i^).
If
a; J
Vi
o^ ^^7
0,
use
and proceed as in
(ii)
first row,
B =
C.
Other-
wise, suppose
number of the
if
first
column
bqj^
in
use ^2(1/^2^2) as in
in
(il),
(ii);
&2J2=
but
0,
which this does not occur. If &2j ^ 0, use H^^ and proceed as in (ii). Then, as
If
first
is reached.
ff2i(-2), ffgiCD
Example
4.
2(l/5)
//i2(l). //ssC-S)
applied
2 4
-1
3
4
5
^\j
-1
5
1
'\^
-1
1
1
'%^
2
1
17/5
2
1
-3
-2
-7
-3
-3/5 -3
-3/5
(a)-(rf).
See Problem
4.
>
can be reduced
to
(5.1)
\l%
"'"'
own normal
M
form. of the first row
first
1
Since both row and column transformations may be used here, the element
obtained in the section above can be moved into the
first
first
column.
row and
column can be cleared of other non-zero elements. row can be brought into the second column, and so on.
For example, the sequence
/^32(-l), ^42(3)
ff2i(-2),
of the
second
^31(1).
^2i(-2),
I
Ksi(l),
X4i(-4).
K23, K^{\/%),
See Problem
5.
ELEMENTARY MATRICES.
row (column) matrix. Here, an elementary matrix will be denoted by the symbol introduced to denote the elementary transformation which produces the matrix.
0' 1
Example
5.
Examples
/g
1 1_
"0
1
1
0'
'1
0"
'l
0' 1
= K,
1_
Ha(k)
K-sik).
H^g(k)
k
1_
K^^yk)
_0
k_
42
EQUIVALENCE
[CHAP.
is non-singular.
(Why?)
The
mxn
matrix
A can be produced by
multiplying
A by an elementary
matrix.
To
to
Ijn
of order
To
3~
f
1
'l
3~
"7
9"
Example
6.
When A
4 5 6
_7
H^^-A
3'
=
1
4 5 6
0_
4 5 6
_1
interchanges the
first
and third
9_
7 8 9_
3_
1
rows of A
;
"l
o"
"723'
=
4/^13(2) =
4 5 6
10
_2
1_
16 5 6
_25
adds to t( the
first
J
the third column.
8 9
39J
LET
AND B BE EQUIVALENT MATRICES. Let the elementary row and column matrices corresponding to the elementary row and column transformations which reduce /I to 5 be designated as //i./Zg where //^ is the first row transformation, //g is the second, ...; ^s< J^\,T^Q. K^ is the first column transformation, K^ is the second Then
'^t
(5.2)
//.
IU-H,.A K-i-K^
H^-H^
K,
PAQ
where
(5.3)
Ih
and
We have
III.
Two
if
and only
if
P and Q
Example
PAQ
= B.
-1
2~|
7.
When
2
_1
5-23,
2
1
2J
["100"! r
1
1-200
1
~1
0~
2"
"1000"
o"j
1
0-2
ij
10 10
1
10 10
_0
1
10
1
10 1 10
1_ _0 1_
"1000" 10
5
_0
1_
[j^i
iJ
1-25-4
10
o"l 1
[:;=}
Since any matrix
is
=
1
PAQ
[1
10
1
oj
equivalent to
its
IV. If ^ is an re-square non-singular matrix, there exist non -singular matrices as defined in (5.3) such that PAQ = 1^
.
P and Q
See Problem
6.
CHAP.
5]
EQUIVALENCE
43
Let
H^...H^-H^
and
K-^-K^-.-Kt
and
(5.4)
P~^
H;\hI\..H^^
and
let
Q''
Kt...Kt-Ktthat
PAQ
Then A
=
(5.5)
P'\PAQ)Q^
P'^-k-Q^
P'^-Q'^
We have proved
V.
of
elementary matrices.
See Problem
7.
From
this follow
If If
VI.
VII.
is non-singular, the
rank of
AB
(also of
BA)
is that of B.
P and Q
PAQ
is that of A.
In
Problem
8,
we prove
if
Two mxn
and only
if
A set of my.n matrices is called a canonical set under equivalence if every mx-n matrix is equivalent to one and only one matrix of the set. Such a canonical set is given by (5.1) as r ranges over the values 1,2 m or 1,2. ...,re whichever is the smaller.
See Problem
9.
RANK OF A PRODUCT.
matrices
r.
By Theorem
non-singular
P and Q such
PAQ
Then 4 = P
(5.6)
P''
NQ
Let S be a
pxre matrix
AB
VI, the rank of
P~'NQ'^B
AB is that of NQ'^B. Now the rows of NQ~'b consist of the firstr m-r rows of zeroes. Hence, the rank of AB cannot exceed r the rank of A Similarly, the rank of AB cannot exceed that of S. We have proved
rows of
By Theorem
Q B and
IX.
The rank
of the product of two matrices cannot exceed the rank of either factor.
suppose iS = then from (5.6). NQ-'b = 0. This requires that the first r rows of Q'^B be zeroes while the remaining rows may be arbitrary. Thus, the rank of Q-'b and, hence the rank of B cannot exceed p-r. We have proved
:
is
of rank r
and
if
is
such that
AB
the
44
EQUIVALENCE
[CHAP.
SOLVED PROBLEMS
1.
(a)
The rank
of
-4
1
2 3] sj 2 3 5
is 2
since
-4
(b)
The rank
of
12
"0
is 2
since
and
^0.
2 2 4 8 2
3"
(c)
The rank
of
A
_0
4 6
6 9_
is 1
since
0,
is 0, but nov
every element
is
Show
shall consider only row transformations here and leave consideration of the column transformations Let the rank of the mxn matrix ,4 be r so that every (r+l)-square minor of A, it any, is zero. Let B be the matrix obtained from .4 by a row transformation. Denote by \R\ any (r+l)-square minor of A and by Is] the (r+l)-squaie minor of B having the same position as \R\
We
as an exercise.
Its effect on |/?| is either (i) to leave interchange one of its rows with a row not of \R\
. ;
it
.
unchanged,
In the
(ii) to
(i),
interchange
= \r\
case
\S\
=0;
case
(ii),
\S\
= -\r\ =
in the
case
(iii),
\s\ is,
Let the row transformation be Hi(k). Its effect on \R\ is either (1) to leave A:. Then, respectively, |S| = |/?| = o or |S| = ;i:|/?| = o.
it
unchanged
or (ii) to multi-
Let the row transformation be Hij(k). Its effect on |/?| is either (i) to leave it unchanged, (ii) to increase one of its rows by k times another of its rows, or (iii) to increase one of its rows by k times a row not of S|. In the cases (i) and (ii), |S|=|ft| = 0; in the case (iii), \s\ = = /?| + A: (another (r+i)-square minor of /I)
| |
k-0
0.
On
it.
it,
2
1
3"
1
'-^-/
3
"^
"1
2
1
3~
1
-""J
'1
2
1
3"
1
(a)
2
_3
3
1_
-3 -3
-4 -8
_0
2_
_0
1_
//2i(-2). ffsi(-3);
"1
The rank
is 3.
2 3
0"
1
''V.
0"
0'
(b)
2 4 3 2
3
_6
-3
4
2
3
5_
''\j
-4
-8
-8 -3
'~^
-8 -3
3 2 2
r^
-4
-8 3
S.
-3
2
2
5_
The rank
is 3.
8 7 5_
-4 4 -11 i
+
'~^
p -4 -11
1
-3
1+i
(c)
1
i
A
1
2j
_1
2j
2J_
'-Vy
2J
B.
The rank
is 2.
2j
l+i_
Note.
The equivalent matrices 5 obtained here are not unique. In particular, since in (o) and (i) only row transformations were used, the reader may obtain others by using only column transformations. When the elements are rational numbers, there generally is no gain in mixing row and column
transformations.
CHAP.
5]
EQUIVALENCE
45
4.
C row equivalent
to
each
2
'->-'
(a)
A =
13 12 6
2 3 9
113
12
6 2 3 9
113
13
3
l"
13-2 13-2
-2_
1
10 4 13-2
113
1
13
3
3
--v^
GOOD
3"
2
3
(b)
2 4
.1
-2 -2 -3 -1
2
1
-2
-2 3
1
'10
^\y
7~
-1
1
'^
-1
1
6 4
2
1
2
1
4_
10 10
-1
2
2_
--O
10 1 10 0-1 10 2
1
6_
p -1
5_
pool
5.
Reduce each of
1
-1
1
1
^\y
-1
1
"l
o"|
fl
o"
'l
o'
'l
'V.
o'
(a)
3 2
4
3
2 5
-2
5 3
0-2
p-
1-2
Lo 2
1-2
11
10
P
10
p
1
0_
7 2
7 2 sj
7 3_
-7_
Us o]
The elementary transformations
are:
^2i(-3). 3i{2); K2i(-2), K4i(l); Kgg; Hg^-^y. ^32(2). /f42(~5); /fsd/ll), ^43(7)
(6)
[0234"! P 23 5 4^-0
4 8 13 I2J
|_4
354
2
i
1
'-^
3
2
5
3 2 3 4
'Xy
"1000"
"1000'
^\j
"1000"
<^
8 13 12
2 8 13
13 13
4
4.
10 p
1
10
0_
0_
are:
^12: Ki(2); H3i(-2); KsiC-S), X3i(-5), K4i(-4); KsCi); /fs2(-3), K42(-4); ftjgC- 1)
6.
Reduce A
1 2 3-2 2-2 1 3 3 04 1
to
normal form
A'
P^
and
(^^
such that
P-i_AQ^
A^.
Since A is 3x4,
we
is
performed on a row of
-3
-2 -3
1
10
1
1 1 1
12
2-2
3
3
1
3-2
-5
7 7
1 1 1
10
1 1
-6
-6
2
-5
-6 -5 7 -2 -6 -5 7 -3
-5
-2
0-1-1
1/3 -3
-1/6
0-1/6 -5/6
10
1 1
10
1
7/6
1-57-210
0-1-11
10
1
or
Pi
0-210
0-1-1
1
46
EQUIVALENCE
[CHAP.
1
1
Thus,
Pi
-1/6 -5/6
-2
-1
1
-1
10
1
7/6
and
PiAQ-i =
10 10
N.
7.
Express A =
1
.1
4
3
3
4_
//2i(-l).
=
to
that is,
[see (5.2)]
H^-H^-A-K^-K^
ff3i(-l)-^2i(-l)-'4-X2i(-3)-?f3:i.(-3)
fl
0"1
fl
0"j
fl
3"|
fl 3
O'
Prom
(5.5),
Hl-H'^-K^-Kt
110
_0
010
[l
010
[p
010
_0
1_
ij
ij
ij
8.
Prove:
If
Two mxn
if
and only
if
A and B have
Conversely,
VII,
other.
By Theorem
same rank, both are equivalent to the same matrix (5.1) and are equivalent ^ and B are equivalent, there exist non-singular matrices P and Q such that B A and B have the same rank.
if
each
PAQ.
9.
[i:l-[:i:]
A
canonical set
tor
[!:][::]
non-zero
3x4
matrices is
Vh o]
[:=:]
nm
r^,
&:]
B consisting
n.
10. If from a
square matrix
a submatrix
r^
of s rows (columns) of
is
+ s -
The normal form of A has n-rj^ rows whose elements whose elements are zeroes. Clearly
'A
>
- n as required.
CHAP.
5]
EQUIVALENCE
47
SUPPLEMENTARY PROBLEMS
2
1
2 2
(c)
12-23
2
4
5 (d)
5 6
6 7 8
7
8 9
2
11.
3 5
2
3
5-46
2
(a)
3 3
(b)
4 3 4
4 5 7 4 6
-1 -3
2
-2
4-16
10 11 12 13 14
15 16 17 18 19
Ans.
(a) 2,
(b) 3,
(c) 4.
(d) 2
12.
that
A,
A'.
A. and
.4'
13.
Show
that the canonical matrix C, row equivalent to a given matrix A, is uniquely determined by A.
14.
to
2-3"]^ri 0-7]
-4j
2 2
12
(b)
3 4
1/9"
3 4
_4
[2 5
[o
12
1
10
1
2j
3
2
1/9
(c)
2 3
11/9
-3
10 10 12
3 (d) 2
10 1 10-1
1 1
-1
-1
-1
6
2
1 1
1
^V/
(e)
2
10 0-12 10
1
-2
1
5
1
-3
14.
Ans.
(a)
(b).(c)
[/g
o]
(d)
P^
j]
(e)
P'
jl
12
16.
4
1
Let A
2
3
12
(a)
(6)
From Prom
/a
form Z/^^. /^O). //i3(-4) and check that each HA effects the corresponding row transformation. form K^^. Ks(-l). K^^O) and show that each AK effects the corresponding column transformation.
H^
{?,),
(a).
.
K^lo)
4)
1
Check Check
H,H-H~^^l KK~^ = I
"0
0'
"0
and C
=
(e)
Compute B
4"|
^12
^
ft,(3) -//isC-
-4
1
H^^(-4:)-H^(3)-Hi
1/3
0.
ij
(/)
that
BC
CB
H-.
that
/?',-=
/?
K-(k) = H^(k).
and K^,(/t)
= H^-(k)
that if
is a product of
elementary column matrices. R'is the product in reverse order of the same
18.
(a)
(b)
AB AB
and
and
BA BA
.4
A and B
is singular.
19.
UP
Hint. Express
and Q are non-singular, show that A,PA,AQ, and PAQ have the same rank. P and Q as products of elementary matrices.
20.
Reduce B
13 6-1 14 5 1 15 4 3
to
normal form
/V
P^
and
Qr,
48
EQUIVALENCE
[CHAP.
21. (a)
(6)
Show Show
number of matrices
n+l.
m+l
mxn
22.
Given A
12 13
4
2
4 6
of rank 2.
7^
such that
AB
= 0.
2 5 6
10
Hint.
and take
Q-'b
abed
_e
A_
where a.b
are arbitrary.
23.
The matrix A
24. If the
mxn
rj
and
rg
respectively,
show
25.
By considering each
of the
show
that
\AB\ = |^|
|fi|
26.
(a) If at least
\AB\ =
|/4|-|s|
(6) If
both are
27.
Show
28. Prove:
of a non-singular matrix
is I
and conversely.
29. Prove:
to
Hint.
30.
Show how
to effect
(3).
31. Prove: If
.4
is an
mxn
m,
State the
chapter 6
The Adjoint of a Square Matrix
THE ADJOINT.
Let A =
[a^ ]
oij-
(6.1)
adjoint
adj
12
^22
'^ire
'^sn
Note carefully that the cofactors of the elements of the ith row (column) of A are the elements of the jth column (row) of adj A.
1
2 3 3
3
2
Example
i.
2 3
4
=
11=
6,
ai2 = -2.
Cl-lS
-3.
fflsi
= 1,
0122 =
-5,
6
a^g
= 3,
ffai
= -5,
Otgg
4,
agg = -1
1-5 1
4
3
and
adj
-2 -5 -3
-1
See Problems
1-2.
Using Theorems
and XI of Chapter
3,
we
find
%1 %2
(6.2)
"In
0271
i(adj A)
'^
^^2
ni
CE 7
diag(M),
1^1
1^1)
A-L
-7 and
6 1 -5I -2 -5 5 4 -3 33 -ij
(adj A)
of
Example
1,
U|
2
3 3 3 2
1
/I
f-T
=
[_
0'
(adj /I)
2
3
0-7
-7
-7/
By
(6.3)
we have
=
I
U|.
adj
/I
|,
adj
.
I
There follow
I.
If
is re-square
(6.4)
4
I
49
50
[CHAP.
II.
If
is re-square
(ad}A)A
If
=
adj
If
is of
0.
is of rank 1.
See Problem
3-
In
Problem
4,
we
prove
If
A and B are
(6.5)
AB
adj
adj
MINOR OF AN ADJOINT.
^ii.i2
In
Problem
i
6,
we prove
-square minor of the
re-square matrix
IV.
Let
^l'''-2
be
A =
[o,-,-].
tjj
7m + i' 7m+2'
let
Jn
be
its
complement
in A,
and
let
%.
^2
%
in adj
Jm
same position
A as those
of
occupy
in A.
Then
Ji' J2
Jm
Jvi + i'
Jm + 2
Jn
(6.6)
M:
s
(-D^'i^l'
where
j\
S2
+ +
%+
/i
72
+ +
Jt,
If in (6.6)
is non-singular,
,?i.
then
J2<
(6.7)
M
When m =
2, (6.7)
' Jm
H, %,
(-1)
aH-i u
I
Jm+i' Jm + 2
i-
Jn
^M+1' 'm+2. . ^n
becomes
Jg>J4-
Jn
(6.8)
(_l)V*2+Jl-'j2m
H'J2
^2'J2
.J1.J2
\A\
algebraic complement of
When m = n-l,
(6.9)
(6.7)
becomes
Jl'j2
Jre-1
M,
(-1)
Ul
a.
When m =
n, (6.7)
becomes
(6.4).
CHAP.
6]
51
SOLVED PROBLEMS
1.
The
adjoint of
i4
a h
IS c
'^ll
0^21
I
d -b
I
3 4
4 3
2.
4 3l
1
1
3 4
-7
The
adjoint of
A =
[;]
IS
4
3 3
1 1 1
1
3
1
-1 -1
1
1 1 1
3
2
4
1
-2
2
3
3.
Prove:
If
A
we
is of order n
is of
rank
1.
First
IS at least one.
exactly one.
note that, since A is of rank n-\. there is at least one non-zero cofactor and the rank of adj A By Theorem X, Chapter 5, the rank of adj^ is at most n-{n-l) = i. Hence, the rank is
4.
Prove:
adj
^S
= adjS
adj
y4
^y(6.2)
Since
^Badj^S
^S-adjS-adj^
=
.4
\ab\-I
/I
{a.aiAB)AB
= =
(S adj
S ) adj
^(l^l-Oadj/i
jslc^adj/^) =
|b|.U|./
=
\
\AB\-I
and
(adjB-adj^)4fi
that
adjS {(adj^)^lfi =
adj
adjS-U|./.B
=
U|{(adjB)Si
Ab\
we conclude
^S
adj
adj
/I
5.
Show
that
(6.2)
adj(adji) = U|
and
(6.4),
-A,
if
\a\
?^
0.
By
adj
adj (adj
^)
= =
adj/l|
|adj^|)
1
diagd^l
Ul
1^-1
)
Ul
Then
adj (adj ^)
adj (adj
= = =
UT'^-^
\aT' -A
^) ^)
and
adj (adj
UT'^-^
6.
Prove:
Ji'
Let
A;
h
;
in
'"n
A =
[a:-] tj-
Jm+i. Jw4-2
let
in
^n
^m+i. ^m+2
^^^
nH.i2
in\
/I
is
^H,i^, ...,i
occupy in A.
Then
52
[CHAP.
u
where
s =
Ji
Ji.
J/2'
H'
^2-
'
>
Jm
^m
'
Jn
(-ifur
7m
^i
+ f2+ +
+/1 + 72+
Prom
a,-
a,-
1-
a,-
a,-
h-j9
a,,
^^2
>
'00
,/2
"^ra.ii
"^mJ's
"im.im
"^raJm+i
"
"^m^n
tti,
*i
ai
Wl'A
a,-
a,-
a,-
''m+iJs
'
''m+iJm
'^m+iJm+i
''m+i-Jn
"ht.ji
"W2
in.i
'
"V.im4.i
"traJn
a,-
a,-
a,-
Ul
\a\
h.Jm^i.
''i2Jn
Ul
"^M'Jn^i
" "^-Jn
"%-i-i'Jn-^i
"V+i'in
^ri'^m+i
"in.in
(-ifUI- M %
where
s is
H\
''w
Jm+1' Jm-t2
4.
Jn
^2
^m+l' ''m+2
%
immediately.
Prom
7.
Prove:
of i.
If
is a
skew-symmetric of order
2ra,
By
its definition.
we
are to
show
is a perfect square.
r = when A
\
oi
\,
The theorem
is true for n = l
since,
\A\ = a
,1
2
.
\-a Oj
and consider the skew-symmetric matrix A = [a^j] of
'
-zri^i.-zii^-zy '^2k^..2n^\
Assume now
order2A: +
when
n = k
2.
By
partitioning, write
P^l
\ \
= E= where
r
\
Then S
is
skew-sym-
"
CHAP.
6]
53
\b\ =
/I,
in the
3,
elements of B.
we have by Problem
Chapter
and (6.8)
*2fe+l, 2fe+l
f''2fe+l,
0^2^+2. 2fe+l
0'2fe4-2,
0^2^+2, 2fe+l
2fe+2
2fe+2
^2^+1, 2/!e+2
Moreover,
Otsfe+s.sfe+i =
-fflgfe+i^sfe+s
hence,
1^1/
a perfect square.
Ll2/fe+l.
2fe+2
and
SUPPLEMENTARY PROBLEMS
8.
of:
~1
2
1
3~
2
1
S'
5
'l
2*
2"
(o)
_0
2
0_
(b)
_0
2
1_
(c)
2
_3
110
(rf)
2
1
1
1_
10
1
-2
1
-2
4
1
(rf)
2 2
-4
6
Ans.
(a)
_0
0-2
1_
(i)
-2
1
(c)
-2
1
-5 -2
-16
10
10 3-5
-2
9.
Verify:
(a)
(b)
(c)
adjoint of a scalar matrix is a scalar matrix. adjoint of a diagonal matrix is a diagonal matrix.
adjoint of a triangular matrix is a triangular matrix.
/I
7^
adj^ =0.
11. If
is a
-1 -2 -2
12.
Show
-4 -3 -3
is 3^' and the adjoint of
-2
1
2-2
13. Prove:
If
is
itself.
<n-\.
then
adj^
= 0.
14. Prove: If
is
15. Prove: If
^ A
is Hermitian,
so also
is adj/1.
16. Prove: If
is
skew-symmetric of order
, then
adj^
is
54
[CHAP. 6
17. Is there a
for
skew-Hermitian matrices?
18.
-Hij
=
-1
(6) adj
H^ (k)
diag(l/A:. l/k
Hij(k), HJ
1/k,
1,
l/k
l/k),
19. Prove: If
is
n-l
and
if
H^...H^
-H-^
-A K-^-K,2---^t
where \
is
or
-1
-1
-1
-1
adj
adj
Xi
adj
K^
adj
K^
adj
adj H^
adj
H^
adj H^
20.
Use
1110
(o)
of
Problem 7 Chapter 5
,
2 (b)
3 3
2 2 4
2
12
4 6
-14
7
-3 -3
1
1
2-2
-2
2
14
(b)
-2
1
Ans. (a)
-1 -1
-7
21. Let
1-1
= [a^--]
If
S(adjB)
and
\b\
Ui
22. Prove: If
is n-square then
adj (adj
^)
UI
23.
('/ = 1. 2, ...,n)
be the lower triangular matrix whose triangle is the Pascal triangle; for
10
12 10 13
3
1
110
Define
(i)
bij
"V
= (-1)^ ^ai, Hj
and verify
for n
=2,3,4
=
that
[6.^,-]
adJ-4
tjj
24.
its ith
Show
that
^pj
pq
where
o!^-
chapter 7
The Inverse of a Matrix
IF A
AND B
A
is
AB
BA
=1,
(B=A~^) and
Problem
I.
l,
we prove
A has an Inverse
if
An
ra-square matrix
and only
if it is
non-singular.
The inverse
II.
If
then
AB
AC
implies
B =
THE INVERSE
diag (i k,
...,
A)
is the
diagonal matrix
diag(i/k^,l/kr,.
1A)
^^
:^i' -^2
sum diag(/}
diag(Al^.
A^
A~s)
Procedures for computing the inverse of a general non-singular matrix are given below.
(6.2)
adj
i = U|
/.
if
is non-singular
n/M|
i2/U|
(7.1)
a^^/\A\
a22/!.4|
...
ai/U|
a2/UI
...
adj^
am/Ml
a^n/\A\
...
ann/\A\
2 3
Example
-7
is
1 1
1.
From Problem
-1 -1
2,
1 1
4 3
-2
"7/2
-3 -
Since
Ml
= -2,
A~^ =
^'^^ -^
f
X
-k
2
1
^J
See Problem
2.
55
56
[CHAP.
H^.-.H^.H^. A-K^-K2...Kt
PAQ
Then A ^ P
(7.2)
Q^
=
(S~V^ = S,
=
A'^
(P'^-Q'Y
7,
Q-P
Example
2.
From Problem
Chapters,
1 1
"l
-3
1
-3
1
HqH^ AK^Kq
-1
1_
1 1
A
_0
=
1
-1
1
1__
-3
1
o"
'l
-3"
1
"106" '100"
-1
1
1
7 =
-3 -3
1
Then
K^K2H2'i
-1
-1
-10
to
In Chapter 5
it
was shown
transformations alone.
(7.3)
that a non-singular matrix can be reduced Then, from (7.2) with Q =1, we have
H,...H^.H,
is,
III.
That
If
is
reduced to
is
equal
of
.4
4 3
of
Example
/I
to/.
13
Write the matrix [A
Ir,
Q
1
We have
1
3 3
1
"X^
3
1
1
'\j
3
1
4
1
-3
-"o
1 1 1
-3 -3
1
[AI^]
1 1
4 3
3 4
-1 -1
-1
-li
[/3^"']
7
-3 -3
1
1
by
(7,3).
Thus, as A
is
reduced to
/g, /g is
carried into
-1 -1
See Problem
3.
INVERSE BY PARTITIONING.
its
inverse
B =
[6.^,-]
be par-
41
(pxp)
^21
A12
^11
"12
(pxq)
and
(pxp)
"21
(px?)
where p + q = n
S22
(qxp)
A22 (qxq)
(qxp)
(qxq)
CHAP.
7]
57
Since
AB
BA
(i)
I^,
we have
=
=
Ip
(iii)
/4iiSii + /lisBgi
Bgi'^ii + B^z'^si
=
=
Iq
(7.4)
(ii)
4iiSi2 +
A-^^B^-z
(iv)
Bjiiia + Bss^^as
Then, provided
/4ij^
is non-singular,
!Bxi
where
'4ij^
(A.^^^
"-i-zK
(^^si '^n)
^21
~C
(^21 '^u)
.422
- A^i^Ali A^^).
See Problem
In practice, A.^^ is usually
4.
To
used.
Let
"11
'
[Oil
2i
%2l
^^aj
%2
022
11
13 O23
33_
,
%2 %3 %4
022
Osi 31
G^ =
021
Ogj
023
033
'''43
'^24 6(34
Ogj
32
41
%2
''44
After computing C^^, partition G3 so that /422 = [o33] and use (7.5) to obtain C^^. Repeat the proc
ess on
G4. after
partitioning
it
1 1
4 3
3 4
using partitioning.
Take ^11 11
[!'].
^12
^421 =
[13],
Now
^=
Then
^22
'42i(^;' ^12)
[4]-[l3]M
[1],
and
f^
[l]
Sii
4\
('4i\/ii2)rw^ii)
[j
i]^[o]fii-'^^^
S12
-('4i\^i2)r^ "
I
oj
S21
-^
r'
(^2i'4la)
[-1,0]
522
[1]
"
fill
-3 -3
1 1
fil2
and
S21 B22
_
-1 -1
See Problems
5-6.
58
[CHAP. 7
is
symmetric, aij =
ciji
If there is to be any gain in computing A~^ as the product of elementary matrices, the elementary transformations must be performed so that the property of being symmetric is preserved. This requires that the transformations occur in pairs, a row transformation followed immediately
c
...
a
b
.
...
...
Ht
^l
a
b c
^i(- ^)
^^i(- |)
is
replaced by
1,
and K^{l/\Ja).
In general,
procedure
recommended.
the method of partitioning is used since then (7.5) reduces to
(7.6)
where
A^^ - A.^^(Af-^A^^).
See Problem
7.
When A
is
is
may be used
which
found by =
(A'Ai^A-
(7.7)
A'^
SOLVED PROBLEMS
1.
Prove:
An
re-square matrix
is
A has an inverse
By Theorem
A''^
if
and only
5,
if it is
non- singular.
Suppose A
that
non-singular.
IV,
Chapter
exists.
If
P and Q such
Q-P
Supposed
A
-1
exists.
The A- A
=1
is of rank n.
/4
were singular,
_i
/l/l
n; hence,
is non-singular.
CHAP.
7]
59
2.
(a)
When A
[?
!]
1
/I
I
=5, adji
r4-3l,andi"=r4/5-3/5-]
[-1
2j
[_-l/5
2/5J
1
2 3
-5
1
-5
1
(b)
When A =
2 3
1
7
-5
and A
7
-5
2 4
3.
3
5 2
4 =
3 6 2 5
_4 5
-3
14_
0'
1
"K^
14
2 4
1
1
3/2
5 2
1/2
1
^\j
3/2
'
1/2
[AIJ
3 2
6
5
5
2
10
1 1
3 6 2 5
1/2 -1
1
-3/2
'
1
1
- 3 10
1 1
1
1 1
-3
14
10
1
-1 -5
8
-1
4 5
14
_4
14
0'
-3
5/2
10
-2
1
'Xy
2
1
3/2
1
]
1/2
"1
7/2
11
-2
1
0'
-1 -5
]
-1
;
1 1
'V^
1-1-5
1-2 5-5
-1
1/2 -1
-3/2
-3
2 3
1
-3
10
-2
13
-
h
-7 -2
2 2
--10
1
'-\J
-5
13
10
1
'X'
18
0'
10
1 1
18
-7-2
2
1
'
-7| -4 -2 -3
j
-7 -2
-4
1
-3
[
10
1_
11
-2 3/5 1/5
1
'^-/
-23
10
1
29
-64/5 -18/5
26/5
6/5
3/5
100]
1
i
-12 -2
-2
7/5
2/5
1/5
1
1
Ih
A-']
-23
29
-64/5 -18/5
26/5
6/5 3/5 7/5
2/5
1/5
The inverse
is
10
1
-12 -2
-2
(i)
iiiSn
4.
Solve
(.., /fiiS,2
(11)
+ iisSsi
= =
(iii)
S^i^ii + B^^A^^ =
for
+ A^^B^^
Bii,Si2,S2i, and S
(iV)
^21^12 + 52,42
from
(iii),
= /
Set
S22 = f"^
Prom
(ii),
B^^ = -(-4^1-412)^"^;
B21 =
- ^~\A^^At^y,
and, from
(i),
B-^^
^ti - A'l^A^^B^^ =
4\
+ (^lVi2)f~\'42iA\).
-f
(-42i'4ii)^i2 + ^'^422
*.
and
'^'2
('^21'*lll)'^12
60
[CHAP.
5.
12 13
2
3
3
by partitioning.
1111
1
2 3
(a)
Take Gg
3 3
[24],
Now
-1
and
[3]
r 3
11
, ^11-41. -
,-1
[3
|^_^
-2
'421-4 11
-2
[2 0],
[24]
,f
^22 -
'42i('4ii^i2)
[3] - [2
4]r
[-3],
and
f
3
=[-l/3]
Then
Sii = 4l\ +
(.4lVi2)rV2i^i\)
[j
"^l +
Hr-i] [2
0]
-2] _ [2 ol
ij
-1
[o oJ
3L-3
3J
-'^
B12 = ('4ii'4i2)C
1
3
f (/I2A1)
i[2
0],
4-fl
3-6
and
-3
3
D01
Oo
2
0-1
3
,
(6) Partition
A so
that
A^^
3 3
A^Q
2 3
-421
=[
1 ]
and
/I22 =
[ I ]
4 3
3
3-6
Now
4i
-3
2
A~'^
A^iA-^-^
"X"L2
3 2J,
0-1
1
.f
[l]-[l
l](i-)
-1
&
3 = 3 2
and
C'^=[3]
-6
3
3'
-6
3
3'
1
-2
1
Then
B,
^1
-1
"
3 [3]-|[2
-3 2]
-1
0'
-1
4
=
6-9
-2
3
1-2
-2
-1
B12 =
-3
_
1_
S21 =
[-2
-2].
B22
[3]
1-210
and
[Sii S21
S12
1-2
-2
2-3
3
B22J
1-11
3-2
CHAP.
7]
61
3 3
6.
1 1
by partitioning.
4 3
We cannot
take
-4,
1 1
1
3 3
.
7 -
3
1
-3
Then
1
By Example 3,
the inverse of
//
1 1
4 3 3 4
"1
is
-1 -1
7
-3 -3'
1 1
0'
1 1
-3
1
-3'
1
B^Ho
1
1
1 1
Thus, if the (ra-i)-square minor A^.^_ of the n-square non-singular matrix A is singular, we first bring a non-singular (n-l)-square matrix into the upper left corner to obtain S, find the inverse of B, and by the proper transformation on B~^ obtain A''^
2
7.
Compute
13
-1
1-12
2-3 1-1
4
2-3-1
2
1
-1
2
1
Consider
first the
submatrix G^
3 2
partitioned so that
-1
-421
[-1 2],
^22=
[1]
Now
^22 - ^2i(Ai^i2) =
[l]-[-l
2]rM
[-2]
and
f^=
[-i]
Then
B,
B12
["jj.
B21
[-k
k].
B22 = [-i]
-5
5
and
-1
5
10
-5
-5
-1
2
1
3 2
[2 -3 -1],
[4]
-1
-5
5
-1/5
2/5
.
Now
A<
_1_
10
3-1
-5
5
<f= [18/5
],
-5
-2
[5/I8].
62
[CHAP.
5-7
521 =
Then
JS^
5-15
-7
5
^[1
-2 10],
B22 =
[5/I8]
11
5-71
5-2
11
5
and
5-1
-7
1
10
5
-2 10
SUPPLEMENTARY PROBLEMS
8.
2-1"
1
"2
3 3 2
4"
"1
3"
2 3
(a)
-1
_
2
1_
(b)
4
_1
(O
2 3
4
5
(d)
-1
n U
n w
n ^
1
J.
6
3
3-15
Inverses
(a)
-10
9
,
-3
3
-2/3
1/3
3-1 ^ -15
5
-
^'^i
-
15
-4 -14
1
(c)
-3
-
-1
-
(d)
3
-J
-5
2-1
9.
8((?)
as a direct sum.
10.
Obtain the inverses of the matrices of Problems using the method of Problems.
13
1 1 1 1
3
3
3 2
.
4
3
2 3 3 2
7 2 (c)
5 3 6
2
3 3
3
1
2
3
11.
Same,
for the
matrices
(a)
-4 -5
8
3-1
2
3
4
(d)
(b)
5 2
7
3
9
3
13
1
1
11
3
2 (a)
1
-4 -5
4
12
11-1 1-2-12 2
16
-6
- 30
-144
48
(c)
36
60
21
22
41
-1
18
30
6
-2 -1
48
-26
16
(b)
-1 -7 -3
1
-18
21
-5 -8
6
(^)i
-1
2
-30 -15
15
12
-9
-1 -1
12
6-9
-6 -1
-7
-1
12.
of
Example 4
13.
Obtain by partitioning the inverses of the matrices of Problems 8(a), 8(6), 11(a) - 11(c).
CHAP.
7]
63
12-12
14.
12
(b)
2
3
2-11
1
112
2
-1 -1
2
-1
2
3
2
3
3 3
1-12
-1
-1 -1 -1 -1 -1 1 -1 -5 -1
1
Ans. (a)
-1
(b)
-1
-1 -1
15. Prove: If
.4
is non-singular,
then
AB
AC
implies
= C.
16.
Show
(a)
andS,
that
if
(b)
A und B
(c)
a.nd
(BA)A
17.
Show
Hint:
A
.
symmetric, so also is
/I
^.
A^A
(AA"^)' =
(A~^)'A
18.
Show
(a) A~'^B.
(b) AB"'^.
and
(c) A~^B~^
are symmetric.
Hint: (a)
19.
An mxn matrix A
a right inverse
if
is said to
and only
if
have a right inverse B if AB = I and a left inverse C it CA = I. Show that A has A is of rank m and has a left inverse if and only if the rank of A is n
20.
Find a
right inverse of
13 2 3 14 13 13 5 4
if
one exists.
2
1
Hint.
The rank
1
1
4
3
is
right inverse of
17
-9 -5
3
1
is the
4x3 matrix
-4 -1
L J
3 3
-3 -3
1
21,
Show
1 1
4 3
3 4
-1
-1
as another
1
right inverse of A.
22. Obtain
-310b
-3
1
-1 -1
as a
left
inverse of
3
3
4
3
3
,
c are arbitrary.
23.
Show
that
13 14
2
3
4
5 5
7
9
left inverse.
24. Prove: If
|.4ii|
0,
then
/111
A21
then
(/
^12 Aq2
|.4ii|
-1^22 - ^^si^ii/ligl
25. If
|/+
l|
;^
0,
.4)-^
and
(I
- A) commute.
26. Prove:(i) of
6.
chapter 8
Fields
NUMBER
FIELDS. A
collection or set S of real or complex numbers, consisting of more than the ele-
ment 0, is called a number field provided the operations of addition, subtraction, multiplication, and division (except by 0) on any two of the numbers yield a number of S.
Examples
of
number fields
numbers,
are:
numbers,
(b) the set of all real (c) the set of all (d) the set of all
where a and b are rational numbers, where a and b are real numbers.
,
The
set of all integers and the set of all numbers of the form
bvs
where
i is
a rational
GENERAL FIELDS. A
F, i.e. scalars,
collection or set S of two or more elements, together with two operations called
().
is called a field
provided that
a, b, c,
...
being elements of
a + b is a unique element of
a+b
= b+a
a + (b + c)
= (a +
b) + c
F F
in
F such
-a
in
that
a+
+a =
a.
As
F such
that
a + (-a) =
0.
a-b
is a
unique element of
ab = ba
(ab)c = a(bc)
?^
such that
1-a =
o.
in
F such
that
a-
a'''^
-0=1.
c)
Di
a(b +
= ab+ ac
D^'-
(a+b)c = ac + bc
In addition to the number fields listed above, other examples of fields are:
(e) the set of all quotients
^^^
Q{x)
of polynomials in
2x2
ti
where a and
b are real
numbers.
This field, called of characteristic 2, will be excluded hereafter. customary proof that a determinant having two rows identical not valid. By interchanging the two identical rows, we are led to D = -D or 2D =
which
0+0=0.
but
is not
necessarily
0.
64
CHAP.
8]
FIELDS
65
SUBFIELDS.
If
If
if
For exam-
ple,
the field of all real numbers is a subfield of the field of all complex numbers; the field of
all rational
numbers
field of all
complex
numbers.
all of the
1/4
1/2 2/3
is
and
11
2
1
- 3i
is over the
complex
field
Here, A is also over the real field while B is not; also A is over the complex field.
Let
A.B, C,
...
F and
let
F be
which
is the
An examination
on these matrices, individually or collectively, in the previous chapters shows that no F are ever required. For example:
difference, and product are matrices over F.
The sum,
If
If
over F.
that
A ^l then A
PAQ
and
is over F.
If
and
rank
r,
its
it
will be
assumed
that
is the
In later chapters it will at times be necessary to restrict the field, say, to the real field. At other times, the field of the elements will be extended, say, from the rational field to the real field. Otherwise, the statement "A over F" implies no restriction on the field, except for the excluded field of characteristic two.
SOLVED PROBLEM
1.
To do
product
this
we simply check
1.
the properties
(/I4) is
and
the
If
a + bi
and
di
bi
is
-a-bi.
is
(a+bi){c +
di) = (ac
1
bi^o
is
bi
_
b^
hi
a + bi
a^
+62
a^+
a^ + b^
66
FIELDS
[CHAP.
SUPPLEMENTARY PROBLEMS
2.
Verify (a) the set of all real numbers of the form a + b\r5
(6) the set of all quotients
where a and
b are ra:ional
numbers and
^
Q(x)
3.
numbers a + bvS, where a and b are rational numbers, and numbers a+bi, where a and b are rational numbers are subfields of the complex
a
b
field.
4.
b
,
where a and
a
b
b are rational
numbers, forms a
field.
Show
that this is a subfield of the field of all 2x2 matrices of the form
h
a
w]\eK a and
h are real
numbers.
5.
2x2
6.
set
of elements a,b.c.... satisfying the conditions {Ai, A^. A^. A^, A^; Mi.
M.y,
Page 64
is called
a ring.
ring.
To emphasize the fact that multiplication is not commutative, R may be called a non- commutative When a ring R satisfies Mg, it is called commutative. When a ring R satisfies M^. it is spoken of as
...
is an
0,+l,2,+3,
...
is an
example of a non-commutative
,
set of all
2x2
where a and
(a) of
Problem 6 be turned into a commutative ring with unit element by simply adjoining the ele-
to the set?
8.
By Problem
4,
Is every
i
element a field?
To
ol
^
,
9.
all
2x2
matrices
Call
where a and
b are in
F.
If
A.
is
=
.
show
that
LA
= A.
L a
10.
all
K
bi
be the field of
all
2x2
-b^
matrices
where
p, q,
^
as corresponding elements of
numbers.
the two sets and call each the image of the other.
To
[2
3
-4l
;
L4
Oj
2i, 3+ ^^
5.
Show
Show
What
is the
sum (product)
of their
images
in C.
is the identity
element of C.
image
"!]
This
is
chapter 9
Linear
Dependence
of Vectors
and Forms
pair of real numbers (%, x^) is used to denote a point Z in a plane. The same 2-vector or vector two-dimensional the to denote used here will be as x^], written [x^, of numbers,
OX
^21..
^12 +
^2!
X.2(X^1. X22)
X(Xi. Xq)
Fig. 9-1
If
.Yi
= [%i,xi2]
and
X^=
[x^^.x^q]
their
9-2) yields
A3
A^ + A2
^11
"^
-^21
-^12
"^
-'^221
Treating X^ and Xg as 1x2 matrices, we see that this en in Chapter 1. Moreover, if k is any scalar,
kX-i
L rC%'\
-1
is
for
fv
X-y
VECTORS. By
of F, as
over
is
of n
elements x^
(9.1)
X
x^,X2
[a;i,x2, ...,%]
...,
The elements
%
it
reth
components of X.
(91)
I X-i
X2,
Xi
Now
and
as a column vector.
same vector; however, we shall speak of (9.1) as a row vector We may, then, consider the pxq matrix A as defining p row vectors
components of a
17-vector) or
67
68
[CHAP.
The
tor are
vector, all of
two row (column) vectors and the product of a scalar and a vec-
Example
-^2= [2.2,-3],
^3= [0,-4,1],
= [6. 2,
and
10,
^4= [-4,-4,6]
-I5]
=
2X-^-iX^
IX^+X^.
2X^
=
2[3,l,-4] - 5[2,2,-3]
-S] - [lO,
[-4,-8,7]
(6)
(c)
2[2,2,-3]+ [-4,-4.6]
=
[o,0.o] =
-3X2-^3
X^- Xq+
{d)
2X^ -
X^
The vectors used here are row vectors. Note umn vectors, the results remain correct.
that if
each bracket
is
re-vectors over
|.%]^,%2'
^mJ
m
=
elements
h-^^.k^,
,k^
of F, not
such that
k.^X-1
(9.3)
k^X^^
1.
k^X-^
Otherwise, the
four vectors of
Example
(c)
By
^2 ^"d
Xj^_
Xq^.
+
0,
;c2A'2
[Zk^ + 2k^,
0,
k^+
[o,
0,0]
Then
ft^
+ 2^2 =
and
-ik^ -
= 0.
Prom
Any
n-vector
A
if
X^
of
F such
%A]^ +
that
:2A2
Xfn+i
k^Xf^
BASIC THEOREMS.
If
in (9.3),
k^
?^
0,
we may solve
for
^i
(9.4)
= Xi
- T"!^!'^! + =
SiJVi
+ ^i-i^t-i + -^i+i^i+i +
+ ^m^m!
or
+ s^_iZi_i + s^j-i^i+i +
+ s^J^
Thus,
I. If m vectors are linearly dependent, some one of them a linear combination of the others.
CHAP.
9]
69
II.
If
vectors X^, X^
, X^
2,
X^j,-^
is
X^ are linearly independent while the set obtained by addlinearly dependent, then Jt^+i can be expressed as a linear com-
Xg=
0.
Clearly,
X^.X^.^niXg Zg=2A^i-3^2-
are
III.
If
X^, X^
X^ there
is
a subset of
r<m
linearly dependent, the vectors of the entire set are linearly dependent.
Example
4.
By
(b) of
Example
1,
the vectors
(d),
See Problem
1.
%2
(9.5)
m<n
*TOl
^n2
%
r
are linearly independent while each of the remaining m-r vectors can be expressed as a linear combination of these r vectors. See Problems 2-3.
V.
is
necessary and sufficient condition that the vectors (9.2) be linearly dependent
r<m.
If
linearly independent.
The
If
if
m>n.
the set of vectors (9.2) is linearly independent so also is every subset of them.
A LINEAR FORM
(9-6)
over
in
ra
variables n
x^, x^
a^x^
is a
2
i=i
OiXi
a^x^
a^A:^
in
F
linear forms in n variables
/l
CIu% + %2^2 +
+
'^2n*re
(9.7)
/m
1^1
0~foXo
%n^n
-'ml
"m2
If
k^ +
F such
=
that
Kk
^^^2/2
A;4
70
[CHAP. 9
independent.
to the linear
dependence
/i =
or
otherwise the forms are said to be linearly independence of the forms of (9.7) is equivalent independence of the row vectors of A.
or
Example
5.
The forms
2xi
- 2 +
3*g,
/2 =
x.^+
2%
4^=3.
/g =
ix^
Tx^ + Xg
2-13
ent since
A =
4
1
is of rank 2.
Here,
3/^
"if^
fs =
-7
The system
(9.7) is necessarily
dependent
if
m>n. Why?
SOLVED PROBLEMS
1.
Prove:
If
among
the
X^,
r<m,
which is
vectors.
k^X^ +
+ k^X^ = + k^X^ +
k^X^
+ k^X^ +
0-.Yot
with not all of the k'a equal to zero and the entire set of vectors is linearly dependent.
2.
Prove:
If
ra-vectors is
vectors which are linearly independent while each of the remaining m-r vectors can be expressed
vectors.
.
is
Let (9.5) be the matrix and suppose first that m<n If the r-rowed minor in the upper left hand comer equal to zero, we interchange rows and columns as are necessary to bring a non-vanishing r-rowed minor
into this position and then renumber all rows and columns in natural order.
Thus, we have
11
11?
21
25-'
*11
12
%-r Xiq
.
%1 %2
*rl X^2
Xqt x^q
^rr Xrq
.
xp^
Xpr,
xpf xpq
xj^q
are respectively from any row and any column not included in A.
x^q. xpq,
Let h^,k^,
Then,
by (3.10)
CHAP.
9]
71
fci^ii
k2X2i +
+ krXri + ^r+i*^i
= =
(i
= 1,2 =
r)
and by hypothesis
k^x^q + k^xQq +
+ krx^q + krA-ixpq
Now
u.
column of
not appearing in A.
The
V be replaced by another of the remaining columns, say the column numbered cofactors of the elements of this column are precisely the k's obtained above
k^x^n + ^2*2W +
so that
+ ^rXru + ^r-n^pu
Thus,
k^x^j;
k^2t +
t,
"
f'r'^rt
f'r-n'^pt
(t
= 1,2
n)
and,
summing over
all
values of
k^X^ + k^X^ +
k^X^ +
r
k^^-^Xp
Since
/i:,^+i
ji^
0,
Xp
X^.
is
a.
X-^^.
X^
X^.
But
^^j
^r+2
^I
^j^,
X^
consider the matrix when to each of the given m vectors m-n additional zero compoThis matrix is [^ o]. Clearly the linear dependence or independence of the vectors and also the rank of A have not been changed.
m>n.
Xr+^
X^
X^
as
was
to be proved.
3.
= [2,3,1,-1]
[2, 3,
^2 = [3,-1,2,1]
^^3= [1,-5,8,-7]
and
(b)
^2=
1,-2]
^3= [4,6,2,-3]
is linearly dependent.
In
(a)
Here,
12-34 3-121
1-5
is of
rank
2;
X.^
and X^
The minor
8-7
1
-3
The cofactors
of the elements of the third column are
2
j^
.
-1
and -7.
3-12
1-5
8
respectively -14,
7,
Then -1^X^ +
1X2-1X2=
and Xg
-2X^ + X^.
2
(b)
3 3
Here
4
2 2
6
3
is of rank 2;
X^ and Xg.
Now
-1
the
we interchange
3 2
2 for
which
2
-2
5^0.
-1
1 1
The cofactors
column of
-2
are
4-32.
2X^ + 2X2 - 2Xs =
and
Xg
Xi +
X,
72
[CHAP. 9
4.
Let Pi(l, 1, 1), PsCl, 2, 3), PsiZ, 1, 2), and P^(2, 3, 4) be points in ordinary space. and the origin of coordinates determine a plane tt of equation
The
points
Pi,
P^
X y z
(i)
1111
12
3
1 1
2y +
member
4
of
(i).
we have
2
3
1
1111
12
3
1
1
1110
12
3
1
4
1
3
1
4
1
Thus, P4 lies in
tt.
The
[P^.,
Px.Pq
]'
is of rank 2.
We have
Show
verified:
Any
three points of ordinary space lie in a plane through the origin provided the matrix
2.
SUPPLEMENTARY PROBLEMS
5.
Prove:
If
vectors
X,^.
X^
X-^
are linearly independent while the set obtained by adding another vector
X^^.
-^m+i is linearly
6.
Show
Problems
siXi
is unique.
m
Hint: Suppose
^7^+1 =
n
kiXi
=
1,
m and consider
X
i=i
2
i=i
(A:^.
- s^ ) X^
i=i
7.
Prove:
necessary and sufficient condition that the vectors (9.2) be linearly dependent
r<m.
In (9.3) subtract
S2.
second row by
^s Indicated in (9.4).
For the
2.
8.
Examine each of the following sets of vectors over the real field for linear dependence or independence. In each dependent set select a maximum linearly independent subset and express each of the remaining vectors
as a linear combination of these.
^1
Xj_
[1,2,1]
(c)
^1
= =
[2,-1,3,2]
X2 = [2,1.4]
^2 =
[4,2 1.-2 .3]
(a)
X^
[1,3,4.2]
(6)
Xa
= =
[4,5,6]
Xs X^
Xs
(c)
X3 = [3,-5,2,2] ^4
Ans.
(a)
[1.8.-3]
A.Q
=
= =
Xq
2X^ - X^
A3 = 2a^ +
(b)
A. A
5a 1 2a o
x^
2^1--X^ 2X2--^1
CHAP.
9]
73
9.
Why can
Show
10.
or
X^ = aXj, a
dependent.
Is the
converse true?
11.
Showthatanyn-vectorZandthen-zero vector are linearly dependent; hence, k^-O = where fc^ = o and ftj ^ 0.
A"
proportional.
12. (a)
Show Show
that
X._ =
that
Zi
3% 2::i
Xg + 2Xg + x^
(b)
2Xx
3%-L 5Xj^
3Xg +
4A!g
- 2*4
()
fz /a
= =
f^ = fg =
+ 2^2 -
2x3 + 5*4
X4.
5x^
3/i
95C2
+ 8xq /g
x^.
- X2+ 2Xq +
Arts, (a)
2/2
14.
or
Hox
cient matrix
+ aij^x
1,2
m)
and show that the system is linearly dependent or independent according as the row vectors of the coeffi"10 20
"11 21
"^0
"ni
nn
r
of
is less than or
equal to
15. If the
polynomials of either system are linearly dependent, find a linear combination which is identically
x
zero.
Pi =
(a)
~ 3x2 + 4^ _
2x2
Pj_
2x* +
3:c^
-4x^+5x
x^ +
+ 3
\
P2 = Ps = x
- 6 +
X
2P3 =
(b)
P2 = Pg =
x +
2x2- Zx +
2*2 +
X* +
=
2x-
X + 2
Ans.
(a)
2Pi + Pg
(6)
P^ +
P^-
2Pg
16.
[::]-Ci]-[::]
Show
that
fe^il/i
A:2^^2
+ ^3^3 =
when
not all the k's (in F) are zero, requires that the rank of the
abed
matrix
e
f
q
g h
s
t
be <
3.
M-^.Mz.Mq
components,)
to a set of
mxn matrices
74
[CHAP.
2
2
2
3
3 2
1
,
3 6
17.
Show
that
3
1
4
2
_
4 2
and
2
_
3
-
18.
Show
that
of the matrices
n
n
[o
oj'
[o oj
and
[i
oj'
X^^.X^
X^
show
Y-^.Yr,
1^
where
7^
2
A
aijXj.
and only
if
\_aij'\
is non-singular.
20. If
is of rank
C^,
r,
where
C^
show how to construct a non-singular matrix B such that AB C^ are a given set of linearly independent columns of A.
= [C^,
C2
C7-,
o]
21.
1, 1, 1),
Pjd,
(6) (c)
PaY
is
Ps(2, 2, 2, 2), and /VO, 4. 5, 6) of four-dimensional space, so that the points lie on a line through the origin. of rank 2 so that these points lie in a plane through the origin,
2, 3. 4), 1
22.
Show
A^ +
where the k^ are scalars of F
Hint: Consider
/,
+ kr,A^
...
+ kp^^A + kpl
4,^^,/!^
in the light of
Problem
16.
23.
is satisfied
by
L J, [:;]
4^-24
A '-[:-:]
(b)
\_
\.
(c)
[;:]
=
Ans.
(a)
4^-24=0,
(c),
(b)
+ 27 =
0,
(c)
A^ - 2A +1
24. In
(b)
A'''^
I-^A.
(c)
A~'^=2l-A,
and
A'
chapter 10
Linear Equations
of
unknowns
xi.a;.
>
*r?
OqiX-i +
022X2 +
(10.1)
"Wl*!"'" Ob2*^2 +
\
+ c!2n*-n
^2
OIj,
Aa
in
(o's)
F
x^
in
By
a solution in
F which
sat-
equations.
When
it
is
said to be consistent;
many
solutions.
of linear equations over F in the same number of unknowns are called equivevery solution of either system is a solution of the other. A system of equations equivalent to (10.1) may be obtained from it by applying one or more of the transformations: (o) interchanging any two of the equations, (b) multiplying any equation by any non-zero constant in
Two systems
if
alent
F, or (c) adding to any equation a constant multiple of another equation. Solving a system of consistent equations consists in replacing the given system by an equivalent system of prescribed form.
^11
\'
=
(10.2)
02
02 2
^2n
*2
K
hm
r".
or,
more compactly, as
(10.3)
?here
AX
A =
[o^^-] is
H
=
[x^.Xr,
xj\ and H
[h^h
A^]'
Consider now
for the
ai2
02 2
"i?i
K
[A
(10.4)
021
^271^2
H]
ml
0^2
<^nn
(Each row of (10.4) is simply an abbreviation of a corresponding equation of (10.1); to read the equation from the row, we simply supply the unknowns and the + and = signs properly.)
75
76
LINEAR EQUATIONS
[CHAP. 10
To solve
to replace
the entire
the system (10.1) by means of (10.4), we proceed by elementary row transformations A by the row equivalent canonical matrix of Chapter 5. In doing this, we operate on rows of (10.4).
Example
1.
3xi +
^.X-^
2x-j^
x^
^Xq^
2X2
Xg
=
=
+ ^Xq + 2xq
2
1
4
1 1
2'
"1
2
1
1 1
2
1
-2
=
-3
-1
-]
-5 -5 -5 -5
0.
V
.0
1
-11 -5 -5
'1
-1
1
1'
1 1 1
1
1
.0
OJ
x-i
0.
is the
=1, 2
= 0,
xq =
1.
Ex-
= [l,
0, l]
FUNDAMENTAL THEOREMS.
When the coefficient matrix A of the system (10.1) is reduced to the A:]'. row equivalent canonical form C, suppose {A H] is reduced to [C K], where K= ^1,^5 If A is of rank r, the first r rows of C contain one or more non-zero elements. The first non-zero element in each of these rows is 1 and the column in which that 1 sta,nds has zeroes elsewhere.
of zeroes.
Prom
the first
x:
,
...
bleS X:
If
Jr+1
If
X:
Jr+2
= k
...
,xj
r rows of [C K], we may obtain each of Chapter 5) in terms of the remaining varia-
k^.
k^
ac,-
0,
values for
X,-
X.
% Ji
,,
a solution.
kj; 7^
On
least one of
V+i'
"r+s
'
J2 is different
x ,
...
X- constitute Jr
0,
0%
+ 0*71
7^
and (10.1)
is inconsistent.
A and
[A
H]
Thus
=
A system AX
of
linear equations in n
of the
if
r<n, re-r of the unknowns may be chosen When these n-r r unknowns is of rank r. are uniquely determined. unknowns other r values, the whatever assigned any unknowns are
In a consistent
system (10.1)
of rank
Xi +
2*2
3*^3
~ 4^4
2x4.
= = =
6 4
Example
2.
aci
+ 3^2 +
5a:2
xg
2^1 +
2%3
5x^1
10
CHAP.
10]
LINEAR EQUATIONS
77
12-3-4
[A
12-3-4
">-/
6
'Xy
1 1
-11 -8 10
4
H]
13 1-2 2 5-2-5
4 10_
1 1
4 4
2-2 3-2
lO"
4
2-2
_0
10
"l
-11
1
-V
0-2
[C K]
^0
10
-2-4o,
Since A and [A H] are each of rank r = 3, the given system is consistent; moreover, the general solution contains n-r = 4-3 = 1 arbitrary constant. Prom the last row of [C K], x^= 0. Let xs = a, where a is arbitrary; then aj-l = 10 + 11a and xq = -2-4o.
The solution
of the
system
is given
by x^ = 10 + lla, xj =
xg = a,
x^,
X
If
or
= [lO
+ llo,
-2-4a,
a, o]'
1) that
solution
is over
F.
If
F when the arbitrary values to be assigned are over F. However, the system has many solutions over any field 2F of which F is a subfield. For example, the system of Example 2 has infinitely many solutions over F (the rational field) if o is restricted to rational numbers, it has infinitely many real solutions if a is restricted to real numbers, it has infinitely many complex solutions if a is any whatever complex number.
tions
over
infinitely
See Problems
1-2.
NON-HOMOGENEOUS EQUATIONS. A
a-^
linear equation
Xi + 0^X2 +
+ n*n
is called
non-homogeneous
if
equations provided
is not a
A 7^ 0. A system AX = H is called a system of non-homogeneous zero vector. The systems of Examples 1 and 2 are non-homogeneous
systems.
In
Problem 3 we prove
ni.
A system
of
re
non-homogeneous equations
in n
is n, that is.
method above, two additional procedures for solving a consistent system non-homogeneous equations in as many unknowns AX = H are given below. The first of these is the familiar solution by determinants.
of n
(a) Solution
by Cramer's Rule. Denote by 4, (i = 1,2 n) the matrix obtained from A by replacing Its Jth column with the column of constants (the h's). Then, if \A\ y^ 0, the system AX = H has the unique solution
<1"'5)
777
X2
'
"'
See Problem
2xi
4.
X2 + Sxg + X2
*4
Example
Xl +
3.
+ 2*2 + 2x3
We
find
1 1
6 2
-3 -4 -2 1
-120.
-3
-4
1
-2
2
-240
2-3
-3
78
LINEAR EQUATIONS
[CHAP. 10
2
1
15
6 2 8
-1 -3 -4
3 2
8-21
2
-24,
1-1-4
1
2-3
2
1 1 1
2
5 5
2-3
and
3 2
6
2
-3 -1 -2 8
2
2_
-96
Then
x-i
-240
-120
A^
-24
-120
= 0,
and
Ml
-120
-96
^4
-120
^.
If
^ #
|
0,
AX
is
given
by
(10.6)
A-^-AX
A-^H
or
A-^H
Xg
is 2 3 2
1 1
2xi + 3X2 +
Example
4.
The
x^ +
2%
a;2
+ 3xg +
1
3 2
3^1 +
2a:3
-5
1
From Problem
2(6), Chapter 7,
A
18
-5
1 ["9"
Then
"35'
1
-5
1
-5
1
AX
A-^H
J_
18
-5
1
29
18
.
-5
L8_
5_
The
See Problem
5.
HOMOGENEOUS EQUATIONS. A
(10.7)
is called
linear equation
'^1*1 +
"2*2 +
+ ''n'^n
homogeneous.
A system
of linear equations
(10.8)
in n
AX
is called a
unknowns
always consistent.
system of homogeneous equations. For the system (10.8) the rank and the augmented matrix [A 0] are the same; thus, the system is = is always a solution; it is Note that X = 0, that is, %i = xs = =
the rank ot
xj^
A
=
II
is n,
unique solution
X2=
...
then n of the equations of (10.8) can be solved by Cramer's rule for the = x^= and the system has only the trivial solution. If the rank of
is
r<n. Theorem
IV.
trivial
Thus,
V.
=
|
0.
VI. If the rank of (10.8) is r < n, the system has exactly n-r linearly independent solutions such that every solution is a linear combination of these n-r and every such linear See Problem 6. combination is a solution.
CHAP.
10]
LINEAR EQUATIONS
79
two distinct solutions of AX = H. Then AX^ = H, AX^ = H, and A (Xx- X^) = Y = X^~ X^ is a non-trivial solution of AX = 0.
AY
0.
then Zy, +
'P
Conversely, if Z is any non-trivial solution of AX = and if X^ is any solution of AX = H, X = Xl+ Z is also a solution of AX = H. As Z ranges over the complete solution of AX = 0, Z ranges over the complete solution of AX = H. Thus,
VII.
If
AX
E AX
is
consistent, a complete
AX
= H.
system
i
Example
5.
In the
Xi
2x2
+ 3x3 =
set x^ = 0; then xg = 2 and x^ =
5
1.
Xj +
^2 + 2 Xg
particular
solution is
A:^,
= [o,
1, 2]'.
The complete
solution of < *^
\^Xi
~
+
*^ Xq + 2*3 =
-^
IS
[-7a,a,3oJ
where a
is arbitrary.
is
[-7a,a,3o]' +
[O, 1,2]'
[-7a,
+a, 2+3a]'
Note. The above procedure may be extended to larger systems. However, it is first necessary to show that the system is consistent. This is a long step in solving the system by the augmented matrix method given earlier.
SOLVED PROBLEMS
xi +
e
X2 ~ 2xs + ^2 +
X4 + 3 K5
=
= =
2i 3
ail
2%
+ 2x4 + 6*5
+ 2 X2 - 4 Xg - 3 X4 - 9 xg
tion
[A H]
1-2 2-1 2
2
l'
2
-v-
"1
r
'\J
1 1
-4 -3 -9
3_
1
-3 -1
-2 -2
2
1
-6
1
8
3
1
0_
0-1
-2
-18
3
1-2
-6
1 1
-18
1
-2000
13
Then
x^ -
solution
^4
1.
+ 33=5 =
0.
Take
x^^
2a,
x^^a,
xg = a and x^ = b. where a and b are arbitrary; the complete x^ = -3b. x^ = b or as AT = [l, 2a, a, -3 6, 6]'.
x^ +
X2 + 2Xg +
X4
= = =
5 2 7
.
Solve
2%i + 3x2 -
4xi +
Snliitinn*
5%
'11
2
s - 2x4 + 3xg
2
5"'
"V-
'112
1 1
[A H]
3-1-22
5
13
7
O-^tj +
1-5-4 1-5-4
-8
-13
-5 -4 -8 -5
is inconsistent
The
last
80
LINEAR EQUATIONS
[CHAP. 10
3.
Prove:
A system AX
7^
of n
non-homogeneous equations
in n
provided \A\
If
0.
is non-singular,
[/
it
is
equivalent to
=
/.
When A
is
/,
suppose
[A H] is reduced to
K].
Then X
is a solution of the
AK
and
AL
H, and
AK
AL.
unique.
4.
+ ain*n
+ 2n*n
h^
^z
+ "nn^n
^n
[a.
and
in
by
ttni.
n
ai^di-^x-^ +
n ai20iii% +
10,
and add. n
1
We have
S
which by Theorems
2
i=i
.S
i=l
1=1
3,
ain^ilXn
=1
Chapter
reduces to
"m
'^2n
22
^n
""no.
so that
x-i
^11 j-
^1
"n
n2 and sum to obtain
''is '^23
\A\.x^
"m
"2n
so that
*<,
(i)
respectively by a^n. 2n
(^,71
"2,n-i ^2
so that
x^.
'ni
2Xi +
5.
X2 + 5 *3 +
*^4
~ = =
-1
using the inverse of the coefficient matrix.
=
120 120
Solution:
2
15
-120'
80
The inverse
of
1-3-4
6-2 1 2-3 2
3 2
Then
40
.24
-40.
CHAP.
10]
LINEAR EQUATIONS
81
"120
1
120
-120"
80
5"
"
120
-1
8
.
1/5
40
.24
(See Example
3.)
-40.
2.
4/5
Xl +
6.
^2 +
s +
*4
4a;4
Xj^
= =
Solve
i + 3*2 + 2xq +
2Xi
Solution:
Xg -
o"
'X/
1
1
-
11110
2
\A
U\
2
1
13
-1
1
''\J
-2 -1 -3
i
1
11110
1
-i
3 2
o"
i 1
is 2,
The complete solution of the system is we may obtain exactly n-r = 4-2
taking a =
1,
x^ =
= 2
1
-^a + 16, ^^
first
6 =
and then a =
X2 = -2, *3 =
3,
6 =
x^ =
0.
1,
X4 =
and
x^ = -1, x^ = -3, ^3 =
b =
3,
:x:4
by taking a =
and a =
b =
37
7.
Prove: In a square matrix A of order n and rank n-1, the cofactors of the elements of any two rows
=0,
0).
AX
system
(A'X =
Now the system has but one linearly independent solution since A (A^) is of rank n-l. Hence, for the cofactors of another row (column) of A (another solution X^ of the system), we have X^ = kX^.
8.
Prove:
If
/"i,
f^
f^ are
m<n
linear forms
^j
are linearly dependent if and only if the
^^'ijfi-
0-=1.2
p)
mxp
The g's
that
if
and only
if
a.
in
lgl + "2g2 +
+apgp
ai
^^Hxfi
P
.|
si^fi
...
ap
.2
sj^fi
.2
2
J=l
1=1
a;:S--)f.
J ^J
82
LINEAR EQUATIONS
[CHAP. 10
j?/i^v
X
^i^-ii
+ "p'iP
in p unknovras
(i
1,
m)
s^j xj -
has a non-
= [a^.a^,
apV
if
and
r<p.
9.
Suppose A =
[a-
] of order n
is singular.
Show
that there
[h^j]
?^
of
AB
0.
Let Bi.Bg
of
B.
Then, by hypothesis,
AB^
^65
..
AB^
= 0.
Con-
ii&it + "iQ^'st +
"m ^nt
ni*it + n2^2t+
"nn^nt
i>nt
Since the coefficient matrix ^ is singular, the system in the unknowns h^t,^'2t
Similarly,
AB^
= 0,
AB^
= 0,
...
SUPPLEMENTARY PROBLEMS
10.
Find
x^ +
= = =
=
4
3 5
{a)
Xj_
2^:5
+ *3
3*'4
(c)
2%i + 5%2
x-^
-^
1 x^
~ 2%3 7^3
->=3
Xj^
+ %2 +
+ %4
X^ +
Xq +
3C2
Xg
2 Xg
= =
4
(d)
^1 + %2 +
-^S
%
:>:4
= = =
4 -4
2
(b)
\2x^
+ 5
X\
X-i
-^
Xq
x^ +
Xr,
Xq +
+ x^
/4ns.
(a) (b)
! = xi =
=
(d) Xi
1 + 2a -7a/3 + % = 1,
+ 3c,
x:2
ajj
= o, X3 = b, x^ =
17/3,
:;3
4a/3 = 2
5/3, Xg
11.
Find
x-i
2x^
*1 + 2*2
"*'
3*3
+ 3x3
(c)
(a)
<
2*1 +
3*i
X2 + 3*3
*3
12*1
+ 5*2 + 6*3
+ 2*2 +
2x-i
(.h)
4*1
X2 + ^x^
+ 2*3 +
2*1
3*1 + 22 +
*i
*3
- 4*2 + 5*3
0,
*2
2*1
11*5
1*3
5*4
/4ns.
(o)
*i = -3o, X2 =
*g = a
(6) *i = (d)
*2
5
*3
J.
= a
3,
*i
*o =
*4
CHAP.
10]
LINEAR EQUATIONS
83
12.
x-i
= c, x^ = d, x^ =
^c -
O
x^ =
c +d.
o
o
13. Given
A =
2 3
2
3
find a matrix
of rank 2
such that
AB
= 0.
Hint.
B from
the solutions of
AX
= 0.
14.
is singular if
and only
if its
15.
AX
be a system of n homogeneous equations in n unknowns and suppose A of rank r = n-1. [a^i, a^j OLinV of a row of ^ is a solution of AX = 0.
Show
16.
Use Problem
/
^1
15 to solve:
- 2%2 +
ixg 3%
= =
(b)
/ 2xj_
+ 2x^
Xg
(c)
y'ix^
4%2
2.^1
+ 3*2 + 4
= =
+ 2%3
=
1 2xi
%2 + 6 Xg
To
-2
5
a""
third
row of
Ans.
(a) xi =
-27a. X2
= 0, X3 =
9a
or
[3a,
0,
-a]',
(6) [2a,
-7a, -17a]',
(c)
17.
AX =H
Let the coefficient and the augmented matrix of the system of 3 non-homogeneous equations in 5 unknowns be of rank 2 and assume the canonical form of the augmented matrix to be
1 1
613
623
614 624
fcj^g
ci
625
C2
_0
ci, C2
equal to
0.
First choose
1,
AX
= H.
Then choose
X3 =
and obtain X^ = [c^, c^, 0, 0, o]' as a solualso X3 = x^ = 0, x^ = 1 and X3 = x^ = 0, Xg == 1 to obshow that these 5-2 + 1=4 solutions are linearly independent.
x^ = xg =
0,
X3 = x^ = x^ =
18. Consider
of the solutions
that
is a solution of
i^
AX
of
Problem 17
Show
if
and only
(i)
1-
Thus, with s
for (O,
IS a
complete solution of
Hint. Follow
AX
s,. .3,
.4 arbitrary except
= H.
19.
Problem 17 with c^ = c^
r^
= 0.
20.
matrix of rank
and S
is a p
matrix of rank
r^
such that
AB
= 0, then
r,
r^
< ^-
n f
21.
.1 = [a^^-] of rank 2, verify: In an x matrix A of rank .. the r-square determinants formed from the columns of a submatrix consisting of any r rows of A are proportional to the r-square determinants formed from any other submatrix consisting r
Hint.
(7 =
Suppose the
1.2
5).
of rows of A two rows are linearly independent so that a^j = p^.a^j +Ps^a,.. a^.-p^a^,f^ ij +p42a2,ha2 2j. j Evaluate the 2-square determinants j
first
I
"17
ai5
a^q
025
\'^3q
agg
and
"4(7
"AS
^'"
f?ctors''hcJ?'"
'
where (h,i,j,h =
'''''"''
''
"^
(a)
1,
a^^-a^^
a^J^(x^,
a^^aJJ
a^jUji
n).
84
LINEAR EQUATIONS
[CHAP. 10
24.
Show
that
32-1-13
IS
row equivalent to
-,,.
122-2
2 3
10
From B =[A H]
-3
ij
[_0
system
of 6 linear equations in 4
m>n
linear equations in n
ra
unknowns has 5 linearly independent equations. Show that a system of unknowns can have at most re +1 linearly independent equations. Show that when
there are
25. If
AX =H
is consistent
and of rank
r,
for
what set of
equations in n unknowns with coeffi26. Generalize the results of Problems 17 and 18 to m non-homogeneous and the augmented matrix of the coefficient If the r to prove: cient and augmented matrix of the same rank Xn-r-n are rank r and if X-^^.X^ have unknowns = in n equations non-homogeneous AX H of m
system
X
n-r+i
where
si^i + 51^2 +
^n-r+i^n-r+i
S
i=i
1,
is
a complete solution.
i and
a
c
h
h
2 and
Iq
by
E^
oEq +
6/2
"11 _/lJ
'eI
'e2
h
Show
that
cE^ + dlQ
d
'b
h.
>-
and
d
I^,
A
c
and
I^,
and E^
28.
Let the system of n linear equations in n unknowns 4X = H, H ^ system AX = K has a unique solution for any n-vector K ^ 0.
'l
0,
-1
1
1"
'xi
"n"
=
AX
X
2
1
3 3
X2
_s_
72
p.
= E^, =
n).
Now
30. Let
write
of A'
Y
(i
A be n-square and non-singular, and let S^ be the solution of AX n-vector whose ith component is 1 and whose other components are 0.
Let 4 be an m X
matrix with
1,
31.
71
m<n
1
and let S^ be a solution of AX - E^, {i -1,2,. and whose other components are 0. If K = \k^. k^.
k^Si + A:2S2
+ +
m).
,
where E^ is the
,
k^' show
that
^n^%
is a solution of
AX
= K.
Chapter
Vector Spaces
11
all
vectors will
a^]'.
we
now be column vectors. When components are disThe transpose mark (') indicates that the elements are
to be written in a column.
A
them
if
F is said to be closed under addition if the sum of any two of Similarly, the set is said to be closed under scalar multiplication
Example
all vectors [x^. x^. x^Y of ordinary space havinr equal components (x-^ = x^^ x^) closed under both addition and scalar multiplication. For, the sum of any two of the vectors and k times any vector (k real) are again vectors having equal components.
is
The set of
(6)
The set
of all vectors
is
multiplication.
F which
if
is
Thus,
X^, X^
closed under both addition and scalar multiX^ are n-vectors over F, the set of all
<"!)
is a vector
K^i
space over F.
KX^
KX^
(kiinF)
For example, both of the sets of vectors (a) and (b) of Example 1 are Clearly, every vector space (ll.l) contains the zero re-vector while the zero re-vector alone is a vector space. (The space (11.1) is also called a linear vector space.)
vector spaces.
The
space over
F.
SUBSPACES. A
set V of the vectors of V^(F) is called a subspace of V^(F) provided V is closed under addition and scalar multiplication. Thus, the zero re-vector is a subspace of F(F)- so also IS V^(F) Itself. The set (a) of Example 1 is a subspace (a line) of ordinary space. In general If X^, X^, ..., Z^ belong to V^(F), the space of all linear combinations (ll.l) is a subspace of
vector space V
lie in
is
V and
said to be spanned or generated by the re-vectors (b) every vector of F is a linear combination
X^,
X^
(11.1).
X^_
Examples. Let F be the field R of real numbers so that the 3-vectors X^ ^3= [1.3,2]' and X^= [3,2,1]' lie in ordinary space S =
s can be expressed as
V^R).
\i
.
,
2 i]'
\a b -
cY
j
ot
85
86
VECTOR SPACES
[CHAP. 11
yi +
Ti ^1
y2 +
ys
3y4.
274.
+ 72^2 + ys'^s +
y4-^4
yi
+ 2X2 + Sys +
3y2 +
2y3
yi +
Ja
yi +
(i)
y2
ys
3x4
yi +
yi
2y2 +
3y2
3ys + 2y4
2y3
y4-
is consistent.
Xj^,
Xj. Xg,
X4 spanS.
tt)
and ^2 are linearly independent. They span a subspace (the plane real numbers. S which contains every vector hX^ + kX.^. where /i and k are
The vectors
X-y
of
See Problem
1.
BASIS
maximum number of linthe dimension of a vector space V is meant the number of linearly inminimum same thing, the early independent vectors in F or, what is the is considered as space ordinary geometry, dependent vectors required to span V. In elementary it as a considering been have we Here (a, 6, c). a 3-space (space of dimension three) of points is of line L the and is of dimension 2 3-space of vectors ia,h,c \. The plane n of Example 2
AND DIMENSION. By
dimension
1.
r
A
we
When
n,
basis of the space. Each veclinearly independent vectors of V^(F) is called a of this basis. All bases of vectors the combination of tor of the space is then a unique linear independent vectors of the linearly r V^(F) have exactly the same number of vectors but any
set of
The vectors
X^.
Xr,.
X^
of
Example
b. c ]'
of S can be
expressed
yi
X-y 71-^1
+ +
ys
2y2 3X2
a
h c
,
+ + +
ys 3ys
2ys
+ j^Xq + yg^a
yi
yi +
!yi xi
The
resulting system of equations
y2 +
ys =
,^ + 3X3 = yi + 2y2
i),
has a u-
Xi+
nique solution. basis of S
.
3X2 + 2xs =
are not a The vectors X^.X^.X^ are a basis of S. The vectors X^.X^.X^ X^. X^ set the basis is whose tt Example 2, of (Show this.) They span the subspace
Theorems I-V
stated as:
I
of
In
particular,
Theorem IV may be
re-
If
X X^
..
X^
of their components,' then from the set r the remaining m-T vectors lie. T vectors span a V^iF) in which
the rank of the raa:ro matrix Unearly independent vectors may be selected. These
F and
if r is
See Problems
2-3.
CHAP.
11]
VECTOR SPACES
87
If
JYi,
^2,
...,
Zm
are
m<n
X^,
V^iF) and
if
J^+j^,
^m4.2. .
^n
X.^,
X^
X^
is a basis of V^iF).
III.
If
Z^.Zg,...,! are m<7i linearly independent -vectors over F, then the p vectors m
^i
i^'ij^i
if
(/=l-2
P)
if
p>m
or,
when p<m,
IV. If
Zi,
.2, ...,
n
Yi
=
7
1
=
1
a^jXj
"
(i
= 1,2
re)
if
[a^-] is nonsingular.
IDENTICAL SUBSPACES.
only
is
if
If
,V^(F) and
X(F)
are two
is a vector of ^V;[(F)
subspaces of F(F), they are identical if and and conversely, that is, if and only if each
See Problem
5.
X+Y
Let V\F) and f3^) be two vector spaces. By their where X is in V^(F) and Y is in V^iF). Clearly, this
is a vector space; we call it the sum space V^^iF). The dimension s of the sum space of two vector spaces does not exceed the sum of their dimensions.
the intersection of the two vector spaces is meant the totality of vectors common to the Now if Z is a vector common to the two spaces, so also is aX; likewise if X and y are common to the two spaces so also is aX^bY. Thus, the intersection of two spaces is a vector space; we call it the intersection space V\F). The dimension of the intersection space of two vector spaces cannot exceed the smaller of the dimensions of the two spaces.
By
two spaces.
V. If two vector spaces FV) and V^(F) have V^\F) as section space, then h + k = s + t.
inter-
Example 4. Consider the subspace 7f, spanned by X^ and X^ of Example 2 and the subspace tt^ spanned by Xs and X^. Since rr^ and tt^ are not identical (prove this) and since the four vectors span S, the sum space of tt^ and tt^ is S.
Now 4X1 - X2 = X^: thus, X^ lies in both tt^ and 7t^. The subspace (line L) spanned by X^ IS then the intersection space of 77^ and 77^ Note that 77^ and 77^ are each of dimension 2, S IS of dimension 3, and L is of dimension 1. This agrees with Theorem V.
.
NUIXITY OF A MATRIX.
,
For a system of homogeneous equations AX = 0, the solution vectors X constitute a vector space called the null space of A. The dimension of this space, denoted by ^ A'^ IS called the nullity of A.
Restating Theorem VI, Chapter
VI.
If
10,
we have
A has
nullity N^
then
AX
88
VECTOR SPACES
[CHAP. 11
Xti A
AX
is a linear
basis for the null space of A is any set of N^ linearly independent solutions of
AX
0.
9.
See Problem
Vn. For an
(11.2)
mxre matrix
of rank rj
rA
and nullity
Nji
N/^,
If
A and B are
of order
ra
rg
the
AB
>
'^
'B
- "
(11.3)
Nab > Na
Nab > Nb
Na + Nb
Nab
<
See Problem
10.
ra-vectors
0]',
0]',
E^ = [0,1,0
En = [0,0,0
1]'
are called elementaiy or unit vectors over F. The elementary vector Ej, whose /th component The elementary vectors E^, E^ constitute an is 1, is called the /th elementary vector.
Every vector
[%,%
n
1=1
sum
X
Of the elementary vectors.
xiEi
XxE-^
2-^2
+
are
'^nEr,
The components %,
x^
x^ oi
X X
Then there
a^
X
i
1
=i
aiZi
a.^Z.^
02
Zg +
dj^Zj^
These scalars
JY^
01,05
= [oi, og
Writing
(11.4)
X
is the matrix
[Zi.Zg
Zn]Xz
=
Zi,
Z-Xz
Zg
Z.
(F) and
where Z
Examples.
If
Zi =
-1
]',
Z3
= [l, -1,
-1
]'
is
a basis of
Fg
Xz
[1.2.3]'
r
2
1
"1
-1
3
-1
[7,0,-2]'
-1 -1
-2
See Problem 11.
CHAP.
11]
VECTOR SPACES
89
Let
(11.5)
W^,
Suppose
-r
Xig
= [61,^2
K\' so
that
X
(11.4) and (11.5),
= =
\_\,\
WX^
Xy
From
(11.6)
JV^
IT
Zj^
and
X^
IF"^Z.
^'^.Z-X^
PX,
where P =
Thus,
VIII.
If
P determined
,
Xf/
= PX^
See Problem
12.
SOLVED PROBLEMS
1.
The
A'
= [%,
x^, Xg,
x^Y,
where x^ + x^ + x^ + x^ = Q
vectors of the set.
is
a subspace V of V^(F)
since the sum of any two vectors of the set and any scalar multiple of a vector of the set have
2
2.
4
is of rank 2, the vectors
X.^
Since
2
1
= [1,2,2,
1 ]',
1 ]'
4
3
1
(F).
X-^
Now any two of these vectors are linearly independent; hence, we may take and Xg as a basis of the V2{F).
A:g.
or
X^
3.
Since
14 13 12 12
2
^"1
= [1,1,1,0]',
A's
= [4,3,2,-1
]',
A'g
= [2,1,0,-1]',
-1 -1 -2
and
^"4
For a basis, we may take any two of the vectors except the pair Xg, X^.
4.
The vectors
X^, X^, Xg of
Problem
2 lie in V^(F).
Find a basis.
For a basis of this space we may take X^,X^,X^ = [l.O.O.O]', and Xg = [o, 1,0,0]' or X^.X2.Xg = since the matrices [X^, X2, X^.Xg] and [X^. X2.Xg. Xy] are of rank [1.2,3.4]'. and X; = [1,3,6,8]'
4.
90
VECTOR SPACES
[CHAP. 11
5.
Let Zi = [l,2,l]', Z2 = [l,2,3]', A:3 = [3,6,5 ]', Y^ = [0,Q,lY, i'2=[l.2,5]' be vectors of Show that the space spanned by X-^, X^, Xq and the space spanned by Y.^, Y^ are identical.
First,
Vs(F).
we
Xq
2Zj^
X^..
Also, the Yi being linearly independent span a space of dimension two. say
2lg^(F)
(c
Thus, any vector iX.^, Y^ = 2^2- ^li ^i = ^i^ - 4^1. X^ = Y^- 271. (50 + 26)^2 - (2a + b)X.^ of iPg^CF) and any vector cXj^ + dX^ of iff(f) + d)Y2 - (4c + 2<i)Ki of QVi(F). Hence, the two spaces are identical.
Next,
Fi =
kXz -
aY-^
fcK,
of
is
a vector
is a vector
6. (a) If
X^ =
then
-2% +
1
5X2 + IXg
(b) If
1,0,-2,1]',
then
3Ci
1
1 1
%
is of
%
%4.
2 3
1
1
-2
1
1
rank
2.
Since
1
0,
this requires
= - 2*1 + 4% -
and
xi
%
4.
=
1
%+
2*2 -
s;*
These problems
linearly independent
verify: Every ^(^ ) may be defined as the totality of solutions over F of a system homogeneous linear equations over F in n unknowns.
of n-k
7.
two vector spaces Vn(F) and I^(F) have Vn(F) as sum space and V^iF) as intersection space, then h + k = s + t.
Prove:
If
Suppose
s
= h; then Vr!XF) is a
is 1^' itself.
Thus, s=k.
=h
and
will
show
same
is true if
= k.
Suppose next that t<h. t<k and let X^^.X^ Yh so that X1.X2 ^t.i't-n A:i,A:2 ^t.Zt+1 2fe span I^'^F).
^t^-i, yt-+2
X^ span
Yh
V^(F).
Now suppose
(11-4)
X 1=1
t
"iXi
.S
t=t+i
a^Yi
S
i=t+i
biZi
ft
i=i h
i=t+i
i=t+i
k
to
left
belongs to P^(F),and from the right member, belongs also to ^(F); thus
it
belongs
X-t
a^+i = at+2 =
k
2;
"f,
= 0.
Now
from (11.4),
2
i=i
"iXi
b^Z^
''t^i =
s t*^"^, *t+2 = = ^fe = as was to be proved.
:
i=t+i
But the X's and Z's are linearly independent so that a^ = 05 = = t = the ^f's.y-s, and Z's are a linearly independent set and span ^(F). Then
=h + k-t
CHAP.
11]
VECTOR SPACES
91
8.
1,1,1
]'
and
Fj
1.0,1
]'
113 2 110
1
is of rank 3, the
sum
12
space
is V,(F).
X-^,
X^,
and
Y^
Prom
h + k
t.
To
find a basis,
we equate
linear combinations
- 3e
c
take d =
for
^2a
( 3a
66
= =
1
obtaining
a = 1/3,
6 = -4/3,
c =
-2/3.
Then
- 2e
aX;]^
+ 6^2 = [-1.-2/3.-1/3
]' is
is also a basis.
113
9.
Determine a basis
space of A
2
2
4
1
10
113
Consider the system of equations
AX
x^ =
=
which reduces
to
\x2+
A
basis for the null space of
.4
Xs + 2x^ =
of these equations.
10. Prove:
r >
AB
r A
- n.
Suppose
first that
Then the
10,
first
r,
rows of
AB
r.
rows of B while
"
By Problem
AB
is
^45
>
'k
+ %
PAQ
Suppose next that A is not of the above form. Then there exist nonsingular matrices P and Q such that has that form while the rank of PAQB is exactly that of AB (why?).
the special
case when
11.
Find
its
[l 1 0]'
Write
1 1
!a
Then
a
+ i + c = +
b
c c
(i)
aZ^ + 6Z2 +
cZr^.
that is.
2
1
+ h
1
= =
2
1
and a =
0,
6 =
1,
c = 2.
Thus
we have X^
[0,-1,2]'
92
VECTOR SPACES
[CHAP.
11
Solution (6).
Rewriting
(i)
as
{Z-^.Z^.Z^^X^ = ZXg,
1
we have
1
-1
X^
Z X
1 1
-1
1 1
.
2
1 _ _
[0,-1,2^
12.
Let X^ and
X-^
X
ff^
7,^
= [1,1,0]'
= [1,2,2
]'.
1 1
'l
2 2
1
l'
Here
\_Z^, Z^,
Zg]
2 2
and
2-3 2 0-1 2
-3
-1
4
1
1 1
Then
W'^Z
by (11.6).
0-3
SUPPLEMENTARY PROBLEMS
13.
Let
[x-i^.
x^. x^.
X4,y
R denotes
Which of the
x-^
with x^ = x^.
0.
x^=2x^.
x^.Xr,.Xs.xj^
integral.
vectors with %4 =
(e).
14.
Show
that
1.1.1.1
]'
fi^(F) of
Problem
2.
15.
Determine the dimension of the vector space spanned by each set of vectors.
[1.1,1.1]'
[1,2,3.4.5]'
(a)
[l.l.O.-l]'
(b)
[3,4,5,6]'
,
[5.4,3,2,1]',
[1,2,3,4]'
^'^
[1.2,3,4]'
[1.1,1.1,1]'
[2.3.3,3]'
[1,0,-1,-2]'
Ans.
r= 2
16. (a)
(6)
Show that the vectors X-^ = [l,-l,l]' and X^ = [3,4,-2]' span the same space as Y^ = [9,5,-1 ]' and 72= [-17,-11.3]'. Show that the vectors X^ = [ 1,-1,1 ]' and A'2 = [3,4,-2 ]' do not span the same space as Ti = [-2,2,-2]'
and K, = [4,3,1]'.
n. Show
X^.X^
X^
X^
Hint.
Assume
51
aiXi
biXi-
CHAP.
11]
VECTOR SPACES
93
18.
Consider the
4x4
matrix whose columns are the vectors of a basis of the Vi(R) of Problem 2 and a basis of 3. Show that the rank of this matrix is 4; hence. V^R) is the sum space and l^(R), the
19.
in
Problem
8,
20.
Show
that the space spanned by [l,0,0,0,o]', [0,0,0,0,1 ]', [l.O,l,0,0]', [0,0,1,0,0]' [l,0,0,l,l]' and the space spanned by [l,0.0.0,l]', [0,1,0,1,0 ]', [o,l,-2,l,o]', [l,0,-l,0,l ]', [o,l,l,l,o]' are of dimensions 4 and 3, respectively. Show that [l,0,l,0,l]' and [l,0,2,0,l]' are a basis for the intersection space.
Z^=
(c)
[l.l.o]',
(a)
(b)
[1,0, l]',
[l.l.l]'.
Ans.
[-1/3,2/3,0]',
-1
]',
(c)
[l/3, 1/3,
]'
Ans.
(a) [-2,-1,1]',
[-6,7,-2]',
]'
23.
Let X^ and
trix
X^^
Determine the
P such
that Xj^ =
PX^
Zi=
ea)
1^1
[1,0,0]',
Z2=[i,o,l]',
[^2= [1,2,3]',
2
Z3=
If'3=
[1,1, il^^^
Zi = [0,1,0]',
!fi
2^
= [1.1,0]',
23
= [1.2.3]'
= [0,1,0]',
[1,-1,1]'
= [l,1.0]'.
1^2= [1,1.1]',
1^3= [1,2.1]'
41
,
r
(6)
-2"1
2
Ans.
(a)
^
2
-1
2j
ij
24. Prove: If Pj
[''1.^2
is a solution of
AX
n
=
Ej
(j = 1,2
n).
then
hjPj
is a
solution of
AX
= H.
where H =
KVH
=
Hint.
h^Ej^ +
h^E^+
+hnE^.
25.
The vector space defined by all linear combinations of the columns of a matrix A is called the column space of A. The vector space defined by all linear combinations of the rows of A is called the row space of ^. Show that the columns of AB are in the column space of A and the rows of AB are in
the row space of
fi.
26.
Show
that
the vector
if
and only
if
the
1111
(6)
space of
(a)
1
1-1,
1
12 12
3 4 3
Ans.
(a)
[1,-1,-1]',
(6)
1,1,-1, -i
]',
28. Prove:
(a)
N^,>N^. N^^>N^
= n
(b)N^,<N^^N^
r^
Hint:
r^g
r^g
<
and
rg
Consider n~r^g
Then resolve
Problem
5.
chapter 12
Linear Transformations
DEFINITION. Let
%]' and Y =
lyx. y^.
JnY ^^ ^^ vectors
are related by
yi
'^ll'^l
"^
^12 "^2
"^
^^Yi J^
+
(12.1)
df^Yj.^ri
or, briefly,
AX
[a^.-]
where A =
is over F.
Then (12.1) is a transformation T which carries any vector X Y of the same space, called its image.
of
(a)
(fe)
it
it
b.
2 5 3
'12"
A"
Example
1.
AX
in ordinary
space Vq(R).
13
'l
1
2 5 3
(a)
The image
of
A"
= [2,0,5]' is
1 1
2 3
= 5
27 17
[12,27.17]'.
'l
'x{
*2
=
'2'
(6)
is
= [2,0.5]' is obtained
by solving
1
.1
2 3
5
3_
3.
112
Since
12 13
5
3 5
10 10
1
13/5
11/5
.
[13/5,11/5,-7/5]'.
-7/5
BASIC THEOREMS.
if
If in (12.1),
= [\,Q
"nf]'-
then
I.
Y =
[a^j.a^j
Y =
an,
Ogi,
...,
a^J'
and, in general,
of the
basis vectors are known, the respective columns of A being the coordinates of the images
of these vectors.
See Problem
l.
94
CHAP.
12]
LINEAR TRANSFORMATIONS
95
A A
transformation, is non-singular.
in.
See Problem
2.
See Problem
3.
When A
A'^y
carries the set of vectors Y^, Y^, ...,\ whose components are the columns of A into the basis vectors of the space. It is also a linear transformation.
V. The elementary vectors ^ of \{F) may be transformed into any set of n linearly independent n-vectors by a non-singular linear transformation and conversely.
VI. If
IT
CZ
carries
Y = AX carries a vector X into a vector F, if Z = BY carries Y into Z, and if Z into W, then Z = BY = {BA)X carries X into Z and IF = (CBA\X carries
into
IF.
VII. When any two sets of re linearly independent re-vectors are given, there exists a non-singular linear transformation which carries the vectors of one set into the vectors of
the other.
CHANGE OF
BASIS. Relative to a Z-basis, let 7^ = AX^, be a linear transformation of ^(F). Suppose changed and let X^ and Y^ be the coordinates of X^, and Y^ respectively relative to the new basis. By Theorem VIH, Chapter 11, there exists a non-singular matrix P such that X-^ = ?X^ and Yy, = PY^ or, setting ?~^ = Q, such that
that the basis is
X^
QX^
=
and
Y^
QY^
=
Then
where
(12.2)
Y^
Q-%
Q-^X,
6
=
Q-^AQX^
BX^
Q-^AQ
Two
matrices A and B such that there exists a non-singular matrix We have proved
for
which B = Q'^AQ
vm. If Y^ = AX^ is a linear transformation of V^(F) relative to a given basis (Z-basis) and Yjf = BX^ is the same linear transformation relative to another basis (IF-basis), then A and B are similar.
Note. Since
P"^, (12.2)
later.
will be
for
made
might have been written as B = PAP-^. A study of similar matrices There we shall agree to write B = R'^AR instead of S = SAS'^ but
no compelling reason.
96
LINEAR TRANSFORMATIONS
[CHAP. 12
3
1
Example
2.
Let
AX
1
1
2 3
let
W^
2
IFg
[l.2,l]', W^ = [1,-1.2]',
image relative
= BXjf
corresponding to
AX.
1
(c)
Given the vector X = [3,0,2]', Find the linear transformation Use the result of (b) to find the image ijf of Xy =
(a)
(b)
[1.3,3]'.
1 1
Write
[W^.W^.W^] =
2
1
-1 -1
2
then
-1
1
9
-2
-1
-1 -3
Xff =
(a)
Relative to the
If -basis,
the vector
W X
Yf^
= [l,l,l]'. =
The image
of
is
AX
[9,5,7]'
W Y
[14/3,20/9.19/9]'.
36
(b)
21
-15
-11
-1
w\
36
W^AX
(W~^AW)Xjf = BXj^
21
10
-3 23
21
-15
-11 -1
6
= 2
(c) Yj.
21
10
3
3
[6,2,7]'
-3 23
L.
See Problem
5,
SOLVED PROBLEMS
1. (a)
Y = AX which
carries E^ into
Y^
= [1,2,3]',
E^ into [3,1,2]',
{h)
(c)
(d)
Find the images of li= [1,1,1]', I2 = [3,-1,4 ]', and ^3 = [4,0,5]'. Show that X^ and Zg ^-^^ linearly independent as also are their images. Show that Xi, X^, and Z3 are linearly dependent as also are their images.
1
3
1
2
1
(a)
By TheoremI, A
=^
AX
2 3
13
(6)
2
1
The image
of
X^=
[l,l.l]' is
Y-^
[6.4,8]'.
The image
of
-Yg is
image of Xg
is
K3
=[ 14.13,27]'.
1
6
is 2 as also is that of [^i, Kg]
8 9
(c)
The rank
of [A'^.Xg] =
1
_1
-1
4
4
8
19
We may compare
X^
CHAP.
12]
LINEAR TRANSFORMATIONS
97
2.
Prove:
if
and only
if
is non-singular.
=
Suppose A is non-singular and the transforms of X^ ^ X^ are Y = AX^ = AX^. Then A{X-i^-Xi) homogeneous linear equations AX = Q has the non-trivial solution X = X-^-X^. This sible if and only if .4| = o, a contradiction of the hypothesis that A is non-singular.
the system of
|
and
is pos-
3.
Prove:
non-singular linear transformation carries linearly independent vectors into linearly in-
dependent vectors.
Assume the contrary, that is, suppose that the images Yi = AXi. (i = 1,2 ent vectors X-^.Xr, Xp are linearly dependent. Then there exist scalars s-^.s^
P
p)
sp
^
1=1
H'^i
^lYi +
s^Y2+
s^Yfy ^^
f^
P
'
.|^ ^(-4^1)
A(Sj^X^+ S2X^+
+ spXp)
o.
But this
is contrary to the
Hence, the
independent.
4.
F =
iZ
carries Z^ =
1,0,1
]'
into
[2,3,-1]',
,!
fj^,
[3.0,-2]', and I3
=[
^^s
=[ 1.-1.1
]'
into
Let
aX.i^
+ bXr,+
cXg= E^;
then
-6
a + b
+ 2c =
and a
-^,
6 = 1,
c =
i.
Thus,
E^= -hX^ + X^
^Xg
c =
and
its
imageis
J
Y,
-^2, 3,-1
]'+ [3,0,-2]' +
is K, = [l,l,l
Y2 = 1-1.3,1
]'.
-1
3
1
[Y^.Y^.Yg]X
-2
2
1
5. If
Yy
= AXf, =
2
3
2
1
X^
Problem
12,
Chap-
same transformation
7^ =
BX^
4
1
1"
From Problem
12,
Chapter
11,
Xf^r
PX^
^z-
Then
--3
0_
-1
P''X,.,
-1"
-1
2
1
:
^!i
Q^w
d
-2
14 -6 14
9
3
and
PY
Q-^AX,
Q^^Q^r.,
\^
X,
-9
98
LINEAR TRANSFORMATIONS
[CHAP. 12
SUPPLEMENTARY PROBLEMS
6. In
Problem
show:
(b)
A Y
into
7.
Using the transformation of Probleml, find (a) the image of [-2.-5.-5]'. -4ns. (a) [8,5.11]', (b) [-3.-1. 2]'
Study the effect of the transformation Y = IX. also Y = klX.
Af
= [1,1,2]',
X whose
image
is
8.
9.
Set up the linear transformation which carries E^ into [l,2,3]', 5 into [3.1. 2]', and 3 into [2,-1,-1]'. Show that the transformation is singular and carries the linearly independent vectors [ 1,1,1 ]' and [2,0,2]' into the same image vector.
10.
if
X-^.X^.
....
X^
11.
Use Theorem
changed.
III
to
show
12.
X.
show
(a) it is singular,
(6) the
in-
-2
dependent vectors
of V^{R) is a Vs(R).
1
1
3 5
]',
^i=[l,l,l]',
JVg =
[2.I.2
(c) the
image
13.
2 4 X.
show
1
(a) it is singular,
(6) the
113
(R) spanned by
[
1.1,1
]'
and [3.2.0
]'
spanned by [5,7.5]'.
14.
Let Xi and
Yi,
(j
= 1,2
AX
Y
15. Prove:
BZ
12
16.
3
1
1_
Let
AX
^2
3
_1
2
1
A^
new
basis, say
Z, =
[1,1,0]',
(a)
(6)
= [1.0.1]',
Z3
= [1.1.1]'
A^
be Chosen.
Let
AT
Show
that
= [14,10,6]' is the
image of
new
Y has coordinates
Y^ = [8,4,2]'
0-1"
-1
1
(c)
X^
PX
and Y^ = PY.
where P
iZ^,
1
Zf2,
Z^i
-1
(d)
Yy =
Q ^AQX.
where Q
=P
^.
0"
1 1
1
17.
7^
1
1
1
Xjf
W^= [o,-l,2]',
IK,= [4,1,0]'
CHAP.
121
LINEAR TRANSFORMATIONS
99
IK5
[-2.0,-4]'- Find the representation relative to the Z-basis: Z^ = [i,-l,l]', Z2 = [l,0,-l]', Z3=[l,2,l]'.
-10
Am
2-5 -10 2
2 in the linear transformation
18.
If.
AX. A
is singular,
123"
(a)
Problem
12.
(6)
Problem
13.
(c)
2 3
4
6
X.
-4ns.
(a)
(6) (c)
]'
19.
If y = AX carries every vector of a vector space I^ into a vector of that same space, v^ is called an Invariant space of the transformation. Show that in the real space V^{R) under the linear transformation
1
-f
\
(a)
12
2
2
X. the
\l^
13
1
^ spanned
spaces.
(c)
10
1
-14-6
20.
PX
yi
(i
n.
^h
1.2
n)
in
which
/1./2
/^ is
a permuta-
(b) (c)
(d)
and
P^^P^Pt.
number of the elementary
n
Prove:
PP'
/.
(e)
(/)
Show that each permutation matrix P can be expressed as a product umn matrices K^2, ^28 ^n-T-nWrite P = [^^, E^^. ^^] where ii, ij % is a permutation
mentary n-vectors.
of a
col-
of
.
1,2
and ^
= [s, 1, 4,
Find a rule (other than P~ = P') for writing P~ For example, when n = 4 and 2], then P'^ = [2. '4. 1. 3]; when P = [E^ E^. 1, 3], then P~^ = [g, 2, 4, 1].
chapter 13
Vectors Over the Real Field
all
= [ji,
YnY
+
X-Y
1.
x^y^
x^j^
XnJn
Example
^2=
1-
[2.1,2]',
^3 = [l.-2.l]':
X^-X^
X-^-X^
= = = =
1-2
1-1 1-1
1- 1
= = =
=
+ l(-2) + +
1-1 1-2
+1-1
1-1
X.yX^
^1-2^:2
+ +
1-4
1-4
10
2(^1-^2)
X.Y
X'Y
Y'X
The use of X'Y and Y'X is helpful; however, X'Y and Y'X are 1x1 matrices while X-Y is the element of the matrix. With this understanding, (13.1') will be used In vector analysis, the inner product is callhere. Some authors write Z|y for X-Y
.
The following
(a)
X^-X^ = X^-X^,
X^-hX^ = HX^-X^)
(X^+Xs)-X^
= =
+
(13.2)
ib)
(c)
X^-(X^ + Xs)
(X^+ X^)
-
X^-X^ + X^-X^
(Xs+ X^)
X^-X^
vectors
X and Y
of
Example
are orthogonal.
^i(R),
denoted by
\\
X\\
is
(13.3)
II
^11
\/
X-X
\/
xl +
xl+
---
X.
1(c),
\\
X^\\ =
V3
See Problems
1-2.
100
CHAP.
13]
101
(13.3),
it
may be shown
=
that
x.Y
vector
iil|z+y||'
=
1
WxW
- ||y|h
The elementary vectors E^
A
are
X whose
length is
||z||
If
\X-Y\
<
\\x\\.\\y\\
that is, the numerical value of the inner product of two real vectors is at most the product of
their lengths.
See Problem
3.
If
)/(/?),
then
l!^+yll
<
IUII + ||y||
If
X^,
0,
X^
X^
i
are
m<n
then for
= 1,2
m. (c.lX^+ 0^X2+
we have
Any
set of
m<
vector
is said to be
if it is
orthogonal to every
If
a vector
is
X^,
it is
orthogonal
to the space
spanned by them.
See Problem
4.
at
of V^CR)
which
is orthogonal to
V^\R).
See Problem
5.
Since mutually orthogonal vectors are linearly independent, a vector space V'^(R), m>0, can contain no more than m mutually orthogonal vectors. Suppose we have found r<m mutually orthogonal vectors of a V^(R). They span a V^iR), a subspace of V*(R), and by Theorem HI, there exists at least one vector of V^(R) which is orthogonal to the I^(/?). We now have r+l mutually orthogonal vectors of l^(R) and by repeating the argument, we show
IV.
Every vector space V^(R), m>0, contains m but not more than
mutually orthog-
onal vectors.
Two vector spaces are said to be orthogonal if every vector of one is orthogonal to every vector of the other space. For example, the space spanned by X^ = [1,0,0,1]' and X^ = [0,1,1,0]' is orthogonal to the space spanned by X^ = [ 1,0,0,-1]' and X^ = [0,1,-1,0 ]' since (aX^ + bXr,) (0X3+ dX^) = for all a,b,c,d.
102
[CHAP. 13
V.
tor
The set
\
space Vn'^(R). ^ n
ggg Problem
6.
We may associate with any vector ^ 7^ o a unique unit vector U obtained by dividing the components of X by \\X\\ This operation is called normalization. Thus, to normalize the vector X = [2,4,4]', divide each component by ||^|| = V4 + 16 + 16 = 6 and obtain the unit vector
.
[1/3,2/3.2/3]'.
basis of Vn(R) which consists of mutually orthogonal vectors is called an orthogonal baif the mutually orthogonal vectors are also unit vectors, the basis is called a
See Problem
7.
Suppose
X^,
X^
are
a basis of
Define
= X,
y,
^S
Y ^3
^g'^3
^1-^3
Y V
V V
-'w-l
-'m
Xi
'^
%-l
"
ll*^!
y'
"'l
Then the
basis of
unit vectors
Gj =
^
1^11
(i
= l,2,...,m)
F(/?).
Example
3.
Construct, using the Gram-Schmidt process, an orthogonal basis of V2(R). given a basis
Xs=[i.2.zY.
Y^
X^ = [1.1.1]'
(ii)
Y^
X^- ~^Yi
^3 -
[1,-2,1]'
^1-1 =
[1,-2.1]'
(ill)
^3 =
^Y, - ^Y,
[1,2,3]'
- -^y, - ^[1,1.1]'
[-1,0,1]'
The vectors
G^ = -jpj =
]',
G2 =
-^,
and
Gg
-ii-
[-I/V2,
0,
1/\A2]'
Each vector
X.^
Gj =
Note that Fg =
-^2
here because
and
A^2 a^re
orthogonal vectors.
CHAP.
13]
103
Let
y^, Yg
Zi,
^2
^m be a basis
X^,
X^
Xg,(l< s<
m),
are
mutually orthogonal.
}^
Then, by the Gram-Schmidt process, we may obtain an orthogonal basis of the space of which, it is easy to show, Yj^ = X^, (i = 1,2 Thus, s).
X-i^,
VI. If
X2,
is an
orthonormal basis.
THE GRAMIAN.
Let X^, X^
Z>,
A^ A^
' '
A-L
A>) Ajj
A]^
A^ A^ A2
X-^ A,
A2 Aj X2 A2
(13.8)
A2
'
A2 A^ A2 A2
X^Xp XpXp
Xp- X^
Xp- X2
...
Xp- Xp
XpX-i
XpX^
...
is diagonal.
Problem
VII.
14,
Chapter
17,
we
shall prove
Z^, X^,
....
Xp,
G >0. The
|
equality holds
if
and only
if
ORTHOGONAL MATRICES. A
(13.9) that is, if
(13.9')
if
AA'
A'A
A'
A'
Prom
(13.9)
it
l/\/3
l/x/6
-1/^2
is orthogonal.
Example
4.
By Examples, A
l/\[Z
-2/\/6
\/\[&
l/x/2
l/\/3
is orthogonal, its
X.
XI.
The determinant
ORTHOGONAL TRANSFORMATIONS.
(13.10)
Let
AX
104
[CHAP. 13
be a linear transformation in Xi(R) and let the images of the n-vectors I^ and X^ be denoted by Yi and Y^ respectively. Prom (13.4) we have
x^-x,
and
=
=
u\\x,^x,f k\\\Y,^Y,f if
\\X,f
y,
II
\\X,f]
Y,-Y,
Y,f]
it
Comparing
right
and
left
preserves inner
if
and only
if it
A
lem
10,
linear transformation
Y=AX
is called
orthogonal
if its
matrix
is orthogonal.
we prove
XIII.
In Prob-
if
and only
if its
matrix is orthogonal.
l/\/2
l/v/6
-I/V2
AX
l/\/3
-2/\/6
l/\/6
X
l/v^
ii3
orthogonal.
The image of
I/V3
[a,i,c]'is
y
[
"
_^
_1_
_f
26
c "]/
XIV. If (13.10) is a transformation of coordinates from the -basis to another, the Zbasis, then the Z-basis is orthonormal if and only if A is orthogonal.
SOLVED PROBLEMS
1.
Zj.
find:
length of each.
2
(a)
X^-X^
XiX^
[1.2.3] -3
4
1(2)
2(-3) +
3(4)
(6)
\\X^f
X^.X^
X[X^
[1,2,3]
14
and
\\Xji
vTi
lA-jf
2(2)
+ (-3)(-3) + 4(4)
29
and
\\X^\\
V29
CHAP.
13]
105
2. (a)
Show
(b)
that 1 = [1/3, -2/3, -2/3 ]' and Y ^ [2/3.-1/3, 2/3]' are orthogonal. Find a vector Z orthogonal to both X and Y.
2/3
(a)
X-Y
Xy
[1/3,-2/3,-2/3] -1/3
2/3
1/3
2/3
O"
(6) Write
[A:,y,o]
-2/3 -1/3
-2/3
2/3
(3.11)
column of zeros.
Then by
is
orthogonal to both
A:
and K.
3.
If
X
or
Vn(R), then
\X-Y\ <
that
||A'||.||y||
if A"
Assume then
llaA^
yf
= =
=
(aX + Y)-(aX + Y)
[ax^ + y^.ax^+y^
(a x^
axn+yn]-[ax^+y^. ax^ + y^
yj^)
axn+jnY
+ 2ax^y^ +
+ (o^^ + 2a%j^ + y^
a^xf
+ 2aX.Y
\\Yf >
Now a quadratic polynomial in a is greater than or equal to zero for all real values of o discriminant is less than or equal to zero. Thus,
i(X.Yf - i\\xf- \\Yf
and \x-y\
if
and only
if its
<
<
m-\\Y\
4.
Prove:
If
a vector
is orthogonal to
X^,
X^
X^.
it
is orthogonal to the
Any vector
Then
a^X^)-Y
Thus, Y
a^X^-Y + a^X^-Y +
a^X^-Y
Xi-Y
= 0, (i = 1,2
m).
is
is
In particular, if
orthogonal to every vector of the space and by definition is orthogonal to every vector of a basis of a vector space, it is
5.
Prove:
If
a ^() is a subspace of a V^(R), k>h. then there exists at least one vector
A"
of
which
v\R) "
and
^^*
^h be a basis
of the
FV).
let X^,^^
be a vector
in the
vM)
<*)
X
that
a^X^ + a^X^ +
...
aj^X,^
+ a^^^Xf,^^
The condition
Xf,
106
[CHAP. 13
a^X^-X^ + a^X^-X^ +
...
=
=
o o
a^X^.X^ + a^X^.X^ +
...
ft + l unknowns a^.a^ ay,^^. By Theorem IV, Chapter 10, a non-trivial solution exists. When these values are substituted in (i), we have a non-zero (why?) vector X orthogonal to the basis vectors of the kA) and hence to that space.
In the
""^^
Vn-k'J'^^
^ ^^^ ^^''^'''^ orthogonal to every vector of a given V^{R) is a unique vector space
The
..-vectors
Jt,-
satisfy the
X^.X=o.X^.X=Q
X^^
Xk.X=0
system
(i) is
Since the
of rank k
Uniqueness follows from the fact that the intersection space of the V^iR) and the V^'^(R) space so that the sum space is Xi{R).
7.
V^(R),
given
Note that A- is a unit vector. Choose Y = [l/Vl, Then, as in Problem 2(a), obtain Z = [l/^Jl. -l/VI,
-IA/2 ]'
to
-Y =
1A/3]'
complete the
8.
Y^.
Yr,
Y^
vectors to be found.
(a)
Take Take
Y-^^
X^.
(b)
Y^ = Xr, + aY-^
Since
Y-^
Y^.Y^
and
o =
Y^-X^ + Y^-aY^
Y^-X^ + aY^-Y^
- -i-2
Thus.
K, =
Y^
(c)
Take
Y^ =
X3 +
aYr,
+ bY^
=
Since
Y^. K,,
yi-Yj,
Y^-
X^ + bY^-Y^
and
Y^.Y^
= =
Y^-X^ + aY^-Y^
Then
- ^ll^
and
7, =
Z3 -
^ki^ K _
2W^ v
(d)
is obtained.
CHAP.
13]
107
9.
Fg,
Take
Y^ =
X^
=
= [2.I.3Y.
Then
=
[1,2.3]'
K,
X^ -
^^i-i
"2
- ii[2,1.3]'
[-6/7.15/14,3/14]'
^3 V
^1 '"^3
[1.1.1]'
- ^
9
14
I4J
[_3
3j
]',
-1/^3
]'
10. Prove:
if
and only
if its
matrix is orthogonal.
Let
y^. Yq
AX.
Suppose A
(1)
/.
Then
=
Y^-Y^
YiY^
(X'^A')(AX^)
X\X^
X^^-X^
and, by
are preserved.
Then
X{(A'A)X2
X^X^.
A'A=1
and A
is orthogonal.
SUPPLEIVIENTARY PROBLEMS
H. Given
the vectors A'l = [l,2,l
]'.
.Yg =
]'.
find:
X.^,
Xq
Ans.
-3
(6)
Vb,
3,
V^
(c)
[l,0,-l
]',
[3.-2.1]'
12.
^3(7?).
verify (13.2).
13.
14.
Let
(a)
a:
= [1.2.3,4]' and
containing
and y.
Show
that
(b) Write
vector
and Y.
vector of
orthogonal to itself
if
and only
if it is
If X-^. X^. Xg are a set of linearly dependent non-zero ^-vectors and X^ and Xq are linearly dependent.
Z^ Xg = Xj_-Xq=
0,
then
108
[CHAP. 13
16.
Prove:
vector
P^"(/?) If
and only
if it
17.
Prove:
If
space is
I^(if).
18.
Show
A"
that
!|Z +
<
(||.Y||
||y||)^,
19.
Prove:
||
1|
|1a:|1
+ \y\
if
and only
if
20.
11.
]'
21.
Show
(o)
Y .Z
X^
of
Problem
2 are
22.
Show
them.
that if X^.
X^
are linearly independent so also are the unit vectors obtained by normalizing
(6)
Show
that
if
the vectors of (a) are mutually orthogonal non-zero vectors, so also are the unit vectors
23.
Prove: (a)
If
A
.4
is is
orthogonal and
.4
1,
each element of A
is
equal to
its
cofactor in
^4
(6) If
of^
24.
Prove Theorems
25.
Prove;
If
.4
is orthogonal,
C'BC commute.
if
26.
(or A'A),
where A
is n-square. is a
diagonal matrix
and only
if
27.
Prove:
UX
If
and Y are n-vectors, then XY'-YX' is symmetric, and Y are n-vectors and A
28.
Prove:
is n-square, then
X-(^AY) = {A'X) -Y
n
29.
Prove:
If
A"]^,
Z^
A'
if
A^
2
^=1
c^X^,
then
(a)
X-X^ =
c^,
(i = 1,2
n);
(b)X.X
30.
cl+4+...
cl
VsiR). given
(a)
X^ =
(6) [3,0,2]'
X^.
[0.l/^^2.l/\f2Y,
0,
[-4/\^, -3/^34,
0,
[3/VI3,
2/\/l3]',
[2/VI3.
V^i^R)
-3/VT3]', [0,1,0]'
in order:
31.
[1,-1,0]',
[1.0,1]',
[2,-1,-2]',
[1,3,1]',
[1,-1,-2]'
[4,0,-1]'
(6) (e)
[3.2,1]'
[2,-1,0]',
(a)
(b)
[4,-1,0]',
Ans.
[iV^. -iV^,
[^\/~2, 0,
0]',
iV2]',
[iV^.
0,
-iV2]'
[o,0,-l]'
]'
(c)
[2V5/5, -V5/5,
0]',
[^/5/5.2^/5/5.oy,
I^(ft),
32.
given
^"1
=[ 1,1,-1
and
ATg =
[2,1,0
]'.
Take
Y^ = X^,
Ans.
]'
CHAP.
13]
109
33-
X-^ =
[7,_i,_i
]'.
34.
Show
in
[l.2,3,4]',
35.
Prove:
If
is
skew-symmetric. and
+ A is non-singular, then
= (I
-A)(I + A)~^
is orthogonal.
36.
Use Problem 35
"
given
12
s"!
(a)
(b)
-5
10
2-3
(b)
oj'
"5-14
-12
Ans.
(a)
-51
^
.4
-10
10
-5 -10
2
5 -I2J'
-11
where P
non-singular, then
37. Prove:
If
is an orthogonal matrix
and
it
AP
is
PB^
is orthogonal.
38.
AX
be-
comes
71 =
P^^APX^
or
Show
that if
is
orthogonal so also is
B.
and con-
versely, to prove
Theorem XIV.
and
/
39.
Prove:
If
is orthogonal
= (I
- A) (I + A)'"^
is
skew-symmetric.
40.
Let
X
y
= [xj^.x^.xsY
and
= [yi.yQ.ygY
25, 23]'
X xY
of
and
as
ZxF
Jl
,
= [21,
where
21 =
3 ys
Z2 =
.
^1 Zg =
yi
^3 ys
%
X^, Y^.
],
After identifying
2 yi
yi
estciblish:
of
(6) (c)
XxY
-(YxX-)
=
(d)
(kX)xY
k(XxY)
XxikY),
k a scalar.
41. If W. X. Y.
(a)
X x{Y+Z)
XxY
XxZ
=
(b)X-(YxZ)
(c)
(d)
Z-(XxY) W-Z
X-Z
\XYZ\
X-Y Y-Y
chapter 14
Vectors Over the Complex Field
COMPLEX NUMBERS.
x and j are real numbers and i is defined by the relation j^ = 1, z = x^iy is called a complex number. The real number x is called the real part and the real number y is
If
fy.
are equal if and only if the real and imaginary parts of one are equal
A complex number
The conjugate
x + iy =
if
and only
z
if
x = y =
0.
of the
complex number
its
= x+iy
is given
by
= x+iy = xiy.
The sum
It
The absolute value |z| of the complex number z = x+iy is given by follows immediately that for any complex number z = x + iy,
|z|
\J
z-z =
\fW+y^
(14.1)
> \A
and
\z\
>
\y\
VECTORS. Let X
The
is to
totality of
cerning vectors of I^(C) will reduce to a theorem of Chapter 13 when only real vectors are considered.
If
= [%, x^
XnY and y =
[ji,
72
product
is
defined as
(14.2)
X-Y
XY
%yi
x^y^
XnJn
I-y =
y^
(/)
X-Y+Y.X
2?.{X.Y)
is the real part of
(cX)-Y = c(X-Y)
X-(cY) = c(X-Y)
(g)
where R(X-Y)
X-Y.
(14.3)
(c) (d)
X-Y-Y-X
(e)
(Y+Z)-X =
Y-X+Z-X
X
is
See Problem
||Z||
1.
The length
of a vector
given by
\/
X-X =
Q.
\/%%
%^2
+ XnXn-
Two
vectors
if
X-Y = Y-X =
For vectors of
(14.4)
Triangle Inequality
\\X+Y\\
2)
<
||Z||
||i'll
\X-Y\
<
\\X\\-\\Y\\
110
CHAP.
14]
111
I.
Any
set of
X^,
X^
X^,
then
it
is or-
III.
If
V^iC) is a subspace of V^C), k>h, then there exists at least one vector
in
J^(C),
m>0, contains m
m mutually
orthog-
onal vectors.
basis of I^(C) which consists of mutually orthogonal vectors is called an orthogonal If the mutually orthogonal vectors are also unit vectors, the basis is called a nonnal or orthonormal basis.
basis.
Let
=
X,. X^
X^
Define
X,
In
Xn
(14.6)
Yd
Xn
^2-^3
Yi-Xs
Y^
Y,.Y,
Y .Y l2-l2
^
y
-'m-i
y n
_ ~
y
"Si
T;
Y y 'a-i'^m y
^m-i'-'m-i
'y^
The
unit vectors
Gi
7.
(i
= 1.2
m)
V. If X^, X^ X^, (l<s<m), are mutually orthogonal unit vectors of ^(0) there exist unit vectors (obtained by the Gram-Schmidt Process) Z,,,, X, ^ ;i' in the snaoe such that the set Z Z^ Z, is an orthonormal basis.
THE GRAMIAN.
X^
Xp be a
set of
;s-
Xi'Xi Xj^-Xq
X^'X'i
X^-Xp
X^-Xp
Aj
X-y
X-^Xq
X^ Xp X2 xp
X^-X^
(14.7)
Ag X^
XqXq
Xp-X^ Xp-X^
Xp-Xp
XI X-^ XI x^
x^x. p^p
is diagonal.
Following Problem
14,
Chapter
17,
we may prove
\G\
VI For a set of re-vectors X^. X^ Xp with complex elements, holds if and only if the vectors are linearly dependent.
>
0.
The equality
112
fCHAP. 14
UNITARY MATRICES. An
if
(AyA = A(A)'=
I,
that is
if
(i)'=
^.
we have
IX.
The product
X.
The determinant
1.
linear transformation
AX
where A
XI.
if its
A
If
linear transformation preserves lengths (and hence, inner products) if and only
matrix is unitary.
XII.
AX
is a transformation of
is unitary.
SOLVED PROBLEMS
1.
Given X = [l+j, -J, 1]' and Y = [2 + 3J, 1- 2J, i]', (c) verify X-Y + Y-X = 2R(X-Y) (a) find X-Y and Y-X (d) verify X-Y -Y-X = 2C(X-Y) (b) verify X-Y = Y-X
2+
(a)
3/
X-Y
X'Y
[l-i,f,i;
l-2i
i
7 4 3j
+i
Y-X
Y'X
[2-3i.l + 2i.-i]
3i
(b)
(c) (d)
From
(a):
3i
= X-Y. =
+ +
3i) 3j)
+ (7-3i) =
14
6j
= =
2(7)
2R(X-y) 2C(X-Y)
-Y-X
-(7-30
2(30 =
2.
\X-Y\
<
p||
11Y||.
As
in the
is true if
or
= 0.
When X and Y
are non-zero
Yf
{aX+Y)-{aX+Y)
a'Wxf
+ 2aR{X-Y) +
\\Y\\
>
0.
CHAP.
14]
113
if
and only
and
if its
discriminant is non-positive,
R{X-Yf If
\xf\Yf
-F
|
Ra-Y)
if
<
O.
IkMyll
define
c =
X-Y
0,
then
|;f
fl(;5:-y)
||y
X-Y
,i
X-Y
\x-y\
Then
R[ciX-Y)]
R(cX-Y) <
=
\\cX\\-\\y\\
|c||U||-iyl|
|UM|y||
while, by (14.3(6)),
R(cX-Y)
\X-Y\.
Thus,
forall^andy.
3.
Prove:
(B)
B =
=
\(A)'AV
(A'A)
(A)A
and B is Hermitian.
4. If
i = B + iC
Since
is
if
and only
if
B and C anti-commute.
B+iC
Hermitian,
=
if
(A)A
This
is real if
(B + iCUB+iC)
(B + iC)(B+iC)
B^ + i(BC + CB) - C^
if
and only
BC + CS
= o or
BC
-CB
thus,
if
and only
B and C anti-commute.
5.
Prove:
If
is skew-Hermitian. then
iA
is Hermitian.
(A)' =
Consider B = ~iA.
Since A is skew-Hermitian,
(S)'
-A. Then
i(-A)
=
(riA)'
t(Z)'
-iA
and S
is Hermitian.
The reader
= iA.
SUPPLEMENTARY PROBLEMS
6.
(a) (b)
(c)
A:2 = [l,
1+
/,
o]',
and Xg =
[i, 1
j,
2]'
each vector Xi
is
show
that
(d)
X^ and
Jf g
Ans.
2-3i.-i
(b) ^/e
V^ V^
.
(d)
7.
Show
that [l + i.i.lV.
[iA-i.oy, and
[l -i.
1. 3j ]'
8.
9.
10.
Prove Theorems
I-IV.
11.
114
[CHAP. 14
12.
in order, construct
an orthonormal basis
for iy.C)
when
the
[O.^V2.-iV2]', [1(1 + 0.
ri
[-^i,
3jy
r
^(1 + 0.
^d
]'
(&)
[2(1+0. 2..2
T J.
3+
[3;^.
4^. 4^ J-
13. Prove: If
/I
is
a matrix over the complex field, then A + A has only real elements and
A- A
imaginary elements.
14.
Prove Theorem V.
15. If
is n-square,
show
if
diagonal
/
and only
if
if
the columns of
A'A =
If
if
and only
the columns of
16.
Prove:
is re-square,
then
X-AY
A'X-Y
17.
18.
is
+t
r
19.
i+j1
(6)
i i
Use Problem 18
"^
L-1+.-
J-
-1 + j
_9 + 8i
1
-10 -4i
l
-16-18i
-10 -4t -9 + 8j
and
r-l +
|_
2j
-4-2i"|
(b)
Ans.
(a) 5
-2-24J
29
12i
2-4i
-2-jJ'
4-lOj
of the
-2-24J
20. Prove:
21.
If
same
order, then
AB
BA
are unitary.
Problem
10.
Chapter
13. to prove
Theorem XI.
22. Prove: If
is unitary
is involutory.
1(1 +
J//3
l/\/3
2^
4 + 3i
is unitary.
23.
Show that
-k
2y/T5
-i/y/3
-^ 2Vl5_
-1
24. Prove: If
A A
is unitary
and
if
= .4P
is unitary. is
25. Prove:
is unitary
and I+A
is non-singular, then
(I
- A)
(,!
+ A)'^
skew-Hermitian.
chapter 15
Congruence
CONGRUENT MATRICES. Two
(15.1)
re-square matrices
over
if
FAP
same
Clearly, congruence is a special case of equivalence so that congruent matrices have the
rank.
When P
is
expressed as a product of elementary column matrices, P' is the product in reis, A and B are congruent provided A can
SYMMETRIC MATRICES.
I.
In
Problem
1,
we prove
whose
Example
1.
r is congruent over F to a diagonal matrix diagonal elements are non-zero while all other elements are zero.
- P'AP
is diagonal, given
12
2
3
3 5
3 5
8
10
10
-8
reducing A to D, we use [A /] and calculate en route the matrix P' First we use and XgiC-S), then H^-ii-2) and K^^{-2) to obtain zeroes in the first row and in the first column. Considerable time is saved, however, if the three
In
.
row transformations are made first and then the three column transformations. transformed into a symmetric matrix, an error has been made. We have
If
is not then
12
[AH^
2 3
3 5 8
2 8
1 1 1
1
10
c
'V-
10
4 4
3
2
5
8
10
0-1-1 0-1-1
4
10
-8
1
4-12
1
0-100
4
-2
-1
10
c
~\^
0-100
4
-2
10
4
1
-1
4
-10
-1-110
[DP']
115
116
CONGRUENCE
[CHAP. 15
-2 -10 -1
1
4-1
1
Then
10
The matrix D
to
is not unique.
10
0-100
ffgCi) and Kg(^).
for
D by
10
while the
10
transformations
replace
D by
0-900
.
There
is,
however, no pair of
D by
Let the real symmetric matrix A be reduced by real elementary transformations to a congruent diagonal matrix D, that is, let P'AP = D. While the non-zero diagonal elements of D depend both on A and P. it will be shown in Chapter 17 that the number
of positive non-zero diagonal elements
depends solely on
A.
By
of
a sequence of row and the same column transformations of type 1 the diagonal elements may be rearranged so that the positive elements precede the negative elements. Then a
sequence of real row and the same column transformations of type 2 may be used to reduce the diagonal matrix to one in which the non-zero diagonal elements are either +1 or 1. We have
II.
r is
matrix
P
(15.2)
'r-p
The
p-(r p)
is called the
signature.
Example
2.
Applying the transformations H23. K^a and H^ik), Kr,(k) to the result of Example
1 1
1,
we have
10
1
[A\n
0-100
4
-2
-10
C
1 1
10
0-10
-5
-2
10
s
IC\(/]
-1 -1
and (/AQ = C. Thus, A
is of
-1-110
index p =
2,
rank
= 3,
and signature
= 1.
if
III. Two re-square real symmetric matrices are congruent over the real field if and only they have the same rank and the same index or the same rank and the same signature.
In the real field the set of all re-square matrices of the type (15.2) is a canonical set over
congruence
symmetric matrices.
CHAP.
15]
CONGRUENCE
117
IN
have
r
IV. Every ra-square complex symmetric matrix of rank complex numbers to a canonical matrix
/^
is
(15.3)
H^ii)
1
and
K^f^i)
to the result of
Example
1
2,
we have
10
1
-5 -2
[^1/]
2
1 1
0-10
10 10 10
-5
-2i
2
i
[D
']
1
-1 -1
-1 -1
and
R'AR
^&:]
if
V.
Two
if
ra-square
numbers
and only
complex symmetric matrices are congruent over the field of complex they have the same rank.
SKEW-SYMMETRIC MATRICES.
If
is
skew-symmetric, then
=
(FAP)'
Thus,
VI.
FAT
r(-A)P
-FAP
Every matrix
B =
FAP
symmetric.
In
Problem
VII.
4,
we prove
is
congruent over
to a canoni-
B
= r
I
diag(Di, Dj
0^,0,..., 0)
where
D,-
,(f=l,2
t).
The rank
of
/}
is
2t.
See Problems.
There follows
Vin. Two ra-square skew-symmetric matrices over they have the same rank.
if
and only
if
The set
of all matrices of the type (15.4) is a canonical set over congruence for re-square
skew-symmetric matrices.
[^
],
or
conjunctive
if
P such
that
(15.5)
FAP
Thus,
118
CONGRUENCE
[CHAP.
15
IX.
Two
if
and only
if
ed from the other by a sequence of pairs of elementary transformations, each pair consisting of a column transformation and the corresponding conjugate row transformation.
X.
An Hermitian
matrix
of rank
r is
(15.6)
-Ir-p
The
p-(r-p)
Two
rank and index or the same rank and the same signature.
The reduction
of
elementary transformations.
The extreme
Problem?.
See Problems 6-7.
SKEW-HERMITIAN MATRICES.
If
is
skew-Hermitian, then
=
(FAPy
Thus,
XII.
(PAT)
FAP
Every matrix B =
FAP
Hermitian.
By Problems, Chapter
14,
= -iA is Hermitian
if
is
skew-Hermitian.
By Theorem
P such
that
Ip
FHP
-Ir-p
Then
iFHP
= iF{-iA)P =
FAP
=
= iC
and
Up
(15.7)
FAP
- ilr~p
Thus,
XIII.
which
is the rank of
A and
p is the
index of iA.
if
XIV. Two re-square skew-Hermitian matrices A and B are conjunctive they have the same rank while -iA and -iB have the same index.
and only
if
See Problem
8.
CHAP.
15]
CONGRUENCE
119
SOLVED PROBLEMS
1.
of rank
Suppose the symmetric matrix A = [a^,-] is not diagonal. If a^ / 0. a sequence of pairs of elementary 3, each consisting of a row transformation and the same column transformation, will
"n2
"ns
Now
Suppose then
we have obtained
the matrix
hss
S+1, s+2
*s+l, n
k^ S+2, s+ 1
'^ri
s+i
"n,
s->-2
in
= o.
If
every
it
4y
= 0,
we have proved
r.
however, some
say
If,
s+^ /
0,
we move
column
transformation of type 1 when u = v; otherwise, we add the (s+u)th row to the (s+t;)th row and after the corresponding column transformation have a diagonal element different from zero. (When a^, = o, we proceed as in the case Ar^+i ^^.^ = o above.)
Since
we
whose
first t
are led to a sequence of equivalent matrices, A is ultimately reduced to a diagonal matrix diagonal elements are non-zero while all other elements are zero.
'12
2.
2 5 5
to
Reduce
3 5
.2 In
2 3 5
2
1
1
1
u\n
1 1 1
2
2
5 5
c
1
-1
1
1 1
-2 -2
C
1
-1 2
c
1
-4
-1
1
1
-2
[0|^']
To
obtain (15.2),
1
we have
1 1 1 1 1 1
[0
Pi']
-4
-1
-2^f2
k\f2
1
i\Pi
\C\P'\
-2
0-1
-2
120
CONGRUENCE
[CHAP. 15
-av^ -2
j\/2
1
and
5a/2
To
obtain (15.3),
we have
1 1 1
1
[D\Pl]
1
1
1 1
-2\/2
2i
2\/2
-i
iV2
Vc\?'\
-1
-2V2
k\f2
2J
and
-i
3.
in
1
=
1
2-j
i
2-i
10 + 2i
^
1
i
1+J 2-j
1 1
1
1 1
[^1/]
1
c
'-\J
2j
-i
1 1
+i
2-i
10 +
2j
3-2J
_
10 p
-1-j
10
1
1 1
c
2i
1
-i
7+ 13
4i!'
5+12J
2i
-5+12i 3-2t
13
13
[c\n
1
-I
7+
4i
13
-5+12i
Here.
1
13
3-2j
13
4.
to a matrix
B
where
A
D^
=
)
diag(Di, Dg
0)
U :]
then S = ^.
(J
= 1,2,
It
= Q.
If
-4
0,
then some
mj
first
-aji ?^ 0. Interchange the sth and first rows and the columns and the /th and second columns to replace
oy a..
j
-a,;, -Oy
g 2
first
first
column by
l/a^^-
'S*
i
CHAP.
15]
CONGRUENCE
121
1
''2
to obtain
-1 4
and from
it,
3,
obtain
-1
Di
F4
If
/v=
0.
until
is obtained.
5.
is in
-2 -1 -4 3
Using
a^s 4 0,
we need
only interchange the third and second rows followed by the interchange of the
2
third
1-3
-2 -1 -2
10 10
10
-2
to obtain
1
-1 -2
10
1
0-3
-4320
Next, multiply the
10 10
1
-4230
first
1
first row and first column by 2; then proceed to clear the two columns of non-zero elements. We have, in turn.
first
10
-1
1
k
1
-1 -2
0-3
10 10
1
and
-5
5
i -2
-2230
1
-1
-2
-1/5
to obtain
1/2
-1
1
10
1/10 -1/5
Di
P'
Do
1
0-10
1/2
-1
0-2
1/10 -1
Thus when
-1/5
=
10-2
1
P'A?
= diag (Di.Dj).
6.
is in
1
-3 + 2i
-3 - 2j
122
CONGRUENCE
[CHAP. 15
1-j -3 + 2i
j
1
1
[^1/]
1+i
2
i
-i
1
1
c
5
-l-i
3
1 1
-3 -
2j
-13
2j
10
00
25
13
2-3i
1
5_
2-3f
1
13
13
3
13
1
5V13
5\/l3
\/l3
1
-13
2i
0-1
2i
VT.5
Vl3
vc\p'^
2+
3t
3-2i
\fl3
5\/l3
13
and
5\/T3
1
^/I3
\/T3
7.
Find a non-singular matrix P such that P'AP is in canonical form (15.6), given
1 l
+ 2i
5
- 2i
3f
- 3i -4 - 2
2
2+
1
-4 +
1
2f
13
+
5
2i
3j
1 1
[41/]
2i
3i
-4 - 2s
2i
HC
1
5j
-1 +
2j
10
1
2+
-4 +
13
10
1
-5i
-2-3!
1
10
HC
10
5i
10
1
i
HC
10
-5J
-2-Zi
-5/2
-2-2J
5J
HC
10
0-1
'10
10
\/T6
-4 -
4j
\/To
x/Io
x/To
[cin
-4 +
4i
ao
and
10
\/T0
10
vTo
CHAP.
IB]
CONGRUENCE
123
8.
is in
i
1 +
l
=
1
+ 2t
2j
-l +
i
2i
-J
2
2
-i
2
1-i
1
+i
-l-2j
1
-i
1
is
1,
-l].
10
Then P'AP
= diag[i.
i.
-i
SUPPLEMENTARY PROBLEMS
9.
is in O'
2"
"
(d-)
-1 o1
2
1
(a)
[-:
-:]
(6)
3-1
-1 -2_
1
4 4
0,
-1
.
.2
2
11.
1 1
-2
1
"^V2 -^v^ -1
.
-1 -1
1
Ans.
(a)
(b)
(c)
[::]
I
kV2
ky/2
-k
i
(d)
10.
is in
2i
r
I
H- j
1
4j
i + 2i 1+211
I
'
+j
+i
2i
-l-2i
-3 5i
|_1
2i
+ 4iJ
2
- 4;
-1 -
Ans.
(a)
2(-i-2,)]
(b)
2(1 -J)
i/\/2
(1
+ 0/2
(l-J)/\/2
(-3-2i)/l3
(3
+ 2i)/l3
11.
is in
(6)
12-2
:
-1 -2
A
3
A =
-"
-2
3
-2 -3 (b)
Ans.
(a)
1
2
1
-3/2"'
1
1
(c)
-2103
2-1-3
'1
-1
0-1
(d)
10 10
P
2
1
1
=
-1/3 -2/3 2
1/3
1
124
CONGRUENCE
[CHAP. 15
12.
in
(a)
tl +
l
1 3i
+i
3
i
+
3
2i
3i"|
'
(6)
1-i
2
-i
4
(c)
1-j
3 + 2f
3-4i
18
1
10
3 + 4t
-1-i (-5-i)/\/5
1
Ans.
(a)
[1
-i +
1
sfl
(6)
(2-f)/V5
(c)
(-2-J)
1
i/Vs
canonical form (15.7). given
i
13.
in
-1-i
-1-j
1
-1
(a)
(c)
1-j
.
1-i
-i.
L-i+j
i
J l+i
i
-1
2i
i
2+j
(6)
(i)
-1
_-2 + i
l-2i
-l+i
6i
,
-1-2J
-l'
i
Ans.
(a)
(l-J)/\/2
l/\/2
P [:
1
-r]
-i
1
(c)
P
,0
-1
1,
-2 + 3j -2 1
(rf)
l/x/2
(l-2i)/VT0
(-2-i)/yJT0
l/\/l0
-l/\/2'
i/\/2
(6)
i/y/2
14
If
^1
show
satisfies
C'DC
if
and only
if
|Cl=l.
L-i oj
Let A be a non-singular n-square real symmetric matrix of index
is even. p.
15.
Show
that
>
if
and only
if
-p
16- Prove:
Hint.
Take P
= I
17.
18.
Prove:
It
if
and only
if
is
symmetric (skew-symmetric).
(S
19.
Let S be a non-singular symmetric matrix and T be a skew-symmetric matrix such that (S + singular. Show that P'SP = S when
7")
- T)
is non-
P
Hint.
(S+TT^(S-T)
P'SP =[(S-Tf^(S+T)S~'^(S-T)<.S+T)'^r^.
20.
Let S be a non-singular symmetric matrix and let T be such that (S+T)(S - T) is non-singular. then T is skew-symmetric. if P'SP = S when P = (S + TT^iS- T) and I+P is non-singular,
Hint.
Show
that
S(/-P)(/ + P)-^
S(I + PrHl-P).
21.
Show
chapter 16
Bilinear Forms
AN EXPRESSION
and
(y^, y^
which
Jri)
is linear
is called a bilinear
and homogeneous in each of the sets of variables fonn in these variables. For example,
(x^, x^
m)
XxJi
2x172
ISxija
4x271
15x272
^s/s
and
x)
and {y^.y^
Jn)
maybe
%i%yi
+ 021^271
+ +
oi2%y2 +
+ +
%r!
X i7?i
022^72 +
fl2ra%7ra
+ ami^myi +
or,
Om2%y2 +
""mv^n Jn
more
briefly, as
n
(16.1)
/(^.y)
t
S
=i
S ai-x^y
j=i
-^
'
%1
%
*2> .
012
^22
in
'7i' 72
Ojl
2n
_0'n\
Om2
Omra_
_Yn_
where
[x^, x^
]',
^ =
[0^^],
and
i'
= [yi,72
7]'.
The matrix A
of the coefficients is called the matrix of the bilinear form and the rank of
A
1.
See Problem
Example
1.
Ti
Ui.
%.%
y2
73
X'AY
125
126
BILINEAR FORMS
[CHAP. 16
CANONICAL FORMS.
(16.2)
Let the
new variables
u's
by means of the
lin-
ear transformation
Xi
1.
bijUj,
(f=l,2
m)
or
BU
and the n y's be replaced by new variables v's by means of the linear transformation
(16.3)
Ti
2
J=i
c^jVJ,
(f=l,2
n)
or
Y = CV
IX,
We have X'AY = (BU)"A(CV) = U'(BAC)V. Now applying the linear transformations U = V = lY we obtain a new bilinear form in the original variables X\B'AC)Y = X'DY
Two
if
and only
if
only
If
if
Two bilinear forms with mxn matrices A and B over they have the same rank.
r,
sire
equivalent over
if
and
P and Q such
that
FAQ
Taking B = P'
(16.4)
in (16.2)
Ir
and C =
reduced
to
UXPAQ)V
U'
U^V^
^2^2
U^V^
Thus,
II.
Any
of rank
mations over
u^^v^
+ u^i'r-
1 1 1
X^AY
X'
ofE^xamplel,
10 10
1
10
-1
1 1
-1
1 1
10
1
10
1
10 110 110 10 10 1
10
10
1
10
1
10
1
10
1
1
10
1-1-110
1
1-1-110
0-110
1
h
Thus,
p'
-1
1
1
-1
1 1
PU
and
QV
reduce X'AY to
1 1 1 1 1
1 1
-l'
1 1
-1
U%V
U-^V-:^
+ U^V^ + Ugfg
CHAP.
16]
BILINEAR FORMS
127
The equations
X-j^
Ur,
Ti
= =
^1
V^
"3
1^3
X2
=
=
U,
and
72 Ts
Xg
^3
See Problem 2
bilinear form
1 S
a,-,-
ij x '^iJj
X'AY
is called
symmetric
according as A is
skew-symmetric
Hermltlan
Hermltlan
alternate Hermltlan
skew-Hermltlan
COGREDIENT TRANSFORMATIONS.
(i. 2
^) and
=
(yi.ys. .3^)-
Consider a bilinear form X'AY in the two sets of n variables When the ;'s and y's are subjected to the same transforma-
tion
CU
III.
and Y =
CV
We have
CU
and
Y = CV,
where A
If
U\C'AC)V.
is
IV.
the variables.
V.
Two
if
variables
and only
if their
F are equivalent under cogredient transformations of the matrices are congruent over F.
From Theorem
I,
ChapterlS, we have
r
(16.5)
%%yi
II
02*272
OrXryr
From Theorems
15,
follows
VIl. A real symmetric bilinear form of rank r can be reduced by non-singular cogredient transformations of the variables in the real field to
(16.6)
iyi
in the
%y2 +
xpyp
a;^4.i
7^4.1
x^Jr
and
(16.7)
complex
field to
%yi +
2y2
x^y^
See Problem
3.
CONTRAGREDIENT TRANSFORMATIONS.
formation
Let the bilinear form be that of the section above. When X = (C~^yU and the y's are subjected to the trans-
Y = CV,
We have
128
BILINEAR FORMS
[CHAP. 16
VIII.
Y = CV,
X'AY, where A
IX.
U\C AC)V.
is
The
bilinear form
X'lY =
x^y^ + x^y^ +
%y
if
and
only
if
In
Problem 4, we prove
if
and only
if its
rank is one.
SOLVED PROBLEMS
1.
%yi
+ 2iy2
13%yg
4a^yi + 15x^y^
-13]
Jx
1
-13l
x^y^
%. 2
4 15
-1
Ji 73
= X'
-4
15
-ij
2.
Ix^y^ + 2%yi
"ix-^y^
3-2
1
The
2-2
3
3
1
By Problem
6.
Chapter
5.
10
-2
1 1
-1/6 -5/6
and
-1 -1
mations
10
1
7/6
are such that
/g
PAQ
14
X^
=
Ui
2U2 U2
Uq
y
6
^
P'U
or
:x:2
U3 "3
and
QV
or
xs =
12
3.
3 5 8
X'AY = X'
2 3 2
Y
10
5
8
10
-8
mations
field.
(b) (16.6) in
(o)
Prom Example
1,
Chapter
-2 -10 -1 4 -1 1
u.
1
1
-2 -10 -1
1
4-1
1
10
CHAP.
16]
BILINEAR FORMS
129
reduce
XAY
to
uj^'i
"2% +
^usfg.
-5 -2 -1
2
1
-5 -2 -1
2
(6)
Prom Example
2,
Chapter
15,
-1
u.
1
1-1
1
- "3%2,
Prom the
result of
Example
1 1
Chapter
1
15.
we may obtain
000'
2
1
1
-1
-5 -2
-1
10 10 10
-5202
-2j
i
-1-110
1
-5 -2J -1 i -1 2
u.
1
1
-5 -2J -1 -1 2
t
reduce X'AY to
4.
Prove:
if
and only
if its
rank is
1.
= [a^-], as
aij
aj5
a^bj
fk^'j
aib^
bib^
fe*s_
H
"k
Lfei "ks_
"k
vanishes.
of
/4
is 1.
Conversely, suppose that the bilinear form is of rank 1. Then by Theorem I there exist non-singular linear transformations which reduce the form to U'(BAC)V = u^v^. Now the inverses of the transformations
=
jnr
and
'^jyj
(S rijXj)(l s^jyp
f(x.y)
130
BILINEAR FORMS
[CHAP. 16
SUPPLEMENTARY PROBLEMS
5.
Obtain linear transformations which reduce each of the following bilinear forms to canonical form (16.4)
(a) x^j^
- 2^73 +
"ix^j^
^sTs
7
(rf)
2-510
(6)
10
8
1
4 7
3 5
5 3
8
5 6
-4
-11
7 y.
X'
1
5-510
6.
-2 -2
4
5 8
12
6
10
2 5
(a)
y and
(6)
X'
3
1
10
2
2
5_
14
1
-2 -7
1
'l
-3 -4V3/3
1
Ans.
(a)
2
1
(6)
\/3/3
\/3/3
Ir
7. If Bj^,
B2. Ci
B-^A^C^ = B^A^C^ =
= (B'^B^iU,
C^C^V
5, in
8. Interpret
Problem
23,
Chapter
1 1
1 1
.
1
2
/.
1 22
i
2
9. Write the
transformation contragredient to
.Y
=
1
.4ns.
-2
i 2
2
2
~2
10.
is
.Y
PV Y
.
PV.
11.
-1
12. If
X'AY
is a real
XA Y
Show
that
when
reciprocal bilinear forms are transformed cogrediently by the same orthogonal transformation, reciprocal bilinear forms result.
13.
Use Problem
4, Chapter 15, to show that there exist cogredient transformations duce an alternate bilinear form of rank r = 2t to the canonical form
PU. Y
PV
which
re-
U1V2
W2V-L
+ U3V4.- u^V3 +
14.
Determine canonical forms for Hermitian and alternate Hermitian bilinear forms.
Hint. See (15.6) and (15.7).
chapter 17
Quadratic Forms
A HOMOGENEOUS POLYNOMIAL
(17.1)
of the type
n
q
X'AX
whose coefficients
^It ^2
^7J'
a^,-
"u
are elements of
over
in the variables
Example
1.
2
as^^
+ 2*2 -
2
"^x^
'ix^x^
may be
written in various
We
shall
agree that the matrix of a quadratic form be symmetric and shall always separate the crossproduct terms so that a a^ Thus,
'
"-^s
1
7Xg
4
4Xj^X2
SXj^Xg
-2
2
X'
0-7
and the rank of
[o^^- ]
If
TRANSFORMATIONS. The
(17.2)
BY
(.BYyA(BY)
YXB'AB)Y
Two
X=BY
we have
The rank
variables.
II.
Two
if
and only
if their
matrices are
congruent over F.
Prom Problem
the form
(17.3)
1,
Chapter
15, it
can be reduced to
K^i
Kr^
+ h^y^,
hi/=o
131
132
QUADRATIC FORMS
[CHAP. 17
in
tion
which only terms in the squares of the variables occur, by a non-singular linear transformaX = BY. ViTe recall that the matrix B is the product of elementary column matrices while 6'
-2
Example
2.
Reduce
X'
X
-7
1
of
Example
-2
2
o'
1
1
We have
[A I]
-2
4
0-2
8
1
1
0-7
1
-23
-4
0-2
9
=
1
[D Bl
2
1
4
4
1
Thus.
BY
reduces q to
q'
y=-2v^+9v=.
See Problems
1-2.
reduction of a quadratic form to the form (17.3) can be carried out by a procedure, known as Lagrange's Reduction, which consists essentially of repeated com-
Example
3.
1
7%; 3
'
4x^*2
"*"
^*1*3
7^:
Ux^-2xJ
2Sx
(\-2^2+H)'
yi =
- 2(A:^-4^3f + Qxl
xx
- 2x2 +
xq^
4acs
xi
ot
= = =
yi
2y2 +
4ys
Thus,
ys 73
=
=
- 4x3
Xg
X2 X3
y2 + 4y3
ys
reduces q to
y^ - 2y^ + 9y^.
See Problem
3.
q = X'AX be reduced by a real non-singular one or more of the h^ are negative, there exists anonsingular transformation X= CZ, where C is obtained from 6 by a sequence of row and column transformations of type 1, which carries q into
Let the
(17.4)
sz^ s,z7
11
s z^ s^z2 2
...
Sfyzj,
s*+iZa+i
...
s^;.
in
which the terms with positive coefficients precede those with negative coefficients.
CHAP.
17]
QUADRATIC FORMS
133
Now
Wi
"'i
= =
VSi Zi
'j
a
(/
= 1,2
r)
=r+l,r+2
71)
or
'''4k-
;-
'
.1.1
J
(17.5)
wl + wl +
w^^
;2^^
wl
we have
Every real quadratic form can be reduced by a real non-singular transformation to the canonical form (17.5) where p, the number of positive terms, is called the index and r
III.
Example
4.
In
I
Example
o
2,
into
y^ q" =
"
The non-singular transformation yi = z^, y^ = 23, ys = 22 carries q' 2y^ + 9yQ ~ + ^^3 ^^^ ^^^ non-singular transformation 21 = w^, 22 = w^/S, zg = wa/v^ 922 zf
i.
^12
o
a^ + 2x^
- Ix^ - ixx
3
8xx 1213
+
was reduced
to
J reduces q
to
"'
2,2 w^+ w^ -
w^.
Wi + ^W2 + V^M>3
1
4/3
v^
X2
|mJ2 + 5V^;g
4/3
1/3
2^
X3
3"^
=
qr
to
W^ +
wl-
w%.
The quadratic q
2.
In
Problem
5,
we prove
IV. If a real quadratic form is reduced by two real non-singular transformations to canonical forms (17.5), they have the same rank and the same index.
Thus, the index of a real symmetric matrix depends upon the matrix and not upon the mentary transformations which produce (15.2).
ele-
The difference between the number of positive and negative terms, p - {r-p), in (17.5) is called the signature of the quadratic form. As a consequence of Theorem IV, we have
V.
Two
if
and only
nature.
real quadratic forms each in n variables are equivalent over the real field if they have the same rank and the same index or the same rank and the same sig-
= =
^iVi.
yj
,
1.
r)
Zj
(7
= r+1, r+2,
...,7i)
134
QUADRATIC FORMS
[CHAP. 17
Ul
Y
carries (17.3) into
(17.6)
1
-'-
''''{/h^'/h;'-
1, 1,
)^
<
VI.
-4--
4
field of rank
r
Thus,
singular transformation over the complex field to the canonical form (17.6).
VII.
Two complex
q = X'AX, \a\ f 0, in Thus, in the real field a y^ and for any non-trivial set
0.
=X'AX,
n.
\a\
=0,
= p<
"^
be reduced
to
y^ + yj
Thus, in the real field a positive semi-definite quad+ y^ r< re, and for any non-trivial set of values of
,
the
a;'s,
q>
0.
i.e.,
q = X'AX is called negative definite if its index p = 0, = n,p = 0. Thus, in the real field a negative definite form can be reduced to -y^ - y^ - y^ and for any non-trivial set of values of the x's, q < 0.
A
i.e.,
r
X'AX
p =
-yl
- y^ Clearly,
if
Thus, in the real field a negative semi-definite form can be reduced ^f ^y non-trivial set of values of the x's, q 5 0. ^"^ y^
0.
=X'AX
U| >0.
PRINCIPAL MINORS. A
minor of a matrix A is called principal if it is obtained by deleting certain rows and the same numbered columns of A. Thus, the diagonal elements of a principal minor of A are diagonal elements of A.
In Problem
IX.
6,
we prove
r
has
at least
r differ-
matrix i of a real quadratic form q = X'AX is called definite or semi-definite according as the quadratic form is definite or semi-definite. We have
singular matrix
XI.
C such C
that
A =
C'C.
exists a matrix
of rank
such that
A of rank r A = C'C.
is positive semi-definite if
and only
if
there
7.
See Problem
CHAP.
17]
QUADRATIC FORMS
135
XII.
If
A A
is positive.
See Problem
XIII. If
8.
is non-negative.
[a^,-]
over F,
we
Oil
O12
%1
Pa
Ogi
Oqi
04.2
%3
O23
'^3
(17.7)
PO =
1,
Pl=
Oil.
P2 =
021
052
Pn= Ul
022
O32
In
Problem
9,
we prove
if
Pn-iPn^^
1
but Pn-i = 0,
then p_2
and p
Example
5.
X'AX
112 112
2
X, 2
3
4
1
po =
1,
pi =
1,
p2 =
0,
p3 =
0,
P4 =
m=
1.
12
Here
Oigs
t^
0;
the transformation
Kg^X
yields
'1112 112 2
(i)
X'
12 14
2 2 4
1-
for
which po =
1,
Pi =
1,
P2 =
0.
Ps = -1.
P4=
Thus,
for
in the
symmetric matrix A of rank r is said to be regularly arranged if no two consecutive p's sequence po.pi, -.Pr ^^^ zero. When A is regularly arranged the quadratic form X'AX is said to be regular. In Example 5, the given form is not regular; the quadratic form (i) in the same example is regular.
r.
By Theorem
IX,
A contains
at
vanishing r-square principal minor M whose elements can be brought into the upper left corner of A. Then p^ ^^ while p^^.^ = p^^.^ = ... = p = 0. By Theorem XIV, the first r rows and the first r columns may be rearranged so that at least one of Pr-i and p^_2 is different from zero. If p^_^ j^ and p^_2 =0, we apply the above procedure to the matrix of Pt--i; if Pt--2^0,
we apply
^^^^
so on, until
is regularly arranged.
Thus,
See Problem
XVII. A real quadratic form X'AX is positive definite leading principal minors are positive.
XVIII.
if
10.
and only
if its
rank is
re
and
all
X'AX of rank
Pr
is positive.
136
QUADRATIC FORMS
[CHAP. 17
method of Kronecker which only the squared terms of the variables appear
for
is
based on
r,
XIX.
q =
X'AX
in n variables of rank
then by a non-
singular linear transformation over F it can be brought to q' = singular r-rowed minor C oi A occupies the upper left corner of B.
X'BX
F which reduces
q to
XX. li q = X'AX is a non-singular quadratic form over F in n variables and a ^ 0, the non-singular transformation
if
Pn-i =
a
^nnyn
or
1, 2.
re-1)
10
1
a^n
BY
1
<^n-i,
carries q into
HiJiYj + Pn-xPnYn
in
-2
2
4
1
-^i
-2
-2 ^ 0.
Example
6.
X'AX
= X'
-2 4
P2 =
O^S
The
0-7
- 8y3
-2
non-singular transformation
Xl
n
= =
a.13
ys
yi
1 1
-8 -8
ys +
a23 ys
ss ys
=
ys -
8y3
2ys
xs
0-2
reduces X'AX to
1
1
-2
2
1 1
-8 -8
Y'
-2
2
Y'
-2
4
-2
8 -8 -2
0-7
in
0-2
squared form
36
in
XXI.
but
U
'
q =
0,
a, 're,?!-! ^
F and
if
o^-i, n-i
= ^nn
xi
*ra-i
yi
CLi,n-iyn-i
'^inyn,
(i
1, 2,
...,n-2)
^n-i.n yn.
Xn
a n n-i Yn-
,
or
1 1
... ...
!, ^.
a2 n-1
1
a.-2,
_i n-1
sy
= a '-'-n.-2, n
'-n-i,
...
,0
...
a__i
CHAP.
17]
QUADRATIC FORMS
n-2
137
n-2
carries 9 into
(Hjyiyj
i-i j-\
1'^n,n-\Pnjn-\yn-
The
further transformation
'
Ji
Zi,
(i
1,
n-2)
Jn-x
= =
Zn-i Zn-i
Zn
2n
Jn
n-2
n-2
yields
V
2
1
.2
J -1
a^jz^zj +
2a_^_j^p(zn-i
- z^
in
Example
7.
X'AX
X'
2
1
4
3
3
1
^22
- O'ss =
but
xi = = =
tXg2 =
-1
+
i^
0.
yi
cti272
*i3ys
*23r3
or
X2 xq
-1
a 32 72
0-10
]
reduces X'AX
1 1
to
1
1 1 1
-1
-1
2
1
4
3
3
1
rSY
yl
+ 2y,y3
-]
The transformation
)'l
1
= = =
21
72
22
23 23
1 1
-1
1
73
carries
22 +
Y'BY
1
into
1
1
o'
1 1
1 1 1
Z'
1
1
-1
1
z'
<
-2
-1
2zi
Consider now a quadratic form in n variables of rank r. By Theorem XIX, q can be reduced to 9i = X'AX where A has a non-singular r-square minor in the upper left hand corner and zeros elsewhere. By Theorem XVI, A may be regularly arranged.
If
pr-i
0,
Theorem
XX
(17.8)
Pr-iVrYr
If pr-i = but a.r-1, r-i ^0, interchanges of the last two rows and the last two columns yield a matrix in which the new pr-i = a^_i^^_i f 0. Since pr-2 ^ 0, Theorem XX can be used twice to isolate two squared terms
(17.9)
Pr-2
r-i,
r-iJ
a r-i., r-iPrYr
which have opposite signs since pr-2 and pr have opposite signs by Theorem XV.
138
QUADRATIC FORMS
[CHAP. 17
If
pr-L =
used
to
a,-,
r-i ^
(17.10)
2ar,r~ipr{yr-i-yr)
reduced
to
another con-
sequence pr-i,Pr presents a permanence or a variation of sign. In (17.9) and (17.10) it is seen that the sequences p^_2, oLr-i. r-v Pr ^"^^ Pr-2' ^r, r-v Pr Present one permanence and one variation Thus, of sign regardless of the sign of dr-i, r-i and a.^, r-i
In (17.8) the isolated term will be positive or negative according as the
XXII. It q = X'AX, a regular quadratic form of rank r, is reduced to canonical form by the method of Kronecker, the number of positive terms is exactly the number of permanences of sign and the number of negative terms is exactly the number of variations of
sign in the sequence
po, pi, P2
P^,
Let X'AX f
0,
X'AX
(oj.%1
+ 02*2 +
+ ;)( iiXi +
b.2_X2
+ + b^x^)
0-1
to.;
hi
J
tj
.^
is non-singular.
Let the
becomes
= = =
OiaCi
+ 02*2 +
+ 0 + ^n^n
y2
ys
^1*1 + ^2^2 +
Xs
(
=
2.
x^
transforms
If
i )
into
y-^y-z
of rank
2.
Thus,
1) is of rank
!Y^
72
=
=
a^Xx + 02*2 + X2
+ (^n^n
Xn
1.
y?
of rank
1.
Conversely,
if
or 2
it
to y^ or
r^ + y^, each of which may be written in the complex field as the product of two linear factors.
We have proved
XXIII.
quadratic form
if
X'AX ^
rank is
linear factors
and only
if its
r<2
CHAP.
17]
QUADRATIC FORMS
139
SOLVED PROBLEMS
12
1.
3 5
2
8
Reduce
q = X'AX = X'
2 3 2
3 5 8
10
to the
form (17.3).
10-8
Prom Example
1,
Chapter
15,
12
[^1/]
2 3 2 3 5 8
3 5 8
2 8
1 1 1 1
c
^\J
0-100
4
10 -2100
-10
iD'.P']
4
1
10
10
-8
1
-1-110
Y
reduces q to the required form y^
-2
1
-10 -1 4 -1
1 1
PY
yl^'^yl-
2.
Reduce
q = X'AX = X'
2
2
4 8
8 4
to the
form (17.3).
We
find
1
[^1/] C
[D\P']
2
1
i
2
-4
1
1
PY
-i Y
2
3.
Lagrange reduction.
(a)
= =
- 24x^+
8a:^a=2
2{xf+ 2x^(2x^ + 3x^+2x^)1 + 5x^+ 19x1 " 24*4 + ^S^j^g - 8x^x^ - IGx^x^ 2{xl+ 2x^(.2x^+Zx^+2x^) + {2x^ + 3x^ + 2x^f\
+ bxl + 19*2 - 24*2 ^ jg^^^^ ~ 8*2^^ g^^^^ _ ^^ "2*3 _
_ 2(2*^ + 3v +2*,)2
^q_^
2 (*i 2 (*i
+ *3 - 32*2 -
(^ + ^ + 4^^)2
ryi
+ 4 (*g
- 2*4)2
+ 2*4
*i + 2*2 + 3*3
\yi =
Irs
=
*2 +
*s + 4*4
*3-2*t
*4
'Educes,
to
2y2_3y22+4y2.
00^
(i)
2,
we have
16*2*3 + **
,
*2 +
4*^*2 +
4*^*3 + 4*2 +
tenii in
(*^
*2*3
we use
140
QUADRATIC FORMS
[CHAP. 17
i )
Xl
Zl
A;2
22
Xs -
Z2 + ^3
to obtain
(2i
+ 4z2+2z3f'
+82^+82223
(z^ +
422+2z3)2+8(22+2Z3f - 2z2
-
y^
+ ^y^ -
2yi^
4 2
1
1 1
4 2
1
1 1
Now Y
X
2
1
Z and
from
i),
X\ hence,
1 6"
i
1
X
1
ii
-1
1
-1
'1
-1
-4
1
1
2
I
1
4.
2,
[^1/] C
-4
-2
-2
i
2
and applying the tiansfoimations HgCiV^). K2(i+-\/2) and ^3(5 a/2). ^3(2
-|
V2
),
we have
10
[^!/]
1
1 1
10
0-1
10
[cic']
0-2
-2
1
-iVl
ii\/2
-V2
k^f2 -\\f2 y i\f2
reduces
q = X'
2 4 8
= (^y
2 2
8 4
2,2
iv^
,2
5.
Prove:
If
a real quadratic form q is carried by two non-singular transformations into two distinct
reduced forms
(i)
y; +
yg
y^
and
<ii)
y' + yl +
yq
yq,i
yj.2
y'
then p = q
X
).
FY
GY
be the transfor-
Then
filial
+ +
bi2X2 +
622*^2
+ binxn + b^nXn
621X1
F~^X
bjiixi
+ bn2X2 +
+ bnnpcn
and
cii% + ci22 +
C21X1 + C222 +
Cin^n
+ C2ren
G'^X
cr!.i;i
cn2*:2
cnnXn
CHAP.
17]
QUADRATIC FORMS
141
q.
Thus,
(iii)
(''ii*i
*i2*2+- +
*in^n)^ +
...
(CiiA:^
+ c^2*2+- + c^^f +
('^g
"*"
+ (cq^x^ +
'^q
x^+- + c
x^f
Consider the
*ii
+ l.l^l'''
"^g
+1,2*2
+ l,/!^^
r-q+p <
equations
^g+i,i*i + '^(7+1,2*2 +
't?+2,l*l
+ 6i2 *2 +
+ *in*n = + 6^
'^g
M*re =
''21*1+^22*2 +
+ '^9+2,2*2 +
bpiH
By Theorem
+ *;^2*2 +
+ 6jbn% =
^ri
*i
+ ^2*2
+ c^^x^
IV,
(
substituted into
(Ot^.ttg
0!^^).
When
this
solutionis
(*;b+i,i'i
+ bp^i,rf-J^
(-^11
%22 +
+ c.^n'^nf +
>an)^
q<p
assumption that
6.
has
at least
different
it
has
at least
'rfi,s2
stands in the rows numbered Ji,i2 matrix and let the columns numbered
I^^t
one r-square minor which is different from zero. Suppose that it these rows be moved above to become the first r rows of the i^ be moved in front to be the first r columns.
rows are linearly independent while all other rows are linear combinations of them. By taking proper linear combinations of the first r rows and adding to the other rows, these last n-r rows can be reduced to zero. Since A is symmetric, the same operations on the columns will reduce the last n-r columns to zero. Hence, we now have
the first
r
Now
ili2
H2i2
"Vii
%i2
"vv
in
in the
pal minor of
7.
Prove:
matrix
A
C
of rank
is positive semi-definite if
and only
if
there exists a
of rank
Since A is of rank
'--[t:]
142
QUADRATIC FORMS
[CHAP. 17
such that A
of rank
r
B%B.
= C'C
Since
A'i'=
Ai = A^
we have A
Set
= A^B; then
is
and A
as required.
r;
Let
its
canonical
N^
diag(di,
(^2
^5,0,0
0)
Set
where each di is either +1 or -1. Then there exists a non-singular real matrix E such that E'(C'C)E = CE = B = [bij\. Since B'B = N^, we have
2
A^g
2 (i =
1,
s)
and
*ii
*i2
^jn
0'
ii =
s+l,s+2
n)
Clearly, each
d;>0 and A
is positive
semi -definite.
8.
Prove:
If
is positive.
Let q = XAX. The principal minor of A obtained by deleting its ith tow and column is the matrix A^ of the quadratic form q^ obtained from q by setting x^ = 0. Now every value of qj_ for non-trivial sets of values of its variables is also a value of g and, hence, is positive. Thus, Aj_ is positive definite.
three,
This argument may be repeated for the principal minors ... rows and the same columns of A.
A^j,
A^^j-^, ...
By Theorem
VI,
Ai>
0,
A^j>
0,
...
Prove: Any ra-square non-singular matrix A = [ay] can be rearranged by interchanges of certain rows and the interchanges of corresponding columns so that not both p_^ and p_2 are zero.
Clearly, the theorem is true for A of order
1
and of order
2.
?^
Moreover,
it is
true for
of order
n>2
when
p^_^ =
a^
(a)
j^
0.
Suppose
?^
01^ =
Suppose
some CL^
0.
After the ith row and the ith column have been new matrix has p_^ = CC^j^ 0.
at
moved
to
Suppose (6) all a^i = 0, Since \A\ J^ 0. and the ith column into the (n-l)st position.
a a
In the
0.
Move
_i,n =
\n-i
^-
^^
^n-i,n-i
^n-i,n
Vi,n
-a.n-i,n
PnsPn
a n, n-i
and
Vi,n
Pn^^O.
Note that this also proves Theorem XV.
2
10.
1 1 1
that
q =
X'AX = X'
2
13
3
is regular.
1111
Here
po
2
1,
pi =
0,
ps =
0,
ps = -4, p* = -3.
2
Since
pi = P2 =
0,
ps ^
0,
we examine
the matrix
3 4
of p3
=
2
?^
CHAP.
17]
QUADRATIC FORMS
143
A yields
2
1
r
for
4 3
1111
which p^ 1,
11
;<:g
p^ =
0,
as xg
11.
X'
2 3
12 3 15
5
4
6
3
X.
2 3
Here Pq - 1, p^ - 1, p^ and q is regular. The sequence of p's presents one 3, pg - 20, p^ = -5 permanence and three variations in sign; the reduced form will have one positive and three negative terms.
Since each pj /
0,
XIX
y\
"^yl
- eo^g -
looy^
12
12.
3 6 9 2
= X'AX = X'
2 3
4
6
3 2 5
X.
13
Here A is
of rank 3 and
fflgg
i 0.
An interchange
1
of the last two rows and the last two columns carries
12 13
A
into
2 4 3 6
12 10
^
0.
13
in
5
which
2
1
4 3
3 5
Since S is of rank
3, it
can be reduced to
2 4 3
13
3 6 2 9
1
2 1 4 3
3 5
1
Now
to
XCX
= X'
2
1
for
which p^ =
1,
p^ =
1,
p^ =0, pg = -1.
The reduced
Since p =
but
1
= 4 5
i^
0,
the reduced
PoPiri +
Pxy-i-zJi
y-iiP^yl
yl + 4y| -
4y|
1-2
13.
= X'
-2
2-12
Here p^
- 1, p^ 1,
1112
1
;
1-1
X.
p^ -
0,
Pg
9,
p^ = 27
the reduced form will have two positive and two nega-
-2
4
I
1 1
tive terms.
2
1
of pg.
Since /^gg =
but /3g2
=-3^0
ij
yl + 54y|
54^3^
- 243y|
144
QUADRATIC FORMS
[CHAP. 17
14. Prove:
Xp,
A]_
Ai
A2
Ag Ai
*
A2 ^2
'
;t2-x >
Z^ Zi
Xp- X2
Xp- Xp
if
and only
if
X
=
= [i,2
xp]' f 0.
=
Then Z
2
i=i
X^x^
0<Z-Z
1=1
X-x.
)( 2 X:xA
j=i
J
3
X\X':XAX t-j'
|
r(A:^-;f,-)^ t
J'
X'GX
G >
|
0.
(6)
Then there
exist scalars
k^^^.k^
such that
S f = i=i
X..^
4,,jy.^
kiXjj-Xi
+ k^Xj
X2 +
+ kpXj
Xp
= 1.2
p)
of
homogeneous equations
Xj-X^xx + Xa-Xqxq +
'J
+ Xa-X, 'j
'
P ^P
0.
= 1,2
p)
k;
(i = 1,2
p),
and \G\
We have proved
that
G >
1
0.
(b).
we need only
p
i= 1
to
assume
Thus,
J
=
|
and
= 0, (/ = 1,2
p)
where
f= l^k^Xi.
S kjXy^ =
-
^.
^=
0,
if
= 0,
SUPPLEMENTARY PROBLEMS
15. Write the following quadratic forms in matrix notation
(a)
xl + 4*1^2 +
34
^JX
(6)
2x1 -
6j;i2
^s
<<^)
^1
" 2^| -
3a:
2-3
Ans.
(a)
12
(c)
1
rP
(b)
X'
-3
2 3
-2 -4 -4 -3
2
-3
2
xi.x^.xs
whose matrix
is
-3
1
-5
Ans.
2x1
- 6x^x^ +
2x.^Xg
+ 2x^ + 8X2^3
5<
17.
of
Problem
4
(6)
1111
X
(a)
X'
2 4
6-2 X
-2 18
yl
1-13-3
012
X
(c)
13
2^2^
r
y^
-1
(d)
^
1
-2
3
1-3 1-3
(6)
y,=
2-10
(c)
-2
Ans.
(a)
+ 2y^ - 48y|
+ 4y^
,,
72 + 8yg
id)
2 Ji
2,2
-y^z^ya
a;i
CHAP.
17]
QUADRATIC FORMS
145
18.
(a)
Show
that
X'\
\x = ^'
different ranks.
(b)
Show
19.
Show
are equivalent.
20.
Prove:
real symmetric matrix is positive (negative) definite if and only if it is congruent over the real
field to / (-/).
21.
of
Problem 12
is
reduced
to
X'CX hy X
RX.
where
Ks^K^^{-5)K^^a).
Then prove
22.
Show
Show
two real quadratic forms in the same variables are positive definite, so also is their sum. that if q^ is a positive definite fonn in x^.x^ xs and 92 is a positive definite form in xs^i. xs+2. ..x^, then g = ?i + 92 is a positive definite form in xx,x2 x^.
that if
23.
Prove: If
Hint: Consider
is any real non-singular matrix, then C'C is positive definite. XlX = Y'ClCY
24.
Theorem
X.)
Hint: Consider
A can be D'AD=I.
written as 4 =
CC. (Problems
25.
Prove: If a real symmetric matrix A is positive definite, so also is A^ for p any positive integer.
Prove: If ^ is a real positive definite symmetric matrix and CB is orthogonal.
26.
if
B and C
B'AB =1 and A
= C'C
then
27.
Prove; Every principal minor of a positive semi-definite matrix A is equal to or greater than zero.
28.
Show
that
if
and only
if
ac-l^ >0.
29.
XX
and XXI.
30.
By Kronecker's
of the following
-1
2
(a)
X'
-1
-1
2
(c)
12 12 12 r
2
10-2
(e)
0-1
1112
2 2 2 3
1
X'
(g)
X'
1
-2
3_
-2
1
-2
2
4-4
(b)
X'
-4
2
-3 X
1
(d)
X'
2-462
3
2 -1"
2
-3
9
3
3
1
(/)
X'
4
3
2 3
(h)
X'
12
-12
11
1111
17(rf).
Hint: In (g), renumber the variables to obtain (e) and also as in Problem
Ans.
(a)
p^=p^=p^=p^=l;
iyl - 16y| + 16y|
+ y| + y2 yf
^^j
(/) (g) (A)
Pq = Pi =
22 =
-!
P3
-1
2 Ti
2,2
72
"^
^3
(b) (c)
(d)
Po =
Pi=l. 2S=-4. P3
(e).
-16;
y^^
+ 128y|- 128y=
yl jI
q
^yl
8y,
+ 4y= - 3y2
See
31.
Show
that
xf-
Gxl -
6^
- Zxl - x^x^ -
xx
2xx
Uxx
XUx 2
4-
^xx^ can be 3
4-
factored
chapter 18
Hermitian Forms
by
_
h
X'HX
S
v=\
S
j=i
__
_
hij
h^jXiXj,
= hj^
where
is Hermitian
Hermitian form.
The rank
and
hence,
we
here are analogous to those of Chapter 17 and their proofs require only minor
changes from
Moreover,
for
the pair of
hjiXjXi
hijXiXj
+ hijXiXj
Thus,
1.
The values
into an-
{mH{BY)
Y(WHB)Y
Hermitian forms in the same variables x^ are called equivalent if and only if there exists a non-singular linear transformation X = BY which, together with Y = IX, carries one of the forms into the other. Since B'HB and H are conjunctive, we have
Two
and
HI.
Two
if
and only
if their
can be reduced
to
diagonal
Kjxn
k^y-iLji
'crfryr'
^i^^
and real
by a non-singular linear transformation X = BY. From (18.2) the matrix B is a product of elementary column matrices while B' is the product in reverse order of the conjugate elementary row matrices.
By a
(18.4)
further linear transformation, (18.3) can be reduced to the canonical form [see (15.6)]
%Zi + i^22 +
+ 'zpzp
z^+i2^ + i
z-rV
146
CHAP.
18]
HERMITIAN FORMS
147
of index p and signature p-(r-p). Here, also, p depends upon the given form and not upon the transformation which reduces that form to (18.4). IV. Two Hermitian forms each in the same n variables are equivalent if and only they have the same rank and the same index or the same rank and the same signature.
if
non-singular Hermitian form h = X'HX in a variables and index are equal to n. Thus, a positive definite Hery^y^ + y^y^ + +Jnyn and for any non-trivial set of values of
A
a.:e
equal,
rj= p <
n.
yiTi + y2y2 +
+ yrYr
r<
n,
X'HX is called positive semi-definite if its rank and index Thus, a positive semi-definite Hermitian form can be reduced to and for any non-trivial set of values of the x's, h>0.
H of an Hermitian form XliX is called positive definite or positive semidefinite according as the form is positive definite or positive semi-definite.
The matrix
V.
An Hermitian fom
= C'C.
is positive,
and conversely.
non-negative, and
is
conversely.
SOLVED PROBLEM
1 1. 1
+ 2i
5
- 3i
to
Reduce
X'
\-2i
2 + 3i
-4-2i X
2f
15,
-4 +
13
Prom Problem
1
7,
Chapter
2j
+
5
2-Zi
-4 2j
I I
o"
'l
I I
0'
i/^To
j/a/TO
1-2/
2
HC
'^^
2/\/io
j/yTo
3i
-4 +
2i
13
l_
-1
(-4-4j)/v^
1/vTo
2/VlO
1/VIO
(-4 +
4i)/V^
BY
,0
-i/yflQ
-j/VTo
1/v^
to
148
HERMITIAN FORMS
[CHAP.
18
SUPPLEMENTARY PROBLEMS
2.
Reduce each
3i
2
2
+
4
3i 3i
[1
l-2i
+ 2il
2
(c)
+ +
I
3j
3j
3t
1-i
2i
(b)
x'\
i"
For
jx :]^
(6), first
(d)
1+j
3
2-J
2
2J
+J
4
first row.
Hint:
by
Ans.
{a)
(-l-20/\/3l^
=
tl
(.b)
X V2
1
nyi - y-m
(-1 + 30/3
1/3
-l"
(c)
-1
1
nn
- 7272
'l
(-l+f)/A/2
(-l +
30/\/^
y
;
(d)
=
.0
1/^2
(-3-2J)/\/lO
J<iyi
2/VlO
3.
X
I
I
BY
(a) of
Problem 2 into
(6).
Ans.
X= -^\
j_ri
v/2"Lo
Ti (-i-2i)/v3i :^'
~
n
\y
ij'
l/v^
Jb
1+i
-1
-3+J
11
1+j
l+2i
5 10
4.
Show
that X'
1-t
is positive definite
and X'
1-i
3 5
is positive semi-definite.
-1
-3-J
l-2i
5.
6.
Obtain
for
17, for
quadratic forms.
xi
X-L
X2
^n
hin
All
/J21
hi2 h22
n
_S Tj^-Xj^Xj
7.
Prove:
^2
h^n
where
77-
= \h^j\
1=1 j=i
^n
Hint:
K.
K.
^nn
Use
(4.3).
chapter 19
The Characteristic Equation of a Matrix
THE PROBLEM.
Let Y = AX,
where A = [o^],
is
(i,j
= 1,2
n).
Z=
[%i,%2
x^Y
into a vector
Y =
\.%,J-z
JriY
X vectors X
We
shall investigate
here the
a scalar of
or of
some
field 3F of
being carried by the transformation into KX, where A is either which F is a subfield.
for
which
AX
XX
From
(19.1),
we
obtain
A di^
(19.2)
O^ 2
^ f^2
\X-AX
(XI-A)X
^21
^271
~
The system
of
On2
X a-nn
(19.3)
\XI-A\
A Oil a-zi
c-ni
0-tn
ci^n
A Onn
The expansion of this determinant yields a polynomial 0(A) of degree re in A which is known as the characteristic polynomial of the transformation or of the matrix A. The equation <^(X) = is called the characteristic equation of A and its roots Ai,A2 A are called the characteristic roots of i. If
A = Ai is a characteristic root, then (19.2) has non-trivial solutions which are the components of invariant or characteristic vectors associated with (corresponding to) that root.
Characteristic roots are also known as latent roots and eigenvalues; characteristic vectors
Example
1.
A-2
The characteristic equation
is
-2
-1 -1
1,
A-3
-2
-1 -1
A^'
- tA^ + llA
and the
A-2
Ai=
5,
A2=
A3=
1.
149
150
[CHAP. 19
When A
Ai =
5,
(19.2)
becomes
1
3
-1 -1
-2
-ll Vxi
^2
xs
=
or
1
-1
-1
Xl
2-1
-2
3
X2
_X3
-2 -1
2
-1
-1
since
-1
-1
3
is
row equivalent
to
-1
-2
solution is given by
is the
A=5
hence, associated with the characteristic root x-i = x^ = xq = i one-dimensional vector space spanned by the vector [l,l,l]'. Every vector
\
When A
A2=1,
1
(19.2)
becomes
Xl
-2 -l'
-2 -2
-1
-1
1
1
X2
_X3_
Xl + 2x2
+^3
Two
and X2 = [1,0,-1]'.
See Problems
1-2.
GENERAL THEOREMS.
I.
In
Problem
3,
(A:
3)
of
Xj^ As Xk ^f^ distinct characteristic roots of a matrix A and if Xi, X^ the X's are linewith these roots, respectively vectors associated invariant are non-zero If Ai,
arly independent.
In
Problem
4,
(ra
3) of
k\
where A is re-square, with respect to A is II. The Ath derivative of cfi(X) = \XI A\ times the sum of the principal minors of order n-k of the characteristic matrix when k<n,
,
is
re!
when k = n, and
of
is
when k>n.
As a consequence
III.
If
Ai is an r-fold characteristic root of an re-square matrix A, the rank of Xil A nr and the dimension of the associated invariant vector space is not
r.
greater than
See Problem
5.
In particular
Iir.
is
re
Xi is a simple characteristic root of an re-square matrix A, the rank of Xil and the dimension of the associated invariant vector space is 1.
If
2 2
1 1
1 1
3
2
of Ex.
1,
<^(A) =
(A-5)(A-1) =
0. The invariant vector [l,l,l]' associated with the characteristic root A=5 and the linearly independent invariant vectors [2,-l,o]' and [l,0,-l]' associated with the multiple root A=l are a linearly independent set (see Theorem I).
The
A=5
is of
CHAP.
19]
151
dimension
multiplicity
1.
2,
is of dimension 2 (see
The invariant vector space associated with Theorems III and III').
= l, of
6.
Since any principal minor of A" is equal to the corresponding principal minor ot A, we have
by (19.4) of Problem
IV.
1,
The characteristic
roots of
A and
Since any principal minor of A is the conjugate of the corresponding principal minor of A,
we have
V.
of
/I'
roots of A.
By comparing
VI. If
characteristic equations,
we have
if
A;
Xi,
A2
scalar, then
VII.
If
AAi,
kX^
A are the characteristic roots of an ra-square matrix A and AA are the characteristic roots of kA.
is a
scalar, then
A are the characteristic roots of an n-square matrix A and \-Lk,\2k \ik are the characteristic roots of 4 -A/.
if
& is a
In
Problem
VIII.
7,
we prove
is
If
SOLVED PROBLEMS
1. If
is re-square,
show
that
(19.4)
0(A) (m =
1,
|A/-^|
...
+ s_,A + (-l)"|i|
where s^
sum
We
A-aii
0-021
0-ai2
a-in
A-a22
0-a2n
Xar
O-'^ni
O-Ons
and, each element being a binomial, suppose that the determinant has been expressed as the sum of 2" determinants in accordance with Theorem VHI, Chapter 3. One of these determinants has A as diagonal elements and zeroes elsewhere; its value is A". Another is free of A; its value is (-1)"|^| The remaining determinants have m columns, {m = 1, 2, ....- 1), of -A and n~m columns each of which contains just one non-zero element A.
its
-A
columns numbered h,
ia
are columns
terminant becomes
After an even number of interchanges (count them) of adjacent rows and of adjacent columns, the de-
152
[CHAP. 19
%.ii
"is.ii
"iiA
"inA
(-1)'"
"imA
"im.-im
I
'i.'2
^n
A
"in-k
..
%.fe
%''''OT
..
A
.
t^.to
im!
where
Aj
i ^'2
i\
-4
Now
(-1)"
n (n
S ^'^
P
%
^
n taken
H.^
as
(ii, is
i^
1)
...
(n
-m + 1)
different combinations of 1, 2
at a
time.
-4 -1 -4
2.
Use
(19.4) of Problem
to
A =
-1 -1
We have
Sl
1+0-2+6
1
-4
+
-1
-4
+
6
1
-2
+
3 6
S2 2
-1
+
-2
-1
-2
-1
-3
1
2-5+16-9
1
-4 -1
5
1
SS
-4 -4 -4
4 7 6
-1
-4
3 6
5-4
+
1
1
1
-2
-1
-2
3 6
-1
-3
-2
--
-1
=
4-1
+ 16
8 + 2
Ml
Then
\XI
2
A^'
- A\
A'^- 5
+ 9A^ -
7A
2.
3.
Let
Ai, A"!;
As, ^s)
Show
A3, ^3 be distinct characteristic roots and associated invariant vectors of A. X3 are linearly independent.
Assume
(i)
assume
a-]_Xi
+ 0^X2 + 03X3
;
Multiply
(II)
(1)
A^^
we have
=
Multiply
(III)
by A and obtain
Now
may be
written as
1 1
aiXi
(iv)
Ai Ai
Ao As \l
An A3
02X2 ooXq
"3^3
x\
A\
CHAP.
19]
153
By Problem
5,
Chapter
3,
Ai
\l
As Al
As x%
0;
hence,
B~
exists.
Multiplying (iv) by
B~
we
have
[aiX-j,
a^X^.asXaY
= 0.
A On
4.
-012
From
(j!)(X)
\\1
A\
-021
-<31
A 022
-032
-ai3 - 123
we obtain
A- 033
A -Oil
O12
1
-Ol3
A -Oil
-02
-ai2 A (322
<^'(A)
-021
A-O22
- O32
-023
Ois 023
1
- Osi
A-O33
A -Oil Os 1
-031
-Ol3
-O32
A-a3g
A -Oil
-O2I -012
A-O22
032
the
-O23I
A Ossl
A
of
Og 3
A 022
XI - A
of order
two
<^"(A)
A -022
-O23
+
1
-032
+
A -033
1
OSI c
A -Oil
-oia
1
A 033
A -01 1
-0 12
1
-021
A-o22
-A
of order
one
0"'(A)
Also
<j!.^"\A)
= cji^^hX) =
...
5.
Prove: If X^ is an r-fold characteristic root of an re-square matrix A, the rank of X^J -A is not less than n T and the dimension of the associated invariant vector space is not greater than r.
Since Ai is an r-fold root of <;6(A) = 0, ^SCA^) = ct>(X^) = 4' <.\) = = 4>''^~^\x^) = and cfy'^AXi) + 0. Now (^"^HX^ is r! times the sum of the principal minors of order n-r of A^/ -A; hence, not every principal minor can vanish and X^l-A is of rank at least n-r. By (11.2), the associated invariant vector space otXj^l -A,
I.e., its
r.
6.
2,
find the characteristic roots and the associated invariant vector spaces.
are
1
1, 1, 1, 2.
4 4
is of rank 3;
For A =
2:
XI
-A
-2
1 1
2-5
-1
-3
-4
is that
sion
1.
spanned by
5
[2, 3,
-2, -3]'.
For
A=
1:
XI -A
=
1
-1
-4 -2
-1
is of rank 3; its null-space is of
dimen-
1-4
sion
1.
1-5
[3, 6,
The associated
-4, -5]'
154
[CHAP. 19
7.
Prove:
If
n-
By Problem
(i)
1,
a"+ .!"-%
s^.
(i
^n-i^ + (-1)
all
l'^!
where
= 1. 2
n - 1) is (-1)
l/xZ-adj/ll
m"
Si/^""^
...
+ S^^^/s +
(-if
ladj^l
where Sj
(/
= 1, 2
1)
sum
A
=
and
s,-
and
S,-
Jl 1
(-if Ml"-^si,
+
|adj
|;x/-adj A\
+ ._,A^"-' + ._2MlAi"-" +
...
s,|/irV
siMf-V
n-i
and
Mr"|/x/-adj
Now
4|
(-if !i (-if ii
s,(i^) +
\A\
...
._i(f^f-' \A\
+
(-if
(^f Ml
\A\
/(/^)
/(R)
and by
(i)
+ ,,(1) +
...
._,(lf-
a"/(4^)
(-ifia"+ sia"-'+
...
^_,a
(-i)"Ml
8.
We have
</,(A)
|X/-P|
\\PIP'-P\
|-PA(^/-P')|
AAA
F
is a reciprocal equation.
=
A"|f/-P|
A"c^(i)
SUPPLEMENTARY PROBLEMS
9.
determine the characteristic roots and a basis of each of the associated invariant
-1
1
2
1
-2 -8 -12
(e)
1
2
(g)
1
1 1
1
-1 -1
-1
(a)
12
2
1
(c)
4
1
(O
2
1
-12
1
(rf)
-1 J
'
-2
1
-3 -9
1
-12
4
1_
2 (A)
2 2
1_
'2
(6)
2
1
(/)
2
_0
(/)
1+t
-1
3 2
(A:)
-3
_i
-1
2-i
2-4 2-1
2-1
2, [2, 2,
2,
6-10
9
6
'-1
1
-6 -3
3'
112-1
2
(0
-5 -4 -3 -2
-3 -3
-6 -4
-2
1
n)
-1
-5
-1
-1
-5
Ans.
(a)
1, [1,
-1,0]';
-1,-2]';
3,
[1,-1,-2]'
[1,3,1]'; [2,1,0]';
1, [3, 2, 1]'
1,
[1,1,-1]';
[1,1,1]'
1,
CHAP.
19]
155
(e)
2,
[2,-1,0]';
0,
1,
[4,-1,0]';
[12,-4, -1]'
3,
1,
[4,0,-1]'
(/) 0,
[3,-1,0]';
[1,1,0]'
4,
[1,-1,0]';
1,
i,
[0,0,1]';
[l
[1,1,0]'
-J,
-1, [0,1,-1]';
+ j,l,l]';
1,0]';
[l-i.1,1]'
[0,3, 1,2]'
(/) 2, [1,0,1]';
(/c)
1+t,
[0,
2-2j, [1,0,-1]'
2,
1, 1, 0,
[1,0,-1,0]', [1,-1,0,0]';
[1,2,3,2]';
[2,1,0, 1]';
[-2,4, 1.2]';
3,
(/) (m)
-1, [-3,0,1,4]'
1,
[3,0, 1,4]';
10.
Prove:
If A" is
if
AX
XX
then
X'AX
H.
Prove:
Prove Theorems
and
VI.
13.
Prove Theorem
Hint.
If
VII.
|A/-.4|
=(A-Ai)(A-X2)...(A-A) then
roots of the direct
\(X
+ k)I
-A\
= (X + k~ X-O (X + k
-X2)
(X + k -X^^.
14. Prove:
The characteristic
sum
diag(.4i,
A2
13.
Prove:
If
A and
A'
equation.
16.
t:]
A and B
r<n, show
that
NA
and
AN
Prove: Prove:
roots.
If the n-stjuare
matrix A is of rank
r.
then at least n
-r
17.
If
is non-singular, then
18.
17,
show
that
roots.
X9.
Write
iAl IA2
lA
are
20. Prove:
The characteristic
If
1.
Hint.
X'
(PX.Y(PX-)
21.
Prove: If A^ ^ \ is a characteristic root and A^ is the associated invariant vector of an orthogonal matrix P, then XlXi = 0.
The characteristic
II,
1.
0(0)
<^ (0)
= =
(-i)"M|
(-1)
of
<P
(0)
(-1)
r!
of
.4
0^">(O)
n!
Problem 23 into
c/3(X)
(?i(0)
+ (^'(0)-X +-^(f^'(0)-X^
...
+^0'^'^'(O)-A"
to obtain (19.4).
chapter 20
Similarity
TWO
ra-SQUARE MATRICES A and B over F are called similar over F matrix R over F such that
(20.1)
2 2
3
if
B = R'^-AR
11
1
Example
1.
The matrices A
1
1
of
Example
1,
Chapter
19,
and
2J
B
are similar.
R~'^AR
-110
-1 E7
-3 -3]
ij
li 2
1
2
3
1 1
133"
1
5 =
14
1
13]
[1
12
6
is
13
iJ
1)^ =
of
5
An
that
root
invariant vector of
B associated with A=
Y^ = [l,0,0]' and
it
is readily
shown
an invariant vector of A associated with the same X-L A = 5. The reader will show that Fg = [7, -2, O]' and Yg = [n, -3, -2]' are arly independent invariant vectors of B associated with A = 1 while X^ = RY2 are a pair of linearly independent invariant vectors of A associated with the same
= RYi =
[1, 1, 1]' is
characteristic
Example
I.
Two
same characteristic
roots.
1.
If
is an invariant vector of
A^ of B, then
root A^ of A.
RY
is an invariant vector of
D =
diag(ai, 02
a^)
are
The elementary
DE^ = a^E^,
(i
1, 2, ...,n).
and 4
for proofs)
Any
If
invariant vectors.
IV.
an re-square matrix
A has
re
it
is similar
5.
to a diagonal matrix.
See Problem
In
Problem
V.
6,
we prove
field
Over a
if
and only
if
\IA
factors completely in
1/
F and
156
CHAP.
20]
SIMILARITY
157
Not every ra-square matrix is similar to a diagonal matrix. The matrix of Problem 6, Chapexample. There, corresponding to the triple root X = 1, the null-space of A/-i is of dimension 1.
ter 19, is an
We can
VI.
prove, however,
to
a triangular matrix
is
any real
n-
orthogonal matrix
P such
that
square matrix with real characteristic roots, there exists an P'^ AP = P'AP is triangular and has as diagonal elements
See Problem
11.
of
Theorem
of
Theorem VHI
DIAGONABLE MATRICES. A
Theorem IV
is
matrix A which is similar to a diagonal matrix is called diagonable. basic to the study of certain types of diagonable matrices in the next chapter.
SOLVED PROBLEMS
1.
Prove:
Two
same characteristic
roots.
Let A and B
(')
R~^AR
A/-B
\I -
R'^AR
=
R-'^UR - R-''aR
=
R'^(XI-A)R
and
\\I-B\
\R-^.\XI-A\.\R\
\XI-A\
Thus, A and S have the same characteristic equation and the same characteristic roots.
2.
Prove:
If
is an invariant vector oi
RY
is an invariant vector of
=
B = R ^AR corresponding to the characteristic root A corresponding to the same characteristic root \,- of A.
=
A,-,
then
By hypothesis, BY
X^Y and RB
AR; then
=
AX
and
A"
ARY
RBY
RX^Y
X-RY
X^X
Is an invariant vector of
A corresponding
3.
Prove:
vectors.
has
re
Let R AR = diag(6i, 62. > b^) = B. Now the elementary vectors fii, Eg ^n ^'^ invariant vectors Then, by Theorem n, the vectors Xj = RE. are invariant vectors of A. Since R is non-singular, its column vectors are linearly independent.
of B.
158
SIMILARITY
[CHAP. 20
4.
Prove:
If
it
is similar to a
diagonal matrix.
Let the n linearly independent invariant vectors X^. Xq
teristic roots Ai, Xs. /^n ^ '^^"^
,4A'^ = A^A'^,
0'
1, 2
n).
charac-
AR
[AX^.AX^
AX^]
Ai
[Ai^i, ^2^2
...
1X1,
X2
A2
'^n-'
...
diag(Ai, A2
A)
Hence,
R ^AR
= diag(Ai,
A2
A).
5.
of
Example
1,
Chapter
19, is
Zi = [1,1,1]',
1
Z2 = [2,-1,0]',
2
1
;
^3 = [1,0,-1]'
1
Take
R =
[Jf 1,
Z2, ^3!
-1
then
1-2
1
and
10-1
1
-3
5
1 0"
1 1
2
1
,1
1"
1 1 1
R'UR
a diagonal matrix.
1 1
-2
2
3
2
1
2_
-1 -1
P
-3
6.
Prove:
of
Over a
field
factors completely in
F an n-square matrix A is similar to a diagonal matrix if and only if Xl-A F and the multiplicity of each A.^ is equal to the dimension of the null-space
X^I-A.
First,
and that exactly k of these characteristic B R-'^AR diag(Ai, A2, ^n' Then X^I - B has exactly k zeroes in its diagonal and, hence, is of rankn-fe; its thus, A.^/-^ has the same null-space is then of dimension n-(n -k)=k. But \l-A = R (Xj^I - B) R~^ \^I B rank n -k and nullity k as has Conversely, let Ai,A2 Ag be the distinct characteristic roots of A with respective multiplicities
suppose that
'i-''2
r,
where
ri+r2+...+rc
Take
Xi^.Xi^.
XJ'
.,Xir-
= 1,2
s).
exist scalars a
(i)
such that
+
*
(o2i''''2i
"^
+ 022^^22 +
=
''2'r2^2r2)
...
+
Yi =
(osi^si
"^
^s^Xs'z
"^
Now
each vector
ai^Xu
ai^Xi^ +
+ air^Xir^) =
0,
(i
1,
s).
for
otherwise,
(i);
it
is
an
Theorem I their totality is linearly independent. But this contradicts X's constitute a basis of K and A is similar to a diagonal matrix by Theorem IV.
invariant vector and by
thus, the
7.
Prove:
are the
characteristic roots of A.
A and let X-l be an invariant vector of A corresponding to column of a non-singular matrix Qi whose remaining columns may be any whatever such that \Qi\ ^ 0. The first column of AQi_ is AX^ = X-^X^ and the first column of O]'. Thus, Ql^AQi is Ql^XiXi^. But this, being the first column of Qi'^XiQi. is [Ai,
Let the characteristic roots of A be Ai, Aj
Take
X-i
as the
first
CHAP.
20]
SIMILARITY
159
(i)
Qi^AQ^
A-^
hi
[o
Bil
A,j
where
is of order n |A/
1.
Since
-ft
and Qi^ AQi_ and A have the same characteristic roots, it A. If n = 2, A-i = [A2] and the theorem is proved
the first
Otherwise, let X^ be an invariant vector of Ai corresponding to the characteristic root As- Take X^ as column of a non-singular matrix Qq whose remaining columns may be any whatever such that IftI /^ 0.
Then
(il)
Q2^A^Q^
[^2
[0
52I
A^j
where A2
is of order n
2.
If
n =
3,
/I2 = [As],
Q2J
I2
<?3
n-2
Qn-i
such that
(J
/1()
is triangular
8.
Q AQ
is triangular, given
9-1
6 5
8-9
5
-1
1
-5
5
-4
5
-4
Here |A/-^| = (A^-l)(A^-4) and the characteristic roots are 1,-1,2,-2. Take [5,5,- 1, 3j', an invariant vector corresponding to the characteristic root 1 as the first column of a non-singular matrix Qx whose remaining columns are elementary vectors, say
^5000'
5
-10
3
10 10
1
Then
1 1
5-1
5
8-9
-15 -12
20
16
[O
-5
1
and
5
Qi AQ^
4
3
-3
17
[4, 0,
Aq
4
A
then
-l]'.
Take ft
i
o"!
-20
II
and
ft A1Q2
J_
20
-15 -48
-11
20]
r_i
=
5I
64
48j
L
Take Qs
"d
}
[.:
[s, ll]'.
then
<?3'
H[-11
8
2/5I
"I sj
,!
and
ft'-42ft
[2
-2J
160
SIMILARITY
[CHAP. 20
Now
5
32
4
.
pi
[o
o"|
'
[h
[o
o1
Qsj
5
1
Q''
8
-40 40
4
Q^j
160
20
160_
-1
11
"l
1_
-180 40 -220
-7 -9/5"
5 2
1
-1
and
2/5
-2
9.
If
is
matrix
P such
any real re-square matrix with real characteristic roots then there exists an orthogonal that P''^AP is triangular and has as diagonal elements the characteristic roots of ^.
X be the characteristic roots of A. Since the roots are real the associated invariant As in Problem 7, let Qi be formed having an invariant vector corresponding to Aj as first column. Using the Gram-Schmidt process, obtain from Qt an orthogonal matrix Pi whose first column is proportional to that of Q-^. Then
Let Ai.Xs
P'lAP^
where Ai
is of order n
hi
fill
Next, form Qq having as first column an invariant vector of Ai corresponding to the root A2 and, using the Gram-Schmidt process, obtain an orthogonal matrix P^- Then
TAs
iji-g
Ss"!
[0
After sufficient repetitions, build the orthogonal matrix
A^j
P
which P~^AP
pi
=
Pi.
|_o
0]
pn-2
"I
pj
P_J
for
is triangular
10.
1"
1
P-^AP
P""
1
1
3 2
2_
is triangular
Prom Example
to
5, 1, 1
is [1,0, -1]'.
1
We take
Qi
10
1
-1
1 1
/V2
1
l/v^
1/1/2"
[l, 0, -l]'.
Pi
-l/\/2
an orthogonal matrix whose
first
column is proportional to
We
find
1/a/2
-I/V2
1
2
1
1
1
l/>/2
1
I/V2"
=
Pl'^Pi
3
2
3^/2"
\0
A^j
1/1/2"
l/^/T
-1/V2
1/
V2
21/2"
CHAP.
20]
SIMILARITY
161
Now Ax has A
we
[l,
r
obtain by the Gram-Schmidt process the orthogonal matrix
"2=1 1/V3
-1/^^
1/a/3
2/v/6
r-n
I
Lv2
\_
rn
ij
Then
L-2/V6
1/\A2
I/V3J
1/^6
P-,
I
1 1
X]
2/^6
1/V6"
-I/V2"
-I/a/S"
is orthogonal and
P~^ AP =
->/2
5
11.
f/
such that
V
2
-4 - 6i
+ 3i
A(A^
1 +t 2 - 2J
-1 + i
-6 -
4f
6 + 4f
-3 5
2i
of
is
+(-4-J)A+
i)
0,
For A =
0,
take
[l,
Qi
-1
_
1
1_
unitary matrix
i/\fz
f/i
i/Ve'
-i/\/2
-l/vA3
2/v^
l/\/6"
1/V3"
1/^2"
Now
-2y/2a-i)
1-i
~(26 + 24:i)/y/6
(2 + 3i)/\A3"
3
+2J
= Ui.
12.
P such
that
P ^AP
is triangular
-1
5
-1
may be taken as
[l,0, -l]'.
The
[1,1,1]',
[1,-2,1]' respectively.
Now
orthogonal.
Taking
l/V^
P
=
l/y/J
l/y/3
l/y/6
-2/^/6
l/yje
-1/-/2
l/y/3
we
find
P~^ AP
= diag(2,3, 6).
This suggests the more thorough study of the real symmetric matrix made
162
SIMILARITY
[CHAP. 20
SUPPLEMENTARY PROBLEMS
13.
roots of
Find an orthogonal matrix P such that f'^AP is triangular and has as diagonal elements the characteristic A for each of the matrices A of Problem 9(a), (6), (c), (d). Chapter 19.
\l\pZ
Ins.
(a)
\/Zs[2
2/3
2/3
1/3
(c)
I/V3"
1/-/2
-1/^/6""
-l/v/2
\/^^pi
-4/3a/2
i/Va"
-1/1/3
2/>/6
1/^
1/^6"
-i/Ve"'
I/V2"
(b)
1
-l/V'2
(rf)
1/V^ -i/'v^
1/V3"
2/V6"
I/a/2"
1/^2"
1/V2"
(6)
I/V3"
-1/\/6"
14.
not.
and (rf) are Chapter 19 and determine those which are similar to a diagonal matrix having the characteristic roots as diagonal elements.
of
Problem
9,
15.
For each of the matrices A of Problem 9(j). (/), Chapter 19, find a unitary matrix V such that angular and has as diagonal elements the characteristic roots of A.
U'^AU
is tri-
l/v^
Ans.
(i)
-(l+i)/2
2
(/)
\l\p2.
1
-1//!
l/>/2
a-i)/2^/2
(1
\/\[2
-0/2/2"
i 2
\/^[2
\/y[2
16.
Prove:
If
is real
17.
Theorem
VIII.
18.
for
(i
1, 2
m).
Show
that
= diag(Ci, C2
S
are similar.
19.
and
ij
C)
Hint.
Suppose C^ =
and
= diag(Bi,
Bg
B^).
/i
Let
= diag(Bi, Sj)
= diagCSj, B^).
/ = diagC/i./g),
and
/g are
those
B
B
Show
oJ
that
BT^EK
to prove
B and C are
similar.
20.
to
= diag(Bi,
Bg
B)
21. If
AE
Hint.
Q'^BP'^N.
See Problem
15,
Chapter 19
22. If A-i.A^
...
A^ are non-singular and of the same order, show thut A-^A^-.. A^, A2As-.. A^A^, A3... A^A-^^A^. have the same characteristic equation.
23.
Let
of
(a)
.4.
Q'^AQ
B where B
is triangular
Show that S
roots of A.
A Q
is triangular
(b)
Show
that
X,-
= trace
24.
Show
r
1 2_
-1
25.
Show
that
1
_1
3
2
and
-3
2-1
-2
3
chapter 21
Similarity to
a Diagonal Matrix
we
The characteristic
See Problem
II.
1.
The
See Problem
2.
When A
III.
is real
If
9,
Chapter 20, is
0;
hence,
...,
symmetric matrix with characteristic roots Ai, \^, then there exists a real orthogonal matrix P such that P'AP = P'^AP = diag(Ai, Aj
is a real ra-square
A,
A).
Theorem
IV.
III
implies
If
r^
of a real
r^.
Theorem
III
becomes
BY
V. Every real quadratic form q = X' AX can be reduced by an orthogonal transformation to a canonical form
(21.1)
Aiyi + X^yl +
r
...
+ X^y"^
where
is the rank of
A and
Ai,
Aj
A^ are
its
the
Thus, the rank of q is the number of non-zero characteristic roots of A while the index is number of positive characteristic roots or, by Descartes Rule of signs, the number of varia-
^|
0.
real symmetric matrix is positive definite if and only if all of its characteristic
ORTHOGONAL
SIMILARITY. If P is an orthogonal matrix and 8 = P"^ AP then B is said to be orthogonally similar to A. Since P~^ = P', B is also orthogonally congruent and orthogonally equivalent to A. Theorem III may be restated as
.
VII. Every real symmetric matrix A is orthogonally similar to a diagonal matrix whose diagonal elements are the characteristic roots of A.
See Problem
3.
Let the characteristic roots of the real symmetric matrix A be arranged so that Ai ^ A2 = Then diag(Ai, A2, . A.) is a unique diagonal matrix similar to A. ... ^A. The totality of such diagonal matrices constitutes a canonical set for real symmetric matrices under orthogonal similarity. We have
VIII.
Two
the
same characteristic
real symmetric matrices are orthogonally similar if and only if they roots, that is, if and only if they are similar.
have
163
164
[CHAP. 21
In
Problem
4,
we prove
(x-^^, x^ x^) and if tive definite, there exists a real non- singular linear transformation X = CY
AX
and
X'BX
X'BX
is posi-
which carries
X'AX
into
AiXi + A2Y2 +
and X' BX into
+ Ay
yi
22
+ 72
+ Tn
\\BA\=0.
See also Problems
4-5.
HERMITIAN MATRICES.
X.
we have
See Problem
XI.
7.
The
A, there
U such
that
U'HU = U~^HU =
.
diag(Ai, A2,
...,
A).
The matrix H
U'
HU
r-
be arranged so that Ai i A2 ^
Then diag(Ai, A2
follows
A.) is
The
totality of
i Ansuch diago-
nal matrices constitutes a canonical set for Hermitian matrices under unitary similarity.
There
XIV.
Two
if
and only
if
and only
if
NORMAL MATRICES. An
diagonal,
real
if
symmetric,
unitary matrices.
Let .4 be a normal matrix and f/ be a unitary matrix, and write B = U'AU. and B'B = U'A'U -U'AU = U'A'AU = U'AJ'U = U'AU -U'A'U = BB'. Thus,
XV.
In
If
^
8,
is a
is
a.
U'AU
is a normal matrix.
Problem
XVI.
If
we prove
is an invariant vector corresponding to the characteristic root A/ of a northen X^ is also an invariant vector of A' corresponding to the characteristic
X^
mal
rnatrix A,
root A^.
In
Problem
XVII.
9,
we prove
square matrix A is unitarily similar to a diagonal matrix
if
and only
if
is
normal.
As a consequence, we have
XVIII.
If
is normal,
See Problem
10.
CHAP.
211
165
MX.
If
r^
if
and only
if
SOLVED PROBLEMS
1.
Prove:
The
A are
all real.
Consider
=
B
which
is real
+ ik)I-A\[(h-ik)I-A\ + ik)I
(hl-Af+k'^I
{h
-A
is singular.
such that
BX
and, hence,
X'BX
The vector
X\h!-AfX
A"
+ k^X'X
{(hi
X'(hl-A)'(hl
-A)X
0.
+ k^X'X
Also,
= =
(/(/- 4)
is real; hence,
X'X>0. Thus,
A:
and
2.
Prove:
and X^ be invariant vectors associated respectively with the distinct characteristic roots Xi and
As of 4.
Then
=
AX^
Ai^i
and
AX^
= Aj'^s.
also
A^2'4^i =
X-iX^X^
and
Xj^AX2 = XqXiX^
Taking transposes
^1 A X^ = A 1 A*! ^2
and
XqAX-j^ =
= 0.
X2X2X1
Then
X^X^X^
A2^iA^2
X[X2
3.
Find an orthogonal matrix P such that P'^AP is diagonal and has as diagonal elements the characteristic roots of
A given
,
^7-2
-2
1
10
-2
7
-2
is
A-7
2
-1
2
A -10
2
-1
and the characteristic roots are
1
A-7
- 2
6, 6, 12.
-1
2
Xl
For A =
6,
we have
2
1
-4
2
X2
X3
or
Xi
- 2x2 + xs
in-
-1
0,
-l]' and
X2
= [l,
1, l]'.
When A
= 12,
we take X3 =
166
[CHAP. 21
of
we have
1/VT
P
=
1/x/y
X/\f&
I/VT
-i/x/T
-2/sf6
l/yjl
l/VT
It
is left as an exercise to
show
that
P~^AP
= (iiag(6,
6, 12).
4.
Prove:
nite,
If
transformation
...
^nTn
^'^'^
^'B^
into yi + y| +
GV
V'(G'BG)V
jjii.jj.^
jji^vf + fJ.2V2 +
+ Ait;^
where
Let
(ii)
= diag(l/\/7H, X/sfJT^
Then V =
1
HW
2
carries
(i)
into
W
for the real quadratic form
it
(H G
BGH)W
G'AGH)W
=
+ wt
Now
(H
KY
which
carries
into
Y'(K'h'g'AGHK)Y
where Xi.Xg
formation
X^y2 + X^y^ +
...
+ X^y^
there exists a real non-singular trans...
X are the characteristic roots ot H G AGH. Thus, CY = GHKY which carries X'AX into X^y^ + X^y^ +
Y'(K'h'g'BGHK)Y
Y (K~ IK)Y
-'i
+ Xy^
-'2
Since for
all
values of
X,
K'h'g'(\B-A)GHK
X
XK'h'g'BGHK - K'h'g'AGHK
= =
diag(X,X
X)
- diag(Xi,X2
X)
diag(X-Xi,
X-X2
X-X)
it
=0
5.
Prom Problem
3,
\/^pr
l/vA3"
l/sfQ
1//6"
(GH)W
-l/\f2 \/2\fl
l/VT
IATs
-2/'/6"
1/^r
1/2x^3"
1/v^
1/(
1/3n/2
1/3-/2
-1/:
1/f
_-l/2>/3
7
1/3\A2
-2
10
carries
X'BX =
X'
2
1
-2
7
into
ir'ZIF.
-2
carries
1/3
X'AX
into
tr'
0^
CHAP.
21]
167
W
=
KY
= lY.
CY
{GH)Y
X'BX
that
is left as an excersise to
show
6.
Prove:
A = CP
where C
is positive definite
Since A is non-singular, AA' is positive definite symmetric by Theorem X, Chapter Q"'' with each k^) = B AA' Q = diag(A;i, ij Q such that
17.
A:^
Then there
>
0.
Define
Bi
diag(\/^,
VT^
\fk~^)
and
= QB-^Q''^. =
Now C
is positive definite
symmetric and
C^
Define
QB^Q-^QB^Q-^
=
QB^Q-^
= I
QBQand
AA'
is orthogonal.
C'^A.
Then
PP'
C'^AA'C'^^ C ^C'^C^
is orthogonal as required.
Thus A
CP
with C
7.
Prove:
The
that
Let \j be a characteristic root of the Hermitian matrix H. Then there exists a non-zero vector X^ such Now X'^HX^ = XjX'j^Xiis real and different from zero and so also is the conjugate transHXj^ = A jX^
.
pose
X\HX;j^ =
\^X\X^.
8.
Prove:
If X: is an invariant vector corresponding to a characteristic root A,- of a normal matrix A, then Z^ is an invariant vector of A corresponding to the characteristic root X^.
Since A is normal,
(XI-A)(XI-A)'
(KI-A)(XI-A')
XXI - XA'
XXI - \ a'
a' A
0;
XA
AA
-A)
-XA+
-A)X^=
so that XI
-A
is normal. =
By hypothesis, BX;
=
= (X^I
then
(SX^)'(SA'i)
X-B'-BXi
X'iB.B'X^
(W X ^)' {W X ^)
and
B'X^
(X^/-I')Zi
^.
9.
Prove:
An
re-square matrix
is normal.
and only
if
A is normal.
Suppose A
By
Theorem vni. Chapter 20, there exists a unitary matrix U such that
^1
6i2
bin
As
bin
U AU
....
Xfi-i
bn-i. n
B'b
By Theorem XV, B is normal so that B' B = BB is XiXi while the corresponding element of BB
Now
is
...
bmbm
Since these elements are equal and since each b^j b-^j 2 0, we conclude that each b^j = 0. Continuing we conclude that every b^; of B with the corresponding elements in the second row and second column, A). Conversely, let A be diagonal; then A is normal is zero. Thus, B = diag(Ai,A2
168
[CHAP. 21
10. Prove:
If
orthogonal.
Let Ai,
A*!
AXi=\iXi, AX^
and, since
and Xj, ^2 be distinct characteristic roots and associated invariant vectors of A. Then = \^X^ and, by_Problem 8. jI'A'i =Xi^i,_I'A'2 = AaATa. Now ^2/lA'i = Ai^a^i and,
X[a' X^
= X^X'-^X^.
But
X[a' X^
\^X[X2.
Xx iXo.,
XiX2=
as required.
11.
x^ -
12xix.2.
- 4.xi = 40 =
or
(i)
X'AX
X
[-:
::]
40
of
A is
A-1
|A/-^|
=
6 =
(A-5)(A + 8)
[3, -2]'
and
"
[2, 3]'
respectively as associated
Now
tsVTs '__
|
2/V/T3I
'^lll
whose columns
3/vT3j
to
o"!
The transformation
1
PY
reduces
(i)
-2/\flf\
3/VT3.
-6~]
3/x/T3
13"! Vis'
fs
-
'VTs
2/\/l3
-\y 3/\/l3.
13 J
y'\
Lo
-Sj
5yl
^-y|
40
The conic
axes
in plane analytic
12.
One problem
to
i.e.,
Without
at-
we show here
9
1
"
1
3
1
-1
2 2
-7
1
1
1
2
2
and
=
1
-1
-7
-9
The
characteristic equation of
is
A-3
1A/-.11
-1
-1
-1
A
-2
-2 A
:
fi =
~.
-;=|
A2
V2
-2,
V3
^N-TfJ
CHAP.
21]
169
Using only the elementary row transformations Hj(k) and H^:(k), where
r-1
4,
3
1
1
-1
-7
--VJ
-4
Bi
2 2
1
1
1 1
ft]
y +
-1
-7
-9
-4
'3x +
X X + 2y
we
find
0,
z =
or
C(-l,
0, 4).
Prom
D^,
we have
d = -4.
0, 4).
The rank
equation is
of
is 3
4; the
111
+
4F
4.
2Z
- 4
The equations
of translation are
x = x'
di,
1,
y = y',
z = z'
The
1-2. i^s-
\_v\.v.2,v^.
-i/i/s"
-1/vT
i/vT
Y Z\- E
[X Y Z]
2/vT
i/vT
-i/x/T
i/VT
SUPPLEMENTARY PROBLEMS
13.
For each of the following real symmetric matrices A find an orthogonal matrix nal and has as diagonal elements the characteristic roots of A.
,
P such
2
(a) 2
-1
.
2
(b)
_1
()
2
(c)
-4
2
21
"3
2
2
2"
-1
4
r
-1
4_
3
2_
-4
_ 2
-2
-1_
id)
2
_2
-1
-1
2_
-2
4_
_1
2/3 1/3
-1
2/3 1/3
i/\/^
Ans.
(a)
1
1/VT
(b)
l/\f2
\/sf&
l/v/
l/v/s"
i/a/s"
(c)
-2/vT
-2/3
1/3
2/3 2/3
i/VT
-i/v^T
2/3
(d)
-1/VT
1/3
1/\A6"
-2/3
2/3
1/3
(e)
i/x/T
2/^/6'
1/vT
i/vT
-l/v/T
-2/3
2/3
--2/3
-1/ 3
14.
2/3
yV 6
-1/\A2
i/VT
Find a linear transformation which reduces X'BX to X' AX to Xtyl-^X^y^ + Asy^, where yl-^y'^-^y'l and of \XB - A\ =0, given
-2
{a)
l\
[7 -2
1
10
-2
7J
5=03
\\
r2
fj
(6)
-4
7 =
-4
1
-4
-8
1_
,
2 5 4
2 4 5
-4
_-4
2 2
-2
l/\/T
2J
-8
1/3VT
-2/3x^2"
1/31/2"
2/3
1/3
l/3x/l0
/4ns.
(a)
2/3^/10
2/3\/l0
-\/\[2
-2/3
170
[CHAP.
21
15.
If
/'"^^P = diag(Ai,
'^1
A.1
Xi,
'^ ^
X^ + in
'^r-i-2
^ri)'
"^S"
P"\Ai/ -/1)P
= diag(0,
0,
-^r + i'
~'^r+2
^1 ~^v)
2 to
''^"'^
r.
16.
17.
18. Identify
(o) (h)
each locus:
24xi:>;2
20x2-
27%^
=
=
"i'
369,
(c)
(''^
\Q?,
x\ - 312%ia:>, + 17 %|
= 8
900,
3x^+ 2%2
+ 3%2
*i + 2^:1x2 + Xg
19.
Prove:
Each characteristic
7
root of
is either zero or
a pure imaginary.
+ ^ is non-singular,
(/
/ -.4 is
non-singular.
B =
+ /iri(/-4) is orthogonal.
20.
Prove:
If
Is
21.
Prove:
If
is normal, then
is similar to
22. Prove:
if
and only
if it
can be expressed as
H + iK, where H
and
are
commu-
23. If
Xi, A,2
A.,
then
roots of
Hint.
AA
for
J
are XiAi,
C/"^<4(7 =
T*
X2A2
T
= [. ],
^n^rv
where
C/
_
is unitary
_
tr(-4/l')
Write
and T is triangular.
Now
tr(rT') =
requires
ty =
24.
Prove:
If
is non-singular, then
AA'
is real
and non-singular.
25.
Prove:
If
if
^B
and
BA
are normal.
A be
(X-x^f^iX-x^f'^ ...a-'Kfs
P such
that
P'^ AP
B.,
diag(Ai/.ri' '^2-'r2
0, ...,
-^sV^)
0,/
'i
,
Define by
(j
= 1, 2
s) the
0) obtained
by replacing A. by
and Xj,
r a
i).
by
in the right
member
T? E,
.
DR. , PBiP~^.
= 1,2
s)
Show
(a) (6) (c)
(d)
that
~-
P'^AP
A
AiBi
A282+
...
A5B,
E^Ej The
=0
for
^j.
=
/
(e)
(/) (g)
1 + 2 + + E3
(Xil-A)Ei
= 1,2
s)
(h)
IfpM
Hint.
p(A)
p(Ai)i + p(A2)2 +
+ P(^s^^s-
Establish
^^
Aii + A22 +
xIe
Aii + A22 +
+ X^E^.
CHAP.
21]
171
(i)
Each E^
Hint.
is a polynomial in A.
Define
/(A)
(A
-Xi)(A-X2)
(A -X^)
and
/^(A)
/(A)/(A-A^),
(i
= l,2
s).
Then
(/)
A
If
matrix
If
fi
commutes with A
with
if
it
Hint.
(k) (I)
B commutes
A A
U
If
A-^
(m)
Al%
=
A-2%
...
+ A-^fig
A^'^
,1/2
vTii
VA22
...
VAs^s
it
Equation
decomposition of A.
Show
that
is unique.
27. (o)
24
-20
24
lO"
4/9
=
-4/9
4/9
2/9"
5/9
4/9
5/9
-2/9'
-20
_
-10
9_
49
-4/9
2/9
-2/9
1/9
+ 4
4/9
2/9
10
-10
"
-2/9
-2/9
2/9
8/9
29
20 29 10
-lO"
10
44_
(b)
Obtain
A-^
1
--
20
196
-10
38/9
(c)
-20/9
38/9
10/9'
Obtain
//^
-20/9
_
1
-10/9
23/9_
0/9
-10/ 9
28.
B commute.
Use Problem
If
26
(/).
29.
Prove:
U and a positive
such that
Hint.
A= HU
H
by H^ = AA'
Define
If
and
f/
= H~'^A.
30.
Prove: Prove:
is non-singular, then
is
normal
if
and only
if
31.
and only
if
Hermitian matrix
32.
33.
such that
H~^AH
is normal.
Prove: Prove:
real symmetric (Hermitian) matrix is idempotent if and only if its characteristic roots are O's real
and
I's.
If .4 is
r.
= tiA.
34.
Let A be normal, B =
Prove:
(a)
A and
(S')"^ commute,
is unitary.
35.
Prove: If
tf
~iH)
is unitary.
36. If
is
2i
Prove:
(a)
(6)
of
U'^AU, where U
real.
(c)
(d)
A
A
is real
is real
symmetric (Hermitian),
Ai i A < A, where Ai
is the
chapter 22
F.
commutative with
itself
Let X denote an abstract symbol (indeterminate) which is assumed and with the elements of a field F. The expression
n
n-i
(22.1)
f(X)
cirik
On-^X
a^X +
p a^X
where the
If
a^ are in
oj;
F
,
is called
a polynomial in \ over F.
,
(22.1) is called the zero polynomial and we write f(\) = 0. If a^ ^ of degree n and a is called its leading coefficient. The polynomial /(A) (22.1) is said to be = said to be of degree zero; the degree of the zero polynomial is not defined. Oq^ oq ^ is
every
If
a =
in (22.1), the
polynomials in A which contain, apart from terms with zero coefficients, the same terms are said to be equal.
Two
The
F[X\ as elements
of a
number sys-
For example
= g(X) + f(X)
and
n,
f(X)-g(X) = g(X)-f(X)
If
(i)
^(X) + g(A)
degree
when m.>n,
of degree at most
of degree
re
when m<n.
(ii)
/(A)-g(A)
If
is of
degree
m + ra.
0,
/(A)
?^
while
f(X)-g(X) =
then
g(X) =
If
g(A) ^
then
A(A) =
A:(A)
QUOTIENTS.
In
I.
Problem
If
1,
we prove
are polynomials in F[A], then there exist unique polynomials
is either the
(22.2)
/(A)
h{X)-g{X) +
r(A)
r(A) = 0, g{X) is said
Here, r(A) is called the remainder in the division of /(A) by g(A). to divide /(A) while g(A) and h{X) are called factors of /(A).
If
172
CHAP.
22]
173
Let
/(A)
= h(X)-g(X).
When
g(A.) is
if its
Example
1,
it
is factorable
is irreducible;
X-
a.
Then
(22.2)
becomes
=
r,
h(X)-(X-a) +
where
is free of A.
By
(22.3),
f(a) =
and we have
until a remainder free of
n.
is f(a).
When
/(A) is divided by
A- a
m. A
X-
a as a factor
if
and only
if
0.
GREATEST COMMON
f(X) and g(A).
DIVISOR.
If
and
g(A), it is called a
common
if
divisor of
A
(i)
(ii)
/"(A)
and g(X)
common
(Hi)
Problem
2,
we prove
not both the zero polynomial, they have
a unique greatest
common
divisor d(X) and there exist polynomials h(X) and ^(A) in F[X]
such that
(22-4)
d(X)
h(X)-f(X) +
k(X)
g(X)
3.
of
/"(A)
d(X) =
1.
of /(A)
(A^
+4)(A^+3A+
i/(A)
5)
and g(A) =
(A^
-1)(A^ + 3A+5)
is
A +3A+5. and
(22.4) is
A.=
3A +
- lg(A)
5
We have also
V.
(1-A)-/(A) + (X+4)-g(X)
0.
This illustrates
not
1,
If the greatest common divisor of /(A) of degree re>0 and g(X) of degree m>0 is there exist non-zero polynomials a(X) of degree <m and 6(A) of degree <n such that
a (A) -/(A) +
b(X)- g(X)
and conversely.
See Problem
4.
if
their greatest
174
[CHAP. 22
is
/(A)
one of
/(A)
and
h(X).
If /"(A)
and
g:(A)
each divides
A(A),
soalsodoes /(A)-g(A).
UNIQUE FACTORIZATION.
IX.
In
Problems, we prove
(22.5)
9i(A)
?2(A)
^^(A)
where
is a
SOLVED PROBLEMS
1.
Prove:
in F[X],
If
/(A) and g(X)^0 are polynomials in F[X], there exist unique polynomials h(X) and r(X) where r(A) is either the zero polynomial or is of degree less than that of g(X), such that
/(A)
(i)
Let
h(X)-g(X) +
r(X)
n
/(A)
= o^jA
n-i + %1-iA. +
+ a^A + Oq
and
g(A)
=
h^X
if
i^-iA
6iA +
6o.
^m then
?^
/(A) =
or if
n<m.
Suppose that
=
n>m;
c^-iA
/(A)
7^ A
g(A)
/i(A)
cpX
Co
we have proved
~X
and
r(X) = /i(A).
Otherwise, we form
/(A)
^A"-\(A) - ;^A^-\(A)
/,(A)
f^X) =0 or is of degree less than that of g(A), we have proved the theorem. Otherwise, we repeat Since in each step, the degree of the remainder (assumed ^ 0) is reduced, we eventually reach a remainder r(A) = 4(A) which is either the zero polynomial or is of degree less than that of g(A).
Again,
if
the process.
h(X)-g(X)
+ r(X)
and
/(A)
'c(A)-g(A)
s(A)
r(A)
Then
+ s(X)
s(X)
k(X)-g(X)
and
[k(X) -h(X)]g(X)
=
A:(A)
r(X)
Now r(A)-s(A)
is of
while, unless
=
0,
h(X) = 0.
[k(X)
- A(A)]g(A)
is of degree equal
to or greater than m.
r(A)
s(A) =
Then both
CHAP.
22]
175
2.
Prove:
If f(X) and g(A) are polynomials in F[X], not both zero, they have a unique greatest divisor d(\) and there exist polynomials A(A) and k(\) in F such that
common
(a)
da)
If,
h{X)-f(X) +
is the
ka)-g(\)
say,
1
/(A) =
0,
.
then d(X) =
bin,
g(X)
where i^
we have
(a) with
A(A) =
and k(X) = 6
is not greater
By Theorem
I,
we have
/(A.)
?i(A)-g(A) +
If r^(X)
where
h(X) =
If (ii)
r^^(X)
=0
7^
or is of =
= 0,
then d(X) =
b'^^giX)
and we have
(a)
with
and k(X)
r^(A)
0,
6^
we have
g(A)
=
g2(A)-ri(A)
ri(A).
If r^(X)
+ r^iX)
= 0,
where
/^(A)
or is of
we have from
( i
/(A)
?i(A)-g(A)
and from
If
it
r^(X) 4 0,
we have
'i(A)
(i")
93(A)
r^(X').
TsCA) + rsCA)
If rg(A) = 0,
where
rgCA) =
or is of
we have -
from
i)
and
(ii)
'2(A)
g(A)
?2(A)-'-i(A)
g(A)
?2(A)[/(A)
9i(A)-g(A)]
-92(A) -/(A)
[1 + 9i(A)-92(A)]g(A)
and from
it
Continuing the process under the assumption that each new remainder is different from
general,
(v)
we have,
in
'i(A)
9i+2(A)-'-i+i(A)
ri+2(A)
's-2(A)
9s(A)-'-s-i(A)
r^cA),
&(A)?^0
and
(^^
's-i(A)
9s+i(A)-'-s(A)
By
(vi), rjCA)
(v).
Prom
(iv).
we have
9s-i(A)-'s-2(A)
r5_i(A)
(vi),
we conclude
If
then
rf(A)
= e~^'s(A).
Prom(l)
ri(A)
/(A)
91(A)
g(A)
/!i(A)
-/(A) + /ci(A)
g(A)
=
and substituting
A2(A)-/(A)
in (ii)
'2(A)
-52(A). /(A)
+
.
[l + 9i(A)-92(A)]g(A)
+ i2(A)-g(A)
Prom
(iii),
/3(A)
t^(K)
qa(>^)
r^(X)
we have
92(A) 93(A)]g(A)
'3(A)
= =
[1 + 92(A) 93(A)]/(A)
[-9i(A)
93(A)
9i(A)
h3(X)-f(X)
finally,
43(A) g(A)
Continuing,
we obtain
rs(A)
hs(X)'f(X)
+
=
ksiX)-g(.X)
Then
d(X)
c-\^(X)
c-i/5(A)-/(A) + c-iA:s(A)-g(A)
h(X)-f(X) + k(X)-g(X)
as required.
The proof
176
[CHAP. 22
3.
and
III.
g(\)
)C
2)i-)^-X+2
in the form of
Theorem
We
(i)
(li)
(iil)
find = =
/(A)
g(A)
(3A+l)g(A) +
(A'
+ 4A^+6A+4)
(A^
(A-2)(A^ +
=
4A^ +
6A+4) +
+ 7A+10)
A^+4A2+6A + 4
A^
and
(Iv)
7A + 10
(^A + p^)(17A+34)
J-(17A+34)
=
divisor is
A +
17A
+34
from
(A
+ 4A^ + 6A + 4)
(A
3)(A^ + 7A + 10)
Substituting for A^ + 7A + 10
(ii)
17A
+34
(A^
+ 4>f + 6A + 4)
(A
4;? +
6A+4)]
(A^
- 5A +
7)(A^ + 4>f + 6A + 4) -
- 3)g(A)
and
for
+ 4A^ + 6A + 4 from
17A
+34
A +
2
(A^
- 5A + 7)/(A) + (-3A +
14A^
- 17A - 4)g(A)
Then
=
i(A^ - 5A +
7) -/(A)
i(-3A
+ 14A^ - 17A
4)-g(A)
4.
Prove:
If
the greatest
common
re>
< m and
is not 1,
a(X)-f(X) +
b(X)-g(X)
and conversely.
Let the greatest common divisor of /(A) and g(A) be d(X) ^
/(A)
=
rf(A)-/i(A)
1
;
then
=
and
g(A)
d(X)-g^(X)
where
/i(A) is of
degree
<n
<m. Now
=
gi(A)-rf(A)-/i(A)
g(A)-/i(A)
and
gi(A)-/(A)
[-/i(A)-g(A)]
(a).
= gi(A)
we have
Conversely, suppose /(A) and g(A) are relatively prime and polynomials h(X) and ^(A) such that
A(A)-/(A) +
k(X)-g(X)
(a) holds.
Then by Theorem IV
there exist
Then, using
(a),
a(X)
= =
a(X)-h(X)-f(X)
+
+
a(A)-A:(A)-g(A)
-b(X)-h(X)-g(X)
if (a)
a(A)-i(A)-g(A)
a(A).
CHAP.
22]
177
5.
c-q^(X).q^a)
...
qr(\)
where
,^
is a
Write (i)
/(A)
is the leading coefficient of /(A).
a-/i(A)
is irreducible,
where a
theorem.
(")
If g(A)
If /^(A)
n-g(A)-A(A)
Otherwise, further factor-
and
/!(A)
(ii)
To
"n
are two factorizations with
Pi(A) which,
Pi(A).
9i(A)
?2(A)
. .
9r(A)
and
a p^(X) PaCA)
p^CA)
r<s.
Pi(A)-p2(A)
p^(\).
by a change
92(A) divides
in numbering,
may be taken as
...
Then
... ps(A), it must divide some one of the Since Pi(A) is monic and irreducible, ?i(A) =
Eventually,
we have
r
ity is impossible,
= s
and, after a repetition of the argument above, 92(A) = P2(A). r and Pr-^i(A) Pr+2(A) ... p^cA) = 1. Since the latter equaland uniqueness is established.
Ps(A)
i
P2(A)-p3(A)
= 1,2
SUPPLEMENTARY PROBLEMS
6.
Give an example
in
7.
Prove Theorem
Prove:
III.
8.
If /(A)
it
divides
g(A) + h(\).
9.
Find a necessary and sufficient condition that the two non-zero polynomials /(A) and g(A) each other.
For each of the following, express the greatest common divisor
()
(6)
in
F[X] divide
10.
in the form of
Theorem IV
/(A)
/(A)
= = = =
2A^-A=+2A2-6A-4,
A^A"
g(A) g(A)
1,
= =
A*-A=-A2+2A-2
A=
=
3A=
- 11A+
- 2A" - 2A -
3
1
(c)
(d)
/(A) /(A)
(a)
3A^ -
4A='
+ A^
- A^ - A + - 5A + 6,
1)/(A) +
g(A) g(A)
=
A% 2A%
-h
2A +
A= + 2A
Ans.
A=-2
- ^ (Ato
i (2A=+ l)g(A)
lo
(6)
A-
-l_(A+4)/(A) + l-(A^+5A+5)g(A)
('^)
^+1
^
^"^^
jL(5A+2)/(A)
^^{-l5X^+4.A>?~55X+ib)sa)
11.
then
g(A) = d{X)-h(X)
or
178
[CHAP. 22
12.
VIII.
13.
If
it
divides a( A).
14.
Find the greatest common divisor and the least common multiple of
=
/(A)
= =
A^g.c.d. g.c.d.
1,
g(A)
A=l.c.m,
(b)
/(A)
(a)
(A-l)(A+lf(A+2).
= =
g(A)
=
(A+l)(A+2f(A-3)
1)
Ans.
A-l;
(A^- l)(A^ + A+
=
(b)
(A+i)(A + 2);
l.c.m.
3)
15.
Given 4
show
2 ij 12 ;
(o) (b)
(f)(k)
A^
SA^
- 9A =
and
(f)(A)
A -
sA!^
- 9A -
51
m(A)
= 0.
when m(A)
A^
- 4A -
16.
What property of a
by a polynomial domain?
/(c) = 0.
17.
The scalar
and only
if
c is
if
A-
c is a factor of /(A).
18.
Suppose
/(A) =
(A-c)^g(A).
(a)
if
Show
k-l
of /'(A),
(b)
Show
that c is a
root of multiplicity
k>
of /(A)
and only
19.
Take Show
Hint:
/(A) and g(A), not both 0, in F[\] with greatest that if D(A) is the greatest
common
common
D (A)
=
= d(X).
Let
g(A) = t(X)-D(X).
and D(X)
c{X)-d(X).
20.
Prove:
An n-square matrix A
'
a^A
in
/I.
+ a^^^A
+ aiA + oqI
chapter 23
Lambda
Matrices
non-zero
mxn
oii(A)
(23.1)
ai2(A)
a22(A.)
...
...
ai(A) CsnCA)
^(A)
[ij (A)]
asiCA)
Omi(A)
is called a A-matrix.
a^^iX.)
...
aj^n(>^)
Let p be the maximum degree in A of the polynomials a^jiX) of (23.1). written as a matrix polynomial of degree p in A,
(23.2)
^(A)
Ap\^
sp-l A.^X^-^
...
A,\
+ Ao
mxn
'^(A)
matrices over F.
A^+A
.
A* + 2A= + 3A^ + 5]
A^
A=
- 4
- 3A=
J
1
A^ +
u
-J
A +
-4
L
zero.
The
A(X) is ra-square, it is called singular or non-singular according as \A(X)\ is oris not Further. A(X) is called proper or improper according as A^ is non-singular or singular matrix polynomial of Example 1 is non-singular and improper.
If
Consider
the
two
n-square
A-matrices
or
matrix polynomials
^(A)
A^X^
.P-'
p -1
+ Ai^X + Aq
and
(23.4)
B(X)
S^_,
a"^
'
...
SiA + So
(j
The matrices (23.3) and (23.4) are said to be equal, ^(A) = 5(A), provided = Bp=a andAVI,' ^ ^ = 0,1,2 p).
The product /1(A) B(A) is a A-matrix or matrix polynomial of degree at most p+q. If either ^(A) or B(A) is non-singular, the degree of ^(A) 5(A) and also of 5(A) /1(A) is exactly p + 9. The equality (23.3) is not disturbed when A is replaced throughout by any scalar k of F For example, putting A = A in (23.3) yields
A(k)
Apk^
+ Ap_.^k^~^ +
...
A^k ^ Ao
179
180
LAMBDA MATRICES
[CHAP. 23
However, when \
to the
We
define
(23.5)
Ag(C)
A^C^
. ^^ C^A^
A^_,C^ p-i
A^ ^p
...
+ A,_C + Ao
and
(23.6)
4 (J)
2.
. C^ + .
+ ^^^i +
^o
Example
Let
X XmI
X''
o1.
To
[l
i1.
oj
To
[-2
il
2
4_
(X)
\X-2
Then
j,nd
+ 2j
P [O
4J
and
2j
=
_3
ij
[lo
15
Ajf(C)
[o ij [3
'
[1
oJ
[3
4)
'
[-2
14
2J
26
12
27
1.
Al(C)
[17
See Problem
DIVISION.
In
Problem
I.
2,
we prove
if
If
A(\) and B{X) are matrix polynomials (23.3) and (23.4) and
Bq is non-singular,
polynomials
Qi_(X), RxiX);
(23.7)
.4(A)
and
(23.8)
If
A(X)
S(A)-(?2(A) + R2(X)
if
/?i(A)=0,
/1(A).
/?2(A) = 0,
visor of
jV
Example
3. If
A"^
+ A -
A% A% A +
2A''
21
(A)
L
'4(A)
2A^
-A
2
and
+
X%
(A)
'
2A
a^aJ
Q-l(X)-B(X)
'1
then
Ta^-i
=
|_
A-iirA^+i
ii
A'^+aJ
r 2A
^
2A+3
+
Ri(X)
2A
JL
j_-5A
-2A_
and
A(X)
1_
A^ + aJLA-I
ij
B{k)-Q2{X)
See Problem
3.
A
(23.9)
6(A)
bqX"-!^ + 5,_,A'?"'-/ +
B(A) =
...
+ b,X
+ bol^
i(A)-/
ra-square ma-
is called scalar.
trix
fc(A)-/
polynomial.
If in
6(A) = b(X)
I.
then
(23.10)
A(X)
B(X)- Q^(X) +
R^X)
2A
A + ll
and
Example
4.
Let
(A)
B(A) = (A + 2)/2.
Then
LA^^-I
2A+1J
CHAP.
23]
LAMBDA MATRICES
181
rA + 2
If
ll
To
-ll
If
/?i(A)
II.
in (23.10), then
^(A) = 6(A)- /
(?i(A)
and we have
matrix polynomial i(A) = [oy (A)] of degree n is divisible by a scalar matrix polynomial S(A) = 6(A) / if and only if every oy (A) is divisible by 6(A).
Let A(X) be the A-matrix of (23.3) and let B =[6^,-] be an re-square Since Xl - B is non-singular, we may write
A(\)
(2i(A)-(A/-S) +
/?!
and
(23.12)
A(\)
A.
It
(A/-fi)-(?2(A) + R2
can be shown
m.
If
is
ra-
re, until
A, are obtained,
then
=
=
A^B^
B^Ap
Ap_^B^~'
B^~'a^_^
\I - B
=
...
A,B
+ Ao
and
a,
Examples. Let
Aj^(B)
...
+ BA, + "^
^
The
^(A)
."^M
A='
and
r~^
|_-3
flO
LA-2
[a +
+ 2j
3
A-4J
isl 26j
=
'^'
IfA-l
4JL-3
[4
and
'^'^
A+
-2I A-4J
"
[14
U
2,
a'-JH'
a!4J
j?2
[17
27]
(^'-B^Q^i^^-^^
Theorem
III.
Prom Example
R^ = Ag(B) and
When ^(A)
'4(A)
/(A)-/
apIX^ + ap.jx''^ +
...
+ ai/A + Oq/
^1
and we have
^2
apB^
ap_^B^~^
...
+ oifi + aol
A/-B
until a remainder/?.
= f(B).
As
a consequence,
V.
we have
KI^- B
if
and only
if
f(B) =
0.
CAYLEY-HAMILTON THEOREM.
trix
Consider the re-square matrix A = [o,-,-] having characteristic maijj Xl - A and characteristic equation 0(A) = \Xl - A\ = 0. By (6.2)
(A/-i)-adj(A/-i)
(f>(X)-I
182
LAMBDA MATRICES
[CHAP. 23
Then ^(A)
/ is divisible
by XI - A and, by Theorem V,
[oj,-]
(^(A) =
0.
Thus,
<fi(X)
0.
Example
6.
of
is
V.i
32
31
A -
7>i
+ llX - 5 =
0.
Now
62
31 31
63
31
62
32
and
32
31 31
62 63
62
3l"
12
13
12
6
6
7.
"2
2 3
2
r
1
o"
1
31
32_
- 7
6 6
11
_1
1_
See Problem
4.
SOLVED PROBLEMS
1.
A(X)
[,
A+
1
compute
-4
ij
when
C=
L
2J
^'<'^'
i o]C
'2
'
"3
o][o
'
c 3
and
2.
Prove:
there ^(A) and B(A) are the A-matrices (23.3) and (23.4) and if Bq is non-singular, then are either and R^iX) exist unique polynomial matrices Q-,(X), i(A); QsO^). R2M. where Ri(A) zero or of degree less than that of 6(A), such that
If
(i)
^(A)
QAX)-B(X) +
Ri(X)
and
(ii)
^(A)
If
p <
q,
then
(I)
and
4(A) - ApB-^B(X)X^''^
C(A)
where C(A)
If
is either zero or of
degree at most p q,
1.
we have
(i)
with
ApB'q'-^''^
and
R^iX) =
C(A)
CHAP.
23]
LAMBDA MATRICES
183
If
C(X) =
C^X +
...
where
s>
q.
form
A(\)
If
- ApB-^''B(\)\P-1 q,
C^S-1B(A)X^-'?
(i)
D(X)
we have
with
Qi(\)
otherwise,
and
Ri(X) = 0(A)
...
we continue
(i).
the process.
of decreasing degrees,
we
ultimately reach a matrix polynomial which is either zero or of degree less than g
and we have
To
obtain
(II),
begin with
^(A)
- B(X)B-^''ApXP~1
left
1,
Chapter 22.
rA^ +
3.
Given
2A=-l
1
A'-A-l]
and
S(A)
A= A"
^(A)
|_ L
A + A' A" A^ +
L-A^ + 2
A^
- A
(?2(A),
A(X)
= Qi(X)-B(X) + Ri(X).
A(X)
= S(A)-(?2(A) + R^iX)
as in Problem
2.
We have
and
Here,
So
-ii
and
ij
S2
-,
=
Tl [i
ii
L-1
(a)
2J
We compute
^(A)
-^4S;^5(A)A^
3 =
1
1'
-2
A +
1]
10
-10
3]
2
fo
-l"!
^ A +
f-l
-1'
C(A)
11
D{X)
C(A)
- C3S2^S(A)A
2
= 3
A2 +
1
A +
-6
and
D(A) - D^B^^BCX)
J
3 5
1
11 L
P
5]
P-i
-1I
= _
-6
=
5"]
["-13
-2
3J
^ +
[_
-9
'1
-6A-13 -2A-9
A= +
5A + 3 3A + 5
fli(A)
Then
(?i(A)
(A^X^ + Cs A
+O2 }B^-^
--
A +
p
fA^
q
+
P
e]
^1
4A
+ 4
4
A^ +
sA +
5
2A+
3A +
(b)
We compute
^(A) - B(A)S2^'44A^ E(X)
(A) - 5(A) B2
^sA
and
F(X) - B(X)B2
F^
ftsCA)
184
LAMBDA MATRICES
[CHAP. 23
Then
QziX)
B2^(A^>? + 3 A +
F2)
[A^+4A
A^ +
+ 4
9
2X +
2I
6A +
3A + 5J
1
1
2
1
1
4.
Given
3 2
its characteristic
and A
also, since
is non-singular, to
compute A
-2
-1
and A'
A-1
A/-^|
Then
-3 -2
-1
A-1
-3
A^-3A^-7A-11
A-1
5"
"8
8 7 8
"l
1
1
2"
"1
0"
1 1
42
=
31
29
31
3A +1 A + 11/
8
_13
8
8_
3 2
1
1_
11
45 53
39
45
42
42
31
29"
'8 + 7
8
8 7
8
5'
"1
1 1
2
1
1_
193
=
160
177
144
160
^*
3/ +7/
=
UA
45
_53
39 45
31
42_
8 8
3
_2
224
272
13
224
193
From
11/
-1 A -3-4
+A
we have
'1
0' '1
1 1
2"
8 7
8
5 8
8_
-J-7/
34 +/II
11
_0
1
1_
3 2
1
1_
8
_13
-2
-1
-1
5
-3
-1
11
7
-2
2 5
-1"
5
"1
0"
1 1
2
1 1
i{_7^-i -3I
+ A\
121
-3
-1
- 33
1
1
11
3 2
-2
29'
-8
40
121
-24
-1
40
-24
-8
-27
5.
,^n
x:.
of degree p in
Prove that
\h(A)\
= h(\)- h(X2)
h(X^).
We have
(i)
\XI-A\
(A-Ai)(A-Ao)...(A-A^)
Let
(ii)
h(x)
Then
h(A)
=
c(sil-A)(S2l-A)...(spI-A)
4
CHAP.
23]
LAMBDA MATRICES
185
and
|A(^)|
=
c'P\siI-A\-\s2l-A\ ...\sp!-A\
!c(si-Ai)(si-A2)
...
(si-A)i
...
Jc(S2-Ai)(S2-A.2)
(S2-X)!
...
...
(Sp
-X)!
...
ic(si-A)(s2-A)
...
(s^-A)!
h(\i_)h(\2) ...h(\^)
using
(ii).
SUPPLEMENTARY PROBLEMS
6.
Given
A(\) =
Pr^ [a^+i
[_A^
^1
A-iJ
sna
B(A)=r^^
[a +
1
^^^^1 a
compute:
_r2\2+2A
(a)
-4(A)
A^+2A1
2A ij
+ B(A) + A + 2
(6)
A(\) - B(A)
r2A
^
-A^l
[a^-a
rA* +
-ij
A* + 3A= + 3A=1 -, A o
A-'
2A=+A=+A
o
1
(c)
^(A)-S(A) =
L ^ +2A^ (rf)
A'
2A^J
B(A)M(A)
r2A'' +
1_
3A'+A^+A
+ 3A^ + 3A
2A^
-a]
2A='
2\^
7.
Given
/1(A)
compute:
Ag(C)
B;p(C)
5
5
-2
-1
[-:
3
<?i?(C)
1
3"]
4;f(C)-B^(C)
[l7
Bn(C)-4n(C) i?^
-7J
^;?(C)
= 9
= 3
-3
-3I
^i(C)
^i(C)-B^(C)
B^(C)-A^(C)
E
[3
:]
[::;]
3-
--M
if
8.
If /4(A)
A-matrix,
and B(A) are proper n-square A-matrices of respective degrees p and q, and show that the degree of the triple product in any order is at least p+q.
186
LAMBDA MATRICES
[CHAP. 23
9.
(?i(A), /?i(A);
|> + 2X
(a)
.4(X)
A
,,
,
La%i
f
-A.2
,,
"
'
1
I
.
Fa
BiX)
a]
a^-aJ
+
A.
''''-'I
-aJ
-A^ + 2A + 2l
1
FA-I
-ll
(b)
AiX)
U.2A-1
A^-2A+1
A'*
''''-
J'
+ A^ +
[o
5A^+2A
4A^ +
x4
+ 4
1
A'^ 1
7A-2
+ 2
(c)
.4(A)
3A''
3A 2
3A^+ 2A
6A+
2A - A +
A^ + 2A''
A + A^ +
8A -
A^+1
B(X)
3A-1
A+
1
2A
A^
A-2
2A
A""
3AnA"-l
(d)
A"-l
A* + A^ +
2
A^-A
A 1
A(X)
A=-A^+l
A^ + A
B(X)
2
A^-2A-1 A+1
A
A^
A''
A +
+ A
A^-2A
A^ + A 1
A+
a1
2A* + A -
A-2
.
[\+i
Ans.
(a)
r 2a
'^^''-'-
a-i1
-A + 2j'
fo [l
o]
ij
Q
X
^=^'^^
ij'
''^'''+2
f-A
(b)
-A-l]
.,(A)^0;
f-A-l
-A +
ll
f-
3
1
0,(A)=
\^_^^^
,,(A)=
_J,
A^+3
|^_^
J,
-fjA-3
..(A).
|^_ 1
-A+1
(c)
-A+7
-16A + 14
fti(A)
-5A + 2
(?i(A)
A^-1
3A + 5
A
A"
2
-3A+2
-21A + 4
-2A+3
loA +
3
A-5
18A-7
2A-3
A-l
<?2(A)
A-6
5A-7
A^ +
1
,2
A
2
/?2(A) =
1
A+
3A^ + 6A + 31
(d)
-3A^-5A-16
A=^
3A'^-7A
-A^ +
2A'^
+ 8
Qi(X)
A -
3
1
- A -
4A -7
1
-2A -
- 2A -
81A +
Ri(X)
46
1
-12A 15A -
16
9
-85A 12A -
23
5 2
4A -
-9A -
-7A
+ 31
17A -
3A^+5A
<?2(A)
-A^-A-4
A"
3
2A^-4A
+ 3
A - 14
-2A + 6A - 6
-3A 7lA +
fl2(A)
2A - 2A -A +
ll
46
-12A llA +
8 6 4
-26A - 30 -15A - 30
4A - 4
16A - 16
2A +
10.
figCA) = -4^(0)
where B(A)=A/-C.
CHAP.
23]
LAMBDA MATRICES
187
11.
Given
[\^
3X+
^2 A^-3A
.
1
''*
A-2
.4(A) =
C(X)
=1fx
A +
2
^
A 1 A A+lJ
+ 2j
[A - 3 ^A
(a)
compute
B(A)- C(A)
(b)
A^'+SA^+sA
[A + 4
A +
A -
si
r-9 A +
S(A)
-A - gl
10
A='-A^-3A+2j
in
A-6
ij
|_13A-6
9A +
10J
Given
compute as
Problem 4
10
'1
,
A^
|_i2
^^1 i7j
[I
13.
!;]
Prove:
Hint.
If
A and B
are similar matrices and g(A) is any scalar polynomial, then g(A) and g(fi) are similar.
Show
If
first that
and B
S^
14.
Prove:
fi
= diag(Si,
is
g(B)
15.
diag(g(Si), g(B2)
g(B))
Hint.
16.
The matrix C is called a root of the scalar matrix polynomial B(A) of (23.9) if B(C) C is a root of B(A) if and only if the characteristic matrix of C divides B(A).
Prove: If Ai, Aj A are the characteristic roots of A and characteristic roots of f(A) are /(Ai), /(A2) /(A)Hint.
if
17.
Write
\- f(x)
c(xi- x)(x2=
x) ...(x^
- x)
so that
\\I
Nowuse
|*j/-4|
(xj-Ai)(::i-A2)...(*^-A)
and
- f(A)\ = c'^\xj - A\ \x2l - A\ ...\xsl -A\. c(xi -Xj) (x2-\j) ... (x^ -\j) = \- f(\j).
1
-1
3
1
18.
f(A) =
A'^
-2A+3,
given
2
1
19.
20.
is an invariant vector of
A of Problem
17, then
21.
Let A(t)
A(t)
= [a^,-()]
where the
polynomials
t.
Take
if it
were a polynomial Sfnomial with wit! constant coefficients to suggest the defi,
22.
Derive formulas
(a)
-r-\A(t)
for:
+ B{t)\;
(b)
-T-\cA(t)\.
where
dt
c is
a constant or c =[ciA;
(c)
^{A(t)dt
B(t)\;
(d)
dt
^A
'it).
n
Hint.
use
and differentiate
cv,-()
k=i ^^
a,-i,(0 6i,,-().
For
(d),
"^^
chapter 24
Smith Normal Form
BY AN ELEMENTARY TRANSFORMATION
(7)
The interchange
The multiplication
The addition
to the ith
/(X),
denoted by ^^(/(X));
the addition to the ith column of the product of /(X) and the ;th column, denoted by K^j (/(X)).
These
are the elementary transformations of Chapter 5 except that in (3) the word scalar has
been replaced by polynomial. An elementary transformation and the elementary matrix obtained by performing the elementary transformation on / will again be denoted by the same symbol. Also, a row transformation on ^(X) is effected by multiplying it on the left by the appropriate H and a column transformation is effected by multiplying ^(X) on the right by the appropriate K.
Paralleling Chapter
I.
5,
we have
is an elementary
ma-
trix in F[\].
II.
If
\A(\)\ = k
0,
III.
The rank
Two
re-square X-matrices i(X) and B(X) with elements in F[X] are called equivalent proff 2
'
...
K^ such
that
B(X)
P(\)-A(X)-Qa)
Thus,
IV.
Equivalent
mxn
In
Problems
and
2,
we prove
Let A(\) and B(X) be equivalent matrices of rank r; then the greatest common divisor of all s-square minors of A(\), s r, is also the greatest common divisor of all ssquare minors of B(\).
In
Problem
VI.
3,
we prove
r
to the
188
CHAP.
24]
189
A (A)
/2(A)
(24.2)
N(\)
...
0_
where each
/^(A) is
monic and
of rank
(i
1, 2, ...
1).
common
divisor of
s-square minors of A(\), s ^ r, is the greatest common divisor of all s-square minors of A^(A) by Theorem V. Since in N(\) each f^(X) divides /^^j(A), the greatest common divisor of all ssquare minors of N(\) and thus of A(\) is
(24.3)
g,(A)
/i(\)-f2(A)-...-4(A),
to
(s
= l,2,
.r)
diag(/-i(A), /2(A)
/^(A), 0,
and to
/Vi(A)
diag(Ai(A). AsCA)
h^iX),
O)
By
(24.3),
gs(A)
/i(A)-/2(A)-...-/s(A)
Ai(A)-A2(A)-...-As(A)
so that
Now
(24.4)
U\)
r)
h^W,
in gen-
eral, if
we
define
go(A) =
then
s:s(^)/s-i(^)
4(^)
h^(\).
(s
1, 2
and we have
VII.
A +
+ 3
Example
1.
Consider
.4(A)
A'+2A^+A
A^ +
A^+A'^ + A
A^ +
2A%3A'^+A
3A^ +
3A +
2A +
6A
+ 3
It
common divisor
A(X) is
gi(A) =
1,
the greatest
= A^
common
=
=X,
and g3(A)
/i(A)
= i\A(X)\
+X^
Then, by (24.4),
/2(A)
gi(A)
1,
g2(A)/gi(A)
A,
/3(A)
g3(A)/g2(A)
A^+A
10
A'(A)
A^+A
For another reduction, see Problem
4.
polynomials
f^(X),f-2{X)
/V(A)
form of A(X) are called invariant factors of ^(A). If 4(A) = fk(X) = 1 and each is called a trivial invariant factor.
k r,
then
/i(A) = /2(A) =
...
As
we have
190
[CHAP. 24
VIII.
Two
re-square X-matrices over F[X] are equivalent over F[X\ if and only if they
ELEMENTARY
(24.5)
be expressed as
f^(\)
{p,(X)\'^i^\p^(\)fi^...\PsMi^^^.(i = i.2
r)
where pi(A), p2(X), ., PgCX) are distinct monic, irreducible polynomials of F[X\. q^j may be zero and the corresponding factor may be suppressed; however, since r-l;/ = l,2, .s). /i+i(A), qi+t,j^gij. (f = 1.2
Some
fj
of the
(A)
divides
The
of A(X).
factors
!p-(X)S'^*i
which appear
Example
2.
field
10
10
(A.-1)(A^+1)
'
(A-1)(A^ + 1)^A
(A-l)^(X" + l)^A^(A"-3)
The rank
is 5.
The invariant
/i(A)
=
1,
factors are
/2(A)
=
1,
/3(A) =
(A-1)(A^+1),
U(k)
(\-l)(X^+lfX.
are
/5(A)
(A-l)''(A.''+lf A^(A''-3)
(X-l)^
A-1,
A-1.
(A^'+l)'',
(A^+l)^,
(A'^+l),
A^
A,
A""
- 3 each ele-
distinct;
in the listing
Example
3.
(a)
Over the
Example
2 are
ele-
A-1,
A-1,
(A^lf,
(A^'
l)''
(A^'
l),
X',
A,
X-\fz,
X+x/l
since
(6)
A^-3
can be factored.
field the invariant factors remain
(A-1)^,
A-1,
(X-if,
A-1,
X-i,
{X + if,
A^,
A,
(X + if,
X + i.
(X-if.
X-\/l,
X+\/l
The
Invariant factors of a A-matrix determine its rank and its elementary divisors;
conversely, the rank and elementary divisors determine the invariant factors.
Example
4.
be
A,
(A-l)^
(A-^)^
A-l,
(A+l)^
A+
Find the invariant factors and write the Smith canonical form.
To form
i.e.,
if
CHAP.
24]
191
To form
f^i\),
common multiple
and
A^(X-lf(A+l)
1.
Repeating, /sCX) =
A(X-l).
is
Now
\(XN(\)
1)
A^(A-lf (A + 1)
A=(A--lf(X+lf
Since the invariant factors of a A-matrix are invariant under elementary transformations, so Thus,
IX. Two re-square X-matrices over F[\] are equivalent over F[X] have the same rank and the same elementary divisors.
if
and only
if they
SOLVED PROBLEMS
1.
Prove:
If P(\) is a product of elementary matrices, then the greatest common divisor of all ssquare minors of P(\)- A(X) is also the greatest common divisor of all s-square minors of A(\).
It
is
necessary only
to
consider P(A)-.4(A) where P(A) is each of the three types of elementary matrices H.
Let R(\) be an s-square minor of .4(A) and let S(A) be the s-square minor of P(X)- A(\) having the same position as R(X). Consider P(A) = H^j its effect on ^(A) is either (i) to leave R(X) unchanged, (u) to interchange two rows of R(X). or (iii) to interchange a row of R(X) with a row not in R(X). In the case of (i), S(X) = R(X); in the case of (ii), S(A) = -R(X); in the case of (Iii), S(X) is except possibly for sign another s-square minor of ^(A).
;
Consider P(X)
= H^ (k);
or S(X) = kR(X).
(i) to leave R{X) unchanged, (U) to increase one of the rows of R(X) by /(A) times another row of R(X). or (iii) to increase one of the rows of R(X) by /(A) times a row not of R(X). In the case of (i) and (li), S(X) = R(Xy, in the case of (iii),
Finally, consider
on ^(A) is either
S(X)
R(X) f(X)-T(X)
.4(A).
Thus, any s-square minor of P(X)' A(X) is a linear combination of s-square minors of /4(A). If g(A) is the greatest common divisor of all s-square minors of .4(A) and gi(A) is the greatest common divisor of all s-square minors of P(A)--4(A), then g(A) divides gi(A). Let B(X) = P(XyA(X).
Now
.4(A) =
P"^(A)-
fi(A)
gi(A) = g(A).
2.
Prove:
If P(X) and Q(X) are products of elementary matrices, then the greatest common divisor of all s-square minors of P(A) -^(A) -QiX) is also the greatest common divisor of all s-square minors otA(X).
Let B(A) = P(A)M(A) and C(A) = B(X) Q(X). Since C'(A) = Q'(X)- b'(X) and Q'(X) is a product of elementary matrices, the greatest common divisor of all s-square minors of C'(A) is the greatest common divisor of all s-square minors of s'(A). But the greatest common divisor of all s-square minors of C'(X) is the great
est
common divisor of all s-square minors of C(A) and the same is true common divisor of all s-square minors of C(A) = P(X)- A(X)- Q(.X) is
square minors of ^(A).
for B'(A)
the greatest
common
divisor of all s-
192
[CHAP. 24
3.
Prove:
[a^j(\)]
of rank
/2(A)
N(X)
L(A)
where each
fi(\) is
monic and
(i
1, 2,
...,r-
1).
The theorem is true for A(\) = 0. Suppose A(X) ^ 0; then there is an element a::(K) of minimum J By means of a transformation of type 2, this element may be made monic and, by the proper interchanges of rows and of columns, can be brought into the (l,l)-position in thematrix to become the new aii(A).
7^
degree.
(a)
Then by transformations
of type
3,
B(A)
Let
ai--(A)
be an element in the
first
row
an (A).
By Theorem
I,
ay (A)
where
type
ri^-(A) is
g(A)an(A) + ry(A)
Prom the
/th
the first column so that the element in the first row and /th column is
2,
now
ri,-(A).
By a transformation
it
of
replace this element by one which is monic and, by an interchange of columns bring
into the
(i).
(l,l)-position as the
new oii(A).
row and the
If
of A{X),
we proceed
to obtain
Otherwise, after a finite number of repetitions of the above procedure, we obtain a matrix in which every
element
If
in the first
first
we proceed
to obtain
(i).
Otherwise, suppose
a,-,-(A)
"J
is
not divisible by
aii(X) - qix(X)- a^^iX) and aij(A) = gijXA)- aii(A). From the ith row subtract the product of gii(A) and the first row. This replaces aii(A) by and aij (A) by aij{X) - qii(X) a^jiX).
On (A).
Let
Now
add the
jth
row
to the first.
aij (A)
gij(A)U- ?ii(A)!aii(A)
remainder) for aii(A). This procedure is continued so long as the monic polynomial last selected as aii(A) does not divide every element of the matrix. After a finite number of steps we must obtain an a 11(A) which
(i).
we
/2(A)
C(A)
Ultimately,
form.
Since /i(A) is a divisor of every element of B(A) and /2(A) is the greatest common divisor of the elements
of S(A), /i(A) divides /2(A).
Similarly,
it
is found that
CHAP.
24]
193
4.
Reduce
X + ^(A)
A='
A+
+A
2
A + 3
+ A
1
+ 2A= +
A +
a""
2A''
+ 3A^
+A
A^
3A +
X" + 2
A +
a""
+ 6a + 3
to its
It
necessary to follow the procedure of Problem 3 here. The element /i(A) of the Smith normal common divisor of the elements of A(\); clearly this is 1. We proceed at once to obtain such an element in the (l,l)-position and then obtain (1) of Problem 3. After subtracting the second column
form is the greatest
from the
first,
we obtain
1
A+
a"
1
A+
2A +
3A^ +
1
"
A + A
A +
'4(A)
A^
A% A
2A +
3A%A
6A
+ 3
A%A
2A^ + 2A.
A+
1
A^ +
A + A
2A + 2A_
Lo
S(A)J
Now
the greatest
common
Then
A+A
2A^ + 2A and this is the required form. 2A^ + 2A
A" + A
5.
Reduce
A -4(A)
1
A + 2
A''
A""
+ A
A^
2A
:
A=- 2A
to its
- 3A +
A= + A -
We
AA=
A +
AA
A +
2"
^(A)
"l
^\^
AA
A^-3A
"
+ 2
A=+2A A^+A-3_
"l
A A +
1 1
A+1
1
-A -
-V-
-1
-A -1
A +
-A'
A+1
^23(1); Hs2(A +
l),
A+1
A+
-A
l),
A(A+i)
.Ksi(-A -2);
^2i(-A), ^3i(-A +
2);
/f2i(-A +
HqsC-I);
SUPPLEMENTARY PROBLEMS
6.
7.
Show
that
H.jK^j
H^{k)K^a/k)
H^j{f{\))
Kj^{-f(,X))
/.
Prove:
and only
if |.4(X)| is
a non-zero
constant.
194
[CHAP. 24
8.
Prove:
to / by
elementary transformations
if
is a non-zero constant.
9.
Prove:
if
and only
if /1(A) is
a product of
elementary matrices.
10.
A(Xr^
given
Q(X)-P(X)
A+
-4(A)
A+
A+2
A+2
A+1
-A-1
-A^'-A +
A^ + 2
l
Hint.
See Problem
6,
Chapter
5.
Ans
A-1
-A
A^ +
2A-1
-A^ - 3 A - 2
A+
11.
to its Smith
normal form:
1
'-\>
A
>?
A%A
_2A''
+ 2A
A-1 A^-1
2A^ - 3A + 2
A
A^
A
A= +
\^-2A
A
1
>+
(b)
2A
-A^A
1
'Xy
A-1
_ A^
A^
A^
A^- 2A+
2A=
- A=
A+
l_
A=
I
1
A+1
2A-2
+
1
A1
A^ A"
'-Xj
00
1
a""
A% A
(c)
a""
2A''
- 2A +
a''2A''
a''
2A
A + 4
A""
- A -
3\^
-7A
+ 4
-2A=
A=
- A
A^A""
A^ + 2
A^ +
(d)
2A''-2A''
1
- 2A^
1
A^1 1
'-\j
A +
1
A" + A
\^ + l
A^ +
A""
+ A -
A^ +
A
1
A +
A"
A"1
A^
+A
A"
A=
A= +
A%
+ 3
A^ +
A -
A^
A-1
1
a""
A%A^
+
1
A-^
A= +
2
A^
i_
'a=^
3A
4A 2
A= + 3
A-2
(e)
A-1
4A+3
A^ +
A+
A-2
3A + 2
1
3\ +
A^ +
2\ + 2
2A
6A
+ 4
A%
'-VJ
6A 1
A ^+ 2A + 3J
D
"a^
if)
\^
-2\
A+
A=( A-
ifOV+l I
12.
for
each of the
CHAP.
24]
195
13.
Find
its
+ X^
(X^-lf. (X^-lf
X''
(c) (d)
X" + X,
A^ + A,
(a)
(b) (c) (d)
X^,
A'
A''
+ 2X'' A,
2A^+
A''
X"^
X.
+ 2A +
A,
A''+A''+2A'' + 2A +
A''
+A
Ans.
A^,
(A
-if, A-1,
A-1
(A
A+1, A+1,
A, A, A, A,
-if,
(A
-if
A^
A.
A^'
(A^'+lf,
l,
A-1
(A^
A,
A^ +
(A^+lf,
+ lf.
A+1
whose rank
is six.
14.
What are
its invariant
A+
A,
1,
A+2, A +
3,
A+
(c) (d)
(A
A^,
-if,
(A
A,
-if,
(A
A^
A=,
(A1,
1,
1)2,
A-1
A(A + l)(A + 2)(A+3)(A + 4)
A^
(A + 2f,
(A+2f
.4ns.
(a)
(b)
1,
1,
1,
1,
1,
A,
A,
A^cA-l),
A^A-lf
+ 2f,
A''(A
(c)
1,
1,
1, 1,
A-1, (A1,
(d)
A(A + 2f,
2f
15.
!Dxx
(0 + 2)^1
+ (D + i)x2
- (D-l)xs
(D + l)x2
+
= = =
(D+ 2)X2
t
e*
where
xj, x^,
xs are
unknown
and
at
4.
Hint.
system is
AX
Now
the polynomials in
D D+2
D+1 D+1
Xl
'o'
-D + 1 D+2
X2
3_
e\
combine as do the polynomials in A of a A-matrix; hence, beginning with a that of Problem 6, Chapter 5, and using in order the elementary transformations:
of
/I
K2i(D +
l),
+1).
Hs^i-kD),
Hs(2). Ks(l/5)
obtain
-1
^(5D2+12D + 7)
^-^(SD^ + TD)
PAQ
5D + 6
-4
.4
-1
-jD
\D
A'l
-5D2-8D-2 -D 4D + 2
the Smith normal form of A.
i(5D2+7D + 2)
d=+|d + | 5 5
Use the
get
yi = 0,
linear transformation
QY
to carry
AX
into
AQY
ys
=
and from
PAQY
N-J
PH
y2=-4e*
X
=
(D2+|D
+ i)y3
6e*-l
and
Finally, use
i
=
QY
3Cie-**/5 + it _ |,
^2
12Cie-'^*/5 + Cje"* - i,
Xs
-2Cie-**/s+ ie*+ i
chapter 25
The Minimum Polynomial of a Matrix
THE CHARACTERISTIC MATRIX \I-A
I.
of an -square matrix A over F is a non-singular A-matrix having invariant factors and elementary divisors. Using (24.4) it is easy to show
If
is a
1,
diagonal matrix, the elementary divisors of \I- D are its diagonal elements.
In
Problem
II.
we prove
A and B over F are similar over F if and only if their charsame invariant factors or the same rank and the same elemen-
Two
re-square matrices
Prom Theorems
HI.
and
II,
we have
A over F
is similar to a diagonal matrix if and only if
An
re-square matrix
\I- A
invariant factors of \I -
Let P(X) and Q(\) be non-singular matrices such that P(X) (\1-A)-Q(\) mal form
diag(fi(X), 4(X),
...,
Smith nor-
j^a))
Now
\P(\).(XI-A).Q(\)\
Since <^(X) and
IV.
/^(A) are
\P(X)\.\Qa)\4,a)
|
=
=
1
A(A)-/2(A)-.../na).
monic,
P(A)
1 |
(?(A)
and we have
is the
iant factors of
the Cay ley-Hamilton Theorem (Chapter 23), every re-square matrix A satisfies its characteristic equation <^(A) = of degree re. That monic polynomial m(X) of minimum degree such that m(A) = is caUed the minimum polynomial of A and m(X) = is called the minimum equation of A. (m(X) is also called the minimum function oi A.)
5.^
The most elementary procedure for finding the minimum polynomial of i following routine:
(I)
(II)
involves the
If
If
If
4 = A ^
oq/,
then m{X) =
X-ao:
i^ = a^A +
aj
then
m(A) =
A^
- a^X - uq
,
(III)
A^ ^ aA + hi
for all a
and b but
're(A)
A''
=
a^)?
a^A"^
+ a^A + a^I
then
A^
- a^X - Oq
and so on.
122
Example
1.
2 2
2
1
196
CHAP.
25]
197
Clearly A
oqI
is
impossible.
8 9
Set
'9
8
8
8
8 9 =
'122"
i
2
2
1
'100"
+
oo
1 1
2
1
Using the
first
first
9 =
|
Oj^
+ ao_
then
8 =
2ai
a^= i and oq
A^ = iA +
51
= 5.
After (and not before) checking for every element of A^, we conclude that and the required minimum polynomial is X^ - 4X - 5.
In
Problem
V.
if
If
2,
we prove
if
is
and only
3,
any re-square matrix over F and /(A) is any polynomial over F, then f(A) = the minimum polynomial m{\) of A divides /(A).
In
Problem
VI.
we prove
ra-square matrix
fn(X) of
of
of
^(A), of the
we have
minimum polynomial
and
Vin. The characteristic matrix of an ra-square matrix A has distinct linear elementary if m(A), the minimum polynomial of A, has only distinct linear factors.
NON-DEROGATORY MATRICES. An
IX.
re-square matrix
A whose
if
minimum
otherwise, derogatory.
We have
just one non-trivial
An
re-square matrix
is
non-derogatory
and only
if
A has
similarity invariant.
It
is also
easy
to
show
/rei(A)
X.
If
is the least
common
multiple of ^^(A)
matrices.
in
Let gi(X),g2(X),
..gnCA)
F[A] and
= 1,2,
'^
...,m).
Then
B = diag(^i, ^2
acteristic and
^n)
has
^(A)
fgi(A)!
ig2(A)P-ig^(A)!
,02
as both char-
minimum polynomial.
g(X)
^(A)
a" +
a-iA"'^
a^X +
Oq
We
(25-2)
= [-a],
if
g(X)
X+a
and for
re
>
198
[CHAP. 25
1 1
(25.3)
C(g)
1 1
Oq
-%
-e
-'^n-3
-an-2
-Gj
In
Problem
XII.
4,
we prove
its character-
istic
The companion matrix C(g) of a polynomial g(\) has g(\) as both and minimum polynomial.
(Some authors prefer to define C(g) as the transpose Both forms will be used here.)
See Problem
It
5.
is
easy to show
XIII.
If /4 is
then
O'
...
(25.4)
/ =
[o],
if
ra
1,
and
=
... ...
if
ra
>
has
f^{\)
SOLVED PROBLEMS
1.
Prove: Two ra-square matrices A and S over F are similar over F if and only if their characteristic matrices have the same invariant factors or the same elementary divisors in F[X\-
equivalent.
Suppose A and B are similar. From (i) of Problem 1, Chapter 20. it follows that Xl - A and XI -B are Then by Theorems VHI and IX of Chapter 24, they have the same invariant factors and the same
Conversely,
let Xl - A and Xl - B have the same invariant factors or elementary divisors. Chapter 24 there exist non-singular A-matrices P(X) and Q{X) such that
elementary divisors.
Then by
Theorem
VIII,
P{X)-{XI-A)-Q{X)
or
Xl - B
(i)
P{X)-{XI-A)
(Xl-B)-Q
(X)
Let
(ii)
P{X)
Q(X)
= = =
(X/-S).Si(A) + Ri S2(X)-(A/-S) +
S3(A)-(A/-.4) +
/?2
(iii)
(iv)
Q'\X)
/?3
A.
Substituting in
i ),
we have
(A/-B)-S3(A)-(A/-4)
=
(Xl-B)-S^(X)-(Xl-A)
or
(V)
+ Rj^(Xl-A)
(XI-B)Rs
{Xl-B){S^{X)-Sa{X)\{Xl-A)
(A/-B)Rg
R^{XI-A)
CHAP.
25]
199
Then
(vi)
S^(X)
Sg(A) =
and
(Xl-B)Rg
left
R^(\I-A)
member
is of degree at
member
most
Using
I
(ill), (Iv),
and
(vi)
= =
=
Q(^)-Q~\\)
Q(\)\Ss(\)-(Xl-A) + Rs\
Q(X.)-S3(X).(X!-A) +
Q(X)
Ss(X)
\S2(X)-(XI-B) + R^lRg
= =
or
(vii)
(XI
- A) + S2(X)-(Xl-B)Rs + R^Rs
- R^Rs
and
/ =
RxUXl - A)
(vii) is of
Now
Q{X)-Sq(X) + S^(X)R-^ =
right
R2R3
degree zero
in
X while the
member
is of
Thus, Rg = R^
=
XI - B
Since A.B.R^, and R^ are free of A. to be proved.
/?i(A/-^)/?2
ARi2 - R^AR^
=
/Ji =
R^: then XI - B
2.
Prove:
only
if
If
is
F and
if
and
A divides
By
q(X)-n(X) +
r(A)
and then
f(A)
=
q(A)-n(A) + r(A)
r(A)
Suppose f(A) = 0; then r(A) = 0. Now if r(A) ji 0, its degree is less than that of m(X), contrary to the hypothesis that m(A) is the minimum polynomial of ^. Thus, r(A) = and m(A) divides /(A).
Conversely, suppose /(A) = ?(A)-m(A).
Then f(A)
= q(A)-m(,A) = 0.
3.
Prove: The minimum polynomial m(X) of an ra-square matrix A which has the highest degree.
Let
gri-i(X)
(n
- l)-square minors
of XI
-A. Then
\XI-A\
and
cf,(X)
g-i(A)-/(A)
adj(A/-/l)
g_i(A)-S(A)
is 1.
Now
(A/-^).adj(A/-^)
(f)(X)-I
sothat
=
(X/-A)-gn_^(X).B(X)
or
gn-i(A)-/(A)-/
(i)
(A/-^).S(A)
fn(X)-I
0.
Then XI
-A
is a divisor of f^(X)-I
200
[CHAP. 25
By Theorem
(ii)
V, m(\) divides
fn(X-)-
Suppose
fn(\)
=
g(X.)-m(\)
Since m(A) =
0,
XI
-A
is a divisor of
m(X)
/,
say
m(\)-l
(\I-A).C(\)
(ii).
(\I-A)-B(\)
and
fn(X.)-l
9(A)-m(A)-/
9(A)
5(A)
?(A)
C(A)
and, by
Now
=1
(ii),
m(A)
as was to be proved.
4.
Prove: The companion matrix C(g) of a polynomial g(\) has g(A) as both minimum polynomial.
its
characteristic and
of (25.3) is
-1
A -1
^1
"2
an-2
2
A + a^_j
third column,
,n-l
.
the first column add A times the second column, column to obtain
To
A times the
-1
A
G(A)
A
g(A)
-1
Oi
a^
a_2
A+a^-l
Since
|G(A)| = g(A),
the characteristic polynomial of C(g) is g(A). Since the minor of the element g(A) common divisor of all (n-l)-square minors of G(\) is 1. Thus, C(g) is non -derogis g(A).
minimum polynomial
5.
of
g(\)
A^ + 2A
A=
+ 6A -
is
5
10
10
10
1
or, if
preferred.
10 0-6 10 1
5-61-20
10-2
1
CHAP.
25]
201
SUPPLEMENTARY PROBLEMS
6.
+ \^
- 2\ -
(d)
(e) (/)
X""
2X!'
- X^
+ 2X
(b)
(c)
(X^-i)(X+2)
]?()? + I)
(X-lf
1
(A+2)(A^-2A^ + 4A-8)
1
'o
1
o"
1
Ans.
(a)
1
(b)
(c)
1
--3
2
1
-1
8
o"
4
1
-2
1 1 1
o'
(d)
1
(e)
1
(/)
1
-2
0-10
A=
[a^]
for
16
7.
which (011-022)^ +
g(X)
).
4012021/0
is non-derogatory.
8.
4 to diag (1
9.
For each of the following matrices ^. (i) find the characteristic and minimum polynomial and
non-trivial invariant factors and the elementary divisors in the rational field.
1
1 1
(ii) list
the
2
(c)
1
1
2
1
1
1
2
2 2
(a)
2
3
(b)
id)
2 2
2
1
(e)
-2 -1 -3
1"
_1
'211
4
if)
2-3
(g)
1-3
ih)
-6 -2 -3 -2 -3 -1 -1 -2
(a)
-1 -6 -3 -6 -3 -3 -4 -3
2
-5 -2
Ans.
<^(A)
m(X)
m(,\.)
=
=
(A-l)(A-2)(A-3);
X^;
if. = e.d
i.f.
i.f.
(A-l)(A-2)(A-3);
(A-3)
(b) <^( A)
=
= =
= A=
(c)
(^(A)
(A-if(A-2);
(A-1) (A-2)
;
A-1, (A-i)(A-2)
m(X)
(/)(A)
e.d.
id)
(A+ihA-5)
(A+l)(A-5)
A=
i.f.
m(A)
,^(A)
ie)
= =
e.d.
- 4A=
i.f.
A,
A''-4A
m(A) = A^ - 4A
c^(A)
= =
e.d. A, A,
AA+l,
A(A-H)^(A-1)
i.f.
A^-A
if)
m(A)
(/)(A)
A(A^-l)
A2(A+1)2
;
e.d. A,
i.f.
A,
A(A+l)2
is)
m(A) =
oS(A)
A(A+1)2
e.d. A. A,
(A+D^
(A-2) (A""-
A-
2)'';
i.f.
(h)
m(X)
A^-A-2
VIII.
e.d.
10.
11.
m(D)
= diag(m(Bi), m(B^)
requires
m(Si) = miB^) =
m^X)
divide m(X).
202
[CHAP. 25
12.
13. If
A Show
is
n-square and if k is the least positive integer such that A = 0, .4 is culled nilpotent of index that A is nilpotent of index k if and only if its characteristic roots are all zero.
A:.
14.
Prove: (a) The characteristic roots of an n-square idempotent matrix A are either
(b)
or
1.
The rank
of
is the
1.
15.
Prove: Let
A.B.C.D
F
it
with C and
non-singular.
matrices
P and Q such
in
PCQ
= A.
PDQ
and only
if
R(X) = \C - A
XD-B
have the
or the
Problem
16.
Prove:
If
s,
slmA.
A
of
17.
Problem
9(h).
18.
^ 0.
The theorem follows from Theorem VII or assume the contrary and Then {A -\^I)q(A) + rl = and A -X^l has an inverse.
w:rite
m(A) = {\-X^)q{X) +
r,
19.
Use
-4
[o
^ to
show
that the
minimum polynomial
;]
20.
Prove:
If
g(\) is any scalar polynomial in A, then g(^) is singular if and only if the greatest
common
divi-
minimum polynomial
i-
oi A, is d(k) ^ 1.
(ii)
1 1
and use Theorem V, Chapter 22. and use Theorem IV, Chapter 22.
expressible as a polynomial in A of
g(.4) is
is
22.
Prove:
If
the
minimum polynomial m(A) of A over F is irreducible in F\X\ and is of degree A with coefficients in F of degree < s constitutes a field.
s in A,
then the
23.
Let A and S be square matrices and denote by m(A) and (A) respectively the minimum polynomials oi and BA. Prove:
(a) m(A) = n(A)
(&) m(A)
AB
.A
Hint.
B.m(^S)./l
(B,4).m(B.4)
and
A-n{BA)-B
m >
=
(AB)-n(AB)
0.
24.
Let A be of dimension
mxn
n,
AB
and BA.
Show 0(A)
25.
Let
X.I
Prove:
If
A and B com-
mute, then
an invariant vector of B.
of
chapter 26
Canonical Forms Under Similarity
THE PROBLEM.
In
Chapter 25
matrices A and
In this chapter,
it was shown that the characteristic matrices of two similar n-square R'^AR over F have the same invariant factors and the same elementary divisors. we establish representatives of the set of all matrices R'^AR which are (i) sim-
(ii)
put into view either the invariant factors or the elementary divisors.
These matrices,
ical matrix
They correspond
r
to the
canon-
'[^
o]
all
mxn
matrices of rank
under equivalence.
Let A be an ra-square matrix over F and suppose first that its characteristic matrix has just one non-trivial invariant factor /(\). The companion matrix C(f ) of /(A) was shown in Chapter 25 to be similar to A. We define it to be the rational canonical form S of all matrices similar to A
is
...,
diag
(1, 1
1,
f.(X), f.^^iX),
f^(X))
1
f^(\.)
of degree s^,
(i
= j.j +
n).
We define as the
ra-
A
C(/))
is similar to
S
that
To show
D^ = diag
(1, 1
.^^
D).
to
By a sequence
of
interchanges of two rows and the same two columns, we have S similar
diag (1,1
l.fja), fj^^(\)
f^iX))
We have proved
I.
Every square matrix A is similar to the direct sum (26.2) of the companion matrices
Example
1.
X+1.
fg(X)
A%L
/io(A)
A^ +
2A +
Then
1
1 1 1
C(fa)
= [-1],
C(f9)
C(fio)
-10
10
1
-10
203
-2
204
[CHAP. 26
and
-1
1
1
-1
1
diag(C(/g),C(/9),C(/io))
1 1
1 1
-1
Is the required form of
-2
Theorem
I.
Note.
immaterial.
The
order in which the companion matrices are arranged along the diagonal is
Also
-1 -1
1
1
-1
1 1 1
-2
1
1
using the transpose of each of the companion matrices above is an alternate form.
Let the characteristic matrix of A have as non-trivial invariant tors the polynomials f^(X) of (26.1). Suppose that the elementary divisors are powers of t tinct irreducible polynomials in F[X]: pi(A,), paCX), ..-, PjCA). Let
(26.3)
fi(\)
facdis-
ipi(X)l^"!p2(A)!^2^..{p,(X)!'^is(pi(A)i'"''iP2(A)i'"''...iPi(
(i
= i,i+'^
n)
where not every factor need appear since some of the ^'s may be zero.
C{p?^^) of any factor present has
C{{^) is similar to
diag (C(p^?"
).
Cipl-'i
Cip^^ti))
We have
Every square matrix A over F is similar to the direct sum of the companion matriII. ces of the elementary divisors over F of XI A.
Example
2.
1,
.
A+1, X+1,
A^-A,+
l,
(X
-A+1)
[-1],
[-1],
[-:
.;]
[.:
\
1
-3
II is
CHAP.
26]
205
-1
-1
1
-1 -2
1
-1
1 1 1
1
-1
-3
q =
1,
if
q>l,
C(p)
M
C(p)
..
(26.4)
Cq(P)
=
C(p)
M
C(p)
where
is a matrix of the
same order as C(p) having the element 1 in the lower left hand corner The matrix C^^Cp) of (26.4), with the understanding that Ci(p) = C(p) is
Note that
in (26.4), there is a
continuous line of
N
Ca(P)
C'ip)
C'ip)
C'ip)
C'ip)
where A' is a matrix of the same order as C'ip) having the element 1 in the upper right hand corner and zeroes elsewhere. In this form there is a continuous line of I's just below the diagonal.
Examples. Let
IpiX.)]'^ = (\^
2\-
1)'^.
Then
1 1
C(p) =
M,
M=y
and
-2
1 1
-2
1 1 1
CqiP)
-2
1 1
1
-2
206
[CHAP. 26
In
Problem
1, it
is
shown
to
C(p^) and
may be substituted
for it in the
II.
We have
III.
of
\I - A.
1,
Example
4.
2,
(A+l)^is
-11
and that of (A^-A + 1)^ is
10
1
0-11
nils
-1
-1
-1
1
-1
1
-1
1 1
-1
1
1
-1
The use
Theorem
is
some-
what misleading.
was used
Theorem
III
is
A are necessary. But this is, of course, true Theorems n and III. To further add to the consometimes called the rational canonical form.
Let the elementary divisors of the characteristic matrix of A be powers of linear polynomials. The canonical form of Theorem III is then the direct sum of hypercompanion matrices of the form
i
1 1
..
.
(26.5)
C^(P)
..
. .
1
<^.,
!p(A)!
(A-a^)^
III is
2.
or classical
be the field in which the characteristic polynomial of a matrix A factors Then A is similar over ^to the direct sum of hypercompanion matrices of the form (26.5), each matrix corresponding to an elementary divisor (X-a^)'^.
5.
Let
Example
-A
be:
A-i, X + i, (X-i)
(X+i)
CHAP.
26]
207
of
is
-1
i
1
i
-i
-i
An
re-square matrix
and only
if
the elementary
divisors oi XI -
are linear polynomials, that is, if and only if the is the product of distinct linear polynomials.
minimum polynomial of
See Problems 24.
In concluding this discussion of canonical forms, that a reduction of any re-square matrix to its rational canonical form can be made, at least theoretically, without having prior knowledge of the invariant factors of \I - A. A somewhat different treatment of this can be found in Dickson, L. E., Modern Algebraic
will be
shown
Benj.H. Sanborn, 1926. Some improvement on purely computational aspects Browne, E. T., American Mathematical Monthly, vol. 48 (1940).
ries,
Theoin
is
made
We
in to
If A is an re-square matrix and X is an re-vector over F and if g(X) is the monic polynomial F[\] of minimum degree such that g(A)-X = 0. then with respect to A the vector X is said belong to g(X).
If,
with respect to A
the vector
belongs
to g(X) of
degree
p,
vectors
X,AX.A''X
2
/4^
as its leader.
-6 -3 -2
= X.
Example
6.
Let A
1 1
The vectors
Then
2
AX
= [2,
1, l]'
while
(A
-I)X=0
Y;
thus,
and
A"
[1,0, -ij',
If
AY
=[-1,0,1]'
(A+I)Y=0
m(\) is the minimum polynomial of an re-square matrix A, then m(A)- X = for every reThus, there can be no chain of length greater than the degree of m(A). For the matrix of Example 6, the minimum polynomial is A^ - 1.
vector X.
non-singular matrix
(26-6)
Let S be the rational canonical form of the re-square matrix A over F. R over F such that
R'^AR
diag(C.,C.^^
C)
where, for convenience, Q/^) in (26.2) has been replaced by C^. companion matrix of the invariant factor
/iCA)
We
A
+
shall
assume
that
C,-
the
A^i
c^ ^_X'i-
c,.
-^i2
.
Ci
.
-^i3
208
[CHAP. 26
Prom
(26.7)
(26.6),
we have
AR
RS
Rdiag(Cj,Cj^^
C)
so that R^
Let R be separated into column blocks have the same number of columns.
R-,R-^
(26.7),
R^
n)
From
=
AR
and
A[R..R.^^
RJ
IRj.Rj,,
/?Jdiag(C,,..C^.^^
C)
iRjCj.Rj^^C.^^
(i=j,j +
l
R,,CJ
AR^ = R^Ci.
Denote the
si
n)
RiCi
Since
, Risi\Ci
=
st
^__
[Ri2,Ri3, ,Risj,> -
Rik^ik]
=
ARi we have
(26.8)
A[Ri^,Ri^, -.Risi]
[ARi^.ARi2,. ..ARisil
RiCi
Ri2 =
ARn,
Risi = A^'-'Ri.
and
(26.9)
-|>i.%
-i =
K
^Ris,
we obtain
=
c^kA^-'Ri^
l
A'iRu
+ cirDRir =
or
(26.10)
(A^^ + cis.A^'i'^
definition of C^ above, (26.10)
...
a^A
=
Prom the
(26.11)
may be
written as
fi(A)-Ri^
Let Rit be denoted by X, so that (26.11) becomes fj(A)-X: =0; then, since X-,AXj, A ^ X^ are linearly independent, the vector X^ belongs to the invariant factor f^iX). Thus, the column vectors of fi^ consist of the vectors of the chain having X^, belonging to /^(X),
Xj^
as leader.
To summarize:
chains
Xi,AXi
whose leaders belong
satisfy the condition
A^^"'Xi
i
^ s. n
(i=j,i +
n)
/ (A),
fj.AX.)
/^(X)
<
s,-
s- ^^ +1
J
...
We have
VI.
let let
X^ be
of
maximum length
of
for all
ra-
vectors over F\
(it)
of a chain
maximum length (any member of which preceding members and those of for all n^_^
)
vectors over
(iii)
F which
let
X^^^he
is linearly
the leader of a chain -n-2 ^ maximium length (any member of which independent of the preceding members and those of and fi'^.i)
F which
and_i;
and so on.
Then,
for
CHAP.
26]
209
AR
Example
7.
Let
1 1
2
3
4X
= [l,
1, l]',
^^A: =
arly
Independent while
/gCX) =
belongs to
Thus,
{A-bA^+I)X
= Q
and
A'
R
we
find
1
[a;.
4A',4=A']
15
1
-3
3
5
14 25 30
6-5
-1
1
AR
[AX.A'^X^a'^X]
1
1
0-1
and
R ^AR
=
1
Here A is non-derogatory with minimum polynomial m(A) irreducible over the rational Every 3-vector over this field belongs to m(X). (see Problem 11), and leads a chain
three.
field.
of length
S.
R'^AR =
Example
8.
Let
Tak
A"
= [l,-l,o]';
then ^ A' =
AT
and
AT
belongs to
A-
1.
NowA-1
cannot be the minimum polynomial m(A) of A. 11). and could be a similarity invariant of A.
Next, take
It is,
F=[l,0,0]',
The vectors
Y, -4? = [2,
1, 2]',
^^^ =[ 11,
8, s]'
are linearly
to
independent while
A^Y
= [54,43,46]'
= 5A'^Y +
SAY -77.
Thus, Y belongs
m(A) =
A - 5A - 3A +
7 = <^(A).
The polynomial
A-
in fact,
unless
the first choice of vector belongs to a polynomial which could reasonably be the function, it should be considered a false start. The reader may verify that
minimum
R'^AR
11
when
[Y,AY,A'^Y]
See Problems
5-6.
SOLVED PROBLEMS
Prove:
s.
The minor of the element in the last row and first column of A/ - C (p) is 1 divisor of all (s-l)-square minors of A/ - C (p) is 1. Then the invanant facl.f^(\).
=
But /^(A) =
=
ip(A)!'?
since
=
|A/-C^(p)|
\XI-C(pf
jp(A)!9
210
[CHAP. 26
2.
(a) is that of
1.
Theorems I and II, the non-trivial invariant factor and elementary The canonical form of Theorem III is (b).
1 1 1
-1
(b)
1
(a)
-1
-1
-4
-6
-4
-1
(a) is that of
Theorem
2,
I,
X+
X+
2,
X-
2,
X-
2,
X+
3.
-2
4
(a) 1 1
(b)
-2
2
12
-3
-3
4.
(a) is that of
,
X+
2,
(X'^
+2X-lf
(X^
Theorem III. Over the rational field the elementary divisors +2X- if and the invariant factors are
1)^
(X +
are
(X + 2)(X^ + 2X -
2)(X^+2X-1)^
II
of
Theorem
is (b)
is (c).
-2
-2
1 1
-
-2
1
1
(a)
--2
-2
1
1
-2
1
1
-2
1 1 1
2
(b)
-10
--6
1 1
1
ID
1 1
1
11
12
17
-14
--21
-8
CHAP.
26]
211
-200000000000
-2
1 1
-1
(c)
-2
-4
1 1 1 1
1
1
-6
-9
-6
-2
-1
-6
2
-2
1
-1
5.
Let
A =
-2
1
-1 -1 -1
-1
1
Take X
2
3
1 1
-2
0_
Then
AX
-4
[-2
1, 1, 1. 1, 1]',
A^X
= [1,0,-1,0,0
-ir.
A''
=[-3.1.1,1.
1,2]'
A"
-Z;
belongs to
A*-2A^ + 1
We
tentatively
assume
The vector
AY
Xs
- [-1,0,
1,
Y belongs
for Y.
Y = [o, 0, 0, 1, 0, O]' is linearly independent of the members of the chain led by -1, 1,0]' is linearly independent of Y and the members 2,, = Y of the chain. Now A^Y
Since the two polynomials complete the set of non-trivial invariant factors
Xg and
so that
write
to
A -1.
When
-1
1
we
-2
-3
1
"o 1
[Xs.AXs.Xg.AXe.A^Xe.A^'Xfi
1
-1
u
-1
1
R~'^AR 1 1
-1
1
1
1
-1
the rational canonical form 'of ^.
Note.
The vector Z =
1,
and
AZ
- [3, 0, -2,
-2,
o]' is
linearly independent of
1-1,1,0,0,0,1]'= -AXg + A Xg + Z; then (A'' ~ 1)(Z - AXg) = and W=Z-AXe = [2.0 -1 -1 ll -i]' belongs to A - 1. Using this as X^. we may form another R with which to obtain the rational canonical' form.
independent of the members of the chain led by Xg Z and the members of the chain However A'^Z =
-2
1
-1
3
-1
1
-1
1
6.
Let
A =
-1 -1
-4 -4
-2
=
-2
-1
-1
Take X = [1,0,0,0,0]'.
-2
-2
Then
r
-2
-2
^^^=[1,1,-1,-1,0]' are linearly independent while a'^X^ 3. We tentatively assume this to be the minimum
^.^5
[-2,1,-1,-1,-2]',
L-1,
X-3X
X
and
belongs to A - 2A^ +
as X^.
212
[CHAP. 26
When, in A, the fourth column is subtracted from the first, we have [-1,0,0,1,0]'; hence, if y = Again, when the fourth column of A is subtracted from -1, o]', AY = -Y and Y belongs to X+ 1. the third, we have [o, 0, -1, 1, O]'; hence, if Z = [O, 0, 1, -1, O]', AZ = -Z find Z belongs to A+ 1. Since y, Z, and the members of the chain led by X^ are linearly independent, we label Y as X^ and Z as X^. When
[l, 0, 0,
11-2
R
=
-1
11
[Xs.X^.X^.AXs,.A%]
=
1
0-100
R~ AR
=
0-1-1
-1
0-3
-1
-1
-1
10
0-2
the rational canonical form of A.
12
SUPPLEMENTARY PROBLEMS
7.
9,
I,
n, in
Can any
1
of these matrices be
Partial Ans.
(a)
I,
II,
ni,
diag(l,2,
3)
-11
"o
6
1
o"
(b)
I, II,
m.
_0
1
0_
"o
(e)
I,
o'
1
_0
n, in.
-1
1
(/)
I.
-1
n, ni.
-1
1
o'
1
(g)
n.
1
m.
-1
-2.
-1
-1
'2
-1
(h)
I,
10 10
n. in,
8.
Under what conditions will (a) the canonical forms of Theorems I and n be identical? (b) the canonical forms of Theorems n and in be identical? (c) the canonical form of Theorem H be diagonal?
CHAP.
26]
213
"o
9.
o" 1
to
Problem
8(6).
0_
10.
(a) X + 1 Theorems I, n,
A,
(X + i)^
(b)
X^+
1,
III
Theorem
(a)
IV.
Ans.
-1
1
1
-1
10
1 1 1 1
_0
-1
-10
-2
0_
-1
1
-1
n.
-2
1
-1
1
1
1
1
.
-1
-3
2
6"
-1
-1 -1
1
-1
1
in,
-1
1 1
--1
1 1
-
-1
o"
1_
-1
-1 -1
1
-1
a
IV,
a
/S /3
214
[CHAP. 26
11.
m(A)
A (A)
"
g (A) +
12.
In
Example
6,
show
that
X,AX. and Y
form.
13.
In
(a)
Problem
6:
Take 7 Take
= [o,
1, 0, 0, o]',
X4
y_
(3/1
_2/)
A^g
belonging to A+1.
(i>)
= [O, 0,
1, 0, O]',
1.
linearly independent of
X^ = Z
-X^
belonging to A +
(c)
Compute R~
AR
(a) to build R.
14.
25, find
R such
that
R' AR
15.
2Xx
4^1
+ +
X2
+ +
Xg
X4 +
dx2
dt
2x2
2a:2
3At3
dxs
dt
6*1
- 3*3 t.
2xi^
dX4
dt
3ai
X2
Xq
2x4.
real variable
Hint.
i.
i Let
V ^
T U, . .3. -J
r
J define
"^^
dxi
dxQ
dxn
dxA'
111
3
|^^,
-jf. -jf,
^J
dX_
(i)
4 -6 -3
dt
-2 -1
-3
-1
-2 -2
-
AX
RY
carries
(i)
into
dY
dt
R ^ARY +
R^H
choose R so that R~
with
AR
X-i
= Ei, AX^.A'^X-i
while
12-1
[X.L,AXi,A^Xi,X2]
0'
t
dY^
dt
1
1
Y +
-1
yi + 73
y2
-74
Then
C^+it'^
Cge* + Cge-* t
2Ci +
C2e*+
3(Cs + C4)e"*+
^-2t+l'
4 + 2
6t
-Ci +
-t_l,2^ J. Cse"Cse^Sr
46"*
and
RY
- 4
-2C1 -
Cae*-
5(C3+C4)e"* -
2+3{-2_
INDEX
Absolute value of a complex number, 110 Addition
of matrices,
2,
of vectors, 67
determinant of, 49 inverse from, 55 rank of, 50 Algebraic complement, 24 Anti-commutative matrices, 11 Associative laws for
addition of matrices, 2 fields, 64 multiplication of matrices, 2 Augmented matrix, 75
Closed, 85
Coefficient matrix, 75
Column
space of a matrix, 93 transformation, 39 Commutative law for addition of matrices, 2 fields, 64 multiplication of matrices, 3 Commutative matrices, 11 Companion matrix, 197 Complementary minors, 24 Complex numbers, 12, 110 Conformable matrices
for addition, 2
Basis
change
of,
95
of a vector space, 86
orthonormal, 102, 111 Bilinear form(s) canonical form of, 126 definition of, 125 equivalent, 126 factorization of, 128
Jacobson, 205 of bilinear form, 126 of Hermitian form, 146 of matrix, 41, 42 of quadratic form, 133 rational, 203 row equivalent, 40 Canonical set under congruence, 116, 117 under equivalence, 43, 189 under similarity, 203 Cayley-Hamilton Theorem, 181 Chain of vectors, 207 Characteristic equation, 149 polynomial, 149 Characteristic roots definition of, 149 of adj A, 151 of a diagonal matrix, 155 of a direct sum, 155
12
13
of a product, of a sum, 13
Conjugate transpose, 13 Conjunctive matrices, 117 Contragredient transformation, 127 Coordinates of a vector, 88 Cramer's rule, 77
Decomposition of a matrix into Hermitian and skew-Hermitian parts, 13 symmetric and skew-symmetric parts, 12 Degree of a matrix polynomial, 179
of a (scalar) polynomial, 172
Dependent
forms, 69 matrices, 73 polynomials, 73 vectors, 68
215
216
Determinant
definition of, 20
INDEX
Hermitian form canonical form
definite,
of,
146
derivative of, 33
expansion of along first row and column, 33 along a row (column), 23 by Laplace method, 33 multiplication by scalar, 22 of conjugate of a matrix, 30 of conjugate transpose of a matrix, 30 of elementary transformation matrix, 42 of non-singular matrix, 39 of product of matrices, 33
of singular matrix, 39 of transpose of a matrix, 21
signature
of,
147 146
Hermitian forms
equivalence
of,
Image
1
Index of an Hermitian form, 147 of a real quadratic form, 133 Inner product, 100, 110
Intersection space, 87
Elementary
matrices, 41 n-vectors, 88 transformations, 39
Invariant vector(s) definition of, 149 of a diagonal matrix, 156 of an Hermitian matrix, 164 of a normal matrix, 164 of a real symmetric matrix, 163 of similar matrices, 156 Inverse of a (an) diagonal matrix, 55 direct sum, 55 elementary transformation, 39 matrix, 11, 55 product of matrices, 11 symmetric matrix, 58 Involutory matrix, 11
Equality of
matrices, 2 matrix polynomials, 179 (scalar) polynomials, 172
of,
75
Lambda
matrix, 179 Laplace's expansion, 33 Latent roots (vectors), 149 Leader of a chain, 207 Leading principal minors, 135 Left divisor, 180 Left inverse, 63
Factorization into elementary matrices, 43, 188 Field, 64 Field of values, 171
First minor, 22
102, 111
equivalent, 40
common
divisor, 173
over a
field,
65
INDEX
Matrices (cont.) product of, 3
scalar multiple of, 2 similar, 95, 156 square, 1
217
Null space, 87
Nullity, 87
w-vector, 85
sum
Matrix
of, 2
definition of, 1
elementary row (column), 41 elementary transformation of, 39 Hermitian, 13, 117, 164
idempotent, 11
inverse of, 11, 55 lambda, 179
nilpotent, 11
Orthonormal
basis,
102,
111
normal form
nullity of, 87
of,
41
Hermitian forms, 147 matrices, 134, 147 quadratic forms, 134 Principal minor
definition of, 134
leading, 135
Product of matrices
adjoint of, 50 conjugate of, 13 determinant of, 33
rank
of,
39
scalar,
10 singular, 39
skew-Hermitian, 13, 118 skew-symmetric, 12, 117 symmetric, 12, 115, 163
triangular, 10, 157 unitary, 112, 164
rank
transpose
Quadratic form
canonical form of, 133, 134 definition of, 131 factorization of, 138 rank of, 131 reduction of
Matrix Polynoniial(s)
definition of, 179 degree of, 179 product of, 179 proper (improper), 179 scalar, 180 singular (non-singular), 179 sum of, 179 Minimum polynomial, 196
Multiplication
in partitioned form, 4 of matrices, 3
Kronecker, 136 Lagi-ange, 132 regular, 135 Quadratic form, real definite, 134 index of, 133 semi-definite, 134 signature of, 133
Quadratic forms
equivalence
Negative
definite form (matrix), 134, 147 of a matrix, 2 semi-definite form (matrix), 134, 147 Nilpotent matrix, 11
134
Rank
of adjoint, 50 of bilinear form, 125 of Hermitian form, 146 of matrix, 39 of product, 43 of quadratic form, 131 of sum, 48
218
Right divisor, 180 Right inverse, 63 Root
of polynomial, 178 of scalar matrix polynomial, 187
INDEX
Symmetric matrix
deiinition of, 12
(cont.)
Row
equivalent matrices, 40 space of a matrix, 93 transformation, 39
Transformation
elementary, 39 linear, 94 orthogonal, 103 singular, 95 unitary, 112
Scalar matrix, 10 matrix polynomial, 180 multiple of a matrix, 2 polynomial, 172 product of two vectors (see inner product) Schwarz Inequality, 101, 110 Secular equation (see characteristic equation) Signature of Hermitian form, 147 of Hermitian matrix, 118 of real quadratic form, 133 of real symmetric matrix, 116 Similar matrices, 95, 196 Similarity invariants, 196 Singular matrix, 39 Skew-Hermitian matrix, 13, 118 Skew-symmetric matrix, 12, 117 Smith normal form, 188
Transpose
of a matrix, 11 of a product, 12 of a sum, 11
Span, 85
Spectral decomposition, 170 Spur (see Trace) Sub-matrix, 24 Sum of matrices, 2 vector spaces, 87
Sylvester's
belonging to a polynomial, 207 coordinates of, 88 definition of, 67 inner product of, 100 invariant, 149 length of, 100, 110 normalized, 102 orthogonal, 100 vector product of, 109
law
Symmetric matrix
characteristic roots of, 163
dimension of, 86 over the complex field, 110 over the real field, 100
Index of Symbols
Symbol
Page
1 1 1
Symbol
E^, (vector)
Page
88
100, 110
100, 110 103, 111
Hj
[^i;]
X-Y; X Y
\\n
S
^k
A-'A':
10
11
zxy
Q
p
s
A'
/
A
11
I;
A'-
12 13
116
131
A*- A^
det
\A\;
20
h
A. A,-
146
\^ij\
22
im i^ 23
149
149 170
172
^ii'i2
ii, is, ....
<^(A)
a,.
r
23
39
39
E^, (matrix)
/(A)
h-^ij
Hi(k), K^ik)
F[A]
^(A)
AjfiC),
172
179 180
189
39 39
H^.{k). K.j(k)
-V.
A^ (C)
40 43
NiX)
fi(^)
N
adj
49
m(A)
C(g)
J
F
X. X^
64
67
yn(f)
V^tiF)
85
S Cq(p)
203
86
87
205
^A
219