
CHAPTER 2: MATRIX ALGEBRA

1. Definition

A matrix is an array of mn elements (where m and n are integers) arranged in m rows and n columns.
Such a matrix A is usually denoted by:

    [ a11  a12  a13  ..  a1n ]
    [ a21  a22  a23  ..  a2n ]
A = [ a31  a32  a33  ..  a3n ]
    [  .    .    .        .  ]
    [ am1  am2  am3  ..  amn ]

where a11, a12, ......, amn are called its elements and may be either real or complex. The matrix A is said to be of size (m x n). If m = n, the matrix is said to be a square matrix of order n.
If the matrix has only one row or only one column, then it is called a row vector or a column vector respectively.

( a1  a2  a3  ....  an )   row vector

[ b1 ]
[ b2 ]
[ .  ]   column vector
[ .  ]
[ bn ]

The elements aii in a square matrix form the principal diagonal (or main diagonal). Their sum

a11 + a22 + a33 + ... + ann

is called the trace of A. If all the elements of a square matrix are zero, then the matrix is called a null matrix. On the other hand, if only the elements on the main diagonal are non-zero, then the matrix is said to be a diagonal matrix.

          [ 3  0  0 ]
E.g.: C = [ 0  1  0 ]
          [ 0  0  6 ]
In particular, if all the diagonal elements are equal to 1, it is called a unit matrix and is usually denoted
by I. Thus,

    [ 1  0  0 ]
I = [ 0  1  0 ]   unit matrix of order 3
    [ 0  0  1 ]
A square matrix is said to be an upper-triangular matrix if aij = 0 for i > j, and a lower-triangular matrix if aij = 0 for i < j. The matrices U (upper-triangular matrix) and L (lower-triangular matrix) are defined by:

    [ u11  u12  u13 ]             [ l11   0    0  ]
U = [  0   u22  u23 ]   and   L = [ l21  l22   0  ]
    [  0    0   u33 ]             [ l31  l32  l33 ]

A square matrix A is said to be banded if

aij = 0  when |i - j| > k,

in which case at most (2k+1) elements in every row are non-zero. In particular, matrices of the type

    [ a11  a12   0    0  ]
A = [ a21  a22  a23   0  ]
    [  0   a32  a33  a34 ]
    [  0    0   a43  a44 ]

are called tridiagonal matrices.


Every square matrix A is associated with a number called its determinant and is written as
      | a11  a12  a13  ..  a1n |
      | a21  a22  a23  ..  a2n |
|A| = |  .    .    .        .  |
      |  .    .    .        .  |
      | an1  an2  an3  ..  ann |

If |A| ≠ 0, the matrix is said to be non-singular; otherwise it is a singular matrix. As an example, the matrix

    [ 2  1  2 ]
A = [ 3  0  5 ]
    [ 4  2  4 ]

is singular, since |A| = 0.
A square matrix A in which aij = aji is said to be symmetric. If aij = -aji, it is said to be skew-symmetric. For a skew-symmetric matrix, therefore, we have

aii = -aii

so that aii = 0. For example, the matrix

[  0   2  -3 ]
[ -2   0   5 ]
[  3  -5   0 ]

is a skew-symmetric matrix.

2. Matrix Operations

i. Equality of two matrices

Two matrices are said to be equal if they are of the same size and if their corresponding elements are equal.
Let A = (aij) be an m x n matrix and B = (bij) be a p x q matrix. Then

A = B  ⇔  m = p, n = q and aij = bij for all 1 ≤ i ≤ m, 1 ≤ j ≤ n


ii. Addition and subtraction of matrices

Two matrices of the same size can be added or subtracted by adding or subtracting their corresponding elements.
Let A = (aij) and B = (bij) be m x n matrices. Then

A ± B = (aij ± bij)  for all 1 ≤ i ≤ m, 1 ≤ j ≤ n
iii. Scalar multiplication of a matrix

If all the elements of a matrix A are multiplied by a constant k, then the resulting matrix is kA = (k aij).
iv. Product of matrices

Let A = (aij) be an m x r matrix and B = (bij) be an r x n matrix. Then AB = (cij), where

cij = ai1 b1j + ai2 b2j + ...... + air brj = Σ (k = 1 to r) aik bkj    for all 1 ≤ i ≤ m, 1 ≤ j ≤ n

v. The zero matrix

0mxn = (aij) where aij = 0 for all 1 ≤ i ≤ m, 1 ≤ j ≤ n. We also write 0mxn = 0.
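The entry-wise product rule above can be sketched in plain Python. This is an illustrative sketch only; the helper name `mat_mul` is not from the notes.

```python
# Illustrative sketch of the product rule c_ij = sum_k a_ik * b_kj
# for an (m x r) matrix A and an (r x n) matrix B, stored as lists of lists.

def mat_mul(A, B):
    """Multiply A (m x r) by B (r x n)."""
    m, r, n = len(A), len(B), len(B[0])
    assert all(len(row) == r for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(r)) for j in range(n)]
            for i in range(m)]

A = [[5, 8], [3, 6]]
B = [[2, 12], [4, -9]]
print(mat_mul(A, B))  # [[42, -12], [30, -18]]
```

The assertion mirrors the requirement that the number of columns of A equal the number of rows of B.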

3. Properties of the addition and multiplication of matrices

Let A, B and C be m x n matrices. Then
i.   A + B = B + A                  (Commutative Law)
ii.  (A + B) + C = A + (B + C)      (Associative Law)
iii. α(A + B) = αA + αB,            α scalar
iv.  (α + β)A = αA + βA,            α, β scalars

Let A, B and C be n x n matrices. Then
i.   (AB)C = A(BC)                  (Associative Law)
ii.  (A + B)C = AC + BC             (Distributive Law)
iii. A(B + C) = AB + AC             (Distributive Law)
iv.  α(AB) = (αA)B = A(αB),         α scalar

*The multiplication of matrices does not satisfy the commutative law (i.e. AB ≠ BA in general).

Examples:

1.  A = [ 5  8 ]   and   B = [ 2  12 ] ,   find
        [ 3  6 ]             [ 4  -9 ]

a) A + B        c) 2A
b) A - B        d) 2A - 2B

Solution:

a) A + B = [ 7  20 ]        b) A - B = [  3  -4 ]
           [ 7  -3 ]                   [ -1  15 ]

c) 2A = [ 10  16 ]          2B = [ 4   24 ]
        [  6  12 ]               [ 8  -18 ]

d) From c),  2A - 2B = [ 10  16 ] - [ 4   24 ] = [  6  -8 ]
                       [  6  12 ]   [ 8  -18 ]   [ -2  30 ]

        [ 3  4  2  1 ]           [  1  2 ]
2.  A = [ 1  2  3  1 ]   and B = [  3  4 ] ,   find AB
        [ 0  1  2  3 ]           [  1  0 ]
                                 [ -1  1 ]

Solution:

     [ 3 + 12 + 2 - 1    6 + 16 + 0 + 1 ]   [ 16  23 ]
AB = [ 1 + 6 + 3 - 1     2 + 8 + 0 + 1  ] = [  9  11 ]
     [ 0 + 3 + 2 - 3     0 + 4 + 0 + 3  ]   [  2   7 ]

3.  A = [ 1  2 ]
        [ 3  4 ]

Show that A^2 = 5A + 2I, where I is the unit matrix of order 2. Hence, or otherwise, determine A^4.

Solution:

If A = [ a11  a12 ]  and  B = [ b11  b12 ] ,  then  AB = [ a11 b11 + a12 b21   a11 b12 + a12 b22 ]
       [ a21  a22 ]           [ b21  b22 ]               [ a21 b11 + a22 b21   a21 b12 + a22 b22 ]

therefore

A^2 = [ 1(1) + 2(3)   1(2) + 2(4) ] = [  7  10 ]
      [ 3(1) + 4(3)   3(2) + 4(4) ]   [ 15  22 ]

5A = 5 [ 1  2 ] = [  5  10 ]
       [ 3  4 ]   [ 15  20 ]

A^2 - 5A = [  7  10 ] - [  5  10 ] = [ 2  0 ] = 2 [ 1  0 ] = 2I
           [ 15  22 ]   [ 15  20 ]   [ 0  2 ]     [ 0  1 ]

∴ A^2 = 5A + 2I

A^4 = A^2 A^2 = (5A + 2I)(5A + 2I)
    = 25A^2 + 10AI + 10IA + 4I^2
    = 25A^2 + 20A + 4I,        since AI = IA = A and I^2 = I
    = 25(5A + 2I) + 20A + 4I,  since A^2 = 5A + 2I
    = 125A + 50I + 20A + 4I
    = 145A + 54I

so we get

A^4 = 145 [ 1  2 ] + 54 [ 1  0 ] = [ 199  290 ]
          [ 3  4 ]      [ 0  1 ]   [ 435  634 ]
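The polynomial identity in Example 3 can be checked numerically with a short sketch (the helper name is illustrative, not part of the notes):

```python
# Quick check of Example 3: A^2 = 5A + 2I and A^4 = 145A + 54I.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]
A2 = mat_mul(A, A)
assert A2 == [[5 * A[i][j] + 2 * I[i][j] for j in range(2)] for i in range(2)]
A4 = mat_mul(A2, A2)
assert A4 == [[145 * A[i][j] + 54 * I[i][j] for j in range(2)] for i in range(2)]
print(A4)  # [[199, 290], [435, 634]]
```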

4. Transpose, determinant, adjoint and inverse

Transpose
The matrix obtained by interchanging the rows and columns of a square matrix A is defined as the transpose of A and is denoted by A^T. Thus, if

    [ a11  a12  a13 ]
A = [ a21  a22  a23 ]
    [ a31  a32  a33 ]

then

      [ a11  a21  a31 ]
A^T = [ a12  a22  a32 ]
      [ a13  a23  a33 ]

It can easily be shown that:

i)  If C = A + B, then C^T = A^T + B^T, and
ii) If A and B are two square matrices, then the transpose of the product AB is the product of the transposes taken in the reverse order, i.e.

(AB)^T = B^T A^T

More generally,

(ABCD..)^T = ..D^T C^T B^T A^T

Examples:

           [ 1  2  3 ]        [ 1  1  1 ]
a) Let A = [ 4  5  6 ] ,  B = [ 1  1  1 ]
           [ 7  8  9 ]        [ 1  1  1 ]

          [ 1(1)+2(1)+3(1)  1(1)+2(1)+3(1)  1(1)+2(1)+3(1) ]   [  6   6   6 ]
Then AB = [ 4(1)+5(1)+6(1)  4(1)+5(1)+6(1)  4(1)+5(1)+6(1) ] = [ 15  15  15 ]
          [ 7(1)+8(1)+9(1)  7(1)+8(1)+9(1)  7(1)+8(1)+9(1) ]   [ 24  24  24 ]

           [ 6  15  24 ]
(AB)^T  =  [ 6  15  24 ]
           [ 6  15  24 ]

Also,

          [ 1  1  1 ] [ 1  4  7 ]   [ 6  15  24 ]
B^T A^T = [ 1  1  1 ] [ 2  5  8 ] = [ 6  15  24 ] = (AB)^T
          [ 1  1  1 ] [ 3  6  9 ]   [ 6  15  24 ]

             [ 1  0  1 ]
b) Given A = [ 0  1  0 ] ,  show that (A^2)^T = (A^T)^2
             [ 3  0  4 ]

Solution:

      [ 1  0  1 ] [ 1  0  1 ]   [ 1+0+3   0+0+0   1+0+4  ]   [  4  0   5 ]
A^2 = [ 0  1  0 ] [ 0  1  0 ] = [ 0+0+0   0+1+0   0+0+0  ] = [  0  1   0 ]
      [ 3  0  4 ] [ 3  0  4 ]   [ 3+0+12  0+0+0   3+0+16 ]   [ 15  0  19 ]

hence

          [ 4  0  15 ]
(A^2)^T = [ 0  1   0 ]
          [ 5  0  19 ]

Also, A^T = [ 1  0  3 ; 0  1  0 ; 1  0  4 ], therefore

          [ 1  0  3 ] [ 1  0  3 ]   [ 1+0+3  0  3+0+12 ]   [ 4  0  15 ]
(A^T)^2 = [ 0  1  0 ] [ 0  1  0 ] = [   0    1     0   ] = [ 0  1   0 ]
          [ 1  0  4 ] [ 1  0  4 ]   [ 1+0+4  0  3+0+16 ]   [ 5  0  19 ]

which is the same as (A^2)^T.
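The reversal rule (AB)^T = B^T A^T can also be confirmed on the matrices of example a) with a short sketch (helper names are illustrative, not from the notes):

```python
# Verify (AB)^T = B^T A^T on the matrices of example a).
def transpose(M):
    return [list(col) for col in zip(*M)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
print(transpose(mat_mul(A, B)))  # [[6, 15, 24], [6, 15, 24], [6, 15, 24]]
```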

Determinant

Let A = (aij) be an n x n matrix. The determinant of A, denoted by |A|, is defined as follows:

If A = [ a11  a12 ] ,  then  |A| = | a11  a12 | = a11 a22 - a12 a21
       [ a21  a22 ]                | a21  a22 |

For a 3 x 3 matrix (3 rows and 3 columns):

    [ a11  a12  a13 ]
A = [ a21  a22  a23 ] ,  the determinant is:
    [ a31  a32  a33 ]

|A| = a11 (a22 a33 - a23 a32) - a12 (a21 a33 - a23 a31) + a13 (a21 a32 - a22 a31)

To work out the determinant of a 3 x 3 matrix:
- Multiply a11 by the determinant of the 2 x 2 matrix that is not in a11's row or column
- Likewise for a12 and a13
- Add them up, but remember that a12 has a negative sign!
For a 4 x 4 matrix or higher:

By expanding along the j-th row:

|A| = (-1)^(j+1) aj1 |Mj1| + (-1)^(j+2) aj2 |Mj2| + ...... + (-1)^(j+n) ajn |Mjn|
    = Σ (k = 1 to n) (-1)^(j+k) ajk |Mjk|    .........................(1)

or by expanding along the k-th column:

|A| = (-1)^(1+k) a1k |M1k| + (-1)^(2+k) a2k |M2k| + ...... + (-1)^(n+k) ank |Mnk|
    = Σ (j = 1 to n) (-1)^(j+k) ajk |Mjk|    .........................(2)

where Mjk is the (n-1) x (n-1) submatrix of A obtained by deleting its j-th row and its k-th column. The determinant |Mjk| of Mjk is called the minor of ajk in A.
We may expand along any row or column to obtain |A|. Equation (1) gives |A| by expanding along the j-th row, and equation (2) gives |A| by expanding along the k-th column.
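Expansion (1) with j = 1 translates directly into a recursive routine. This is an illustrative sketch only (the names are not from the notes); it runs in O(n!) time, so it is practical only for small matrices:

```python
# Cofactor expansion along the first row (equation (1) with j = 1, 0-based here).

def minor(A, j, k):
    """Submatrix of A with row j and column k deleted (0-based indices)."""
    return [row[:k] + row[k+1:] for i, row in enumerate(A) if i != j]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # expand along row 0: sum over k of (-1)^k * a_0k * |M_0k|
    return sum((-1) ** k * A[0][k] * det(minor(A, 0, k)) for k in range(n))

print(det([[2, 1, 2], [3, 0, 5], [4, 2, 4]]))  # 0  (a singular matrix)
print(det([[1, 1, 1], [1, 2, 3], [1, 4, 9]]))  # 2
```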

Properties of the determinant:

Let A and B be n x n matrices.
i.   If A has a row (or column) of zeros, then |A| = 0
ii.  If A has two identical rows (or columns), then |A| = 0
iii. If A is an upper-triangular (or lower-triangular) matrix, then |A| = product of the diagonal elements. In particular, |In| = 1
iv.  |AB| = |A| |B|
v.   |A^T| = |A|
vi.  |αA| = α^n |A|, where α is a scalar

Examples:
Find all the value(s) of x for which each of the following determinants is zero:

(i)   | 2-x    7  |
      |  4    6-x |

Solution:  (2-x)(6-x) - 7(4) = 0
           12 - 2x - 6x + x^2 - 28 = 0
           x^2 - 8x - 16 = 0
           x = 4 ± 4√2
(ii)  |  1    3-x   4 |
      | 4-x    0    4 |
      | x+6    1    0 |

Solution: Expanding along the first row:

1[0 - 4(1)] - (3-x)[0 - 4(x+6)] + 4[(4-x) - 0] = 0
-4 + (3-x)(4x+24) + 16 - 4x = 0
-4 - 4x^2 - 12x + 72 + 16 - 4x = 0
-4x^2 - 16x + 84 = 0
x^2 + 4x - 21 = 0
x = -7 and 3

Adjoint
Let A = (aij) be an n x n matrix. Then the adjoint of A, denoted by adj A, is defined as

        [ A11  A21  ..  An1 ]
adj A = [ A12  A22  ..  An2 ]
        [  .    .        .  ]
        [ A1n  A2n  ..  Ann ]

where Aij = (-1)^(i+j) |Mij|, and Mij is the (n-1) x (n-1) submatrix of A obtained by deleting the i-th row and the j-th column of A.

Aij is called the cofactor of aij.
The adjoint of A is the transpose of the matrix of cofactors of aij.

Properties of the adjoint:
Let A and B be n x n matrices. Then
i.   A(adj A) = |A| In = (adj A) A
ii.  |adj A| = |A|^(n-1) if |A| ≠ 0
iii. adj (AB) = (adj B)(adj A)
iv.  adj (A^T) = (adj A)^T
v.   adj (αA) = α^(n-1) (adj A)

Remark:            [ |A|   0   ..   0  ]
       A(adj A) =  [  0   |A|  ..   0  ]
                   [  .    .        .  ]
                   [  0    0   ..  |A| ]
Examples:
Find the adjoint of

        [ 1  2  -1 ]
a)  A = [ 3  8   2 ]
        [ 4  9  -1 ]

Solution:

A11 = | 8  2 | = 8(-1) - 2(9) = -26          A12 = -| 3  2 | = -[3(-1) - 2(4)] = 11
      | 9 -1 |                                      | 4 -1 |

A13 = | 3  8 | = 3(9) - 8(4) = -5            A21 = -| 2 -1 | = -[2(-1) - (-1)(9)] = -7
      | 4  9 |                                      | 9 -1 |

A22 = | 1 -1 | = 1(-1) - (-1)(4) = 3         A23 = -| 1  2 | = -(9 - 8) = -1
      | 4 -1 |                                      | 4  9 |

A31 = | 2 -1 | = 2(2) - (-1)(8) = 12         A32 = -| 1 -1 | = -[1(2) - (-1)(3)] = -5
      | 8  2 |                                      | 3  2 |

A33 = | 1  2 | = 1(8) - 2(3) = 2
      | 3  8 |

              [ -26  11  -5 ]^T   [ -26  -7  12 ]
Hence adj A = [  -7   3  -1 ]   = [  11   3  -5 ]
              [  12  -5   2 ]     [  -5  -1   2 ]
        [ 1  2  3 ]
b)  A = [ 0  4  5 ]
        [ 1  0  6 ]

Solution:

A11 = | 4  5 | = 24     A12 = -| 0  5 | = 5     A13 = | 0  4 | = -4
      | 0  6 |                 | 1  6 |               | 1  0 |

A21 = -| 2  3 | = -12   A22 = | 1  3 | = 3      A23 = -| 1  2 | = 2
       | 0  6 |               | 1  6 |                 | 1  0 |

A31 = | 2  3 | = -2     A32 = -| 1  3 | = -5    A33 = | 1  2 | = 4
      | 4  5 |                 | 0  5 |               | 0  4 |

              [  24   5  -4 ]^T   [ 24  -12  -2 ]
Hence adj A = [ -12   3   2 ]   = [  5    3  -5 ]
              [  -2  -5   4 ]     [ -4    2   4 ]
4
Inverse
Let A be a non-singular square matrix of order n. Let B be another square matrix of the same order such that

BA = I

where I is the unit matrix of order n. Then B is said to be the inverse of A, which is written as A^-1, so that

A A^-1 = A^-1 A = I

It follows that A and A^-1 are square matrices of the same order.
If an inverse of A exists, then it is unique. Let B and C both be inverses of A. Then

AB = BA = I   and   AC = CA = I

Premultiplying AB = I by C, we get

C(AB) = CI   or   (CA)B = C   or   IB = C,  i.e.  B = C

which proves that the inverse is unique.

The inverse A^-1 exists only when |A| ≠ 0, and then

A^-1 = (1/|A|) adj A

If A and B are non-singular square matrices, then

(AB)^-1 = B^-1 A^-1   and   (A^T)^-1 = (A^-1)^T

Since A and B are non-singular, we have |A| ≠ 0 and |B| ≠ 0, so that |AB| ≠ 0. Now,

(AB)(B^-1 A^-1) = A(B B^-1)A^-1 = A I A^-1 = A A^-1 = I

Hence it follows that B^-1 A^-1 is the inverse of the product AB.
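The formula A^-1 = (1/|A|) adj A can be sketched with exact rational arithmetic. The helper names are illustrative, and the cofactor expansion reuses the recursive determinant idea from the previous section:

```python
# Sketch of A^-1 = (1/|A|) adj A using exact fractions.
from fractions import Fraction

def minor(A, i, j):
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] * det(minor(A, 0, k)) for k in range(len(A)))

def inverse(A):
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular")
    n = len(A)
    # adj A is the transpose of the matrix of cofactors: (adj A)_ij = A_ji
    return [[Fraction((-1) ** (i + j) * det(minor(A, j, i)), d) for j in range(n)]
            for i in range(n)]

A = [[3, 1, 2], [1, 2, 1], [1, 1, 1]]
assert inverse(A) == [[1, 1, -3], [0, 1, -1], [-1, -2, 5]]
```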
Examples:

                                      [ 3  1  2 ]
a) Find the inverse of the matrix A = [ 1  2  1 ]
                                      [ 1  1  1 ]

Solution:

|A| = 3(2 - 1) - 1(1 - 1) + 2(1 - 2) = 3 - 0 - 2 = 1

Cofactors:

A11 = | 2  1 | = 1        A12 = -| 1  1 | = 0        A13 = | 1  2 | = -1
      | 1  1 |                   | 1  1 |                  | 1  1 |

A21 = -| 1  2 | = 1       A22 = | 3  2 | = 1         A23 = -| 3  1 | = -2
       | 1  1 |                 | 1  1 |                    | 1  1 |

A31 = | 1  2 | = -3       A32 = -| 3  2 | = -1       A33 = | 3  1 | = 5
      | 2  1 |                   | 1  1 |                  | 1  2 |

                      [  1   0  -1 ]               [  1   1  -3 ]
matrix of cofactors = [  1   1  -2 ] ,  so adj A = [  0   1  -1 ]
                      [ -3  -1   5 ]               [ -1  -2   5 ]

Since |A| = 1,

                        [  1   1  -3 ]
A^-1 = (1/|A|) adj A  = [  0   1  -1 ]
                        [ -1  -2   5 ]

                                      [ 4  5  2 ]
b) Find the inverse of the matrix B = [ 3  1  2 ]
                                      [ 1  4  3 ]

Solution:

|B| = 4(3 - 8) - 5(9 - 2) + 2(12 - 1) = -20 - 35 + 22 = -33

          [ -5  -7   8 ]
adj B  =  [ -7  10  -2 ]
          [ 11 -11 -11 ]

                                          [ -5  -7   8 ]
therefore  B^-1 = (1/|B|) adj B = -(1/33) [ -7  10  -2 ]
                                          [ 11 -11 -11 ]

5. Solution of Linear Systems

A system of m linear equations in n unknowns x1, x2, x3, .., xn is defined as follows:

a11 x1 + a12 x2 + .... + a1n xn = b1
a21 x1 + a22 x2 + .... + a2n xn = b2
 .
 .
am1 x1 + am2 x2 + ... + amn xn = bm

which may be written in matrix form as

AX = B

where

    [ a11  a12  ..  a1n ]        [ x1 ]        [ b1 ]
A = [ a21  a22  ..  a2n ] ,  X = [ x2 ] ,  B = [ b2 ]
    [  .    .        .  ]        [ .  ]        [ .  ]
    [ am1  am2  ..  amn ]        [ xn ]        [ bm ]

A = (aij) is called the coefficient matrix, and X and B are called the column vectors of the unknowns and of the constants respectively.
If B = 0 (that is, bi = 0 for all i), the system is called homogeneous.
If B ≠ 0 (that is, bi ≠ 0 for at least one i), the system is called non-homogeneous.
A system of linear equations having no solution is called inconsistent, while a system having one or more solutions is called consistent.
Remark:
1. A homogeneous system always has either the trivial solution only, or an infinite number of solutions. Any solution other than the trivial solution, if it exists, is called a nontrivial solution.
2. A non-homogeneous system has either no solution, exactly one solution, or an infinite number of solutions.


i) Gaussian Elimination

Let A and B be the matrices above. We may write the system of linear equations in its augmented matrix form [A|B]. Gaussian elimination reduces the augmented matrix [A|B] by elementary row operations to triangular form, from which we readily obtain the values of the unknowns by back substitution.

Steps:

The augmented matrix

        [ a11  a12  ..  a1n | b1 ]
[A|B] = [ a21  a22  ..  a2n | b2 ]
        [  .    .        .  | .  ]
        [ am1  am2  ..  amn | bm ]

is an m x (n+1) matrix.

i.   Suppose a11 ≠ 0, that is, the first entry of the first row is nonzero. (If a11 = 0 and ai1 ≠ 0, then interchange the 1st row and the i-th row.) This row is called the pivot row. The pivot row remains untouched.
ii.  Use elementary row operations to form a new matrix with ai1 = 0 for all i = 2, 3, ..., m.
iii. Next take the 2nd row as the new pivot row if a22 ≠ 0. (If a22 = 0 and ai2 ≠ 0, i ≠ 1, then interchange the 2nd row and the i-th row.) Repeat step ii so that ai2 = 0 for all i = 3, 4, ..., m.
iv.  Repeat the process until an upper triangular matrix is formed.
v.   Work backward from the last row to the first row of this system to obtain the solution.
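Steps i-v can be sketched as a single routine. This is an illustrative sketch with made-up names; exact fractions avoid rounding, and the pivot search raises if no nonzero pivot exists:

```python
# Gaussian elimination with back substitution (illustrative sketch).
from fractions import Fraction

def gauss_solve(A, b):
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # steps i & iii: find a row with a nonzero pivot and swap it up
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # step ii: eliminate entries below the pivot
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # step v: back substitution
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# a sample 3 x 3 system
print(gauss_solve([[1, 1, 2], [3, 1, -1], [1, 3, 4]], [2, -6, 4]))
# [Fraction(-1, 1), Fraction(-1, 1), Fraction(2, 1)]
```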

Examples:
a) Solve the following system of linear equations:

x1 + x2 + 2x3 = 2
3x1 + x2 - x3 = -6
x1 + 3x2 + 4x3 = 4

Solution:

[ 1  1   2 |  2 ]  R2 ← R2 - 3R1   [ 1   1   2 |   2 ]  R3 ← R3 + R2   [ 1   1   2 |   2 ]
[ 3  1  -1 | -6 ]  R3 ← R3 - R1    [ 0  -2  -7 | -12 ]                 [ 0  -2  -7 | -12 ]
[ 1  3   4 |  4 ]                  [ 0   2   2 |   2 ]                 [ 0   0  -5 | -10 ]

x1 + x2 + 2x3 = 2
    -2x2 - 7x3 = -12
          -5x3 = -10

x3 = 2,  x2 = -1,  x1 = -1

b) Solve the following system of linear equations:

3x1 + 2x2 + x3 = 3
2x1 + x2 + x3 = 0
6x1 + 2x2 + 4x3 = -6

Solution:

[ 3  2  1 |  3 ]  R2 ← R2 - (2/3)R1   [ 3   2    1  |   3 ]  R3 ← R3 - 6R2   [ 3   2    1  |  3 ]
[ 2  1  1 |  0 ]  R3 ← R3 - 2R1       [ 0 -1/3  1/3 |  -2 ]                  [ 0 -1/3  1/3 | -2 ]
[ 6  2  4 | -6 ]                      [ 0  -2    2  | -12 ]                  [ 0   0    0  |  0 ]

3x1 + 2x2 + x3 = 3
-(1/3)x2 + (1/3)x3 = -2

Let x3 = k be any number. Then x2 = k + 6 and 3x1 = 3 - 2x2 - x3 = -9 - 3k, so x1 = -(k + 3).

ii) Cramer's Rule

Let AX = B be a system of n linear equations in n unknowns. If A is an n x n nonsingular matrix, then AX = B has a unique solution given by

xk = |Ak| / |A| ,   k = 1, 2, ..., n

where Ak is the matrix obtained from A by replacing its k-th column by the column vector B of constants, that is,

       | a11  a12  ..  a1(k-1)   b1   a1(k+1)  ..  a1n |
       | a21  a22  ..  a2(k-1)   b2   a2(k+1)  ..  a2n |
|Ak| = |  .    .         .       .       .         .   |
       | an1  an2  ..  an(k-1)   bn   an(k+1)  ..  ann |
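The rule translates into a short routine. This is an illustrative sketch (names are not from the notes), again using exact fractions:

```python
# Cramer's rule: x_k = |A_k| / |A|, where A_k has column k replaced by B.
from fractions import Fraction

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** k * A[0][k] *
               det([row[:k] + row[k+1:] for row in A[1:]]) for k in range(len(A)))

def cramer(A, b):
    d = det(A)
    if d == 0:
        raise ValueError("|A| = 0: Cramer's rule does not apply")
    n = len(A)
    xs = []
    for k in range(n):
        Ak = [row[:k] + [bk] + row[k+1:] for row, bk in zip(A, b)]
        xs.append(Fraction(det(Ak), d))
    return xs

A = [[1, 1, 1], [1, 2, 3], [1, 4, 9]]
b = [6, 14, 36]
print(cramer(A, b))  # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]
```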

Examples:
Solve the systems of linear equations:

a)  x + y + z = 6
    x + 2y + 3z = 14
    x + 4y + 9z = 36

Solution:
In the form AX = B:

[ 1  1  1 ] [ x ]   [  6 ]                 | 1  1  1 |
[ 1  2  3 ] [ y ] = [ 14 ] ;   since |A| = | 1  2  3 | = 2 ≠ 0, the system has a unique solution.
[ 1  4  9 ] [ z ]   [ 36 ]                 | 1  4  9 |

By using Cramer's rule, the solution is given by

            |  6  1  1 |
x = (1/|A|) | 14  2  3 | = 2/2 = 1
            | 36  4  9 |

            | 1   6  1 |
y = (1/|A|) | 1  14  3 | = 4/2 = 2
            | 1  36  9 |

            | 1  1   6 |
z = (1/|A|) | 1  2  14 | = 6/2 = 3
            | 1  4  36 |

b)  2x + y + z = 3
    x - y + z = 0
    x + 2y - z = 0

Solution:
Here

      | 2  1  1 |
|A| = | 1 -1  1 | = 2(1 - 2) - 1(-1 - 1) + 1(2 + 1) = 3
      | 1  2 -1 |

By using Cramer's rule, the solution is given by

          | 3  1  1 |
x = (1/3) | 0 -1  1 | = -3/3 = -1
          | 0  2 -1 |

          | 2  3  1 |
y = (1/3) | 1  0  1 | = 6/3 = 2
          | 1  0 -1 |

          | 2  1  3 |
z = (1/3) | 1 -1  0 | = 9/3 = 3
          | 1  2  0 |

6. Cayley-Hamilton Theorem

This theorem states that every square matrix satisfies its own characteristic equation. Let the characteristic polynomial be

f(λ) = |A - λI| = 0,

i.e.

p0 + p1 λ + p2 λ^2 + ...... + pn λ^n = 0

Then the Cayley-Hamilton theorem states that

f(A) = p0 I + p1 A + p2 A^2 + ...... + p(n-1) A^(n-1) + pn A^n = 0

Proof: The elements of A - λI are of degree n or less in λ. Hence the elements of adj(A - λI) are at most of degree (n-1) in λ. We therefore write

adj(A - λI) = Q0 + Q1 λ + Q2 λ^2 + ..... + Q(n-1) λ^(n-1)

where the Qi's are square matrices of order n.
Since

(A - λI) adj(A - λI) = |A - λI| I,

we have

(A - λI)(Q0 + Q1 λ + Q2 λ^2 + ...... + Q(n-1) λ^(n-1)) = (p0 + p1 λ + p2 λ^2 + ... + pn λ^n) I

Equating the coefficients of like powers of λ on both sides of the above equation, we get:

A Q0 = p0 I
A Q1 - Q0 = p1 I
A Q2 - Q1 = p2 I
 .
 .
- Q(n-1) = pn I

Pre-multiplying these equations respectively by I, A, A^2, ....., A^n and adding, we obtain

p0 I + p1 A + p2 A^2 + ..... + pn A^n = 0

since the terms on the left side cancel each other. This proves the Cayley-Hamilton theorem. We can write the above statement as:

p0 I = -(p1 A + p2 A^2 + ....... + pn A^n)

Pre-multiplying both sides of the above equation by A^-1, we obtain:

p0 A^-1 = -(p1 I + p2 A + ...... + pn A^(n-1))

or

A^-1 = -(1/p0)(p1 I + p2 A + p3 A^2 + ...... + pn A^(n-1))

Example:

               [ 1  3  7 ]
Given that A = [ 4  2  3 ] .  Use the Cayley-Hamilton theorem to find A^-1.
               [ 1  2  1 ]

Solution: The characteristic equation of A is λ^3 - 4λ^2 - 20λ - 35 = 0.
By Cayley-Hamilton we have A^3 - 4A^2 - 20A - 35I = 0.
Premultiplying by A^-1, we obtain

A^2 - 4A - 20I - 35A^-1 = 0
A^-1 = (1/35)(A^2 - 4A - 20I)

      [ 1  3  7 ] [ 1  3  7 ]   [ 20  23  23 ]
A^2 = [ 4  2  3 ] [ 4  2  3 ] = [ 15  22  37 ]
      [ 1  2  1 ] [ 1  2  1 ]   [ 10   9  14 ]

              ( [ 20  23  23 ]   [  4  12  28 ]   [ 20   0   0 ] )          [ -4  11   -5 ]
A^-1 = (1/35) ( [ 15  22  37 ] - [ 16   8  12 ] - [  0  20   0 ] ) = (1/35) [ -1  -6   25 ]
              ( [ 10   9  14 ]   [  4   8   4 ]   [  0   0  20 ] )          [  6   1  -10 ]
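The worked example can be verified numerically. This sketch (illustrative helper name) checks both the characteristic equation and the inverse formula:

```python
# Check: A^3 - 4A^2 - 20A - 35I = 0, and A^-1 = (A^2 - 4A - 20I)/35.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 3, 7], [4, 2, 3], [1, 2, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = mat_mul(A, A)
A3 = mat_mul(A2, A)
# Cayley-Hamilton: A satisfies its own characteristic equation
assert all(A3[i][j] - 4 * A2[i][j] - 20 * A[i][j] - 35 * I[i][j] == 0
           for i in range(3) for j in range(3))
inv35 = [[A2[i][j] - 4 * A[i][j] - 20 * I[i][j] for j in range(3)] for i in range(3)]
print(inv35)  # 35 * A^-1 = [[-4, 11, -5], [-1, -6, 25], [6, 1, -10]]
```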

7. Eigenvalues and Eigenvectors

Let A be an n x n matrix. A number λ is called an eigenvalue of A if there exists a nonzero column vector X ∈ Mn,1 such that AX = λX. X is called an eigenvector of A corresponding to (or associated with) the eigenvalue λ.

(Note that the eigenvalue λ can be the number 0, but an eigenvector X must be a nonzero vector, i.e. X ≠ 0.)

Properties of eigenvalues and eigenvectors
The following properties of eigenvalues and eigenvectors are useful in applications:
(i)   If λ1, λ2, .., λn are the eigenvalues of A, then
      a) 1/λ1, 1/λ2, ....., 1/λn are the eigenvalues of A^-1,
      b) kλ1, kλ2, ...., kλn are the eigenvalues of kA, k ≠ 0,
      c) λ1^m, λ2^m, ....., λn^m are the eigenvalues of A^m,
      d) λ1 λ2 .... λn = |A|
(ii)  The matrices A and A^T have the same eigenvalues.
(iii) If two or more eigenvalues are equal, then the corresponding eigenvectors may be linearly independent or linearly dependent.
(iv)  If X1 and X2 are eigenvectors of a matrix A corresponding to eigenvalues λ1 and λ2 respectively, and if λ1 ≠ λ2, then X1 and X2 are linearly independent.

Example:
Find the eigenvalues and eigenvectors of the matrix

    [ 1  -3   3 ]
A = [ 3  -5   3 ]
    [ 6  -6   4 ]

Solution:
To do this, we find the values of λ which satisfy the characteristic equation of the matrix A, namely those values of λ for which

det(A - λI) = 0

         [ 1-λ   -3     3  ]
A - λI = [  3   -5-λ    3  ]
         [  6    -6    4-λ ]

det(A - λI) = (1-λ)[(-5-λ)(4-λ) + 18] + 3[3(4-λ) - 18] + 3[-18 - 6(-5-λ)]
            = (1-λ)(λ^2 + λ - 2) - 9λ - 18 + 18λ + 36
            = -λ^3 + 12λ + 16

Setting det(A - λI) = 0:

λ^3 - 12λ - 16 = 0
(λ - 4)(λ + 2)^2 = 0
λ = 4, -2, -2

Finding eigenvectors:
For each eigenvalue λ, we solve

(A - λI) x = 0

Case 1: λ = 4

         [ -3  -3  3 ]
A - 4I = [  3  -9  3 ]
         [  6  -6  0 ]

[ -3  -3  3 | 0 ]  R1 ← -(1/3)R1   [ 1    1   -1  | 0 ]  R2 ← -(1/12)R2  [ 1   1   -1  | 0 ]
[  3  -9  3 | 0 ]  R2 ← R2 - 3R1   [ 0  -12    6  | 0 ]  R3 ← R3 - R2    [ 0   1  -1/2 | 0 ]
[  6  -6  0 | 0 ]  R3 ← R3 - 6R1   [ 0  -12    6  | 0 ]                  [ 0   0    0  | 0 ]

so x2 = (1/2)x3 and x1 = -x2 + x3 = (1/2)x3. Taking x3 = 2 gives the eigenvector

    [ 1 ]
x = [ 1 ]
    [ 2 ]

Case 2: λ = -2

         [ 3  -3  3 ]
A + 2I = [ 3  -3  3 ]
         [ 6  -6  6 ]

[ 3  -3  3 | 0 ]               [ 1  -1  1 | 0 ]
[ 3  -3  3 | 0 ]  reduces to   [ 0   0  0 | 0 ]
[ 6  -6  6 | 0 ]               [ 0   0  0 | 0 ]

Thus x1 - x2 + x3 = 0, so x1 = x2 - x3 and

    [ x2 - x3 ]      [ 1 ]      [ -1 ]
x = [   x2    ] = x2 [ 1 ] + x3 [  0 ] ,  (x2, x3) ≠ (0, 0),
    [   x3    ]      [ 0 ]      [  1 ]

are the eigenvectors of A associated with the eigenvalue -2.
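Each eigenpair found above can be checked against the definition AX = λX. A sketch, assuming the example matrix as printed above (the helper name is illustrative):

```python
# Check AX = lambda * X for the eigenpairs of the worked example.
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, -3, 3], [3, -5, 3], [6, -6, 4]]
# lambda = 4 with eigenvector (1, 1, 2)
assert mat_vec(A, [1, 1, 2]) == [4 * v for v in [1, 1, 2]]
# lambda = -2 with eigenvectors (1, 1, 0) and (-1, 0, 1)
assert mat_vec(A, [1, 1, 0]) == [-2 * v for v in [1, 1, 0]]
assert mat_vec(A, [-1, 0, 1]) == [-2 * v for v in [-1, 0, 1]]
print("all eigenpairs verified")
```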


8. Linearly Independent Row Vectors

Definition 1:
Let v1, v2, ....., vn be vectors in a vector set V (that is, V = {v1, v2, ..., vn}).
(i)  We say that the vectors v1, v2, ..., vn are linearly independent if

     α1 v1 + α2 v2 + ..... + αn vn = 0  implies  α1 = α2 = .... = αn = 0

(ii) v1, v2, ......, vn are linearly dependent if there exist numbers α1, α2, ....., αn, not all of them zero, such that α1 v1 + α2 v2 + ..... + αn vn = 0

Example:
a) Let V = {(1,0,0), (1,2,0), (1,0,1)}

α1 (1,0,0) + α2 (1,2,0) + α3 (1,0,1) = 0 gives

α1 + α2 + α3 = 0
2α2 = 0
α3 = 0

so α2 = 0, α3 = 0 and therefore α1 = 0.
Therefore (1,0,0), (1,2,0), (1,0,1) are linearly independent.

b) Let V = {(1,0,0), (1,2,2), (1,4,4)}

Since 1(1,0,0) - 2(1,2,2) + 1(1,4,4) = 0, there exist αi not all zero such that α1 (1,0,0) + α2 (1,2,2) + α3 (1,4,4) = 0.
Therefore (1,0,0), (1,2,2), (1,4,4) are linearly dependent.

Definition 2:
Let A = (ai,j) ∈ Mm,n and let v be a row vector of A. Then the maximum number of linearly independent row vectors of A is called the rank of A and is denoted by ρ(A) or rank A.

Remark:
Let V = {v1, v2, ....., vn}; if there exists a zero row vector in V (that is, vi = 0), then v1, v2, ......, vn are linearly dependent:
if vi = 0, then αi does not need to be zero, since

0 v1 + 0 v2 + .... + αi vi + 0 v(i+1) + .... + 0 vn = 0   for any αi.
Definition 3 (Row Echelon Matrices):
A row is called a zero row if every element in the row is zero.
A row is called a non-zero row if there is at least one element in the row which is not zero.

Let A = (ai,j) ∈ Mm,n, A ≠ 0, and let Ri be the i-th row of A, i = 1, 2, ...., m.
Let Ri be a non-zero row. The first non-zero element in Ri is called the distinguished element of Ri.
A matrix A is called a row echelon matrix if there is a number k, 1 ≤ k ≤ m, such that:
(i)  Only R1, R2, ...., Rk are non-zero rows. (That is, all non-zero rows are above all zero rows, if zero row(s) exist.)
(ii) If ci = the number of zeros before the distinguished element in Ri, then ci < c(i+1), i = 1, 2, ..., k-1.

Furthermore, a row echelon matrix is called a row reduced echelon matrix if the distinguished elements are:
(iii) each equal to 1, and
(iv)  the only nonzero entries in their respective columns.

Example:

    [ 2  1  0 ]       [ 0  3  0  2  0 ]       [ 0  0  0  1  0 ]
A = [ 0  0  0 ]   B = [ 0  0  1  0  0 ]   C = [ 0  0  1  0  0 ]
    [ 0  0  1 ]       [ 0  0  0  1  1 ]       [ 0  0  0  0  1 ]
                      [ 0  0  0  0  0 ]       [ 0  0  0  0  0 ]

    [ 1  2  0  0 ]       [ 1  3  0  0  0 ]       [ 0  0  2  0  0  0 ]
D = [ 0  2  4  0 ]   E = [ 0  0  1  0  0 ]   F = [ 0  0  0  4  0  6 ]
    [ 0  0  0  1 ]       [ 0  0  0  1  1 ]       [ 0  1  0  0  1  2 ]
    [ 0  0  0  0 ]       [ 0  0  0  0  0 ]       [ 0  0  0  0  0  1 ]

The matrices A, C and F are not row echelon matrices.
The matrices B and D are row echelon matrices but not row reduced echelon matrices.
The matrix E is a row echelon matrix and also a row reduced echelon matrix.

Remarks:
1. The zero matrix 0 is a row echelon matrix and also a row reduced echelon matrix.
2. Every matrix can be reduced to a row echelon matrix by some elementary row operations.

Procedure which row reduces a matrix to echelon form:

1. Suppose the j1-th column is the first column with a non-zero entry. Interchange the rows so that this non-zero entry appears in the first row, that is, so that a1j1 ≠ 0.
2. For each i > 1, apply the operation Ri ← a1j1 Ri - aij1 R1.
3. Repeat steps 1 and 2 with the submatrix formed by all the rows excluding the first. Continue the process until the matrix is in echelon form.
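The procedure can be sketched as a routine that also counts non-zero rows to give the rank. An illustrative sketch (names are not from the notes); the cross-multiplication update of step 2 keeps integer entries integer:

```python
# Row reduction to echelon form via R_i <- a_1j1 * R_i - a_ij1 * R_1 (step 2),
# applied column by column; the rank is the number of non-zero rows.
def row_echelon(A):
    A = [row[:] for row in A]
    m = len(A)
    start = 0
    for col in range(len(A[0]) if A else 0):
        # step 1: find a row with a nonzero entry in this column
        pivot = next((r for r in range(start, m) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[start], A[pivot] = A[pivot], A[start]
        p = A[start][col]
        # step 2: zero out the entries below the pivot
        for r in range(start + 1, m):
            if A[r][col] != 0:
                A[r] = [p * x - A[r][col] * y for x, y in zip(A[r], A[start])]
        start += 1
    return A

def rank(A):
    return sum(1 for row in row_echelon(A) if any(x != 0 for x in row))

print(rank([[0, 1, 0], [1, 0, 2], [-2, 0, 1]]))  # 3
print(rank([[1, 0, 0], [1, 2, 2], [1, 4, 4]]))   # 2
```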
Definition 4:
If A = (ai,j) ∈ Mm,n is a row echelon matrix, then the rank of A, ρ(A), is the number of non-zero row(s) in A.
Example:
Let A be the matrix formed by the row vectors of the set V1 = {(0,1,0), (1,0,2), (-2,0,1)}.
(i)   Determine the rank of A.
(ii)  Determine whether the row vector set V1 is linearly dependent or linearly independent.
(iii) Find the maximum number of linearly independent row vectors in V1.

Solution:

[  0  1  0 ]           [  1  0  2 ]                 [ 1  0  2 ]
[  1  0  2 ]  R1 ↔ R2  [  0  1  0 ]  R3 ← R3 + 2R1  [ 0  1  0 ]
[ -2  0  1 ]           [ -2  0  1 ]                 [ 0  0  5 ]

(i)   The number of non-zero rows in the row echelon matrix is 3, so the rank of A = 3.
(ii)  There is no zero row in the row echelon matrix, so the row vector set V1 is linearly independent.
(iii) The maximum number of linearly independent row vectors in V1 is 3.

9. Diagonalization

Recall that a matrix D = (di,j) ∈ Mn is called a diagonal matrix if di,j = 0 for all i ≠ j. That is,

    [ d11   0   ..   0  ]
D = [  0   d22  ..   0  ]
    [  .    .        .  ]
    [  0    0   ..  dnn ]

Properties of diagonal matrices:
Let D and W be two diagonal matrices of order n, with diagonal elements d11, ..., dnn and w11, ..., wnn respectively. Then:

1. DW and WD are also diagonal matrices, and

           [ d11 w11     0     ..     0    ]
DW = WD =  [    0     d22 w22  ..     0    ]
           [    .        .            .    ]
           [    0        0     ..  dnn wnn ]

2. The determinant of D is |D| = d11 d22 d33 ... dnn.
3. If all of the diagonal elements of D are nonzero (that is, dii ≠ 0, i = 1, 2, ..., n), then D is invertible (non-singular) and

       [ 1/d11    0    ..    0   ]
D^-1 = [   0    1/d22  ..    0   ]
       [   .      .          .   ]
       [   0      0    ..  1/dnn ]

4. The eigenvalues of D are its diagonal elements, that is d11, ..., dnn.


Of course, most square matrices are not diagonal, but some are related to diagonal matrices in a
way we will find useful in solving problems.
Definition 1:
Let A be an n-square matrix. Then A is diagonalizable ⇔ there exists an invertible n-square matrix P such that P^-1 A P is a diagonal matrix. (That is, P^-1 A P = D where D is a diagonal matrix.) When such a matrix P exists, we say that P diagonalizes A.
Example:

        [ 3   2 ]                              [ 2  -1 ]
Let A = [ 3  -2 ] .  There exists a matrix P = [ 1   3 ]  such that

P^-1 A P = (1/7) [  3  1 ] [ 3   2 ] [ 2  -1 ]
                 [ -1  2 ] [ 3  -2 ] [ 1   3 ]

         = (1/7) [  3  1 ] [ 8   3 ]  =  (1/7) [ 28    0 ]  =  [ 4   0 ] ,  a diagonal matrix
                 [ -1  2 ] [ 4  -9 ]           [  0  -21 ]     [ 0  -3 ]

Therefore, P diagonalizes A.

Definition 2:
Let A be an n-square matrix. Then A is diagonalizable ⇔ A has n linearly independent eigenvectors.

Definition 3:
The n column vectors v1, v2, ....., vn are linearly independent ⇔ the matrix A = [v1 | v2 | ..... | vn] is invertible (non-singular) ⇔ |A| ≠ 0.
Procedure to determine a matrix P that diagonalizes a matrix A:

1. Determine the characteristic equation of A, that is, p(λ) = |A - λIn| = 0.
2. Solve the characteristic equation p(λ) = 0 and determine all the eigenvalues λi.
3. For each λi, solve (A - λi In) ei = 0 to find an eigenvector ei corresponding to λi.
4. Form the matrix P = [e1 | e2 | ....... | en].
5. Determine whether |P| is nonzero.
6. If |P| ≠ 0, then

           [ λ1   0  ..   0 ]
P^-1 A P = [  0  λ2  ..   0 ]
           [  .   .       . ]
           [  0   0  ..  λn ]

Example:

        [ -1  4 ]
Let A = [  0  3 ] .  Find a matrix P that diagonalizes A and find the diagonal matrix D such that D = P^-1 A P.

Solution:
The characteristic equation is p(λ) = |A - λI2| = 0:

|A - λI2| = | -1-λ    4  | = 0
            |   0   3-λ  |

(-1-λ)(3-λ) = 0
λ^2 - 2λ - 3 = 0
(λ - 3)(λ + 1) = 0
λ = -1, 3

Next, find eigenvectors ei corresponding to λi by solving the system (A - λi I2) ei = 0, that is,

[ -1-λ    4  ] [ x1 ]   [ 0 ]
[   0   3-λ  ] [ x2 ] = [ 0 ]

Case (i) λ1 = -1:
Augmented matrix:

[ 0  4 | 0 ]  R2 ← R2 - R1  [ 0  4 | 0 ]
[ 0  4 | 0 ]                [ 0  0 | 0 ]

4x2 = 0  ⇒  x2 = 0, with x1 = k free.

The set of eigenvectors corresponding to λ1 = -1 is given by ( k, 0 )^T, k ≠ 0.
When k = 1, e1 = ( 1, 0 )^T is an eigenvector corresponding to λ1 = -1.

Case (ii) λ2 = 3:
The augmented matrix is

[ -4  4 | 0 ]
[  0  0 | 0 ]

-4x1 + 4x2 = 0  ⇒  x1 = x2

The set of eigenvectors corresponding to λ2 = 3 is given by ( k, k )^T, k ≠ 0.
When k = 1, e2 = ( 1, 1 )^T is an eigenvector corresponding to λ2 = 3.

        [ 1  1 ]                                             [ 1  -1 ]
Let P = [ 0  1 ] .  Since |P| = 1 ≠ 0, P^-1 exists and P^-1 = [ 0   1 ] .

D = P^-1 A P = [ 1  -1 ] [ -1  4 ] [ 1  1 ]  =  [ 1  -1 ] [ -1  3 ]  =  [ -1  0 ]
               [ 0   1 ] [  0  3 ] [ 0  1 ]     [ 0   1 ] [  0  3 ]     [  0  3 ]

              [ 1  1 ]                                    [ -1  0 ]
Therefore P = [ 0  1 ]  diagonalizes the matrix A and D = [  0  3 ] = P^-1 A P.
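The worked example can be confirmed by multiplying the three matrices. An illustrative sketch (P^-1 is entered directly, since |P| = 1 makes P^-1 equal to adj P):

```python
# Check P^-1 A P = D for the 2x2 worked example.
def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[-1, 4], [0, 3]]
P = [[1, 1], [0, 1]]       # columns are the eigenvectors e1 = (1,0), e2 = (1,1)
P_inv = [[1, -1], [0, 1]]  # |P| = 1, so P^-1 = adj P
D = mat_mul(P_inv, mat_mul(A, P))
print(D)  # [[-1, 0], [0, 3]]  -- the eigenvalues on the diagonal
```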

