
Chapter 2

Matrices
2.1 Operations with Matrices
2.2 Properties of Matrix Operations
2.3 The Inverse of a Matrix

2.4 Elementary Matrices


2.1 Operations with Matrices

Matrix:

$$
A = [a_{ij}] =
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\
\vdots & \vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{bmatrix}
\in M_{m\times n}
$$

(i, j)-th entry (or element): $a_{ij}$
number of rows: m
number of columns: n
size: $m\times n$

Square matrix: m = n

Equal matrices: two matrices are equal if they have the same size
($m\times n$) and the entries in corresponding positions are equal

For $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{m\times n}$,

$$A = B \iff a_{ij} = b_{ij} \ \text{ for } 1 \le i \le m,\ 1 \le j \le n$$

Ex 1: Equality of matrices

$$
A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix},\qquad
B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}
$$

If A = B, then a = 1, b = 2, c = 3, and d = 4

Matrix addition:

If $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{m\times n}$,

$$A + B = [a_{ij}]_{m\times n} + [b_{ij}]_{m\times n} = [a_{ij} + b_{ij}]_{m\times n} = [c_{ij}]_{m\times n} = C$$

Ex 2: Matrix addition

$$
\begin{bmatrix} -1 & 2 \\ 0 & 1 \end{bmatrix} +
\begin{bmatrix} 1 & 3 \\ -1 & 2 \end{bmatrix} =
\begin{bmatrix} -1+1 & 2+3 \\ 0+(-1) & 1+2 \end{bmatrix} =
\begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix}
$$

$$
\begin{bmatrix} 1 \\ -3 \\ -2 \end{bmatrix} +
\begin{bmatrix} -1 \\ 3 \\ 2 \end{bmatrix} =
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
$$

Scalar multiplication:

If $A = [a_{ij}]_{m\times n}$ and c is a scalar,
then $cA = [c\,a_{ij}]_{m\times n}$

Matrix subtraction:
$$A - B = A + (-1)B$$

Ex 3: Scalar multiplication and matrix subtraction

$$
A = \begin{bmatrix} 1 & 2 & 4 \\ -3 & 0 & -1 \\ 2 & 1 & 2 \end{bmatrix},\qquad
B = \begin{bmatrix} 2 & 0 & 0 \\ 1 & -4 & 3 \\ -1 & 3 & 2 \end{bmatrix}
$$

Find (a) 3A, (b) -B, (c) 3A - B

Sol:

(a)
$$
3A = 3\begin{bmatrix} 1 & 2 & 4 \\ -3 & 0 & -1 \\ 2 & 1 & 2 \end{bmatrix}
= \begin{bmatrix} 3(1) & 3(2) & 3(4) \\ 3(-3) & 3(0) & 3(-1) \\ 3(2) & 3(1) & 3(2) \end{bmatrix}
= \begin{bmatrix} 3 & 6 & 12 \\ -9 & 0 & -3 \\ 6 & 3 & 6 \end{bmatrix}
$$

(b)
$$
-B = (-1)\begin{bmatrix} 2 & 0 & 0 \\ 1 & -4 & 3 \\ -1 & 3 & 2 \end{bmatrix}
= \begin{bmatrix} -2 & 0 & 0 \\ -1 & 4 & -3 \\ 1 & -3 & -2 \end{bmatrix}
$$

(c)
$$
3A - B = \begin{bmatrix} 3 & 6 & 12 \\ -9 & 0 & -3 \\ 6 & 3 & 6 \end{bmatrix}
- \begin{bmatrix} 2 & 0 & 0 \\ 1 & -4 & 3 \\ -1 & 3 & 2 \end{bmatrix}
= \begin{bmatrix} 1 & 6 & 12 \\ -10 & 4 & -6 \\ 7 & 0 & 4 \end{bmatrix}
$$
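As a quick numerical sketch (not part of the original slides), the three operations of Ex 3 can be reproduced in NumPy, where `+`, `-`, and multiplication by a scalar all act entrywise:

```python
import numpy as np

A = np.array([[1, 2, 4],
              [-3, 0, -1],
              [2, 1, 2]])
B = np.array([[2, 0, 0],
              [1, -4, 3],
              [-1, 3, 2]])

three_A = 3 * A      # scalar multiplication, part (a)
neg_B = (-1) * B     # negation (-1)B, part (b)
diff = 3 * A - B     # matrix subtraction, part (c)
```

The results match the hand computation above.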

Matrix multiplication:

If $A = [a_{ij}]_{m\times n}$ and $B = [b_{ij}]_{n\times p}$ (the number of columns of A must equal the number of rows of B),

then $AB = [a_{ij}]_{m\times n}[b_{ij}]_{n\times p} = [c_{ij}]_{m\times p} = C$, where the size of C = AB is $m\times p$ and

$$c_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj}$$

The entry $c_{ij}$ is obtained by calculating the sum of the entry-by-entry
products of the i-th row of A and the j-th column of B

Ex 4: Find AB

$$
A = \begin{bmatrix} -1 & 3 \\ 4 & -2 \\ 5 & 0 \end{bmatrix}_{3\times 2},\qquad
B = \begin{bmatrix} -3 & 2 \\ -4 & 1 \end{bmatrix}_{2\times 2}
$$

Sol:
$$
AB = \begin{bmatrix}
(-1)(-3)+(3)(-4) & (-1)(2)+(3)(1) \\
(4)(-3)+(-2)(-4) & (4)(2)+(-2)(1) \\
(5)(-3)+(0)(-4) & (5)(2)+(0)(1)
\end{bmatrix}
= \begin{bmatrix} -9 & 1 \\ -4 & 6 \\ -15 & 10 \end{bmatrix}_{3\times 2}
$$

Note: (1) BA is not defined here (B is 2×2 and A is 3×2)

(2) Even when BA is defined, in general AB ≠ BA
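A minimal NumPy sketch of Ex 4 (added here, not in the original slides): the `@` operator performs matrix multiplication and enforces the size rule above:

```python
import numpy as np

A = np.array([[-1, 3],
              [4, -2],
              [5, 0]])
B = np.array([[-3, 2],
              [-4, 1]])

AB = A @ B   # (3x2) @ (2x2) -> 3x2
# B @ A would raise ValueError: the inner sizes (2 and 3) do not match
```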

Matrix form of a system of linear equations in n variables:

$$
\begin{cases}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2 \\
\quad\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m
\end{cases}
\qquad\text{(m linear equations)}
$$

$$
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix}
\qquad\text{(a single matrix equation)}
$$

$$
\underset{m\times n}{A}\ \underset{n\times 1}{\mathbf{x}} = \underset{m\times 1}{\mathbf{b}}
$$
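As an illustrative sketch (the system used here is the one solved in Sec. 2.3 of these slides), writing a system as $A\mathbf{x} = \mathbf{b}$ lets a linear-algebra library solve it directly:

```python
import numpy as np

# The 3-variable system
#   2x + 3y + z = -1
#   3x + 3y + z =  1
#   2x + 4y + z = -2
# in matrix form A x = b:
A = np.array([[2, 3, 1],
              [3, 3, 1],
              [2, 4, 1]])
b = np.array([-1, 1, -2])

x = np.linalg.solve(A, b)   # solves A x = b
residual = A @ x - b        # should be (numerically) zero
```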

Partitioned matrices:

A matrix can be partitioned into row vectors:

$$
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}
= \begin{bmatrix} \mathbf{r}_1 \\ \mathbf{r}_2 \\ \mathbf{r}_3 \end{bmatrix}
$$

into column vectors:

$$
A = \begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}
= \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \mathbf{c}_3 & \mathbf{c}_4 \end{bmatrix}
$$

or into submatrices, e.g.

$$
A = \left[\begin{array}{ccc|c}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\ \hline
a_{31} & a_{32} & a_{33} & a_{34}
\end{array}\right]
= \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}
$$

Partitioned matrices can be used to simplify equations or to obtain
new interpretations of equations (see the next slide)

A linear combination of the column vectors of matrix A:

$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix}
= \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_n \end{bmatrix},
\qquad
\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
$$

$$
A\mathbf{x} =
\begin{bmatrix}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\
\vdots \\
a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n
\end{bmatrix}_{m\times 1}
= x_1\begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix}
+ x_2\begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix}
+ \cdots
+ x_n\begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}
= x_1\mathbf{c}_1 + x_2\mathbf{c}_2 + \cdots + x_n\mathbf{c}_n
$$

So $A\mathbf{x}$ can be viewed as the linear combination of the column
vectors of A with coefficients $x_1, x_2, \ldots, x_n$:

$$
A\mathbf{x} = \begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_n \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}
$$

You can derive the same result by performing the matrix multiplication
directly for A expressed in column vectors and x
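A small numerical check of this column-vector view (added sketch; the matrix is the one from Ex 3 and the coefficient vector is arbitrary):

```python
import numpy as np

A = np.array([[1, 2, 4],
              [-3, 0, -1],
              [2, 1, 2]])
x = np.array([2, -1, 3])

# the matrix-vector product A x ...
Ax = A @ x

# ... equals the linear combination x1*c1 + x2*c2 + x3*c3 of A's columns
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
```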

Trace operation:
If $A = [a_{ij}]_{n\times n}$, then $\mathrm{Tr}(A) = a_{11} + a_{22} + \cdots + a_{nn}$

Diagonal matrix: a square matrix in which nonzero elements are
found only on the principal diagonal

$$
A = \mathrm{diag}(d_1, d_2, \ldots, d_n) =
\begin{bmatrix}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n
\end{bmatrix}
\in M_{n\times n}
$$

$\mathrm{diag}(d_1, d_2, \ldots, d_n)$ is the usual notation for a diagonal matrix.
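Both notions map directly onto NumPy helpers (a brief added sketch, reusing the Ex 3 matrix):

```python
import numpy as np

A = np.array([[1, 2, 4],
              [-3, 0, -1],
              [2, 1, 2]])

tr = np.trace(A)         # a11 + a22 + a33 = 1 + 0 + 2

D = np.diag([1, 2, 3])   # builds diag(1, 2, 3) as a 3x3 matrix
```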

2.2 Properties of Matrix Operations

Three basic matrix operations, introduced in Sec. 2.1:

(1) matrix addition
(2) scalar multiplication
(3) matrix multiplication

Zero matrix:

$$
0_{m\times n} =
\begin{bmatrix}
0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots &  & \vdots \\
0 & 0 & \cdots & 0
\end{bmatrix}_{m\times n}
$$

Identity matrix of order n:

$$
I_n =
\begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}_{n\times n}
$$

Properties of matrix addition and scalar multiplication:

If $A, B, C \in M_{m\times n}$ and c, d are scalars, then

(1) A + B = B + A (Commutative property of addition)
(2) A + (B + C) = (A + B) + C (Associative property of addition)
(3) (cd)A = c(dA) (Associative property of scalar multiplication)
(4) 1A = A (Multiplicative identity property; 1 is the multiplicative identity for all matrices)
(5) c(A + B) = cA + cB (Distributive property of scalar multiplication over matrix addition)
(6) (c + d)A = cA + dA (Distributive property of scalar multiplication over real-number addition)

Notes:
All the above properties are very similar to the counterpart
properties for real numbers

Properties of zero matrices:

If $A \in M_{m\times n}$ and c is a scalar, then

(1) $A + 0_{m\times n} = A$
(So $0_{m\times n}$ is also called the additive identity for the set of all m×n matrices)

(2) $A + (-A) = 0_{m\times n}$
(Thus -A is called the additive inverse of A)

(3) $cA = 0_{m\times n} \implies c = 0 \text{ or } A = 0_{m\times n}$

Notes:
All the above properties are very similar to the counterpart
properties for the real number 0

Properties of matrix multiplication:

(1) A(BC) = (AB)C (Associative property of matrix multiplication)
(2) A(B+C) = AB + AC (Distributive property of left matrix multiplication over matrix addition)
(3) (A+B)C = AC + BC (Distributive property of right matrix multiplication over matrix addition)
(4) c(AB) = (cA)B = A(cB)

For real numbers, properties (2) and (3) coincide, since the order of the
multiplication of real numbers is irrelevant.
For real numbers, in addition to the above properties, there is a
commutative property of real-number multiplication, i.e., cd = dc.

Properties of the identity matrix:

If $A \in M_{m\times n}$, then (1) $AI_n = A$
(2) $I_m A = A$

For real numbers, the role of 1 is similar to that of the identity matrix. However,
1 is unique for real numbers, while there are many identity matrices with
different sizes

Ex: Matrix multiplication is associative

Calculate (AB)C and A(BC) for

$$
A = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix},\quad
B = \begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix},\quad
C = \begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}
$$

Sol:

$$
(AB)C = \left(\begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}\right)
\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}
= \begin{bmatrix} -5 & 4 & 0 \\ -1 & 2 & 3 \end{bmatrix}
\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}
= \begin{bmatrix} 17 & 4 \\ 13 & 14 \end{bmatrix}
$$

$$
A(BC) = \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}
\left(\begin{bmatrix} 1 & 0 & 2 \\ 3 & -2 & 1 \end{bmatrix}
\begin{bmatrix} -1 & 0 \\ 3 & 1 \\ 2 & 4 \end{bmatrix}\right)
= \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} 3 & 8 \\ -7 & 2 \end{bmatrix}
= \begin{bmatrix} 17 & 4 \\ 13 & 14 \end{bmatrix}
$$
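The same associativity check can be run numerically (an added sketch using the matrices of this example):

```python
import numpy as np

A = np.array([[1, -2],
              [2, -1]])
B = np.array([[1, 0, 2],
              [3, -2, 1]])
C = np.array([[-1, 0],
              [3, 1],
              [2, 4]])

left = (A @ B) @ C    # (AB)C
right = A @ (B @ C)   # A(BC)
```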

Definition of $A^k$: repeated multiplication of a square matrix:

$$A^1 = A,\quad A^2 = AA,\quad \ldots,\quad A^k = \underbrace{AA\cdots A}_{k\ \text{matrices}}$$

Properties of $A^k$:

(1) $A^j A^k = A^{j+k}$
(2) $(A^j)^k = A^{jk}$

where j and k are nonnegative integers and $A^0$ is assumed to be I

For diagonal matrices:

$$
D = \begin{bmatrix}
d_1 & 0 & \cdots & 0 \\
0 & d_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n
\end{bmatrix}
\implies
D^k = \begin{bmatrix}
d_1^k & 0 & \cdots & 0 \\
0 & d_2^k & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & d_n^k
\end{bmatrix}
$$
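The diagonal shortcut can be verified against repeated multiplication (added sketch; the diagonal entries 1, 2, 3 are an arbitrary choice):

```python
import numpy as np

D = np.diag([1, 2, 3])

# k-fold repeated matrix multiplication D @ D @ D
Dk = np.linalg.matrix_power(D, 3)

# for a diagonal matrix this is just diag(d1^k, ..., dn^k)
shortcut = np.diag(np.array([1, 2, 3]) ** 3)
```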

Transpose of a matrix:

If
$$
A = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots &        & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} \in M_{m\times n},
\quad\text{then}\quad
A^T = \begin{bmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots &        & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{bmatrix} \in M_{n\times m}
$$

The transpose operation moves the entry $a_{ij}$ (originally at position (i, j)) to
position (j, i)
Note that after performing the transpose operation, $A^T$ has size $n\times m$

Ex: Find the transpose of each of the following matrices

(a) $A = \begin{bmatrix} 2 \\ 8 \end{bmatrix}$
(b) $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$
(c) $A = \begin{bmatrix} 0 & 1 \\ 2 & 4 \\ 1 & -1 \end{bmatrix}$

Sol:

(a) $A^T = \begin{bmatrix} 2 & 8 \end{bmatrix}$

(b) $A^T = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}$

(c) $A^T = \begin{bmatrix} 0 & 2 & 1 \\ 1 & 4 & -1 \end{bmatrix}$

Properties of transposes:

(1) $(A^T)^T = A$
(2) $(A + B)^T = A^T + B^T$
(3) $(cA)^T = c(A^T)$
(4) $(AB)^T = B^T A^T$

Properties (2) and (4) can be generalized to the sum or product of
multiple matrices. For example, $(A+B+C)^T = A^T + B^T + C^T$ and $(ABC)^T = C^T B^T A^T$

Since a real number can also be viewed as a 1×1 matrix, the transpose
of a real number is itself; that is, for $a \in \mathbb{R}$, $a^T = a$. In other words, the
transpose operation has no effect on real numbers

Ex: Show that $(AB)^T$ and $B^T A^T$ are equal for

$$
A = \begin{bmatrix} 2 & 1 & -2 \\ -1 & 0 & 3 \\ 0 & -2 & 1 \end{bmatrix},\qquad
B = \begin{bmatrix} 3 & 1 \\ 2 & -1 \\ 3 & 0 \end{bmatrix}
$$

Sol:

$$
(AB)^T = \left(\begin{bmatrix} 2 & 1 & -2 \\ -1 & 0 & 3 \\ 0 & -2 & 1 \end{bmatrix}
\begin{bmatrix} 3 & 1 \\ 2 & -1 \\ 3 & 0 \end{bmatrix}\right)^T
= \begin{bmatrix} 2 & 1 \\ 6 & -1 \\ -1 & 2 \end{bmatrix}^T
= \begin{bmatrix} 2 & 6 & -1 \\ 1 & -1 & 2 \end{bmatrix}
$$

$$
B^T A^T = \begin{bmatrix} 3 & 2 & 3 \\ 1 & -1 & 0 \end{bmatrix}
\begin{bmatrix} 2 & -1 & 0 \\ 1 & 0 & -2 \\ -2 & 3 & 1 \end{bmatrix}
= \begin{bmatrix} 2 & 6 & -1 \\ 1 & -1 & 2 \end{bmatrix}
$$
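Property (4) can be checked numerically as well (added sketch using the matrices from this example; `.T` is NumPy's transpose):

```python
import numpy as np

A = np.array([[2, 1, -2],
              [-1, 0, 3],
              [0, -2, 1]])
B = np.array([[3, 1],
              [2, -1],
              [3, 0]])

lhs = (A @ B).T    # (AB)^T
rhs = B.T @ A.T    # B^T A^T
```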

Symmetric matrix:
A square matrix A is symmetric if $A = A^T$

Skew-symmetric matrix:
A square matrix A is skew-symmetric if $A^T = -A$

Ex:

If $A = \begin{bmatrix} 1 & 2 & 3 \\ a & 4 & 5 \\ b & c & 6 \end{bmatrix}$ is symmetric, find a, b, and c.

Sol:

$$
A = \begin{bmatrix} 1 & 2 & 3 \\ a & 4 & 5 \\ b & c & 6 \end{bmatrix},\quad
A^T = \begin{bmatrix} 1 & a & b \\ 2 & 4 & c \\ 3 & 5 & 6 \end{bmatrix},\quad
A = A^T \implies a = 2,\ b = 3,\ c = 5
$$

Ex:

If $A = \begin{bmatrix} 0 & 1 & 2 \\ a & 0 & 3 \\ b & c & 0 \end{bmatrix}$ is skew-symmetric, find a, b, and c.

Sol:

$$
A = \begin{bmatrix} 0 & 1 & 2 \\ a & 0 & 3 \\ b & c & 0 \end{bmatrix},\quad
-A^T = \begin{bmatrix} 0 & -a & -b \\ -1 & 0 & -c \\ -2 & -3 & 0 \end{bmatrix},\quad
A = -A^T \implies a = -1,\ b = -2,\ c = -3
$$

Note: $AA^T$ must be symmetric

Pf:
$$(AA^T)^T = (A^T)^T A^T = AA^T \implies AA^T \text{ is symmetric}$$

The matrix A can have any size, i.e., it is not necessary for A to be a
square matrix. In fact, $AA^T$ is always a square matrix.
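A quick added sketch of this note with a non-square matrix (the entries are an arbitrary choice): $AA^T$ comes out square and symmetric.

```python
import numpy as np

# a non-square (2x3) matrix
A = np.array([[2, 1, -2],
              [-1, 0, 3]])

S = A @ A.T   # 2x2, and symmetric by the proof above
```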

Before finishing this section, two properties will be discussed
that hold for real numbers but not for matrices: the first is the
commutative property of multiplication and the second is
the cancellation law

Real numbers:
ab = ba (Commutative property of real-number multiplication)

Matrices: in general, AB ≠ BA. For $A \in M_{m\times n}$ and $B \in M_{n\times p}$,
AB is defined, while BA is defined only when p = m.

Three situations for BA:

(1) If m ≠ p, then AB is defined but BA is undefined
(2) If m = p but m ≠ n, then $AB \in M_{m\times m}$ and $BA \in M_{n\times n}$ (the sizes are not the same)
(3) If m = p = n, then $AB \in M_{m\times m}$ and $BA \in M_{m\times m}$
(the sizes are the same, but the resulting matrices are generally not equal)

Ex:
Show that AB and BA are not equal for the matrices

$$
A = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}
$$

Sol:

$$
AB = \begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}
= \begin{bmatrix} 2 & 5 \\ 4 & -4 \end{bmatrix},
\qquad
BA = \begin{bmatrix} 2 & -1 \\ 0 & 2 \end{bmatrix}
\begin{bmatrix} 1 & 3 \\ 2 & -1 \end{bmatrix}
= \begin{bmatrix} 0 & 7 \\ 4 & -2 \end{bmatrix}
$$

AB ≠ BA (noncommutativity of matrix multiplication)

Notes:
(1) A + B = B + A (matrix addition is commutative)

(2) AB ≠ BA in general (matrix multiplication is not commutative,
so the order of matrix multiplication is very important)

This property is different from the multiplication of real numbers,
for which the order of multiplication makes no difference

Real numbers:

$$ac = bc \text{ and } c \ne 0 \implies a = b \quad\text{(cancellation law for real numbers)}$$

Matrices:
Suppose AC = BC and C ≠ 0 (C is not a zero matrix):
(1) If C is invertible, then A = B
(2) If C is not invertible, then A = B does not necessarily hold
(the cancellation law is not necessarily valid)

The definition of "invertible" is skipped here because
we will study it soon in the next section

Ex: (An example in which cancellation is not valid)

Show that AC = BC for

$$
A = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix},\quad
B = \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix},\quad
C = \begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix}
$$

Sol:

$$
AC = \begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix}
= \begin{bmatrix} -2 & 4 \\ -1 & 2 \end{bmatrix},
\qquad
BC = \begin{bmatrix} 2 & 4 \\ 2 & 3 \end{bmatrix}
\begin{bmatrix} 1 & -2 \\ -1 & 2 \end{bmatrix}
= \begin{bmatrix} -2 & 4 \\ -1 & 2 \end{bmatrix}
$$

So although AC = BC, A ≠ B
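The failure of cancellation can be demonstrated numerically (an added sketch using the matrices of this example; note that C here is singular, which is why cancellation breaks down):

```python
import numpy as np

A = np.array([[1, 3],
              [0, 1]])
B = np.array([[2, 4],
              [2, 3]])
C = np.array([[1, -2],
              [-1, 2]])

same_product = bool((A @ C == B @ C).all())   # AC == BC
but_unequal = not (A == B).all()              # yet A != B
# C is not invertible: det(C) = 1*2 - (-2)*(-1) = 0
```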

2.3 The Inverse of a Matrix

Inverse matrix:
Consider $A \in M_{n\times n}$.
If there exists a matrix $B \in M_{n\times n}$ such that $AB = BA = I_n$,

then (1) A is invertible (or nonsingular)
(2) B is the inverse of A

Note:
A square matrix that does not have an inverse is called
noninvertible (or singular)

The definition of the inverse of a matrix is similar to that of the inverse of a
scalar, i.e., c·(1/c) = 1
Since there is no (multiplicative) inverse for the real number 0,
you can imagine that noninvertible matrices play a role similar to the real
number 0 in some sense

Theorem 1: The inverse of a matrix is unique

If B and C are both inverses of the matrix A, then B = C.
Pf:

$$
AB = I \implies C(AB) = CI \implies (CA)B = C \implies IB = C \implies B = C
$$

(using the associative property of matrix multiplication and the property
of the identity matrix)

Consequently, the inverse of a matrix is unique.

Notes:
(1) The inverse of A is denoted by $A^{-1}$
(2) $AA^{-1} = A^{-1}A = I$

Find the inverse of a matrix by Gauss-Jordan elimination:

$$[A \mid I] \xrightarrow{\text{Gauss-Jordan elimination}} [I \mid A^{-1}]$$

Ex: Find the inverse of the matrix

$$
A = \begin{bmatrix} 1 & -1 & 0 \\ 1 & 0 & -1 \\ -6 & 2 & 3 \end{bmatrix}
$$

Sol:

$$
[A \mid I] = \left[\begin{array}{ccc|ccc}
1 & -1 & 0 & 1 & 0 & 0 \\
1 & 0 & -1 & 0 & 1 & 0 \\
-6 & 2 & 3 & 0 & 0 & 1
\end{array}\right]
$$

$$
\xrightarrow{A_{1,2}^{(-1)}}
\left[\begin{array}{ccc|ccc}
1 & -1 & 0 & 1 & 0 & 0 \\
0 & 1 & -1 & -1 & 1 & 0 \\
-6 & 2 & 3 & 0 & 0 & 1
\end{array}\right]
\xrightarrow{A_{1,3}^{(6)}}
\left[\begin{array}{ccc|ccc}
1 & -1 & 0 & 1 & 0 & 0 \\
0 & 1 & -1 & -1 & 1 & 0 \\
0 & -4 & 3 & 6 & 0 & 1
\end{array}\right]
$$

$$
\xrightarrow{A_{2,3}^{(4)}}
\left[\begin{array}{ccc|ccc}
1 & -1 & 0 & 1 & 0 & 0 \\
0 & 1 & -1 & -1 & 1 & 0 \\
0 & 0 & -1 & 2 & 4 & 1
\end{array}\right]
\xrightarrow{M_{3}^{(-1)}}
\left[\begin{array}{ccc|ccc}
1 & -1 & 0 & 1 & 0 & 0 \\
0 & 1 & -1 & -1 & 1 & 0 \\
0 & 0 & 1 & -2 & -4 & -1
\end{array}\right]
$$

$$
\xrightarrow{A_{3,2}^{(1)}}
\left[\begin{array}{ccc|ccc}
1 & -1 & 0 & 1 & 0 & 0 \\
0 & 1 & 0 & -3 & -3 & -1 \\
0 & 0 & 1 & -2 & -4 & -1
\end{array}\right]
\xrightarrow{A_{2,1}^{(1)}}
\left[\begin{array}{ccc|ccc}
1 & 0 & 0 & -2 & -3 & -1 \\
0 & 1 & 0 & -3 & -3 & -1 \\
0 & 0 & 1 & -2 & -4 & -1
\end{array}\right]
= [I \mid A^{-1}]
$$

(Here $A_{i,j}^{(c)}$ adds c times row i to row j, and $M_i^{(c)}$ multiplies row i by c.)

So the matrix A is invertible, and its inverse is

$$
A^{-1} = \begin{bmatrix} -2 & -3 & -1 \\ -3 & -3 & -1 \\ -2 & -4 & -1 \end{bmatrix}
$$

Check it by yourselves:

$$AA^{-1} = A^{-1}A = I$$
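The $[A \mid I] \to [I \mid A^{-1}]$ procedure can be sketched in code. This is a minimal illustrative implementation (the function name and the partial-pivoting detail are additions, not from the slides), applied to the matrix of the example above:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^(-1)]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # build [A | I]
    for col in range(n):
        # partial pivoting: bring the largest entry of this column to the diagonal
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]          # swap rows
        M[col] /= M[col, col]                      # scale pivot row so pivot = 1
        for row in range(n):
            if row != col:                         # eliminate the column elsewhere
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                # right half is now A^(-1)

A = np.array([[1, -1, 0],
              [1, 0, -1],
              [-6, 2, 3]])
A_inv = gauss_jordan_inverse(A)
```

In practice one would call `np.linalg.inv(A)` instead; the hand-rolled version only mirrors the row operations shown above.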

Matrix Operations in Excel

TRANSPOSE: calculate the transpose of a matrix

MMULT: matrix multiplication

MINVERSE: calculate the inverse of a matrix

MDETERM: calculate the determinant of a matrix

SUMPRODUCT: calculate the inner product of two vectors

For TRANSPOSE, MMULT, and MINVERSE, since the output is
a matrix or a vector, we need to enter the formula first, then choose
the output range, and finally place the focus on the formula cell
and press Ctrl+Shift+Enter to obtain the desired result.
See Matrix operations in Excel.xls downloaded from my website.

Theorem 2: Properties of inverse matrices

If A is an invertible matrix, k is a positive integer, and c is a scalar,
then

(1) $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$
(2) $A^k$ is invertible and $(A^k)^{-1} = (A^{-1})^k = A^{-k}$
(3) cA is invertible if c ≠ 0, and $(cA)^{-1} = \frac{1}{c}A^{-1}$
(4) $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$
(here the superscript T is not a power; it denotes the transpose operation)

Ex.

$$
A = \begin{bmatrix} 2 & 3 \\ 4 & 1 \end{bmatrix},\quad
A^{-1} = \begin{bmatrix} -0.1 & 0.3 \\ 0.4 & -0.2 \end{bmatrix},\quad
A^T = \begin{bmatrix} 2 & 4 \\ 3 & 1 \end{bmatrix}
$$

$$
(A^T)^{-1} = \begin{bmatrix} -0.1 & 0.4 \\ 0.3 & -0.2 \end{bmatrix} = (A^{-1})^T
$$

Theorem 3: The inverse of a product

If A and B are invertible matrices of order n, then AB is invertible
and
$$(AB)^{-1} = B^{-1}A^{-1}$$

Pf:
$$(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = A(I)A^{-1} = (AI)A^{-1} = AA^{-1} = I$$

(associative property of matrix multiplication)

Thus AB is invertible, and its inverse is $B^{-1}A^{-1}$

Note:
(1) This can be generalized to the product of multiple matrices:

$$(A_1A_2A_3\cdots A_n)^{-1} = A_n^{-1}\cdots A_3^{-1}A_2^{-1}A_1^{-1}$$

(2) It is similar to the result for the transpose of a product of
multiple matrices:

$$(A_1A_2A_3\cdots A_n)^T = A_n^T\cdots A_3^T A_2^T A_1^T$$
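Theorem 3 can be checked numerically (added sketch; the matrix B is an arbitrary invertible choice alongside the A from the Theorem 2 example):

```python
import numpy as np

A = np.array([[2., 3.],
              [4., 1.]])
B = np.array([[1., -1.],
              [0., 2.]])

lhs = np.linalg.inv(A @ B)                  # (AB)^(-1)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # B^(-1) A^(-1)
```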

Theorem 4: Cancellation properties for matrix multiplication

If C is an invertible matrix, then the following properties hold:

(1) If AC = BC, then A = B (right cancellation property)

(2) If CA = CB, then A = B (left cancellation property)

Pf (of (1)):

$$
\begin{aligned}
AC &= BC \\
(AC)C^{-1} &= (BC)C^{-1} && \text{(C is invertible, so } C^{-1}\text{ exists)} \\
A(CC^{-1}) &= B(CC^{-1}) && \text{(associative property of matrix multiplication)} \\
AI &= BI \\
A &= B
\end{aligned}
$$

Note:
If C is not invertible, then cancellation is not necessarily valid.

Theorem 5: Systems of equations with a unique solution

If A is an invertible matrix, then the system of linear equations
$A\mathbf{x} = \mathbf{b}$ has a unique solution given by $\mathbf{x} = A^{-1}\mathbf{b}$

Pf:

$$
A\mathbf{x} = \mathbf{b}
\implies A^{-1}A\mathbf{x} = A^{-1}\mathbf{b} \quad (A \text{ is nonsingular})
\implies I\mathbf{x} = A^{-1}\mathbf{b}
\implies \mathbf{x} = A^{-1}\mathbf{b}
$$

If $\mathbf{x}_1$ and $\mathbf{x}_2$ were two solutions of the equation $A\mathbf{x} = \mathbf{b}$,
then $A\mathbf{x}_1 = \mathbf{b} = A\mathbf{x}_2 \implies \mathbf{x}_1 = \mathbf{x}_2$ (left cancellation property)

So the solution is unique.

Ex:
Use an inverse matrix to solve each system

(a)
2x + 3y + z = -1
3x + 3y + z = 1
2x + 4y + z = -2

(b)
2x + 3y + z = 4
3x + 3y + z = 8
2x + 4y + z = 5

(c)
2x + 3y + z = 0
3x + 3y + z = 0
2x + 4y + z = 0

Sol:

$$
A = \begin{bmatrix} 2 & 3 & 1 \\ 3 & 3 & 1 \\ 2 & 4 & 1 \end{bmatrix}
\xrightarrow{\text{Gauss-Jordan elimination}}
A^{-1} = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}
$$

(a)
$$
\mathbf{x} = A^{-1}\mathbf{b} = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}
\begin{bmatrix} -1 \\ 1 \\ -2 \end{bmatrix}
= \begin{bmatrix} 2 \\ -1 \\ -2 \end{bmatrix}
$$

(b)
$$
\mathbf{x} = A^{-1}\mathbf{b} = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}
\begin{bmatrix} 4 \\ 8 \\ 5 \end{bmatrix}
= \begin{bmatrix} 4 \\ 1 \\ -7 \end{bmatrix}
$$

(c)
$$
\mathbf{x} = A^{-1}\mathbf{b} = \begin{bmatrix} -1 & 1 & 0 \\ -1 & 0 & 1 \\ 6 & -2 & -3 \end{bmatrix}
\begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
$$

This technique is very convenient when you need to solve several
systems with the same coefficient matrix: once you have $A^{-1}$, you
only need to perform a matrix multiplication to solve for the unknown
variables. If you only want to solve one system, Gaussian elimination
with back-substitution (or Gauss-Jordan elimination) requires less
computation.
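The invert-once, reuse-many-times idea can be sketched as follows (added example using the systems (a)-(c) above):

```python
import numpy as np

A = np.array([[2., 3., 1.],
              [3., 3., 1.],
              [2., 4., 1.]])

# invert once, then reuse A^(-1) for every right-hand side
A_inv = np.linalg.inv(A)

x_a = A_inv @ np.array([-1., 1., -2.])   # system (a)
x_b = A_inv @ np.array([4., 8., 5.])     # system (b)
x_c = A_inv @ np.array([0., 0., 0.])     # system (c), homogeneous
```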

Before finishing this section, let us revisit a statement: if a
homogeneous system has any nontrivial solution, the system
must have infinitely many nontrivial solutions

Suppose there is a nonzero solution $\mathbf{x}_1$ of this homogeneous system,
i.e., $A\mathbf{x}_1 = \mathbf{0}$. Then it is straightforward to show that $t\mathbf{x}_1$ must be another
solution:

$$A(t\mathbf{x}_1) = t(A\mathbf{x}_1) = t\,\mathbf{0} = \mathbf{0}$$

(by the fourth property of matrix multiplication)

Finally, since t can be any real number, we conclude that there are
infinitely many solutions of this homogeneous system
