
Matrices

A Complete Course

by Luke S. Cole
Version 1.7, July, 2000

Contents
1 Linear Equations in Linear Algebra
  1.1 Systems of Linear Equations
  1.2 Matrix Notation
  1.3 Solving a Linear System

2 Row Reduction and Echelon Forms
  2.1 Some Terms for Matrices
  2.2 Solutions to Linear Systems
  2.3 Back Substitution (Gaussian Elimination)

3 Vector Equations
  3.1 Addition of Vectors
  3.2 Subtraction of Vectors
  3.3 Linear Combinations

4 The Matrix Equation A~x = ~b
  4.1 Properties

5 Solution Sets of Linear Systems
  5.1 Parametric Vector Form
  5.2 Homogeneous Linear Systems

6 Linear Independence
  6.1 Linear Independence
  6.2 Linear Dependence

7 Linear Transformations
  7.1 Properties
  7.2 Terms
  7.3 One-to-one

8 Matrix Operations
  8.1 Sums and Scalar Multiples
  8.2 Multiplication
  8.3 Power of a Matrix
  8.4 The Transpose of a Matrix

9 The Inverse of a Matrix
  9.1 Properties
  9.2 Elementary Matrices
  9.3 An Algorithm for Finding A^-1
  9.4 Characterizations of Invertible Matrices
  9.5 Invertible Linear Transformations

10 Subspaces of R^n
  10.1 Definition
  10.2 Column Space of a Matrix
  10.3 Null Space of a Matrix
  10.4 Basis for a Subspace
  10.5 The Dimension of a Subspace

11 Determinants
  11.1 Definition
  11.2 Theorems
  11.3 Properties
  11.4 Cramer's Rule
  11.5 A Formula for the Inverse of A

1 Linear Equations in Linear Algebra


1.1 Systems of Linear Equations
A linear equation in the variables x1, ..., xn is an equation that can be
written in the form:

a1 x1 + a2 x2 + ... + an xn = b                                      (1)

where,
b ∈ R or C
a1, ..., an ∈ R or C
n ∈ Z+
The equations

4x1 − 5x2 + 2 = x1 and x2 = 2(√6 − x1) + x3

are both linear because they can be rearranged algebraically into the form of
equation (1).

The equations

4x1 − 5x2 = x1 x2 and x2 = 2√x1 − 6

are both not linear because of the presence of x1 x2 in the first equation and

√x1 in the second.

A system of linear equations (or a linear system) is a collection of one or more


linear equations involving the same variables.

E.g.
For two linear equations:
2x1 − x2 + 1.5x3 = 8
x1 − 4x3 = −7
or

For a single vector equation:

x1 [ 2 ]  +  x2 [ 4 ]  =  [ 2 ]
   [ 4 ]        [ 9 ]     [ 3 ]

Here we think of [ 2 ]
                 [ 4 ]
as a vector (a line segment joining (0,0) to (2,4) in a
plane) and x1 as a scalar.

A system of linear equations has either:


• no solution
• exactly one solution
• infinitely many solutions

A linear system is consistent if it has either one solution or infinitely
many solutions; a system is inconsistent if it has no solution.

1.2 Matrix Notation


A matrix simply records the essential information of a linear system
compactly in a rectangular array.
Given that a system is:
x1 − 2.x2 + x3 = 0
2.x2 − 8.x3 = 8
−4.x1 + 5.x2 + 9.x3 = −9
There are two different forms of the matrix that can be formed from the above
system.

The Coefficient Matrix

[  1  -2   1 ]
[  0   2  -8 ]
[ -4   5   9 ]

The Augmented Matrix

[  1  -2   1   0 ]
[  0   2  -8   8 ]
[ -4   5   9  -9 ]

1.3 Solving a Linear System


The basic strategy is to replace one system with an equivalent system.

Three basic operations are used to simplify a linear system:
1. Replace one equation by the sum of itself and a multiple of another
equation.
2. Interchange two equations.
3. Multiply all the terms in an equation by a nonzero constant.
E.g. Solve the following system:

[  1  -2   1   0 ]
[  0   2  -8   8 ]
[ -4   5   9  -9 ]

We want to keep x1 in the first equation and eliminate it from
the other equations. To do so, add 4 times equation 1 to equation 3.

[  1  -2   1   0 ]
[  0   2  -8   8 ]
[  0  -3  13  -9 ]

Next, multiply equation 2 by 1/2 in order to obtain 1 as the
coefficient for x2.

[  1  -2   1   0 ]
[  0   1  -4   4 ]
[  0  -3  13  -9 ]

Use the x2 in equation 2 to eliminate the −3x2 in equation 3 (add 3
times equation 2 to equation 3).

[  1  -2   1   0 ]
[  0   1  -4   4 ]
[  0   0   1   3 ]

Eventually, we want to eliminate the −2x2 term from equation 1, but
it is more efficient to use the x3 in equation 3 first, to eliminate
the −4x3 and +x3 terms in equations 2 and 1.

[  1  -2   0  -3 ]
[  0   1   0  16 ]
[  0   0   1   3 ]

Now, having cleaned out the column above the x3 in equation 3, we
move back to the x2 in equation 2 and use it to eliminate the −2x2
above it. Because of our previous work with x3, there is now no
arithmetic involving x3 terms. Adding 2 times equation 2 to equation
1, we obtain the system:

[  1   0   0  29 ]
[  0   1   0  16 ]
[  0   0   1   3 ]

Hence x1 = 29, x2 = 16 and x3 = 3 [i.e. (29, 16, 3)]
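The elimination above can be replayed mechanically. Below is a minimal Gauss-Jordan sketch (an illustration added here, not part of the original notes; the helper name `gauss_jordan` is our own) using exact rational arithmetic so no rounding creeps in:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row echelon form."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols - 1):          # never pivot on the b column
        # Find a row with a nonzero entry in this column and swap it up.
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # Scale the pivot row so the leading entry is 1.
        m[pivot_row] = [x / m[pivot_row][col] for x in m[pivot_row]]
        # Eliminate the entries above and below the pivot.
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

aug = [[1, -2, 1, 0],
       [0, 2, -8, 8],
       [-4, 5, 9, -9]]
result = gauss_jordan(aug)
print([row[3] for row in result])  # last column gives x1 = 29, x2 = 16, x3 = 3
```

Running this on the augmented matrix of the example reproduces the solution (29, 16, 3) found by hand.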

2 Row Reduction and Echelon Forms


A matrix is in Echelon Form if:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry (■) of a row is a nonzero value in a column to the
right of the leading entry of the row above it.
3. All values below the leading entries are zero.

[ ■  *  *  * ]
[ 0  ■  *  * ]
[ 0  0  0  0 ]
[ 0  0  0  0 ]

Note: * indicates any value ∈ R; ■ indicates a nonzero value.

A matrix is in Reduced Row Echelon Form if, additionally:

4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column (zeros below
and above it).

[ 1  0  *  * ]
[ 0  1  *  * ]
[ 0  0  0  0 ]
[ 0  0  0  0 ]

Note: * indicates any value ∈ R.

2.1 Some Terms for Matrices


• When we reach an echelon form by row reduction, this process is called
the forward phase.
• When we reach a reduced row echelon form by row reduction of an echelon
form, this process is called the backward phase.
• The leading entry (or pivot) of a row is its leftmost nonzero entry.
• Free variables are the parameters of a parametric description of the
solution set.
• A system has a unique solution when the solution set has no free
variables.

2.2 Solutions to Linear Systems


To find the general solution of a system from its reduced matrix we do as
follows.
E.g.

For the matrix: [ 1  0  -5   1 ]
                [ 0  1   1   4 ]
                [ 0  0   0   0 ]

x1, x2 are leading variables and x3 is a free variable.

So, x1 = 1 + 5t     or, in vector form:  [ x1 ]   [ 1 ]   [  5 ]
    x2 = 4 - t                           [ x2 ] = [ 4 ] + [ -1 ] t
    x3 = t                               [ x3 ]   [ 0 ]   [  1 ]
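As a quick sanity check of a parametric solution, we can substitute it back into the nonzero equations for several values of the parameter t. A small illustrative sketch (added here, not from the original notes):

```python
def residual(t):
    """Plug the parametric solution x = (1 + 5t, 4 - t, t) into the system."""
    x1, x2, x3 = 1 + 5 * t, 4 - t, t
    return (x1 - 5 * x3 - 1,  # first row:  x1 - 5*x3 = 1
            x2 + x3 - 4)      # second row: x2 + x3  = 4

# Every value of the free parameter t gives a solution.
print(all(residual(t) == (0, 0) for t in range(-3, 4)))  # True
```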

2.3 Back Substitution (Gaussian Elimination)


This method is done as follows:
1. Solve each equation for its leading variable.
2. Starting with the last equation, substitute each equation into the one
above it.
3. Treat the free variable(s), if any, as unconstrained parameters.

3 Vector Equations
 
When we see a vector:

~p = [ 2 ]
     [ 3 ]
     [ 1 ]

this is a matrix with only one column and is called a column vector or
just a vector, and each row represents a dimension. So, for example, a
two-row vector is a vector in two dimensions, denoted R^2. A three-row
vector is a vector in three dimensions, denoted R^3.

The geometric description of a vector such as [ 1 ]
                                              [ 4 ]
is shown in figure 1

Figure 1: R^2 vector representation

 
The geometric description of a vector such as [  7 ]
                                              [ 10 ]
                                              [  6 ]
is shown in figure 2

Figure 2: R^3 vector representation


3.1 Addition of Vectors 9

3.1 Addition of Vectors


If we add vectors ~u and ~v, we simply add ~u to ~v (this is adding the
corresponding rows of each column vector).

i.e. [  7 ] + [ 4 ] = [  7 + 4 ] = [ 11 ]
     [ 10 ]   [ 5 ]   [ 10 + 5 ]   [ 15 ]

and this is represented in figure 3.

Figure 3: Addition of two vectors in R^2

3.2 Subtraction of Vectors


If we subtract vectors ~u and ~v, we simply add ~u to −~v.

i.e. [  7 ] + ( − [ 4 ] ) = [  7 − 4 ] = [ 3 ]
     [ 10 ]       [ 5 ]     [ 10 − 5 ]   [ 5 ]

and this is represented in figure 4.

Figure 4: Subtraction of two vectors in R^2
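The componentwise rules above translate directly into code. A minimal sketch (an added illustration; the helper names are our own):

```python
def vec_add(u, v):
    """Add vectors componentwise (corresponding rows of each column vector)."""
    return [a + b for a, b in zip(u, v)]

def vec_sub(u, v):
    """Subtract by adding u to -v, exactly as described above."""
    return vec_add(u, [-b for b in v])

print(vec_add([7, 10], [4, 5]))  # [11, 15]
print(vec_sub([7, 10], [4, 5]))  # [3, 5]
```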

3.3 Linear Combinations


A linear combination is expressed:

~w = c1 ~v1 + c2 ~v2 + ... + cp ~vp                                  (2)

Where,
~v1, ~v2, ..., ~vp are vectors in R^n
c1, c2, ..., cp are scalars (or weights)
~w is in Span{~v1, ~v2, ..., ~vp}
   
E.g. Determine the linear combination of ~v1 = [ -1 ] and ~v2 = [ 2 ]
                                               [  1 ]          [ 1 ]
that generates the vector ~u.

Looking at the geometrical description of ~v1 and ~v2, the parallelogram
rule shows us that ~u is the sum of 3~v1 and −2~v2.
i.e. ~u = 3~v1 − 2~v2
     
E.g. If ~a1 = [  1 ], ~a2 = [   5 ] and ~b = [ -3 ], then Span{~a1, ~a2}
              [ -2 ]        [ -13 ]          [  8 ]
              [  3 ]        [  -3 ]          [  1 ]
is a plane through the origin in R^3. Is ~b in that plane?

We know that if x1 ~a1 + x2 ~a2 = ~b has a solution, then ~b is on the
plane.

Now, [  1    5  -3 ]          [ 1   5  -3 ]
     [ -2  -13   8 ]  rref->  [ 0  -3   2 ]
     [  3   -3   1 ]          [ 0   0  -2 ]

Here we can see that the system has no solution (the last row says 0 = −2).
∴ ~b is not in Span{~a1, ~a2}
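This membership test generalizes: ~b lies in Span{~a1, ~a2} exactly when appending ~b as an extra column does not raise the rank. A sketch using exact rationals (an added illustration; `rank` is our own helper, not from the notes):

```python
from fractions import Fraction

def rank(mat):
    """Rank via forward elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0])):
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        m[r], m[pr] = m[pr], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

a1, a2, b = [1, -2, 3], [5, -13, -3], [-3, 8, 1]
coeff = [[a1[i], a2[i]] for i in range(3)]        # columns a1, a2
aug = [[a1[i], a2[i], b[i]] for i in range(3)]    # columns a1, a2, b
# b is in Span{a1, a2} exactly when adding b does not raise the rank.
print(rank(coeff) == rank(aug))  # False: b is not in the plane
```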

4 The Matrix Equation A~x = ~b


A fundamental idea in linear algebra is to view a linear combination of
vectors as the product of a matrix and a vector. The following equation
shows this notation.

A~x = ~b                                                             (3)

[ ~a1  ~a2  ...  ~an ] [ x1 ]
                       [ x2 ]  =  x1 ~a1 + x2 ~a2 + ... + xn ~an     (4)
                       [ .. ]
                       [ xn ]

Where,
A is an m × n matrix with columns ~a1, ~a2, ..., ~an
~x is a column vector in R^n
~a1, ~a2, ..., ~an are vectors in R^m
x1, x2, ..., xn are scalars (or weights)
~b is in Span{~a1, ~a2, ..., ~an}
   
E.g. If A = [ 2  4 ] and ~x = [ 1 ] then what is ~b?
            [ 4  9 ]          [ 2 ]

A~x = 1 [ 2 ] + 2 [ 4 ] = [ 10 ]
        [ 4 ]     [ 9 ]   [ 22 ]
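Equation (4) says A~x is the linear combination of the columns of A weighted by the entries of ~x; a small sketch computing it exactly that way (an added illustration, not part of the original notes):

```python
def mat_vec(A, x):
    """Compute A x as a linear combination of the columns of A."""
    m, n = len(A), len(A[0])
    b = [0] * m
    for j in range(n):              # for each column a_j of A ...
        for i in range(m):
            b[i] += x[j] * A[i][j]  # ... add the weighted column x_j * a_j
    return b

print(mat_vec([[2, 4], [4, 9]], [1, 2]))  # [10, 22]
```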

4.1 Properties
If A is an m × n matrix, ~u and ~v are vectors in R^n, and c is a scalar, then:

1. A(~u + ~v ) = A~u + A~v


2. A(c~u) = c(A~u)
3. The columns of A span R^m iff a row echelon form of A has a pivot in
every row.

4. The equation A~x = ~b has the augmented matrix [ ~a1  ~a2  ...  ~an  ~b ]

Note: The n × n matrix with 1's on the diagonal and 0's elsewhere is known
as the Identity Matrix and denoted I.

5 Solution Sets of Linear Systems


Solution sets of linear systems are important objects of study in linear algebra.

5.1 Parametric Vector Form


A solution set is said to be in parametric vector form if the solution is in
the form:
~x = s~u + t~v                                                       (5)
Where,
~x is a column vector in R^n
~u, ~v are vectors in R^n
s, t are scalars (or weights) in R

5.2 Homogeneous Linear Systems


A system of linear equations is said to be homogeneous if it can be written
in the form:
A~x = ~0                                                             (6)
Where,
A is an m × n matrix
~x is a column vector in R^n
~0 is the zero vector in R^m
This equation has at least one solution, namely ~x = ~0. This solution is
known as the trivial solution. If any other solutions exist, then they are
known as nontrivial solutions.

5.2.1 Theorems
1. The homogeneous equation has a nontrivial solution iff the equation has
at least one free variable.
2. Suppose the equation A~x = ~b is consistent for some given ~b, and let p~ be
a solution. Then the solution set of A~x = ~b is the set of all vectors of the
form w~ = p~ + ~vh , where ~vh is any solution of the homogeneous equation
A~x = 0.

6 Linear Independence
6.1 Linear Independence
6.1.1 Definition
A set of vectors {~v1, ..., ~vp} in R^n is said to be linearly independent
if the vector equation x1 ~v1 + x2 ~v2 + ... + xp ~vp = ~0 has only the
trivial solution.

Figure 5 shows a graphical representation of what it means for vectors to
be linearly independent.

Figure 5: Graphical representation of what it means for vectors to be
linearly independent.

6.2 Linear Dependence


6.2.1 Definitions
1. A set of vectors {~v1, ..., ~vp} in R^n is said to be linearly
dependent if there exist weights c1, c2, ..., cp, not all zero, such that
c1 ~v1 + c2 ~v2 + ... + cp ~vp = ~0.
2. A set of two vectors is linearly dependent iff one vector is a multiple
of the other.
3. A set is linearly dependent if it contains the zero vector.
4. The columns of an m × n matrix A are linearly dependent if n > m (more
columns than rows).
5. A set {~v1, ~v2, ..., ~vk} with ~v1 ≠ ~0 is linearly dependent iff some
~vj is a linear combination of the preceding vectors ~v1, ~v2, ..., ~vj−1.

Figure 6 shows a graphical representation of what it means for vectors to
be linearly dependent.

Figure 6: Graphical representation of what it means for vectors to be
linearly dependent.

7 Linear Transformations
Linear transformations describe what happens when we think of the matrix A
as an object that acts on a vector ~x by multiplication to produce a new
vector called A~x. So, if an m × n matrix A is multiplied by some vector
~x in R^n, then the resulting vector A~x is in R^m. We write:

T : R^n → R^m,  T(~x) = A~x  (i.e. ~x ↦ A~x)                         (7)

E.g.
                 [ 1 ]
[ 4  -3  1  3 ]  [ 1 ]  =  [ 5 ]
[ 2   0  5  1 ]  [ 1 ]     [ 8 ]
                 [ 1 ]

7.1 Properties
1. T maps R^n onto R^m iff the columns of A span R^m
2. The range of T is the span of the columns of A
3. T(~x) = ~0 iff ~x solves A~x = ~0

A transformation T : R^n → R^m is linear if:

1. T(~u + ~v) = T(~u) + T(~v) ∀ ~u, ~v ∈ R^n

2. T(c~u) = cT(~u) ∀ ~u ∈ R^n and ∀ c ∈ R
3. If T(~x) = A~x and ~e1, ~e2, ..., ~en are the columns of the identity
matrix, then T(~x) = [ T(~e1)  T(~e2)  ...  T(~en) ] ~x

7.2 Terms
7.2.1 A Shear Transformation
A shear transformation is when vectors are transformed so that they are
shifted parallel to an axis, as shown in figure 7.

Figure 7: A Shear Transformation.

7.2.2 A Dilation Transformation


A dilation transformation is when vectors are transformed so that they are
multiplied by some constant in R, as shown in figure 8.

Figure 8: A Dilation Transformation.

7.2.3 A Reflected Transformation


A reflected transformation is when vectors are transformed so that they
are reflected across a line, such as y = x or y = −x. See figure 9.

Figure 9: A Reflected Transformation.

7.2.4 A Rotated Transformation


A rotated transformation is when vectors are transformed so that they are
rotated through some angle, as shown in figure 10.

Figure 10: A Rotated Transformation.

7.2.5 A Projection Transformation


A projection transformation is when vectors are transformed so that they
are projected onto some axis. See figure 11.

Figure 11: A Projection Transformation.
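The five transformation types above all come from multiplying by a suitable 2 × 2 matrix. A sketch with plausible example matrices (the specific matrices here are our own illustrative choices, not taken from the figures):

```python
import math

def apply(A, v):
    """Apply a 2x2 transformation matrix to a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

shear      = [[1, 2], [0, 1]]   # shifts x1 by 2*x2
dilation   = [[3, 0], [0, 3]]   # scales every vector by 3
reflection = [[0, 1], [1, 0]]   # reflects across the line y = x
projection = [[1, 0], [0, 0]]   # projects onto the x1-axis
t = math.pi / 2                 # rotation angle (90 degrees)
rotation   = [[math.cos(t), -math.sin(t)],
              [math.sin(t),  math.cos(t)]]

print(apply(reflection, [2, 5]))  # [5, 2]
print(apply(projection, [2, 5]))  # [2, 0]
```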



7.3 One-to-one
7.3.1 Properties
1. T : R^n → R^m is one-to-one iff T(~x) = ~0 has only the trivial
solution.
2. T is one-to-one iff the columns of the matrix A are linearly
independent.

8 Matrix Operations
8.1 Sums and Scalar Multiples
8.1.1 Properties
If A, B and C are matrices of the same size, and r and s are scalars, then:
1. A + B = B + A
2. (A + B) + C = A + (B + C)
3. A + 0 = A
4. r(A + B) = rA + rB

5. (r + s)A = rA + sA
6. r(sA) = (rs)A
     
E.g. If A = [  4  0  5 ], B = [ 1  1  1 ] and C = [ 2  -3 ] then what is
            [ -1  3  2 ]      [ 3  5  7 ]         [ 0   1 ]
(a) A + B
(b) A + C

(a) A + B = [ 5  1  6 ]
            [ 2  8  9 ]
(b) A + C is not defined because A and C have different sizes.

8.2 Multiplication
8.2.1 Definition
If A is an m × n matrix, and if B is an n × p matrix with columns ~b1 , ~b2 , ..., ~bp
then the product AB is the m × p matrix whose columns are A~b1 , A~b2 , ..., A~bp .
That is:
AB = [ A~b1  A~b2  ...  A~bp ]                                       (8)
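Equation (8) defines AB column by column; a direct sketch of that definition (an added illustration; the helper names are our own):

```python
def mat_vec(A, x):
    """Ordinary matrix-vector product."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mat_mul(A, B):
    """Form AB column by column: column j of AB is A times column j of B."""
    cols = [[row[j] for row in B] for j in range(len(B[0]))]  # columns b_j
    AB_cols = [mat_vec(A, b) for b in cols]                   # A*b_j for each
    # Reassemble the columns into a row-major matrix.
    return [[col[i] for col in AB_cols] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```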

8.2.2 Properties
If A is an m × n matrix and B and C are matrices of sizes for which the
indicated sums and products are defined, then:

1. A(BC) = (AB)C
8.3 Power of a Matrix 15

2. A(B + C) = AB + AC
3. (B + C)A = BA + CA
4. r(AB) = (rA)B = A(rB) ∀ r ∈ R
5. Im A = A = AIn

8.3 Power of a Matrix


If A is an n × n matrix and if k is a positive integer, then A^k denotes
the product of k copies of A:

A^k = A · A · ... · A  (k factors)                                   (9)

Note: We interpret A^0 as I

8.4 The Transpose of a Matrix


Given an m × n matrix A, the transpose of A is the n × m matrix, denoted
A^T, whose columns are formed from the corresponding rows of A.

E.g. If A = [ a  b ], then what is the transpose of A?
            [ c  d ]

A^T = [ a  c ]
      [ b  d ]

8.4.1 Properties
If A and B are matrices whose sizes are appropriate for the following sums
and products:
1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (rA)^T = rA^T ∀ r ∈ R
4. (AB)^T = B^T A^T Note: The reverse order

9 The Inverse of a Matrix


In this section we consider only square matrices and we investigate the matrix
analogue of the reciprocal or multiplicative inverse of a nonzero real number.

If A is an n × n matrix, it often happens that there is another n × n
matrix C such that AC = I and CA = I, where I is the n × n identity
matrix. In this case, we say that A is invertible and we call C an inverse
of A. If B were another inverse of A, then we would have B = BI = B(AC) =
(BA)C = IC = C. Thus when A is invertible, its inverse is unique. We
denote it by A^-1, so that:

A A^-1 = I and A^-1 A = I                                            (10)



A matrix that is not invertible is sometimes called a singular matrix, and
an invertible matrix is called a nonsingular matrix.

An equation for the inverse of a 2 × 2 matrix is as follows:

If A = [ a  b ] and ad − bc ≠ 0, then
       [ c  d ]

A^-1 = (1 / (ad − bc)) [  d  -b ]                                    (11)
                       [ -c   a ]

If ad − bc = 0, then A is not invertible.

The quantity ad − bc is called the determinant of A, and we write:

det A = ad − bc                                                      (12)

One use of the inverse matrix: for each ~b in R^n, the equation A~x = ~b
has the unique solution ~x = A^-1 ~b.
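Equation (11) can be implemented directly; a sketch using exact rationals (an added illustration; the example matrix is our own choice):

```python
from fractions import Fraction

def inverse_2x2(A):
    """Invert a 2x2 matrix via the ad - bc formula; None if singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None                      # not invertible
    f = Fraction(1, det)
    return [[ f * d, -f * b],
            [-f * c,  f * a]]

A = [[4, 7], [2, 6]]
Ainv = inverse_2x2(A)
# Check that A * Ainv = I.
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)  # [[1, 0], [0, 1]] (as Fractions)
```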

9.1 Properties
1. If A is an invertible matrix, then A^-1 is invertible and (A^-1)^-1 = A
2. (AB)^-1 = B^-1 A^-1 Note: The reverse order

3. (A^T)^-1 = (A^-1)^T

9.2 Elementary Matrices


An elementary matrix is one that is obtained by performing a single
elementary row operation on an identity matrix. The next example
illustrates an elementary matrix.

E.g. Let E = [  1  0  0 ] and A = [ a  b  c ]. Compute EA and
             [  0  1  0 ]         [ d  e  f ]
             [ -4  0  1 ]         [ g  h  i ]
describe how the product can be obtained by elementary row operations
on A.

So, EA = [    a        b        c    ]
         [    d        e        f    ]
         [ g - 4a   h - 4b   i - 4c  ]

Hence, addition of −4 times row 1 of A to row 3 produces EA.
This is a row replacement operation.

9.2.1 Properties
1. If an elementary row operation is performed on an m × n matrix A, the
resulting matrix can be written as EA, where the m × m matrix E is
created by performing the same row operation on Im .

2. Each elementary matrix E is invertible. The inverse of E is the elementary


matrix of the same type that transforms E back into I.
3. An n × n matrix A is invertible iff A is row equivalent to In , and in this
case, any sequence of elementary row operations that reduces A to In also
transforms In into A−1 .

9.3 An Algorithm for Finding A−1


If we place A and I side-by-side to form an augmented matrix [ A I ], then row
operations on this matrix produce identical operations on A and on I. Further-
more, either there are row operations that transform A to In , and In to A−1 ,
or else A is not invertible.

The algorithm is to row reduce the augmented matrix [ A  I ] to
[ I  A^-1 ].
 
E.g. Find the inverse of the matrix A = [ 0   1  2 ], if it exists.
                                        [ 1   0  3 ]
                                        [ 4  -3  8 ]

Here, [ A  I ] = [ 0   1  2 | 1  0  0 ]   [ 1   0   3 | 0   1  0 ]
                 [ 1   0  3 | 0  1  0 ] ~ [ 0   1   2 | 1   0  0 ]
                 [ 4  -3  8 | 0  0  1 ]   [ 0  -3  -4 | 0  -4  1 ]

  [ 1  0  3 | 0   1  0 ]   [ 1  0  3 | 0     1    0  ]
~ [ 0  1  2 | 1   0  0 ] ~ [ 0  1  2 | 1     0    0  ]
  [ 0  0  2 | 3  -4  1 ]   [ 0  0  1 | 3/2  -2   1/2 ]

  [ 1  0  0 | -9/2   7  -3/2 ]
~ [ 0  1  0 |  -2    4   -1  ] = [ I  A^-1 ]
  [ 0  0  1 |  3/2  -2   1/2 ]

∴ A^-1 = [ -9/2   7  -3/2 ]
         [  -2    4   -1  ]
         [  3/2  -2   1/2 ]
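The [ A I ] algorithm can be carried out in code; a sketch (an added illustration; the helper name `invert` is our own) that reproduces the A^-1 found above:

```python
from fractions import Fraction

def invert(A):
    """Row reduce [A | I]; if the left block becomes I, the right is A^-1."""
    n = len(A)
    # Build the augmented matrix [A | I] over exact rationals.
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a usable pivot and swap it into place.
        pr = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pr is None:
            return None                  # A is not invertible
        m[col], m[pr] = m[pr], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        # Clear the column above and below the pivot.
        for r in range(n):
            if r != col and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [row[n:] for row in m]

A = [[0, 1, 2], [1, 0, 3], [4, -3, 8]]
print(invert(A))  # rows -9/2 7 -3/2 | -2 4 -1 | 3/2 -2 1/2 (as Fractions)
```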

9.4 Characterizations of Invertible Matrices


Let A be a square n × n matrix. Then the following statements are
equivalent. That is, for a given A, the statements are either all true or
all false.
1. A is an invertible matrix.
2. A is row equivalent to the n × n identity matrix.
3. A has n pivot positions.
4. The equation A~x = ~b has at least one solution for each ~b in R^n.
5. There is an n × n matrix C such that CA = I.
6. There is an n × n matrix D such that AD = I.
7. A^T is an invertible matrix.
Note: If A and B are square matrices and if AB = I, then A and B are both
invertible, with B = A^-1 and A = B^-1.

9.5 Invertible Linear Transformations


A linear transformation T : R^n → R^n is said to be invertible if there
exists a function S : R^n → R^n such that

S(T(~x)) = ~x ∀ ~x ∈ R^n                                             (13)

T(S(~x)) = ~x ∀ ~x ∈ R^n                                             (14)

The next theorem shows that if such an S exists, it is unique and must be a
linear transformation. We call S the inverse of T and write it as T −1 .

9.5.1 Theorem
Let T : R^n → R^n be a linear transformation and let A be the
standard matrix for T. Then T is invertible iff A is an invertible
matrix. In that case, the linear transformation S given by S(~x) =
A^-1 ~x is the unique function satisfying equations 13 and 14.

10 Subspaces of R^n
10.1 Definition
A subspace of R^n is any set H in R^n that has three properties:
1. The zero vector is in H.
2. For each ~u and ~v in H, the sum ~u + ~v is in H.
3. For each ~u in H and each scalar c, the vector c~u is in H.

10.2 Column Space of a Matrix


10.2.1 Definition
The column space of a matrix A is the set Col A of all linear combinations
of the columns of A.
   
E.g. Let A = [  1  -3  -4 ] and ~b = [  3 ]. Determine whether ~b is
             [ -4   6  -2 ]          [  3 ]
             [ -3   7   6 ]          [ -4 ]
in the column space of A.

~b is in the column space of A exactly when the equation A~x = ~b is
consistent.

[  1  -3  -4   3 ]          [ 1  -3   -4    3 ]
[ -4   6  -2   3 ]  rref->  [ 0  -6  -18   15 ]
[ -3   7   6  -4 ]          [ 0   0    0    0 ]

Since the system is consistent, ~b is in the column space of A.

10.3 Null Space of a Matrix


10.3.1 Definition
The null space of a matrix A is the set Nul A of all solutions to the homoge-
neous equation A~x = 0

10.4 Basis for a Subspace


10.4.1 Definition
A basis for a subspace H of R^n is a linearly independent set in H that spans H.
 
−3 6 −1 1 −7
E.g. Find a basis for (a) the null space of the matrix A =  1 −2 2 3 −1 
2 −4 5 8 −4
and (b) the column space of A.
   
−3 6 −1 1 −7 1 −2 0 −1 3 0
(a) First,  1 −2 2 3 −1  rref →  0 0 1 2 −2 0 
2 −4 5 8 −4 0 0 0 0 0 0
x1 − 2.x2 − x4 + 3.x5 = 0
So, x3 + 2.x4 − 2.x5 = 0
0 = 0
Hence, the general solution is:
         
x1 2.x2 + x4 − 3.x5 2 1 −3
 x2   x2  1  0   0 
         
 x3  =  −2.x4 + 2.x5  = x2  0 +x4  −2 +x5  2 
         
 x4   x4  0  1   0 
x5 x5 0 0 1
     

 2 1 −3  
1   0   0 


     

So,  0  ,  −2  ,  2  is the basis for Nul A
     
 0   1   0 

 

 
0 0 1
 
   
 −3 1 
(b) So,  1  ,  2  is the basis for Col A
2 5
 

So we say that:
1. The pivot columns of a matrix A form a basis for the column space of A.
2. The vectors in the parametric solution of A~x = ~0 (one per free
variable) form a basis for the null space of A.
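We can verify a null-space basis by checking that A times each basis vector is the zero vector. An added illustrative check using the example above:

```python
def mat_vec(A, x):
    """Ordinary matrix-vector product."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[-3, 6, -1, 1, -7],
     [ 1, -2, 2, 3, -1],
     [ 2, -4, 5, 8, -4]]

# The three vectors read off from the parametric solution of A x = 0.
basis = [[2, 1, 0, 0, 0],
         [1, 0, -2, 1, 0],
         [-3, 0, 2, 0, 1]]

for v in basis:
    print(mat_vec(A, v))  # each is [0, 0, 0]
```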

10.5 The Dimension of a Subspace


10.5.1 Definitions
1. The dimension of a nonzero subspace H, denoted by dim H, is the
number of vectors in any basis for H. The dimension of the zero subspace
{0} is defined to be zero (the zero subspace has no basis, because the
zero vector by itself forms a linearly dependent set).
2. The rank of a matrix A, denoted by rank A, is the dimension of the
column space of A.

10.5.2 Theorems
1. If a matrix A has n columns, then
rank A + dim Nul A = n (15)

2. Let H be a p-dimensional subspace of R^n. Any linearly independent set


of exactly p elements in H is automatically a basis for H. Also, any set
of p elements of H that spans H is automatically a basis for H.

11 Determinants
Recall from equation 11 that a 2 × 2 matrix is invertible iff its
determinant is nonzero. Here we will extend this fact to matrices larger
than 2 × 2.

11.1 Definition
For n ≥ 2, the determinant of an n × n matrix A = [aij] is the sum of n
terms of the form ±a1j det A1j, with plus and minus signs alternating,
where the entries a11, a12, ..., a1n are from the first row of A. i.e.

det A = a11 det A11 − a12 det A12 + ... + (−1)^(1+n) a1n det A1n     (16)
      = Σ (j=1 to n) (−1)^(1+j) a1j det A1j                          (17)

where,
a1j is the entry in the first row and the j-th column
A1j is the matrix A with its first row and j-th column deleted

Because this formula is tedious to expand directly, the cofactor notation
is introduced. If we let:

Cij = (−1)^(i+j) det Aij                                             (18)

then

det A = a11 C11 + a12 C12 + ... + a1n C1n                            (19)

This is the cofactor expansion across the first row. From this we get the
following theorems:
11.2 Theorems 21

11.2 Theorems
The determinant of an n×n matrix A can be computed by a cofactor expansion
across any row or down any column.
1. The expansion across the i-th row using the cofactors in equation 18 is

det A = ai1 Ci1 + ai2 Ci2 + ... + ain Cin                            (20)

2. The cofactor expansion down the j-th column is

det A = a1j C1j + a2j C2j + ... + anj Cnj                            (21)
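The cofactor expansion in equations (18) and (19) translates directly into a recursive function. A sketch (an added illustration; exponential-time, fine for small matrices):

```python
def det(A):
    """Determinant by cofactor expansion across the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A1j: delete row 1 and column j+1 of A.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)  # (-1)^(1+j) sign pattern
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # 24
```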

11.3 Properties
Let A be a square matrix.
1. If a multiple of one row of A is added to another row to produce a
matrix B, then: det B = det A
2. If two rows of A are interchanged to produce B, then: det B = −det A
3. If one row of A is multiplied by k to produce B, then: det B = k·det A
4. If a matrix A has the row echelon form U, obtained using r row
interchanges, then:

det A = (−1)^r · (product of pivots in U)   when A is invertible
det A = 0                                   when A is not invertible

5. A square matrix A is invertible iff det A ≠ 0


6. If A is an n × n matrix, then: det AT = det A

7. If A and B are n × n matrices, then: det AB = (det A)(det B)


8. If A and B are n × n matrices, then in general: det (A + B) ≠ det A + det B
9. If A is an n × n matrix and E is an n × n elementary matrix, then:
det EA = (det E)(det A), where
det E =  1 if E is a row replacement
det E = −1 if E is an interchange
det E =  r if E is a scale by r

11.4 Cramer’s Rule


Let A be an invertible n × n matrix. For any ~b in R^n, the unique
solution ~x of A~x = ~b has entries given by:

det Ai (~b)
xi = (22)
det A
Where,
i = 1, 2, ..., n
Ai(~b) = [ ~a1  ...  ~ai−1  ~b  ~ai+1  ...  ~an ]  (A with column i
replaced by ~b)
11.5 A Formula for the Inverse of A 22

E.g. Use Cramer's rule to solve the system  3x1 − 2x2 = 6
                                           −5x1 + 4x2 = 8

Here, A = [  3  -2 ],  A1(~b) = [ 6  -2 ],  A2(~b) = [  3  6 ]
          [ -5   4 ]            [ 8   4 ]            [ -5  8 ]

And since det A = 2, the system has a unique solution.

x1 = det A1(~b) / det A = (24 + 16) / 2 = 20
x2 = det A2(~b) / det A = (24 + 30) / 2 = 27
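Cramer's rule as stated in equation (22) can be implemented directly; a sketch using exact rationals (an added illustration; the helper names are our own) that reproduces the solution above:

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion across the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([r[:j] + r[j + 1:] for r in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    """Solve A x = b by Cramer's rule; A must be square with det A != 0."""
    d = det(A)
    n = len(A)
    # A_i(b): replace column i of A with b, then take its determinant.
    return [Fraction(det([[b[r] if c == i else A[r][c] for c in range(n)]
                          for r in range(n)]), d)
            for i in range(n)]

print(cramer([[3, -2], [-5, 4]], [6, 8]))  # x1 = 20, x2 = 27 (as Fractions)
```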

11.5 A Formula for the Inverse of A


From Cramer's rule we find that if A is an invertible n × n matrix, then
the (i, j)-entry of its inverse is:

(A^-1)ij = det Ai(~ej) / det A                                       (23)

Where,
~ej is the j-th column of the identity matrix