
AMATH 460: Mathematical Methods

for Quantitative Finance


5. Linear Algebra I
Kjell Konis
Acting Assistant Professor, Applied Mathematics
University of Washington
Kjell Konis (Copyright 2013) 5. Linear Algebra I 1 / 68
Outline
1. Vectors
2. Vector Length and Planes
3. Systems of Linear Equations
4. Elimination
5. Matrix Multiplication
6. Solving Ax = b
7. Inverse Matrices
8. Matrix Factorization
9. The R Environment for Statistical Computing
Vectors
Portfolio: w_1 shares of asset 1, w_2 shares of asset 2
Think of this pair as a two-dimensional vector

  w = [ w_1 ]  =  (w_1, w_2)
      [ w_2 ]

The numbers w_1 and w_2 are the components of the column vector w
A second portfolio has u_1 shares of asset 1 and u_2 shares of asset 2;
the combined portfolio has

  u + w = [ u_1 ]  +  [ w_1 ]  =  [ u_1 + w_1 ]
          [ u_2 ]     [ w_2 ]     [ u_2 + w_2 ]

Addition for vectors is defined component-wise
Vectors
Doubling a vector

  2w = w + w = [ w_1 ]  +  [ w_1 ]  =  [ w_1 + w_1 ]  =  [ 2w_1 ]
               [ w_2 ]     [ w_2 ]     [ w_2 + w_2 ]     [ 2w_2 ]

In general, multiplying a vector by a scalar value c

  cw = [ cw_1 ]
       [ cw_2 ]

A linear combination of vectors u and w

  c_1 u + c_2 w = [ c_1 u_1 ]  +  [ c_2 w_1 ]  =  [ c_1 u_1 + c_2 w_1 ]
                  [ c_1 u_2 ]     [ c_2 w_2 ]     [ c_1 u_2 + c_2 w_2 ]

Setting c_1 = 1 and c_2 = -1 gives vector subtraction

  u - w = 1u + (-1)w = [ 1u_1 + (-1)w_1 ]  =  [ u_1 - w_1 ]
                       [ 1u_2 + (-1)w_2 ]     [ u_2 - w_2 ]
Visualizing Vector Addition
The vector w = (w_1, w_2) is drawn as an arrow with the tail at the origin
and the head at the point (w_1, w_2)
Let u = (1, 2) and w = (3, 1)
[Figure: u and w drawn from the origin; the diagonal of the parallelogram
they span is v = w + u, so equivalently w = v - u]
Vector Addition
Each component in the sum is u_i + w_i = w_i + u_i, hence u + w = w + u
e.g.,

  [ 1 ]  +  [ 3 ]  =  [ 4 ]        [ 3 ]  +  [ 1 ]  =  [ 4 ]
  [ 5 ]     [ 3 ]     [ 8 ]        [ 3 ]     [ 5 ]     [ 8 ]

The zero vector has u_i = 0 for all i, thus w + 0 = w
For 2u: preserve direction and double length
-u is the same length as u but points in the opposite direction
Dot Products
The dot product or inner product of u = (u_1, u_2) and w = (w_1, w_2) is
the scalar quantity

  u · w = u_1 w_1 + u_2 w_2

Example: letting u = (1, 2) and w = (3, 1) gives

  u · w = (1, 2) · (3, 1) = 1*3 + 2*1 = 5

Interpretation:
  w is a position in assets 1 and 2
  Let p = (p_1, p_2) be the prices of assets 1 and 2

  V = w · p = w_1 p_1 + w_2 p_2     (value of the portfolio)
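The dot product maps directly onto R's vectorized arithmetic (R itself is introduced in the last section); a small sketch, where the holdings and prices are made-up numbers:

```r
# Dot product of u = (1, 2) and w = (3, 1)
u <- c(1, 2)
w <- c(3, 1)
sum(u * w)              # component-wise product, then sum: 1*3 + 2*1 = 5

# Portfolio value V = w . p (holdings and prices are illustrative only)
w <- c(100, 250)        # shares of assets 1 and 2
p <- c(25.50, 10.00)    # prices of assets 1 and 2
V <- sum(w * p)
V                       # 100*25.50 + 250*10.00 = 5050
```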
Lengths
The length of a vector w is

  length(w) = ||w|| = sqrt(w · w)

Example: suppose w = (w_1, w_2, w_3, w_4) has 4 components

  ||w|| = sqrt(w · w) = sqrt(w_1^2 + w_2^2 + w_3^2 + w_4^2)

The length of a vector is positive except for the zero vector, where
||0|| = 0
A unit vector is a vector whose length is equal to one: u · u = 1
A unit vector having the same direction as w ≠ 0 is

  w / ||w||
Vector Norms
A norm is a function that assigns a length to a vector
A norm must satisfy the following three conditions
 i)   ||x|| ≥ 0, and ||x|| = 0 only if x = 0
 ii)  ||x + y|| ≤ ||x|| + ||y||
 iii) ||ax|| = |a| ||x|| where a is a real number
Important class of vector norms: the p-norms

  ||x||_p = ( Σ_{i=1}^m |x_i|^p )^(1/p)     (1 ≤ p < ∞)

  ||x||_∞ = max_{1 ≤ i ≤ m} |x_i|

  ||x||_1 = Σ_{i=1}^m |x_i|
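The p-norms are one-liners with R's vectorized arithmetic; a sketch (the helper name `pnorm_vec` is ours, chosen because R already reserves `pnorm` for the normal CDF):

```r
# p-norm of a vector x: (sum of |x_i|^p)^(1/p); pnorm_vec is a made-up name
pnorm_vec <- function(x, p) sum(abs(x)^p)^(1/p)

x <- c(3, -4)
pnorm_vec(x, 1)        # 1-norm: |3| + |-4| = 7
pnorm_vec(x, 2)        # 2-norm: sqrt(9 + 16) = 5
max(abs(x))            # infinity-norm: max |x_i| = 4
```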
The Angle Between Two Vectors
The dot product v · w is zero when v is perpendicular to w
The vector v = (cos(θ), sin(θ)) is a unit vector:

  ||v|| = sqrt(v · v) = sqrt(cos^2(θ) + sin^2(θ)) = 1

Let u = (1, 4); then u / ||u|| = (cos(α), sin(α)) for some angle α
Let w = (4, 1); then w / ||w|| = (cos(β), sin(β)) for some angle β
Cosine Formula: for unit vectors u and w, u · w = cos(θ), where θ is the
angle between u and w
Schwarz Inequality: |u · w| ≤ ||u|| ||w||
Planes
So far, everything has been 2-dimensional
Everything (dot products, lengths, angles, etc.) works in higher
dimensions too
A plane is a 2-dimensional sheet that lives in 3 dimensions
Conceptually, pick a normal vector n and define the plane P to be all
vectors perpendicular to n
If a vector v = (x, y, z) is in P then

  n · v = 0

However, since n · 0 = 0, the zero vector is always in this P
The equation of the plane passing through v_0 = (x_0, y_0, z_0) and
normal to n is

  n · (v - v_0) = n_1 (x - x_0) + n_2 (y - y_0) + n_3 (z - z_0) = 0
Planes (continued)
Every plane normal to n has a linear equation with
coefficients n_1, n_2, n_3:

  n_1 x + n_2 y + n_3 z = n_1 x_0 + n_2 y_0 + n_3 z_0 = d

Different values of d give parallel planes
The value d = 0 gives a plane through the origin
Systems of Linear Equations
Want to solve 3 equations in 3 unknowns

   x + 2y + 3z = 6
  2x + 5y + 2z = 4
  6x - 3y +  z = 2

Row picture:

  (1,  2, 3) · (x, y, z) = 6
  (2,  5, 2) · (x, y, z) = 4
  (6, -3, 1) · (x, y, z) = 2

Column picture:

    [ 1 ]     [  2 ]     [ 3 ]   [ 6 ]
  x [ 2 ] + y [  5 ] + z [ 2 ] = [ 4 ]
    [ 6 ]     [ -3 ]     [ 1 ]   [ 2 ]
Systems of Linear Equations
[Scanned textbook excerpt discussing the same system. Row picture: three
planes meeting at a single point. The first plane x + 2y + 3z = 6 crosses
the axes at (6,0,0), (0,3,0), and (0,0,2); it does not pass through the
origin since its right side is 6, not zero. The first two planes intersect
in a line through (0,0,2), and the third plane cuts that line at a single
point, which lies on all three planes: the solution (x, y, z) = (0, 0, 2).
Column picture: combinations of the columns on the left side produce the
right side; here 2 times the third column equals b = (6, 4, 2). The nine
coefficients fill the 3 × 3 coefficient matrix A.]
Matrix Form
Stacking rows or binding columns gives the coefficient matrix

      [ 1  2  3 ]
  A = [ 2  5  2 ]
      [ 6 -3  1 ]

Matrix notation for the system of 3 equations in 3 unknowns

  [ 1  2  3 ] [ x ]   [ 6 ]
  [ 2  5  2 ] [ y ] = [ 4 ]      is    Av = b
  [ 6 -3  1 ] [ z ]   [ 2 ]

where v = (x, y, z) and b = (6, 4, 2)
The left-hand side multiplies A times the unknowns v to get b
The multiplication rule must give a correct representation of the
original system
Matrix-Vector Multiplication
Row picture multiplication

       [ (row 1) · v ]   [ (row 1) · (x, y, z) ]
  Av = [ (row 2) · v ] = [ (row 2) · (x, y, z) ]
       [ (row 3) · v ]   [ (row 3) · (x, y, z) ]

Column picture multiplication

  Av = x (column 1) + y (column 2) + z (column 3)

Examples:

       [ 1 0 0 ] [ 4 ]   [ 4 ]
  Av = [ 1 0 0 ] [ 5 ] = [ 4 ]
       [ 1 0 0 ] [ 6 ]   [ 4 ]

       [ 1 0 0 ] [ 4 ]   [ 4 ]
  Av = [ 0 1 0 ] [ 5 ] = [ 5 ]
       [ 0 0 1 ] [ 6 ]   [ 6 ]
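Both pictures can be checked numerically in R (matrix products use `%*%`, shown in the last section); a sketch using the 3 × 3 system from these slides:

```r
# Coefficient matrix A from the slides; matrix() fills column by column
A <- matrix(c(1, 2, 6,  2, 5, -3,  3, 2, 1), 3, 3)
v <- c(0, 0, 2)                       # the solution discussed in the text

# Row picture: each entry of Av is (row i) . v
rowpic <- c(A[1, ] %*% v, A[2, ] %*% v, A[3, ] %*% v)

# Column picture: Av is a linear combination of the columns of A
colpic <- v[1] * A[, 1] + v[2] * A[, 2] + v[3] * A[, 3]

rowpic                                # 6 4 2 -- the right-hand side b
colpic                                # the same vector, built column-wise
```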
Example

  3x - y = 3               [ 3 -1 ] [ x ]   [ 3 ]
   x + y = 5      <=>      [ 1  1 ] [ y ] = [ 5 ]

Row Picture
[Figure: the lines 3x - y = 3 and x + y = 5 intersect at the point (2, 3)]
Rows:

  3(2) - (3) = 3
   (2) + (3) = 5

Column Picture
[Figure: 2 (col 1) + 3 (col 2) = b]
Columns:

  2 [ 3 ] + 3 [ -1 ] = [ 3 ]
    [ 1 ]     [  1 ]   [ 5 ]

Matrix:

  [ 3 -1 ] [ 2 ]   [ 3 ]
  [ 1  1 ] [ 3 ] = [ 5 ]
Systems of Equations
For an equal number of equations and unknowns, there is usually one
solution
Not guaranteed; in particular there may be
  no solution (e.g., when the lines are parallel)
  infinitely many solutions (e.g., two equations for the same line)
Elimination
Want to solve the system of equations

   x - 2y = 1
  3x + 2y = 11

High school algebra approach
  solve for x:  x = 2y + 1
  eliminate x:  3(2y + 1) + 2y = 11
  solve for y:  8y + 3 = 11  =>  y = 1
  solve for x:  x = 3
Elimination

  1x - 2y = 1
  3x + 2y = 11

Terminology
  Pivot: the first nonzero in the equation (row) that does the
  elimination
  Multiplier: (number to eliminate) / (pivot)
How was x eliminated?

      3x + 2y = 11
  -3 [1x - 2y = 1 ]
  -----------------
      0x + 8y = 8

Elimination: subtract a multiple of one equation from another
Idea: use elimination to make an upper triangular system
Elimination
An upper triangular system of equations

  1x - 2y = 1
  0x + 8y = 8

Solve for x and y using back substitution:
  solve for y
  use y to solve for x
Elimination Using Matrices
The system of 3 equations in 3 unknowns can be written in the
matrix form Ax = b

   2x_1 + 4x_2 - 2x_3 =  2
   4x_1 + 9x_2 - 3x_3 =  8
  -2x_1 - 3x_2 + 7x_3 = 10

      [  2  4 -2 ]       [ x_1 ]       [  2 ]
  A = [  4  9 -3 ],  x = [ x_2 ],  b = [  8 ]
      [ -2 -3  7 ]       [ x_3 ]       [ 10 ]

The unknown is x = (x_1, x_2, x_3) and the solution is (-1, 2, 2)
Ax = b represents the row form and the column form of the system
Can multiply Ax a column at a time

            [  2 ]     [  4 ]     [ -2 ]   [  2 ]
  Ax = (-1) [  4 ] + 2 [  9 ] + 2 [ -3 ] = [  8 ]
            [ -2 ]     [ -3 ]     [  7 ]   [ 10 ]
Elimination Using Matrices
Can represent the original equation as Ax = b
What about the elimination steps?
Start by subtracting 2 times the first equation from the second
Use the elimination matrix

      [  1  0  0 ]
  E = [ -2  1  0 ]
      [  0  0  1 ]

The right-hand side Eb becomes

  [  1  0  0 ] [ b_1 ]   [ b_1        ]      [  1  0  0 ] [  2 ]   [  2 ]
  [ -2  1  0 ] [ b_2 ] = [ b_2 - 2b_1 ]      [ -2  1  0 ] [  8 ] = [  4 ]
  [  0  0  1 ] [ b_3 ]   [ b_3        ]      [  0  0  1 ] [ 10 ]   [ 10 ]
Two Important Matrices
The identity matrix has 1s on the diagonal and 0s everywhere else

      [ 1 0 0 ]
  I = [ 0 1 0 ]
      [ 0 0 1 ]

The elimination matrix that subtracts a multiple l of row j from row i
has an additional nonzero entry -l in the (i, j) position

             [  1  0  0 ]
  E_31(l) =  [  0  1  0 ]
             [ -l  0  1 ]

Examples

              [ b_1        ]         [ 1 0 0 ] [ b_1 ]   [ b_1 ]
  E_21(2) b = [ b_2 - 2b_1 ]    Ib = [ 0 1 0 ] [ b_2 ] = [ b_2 ]
              [ b_3        ]         [ 0 0 1 ] [ b_3 ]   [ b_3 ]
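An elimination matrix is just an identity matrix with one extra entry, so it is easy to build and apply in R; a sketch of E_21(2) acting on b:

```r
# E_21(2): subtract 2 * row 1 from row 2 (identity with -2 in position (2,1))
E21 <- diag(3)
E21[2, 1] <- -2

b <- c(2, 8, 10)
E21 %*% b              # (b_1, b_2 - 2*b_1, b_3) = (2, 4, 10)

# The identity matrix leaves b unchanged
diag(3) %*% b          # (2, 8, 10)
```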
Matrix Multiplication
Have linear system Ax = b and elimination matrix E
One elimination step (introduce one 0 below the diagonal)

  EAx = Eb

Know how to compute the right-hand side Eb
Since E is an elimination matrix, also know the answer to EA
Column view:
The matrix A is composed of n columns a_1, a_2, . . . , a_n
The columns of the product EA are

  EA = [ Ea_1, Ea_2, . . . , Ea_n ]
Rules for Matrix Operations
A matrix is a rectangular array of numbers
An m × n matrix A has m rows and n columns
The entries are denoted by a_ij

      [ a_11 ... a_1n ]
  A = [  ...      ... ]
      [ a_m1 ... a_mn ]

Matrices can be added when their dimensions are the same
A matrix can be multiplied by a scalar value c

  [ 1 2 ]   [ 2 2 ]   [ 3 4 ]          [ 1 2 ]   [ 2 4 ]
  [ 3 4 ] + [ 4 4 ] = [ 7 8 ]        2 [ 3 4 ] = [ 6 8 ]
  [ 0 0 ]   [ 9 9 ]   [ 9 9 ]          [ 0 0 ]   [ 0 0 ]
Rules for Matrix Multiplication
Matrix multiplication is a bit more difficult
To multiply a matrix A times a matrix B:
  # columns of A = # rows of B
Let A be an m × n matrix and B an n × p matrix

  (m × n) (n × p) = (m × p)
     A       B        AB

The dot product is the extreme case; let u = (u_1, u_2) and w = (w_1, w_2)

  u · w = u^T w = [ u_1  u_2 ] [ w_1 ] = u_1 w_1 + u_2 w_2
                               [ w_2 ]
Matrix Multiplication
The matrix product AB contains the dot products of the rows of A
and the columns of B

  (AB)_ij = (row i of A) · (column j of B)

Matrix multiplication formula: let C = AB

  c_ij = Σ_{k=1}^n a_ik b_kj

Example

  [ 1  1 ] [ 2 2 ]   [ 5 6 ]
  [ 2 -1 ] [ 3 4 ] = [ 1 0 ]

Computational complexity: {n multiplications, n - 1 additions} per entry
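The entry formula can be checked against R's built-in `%*%` operator; a sketch that computes each c_ij by hand and compares:

```r
A <- matrix(c(1, 2, 1, -1), 2, 2)    # [1 1; 2 -1], filled column by column
B <- matrix(c(2, 3, 2, 4), 2, 2)     # [2 2; 3 4]

# c_ij = sum over k of a_ik * b_kj, computed entry by entry
C <- matrix(0, 2, 2)
for (i in 1:2)
  for (j in 1:2)
    C[i, j] <- sum(A[i, ] * B[, j])

C                                    # [5 6; 1 0]
all(C == A %*% B)                    # TRUE: matches built-in multiplication
```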
Matrix Multiplication
An inner product is a row times a column
A column times a row is an outer product

  [ 1 ]              [ 3 2 1 ]
  [ 2 ] [ 3 2 1 ] =  [ 6 4 2 ]
  [ 3 ]              [ 9 6 3 ]

Each column of AB is a linear combination of the columns of A

  [ 1 2 3 ]
  [ 4 5 6 ] [ column j of B ] = [ column j of AB ]
  [ 7 8 9 ]

Rows of AB are linear combinations of the rows of B

                  [ 1 2 3 ]
  [ row i of A ]  [ 4 5 6 ] = [ row i of AB ]
                  [ 7 8 9 ]
Laws for Matrix Operations
Laws for Addition
1. A + B = B + A (commutative law)
2. c(A + B) = cA + cB (distributive law)
3. A + (B + C) = (A + B) + C (associative law)
Laws for Multiplication
1. C(A + B) = CA + CB (distributive law from left)
2. (A + B)C = AC + BC (distributive law from right)
3. A(BC) = (AB)C (associative law; parentheses not needed)
Laws for Matrix Operations
Caveat: there is one law we don't get

  AB ≠ BA   (in general)

BA exists only when p = m
If A is an m × n matrix and B is n × m
  AB is an m × m matrix
  BA is an n × n matrix
Even when A and B are square matrices . . .

  AB = [ 0 0 ] [ 0 1 ] = [ 0 0 ]    but    BA = [ 0 1 ] [ 0 0 ] = [ 1 0 ]
       [ 1 0 ] [ 0 0 ]   [ 0 1 ]               [ 0 0 ] [ 1 0 ]   [ 0 0 ]

Square matrices always commute multiplicatively with cI
Matrix powers commute and follow the same rules as numbers

  (A^p)(A^q) = A^(p+q)     (A^p)^q = A^(pq)     A^0 = I
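The non-commutativity example above is quick to reproduce in R; a sketch:

```r
A <- matrix(c(0, 1, 0, 0), 2, 2)     # [0 0; 1 0], filled column by column
B <- matrix(c(0, 0, 1, 0), 2, 2)     # [0 1; 0 0]

A %*% B                              # [0 0; 0 1]
B %*% A                              # [1 0; 0 0] -- a different matrix
identical(A %*% B, B %*% A)          # FALSE: AB != BA in general
```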
Block Matrices/Block Multiplication
A matrix may be broken into blocks (which are smaller matrices)

      [ 1 0 1 0 1 0 ]
  A = [ 0 1 0 1 0 1 ] = [ I I I ]
      [ 1 0 1 0 1 0 ]   [ I I I ]
      [ 0 1 0 1 0 1 ]

Addition/multiplication allowed when block dimensions are appropriate

  [ A_11 A_12 ] [ B_11 ... ]   [ A_11 B_11 + A_12 B_21 ... ]
  [ A_21 A_22 ] [ B_21 ... ] = [ A_21 B_11 + A_22 B_21 ... ]

Let the blocks of A be its columns and the blocks of B be its rows

                          [ b_1^T ]
  AB = [ a_1 ... a_n ]    [  ...  ]   =  Σ_{i=1}^n a_i b_i^T
          (m × n)         [ b_n^T ]      (each term m × p)
                           (n × p)
Elimination in Practice
Solve the following system using elimination

   2x_1 + 4x_2 - 2x_3 =  2        [  2  4 -2 ] [ x_1 ]   [  2 ]
   4x_1 + 9x_2 - 3x_3 =  8        [  4  9 -3 ] [ x_2 ] = [  8 ]
  -2x_1 - 3x_2 + 7x_3 = 10        [ -2 -3  7 ] [ x_3 ]   [ 10 ]

Augment A: the augmented matrix A' is

                  [  2  4 -2 |  2 ]
  A' = [ A b ] =  [  4  9 -3 |  8 ]
                  [ -2 -3  7 | 10 ]

Strategy: find the pivot in the first row and eliminate the values
below it
Example (continued)
E^(1) = E_21(2) subtracts twice the first row from the second

          [  1  0  0 ] [  2  4 -2  2 ]   [  2  4 -2  2 ]
  A^(1) = [ -2  1  0 ] [  4  9 -3  8 ] = [  0  1  1  4 ]
          [  0  0  1 ] [ -2 -3  7 10 ]   [ -2 -3  7 10 ]
             E^(1)            A'

E^(2) = E_31(-1) adds the first row to the third

          [ 1  0  0 ] [  2  4 -2  2 ]   [ 2  4 -2  2 ]
  A^(2) = [ 0  1  0 ] [  0  1  1  4 ] = [ 0  1  1  4 ]
          [ 1  0  1 ] [ -2 -3  7 10 ]   [ 0  1  5 12 ]
             E^(2)          A^(1)

Strategy continued: find the pivot in the second row and eliminate
the values below it
Example (continued)
E^(3) = E_32(1) subtracts the second row from the third

          [ 1  0  0 ] [ 2  4 -2  2 ]   [ 2  4 -2  2 ]
  A^(3) = [ 0  1  0 ] [ 0  1  1  4 ] = [ 0  1  1  4 ]
          [ 0 -1  1 ] [ 0  1  5 12 ]   [ 0  0  4  8 ]
             E^(3)          A^(2)

Use back substitution to solve

  4x_3 = 8  =>  x_3 = 2
  x_2 + x_3 = 4  =>  x_2 + 2 = 4  =>  x_2 = 2
  2x_1 + 4x_2 - 2x_3 = 2  =>  2x_1 + 8 - 4 = 2  =>  x_1 = -1

Solution x = (-1, 2, 2) solves the original system Ax = b
Caveats:
  May have to swap rows during elimination
  The system is singular if there is a row with no pivot
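The three elimination steps can be replayed in R on the augmented matrix, finishing with `backsolve` for the back substitution; a sketch:

```r
A  <- matrix(c(2, 4, -2,  4, 9, -3,  -2, -3, 7), 3, 3)
b  <- c(2, 8, 10)
Ab <- cbind(A, b)                    # augmented matrix [A b]

E1 <- diag(3); E1[2, 1] <- -2        # subtract 2 * row 1 from row 2
E2 <- diag(3); E2[3, 1] <-  1        # add row 1 to row 3
E3 <- diag(3); E3[3, 2] <- -1        # subtract row 2 from row 3

U_aug <- E3 %*% E2 %*% E1 %*% Ab
U_aug                                # rows (2 4 -2 2), (0 1 1 4), (0 0 4 8)

# Back substitution on the upper triangular part recovers x = (-1, 2, 2)
backsolve(U_aug[, 1:3], U_aug[, 4])
```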
Inverse Matrices
A square matrix A is invertible if there exists A^(-1) such that

  A^(-1) A = I   and   A A^(-1) = I

The inverse (if it exists) is unique: let BA = I and AC = I, then

  B(AC) = (BA)C  =>  BI = IC  =>  B = C

If A is invertible, the unique solution to Ax = b is

  Ax = b
  A^(-1) A x = A^(-1) b
  x = A^(-1) b

If there is a vector x ≠ 0 such that Ax = 0, then A is not invertible;
otherwise

  x = Ix = A^(-1) A x = A^(-1) (Ax) = A^(-1) 0 = 0

would contradict x ≠ 0
Inverse Matrices
A 2 × 2 matrix is invertible iff ad - bc ≠ 0

  A^(-1) = [ a b ]^(-1) = (1 / (ad - bc)) [  d -b ]
           [ c d ]                        [ -c  a ]

The number ad - bc is called the determinant of A
A matrix is invertible if its determinant is not equal to zero
A diagonal matrix is invertible when none of the diagonal entries are
zero

      [ d_1         ]             [ 1/d_1           ]
  A = [     ...     ]  =>  A^(-1) = [       ...       ]
      [         d_n ]             [           1/d_n ]
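The 2 × 2 formula can be compared with R's built-in `solve`, which returns the inverse when called with a single matrix argument; a sketch with a made-up invertible matrix:

```r
A <- matrix(c(3, 1, -1, 1), 2, 2)    # [3 -1; 1 1], det = 3*1 - (-1)*1 = 4

det_A <- A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1]
A_inv <- matrix(c(A[2, 2], -A[2, 1], -A[1, 2], A[1, 1]), 2, 2) / det_A

A_inv %*% A                          # the 2 x 2 identity
all.equal(A_inv, solve(A))           # TRUE: matches R's built-in inverse
```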
Inverse of a Product
If A and B are invertible then so is the product AB

  (AB)^(-1) = B^(-1) A^(-1)

Easy to verify

  (B^(-1) A^(-1)) (AB) = B^(-1) (A^(-1) A) B = B^(-1) I B = B^(-1) B = I
  (AB) (B^(-1) A^(-1)) = A (B B^(-1)) A^(-1) = A I A^(-1) = A A^(-1) = I

Same idea works for longer matrix products

  (ABC)^(-1) = C^(-1) B^(-1) A^(-1)
Calculation of A^(-1)
Want to find A^(-1) such that A A^(-1) = I
Let

  e_1 = (1, 0, 0),  e_2 = (0, 1, 0),  e_3 = (0, 0, 1)

so that [ e_1 e_2 e_3 ] = I
Let x_1, x_2 and x_3 be the columns of A^(-1); then

  A A^(-1) = A [ x_1 x_2 x_3 ] = [ e_1 e_2 e_3 ] = I

Have to solve 3 systems of equations

  Ax_1 = e_1,   Ax_2 = e_2,   and   Ax_3 = e_3

Computing A^(-1) is about three times as much work as solving Ax = b
Worst case:
  the Gauss-Jordan method requires n^3 elimination steps
  compare to solving Ax = b, which requires n^3 / 3
Singular versus Invertible
Let A be an n × n matrix
With n pivots, can solve the n systems

  Ax_i = e_i,   i = 1, . . . , n

The solutions x_i are the columns of A^(-1)
In fact, elimination gives a complete test for A^(-1) to exist: there
must be n pivots
Elimination = Factorization
Key idea in linear algebra: factorization of matrices
Look closely at the 2 × 2 case:

  E_21(3) A = [  1  0 ] [ 2 1 ] = [ 2 1 ] = U
              [ -3  1 ] [ 6 8 ]   [ 0 5 ]

  E_21^(-1)(3) U = [ 1  0 ] [ 2 1 ] = [ 2 1 ] = A
                   [ 3  1 ] [ 0 5 ]   [ 6 8 ]

Notice E_21^(-1)(3) is lower triangular => call it L

  A = LU     L lower triangular, U upper triangular

For a 3 × 3 matrix:

  (E_32 E_31 E_21) A = U   becomes   A = (E_21^(-1) E_31^(-1) E_32^(-1)) U = LU

(products of lower triangular matrices are lower triangular)
Seems Too Good To Be True . . . But Is
The strict lower triangular entries of L are the elimination multipliers

  l_ij = multiplier of E_ij(m_ij) = m_ij

Recall the elimination example:
  1. E_21(2): subtract twice the first row from the second
  2. E_31(-1): subtract minus the first row from the third
  3. E_32(1): subtract the second row from the third

  [ 1  0  0 ] [  1  0  0 ] [ 1  0  0 ]   [  1  0  0 ]
  [ 2  1  0 ] [  0  1  0 ] [ 0  1  0 ] = [  2  1  0 ] = L
  [ 0  0  1 ] [ -1  0  1 ] [ 0  1  1 ]   [ -1  1  1 ]
  E_21^(-1)(2)  E_31^(-1)(-1)  E_32^(-1)(1)
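That L and the U from the elimination example really multiply back to A, which is easy to confirm in R; a sketch:

```r
A <- matrix(c(2, 4, -2,  4, 9, -3,  -2, -3, 7), 3, 3)

# L holds the multipliers 2, -1, 1 below a unit diagonal
L <- diag(3)
L[2, 1] <- 2; L[3, 1] <- -1; L[3, 2] <- 1

# U is the upper triangular result of elimination, filled column by column
U <- matrix(c(2, 0, 0,  4, 1, 0,  -2, 1, 4), 3, 3)

all(L %*% U == A)                    # TRUE: elimination really factored A
```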
One Square System = Two Triangular Systems
Many computer programs solve Ax = b in two steps
 i. Factor A into L and U
 ii. Solve: use L, U, and b to find x
    Solve Lc = b, then solve Ux = c
    (Lc = b by forward substitution; Ux = c by back substitution)
Can see that the answer is correct by premultiplying Ux = c by L

  Ux = c
  L(Ux) = Lc
  (LU)x = b
  Ax = b
Example
Solve the system represented in matrix form by

  [ 2 2 ] [ x_1 ]   [  8 ]
  [ 4 9 ] [ x_2 ] = [ 21 ]

Elimination (multiplier = 2) step:

  [ 2 2 ] [ x_1 ]   [ 8 ]
  [ 0 5 ] [ x_2 ] = [ 5 ]

Lower triangular system:

  [ 1 0 ] [ c_1 ]   [  8 ]            [ 8 ]
  [ 2 1 ] [ c_2 ] = [ 21 ]   =>  c =  [ 5 ]

Upper triangular system:

  [ 2 2 ] [ x_1 ]   [ 8 ]             [ 3 ]
  [ 0 5 ] [ x_2 ] = [ 5 ]    =>  x =  [ 1 ]
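The two triangular solves map onto base R's `forwardsolve` and `backsolve`; a sketch reproducing the example:

```r
L <- matrix(c(1, 2, 0, 1), 2, 2)     # [1 0; 2 1], filled column by column
U <- matrix(c(2, 0, 2, 5), 2, 2)     # [2 2; 0 5]
b <- c(8, 21)

c_vec <- forwardsolve(L, b)          # Lc = b by forward substitution: (8, 5)
x <- backsolve(U, c_vec)             # Ux = c by back substitution: (3, 1)
x
```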
LU Factorization
Elimination factors A into LU
The upper triangular U has the pivots on its diagonal
The lower triangular L has ones on its diagonal
L has the multipliers l_ij below the diagonal

Computational Cost of Elimination
Let A be an n × n matrix
Elimination on A requires about (1/3) n^3 multiplications and
(1/3) n^3 subtractions
Storage Cost of LU Factorization
Suppose we factor

      [ a_11 a_12 a_13 ]
  A = [ a_21 a_22 a_23 ]
      [ a_31 a_32 a_33 ]

into

      [  1    0    0 ]             [ d_1 u_12 u_13 ]
  L = [ l_21  1    0 ]   and   U = [  0  d_2  u_23 ]
      [ l_31 l_32  1 ]             [  0   0   d_3  ]

(d_1, d_2, d_3 are the pivots)
Can write L and U in the space that initially stored A

             [ d_1  u_12 u_13 ]
  L and U =  [ l_21 d_2  u_23 ]
             [ l_31 l_32 d_3  ]
The R Environment for Statistical Computing
What is R?
R is a language and environment for statistical computing and graphics
R offers (among other things):
  a data handling and storage facility
  a suite of operators for calculations on arrays, in particular matrices
  a well-developed, simple and effective programming language that includes
    conditionals
    loops
    user-defined recursive functions
    input and output facilities
R is free software
http://www.r-project.org
The R Application
[Screenshot: the R application console window]
R Environment for Statistical Computing
R as a calculator
R commands in the lecture slides look like this
> 1 + 1
and the output looks like this
[1] 2
When running R, the console will look like this
> 1 + 1
[1] 2
Getting help # and commenting your code
> help("c") # ?c does the same thing
Creating Vectors
Several ways to create vectors in R, some of the more common:
> c(34, 12, 65, 24, 15)
[1] 34 12 65 24 15
> -3:7
[1] -3 -2 -1 0 1 2 3 4 5 6 7
> seq(from = 0, to = 1, by = 0.05)
[1] 0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40
[10] 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85
[19] 0.90 0.95 1.00
Can save the result of one computation to use an input in another:
> x <- c(24, 30, 41, 16, 8)
> x
[1] 24 30 41 16 8
Manipulating Vectors
Use square brackets to access components of a vector
> x
[1] 24 30 41 16 8
> x[3]
[1] 41
The argument in the square brackets can be a vector
> x[c(1,2,4)]
[1] 24 30 16
Can also use for assignment
> x[c(1,2,4)] <- -1
> x
[1] -1 -1 41 -1 8
Vector Arithmetic
Let x and y be vectors of equal length
> x <- c(6, 12, 4, 5, 14, 2, 16, 20)
> y <- 1:8
Use + to add vectors (+, -, *, / are component-wise functions)
> x + y
[1] 7 14 7 9 19 8 23 28
Many functions work component-wise
> log(x)
[1] 1.792 2.485 1.386 1.609 2.639 0.693 2.773 2.996
Can scale and shift a vector
> 2*x - 3
[1] 9 21 5 7 25 1 29 37
Creating Matrices
Can use the matrix function to shape a vector into a matrix
> x <- 1:16
> matrix(x, 4, 4)
[,1] [,2] [,3] [,4]
[1,] 1 5 9 13
[2,] 2 6 10 14
[3,] 3 7 11 15
[4,] 4 8 12 16
Alternatively, can ll in row-by-row
> matrix(x, 4, 4, byrow = TRUE)
[,1] [,2] [,3] [,4]
[1,] 1 2 3 4
[2,] 5 6 7 8
[3,] 9 10 11 12
[4,] 13 14 15 16
Manipulating Matrices
Create a 3 × 3 matrix A
> A <- matrix(1:9, 3, 3)
> A
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
[3,] 3 6 9
Use square brackets with 2 arguments (row, column) to access entries
of a matrix
> A[2, 3]
[1] 8
Manipulating Matrices
Can select multiple rows and/or columns
> A[1:2, 2:3]
[,1] [,2]
[1,] 4 7
[2,] 5 8
Leave an argument empty to select all
> A[1:2, ]
[,1] [,2] [,3]
[1,] 1 4 7
[2,] 2 5 8
Use the t function to transpose a matrix
> t(A)
[,1] [,2] [,3]
[1,] 1 2 3
[2,] 4 5 6
[3,] 7 8 9
Dot Products
Warning R always considers * to be component-wise multiplication
Let x and y be vectors containing n components
> x <- 4:1
> y <- 1:4
> x * y
[1] 4 6 6 4
For the dot product of two vectors, use the %*% function
> x %*% y
[,1]
[1,] 20
Sanity check
> sum(x * y)
[1] 20
Matrix-Vector and Matrix-Matrix Multiplication
Let x be a vector of n components
Let A be an n × n matrix and B be an n × p matrix (p ≠ n)
The operation
> x %*% A
treats x as a row vector so the dimensions are conformable
The operation
> A %*% x
treats x as a column vector
The operation
> A %*% B
gives the matrix product AB
The operation
> B %*% A
causes an error because the dimensions are not conformable
Solving Systems of Equations
Recall the system . . .

      [ -1 ]              [  2  4 -2 ] [ x_1 ]   [  2 ]
  x = [  2 ]    solves    [  4  9 -3 ] [ x_2 ] = [  8 ]
      [  2 ]              [ -2 -3  7 ] [ x_3 ]   [ 10 ]
Can solve in R using the solve function
> A <- matrix(c(2, 4, -2, 4, 9, -3, -2, -3, 7), 3, 3)
> b <- c(2, 8, 10)
> solve(A, b)
[1] -1 2 2
http://computational-finance.uw.edu