2. For describing how to compute the result of any given linear operation acting on a vector (e.g., finding the components with respect to some basis of the force acting on an object having some velocity).
These are two completely different things. Do not confuse them even though the same computational apparatus (i.e., matrices) is used for both. For example, if you confuse rotating a vector
with using a basis constructed by rotating the original basis, you are likely to discover that your
computations have everything spinning backwards.
Throughout this set of notes, K , L , M and N are positive integers.
4.1 Basics
An M×N matrix A is a rectangular array of entries arranged in M rows and N columns,

    A = \begin{bmatrix} a_{11} & \cdots & a_{1N} \\ a_{21} & \cdots & a_{2N} \\ a_{31} & \cdots & a_{3N} \\ \vdots & & \vdots \\ a_{M1} & \cdots & a_{MN} \end{bmatrix}
As indicated, I will try to use boldface, uppercase letters to denote matrices. We will use two notations for the (i, j)th entry of A (i.e., the thing in the i th row and j th column):

    (i, j)th entry of A = a_{ij} = [A]_{ij}

Until further notice, assume the thing in each entry is a scalar (i.e., a real or complex number). Later, we'll use such things as functions, operators, and even other vectors and matrices.
9/22/2013
The matrix A is a row matrix if and only if it consists of just one row, and B is a column
matrix if and only if it consists of just one column. In such cases we will normally simplify the
indices in the obvious way,
    A = [ a_1  a_2  a_3  \cdots  a_N ]

and

    B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_N \end{bmatrix}
Basic Algebra
Presumably, you are already acquainted with matrix equality, addition and multiplication. So all we'll do here is express those concepts using our [·]_{jk} notation:
Matrix Equality Two matrices A and B are equal (and we write A = B ) if and only if both of the following hold:

(a) A and B have the same number of rows and the same number of columns (say, M rows and N columns).

(b) [A]_{jk} = [B]_{jk} for j = 1, ..., M and k = 1, ..., N.
Matrix Addition and Scalar Multiplication Assuming A and B are both M×N matrices, and α and β are scalars, then αA + βB is the M×N matrix with entries

    [αA + βB]_{jk} = α[A]_{jk} + β[B]_{jk}

for j = 1, ..., M and k = 1, ..., N.
Matrix Multiplication If A is an L×M matrix and B is an M×N matrix, then AB is the L×N matrix whose (j, k)th entry is the product of the j th row of A with the k th column of B,

    [AB]_{jk} = [ a_{j1}  a_{j2}  a_{j3}  \cdots  a_{jM} ] \begin{bmatrix} b_{1k} \\ b_{2k} \\ b_{3k} \\ \vdots \\ b_{Mk} \end{bmatrix} = \sum_{m=1}^{M} a_{jm} b_{mk} = \sum_{m=1}^{M} [A]_{jm} [B]_{mk}
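As a quick numerical sanity check (my own illustration, not part of the original notes; the name `product_entry` is made up), the entry formula above can be coded directly and compared with a library matrix product:

```python
import numpy as np

def product_entry(A, B, j, k):
    # [AB]_{jk} = sum over m of [A]_{jm} [B]_{mk}, straight from the definition
    M = A.shape[1]                    # inner dimension: columns of A = rows of B
    return sum(A[j, m] * B[m, k] for m in range(M))

A = np.array([[1, 2, 3],
              [4, 5, 6]])             # 2x3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])              # 3x2

# Every entry computed from the sum agrees with numpy's built-in product A @ B.
assert all(product_entry(A, B, j, k) == (A @ B)[j, k]
           for j in range(2) for k in range(2))
```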
If A is a row matrix, then so is AB and the above formula reduces to

    [AB]_k = \sum_{m=1}^{M} [A]_m [B]_{mk} .

If, instead, B is a column matrix, then so is AB and the above formula reduces to

    [AB]_j = \sum_{m=1}^{M} [A]_{jm} [B]_m .
Also, you cannot assume that, if the product AB is a matrix of just zeros, then either A or B must consist entirely of zeros.
?Exercise 4.1
b: Give an example of two nonzero matrices A and B for which the product AB is a matrix
of just zeros. ( A and B can have some zeros, just make sure that at least one entry in each
is nonzero.)
If A and B are both M×N matrices, then

    \sum_{j=1}^{M} \sum_{k=1}^{N} a_{jk} b_{jk} = \sum_{j=1}^{M} \sum_{k=1}^{N} [A]_{jk} [B]_{jk} ,

and if

    A = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_M \end{bmatrix} and B = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_M \end{bmatrix}

are both column matrices, this reduces to

    \langle A | B \rangle = \sum_{j} a_j b_j = \sum_{j} [A]_j [B]_j .
Complex Conjugates
The complex conjugate of A , denoted by A* , is simply the matrix obtained by taking the complex conjugate of each entry,

    [A*]_{mn} = ([A]_{mn})* .
!Example 4.1: If

    A = \begin{bmatrix} 2+3i & 6i \\ 5 & 4+4i \\ 7-2i & 8 \end{bmatrix} then A* = \begin{bmatrix} 2-3i & -6i \\ 5 & 4-4i \\ 7+2i & 8 \end{bmatrix}
We will say A is real if and only if all the entries of A are real, and we will say A is
imaginary if and only if all the entries of A are imaginary.
The following are easily verified, if not obvious:

1. (AB)* = A* B* .

2. (A*)* = A .

3. A is real ⟺ A* = A .

4. A is imaginary ⟺ A* = −A .
Transposes
The transpose of the M×N matrix A , denoted in these notes1 by A^T , is the N×M matrix whose rows are the columns of A (or, equivalently, whose columns are the rows of A ),

    [A^T]_{mn} = [A]_{nm} .
!Example 4.2: If

    A = \begin{bmatrix} 2+3i & 6i \\ 5 & 4+4i \\ 7-2i & 8 \end{bmatrix} then A^T = \begin{bmatrix} 2+3i & 5 & 7-2i \\ 6i & 4+4i & 8 \end{bmatrix}

Also, if

    |a⟩ = \begin{bmatrix} 1+2i \\ 3i \\ 5 \end{bmatrix} then |a⟩^T = [ 1+2i  3i  5 ]
If you just think about what transposing a transpose means, it should be pretty obvious that

    (A^T)^T = A .
If you think a little bit about the role of rows and columns in matrix multiplication, then you may not be surprised by the fact that

    (AB)^T = B^T A^T .

This is easily proven using our [·]_{jk} notation:

    [(AB)^T]_{jk} = [AB]_{kj} = \sum_{m=1}^{M} [A]_{km} [B]_{mj} = \sum_{m=1}^{M} [A^T]_{mk} [B^T]_{jm} = \sum_{m=1}^{M} [B^T]_{jm} [A^T]_{mk} = [B^T A^T]_{jk} ,

showing that

    [(AB)^T]_{jk} = [B^T A^T]_{jk} for every (j, k) .
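A quick numerical check of the reversed-order rule (my own example matrices, not from the notes):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])                # 3x2
B = np.array([[7, 8, 9],
              [10, 11, 12]])          # 2x3

lhs = (A @ B).T                       # transpose of the product, a 3x3 matrix
rhs = B.T @ A.T                       # product of the transposes, in reversed order
assert np.array_equal(lhs, rhs)

# The order really must reverse: A.T @ B.T is 2x2 here, a different shape
# from (A @ B).T, so it cannot possibly equal it.
assert (A.T @ B.T).shape == (2, 2)
```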
We will say that A is symmetric if and only if

    A^T = A , i.e., [A]_{jk} = [A]_{kj} ,

and that A is antisymmetric if and only if

    A^T = −A , i.e., [A]_{jk} = −[A]_{kj} .
Note that, to be symmetric or antisymmetric, the matrix must have the same number of rows as
it has columns (i.e., it must be square).
You may recall from your undergraduate linear algebra days that there is a nice theory concerning the eigenvalues and eigenvectors of real symmetric matrices. We'll extend that theory later.
Adjoints
Combining the transpose with complex conjugation yields the adjoint.2

The adjoint of the M×N matrix A , denoted by either adj(A) or A† , is the N×M matrix

    A† = (A^T)* = (A*)^T .

Using the [·]_{jk} notation,

    [A†]_{jk} = [(A^T)*]_{jk} = ([A]_{kj})* .
2 The adjoint discussed here is sometimes called the operator adjoint. You may also find reference to the classical
!Example 4.3: If

    A = \begin{bmatrix} 2+3i & 6i \\ 5 & 4+4i \\ 7-2i & 8 \end{bmatrix} then A† = \begin{bmatrix} 2-3i & 5 & 7+2i \\ -6i & 4-4i & 8 \end{bmatrix}

Also, if

    |a⟩ = \begin{bmatrix} 1+2i \\ 3i \\ 5 \end{bmatrix} then |a⟩† = [ 1-2i  -3i  5 ]
From what we've already discussed regarding complex conjugates and transposes, we immediately get the following facts:

1. (A†)† = A .

2. (AB)† = B† A† .

3. A† = A^T ⟺ A is real .

4. A† = −A^T ⟺ A is imaginary .
Any matrix that satisfies A† = A is said to be either self-adjoint or Hermitian, depending on the mood of the speaker. Such matrices are the complex analogs of symmetric real matrices and will be of great interest later.

Along the same lines, any matrix that satisfies A† = −A is said to be anti-Hermitian. I suppose you could also call them anti-self-adjoint (self-anti-adjoint?) though that is not commonly done.
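In code, the adjoint is just a conjugate transpose and the Hermitian condition is a direct comparison. A minimal sketch (my own; `adjoint` and `is_hermitian` are made-up helper names):

```python
import numpy as np

def adjoint(A):
    # A-dagger: complex-conjugate every entry, then transpose
    return A.conj().T

def is_hermitian(A):
    # self-adjoint (Hermitian) means A is square and equals its own adjoint
    return A.shape[0] == A.shape[1] and np.array_equal(A, adjoint(A))

H = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])           # real diagonal, conjugate-pair off-diagonal
M = np.array([[2 + 3j, 6j],
              [5, 4 + 4j]])           # an arbitrary complex square matrix

assert is_hermitian(H)
assert not is_hermitian(M)
assert np.array_equal(adjoint(adjoint(M)), M)   # fact 1: (A-dagger)-dagger = A
```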
4.2
Given a basis B = { b_1 , b_2 , ... , b_N } for our vector space, we denote the column matrix of components of a vector v with respect to B by |v⟩ , and we let ⟨v| be its adjoint, the row matrix

    |v⟩ = |v⟩_B = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{bmatrix} and ⟨v| = (|v⟩_B)† = [ v_1^*  v_2^*  \cdots  v_N^* ] .

Observe that

    (A |v⟩)† = ⟨v| A† .

Also note that

    ⟨v| |w⟩ = [ v_1^*  v_2^*  \cdots  v_N^* ] \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = \sum_{j=1}^{N} v_j^* w_j ,

so the vector inner product can be computed as a matrix product,

    \langle v | w \rangle = \sum_{k=1}^{N} v_k^* w_k = [ v_1^*  v_2^*  \cdots  v_N^* ] \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix} = (|v⟩_D)† |w⟩_B ,

where

    |v⟩_D = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_N \end{bmatrix} and |w⟩_B = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{bmatrix}

are the column matrices of components of v with respect to the reciprocal basis D and of w with respect to B .
This material is optional. It requires the reciprocal basis from section 2.6.
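A minimal numeric sketch of this bookkeeping, assuming an orthonormal basis so that the bra is simply the row of conjugated components (the component values here are my own examples):

```python
import numpy as np

v = np.array([1 + 2j, 3j, 5])          # components of |v> with respect to some basis
w = np.array([2, 1 - 1j, 4j])          # components of |w> with respect to the same basis

bra_v = v.conj()                        # <v| : the conjugated components
inner = bra_v @ w                       # <v|w> = sum_j v_j^* w_j as a matrix product

# Matches the explicit sum from the formula above.
assert inner == sum(v[j].conjugate() * w[j] for j in range(3))
```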
4.3 Square Matrices
For the most part, the only matrices we'll have much to do with (other than row or column matrices) are square matrices.
Two particularly important square matrices are the zero matrix O , all of whose entries are zero, and the identity matrix I , which has ones on the main diagonal and zeros everywhere else. For any N×N matrix A ,

    AO = OA = O and AI = IA = A .

For N = 3 ,

    O = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} and I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
Not all square matrices have inverses. If A has an inverse, it is said to be invertible or nonsingular.
If A does not have an inverse, we call it noninvertible or singular.
It may later be worth noting that, while the definition of the inverse requires that

    AA^{-1} = I and A^{-1}A = I ,

it can be shown that, if B is a square matrix satisfying either

    AB = I or BA = I ,

then

    AB = I and BA = I ,

and thus, B = A^{-1} .
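These facts are easy to see numerically (my own example; `numpy.linalg.inv` computes the inverse):

```python
import numpy as np

A = np.array([[2., 1.],
              [5., 3.]])               # invertible, since det(A) = 2*3 - 1*5 = 1
B = np.linalg.inv(A)

# Both products give the identity, as the definition of the inverse requires.
assert np.allclose(A @ B, np.eye(2))
assert np.allclose(B @ A, np.eye(2))

# If AB = C, then B = A^{-1} C (recover the unknown right-hand factor).
X = np.array([[1., 2.],
              [3., 4.]])
C = A @ X
assert np.allclose(np.linalg.inv(A) @ C, X)
```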
Here are a few other things you should recall about inverses (assume A is an invertible N×N matrix):

1. AB = C ⟺ B = A^{-1} C .

2. BA = C ⟺ B = C A^{-1} .

3. (A^{-1})^{-1} = A .

4.
?Exercise 4.2:
4.4 Determinants
The determinant of the N×N matrix

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix}

will be denoted by

    det(A) or det \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix} or \begin{vmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{vmatrix} ,

as seems convenient.
The determinant naturally arises when solving a system of N linear equations for N
unknowns
    a_{11} x_1 + a_{12} x_2 + \cdots + a_{1N} x_N = b_1
    a_{21} x_1 + a_{22} x_2 + \cdots + a_{2N} x_N = b_2
        \vdots                                            (4.1a)
    a_{N1} x_1 + a_{N2} x_2 + \cdots + a_{NN} x_N = b_N
where the a_{jk}'s and b_k's are known values and the x_j's are the unknowns. Do observe that we can write this system more concisely in matrix form,

    Ax = b                                                (4.1b)

where A is the N×N matrix of coefficients, and x and b are the column matrices

    x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix} and b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{bmatrix} .
It can be shown that

    det(A) x_k = det(B_k) for k = 1, 2, ..., N            (4.2)

where B_k is the matrix obtained by replacing the k th column in A with b . We can then find each x_k by dividing through by det(A) , provided det(A) ≠ 0 . (Equation (4.2) is Cramer's rule for solving system (4.1a) or (4.1b), written the way it should be but almost never is written.)
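Written this way, Cramer's rule translates into a few lines of code. The sketch below is my own (the name `cramer_solve` is made up) and is meant only to illustrate equation (4.2), not to be a practical solver:

```python
import numpy as np

def cramer_solve(A, b):
    # Solve A x = b via det(A) x_k = det(B_k), where B_k is A with
    # its k-th column replaced by b.
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0: no unique solution")
    x = np.empty(len(b))
    for k in range(len(b)):
        B_k = A.copy()
        B_k[:, k] = b                  # replace the k-th column with b
        x[k] = np.linalg.det(B_k) / det_A
    return x

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```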
?Exercise 4.3: What can you say about the possible values for the x_k's if det(A) = 0 in the above?
In theory, the definition of and formula for the determinant of any square matrix can be based on equation (4.2) as a solution to system (4.1a) or (4.1b). (Clever choices for the b vectors may help.) Instead, we will follow standard practice: I'll give you the formulas and tell you to trust me. (And if you remember how to compute determinants, as I rather expect, just skip ahead to Properties and Applications of Determinants.)
For N = 1 and N = 2 , we have the well-known formulas

    det[a_{11}] = a_{11} and \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11} a_{22} - a_{12} a_{21} .
For N > 2 , the formula for the determinant of an N×N matrix

    A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix}

can be given as the expansion by the first row,

    det(A) = \sum_{k=1}^{N} (-1)^{k-1} a_{1k} det(A_{1k}) ,

or the expansion by the first column,

    det(A) = \sum_{j=1}^{N} (-1)^{j-1} a_{j1} det(A_{j1}) ,

where A_{jk} is the (N-1)×(N-1) matrix obtained from A by deleting the j th row and k th column.
Either of the above formulas gives you the determinant of A . If you are careful with your signs, you can find the determinant via an expansion using any row or column. If you are interested3, here is a most general formula for the determinant of the above N×N matrix A :
    det(A) = \sum_{i_1=1}^{N} \sum_{i_2=1}^{N} \cdots \sum_{i_N=1}^{N} \epsilon(i_1, i_2, ..., i_N) \, a_{i_1 1} a_{i_2 2} \cdots a_{i_N N}

where \epsilon is a function of all N -tuples of the integers from 1 to N satisfying all the following:

1.

2. If i_1 = 1 , i_2 = 2 , i_3 = 3 , etc., then

    \epsilon(i_1, i_2, i_3, ..., i_N) = \epsilon(1, 2, 3, ..., N) = 1

3. If

    j_k = \begin{cases} i_k & if k ≠ K and k ≠ K+1 \\ i_{K+1} & if k = K \\ i_K & if k = K+1 \end{cases}

then

    \epsilon(j_1, j_2, j_3, ..., j_N) = -\epsilon(i_1, i_2, i_3, ..., i_N)
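For concreteness, the first-row expansion can be coded recursively (my own sketch; `det_first_row` is a made-up name, checked against `numpy.linalg.det`). Its cost grows factorially with N, which foreshadows the closing comments on practicality:

```python
import numpy as np

def det_first_row(A):
    # det(A) = sum over k of (-1)^(k-1) a_{1k} det(A_{1k}), indices 1-based;
    # with the 0-based k below, the sign becomes (-1)**k.
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)  # drop row 1, column k+1
        total += (-1) ** k * A[0, k] * det_first_row(minor)
    return total

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
assert np.isclose(det_first_row(A), np.linalg.det(A))
```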
Properties and Applications of Determinants

One can show that A is invertible if and only if

    det(A) ≠ 0 .

This is doubtlessly our favorite test for determining if A^{-1} exists. It will also probably be the most important fact about determinants that we will use.
Another property that can be derived from Cramer's rule is that, if A and B are both N×N matrices, then

    det(AB) = det(A) det(B) .                             (4.3)
3 If you are not interested, skip to Properties and Applications of Determinants below.
    det(I) = det \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix} = 1 .
From this and the observation regarding determinants of products you can easily do the next
exercise.
?Exercise 4.4: Show that, if A is invertible, then

    det(A^{-1}) = 1/det(A) .
A few other properties of the determinant are listed here. Assume A is an N×N matrix and α is some scalar.

1. det(αA) = α^N det(A) .

2. det(A*) = det(A)* .

3. det(A^T) = det(A) .

4. det(A†) = det(A)* .
?Exercise 4.5: Convince yourself of the validity of each of the above statements. (For the first, det(αA) = α^N det(A) , you might first show that det(αI) = α^N , and then consider rewriting αA as αIA and taking the determinant of that.)
Also remind yourself of the computational facts regarding determinants listed on page 87
of Arfken, Weber & Harris.
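These identities (true for any N×N complex matrix A and scalar α) are easy to spot-check numerically; the sketch below is my own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
alpha = 2.5
det = np.linalg.det

assert np.isclose(det(alpha * A), alpha**n * det(A))       # det(aA) = a^N det(A)
assert np.isclose(det(A.conj()), det(A).conjugate())       # det(A*) = det(A)*
assert np.isclose(det(A.T), det(A))                        # det(A^T) = det(A)
assert np.isclose(det(A.conj().T), det(A).conjugate())     # det(A-dagger) = det(A)*
```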
Finally, let me comment on the importance of Cramer's rule: It can be viewed as being very important, theoretically, for the way it relates determinants to the solving of linear systems. However, as a practical tool for solving these systems, it is quite unimportant. The number of computations required to find all the determinants is horrendous, especially if the system is large. Other methods, such as the Gaussian elimination/row reduction learned in your undergraduate linear algebra course, are much faster and easier to use.