
Linear Algebra: A Complete Course

by Luke S. Cole

Version 1.7, July 2000

Contents

1 Linear Equations in Linear Algebra
  1.1 Systems of Linear Equations
  1.2 Matrix Notation
  1.3 Solving a Linear System
2 Row Reduction and Echelon Forms
  2.1 Some Terms for Matrices
  2.2 Solutions to Linear Systems
  2.3 Back Substitution (Gaussian Elimination)
3 Vector Equations
  3.1 Addition of Vectors
  3.2 Subtraction of Vectors
  3.3 Linear Combinations
4 The Matrix Equation A~x = ~b
  4.1 Properties
5 Solution Sets of Linear Systems
  5.1 Parametric Vector Form
  5.2 Homogeneous Linear Systems
    5.2.1 Theorems
6 Linear Independence
  6.1 Linear Independence
    6.1.1 Definition
  6.2 Linear Dependence
    6.2.1 Definitions
7 Linear Transformations
  7.1 Properties
  7.2 Terms
    7.2.1 A Shear Transformation
    7.2.2 A Dilation Transformation
    7.2.3 A Reflected Transformation
    7.2.4 A Rotated Transformation
    7.2.5 A Projection Transformation
  7.3 One-to-one
    7.3.1 Properties
8 Matrix Operations
  8.1 Sums and Scalar Multiples
    8.1.1 Properties
  8.2 Multiplication
    8.2.1 Definition
    8.2.2 Properties
  8.3 Power of a Matrix
  8.4 The Transpose of a Matrix
    8.4.1 Properties
9 The Inverse of a Matrix
  9.1 Properties
  9.2 Elementary Matrices
    9.2.1 Properties
  9.3 An Algorithm for Finding A−1
  9.4 Characterizations of Invertible Matrices
  9.5 Invertible Linear Transformations
    9.5.1 Theorem
10 Subspaces of R^n
  10.1 Definition
  10.2 Column Space of a Matrix
    10.2.1 Definition
  10.3 Null Space of a Matrix
    10.3.1 Definition
  10.4 Basis for a Subspace
    10.4.1 Definition
  10.5 The Dimension of a Subspace
    10.5.1 Definitions
    10.5.2 Theorems
11 Determinants
  11.1 Definition
  11.2 Theorems
  11.3 Properties
  11.4 Cramer's Rule
  11.5 A Formula for the Inverse of A

1 Linear Equations in Linear Algebra

1.1 Systems of Linear Equations

A linear equation with variables x1, ..., xn is an equation that can be written in the form:

a1.x1 + a2.x2 + ... + an.xn = b        (1)

where,
b ∈ R or C
a1, ..., an ∈ R or C
n ∈ Z+

The equations

4.x1 − 5.x2 + 2 = x1   and   x2 = 2(√6 − x1) + x3

are both linear because they can be rearranged algebraically into the form of equation (1).

The equations

4.x1 − 5.x2 = x1.x2   and   x2 = 2.√x1 − 6

are both not linear because of the presence of x1.x2 in the first equation and √x1 in the second.

A system of linear equations (or a linear system) is a collection of one or more linear equations involving the same variables.

1.2 Matrix Notation

E.g. For two linear equations

2.x1 − x2 + 1.5.x3 = 8
x1 − 4.x3 = −7

or

[ 2 ]      [ 4 ]      [ 2 ]
[ 4 ] x1 + [ 9 ] x2 = [ 3 ]

Here we think of [2; 4] as a vector (a line segment joining (0,0) to (2,4) in a plane) and x1 as a scalar.

A system of linear equations has either:

• no solution
• exactly one solution
• infinitely many solutions

A system is consistent if it has exactly one or infinitely many solutions; a system is inconsistent if it has no solution.

A matrix simply records the essential information of a linear system in a compact rectangular array.

Given that a system is:

x1 − 2.x2 + x3 = 0
2.x2 − 8.x3 = 8
−4.x1 + 5.x2 + 9.x3 = −9

there are two different forms of the matrix that can be formed from the above system.

The Coefficient Matrix

[  1 −2  1 ]
[  0  2 −8 ]
[ −4  5  9 ]

The Augmented Matrix

[  1 −2  1 |  0 ]
[  0  2 −8 |  8 ]
[ −4  5  9 | −9 ]

1.3 Solving a Linear System

The basic strategy is to replace one system with an equivalent system that is easier to solve. Three basic operations are used to simplify a linear system:

1. Replace one equation by the sum of itself and a multiple of another equation.
2. Interchange two equations.
3. Multiply all the terms in an equation by a nonzero constant.

E.g. Solve the following system:

[  1 −2  1 |  0 ]
[  0  2 −8 |  8 ]
[ −4  5  9 | −9 ]

Keep x1 in the first equation and eliminate it from the other equations. To do so, add 4 times equation 1 to equation 3.

[ 1 −2  1 |  0 ]
[ 0  2 −8 |  8 ]
[ 0 −3 13 | −9 ]

Next, multiply equation 2 by 1/2 to obtain 1 as the coefficient for x2.

[ 1 −2  1 |  0 ]
[ 0  1 −4 |  4 ]
[ 0 −3 13 | −9 ]

Then add 3 times equation 2 to equation 3:

[ 1 −2  1 | 0 ]
[ 0  1 −4 | 4 ]
[ 0  0  1 | 3 ]

We could now use the x2 in equation 2 to eliminate the −2.x2 in equation 1, but it is more efficient to use the x3 in equation 3 first, to eliminate the −4.x3 and +x3 terms in equations 2 and 1.

[ 1 −2 0 | −3 ]
[ 0  1 0 | 16 ]
[ 0  0 1 |  3 ]

Now we move back to the x2 in equation 2 and use it to eliminate the −2.x2 above it. Because of our previous work with x3, there is now no arithmetic involving x3 terms. Adding 2 times equation 2 to equation 1, we obtain the system:

[ 1 0 0 | 29 ]
[ 0 1 0 | 16 ]
[ 0 0 1 |  3 ]

∴ the solution is (x1, x2, x3) = (29, 16, 3).
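This elimination can be checked numerically. The following is a minimal sketch (not part of the original notes), assuming numpy is available:

import numpy as np

A = np.array([[ 1.0, -2.0,  1.0],
              [ 0.0,  2.0, -8.0],
              [-4.0,  5.0,  9.0]])
b = np.array([0.0, 8.0, -9.0])

x = np.linalg.solve(A, b)  # solves A x = b by elimination (LAPACK)
print(x)                   # [29. 16.  3.], matching the hand computation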

2 Row Reduction and Echelon Forms

2.1 Some Terms for Matrices

A matrix is in Echelon Form if:

1. All nonzero rows are above any rows of all zeros.
2. Each leading entry (■) of a row is in a column to the right of the leading entry of the row above it.
3. All values below a leading entry are zero.

[ ■ ∗ ∗ ∗ ]
[ 0 ■ ∗ ∗ ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]

Note: ∗ indicates a value ∈ R; ■ indicates a leading entry.

A matrix is in Reduced Echelon Form if, in addition:

4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column (zeros below and above it).

[ 1 0 ∗ ∗ ]
[ 0 1 ∗ ∗ ]
[ 0 0 0 0 ]
[ 0 0 0 0 ]

Note: ∗ indicates a value ∈ R.

• Row reducing a matrix to an echelon form is called the forward phase.
• Row reducing an echelon form to the reduced row echelon form is called the backward phase.
• The leading entry (or pivot) of a row is its leftmost nonzero value.
• Free variables are the parameters of a parametric description of the solution set.
• The solution is unique exactly when there are no free variables.

2.2 Solutions to Linear Systems

To find the general solution of a matrix we do as follows.

E.g. For the matrix

[ 1 0 −5 | 1 ]
[ 0 1  1 | 4 ]
[ 0 0  0 | 0 ]

x1, x2 are leading variables and x3 is a free variable. So, setting x3 = t:

x1 = 1 + 5.t
x2 = 4 − t
x3 = t

or, in vector form:

[ x1 ]   [ 1 ]   [  5 ]
[ x2 ] = [ 4 ] + [ −1 ] t
[ x3 ]   [ 0 ]   [  1 ]

2.3 Back Substitution (Gaussian Elimination)

The method works as follows:

1. Solve the equations for the leading variables.

2. Starting with the last equation, substitute each equation into the one above it.

3. Treat the free variable(s), if any, as unconstrained parameters.
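As a quick check of the parametric solution above, the sketch below (not from the original notes, assuming numpy) verifies that every value of the free parameter t yields a solution:

import numpy as np

A = np.array([[1.0, 0.0, -5.0],
              [0.0, 1.0,  1.0],
              [0.0, 0.0,  0.0]])
b = np.array([1.0, 4.0, 0.0])

p = np.array([1.0, 4.0, 0.0])   # particular solution (take t = 0)
v = np.array([5.0, -1.0, 1.0])  # direction attached to the free variable x3

for t in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (p + t * v), b)  # holds for every t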

3 Vector Equations

When we see a vector ~p = [2; 3; 1] (a single column, here written with semicolons separating the rows), this is a matrix with only one column and is called a column vector, or just a vector, and each row represents a dimension. So, for example, a two row vector is a vector in two dimensions, denoted R^2. A three row vector is a vector in three dimensions, denoted R^3.

The geometric description of a vector such as [1; 4] is shown in figure 1. The geometric description of a vector such as [7; 10; 6] is shown in figure 2.

3.1 Addition of Vectors

If we add vectors ~u and ~v, we simply add ~u to ~v (this is adding the corresponding rows of each column vector).

i.e. [7; 10] + [4; 5] = [7 + 4; 10 + 5] = [11; 15], which is represented in figure 3.

3.2 Subtraction of Vectors

If we subtract vectors ~u and ~v, we simply add ~u to −~v.

i.e. [7; 10] − [4; 5] = [7 − 4; 10 − 5] = [3; 5], which is represented in figure 4.

3.3 Linear Combinations

A linear combination is expressed:

~w = Σ_{i=1}^{p} ci.~vi        (2)

Where,
~v1, ~v2, ..., ~vp are vectors in R^n
c1, c2, ..., cp are scalars (or weights)
~w is in Span{~v1, ~v2, ..., ~vp}

E.g. Estimate the linear combination of ~v1 = [−1; 1] and ~v2 = [2; 1] that generates the vector ~u.

Looking at the geometrical description of ~v1 and ~v2:

[figure: geometric description of ~v1 and ~v2]

The parallelogram rule shows us that ~u is the sum of 3.~v1 and −2.~v2.

i.e. ~u = 3.~v1 − 2.~v2

E.g. If ~a1 = [1; −2; 3], ~a2 = [5; −13; −3] and ~b = [−3; 8; 1], then Span{~a1, ~a2} is a plane through the origin in R^3. Is ~b in that plane?

~b is in the plane iff the vector equation x1.~a1 + x2.~a2 = ~b has a solution, so we row reduce the augmented matrix [ ~a1 ~a2 ~b ]:

[  1   5 | −3 ]     [ 1  5 | −3 ]
[ −2 −13 |  8 ]  ∼  [ 0 −3 |  2 ]
[  3  −3 |  1 ]     [ 0  0 | −2 ]

Here we can see that the system has no solution (the last row reads 0 = −2).

∴ ~b is not in Span{~a1, ~a2}
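Span membership can also be tested numerically: ~b is in Span{~a1, ~a2} iff appending ~b to the matrix [~a1 ~a2] does not raise its rank. A small sketch (an illustration, not from the notes, assuming numpy):

import numpy as np

a1 = np.array([1.0, -2.0, 3.0])
a2 = np.array([5.0, -13.0, -3.0])
b  = np.array([-3.0, 8.0, 1.0])

M  = np.column_stack([a1, a2])      # 3x2 matrix [a1 a2]
Mb = np.column_stack([a1, a2, b])   # augmented with b

print(np.linalg.matrix_rank(M), np.linalg.matrix_rank(Mb))  # 2 3, so b is not in the span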

4 The Matrix Equation A~x = ~b

A fundamental idea in linear algebra is to view a linear combination of vectors as the product of a matrix and a vector. The following equations show this notation.

A~x = ~b        (3)

[ ~a1 ~a2 ... ~an ] ~x = Σ_{j=1}^{n} xj.~aj        (4)

Where,
A is an m × n matrix with columns ~a1, ~a2, ..., ~an
~x is a column vector in R^n
~a1, ~a2, ..., ~an are vectors in R^m
x1, x2, ..., xn are scalars (or weights)
A~x = ~b has a solution iff ~b is in Span{~a1, ~a2, ..., ~an}

E.g. If A = [2 4; 4 9] and ~x = [1; 2], then what is ~b?

A~x = 1.[2; 4] + 2.[4; 9] = [10; 22]
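Equation (4) says that A~x is the linear combination of the columns of A weighted by the entries of ~x; a one-line numpy check of the example above (a sketch, not part of the notes):

import numpy as np

A = np.array([[2, 4],
              [4, 9]])
x = np.array([1, 2])

print(A @ x)                      # [10 22]
print(1 * A[:, 0] + 2 * A[:, 1])  # the same, built column by column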

4.1 Properties

If A is an m × n matrix, ~u and ~v are vectors in R^n, and c is a scalar, then:

1. A(~u + ~v) = A~u + A~v
2. A(c~u) = c(A~u)
3. The columns of A span R^m iff a row echelon form of A has a pivot in every row.
4. The equation A~x = ~b corresponds to the augmented matrix [ ~a1 ~a2 ... ~an ~b ].

Note: The square matrix with 1s on the diagonal (its pivots) and 0s elsewhere is known as the Identity Matrix and denoted I.

5 Solution Sets of Linear Systems

Solution sets of linear systems are important objects of study in linear algebra.

5.1 Parametric Vector Form

A solution set is said to be in parametric vector form if the solution is in the form:

~x = s~u + t~v        (5)

Where,
~x is a column vector in R^n
~u, ~v are vectors in R^n
s, t are scalars (or weights) in R

5.2 Homogeneous Linear Systems

A system of linear equations is said to be homogeneous if it can be written in the form:

A~x = ~0        (6)

Where,
A is an m × n matrix
~x is a column vector in R^n
~0 is the zero vector in R^m

This equation has at least one solution, namely ~x = ~0. This solution is known as the trivial solution. If any other solution exists, it is known as a nontrivial solution.

5.2.1 Theorems

1. The homogeneous equation has a nontrivial solution iff the equation has

at least one free variable.

2. Suppose the equation A~x = ~b is consistent for some given ~b, and let p~ be

a solution. Then the solution set of A~x = ~b is the set of all vectors of the

form w~ = p~ + ~vh , where ~vh is any solution of the homogeneous equation

A~x = 0.
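Theorem 2 can be illustrated with sympy (a sketch under the assumption that sympy is available; the matrix is the example from section 2.2, not a new result):

from sympy import Matrix

A = Matrix([[1, 0, -5],
            [0, 1,  1],
            [0, 0,  0]])
b = Matrix([1, 4, 0])

vh = A.nullspace()[0]   # a solution of the homogeneous equation A x = 0
p  = Matrix([1, 4, 0])  # a particular solution of A x = b

assert A * p == b
assert A * (p + 7 * vh) == b  # p shifted by any homogeneous solution still solves A x = b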

6 Linear Independence

6.1 Linear Independence

6.1.1 Definition

A set of vectors {~v1, ..., ~vp} in R^n is said to be linearly independent if the vector equation x1.~v1 + x2.~v2 + ... + xp.~vp = ~0 has only the trivial solution. In particular, the columns of a matrix A are linearly independent iff the equation A~x = ~0 has only the trivial solution.

6.2 Linear Dependence

6.2.1 Definitions

1. A set of vectors {~v1, ..., ~vp} in R^n is said to be linearly dependent if there exist weights c1, c2, ..., cp, not all zero, such that c1.~v1 + c2.~v2 + ... + cp.~vp = ~0.
2. A set of two vectors is linearly dependent iff one vector is a multiple of the other.
3. A set containing the zero vector is linearly dependent.
4. If n > m for an m × n matrix A (more vectors than entries in each vector), the columns of A are linearly dependent.
5. A set {~v1, ~v2, ..., ~vk} is linearly dependent iff some ~vj is a linear combination of the preceding vectors ~v1, ~v2, ..., ~vj−1.
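Independence can be tested mechanically: row reduce and see whether every column is a pivot column. A small sympy sketch (not from the notes; the vectors are illustrative):

from sympy import Matrix

v1, v2, v3 = [1, 0, 0], [0, 1, 0], [1, 1, 0]
A = Matrix.hstack(Matrix(v1), Matrix(v2), Matrix(v3))

_, pivots = A.rref()           # indices of the pivot columns
print(len(pivots) == A.cols)   # False: v3 = v1 + v2, so the set is linearly dependent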

7 Linear Transformations

A linear transformation arises when we think of the matrix A as an object that acts on a vector ~x by multiplication to produce a new vector called A~x. If an m × n matrix A is multiplied by a vector ~x in R^n, then the resulting vector A~x is in R^m. We write:

T : R^n → R^m,  T(~x) = A~x        (7)

E.g.

[4 −3 1 3; 2 0 5 1] [1; 1; 1; 1] = [5; 8]

7.1 Properties

1. T maps R^n onto R^m iff the columns of A span R^m
2. The range of T is the span of the columns of A
3. T(~x) = ~0 iff ~x solves A~x = ~0

4. T is linear iff T(~u + ~v) = T(~u) + T(~v) and T(c~u) = cT(~u), ∀~u, ~v ∈ R^n and ∀c ∈ R
5. If T(~x) = A~x, then A = [ T(~e1) T(~e2) ... T(~en) ], where ~e1, ..., ~en are the columns of the identity matrix; i.e. T(~x) = [ T(~e1) T(~e2) ... T(~en) ] ~x

7.2 Terms

7.2.1 A Shear Transformation

A shear transformation shifts some vectors parallel to a fixed direction, as shown in figure 7.

7.2.2 A Dilation Transformation

A dilation transformation multiplies every vector by some constant in R, as shown in figure 8.

7.2.3 A Reflected Transformation

A reflected transformation reflects the vectors across a line through the origin, such as y = x or y = −x. See figure 9.

7.2.4 A Rotated Transformation

A rotated transformation rotates the vectors through some angle, as shown in figure 10.

7.2.5 A Projection Transformation

A projection transformation projects the vectors onto some axis (or coordinate plane). See figure 11.
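For concreteness, standard 2 × 2 matrices for these five transformations are collected below (well-known examples, stated here as an illustration rather than taken from the notes):

import numpy as np

theta = np.pi / 2
shear      = np.array([[1.0, 1.5], [0.0, 1.0]])           # shifts x1 by 1.5 * x2
dilation   = np.array([[3.0, 0.0], [0.0, 3.0]])           # scales every vector by 3
reflection = np.array([[0.0, 1.0], [1.0, 0.0]])           # reflects across the line y = x
rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])  # rotates by theta
projection = np.array([[1.0, 0.0], [0.0, 0.0]])           # projects onto the x1-axis

x = np.array([2.0, 1.0])
for T in (shear, dilation, reflection, rotation, projection):
    print(T @ x)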


7.3 One-to-one

7.3.1 Properties

1. T is one-to-one iff T(~x) = ~0 (where T : R^n → R^m) has only the trivial solution.

2. T is one-to-one iff the columns of a matrix A are linearly independent.

8 Matrix Operations

8.1 Sums and Scalar Multiples

8.1.1 Properties

If A, B and C are matrices of the same size, and r and s are scalars, then:

1. A + B = B + A

2. (A + B) + C = A + (B + C)

3. A + 0 = A

4. r(A + B) = rA + rB

5. (r + s)A = rA + sA

6. r(sA) = (rs)A

E.g. If A = [4 0 5; −1 3 2], B = [1 1 1; 3 5 7] and C = [2 −3; 0 1], then what is

(a) A + B
(b) A + C

(a) A + B = [5 1 6; 2 8 9]

(b) A + C is not defined, because A and C have different sizes.

8.2 Multiplication

8.2.1 Definition

If A is an m × n matrix, and if B is an n × p matrix with columns ~b1 , ~b2 , ..., ~bp

then the product AB is the m × p matrix whose columns are A~b1 , A~b2 , ..., A~bp .

That is:

AB = [ A~b1 A~b2 ... A~bp ]        (8)
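A short numpy check of definition (8), column by column (a sketch, not from the notes; the matrices are illustrative):

import numpy as np

A = np.array([[2, 3],
              [1, -5]])
B = np.array([[4, 3, 6],
              [1, -2, 3]])

AB = A @ B
for j in range(B.shape[1]):
    # column j of AB equals A times column j of B
    assert np.array_equal(AB[:, j], A @ B[:, j])
print(AB)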

8.2.2 Properties

If A is an m × n matrix, and B and C are matrices of sizes for which the indicated sums and products are defined, then:

1. A(BC) = (AB)C

8.3 Power of a Matrix

2. A(B + C) = AB + AC

3. (B + C)A = BA + CA

4. r(AB) = (rA)B = A(rB), ∀r ∈ R

5. Im A = A = AIn

If A is an n × n matrix and k is a positive integer, then

A^k = A A ... A (k factors)        (9)

Note: We interpret A^0 as I.

8.4 The Transpose of a Matrix

Given an m × n matrix A, the transpose of A is the n × m matrix, denoted A^T, whose columns are formed from the corresponding rows of A.

E.g. If A = [a b; c d], then what is the transpose of A?

A^T = [a c; b d]

8.4.1 Properties

If A and B are matrices whose sizes are appropriate for the following sums and products, then:

1. (A^T)^T = A
2. (A + B)^T = A^T + B^T
3. (rA)^T = rA^T, ∀r ∈ R
4. (AB)^T = B^T A^T. Note: the reverse order.

9 The Inverse of a Matrix

In this section we consider only square matrices, and we investigate the matrix analogue of the reciprocal (multiplicative inverse) of a nonzero real number.

An n × n matrix A is invertible if there is an n × n matrix C such that AC = I and CA = I, where I is the n × n identity matrix. In this case we call C an inverse of A. If B were another inverse of A, then we would have B = BI = B(AC) = (BA)C = IC = C. Thus when A is invertible, its inverse is unique. We denote it by A−1, so that:

A A−1 = I and A−1 A = I        (10)

A matrix that is not invertible is called a singular matrix, and an invertible matrix is called a nonsingular matrix.

If A = [a b; c d] and ad − bc ≠ 0, then

A−1 = (1 / (ad − bc)) [d −b; −c a]        (11)

The quantity ad − bc is called the determinant of A, and we write:

det A = ad − bc        (12)

One use of the inverse matrix: for each ~b in R^n, the equation A~x = ~b has the unique solution ~x = A−1~b.
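A quick numerical check of formula (11) and of solving via the inverse (a sketch, not from the notes; the numbers are illustrative and numpy is assumed):

import numpy as np

a, b, c, d = 3.0, -2.0, -5.0, 4.0
A = np.array([[a, b], [c, d]])

det = a * d - b * c
A_inv = (1.0 / det) * np.array([[d, -b], [-c, a]])  # equation (11)

assert np.allclose(A_inv, np.linalg.inv(A))
print(A_inv @ np.array([6.0, 8.0]))  # the unique solution of A x = (6, 8)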

9.1 Properties

1. If A is an invertible matrix, then A−1 is invertible and (A−1 )−1 = A

2. (AB)−1 = B−1 A−1 Note: The reverse order

9.2 Elementary Matrices

An elementary matrix is one that is obtained by performing a single elementary row operation on an identity matrix. The next example illustrates an elementary matrix.

E.g. Let E = [1 0 0; 0 1 0; −4 0 1] and A = [a b c; d e f; g h i]. Compute EA and describe how the product can be obtained by elementary row operations on A.

So, EA = [a b c; d e f; g − 4a  h − 4b  i − 4c]

Hence, addition of −4 times row 1 of A to row 3 produces EA. This is a row replacement operation.

9.2.1 Properties

1. If an elementary row operation is performed on an m × n matrix A, the

resulting matrix can be written as EA, where the m × m matrix E is

created by performing the same row operation on Im .

2. Each elementary matrix E is invertible; the inverse of E is the elementary matrix of the same type that transforms E back into I.

3. An n × n matrix A is invertible iff A is row equivalent to In , and in this

case, any sequence of elementary row operations that reduces A to In also

transforms In into A−1 .

9.3 An Algorithm for Finding A−1

If we place A and I side-by-side to form an augmented matrix [ A I ], then row operations on this matrix produce identical operations on A and on I. Either there are row operations that transform A to In and In to A−1, in which case [ A I ] row reduces to [ I A−1 ], or else A is not invertible.

E.g. Find the inverse of the matrix A = [0 1 2; 1 0 3; 4 −3 8], if it exists.

[ A I ] =
[ 0  1 2 | 1 0 0 ]
[ 1  0 3 | 0 1 0 ]
[ 4 −3 8 | 0 0 1 ]

∼ (interchange rows 1 and 2)
[ 1  0 3 | 0 1 0 ]
[ 0  1 2 | 1 0 0 ]
[ 4 −3 8 | 0 0 1 ]

∼ (add −4 times row 1 to row 3)
[ 1  0  3 | 0  1 0 ]
[ 0  1  2 | 1  0 0 ]
[ 0 −3 −4 | 0 −4 1 ]

∼ (add 3 times row 2 to row 3)
[ 1 0 3 | 0  1 0 ]
[ 0 1 2 | 1  0 0 ]
[ 0 0 2 | 3 −4 1 ]

∼ (multiply row 3 by 1/2)
[ 1 0 3 | 0    1   0  ]
[ 0 1 2 | 1    0   0  ]
[ 0 0 1 | 3/2 −2  1/2 ]

∼ (add −2 times row 3 to row 2 and −3 times row 3 to row 1)
[ 1 0 0 | −9/2  7 −3/2 ]
[ 0 1 0 | −2    4 −1   ]
[ 0 0 1 |  3/2 −2  1/2 ]

= [ I A−1 ]

∴ A−1 = [ −9/2 7 −3/2; −2 4 −1; 3/2 −2 1/2 ]
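The result can be verified with numpy (a check added here, not part of the original notes):

import numpy as np

A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [4.0, -3.0, 8.0]])
A_inv = np.array([[-4.5,  7.0, -1.5],
                  [-2.0,  4.0, -1.0],
                  [ 1.5, -2.0,  0.5]])

assert np.allclose(A @ A_inv, np.eye(3))     # A A^-1 = I
assert np.allclose(np.linalg.inv(A), A_inv)  # agrees with numpy's inverse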

9.4 Characterizations of Invertible Matrices

Let A be a square n × n matrix. Then the following statements are equivalent; that is, for a given A, they are either all true or all false.

1. A is an invertible matrix.

2. A is row equivalent to the n × n identity matrix.

3. A has n pivot positions.

4. The equation A~x = ~b has at least one solution for each ~b in R^n.

5. There is an n × n matrix C such that CA = I.

6. There is an n × n matrix D such that AD = I.

7. A^T is an invertible matrix.

Note: If A and B are square matrices and if AB = I, then A and B are both

invertible, with B = A−1 and A = B−1 .

9.5 Invertible Linear Transformations

A linear transformation T : R^n → R^n is said to be invertible if there exists a function S : R^n → R^n such that

S(T(~x)) = ~x, ∀~x ∈ R^n        (13)
T(S(~x)) = ~x, ∀~x ∈ R^n        (14)

The next theorem shows that if such an S exists, it is unique and must be a

linear transformation. We call S the inverse of T and write it as T −1 .

9.5.1 Theorem

Let T : R^n → R^n be a linear transformation and let A be the standard matrix for T. Then T is invertible iff A is an invertible matrix. In that case, the linear transformation S given by S(~x) = A−1~x is the unique function satisfying (13) and (14).

10 Subspaces of R^n

10.1 Definition

A subspace of R^n is any set H in R^n that has three properties:

1. The zero vector is in H.
2. For each ~u and ~v in H, the sum ~u + ~v is in H.
3. For each ~u in H and each scalar c, the vector c~u is in H.

10.2 Column Space of a Matrix

10.2.1 Definition

The column space of a matrix A is the set Col A of all linear combinations

of the columns of A.

E.g. Let A = [1 −3 −4; −4 6 −2; −3 7 6] and ~b = [3; 3; −4]. Determine whether ~b is in the column space of A.

~b is in Col A iff the equation A~x = ~b is consistent, so we row reduce the augmented matrix [ A ~b ]:

[  1 −3 −4 |  3 ]     [ 1 −3  −4 |  3 ]
[ −4  6 −2 |  3 ]  ∼  [ 0 −6 −18 | 15 ]
[ −3  7  6 | −4 ]     [ 0  0   0 |  0 ]

The system is consistent.

∴ ~b is in the column space of A.
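The same test can be phrased with ranks: ~b is in Col A iff [A ~b] has the same rank as A. A numpy sketch (not from the notes):

import numpy as np

A = np.array([[ 1.0, -3.0, -4.0],
              [-4.0,  6.0, -2.0],
              [-3.0,  7.0,  6.0]])
b = np.array([3.0, 3.0, -4.0])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A == rank_Ab)  # True: the system is consistent, so b is in Col A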

10.3 Null Space of a Matrix

10.3.1 Definition

The null space of a matrix A is the set Nul A of all solutions to the homogeneous equation A~x = ~0.

10.4 Basis for a Subspace

10.4.1 Definition

A basis for a subspace H of R^n is a linearly independent set in H that spans H.

E.g. Find a basis for (a) the null space and (b) the column space of the matrix A = [−3 6 −1 1 −7; 1 −2 2 3 −1; 2 −4 5 8 −4].

(a) First, row reduce the augmented matrix [ A ~0 ]:

[ −3  6 −1 1 −7 | 0 ]          [ 1 −2 0 −1  3 | 0 ]
[  1 −2  2 3 −1 | 0 ]  rref →  [ 0  0 1  2 −2 | 0 ]
[  2 −4  5 8 −4 | 0 ]          [ 0  0 0  0  0 | 0 ]

So,
x1 − 2.x2 − x4 + 3.x5 = 0
x3 + 2.x4 − 2.x5 = 0
0 = 0

Hence, with x2, x4 and x5 free, the general solution is:

[ x1 ]   [ 2.x2 + x4 − 3.x5 ]      [ 2 ]      [  1 ]      [ −3 ]
[ x2 ]   [ x2               ]      [ 1 ]      [  0 ]      [  0 ]
[ x3 ] = [ −2.x4 + 2.x5     ] = x2 [ 0 ] + x4 [ −2 ] + x5 [  2 ]
[ x4 ]   [ x4               ]      [ 0 ]      [  1 ]      [  0 ]
[ x5 ]   [ x5               ]      [ 0 ]      [  0 ]      [  1 ]

So, { [2; 1; 0; 0; 0], [1; 0; −2; 1; 0], [−3; 0; 2; 0; 1] } is the basis for Nul A.

(b) The pivot columns of A are columns 1 and 3, so { [−3; 1; 2], [−1; 2; 5] } is the basis for Col A.

So we say that:

1. The pivot columns of a matrix A form a basis for the column space of A.
2. The vectors attached to the free variables in the parametric vector form of the solution of A~x = ~0 form a basis for the null space of A.
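Both bases can be recomputed with sympy, which also previews the rank theorem of section 10.5 (a sketch, assuming sympy is available; the matrix is the example above):

from sympy import Matrix

A = Matrix([[-3,  6, -1, 1, -7],
            [ 1, -2,  2, 3, -1],
            [ 2, -4,  5, 8, -4]])

null_basis = A.nullspace()    # three vectors, matching part (a)
col_basis  = A.columnspace()  # the two pivot columns, matching part (b)

print(len(col_basis), len(null_basis))             # 2 3
print(len(col_basis) + len(null_basis) == A.cols)  # True: rank A + dim Nul A = n = 5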

10.5 The Dimension of a Subspace

10.5.1 Definitions

1. The dimension of a nonzero subspace H, denoted by dim H, is the number of vectors in any basis for H. The dimension of the zero subspace {0} is defined to be zero. (The zero subspace has no basis, because the zero vector by itself forms a linearly dependent set.)

2. The rank of a matrix A, denoted by rank A, is the dimension of the

column space of A.

10.5.2 Theorems

1. If a matrix A has n columns, then

rank A + dim Nul A = n (15)

2. If a subspace H has dimension p, then any linearly independent set of exactly p elements in H is automatically a basis for H. Also, any set of p elements of H that spans H is automatically a basis for H.

11 Determinants

Recall from equation (11) that a 2 × 2 matrix is invertible iff its determinant is nonzero. Here we extend this fact to matrices larger than 2 × 2.

11.1 Definition

For n > 2, the determinant of an n × n matrix A = [aij] is the sum of n terms of the form ±a1j det A1j, with plus and minus signs alternating, where the entries a11, a12, ..., a1n are from the first row of A. i.e.

det A = a11 det A11 − a12 det A12 + ... + (−1)^(1+n) a1n det A1n        (16)
      = Σ_{j=1}^{n} (−1)^(1+j) a1j det A1j        (17)

where,
a1j is the entry in the first row and the jth column
det A1j is the determinant of the matrix A with the first row and the jth column deleted

Since a row or column of a matrix often contains many zeros, it is convenient to be able to expand along any row or column; for this the cofactor is introduced. If we let:

Cij = (−1)^(i+j) det Aij        (18)

then the cofactor expansion across the first row is

det A = a11 C11 + a12 C12 + ... + a1n C1n        (19)
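Definition (17) translates directly into a (deliberately naive, exponential-time) recursive function; the sketch below is an illustration, not part of the notes:

def det(A):
    # Determinant by cofactor expansion across the first row, as in (17).
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # A1j: drop row 1, column j
        total += (-1) ** j * A[0][j] * det(minor)         # (-1)**j equals (-1)^(1+j) for 1-based j
    return total

print(det([[1, 5, 0], [2, 4, -1], [0, -2, 0]]))  # -2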

From this we get the following theorems:



11.2 Theorems

The determinant of an n × n matrix A can be computed by a cofactor expansion across any row or down any column:

1. The expansion across the ith row using the cofactors in (18) is: det A = ai1 Ci1 + ai2 Ci2 + ... + ain Cin        (20)
2. The expansion down the jth column is: det A = a1j C1j + a2j C2j + ... + anj Cnj        (21)

11.3 Properties

Let A be a square matrix.

1. If a multiple of one row of A is added to another row to produce a matrix B, then: det B = det A

2. If two rows of A are interchanged to produce B, then: det B = −det A

3. If one row of A is multiplied by k to produce B, then: det B = k.det A

4. If A is row reduced to an echelon form U using only row replacements and r row interchanges, then:

   det A = (−1)^r · (product of the pivots in U)    when A is invertible
   det A = 0                                        when A is not invertible

6. If A is an n × n matrix, then: det A^T = det A

8. If A and B are n × n matrices, then in general: det (A + B) ≠ det A + det B

9. If A is an n × n matrix and E is an n × n elementary matrix, then: det EA = (det E)(det A), where

   det E = 1     if E is a row replacement
   det E = −1    if E is an interchange
   det E = r     if E is a scale by r

11.4 Cramer's Rule

Let A be an invertible n × n matrix. For any ~b in R^n, the unique solution ~x of A~x = ~b has entries given by:

xi = det Ai(~b) / det A        (22)

Where,
i = 1, 2, ..., n
Ai(~b) = [ ~a1 ... ~ai−1 ~b ~ai+1 ... ~an ] is A with its ith column replaced by ~b

E.g. Use Cramer's rule to solve the system

3.x1 − 2.x2 = 6 and −5.x1 + 4.x2 = 8

Here, A = [3 −2; −5 4], A1(~b) = [6 −2; 8 4], A2(~b) = [3 6; −5 8]

And since det A = 2 ≠ 0, the system has a unique solution.

x1 = det A1(~b) / det A = (24 + 16) / 2 = 20
x2 = det A2(~b) / det A = (24 + 30) / 2 = 27
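The same computation in numpy, following equation (22) directly (a sketch, not from the notes):

import numpy as np

A = np.array([[ 3.0, -2.0],
              [-5.0,  4.0]])
b = np.array([6.0, 8.0])

det_A = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                      # A_i(b): column i of A replaced by b
    x[i] = np.linalg.det(Ai) / det_A  # equation (22)

print(x)  # [20. 27.]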

11.5 A Formula for the Inverse of A

From Cramer's rule it follows that, if A is an invertible n × n matrix, then:

(A−1)ij = det Ai(~ej) / det A        (23)

Where,
~ej is the jth column of the identity matrix
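Equation (23) can be used to build the inverse entry by entry; the following sketch (not from the notes, reusing the matrix of the last example) checks it against numpy:

import numpy as np

A = np.array([[ 3.0, -2.0],
              [-5.0,  4.0]])
n = A.shape[0]
det_A = np.linalg.det(A)

A_inv = np.empty((n, n))
for j in range(n):
    e_j = np.eye(n)[:, j]  # jth column of the identity matrix
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = e_j     # A_i(e_j)
        A_inv[i, j] = np.linalg.det(Ai) / det_A  # equation (23)

assert np.allclose(A_inv, np.linalg.inv(A))
print(A_inv)  # [[2.  1. ] [2.5 1.5]]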
