
UUM 526 Optimization Techniques in Engineering

Lecture 3: Vector Spaces and Matrices


Asst. Prof. N. Kemal Ure
Istanbul Technical University
ure@itu.edu.tr

October 8th, 2015


Overview

Vectors and Matrices

Rank of a Matrix

Inner Products and Norms


Vectors and Matrices

Linear Combination
Abstract Definition: Let $V$ be a set and $F$ a field, with operations $+ : V \times V \to V$ and $\cdot : F \times V \to V$. Then $(V, F, +, \cdot)$ is a vector space if:

$$\alpha v_1 + \beta v_2 \in V, \quad \forall v_1, v_2 \in V, \ \forall \alpha, \beta \in F \qquad (1)$$

We will mainly be dealing with $V = \mathbb{R}^n$ and $F = \mathbb{R}$; $+$ will be the usual vector addition and $\cdot$ the usual scalar product.

If $S \subseteq V$ satisfies Eq. (1), then $S$ is called a subspace of $V$.

Definition (Linear Combination)


Let $a_k$, $k = 1, \dots, n$ be a finite number of vectors. A vector $b$ is said to be a linear combination of the vectors $a_k$ if there exist scalars $\alpha_k$ such that:

$$b = \alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_n a_n = \sum_{k=1}^{n} \alpha_k a_k$$
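As a quick numerical sketch (assuming NumPy is available; the vectors below are illustrative), we can test whether $b$ is a linear combination of given vectors by solving a least-squares problem and checking that the fit is exact:

```python
import numpy as np

# Columns of A are the vectors a_1, a_2 (illustrative values)
A = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0, 5.0])          # constructed as 2*a1 + 3*a2

# Solve min ||A c - b||; an exact fit means b lies in the span of the a_k
coeffs, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
print(coeffs)                           # the scalars alpha_k
print(np.allclose(A @ coeffs, b))      # True: b is a linear combination
```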


Linear Dependency
Definition (Linearly Dependent Set)
The set of vectors $A = \{a_k : k = 1, \dots, n\}$ is said to be linearly
dependent if one of the vectors is a linear combination of the others.
If the set $A$ is not linearly dependent, then it is called linearly
independent.
Theorem (Test for Linear Independence)
The set $A = \{a_k : k = 1, \dots, n\}$ is linearly independent iff the equality

$$\sum_{k=1}^{n} \alpha_k a_k = 0 \qquad (2)$$

implies that $\alpha_k = 0$ for all $k = 1, \dots, n$.
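This test translates directly into a rank check: stacking the vectors as columns, they are independent exactly when the rank equals the number of vectors. A small NumPy sketch (example vectors are illustrative):

```python
import numpy as np

def is_independent(vectors):
    """Vectors are independent iff the column rank equals their count."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

print(is_independent([[1, 0], [0, 1]]))   # True: standard basis of R^2
print(is_independent([[1, 2], [2, 4]]))   # False: second vector = 2 * first
```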


Span, Basis and Dimension


Definition (Span)
The set of all linear combinations of the set of vectors $\{a_k\}$ is called
the span of $\{a_k\}$:

$$\mathrm{span}[a_1, \dots, a_n] = \Big\{ \sum_{k=1}^{n} \alpha_k a_k : \alpha_k \in \mathbb{R} \Big\}$$

Span is always a subspace!


Definition (Basis)
If the set $\{a_k\}$ is linearly independent and $\mathrm{span}[a_1, \dots, a_n] = V$, then
$\{a_k\}$ is a basis for $V$.
By fixing a basis, we can represent other vectors in $V$ in terms of
that basis.



Theorem (Unique Coordinates for a Fixed Basis)
Let $\{a_k\}$ be a basis for $V$. Then any vector $v \in V$ can be represented
uniquely as

$$v = \sum_{k=1}^{n} \alpha_k a_k.$$

There are usually an infinite number of bases for a subspace. For
instance, if $\{a_k\}$ is a basis, then so is $\{c\,a_k\}$ for any nonzero $c \in \mathbb{R}$.
What about the number of vectors in a basis?
Theorem (Unique Number of Vectors in a Basis)
Let $a_k$, $k = 1, \dots, n$ and $b_i$, $i = 1, \dots, m$ be two different bases for $V$.
Then $n = m$.
Hence every space (or subspace) $V$ has the same number of vectors
in every one of its bases. That number is called the dimension of $V$.
We call the $\alpha_k$ the coordinates of the vector $v$ in the basis $\{a_k\}$.
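Finding these coordinates amounts to solving a linear system with the basis vectors as columns. A NumPy sketch (the basis below is illustrative):

```python
import numpy as np

# Basis vectors of R^2 as the columns of B (any invertible B works)
B = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
v = np.array([3.0, 1.0])

# The coordinates alpha solve B @ alpha = v; uniqueness follows from
# B being invertible (its columns are a basis)
alpha = np.linalg.solve(B, v)
print(alpha)                       # -> [2. 1.]: v = 2*b1 + 1*b2
print(np.allclose(B @ alpha, v))   # True
```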


Vector and Matrix Notation


For $a \in \mathbb{R}^n$ we will write vectors in column notation:

$$a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = [a_1\ a_2\ \dots\ a_n]^T, \quad a_i \in \mathbb{R}$$

A matrix $A \in \mathbb{R}^{m \times n}$ is a rectangular collection of real numbers
$a_{ij} \in \mathbb{R}$, $i = 1, \dots, m$, $j = 1, \dots, n$:

$$A = \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix}$$

It is often much more useful to think of $A$ as a collection of $n$ vectors lying in
$\mathbb{R}^m$: $A = [a_1, a_2, \dots, a_n]$, $a_k \in \mathbb{R}^m$.

Matrix-vector multiplication $Av$ also makes more sense this way.
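A NumPy sketch of this column view (values illustrative): $Av$ is exactly the linear combination of the columns of $A$ with the entries of $v$ as coefficients.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])         # columns a1, a2 live in R^3
v = np.array([10.0, -1.0])

# A @ v equals v[0]*a1 + v[1]*a2
combo = v[0] * A[:, 0] + v[1] * A[:, 1]
print(np.allclose(A @ v, combo))   # True
```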


Rank of a Matrix

Matrix Rank
Definition (Rank of a Matrix)
Let $A \in \mathbb{R}^{m \times n}$. The rank $r$ of $A$ is the maximal number of linearly independent columns
of $A$.
Notice that $r \le n$. When $r = n$, we say that the matrix is full rank.
Theorem (Invariance of Rank)
The rank of a matrix $A \in \mathbb{R}^{m \times n}$ is invariant under the following operations:
1. Multiplication of columns of $A$ by nonzero scalars.
2. Interchange of columns.
3. Addition to a given column of a linear combination of other columns.

Nice, but is there a formula for testing whether a matrix has full rank? Is
there a scalar quantity that measures the independence of the columns?

For square matrices ($m = n$), the answer is yes! It is called the
determinant of the matrix.
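A NumPy sketch of these facts (the matrix is illustrative): the rank survives column scaling and interchange, and for a square matrix a nonzero determinant signals full rank.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(np.linalg.matrix_rank(A))    # 2: full rank
print(np.linalg.det(A))            # -2.0: nonzero, columns independent

# Scaling a column and interchanging columns leave the rank unchanged
B = A.copy()
B[:, 0] *= 5.0                     # multiply first column by a nonzero scalar
C = A[:, ::-1]                     # interchange the two columns
print(np.linalg.matrix_rank(B), np.linalg.matrix_rank(C))   # 2 2
```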



Determinant
The determinant of a matrix (denoted $\det A$) is a confusing concept at
first and has many different interpretations.
The properties of the determinant are more important than its
explicit formula.
Definition (Determinant)
The determinant is a function $\det : \mathbb{R}^{n \times n} \to \mathbb{R}$ that possesses the
following properties:
1. The determinant is linear in the matrix's columns:

$$\det[a_1, \dots, \alpha a_k + \beta b_k, \dots, a_n] = \alpha \det[a_1, \dots, a_k, \dots, a_n] + \beta \det[a_1, \dots, b_k, \dots, a_n].$$

2. If for some $k$, $a_k = a_{k+1}$, then $\det A = 0$.
3. The determinant of the identity matrix is 1, that is, $\det I_n = 1$.
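These defining properties are easy to verify numerically. A NumPy sketch (the column vectors and scalars are illustrative):

```python
import numpy as np

a1 = np.array([1.0, 2.0])
a2 = np.array([3.0, 5.0])
b  = np.array([-1.0, 4.0])
alpha, beta = 2.0, -3.0

# Property 1: linearity in a single column (here the second)
lhs = np.linalg.det(np.column_stack([a1, alpha * a2 + beta * b]))
rhs = alpha * np.linalg.det(np.column_stack([a1, a2])) \
    + beta * np.linalg.det(np.column_stack([a1, b]))
print(np.isclose(lhs, rhs))                                        # True

# Property 2: equal adjacent columns force a zero determinant
print(np.isclose(np.linalg.det(np.column_stack([a1, a1])), 0.0))   # True

# Property 3: det(I) = 1
print(np.isclose(np.linalg.det(np.eye(3)), 1.0))                   # True
```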



Consequences of Determinant Definition


If there is a zero column in the matrix, the determinant is zero:
$\det[a_1, \dots, 0, \dots, a_n] = 0$.
If we add to a column a linear combination of the other columns, the determinant
does not change:

$$\det\Big[a_1, \dots, a_k + \sum_{j \ne k} \alpha_j a_j, \dots, a_n\Big] = \det[a_1, \dots, a_k, \dots, a_n]$$

The determinant changes sign if we interchange columns:

$$\det[a_1, \dots, a_k, a_{k-1}, \dots, a_n] = -\det[a_1, \dots, a_{k-1}, a_k, \dots, a_n].$$

Most importantly, if the columns of $A$ are not linearly independent,
then $\det A = 0$. Hence for a square matrix: full rank $\iff$ nonzero
determinant.
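A NumPy sketch of the two middle consequences (vectors illustrative): adding a combination of the other columns leaves the determinant unchanged, and interchanging columns flips its sign.

```python
import numpy as np

a1 = np.array([1.0, 0.0, 2.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = np.array([3.0, 1.0, 0.0])
A = np.column_stack([a1, a2, a3])

# Adding 4*a1 - 2*a2 to the third column: determinant unchanged
A2 = np.column_stack([a1, a2, a3 + 4.0 * a1 - 2.0 * a2])
print(np.isclose(np.linalg.det(A), np.linalg.det(A2)))    # True

# Interchanging the first two columns: determinant changes sign
A3 = np.column_stack([a2, a1, a3])
print(np.isclose(np.linalg.det(A3), -np.linalg.det(A)))   # True
```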


Determinant and Rank


Only square matrices have determinants. What if we want to test the
rank of a rectangular matrix?

Rectangular matrices have square sub-matrices! Let us see whether their
determinants are useful... First we need a definition:

Definition (Minor)
A $p$th-order minor of a matrix $A \in \mathbb{R}^{m \times n}$ is the determinant of a $p \times p$ sub-matrix
formed by deleting $m - p$ rows and $n - p$ columns.
Then we have this cool theorem:
Theorem (Minors and Rank)
If a matrix $A \in \mathbb{R}^{m \times n}$ ($m \ge n$) has a nonzero $n$th-order minor, then
$\mathrm{rank}\, A = n$.
It is straightforward to show that the rank of a matrix is the maximal
order of its nonzero minors.
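This suggests a very inefficient but instructive way to compute rank: search for the largest nonzero minor. A NumPy sketch for small matrices (the helper `rank_via_minors` is our own illustration, not a library routine):

```python
import numpy as np
from itertools import combinations

def rank_via_minors(A):
    """Rank = largest p with a nonzero p-th order minor (small matrices only)."""
    m, n = A.shape
    for p in range(min(m, n), 0, -1):
        for rows in combinations(range(m), p):
            for cols in combinations(range(n), p):
                # Keep the chosen p rows and p columns, test their determinant
                if not np.isclose(np.linalg.det(A[np.ix_(rows, cols)]), 0.0):
                    return p
    return 0

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # second row = 2 * first, so rank 1
print(rank_via_minors(A))           # 1
print(np.linalg.matrix_rank(A))     # 1, agrees
```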


Nonsingular matrices and Inverses


A square matrix $A$ with $\det A \ne 0$ is called nonsingular.
A matrix $A \in \mathbb{R}^{n \times n}$ is nonsingular if and only if there exists a matrix
$B \in \mathbb{R}^{n \times n}$ such that:

$$AB = BA = I_n$$

The matrix $B$ is called the inverse of $A$ and is denoted $A^{-1}$.
The inverse shows up in the solution of linear equations $Ax = b$: a unique
solution exists if $A$ is nonsingular ($x = A^{-1}b$).
What about non-square linear systems?
Theorem (Existence of Solution in a Linear System)
The set of equations represented by $Ax = b$ has a solution if and only if
$\mathrm{rank}\, A = \mathrm{rank}[A, b]$.
If $\mathrm{rank}\, A = m < n$, then there are an infinite number of solutions.
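A NumPy sketch of both cases (values illustrative): solving a nonsingular square system, and the rank-based consistency test for a general system.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Nonsingular A (det != 0): the unique solution is x = A^{-1} b
x = np.linalg.solve(A, b)
print(x)                            # -> [1. 3.]
print(np.allclose(A @ x, b))        # True

# Consistency test for a general system: rank A == rank [A, b]
Ab = np.column_stack([A, b])
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(Ab))   # True: solvable
```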

Inner Products and Norms

Euclidean Inner Product


We need to turn our vector space into a metric space by adding a
length function.

Such a function already exists for $V = \mathbb{R}$: the absolute value function $|\cdot|$.

Some very useful properties: $-|a| \le a \le |a|$ and $|ab| = |a||b|$.

The most useful property: $|a + b| \le |a| + |b|$.

For $\mathbb{R}^n$, before defining the length function, it is helpful to define the
inner product first.
Definition (Euclidean Inner Product)
The Euclidean inner product of two vectors $x, y \in \mathbb{R}^n$ is defined as

$$\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i = x^T y$$
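A short NumPy check (vectors illustrative) that the sum form and the $x^T y$ form agree:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 2.0])

# <x, y> as x^T y and as the elementwise sum: same number
print(np.dot(x, y))     # 8.0
print(np.sum(x * y))    # 8.0
```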



Inner Product Properties


The inner product has the following properties:

Positivity: $\langle x, x \rangle \ge 0$, and $\langle x, x \rangle = 0$ iff $x = 0$.
Symmetry: $\langle x, y \rangle = \langle y, x \rangle$.
Additivity: $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$.
Homogeneity: $\langle rx, y \rangle = r \langle x, y \rangle$, $r \in \mathbb{R}$.

These properties also hold for the second argument.

Vectors $x$ and $y$ are orthogonal if $\langle x, y \rangle = 0$.
Now we can define the length; it is called the Euclidean norm:

$$\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{x^T x}$$
Theorem (Cauchy-Schwarz Inequality)
For any $x, y \in \mathbb{R}^n$,

$$|\langle x, y \rangle| \le \|x\| \, \|y\|,$$

and the equality holds only if $x = \alpha y$ for some $\alpha \in \mathbb{R}$.
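The inequality is easy to sanity-check numerically. A NumPy sketch (random vectors with a fixed seed):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = rng.standard_normal(5)

# |<x, y>| <= ||x|| * ||y|| for any pair of vectors
print(abs(np.dot(x, y)) <= np.linalg.norm(x) * np.linalg.norm(y))   # True

# Equality when one vector is a scalar multiple of the other
z = -2.5 * x
print(np.isclose(abs(np.dot(x, z)),
                 np.linalg.norm(x) * np.linalg.norm(z)))            # True
```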




Norm Properties
The norm possesses many properties of the absolute value function:
Positivity: $\|x\| \ge 0$, and $\|x\| = 0$ iff $x = 0$
Homogeneity: $\|rx\| = |r| \|x\|$, $r \in \mathbb{R}$
Triangle Inequality: $\|x + y\| \le \|x\| + \|y\|$

There are many other vector norms. In fact, any function that
satisfies the properties above is a norm.
$p$-norm: $\|x\|_p = \big(|x_1|^p + |x_2|^p + \dots + |x_n|^p\big)^{1/p}$
$p = 2$ gives the Euclidean norm.
What do $p = 0$ or $p = 1$ correspond to? What happens when $p \to \infty$?
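NumPy's `norm` implements the $p$-norms directly. A short sketch (the vector is illustrative):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

# p-norms for several p; p = 2 recovers the Euclidean norm
for p in (1, 2, np.inf):
    print(p, np.linalg.norm(x, ord=p))
# p=1 -> 7.0 (sum of |x_i|), p=2 -> 5.0, p=inf -> 4.0 (max of |x_i|)
```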

Continuity of $f : \mathbb{R}^n \to \mathbb{R}^m$ can be formulated in terms of norms:

$f$ is continuous at $x_0 \in \mathbb{R}^n$ if and only if for all $\epsilon > 0$ there exists a $\delta > 0$
such that $\|x - x_0\| < \delta \implies \|f(x) - f(x_0)\| < \epsilon$.

If $x, y \in \mathbb{C}^n$ (complex vectors), the inner product is defined as
$\langle x, y \rangle = \sum_{i=1}^{n} x_i \bar{y}_i$;
hence $\langle x, y \rangle = \overline{\langle y, x \rangle}$ and $\langle x, ry \rangle = \bar{r} \langle x, y \rangle$.
