
Lecture 2: Eigenvalue-Eigenvector and Orthogonal Matrix

(Part B, Chapter 7 in Advanced Eng. Math. by E. Kreyszig)


Definition
For a given square matrix A, if the equation AX = λX holds, where λ is a number and X is a nonzero vector, then λ is an eigenvalue and X an eigenvector of A. Both λ and X are derived from A; they represent certain characteristics of A. The definition shows that multiplying the square matrix A by one of its eigenvectors only changes the eigenvector's magnitude, by the proportionality factor λ (the eigenvalue).
Calculation

An×n Xn×1 = λ Xn×1  ⟹  AX = λ In×n X  ⟹  AX − λIX = O  ⟹  (A − λI)X = O

For this homogeneous LES (linear equation system) to have a nontrivial solution for X, Cramer's Rule requires |A − λI| = 0.
Two steps
(1) Find λ from |A − λI| = 0
For example, take A = [−2, 2, −3; 2, 1, −6; −1, −2, 0]. Then AX = λX gives

[−2, 2, −3; 2, 1, −6; −1, −2, 0]·[x1; x2; x3] = λ·[x1; x2; x3]
⟹  | [−2, 2, −3; 2, 1, −6; −1, −2, 0] − λ·[1, 0, 0; 0, 1, 0; 0, 0, 1] | = 0
⟹  | −2−λ, 2, −3; 2, 1−λ, −6; −1, −2, −λ | = 0

Expanding the determinant (Sarrus' rule):

(−2−λ)(1−λ)(−λ) + (2)(−6)(−1) + (−3)(2)(−2) − (−1)(1−λ)(−3) − (−2)(−6)(−2−λ) − (−λ)(2)(2) = 0

⟹  −λ³ − λ² + 21λ + 45 = 0;  try 45 = 3·5·3  ⟹  λ1 = 5;  λ2 = λ3 = −3.
Roots of a polynomial: the number of roots equals the order of the polynomial, and the product of all the roots equals the constant term of the polynomial (up to sign).
(2) Find the eigenvector X from the homogeneous LES (A − λI)X = 0, with the calculated λ, using Gauss Elimination (GE)
a) For λ = 5

A − 5I = [−7, 2, −3; 2, −4, −6; −1, −2, −5]  →(GE)→  [1, 2, 5; 0, −8, −16; 0, 16, 32]  →  [1, 2, 5; 0, 1, 2; 0, 0, 0]

x1 + 2x2 + 5x3 = 0
x2 + 2x3 = 0
0 = 0

Take x1 = c1; then x2 = 2c1 and x3 = −c1:

X1 = [x1; x2; x3] = [c1; 2c1; −c1] = c1·[1; 2; −1]

Notice: there are 2 equations involving the 3 unknown components of the eigenvector (x1, x2, x3), so 1 independent component, x1, can take any value c1, and the other components x2 and x3 take the values corresponding to c1. In general, for a LES with n unknowns, after Gauss Elimination r = rank(A) is the number of independent equations in the LES, and (n − r) equals the number of independent unknowns. The eigenvector then needs to be represented as a linear combination of (n − r) independent vectors.
b) For λ = −3 (the double root)

A − (−3)I = [1, 2, −3; 2, 4, −6; −1, −2, 3]  →(GE)→  [1, 2, −3; 0, 0, 0; 0, 0, 0]

n = 3, r = 1, n − r = 2; only one equation remains: x1 + 2x2 − 3x3 = 0.

If we take x2 = c2 and x3 = c3 as the independent components of the eigenvector, then x1 = 3c3 − 2c2 and

X = [x1; x2; x3] = [3c3 − 2c2; c2; c3] = c2·[−2; 1; 0] + c3·[3; 0; 1]

Usually we take the two vectors [−2; 1; 0] and [3; 0; 1], so X = k2·[−2; 1; 0] + k3·[3; 0; 1].
Note: you can use any two independent vectors as [x2; x3], then calculate x1 and obtain the eigenvector X.
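As a quick numerical cross-check of this worked example (a NumPy sketch, not part of the original notes), the eigenvalues 5 and −3 and the hand-computed eigenvectors can be verified directly:

```python
import numpy as np

# Matrix from the worked example above
A = np.array([[-2.0,  2.0, -3.0],
              [ 2.0,  1.0, -6.0],
              [-1.0, -2.0,  0.0]])

vals, V = np.linalg.eig(A)
print(vals)   # 5, -3, -3 (order may vary)

# A X = lambda X for the hand-computed eigenvector X1 = (1, 2, -1) of lambda = 5
X1 = np.array([1.0, 2.0, -1.0])
print(np.allclose(A @ X1, 5 * X1))          # True

# The double eigenvalue -3 has a 2-D eigenspace spanned by (-2, 1, 0) and (3, 0, 1)
for X in (np.array([-2.0, 1.0, 0.0]), np.array([3.0, 0.0, 1.0])):
    print(np.allclose(A @ X, -3 * X))       # True, True
```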

Rules
1. The values of λ equal the roots of the characteristic polynomial derived from |A − λI| = 0. If the order of the polynomial is k, you have k real and/or complex roots; they may be distinct or repeated. A square matrix An×n can have at most n distinct roots (eigenvalues).

2. All eigenvectors corresponding to distinct eigenvalues are independent.


Example: A = [5, 3; 3, 5]

|A − λI| = |5−λ, 3; 3, 5−λ| = 0  ⟹  (5 − λ)² − 9 = 0  ⟹  λ1 = 8, λ2 = 2.

λ1 = 8:  [5−8, 3; 3, 5−8] = [−3, 3; 3, −3]  →  [−3, 3; 0, 0]  ⟹  x1 = x2  ⟹  X1 = [1; 1]

λ2 = 2:  [5−2, 3; 3, 5−2] = [3, 3; 3, 3]  →  [3, 3; 0, 0]  ⟹  x1 = −x2  ⟹  X2 = [1; −1]

X1 and X2 are independent.

3. Aᵀ has the same eigenvalues as A.
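A small numerical illustration of rules 2 and 3 (a NumPy sketch, not from the notes), using the 2×2 example above:

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [3.0, 5.0]])

# Rule 3: A and A^T have the same eigenvalues
print(np.sort(np.linalg.eigvals(A)))     # [2. 8.]
print(np.sort(np.linalg.eigvals(A.T)))   # [2. 8.]

# Rule 2: the eigenvectors (1, 1) and (1, -1) for the distinct eigenvalues 8 and 2
# are independent: the matrix having them as columns has a nonzero determinant.
P = np.array([[1.0, 1.0],
              [1.0, -1.0]])
print(np.linalg.det(P))                  # -2.0 (nonzero)
```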


Example: eigenvalues and eigenvectors of some 2-D square matrices (illustrations of each transformation omitted):

Matrix [1, 1; 0, 1] (shear):  λ² − 2λ + 1 = 0;  λ1 = λ2 = 1;  eigenvector (1, 0) only.
Matrix [k, 0; 0, k] (uniform scaling):  λ² − 2kλ + k² = 0;  λ1 = λ2 = k;  eigenvectors (1, 0) and (0, 1).
Matrix [k1, 0; 0, k2] (scaling):  (λ − k1)(λ − k2) = 0;  λ1 = k1, λ2 = k2;  eigenvectors (1, 0) and (0, 1).
Matrix [cos θ, −sin θ; sin θ, cos θ] (rotation):  λ² − 2λ·cos θ + 1 = 0;  λ1,2 = cos θ ± i·sin θ;  eigenvectors (1, −i) and (1, i).
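As an optional check of the last (rotation) row above, a NumPy sketch (assuming the standard rotation matrix written there; any angle θ will do):

```python
import numpy as np

theta = 0.7   # an arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals, vecs = np.linalg.eig(R)
print(vals)                      # cos(theta) +/- i*sin(theta)
print(np.abs(vals))              # both moduli equal 1
# Each eigenvector is proportional to (1, -i) or (1, i)
print(vecs[:, 0] / vecs[0, 0])   # approximately [1, -1j] or [1, 1j]
```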

Applications
1. Principal Directions in vector transformation (by multiplying a matrix)
A circular elastic membrane in the x1–x2 plane with boundary equation x1² + x2² = 1 is stretched so that any point P(x1, x2) on the boundary goes to point Q(y1, y2) by

[y1; y2] = A·[x1; x2] = [5, 3; 3, 5]·[x1; x2]

In a Principal Direction, point P moves to Q in the same direction: A·x = λ·x.

|A − λI| = (5 − λ)² − 9 = 0  ⟹  λ1 = 8, λ2 = 2  (eigenvalues)

λ1 = 8:  [5−8, 3; 3, 5−8]·[x1; x2] = 0  ⟹  −3x1 + 3x2 = 0, 1 independent component  ⟹  x2 = x1 = c1  ⟹  X1 = c1·[1; 1]

λ2 = 2:  [5−2, 3; 3, 5−2]·[x1; x2] = 0  ⟹  3x1 + 3x2 = 0, 1 independent component  ⟹  x2 = −x1 = −d1  ⟹  X2 = d1·[1; −1]

The principal directions are along [1; 1] and [1; −1]; on them the boundary points are stretched by the factors 8 and 2 (the eigenvalues).
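A short numerical illustration of the principal directions (NumPy sketch, not part of the notes): boundary points along [1; 1] and [1; −1] are mapped onto the same directions, stretched by the eigenvalues 8 and 2, while a generic boundary point also changes direction.

```python
import numpy as np

A = np.array([[5.0, 3.0],
              [3.0, 5.0]])

# Unit vectors along the principal directions (1, 1) and (1, -1)
p1 = np.array([1.0, 1.0]) / np.sqrt(2)
p2 = np.array([1.0, -1.0]) / np.sqrt(2)
print(A @ p1, 8 * p1)   # identical: stretched by 8, direction unchanged
print(A @ p2, 2 * p2)   # identical: stretched by 2, direction unchanged

# A generic boundary point is rotated as well as stretched
p = np.array([1.0, 0.0])
print(A @ p)            # [5. 3.] -- no longer along (1, 0)
```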

2. Discrete Time Markov Chain

A random process with state vector Xn×1, characterized as memoryless: the next state X(t+1) depends only on the current state X(t), not on the sequence of events that preceded it:

X(t+1) = A·X(t)

Example:  A = [0.8, 0.1, 0.1; 0.1, 0.7, 0.2; 0, 0.1, 0.9],  X(t) = [1; 0; 1]  ⟹  X(t+1) = A·X(t) = [0.9; 0.3; 0.9]

Here a_jk is the probability of moving from state k to state j. For example, x1(t+1) = 0.9 collects 0.8 from x1(t), 0.1 from x2(t), and 0.1 from x3(t).
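A minimal sketch of one transition step (NumPy, not in the notes); it simply reproduces the arithmetic above with the state vector X(t) = [1; 0; 1] used in the example:

```python
import numpy as np

# Transition matrix from the example: a_jk = probability of moving from state k to state j
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.0, 0.1, 0.9]])

x_t = np.array([1.0, 0.0, 1.0])   # current state vector X(t)
x_next = A @ x_t                  # next state vector X(t+1)
print(x_next)                     # [0.9 0.3 0.9]
```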

3. Vibration analysis
Two masses, m1 and m2, hang on two springs (k1 and k2) as shown. After a small disturbance, the system vibrates around its force-balanced positions y1 = 0 and y2 = 0. Calculate the displacements y1(t) and y2(t) of m1 and m2.

(Figure: spring k1 = 3 hangs from the support and carries mass m1 = 1; spring k2 = 2 hangs from m1 and carries mass m2 = 1; y1 and y2 are the displacements of m1 and m2 measured from their equilibrium positions y1 = 0 and y2 = 0.)

From Newton's 2nd law and Hooke's law:


m1·d²y1/dt² = k2·(y2 − y1) − k1·y1 = −(k1 + k2)·y1 + k2·y2

m2·d²y2/dt² = −k2·(y2 − y1) = k2·y1 − k2·y2

In matrix form, with m1 = m2 = 1, k1 = 3, k2 = 2:

[d²y1/dt²; d²y2/dt²] = [−(k1 + k2), k2; k2, −k2]·[y1; y2] = [−5, 2; 2, −2]·[y1; y2]

If y1 = x1·e^(ωt) and y2 = x2·e^(ωt), then d²y1/dt² = x1·ω²·e^(ωt) and d²y2/dt² = x2·ω²·e^(ωt), so

[x1; x2]·ω²·e^(ωt) = [−5, 2; 2, −2]·[x1; x2]·e^(ωt)  ⟹  ω²·[x1; x2] = [−5, 2; 2, −2]·[x1; x2]

This is an eigenvalue problem: eigenvalue λ = ω², eigenvector X = [x1; x2].

|−5−λ, 2; 2, −2−λ| = 0  ⟹  λ² + 7λ + 6 = 0  ⟹  λ1 = −1; λ2 = −6

For λ1 = −1:  ω² = −1  ⟹  ω = ±i;
[−5+1, 2; 2, −2+1]·[x1; x2] = [−4, 2; 2, −1]·[x1; x2] = O  →  [−4, 2; 0, 0]·[x1; x2] = O  ⟹  x2 = 2x1
X1 = c·[1; 2]  ⟹  y1 = x1·e^(ωt) = c·e^(±it),  y2 = x2·e^(ωt) = 2c·e^(±it)

For λ2 = −6:  ω² = −6  ⟹  ω = ±i√6;
[−5+6, 2; 2, −2+6]·[x1; x2] = [1, 2; 2, 4]·[x1; x2] = O  →  [1, 2; 0, 0]·[x1; x2] = O  ⟹  x1 = −2x2
X2 = c·[−2; 1]  ⟹  y1 = −2c·e^(±i√6·t),  y2 = c·e^(±i√6·t)

The system can vibrate in different modes!
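A numerical check of the two vibration modes (NumPy sketch, not from the notes): the eigenvalues of the coefficient matrix give ω² = −1 and ω² = −6, with mode shapes [1; 2] and [−2; 1].

```python
import numpy as np

# Coefficient matrix of y'' = K y for k1 = 3, k2 = 2, m1 = m2 = 1
K = np.array([[-5.0,  2.0],
              [ 2.0, -2.0]])

vals, vecs = np.linalg.eig(K)
print(vals)   # -1 and -6 (these are omega^2, so omega = +/- i and +/- i*sqrt(6))

# Mode shapes found by hand: (1, 2) for omega^2 = -1 and (-2, 1) for omega^2 = -6
print(np.allclose(K @ np.array([1.0, 2.0]),  -1 * np.array([1.0, 2.0])))    # True
print(np.allclose(K @ np.array([-2.0, 1.0]), -6 * np.array([-2.0, 1.0])))   # True
```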


2. Diagonalization of a square matrix An×n with n independent eigenvectors (distinct eigenvalues)
Diagonalization means transforming a non-diagonal matrix into an equivalent diagonal matrix, which is much simpler to operate with. If the matrix An×n has distinct eigenvalues, it must have n independent eigenvectors. Form a new matrix P whose column vectors are these eigenvectors (P is called the modal matrix of A). It can be shown that P⁻¹ exists (|P| ≠ 0) and that the matrix product P⁻¹AP = D. For this diagonal matrix D the diagonal elements are the eigenvalues. The order of the eigenvalues in D is the same as the order of the column vectors in P:

P⁻¹AP = D = [λ1, 0, ..., 0;  0, λ2, ..., 0;  ...;  0, 0, ..., λn]

Given A = [2, 3; 3, 2]:  λ1 = −1, X1 = c1·[1; −1];  λ2 = 5, X2 = c2·[1; 1];  take P = [1, 1; −1, 1], so P⁻¹ = (1/2)·[1, −1; 1, 1].

D = P⁻¹AP = (1/2)·[1, −1; 1, 1]·[2, 3; 3, 2]·[1, 1; −1, 1] = [−1, 0; 0, 5]

Change the eigenvector order in P:

P = [1, 1; 1, −1], P⁻¹ = (1/2)·[1, 1; 1, −1]  ⟹  D = P⁻¹AP = (1/2)·[1, 1; 1, −1]·[2, 3; 3, 2]·[1, 1; 1, −1] = [5, 0; 0, −1]

Given A = [5, 4; 1, 2]:  λ1 = 6, X1 = [4; 1];  λ2 = 1, X2 = [1; −1];  take X = [4, 1; 1, −1], so X⁻¹ = (1/5)·[1, 1; 1, −4].

D = X⁻¹AX = (1/5)·[1, 1; 1, −4]·[5, 4; 1, 2]·[4, 1; 1, −1] = [6, 0; 0, 1]

We can retrieve A from D and P:  PDP⁻¹ = P(P⁻¹AP)P⁻¹ = (PP⁻¹)A(PP⁻¹) = IAI = A
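Both diagonalizations are easy to reproduce numerically (a NumPy sketch, not in the notes):

```python
import numpy as np

# First example: A = [2, 3; 3, 2] with modal matrix P = [X1 X2]
A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
P = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))                            # diag(-1, 5)

# Second example: A2 = [5, 4; 1, 2] with X = [X1 X2]
A2 = np.array([[5.0, 4.0],
               [1.0, 2.0]])
X = np.array([[4.0,  1.0],
              [1.0, -1.0]])
print(np.round(np.linalg.inv(X) @ A2 @ X, 10))    # diag(6, 1)

# Retrieving A from D and P
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True
```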


3. Matrix Power calculation using A = PDP⁻¹

A² = PDP⁻¹·PDP⁻¹ = PD²P⁻¹  ⟹  Aᵏ = PDᵏP⁻¹.

Given A = [2, 3; 3, 2] with P = [1, 1; −1, 1] and P⁻¹ = (1/2)·[1, −1; 1, 1], then

A²³ = PD²³P⁻¹ = [1, 1; −1, 1]·[(−1)²³, 0; 0, 5²³]·(1/2)·[1, −1; 1, 1]
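A quick check of the power formula (NumPy sketch, not part of the notes):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [3.0, 2.0]])
P = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])

A23_direct = np.linalg.matrix_power(A, 23)
A23_diag   = P @ np.diag([(-1.0) ** 23, 5.0 ** 23]) @ np.linalg.inv(P)
print(np.allclose(A23_direct, A23_diag))   # True (within floating-point tolerance)
```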

4. Similar Matrix
If matrix C is formed from two given matrices A and B by C = B⁻¹AB, then C and A are similar matrices, and the process is called a Similarity Transformation of A.
Features of Similar Matrices
C and A share the same eigenvalues {λi}, and their eigenvectors are related by X_C = B⁻¹X_A.
1) Given the matrix X = (1/√5)·[1, 2; 2, −1] and A = [5, 4; 1, 2], test the similarity transformation.

Let P = X and P⁻¹ = X⁻¹ = Xᵀ = (1/√5)·[1, 2; 2, −1]  (X is symmetric and orthonormal, so its inverse equals its transpose).

Eigenvalues and eigenvectors of A:

|A − λI| = |5−λ, 4; 1, 2−λ| = 0  ⟹  (5−λ)(2−λ) − 4 = 0  ⟹  λ² − 7λ + 6 = 0  ⟹  (λ − 6)(λ − 1) = 0  ⟹  λ1 = 6, λ2 = 1.

Eigenvectors:
λ1 = 6:  [5−6, 4; 1, 2−6] = [−1, 4; 1, −4]  →  [−1, 4; 0, 0]  ⟹  X1 = [4; 1]
λ2 = 1:  [5−1, 4; 1, 2−1] = [4, 4; 1, 1]  →  [1, 1; 0, 0]  ⟹  X2 = [1; −1]

Similarity Matrix of A:

Ã = P⁻¹AP = (1/√5)·[1, 2; 2, −1]·[5, 4; 1, 2]·(1/√5)·[1, 2; 2, −1] = (1/5)·[23, 6; 21, 12]
Eigenvalues and eigenvectors of Ã:

|Ã − λI| = |23/5 − λ, 6/5; 21/5, 12/5 − λ| = 0  ⟹  λ² − 7λ + (23·12 − 6·21)/25 = λ² − 7λ + 6 = 0  ⟹  λ1 = 6, λ2 = 1  (the same λs as A).

For λ1 = 6:  [23/5 − 6, 6/5; 21/5, 12/5 − 6] = [−7/5, 6/5; 21/5, −18/5]  →  [−7, 6; 0, 0]  ⟹  the eigenvector of Ã is [6; 7]

Check with the eigenvector relation X_C = B⁻¹X_A (here B = P):

P⁻¹X1 = (1/√5)·[1, 2; 2, −1]·[4; 1] = (1/√5)·[6; 7], which lies along [6; 7].

Similarity!
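The whole similarity test can be reproduced numerically (NumPy sketch, not from the notes):

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [1.0, 2.0]])
P = (1.0 / np.sqrt(5.0)) * np.array([[1.0,  2.0],
                                     [2.0, -1.0]])   # symmetric and orthonormal, so P^-1 = P^T = P

A_tilde = np.linalg.inv(P) @ A @ P
print(np.round(5 * A_tilde, 10))            # [[23, 6], [21, 12]], i.e. A_tilde = (1/5)[23, 6; 21, 12]

# Same eigenvalues as A
print(np.sort(np.linalg.eigvals(A)))        # [1. 6.]
print(np.sort(np.linalg.eigvals(A_tilde)))  # [1. 6.]

# Eigenvector relation X_C = P^-1 X_A for lambda = 6
X_A = np.array([4.0, 1.0])
X_C = np.linalg.inv(P) @ X_A
print(np.allclose(A_tilde @ X_C, 6 * X_C))  # True
```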

5. Orthogonal Matrix
Orthogonal vectors: two vectors A1×n and B1×n whose scalar (dot) product is zero, A·B = A1×n·(B1×n)ᵀ = 0.
Orthogonal vectors must be independent, but independent vectors may not be orthogonal.
"Perpendicular" is limited to 3-D; "orthogonal" applies to any dimension.
Orthogonal matrix: a square matrix whose column/row vectors {aj} are all mutually orthogonal: aj·ak = 0 for j ≠ k.
Orthonormal matrix (normalize the column/row vectors of the orthogonal matrix): a real square matrix is orthonormal iff its column/row vectors a1, ..., an satisfy

aj·ak = 0 if j ≠ k   (different column/row vectors are orthogonal to each other)
aj·ak = 1 if j = k   (all column/row vectors have length 1)

Features of an orthonormal matrix: (1) Aᵀ = A⁻¹; (2) |A| = ±1; (3) every eigenvalue has |λ| = 1; and (4) the column/row vectors form a unit perpendicular (Cartesian) coordinate system.
Check a given orthonormal matrix A:

A = (1/3)·[2, 1, 2; −2, 2, 1; 1, 2, −2];   Aᵀ = (1/3)·[2, −2, 1; 1, 2, 2; 2, 1, −2]

AAᵀ = (1/9)·[2, 1, 2; −2, 2, 1; 1, 2, −2]·[2, −2, 1; 1, 2, 2; 2, 1, −2] = [1, 0, 0; 0, 1, 0; 0, 0, 1] = I
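The same check can be done numerically (NumPy sketch, using the matrix as reconstructed above):

```python
import numpy as np

A = (1.0 / 3.0) * np.array([[ 2.0, 1.0,  2.0],
                            [-2.0, 2.0,  1.0],
                            [ 1.0, 2.0, -2.0]])

print(np.allclose(A @ A.T, np.eye(3)))   # True: A^T = A^-1, so A is orthonormal
print(np.round(np.linalg.det(A), 10))    # -1.0 (always +1 or -1 for an orthonormal matrix)
print(np.abs(np.linalg.eigvals(A)))      # every eigenvalue has modulus 1
```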

Rule
From a symmetric matrix A one can construct an orthonormal matrix X using its normalized eigenvectors.
Two steps: (1) calculate the eigenvectors; (2) normalize the eigenvectors.
Example: A = [1, 2; 2, 4]

|A − λI| = 0  ⟹  (1 − λ)(4 − λ) − 4 = 0  ⟹  λ² − 5λ = 0  ⟹  λ1 = 5, λ2 = 0.

λ1 = 5:  [1−5, 2; 2, 4−5] = [−4, 2; 2, −1]  →  [−2, 1; 0, 0]  ⟹  −2x1 + x2 = 0  ⟹  X1 ∝ [1; 2], normalized X1 = [1/√5; 2/√5]

λ2 = 0:  [1, 2; 2, 4]  →  [1, 2; 0, 0]  ⟹  x1 + 2x2 = 0  ⟹  X2 ∝ [−2; 1], normalized X2 = [−2/√5; 1/√5]

X = [1/√5, −2/√5; 2/√5, 1/√5]

Tests:
(1) Column dot product: (1/√5)(−2/√5) + (2/√5)(1/√5) = 0  ⟹  the columns are orthogonal (and each has length 1).
(2) XX⁻¹ = XXᵀ = [1/√5, −2/√5; 2/√5, 1/√5]·[1/√5, 2/√5; −2/√5, 1/√5] = [1, 0; 0, 1]  ⟹  X is an orthonormal matrix.
(3) |X| = (1/√5)(1/√5) − (−2/√5)(2/√5) = 1/5 + 4/5 = 1.
(4) Eigenvalues of X:  |X − λI| = (1/√5 − λ)² + 4/5 = 0  ⟹  λ = (1 ± 2i)/√5  ⟹  |λ| = 1.
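A numerical version of this construction (NumPy sketch, not from the notes); numpy.linalg.eigh returns the eigenvectors of a symmetric matrix already normalized and mutually orthogonal:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

vals, X = np.linalg.eigh(A)   # eigh is the routine for symmetric matrices
print(vals)                   # [0. 5.]
print(X)                      # columns are normalized eigenvectors

# The eigenvector matrix is orthonormal: X X^T = I and |det X| = 1
print(np.allclose(X @ X.T, np.eye(2)))   # True
print(abs(np.linalg.det(X)))             # 1.0
```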
Specific matrices whose eigenvectors can be used to build a coordinate basis:
1) A square matrix An×n with n distinct eigenvalues has n independent eigenvectors; the matrix built from them (the modal matrix) is invertible, and its column vectors form a basis for Rⁿ (in general not an orthogonal one).
2) A symmetric matrix A yields an orthogonal system of eigenvectors and forms an orthonormal matrix (after normalization) whose column/row vectors are a basis of Rⁿ (a Cartesian coordinate system).
