
EE 761 - Modern Control

Background - Matrix Operations


Fall 1997

Lecture Notes
Definitions:
A set of linear equations such as
y_1 = a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + ...
y_2 = a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + ...
can be written in matrix form as
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \cdots \\ a_{21} & a_{22} & a_{23} & \cdots \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \end{bmatrix}

Dimension:
An n×m matrix has n rows and m columns.

Addition
Only matrices with identical dimensions can be added
A + B = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}
Commutative & Associative Property: Matrices can be added in any order.
Y = AX + BX
= (A + B)X
= (B + A)X
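A small numpy sketch of this property (the example values are arbitrary, not from the notes):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
X = np.array([1, -1])

# Y = AX + BX = (A + B)X = (B + A)X
print(A @ X + B @ X)               # [-2 -2]
print((A + B) @ X)                 # same result
print((B + A) @ X)                 # same result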

Multiplication:
A \cdot B = \begin{bmatrix} \sum_j a_{1j} b_{j1} & \sum_j a_{1j} b_{j2} \\ \sum_j a_{2j} b_{j1} & \sum_j a_{2j} b_{j2} \end{bmatrix}

Example:
\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{bmatrix} = \begin{bmatrix} 1 \cdot 7 + 2 \cdot 9 + 3 \cdot 11 & 1 \cdot 8 + 2 \cdot 10 + 3 \cdot 12 \\ 4 \cdot 7 + 5 \cdot 9 + 6 \cdot 11 & 4 \cdot 8 + 5 \cdot 10 + 6 \cdot 12 \end{bmatrix} = \begin{bmatrix} 58 & 64 \\ 139 & 154 \end{bmatrix}
The inner dimensions must match for matrices to be multiplied:
A_{3×7} \cdot B_{7×2} = (AB)_{3×2}: a 3×7 matrix can be multiplied by a 7×2 matrix,
B_{7×2} \cdot A_{3×7} does not make sense: a 7×2 matrix cannot be multiplied by a 3×7 matrix.
Properties: Matrix multiplication is distributive and associative, but not commutative:
A(B + C) = AB + AC (distributive)
(AB)C = A(BC) (associative)
AB ≠ BA in general (not commutative)
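A numpy sketch of these properties (the matrices are arbitrary illustrations, not from the notes):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2x3
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])           # 3x2

print(A @ B)                       # 2x2: [[58, 64], [139, 154]]
print(B @ A)                       # 3x3: defined, but a different shape entirely

# Even when both products are square, AB and BA generally differ
P = np.array([[0, 1], [0, 0]])
Q = np.array([[0, 0], [1, 0]])
print(P @ Q)                       # [[1, 0], [0, 0]]
print(Q @ P)                       # [[0, 0], [0, 1]]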

Transpose
The transpose swaps rows and columns:
A^T = \begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \end{bmatrix}
(AB) T = B T A T
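A quick numpy check of this identity (arbitrary values):

import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])

# (AB)^T equals B^T A^T
print(np.array_equal((A @ B).T, B.T @ A.T))   # True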

Determinant
The determinant is the (signed) area, or volume in higher dimensions, enclosed by the vectors composing the columns.
|A| = a_{11} D_{11} - a_{12} D_{12} + a_{13} D_{13} - ...
where the minor D_{ij} is the determinant of the submatrix of A formed by excluding row i and column j.
For 1×1 to 3×3 matrices, this works out to
|a| = a
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a \begin{vmatrix} e & f \\ h & i \end{vmatrix} - b \begin{vmatrix} d & f \\ g & i \end{vmatrix} + c \begin{vmatrix} d & e \\ g & h \end{vmatrix}

The determinant is zero if the area enclosed is zero. This happens when the columns are linearly dependent.
|A \cdot B| = |A| \cdot |B|
assuming both A and B are square.
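A numpy sketch of the determinant product rule and the linear-dependence case (arbitrary values):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0]])

print(np.linalg.det(A))            # -2.0
print(np.linalg.det(B))            # -1.0
print(np.linalg.det(A @ B))        # 2.0 = det(A) * det(B)

# Linearly dependent columns give a zero determinant (to within round-off)
C = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.linalg.det(C))            # 0.0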

Adjoint
The adjoint (adjugate) is the transpose of the matrix of cofactors, where C_{ij} = (-1)^{i+j} D_{ij}:
Adj(A) = \begin{bmatrix} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \end{bmatrix}

Inverse
If B is the inverse of A then
B = A^{-1}
BA = I
AB = I

(Only square matrices with a nonzero determinant have inverses.) To find a matrix inverse, use cofactors or Gauss elimination. The cofactor method is
A^{-1} = \frac{Adj(A)}{|A|}
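A minimal numpy sketch of the cofactor (adjugate) method for a 2×2 matrix, checked against numpy's built-in inverse (the matrix is an arbitrary example):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Adjugate of a 2x2: swap the diagonal entries, negate the off-diagonal
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])

A_inv = adj / np.linalg.det(A)     # A^{-1} = Adj(A) / |A|
print(A_inv)
print(np.linalg.inv(A))            # matches the cofactor result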

Rank
The rank of a matrix is the number of linearly independent columns (or rows; the two counts are always equal).
Linearly Independent: A vector Y is linearly independent of vectors X_i if no constants a_i exist such that
Y = a_1 X_1 + a_2 X_2 + a_3 X_3 + ...
For example,
 1 0 
 
 0 1 
clearly has columns that are linearly independent. Thus, the rank is two.

 1 2 
 
 2 4 
has one linearly independent column (the second column is 2x the first). Thus, the rank is one.
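Both examples can be checked with numpy's rank routine:

import numpy as np

I2 = np.array([[1, 0], [0, 1]])
D  = np.array([[1, 2], [2, 4]])

print(np.linalg.matrix_rank(I2))   # 2: the columns are independent
print(np.linalg.matrix_rank(D))    # 1: the second column is 2x the first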

Eigenvectors, Eigenvalues
Assume a system described by
\dot{X} = AX
The eigenvalues are the solutions to
|\lambda I - A| = 0

The eigenvectors are the solutions to


A \Lambda_i = \lambda_i \Lambda_i

The natural response of this system will be of the form


x(t) = x_1(0) e^{\lambda_1 t} + x_2(0) e^{\lambda_2 t} + x_3(0) e^{\lambda_3 t} + ...
where the x_i(0) are set by the initial conditions and the \lambda_i are the eigenvalues of A. Thus, the eigenvalues tell you how the system will respond.
The eigenvectors tell you what states are associated with what eigenvalue. For example, the dynamic system:
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
has eigenvalues of {-2, -3}

|\lambda I - A| = \begin{vmatrix} \lambda & -1 \\ 6 & \lambda + 5 \end{vmatrix} = \lambda(\lambda + 5) + 6 = (\lambda + 2)(\lambda + 3) = 0

The eigenvector associated with the eigenvalue of -3 is
\begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = -3 \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -3 \end{bmatrix}
The eigenvector associated with the eigenvalue of -2 is
\begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = -2 \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}
\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}
The natural response of this system will then be of the form
 1  −3t  1  −2t
α e + β  e
 −3   −2 
where α and β are determined by the initial conditions.
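numpy can cross-check the eigenvalues and eigenvectors (numpy returns unit-length eigenvectors, so they come back as scaled versions of [1, -3] and [1, -2], and the ordering may vary):

import numpy as np

A = np.array([[0.0, 1.0],
              [-6.0, -5.0]])

vals, vecs = np.linalg.eig(A)
print(vals)                        # -2 and -3 (order may vary)
print(vecs)                        # columns are the unit-norm eigenvectors

# Rescale each column so its first entry is 1: recovers [1, -2] and [1, -3]
print(vecs / vecs[0, :])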
Example 2: If x_1(0) = 0 and x_2(0) = 1, find x_1(t) and x_2(t).
x(t) = \alpha \begin{bmatrix} 1 \\ -3 \end{bmatrix} e^{-3t} + \beta \begin{bmatrix} 1 \\ -2 \end{bmatrix} e^{-2t}
At t = 0,
x(0) = \alpha \begin{bmatrix} 1 \\ -3 \end{bmatrix} + \beta \begin{bmatrix} 1 \\ -2 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}
so
\alpha = -1, \beta = 1
x(t) = -\begin{bmatrix} 1 \\ -3 \end{bmatrix} e^{-3t} + \begin{bmatrix} 1 \\ -2 \end{bmatrix} e^{-2t}

Similar Matrices
You can use a change of variables to redefine the states while leaving the overall transfer function unchanged. For example,
the dynamic system
\dot{X} = AX + BU
Y = CX + DU
can be written in terms of a new state variable, Z, related to X as
X = TZ

resulting in the dynamics being
\dot{Z} = T^{-1}ATZ + T^{-1}BU
Y = CTZ + DU
Transformations do not change the eigenvalues (how the system behaves is unchanged by a change of variables). Eigenvectors do change, however: since X = TZ,
\Lambda_X = T \Lambda_Z
Example: Let the states be defined as
\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} -6x_1 - 2x_2 \\ 6x_1 + 3x_2 \end{bmatrix}
Z = \begin{bmatrix} -6 & -2 \\ 6 & 3 \end{bmatrix} X = T^{-1} X
then the system matrix changes to
A_Z = T^{-1} A_X T
A_Z = \begin{bmatrix} -6 & -2 \\ 6 & 3 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -6 & -5 \end{bmatrix} \begin{bmatrix} -6 & -2 \\ 6 & 3 \end{bmatrix}^{-1} = \begin{bmatrix} -2 & 0 \\ 0 & -3 \end{bmatrix}
