
UNIT-III

Real and Complex Matrices, Quadratic Forms

• Real Matrices – Symmetric, Skew-symmetric, Orthogonal

• Linear transformations- Orthogonal Transformation

• Complex Matrices- Hermitian, Skew-Hermitian and Unitary

• Eigen Values and Eigen Vectors of Complex matrices and Their Properties

• Quadratic forms- Reduction to Canonical Form

• Rank- Positive, Negative Definite, Semi-definite – Index, Signature- Sylvester's Law

• Objective Type Questions

Summary
1. Definitions:

Symmetric matrix
In linear algebra, a symmetric matrix is a square matrix A that is equal to its transpose:

A = A^T.

The entries of a symmetric matrix are symmetric with respect to the main diagonal (top left to
bottom right). So if the entries are written as A = (aij), then

aij = aji for all indices i and j.

The following 3×3 matrix is symmetric:

[ 1  7  3 ]
[ 7  4 -5 ]
[ 3 -5  6 ]

A matrix is called skew-symmetric or antisymmetric if its transpose is the same as its negative. The
following 3×3 matrix is skew-symmetric:

[ 0  1 -2 ]
[-1  0  3 ]
[ 2 -3  0 ]

Skew-symmetric matrix
In linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix
A whose transpose is also its negative; that is, it satisfies the equation:

A^T = -A,

or in component form, if A = (aij): aji = -aij for all i and j.

For example, the following matrix is skew-symmetric:

[ 0  2 -1 ]
[-2  0 -4 ]
[ 1  4  0 ]

Compare this with a symmetric matrix, whose transpose is the same as the matrix (A^T = A),

or an orthogonal matrix, the transpose of which is equal to its inverse (A^T = A^{-1}).

The following matrix is neither symmetric nor skew-symmetric:

[ 1  2 ]
[ 0  1 ]

Every diagonal matrix is symmetric, since all off-diagonal entries are zero. Similarly, each diagonal
element of a skew-symmetric matrix must be zero, since each is its own negative.
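
A minimal NumPy sketch (using the example matrices above) that verifies the two defining properties:

import numpy as np

A = np.array([[1, 7, 3],
              [7, 4, -5],
              [3, -5, 6]])          # the symmetric example
B = np.array([[0, 2, -1],
              [-2, 0, -4],
              [1, 4, 0]])           # the skew-symmetric example

print(np.array_equal(A, A.T))       # True: A = A^T
print(np.array_equal(B.T, -B))      # True: B^T = -B
print(np.diag(B))                   # all zeros, as noted above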

Orthogonal matrix
In linear algebra, an orthogonal matrix is a square matrix with real entries whose columns (or
rows) are orthogonal unit vectors (i.e., orthonormal). Because the columns are unit vectors in
addition to being orthogonal, some people use the term orthonormal to describe such matrices.

Equivalently, a matrix Q is orthogonal if its transpose is equal to its inverse:

Q^T = Q^{-1},

alternatively,

Q^T Q = Q Q^T = I.

(OR)
Definition: An n × n matrix A is called an orthogonal matrix whenever A^T A = I.
EXAMPLE:
 −1 0   1 0   −1 0   cos θ − sin θ 
 ,  ,  ,  
 0 −1  0 −1  0 1   sin θ cos θ 

Conjugate transpose

"Adjoint matrix" redirects here. An adjugate matrix is sometimes called a "classical adjoint matrix".

In mathematics, the conjugate transpose, Hermitian transpose, or adjoint matrix of an m-by-
n matrix A with complex entries is the n-by-m matrix A* obtained from A by taking the transpose and then
taking the complex conjugate of each entry (i.e. negating their imaginary parts but not their real parts). The
conjugate transpose is formally defined by

(A*)ij = conj(Aji),

where the subscripts denote the i,j-th entry, for 1 ≤ i ≤ n and 1 ≤ j ≤ m, and conj denotes the
scalar complex conjugate. (The complex conjugate of a + bi, where a and b are real, is a − bi.)

This definition can also be written as

A* = (conj(A))^T = conj(A^T),

where A^T denotes the transpose and conj(A) denotes the matrix with complex conjugated entries.

Other names for the conjugate transpose of a matrix are Hermitian conjugate, or transjugate. The
conjugate transpose of a matrix A can be denoted by any of these symbols:

• A* or A^H, commonly used in linear algebra
• A^† (sometimes pronounced "A dagger"), universally used in quantum mechanics
• A^+, although this symbol is more commonly used for the Moore–Penrose pseudoinverse

In some contexts, A* denotes only the matrix with complex conjugated entries, and thus the conjugate transpose
is denoted by (A*)^T or (A^T)*.

EXAMPLE:

If A = [ 1    −2−i ]
       [ 1+i   i   ]

then A* = [ 1     1−i ]
          [ −2+i  −i  ]
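
In NumPy (a minimal sketch of the same example), .conj().T computes the conjugate transpose:

import numpy as np

A = np.array([[1, -2-1j],
              [1+1j, 1j]])

A_star = A.conj().T                # transpose, then conjugate each entry
print(A_star)                      # [[1, 1-i], [-2+i, -i]]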

Hermitian matrix
A Hermitian matrix (or self-adjoint matrix) is a square matrix with complex entries which is equal to its
own conjugate transpose – that is, the element in the ith row and jth column is equal to the complex
conjugate of the element in the jth row and ith column, for all indices i and j:

aij = conj(aji).

If the conjugate transpose of a matrix A is denoted by A^H, then the Hermitian property can be written
concisely as

A = A^H.

Hermitian matrices can be understood as the complex extension of real symmetric matrices.

For example,

[ 3     2+i ]
[ 2−i   1   ]

is a Hermitian matrix.
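
A quick NumPy check of this example (a minimal sketch):

import numpy as np

H = np.array([[3, 2+1j],
              [2-1j, 1]])

print(np.allclose(H, H.conj().T))   # True: H = H^H
print(np.linalg.eigvalsh(H))        # real eigen values (see property 3 below)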

Skew-Hermitian matrix
In linear algebra, a square matrix A with complex entries is said to be skew-Hermitian or
antihermitian if its conjugate transpose is equal to its negative. That is, the matrix A is skew-
Hermitian if it satisfies the relation

A^H = −A,

where A^H denotes the conjugate transpose of the matrix. In component form, this means that

aij = −conj(aji)

for all i and j, where aij is the i,j-th entry of A, and conj denotes complex conjugation.

Skew-Hermitian matrices can be understood as the complex versions of real skew-symmetric
matrices, or as the matrix analogue of the purely imaginary numbers.
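
For instance (an illustrative sketch), the matrix below is skew-Hermitian; note that its diagonal entries are purely imaginary or zero:

import numpy as np

S = np.array([[1j, 2+1j],
              [-2+1j, 0]])

print(np.allclose(S.conj().T, -S))  # True: S^H = -S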

Unitary matrix
In mathematics, a unitary matrix is an n by n complex matrix U satisfying the condition

U^H U = U U^H = I_n,

where I_n is the identity matrix in n dimensions and U^H is the conjugate transpose (also called the
Hermitian adjoint) of U. Note this condition says that a matrix U is unitary if and only if it has an
inverse which is equal to its conjugate transpose:

U^{-1} = U^H.

A unitary matrix in which all entries are real is an orthogonal matrix. Just as an orthogonal matrix
G preserves the (real) inner product of two real vectors,

<Gx, Gy> = <x, y>,

so also a unitary matrix U satisfies

<Ux, Uy> = <x, y>

for all complex vectors x and y, where <·, ·> now stands for the standard inner product on C^n.
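
A minimal NumPy sketch (illustrative matrix and vectors) verifying that a unitary matrix preserves the complex inner product:

import numpy as np

U = (1/np.sqrt(2)) * np.array([[1, 1j],
                               [1j, 1]])        # a simple unitary matrix

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U^H U = I

x = np.array([1+2j, 3-1j])
y = np.array([2-1j, 1j])

# np.vdot conjugates its first argument: the standard inner product on C^n.
print(np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))  # True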

2. Properties of eigen values of real and complex matrices:

1. If λ is a characteristic root of an orthogonal matrix, then 1/λ is also a characteristic root.
2. The eigen values of an orthogonal matrix are of unit modulus.
3. The eigen values of a Hermitian matrix are all real.
4. The eigen values of a real symmetric matrix are all real.
5. The eigen values of a skew-Hermitian matrix are either purely imaginary or zero.
6. The eigen values of a real skew-symmetric matrix are either purely imaginary or zero.
7. The eigen values of a unitary matrix are of unit modulus.
8. If A is a nilpotent matrix, then 0 is the only eigen value of A.
9. If A is an involutory matrix, its possible eigen values are 1 and −1.
10. If A is an idempotent matrix, its possible eigen values are 0 and 1.

A few of these properties are verified numerically in the sketch below.
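
A minimal NumPy sketch (reusing the illustrative matrices from the definitions above) checking properties 3, 5, and 7:

import numpy as np

H = np.array([[3, 2+1j], [2-1j, 1]])                 # Hermitian
S = np.array([[1j, 2+1j], [-2+1j, 0]])               # skew-Hermitian
U = (1/np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])    # unitary

print(np.allclose(np.linalg.eigvals(H).imag, 0))     # True: all real
print(np.allclose(np.linalg.eigvals(S).real, 0))     # True: purely imaginary or zero
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1))  # True: unit modulus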

3. Transformations:

(a) The transformation X = AY, where A = (aij)n×n and X = [x1 x2 … xn]^T,
Y = [y1 y2 … yn]^T are column vectors, transforms the vector Y to the vector X over the matrix A.
This transformation is linear.

(b) Non-singular transformation:

(i) If A is non-singular (|A| ≠ 0), then Y = AX is a non-singular transformation.
(ii) Then X = A^{-1}Y is the inverse transformation of Y = AX.

(c) Orthogonal transformation: If A is an orthogonal matrix, then Y = AX is an orthogonal transformation.
Since A is orthogonal, A^T = A^{-1}, so Y^T Y = X^T X;
i.e., Y = AX transforms (x1^2 + x2^2 + … + xn^2) to (y1^2 + y2^2 + … + yn^2), as the sketch below illustrates.
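
A minimal NumPy sketch (illustrative rotation matrix and vector) showing that an orthogonal transformation preserves the sum of squares:

import numpy as np

theta = 1.2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal

X = np.array([3.0, 4.0])
Y = A @ X

print(X @ X, Y @ Y)    # both 25.0 (up to rounding): Y^T Y = X^T X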

4. Quadratic forms: A homogeneous polynomial of 2nd degree in n variables x1, x2, …, xn is called a quadratic form.
Thus, q = Σ Σ aij xi xj, where i, j run from 1 to n,
(or) q = [a11 x1^2 + a22 x2^2 + … + ann xn^2 + (a12 + a21) x1 x2 + (a13 + a31) x1 x3 + … + …]
is a quadratic form in n variables x1, x2, …, xn.

5. Matrix of a quadratic form q: If A is a symmetric matrix,
q = X^T A X is the matrix representation of q and A is the matrix of q, where (aij + aji) = 2 aij is the coefficient of xi xj
[i.e. aij = aji = 1/2 × coefficient of xi xj, for i ≠ j].
Then q = X^T A X = [x1 x2 … xn] A [x1 x2 … xn]^T, as illustrated below.
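
As an illustration (a minimal sketch; the particular form is made up), the quadratic form q = 2x1^2 + 3x2^2 + 4x1x2 has a11 = 2, a22 = 3, and a12 = a21 = 4/2 = 2:

import numpy as np

A = np.array([[2.0, 2.0],
              [2.0, 3.0]])   # matrix of q = 2x1^2 + 3x2^2 + 4x1x2

x = np.array([1.0, -2.0])
q = x @ A @ x                # X^T A X
print(q)                     # 2(1) + 3(4) + 4(1)(-2) = 6.0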

6. Rank of a quadratic form: If q = X^T A X, then the rank of A is the rank of the quadratic form q.

(a) If rank of A = r = n, q is a non-singular form.
(b) If r < n, q is singular.

7. Canonical form or normal form of q: A real quadratic form q in which product terms are missing (i.e. all terms are square terms only) is called the canonical form of q;
i.e. q = a1 x1^2 + a2 x2^2 + … + an xn^2 is a canonical form.

8. Reduction to canonical form: If D = diag[d1, d2, …, dr] is the diagonalization of A, then
q1 = d1 x1^2 + d2 x2^2 + … + dr xr^2 (where r = rank of A) is the canonical form of q = X^T A X.

9. Nature of a quadratic form:

1. If q = X^T A X is the given quadratic form (in n variables) of rank r,
then q1 = d1 x1^2 + d2 x2^2 + … + dr xr^2 is the canonical form of q
[each di is +ve, −ve, or zero].

(a) Index: The number of +ve terms in q1 is called the index s of the quadratic form q.
(b) The number of non-+ve terms = r − s.
(c) Signature = s − (r − s) = 2s − r.

2. The quadratic form q is said to be

(a) +ve definite if r = n and s = n
(b) −ve definite if r = n and s = 0
(c) +ve semi-definite if r < n and s = r
(d) −ve semi-definite if r < n and s = 0
(e) indefinite in all other cases

3. To find the nature of q with the help of principal minors:

Let q = X^T A X be the given quadratic form and let M1, M2, M3, …
be the leading principal minors of A.

(a) q is +ve definite iff Mj > 0 for every j ≤ n.
(b) q is −ve definite iff M1, M3, M5, … are all −ve and M2, M4, M6, … are all +ve
(i.e. the minors alternate in sign, starting with M1 < 0).
(c) q is +ve semi-definite if Mj ≥ 0 for every j ≤ n and at least one Mj = 0.
(d) q is −ve semi-definite if, in case (b), some Mj are 0.
(e) In all other cases q is indefinite.

A numerical illustration is given below.
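
A minimal NumPy sketch (illustrative matrix) computing the leading principal minors and applying test (a):

import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])   # matrix of a quadratic form

# Leading principal minors M1, M2, M3.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print(minors)                      # approximately [2.0, 3.0, 4.0] -> all +ve

if all(m > 0 for m in minors):
    print("q is positive definite")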

4. To find the nature of q by examining the eigen values of A:

If q = X^T A X is a quadratic form in n variables, then it is

a. +ve definite iff all eigen values of A are +ve.
b. −ve definite iff all eigen values are −ve.
c. +ve semi-definite if all eigen values are ≥ 0 and at least one eigen value = 0.
d. −ve semi-definite if all eigen values are ≤ 0 and at least one eigen value is zero.
e. Indefinite if A has +ve as well as −ve eigen values.

The same test is sketched in code below.
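
A minimal NumPy sketch of this eigen-value test (the classify helper is illustrative, not standard):

import numpy as np

def classify(A, tol=1e-12):
    """Classify the quadratic form X^T A X by the eigen values of A."""
    w = np.linalg.eigvalsh(A)       # A symmetric, so eigen values are real
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semi-definite"
    if np.all(w <= tol):
        return "negative semi-definite"
    return "indefinite"

print(classify(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # positive definite
print(classify(np.array([[1.0, 2.0], [2.0, 1.0]])))    # indefinite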

10. Methods of reduction of a quadratic form to the canonical form:

(a) Lagrange's method: A quadratic form can be reduced by this method to a canonical form by completion of squares.

(b) Diagonalization method: Write A = I3 A I3 [if A = (aij)3×3]. Apply elementary row
transformations on the L.H.S. and on the prefactor of the R.H.S.;
apply the corresponding column transformations on the L.H.S. as well as on the
post-factor of the R.H.S. Continue this process till the equation is reduced to
the form

D = P^T A P, where D is the diagonal matrix

D = [ d1  0   0  ]
    [ 0   d2  0  ]
    [ 0   0   d3 ]

Then the canonical form is q1 = Y^T (P^T A P) Y, where
Y = [y1 y2 y3]^T; i.e., if q = X^T A X with X = [x1 x2 x3]^T, then
q1 = d1 y1^2 + d2 y2^2 + d3 y3^2.
Here X = PY is the corresponding transformation.
(c) Orthogonal reduction of q = X^T A X:

(i) Find the eigen values λi and the corresponding eigen vectors Xi
(i = 1, 2, …, n) of A.
(ii) Form the modal matrix B = [X1 X2 … Xn].
(iii) Normalize each column vector Xi of B by dividing it by its magnitude,
and write the normalized modal matrix P, which is orthogonal (i.e. P^T = P^{-1}).
(iv) Then X = PY reduces q to q1,
where q1 = λ1 y1^2 + λ2 y2^2 + … + λn yn^2
= Y^T (P^T A P) Y.
(X = PY is known as the orthogonal transformation.) The whole procedure is sketched below.
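
A minimal NumPy sketch of the whole procedure (illustrative matrix; numpy.linalg.eigh returns the eigen values together with an already-normalized orthogonal modal matrix for a symmetric A):

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])      # matrix of q = 3x1^2 + 2x1x2 + 3x2^2

lam, P = np.linalg.eigh(A)      # eigen values and normalized modal matrix
print(lam)                      # [2. 4.]
print(np.allclose(P.T, np.linalg.inv(P)))      # True: P^T = P^{-1}

# X = PY reduces q to q1 = 2 y1^2 + 4 y2^2:
print(np.allclose(P.T @ A @ P, np.diag(lam)))  # True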

11. Sylvester's law of inertia: The signature of a real quadratic form is invariant for all normal reductions.

****
