
MA 102 Project (Team Prodigies [S10])

HOMOGENEOUS LINEAR SYSTEM

Introduction :-

Def :- A coupled system of equations (linear algebraic or differential) with n unknowns and
m equations is generally referred to as a linear system of equations.
In real-life applications, linear systems of equations arise whenever there are certain
constraints on the process or system involved. For example, a mass-spring system leads to a
linear system of equations, as sketched below.
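As a minimal illustration (assuming an undamped single mass, a case not spelled out in the slides), Newton's law for a mass m on a spring of stiffness k, m u'' + k u = 0, becomes a first-order linear system with x1 = u and x2 = u':

\[
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}' =
\begin{pmatrix} 0 & 1 \\ -k/m & 0 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.
\]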
Index
➢ Homogeneous Linear System
○ Definition
○ Method of Fundamental Matrices for Systems of Differential Equations
○ Diagonalizable Matrices
○ Linear Systems of Equations in Algebra
○ Linear Dependence and Independence
○ Eigenvectors and Eigenvalues
○ Theorems on nth-order Systems of Differential Equations
○ Applications
➢ Bibliography
➢ Acknowledgements
Different Methods to Solve Linear Systems of Equations
Homogeneous systems of equations with constant coefficients can be solved in different ways.
The following methods are the most commonly used:

● elimination method (the method of reduction of n equations to a single equation of
the nth order);
● method of integrable combinations;
● method of eigenvalues and eigenvectors (including the method of undetermined
coefficients or using the Gauss-Jordan form in the case of multiple roots of the
characteristic equation);
● method of the matrix exponential.
Elimination Method

Using the method of elimination, a normal linear system of n equations can be reduced to a
single linear equation of nth order. This method is useful for simple systems, especially for
systems of order 2.

For a 2x2 system, we solve the first equation for a12*x2, substitute it into the second, and
obtain a second-order differential equation with constant coefficients. From its characteristic
equation we obtain the roots, finally leading to a general solution built from exponential
functions e^(rt). A sketch of the computation follows.
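A brief sketch for a general 2x2 system (the coefficients a_ij are generic; only the standard substitution is summarized here): starting from

\[
x_1' = a_{11}x_1 + a_{12}x_2, \qquad x_2' = a_{21}x_1 + a_{22}x_2,
\]

the first equation gives \(a_{12}x_2 = x_1' - a_{11}x_1\); differentiating the first equation and substituting,

\[
x_1'' = a_{11}x_1' + a_{12}x_2' = a_{11}x_1' + a_{12}a_{21}x_1 + a_{22}\bigl(x_1' - a_{11}x_1\bigr),
\]

which is the second-order equation with constant coefficients

\[
x_1'' - (a_{11}+a_{22})\,x_1' + (a_{11}a_{22} - a_{12}a_{21})\,x_1 = 0.
\]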
Method of Matrix Exponential
Consider the equation x' = Ax.

The column vectors x(1)(t), x(2)(t), ... , x(n)(t), being n linearly independent solutions, form
the columns of a fundamental matrix Ψ(t).

Since the columns are linearly independent, the matrix Ψ(t) is non-singular.

In general, the solution can be written as: x = Ψ(t)c.

Considering an IVP with x(t0) = x0, the solution can be reframed as: x = Ψ(t)Ψ^(-1)(t0)x0.

The power series defining e^(At) converges for all t. On differentiating it term by term, we
recover the differential equation, i.e. d/dt e^(At) = A e^(At).

On putting t = 0, every term of the series except the identity vanishes, indicating e^(At) = I
at t = 0.

The general solution is thus x = e^(At)x0.
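A minimal numerical sketch of this (the coefficient matrix and initial condition are assumed for illustration, not taken from the slides), using scipy's matrix exponential:

import numpy as np
from scipy.linalg import expm

# Example system x' = A x with an assumed matrix A and initial condition x0.
A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
x0 = np.array([2.0, 0.0])

# e^(At) reduces to the identity at t = 0.
print(np.allclose(expm(A * 0.0), np.eye(2)))    # True

# Solution of the IVP at t = 0.5: x(t) = e^(At) x0.
t = 0.5
print(expm(A * t) @ x0)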

The problem with linear systems is that they are generally coupled, so diagonalizable matrices
are employed to uncouple them. Whereas earlier there were n unknowns coupled through the n
columns of the matrix, each equation now contains only one unknown. Thus, we are basically
using a diagonal matrix in place of a general matrix.

For an n x n system of differential equations x' = Ax with x(0) = x0, the solution is of the
form x(t) = e^(At)x0, where A is the coefficient matrix of the system.

The scalar exponential function can be represented by the series

e^(at) = 1 + Σ (a^n t^n)/n!   (sum over n ≥ 1).

We replace the scalar a by the matrix A, and the subsequent equation becomes:

e^(At) = I + Σ (A^n t^n)/n!   (sum over n ≥ 1).

Let T be the matrix whose columns are the eigenvectors ξ_k of A, and let D be the diagonal
matrix whose diagonal entries are the corresponding eigenvalues.

Then T^(-1)AT = D.

A can be transformed into D by the above process, known as a similarity transformation. Or, it
may also be said that A is diagonalizable.
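A short numerical illustration (the matrix is assumed for this example): T is built from the eigenvectors, D from the eigenvalues, and e^(At) can then be formed as T e^(Dt) T^(-1).

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # assumed example matrix
eigvals, T = np.linalg.eig(A)            # columns of T are the eigenvectors ξ_k
D = np.diag(eigvals)

# Similarity transformation: T^(-1) A T = D.
print(np.allclose(np.linalg.inv(T) @ A @ T, D))        # True

# Uncoupled form of the exponential: e^(At) = T e^(Dt) T^(-1).
t = 1.0
print(np.allclose(expm(A * t),
                  T @ np.diag(np.exp(eigvals * t)) @ np.linalg.inv(T)))   # True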
Linear System of Equations in Algebra

A linear system of equations has the form Ax = B, and it is called a homogeneous linear system
of equations if B = 0.

Such a homogeneous system can be represented in matrix form as AX = 0, where A is an m x n
matrix and X is an n x 1 column matrix.

Considering an example and applying Gauss-Jordan elimination, we can reduce the system to row
echelon form and obtain the unknowns, as in the sketch below.
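A brief sketch (the matrix below is assumed purely for illustration) using sympy to carry out the Gauss-Jordan reduction and read off the solutions of AX = 0:

from sympy import Matrix

# Assumed example coefficient matrix for AX = 0.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

rref, pivot_cols = A.rref()   # Gauss-Jordan elimination (reduced row echelon form)
print(rref)                   # Matrix([[1, 0, -1], [0, 1, 2], [0, 0, 0]])
print(A.nullspace())          # one free variable -> a one-parameter family of solutions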
Linear Independence and Dependence
A set of vectors is said to be linearly independent if the only linear combination of them that
equals zero is the trivial one; that is, if

c1x1 + c2x2 + . . . + cnxn = 0 only for c1 = c2 = . . . = cn = 0,

then the set of vectors is said to be a linearly independent set of vectors.

If there is some non-zero constant ci for which the above combination is zero, then the set is
said to be linearly dependent.

Ex. for LI : {(1 2 3), (3 5 7), (5 9 14)}

Ex. for LD : {(1,2,3), (2,4,6), (2,4,8)}
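A quick numerical check of the two example sets above (illustrative only; the rank of the matrix formed by the vectors decides the question):

import numpy as np

li_set = np.array([[1, 2, 3], [3, 5, 7], [5, 9, 14]])
ld_set = np.array([[1, 2, 3], [2, 4, 6], [2, 4, 8]])

print(np.linalg.matrix_rank(li_set))   # 3 -> the three vectors are linearly independent
print(np.linalg.matrix_rank(ld_set))   # 2 -> linearly dependent, since (2,4,6) = 2*(1,2,3)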


Eigenvectors and Eigenvalues
An equation like Ax = y can be viewed as a linear transformation that transforms the vector x
into y. Of particular interest are vectors that are mapped into multiples of themselves,
y = λx, where λ is an eigenvalue of the linear transformation and x is the corresponding
eigenvector. This leads to
(A − λI)x = 0.
The latter equation has nonzero solutions if and only if λ is chosen so
that
det(A − λI) = 0.
This equation is the characteristic equation of the matrix A.

If the characteristic equation has a repeated eigenvalue, then the term multiplicity plays an
important role. The number of times an eigenvalue is repeated as a root of the characteristic
equation is called its algebraic multiplicity.
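A small worked example (the matrix is chosen only for illustration):

\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad
\det(A - \lambda I) = (2-\lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = 0,
\]

so λ1 = 1 and λ2 = 3, each with algebraic multiplicity 1, with eigenvectors (1, −1)^T and (1, 1)^T respectively.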
A special class of matrices, called Hermitian or self-adjoint matrices, exists, defined by the
condition A* = A.

The eigenvalues and eigenvectors of such matrices obey certain properties:

1) All eigenvalues are real.

2) There always exists a full set of n linearly independent eigenvectors, regardless of
the algebraic multiplicities of the eigenvalues.

3) Corresponding to an eigenvalue of algebraic multiplicity m, it is possible to
choose m eigenvectors that are mutually orthogonal to each other. The full set of n
eigenvectors can always be chosen to be orthogonal as well as linearly independent.

4) The eigenvectors corresponding to different eigenvalues of the same matrix form a
mutually orthogonal set of vectors.
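A short numerical illustration of these properties (the Hermitian matrix below is assumed for the example):

import numpy as np

A = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])              # Hermitian: A* = A
print(np.allclose(A, A.conj().T))              # True

eigvals, eigvecs = np.linalg.eigh(A)           # eigh is intended for Hermitian matrices
print(eigvals)                                 # both eigenvalues are real
print(np.allclose(eigvecs.conj().T @ eigvecs, np.eye(2)))   # eigenvectors are orthonormal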
Homogeneous Linear Differential Systems having Constant Coefficients

If the eigenvalues are real
● X' = AX (here A is a real-valued square matrix and det(A) ≠ 0).
● Evaluating AX at a large number of points and plotting the resulting
vectors, we obtain a direction field of tangent vectors to solutions of
the system of differential equations.
● Phase portrait - a plot which provides information about the solutions of a 2D
system; this is the specific case of a 2 x 2 matrix A.
● Here we assume X = ξe^(rt) (here ξ represents the vector, or column
matrix).
● (A − rI)ξ = 0 (upon substituting into the differential equation) - this determines the
eigenvalues r and eigenvectors ξ. (r ≠ 0, since r = 0 would make det(A) = 0.)
● General solution: X = c1ξ(1)e^(r1 t) + · · · + cnξ(n)e^(rn t). A worked example follows.
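For instance (a hedged illustration; the matrix is not from the slides), take

\[
X' = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix} X .
\]

Here det(A − rI) = (1 − r)^2 − 4 = 0 gives r1 = 3 and r2 = −1, with eigenvectors ξ(1) = (1, 2)^T and ξ(2) = (1, −2)^T, so the general solution is

\[
X = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{3t}
  + c_2 \begin{pmatrix} 1 \\ -2 \end{pmatrix} e^{-t}.
\]

Since the eigenvalues are real and of opposite signs, x = 0 is a saddle point (see the classification below).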
If the eigenvalues are complex

● (A − rI)ξ = 0; upon taking the determinant of (A − rI), the roots r of the characteristic
equation turn out to be complex.

For a 2x2 system (a small classification sketch follows the list):

● Eigenvalues are real and have opposite signs; x = 0 is a saddle point.
● Eigenvalues are real and have the same sign but are unequal; x = 0 is a node.
● Eigenvalues are complex with nonzero real part; x = 0 is a spiral point.
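A minimal sketch of this classification (the helper function and the example matrices are assumptions for illustration, not part of the project):

import numpy as np

def classify_critical_point(A):
    # Classifies the critical point x = 0 of a 2x2 system x' = A x from the eigenvalues of A.
    r1, r2 = np.linalg.eigvals(A)
    if r1.imag != 0 and r1.real != 0:
        return "spiral point"          # complex eigenvalues with nonzero real part
    if r1.imag == 0 and r1.real * r2.real < 0:
        return "saddle point"          # real eigenvalues with opposite signs
    if r1.imag == 0 and r1.real * r2.real > 0 and not np.isclose(r1.real, r2.real):
        return "node"                  # real eigenvalues, same sign, unequal
    return "case not covered by the list above"

print(classify_critical_point(np.array([[1.0, 1.0], [4.0, 1.0]])))      # saddle point
print(classify_critical_point(np.array([[-0.5, 1.0], [-1.0, -0.5]])))   # spiral point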
Theorems

A first-order linear system of n equations has the format:

x1' = p11(t)x1 + . . . + p1n(t)xn + g1(t)
...
xn' = pn1(t)x1 + . . . + pnn(t)xn + gn(t)

The most efficient way to work with this system is in matrix form: we consider x1, x2, ..., xn
to be the components of the solution vector x, g1, g2, ..., gn the components of g(t), and
p11, ..., pnn the elements of the n x n matrix P(t). The previous equations can then be written as

x' = P(t)x + g(t).

The system is homogeneous when g(t) = 0, i.e. the equation becomes x' = P(t)x ........ eq(1)
THEOREM 1:

If x(1) and x(2) are two solutions of x' = P(t)x, then by the superposition principle
x = c1x(1) + c2x(2) is also a solution of this system. This can be generalized: if this
equation has k solutions x(1), ... , x(k), then any linear combination
c1x(1) + · · · + ckx(k) is also a solution.
THEOREM 2:

If x(1), ... , x(n) are linearly independent solutions of eq(1) on an interval, then every
solution x = f(t) can be written as a linear combination of x(1), ... , x(n) in exactly one
manner: f(t) = c1x(1) + · · · + cnx(n).
THEOREM 3:

The equation x' = P(t)x always has at least one fundamental set of solutions. Indeed, let
e(1) = (1, 0, ..., 0)^T, ... , e(n) = (0, ..., 0, 1)^T, and let
x(1), x(2), ..., x(n) be the solutions of this equation that satisfy the initial conditions

x(1)(t0) = e(1), ... , x(n)(t0) = e(n);

then x(1), x(2), ..., x(n) form a fundamental set of solutions of this equation on that
interval.
THEOREM 4:

For x' = P(t)x (where the elements of P are continuous), if x = u(t) + v(t)*i is a
complex-valued solution of this equation, then its real part u(t) and imaginary part v(t) are
also solutions of the equation.

To prove this theorem, just substitute x = u(t) + v(t)*i into x' = P(t)x, and
thus we get:

x' − P(t)x = u'(t) − P(t)u(t) + [v'(t) − P(t)v(t)]*i = 0 + 0*i

By comparing the real and imaginary parts, the result can be verified.
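A numerical spot-check of this theorem (the matrix, eigenpair, and complex solution below are assumed for illustration):

import numpy as np

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])
r = -0.5 + 1.0j                      # an eigenvalue of A
xi = np.array([1.0, 1.0j])           # a corresponding eigenvector

t = 0.7
x = xi * np.exp(r * t)               # complex solution x(t) = ξ e^(rt) of x' = A x
dx = r * xi * np.exp(r * t)          # its derivative x'(t)

print(np.allclose(dx, A @ x))              # the complex solution satisfies x' = A x
print(np.allclose(dx.real, A @ x.real))    # its real part u(t) is also a solution
print(np.allclose(dx.imag, A @ x.imag))    # its imaginary part v(t) is also a solution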


Applications of Homogeneous Linear
Systems

● Newton's law of cooling/warming
● Population growth/decay
● Solving real-life mechanics problems such as the spring-mass system (Hooke's law)
and problems involving Bernoulli's theorem
Bibliography :

Books : Boyce, W. E. and DiPrima, R. C., Elementary Differential Equations and Boundary
Value Problems.

Webpages :
http://tutorial.math.lamar.edu/Classes/DE/SystemsDE.aspx
https://www.math24.net/linear-homogeneous-systems-differential-equations-constant-coefficients/
http://people.uncw.edu/hermanr/mat361/ODEBook/Systems.pdf
Acknowledgment

We, Team Prodigies, would like to thank Prof. Indranath Sen Gupta
and Prof. Shanmuganathan Raman for giving us this golden
opportunity to work on this project. The project has been a great
learning experience, and the implementation of mathematics in
real-life applications is a wonder to admire.

We would also like to extend our thanks to Prof. Sanjay Amrutiya and
the associated TAs for their help.
