
Chapter 1

1.1 System theoretic approach

What is the system theoretic approach?


It is an approach to understand or to make things:

Build bigger things in terms of smaller things, using simple connections.

Use only the external properties of things (the black box approach).
Note: appropriate meanings can be assigned to the things and to the connections.

Example:
Electrical Networks: We build by connecting (i.e., using KCL, KVL constraints) multiterminal devices. For these multiterminal devices we only look at external (v, i) characteristics such as

\begin{bmatrix} i_1 \\ i_2 \\ i_3 \\ \vdots \\ i_{n-1} \\ i_n \end{bmatrix} = G \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix} \qquad (1.1)

The connections are simple: KCL and KVL.
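For concreteness, here is a minimal numpy sketch of (1.1). The conductance matrix G below is that of a hypothetical resistive Pi-network two-port (element values invented for illustration); the port currents follow from the port voltages alone, with the internal structure hidden.

```python
import numpy as np

# Hypothetical two-port: a Pi-network with shunt conductances ga, gb and
# series conductance gc. Externally it is just i = G v, as in (1.1).
ga, gb, gc = 0.5, 0.25, 1.0  # siemens, illustrative values
G = np.array([[ga + gc, -gc],
              [-gc,     gb + gc]])

v = np.array([1.0, 2.0])   # port voltages
i = G @ v                  # port currents from the black-box description
print(i)                   # [-0.5  1.5]
```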

Block Diagram based description of systems: The block diagram is shown in Fig. 1.1.
In this case the connections are again simple: summers and connection points.
Summer: It has many input variables and a single output variable, e.g.,

z_1 = u - z_4 \qquad (1.2)

Connection Point: It has one input and many outputs, e.g.,

z_2 = y \qquad (1.3)

z_2 = z_3 \qquad (1.4)

Is this feasible? Can this always be done? What if the connections are actually complicated?
Example: Suppose we have at a node e^{x_1} + e^{x_2} + \sin(x_3) = 0. See Fig. 1.2.
The solution lies in using new subsystems but keeping the connections simple, as shown in Fig. 1.3:
\hat{x}_1 = e^{x_1} \qquad (1.5)

Figure 1.1:

Figure 1.2:

Figure 1.3:

\hat{x}_2 = e^{x_2} \qquad (1.6)

\hat{x}_3 = \sin(x_3) \qquad (1.7)

\hat{x}_1 + \hat{x}_2 + \hat{x}_3 = 0 \qquad (1.8)

Connections need not introduce constraints in terms of equations; for example, when the subsystems are communities of people and arrows indicate some social interaction.
The kinds of systems we deal with invariably yield connection constraints which are simple: linear ones with 0, +1, -1 coefficients. Both the block diagram description and that of electrical networks are of this type. Note that these numbers are not of the type used for device characteristics such as v = Ri, where the value of R is not precisely known. The +1, -1 of the connections are precise. This again should be exploited during computation involving the system.
Advantages of keeping the connection constraints simple:

Good subsystems connected together yield good results.

In terms of the nature of equations, the connection equations, being simple, would qualify as good, and the above statement would be a consequence of this fact.
Example:
When connections are electrical, i.e., KCL and KVL:

(Example from audience) If the subsystems have continuous v-i characteristics, the system will have continuous v-i characteristics. (Inductors: current continuous; capacitors: voltage continuous.)

Subsystems linear implies system linear: connect all subsystems having characteristics Ai + Bv = s, and bring out ports; you will see port characteristics K v_p + M i_p = s_p.
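A small symbolic sketch of this claim (the two-resistor example and its element values are assumptions for illustration, not from the notes): connecting subsystems v_1 = R_1 i_1 and v_2 = R_2 i_2 through exact +1/-1 constraints and eliminating the internal variables leaves a port characteristic that is again linear.

```python
import sympy as sp

# Two linear subsystems joined by simple connection constraints
# (KCL: i1 = i2 = ip, KVL: vp = v1 + v2).
R1, R2 = sp.symbols('R1 R2', positive=True)
v1, v2, i1, i2, vp, ip = sp.symbols('v1 v2 i1 i2 vp ip')

eqs = [sp.Eq(v1, R1*i1),    # subsystem characteristic (coefficient imprecise)
       sp.Eq(v2, R2*i2),    # subsystem characteristic
       sp.Eq(i1, ip),       # connection: exact +1/-1 coefficients
       sp.Eq(i2, ip),
       sp.Eq(vp, v1 + v2)]

# Eliminate internal variables; the surviving port characteristic is linear.
sol = sp.solve(eqs, [v1, v2, i1, i2, vp], dict=True)[0]
print(sp.simplify(sol[vp]))   # -> ip*(R1 + R2), of the form K*vp + M*ip = s_p
```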

1.2 System

Let us now define a system as a collection of input-output pairs.


Example: Electrical Multiport. The multiport of Fig. 1.4 has inputs u_1, \ldots, u_k and outputs y_1, \ldots, y_m, read off by output voltage sensors and output current sensors.

Figure 1.4:

A typical element is

((u_1, \ldots, u_k), (y_1, \ldots, y_m)) \in S \qquad (1.9)

u_i(\cdot), y_i(\cdot) could be functions of t. Choose v as input and i as output (this choice is arbitrary).
Question:
Does (v_1, i_1) \in S_R? \qquad (1.10)
Check if v_1 = R i_1; if yes, (v_1, i_1) \in S_R (otherwise it does not belong to S_R).
Similarly, if v = E, then (v, i) \in S_E,
and if v = L \frac{di}{dt}, then (v, i) \in S_L.

Figure 1.5: The systems S_E, S_R, S_L as collections of (v, i) pairs.

A system is linear iff

(u^1, y^1), (u^2, y^2) \in S \implies (\alpha_1 u^1 + \alpha_2 u^2, \; \alpha_1 y^1 + \alpha_2 y^2) \in S \qquad (1.11)

Is the system defined by \frac{d^3 u}{dt^3} + 2 \frac{d^2 u}{dt^2} + u = \frac{dy}{dt} linear?
\frac{d}{dt} is a linear operation, and a linear combination of linear operations yields a linear operation.
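As a quick sanity check (a sketch with arbitrary symbolic inputs u1, u2), one can verify in sympy that the operator mapping u to dy/dt above respects superposition:

```python
import sympy as sp

# Symbolic check of superposition for dy/dt = d3u/dt3 + 2 d2u/dt2 + u.
t, a, b = sp.symbols('t a b')
u1 = sp.Function('u1')(t)
u2 = sp.Function('u2')(t)

L = lambda u: sp.diff(u, t, 3) + 2*sp.diff(u, t, 2) + u   # maps u to dy/dt

# L(a*u1 + b*u2) - (a*L(u1) + b*L(u2)) should vanish identically.
print(sp.simplify(L(a*u1 + b*u2) - a*L(u1) - b*L(u2)))    # -> 0
```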

1.3 Causality

1. past influences the future


2. future does not influence the past

We will take the 2nd point as the key. Think of a special case where, for a given input function of time, there is a unique output function, and we build our system as follows:
For each input function u, we just pick some y and say that this is the output function corresponding to it. So, for each u there is a unique y in the system.
Now, consider the input functions u^1 and u^2 and the corresponding output functions y^1 and y^2.
The result could be as in Fig. 1.6 and Fig. 1.7.
Figure 1.6: u^1 and the corresponding y^1 (deviation at t_1).

Can such a system be called causal?


The two inputs were the same up to t_1 and deviate after t_1. How did the system react to the deviation before t_1? This one would not expect in a causal system; the above situation should be forbidden in a causal system. We need, however, to state this in the context of the definition of a system as a collection of input-output pairs.

Figure 1.7: u^2 and the corresponding y^2 (deviation at t_1).

So, we say it is like this: if two input functions u^1, u^2 are the same up to t_1 and then deviate, there must be at least one (u^1, y^1) and one (u^2, y^2) pair such that y^1, y^2 are the same up to t_1.
Formally: u^1(t) = u^2(t), t \le t_1, and (u^1, y^1) \in S \implies \exists (u^2, y^2) \in S such that y^1(t) = y^2(t), t \le t_1.

1.4 Time Invariance

Time Invariance: No instant of time is special for the system. The time origin could be anywhere.
(u, y) \in S \implies (u_T, y_T) \in S, \quad \text{where } x_T(t) = x(t - T) \qquad (1.12)
Suppose (u, y), as in the adjacent figure, is in S; then the shifted pair (u_T, y_T) is also in S.

Figure 1.8: (u, y) and the shifted pair (u_T, y_T).
Note: Other notions such as System/Subsystem Reciprocity and System/Subsystem Passivity are also
usually discussed.
Problem: Examine if the systems stated below or in the figure are linear, time invariant, causal.

Figure 1.9:

a) y = \sin(u) + \frac{du}{dt}

b) e^u + 2\frac{du}{dt} + \sin(u) = y

c) y = u for t > 10; \quad y = 2u for t < 10

d) y = u + t

e) \frac{dy}{dt} + y + u = 0 for y > u; \quad \frac{d^2 y}{dt^2} + u = 0 for y < u
Problem: Write down all the constraints of the system in the figure below using a matrix operator.
E.g., v_1 + L\frac{di_1}{dt} + 2i_2 = v_3 can be written as

\begin{bmatrix} 1 & 0 & -1 & L\frac{d}{dt} & 2 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 & i_1 & i_2 \end{bmatrix}^T = 0

Figure 1.10: (a resistive circuit with R_1, R_2, R_3)
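The operator notation can be spot-checked symbolically. The sketch below (sympy, using the example row above) applies [1 0 -1 L d/dt 2] to [v_1 v_2 v_3 i_1 i_2]^T:

```python
import sympy as sp

# The row [1, 0, -1, L*d/dt, 2] applied to [v1, v2, v3, i1, i2]^T reproduces
# v1 - v3 + L di1/dt + 2 i2 = 0 from the example above.
t, L = sp.symbols('t L')
v1, v2, v3, i1, i2 = [sp.Function(n)(t) for n in ('v1', 'v2', 'v3', 'i1', 'i2')]

vars_ = [v1, v2, v3, i1, i2]
row = [1, 0, -1, 'D', 2]   # 'D' marks the entry L*d/dt acting on its variable

expr = sum(L*sp.diff(x, t) if c == 'D' else c*x for c, x in zip(row, vars_))
print(sp.Eq(expr, 0))      # v1(t) - v3(t) + L*Derivative(i1(t), t) + 2*i2(t) = 0
```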

1.5 State

Define the state as the extra information needed at time t, given the input up to t, to find y(t).


Problem: For the circuits of Fig. 1.9 and Fig. 1.10, taking the input to be (a) v, (b) i: what is a good choice of state?
In many important instances of systems, e.g., electrical circuits with the usual elements, the state can be taken to be a vector function of time. We can then write

y = f(x, u, \dot{u}, \ldots) \quad (u, \dot{u}, \ldots \text{ known because the past of } x \text{ is known})

In the linear case we can often write

\dot{x} = Ax + Bu + C\dot{u} + \cdots
y = Cx + Du + D_1\dot{u} + \cdots \qquad (1.13)
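A minimal numerical sketch of (1.13) without the derivative-of-input terms (all matrices chosen arbitrarily for illustration), integrating \dot{x} = Ax + Bu with scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = A x + B u, y = C x + D u for an illustrative second-order system.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

u = lambda t: np.array([np.sin(t)])          # scalar input channel

def rhs(t, x):
    return A @ x + B @ u(t)

x0 = np.zeros(2)
sol = solve_ivp(rhs, (0.0, 10.0), x0, max_step=0.01)
y = (C @ sol.y)[0] + (D @ u(sol.t))[0]       # output samples y(t_k)
print(y[-1])
```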

When the system is given with the state vector, we would say it is a System with State, which would then be describable as y(t) = f(x(t), u(\cdot)|_{<t}).
A system with state is linear iff

y^1(t) = f(x^1(t), u^1(\cdot)|_{<t})
y^2(t) = f(x^2(t), u^2(\cdot)|_{<t})
\implies \alpha_1 y^1 + \alpha_2 y^2 = f(\alpha_1 x^1 + \alpha_2 x^2, \; (\alpha_1 u^1 + \alpha_2 u^2)(\cdot)|_{<t}) \qquad (1.14)

Notice that the states are also linearly combined.

Causality
Is a system governed by a differential equation causal? Intuitively, what does causality mean? Common usage is as follows:
(i) past influences future - causal
(ii) future influences past - non-causal
(i) and (ii) are not negations of each other. In this sense, if a system is governed by

\frac{dy}{dt} = ay(t) + bu(t)

it can be thought of as fitting into both or neither, depending on how strictly we use the word influence. For

\frac{dy}{dt} = +ay(t) + bu(t) \qquad (1.15)
can also be rewritten as follows. Set \tau = -t. Then the above equation reads

\frac{dy}{d\tau} = -\frac{dy}{dt} = -ay(\tau) - bu(\tau) \qquad (1.16)

So if in eq. 1.15 the past influences the future, so does it in eq. 1.16. But what is past in eq. 1.15 is future in eq. 1.16.
So let us use, for the definition of causality, the key idea: can we predict the future input knowing the past behavior of the system? Here past behavior can be taken to include both past input and past output.
Define z(t) = e^{-at} y(t). Then eq. 1.15 becomes

\frac{d}{dt}\left(e^{-at} y(t)\right) = e^{-at}\frac{dy}{dt} - a e^{-at} y(t) = e^{-at} b\, u(t)

i.e., \frac{dz}{dt} = e^{-at} b\, u(t) = \hat{u}(t), say.
Since \hat{u}(t) and u(t) can be obtained from each other instantaneously, we need only look at the equation \frac{dz}{dt} = \hat{u}(t).
If z is known to be differentiable (left derivative = right derivative), then we can predict \hat{u}(t_0) by taking

\lim_{t \to t_0,\; t < t_0} \frac{dz}{dt} = \hat{u}(t_0)

So in this case we appear to be predicting the input.
So for practical situations, it is better to think of a differential equation as having a left derivative whenever you have \frac{d}{dt}, and u(t_0) to mean \lim_{t \to t_0,\; t < t_0} u(t), i.e., u(t_0^-).
The above discussion is for ordinary functions. Observe that once we interpret differential equations this way, eq. 1.15 and eq. 1.16 are no longer the same differential equation.
To summarize, differential equations do not have a preferred direction of time beforehand. The direction is imposed by us additionally when we model physical systems, for instance by interpreting \frac{d}{dt} and u(t_0) as above.
Suppose that the system is governed by

\dot{x} = Ax + Bu
y = Cx + Du

where x(t) is the state of the system at time t. If we set x(t_0) = 0 and insist u(t) = 0, t < t_0, then it is easy to see that y(t) = 0, t < t_0.
We define the state x(t) of the system as follows: given x(t_0) and u(t) for t \ge t_0, we can uniquely determine the output y(t) for t > t_0.
For our purposes the following restricted definition of causality is adequate.

Let S be a system with input variable u(\cdot), output variable y(\cdot) and state x(\cdot). We say S is causal iff u(t) = 0 for t < t_0 together with x(t_0) = 0 always implies y(t) = 0 for t < t_0, for all t_0.
In particular, the impulse response for such a system is zero for t < 0.
A system governed by

\dot{x} = Ax + Bu
y = Cx + Du

where u, y, x are the input, output and state variables, is causal in the above sense.

Exercises
1. x is an eigen vector of matrix A iff x \ne 0 and Ax = \lambda x for some \lambda. An eigen vector of A^T is called a row eigen vector of A.
(a) Show that \lambda is an eigen value iff \det(\lambda I - A) = 0.
(b) Real matrices may have complex eigen values. Show that they always occur in conjugate pairs. The corresponding eigen vectors are also conjugates.
(c) Let A^* be the conjugate transpose of A. Show

(\det(sI - A))^* = \det((sI - A)^*) = \det(\bar{s} I - A^*)

So the eigen values of A and A^* are the same if A is real. If r is a row eigen vector of A corresponding to eigen value \mu, x an eigen vector corresponding to \lambda, and \mu \ne \lambda, then

r^T x = 0
(d) If an n \times n matrix A has n distinct eigen values, then the matrix P, whose columns are eigen vectors corresponding to the distinct eigen values, is invertible. Hence,

P^{-1} A P = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)

where \lambda_i are the distinct eigen values of A.
(e) If A is Hermitian, i.e., A^* = A, then P^* P = I. In this case,

P^* A P = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)

and all the eigen values are real. Real symmetric matrices are Hermitian; in this case P can be taken to be real.
(f) p is an eigen vector of A corresponding to eigen value \lambda if T^{-1} p is an eigen vector of T^{-1} A T corresponding to \lambda.
(g) If A is Hermitian it is always possible to find P such that P^* = P^{-1} and

P^* A P = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)

even if the \lambda_i are not distinct.

(h) (A - \lambda_1 I)x = 0 has solution space V_{\lambda_1}, say. If A is Hermitian and \lambda \ne \mu, show V_\lambda, V_\mu are orthogonal. (Take x, y to be orthogonal if x^* y = y^* x = 0.)
(i) For any vector space V, show that it is always possible to find a matrix whose rows form a basis of V and are mutually orthogonal. Let us call such a basis an orthogonal basis of V.
(j) Let P have as its rows the orthogonal bases of the eigen spaces V_\lambda for each eigen value \lambda of the Hermitian matrix A. Show that P is a square matrix with P P^* = I (after normalizing the rows), and further that

P A P^* = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)
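A numerical spot-check of (e) and (g), with a randomly generated Hermitian A (the construction is illustrative, not from the notes):

```python
import numpy as np

# For Hermitian A, the unitary P of eigen vectors satisfies P* P = I and
# P* A P = diag(lambda_i) with real lambda_i.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2                      # make A Hermitian

lam, P = np.linalg.eigh(A)                    # eigh returns orthonormal eigen vectors
print(np.allclose(P.conj().T @ P, np.eye(4)))           # P* P = I      -> True
print(np.allclose(P.conj().T @ A @ P, np.diag(lam)))    # P* A P = diag -> True
print(np.allclose(lam.imag, 0))                         # eigen values real -> True
```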
2. (a) For the system described by \dot{x} = Ax + Bu, prove:
Total solution = Zero input solution + Zero state solution.
Let x(t) satisfy \dot{x} = Ax + Bu in the interval [0, \infty).
Here, A is an n \times n matrix and B is an n \times m matrix.
It can be shown that given x(0) and u(\cdot) in [0, t), there exists a unique x(t) which satisfies the above differential equation.
Let x^1 be the unique solution corresponding to (x^1(0), u^1(\cdot)) and x^2 the one corresponding to (x^2(0), u^2(\cdot)); then by direct substitution we can verify that x^1 + x^2 is the unique solution corresponding to (x^1(0) + x^2(0), u^1(\cdot) + u^2(\cdot)).
[Observe that when the input is [u_1(\cdot), u_2(\cdot), \ldots, u_m(\cdot)]^T, the solution can be broken up further into that due to the following m elementary situations:

\begin{bmatrix} u_1(\cdot) \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ u_2(\cdot) \\ \vdots \\ 0 \end{bmatrix}, \ldots, \begin{bmatrix} 0 \\ 0 \\ \vdots \\ u_m(\cdot) \end{bmatrix}

i.e., one of the inputs active and the others zero. In other words, superposition of inputs is applicable for the zero state solution.]
In particular, if

x^1(0) = 0, \quad u^1(\cdot) = u(\cdot)
x^2(0) = \hat{x}(0), \quad u^2(\cdot) = 0,

we have the solution x corresponding to (\hat{x}(0), u(\cdot)) = x^1 + x^2, where x^1 is called the zero state solution and x^2 the zero input solution.
Observe that to prove the above result we have used (a) linearity and (b) uniqueness of the solution corresponding to an initial condition and input. This idea can therefore be generalized to other classes of differential equations which have the above unique solution property.
(b) Solve the scalar differential equation \dot{x} = ax + bu by the method of integrating factors.
[Consider the scalar differential equation

\dot{x} = ax + bu

Use the idea of the integrating factor to compute the zero state solution to be

x_u(t) = \int_0^t e^{a(t-\tau)}\, b\, u(\tau)\, d\tau

and the zero input solution to be

x_s(t) = x(0)\, e^{at}.

How does the situation change if we had \dot{x} = ax + b_1 u_1 + b_2 u_2 + \cdots + b_m u_m?]
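A numerical check of this decomposition for illustrative values of a, b, x(0) and u(t) = cos t (all assumed):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# Total solution = zero input x(0)e^{at} + zero state int_0^t e^{a(t-tau)} b u(tau) dtau.
a, b, x0, T = -1.5, 2.0, 0.7, 3.0
u = np.cos

num = solve_ivp(lambda t, x: a*x + b*u(t), (0.0, T), [x0], rtol=1e-10).y[0, -1]

zero_input = x0 * np.exp(a*T)
zero_state = quad(lambda tau: np.exp(a*(T - tau)) * b * u(tau), 0.0, T)[0]
print(num, zero_input + zero_state)   # the two agree to solver tolerance
```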

(c) If z = Tx, show that \dot{z} = TAT^{-1} z + TBu.
(d) Solve \dot{x} = Ax + Bu when A is diagonalizable.
(e) If A is diagonalizable there exists a matrix P such that

P^{-1} A P = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)

Show that the columns of P are eigen vectors of A and \lambda_1, \lambda_2, \ldots, \lambda_n the corresponding eigen values.
Choose z = P^{-1} x. We then have

\dot{z} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)\, z + \hat{B} u

where \hat{B} = P^{-1} B,
i.e., we have n decoupled differential equations:

\dot{z}_j = \lambda_j z_j + \hat{B}_{j1} u_1 + \hat{B}_{j2} u_2 + \cdots + \hat{B}_{jm} u_m, \quad j = 1, \ldots, n
The solution to this equation is

z_j(t) = e^{\lambda_j t} z_j(0) + \int_0^t e^{\lambda_j (t-\tau)} \left[ \hat{B}_{j1} u_1(\tau) + \cdots + \hat{B}_{jm} u_m(\tau) \right] d\tau

Observe that

\begin{bmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{bmatrix} = \begin{bmatrix} p^1 \cdots p^n \end{bmatrix} \begin{bmatrix} z_1(t) \\ \vdots \\ z_n(t) \end{bmatrix}, \qquad \begin{bmatrix} x_1(0) \\ \vdots \\ x_n(0) \end{bmatrix} = \begin{bmatrix} p^1 \cdots p^n \end{bmatrix} \begin{bmatrix} z_1(0) \\ \vdots \\ z_n(0) \end{bmatrix}

so that for zero input

x(t) = \begin{bmatrix} p^1 \cdots p^n \end{bmatrix} \mathrm{diag}(e^{\lambda_1 t}, \ldots, e^{\lambda_n t}) \begin{bmatrix} z_1(0) \\ \vdots \\ z_n(0) \end{bmatrix}

Thus the total solution has the form

\sum_j \alpha_j p^j e^{\lambda_j t} + P \left[ \int_0^t \mathrm{diag}(e^{\lambda_j (t-\tau)})\, \hat{B} u(\tau)\, d\tau \right] \qquad (\alpha_j = z_j(0))

the first term being the zero input solution.
If in the zero input solution you wish to have only the term e^{\lambda_j t}, you must take the initial condition to be \alpha_j p^j.
If you start with [x_1(0), x_2(0), \ldots, x_n(0)]^T, to find the zero input solution:
first find the co-ordinates \alpha_1, \alpha_2, \ldots, \alpha_n of x(0) in terms of the eigen vector axes [p^1\, p^2 \ldots p^n], i.e.,

\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} = P^{-1} \begin{bmatrix} x_1(0) \\ x_2(0) \\ \vdots \\ x_n(0) \end{bmatrix}

and then the solution is

\sum_j \alpha_j p^j e^{\lambda_j t}
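A numpy sketch of this recipe (matrix and initial condition invented for illustration): compute \alpha = P^{-1} x(0) and reassemble the zero input solution from the eigen vector axes, comparing against e^{At} x(0):

```python
import numpy as np
from scipy.linalg import expm

# alpha = P^{-1} x(0), then x(t) = sum_j alpha_j p^j e^{lambda_j t}.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigen values -1 and -2
lam, P = np.linalg.eig(A)

x0 = np.array([1.0, 1.0])
alpha = np.linalg.solve(P, x0)             # co-ordinates of x(0) on the eigen axes

t = 0.8
x_zero_input = (P * np.exp(lam * t)) @ alpha   # sum_j alpha_j p^j e^{lambda_j t}
print(np.allclose(x_zero_input, expm(A * t) @ x0))   # agrees with e^{At} x(0) -> True
```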

(f) Show how to eliminate the transient solution terms from the total solution due to input and initial condition.
To understand the nature of the zero state solution, assume there is only one input u_1(t) = e^{\lambda t}, with all other inputs being zero. If \lambda_j \ne \lambda, then

\int_0^t e^{\lambda_j (t-\tau)} e^{\lambda \tau}\, d\tau = e^{\lambda_j t} \left. \frac{e^{(\lambda - \lambda_j)\tau}}{\lambda - \lambda_j} \right|_0^t = \frac{1}{\lambda - \lambda_j} \left[ e^{\lambda t} - e^{\lambda_j t} \right]

Thus the total solution in this case is

\sum_j \alpha_j p^j e^{\lambda_j t} + P\, \mathrm{diag}\!\left[ \frac{1}{\lambda - \lambda_j} \left( e^{\lambda t} - e^{\lambda_j t} \right) \right] \hat{B}_1

(If \lambda_j = \lambda, \int_0^t e^{\lambda_j (t-\tau)} e^{\lambda_j \tau}\, d\tau = t\, e^{\lambda_j t}.)

Observe that e^{\lambda_j t} always multiplies an eigen vector corresponding to \lambda_j. Hence if you excite the network with the input e^{\lambda t} and \lambda is not the same as any of the eigen values, the total solution has the form

x(t) = \sum_j x^j e^{\lambda_j t} + x^\lambda e^{\lambda t}

where the x^j are eigen vectors corresponding to \lambda_j. Usually the second term is called the forced response (i.e., the input forcing its nature on the output) and the first the transient response (under usual circumstances it becomes insignificant after a short while). If \lambda = \lambda_j for one of the \lambda_j's, then the second term would be x^\lambda t e^{\lambda t}.
If we wish the first term to reduce to zero (i.e., there is no transient solution), we proceed as follows. We first find the zero state solution. Say this is

x_u(t) = \sum_j x_u^j e^{\lambda_j t} + x^\lambda e^{\lambda t}

where each x_u^j is an eigen vector. Now choose x(0) = -\sum_j x_u^j.
The zero input solution corresponding to x(0) = P[\beta_1, \ldots, \beta_n]^T is \sum_j \beta_j p^j e^{\lambda_j t}. But we know -x_u^j = \beta_j p^j for some values of \beta_j. Since P is invertible,

[\beta_1, \ldots, \beta_n]^T = P^{-1} x(0)

and the zero input solution is

x_s(t) = \sum_j \beta_j p^j e^{\lambda_j t} = -\sum_j x_u^j e^{\lambda_j t}

It follows that the total solution is

\sum_j x_u^j e^{\lambda_j t} + x^\lambda e^{\lambda t} - \sum_j x_u^j e^{\lambda_j t} = x^\lambda e^{\lambda t} \quad \text{if } x(0) = -\sum_j x_u^j

By this means we can cancel the transient solution and make the response look entirely like the input. This idea is physically of some significance. Suppose the \lambda_j are all in the range -1 to -2 whereas \lambda = -10^6; then the forced response would die out much faster than the transient response, if you have any transient response terms at all.
Note that in general the eigen values will be complex. The input would usually have the form e^{\lambda_r t} \cos(\lambda_{im} t + \phi). This corresponds to a linear combination of the inputs e^{(\lambda_r + j\lambda_{im}) t} and e^{(\lambda_r - j\lambda_{im}) t}.
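The cancellation can be demonstrated numerically. In the sketch below (matrices and \lambda are made-up values), the forced-mode vector is x^\lambda = (\lambda I - A)^{-1} B, and choosing x(0) = x^\lambda removes every transient term:

```python
import numpy as np
from scipy.integrate import solve_ivp

# For input u(t) = e^{lam t} with lam not an eigen value of A, the particular
# solution is x_lam e^{lam t} with x_lam = (lam I - A)^{-1} B; starting at
# x(0) = x_lam leaves no transient.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # eigen values -1 and -2
B = np.array([1.0, 1.0])
lam = -5.0                                   # not an eigen value of A

x_lam = np.linalg.solve(lam*np.eye(2) - A, B)
sol = solve_ivp(lambda t, x: A @ x + B*np.exp(lam*t), (0.0, 2.0), x_lam,
                rtol=1e-10, atol=1e-12)
print(np.allclose(sol.y[:, -1], x_lam*np.exp(lam*2.0), atol=1e-9))   # -> True
```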
(g) Suppose a real function x(t) = \sum_i x^i(0)\, e^{\lambda_i t}. Show that the \lambda_i occur in conjugate pairs and so do the x^i(0).
i. Prove: if the \lambda_i are distinct, \sum_i \alpha_i e^{\lambda_i t} = 0 implies each \alpha_i = 0. (Use the fact that if \sum_i \alpha_i e^{\lambda_i t} = 0 then \sum_i \alpha_i \frac{d^k}{dt^k} e^{\lambda_i t} = 0 for all k, OR use induction.)
ii. If x(t) = \sum_i x^i(0)\, e^{\lambda_i t} = \sum_i z^i(0)\, e^{\lambda_i t}, then x^i(0) = z^i(0) for each i.
iii. x^*(t) = \sum_i (x^i(0))^*\, e^{\lambda_i^* t}. But x^*(t) = x(t).
So \sum_i (x^i(0))^*\, e^{\lambda_i^* t} = \sum_i x^i(0)\, e^{\lambda_i t},
i.e., \sum_i (x^i(0))^*\, e^{\lambda_i^* t} - \sum_i x^i(0)\, e^{\lambda_i t} = 0.
From (i) above this can happen only if the coefficient of each e^{\lambda t} is zero. If \lambda_i^* is not among the \lambda_j this cannot happen. We conclude \lambda_i^* = \lambda_j and (x^i(0))^* = x^j(0) for some j.
3. In an R,L,C circuit with positive values of R, L, C, show that all the eigen values have nonpositive real parts.
When the circuit has R, L, C and static devices and the input is zero, the system starts with initial energy

\frac{1}{2} v_C^T(0)\, C\, v_C(0) + \frac{1}{2} i_L^T(0)\, L\, i_L(0)

The power absorbed by these devices at any instant

= \frac{d}{dt}\left[ \frac{1}{2} v_C^T(t)\, C\, v_C(t) + \frac{1}{2} i_L^T(t)\, L\, i_L(t) \right] = v_C^T(t)\, i_C(t) + i_L^T(t)\, v_L(t)

This must (by Tellegen's theorem) be the negative of the power absorbed by the remaining (static) devices. If these latter are positive resistors, this power is nonnegative. Hence the stored energy of the inductors and capacitors cannot indefinitely increase as time increases. Therefore, for each \lambda_k, e^{\lambda_k t} cannot increase indefinitely. Hence \lambda_k = \lambda_{kr} + j\lambda_{kim} with \lambda_{kr} \le 0.
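A quick numerical illustration (a series RLC loop with made-up positive element values):

```python
import numpy as np

# Series RLC loop with state x = (v_C, i_L):
#   C dv_C/dt = i_L,  L di_L/dt = -v_C - R i_L.
R, L, C = 2.0, 0.5, 0.1
A = np.array([[0.0,    1.0/C],
              [-1.0/L, -R/L]])

lam = np.linalg.eigvals(A)
print(lam, np.all(lam.real <= 1e-12))   # real parts nonpositive -> True
```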
4. Nature of the eigen values (RL, RC, LC, RLC cases).
Recall the method that we used for writing state equations for RLC circuits (see Figure 1.11). We assumed capacitors + voltage sources do not form loops and inductors + current sources do not form cutsets.
(If they do, the above technique has to be generalized to one where three multiports are connected together.)
Observe that in the state equations

\dot{x} = Ax + Bu

the matrix A can be obtained by setting u = 0 and then writing the state equations. So, it is to be expected that the eigen values and eigen vectors of the network correspond to setting the voltage sources and current sources to zero. Let us begin by considering the situation where all capacitors and inductors are of value 1 in the appropriate units and all sources are zero. In this case we get

\begin{bmatrix} \frac{dv_C}{dt} \\ \frac{di_L}{dt} \end{bmatrix} = \begin{bmatrix} i_C \\ v_L \end{bmatrix}

Figure 1.11: Static multiport

Observe that

i_C = i_1, \quad v_C = v_1, \qquad i_L = i_2, \quad v_L = v_2

For the resistive multiport, we can write

\begin{bmatrix} i_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} \begin{bmatrix} v_1 \\ i_2 \end{bmatrix}

Let us examine the structure of the hybrid matrix above. If i_L were set to zero (inductors open circuited) we get

i_1 = H_{11} v_1

i.e., H_{11} is a conductance matrix for a purely resistive multiport, and is symmetric by Tellegen's theorem and positive semidefinite, since the power absorbed by the multiport is always nonnegative, which means v_1^T i_1 = v_1^T H_{11} v_1 \ge 0 for all v_1. By a similar argument, H_{22} is also symmetric positive semidefinite.
To understand the relationship between H_{12} and H_{21}, let us use Tellegen's generalized reciprocity theorem, which says

(v_1')^T i_1'' + (v_2')^T i_2'' = (v_1'')^T i_1' + (v_2'')^T i_2'

for arbitrary primed and double primed excitation conditions of a resistive multiport.
We rewrite this as

(i_1'')^T v_1' + (v_2')^T i_2'' = (i_1')^T v_1'' + (v_2'')^T i_2'

Substituting the multiport relationship and simplifying, we get

(i_2'')^T (H_{12} + H_{21}^T)\, v_1' = (i_2')^T (H_{12} + H_{21}^T)\, v_1''

For arbitrary values of i_2', v_1'' (including zero), so

(i_2'')^T (H_{12} + H_{21}^T)\, v_1' = 0

For arbitrary values of i_2'', v_1'; hence

H_{12} = -H_{21}^T.

Problems
Laplace Transform Problems
Prove
1. \mathcal{L}(\delta(t)) = 1
2. \mathcal{L}(1(t)) = \frac{1}{s}
3. \mathcal{L}(t^n) = \frac{n!}{s^{n+1}}, for n > 0
4. \mathcal{L}(e^{-at}) = \frac{1}{s+a}
5. \mathcal{L}(t\, e^{-at}) = \frac{1}{(s+a)^2}
6. \mathcal{L}\left( \frac{t^{n-1} e^{-at}}{(n-1)!} \right) = \frac{1}{(s+a)^n}
7. \mathcal{L}\left( \frac{1}{b-a} (e^{-at} - e^{-bt}) \right)_{a \ne b} = \frac{1}{(s+a)(s+b)}
8. \mathcal{L}\left( \frac{1}{a-b} (a e^{-at} - b e^{-bt}) \right)_{a \ne b} = \frac{s}{(s+a)(s+b)}
9. \mathcal{L}(\sin \omega t) = \frac{\omega}{s^2 + \omega^2}
10. \mathcal{L}(\cos \omega t) = \frac{s}{s^2 + \omega^2}
11. \mathcal{L}(e^{-at} \sin \omega t) = \frac{\omega}{(s+a)^2 + \omega^2}
12. \mathcal{L}(e^{-at} \cos \omega t) = \frac{s+a}{(s+a)^2 + \omega^2}
13. Find the \mathcal{L}-transform of the function in Figure 1.12.

Figure 1.12:
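A few of the entries above can be spot-checked with sympy's laplace_transform (a sketch; the entry numbers refer to the list above):

```python
import sympy as sp

# Spot-checking table entries 5, 9 and 12.
t, s = sp.symbols('t s', positive=True)
a, w = sp.symbols('a omega', positive=True)

for f in (t*sp.exp(-a*t),            # entry 5  -> 1/(s+a)**2
          sp.sin(w*t),               # entry 9  -> omega/(s**2 + omega**2)
          sp.exp(-a*t)*sp.cos(w*t)): # entry 12 -> (s+a)/((s+a)**2 + omega**2)
    print(sp.laplace_transform(f, t, s, noconds=True))
```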


Laplace Transform Problems II

Prove
1. \mathcal{L}\left[ \frac{d^n}{dt^n} f(t) \right] = s^n F(s) - s^{n-1} f(0^-) - s^{n-2} f^{(1)}(0^-) - \cdots - f^{(n-1)}(0^-)
(where f^{(k)} denotes the k-th derivative).
2. \mathcal{L}\left[ \int_0^t f(\tau)\, d\tau \right] = \frac{F(s)}{s}
3. \mathcal{L}\left[ \int_{-\infty}^t f(\tau)\, d\tau \right] = \frac{F(s)}{s} + \frac{f^{(-1)}(0^-)}{s}
4. \mathcal{L}[t f(t)] = -\frac{d}{ds} F(s)
5. \mathcal{L}[f(t-T)\, 1(t-T)] = e^{-sT} F(s)
6. \mathcal{L}[f(t/a)] = a F(as), \quad a > 0
7. \lim_{t \to 0^+} f(t) = \lim_{s \to \infty} s F(s), provided the limit exists.
8. \lim_{t \to \infty} f(t) = \lim_{s \to 0} s F(s), provided sF(s) is analytic on the j\omega axis and in the right half of the s-plane.
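Properties 4 and 6 can likewise be spot-checked symbolically for a sample f (the choice f(t) = e^{-t}, a = 3 is arbitrary):

```python
import sympy as sp

# Spot-checking properties 4 and 6 above for f(t) = e^{-t}.
t, s = sp.symbols('t s', positive=True)
a = sp.Integer(3)                                         # sample scale a > 0
f = sp.exp(-t)
F = sp.laplace_transform(f, t, s, noconds=True)           # F(s) = 1/(s + 1)

lhs4 = sp.laplace_transform(t*f, t, s, noconds=True)      # L[t f(t)]
print(sp.simplify(lhs4 + sp.diff(F, s)))                  # property 4 -> 0

lhs6 = sp.laplace_transform(f.subs(t, t/a), t, s, noconds=True)   # L[f(t/a)]
print(sp.simplify(lhs6 - a*F.subs(s, a*s)))               # property 6 -> 0
```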
