
Chapter 1

Control Engineering Overview

Figure 1.1 shows a block diagram of modern mechatronic systems. The Computer/DSP block in the
middle emphasizes that the computer plays the central role in modern mechatronic systems.
It may represent a variety of hardware devices such as programmable logic controllers, DSPs
(Digital Signal Processors), embedded micro-controllers and their combinations, as well as
software that realizes decision-making algorithms. In this course, we learn the basic concepts
and analytical tools for utilizing this central block, and their applications from a mechatronic
perspective. The physical plant is the system to be controlled. It might be an
aircraft, a large electric power generation and distribution system, an industrial process, a
head positioner for a computer disk drive, a data network, or an economic system.

[Figure 1.1: Modern mechatronic system. The central Computer/DSP block (control, decision making) connects to the physical plant (mechanical, electrical, etc., and combinations) through actuation/power modulation (energy conversion) and sensors/signal conditioning, with additional links to other systems (Internet, wireless) and to a human-machine interface (human factors).]


This note is adapted from the ME232/233: Advanced Control Systems lecture notes written by M.
Tomizuka, UC Berkeley.

1.1 Terminologies

System: a collection of matter contained within a real or imaginary boundary or surface.

Signal: a flow of information that interconnects systems. It is represented as a function of
time or as a sequence.

Environment: all that is outside the system.

Control System: any system which exists for the purpose of controlling the flow of energy,
information, or other quantities in some desired fashion.

The goals of control engineering include:

(a) better stability
(b) regulation of the output in the presence of disturbances and noise
(c) tracking of time-varying desired outputs
...

To achieve these objectives, the control engineer must

(a) model the controlled plant
(b) analyze the characteristics of the plant
(c) design the controller
(d) examine whether the designed controller meets the specifications (by simulation, etc.)
(e) implement the controller (digital computer, embedded controller, etc.)

1.2 Classification of Control Systems

1.2.1 Static vs. Dynamic Systems

The design of controllers is non-trivial because controlled plants (systems) are dynamic. The
output of a static system depends only on the present input, i.e., y(t) = f(u(t)); the output
of a dynamic system depends on past as well as present inputs, i.e., y(t) = f(u(τ), τ ≤ t). The
input-output relation of a static system is given by an algebraic equation, whereas the
input-output relation of a dynamic system is given by differential equations (continuous time)
or difference equations (discrete time).

1.2.2 Linear vs. Nonlinear Systems

A system is linear if it obeys the superposition principle; otherwise, the system is nonlinear.
Let the system input and output be u(t) and y(t). Then linear superposition applies if, for
any input-output pairs {u1(t), y1(t)} and {u2(t), y2(t)} and any constants α1 and α2,
{α1 u1(t) + α2 u2(t), α1 y1(t) + α2 y2(t)} is also an input-output pair.

In mechanical systems, torque limits of motors, hardening springs, Coulomb friction forces,
etc. make systems nonlinear. Even though physical systems are nonlinear, they can often be
linearized, which makes linear analysis and design tools powerful. We primarily consider linear
systems and linear control in ME649. A quick numerical check of the superposition principle is
sketched below.
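
The following sketch (not from the original notes) numerically checks superposition for two static systems: a pure gain (linear) and a saturating element (nonlinear). The gain, saturation limit, and test inputs are arbitrary assumed values.

# Minimal sketch (assumed systems and test signals): check whether
# y(u1 + u2) == y(u1) + y(u2) for a linear gain and a saturating element.
import numpy as np

def linear(u):          # static linear system: pure gain
    return 3.0 * u

def saturating(u):      # static nonlinear system: gain followed by saturation at +/-1
    return np.clip(3.0 * u, -1.0, 1.0)

u1, u2 = 0.2, 0.3
for name, sys in [("linear", linear), ("saturating", saturating)]:
    lhs = sys(u1 + u2)
    rhs = sys(u1) + sys(u2)
    print(f"{name}: superposition holds? {np.isclose(lhs, rhs)}")

The linear gain satisfies superposition for any inputs, while the saturating element fails whenever the combined input exceeds the saturation limit.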

1.2.3 Continuous Time vs. Discrete Time Systems

Inputs and outputs of continuous time systems are defined for all t: i.e. u(t) and y(t).
Continuous linear dynamic systems are described by linear differential equations.

Example 1.1. Mass-Spring-Damper System (u(t) = force and p(t) = position of mass)

m \frac{d^2 p(t)}{dt^2} + b \frac{dp(t)}{dt} + k p(t) = u(t)    (1.1)

where m, b and k are respectively the mass, damping coefficient and spring constant.
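
As an aside (not part of the original notes), the sketch below simulates Eq. (1.1) for an assumed set of parameters and a unit-step force input by rewriting it as a first-order system in the state [p, dp/dt].

# Minimal sketch (assumed m, b, k and input): simulate the mass-spring-damper
# of Eq. (1.1) with a unit-step force and check the steady-state position u/k.
import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 1.0, 2.0, 4.0       # assumed mass, damping coefficient, spring constant

def msd(t, x):
    p, v = x                  # position and velocity
    u = 1.0                   # unit-step force input
    return [v, (u - b * v - k * p) / m]

sol = solve_ivp(msd, (0.0, 10.0), [0.0, 0.0], max_step=0.01)
print(f"position at t = 10 s: {sol.y[0, -1]:.3f}")   # settles near u/k = 0.25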

Inputs and outputs of discrete time systems are defined at discrete time points, i.e. u(k)
and p(k) are time sequences defined for k = 0, 1, 2, .... Discrete linear dynamic systems
are described by linear difference equations.

Example 1.2. Bank Account (x(k) = balance at the beginning of the k-th month, u(k) =
deposit/credit − payment/debit during the k-th month)

x(k + 1) = (1 + \alpha) x(k) + u(k)    (1.2)

where \alpha is the monthly interest rate. While most controlled plants are continuous, we may be
interested in modeling them as discrete time systems. In digital control, the overall control
system is hybrid: a continuous time plant is controlled by a discrete time (digital) controller.
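
The recursion in Eq. (1.2) is easy to step forward in code. The sketch below (not from the notes) uses an assumed interest rate and a constant monthly deposit.

# Minimal sketch of Eq. (1.2): x(k+1) = (1 + alpha) x(k) + u(k)
alpha = 0.01          # assumed monthly interest rate (1%)
x = 1000.0            # assumed initial balance x(0)
for k in range(12):   # twelve months
    u = 100.0         # assumed constant monthly deposit u(k)
    x = (1 + alpha) * x + u
print(f"balance after 12 months: {x:.2f}")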

1.2.4 Deterministic vs. Stochastic Systems

Stochastic is synonymous with non-deterministic. The linear system is an idealized version of
real-life systems, which are always nonlinear in one way or another. Likewise, the deterministic
system is an idealized version of real-life systems, which are always stochastic in one way
or another. Real-life systems are always stochastic because we cannot be completely immune
from uncertainties. Sensor noise, actuator noise, unknown disturbances, etc. all contribute
to rendering the real-life system stochastic. The stochastic system approach explicitly considers
such uncertain components, while the deterministic system approach ignores them by assuming
that such effects are small. In a stochastic approach, we can only infer the probabilistic properties
(mean, variance, etc.) of the system. Unlike a deterministic system, a stochastic system
does not always produce the same output for a given input.

Example 1.3 (Stochastic System). In Example 1.2, if we assume the gain-loss u(k) is un-
known, we may model it as a random variable, for example, u(k) ~ N(m, σ²), where N
denotes the normal (Gaussian) distribution with mean m and variance σ². Then the discrete-
time dynamic system (1.2) becomes a random process.
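
The sketch below (not from the notes) simulates this random process with assumed values for the interest rate, mean, and standard deviation; each run of the recursion produces a different trajectory, so only statistics of the final balance are meaningful.

# Minimal sketch of Example 1.3: the deposit u(k) ~ N(m, sigma^2) turns the
# bank-account recursion into a random process.
import numpy as np

rng = np.random.default_rng(0)
alpha, m, sigma = 0.01, 100.0, 50.0    # assumed rate, mean, standard deviation

def final_balance(months=12, x0=1000.0):
    x = x0
    for _ in range(months):
        x = (1 + alpha) * x + rng.normal(m, sigma)
    return x

samples = np.array([final_balance() for _ in range(10_000)])
print(f"mean = {samples.mean():.1f}, std = {samples.std():.1f}")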

Example 1.4 (Stochastic Nature of Position Measurement). Consider a pulse coder with
8-bit resolution, i.e. the position is measured by a quantizer of size Δ = 2π/2^8 ≈ 0.0245
rad. Figure 1.2(a) shows the position measured by this encoder and the corresponding
quantization error. Such an error can be modeled by a uniformly distributed noise with
bound ±Δ/2. This effect becomes more problematic when we compute the velocity by
successive differentiation with a finite sample time Ts, i.e. v_enc = (p(k) − p(k−1))/Ts, as shown in
Fig. 1.2(b), where we used Ts = 5 ms. This velocity error gets larger as we decrease the
sampling time.
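
To reproduce the effect (not part of the original notes), the sketch below quantizes an assumed sinusoidal position to Δ = 2π/2^8 rad and compares the true velocity with the successive difference of the quantized position at Ts = 5 ms.

# Minimal sketch of Example 1.4 (assumed sinusoidal motion): quantization
# noise of size Delta is amplified to roughly Delta/Ts in the velocity
# obtained by successive differentiation.
import numpy as np

delta = 2 * np.pi / 2**8                      # quantizer size, ~0.0245 rad
Ts = 0.005                                    # 5 ms sample time
t = np.arange(0.0, 1.0, Ts)
p_true = 0.1 * np.sin(2 * np.pi * t)          # assumed true position (rad)
v_true = 0.1 * 2 * np.pi * np.cos(2 * np.pi * t)
p_enc = delta * np.round(p_true / delta)      # encoder (quantized) position
v_enc = np.diff(p_enc) / Ts                   # successive differentiation
print(f"max position error: {np.max(np.abs(p_enc - p_true)):.4f} rad")
print(f"max velocity error: {np.max(np.abs(v_enc - v_true[1:])):.2f} rad/s")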

[Figure 1.2: Effect of quantization noise of encoder. (a) Position: true vs. encoder-measured position (rad) and the resulting quantization error (rad) over 1 s. (b) Velocity: true velocity vs. velocity from successive differentiation of the encoder signal (rad/sec) over 1 s.]

1.2.5 Open-loop vs. Closed-loop Control

Typical feedback controller (responding to the error): the PID (Proportional-Integral-
Derivative) controller is the most frequently used feedback controller.

u(t) = k_p e(t) + k_i \int_0^t e(\tau)\, d\tau + k_d \frac{de(t)}{dt}    (1.3)
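
A discrete-time implementation of Eq. (1.3) is sketched below (not from the notes); it uses a forward-Euler approximation of the integral and a backward difference for the derivative, with assumed gains and sample time.

# Minimal sketch of a discrete-time PID controller implementing Eq. (1.3).
class PID:
    def __init__(self, kp, ki, kd, Ts):
        self.kp, self.ki, self.kd, self.Ts = kp, ki, kd, Ts
        self.integral = 0.0        # running approximation of the integral of e
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.Ts                    # forward-Euler integral
        derivative = (error - self.prev_error) / self.Ts    # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=1.0, kd=0.1, Ts=0.005)     # assumed gains and sample time
print(f"u for a unit error: {pid.update(1.0):.3f}")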

Closed-loop (feedback) controllers (Fig. 1.3(b)) provide more robust performance than
open-loop controllers (Fig. 1.3(a)) in the presence of disturbances and plant uncertainties.
One limitation of feedback control is that an error must exist before any control action can take
place. This limitation can be overcome by adding a feedforward controller to the feedback
control system (Fig. 1.4).

[Figure 1.3: Open-loop vs. closed-loop control. (a) Open-loop control: the controller drives the controlled plant directly from the desired output, with disturbance d(t) acting on the plant; the output y(t) is not measured. (b) Closed-loop control: sensors measure y(t), which is subtracted from the desired output to form the error driving the feedback controller.]

[Figure 1.4: Feedback/feedforward control: a feedforward block driven by the desired output is added to the closed-loop system of Fig. 1.3(b), with disturbance d(t) acting on the controlled plant.]

Chapter 2

Laplace and z Transforms

Laplace and z transformations convert calculus operations into algebraic operations. Although
they apply to linear systems only, they form the foundation of most existing system analysis
tools, such as transfer functions, block diagrams, pole/zero analysis, and frequency domain
analysis.

2.1 Laplace Transform

Definition 2.1.

f(t) \; (f(t) = 0 \text{ for } t < 0) \quad \longrightarrow \quad F(s) = \mathcal{L}\{f(t)\} = \int_0^{\infty} f(t) e^{-st} \, dt    (2.1)

An important application of the Laplace transform in control theory is that it defines
the transfer function. Recall that the general form of a transfer function can be obtained from
the differential equation,

\frac{d^n y}{dt^n} + a_{n-1} \frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_0 y = b_m \frac{d^m u}{dt^m} + b_{m-1} \frac{d^{m-1} u}{dt^{m-1}} + \cdots + b_0 u    (2.2)

Assuming zero initial conditions,

y(0) = 0, \quad \left.\frac{dy}{dt}\right|_{t=0} = 0, \quad \ldots, \quad \left.\frac{d^{n-1} y}{dt^{n-1}}\right|_{t=0} = 0    (2.3)

we can obtain the transfer function by the Laplace transformation,

G(s) = \frac{Y(s)}{U(s)} = \frac{b_m s^m + \cdots + b_0}{s^n + a_{n-1} s^{n-1} + \cdots + a_0} = \frac{Q(s)}{P(s)}    (2.4)

P(s) = 0 is called the characteristic equation, and its roots are the poles of G(s). Notice that
|G(s)| = ∞ at each pole. The roots of Q(s) = 0 are called the zeros, and |G(s)| = 0 at each zero.
The order of the numerator, m, satisfies m ≤ n, which is called the realizability condition.
For example, pure differentiation (G(s) = s) is an unrealizable operation because, to
find du(t)/dt, you need to know u(t + ε) (ε > 0), which is a future value. For this reason, the
realizability condition is also called the causality condition.
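
As a quick illustration (not from the notes), the sketch below builds G(s) = Q(s)/P(s) for an assumed second-order plant with scipy.signal and reads off its poles and zeros; the numerator and denominator coefficients are given in descending powers of s.

# Minimal sketch: poles and zeros of an assumed G(s) = (s + 2)/(s^2 + 3s + 2),
# which satisfies the realizability condition m = 1 <= n = 2.
from scipy import signal

G = signal.TransferFunction([1, 2], [1, 3, 2])
print("zeros:", G.zeros)   # roots of Q(s) = 0
print("poles:", G.poles)   # roots of the characteristic equation P(s) = 0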

Table 2.1: Laplace Transform of Common Functions

Time Domain Function, t ≥ 0                                   | Laplace Transform
Unit-impulse function, δ(t)                                   | 1
Unit-step function, 1                                         | 1/s
Unit-ramp function, t                                         | 1/s^2
t^n, n > 0                                                    | n!/s^(n+1)
e^(-at)                                                       | 1/(s+a)
t e^(-at)                                                     | 1/(s+a)^2
t^n e^(-at)                                                   | n!/(s+a)^(n+1)
sin(ωt)                                                       | ω/(s^2 + ω^2)
cos(ωt)                                                       | s/(s^2 + ω^2)
(ω_n/√(1-ζ^2)) e^(-ζω_n t) sin(ω_n √(1-ζ^2) t),  0 < ζ < 1    | ω_n^2/(s^2 + 2ζω_n s + ω_n^2)
dg(t)/dt                                                      | sG(s) - g(0)
d^2 g(t)/dt^2                                                 | s^2 G(s) - s g(0) - dg(0)/dt
d^n g(t)/dt^n                                                 | s^n G(s) - s^(n-1) g(0) - ... - d^(n-1) g(0)/dt^(n-1)
∫_0^t g(τ) dτ                                                 | G(s)/s
∫_0^t ∫_0^t g(τ) dτ dτ                                        | G(s)/s^2

Table 2.2: Laplace Transform Theorems

Theorem        | Time Domain Function, t ≥ 0                         | Laplace Transform
Superposition  | f(t) = A_1 g_1(t) + A_2 g_2(t)                      | F(s) = A_1 G_1(s) + A_2 G_2(s)
Final Value    | lim_{t→∞} f(t)                                      | lim_{s→0} sF(s)
Initial Value  | lim_{t→0} f(t)                                      | lim_{s→∞} sF(s)
Time Shifting  | f(t) = g(t - τ) u_s(t - τ)                          | F(s) = e^(-τs) G(s)
Freq. Shifting | f(t) = e^(-at) g(t)                                 | F(s) = G(s + a)
Convolution    | f(t) = g_1(t) * g_2(t) := ∫_0^t g_1(τ) g_2(t-τ) dτ  | F(s) = G_1(s) G_2(s)
Complex Diff.  | f(t) = t g(t)                                       | F(s) = -dG(s)/ds

2.2 Z Transform

The Z Transform is used for discrete-time sequences.

Definition 2.2.

f(k) \; (f(k) = 0 \text{ for } k < 0) \quad \longrightarrow \quad F(z) = \mathcal{Z}\{f(k)\} = \sum_{k=0}^{\infty} f(k) z^{-k}    (2.5)

where z is a complex variable and must be such that the right-hand side summation
converges.

Example 2.1 (Geometric Sequence). For f(k) = p^k (a geometric sequence),

F(z) = \sum_{k=0}^{\infty} p^k z^{-k} = \frac{1}{1 - pz^{-1}} = \frac{z}{z - p} \quad \text{if } |z| > |p|.    (2.6)

For p = 1, f(k) becomes a unit step sequence and F(z) = 1/(1 - z^(-1)) = z/(z - 1) if |z| > 1.
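
A quick numerical check of Eq. (2.6), not part of the original notes: the truncated sum of p^k z^(-k) should agree with the closed form z/(z - p) at any test point with |z| > |p|. The ratio p and the evaluation point z below are arbitrary assumed values.

# Minimal sketch: compare the truncated Z-transform sum with z/(z - p).
p = 0.5
z = 2.0 + 1.0j                  # test point with |z| > |p|
partial = sum(p**k * z**(-k) for k in range(200))
closed = z / (z - p)
print(abs(partial - closed))    # ~0, within floating-point error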

Example 2.2 (Periodic Sequence). For f(k + N) = f(k), where N is the period,

F(z) = f(0) + f(1)z^{-1} + \cdots + f(N-1)z^{-(N-1)} + f(N)z^{-N} + f(N+1)z^{-(N+1)} + \cdots
     = \left[ f(0) + f(1)z^{-1} + \cdots + f(N-1)z^{-(N-1)} \right] \frac{1}{1 - z^{-N}}    (2.7)

The Z transform is a linear operation, i.e., for any constants α and β,

\mathcal{Z}\{\alpha f(k) + \beta g(k)\} = \alpha F(z) + \beta G(z)    (2.8)

2.2.1 Relations of Z Transform

Useful relations for control system analysis and design are shown in Table 2.3.
Shift theorems:

\mathcal{Z}[f(k - i)] = z^{-i} F(z), \quad i > 0

\mathcal{Z}[f(k + i)] = z^{i} F(z) - \sum_{j=1}^{i} z^{j} f(i - j), \quad i > 0    (2.9)

Derivation of the final value theorem:

\mathcal{Z}[f(k+1) - f(k)] = \sum_{j=0}^{\infty} \big( f(j+1) - f(j) \big) z^{-j}
                           = zF(z) - zf(0) - F(z) = (z-1)F(z) - zf(0)

Letting z → 1 on both sides and noting that the sum telescopes,

\lim_{z \to 1} \big[ (z-1)F(z) - zf(0) \big] = \lim_{k \to \infty} \sum_{j=0}^{k} \big( f(j+1) - f(j) \big) = \lim_{k \to \infty} f(k+1) - f(0)

\Rightarrow \lim_{z \to 1} (z-1)F(z) - f(0) = \lim_{k \to \infty} f(k) - f(0)

\Rightarrow \lim_{z \to 1} (z-1)F(z) = \lim_{k \to \infty} f(k)    (2.10)
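
As a sanity check (not from the notes), the sketch below verifies the final value theorem symbolically for the assumed sequence f(k) = 1 - (1/2)^k, whose z-transform is F(z) = z/(z - 1) - z/(z - 1/2); both sides of Eq. (2.10) equal 1.

# Minimal sketch: verify lim_{z->1} (z-1)F(z) = lim_{k->inf} f(k) with sympy.
import sympy as sp

z, k = sp.symbols('z k')
F = z / (z - 1) - z / (z - sp.Rational(1, 2))          # Z{1 - (1/2)^k}
print(sp.limit((z - 1) * F, z, 1))                     # -> 1
print(sp.limit(1 - sp.Rational(1, 2)**k, k, sp.oo))    # -> 1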

Derivation of the convolution theorem:

Let F_1(z) = \mathcal{Z}\{f_1(k)\} and F_2(z) = \mathcal{Z}\{f_2(k)\}. Then

F_1(z) F_2(z) = \left( \sum_{i=0}^{\infty} f_1(i) z^{-i} \right) \left( \sum_{j=0}^{\infty} f_2(j) z^{-j} \right)
              = \sum_{k=0}^{\infty} \left( \sum_{i=0}^{k} f_1(k-i) f_2(i) \right) z^{-k} = \mathcal{Z}\{f_1(k) * f_2(k)\}    (2.11)

where * denotes the convolution sum in the discrete time domain.

2.2.2 Discrete Time Transfer Function

Just as the Laplace transform is used to represent a differential equation in the form of a
(continuous time) transfer function, the Z transform is used to represent a difference equation
in the form of a (discrete time) transfer function. The general form of a SISO (Single-Input-
Single-Output) difference equation is

y(k) + a_{n-1} y(k-1) + \cdots + a_0 y(k-n) = b_m u(k - (n-m)) + \cdots + b_0 u(k-n)    (2.12)

where u(k) is a known input sequence. Applying the z transformation to each term,

Y(z) + a_{n-1} z^{-1} Y(z) + \cdots + a_0 z^{-n} Y(z) = b_m z^{-(n-m)} U(z) + \cdots + b_0 z^{-n} U(z)

\Rightarrow Y(z) = \frac{b_m z^{-(n-m)} + \cdots + b_0 z^{-n}}{1 + a_{n-1} z^{-1} + \cdots + a_0 z^{-n}} U(z)    (2.13)

\Rightarrow Y(z) = \frac{b_m z^{m} + \cdots + b_0}{z^{n} + a_{n-1} z^{n-1} + \cdots + a_0} U(z)

Notice that two forms (one in z^(-1) and the other in z) of the discrete time transfer function
have been given. Poles, zeros and the realizability condition (m ≤ n) are defined similarly to
the continuous time case. Realizability implies that the present output depends on past
and present inputs but not on future inputs.
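
As an illustration (not from the notes), the sketch below realizes the assumed first-order difference equation y(k) = 0.5 y(k-1) + u(k-1), i.e. the discrete transfer function Y(z)/U(z) = z^(-1)/(1 - 0.5 z^(-1)) = 1/(z - 0.5), both by direct recursion and with scipy.signal, and confirms that the two step responses agree.

# Minimal sketch: discrete transfer function vs. direct difference-equation recursion.
import numpy as np
from scipy import signal

num, den = [0.0, 1.0], [1.0, -0.5]      # H(z) = 1/(z - 0.5), coefficients in descending powers of z
_, y_step = signal.dstep(signal.dlti(num, den, dt=1.0), n=10)
y_step = np.squeeze(y_step)

y = np.zeros(10)                        # direct recursion with a unit-step input
u = np.ones(10)
for i in range(1, 10):
    y[i] = 0.5 * y[i - 1] + u[i - 1]

print(np.allclose(y, y_step))           # True: the two realizations agree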

Table 2.3: Z Transform Theorems

Theorem        | Discrete Time Sequence, k ≥ 0                    | z Transform
Superposition  | A_1 x_1(k) + A_2 x_2(k)                          | A_1 X_1(z) + A_2 X_2(z)
Final Value    | lim_{k→∞} x(k)                                   | lim_{z→1} (z - 1)X(z), if it exists
Initial Value  | lim_{k→0} x(k)                                   | lim_{z→∞} X(z)
Time Shifting  | x(k - n)                                         | z^(-n) X(z)
Time Reversal  | x(-k)                                            | X(z^(-1))
Convolution    | x_1(k) * x_2(k) := Σ_{i=0}^{k} x_1(k-i) x_2(i)   | X_1(z) X_2(z)
Difference     | x(k) - x(k - 1)                                  | (1 - z^(-1)) X(z)

Table 2.4: Table of Z Transforms

X(s)                      | x(t)              | x(kT) (or x(k))                 | X(z)
—                         | —                 | δ(k) = 1 (k = 0), 0 (k ≠ 0)     | 1
—                         | —                 | δ(k - n) = 1 (k = n), 0 (k ≠ n) | z^(-n)
1/s                       | 1(t)              | 1(k)                            | 1/(1 - z^(-1))
1/(s + a)                 | e^(-at)           | e^(-akT)                        | 1/(1 - e^(-aT) z^(-1))
1/s^2                     | t                 | kT                              | T z^(-1)/(1 - z^(-1))^2
2/s^3                     | t^2               | (kT)^2                          | T^2 z^(-1)(1 + z^(-1))/(1 - z^(-1))^3
a/(s(s + a))              | 1 - e^(-at)       | 1 - e^(-akT)                    | (1 - e^(-aT)) z^(-1)/[(1 - z^(-1))(1 - e^(-aT) z^(-1))]
(b - a)/((s + a)(s + b))  | e^(-at) - e^(-bt) | e^(-akT) - e^(-bkT)             | (e^(-aT) - e^(-bT)) z^(-1)/[(1 - e^(-aT) z^(-1))(1 - e^(-bT) z^(-1))]
1/(s + a)^2               | t e^(-at)         | kT e^(-akT)                     | T e^(-aT) z^(-1)/(1 - e^(-aT) z^(-1))^2
2/(s + a)^3               | t^2 e^(-at)       | (kT)^2 e^(-akT)                 | T^2 e^(-aT)(1 + e^(-aT) z^(-1)) z^(-1)/(1 - e^(-aT) z^(-1))^3
s/(s + a)^2               | (1 - at) e^(-at)  | (1 - akT) e^(-akT)              | [1 - (1 + aT) e^(-aT) z^(-1)]/(1 - e^(-aT) z^(-1))^2
a^2/(s^2(s + a))          | at - 1 + e^(-at)  | akT - 1 + e^(-akT)              | [(aT - 1 + e^(-aT)) + (1 - e^(-aT) - aT e^(-aT)) z^(-1)] z^(-1)/[(1 - z^(-1))^2 (1 - e^(-aT) z^(-1))]
ω/(s^2 + ω^2)             | sin ωt            | sin ωkT                         | z^(-1) sin ωT/(1 - 2 z^(-1) cos ωT + z^(-2))
s/(s^2 + ω^2)             | cos ωt            | cos ωkT                         | (1 - z^(-1) cos ωT)/(1 - 2 z^(-1) cos ωT + z^(-2))
ω/((s + a)^2 + ω^2)       | e^(-at) sin ωt    | e^(-akT) sin ωkT                | e^(-aT) z^(-1) sin ωT/(1 - 2 e^(-aT) z^(-1) cos ωT + e^(-2aT) z^(-2))
(s + a)/((s + a)^2 + ω^2) | e^(-at) cos ωt    | e^(-akT) cos ωkT                | (1 - e^(-aT) z^(-1) cos ωT)/(1 - 2 e^(-aT) z^(-1) cos ωT + e^(-2aT) z^(-2))
—                         | —                 | a^k                             | 1/(1 - a z^(-1))
—                         | —                 | a^(k-1), k = 1, 2, ...          | z^(-1)/(1 - a z^(-1))
—                         | —                 | k a^(k-1)                       | z^(-1)/(1 - a z^(-1))^2
—                         | —                 | k^2 a^(k-1)                     | z^(-1)(1 + a z^(-1))/(1 - a z^(-1))^3
—                         | —                 | a^k cos kπ                      | 1/(1 + a z^(-1))
