
Nonlinear Control Theory

Introduction
Harry G. Kwatny
Department of Mechanical Engineering & Mechanics Drexel University

Outline

Course resources
Overview
What is nonlinear control?
Linearization
Why nonlinear control? Some examples

Resources

Professor Harry G. Kwatny
Office: 151-A
Tel: 895-2356
e-mail: hkwatny@coe.drexel.edu
URL: http://www.pages.drexel.edu/faculty/hgk22

Text Books & Software:
Kwatny, H. G. and Blankenship, G. L., Nonlinear Control & Analytical Mechanics, Birkhauser, 2000 (obtain update from H. Kwatny)
Mathematica, Student Version 5.0

Other References:
1. Friedland, Bernard, Advanced Control System Design, Prentice-Hall, 1996.
2. Slotine, J-J. E. and Li, W., Applied Nonlinear Control, Prentice-Hall, 1991.
3. Vidyasagar, M., Nonlinear Systems Analysis, 2nd edition, Prentice-Hall, 1993.
4. Isidori, Alberto, Nonlinear Control Systems, 3rd edition, Springer-Verlag, 1995.
5. Nijmeijer, H. and van der Schaft, A. J., Nonlinear Dynamical Control Systems, Springer-Verlag, 1990.
6. Khalil, H. K., Nonlinear Systems, 2nd edition, Prentice-Hall, 1996.

MEM 636 ~ Part I

Introduction
  Course Overview, Using Mathematica
Nonlinear Dynamics, Stability
  The State Space, Equilibria & Stability, Hartman-Grobman Theorem
  Stability ~ Liapunov Methods
Geometric Foundations
  Manifolds, Vector Fields & Integral Curves
  Distributions, Frobenius Theorem & Integral Surfaces
  Coordinate Transformations
Controllability & Observability
  Controllability & Observability, Canonical Forms
Stabilization via Feedback Linearization
  Linearization via Feedback
  Stabilization using IO Linearization, Gain Scheduling
Observer Design
Robust & Adaptive Control
Tracking & Disturbance Rejection

General Model of Nonlinear System

[Block diagram: input u(t) → System (state x(t)) → output y(t)]

ẋ = f(x, u)   state equation
y = h(x, u)   output equation
x ∈ R^n, u ∈ R^m, y ∈ R^q

Special Case: Linear System

Nonlinear system:
ẋ = f(x, u)
y = h(x, u)

Linear system:
ẋ = Ax + Bu
y = Cx + Du

Most real systems are nonlinear
Sometimes a linear approximation is satisfactory
Linear systems are much easier to analyze

Linear Systems are Nice

1. Superposition Principle: A linear combination of any two solutions of a linear system is also a solution.
2. Unique Equilibrium: Recall that an equilibrium is a solution x(t), with u(t) = 0, for which x is constant. A generic linear system has a unique (isolated) equilibrium at the origin x(t) = 0, and its stability is easily determined.
3. Controllability: There are known necessary and sufficient conditions under which a control exists to steer the state of a linear system from any initial value to any desired final value in finite time.
4. Observability: There are known necessary and sufficient conditions under which the system's state history can be determined from its input and output history.
5. Control design tools: A variety of controller and observer design techniques exist for linear systems (classical techniques, pole placement, LQR/LQG, H∞, etc.).
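The superposition principle is easy to verify numerically. A small sketch (the matrix A below is an arbitrary stable example, not from the notes); forward Euler is itself a linear map of the state, so the check holds to floating-point precision:

```python
import numpy as np

# Superposition for a linear system x' = A x, checked numerically.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # arbitrary stable example

def simulate(x0, h=0.01, steps=500):
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + h * (A @ x)    # Euler step: a linear map of x
        traj.append(x.copy())
    return np.array(traj)

x1 = simulate([1.0, 0.0])
x2 = simulate([0.0, 1.0])
combo = simulate([2.0, 3.0])   # initial condition 2*x1(0) + 3*x2(0)

# The trajectory of the combination is the combination of the trajectories:
print(np.allclose(combo, 2 * x1 + 3 * x2))  # → True
```

For a nonlinear f(x), the same check fails in general, which is exactly why the linear tools below do not carry over directly.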

Why Nonlinear Control?

Contemporary control problems require it:
  Robotics, ground vehicles, propulsion systems, electric power systems, aircraft & spacecraft, autonomous vehicles, manufacturing processes, chemical & material processing.

Smooth (soft) nonlinearities:
  The system motion may not remain close enough to an equilibrium point for the linear approximation to be valid. Linearization also often removes essential physical effects, such as Coriolis forces. The optimal control may make effective use of nonlinearities (robotics, process systems, spacecraft).

Non-smooth (hard) nonlinearities:
  Saturation, backlash, deadzone, hysteresis, friction, switching.

Systems that are not linearly controllable/observable may be controllable/observable in a nonlinear sense:
  Nonholonomic vehicles (try parking a car with a linear controller), underactuated mechanical systems (fewer controls than degrees of freedom), compressors near stall, ground vehicles near the directional stability limit.

Systems that operate near instability (bifurcation points):
  Power system voltage collapse, aircraft stall & spin, compressor surge & rotating stall, automobile directional, cornering & roll stability.

Parameter-adaptive & intelligent systems are inherently nonlinear.

Linearization 1: Linear Approximation Near an Equilibrium Point

The general nonlinear system in standard form involves the state vector x, input vector u, and output vector y:

ẋ = f(x, u)   state equations
y = h(x, u)   output equations

A set of values (x0, u0, y0) is called an equilibrium point if it satisfies:

0 = f(x0, u0)
y0 = h(x0, u0)

We are interested in motions that remain close to the equilibrium point.

Linearization 2

Define: x(t) = x0 + δx(t),  u(t) = u0 + δu(t),  y(t) = y0 + δy(t)

The equations become:

δẋ = f(x0 + δx(t), u0 + δu(t))
y0 + δy(t) = h(x0 + δx(t), u0 + δu(t))

Now construct a Taylor series for f, h:

f(x0 + δx, u0 + δu) = f(x0, u0) + [∂f(x0, u0)/∂x] δx + [∂f(x0, u0)/∂u] δu + h.o.t.
h(x0 + δx, u0 + δu) = h(x0, u0) + [∂h(x0, u0)/∂x] δx + [∂h(x0, u0)/∂u] δu + h.o.t.

Notice that f(x0, u0) = 0 and h(x0, u0) = y0, so, dropping the higher-order terms:

δẋ = [∂f(x0, u0)/∂x] δx + [∂f(x0, u0)/∂u] δu
δy = [∂h(x0, u0)/∂x] δx + [∂h(x0, u0)/∂u] δu

i.e., with A = ∂f/∂x, B = ∂f/∂u, C = ∂h/∂x, D = ∂h/∂u evaluated at (x0, u0):

δẋ = A δx + B δu
δy = C δx + D δu
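The Jacobians A and B can be computed symbolically (e.g., in Mathematica) or approximated numerically. A minimal sketch using central finite differences on a hypothetical damped-pendulum example (the model and its parameters are illustrative, not from the notes):

```python
import numpy as np

# Hypothetical example: linearize a damped pendulum
#   x1' = x2
#   x2' = -(g/l) sin(x1) - c x2 + u
# about the downward equilibrium (x0, u0) = ([0, 0], 0).

g, l, c = 9.81, 1.0, 0.5

def f(x, u):
    return np.array([x[1], -(g / l) * np.sin(x[0]) - c * x[1] + u])

def jacobians(f, x0, u0, eps=1e-6):
    """Central-difference approximations of A = df/dx, B = df/du."""
    n, m = len(x0), 1
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B = ((f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)).reshape(n, m)
    return A, B

x0, u0 = np.array([0.0, 0.0]), 0.0
A, B = jacobians(f, x0, u0)
print(A)  # ≈ [[0, 1], [-g/l, -c]]
print(B)  # ≈ [[0], [1]]
```

The analytical Jacobian at the origin is A = [[0, 1], [-g/l, -c]], B = [[0], [1]], which the finite differences recover to high accuracy.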

Some Examples of Nonlinear Systems

Example 1: Fully-Actuated Robotic System

Models always look like:

kinematics: q̇ = V(q) p
dynamics: M(q) ṗ + C(q, p) p + F(q, p) = u,  det M(q) ≠ 0
(q: coordinates, p: quasi-velocities)

If we want to regulate velocity, choose

u = C(q, p) p + F(q, p) + M(q) v   (decoupling + stabilizing)

so that ṗ = v, a linear system!

If we want to regulate position, we can too; it takes more algebra. This is called the 'computed torque' method. We will generalize it.
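A minimal numerical sketch of the velocity-regulation case for a single-link arm; the model and all parameters here are invented stand-ins, not from the course text. The control cancels the nonlinear terms and injects v, so the closed loop is exactly ṗ = v:

```python
import numpy as np

# Computed-torque sketch (hypothetical 1-DOF arm):
#   M qdd + C(q, qd) qd + F(q, qd) = u
# Choosing u = C qd + F + M v gives qdd = v, a linear system,
# which we stabilize with v = -k (qd - qd_ref).

M = 2.0                                   # inertia (invented)
def C(q, qd): return 0.3                  # velocity-coupling coefficient (invented)
def F(q, qd): return 9.81 * np.sin(q)     # gravity torque (invented)

qd_ref = 1.0                              # desired velocity
k = 5.0                                   # outer-loop gain
q, qd, h = 0.0, 0.0, 1e-3
for _ in range(5000):                     # 5 s of forward-Euler simulation
    v = -k * (qd - qd_ref)                # stabilizing linear outer loop
    u = C(q, qd) * qd + F(q, qd) + M * v  # decoupling: cancel nonlinearities
    qdd = (u - C(q, qd) * qd - F(q, qd)) / M   # plant dynamics
    q, qd = q + h * qd, qd + h * qdd

print(abs(qd - qd_ref) < 1e-3)  # → True: velocity regulated
```

Because the cancellation is exact here, the velocity error obeys the linear equation ė = -k e and decays exponentially regardless of the configuration q.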

Example 2: Steering a Surface Vessel

[Figure: planar vehicle with forward speed u, sideslip velocity v, yaw rate r, heading angle ψ, sideslip angle β, rudder angle δ; velocities shown in both inertial and body coordinates. If the boat has a keel, typically the sideslip velocity v ≈ 0.]

kinematics:
ẋ = u cos ψ − v sin ψ
ẏ = u sin ψ + v cos ψ
ψ̇ = r
(V: velocity)

Consider u1 = u and u2 = r to be control inputs and suppose v = 0:

ẋ = u1 cos ψ
ẏ = u1 sin ψ
ψ̇ = u2

Linearize about the origin (x, y, ψ) = (0, 0, 0) to get

δẋ = A δx + B δu,  A = 0,  B = [1 0; 0 0; 0 1]

The linear system is not controllable! In particular, δẏ = 0, so it is not possible to steer from the origin along the y-axis.

Example 2, cont'd

But, of course, if you allow large motions you can steer to a point x = 0, y = ȳ, ψ = 0. How? Try coordinated periodic inputs of the form u1(t) = cos ωt, u2(t) = sin ωt.

The system is linearly uncontrollable but nonlinearly controllable! By coordinating the rudder and forward speed, we can cause the vehicle to move along the y-axis.
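The large-motion steering trick can be checked numerically. A minimal sketch (the particular inputs u1 = cos t, u2 = sin t and the step size are illustrative choices): integrating the kinematics over one period returns the heading to zero while producing a net lateral displacement, which the linearized model says is impossible.

```python
import numpy as np

# Boat/unicycle kinematics with v = 0:
#   x' = u1 cos(psi),  y' = u1 sin(psi),  psi' = u2.
# In the linearization dy = 0, yet coordinated periodic inputs
# move the vehicle sideways over one period.

h = 1e-4
x = y = psi = 0.0
t = 0.0
for _ in range(int(2 * np.pi / h)):   # one period, forward Euler
    u1, u2 = np.cos(t), np.sin(t)     # coordinated speed and yaw-rate inputs
    x += h * u1 * np.cos(psi)
    y += h * u1 * np.sin(psi)
    psi += h * u2
    t += h

print(round(psi, 3), round(y, 2))  # heading back near 0, but y has moved
```

This net drift from cyclic inputs is the essential mechanism behind nonholonomic motion planning (e.g., parallel parking).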

Example 3: Drive Motor & Load with Non-smooth Friction

[Block diagram: drive motor (inertia J1) coupled through a shaft to an inertial load (inertia J2); each inertia appears as an integrator chain 1/(J s) → 1/s from torque to angle, with non-smooth friction torques τ1(θ̇1) and τ2(θ̇2) acting on the motor and load, respectively.]

The problem is to accurately position the load. The problem becomes even more interesting if the friction parameters are uncertain.
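A sketch of why non-smooth friction complicates positioning, using a single inertia under PD control with Coulomb friction; all parameters are invented for illustration. The hard nonlinearity lets the load stick short of the target, a steady-state error that a linear analysis of the same loop would not predict:

```python
import numpy as np

# Positioning an inertia J under PD control with Coulomb friction
# (illustrative parameters):
#   J w' = u - friction,  friction = Fc * sign(w) while moving.
J, Fc = 1.0, 0.8          # inertia, Coulomb friction level
kp, kd = 2.0, 3.0         # PD gains
theta_ref = 1.0           # position target

theta, w, h = 0.0, 0.0, 1e-3
for _ in range(20000):    # 20 s of forward-Euler simulation
    u = kp * (theta_ref - theta) - kd * w
    if w != 0:
        friction = Fc * np.sign(w)
    else:
        friction = np.clip(u, -Fc, Fc)   # stiction: friction balances u
    wdot = (u - friction) / J
    theta, w = theta + h * w, w + h * wdot

# The load sticks where kp*(theta_ref - theta) = Fc, i.e. theta = 0.6:
print(round(theta, 3))  # → 0.6, short of theta_ref = 1.0
```

The residual error Fc/kp = 0.4 cannot be removed by raising kd, and raising kp trades it for limit-cycle behavior in practice, which is why friction compensation is a genuinely nonlinear design problem.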

Example 4: Automobile, Multiple Equilibria

[Figure: car of mass m, moment of inertia J, with body-frame axes u, v and space frame X, Y; speed Vs at sideslip angle β; left and right tire forces Fl, Fr.]

The model has the form (symbols reconstructed from the figure: β sideslip angle, ω yaw rate, δ steering angle, Fd drive force):

d/dt (β, ω, Vs) = F(β, ω, Vs, δ, Fd)

equilibria: set the derivatives to zero, hold the speed fixed at Vs = V̄s (a parameter), and solve

0 = F(β, ω, V̄s, δ, Fd)

for the remaining variables β, ω, Fd.

[Plots: equilibrium branches versus the speed parameter V̄s over roughly 131.5 to 133; one branch lies between about −0.75 and 0.75 and another reaches about −2, so several equilibria coexist at a given speed.]
