
9 Optimal and robust control system design


9.1 Review of optimal control
An optimal control system seeks to maximize the return from a system for the minimum cost. In general terms, the optimal control problem is to find a control u which causes the system

\dot{x}(t) = g(x(t), u(t), t)    (9.1)

to follow an optimal trajectory x(t) that minimizes the performance criterion, or cost function,

J = \int_{t_0}^{t_1} h(x(t), u(t), t)\, dt    (9.2)
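As an illustrative special case of the cost function (9.2), the quadratic performance index referred to below for linear plants may be written as

J = \int_{t_0}^{t_1} \left( \mathbf{x}^{\mathrm{T}} \mathbf{Q}\, \mathbf{x} + \mathbf{u}^{\mathrm{T}} \mathbf{R}\, \mathbf{u} \right) dt

where Q and R are symmetric weighting matrices penalizing state deviation and control effort respectively. (The symbols Q and R are introduced here for illustration; they are not defined in the passage above.)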

The problem is one of constrained functional minimization, and has several approaches. Variational calculus, Dreyfus (1962), may be employed to obtain a set of differential equations with certain boundary condition properties, known as the Euler-Lagrange equations. The maximum principle of Pontryagin (1962) can also be applied to provide the same boundary conditions by using a Hamiltonian function. An alternative procedure is the dynamic programming method of Bellman (1957), which is based on the principle of optimality and the imbedding approach. The principle of optimality yields the Hamilton-Jacobi partial differential equation, whose solution results in an optimal control policy. Euler-Lagrange and Pontryagin's equations are applicable to systems with non-linear, time-varying state equations and non-quadratic, time-varying performance criteria. The Hamilton-Jacobi equation is usually solved for the important and special case of the linear time-invariant plant with quadratic performance criterion (called the performance index), which takes the form of the matrix Riccati (1724) equation. This produces an optimal control law as a linear function of the state vector components which is always stable, providing the system is controllable.
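The following is a minimal numerical sketch (not taken from the text) of the linear time-invariant, quadratic-criterion case just described: the steady-state matrix Riccati equation is solved for a simple plant, and the resulting optimal control law is a linear function of the state. The double-integrator plant, the weighting matrices Q and R, and the use of SciPy's solve_continuous_are routine are all illustrative assumptions.

# Minimal sketch (illustrative, not from the text): solving the steady-state
# matrix Riccati equation for a linear time-invariant plant with a quadratic
# performance index, and forming the optimal linear state-feedback law.
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant: x_dot = A x + B u  (double integrator, chosen only for illustration)
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Quadratic performance index J = integral( x'Qx + u'Ru ) dt
Q = np.diag([1.0, 0.1])    # state weighting (assumed values)
R = np.array([[0.5]])      # control-effort weighting (assumed value)

# Steady-state solution P of the algebraic matrix Riccati equation:
#   A'P + P A - P B R^{-1} B'P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal control law is a linear function of the state: u = -K x
K = np.linalg.inv(R) @ B.T @ P
print("Optimal state-feedback gain K =", K)

# With the plant controllable, the closed-loop matrix A - BK has its
# eigenvalues in the left half-plane, i.e. the optimal regulator is stable.
print("Closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))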

9.1.1 Types of optimal control problems

(a) The terminal control problem: This is used to bring the system as close as possible to a given terminal state within a given period of time. An example is an