
ISA Transactions 47 (2008) 113–118

www.elsevier.com/locate/isatrans

Costate prediction based optimal control for non-linear hybrid systems


Minghui Hu a,∗ , Yongshan Wang b , Huihe Shao a
a School of Electronic, Information and Electrical Engineering, Shanghai Jiaotong University, Shanghai, 200240, China
b Department of Mechanical Engineering, Tongji University, Shanghai, 200092, China

Received 21 December 2005; accepted 1 June 2007


Available online 7 August 2007

Abstract

This paper solves a non-linear discrete-continuous optimal control problem by iterating on a sequence of simplified problems in discrete form. A mixed approach with a discrete cost function and a continuous state-variable system description is used as the basis of the design, and it is shown how the global problem can be decomposed into local subsystem problems and a coordinator within a hierarchical framework. The correct optimal solution for a real system in which model-reality differences exist can be obtained from the system model by an interconnected costate prediction iterative solution. The efficiency and convergence properties of the algorithm are demonstrated by a simulation study.
© 2007, ISA. Published by Elsevier Ltd. All rights reserved.

Keywords: Optimal control; Discrete-continuous; Non-linear systems; Decomposition; Costate prediction

1. Introduction

Hybrid systems are dynamical systems that combine continuous processes with discrete events [1,2]; the flow industries are thus a class of hybrid systems. Because many non-linear optimal control problems are analytically intractable, it is often necessary to resort to iterative methods based on simplified models for their solution; ideally, any such iterative technique should converge to the correct solution of the original problem [3–5]. A hierarchical algorithm of integrated system optimization for non-linear discrete dynamic large-scale systems has been introduced [6–10]. Methods based on integrated system optimization and parameter estimation, ISOPE (or DISOPE for the dynamic version), which use Lagrangian techniques to integrate simplified optimization with parameter estimation so that the results of the original and simplified problems match, achieve this aim [11]. In applications, however, the system is often a discrete-continuous hybrid described by a set of generally non-linear continuous state differential equations [12,13].

In order to simplify complex systems and make better use of the available computation [14], this paper describes a novel algorithm that applies both interaction and costate prediction methods to discrete-continuous non-linear optimal control problems with a discrete performance index. The basic idea of the method is that the original non-linear optimal control problem is solved by iterating on a sequence of suitably modified linear quadratic problems whose dynamic models and parameters are updated at each iteration. In this approach, processors perform certain computations and then exchange their information either by direct communication with each other or by means of a coordinator processor. Each processor can perform its steps independently of the other processors; that is, a processor can proceed with its next cycle without waiting for the other processors to finish their iteration. This may result in some processors performing their computations faster than others.

This paper is organized as follows. Section 2 presents the problem formulation and the algorithm for solving the optimal control problem of non-linear interconnected dynamical hybrid systems. The convergence behaviour of the algorithm is analysed in Section 3; the convergence proof is based on the results of [14]. Section 4 discusses the practical implementation of the algorithm and presents simulation results that support the theoretical investigation. Finally, conclusions are given in Section 5.

∗ Corresponding author. Tel.: +86 2134204264; fax: +86 2154260762.
E-mail address: agile hu@sjtu.edu.cn (M. Hu).



doi:10.1016/j.isatra.2007.06.001

2. Problem formulation and solution approach

Consider a non-linear large-scale time-invariant system:

ẋ(t) = f_c(x(t), u(t), t),  x(t_0) = x_0   (1)

where x(t) ∈ R^n, u(t) ∈ R^m, f_c : R^n × R^m × R → R^n. The control u(t) is sampled at uniform sampling instants t_k, integer k ∈ [0, N], using a zero-order hold. Let τ be the constant sampling interval, τ = t_{k+1} − t_k. Then over a sampling interval Eq. (1) may be written as the discrete system:

x(k+1) = f_d(x(k), u(k), k)   (2)

where f_d(x(k), u(k), k) = x(k) + ∫_{t_k}^{t_{k+1}} f_c(x(s), u(k), s) ds. For the linear part of the dynamics, Eq. (2) can be written as

x(k+1) = e^{Aτ} x(k) + A^{−1}(e^{Aτ} − I) B u(k).   (3)

It is assumed that the system can be decomposed into M interconnected subsystems given by:

x_i(k+1) = A_i x_i(k) + B_i u_i(k) + ω_i(k),  x_i(k_1) = x_{i0},  i = 1, ..., M   (4)

where the interaction vector ω_i(k) = Σ_{j=1}^{M} L_{ij} x_j(k) is a linear combination of the states of the other subsystems, ω_i ∈ R^{q_i}. The original optimal control problem is reduced to the optimization of M subsystems, satisfying Eqs. (2) and (3) while minimizing the cost function

min_{u(k)} J = min_{u(k)} Σ_{i=1}^{M} J_i = min_{u(k)} Σ_{i=1}^{M} { Φ_i(x_i(N), N) + Σ_{k=0}^{N−1} L_i(x_i(k), u_i(k), k) }   (5)

where Φ_i(·) : R^{n_i} → R is the ith subsystem terminal measure and L_i : R^{n_i} × R^{m_i} × R → R is the ith subsystem performance measure, with

L_i(x_i(k), u_i(k), k) = ½ (x_i(k)^T Q_i x_i(k) + u_i(k)^T R_i u_i(k)).

The linear parts of the state equations and the cost function are separated, and the non-linear and non-separable parts are predicted. With the predicted values x_i = x_i^*, u_i = u_i^*, Eq. (5) can be written as

min_{u(k)} J = min_{u(k)} Σ_{i=1}^{M} J_i
  = min_{u(k)} ½ Σ_{i=1}^{M} { x_i(N)^T P_i x_i(N) + x_i^*(N)^T P̄_i x_i^*(N) }
    + ½ Σ_{i=1}^{M} Σ_{k=0}^{N−1} { x_i(k)^T Q_i x_i(k) + x_i^*(k)^T Q̄_i x_i^*(k) + u_i(k)^T R_i u_i(k) + u_i^*(k)^T R̄_i u_i^*(k) }   (6)

where Q_i + Q̄_i = Q_i^0 and R_i + R̄_i = R_i^0, subject to

x_i(k+1) = A_i x_i(k) + B_i u_i(k) + ω_i(k),
ω_i(k) = Σ_{j=1}^{M} L_{ij} x_j(k),
x_i = x_i^*,  u_i = u_i^*,
ω_i^*(k) = x_i(k+1) − A_i x_i^*(k) − B_i u_i^*(k).

The resulting two-point boundary value problem can be combined in matrix form as follows (rows separated by semicolons):

[x_i(k+1); γ_i(k+1)] = [A_i, −B_i R_i^{−1} B_i^T; −Q_i, −A_i^T] [x_i(k); γ_i(k)] + Σ_{j=1, j≠i}^{M} [A_{ij}, 0; 0, −A_{ij}] [g_{xj}; g_{γj}]   (7)

where g is the interconnection matrix of all the subsystems. Let v_i^{(l)} = [x_i^{(l)T}  γ_i^{(l)T}]^T; then Eq. (7) can be written in the form

v_i^{(l+1)}(k+1) = H_{0i} v_i^{(l+1)}(k) + Σ_{j=1}^{M} H_{ij} v_i^{(l)}(k)   (8)

where H_{0i} = [A_i, −B_i R_i^{−1} B_i^T; −Q_i, −A_i^T], H_{ij} = [A_{ij}, 0; 0, −A_{ij}], H_{ii} = 0, ∀ i, j ∈ [1, M]. The optimal value of the prediction is v_i^* = [x_i^{*T}  γ_i^{*T}]^T.

The ith subsystem error at the lth iteration is defined as

e_i^{(l)} = v_i^{(l)} − v_i^* = [e_{ix}^{(l)T}  e_{iγ}^{(l)T}]^T,
e_{gj} = [e_{gxj}  e_{gγj}] = [g_{xj} − x_j^*,  g_{γj} − γ_j^*].

Then the error dynamics can be written as

e_i(k+1) = H_{0i} e_i(k) + Σ_{j=1}^{M} H_{ij} e_{gj}(k).   (9)

A Hamiltonian function for the non-linear system can be defined as H = Σ_{i=1}^{M} H_i, where

H_i = ½ (x_i(k)^T Q_i x_i(k) + u_i(k)^T R_i u_i(k)) + ½ (x_i^*(k)^T Q̄_i x_i^*(k) + u_i^*(k)^T R̄_i u_i^*(k))
    − λ_i^T(k) (u_i^*(k) − u_i(k)) − β_i^T(k) (x_i^*(k) − x_i(k)) + γ_i^T(k+1) (A_i x_i(k) + B_i u_i(k) + ω_i^*(k))   (10)
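The zero-order-hold discretization of Eq. (3) is easy to verify numerically. The following Python sketch (the paper supplies no code, and the test matrices here are invented for illustration) computes e^{Aτ} and A^{−1}(e^{Aτ} − I)B:

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, tau):
    """Zero-order-hold discretization of dx/dt = A x + B u, per Eq. (3):
    x(k+1) = e^{A tau} x(k) + A^{-1}(e^{A tau} - I) B u(k)."""
    n = A.shape[0]
    Ad = expm(A * tau)                                  # state transition matrix
    Bd = np.linalg.solve(A, (Ad - np.eye(n)) @ B)       # A^{-1}(e^{A tau} - I) B
    return Ad, Bd

# Illustrative 2-state example (matrices are arbitrary, not from the paper)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[0.1], [0.2]])
Ad, Bd = zoh_discretize(A, B, tau=0.05)
```

The formula requires A to be invertible; for singular A, the discrete pair can instead be read off from the exponential of the augmented matrix [A, B; 0, 0].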

where λ_i(k), β_i(k) are modified multipliers and γ_i(k+1) is the costate vector. The optimality conditions are

∇_{u_i(k)} H_i = 0 ⇒ u_i(k) + R_i^{−1} B_i^T γ_i(k+1) + R_i^{−1} λ_i(k) = 0   (11)

∇_{γ_i(k+1)} H_i = x_i(k+1) ⇒ x_i(k+1) = A_i x_i(k) − B_i R_i^{−1} (B_i^T γ_i(k+1) + λ_i(k)) + ω_i^*(k),  x_i(0) = x_{i0}   (12)

∇_{x_i(k)} H_i = −γ_i(k) ⇒ −γ_i(k) = A_i^T γ_i(k+1) + Q_i x_i(k) + β_i(k),  γ_i(N) = 0   (13)

∇_{β_i(k)} H_i = 0 ⇒ x_i^*(k) = x_i(k)   (14)

∇_{λ_i(k)} H_i = 0 ⇒ u_i^*(k) = u_i(k)   (15)

∇_{x_i^*(k)} H_i = 0 ⇒ β_i(k) = Q̄_i x_i^*(k) + ∇_{x_i^*(k)} ω_i^*(k) γ_i(k+1)   (16)

∇_{u_i^*(k)} H_i = 0 ⇒ λ_i(k) = R̄_i u_i^*(k) + ∇_{u_i^*(k)} ω_i^*(k) γ_i(k+1).   (17)

According to the decomposition-coordination algorithm, it is necessary to solve Eqs. (11)–(13) and predict x_i^*(k), u_i^*(k), λ_i(k), β_i(k) in the first class. In a complex situation it is difficult to solve the two-point boundary value problem and the Riccati difference equation. To avoid the two-point boundary value problem, the first class solves only Eqs. (11) and (12). From the unknown quantities of these equations, it is necessary to predict x_i^*(k), u_i^*(k), λ_i(k) and the costate vector γ_i(k+1). The interactions of the non-linear discrete-continuous hybrid system are integrated by repeated application of the optimal control predicted from the current costate vector. The algorithm is stated as follows, assuming the correlation parameters of the algorithm are known.

Step 1: Predict a nominal solution x_i^*(k), u_i^*(k), λ_i(k), γ_i(k+1).
Step 2: In the first class, derive the predicted local optimal control u_i(k) from Eq. (11) and update x_i(k) by Eq. (12).
Step 3: In the second class, update β_i(k) by Eq. (16), solve Eq. (13) using β_i(k) and x_i(k), and then send the updated values γ_i(k+1).
Step 4: To regulate convergence, a simple relaxation method is employed whose aim is to satisfy each subsystem at the end of the iterations [15]. This is:

u_i^*(k+1) = u_i^*(k) + k_{ui} (u_i(k+1) − u_i^*(k))
λ_i^*(k+1) = λ_i^*(k) + k_{pi} (λ_i(k+1) − λ_i^*(k))   (18)

where k_{ui} ∈ (0, 1] and k_{pi} ∈ (0, 1] are scalar gains. In the second class, according to Eqs. (14)–(16), compute x_i^*(k), u_i^*(k), λ_i(k) and update the predicted values of the coordination vector [x_i^*(k)^T, λ_i(k)^T, u_i^*(k)^T, γ_i(k)^T]^T. If e_u = Σ_{i=1}^{M} Σ_{k=0}^{N−1} ||u_i^{*,l+1}(k) − u_i^{*,l}(k)|| < ε_1, where l is the iteration number and ε_1 is a small positive number, go to Step 5 and accept the optimal solution; otherwise set l + 1 → l and repeat from Step 3.
Step 5: According to Eq. (13), update the value of the costate vector.
Step 6: Calculate the errors. If e_λ = Σ_{i=1}^{M} Σ_{k=0}^{N−1} ||λ_i^{l+1}(k) − λ_i^l(k)|| < ε_2 and e_γ = Σ_{i=1}^{M} Σ_{k=0}^{N−1} ||γ_i^{l+1}(k) − γ_i^l(k)|| < ε_3, where ε_2 and ε_3 are small positive numbers, stop; otherwise set l + 1 → l and repeat from Step 2.

Fig. 1. Costate prediction algorithm hierarchical structure.

The hierarchical structure and information exchange of this algorithm are illustrated in Fig. 1. Note that the algorithm implementation can be considered in two classes. The first class can be considered as a means of solving the vector difference equation and predicting the local parameters and the costate vector. The second class can be considered as a means of modifying the local parameters and solving the complex non-linear optimal control problem.

3. Algorithm convergence analysis

It is important to analyse the convergence behaviour of the algorithm and ensure that the iterations converge to a final solution. For continuous linear systems, the interconnected costate prediction algorithm converges to a final solution [3]. We now analyse the convergence of interconnected costate prediction for non-linear discrete-continuous hybrid systems.

We define G as the matrix of the interaction variables between the ith subsystem and the others. G is, in general, a piecewise continuous function having a finite number of discontinuities (since a finite number of subsystems perform a finite number of iterations), and G can be written in the form G = [G_x  G_γ], where

G_x = [g_{x1}  g_{x2}  ···  g_{xM}],  G_γ = [g_{γ1}  g_{γ2}  ···  g_{γM}].

Let

Σ_{j=1, j≠i}^{M} A_{ij} g_{xj} = −B_i R_i^{−1} B_i^T γ_i(k+1) + ω_i^*(k);   (19)

then Eq. (12) can be written in the form

x_i(k+1) = A_i x_i(k) − B_i R_i^{−1} B_i^T γ_i(k+1) + Σ_{j=1, j≠i}^{M} A_{ij} g_{xj},  x_i(0) = x_{i0}.   (20)
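The relaxation update of Step 4 (Eq. (18)) is just a damped fixed-point step. A minimal Python sketch (the function name and numerical values are invented for illustration):

```python
def relax(pred, new, gain):
    """Eq. (18): pred <- pred + gain * (new - pred), with gain in (0, 1]."""
    return [p + gain * (n - p) for p, n in zip(pred, new)]

# Predicted control u_i* over a short horizon, and the local optimal
# control u_i returned by the first class via Eq. (11) (values invented):
u_star = [0.0, 0.0, 0.0, 0.0]
u_local = [1.0, 0.8, 0.5, 0.2]
u_star = relax(u_star, u_local, gain=0.5)   # k_ui = 0.5
```

With gain = 1 the update reduces to direct substitution; smaller gains damp the iteration, which helps the stopping test e_u < ε_1 to be met smoothly.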

Eq. (13) can be written in the form

γ_i(k) = −Q_i x_i(k) − A_i^T γ_i(k+1) − Σ_{j=1, j≠i}^{M} A_i^{−1} A_{ij} g_{xj},  γ_i(N) = 0.   (21)

Then the hybrid system can be written in the discrete form

x_i(k+1) = Φ_{ix}(k+1, k) x_i(k) + Ψ_{ix}(k+1, k) γ_i(k) + Σ_{j=1}^{M} A_i^{−1} Ψ_{ijx}(k+1, k) g_{xj}(k),  k ∈ [1, 2, ..., N]   (22)

where

Φ_{ix}(k+1, k) = e^{A_i τ},
Ψ_{ix}(k+1, k) = −A_i^{−1} [Φ_{ix}(k+1, k) − I] B_i R_i^{−1} B_i^T,
Ψ_{ijx}(k+1, k) = [Φ_{ix}(k+1, k) − I] A_{ij},
Ψ_{iix}(k+1, k) = 0,  ∀ i, j ∈ [1, 2, ..., M].   (23)

Then Eq. (21) can be written as

γ_i(k+1) = Φ_{iγ}(k+1, k) γ_i(k) + Ψ_{iγ}(k+1, k) x_i(k) + Σ_{j=1}^{M} (A_i^T)^{−1} Ψ_{ijγ}(k+1, k) g_{γj}(k),  k ∈ [1, 2, ..., N]   (24)

where

Φ_{iγ}(k+1, k) = e^{−A_i^T τ},
Ψ_{iγ}(k+1, k) = −(A_i^T)^{−1} [Φ_{iγ}(k+1, k) − I] Q_i,
Ψ_{ijγ}(k+1, k) = [Φ_{iγ}(k+1, k) − I] A_{ij}^T,
Ψ_{iiγ}(k+1, k) = 0,  ∀ i, j ∈ [1, 2, ..., M].   (25)

From Eqs. (7) and (24), we get

γ_i(k+1) = Φ_{iγ}(k+1, k) γ_i(k) + P_i(k+1, k, 0) x_i(0) + Q_i^{−1} Σ_{j=1}^{M} Ψ_{ijγ}(k+1, k) g_{γj}(k)
         + Q_i^{−1} Ω_i(k+1, k, τ) γ_i(τ) + Q_i^{−1} Σ_{j=1}^{M} Ω_{ij}(k+1, k, τ) g_{xj}(τ)   (26)

where

P_i(k+1, k, 0) = Ψ_{iγ}(k+1, k) Φ_{ix}(k, 0),
Ω_i(k+1, k, τ) = Ψ_{iγ}(k+1, k) Ψ_{ix}(k, τ),
Ω_{ij}(k+1, k, τ) = Ψ_{iγ}(k+1, k) Ψ_{ijx}(k, τ),  ∀ i, j ∈ [1, 2, ..., M].   (27)

Note that at the optimal solution we get

x_i^*(k+1) = Φ_{ix}(k+1, k) x_i^*(k) + A_i^{−1} Ψ_{ix}(k+1, k) γ_i^*(k) + Σ_{j=1}^{M} A_i^{−1} Ψ_{ijx}(k+1, k) x_j^*(k).   (28)

From the terminal condition in Eq. (21), according to (24), (8) and (9), and using the error definitions of the previous section, we obtain

e_{ix}(k+1) = A_i^{−1} Ψ_{ix}(k+1, k) e_{gγi}(k) + Σ_{j=1}^{M} A_i^{−1} Ψ_{ijx}(k+1, k) e_{gxj}(k).   (29)

Following the same procedure for the costate vector γ we get

e_{iγ}(k+1) = Σ_{j=1}^{M} (A_{ij}^T)^{−1} Ψ_{ijγ}(k+1, k) e_{gγj}(k) + Q_i^{−1} Ω_i(k+1, k, τ) e_{gγi}(τ) + Q_i^{−1} Σ_{j=1}^{M} Ω_{ij}(k+1, k, τ) e_{gxj}(τ).   (30)

From Eqs. (29) and (30) we obtain

max_{k∈[0,N]} ||e_{ix}(k+1)|| ≤ Nτ ( M_{ix} max_{k∈[0,N]} ||e_{gγi}(k+1)|| + Σ_{j=1}^{M} M_{ijx} max_{k∈[0,N]} ||e_{gxj}(k+1)|| )   (31)

where

M_{ix} = max_{k∈[0,N]} ||Φ_{ix}(k+1, k)||,  M_{ijx} = max_{k∈[0,N]} ||Ψ_{ijx}(k+1, k)||.   (32)

According to Eqs. (30) and (32) we get

max_{k∈[0,N]} ||e_{iγ}(k+1)|| ≤ Nτ Σ_{j=1}^{M} M_{ijγ} max_{k∈[0,N]} ||e_{gγj}(k+1)|| + N²τ² M_{iγx} max_{k∈[0,N]} ||e_{gγi}(k+1)||
                             + Σ_{j=1}^{M} M_{ijγx} max_{k∈[0,N]} ||e_{gxj}(k+1)||   (33)

where

M_{ijγ} = max_{k∈[0,N]} ||Ψ_{ijγ}(k+1, k)||,  M_{iγx} = max_{k∈[0,N]} ||Ω_i(k+1, k)||,  M_{ijγx} = max_{k∈[0,N]} ||Ω_{ij}(k+1, k)||,  ∀ i, j ∈ [1, 2, ..., M].   (34)

Combining inequalities (31) and (33) into matrix form, with T′ = Nτ, we get
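The transition matrices of Eq. (23) can be formed directly from the subsystem data. A Python sketch (the numerical values of A_i, B_i, R_i are invented; the paper gives no code):

```python
import numpy as np
from scipy.linalg import expm

def transition_matrices(Ai, Bi, Ri, tau):
    """Eq. (23): Phi_ix = e^{A_i tau},
    Psi_ix = -A_i^{-1} (Phi_ix - I) B_i R_i^{-1} B_i^T."""
    n = Ai.shape[0]
    Phi = expm(Ai * tau)
    # -A_i^{-1}(Phi - I) times B_i R_i^{-1} B_i^T
    Psi = -np.linalg.solve(Ai, Phi - np.eye(n)) @ Bi @ np.linalg.solve(Ri, Bi.T)
    return Phi, Psi

Ai = np.array([[-1.0, 0.5], [0.0, -2.0]])   # invented subsystem matrices
Bi = np.array([[0.1], [0.2]])
Ri = np.array([[1.0]])
Phi, Psi = transition_matrices(Ai, Bi, Ri, tau=0.05)
```

As τ → 0, Φ_{ix} → I and Ψ_{ix} → 0, so the norm bounds of Eq. (32) and hence β_i, λ_{ij} in Eq. (37) shrink with the sampling interval; this is how the contraction condition δ_i < 1 of Section 3 is obtained in practice.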

[max_{k∈[0,N]} ||e_{ix}(k+1)||; max_{k∈[0,N]} ||e_{iγ}(k+1)||]
  ≤ [0, T′ M_{ix}; 0, T′² M_{iγx}] [max_{k∈[0,N]} ||e_{gxi}(k+1)||; max_{k∈[0,N]} ||e_{gγi}(k+1)||]
  + Σ_{j=1}^{M} [0, T′ M_{ijx}; T′² M_{ijγx}, T′ M_{ijγ}] [max_{k∈[0,N]} ||e_{gxj}(k+1)||; max_{k∈[0,N]} ||e_{gγj}(k+1)||].   (35)

Taking the global maximum of the previous inequality, we obtain

max_{k∈[0,N]} ||e_i(k+1)|| ≤ β_i max_{k∈[0,N]} ||e_{gj}(k+1)|| + Σ_{j=1}^{M} λ_{ij} max_{k∈[0,N]} ||e_{gj}(k+1)||   (36)

where

β_i = max{T′ M_{ix}, T′² M_{iγx}},
λ_{ij} = max{T′ M_{ijx}, T′² M_{ijγx}, T′ M_{ijγ}}.   (37)

Taking the maximum over all the subsystems we get the following inequality:

max_{k∈[0,N]} ||e_i(k+1)|| ≤ δ_i max_{k∈[0,N]} ||e_{gj}(k+1)||.   (38)

As in the previous algorithm, if we choose k ∈ [0, N] such that δ_i < 1, ∀ i ∈ [1, M], then the last inequality defines a contraction, and this contraction property guarantees the convergence of the asynchronous point iterations.

4. Simulation study

To demonstrate the effectiveness of the proposed algorithm, we present simulation results for a practical system (a refining tower system) that was solved using both the synchronous and asynchronous algorithms. The technique has been employed successfully to solve the non-linear problem with a non-separable performance index.

We consider a refining tower system of a coking plant. The chemical reactions that take place in the tower are affected by the compositions. From the material balance of each plate in the tower, a state variable model has been developed. The system is decomposed into two subsystems with dimensions 3 and 3, respectively. The control problem for the system is represented by the following.

Cost function:

min_{u_1(k), u_2(k)} ½ { x_1²(N) + x_6²(N) + Σ_{k=0}^{N−1} [ Σ_{i=1}^{6} x_i²(k) + Σ_{i=1}^{2} u_i²(k) ] }

subject to:

ẋ_1(t) = −x_1(t) + 0.5x_2(t) + x_1(t)x_2(t) + 0.5x_3(t) + 0.2x_5(t) + 0.1x_6(t) + 0.1u_1(t);
ẋ_2(t) = −2x_2(t) + x_1(t)² − 0.2x_3(t) + 0.5x_4(t) − 0.1x_5(t) + 0.2x_6(t);
ẋ_3(t) = 0.2x_1(t) − 5x_3(t) + 0.5x_3(t)² + 0.5x_4(t) − x_5(t) + 0.5x_6(t) + 0.2u_1(t);
ẋ_4(t) = 0.1x_1(t) − 0.2x_2(t) − 2x_4(t) + 0.2x_5(t) + 0.2u_2(t);
ẋ_5(t) = 0.4x_1(t) + 0.2x_3(t) − 0.5x_4(t) + x_6(t) + x_5(t)x_6(t);
ẋ_6(t) = 0.1x_1(t) − 0.2x_2(t) − 0.5x_3(t) − x_4(t) − 0.5x_6(t);

with terminal constraints x_2²(N) + x_3²(N) = 0.1, x_4(N) + x_5(N) = 0, and initial state

x(0) = [1.0  0.8  0.5  0.6  1  1.2]^T.

We decomposed the system into two subsystems:

Subsystem 1: states x_1(t), x_2(t), x_3(t) and control u_1(t);
Subsystem 2: states x_4(t), x_5(t), x_6(t) and control u_2(t).

The dynamic equation for this system can be written as (rows separated by semicolons):

[ẋ_1(t); ẋ_2(t); ẋ_3(t); ẋ_4(t); ẋ_5(t); ẋ_6(t)] =
  [−1, 0.5, 0.5, 0, 0, 0;
    0, −2, −0.2, 0, 0, 0;
    0.2, 0, −5, 0, 0, 0;
    0, 0, 0, −2, 0.2, 0;
    0, 0, 0, −0.5, 0, 1;
    0, 0, 0, −1, 0, −0.5] [x_1(t); x_2(t); x_3(t); x_4(t); x_5(t); x_6(t)]
  + [0.1, 0; 0, 0; 0.2, 0; 0, 0.2; 0, 0; 0, 0] [u_1(t); u_2(t)]
  + [ω_1(t); ω_2(t); ω_3(t); ω_4(t); ω_5(t); ω_6(t)].

The dynamic equation shows that the states x_1(t), x_3(t), x_4(t) are more important than the others, so the Q matrix assigns them the largest weights: Q = diag(2, 1, 2, 2, 1, 1), R = diag(1, 1), ε = 10^{−4}. The system was discretized with τ = 0.05 and N = 80; its discrete form can be obtained with MATLAB. The terminal constraint of subsystem 1 was simplified by replacing x_2²(N) + x_3²(N) = 0.1 with x_2(N) + x_3(N) = 0, and the simulation was carried out.

Fig. 2 illustrates the state response curves for x_1 and x_4, showing the effectiveness of the algorithm for system optimal control; the internal system structure also influences the convergence of the algorithm. Different Q matrices give different state responses: a larger weight parameter has a more significant effect on the minimization than a smaller one, and a larger Q value strengthens the effect of the control u(t) on the states.

Fig. 3 illustrates the convergence behaviour of the algorithm with costate vectors γ_1(1) = 0.5, γ_2(1) = 1; the real global cost function changes from iteration to iteration. The modified algorithm converged in 41 iterations for ε = 10^{−4}. Satisfactory convergence behaviour is observed.
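The linearized model and weights of this section can be assembled and discretized in a few lines. A sketch in Python (the paper used MATLAB; this reproduces only the model setup and the zero-order-hold step of Eq. (3), not the full hierarchical iteration):

```python
import numpy as np
from scipy.linalg import expm

# Linearized (block-diagonal) system matrix from the decomposed model above
A = np.array([
    [-1.0,  0.5,  0.5,  0.0, 0.0,  0.0],
    [ 0.0, -2.0, -0.2,  0.0, 0.0,  0.0],
    [ 0.2,  0.0, -5.0,  0.0, 0.0,  0.0],
    [ 0.0,  0.0,  0.0, -2.0, 0.2,  0.0],
    [ 0.0,  0.0,  0.0, -0.5, 0.0,  1.0],
    [ 0.0,  0.0,  0.0, -1.0, 0.0, -0.5],
])
B = np.array([
    [0.1, 0.0],
    [0.0, 0.0],
    [0.2, 0.0],
    [0.0, 0.2],
    [0.0, 0.0],
    [0.0, 0.0],
])
Q = np.diag([2.0, 1.0, 2.0, 2.0, 1.0, 1.0])
R = np.diag([1.0, 1.0])
x0 = np.array([1.0, 0.8, 0.5, 0.6, 1.0, 1.2])
tau, N = 0.05, 80

# Zero-order-hold discretization, Eq. (3)
Ad = expm(A * tau)
Bd = np.linalg.solve(A, (Ad - np.eye(6)) @ B)
```

The block-diagonal structure of A (two 3×3 blocks) is what permits the decomposition into the two subsystems listed above; the dropped couplings, together with the non-linear terms, are carried by the interaction vector ω(t).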

Fig. 2. State response curves for x_1, x_4.

Fig. 3. Convergence of system performance using costate prediction (costate vectors γ_1(1) = 0.5, γ_2(1) = 1).

5. Conclusion

In this paper, interconnected costate prediction methods for solving non-linear optimal control problems were proposed. The non-linear and non-separable terms were handled by prediction, and the costate vectors were also predicted. The interconnected two-point boundary value problem was transformed and solved by decomposing the vector difference equation, so the difficulty of solving the coupled equations was avoided. The optimal control of the subsystems can be computed by multilevel parallel computation. The convergence analysis presented in this paper shows that the algorithm is valid: by iteration, the optimal control of the model converges to the real solution of the hybrid system.

Acknowledgment

This work has been supported by the National Natural Science Foundation of China (60504033).

References

[1] Banks SP, Dinesh K. Approximate optimal control of non-linear systems. In: Fourth international conference on optimization: techniques and applications. 1998.
[2] Banks SP, Mcairey D. Approximate optimal controllers for non-linear parabolic systems. In: 15th IMACS world congress on scientific computation, modelling and applied mathematics. 1997.
[3] Abdelwahed SS, Sultan MA, Hassan MF. Parallel asynchronous algorithms for optimal control of large scale dynamic systems. Optimal Control Applications and Methods 1998;18(4):257–71.
[4] Puri A, Varaiya P. Decidable hybrid systems. Mathematical and Computer Modelling 1996;23:191–202.
[5] Roberts PD, Becerra VM. Optimal control of a class of discrete-continuous non-linear systems: Decomposition and hierarchical structure. Automatica 2001;37:1757–69.
[6] Mahmoud MS, Hassan MF, Darwish MG. Large scale control systems: Theories and techniques. New York: Dekker; 1985.
[7] Jamshidi M. Large-scale systems: Modeling and control. New York: North-Holland; 1983.
[8] Singh MG, Titli A. Systems: Decomposition, optimization, and control. Oxford: Pergamon Press; 1978.
[9] Mitra D. Asynchronous relaxation for the numerical solution of differential equations by parallel processors. SIAM Journal on Scientific and Statistical Computing 1987;8:43–58.
[10] El-Tarazi MM. Some convergence results for asynchronous algorithms. Numerische Mathematik 1982;39:325–40.
[11] Lewis FL, Syrmos VL. Optimal control. New York: Wiley; 1995.
[12] Schmidt WH. Iterative methods for optimal control processes governed by integral equations. International Series of Numerical Mathematics 1993;111:69–82.
[13] Bertsekas DP, Tsitsiklis JN. Some aspects of parallel and distributed iterative algorithms: A survey. Automatica 1991;27(1):3–21.
[14] Abdelwahed SS. Development of multi-level asynchronous algorithms for complex systems. Master's thesis. Dept. of Elec. Eng., Cairo Univ., Egypt; 1993.
[15] Lang B, Miellou JC, Spiteri P. Asynchronous relaxation algorithms for optimal control problems. Mathematics and Computers in Simulation 1986;28:227–42.

Minghui Hu received her M.E. degree in Automation from the Nanchang Institute of Aeronautical Technology in 2004. Since 2005, she has been a Ph.D. student in the School of Electronic, Information and Electrical Engineering, Shanghai Jiaotong University. Her research interests include manufacturing execution systems, complex systems optimal control and non-linear systems control.

Yongshan Wang is a Ph.D. student in the Department of Mechanical Engineering, Tongji University. His research interests include computerization of creative design methods, optimization algorithms, and secondary development of UG software.

Huihe Shao received his B.S. degree in Control Theory and Control Engineering from the East China University of Science and Technology in 1960. Currently, he is a Professor in the School of Electronic, Information and Electrical Engineering, Shanghai Jiaotong University. His research interests include advanced process control, optimal control and intelligent control.