
A Projection Neural Network for Constrained Quadratic Minimax Optimization

Qingshan Liu, Member, IEEE, and Jun Wang, Fellow, IEEE
Abstract: This paper presents a projection neural network, described by a dynamic system, for solving constrained quadratic minimax programming problems. Sufficient conditions based on a linear matrix inequality are provided for the global convergence of the proposed neural network. Compared with some existing neural networks for quadratic minimax optimization, the proposed neural network can solve more general constrained quadratic minimax optimization problems and does not include any design parameter. Moreover, it has lower model complexity: the number of state variables equals the dimension of the optimization problem. Simulation results on numerical examples are discussed to demonstrate the effectiveness and characteristics of the proposed neural network.

Index Terms: Global convergence, Lyapunov stability, projection neural network, quadratic minimax optimization.

Manuscript received August 15, 2014; revised December 16, 2014; accepted April 16, 2015. The work of Q. Liu was supported in part by the National Natural Science Foundation of China under Grant 61473333, in part by the Program for New Century Excellent Talents in University of China under Grant NCET-12-0114, and in part by the Fundamental Research Funds for the Central Universities of China under Grant 2015QN035. The work of J. Wang was supported by the Research Grants Council of the Hong Kong Special Administrative Region, China, under Grant CUHK416812E. Q. Liu is with the School of Automation, Huazhong University of Science and Technology, and the Key Laboratory of Image Processing and Intelligent Control, Ministry of Education, Wuhan 430074, China (e-mail: qsliu@hust.edu.cn). J. Wang is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong (e-mail: jwang@mae.cuhk.edu.hk). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNNLS.2015.2425301

I. INTRODUCTION

Consider the following quadratic minimax programming problem with constraints:
\[
\min_{x}\ \max_{y}\ f(x, y) \quad \text{s.t. } Ax + Cy = d,\ x \in \mathcal{X},\ y \in \mathcal{Y} \tag{1}
\]
where
\[
f(x, y) = \frac{1}{2} x^T Q x + q^T x - x^T H y - \frac{1}{2} y^T S y - s^T y \tag{2}
\]
in which x = (x_1, x_2, ..., x_n)^T ∈ R^n, y = (y_1, y_2, ..., y_m)^T ∈ R^m, Q ∈ R^{n×n} and S ∈ R^{m×m} are symmetric, H ∈ R^{n×m}, q ∈ R^n, s ∈ R^m, A ∈ R^{p×n}, C ∈ R^{p×m}, (A C) is of full row rank, d ∈ R^p, and X and Y are nonempty closed convex sets in R^n and R^m, respectively.

Minimax optimization has arisen in a variety of applications in science and engineering, including interactive decision making [1], game theory [2], and optimal control [3]. In particular, the quadratic minimax programming problem (1) includes linear and quadratic programming problems, as well as linear minimax problems, as its special cases. For example, the minimax programming problem (1) can be written as the following extended linear quadratic programming (ELQP) problem:
\[
\min_{x}\ f(x) = \frac{1}{2} x^T Q x + q^T x + \varphi(-s - H^T x) \quad \text{s.t. } Ax + Cy = d,\ x \in \mathcal{X},\ y \in \mathcal{Y}
\]
where
\[
\varphi(\theta) = \max_{y \in \mathcal{Y}} \Big\{ \theta^T y - \frac{1}{2} y^T S y \Big\}.
\]
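To see how the ELQP form arises, the inner maximization over y can be written out directly from the definition of f in (2):
\[
\begin{aligned}
\max_{y \in \mathcal{Y}} f(x, y)
&= \frac{1}{2} x^T Q x + q^T x + \max_{y \in \mathcal{Y}} \Big\{ -x^T H y - s^T y - \frac{1}{2} y^T S y \Big\}\\
&= \frac{1}{2} x^T Q x + q^T x + \max_{y \in \mathcal{Y}} \Big\{ (-s - H^T x)^T y - \frac{1}{2} y^T S y \Big\}
 = \frac{1}{2} x^T Q x + q^T x + \varphi(-s - H^T x).
\end{aligned}
\]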

The ELQP problem has been widely investigated in [4] and [5] and applied to optimal control.
In general, as the number of decision variables increases, traditional numerical algorithms designed for digital computers become inefficient for real-time online solutions. Since the 1980s, artificial neural networks with parallel structures based on circuit implementation have been employed to obtain online solutions in science and engineering applications [6]–[10]. Tank and Hopfield [6] spearheaded the neurodynamic optimization approach for linear programming, which motivated the development of neural networks for solving optimization problems. Soon after that, Kennedy and Chua [7] proposed a recurrent neural network for constrained nonlinear optimization by utilizing the finite penalty parameter method. From then on, many optimization approaches have been investigated for the design of neural networks for constrained optimization, such as the primal–dual neural network [11], primal and dual neural networks [12], projection neural networks [13], [14], one-layer neural networks [15], [16], and generalized neural networks [17]–[19].
In particular, the projection approach has recently played a significant role in the design of neural networks for constrained optimization, such as the models in [20]–[24]. Specifically, Xia and Wang [25] developed a projection neural network for solving monotone variational inequalities and related optimization problems. Gao et al. [26] proposed a projection-based recurrent neural network for a class of convex quadratic minimax problems with constraints. Hu and Wang [27] developed a projection neural network for pseudomonotone variational inequalities and pseudoconvex optimization problems. Cheng et al. [28] investigated recurrent neural networks with applications to the identification of genetic regulatory networks. Moreover, the general [14], [20], [29] and extended general [22], [30] projection neural networks

have been developed to solve more complex constrained optimization problems.

In this paper, we are concerned with the design of a projection neural network for solving the constrained quadratic minimax programming problem (1). Compared with the neural networks proposed in [31] and [32] for constrained quadratic programming, the neural network proposed in this paper has lower model complexity and no design parameter. Furthermore, compared with existing neural networks for quadratic minimax optimization, the proposed neural network has two main advantages. First, it can solve more general quadratic minimax optimization problems. The neural networks in [26] and [33] are only capable of solving quadratic minimax optimization problems with bound constraints. The neural network proposed in [34] is efficient for quadratic minimax optimization with linear constraints; however, the objective functions in [34] are restricted to be convex. Here, we investigate quadratic minimax optimization problems whose objective functions are not restricted to be convex everywhere. Second, the number of neurons of the proposed neural network is the same as the number of primal (decision) variables of the optimization problem, whereas the neural network in [34] includes both primal and dual variables. Consequently, the neural network here has lower model complexity for quadratic minimax optimization.
The remainder of this paper is organized as follows.
In Section II, the proposed projection neural network model is
described based on the optimality analysis. The convergence
of the proposed neural network is analyzed in Section III.
In Section IV, the proposed neural network is utilized for
solving quadratic minimax optimization with linear inequality
constraints and the constrained quadratic programming
problems. Next, in Section V, three illustrative examples
are provided to show the effectiveness and performance of
the proposed neural network. Finally, Section VI concludes
this paper.
II. OPTIMALITY ANALYSIS AND MODEL DESCRIPTION

In this section, the optimality conditions of problem (1) are first analyzed. Then a projection neural network described by a continuous-time dynamic system is presented for solving problem (1). Throughout this paper, we assume that the objective function f(x, y) in problem (1) is convex with respect to x and concave with respect to y on the feasible set, and that there exists at least one finite solution to problem (1).

A. Optimality Analysis

The optimality conditions of problem (1) are described in the following theorem.
Theorem 1: ((x*)^T, (y*)^T)^T ∈ R^{n+m} is a solution to problem (1) if and only if there exists λ* ∈ R^p such that the following conditions hold:
\[
\begin{cases}
x^* = g_x\big(x^* - \nabla_x f(x^*, y^*) - A^T \lambda^*\big) \\
y^* = g_y\big(y^* + \nabla_y f(x^*, y^*) - C^T \lambda^*\big) \\
A x^* + C y^* = d
\end{cases} \tag{3}
\]
where ∇_x f and ∇_y f are the gradients of f with respect to x and y, respectively, g_x is a projection operator from R^n to X defined by
\[
g_x(z) = \arg\min_{u \in \mathcal{X}} \|z - u\|
\]
and g_y is defined by replacing X with Y in the above equation, in which ||·|| denotes the Euclidean norm.
Proof: ((x*)^T, (y*)^T)^T is a solution to problem (1) if and only if, for any x ∈ D_x = {x ∈ R^n : Ax + Cy* = d, x ∈ X} and y ∈ D_y = {y ∈ R^m : Ax* + Cy = d, y ∈ Y}, we have
\[
f(x^*, y) \le f(x^*, y^*) \le f(x, y^*). \tag{4}
\]
From the right inequality of (4), x* is a minimum point of f(x, y*) on D_x. Define the Lagrangian function as follows:
\[
L(x, y^*, \lambda) = f(x, y^*) + \lambda^T (Ax + Cy^* - d)
\]
where λ ∈ R^p is the Lagrangian multiplier. According to the well-known saddle point theorem (see [35, Th. 6.2.5]), x* is a minimum point of f(x, y*) on D_x if and only if there exists λ* ∈ R^p such that
\[
L(x^*, y^*, \lambda) \le L(x^*, y^*, \lambda^*) \le L(x, y^*, \lambda^*). \tag{5}
\]
From the right inequality of (5), x* is a minimum point of L(x, y*, λ*) on X, which is equivalent to requiring that, for any x ∈ X, (x - x*)^T ∇_x L(x*, y*, λ*) ≥ 0, i.e., (x - x*)^T (∇_x f(x*, y*) + A^T λ*) ≥ 0. From the left inequality of (5), λ* is a maximum point of L(x*, y*, λ) on R^p, which is equivalent to ∇_λ L(x*, y*, λ*) = 0, i.e., Ax* + Cy* - d = 0. Then, x* is a minimum point of f(x, y*) on D_x if and only if there exists λ* ∈ R^p such that
\[
\begin{cases}
(x - x^*)^T \big(\nabla_x f(x^*, y^*) + A^T \lambda^*\big) \ge 0 \quad \forall x \in \mathcal{X} \\
A x^* + C y^* = d.
\end{cases} \tag{6}
\]
According to the projection method [36], x* is a solution to the variational inequality (x - x*)^T (∇_x f(x*, y*) + A^T λ*) ≥ 0 if and only if it satisfies the following projection equation:
\[
x^* = g_x\big(x^* - \nabla_x f(x^*, y^*) - A^T \lambda^*\big). \tag{7}
\]
Similarly, from the left inequality in (4), y* is a maximum point of f(x*, y) on D_y if and only if, for the above λ* ∈ R^p, we have
\[
y^* = g_y\big(y^* + \nabla_y f(x^*, y^*) - C^T \lambda^*\big). \tag{8}
\]
This completes the proof.
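As a quick numerical illustration of how the conditions in (3) can be checked, the sketch below evaluates the residuals of the three conditions for a candidate triple (x, y, λ), using the gradients of f obtained from (2). The function and variable names, and the fact that the projections are passed in as callables, are illustrative assumptions rather than part of the paper's method.

```python
import numpy as np

def kkt_residuals(x, y, lam, Q, q, S, s, H, A, C, d, proj_X, proj_Y):
    """Residuals of the projection conditions (3) for a candidate (x, y, lam)."""
    grad_x = Q @ x + q - H @ y          # gradient of f with respect to x
    grad_y = -H.T @ x - S @ y - s       # gradient of f with respect to y
    r1 = x - proj_X(x - grad_x - A.T @ lam)
    r2 = y - proj_Y(y + grad_y - C.T @ lam)
    r3 = A @ x + C @ y - d
    return np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
```

A candidate is (numerically) a solution of (1) when all three residuals are close to zero.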


From (2), we have ∇_x f(x*, y*) = Qx* - Hy* + q and ∇_y f(x*, y*) = -Sy* - H^T x* - s. Let B = (A C), z = (x^T, y^T)^T, c = (q^T, s^T)^T, g(·) = (g_x(·), g_y(·)), and
\[
W = \begin{pmatrix} Q & -H \\ H^T & S \end{pmatrix}. \tag{9}
\]
The equations in (3) can be equivalently written as
\[
\begin{cases}
z^* = g\big(z^* - W z^* - c - B^T \lambda^*\big) \\
B z^* = d.
\end{cases} \tag{10}
\]

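The stacked quantities B, W, and c can be assembled mechanically from the problem data; the following minimal NumPy sketch does this bookkeeping (shapes and function names are assumptions for illustration, not part of the paper).

```python
import numpy as np

def stack_problem(Q, S, H, q, s, A, C):
    """Stack the minimax data of (1) into the z = (x, y) form used in (9)-(10)."""
    W = np.block([[Q, -H],
                  [H.T, S]])              # matrix W in (9)
    c = np.concatenate([q, s])            # c = (q^T, s^T)^T
    B = np.hstack([A, C])                 # B = (A  C), assumed full row rank
    return W, c, B
```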
B. Model Description

Based on the equations in (10), the dynamic equations of the proposed neural network model for solving problem (1) are described as follows.

State equation:
\[
\epsilon \frac{du}{dt} = -P g(u) - (I - P)\big(u - g(u) + W\big((I - P)g(u) + w\big) + c\big) + w \tag{11}
\]
Output equation:
\[
z = g(u) \tag{12}
\]
where u ∈ R^{n+m} is the state vector, ε is a positive scaling constant, P = B^T (B B^T)^{-1} B, w = B^T (B B^T)^{-1} d, and I is the (n + m)-dimensional identity matrix.
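A straightforward way to simulate (11) and (12) is forward-Euler integration of the state equation. The sketch below does this for generic problem data, reusing the stacked quantities from the previous snippet; the step size, iteration count, and helper names are illustrative assumptions, not prescriptions from the paper.

```python
import numpy as np

def simulate_network(W, c, B, d, g, u0, eps=1e-5, dt=1e-7, steps=200000):
    """Forward-Euler integration of the state equation (11); output z = g(u) as in (12)."""
    P = B.T @ np.linalg.solve(B @ B.T, B)      # P = B^T (B B^T)^{-1} B
    w = B.T @ np.linalg.solve(B @ B.T, d)      # w = B^T (B B^T)^{-1} d
    I = np.eye(len(u0))
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        gu = g(u)
        du = (-P @ gu
              - (I - P) @ (u - gu + W @ ((I - P) @ gu + w) + c)
              + w) / eps
        u = u + dt * du
    return g(u)
```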
From the definition of P, it is easy to verify that P^2 = P, (I - P)^2 = I - P, P(I - P) = 0, and ||P|| = 1. The matrix P is called a projection matrix.

In general, the calculation of the projection of a point onto a convex set X is nontrivial. However, if the set X is a box or a sphere, the calculation is straightforward. For example, if X = {u ∈ R^n : l_i ≤ u_i ≤ h_i, i = 1, 2, ..., n}, then g(u) = (g(u_1), g(u_2), ..., g(u_n))^T, where
\[
g(u_i) = \begin{cases} h_i, & u_i > h_i \\ u_i, & l_i \le u_i \le h_i \\ l_i, & u_i < l_i. \end{cases}
\]
If X = {u ∈ R^n : ||u - v|| ≤ r, v ∈ R^n, r > 0}, then
\[
g(u) = \begin{cases} u, & \|u - v\| \le r \\ v + \dfrac{r(u - v)}{\|u - v\|}, & \|u - v\| > r. \end{cases}
\]
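The two closed-form projections above translate directly into code. Here is a minimal NumPy version (function names are illustrative) that can be passed as the projection g used by the simulation sketch earlier.

```python
import numpy as np

def proj_box(u, lo, hi):
    """Componentwise projection onto the box {l_i <= u_i <= h_i}."""
    return np.clip(u, lo, hi)

def proj_ball(u, v, r):
    """Projection onto the Euclidean ball {||u - v|| <= r}."""
    dist = np.linalg.norm(u - v)
    return u if dist <= r else v + r * (u - v) / dist
```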


Lemma 1 [36]: The following inequality holds:
\[
(u - g(u))^T (g(u) - v) \ge 0 \quad \forall u \in \mathbb{R}^n,\ v \in \mathcal{X}.
\]
Furthermore, it is easy to obtain the following result from Lemma 1.

Lemma 2: For the projection operator g(·), the following inequality holds:
\[
(u - v)^T (g(u) - g(v)) \ge \|g(u) - g(v)\|^2 \quad \forall u, v \in \mathbb{R}^n.
\]
Lemma 3 [15]: Assume B is of full row rank. For any z ∈ R^{n+m}, Bz = d if and only if Pz = w, where P and w are defined in (11).

Definition 1: ū ∈ R^{n+m} is said to be an equilibrium point of system (11) if
\[
P \bar{z} + (I - P)\big(\bar{u} - \bar{z} + W\big((I - P)\bar{z} + w\big) + c\big) - w = 0 \tag{13}
\]
where z̄ = g(ū).
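Lemma 3 is easy to sanity-check numerically: for a random full-row-rank B and right-hand side d, any z satisfying Bz = d also satisfies Pz = w, and conversely. The snippet below (purely illustrative data) performs that check.

```python
import numpy as np

rng = np.random.default_rng(0)
p, nm = 3, 8
B = rng.standard_normal((p, nm))            # assumed full row rank
d = rng.standard_normal(p)
P = B.T @ np.linalg.solve(B @ B.T, B)
w = B.T @ np.linalg.solve(B @ B.T, d)

# a generic solution of Bz = d: minimum-norm solution plus a null-space component
z = np.linalg.lstsq(B, d, rcond=None)[0] + (np.eye(nm) - P) @ rng.standard_normal(nm)
print(np.allclose(B @ z, d), np.allclose(P @ z, w))   # both True
```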
Theorem 2: z* = ((x*)^T, (y*)^T)^T ∈ R^{n+m} is an optimal solution to problem (1) if and only if there exists an equilibrium point u* ∈ R^{n+m} of system (11) such that z* = g(u*).

Proof: From the first equation in (10), it is well known that z* ∈ R^{n+m} is a solution to problem (1) if and only if there exists λ* ∈ R^p such that z* = g(z* - Wz* - c - B^T λ*). Let u* = z* - Wz* - c - B^T λ*; then z* = g(u*), and we have
\[
u^* = g(u^*) - W z^* - c - B^T \lambda^*. \tag{14}
\]
Multiplying both sides of (14) by B results in
\[
\lambda^* = (B B^T)^{-1} B\big(g(u^*) - u^* - W z^* - c\big). \tag{15}
\]
Substituting (15) into (14), we have
\[
(I - P)\big(u^* - g(u^*) + W z^* + c\big) = 0.
\]
According to the second equation in (10) and Lemma 3, we have Pz* = w, and hence (I - P)z* + w = z*. Then, according to Definition 1, u* is an equilibrium point of system (11).

Conversely, assume ū to be an equilibrium point of system (11). According to Definition 1, the equation in (13) is satisfied. Multiplying both sides of (13) by P, since P^2 = P and Pw = w, it follows that P z̄ = w and
\[
(I - P)\big(\bar{u} - \bar{z} + W\big((I - P)\bar{z} + w\big) + c\big) = 0 \tag{16}
\]
where z̄ = g(ū). Thus, according to Lemma 3, z̄ is a feasible point of problem (1). Noting that (I - P)z̄ + w = z̄ (since P z̄ = w), for any feasible point z = (x^T, y^T)^T, from (16) we have
\[
(z - \bar{z})^T (I - P)\big(\bar{u} - \bar{z} + W \bar{z} + c\big) = 0.
\]
Since Pz = P z̄ = w, we have
\[
(z - \bar{z})^T \big(\bar{u} - \bar{z} + W \bar{z} + c\big) = 0. \tag{17}
\]
From Lemma 1, for any feasible point z, (z - z̄)^T (z̄ - ū) = (z - g(ū))^T (g(ū) - ū) ≥ 0. Then, (17) implies (z - z̄)^T (W z̄ + c) = (z - z̄)^T (z̄ - ū) ≥ 0, from which it follows that
\[
(x - \bar{x})^T (Q \bar{x} - H \bar{y} + q) \ge 0
\quad \text{and} \quad
(y - \bar{y})^T (H^T \bar{x} + S \bar{y} + s) \ge 0.
\]
According to the convexity of f(x, y) on the feasible set with respect to x, it results that f(x, ȳ) - f(x̄, ȳ) ≥ (x - x̄)^T ∇_x f(x̄, ȳ) = (x - x̄)^T (Q x̄ - H ȳ + q) ≥ 0. Similarly, according to the concavity of f(x, y) on the feasible set with respect to y, it follows that f(x̄, y) - f(x̄, ȳ) ≤ (y - ȳ)^T ∇_y f(x̄, ȳ) = (y - ȳ)^T (-H^T x̄ - S ȳ - s) ≤ 0. Then (x̄^T, ȳ^T)^T is an optimal solution to problem (1). This completes the proof.
III. THEORETICAL ANALYSIS

In this section, the validity of the proposed neural network for solving quadratic minimax optimization problems is analyzed. The stability and global convergence of the neural network are proved, which guarantee that it obtains solutions of the corresponding problems.

A. Convergence Analysis

In this subsection, the stability and convergence of the dynamic system in (11) and (12) are analyzed using the Lyapunov method [17], [37].

To prove the global convergence of the proposed neural network, we first define
\[
V_0(u) = \|u - g(\bar{u})\|^2 - \|u - g(u)\|^2 \tag{18}
\]
where ū is an equilibrium point of system (11).

Lemma 4 [15]: For any u ∈ R^{n+m}, we have the following.
1) V_0(u) ≥ ||g(u) - g(ū)||^2.
2) V_0(u) is differentiable and its gradient is ∇V_0(u) = 2(g(u) - g(ū)).

Theorem 3: For any initial value u(t_0) ∈ R^{n+m}, the output vector z(t) of the neural network in (11) and (12) is stable in the sense of Lyapunov and globally convergent to a solution of problem (1) if there exists k ≥ 1 such that the following matrix is positive semidefinite:
\[
k\big[P + (I - P)(W + W^T)(I - P)\big] - (I - P) W^T (I - P) W (I - P). \tag{19}
\]
Proof: According to Theorem 2, z̄ = g(ū) is a solution to problem (1), where ū is an equilibrium point of system (11) and it satisfies
\[
P \bar{z} + (I - P)\big(\bar{u} - \bar{z} + W\big((I - P)\bar{z} + w\big) + c\big) - w = 0. \tag{20}
\]
Subtracting (20) from both sides of (11), it follows that
\[
\epsilon \frac{du}{dt} = -(I - P)(u - \bar{u}) + (I - 2P)(z - \bar{z}) - (I - P) W (I - P)(z - \bar{z}) \tag{21}
\]
where z = g(u).

Consider the following Lyapunov function:
\[
V(u) = \epsilon\,[V_1(u) + V_2(u)]
\]
where V_1(u) = (1/α)(V_0(u) + (u - ū)^T P (u - ū)), with V_0(u) defined in (18) and α > 0, and V_2(u) = ||u - ū||^2.

Next, we calculate the derivatives of V_i(u(t)) (i = 1, 2) along the solution of system (21). We have
\[
\begin{aligned}
\epsilon \dot{V}_1(u(t)) &= \nabla V_1(u)^T\, \epsilon \dot{u}(t)\\
&= \frac{2}{\alpha}\,[z - \bar{z} + P(u - \bar{u})]^T \big[-(I - P)(u - \bar{u}) + (I - 2P)(z - \bar{z}) - (I - P) W (I - P)(z - \bar{z})\big]\\
&= \frac{2}{\alpha}\,\big[-(z - \bar{z})^T (I - P)(u - \bar{u}) + (z - \bar{z})^T (I - 2P)(z - \bar{z})\\
&\qquad - (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z}) - (u - \bar{u})^T P (z - \bar{z})\big]\\
&= \frac{2}{\alpha}\,\big[-(z - \bar{z})^T (u - \bar{u}) + (z - \bar{z})^T (I - 2P)(z - \bar{z}) - (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z})\big].
\end{aligned}
\]
From Lemma 2, we have
\[
(z - \bar{z})^T (u - \bar{u}) = (g(u) - g(\bar{u}))^T (u - \bar{u}) \ge (g(u) - g(\bar{u}))^T (g(u) - g(\bar{u})) = (z - \bar{z})^T (z - \bar{z}).
\]
Then
\[
\epsilon \dot{V}_1(u(t)) \le -\frac{2}{\alpha}\,\big[2 (z - \bar{z})^T P (z - \bar{z}) + (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z})\big]. \tag{22}
\]
Moreover, we have
\[
\begin{aligned}
\epsilon \dot{V}_2(u(t)) &= \nabla V_2(u)^T\, \epsilon \dot{u}(t)\\
&= 2 (u - \bar{u})^T \big[-(I - P)(u - \bar{u}) + (I - 2P)(z - \bar{z}) - (I - P) W (I - P)(z - \bar{z})\big]\\
&= -2 (u - \bar{u})^T (I - P)(u - \bar{u}) + 2 (u - \bar{u})^T (I - 2P)(z - \bar{z}) - 2 (u - \bar{u})^T (I - P) W (I - P)(z - \bar{z}).
\end{aligned}
\]
Next, since P is symmetric and P^2 = P, we have
\[
\begin{aligned}
\epsilon^2 \|\dot{u}\|^2 &= \big\|-(I - P)(u - \bar{u}) + (I - 2P)(z - \bar{z}) - (I - P) W (I - P)(z - \bar{z})\big\|^2\\
&= (u - \bar{u})^T (I - P)(u - \bar{u}) + (z - \bar{z})^T (z - \bar{z}) + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z})\\
&\quad - 2 (u - \bar{u})^T (I - P)(z - \bar{z}) + 2 (u - \bar{u})^T (I - P) W (I - P)(z - \bar{z}) - 2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z}).
\end{aligned}
\]
Then
\[
\begin{aligned}
\epsilon \dot{V}_2(u(t)) + \epsilon^2 \|\dot{u}\|^2
&= -(u - \bar{u})^T (I - P)(u - \bar{u}) + (z - \bar{z})^T (z - \bar{z}) - 2 (u - \bar{u})^T P (z - \bar{z})\\
&\quad - 2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z}) + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z})\\
&= -(u - \bar{u})^T (I - P)(u - \bar{u}) + (z - \bar{z})^T (z - \bar{z}) + 2 (u - \bar{u})^T (I - P)(z - \bar{z}) - 2 (u - \bar{u})^T (z - \bar{z})\\
&\quad - 2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z}) + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z}).
\end{aligned}
\]
From Lemma 2, we have (u - ū)^T (z - z̄) ≥ (z - z̄)^T (z - z̄). It results that
\[
\begin{aligned}
\epsilon \dot{V}_2(u(t)) + \epsilon^2 \|\dot{u}\|^2
&\le -(u - \bar{u})^T (I - P)(u - \bar{u}) - (z - \bar{z})^T (z - \bar{z}) + 2 (u - \bar{u})^T (I - P)(z - \bar{z})\\
&\quad - 2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z}) + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z}).
\end{aligned} \tag{23}
\]
In (23), let
\[
J_1 = -(u - \bar{u})^T (I - P)(u - \bar{u}) - (z - \bar{z})^T (z - \bar{z}) + 2 (u - \bar{u})^T (I - P)(z - \bar{z})
\]
and
\[
J_2 = -2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z}) + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z}).
\]
Then
\[
\begin{aligned}
J_1 &= -(u - \bar{u})^T (I - P)(u - \bar{u}) - (z - \bar{z})^T (I - P)(z - \bar{z}) + 2 (u - \bar{u})^T (I - P)(z - \bar{z}) - (z - \bar{z})^T P (z - \bar{z})\\
&= -(u - \bar{u} - z + \bar{z})^T (I - P)(u - \bar{u} - z + \bar{z}) - (z - \bar{z})^T P (z - \bar{z})\\
&\le -(z - \bar{z})^T P (z - \bar{z})
\end{aligned}
\]
where the last inequality holds since I - P is positive semidefinite. It further results that
\[
\begin{aligned}
\epsilon \dot{V}_2(u(t)) + \epsilon^2 \|\dot{u}\|^2 &\le J_1 + J_2\\
&\le -(z - \bar{z})^T P (z - \bar{z}) - 2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z})\\
&\quad + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z}).
\end{aligned} \tag{24}
\]
From the inequalities in (22) and (24), we have
\[
\begin{aligned}
\dot{V}(u(t)) + \epsilon^2 \|\dot{u}\|^2 &= \epsilon \dot{V}_1(u(t)) + \epsilon \dot{V}_2(u(t)) + \epsilon^2 \|\dot{u}\|^2\\
&\le -\frac{2}{\alpha}\big[2 (z - \bar{z})^T P (z - \bar{z}) + (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z})\big]\\
&\quad - (z - \bar{z})^T P (z - \bar{z}) - 2 (z - \bar{z})^T (I - P) W (I - P)(z - \bar{z})\\
&\quad + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z})\\
&= -\Big(1 + \frac{4}{\alpha}\Big)(z - \bar{z})^T P (z - \bar{z}) - 2\Big(1 + \frac{1}{\alpha}\Big)(z - \bar{z})^T (I - P) W (I - P)(z - \bar{z})\\
&\quad + (z - \bar{z})^T (I - P) W^T (I - P) W (I - P)(z - \bar{z})\\
&\le -(z - \bar{z})^T \Big[\Big(1 + \frac{1}{\alpha}\Big) P + \Big(1 + \frac{1}{\alpha}\Big)(I - P)(W + W^T)(I - P) - (I - P) W^T (I - P) W (I - P)\Big](z - \bar{z})\\
&= -(z - \bar{z})^T \big[k\big(P + (I - P)(W + W^T)(I - P)\big) - (I - P) W^T (I - P) W (I - P)\big](z - \bar{z})
\end{aligned}
\]
where k = 1 + 1/α. According to the condition, if the matrix in (19) is positive semidefinite, we have
\[
\dot{V}(u(t)) \le -\epsilon^2 \|\dot{u}\|^2. \tag{25}
\]
For any initial point u(t_0) ∈ R^{n+m}, V(u(t)) is nonincreasing as t → ∞. The inequality in (25) indicates that {u(t) : 0 ≤ t < T} ⊆ L(u(t_0)) = {u ∈ R^{n+m} : V(u) ≤ V(u(t_0))}. Thus, T = +∞, and u(t) is bounded because L(u(t_0)) is bounded. From (25), V(u) is a Lyapunov function of system (11), and system (11) is Lyapunov stable.

From the boundedness of u(t), there exist an increasing sequence {t_l} and a limit point û such that lim_{l→∞} u(t_l) = û. Thus, û is an ω-limit point of u(t). According to LaSalle's invariance principle [38], u(t) will converge to M as t → ∞, where M is the largest invariant subset of the set {u ∈ L(u(t_0)) : V̇(u(t)) = 0}. Note that, from (25), if V̇(u(t)) = 0, we have u̇(t) = 0. Then u(t) will converge to the equilibrium point set as t → ∞. It follows that the ω-limit point û is an equilibrium point.

Finally, let us define another Lyapunov function
\[
\hat{V}(u) = \epsilon\Big[\frac{1}{\alpha}\big(\hat{V}_0(u) + (u - \hat{u})^T P (u - \hat{u})\big) + \|u - \hat{u}\|^2\Big]
\]
where V̂_0(u) = ||u - g(û)||^2 - ||u - g(u)||^2. Similar to the above proof, we have dV̂(u(t))/dt ≤ 0. From the continuity of the function V̂(u), for any η > 0, there exists δ > 0 such that V̂(u) < η when ||u - û|| ≤ δ. Since û is an ω-limit point of u(t), there exists K > 0 (taken from the sequence {t_l}) such that ||u(K) - û|| ≤ δ. Since V̂(u(t)) is monotonically nonincreasing on the interval [0, +∞), when t ≥ K,
\[
\epsilon \|u(t) - \hat{u}\|^2 \le \hat{V}(u(t)) \le \hat{V}(u(K)) < \eta
\]
that is, lim_{t→+∞} u(t) = û. Since V̂(u) ≥ ε||u - û||^2 is radially unbounded, for any initial point u(t_0) ∈ R^{n+m} the state vector u(t) of system (11) is globally convergent to an equilibrium point.

Since z = g(u), the stability and convergence of the output vector z(t) can be derived directly from those of the state vector u(t). This completes the proof.

Remark 1: The matrix condition in (19) is a linear matrix inequality (LMI), and the lower bound of k can be estimated by solving the following optimization problem:
\[
\begin{aligned}
\min\ & k\\
\text{s.t. } & k\big[P + (I - P)(W + W^T)(I - P)\big] - (I - P) W^T (I - P) W (I - P) \succeq 0.
\end{aligned} \tag{26}
\]
The problem in (26) can be solved using the MATLAB LMI toolbox. We will show this in the simulation examples of Section V.

In particular, if the matrix P + (I - P)(W + W^T)(I - P) is positive definite, there always exists k ≥ 1 such that the matrix in (19) is positive definite. Then the following corollary holds.

Corollary 1: For any initial value u(t_0) ∈ R^{n+m}, the output vector z(t) of the neural network in (11) and (12) is stable in the sense of Lyapunov and globally convergent to a solution of problem (1) if P + (I - P)(W + W^T)(I - P) is positive definite.

In fact, if the matrix P + (I - P)(W + W^T)(I - P) is positive definite, it corresponds to a strictly convex-concave objective function on the equality constraints in problem (1). We show the details in Section III-B.
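Remark 1 estimates the smallest admissible k by solving the LMI problem (26) with the MATLAB LMI toolbox. A rough Python equivalent is sketched below, assuming the cvxpy package and a semidefinite-programming solver are available; the function name and solver choice are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def estimate_k(P, W):
    """Estimate the lower bound of k in (26) by solving the LMI with cvxpy."""
    I = np.eye(P.shape[0])
    M1 = P + (I - P) @ (W + W.T) @ (I - P)
    M2 = (I - P) @ W.T @ (I - P) @ W @ (I - P)
    M1, M2 = (M1 + M1.T) / 2, (M2 + M2.T) / 2   # symmetrize against round-off
    k = cp.Variable()
    prob = cp.Problem(cp.Minimize(k), [k * M1 - M2 >> 0])
    prob.solve()
    return k.value
```

The returned value is only an estimate up to the solver's feasibility tolerance, so in practice a slightly larger k would be used when checking (19).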
B. For a Strictly Convex-Concave Objective Function

Assume the constraint set with only the equality constraints to be
\[
E = \{x \in \mathbb{R}^n,\ y \in \mathbb{R}^m : Ax + Cy = d\}.
\]
Lemma 5: The objective function f(x, y) in problem (1) is strictly convex with respect to x on the set E if and only if x^T Q x > 0 for any x ∈ H_x = {x ∈ R^n : Ax = 0, x ≠ 0}.

Proof: The proof is similar to that of [39, Lemma 4] and is omitted here.

Accordingly, the following lemma is also true.

Lemma 6: The objective function f(x, y) in problem (1) is strictly concave with respect to y on the set E if and only if y^T S y > 0 for any y ∈ H_y = {y ∈ R^m : Cy = 0, y ≠ 0}.
From Lemmas 5 and 6, it is easy to get the following lemma.

Lemma 7: The objective function f(x, y) in problem (1) is strictly convex with respect to x and strictly concave with respect to y on the set E if and only if x^T Q x + y^T S y > 0 for any (x^T, y^T)^T ∈ H = {x ∈ R^n, y ∈ R^m : Ax + Cy = 0, (x^T, y^T)^T ≠ 0}.

Lemma 8: The objective function f(x, y) in problem (1) is strictly convex with respect to x and strictly concave with respect to y on the set E if and only if P + 2(I - P) G (I - P) is positive definite, where P is defined in (11) and
\[
G = \begin{pmatrix} Q & O \\ O^T & S \end{pmatrix}
\]
in which O is a zero matrix with appropriate dimensions.

Proof: Similar to Lemma 3, Bz = 0 if and only if Pz = 0, where B = (A C) and z = (x^T, y^T)^T. Combining this with Lemma 7, it follows that f(x, y) is strictly convex with respect to x and strictly concave with respect to y on the set E if and only if z^T G z > 0 for any z ∈ H_z = {z ∈ R^{n+m} : Bz = 0, z ≠ 0}.

If P + 2(I - P) G (I - P) is positive definite, then for any z ∈ R^{n+m} (z ≠ 0)
\[
z^T \big(P + 2(I - P) G (I - P)\big) z > 0. \tag{27}
\]
For any z ∈ H_z, Pz = 0. Substituting this into (27) yields z^T G z > 0.

Conversely, suppose that z^T G z > 0 for any z ∈ H_z. Then, for any z ∈ R^{n+m} (z ≠ 0), we have
\[
z^T \big(P + 2(I - P) G (I - P)\big) z = z^T P z + 2 z^T (I - P) G (I - P) z.
\]
If z ∈ H_z, then Pz = 0 and z^T (P + 2(I - P) G (I - P)) z = 2 z^T G z > 0. If z ∉ H_z, then Pz ≠ 0, which implies z^T P z > 0. Note that P(I - P)z = 0. If (I - P)z ≠ 0, then (I - P)z ∈ H_z, so z^T (I - P) G (I - P) z > 0; if (I - P)z = 0, then z^T (I - P) G (I - P) z = 0. That is, for any z ∉ H_z, z^T (I - P) G (I - P) z ≥ 0, and it results that z^T (P + 2(I - P) G (I - P)) z > 0.

From the above analysis, z^T (P + 2(I - P) G (I - P)) z > 0 for any z ∈ R^{n+m} (z ≠ 0), i.e., P + 2(I - P) G (I - P) is positive definite.

Note that W + W^T = 2G, where W is defined in (9). Then, from Lemma 8, one gets the following result.

Lemma 9: The objective function f(x, y) in problem (1) is strictly convex with respect to x and strictly concave with respect to y on the set E if and only if P + (I - P)(W + W^T)(I - P) is positive definite.
Next, we present the results on the stability and convergence of the neural network in (11) and (12) for this case.

Theorem 4: For any initial value u(t_0) ∈ R^{n+m}, the output vector z(t) of the neural network in (11) and (12) is stable in the sense of Lyapunov and globally convergent to a solution of problem (1) if the objective function f(x, y) is strictly convex with respect to x and strictly concave with respect to y on the set E.

Proof: According to Theorem 3, we only need to prove that there exists k ≥ 1 such that the matrix in (19) is positive definite. In fact, from Lemma 9, if the conditions hold, the matrix P + (I - P)(W + W^T)(I - P) is positive definite. Then there always exists k ≥ 1 such that the matrix in (19) is positive definite. This completes the proof.

Remark 2: From the results in Theorem 4, if the conditions hold, the LMI (19) is always satisfied for sufficiently large k. However, if the objective function f(x, y) satisfies only the convexity (concavity) condition but not strictly, the LMI (19) can be checked by solving the constrained optimization problem (26).
IV. ANOTHER TWO CASES

A. Quadratic Minimax Programming With Linear Bound Constraints

In this subsection, we consider the quadratic minimax programming problem with linear bound constraints as follows:
\[
\min_{x}\ \max_{y}\ f(x, y) \quad \text{s.t. } Ax + Cy \in \Omega,\ x \in \mathcal{X},\ y \in \mathcal{Y} \tag{28}
\]
where f(x, y), X, Y, A, and C are the same as those in problem (1), but (A C) is not restricted to be of full row rank, and Ω is a nonempty closed convex set in R^p. The problem in (28) has been investigated in [34] and [40] when Ω is a box set. Here, the proposed neural network, with a similar model complexity, is capable of solving more general quadratic minimax programming problems. By introducing a slack vector v ∈ R^p, the constraints in (28) can be written as
\[
Ax + Cy - v = 0,\quad x \in \mathcal{X},\ y \in \mathcal{Y},\ v \in \Omega.
\]
Then the minimax programming problem in (28) is equivalent to
\[
\min_{x}\ \max_{y, v}\ f(x, y, v) \quad \text{s.t. } Ax + Cy - v = 0,\ x \in \mathcal{X},\ y \in \mathcal{Y},\ v \in \Omega
\]
where
\[
f(x, y, v) = \frac{1}{2} x^T Q x + q^T x - x^T H y - \frac{1}{2} y^T S y - s^T y + \frac{1}{2} v^T (Ax + Cy - v).
\]
Thus, the neural network in (11) and (12) can be used for solving this problem with u ∈ R^{n+m+p}, B = (A  C  -I_p) with I_p being the p-dimensional identity matrix, g(·) becoming a projection operator from R^{n+m+p} onto X × Y × Ω, z = (x^T, y^T, v^T)^T, c = (q^T, s^T, 0^T)^T, w = 0, and
\[
W = \begin{pmatrix} Q & -H & A^T/2 \\ H^T & S & -C^T/2 \\ -A/2 & -C/2 & I_p \end{pmatrix}.
\]
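The bookkeeping for this slack-variable reformulation is mechanical; the following NumPy sketch (names are illustrative assumptions) builds the augmented B, W, c, and d so that the same simulation routine sketched for problem (1) applies, with the projection acting blockwise on X, Y, and Ω.

```python
import numpy as np

def stack_bound_constrained(Q, S, H, q, s, A, C):
    """Augmented data for (28) with slack v: z = (x, y, v), constraint Ax + Cy - v = 0."""
    p = A.shape[0]
    W = np.block([[Q,       -H,       A.T / 2],
                  [H.T,      S,      -C.T / 2],
                  [-A / 2,  -C / 2,   np.eye(p)]])
    c = np.concatenate([q, s, np.zeros(p)])
    B = np.hstack([A, C, -np.eye(p)])
    d = np.zeros(p)          # right-hand side is zero, so w = 0
    return W, c, B, d
```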

B. Special Case for Quadratic Programming

In this subsection, we consider a special case of problem (1), namely the following quadratic programming problem:
\[
\min\ f(x) = \frac{1}{2} x^T Q x + q^T x \quad \text{s.t. } Ax = d,\ x \in \mathcal{X} \tag{29}
\]

where x = (x_1, x_2, ..., x_n)^T ∈ R^n, Q ∈ R^{n×n} is symmetric, q ∈ R^n, A ∈ R^{p×n} is of full row rank, d ∈ R^p, and X is a nonempty closed convex set in R^n.

Then the neural network in (11) and (12) for solving the quadratic programming problem can be written as follows.

State equation:
\[
\epsilon \frac{du}{dt} = -P g(u) - (I_n - P)\big(u - g(u) + Q\big((I_n - P)g(u) + w\big) + q\big) + w \tag{30}
\]
Output equation:
\[
x = g(u) \tag{31}
\]
where ε is a positive scaling constant, u ∈ R^n is the state vector, P = A^T (A A^T)^{-1} A, w = A^T (A A^T)^{-1} d, and I_n is the n-dimensional identity matrix.
The results in Theorems 3 and 4 can be applied to the quadratic programming problem, and they are stated as the following corollaries.

Corollary 2: For any initial value u(t_0) ∈ R^n, the output vector x(t) of the neural network in (30) and (31) is stable in the sense of Lyapunov and globally convergent to a solution of problem (29) if there exists k ≥ 1 such that the following matrix is positive semidefinite:
\[
k\big[P + 2(I_n - P) Q (I_n - P)\big] - \big[(I_n - P) Q (I_n - P)\big]^2. \tag{32}
\]
Corollary 3: For any initial value u(t_0) ∈ R^n, the output vector x(t) of the neural network in (30) and (31) is stable in the sense of Lyapunov and globally convergent to a solution of problem (29) if the objective function f(x) in problem (29) is strictly convex on the set {x ∈ R^n : Ax = d}.

Fig. 1. Transient behaviors of the state variables of the neural network in (30) and (31) for solving the problem in Example 1.
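For the quadratic programming case, the condition in (32) can be checked numerically by sweeping k and inspecting the smallest eigenvalue of the matrix. A minimal sketch with hypothetical Q and A (the grid and tolerance are illustrative choices) is given below.

```python
import numpy as np

def min_k_for_psd(Q, A, k_grid=np.linspace(1.0, 50.0, 4901)):
    """Smallest grid value of k making the matrix in (32) positive semidefinite."""
    n = Q.shape[0]
    P = A.T @ np.linalg.solve(A @ A.T, A)
    M = (np.eye(n) - P) @ Q @ (np.eye(n) - P)
    for k in k_grid:
        mat = k * (P + 2 * M) - M @ M
        if np.linalg.eigvalsh((mat + mat.T) / 2).min() >= -1e-9:
            return k
    return None
```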
V. SIMULATION RESULTS

In the following, three examples of constrained optimization problems are solved using the proposed neural network.

Example 1: Consider the quadratic programming problem in (29) with x ∈ R^4, q = (1, 1, 2, 3)^T, d = (1, 1)^T, X = {x ∈ R^4 : -2 ≤ x ≤ 2}, and
\[
Q = \begin{pmatrix} 0 & 5 & 5 & 2 \\ 5 & 0 & 1 & 2 \\ 5 & 1 & 0 & 2 \\ 2 & 2 & 2 & 0 \end{pmatrix}, \qquad
A = \begin{pmatrix} 2 & 1 & 1 & 0 \\ 0 & 1 & 1 & 2 \end{pmatrix}.
\]
Obviously, Q is not positive definite, and some of the existing neural networks, such as those in [25], [31], and [32], are not capable of solving this problem. Using the MATLAB LMI toolbox, we can get an estimate of k by solving the optimization problem in (26). Here, if k ≥ 3.3334, the matrix in (32) is positive semidefinite. Then the proposed neural network in (30) and (31) can be used for solving this problem.

Let ε = 10^{-5} in the neural network model. Figs. 1 and 2, respectively, show the transient behaviors of the state vector u(t) and the output vector x(t) with 10 random initial values. The simulation results show that the output variables are globally convergent to the unique solution x* = (0.5, 2, 2, 1.5)^T, which satisfies the bound constraints.

Fig. 2. Transient behaviors of the output variables of the neural network in (30) and (31) for solving the problem in Example 1.

Example 2: Consider the optimization problem in (1) with x ∈ R^2, y ∈ R^2, q = (1, 1)^T, s = (0, 1)^T, d = (1, 1)^T, X = {x ∈ R^2 : 0 ≤ x ≤ 5}, Y = {y ∈ R^2 : 0 ≤ y ≤ 5}, and
\[
Q = \begin{pmatrix} 2 & 2 \\ 2 & 1 \end{pmatrix}, \quad
S = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad
H = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}, \quad
A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}, \quad
C = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}.
\]
Since the matrices Q and S are not positive semidefinite, the neural networks in [26] and [34] for quadratic minimax programming are not capable of solving this problem. Using the MATLAB LMI toolbox, the matrix in (19) is positive definite if k ≥ 1.2061. Let ε = 10^{-5} in the neural network model. Figs. 3 and 4, respectively, show the transient behaviors of the state vector u(t) and the output vector (x(t), y(t)) with 10 random initial values. The simulation results show that the output variables are globally convergent to the unique solution ((x*)^T, (y*)^T)^T = (1/3, 0, 2/3, 2)^T, which satisfies the bound constraints.

Next, the simulation results of some other recurrent neural networks for solving the problem in this example are presented.

Fig. 3. Transient behaviors of the state variables of the neural network in (11) and (12) for solving the problem in Example 2.

Fig. 4. Transient behaviors of the output variables of the neural network in (11) and (12) for solving the problem in Example 2.

Since the minimax optimization problem in (1) is equivalent to solving the equations in (3), which are similar to the equations in [31] for nonlinear optimization, we use the neural network in [31] to solve this problem. Fig. 5 shows the transient behaviors of the neural network in [31] from 10 random initial points. We can observe that the output variables oscillate around the optimal solution. Moreover, the neural network in [34] is used for solving this problem, and the simulation results are shown in Fig. 6. We see that the output variables of this neural network are divergent.
Example 3: Consider the quadratic minimax programming problem with linear bound constraints in (28). Assume x ∈ R^n and y ∈ R^n for different numbers of dimensions, Q and S to be identity matrices, H to be a matrix with all elements equal to 1, q and s to be zero vectors, and the feasible set D = {x ∈ R^n, y ∈ R^n : -1 ≤ Σ_{i=1}^{n} (x_i + y_i) ≤ 1}.
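Example 3's data are simple to generate for any n, so it is a convenient test of the slack-variable reformulation in Section IV-A. The sketch below builds the data and the projection onto Ω = [-1, 1] for the scalar bound constraint, reusing the illustrative helpers stack_bound_constrained and simulate_network sketched earlier; taking X and Y to be the whole space is one reading of the example and is an assumption here.

```python
import numpy as np

def example3_data(n):
    """Example 3: Q = S = I, H all ones, q = s = 0, constraint -1 <= sum(x + y) <= 1."""
    Q, S = np.eye(n), np.eye(n)
    H = np.ones((n, n))
    q, s = np.zeros(n), np.zeros(n)
    A = np.ones((1, n))          # Ax + Cy = sum(x) + sum(y), a scalar (p = 1)
    C = np.ones((1, n))
    return Q, S, H, q, s, A, C

def proj(u, n):
    """Projection onto X x Y x Omega with X = Y = R^n and Omega = [-1, 1]."""
    z = u.copy()
    z[2 * n:] = np.clip(z[2 * n:], -1.0, 1.0)
    return z
```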
Since the matrices Q and S are positive definite, the neural network in (11) and (12) is capable of solving this problem. Fig. 7 shows the convergence of the error function ||x(t) - x*|| for n = 10, 100, 1000, and 5000, where x* is the optimal solution of the problem.
Fig. 5. Transient behaviors of the neural network in [31] for solving the problem in Example 2.

Fig. 6. Transient behaviors of the neural network in [34] for solving the problem in Example 2.

Fig. 7. Convergence of the error function ||x(t) - x*|| with different values of n for solving the problem in Example 3.

From the simulation results, we see that the error function converges to zero for different values of n. Moreover, the convergence rate is independent of the
dimension of the problem, which indicates that the proposed neural network is suitable for solving complex quadratic minimax problems with high dimensions.
VI. CONCLUSION

This paper presents a projection neural network described by a continuous-time dynamic system for solving quadratic minimax optimization problems with linear constraints. Compared with other recurrent neural networks for optimization, the proposed neural network has lower model complexity and no design parameter in the model. Furthermore, compared with some existing recurrent neural networks for quadratic minimax optimization, the proposed neural network is capable of solving more general problems with relaxed restrictions on the objective functions. Using the Lyapunov method and LMIs, the global convergence is guaranteed if the matrix in the LMI condition is positive semidefinite. Simulation results on several numerical examples are presented to show the effectiveness and performance of the proposed neural network.
REFERENCES
[1] M. R. Young, A minimax portfolio selection rule with linear
programming solution, Manage. Sci., vol. 44, no. 5, pp. 673–683,
1998.
[2] A. Rapoport and R. B. Boebel, Mixed strategies in strictly competitive games: A further test of the minimax hypothesis, Games Econ.
Behavior, vol. 4, no. 2, pp. 261–283, 1992.
[3] R. T. Rockafellar, Linear-quadratic programming and optimal control,
SIAM J. Control Optim., vol. 25, no. 3, pp. 781–814, 1987.
[4] R. T. Rockafellar and R. J.-B. Wets, Generalized linear-quadratic
problems of deterministic and stochastic optimal control in discrete
time, SIAM J. Control Optim., vol. 28, no. 4, pp. 810–822,
1990.
[5] F. L. Lewis, D. L. Vrabie, and V. L. Syrmos, Optimal Control.
New York, NY, USA: Wiley, 2012.
[6] D. Tank and J. J. Hopfield, Simple neural optimization networks: An
A/D converter, signal decision circuit, and a linear programming circuit,
IEEE Trans. Circuits Syst., vol. 33, no. 5, pp. 533–541, May 1986.
[7] M. P. Kennedy and L. O. Chua, Neural networks for nonlinear
programming, IEEE Trans. Circuits Syst., vol. 35, no. 5, pp. 554–562,
May 1988.
[8] J. Wang, Analysis and design of a k-winners-take-all model with a
single state variable and the heaviside step activation function, IEEE
Trans. Neural Netw., vol. 21, no. 9, pp. 1496–1506, Sep. 2010.
[9] S. Wen, Z. Zeng, T. Huang, and Y. Zhang, Exponential adaptive lag
synchronization of memristive neural networks via fuzzy method and
applications in pseudorandom number generators, IEEE Trans. Fuzzy
Syst., vol. 22, no. 6, pp. 1704–1713, Dec. 2014.
[10] A. Wu and Z. Zeng, Lagrange stability of memristive neural networks
with discrete and distributed delays, IEEE Trans. Neural Netw. Learn.
Syst., vol. 25, no. 4, pp. 690–703, Apr. 2014.
[11] J. Wang, Primal and dual neural networks for shortest-path routing,
IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 28, no. 6,
pp. 864–869, Nov. 1998.
[12] J. Wang, Primal and dual assignment networks, IEEE Trans. Neural
Netw., vol. 8, no. 3, pp. 784–790, May 1997.
[13] Y. Xia, H. Leung, and J. Wang, A projection neural network and
its application to constrained optimization problems, IEEE Trans.
Circuits Syst. I, Fundam. Theory Appl., vol. 49, no. 4, pp. 447–458,
Apr. 2002.
[14] X. Hu and J. Wang, Design of general projection neural networks for
solving monotone linear variational inequalities and linear and quadratic
optimization problems, IEEE Trans. Syst., Man, Cybern. B, Cybern.,
vol. 37, no. 5, pp. 1414–1421, Oct. 2007.
[15] Q. Liu and J. Wang, A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints,
IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 5, pp. 812–824,
May 2013.

[16] Q. Liu, T. Huang, and J. Wang, One-layer continuous- and


discrete-time projection neural networks for solving variational
inequalities and related optimization problems, IEEE Trans. Neural
Netw. Learn. Syst., vol. 25, no. 7, pp. 1308–1318, Jul. 2014.
[17] M. Forti, P. Nistri, and M. Quincampoix, Generalized neural network
for nonsmooth nonlinear programming problems, IEEE Trans. Circuits
Syst. I, Reg. Papers, vol. 51, no. 9, pp. 1741–1754, Sep. 2004.
[18] W. Bian and X. Xue, Subgradient-based neural networks for nonsmooth
nonconvex optimization problems, IEEE Trans. Neural Netw., vol. 20,
no. 6, pp. 1024–1038, Jun. 2009.
[19] Q. Liu, J. Cao, and G. Chen, A novel recurrent neural network
with finite-time convergence for linear programming, Neural Comput.,
vol. 22, no. 11, pp. 2962–2978, 2010.
[20] Y. Xia and J. Wang, A general projection neural network for solving
monotone variational inequalities and related optimization problems,
IEEE Trans. Neural Netw., vol. 15, no. 2, pp. 318–328,
Mar. 2004.
[21] X. Hu, Applications of the general projection neural network
in solving extended linear-quadratic programming problems with
linear constraints, Neurocomputing, vol. 72, nos. 4–6, pp. 1131–1137,
2009.
[22] Q. Liu and J. Cao, A recurrent neural network based on projection
operator for extended general variational inequalities, IEEE Trans.
Syst., Man, Cybern. B, Cybern., vol. 40, no. 3, pp. 928–938,
Jun. 2010.
[23] S. Qin, W. Bian, and X. Xue, A new one-layer recurrent neural network
for nonsmooth pseudoconvex optimization, Neurocomputing, vol. 120,
pp. 655–662, Nov. 2013.
[24] Z. Yan and J. Wang, Robust model predictive control of nonlinear
systems with unmodeled dynamics and bounded uncertainties based on
neural networks, IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 3,
pp. 457–469, Mar. 2014.
[25] Y. S. Xia and J. Wang, On the stability of globally projected dynamical
systems, J. Optim. Theory Appl., vol. 106, no. 1, pp. 129–150,
2000.
[26] X.-B. Gao, L.-Z. Liao, and W. Xue, A neural network for a class
of convex quadratic minimax problems with constraints, IEEE Trans.
Neural Netw., vol. 15, no. 3, pp. 622–628, May 2004.
[27] X. Hu and J. Wang, Solving pseudomonotone variational inequalities
and pseudoconvex optimization problems using the projection neural
network, IEEE Trans. Neural Netw., vol. 17, no. 6, pp. 1487–1499,
Nov. 2006.
[28] L. Cheng, Z.-G. Hou, Y. Lin, M. Tan, W. C. Zhang, and F.-X. Wu,
Recurrent neural network for non-smooth convex optimization
problems with application to the identification of genetic regulatory
networks, IEEE Trans. Neural Netw., vol. 22, no. 5, pp. 714–726,
May 2011.
[29] X. Hu and J. Wang, A recurrent neural network for solving a class
of general variational inequalities, IEEE Trans. Syst., Man, Cybern. B,
Cybern., vol. 37, no. 3, pp. 528–539, Jun. 2007.
[30] Q. Liu and Y. Yang, Global exponential system of projection neural
networks for system of generalized variational inequalities and related
nonlinear minimax problems, Neurocomputing, vol. 73, nos. 10–12,
pp. 2069–2076, 2010.
[31] Y. Xia and J. Wang, A recurrent neural network for solving nonlinear
convex programs subject to linear constraints, IEEE Trans. Neural
Netw., vol. 16, no. 2, pp. 379–386, Mar. 2005.
[32] X.-B. Gao, A novel neural network for nonlinear convex programming, IEEE Trans. Neural Netw., vol. 15, no. 3, pp. 613–621,
May 2004.
[33] X.-B. Gao and L.-H. Liao, A novel neural network for a class of
convex quadratic minimax problems, Neural Comput., vol. 18, no. 8,
pp. 1818–1846, 2006.
[34] X. Xue and W. Bian, A project neural network for solving degenerate
quadratic minimax problem with linear constraints, Neurocomputing,
vol. 72, nos. 7–9, pp. 1826–1838, 2009.
[35] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming:
Theory and Algorithms, 3rd ed. Hoboken, NJ, USA: Wiley, 2006.
[36] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational
Inequalities and Their Applications. New York, NY, USA: Academic,
1982.
[37] Z. Zeng and W. X. Zheng, Multistability of two kinds of recurrent
neural networks with activation functions symmetrical about the origin
on the phase plane, IEEE Trans. Neural Netw. Learn. Syst., vol. 24,
no. 11, pp. 1749–1762, Nov. 2013.


[38] J. P. La Salle, The Stability of Dynamical Systems. Philadelphia, PA,


USA: SIAM, 1976.
[39] Q. Liu and J. Wang, A one-layer recurrent neural network with
a discontinuous hard-limiting activation function for quadratic
programming, IEEE Trans. Neural Netw., vol. 19, no. 4, pp. 558–570,
Apr. 2008.
[40] A. R. Nazemi, A dynamical model for solving degenerate quadratic
minimax problems with constraints, J. Comput. Appl. Math., vol. 236,
no. 6, pp. 1282–1295, 2011.

Qingshan Liu (S'07–M'08) received the
B.S. degree in mathematics from Anhui Normal
University, Wuhu, China, in 2001, the M.S. degree
in applied mathematics from Southeast University,
Nanjing, China, in 2005, and the Ph.D. degree
in automation and computer-aided engineering from
The Chinese University of Hong Kong, Hong Kong,
in 2008.
He was a Research Associate with the
City University of Hong Kong, Hong Kong,
in 2009 and 2011. In 2010 and 2012, he was
a Post-Doctoral Fellow with The Chinese University of Hong Kong.
In 2012 and 2013, he was a Visiting Scholar with Texas A&M University
at Qatar, Doha, Qatar. From 2008 to 2014, he was an Associate Professor
with the School of Automation, Southeast University. He is currently
a Professor with the School of Automation, Huazhong University of
Science and Technology, Wuhan, China. His current research interests
include optimization theory and applications, artificial neural networks,
computational intelligence, and multiagent systems.


Jun Wang (S'89–M'90–SM'93–F'07) received


the B.S. degree in electrical engineering and
the M.S. degree in systems engineering from the
Dalian University of Technology, Dalian, China,
in 1982 and 1985, respectively, and the Ph.D. degree
in systems engineering from Case Western Reserve
University, Cleveland, OH, USA, in 1991.
He held various short-term or part-time visiting
positions with the U.S. Air Force Armstrong
Laboratory, San Antonio, TX, USA, in 1995, the
RIKEN Brain Science Institute, Wako, Japan,
in 2001, the Chinese Academy of Sciences, Beijing, China, in 2002,
and the Huazhong University of Science and Technology, Wuhan, China,
from 2006 to 2007. He was a Cheung Kong Chair Professor with Shanghai
Jiao Tong University, Shanghai, China, from 2008 to 2011. He has been
a part-time National Thousand-Talent Chair Professor with the Dalian
University of Technology since 2011. He has held various academic positions
with the Dalian University of Technology, Case Western Reserve University,
and the University of North Dakota, Grand Forks, ND, USA. He is currently
a Professor with the Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Hong Kong. His current research
interests include neural networks and their applications.
Prof. Wang was a recipient of the Research Excellence Award from The
Chinese University of Hong Kong for research from 2008 to 2009, the Natural
Science Award (First Class) from the Shanghai Municipal Government
in 2009, the Natural Science Award (First Class) from the Ministry of
Education of China in 2011, the Outstanding Achievement Award from
the Asia Pacific Neural Network Assembly, the IEEE TRANSACTIONS ON NEURAL NETWORKS Outstanding Paper Award (with Q. Liu) in 2011, and the Neural Networks Pioneer Award from the IEEE Computational Intelligence Society in 2014. He is the Editor-in-Chief of the IEEE TRANSACTIONS ON CYBERNETICS. He was an Associate Editor of its predecessor journal from 2003 to 2013, of the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART C from 2002 to 2005, and of the IEEE TRANSACTIONS ON NEURAL NETWORKS from 1999 to 2009. He was a member of
the Editorial Board of Neural Networks from 2012 to 2014, and the
Editorial Advisory Board of the International Journal of Neural Systems
from 2006 to 2012. He was a Guest Editor of the special issues of the
European Journal of Operational Research in 1996, the International Journal
of Neural Systems in 2007, and the Neurocomputing in 2008 and 2014.
He was the President of the Asia Pacific Neural Network Assembly
in 2006, and the General Chair of the 13th International Conference on
Neural Information Processing in 2006, and the IEEE World Congress on
Computational Intelligence in 2008. He was on many committees, such as
the IEEE Fellow Committee. He was the IEEE Computational Intelligence
Society Distinguished Lecturer from 2010 to 2012, and has been the
IEEE Computational Intelligence Society Distinguished Lecturer since 2014.
