
Analog circuit optimization system based on hybrid evolutionary algorithms

Bo Liu (a), Yan Wang (a,*), Zhiping Yu (a), Leibo Liu (a), Miao Li (a), Zheng Wang (a), Jing Lu (a), Francisco V. Fernández (b)

(a) Institute of Microelectronics, Tsinghua University, China
(b) IMSE, CSIC and University of Seville, Spain
* Corresponding author. E-mail address: wangy46@tsinghua.edu.cn (Y. Wang).
ARTICLE INFO
Article history:
Received 29 July 2007
Received in revised form
28 January 2008
Accepted 10 April 2008
Keywords:
Analog circuit synthesis
Analog circuit optimization
Differential evolution (DE)
Co-evolutionary differential evolution
(CODE)
Analog circuit sizing
ABSTRACT
This paper investigates a hybrid evolutionary-based design system for automated sizing of analog integrated circuits (ICs). A new algorithm, called competitive co-evolutionary differential evolution (CODE), is proposed to design analog ICs with practical user-defined specifications. On the basis of the combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment, once a circuit topology is selected. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met, even in highly-constrained situations. Comparisons with available methods like genetic algorithms and differential evolution, which use static penalty functions to handle design constraints, have also been carried out, showing that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
© 2008 Elsevier B.V. All rights reserved.
1. Introduction
Nowadays, VLSI technology progresses towards the integration
of mixed analog-digital circuits as a complete system-on-a-chip.
Though the analog part is a small fraction of the entire circuit, it is
much more difficult to design due to the complex and knowledge-
intensive nature of analog circuits. Without an automated
synthesis methodology, analog circuit design suffers from long
design time, high complexity, high cost and requires highly
skilled designers. Consequently, automated synthesis methodol-
ogies for analog circuits have received much attention. The analog
design procedure consists of topological-level design and para-
meter-level design (also called circuit sizing) [1,2]. This paper
concentrates on the latter, aiming at parameter selection and
optimization to improve the performances for a given circuit
topology.
There are two main purposes of a synthesis system: first,
replace tedious and ad-hoc manual trade-offs by automatic design
of parameters; second, solve problems that are hard to design by
hand. Accuracy, ease of use, generality, robustness, and reasonable
run-time are necessary for a circuit synthesis solution to gain
acceptance [3]. Beyond those requirements, the ability to deal with large-scale problems, to closely meet the designer's requirements even for highly-constrained problems, and to achieve optimum results are significant objectives of the proposed system.
Many parameter-level design strategies, methods, and tools have
been published in recent years [1-23], and some have even
reached commercialization [24].
Most analog circuit sizing problems can be naturally expressed as the minimization of an objective (e.g., power consumption; the maximization of a design objective can easily be transformed into a minimization problem by just inverting its sign), usually subject to some constraints (e.g., DC gain larger than a certain value). They can be formulated as follows:

\[
\begin{aligned}
\min_x \quad & f(x) \\
\text{subject to} \quad & g(x) \ge 0, \\
& h(x) = 0, \\
& X_L < x < X_H
\end{aligned}
\qquad (1)
\]
In this equation, the objective function f(x) is the performance function to be minimized and h(x) are the equality constraints. In analog circuit design, the equality constraints mainly refer to Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL) equations. Vector x corresponds to the design variables, and X_L and X_H are their lower and upper bounds, respectively. The vector inequality g(x) ≥ 0 corresponds to the user-defined constraints.
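As a purely illustrative example of this formulation (the variable names, bounds and specification values below are hypothetical and not taken from this paper), a sizing task in the form of Eq. (1) could be encoded as follows; the equality constraints are left to the circuit simulator:

# Hypothetical encoding of Eq. (1) for an amplifier sizing task.
# All names, bounds and limits are illustrative, not from the paper.
design_variables = {              # vector x with bounds (X_L, X_H)
    "W1_um": (0.25, 500.0),       # transistor width
    "L1_um": (0.25, 10.0),        # transistor length
    "Cc_pF": (0.1, 50.0),         # compensation capacitor
    "Ib_uA": (1.0, 500.0),        # bias current
}

objective = "power_mW"            # f(x): performance to be minimized

inequality_constraints = {        # g(x) >= 0, expressed as performance limits
    "dc_gain_dB":       (">=", 70.0),
    "gbw_MHz":          (">=", 2.0),
    "phase_margin_deg": (">=", 50.0),
}
# Equality constraints (KCL/KVL) are enforced implicitly by the electrical
# simulator, so they do not appear explicitly in this encoding.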
The proposed system uses the formulation of the analog circuit
design problem in Eq. (1) as a constrained optimization problem
and, then, solves it by evolutionary algorithms. A new algorithm,
called competitive co-evolutionary differential evolution (CODE)
algorithm, is proposed to deal with this constrained opti-
mization problem. The algorithm has several novel features,
which enable it to deal with large-scale and highly-con-
strained problems in an acceptable computation time with high
robustness.
The evolutionary algorithm has been implemented in MATLAB
[25]. Evaluation of the objective function and constraints (values
of f(x) and g(x) in Eq. (1)) is performed by using an electrical
simulator, HSPICE [26], for which an appropriate link with
MATLAB has been implemented.
The structure of the paper is as follows. Section 2 reviews
related work and motivates the strategy of our optimization
approach. The evolutionary algorithm used in this approach,
differential evolution, and its implementation, are discussed in
Section 3. Section 4 formulates the competitive co-evolution
approach to handle constraints in the differential evolution
algorithm. Section 5 provides practical examples and benchmark
tests to show the efficiency, effectiveness and advantages
of the proposed approach. Comparisons with other common
methods are also carried out. Finally, some concluding remarks
are given.
2. Related work
Synthesis can be carried out by the following two different
approaches: knowledge- and optimization-based. The basic
idea of knowledge-based synthesis is to formulate design
equations in such a way that given the performance character-
istics, the design parameters can be calculated [4-6]. In these
tools, the quality of the solutions in terms of both accuracy and
robustness is not acceptable since the very concept of knowledge-
based sizing forces the design equations to be simple. Other
drawbacks are the large preparatory time/effort required to
develop design plans or equations, the difculty in using them
in a different technology, and the limitation to a xed set of
circuits.
In optimization-based synthesis, the problem is translated into
function minimization problems that can be solved through
numerical methods. Essentially, they are based on the introduc-
tion of a performance evaluator within an iterative optimization
loop. The system is called equation-based when the performance
evaluator is based on equations capturing the behavior of a circuit
topology [7-11]. However, creating the equations often consumes
much more time than manually designing the circuit. In addition,
the simplifications required in the closed-form analytical
equations cause low accuracy and incompleteness. On the
contrary, simulation-based methods do not rely on analytical
equations but on SPICE-like simulations to evaluate the circuit
performances in the optimization process, which result in super-
ior accuracy, generality, and ease of use [12-15]. Therefore, our
system is simulation-based. Through the link between HSPICE and
MATLAB, the candidate parameters are transmitted from the
optimization system to the simulation engine, and the circuit
performances obtained by the electrical simulator are returned to
the optimization system. The penalty to pay is a relatively long
computation time (compared to other methods), although, as the
experimental results in Section 5 demonstrate, it can be kept
within acceptable limits.
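A minimal sketch of this simulator-in-the-loop evaluation is given below. The paper links MATLAB to HSPICE; here the same idea is shown in Python with a hypothetical netlist template and result parser, and the simulator command line, file names and output format are assumptions that would have to be adapted to the actual tool setup:

import re
import subprocess
from pathlib import Path

def evaluate_candidate(params: dict, template: str, workdir: Path) -> dict:
    """Write a netlist for one candidate, run the simulator, parse performances.

    `template` is a netlist text with placeholders such as {W1_um}; the
    simulator command, flags and output parsing below are placeholders.
    """
    netlist = workdir / "candidate.sp"
    netlist.write_text(template.format(**params))      # insert candidate sizes

    # Invoke the electrical simulator (command name and flags are assumptions).
    subprocess.run(["hspice", "-i", str(netlist), "-o", str(workdir / "out")],
                   check=True, capture_output=True)

    # Parse measurement results of the form "name = value" from the listing
    # (the file name and line format are assumptions).
    performances = {}
    for line in (workdir / "out.lis").read_text().splitlines():
        match = re.match(r"\s*(\w+)\s*=\s*([-+0-9.eE]+)\s*$", line)
        if match:
            performances[match.group(1)] = float(match.group(2))
    return performances     # e.g. {"dc_gain_db": 76.4, "power_mw": 0.73}

The optimizer then treats evaluate_candidate() as the black-box evaluator of the objective and constraints for each individual of the population.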
Techniques for analog circuit optimization that appeared in
literature can be broadly classified into two main categories:
deterministic optimization algorithms and stochastic search
algorithms (evolutionary computation algorithms, simulated
annealing, etc.). The traditional deterministic optimization meth-
ods mainly include steepest-descent algorithm and downhill
algorithm. These techniques are available in some commercial
electrical simulators [26]. The drawbacks of deterministic opti-
mization algorithms are mainly in the following three aspects: (1)
they require a good starting point, (2) an unsatisfactory local
minimum may be reached in many cases, and (3) they often
require continuity and differentiability of the objective function.
Some researchers have tried to address these difficulties, such as
in Ref. [16], where a method to determine the initial point is
presented. Another approach is the application of geometric
programming methods, which guarantee the convergence to a
global minimum [11]. However, they require a special formulation
of design equations, which make them share many of the
disadvantages of equation-based systems. Research efforts on
stochastic search algorithms, especially evolutionary computation
(EC) algorithms (genetic algorithms, differential evolution, genetic
programming, etc.) have begun to appear in literature in recent
years [1,1723]. Due to the ability and efciency to nd a
satisfactory solution, genetic algorithms (GA) have been employed
as optimization routines for analog circuits in both, industry
and academia. For problems with practical design specications,
most reported approaches use the penalty function method to
handle constraints. Though these works have made a signicant
progress, the optimization algorithms for analog circuit design
automation remain an active research area because of the
following reasons:
(1) GA is the most popular evolutionary algorithm, but its search
ability and convergence rate have been criticized. It has also
been proved that canonical GA cannot converge to the global
optimum [27]. GA with elitism converges to the global
optimum theoretically, but it is not always the case in
practice. On the other hand, some other population-based
metaheuristics (PBMH) methods, such as swarm intelligence
[28] and differential evolution [29] are attracting much
attention in the community of operations research because
of their advantages over GAs. Their potential in analog circuit design automation still needs to be exploited.
(2) The constraint handling problem is very important in analog
circuit design, especially for high performance circuits, which
are always highly constrained. Most reported synthesis
methods use penalty functions to handle constraints, and
few of them investigated solution algorithms for high
performance design problems. In these methods, the constrained optimization problem is transformed into an unconstrained one by minimizing the following function (a minimal code sketch of this transformation is given after this list):

\[
f'(x) = f(x) + \sum_{i=1}^{n} w_i \, \langle g_i(x) \rangle ,
\qquad (2)
\]

where the parameters w_i are the penalty coefficients and ⟨g_i(x)⟩ returns the absolute value of g_i(x) if it is negative, and zero otherwise. The results of the methods based on penalty functions are very sensitive to the penalty coefficients, and may not meet the designer's specifications in many cases. Small values of the penalty coefficients drive the search outside the feasible region and often produce infeasible solutions, while imposing very severe penalties makes it difficult to drive the population to the optimum solution. Usually, exact solutions are hard to find without tuning the penalty coefficients many times. Although several penalty strategies have been developed [30,31], there is still no general rule for designing penalty coefficients.
(3) Ability to handle large-scale design problems is still under
investigation. Most of the available methods can deal with
about 10-20 variables simultaneously, but analog circuits
with 30 or more unknown variables are common.
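Referring back to Eq. (2) in item (2) above, the following is a minimal sketch (in Python, for illustration only) of the static penalty transformation; the weights w_i are exactly the coefficients that, as discussed, have to be tuned by hand:

def static_penalty_objective(f, constraints, weights):
    """Build f'(x) = f(x) + sum_i w_i * <g_i(x)> as in Eq. (2), where <g> is
    |g| if g < 0 and 0 otherwise. `constraints` are callables returning the
    g_i(x) values (feasible when non-negative); `weights` are the hand-tuned
    penalty coefficients w_i."""
    def penalized(x):
        value = f(x)
        for g_i, w_i in zip(constraints, weights):
            g_val = g_i(x)
            if g_val < 0:                    # constraint g_i(x) >= 0 violated
                value += w_i * abs(g_val)
        return value
    return penalized

With poorly chosen weights, the minimizer of penalized() either ignores the constraints (weights too small) or stalls near the feasibility boundary (weights too large), which is precisely the sensitivity discussed above.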
3. Differential evolution and its implementation
The differential evolution (DE) algorithm and its implementa-
tion are introduced briey in this section. The DE algorithm is
suitable for unconstrained problems, and it is also a basic
component in CODE.
Differential evolution is a population-based evolutionary
computation technique, which uses a simple differential operator
to create new candidate solutions, and a one-to-one competition
scheme to greedily select new candidates. Recently, DE has
attracted much attention in various technical fields [32,33].
The flow diagram of the DE algorithm is summarized in Fig. 1. The
DE algorithm starts with the random initialization of a population of
individuals in the search space and works on the cooperative
behaviors of the individuals in the population. At each generation,
the mutation and crossover operators are applied to the individuals,
and a new population arises. Then, selection takes place, and the
corresponding individuals from both populations compete to build
the next generation. The algorithm tries to find the globally optimal
solution by utilizing the distribution of solutions in the search space
and differences between pairs of solutions as search directions.
However, the searching behavior of each individual in the search
space is adjusted by dynamically altering the direction and step
length in which the search is performed.
The ith individual in the d-dimensional search space at
generation t can be represented as
\[
X_i(t) = (x_{i,1}, x_{i,2}, \ldots, x_{i,d}), \qquad i = 1, 2, \ldots, NP,
\qquad (3)
\]
where NP denotes the size of the population.
For each target individual i, according to the mutation operator,
a mutant vector:
\[
V_i(t+1) = [\, v_{i,1}(t+1), \ldots, v_{i,d}(t+1) \,]
\qquad (4)
\]
is generated by adding the weighted difference between a pair of
individuals, randomly selected from the population at generation
t, to another individual, as described by the following equation:
\[
V_i(t) = X_{r0}(t) + F \cdot \big( X_{r1}(t) - X_{r2}(t) \big),
\qquad (5)
\]
where indices r1 and r2 (r1, r2 ∈ {1, 2, ..., NP}) are randomly chosen and mutually different, and also different from the current index i. The scaling factor F (F ∈ (0, 1+)) controls the amplification of the differential variation (X_{r1}(t) − X_{r2}(t)). Although F does not have an upper limit, F ∈ (0, 2] is commonly used. The population size NP must be at least 4, so that the mutation operator can be applied. Vector X_{r0}(t) is the base vector to be perturbed. There is a variety of mechanisms to select this base vector.
After the mutation phase, the crossover operator is applied to
increase the diversity of the population. Thus, for each target
individual, a trial vector U_i(t) = [u_{i,1}(t), ..., u_{i,d}(t)] is generated as follows:

\[
u_{i,j}(t) =
\begin{cases}
v_{i,j}(t) & \text{if } \operatorname{rand}(j) \le CR \text{ or } j = \operatorname{randn}(i), \\
x_{i,j}(t) & \text{otherwise},
\end{cases}
\qquad (6)
\]
where rand(j) is a random number uniformly distributed in the
range [0, 1]. The index randn(i) is randomly chosen from the set {1, 2, ..., d}, and prevents the trial vector from being identical to the target vector. The parameter CR ∈ [0, 1] is a constant called
crossover parameter that controls the diversity of the population.
Following the crossover operation, selection takes place to decide whether the trial vector U_i(t) will be a member of the population of the next generation t+1. For a minimization problem, U_i(t) is compared to the initial target individual X_i(t) by the following one-to-one greedy selection criterion:

\[
X_i(t+1) =
\begin{cases}
U_i(t) & \text{if } f(U_i(t)) < f(X_i(t)), \\
X_i(t) & \text{otherwise},
\end{cases}
\qquad (7)
\]

where X_i(t+1) is the new individual of the population of the next generation, and f(x) is the objective function.
The procedure described above is considered as the standard
version of DE. Several strategies of DE have been proposed,
depending on the selection of the base vector to be perturbed, the
number and selection of the trial vectors and the type of crossover
operators [32,33]. In our implementation, the base vector X_{r0}(t) is
selected to be the best member of the current population to share
its information among the individuals of the population and bias
solutions towards better vectors.
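The following is a compact, illustrative re-implementation (in Python, not the authors' MATLAB code) of the DE variant described above: random initialization, the best population member as base vector, the mutation of Eq. (5), the binomial crossover of Eq. (6) and the greedy one-to-one selection of Eq. (7). Default parameter values are assumptions:

import numpy as np

def de_best_1_bin(f, bounds, NP=40, F=0.8, CR=0.8, generations=100, rng=None):
    """Minimize f over the box `bounds` (array of shape (d, 2)) with a
    DE/best/1/bin strategy."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    d = bounds.shape[0]
    low, high = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(low, high, size=(NP, d))          # random initialization
    cost = np.array([f(x) for x in pop])

    for _ in range(generations):
        best = pop[np.argmin(cost)]                     # base vector = best member
        for i in range(NP):
            # Mutation, Eq. (5): V_i = X_best + F * (X_r1 - X_r2), r1 != r2 != i
            r1, r2 = rng.choice([j for j in range(NP) if j != i], size=2,
                                replace=False)
            v = np.clip(best + F * (pop[r1] - pop[r2]), low, high)  # hard bounds

            # Binomial crossover, Eq. (6): force at least one mutant component
            j_rand = rng.integers(d)
            mask = rng.random(d) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])

            # Greedy one-to-one selection, Eq. (7)
            u_cost = f(u)
            if u_cost < cost[i]:
                pop[i], cost[i] = u, u_cost
    best_idx = int(np.argmin(cost))
    return pop[best_idx], cost[best_idx]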
Special care has been taken for handling boundaries of the
parameters of the search space. Two classes of boundaries are
distinguished: hard and weak. Hard boundaries are those that
cannot be exceeded (e.g., a passive resistance cannot be negative
or the transistor gate length cannot be below the minimum value
allowed in the technological process), even if there are mathe-
matically better solutions beyond those points. During the
execution of the DE algorithm, the overstepped individuals are
set to the nearest bounds. Weak boundaries are those roughly
estimated to reasonably limit the search space. Although all
individuals in the initial population are selected within these
bounds, mutations during the evolution of the population may
yield individuals beyond those limits if better solutions are found.
Unlike GA and particle swarm optimization (PSO), DE can deal
with this problem. DE is also more effective as the accuracy of
local search is better than that of GA and PSO [29].
For some parameters it is also interesting to use logarithmic
scales to favor lower values of the parameters with large spans,
e.g., if a bias current spans over several decades, high values,
hence high power consumption, will be favored if a linear scale is
used. Other parameters must be discretized, e.g., device sizes can
only change according to a given grid. In our implementation,
mutant vectors are allowed to vary continuously, to promote
diversity. However, the parameters are set to the nearest grid
value when evaluating the fitness of the individual.
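A small illustration of the parameter handling just described: clipping to hard bounds, searching wide-span parameters (such as bias currents) on a logarithmic scale, and snapping to a manufacturing grid only when the fitness is evaluated. The function name, grid values and numbers are illustrative only:

def to_circuit_value(x_log, lo, hi, grid):
    """Map one optimizer variable, searched on a log10 scale, to the value
    written into the netlist: clip to the hard bounds, then snap to the
    nearest grid point. The internal continuous value is kept by DE to
    preserve diversity; only the evaluated copy is discretized."""
    value = 10.0 ** x_log                 # optimizer works in log10 space
    value = min(max(value, lo), hi)       # hard boundary handling
    return round(value / grid) * grid     # snap to the technology grid

# Example: a bias current searched between 1 uA and 1 mA on a 0.1 uA grid
# (all numbers illustrative).
i_bias = to_circuit_value(x_log=-4.2, lo=1e-6, hi=1e-3, grid=1e-7)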
Fig. 1. Flow diagram of the DE algorithm (set ranges → initialization → select base vector → DE mutation → DE crossover → DE selection → update parameters; repeat until the maximum number of generations is reached, then output).
4. Constrained analog circuit optimization problem
Though DE is very effective and efficient, it is not enough
for the sizing of analog circuits. All EC algorithms them-
selves lack a mechanism to deal with the constraints of a
problem, which remains an open research area. However,
there exist user-defined specifications for most analog circuit
design problems, and these constraints must be appropriately
handled.
The use of penalty functions is the most common method, but
it is very sensitive to the penalty coefficients and can hardly obtain satisfactory results without proper penalty coefficients. Though several penalty strategies [30,31] have been developed to improve static penalty coefficients, there is still no general rule to determine proper penalty coefficients. Michalewicz describes the difficulties of each available penalty strategy in [34]. More-
over, Michalewicz and Schoenauer [35] concluded that the
static penalty function method without any sophistication
is more robust, as one such sophisticated method may work
well on some problems but may not work well on another
problem.
In this paper, constraints are handled by using the augmented
Lagrangian method, which transforms the constrained optimiza-
tion problem into a problem amenable to the DE algorithm
described in Section 3. Penalty parameters in the augmented
Lagrangian formulation are automatically updated during execu-
tion of the algorithm to reach the optimum point, hence avoiding
the problems related to inappropriate settings of the penalty
parameters. Parameters are updated based on a co-evolution
methodology.
In recent years, co-evolution methodologies, including co-
operative and competitive mode, have attracted much attention.
Methods based on the cooperative mode mainly aim at unconstrained optimization problems [36,37]. Cooperative co-evolution for con-
strained optimization problems has been proposed in Ref. [38]. In
this paper, the competitive co-evolution concept [39-41] and the
modied DE algorithm based on the augmented Lagrangian
method are combined to formulate a hybrid algorithm, CODE,
for constrained optimization problems in analog IC synthesis. We
will begin by introducing augmented Lagrangians, and then
discuss the combination of augmented Lagrangians with compe-
titive co-evolution methodology.
4.1. Augmented Lagrangians
A constrained non-linear optimization problem can be expressed as

\[
\begin{aligned}
\text{minimize} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \le 0, \quad i = 1, \ldots, m, \\
& g_j(x) = 0, \quad j = m+1, \ldots, n.
\end{aligned}
\qquad (8)
\]
These functions can be combined into a single transformation function Φ, called the augmented Lagrangian [42]:

\[
\Phi(x, r, \theta) = f(x)
+ \frac{1}{2} \sum_{i=1}^{m} r_i \left[ \big( g_i(x) + \theta_i \big)^2 - \theta_i^2 \right]
+ \frac{1}{2} \sum_{j=m+1}^{n} r_j \left[ \big( g_j(x) + \theta_j \big)^2 - \theta_j^2 \right].
\qquad (9)
\]
In analog circuit design, equality constraints are limited to current and voltage relationships imposed by Kirchhoff's laws (KCL and KVL). Kirchhoff's laws are automatically included in the circuit equations by electrical simulators, so only the inequality constraints defined by the design specifications have to be taken into account in the optimization process. Therefore, the augmented Lagrangian formulation is reduced to:

\[
\Phi(x, r, \theta) = f(x) + \frac{1}{2} \sum_{i=1}^{m} r_i \left[ \big( g_i(x) + \theta_i \big)^2 - \theta_i^2 \right].
\qquad (10)
\]
Here, x is the vector of decision variables, r is the vector of penalty parameters, and μ_i = r_i·θ_i is the Lagrangian multiplier associated with the ith constraint. This process is repeated until convergence.
The optimization objective is to find the saddle point (x⁰, λ⁰), such that:

\[
\Phi(x^0, \lambda) \le \Phi(x^0, \lambda^0) \le \Phi(x, \lambda^0).
\qquad (11)
\]
In the Lagrangian dual method, such a saddle point does not exist for non-convex problems. However, the augmented Lagrangian addresses this problem by convexifying the objective function with quadratic penalty terms associated with the constraints [43]. For most practical problems, a saddle point always exists, and x⁰ is the optimal solution of the optimization problem.
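As an illustration, the reduced augmented Lagrangian of Eq. (10) can be evaluated for a candidate x as sketched below (in Python). The positive-part operator max(0, g_i(x) + θ_i) is the standard augmented Lagrangian treatment of inequality constraints and is assumed here; μ_i = r_i·θ_i is the multiplier associated with constraint i:

def augmented_lagrangian(f, g_list, r, theta):
    """Return Phi(x) of Eq. (10) for the objective f and inequality
    constraints written as g_i(x) <= 0 (the Eq. (8) convention). `r` and
    `theta` are the penalty parameters and multiplier-related parameters,
    one entry per constraint."""
    def phi(x):
        value = f(x)
        for g_i, r_i, th_i in zip(g_list, r, theta):
            t = max(0.0, g_i(x) + th_i)          # positive part (assumed)
            value += 0.5 * r_i * (t * t - th_i * th_i)
        return value
    return phi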
4.2. Combination of co-evolutionary methodology and augmented
Lagrangians
The main problem of the augmented Lagrangian method is how to update the Lagrange multipliers so that the algorithm converges to the saddle point and avoids local optima. A predetermined updating scheme may work well on some problems but may not work well on others. Co-evolution methodology, relying on the current evolution result of the decision variables, solves this problem.
The competitive co-evolution method was inspired by the predator-prey relationship, where organisms adapt to each other in a dynamic environment. Groups are rewarded if they defeat individuals that compete with them. The competitive co-evolution strategy can be viewed as an arms race between two groups [39,40]. To arouse competition, two populations, whose fitness functions are opposite to each other, must be generated.
In order to find the saddle point in the augmented Lagrangian, the problem can be formulated as

\[
\min_x \; \max_{\lambda} \; \Phi(x, r, \theta) = f(x) + \frac{1}{2} \sum_{i=1}^{m} r_i \left[ \big( g_i(x) + \theta_i \big)^2 - \theta_i^2 \right],
\qquad (12)
\]
where x is the vector of design variables, and λ is the vector of Lagrangian multipliers. The purpose of the first population is to minimize Φ(x, r, θ), with the individuals in this population encoding values of x and using a fixed λ generated from the second population. The evolution of this population tries to satisfy the following property of the saddle point: Φ(x⁰, λ⁰) ≤ Φ(x, λ⁰) in Eq. (11). The second population aims to maximize Φ(x, r, θ), with its individuals encoding values of λ and using a fixed x generated from the first population. The reason for this operation is Φ(x⁰, λ) ≤ Φ(x⁰, λ⁰) in Eq. (11). This process establishes an arms race between the two populations. Once the first population achieves a solution x with a previous value of λ, the second population obtains a better λ based on x to defeat it. Then the first population generates a better x to defeat the second population. Finally, the saddle point in Eq. (11) can be reached, which is the optimal point of x. The values of r and θ are initialized at the beginning of the optimization process, and are updated at the end of each cycle of DE-based optimization. The penalty coefficients should increase as r_{i+1} = r_i·α after each cycle, where r_0 and α are initialized first. The flow diagram of CODE is summarized in Fig. 2.
In the first iteration of the DE-based optimization cycle, an individual is randomly selected from the second population. Its purpose is that the first population needs a fixed Lagrangian multiplier from the second population in the first generation, but
the second population does not start its evolution operations at
the same time.
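A schematic sketch of the co-evolutionary cycle of Fig. 2 is given below (in Python), reusing the de_best_1_bin() and augmented_lagrangian() helpers sketched earlier. It is a simplified reading of the flow diagram, not the authors' exact implementation: the cycle count, the search range for θ and the absence of an explicit convergence test are assumptions.

import numpy as np

def code_optimize(f, g_list, x_bounds, n_cycles=10, r0=1.0, alpha=1.5, rng=0):
    """Alternate DE runs on the design variables x (minimizing Phi) and on the
    multiplier-related parameters theta (maximizing Phi), growing the penalty
    coefficients as r_{i+1} = alpha * r_i after each cycle."""
    m = len(g_list)
    r = np.full(m, r0)
    theta = np.ones(m)                            # initial guess for theta
    theta_bounds = np.array([[0.0, 10.0]] * m)    # assumed search range

    for _ in range(n_cycles):
        # Population A: evolve x against the current (fixed) multipliers.
        phi = augmented_lagrangian(f, g_list, r, theta)
        x_best, _ = de_best_1_bin(phi, x_bounds, generations=80, rng=rng)

        # Population B: evolve theta to maximize Phi at the current best x
        # (maximization is done by minimizing -Phi).
        def neg_phi(th, x=x_best):
            return -augmented_lagrangian(f, g_list, r, th)(x)
        theta, _ = de_best_1_bin(neg_phi, theta_bounds, generations=80, rng=rng)

        r = alpha * r                             # penalty growth after the cycle
    return x_best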
It can be seen that the CODE algorithm fully inherits the
advantages of the augmented Lagrangian method. The exact
solution can be achieved by the augmented Lagrangian method,
whereas this is not true for the static penalty function method
[43]. In addition, the advantages of the differential evolution
algorithm, as described in Section 3, are also inherited. As any
other stochastic optimization algorithm, it cannot be guaranteed
that the global optimum solution is found for every problem, but
the experimental results demonstrate that better solutions than
previous methods are obtained in all cases.
5. Experimental results
In this section, the developed algorithm will be applied to three practical analog circuit sizing problems and four mathematical benchmark problems. The three circuit sizing problems correspond to three amplifiers of increasing complexity. The purpose of these examples is to test the ability of CODE to handle highly-constrained optimization problems, its ability to handle large search spaces (large numbers of design parameters), its comparison to other optimization algorithms and its low sensitivity to the initial values of the optimization parameters. Finally, benchmark tests from the evolutionary computation field for constrained optimization are shown.
In all the examples, the DE step size F is 0.8 and the crossover
probability CR is 0.8. The number of inner generations of the population encoding values of x is 80 for problems with fewer than 20 variables and 100 for the other problems, and the number of generations of the population encoding values of λ is 80. These parameter settings are commonly used in DE-based algorithms. We used r_0 = 1 and α = 1.5 for all problems, except in the specific experiments that demonstrate the low sensitivity of the results to these parameter values. The design parameter search space is quite wide in all cases. Transistor lengths were allowed to vary from the minimum value allowed by the technological process up to 10 μm. Transistor widths were varied from the minimum technology value up to several hundred micrometers. Capacitor values, bias currents and voltages also had broad (yet reasonable) ranges.
The inputs to the system are a SPICE netlist file containing the circuit structure and the user-defined specifications. All the examples are run on a 2.4 GHz PC with 1 GB RAM, in the MATLAB environment.
Reported computation times include processing time in MATLAB,
the communication time between HSPICE and MATLAB, and the
simulation time of HSPICE.
5.1. Example 1: Design of a two-stage amplifier
The main purpose of this example is to test the capability of CODE to handle constraints. A typical Miller-compensated two-stage amplifier, shown in Fig. 3, is chosen first to test the algorithm. The technology used is a 0.25 μm CMOS process and the load capacitance C_L is 30 pF. The design parameters are: transistor widths and lengths, compensation capacitor and bias currents.
The first experiment tries to achieve the design objectives and constraints shown in Table 1. Appropriate matching constraints were established and the appropriate operating region was ensured by imposing constraints like V_DS/(V_GS − V_TH) > 1 for NMOS transistors. The same experiment was tried with the standard genetic and differential evolution algorithms, using the static penalty function method to handle constraints (denoted GA+PF and DE+PF, respectively). We tried to manually improve the penalty coefficients through five runs of the GA+PF and DE+PF algorithms. At each new run, the penalty coefficients were updated, trying to increase the relative importance of the constraints not met in the previous run. Table 1 shows the best result from these five runs. It can be seen that the GA+PF algorithm slightly violates the phase margin specification and gets a considerably higher power consumption. Both the DE+PF and CODE algorithms meet the design specifications, but CODE achieves a significantly lower power consumption. Notice that CODE was run only once, as no manual adjustment of penalty parameters is needed.

Fig. 2. Flow diagram of CODE (two populations are initialized: population A encodes the design variables and population B the multipliers; each population runs DE operations until its generation count is reached, the parameters are updated, and the cycle repeats until convergence, after which the best result is output).
Table 1 also shows the execution time of the algorithms. This
time includes the communication between MATLAB and SPICE.
Approximately, half of this time is spent in the electrical
simulator. The total CPU time can, hence, be very significantly reduced by implementing the optimizer in a compiled language and improving the efficiency of the communication between the
optimizer and the electrical simulator.
To test the ability to handle tighter specifications, let us now consider the specifications in Table 2. It can be observed that CODE is still able to meet the specifications. However, neither the GA+PF algorithm nor the DE+PF algorithm is able to, even though five different sets of penalty coefficients were tried. The design parameters obtained by CODE for this case are shown in Table 3.
An important advantage of CODE is that the algorithm has a
low sensitivity to the initial settings of the penalty parameters. To
illustrate this, Table 4 shows the results of the application of the
CODE algorithm when different initial values of the optimization
parameters are used. By comparing this table with Table 2, it can
be checked that all constraints are met independently of the initial
values of the optimization parameters. Moreover, variations in the
final value of the objective function remain below 3%.
From these experiments, we can conclude that for low
requirements, CODE and the static penalty methods work
relatively well (the latter usually needs several sets of penalty coefficients before an acceptable value is found). However, for
problems with many and restrictive constraints, such as the case
above, the drawbacks of both algorithms based on static penalty
methods, GA+PF and DE+PF, become obvious, whereas CODE
consistently performs well.
5.2. Example 2: Design of a TCFC amplifier
The second example will use an amplifier based on the
transconductance with capacitance feedback compensation
Fig. 3. The Miller-compensated two-stage amplifier.
Table 1
Specifications and results of CODE, GA+PF and DE+PF

Specifications         Constraints   CODE      GA+PF     DE+PF
DC gain (dB)           ≥ 70          76.48     72.601    80.659
GBW (MHz)              ≥ 2           2.068     4.523     2.0406
Phase margin (°)       ≥ 50          55.946    49.876    55.641
Output swing (V)       ≥ 2           2.2017    2.1256    1.9182
CMRR (dB)              ≥ 70          90.01     70.99     70.018
PSRR (dB)              ≥ 70          76.571    74.658    80.802
Input noise (nV/√Hz)   ≤ 60          53.653    50.302    57.004
Slew rate (V/μs)       ≥ 1.5         1.5209    1.649     1.503
Power (mW)             Minimize      0.73118   2.215     1.1164
Total run time (s)                   10097     10126     9865
Table 2
Specifications and results of CODE, GA+PF and DE+PF

Specifications         Constraints   CODE      GA+PF     DE+PF
DC gain (dB)           ≥ 85          86.1      78.436    85.92
GBW (MHz)              ≥ 2.5         2.5052    6.6431    2.7684
Phase margin (°)       ≥ 55          57.746    54.958    54.785
Output swing (V)       ≥ 2           2.0965    2.2274    2.0446
CMRR (dB)              ≥ 80          80.452    77.553    75.667
PSRR (dB)              ≥ 85          86.13     78.641    81.02
Noise (nV/√Hz)         ≤ 20          18.537    7.979     12.575
Slew rate (V/μs)       ≥ 1.8         1.9709    1.9249    1.8629
Power (mW)             Minimize      1.0143    2.4344    2.1298
Total run time (s)                   11206     10533     11037
Table 3
Parameters of the two-stage amplifier

W1 (μm)   10.3     W3 (μm)   99.48    W5 (μm)   99.48
W6 (μm)   85.66    W7 (μm)   46.27    L1 (μm)   4.34
L3 (μm)   0.7      L5 (μm)   4.77     L6 (μm)   0.59
L7 (μm)   3.88     Cc (pF)   40.023   Ib (mA)   0.2173
Table 4
Experiments with different initial values of r_0 and α

Specifications         Constraints   r_0 = 2, α = 2   r_0 = 1, α = 3
DC gain (dB)           ≥ 85          93.603           86.117
GBW (MHz)              ≥ 2.5         2.5072           3.3719
Phase margin (°)       ≥ 55          59.708           57.402
Output swing (V)       ≥ 2           2.029            2.07
CMRR (dB)              ≥ 80          82.9             80.873
PSRR (dB)              ≥ 85          93.622           86.154
Noise (nV/√Hz)         ≤ 20          16.204           10.088
Slew rate (V/μs)       ≥ 1.8         1.9555           2.0359
Power (mW)             Minimize      1.0535           1.0217
Total run time (s)                   10957            11031
(TCFC) technique [44]. The TCFC amplifier is shown in Fig. 4 and the target technology is a 0.35 μm CMOS process. The optimiza-
tion problem contains 36 design parameters, hence, it is
considerably more complex than the previous example.
Table 5 shows the design specifications and the results of CODE, which successfully meets the constraints. We tried to design the same circuit using the GA+PF algorithm and a set of five penalty coefficients: 20, 50, 5, 50 and 100 for the design constraints and objectives. The second column in Table 6 shows the results after the execution of the GA+PF algorithm. It can be seen that the GBW and SR specifications are not met. Therefore, we increased the penalties of both constraints, GBW and SR, by a factor of 3 and executed the algorithm again. The results, in the third column of Table 6, show that the GBW spec is now met, but the slew rate specification is still not met and now the DC gain spec is not met either. Then, we tried a third time, increasing the penalties of SR and DC gain by a factor of 2, resulting in the performances shown in the fourth column, in which all the constraints are violated. Although we tried another 10 times to adjust the penalty parameters, a satisfactory result could still not be achieved.
We followed a similar procedure with the DE+PF algorithm.
The results are shown in Table 7. It can be seen that the
constraints are met for one set of penalty coefficients in this case.
But the power consumption obtained is much higher than with
the CODE algorithm.
It can be concluded that in high-performance designs, there often exist tedious trade-offs in finding proper penalty parameters. Sometimes, a good result can be achieved with static penalty methods by using a proper set of penalty coefficients, but the search for such penalty coefficients may be a long and tedious process.
5.3. Example 3: Design of a gain-boosted folded-cascode amplifier
Finally, we will use the gain-boosted folded-cascode amplifier
in Fig. 5. This is the most complex example in this paper with
almost 50 design parameters.
Table 8 shows the specifications for this amplifier, to be designed in a 0.25 μm CMOS process. The table also shows the constraints (the dm parameters represent the ratio of the drain-source voltage over the drain-source saturation voltage) that are used to ensure that all transistors are in the saturation region. The third column shows the results of the CODE algorithm. All constraints are met. The best of five executions of the GA+PF algorithm is shown in the fourth column. Performance specifications are marginally met, but several transistors are out of the saturation region and power consumption is much higher than in the solution provided by CODE. The application of the DE+PF
Fig. 4. The TCFC amplifier.
Table 5
Specifications and results of the CODE algorithm for the TCFC amplifier

Specifications         Constraints   Result
DC gain (dB)           ≥ 80          82.3830
GBW (MHz)              ≥ 2           2.2186
Phase margin (°)       ≥ 50          54.4970
Slew rate (V/μs)       ≥ 1.5         1.56
Power (mW)             Minimize      0.1425
Total run time (s)                   12553
Table 6
Results of the GA+PF algorithm (same specifications as Table 5)

Specifications PF1 PF2 PF3
DC gain (dB) 83.401 67.794 72.199
GBW (MHz) 0.50004 2.5645 1.5086
Phase margin (1) 65.336 54.98 36.5
Slew rate (V/μs) 0.0022227 0.2096 1.41
Power (mW) 0.041669 0.15166 1.06
Total run time (s) 15950 15231 15606
Table 7
Results of the DE+PF algorithm (same specifications as Table 5)

Specifications PF1 PF2 PF3
DC gain (dB) 80.217 76.901 81.083
GBW (MHz) 0.9242 1.8462 2.0853
Phase margin (1) 50.6 53.66 58.85
Slew rate (V/μs) 0.033218 0.8295 3.2081
Power (mW) 0.050 0.7234 1.6024
Total run time (s) 12486 12891 13762
algorithm is not able to meet the performance specifications and
several transistors are not in the correct operating region.
As stated above, an important advantage of the CODE
algorithm is that the competitive co-evolution adjusts the penalty coefficients to the appropriate values. As an illustration, Table 9
shows the evolution of the Lagrange multipliers for all constraints
along several cycles of the co-evolutionary algorithm. It can be
seen that in a few cycles, the multipliers converge to the right
values to achieve a proper solution.
5.4. Benchmark problems for constrained optimization
In computer science, especially in evolutionary computation,
benchmark problems are of great importance to evaluate and
compare different algorithms. Benchmark problems are tough,
and if an algorithm works well in benchmark problems, it is often
regarded as very effective for medium-sized optimization pro-
blems. Three highly-constrained benchmark problems [45],
described in the Appendix, are tested first. The results for three experiments with different numbers of generations are shown in
Tables 10 and 11. Best, worst and average results were extracted
from 15 runs of the algorithm. The results show that the proposed
algorithm for constrained optimization problems, CODE, is quite
effective.
To show the advantages of CODE, the same benchmark
problems were tried with the DE+PF and GA+PF algorithms.
Different penalty coefficients were used in each of the 15 runs and the best result among them is shown in Table 12 (this is the most favorable comparison for the GA+PF and DE+PF algorithms, since bad sets of penalty coefficients tend to decrease the mean value). The comparison of computation times is shown in Table 13.
From the comparison, we can conclude that methods based on
penalty functions are worse than the methods based on co-
evolution methods. In particular, unlike many previous works, in
all cases, we use the same optimization parameters introduced
above, which have not undergone any specific calculation in view of different problems; that is, the algorithm can achieve good results without detailed parameter studies.

Fig. 5. (a) Gain-boosted folded-cascode amplifier; (b) P amplifier and (c) N amplifier.
In addition, in order to test the ability of CODE to deal with
active constraints in optimization problems, the benchmark
problem in Ref. [46] was selected (Test problem 4 in the Appendix). In this problem, both constraints are active at the optimum. The best results of CODE, GA+PF and DE+PF after 15 runs are -5.5080, -5.6833 and -5.5538, respectively.
6. Conclusions
This paper presents CODE: an evolutionary-based system for
parameter-level design of analog integrated circuits. The basic
elements of the system are a co-evolutionary methodology based
on a differential evolution algorithm and the use of augmented
Lagrangians to represent the constrained non-linear optimization
problem. CODE achieves the following three novel features: (1) it
avoids the tedious tuning of penalty coefficients, (2) it can closely meet the designer's specifications even for highly-constrained problems, and (3) it is suitable for medium- or large-scale problems. Moreover, CODE is efficient.
Acknowledgments
This work has been supported by National Natural Science
Foundation of China grant no. 60676012 and Special Funds for
Table 8
Specifications and results of CODE, GA+PF and DE+PF

Specifications       Constraints   CODE      GA+PF        DE+PF
DC gain (dB)         > 80          112.17    79.995       69.953
GBW (MHz)            > 250         256.88    253.93       250.34
Phase margin (°)     > 65          70.128    72.665       72.94
Gain margin          < 1           0.99952   0.90579      2.7761
dm1a                 > 1.2         19.162    19.269       14.627
dm2                  > 1.2         5.9545    5.0727       0.54239
dm3a                 > 1.2         15.856    26.476       26.554
dm4a                 > 1.2         4.8841    2.029e-6     4.6684e-8
dm5a                 > 1.2         2.6609    2.3041       0.37412
dm6a                 > 1.2         8.6422    1.0149       2.6458
dm1bp                > 1.2         7.7727    4.9564       4.6678
dm3bp                > 1.2         2.8415    1.1078       0.23445
dm5bp                > 1.2         18.277    4.8062       0.66502
dm6bp                > 1.2         5.581     2.0614       0.071441
dm8bp                > 1.2         12.483    0.00039371   0.96522
dm10bp               > 1.2         4.263     8.2996       21.63
dm1bn                > 1.2         6.8212    1.6035       1.6045
dm3bn                > 1.2         6.2987    0.0082057    0.00012044
dm5bn                > 1.2         6.9101    0.0198       0.00031191
dm6bn                > 1.2         2.0146    0.00074711   0.035939
dm8bn                > 1.2         4.5137    4.0393       4.4812
dm10bn               > 1.2         2.9358    0.00020514   7.5559e-5
Power (mW)           Minimize      4.004     15.508       0.87602
Total run time (s)                 5472      6124         5220
Table 9
Evolution of Lagrangian multipliers
Specifications Cycle 1 Cycle 2 Cycle 3 Cycle 4
DC gain (dB) 3.4197 0.001318 0.00046544 0.0002567
GBW (MHz) 2.8973 0.025275 0.01 0.01
Phase margin 3.4119 0.01 0.01 0.01
Gain margin 5.3408 0.029214 0.01 0.01
dm1a 7.2711 0.0043268 0.0018774 0.0015662
dm2 3.0929 0.011876 0.0054239 0.0033819
dm3a 8.385 0.010797 0.0049222 0.0038691
dm4a 5.6807 0.032155 0.01 0.01
dm5a 3.7041 0.01 0.01 0.01
dm6a 7.0274 0.023379 0.003503 0.0020054
dm1bp 5.4657 0.0089035 0.0048879 0.0033824
dm3bp 4.4488 0.010513 0.003444 0.0017828
dm5bp 6.9457 0.034867 0.01466 0.01
dm6bp 6.2131 0.0078141 0.0027322 0.0019259
dm8bp 7.9482 0.052685 0.01 0.01
dm10bp 9.5684 0.0095579 0.0036852 0.0024163
dm1bn 5.2259 0.0048001 0.0019227 0.0015987
dm3bn 8.8014 0.00042661 0.00014712 8.7983e-5
dm5bn 1.7296 0.01 0.01 0.01
dm6bn 9.7975 0.051865 0.001417 0.0013897
dm8bn 2.7145 0.0037478 0.0028153 0.0016745
dm10bn 2.5233 0.0079334 0.0028515 0.0020671
Table 10
Results of benchmark problems as a function of the number of generations in CODE

Problem            Generations
                   100         300         500
G1     Best        -14.9501    -14.9999    -15.0000
       Average     -13.2311    -14.8068    -14.8821
       Worst       -11.4860    -13.8913    -13.9021
Table 11
Results of benchmark problems as a function of the number of generations in CODE

Problem            Generations
                   500         1000        1500
G9     Best        682.9565    682.8185    680.6900
       Average     684.4214    683.6447    682.3335
       Worst       689.3154    687.3919    686.9259
G7     Best        25.2218     25.2109     24.8946
       Average     28.2832     26.1763     25.6317
       Worst       34.4057     28.9355     27.0023
Table 12
Results of benchmark problems

Method    G1          G7         G9
GA+PF     -13.7597    30.9991    685.9112
DE+PF     -14.9583    25.0172    683.0213
CODE      -15.0000    24.8946    680.6900
Table 13
Computation time on benchmark problems

Method    G1        G7        G9
GA+PF     15.87 s   79.74 s   133.15 s
DE+PF     16.02 s   61.53 s   97.86 s
CODE      10.55 s   43.32 s   76.30 s
Major State Basic Research Projects no. 2002CB311907. We
acknowledge valuable discussions with Dr. Ziqiang Wang and
Dr. Xueyi Yu, Institute of Microelectronics of Tsinghua University,
China. We are grateful to Mr. Hannan Ma, Department of
Electronic Engineering, for his contribution to the program. Dr.
F.V. Fernández thanks the support of the TEC2004-01752 and
TEC2007-67247 Projects, funded by the Spanish Ministry of
Education and Science with support from ERDF, and by the TIC-
2532 Project, funded by Consejería de Innovación, Ciencia y Empresa, Junta de Andalucía. We also thank the reviewers for
their comments that helped to improve the presentation of the
paper.
Appendix. Description of benchmark test problems
Test problem 1

Minimize
\[
G_1(x) = 5x_1 + 5x_2 + 5x_3 + 5x_4 - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i
\]
subject to:
\[
\begin{aligned}
& 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0, \\
& 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0, \\
& 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0, \\
& -2x_4 - x_5 + x_{10} \le 0, \\
& -2x_6 - x_7 + x_{11} \le 0, \\
& -2x_8 - x_9 + x_{12} \le 0, \\
& -8x_1 + x_{10} \le 0, \\
& -8x_2 + x_{11} \le 0, \\
& -8x_3 + x_{12} \le 0,
\end{aligned}
\]
with
\[
0 \le x_i \le 1 \;(i = 1, \ldots, 9), \qquad
0 \le x_i \le 100 \;(i = 10, 11, 12), \qquad
0 \le x_{13} \le 1 .
\]
The optimum solution is x* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) and the function value is G_1(x*) = -15.
Test problem 2

Minimize
\[
G_9(x) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10 x_5^6 + 7 x_6^2 + x_7^4 - 4 x_6 x_7 - 10 x_6 - 8 x_7
\]
subject to
\[
\begin{aligned}
& 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 - 127 \le 0, \\
& 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 - 282 \le 0, \\
& 23x_1 + x_2^2 + 6x_6^2 - 8x_7 - 196 \le 0, \\
& 4x_1^2 + x_2^2 - 3x_1 x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0,
\end{aligned}
\]
with
\[
-10 \le x_i \le 10, \quad i = 1, \ldots, 7 .
\]
The optimum solution is
\[
x^* = (2.330499,\; 1.951372,\; -0.4775414,\; 4.365726,\; -0.6244870,\; 1.038131,\; 1.594227)
\]
and the function value is G_9(x^*) = 680.6300573.
Test problem 3

Minimize
\[
\begin{aligned}
G_7(x) = {} & x_1^2 + x_2^2 + x_1 x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 \\
& + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45
\end{aligned}
\]
subject to
\[
\begin{aligned}
& 105 - 4x_1 - 5x_2 + 3x_7 - 9x_8 \ge 0, \\
& -3(x_1 - 2)^2 - 4(x_2 - 3)^2 - 2x_3^2 + 7x_4 + 120 \ge 0, \\
& -10x_1 + 8x_2 + 17x_7 - 2x_8 \ge 0, \\
& -x_1^2 - 2(x_2 - 2)^2 + 2x_1 x_2 - 14x_5 + 6x_6 \ge 0, \\
& 8x_1 - 2x_2 - 5x_9 + 2x_{10} + 12 \ge 0, \\
& -5x_1^2 - 8x_2 - (x_3 - 6)^2 + 2x_4 + 40 \ge 0, \\
& 3x_1 - 6x_2 - 12(x_9 - 8)^2 + 7x_{10} \ge 0, \\
& -0.5(x_1 - 8)^2 - 2(x_2 - 4)^2 - 3x_5^2 + x_6 + 30 \ge 0,
\end{aligned}
\]
with
\[
-10 \le x_i \le 10, \quad i = 1, \ldots, 10 .
\]
The optimum solution is
\[
x^* = (2.171996,\; 2.363683,\; 8.773926,\; 5.095984,\; 0.9906548,\; 1.430574,\; 1.321644,\; 9.828726,\; 8.280092,\; 8.375927)
\]
and the function value is G_7(x^*) = 24.3062091.
Test problem 4

Minimize
\[
G_0(x) = -x_1 - x_2
\]
subject to:
\[
\begin{aligned}
& x_2 - 2x_1^4 + 8x_1^3 - 8x_1^2 - 2 \le 0, \\
& x_2 - 4x_1^4 + 32x_1^3 - 88x_1^2 + 96x_1 - 36 \le 0,
\end{aligned}
\]
with
\[
0 \le x_1 \le 3, \qquad 0 \le x_2 \le 4 .
\]
The optimum solution is x* = (2.32952024, 3.17849288) and the function value is G_0(x*) = -5.508013271.
References
[1] D. Nam, Y. Seo, L. Park, C. Park, B. Kim, Parameter optimization of an on-chip
voltage reference circuit using evolutionary programming, IEEE Trans. Evol.
Comput. 5 (4) (2001) 414-421.
[2] E. Martens, G. Gielen, Classification of analog synthesis tools based on their architecture selection mechanisms, Integr. VLSI J. 41 (2) (2008) 238-252.
[3] M. Krasnicki, R. Phelps, J. Hellums, M. McClung, R. Rutenbar, L. Carley, ASF: a
practical simulation-based methodology for the synthesis of custom analog
circuits, Proc. Int. Conf. Comput. Aid. Design (2001) 350-357.
[4] M.G.R. Degrauwe, O. Nys, E. Dijkstra, J. Rijmenants, S. Bitz, B.L.A.G. Goffart,
E.A. Vittoz, S. Cserveny, C. Meixenberger, G. van der Stappen, H.J. Oguey, IDAC:
an interactive design tool for analog CMOS circuits, IEEE J. Solid-State Circuits
22 (6) (1987) 1106-1116.
[5] R. Harjani, R.A. Rutenbar, L.R. Carley, OASYS: a framework for analog circuit
synthesis, IEEE Trans. Comput. Aid. Design 8 (12) (1989) 1247-1265.
[6] C.A. Makris, C. Toumazou, Analog IC design automation: Part II. Automated
circuit correction by qualitative reasoning, IEEE Trans. Comput. Aid. Design 14
(2) (1995) 239-254.
[7] E. Ochotta, R. Rutenbar, L.R. Carley, Synthesis of high-performance analog
circuits in ASTRX/OBLX, IEEE Trans. Comput. Aid. Design 15 (1996) 273-294.
[8] G. Gielen, H. Walsharts, W. Sansen, Analog circuit design optimization based
on symbolic simulation and simulated annealing, IEEE J. Solid-State Circuits
25 (3) (1990).
[9] P.C. Maulik, L.R. Carley, D.J. Allstot, Sizing of cell-level analog circuits using
constrained optimization techniques, IEEE J. Solid-State Circuits 28 (3) (1993)
233-241.
[10] S. Wang, H. Gao, Z. Wang, The research of parameter optimization design
method for analog circuit, in: Proceedings of the Instrumentation and
Management Technology Conference, 2004, pp. 1515-1518.
[11] M. Hershenson, S. Boyd, T. Lee, Optimal design of a CMOS op-amp via
geometric programming, IEEE Trans. Comput. Aid. Design of ICs 20 (1) (2001)
1-21.
[12] W. Nye, D.C. Riley, A. Sangiovanni-Vincentelli, A.L. Tits, DELIGHT. SPICE: an
optimization-based system for the design of integrated circuits, IEEE Trans.
Comput. Aid. Design 7 (4) (1988) 501-519.
[13] R. Phelps, M.J. Krasnicki, R.A. Rutenbar, L.R. Carley, J.R. Hellums, Anaconda:
simulation-based synthesis of analog circuits via stochastic pattern search,
IEEE Trans. Comput. Aid. Design 19 (6) (2000) 703-717.
[14] F. Medeiro, R. Rodríguez-Macías, F.V. Fernández, R. Domínguez-Castro, J.L. Huertas, A. Rodríguez-Vázquez, Global design of analog cells using statistical optimization techniques, Analog Integr. Circuits Signal Process. 6 (3) (1994) 179-195.
[15] R. Castro-López, F.V. Fernández, O. Guerra, A. Rodríguez, Reuse Based
Methodologies and Tools in the Design of Analog and Mixed-Signal Integrated
Circuits, Springer, 2006.
[16] G. Stehr, M. Pronath, F. Schenkel, H. Graeb, K. Antreich, Initial sizing of analog
integrated circuits by centering within topology-given implicit specifications, in: Proceedings of the IEEE/ACM International Conference of Computer Aided Design, 2003, pp. 241-246.
[17] S. Balkir, G. Dundar, G. Alpaydin, Evolution based synthesis of analog
integrated circuits and systems, in: Proceedings of the NASA/DoD Conference
on Evolution Hardware, 2004.
[18] K. Takemura, T. Koide, H. Mattausch, T. Tsuji, Analog-circuit-component
optimization with genetic algorithm, in: Proceedings of the 47th IEEE
International Midwest Symposium on Circuits and Systems, 2004,
pp. 489-492.
[19] M. Barros, G. Neves, J. Guilherme, N. Horta, An evolutionary optimization
kernel with adaptive parameters applied to analog circuit design, in:
Proceedings of the International Symposium on Signals, Circuits and Systems,
2005, pp. 545-548.
[20] C. Goh, Y. Li, GA automated design and synthesis of analog circuits with practical constraints, IEEE Cong. Evol. Comput. 1 (2001) 170-177.
[21] J. Koza, F. Bennett, D. Andre, M. Keane, F. Dunlap, Automated synthesis of
analog electrical circuits by means of genetic programming, IEEE Trans. Evol.
Comput. 1 (2) (1997) 109-128.
[22] G. Alpaydin, S. Balkir, G. Dundar, An evolutionary approach to automatic
synthesis of high performance analog integrated circuits, IEEE Trans. Evol.
Comput. 7 (3) (2003).
[23] W. Kruiskamp, D. Leenaerts, Darwin: CMOS opamp synthesis by means of a
genetic algorithm, in: Proceedings of the ACM/IEEE Design Automation
Conference, 1995, pp. 433-438.
[24] Virtuoso NeoCircuit circuit sizing and optimization. Homepage: http://www.cadence.com/products/custom_ic/neocircuit/index.aspx.
[25] http://www.mathworks.com/products/matlab/.
[26] http://www.synopsys.com/products/mixedsignal/hspice/hspice.html.
[27] G. Rudolph, Convergence analysis of canonical genetic algorithms, IEEE Trans.
Neural Netw. 5 (1994) 96-101.
[28] J. Kennedy, R. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann
Publishers, San Francisco, 2001.
[29] R. Storn, K. Price, Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces, J. Global Opt. 11 (1997) 341-359.
[30] J. Joines, C. Houck, On the use of nonstationary penalty functions to solve
nonlinear constrained optimization problems with GAs, Proc. IEEE Int. Conf.
Evol. Comput. (1994) 579-584.
[31] K. Deb, An efficient constraint handling method for genetic algorithms, Comput. Methods Appl. Mech. Eng. 186 (2000) 311-338.
[32] K. Price, R. Storn, Differential evolution. Homepage: http://www.icsi.berkeley.edu/~storn/code.html.
[33] K. Price, R. Storn, J. Lampinen, Differential Evolution. A Practical Approach to
Global Optimization, Springer, 2005.
[34] Z. Michalewicz, Genetic algorithms, numerical optimization, and constraints,
in: L. Eshelman (Ed.), Proceedings of the Sixth International Conference on
Genetic Algorithms, Morgan Kaufman, San Mateo, 1995, pp. 151-158.
[35] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evol. Comput. 4 (1) (1996) 1-32.
[36] Y. Shi, H. Teng, Z. Li, Cooperative co-evolutionary differential evolution for
function optimization, ICNC (2) 2005, Lecture Notes in Computer Science,
2005, pp. 1080-1088.
[37] M.A. Potter, K.A. De Jong, A cooperative co-evolutionary approach to function
optimization, in: Proceedings of the Third Parallel Problem Solving From
Nature, Jerusalem, Israel, 1994, pp. 249-257.
[38] B. Liu, H. Ma, X. Zhang, B. Liu, A memetic cooperative co-evolutionary
differential evolution algorithm for constrained optimization problems, IEEE
Cong. Evol. Comput., 2007.
[39] M. Kirley, A co-evolutionary genetic algorithm for job shop scheduling
problems, in: Proceedings of 3rd International Conference on Knowledge-
Based Intelligent Information Engineering System, Australia, 1999, pp. 84-87.
[40] J. Paredis, Steps towards co-evolutionary classification neural networks, Artificial Life 5, MIT Press, 1994.
[41] H. Barbosa, A coevolutionary genetic algorithm for constrained optimization,
IEEE Cong. Evolut. Comput. (1999) 1605-1611.
[42] R.T. Rockafellar, Convex Analysis, 1972.
[43] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, Nonlinear Programming: Theory and
Algorithms, second ed., Wiley, New York, 1993.
[44] X. Peng, W. Sansen, Transconductance with capacitances feedback compensation for multistage amplifiers, IEEE J. Solid-State Circuits 40 (7) (2005) 1514-1520.
[45] S. Koziel, Z. Michalewicz, Evolutionary algorithms, homomorphous
mappings, and constrained parameter optimization, Evol. Comput. 7 (1)
(1999) 19-44.
[46] Z. Michalewicz, Evolutionary computation techniques for nonlinear program-
ming problems, Int. Trans. Oper. Res. 1 (2) (1994) 223-240.
Bo Liu was born in Beijing, China, on September 23,
1984. He is currently a senior undergraduate student in
Tsinghua University, Beijing, China. Since 2005, he has
been a Research Assistant at the Tsinghua National
Laboratory for Information Science and Technology,
Beijing, China. Since 2007, he has been a Research
Assistant at the CAD Laboratory, Institute of Micro-
electronics, Tsinghua University, Beijing, China. He is
also a member of Analog Design Automation Research
Group of University of Sevilla, Sevilla, Spain. His
research interests include analog integrated circuit
synthesis and evolutionary computation algorithms.
Yan Wang received the B.S. and M.S. degrees in
electrical engineering from Xi'an Jiaotong University, Xi'an, China, in 1988 and 1991, respectively, and the
Ph.D. degree in semiconductor device and physics from
the Institute of Semiconductors, Chinese Academy of
Science, Beijing, China, in 1995. Since 1999, she has
been a Professor with the Institute of Microelectronics,
Tsinghua University, Beijing, China. Her research
focuses on semiconductor device modeling.
Zhiping Yu graduated from Tsinghua University, Beij-
ing, China, in 1967 with B.S. degree. He received his
M.S. and Ph. D degrees from Stanford University,
Stanford, CA, USA in 1980, and 1985, respectively. He
is presently the professor in the Institute of Microelec-
tronics, Tsinghua University, Beijing, China. From 1989
to 2002, he has been a senior research scientist in the
Dept. of Electrical Engineering in Stanford University,
USA, while serving as the faculty member in Tsinghua.
He returned to Tsinghua full time since September
2002 and holds Pericom Microelectronics Professor-
ship (2002-2004) established by Pericom Semiconduc-
tor Corp. in San Jose, USA. His research interests
include device simulation for nano-scale MOSFETs, quantum transport in
nanoelectronic devices, compact circuit modeling of passive and active compo-
nents in RF CMOS, and numerical analysis techniques.
Leibo Liu received the B.S. and Ph.D. degrees in
electronic engineering from Tsinghua University, Beij-
ing, China, in 1999 and 2004, respectively. He is
currently an associate professor with the Institute of
Microelectronics, Tsinghua University, Beijing, China.
His research interests include reconfigurable processor
design and design methodologies, and system-level
energy-aware design approaches for portable applica-
tions.
Miao Li received the B.S. in electronic engineering from
Tsinghua University, PR China, in 2006. He was at the
Institute of Microelectronics of Tsinghua University
from September 2006 until now. His research focuses on modeling of III-V compound semiconductor
materials and devices.
Zheng Wang is an undergraduate student in electronic
engineering from Tsinghua University, Beijing, China.
Jing Lu received the B.S. in electronic engineering from
Tsinghua University, China, in 2005. She was a
graduate student at the Institute of Microelectronics
of Tsinghua University from 2005 and will receive the
master's degree in 2008. Her research focuses on modeling of III-V compound semiconductor materials
and devices.
F.V. Fernández received the Physics-Electronics degree from
the University of Seville in 1988 and his Ph.D. degree in
1992. In 1993, he worked as a postdoctoral research
fellow at Katholieke Universiteit Leuven (Belgium).
Since 1995, he is an Associate Professor at the
Department of Electronics and Electromagnetism of
University of Sevilla. He is also a researcher at CSIC-
IMSE-CNM. His research interests lie in the design and
design methodologies of analog and mixed-signal
circuits. Dr. Fernández has authored or edited three
books and has co-authored more than 100 papers in
international journals and conferences. Dr. Fernández
is currently the Editor-in-Chief of Integration, the VLSI
Journal (Elsevier). He regularly serves on the Program Committee of several
international conferences. He has also participated as researcher or main
researcher in several National and European R&D projects.