Abstract: We discuss implicit and explicit knowledge representation mechanisms for evolutionary algorithms (EAs). We also
describe offline and online metaheuristics as examples of explicit
methods to leverage this knowledge. We illustrate the benefits
of this approach with four real-world applications. The first
application is automated insurance underwriting, a discrete classification problem that requires a careful tradeoff between the
percentage of insurance applications handled by the classifier and
its classification accuracy. The second application is flexible design
and manufacturing, a combinatorial assignment problem in which
we optimize design and manufacturing assignments with respect
to the time and cost of design and manufacturing for a given product.
Both problems use metaheuristics as a way to encode domain
knowledge. In the first application, the EA is used at the metalevel,
while in the second application, the EA is the object-level problem
solver. In both cases, the EAs use a single-valued fitness function
that represents the required tradeoffs. The third application is a
lamp spectrum optimization that is formulated as a multiobjective optimization problem. Using domain customized mutation
operators, we obtain a well-sampled Pareto front showing all
the nondominated solutions. The fourth application describes a
scheduling problem for the maintenance tasks of a constellation
of 25 low earth orbit satellites. The domain knowledge in this
application is embedded in the design of a structured chromosome, a collection of time-value transformations to reflect static
constraints, and a time-dependent penalty function to prevent
schedule collisions.
Index Terms: Automated insurance underwriting, combinatorial optimization, design and manufacturing planning,
evolutionary algorithms (EAs), knowledge representation, lamp
spectrum optimization, metaheuristics, multiobjective optimization, satellite scheduling, soft computing.
I. INTRODUCTION
when efficient encoding schemes and good strategies to maintain population diversity are used. Integration of fuzzy systems
and evolutionary algorithms allows the represented knowledge
to help initialize and guide the search for a solution in an efficient manner [10].
We also make a few remarks regarding the customization aspects of the algorithms. The NFLT formally supports leveraging
domain knowledge within the search algorithm. However, ad
hoc approaches for representing and integrating such knowledge
could lead to algorithms that are virtually impossible to maintain
over time. It is essential to have a process in which the knowledge is described explicitly rather than implicitly via procedural
changes. The use of hybrid systems based on an explicit knowledge base (KB), together with the automated tuning of the KB, allows us
to deploy high-performance systems that can also be supported
and updated during their lifecycle. This concept is illustrated in
the first two applications described in this paper, where we use
metaheuristics, and is further described in [11] and [12].
B. Structure of This Paper
In the next section, we address the issue of knowledge
representation in evolutionary algorithms and consider implicit
representation approaches (specialized data structure, encoding,
and constraints) and explicit representation approaches (population initialization, customized variational operators, offline
and online metaheuristics). We describe the role of hybrid
fuzzy systems in incorporating domain knowledge within evolutionary algorithms and emphasize the natural framework that
they provide for these hybridizations. In the following sections,
we illustrate four real-world applications in which hybrid soft
computing systems were successfully developed, tested, and
in some cases deployed. All applications are described using a
common six-part structure: a) problem description; b) related
work; c) solution architecture; d) domain knowledge representation; e) results; and f) remarks. Our intention is to present
them as self-contained units, while highlighting their common
problem-solution aspects and the benefits of incorporating domain
knowledge into the algorithms.
The first application is the automation of risk classification
in underwriting life insurance applications. This application
showcases the use of an evolutionary algorithm as an offline
metaheuristic used to tune the parameters of a fuzzy classifier.
The second application describes the optimization of design
and manufacturing planning and illustrates the role of a fuzzy
system as an online metaheuristic to control the real-time parameters of a genetic algorithm. The third application addresses
the optimization of lamp spectrum by explicitly incorporating
domain knowledge into the variational operators used by the
evolutionary algorithms. The last application is devoted to the
scheduling of the maintenance tasks for a constellation of 25
satellites in low earth orbit (LEO). This application illustrates
the use of implicit knowledge representation approaches, in
which we used customized data structures to reflect static
constraints (verifiable at compile time) and penalty functions
to represent dynamic constraints. In the Conclusions section,
we note again the importance of representing and integrating
domain knowledge with search algorithms and the natural ease
performed by the EAs. In this case, the knowledge of the landscape is probed by local hill-climbing searches that improve
each individual in the population before sharing information
through crossover operators and competing via the selection
process. Usually these approaches are referred to as hybrid
genetic algorithms or genetic local search. Memetic algorithms
(MAs) [20], [21] are a prime example of combining local search
heuristics with crossover operators. In MAs, each individual
attempts to improve its performance up to a predetermined
level by exploring possible correlations in the landscape. Then
they recombine to create new individuals and use crossover
operators to share this new information about local optima.
Finally, the new improved offspring compete among themselves through a selection process. A similar philosophy can
be found in the approaches suggested by Renders and Bersini
[22], named genetic hill climbing (GHC) and the simplex GA.
GHC also interweaves evolution with local learning, with its
hill-climbing component acting as an accelerator for the fitness
of the GA population. In this approach, individuals extend
their life and improve their fitness using a small number of
local search steps before undergoing crossover, mutation, and
selection. The simplex GA uses search mechanisms derived
from the simplex method to design new crossover operators.
In all these cases, the computational complexity increases but
the overall quality of the solution improves and is attained in a
small number of generations [22].
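The memetic loop just described (local improvement, then recombination, then selection) can be sketched as follows; the toy landscape, step sizes, and population settings are illustrative assumptions, not taken from [20]-[22]:

```python
import random

def fitness(x):
    # Toy landscape: maximize the negative of a shifted quadratic.
    return -sum((xi - 3.0) ** 2 for xi in x)

def hill_climb(ind, steps=10, step_size=0.1):
    # Local improvement: accept small perturbations that raise fitness.
    best = list(ind)
    for _ in range(steps):
        cand = [xi + random.uniform(-step_size, step_size) for xi in best]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def memetic_search(pop_size=20, dim=3, generations=30):
    random.seed(0)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # 1) Local search improves each individual before recombination.
        pop = [hill_climb(ind) for ind in pop]
        # 2) Crossover shares information about local optima.
        offspring = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            cut = random.randrange(1, dim)
            offspring.append(a[:cut] + b[cut:])
        # 3) Selection: parents and offspring compete; keep the best.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = memetic_search()
```

The interleaving order (improve, recombine, select) is the distinguishing design choice; a plain GA would omit the hill-climbing step.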
3) Variational and Selection Operators: Another convenient method to embed knowledge in EAs is via customization
of the variational and selection operators. Mutation operators
should be designed to take into account static constraints
embedded in the data structure and, if needed, should be
augmented by a repair mechanism to fix infeasible solutions
created by the mutation. For instance, when a Gaussian mutation generates an infeasible value for an allele, it could be reapplied to the original allele value with a systematically decreasing standard deviation, iterating until the new value no longer violates the constraint.
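A minimal sketch of such a repair mechanism, assuming a simple interval constraint on the allele (the bounds, shrink factor, and retry limit are illustrative):

```python
import random

def bounded_gaussian_mutation(value, sigma, low, high, max_tries=20):
    """Gaussian mutation that retries with a shrinking standard deviation
    until the mutated allele falls back inside the feasible interval."""
    for _ in range(max_tries):
        candidate = random.gauss(value, sigma)
        if low <= candidate <= high:
            return candidate
        sigma *= 0.5  # systematically decrease the spread and retry
    return value  # fall back to the original allele if no feasible draw

random.seed(1)
mutated = [bounded_gaussian_mutation(9.9, 2.0, 0.0, 10.0) for _ in range(100)]
```

Because the spread shrinks geometrically, the loop converges toward the original (feasible) allele, so termination is guaranteed even near a constraint boundary.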
b) Offline tuning: Another option is to tune these parameters offline (usually with a meta-EA). This method embeds
knowledge in the design of the meta-EA so that the tuned values
provide optimal performance for the metrics of interest. This
method was first proposed by Grefenstette [28], who used a metalevel EA to search for the optimal parameter values for the
same suite of five object-level landscapes used in De Jong's
early work. In this case, a metalevel EA replaced the manual
analysis performed by De Jong. Each individual in the meta-EA
represented an instance of six parameter values for the object-level EA. Each trial at the metalevel EA caused the object-level
EA to run with the instantiated parameters for its maximum
number of generations. The results of the runs, evaluated in
terms of performance measures, provided the fitness value for
the individual in the metalevel EA. Note that, given the implications of the NFLT, the optimized parameters obtained by the
meta-EA should be recomputed for each type of problem. However, for most practical applications, such an a priori tuning may
be very expensive or time consuming, or both.
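The offline scheme can be sketched as follows. For brevity, the meta-level searcher here is plain random search standing in for a metalevel EA, and the object-level EA is a toy ones-counting optimizer; all names and parameter ranges are illustrative assumptions:

```python
import random

def object_level_ea(mutation_rate, pop_size, generations=20):
    # Toy object-level EA: maximize the number of ones in a bitstring.
    n = 16
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)
        parents = pop[: max(2, pop_size // 2)]
        pop = [
            [1 - g if random.random() < mutation_rate else g
             for g in random.choice(parents)]
            for _ in range(pop_size)
        ]
    return max(sum(ind) for ind in pop)  # best fitness found

def meta_search(trials=15):
    # Meta level: each "individual" is a parameter setting for the
    # object-level EA; its fitness is the object-level result it yields.
    random.seed(2)
    best_params, best_score = None, -1
    for _ in range(trials):
        params = (random.uniform(0.001, 0.3), random.choice([10, 20, 40]))
        score = object_level_ea(*params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = meta_search()
```

As the text notes, each meta-level trial requires a full object-level run, which is why this a priori tuning can be very expensive in practice.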
c) Online control: The third option is to tune the parameters online via some control mechanism. As depicted in Fig. 1,
these corrections can be generated by an open loop, deterministic procedure (e.g., a scheduler), by a closed loop, adaptive
system (e.g., a controller), or by a self-adaptation mechanism.
The first option is to use a deterministic schedule to allocate
resource changes over time. Unfortunately, obtaining an optimal schedule might be as difficult as the original optimization
problem. The second option is to use a controller that explicitly represents the best heuristics regarding such resource allocation. Recent research into using fuzzy logic controllers to provide an online closed-loop control system for EA parameters
shows promising results [23], [29]. These controllers dynamically manipulate parameters such as population size and mutation rate based on metrics extracted during run time. In this
case, the knowledge of the problem is embedded in the heuristics of the controller. The third option is to tune the parameters
by adding them to the genomes so that they evolve with the population, as in evolutionary strategies with self-adaptation [30],
which use this technique to embed knowledge in the representation of individuals.
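A crisp stand-in for such a closed-loop controller is sketched below; the thresholds and gains are invented for illustration (the cited work [23], [29] uses fuzzy rule bases rather than hard thresholds):

```python
def control_mutation_rate(diversity, rate, low=0.2, high=0.6,
                          min_rate=0.001, max_rate=0.25):
    """Closed-loop heuristic: raise the mutation rate when population
    diversity is low (promote exploration), lower it when diversity is
    high (let selection exploit), and leave it alone in between."""
    if diversity < low:
        rate *= 1.5
    elif diversity > high:
        rate *= 0.75
    return min(max(rate, min_rate), max_rate)

# The controller is invoked once per generation with a run-time metric.
rate = 0.01
trace = []
for diversity in [0.9, 0.7, 0.5, 0.3, 0.1, 0.05]:
    rate = control_mutation_rate(diversity, rate)
    trace.append(rate)
```

A fuzzy controller would interpolate smoothly between these actions instead of switching at the thresholds, which is exactly the benefit the text attributes to the fuzzy approach.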
C. Metaheuristics: A Generalization
The offline tuning and online control of EA parameters described in the previous sections are specific instances of metaheuristics applied to an EA-based problem solver. In general,
metaheuristics are heuristic procedures defined at the metalevel
to control, guide, tune, and allocate computational resources for,
or reason about, object-level problem-solvers in order to improve their quality, performance, or efficiency. The use of metareasoning is a common approach in artificial intelligence (AI),
e.g., in resource-bounded agent applications [31], planning [32],
machine learning and case-based reasoning [33], [34], real-time
fuzzy reasoning systems [35], etc. Therefore, it is a natural step
to extend it from its AI origins to the field of soft computing,
and in particular to evolutionary algorithms.
Offline metaheuristics are used when we are not concerned
with runtime modifications or control of the object-level
problem-solver. In these cases, the goal is usually to define the
Fig. 3.
policies of efficiency against performance, within safety constraints. Fuzzy supervisory controllers execute these policies in
a smooth fashion, interpolating instead of switching among operational modes [39]. Metacontrollers for EAs play a similar
role. As shown in Fig. 3, they monitor the performance of the
object-level EAs, and by leveraging the same interpolation capabilities of the supervisory fuzzy controllers, they control key
resources and parameters to provide smooth transitions from the
exploration to the exploitation stages of the EAs. In Section VI,
we will describe the implementation of online metaheuristics
using a fuzzy controller to improve the runtime performance of
an EA.
Metaheuristics provide an explicit way to represent domain
knowledge to guide an EA-based object-level solver. However,
in light of the NFLT implications, we need to remind ourselves
that: a) the use of metaheuristics will improve the performance
of optimization algorithms for a subset of problems and b) the
metaheuristics will not be universal, but specific for a problem
or a class of problems. Not even the KB used by the online
adaptation scheme can be considered of general applicability.
These conclusions have been validated by several experiments
in the parametric control of EAs [23], [29] and were originally
reported in [26].
III. HYBRID SOFT COMPUTING: A UNIFIED FRAMEWORK FOR
DEVELOPING METAHEURISTICS
The literature covering SC is expanding at a rapid pace, as evidenced by the numerous congresses, books, and journals
devoted to this issue. Its original definition provided by Zadeh
[6] denotes systems that exploit the tolerance for imprecision,
uncertainty, and partial truth to achieve tractability, robustness,
low solution cost, and better rapport with reality. As discussed
in previous papers [8], [9], we view soft computing as the synergistic association of computing methodologies that includes
as its principal members fuzzy logic, neurocomputing, evolutionary computing, and probabilistic computing. We have also
stressed the synergy derived from hybrid SC systems that are
based on a loose or tight integration of their constituent technologies. This integration provides complementary reasoning
and search methods that allow us to combine domain knowledge and empirical data to develop flexible computing tools and
was the concatenation of all term sets. The fitness function was
a quadratic error calculated for four randomly chosen initial
conditions.
Lee and Takagi tuned both rules and term sets [59]. They
used a binary encoding for each three-tuple characterizing a triangular membership distribution. Each chromosome represents
a TSK-rule, concatenating the membership distributions in the
rule antecedent with the polynomial coefficients of the consequent. Most of the above approaches used the sum of quadratic
errors as a fitness function. Surmann et al. [60] extended this
quadratic function by adding to it an entropy term describing
the number of activated rules. Kinzel et al. [61] departed from
the string representation and used a cross-product matrix to encode the rule set (as if it were in table form). They also proposed
customized (point-radius) crossover operators that were similar
to the two-point crossover for string encoding. They first initialized the rule base according to intuitive heuristics, then used genetic algorithms to generate a better rule base, and finally tuned
the membership functions of the best rule base. This order of the
tuning process is similar to that typically used by self-organizing
controllers [62]. Herrera et al. [63] used a genetic algorithm to
tune each rule used by a fuzzy logic controller. They utilize a real
encoding for a four-parameter representation of a trapezoidal
membership function in each term set. A rule is obtained by concatenating membership functions, and each member of the genetic search population is such a concatenation of membership-function encodings.
Bonissone et al. [64] followed the tuning order suggested by
Zheng [65] for manual tuning. They began with macroscopic
effects by tuning the fuzzy logic controller state and control
variable scaling factors while using a standard uniformly spread
term set and a homogeneous rule base. After obtaining the best
scaling factors, they proceeded to tune the term sets, causing
medium-size effects. Finally, if additional improvements were
needed, they tuned the rule base to achieve microscopic effects.
Tsang and Yeung [66] describe a method that combines genetic
algorithms and neural networks for automated tuning of the parameters of a fuzzy expert system used as an advisor for job
placement. Grauel et al. [67] present a methodology for optimizing fuzzy classifiers based on B-splines using an evolutionary algorithm. The tuning algorithm maximizes the performance of breast cancer detection and at the same time minimizes
the size of the classifier. Other reports that explore the evolutionary tuning of fuzzy classifiers appear in [68]-[70].
All these approaches demonstrate the synergy obtained by
integrating domain knowledge, represented by fuzzy systems,
with robust data-driven search methods, exemplified by evolutionary algorithms.
IV. LEVERAGING DOMAIN KNOWLEDGE IN INDUSTRIAL AND
COMMERCIAL EA APPLICATIONS
To illustrate the benefits of integrating domain knowledge
with evolutionary algorithms, we discuss four real-world applications in the areas of classification, scheduling, configuration
management, and optimization. These applications were developed, tested, and in some cases deployed. The first two applications illustrate the use of domain knowledge in metaheuristics,
Fig. 4.
Fig. 5.
the earlier stages of an evolutionary search, with a gradually increasing preference toward exploitation as the search matures.
A crossover heuristic could have been employed as well in this
real-space search as an additional recombination-oriented stochastic variation operation. However, the scheduled mutation
technique achieves the purpose of transition from exploration
to exploitation that a crossover operation could realize in this
real-space search problem.
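A deterministic schedule of this kind might look as follows; the exponential form and decay constant are illustrative assumptions:

```python
import math

def scheduled_sigma(generation, max_generations, sigma0=1.0, sigma_min=0.01):
    """Deterministic schedule: the mutation step size decays exponentially
    from sigma0 (exploration) toward sigma_min (exploitation) as the
    search matures."""
    frac = generation / max_generations
    return max(sigma_min, sigma0 * math.exp(-5.0 * frac))

sigmas = [scheduled_sigma(g, 100) for g in range(0, 101, 25)]
```

Early generations mutate with large steps that sample the landscape broadly; late generations take small steps that refine the incumbent solutions, realizing the exploration-to-exploitation transition without a crossover operator.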
1) Fitness Function: In discrete classification problems
such as this one, we can use two matrices to construct the
fitness function that we want to optimize. The first matrix is a
confusion matrix CM that contains the frequencies of correct
and incorrect classifications for all possible combinations of the
standard reference decisions (SRDs)2 and classifier decisions.
The first n columns represent the n rate classes available
to the classifier. Column n+1 represents the classifier's choice
of not assigning any rate class, sending the case to a human
underwriter. The same ordering is used to sort the rows for
the SRD. The second matrix is a penalty matrix P of the same
dimensions that contains the cost of misclassification. The fitness function
combines the values of CM resulting from a test run of the
classifier configured with chromosome x with the penalty
matrix P to produce a single value

f(x) = sum_i sum_j CM_ij(x) * P_ij    (2)
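A sketch of the fitness computation in (2); the matrices below are invented for illustration, with two rate classes plus a "no decision" column:

```python
def classification_fitness(confusion, penalty):
    """Combine a confusion matrix (row: SRD, column: classifier decision,
    last column: defer to a human underwriter) with a penalty matrix of
    misclassification costs into a single scalar value."""
    return sum(
        confusion[i][j] * penalty[i][j]
        for i in range(len(confusion))
        for j in range(len(confusion[i]))
    )

# Two rate classes plus a "no decision" column; numbers are illustrative.
confusion = [
    [90, 5, 5],   # SRD class 1: 90 correct, 5 misrated, 5 deferred
    [8, 85, 7],   # SRD class 2
]
penalty = [
    [0.0, 1.0, 0.2],  # zero cost when correct, small cost for deferral
    [1.5, 0.0, 0.2],
]
cost = classification_fitness(confusion, penalty)
```

The EA would minimize this aggregate cost, which encodes the tradeoff between coverage (deferrals carry a small cost) and accuracy (misclassifications carry a large one).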
E. Results
After defining measures of coverage and (relative and global)
accuracy, we performed a comparison against the SRD. The results, partially reported in [73], show a remarkable improvement
in all measures. We evaluated the performance of the decision
systems based on the three metrics below.
2Standard reference decisions represent ground truth rate class decisions as
reached by consensus among senior expert underwriters for a set of insurance
applications.
TABLE I
TYPICAL PERFORMANCE OF THE UNTUNED AND TUNED RULE-BASED
DECISION SYSTEM (FLE)
TABLE II
MEAN (μ) AND STANDARD DEVIATION (σ) OF FLE PERFORMANCE OVER FIVE
TUNING CASE SETS COMPARED TO FIVE DISJOINT TEST SETS
TABLE IV
KNOWLEDGE BASE FOR MUTATION RATE AS A FUNCTION OF GD AND PCT
TABLE V
PERFORMANCE COMPARISON OF THE STANDARD EVOLUTIONARY ALGORITHM
(SEA) AND FUZZY STANDARD EVOLUTIONARY ALGORITHM (F-SEA). A
SHADED CELL DENOTES COMPARATIVELY SUPERIOR PERFORMANCE. A
SHADED CELL WITH THE SYMBOL DENOTES STATISTICALLY
SIGNIFICANT SUPERIOR PERFORMANCE
E. Results
In this section, we compare performance of two basic types
of evolutionary algorithms to their respective performance when
fuzzy control is introduced. The first algorithm, called the standard evolutionary algorithm (SEA), has a population size of
50, crossover rate of 0.6, and mutation rate of 0.005, uses proportional selection, applies uniform crossover to two selected
parents to produce two offspring, and completely replaces the parent population with the offspring population, with elitism. The
second algorithm, called the steady state evolutionary algorithm
(SSEA), is identical to the SEA except that only 25% of the
population is replaced at each generation. The fuzzy controlled
versions of these respective evolutionary algorithms are fuzzy
standard evolutionary algorithm (F-SEA) and fuzzy steady-state
evolutionary algorithm (F-SSEA).
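The two replacement strategies can be contrasted in a minimal sketch; the toy fitness function and populations are illustrative:

```python
def generational_replacement(parents, offspring, fitness):
    """SEA-style: offspring replace the whole parent population, except
    that the single best parent is preserved (elitism)."""
    elite = max(parents, key=fitness)
    new_pop = offspring[:]
    worst = min(range(len(new_pop)), key=lambda i: fitness(new_pop[i]))
    if fitness(elite) > fitness(new_pop[worst]):
        new_pop[worst] = elite
    return new_pop

def steady_state_replacement(parents, offspring, fitness, fraction=0.25):
    """SSEA-style: only the worst `fraction` of the parents is replaced
    by the best offspring at each generation."""
    k = int(len(parents) * fraction)
    survivors = sorted(parents, key=fitness, reverse=True)[: len(parents) - k]
    newcomers = sorted(offspring, key=fitness, reverse=True)[:k]
    return survivors + newcomers

fit = lambda x: -abs(x - 10)  # toy objective: get close to 10
parents = [1, 4, 6, 9]
offspring = [2, 3, 8, 10]
new_sea = generational_replacement(parents, offspring, fit)
new_ssea = steady_state_replacement(parents, offspring, fit)
```

Note how the steady-state variant retains most of the parent population, the slower turnover that the results below suggest delays premature convergence.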
The experimental setup is based on an underlying discrete decision problem space of the order of 6.4 [...] feasible options.
Three different minimization problems are defined over this decision space by introducing different aggregation functions of the
Cost and Time objectives, given by (4a)-(4c).
Under the respective aggregation functions, the performance
of the F-SEA is superior to that of the SEA 73% of the time;
it is statistically superior to that of the SEA 40% of the time
under one aggregation function and 60% of the time under
another. Statistically superior performance occurs
when the threshold for the corresponding t-test or F-test is
smaller than 0.05. Each such occurrence is highlighted and
marked with the symbol.
Table VI shows a performance comparison of the SSEA and
F-SSEA. The performance of the F-SSEA is superior to that of
the SSEA 73% of the time under one aggregation function and
80% of the time under another. The performance of the F-SSEA
is statistically superior to that of the SSEA 20% of the time
under one function, statistically inferior 7% of the time under a
second, and statistically superior 47% of the time under a third.
The above results support the statistically based inference that
introducing fuzzy control to manage resources of the SEA has
more of an impact than when fuzzy control is introduced to
manage resources of an SSEA. Regardless, fuzzy control is able
to improve performance of the SSEA as well.
In Table VII, we compare performance of the SEA to the
SSEA in an effort to infer relative superiority of the basic approaches. These results support the statistical inference that the
performance of the SSEA is significantly better than the performance of the SEA. A potential explanation for this phenomenon
is that, because the SSEA replaces only a portion of its population at
each generation, it delays the onset of premature convergence,
which is detrimental to the evolutionary search process.
TABLE VI
PERFORMANCE COMPARISON OF THE STEADY-STATE EVOLUTIONARY
ALGORITHM (SSEA) AND FUZZY STEADY STATE EVOLUTIONARY ALGORITHM
(F-SSEA). A SHADED CELL DENOTES COMPARATIVELY SUPERIOR
PERFORMANCE. A SHADED CELL WITH THE SYMBOL DENOTES
STATISTICALLY SIGNIFICANT SUPERIOR PERFORMANCE
TABLE VII
PERFORMANCE COMPARISON OF THE STANDARD EVOLUTIONARY ALGORITHM
AND STEADY STATE EVOLUTIONARY ALGORITHM. A SHADED CELL DENOTES
COMPARATIVELY SUPERIOR PERFORMANCE. A SHADED CELL WITH THE
SYMBOL DENOTES STATISTICALLY SIGNIFICANT SUPERIOR PERFORMANCE
F. Remarks
We have presented statistical results based on a large suite of
experiments, which support the argument that adaptive control
of an evolutionary algorithm's resources via fuzzy logic may
be realized in a simple, intuitive, and efficient manner. We have
observed that the overhead due to fuzzy control intervention is
minimal, in spite of activating the fuzzy controller for population size at each generation. This overhead could increase for
some applications that require larger and more complex fuzzy
Fig. 8. Luminous efficacy of common electric light sources (and the sulphur
lamp).
B. Related Work
MacAdam [87] presents a proof of a theorem that allows the
optimal spectral reflectance for a pigment to achieve a maximum
colorimetric purity for a given illuminant and dominant wavelength to be determined. This can also be used to determine how
an arbitrary SPD may be filtered to achieve any chromaticity at
maximum efficiency [87], [88]. However, this method offers no
guarantee that the color rendering will be acceptable and it is
likely to be poor for many chromaticities.
Koedam and Opstelten [89] use trial and error and color
theory to develop three-line spectra with high CRI on the
blackbody locus. Koedam et al. [90] examined (again via trial
and error) the effect of using three bands of differing bandwidth
on color rendering and efficacy, although they only look at a
few points. Thornton [91] uses a similar approach to explore
the tradeoff between CRI and efficiency of some three line
spectra for white light. All three of these papers identify similar
regions of the spectrum (around 450, 540, and 610 nm) as
being particularly important for color rendering in line or band
spectra.
Einhorn and Einhorn [92], Walter [93], Haft and Thornton
[94], and Opstelten et al. [95] were the first researchers to
examine in a systematic way the relationship between CRI
and efficacy for certain chromaticities, using three line or three
band spectra. All four papers present calculations, starting with
different assumptions, of CRI/efficacy Pareto optimal fronts
for different colors and note that the lighting industry was
manufacturing many lamps that were far from this front (i.e.,
even given physical limitations, there was substantial room for
improvement).
Walter [96] applied nonlinear programming to the spectrum
optimization problem for efficacy and color rendering at a particular chromaticity. Although he was never able to get the algorithm to converge, he was able to find three and four line spectra
with high efficiency and high CRI.
Other researchers have applied a variety of approaches to
solve related problems. Ohta and Wyszecki [97] use linear programming to design illuminants that render a limited number of
objects at desired chromaticities. Ohta and Wyszecki [98] use
nonlinear programming to explore the relationship between illuminant changes and color change in relation to setting toler-
Fig. 10.
Suboptimal filter.
Fig. 11.
Fig. 14. Filter transmittance for three spectra (for the given chromaticity
coordinates).
solutions that have dissimilar form but similar performance, allowing an engineer to choose from these Pareto-optimal solutions based on criteria not encoded in the fitness function (e.g.,
ease of manufacturing).
Fig. 15 is a comparison of the chromaticity-efficiency Pareto
optimal surfaces obtained using both methods for the HPS
F. Remarks
The use of domain knowledge to develop problem-specific
mutation methods had a large beneficial effect on the performance of the GA in this case. For a relatively simple problem
with a smooth spectral power distribution, as obtained from
an incandescent source, the number of function evaluations required to arrive at a solution of comparable quality is reduced by
a factor of four by using the problem specific mutation methods.
For a more complex problem, such as one with a very spiky
spectral power distribution (as obtained from a metal halide
lamp), the effect is even more profound, reducing the number
of function evaluations by a factor of 20.
VIII. SCHEDULING MAINTENANCE TASKS FOR A
CONSTELLATION OF LOW EARTH ORBITING SATELLITES
A. Problem Description
Fig. 16. Tradeoff between efficiency and CRI at each chromaticity for the
four-dimensional problem (filtered x, y, CRI, and efficiency). For ease of
visualization, chromaticity is represented here by the corresponding color
temperature on the blackbody locus.
TABLE VIII
SATELLITE-CITY INTERACTION SCHEDULE
Fig. 17.
B. Related Work
C. Solution Architecture
As illustrated in Fig. 17, the EA utilizes an event simulation
to determine the total coverage provided in a given schedule.
This simulation is provided by a data structure that is constructed based on a known schedule of coverage events and the
EA scheduled task-starting times. After constructing the data
structure, the final adjusted coverage events are read directly
from the data structure. These final events have been reduced
in accordance with any overlapping tasks.
TABLE IX
INTERACTIONS BETWEEN TASKS AND EVENTS
Within: indicates that the usable time for the task is contained entirely within the event.
No Overlap: indicates that the task is not allowed to intersect the other
event or task type.
As these schedule conflicts arise dynamically, they must be
resolved dynamically as well. In this case we opted for a penalty
function that penalizes an individual for invalid overlapping
tasks. This constraint was enforced with a dynamic
penalty so as to maintain diversity. Over time, that is, in later
generations, the penalty for this collision is more severe than in
early generations
F = f - w * c * g^a    (8)

In the above equation, F is the final fitness, f is the fitness before any penalty is applied, g is the current generation, a is a linearity factor, and
w is a constant weight to be multiplied with the number of collisions c. This method was important in that it allowed for greater
diversity in early generations, which increased the likelihood of
optimum results in later generations.
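A possible realization of such a generation-dependent penalty is sketched below; the functional form and constants are our own illustration, not the exact expression used in the application:

```python
def penalized_fitness(raw_fitness, collisions, generation,
                      weight=5.0, linearity=1.2):
    """Time-dependent penalty: each schedule collision costs more in later
    generations, so early populations stay diverse while late populations
    are pushed hard toward collision-free schedules."""
    penalty = weight * collisions * (generation ** linearity)
    return raw_fitness - penalty

early = penalized_fitness(1000.0, collisions=2, generation=1)
late = penalized_fitness(1000.0, collisions=2, generation=50)
```

With these illustrative constants, two collisions barely dent the fitness in generation 1 but dominate it by generation 50, which is the intended pressure profile.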
Fig. 18.
Fig. 19. This set of static constraints starts with valid and potentially
overlapping windows of starting times and maps them to a continuous range of
valid start times.
period, we truncate the period to account for the time that is not
available for this task.
As shown in Fig. 18, we map real time (except for eclipses)
into a truncated time horizon that represents only valid start
times. During the evaluation of this individual, we can then calculate a mapping from our truncated starting time to the actual
starting time.
A similar problem arises for tasks that must occur while
a satellite is in contact with a ground station. Since we know
when valid times for this task occur, we can once again create
an artificial time scale that incorporates only the valid range of
times for the tasks. This is shown in Fig. 19.
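The time-value transformation of Figs. 18 and 19 can be sketched as follows, with hypothetical contact windows; the chromosome holds a value on the truncated axis, and evaluation maps it back to real time:

```python
def build_valid_axis(windows):
    """Given disjoint, sorted windows of valid start times, return the
    total valid duration and a mapper from the truncated axis (a value
    in [0, total)) back to a real start time."""
    total = sum(end - start for start, end in windows)

    def to_real(t):
        # Walk the windows, consuming truncated time until it fits.
        for start, end in windows:
            span = end - start
            if t < span:
                return start + t
            t -= span
        raise ValueError("truncated time outside valid range")

    return total, to_real

# Hypothetical ground-station contact windows (minutes into the horizon).
windows = [(10, 20), (50, 55), (90, 100)]
total, to_real = build_valid_axis(windows)
```

Because the EA only ever samples from [0, total), the static constraint is satisfied by construction and never needs a repair step.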
Dynamic constraints must be enforced differently, as they are
based on information not known until the EA is run. They are
dependent on the nature of the solution generated by the EA.
The primary dynamic constraint in this problem is the issue of
task collisions. There are several types of tasks, and each task
has specific interactions with other task types and other events.
These interactions are shown in Table IX.
The interaction types are defined as follows.
Shrinks: indicates that the task reduces the usable time in
that event.
No Interaction: indicates that the task may be scheduled
during the event or task.
E. Results
Since a real-coded representation was used, the genetic
operators assumed a principal role in the solution process. Our
crossover and mutation methods sought to mimic effective
methods common on binary representations. In both operators,
we utilized normal distributions around parents to generate
children. One early method used in this domain was the BLX
crossover method [109]. We found that this method was insufficient to improve our performance. We sought to develop a
method analogous to crossover methods typically employed in
binary representations. In these methods, the genetic material in
subsequent generations is directly inherited from either parent.
We attempted to utilize a method that mapped more readily
onto that construct than does the BLX method. In our parent
weighted crossover method, the children inherited values that
were generated from distributions centered on the parent values.
This is similar to the parent-centric recombination [110], which
tracks the direction in which children are moving from parents.
Similarly, our mutation operator selected a value from a distribution around the allele value being modified. This mutation
method proved to be much more effective than a method which
generated completely new allele values. These methods improved our results significantly. The EA in our approach was a
steady-state GA using a 50% population replacement strategy
with a mutation probability of 0.1% and a crossover probability
of 60%. In our experiments, the number of generations varied
in the range [300, 500], and the population sizes varied in the
range [25, 500].
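A sketch of the parent-weighted crossover and allele-centered mutation just described; the distribution widths and rates are illustrative assumptions:

```python
import random

def parent_weighted_crossover(p1, p2, sigma=0.05):
    """Each child allele is drawn from a normal distribution centered on
    the corresponding allele of one parent, mimicking the direct
    inheritance of binary crossover in a real-coded representation."""
    child = []
    for a, b in zip(p1, p2):
        center = a if random.random() < 0.5 else b
        child.append(random.gauss(center, sigma))
    return child

def gaussian_allele_mutation(ind, rate=0.001, sigma=0.05):
    """Mutate by sampling around the existing allele value rather than
    drawing a completely new value."""
    return [random.gauss(g, sigma) if random.random() < rate else g
            for g in ind]

random.seed(3)
p1, p2 = [0.1, 0.5, 0.9], [0.2, 0.4, 0.8]
child = gaussian_allele_mutation(parent_weighted_crossover(p1, p2))
```

Each child allele stays close to one parent's value, which is the "direct inheritance" property the text contrasts with blending operators such as BLX.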
In parallel with our EA, the problem was also solved using
a linear programming (LP) method. The LP was allowed to
run until it found a provable objective-function optimum of
69 366.58 min. This number represents the aggregate time coverage
of a Walker constellation of 25 LEO satellites. The tuned
EA typically returns values that are one to two minutes short of
this value. The LP method had to run for approximately ten minutes
to find a provable optimum. As shown in Table X, the EA ran for a
comparable amount of time with a very large population.
TABLE X: AVERAGE RUNNING TIMES FOR CONVERGENCE RELATIVE TO POPULATION SIZE
A related application, multiobjective financial portfolio design
[19], requires the simultaneous maximization of return measures and
the minimization of risk measures for a portfolio of assets. The return
and risk measures are complex linear or nonlinear functions of a
variety of market and asset factors. The principal characteristic
of this problem class is the presence of a large number of linear
allocation constraints, and multiple linear and nonlinear objectives defined over the resulting feasible space.
We have developed and tested evolutionary multiobjective
optimization algorithms that are able to robustly identify the
Pareto frontier of optimal portfolios defined over the space of
returns and risks. However, the key challenge in solving this
problem is presented by the large number of linear allocation
constraints. The feasible space defined by these constraints is
a high-dimensional real-valued space (up to 2000 dimensions)
and a highly compact convex polytope, making for an enormously
challenging constraint-satisfaction problem.
Linear programming methodologies can routinely handle
problems with thousands of linear constraints, but they are
unable to tackle nonlinear objectives. We leveraged knowledge
on the geometrical nature of the feasible space by designing
a specialized LP-based algorithm that robustly samples the
boundary vertices of the convex feasible space. These extremity
samples are seeded in the initial population and then exclusively used by the evolutionary multiobjective algorithm to
generate interior points (via convex crossover) that are always
geometrically feasible.
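Because the feasible region defined by a set of linear constraints is convex, any convex combination of feasible points is itself feasible. A minimal sketch of such a convex crossover follows; the function name and the random weighting scheme are our assumptions, not the paper's exact operator.

```python
import random

def convex_crossover(parents):
    """Return a random convex combination of the parent vectors.
    If every parent satisfies the linear constraints, the child does
    too, because the feasible polytope is convex."""
    weights = [random.random() for _ in parents]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize so weights sum to 1
    dim = len(parents[0])
    return [sum(w * p[i] for w, p in zip(weights, parents))
            for i in range(dim)]
```

In this scheme, the boundary vertices produced by the LP-based sampler would serve as the parents, so every offspring is a geometrically feasible interior point by construction.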
In this specific application, we have explicitly incorporated
domain knowledge through a specialized LP-based population
initializer and convex crossover operators that always generate
geometrically feasible interior points. This application further
reinforces our central thesis that evolutionary algorithms + domain
knowledge = real-world evolutionary computation.
REFERENCES
[1] M. Gondran and M. Minoux, Graphs and Algorithms. New York:
Wiley, 1984.
[2] D. H. Wolpert and W. G. Macready, No free lunch theorems for search, Santa Fe Institute, Santa Fe, NM, Tech. Rep. SFI-TR-95-02-010, 1995.
[3] D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, 1997.
[4] C. R. Reeves and J. E. Rowe, Genetic Algorithms Principles and
Perspectives: A Guide to GA Theory. Amsterdam, The Netherlands:
Kluwer Academic, 2003, ch. 4.
[5] Y.-C. Ho and D. L. Pepyne, Simple explanation of the no free lunch theorem of optimization, Cybern. Syst. Anal., vol. 38, no. 2, pp. 292–298, 2002.
[6] L. A. Zadeh, Fuzzy logic and soft computing: Issues, contentions and perspectives, in Proc. IIZUKA'94: 3rd Int. Conf. Fuzzy Logic, Neural Nets Soft Computing, Iizuka, Japan, 1994, pp. 1–2.
[7] P. Bonissone, Automating the quality assurance of an on-line knowledge-based classifier by fusing multiple off-line classifiers, in Proc. IPMU, Perugia, Italy, Jul. 4–9, 2004, pp. 309–316.
[8] P. Bonissone, Soft computing: The convergence of emerging reasoning technologies, Soft Comput., vol. 1, no. 1, pp. 6–18, 1997.
[9] P. Bonissone, Y.-T. Chen, K. Goebel, and P. Khedkar, Hybrid soft computing systems: Industrial and commercial applications, Proc. IEEE, vol. 87, pp. 1641–1667, 1999.
[10] T. Pal and N. R. Pal, SOGARG: A self-organized genetic algorithm-based rule generation scheme for fuzzy controllers, IEEE Trans. Evol. Comput., vol. 7, pp. 397–415, 2003.
[11] P. Bonissone, The life cycle of a fuzzy knowledge-based classifier, in Proc. 2003 North Amer. Fuzzy Information Processing Soc., Chicago, IL, Jul. 2003, pp. 488–494.
[12] P. Bonissone, A. Varma, and K. Aggour, An evolutionary process for designing and maintaining a fuzzy instance-based model (FIM), in Proc. 2005 Genetic Fuzzy Systems, Granada, Spain, Mar. 2005.
[13] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs. New York: Springer-Verlag, 1996.
[14] R. C. Prim, Shortest connection networks and some generalizations, Bell Syst. Tech. J., vol. 36, pp. 1389–1401, 1957.
[15] E. Vonk, L. C. Jain, and R. P. Johnson, Automatic Generation of Neural Network Architecture Using Evolutionary Computation. Singapore: World Scientific, 1997.
[16] J. T. Richardson, M. R. Palmer, G. E. Liepins, and M. Hilliard, Some guidelines for genetic algorithms with penalty functions, in Proc. 3rd Int. Conf. Genetic Algorithms, San Mateo, CA, 1989, pp. 191–197.
[17] W. Siedlecki and J. Sklansky, Constrained genetic optimization via dynamic reward-penalty balancing and its use in pattern recognition, in Proc. 3rd Int. Conf. Genetic Algorithms, San Mateo, CA, 1989, pp. 141–150.
[18] J. Kubalik and J. Lazansky, Genetic algorithms and their tuning, in Computing Anticipatory Systems, D. M. Dubois, Ed. Liege, Belgium: American Institute of Physics, 1999, pp. 217–229.
[19] R. Subbu, P. Bonissone, N. Eklund, S. Bollapragada, and K. Chalermkraivuth, Multiobjective financial portfolio design: A hybrid evolutionary approach, in Proc. IEEE Int. Congr. Evol. Comput., Edinburgh, U.K., Sep. 2–5, 2005, pp. 1722–1729.
[20] P. Moscato, On evolution, search, optimization, genetic algorithms and
martial arts: Toward memetic algorithms, Caltech Concurrent Computation Program, Caltech, CA, C3P Rep. 826, 1989.
[21] P. Moscato, Memetic algorithms: A short introduction, in New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. London, U.K.: McGraw-Hill, 1999, pp. 219–234.
[22] J.-M. Renders and H. Bersini, Hybridizing genetic algorithms with hill-climbing methods for global optimization: Two possible ways, in Proc. Int. Conf. Evol. Comput., 1994, pp. 312–317.
[23] R. Subbu and P. Bonissone, A retrospective view of fuzzy control of evolutionary algorithm resources, in Proc. IEEE Int. Conf. Fuzzy Syst., St. Louis, MO, May 25, 2003, pp. 143–148.
[24] E. Eiben, R. Hinterding, and Z. Michalewicz, Parameter control in evolutionary algorithms, IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 124–141, 1999.
[25] F. Herrera and M. Lozano, Fuzzy adaptive genetic algorithms: Design, taxonomy, and future directions, Soft Comput., vol. 7, no. 8, pp. 545–562, 2003.
[26] P. Bonissone, Soft computing and meta-heuristics: Using knowledge and reasoning to control search and vice-versa, in Proc. SPIE Applications Science Neural Networks (Fuzzy Systems Evolutionary Computation VI), vol. 5200, San Diego, CA, Aug. 2003, pp. 133–149.
[27] K. A. De Jong, An analysis of the behavior of a class of genetic adaptive systems, Ph.D. dissertation, Univ. of Michigan, Ann Arbor, MI, 1975.
[28] J. J. Grefenstette, Optimization of control parameters for genetic algorithms, IEEE Trans. Syst., Man, Cybern., vol. 16, no. 1, pp. 122–128, 1986.
[29] F. Xue, Fuzzy logic controlled multi-objective differential evolution, Rensselaer Polytechnic Inst., Troy, NY, Electronics Agile Manufacturing Research Institute, Research Rep. ER03-4, 2003.
[30] T. Bäck, Evolutionary Algorithms in Theory and Practice. New York: Oxford Univ. Press, 1996.
[31] M. Schut and J. Wooldridge, The control of reasoning in resource-bounded agents, Knowl. Eng. Rev., vol. 16, no. 3, pp. 215–240, 2000.
[32] E. Melis and A. Meier, Proof planning with multiple strategies, in Lecture Notes in Computer Science. Berlin, Germany: Springer-Verlag, 2000, vol. 1861, pp. 644–659.
[33] M. Cox, Machines that forget: Learning from retrieval failure of misindexed explanations, in Proc. 16th Annu. Conf. Cognitive Science Soc., 1994, pp. 225–230.
[34] S. Fox, Introspective learning for case-based reasoning, Ph.D. dissertation, Dept. of Computer Science, Indiana Univ., Bloomington, IN, 1995.
[35] P. Bonissone and P. Halverson, Time-constrained reasoning under uncertainty, J. Real-Time Syst., vol. 2, no. 1/2, pp. 25–45, 1990.
[36] P. P. Bonissone, P. S. Khedkar, and Y. Chen, Genetic algorithms for automated tuning of fuzzy controllers: A transportation application, in Proc. IEEE Conf. Fuzzy Syst., New Orleans, LA, 1996, pp. 674–680.
[37] X. Yao, Evolving artificial neural networks, Proc. IEEE, vol. 87, pp. 1423–1447, 1999.
[38] P. Larrañaga and J. Lozano, Synergies between evolutionary computation and probabilistic graphical models, Int. J. Approx. Reason., vol. 31, no. 3, pp. 155–156, Nov. 2002.
[39] P. Bonissone, V. Badami, K. Chiang, P. Khedkar, K. Marcelle, and M. Schutten, Industrial applications of fuzzy logic at General Electric, Proc. IEEE, vol. 83, pp. 450–465, 1995.
[40] S. Guillaume, Designing fuzzy inference systems from data: An interpretability-oriented review, IEEE Trans. Fuzzy Syst., vol. 9, no. 3, pp. 426–443, 2001.
[41] J. Casillas, O. Cordón, F. Herrera, and L. Magdalena, Eds., Interpretability issues in fuzzy modeling, and accuracy improvements in linguistic fuzzy modeling, in Studies in Fuzziness and Soft Computing. Berlin, Germany: Springer-Verlag, 2003, vols. 128–129.
[42] L. A. Zadeh, Fuzzy sets, Inf. Contr., vol. 8, pp. 338–353, 1965.
[43] N. Rescher, Many-Valued Logic. New York: McGraw-Hill, 1969.
[44] M. Black, Vagueness: An exercise in logical analysis, Phil. Sci., vol. 4, pp. 427–455, 1937.
[45] E. H. Ruspini, P. P. Bonissone, and W. Pedrycz, Handbook of Fuzzy Computation. Bristol, U.K.: Institute of Physics, 1998.
[46] L. A. Zadeh, Quantitative fuzzy semantics, Inf. Sci., vol. 3, pp. 159–176, 1971.
[47] L. A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning, Part 1, Inf. Sci., vol. 8, pp. 199–249, 1975.
[48] Y.-M. Pok and J.-X. Xu, Why is fuzzy control robust, in Proc. 3rd IEEE Int. Conf. Fuzzy Syst., Orlando, FL, 1994, pp. 1018–1022.
[49] L. A. Zadeh, Outline of a new approach to the analysis of complex systems and decision processes, IEEE Trans. Syst., Man, Cybern., vol. SMC-3, pp. 28–44, 1973.
[50] E. H. Mamdani and S. Assilian, An experiment in linguistic synthesis with a fuzzy logic controller, Int. J. Man Mach. Studies, vol. 7, no. 1, pp. 1–13, 1975.
[51] T. Takagi and M. Sugeno, Fuzzy identification of systems and its applications to modeling and control, IEEE Trans. Syst., Man, Cybern., vol. SMC-15, pp. 116–132, 1985.
[52] R. Babuska, R. Jager, and H. B. Verbruggen, Interpolation issues in Sugeno-Takagi reasoning, in Proc. 3rd IEEE Int. Conf. Fuzzy Syst., Orlando, FL, 1994, pp. 859–863.
[53] H. Bersini, G. Bontempi, and C. Decaestecker, Comparing RBF and fuzzy inference systems on theoretical and practical basis, in Proc. Int. Conf. Artificial Neural Networks, vol. 1, Paris, France, 1995, pp. 169–174.
[54] T. J. Procyck and E. H. Mamdani, A linguistic self-organizing process controller, Automatica, vol. 15, pp. 15–30, 1979.
[55] F. Herrera and J. L. Verdegay, Eds., Genetic algorithms and soft computing, in Studies in Fuzziness and Soft Computing. Berlin, Germany:
Physica-Verlag, 1996, vol. 8.
[56] J. S. R. Jang, ANFIS: Adaptive-network-based fuzzy inference system, IEEE Trans. Syst., Man, Cybern., vol. 23, no. 3, pp. 665–685, 1993.
[57] O. Cordón, F. Herrera, and M. Lozano, A classified review on the combination fuzzy logic-genetic algorithms bibliography, Dept. of Computer Science and A.I., Univ. of Granada, Granada, Spain, Tech. Rep. DECSAI-95 129, 1995.
[58] C. L. Karr, Design of an adaptive fuzzy logic controller using genetic algorithms, in Proc. Int. Conf. Genetic Algorithms, San Diego, CA, 1991, pp. 450–456.
[59] M. A. Lee and H. Takagi, Dynamic control of genetic algorithm using fuzzy logic techniques, in Proc. 5th Int. Conf. Genetic Algorithms, CA, 1993, pp. 76–83.
[60] H. Surmann, A. Kanstein, and K. Goser, Self-organizing and genetic algorithms for an automatic design of fuzzy control and decision systems, in Proc. EUFIT, Aachen, Germany, 1993, pp. 1097–1104.
[61] J. Kinzel, F. Klawonn, and R. Kruse, Modifications of genetic algorithms for designing and optimizing fuzzy controllers, in Proc. 1st IEEE Conf. Evol. Comput., Orlando, FL, 1994, pp. 28–33.
[62] D. Burkhardt and P. P. Bonissone, Automated fuzzy knowledge base generation and tuning, in Proc. 1st IEEE Int. Conf. Fuzzy Syst., San Diego, CA, 1992, pp. 179–188.
[63] F. Herrera, M. Lozano, and J. L. Verdegay, Tuning fuzzy logic controllers by genetic algorithms, Int. J. Approx. Reason., vol. 12, no. 3/4, pp. 299–315, 1995.
[64] P. P. Bonissone, P. S. Khedkar, and Y.-T. Chen, Genetic algorithms for automated tuning of fuzzy controllers: A transportation application, in Proc. 5th IEEE Int. Conf. Fuzzy Syst., New Orleans, LA, 1996, pp. 674–680.
[65] L. Zheng, A practical guide to tune proportional and integral (PI) like fuzzy controllers, in Proc. 1st IEEE Int. Conf. Fuzzy Syst., San Diego, CA, 1992, pp. 633–640.
[66] E. C. C. Tsang and D. S. Yeung, Optimizing fuzzy knowledge base by genetic algorithms and neural networks, in Proc. IEEE Int. Conf. Syst., Man, Cybern., Tokyo, Japan, 1999, pp. 367–371.
[67] A. Grauel, I. Renners, and L. A. Ludwig, Optimizing fuzzy classifiers by evolutionary algorithms, in Proc. IEEE 4th Int. Conf. Knowledge-Based Intelligent Engineering Systems Allied Technologies, 2000, pp. 353–356.
[68] S.-Y. Ho, T.-K. Chen, and S.-J. Ho, Designing an efficient fuzzy classifier using an intelligent genetic algorithm, in Proc. IEEE 24th Annu. Int. Computer Software Applications Conf. (COMPSAC), 2000, pp. 293–298.
[69] H. Ishibuchi, T. Nakashima, and T. Murata, Genetic-algorithm-based approaches to the design of fuzzy systems for multi-dimensional pattern classification problems, in Proc. IEEE Int. Conf. Evolutionary Computation, 1996, pp. 229–234.
[70] C.-C. Wong, C.-C. Chen, and B.-C. Lin, Design of fuzzy classification system using genetic algorithms, in Proc. IEEE 9th Int. Conf. Fuzzy Systems, San Antonio, TX, 2000, pp. 297–301.
[71] E. Collins, S. Ghosh, and C. Scofield, An application of a multiple neural network learning system to emulation of mortgage underwriting judgments, in Proc. IEEE Int. Conf. Neural Networks, 1988, pp. 351–357.
[72] C. Nikolopoulos and S. Duvendack, A hybrid machine learning system and its application to insurance underwriting, in Proc. IEEE Int. Congr. Evol. Comput., 1994, pp. 692–695.
[73] P. Bonissone, R. Subbu, and K. Aggour, Evolutionary optimization of fuzzy decision systems for automated insurance underwriting, in Proc. 2002 IEEE World Conf. Comput. Intell., Honolulu, HI, 2002, pp. 1003–1008.
[74] R. Subbu, C. Hocaoglu, and A. C. Sanderson, A virtual design environment using evolutionary agents, in Proc. IEEE Int. Conf. Robotics Automation, Leuven, Belgium, 1998, pp. 247–253.
[75] R. Subbu, A. C. Sanderson, and P. P. Bonissone, Fuzzy logic controlled genetic algorithms versus tuned genetic algorithms: An agile manufacturing application, in Proc. ISIC/CIRA/ISAS Conf., Gaithersburg, MD, 1998, pp. 434–440.
[76] F. Herrera and M. Lozano, Adaptation of genetic algorithm parameters based on fuzzy logic controllers, in Genetic Algorithms and Soft Computing, F. Herrera and J. L. Verdegay, Eds. Berlin, Germany: Physica-Verlag, 1996, pp. 95–129.
[77] F. Herrera and M. Lozano, Adaptive genetic operators based on coevolution with fuzzy behaviors, IEEE Trans. Evol. Comput., vol. 5, no. 2, pp. 149–165, 2001.
[78] A. Tettamanzi and M. Tomassini, Fuzzy evolutionary algorithms, in Soft Computing: Integrating Evolutionary, Neural, and Fuzzy Systems. Berlin, Germany: Springer-Verlag, 2001, ch. 7.
[79] R. Palmer, Sliding mode fuzzy control, in Proc. IEEE Int. Conf. Fuzzy Systems, 1992, pp. 519–526.
[80] P. Bonissone and K. H. Chiang, Fuzzy logic controllers: From development to deployment, in Proc. IEEE Int. Conf. Neural Networks, 1993, pp. 610–619.
[81] J. E. Slotine and W. Li, Applied Nonlinear Control. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[82] P. Bonissone, A compiler for fuzzy logic controllers, in Proc. Int. Fuzzy Eng. Symp., 1991, pp. 706–717.
[83] Memorandum of understanding for the implementation of a European concerted research action designated as COST Action 529: Efficient lighting for the 21st century, in European Co-Operation in the Field of Scientific and Technical Research: COST, 2001.
[84] G. Wyszecki and W. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. New York: Wiley, 1982.
[85] M. Siminovitch, C. Gould, and E. Page, A high-efficiency indirect lighting system utilizing the Solar 1000 sulfur lamp, in Proc. Right Light 4 Conf., vol. 2, Copenhagen, Denmark, Nov. 19–21, 1997, pp. 35–40.
[86] B. Turner, M. Ury, Y. Leng, and W. Love, Sulfur lamps: Progress in their development, J. Illum. Eng. Soc., vol. 26, pp. 10–16, 1997.
[87] D. MacAdam, The theory of the maximum visual efficiency of colored materials, J. Opt. Soc. Amer., vol. 25, pp. 249–252, 1935.
[88] D. MacAdam, Maximum visual efficiency of colored materials, J. Opt. Soc. Amer., vol. 25, pp. 361–367, 1935.
[89] M. Koedam and J. Opstelten, Measurement and computer-aided optimization of spectral power distributions, Light. Res. Technol., vol. 3, no. 3, pp. 205–210, 1971.
[90] M. Koedam, J. Opstelten, and D. Radielović, The application of simulated spectral power distributions in lamp development, J. Illum. Eng. Soc., vol. 1, pp. 285–289, 1972.
[91] W. Thornton, Luminosity and color-rendering capability of white light, J. Opt. Soc. Amer., vol. 61, no. 9, pp. 1155–1163, 1971.
[92] H. Einhorn and F. Einhorn, Inherent efficiency and color rendering of white light sources, Illum. Eng., vol. 62, pp. 154–158, 1967.
[93] W. Walter, Optimum phosphor blends for fluorescent lamps, Appl. Opt., vol. 10, pp. 1108–1113, 1971.
[94] H. Haft and W. Thornton, High performance fluorescent lamps, J. Illum. Eng. Soc., vol. 1, pp. 29–35, 1972.
[95] J. Opstelten, D. Radielović, and J. Verstegen, Optimum spectra for light sources, Philips Tech. Rev., vol. 35, pp. 361–370, 1975.
[96] W. Walter, Optimum lamp spectra, J. Illum. Eng. Soc., vol. 7, no. 1, pp. 66–73, 1978.
[97] N. Ohta and G. Wyszecki, Designing illuminants that render given objects in prescribed colors, J. Opt. Soc. Amer., vol. 66, pp. 269–275, 1976.
[98] N. Ohta and G. Wyszecki, Color changes caused by specified changes in the illuminant, Color Res. Appl., vol. 1, pp. 17–21, 1976.
[99] D. DuPont, Study of the reconstruction of reflectance curves based on tristimulus values: Comparison of methods of optimization, Color Res. Appl., vol. 27, pp. 88–99, 2002.
[100] N. Eklund and M. Embrechts, GA-based multi-objective optimization of visible spectra for lamp design, in Smart Engineering System Design: Neural Networks, Fuzzy Logic, Evolutionary Programming, Data Mining and Complex Systems, C. H. Dagli, A. L. Buczak, J. Ghosh, M. J. Embrechts, and O. Ersoy, Eds. New York: ASME Press, 1999, pp. 451–456.
[101] N. Eklund and M. Embrechts, Determining the color-efficiency Pareto optimal surface for filtered light sources, in Evolutionary Multi-Criterion Optimization, Zitzler, Deb, Thiele, Coello, and Corne, Eds. Berlin, Germany: Springer-Verlag, 2001, vol. 1993, Lecture Notes in Computer Science, pp. 603–611.
[102] N. Eklund, Multiobjective visible spectrum optimization: A genetic algorithm approach, Ph.D. dissertation, Engineering Science Dept., Rensselaer Polytechnic Inst., Troy, NY, 2002.
[103] T. Kiehl, Genetic algorithms for autonomous satellite control, M.S. thesis, Computer Science Dept., Rensselaer Polytechnic Inst., Troy, NY, 1999.
[104] K.-D. Lee, H.-J. Lee, Y.-H. Cho, and D. G. Oh, Throughput-maximizing timeslot scheduling for interactive satellite multiclass services, IEEE Commun. Lett., vol. 7, pp. 263–265, 2003.
[105] N. Funabiki and S. Nishikawa, A binary Hopfield neural-network approach for satellite broadcast scheduling problems, IEEE Trans. Neural Netw., vol. 8, pp. 441–445, 1997.
[106] T. Tambouratzis, Decomposition co-ordination artificial neural network for satellite broadcast scheduling, Electron. Lett., vol. 34, no. 15, pp. 1503–1504, 1998.
[107] C. Plaunt, A. K. Jonsson, and J. Frank, Run-time satellite tele-communications call handling as dynamic constraint satisfaction, in Proc. IEEE Aerospace Conf., vol. 5, 1999, pp. 165–174.
[108] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989.
[109] F. Herrera, M. Lozano, and J. L. Verdegay, Tackling real-coded genetic algorithms: Operators and tools for behavioral analysis, Artif. Intell. Rev., vol. 12, no. 4, pp. 265–319, 1998.
[110] K. Deb, A. Anand, and D. Joshi, A computationally efficient evolutionary algorithm for real-parameter optimization, Evol. Comput., vol. 10, no. 4, pp. 371–395, 2002.
Piero P. Bonissone (S'75–M'79–SM'02–F'04) received the Ph.D. degree in electrical engineering and
computer science from the University of California
at Berkeley in 1979.
Since then, he has been a Computer Scientist with
General Electric Global Research, where he has carried out research in
artificial intelligence, expert systems, fuzzy sets, evolutionary
algorithms, and soft computing, ranging from the control of turbo-shaft
engines to the use of fuzzy logic in dishwashers, locomotives, power
supplies, and financial applications.
He has developed case-based and fuzzy-neural systems to accurately estimate
the value of residential properties used as mortgage collateral and to
predict paper-web breakages in paper mills. He has led a large internal
project that uses a fuzzy rule base to partially automate the underwriting
of life insurance applications. Recently, he led a soft computing group in
the development of prognostics for products' remaining life. He is also an
Adjunct Professor with Rensselaer Polytechnic Institute, Troy, NY. Since
1993, he has been Editor-in-Chief of the International Journal of
Approximate Reasoning. He has coedited four books and published more than
120 articles. He holds 33 U.S. patents (with more than 25 pending). He has
been a keynote speaker at many important conferences in the artificial
intelligence and soft computing fields.
Dr. Bonissone is a Fellow of the American Association for Artificial
Intelligence and of the International Fuzzy Systems Association. In 1993, he
received the Coolidge Fellowship Award from GE CRD for overall technical
accomplishments. From 1993 to 2000, he was Vice President of Finance of the
IEEE NNC. In 2002, he became President of the IEEE Neural Networks Society.
He is Vice President of Finance of the IEEE Computational Intelligence
Society.