Classification Algorithms

Oracle Data Mining provides the following algorithms for classification:

Decision Tree

Decision trees automatically generate rules, which are conditional statements that reveal the logic
used to build the tree. See Chapter 11, "Decision Tree".

Naive Bayes

Naive Bayes uses Bayes' Theorem, a formula that calculates a probability by counting the
frequency of values and combinations of values in the historical data. See Chapter 15, "Naive
Bayes".
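
The counting scheme described above can be sketched in a few lines of Python. This is a minimal illustration of frequency-based Bayes classification, not Oracle's implementation; the toy data, function names, and the simple +1/+2 smoothing are invented for the example:

```python
from collections import defaultdict

def train_nb(rows, labels):
    """Count class frequencies and (class, column, value) frequencies."""
    class_counts = defaultdict(int)
    value_counts = defaultdict(int)
    for row, y in zip(rows, labels):
        class_counts[y] += 1
        for col, v in enumerate(row):
            value_counts[(y, col, v)] += 1
    return class_counts, value_counts

def predict_nb(model, row):
    """Pick the class maximising prior * product of smoothed conditionals."""
    class_counts, value_counts = model
    total = sum(class_counts.values())
    best, best_p = None, -1.0
    for y, n in class_counts.items():
        p = n / total                          # prior P(y) from class frequency
        for col, v in enumerate(row):
            # Laplace smoothing so an unseen value does not zero the product
            p *= (value_counts[(y, col, v)] + 1) / (n + 2)
        if p > best_p:
            best, best_p = y, p
    return best

# toy historical data: (outlook, temperature) -> play
rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
labels = ["no", "no", "yes", "yes"]
model = train_nb(rows, labels)
```

The Laplace smoothing term is the main practical wrinkle: without it, any value combination absent from the historical data would force a class probability of zero.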

Generalized Linear Models (GLM)

GLM is a popular statistical technique for linear modeling. Oracle Data Mining implements GLM
for binary classification and for regression.

GLM provides extensive coefficient statistics and model statistics, as well as row diagnostics.
GLM also supports confidence bounds.

Support Vector Machine

Support Vector Machine (SVM) is a powerful, state-of-the-art algorithm based on linear and
nonlinear regression. Oracle Data Mining implements SVM for binary and multiclass
classification. See Chapter 18, "Support Vector Machines".

The nature of the data determines which classification algorithm will provide the best solution to a given
problem. Algorithms differ with respect to accuracy, time to completion, and transparency. In
practice, it sometimes makes sense to develop several models for each algorithm, select the best model for
each algorithm, and then choose the best of those for deployment.
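
The select-the-best-model workflow above can be sketched as a holdout comparison. This is a toy illustration with two stand-in classifiers (a 1-nearest-neighbour rule and a majority-class baseline); the data, split, and classifier choices are all invented for the example:

```python
import random

def one_nn(train, query):
    """1-nearest-neighbour: predict the label of the closest training point."""
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[1]

def majority(train, query):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(clf, train, test):
    return sum(clf(train, x) == y for x, y in test) / len(test)

random.seed(0)
# toy two-class data: class 0 clustered near (0, 0), class 1 near (5, 5)
data = [((random.gauss(c * 5, 1), random.gauss(c * 5, 1)), c)
        for c in (0, 1) for _ in range(30)]
random.shuffle(data)
train, test = data[:40], data[40:]
scores = {name: accuracy(clf, train, test)
          for name, clf in [("1-NN", one_nn), ("majority", majority)]}
best = max(scores, key=scores.get)   # the candidate to keep for deployment
```

In practice each candidate would itself be tuned (several models per algorithm) before this final comparison, and cross-validation would replace the single holdout split.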

Reader Comment

Algorithms

The following is a list of the implemented algorithms:


- Learning Vector Quantization (LVQ)
o As described in "LVQ_PAK: The Learning Vector Quantization Program Package" by
Kohonen, Hynninen, Kangas, Laaksonen and Torkkola (1996)
o and in: Teuvo Kohonen. Self-Organizing Maps. Third ed. Berlin Heidelberg: Springer-
Verlag; 2001. (Thomas S Huang; Teuvo Kohonen, and Manfred R. Schroeder. Springer
Series in Information Sciences; 30).
o Provides the following implementations: LVQ1, OLVQ1, LVQ2.1, LVQ3, OLVQ3,
Multiple-pass LVQ, Hierarchical LVQ
- Self-Organizing Map (SOM)
o As described in "SOM_PAK: The Self-Organizing Map Program Package" by Kohonen,
Hynninen, Kangas, and Laaksonen (1996)
o and in: Teuvo Kohonen. Self-Organizing Maps. Third ed. Berlin Heidelberg: Springer-
Verlag; 2001. (Thomas S Huang; Teuvo Kohonen, and Manfred R. Schroeder. Springer
Series in Information Sciences; 30).
o Provides the following implementations: SOM, LVQ-SOM, Multiple-pass SOM
- Feed-Forward Artificial Neural Network (FF-ANN)
o As described in Russell D. Reed and Robert J. Marks II. Neural Smithing - Supervised
Learning in Feedforward Artificial Neural Networks. USA: MIT Press; 1999 Mar.
o Provides the following implementations: Perceptron, Widrow Hoff, Back Propagation,
Bold Driver Back Propagation (Vogl's Method)
- Artificial Immune Recognition System (AIRS)
o As described in Andrew Watkins; Jon Timmis, and Lois Boggess. Artificial Immune
Recognition System (AIRS): An Immune-Inspired Supervised Learning Algorithm.
Genetic Programming and Evolvable Machines. 2004 Sep; 5(3):291-317. and Andrew B.
Watkins. Exploiting Immunological Metaphors in the Development of Serial, Parallel,
and Distributed Learning Algorithms [Ph.D. thesis]. Canterbury, UK: University of
Kent; 2005.
o Provides the following implementations: AIRS1, AIRS2, Parallel AIRS2
o A description of this implementation is provided in a technical report: Jason Brownlee.
[Technical Report]. Artificial Immune Recognition System (AIRS) - A Review and
Analysis. Victoria, Australia: Centre for Intelligent Systems and Complex Processes
(CISCP), Faculty of Information and Communication Technologies (ICT), Swinburne
University of Technology; 2005 Jan; Technical Report ID: 1-01.
- Clonal Selection Algorithm (CLONALG)
o As described in Leandro N. de Castro and Fernando J. Von Zuben. Learning and
optimization using the clonal selection principle. IEEE Transactions on Evolutionary
Computation. 2002 Jun; 6(3):239-251. and Leandro N. de Castro and Fernando J. Von
Zuben. The Clonal Selection Algorithm with Engineering Applications, L. Darrell
Whitley; David E. Goldberg; Erick Cantú-Paz; Lee Spector; Ian C. Parmee, and Hans-
Georg Beyer, Editors. Proceedings of the Genetic and Evolutionary Computation
Conference (GECCO '00), Workshop on Artificial Immune Systems and Their
Applications; Las Vegas, Nevada, USA. Morgan Kaufmann; 2000: 36-37.
o Provides the following implementations: CLONALG and a new CSCA algorithm
o A description of this implementation is provided in a technical report: Jason Brownlee.
[Technical Report]. Clonal Selection Theory & CLONALG - The Clonal Selection
Classification Algorithm (CSCA). Victoria, Australia: Centre for Intelligent Systems and
Complex Processes (CISCP), Faculty of Information and Communication Technologies
(ICT), Swinburne University of Technology; 2005 Jan; Technical Report ID: 2-01.
- Immunos-81
o As described in Jerome H. Carter. The immune system as a model for classification and
pattern recognition. Journal of the American Medical Informatics Association. 2000; 7(1):28-41.
o Provides the following implementations: Immunos-81 and the new Immunos-1 and
Immunos-2 algorithms
o A description of this implementation is provided in a technical report: Jason Brownlee.
[Technical Report]. Immunos-81 - The Misunderstood Artificial Immune System.
Victoria, Australia: Centre for Intelligent Systems and Complex Processes (CISCP),
Faculty of Information and Communication Technologies (ICT), Swinburne University of
Technology.

Algorithms

The most widely used classifiers are the neural network (multi-layer perceptron), support vector
machine, k-nearest neighbour, Gaussian mixture model, Gaussian, naive Bayes, decision
tree, and RBF classifiers.
Examples of classification algorithms include:

Linear classifiers
Fisher's linear discriminant
Logistic regression
Naive Bayes classifier
Perceptron
Support vector machines
Least squares support vector machines
Quadratic classifiers
Kernel estimation
k-nearest neighbor
Boosting (meta-algorithm)
Decision trees
Random forests
Neural networks
Gene Expression Programming
Bayesian networks
Hidden Markov models
Learning vector quantization
Proaftn

Example algorithms

Altruism algorithm
Researchers in Switzerland have developed an algorithm based on Hamilton's rule of kin selection. The
algorithm shows how altruism in a swarm of entities can, over time, evolve and result in more effective
swarm behaviour.[2][3]

Ant colony optimization


Ant colony optimization (ACO) is a class of optimization algorithms modeled on the actions of an ant
colony. ACO methods are useful in problems that require finding paths to goals. Artificial 'ants'
(simulation agents) locate optimal solutions by moving through a parameter space representing all possible
solutions. Natural ants lay down pheromones directing each other to resources while exploring their
environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that
in later simulation iterations more ants locate better solutions.[4]
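
These ideas can be sketched as a toy single-ant colony on a four-node graph. This is a minimal illustration, not a reference ACO implementation; the graph, evaporation rate, and deposit rule are all invented for the example:

```python
import random

# tiny weighted graph; we want a short path from 'A' to 'D'
edges = {("A", "B"): 1, ("A", "C"): 4, ("B", "C"): 1, ("B", "D"): 5, ("C", "D"): 1}
graph = {}
for (u, v), w in edges.items():
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

tau = {e: 1.0 for e in edges}                    # pheromone per edge

def phero(u, v):
    return tau[(u, v)] if (u, v) in tau else tau[(v, u)]

def walk(start, goal, rng):
    """One ant: a probabilistic walk biased by pheromone / edge length."""
    path, node, cost = [start], start, 0.0
    while node != goal:
        options = [(v, w) for v, w in graph[node] if v not in path]
        if not options:
            return None, float("inf")            # dead end, discard this ant
        weights = [phero(node, v) / w for v, w in options]
        v, w = rng.choices(options, weights=weights)[0]
        path.append(v)
        cost += w
        node = v
    return path, cost

rng = random.Random(1)
best_path, best_cost = None, float("inf")
for _ in range(200):
    path, cost = walk("A", "D", rng)
    if path is None:
        continue
    for e in tau:
        tau[e] *= 0.95                           # pheromone evaporation
    for u, v in zip(path, path[1:]):             # deposit on traversed edges;
        key = (u, v) if (u, v) in tau else (v, u)
        tau[key] += 1.0 / cost                   # shorter paths deposit more
    if cost < best_cost:
        best_path, best_cost = path, cost
```

Evaporation plus cost-weighted deposit is the positive-feedback loop described above: edges on short paths accumulate pheromone, so later ants choose them more often.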

Artificial bee colony algorithm


The artificial bee colony algorithm (ABC) is a meta-heuristic introduced by Karaboga in 2005[5] that
simulates the foraging behaviour of honey bees. The algorithm has three phases: employed bee, onlooker
bee, and scout bee. In the employed and onlooker bee phases, bees exploit food sources through local
searches in the neighbourhood of selected solutions; selection is deterministic in the employed bee phase
and probabilistic in the onlooker bee phase. The scout bee phase, an analogy of abandoning exhausted food
sources in the foraging process, discards solutions that no longer benefit the search and inserts new
solutions in their place to explore new regions of the search space. The algorithm balances exploration
and exploitation.
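
The three phases can be sketched as follows. This is a minimal, uncalibrated illustration on a simple quadratic objective; the population size, trial limit, and search bounds are invented for the example:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def abc_minimize(f, dim=2, n_sources=10, limit=20, iters=300, seed=0):
    rng = random.Random(seed)
    new = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    sources = [new() for _ in range(n_sources)]
    trials = [0] * n_sources

    def neighbour(i):
        k = rng.randrange(n_sources)          # random partner source
        j = rng.randrange(dim)                # perturb one dimension
        x = sources[i][:]
        x[j] += rng.uniform(-1, 1) * (x[j] - sources[k][j])
        return x

    def try_improve(i):
        cand = neighbour(i)
        if f(cand) < f(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1                    # source is becoming exhausted

    for _ in range(iters):
        for i in range(n_sources):            # employed bees: one search per source
            try_improve(i)
        fits = [1 / (1 + f(s)) for s in sources]
        for _ in range(n_sources):            # onlookers: probabilistic selection
            try_improve(rng.choices(range(n_sources), weights=fits)[0])
        for i in range(n_sources):            # scouts: abandon exhausted sources
            if trials[i] > limit:
                sources[i], trials[i] = new(), 0
    return min(sources, key=f)

best = abc_minimize(sphere)
```

The `trials` counter is the exhaustion analogy: a source that repeatedly fails to improve is abandoned and a scout explores a fresh random region.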

Artificial immune systems


Artificial immune systems (AIS) apply the abstract structure and function of the immune system to
computational systems and investigate how such systems can solve computational problems in
mathematics, engineering, and information technology. AIS is a sub-field of biologically inspired
computing and natural computation, with interests in machine learning, and belongs to the broader field
of artificial intelligence.

Bat algorithm
The bat algorithm (BA) is inspired by the echolocation behaviour of microbats. BA uses frequency tuning
and automatically balances exploration and exploitation by controlling loudness and pulse emission rates.[6]

Charged system search


Charged system search (CSS) is an optimization algorithm based on principles from physics and
mechanics.[7] CSS utilizes the laws of Coulomb and Gauss from electrostatics and the Newtonian laws of
mechanics. It is a multi-agent approach in which each agent is a charged particle (CP). CPs affect each
other according to their fitness values and their separation distances: the magnitude of the resultant force
is determined using the laws of electrostatics, and the resulting movement is determined using Newtonian
mechanics. CSS is applicable to all fields of optimization and is especially suitable for non-smooth or
non-convex domains. The algorithm balances exploration and exploitation, which considerably improves
its efficiency, so CSS can be considered a good global and local optimizer simultaneously.

Cuckoo search
Cuckoo search (CS) mimics the brood parasitism of some cuckoo species, which use host birds to hatch
their eggs and raise their chicks. The cuckoo search algorithm[8] is enhanced with Lévy flights, with jump
steps drawn from a Lévy distribution.[9] Recent studies suggest that CS can outperform other algorithms
such as particle swarm optimization; for example, a comparison of cuckoo search with PSO, DE, and
ABC suggests that CS and DE provide more robust results than PSO and ABC.[10]
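
A minimal sketch of the scheme follows, using Mantegna's method as one common way to draw Lévy-distributed step lengths. The step scale, nest count, and abandonment fraction are invented for the example; this is an illustration, not a reference implementation:

```python
import math, random

def sphere(x):
    return sum(v * v for v in x)

def levy_step(rng, beta=1.5):
    """Mantegna's method for drawing Lévy-distributed step lengths."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = rng.gauss(0, sigma), rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_minimize(f, dim=2, n_nests=15, pa=0.25, iters=400, seed=0):
    rng = random.Random(seed)
    fresh = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    nests = [fresh() for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        # a cuckoo lays an egg: a Lévy flight from a random nest,
        # scaled by the distance to the current best nest
        i = rng.randrange(n_nests)
        egg = [x + 0.01 * levy_step(rng) * (x - b) for x, b in zip(nests[i], best)]
        j = rng.randrange(n_nests)            # a random host nest
        if f(egg) < f(nests[j]):
            nests[j] = egg                    # the better egg replaces the host's
        nests.sort(key=f)
        for k in range(int((1 - pa) * n_nests), n_nests):
            nests[k] = fresh()                # worst fraction pa is abandoned
        best = min(nests, key=f)
    return best

best = cuckoo_minimize(sphere)
```

The heavy-tailed Lévy steps give occasional long jumps, so the search mixes fine local moves with rare global exploration.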

Differential search algorithm


The differential search algorithm (DSA) is inspired by the migration of superorganisms. In 2012, the
problem-solving success of DSA was compared with that of ABC, JDE, JADE, SADE, EPSDE, GSA,
PSO2011, and CMA-ES on numerical optimization problems. A MATLAB code link is provided in
Civicioglu, P. (2012).[11]

Firefly algorithm
The firefly algorithm (FA) is inspired by the flashing behaviour of fireflies. Light intensity is associated
with the attractiveness of a firefly, and this attraction enables the fireflies to subdivide into small groups,
with each subgroup swarming around a local mode. The firefly algorithm is therefore especially suitable
for multimodal optimization problems.[12] FA has been applied to continuous optimization, the travelling
salesman problem, clustering, image processing, and feature selection.
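
The attraction rule can be sketched as follows. This is a toy illustration; the attractiveness constants, the random-step damping, and the search bounds are invented for the example:

```python
import math, random

def sphere(x):
    return sum(v * v for v in x)

def firefly_minimize(f, dim=2, n=15, beta0=1.0, gamma=0.01, alpha=0.2,
                     iters=100, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):       # j is brighter (lower cost)
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    # attractiveness decays with squared distance
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] = [a + beta * (b - a) + alpha * rng.uniform(-0.5, 0.5)
                             for a, b in zip(xs[i], xs[j])]
        alpha *= 0.97                          # damp the random step over time
    return min(xs, key=f)

best = firefly_minimize(sphere)
```

Because attraction falls off with distance, dim fireflies are pulled mainly by nearby bright ones, which is what lets subgroups form around separate local modes on multimodal objectives.
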
Glowworm swarm optimization
Glowworm swarm optimization (GSO) was introduced by Krishnanand and Ghose in 2005 for the
simultaneous computation of multiple optima of multimodal functions.[13][14][15][16] The algorithm shares a few
features with better-known algorithms such as ant colony optimization and particle swarm optimization, but
with several significant differences. The agents in GSO are thought of as glowworms that carry a
luminescence quantity called luciferin. The glowworms encode the fitness of their current locations,
evaluated using the objective function, into a luciferin value that they broadcast to their neighbors. Each
glowworm identifies its neighbors and computes its movements by exploiting an adaptive neighborhood,
which is bounded above by its sensor range. Using a probabilistic mechanism, each glowworm selects a
neighbor that has a luciferin value higher than its own and moves toward it. These movements, based only
on local information and selective neighbor interactions, enable the swarm of glowworms to partition into
disjoint subgroups that converge on multiple optima of a given multimodal function.

Gravitational search algorithm


The gravitational search algorithm (GSA) is based on the law of gravity and the notion of mass
interactions. The algorithm uses Newtonian physics, and its searcher agents are a collection of masses
forming an isolated system. Through gravitational force, every mass in the system can perceive the
situation of the other masses; the force is therefore a way of transferring information between masses
(Rashedi, Nezamabadi-pour and Saryazdi 2009).[17] Agents are considered objects whose performance is
measured by their masses. All objects attract each other by gravity, and this force causes a global
movement of all objects toward the objects with heavier masses, which correspond to good solutions. The
position of an agent corresponds to a solution of the problem, and its mass is determined using a fitness
function. Over time, the masses are attracted by the heaviest mass, which ideally presents an optimum
solution in the search space. GSA can thus be viewed as a small artificial world of masses obeying the
Newtonian laws of gravitation and motion. A multi-objective variant, the non-dominated sorting
gravitational search algorithm (NSGSA), was proposed by Nobahari and Nikusokhan in 2011.[18]

Intelligent water drops


The intelligent water drops algorithm (IWD) is inspired by natural rivers and how they find nearly
optimal paths to their destination. These near-optimal or optimal paths follow from actions and reactions
among the water drops and between the water drops and their riverbeds. In the IWD algorithm, several
artificial water drops cooperate to change their environment so that the optimal path is revealed as the
one with the least soil on its links. Solutions are constructed incrementally, making the IWD algorithm a
constructive population-based optimization algorithm.[19]

Krill herd algorithm


Krill herd (KH) is a biologically inspired algorithm proposed by Gandomi and Alavi in 2012.[20] The
KH algorithm simulates the herding behaviour of krill individuals. The minimum distances of each
individual krill from food and from the highest density of the herd are taken as the objective function
for the krill movement. The time-dependent position of each krill is determined by three main
factors:
1. movement induced by the presence of other individuals;
2. foraging activity; and
3. random diffusion.

Derivative information is not necessary in the KH algorithm because it uses a stochastic random search
instead of a gradient search. As with any metaheuristic, tuning the related parameters matters; a notable
aspect of KH is that it simulates krill behaviour closely and takes its coefficients from real-world
empirical studies, so only the time interval must be fine-tuned. This is an advantage over many other
nature-inspired algorithms. Validation studies indicate that the KH method is promising for optimization
tasks.

Magnetic optimization algorithm


The magnetic optimization algorithm (MOA), proposed by Tayarani in 2008,[21] is inspired by the
interaction among magnetic particles with different masses. Candidate solutions are particles whose
masses and magnetic fields are determined by their fitness, so better particles are more massive and have
stronger magnetic fields. The particles in the population apply attractive forces to each other and thus
move through the search space. Since better solutions have greater mass and stronger fields, inferior
particles tend to move toward fitter solutions, migrating to the areas around better local optima, where
they wander in search of better solutions.

Multi-swarm optimization
Multi-swarm optimization is a variant of particle swarm optimization (PSO) based on the use of multiple
sub-swarms instead of one (standard) swarm. The general approach in multi-swarm optimization is that
each sub-swarm focuses on a specific region while a specific diversification method decides where and
when to launch the sub-swarms. The multi-swarm framework is especially suited to the optimization of
multi-modal problems, where multiple (local) optima exist.

Particle swarm optimization


Particle swarm optimization (PSO) is a global optimization algorithm for problems in which a best
solution can be represented as a point or surface in an n-dimensional space. Hypotheses are plotted in
this space and seeded with an initial velocity, as well as a communication channel between the
particles.[22][23] Particles then move through the solution space and are evaluated according to some fitness
criterion after each timestep. Over time, particles accelerate toward those particles within their
communication grouping that have better fitness values. The main advantage of this approach over other
global minimization strategies such as simulated annealing is that the large number of members making up
the particle swarm makes the technique impressively resilient to the problem of local minima.
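
The velocity update described above can be sketched as follows. This is a minimal global-best PSO; the inertia and acceleration coefficients and the toy objective are invented for the example:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso_minimize(f, dim=2, n=20, w=0.7, c1=1.5, c2=1.5, iters=200, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]               # each particle's best known position
    gbest = min(pbest, key=f)                # the swarm's best known position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + pull toward personal best + pull toward global best
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=f)
    return gbest

best = pso_minimize(sphere)
```

Here every particle communicates with the whole swarm (a global topology); restricting the communication grouping to a neighbourhood slows convergence but further improves resilience to local minima.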

River formation dynamics


River formation dynamics (RFD) is a heuristic method similar to ant colony optimization (ACO).[24] It can
be seen as a gradient version of ACO, based on copying how water forms rivers by eroding the ground and
depositing sediments. As water transforms the environment, the altitudes of places are dynamically
modified and decreasing gradients are constructed. These gradients are followed by subsequent drops,
which create new gradients and reinforce the best ones. In this way, good solutions are given in the form
of decreasing altitudes. The method has been applied to several NP-complete problems (for example,
finding a minimum-distances tree and finding a minimum spanning tree in a variable-cost graph[25]). The
gradient orientation of RFD makes it especially suitable for such problems and provides a good trade-off
between finding good results and not spending much computational time. In fact, RFD fits particularly
well problems that consist of forming a kind of covering tree.[26]

Self-propelled particles
Self-propelled particles (SPP), also referred to as the Vicsek model, was introduced in 1995 by Vicsek et
al.[27] as a special case of the boids model introduced in 1986 by Reynolds.[28] A swarm is modelled in SPP
by a collection of particles that move with a constant speed but respond to a random perturbation by
adopting, at each time increment, the average direction of motion of the other particles in their local
neighbourhood.[29] SPP models predict that swarming animals share certain properties at the group level,
regardless of the type of animals in the swarm.[30] Swarming systems give rise to emergent behaviours
that occur at many different scales, some of which are turning out to be both universal and robust. It has
become a challenge in theoretical physics to find minimal statistical models that capture these
behaviours.[31][32][33]

Stochastic diffusion search


Stochastic diffusion search (SDS)[34][35] is an agent-based probabilistic global search and optimization
technique best suited to problems where the objective function can be decomposed into multiple
independent partial functions. Each agent maintains a hypothesis which is iteratively tested by evaluating a
randomly selected partial objective function parameterised by the agent's current hypothesis. In the
standard version of SDS such partial function evaluations are binary, making each agent active or
inactive. Information on hypotheses is diffused across the population via inter-agent communication.
Unlike the stigmergic communication used in ACO, SDS agents communicate hypotheses via a one-to-one
communication strategy analogous to the tandem running procedure observed in Leptothorax
acervorum.[36] A positive feedback mechanism ensures that, over time, the population of agents stabilises
around the global-best solution. SDS is both an efficient and robust global search and optimisation
algorithm, and it has been described extensively mathematically.[37][38][39] Recent work has involved
merging the global search properties of SDS with other swarm intelligence algorithms.[40]
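
SDS is often illustrated with a best-fit string search, where each partial function tests a single character of the hypothesised match. The sketch below is a minimal illustration; the agent count and iteration budget are arbitrary:

```python
import random

def sds_search(text, pattern, n_agents=50, iters=60, seed=0):
    """Each agent holds a hypothesised offset and tests ONE random character."""
    rng = random.Random(seed)
    span = len(text) - len(pattern) + 1
    hyp = [rng.randrange(span) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # test phase: evaluate one randomly chosen partial function per agent
        for i in range(n_agents):
            j = rng.randrange(len(pattern))
            active[i] = text[hyp[i] + j] == pattern[j]
        # diffusion phase: each inactive agent polls a random agent and either
        # copies its hypothesis (if active) or picks a fresh random one
        for i in range(n_agents):
            if not active[i]:
                k = rng.randrange(n_agents)
                hyp[i] = hyp[k] if active[k] else rng.randrange(span)
    return max(set(hyp), key=hyp.count)      # the largest cluster of agents

offset = sds_search("the quick brown fox jumps over the lazy dog", "jumps")
```

Agents at the true offset pass every partial test and so are never reallocated, while partially matching offsets keep losing agents; this asymmetry is the positive feedback that makes the population cluster on the best hypothesis.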

Evolutionary algorithms
Evolutionary algorithms are a sub-field of evolutionary computing.

Evolution strategies (ES, see Rechenberg, 1994) evolve individuals by means of mutation and
intermediate or discrete recombination. ES algorithms are designed particularly to solve problems in
the real-value domain. They use self-adaptation to adjust control parameters of the search. De-
randomization of self-adaptation has led to the contemporary Covariance Matrix Adaptation Evolution
Strategy (CMA-ES).
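
A classic pre-CMA form of the self-adaptation mentioned above is the (1+1)-ES with the 1/5th-success rule, sketched below. This is a minimal illustration; the adaptation window, factors, and toy objective are invented for the example:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def one_plus_one_es(f, x, sigma=1.0, iters=500, seed=0):
    """(1+1)-ES with the 1/5th-success rule for step-size self-adaptation."""
    rng = random.Random(seed)
    fx, successes = f(x), 0
    for t in range(1, iters + 1):
        child = [v + sigma * rng.gauss(0, 1) for v in x]   # Gaussian mutation
        fc = f(child)
        if fc < fx:
            x, fx = child, fc
            successes += 1
        if t % 20 == 0:
            # grow sigma if more than 1/5 of mutations succeeded, else shrink
            sigma *= 1.5 if successes / 20 > 0.2 else 0.6
            successes = 0
    return x

best = one_plus_one_es(sphere, [3.0, -4.0])
```

The rule adapts a single global step size; CMA-ES generalises this idea by adapting a full covariance matrix of the mutation distribution.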

Evolutionary programming (EP) involves populations of solutions with primarily mutation and
selection and arbitrary representations. It uses self-adaptation to adjust parameters and can include
other variation operations such as combining information from multiple parents.

Gene expression programming (GEP) also uses populations of computer programs. These complex
computer programs are encoded in simpler linear chromosomes of fixed length, which are afterwards
expressed as expression trees. Expression trees or computer programs evolve because the
chromosomes undergo mutation and recombination in a manner similar to the canonical GA. But
thanks to the special organization of GEP chromosomes, these genetic modifications always result in
valid computer programs.[25]

Genetic programming (GP) is a related technique popularized by John Koza in which computer
programs, rather than function parameters, are optimized. Genetic programming often uses tree-
based internal data structures to represent the computer programs for adaptation instead of
the list structures typical of genetic algorithms.

Grouping genetic algorithm (GGA) is an evolution of the GA in which the focus is shifted from
individual items, as in classical GAs, to groups or subsets of items.[26] The idea behind this GA
evolution, proposed by Emanuel Falkenauer, is that some complex problems, a.k.a. clustering or
partitioning problems, where a set of items must be split into disjoint groups of items in an optimal way,
are better solved by making the characteristics of the groups of items equivalent to genes. These kinds of
problems include bin packing, line balancing, clustering with respect to a distance measure, equal piles,
etc., on which classic GAs proved to perform poorly. Making genes equivalent to groups implies
chromosomes that are in general of variable length, and special genetic operators that manipulate whole
groups of items. For bin packing in particular, a GGA hybridized with the Dominance Criterion of
Martello and Toth is arguably the best technique to date.

Interactive evolutionary algorithms are evolutionary algorithms that use human evaluation. They
are usually applied to domains where it is hard to design a computational fitness function, for example,
evolving images, music, artistic designs and forms to fit users' aesthetic preference.

Swarm intelligence
Swarm intelligence is a sub-field of evolutionary computing.

Ant colony optimization (ACO) uses many ants (or agents) to traverse the solution space and find
locally productive areas. While usually inferior to genetic algorithms and other forms of local search, it
is able to produce results in problems where no global or up-to-date perspective can be obtained, and
thus the other methods cannot be applied.

Particle swarm optimization (PSO) is a computational method for multi-parameter optimization
which also uses a population-based approach. A population (swarm) of candidate solutions (particles)
moves in the search space, and the movement of the particles is influenced both by each particle's own
best known position and the swarm's global best known position. Like genetic algorithms, the PSO
method depends on information sharing among population members. In some problems PSO is more
computationally efficient than GAs, especially in unconstrained problems with continuous
variables.[27]

Intelligent water drops (IWD)[28] is a nature-inspired optimization algorithm inspired by natural
water drops, which change their environment to find the near-optimal or optimal path to their
destination. The memory is the riverbed, and what the water drops modify is the amount of soil on it.

Other evolutionary computing algorithms


Evolutionary computation is a sub-field of the metaheuristic methods.

Harmony search (HS) is an algorithm mimicking the behaviour of musicians in the process of
improvisation.

Memetic algorithm (MA), also called hybrid genetic algorithm among others, is a relatively new
evolutionary method in which local search is applied during the evolutionary cycle. The idea of memetic
algorithms comes from memes, which, unlike genes, can adapt themselves. In some problem areas they
have been shown to be more efficient than traditional evolutionary algorithms.

Bacteriologic algorithms (BA) are inspired by evolutionary ecology and, more particularly,
bacteriologic adaptation. Evolutionary ecology is the study of living organisms in the context of their
environment, with the aim of discovering how they adapt. Its basic premise is that in a heterogeneous
environment no single individual fits the whole environment, so one must reason at the population
level. BAs are also believed to be applicable to complex positioning problems (antennas for cell
phones, urban planning, and so on) and to data mining.[29]

Cultural algorithm (CA) consists of a population component, almost identical to that of the
genetic algorithm, plus a knowledge component called the belief space.

Differential search algorithm (DS) is inspired by the migration of superorganisms.[30]

Gaussian adaptation (normal or natural adaptation, abbreviated NA to avoid confusion with GA) is
intended for maximising the manufacturing yield of signal-processing systems. It may also be used for
ordinary parametric optimisation. It relies on a certain theorem valid for all regions of acceptability and
all Gaussian distributions. The efficiency of NA relies on information theory and a certain theorem of
efficiency, where efficiency is defined as information divided by the work needed to obtain it.[31]
Because NA maximises mean fitness rather than the fitness of the individual, the landscape is smoothed
such that valleys between peaks may disappear; it therefore tends to avoid local peaks in the fitness
landscape. NA is also good at climbing sharp crests by adaptation of the moment matrix, because NA
may maximise the disorder (average information) of the Gaussian while simultaneously keeping the
mean fitness constant.

Other metaheuristic methods


Metaheuristic methods broadly fall within stochastic optimisation methods.

Simulated annealing (SA) is a related global optimization technique that traverses the search space
by testing random mutations on an individual solution. A mutation that increases fitness is always
accepted. A mutation that lowers fitness is accepted probabilistically based on the difference in fitness
and a decreasing temperature parameter. In SA parlance, one speaks of seeking the lowest energy
instead of the maximum fitness. SA can also be used within a standard GA algorithm by starting with a
relatively high rate of mutation and decreasing it over time along a given schedule.
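
The acceptance rule described above can be sketched as follows. This is a minimal illustration on a toy objective; the starting temperature, cooling factor, and mutation scale are invented for the example:

```python
import math, random

def sphere(x):
    return sum(v * v for v in x)

def simulated_annealing(f, x, temp=10.0, cooling=0.99, iters=2000, seed=0):
    rng = random.Random(seed)
    fx = f(x)
    best, fbest = x[:], fx
    for _ in range(iters):
        cand = [v + rng.gauss(0, 0.5) for v in x]   # random mutation
        fc = f(cand)
        # improvements are always accepted; worsenings probabilistically,
        # with the probability shrinking as the temperature falls
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand[:], fc
        temp *= cooling                              # decreasing temperature
    return best

best = simulated_annealing(sphere, [4.0, -3.0])
```

Since this example minimises, "lower energy" here is simply a lower objective value, matching the SA parlance noted above.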

Tabu search (TS) is similar to simulated annealing in that both traverse the solution space by
testing mutations of an individual solution. While simulated annealing generates only one mutated
solution, tabu search generates many mutated solutions and moves to the solution with the lowest
energy of those generated. In order to prevent cycling and encourage greater movement through the
solution space, a tabu list is maintained of partial or complete solutions. It is forbidden to move to a
solution that contains elements of the tabu list, which is updated as the solution traverses the solution
space.
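
A minimal sketch of the tabu mechanism on bit strings follows. The objective, tabu tenure, and problem size are invented for the example; here the tabu list simply forbids re-flipping recently flipped bits:

```python
import random

def tabu_search(f, n_bits=12, tenure=5, iters=100, seed=0):
    """Maximise f over bit strings; recently flipped bits are tabu."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best, fbest = x[:], f(x)
    tabu = {}                                # bit index -> iteration it frees up
    for t in range(iters):
        # all single-bit-flip neighbours whose move is not on the tabu list
        moves = [i for i in range(n_bits) if tabu.get(i, 0) <= t]
        i = max(moves, key=lambda i: f(x[:i] + [1 - x[i]] + x[i + 1:]))
        x[i] = 1 - x[i]                      # take the best move, even if worsening
        tabu[i] = t + tenure                 # forbid reversing it for `tenure` steps
        if f(x) > fbest:
            best, fbest = x[:], f(x)
    return best

# toy objective: count of ones, with a bonus for the all-ones string
obj = lambda x: sum(x) + (10 if all(x) else 0)
best = tabu_search(obj)
```

Accepting the best non-tabu move even when it worsens the solution is what drives the search out of local optima, while the tenure prevents it from immediately undoing that move and cycling.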

Extremal optimization (EO): unlike GAs, which work with a population of candidate solutions,
EO evolves a single solution and makes local modifications to the worst components. This requires
that a suitable representation be selected which permits individual solution components to be assigned
a quality measure ("fitness"). The governing principle behind this algorithm is that
of emergent improvement through selectively removing low-quality components and replacing them
with a randomly selected component. This is decidedly at odds with a GA that selects good solutions
in an attempt to make better solutions.

Other stochastic optimisation methods

The cross-entropy (CE) method generates candidate solutions via a parameterized probability
distribution. The parameters are updated via cross-entropy minimization, so as to generate better
samples in the next iteration.
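
For a Gaussian sampling distribution, the cross-entropy update reduces to refitting the mean and standard deviation to the elite samples, as sketched below. The population size, elite count, and toy objective are invented for the example, and real implementations usually smooth the parameter updates:

```python
import random

def cross_entropy_minimize(f, dim=2, n=50, elite=10, iters=40, seed=0):
    rng = random.Random(seed)
    mu, sigma = [0.0] * dim, [3.0] * dim     # parameters of the sampling Gaussian
    for _ in range(iters):
        # sample a batch of candidates from the current distribution
        samples = [[rng.gauss(mu[d], sigma[d]) for d in range(dim)]
                   for _ in range(n)]
        samples.sort(key=f)
        top = samples[:elite]                # keep the elite fraction
        for d in range(dim):
            # refit mean and stddev to the elite: the CE update for Gaussians
            mu[d] = sum(s[d] for s in top) / elite
            var = sum((s[d] - mu[d]) ** 2 for s in top) / elite
            sigma[d] = max(var ** 0.5, 1e-6)
    return mu

best = cross_entropy_minimize(lambda x: sum((v - 2) ** 2 for v in x))
```

Each refit concentrates the distribution on the region that produced the best samples, which is exactly the "generate better samples in the next iteration" behaviour described above.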

Reactive search optimization (RSO) advocates the integration of sub-symbolic machine learning
techniques into search heuristics for solving complex optimization problems. The word reactive hints
at a ready response to events during the search through an internal online feedback loop for the self-
tuning of critical parameters. Methodologies of interest for reactive search include machine learning
and statistics, in particular reinforcement learning and active or query learning.
