
Linear Programming (LP) is defined as the problem of maximizing or minimizing a linear function subject to a system of linear constraints. The constraints may be equalities or inequalities. The linear function is called the objective function, and in two variables it has the form f(x, y) = ax + by + c. The solution set of the system of inequalities is the set of possible or feasible solutions, which are points of the form (x, y).

In a linear programming problem, any solution x that satisfies the constraints Ax = b and x ≥ 0 is called a feasible solution.
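As a concrete illustration, here is a minimal sketch that solves a small LP with SciPy's linprog (the objective and constraint coefficients are invented for this example; linprog minimizes by default, so a maximization is handled by negating the objective):

from scipy.optimize import linprog

# Maximize f(x, y) = 3x + 5y subject to
#   x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x >= 0, y >= 0.
c = [-3, -5]                       # negated, because linprog minimizes
A_ub = [[1, 2],                    # x + 2y <= 14
        [-3, 1],                   # 3x - y >= 0, rewritten as -3x + y <= 0
        [1, -1]]                   # x - y <= 2
b_ub = [14, 0, 2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)             # optimal feasible point and maximized value

Every point satisfying the constraints is feasible; the solver returns the feasible point with the best objective value.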
Nonlinear programming (NLP) involves minimizing or maximizing a nonlinear objective function subject to bound constraints, linear constraints, or nonlinear constraints, where the constraints can be inequalities or equalities. Example problems in engineering include analyzing design tradeoffs, selecting optimal designs, and computing optimal trajectories.

For many general nonlinear programming problems, the objective function has many locally
optimal solutions; finding the best of all such minima, the global solution, is often difficult.
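For instance, here is a minimal NLP sketch using SciPy's general-purpose minimize (the objective, constraint, and starting point are made up for illustration; like most NLP solvers, it guarantees only a locally optimal solution):

from scipy.optimize import minimize

# Minimize (x - 1)^2 + (y - 2.5)^2 subject to the
# nonlinear constraint x^2 + y^2 <= 4 (stay inside a disk).
def objective(v):
    x, y = v
    return (x - 1)**2 + (y - 2.5)**2

constraints = [{"type": "ineq", "fun": lambda v: 4 - v[0]**2 - v[1]**2}]
res = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)
print(res.x, res.fun)              # a local optimum; random restarts can help find better ones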
Quadratic programming (QP) is the process of solving a special type of mathematical
optimization problem—specifically, a (linearly constrained) quadratic optimization problem, that is, the
problem of optimizing (minimizing or maximizing) a quadratic function of several variables subject to
linear constraints on these variables. Quadratic programming is a particular type of nonlinear
programming.

Quadratic programming is the mathematical problem of finding a vector x that minimizes a quadratic function f(x) = (1/2) x^T Q x + c^T x subject to linear constraints such as Ax ≤ b.
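As a sketch, the QP below is solved with SciPy's general SLSQP routine (SciPy has no dedicated QP solver; specialized QP solvers would exploit the quadratic structure, and Q, c, and the constraints here are invented for illustration):

import numpy as np
from scipy.optimize import minimize

# Minimize (1/2) x^T Q x + c^T x subject to x1 + x2 >= 1 and x >= 0.
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])         # symmetric positive definite
c = np.array([-1.0, -1.0])

def f(x):
    return 0.5 * x @ Q @ x + c @ x

cons = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1}]
res = minimize(f, x0=np.zeros(2), method="SLSQP",
               constraints=cons, bounds=[(0, None), (0, None)])
print(res.x, res.fun)              # optimal point and minimized quadratic value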
A Geometric Program (GP) is a type of non-linear optimization problem whose
objective and constraints have a particular form.

The decision variables must be strictly positive (non-zero and non-negative) quantities. This is a good fit for engineering design equations (which are often constructed to have only positive quantities), but any model with variables of unknown sign (such as forces and velocities without a predefined direction) may be difficult to express as a GP. Such models might be better expressed as Signomials.

More precisely, GP objectives and inequalities are formed out of monomials and posynomials. In the context of GP, a monomial is defined as a function of the form f(x) = c x1^a1 x2^a2 ... xn^an, where the coefficient c is positive and the exponents ai are arbitrary real numbers; a posynomial is a sum of such monomials.
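To make the form concrete, here is a small sketch (the coefficients and exponents are illustrative) that evaluates a monomial and a posynomial built from monomial terms:

import math

def monomial(x, c, a):
    # c * x1^a1 * x2^a2 * ... * xn^an, with c > 0, real exponents a_i,
    # and strictly positive variables x_i
    assert c > 0 and all(xi > 0 for xi in x)
    return c * math.prod(xi**ai for xi, ai in zip(x, a))

def posynomial(x, terms):
    # a posynomial is a sum of monomials
    return sum(monomial(x, c, a) for c, a in terms)

# e.g. f(x, y) = 0.5 * x^2 * y^(-1) + 3 * x^0.3 * y
print(posynomial([2.0, 4.0], [(0.5, [2, -1]), (3.0, [0.3, 1])]))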
Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.

- Involves more than one objective function to be minimized or maximized

- The answer is a set of solutions that defines the best trade-off between the competing objectives

Generally, multi-objective optimization methods are classified based on three different approaches:

• A priori approach – the decision-maker provides preferences before the optimization process.

• A posteriori approach – the optimization process determines a set of Pareto solutions, and then the
decision-maker chooses a solution from the set of solutions provided by the algorithm.

• Interactive approach – there is progressive interaction between the decision-maker and the solver, i.e.
knowledge gained during the optimization process helps the decision-maker to define the preferences.

- In a multi-objective optimization problem, the goodness of a solution is determined by the dominance relation: one solution dominates another if it is no worse in every objective and strictly better in at least one.
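The dominance check is straightforward to state in code; this sketch assumes every objective is to be minimized:

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization assumed):
    # no worse in every objective and strictly better in at least one.
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

print(dominates([1.0, 2.0], [1.5, 2.0]))   # True: better in one, equal in the other
print(dominates([1.0, 3.0], [1.5, 2.0]))   # False: mutually non-dominated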


Single-Objective Optimization Techniques

1. Calculus-based techniques - Numerical methods, also called calculus-based methods, use a set of necessary and sufficient conditions that must be satisfied by the solution of the optimization problem. They can be further subdivided into two categories, viz. direct and indirect methods. Direct search methods perform hill climbing in the function space by moving in a direction related to the local gradient. In indirect methods, the solution is sought by solving the set of equations that results from setting the gradient of the objective function to zero. Calculus-based methods are local in scope and also assume the existence of derivatives. These constraints severely restrict their application to many real-life problems, although they can be very efficient in a small class of unimodal problems.

2. Enumerative techniques - Enumerative techniques involve evaluating each and every point of the finite, or discretized infinite, search space in order to arrive at the optimal solution [24, 112]. Dynamic programming is a well-known example of enumerative search. It is obvious that enumerative techniques will break down on problems of even moderate size and complexity, because it may become simply impossible to search all the points in the space.

3. Random techniques - Guided random search techniques are based on enumerative methods, but they use additional information about the search space to guide the search toward promising regions. These can be further divided into two categories, namely single-point search and multiple-point search, depending on whether the search uses just one point or several points at a time. Guided random search methods are useful in problems where the search space is huge, multimodal, and discontinuous, and where a near-optimal solution is acceptable. These are robust schemes, and they usually provide near-optimal solutions across a wide spectrum of problems. In the remaining part of this chapter, we focus on such methods of optimization for both single and multiple objectives. A sketch contrasting the three families on a common test function follows.
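To contrast the three families on a common problem, here is a minimal sketch (the multimodal test function and all parameter values are invented for illustration) that minimizes the same one-dimensional function with a calculus-based direct method, an enumerative grid search, and a guided random multi-start search:

import random

# Test function: f(x) = x^4 - 3x^2 + x, multimodal on [-2, 2]
# (a shallow local minimum near x = 1.2, the global minimum near x = -1.3).
def f(x):
    return x**4 - 3*x**2 + x

def grad_f(x):
    return 4*x**3 - 6*x + 1

# 1. Calculus-based (direct): gradient descent from a single starting point.
#    Local in scope: the answer depends on where we start.
def gradient_descent(x0, lr=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# 2. Enumerative: evaluate every point of a discretized search space.
#    Reliable here, but breaks down quickly as the space grows.
def grid_search(lo=-2.0, hi=2.0, n=4001):
    pts = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(pts, key=f)

# 3. Guided random (multiple-point): random restarts plus local refinement,
#    useful on huge, multimodal spaces when a near-optimal answer suffices.
def random_multistart(trials=20, seed=0):
    rng = random.Random(seed)
    starts = [rng.uniform(-2, 2) for _ in range(trials)]
    return min((gradient_descent(x0) for x0 in starts), key=f)

print(gradient_descent(1.0))   # gets trapped in the shallower local minimum
print(grid_search())           # approximates the global minimum
print(random_multistart())     # usually finds the global minimum as well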
