Introduction to Non-Linear Optimization
Optimization Tree
What is Optimization?
Figure 6: Global versus local optimization.
Figure 7: A local optimum coincides with the global optimum when the function is convex.
Mathematical Background
The slope, or gradient, of the objective function f represents the direction in which the function increases or decreases most rapidly:

df/dx = lim_{Δx→0} [f(x + Δx) − f(x)] / Δx = lim_{Δx→0} Δf/Δx
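The limit above is the basis for finite-difference derivative approximations used throughout numerical optimization. A minimal Python sketch (the helper name forward_diff, the step size h, and the test function are illustrative assumptions, not from the slides):

```python
def forward_diff(f, x, h=1e-6):
    """Approximate df/dx at x with the forward-difference quotient
    (f(x + h) - f(x)) / h, mirroring the limit definition above."""
    return (f(x + h) - f(x)) / h

# Example: for f(x) = x^2 the exact derivative at x = 3 is 6.
slope = forward_diff(lambda x: x**2, 3.0)
```

Shrinking h drives the quotient toward the true derivative, exactly as in the limit definition.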
Optimization Algorithm
Optimization Methods
Deterministic
• Direct Search – uses objective function values only to locate the minimum.
• Gradient Based – uses first- or second-order derivatives of the objective function.
• A maximization problem is handled by minimizing the negated objective, −f(x).
Single Variable
• Newton-Raphson – gradient-based technique (first-order condition, FOC).
• Golden Section Search – iterative method that reduces the bracketing interval at each step.
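The interval-shrinking idea behind golden-section search can be sketched in a few lines of Python (the bracket, tolerance, and test function below are illustrative; the slides' own examples use MATLAB):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b] by shrinking the bracket by the
    golden ratio each iteration until it is narrower than tol."""
    invphi = (math.sqrt(5) - 1) / 2   # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c               # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d               # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: minimum of (x - 2)^2 on [0, 5] lies at x = 2.
xmin = golden_section_min(lambda x: (x - 2)**2, 0.0, 5.0)
```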
Multivariable Techniques (make use of single-variable techniques, especially Golden Section)
• Unconstrained Optimization
a) Powell's Method – fits a quadratic (degree-2) polynomial to the objective; non-gradient based.
b) Gradient Based – Steepest Descent (FOC) or least mean squares (LMS).
c) Hessian Based – Conjugate Gradient (FOC) and BFGS (SOC).
© 2008 Solutions 4U Sdn Bhd. All Rights Reserved
Fundamentals of Optimization
• Constrained Optimization
a) Indirect approach – transform the constrained problem into an unconstrained one.
b) Exterior Penalty Function (EPF) and Augmented Lagrange Multiplier methods.
c) Direct methods – Sequential Linear Programming (SLP), Sequential Quadratic Programming (SQP), and the Generalized Reduced Gradient (GRG) method.
Numerical Optimization
Newton-Raphson Method
1. Root solver – system of nonlinear equations
(MATLAB Optimization Toolbox – fsolve)

x_{i+1} = x_i − f(x_i) / f′(x_i)

2. One-dimensional optimizer
(MATLAB Optimization Toolbox)

x_{i+1} = x_i − f′(x_i) / f″(x_i)
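The one-dimensional update x_{i+1} = x_i − f′(x_i)/f″(x_i) can be sketched in Python (the derivatives are supplied by hand; the quadratic test function is an illustrative assumption of mine):

```python
def newton_opt(df, d2f, x0, n_iter=20):
    """Locate a stationary point of f by iterating
    x <- x - f'(x)/f''(x), given the first two derivatives."""
    x = x0
    for _ in range(n_iter):
        x = x - df(x) / d2f(x)
    return x

# Example: f(x) = x^2 - 4x has f'(x) = 2x - 4, f''(x) = 2; minimum at x = 2.
xstar = newton_opt(lambda x: 2*x - 4, lambda x: 2.0, 0.0)
```

For a quadratic objective the iteration lands on the stationary point in a single step.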
Steepest Gradient Ascent/Descent Methods

x_{i+1} = x_i + λ d_i

Δf = ∇f(x_i) · d_i

∇f(x_i) · d_i is the rate of change of f along the search direction d_i.
d_i = −∇f(x_i) achieves the steepest descent.
d_i = ∇f(x_i) achieves the steepest ascent.
Steepest Gradient
Solve the following function for two steps of steepest ascent:

f(x, y) = 2.25xy + 1.75y − 1.5x^2 − 2y^2

[Figure: contour plot of f(x, y) over 0 ≤ x, y ≤ 1.2, showing the search path climbing toward the maximum.]

The partial derivatives can be evaluated at the initial guesses, x = 1 and y = 1:

∂f/∂x = −3x + 2.25y = −3(1) + 2.25(1) = −0.75
∂f/∂y = 2.25x − 4y + 1.75 = 2.25(1) − 4(1) + 1.75 = 0

Therefore, the search direction is −0.75i.

f(1 − 0.75h, 1) = 0.5 + 0.5625h − 0.84375h^2

This can be differentiated, set equal to zero, and solved for h* = 0.33333. Therefore, the result for the first iteration is x = 1 − 0.75(0.33333) = 0.75 and y = 1 + 0(0.33333) = 1. For the second iteration, the partial derivatives can be evaluated as

∂f/∂x = −3(0.75) + 2.25(1) = 0
∂f/∂y = 2.25(0.75) − 4(1) + 1.75 = −0.5625

Therefore, the search direction is −0.5625j.

f(0.75, 1 − 0.5625h) = 0.59375 + 0.316406h − 0.63281h^2

This can be differentiated, set equal to zero, and solved for h* = 0.25. Therefore, the result for the second iteration is x = 0.75 + 0(0.25) = 0.75 and y = 1 + (−0.5625)(0.25) = 0.859375.
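The two hand iterations can be reproduced programmatically. The Python sketch below is my own (the slides use MATLAB): it performs steepest ascent with an exact line search, exploiting the fact that for this quadratic f the derivative of φ(h) = f(x + h∇f) is linear in h, so its root follows from two samples:

```python
def grad(x, y):
    """Gradient of f(x, y) = 2.25xy + 1.75y - 1.5x^2 - 2y^2."""
    return -3.0*x + 2.25*y, 2.25*x - 4.0*y + 1.75

def steepest_ascent(x, y, steps=2):
    """Steepest ascent with an exact line search along the gradient."""
    for _ in range(steps):
        gx, gy = grad(x, y)

        def dphi(h):
            # Directional derivative of f along g at x + h*g.
            px, py = x + h*gx, y + h*gy
            pgx, pgy = grad(px, py)
            return pgx*gx + pgy*gy

        d0, d1 = dphi(0.0), dphi(1.0)
        h = d0 / (d0 - d1)        # root of the linear dphi(h)
        x, y = x + h*gx, y + h*gy
    return x, y

# Two steps from (1, 1) reproduce the hand result: (0.75, 0.859375).
point = steepest_ascent(1.0, 1.0)
```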
%chapra14.5 ... Contd
clear
clc
clf
% Contour plot of f over [0, 1.2] x [0, 1.2]
ww1=0:0.01:1.2;
ww2=ww1;
[w1,w2]=meshgrid(ww1,ww2);
J=-1.5*w1.^2+2.25*w2.*w1-2*w2.^2+1.75*w2;
cs=contour(w1,w2,J,70);
%clabel(cs);
hold on
grid on
w1=1; w2=1;
for i=1:10
    syms h
    dfw1=-3*w1(i)+2.25*w2(i);            % df/dx at current point
    dfw2=2.25*w1(i)-4*w2(i)+1.75;        % df/dy at current point
    fw1=-1.5*(w1(i)+dfw1*h).^2 + 2.25*(w2(i)+dfw2*h).*(w1(i)+ ...
        dfw1*h)-2*(w2(i)+dfw2*h).^2+1.75*(w2(i)+dfw2*h);
    J=-1.5*w1(i)^2+2.25*w2(i)*w1(i)-2*w2(i)^2+1.75*w2(i)
    g=solve(fw1);                        % roots of the quadratic in h
    h=sum(g)/2;                          % vertex (optimal step) is their midpoint
    w1(i+1)=w1(i)+dfw1*h;
    w2(i+1)=w2(i)+dfw2*h;
    plot(w1,w2)
    pause(0.05)
end
MATLAB OPTIMIZATION TOOLBOX
%chaprafun.m
function J=chaprafun(x)
w1=x(1);
w2=x(2);
J=-(-1.5*w1^2+2.25*w2*w1-2*w2^2+1.75*w2);

%startchapra.m
clc
clear
x0=[1 1];
options=optimset('LargeScale','off','Display','iter','Maxiter', ...
    20,'MaxFunEvals',100,'TolX',1e-3,'TolFun',1e-3);
[x,fval]=fminunc(@chaprafun,x0,options)
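As a MATLAB-free cross-check of the fminunc result: since this objective is quadratic with a constant Hessian, a single Newton step from any starting point lands exactly on the stationary point. A pure-Python sketch (the function name and the Cramer's-rule solve are my own):

```python
def newton_step_2d(x0, y0):
    """One Newton step for f(x, y) = 2.25xy + 1.75y - 1.5x^2 - 2y^2.
    The Hessian H = [[-3, 2.25], [2.25, -4]] is constant, so a single
    step solves grad f = 0 exactly from any starting point."""
    gx = -3.0*x0 + 2.25*y0
    gy = 2.25*x0 - 4.0*y0 + 1.75
    det = (-3.0)*(-4.0) - 2.25*2.25           # det(H) = 6.9375
    # Solve H [dx, dy]^T = [-gx, -gy]^T by Cramer's rule.
    dx = ((-gx)*(-4.0) - 2.25*(-gy)) / det
    dy = ((-3.0)*(-gy) - 2.25*(-gx)) / det
    return x0 + dx, y0 + dy

# From (1, 1) this lands on the maximizer (21/37, 28/37), approximately
# (0.5676, 0.7568), which is where fminunc on the negated f converges.
opt = newton_step_2d(1.0, 1.0)
```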
In the above equations θ1 = 0, since link 1 lies along the x-axis; the other three angles are time-varying.

1st angular velocity derivative
• If the input is applied to link 2 (the DC motor), then ω2 is the input to the system.

In matrix form, the Newton-Raphson update for the joint variables q is

q_{i+1} = q_i − [f′(q_i)]^{-1} f(q_i)

the vector counterpart of the scalar update x_{i+1} = x_i − f(x_i)/f′(x_i), where f′(q_i) is the Jacobian of f evaluated at q_i.
%fouropt.m
function f=fouropt(x)
the=0;                          % input angle theta2 (rad)
r1=12; r2=4; r3=10; r4=7;       % link lengths
% Four-bar loop-closure equations; x(1)=theta3, x(2)=theta4
f=-[r2*cos(the)+r3*cos(x(1))-r1*cos(0)-r4*cos(x(2));
    r2*sin(the)+r3*sin(x(1))-r1*sin(0)-r4*sin(x(2))];
%startfouropt.m
clc
clear
x0=[0.1 0.1];
options=optimset('LargeScale','off','Display','iter','Maxiter', ...
    200,'MaxFunEvals',100,'TolX',1e-8,'TolFun',1e-8);
[x,fval]=fsolve(@fouropt,x0,options);
theta3=x(1)*57.3    % convert rad to deg
theta4=x(2)*57.3
% See also: Foursimmechm.m, foursimmech.mdl and possol4.m
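Without MATLAB's fsolve, the same loop-closure equations can be solved by a hand-rolled two-dimensional Newton-Raphson in Python. This is a sketch under the slide's link lengths; the starting guess is moved closer to the solution branch because plain Newton, unlike fsolve's trust-region method, is sensitive to the near-singular Jacobian at [0.1, 0.1]:

```python
import math

R1, R2, R3, R4 = 12.0, 4.0, 10.0, 7.0   # link lengths from fouropt.m
TH2 = 0.0                                # input angle theta2

def loop_closure(th3, th4):
    """Residuals of the four-bar loop-closure equations (theta1 = 0)."""
    f1 = R2*math.cos(TH2) + R3*math.cos(th3) - R1 - R4*math.cos(th4)
    f2 = R2*math.sin(TH2) + R3*math.sin(th3) - R4*math.sin(th4)
    return f1, f2

def solve_position(th3, th4, tol=1e-12, max_iter=50):
    """2-D Newton-Raphson using the analytic Jacobian of loop_closure."""
    for _ in range(max_iter):
        f1, f2 = loop_closure(th3, th4)
        if abs(f1) < tol and abs(f2) < tol:
            break
        # Jacobian: rows (f1, f2), columns (d/d th3, d/d th4).
        a, b = -R3*math.sin(th3),  R4*math.sin(th4)
        c, d =  R3*math.cos(th3), -R4*math.cos(th4)
        det = a*d - b*c
        th3 -= ( d*f1 - b*f2) / det      # [th3, th4] -= J^-1 f
        th4 -= (-c*f1 + a*f2) / det
    return th3, th4

# Angles in degrees, as in startfouropt.m (x*57.3):
th3, th4 = solve_position(0.8, 1.6)
theta3_deg, theta4_deg = math.degrees(th3), math.degrees(th4)
```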
References:
1) Steven C. Chapra and Raymond P. Canale, Numerical Methods for Engineers, McGraw-Hill, Singapore, 2006.
2) Kalyanmoy Deb, Optimization for Engineering Design, Prentice Hall, New Delhi, 1996.
3) Optimization Toolbox for Use with MATLAB, User's Guide Ver. 3, The MathWorks, Natick, MA, USA, 2006.