
MEEM2036 Engineering Computing, part 7, ver 4

MBE 2036
One-Dimensional Unconstrained
Optimization
Part 7

Jia Pan
AC-1 Y6720
Email: jiapan@cityu.edu.hk


What's optimization?
Finding the best result or optimal solution of a problem
Examples:
Power vs. Heat
Design vs. Cost
Find x to minimize or maximize f(x)
f(x): the cost function or objective function
One-dimensional vs. Multidimensional

Global and Local Min/Maximizers


[Figure: a function with its global maximizer, global minimizer, local maximizer, and local minimizer labeled]
A global min/maximizer must also be a local min/maximizer, but the converse is NOT true.

1D/2D Functions

Where are the global/local min/maximizers?



Global and Local Min/Maximizers


A minimizer of f(x) is a maximizer of -f(x).

A maximizer of f(x) is a minimizer of -f(x).

This holds for both global and local min/maximizers.


When Global == Local?


A local minimizer of a general function is not necessarily a global minimizer.
But if the function is convex, then any local minimizer must be a global minimizer.

Convex function
f is convex if for all pairs of points x1 and x2 and all weights 0 <= c <= 1:
f(c*x1 + (1 - c)*x2) <= c*f(x1) + (1 - c)*f(x2)

That is, every line segment joining two points on the graph must lie above (or on) the curve.

2D convex function

Is this function convex?



One-Dimensional Unconstrained Optimization


Optimization vs. root finding:
Both involve (1) an initial guess and (2) a search for a point on a function.
The optimum is the point where f'(x) = 0; the root is the point where f(x) = 0.
At a stationary point (f'(x) = 0):
If f''(x) < 0, the point is a maximum
If f''(x) > 0, the point is a minimum
Some optimization methods find an optimum by solving the root problem of the first derivative of the function, i.e. f'(x) = 0.
However, f'(x) may be difficult to find analytically. Numerical methods may be needed to solve the problem.
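This duality can be illustrated in MATLAB with the built-in fminbnd and fzero (a sketch; the objective here is the one used later in Example 1):

f  = @(x) 2*sin(x) - x.^2/10;   % objective function
df = @(x) 2*cos(x) - x/5;       % its first derivative
xa = fminbnd(@(x) -f(x), 0, 4)  % maximize f by minimizing -f
xb = fzero(df, 1)               % same optimum, found as a root of f'(x) = 0

Both calls return x ≈ 1.4276, the maximizer of f.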

First Derivative Method for Finding Optimal Values

Given:
y = 6(x - 3)^2

First derivative of y:
dy/dx = 6 * 2 * (x - 3) = 12(x - 3)

For finding the optimal values, we need to solve:
12(x - 3) = 0, which gives x = 3

First Derivative Method for Finding Optimal Values

Given: y, a quartic polynomial

First derivative of y:
dy/dx = 12x^3 - 30x^2 - 24x + 12

For finding the optimal values, we need to solve:
12x^3 - 30x^2 - 24x + 12 = 0

We expect 3 roots from this cubic (some may be complex).

Sometimes it may not be easy to solve the equation analytically!

Matlab roots
>> roots([12, -30, -24, 12])
ans =
    3.0485
   -0.9092
    0.3608
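To classify these stationary points, the second derivative can be evaluated at each root with polyder and polyval (a minimal sketch):

p  = [12, -30, -24, 12];   % coefficients of dy/dx
r  = roots(p);             % the three stationary points
p2 = polyder(p);           % coefficients of the second derivative
for k = 1:numel(r)
    if polyval(p2, r(k)) > 0
        fprintf('x = %7.4f is a local minimum\n', r(k));
    else
        fprintf('x = %7.4f is a local maximum\n', r(k));
    end
end

Here x = 3.0485 and x = -0.9092 are local minima and x = 0.3608 is a local maximum.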

Golden-Section

Golden-section
A segment is divided in the golden section when the ratio of the whole (a + b) to the larger part a equals the ratio of the larger part a to the smaller part b.

Given golden-section: (a + b) / a = a / b

Answer: a / b = (1 + sqrt(5)) / 2 ≈ 1.61803

Proof: let r = a/b. Then (a + b)/a = 1 + 1/r = r, so r^2 - r - 1 = 0 and r = (1 + sqrt(5))/2.

Golden-Section Search
The Golden-Section search is a simple, general-purpose, single-variable search technique.
It is similar in spirit to the bisection approach for locating roots.
As with the bisection method, we can start by defining an interval that contains a single answer.
That is, the interval should contain a single maximum or minimum; such a function is called unimodal.

The method starts with two initial guesses xL and xU that bracket one local optimal value of f(x).
Golden-Section Search (cont)
Then, two interior points x1 and x2 are chosen according to the golden ratio R:
d = R (xU - xL) = 0.61803 (xU - xL) = 0.61803 l0    (Eq 8.1)
x1 = xL + d
x2 = xU - d
where R = (sqrt(5) - 1) / 2 ≈ 0.61803

[Figure: f(x) on [xL, xU] with interior points x2 < x1; widths l0 = xU - xL, l1 = d, and l2 = l0 - l1]

For the golden ratio:  l1 / l0 = l2 / l1    (Eq 8.2)
l0 = l1 + l2 = xU - xL    (Eq 8.3)

Golden-Section Search (cont)


Substitute Eq 8.3 into Eq 8.2:
l1 / (l1 + l2) = l2 / l1    (Eq 8.4)

Take the reciprocal of Eq 8.4:
(l1 + l2) / l1 = l1 / l2    (Eq 8.5)

Let R = l2 / l1. Then Eq 8.5 becomes 1 + R = 1/R, i.e.
R^2 + R - 1 = 0    (Eq 8.6)

R = (-b + sqrt(b^2 - 4ac)) / (2a) = (-1 + sqrt(1 + 4)) / 2 = (sqrt(5) - 1) / 2 ≈ 0.61803

Golden-Section Search (cont)


The function is evaluated at these two interior
points. Two results can occur for finding a
maximum value:
i. If f(x1) > f(x2), the range of x from xL to x2 can be eliminated because it does not contain the maximum. Estimated xoptimal = x1. For the next round of calculation, in this case:
xLnew = x2old
x2new = x1old
fnew(x2) = fold(x1)
dnew = R * dold
x1new = xLnew + dnew
Then calculate fnew(x1)

Golden-Section Search (cont)


[Figure: case i, f(x1) > f(x2); the subinterval from xL to x2 is eliminated]

The width of the eliminated subinterval (from xL to x2) is:
l0 - d = l0 - R*l0 = (1 - R)*l0    (Eq 8.7)

Golden-Section Search (cont)


[Figure: case i after relabeling, with xLnew = x2old, x2new = x1old, xUnew = xUold, and x2new = xUnew - dnew]

Prove that x2new = x1old:
The original dold = l0new and xUnew = xUold.
dnew = R * l0new = R * dold = R (R * l0old) = R^2 * l0old    (Eq 8.8)
From Eq 8.6, R^2 = 1 - R.
Eq 8.8 can then be written as dnew = (1 - R) * l0old, which implies dnew = l0old - dold.
Therefore x2new = xUnew - dnew = xUold - (l0old - dold) = xLold + dold = x1old, as xUnew = xUold.


Golden-Section Search
ii. If f(x2) > f(x1), then the range of x from x1 to xU can be eliminated because it does not contain the maximum. Estimated xoptimal = x2. For the next round of calculation:
xUnew = x1old
x1new = x2old
fnew(x1) = fold(x2)
dnew = R * dold
x2new = xUnew - dnew
Then calculate fnew(x2)
iii. Terminate when εa < εs.
Because the original x1 and x2 were chosen using the golden ratio, we do not have to re-calculate all the function values for the next iteration; only one new function evaluation is needed.
For finding the minimum, change the test f(x1) > f(x2) to f(x1) < f(x2), and f(x2) > f(x1) to f(x2) < f(x1).
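The whole procedure fits in a few lines of MATLAB. Below is a minimal sketch for the maximum case (the function name goldmax and its argument list are my own; es is the stopping criterion εs in percent):

function [xopt, fopt, iter, ea] = goldmax(f, xL, xU, es, maxit)
% Golden-Section search for the maximum of f(x) on [xL, xU]
R  = (sqrt(5) - 1) / 2;             % golden ratio R = 0.61803...
d  = R * (xU - xL);                 % Eq 8.1
x1 = xL + d;  x2 = xU - d;
f1 = f(x1);   f2 = f(x2);
for iter = 1:maxit
    xint = xU - xL;                 % interval width before elimination
    if f1 > f2                      % case i: eliminate [xL, x2]
        xopt = x1;  fopt = f1;
        xL = x2;
        x2 = x1;  f2 = f1;          % reuse old x1 as the new x2
        d  = R * d;
        x1 = xL + d;  f1 = f(x1);   % only one new function evaluation
    else                            % case ii: eliminate [x1, xU]
        xopt = x2;  fopt = f2;
        xU = x1;
        x1 = x2;  f1 = f2;          % reuse old x2 as the new x1
        d  = R * d;
        x2 = xU - d;  f2 = f(x2);
    end
    ea = (1 - R) * abs(xint / xopt) * 100;   % Eq 8.11
    if ea < es, return, end
end
end

Called on Example 1 below as goldmax(@(x) 2*sin(x) - x.^2/10, 0, 4, 5, 50), it stops after 8 iterations with xopt ≈ 1.4427, matching the iteration table in Example 1.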

Golden-Section Search (cont)


[Figure: case ii, f(x2) > f(x1); the subinterval from x1 to xU is eliminated]

Golden-Section Search (cont)


[Figure: case ii after relabeling, with xLnew = xLold, x1new = x2old, xUnew = x1old, dnew = R*dold, l0new = dold, and x2new = xUnew - dnew]

Golden-Section Search - Error Estimate

When an iteration in finding the optimal value is complete, the optimum will fall in one of two intervals:
If f(x2) > f(x1), the optimal value will be in the lower interval, i.e. between xL, x2 and x1.
If f(x1) > f(x2), the optimal value will be in the upper interval, i.e. between x2, x1 and xU.
Because the interior points are symmetric, either case can be used to define the error.
We look at the case where the optimal point is in the upper interval.

Golden-Section Search - Error Estimate

[Figure: the maximum lies in the upper interval, very close to x2; the lower subinterval is eliminated]

Considering the case when the maximum point is very close to x2:
Δx = x1 - x2
   = [xL + R(xU - xL)] - [xU - R(xU - xL)]
   = (xL - xU) + 2R(xU - xL)
   = (2R - 1)(xU - xL)
   = 0.236 (xU - xL)    (Eq 8.9)

Golden-Section Search - Error Estimate

[Figure: the maximum lies at xU; the lower subinterval is eliminated]

Considering the case when the optimal point is at xU:
Δx = xU - x1
   = xU - xL - R(xU - xL)
   = (1 - R)(xU - xL)
   = 0.382 (xU - xL)    (Eq 8.10)

Golden-Section Search - Error Estimate

Based on Eq 8.9 and Eq 8.10, the maximum possible error in each iteration is (1 - R)(xU - xL). Therefore εa can be calculated as follows:

εa = (1 - R) * |(xU - xL) / xoptimal| * 100%    (Eq 8.11)

Golden-Section Search - Summary

[Flowchart: summary of the Golden-Section Search algorithm, spread over two slides]

Example 1
Problem Statement: Use the Golden-section
search method to find the maximum of:
f(x) = 2 sin(x) - x^2/10

within the interval xL = 0 and xU = 4. The true optimal value is xoptimum = 1.4276. εs = 5%.

Solution:
d = ((sqrt(5) - 1)/2) (4 - 0) = 2.472
x1 = 0 + 2.472 = 2.472
x2 = 4 - 2.472 = 1.528

Example 1
f(x) = 2 sin(x) - x^2/10

[Figure: plot of f(x) on [0, 4] with xL, x2, x1 and xU marked]

Example 1 (cont)
f(x2) = f(1.528) = 2 sin(1.528) - 1.528^2/10 = 1.765
f(x1) = f(2.472) = 0.630
As f(x2) > f(x1), the maximum value is in the interval defined by xL, x2 and x1:
xLnew = xLold = 0    xUnew = x1old = 2.472    x1new = x2old = 1.528
f(x1new) = f(x2old) = 1.765
dnew = 0.61803 (2.472 - 0) = 0.61803 dold = 1.528
x2new = xUnew - dnew = 2.472 - 1.528 = 0.944
f(x2new) = f(0.944) = 1.531

Example 1 (cont)
i   xL      x2      f(x2)   x1      f(x1)   xU      d       εa (%)
1   0       1.5279  1.7647  2.4721  0.6300  4       2.4721  99.99
2   0       0.9443  1.5310  1.5279  1.7647  2.4721  1.5279  61.80
3   0.9443  1.5279  1.7647  1.8885  1.5432  2.4721  0.9443  38.20
...
7   1.3050  1.3901  1.7742  1.4427  1.7755  1.5279  0.1378  5.90
8   1.3901  1.4427  1.7755  1.4752  1.7732  1.5279  0.0851  3.64
Can we directly estimate how many iterations we need?
Number of Iterations Required for Example 1

The true optimum is xoptimum = 1.4276. From Eq 8.11, the interval width at iteration n is R^(n-1) (xU - xL), so
εa ≈ (1 - R) R^(n-1) (xU - xL) / xoptimal * 100%
With xU - xL = 4 and xoptimal ≈ 1.4276, requiring εa <= 5% gives 0.61803^(n-1) <= 0.0467, i.e. n >= 7.4.
So 8 iterations are needed, which agrees with the table above (εa = 3.64% at i = 8).
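The same count can be computed directly from this relation (a sketch using the Example 1 data; note it uses the true optimum, which is normally unknown in advance):

R  = (sqrt(5) - 1) / 2;
l0 = 4;  xopt = 1.4276;  es = 5;                         % Example 1 data
n  = ceil(1 + log((es/100)*xopt/((1 - R)*l0)) / log(R))  % n = 8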

Caution: When would Golden-Section Search fail to get a global optimum?

Golden-section search only works on unimodal functions if you want a global optimum.

Examples:
Case 1: xL = 0, xU = 4
Case 2: xL = 3, xU = 6.28
Case 3: xL = 0, xU = 6.28 (the search converges to a local optimum)

[Figures: plots of a multimodal f(x) with each bracketing interval]

Question:

Does Golden-section search always return a local optimal solution? Why?

We discussed tri-section; can we use bi-section to find the optimum, similar to root finding? Why?

Newton's Method
The Newton-Raphson method is an open method for finding the root x of a function such that f(x) = 0. The formula for finding x is:
xi+1 = xi - f(xi) / f'(xi)    (Eq 6.2)

Optimization: f'(xoptimal) = 0
The optimization problem can be treated as solving the root problem of the first derivative of the function, i.e. f'(x) = 0.
The formula for finding x can be modified as:
xi+1 = xi - f'(xi) / f''(xi)    (Eq 8.12)
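A minimal MATLAB sketch of Eq 8.12 (the function name newtopt and its stopping test are assumptions; the error definition matches the next slide):

function [x, iter, ea] = newtopt(df, d2f, x0, es, maxit)
% Newton's method for an optimum: solve f'(x) = 0 via Eq 8.12
x = x0;  ea = 100;
for iter = 1:maxit
    xold = x;
    x = x - df(x) / d2f(x);          % Eq 8.12
    ea = abs((x - xold) / x) * 100;  % approximate percent error
    if ea < es, return, end
end
end

For Example 2 below, newtopt(@(x) 2*cos(x) - x/5, @(x) -2*sin(x) - 1/5, 2.5, 0.01, 50) converges to x ≈ 1.4276 in a few iterations.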

Newton's Method (cont)

Terminate the calculation process when εa < εs:
εa = |(xr,new - xr,old) / xr,new| * 100%

Newton's method is an open method similar to the Newton-Raphson method.
It does not require initial guesses that bracket the optimum.
Like the Newton-Raphson method, depending on the nature of the function and the quality of the initial guess, the method may diverge, i.e. it may not find the answer.
However, when it works, it is faster than the Golden-Section Search method.
Usually, it is a good idea to check the sign of the second derivative to confirm that the technique is converging on the desired type of optimum (f''(x) < 0 at a maximum, f''(x) > 0 at a minimum).

Newton's Method (cont)

Sometimes, hybrid techniques can be used to overcome the divergence problem:
A bracketing approach is used when far from the optimum, and
an open method is used when near the optimum.
If derivatives cannot be conveniently evaluated, a secant-like version of Newton's method can be developed by using finite-difference approximations for the derivative evaluations, as sketched below.
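A sketch of that secant-like variant, using centered finite differences in place of f' and f'' (the step size h and the test function are assumptions):

f = @(x) 2*sin(x) - x.^2/10;   % assumed objective (same as the examples)
h = 1e-4;                      % assumed perturbation size
x = 2.5;                       % initial guess
for iter = 1:50
    d1 = (f(x + h) - f(x - h)) / (2*h);          % ~ f'(x)
    d2 = (f(x + h) - 2*f(x) + f(x - h)) / h^2;   % ~ f''(x)
    xold = x;
    x = x - d1 / d2;                             % Eq 8.12 with approximations
    if abs((x - xold) / x) * 100 < 0.01, break, end
end
x   % converges to ~ 1.4276

No analytical derivatives are needed; the cost is a few extra function evaluations per iteration.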

Example 2 - Newton's Method

Problem Statement: Use Newton's method to find the maximum of:
f(x) = 2 sin(x) - x^2/10

with an initial guess of x0 = 2.5. The true optimal value is xoptimum = 1.4276.

Solution:
f'(x) = 2 cos(x) - x/5
f''(x) = -2 sin(x) - 1/5

Example 2 - Newton's Method (cont)

The above equations can be substituted into Eq 8.12:
xi+1 = xi - (2 cos(xi) - xi/5) / (-2 sin(xi) - 1/5)

Substituting the initial guess yields:
x1 = 2.5 - (2 cos(2.5) - 2.5/5) / (-2 sin(2.5) - 1/5) = 0.99508
f(x1) = 1.57859

Example 2 - Newton's Method (cont)

εa = |(xr,new - xr,old) / xr,new| * 100% = |(x1 - x0) / x1| * 100% = |(0.99508 - 2.5) / 0.99508| * 100% = 151.2361%

The second iteration gives:
x2 = 0.99508 - (2 cos(0.99508) - 0.99508/5) / (-2 sin(0.99508) - 1/5) = 1.46901
f(x2) = 1.77385

εa = |(x2 - x1) / x2| * 100% = |(1.46901 - 0.99508) / 1.46901| * 100% = 32.2619%

Example 2 - Newton's Method (cont)

i   x        f(x)     f'(x)     f''(x)    εa (%)
0   2.5      0.57194  -2.10229  -1.39694  0
1   0.99508  1.57859   0.88985  -1.87761  151.2361
2   1.46901  1.77385  -0.09058  -2.18965  32.2619
3   1.42764  1.77573  -0.00020  -2.17954  2.8978
4   1.42755  1.77573   0.00000  -2.17952  0.0063

Example 2 - Newton's Method (cont)

True percent relative error:
i   Golden-Section Search εt (%)   Newton's method εt (%)
1   7.0258                         30.2970
2   7.0258                         2.9007
3   7.0258                         0.0028
4   7.0258                         0.0035
5   7.0258
6   1.0577
7   1.0577
8   1.0577

Learning Outcomes
After this lecture, the student should be able to understand the following:

The basic principle of one-dimensional optimization
The basic principle of the Golden-Section Search method
The basic principle of Newton's method