
MEEN5313

Part 2: Roots and Optimization
Xue Yang, Ph.D.
Assistant Professor of Mechanical
Engineering
Texas A&M University-Kingsville

9/28/16

MEEN5313 Numerical Methods in ME

Part 2: Roots and Optimization


CHAPTER 5: Roots: Bracketing Methods
CHAPTER 6: Roots: Open Methods
CHAPTER 7: Optimization


Chapter 5: Bracketing Methods


Medical studies have established that a bungee jumper's chances of sustaining a significant vertebrae injury increase significantly if the free-fall velocity exceeds 36 m/s after 4 s of free fall. Your boss at the bungee-jumping company wants you to determine the mass at which this criterion is exceeded, given a drag coefficient of 0.25 kg/m.
The jumper's velocity is governed by dv/dt = g - (cd/m)v^2, where v = downward vertical velocity (m/s), t = time (s), g = the acceleration due to gravity (9.81 m/s^2), cd = a lumped drag coefficient (kg/m), and m = the jumper's mass (kg). The solution to this differential equation is

v(t) = sqrt(g*m/cd) * tanh( sqrt(g*cd/m) * t )

Subtracting v(t) from both sides gives the function to be zeroed:

f(m) = sqrt(g*m/cd) * tanh( sqrt(g*cd/m) * t ) - v(t)

Now we can see that the answer to the problem is the value of m that makes the function equal to zero. Hence, we call this a roots problem.
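As a quick numerical check of this setup, the sketch below evaluates f(m) at a few masses to expose the sign change that brackets the root. It is a Python illustration (the course codes in MATLAB); the function name `f` and the scanned masses are my own choices, while the parameter values come from the problem statement.

```python
import math

def f(m, t=4.0, v=36.0, g=9.81, cd=0.25):
    """Residual f(m): analytical free-fall velocity minus the 36 m/s limit."""
    return math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - v

# Scan a coarse range of masses; a sign change brackets the root.
for m in (50, 100, 150, 200):
    print(m, round(f(m), 4))
```

f(50) is negative and f(200) is positive, so the sought mass lies between them.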

5.2 GRAPHICAL METHODS


A simple method for obtaining an estimate of the root of the
equation f (x) = 0 is to make a plot of the function and
observe where it crosses the x axis. This point, which
represents the x value for which f (x) = 0, provides a rough
approximation of the root.
The mass limit is
roughly 145 kg.


5.2 GRAPHICAL METHODS


Fig. 5.1 shows a number of ways in which roots
can occur (or be absent) in an interval prescribed
by a lower bound xl and an upper bound xu.
Illustration of a number of general ways that a root
may occur in an interval prescribed by a lower bound
xl and an upper bound xu .
Parts (a) and (c) indicate that if both f (xl) and f (xu)
have the same sign, either there will be no roots or
there will be an even number of roots within the
interval.
Parts (b) and (d) indicate that if the function has
different signs at the end points, there will be an odd
number of roots in the interval.

5.2 GRAPHICAL METHODS


Although these generalizations are usually true,
there are cases where they do not hold. For
example, functions that are tangential to the x
axis (Fig. 5.2a) and discontinuous functions
(Fig. 5.2b) can violate these principles.
Illustration of some exceptions to the general cases
depicted in Fig. 5.1. (a) Multiple roots that occur
when the function is tangential to the x axis. For this
case, although the end points are of opposite signs,
there are an even number of axis interceptions for
the interval.
(b) Discontinuous functions where end points of
opposite sign bracket an even number of roots.
Special strategies are required for determining the
roots for these cases.

5.4 Bisection
The first step in bisection is to guess two values of the unknown (in the present problem, m) that give values of f(m) with different signs: [50, 200].
The initial estimate of the root xr lies at the midpoint of the interval [50, 200]:
xr = (50 + 200)/2 = 125
Note that the exact value of the root is 142.7376. This means that the value of 125 calculated here has a true percent relative error of
|εt| = |(125 - 142.7376)/142.7376| × 100% = 12.43%

5.4 Bisection
Next we compute the product of the function value at the lower bound and at the midpoint:
f(50) f(125) = (-4.579)(-0.409) = 1.871 > 0
No sign change occurs between the lower bound and the midpoint. Consequently, the root must be located in the upper interval between 125 and 200. Therefore, we create a new interval by redefining the lower bound as 125. At this point, the new interval extends from xl = 125 to xu = 200. A revised root estimate can then be calculated as
xr = (125 + 200)/2 = 162.5
The process can be repeated to obtain refined estimates. For example,
f(125) f(162.5) = (-0.409)(0.359) = -0.147 < 0
Therefore, the root is now in the lower interval between 125 and 162.5. The upper bound is redefined as 162.5, and the root estimate for the third iteration is calculated as
xr = (125 + 162.5)/2 = 143.75

5.4 Bisection
We might decide to terminate when the error drops below, say, 0.5%. Basing that test on the true error, as in the example above, is flawed: those error estimates relied on knowledge of the true root, which is never known in practice. Instead, we can terminate on an approximate percent relative error, estimated as
εa = |(xr_new - xr_old)/xr_new| × 100%
where xr_new is the root estimate from the present iteration and xr_old is the estimate from the previous iteration.


Bisection example
Use the bisection method to find the root of f(x) = sin(5x) + x^2 near -0.5, until εa < 10^-4.
MATLAB plot and fzero:

clear
f1=@(x) sin(5.*x)+x.^2;
x=-1.5:0.01:1.5; y=f1(x);
plot(x,y); grid on
xlabel('x'); ylabel('y'); title('sin(5x)+x^2')
[root_fzero,fx_fzero]=fzero(f1,-0.5);
% root_fzero = -0.5637
% fx_fzero = -1.1102e-16
Bisection hand calculation (first five iterations shown; the process continues until εa < 10^-4):

Iteration   xl        xu        xr        f(xl)     f(xu)     f(xr)     εa
1           -0.6000   -0.5000   -0.5500   0.21888   -0.34847  -0.07916  1.00000
2           -0.6000   -0.5500   -0.5750   0.21888   -0.07916  0.06718   0.04348
3           -0.5750   -0.5500   -0.5625   0.06718   -0.07916  -0.00678  0.02222
4           -0.5750   -0.5625   -0.5687   0.06718   -0.00678  0.03002   0.01099
5           -0.5687   -0.5625   -0.5656   0.03002   -0.00678  0.01150   0.00553
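The bisection procedure tabulated above can be packaged as a short routine. This is an illustrative Python version (the course uses MATLAB); the function name `bisect` and its stopping logic are my own packaging of the steps described in the slides.

```python
import math

def bisect(f, xl, xu, es=1e-4, max_iter=100):
    """Bisection with an approximate relative error stopping criterion."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = (xl + xu) / 2          # midpoint estimate
        ea = abs((xr - xr_old) / xr)
        if f(xl) * f(xr) < 0:       # sign change in lower subinterval
            xu = xr
        else:                       # otherwise root is in upper subinterval
            xl = xr
        if ea < es:
            break
    return xr

f1 = lambda x: math.sin(5 * x) + x**2
root = bisect(f1, -0.6, -0.5)
```

Starting from the hand-calculation bracket [-0.6, -0.5], this converges to the same root fzero reports, about -0.5637.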

5.5 FALSE POSITION


False position (also called the linear interpolation method) is another well-known bracketing method. Rather than bisecting the interval, it locates the root by joining f(xl) and f(xu) with a straight line (Fig. 5.8). The intersection of this line with the x axis represents an improved estimate of the root:
xr = xu - f(xu)(xl - xu) / (f(xl) - f(xu))
This xr then replaces whichever bound has a function value of the same sign as f(xr), so the values of xl and xu always bracket the true root.

Example 5.5 The False-Position Method


Problem Statement. Use false position to solve the same bungee-jumper problem approached graphically and with bisection.
Solution:
Initiate the computation with guesses of xl = 50 and xu = 200.
First iteration:
xl = 50, f(xl) = -4.579387
xu = 200, f(xu) = 0.860291
xr = 200 - 0.860291(50 - 200)/(-4.579387 - 0.860291) = 176.2773
which has a true relative error of 23.5%.
Second iteration:
f(xl) f(xr) = -2.592732
Therefore, the root lies in the first subinterval, and xr becomes the upper limit for the next iteration, xu = 176.2773.
xl = 50, f(xl) = -4.579387
xu = 176.2773, f(xu) = 0.566174
xr = 176.2773 - 0.566174(50 - 176.2773)/(-4.579387 - 0.566174) = 162.3828
which has true and approximate relative errors of 13.76% and 8.56%, respectively. Additional iterations can be performed to refine the estimates of the root.
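The false-position update can be packaged the same way as bisection, swapping the midpoint for the chord intersection. A Python sketch for illustration (the function name `false_position` is mine), applied to the bungee-jumper function from this example:

```python
import math

def false_position(f, xl, xu, es=1e-4, max_iter=100):
    """False position: root of the chord through (xl, f(xl)) and (xu, f(xu))."""
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        ea = abs((xr - xr_old) / xr)
        if f(xl) * f(xr) < 0:   # root bracketed by [xl, xr]
            xu = xr
        else:                   # root bracketed by [xr, xu]
            xl = xr
        if ea < es:
            break
    return xr

g, cd, t, vmax = 9.81, 0.25, 4.0, 36.0
fm = lambda m: math.sqrt(g * m / cd) * math.tanh(math.sqrt(g * cd / m) * t) - vmax
root = false_position(fm, 50.0, 200.0)
```

With the same [50, 200] bracket the iterates 176.2773, 162.3828, ... converge toward 142.7376.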

EXAMPLE 5.6 A Case Where Bisection Is Preferable to False Position
Although false position often performs better than bisection, there are cases where it does not. As in the following example, there are certain cases where bisection yields superior results.
Use bisection and false position to locate the root of f(x) = x^10 - 1 between x = 0 and 1.3.
Using bisection, the results can be summarized as in the first table. Thus, after five iterations, the true error is reduced to less than 2%. For false position, a very different outcome is obtained, as in the second table: after five iterations, the true error has only been reduced to about 59%.
Insight into these results can be gained by examining a plot of the function.

Usually, if f(xl) is much closer to 0 than f(xu), then the root should be much closer to xl than to xu. Because of the shape of the present function, the opposite is true.

False position example
Use the false-position method to find the root of f(x) = sin(5x) + x^2 near 1.0, until εa < 10^-4.
MATLAB plot and fzero:
f1=@(x) sin(5.*x)+x.^2;
[root_fzero1,fx_fzero1]=fzero(f1,1.0);
% root_fzero1 = 0.9874
% fx_fzero1 = -1.1102e-16

Iteration   xl        xu    xr        f(xl)     f(xu)    f(xr)     εa
1           0.9       1.1   0.94986   -0.16753  0.50446  -0.09708  0.052493
2           0.94986   1.1   0.97409   -0.09708  0.50446  -0.03868  0.024875
3           0.97409   1.1   0.98306   -0.03868  0.50446  -0.01308  0.009121
4           0.98306   1.1   0.98601   -0.01308  0.50446  -0.00418  0.002998
5           0.98601   1.1   0.98695   -0.00418  0.50446  -0.00131  0.000948
6           0.98695   1.1   0.98724   -0.00131  0.50446  -0.00041  0.000296
7           0.98724   1.1   0.98733   -0.00041  0.50446  -0.00013  9.22E-05


Chapter 6: Open Methods

Bracketing methods:
- Must use two bounds bracketing the root
- Always converge

Open methods:
- Use only a single starting value, or two starting values that do not necessarily bracket the root
- Sometimes diverge
- When the open methods converge, they usually converge much faster than bracketing methods

Graphical depiction of the fundamental difference between the (a) bracketing and (b) and (c) open methods for root location. In (a), which is bisection, the root is constrained within the interval prescribed by xl and xu. In contrast, for the open method depicted in (b) and (c), which is Newton-Raphson, a formula is used to project from xi to xi+1 in an iterative fashion. Thus the method can either (b) diverge or (c) converge rapidly, depending on the shape of the function and the value of the initial guess.


6.1 Simple fixed-point iteration

Rearrange the function f(x) = 0 so that x is on the left-hand side of the equation:
x = g(x)    (6.1)
Eq. (6.1) can be used to compute a new estimate x_i+1 as expressed by the iterative formula
x_i+1 = g(x_i)    (6.2)
The approximate error for this equation can be determined using the error estimator:
εa = |(x_i+1 - x_i)/x_i+1| × 100%

EXAMPLE 6.1 Simple Fixed-Point Iteration
Use simple fixed-point iteration to locate the root of f(x) = e^(-x) - x, rearranged as x = e^(-x).
Each iteration brings the estimate closer to the true value of the root: 0.56714329.
Notice that the true percent relative error for each iteration of Example 6.1 is roughly proportional (for this case, by a factor of about 0.5 to 0.6) to the error from the previous iteration. This property, called linear convergence, is characteristic of fixed-point iteration.
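A minimal sketch of the fixed-point loop, in Python for illustration (`fixed_point` is my own helper name); the rearrangement x = e^(-x) is the one used in Example 6.1:

```python
import math

def fixed_point(g, x0, es=1e-6, max_iter=200):
    """Iterate x = g(x) until the approximate relative error falls below es."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

# f(x) = e^(-x) - x = 0 rearranged as x = e^(-x)
root = fixed_point(lambda x: math.exp(-x), 0.0)
```

Successive errors shrink by a factor of about 0.57 per iteration, the linear convergence noted above.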

Convergence of simple fixed-point iteration

It can be shown that the error for any iteration is linearly proportional to the error from the previous iteration multiplied by the absolute value of the slope of g:
E_i+1 = g'(ξ) E_i
If |g'| < 1, the errors decrease with each iteration; for |g'| > 1 the errors grow.
Graphical depiction of (a) and (b) convergence and (c) and (d) divergence of simple fixed-point iteration. Graphs (a) and (c) are called monotone patterns, whereas (b) and (d) are called oscillating or spiral patterns. Note that convergence occurs when |g'(x)| < 1.

Convergence of simple fixed-point iteration

The equation x^3 + 4x^2 - 10 = 0 has a unique root in [1, 2]. There are many ways to derive an equation of the form x = g(x):
(a) x = g1(x) = x - x^3 - 4x^2 + 10
(b) x = g2(x) = (10/x - 4x)^(1/2)
(c) x = g3(x) = 0.5(10 - x^3)^(1/2)
(d) x = g4(x) = [10/(4 + x)]^(1/2)
(e) x = g5(x) = x - (x^3 + 4x^2 - 10)/(3x^2 + 8x)
The actual root is 1.365230013.
(a) and (b) diverge. (c), (d), and (e) converge at different speeds; apparently (e) is the best. Why?


Convergence of simple fixed-point iteration

For g1(x) = x - x^3 - 4x^2 + 10, g1'(x) = 1 - 3x^2 - 8x, so |g1'(x)| > 1 for all x in [1, 2]. It diverges.
With g2(x) = (10/x - 4x)^(1/2), |g2'(1.365)| = 3.43 > 1. It diverges.
For g3(x) = 0.5(10 - x^3)^(1/2), |g3'(2)| ≈ 2.12, so the condition |g3'(x)| < 1 fails on [1, 2], but |g3'(x)| < 1 does hold on [1, 1.5]. It converges.
For g4(x) = [10/(4 + x)]^(1/2), |g4'(x)| = sqrt(10)/[2(4 + x)^(3/2)] < 0.15 for all x in [1, 2], so it converges faster than g3.
g5 converges fastest; the reason will be shown in the next section (g5 is the Newton-Raphson iteration for this equation).
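The convergence ranking above can be reproduced by simply iterating each convergent rearrangement from the same starting point. A Python sketch for illustration (the helper name `iterate` is mine); the iteration counts are chosen only to expose the different speeds:

```python
def iterate(g, x0, n):
    """Apply x = g(x) n times starting from x0."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# Three convergent rearrangements of x^3 + 4x^2 - 10 = 0 on [1, 2]
g3 = lambda x: 0.5 * (10 - x**3) ** 0.5                         # |g3'| ~ 0.51 near root
g4 = lambda x: (10 / (4 + x)) ** 0.5                            # |g4'| < 0.15 on [1, 2]
g5 = lambda x: x - (x**3 + 4 * x**2 - 10) / (3 * x**2 + 8 * x)  # Newton form

root = 1.365230013
```

g3 needs dozens of iterations for full accuracy, g4 far fewer, and g5 only a handful, matching the slope analysis.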

6.2 NEWTON-RAPHSON
Commonly called Newton's method. The formula is
x_i+1 = x_i - f(x_i)/f'(x_i)    (6.6)

EXAMPLE 6.2 Newton-Raphson Method
Use the Newton-Raphson method to estimate the root of f(x) = e^(-x) - x, employing an initial guess of x0 = 0.
Solution:
f'(x) = -e^(-x) - 1
Starting with an initial guess of x0 = 0, the iterative equation can be applied repeatedly. The approach rapidly converges on the true root.
Although the Newton-Raphson method is often very efficient, there are situations where it performs poorly.
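Eq. (6.6) translates directly into a short loop. A Python sketch for illustration (the function name `newton` is mine), applied to Example 6.2's f(x) = e^(-x) - x:

```python
import math

def newton(f, df, x0, es=1e-8, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

root = newton(lambda x: math.exp(-x) - x,   # f(x)
              lambda x: -math.exp(-x) - 1,  # f'(x)
              0.0)
```

Starting from x0 = 0 the first step already lands at 0.5, and convergence to 0.56714329 is quadratic thereafter.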

EXAMPLE 6.3 A Slowly Converging Function with Newton-Raphson

Determine the positive root of f(x) = x^10 - 1 using the Newton-Raphson method and an initial guess of x = 0.5.
Solution:
The Newton-Raphson formula for this case is
x_i+1 = x_i - (x_i^10 - 1)/(10 x_i^9)
Thus, after the first poor prediction, the technique is converging on the true root of 1, but at a very slow rate.
Graphical depiction of the Newton-Raphson method for a case with slow convergence. The inset shows how a near-zero slope initially shoots the solution far from the root. Thereafter, the solution very slowly converges on the root.

Four cases where the Newton-Raphson method exhibits poor convergence.
There is no general convergence criterion for Newton-Raphson. Its convergence depends on the nature of the function and on the accuracy of the initial guess.

6.3 Secant methods

Calculating the derivative may be inconvenient. Instead, the derivative can be approximated by a backward finite difference:
f'(x_i) ≈ (f(x_i-1) - f(x_i)) / (x_i-1 - x_i)
This approximation can be substituted into Eq. (6.6) to yield the following iterative equation:
x_i+1 = x_i - f(x_i)(x_i-1 - x_i) / (f(x_i-1) - f(x_i))    (6.8)
Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate f'(x), the modified secant method:
x_i+1 = x_i - δ x_i f(x_i) / (f(x_i + δ x_i) - f(x_i))    (6.9)
where δ = a small perturbation fraction.
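Both variants can be sketched in a few lines. This Python illustration is mine (the course implements them in MATLAB later in this part), and the demonstration function x^2 - 2, with root sqrt(2), is my own choice:

```python
def secant(f, x0, x1, es=1e-8, max_iter=50):
    """Secant method: finite-difference slope from the last two iterates."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if abs((x2 - x1) / x2) < es:
            return x2
        x0, x1 = x1, x2
    return x1

def modified_secant(f, x0, delta=1e-6, es=1e-8, max_iter=50):
    """Modified secant: perturb x by delta*x to approximate the derivative."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x if x != 0 else delta
        x_new = x - dx * f(x) / (f(x + dx) - f(x))
        if abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

r1 = secant(lambda x: x * x - 2, 1.0, 2.0)
r2 = modified_secant(lambda x: x * x - 2, 1.0)
```

Both converge superlinearly without ever evaluating an analytical derivative.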

Open method example

Use simple fixed-point iteration, Newton-Raphson, secant, and modified secant methods to find the root of
ln(x) - 1/x = 0,  εs = 0.0001

MATLAB plot:
clear
f=@(x) log(x) - 1./x;
x=0.5:0.01:3;
y=f(x);
plot(x,y)
grid on
xlabel('x')
ylabel('y')
title('y=ln(x)-1/x')

The root is located near 1.75 (initial guess).

Open method example: simple fixed-point iteration

ln(x) - 1/x = 0 rearranged as x = exp(1/x): converges.
ln(x) - 1/x = 0 rearranged as x = 1/ln(x): diverges.

Iteration   xr       εa
1           1.7708   0.01175
2           1.759    -0.00673
3           1.7657   0.003795
4           1.7618   -0.00216
5           1.764    0.001223
6           1.7628   -0.00069
7           1.7635   0.000393
8           1.7631   -0.00022
9           1.7633   0.000127
10          1.7632   -7.18E-05
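The contrast between the two rearrangements can be reproduced directly; a Python sketch for illustration (the helper name `run` is mine). The slope test from the previous section explains why x = 1/ln(x) fails:

```python
import math

def run(g, x0, n):
    """Iterate x = g(x) n times and return the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

# Convergent rearrangement: x = exp(1/x); |g'| ≈ 0.57 < 1 at the root,
# giving the oscillating (spiral) convergence seen in the table.
conv = run(lambda x: math.exp(1 / x), 1.75, 30)

# For x = 1/ln(x), |g'(x)| = 1/(x (ln x)^2) evaluated at the root
# x ≈ 1.7632 exceeds 1, so that form diverges.
gprime_div = 1 / (1.7632 * math.log(1.7632) ** 2)
```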

Open method example: Newton-Raphson

f(x) = ln(x) - 1/x;  f'(x) = 1/x + 1/x^2

Iteration   xr       εa
1           1.7632   0.00749
2           1.7632   3.84E-5

Open method example: Secant and modified Secant

ln(x) - 1/x = 0
Secant: initial guesses 1.75 and 1.85
Modified secant: initial guess 1.75, δ = 0.01

Secant:
Iteration   xr       εa
1           1.7637   0.00777
2           1.7632   -0.00025797
3           1.7632   8.3026E-06

Modified secant:
Iteration   xr       εa
1           1.7632   0.00749
2           1.7632   -1.2482e-05

MATLAB example

Write a MATLAB script using Newton's, secant, and modified secant methods to solve the bungee-jumper equation. Then verify your result using the MATLAB built-in function fzero.
Newton: initial guess 100.0.
Secant: initial guesses 100.0 and 110.0.
Modified secant: initial guess 100.0 and δ = 10^-6.
Tolerance: 10^-6.

%univariate function
syms x
f(x) = sqrt(9.81/0.25*x) * tanh(4*sqrt(9.81*0.25/x)) - 36;
df = diff(f,x);

disp('Newton method')
tolerance = 1e-6;
a_error = 1;
x_old = 100.0;
iter = 0;
while a_error > tolerance
    x_new = x_old - double(f(x_old)) / double(df(x_old));
    a_error = abs( (x_old - x_new)/x_new );
    x_old = x_new;
    iter = iter + 1;
    disp([num2str(iter),' ',num2str(x_new),' ',num2str(a_error)])
end

disp('Secant method:')
% anonymous function
f1 = @(x) sqrt(9.81/0.25*x) * tanh(4*sqrt(9.81*0.25/x)) - 36;
a_error = 1;
x0 = 100.0;
x1 = 110.0;
iter = 0;
while a_error > tolerance
    x2 = x1 - f1(x1)*(x0-x1)/(f1(x0)-f1(x1));
    a_error = abs( (x2 - x1)/x2 );
    x0 = x1;
    x1 = x2;
    iter = iter + 1;
    disp([num2str(iter),' ',num2str(x2),' ',num2str(a_error)])
end

disp('Modified Secant method:')
delta = 1e-6;
a_error = 1;
x_old = 100.0;
iter = 0;
while a_error > tolerance
    x_new = x_old - delta*f1(x_old)/(f1(x_old+delta)-f1(x_old));
    a_error = abs( (x_old - x_new)/x_new );
    x_old = x_new;
    iter = iter + 1;
    disp([num2str(iter),' ',num2str(x_new),' ',num2str(a_error)])
end

disp('MATLAB builtin function fzero')
options = optimset('display','iter');
[x,fx] = fzero(f1,100.0,options);

Results

Newton method
1 131.2027 0.23782
2 141.8974 0.075369
3 142.7332 0.0058552
4 142.7376 3.1228e-05
5 142.7376 8.7898e-10

Secant method:
1 133.9019 0.1785
2 140.911 0.049741
3 142.6357 0.012092
4 142.7365 0.00070577
5 142.7376 8.23e-06
6 142.7376 5.2997e-09

Modified Secant method:
1 131.2027 0.23782
2 141.8974 0.075369
3 142.7332 0.0058552
4 142.7376 3.1227e-05
5 142.7376 8.7287e-10

MATLAB builtin function fzero
Search for an interval around 100 containing a sign change:
Func-count   a         f(a)       b         f(b)       Procedure
1            100       -1.19738   100       -1.19738   initial interval
3            97.1716   -1.30864   102.828   -1.09143   search
5            96        -1.35638   104       -1.04902   search
7            94.3431   -1.42563   105.657   -0.99043   search
9            92        -1.5272    108       -0.910256  search
11           88.6863   -1.67864   111.314   -0.801928  search
13           84        -1.91001   116       -0.65804   search
15           77.3726   -2.27705   122.627   -0.471206  search
17           68        -2.89551   132       -0.235622  search
19           54.7452   -4.05173   145.255   0.0506855  search

Search for a zero in the interval [54.7452, 145.255]:
Func-count   x         f(x)           Procedure
19           145.255   0.0506855      initial
20           144.137   0.0283659      interpolation
21           142.725   -0.000252758   interpolation
22           142.738   2.23259e-06    interpolation
23           142.738   1.74175e-10    interpolation
24           142.738   0              interpolation

Zero found in the interval [54.7452, 145.255]

6.6 Polynomials
Polynomials are a special type of nonlinear algebraic equation of the general form
fn(x) = a1 x^n + a2 x^(n-1) + ... + a(n-1) x^2 + an x + a(n+1)    (6.12)
where n is the order of the polynomial, and the a's are constant coefficients.

Unfortunately, simple techniques like bisection and Newton-Raphson are not well suited for determining all the roots of higher-order polynomials. Use the MATLAB built-in function roots.
EXAMPLE 6.8 Using MATLAB to Manipulate Polynomials and Determine Their Roots
Problem Statement. Use the following equation to explore how MATLAB can be employed to manipulate polynomials:
f5(x) = x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25
Note that this polynomial has three real roots: 0.5, -1.0, and 2, and one pair of complex roots: 1 ± 0.5i.
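The same computation can be done outside MATLAB; NumPy's `roots` uses the same companion-matrix idea. A Python sketch, assuming NumPy is available:

```python
import numpy as np

# Coefficients of f5(x) = x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25,
# highest power first, exactly as MATLAB's roots expects them.
a = [1, -3.5, 2.75, 2.125, -3.875, 1.25]
r = np.roots(a)               # eigenvalues of the companion matrix
residuals = np.polyval(a, r)  # f5 evaluated at each computed root, ~0
```

The computed roots match 2, -1, 0.5, and 1 ± 0.5i, with residuals on the order of 10^-14.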

MATLAB roots
f5(x) = x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25
Solution. Polynomials are entered into MATLAB by storing the coefficients as a row vector. For example, entering the following lines stores the coefficients in the vector a and computes the roots:

% define the polynomial by its coefficients
clear
a = [1 -3.5 2.75 2.125 -3.875 1.25];
x = roots(a)
polyval(a,x)

x =
   2.0000 + 0.0000i
  -1.0000 + 0.0000i
   1.0000 + 0.5000i
   1.0000 - 0.5000i
   0.5000 + 0.0000i

ans =
   1.0e-13 *
   0.3042 + 0.0000i
   0.0000 + 0.0000i
   0.0067 + 0.0289i
   0.0067 - 0.0289i
   0.0022 + 0.0000i

polyval confirms that the polynomial evaluates to essentially zero (on the order of 10^-14) at each computed root.

Chapter 7: Optimization

Objectives: how optimization can be used to determine minima and maxima of both one-dimensional and multidimensional functions. The determination of such extreme values is referred to as optimization.

7.1 INTRODUCTION AND BACKGROUND

EXAMPLE 7.1: Determining the Optimum Analytically by Root Location (analytic method)
A bungee jumper can be projected upward at a specified velocity. If it is subject to linear drag, its altitude as a function of time can be computed as
z = z0 + (m/c)(v0 + mg/c)(1 - e^(-(c/m)t)) - (mg/c)t    (7.1)
where z = altitude (m) above the earth's surface (defined as z = 0), z0 = the initial altitude (m), m = mass (kg), c = a linear drag coefficient (kg/s), v0 = initial velocity (m/s), and t = time (s). Note that for this formulation, positive velocity is considered to be in the upward direction. Given the following parameter values: g = 9.81 m/s^2, z0 = 100 m, v0 = 55 m/s, m = 80 kg, and c = 15 kg/s, Eq. (7.1) can be used to calculate the jumper's altitude. As displayed in Fig. 7.1, the jumper rises to a peak elevation of about 190 m at about t = 4 s.


Solution

Setting the derivative of Eq. (7.1) to zero gives
dz/dt = (v0 + mg/c) e^(-(c/m)t) - mg/c = 0
This is actually the equation for the velocity. Find the root of the above equation. The analytic solution is
t = (m/c) ln(1 + c v0/(mg)) = (80/15) ln(1 + 15(55)/(80 × 9.81)) = 3.83166 s
We can verify that the result is a maximum by differentiating to obtain the second derivative
d^2z/dt^2 = -(c/m)(v0 + mg/c) e^(-(c/m)t) < 0
A function of a single variable illustrating the difference between roots and optima.


7.2 ONE-DIMENSIONAL OPTIMIZATION

7.2.1 Golden-Section Search
Golden ratio: φ = (1 + sqrt(5))/2 = 1.61803...
As with bisection, we can start by defining an interval that contains a single minimum. For the golden-section search, the two intermediate points are chosen according to the golden ratio:
x1 = xl + d    (7.6)
x2 = xu - d    (7.7)
where d = (φ - 1)(xu - xl)    (7.8)
The function is evaluated at these two interior points. Two results can occur:
1. If, as in (a), f(x1) < f(x2), then f(x1) is the minimum, and the domain of x to the left of x2, from xl to x2, can be eliminated because it does not contain the minimum. For this case, x2 becomes the new xl for the next round.
2. If f(x2) < f(x1), then f(x2) is the minimum, and the domain of x to the right of x1, from x1 to xu, would be eliminated. For this case, x1 becomes the new xu for the next round.
(a) The initial step of the golden-section search algorithm involves choosing two interior points according to the golden ratio. (b) The second step involves defining a new interval that encompasses the optimum.

7.2.1 Golden-Section Search


Now, here is the real benefit from the use of the
golden ratio. Because the original x1 and x2 were
chosen using the golden ratio, we do not have to
recalculate all the function values for the next
iteration.
For example, for the case illustrated in Fig. 7.6,
the old x1 becomes the new x2. This means that we
already have the value for the new f(x2), since it is
the same as the function value at the old x1.
To complete the algorithm, we need only
determine the new x1. This is done with Eq. (7.6)
with d computed with Eq. (7.8) based on the new
values of xl and xu. A similar approach would be
used for the alternate case where the optimum fell
in the left subinterval. For this case, the new x2
would be computed with Eq. (7.7).
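The reuse of one interior point per iteration described above is the heart of the algorithm, and it can be sketched compactly. A Python illustration (the function name `golden_section` is mine; the course uses MATLAB); the stopping test uses a standard interval-width error estimate:

```python
import math

def golden_section(f, xl, xu, es=1e-6, max_iter=200):
    """Golden-section search for a minimum, reusing one interior point per step."""
    phi = (1 + math.sqrt(5)) / 2
    d = (phi - 1) * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        xopt = x1 if f1 < f2 else x2
        ea = (2 - phi) * abs((xu - xl) / xopt)
        if ea < es:
            break
        if f1 < f2:                    # minimum in [x2, xu]
            xl, x2, f2 = x2, x1, f1    # old x1 becomes new x2: f1 reused
            x1 = xl + (phi - 1) * (xu - xl)
            f1 = f(x1)
        else:                          # minimum in [xl, x1]
            xu, x1, f1 = x1, x2, f2    # old x2 becomes new x1: f2 reused
            x2 = xu - (phi - 1) * (xu - xl)
            f2 = f(x2)
    return xopt

f = lambda x: x**2 / 10 - 2 * math.sin(x)
xmin = golden_section(f, 0.0, 4.0)
```

Only one new function evaluation is needed per iteration; the other interior value is carried over.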

Example 7.2 Golden-Section Search

Use the golden-section search to find the minimum of
f(x) = x^2/10 - 2 sin(x)
within the interval from xl = 0 to xu = 4.
Iteration 1:
d = 0.61803(4 - 0) = 2.4721
x1 = 0 + 2.4721 = 2.4721
x2 = 4 - 2.4721 = 1.5279
f(x2) = 1.5279^2/10 - 2 sin(1.5279) = -1.7647
f(x1) = 2.4721^2/10 - 2 sin(2.4721) = -0.6300
Because f(x2) < f(x1), the minimum is in the interval defined by xl, x2, and x1. Thus, for the next iteration, the lower bound remains xl = 0, and x1 becomes the upper bound, that is, xu = 2.4721. In addition, the former x2 value becomes the new x1, that is, x1 = 1.5279. We do not have to recalculate f(x1); it was determined on the previous iteration as f(1.5279) = -1.7647.
Iteration 2:
d = 0.61803(2.4721 - 0) = 1.5279
x2 = 2.4721 - 1.5279 = 0.9443
The function evaluation at x2 is f(0.9443) = -1.5310. Because this value is greater than the function value at x1, the minimum estimate remains f(1.5279) = -1.7647, and it is in the interval prescribed by x2, x1, and xu.

Note that the current minimum is highlighted for every iteration. The result is converging on the true minimum value of -1.7757 at x = 1.4276.

Golden-section error estimate:
εa = (2 - φ) |(xu - xl)/xopt| × 100%
where xopt = x1 if f(x1) < f(x2); otherwise xopt = x2.


7.2.2 Parabolic Interpolation

Parabolic interpolation takes advantage of the fact that a second-order polynomial often provides a good approximation to the shape of f(x) near an optimum. If we have three points that jointly bracket an optimum, we can fit a parabola to the points and locate its vertex:
x4 = x2 - (1/2) [(x2 - x1)^2 (f(x2) - f(x3)) - (x2 - x3)^2 (f(x2) - f(x1))] / [(x2 - x1)(f(x2) - f(x3)) - (x2 - x3)(f(x2) - f(x1))]    (7.10)
where x1, x2, and x3 are the initial guesses, and x4 is the value of x that corresponds to the optimum value of the parabolic fit to the guesses.
Graphical depiction of parabolic interpolation. (The figure in the book shows a maximum, which can be confusing, because the following algorithm seeks minima rather than maxima.)

EXAMPLE 7.3 Parabolic Interpolation

Use parabolic interpolation to approximate the minimum of f(x) = x^2/10 - 2 sin(x) with initial guesses of x1 = 0, x2 = 1, and x3 = 4.
Solution. The function values at the three guesses can be evaluated:
x1 = 0, f(x1) = 0
x2 = 1, f(x2) = -1.5829
x3 = 4, f(x3) = 3.1136
Substituting into Eq. (7.10) gives x4 = 1.5055, with f(x4) = -1.7691.
Next, a strategy similar to the golden-section search can be employed to determine which point should be discarded. Because the function value for the new point is lower than for the intermediate point (x2) and the new x value is to the right of the intermediate point, the lower guess (x1) is discarded. Therefore, for the next iteration:

Solution (continued)
x1 = 1, f(x1) = -1.5829
x2 = 1.5055, f(x2) = -1.7691
x3 = 4, f(x3) = 3.1136
Substituting into Eq. (7.10) again gives x4 = 1.4903, with f(1.4903) = -1.7714. The process can be repeated; within five iterations, the result converges rapidly on the true value of -1.7757 at x = 1.4276.
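The loop from this example can be sketched as follows. This Python illustration is mine (the course uses MATLAB): it applies the vertex formula for a parabola through three points plus a simple rule for which point to discard.

```python
import math

def parabolic_min(f, x1, x2, x3, n_iter=8):
    """Repeatedly fit a parabola through three points and jump to its vertex."""
    for _ in range(n_iter):
        num = (x2 - x1) ** 2 * (f(x2) - f(x3)) - (x2 - x3) ** 2 * (f(x2) - f(x1))
        den = (x2 - x1) * (f(x2) - f(x3)) - (x2 - x3) * (f(x2) - f(x1))
        if den == 0:          # parabola degenerate; points have converged
            break
        x4 = x2 - 0.5 * num / den
        # Keep the three points that bracket the lowest function value.
        if x4 > x2:
            if f(x4) < f(x2):
                x1, x2 = x2, x4
            else:
                x3 = x4
        else:
            if f(x4) < f(x2):
                x3, x2 = x2, x4
            else:
                x1 = x4
    return x2

f = lambda x: x**2 / 10 - 2 * math.sin(x)
xmin = parabolic_min(f, 0.0, 1.0, 4.0)
```

From the guesses (0, 1, 4) the first two vertices are 1.5055 and 1.4903, matching the hand calculation above.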

Flow chart of parabolic interpolation

7.2.3 MATLAB Function: fminbnd

A simple expression of its syntax is
[xmin, fval] = fminbnd(function,x1,x2)
Example: find the minimum of the bungee-jumper altitude function of Eq. (7.1) (negated, since fminbnd minimizes):

g=9.81; v0=55; m=80; c=15; z0=100;
z=@(t) -(z0 + m/c*(v0+m*g/c)*(1-exp(-c/m*t)) - m*g/c*t);
[x,f]=fminbnd(z,0,8)
% x = 3.8317
% f = -192.8609

Optimization example

Find the maximum of f(x) = x^(-x) using the golden-section search and parabolic interpolation, with εs = 0.0001.

clear
f = @(x) x.^(-x);
x=0.01:0.01:2;
y=f(x);
plot(x,y)
grid on
xlabel('x');
ylabel('y');
title('y=x^{-x}')

A maximum is located around 0.4, between 0.2 and 0.6. Equivalently, find the minimum of g(x) = -x^(-x) on that interval.

Golden search

Iteration  xl        x2        f2        x1        f1        xu        d         xopt      εa
1          0.2       0.352786  -1.44421  0.447214  -1.43316  0.6       0.247214  0.352786  0.267661
2          0.2       0.294427  -1.43333  0.352786  -1.44421  0.447214  0.152786  0.352786  0.165424
3          0.294427  0.352786  -1.44421  0.388854  -1.44382  0.447214  0.094427  0.352786  0.102237
4          0.294427  0.330495  -1.44183  0.352786  -1.44421  0.388854  0.058359  0.352786  0.063186
5          0.330495  0.352786  -1.44421  0.366563  -1.44466  0.388854  0.036068  0.366563  0.037584
6          0.352786  0.366563  -1.44466  0.375078  -1.44457  0.388854  0.022291  0.366563  0.023228
7          0.352786  0.361301  -1.44458  0.366563  -1.44466  0.375078  0.013777  0.366563  0.014356
8          0.361301  0.366563  -1.44466  0.369815  -1.44466  0.375078  0.008514  0.366563  0.008872
9          0.361301  0.364553  -1.44465  0.366563  -1.44466  0.369815  0.005262  0.366563  0.005483
10         0.364553  0.366563  -1.44466  0.367805  -1.44467  0.369815  0.003252  0.367805  0.003377
11         0.366563  0.367805  -1.44467  0.368573  -1.44467  0.369815  0.00201   0.367805  0.002087
12         0.366563  0.367331  -1.44467  0.367805  -1.44467  0.368573  0.001242  0.367805  0.00129
13         0.367331  0.367805  -1.44467  0.368099  -1.44467  0.368573  0.000768  0.367805  0.000797
14         0.367331  0.367624  -1.44467  0.367805  -1.44467  0.368099  0.000474  0.367805  0.000493
15         0.367624  0.367805  -1.44467  0.367917  -1.44467  0.368099  0.000293  0.367917  0.000304
16         0.367805  0.367917  -1.44467  0.367987  -1.44467  0.368099  0.000181  0.367917  0.000188
17         0.367805  0.367875  -1.44467  0.367917  -1.44467  0.367987  0.000112  0.367875  0.000116
18         0.367805  0.367848  -1.44467  0.367875  -1.44467  0.367917  0.000069  0.367875  0.000072

The function f has a maximum of 1.44467 at x = 0.367875.


Parabolic interpolation

Iteration  x1   f1        x2        f2        x3        f3        x4        f4        εa
1          0.2  -1.37973  0.4       -1.4427   0.6       -1.35866  0.385665  -1.44406  ----
2          0.2  -1.37973  0.385665  -1.44406  0.4       -1.4427   0.371376  -1.44464  -0.03848
3          0.2  -1.37973  0.371376  -1.44464  0.385665  -1.44406  0.369434  -1.44466  -0.00526
4          0.2  -1.37973  0.369434  -1.44466  0.371376  -1.44464  0.368251  -1.44467  -0.00321
5          0.2  -1.37973  0.368251  -1.44467  0.369434  -1.44466  0.368022  -1.44467  -0.00062
6          0.2  -1.37973  0.368022  -1.44467  0.368251  -1.44467  0.367917  -1.44467  -0.00028
7          0.2  -1.37973  0.367917  -1.44467  0.368022  -1.44467  0.367893  -1.44467  -6.7E-05

The true location is at 1/e = 0.367879.

f1 = @(x) -x.^(-x);
[xmin_fminbnd,fmin_fminbnd]=fminbnd(f1,0.2,0.6);
% xmin_fminbnd = 0.3679
% fmin_fminbnd = -1.4447

Do not forget to multiply your function value by (-1) when using a minimizer to find a maximum.


7.3 Multidimensional Optimization

A simple expression of the fminsearch syntax is
[xmin, fval] = fminsearch(function,x0)
where xmin and fval are the location and value of the minimum, function is the name of the function being evaluated, and x0 is the initial guess. Note that x0 can be a scalar, vector, or matrix.

(a) Contour and (b) mesh plots of a two-dimensional function:
f(x1, x2) = 2 + x1 - x2 + 2x1^2 + 2x1x2 + x2^2
over the range -2 ≤ x1 ≤ 0 and 0 ≤ x2 ≤ 3. Check textbook Example 7.4 on page 196 to learn how to plot these two figures using MATLAB.

Here is a simple MATLAB session that uses fminsearch to determine the minimum:
>> f=@(x) 2+x(1)-x(2)+2*x(1)^2+2*x(1)*x(2)+x(2)^2;
>> [x,fval] = fminsearch(f,[-0.5,0.5])
x = -1.0000 1.5000
fval = 0.7500
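For this quadratic the minimum can also be confirmed without fminsearch. A Python sketch using plain gradient descent (my own stand-in for illustration; it is not what fminsearch does internally, since fminsearch uses the derivative-free Nelder-Mead simplex method):

```python
def f(x1, x2):
    """f(x1, x2) = 2 + x1 - x2 + 2*x1^2 + 2*x1*x2 + x2^2"""
    return 2 + x1 - x2 + 2 * x1**2 + 2 * x1 * x2 + x2**2

# Gradient descent from the same initial guess (-0.5, 0.5) used above.
x1, x2 = -0.5, 0.5
lr = 0.1
for _ in range(500):
    g1 = 1 + 4 * x1 + 2 * x2   # ∂f/∂x1
    g2 = -1 + 2 * x1 + 2 * x2  # ∂f/∂x2
    x1, x2 = x1 - lr * g1, x2 - lr * g2
```

Setting both partial derivatives to zero analytically gives the same answer: x1 = -1, x2 = 1.5, with f = 0.75.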


Summary of Part 2: Roots and Optimization

Root finding
- Bracketing methods: bisection, false position
- Open methods: simple fixed-point iteration, Newton's method, secant method, modified secant method
- MATLAB: fzero, roots

Optimization
- 1-D: golden-section search, parabolic interpolation
- MATLAB: fminbnd (1-D); fminsearch (multidimensional)
