
MATH160

NUMERICAL METHODS

ROOTS OF EQUATIONS
Objective:
Determine the roots of an equation using the bisection method.
Determine the roots of an equation using the Newton-Raphson method.
Determine the roots of an equation using the secant method.
Determine the roots of an equation using the fixed-point iteration method.
Determine the roots of an equation using the false-position method.

ROOTS OF EQUATIONS
Objective:
Create algorithms to find the roots of an equation. Determine the advantages and disadvantages of each algorithm.

ROOTS OF EQUATIONS
A root, or solution, of the equation f(x) = 0 is a value of x for which the equation holds true. Methods for finding roots of equations are basic numerical methods, and the subject is generally covered in books on numerical methods or numerical analysis. Numerical methods for finding roots of equations can often be easily programmed and can also be found in general numerical libraries.

ROOTS OF EQUATIONS
Make sure you aren't confused by the terminology. All of these mean the same thing: solving a polynomial equation p(x) = 0, finding roots of a polynomial equation p(x) = 0, finding zeroes of a polynomial function p(x), and factoring a polynomial function p(x).

ROOTS OF EQUATIONS
In mathematics, the bisection method is a root-finding algorithm that repeatedly bisects an interval and then selects the subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow.

ROOTS OF EQUATIONS
The bisection method is applicable when we wish to solve the equation f(x) = 0 for the variable x, where f is a continuous function. The bisection method requires two initial points a and b such that f(a) and f(b) have opposite signs. This is called a bracket of a root, for by the intermediate value theorem (IVT) the continuous function f must have at least one root in the interval (a, b).

ROOTS OF EQUATIONS
Theorem: If f is a continuous function on the interval [a, b] and f(a)f(b) < 0, then the bisection method converges to a root of f. The absolute error is halved at each step. Thus, the method converges linearly, which is quite slow. On the other hand, the method is guaranteed to converge if f(a) and f(b) have different signs.

Basis of Bisection Method


Theorem: An equation f(x)=0, where f(x) is a real continuous function, has at least one root between xl and xu if f(xl) f(xu) < 0.
Figure 1: At least one root exists between the two points if the function is real, continuous, and changes sign.

Basis of Bisection Method


Figure 2: If the function does not change sign between two points, roots of the equation may still exist between the two points.

Basis of Bisection Method


Figure 3: If the function does not change sign between two points, there may not be any roots for the equation between the two points.

Basis of Bisection Method


Figure 4: If the function changes sign between two points, more than one root for the equation may exist between the two points.

Algorithm for Bisection Method


Step 1: Choose xl and xu as two guesses for the root such that f(xl) f(xu) < 0, or in other words, f(x) changes sign between xl and xu. This was demonstrated in Figure 1.

Algorithm for Bisection Method


Step 2: Estimate the root xm of the equation f(x) = 0 as the midpoint between xl and xu:

xm = (xl + xu) / 2

Algorithm for Bisection Method


Step 3: Check the following:
a) If f(xl) f(xm) < 0, then the root lies between xl and xm; set xl = xl and xu = xm.
b) If f(xl) f(xm) > 0, then the root lies between xm and xu; set xl = xm and xu = xu.
c) If f(xl) f(xm) = 0, then the root is xm. Stop the algorithm if this is true.

Algorithm for Bisection Method


Step 4: Find the new estimate of the root:

xm = (xl + xu) / 2

Find the absolute relative approximate error:

|ea| = |(xm_new - xm_old) / xm_new| × 100

where xm_old is the previous estimate of the root and xm_new is the current estimate of the root.

Algorithm for Bisection Method


Step 5: Compare the absolute relative approximate error with the pre-specified error tolerance.

Note that one should also check whether the number of iterations exceeds the maximum allowed; if so, terminate the algorithm and notify the user.
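A minimal Python sketch of Steps 1-5 (the name bisect, the default tolerance, and the iteration cap are illustrative choices, not from the slides):

```python
def bisect(f, xl, xu, tol=1.0, max_iter=100):
    """Bisection method following Steps 1-5 above.

    tol is the pre-specified absolute relative approximate error in percent.
    """
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")  # Step 1
    xm_old = None
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0                  # Step 2: midpoint estimate
        test = f(xl) * f(xm)
        if test < 0:                          # Step 3a: root in [xl, xm]
            xu = xm
        elif test > 0:                        # Step 3b: root in [xm, xu]
            xl = xm
        else:                                 # Step 3c: f(xm) == 0 exactly
            return xm
        if xm_old is not None:
            ea = abs((xm - xm_old) / xm) * 100.0  # Step 4: |ea| in percent
            if ea < tol:                      # Step 5: within tolerance
                return xm
        xm_old = xm
    return xm                                 # max iterations reached
```

For instance, bisect(lambda x: x**6 - x - 1, 1, 2) approaches the root of x^6 - x - 1 = 0 near x ≈ 1.1347.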

Bisection Method
Example: Consider the equation:

x^6 - x - 1 = 0

How many roots does the equation have? What are the intervals that contain the roots? Solve for the roots using the bisection method with an error of less than 1%.

Bisection Method
Example: Consider the equation:

5 sin(2x) = 2x^2 + 4x

How many roots does the equation have? What are the intervals that contain the roots? Solve for the roots using the bisection method with an error of less than 1%.

Bisection Method
Example: Consider the equations:

x^3 - 3x + 1 = 0

(x + 1) sin x = e^x

How many roots does each equation have? What are the intervals that contain the roots? Solve for the roots using the bisection method with an error of less than 1%.

Bisection Method
Advantages: Always convergent; the root bracket gets halved with each iteration, guaranteed.
Drawbacks: Slow convergence; if one of the initial guesses is close to the root, the convergence is slower.

Bisection Method
Drawbacks: If a function f(x) is such that it just touches the x-axis, the method will be unable to find lower and upper guesses between which f changes sign. For example, f(x) = x^2 has a root at x = 0 but is never negative, so no sign-changing bracket exists.

Bisection Method
Drawbacks: The function may change sign between two points even though no root exists there, as for a discontinuous function such as f(x) = 1/x, which changes sign across x = 0 without having a root.

Bisection Method
Note: The number of iterations n has to satisfy

n >= log2((xu - xl) / ε)

to ensure that the error is smaller than the tolerance ε, since the width of the bracket is halved at each step.
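A quick numerical check of this bound (the bracket [1, 2] and the tolerance 0.001 are illustrative values):

```python
import math

# Smallest n with (xu - xl) / 2**n <= eps, i.e. n >= log2((xu - xl) / eps).
xl, xu, eps = 1.0, 2.0, 0.001
n = math.ceil(math.log2((xu - xl) / eps))
print(n)  # 10: ten bisections guarantee a bracket narrower than eps
```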

NEWTON-RAPHSON METHOD

Newton's Method
In numerical analysis, Newton's method (also known as the Newton-Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem.

Newton's Method
Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.

Newton's Method
Given a function f(x) and its derivative f'(x), we begin with a first guess x0. Provided the function is reasonably well-behaved, a better approximation x1 is

x1 = x0 - f(x0) / f'(x0)

The process is repeated until a sufficiently accurate value is reached:

xn+1 = xn - f(xn) / f'(xn)

Newton's Method
An illustration of one iteration of Newton's method (the function is shown in blue and the tangent line is in red). We see that xn+1 is a better approximation than xn for the root x of the function f.

Newton's Method
The idea of the method is as follows: one starts with an initial guess that is reasonably close to the true root; the function is then approximated by its tangent line, and one computes the x-intercept of this tangent line. This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

Newton's Method
Derivation: In the figure, B = (xi, f(xi)) lies on the curve, A = (xi, 0), and the tangent at B crosses the x-axis at C = (xi+1, 0). The slope of the tangent is

tan θ = AB / AC

so

f'(xi) = f(xi) / (xi - xi+1)

Rearranging gives

xi+1 = xi - f(xi) / f'(xi)

(Figure: the tangent line at B crossing the x-axis at xi+1.)


Algorithm for Newton's Method

Step 1: Evaluate f'(x) symbolically.

Step 2: Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as

xi+1 = xi - f(xi) / f'(xi)

Step 3: Find the absolute relative approximate error

|ea| = |(xi+1 - xi) / xi+1| × 100

Algorithm for Newton's Method

Step 4: Compare the absolute relative approximate error with the pre-specified error tolerance.

Note that one should also check whether the number of iterations exceeds the maximum allowed; if so, terminate the algorithm and notify the user.
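A minimal Python sketch of Steps 2-4 (the name newton and the stopping parameters are illustrative choices; Step 1, evaluating f'(x) symbolically, is done by the caller when supplying fprime):

```python
def newton(f, fprime, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson iteration; stops when |(x_new - x_old)/x_new| < tol."""
    xi = x0
    for _ in range(max_iter):
        d = fprime(xi)
        if d == 0:
            raise ZeroDivisionError("f'(x) = 0: Newton step undefined")
        x_next = xi - f(xi) / d            # Step 2: Newton update
        ea = abs((x_next - xi) / x_next)   # Step 3: relative approx. error
        xi = x_next
        if ea < tol:                       # Step 4: tolerance check
            return xi
    raise RuntimeError("maximum number of iterations exceeded")
```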

Newton's Method
Example: Consider the problem of finding the positive number x with cos(x) = x^3.
We can rephrase that as finding the zero of f(x) = cos(x) - x^3. We have f'(x) = -sin(x) - 3x^2. Since cos(x) ≤ 1 for all x and x^3 > 1 for x > 1, we know that our zero lies between 0 and 1. We try a starting value of x0 = 0.5. (Note that a starting value of 0 will lead to an undefined result.)
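Running the newton sketch above on this problem (the lambda arguments simply encode f and f' from the example):

```python
import math

root = newton(lambda x: math.cos(x) - x**3,
              lambda x: -math.sin(x) - 3 * x**2,
              x0=0.5)
print(root)  # converges to about 0.865474
```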

Newton's Method
Answer: x ≈ 0.865474

Newton's Method
Example: Consider the equation:

0.25x^5 - 7x - 3 = 0

How many roots does the equation have? What are the intervals that contain the roots? Solve for the roots using Newton's method with an error of less than 0.000001.

Newton's Method
Answer: -2.17769, -0.42909, 2.39690

Newton's Method
Example: Consider the equation:

cos(x/4) = 3x - x^2

How many roots does the equation have? What are the intervals that contain the roots? Solve for the roots using Newton's method with an error of less than 0.000001.

Newton's Method
Answer: 0.37995, 2.71298

Newton's Method
Advantages:
Converges fast (quadratic convergence), if it converges. Requires only one guess.

Newton's Method
Drawbacks: Divergence at inflection points
Selecting an initial guess, or reaching an iterate, close to an inflection point of the function may cause the Newton-Raphson method to start diverging away from the root. For example, consider

f(x) = (x - 1)^3 + 0.512 = 0

Newton's Method
Drawbacks: Divergence at the inflection point for f(x) = (x - 1)^3 + 0.512 = 0, using the iteration

xi+1 = xi - ((xi - 1)^3 + 0.512) / (3(xi - 1)^2)

Iteration   xi
0           5.0000
1           3.6560
2           2.7465
3           2.1084
4           1.6000
5           0.92589
6           -30.119
7           -19.746
...         ...
18          0.2000

The iterates are thrown far from the root near the inflection point at x = 1 before eventually settling at the root x = 0.2.

Newton's Method
Drawbacks: Division by zero
For the equation

f(x) = x^3 - 0.03x^2 + 2.4 × 10^-6 = 0

the Newton-Raphson method reduces to

xi+1 = xi - (xi^3 - 0.03xi^2 + 2.4 × 10^-6) / (3xi^2 - 0.06xi)

For x = 0 or x = 0.02, the denominator equals zero.
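One common safeguard, sketched here as an assumption rather than part of the original method (the name and threshold are illustrative), is to refuse a step whenever the derivative is nearly zero:

```python
def safe_newton_step(f, fprime, xi, eps=1e-12):
    """Refuse a Newton step when f'(xi) is too close to zero (an assumed
    safeguard; the threshold eps is an arbitrary illustrative choice)."""
    d = fprime(xi)
    if abs(d) < eps:
        raise ZeroDivisionError("f'(xi) nearly zero: Newton step unreliable")
    return xi - f(xi) / d
```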

Newton's Method
Drawbacks: Oscillations near local maximum and minimum
Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, instead closing in on the local extremum. Eventually, this may lead to division by a number close to zero, and the iteration may diverge. For example, the equation

f(x) = x^2 + 2 = 0

has no real roots.

Newton's Method
Drawbacks: Oscillations around the local minimum for f(x) = x^2 + 2 = 0

Iteration   xi          f(xi)     |ea| (%)
0           -1.0000     3.00      --
1           0.5         2.25      300.00
2           -1.75       5.063     128.571
3           -0.30357    2.092     476.47
4           3.1423      11.874    109.66
5           1.2529      3.570     150.80
6           -0.17166    2.029     829.88
7           5.7395      34.942    102.99
8           2.6955      9.266     112.93
9           0.97678     2.954     175.96

(Figure: the iterates oscillating about the minimum of f(x) = x^2 + 2, which never crosses the x-axis.)

Newton's Method
Drawbacks: Root Jumping In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root.

Newton's Method
Drawbacks: Root jumping example
For f(x) = sin x = 0, choose x0 = 2.4π = 7.539822. The iteration converges to the root x = 0 instead of the nearby root x = 2π = 6.2831853.

(Figure: the tangent steps from x0 = 7.539822 jump past the root at 2π and settle toward x = 0.)

SECANT METHOD

Secant Method
In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f.

(Figure: the first two iterations of the secant method; the red curve shows the function f and the blue lines are the secants.)

Secant Method
The secant method is defined by the recurrence relation

xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))

As can be seen from the recurrence relation, the secant method requires two initial values, x0 and x1, which should ideally be chosen to lie close to the root.

Secant Method
Derivation: From Newton's method,

xi+1 = xi - f(xi) / f'(xi)

Approximate the derivative by a finite difference:

f'(xi) ≈ (f(xi) - f(xi-1)) / (xi - xi-1)

Substituting gives the secant method:

xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))

Secant Method
Derivation: From the similar triangles in the figure, where A = (xi, 0), B = (xi, f(xi)), C = (xi-1, f(xi-1)), D = (xi-1, 0), and E = (xi+1, 0),

AB / AE = DC / DE

f(xi) / (xi - xi+1) = f(xi-1) / (xi-1 - xi+1)

Solving for xi+1 gives the secant method:

xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))

(Figure: the secant line through (xi-1, f(xi-1)) and (xi, f(xi)) crossing the x-axis at xi+1.)

Algorithm for Secant Method


Step 1: Calculate the next estimate of the root from two initial guesses xi-1 and xi:

xi+1 = xi - f(xi)(xi - xi-1) / (f(xi) - f(xi-1))

Step 2: Find the absolute relative approximate error

|ea| = |(xi+1 - xi) / xi+1| × 100

Algorithm for Secant Method


Step 3: If the absolute relative approximate error is greater than the pre-specified relative error tolerance, go back to Step 1; otherwise stop the algorithm. Also check whether the number of iterations has exceeded the maximum allowed.
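A minimal Python sketch of Steps 1-3 (the name secant and the stopping parameters are illustrative choices):

```python
def secant(f, x0, x1, tol=1e-6, max_iter=50):
    """Secant method from two initial guesses x0 and x1."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:
            raise ZeroDivisionError("f(x_i) == f(x_{i-1}): secant undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # Step 1: secant update
        ea = abs((x2 - x1) / x2)               # Step 2: relative approx. error
        x0, x1 = x1, x2
        if ea < tol:                           # Step 3: tolerance check
            return x1
    raise RuntimeError("maximum number of iterations exceeded")
```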

Secant Method
Example: Consider the problem of finding the positive number x with cos(x) = x3.
We can rephrase that as finding the zero of f(x) = cos(x) - x^3. We try starting values of x0 = 0 and x1 = 1.
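Running the secant sketch above on this problem:

```python
import math

root = secant(lambda x: math.cos(x) - x**3, 0.0, 1.0)
print(root)  # converges to about 0.865474
```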

Secant Method
Answer: For cos(x) = x^3, the secant iterates converge to x ≈ 0.865474.

Secant Method
Example: Consider the equation:

x^6 - x - 1 = 0

How many roots does the equation have? What are the intervals that contain the roots? Solve for the roots using the secant method with an error of less than 0.000001.

Secant Method
Example: Consider the equation:

5 sin(2x) = 2x^2 + 4x

How many roots does the equation have? What are the intervals that contain the roots? Solve for the roots using the secant method with an error of less than 0.000001.

Secant Method
Advantages: Converges fast, if it converges. Requires two guesses that do not need to bracket the root.

Secant Method
Drawbacks:
Division by zero
If f(xi) = f(xi-1), as when the two most recent guesses give equal function values, the denominator of the secant formula is zero and the next iterate is undefined.

(Figure: a previous guess and a new guess with equal function values give a horizontal secant line that never crosses the x-axis.)

Secant Method
Drawbacks:
Root jumping
As with Newton's method, the secant line can carry the new estimate far from the guesses, so the iteration may converge to a root other than the one nearest the initial guesses.

(Figure: the secant line through the first two guesses produces a new guess near a different root.)

FIXED-POINT ITERATION METHOD

Fixed-point Iteration Method


Start from f(x) = 0 and derive a relation x = g(x).

The fixed-point method is then simply given by the iteration

xi+1 = g(xi)

Fixed-point Iteration Method

Example: Compute the zero of

f(x) = e^x - 4 + 2x

Answer: Derive a relation x = g(x), for example

x = ln(4 - 2x)   or   x = (4 - e^x) / 2

The fixed-point method is then given by the iteration xi+1 = g(xi).
Fixed-point Iteration Method


When does it converge? The iteration xi+1 = g(xi) converges to a fixed point x* if |g'(x)| < 1 in a neighborhood of x* and the starting guess lies in that neighborhood; if |g'(x*)| > 1, the iterates move away from x*. For the example above, g(x) = ln(4 - 2x) has |g'| < 1 near the root, while g(x) = (4 - e^x) / 2 does not, so only the first rearrangement converges.
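A minimal Python sketch under the reading of the example above (the name fixed_point, the iteration cap, and the tolerance are illustrative choices):

```python
import math

def fixed_point(g, x0, tol=1e-6, max_iter=200):
    """Iterate x_{i+1} = g(x_i) until successive estimates agree within tol."""
    xi = x0
    for _ in range(max_iter):
        x_next = g(xi)
        if abs(x_next - xi) < tol:
            return x_next
        xi = x_next
    raise RuntimeError("no convergence; check that |g'(x)| < 1 near the root")

# Assumed reading of the example: f(x) = e^x - 4 + 2x = 0, rearranged as
# x = ln(4 - 2x), for which |g'(x)| < 1 near the root.
root = fixed_point(lambda x: math.log(4 - 2 * x), x0=0.5)
print(root)  # about 0.84 under this reading of the equation
```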


FALSE-POSITION METHOD

False-position Method
The false position method or regula falsi method is a root-finding algorithm that combines features from the bisection method and the secant method. Like the bisection method, the false position method starts with two points a0 and b0 such that f(a0) and f(b0) are of opposite signs, which implies by the IVT that the function f has a root in the interval [a0, b0]. The method proceeds by producing a sequence of shrinking intervals [ak, bk] that all contain a root of f.

False-position Method
The first two iterations of the false position method.

The red curve shows the function f and the blue lines are the secants.

False-position Method
The graph used in this method is shown in the figure.

False-position Method
Advantage: Convergence is faster than the bisection method.
Disadvantages:
1. It requires two starting points a and b that bracket a root.
2. The convergence is generally slow.
3. It is only applicable to f(x) of certain fixed curvature in [a, b].
4. It cannot handle multiple zeros.

False-position Method
Alternative formula: At iteration number k, the number

ck = bk - f(bk)(bk - ak) / (f(bk) - f(ak))

is computed; ck is the root of the secant line through (ak, f(ak)) and (bk, f(bk)). If f(ak) and f(ck) have the same sign, then we set ak+1 = ck and bk+1 = bk; otherwise we set ak+1 = ak and bk+1 = ck. This process is repeated until the root is approximated sufficiently well.
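A minimal Python sketch of this update rule (the name false_position and the stopping parameters are illustrative choices):

```python
def false_position(f, a, b, tol=1e-6, max_iter=100):
    """Regula falsi on a bracket [a, b] with f(a) * f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c_old = c
        c = b - fb * (b - a) / (fb - fa)   # c_k: root of the secant line
        fc = f(c)
        if fc == 0 or abs(c - c_old) < tol:
            return c
        if fa * fc > 0:                    # same sign: set a_{k+1} = c_k
            a, fa = c, fc
        else:                              # opposite sign: set b_{k+1} = c_k
            b, fb = c, fc
    return c
```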

False-position Method
If the initial end-points a0 and b0 are chosen such that f(a0) and f(b0) are of opposite signs, then one of the end-points will converge to a root of f. Asymptotically, the other end-point will remain fixed for all subsequent iterations while the converging end-point is updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero. As a consequence, the linear approximation to f(x) that is used to pick the false position does not improve in quality.

False-position Method
One example of this phenomenon is the function f(x) = 2x^3 - 4x^2 + 3x on the initial bracket [-1, 1]. The left end, -1, is never replaced, and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches the root at 0 at a linear rate.

False-position Method
While it is a misunderstanding to think that the method of false position is a good method, it is equally a mistake to think that it is unsalvageable. The failure mode is easy to detect (the same endpoint is retained twice in a row) and easily remedied by next picking a modified false position, such as

ck = ((1/2) f(bk) ak - f(ak) bk) / ((1/2) f(bk) - f(ak))

or

ck = (f(bk) ak - (1/2) f(ak) bk) / (f(bk) - (1/2) f(ak))

down-weighting one of the endpoint values to force the next ck to occur on that side of the function.

False-position Method
The factor of 2 above looks like a hack, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step). There are other ways to pick the rescaling which give even better superlinear convergence rates.
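A sketch of the modified method with the factor-of-2 down-weighting described above (commonly known as the Illinois algorithm; the name illinois and the residual-based stopping test are illustrative choices):

```python
def illinois(f, a, b, tol=1e-12, max_iter=100):
    """False position with the Illinois modification: when one endpoint
    is retained twice in a row, its stored f-value is halved (the
    'factor of 2') so the next false position lands on that side."""
    fa, fb = f(a), f(b)
    side = 0                       # +1: b replaced last, -1: a replaced last
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # root of the secant line
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fc * fb > 0:            # root lies in [a, c]: replace b
            b, fb = c, fc
            if side == +1:         # b replaced twice in a row: a is stale
                fa /= 2
            side = +1
        else:                      # root lies in [c, b]: replace a
            a, fa = c, fc
            if side == -1:         # a replaced twice in a row: b is stale
                fb /= 2
            side = -1
    return c
```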

Resources

Numerical Methods Using Matlab, 4th Edition, 2004, by John H. Mathews and Kurtis K. Fink.

Holistic Numerical Methods Institute, by Autar Kaw and Jai Paul.

Numerical Methods for Engineers by Chapra and Canale
