
1 Newton's Method

Newton's method is one of many iterative processes for approximating roots of functions. The equation governing each term is simple and the process converges quickly in many cases, making it a reasonable first effort before trying other, more complicated or specialized methods. The equation for term x_{n+1} in the Newton iteration is given below:

x_{n+1} = x_n - f(x_n)/f'(x_n)    (1)

The thought process leading to the derivation of Newton's method is as follows: values of a function f(x) in a neighborhood of a point x_0 can be approximated by the function's tangent line at that point using the point-slope equation g(x) = f'(x_0)(x - x_0) + f(x_0). Thus, given a reasonable starting guess for the root, x_0, we can obtain a better approximation for the root, x_1, by solving 0 = f'(x_0)(x_1 - x_0) + f(x_0) for x_1. This, of course, yields x_1 = x_0 - f(x_0)/f'(x_0). We visualize this in figure 1 using Newton's method to find a root of the function f(x) = x^3 - 3x^2 + 2x. The intersection of the vertical line and the x axis marks our initial guess of x_0 = 2.2, and the intersection of the tangent line and the x axis is x_1, which is closer to the true root than our initial guess.
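The iteration above is short enough to sketch directly. The paper's own code (section 6) is MATLAB; the following is a hypothetical Python version, with the function name `newton` and the ten-step default chosen for illustration, applied to the figure 1 example:

```python
# Hypothetical Python sketch of iteration (1), x_{n+1} = x_n - f(x_n)/f'(x_n),
# applied to f(x) = x^3 - 3x^2 + 2x with the starting guess x0 = 2.2 of figure 1.

def newton(f, df, x0, n_iter=10):
    """Run n_iter steps of Newton's method from x0."""
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / df(x)
    return x

f = lambda x: x**3 - 3*x**2 + 2*x   # roots at 0, 1, 2
df = lambda x: 3*x**2 - 6*x + 2

root = newton(f, df, 2.2)
print(root)  # converges to the nearby root at 2
```

Starting from x_0 = 2.2 the iterates stay inside the basin of the root at 2 and converge to it.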

Figure 1: Visualization of Newton's Method

This raises several questions, however. How would the iteration behave if the initial guess were much farther away from the desired root? If it were closer to another root? What if the derivative were zero at one of the iteration points? When is Newton's method quick? For what starting values does Newton's method converge? One might expect that the iterations would have converged to a different root if our initial guess were much closer to that root. One might also expect slow convergence out of the process if the starting guess were extremely far away from the root. If the derivative were zero or extremely small in one of the iterations, the next approximation in the iteration would far overshoot the true value of the root and the iteration might not converge at all.

In practice, there are several cases when Newton's method can fail. We turn to Taylor's theorem to understand roughly when the method breaks down as well as its convergence rate when conditions are ideal. At this point we still consider only real valued functions that take real arguments.

Taylor's Theorem. Let f ∈ C^k. Then, given the function's expansion about a point a, there exists q ∈ (a, x) such that

f(x) = f(a) + f'(a)(x - a) + f''(a)/2! (x - a)^2 + ... + f^(n)(a)/n! (x - a)^n + f^(n+1)(q)/(n + 1)! (x - a)^(n+1)

for 1 ≤ n ≤ k - 1.

Using Taylor's theorem we show that Newton's method has quadratic convergence under certain conditions.

Proposition. Let f ∈ C^2 and z be a root of f. Let I = [z - c, z + c] be an interval about the root. Let x_0 ∈ I be an initial guess for the root. If f'(x) ≠ 0 for all x ∈ I and f''(x) is bounded on I, then Newton's method achieves quadratic convergence.


Math 56 Newton Fractals Michael Downs

Proof. Consider the expansion of the function f about x_n evaluated at z. Using Taylor's theorem to write this equation:

f(z) = f(x_n) + f'(x_n)(z - x_n) + f''(q)/2 (z - x_n)^2

for some q ∈ (x_n, z). Realizing that f(z) = 0, we can divide the equation by f'(x_n) and rewrite the equality as:

x_n - f(x_n)/f'(x_n) - z = f''(q)/(2f'(x_n)) (x_n - z)^2

Recognizing the definition of the next term in a Newton iteration, this becomes:

x_{n+1} - z = f''(q)/(2f'(x_n)) (x_n - z)^2

Take the absolute values of both sides. Let C = |f''(q)/(2f'(x_n))|. Let ε_{n+1} = |x_{n+1} - z| be the absolute error in the (n+1)th term and ε_n = |x_n - z| be the absolute error in the nth term. We now have:

ε_{n+1} = C ε_n^2

If C were a constant, or approximately constant each iteration, this would signify that Newton's method achieves quadratic convergence. In other words, the number of correct digits in each approximation doubles each iteration. C is approximately constant, or at least bounded, when f'(x) ≠ 0 for all x ∈ I, f'' is bounded on I, and the starting guess x_0 is close enough to the root. Finally, q → z because the interval (x_n, z) shrinks as each x_n becomes closer and closer to z.

In some cases convergence can still be achieved given non-ideal conditions, though the rate might be slower, possibly linear or worse, meaning that the number of correct digits increases linearly per iteration rather than doubling. For example, convergence to a root with multiplicity greater than 1 is generally slower than quadratic.
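The error relation ε_{n+1} = C ε_n^2, and its breakdown at a multiple root, can be checked numerically. The following is a hypothetical Python sketch; the functions and starting points are chosen for illustration and are not from the paper:

```python
# Track the absolute errors of successive Newton iterates. At a simple root the
# error roughly squares each step (quadratic convergence); at a double root it
# only shrinks by about half each step (linear convergence).

def newton_errors(f, df, x0, root, n_iter=6):
    x, errs = x0, []
    for _ in range(n_iter):
        x = x - f(x) / df(x)
        errs.append(abs(x - root))
    return errs

# simple root: f(x) = x^2 - 2 at sqrt(2)
simple = newton_errors(lambda x: x*x - 2, lambda x: 2*x, 1.5, 2**0.5)
# double root: f(x) = (x - 1)^2 at 1
double = newton_errors(lambda x: (x - 1)**2, lambda x: 2*(x - 1), 1.5, 1.0)

print(simple)  # errors square each step
print(double)  # errors halve each step: 0.25, 0.125, 0.0625, ...
```

For the double root the Newton step reduces to x - (x - 1)/2, so the error is cut exactly in half every iteration, matching the linear-convergence description above.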

Two important but difficult elementary calculations are the square root and the reciprocal. In the present day we're able to perform these calculations with ease and high accuracy due to computers and efficient algorithms. Computers have not always existed, however, and in the past these calculations were done by hand using processes such as Newton's method. Consider the two functions f_1(x) = x^2 - a_1 and f_2(x) = a_2 - 1/x, where a_1 and a_2 are real, nonzero constants. f_1 has roots at ±√a_1 and f_2 has a root at 1/a_2.

Using f_1'(x) = 2x and f_2'(x) = 1/x^2 we obtain the iterations

x_{n+1} = (x_n + a/x_n)/2    (2)

x_{n+1} = x_n(2 - a x_n)    (3)

for approximating the root or reciprocal of a given number a. It's interesting to note that neither iteration converges given a starting guess of zero. For (2), it's clear from the equation that x_1 is undefined for a starting guess of x_0 = 0. Remembering the earlier proposition, starting with x = 0 guarantees that the derivative is zero-valued on I. Also, one can see from figure 2 that the root that the process converges to is dependent upon the sign of the initial guess. Intuitively, it makes sense that the process would not converge given a starting guess equidistant from each root.
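Both iterations are easy to try. Here is a hypothetical Python sketch (the paper's code is MATLAB), with function names of my choosing:

```python
# Iteration (2) approximates sqrt(a); iteration (3) approximates 1/a and,
# notably, does so without performing any division.

def sqrt_newton(a, x0, n_iter=8):
    x = x0
    for _ in range(n_iter):
        x = (x + a / x) / 2          # iteration (2)
    return x

def recip_newton(a, x0, n_iter=8):
    x = x0
    for _ in range(n_iter):
        x = x * (2 - a * x)          # iteration (3): division-free
    return x

print(sqrt_newton(2.0, 1.0))    # ~1.41421356...
print(recip_newton(3.0, 0.3))   # ~0.33333333..., since 0.3 lies in (0, 2/3)
```

Note that recip_newton is given a starting guess inside the open interval (0, 2/a); as discussed below, guesses outside that interval do not converge.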

Figure 2: Bad guess vs good guess


Figure 2 illustrates why the starting guess is important. A starting guess of 2 yields a much more accurate approximation than one of .01, which overshoots the root by a wide margin. The intersection of the x axis and the near-horizontal line in figure 2 would be the next iteration in the series. The iteration for the reciprocal is also an example of why the starting guess is important: if the starting guess is not in the open interval (0, 2/a), Newton's method will not converge at all. Generally, Newton's method does not converge if the derivative is zero for one of the iteration terms, if there is no root to be found in the first place, or if the iterations enter a cycle and alternate back and forth between different values.
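The cycling failure mode can be seen in a standard example (not one from the paper): for f(x) = x^3 - 2x + 2, the starting guess x_0 = 0 sends the iteration into a 2-cycle. A hypothetical Python sketch:

```python
# Newton's method on f(x) = x^3 - 2x + 2 from x0 = 0 alternates forever
# between 0 and 1: the step from 0 lands exactly on 1, and the step from 1
# lands exactly back on 0, so the iteration never converges.

def newton_step(x):
    f = x**3 - 2*x + 2
    df = 3*x**2 - 2
    return x - f / df

xs = [0.0]
for _ in range(6):
    xs.append(newton_step(xs[-1]))
print(xs)  # [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```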

2 Fractals and Newton Iterations in the Complex Plane

An interesting result is that Newton's method works for complex valued functions. Having seen that Newton's method behaves differently for different starting guesses, converging to different roots or possibly not converging at all, one might wonder what happens at problem areas in the complex plane, for example, starting points that are equidistant from multiple different roots. Attempting to visualize the convergence of each possible starting complex number results in a fractal pattern. Figure 3 is a colormap for the function f(z) = z^3 - 1 depicting which root Newton's method converged to given a starting complex number. Complex numbers colored red eventually converged to the root at z = 1. Yellow means that that starting complex number converged to e^{2πi/3}, and teal signifies convergence to e^{-2πi/3}. The set of starting points which never converge forms a fractal. This set serves as a boundary for the different basins of attraction, that is, the sets of points which converge to a particular root.
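The colormap computation can be sketched as follows (hypothetical Python; the paper's actual implementation is the MATLAB function fractalp in section 6): iterate Newton's method for f(z) = z^3 - 1 from a starting complex number and record which cube root of unity it lands on.

```python
import cmath

# The three roots of z^3 = 1: 1, e^{2*pi*i/3}, e^{-2*pi*i/3} (as e^{4*pi*i/3}).
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin(z, n_iter=50, tol=1e-8):
    """Return the index of the root z converges to, or -1 if none."""
    for _ in range(n_iter):
        if z == 0:                           # derivative 3z^2 vanishes here
            return -1
        z = z - (z**3 - 1) / (3 * z**2)      # Newton step for z^3 - 1
    for k, r in enumerate(ROOTS):
        if abs(z - r) < tol:
            return k
    return -1

print(basin(2.0 + 0.0j))     # 0: the real root z = 1
print(basin(-1.0 + 1.0j))    # converges to one of the complex roots
```

Evaluating basin over a grid of starting points and coloring by the returned index reproduces the figure 3 picture; the -1 (non-converging) points trace out the fractal boundary.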

Figure 3: f(z) = z^3 - 1

Figure 4: A closeup of a lobe with clear fractal pattern

Looking back to figure 2, visualizing the movement of the tangent line back and forth across the parabola reveals that tiny variations in choice of starting point near a problem area can change which root the iterations converge to, and also that the small area near the minimum gets sent to the majority of the real line. This behavior of a small area being mapped to a large one is characteristic of fractals and gives us an intuition as to why this phenomenon occurs: small movements from a point in the complex plane analogous to the minimum from figure 2 result in the next term in the iteration being sent chaotically to some other point in the complex plane.

It will now be useful to define what a fractal is and give a brief summary of different fractal properties. It turns out that there's no universally accepted definition of a fractal, but typically they're described as a complex geometric figure that does not become simpler upon magnification and exhibits at least one of the following properties: self-similarity at different scales, complicated structure at different scales, or a fractal dimension that's not an integer.

3 In terms of Complex Dynamics

4 Variations of Newton's Method

The most desirable feature of Newton's method is its quadratic convergence, which it often loses if conditions are not ideal. For example, attempting to use Newton's method to find a root of multiplicity greater than one results in linear convergence, meaning that the number of correct digits increases linearly per iteration instead of doubling. Methods have been developed to account for such cases in order to recover the quadratic convergence property. For example, given a function f(x) with a root of multiplicity m, we can instead use Newton's method on the function g(x) = f(x)^{1/m}, which has the same root but with multiplicity one. Using g'(x) = (1/m) f(x)^{1/m - 1} f'(x), the iteration becomes

x_{n+1} = x_n - m f(x_n)/f'(x_n)    (4)

which is called the relaxed Newton's method.

Let's observe the difference in the convergence rates for the function f_1(z) = (z - 1)(z - 2)^2(z - 3), which has a root of multiplicity two at z = 2, using the regular and the relaxed Newton's method.
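The slowdown at the double root, and its repair by iteration (4) with m = 2, can be checked numerically. A hypothetical Python sketch (not the paper's MATLAB code):

```python
# Compare the plain Newton step (m = 1) with the relaxed step (m = 2) on
# f1(z) = (z - 1)(z - 2)^2 (z - 3), starting near its double root at z = 2.

def f1(z):
    return (z - 1) * (z - 2)**2 * (z - 3)

def df1(z):
    # product rule applied to the three factors of f1
    return ((z - 2)**2 * (z - 3)
            + 2 * (z - 1) * (z - 2) * (z - 3)
            + (z - 1) * (z - 2)**2)

def newton_m(z, m, n_iter):
    """Relaxed Newton iteration (4); m = 1 recovers the plain method."""
    for _ in range(n_iter):
        fz = f1(z)
        if fz == 0:          # landed exactly on a root; stop early
            break
        z = z - m * fz / df1(z)
    return z

plain = newton_m(2.2, 1.0, 8)
relaxed = newton_m(2.2, 2.0, 8)
print(abs(plain - 2))    # error still around 1e-3: linear convergence
print(abs(relaxed - 2))  # error at machine precision: fast convergence restored
```

After the same eight steps from x_0 = 2.2, the plain method has only halved its error eight times, while the relaxed method has already converged to the double root to machine precision.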

Figure 5: 10 iterations using regular Newton, all simple roots

In figure 5 we exhibit a colormap of the convergence in 10 iterations for f_2(z) = (z - 1)(z - 2)(z - 3), which has a simple root where f_1 has a double root. Red, yellow, and teal signify convergence to the roots 1, 2, and 3 respectively. Dark blue means that the number did not converge, and is seen only near the borders of the different basins of attraction.

Figure 6: 10 iterations using regular Newton

Figure 6 depicts the results of 10 iterations of Newton's method on f_1 for each starting complex number on a small grid about the roots. Red, lime, and light blue indicate convergence to the roots 1, 2, and 3 respectively. Dark blue means that the iterations did not converge to a root for that given value.

Figure 7: 10 iterations using relaxed Newton

Contrasting figures 5 and 6, we observe that the presence of a double root indeed results in slower convergence to that root. Nearly all the values in figure 5 converged, while most of the values in figure 6 did not converge.

The relaxed Newton's method fixes this, however, as seen in figure 7. The colors in figure 7 have a different interpretation: lime here means convergence to 2 and blue means no convergence at all. We observe that convergence to the double root is achieved much more quickly for a larger set of starting values using the relaxed Newton's method. Iterations using the relaxed Newton's method also result in a different fractal pattern.

In general, the relaxed Newton's method can be used on any function, and the choice of the parameter m affects the convergence and the sharpness of the fractal pattern. Choosing 0 < m < 1 softens the fractal pattern, as seen in figure 9. This is because exponentiating the original polynomial with something greater than 1 introduces a multiple root, resulting in slower convergence but fewer instances of overshooting near a problem point. On the other hand, for choices of m > 1, the iterations achieve faster convergence but are more prone to chaotic behavior near boundary points, resulting in a sharper fractal pattern, as seen in figure 8.

Figure 8: 100 iterations of relaxed Newton on f(z) = z^3 - 1 with m = 1.9

Figure 9: 500 iterations of relaxed Newton on f(z) = z^3 - 1 with m = 0.1

Another variation of Newton's method is called Newton's method for a multiple root. This approach trades simplicity of iteration terms for more robustness in achieving quadratic convergence. If a polynomial f(x) has a root of multiplicity m at a point x_0, its derivative f'(x) has a root of multiplicity m - 1 at that point. Thus we can obtain a function g(x) = f(x)/f'(x) that has a simple root at each of f's roots. The iteration then becomes:

x_{n+1} = x_n - f(x_n)f'(x_n) / (f'(x_n)^2 - f(x_n)f''(x_n))    (5)

The presence of a derivative in the original Newton iteration was troubling enough; here we must also be able to calculate the second derivative. The benefit of Newton's method for multiple roots is that it converges quadratically at every root. We observe this through visualizing the convergence of f(z) = (z - 1)^2 (z - 2)^2 (z - 3)^2.
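Iteration (5) can be sketched as follows (hypothetical Python; the paper's code is MATLAB). Since f = g^2 for g = (z - 1)(z - 2)(z - 3), the derivatives are written in terms of g to keep the sketch self-contained:

```python
# Newton's method for multiple roots, iteration (5), on f = g^2 where
# g(z) = (z-1)(z-2)(z-3) = z^3 - 6z^2 + 11z - 6. With f = g^2 we have
# f' = 2 g g' and f'' = 2 (g'^2 + g g''), so no symbolic differentiation
# of the degree-6 polynomial is needed.

def g(z):   return (z - 1) * (z - 2) * (z - 3)
def dg(z):  return 3*z*z - 12*z + 11
def d2g(z): return 6*z - 12

def newton_mr(z, n_iter=10):
    for _ in range(n_iter):
        f = g(z)**2
        if f == 0:                             # landed exactly on a root
            break
        df = 2 * g(z) * dg(z)
        d2f = 2 * (dg(z)**2 + g(z) * d2g(z))
        z = z - (f * df) / (df**2 - f * d2f)   # iteration (5)
    return z

print(newton_mr(2.4))   # converges quickly to the double root at 2
```

Even though every root of f here has multiplicity two, the iteration reaches the nearby root at 2 to high accuracy within ten steps, consistent with the figures below.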

Figure 10: 10 iterations using regular Newton

Figure 11: 10 iterations using Newton's method for multiple roots

We see that not many starting values achieve convergence within the desired tolerance when all three roots have multiplicity two using the standard Newton iteration. Compare this to using Newton's method for multiple roots in figures 11 and 12. Dark blue signifies convergence to something other than one of the three desired roots or failure to converge at all. Even though values that would have converged to one of the roots under regular Newton's method now converge to something else, largely due to the fact that the roots of the derivative are now possible values of convergence, the values that converge to the desired roots do so much more quickly. In figure 11 we use only 10 iterations, and there is little difference between figures 11 and 12, for which 100 iterations are used.


Figure 12: 100 iterations using Newton's method for multiple roots

Other, more complicated and specialized variations of Newton's method exist.

5 A Survey of Various Newton Fractals

In this section we visualize different Newton fractals, their properties, and their convergence.

6 Code

Original code for this project was written in Matlab. I used Frederic Moisy's box-counting code as well.

% implements newton's algorithm for root finding.
%
% p - a row vector of the coefficients of a polynomial, i.e. [1 2 3 4] is
%     x^3 + 2x^2 + 3x + 4
% x is a starting guess which should be close to the final root
% N is the number of times to run the iteration
% m is the relaxation coefficient (1 for the regular method)
function r = newtp(p, x, N, m)
    dp = polyder(p);
    for i = 1:N
        x = x - m*polyval(p, x)./polyval(dp, x);
    end
    r = x;
end

% implementation of newton's method for multiple roots
%
% p - a row vector of the coefficients of a polynomial, i.e. [1 2 3 4] is
%     x^3 + 2x^2 + 3x + 4
% x is a starting guess which should be close to the final root
% N is the number of times to run the iteration
function r = newtpmr(p, x, N)
    dp = polyder(p);
    dp2 = polyder(dp);
    for i = 1:N
        x = x - (polyval(p, x).*polyval(dp, x)) ./ ...
            (polyval(dp, x).^2 - polyval(p, x).*polyval(dp2, x));
    end
    r = x;
end

% function to produce an image of the fractal developed when using newton's
% method to find roots given starting points.
%
% p - a row vector of the input polynomial's coefficients, i.e. [1 0 0 -1] =
%     z^3 - 1
% xmin, xmax, ymin, ymax - screen bounds
% step - how fine the grid is (resolution)
% N - number of newton iterations to perform
% m - coefficient to multiply by for the relaxed newton's method; keep
%     this at 1 for the regular newton's method
% mr - if nonzero, use newton's method for multiple roots instead
function r = fractalp(p, xmin, xmax, ymin, ymax, step, N, m, mr)
    % set up the grid:
    x = xmin:step:xmax;
    y = ymin:step:ymax;
    [X, Y] = meshgrid(x, y);

    % find out which root each complex number converges to
    if mr
        Z = newtpmr(p, X + Y*1i, N);
    else
        Z = newtp(p, X + Y*1i, N, m);
    end

    % round each number to 3 decimal places
    Z = round(Z*1e3)/1e3;

    % determine all roots of the function and store them in an array
    r = roots(p);
    n = length(r);
    % round each root to 3 decimal places
    r = round(r*1e3)/1e3;

    % add a zero at the end if it is not already a root
    if ~ismember(0, r)
        r = [r.' 0].';
    end
    % create a matrix full of the indices to which each number converged
    [~, loc] = ismember(Z, r);

    loc = (loc - n/2)/(n/2);
    r = loc;
    imagesc(x, y, loc);
    xlabel('Re(z)')
    ylabel('Im(z)')
    axis square
end


