
Questions on Numerical Solution Methods

Newton's Method, Secant Method, Sorting, Linear Algebra, Numerical Integration, Differential Equations, Miscellaneous

Newton's Iterative Solution Method


Why do we bother with Newton iterations when there are better ways to solve things?

This is one of those leaps of faith that we ask you to make in the Education Business, and then we wonder why
you didn't absorb it. The fact is that in a large range of applications, Newton (or some close relation) is the best
way to solve things. Mathematica uses a Newton iteration when you invoke FindRoot. I use a generalized
Newton iteration on a daily basis to solve systems of several thousand coupled non-linear equations. No, Newton
is not the universal best way to solve equations. You will have to learn several methods if you get heavily
involved in the equation solution business, but this is a good start to your education in the field.

What is a Taylor Series Expansion?

If you know the value of a function, and the value of all of its derivatives at a point, the Taylor Series gives you a
way to obtain values of the function near that point without ever having to know the exact form of the function.
To get a thorough answer to this question, you should take a look in a calculus book. If you don't have such a
book, go to the math library, find the section of the shelves with calculus textbooks, and browse until you find
one that makes sense.

This week I memorized the Taylor expansion formulas. Who was Taylor?

Brook Taylor was an English mathematician, born in 1685, died in 1731. The Taylor expansion formula (or
Taylor's Theorem) was given in a 1715 publication called "Methodus Incrementorum Directa et Inversa". This
work laid out the principles of the calculus of finite differences, and provided the first description of the motion
of a vibrating string. He hung out with guys like Newton and Leibniz in the Royal Society, so he was active during
a fairly exciting time in the development of Mathematics and Physics.

How do you tie the iteration into the DO loop?

The best thing to do is to make the DO loop the iteration itself. The DO loop index is the iteration counter, and
an IF test leaves the loop when convergence is obtained:
      do 100 it = 1, itmax
c
c        Stuff here associated with whatever iteration you are doing
c
         if (abs(x-x0) .lt. eps*abs(x)) go to 200
  100 continue
      print *, ' Iteration failed to converge'
  200 print *, ' Final value of x = ', x
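
For a fuller picture, here is a minimal free-form Fortran 90 sketch built around the same loop; the equation f(x) = x**2 - 2 = 0, the initial guess, and the tolerance are placeholders chosen only for illustration.

   program newton_demo
   ! Minimal Newton iteration sketch: solves f(x) = x**2 - 2 = 0.
   ! The function, its derivative, the initial guess, and the tolerance
   ! are placeholders chosen only for illustration.
      implicit none
      integer, parameter :: itmax = 50
      real :: x, x0, eps
      integer :: it
      eps = 1.0e-6
      x = 1.0                              ! initial guess
      do it = 1, itmax
         x0 = x
         x  = x0 - (x0**2 - 2.0)/(2.0*x0)  ! x_new = x_old - f(x_old)/f'(x_old)
         if (abs(x - x0) < eps*abs(x)) then
            print *, ' Converged: x = ', x, ' after ', it, ' iterations'
            stop
         end if
      end do
      print *, ' Iteration failed to converge'
   end program newton_demo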

Secant Solution Method

What is the difference between Newton's method and the Secant method?

Look at the Postscript file on the subject. The bottom line is that where a first derivative is needed, the Newton
method uses a value obtained from an analytic (standard calculus) evaluation of the derivative, but the Secant
method uses a finite difference approximation to the derivative.
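
In one dimension the contrast looks something like the sketch below (again with f(x) = x**2 - 2 as a stand-in for a real problem): two starting guesses and a difference quotient replace the analytic derivative that Newton would use.

   program secant_demo
   ! Minimal secant iteration sketch for f(x) = x**2 - 2 = 0.
   ! Two starting guesses replace the analytic derivative used by Newton.
      implicit none
      integer, parameter :: itmax = 50
      real :: x0, x1, x2, f0, f1, eps
      integer :: it
      eps = 1.0e-6
      x0 = 1.0                             ! first guess
      x1 = 2.0                             ! second guess
      do it = 1, itmax
         f0 = x0**2 - 2.0
         f1 = x1**2 - 2.0
         ! finite-difference slope (f1 - f0)/(x1 - x0) stands in for f'(x)
         x2 = x1 - f1*(x1 - x0)/(f1 - f0)
         if (abs(x2 - x1) < eps*abs(x2)) exit
         x0 = x1
         x1 = x2
      end do
      print *, ' Approximate root = ', x2
   end program secant_demo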

Why does the Newton iteration often work better than the secant method for nonlinear functions like
f(x,y)?

For systems involving more than one equation and one unknown, there is no clean way to obtain approximations to
the partial derivatives needed in the iteration based only on past function values obtained during the iteration.
Say that I have values for four iterations as follows:
x y f(x,y) g(x,y)
-------------------------------
1.1 2.0 2.2 5.2
.9 2.2 2.0 5.7
.7 2.4 1.7 6.5
.6 2.5 1.5 6.6

Because x and y are changing simultaneously from one iteration to the next, it is very difficult (but not
impossible) to construct numerical approximations to the partial derivatives (e.g., the derivative of f with respect
to x while y is held constant).
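
By contrast, when the partial derivatives can be written down analytically, a Newton step for a pair of equations is mechanical. Here is a hedged sketch for a made-up example system (f = x**2 + y**2 - 4, g = x*y - 1, not the numbers in the table above), with the 2 by 2 correction solved by Cramer's rule:

   program newton2_demo
   ! Sketch of Newton updates for a 2-equation system.
   ! Example system (chosen only for illustration):
   !   f(x,y) = x**2 + y**2 - 4 = 0
   !   g(x,y) = x*y - 1 = 0
      implicit none
      real :: x, y, f, g, dfdx, dfdy, dgdx, dgdy, det, dx, dy
      integer :: it
      x = 2.0
      y = 0.5
      do it = 1, 20
         f = x**2 + y**2 - 4.0
         g = x*y - 1.0
         dfdx = 2.0*x ;  dfdy = 2.0*y      ! analytic partial derivatives
         dgdx = y     ;  dgdy = x
         det  = dfdx*dgdy - dfdy*dgdx      ! Jacobian determinant
         dx = (-f*dgdy + g*dfdy)/det       ! Cramer's rule for the correction
         dy = (-g*dfdx + f*dgdx)/det
         x = x + dx
         y = y + dy
         if (abs(dx) + abs(dy) < 1.0e-6) exit
      end do
      print *, ' x, y = ', x, y
   end program newton2_demo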

Sorting
Could you give us some tips on sorting arrays alphabetically?

Aside from declaring the right variables CHARACTER, very little changes from a sort of numbers. You can
continue to use the same methods with the same .LT. or .GT. (maybe .LE. or .GE.) tests. The secret is
understanding how Fortran views the ordering of character strings. The best way to understand this ordering is
to play with the names in "grades.in" once you have your program working. Generally 'AA' < 'Aa' < 'Aaa' < 'Baa'
< 'Bba' < 'Zba' < 'aAa' < 'aaa' < 'zaa'.
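
As a small illustration, here is a sketch of a bubble sort on names; aside from the CHARACTER declaration, it looks just like the numeric version. The names and field width are arbitrary.

   program name_sort_demo
   ! Bubble sort on character strings; illustrative only.
      implicit none
      integer, parameter :: n = 4
      character(len=10) :: names(n), temp
      integer :: i, j
      names = (/ 'Baa       ', 'aaa       ', 'AA        ', 'Aa        ' /)
      do i = 1, n-1
         do j = 1, n-i
            ! .GT. on character strings follows the processor's collating
            ! sequence, which puts upper case before lower case in ASCII
            if (names(j) > names(j+1)) then
               temp       = names(j)
               names(j)   = names(j+1)
               names(j+1) = temp
            end if
         end do
      end do
      print *, names
   end program name_sort_demo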

Can you rate how expensive (time consuming) the various sort techniques are?

You can't predict time consumption without knowing something about the data. As a general rule, bubble sort
comes in last. See the next question.

Is there ever a case when it is very bad to use a particular sorting type?

Yes. If you have a large number (million, billion, ...) of numbers or names that are randomly ordered, then you
should not use bubble, selection, or insertion sort. Use the NETLIB routine or one of its relatives that take on the
order of n log(n) computer operations rather than n² operations. If you have a list that was recently sorted, and a
couple of new elements are tagged on the end, but out of order, then you do not want to do a selection sort, and
probably shouldn't use a bubble sort or the NETLIB sort in grades.f. Something related to an insertion sort (maybe
with a halving search that I'll show you later) is a very good idea.
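
A rough sketch of that last idea (not necessarily the exact version that will be shown in class): find the slot for the new element with a halving search, then shift the tail of the list up by one to make room.

   subroutine insert_sorted(list, nused, newval)
   ! Sketch: insert newval into the already-sorted list(1:nused),
   ! locating its slot with a halving (binary) search. Illustrative only;
   ! the caller must make sure list has room for one more element.
      implicit none
      real, intent(inout) :: list(*)
      integer, intent(inout) :: nused
      real, intent(in) :: newval
      integer :: lo, hi, mid, i
      lo = 1
      hi = nused + 1
      do while (lo < hi)                   ! halve the search range each pass
         mid = (lo + hi)/2
         if (list(mid) < newval) then
            lo = mid + 1
         else
            hi = mid
         end if
      end do
      do i = nused, lo, -1                 ! shift the tail up by one slot
         list(i+1) = list(i)
      end do
      list(lo) = newval
      nused = nused + 1
   end subroutine insert_sorted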

Why doesn't Fortran have a built in Sort routine?

It is neither common enough nor simple enough to rate as an intrinsic function. You will find other more
frequently used procedures that fall into this category. That is why NETLIB was developed. Having a subroutine
that you can plug into your program is nearly as easy as having an intrinsic function, and generally a lot more
flexible.

Linear Algebra

Can you always trust NETLIB files?


No, but they almost always warn you if a package is inadequately tested. In general these subroutines are as well
tested as any software that you will find. Packages like LINPACK and BLAS are at the zero bug level.

How would you find a subprogram in NETLIB if you didn't know its name?

In NETSCAPE select the line that searches the NETLIB data base by topic. You can also look for a file with a
name beginning "index". Also check the file "linpack.doc" in ~jhm for a description of the LINPACK systematic
naming convention, giving another approach to finding some subroutines.

When is Gauss Elimination Used?

The simple answer is: When you are going to solve a set of linear equations without returning for another
solution with a different right hand side, but the same coefficients for the unknowns. Your question is actually
deeper than you may suspect. At some point, depending on the number and location of non-zero coefficients,
and the total number of equations, Gauss Elimination becomes too expensive in both computer time and memory
requirements. Then you must shift to iterative solution methods. I will not be teaching these methods. However,
it is worth knowing that your first introduction to iterative solution of linear equations will probably be the
Gauss-Seidel method. This method is relatively simple to teach, but not used frequently in modern applications.

To solve the matrices, couldn't you use Cramer's rule? Is there anything in NETLIB using Cramer's rule?

You can, but once you go beyond 3 equations and 3 unknowns Cramer's rule is vastly more expensive than
Gauss Elimination for solving the equations. It has to do with the expense of evaluating a Determinant. Try
counting operations for Cramer's rule on 3 equations, 4 equations, and 5 equations. I've never even bothered to look
for a Cramer's rule subroutine, and would be surprised to find one in LINPACK. By the way, another side
benefit of LU decomposition is that the Determinant of the original matrix can be evaluated quickly by
calculating the product of the Determinants of L and U.
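
To make that side benefit concrete, here is a bare-bones sketch of a Doolittle LU factorization with no pivoting (fine for illustration, not robust for general matrices); since L has ones on its diagonal, the determinant is just the product of the diagonal of U.

   subroutine lu_det(a, n, det)
   ! Sketch: factor a into L*U in place (Doolittle, NO pivoting -- fine for
   ! illustration, not robust for general matrices), then form det(A) as the
   ! product of the diagonal of U. det(L) = 1 because L has a unit diagonal.
      implicit none
      integer, intent(in) :: n
      real, intent(inout) :: a(n,n)
      real, intent(out) :: det
      integer :: i, j, k
      do k = 1, n-1
         do i = k+1, n
            a(i,k) = a(i,k)/a(k,k)          ! multiplier, stored in the L part
            do j = k+1, n
               a(i,j) = a(i,j) - a(i,k)*a(k,j)
            end do
         end do
      end do
      det = 1.0
      do i = 1, n
         det = det*a(i,i)                   ! product of U's diagonal
      end do
   end subroutine lu_det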

What is the purpose of having an additional array that carries along the values of an original array?

I assume you are referring to my use of arrays "a" and "at" in some of my examples. The reason is that when you
pass an array "a" to a LINPACK subroutine as part of a solution of a set of linear equations, the subroutine
overwrites the original values in "a". The extra array "at" is needed only if you have further use for those
original coefficients. I needed them for a cross-check of results.

What is the purpose of a Transpose?

The transpose of a matrix is the matrix you get by interchanging its rows and columns. If you want an n by n
matrix B to be the transpose of A, then the Fortran to do the job is this:

   do i = 1, n
      do j = 1, n
         b(i,j) = a(j,i)
      enddo
   enddo

It is useful in various obscure solution procedures, one or another of which you may stumble across in your
career. If you remember what I said about the order in which matrix elements are stored in C versus Fortran, you
may see that this operation can be very useful in programs involving mixtures of C and Fortran subroutines.

Curve Fitting

Could it sometimes be better to fit a curve and then solve it analytically rather than numerically?

You've gotten to the heart of many numerical methods. The act of fitting a curve is usually an approximation, so
you are not producing a true analytic solution to the original differential equation. The fitted curves generate
algebraic equations that can be solved relatively easily with computers (LINPACK, Newton, ...). This idea is
in a sense at the heart of Finite Element methods. You can take quite a few courses on that subject.

What does Least Squares do?

"Fits" a line or curve to a set of data points. By now you should know that with 2 points you can exactly
determine the slope and intercept of a straight line passing through the points. What if you have 10 points, that
should fall on a straight line, but because of experimental error, don't really line up. There is no single straight
line (y = m x + b) that passes through all ten points. The best that you can hope for is a line passing near all 10
points, but probably not through any of them. The problem is to come of with a systematic method for defining
"near" and finding the line that is nearest the points. That's where least squares comes in.

How exactly do you use the cubic fit and what's its purpose?

The reason for the fit is that I have values of some function only at a limited number of points. For example, I
may need viscosity of a liquid at any temperature between 0 and 120 Fahrenheit. However, I only keep values of
viscosity stored for a limited number of temperatures, say T= 0, 20, 40, 60, 80, 100, and 120 F. What I do is to
obtain coefficients for a cubic function within each of the intervals between my temperatures (6 intervals, 4
coefficients in each interval). There are two ways to get these coefficients. As an example, let's get a cubic for
temperatures between 40 F and 60 F. One way is to use the viscosities at 20, 40, 60, and 80 F. to generate 4
equations with 4 unknown coefficients. The other is to use temperatures at 40 F and 60 F, and force a match to
the first and second derivatives of the fits in adjacent intervals to get enough equations. Unfortunately, this latter
"spline" approach results in a coupled set of linear equations for the cubic coefficients in all intervals, increasing
the complexity of the problem. In either case, you end up with an expression:

visc = a0 + a1*T + a2*T**2 + a3*T**3

with known values for a0, a1, a2, and a3. Plug in any T between 40 and 60, and you will get an approximation to
the viscosity at that temperature. Usually, this will give a more accurate approximation for values of viscosity in
the interval 40 < T < 60, than you would get with a simple linear interpolation. Unfortunately, there are times
when the cubic fit can behave very badly, "kinking" by placing the maximum and/or minimum of the cubic
within the interpolation range.
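
Once the coefficients are in hand (by either method), using the fit is just a matter of finding the right interval and evaluating the cubic. A sketch, assuming an array a(0:3,6) already holds the four coefficients for each of the six 20-degree intervals (the coefficients themselves are not computed here):

   function visc_of_t(t, a) result(visc)
   ! Sketch: evaluate a piecewise cubic fit for viscosity.
   ! a(0:3,6) is assumed to hold the four cubic coefficients for each of the
   ! six 20-degree intervals between 0 F and 120 F.
      implicit none
      real, intent(in) :: t, a(0:3,6)
      real :: visc
      integer :: k
      k = max(1, min(6, int(t/20.0) + 1))    ! which interval holds t
      visc = a(0,k) + a(1,k)*t + a(2,k)*t**2 + a(3,k)*t**3
   end function visc_of_t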

Numerical Integration
Is Simpson's Method always better than Trapezoidal?

Usually, but not always. If the function to be integrated has some rapid changes and you don't use enough
sampling points, then the quadratic fit in Simpson's method may give stranger results than a simple Trapezoidal
method.

Is Simpson's Method faster than the Trapezoidal method? Which one is more reliable?

Simpson's Method is usually faster simply because you usually don't need to evaluate the function at as many
points to obtain the same accuracy for the definite integral. If you are integrating smooth functions both are
reliable. If your function may have some regions of rapid change in value and derivatives, then trapezoidal is
more reliable.
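
For reference, here is a sketch of both composite rules on n equal panels of width h (n must be even for Simpson's rule); the integrand exp(x) and the limits are placeholders.

   program quad_demo
   ! Composite trapezoidal and Simpson's rules on [a,b] with n panels.
   ! f(x) = exp(x) is only a placeholder integrand for the sketch.
      implicit none
      integer, parameter :: n = 100          ! must be even for Simpson
      real :: a, b, h, trap, simp, x
      integer :: i
      a = 0.0 ;  b = 1.0
      h = (b - a)/n
      trap = 0.5*(f(a) + f(b))
      simp = f(a) + f(b)
      do i = 1, n-1
         x = a + i*h
         trap = trap + f(x)
         if (mod(i,2) == 1) then
            simp = simp + 4.0*f(x)           ! odd interior points weighted 4
         else
            simp = simp + 2.0*f(x)           ! even interior points weighted 2
         end if
      end do
      trap = h*trap
      simp = h*simp/3.0
      print *, ' trapezoid = ', trap, '  simpson = ', simp
   contains
      real function f(x)
         real, intent(in) :: x
         f = exp(x)                          ! placeholder integrand
      end function f
   end program quad_demo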

What form of numerical integration do you think is the most reliable?

For a definite integral of a known function, the trapezoidal method is the most reliable. You don't get unintended
kinks from a straight line. For integration of an ODE, the Implicit Euler method is the most reliable. In either case
you need to watch the integration step size to be sure that you maintain an accuracy that you can accept, and you
will need more steps than with some other methods.
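
A small sketch of why Implicit Euler is so hard to kill: on the test equation dy/dt = -c*y the implicit step can be rearranged by hand, and the result stays bounded no matter how large the step size (though accuracy suffers). The values of c, dt, and the number of steps below are arbitrary.

   program ieuler_demo
   ! Implicit (backward) Euler sketch for dy/dt = -c*y, y(0) = 1.
   ! For this linear test problem the implicit step can be solved by hand:
   !   y_new = y_old + dt*(-c*y_new)   =>   y_new = y_old/(1 + c*dt)
   ! A nonlinear right-hand side would need a Newton or secant solve
   ! inside the loop instead.
      implicit none
      real, parameter :: c = 10.0, dt = 0.5
      integer, parameter :: nstep = 20
      real :: y, t
      integer :: i
      y = 1.0
      t = 0.0
      do i = 1, nstep
         y = y/(1.0 + c*dt)                  ! stays bounded for any dt > 0
         t = t + dt
         print *, t, y, exp(-c*t)            ! numerical vs exact solution
      end do
   end program ieuler_demo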

Solution of Differential Equations


How Deep will we go into Differential Equations?

We will look at a simple first order system of differential equations. No fancy theorems. We will follow the steps
to convert the differential equations to algebraic equations and proceed with a solution. If time permits, I may
discuss a second order ODE.

Why does it seem like ODEs are only solved numerically with a computer? Why can't they be solved
analytically, like Mathematica can do?

The number of equations that Mathematica can solve analytically is actually relatively limited. When you get to
the ones involved in really interesting physical models of complex systems, neither Mathematica nor the best
Mathematicians can give you a useful analytic solution.

If we could only remember a couple of sentences about ODE solutions, what would they be?

Use existing solution packages from places like NETLIB (ODEPACK) whenever possible. However, test them
carefully for limits of accuracy and stability. One good class of tests is to make a series of runs, each with a
different integration step size.

Could you list methods (Secant, Adams, ...) in order of accuracy?

Decouple Secant from Adams, Runge-Kutta, etc. Secant, like Newton, is used to solve non-linear algebraic
equations (yes, those algebraic equations could be generated by an implicit Adams-Moulton, but that's a second
level of solution detail). For a given number of iterations, Secant usually produces a less accurate answer than
Newton. On the ODE side, Explicit and Implicit Euler formally have the same first order (lowest) accuracy.
Crank-Nicolson has the next higher (second order) accuracy. Runge-Kutta, Adams-Bashforth, Adams-Moulton,
and Implicit Adams-Moulton are all tied for the highest formal accuracy (fourth order) of those that we have
studied. In practice Runge-Kutta is usually a little more accurate than the others for a given step size. This
accuracy has meaning only as long as the methods stay numerically stable. The ordering of methods from first to
last to give up as step size is increased is usually (but not always): Explicit Euler and Adams-Bashforth give up
first; Adams-Moulton holds on until a little larger step size; Implicit Adams-Moulton and Runge-Kutta last
further; and Implicit Euler stays stable for any time step size (results may be very inaccurate, but not fatal to the
floating point unit).

Miscellaneous

How can a computer generate a random number when there is nothing random about the way it
operates?

I'm not the best guy to ask this question, because I can't give you a complete answer. There are large numbers of
papers in journals on the subject if you are interested, and source code available for "random number"
generators. The simple answer is that they don't generate random numbers. They generate "pseudo-random"
numbers. These are sequences of numbers that proceed from some starting point (called the seed) and via a well
determined algorithm produce a series of numbers that for most purposes look random. Start with the same seed,
generate the same sequence of numbers. You can introduce a certain amount of true randomness in the whole
process with your selection of the seed. Pull out the number of milliseconds that have elapsed during the day and
scale by an appropriate number.
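
In Fortran 90 that last idea looks roughly like the sketch below, using the standard random_number, random_seed, and system_clock intrinsics; the factor 37 used to spread the clock value across the seed array is arbitrary.

   program random_demo
   ! Sketch: seed the pseudo-random generator from the system clock so that
   ! each run starts from a different point in the sequence.
      implicit none
      integer :: nseed, count, i
      integer, allocatable :: seed(:)
      real :: x
      call random_seed(size=nseed)           ! how many integers the seed needs
      allocate(seed(nseed))
      call system_clock(count)               ! clock ticks elapsed
      seed = count + 37*(/ (i, i=1,nseed) /) ! spread the clock value across the seed
      call random_seed(put=seed)
      call random_number(x)                  ! pseudo-random value in [0,1)
      print *, x
   end program random_demo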


Maintained by John Mahaffy : jhm@psu.edu
