
Numerical Methods for Evolutionary Systems

C. W. Gear
Celaya, Mexico, January 2007
Plan: to survey classes of problems that are important in chemical engineering and to look at the
properties of various methods.
Objective: to give the audience an understanding of the issues to be concerned with when
selecting a computer code to solve a given problem.
Note: In most cases a user should not be writing his or her own code to solve a problem, but should use
one of the many off-the-shelf packages that are available. Real-world problems present
many challenges to a code, so one should restrict oneself to using codes that have
undergone considerable development and improvement whenever possible. However, it is
important to understand the issues that govern accuracy and speed of solution before
choosing a code.
Outline
1. Fundamental Ordinary Differential Equations
2. Stiff Differential Equations
3. Differential-Algebraic Equations
4. Slow Manifolds
Listings of many of the MATLAB programs used for the examples appear in these slides. These
are present only for completeness and are not intended to be read! Those who have an electronic
version of these slides can copy them into a MATLAB file if they wish to experiment with them.

A number of references are cited on the slides. Copies of those that include Gear as an author can be
found at
www.princeton.edu/~wgear
Copyright 2006, C. W. Gear
(Ordinary) Differential Equations

$$\frac{dy}{dt} = f(y,t) \qquad \text{(non-autonomous: depends on } t\text{)}$$

$$\frac{dy}{dt} = f(y) \qquad \text{(autonomous: independent of } t\text{)}$$

y may be a vector [y1, y2, ..., yN]^T.
Could even be infinite-dimensional (a PDE).
Can turn a non-autonomous system into an autonomous one by defining an additional variable yN+1 = t with the additional differential equation dyN+1/dt = 1.
Sometimes we have higher-order equations - for example

$$\frac{d^2y}{dt^2} = F\!\left(y, \frac{dy}{dt}\right)$$

Can turn this into a larger, first-order system with the substitution z1 = y, z2 = dy/dt to get

$$\frac{dz_1}{dt} = z_2, \qquad \frac{dz_2}{dt} = F(z_1, z_2)$$
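As a concrete illustration of this reduction, here is a minimal MATLAB sketch (not from the original slides; the particular right-hand side F is an arbitrary choice). The later example programs use the companion trick for non-autonomous equations, carrying t as an extra component whose derivative is 1.

% Sketch: rewrite y'' = F(y, dy/dt) as a first-order system z' = f(t,z)
% with z = [z1; z2] = [y; dy/dt].  F below is a made-up illustration.
F = @(y, yp) -y - 0.1*yp;           % hypothetical right-hand side F(y, y')
f = @(t, z) [z(2); F(z(1), z(2))];  % dz1/dt = z2,  dz2/dt = F(z1, z2)

[t, z] = ode45(f, [0 10], [1; 0]);  % initial values y(0) = 1, y'(0) = 0
plot(t, z(:,1))                     % z(:,1) recovers the original y(t)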
A differential equation specifies a vector direction everywhere it is defined.
If we pick an initial value, a curve with these vectors as tangents is the solution.
However, there is a family of solutions – one for every initial value.

[Figure: a field of direction vectors in the (t, y) plane, with the solution curve drawn through a chosen initial value.]
We will concentrate on evolutionary problems in this course.
These are initial value problems. The solution is specified at a given time
and we want to know the solution at future points in time.

Ordinary differential equations (ODEs) have unique solutions to an initial value problem under mild continuity assumptions on f(y). We will generally assume that the right-hand side (the function f(y)) is differentiable as often as we need.

It is important to realize that if f(y) has discontinuities in its values or its derivatives, some methods may not perform as well – we will look at some examples of this. Such situations can occur when, for example, a change is made to the operating environment of a system such as a reactor.

When the ODE system has more than one component, we can also have
boundary value problems in which data is specified at more than one point
in time (usually two). These problems frequently occur in control problems
when we know where we start and we want to arrive at a specific
conclusion – for example, sending a rocket to Mars. These generally
involve a different type of method than initial value problems, and existence
and uniqueness may not be true. We will not discuss these in this course.
The differential equation defines the solution, y(t), for all values
of the independent variable t. However, in a computer we can
only represent a finite amount of information so we usually
represent the solution at a set of discrete time points

t0 < t1 < t2 < … < tN

where tN is the final time we want to get to.

We will calculate numerical approximations to y(tn) at each of these points, tn, and name these approximations yn.

t0 is the starting point for the integration and we are given the
initial value y0 = y(t0).

We approximate the solution of a differential equation locally by some easily
computable function – such as a polynomial. The simplest is a straight line!
Suppose we are trying to approximate the solution:

[Figure: the true solution y(t) and the computed points y0, y1, y2, y3, y4 at t1, ..., t4; each step introduces a local error, and after 4 steps these accumulate into the global error.]
In the previous slide we approximated the solution over one step by using the
tangent to the curve which is given by dy/dt = f(y). If we know y we can compute
this. The method, called the Forward Euler method, is
yn+1 = yn + (tn+1 – tn) f(yn)
Usually we write the step size as h = tn+1 – tn (or as hn if it varies from step to step).
The step size (and the method) are chosen to get the accuracy we need. How big
is the error? We can look at it with Taylor’s series. We have
$$y(t_{n+1}) = y(t_n) + h\left.\frac{dy}{dt}\right|_{n} + \frac{h^2}{2}\left.\frac{d^2y}{dt^2}\right|_{n} + \cdots$$

or

$$y(t_{n+1}) = y(t_n) + h\,y'(t_n) + \frac{h^2}{2}\,y''(\xi_n) \qquad \text{(exact)}$$
(This assumes sufficient differentiability.) Since Euler's method matches the first
two terms, the local error is h²y''/2. This tells us that if we halve the step size, h,
then the local error reduces by a factor of four. However, if we halve the step
size, we will take twice as many steps to cover the same ground, so we
introduce twice as many local errors, so the global error at the end of a fixed
interval will reduce by a factor of two – that is, the global error in this case is
proportional to h (i.e., h to the FIRST power) so this is called a
FIRST ORDER METHOD
(see example from MATLAB program)
Forward Euler method for y’ = t with various step sizes over interval [0,1]
(MATLAB program for this on next slide)
Because y'' is constant (=1), the local error is exactly h²/2 and the global error is
exactly Nh²/2 = h/2, where the number of steps N = 1/h.
It is often helpful to plot errors on log-log graph to see the order (from the
slope of the line). (It is exactly a straight line in this example because
there are no higher-order derivatives in the solution. Generally the log-log
plot of the error will only approach a straight line as h gets small.)
[Figure: Forward Euler solutions for the different step sizes]

No. Steps      Error        Error/h
    2       2.5000e-001      0.500
    4       1.2500e-001      0.500
    8       6.2500e-002      0.500
   16       3.1250e-002      0.500
   32       1.5625e-002      0.500
   64       7.8125e-003      0.500

%Ex1.m Forward Euler method for y' = t, y(0) = 0 over interval [0,1].
color = 'krgbmc'; % Sequence of colors for plotting at each step size
figure(1); hold off % Plot results for different h's here
% Labeling for printed output
fprintf('No. Steps Error Error/h\n\n')
for i = 1:6 % Run for 6 different step sizes
N = 2^i; h = 1/N; % Number of steps and step size
y = 0; % Initial value
for n = 1:N % Do N steps
t = (n-1)*h; % t at start of step
yold = y;
y = y + h*t; % Forward Euler step
%plot new segment of solution
plot([t;t+h], [yold;y],['-' color(i)],'LineWidth',2 )
axis([0 1 0 0.5]); xlabel('Time, t'); ylabel('Solution y'); hold on
pause(2/N) % Make it run slowly to watch output
end
% Print error at end of interval
fprintf('%6i %10.4e %5.3f\n',N,0.5-y,(0.5-y)/h);
Error(i) = 0.5 - y; % Save errors for log plot
nsave(i) = n; % ... and n's
end
print -dpsc Ex1-1; pause % print copy of plot and wait for key stroke
figure(2) %Log-log plot of errors
hold off
loglog(nsave,Error,'-b','LineWidth',3)
xlabel('Number steps');ylabel('Error');
axis([1E0 1E3 1E-3 1E0]); print -dpsc Ex1-2
First-order methods are not accurate enough for most problems – that is, the
step size has to be so small that the large number of steps takes too much
computer time. We need to get more accurate methods.

TAYLOR'S Series methods

These attempt to compute more derivatives in the Taylor's series.
They are not usually worth considering since it is hard to compute the high
derivatives for most problems.

For example, forming several derivatives of even a fairly simple system by hand
would be difficult to do correctly. There are computer programs that do it,
but they may be computationally expensive.
Higher-order methods – Runge-Kutta methods
Some integration methods can be found by using the equality

$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} \frac{dy}{dt}\,dt$$

and considering various approximations for dy/dt. For example, the Forward Euler
method is obtained by setting dy/dt ≈ f(y(tn)). Suppose instead we use the
mid-point of the interval and set dy/dt ≈ f(y(tn+1/2)) to get

$$y_{n+1} = y_n + h f(y_{n+1/2})$$

Unfortunately, we don't know yn+1/2 so we cannot evaluate this formula. However, we
could estimate yn+1/2 using Forward Euler with yn+1/2 = yn + hf(yn)/2 to get the method:

$$k_0 = h f(y_n), \qquad k_1 = h f\!\left(y_n + \tfrac{1}{2}k_0\right), \qquad y_{n+1} = y_n + k_1$$

This is an example of a class of methods known as Runge-Kutta methods. The important
thing about these methods is that they are ONE STEP: they start with only the knowledge
of yn and give a procedure for calculating yn+1.
They take MORE THAN ONE evaluation of f(y) in each step – and this affects their
computational cost.
The ORDER of Runge-Kutta methods. (This is algebraically messy so we will only
look at the example on the previous slide since it is relatively simple.)

We expand everything in Taylor's series, usually around tn:

$$k_0 = h y'_n$$

$$k_1 = h f\!\left(y_n + \tfrac{h}{2} y'_n\right) = h f(y_n) + \frac{h^2}{2}\frac{\partial f}{\partial y}\, y'_n + \frac{h^3}{8}\frac{\partial^2 f}{\partial y^2}\,(y'_n)^2 + \cdots$$

so

$$y_{n+1} = y_n + h y'_n + \frac{h^2}{2}\frac{\partial f}{\partial y}\, y'_n + \frac{h^3}{8}\frac{\partial^2 f}{\partial y^2}\,(y'_n)^2 + \cdots$$

Since

$$y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2}\, y''(t_n) + \frac{h^3}{6}\, y'''(t_n) + \cdots$$

and

$$y'' = \frac{\partial f}{\partial y}\, y'$$

the local error is

$$\frac{h^3}{6}\, y'''(t_n) - \frac{h^3}{8}\frac{\partial^2 f}{\partial y^2}\,(y'_n)^2 + \cdots$$

Hence, the method is SECOND ORDER.

There are Runge-Kutta methods of many orders. For example

THIRD ORDER

$$k_0 = h f(y_n), \quad k_1 = h f\!\left(y_n + \tfrac{1}{3}k_0\right), \quad k_2 = h f\!\left(y_n + \tfrac{2}{3}k_1\right), \quad y_{n+1} = y_n + (k_0 + 3k_2)/4$$

FOURTH ORDER

$$k_0 = h f(y_n), \quad k_1 = h f\!\left(y_n + \tfrac{1}{2}k_0\right), \quad k_2 = h f\!\left(y_n + \tfrac{1}{2}k_1\right), \quad k_3 = h f(y_n + k_2)$$

$$y_{n+1} = y_n + (k_0 + 2k_1 + 2k_2 + k_3)/6$$
These are not the only choices of coefficients to achieve these orders (2, 3, and 4).
Note that the number of function evaluations (also called stages) is 2, 3 or 4 for
each of these methods. HOWEVER, a FIFTH ORDER METHOD takes at least 6
stages, and the number rapidly increases.
Also note, these are not the best choices for coefficients – leave that to an off-the-shelf code!

We want to examine the consequences of using different orders, and we will use these examples of
orders 1 through 4 for tests on the simple problem y' = λy + At⁵/120, with λ = -2, A = 1, y(0) = y0,
to compute y(1).
Below we see the log-log plot of the errors at all 4 orders versus number of steps. The
higher order is clearly superior for smaller step sizes (larger numbers of steps and smaller
errors) but for large step sizes we see that the first order method is more accurate than
the second order one. We see that if we are happy with a very approximate answer, a lower
order method may be less expensive, but as we desire more and more accuracy, higher-
order methods become better. (The MATLAB code for this is shown on the next slide.)

%Ex2.m Comparison of 1st through 4th order RK methods on y' = lambda*y + A*t^5/120, y(0) = y0;
lambda = -2; A = 1;
y0 = -0.013; const = y0*lambda^6 + A; % Constant of integration for initial value y0
for i = 1:6 % 6 different step sizes
N = 2^i; h = 1/N; % Number of steps and step size
Init = [y0;0]; % Vector is [y; t]
y1 = Init; y2 = Init; y3 = Init; y4 = Init; % Initial values for four orders
for n = 1:N
%Do each step
%First order method
k0 = h*fun(lambda,A,y1); y1 = y1 + k0;
%Second order method
k0 = h*fun(lambda,A,y2); k1 = h*fun(lambda,A,y2 + k0/2); y2 = y2 + k1;
%Third order method
k0 = h*fun(lambda,A,y3); k1 = h*fun(lambda,A,y3 + k0/3); k2 = h*fun(lambda,A,y3+2*k1/3);
y3 = y3 + (k0 + 3*k2)/4;
%Fourth order method
k0 = h*fun(lambda,A,y4); k1 = h*fun(lambda,A,y4 + k0/2); k2 = h*fun(lambda,A,y4 + k1/2);
k3 = h*fun(lambda,A,y4 + k2);y4 = y4 + (k0 + 2*k1 + 2*k2 + k3)/6;
end
%Compute errors
tL = y1(2)*lambda; % t*lambda at end of interval
Solution = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*(1/24+tL/120))))))/lambda^6;
Err_y1(i) = abs(Solution - y1(1)); Err_y2(i) = abs(Solution - y2(1));
Err_y3(i) = abs(Solution - y3(1)); Err_y4(i) = abs(Solution - y4(1));
nsteps(i) = N;
end
figure(3)
loglog(nsteps,Err_y1,'-k',nsteps,Err_y2,'-r',nsteps,Err_y3,'-b',nsteps,Err_y4,'-g','LineWidth',2)
legend('Order 1','Order 2','Order 3','Order 4')
xlabel('Number steps'); ylabel('Error');
print -dpsc Ex2

function derivative = fun(lambda,A,y)


derivative = [lambda*y(1)+y(2)^5/120; 1];
In the last example, we plotted error against the number of steps. However, if we have a complex
problem the majority of the computer time is spent evaluating the derivatives, not in performing the
step-by-step integration. Hence, we often measure the “cost” by counting the number of derivative
evaluations (often called function evaluations). If we don’t have a complex problem, there is little
point in worrying about which method is fastest because on today’s PCs, a simple problem can be
executed in much less time than it takes to enter the problem into the computer!

The Runge-Kutta method takes an increasing number of function evaluations as the order is
increased. Below we show the output from the error versus the number of function evaluations for
the last example. We can see that the cost advantage of lower order methods for low accuracy is
increased.

The "problem" with RK methods is that they find out a lot of information about the solution during one step
(for example, they have estimates of its derivatives), but all of that information is thrown away before the
next step is started. Multi-step methods use information from past steps to try to increase the accuracy of
the solution without using additional function evaluations. One important class of these methods can be
found by again considering the equivalence

$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} y'(t)\,dt \qquad (1)$$

Let us suppose that we have already computed the solution at a number of points, tn-i for i = 0, 1, 2, …,
along with the derivatives y'n-i. We could use this information to approximate y'(t) by interpolation so that
we could estimate the integral in eq. (1). For example, if we just used one additional past point we have,
from the Lagrange interpolation formula,

$$y'(t) \approx \frac{t - t_{n-1}}{t_n - t_{n-1}}\, y'_n + \frac{t - t_n}{t_{n-1} - t_n}\, y'_{n-1}$$

If we now substitute this in (1) and integrate we get the approximation

$$y_{n+1} = y_n + \frac{3}{2} h y'_n - \frac{1}{2} h y'_{n-1}$$

• This is a two-step method – it uses information from two places, tn and tn-1.
• It is a second-order method, but it only uses one function evaluation per step.
It is called the (2-step) Adams-Bashforth method.
There are q-step Adams-Bashforth (AB) methods for all q > 0. They take the form

$$y_{n+1} = y_n + h \sum_{i=1}^{q} \beta_i\, y'_{n+1-i}$$

and are of q-th order. We can compute the coefficients in a number of ways – but this is not something the
average user ever needs to do! (We will come back to this issue a little later.) The next example
contains the coefficient values for orders up to four.
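Those coefficient values can be reproduced by requiring the formula to be exact whenever y' is a polynomial of degree less than q, which gives a small linear system. The sketch below is one such derivation, included only for illustration; it is not how the off-the-shelf codes obtain their coefficients.

% Sketch: derive q-step Adams-Bashforth coefficients beta_1..beta_q by making
%   y_{n+1} = y_n + h*sum_i beta_i*y'_{n+1-i}
% exact whenever y'(t) is a polynomial of degree < q.  With s = (t - t_n)/h the
% past derivative points sit at s = 0, -1, ..., 1-q, so the order conditions are
%   sum_i beta_i*(1-i)^k = integral_0^1 s^k ds = 1/(k+1),   k = 0..q-1.
q = 4;
nodes = 1 - (1:q);            % s-locations of y'_n, y'_{n-1}, ..., y'_{n+1-q}
V = zeros(q); b = zeros(q,1);
for k = 0:q-1
    V(k+1,:) = nodes.^k;      % left-hand sides of the order conditions
    b(k+1) = 1/(k+1);         % right-hand sides
end
beta = (V\b).'                % q = 4 reproduces [55 -59 37 -9]/24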
In the next example we will integrate the problem from the previous example by one- through four-step
AB methods, and plot the errors (solid lines) compared with those of the RK methods (dotted lines)
used previously. The first order methods are identical. The “kinks” at the start of the graphs for the
2nd and 3rd order methods are to do with issues we will discuss later.
Note that there is a difficulty with multi-step methods – we need to know the values at multiple points
before we can get started. Automatic, off-the-shelf codes handle this in various ways. For this simple
example, we are going to compute the exact values – since we know the solution! The Matlab code
for this is shown on the next slide.

%Ex3.m Comparison of 1st through 4th order Adams Bashforth & RK methods on y' = lambda*y+ A*t^5/120, y(0) = y0;
Ex2 % Ex2.m should be executed before this is run to compute some values that are used.
% Integration coefficients: beta(q,:) are the coefficients for the q-step method
beta = [[1 0 0 0]; [3 -1 0 0]/2; [23 -16 5 0]/12; [55 -59 37 -9]/24];
for i = 1:6 % 6 different step sizes
N = 2^i; h = 1/N; % Number of steps and step size
Init = [y0;0]; % Vector is [y; t]
% Compute derivative at past points
tL = -lambda*h;
Derivative1 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
tL = -2*lambda*h;
Derivative2 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
tL = -3*lambda*h;
Derivative3 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
for order = 1:4
yd = [Derivative1; Derivative2; Derivative3]; %Previous derivative value array
y = Init;
for n = 1:N
%Do each step NOTE This is not an efficient code when order < 4.
derivative = fun(lambda,A,y);
yd = [derivative(1); yd]; %Now we have all four derivatives
y(1) = y(1) + h*beta(order,:)*yd; %Adams Bashforth formula
y(2) = n*h; %Compute t directly
yd = yd(1:3); %Save the last three derivatives
end
%Compute errors
tL = y(2)*lambda; % t*lambda at end of interval
Solution = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*(1/24+tL/120))))))/lambda^6;
ErrAB(order,i) = abs(Solution - y(1));
end
nsteps(i) = N;
end
figure(5)
hold off
loglog(nsteps,ErrAB(1,:),'-k',nsteps,ErrAB(2,:),'-r',nsteps,ErrAB(3,:),'-b',nsteps,ErrAB(4,:),'-g','LineWidth',2)
hold on
loglog(nsteps,Err_y1,':k',nsteps*2,Err_y2,':r',nsteps*3,Err_y3,':b',nsteps*4,Err_y4,':g','LineWidth',3)
legend('AB-1','AB-2','AB-3','AB-4','RK-1','RK-2','RK-3','RK-4')
xlabel('Number Evaluations'); ylabel('Error');
print -dpsc Ex3

Note that the costs of the two methods (RK and AB) were comparable at the same order and
accuracy for this problem. However, that is not always the case. The error plot below is for
the equation y’ = -2y, y(0) = 1 by RK and AB for orders 1 through 4. Here the multi-step
method is superior, which is the reason that multi-step methods are often preferable unless
there are other reasons to prefer one step methods (which we will discuss later). At 4th order,
AB is nearly twice as efficient (in function evaluations).

Adams multi-step methods of very high orders were used in the pre-computer days by astronomers
(and others) because of the low amount of work (really important if you are doing it by hand!).
Naturally there was a wish to squeeze everything one could out of a single function evaluation (the
costly part).
Returning to our integral formula

$$y(t_{n+1}) = y(t_n) + \int_{t_n}^{t_{n+1}} y'(t)\,dt \qquad (1)$$

let us use a different interpolation formula to approximate y'(t), one that also uses y'n+1:

$$y'(t) \approx \frac{(t-t_n)(t-t_{n-1})}{(t_{n+1}-t_n)(t_{n+1}-t_{n-1})}\, y'_{n+1} + \frac{(t-t_{n+1})(t-t_{n-1})}{(t_n-t_{n+1})(t_n-t_{n-1})}\, y'_n + \frac{(t-t_{n+1})(t-t_n)}{(t_{n-1}-t_{n+1})(t_{n-1}-t_n)}\, y'_{n-1}$$

If we substitute this in (1) and integrate we will get

$$y_{n+1} = y_n + \frac{5}{12} h y'_{n+1} + \frac{2}{3} h y'_n - \frac{1}{12} h y'_{n-1}$$

This is also a 2-step method, but to evaluate the right-hand side we need y'n+1 = f(yn+1), which depends
on the as-yet unknown yn+1. In other words, we have to (approximately) solve the implicit equation

$$y_{n+1} = y_n + \frac{5}{12} h f(y_{n+1}) + \frac{2}{3} h y'_n - \frac{1}{12} h y'_{n-1}$$

This appears to be computationally costly, but it will turn out that an approximation can be found with
little additional computation and that the benefits of these implicit methods are worth the small
additional computation time.
These methods are called Adams-Moulton methods. A q-step Adams-Moulton method has order q+1.
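To make "implicit" concrete, the following sketch solves the 2-step Adams-Moulton equation above by plain functional (fixed-point) iteration for one step of the test equation y' = λy; the step size, starting guess, and iteration count are arbitrary illustrative choices. The predictor-corrector implementations described on the next slide replace this full iteration with only one or two passes.

% Sketch: one step of the 2-step Adams-Moulton formula
%   y_{n+1} = y_n + (5/12)h f(y_{n+1}) + (2/3)h y'_n - (1/12)h y'_{n-1}
% solved by functional iteration for the illustrative problem y' = lambda*y.
lambda = -2;  f = @(y) lambda*y;  h = 0.1;
ynm1 = 1;                 % take exact past values: y_{n-1} = y(0) = 1
yn   = exp(lambda*h);     % and y_n = y(h), so y_{n+1} approximates y(2h)
known = yn + (2/3)*h*f(yn) - (1/12)*h*f(ynm1);   % part not involving y_{n+1}
ynp1 = yn;                % crude starting guess (a predictor would normally supply this)
for it = 1:5              % a few fixed-point sweeps suffice when |(5/12)*h*lambda| < 1
    ynp1 = known + (5/12)*h*f(ynp1);
end
fprintf('AM2 step: %.6f   exact: %.6f\n', ynp1, exp(lambda*2*h))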

How might we "solve" such an implicit method? One approach is the Predictor-Corrector
implementation that is used in many codes. In this approach we get a first approximation by using an
explicit method (such as an Adams-Bashforth method), called the predictor, and then do functional
iteration of the implicit integration method (such as Adams-Moulton), called the corrector. For the two-
step Adams-Moulton method on the last slide, the functional iteration is

$$y_{n+1}^{[p+1]} = y_n + \frac{5}{12} h f\!\left(y_{n+1}^{[p]}\right) + \frac{2}{3} h y'_n - \frac{1}{12} h y'_{n-1}$$

Usually only one corrector iteration is done. For example, if we use the 2-step Adams-
Bashforth/Moulton pair as a predictor-corrector (PC) combination, we perform the operations

$$p_{n+1} = y_n + \frac{3}{2} h y'_n - \frac{1}{2} h y'_{n-1}$$

$$y_{n+1} = y_n + \frac{5}{12} h f(p_{n+1}) + \frac{2}{3} h y'_n - \frac{1}{12} h y'_{n-1}$$

This is a 2-step, 3rd order predictor-corrector method. We haven't said what value we use for y'n at the
next step - there are two options: we could use f(pn) or f(yn). In the former case we first PREDICT pn+1,
then EVALUATE f(pn+1), then CORRECT to get yn+1. In the latter case, we have one more
EVALUATION. Hence we call these two cases PEC and PECE methods, respectively. Note that a
PEC method has one function evaluation per step, while a PECE method has two. (Off-the-shelf
codes will typically automatically select between these two, depending on the needs of the equation
being integrated.)

The next example (Ex4) compares the results of Adams-Bashforth against PEC and PECE
implementations of Adams Bashforth/Moulton. The next slide shows the comparative error results.
The code is on the following slide.
We see that the PC methods are superior to Adams Bashforth in terms of function
evaluations, while the PECE implementation is slower than the PEC implementation.
Note that PEC-q and PECE-q are of order q+1 while AB-q is of order q.

%Ex4.m Comparison of one- through 4-step Adams Bashforth/Moulton methods in PEC and PECE implementations
% versus Adams Bashforth, for the equation y' = lambda*y + A*t^5/120, y(0) = y0;
lambda = -2; A = 1;
y0 = -0.013; const = y0*lambda^6 + A; % Constant of integration for initial value y0
% Integration coefficients: betaB(q,:) are the AB coefficients for the q-step method
betaB = [[1 0 0 0]; [3 -1 0 0]/2; [23 -16 5 0]/12; [55 -59 37 -9]/24];
% Integration coefficients: betaM(q,:) are the AM coefficients for the q-step method
betaM = [[1 1 0 0 0]/2; [5 8 -1 0 0]/12; [9 19 -5 1 0]/24; [251 646 -264 106 -19]/720];
for i = 1:9 % 9 different step sizes
N = 2^i; h = 1/N; % Number of steps and step size
Init = [y0;0]; % Vector is [y; t]
% Compute derivative at past points
tL = 0; Derivative0 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
tL = -lambda*h; Derivative1 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
tL = -2*lambda*h; Derivative2 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
tL = -3*lambda*h; Derivative3 = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*1/24)))))/lambda^5;
for order = 1:4
yd = [Derivative0; Derivative1; Derivative2; Derivative3]; %Previous derivative value array
y = Init; yE = y; ydE = yd; yB = y; ydB = yd; %y is PEC, yE is PECE, and yB is AB method
for n = 1:N
t = n*h;
yB(1) = yB(1) + h*betaB(order,:)*ydB; yB(2) = t; %Adams-Bashforth method
dB = fun(lambda,A,yB); ydB = [dB(1); ydB];
p = y(1) + h*betaB(order,:)*yd; pE = yE(1) + h*betaB(order,:)*ydE; %Predictors
d = fun(lambda,A,[p; t]); dE = fun(lambda,A,[pE; t]);
%Now we have all five derivatives
yd = [d(1); yd]; ydE = [dE(1); ydE];
%Adams Moulton formula
y(1) = y(1) + h*betaM(order,:)*yd; yE(1) = yE(1) + h*betaM(order,:)*ydE;
dE = fun(lambda,A,[yE(1); t]); ydE(1) = dE(1); %Final evaluation for PECE
y(2) = t; yE(2) = t;
yd = yd(1:4); ydE = ydE(1:4); ydB = ydB(1:4); %Save the last four derivatives
end
%Compute errors
tL = y(2)*lambda; % t*lambda at end of interval
Solution = (const*exp(tL)-A*(1+tL*(1+tL*(1/2+tL*(1/6+tL*(1/24+tL/120))))))/lambda^6;
ErrPEC(order,i) = abs(Solution - y(1)); ErrPECE(order,i) = abs(Solution - yE(1));
ErrB(order,i) = abs(Solution - yB(1));
end
nsteps(i) = N;
end
figure(6)
hold off
loglog(nsteps,ErrB(1,:),':k',nsteps,ErrB(2,:),':r',nsteps,ErrB(3,:),':b',nsteps,ErrB(4,:),':g','LineWidth',2)
hold on
loglog(nsteps,ErrPEC(1,:),'--k',nsteps,ErrPEC(2,:),'--r',nsteps,ErrPEC(3,:),'--b',nsteps,ErrPEC(4,:),'--g','LineWidth',2)
loglog(2*nsteps,ErrPECE(1,:),'-k',2*nsteps,ErrPECE(2,:),'-r',2*nsteps,ErrPECE(3,:),'-b',2*nsteps,ErrPECE(4,:),'-g','LineWidth',2)
legend('AB-1','AB-2','AB-3','AB-4','PEC-1','PEC-2','PEC-3','PEC-4','PECE-1','PECE-2','PECE-3','PECE-4',3)
xlabel('Number Evaluations'); ylabel('Error');
title('Adams Bashforth and Adams Moulton in PEC and PECE implementations')
print -dpsc Ex4

There are many variants of multi-step methods, although generally only two are widely used –
Adams methods and the Backward Differentiation methods we will discuss later. All methods are based
on the idea of approximating yn+1 by a formula using previous values of y and y'. The most general
form of this is the q-step method

$$y_{n+1} = \sum_{i=1}^{q} \alpha_i\, y_{n+1-i} + h \sum_{i=0}^{q} \beta_i\, y'_{n+1-i}$$

If β0 is zero, this is an explicit method, otherwise it is an implicit method.

How can we choose the coefficients αi and βi? Since higher-order methods look to be better – at least
for high accuracy – we usually choose them to get high order. A "guaranteed" way to find them is to
expand in Taylor's series. If we ask that the first p+1 terms in the Taylor's series of the right-hand side
match those on the left, we will have a p-th order method (the error will contain an hp+1 term). Clearly we
get linear equations in the coefficients, and we have 2q + 1 coefficients, so it looks as if we could get
order 2q. For example, the q = 2 case is called Milne's method. It is:

$$y_{n+1} = y_{n-1} + \frac{h}{3}\left(y'_{n+1} + 4 y'_n + y'_{n-1}\right)$$

It is 4th order accurate (and implicit). We will see that these higher-order methods are no good for q
> 2 and may not be very good for q = 2. We can best illustrate the problem with an even simpler
explicit method, the mid-point rule. It is

$$y_{n+1} = y_{n-1} + 2 h y'_n$$

which is second-order accurate and explicit. Let us use it to solve the problem y' = λy, y(0) = 1.
The graph on the next slide shows the relative error for λ = 1 and λ = -1. (Relative error is the error
relative to the solution.)

%Ex5 Mid-point method for y' = y and y' = -y.
N = 2^4; h = 2/N; %Number of steps and step size
%First, y' = y
yv = [1; exp(-h)]; %Vector of past y's.
YP = [yv(1)]; %Array for results
for i = 1:N
y = yv(2) + h*2*yv(1);
yv = [y; yv(1)];
YP = [YP y];
end
%Now, y' = -y
yv = [1; exp(h)]; %Vector of past y's.
YM = [yv(1)]; %Array for results
for i = 1:N
y = yv(2) - h*2*yv(1);
yv = [y; yv(1)];
YM = [YM y];
end
trange = (0:N)*h;
plot(trange,(YP-exp(trange))./exp(trange),'-b',...
trange,(YM-exp(-trange))./exp(-trange),'--r','LineWidth',2)
legend('y'' = y','y'' = -y')
xlabel('time'); ylabel('Error')
print -dpsc Ex5

Notice how the error for y' = -y is increasing more rapidly and is oscillating. What is
going on? To find this out, we need to understand an important issue about multi-step
methods.

When we solve a multi-step method, we are solving a difference equation.

Thus, in the previous example, with y' = λy we have computed the sequence

$$y_{n+1} = y_{n-1} + 2 h \lambda\, y_n$$

This is a 2-step difference equation. The general solution of the q-step difference equation

$$y_{n+1} = \sum_{i=1}^{q} \alpha_i\, y_{n+1-i} \quad\text{is}\quad y_n = \sum_{i=1}^{q} c_i\, \theta_i^{\,n}, \quad\text{where } \{\theta_i\} \text{ are the roots of } \theta^q = \sum_{i=1}^{q} \alpha_i\, \theta^{q-i}$$

In this example, the polynomial is θ² = 2μθ + 1, where μ = hλ. Hence the roots are

$$\theta = \mu \pm \sqrt{\mu^2 + 1}, \quad\text{or}\quad \theta_1 = 1 + \mu + \frac{\mu^2}{2} - \cdots \quad\text{and}\quad \theta_2 = -1 + \mu - \frac{\mu^2}{2} + \cdots$$

Thus we see that θ1 = exp(μ) + O(h³) and θ2 = -exp(-μ) + O(h²). Hence the solution has the form

$$y_n = c_1\left[\exp(\lambda t_n) + O(h^2)\right] + c_2\,(-1)^n \left[\exp(-\lambda t_n) + O(h)\right]$$

The first term is a second order accurate approximation to the solution; the second is an
extraneous term due to the second root of the difference equation. If λ is positive, the second term
decays and is not a problem, but if λ is negative, the second term grows and oscillates in sign,
as we saw in this example.
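The two roots are easy to examine numerically; the short sketch below (with an arbitrarily chosen h) compares them with exp(μ) and -exp(-μ) for λ = ±1, matching the behaviour seen in Ex5.

% Sketch: roots of the mid-point rule's characteristic polynomial
%   theta^2 - 2*mu*theta - 1 = 0,   mu = h*lambda
% compared with exp(mu) and -exp(-mu).  The value of h is illustrative.
h = 0.125;
for lambda = [1 -1]
    mu = h*lambda;
    theta = sort(roots([1, -2*mu, -1]));   % the two roots of the difference equation
    fprintf('lambda = %2d:  roots % .5f % .5f   exp(mu) = %.5f   -exp(-mu) = %.5f\n', ...
            lambda, theta(1), theta(2), exp(mu), -exp(-mu));
end
% For lambda = -1 the extraneous root has magnitude about exp(h) > 1, so its
% (-1)^n component grows while the true solution decays.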
The phenomenon we saw in the last example is a lack of stability. Stability is an issue that is important
in virtually all numerical computation schemes. Whenever you have a finite-dimensional process (as
any process on a computer must be) that starts with q numerical values, say yn and produces new
values, say yn+1, and this process is repeated, we have the potential for instability. Instability means
that a small perturbation in the initial value or any intermediate value, grows in size as the computation
proceeds. Thus, if the process is
yn+1 = F(yn)

and we make a small perturbation to y0, say to y0 + ε0, then y1 is changed to

$$y_1 + \varepsilon_1 = F(y_0 + \varepsilon_0) \approx F(y_0) + \frac{\partial F}{\partial y}\,\varepsilon_0$$

Thus

$$\varepsilon_1 \approx \frac{\partial F}{\partial y}\,\varepsilon_0 = J_0\,\varepsilon_0$$

where J0 is the Jacobian of F with respect to y. If the Jacobian J were constant,
then we would have εn = Jⁿε0. J is a q by q matrix so it has q eigenvalues. If any of these are
larger than one in magnitude, Jⁿ increases unboundedly with n and we have potential instability problems.
In this example we start with two values, yn and yn-1, and compute "two" new ones, yn+1 and yn (of
course, we don't actually compute yn since we already have it, but the computational process passes
these two values on to the next step). Hence there are two eigenvalues, and we saw for the last
example that one was about exp(hλ) and the other was about -exp(-hλ). For the equation y' = λy one
root has to be close to exp(hλ) since that is how much the solution changes by in one step of length h.
No other roots should be larger than this, or errors will grow faster than the solution and we will have
instability. Three slides ago we said that although we could get 2q-th order q-step methods, they were
not generally useful. That is because of an important theorem proved in 1957 often called the (first)
Dahlquist barrier (after the person who first proved it). It states that a q-step multi-step method of
order greater than q+2 will always have one root greater than 1 for the problem y’ = 0 and that this is
also true for order greater than q+1 when q is odd. These methods are unstable. With even q and
order q+2 (as in Milne’s method) there is more than one root with magnitude one. These methods
are weakly stable (as long as there are not repeated roots) and will not work for some problems.
Generally speaking we would like all of the roots (eigenvalues) for the “null” problem y’ = 0 (except for
the one that has to be 1 for the solution to be correct) to be less than one. Then we have a strongly
stable method. The Adams methods are strongly stable.

STABILITY is an important concept in computation. In ODEs it is not a problem for one-step methods
like RK methods because only one value is passed from step to step. (Actually if we have a system of
s ODEs, s values are passed from step to step, but there will be s eigenvalues equal to 1 for the null
problem.) However, it is very important for multi-step methods, and will be important in other problems
later in these lectures.

Another concept that is important in the solution of these types of problems is CONVERGENCE. In
almost all cases we cannot solve a differential equation exactly on a computer. (We can only really do
that if we can find an explicit integral in a form that can be evaluated exactly, and those are generally not
very important problems.) Hence, we approximate the solution of a differential equation by computing
approximate values for it at a series of points, ti, a distance h apart. We saw earlier that as we made h
smaller, we got more and more accurate values. A method is CONVERGENT if its error goes to zero as
h goes to zero. (In practice round off errors from the finite precision of a computer make it impossible to
get errors smaller than some amount – in fact, in many computations as one keeps increasing the
number of steps, the error will ultimately start to increase because of the accumulation of round-off
errors.) What convergence means practically is that over some useful range we can reduce h and get
more accuracy. The order of convergence is the power of h in the global error term.

A final fundamental concept is CONSISTENCY. A method is consistent if the local error decreases faster
than h. Thus the first order methods we have talked about, which have local errors proportional to h²,
are consistent.

Then we have the important theorem: A method that is STABLE and CONSISTENT is CONVERGENT.
This gives us two easy-to-verify criteria that enable us to guarantee we have convergence.
CONVERGENCE: we have already given an outline of the proof of convergence. The
idea is that over an interval of length L we will take L/h steps of length h. If each step
introduces a local error that can be bounded by h^(1+θ) for some θ > 0, and if the
amplification of any errors introduced is bounded by K, then the errors at the end cannot
sum up to anything greater than

$$K\,(L/h)\,h^{1+\theta} = K L\, h^{\theta},$$

which goes to zero as h goes to zero. The bound on local error growth is a result of
stability, and the bound on the size of the local errors comes from consistency. Of course,
the details of the proof are tedious (and not of interest here).

We use convergent methods, not because we plan to drive the step size to zero (we don't
have enough computer time for that) but because we can, if we need to, reduce the step
size to get more accuracy should our results not be accurate enough. It also provides us
with a simple way to estimate the error in a computation – repeat it using a smaller step
size and see how the result changes. This is not usually the least expensive way to
estimate errors, but it can be used as a last resort when other methods can't be applied.
We will discuss it a little later.

Virtually all codes for ODEs automatically adjust the step size to control the error and some also
adjust the order of the method. Typically a code has input parameters that specify both a relative error
and an absolute error. The former is the size of the error relative to the size of the solution, while the
latter is just the size of the error. The reason for having both is that if the solution is changing in size
by a large amount, we may want to just specify the number of digits of accuracy we would like, and
this is done with relative error – e.g., a relative error of 10⁻⁵ would indicate that one would like 5 digits
of accuracy. However, if the solution passes through zero, a relative accuracy will not work, since it
would require zero error at that point, which is not usually possible. Hence most codes also allow for
an absolute accuracy criterion to be specified. If both are specified the code tries to satisfy the less
demanding of the two.

Unless one has a very good idea of the size of the solution, both should be specified. For example, if
we just specified an absolute error of, say, 10⁻⁵, and the solution grew to 10¹⁵, a computer with less
than a 20-digit representation of numbers would not be able to meet the error request because of
round-off errors. Most computers have about 16 digits of accuracy in double precision (7 in single), so
if the solution were much larger than 10¹⁰ we could not get an absolute error of 10⁻⁵. (For example, in
a number like 100,000,000,000.00000 the round-off level of IEEE double precision floating point sits at
approximately the 10⁻⁵ position.)

If, on the other hand, we only used relative error, we would get into trouble if the solution got too
small.
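For the MATLAB solvers used in these examples, both criteria are passed through odeset; the numbers below are purely illustrative.

% Sketch: request ~5 digits of relative accuracy with an absolute floor of
% 1e-8 for when the solution passes near zero (both values illustrative).
options = odeset('RelTol', 1e-5, 'AbsTol', 1e-8);
[t, y] = ode45(@(t,y) -y + sin(t), [0 20], 1, options);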

Earlier we defined local error (the error made in a single step) and global error (the accumulated error
after many steps). Obviously the user would like to control the global error (and many probably believe
that they are doing this) but this is almost always not possible. Codes usually estimate the local error
and try to control it in some way. To see the issues involved, we will look at some of the error estimation
methods.

One of the oldest is called Richardson extrapolation. It applies to any method for which we have some
idea of the rate of convergence. In the context of ODEs we may know the order of the error as a
function of h. For example, we know that the global error of the Forward Euler method is proportional to
h. Suppose we do two integrations using Forward Euler, one with step size h getting an answer S1 and
one with step size h/2 getting an answer S2. Now we have

S1 = S + Kh + O(h²) and S2 = S + Kh/2 + O(h²)

where S is the true integral. If we view these as two equations in Kh and S we can solve for both,
finding that

Kh = 2(S1 – S2) + O(h²) and S = 2S2 – S1 + O(h²)

Thus we can either estimate the error in the computed answers (Kh in S1, Kh/2 in S2) or get a more
accurate estimate of the answer (S) – which is what the Richardson extrapolation formula was originally used for.

This method can be used to estimate the global error in the solution of ODEs if they are solved with a
fixed step size. However, it is computationally expensive (two integrations) and can't be done until the
integration has been completed. If we then decided that the error was too big, we would have to go
back to the beginning and repeat the process with a smaller step size. We would like to choose each
step so that the error is controlled – and we would like to do this in the most efficient manner.
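As a small illustration, here is a sketch of this global Richardson estimate using two fixed-step Forward Euler integrations of y' = -y over [0,1]; the equation and step size are arbitrary choices, not taken from the slides.

% Sketch: Richardson extrapolation of Forward Euler's global error.
% Illustrative problem: y' = -y, y(0) = 1, integrated to T = 1.
f = @(y) -y;   T = 1;   N = 20;   h = T/N;
y = 1;  for n = 1:N,    y = y + h*f(y);      end,  S1 = y;   % step size h
y = 1;  for n = 1:2*N,  y = y + (h/2)*f(y);  end,  S2 = y;   % step size h/2
err_est  = 2*(S1 - S2);      % estimate of the O(h) global error Kh in S1
S_extrap = 2*S2 - S1;        % Richardson-extrapolated (more accurate) answer
fprintf('estimated error %.3e   true error %.3e   extrapolated error %.3e\n', ...
        err_est, S1 - exp(-T), S_extrap - exp(-T))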

We can use Richardson extrapolation on a single step to estimate the local error.
Suppose we have a p-th order, one-step method, written as yn+1 = M(h,yn). Then, if
we apply it once with step size h/2 we have:

$$\text{One step:}\qquad u_1 = M(h, y(t_n)) = y(t_{n+1}) + K h^{p+1} + O(h^{p+2})$$

$$\text{Two steps:}\qquad v_1 = M\!\left(\tfrac{h}{2}, y(t_n)\right) = y(t_{n+\frac12}) + K \left(\tfrac{h}{2}\right)^{p+1} + O(h^{p+2})$$

$$v_2 = M\!\left(\tfrac{h}{2}, v_1\right) = M\!\left(\tfrac{h}{2}, y(t_{n+\frac12})\right) + K \left(\tfrac{h}{2}\right)^{p+1} + O(h^{p+2}) = y(t_{n+1}) + 2 K \left(\tfrac{h}{2}\right)^{p+1} + O(h^{p+2})$$

$$\text{hence}\qquad u_1 - v_2 = K h^{p+1}\left(1 - 2^{-p}\right) + O(h^{p+2})$$

Thus this allows us to estimate the local error, and if it is too large we could repeat the step with a
smaller h to get an error of the size that would be acceptable. Of course, since we have an estimate of
the error, we might naturally want to subtract it from the answer to get greater accuracy! The answer is
then yn+1 = (v2 – 2⁻ᵖu1)/(1 – 2⁻ᵖ).

This is a problem whenever we do error estimation. If we have a good estimate of the error
then it seems obvious that we should take the better answer by removing the error. But then
we have no error estimate!

Usually we take the better answer and assume that the neglected higher-order error is smaller than the
term we just estimated.
EXAMPLE Richardson extrapolation of Forward Euler
$$u_1 = y_n + h y'_n \qquad \text{(one step of size } h\text{)}$$

$$v_1 = y_n + \frac{h}{2} y'_n; \qquad v_2 = v_1 + \frac{h}{2} f(v_1) \qquad \text{(two steps of size } h/2\text{)}$$

$$y_{n+1} = \frac{v_2 - 2^{-1} u_1}{1 - 2^{-1}} = 2 v_2 - u_1 = y_n + h f\!\left(y_n + \frac{h}{2} y'_n\right)$$
The result is the second-order RK method we discussed earlier. Generally, when we use Richardson
extrapolation to improve the accuracy we just get another integration formula.
We could have used Richardson extrapolation to estimate the error – using either the improved value or
one of the original Euler integrations. What we would have then is a Runge-Kutta formula with an
embedded error estimate. Such methods are commonly used in RK codes where they are often called
Runge-Kutta pairs. In effect, there are two different Runge-Kutta formulas – usually sharing many function
evaluations – whose difference provides an error estimate.

When we have a code with an error estimate, that estimate is used to control the step size. If the code
uses a p-th order method and is asked to bound the local error by some quantity ε, it will estimate the error
in each step it takes, getting, say, the error estimate E. If E ≤ ε the step will be accepted. If E > ε it will
repeat the calculation with a smaller step. Since the local error is proportional to hp+1 it will use a new
step size of

hnew = γhold(ε/E)^(1/(p+1))

where γ < 1 is a "safety factor" chosen so that the chance of the repeated step having too large an error
is small. Whether the step is accepted or rejected, the step size for the next step is adjusted using this
formula so that its error will probably be acceptable – the objective being to try to maximize the step size
while minimizing the number of rejected steps so as to minimize total computation time.
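The accept/reject logic just described might be sketched as follows; the safety factor, growth limit, and tolerance handling are typical but arbitrary choices rather than the rule used by any particular code. A code would call something like this after every step, repeating the step with hnew whenever accept is false.

% Sketch of step acceptance and step-size adjustment for a p-th order method
% with local error estimate E and local error tolerance tol.
function [accept, hnew] = step_control(E, h, p, tol)
    gamma  = 0.9;                             % "safety factor"
    accept = (E <= tol);                      % accept the step if the estimate is small enough
    hnew   = gamma * h * (tol/E)^(1/(p+1));   % local error behaves like C*h^(p+1)
    hnew   = min(hnew, 5*h);                  % limit how fast the step may grow
end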
Multi-step method error estimation.

When a multi-step method is used, the difference between the predictor and the corrector provides
an estimate of the error. For example, if we use a (p+1)-step Adams-Bashforth predictor (order p+1)
and a p-step Adams-Moulton corrector (also order p+1), they both have local errors proportional to
h^(p+2)·y^(p+2), where y^(p+2) is the (p+2)-nd derivative of y, so the predictor-corrector difference gives a
direct estimate of h^(p+2)·y^(p+2).
Most automatic codes that use multi-step methods use this mechanism to estimate the local error
and either accept the step or reject it and repeat it. The step control algorithm is similar to the one
we just discussed for one step methods.
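Here is a minimal sketch of this estimate for the second-order pair (2-step Adams-Bashforth predictor with a 1-step Adams-Moulton, i.e. trapezoidal, corrector) on the illustrative problem y' = λy; the error constants 5/12 and -1/12 are the standard ones for this pair, and the estimate is only approximate because a single corrector pass is made.

% Sketch (Milne's device): estimate the corrector's local error from the
% predictor-corrector difference for the AB2 / trapezoidal (AM1) pair.
lambda = -2;  f = @(y) lambda*y;  h = 0.05;
ynm1 = 1;  yn = exp(lambda*h);        % exact past values for y' = lambda*y
p = yn + h*(3*f(yn) - f(ynm1))/2;     % AB2 predictor      (error ~  (5/12) h^3 y''')
c = yn + h*(f(p) + f(yn))/2;          % trapezoid corrector (error ~ -(1/12) h^3 y''')
err_est  = -(c - p)/6;                % since c - p ~ (1/2) h^3 y'''
err_true = exp(lambda*2*h) - c;
fprintf('estimated %.3e   true %.3e\n', err_est, err_true)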
One other feature of automatic multi-step codes is that the order is also selected automatically.
Since the local error is approximately equal to a product of a power of the step size, an error
coefficient determined by the particular formula used, and a derivative, the code can estimate the
derivatives at several different orders using backward differences of the computed solution and thus
estimate which order would allow the largest step size. In a typical code, a step is taken. If the
error estimate is too large, the step is repeated with a smaller step size and possibly a lower order.
If the step is successful, consideration is given to increasing the step size for the next step.
However, because a multi-step method has additional stability issues because of the additional
values carried along, the step or order is not usually changed for a number of steps following any
change.
Multi-step methods have a starting problem because they need past values. Most codes handle
this by starting at 1st order where they are one-step methods and need no additional information.
Then they use their automatic order control to slowly raise the order to that most appropriate for the
problem. However, this starting phase is less efficient than when the method is working at its
optimal order. Generally, multi-step methods are more efficient than Runge-Kutta methods when
high orders are useful – which usually happens when one wants high accuracy. The inefficiency in
their starting phase is usually overcome by the efficiency in the high order phase, but we will see
cases when this is not true.
Note that most of the error estimation methods we have discussed estimate the local error.
The user would usually like to control the global error, but that is not, in general, possible.
(In fact we can only get approximate estimates of the local error, so we may not even control
that correctly.)
The reason we cannot control global error inexpensively in a code is that while we are
integrating, we have no idea how the solution may change in the future. This is illustrated
below, where we see that a small local error at one point may lead to a large global error
component later. If you need to estimate the global error, you have to integrate more than
once with different step sizes and look at the difference between the two computed
solutions. With some codes, you can run twice (or more) with different error requests.

[Figure: two nearby solution curves in the (t, y) plane; a small local error made at one time pushes the computation onto a neighboring solution, and its contribution to the global error grows later in the integration.]
The following example illustrates the use of an automatic step control code. It integrates the Van der Pol
equation with parameter μ = 1 with 9 different tolerances from 10⁻² to 10⁻¹⁰. Since we don't know the true
answer, we will assume that the value obtained with the tightest tolerance is the correct answer, and
subtract it from the other 8 answers to estimate the error in each.

The MATLAB code ode113 is used. It is a variable order, variable step Adams code. An option allows us to
print the number of function evaluations and other integration statistics, and we have gathered these and
plotted the absolute errors in the answers versus the number of function evaluations and versus the
requested tolerance. Note that the error does not appear to change very much between the two largest
tolerances (although it actually has the opposite sign, so changes a lot). At very large tolerances, codes
are often not very responsive to the tolerance specification. In the middle of the range the code has the
good property that the error decreases as the tolerance is decreased (this is called tolerance
proportionality), and that the number of function evaluations decreases as we ask for less accuracy. (If
we assume that the step size is roughly proportional to the inverse of the number of function evaluations,
we can estimate from this graph that the average order of the method used is about 8.)

Code for last example
% Ex5A Example of using an automatic code with different error estimates.
% Run on van der Pol equation with mu = 1;
Last = [];
for i = 1:9
error = 0.1*10^(-i);
options = odeset('RelTol',error,'AbsTol',error,'Stats','on');
fprintf('\n')
[T,Y] = ode113(@fun5a,[0 20],[0; 1],options);
Last = [Last; Y(end,1)];
end
Lastd = Last(1:8) - Last(9); %Assume that last result is "true value" at end
for i = 1:8
fprintf('%25.15e\n',Lastd(i));
end
% Note: the following numbers of function evaluations had to be taken from the printed output
% when the program was executed (produced by the 'Stats','on' spec in odeset for options).
Fnevals = [208; 296; 398; 518; 640; 779; 945; 1149];
figure(5)
loglog(Fnevals,abs(Lastd),'-b')
ylabel('Error')
xlabel('Function Evaluations')
axis([200 2000 1E-9 1E-2]);
print -dpsc Ex5a_1
figure(6)
loglog(0.1*10.^(-1:-1:-8),abs(Lastd),'-g')
ylabel('Error')
xlabel('Tolerance')
print -dpsc Ex5a_2
function derivative = fun5a(t,y)
% Van der Pol equation
mu = 1;
derivative(1,1) = y(2);
derivative(2,1) = -y(1) + mu*(1-y(1)^2)*y(2);

Discontinuities
Earlier we said that we would assume that f in y' = f(y) had as many derivatives as we needed.
We have seen that the local error is proportional to a higher derivative. If the function has
discontinuities in a derivative, that analysis breaks down. The graph below is the solution of
the ODE

y' = c(t)·A·t⁴, y(0) = 0, with c(t) = 1 for 0 ≤ t < 1/3 and c(t) = 0.5 for 1/3 ≤ t ≤ 1,

using the 1st through 4th order RK methods from example 2. Note that the slopes of the log-log plots of
the errors for the 2nd through 4th orders are no longer 2 through 4 as before. The slope of the
fourth order case is one, not four, meaning that the order of the method has dropped to one
from the four we would normally have gotten from the method. The errors of the 2nd and 3rd
order methods are somewhat erratic but the average slope is close to one.

The problem is that the discontinuity caused an error of order h in the step that straddles the
discontinuity at 1/3 (which was deliberately chosen so that it is never at a step boundary in this
example). The stranger behavior of the 2nd and 3rd order methods has to do with the position within
the step at which the discontinuity occurs. It can be explained, but is not worth our time here.
The important fact to learn from this example is that discontinuities can cause a breakdown in the
error behavior and will give problems to an automatic method. (Most automatic codes will still work,
but because they will tend to estimate very large errors near the discontinuity they may take an
excessive amount of computer time.)
On this slide we show results from the same code
but with the discontinuity at t = 0.5, which is
always on a step boundary (since we have
divided the interval [0,1] into a power of 2
equal steps). The discontinuity is on the left of
0.5 (meaning that if we evaluate at exactly 0.5
we will get the value that holds to the right of
0.5). Note that the 2nd and 3rd order methods
have slopes 2 and 3 but the 4th order method
still has slope one. In this example, the
particular RK methods used do not use the
value at the end point of an interval in orders 1
through 3, but do in the case of order 4.
This suggests a solution to the problem. When
there is a discontinuity, a step boundary must
be placed at the discontinuity, and when the
step to the left of the discontinuity is executed,
all evaluations must use the values before the
discontinuity. Similarly, all evaluations for the
step to the right must use values to the right.
Note that it is not good enough just to move the location of the discontinuity from the left to the right
of its location. The graph on this slide is the same example with the discontinuity to the right of 0.5.
In typical problems with discontinuities, they are caused by actions such as the "throwing of
switches" that introduce a change to the model. When such problems are simulated, it is important
to integrate up to the point of the discontinuity, then "throw the switch" (change the model) and then
continue integrating. It is easy to integrate up to the time of the discontinuity if it happens at a known
time. However, in many problems, the "switch is thrown" when a certain variable exceeds some
value, and it is unknown when this will happen. If the model permits integration beyond the point at
which the "switch is thrown" without throwing it, then it is often more efficient to complete the
integration step that passes the discontinuity, interpolate to find the value at the discontinuity, and then
restart. So, if we change the model when a variable exceeds some value, we simply check that
variable after each step, and if it has exceeded the value we use inverse interpolation to find the
time at which this happened, and then interpolate all other values to that time point before restarting.
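In the MATLAB solvers this check-and-locate logic is provided by an event function. The sketch below stops the integration when the solution reaches a made-up threshold, switches the model, and restarts from the located event time, mirroring the procedure described above; the two right-hand sides and the threshold are illustrative only.

function sketch_switch_restart
    % Sketch: integrate up to a state-dependent discontinuity located by an
    % event function, change the model, and restart (models are illustrative).
    f_before = @(t,y) 1 - y;                  % model before the "switch is thrown"
    f_after  = @(t,y) -0.5*y;                 % model after the switch
    opts = odeset('Events', @crossing, 'RelTol', 1e-8);
    [t1, y1, te, ye] = ode45(f_before, [0 10], 0, opts);   % stops at the event time te
    [t2, y2] = ode45(f_after, [te 10], ye);                % restart with the new model
    plot([t1; t2], [y1; y2]); xlabel('t'); ylabel('y');
end

function [value, isterminal, direction] = crossing(~, y)
    % Event fires when y reaches the hypothetical threshold 0.5 from below.
    value      = y(1) - 0.5;
    isterminal = 1;     % stop the integration there
    direction  = 1;     % trigger only on increasing crossings
end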

One-step methods like Runge-Kutta have a major advantage for problems with many
discontinuities because no information is extrapolated from step to step. Multi-step
methods will suffer a similar breakdown in order, so it will be necessary to integrate up
to the discontinuity, and then restart to the right of the discontinuity. Since this is a less
efficient part of the multi-step method, they may not perform as well.

%Ex6.m RK methods on y' = c(t)*A*t^4
% where c(t) is a piecewise constant function - i.e. has a discontinuity at d
% Discontinuity is to left of d if s == 0, otherwise to right of d.
A = 1;
y0 = 0;
for version = 1:3
    d = min(version/3,1/2);      % Location of discontinuity
    s = max(0,version-2);
    Solution = A*(1+d^5)/10;     % Solution at end point t = 1
    for i = 1:6                  % 6 different step sizes
        N = 2^i; h = 1/N;        % Number of steps and step size
        Init = [y0;0];           % Vector is [y; t]
        y1 = Init; y2 = Init; y3 = Init; y4 = Init;   % Initial values for four orders
        for n = 1:N
            %Do each step
            %First order method
            k0 = h*fun6(s,d,A,y1); y1 = y1 + k0;
            %Second order method
            k0 = h*fun6(s,d,A,y2); k1 = h*fun6(s,d,A,y2 + k0/2); y2 = y2 + k1;
            %Third order method
            k0 = h*fun6(s,d,A,y3); k1 = h*fun6(s,d,A,y3 + k0/3); k2 = h*fun6(s,d,A,y3+2*k1/3);
            y3 = y3 + (k0 + 3*k2)/4;
            %Fourth order method
            k0 = h*fun6(s,d,A,y4); k1 = h*fun6(s,d,A,y4 + k0/2); k2 = h*fun6(s,d,A,y4 + k1/2);
            k3 = h*fun6(s,d,A,y4 + k2); y4 = y4 + (k0 + 2*k1 + 2*k2 + k3)/6;
        end
        %Compute errors
        Err_y1(i) = abs(Solution - y1(1)); Err_y2(i) = abs(Solution - y2(1));
        Err_y3(i) = abs(Solution - y3(1)); Err_y4(i) = abs(Solution - y4(1));
        nsteps(i) = N;
    end
    figure(5+version)
    loglog(nsteps,Err_y1,'-k',nsteps,Err_y2,'--r',nsteps,Err_y3,':b',nsteps,Err_y4,'-.g','LineWidth',2)
    legend('Order 1','Order 2','Order 3','Order 4')
    xlabel('Number steps'); ylabel('Error');
    if s == 0
        title(['discontinuity at left of ' num2str(d)])
    else
        title(['discontinuity at right of ' num2str(d)])
    end
    eval(['print -dpsc Ex6_' num2str(version)])
end

function derivative = fun6(s,d,A,y)
%Example of discontinuous function
if s == 0
    if y(2) < d
        derivative = [A*y(2)^4; 1];
    else
        derivative = [0.5*A*y(2)^4; 1];
    end
else
    if y(2) <= d
        derivative = [A*y(2)^4; 1];
    else
        derivative = [0.5*A*y(2)^4; 1];
    end
end

