Practical Assignment No 1
1. Write a Program in MATLAB to solve a given Algebraic or Transcendental Equation using any Bracketing Method.
Theorem
An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between x_l and x_u if f(x_l) f(x_u) < 0 (see Figure 1).
Note that if f(x_l) f(x_u) > 0, there may or may not be any root between x_l and x_u (Figures 2 and 3). If f(x_l) f(x_u) < 0, then there may
be more than one root between x_l and x_u (Figure 4). So the theorem only guarantees one root between x_l and x_u.
Bisection method
Since the method is based on finding the root between two points, the method falls under the
category of bracketing methods.
Since the root is bracketed between two points, x_l and x_u, one can find the mid-point, x_m, between x_l and x_u as
x_m = (x_l + x_u)/2
This gives us two new intervals
1. x_l and x_m, and
2. x_m and x_u.
Figure 1 At least one root exists between the two points if the function is real, continuous,
and changes sign.
Figure 2 If the function does not change sign between the two points, roots of the equation may
still exist between the two points.
Figure 3 If the function does not change sign between two points, there may not be any
roots for the equation between the two points.
Figure 4 If the function changes sign between the two points, more than one root for the equation may
exist between the two points.
Is the root now between x_l and x_m or between x_m and x_u? Well, one can find the sign of f(x_l) f(x_m), and if f(x_l) f(x_m) < 0 then the new bracket
is between x_l and x_m, otherwise, it is between x_m and x_u. So, you can see that you are literally halving the interval.
As one repeats this process, the width of the interval becomes smaller and smaller, and you can zero in to
the root of the equation f(x) = 0. The algorithm for the bisection method is given as follows.
1. Choose x_l and x_u as two guesses for the root such that f(x_l) f(x_u) < 0, or in other words, f(x) changes sign between x_l and x_u.
2. Estimate the root, x_m, of the equation f(x) = 0 as the mid-point between x_l and x_u:
x_m = (x_l + x_u)/2
3. Now check the following:
If f(x_l) f(x_m) < 0, then the root lies between x_l and x_m; so set x_u = x_m.
If f(x_l) f(x_m) > 0, then the root lies between x_m and x_u; so set x_l = x_m.
If f(x_l) f(x_m) = 0, then the root is x_m. Stop the algorithm.
4. Find the new estimate of the root, x_m = (x_l + x_u)/2, and the absolute relative approximate error
|e_a| = |(x_m^new - x_m^old) / x_m^new| x 100
where
x_m^new = estimated root from present iteration
x_m^old = estimated root from previous iteration
5. Compare the absolute relative approximate error |e_a| with the pre-specified relative error tolerance e_s. If |e_a| > e_s,
then go to Step 3, else stop the algorithm. Note one should also check whether the number of
iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the
algorithm and notify the user about it.
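The five steps above can be sketched as a short program. The assignment's deliverable is a MATLAB program; the following is a minimal Python sketch of the same algorithm for illustration (the function name `bisection`, the percentage-based stopping test and the iteration cap are illustrative choices, not part of the original text):

```python
def bisection(f, xl, xu, tol=1e-6, max_iter=100):
    """Bisection method: f must change sign on [xl, xu].
    tol is the pre-specified relative error tolerance in per cent."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xm_old = xl
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0            # step 2/4: mid-point of the bracket
        if f(xl) * f(xm) < 0:           # step 3: root lies in [xl, xm]
            xu = xm
        else:                           # root lies in [xm, xu]
            xl = xm
        # step 4/5: absolute relative approximate error (per cent)
        if xm != 0 and abs((xm - xm_old) / xm) * 100 < tol:
            return xm
        xm_old = xm
    return xm
```

Each iteration halves the bracket, so roughly 3.3 iterations are needed per decimal digit of accuracy.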
MATLAB Program
Introduction
The bisection method was described as one of the simple bracketing methods of solving a nonlinear
equation of the general form
f(x) = 0 (1)
The above nonlinear equation can be stated as finding the value of x such that Equation (1) is satisfied.
In the bisection method, we identify proper values of x_L (lower bound value) and x_U (upper bound value) for the
current bracket, such that
f(x_L) f(x_U) < 0. (2)
The next predicted/improved root x_r can be computed as the midpoint between x_L and x_U as
x_r = (x_L + x_U)/2 (3)
The new upper and lower bounds are then established, and the procedure is repeated until the convergence
is achieved (such that the new lower and upper bounds are sufficiently close to each other).
However, in the example shown in Figure 1, the bisection method may not be efficient because it does not
take into consideration that one of the function values, say f(x_L), may be much closer to the zero of the function than the other, f(x_U). In other words, the
next predicted root would then be expected to lie closer to x_L than to the mid-point between x_L
and x_U. The false-position method takes advantage of this observation mathematically by drawing a secant
from the function value at x_L to the function value at x_U, and estimates the root as where it crosses the x-axis.
False-Position Method
Based on two similar triangles, shown in Figure 1, one gets
f(x_L) / (x_r - x_L) = f(x_U) / (x_r - x_U) (4)
From Equation (4), one obtains
(x_r - x_L) f(x_U) = (x_r - x_U) f(x_L)
The above equation can be solved to obtain the next predicted root x_r as
x_r = ( x_U f(x_L) - x_L f(x_U) ) / ( f(x_L) - f(x_U) ) (5)
The above equation, through simple algebraic manipulations, can also be expressed as
x_r = x_U - f(x_U) (x_L - x_U) / ( f(x_L) - f(x_U) ) (6)
or
x_r = x_L - f(x_L) (x_U - x_L) / ( f(x_U) - f(x_L) ) (7)
Observe the resemblance of Equations (6) and (7) to the secant method.
False-Position Algorithm
The steps to apply the false-position method to find the root of the equation f(x) = 0 are as follows.
1. Choose x_L and x_U as two guesses for the root such that f(x_L) f(x_U) < 0, or in other words, f(x) changes sign between x_L and x_U.
2. Estimate the root, x_r, of the equation f(x) = 0 as
x_r = ( x_U f(x_L) - x_L f(x_U) ) / ( f(x_L) - f(x_U) )
3. Now check the following:
If f(x_L) f(x_r) < 0, then the root lies between x_L and x_r; so set x_U = x_r.
If f(x_L) f(x_r) > 0, then the root lies between x_r and x_U; so set x_L = x_r.
If f(x_L) f(x_r) = 0, then the root is x_r. Stop the algorithm.
4. Find the new estimate of the root using the formula in Step 2 and the absolute relative approximate error
|e_a| = |(x_r^new - x_r^old) / x_r^new| x 100
where
x_r^new = estimated root from present iteration
x_r^old = estimated root from previous iteration
5. Compare the absolute relative approximate error |e_a| with the pre-specified relative error tolerance e_s. If |e_a| > e_s, then go
to step 3, else stop the algorithm. Note one should also check whether the number of iterations is more than
the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user
about it.
Note that the false-position and bisection algorithms are quite similar. The only difference is the formula
used to calculate the new estimate of the root as shown in steps #2 and #4!
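The false-position algorithm above differs from bisection only in the formula for the new root estimate. As with the bisection example, the MATLAB program is the assignment deliverable; the following is an illustrative Python sketch (function name and stopping test are my own choices):

```python
def false_position(f, xl, xu, tol=1e-6, max_iter=100):
    """False-position (regula falsi) method: f must change sign on [xl, xu]."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr_old = xl
    for _ in range(max_iter):
        # secant through (xl, f(xl)) and (xu, f(xu)) crossing the x-axis
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if f(xl) * f(xr) < 0:           # root lies in [xl, xr]
            xu = xr
        else:                           # root lies in [xr, xu]
            xl = xr
        # absolute relative approximate error (per cent)
        if xr != 0 and abs((xr - xr_old) / xr) * 100 < tol:
            return xr
        xr_old = xr
    return xr
```

For a straight-line integrand the secant is the function itself, so the first estimate is already the root.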
MATLAB Program
Bansilal Ramnath Agarwal Charitable Trusts
VISHWAKARMA INSTITUTE OF TECHNOLOGY, PUNE 411037
(An Autonomous Institute Affiliated to Savitribai Phule Pune University)
Practical Assignment No 2
2. Write a Program in MATLAB to solve given Algebraic or Transcendental Equation using any
Open Method.
Derivation
The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at
x = x_i, then if one draws the tangent to the curve at f(x_i), the point x_{i+1} where the tangent crosses the x-axis is an
improved estimate of the root (Figure 1).
Using the definition of the slope of a function, at x = x_i,
f'(x_i) = ( f(x_i) - 0 ) / ( x_i - x_{i+1} ),
which gives
x_{i+1} = x_i - f(x_i) / f'(x_i) (1)
Equation (1) is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0.
So starting with an initial guess, x_i, one can find the next guess, x_{i+1}, by using Equation (1). One can
repeat this process until one finds the root within a desirable tolerance.
Algorithm
The steps of the Newton-Raphson method to find the root of an equation f(x) = 0 are
1. Evaluate f'(x) symbolically.
2. Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as
x_{i+1} = x_i - f(x_i) / f'(x_i)
3. Find the absolute relative approximate error as
|e_a| = |(x_{i+1} - x_i) / x_{i+1}| x 100
4. Compare the absolute relative approximate error with the pre-specified relative error
tolerance, e_s. If |e_a| > e_s, then go to Step 2, else stop the algorithm. Also, check if the number
of iterations has exceeded the maximum number of iterations allowed. If so, one
needs to terminate the algorithm and notify the user.
Figure 1 Geometrical illustration of the Newton-Raphson method (the tangent at x_i crosses the x-axis at x_{i+1}).
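The steps above can be sketched compactly in code. The assignment asks for a MATLAB program; here is an illustrative Python sketch (the function name `newton_raphson` and the stopping test are assumptions of this sketch):

```python
def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i).
    tol is the relative error tolerance in per cent."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)    # Equation (1)
        # absolute relative approximate error (per cent)
        if x_new != 0 and abs((x_new - x) / x_new) * 100 < tol:
            return x_new
        x = x_new
    return x
```

Note that the derivative must be supplied explicitly, in line with Step 1 of the algorithm.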
Practical Assignment No 3
1. Write a Program in MATLAB to solve given Linear Simultaneous Equation using Gauss Elimination or
Gauss Jordan method.
Gauss Elimination
Consider a general set of n equations and n unknowns:
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n
Forward Elimination of Unknowns: In the first step, to eliminate x_1 from the second equation, the first equation (the pivot equation) is divided by a_11 (the pivot element) and multiplied by a_21. Now, this equation can be subtracted from the second equation to give
( a_22 - (a_21/a_11) a_12 ) x_2 + ... + ( a_2n - (a_21/a_11) a_1n ) x_n = b_2 - (a_21/a_11) b_1
or
a'_22 x_2 + ... + a'_2n x_n = b'_2
where
a'_22 = a_22 - (a_21/a_11) a_12, ..., a'_2n = a_2n - (a_21/a_11) a_1n
This procedure of eliminating x_1 is now repeated for the third equation to the nth equation to reduce the set of
equations as
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
a'_32 x_2 + a'_33 x_3 + ... + a'_3n x_n = b'_3
...
a'_n2 x_2 + a'_n3 x_3 + ... + a'_nn x_n = b'_n
This is the end of the first step of forward elimination. Now for the second step of forward elimination, we
start with the second equation as the pivot equation and a'_22 as the pivot element. So, to eliminate x_2 in the third
equation, one divides the second equation by a'_22 (the pivot element) and then multiplies it by a'_32. This is the same
as multiplying the second equation by a'_32/a'_22 and subtracting it from the third equation. This makes the
coefficient of x_2 zero in the third equation. The same procedure is now repeated for the fourth equation till
the nth equation to give
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
a''_33 x_3 + ... + a''_3n x_n = b''_3
...
a''_n3 x_3 + ... + a''_nn x_n = b''_n
The next steps of forward elimination are conducted by using the third equation as a pivot equation and so
on. That is, there will be a total of (n - 1) steps of forward elimination. At the end of (n - 1) steps of forward
elimination, we get a set of equations that look like
a_11 x_1 + a_12 x_2 + a_13 x_3 + ... + a_1n x_n = b_1
a'_22 x_2 + a'_23 x_3 + ... + a'_2n x_n = b'_2
a''_33 x_3 + ... + a''_3n x_n = b''_3
...
a_nn^(n-1) x_n = b_n^(n-1)
Back Substitution:
Now the equations are solved starting from the last equation as it has only one unknown:
x_n = b_n^(n-1) / a_nn^(n-1)
Then the second last equation, that is the (n-1)th equation, has two unknowns: x_n and x_{n-1}, but x_n is already known. This
reduces the (n-1)th equation also to one unknown. Back substitution hence can be represented for all equations
by the formula
x_i = ( b_i^(i-1) - sum over j = i+1 to n of a_ij^(i-1) x_j ) / a_ii^(i-1)
for i = n - 1, n - 2, ..., 1
and
x_n = b_n^(n-1) / a_nn^(n-1)
1. Start
2. Declare the variables and read the order of the matrix n.
3. Take the coefficients of the linear equation as:
Do for k=1 to n
Do for j=1 to n+1
Read a[k][j]
End for j
End for k
4. Do for k=1 to n-1
Do for i=k+1 to n
Do for j=k+1 to n+1
a[i][j] = a[i][j] - a[i][k]/a[k][k] * a[k][j]
End for j
End for i
End for k
5. Compute x[n] = a[n][n+1]/a[n][n]
6. Do for k=n-1 to 1
sum = 0
Do for j=k+1 to n
sum = sum + a[k][j] * x[j]
End for j
x[k] = 1/a[k][k] * (a[k][n+1] - sum)
End for k
7. Display the result x[k]
8. Stop
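The numbered pseudocode above translates almost line for line into a program. As a hedged illustration (the deliverable is a MATLAB program; the function name and 0-based indexing here are Python conventions), a sketch operating on an n x (n+1) augmented matrix:

```python
def gauss_eliminate(aug):
    """Naive Gauss elimination on an n x (n+1) augmented matrix
    (list of lists); returns the solution vector x."""
    n = len(aug)
    a = [row[:] for row in aug]            # work on a copy
    # forward elimination (step 4 of the pseudocode)
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i][k] / a[k][k]     # assumes non-zero pivots
            for j in range(k, n + 1):
                a[i][j] -= factor * a[k][j]
    # back substitution (steps 5 and 6)
    x = [0.0] * n
    x[n - 1] = a[n - 1][n] / a[n - 1][n - 1]
    for k in range(n - 2, -1, -1):
        s = sum(a[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (a[k][n] - s) / a[k][k]
    return x
```

This naive version assumes all pivots are non-zero; a production version would add partial pivoting.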
Flow Chart
Here is a basic layout of Gauss Elimination flowchart which includes input, forward
elimination, back substitution and output.
MATLAB Program
Practical Assignment No 4
2. Write a Program in MATLAB to solve given Linear Simultaneous Equation using Gauss Seidel method.
Gauss Seidel
Why do we need another method to solve a set of simultaneous linear equations?
In certain cases, such as when a system of equations is large, iterative methods of solving equations are
more advantageous. Elimination methods, such as Gaussian elimination, are prone to large round-off
errors for a large set of equations. Iterative methods, such as the Gauss-Seidel method, give the user
control of the round-off error. Also, if the physics of the problem are well known, initial guesses needed
in iterative methods can be made more judiciously leading to faster convergence.
What is the algorithm for the Gauss-Seidel method? Given a general set of n equations and n unknowns, we
have
a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = b_n
If the diagonal elements are non-zero, each equation is rewritten for the corresponding unknown, that is,
the first equation is rewritten with x_1 on the left hand side, the second equation is rewritten with x_2 on the left
hand side, and so on as follows
x_1 = ( b_1 - a_12 x_2 - a_13 x_3 - ... - a_1n x_n ) / a_11
x_2 = ( b_2 - a_21 x_1 - a_23 x_3 - ... - a_2n x_n ) / a_22
...
x_n = ( b_n - a_n1 x_1 - a_n2 x_2 - ... - a_n,n-1 x_{n-1} ) / a_nn
Now to find the x_i's, one assumes an initial guess for the x_i's and then uses the rewritten equations to calculate
the new estimates. Remember, one always uses the most recent estimates to calculate the next estimates,
x_i. At the end of each iteration, one calculates the absolute relative approximate error for each x_i as
|e_a|_i = |(x_i^new - x_i^old) / x_i^new| x 100
where x_i^new is the value of x_i from the present iteration and x_i^old is the value from the previous iteration.
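The update rule described above can be sketched directly in code. As before, the MATLAB program is the assignment deliverable; the following is an illustrative Python sketch (function name, default guess and the per-cent error stopping test are choices of this sketch):

```python
def gauss_seidel(A, b, x0=None, tol=1e-8, max_iter=200):
    """Gauss-Seidel iteration: x_i = (b_i - sum_{j != i} A_ij x_j) / A_ii,
    always using the newest available estimates of the other unknowns."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        max_err = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
            # absolute relative approximate error for this unknown (per cent)
            if x[i] != 0:
                max_err = max(max_err, abs((x[i] - old) / x[i]) * 100)
        if max_err < tol:
            break
    return x
```

Convergence is guaranteed when the coefficient matrix is diagonally dominant, which is why rows are often rearranged before iterating.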
Practical Assignment No 5
Interpolation
1. Write a Program in MATLAB to Interpolate for the given point using quadratic spline
interpolation method.
What is interpolation?
Many times, data is given only at discrete points such as (x_0, y_0), (x_1, y_1), ..., (x_{n-1}, y_{n-1}), (x_n, y_n). So, how then does one find
the value of y at any other value of x? Well, a continuous function f(x) may be used to represent
the n+1 data values with f(x) passing through the n+1 points (Figure 1). Then one can find the value
of y at any other value of x. This is called interpolation.
Of course, if x falls outside the range of x for which the data is given, it is no longer
interpolation but instead is called extrapolation.
So what kind of function should one choose? A polynomial is a common choice
for an interpolating function because polynomials are easy to
(A) evaluate,
(B) differentiate, and
(C) integrate
relative to other choices such as a trigonometric and exponential series.
Polynomial interpolation involves finding a polynomial of order n that passes
through the n+1 data points. Several methods to obtain such a polynomial include the direct
method, Newton's divided difference polynomial method and the Lagrangian interpolation
method.
So is the spline method yet another method of obtaining this nth order polynomial?
NO! Actually, when n becomes large, in many cases one may get oscillatory behavior
in the resulting polynomial. This was shown by Runge when he interpolated data based on
a simple function of
y = 1 / (1 + 25 x^2)
on an interval of [-1, 1]. For example, take six equidistantly spaced points in [-1, 1] and
find y at these points as given in Table 1.
x      y
-1.0   0.038461
-0.6   0.1
-0.2   0.5
0.2    0.5
0.6    0.1
1.0    0.038461
Now through these six points, one can pass a fifth order polynomial
through the six data points. On plotting the fifth order polynomial (Figure 2) and the
original function, one can see that the two do not match well. One may consider choosing
more points in the interval [-1, 1] to get a better match, but it diverges even more (see
Figure 3), where 20 equidistant points were chosen in the interval [-1, 1] to draw a 19th
order polynomial. In fact, Runge found that as the order of the polynomial becomes
infinite, the polynomial diverges in the intervals -1 <= x < -0.726 and 0.726 < x <= 1.
So what is the answer to using information from more data points, but at the same
time keeping the function true to the data behavior? The answer is in spline interpolation.
The most common spline interpolations used are linear, quadratic, and cubic splines.
Figure 2 5th order polynomial interpolation (continuous line) compared with the original function 1/(1+25x^2).
Quadratic Splines
In these splines, a quadratic polynomial approximates the data between two consecutive
data points. Given (x_0, y_0), (x_1, y_1), ..., (x_n, y_n), fit n quadratic splines through the data. The splines are given by
f(x) = a_1 x^2 + b_1 x + c_1,   x_0 <= x <= x_1
     = a_2 x^2 + b_2 x + c_2,   x_1 <= x <= x_2
     ...
     = a_n x^2 + b_n x + c_n,   x_{n-1} <= x <= x_n
So how does one find the coefficients of these quadratic splines? There are 3n such
coefficients:
a_i, b_i, c_i,   i = 1, 2, ..., n
To find 3n unknowns, one needs to set up 3n equations and then simultaneously solve them.
These equations are found as follows.
These equations are found as follows.
1. Each quadratic spline goes through two consecutive data points:
a_1 x_0^2 + b_1 x_0 + c_1 = f(x_0)
a_1 x_1^2 + b_1 x_1 + c_1 = f(x_1)
...
a_i x_{i-1}^2 + b_i x_{i-1} + c_i = f(x_{i-1})
a_i x_i^2 + b_i x_i + c_i = f(x_i)
...
a_n x_{n-1}^2 + b_n x_{n-1} + c_n = f(x_{n-1})
a_n x_n^2 + b_n x_n + c_n = f(x_n)
This condition gives 2n equations as there are n quadratic splines going through two
consecutive data points.
2. The first derivatives of two quadratic splines are continuous at the interior points. For
example, the derivative of the first spline a_1 x^2 + b_1 x + c_1
is
2 a_1 x + b_1
and the derivative of the second spline a_2 x^2 + b_2 x + c_2
is
2 a_2 x + b_2
The two are equal at the common interior point x_1, giving
2 a_1 x_1 + b_1 = 2 a_2 x_1 + b_2
In general, at the interior point x_i,
2 a_i x_i + b_i = 2 a_{i+1} x_i + b_{i+1},   i = 1, 2, ..., n - 1
Since there are (n - 1) interior points, we have (n - 1) such equations. So far, the total number of
equations is 2n + (n - 1) = (3n - 1) equations. We still then need one more equation.
We can assume that the first spline is linear, that is
a_1 = 0
This gives us 3n equations and 3n unknowns. These can be solved by a number of techniques
used to solve simultaneous linear equations.
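The 3n equations above can be assembled into one linear system and solved in a few lines. The assignment's program is in MATLAB; here is an illustrative Python/NumPy sketch (function name and matrix layout are choices of this sketch, assuming the linear-first-spline condition a_1 = 0 described above):

```python
import numpy as np

def quadratic_spline_coeffs(x, y):
    """Fit n-1 quadratic splines a_i t^2 + b_i t + c_i through n data points.
    Conditions: each spline matches the data at its two end points,
    first derivatives match at interior points, and the first spline is
    linear (a_1 = 0)."""
    n = len(x)
    m = 3 * (n - 1)                        # number of unknowns
    A = np.zeros((m, m))
    r = np.zeros(m)
    row = 0
    for i in range(n - 1):                 # 2(n-1) interpolation equations
        for xp, yp in ((x[i], y[i]), (x[i + 1], y[i + 1])):
            A[row, 3 * i:3 * i + 3] = [xp ** 2, xp, 1.0]
            r[row] = yp
            row += 1
    for i in range(n - 2):                 # (n-2) derivative-continuity equations
        xp = x[i + 1]
        A[row, 3 * i:3 * i + 3] = [2 * xp, 1.0, 0.0]
        A[row, 3 * i + 3:3 * i + 6] = [-2 * xp, -1.0, 0.0]
        row += 1
    A[row, 0] = 1.0                        # last equation: a_1 = 0
    coeffs = np.linalg.solve(A, r)
    return coeffs.reshape(n - 1, 3)        # one (a_i, b_i, c_i) row per spline
```

For data that is already linear, every spline collapses to the same straight line, which makes a convenient sanity check.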
Flow chart
MATLAB Program
% locate the interval [x(l), x(u)] that brackets the query point xu
l=0;
for i=1:n-1
if xu>x(i) && xu<x(i+1)
l=i;
u=i+1;
end
end
if l==0
disp('Extrapolation Problem')
end
Practical Assignment No 6
Curve Fitting
1. Write a Program in MATLAB to fit a curve for given data points using Least Square method.
Linear regression is the most popular regression model. In this model, we wish to predict
the response to n data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) by a regression model given by
y = a_0 + a_1 x (1)
where a_0 and a_1 are the constants of the regression model.
A measure of goodness of fit, that is, how well a_0 + a_1 x predicts the response variable y, is the
magnitude of the residual E_i at each of the n data points:
E_i = y_i - (a_0 + a_1 x_i) (2)
Ideally, if all the residuals are zero, one may have found an equation in which all the
points lie on the model. Thus, minimization of the residual is an objective of obtaining
regression coefficients.
The most popular method to minimize the residual is the least squares method,
where the estimates of the constants of the model are chosen such that the sum of the
squared residuals, the quantity sum over i = 1 to n of E_i^2, is minimized.
Why minimize the sum of the square of the residuals? Why not, for instance,
minimize the sum of the residual errors or the sum of the absolute values of the residuals?
Alternatively, constants of the model can be chosen such that the average residual is zero
without making individual residuals small. Will any of these criteria yield unbiased
parameters with the smallest variance? All of these questions will be answered below. Look
at the data in Table 1.
x     y
2.0   4.0
3.0   6.0
2.0   6.0
3.0   8.0
So does this give us the smallest error? It does, as the sum of the residuals is zero. But it does not give unique values
for the parameters of the model. A straight line of the model
(5)
also makes the sum of the residuals zero, as shown in Table 3.
Table 3 The residuals at each data point for regression model
Since this criterion does not give a unique regression model, it cannot be used for
finding the regression coefficients. Let us see why we cannot use this criterion for any
general data. We want to minimize
sum over i = 1 to n of E_i = sum over i = 1 to n of (y_i - a_0 - a_1 x_i) (6)
Differentiating Equation (6) with respect to a_0 and a_1, we get
d(sum of E_i)/d a_0 = -n (7)
d(sum of E_i)/d a_1 = -(sum over i = 1 to n of x_i) (8)
Putting these equations to zero gives n = 0 and sum of x_i = 0, but that is not possible. Therefore, unique values of
a_0 and a_1 do not exist.
You may think that the reason this minimization criterion does not work is that negative
residuals cancel with positive residuals. So is minimizing the sum of the absolute residuals, sum of |E_i|, better? Let us look at the data
given in Table 2. It gives a particular value of sum of |E_i|, as shown in the following table.
The same value of sum of |E_i| also exists for a second straight-line model, and no other straight-line model for this
data gives a smaller value. Again, we find the regression coefficients are not unique, and hence this criterion
also cannot be used for finding the regression model.
Let us use the least squares criterion where we minimize
S_r = sum over i = 1 to n of E_i^2 = sum over i = 1 to n of (y_i - a_0 - a_1 x_i)^2 (9)
S_r is called the sum of the square of the residuals.
To find a_0 and a_1, we minimize S_r with respect to a_0 and a_1:
dS_r/d a_0 = -2 sum of (y_i - a_0 - a_1 x_i) = 0 (10)
dS_r/d a_1 = -2 sum of (y_i - a_0 - a_1 x_i) x_i = 0 (11)
giving
n a_0 + a_1 sum of x_i = sum of y_i (12)
a_0 sum of x_i + a_1 sum of x_i^2 = sum of x_i y_i (13)
Noting that sum over i = 1 to n of a_0 = n a_0, solving Equations (12) and (13) gives
a_1 = ( n sum of x_i y_i - (sum of x_i)(sum of y_i) ) / ( n sum of x_i^2 - (sum of x_i)^2 ) (14)
a_0 = ybar - a_1 xbar, where xbar = (sum of x_i)/n and ybar = (sum of y_i)/n (15)
Figure 3 Linear regression of y vs. x data showing residuals and square of residual at a
typical point, x_i.
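Equations (14) and (15) can be evaluated directly from sums over the data. The assignment asks for a MATLAB program; the following is an illustrative Python sketch (the function name is a choice of this sketch):

```python
def linear_regression(x, y):
    """Least-squares straight line y = a0 + a1*x (Equations 14-15)."""
    n = len(x)
    sx = sum(x)
    sy = sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * sx / n          # a0 = ybar - a1*xbar
    return a0, a1
```

Running it on the four-point table above gives the fitted line y = 1 + 2x.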
Exponential model
Given (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), best fit y = a e^(bx) to the data. The variables a and b are the constants of the
exponential model. The residual at each data point is
E_i = y_i - a e^(b x_i) (2)
The sum of the square of the residuals is
S_r = sum over i = 1 to n of ( y_i - a e^(b x_i) )^2 (3)
To find the constants a and b of the exponential model, we minimize S_r by differentiating with
respect to a and b and equating the resulting equations to zero:
dS_r/da = -2 sum of ( y_i - a e^(b x_i) ) e^(b x_i) = 0
dS_r/db = -2 sum of ( y_i - a e^(b x_i) ) a x_i e^(b x_i) = 0 (4a,b)
or
sum of y_i e^(b x_i) - a sum of e^(2b x_i) = 0
sum of y_i x_i e^(b x_i) - a sum of x_i e^(2b x_i) = 0 (5a,b)
Equations (5a) and (5b) are nonlinear in a and b and thus not in a closed form to be
solved as was the case for linear regression. In general, iterative methods (such as the
Gauss-Newton iteration method, the method of steepest descent, Marquardt's method, direct
search, etc.) must be used to find values of a and b.
However, in this case, from Equation (5a), a can be written explicitly in terms of b as
a = ( sum of y_i e^(b x_i) ) / ( sum of e^(2b x_i) ) (6)
Substituting Equation (6) in (5b) gives
sum of y_i x_i e^(b x_i) - ( sum of y_i e^(b x_i) / sum of e^(2b x_i) ) sum of x_i e^(2b x_i) = 0 (7)
This equation is still a nonlinear equation in b and can be solved best by numerical methods
such as the bisection method or the secant method.
MATLAB Program
Practical Assignment No 7
Numerical Integration
What is integration?
Integration is the process of measuring the area under a function plotted on a graph. Why would we want
to integrate a function? Among the most common examples are finding the velocity of a body from an
acceleration function, and displacement of a body from a velocity function. Throughout many engineering
fields, there are (what sometimes seems like) countless applications for integral calculus. You can read
about some of these applications in Chapters 07.00A-07.00G.
Sometimes, the evaluation of expressions involving these integrals can become daunting, if not
indeterminate. For this reason, a wide variety of numerical methods has been developed to simplify the
integral.
Here, we will discuss the trapezoidal rule of approximating integrals of the form
I = integral from a to b of f(x) dx (1)
where
f(x) is called the integrand,
a = lower limit of integration,
b = upper limit of integration.
So if we want to approximate the integral
I = integral from a to b of f(x) dx (2)
to find the value of the above integral, one assumes
integral from a to b of f(x) dx is approximately integral from a to b of f_n(x) dx (3)
where
f_n(x) = a_0 + a_1 x + ... + a_n x^n (4)
is an nth order polynomial. The trapezoidal rule assumes n = 1, that is, approximating the integral by a linear
polynomial (straight line),
integral from a to b of f(x) dx is approximately integral from a to b of f_1(x) dx (5)
But what are a_0 and a_1 in f_1(x) = a_0 + a_1 x? Now if one chooses (a, f(a)) and (b, f(b)) as the two points to approximate f(x) by a straight line from a to b,
f(a) = f_1(a) = a_0 + a_1 a (6)
f(b) = f_1(b) = a_0 + a_1 b (7)
Solving the above two equations gives
a_1 = ( f(b) - f(a) ) / (b - a),  a_0 = ( f(a) b - f(b) a ) / (b - a) (8a)
Hence from Equation (5),
integral from a to b of f(x) dx is approximately a_0 (b - a) + a_1 (b^2 - a^2)/2 (8b)
Substituting the values of a_0 and a_1 gives the trapezoidal rule
integral from a to b of f(x) dx is approximately (b - a) ( f(a) + f(b) ) / 2 (9)
Alternatively, one could approximate f(x) by the straight line through (a, f(a)) and (b, f(b)) written in Newton form,
f_1(x) = f(a) + ( ( f(b) - f(a) ) / (b - a) ) (x - a) (10)
Integrating f_1(x) from a to b gives
integral from a to b of f(x) dx is approximately (b - a) ( f(a) + f(b) ) / 2 (11)
This gives the same result as Equation (9) because they are just different forms of writing the same
polynomial.
The trapezoidal rule can also be written in the form
integral from a to b of f(x) dx is approximately c_1 f(x_1) + c_2 f(x_2) (12)
where
c_1 = (b - a)/2
c_2 = (b - a)/2
x_1 = a
x_2 = b
The interpretation is that f(x) is evaluated at points a and b, and each function evaluation is given a weight of (b - a)/2.
Geometrically, Equation (12) is looked at as the area of a trapezoid, while the equivalent form
integral from a to b of f(x) dx is approximately (b - a)/2 f(a) + (b - a)/2 f(b) (13)
is viewed as the sum of the area of two rectangles, as shown in Figure 3. How can one derive the trapezoidal rule by the
method of coefficients?
Assume
integral from a to b of f(x) dx is approximately c_1 f(a) + c_2 f(b) (14)
Let the right hand side be an exact expression for integrals of 1 and x; the formula will then also be
exact for linear combinations of 1 and x, that is, for f(x) = a_0 (1) + a_1 (x).
integral from a to b of 1 dx = b - a = c_1 + c_2 (15)
integral from a to b of x dx = (b^2 - a^2)/2 = c_1 a + c_2 b (16)
Solving the above two equations gives
c_1 = (b - a)/2, c_2 = (b - a)/2 (17)
Hence
integral from a to b of f(x) dx is approximately (b - a)/2 f(a) + (b - a)/2 f(b) (18)
Assume
integral from a to b of f(x) dx is approximately c_1 f(a) + c_2 f(b) (19)
Let the right hand side be exact for integrals of the form
integral from a to b of (a_0 + a_1 x) dx
So
integral from a to b of (a_0 + a_1 x) dx = a_0 (b - a) + a_1 (b^2 - a^2)/2 (20)
But we want
c_1 f(a) + c_2 f(b) (21)
to give the same result as Equation (20) for f(x) = a_0 + a_1 x:
c_1 (a_0 + a_1 a) + c_2 (a_0 + a_1 b) = a_0 (c_1 + c_2) + a_1 (c_1 a + c_2 b) (22)
Hence from Equations (20) and (22),
c_1 + c_2 = b - a,  c_1 a + c_2 b = (b^2 - a^2)/2 (23)
Again, solving the above two equations (23) gives
c_1 = (b - a)/2, c_2 = (b - a)/2 (24)
Therefore
integral from a to b of f(x) dx is approximately (b - a)/2 f(a) + (b - a)/2 f(b) (25)
Extending this procedure, divide [a, b] into n equal segments and apply the trapezoidal rule over each
segment; the sum of the results obtained for each segment is the approximate value of the integral.
Divide (b - a) into n equal segments as shown in Figure 4. Then the width of each segment is
h = (b - a)/n (26)
The integral I can be broken into n integrals as
integral from a to b of f(x) dx = integral from a to a+h + integral from a+h to a+2h + ... + integral from a+(n-1)h to b (27)
Figure 4 Multiple (n) segment trapezoidal rule
Applying the trapezoidal rule over each segment and adding gives
integral from a to b of f(x) dx is approximately (b - a)/(2n) [ f(a) + 2 (sum over i = 1 to n-1 of f(a + i h)) + f(b) ] (28)
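Equation (28) maps directly to a short loop. As elsewhere, the MATLAB program is the deliverable; this is an illustrative Python sketch (function name is a choice of this sketch):

```python
def trapezoidal(f, a, b, n):
    """n-segment trapezoidal rule, Equation (28)."""
    h = (b - a) / n
    s = f(a) + f(b)                    # end points weighted once
    for i in range(1, n):
        s += 2 * f(a + i * h)          # interior points weighted twice
    return (b - a) * s / (2 * n)
```

The rule is exact for straight lines and has an error of order h^2 for smooth integrands.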
MATLAB Program
Simpson's 1/3 Rule
Method 1:
Simpson's 1/3 rule is an extension of the trapezoidal rule where the integrand is approximated by a second order polynomial. Hence
I = integral from a to b of f(x) dx is approximately integral from a to b of f_2(x) dx
where f_2(x) = a_0 + a_1 x + a_2 x^2 is a second order polynomial. Choose (a, f(a)), ((a+b)/2, f((a+b)/2)) and (b, f(b)) as the three points through which f_2(x) passes. Solving the resulting three equations for the coefficients gives
a_0 = ( a^2 f(b) + ab f(b) - 4ab f((a+b)/2) + ab f(a) + b^2 f(a) ) / ( a^2 - 2ab + b^2 )
a_1 = ( 4(a+b) f((a+b)/2) - (a+3b) f(a) - (3a+b) f(b) ) / ( a^2 - 2ab + b^2 )
a_2 = 2 ( f(a) - 2 f((a+b)/2) + f(b) ) / ( a^2 - 2ab + b^2 )
Then
I is approximately integral from a to b of (a_0 + a_1 x + a_2 x^2) dx
= [ a_0 x + a_1 x^2/2 + a_2 x^3/3 ] evaluated from a to b
= a_0 (b - a) + a_1 (b^2 - a^2)/2 + a_2 (b^3 - a^3)/3
Substituting the values of a_0, a_1 and a_2 gives
integral from a to b of f_2(x) dx = (b - a)/6 [ f(a) + 4 f((a+b)/2) + f(b) ]
Since for Simpson's 1/3 rule, the interval [a, b] is broken into 2 segments, the segment width is
h = (b - a)/2
Hence Simpson's 1/3 rule is given by
integral from a to b of f(x) dx is approximately h/3 [ f(a) + 4 f((a+b)/2) + f(b) ]
Since the above form has 1/3 in its formula, it is called Simpson's 1/3 rule.
Method 2:
Simpson's 1/3 rule can also be derived by approximating f(x) by a second order polynomial using Newton's
divided difference polynomial as
f_2(x) = b_0 + b_1 (x - a) + b_2 (x - a)(x - (a+b)/2)
where
b_0 = f(a)
b_1 = ( f((a+b)/2) - f(a) ) / ( (a+b)/2 - a )
b_2 = ( ( f(b) - f((a+b)/2) ) / ( b - (a+b)/2 ) - ( f((a+b)/2) - f(a) ) / ( (a+b)/2 - a ) ) / ( b - a )
Integrating Newton's divided difference polynomial gives us
integral from a to b of f(x) dx is approximately integral from a to b of f_2(x) dx
= [ b_0 x + b_1 ( x^2/2 - a x ) + b_2 ( x^3/3 - (3a+b) x^2/4 + a(a+b) x/2 ) ] evaluated from a to b
Substituting the values of b_0, b_1 and b_2 into this equation yields the same result as before.
Method 3:
One could even use the Lagrange polynomial to derive Simpson's formula. Notice that any method of three-point quadratic interpolation can be used to accomplish this task. In this case, the interpolating function becomes
f_2(x) = ( (x - (a+b)/2)(x - b) ) / ( (a - (a+b)/2)(a - b) ) f(a)
       + ( (x - a)(x - b) ) / ( ((a+b)/2 - a)((a+b)/2 - b) ) f((a+b)/2)
       + ( (x - a)(x - (a+b)/2) ) / ( (b - a)(b - (a+b)/2) ) f(b)
Integrating this function from a to b term by term produces a large expression in a, b and the three function values. Believe it or not, simplifying and factoring this large expression yields the same result as before,
integral from a to b of f_2(x) dx = (b - a)/6 [ f(a) + 4 f((a+b)/2) + f(b) ]
Method 4:
Simpson's 1/3 rule can also be derived by the method of coefficients. Assume
integral from a to b of f(x) dx is approximately c_1 f(a) + c_2 f((a+b)/2) + c_3 f(b)
Let the right-hand side be an exact expression for the integrals of 1, x and x^2. This implies that the right-hand side
will be an exact expression for integrals of any linear combination of the three, that is, for a general second
order polynomial. Now
integral from a to b of 1 dx = b - a = c_1 + c_2 + c_3
integral from a to b of x dx = (b^2 - a^2)/2 = c_1 a + c_2 (a+b)/2 + c_3 b
integral from a to b of x^2 dx = (b^3 - a^3)/3 = c_1 a^2 + c_2 ((a+b)/2)^2 + c_3 b^2
Solving the above three equations for c_1, c_2 and c_3 gives
c_1 = (b - a)/6
c_2 = 2(b - a)/3
c_3 = (b - a)/6
This gives
integral from a to b of f(x) dx is approximately (b - a)/6 f(a) + 2(b - a)/3 f((a+b)/2) + (b - a)/6 f(b)
The formula from Methods 1-3 can be viewed as the area under the second order polynomial, while the equation from Method 4 can be viewed as the sum of the areas of three rectangles.
Multiple-Segment Simpson's 1/3 Rule:
Just like in the multiple-segment trapezoidal rule, one can subdivide the interval [a, b] into n segments (n even) and apply Simpson's 1/3 rule repeatedly over every two segments. The segment width is h = (b - a)/n, and
integral from a to b of f(x) dx = integral from x_0 to x_n of f(x) dx
where
x_0 = a
x_n = b
Breaking the integral into pairs of segments,
integral from a to b of f(x) dx = integral from x_0 to x_2 + integral from x_2 to x_4 + ... + integral from x_{n-2} to x_n
Applying Simpson's 1/3 rule over each pair gives
integral from a to b of f(x) dx is approximately
(x_2 - x_0) [ f(x_0) + 4 f(x_1) + f(x_2) ] / 6 + ... + (x_n - x_{n-2}) [ f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) ] / 6
Since
x_i - x_{i-2} = 2h, i = 2, 4, ..., n
then
integral from a to b of f(x) dx is approximately
2h [ f(x_0) + 4 f(x_1) + f(x_2) ] / 6 + 2h [ f(x_2) + 4 f(x_3) + f(x_4) ] / 6 + ... + 2h [ f(x_{n-2}) + 4 f(x_{n-1}) + f(x_n) ] / 6
= h/3 [ f(x_0) + 4 { f(x_1) + f(x_3) + ... + f(x_{n-1}) } + 2 { f(x_2) + f(x_4) + ... + f(x_{n-2}) } + f(x_n) ]
= (b - a)/(3n) [ f(x_0) + 4 (sum over odd i = 1 to n-1 of f(x_i)) + 2 (sum over even i = 2 to n-2 of f(x_i)) + f(x_n) ]
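The final multiple-segment formula above translates into a few lines of code. As an illustrative Python sketch (the MATLAB program is the assignment deliverable; the function name is my own):

```python
def simpson_13(f, a, b, n):
    """Multiple-segment Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        # odd interior points get weight 4, even interior points weight 2
        s += (4 if i % 2 else 2) * f(a + i * h)
    return h * s / 3
```

Because the rule integrates cubics exactly, even n = 2 reproduces the integral of x^3 with no error.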
MATLAB Program
Gauss Quadrature Rule
Background:
To derive the trapezoidal rule from the method of undetermined coefficients, we approximated
integral from a to b of f(x) dx is approximately c_1 f(a) + c_2 f(b) (1)
Let the right hand side be exact for integrals of a straight line, that is, for an integrated form of
integral from a to b of (a_0 + a_1 x) dx
So
integral from a to b of (a_0 + a_1 x) dx = a_0 (b - a) + a_1 (b^2 - a^2)/2 (2)
But from Equation (1), we want
c_1 f(a) + c_2 f(b) (3)
to give the same result as Equation (2) for f(x) = a_0 + a_1 x:
c_1 (a_0 + a_1 a) + c_2 (a_0 + a_1 b) = a_0 (c_1 + c_2) + a_1 (c_1 a + c_2 b) (4)
Hence from Equations (2) and (4),
c_1 + c_2 = b - a
c_1 a + c_2 b = (b^2 - a^2)/2
Solving these two equations gives c_1 = (b - a)/2 and c_2 = (b - a)/2, so that
integral from a to b of f(x) dx is approximately (b - a)/2 f(a) + (b - a)/2 f(b) (7)
Method 1:
The two-point Gauss quadrature rule is an extension of the trapezoidal rule approximation where the
arguments of the function are not predetermined as a and b, but are unknowns x_1 and x_2. So in the two-point Gauss
quadrature rule, the integral is approximated as
I = integral from a to b of f(x) dx is approximately c_1 f(x_1) + c_2 f(x_2)
There are four unknowns x_1, x_2, c_1 and c_2. These are found by assuming that the formula gives exact results for
integrating a general third order polynomial, f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3. Hence
integral from a to b of f(x) dx = integral from a to b of (a_0 + a_1 x + a_2 x^2 + a_3 x^3) dx
= [ a_0 x + a_1 x^2/2 + a_2 x^3/3 + a_3 x^4/4 ] evaluated from a to b
= a_0 (b - a) + a_1 (b^2 - a^2)/2 + a_2 (b^3 - a^3)/3 + a_3 (b^4 - a^4)/4 (8)
The formula would then give
I is approximately c_1 f(x_1) + c_2 f(x_2) = c_1 (a_0 + a_1 x_1 + a_2 x_1^2 + a_3 x_1^3) + c_2 (a_0 + a_1 x_2 + a_2 x_2^2 + a_3 x_2^3) (9)
Equating Equations (8) and (9) gives
a_0 (b - a) + a_1 (b^2 - a^2)/2 + a_2 (b^3 - a^3)/3 + a_3 (b^4 - a^4)/4
= a_0 (c_1 + c_2) + a_1 (c_1 x_1 + c_2 x_2) + a_2 (c_1 x_1^2 + c_2 x_2^2) + a_3 (c_1 x_1^3 + c_2 x_2^3) (10)
Since in Equation (10), the constants a_0, a_1, a_2 and a_3 are arbitrary, the coefficients of a_0, a_1, a_2 and a_3 are equal. This gives us
four equations as follows.
b - a = c_1 + c_2
(b^2 - a^2)/2 = c_1 x_1 + c_2 x_2
(b^3 - a^3)/3 = c_1 x_1^2 + c_2 x_2^2
(b^4 - a^4)/4 = c_1 x_1^3 + c_2 x_2^3 (11)
Without proof (see Example 1 for proof of a related problem), we can find that the above four simultaneous
nonlinear equations have only one acceptable solution
c_1 = (b - a)/2, c_2 = (b - a)/2
x_1 = (b - a)/2 (-1/sqrt(3)) + (b + a)/2, x_2 = (b - a)/2 (1/sqrt(3)) + (b + a)/2 (12)
Hence
integral from a to b of f(x) dx is approximately
(b - a)/2 f( (b - a)/2 (-1/sqrt(3)) + (b + a)/2 ) + (b - a)/2 f( (b - a)/2 (1/sqrt(3)) + (b + a)/2 ) (13)
Method 2:
We can derive the same formula by assuming that the expression gives exact values for the individual
integrals of 1, x, x^2 and x^3. The reason the formula can also be derived using this method is that a linear
combination of the above integrands is a general third order polynomial given by f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3.
These will give four equations as follows
integral from a to b of 1 dx = c_1 + c_2 = b - a
integral from a to b of x dx = c_1 x_1 + c_2 x_2 = (b^2 - a^2)/2
integral from a to b of x^2 dx = c_1 x_1^2 + c_2 x_2^2 = (b^3 - a^3)/3
integral from a to b of x^3 dx = c_1 x_1^3 + c_2 x_2^3 = (b^4 - a^4)/4 (14)
These four simultaneous nonlinear equations can be solved to give a single acceptable solution
c_1 = (b - a)/2, c_2 = (b - a)/2, x_1 = (b - a)/2 (-1/sqrt(3)) + (b + a)/2, x_2 = (b - a)/2 (1/sqrt(3)) + (b + a)/2 (15)
Hence
integral from a to b of f(x) dx is approximately
(b - a)/2 f( (b - a)/2 (-1/sqrt(3)) + (b + a)/2 ) + (b - a)/2 f( (b - a)/2 (1/sqrt(3)) + (b + a)/2 ) (16)
Since two points are chosen, it is called the two-point Gauss quadrature rule. Higher point versions can
also be developed.
Table 1 Weighting factors and function arguments used in Gauss quadrature formulas
(21)
Solving the two Equations (21) simultaneously gives
(22)
Hence
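Equation (13) gives the two-point rule in a directly programmable form. As an illustrative Python sketch (the assignment's program is in MATLAB; the function name is my own):

```python
import math

def gauss_two_point(f, a, b):
    """Two-point Gauss quadrature rule, Equation (13):
    c1 = c2 = (b-a)/2, with arguments mapped from +/- 1/sqrt(3)
    on [-1, 1] to the interval [a, b]."""
    c = (b - a) / 2.0
    mid = (b + a) / 2.0
    t = 1.0 / math.sqrt(3.0)
    return c * (f(mid - c * t) + f(mid + c * t))
```

With only two function evaluations the rule is exact for all cubics, matching the derivation above.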
Flow Chart
MATLAB Program
Practical Assignment No 8
Numerical Differentiation
1. Write a Program in MATLAB to Differentiate Numerically
For a finite Dx,
f'(x) is approximately ( f(x + Dx) - f(x) ) / Dx
The above is the forward divided difference approximation of the first derivative. It is called forward
because you are taking a point ahead of x. To find the value of f'(x) at x = x_i, we may choose another point Dx ahead
as x = x_{i+1}. This gives
f'(x_i) is approximately ( f(x_{i+1}) - f(x_i) ) / Dx
where
Dx = x_{i+1} - x_i
Figure 1 Graphical representation of forward difference approximation of first derivative.
For a finite Dx,
f'(x) is approximately ( f(x) - f(x - Dx) ) / Dx
This is a backward difference approximation as you are taking a point backward from x. To find the
value of f'(x) at x = x_i, we may choose another point Dx behind as x = x_{i-1}. This gives
f'(x_i) is approximately ( f(x_i) - f(x_{i-1}) ) / Dx
where
Dx = x_i - x_{i-1}
The term O(Dx) shows that the error in the approximation is of the order of Dx.
Can you now derive from the Taylor series the formula for the backward divided difference
approximation of the first derivative?
As you can see, both forward and backward divided difference approximations of the first
derivative are accurate to the order of Dx. Can we get better approximations? Yes, another method to
approximate the first derivative is called the central difference approximation of the first derivative.
From the Taylor series
f(x_{i+1}) = f(x_i) + f'(x_i) Dx + ( f''(x_i)/2! ) Dx^2 + ( f'''(x_i)/3! ) Dx^3 + ... (1)
and
f(x_{i-1}) = f(x_i) - f'(x_i) Dx + ( f''(x_i)/2! ) Dx^2 - ( f'''(x_i)/3! ) Dx^3 + ... (2)
Subtracting Equation (2) from Equation (1) gives
f(x_{i+1}) - f(x_{i-1}) = 2 f'(x_i) Dx + 2 ( f'''(x_i)/3! ) Dx^3 + ...
f'(x_i) = ( f(x_{i+1}) - f(x_{i-1}) ) / (2 Dx) - ( f'''(x_i)/3! ) Dx^2 - ... (3)
hence showing that we have obtained a more accurate formula, as the error is of the order of Dx^2. The
central divided difference approximation of the first derivative is therefore
f'(x_i) is approximately ( f(x_{i+1}) - f(x_{i-1}) ) / (2 Dx) (4)
where
Dx = x_{i+1} - x_i = x_i - x_{i-1} (5)
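The three divided difference formulas above can be compared numerically in a few lines. As an illustrative Python sketch (the MATLAB program is the deliverable; the function names are my own):

```python
def forward_diff(f, x, h):
    """Forward divided difference, error of order h."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """Backward divided difference, error of order h."""
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    """Central divided difference, error of order h^2."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

For f(x) = x^3 at x = 2 (exact derivative 12), the central difference is markedly more accurate than the one-sided formulas at the same step size, as the error orders predict.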
Flow Chart
MATLAB Program
Practical Assignment No 9
Solution Of ODEs
1. Write a Program in MATLAB to solve an ODE
Only first order ordinary differential equations can be solved by using the Runge-Kutta 2nd order
method. In other sections, we will discuss how the Euler and Runge-Kutta methods are used to solve
higher order ordinary differential equations or coupled (simultaneous) differential equations.
Runge-Kutta 2nd order method
Euler's method is given by
y_{i+1} = y_i + f(x_i, y_i) h (1)
where
h = x_{i+1} - x_i
To understand the Runge-Kutta 2nd order method, we need to derive Euler's method from the Taylor
series:
y_{i+1} = y_i + (dy/dx at x_i, y_i) h + (1/2!) (d^2y/dx^2 at x_i, y_i) h^2 + ... (2)
As you can see, the first two terms of the Taylor series
y_{i+1} = y_i + f(x_i, y_i) h
are Euler's method, and hence it can be considered to be the Runge-Kutta 1st order method.
The true error in the approximation is given by
E_t = ( f'(x_i, y_i)/2! ) h^2 + ( f''(x_i, y_i)/3! ) h^3 + ... (3)
So what would a 2nd order method formula look like? It would include one more term of the Taylor
series as follows.
y_{i+1} = y_i + f(x_i, y_i) h + (1/2!) f'(x_i, y_i) h^2 (4)
Let us take a generic example of a first order ordinary differential equation
dy/dx = f(x, y), y(0) = y_0 (5)
The Runge-Kutta 2nd order method writes the update in the form
y_{i+1} = y_i + (a_1 k_1 + a_2 k_2) h (6)
where
k_1 = f(x_i, y_i)
k_2 = f(x_i + p_1 h, y_i + q_11 k_1 h) (7)
This form allows one to take advantage of the 2nd order method without having to calculate f'(x, y).
So how do we find the unknowns a_1, a_2, p_1 and q_11? Without proof (see Appendix for proof), equating
Equations (4) and (6) gives three equations:
a_1 + a_2 = 1
a_2 p_1 = 1/2
a_2 q_11 = 1/2
Since we have 3 equations and 4 unknowns, we can assume the value of one of the unknowns. The
other three will then be determined from the three equations. Generally the value of a_2 is chosen to
evaluate the other three constants. The three values generally used for a_2 are 1/2, 1 and 2/3, and are known as
Heun's Method, the midpoint method and Ralston's method, respectively.
Heun's Method
Here a_2 = 1/2 is chosen, giving
a_1 = 1/2, p_1 = 1, q_11 = 1
resulting in
y_{i+1} = y_i + ( (1/2) k_1 + (1/2) k_2 ) h (8)
where
k_1 = f(x_i, y_i) (9a)
k_2 = f(x_i + h, y_i + k_1 h) (9b)
This method is graphically explained in Figure 1.
Midpoint Method
Here a_2 = 1 is chosen, giving
a_1 = 0, p_1 = 1/2, q_11 = 1/2
resulting in
y_{i+1} = y_i + k_2 h (10)
where
k_1 = f(x_i, y_i) (11a)
k_2 = f(x_i + h/2, y_i + k_1 h/2) (11b)
Ralston's Method
Here a_2 = 2/3 is chosen, giving
a_1 = 1/3, p_1 = 3/4, q_11 = 3/4
resulting in
y_{i+1} = y_i + ( (1/3) k_1 + (2/3) k_2 ) h (12)
where
k_1 = f(x_i, y_i) (13a)
k_2 = f(x_i + (3/4) h, y_i + (3/4) k_1 h) (13b)
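All three variants above share one generic update; only the value of a_2 changes. As an illustrative Python sketch (the MATLAB program is the deliverable; the function name and parameterization by a_2 are choices of this sketch, using a_1 = 1 - a_2 and p_1 = q_11 = 1/(2 a_2) from the three constraint equations):

```python
def rk2(f, x0, y0, h, steps, a2=0.5):
    """Generic Runge-Kutta 2nd order solver for dy/dx = f(x, y).
    a2 = 1/2 gives Heun's method, a2 = 1 the midpoint method,
    a2 = 2/3 Ralston's method."""
    a1 = 1.0 - a2
    p1 = 1.0 / (2.0 * a2)          # p1 = q11 from a2*p1 = a2*q11 = 1/2
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + p1 * h, y + p1 * k1 * h)
        y += (a1 * k1 + a2 * k2) * h
        x += h
    return y
```

For dy/dx = y with y(0) = 1, one hundred steps of h = 0.01 reproduce e = 2.71828... to a few parts in 10^5, consistent with the method's O(h^2) global error.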
So only first order ordinary differential equations can be solved by using the Runge-Kutta 4th order
method. In other sections, we have discussed how Euler and Runge-Kutta methods are used to solve
higher order ordinary differential equations or coupled (simultaneous) differential equations.
Practical Assignment No 10
Solution Of PDEs
1. Write a Program in MATLAB to solve a PDE
1. Laplace Equation
The general second order linear PDE with two independent variables and one dependent variable is
given by
A d^2u/dx^2 + B d^2u/dxdy + C d^2u/dy^2 + D = 0 (1)
where A, B and C are functions of the independent variables x and y, and D can be a function of x, y, u, du/dx and du/dy. Equation (1) is
considered to be elliptic if
B^2 - 4AC < 0 (2)
One popular example of an elliptic second order linear partial differential equation is the Laplace
equation, which is of the form
d^2T/dx^2 + d^2T/dy^2 = 0 (3)
As
A = 1, B = 0, C = 1, D = 0,
then
B^2 - 4AC = -4 < 0
and hence Equation (3) is elliptic.
Figure 1: Schematic diagram of the plate with the temperature boundary conditions
The partial differential equation that governs the temperature T(x, y) in the plate is given by
d^2T/dx^2 + d^2T/dy^2 = 0 (4)
To find the temperature within the plate, we divide the plate area by a grid as shown in Figure 2.
Figure 2: Plate area divided into a grid
The length L along the x-axis is divided into m equal segments, while the width W along the y-axis is divided into
n equal segments, hence giving
Dx = L/m (5)
Dy = W/n (6)
Now we will apply the finite difference approximation of the partial derivatives at a general interior
node (i, j).
d^2T/dx^2 is approximately ( T(i+1,j) - 2 T(i,j) + T(i-1,j) ) / (Dx)^2 (7)
d^2T/dy^2 is approximately ( T(i,j+1) - 2 T(i,j) + T(i,j-1) ) / (Dy)^2 (8)
Equations (7) and (8) are central divided difference approximations of the second derivatives.
Substituting Equations (7) and (8) in Equation (4), we get
( T(i+1,j) - 2 T(i,j) + T(i-1,j) ) / (Dx)^2 + ( T(i,j+1) - 2 T(i,j) + T(i,j-1) ) / (Dy)^2 = 0 (9)
For a grid with Dx = Dy, Equation (9) simplifies to
T(i,j) = ( T(i+1,j) + T(i-1,j) + T(i,j+1) + T(i,j-1) ) / 4 (10)
Gauss-Seidel Method
To take advantage of the sparseness of the coefficient matrix as seen in Example 1, the Gauss-Seidel
method may provide a more efficient way of finding the solution. In this case, Equation (10) is written
for all interior nodes as
T(i,j) = ( T(i+1,j) + T(i-1,j) + T(i,j+1) + T(i,j-1) ) / 4 (11)
Now Equation (11) is solved iteratively for all interior nodes until all the temperatures at the interior
nodes are within a pre-specified tolerance.
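The iteration described by Equation (11) can be sketched directly. As an illustrative Python sketch (the MATLAB program follows below; the function name, tolerance handling and in-place update are choices of this sketch, assuming a square grid with Dx = Dy and fixed boundary temperatures already stored on the edges):

```python
def laplace_gauss_seidel(T, tol=1e-6, max_iter=10000):
    """Solve Laplace's equation on a rectangular grid (Dx = Dy) stored as a
    list of lists T, whose edge entries hold the boundary temperatures.
    Each interior node is repeatedly replaced by the average of its four
    neighbours, Equation (11), until the largest change is below tol."""
    rows, cols = len(T), len(T[0])
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                new = (T[i + 1][j] + T[i - 1][j] + T[i][j + 1] + T[i][j - 1]) / 4.0
                max_change = max(max_change, abs(new - T[i][j]))
                T[i][j] = new          # newest estimates used immediately
        if max_change < tol:
            break
    return T
```

A useful sanity check is a boundary profile that is itself harmonic (for example T = y): the interior must converge to the same linear profile.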
Flow Chart
MATLAB Program
clc;
clear all;
int=input('total interval=');
f=zeros(int+1,int+1);
n=input('enter the no of iteration=');
f(1,:)=input('top boundary condition=');
f(int+1,:)=input('lower boundary condition=');
f(:,1)=input('left boundary condition=');
f(:,int+1)=input('right boundary condition=');
for k=1:n % repeat the Gauss-Seidel sweep n times
for i=int:-1:2
for j=2:int
f(i,j)=(f(i+1,j)+f(i-1,j)+f(i,j+1)+f(i,j-1))/4;
end
end
end
[x,y]=meshgrid(0:1:int,0:1:int);
surf(x,y,f)
2. Parabolic Equations
The general second order linear PDE with two independent variables and one dependent variable is
given by
A d^2u/dx^2 + B d^2u/dxdt + C d^2u/dt^2 + D = 0 (1)
where A, B and C are functions of the independent variables x and t, and D can be a function of x, t, u, du/dx and du/dt. If B^2 - 4AC = 0, Equation (1) is
called a parabolic partial differential equation. One of the simple examples of a parabolic PDE is the
heat-conduction equation for a metal rod (Figure 1)
alpha d^2T/dx^2 = dT/dt (2)
where
T = temperature as a function of location x and time t,
in which the thermal diffusivity alpha is given by
alpha = k / (rho C)
where
k = thermal conductivity of rod material,
rho = density of rod material,
C = specific heat of the rod material.
Choosing a node spacing Dx along the rod and a time step Dt, and approximating d^2T/dx^2 by the central divided difference at time level j and dT/dt by the forward divided difference, Equation (2) becomes
alpha ( T(i+1,j) - 2 T(i,j) + T(i-1,j) ) / (Dx)^2 = ( T(i,j+1) - T(i,j) ) / Dt (6)
Solving for the temperature at the new time level,
T(i,j+1) = T(i,j) + lambda ( T(i+1,j) - 2 T(i,j) + T(i-1,j) ), lambda = alpha Dt / (Dx)^2 (7)
Equation (7) can be solved explicitly because it can be written for each internal location node of the rod
for time node (j+1) in terms of the temperatures at time node j. In other words, if we know the temperature at every
node at time level j, and knowing the boundary temperatures, which are the temperatures at the external nodes, we can
find the temperature at the next time step. We continue the process by first finding the temperature at all
nodes at time level (j+1), and using these to find the temperature at the next time node, (j+2). This process continues till we
reach the time at which we are interested in finding the temperature.
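The explicit time-marching just described can be sketched in a few lines. As an illustrative Python sketch (the MATLAB program is the deliverable; the function name and argument list are choices of this sketch, and the scheme is stable only for lambda <= 1/2):

```python
def heat_explicit(T0, alpha, dx, dt, nsteps, t_left, t_right):
    """Explicit (forward-time, centred-space) scheme for
    alpha * d2T/dx2 = dT/dt, Equation (7):
    T(i, j+1) = T(i, j) + lam * (T(i+1, j) - 2*T(i, j) + T(i-1, j)),
    with lam = alpha*dt/dx**2 and fixed end temperatures."""
    lam = alpha * dt / dx ** 2
    T = list(T0)
    for _ in range(nsteps):
        Tn = T[:]
        for i in range(1, len(T) - 1):
            Tn[i] = T[i] + lam * (T[i + 1] - 2 * T[i] + T[i - 1])
        Tn[0], Tn[-1] = t_left, t_right   # boundary nodes held fixed
        T = Tn
    return T
```

With both ends of an initially cold rod held at 100, the interior temperatures march toward the steady value 100, which makes a simple correctness check.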
In the implicit method, the second derivative is instead approximated at the new time level (j+1), giving
-lambda T(i-1,j+1) + (1 + 2 lambda) T(i,j+1) - lambda T(i+1,j+1) = T(i,j) (11)
where
lambda = alpha Dt / (Dx)^2
Now Equation (11) can be written for all nodes (except the external nodes), at a particular time level.
This results in simultaneous linear equations which can be solved to find the nodal temperatures at a
particular time.
Crank-Nicolson Method
The Crank-Nicolson method provides an alternative scheme to the implicit method. The accuracy of the Crank-
Nicolson method is of the same order in both space and time, O((Dx)^2) and O((Dt)^2). In the implicit method, the approximation of d^2T/dx^2 is of O((Dx)^2)
accuracy, while the approximation for dT/dt is only of O(Dt) accuracy. The improved accuracy in the Crank-Nicolson method is
achieved by approximating the time derivative at the midpoint of the time step. To numerically solve PDEs such
as Equation (2), one can use finite difference approximations of the partial derivatives. The
second derivative on the left hand side is approximated at node i as the average value of the central divided
difference approximations at time level j and time level (j+1):
d^2T/dx^2 is approximately (1/2) [ ( T(i+1,j) - 2 T(i,j) + T(i-1,j) ) / (Dx)^2 + ( T(i+1,j+1) - 2 T(i,j+1) + T(i-1,j+1) ) / (Dx)^2 ] (12)
The first derivative on the right side of Equation (2) is approximated using the forward divided difference
approximation at time level j and node i as
dT/dt is approximately ( T(i,j+1) - T(i,j) ) / Dt (13)
Substituting Equations (12) and (13) in Equation (2) gives
(alpha/2) [ ( T(i+1,j) - 2 T(i,j) + T(i-1,j) ) + ( T(i+1,j+1) - 2 T(i,j+1) + T(i-1,j+1) ) ] / (Dx)^2 = ( T(i,j+1) - T(i,j) ) / Dt (14)
giving
-lambda T(i-1,j+1) + 2(1 + lambda) T(i,j+1) - lambda T(i+1,j+1) = lambda T(i-1,j) + 2(1 - lambda) T(i,j) + lambda T(i+1,j) (15)
where
lambda = alpha Dt / (Dx)^2
Now Equation (15) is written for all nodes (except the external nodes). This will result in simultaneous
linear equations that can be solved to find the temperature at a particular time.
Flow Chart
MATLAB Program