
I. TRUNCATION ERROR

In numerical analysis and scientific computing, truncation error is the error made by truncating an infinite sum and approximating it by a finite sum. For instance, if we approximate the sine function by the first two non-zero terms of its Taylor series, as in sin(x) ≈ x - x^3/6 for small x, the resulting error is a truncation error. It is present even with infinite-precision arithmetic, because it is caused by truncation of the infinite Taylor series to form the algorithm. Often, truncation error also includes discretization error, which is the error that arises from taking a finite number of steps in a computation to approximate an infinite process. For example, in numerical methods for ordinary differential equations, the continuously varying function that is the solution of the differential equation is approximated by a process that progresses step by step, and the error that this entails is a discretization or truncation error. See Truncation error (numerical integration) for more on this. Occasionally, round-off error (the consequence of using finite-precision floating-point numbers on computers) is also called truncation error, especially if the number is rounded by truncation.

Examples of Truncation Error: Taylor Series and Numerical Differentiation

Taylor Series: Taylor's theorem and its associated formula, the Taylor series, is of great value in the study of numerical methods. In essence, the Taylor theorem states that any smooth function can be approximated as a polynomial. The Taylor series then provides a means to express this idea mathematically in a form that can be used to generate practical results. A useful way to gain insight into the Taylor series is to build it term by term. A good problem context for this exercise is to predict a function value at one point in terms of the function value and its derivatives at another point. In general, we can usually assume that the truncation error is decreased by the addition of terms to the Taylor series. In many cases, if h is sufficiently small, the first- and other lower-order terms usually account for a disproportionately high percent of the error. Thus, only a few terms are required to obtain an adequate approximation.

General Equation:
f(x(i+1)) ≈ f(x(i)) + f'(x(i))h + [f''(x(i))/2!]h^2 + ... + [f^(n)(x(i))/n!]h^n + Rn
where:
x(i) = initial value
x(i+1) = value for the next approximation
n = order of the approximation
h = distance between x(i) and x(i+1)
Rn = remainder

Help: Taylor: Solving the Roots using Taylor Series
In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point.
Syntax: Taylor(fx,xiplus1,xi,n,sol)
where:
fx = given function to be evaluated
xiplus1, xi = initial guesses
n = number of significant figures
sol = showing of solution

Codes:
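*Note: The original code listing is an image. The following is a minimal MATLAB sketch of what Taylor might look like, given the documented signature. The stopping rule (es = 0.5*10^(2-n) percent for n significant figures) and the use of the Symbolic Math Toolbox are assumptions, not the author's actual code.

function fapprox = Taylor(fx, xiplus1, xi, n, sol)
% Taylor-series estimate of f(xiplus1) built term by term about xi.
% Requires the Symbolic Math Toolbox (sym, subs, diff).
syms x
f  = sym(fx);               % e.g. fx = '25*x^3 - 6*x^2 + 7*x - 88'
h  = xiplus1 - xi;          % distance between x(i) and x(i+1)
es = 0.5*10^(2 - n);        % stopping criterion, in percent
d  = f;                     % current derivative (starts at order 0)
fapprox = 0;  ea = 100;  k = 0;
while ea > es && k < 50
    term = double(subs(d, x, xi))*h^k/factorial(k);
    fnew = fapprox + term;
    if fnew ~= 0, ea = abs((fnew - fapprox)/fnew)*100; end
    fapprox = fnew;
    if strcmpi(sol, 'yes')
        fprintf('order %2d: estimate = %.6f, ea = %.4f%%\n', k, fapprox, ea);
    end
    d = diff(d, x);  k = k + 1;   % next derivative, next order
end
end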

Example: Problem:
From page 121 of the book: Problem 4.10(b) *Note: Using equation from Problem 4.13
Use Taylor series expansions to estimate f(x) = 25x^3 - 6x^2 + 7x - 88 at x(i+1) = 1 for x(i) = 0.25.
Syntax:

Output:

Numerical Differentiation
In numerical analysis, numerical differentiation describes algorithms for estimating the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function. The simplest method is to use finite difference approximations. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.

Difference Approximations (first and second derivatives):
Forward:  f'(x(i)) = [f(x(i+1)) - f(x(i))]/h, error O(h)
          f''(x(i)) = [f(x(i+2)) - 2f(x(i+1)) + f(x(i))]/h^2, error O(h)
Backward: f'(x(i)) = [f(x(i)) - f(x(i-1))]/h, error O(h)
          f''(x(i)) = [f(x(i)) - 2f(x(i-1)) + f(x(i-2))]/h^2, error O(h)
Centered: f'(x(i)) = [f(x(i+1)) - f(x(i-1))]/(2h), error O(h^2)
          f''(x(i)) = [f(x(i+1)) - 2f(x(i)) + f(x(i-1))]/h^2, error O(h^2)
where:
x(i) = initial value
h = distance between x(i) and x(i+1) (step size)
O(h^n) = order of the truncation (true value) error

Help: Numdif: Solving the roots using Numerical Differentiation
Numerical differentiation is the process of finding the numerical value of a derivative of a given function at a given point. The simplest method is to use finite difference approximations to estimate this derivative.
Syntax: Numdif(fx,x1,x2,h)
where:
fx = given function
x1, x2 = the range of values at which the function fx is evaluated (the upper and lower limits can be interchanged)
h = step size

Codes:
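*Note: The code listing here is an image in the original. A minimal sketch of what Numdif might look like follows; it assumes fx is a string written with element-wise operators and that the function tabulates forward, backward, and centered estimates at each step across [x1, x2]. These implementation details are assumptions.

function D = Numdif(fx, x1, x2, h)
% Forward (O(h)), backward (O(h)), and centered (O(h^2)) difference
% estimates of f'(x) at each x in the range [x1, x2] with step h.
f  = str2func(['@(x)' fx]);   % e.g. '-0.1*x.^4 - 0.15*x.^3 - 0.5*x.^2 - 0.25*x + 1.2'
xs = (min(x1,x2):h:max(x1,x2))';
fwd = (f(xs + h) - f(xs))/h;
bwd = (f(xs) - f(xs - h))/h;
ctr = (f(xs + h) - f(xs - h))/(2*h);
D = [xs fwd bwd ctr];
disp('      x     forward   backward  centered');
disp(D);
end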

Example: Problem:
From page 113 of the book: Example 4.4 Use forward and backward difference approximations of O(h) and a centered difference approximation of O(h^2) to estimate the first derivative of: f(x) = -0.1x^4 - 0.15x^3 - 0.5x^2 - 0.25x + 1.2 at x = 0.5 using a step size h = 0.5. Repeat the computation using h = 0.25. Note that the derivative can be calculated directly as f'(x) = -0.4x^3 - 0.45x^2 - 1.0x - 0.25 and can be used to compute the true value as f'(0.5) = -0.9125

Syntax: using h = 0.5

Output:

Figure:

II. BRACKETING METHOD

If you had a roots problem in the days before computing, you'd often be told to use trial and error to come up with the root. It is preferable to have methods that come up with the correct answer automatically. Interestingly, as with trial and error, these approaches require an initial guess to get started. Then they systematically home in on the root in an iterative fashion. One such approach is the bracketing method. Bracketing methods are based on making two initial guesses that bracket the root, that is, that lie on either side of the root. For well-posed problems, the bracketing methods always work but converge slowly (i.e., they typically take more iterations to home in on the answer). In these cases, initial guesses are required. These may naturally arise from the physical context you are analyzing. However, in other cases, good initial guesses may not be obvious. In such cases, automated approaches to obtain guesses would be useful. The incremental search method (an automated approach to obtain initial guesses) tests the value of the function at evenly spaced intervals and finds brackets by identifying function sign changes between neighboring points.

False Position Method: The false position method is a term for problem-solving methods in arithmetic, algebra, and calculus. It is an algorithm for finding roots which retains that prior estimate for which the function value has opposite sign from the function value at the current best estimate of the root. In this way, the method of false position keeps the root bracketed (Press et al. 1992). In simple terms, these methods begin by attempting to evaluate a problem using test ("false") values for the variables, and then adjust the values accordingly.

General Equation:
xr = xu - f(xu)(xl - xu) / [f(xl) - f(xu)]
where:
xr = estimated root
xl & xu = initial guesses (lower and upper)

Help: FalseP: Solving Root Estimates using False Position Method
An algorithm for finding roots which retains that prior estimate for which the function value has opposite sign from the function value at the current best estimate of the root. In this way, the method of false position keeps the root bracketed (Press et al. 1992).
Syntax: FalseP(fx,xl,xu,n,sol)
where:
fx = given function to be evaluated
xl = lower guess
xu = upper guess
n = maximum iteration
sol = showing of solution

Codes:
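*Note: The listing is an image; here is a minimal MATLAB sketch of a FalseP implementation consistent with the Help entry (n is treated as a fixed iteration count). The implementation details are assumptions.

function xr = FalseP(fx, xl, xu, n, sol)
% False position: keep the root bracketed while iterating n times.
f = str2func(['@(x)' fx]);
if f(xl)*f(xu) > 0, error('Root is not bracketed by [xl, xu].'); end
for i = 1:n
    xr = xu - f(xu)*(xl - xu)/(f(xl) - f(xu));   % false-position formula
    if strcmpi(sol, 'yes')
        fprintf('iter %2d: xr = %.6f, f(xr) = %.3e\n', i, xr, f(xr));
    end
    if f(xl)*f(xr) < 0   % root lies in the lower subinterval
        xu = xr;
    else                 % root lies in the upper subinterval
        xl = xr;
    end
end
end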

Example: Problem:
From page 142 of the book: Example 5.6
Use bisection and false position to locate the root of f(x) = x^10 - 1 between x = 0 and 1.3.
n = 5
sol = yes

Syntax:

Output:

III. OPEN METHOD

The open methods require only a single starting value or two starting values that do not necessarily bracket the root. As such, they sometimes diverge or move away from the true root as the computation progresses. However, when the open methods converge they usually do so much more quickly than the bracketing methods. We will begin our discussion of open techniques with a simple approach that is useful for illustrating their general form and also for demonstrating the concept of convergence.

Comparing Open Method & Bracketing Method

Open Method:
- Usually starts from a single point (initial estimate)
- Iteratively finds a new estimate
- Advantage: faster convergence (when it works)
- Disadvantage: sometimes diverges from the true root

Bracketing Method:
- Root is located within the lower and upper bound
- Advantage: always converges to the solution
- Disadvantage: relatively slow

Simple Fixed-Point Iteration
As just mentioned, open methods employ a formula to predict the root. Such a formula can be developed for simple fixed-point iteration (or, as it is also called, one-point iteration or successive substitution) by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation:
x = g(x)
This transformation can be accomplished either by algebraic manipulation or by simply adding x to both sides of the original equation.
General Equation: The utility of Eq. (6.1) is that it provides a formula to predict a new value of x as a function of an old value of x:
x(i+1) = g(x(i)),  ea = |(x(i+1) - x(i))/x(i+1)| × 100%
where:
x(i) = initial guess
x(i+1) = new estimate
ea = approximation error

Help: Fixdpt: Solving the roots using Simple Fixed-Point Iteration
In numerical analysis, fixed-point iteration is a method of computing fixed points (roots) of iterated functions.
Syntax: Fixdpt(fx,xo,n,sol)
where:
fx = given function to be evaluated
xo = initial guess
n = number of significant figures
sol = showing of solution

Codes:
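*Note: image listing in the original; a minimal sketch follows. fx is assumed to be the right-hand side g(x) of the rearranged equation x = g(x), and n is interpreted through the stopping criterion es = 0.5*10^(2-n) percent; both are assumptions.

function x = Fixdpt(fx, xo, n, sol)
% Simple fixed-point iteration: x(i+1) = g(x(i)).
g  = str2func(['@(x)' fx]);   % e.g. fx = 'exp(-x)'
es = 0.5*10^(2 - n);  ea = 100;  x = xo;  i = 0;
while ea > es && i < 100
    xnew = g(x);  i = i + 1;
    if xnew ~= 0, ea = abs((xnew - x)/xnew)*100; end   % approximation error
    x = xnew;
    if strcmpi(sol, 'yes'), fprintf('iter %2d: x = %.6f, ea = %.4f%%\n', i, x, ea); end
end
end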

Example: Problem:
Use simple fixed-point iteration to locate the root of f(x) = e^(-x) - x (i.e., iterate x = e^(-x)) starting with an initial guess of xo = 0.
n = 5
sol = yes

Syntax:

Output:

Newton-Raphson Method: Perhaps the most widely used of all root-locating formulas is the Newton-Raphson method. If the initial guess at the root is x(i), a tangent can be extended from the point [x(i), f(x(i))]. The point where this tangent crosses the x axis usually represents an improved estimate of the root. The Newton-Raphson method can be derived on the basis of this geometrical interpretation. As in Fig. 6.4, the first derivative at x(i) is equivalent to the slope:
f'(x(i)) = [f(x(i)) - 0] / [x(i) - x(i+1)]

which can be rearranged to form the Newton-Raphson formula.

General Equation:
x(i+1) = x(i) - f(x(i)) / f'(x(i))
where:
x(i+1) = new estimated value
x(i) = initial guess
f'(x(i)) = first derivative of the function at point x(i)

Help: Newrap: Solving the roots using Newton-Raphson Method
Newton's method, also called the Newton-Raphson method, is a root-finding algorithm that uses the first few terms of the Taylor series of a function f(x) in the vicinity of a suspected root.
Syntax: Newrap(fx,xo,n,sol)
where:
fx = given function
xo = initial guess
n = significant figures
sol = showing of solution

Codes:
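*Note: image listing; a minimal sketch assuming the derivative is formed symbolically and that n sets the stopping criterion es = 0.5*10^(2-n) percent (the example below instead fixes the iteration count, so this interpretation is a judgment call).

function xr = Newrap(fx, xo, n, sol)
% Newton-Raphson with a symbolically computed derivative.
% Requires the Symbolic Math Toolbox.
syms x
f   = sym(fx);
fh  = matlabFunction(f, 'Vars', x);
dfh = matlabFunction(diff(f, x), 'Vars', x);
es = 0.5*10^(2 - n);  ea = 100;  xr = xo;  i = 0;
while ea > es && i < 50
    xold = xr;
    xr = xold - fh(xold)/dfh(xold);   % Newton-Raphson formula
    i = i + 1;
    if xr ~= 0, ea = abs((xr - xold)/xr)*100; end
    if strcmpi(sol, 'yes'), fprintf('iter %2d: xr = %.6f, ea = %.4f%%\n', i, xr, ea); end
end
end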

Example: Problem:
From page 176 of the book: Problem 6.3
Determine the highest real root of f(x) = x^3 - 6x^2 + 11x - 6.1
(b) Using Newton-Raphson method (three iterations, xi = 3.5)
sol = yes

Syntax:

Output:

Secant Method: A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative. Although this is not inconvenient for polynomials and many other functions, there are certain functions whose derivatives may be difficult or inconvenient to evaluate. For these cases, the derivative can be approximated by a backward finite divided difference:
f'(x(i)) ≈ [f(x(i-1)) - f(x(i))] / [x(i-1) - x(i)]

This approximation can be substituted into Eq. 6.6 (Newton-Raphson Formula) to yield the iterative equation

General Equation:
x(i+1) = x(i) - f(x(i))[x(i-1) - x(i)] / [f(x(i-1)) - f(x(i))]
where:
x(i+1) = new estimated value
x(i) = initial guess
x(i-1) = previous estimated value

Help: Secant: Solving the Roots using Secant Method
A root-finding algorithm which assumes a function to be approximately linear in the region of interest. Each improvement is taken as the point where the approximating line crosses the axis. The secant method retains only the most recent estimate, so the root does not necessarily remain bracketed.
Syntax: Secant(fx,ximuns1,xi,n,sol)
where:
fx = given function to be evaluated
ximuns1 = initial guess, x(i-1)
xi = initial guess, x(i)
n = number of significant figures
sol = showing of solution

Codes:
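*Note: image listing; a minimal sketch with the same assumed stopping criterion as the sketches above.

function xr = Secant(fx, ximuns1, xi, n, sol)
% Secant method: derivative replaced by a backward divided difference.
f  = str2func(['@(x)' fx]);
es = 0.5*10^(2 - n);  ea = 100;  i = 0;
while ea > es && i < 50
    xr = xi - f(xi)*(ximuns1 - xi)/(f(ximuns1) - f(xi));  % secant formula
    i  = i + 1;
    if xr ~= 0, ea = abs((xr - xi)/xr)*100; end
    ximuns1 = xi;  xi = xr;   % shift the two most recent estimates
    if strcmpi(sol, 'yes'), fprintf('iter %2d: xr = %.6f, ea = %.4f%%\n', i, xr, ea); end
end
end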

Example: Problem:
From page 176 of the book: Problem 6.3
Determine the highest real root of f(x) = x^3 - 6x^2 + 11x - 6.1
(c) Using Secant method (three iterations, xi-1 = 2.5, xi = 3.5)
sol = yes

Syntax:

Output:

Polynomial Method: Polynomials are a special type of nonlinear algebraic equation of the general form:
fn(x) = a0 + a1x + a2x^2 + ... + anx^n
where:
n = the order of the polynomial
a0, a1, ..., an = constant coefficients
In many (but not all) cases, the coefficients will be real. For such cases, the roots can be real and/or complex. In general, an nth-order polynomial will have n roots. Polynomials have many applications in engineering and science. For example, they are used extensively in curve fitting. However, one of their most interesting and powerful applications is in characterizing dynamic systems, and, in particular, linear systems. Examples include reactors, mechanical devices, structures, and electrical circuits.

Help: Polyrt: Solving the Roots using Polynomial Method
A root of a polynomial P(z) is a number 'zi' such that P(zi) = 0. The fundamental theorem of algebra states that a polynomial P(z) of degree 'n' has 'n' roots, some of which may be degenerate.
Syntax: Polyrt(fx)
where:
fx = given function to be evaluated

Codes:
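*Note: image listing; a minimal sketch built on MATLAB's own roots() function. Whether the real Polyrt accepts a string, a coefficient vector, or both is an assumption.

function r = Polyrt(fx)
% Convert the polynomial to its coefficient vector and call roots().
if ischar(fx) || isstring(fx)
    c = sym2poly(sym(fx));   % string form; needs the Symbolic Math Toolbox
else
    c = fx;                  % already a coefficient vector (descending powers)
end
r = roots(c);
disp('Roots of the polynomial:');
disp(r);
end

A hypothetical call for the example below:
r = Polyrt('x^5 - 3.5*x^4 + 2.75*x^3 + 2.125*x^2 - 3.875*x + 1.25')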

Example: Problem:
From page 172 of the book: Example 6.8
Use the following equation to explore how MATLAB can be employed to manipulate polynomials:
f5(x) = x^5 - 3.5x^4 + 2.75x^3 + 2.125x^2 - 3.875x + 1.25
Note that this polynomial has three real roots (0.5, -1.0, and 2) and one pair of complex roots (1 ± 0.5i).

Syntax:

Output:

Muller's Method: Muller's method is related to the secant method, which obtains a root estimate by projecting a straight line to the x axis through two values of the function. Muller's method instead uses three points: it obtains the coefficients of the parabola through the three points, substitutes them into the quadratic formula, and takes the point where the parabola intersects the x axis as the new root estimate.

Writing the parabola as p(x) = a(x - x2)^2 + b(x - x2) + c and substituting the three points (x0, f(x0)), (x1, f(x1)), (x2, f(x2)) into this system gives the coefficients:
a = (d1 - d0)/(h1 + h0),  b = a*h1 + d1,  c = f(x2)
where h0 = x1 - x0, h1 = x2 - x1, d0 = [f(x1) - f(x0)]/h0, and d1 = [f(x2) - f(x1)]/h1.

General Equation:
x3 = x2 + (-2c) / (b ± sqrt(b^2 - 4ac))
where:
x3 = new estimated value
x0, x1, x2 = given points
a, b, c = coefficients of the quadratic
(the sign in the denominator is chosen to agree with the sign of b, which makes the denominator largest in magnitude)

Help: Muller: Solving the roots using Muller's Method
Muller's method is based on the secant method, which constructs at every iteration a line through two points on the graph of f. Instead, Muller's method uses three points, constructs the parabola through these three points, and takes the intersection of the x-axis with the parabola to be the next approximation.
Syntax: Muller(fx,xo,x1,x2,n,sol)
where:
fx = given equation
xo, x1, x2 = initial guesses
n = number of significant figures
sol = showing of solution

Codes:
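*Note: image listing; a minimal sketch following the parabola construction described above. The stopping rule is an assumption.

function xr = Muller(fx, xo, x1, x2, n, sol)
% Muller's method: fit a parabola through three points and take its
% x-axis intersection as the next estimate.
f  = str2func(['@(x)' fx]);
es = 0.5*10^(2 - n);  ea = 100;  it = 0;
while ea > es && it < 50
    h0 = x1 - xo;  h1 = x2 - x1;
    d0 = (f(x1) - f(xo))/h0;  d1 = (f(x2) - f(x1))/h1;
    a = (d1 - d0)/(h1 + h0);  b = a*h1 + d1;  c = f(x2);
    rad = sqrt(b^2 - 4*a*c);                 % may be complex
    if abs(b + rad) > abs(b - rad), den = b + rad; else, den = b - rad; end
    xr = x2 + (-2*c)/den;                    % largest-denominator branch
    it = it + 1;
    if xr ~= 0, ea = abs((xr - x2)/xr)*100; end
    xo = x1;  x1 = x2;  x2 = xr;             % shift the three points
    if strcmpi(sol, 'yes'), fprintf('iter %2d: xr = %.6f, ea = %.4f%%\n', it, real(xr), ea); end
end
end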

Example: Problem:
From Prefinal Examinations #1)
fx = x^5 - 9x^4 - 20x^3 + 204x^2 + 208x - 384
xo = 4.5
x1 = 5.5
x2 = 5
n = 8
sol = yes

Syntax:

Output:

IV. GAUSS ELIMINATION

In linear algebra, Gaussian elimination (also known as row reduction) is an algorithm for solving systems of linear equations. It is usually understood as a sequence of operations performed on the associated matrix of coefficients. This method can also be used to find the rank of a matrix, to calculate the determinant of a matrix, and to calculate the inverse of an invertible square matrix.

To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as is possible. There are three types of elementary row operations: 1) swapping two rows, 2) multiplying a row by a non-zero number, and 3) adding a multiple of one row to another row. Using these operations, a matrix can always be transformed into an upper triangular matrix, and in fact one that is in row echelon form. Once all of the leading coefficients (the leftmost non-zero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used.

The process of row reduction makes use of elementary row operations, and can be divided into two parts. The first part (sometimes called Forward Elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called Back Substitution) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form.

Cramer's Rule: Cramer's rule is another solution technique that is best suited to small numbers of equations. Before describing this method, we will briefly review the concept of the determinant, which is used to implement Cramer's rule. In addition, the determinant has relevance to the evaluation of the ill-conditioning of a matrix.

*Cramer's Rule: This rule states that each unknown in a system of linear algebraic equations may be expressed as a fraction of two determinants with denominator D and with the numerator obtained from D by replacing the column of coefficients of the unknown in question by the constants b1, b2, ..., bn. For example, for three equations, x1 would be computed as:

General Equation:

     | b1  a12  a13 |
x1 = | b2  a22  a23 |  /  D
     | b3  a32  a33 |

where:
b1, b2, ..., bn = constants of the equations (right-hand sides)
aij = coefficients of the unknowns
D = determinant of the coefficient matrix [A]

Help: Cramers: Determining Unknowns using Cramer's Rule
Cramer's rule is an explicit formula for the solution of a system of linear equations with as many equations as unknowns, valid whenever the system has a unique solution. It expresses the solution in terms of the determinants of the (square) coefficient matrix and of matrices obtained from it by replacing one column by the vector of right-hand sides of the equations.
Syntax: Cramers(varargin)
where:
varargin = coefficients of the unknowns of the given equations

Codes:
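*Note: image listing; a minimal sketch. Each argument is assumed to be one equation's coefficient row followed by its constant.

function x = Cramers(varargin)
% Cramer's rule: x(i) = det(Ai)/det(A), where Ai is A with column i
% replaced by the vector of constants b.
M = cell2mat(varargin');      % stack rows into the augmented matrix [A | b]
A = M(:, 1:end-1);  b = M(:, end);
D = det(A);
n = length(b);  x = zeros(n, 1);
for i = 1:n
    Ai = A;  Ai(:, i) = b;    % replace column i with the constants
    x(i) = det(Ai)/D;
end
end

A hypothetical call for Example 9.2 below:
x = Cramers([0.3 0.52 1 -0.01], [0.5 1 1.9 0.67], [0.1 0.3 0.5 -0.44])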

Example: Problem:
From page 233 of the book: Example 9.2
Use Cramer's Rule to solve:
eq1: 0.3x1 + 0.52x2 + x3 = -0.01
eq2: 0.5x1 + x2 + 1.9x3 = 0.67
eq3: 0.1x1 + 0.3x2 + 0.5x3 = -0.44

Syntax:

Output:

Naïve Gauss Elimination: This section includes the systematic techniques for forward elimination and back substitution that comprise Gauss elimination. Although these techniques are ideally suited for implementation on computers, some modifications will be required to obtain a reliable algorithm. In particular, the computer program must avoid division by zero. The following method is called naïve Gauss elimination because it does not avoid this problem.
*Forward Elimination of Unknowns:
1. Reduce the coefficient matrix [A] to an upper triangular system.
2. Eliminate x1 from the 2nd to nth equations.
3. Eliminate x2 from the 3rd to nth equations.
4. Continue the process until the nth equation has only one non-zero coefficient.
*Back Substitution: Solve the last equation for xn, then substitute upward, one equation at a time.

General Form:
Forward Elimination: for k = 1, ..., n-1, subtract (a(i,k)/a(k,k)) times row k from each row i below it.
Back Substitution:
xn = b'n / a'nn
xi = (b'i - sum of a'ij*xj for j = i+1, ..., n) / a'ii,  for i = n-1, ..., 1

Help: Ngauss: Determining Unknowns using Naive-Gauss Elimination
Naive-Gauss elimination solves a system of equations but does not avoid division by zero. The technique for n equations consists of two phases: (1) elimination of unknowns and (2) solution through back substitution.

Syntax: Ngauss(varargin)
where:
varargin = coefficients of the unknowns of the given equations

Codes:
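*Note: image listing; a minimal sketch of naive Gauss elimination (no pivoting, matching the "does not avoid division by zero" description). The input convention is the same assumption as in the Cramers sketch.

function x = Ngauss(varargin)
% Naive Gauss elimination: forward elimination, then back substitution.
M = cell2mat(varargin');       % augmented matrix [A | b]
n = size(M, 1);
for k = 1:n-1                          % forward elimination
    for i = k+1:n
        factor = M(i,k)/M(k,k);        % naive: no check for a zero pivot
        M(i,k:end) = M(i,k:end) - factor*M(k,k:end);
    end
end
x = zeros(n, 1);                       % back substitution
x(n) = M(n,end)/M(n,n);
for i = n-1:-1:1
    x(i) = (M(i,end) - M(i,i+1:n)*x(i+1:n))/M(i,i);
end
end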

Example: Problem:
From page 238 of the book: Example 9.3
Use Gauss Elimination to solve:
eq1: 3x1 - 0.1x2 - 0.2x3 = 7.85
eq2: 0.1x1 + 7x2 - 0.3x3 = -19.3
eq3: 0.3x1 - 0.2x2 + 10x3 = 71.4

Syntax:

Output:

Gauss-Jordan Elimination: Gauss-Jordan elimination is a variant of Gaussian elimination. Again, we are transforming the coefficient matrix into another matrix that is much easier to solve, and the system represented by the new augmented matrix has the same solution set as the original system of linear equations. In Gauss-Jordan elimination, the goal is to transform the coefficient matrix into a diagonal matrix, and the zeros are introduced into the matrix one column at a time. We work to eliminate the elements both above and below the diagonal element of a given column in one pass through the matrix. The general procedure for Gauss-Jordan elimination can be summarized in the following steps:
Gauss-Jordan Elimination Steps:
1. Choose the leftmost nonzero column and get a 1 at the top.
2. Use multiples of the row containing the 1 from step 1 to get zeros in all remaining places in the column containing this 1.
3. Repeat step 1 with the submatrix formed by (mentally) deleting the top row.
4. Repeat step 2 with the entire matrix.
5. Repeat step 1 with the submatrix formed by (mentally) deleting the top two rows.
6. Repeat step 2 with the entire matrix.
7. The matrix is now in reduced form, and we can proceed to solve the corresponding reduced system.
Finally, make each diagonal element equal to one by dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row.
General Form:
[A | b]  →  [I | x]
where:
[A | b] = augmented matrix of coefficients and constants
I = identity matrix
x = solution vector

Help: Gaussj: Determining Unknowns using Gauss-Jordan Elimination
A method of solving a linear system of equations. This is done by transforming the system's augmented matrix into reduced row-echelon form by means of row operations.
Syntax: Gaussj(varargin)
where:
varargin = coefficients of the unknowns of the given equations

Codes:
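*Note: image listing; a minimal sketch of Gauss-Jordan elimination with pivot normalization (same assumed input convention as the earlier sketches).

function x = Gaussj(varargin)
% Gauss-Jordan elimination: reduce [A | b] to [I | x].
M = cell2mat(varargin');       % augmented matrix [A | b]
n = size(M, 1);
for k = 1:n
    M(k,:) = M(k,:)/M(k,k);            % make the pivot equal to 1
    for i = [1:k-1, k+1:n]             % zero out the rest of column k
        M(i,:) = M(i,:) - M(i,k)*M(k,:);
    end
end
x = M(:, end);                         % the solution column of [I | x]
end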

Example: Problem:
From page 238 of the book: Example 9.3
Use Gauss Elimination to solve:
eq1: 3x1 - 0.1x2 - 0.2x3 = 7.85
eq2: 0.1x1 + 7x2 - 0.3x3 = -19.3
eq3: 0.3x1 - 0.2x2 + 10x3 = 71.4

Syntax:

Output:

V. LU DECOMPOSITION (also called LU FACTORIZATION)

Just as was the case with Gauss elimination, LU factorization requires pivoting to avoid division by zero. However, to simplify the following description, we will omit pivoting. In addition, the following explanation is limited to a set of three simultaneous equations. The results can be directly extended to n-dimensional systems. Suppose that Eq. (10.2) could be expressed as an upper triangular system. For example, for a 3 × 3 system:

[u11 u12 u13] [x1]   [d1]
[ 0  u22 u23] [x2] = [d2]
[ 0   0  u33] [x3]   [d3]

A two-step strategy for obtaining solutions can be based on Eqs. (10.3), (10.7), and (10.8):
1. LU factorization step. [A] is factored or decomposed into lower [L] and upper [U] triangular matrices.
2. Substitution step. [L] and [U] are used to determine a solution {x} for a right-hand side {b}. This step itself consists of two steps. First, Eq. (10.8) is used to generate an intermediate vector {d} by forward substitution. Then, the result is substituted into Eq. (10.3), which can be solved by back substitution for {x}.

Help: LUfact: Solving Systems of Linear Algebraic Equations
LU decomposition (where 'LU' stands for 'Lower Upper', and also called LU factorization) factors a matrix as the product of a lower triangular matrix and an upper triangular matrix. The LU decomposition can be viewed as the matrix form of Gaussian elimination.
Syntax: LUfact(varargin)
where:
varargin = coefficients of the unknowns of the given equations

Codes:
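*Note: image listing; a minimal sketch of LU factorization without pivoting, mirroring the two-step strategy described above. The Doolittle factorization and input convention are assumptions.

function x = LUfact(varargin)
% LU factorization (no pivoting), then forward and back substitution.
M = cell2mat(varargin');
A = M(:, 1:end-1);  b = M(:, end);  n = length(b);
L = eye(n);  U = A;
for k = 1:n-1                          % Doolittle factorization
    for i = k+1:n
        L(i,k) = U(i,k)/U(k,k);
        U(i,k:n) = U(i,k:n) - L(i,k)*U(k,k:n);
    end
end
d = zeros(n, 1);                       % forward substitution: L*d = b
for i = 1:n
    d(i) = b(i) - L(i,1:i-1)*d(1:i-1);
end
x = zeros(n, 1);                       % back substitution: U*x = d
for i = n:-1:1
    x(i) = (d(i) - U(i,i+1:n)*x(i+1:n))/U(i,i);
end
end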

Example: Problem:
From page 238 of the book: Example 9.3 *Note: Use LU-Factorization instead
Use Gauss Elimination to solve:
eq1: 3x1 - 0.1x2 - 0.2x3 = 7.85
eq2: 0.1x1 + 7x2 - 0.3x3 = -19.3
eq3: 0.3x1 - 0.2x2 + 10x3 = 71.4

Syntax:

Output:

VI. POLYNOMIAL INTERPOLATION

Polynomial interpolation is the interpolation of a given data set by a polynomial: given some points, find a polynomial which goes exactly through these points. This is the most common method used to estimate intermediate values between precise data points. The general formula for an (n - 1)th-order polynomial can be written as:
f(x) = a1 + a2x + a3x^2 + ... + anx^(n-1)
For n data points, there is one and only one polynomial of order (n - 1) that passes through all the points. For example, there is only one straight line (i.e., a first-order polynomial) that connects two points (Fig. 17.1a). Similarly, only one parabola connects a set of three points (Fig. 17.1b). Polynomial interpolation consists of determining the unique (n - 1)th-order polynomial that fits n data points. This polynomial then provides a formula to compute intermediate values.

FIGURE Examples of interpolating polynomials: (a) first-order (linear) connecting two points, (b) second-order (quadratic or parabolic) connecting three points, and (c) third-order (cubic) connecting four points.

General Form of Newton's Interpolating Polynomial: It can be generalized to fit an (n - 1)th-order polynomial to n data points. The (n - 1)th-order polynomial is:
f(n-1)(x) = b1 + b2(x - x1) + b3(x - x1)(x - x2) + ... + bn(x - x1)(x - x2)···(x - x(n-1))

These differences can be used to evaluate the coefficients in Eqs. (17.11) through (17.14), which can then be substituted into Eq. (17.10) to yield the general form of Newton's interpolating polynomial:
General Equation:
f(n-1)(x) = f(x1) + (x - x1)f[x2, x1] + (x - x1)(x - x2)f[x3, x2, x1] + ... + (x - x1)(x - x2)···(x - x(n-1))f[xn, x(n-1), ..., x1]
where the bracketed quantities are finite divided differences, e.g. f[xi, xj] = [f(xi) - f(xj)] / (xi - xj).

Help: Newint: Solving Polynomial Interpolation Using Newton's Interpolation
Newton's interpolating polynomial is the interpolation polynomial for a given set of data points in the Newton form. The coefficients of the polynomial are calculated using divided differences.
Syntax: Newint(x,y,xx,sol)
where:
x = values for independent variable
y = values for dependent variable
xx = value at which interpolation is calculated
sol = showing of solution

Codes:
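*Note: image listing; a minimal sketch building the standard divided-difference table. The table display under sol is an assumption.

function yint = Newint(x, y, xx, sol)
% Newton interpolating polynomial via a divided-difference table.
n = length(x);
b = zeros(n, n);
b(:,1) = y(:);                          % zeroth-order differences
for j = 2:n
    for i = 1:n-j+1
        b(i,j) = (b(i+1,j-1) - b(i,j-1))/(x(i+j-1) - x(i));
    end
end
yint = b(1,1);  xterm = 1;
for j = 2:n
    xterm = xterm*(xx - x(j-1));        % (xx-x1)(xx-x2)...
    yint  = yint + b(1,j)*xterm;
end
if strcmpi(sol, 'yes'), disp('Divided-difference table:'); disp(b); end
end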

Example: Problem:
From page 258 of the book: Example 10.2)

In Example 17.3, data points at x1 = 1, x2 = 4 and x3 = 6 were used to estimate ln(2) with a parabola. Now, adding a fourth point [x4 = 5; f(x4) = 1.609438], estimate ln(2) with a third-order Newton's Interpolating Polynomial.

Syntax:

Output:

Lagrange Interpolating Polynomial: Suppose we formulate a linear interpolating polynomial as the weighted average of the two values that we are connecting by a straight line:
f(x) = L1*f(x1) + L2*f(x2)

where the Ls are the weighting coefficients. It is logical that the first weighting coefficient is the straight line that is equal to 1 at x1 and 0 at x2:
L1 = (x - x2)/(x1 - x2)

Similarly, the second coefficient is the straight line that is equal to 1 at x2 and 0 at x1:
L2 = (x - x1)/(x2 - x1)

Substituting these coefficients into Eq. (17.19) yields the straight line that connects the points (Fig. 17.8):
f1(x) = f(x1)(x - x2)/(x1 - x2) + f(x2)(x - x1)/(x2 - x1)
where the nomenclature f1(x) designates that this is a first-order polynomial. Equation (17.20) is referred to as the linear Lagrange interpolating polynomial.
General Equation:
f(n-1)(x) = Σ (for i = 1 to n) Li(x)f(xi)
Li(x) = Π (for j = 1 to n, j ≠ i) (x - xj)/(xi - xj)
where:
n = the number of data points
Π = designates "the product of"

Help: Lgrnge: Polynomial Interpolation Using Lagrange Interpolating Polynomial
For a given set of distinct points xj and numbers yj, the Lagrange polynomial is the polynomial of the least degree that at each point xj assumes the corresponding value yj (i.e. the functions coincide at each point).
Syntax: Lgrnge(x,y,xx)
where:
x = data points
y = function values at the x data points
xx = value of x at which the interpolation is calculated

Codes:
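*Note: image listing; a minimal sketch implementing the sum-of-weighted-products form above.

function yint = Lgrnge(x, y, xx)
% Lagrange interpolation of order n-1 through all n data points.
n = length(x);
yint = 0;
for i = 1:n
    L = 1;
    for j = [1:i-1, i+1:n]
        L = L*(xx - x(j))/(x(i) - x(j));   % weighting coefficient Li(xx)
    end
    yint = yint + L*y(i);
end
end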

Example: Problem:

From page 418 of the book: Example 17.5
Use a Lagrange interpolating polynomial of the first and second order to evaluate the density of unused motor oil at T = 15 based on the following data:
x1 = 0,  f(x1) = 3.850
x2 = 20, f(x2) = 0.800
x3 = 40, f(x3) = 0.212

Syntax:

Output:

VII. SPLINES

There are cases where the use of polynomial interpolation can lead to erroneous results because of round-off error and oscillations. An alternative approach is to apply lower-order polynomials in a piecewise fashion to subsets of data points. Such connecting polynomials are called spline functions. Figure 18.1 illustrates a situation where a spline performs better than a higher-order polynomial. This is the case where a function is generally smooth but undergoes an abrupt change somewhere along the region of interest. The step increase depicted in Fig. 18.1 is an extreme example of such a change and serves to illustrate the point. Figure 18.1a through c illustrates how higher-order polynomials tend to swing through wild oscillations in the vicinity of an abrupt change. In contrast, the spline also connects the points, but because it is limited to lower-order changes, the oscillations are kept to a minimum. As such, the spline usually provides a superior approximation of the behavior of functions that have local, abrupt changes.

FIGURE 18.1: A visual representation of a situation where splines are superior to higher-order interpolating polynomials. The function to be fit undergoes an abrupt increase at x = 0. Parts (a) through (c) indicate that the abrupt change induces oscillations in interpolating polynomials. In contrast, because it is limited to straight-line connections, a linear spline (d) provides a much more acceptable approximation.

Linear Spline:

The notation used for splines is displayed in Fig. 18.3. For n data points (i = 1, 2, ..., n), there are n - 1 intervals. Each interval i has its own spline function, si(x). For linear splines, each function is merely the straight line connecting the two points at each end of the interval, which is formulated as:
General Equation:
si(x) = ai + bi(x - xi)
where:
ai = the intercept, which is defined as: ai = f(xi)
bi = the slope of the straight line connecting the points: bi = [f(x(i+1)) - f(xi)] / (x(i+1) - xi)

Help: Linspl: Solving First-Order Curves using Linear Spline
Given (x0,y0), (x1,y1), ..., (xn-1,yn-1), (xn,yn), fit linear splines to the data. This simply involves connecting the consecutive data through straight lines. So if the above data is given in ascending order, the linear splines are given by yi = f(xi).
Syntax: Linspl(xi,fi,x)
where:
xi = data points
fi = function values at the data points (the intercepts ai)
x = point where the splines are to be evaluated

Codes:
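*Note: image listing; a minimal sketch. It assumes xi is ascending and x lies within the data range.

function y = Linspl(xi, fi, x)
% Locate the interval containing x and evaluate the straight line
% through its two endpoints.
i = find(xi <= x, 1, 'last');
if i == length(xi), i = i - 1; end        % x at the last knot
b = (fi(i+1) - fi(i))/(xi(i+1) - xi(i));  % slope of segment i
y = fi(i) + b*(x - xi(i));                % linear spline value
end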

Example:

Problem:
From page 432 of the book: Example 18.1
Fit the data below with first-order splines. Evaluate the function at x = 5.
i:  1    2    3    4
xi: 3.0  4.5  7.0  9.0
fi: 2.5  1.0  2.5  0.5

Syntax:

Output:

Figure:

Quadratic Spline:

The objective in quadratic splines is to derive a second-order polynomial for each interval between data points. The polynomial for each interval can be represented generally as:
si(x) = ai + bi(x - xi) + ci(x - xi)^2

Evaluation of Unknowns:
1. The function must pass through all the points. This is called a continuity condition, and it means that ai = f(xi).
2. The function values of adjacent polynomials must be equal at the knots. This condition can be written for knot i + 1 as:
ai + bi*hi + ci*hi^2 = a(i+1)
where hi = x(i+1) - xi. Because ai = f(xi), the equation above simplifies to:
f(xi) + bi*hi + ci*hi^2 = f(x(i+1))
3. The first derivatives at the interior nodes must be equal. Equation (18.5) can be differentiated to yield:
s'i(x) = bi + 2ci(x - xi)
The equivalence of the derivatives at an interior node, i + 1, can therefore be written as:
bi + 2ci*hi = b(i+1)
4. Assume that the second derivative is zero at the first point. Because the second derivative of Eq. (18.5) is 2ci, this condition can be expressed mathematically as:
c1 = 0

Help: Quaspl: Solving Second-Order Curves using Quadratic Spline
A quadratic spline is a spline constructed of piecewise second-order polynomials which pass through a set of m control points. A quadratic spline interpolation theory is developed which, in general, produces better fits to continuous functions than does the existing cubic spline interpolation theory.
Syntax: Quaspl(xi,fi,x)
where:
xi = values of x
fi = values of the function at xi
x = point where the spline is to be evaluated

Codes:
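*Note: image listing; a minimal sketch. With c1 = 0, the conditions listed above can be solved sequentially (each ci from continuity, each b(i+1) from the slope condition), which is what this sketch does; whether the real Quaspl solves the full linear system instead is unknown.

function y = Quaspl(xi, fi, x)
% Quadratic splines with the assumption c(1) = 0; coefficients follow
% sequentially from the continuity and derivative conditions.
n = length(xi);
h = diff(xi);                    % interval widths, hi = x(i+1) - xi
b = zeros(n-1, 1);  c = zeros(n-1, 1);
b(1) = (fi(2) - fi(1))/h(1);     % since c(1) = 0
for i = 1:n-2
    c(i)   = (fi(i+1) - fi(i) - b(i)*h(i))/h(i)^2;   % continuity
    b(i+1) = b(i) + 2*c(i)*h(i);                     % equal first derivatives
end
c(n-1) = (fi(n) - fi(n-1) - b(n-1)*h(n-1))/h(n-1)^2;
k = find(xi <= x, 1, 'last');    % interval containing x (xi ascending)
if k >= n, k = n - 1; end
y = fi(k) + b(k)*(x - xi(k)) + c(k)*(x - xi(k))^2;
end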

Example:

Problem:
From page 432 of the book: Example 18.1 *Instead of first-order, use second-order splines.
Fit the data below with second-order splines. Evaluate the function at x = 5.
i:  1    2    3    4
xi: 3.0  4.5  7.0  9.0
fi: 2.5  1.0  2.5  0.5

Syntax:

Output:

Figure:

Cubic Spline: As stated at the beginning of the previous section, cubic splines are most frequently used in practice. The shortcomings of linear and quadratic splines have already been discussed. Quartic or higher-order splines are not used because they tend to exhibit the instabilities inherent in higher-order polynomials. Cubic splines are preferred because they provide the simplest representation that exhibits the desired appearance of smoothness. The objective in cubic splines is to derive a third-order polynomial for each interval between knots as represented generally by:

General Equation:
si(x) = ai + bi(x - xi) + ci(x - xi)^2 + di(x - xi)^3

Help: Cubspl: Solving Third-Order Curves using Cubic Spline
A cubic spline is a spline constructed of piecewise third-order polynomials which pass through a set of m control points. The second derivative of each polynomial is commonly set to zero at the endpoints, since this provides a boundary condition that completes the system of m-2 equations.

Syntax: Cubspl(xi,fi,x)
where:
xi = data points
fi = function values at the data points
x = point where the splines are to be evaluated

Codes:
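*Note: image listing; a minimal sketch of a natural cubic spline (second derivatives zero at both end knots, per the Help description). The tridiagonal system for the knot second derivatives and the evaluation formula are the standard textbook forms; everything else is an assumption.

function y = Cubspl(xi, fi, x)
% Natural cubic spline: solve for the second derivatives at the knots,
% then evaluate the cubic on the interval containing x.
n = length(xi);  h = diff(xi(:));  f = fi(:);
A = zeros(n);  r = zeros(n, 1);
A(1,1) = 1;  A(n,n) = 1;               % natural end conditions: s'' = 0
for i = 2:n-1
    A(i,i-1) = h(i-1);
    A(i,i)   = 2*(h(i-1) + h(i));
    A(i,i+1) = h(i);
    r(i) = 6*((f(i+1) - f(i))/h(i) - (f(i) - f(i-1))/h(i-1));
end
s2 = A\r;                              % second derivatives at the knots
k = find(xi <= x, 1, 'last');          % interval containing x
if k >= n, k = n - 1; end
y = s2(k)*(xi(k+1) - x)^3/(6*h(k)) + s2(k+1)*(x - xi(k))^3/(6*h(k)) ...
  + (f(k)/h(k) - s2(k)*h(k)/6)*(xi(k+1) - x) ...
  + (f(k+1)/h(k) - s2(k+1)*h(k)/6)*(x - xi(k));
end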

Example: Problem:
From page 442 of the book: Example 18.3
Fit cubic splines to the same data below. Utilize the results to estimate the value at x = 5.
i:  1    2    3    4
xi: 3.0  4.5  7.0  9.0
fi: 2.5  1.0  2.5  0.5

Syntax:

Output:

Figure:

VIII. NUMERICAL INTEGRATION

Numerical integration is nothing but finding an approximate value of
I = ∫(a to b) f(x) dx
There are two different strategies to develop numerical integration formulas. One is similar to what we have adopted for numerical differentiation. That is, we approximate the given function by a polynomial and integrate that polynomial within the limits of the integration. This restricts us to integrating a function known at discrete tabular points. The most straightforward numerical integration technique uses the Newton-Cotes formulas (also called quadrature formulas), which approximate a function tabulated at a sequence of regularly spaced intervals by polynomials of various degree. They are based on the strategy of replacing a complicated function or tabulated data with a polynomial that is easy to integrate:
I = ∫(a to b) f(x) dx ≈ ∫(a to b) fn(x) dx
where fn(x) is a polynomial of the form:
fn(x) = a0 + a1x + ... + a(n-1)x^(n-1) + anx^n
where n is the order of the polynomial. If the endpoints are tabulated, then the 2-, 3- and 4-point formulas are called the Trapezoidal rule, Simpson's 1/3 rule and Simpson's 3/8 rule, respectively.

FIGURE: The approximation of an integral by the area under (a) a straight line and (b) a parabola.

FIGURE: The approximation of an integral by the area under three straight-line segments.

Trapezoidal Rule: The trapezoidal rule is the first of the Newton-Cotes closed integration formulas. It corresponds to the case where the polynomial in Eq. (19.8) is first-order:

I = ∫(a to b) [f(a) + (f(b) - f(a))/(b - a) * (x - a)] dx

The result of the integration is the equation for the trapezoidal rule:
General Equation:
I = (b - a) * [f(a) + f(b)] / 2
where:
a, b = boundary conditions (limits of integration)
I = integral estimate

Help: Trapez: Solving Numerical Integration using Trapezoidal Rule
The trapezoidal rule is a method for approximating a definite integral, ∫ f(x)dx with 'a' as its lower limit and 'b' as its upper limit, using linear approximations of f. The trapezoidal rule works by approximating the region under the graph of the function f(x) as a trapezoid and calculating its area.
Syntax: Trapez(fx,a,b,n)
where:
fx = given function to be evaluated
a = lower limit of integration
b = upper limit of integration
n = number of segments

Codes:
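*Note: image listing; a minimal sketch of the composite trapezoidal rule with n segments. It assumes fx is written with element-wise operators.

function I = Trapez(fx, a, b, n)
% Composite trapezoidal rule over n equal segments.
f = str2func(['@(x)' fx]);
x = linspace(a, b, n + 1);
h = (b - a)/n;
I = h*(f(x(1)) + 2*sum(f(x(2:end-1))) + f(x(end)))/2;
end

A hypothetical call for the polynomial below, with 4 segments:
I = Trapez('400*x.^5 - 900*x.^4 + 675*x.^3 - 200*x.^2 + 25*x + 0.2', 0, 0.8, 4)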

Example: Problem:
From page 473 of the book: Table 19.1
Given: f(x) = 400x^5 - 900x^4 + 675x^3 - 200x^2 + 25x + 0.2 from a = 0 to b = 0.8. Note that the exact value of the integral can be determined analytically to be 1.640533.

Syntax:

Output:

Simpson's Rule:

Aside from applying the trapezoidal rule with finer segmentation, another way to obtain a more accurate estimate of an integral is to use higher-order polynomials to connect the points. For example, if there is an extra point midway between f(a) and f(b), the three points can be connected with a parabola (Fig. 19.11a). If there are two points equally spaced between f(a) and f(b), the four points can be connected with a third-order polynomial (Fig. 19.11b). The formulas that result from taking the integrals under these polynomials are called Simpson's rules.

Help: Smpson: Solving Numerical Integration using Simpson's Rule
Simpson's rule is an extension of the trapezoidal rule where the integrand is approximated by a higher-order polynomial.
Syntax: Simpson(fx,a,b,type)
where:
fx = given function to be evaluated
a = lower limit of integration
b = upper limit of integration
type = choose between: (1) for 1/3, (2) for 3/8, or (3) for composite rule

Simpson's 1/3 Rule: Simpson's 1/3 rule corresponds to the case where the polynomial in Eq. (19.8) is second-order:

I = ∫(a to b) f2(x) dx

where a and b are designated as x0 and x2, respectively. The result of the integration is:

I = (h/3) * [f(x0) + 4f(x1) + f(x2)]

where h = (b - a)/2. This can also be expressed as:
General Equation:
I = (b - a) * [f(x0) + 4f(x1) + f(x2)] / 6
Et = -(b - a)^5 * f''''(ξ) / 2880
where:
x0 = a
x2 = b
x1 = the point midway between a and b
Et = truncation error
ξ = lies somewhere in the interval from a to b

Codes:
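*Note: image listing; a minimal sketch covering all three documented type options. The Help title says "Smpson" while the Syntax says "Simpson"; the sketch follows the Syntax. The 10-segment default for the composite option is an assumption.

function I = Simpson(fx, a, b, type)
% Simpson integration: (1) 1/3 rule, (2) 3/8 rule, (3) composite 1/3 rule.
f = str2func(['@(x)' fx]);
switch type
    case 1                                   % Simpson's 1/3 rule
        xm = (a + b)/2;
        I = (b - a)*(f(a) + 4*f(xm) + f(b))/6;
    case 2                                   % Simpson's 3/8 rule
        h = (b - a)/3;
        I = (b - a)*(f(a) + 3*f(a + h) + 3*f(a + 2*h) + f(b))/8;
    case 3                                   % composite 1/3 rule, n = 10 segments (assumed)
        n = 10;  h = (b - a)/n;  x = a + (0:n)*h;
        I = h/3*(f(x(1)) + 4*sum(f(x(2:2:n))) + 2*sum(f(x(3:2:n-1))) + f(x(n+1)));
end
end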

Example:
From page 476 of the book: Example 19.3) Use Eq. (19.23) to integrate f(x) = 400x^5 - 900x^4 + 675x^3 - 200x^2 + 25x + 0.2 from a = 0 to b = 0.8. Employ Eq. (19.24) to estimate the error. Recall that the exact integral is 1.640533.

Syntax:

Output:

Simpson's 3/8 Rule: In a similar manner to the derivation of the trapezoidal and Simpson's 1/3 rule, a third-order Lagrange polynomial can be fit to four points and integrated to yield:

I = (3h/8) * [f(x0) + 3f(x1) + 3f(x2) + f(x3)]

where h = (b - a)/3. This equation is known as Simpson's 3/8 rule because h is multiplied by 3/8. It is the third Newton-Cotes closed integration formula. The 3/8 rule can also be expressed in the form of Eq. (19.13):
General Equation:
I = (b - a) * [f(x0) + 3f(x1) + 3f(x2) + f(x3)] / 8
Et = -(b - a)^5 * f''''(ξ) / 6480
where:
x0 = a
x3 = b
x1, x2 = the points that divide [a, b] into three equal segments
Et = truncation error
ξ = lies somewhere in the interval from a to b

Codes:
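*Note: the Simpson sketch above already covers the 3/8 option; a hypothetical call for this example would select type = 2:
I = Simpson('400*x.^5 - 900*x.^4 + 675*x.^3 - 200*x.^2 + 25*x + 0.2', 0, 0.8, 2)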

Example: Problem:
From page 476 of the book: Example 19.3) Use Eq. (19.23) to integrate f(x) = 400x^5 - 900x^4 + 675x^3 - 200x^2 + 25x + 0.2 from a = 0 to b = 0.8. Employ Eq. (19.24) to estimate the error. Recall that the exact integral is 1.640533.

Syntax:

Output:

Unequal Segment: To this point, all formulas for numerical integration have been based on equispaced data points. In practice, there are many situations where this assumption does not hold and we must deal with unequal-sized segments. For example, experimentally derived data are often of this type. For these cases, one method is to apply the trapezoidal rule to each segment and sum the results:

General Equation:
I = h1*[f(x0) + f(x1)]/2 + h2*[f(x1) + f(x2)]/2 + ... + hn*[f(x(n-1)) + f(xn)]/2
where:
hi = the width of segment i, i.e. hi = xi - x(i-1)
*Note that this was the same approach used for the composite trapezoidal rule.

Help: Uneven: Solving Numerical Integration with Unequal Segments
Previous formulas were simplified based on equispaced data points, though this is not always the case. The trapezoidal rule may be used with data containing unequal segments by applying it to each segment and summing the results.
Syntax: Uneven(fx,xo)
where:
fx = given function to be evaluated
xo = values of x (segment boundaries)

Codes:
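*Note: image listing; a minimal sketch applying the trapezoidal rule segment by segment, exactly as the general equation above does.

function I = Uneven(fx, xo)
% Trapezoidal rule over unequally spaced segment boundaries xo.
f = str2func(['@(x)' fx]);
I = 0;
for i = 1:length(xo)-1
    h = xo(i+1) - xo(i);                 % width of segment i
    I = I + h*(f(xo(i)) + f(xo(i+1)))/2;
end
end

A hypothetical call for the data below:
xo = [0 0.12 0.22 0.32 0.36 0.40 0.44 0.54 0.64 0.70 0.80];
I  = Uneven('400*x.^5 - 900*x.^4 + 675*x.^3 - 200*x.^2 + 25*x + 0.2', xo)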

Example: Problem:
From page 482 of the book: Example 19.6
The information below was generated using the same polynomial employed in Example 19.1. Use Eq. (19.30) to determine the integral for these data. Recall that the correct answer is 1.640533.
f(x) = 400x^5 - 900x^4 + 675x^3 - 200x^2 + 25x + 0.2
x = 0, 0.12, 0.22, 0.32, 0.36, 0.40, 0.44, 0.54, 0.64, 0.70, 0.80
(the f(x) row of the original table is obtained by evaluating the polynomial above at each x)

Syntax:

Output:

IX. ORDINARY DIFFERENTIAL EQUATIONS

Such equations, which are composed of an unknown function and its derivatives, are called differential equations. They are sometimes referred to as rate equations because they express the rate of change of a variable as a function of variables and parameters. When the function involves one independent variable, the equation is called an ordinary differential equation (or ODE). Initial-value problems are devoted to solving ordinary differential equations of the form:
dy/dt = f(t, y)

A numerical method to solve such an equation for the velocity of the free-falling bungee jumper, for example, was of the general form:
New value = old value + slope × step size
or, in mathematical terms,
y(i+1) = y(i) + φh

where the slope φ is called an increment function. According to this equation, the slope estimate φ is used to extrapolate from an old value y(i) to a new value y(i+1) over a distance h. This formula can be applied step by step to trace out the trajectory of the solution into the future. Such approaches are called one-step methods because the value of the increment function is based on information at a single point i. They are also referred to as Runge-Kutta methods after the two applied mathematicians who first discussed them in the early 1900s.

Runge-Kutta Method: Runge-Kutta (RK) methods achieve the accuracy of a Taylor series approach without requiring the calculation of higher derivatives. Many variations exist but all can be cast in the generalized form of Eq. (22.4):
y(i+1) = y(i) + φ(t(i), y(i), h)h

where the slope φ is called an increment function, which can be interpreted as a representative slope over the interval. The increment function can be written in general form as:
φ = a1k1 + a2k2 + ... + ankn
where the a's are constants and the k's are:
k1 = f(t(i), y(i))
k2 = f(t(i) + p1h, y(i) + q11k1h)
k3 = f(t(i) + p2h, y(i) + q21k1h + q22k2h)
...
kn = f(t(i) + p(n-1)h, y(i) + q(n-1,1)k1h + ... + q(n-1,n-1)k(n-1)h)
where the p's and q's are constants. Notice that the k's are recurrence relationships. That is, k1 appears in the equation for k2, which appears in the equation for k3, and so forth. Because each k is a functional evaluation, this recurrence makes RK methods efficient for computer calculations.

Help: Runkut: Solving Approximations without Higher-Order Derivatives using Runge-Kutta Method
A method of numerically integrating ordinary differential equations by using a trial step at the midpoint of an interval to cancel out lower-order error terms.
Syntax: Runkut(dydt,ti,tf,y0,h)
where:
dydt = given equation to be evaluated
ti, tf = range of time where y will be solved
y0 = initial value
h = step size

Second-Order Runge-Kutta Method: The second-order version of Eq. (22.33) is:
y(i+1) = y(i) + (a1k1 + a2k2)h
where:
k1 = f(t(i), y(i))
k2 = f(t(i) + p1h, y(i) + q11k1h)

The values for a1, a2, p1, and q11 are evaluated by setting Eq. (22.35) equal to a second-order Taylor series. By doing this, three equations can be derived to evaluate the four unknown constants:
a1 + a2 = 1
a2p1 = 1/2
a2q11 = 1/2
We, therefore, must assume a value of one of the unknowns to determine the other three.

Fourth-Order Runge-Kutta Method: The most popular RK methods are fourth order. As with the second-order approaches, there are an infinite number of versions. The following is the most commonly used form, and we therefore call it the classical fourth-order RK method:

y(i+1) = y(i) + (1/6)(k1 + 2k2 + 2k3 + k4)h
where:
k1 = f(t(i), y(i))
k2 = f(t(i) + h/2, y(i) + k1h/2)
k3 = f(t(i) + h/2, y(i) + k2h/2)
k4 = f(t(i) + h, y(i) + k3h)

Codes:
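*Note: image listing; a minimal sketch of the classical fourth-order RK method with the documented signature. The string-to-handle conversion is an assumption.

function [t, y] = Runkut(dydt, ti, tf, y0, h)
% Classical fourth-order Runge-Kutta integration of dy/dt = f(t, y).
f = str2func(['@(t,y)' dydt]);       % e.g. dydt = '4*exp(0.8*t) - 0.5*y'
t = (ti:h:tf)';
y = zeros(size(t));  y(1) = y0;
for i = 1:length(t)-1
    k1 = f(t(i),       y(i));
    k2 = f(t(i) + h/2, y(i) + k1*h/2);
    k3 = f(t(i) + h/2, y(i) + k2*h/2);
    k4 = f(t(i) + h,   y(i) + k3*h);
    y(i+1) = y(i) + (k1 + 2*k2 + 2*k3 + k4)*h/6;   % weighted average slope
end
end

A hypothetical call for Example 22.3 below:
[t, y] = Runkut('4*exp(0.8*t) - 0.5*y', 0, 1, 2, 1)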

Example: Problem:
From page 570 of the book: Example 22.3
Employ the classical fourth-order RK method to integrate y' = 4e^(0.8t) - 0.5y from t = 0 to 1 using a step size of 1 with y(0) = 2.

Syntax:

Output:
