
DCDM BUSINESS SCHOOL

NUMERICAL METHODS (COS 233-8)

Solutions to Assignment 3

Question 1

Consider the following data:

x 1 2 3 4 5 6
f(x) 1 8 27 64 125 216

(a) Set up a difference table through fourth differences.


(b) What is the minimum degree that an interpolating polynomial, that fits all six data points exactly, can have? Explain.
(c) Give the (forward) Newton-Gregory polynomial that fits the data points with x values 2, 3 and 4. Then compute
f(3.5).
(d) Compute an approximate bound for the error in the approximation to f(3.5) in (c) using Newton's forward
interpolating polynomial.
(e) Compute f(3.5) using the Lagrange interpolating polynomial through the data points with x values at 2, 3 and 4.

Solution

(a)

x     y     Δy    Δ²y   Δ³y   Δ⁴y

1 1
7
2 8 12
19 6
3 27 18 0
37 6
4 64 24 0
61 6
5 125 30
91
6 216
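The difference table above can be generated programmatically; a minimal sketch in Python (the function name `difference_table` is illustrative):

```python
def difference_table(y):
    """Return the list of forward-difference columns [y, Δy, Δ²y, ...]."""
    columns = [list(y)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return columns

cols = difference_table([1, 8, 27, 64, 125, 216])
for k, col in enumerate(cols):
    print(f"Δ^{k} y: {col}")   # the Δ³y column comes out as the constant [6, 6, 6]
```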

(b) If all the points have the same y-coordinate, the function is constant and the minimum degree of the interpolating
polynomial is zero. Otherwise, we inspect the difference table to see at which order the differences become a non-zero
constant: if Δⁿy is constant, the polynomial is of degree n. In the table in (a), the third differences are constant
(Δ³y = 6) and the fourth differences vanish, so the minimum degree of a polynomial fitting all six points exactly is 3.
(In general, with 6 points, if no earlier difference were constant, the minimum degree could be as high as 5.)

(c)

x     y     Δy    Δ²y

2 8
19
3 27 18
37
4 64

The (forward) Newton-Gregory polynomial is given by

f(x) = y0 + ((x − x0)/h) Δy0 + ((x − x0)(x − x1)/(2! h²)) Δ²y0 + ((x − x0)(x − x1)(x − x2)/(3! h³)) Δ³y0 + ...

In this particular case, we will stop at the second difference since we only have three points, that is,

f(x) = y0 + ((x − x0)/h) Δy0 + ((x − x0)(x − x1)/(2! h²)) Δ²y0,   where x0 = 2 and h = 1.

Therefore,
f(x) = 8 + (x − 2)(19)/(1) + (x − 2)(x − 3)(18)/((1²)(2))
     = 8 + 19x − 38 + 9x² − 45x + 54
     = 9x² − 26x + 24

Thus, f(3.5) = (9)(3.5)² − (26)(3.5) + 24 = 43.25.

(d) It can easily be seen that the true function f(x) is x³, so the true value of f(3.5) is 42.875 and the actual error is
42.875 − 43.25 = −0.375. If we use the next-term rule (which obviously requires a fourth point), the error estimate is

(3.5 − 2)(3.5 − 3)(3.5 − 4)(6) / ((1)(2)(3)) = −0.375
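Both the quadratic evaluation in (c) and the next-term error estimate in (d) are easy to script; a minimal sketch (the helper name `newton_forward` is illustrative):

```python
from math import factorial

def newton_forward(x, x0, h, diffs):
    """Evaluate y0 + sum over k of prod_{j<k}(x - x_j) * Δ^k y0 / (k! h^k)."""
    total, product = diffs[0], 1.0
    for k in range(1, len(diffs)):
        product *= x - (x0 + (k - 1) * h)
        total += product * diffs[k] / (factorial(k) * h ** k)
    return total

# Differences from the table: y0 = 8, Δy0 = 19, Δ²y0 = 18 (x0 = 2, h = 1).
approx = newton_forward(3.5, 2.0, 1.0, [8.0, 19.0, 18.0])
print(approx)                                               # 43.25

# Next-term rule: the first neglected term (Δ³y0 = 6) estimates the error.
error_est = (3.5 - 2) * (3.5 - 3) * (3.5 - 4) * 6 / factorial(3)
print(error_est)                                            # -0.375
```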

(e) For three data points, we can only fit a Lagrangian polynomial of degree 2. Using the relevant Lagrangian formula,
we have

P2(x) = ((x − x1)(x − x2)/((x0 − x1)(x0 − x2))) f0 + ((x − x0)(x − x2)/((x1 − x0)(x1 − x2))) f1 + ((x − x0)(x − x1)/((x2 − x0)(x2 − x1))) f2

with x0 = 2, x1 = 3 and x2 = 4.

Therefore,

P2(3.5) = (0.5)(−0.5)(8)/((−1)(−2)) + (1.5)(−0.5)(27)/((1)(−1)) + (1.5)(0.5)(64)/((2)(1)) = −1 + 20.25 + 24 = 43.25.
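The same value falls out of a generic Lagrange evaluation routine; a short sketch (the function name `lagrange` is illustrative):

```python
def lagrange(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        weight = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                weight *= (x - xj) / (xi - xj)
        total += weight * yi
    return total

print(lagrange(3.5, [2, 3, 4], [8, 27, 64]))   # 43.25
```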

Question 2

It is suspected that the high amounts of tannin in mature oak leaves inhibit the growth of the winter moth (Operophtera
brumata L., Geometridae) larvae that extensively damage these trees in certain years. The following table lists the average
weight of two samples of larvae at times in the first 28 days after birth. The first sample was reared on young oak leaves
whereas the second sample was reared on mature leaves from the same tree.

Day   Sample 1 average weight (mg)   Sample 2 average weight (mg)
0 6.67 6.67
6 17.33 16.11
10 42.67 18.89
13 37.33 15.00
17 30.10 10.56
20 29.31 9.44
28 28.74 8.89

(a) Use a natural cubic spline to approximate the average weight curve for each sample.
(b) Find an approximate maximum average weight for each sample by determining the maximum of the spline.

Solution

(a) The coefficients of the individual cubic splines are given by

a_i = (S_{i+1} − S_i)/(6h_i)    b_i = S_i/2    c_i = (y_{i+1} − y_i)/h_i − (2h_iS_i + h_iS_{i+1})/6    d_i = y_i

where each spline g_i(x) = a_i(x − x_i)³ + b_i(x − x_i)² + c_i(x − x_i) + d_i.

Note that h0 = 6 , h1 = 4 , h2 = 3 , h3 = 4 , h4 = 3 and h5 = 8 .

First sample

Day   Average weight (mg)   (y_{i+1} − y_i)/h_i
0      6.67     1.7767
6     17.33     6.3350
10    42.67    −1.7800
13    37.33    −1.8075
17    30.10    −0.2633
20    29.31    −0.0713
28    28.74

This gives us the matrix equation

| 20   4   0   0   0 | | S1 |   |  27.3498 |
|  4  14   3   0   0 | | S2 |   | −48.6900 |
|  0   3  14   4   0 | | S3 | = |  −0.1650 |
|  0   0   4  14   3 | | S4 |   |   9.2652 |
|  0   0   0   3  22 | | S5 |   |   1.1520 |

from which we find that S1 = 2.2235, S2 = −4.2802, S3 = 0.7795, S4 = 0.4407 and S5 = −0.0077. For a natural cubic
spline, S0 = 0 and S6 = 0.
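The tridiagonal system can be solved with the Thomas algorithm; a minimal pure-Python sketch (the helper name `thomas` is illustrative) that reproduces the S values above:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination and back substitution."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i - 1] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# First sample: the off-diagonals are the h_i shared between neighbouring rows.
S = thomas([4, 3, 4, 3], [20, 14, 14, 14, 22], [4, 3, 4, 3],
           [27.3498, -48.6900, -0.1650, 9.2652, 1.1520])
print([round(s, 4) for s in S])   # close to S1..S5 computed above
```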

a0 = 2.2235/36 = 0.0618                        b0 = 0
c0 = 1.7767 − 2.2235 = −0.4468                 d0 = 6.67

g0(x) = 0.0618x³ − 0.4468x + 6.67

a1 = (−4.2802 − 2.2235)/24 = −0.2710           b1 = 2.2235/2 = 1.1118
c1 = 6.335 − ((2)(4)(2.2235) + (4)(−4.2802))/6 = 6.2238     d1 = 17.33

g1(x) = (−0.2710)(x − 6)³ + (1.1118)(x − 6)² + (6.2238)(x − 6) + 17.33

g1(x) = −0.2710x³ + 5.9898x² − 36.3858x + 78.548

a2 = (0.7795 + 4.2802)/((6)(3)) = 0.2811       b2 = −4.2802/2 = −2.1401
c2 = −1.7800 − ((2)(3)(−4.2802) + (3)(0.7795))/6 = 2.1105   d2 = 42.67

g2(x) = (0.2811)(x − 10)³ − (2.1401)(x − 10)² + (2.1105)(x − 10) + 42.67

g2(x) = 0.2811x³ − 10.5731x² + 129.2425x − 473.5450



a3 = (0.4407 − 0.7795)/((6)(4)) = −0.0141      b3 = 0.7795/2 = 0.3898
c3 = −1.8075 − ((2)(4)(0.7795) + (4)(0.4407))/6 = −3.1406   d3 = 37.33

g3(x) = (−0.0141)(x − 13)³ + (0.3898)(x − 13)² − (3.1406)(x − 13) + 37.33

g3(x) = −0.0141x³ + 0.9397x² − 20.4241x + 175.0117

a4 = (−0.0077 − 0.4407)/((6)(3)) = −0.0249     b4 = 0.4407/2 = 0.2204
c4 = −0.2633 − ((2)(3)(0.4407) + (3)(−0.0077))/6 = −0.7002  d4 = 30.10

g4(x) = (−0.0249)(x − 17)³ + (0.2204)(x − 17)² − (0.7002)(x − 17) + 30.10

g4(x) = −0.0249x³ + 1.4903x² − 29.7821x + 228.0327

a5 = (0 − (−0.0077))/((6)(8)) = 0.0002         b5 = −0.0077/2 = −0.0039
c5 = −0.0713 − ((2)(8)(−0.0077))/6 = −0.0508   d5 = 29.31

g5(x) = (0.0002)(x − 20)³ − (0.0039)(x − 20)² − (0.0508)(x − 20) + 29.31

g5(x) = 0.0002x³ − 0.0159x² + 0.3452x + 27.166

i   Interval    gi(x)

0   [0, 6]      g0(x) = 0.0618x³ − 0.4468x + 6.67
1   [6, 10]     g1(x) = −0.2710x³ + 5.9898x² − 36.3858x + 78.548
2   [10, 13]    g2(x) = 0.2811x³ − 10.5731x² + 129.2425x − 473.5450
3   [13, 17]    g3(x) = −0.0141x³ + 0.9397x² − 20.4241x + 175.0117
4   [17, 20]    g4(x) = −0.0249x³ + 1.4903x² − 29.7821x + 228.0327
5   [20, 28]    g5(x) = 0.0002x³ − 0.0159x² + 0.3452x + 27.166
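As a quick consistency check, each expanded cubic should reproduce the data at both ends of its interval, up to four-decimal rounding of the coefficients. A sketch, with the sample-1 coefficients transcribed as (a, b, c, d) for ax³ + bx² + cx + d:

```python
knots = [0, 6, 10, 13, 17, 20, 28]
weights = [6.67, 17.33, 42.67, 37.33, 30.10, 29.31, 28.74]
splines = [
    (0.0618, 0.0, -0.4468, 6.67),
    (-0.2710, 5.9898, -36.3858, 78.548),
    (0.2811, -10.5731, 129.2425, -473.5450),
    (-0.0141, 0.9397, -20.4241, 175.0117),
    (-0.0249, 1.4903, -29.7821, 228.0327),
    (0.0002, -0.0159, 0.3452, 27.166),
]

def g(i, x):
    a, b, c, d = splines[i]
    return ((a * x + b) * x + c) * x + d  # Horner evaluation

# Each piece should match the tabulated weights at both ends of its interval.
errs = [max(abs(g(i, knots[i]) - weights[i]),
            abs(g(i, knots[i + 1]) - weights[i + 1])) for i in range(6)]
print(max(errs))   # small: rounding of the coefficients only
```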

Second sample

Day   Average weight (mg)   (y_{i+1} − y_i)/h_i
0      6.67     1.5733
6     16.11     0.6950
10    18.89    −1.2967
13    15.00    −1.1100
17    10.56    −0.3733
20     9.44    −0.0688
28     8.89

This gives us the matrix equation

| 20   4   0   0   0 | | S1 |   |  −5.2698 |
|  4  14   3   0   0 | | S2 |   | −11.9502 |
|  0   3  14   4   0 | | S3 | = |   1.1202 |
|  0   0   4  14   3 | | S4 |   |   4.4202 |
|  0   0   0   3  22 | | S5 |   |   1.8270 |

from which we find that S1 = −0.0866, S2 = −0.8845, S3 = 0.2595, S4 = 0.0353 and S5 = 0.0039. Again, S0 = 0 and S6 = 0 for a natural cubic spline.

a0 = −0.0866/36 = −0.0024                      b0 = 0
c0 = 1.5733 + 0.0866 = 1.6599                  d0 = 6.67

g0(x) = −0.0024x³ + 1.6599x + 6.67

a1 = (−0.8845 + 0.0866)/24 = −0.0332           b1 = −0.0866/2 = −0.0433
c1 = 0.6950 − ((2)(4)(−0.0866) + (4)(−0.8845))/6 = 1.4001   d1 = 16.11

g1(x) = (−0.0332)(x − 6)³ − (0.0433)(x − 6)² + (1.4001)(x − 6) + 16.11

g1(x) = −0.0332x³ + 0.5543x² − 1.6659x + 13.3218


a2 = (0.2595 + 0.8845)/((6)(3)) = 0.0636       b2 = −0.8845/2 = −0.4423
c2 = −1.2967 − ((2)(3)(−0.8845) + (3)(0.2595))/6 = −0.5420  d2 = 18.89

g2(x) = (0.0636)(x − 10)³ − (0.4423)(x − 10)² − (0.5420)(x − 10) + 18.89

g2(x) = 0.0636x³ − 2.3503x² + 27.3840x − 83.52

a3 = (0.0353 − 0.2595)/((6)(4)) = −0.0093      b3 = 0.2595/2 = 0.1298
c3 = −1.1100 − ((2)(4)(0.2595) + (4)(0.0353))/6 = −1.4795   d3 = 15.00

g3(x) = (−0.0093)(x − 13)³ + (0.1298)(x − 13)² − (1.4795)(x − 13) + 15.00

g3(x) = −0.0093x³ + 0.4925x² − 9.5694x + 76.6018

a4 = (0.0039 − 0.0353)/((6)(3)) = −0.0017      b4 = 0.0353/2 = 0.0177
c4 = −0.3733 − ((2)(3)(0.0353) + (3)(0.0039))/6 = −0.4106   d4 = 10.56

g4(x) = (−0.0017)(x − 17)³ + (0.0177)(x − 17)² − (0.4106)(x − 17) + 10.56

g4(x) = −0.0017x³ + 0.1044x² − 2.4863x + 31.0076

a5 = (0 − 0.0039)/((6)(8)) = −0.0001           b5 = 0.0039/2 = 0.0020
c5 = −0.0688 − ((2)(8)(0.0039))/6 = −0.0792    d5 = 9.44

g5(x) = (−0.0001)(x − 20)³ + (0.0020)(x − 20)² − (0.0792)(x − 20) + 9.44

g5(x) = −0.0001x³ + 0.008x² − 0.2792x + 12.624



i   Interval    gi(x)

0   [0, 6]      g0(x) = −0.0024x³ + 1.6599x + 6.67
1   [6, 10]     g1(x) = −0.0332x³ + 0.5543x² − 1.6659x + 13.3218
2   [10, 13]    g2(x) = 0.0636x³ − 2.3503x² + 27.3840x − 83.52
3   [13, 17]    g3(x) = −0.0093x³ + 0.4925x² − 9.5694x + 76.6018
4   [17, 20]    g4(x) = −0.0017x³ + 0.1044x² − 2.4863x + 31.0076
5   [20, 28]    g5(x) = −0.0001x³ + 0.008x² − 0.2792x + 12.624

Note: The numbers have been rounded (not truncated) to four decimal places, so substituting values into the respective
splines will yield minor errors.

(b) Taking the largest tabulated values, the approximate maximum average weight is 42.67 mg for the first sample and 18.89 mg for the second sample. Locating the maximum of the spline itself gives slightly larger values: setting g2′(x) = 0 for the first sample gives x ≈ 10.55 and g2(10.55) ≈ 43.2 mg, while setting g1′(x) = 0 for the second sample gives x ≈ 9.34 and g1(9.34) ≈ 19.1 mg.
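The spline maximum can be located by comparing interval endpoints with the interior critical points of each cubic piece; a sketch applied to the pieces that bracket the largest data values (the helper name `cubic_max_on` is illustrative):

```python
from math import sqrt

def cubic_max_on(a, b, c, d, lo, hi):
    """Maximum of a*x^3 + b*x^2 + c*x + d on [lo, hi] via critical points."""
    candidates = [lo, hi]
    disc = (2 * b) ** 2 - 4 * (3 * a) * c       # discriminant of the derivative
    if disc >= 0:
        for sign in (-1, 1):
            x = (-2 * b + sign * sqrt(disc)) / (2 * 3 * a)
            if lo <= x <= hi:
                candidates.append(x)
    f = lambda x: ((a * x + b) * x + c) * x + d
    best = max(candidates, key=f)
    return best, f(best)

# Sample 1: piece g2 on [10, 13]; Sample 2: piece g1 on [6, 10].
x1, m1 = cubic_max_on(0.2811, -10.5731, 129.2425, -473.5450, 10, 13)
x2, m2 = cubic_max_on(-0.0332, 0.5543, -1.6659, 13.3218, 6, 10)
print(round(x1, 2), round(m1, 2))   # near day 10.55, about 43.2 mg
print(round(x2, 2), round(m2, 2))   # near day 9.34, about 19.1 mg
```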

Question 3

The Newton forward divided-difference formula is used to approximate f(0.3) given the following data:

x 0.0 0.2 0.4 0.6


f(x) 15.0 21.0 30.0 51.0

Suppose that it is discovered that f(0.4) was understated by 10 and f(0.6) was overstated by 5. By what amount should the
approximation to f(0.3) be changed?

Solution

We start by drawing a table of divided differences:

x f(x) [1] [2] [3]


0.0 15.0 30 37.5 187.5
0.2 21.0 45 150.0
0.4 30.0 105
0.6 51.0

The corresponding polynomial of degree 3 is obtained by using the formula

f(x) = f0[0] + (x − x0) f0[1] + (x − x0)(x − x1) f0[2] + (x − x0)(x − x1)(x − x2) f0[3]

Therefore,

f(x) = 15.0 + (x)(30) + (x)(x − 0.2)(37.5) + (x)(x − 0.2)(x − 0.4)(187.5)

f(x) = 187.5x³ − 75x² + 37.5x + 15

The value of f(0.3) is calculated as 24.5625 .

With the understatement and overstatement of f(0.4) and f(0.6) respectively, we have the following divided difference table:

x f(x) [1] [2] [3]


0.0 15.0 30 162.5 −541.6667
0.2 21.0 95 −162.5
0.4 40.0 30
0.6 46.0

Therefore, working exactly,

f(x) = 15.0 + (x)(30) + (x)(x − 0.2)(162.5) − (x)(x − 0.2)(x − 0.4)(1625/3)

f(x) = −(1625/3)x³ + 487.5x² − (275/6)x + 15

The corresponding value of f(0.3) is calculated as 30.5, so the approximation to f(0.3) would change by 30.5 − 24.5625 = 5.9375.
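The whole computation can be checked with a short divided-difference routine (function names illustrative):

```python
def divided_differences(xs, ys):
    """Return the top row [f[x0], f[x0,x1], ...] of the divided-difference table."""
    table = list(ys)
    coeffs = [table[0]]
    for level in range(1, len(xs)):
        table = [(table[i + 1] - table[i]) / (xs[i + level] - xs[i])
                 for i in range(len(table) - 1)]
        coeffs.append(table[0])
    return coeffs

def newton_eval(x, xs, coeffs):
    """Evaluate the Newton form using the divided-difference coefficients."""
    total, product = 0.0, 1.0
    for k, c in enumerate(coeffs):
        total += c * product
        product *= x - xs[k]
    return total

xs = [0.0, 0.2, 0.4, 0.6]
v_old = newton_eval(0.3, xs, divided_differences(xs, [15.0, 21.0, 30.0, 51.0]))
v_new = newton_eval(0.3, xs, divided_differences(xs, [15.0, 21.0, 40.0, 46.0]))
print(v_old, v_new, v_new - v_old)   # about 24.5625, 30.5 and 5.9375
```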

Question 4

Consider the following table:

x 4.0 4.2 4.5 4.7 5.1 5.5 5.9 6.3 6.8 7.1
f(x) 102.56 113.18 130.11 142.05 167.53 195.14 224.87 256.73 299.50 326.72

(a) Construct the least squares approximation polynomial of degree three and compute the error.
(b) Construct the least squares approximation of the form beax and compute the error.
(c) Construct the least squares approximation of the form bxa and compute the error.
(d) Draw a graph of the data points and the approximations in (a), (b) and (c).

Solution

(a) Let the least-squares cubic approximation be y = a0 + a1x + a2x² + a3x³. Using the least-squares criterion,
we obtain the normal equations

| n    Σx   Σx²  Σx³ | | a0 |   | Σy   |
| Σx   Σx²  Σx³  Σx⁴ | | a1 | = | Σxy  |
| Σx²  Σx³  Σx⁴  Σx⁵ | | a2 |   | Σx²y |
| Σx³  Σx⁴  Σx⁵  Σx⁶ | | a3 |   | Σx³y |
3

which, when simplified, gives

| 10        54.1        303.39      1759.831    | | a0 |   |   1958.3900 |
| 54.1      303.39      1759.831    10523.1207  | | a1 | = |  11366.8430 |
| 303.39    1759.831    10523.1207  64607.9775  | | a2 |   |  68006.6811 |
| 1759.831  10523.1207  64607.9775  405616.7435 | | a3 |   | 417730.0982 |

The augmented matrix for the system is

| 10        54.1        303.39      1759.831    |   1958.3900 |
| 54.1      303.39      1759.831    10523.1207  |  11366.8430 |
| 303.39    1759.831    10523.1207  64607.9775  |  68006.6811 |
| 1759.831  10523.1207  64607.9775  405616.7435 | 417730.0982 |

Proceeding by Gaussian elimination, we obtain

R1 ← R1/10:

| 1         5.41        30.339      175.9831    |    195.8390 |
| 54.1      303.39      1759.831    10523.1207  |  11366.8430 |
| 303.39    1759.831    10523.1207  64607.9775  |  68006.6811 |
| 1759.831  10523.1207  64607.9775  405616.7435 | 417730.0982 |

Each subsequent operation changes only the row shown:

R2 ← R2 − 54.1 R1:       | 0   10.709     118.4911    1002.4350  |   771.9531 |
R3 ← R3 − 303.39 R1:     | 0   118.4911   1318.5715   11216.4648 |  8591.0869 |
R4 ← R4 − 1759.831 R1:   | 0   1002.4350  11216.4648  95916.2286 | 73086.5550 |
R2 ← R2/10.709:          | 0   1          11.0646     93.6068    |    72.0845 |
R3 ← R3 − 118.4911 R2:   | 0   0          7.5149      124.8921   |    49.7152 |
R4 ← R4 − 1002.4350 R2:  | 0   0          124.9225    2081.4960  |   826.5292 |
R3 ← R3/7.5149:          | 0   0          1           16.6193    |     6.6156 |
R4 ← R4 − 124.9225 R3:   | 0   0          0           5.3715     |     0.0919 |
R4 ← R4/5.3715:          | 0   0          0           1          |     0.0171 |

leaving the upper-triangular system

| 1   5.41   30.339    175.9831 | 195.8390 |
| 0   1      11.0646   93.6068  |  72.0845 |
| 0   0      1         16.6193  |   6.6156 |
| 0   0      0         1        |   0.0171 |

Back substitution gives

a3 = 0.0171, a2 = 6.3314, a1 = 0.4294 and a0 = −1.5817, so that

the least squares approximation polynomial of degree 3 is given by

y = 0.0171x³ + 6.3314x² + 0.4294x − 1.5817



x   4.0     4.2     4.5     4.7     5.1     5.5     5.9     6.3     6.8     7.1
y   102.56  113.18  130.11  142.05  167.53  195.14  224.87  256.73  299.50  326.72
ŷ   102.09  112.73  129.67  141.62  167.10  194.70  224.41  256.25  299.03  326.30

Standard error = √(Σ(yi − ŷi)²/(n − 2)) = √(2.0197/8) = 0.5025.
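Solving the normal equations from the raw data in full precision is a useful cross-check; because the system is ill-conditioned, the coefficients can differ somewhat from the four-decimal hand computation above while the quality of fit barely changes. A sketch using Gaussian elimination with partial pivoting (helper names illustrative):

```python
xs = [4.0, 4.2, 4.5, 4.7, 5.1, 5.5, 5.9, 6.3, 6.8, 7.1]
ys = [102.56, 113.18, 130.11, 142.05, 167.53, 195.14,
      224.87, 256.73, 299.50, 326.72]

# Normal equations for y = a0 + a1 x + a2 x^2 + a3 x^3.
A = [[sum(x ** (r + c) for x in xs) for c in range(4)] for r in range(4)]
b = [sum(y * x ** r for x, y in zip(xs, ys)) for r in range(4)]

def solve(A, b):
    """Gaussian elimination with partial pivoting on the augmented matrix."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [mij - f * mkj for mij, mkj in zip(M[i], M[k])]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        a[i] = (M[i][n] - sum(M[i][j] * a[j] for j in range(i + 1, n))) / M[i][i]
    return a

a = solve(A, b)
fit = [a[0] + a[1] * x + a[2] * x ** 2 + a[3] * x ** 3 for x in xs]
sse = sum((y - f) ** 2 for y, f in zip(ys, fit))
se = (sse / (len(xs) - 2)) ** 0.5
print(se)   # close to the 0.5025 obtained above
```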

(b) Given that the equation is y = be ax , taking natural logarithm on both sides, we have

ln y = ln b + ax

Rewriting the above equation as Y = B + ax , where Y = ln y and B = ln b, we can use linear regression and hence the
formulae

a = (nΣxY − (Σx)(ΣY)) / (nΣx² − (Σx)²)   and   B = Ȳ − a x̄

by the method of least squares.

x 4.0 4.2 4.5 4.7 5.1 5.5 5.9 6.3 6.8 7.1
y 102.56 113.18 130.11 142.05 167.53 195.14 224.87 256.73 299.50 326.72
Y = ln y 4.6304 4.7290 4.8684 4.9562 5.1212 5.2737 5.4155 5.5480 5.7021 5.7891

The data can be summarised as

n = 10,  Σx = 54.1,  ΣY = 52.0336,  Σx² = 303.39,  ΣxY = 285.4896

Therefore,

a = [(10)(285.4896) − (54.1)(52.0336)] / [(10)(303.39) − (54.1)²] = 0.3724

and

B = [52.0336 − (0.3724)(54.1)] / 10 = 3.1888,

so that b = e^3.1888 = 24.2593. The least squares approximation is given by

y = 24.2593 e^(0.3724x)

x   4.0     4.2     4.5     4.7     5.1     5.5     5.9     6.3     6.8     7.1
y   102.56  113.18  130.11  142.05  167.53  195.14  224.87  256.73  299.50  326.72
ŷ   107.60  115.92  129.62  139.64  162.07  188.10  218.32  253.39  305.25  341.33

Standard error = √(Σ(yi − ŷi)²/(n − 2)) = √(418.9033/8) = 7.2362.
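The log-linear fit in (b) can be reproduced directly from the raw data; full-precision logarithms give values very close to the rounded hand computation:

```python
from math import log, exp

xs = [4.0, 4.2, 4.5, 4.7, 5.1, 5.5, 5.9, 6.3, 6.8, 7.1]
ys = [102.56, 113.18, 130.11, 142.05, 167.53, 195.14,
      224.87, 256.73, 299.50, 326.72]

# Linearise y = b*e^(a*x) as ln y = ln b + a*x and apply least squares.
Y = [log(y) for y in ys]
n = len(xs)
sx, sY = sum(xs), sum(Y)
sxx = sum(x * x for x in xs)
sxY = sum(x * y for x, y in zip(xs, Y))

a = (n * sxY - sx * sY) / (n * sxx - sx * sx)
b = exp((sY - a * sx) / n)
print(round(a, 4), round(b, 2))   # close to a = 0.3724, b = 24.26
```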

(c) Given that the equation is y = bx a , taking natural logarithm on both sides, we have
ln y = ln b + a ln x
Rewriting the above equation as Y = B + aX , where Y = ln y , X = ln x and B = ln b, we can use linear regression and
hence the formulae

a = (nΣXY − (ΣX)(ΣY)) / (nΣX² − (ΣX)²)   and   B = Ȳ − a X̄

by the method of least squares.

x 4.0 4.2 4.5 4.7 5.1 5.5 5.9 6.3 6.8 7.1
y 102.56 113.18 130.11 142.05 167.53 195.14 224.87 256.73 299.50 326.72
X = ln x 1.3863 1.4351 1.5041 1.5476 1.6292 1.7047 1.7750 1.8405 1.9169 1.9601
Y = ln y 4.6304 4.7290 4.8684 4.9562 5.1212 5.2737 5.4155 5.5480 5.7021 5.7891

The data can be summarised as

n = 10,  ΣX = 16.6995,  ΣY = 52.0336,  ΣX² = 28.2536,  ΣXY = 87.6332

Therefore,

a = [(10)(87.6332) − (16.6995)(52.0336)] / [(10)(28.2536) − (16.6995)²] = 2.0196

and

B = [52.0336 − (2.0196)(16.6995)] / 10 = 1.8307,

so that b = e^1.8307 = 6.2383. The least squares approximation is given by

y = 6.2383 x^2.0196

x   4.0     4.2     4.5     4.7     5.1     5.5     5.9     6.3     6.8     7.1
y   102.56  113.18  130.11  142.05  167.53  195.14  224.87  256.73  299.50  326.72
ŷ   102.56  113.18  130.11  142.05  167.52  195.12  224.84  256.69  299.50  326.79

Standard error = √(Σ(yi − ŷi)²/(n − 2)) = √(0.0079/8) = 0.0314.
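The power-law fit in (c) can be scripted the same way, linearising in both variables:

```python
from math import log, exp

xs = [4.0, 4.2, 4.5, 4.7, 5.1, 5.5, 5.9, 6.3, 6.8, 7.1]
ys = [102.56, 113.18, 130.11, 142.05, 167.53, 195.14,
      224.87, 256.73, 299.50, 326.72]

# Linearise y = b*x^a as ln y = ln b + a*ln x and apply least squares.
X = [log(x) for x in xs]
Y = [log(y) for y in ys]
n = len(xs)
sX, sY = sum(X), sum(Y)
sXX = sum(v * v for v in X)
sXY = sum(u * v for u, v in zip(X, Y))

a = (n * sXY - sX * sY) / (n * sXX - sX * sX)
b = exp((sY - a * sX) / n)
print(round(a, 4), round(b, 4))   # close to a = 2.0196, b = 6.2383
```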

(d) On graph paper.

Note: Curve Expert 1.3 gave the following answers:

(a) y = 0.0137x³ + 6.8456x² − 2.3792x + 3.4291   (r = 1.00000000)
(b) y = 26.7584 e^(0.3555x)   (r = 0.99740635)
(c) y = 6.2423 x^2.0192   (r = 0.99999995)

Digitally signed by Rajesh Gunesh, De Chazal du Mee, Vacoas, 2001.07.15.
