Solutions to Assignment 3
Question 1
x 1 2 3 4 5 6
f(x) 1 8 27 64 125 216
Solution
(a)
x      y      Δy     Δ²y    Δ³y    Δ⁴y
1      1
              7
2      8             12
             19              6
3     27             18              0
             37              6
4     64             24              0
             61              6
5    125             30
             91
6    216
(b) If all the points have the same y-coordinate, the function is a constant and the minimum degree of the interpolating
polynomial is zero. Otherwise, we draw a difference table and look for a column of differences that becomes a non-zero
constant. If Δⁿy is constant, the polynomial is of degree n; in the table above, Δ³y = 6 is constant, so the minimum degree
here is 3. Since we have 6 points, had we reached the fifth difference without any column becoming constant, the minimum
degree of the interpolating polynomial would have been 5.
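The difference-table argument above is easy to check numerically. A minimal sketch (the table-building loop is my own, not part of the solution):

```python
import numpy as np

# Forward-difference table for y = x^3 at x = 1..6, as in part (a).
x = np.arange(1, 7)
y = x.astype(float) ** 3

diffs = [y]
while len(diffs[-1]) > 1:
    diffs.append(np.diff(diffs[-1]))   # each np.diff gives the next column

for k, d in enumerate(diffs):
    print(f"Delta^{k} y:", d)

# The third differences are a non-zero constant (6) and the fourth
# differences vanish, so the data comes from a cubic: minimum degree 3.
```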
(c)
x     y     Δy    Δ²y
2     8
           19
3    27          18
           37
4    64
f(x) = y_0 + \frac{(x-x_0)}{h}\,\Delta y_0 + \frac{(x-x_0)(x-x_1)}{h^2\,2!}\,\Delta^2 y_0 + \frac{(x-x_0)(x-x_1)(x-x_2)}{h^3\,3!}\,\Delta^3 y_0 + \cdots
In this particular case, we will stop at the second difference since we only have three points, that is,
f(x) = y_0 + \frac{(x-x_0)}{h}\,\Delta y_0 + \frac{(x-x_0)(x-x_1)}{h^2\,2!}\,\Delta^2 y_0, where x_0 = 2 and h = 1.
Therefore,
f(x) = 8 + \frac{(x-2)(19)}{1} + \frac{(x-2)(x-3)(18)}{(1)(2)}
     = 8 + 19x - 38 + 9x^2 - 45x + 54
     = 9x^2 - 26x + 24
(d) It is easily seen that the true function f(x) is x³. The polynomial gives f(3.5) = 43.25, whereas the true value of f(3.5)
is 42.875, so the error (true minus approximated value) is −0.375. If we use the next-term rule (which obviously needs a
fourth point, so that Δ³y₀ = 6 is available), the error estimate is (1.5)(0.5)(−0.5)(6)/3! = −0.375, in agreement with the
actual error.
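Parts (c) and (d) can be verified together; a minimal sketch using x₀ = 2 and h = 1 as above:

```python
# Evaluate the Newton forward-difference polynomial from part (c) at
# x = 3.5 and form the next-term error estimate used in part (d).
x0, h = 2.0, 1.0
y0, d1, d2, d3 = 8.0, 19.0, 18.0, 6.0   # y0, dy0, d2y0; d3y0 from the full table

def p2(x):
    # f(x) ~ y0 + (x-x0)/h * dy0 + (x-x0)(x-x1)/(2! h^2) * d2y0
    s = (x - x0) / h
    return y0 + s * d1 + s * (x - x0 - h) / (2 * h * h) * d2

approx = p2(3.5)
true = 3.5 ** 3
s = (3.5 - x0) / h
next_term = s * (s - 1) * (s - 2) / 6 * d3   # next-term error estimate
print(approx, true - approx, next_term)      # 43.25 -0.375 -0.375
```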
(e) For three data points, we can only fit a Lagrangian polynomial of degree 2. Using the relevant Lagrangian formula,
we have
P_2(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)} f_0 + \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)} f_1 + \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)} f_2
with x0 = 2, x1 = 3 and x2 = 4.
Therefore, after simplification, P₂(x) = 9x² − 26x + 24, the same polynomial as in part (c): the interpolating polynomial
through a given set of points is unique.
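The Lagrangian form can be checked against the Newton result 9x² − 26x + 24; a minimal sketch on the three points (2, 8), (3, 27), (4, 64):

```python
# Lagrange form on three points; it must agree with the Newton form
# because the degree-2 interpolant through three points is unique.
xs = [2.0, 3.0, 4.0]
fs = [8.0, 27.0, 64.0]

def p2(x):
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += fi * w
    return total

for x in (2.0, 3.5, 4.0):
    print(x, p2(x), 9 * x * x - 26 * x + 24)   # the two columns agree
```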
Question 2
It is suspected that the high amounts of tannin in mature oak leaves inhibit the growth of the winter moth (Operophtera
bromata L., Geometridae) larvae that extensively damage these trees in certain years. The following table lists the average
weight of two samples of larvae at times in the first 28 days after birth. The first sample was reared on young oak leaves
whereas the second sample was reared on mature leaves from the same tree.
(a) Use a natural cubic spline to approximate the average weight curve for each sample.
(b) Find an approximate maximum average weight for each sample by determining the maximum of the spline.
Solution
a_i = \frac{S_{i+1} - S_i}{6h_i} \qquad b_i = \frac{S_i}{2} \qquad c_i = \frac{y_{i+1} - y_i}{h_i} - \frac{2h_i S_i + h_i S_{i+1}}{6} \qquad d_i = y_i
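These formulas can be assembled into a working natural spline. A sketch under stated assumptions: the function names and the illustrative y-values are my own, and the knot spacings h = (6, 4, 3, 4, 3, 8) are inferred from the coefficient matrices below:

```python
import numpy as np

# Sketch: natural cubic spline in the form used in this solution,
#   g_i(x) = a_i (x-x_i)^3 + b_i (x-x_i)^2 + c_i (x-x_i) + d_i.
def natural_spline(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x) - 1
    h = np.diff(x)
    # Interior equations: h[i-1] S[i-1] + 2(h[i-1]+h[i]) S[i] + h[i] S[i+1]
    #                     = 6((y[i+1]-y[i])/h[i] - (y[i]-y[i-1])/h[i-1])
    A = np.zeros((n + 1, n + 1))
    rhs = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0                  # natural ends: S_0 = S_n = 0
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = h[i - 1], 2 * (h[i - 1] + h[i]), h[i]
        rhs[i] = 6 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    S = np.linalg.solve(A, rhs)
    # Coefficient formulas from the solution above.
    a = (S[1:] - S[:-1]) / (6 * h)
    b = S[:-1] / 2
    c = (y[1:] - y[:-1]) / h - (2 * h * S[:-1] + h * S[1:]) / 6
    d = y[:-1]
    return a, b, c, d

def evaluate(x_knots, coeffs, x):
    a, b, c, d = coeffs
    i = min(np.searchsorted(x_knots, x, side="right") - 1, len(a) - 1)
    t = x - x_knots[i]
    return ((a[i] * t + b[i]) * t + c[i]) * t + d[i]

xk = np.array([0.0, 6, 10, 13, 17, 20, 28])       # spacings match the matrices below
yk = [1.0, 2.0, 0.5, 3.0, 2.5, 4.0, 1.5]          # made-up values, illustration only
coeffs = natural_spline(xk, yk)
print([round(evaluate(xk, coeffs, v), 4) for v in xk[:-1]])  # [1.0, 2.0, 0.5, 3.0, 2.5, 4.0]
```

The spline reproduces the data at every knot, which is a quick sanity check on the a, b, c, d formulas.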
First sample
\begin{pmatrix} 20 & 4 & 0 & 0 & 0 \\ 4 & 14 & 3 & 0 & 0 \\ 0 & 3 & 14 & 4 & 0 \\ 0 & 0 & 4 & 14 & 3 \\ 0 & 0 & 0 & 3 & 22 \end{pmatrix}
\begin{pmatrix} S_1 \\ S_2 \\ S_3 \\ S_4 \\ S_5 \end{pmatrix} =
\begin{pmatrix} 27.3498 \\ 48.6900 \\ 0.1650 \\ 9.2652 \\ 1.1520 \end{pmatrix}
from which we find that S_1 = 2.2235, S_2 = 4.2802, S_3 = 0.7795, S_4 = 0.4407 and S_5 = 0.0077. For a natural cubic
spline, S_0 = 0 and S_6 = 0.
a_0 = \frac{2.2235}{36} = 0.0618 \qquad b_0 = 0
c_0 = 1.7767 - 2.2235 = -0.4468 \qquad d_0 = 6.67
a_5 = \frac{0 - 0.0077}{(6)(8)} = -0.0002 \qquad b_5 = \frac{0.0077}{2} = 0.0039
c_5 = 0.0713 - \frac{(2)(8)(0.0077)}{6} = 0.0508 \qquad d_5 = 29.31
[Table: spline pieces g_i(x) for each interval i — entries not recovered.]
Second sample
\begin{pmatrix} 20 & 4 & 0 & 0 & 0 \\ 4 & 14 & 3 & 0 & 0 \\ 0 & 3 & 14 & 4 & 0 \\ 0 & 0 & 4 & 14 & 3 \\ 0 & 0 & 0 & 3 & 22 \end{pmatrix}
\begin{pmatrix} S_1 \\ S_2 \\ S_3 \\ S_4 \\ S_5 \end{pmatrix} =
\begin{pmatrix} 5.2698 \\ 11.9502 \\ 1.1202 \\ 4.4202 \\ 1.8270 \end{pmatrix}
from which we find that S_1 = -0.0866, S_2 = 0.8845, S_3 = 0.2595, S_4 = 0.0353 and S_5 = -0.0039.
a_0 = \frac{-0.0866}{36} = -0.0024 \qquad b_0 = 0
c_0 = 1.5733 + 0.0866 = 1.6599 \qquad d_0 = 6.67
a_5 = \frac{0 - (-0.0039)}{(6)(8)} = 0.0001 \qquad b_5 = \frac{-0.0039}{2} = -0.0020
c_5 = 0.0688 + \frac{(2)(8)(0.0039)}{6} = 0.0792 \qquad d_5 = 9.44
[Table: spline pieces g_i(x) for each interval i — entries not recovered.]
Note: The numbers have been rounded (not truncated) to four decimal places, so substituting values into the respective
splines may yield minor discrepancies.
(b) The approximate maximum average weight is 42.67 mg for the first sample and 18.89 mg for the second sample.
Question 3
The Newton forward divided-difference formula is used to approximate f(0.3) given the following data:
Suppose that it is discovered that f(0.4) was understated by 10 and f(0.6) was overstated by 5. By what amount should the
approximation to f(0.3) be changed?
Solution
f(x) = f_0^{[0]} + (x-x_0) f_0^{[1]} + (x-x_0)(x-x_1) f_0^{[2]} + (x-x_0)(x-x_1)(x-x_2) f_0^{[3]}
Therefore,
f(x) = 187.5x^3 - 75x^2 + 37.5x + 15
With the understatement and overstatement of f(0.4) and f(0.6) respectively, we have the following divided difference table:
f(x) = 15.0 + (x)(30) + (x)(x-0.2)(162.5) + (x)(x-0.2)(x-0.4)\left(-\frac{1625}{3}\right)

f(x) = -\frac{1625x^3}{3} + 487.5x^2 - \frac{275x}{6} + 15
The corresponding value of f(0.3) is 30.5, compared with 24.5625 from the original data, so the approximation to f(0.3)
changes by +5.9375.
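The change can be cross-checked without rebuilding the tables: perturbing a tabulated value f(x_k) by δ shifts the cubic interpolant at x by δ·L_k(x), where L_k is the Lagrange basis polynomial for node x_k. A sketch, assuming the nodes 0, 0.2, 0.4 and 0.6 implied by the factors in the solution:

```python
# A perturbation delta at node x_k shifts the interpolant at x by
# delta * L_k(x), where L_k is the Lagrange basis polynomial.
nodes = [0.0, 0.2, 0.4, 0.6]   # assumed from the factors x, x-0.2, x-0.4

def lagrange_basis(k, x):
    w = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            w *= (x - xj) / (nodes[k] - xj)
    return w

# f(0.4) was understated by 10 (so add 10); f(0.6) was overstated by 5
# (so subtract 5). The net change at x = 0.3:
change = 10 * lagrange_basis(2, 0.3) - 5 * lagrange_basis(3, 0.3)
print(change)   # ~5.9375
```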
Question 4
x 4.0 4.2 4.5 4.7 5.1 5.5 5.9 6.3 6.8 7.1
f(x) 102.56 113.18 130.11 142.05 167.53 195.14 224.87 256.73 299.50 326.72
(a) Construct the least squares approximation polynomial of degree three and compute the error.
(b) Construct the least squares approximation of the form beax and compute the error.
(c) Construct the least squares approximation of the form bxa and compute the error.
(d) Draw a graph of the data points and the approximations in (a), (b) and (c).
Solution
(a) Let the least-squares cubic interpolating polynomial be y = a 0 + a1 x + a 2 x 2 + a 3 x 3 . Using the least-squares criterion,
we have the matrix equation
\begin{pmatrix} n & \sum x & \sum x^2 & \sum x^3 \\ \sum x & \sum x^2 & \sum x^3 & \sum x^4 \\ \sum x^2 & \sum x^3 & \sum x^4 & \sum x^5 \\ \sum x^3 & \sum x^4 & \sum x^5 & \sum x^6 \end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{pmatrix} =
\begin{pmatrix} \sum y \\ \sum xy \\ \sum x^2 y \\ \sum x^3 y \end{pmatrix}
x           4.0     4.2     4.5     4.7     5.1     5.5     5.9     6.3     6.8     7.1
y           102.56  113.18  130.11  142.05  167.53  195.14  224.87  256.73  299.50  326.72
ŷ (fitted)  102.09  112.73  129.67  141.62  167.10  194.70  224.41  256.25  299.03  326.30
\text{Standard error} = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n-2}} = \sqrt{\frac{2.0197}{8}} = 0.5025.
(b) Given that the equation is y = be^{ax}, taking the natural logarithm of both sides gives
ln y = ln b + ax
Rewriting the above equation as Y = B + ax , where Y = ln y and B = ln b, we can use linear regression and hence the
formulae
a = \frac{n\sum xY - \sum x \sum Y}{n\sum x^2 - \left(\sum x\right)^2} \qquad\text{and}\qquad B = \bar{Y} - a\bar{x}
x 4.0 4.2 4.5 4.7 5.1 5.5 5.9 6.3 6.8 7.1
y 102.56 113.18 130.11 142.05 167.53 195.14 224.87 256.73 299.50 326.72
Y = ln y 4.6304 4.7290 4.8684 4.9562 5.1212 5.2737 5.4155 5.5480 5.7021 5.7891
y = 24.2593 e 0.3724 x
x           4.0     4.2     4.5     4.7     5.1     5.5     5.9     6.3     6.8     7.1
y           102.56  113.18  130.11  142.05  167.53  195.14  224.87  256.73  299.50  326.72
ŷ (fitted)  107.60  115.92  129.62  139.64  162.07  188.10  218.32  253.39  305.25  341.33
\text{Standard error} = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n-2}} = \sqrt{\frac{418.9033}{8}} = 7.2362.
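The log-linear regression in part (b) is easy to replicate; a sketch using the same formulas for a and B:

```python
import numpy as np

# Part (b): fit y = b * exp(a x) by linear regression on ln y.
x = np.array([4.0, 4.2, 4.5, 4.7, 5.1, 5.5, 5.9, 6.3, 6.8, 7.1])
y = np.array([102.56, 113.18, 130.11, 142.05, 167.53, 195.14,
              224.87, 256.73, 299.50, 326.72])

Y = np.log(y)
n = len(x)
a = (n * np.sum(x * Y) - np.sum(x) * np.sum(Y)) / (n * np.sum(x**2) - np.sum(x)**2)
B = np.mean(Y) - a * np.mean(x)
b = np.exp(B)
fitted = b * np.exp(a * x)
se = np.sqrt(np.sum((y - fitted) ** 2) / (n - 2))
print(a, b, se)   # solution reports a = 0.3724, b = 24.2593, se = 7.2362
```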
(c) Given that the equation is y = bx^a, taking the natural logarithm of both sides gives
ln y = ln b + a ln x
Rewriting the above equation as Y = B + aX , where Y = ln y , X = ln x and B = ln b, we can use linear regression and
hence the formulae
a = \frac{n\sum XY - \sum X \sum Y}{n\sum X^2 - \left(\sum X\right)^2} \qquad\text{and}\qquad B = \bar{Y} - a\bar{X}
x 4.0 4.2 4.5 4.7 5.1 5.5 5.9 6.3 6.8 7.1
y 102.56 113.18 130.11 142.05 167.53 195.14 224.87 256.73 299.50 326.72
X = ln x 1.3863 1.4351 1.5041 1.5476 1.6292 1.7047 1.7750 1.8405 1.9169 1.9601
Y = ln y 4.6304 4.7290 4.8684 4.9562 5.1212 5.2737 5.4155 5.5480 5.7021 5.7891
y = 6.2383 x 2.0196
x           4.0     4.2     4.5     4.7     5.1     5.5     5.9     6.3     6.8     7.1
y           102.56  113.18  130.11  142.05  167.53  195.14  224.87  256.73  299.50  326.72
ŷ (fitted)  102.56  113.18  130.11  142.05  167.52  195.12  224.84  256.69  299.50  326.79
\text{Standard error} = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n-2}} = \sqrt{\frac{0.0079}{8}} = 0.0314.
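Part (c) follows the same pattern with X = ln x; a sketch:

```python
import numpy as np

# Part (c): fit y = b * x^a by linear regression on (ln x, ln y).
x = np.array([4.0, 4.2, 4.5, 4.7, 5.1, 5.5, 5.9, 6.3, 6.8, 7.1])
y = np.array([102.56, 113.18, 130.11, 142.05, 167.53, 195.14,
              224.87, 256.73, 299.50, 326.72])

X, Y = np.log(x), np.log(y)
n = len(x)
a = (n * np.sum(X * Y) - np.sum(X) * np.sum(Y)) / (n * np.sum(X**2) - np.sum(X)**2)
b = np.exp(np.mean(Y) - a * np.mean(X))
fitted = b * x**a
se = np.sqrt(np.sum((y - fitted) ** 2) / (n - 2))
print(a, b, se)   # solution reports a = 2.0196, b = 6.2383, se = 0.0314
```

The much smaller standard error here than in (a) and (b) confirms that the power-law form fits this data best.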