
Math 4650 Homework #4, due Friday, February 17

(2.5 #1d) The following sequence is linearly convergent.

Generate the first five terms of the sequence $\{\hat p_n\}$ using Aitken's $\Delta^2$ method, where $p_0 = 0.5$ and $p_n = \cos p_{n-1}$ for $n \ge 1$.

Solution: We just use the formula
$$\hat p_n = p_n - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}.$$
We generate $p_n$ up to $n = 6$ in order to get up to $\hat p_4$.

n    p_n             \hat p_n
0    0.5             0.7313851863
1    0.8775825619    0.7360866919
2    0.6390124942    0.7376528716
3    0.8026851007    0.7384692207
4    0.6947780268    0.7387980650
5    0.7681958313
6    0.7191654459

The sequence $p_n$ converges linearly with rate $\frac{p_{n+1} - p}{p_n - p} \approx 0.67$, while the sequence $\hat p_n$ converges linearly with rate $\frac{\hat p_{n+1} - p}{\hat p_n - p} \approx 0.46$.
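The table above can be reproduced with a short Python sketch (the variable names are mine; double precision rather than 10-digit arithmetic, so the last digit may differ):

```python
from math import cos

# Fixed-point iterates p_n = cos(p_{n-1}), starting from p_0 = 0.5.
p = [0.5]
for _ in range(6):
    p.append(cos(p[-1]))

# Aitken's Delta^2 formula needs p_n, p_{n+1}, p_{n+2},
# so seven iterates give the five accelerated terms.
p_hat = [p[n] - (p[n+1] - p[n])**2 / (p[n+2] - 2*p[n+1] + p[n])
         for n in range(5)]

for n, q in enumerate(p_hat):
    print(n, q)   # p_hat[0] ≈ 0.7313851863, p_hat[4] ≈ 0.7387980650
```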

(2.5 #8) Use Steffensen's method to find, to an accuracy of $10^{-4}$, the root of $x - 2^{-x} = 0$ that lies in $[0, 1]$, and compare this to the results of Exercise 8 of Section 2.2. (See Homework 3 solutions.)

Solution: The standard fixed-point method uses $g(x) = 2^{-x}$. Steffensen's method involves using fixed-point iteration with the new function
$$h(x) = x - \frac{[g(x) - x]^2}{g(g(x)) - 2g(x) + x}.$$

In Exercise 8, we started with $p_0 = \frac{2}{3}$, so we use the same starting point.

I will call $q_n$ the result of Steffensen's method, with $q_n = h(q_{n-1})$. I use Maple with 10-digit arithmetic.

n    p_n             q_n
0    0.6666666667    0.6666666667
1    0.6299605249    0.6412161917
2    0.6461940963    0.6411857443
3    0.6389637114    0.6411857448
4    0.6421740571    0.6411857444
5    0.6407466531    0.6411857448
6    0.6413809223    0.6411857444
7    0.6410990063    0.6411857448
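The Steffensen iteration is easy to reproduce; a minimal Python sketch (function names mine, double precision instead of Maple's 10-digit arithmetic):

```python
def g(x):
    """Fixed-point function for x - 2**(-x) = 0."""
    return 2.0 ** (-x)

def h(x):
    """One Steffensen step: Aitken's Delta^2 applied to x, g(x), g(g(x))."""
    gx = g(x)
    ggx = g(gx)
    return x - (gx - x) ** 2 / (ggx - 2 * gx + x)

q = 2.0 / 3.0
for n in range(1, 4):
    q = h(q)
    print(n, q)   # q_1 is already within 1e-4 of the root 0.6411857...
```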

We see that already $q_1$ is within $10^{-4}$ of the desired root. In Exercise 8 of Section 2.2, it wasn't until $p_7$ that we reached this accuracy.

(2.5 #16) Prove Theorem 2.14. [Hint: Let $\delta_n = (p_{n+1} - p)/(p_n - p) - \lambda$, and show that $\lim_{n\to\infty} \delta_n = 0$. Then express $(\hat p_n - p)/(p_n - p)$ in terms of $\delta_n$, $\delta_{n+1}$, and $\lambda$.]

Solution: The theorem says: Suppose that $\{p_n\}_{n=0}^{\infty}$ is a sequence that converges linearly to the limit $p$ and that
$$\lim_{n\to\infty} \frac{p_{n+1} - p}{p_n - p} = \lambda < 1.$$
Then the Aitken's $\Delta^2$ sequence $\{\hat p_n\}_{n=0}^{\infty}$ converges to $p$ faster than $\{p_n\}_{n=0}^{\infty}$ in the sense that
$$\lim_{n\to\infty} \frac{\hat p_n - p}{p_n - p} = 0.$$

To prove this, we let $\delta_n = (p_{n+1} - p)/(p_n - p) - \lambda$ as suggested. Then by assumption, we know that
$$\lim_{n\to\infty} \delta_n = \lim_{n\to\infty} \left( \frac{p_{n+1} - p}{p_n - p} - \lambda \right) = 0.$$
Now we can solve this equation to get $p_{n+1} - p = (p_n - p)(\delta_n + \lambda)$ for every $n$, which implies that $p_{n+2} - p = (p_{n+1} - p)(\delta_{n+1} + \lambda)$.

Now consider Aitken's formula. We have (adding and subtracting the limit $p$)
$$p_{n+1} - p_n = (p_{n+1} - p) - (p_n - p) = (\delta_n + \lambda - 1)(p_n - p).$$
We can also write
$$p_{n+2} - 2p_{n+1} + p_n = \big[(p_{n+2} - p) - (p_{n+1} - p)\big] - \big[(p_{n+1} - p) - (p_n - p)\big]$$
$$= (p_{n+1} - p)(\delta_{n+1} + \lambda - 1) - (p_n - p)(\delta_n + \lambda - 1)$$
$$= (p_n - p)\big[(\delta_n + \lambda)(\delta_{n+1} + \lambda - 1) - (\delta_n + \lambda - 1)\big].$$
Plugging into Aitken's formula, we get the error
$$\hat p_n - p = p_n - p - \frac{(p_{n+1} - p_n)^2}{p_{n+2} - 2p_{n+1} + p_n}$$
$$= p_n - p - \frac{(p_n - p)^2 (\delta_n + \lambda - 1)^2}{(p_n - p)\big[(\delta_n + \lambda)(\delta_{n+1} + \lambda - 1) - (\delta_n + \lambda - 1)\big]}$$
$$= (p_n - p)\left[ 1 - \frac{(\delta_n + \lambda - 1)^2}{(\delta_n + \lambda)(\delta_{n+1} + \lambda - 1) - (\delta_n + \lambda - 1)} \right].$$

Thus finally we have
$$\lim_{n\to\infty} \frac{\hat p_n - p}{p_n - p} = \lim_{n\to\infty} \left[ 1 - \frac{(\delta_n + \lambda - 1)^2}{(\delta_n + \lambda)(\delta_{n+1} + \lambda - 1) - (\delta_n + \lambda - 1)} \right].$$

This looks like a mess, but the nice thing is that since $\delta_n \to 0$ and $\delta_{n+1} \to 0$, in the limit we can just plug in $0$ and obtain
$$\lim_{n\to\infty} \frac{\hat p_n - p}{p_n - p} = 1 - \frac{(\lambda - 1)^2}{\lambda(\lambda - 1) - (\lambda - 1)} = 1 - \frac{(\lambda - 1)^2}{(\lambda - 1)^2} = 0.$$

(2.6 #6) The function $f(x) = 10x^3 - 8.3x^2 + 2.295x - 0.21141 = 0$ has a root at $x = 0.29$. Use Newton's method with an initial approximation $x_0 = 0.28$ to attempt to find this root. Explain what happens.

Solution: We have $f(0.28) = -0.00001$ and $f'(0.28) = -0.001$, so Newton's method at the first step gives
$$x_1 = 0.28 - \frac{f(0.28)}{f'(0.28)} = 0.28 - 0.01 = 0.27.$$

Now $f(0.27) = 0$ and $f'(0.27) = 0$, so Newton's method gets stuck: it gives something of the form
$$x_2 = 0.27 - \frac{0}{0},$$
which is undefined. MATLAB and Mathematica both give $x_2 = 0.27$ (presumably because they first notice that the numerator is zero and decide that the denominator doesn't matter), while Maple chokes and gives a Float(undefined) error (presumably because it first notices that the denominator is zero). None of these programs is actually equipped to use L'Hopital's rule to figure out what to do, but of course dividing by zero is always unpredictable.

(3.1 #2bd) For the given functions $f(x)$, let $x_0 = 1$, $x_1 = 1.25$, and $x_2 = 1.6$. Construct interpolation polynomials of degree at most one and at most two to approximate $f(1.4)$, and find the absolute error.

(b) $f(x) = \sqrt[3]{x - 1}$

Solution: The exact value is $f(1.4) = 0.7368062997$. To approximate $f(1.4)$ by a degree-one polynomial, we should use the points $x_1$ and $x_2$ (since they are closest to $1.4$). The degree-one polynomial interpolation is
$$P_1(x) = f(1.25)\,\frac{x - 1.6}{1.25 - 1.6} + f(1.6)\,\frac{x - 1.25}{1.6 - 1.25}$$
$$= -1.799887214(x - 1.6) + 2.409807615(x - 1.25),$$

so that $P_1(1.4) = 0.7214485851$. The absolute error is $0.015$.

Figure 1: Problem 2.6 #6: The tangent line at $x = 0.28$ crosses the axis at $0.27$, and Newton's method cannot continue past that point since it's a degenerate root.
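The breakdown in 2.6 #6 is easy to see numerically. A minimal Python sketch of the first Newton step (variable names mine; in floating point the values at $x_1$ are only zero up to rounding noise):

```python
def f(x):
    return 10*x**3 - 8.3*x**2 + 2.295*x - 0.21141

def fp(x):
    # derivative of f
    return 30*x**2 - 16.6*x + 2.295

x0 = 0.28
x1 = x0 - f(x0) / fp(x0)
print(x1)             # lands at (essentially) 0.27
print(f(x1), fp(x1))  # both essentially zero: the next step is 0/0
```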

To approximate with a degree-two polynomial, we use all three points to get
$$P_2(x) = f(1.25)\,\frac{(x - 1)(x - 1.6)}{(1.25 - 1)(1.25 - 1.6)} + f(1)\,\frac{(x - 1.25)(x - 1.6)}{(1 - 1.25)(1 - 1.6)} + f(1.6)\,\frac{(x - 1)(x - 1.25)}{(1.6 - 1)(1.6 - 1.25)}$$
$$= -7.199548857(x - 1)(x - 1.6) + 4.016346025(x - 1)(x - 1.25),$$
since $f(1) = 0$.

Thus $P_2(1.4) = 0.8169446701$. The absolute error is $0.080$, substantially larger than what we get using the first-order polynomial.

(d) $f(x) = e^{2x} - x$

Solution: We do the same thing as above. The exact value is $f(1.4) = 15.04464677$. The first-degree polynomial is
$$P_1(x) = f(1.25)\,\frac{x - 1.6}{1.25 - 1.6} + f(1.6)\,\frac{x - 1.25}{1.6 - 1.25}$$
$$= -31.23569703(x - 1.6) + 65.52151485(x - 1.25),$$

so that $P_1(1.4) = 16.07536663$. The absolute error is $1.03$.
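These interpolants can be checked with a generic Lagrange evaluator; a Python sketch (the helper name `lagrange` is mine):

```python
from math import exp

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

f = lambda x: exp(2 * x) - x
p1 = lagrange([1.25, 1.6], [f(1.25), f(1.6)], 1.4)
p2 = lagrange([1.0, 1.25, 1.6], [f(1.0), f(1.25), f(1.6)], 1.4)
print(p1, p2)   # the degree-one and degree-two approximations of f(1.4)
```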

For degree two, we have
$$P_2(x) = f(1)\,\frac{(x - 1.25)(x - 1.6)}{(1 - 1.25)(1 - 1.6)} + f(1.25)\,\frac{(x - 1)(x - 1.6)}{(1.25 - 1)(1.25 - 1.6)} + f(1.6)\,\frac{(x - 1)(x - 1.25)}{(1.6 - 1)(1.6 - 1.25)}$$
$$= 42.59370733(x - 1.25)(x - 1.6) - 124.9427881(x - 1)(x - 1.6) + 109.2025248(x - 1)(x - 1.25),$$

and $P_2(1.4) = 15.26976331$. The absolute error is $0.225$.

(3.1 #4bd) Use Theorem 3.3 to find an error bound for the approximations in Exercise 2.

(b) Solution: For the linear error we have
$$|f(1.4) - P_1(1.4)| \le \frac{|f''(\xi)|}{2}\,|(1.4 - 1.25)(1.4 - 1.6)| = 0.015\,|f''(\xi)|,$$

where $1.25 < \xi < 1.6$. Since $f''(\xi) = -\frac{2}{9}(\xi - 1)^{-5/3}$, the maximum of $|f''(\xi)|$ on the interval $[1.25, 1.6]$ occurs at $\xi = 1.25$, where $|f''(1.25)| = 2.24$. Hence the error is at most
$$|f(1.4) - P_1(1.4)| \le 2.24 \times 0.015 = 0.034.$$

For the quadratic error we have
$$|f(1.4) - P_2(1.4)| \le \frac{|f'''(\xi)|}{6}\,|(1.4 - 1)(1.4 - 1.25)(1.4 - 1.6)| = 0.002\,|f'''(\xi)|,$$

where $1 < \xi < 1.6$. Unfortunately we have $f'''(\xi) = \frac{10}{27}(\xi - 1)^{-8/3}$, which blows up at $\xi = 1$. So we cannot say anything about the size of $|f'''(\xi)|$ on the interval $[1, 1.6]$, since $\xi$ could be arbitrarily close to $1$. Hence the remainder formula does not give a useful bound in this case.

(d) Solution: Exactly as above we have the formulas
$$|f(1.4) - P_1(1.4)| \le \frac{|f''(\xi)|}{2}\,|(1.4 - 1.25)(1.4 - 1.6)| = 0.015\,|f''(\xi)|,$$

where $1.25 < \xi < 1.6$. We compute $f''(\xi) = 4e^{2\xi} \le 4e^{3.2}$, so the error bound is $1.47$. For the quadratic error we have $|f(1.4) - P_2(1.4)| \le 0.002\,|f'''(\xi)|$, where $1 < \xi < 1.6$. The maximum of the third derivative is $f'''(1.6) = 8e^{3.2}$, so we get the error bound $0.39$.

(3.1 #14) Let $f(x) = e^x$, for $0 \le x \le 2$.

a. Approximate $f(0.25)$ using linear interpolation with $x_0 = 0$ and $x_1 = 0.5$.

Solution: In general we have
$$P_1(x) = f(x_0)\,\frac{x - x_1}{x_0 - x_1} + f(x_1)\,\frac{x - x_0}{x_1 - x_0} = \frac{f(x_1)(x - x_0) - f(x_0)(x - x_1)}{x_1 - x_0}.$$

In this case we plug in and get
$$P_1(0.25) = \frac{e^0(0.5 - 0.25) + e^{0.5}(0.25 - 0)}{0.5 - 0} = 1.324360636.$$

b. Approximate $f(0.75)$ using linear interpolation with $x_0 = 0.5$ and $x_1 = 1$.

Solution: The same formula as above yields
$$P_1(0.75) = \frac{e^{0.5}(1 - 0.75) + e^1(0.75 - 0.5)}{1 - 0.5} = 2.183501550.$$

c. Approximate $f(0.25)$ and $f(0.75)$ by using the second interpolating polynomial with $x_0 = 0$, $x_1 = 1$, and $x_2 = 2$.

Solution: We write
$$P_2(x) = f(0)\,\frac{(x - 1)(x - 2)}{(0 - 1)(0 - 2)} + f(1)\,\frac{(x - 0)(x - 2)}{(1 - 0)(1 - 2)} + f(2)\,\frac{(x - 0)(x - 1)}{(2 - 0)(2 - 1)}$$
$$= \tfrac{1}{2}(x - 1)(x - 2) - e\,x(x - 2) + \tfrac{1}{2}e^2\,x(x - 1).$$
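As a quick check, this quadratic can be evaluated directly; a Python sketch (the name `P2` is mine):

```python
from math import e

def P2(x):
    # Quadratic interpolant of exp(x) through the nodes 0, 1, 2.
    return 0.5*(x - 1)*(x - 2) - e*x*(x - 2) + 0.5*e**2*x*(x - 1)

print(P2(0.25), P2(0.75))
```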

Therefore $P_2(0.25) = 1.152774291$ and $P_2(0.75) = 2.011915205$.

d. Which approximations are better and why?

Solution: The exact values are $f(0.25) = 1.284025417$ and $f(0.75) = 2.117000017$. So the errors in using the linear approximations are $|P_1(0.25) - f(0.25)| = 0.040335219$ and $|P_1(0.75) - f(0.75)| = 0.066501533$. The errors in the quadratic approximation are $|P_2(0.25) - f(0.25)| = 0.131251126$ and $|P_2(0.75) - f(0.75)| = 0.105084812$. We might have expected the quadratic approximation to do better, but actually the linear interpolation does better, since it uses points that are much closer to the values under consideration. Using $0$ and $0.5$ to approximate the value at $0.25$ is smarter than using $0$, $1$, and $2$. On the other hand, if we had used $0$, $0.5$, and $1$ for the quadratic approximation, we might expect it to do better than the linear approximation. And certainly if we used $0$, $0.5$, and a third value between $0$ and $0.5$, we would definitely expect to do better.

(3.1 #17) Suppose you need to construct eight-decimal-place tables for the common, or base-10, logarithm function from $x = 1$ to $x = 10$ in such a way that linear interpolation is accurate to within $10^{-6}$. Determine a bound for the step size for this table. What choice of step size would you make to ensure that $x = 10$ is included in the table?

Solution: The base-10 logarithm is
$$f(x) = \log_{10} x = \frac{\ln x}{\ln 10}.$$

On each of the intervals $[x_i, x_{i+1}]$, the error of a linear interpolation is
$$|f(x) - L(x)| = \left| \frac{f''(\xi)}{2}\,(x - x_i)(x - x_{i+1}) \right|$$

for some $\xi \in (x_i, x_{i+1})$. Now the quadratic polynomial $(x - x_i)(x - x_{i+1})$ attains its maximum absolute value halfway between $x_i$ and $x_{i+1}$. Since $x_{i+1} = x_i + h$, where $h$ is the step size, the maximum occurs at $x = x_i + h/2$, where
$$|(x - x_i)(x - x_{i+1})| = (h/2)(h/2) = h^2/4.$$
Hence the error is
$$|f(x) - L(x)| \le |f''(\xi)|\,\frac{h^2}{8},$$

for some $\xi \in (x_i, x_{i+1})$. The maximum possible error over all possible intervals is
$$\sup_{1 \le x \le 10} |f(x) - L(x)| \le \frac{h^2}{8} \sup_{1 \le \xi \le 10} |f''(\xi)|.$$
Since $f''(\xi) = -\frac{1}{\xi^2 \ln 10}$, the absolute value is maximized at $\xi = 1$, so $|f''(\xi)| \le \frac{1}{\ln 10}$. Thus the maximum error is
$$\sup_{1 \le x \le 10} |f(x) - L(x)| \le \frac{h^2}{8 \ln 10},$$
and we want this to be less than $10^{-6}$. So we choose
$$h < 10^{-3}\sqrt{8 \ln 10} \approx 0.00429.$$
Since we want $9/h$ to be an integer (so that $x = 10 = 1 + (9/h)h$ is a node of the table), it might be convenient to choose $h = 0.004$.
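The choice $h = 0.004$ can be sanity-checked by sampling the linear-interpolation error of $\log_{10}$ on the subintervals nearest $x = 1$, where $|f''|$ is largest (a Python sketch; the sampling scheme is mine):

```python
from math import log10

h = 0.004
worst = 0.0
for i in range(250):                 # the subintervals covering [1, 2]
    a = 1 + i * h
    b = a + h
    for t in (0.25, 0.5, 0.75):      # sample points inside [a, b]
        x = a + t * h
        L = log10(a) + (log10(b) - log10(a)) * (x - a) / h
        worst = max(worst, abs(log10(x) - L))
print(worst)                         # comfortably below the 1e-6 target
```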
