
Chapter 11, Problem 6.

  (*** solve in class  ***) 

Apply Cholesky decomposition to the symmetric matrix

6 15 55  a 0   152.6 
15 55 225  a    585.6 
  1   
55 225 979 a 2  2488.8
  

In addition to solving for the Cholesky decomposition, employ it to solve for the a’s.

For a symmetric matrix, $[A] = [A]^T = [L][L]^T$, where $[L]$ is the lower-triangular matrix

$$[L] = \begin{bmatrix} l_{11} & & & \\ l_{21} & l_{22} & & \\ l_{31} & l_{32} & l_{33} & \\ l_{41} & l_{42} & l_{43} & l_{44} \end{bmatrix}$$

Its entries are given by the recurrence relations

$$l_{ki} = \frac{a_{ki} - \sum_{j=1}^{i-1} l_{ij} l_{kj}}{l_{ii}} \qquad \text{for } i = 1, 2, \ldots, k-1$$

$$l_{kk} = \sqrt{a_{kk} - \sum_{j=1}^{k-1} l_{kj}^2}$$

Chapter 11, Solution 6.

$$l_{11} = \sqrt{6} = 2.449$$

$$l_{21} = \frac{15}{2.449} = 6.1237$$

$$l_{22} = \sqrt{55 - 6.1237^2} = 4.18$$

$$l_{31} = \frac{55}{2.449} = 22.454$$

$$l_{32} = \frac{225 - 6.1237(22.454)}{4.18} = 20.92$$

$$l_{33} = \sqrt{979 - 22.454^2 - 20.92^2} = 6.11$$

Thus, the Cholesky decomposition is


 2.449 
[L ]  6.1237 4.18 

22.454 20.92 6.11

The solution can then be generated by first using forward substitution to modify the right-hand-side vector,

$$[L]\{D\} = \{B\}$$

which can be solved for

 62.3 
 
{D}   48.8 
11 .37 
 

Then, we can use back substitution to determine the final solution,

$$[L]^T\{X\} = \{D\}$$

which can be solved for

2.48
 
{X }  2.36
1.86 
 
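As a quick numerical check, the decomposition and both substitution sweeps can be sketched in plain Python. This is illustrative only; the function names and list-of-lists matrix layout are my own, not part of the original solution.

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L L^T, built from the recurrence relations."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for i in range(k):
            # off-diagonal entries: l_ki = (a_ki - sum l_ij * l_kj) / l_ii
            L[k][i] = (A[k][i] - sum(L[i][j] * L[k][j] for j in range(i))) / L[i][i]
        # diagonal entries: l_kk = sqrt(a_kk - sum l_kj^2)
        L[k][k] = math.sqrt(A[k][k] - sum(L[k][j] ** 2 for j in range(k)))
    return L

def solve_spd(A, b):
    """Solve A x = b via L y = b (forward substitution), then L^T x = y (back)."""
    n = len(b)
    L = cholesky(A)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[j][i] * x[j] for j in range(i + 1, n))) / L[i][i]
    return L, x

A = [[6.0, 15.0, 55.0], [15.0, 55.0, 225.0], [55.0, 225.0, 979.0]]
b = [152.6, 585.6, 2488.8]
L, x = solve_spd(A, b)
```

At full precision this gives x ≈ {2.4786, 2.3593, 1.8607}, matching the rounded values above.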

Chapter 17, Problem 5.

Use least-squares regression to fit a straight line to

x 6 7 11 15 17 21 23 29 29 37 39
y 29 21 29 14 21 15 7 7 13 0 3

Along with the slope and the intercept, compute the standard error of the estimate and the
correlation coefficient. Plot the data and the regression line. If someone made an
additional measurement of (x = 10, y = 10), would you suspect, based on a visual
assessment and the standard error, that the measurement was valid or faulty? Justify your
conclusion.

Chapter 17, Solution 5.

$y = a_0 + a_1 x$

n xi yi   xi  yi 11 * 2380  234 * 159


a1    0.78055
n xi2    xi  11 * 6262  (234) 2
2

using (1), a0 can be expressed as a0  y  a1 x  31.0589


$$S_t = \sum_{i=1}^{n} (y_i - \bar{y})^2 \qquad S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i)^2 \qquad r^2 = \frac{S_t - S_r}{S_t}$$
The results can be summarized as

$$y = 31.0589 - 0.78055x \qquad (s_{y/x} = 4.476306;\; r = 0.901489)$$

At x = 10, the best fit equation gives 23.2543. The line and data can be plotted along with the
point (10, 10).

The value of 10 is nearly 3 times the standard error away from the line,

23.2543 – 3(4.476306) = 9.824516

Thus, we can tentatively conclude that the value is probably erroneous. It should be noted that
the field of statistics provides related but more rigorous methods to assess whether such
points are “outliers.”
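The numbers above can be reproduced with a short Python sketch (illustrative only; the function name and variable names are mine):

```python
def linear_fit(x, y):
    """Least-squares line y = a0 + a1*x, plus standard error and correlation."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)           # slope
    a0 = sy / n - a1 * sx / n                                # intercept
    st = sum((v - sy / n) ** 2 for v in y)                   # total sum of squares
    sr = sum((v - a0 - a1 * u) ** 2 for u, v in zip(x, y))   # residual sum of squares
    syx = (sr / (n - 2)) ** 0.5    # standard error of the estimate
    r = ((st - sr) / st) ** 0.5    # magnitude of the correlation coefficient
    return a0, a1, syx, r

x = [6, 7, 11, 15, 17, 21, 23, 29, 29, 37, 39]
y = [29, 21, 29, 14, 21, 15, 7, 7, 13, 0, 3]
a0, a1, syx, r = linear_fit(x, y)
```

Evaluating the fitted line at x = 10 gives a0 + 10·a1 ≈ 23.25, so the hypothetical point (10, 10) sits roughly three standard errors below the line.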

Chapter 17, Problem 8 (*** solve in class  ***) 

Fit the following data with (a) a saturation-growth-rate model, (b) a power equation, and (c) a
parabola. In each case, plot the data and the equation.

x 0.75 2 3 4 6 8 8.5
y 1.2 1.95 2 2.4 2.4 2.7 2.6

Chapter 17, Solution 8


(The Excel file contains the computational details.)

x 1 3  x 1 1 3 1
y  3      *
Saturation Growth: 3  x y 3x y 3 3 x
1 1 1
(a) We regress 1/y versus 1/x to give  a 0  a 1  0.34154  0.36932
y x x

Therefore,  3 = 1/a0 = 1/0.3415 = 2.9279 and  3 = a1*3 = 0.3693 *2.9279 = 1.0813, and the
saturation-growth-rate model is

x x
y  3  2.9279
3  x 1.0813  x

The model and the data can then be plotted together for comparison (plot not reproduced here).

(b) Power equation: $y = \alpha_2 x^{\beta_2} \;\Rightarrow\; \log y = \beta_2 \log x + \log \alpha_2$

We regress $\log_{10} y$ versus $\log_{10} x$ to give

$$\log_{10} y = 0.1533 + 0.3114\log_{10} x$$

Therefore, $\alpha_2 = 10^{0.1533} = 1.4233$ and $\beta_2 = 0.3114$, and the power model is

$$y = 1.4233x^{0.3114}$$

The model and the data can then be plotted together for comparison (plot not reproduced here).
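Both linearized fits can be reproduced with a short Python sketch (illustrative only; the helper name straight_line and the variable names are mine):

```python
import math

def straight_line(x, y):
    """Least-squares intercept and slope for y = a0 + a1*x."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    a1 = (n * sum(u * v for u, v in zip(x, y)) - sx * sy) / \
         (n * sum(u * u for u in x) - sx * sx)
    return sy / n - a1 * sx / n, a1

x = [0.75, 2, 3, 4, 6, 8, 8.5]
y = [1.2, 1.95, 2, 2.4, 2.4, 2.7, 2.6]

# (a) saturation growth: regress 1/y on 1/x, then back-transform
a0, a1 = straight_line([1 / v for v in x], [1 / v for v in y])
alpha3 = 1 / a0        # saturation level
beta3 = a1 * alpha3    # half-saturation constant

# (b) power model: regress log10(y) on log10(x), then back-transform
b0, b1 = straight_line([math.log10(v) for v in x], [math.log10(v) for v in y])
alpha2 = 10 ** b0
beta2 = b1
```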

(c) Polynomial regression can be applied to develop a best-fit parabola


$$y = a_0 + a_1 x + a_2 x^2 \quad\text{which minimizes the error}\quad S_r = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - a_0 - a_1 x_i - a_2 x_i^2)^2$$

Equations to be solved:

$$\begin{bmatrix} n & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix} = \begin{Bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{Bmatrix}$$

Substituting the sums for this data set:

$$\begin{bmatrix} 7 & 32.3 & 201.8 \\ 32.3 & 201.8 & 1441.5 \\ 201.8 & 1441.5 & 10965.4 \end{bmatrix} \begin{Bmatrix} a_0 \\ a_1 \\ a_2 \end{Bmatrix} = \begin{Bmatrix} 15.3 \\ 78.5 \\ 511.9 \end{Bmatrix}$$

$$y = 0.990728 + 0.449901x - 0.03069x^2$$

The model and the data can then be plotted together for comparison (plot not reproduced here).
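The normal equations can be assembled and solved with a short Python sketch (illustrative only; the function name is mine, and naive Gaussian elimination is adequate for this small system):

```python
def fit_parabola(x, y):
    """Assemble and solve the 3x3 normal equations for y = a0 + a1*x + a2*x^2."""
    s = [sum(v ** k for v in x) for k in range(5)]                  # sums of x^0..x^4
    b = [sum(v ** k * w for v, w in zip(x, y)) for k in range(3)]   # sums of x^k * y
    A = [[s[i + j] for j in range(3)] for i in range(3)]
    # forward elimination
    for k in range(3):
        for i in range(k + 1, 3):
            f = A[i][k] / A[k][k]
            for j in range(k, 3):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    # back substitution
    a = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, 3))) / A[i][i]
    return a

x = [0.75, 2, 3, 4, 6, 8, 8.5]
y = [1.2, 1.95, 2, 2.4, 2.4, 2.7, 2.6]
a0, a1, a2 = fit_parabola(x, y)
```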


Chapter 18, Problem 6. (*** solve in class  ***) 

Repeat Probs. 18.1 through 18.3 using the Lagrange polynomial.

Chapter 18, Solution 6.

The first-order Lagrange interpolating polynomial is

$$f_1(x) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1)$$

and the second-order version is

$$f_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} f(x_0) + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} f(x_1) + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} f(x_2)$$
18.1
Estimate the common logarithm of 10 (log 10) using linear interpolation.
x0 = 8 f(x0) = 0.90309
x1 = 9 f(x1) = 0.95424
x2 = 11 f(x2) = 1.04139
x3 = 12 f(x3) = 1.07918

18.1 (a): Interpolate between log 8 = 0.9031 and log 12 = 1.0792

x0 = 8 f(x0) = 0.9031
x1 = 12 f(x1) = 1.0792

$$f_1(10) = \frac{10 - 12}{8 - 12}(0.9031) + \frac{10 - 8}{12 - 8}(1.0792) = 0.991$$

18.1 (b): Interpolate between log 9 = 0.95424 and log 11 = 1.04139.

x0 = 9 f(x0) = 0.95424
x1 = 11 f(x1) = 1.04139

$$f_1(10) = \frac{10 - 11}{9 - 11}(0.95424) + \frac{10 - 9}{11 - 9}(1.04139) = 0.9978$$

18.2:
Fit a second-order Lagrange polynomial to estimate log 10 using the data from Prob. 18.1 at
x = 8, 9, and 11.
x0 = 8 f(x0) = 0.90309
x1 = 9 f(x1) = 0.95424
x2 = 11 f(x2) = 1.04139

$$f_2(10) = \frac{(10 - 9)(10 - 11)}{(8 - 9)(8 - 11)}(0.90309) + \frac{(10 - 8)(10 - 11)}{(9 - 8)(9 - 11)}(0.95424) + \frac{(10 - 8)(10 - 9)}{(11 - 8)(11 - 9)}(1.04139) = 1.0003434$$

18.3:
Fit a third-order Lagrange polynomial to estimate log 10 using the data from Prob. 18.1

x0 = 8 f(x0) = 0.90309
x1 = 9 f(x1) = 0.95424
x2 = 11 f(x2) = 1.04139
x3 = 12 f(x3) = 1.07918

$$f_3(10) = \frac{(10 - 9)(10 - 11)(10 - 12)}{(8 - 9)(8 - 11)(8 - 12)}(0.90309) + \frac{(10 - 8)(10 - 11)(10 - 12)}{(9 - 8)(9 - 11)(9 - 12)}(0.95424)$$
$$\qquad + \frac{(10 - 8)(10 - 9)(10 - 12)}{(11 - 8)(11 - 9)(11 - 12)}(1.04139) + \frac{(10 - 8)(10 - 9)(10 - 11)}{(12 - 8)(12 - 9)(12 - 11)}(1.07918) = 1.0000449$$
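All three estimates can be reproduced with a generic Lagrange evaluator in Python (a sketch; the function name is mine):

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], fs[i]) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        term = fi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # Lagrange basis factor
        total += term
    return total

xs = [8, 9, 11, 12]
fs = [0.90309, 0.95424, 1.04139, 1.07918]          # log10 of 8, 9, 11, 12
first = lagrange([8, 12], [0.90309, 1.07918], 10)  # 18.1(a), endpoints only
second = lagrange(xs[:3], fs[:3], 10)              # 18.2, three points
third = lagrange(xs, fs, 10)                       # 18.3, all four points
```

Each added point pulls the estimate closer to the true value, log10(10) = 1.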

Chapter 18, Problem 8.

Employ inverse interpolation using a cubic interpolating polynomial and bisection to determine
the value of x that corresponds to f (x) = 0.23 for the following tabulated data:
x 2 3 4 5 6 7
f (x) 0.5 0.3333 0.25 0.2 0.1667 0.1429

Chapter 18, Solution 8.

The following points are used to generate a cubic interpolating polynomial

x0 = 3 f(x0) = 0.3333
x1 = 4 f(x1) = 0.25
x2 = 5 f(x2) = 0.2
x3 = 6 f(x3) = 0.1667

The polynomial can be generated in a number of ways, including Newton's divided-difference interpolating polynomial. The divided-difference table is

i    xi    f(xi)     First      Second     Third
0    3     0.3333    -0.0833    0.01665    -0.0027
1    4     0.25      -0.05      0.00835
2    5     0.2       -0.0333
3    6     0.1667

The result is:


f3(x) = 0.3333 + (x-3)(-0.0833) + (x-3)(x-4)(0.01665) + (x-3)(x-4)(x-5)(-0.0027)

If we simplify the above expression, we get:

$$f_3(x) = 0.943 - 0.3261833x + 0.0491x^2 - 0.00271667x^3$$

The roots problem can then be developed by setting this polynomial equal to the desired
value of 0.23

$$0 = 0.713 - 0.3261833x + 0.0491x^2 - 0.00271667x^3$$

Bisection can then be used to determine the root. Using initial guesses of xl = 4 and xu = 5, the
first five iterations are
i    xl        xu        xr        f(xl)     f(xr)     f(xl)·f(xr)    εa
1 4.00000 5.00000 4.50000 0.02000 -0.00811 -0.00016 11.11%
2 4.00000 4.50000 4.25000 0.02000 0.00504 0.00010 5.88%
3 4.25000 4.50000 4.37500 0.00504 -0.00174 -0.00001 2.86%
4 4.25000 4.37500 4.31250 0.00504 0.00160 0.00001 1.45%
5 4.31250 4.37500 4.34375 0.00160 -0.00009 0.00000 0.72%

If the iterations are continued, the final result is x = 4.34213.
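The root-location step can be sketched in Python using the simplified cubic above (the function names and tolerance are mine):

```python
def g(x):
    """Simplified cubic interpolant minus the target value 0.23."""
    return 0.713 - 0.3261833 * x + 0.0491 * x**2 - 0.00271667 * x**3

def bisect(f, xl, xu, tol=1e-6):
    """Plain bisection for a root bracketed by [xl, xu]."""
    while (xu - xl) / 2 > tol:
        xr = (xl + xu) / 2
        if f(xl) * f(xr) < 0:
            xu = xr   # sign change in the left half
        else:
            xl = xr   # sign change in the right half
    return (xl + xu) / 2

root = bisect(g, 4.0, 5.0)
```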

Chapter 21, Problem 4.

Integrate the following function analytically and using the trapezoidal rule,
with n = 1, 2, 3, and 4:

$$\int_1^2 \left( x + \frac{2}{x} \right)^2 dx$$

Use the analytical solution to compute true percent relative errors to evaluate the accuracy of
the trapezoidal approximations.
Chapter 21, Solution 4.

Analytical solution:

$$\int_1^2 \left( x + \frac{2}{x} \right)^2 dx = \left[ \frac{x^3}{3} + 4x - \frac{4}{x} \right]_1^2 = 8.33333$$

Trapezoidal rule for n segments:

$$I = h\frac{f(x_0) + f(x_1)}{2} + h\frac{f(x_1) + f(x_2)}{2} + \cdots + h\frac{f(x_{n-1}) + f(x_n)}{2} = \frac{b - a}{2n}\left[ f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n) \right]$$

99
For (n=1): I = ( 2  1) 9 t = (9 - 8.33333)/8.33333 = 8 %
2

For (n=2): I = (2-1)/4 [ 9+9+ 2*(1.5 + 2/1.5)2 ] = 8.513889


t = (8.5138-8.3333)/8.3333 = 2.16%

The results are summarized below:

n    Integral    εt
1    9           8%
2    8.513889    2.167%
3    8.415185    0.982%
4    8.379725    0.557%

Let’s apply Richardson’s Extrapolation to these results and see what kind of improvement we get
for the error:

$$I = \frac{4}{3}I_m - \frac{1}{3}I_l \qquad\qquad I = \frac{16}{15}I_m - \frac{1}{15}I_l$$

where $I_m$ is the more accurate and $I_l$ the less accurate of the two estimates being combined. Combining the n = 1, 2, and 4 results:

n    I with O(h2)    I with O(h4)    I with O(h6)    εt
1    9
2    8.513889        8.3517
3    8.415185
4    8.379725        8.3349          8.3338          0.006%
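Both the trapezoidal estimates and the extrapolation can be reproduced in a few lines of Python (a sketch; the function names are mine):

```python
def f(x):
    return (x + 2 / x) ** 2

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal segments."""
    h = (b - a) / n
    return (h / 2) * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

exact = 25 / 3  # analytical value, 8.33333...
i1 = trapezoid(f, 1, 2, 1)
i2 = trapezoid(f, 1, 2, 2)
i4 = trapezoid(f, 1, 2, 4)

# Richardson extrapolation: combine O(h^2) pairs into O(h^4), then O(h^6)
r12 = (4 * i2 - i1) / 3
r24 = (4 * i4 - i2) / 3
best = (16 * r24 - r12) / 15
```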

Chapter 13, Problem 9.
Employ the following methods to find the maximum of the function

$$f(x) = -x^4 - 2x^3 - 8x^2 - 5x$$

(a) Golden-section search ($x_l = -2$, $x_u = 1$, $\varepsilon_s = 1\%$).
(c) Newton's method ($x_0 = -1$, $\varepsilon_s = 1\%$).

Chapter 13, Solution 9.

(a) First, the golden ratio can be used to create the interior points,

5 1
d  (1  ( 2))  1.8541
2
x1  2  1.8541  0.1459
x 2  1  1.8541  0.8541

The function can be evaluated at the interior points

$$f(x_2) = f(-0.8541) = -0.8514 \qquad f(x_1) = f(-0.1459) = 0.565$$

Because f(x1) > f(x2), the maximum is in the interval defined by x2, x1 and xu where x1
is the optimum. The error at this point can be computed as

$$\varepsilon_a = (1 - 0.61803)\left| \frac{1 - (-2)}{-0.1459} \right| \times 100\% = 785.41\%$$

The process can be repeated and all the iterations summarized as


i    xl       f(xl)    x2       f(x2)    x1       f(x1)    xu       f(xu)    d       xopt     εa
1 -2 -22 -0.8541 -0.851 -0.1459 0.565 1 -16.000 1.8541 -0.1459 785.41%
2 -0.8541 -0.851 -0.1459 0.565 0.2918 -2.197 1 -16.000 1.1459 -0.1459 485.41%
3 -0.8541 -0.851 -0.4164 0.809 -0.1459 0.565 0.2918 -2.197 0.7082 -0.4164 105.11%
4 -0.8541 -0.851 -0.5836 0.475 -0.4164 0.809 -0.1459 0.565 0.4377 -0.4164 64.96%
5 -0.5836 0.475 -0.4164 0.809 -0.3131 0.833 -0.1459 0.565 0.2705 -0.3131 53.40%
6 -0.4164 0.809 -0.3131 0.833 -0.2492 0.776 -0.1459 0.565 0.1672 -0.3131 33.00%
7 -0.4164 0.809 -0.3525 0.841 -0.3131 0.833 -0.2492 0.776 0.1033 -0.3525 18.11%
8 -0.4164 0.809 -0.3769 0.835 -0.3525 0.841 -0.3131 0.833 0.0639 -0.3525 11.19%
9 -0.3769 0.835 -0.3525 0.841 -0.3375 0.840 -0.3131 0.833 0.0395 -0.3525 6.92%
10 -0.3769 0.835 -0.3619 0.839 -0.3525 0.841 -0.3375 0.840 0.0244 -0.3525 4.28%
11 -0.3619 0.839 -0.3525 0.841 -0.3468 0.841 -0.3375 0.840 0.0151 -0.3468 2.69%
12 -0.3525 0.841 -0.3468 0.841 -0.3432 0.841 -0.3375 0.840 0.0093 -0.3468 1.66%
13 -0.3525 0.841 -0.3490 0.841 -0.3468 0.841 -0.3432 0.841 0.0058 -0.3468 1.03%
14 -0.3490 0.841 -0.3468 0.841 -0.3454 0.841 -0.3432 0.841 0.0036 -0.3468 0.63%
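The search tabulated above can be sketched in Python (illustrative only; the function names are mine):

```python
import math

def f(x):
    # function to be maximized
    return -x**4 - 2 * x**3 - 8 * x**2 - 5 * x

def golden_max(f, xl, xu, es=0.01):
    """Golden-section search for a maximum; stops when the relative error < es."""
    R = (math.sqrt(5) - 1) / 2           # golden ratio, 0.61803...
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    while True:
        if f1 > f2:                      # maximum lies in [x2, xu]
            xopt, xl = x1, x2
            x2, f2 = x1, f1
            x1 = xl + R * (xu - xl)
            f1 = f(x1)
        else:                            # maximum lies in [xl, x1]
            xopt, xu = x2, x1
            x1, f1 = x2, f2
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
        ea = (1 - R) * abs((xu - xl) / xopt)
        if ea < es:
            return xopt, f(xopt)

xopt, fopt = golden_max(f, -2.0, 1.0)
```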
(c) The first and second derivatives of the function can be evaluated as
$$f'(x) = -4x^3 - 6x^2 - 16x - 5$$

$$f''(x) = -12x^2 - 12x - 16$$

which can be substituted into Eq. (13.8) to give

$$x_{i+1} = x_i - \frac{-4x_i^3 - 6x_i^2 - 16x_i - 5}{-12x_i^2 - 12x_i - 16} = -1 - \frac{9}{-16} = -0.4375$$
which has a function value of 0.787094. The second iteration gives –0.34656, which
has a function value of 0.840791. At this point, an approximate error can be computed
as a = 128.571%. The process can be repeated, with the results tabulated below:

i    x          f(x)       f'(x)      f''(x)     εa


0 -1 -2 9 -16
1 -0.4375 0.787094 1.186523 -13.0469 128.571%
2 -0.34656 0.840791 -0.00921 -13.2825 26.242%
3 -0.34725 0.840794 -8.8E-07 -13.28 0.200%

Thus, within three iterations, the result is converging on the true value of
f(x) = 0.840794 at x = –0.34725.
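The Newton iteration can be sketched in Python (a sketch; the function name and iteration cap are mine):

```python
def newton_max(x0, es=0.01, maxit=50):
    """Newton's method on f'(x) = 0 for f(x) = -x^4 - 2x^3 - 8x^2 - 5x."""
    x = x0
    for _ in range(maxit):
        d1 = -4 * x**3 - 6 * x**2 - 16 * x - 5   # f'(x)
        d2 = -12 * x**2 - 12 * x - 16            # f''(x), negative near the maximum
        x_new = x - d1 / d2
        ea = abs((x_new - x) / x_new)
        x = x_new
        if ea < es:
            break
    return x

xstar = newton_max(-1.0)
fstar = -xstar**4 - 2 * xstar**3 - 8 * xstar**2 - 5 * xstar
```

Because f''(x) stays negative near the optimum, each step moves toward a maximum rather than a minimum.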
