Physics 75.502/487
Computational Physics
Fall/Winter 1998/99
Dean Karlen
Department of Physics
Carleton University
Part I: Introduction
Part II: Numerical Methods
Part III: Monte Carlo Techniques
Part IV: Statistics for Physicists
Dean Karlen/Carleton University, Rev. 1.3, 1998/99
Topics:
Linear Algebra
Interpolation and Extrapolation
Integration
Root Finding
Minimization or Maximization
Differential Equations
References:
Numerical Recipes (in Fortran or C): The Art of Scientific Computing, Second Edition. W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery, Cambridge University Press, 1992.
Numerical Methods for Physics, A.L. Garcia, Prentice Hall, 1994.
General Problem
There are N unknowns x_j and M equations:
$$\sum_{j=1}^{N} a_{ij} x_j = b_i, \qquad i = 1, \ldots, M$$
If N = M there can be a solution, unless there is row or column degeneracy (i.e. the matrix is singular).
Numerical solutions to this problem can have additional problems:
equations are so close to being singular that roundoff error renders them singular, and hence the algorithm fails
equations are close to being singular and N is so large that roundoff errors accumulate and swamp the result
Limits on N, if the system is not close to singular:
32 bit → N up to around 50
64 bit → N up to a few hundred (CPU limited)
If the coefficients are sparse, then N of 1000 or more can be handled by special methods.
Common Mistake
[Figure: a logical matrix a_ij stored inside a Fortran array with larger physical dimensions; only part of the array holds matrix elements, so routines must be told both the logical size n and the physical dimension np.]
Gauss-Jordan Elimination
+ an efficient method for inverting A
− three times slower than alternative methods that do not produce A^{-1}
− not recommended as a general-purpose method
Method without pivoting
Perform operations that transform A into the identity
matrix:
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}$$

Divide the first row by $a_{11}$:

$$\begin{pmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1/a_{11} \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}$$
Subtract $a_{i1}$ times the first row from rows $i = 2, 3, 4$:

$$\begin{pmatrix} 1 & a_{12}/a_{11} & a_{13}/a_{11} & a_{14}/a_{11} \\ 0 & a_{22} - a_{12}\frac{a_{21}}{a_{11}} & a_{23} - a_{13}\frac{a_{21}}{a_{11}} & a_{24} - a_{14}\frac{a_{21}}{a_{11}} \\ 0 & a_{32} - a_{12}\frac{a_{31}}{a_{11}} & a_{33} - a_{13}\frac{a_{31}}{a_{11}} & a_{34} - a_{14}\frac{a_{31}}{a_{11}} \\ 0 & a_{42} - a_{12}\frac{a_{41}}{a_{11}} & a_{43} - a_{13}\frac{a_{41}}{a_{11}} & a_{44} - a_{14}\frac{a_{41}}{a_{11}} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1/a_{11} \\ b_2 - b_1\frac{a_{21}}{a_{11}} \\ b_3 - b_1\frac{a_{31}}{a_{11}} \\ b_4 - b_1\frac{a_{41}}{a_{11}} \end{pmatrix}$$
Repeating the procedure for the second column gives

$$\begin{pmatrix} 1 & 0 & a'_{13} & a'_{14} \\ 0 & 1 & a'_{23} & a'_{24} \\ 0 & 0 & a'_{33} & a'_{34} \\ 0 & 0 & a'_{43} & a'_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b'_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{pmatrix}$$

and continuing through the remaining columns reduces A to the identity, leaving the solution on the right-hand side.
Pivoting: choosing the row with the largest available element as the pivot works quite well.
Implementation
To minimize storage requirements:
Use b to build up the solution: there is no need to have a separate array.
Similarly, the inverse can be built up in the input matrix.
The disadvantage is that the input matrix and RHS vector are destroyed by the operation.
Numerical Recipes:
SUBROUTINE gaussj(a,n,np,b,m,mp)
where
a is an n × n matrix in an array of physical dimension np × np
b is an n × m matrix in an array of physical dimension np × mp
Note that a is replaced by its inverse, and b by its solutions.
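The course routines are the Numerical Recipes Fortran versions; purely for illustration, here is a minimal Python sketch of Gauss-Jordan elimination with partial pivoting that, like gaussj, produces the inverse and the solutions. All names are hypothetical.

```python
def gauss_jordan(a, b):
    """Gauss-Jordan elimination with partial pivoting.

    a: n x n matrix (list of lists); b: n x m right-hand sides.
    Returns (inverse of a, solutions), mirroring what gaussj builds up.
    """
    n = len(a)
    inv = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    a = [row[:] for row in a]
    b = [row[:] for row in b]
    for col in range(n):
        # Partial pivoting: pick the row with the largest element in this column.
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        if a[piv][col] == 0.0:
            raise ValueError("singular matrix")
        for mat in (a, inv, b):
            mat[col], mat[piv] = mat[piv], mat[col]
        # Normalize the pivot row.
        scale = 1.0 / a[col][col]
        for mat in (a, inv, b):
            mat[col] = [x * scale for x in mat[col]]
        # Eliminate this column from all other rows.
        for r in range(n):
            if r == col:
                continue
            f = a[r][col]
            for mat in (a, inv, b):
                mat[r] = [x - f * y for x, y in zip(mat[r], mat[col])]
    return inv, b
```

Carrying the identity matrix alongside is how the inverse appears "for free"; the NR routine saves that storage by overwriting a in place.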
Gaussian Elimination with Backsubstitution
Gaussian elimination stops once the system has been reduced to upper triangular form:

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ 0 & a'_{22} & a'_{23} & a'_{24} \\ 0 & 0 & a'_{33} & a'_{34} \\ 0 & 0 & 0 & a'_{44} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b'_2 \\ b'_3 \\ b'_4 \end{pmatrix}$$
Pivoting is important for this method also.
To solve for the $x_i$, backsubstitute:
$$x_4 = \frac{b'_4}{a'_{44}}, \qquad x_3 = \frac{1}{a'_{33}}\left[ b'_3 - x_4 a'_{34} \right], \qquad \text{etc.}$$
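As an illustration (Python rather than the course's Fortran; names are hypothetical), back-substitution on an upper-triangular system can be sketched as:

```python
def back_substitute(u, b):
    """Solve U x = b for upper-triangular U by back-substitution.

    Works from the last row upward: x_n = b_n / u_nn, then each
    earlier x_i uses the already-computed later components.
    """
    n = len(u)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / u[i][i]
    return x
```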
LU decomposition
Writing A = LU, with L lower triangular and U upper triangular, represents N² equations for the N² + N unknown elements of L and U. Arbitrarily set the diagonal terms $\ell_{ii} = 1$ to define a unique solution.
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ \ell_{21} & 1 & 0 & 0 \\ \ell_{31} & \ell_{32} & 1 & 0 \\ \ell_{41} & \ell_{42} & \ell_{43} & 1 \end{pmatrix} \begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}$$
The terms in L and U can be determined as follows:
$$u_{11} = a_{11}, \qquad u_{12} = a_{12}$$
$$\ell_{21} = \frac{a_{21}}{u_{11}}, \qquad \ell_{31} = \frac{a_{31}}{u_{11}}, \qquad \ell_{41} = \frac{a_{41}}{u_{11}}$$
$$u_{22} = a_{22} - \ell_{21} u_{12}$$
$$\ell_{32} = \frac{1}{u_{22}}\left( a_{32} - \ell_{31} u_{12} \right), \qquad \ell_{42} = \frac{1}{u_{22}}\left( a_{42} - \ell_{41} u_{12} \right)$$
$$u_{13} = a_{13}, \qquad u_{23} = a_{23} - \ell_{21} u_{13}$$
$$u_{33} = a_{33} - \ell_{31} u_{13} - \ell_{32} u_{23}$$
etc.
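These recurrences translate directly into code. A minimal Python sketch of the scheme with $\ell_{ii} = 1$ (illustrative only; the NR routine ludcmp also pivots and packs L and U into one array):

```python
def lu_decompose(a):
    """LU decomposition with unit-diagonal L (no pivoting): A = L U.

    Returns (L, U) as separate matrices for clarity; each row of U and
    column of L comes straight from the recurrences above.
    """
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = a[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (a[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```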
Numerical Recipes:
SUBROUTINE ludcmp(a,n,np,indx,d)
where
a is an n × n matrix in an array of physical dimension np × np
indx,d keep track of rows permuted by pivoting
Note that a is replaced by
$$\begin{pmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ \ell_{21} & u_{22} & u_{23} & u_{24} \\ \ell_{31} & \ell_{32} & u_{33} & u_{34} \\ \ell_{41} & \ell_{42} & \ell_{43} & u_{44} \end{pmatrix}$$
backsubstitution:
SUBROUTINE lubksb(a,n,np,indx,b)
where
a, indx are the results from the call to ludcmp
b is RHS on input, is solution on output
Note that a and indx are not modified by this routine, so lubksb can be called repeatedly with different right-hand sides.
The determinant follows directly from the decomposition:
$$\det(A) = \prod_{i=1}^{N} u_{ii}$$
(up to the sign d returned by ludcmp, which tracks row interchanges).
Numerical Recipes also provides SUBROUTINE mprove for iterative improvement of a solution obtained from ludcmp and lubksb.
Singular value decomposition (A = U W V^T, with W = diag(w_i)):
nullity = number of zero w_i's
The columns of U with non-zero w_i's span the range.
The columns of V with zero w_i's span the nullspace.
See SUBROUTINE tridag for tridiagonal systems.
Other forms of sparse matrices have special methods. See
Numerical Recipes for details.
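Tridiagonal systems need only O(N) work, via forward elimination followed by back-substitution (the Thomas algorithm, which is what the NR tridiagonal solver implements). A Python sketch, assuming diagonals a (sub), b (main), c (super) and right-hand side r:

```python
def tridiag_solve(a, b, c, r):
    """Solve a tridiagonal system in O(N).

    a[0] and c[-1] are unused. Forward sweep eliminates the
    subdiagonal; the back sweep recovers the solution.
    """
    n = len(b)
    cp = [0.0] * n   # modified superdiagonal
    rp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    rp[0] = r[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        rp[i] = (r[i] - a[i] * rp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = rp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = rp[i] - cp[i] * x[i + 1]
    return x
```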
Exercise 1
[Figure: a network of 5 nodes at voltages V_1, ..., V_5, with each pair of nodes (i, j) connected by a resistor R_ij, driven by a voltage source V.]
The node equations follow from current conservation; for example, at node 4,
$$0 = \sum_{i=1}^{5} (V_4 - V_i)\,\frac{1}{R_{4i}}$$
where $1/R_{ii} = 0$.
In order to solve for the four unknowns (I, V_2, V_3, and V_4), one can rearrange the last three equations, and identify V_1 = V and V_5 = 0.
Write a program that solves this problem for any number of voltage points between 3 and 50. Consider the special case where V = 1 Volt, all resistors are present, and the value of each resistor is given by R_ij = |i − j|. Plot the current drawn as a function of the number of voltage points.
Solution:
[Figure: current drawn as a function of the number of voltage points, 0 to 50 on the horizontal axis, roughly 1 to 2 on the vertical axis.]
Interpolation and Extrapolation
General Problem
Given a table of values y(x_i), i = 1, ..., N, estimate y(x) for arbitrary x.
graphically: drawing a smooth curve through the points
different from fitting: tabulated values have no errors, so the curve should go through all points
most commonly used curves are polynomials
Methods
1) Determine the interpolating function using a set of points (x_i, y(x_i)), then evaluate the function at the point x.
→ not recommended:
inefficient
roundoff error
no error estimate
2) Start from y(x_i) for x_i close to x, and add corrections from x_j further away. Successive corrections should decrease, and the size of the last correction can be used as an estimate of the error.
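Method 2 is the idea behind Neville's algorithm (which the NR routine polint uses). A Python sketch, illustrative only, that also returns the last correction as an error estimate:

```python
def neville(xa, ya, x):
    """Polynomial interpolation by Neville's algorithm.

    Returns (y, dy): the interpolated value at x and the size of
    the last correction applied, usable as an error estimate.
    """
    n = len(xa)
    p = list(ya)          # current column of the Neville tableau
    y = p[0]
    dy = 0.0
    for m in range(1, n):
        for i in range(n - m):
            p[i] = ((x - xa[i + m]) * p[i] + (xa[i] - x) * p[i + 1]) \
                   / (xa[i] - xa[i + m])
        dy = p[0] - y     # correction added at this order
        y = p[0]
    return y, dy
```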
Polynomial Interpolation
Numerical Recipes:
SUBROUTINE polint(xa,ya,n,x,y,dy)
returns:
y is the estimate of y (x) given n tabulated entries
in the arrays xa(n), ya(n)
dy is the last correction applied, and can be used as
an error estimate
Rational function interpolation uses the ratio of two polynomials:
$$R(x) = \frac{P_\mu(x)}{Q_\nu(x)} = \frac{p_0 + p_1 x + \cdots + p_\mu x^\mu}{q_0 + q_1 x + \cdots + q_\nu x^\nu}$$
Numerical Recipes:
SUBROUTINE ratint(xa,ya,n,x,y,dy)
[Figures: examples of polynomial interpolation with polint on 0 ≤ x ≤ 10; vertical scale roughly 0.6 to 2.4.]
Example of Rational Interpolation
[Figures: the same tabulated points interpolated with ratint on 0 ≤ x ≤ 10.]
Cubic Spline Interpolation
Start from linear interpolation between tabulated points:
[Figure: the interval from $x_j$ to $x_{j+1}$ with values $y_j$ and $y_{j+1}$.]
$$x = f\,x_{j+1} + (1 - f)\,x_j$$
$$y = f\,y_{j+1} + (1 - f)\,y_j, \qquad 0 \le f \le 1$$
This linear interpolation function has $y'' = 0$ in the interval and typically undefined $y''$ at the end points.
Add cubic correction terms $g\,y''_{j+1} + h\,y''_j$ to the linear interpolation, with
$$g = \tfrac{1}{6}\, f (f - 1)(f + 1)\,(x_{j+1} - x_j)^2$$
$$h = \tfrac{1}{6}\, f (f - 1)(2 - f)\,(x_{j+1} - x_j)^2$$
The additional terms are clearly zero at the endpoints (f = 0, f = 1), and it is easily shown that
$$y'' = f\, y''_{j+1} + (1 - f)\, y''_j.$$
One problem: the $y''_i$ are typically not known...
Requiring the first derivative to be continuous across interval boundaries leads to the equations
$$\frac{x_j - x_{j-1}}{6}\, y''_{j-1} + \frac{x_{j+1} - x_{j-1}}{3}\, y''_j + \frac{x_{j+1} - x_j}{6}\, y''_{j+1} = \frac{y_{j+1} - y_j}{x_{j+1} - x_j} - \frac{y_j - y_{j-1}}{x_j - x_{j-1}}$$
a tridiagonal system for the unknown $y''_j$.
Numerical Recipes:
SUBROUTINE spline(x,y,n,yp1,ypn,y2)
where
yp1 and ypn are to contain the first derivatives at the endpoints. If they are larger than 10^30, zero second derivatives on the boundary are assumed instead (a "natural" spline).
y2a is the (returned) array of second derivatives
Exercise 2:
Use a natural cubic spline to interpolate between tabulated points for x = 0, 1, ..., 10 from the function shown on page 26. Show the results in a table, plot the interpolation function and compare to the original function.
Interpolation in two dimensions: the simplest approach is bilinear interpolation on the grid square containing the point.
[Figure: grid square between x1a(j) and x1a(j+1) in the first coordinate, at x2a(k) in the second, with corner values $y_1, y_2, y_3, y_4$ (counterclockwise from the lower left) and fractional coordinates $t, u \in [0, 1]$.]
$$y(x_1, x_2) = (1 - t)(1 - u)\,y_1 + t(1 - u)\,y_2 + t u\,y_3 + (1 - t) u\,y_4$$
This results in a continuous interpolation function, but the gradient is discontinuous at the boundaries of each square.
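The bilinear formula in Python (a trivial sketch; argument names are illustrative):

```python
def bilinear(y1, y2, y3, y4, t, u):
    """Bilinear interpolation inside one grid square.

    y1..y4 are the corner values (counterclockwise from the lower
    left); t and u are the fractional coordinates in [0, 1].
    """
    return ((1 - t) * (1 - u) * y1 + t * (1 - u) * y2
            + t * u * y3 + (1 - t) * u * y4)
```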
Bicubic Interpolation
This method requires additional information for all the
tabulated points:
$$\frac{\partial y}{\partial x_1}, \qquad \frac{\partial y}{\partial x_2}, \qquad \frac{\partial^2 y}{\partial x_1 \partial x_2}$$
Numerical Recipes:
SUBROUTINE bcucof(y,y1,y2,y12,d1,d2,c)
Bicubic Spline
Use the 1D natural cubic spline interpolation function to
determine the derivatives needed for bicubic interpolation.
Numerical Recipes:
SUBROUTINE splie2(x1a,x2a,ya,m,n,y2a)
SUBROUTINE splin2(x1a,x2a,ya,y2a,m,n,x1,x2,y)
Integration of Functions
Concentrate on 1D integrals: $I = \int_a^b f(x)\,dx$.
Classical methods
Not recommended, but they have been around a long time. Divide x into equal intervals:
$$x_i = x_0 + i h, \quad i = 0, 1, \ldots, N+1, \qquad f_i = f(x_i)$$
To evaluate $I = \int_{x_0}^{x_{N+1}} f(x)\,dx$, one can use a
closed formula: $I = F(f_0, f_1, \ldots, f_{N+1})$
or an open formula: $I = F(f_1, f_2, \ldots, f_N)$
Open formulas are especially useful if the function is poorly behaved at one or both endpoints of the integral.
Closed Formulas
Trapezoidal rule:
$$\int_{x_1}^{x_2} f(x)\,dx = h \left[ \tfrac{1}{2} f_1 + \tfrac{1}{2} f_2 \right] + O(h^3 f'')$$
is exact for linear functions.
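A minimal Python sketch of the extended (composite) trapezoidal rule over n equal intervals; qtrap, below, refines this iteratively:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal intervals:
    endpoints get weight h/2, interior points weight h."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s
```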
Extended Simpson's rule:
$$\int_{x_1}^{x_N} f(x)\,dx = h\left[ \tfrac{1}{3} f_1 + \tfrac{4}{3} f_2 + \tfrac{2}{3} f_3 + \cdots + \tfrac{4}{3} f_{N-1} + \tfrac{1}{3} f_N \right] + O\!\left(\tfrac{1}{N^4}\right)$$
Extrapolation Formulas
$$\int_{x_0}^{x_1} f(x)\,dx = h f_1 + O(h^2 f')$$
$$\int_{x_0}^{x_1} f(x)\,dx = h\left[ \tfrac{3}{2} f_1 - \tfrac{1}{2} f_2 \right] + O(h^3 f'')$$
$$\int_{x_0}^{x_1} f(x)\,dx = h\left[ \tfrac{23}{12} f_1 - \tfrac{16}{12} f_2 + \tfrac{5}{12} f_3 \right] + O(h^4 f''')$$
Extended midpoint rule:
$$\int_{x_0}^{x_{N+1}} f(x)\,dx = h\left[ f_{1/2} + f_{3/2} + \cdots + f_{N+1/2} \right] + O\!\left(\tfrac{1}{N^2}\right)$$
Elementary Algorithms
The method is then to evaluate I_1, I_2, I_3, ... and stop when |(I_{j+1} − I_j)/I_j| < tolerance.
Numerical Recipes:
SUBROUTINE qtrap(func,a,b,s)
To be even more efficient, use the fact that the error in the trapezoidal method is even in 1/N:
$$I = \int_a^b f(x)\,dx = S_N + \frac{\alpha}{N^2} + \frac{\beta}{N^4} + \cdots$$
$$I = S_{2N} + \frac{\alpha}{4N^2} + \frac{\beta}{16N^4} + \cdots$$
You can cancel out the 1/N² error:
$$I = \tfrac{4}{3} S_{2N} - \tfrac{1}{3} S_N - \frac{\beta}{4N^4} + \cdots$$
and so this formula is accurate to order 1/N⁴. In fact this is just the Simpson rule!
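One can check numerically that the combination (4 S_2N − S_N)/3 behaves like Simpson's rule. A sketch, with a composite trapezoid helper defined inline (names are illustrative):

```python
def trap(f, a, b, n):
    """Composite trapezoidal estimate S_n."""
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def romberg_step(f, a, b, n):
    """First Richardson extrapolation of the trapezoidal rule:
    cancels the 1/N^2 error term, reproducing Simpson's rule."""
    return (4.0 * trap(f, a, b, 2 * n) - trap(f, a, b, n)) / 3.0
```

Simpson's rule is exact for cubics, so the extrapolated trapezoid estimate is too.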
Numerical Recipes:
SUBROUTINE qsimp(func,a,b,s)
SUBROUTINE qromb(func,a,b,s)
Improper Integrals
If the integrand is poorly behaved at the endpoints, the
extended midpoint rule can be used instead of the
trapezoidal rule, and Romberg integration can again be
performed:
Numerical Recipes:
SUBROUTINE qromo(func,a,b,s,choose)
For an integral with an infinite limit, a change of variable can be made analytically, or it could be handled automatically:
call qromo(func,a,b,s,midinf)
Special cases
Integrands with power-law singularities at the upper or lower limits:
$$f(x) \sim (x - a)^{-\gamma}, \qquad 0 < \gamma < 1$$
then let $t = (x - a)^{1-\gamma}$. For $\gamma = \tfrac{1}{2}$, then
$$\int_a^b f(x)\,dx = \int_0^{\sqrt{b-a}} 2 t\, f(a + t^2)\,dt$$
Numerical Recipes:
call qromo(func,a,b,s,midsql)
Gaussian Quadrature
Methods presented so far involve breaking the range into N equal intervals, evaluating the integrand at the interval boundaries, and forming the sum
$$I = \sum_{i=1}^{N} \omega_i f_i$$
where the weights $\omega_i$ depend on the order of the calculation. Polynomials of that order or less are handled exactly by these methods.
Gaussian quadrature instead chooses the abscissas $x_i$ as well as the weights:
$$I = \int_a^b f(x)\,dx \approx \sum_{i=1}^{N} \omega_i f(x_i)$$
which will be exact for a polynomial with order < 2N.
General Idea:
Consider the set of polynomials orthogonal over a weight function W(x):
$$\langle p_i | p_j \rangle = \int_a^b W(x)\, p_i(x)\, p_j(x)\,dx = c_i\, \delta_{ij}$$
For an N-point Gaussian quadrature, the $x_i$ are the roots of $p_N(x)$ (all between a and b).
Numerical Recipes provides several routines for special choices of W(x). For example, W(x) = 1 corresponds to Gauss-Legendre quadrature. Use the following routine:
SUBROUTINE gauleg(x1,x2,x,w,n)
where x1 and x2 are the limits and n the number of points. The routine returns the abscissas and weights in x(1:n) and w(1:n).
Gaussian Quadrature routines:
Gauss-Legendre: W(x) = 1, −1 < x < 1, gauleg(x1,x2,x,w,n)
Gauss-Laguerre: W(x) = x^α e^{−x}, 0 < x < ∞, gaulag(x,w,n,alf)
Gauss-Hermite: W(x) = e^{−x²}, −∞ < x < ∞, gauher(x,w,n)
Gauss-Jacobi: W(x) = (1 − x)^α (1 + x)^β, −1 < x < 1, gaujac(x,w,n,alf,bet)
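For illustration, a Python sketch of the gauleg idea on [−1, 1]: find the roots of the Legendre polynomial P_n by Newton iteration and form the weights from the standard formula $w_i = 2 / [(1 - x_i^2) P_n'(x_i)^2]$. All names here are hypothetical.

```python
import math

def gauss_legendre(n):
    """Nodes and weights for n-point Gauss-Legendre quadrature on [-1, 1]."""
    xs, ws = [], []
    for i in range(n):
        x = math.cos(math.pi * (i + 0.75) / (n + 0.5))   # initial guess
        for _ in range(100):
            # Legendre recurrence: (j+1) P_{j+1} = (2j+1) x P_j - j P_{j-1}
            p0, p1 = 1.0, x
            for j in range(1, n):
                p0, p1 = p1, ((2 * j + 1) * x * p1 - j * p0) / (j + 1)
            dp = n * (x * p1 - p0) / (x * x - 1.0)        # P_n'(x)
            dx = p1 / dp                                  # Newton step
            x -= dx
            if abs(dx) < 1e-14:
                break
        xs.append(x)
        ws.append(2.0 / ((1.0 - x * x) * dp * dp))
    return xs, ws
```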
Multidimensional Integrals
Difficult for two reasons:
the number of function evaluations for an integral in N dimensions scales as the number of points per dimension raised to the power N
the boundary (an N − 1 dimensional surface) may be complicated.
As long as high precision is not required, Monte Carlo integration is usually the easiest to implement, especially if the boundary is complicated.
For smooth functions to be integrated over a region with a
simple boundary, repeated one dimensional integration can
be performed:
$$I = \iiint f(x, y, z)\,dx\,dy\,dz = \int_{x_1}^{x_2} dx \int_{y_1(x)}^{y_2(x)} dy \int_{z_1(x,y)}^{z_2(x,y)} dz\, f(x, y, z) = \int_{x_1}^{x_2} H(x)\,dx$$
where H(x) is given by
$$H(x) = \int_{y_1(x)}^{y_2(x)} dy \int_{z_1(x,y)}^{z_2(x,y)} dz\, f(x, y, z) = \int_{y_1(x)}^{y_2(x)} G(x, y)\,dy$$
where
$$G(x, y) = \int_{z_1(x,y)}^{z_2(x,y)} f(x, y, z)\,dz$$
One-dimensional integration is then applied at each level. Since Fortran 77 routines cannot call themselves recursively, each subprogram calls a different version (copy) of the integrator. If recursion is allowed:
call qgaus(H,x1,x2,s)
      FUNCTION H(xx)
      COMMON /xyz/ x,y,z
      x=xx
      call qgaus(G,y1(x),y2(x),s)
      H=s
      return
      END

      FUNCTION G(yy)
      COMMON /xyz/ x,y,z
      y=yy
      call qgaus(F,z1(x,y),z2(x,y),s)
      G=s
      return
      END

      FUNCTION F(zz)
      COMMON /xyz/ x,y,z
      z=zz
      F=func(x,y,z)
      return
      END
Exercise #3
The convolution of an exponential decay and a Gaussian resolution function is given by:
$$f(t) = \int_0^\infty \frac{e^{-(t - t')^2/2\sigma_t^2}}{\sqrt{2\pi}\,\sigma_t}\, \frac{e^{-t'/\tau}}{\tau}\, dt'$$
Evaluate this integral using the Gauss-Laguerre quadrature, with $\alpha = 0$, for $\tau = 1$, $\sigma_t = 0.5$, and $t = -2, -1.5, \ldots, 5.5, 6$. Use $N = 5, 10, 15, 20$ and compare to the analytic solution:
$$f(t) = \frac{1}{2\tau} \exp\left( \frac{\sigma_t^2}{2\tau^2} - \frac{t}{\tau} \right) \operatorname{erfc}\left( \frac{\sigma_t}{\sqrt{2}\,\tau} - \frac{t}{\sqrt{2}\,\sigma_t} \right)$$
For the second part of the exercise, do not substitute the analytical solution for f(t), but instead perform the double integral using qromb and gaulag.
Solution
[Figure: f(t) vs. t for −2 ≤ t ≤ 6, comparing the N = 5, 10, 15, 20 Gauss-Laguerre results with the true curve; vertical scale roughly 0.05 to 0.5.]
Root Finding
It can be difficult to find a bracketed region if two roots are near each other.
Bracketing
[Figure: examples of intervals (a, b) that do and do not bracket a root.]
SUBROUTINE zbrak(func,x1,x2,n,xb1,xb2,nb)
This routine breaks the range (x1, x2) into n intervals and returns the number nb and the ranges xb1(1:nb), xb2(1:nb) of those intervals that bracket roots. On input nb specifies the maximum number sought.
Bisection
Numerical Recipes:
FUNCTION rtbis(func,x1,x2,xacc)
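A Python sketch of bisection on a bracketed root (illustrative; rtbis is the Fortran version):

```python
def bisect(f, x1, x2, xacc):
    """Find a root of f in the bracket [x1, x2] by bisection.

    Each step halves the bracket, so the error shrinks by a factor
    of 2 per iteration regardless of how badly f behaves.
    """
    f1, f2 = f(x1), f(x2)
    if f1 * f2 > 0:
        raise ValueError("root must be bracketed")
    while abs(x2 - x1) > xacc:
        xm = 0.5 * (x1 + x2)
        if f1 * f(xm) <= 0:
            x2 = xm
        else:
            x1, f1 = xm, f(xm)
    return 0.5 * (x1 + x2)
```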
[Figure: the false position method (which keeps the bracket) and the secant method (which uses the two most recent points), each choosing the next abscissa from a straight line through two points.]
Ridders Method
[Figure: a bracketed root with points x1 and x2, midpoint x3, and function values y1, y2, y3.]
Fitting the residuals to an exponential gives the update
$$x_4 = x_3 + (x_3 - x_1)\, \frac{\operatorname{sign}[y_1 - y_2]\; y_3}{\sqrt{y_3^2 - y_1 y_2}}$$
Since the bracket is maintained, it is a robust method, and the convergence is superlinear, $m = \sqrt{2}$.
Brent Method
Rather than using a linear interpolation, as in the secant method, a quadratic interpolation is made. Checks are made to ensure the method is converging rapidly, and if not, a bisection step is made. It is thus both robust and fast.
In the following examples, it is seen that the false position method can sometimes be slow to converge, and the secant method sometimes fails.
[Figures: iteration histories of the false position, secant, Ridders, and Brent methods on the test functions erf((x-4)*5)+x/100+0.9, erf((x-4)*5)-x/100+0.9, erf((x-4)*5)+2*exp(cos(x*4)/100)+x/100.-1.1, and cos(x*10-5)/3+erf(x-4); the numbers of function evaluations needed differ markedly between methods.]
Newton-Raphson Method
To guard against stepping outside the bracketed region and against infinite loops, rtsafe uses the bisection method in addition:
FUNCTION rtsafe(funcd,x1,x2,xacc)
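A Python sketch of the rtsafe idea (names are hypothetical): take a Newton step when it stays inside the current bracket, otherwise fall back on bisection.

```python
def newton_safe(f, df, x1, x2, xacc, maxit=100):
    """Newton-Raphson with a bisection safeguard.

    Maintains the bracket [x1, x2] and replaces any Newton step that
    would leave it with a bisection step.
    """
    f1, f2 = f(x1), f(x2)
    if f1 * f2 > 0:
        raise ValueError("root must be bracketed")
    if f1 > 0:                        # orient so f(x1) < 0 < f(x2)
        x1, x2 = x2, x1
    x = 0.5 * (x1 + x2)
    for _ in range(maxit):
        fx, dfx = f(x), df(x)
        xn = x - fx / dfx if dfx != 0 else None
        if xn is None or not (min(x1, x2) < xn < max(x1, x2)):
            xn = 0.5 * (x1 + x2)      # Newton step rejected: bisect
        if fx < 0:                    # update the bracket
            x1 = x
        else:
            x2 = x
        if abs(xn - x) < xacc:
            return xn
        x = xn
    return x
```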
Newton-Raphson and Fractals
[Figure: Newton-Raphson applied in the complex plane to z³ = 1, with z = x + iy; the surface SQRT((X**3-3*X*Y**2-1)**2+(3*X**2*Y-Y**3)**2) (the magnitude |z³ − 1|) and the fractal basins of attraction over −2 ≤ x, y ≤ 2.]
Roots of Polynomials
After finding one root r, deflate the polynomial: Q(x) = P(x) / (x − r).
You can use poldiv(u,n,v,nv,q,r) to do this division, but
the successive roots can be susceptible to rounding errors.
It is recommended to always polish them up, by using them
as initial guesses with the original function P (x).
Note: you should never evaluate the polynomial,
P(x) = c_1 + c_2 x + c_3 x^2 + c_4 x^3 + c_5 x^4
as
p = c(1)+c(2)*x+c(3)*x**2+c(4)*x**3+c(5)*x**4
but instead as
p = c(1) + x*(c(2) + x*(c(3) + x*(c(4) + x*c(5))))
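The nested (Horner) form generalizes to any degree; in Python:

```python
def horner(c, x):
    """Evaluate c[0] + c[1]*x + ... + c[n-1]*x**(n-1) in nested form,
    using one multiply and one add per coefficient."""
    p = 0.0
    for coef in reversed(c):
        p = p * x + coef
    return p
```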
Laguerre's Method
Numerical Recipes:
SUBROUTINE zroots(a,m,roots,polish)
a maximum of ntrial iterations are made to improve on the initial estimate of x. Iteration stops if either $\sum_i |\delta x_i| <$ tolx or $\sum_i |F_i| <$ tolf.
Numerical Recipes:
SUBROUTINE lnsrch(n,xold,fold,g,p,x,f,stpmax,check,func)
Exercise 4:
For blackbody radiation, the radiant energy per unit volume in the wavelength range $\lambda$ to $\lambda + d\lambda$ is
$$u(\lambda)\,d\lambda = \frac{8\pi h c}{\lambda^5}\, \frac{1}{\exp(hc/\lambda k T) - 1}\, d\lambda$$
where T is the temperature of the body, c is the speed of light, h is Planck's constant, and k is Boltzmann's constant. Show that the wavelength at which $u(\lambda)$ is maximum may be written as $\lambda_{\max} = hc/(\alpha\, kT)$, where $\alpha$ is a constant. Determine the value of $\alpha$ numerically from the resulting transcendental equation.
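As a sketch of the intended calculation (assuming the standard reduction, not given in the notes): setting du/dλ = 0 with x = hc/λkT leads to α = 5(1 − e^{−α}), which a simple fixed-point iteration solves.

```python
import math

def wien_constant():
    """Solve alpha = 5*(1 - exp(-alpha)) by fixed-point iteration.

    This is the transcendental equation for the Wien displacement
    constant, lambda_max = h c / (alpha k T).
    """
    a = 5.0
    for _ in range(100):
        a = 5.0 * (1.0 - math.exp(-a))
    return a
```

The iteration converges because the map has slope 5e^{−α} ≈ 0.03 near the fixed point.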
Minimization (Maximization) of Functions
[Figure: a function of one variable with extrema at x1 and x2.]
A function may have its global maximum at the boundary, and so f'(x) ≠ 0 at that location.
[Figure: a minimum bracketed by three points a < b < c, with f(b) below f(a) and f(c).]
Golden Section Search in 1D
[Figure: bracketing triplet (a, b, c) rescaled to the unit interval, with b at position w and the trial point at w + z.]
After choosing the next point, the size of the next bracketed region is either w + z or 1 − w. The optimal strategy would make these equal:
$$w + z = 1 - w \;\rightarrow\; z = 1 - 2w$$
But the original value of w should have been chosen in the same way, so
$$w = \frac{z}{1 - w} = \frac{1 - 2w}{1 - w} \;\rightarrow\; w = \frac{3 - \sqrt{5}}{2} = 0.38197\ldots$$
This is called the "golden mean" or "golden section".
Numerical Recipes:
SUBROUTINE mnbrak(ax,bx,cx,fa,fb,fc,func)
brackets a minimum; the golden section search can then be performed:
FUNCTION golden(ax,bx,cx,func,tol,xmin)
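A Python sketch of golden section search on an already-bracketed minimum (illustrative only; golden is the NR version):

```python
W = (3.0 - 5.0 ** 0.5) / 2.0   # the golden fraction, 0.38197...

def golden_min(f, a, b, c, tol):
    """Golden section search: (a, b, c) brackets a minimum,
    i.e. a < b < c with f(b) below f(a) and f(c). Each step probes
    the larger sub-interval and keeps a valid, shrinking bracket."""
    while abs(c - a) > tol:
        if (b - a) > (c - b):          # probe the left sub-interval
            x = b - W * (b - a)
            if f(x) < f(b):
                c, b = b, x            # new bracket (a, x, b)
            else:
                a = x                  # new bracket (x, b, c)
        else:                          # probe the right sub-interval
            x = b + W * (c - b)
            if f(x) < f(b):
                a, b = b, x            # new bracket (b, x, c)
            else:
                c = x                  # new bracket (a, b, x)
    return b
```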
Parabolic interpolation
Method:
1. Begin with three points to define a parabola.
2. The next point to evaluate is at the minimum of the parabola.
3. Choose as the next set of three points the minimum and the two points on either side.
4. Repeat.
Numerical Recipes:
FUNCTION brent(ax,bx,cx,func,tol,xmin)
FUNCTION dbrent(ax,bx,cx,f,df,tol,xmin)
Downhill Simplex Method (multidimensions)
2D - triangle 3D - tetrahedron
If one point is taken as the origin, the N lines from that
point dene vectors that span the N dimensional space.
Method:
Start with an initial guess, P_0, and step sizes in each dimension, e_i. This defines a simplex, with the vertices given by P_i = P_0 + e_i.
Perform a series of steps that expand and contract the
simplex in the N dimensions.
Simplex transformations:
[Figure: (a) reflection of the highest vertex, (b) reflection and expansion, (c) contraction.]
The simplex eventually encloses a minimum, and then contracts around it, until the function values within the simplex agree to within some tolerance.
Numerical Recipes:
SUBROUTINE amoeba(p,y,mp,np,nd,ftol,funk,iter)
input:
p(1:nd+1,1:nd): the nd+1 vertices of the initial simplex
y(1:nd+1): values of funk evaluated at the initial simplex vertices
ftol: fractional function tolerance
The location of the minimum is returned in p (a contracted simplex).
Powell's Method
Method:
1. choose a direction
2. find the minimum along that direction (using 1D minimization)
3. repeat
It is important to choose the directions carefully. Unit vectors in each dimension can be very inefficient in some cases:
[Figure: minimizing along coordinate directions in a long narrow valley takes many small steps.]
A more efficient approach would be to choose directions such that minimization along one direction does not affect the minimization along the other direction. These are known as "conjugate directions".
Conjugate Directions
Conjugate directions can be found as long as the function is quadratic about the minimum. Otherwise the directions will be only approximately conjugate, but the method improves the rate of convergence in any case.
If the function is nearly quadratic, then it is a good approximation to write
$$f(\mathbf{x}) = f(P) + \sum_i \left.\frac{\partial f}{\partial x_i}\right|_P x_i + \frac{1}{2} \sum_{ij} \left.\frac{\partial^2 f}{\partial x_i \partial x_j}\right|_P x_i x_j = c - \mathbf{b}\cdot\mathbf{x} + \frac{1}{2}\,\mathbf{x}\cdot A\cdot\mathbf{x}$$
Hence the gradient of f is approximately
$$\nabla f = A\cdot\mathbf{x} - \mathbf{b}$$
The change in the gradient from moving along the direction $\delta\mathbf{x}$ is given by
$$\delta(\nabla f) = A\cdot\delta\mathbf{x}$$
Powell's Method
[Figure: successive 1D minimizations in Powell's method, producing points P0, P1, P2 and a new search direction.]
For a quadratic function, after N iterations, all the
directions will be conjugate, and thus the minimum will be
found exactly after N (N + 1) 1D minimizations.
In practice, when the new direction P_N − P_0 is added, the direction along which the function made its largest decrease is discarded. This reduces the chance of it and P_N − P_0 becoming almost linearly dependent. The new direction is not used at all if the reduction was not due in large part to one direction, or if f'' is large along the direction P_N − P_0. These conditions can be checked simultaneously by
$$2\left( f_0 - 2 f_N + f_E \right)\left[ (f_0 - f_N) - \Delta f \right]^2 \ge (f_0 - f_E)^2\, \Delta f$$
where $\Delta f$ is the magnitude of the largest decrease along any of the directions.
[Figure: the points P_0 and P_N and the extrapolated point P_E along the new direction P_N − P_0.]
Numerical Recipes:
SUBROUTINE powell(p,xi,n,np,ftol,iter,fret)
input:
p(1:n): initial starting point
xi(1:n,1:n): initial directions (columns)
ftol: fractional function tolerance
The routine finds the minimum of a user-supplied function func; the location of the minimum is returned in p.
Gradient Methods
→ Conjugate Gradient Methods
By using the gradients, conjugate directions can be found much more elegantly than with Powell's method.
Start at point $\mathbf{x}_0$ and minimize along the local gradient direction, arriving at point $\mathbf{x}_1$. The next direction $\mathbf{d}$ needs to be conjugate to the previous direction of movement, $\mathbf{x}_1 - \mathbf{x}_0$:
$$\mathbf{d}\cdot A\cdot(\mathbf{x}_1 - \mathbf{x}_0) = 0$$
Fortunately A does not need to be calculated: $\nabla f(\mathbf{x}) = A\cdot\mathbf{x} - \mathbf{b}$, and so
$$\mathbf{d}\cdot A\cdot(\mathbf{x}_1 - \mathbf{x}_0) = \mathbf{d}\cdot\left( \nabla f(\mathbf{x}_1) - \nabla f(\mathbf{x}_0) \right) = 0$$
The next direction $\mathbf{d}$ is some combination of the two gradient vectors:
$$\mathbf{d} = \nabla f(\mathbf{x}_1) + \gamma\, \nabla f(\mathbf{x}_0)$$
Solve for $\gamma$, using $\nabla f(\mathbf{x}_1)\cdot\nabla f(\mathbf{x}_0) = 0$:
$$\gamma = \frac{\left( \nabla f(\mathbf{x}_1) \right)^2}{\left( \nabla f(\mathbf{x}_0) \right)^2}$$
Continue the process.
Numerical Recipes:
SUBROUTINE frprmn(p,n,ftol,iter,fret)
Variable metric (quasi-Newton) methods:
SUBROUTINE dfpmin(p,n,gtol,iter,fret,func,dfunc)
p(1:n) is the starting position. The program returns once the magnitude of the gradient is reduced to gtol.
Problems of this sort are common in accounting where the
concepts of negative dollars, negative widgets, etc. are
meaningless.
[Figure: in linear programming, the maximum of the objective function occurs at a vertex of the allowed region (axes x1, x2, x3).]
unique to that constraint, and it has a positive coefficient.
Simplex method
An example: maximize
$$z = 2 x_2 - 4 x_3$$
subject to the constraints
$$x_1 + 6 x_2 - x_3 = 2$$
$$-3 x_2 + 4 x_3 + x_4 = 8$$
To increase z, it is clear that x_2 should be increased. How far can x_2 increase while keeping the LHS variables ≥ 0?
Inequality constraints are converted to equalities using slack variables:
$$x_1 + 2 x_2 \ge 3 \;\rightarrow\; x_1 + 2 x_2 - y_1 = 3$$
$$x_2 + 3 x_3 \le 4 \;\rightarrow\; x_2 + 3 x_3 + y_2 = 4$$
At the end, the solutions for y_1 and y_2 are ignored.
To set up an initial feasible solution, introduce artificial variables:
$$z_1 = 3 - x_1 - 2 x_2 + y_1$$
$$z_2 = 4 - x_2 - 3 x_3 - y_2$$
And solve the new problem, maximizing
$$z' = -z_1 - z_2 = -7 + x_1 + 3 x_2 + 3 x_3 - y_1 + y_2$$
with z_1 and z_2 constrained to be ≥ 0 as usual. Since the solution to this problem has z_1 = z_2 = 0, the simplex procedure will result in z_1 and z_2 becoming RHS variables, which can be set to zero. This leaves the original problem, but set up in restricted normal form.
Numerical Recipes subroutine:
simplx(a,m,n,mp,np,m1,m2,m3,icase,izrov,iposv)
icase specifies if a solution is found
iposv(1:M) and izrov(1:N) are pointers to the solution stored in a (see text).
Simulated Annealing Methods
This allows the system to get out of local energy minima (as long as enough time is allowed).
Metropolis Algorithm
$$p = \min\left( 1,\; e^{-\Delta E / kT} \right)$$
In other words, always take a downhill step, and sometimes
take an uphill step.
Can be applied to non-thermodynamic systems as well. One needs to define
1. a set of possible configurations
2. a method to randomly modify the configurations
3. a function (E) to minimize as the goal of the problem
4. a control parameter (T) and an annealing schedule (how to lower T).
Example: Traveling Salesman (minimize total trip distance)
1. Number the cities i = 1, ..., N, each with coordinates (x_i, y_i). A configuration consists of a permutation of the numbers 1, ..., N which specifies the order in which the cities are visited.
2. Modify the permutation as follows:
a) reverse the order of 2 adjacent numbers
b) move 2 adjacent numbers to a random location
3. E = Σ L_i, or some other penalty function could be included.
4. Set k = 1 so that
Numerical Recipes:
SUBROUTINE anneal(x,y,iorder,ncity)
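A generic Metropolis acceptance step in Python (a schematic sketch, not the NR anneal routine; all names are illustrative):

```python
import math
import random

def metropolis_step(config, energy, propose, T):
    """One Metropolis update: propose a random modification and accept
    it with probability min(1, exp(-dE/T)). Downhill steps are always
    taken; uphill steps are taken sometimes."""
    trial = propose(config)
    dE = energy(trial) - energy(config)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        return trial
    return config
```

Lowering T according to the annealing schedule makes uphill steps progressively rarer.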
[Figure: traveling salesman example; cities in the unit square and the tours found by simulated annealing. Successive panels show the tour as T is lowered from 0.2391 to 0.0039, with the total length E falling from about 29.7 to 8.05.]
Ordinary Differential Equations
To deal with problems with boundary conditions given at
more than one value of x, see text (two-point boundary
value problems).
Euler's Method
[Figure: Euler's method advances with steps of size h using the derivative at the start of each interval.]
Runge-Kutta Method
[Figure: second-order Runge-Kutta uses a trial step to the midpoint of each interval.]
With $k_1 = h f(x_n, y_n)$ and $k_2 = h f(x_n + \tfrac{1}{2}h,\; y_n + \tfrac{1}{2}k_1)$,
$$y_{n+1} = y_n + k_2 + O(h^3)$$
and is called the second-order Runge-Kutta formula. The fourth-order Runge-Kutta formula uses
$$k_1 = h f(x_n, y_n)$$
$$k_2 = h f(x_n + \tfrac{1}{2} h,\; y_n + \tfrac{1}{2} k_1)$$
$$k_3 = h f(x_n + \tfrac{1}{2} h,\; y_n + \tfrac{1}{2} k_2)$$
$$k_4 = h f(x_n + h,\; y_n + k_3)$$
$$y_{n+1} = y_n + \tfrac{1}{6} k_1 + \tfrac{1}{3} k_2 + \tfrac{1}{3} k_3 + \tfrac{1}{6} k_4 + O(h^5)$$
[Figure: the four derivative evaluations k1 ... k4 of one fourth-order Runge-Kutta step from x0 to x1.]
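The fourth-order formula in Python (an illustrative sketch):

```python
def rk4_step(f, x, y, h):
    """One fourth-order Runge-Kutta step for dy/dx = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
    k3 = h * f(x + 0.5 * h, y + 0.5 * k2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
```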
Step Doubling
Compare the result of a step of size 2h with that of size h. The difference in the two results can be used to estimate
the error in the approach. Adjust h to keep the error in a
reasonable range (not too large and not too small).
Numerical Recipes:
SUBROUTINE odeint(ystart,nvar,x1,x2,eps,h1,hmin,nok,nbad,derivs,rkqs)
Intermediate results are stored in COMMON /path/. The final argument specifies the stepping routine. Use rkqs for the fifth-order embedded Runge-Kutta formula.
[Figure: the modified midpoint method crosses a large step H with substeps of size h, through intermediate points z0, z1, ..., z5 at x0, x0 + h, ..., x0 + 5h.]
The error in this estimate is even in powers of h:
$$y(x + H) = y_n + \alpha_1 h^2 + \alpha_2 h^4 + \cdots$$
Bulirsch-Stoer Method
[Figure: estimates using 2, 4, and 6 substeps across the interval from x to x + H are extrapolated to h → 0.]
Exercise 5
[Figures: solutions θ(t), trajectories in the (x, y) plane, and energy error E(t) − E0 over 0 ≤ t ≤ 10 for different integration methods; the energy error scale ranges from ~10⁻⁴ for the accurate methods to ~10⁻² for the poorest.]
An example of a boundary value problem is Laplace's equation,
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$$
Given u(x, y) on the boundary, the problem is to find u(x, y) elsewhere.
Flux-conservative IVP
An example is the advection equation, $\partial u/\partial t = -v\, \partial u/\partial x$, whose solution is a wave propagating in the positive x direction:
$$u = f(x - vt)$$
The numerical solution to this problem is not as simple!
FTCS Method
Forward-time centred-space (FTCS) differencing of the advection equation gives
$$u_j^{n+1} = u_j^n - \frac{v\,\Delta t}{2\Delta x}\left( u_{j+1}^n - u_{j-1}^n \right)$$
In PAW, the formula is easily handled:
sigma u = u + [c]*(ls(u,1) - ls(u,-1))
Lax Method
A simple modification to the FTCS method improves its stability. Replace $u_j^n$ by its average, $\tfrac{1}{2}(u_{j+1}^n + u_{j-1}^n)$, so the recurrence relation is now
$$u_j^{n+1} = \frac{1}{2}\left( u_{j+1}^n + u_{j-1}^n \right) - \frac{v\,\Delta t}{2\Delta x}\left( u_{j+1}^n - u_{j-1}^n \right)$$
This method is stable for $v\Delta t/\Delta x \le 1$, but for $v\Delta t/\Delta x < 1$ the amplitude diminishes. For $v\Delta t/\Delta x = 1$ the solution is exact:
$$u_j^{n+1} = u_{j-1}^n$$
The new term is a dissipative term, said to add "numerical viscosity" to the equation.
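A Python sketch of one Lax step for the advection equation on a periodic grid (illustrative only):

```python
def lax_step(u, courant):
    """One Lax step for du/dt = -v du/dx on a periodic grid.

    courant = v*dt/dx; the scheme is stable for courant <= 1, and
    for courant == 1 it simply shifts the solution one cell.
    """
    n = len(u)
    return [0.5 * (u[(j + 1) % n] + u[j - 1])
            - 0.5 * courant * (u[(j + 1) % n] - u[j - 1])
            for j in range(n)]
```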
Lax-Wendroff Scheme
To improve the difference equation to second order in time, consider the Taylor expansion of the solution,
$$u(x, t + \Delta t) = u(x, t) + \Delta t\,\frac{\partial u}{\partial t} + \frac{1}{2}\Delta t^2\,\frac{\partial^2 u}{\partial t^2} + O(\Delta t^3)$$
The second term is easily represented using the original PDE, which can be written in a more general form,
$$\frac{\partial u}{\partial t} = -\frac{\partial F(u)}{\partial x}$$
where F(u) = vu for the advection equation. The second partial derivative can be written,
$$\frac{\partial^2 u}{\partial t^2} = -\frac{\partial}{\partial t}\frac{\partial F}{\partial x} = -\frac{\partial}{\partial x}\frac{\partial F}{\partial t} = -\frac{\partial}{\partial x}\left( \frac{\partial F}{\partial u}\frac{\partial u}{\partial t} \right) = \frac{\partial}{\partial x}\left( F'(u)\,\frac{\partial F}{\partial x} \right)$$
This gives the Lax-Wendroff scheme,
$$u_j^{n+1} = u_j^n - \frac{\Delta t}{2\Delta x}\left[ F(u_{j+1}^n) - F(u_{j-1}^n) \right] + \frac{\Delta t^2}{2\Delta x^2}\left[ F'(u_{j+1/2}^n)\left( F(u_{j+1}^n) - F(u_j^n) \right) - F'(u_{j-1/2}^n)\left( F(u_j^n) - F(u_{j-1}^n) \right) \right]$$
where $u_{j\pm 1/2}^n = (u_{j\pm 1}^n + u_j^n)/2$. For the advection equation this reduces to
$$u_j^{n+1} = u_j^n - \frac{v\,\Delta t}{2\Delta x}\left( u_{j+1}^n - u_{j-1}^n \right) + \frac{v^2\Delta t^2}{2\Delta x^2}\left( u_{j+1}^n + u_{j-1}^n - 2 u_j^n \right)$$
Example of a solution to the advection equation using the Lax-Wendroff scheme, with $v\Delta t/\Delta x = 0.6$. Each box represents a new time bin.
Application: Fluid Mechanics in 1D
$$\frac{\partial \rho}{\partial t} = -c(\rho)\,\frac{\partial \rho}{\partial x}$$
[Figure: evolution of the density profile ρ(x).]
Traffic Simulation
[Figure: initial condition for traffic stopped at a light; density ρ = ρ_m behind the light and ρ = 0 ahead, with a smooth transition between −ε and ε.]
[Figure: car positions as a function of time from the traffic simulation.]
Diffusive Initial Value Problem
The fully explicit scheme is stable only for sufficiently small time steps Δt, which is impractical for some problems.
The result for $2D\Delta t/(\Delta x)^2 = 1$:
The result for $2D\Delta t/(\Delta x)^2 = 1.2$:
&
Dean Karlen/Carleton University Rev. 1.3
%
1998/99
Crank-Nicholson scheme

Even better is to simply average the results from the forward- and backward-time methods. This gives a method that is second order in both time and space, and stable for all Δt.
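The stability boundary quoted above can be demonstrated numerically. The sketch below (a hedged illustration, not from the course notes; grid size and step count are arbitrary choices) evolves a spike with the simple forward-time centred-space scheme for two values of 2DΔt/(Δx)²:

```python
def ftcs_diffusion(factor, steps=200, n=50):
    """Evolve a spike with the FTCS scheme for the diffusion equation.

    factor = 2*D*dt/dx**2; the scheme is stable only for factor <= 1.
    Returns max |u| after `steps` time steps (boundaries held at zero).
    """
    u = [0.0] * n
    u[n // 2] = 1.0
    alpha = 0.5 * factor           # D*dt/dx**2
    for _ in range(steps):
        u = ([0.0]
             + [u[j] + alpha * (u[j + 1] + u[j - 1] - 2.0 * u[j])
                for j in range(1, n - 1)]
             + [0.0])
    return max(abs(v) for v in u)

stable = ftcs_diffusion(1.0)     # spike just spreads out, stays bounded
unstable = ftcs_diffusion(1.2)   # shortest-wavelength mode grows without bound
```

The run at 1.2 develops the sawtooth instability seen in the figure above.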
Boundary Value Problems
This is just a simple average of the 4 neighbouring points in space. The method is to continue iterating until the solution converges. However, this is usually too slow for most problems.
Gauss-Seidel Method

There is a slight improvement in convergence if updated values of u_(j,l) are used as they become available,

u_(j,l)^(n+1) = (1/4)(u_(j+1,l)^n + u_(j−1,l)^(n+1) + u_(j,l+1)^n + u_(j,l−1)^(n+1))

Successive Overrelaxation

This algorithm converges much more quickly by overcorrecting the values for u at each iteration,

u_(j,l)^(n+1) = (1 − ω) u_(j,l)^n + (ω/4)(u_(j+1,l)^n + u_(j−1,l)^(n+1) + u_(j,l+1)^n + u_(j,l−1)^(n+1))

where the best choice of the overrelaxation parameter ω is usually found by trial and error.
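A minimal SOR sketch in Python (an illustration only; the grid size, boundary values, and ω = 1.8 are assumptions, not values from the notes):

```python
def solve_laplace_sor(u, omega=1.8, tol=1e-6, max_sweeps=10000):
    """Solve Laplace's equation by successive overrelaxation (SOR).

    Boundary values of the grid `u` (a list of lists) are held fixed;
    each interior point is overcorrected by the factor omega (1 < omega < 2).
    Returns the number of sweeps needed to converge.
    """
    rows, cols = len(u), len(u[0])
    for sweep in range(1, max_sweeps + 1):
        max_change = 0.0
        for j in range(1, rows - 1):
            for l in range(1, cols - 1):
                new = 0.25 * (u[j + 1][l] + u[j - 1][l]
                              + u[j][l + 1] + u[j][l - 1])
                change = omega * (new - u[j][l])   # overcorrection
                u[j][l] += change
                max_change = max(max_change, abs(change))
        if max_change < tol:
            return sweep
    return max_sweeps

# Square plate: one edge held at u = 1, the others at u = 0.
u = [[0.0] * 20 for _ in range(20)]
u[0] = [1.0] * 20
sweeps = solve_laplace_sor(u)
```

Setting omega = 1 recovers the plain Gauss-Seidel method, which takes noticeably more sweeps on the same grid.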
Part III: Monte Carlo Methods
Topics:
Introduction
Random Number generators
Special distributions
General Techniques
Multidimensional simulation
References:
The Art of Computer Programming, D.E. Knuth,
Addison-Wesley, vol 2, 1969.
Monte Carlo Theory and Practice, F. James, Rep.
Prog. Phys., Vol. 43, 1980, 1145.
Portable Random Number Generators, W.H. Press and S.A. Teukolsky, Computers in Physics, Vol. 6, No. 5, 1992, 522.
Random Numbers

… numbers that have nothing to do with the other numbers in the sequence.

Hooking up a random machine to a computer is not a good idea. This would lead to irreproducible results, making debugging difficult.
Pseudo-Random Numbers

These are sequences of numbers generated by computer algorithms, usually in a uniform distribution in the range [0,1].

An early example (the middle-square method):

5772156649² = 33317792380594909291

so the next number is given by the middle digits.

The sequence is not random, since each number is completely determined from the previous one. But it appears to be random.
Random Number Generation

I_(n+1) = (a I_n + c) mod m

… sequences. The method with c = 0 is called multiplicative congruential.
Choice of modulus, m
m should be as large as possible since the period
can never be longer than m.
One usually chooses m to be near the largest
integer that can be represented. On a 32 bit
machine, that is 231 ≈ 2×109.
Choice of multiplier, a

The period is maximal (equal to m) if:
i) c and m are relatively prime;
ii) a−1 is a multiple of every prime p dividing m;
iii) a−1 is a multiple of 4, if m is a multiple of 4.
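The linear congruential recipe is short enough to sketch directly (a Python illustration; the seed is arbitrary, and 69069 is the multiplier mentioned later as an improvement over RANDU's 65539):

```python
def lcg(seed, a=69069, c=0, m=2**31):
    """Linear congruential generator: I_{n+1} = (a*I_n + c) mod m.

    With c = 0 this is the multiplicative congruential method.
    """
    i = seed
    while True:
        i = (a * i + c) % m
        yield i / m            # uniform in [0, 1)

gen = lcg(seed=12345)
sample = [next(gen) for _ in range(10000)]
mean = sum(sample) / len(sample)   # close to 1/2 for a uniform sequence
```

The sequence is completely reproducible from the seed, which is exactly what makes debugging possible.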
RANDU generator

… serious problem.

[Histogram of RANDU output vs. random number: the one-dimensional distribution looks okay.]
[3-D scatter plots of consecutive RANDU triplets from two viewpoints. The problem is seen when observed at the right angle: the points lie on a small number of planes.]
The maximum possible number of planes in d dimensions, for a modulus of 2³¹, is:

d = 3: 2953
d = 4: 566
d = 6: 120
d = 10: 41

Replacing the multiplier 65539 with 69069 improves the performance significantly.
Warning

RANMAR generator

This seems to be the ultimate in random number generators!
x=RAND(IDUM)

For example

x=RAND(IDUM)+RAND(IDUM)

may be changed to

x=2.*RAND(IDUM)

Solution:

DO 1 I=1,100
IDUM=IDUM+1
x=RAND(IDUM)
...
1 CONTINUE

… seed for the next random number. (Numerical Recipes generators, for example)!
Problem:
Consider a system initially having N₀ unstable nuclei. How does the number of parent nuclei, N, change with time?

Algorithm:
LOOP from t=0 to t_max, step Δt
  LOOP over each remaining parent nucleus
    Decide if the nucleus decays:
    IF (random # < λΔt) THEN
      reduce the number of parents by 1
    ENDIF
  END LOOP over nuclei
  PLOT or record N vs. t
END LOOP over time
END
Exercise 6
Write a program to implement the preceding algorithm. Graph the number of remaining nuclei as a function of time for the following cases:

N₀ = 100, λ = 0.01 s⁻¹, Δt = 1 s;
N₀ = 5000, λ = 0.03 s⁻¹, Δt = 1 s.
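The algorithm translates directly into code. A Python sketch for the first case (the course solutions are in Fortran; the seed here is an arbitrary choice):

```python
import random

def simulate_decay(n0, lam, dt, t_max, rng=random.random):
    """Monte Carlo radioactive decay, following the algorithm above:
    in each time step every remaining nucleus decays with probability
    lam*dt.  Returns the recorded values of N at each step."""
    n = n0
    history = [n]
    for _ in range(int(t_max / dt)):
        decays = sum(1 for _ in range(n) if rng() < lam * dt)
        n -= decays
        history.append(n)
    return history

random.seed(1)
hist = simulate_decay(n0=100, lam=0.01, dt=1.0, t_max=300.0)
```

Plotting `hist` against time reproduces the roughly exponential fall-off, with the statistical fluctuations discussed below.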
Simulating Radioactive Decay
Solution to exercise 6:
The 'experimental' results do not perfectly follow the expected curve: there are statistical fluctuations.
[Plots: N vs. t for the two cases, on linear and logarithmic scales.]
Poisson Distribution

… is therefore

P = (m choose n) pⁿ (1−p)^(m−n)
  = pⁿ (1−p)^(m−n) m! / ((m−n)! n!)
  = (λT/m)ⁿ (1 − λT/m)^(m−n) m! / ((m−n)! n!)

In the limit m → ∞, with μ = λT held fixed, this becomes P_n = (μⁿ/n!) e^(−μ). This is known as the Poisson distribution.
Exercise 7
Modify the program written for exercise 6 to simulate an experiment that counts the number of decays observed in a time interval, T.
Allow the experiment to be repeated and histogram the distribution of the number of decays for the following two cases:

N₀ = 500, λ = 4×10⁻⁵ s⁻¹, Δt = 10 s, T = 100 s;
N₀ = 500, λ = 2×10⁻⁴ s⁻¹, Δt = 10 s, T = 100 s.
Solution to Exercise 7:
[Histograms of the number of decays per time interval for the two cases.]
P_n = (μⁿ/n!) e^(−μ)    (μ = NλT)

Mean value:

⟨n⟩ = Σ_{n=0}^∞ n (μⁿ/n!) e^(−μ)
    = μ e^(−μ) Σ_{n=1}^∞ μ^(n−1)/(n−1)!
    = μ e^(−μ) Σ_{m=0}^∞ μᵐ/m! = μ

Variance:

σ² = Σ_{n=0}^∞ (n − μ)² (μⁿ/n!) e^(−μ)
   = Σ_{n=0}^∞ (n² − 2nμ + μ²) (μⁿ/n!) e^(−μ)
   = μ
Hence if n decays are observed, the 1 standard deviation uncertainty is √n. (This is also true for any other variable that follows the Poisson distribution.)
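The two sums above can be checked numerically by direct summation (a small verification sketch; μ = 3.7 is an arbitrary choice, and the sum is truncated where the tail is negligible):

```python
import math

def poisson_pmf(n, mu):
    """P_n = mu**n * exp(-mu) / n!"""
    return mu ** n * math.exp(-mu) / math.factorial(n)

mu = 3.7
ns = range(100)                      # the tail beyond n = 100 is negligible
total = sum(poisson_pmf(n, mu) for n in ns)
mean = sum(n * poisson_pmf(n, mu) for n in ns)
var = sum((n - mu) ** 2 * poisson_pmf(n, mu) for n in ns)
```

Both `mean` and `var` reproduce μ, as derived above.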
Gaussian (or Normal) Distribution
Binomial Distribution

The binomial distribution describes the results of repeated experiments which have only two possible outcomes.
Suppose a radioactive source is monitored for a time interval T. There is a probability p that one or more disintegrations would be detected in that time interval. If a total of m intervals were recorded, the probability that n of them had at least one decay is

P = (m choose n) pⁿ (1−p)^(m−n).
Simulating General Distributions
If a special purpose generator routine is not available, then use a general purpose method for generating random numbers according to an arbitrary distribution.

Rejection Technique

Problem: Generate a series of random numbers, xᵢ, which follow a distribution function f(x).

Algorithm:
Choose a trial x, given a uniform random number ξ₁:

x_trial = x_min + (x_max − x_min) ξ₁

Decide whether to accept the trial value:

if f(x_trial) > ξ₂ f_big then accept

where f_big ≥ f(x) for all x, x_min ≤ x ≤ x_max, and ξ₂ is a second uniform random number. Repeat the algorithm until a trial value is accepted.

[Figure: f(x) bounded by the constant f_big over (x_min, x_max).]
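The two-step algorithm above can be sketched in Python (an illustration only; the distribution sin²x on [0, π] with f_big = 1 is an assumed example, not one from the notes):

```python
import math
import random

def generate_rejection(f, x_min, x_max, f_big, rng=random.random):
    """Generate one value following f(x) by the rejection technique.

    f_big must satisfy f_big >= f(x) for all x in [x_min, x_max];
    trials are repeated until one is accepted.
    """
    while True:
        x_trial = x_min + (x_max - x_min) * rng()    # uniform trial value
        if f(x_trial) > rng() * f_big:               # accept with prob f/f_big
            return x_trial

random.seed(42)
f = lambda x: math.sin(x) ** 2            # example distribution, f_big = 1
sample = [generate_rejection(f, 0.0, math.pi, 1.0) for _ in range(5000)]
```

Note that f(x) need not be normalized; only the bound f_big matters.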
The accuracy of an integral estimated with the rejection technique follows from binomial statistics:

σ_(N_accept) = √(p(1−p) N_trial),   p = N_accept/N_trial

(σ_I/I)² = (σ_(N_accept)/N_accept)²
         = (N_accept/N_trial)(1 − N_accept/N_trial) N_trial / N_accept²
         = 1/N_accept − 1/N_trial
         = (1/N_trial)(1−p)/p

So the relative accuracy only improves with N_trial^(−1/2).
[Figure: a sharply peaked f(x) under the constant bound f_big wastes most trial values.]

… is (−∞, +∞). A better algorithm is needed...
Inversion Technique

ξ = ∫_{x_min}^{x} f(x′) dx′ / ∫_{x_min}^{x_max} f(x′) dx′

This method is fully efficient, since each random number ξ gives an x value.
Note that the simple rejection technique would not work for either of these examples.
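For the exponential distribution the integral can be inverted in closed form, which makes a compact sketch (a Python illustration; τ and the sample size are arbitrary choices):

```python
import math
import random

def generate_decay_time(tau, rng=random.random):
    """Inversion technique for f(t) = (1/tau) exp(-t/tau), t >= 0.

    Solving xi = F(t) = 1 - exp(-t/tau) for t gives
    t = -tau * ln(1 - xi), so every uniform number yields a decay time.
    """
    return -tau * math.log(1.0 - rng())

random.seed(7)
tau = 2.0
times = [generate_decay_time(tau) for _ in range(20000)]
mean = sum(times) / len(times)        # approaches tau for large samples
```

No trials are rejected, in contrast to the rejection technique above.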
Exercise 8
Write a program that generates the value θ according to the distribution function:

f(θ) = (sin²θ + a cos²θ)⁻¹

in the range 0 ≤ θ ≤ 2π.
Compare the rejection technique and the inversion technique:
Generate 10000 values for each method, using a = 0.5 and also a = 0.001.
Plot the results for each (4 plots) and overlay the distribution curve, f(θ), properly normalized.
Compare the CPU time required for the 4 runs.
Solution to Exercise 8:
[Histograms of the generated θ values with the normalized f(θ) overlaid.]
Importance Sampling

Generate trial values according to a distribution f_a(x) that approximates f(x). The rejection technique is just the special case where f_a(x) is chosen to be a constant.

Example: each trial value carries a weight

w = f(x)/f_a(x)

Procedure: … you may need to run the program to find the maximum weight generated, and then pick a value a little larger, and rerun.
Inversion Technique: Gaussian distribution.

σ² = ∫₀¹ (x − 1/2)² dx = 1/12

This algorithm is coded in RG32 in the CERN library.

2) 2-D Gaussian
Multidimensional Simulation

D_x(x) = ∫ f(x,y) dy

… invertible. The weights for trial events are given by w = f(x,y)/f_a(x,y), and the integral can be evaluated as before, using the weights of all trial events. (Event = an x and y pair.)
[Figure: Compton scattering, an incoming photon k scattering to k′ at angle θ.]

… pole at θ = 0.
Exercise 9
Solution to Exercise 9:
[Histograms of the generated values for exercise 9.]
Algorithm:
Break path into small steps:
&
Dean Karlen/Carleton University Rev. 1.3
%
1998/99
'
Physics 75.502 Multidimensional Simulation
EGS (SLAC)
GEANT (CERN)
Detector response

Efficiency
From measurements of well understood sources, the efficiency as a function of energy (and maybe position) can be found.
For example: once ε is known, select events that are missed with a probability of ε:
Resolution, offset
Simulate the observed energy distribution when no source is present.
This is called inverse probability by mathematicians. Physicists use the term probability for both direct and inverse probability.
Systematic uncertainty:
due to the uncertainty in the behavior of
the experimental apparatus
Example:
A measurement of the activity of a radioactive
source: Count the number, N, of signals in a
detector covering the solid angle Ω, with
efficiency ε, over a period of time T.
Systematic uncertainty: T, Ω, and ε are not known with perfect precision.

General comments:
• Two measurements often suffer common systematics, whereas they never share statistical errors; hence one often treats statistical and systematic errors separately.
• It is usually difficult to characterise the one
standard deviation systematic uncertainty.
• Most experiments are designed so that the
systematic uncertainty is smaller than the
statistical uncertainty.
… can be done to a precision limited by a statistical process.

y = Σ xᵢ / n
Part IV: Statistics for Physicists
Physics 75.502
Example: Gaussian in 2 D
Part IV: Statistics for Physicists
$
197
P(x,y)=(2πσxσy)-1 exp(-(x2/σx2+y2/σy2)/2))
σy
σx
&
Dean Karlen/Carleton University Rev. 1.3
%
1998/99
[Figure: rotated axes (x′, y′) at angle φ to (x, y), with widths σ_x′ and σ_y′.]

Then

P(x′, y′) = (2πσ_x′σ_y′)⁻¹ (1 − ρ²)^(−1/2) exp(−(x′²/σ_x′² + y′²/σ_y′² − 2ρx′y′/(σ_x′σ_y′)) / (2(1 − ρ²)))

where ρ = V_x′y′/(σ_x′σ_y′) is the correlation coefficient, |ρ| ≤ 1, and ρ = 0 corresponds to no correlation.

tan 2φ = 2ρσ_x′σ_y′/(σ_x′² − σ_y′²)
Exercise 10

[Scatter plots of 2-D Gaussian samples: uncorrelated and correlated.]
Confidence Intervals

[Figure: p.d.f. f(t) with an interval (t_a, t_b).]

Not really!
… which states the probability of |(x−μ)/σ| < 2 is 95.4%.

Instead: the integral I = ∫₋c^c G(0,1)(z) dz is given below:

c      I
1      0.683
1.5    0.866
1.64   0.900
1.96   0.950
2.0    0.955
2.58   0.990
3.0    0.997
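The table entries follow directly from the error function; a one-line check in Python:

```python
import math

def coverage(c):
    """Integral of the unit Gaussian G(0,1) over [-c, c]."""
    return math.erf(c / math.sqrt(2.0))

table = {c: coverage(c) for c in (1.0, 1.5, 1.64, 1.96, 2.0, 2.58, 3.0)}
```

For example, coverage(1.96) reproduces the familiar 95% entry.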
Classical approach

… same limit!

[Figure: Classical approach 95% C.L. curves as a function of mν² (eV²).]
Bayesian approach

∫ R(x|θ) dx = 1

See F. James, M. Roos, Phys. Rev. D44, 299 (1991).

[Figure: Bayesian approach 95% C.L. curves as a function of mν² (eV²).]
Estimation of parameters

General Problem:

Definitions:
estimate: the resulting value of the estimator, θ̂

In addition:
… smaller variance than the median.)
Likelihood function

… Each xᵢ could also denote a set of measurements, and θ could be a set of parameters.

Maximum Likelihood Method

… weighted mean.

Consistent: the estimators converge on the true parameter.
Unbiased: … sometimes biased for finite samples. Note: θ̂ may be unbiased but t(θ̂) may be biased.
Efficient: if a sufficient estimator exists, the ML method produces it, and this will give the minimum attainable variance, i.e. you can't do better than this.
Variance of ML estimates:

L(x⃗; θ⃗) = L(x₁, …, xₙ | θ₁, …, θ_k) = Π_{i=1}^n f(xᵢ; θ⃗)

For the weighted mean,

θ̂ = [Σ_{i=1}^n xᵢ/σᵢ²] / [Σ_{i=1}^n 1/σᵢ²]
Recall the log likelihood function is

ln L = −Σ_{i=1}^n [ (1/2) ln(2πσᵢ²) + (1/2) ((xᵢ − θ)/σᵢ)² ]

Then, the variance is,

V(θ̂) = (−∂² ln L/∂θ²)⁻¹ |_(θ=θ̂) = (Σ_{i=1}^n 1/σᵢ²)⁻¹

In the case where σᵢ = σ, Δθ̂ = σ/√n.
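The weighted-mean formulas above are easy to sketch and check (a Python illustration; the measurement values and errors are made-up numbers):

```python
def weighted_mean(values, sigmas):
    """ML estimate of a common mean from measurements x_i +/- sigma_i:

    theta_hat = sum(x_i/sigma_i**2) / sum(1/sigma_i**2),
    V(theta_hat) = 1 / sum(1/sigma_i**2).
    """
    weights = [1.0 / s ** 2 for s in sigmas]
    theta = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    return theta, 1.0 / sum(weights)

theta, var = weighted_mean([1.2, 1.5, 0.9], [0.1, 0.2, 0.3])
```

With equal σᵢ this reduces to the ordinary mean with variance σ²/n, as noted above.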
Graphical determination of the uncertainty

One parameter
so,

ln L(θ̂ + V(θ̂)^(1/2)) = ln L_max − 1/2

Proof: …

This gives results quoted as, for example, 1.23 +0.09/−0.12.
[Plots: a Gaussian and a non-Gaussian likelihood (prob/0.1) and their logarithms, ln(prob/0.1); the points where ln L drops by 1/2 from the maximum give the uncertainties.]
Two parameters
Given L(x|θ₁,θ₂), plot contours of constant likelihood in the (θ₁, θ₂) plane.
Often there may be more than one maximum; if one isn't much larger than all the rest, then an additional (different) experiment may be needed to decide which of the peaks to take.

[Contour plots in the (θ₁, θ₂) plane: the tangents to the contour give θ̂₁ ± Δθ₁ and θ̂₂ ± Δθ₂.]
Δ ln L   σ   γ
0.5      1   0.393
2.0      2   0.865
4.5      3   0.989
… problems.

For binned data, pᵢ is the probability for bin i: pᵢ = ∫_(Δxᵢ) f(x|θ) dx. Since L depends on θ only through the pᵢ, find the maximum of L through ln L = Σ nᵢ ln pᵢ(θ). Rather obvious when you look at it!

… functions:
Exercise 11
Consider an experiment that is set up to measure the lifetime of an unstable nucleus, N, using the chain reaction,

A → N e,  N → X p

The creation and decay of N is signaled by the electron and proton.
The lifetime of each N, which follows the p.d.f. f = (1/τ) e^(−t/τ), is measured from the time between observing the electron and proton, with a resolution of σ_t.
The expected probability density function is the convolution of the exponential decay and the Gaussian resolution:

f(t|τ, σ_t) = ∫₀^∞ (1/(√(2π) σ_t)) e^(−(t−t′)²/(2σ_t²)) (1/τ) e^(−t′/τ) dt′
            = (1/2τ) exp(σ_t²/(2τ²) − t/τ) erfc(σ_t/(√2 τ) − t/(√2 σ_t))
Exercise 11 (continued)
Generate 200 events with τ = 1 s and σ_t = 0.5 s. (Use the inversion technique followed by a Gaussian smearing.) Use the maximum likelihood method to find τ̂ and the uncertainty, σ_τ̂. Plot the likelihood function, and the resulting p.d.f. for the measured times compared to a histogram containing the data.
Automate the ML procedure so as to be able to repeat this exercise 100 times; plot the distribution of (τ̂ − τ)/σ_τ̂ for your 100 experiments and show that it follows a unit Gaussian.
For 1 data sample, assume that σ_t is unknown, and show a contour plot in the (τ, σ_t) plane with constant likelihood,

ln L = ln L_max − 1/2
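A stripped-down sketch of the ML step, taking the σ_t → 0 limit so the p.d.f. is a pure exponential (an assumption made only to keep the sketch short; the full exercise uses the convolved p.d.f. above):

```python
import math
import random

random.seed(11)
tau_true = 1.0
# Inversion technique, as in the exercise, but without Gaussian smearing.
times = [-tau_true * math.log(1.0 - random.random()) for _ in range(200)]

def log_likelihood(tau):
    """ln L = sum_i ln f(t_i|tau) for f(t|tau) = (1/tau) exp(-t/tau)."""
    return sum(-math.log(tau) - t / tau for t in times)

# For the exponential the ML estimate is the sample mean, with
# uncertainty tau_hat / sqrt(n).
tau_hat = sum(times) / len(times)
sigma_hat = tau_hat / math.sqrt(len(times))

# Graphical rule: ln L drops by about 1/2 one standard deviation away.
drop = log_likelihood(tau_hat) - log_likelihood(tau_hat + sigma_hat)
```

The computed `drop` is close to 1/2, illustrating the graphical determination of the uncertainty described earlier.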
[Solution plots: histogram of the observed times with the fitted p.d.f. overlaid; ln L vs. τ; distribution of (τ̂ − τ)/σ_τ̂ compared to a unit Gaussian; constant-likelihood contour in the (τ, σ_t) plane.]
Least Squares Method

if the accuracy for yᵢ is given by σᵢ, wᵢ = 1/σᵢ²;
if yᵢ represents a Poisson distributed random number, wᵢ = 1/fᵢ (or sometimes wᵢ = 1/yᵢ).

For correlated measurements,

X² = Σ_{i=1}^N Σ_{j=1}^N (yᵢ − fᵢ) V⁻¹ᵢⱼ (yⱼ − fⱼ)
χ² distribution

If the xᵢ (i = 1, …, N) are distributed according to the Gaussian with mean μᵢ and variance σᵢ², the quantity χ² ≡ Σ(xᵢ − μᵢ)²/σᵢ² has the p.d.f. given by,

f(χ²|N) = 2^(−N/2) Γ⁻¹(N/2) (χ²)^(N/2−1) e^(−χ²/2),   0 ≤ χ² ≤ ∞

where N is the number of degrees of freedom.
[Plot: f(χ²|N) for N = 1, 2, 3, 5, 6, 7, 10, 15, 20.]
Cumulative χ² distribution

F(χ²|N) = ∫₀^(χ²) f(χ′²|N) dχ′² = 1 − α

The p.d.f. of F is uniform over [0,1] (of course!). The following graph shows α = 1 − F(χ²), for various N.

[Plot: α = 1 − F(χ²) vs. χ², on a logarithmic scale, for N = 1 to 70.]
Polynomial fitting
For high order polynomials (≳ 6), roundoff errors may cause serious numerical inaccuracies. It is better to use orthogonal polynomials, since the error matrix is diagonal and easy to invert.
Take as a model,

fᵢ = Σ_{ℓ=1}^L ξ_ℓ(xᵢ) ω_ℓ

where the ξ_ℓ are orthogonal over the observables,

Σ_{i=1}^N ξ_k(xᵢ) ξ_ℓ(xᵢ) = δ_kℓ
Degrees of Freedom
If the yᵢ are Gaussian distributed with true mean μᵢ and variance σᵢ², then

X² = Σ_{i=1}^N ((yᵢ − μᵢ)/σᵢ)²

follows a χ² distribution with N degrees of freedom.
But the μᵢ are unknown. If we instead use μ̂ᵢ (the result from the LS minimisation to a linear model with L independent parameters), then

X²_min = Σ_{i=1}^N ((yᵢ − μ̂ᵢ)/σᵢ)²

is distributed according to the χ² distribution with N − L degrees of freedom.
This can be proven by showing that for a linear model, X²_min can be expressed as a sum of (N − L) independent terms, each being the square of a Gaussian distributed variable.
… where Q²_min is X²_min with V replaced by V_r, and where L is the number of parameters in the linear model.

Goodness of Fit
& %
If large value of PXmin
2 is due to one of the
measurements, should examine that measurement in
detail.
Since 1 degree of freedom has been lost due to the normalisation condition, Σnᵢ = n, X²_min would follow f(χ²|N−1−L) if the model consisted of L independent parameters.

Choice of binning
• equal width
• equal probability
… the less probable regions.
… the uncertainty of the estimates will not be well defined. (For example, by including large weight events, the estimated variances can actually increase.)

Often it turns out that the true values, η⃗, are related through algebraic constraint equations. The observations, y⃗, do not strictly satisfy these constraints, but one wishes to form estimates, η̂⃗, that do. The variance of these estimates should be smaller than if the constraints were not taken into account.

… multipliers.

Example: 3 angles of a triangle
Elimination: the model has 2 parameters, η₁, η₂; minimise:
Lagrange multipliers
In general, if Bθ⃗ − b⃗ = 0 represents K constraint equations (B is a K×L matrix), then minimise

X²(θ⃗, λ⃗) = (y⃗ − Aθ⃗)ᵀ V⁻¹ (y⃗ − Aθ⃗) + 2λ⃗ᵀ(Bθ⃗ − b⃗)

where C ≡ Aᵀ V⁻¹ A,  c⃗ ≡ Aᵀ V⁻¹ y⃗,  V_B ≡ B C⁻¹ Bᵀ.
a    γ (1 par)   γ (2 par)
1²   0.683       0.393
2²   0.954       0.865
3²   0.997       0.989
Hypothesis Testing
Hypothesis testing may also be part of the data analysis, for example to decide if each event is due to a signal or background process.

[Figures: p.d.f.s f(x|θ₀) and f(x|θ₁) with a critical value x_c dividing the acceptance region W−R from the rejection region R; the tail areas α and β are the probabilities of Type I and Type II errors.]

We wish to choose x_c so that the numbers of Type I and Type II errors are as small as possible.
Neyman-Pearson test

1 − β = ∫_R f(x|θ₁) dx = ∫_R [f(x|θ₁)/f(x|θ₀)] f(x|θ₀) dx

If the experiment consists of a series of measurements x⃗, replace f by L(x⃗|θ) = Πᵢ f(xᵢ|θ).
If the sample is large, we can use the asymptotic behaviour of likelihood ratios: if H₀ imposes r constraints, then −2 ln λ is distributed as a χ² distribution with r degrees of freedom.
Exercise 12
Apply the likelihood ratio test to the hypothetical experiment defined in exercise 11. Suppose σ_t is unknown and we want to test the hypothesis that τ = τ₀ = 1 s.

H₀: τ = τ₀    H₁: τ ≠ τ₀

… experiments fail this test?
Solution to Exercise 12

[Histogram of the −2 ln likelihood ratio for the repeated experiments, compared to the expected χ² distribution.]
hypothesis:

H₀: μ = μ₀   (μ₀ is some number)

with rejection regions of probability α/2 in each tail. For two samples of sizes n and m,

s² = [Σᵢ(xᵢ − x̄)² + Σᵢ(yᵢ − ȳ)²] / (n + m − 2)

and d follows the Student t-distribution with n+m−2 degrees of freedom.
… χ² distribution for N − 1 degrees of freedom. The cumulative χ² distribution can be used to calculate the rejection region.

[Figure: an energy spectrum with a fluctuation at E₀, between E_a and E_b.]

We can ask:
… What is the probability to observe such a fluctuation at any position?
Answers:

Hypothesis: H₀: N = B
Assume B and V(B) are known (from theory or sidebands). N is distributed according to the Poisson distribution, so under the assumption N = B,

V(N) = N = B

then, V(N−B) = V(N) + V(B) = B + V(B)

If N is large, approximate the Poisson by a Gaussian, then use

d = (N−B)/V(N−B)^(1/2) ≈ (N−B)/(B + V(B))^(1/2)

… deviations of the effect.
Goodness of Fit Tests

H₀: f(x) = f₀(x)

Pearson's χ² test

… where Σ_{i=1}^N p⁰ᵢ = 1.
• If unbinned ML is used to determine the parameters, X² is no longer strictly χ²(N−1−L), but it is bounded by χ²(N−1) and χ²(N−1−L). If N ≫ L, there is little difference.

S_n(x) = { 0      x < x₁
         { i/n    xᵢ ≤ x < xᵢ₊₁
         { 1      x ≥ xₙ

[Figure: the staircase function S_n(x) rising from 0 to 1 between x₁ and xₙ.]
… distribution, F₀(x). Critical values of the Kolmogorov-Smirnov statistic:

α:    0.20      0.10      0.05      0.01
d_α:  1.07/√n   1.22/√n   1.36/√n   1.63/√n
Exercise 13
The corresponding cumulative distribution is

F(t|τ, σ_t) = (1/2) [ erfc(−t/(√2 σ_t)) − exp(σ_t²/(2τ²) − t/τ) erfc(σ_t/(√2 τ) − t/(√2 σ_t)) ]
Solution to Exercise 13

[Histograms over the repeated experiments: χ² test; Kolmogorov-Smirnov test.]
Test of Independence

X² = Σᵢ Σⱼ (nᵢⱼ − nᵢ•n•ⱼ/n)² / (nᵢ•n•ⱼ/n)
   = n { Σᵢ Σⱼ nᵢⱼ²/(nᵢ•n•ⱼ) − 1 }

… with (I−1)(J−1) degrees of freedom.
For the run test, the expected number of runs and its variance are

⟨r⟩ = 2nm/(n+m) + 1,   V(r) = 2nm(2nm − n − m) / ((n+m)²(n+m−1))

and for large n, m (n, m > 10), d = (r − ⟨r⟩)/√V(r) approximately follows the standard Gaussian, G(0,1).

To combine two independent test probabilities x₁ and x₂,

u = −2 (ln x₁ + ln x₂)

will follow a χ² distribution with 4 degrees of freedom.
[Histogram for the example.]
… signs, so that p(r = 5) = 0.0074 is reason to reject the hypothesis.
The combination u = −2(ln P₂ + ln p(r)) = 13.5 corresponds to a probability of 0.009.
Part V: Chaotic Dynamics

Chaos is seen in many physical systems, for example:
◆ fluid dynamics (weather forecasting)
A damped driven pendulum will be used to …

[Figure: the pendulum, with angle θ and angular velocity ω.]
We have already seen that the pendulum is
'
Physics 75.502
Rev. 1.3
Part V: Chaotic Dynamics
1998/99
%
$
&
'
Physics 75.502
Rev. 1.3
Part V: Chaotic Dynamics
269
1998/99
%
$
&
Attractors

… attractors) are fractals. They have a non-integer …
Exploring an attractor:

[Plots: Poincaré sections of the attractor, v_N vs. x_N, at successive magnifications; similar structure appears at each scale.]
Fractional dimension

Cover the object with boxes of size ε. For a d-dimensional object of size L, the number of boxes needed is N(ε) = L^d (1/ε)^d; for example, N = 2ⁿ boxes of size ε = L/2ⁿ cover a line. The capacity dimension is

d_c = lim_{ε→0} log N(ε) / log(1/ε)
Example: dimension of the Cantor set

At level n of the construction the set is covered by N = 2ⁿ segments of length ε = 1/3ⁿ, so

d_c = lim_{n→∞} log 2ⁿ / log 3ⁿ = log 2 / log 3 < 1
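The counting argument above can be reproduced by constructing the covering intervals explicitly (a Python sketch; the construction level is an arbitrary choice):

```python
import math

def cantor_endpoints(level):
    """Left endpoints of the 2**level covering intervals of the Cantor set."""
    points = [0.0]
    size = 1.0
    for _ in range(level):
        size /= 3.0
        # Each interval splits into a left and a right third.
        points = [p for x in points for p in (x, x + 2.0 * size)]
    return points, size

level = 10
points, eps = cantor_endpoints(level)   # N = 2**level boxes of size 3**-level
d_c = math.log(len(points)) / math.log(1.0 / eps)
```

The computed d_c equals log 2 / log 3 ≈ 0.631 at every level, since the scaling is exact for this set.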
The fractional dimension of a chaotic attractor is a …

[Figure: a small circle of initial conditions of radius ε is stretched into an ellipse with axes ε e^(λ₁t) and ε e^(λ₂t).]

The average rate of expansion along the principal … direction, so one of the exponents is zero.
Bifurcation Diagrams

Bifurcation diagram for the damped driven pendulum. The horizontal axis is the driving force coefficient, f₀, and the vertical axis shows the possible long term velocities for a fixed phase of the driving force. The initial conditions are chosen at random for each point, and the first 100 cycles are not shown, so that transients will have decayed.

[Plot: ω vs. f₀ for 1.45 ≤ f₀ ≤ 1.55.]
Comparison of the pendulum to the logistic map,

xₙ = μ xₙ₋₁ (1 − xₙ₋₁)

For some values of μ, x tends to a fixed …
Feigenbaum number

lim_{k→∞} (μ_k − μ_{k−1}) / (μ_{k+1} − μ_k) = δ = 4.669201…
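The period doubling behind the Feigenbaum number is easy to observe by iterating the logistic map directly (a Python sketch; the specific μ values, transient length, and rounding are assumptions chosen for illustration):

```python
def logistic_orbit(mu, x0=0.5, transient=1000, keep=64):
    """Iterate x_n = mu*x_{n-1}*(1-x_{n-1}) and return the distinct
    long-term values (rounded) after discarding transients."""
    x = x0
    for _ in range(transient):
        x = mu * x * (1.0 - x)
    orbit = set()
    for _ in range(keep):
        x = mu * x * (1.0 - x)
        orbit.add(round(x, 6))
    return sorted(orbit)

fixed = logistic_orbit(2.8)   # below mu = 3: a single fixed point
cycle = logistic_orbit(3.2)   # after the first bifurcation: period 2
```

Sweeping μ and plotting the returned values against μ produces a bifurcation diagram analogous to the pendulum's.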
Index
amoeba, 82 binomial distribution, 167
bcucof, 39 bisection, 58
brent, 79 blackbody radiation, 73
dbrent, 79 boundary value problems, 115, 136
dfpmin, 91 bracketing a root, 57
frprmn, 90 Brent method, 61, 78
gauher, 49 Bulirsch-Stoer method, 110
gaujac, 49
gaulag, 49 Cantor set, 273
gauleg, 49 capacity dimension, 272
gaussj, 9 central limit theorem, 180, 193
laguer, 70 chaotic pendulum, 111
lnsrch, 72 classical method, 203
lubksb, 15 Chaotic Dynamics, 265
ludcmp, 14 combining errors, 194, 195
mnewt, 71 combining results, 221
mprove, 16 comparing distributions, 249
newt, 72 Compton scattering, 183
odeint, 109, 110 confidence interval, 200
poldiv, 69 confidence intervals from LS, 240
polin2, 38 conjugate directions, 84
polint, 24 conjugate gradient methods, 89
powell, 87 consistent estimator, 211
qromb, 44 continuity equation, 124
qromo, 45 correlation, 198
qsimp, 44 correlation coefficient, 198
qtrap, 43 covariance matrix, 196
ratint, 25 Crank-Nicholson scheme, 135
rtbis, 58 critical value, 242
rtflsp, 59 cubic spline interpolation, 33
rtnewt, 66
rtsafe, 66 cumulative χ² distribution, 229
rtsec, 59
simplx, 97 d.o.f., 232
splie2, 39 darts, 170
splin2, 39 degrees of freedom, 232
spline, 36 detector response, 188
splint, 36 differential equations, 103
svdcmp, 18 diffusion equation, 115
tridiag, 18 diffusion problem, 132
zbrac, 57 downhill simplex method, 80
zbrak, 57
zroots, 70
efficient estimator, 211
EGS, 188
χ² distribution, 227 embedded Runge-Kutta formulas, 108
2D interpolation, 37 error matrix, 196
estimate, 207
acceptance region, 242 estimation of parameters, 207
accounting problems, 92 estimator, 207
adaptive stepsize control, 107 Euler's method, 104
attractor, 270 exercise 1, 19
exercise 10, 199
exercise 11, 222
Bayesian approach, 205 exercise 12, 246
bicubic interpolation, 39 exercise 13, 258
bicubic spline, 39 exercise 2, 36
bifurcation diagram, 276 exercise 3, 53
binned data, 220, 235 exercise 4, 73
0-1
INDEX 0-2
exercise 5, 111 Laplace's equation, 115
exercise 6, 156 Lax method, 119
exercise 7, 160 Lax-Wendroff scheme, 120, 128
exercise 8, 175 least squares method, 226
exercise 9, 185 lifetime measurement, 222
experimental measurements, 189 likelihood function, 209
experimental uncertainties, 189 likelihood ratio test, 245
extrapolation, 22 linear algebra, 2
linear congruential method, 144
false position method, 59 linear constraints, 238
Feigenbaum number, 279 linear least squares model, 230
finding roots, 55 linear programming, 92
fluid mechanics, 124 logistic map, 278
flux conservative problem, 116 LU decomposition, 12
forward time centered space, 117 Lyapunov exponents, 274
fractals, 67
fractional dimension, 272 Marsaglia effect, 150
FTCS method, 117 matrix problems, 2
maximization, 74
Gauss-Hermite integration, 49 maximum likelihood method, 209
Gauss-Jacobi integration, 49 mean, 192
Gauss-Jordan elimination, 5 Metropolis algorithm, 99
Gauss-Laguerre integration, 49 midpoint rule, 42
Gauss-Legendre integration, 49 minimization, 74
Gauss-Seidel method, 137 modified midpoint method, 109
Gaussian confidence intervals, 202 Monte Carlo methods, 139
Gaussian distribution, 165 multidimensional integrals, 50
Gaussian elimination, 10 multidimensional simulation, 182
Gaussian quadrature, 47
Gaussian random numbers, 180, 181 Newton-Raphson method, 66
GEANT, 188 Neyman-Pearson test, 244
generalized likelihood function, 220 non-linear least squares model, 233
golden section search, 75 non-physical regions, 203
goodness of fit, 234 normal distribution, 165
goodness of fit tests, 254 null hypothesis, 241
gradient methods, 89 numerical integration, 40
numerical viscosity, 123
hyperbolic equation, 115
hypothesis tests, 241 ODEs, 103
optimization, 92
importance sampling, 177 ordinary differential equations, 103
improper integration, 45 overrelaxation method, 137
independence test, 260
initial value problems, 115 parabolic equation, 115
integration in many dimensions, 50 parabolic interpolation, 78
integration of functions, 40 parameter estimation, 207
interpolation, 22 parametric test, 248
interpolation in 2D, 37 partial differential equations, 115
inverse probability, 189, 201 PDEs, 115
inversion technique, 173 Pearson's χ² test, 254
iterative improvement, 16 pendulum, 111
photon transport, 187
Jacobi's method, 136 physical dimensions, 3
pivoting, 8
Poincare section, 269
Kolmogorov-Smirnov test, 256 Poisson distribution, 158, 162
polynomial fitting, 231
polynomial interpolation, 24
Lagrange multipliers, 239 Powell's heuristic method, 87
Laguerre's method, 70 Powell's method, 83, 86
INDEX 0-3
pseudo-random numbers, 142
radioactive decay, 155
radioactive nuclei, 156
random interval, 201
random number generation, 142
random numbers, 142
random uncertainty, 190
RANDU generator, 146
RANMAR generator, 152
rational function interpolation, 25
recursion, 51
rejection region, 242
rejection technique, 169
relaxation methods, 136
resistor divider network, 19
Ridders method, 60
Romberg integration, 44
root finding, 55
roots of polynomials, 69
run test, 261
Runge-Kutta method, 105
secant method, 59
signal significance, 251
significance of signal, 251
significance of test, 242
simplex method, 80, 95
Simpson's rule, 41
simulated annealing methods, 98
simulating detector response, 188
simulating general distributions, 168
simulating random processes, 155
singular value decomposition, 17
size of test, 242
solutions to any equation, 55
solving systems of equations, 71
SOR, 137
sparse linear systems, 18
spline, 33
Statistical Methods, 189
statistical uncertainty, 190
successive overrelaxation, 137
systematic uncertainty, 190
test of independence, 260
traffic simulation, 126
trapezoidal rule, 40
traveling salesman, 99
type I,II errors, 243
unbiased estimator, 211
uncertainties of ML estimates, 212
variable metric methods, 91
variance, 192
wave equation, 115
weighted events, 221
weighted mean, 210, 213