DOCUMENT RESUME

SE 030 462

AUTHOR          Sobol', I. M.
TITLE           The Monte Carlo Method. Popular Lectures in Mathematics.
INSTITUTION     Chicago Univ., Ill. Dept. of Mathematics.
SPONS AGENCY    National Science Foundation, Washington, D.C.
PUB DATE        74
GRANT           NSF-G-8847(MA)
NOTE            72p.; For related documents, see SE 030 460-465. Not
                available in hard copy due to copyright restrictions.
                Translated and adapted from the Russian edition.
AVAILABLE FROM  The University of Chicago Press, Chicago, IL 60637
                (Order No. 767493; $4.50).

EDRS PRICE      MF01 Plus Postage. PC Not Available from EDRS.
DESCRIPTORS     Higher Education; *Mathematical Applications;
                Mathematics Education; Mathematics Instruction;
                *Probability; *Statistics
IDENTIFIERS     *Monte Carlo Methods
ABsrRAcr .
. . r-he Monte Carlo is a method,of apprexim!taly
solving mathematical and physical probteas by the simulation of
qua.ntities . rne principal qoal 0 f this booklet, is suggest
to specia.liSts In all area.s' that will encpunter prpblems wllich
be sllved by the K3nte Carlo Part I of the booklet
the"simu1ation of randoM.v"rlables. Part II
of applic:ati.ons ,f the Monte Carlo (Aut hor/HK) . '
Reproductions supplied by EDRS are the best that can be made from the original document.
Popular Lectures in Mathematics
U.S. DEPARTMENT OF HEALTH, EDUCATION & WELFARE
NATIONAL INSTITUTE OF EDUCATION

THIS DOCUMENT HAS BEEN REPRODUCED EXACTLY AS RECEIVED FROM THE PERSON OR ORGANIZATION ORIGINATING IT. POINTS OF VIEW OR OPINIONS STATED DO NOT NECESSARILY REPRESENT OFFICIAL NATIONAL INSTITUTE OF EDUCATION POSITION OR POLICY.

"PERMISSION TO REPRODUCE THIS MATERIAL IN MICROFICHE ONLY HAS BEEN GRANTED BY THE NSF TO THE EDUCATIONAL RESOURCES INFORMATION CENTER (ERIC)."
The Monte Carlo Method

I. M. Sobol'

Translated and adapted from
the second Russian edition by
Robert Messer,
John Stone, and
Peter Fortini
Popular Lectures in Mathematics

Survey of Recent East European Mathematical Literature

A project conducted by
Izaak Wirszup,
Department of Mathematics,
the University of Chicago,
under a grant from the
National Science Foundation

I. M. Sobol'

The Monte Carlo Method

The University of Chicago Press
Chicago and London
The University of Chicago Press, Chicago 60637
The University of Chicago Press, Ltd., London
© 1974 by The University of Chicago
All rights reserved. Published 1974
Printed in the United States of America

International Standard Book Number: 0-226-76749-3
Library of Congress Catalog Card Number: 73-89791

I. M. SOBOL' is a research mathematician at the Mathematical Institute of the USSR Academy of Sciences. [1974]
Contents

Preface

1. Introduction to the Method

Part 1. Simulating Random Variables

2. Random Variables
3. Generating Random Numbers on a Computer
4. Transformations of Random Variables

Part 2. Examples of the Application of the Monte Carlo Method

5. Simulating a Mass-Supply System
6. Calculating the Quality and Reliability of Products
7. Simulating the Penetration of Neutrons through a Block
8. Evaluating a Definite Integral
9. Proofs of Selected Propositions

Appendix
Bibliography
Preface
Some years ago I accepted an invitation from the Department of Computer Technology at the Public University to deliver two lectures on the Monte Carlo method. These lectures have since been repeated over the course of several years, and their contents have gradually settled and "jelled." The present edition also includes a supplementary section (chapter 2), about which I should say a few words.

Shortly before the first lecture, I discovered to my horror that most of the audience was unfamiliar with probability theory. Since some familiarity with that theory was absolutely necessary, I hurriedly inserted in the lecture a section acquainting my listeners with some basic facts of probability. Chapter 2 of this booklet is an outgrowth of that section.
Nearly everyone has heard and used the words "probability," "frequency," and "random variable." The intuitive notions of probability and frequency more or less correspond to the true meanings of the terms, but the layman's notion of a random variable is rather different from the mathematical definition. In chapter 2, therefore, the concept of probability is assumed to be known, and only the more complex concept of a random variable is explained at length. This section cannot replace a course in probability theory: the presentation here is greatly simplified, and no proofs are given of the theorems asserted. But it does give the reader enough acquaintance with random variables for an understanding of the simplest procedures of the Monte Carlo method.
The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo method.

The problems considered in the lectures are fairly simple and have been drawn from diverse fields. Naturally, they cannot encompass all
the areas in which the method can be applied. For example, there is not a word about medicine in the booklet, although the methods of chapter 7 permit us to calculate radiation dosages in X-ray therapy. If one has a program for calculating the absorption of radiation by the various body tissues, it is possible to select the dosage and direction of irradiation which most effectively ensures that no harm is done to healthy tissues.

The present book includes the material read in the lectures. A more detailed exposition is given of certain examples, and chapter 9 has been added.
"
, ..
I. Sobol'
Moscow, 1967
1

Introduction to the Method
The Monte Carlo method is a method of approximately solving mathematical and physical problems by the simulation of random quantities.

1.1. The Origin of the Monte Carlo Method

The generally accepted birth date of the Monte Carlo method is 1949, when an article entitled "The Monte Carlo Method"1 appeared. The American mathematicians J. von Neumann and S. Ulam are considered its originators. In the Soviet Union, the first articles on the Monte Carlo method were published in 1955 and 1956.2
The theoretical basis of the method has long been known. In the nineteenth and early twentieth centuries, statistical problems were sometimes solved with the help of random selections, that is, in effect by the Monte Carlo method. Prior to the appearance of electronic computers, this method was not widely applicable, since the simulation of random quantities by hand is a very laborious process. Thus, the emergence of the Monte Carlo method as a highly universal numerical method became possible only with the appearance of computers.
""me name" Monte Carlo" comes from the city of Monte in the-
principality of Monaco, famous for its' gambling house. One of the
mechanical devices' for obtaining random quantities is th.c
'roulette wheel. This subject will be considered in chapter 3. Perhaps it is
worthwhile to answer here the frequently asked questiofl: "Does the
I
1. N. Metropolis and S. Ulam, "The Monte Carlo Method," Journal of the American Statistical Association 44, no. 247 (1949): 335-41.
2. These were the articles by V. V. Chavchanidze, Yu. A. Schreider, and V. S. Vladimirov.
Monte Carlo method help one win at roulette?" The answer is that it does not; it is not even an attempt to do so.
Example. In order to make clearer to the reader what we are talking about, let us examine a very simple example. Suppose that we need to compute the area of a plane figure S. It may be a completely arbitrary figure with a boundary given graphically or analytically, connected or consisting of several pieces. Let the region be as represented in figure 1.1, and let us assume that it is contained completely within the unit square.

Choose at random N points in the square and designate the number of points lying inside S by N'. It is geometrically obvious that the area of S is approximately equal to the ratio N'/N. The greater the N, the greater the accuracy of this estimate.

In the example represented in figure 1.1, we selected N = 40 points. Of these, N' = 12 points appeared inside S. The ratio N'/N = 12/40 = 0.30, while the true area of S is 0.35.3
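The estimate N'/N above is easy to try on a computer. The following sketch is ours, not the book's (the booklet predates such languages); it is written in Python and uses a hypothetical region S, a disk of area 0.35 centered in the unit square, standing in for the figure of figure 1.1.

```python
import random

def estimate_area(inside, n, seed=0):
    """Hit-or-miss Monte Carlo: the fraction of n random points of the
    unit square that fall inside the region tested by `inside`."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if inside(rng.random(), rng.random()))
    return hits / n

# A stand-in for S: a disk of area 0.35 centered in the unit square.
R2 = 0.35 / 3.141592653589793        # radius squared, so that pi * R2 = 0.35
in_disk = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < R2

print(estimate_area(in_disk, 40))      # crude, like the book's N = 40
print(estimate_area(in_disk, 40_000))  # much closer to the true area 0.35
```

Any region for which membership can be tested may be substituted for the disk; only the indicator function changes.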

1.2. Two Features of the Monte Carlo Method

In our example it would not have been too difficult to calculate directly the true area of S. Later in this booklet we shall consider some less trivial examples. Our simple method, however, does point out one feature of the Monte Carlo method, namely, the simple structure of the computational algorithm. This algorithm consists, in general, of a process for producing a random trial. The process is repeated N times, each trial being independent of the rest, and the results of all the trials are averaged together. Because of its similarity to the process of performing a scientific experiment, the Monte Carlo method is sometimes

3. In practice, the Monte Carlo method is not used for calculating the area of a plane figure. There are other methods for this which, although they are more complicated, guarantee much greater accuracy.
Still, the Monte Carlo method shown in our example permits us to calculate very simply the "many-dimensional volume" of a body in many-dimensional space; and in such a case the Monte Carlo method often proves to be the only numerical method useful in solving the problem.
called the method of statistical trials. In our example, the random trial consisted of taking a random point in the square and checking to determine whether it belonged to S, and the results of the trials were averaged together by taking the ratio N'/N.

A second feature of the method is that, as a rule, the error which we expect from the calculation is of order √(D/N), where D is some constant and N is the number of trials. In our example, it turns out from probability theory (for the proof, see section 2.6) that

D = A(1 − A) = (0.35)(1 − 0.35) ≈ 0.23,

where A is the true area of the region S, so √(D/N) = √(0.23/40) ≈ 0.076. We see that the actual error of the calculation, 0.05, was not, after all, unreasonably large.

From the formula

error ≈ √(D/N)

it is clear that to decrease the error by a factor of 10 (in other words, to obtain another significant digit in the result), it is necessary to increase N (and the amount of work) by a factor of 100.

To attain high precision in this way is clearly impossible. The Monte Carlo method is most effective in solving problems in which the result need be accurate only to 5-10%. However, any particular problem can be solved by different variations of the Monte Carlo method which assign different values to D. In many problems, a computational procedure which gives D a significantly smaller value will considerably increase the accuracy of the result.
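The √(D/N) law is easy to observe numerically. In the sketch below (a Python illustration of ours, not from the book), we use the fact that for a region of area A the event "a random point lands in S" is simply a trial that succeeds with probability A, so each trial can be modeled by a single comparison.

```python
import random
import statistics

A = 0.35   # true area of the hypothetical region S

def one_estimate(n, rng):
    """One hit-or-miss estimate of the area from n random points."""
    return sum(1 for _ in range(n) if rng.random() < A) / n

def spread(n, reps=200, seed=1):
    """Empirical standard deviation of the estimate over many repetitions."""
    rng = random.Random(seed)
    return statistics.pstdev(one_estimate(n, rng) for _ in range(reps))

print(spread(40))    # roughly sqrt(D/40) with D = 0.35 * 0.65 ~ 0.23
print(spread(4000))  # about 10 times smaller: N grew by a factor of 100
```

The two printed spreads differ by roughly a factor of ten, as the formula predicts for a hundredfold increase in N.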
1.3. Problems That Can Be Solved by the Monte Carlo Method

The Monte Carlo method makes possible the simulation of any process influenced by random factors. This, however, is not its only use. For many mathematical problems involving no chance, we can artificially devise a probabilistic model (frequently several) for solving these problems. In fact, this was done in the example in section 1.1. For these reasons the Monte Carlo method can be considered a universal method for solving mathematical problems.4

4. In foreign literature the term Monte Carlo methods (in the plural) is now more frequently used, in view of the fact that the same problem can be solved by simulating different random variables.
" .
, ...
11
. .
. .
....... !; ..
, .
-
. ,'.. ..,''', .
, ' .
,"".
" .
I
,

" .
,
4 Introduction to tlle 'Method
It is particularly interesting that in certain cases, instead of simulating the actual random process, it is advantageous to use artificial models. Such a situation is the topic of chapter 7.

More about the example. Let us return to the example of section 1.1. For the calculation we needed to choose points at random in the unit square. How is this actually done?

Let us set up such an experiment. Imagine figure 1.1 (on an increased scale) hanging on a wall as a target. Some distance from the wall, N darts are aimed at the center of the square and thrown. Of course, not all the darts will fall exactly in the center; they will strike the target at N random points.
Can these points be used to estimate the area of S?

The result of such an experiment is depicted in figure 1.2. In this experiment N = 40, N' = 24, and the ratio N'/N = 0.60 is almost double the true value of the area (0.35). It is clear that when the darts are thrown with very great skill,5 the result of the experiment will be very bad, as almost all of the darts will fall near the center and thus in S.6

We can see that our method of computing the area will be valid only when the random points are not simply "random," but, in addition, "uniformly distributed" over the whole square. To give these words a precise meaning, we must become acquainted with the definition of random variables and with some of their properties. This information is presented in chapter 2. A reader who has studied probability theory may omit all except sections 2.5 and 2.6 of chapter 2.
5. We assume that the darts are not in the hands of the world champion and that they are thrown from a sufficiently great distance from the target.
6. The ways in which the random points were chosen in figures 1.1 and 1.2 will be discussed in section 4.5.
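The failure of the "skilled dart thrower" can be reproduced numerically. In this Python sketch of ours (the region S is again a hypothetical disk of area 0.35 at the center of the square), uniformly chosen points give a fair estimate, while points clustered near the center, as in figure 1.2, grossly overestimate the area.

```python
import random

R2 = 0.35 / 3.141592653589793   # hypothetical S: disk of area 0.35, centered
in_S = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 < R2

def estimate(point_source, n=40_000, seed=9):
    """Hit-or-miss area estimate with points drawn from point_source."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if in_S(*point_source(rng)))
    return hits / n

uniform = lambda rng: (rng.random(), rng.random())
# A "skilled dart thrower": points cluster normally around the center.
clustered = lambda rng: (rng.gauss(0.5, 0.15), rng.gauss(0.5, 0.15))

print(estimate(uniform))    # close to the true area 0.35
print(estimate(clustered))  # much too large, as with the darts of figure 1.2
```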
Part 1

Simulating Random Variables

2

Random Variables
We assume that the concept of probability is more or less familiar to the reader, and we turn to the concept of a random variable.

The words "random variable," in ordinary English usage, refer to the outcome of any process which proceeds without any discernible aim or direction. However, a mathematician's use of the words "random variable" has a completely definite meaning. He is saying that we do not know the value of this quantity in any given case, but we know what values it can assume and we know the probabilities with which it assumes these values. On the basis of this information, while we cannot precisely predict the result of any single trial associated with this random variable, we can predict very reliably the total results of a great number of trials. The more trials there are (as they say, the larger the sample is), the more accurate the prediction will be.
2.1. Discrete Random Variables

A random variable X is called discrete if it can assume any of a discrete set of values x1, x2, ..., xn.1 X is therefore defined by the table

X = ( x1  x2  ...  xn )
    ( p1  p2  ...  pn )   (T)

where x1, x2, ..., xn are the possible values of the variable X, and p1, p2, ..., pn are the probabilities corresponding to them. Precisely

1. In probability theory, discrete random variables that can assume a countably infinite number of values x1, x2, x3, ... are also considered.
speaking, the probability that the random variable X has the value xi (denoted by P(X = xi)) is equal to pi:

P(X = xi) = pi .

Sometimes we write p(xi) instead of pi or P(X = xi). Table (T) is called the distribution of the random variable.

The numbers x1, x2, ..., xn are arbitrary. However, the probabilities p1, p2, ..., pn must satisfy two conditions:

(a) all the pi are non-negative:

pi ≥ 0 ;   (2.1)

(b) the sum of all the pi equals 1:

p1 + p2 + ... + pn = 1 .   (2.2)

The last condition means that in every event X must assume one of the values x1, ..., xn.

The number

E(X) = x1 p1 + x2 p2 + ... + xn pn   (2.3)

is called the expected value, or mathematical expectation, of the random variable X.

To illustrate the physical meaning of this quantity we write it in the following form:

E(X) = (x1 p1 + x2 p2 + ... + xn pn) / (p1 + p2 + ... + pn) .

We see that E(X) is in a sense the average value of the variable X, in which the more probable values are added into the sum with larger weights.2

2. Averaging with weights is very common in science. For example, in mechanics, if masses m1, ..., mn are distributed on the x-axis at the points x1, ..., xn, then the abscissa of the center of gravity of this system is given by the formula

x̄ = (x1 m1 + ... + xn mn) / (m1 + ... + mn) .

Of course, in this case the sum of all the masses does not necessarily equal unity.
Let us mention the basic properties of the expected value. If c is any constant, then

E(X + c) = E(X) + c ,   (2.4)

E(cX) = c E(X) .   (2.5)

If X and Y are any two random variables, then

E(X + Y) = E(X) + E(Y) .   (2.6)

The number

Var (X) = E([X − E(X)]²)   (2.7)

is called the variance of the random variable X. That is, the variance Var (X) is the expected value of the squared deviation of the random variable X from its mean value E(X). Obviously, Var (X) ≥ 0 always.
The mathematical expectation and the variance are the most important numerical characteristics of the random variable X. What is their practical value?

If we observe the variable X many times and obtain the values X1, X2, ..., XN (each of which is equal to one of the numbers x1, ..., xn), then the arithmetic mean of these values will be close to E(X):

(1/N)(X1 + X2 + ... + XN) ≈ E(X) ,   (2.8)

and the variance Var (X) characterizes the spread of these values around the average E(X).

Formula (2.7) can be transformed using formulas (2.4)-(2.6):

Var (X) = E(X² − 2E(X)·X + [E(X)]²)
        = E(X²) − 2E(X)·E(X) + [E(X)]² ,

whence

Var (X) = E(X²) − [E(X)]² .   (2.9)

It is usually easier in hand computations to find the variance by formula (2.9) than by formula (2.7).
Let us mention the basic properties of the variance. If c is any constant, then

Var (X + c) = Var (X) ,   (2.10)

Var (cX) = c² Var (X) .   (2.11)

The concept of independence of random variables plays an important role in the theory of probability. Let us suppose that, besides the variable X, we also watch a random variable Y. If the distribution of the variable X does not change when we know the value which the variable Y assumes, and vice versa, then it is natural to believe that X and Y do not depend on each other. We then say that the random variables X and Y are independent.

The following relations hold for independent random variables X and Y:

E(XY) = E(X)·E(Y) ,   (2.12)

Var (X + Y) = Var (X) + Var (Y) .   (2.13)
Example. Let us consider a random variable X with the distribution

X = (  1    2    3    4    5    6  )
    ( 1/6  1/6  1/6  1/6  1/6  1/6 )

Since each of the values is equally probable, the number of dots appearing when a die is thrown can be used to realize these values. Let us calculate the mathematical expectation and the variance of X. By (2.3),

E(X) = (1/6)(1 + 2 + 3 + 4 + 5 + 6) = 3.5 .

By formula (2.9),

Var (X) = E(X²) − [E(X)]² = (1/6)(1² + 2² + 3² + 4² + 5² + 6²) − (3.5)² ≈ 2.917 .

Example. Let us consider a random variable Y with the distribution

Y = (  3    4  )
    ( 0.5  0.5 )

To realize these values, we can consider a toss of a coin with the condition that a head counts 3 points and a tail 4 points. In this case,

E(Y) = 0.5·3 + 0.5·4 = 3.5 ;

Var (Y) = 0.5·(3² + 4²) − (3.5)² = 0.25 .
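Formulas (2.3) and (2.9) translate directly into a few lines of code. This Python sketch (ours, not the book's) represents a distribution table (T) as a list of (value, probability) pairs and recomputes both examples above.

```python
def expectation(dist):
    """E(X) for a discrete distribution given as (value, probability) pairs."""
    return sum(x * p for x, p in dist)

def variance(dist):
    """Var(X) by formula (2.9): E(X^2) - (E(X))^2."""
    return sum(x * x * p for x, p in dist) - expectation(dist) ** 2

die = [(k, 1 / 6) for k in range(1, 7)]   # the die of the first example
coin = [(3, 0.5), (4, 0.5)]               # the coin of the second example

print(expectation(die), variance(die))    # 3.5 and about 2.917
print(expectation(coin), variance(coin))  # 3.5 and 0.25
```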

We see that E(Y) = E(X), but Var (Y) < Var (X). This could easily have been anticipated, since the values of Y can differ from 3.5 only by 0.5, while for the values of X the spread can reach 2.5.

2.2. Continuous Random Variables

Let us assume that some radium is placed at the origin of a coordinate plane. As each atom of radium decays, an α-particle is emitted. We shall describe its direction by the angle ψ (fig. 2.1). Since, both in theory and in practice, any direction of emission is possible, this random variable can assume any value from 0 to 2π.

We shall say that a random variable X is continuous if it can assume any value in some interval [a, b].

A continuous random variable X is defined by the assignment of a function p(x) on the interval [a, b] containing the possible values of this variable; p(x) is called the probability density, or density distribution, of the random variable X.

The significance of p(x) is as follows: Let (a', b') be an arbitrary interval contained in [a, b] (that is, a ≤ a' and b' ≤ b). Then the probability that X lies in the interval (a', b') is equal to the integral

P(a' < X < b') = ∫_{a'}^{b'} p(x) dx .   (2.14)

In figure 2.2 the shaded area represents the value of the integral (2.14).

The set of values of X can be any interval. The interval may contain either or both of its endpoints, and even the cases a = −∞ and b = ∞ are possible.
The density p(x), however, must satisfy two conditions, analogous to conditions (2.1) and (2.2) for discrete random variables:

(a) the density p(x) is non-negative:

p(x) ≥ 0 ;   (2.15)

(b) the integral of the density p(x) over the whole interval (a, b) is equal to 1:

∫_{a}^{b} p(x) dx = 1 .   (2.16)

The number

E(X) = ∫_{a}^{b} x p(x) dx   (2.17)

is called the expected value of the continuous random variable X.

The significance of this quantity is the same as in the case of the discrete random variable. Since

E(X) = ∫_{a}^{b} x p(x) dx / ∫_{a}^{b} p(x) dx ,

it is easily seen that this is the average value of X. In fact, X can assume any value x in the interval (a, b) with "weight" p(x).3

Everything explained in section 2.1 from formula (2.4) up to and including formula (2.13) is also valid for continuous random variables. This includes the definition of variance (2.7), the formula (2.9) for its computation, and all the properties of E(X) and Var (X). We shall not repeat them.4
3. In this case it is also possible to explain the analogous formula in mechanics: If the linear density of a rod is equal to p(x) for a ≤ x ≤ b, then the abscissa of the center of gravity of the rod is given by the formula

x̄ = ∫_{a}^{b} x p(x) dx / ∫_{a}^{b} p(x) dx .

4. This statement is not exactly true for all continuous random variables. In statistics there arise a few continuous random variables for which one or both of the integrals

E(X) = ∫ x p(x) dx ,   Var (X) = ∫ x² p(x) dx − [E(X)]²

diverge; for instance, the Cauchy density p(x) = (1/π)·1/(1 + x²) for −∞ < x < ∞ has infinite variance. For these variables, formulas (2.7) through (2.13) cannot be used, and special methods must be devised to treat them.
Let us mention one more formula, that for the mathematical expectation of a random function. As before, let the random variable X have probability density p(x). We choose an arbitrary continuous function f(x) and consider the random variable Y = f(X), sometimes called a random function. It can be proved that

E(f(X)) = ∫_{a}^{b} f(x) p(x) dx .   (2.18)

Let us stress that, generally speaking, E(f(X)) ≠ f(E(X)).

The random variable G defined on the interval [0, 1] and having the density p(x) = 1 is said to have a uniform distribution on [0, 1] (fig. 2.3).

Whatever subinterval (a', b') we take within [0, 1], the probability that G lies in (a', b') is equal to

∫_{a'}^{b'} p(x) dx = b' − a' ,

that is, to the length of the subinterval. In particular, if we divide [0, 1] into any number of intervals of equal length, the probabilities of G hitting any of these intervals are the same.

It is easy to calculate that

E(G) = ∫_{0}^{1} x p(x) dx = ∫_{0}^{1} x dx = 1/2 ,

Var (G) = ∫_{0}^{1} x² p(x) dx − [E(G)]² = 1/3 − 1/4 = 1/12 .

In what follows we shall have many uses for the random variable G.
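These properties of G can be checked empirically. In this Python sketch of ours, the standard library's `random.random()` plays the role of G; we verify the two moments computed above and the fact that the probability of a subinterval equals its length.

```python
import random

rng = random.Random(42)
N = 200_000
samples = [rng.random() for _ in range(N)]   # values of G, uniform on [0, 1]

mean = sum(samples) / N
var = sum(x * x for x in samples) / N - mean ** 2
print(mean, var)   # near E(G) = 1/2 and Var(G) = 1/12 ~ 0.0833

# The probability of landing in a subinterval equals its length:
hits = sum(1 for x in samples if 0.2 < x < 0.5)
print(hits / N)    # near 0.3, the length of (0.2, 0.5)
```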
2.3. Normal Random Variables

A normal (or gaussian) random variable is a random variable Z defined on the whole axis (−∞, ∞) and having the density

p(x) = (1/(σ√(2π))) exp(−(x − a)²/(2σ²)) ,   (2.19)

where a and σ > 0 are numerical parameters.
The parameter a does not affect the shape of the curve p(x): a change in a results only in a displacement of the curve along the x-axis. However, the shape of the curve does change with a change in σ. Indeed, it is easy to see that

max p(x) = p(a) = 1/(σ√(2π)) .

If σ decreases, max p(x) will increase. However, according to condition (2.16), all the area under the curve p(x) is equal to 1. Therefore, the curve will extend upward near x = a, but will decrease for all sufficiently large x. In figure 2.4 two normal densities are drawn, one with a = 0, σ = 1, and another with a = 0, σ = 0.5. (Another normal density is drawn in figure 6.5 below.)

It is possible to show that

E(Z) = a ,   Var (Z) = σ² .

Normal random variables are frequently encountered in the investigation of very diverse problems; the reason for this will be discussed below. For example, an error δ in measurement is generally a normal random variable. If there is no systematic error in the measurement, then a = E(δ) = 0. And the quantity σ = √(Var (δ)), called the standard deviation of δ, describes the error in the method of measurement.
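The behavior of the peak described above is immediate from formula (2.19). A small Python check of ours: evaluating the density at x = a shows that halving σ doubles the maximum, as the formula p(a) = 1/(σ√(2π)) predicts.

```python
import math

def normal_density(x, a=0.0, sigma=1.0):
    """The normal density (2.19) with parameters a and sigma."""
    return math.exp(-(x - a) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The peak p(a) = 1 / (sigma * sqrt(2*pi)) doubles when sigma is halved:
print(normal_density(0.0, sigma=1.0))   # about 0.399
print(normal_density(0.0, sigma=0.5))   # about 0.798
```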
The rule of "three sigmas." It is not difficult to calculate that for a normal density p(x),

∫_{a−3σ}^{a+3σ} p(x) dx = 0.997

(fig. 2.4), whatever the values of a and σ in (2.19). From (2.14) it follows that

P(a − 3σ < Z < a + 3σ) = 0.997 .   (2.20)

The probability 0.997 is very near to 1. We therefore give the latter formula the following interpretation: For a single trial it is practically impossible to obtain a value of Z differing from a by more than 3σ.
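The rule of three sigmas is easy to verify by sampling. In this Python sketch of ours, the parameter values a = 10 and σ = 2 are arbitrary choices; any others give the same proportion, as (2.20) asserts.

```python
import random

rng = random.Random(7)
a, sigma, N = 10.0, 2.0, 100_000

# Count samples of a normal variable falling within three sigmas of a.
inside = sum(1 for _ in range(N) if abs(rng.gauss(a, sigma) - a) < 3 * sigma)
print(inside / N)   # close to 0.997, whatever a and sigma we pick
```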
2.4. The Central Limit Theorem of Probability Theory

This remarkable theorem was first formulated by Laplace; many mathematicians, among them A. M. Lyapunov, have worked on generalizations of the original result. Its proof is rather complex.

Let us consider N independent, identically distributed random variables X1, X2, ..., XN; that is to say, the probability densities of these variables coincide. Consequently, their mathematical expectations and variances also coincide.

We write

E(X1) = E(X2) = ... = E(XN) = m ,

Var (X1) = Var (X2) = ... = Var (XN) = b² .

Denote the sum of all these variables by SN:

SN = X1 + X2 + ... + XN .

From formulas (2.6) and (2.13) it follows that

E(SN) = E(X1 + X2 + ... + XN) = Nm ,

Var (SN) = Var (X1 + X2 + ... + XN) = Nb² .

Now let us consider the normal random variable ZN with these same parameters: a = Nm, σ² = Nb².

THEOREM 2.1. The density of the sum SN approaches the density of the normal variable ZN in such a way that for every x,

P( (SN − Nm)/(b√N) < x ) ≈ P( (ZN − Nm)/(b√N) < x )

for all large N.

The significance of this theorem is clear: The sum SN of a large number of identical random variables is approximately normal (pSN(x) ≈ pZN(x)).
Indeed, the theorem is valid under much weaker conditions: Not all the terms X1, ..., XN have to be identical and independent; essentially, all that is required is that single terms do not play too great a role in the sum.

It is precisely this theorem which explains why normal random variables are so often encountered in nature. Indeed, whenever we meet a summed influence of a large number of independent random factors, the resulting random variable proves to be normal. For example, the scattering of artillery shells from their target is almost always a normal random variable, since it depends on the meteorological conditions in the various regions of the trajectory as well as on many other factors.
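Theorem 2.1 can be watched in action. In the Python sketch below (ours, with the uniform variable G as the common term, so m = 1/2 and b² = 1/12), the sum of only N = 12 uniform terms already has the mean Nm, the variance Nb², and very nearly the normal three-sigma proportion.

```python
import random
import statistics

rng = random.Random(3)
N, reps = 12, 50_000
m, v = 0.5, 1 / 12      # mean and variance of a single uniform term

sums = [sum(rng.random() for _ in range(N)) for _ in range(reps)]
print(statistics.mean(sums))        # near N * m = 6
print(statistics.pvariance(sums))   # near N * v = 1

# Like a normal variable, S_N falls within three standard deviations
# of its mean in almost all trials:
frac = sum(1 for s in sums if abs(s - N * m) < 3 * (N * v) ** 0.5) / reps
print(frac)                         # close to 0.997
```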
2.5. The General Scheme of the Monte Carlo Method

Suppose that we want to determine some unknown quantity m. Let us attempt to devise a random variable X with E(X) = m. Say the variance of this variable is Var (X) = b².

Consider N independent random variables X1, ..., XN with distributions identical to that of X. If N is sufficiently large, then, according to the theorem of section 2.4, the distribution of the sum SN = X1 + X2 + ... + XN will be approximately normal, with parameters a = Nm, σ² = Nb². From equation (2.20) it follows that

P(Nm − 3b√N < SN < Nm + 3b√N) ≈ 0.997 .

If we divide the inequality within the parentheses by N, we obtain an equivalent inequality, and the probability remains the same:

P(m − 3b/√N < SN/N < m + 3b/√N) ≈ 0.997 .

We can rewrite the last relation in a slightly different form:

P( |(1/N) Σ Xj − m| < 3b/√N ) ≈ 0.997 .   (2.21)
This is an extremely important relation for the Monte Carlo method. It gives us both a method of calculating m and an estimate of the uncertainty of our estimation.

Indeed, suppose that we have found N values of the random variable X.5 From (2.21) it is obvious that the arithmetic mean of these values will be approximately equal to m. With high probability, the error of this approximation does not exceed the quantity 3b/√N. Obviously, this error converges to zero as N increases.
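The general scheme above can be sketched in a few lines. In this Python illustration of ours, the unknown quantity is the hypothetical target m = E(cos G) = sin 1; the variable X = cos G is our own choice of a variable with the right expectation, and b is estimated from the sample itself rather than assumed known.

```python
import math
import random

def monte_carlo_mean(f, n, seed=5):
    """General scheme of section 2.5: average n independent values of
    X = f(G), G uniform on [0, 1]; return the estimate of m = E(X)
    and the error bound 3*b/sqrt(n), with b estimated from the sample."""
    rng = random.Random(seed)
    xs = [f(rng.random()) for _ in range(n)]
    mean = sum(xs) / n
    b = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return mean, 3 * b / n ** 0.5

# Hypothetical target: m = E(cos G) = sin 1, about 0.8415.
est, bound = monte_carlo_mean(math.cos, 10_000)
print(est, bound)   # estimate near sin 1; error bound of a few thousandths
```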
2.6. A Last Word about the Example

Let us now apply some of these ideas to the example of section 1.1, to see how we obtained the formula for the error.

If we denote the result of the j-th single trial by

Xj = { 1, if the j-th random point lies in S,
     { 0, if not,

then our estimate of the area of S is just (Σ Xj)/N. It is easy to see that the distribution of each Xj is

Xj = (  0     1 )
     ( 1−A    A )

Hence, by formulas (2.3) and (2.9),

m = E(Xj) = 0·(1 − A) + 1·A = A ,

b² = Var (Xj) = [0²·(1 − A) + 1²·A] − A² = A(1 − A) ,

so that

3b/√N = 3√(A(1 − A)/N) .

We have chosen to omit the factor 3 from the formula 3b/√N, since a deviation as large as 3b/√N will rarely be encountered. Our formula √(D/N) actually gives the standard deviation of the normal random variable which is "closest" to the distribution of (Σ Xj)/N. It is closely related to another measure of error, the probable error, which we shall introduce later, in chapter 8.

5. It is immaterial whether we find one value for each of the N variables X1, ..., XN or N values of the single variable X, since all these random variables have identical distributions.
3

Generating Random Numbers on a Computer
The very thought of generating random numbers on a computer sometimes provokes the question: "How can a machine produce anything random, when everything it does must be programmed beforehand?" There are, indeed, several difficulties with this question, but they belong more to philosophy than to mathematics, and so we shall not dwell on them here.

The real question is whether natural phenomena can be described exactly. Such a description always proves to be approximate, and a random variable which describes some physical quantity with perfect accuracy in one set of problems can prove to characterize the same quantity poorly during the investigation of others.
Such problems of description are universal, not only within applied mathematics but in all other fields as well. A cartographer, for example, can draw a road on a national map as a perfectly straight line. On the large-scale map of a heavily populated area, it must be drawn wide and crooked, and close examination reveals all sorts of properties of the road: color, texture, and the like, of which the original description can take no account whatsoever. Our use of random variables should be regarded not as providing a perfect description of natural phenomena, but as an aid in solving the problems in which we may be interested.

Ordinarily, three ways of obtaining random values are distinguished: tables of random numbers, random number generators, and the pseudo-random number method.
3.1. Tables of Random Numbers

Let us perform the following experiment. We mark the digits 0, 1, 2, ..., 9 on ten identical slips of paper. We place these slips of
paper in a hat, mix them thoroughly, and take out one; then return it and mix again. We write down the digits obtained in this way in the form of a table like table A in the Appendix (in table A the digits are arranged in groups of five for convenience).

Such a table is called a table of random digits. It is possible to put it into a computer's memory. Then, in the process of calculation, when we need values of a random variable with the distribution

    ( 0     1     2    ...   9   )
    ( 0.1   0.1   0.1  ...   0.1 )     (3.1)

we need only take the next digit from this table.
The largest of the published random-number tables contains one million digits.¹ Of course, it was compiled with the assistance of technical equipment more sophisticated than a hat: a special roulette wheel was constructed which operated electronically. Figure 3.1 shows an elementary version of such a wheel. A rotating disc is stopped suddenly, and the number to which the stationary arrow points is selected.

Fig. 3.1

Compiling a large table of random numbers is not as easy as it may appear. Any real physical device produces random variables with a distribution differing slightly from the ideal distribution. During an experiment there may well be accidents (for example, one of the slips of paper in the hat might stick to the lining for some time). Therefore, the compiled tables are carefully checked by special statistical tests, to make sure that no particular characteristics of the group of numbers

1. RAND Corporation, A Million Random Digits with 100,000 Normal Deviates (Glencoe: Free Press, 1955).

9["
--0
r :
, ':'"
"
. ,
,
20 Simulating .Random Variables
contradict the hypothesis that the numbers are values of a random variable with the distribution (3.1).

Let us examine one of the simplest tests. Consider a table containing N digits. Let the number of zeros in this table be ν₀, the number of ones ν₁, the number of twos ν₂, and so on. We calculate the sum

    Σ_{i=0}^{9} (νᵢ - 0.1N)².

The theory of probability allows us to predict the range in which this sum should lie. It should not be very large, since the mathematical expectation of each of the νᵢ is equal to 0.1N, but neither should it be too small, since that would indicate an "overly regular" distribution of values.
Tables of random numbers are used only for Monte Carlo method calculations performed by hand. The fact is that all computers have comparatively small internal memories, and a large table will not fit in them. To store the table in external memory and then to consult it continually for numbers slows the calculation down considerably.

The possibility that, in time, the memories of computers will increase sharply should not be ruled out, and in that case random-number tables might become more widely useful.
3.2. Random-Number Generators
It would seem that the wheel described in section 3.1 could be hooked up to a calculating machine and be made to produce random numbers as needed. However, any mechanical device would be too slow for a computer. Therefore, vacuum tube noise is more often used as a random-number generator. The noise level of the tube is monitored, and if, within some fixed interval of time, the noise exceeds a set threshold an even number of times, a zero is recorded; if an odd number of times, a one.²

At first glance this is a very convenient procedure. Suppose m such generators work in parallel, all the time, and send random zeros and ones into all the binary places of a particular memory location. At any point in its calculations the machine can go to this location and take from it the random value G. The values will be distributed over the interval [0, 1], though only approximately, of course, each number

2. There are setups whose results are even more statistically perfect.
being an m-digit binary fraction of the form 0.D₍₁₎D₍₂₎ ... D₍ₘ₎, where each of the variables D₍ⱼ₎ imitates a random variable with the distribution

    ( 0     1   )
    ( 0.5   0.5 ).
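The assembly of G from m parallel bit generators can be sketched as follows; here Python's random bits stand in for the vacuum-tube noise, and the function name is ours:

```python
import random

def uniform_from_bits(next_bit, m=20):
    """Assemble G = 0.b1 b2 ... bm (an m-digit binary fraction)
    from m independent random bits."""
    g = 0.0
    place_value = 0.5
    for _ in range(m):
        g += next_bit() * place_value   # bit j contributes b_j * 2**(-j)
        place_value /= 2
    return g

random.seed(2)
print(uniform_from_bits(lambda: random.getrandbits(1)))  # a value in [0, 1)
```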
Yet even this method is not free from defects. First, it is difficult to check the "quality" of the numbers produced. It is necessary to make periodic tests, since any imperfection can lead to a distribution drift (that is, the zeros and ones in one of the places begin to appear with unequal frequencies). Second, it is often desirable to be able to repeat a calculation on the computer. But it is impossible to duplicate a sequence of random numbers if they are not held in the memory throughout the calculation; and if they are held in the memory, we are back to the tables.

Methods of this sort will undoubtedly prove useful when computers are constructed especially for solving problems by means of the Monte Carlo method. For all-purpose computers, however, on which calculations requiring random numbers come up only rarely, it is simply not economical to maintain and to make use of such special equipment. It is better to use pseudo-random numbers.
3.3. Pseudo-Random Numbers
So long as the "quality" of the random numbers used can be verified by special tests, one can ignore the means by which they were produced. It is even possible to try to generate them through a set formula. Numbers obtained by a formula that imitate the values of a random variable G uniformly distributed in [0, 1] are called pseudo-random numbers. Here the word "imitate" means that these numbers satisfy the tests just as if they were values of a random variable. They will be quite satisfactory so long as the calculations performed with them remain unrelated to the particular formula by which they were produced.

The first algorithm for obtaining pseudo-random numbers was proposed by J. von Neumann. It is called the middle-of-squares method. We illustrate it with an example.

We are given a four-digit integer n₁ = 9876. We square it. We usually obtain an eight-digit number; here n₁² = 97535376. We take out the middle four digits of this number and designate the result n₂ = 5353. Then we square n₂ (n₂² = 28654609) and once more take out the middle four digits, obtaining n₃ = 6546.
Then n₃² = 42850116, n₄ = 8501; n₄² = 72267001, n₅ = 2670; n₅² = 07128900, n₆ = 1289; and so forth.

The proposed values to be used for the variable G are then 0.9876; 0.5353; 0.6546; 0.8501; 0.2670; 0.1289; and so forth.³
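The middle-of-squares recipe is easy to state in code. A sketch in Python, treating each nₖ as a four-digit integer whose eight-digit square is padded by leading zeros when necessary:

```python
def middle_of_squares(n1, count):
    """Generate `count` members of the sequence n1, n2, ... in which
    n_{k+1} is the middle four digits of the eight-digit square of n_k."""
    values = [n1]
    n = n1
    for _ in range(count - 1):
        # Drop the last two digits of the square, keep the next four;
        # integer arithmetic supplies the implicit leading zeros.
        n = (n * n) // 100 % 10000
        values.append(n)
    return values

seq = middle_of_squares(9876, 6)
print(seq)                        # [9876, 5353, 6546, 8501, 2670, 1289]
print([n / 10000 for n in seq])   # the corresponding values of G
```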
This algorithm is not suitable, for it tends to give more small numbers than it should. It is also prone to falling into "traps," such as the sequences 0000, 0000, ..., and 6100, 2100, 4100, 8100, 6100, .... For these reasons various experimenters have worked out other algorithms. Some of them take advantage of peculiarities of specific computers. As an example, let us examine one such algorithm, used on the Strela computer.⁴
Example. The Strela is a three-address, floating-point computer. The memory location into which the number x is placed is made up of forty-three binary places (fig. 3.2). The machine works with binary numbers in the form x = ±q·2^(±p), where p is the exponent of the number and q the coefficient.⁵

Fig. 3.2 (showing the coefficient, the sign of the coefficient, the exponent, and the sign of the exponent)

In the jth place there can be a zero or a one; let us call this value εⱼ. In the sign places, a zero signifies the + sign, a one the - sign.
3. This algorithm can be written in the form nₖ₊₁ = F(nₖ), where F stands for the aggregate of the operations that are performed on the number nₖ in order to obtain nₖ₊₁. The number n₁ is given. The pseudo-random numbers are Gₖ = 10⁻⁴·nₖ.

4. See I. M. Sobol', "Psevdosluchainye chisla dlya mashiny Strela" [Pseudo-random numbers for the Strela computer], Teoriya veroyatnosti i ee primeneniya [Probability theory and its applications] 3, no. 2 (1958): 205-11.
5. A somewhat different method of floating-point number storage is common in American computers such as the IBM 360. Either 32 or 64 binary places, arranged in groups of eight, are used. Real numbers are considered as written in the form

    x = ±q·16^(p - 64),

where 1/16 ≤ q < 1 and 0 ≤ p ≤ 127. The leftmost binary digit records the sign of the number, 0 for +, 1 for -; the next seven places record the value of p written as a binary integer; the last 24 or 56 places give the value of the coefficient q. A method of generating random numbers similar to that of the author can easily be developed for use with this arrangement. — Trans.
After an initial number G₀, usually G₀ = 1, is chosen, each new number Gₖ₊₁ is obtained from Gₖ in three operations:

(1) Gₖ is multiplied by a large constant, usually 10¹⁷.
(2) The machine representation of the product 10¹⁷·Gₖ is displaced seven places to the left, so that the first seven places of the product disappear, and zeros appear in places 36 to 42.
(3) The absolute value of the resulting number is taken; this becomes Gₖ₊₁.

This process will produce more than 80,000 random numbers Gₖ before the sequence becomes periodic and the numbers begin to repeat. Various tests on the first 50,000 numbers give completely satisfactory results. These numbers have been used in solving a wide variety of problems.
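The Strela recipe (multiply by a large constant, discard the leading places, keep the rest) is a machine-specific relative of the linear congruential generators that later became standard. The sketch below is not the author's algorithm but a modern stand-in with the same virtues, using the well-known Park-Miller constants:

```python
def lcg(seed, count, a=48271, m=2**31 - 1):
    """Linear congruential generator: x_{k+1} = a*x_k mod m, with
    G_k = x_k / m approximately uniform in (0, 1)."""
    values = []
    x = seed
    for _ in range(count):
        x = (a * x) % m
        values.append(x / m)
    return values

print(lcg(seed=1, count=3))
```

Like the Strela sequence, this one is fast, occupies almost no memory, is exactly reproducible from its seed, and is periodic (here with period m - 1).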
The advantages of the pseudo-random number method are quite evident. First, to obtain each number requires only a few simple operations, so that the speed of generation of random numbers is of the same order as the computer's work speed. Second, the program occupies very little space in the memory. Third, the sequence of Gₖ can be easily reproduced. Finally, it is only necessary to verify the "quality" of such a series once; after that, it can be used many times for calculations in suitable problems without fear of error.

The single disadvantage of this method is the limited supply of pseudo-random numbers which it gives. However, there are ways to obtain still more of them. In particular, it is possible to change the initial number G₀.

The overwhelming majority of computations currently performed by the Monte Carlo method use pseudo-random numbers.
4

Transformations of Random Variables
The necessity of simulating different random variables arises in solving various problems. In the early stages of the use of the Monte Carlo method, some experimenters tried to construct a special roulette wheel for finding the values of each random variable. For example, in order to find values of a random variable with the distribution

    ( x₁     x₂     x₃      x₄    )
    ( 0.5    0.25   0.125   0.125 )     (4.1)

one would use the wheel illustrated in figure 4.1, which operates in the same way as the wheel in figure 3.1, but which has unequal divisions, proportional to the pᵢ.

Fig. 4.1

However, this turns out to be completely unnecessary. Values for any random variable can be obtained by transformations on the values of one "standard" random variable. Usually this role is played by G, the random variable uniformly distributed over the interval [0, 1]. We already know how to get the values of G.

The process of finding the value of some random variable X by transforming one or more values of G, we will call the construction of X.
4.1. Constructing a Discrete Random Variable
Assume that we want to obtain values of a random variable X with the distribution

    X = ( x₁    x₂    ...   xₙ )
        ( p₁    p₂    ...   pₙ ).
Let us examine the interval 0 ≤ y ≤ 1 and break it up into n intervals with lengths of p₁, p₂, ..., pₙ. The coordinates of the points of division will obviously be y₁ = p₁, y₂ = p₁ + p₂, y₃ = p₁ + p₂ + p₃, ..., yₙ₋₁ = p₁ + p₂ + ... + pₙ₋₁. We number the intervals 1, 2, ..., n (fig. 4.2):

Fig. 4.2
Each time we need to "perform an experiment" and to select a value of X, we shall choose a value of G and find the point y = G. If this point lies in the interval numbered i, we will consider that X = xᵢ (for this trial).

It is easy to demonstrate the validity of such a procedure. Since the random variable G is uniformly distributed over [0, 1], the probability that G is in any interval is equal to the length of that interval. That is,

    P(0 ≤ G < p₁) = p₁,
    P(p₁ ≤ G < p₁ + p₂) = p₂,
    . . . . . . . . . . . . . . .
    P(p₁ + p₂ + ... + pₙ₋₁ ≤ G ≤ 1) = pₙ.

According to our procedure, X = xᵢ whenever

    p₁ + p₂ + ... + pᵢ₋₁ ≤ G < p₁ + p₂ + ... + pᵢ,

and the probability of this event is pᵢ.

Of course, in a computer we can get along without figure 4.2. Let us assume that the numbers x₁, x₂, ..., xₙ have been placed in successive storage locations in the memory, and likewise the probabilities p₁, p₁ + p₂, p₁ + p₂ + p₃, ..., 1. A flow chart of the subroutine for the construction of X is provided in figure 4.3.
Example. To construct ten values of a random variable T with the distribution

    T = ( 3      4    )
        ( 0.58   0.42 ).
Fig. 4.3. Flow chart of the subroutine: find G; test whether G < pᵢ; if yes, let X = xᵢ and restore the addresses of p₁ and x₁; if no, add one to the address of the memory locations from which pᵢ and xᵢ are to be taken, and repeat the test.
For the values of G we take pairs of digits from table A in the Appendix, multiplied by 0.01.¹ Thus, G = 0.86; 0.51; 0.59; 0.07; 0.95; 0.66; 0.15; 0.56; 0.64; 0.34.

Clearly, under our procedure the values of G less than 0.58 correspond to the value T = 3, and the values of G greater than 0.58 to the value T = 4. Thus, we obtain the values T = 4; 3; 4; 3; 4; 4; 3; 3; 4; 3.

Note that the order of the values x₁, x₂, ..., xₙ in the distribution X is arbitrary, although it should remain fixed throughout the construction.
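The flow chart of figure 4.3 translates directly into code. A minimal Python sketch, replayed on the example above (the function name is ours):

```python
def construct_discrete(values, probs, g):
    """Return x_i when g falls in the i-th subinterval of [0, 1),
    the subintervals having lengths p_1, p_2, ..., p_n."""
    cumulative = 0.0
    for x, p in zip(values, probs):
        cumulative += p
        if g < cumulative:
            return x
    return values[-1]   # guard against rounding error when g is near 1

gs = [0.86, 0.51, 0.59, 0.07, 0.95, 0.66, 0.15, 0.56, 0.64, 0.34]
print([construct_discrete([3, 4], [0.58, 0.42], g) for g in gs])
# [4, 3, 4, 3, 4, 4, 3, 3, 4, 3]
```

The strict inequality g < cumulative puts the boundary case G = 0.58 with T = 4, as the footnote to this example prescribes.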
4.2. The Construction of Continuous Random Variables
Now let us assume that we need to get values of a random variable X which is distributed over the interval [a, b] with density p(x).

1. Since in this example the pᵢ are given to two decimal places, it suffices to take the values of G to two decimal places. In an approximation of this sort, where the case G = 0.58 is possible, it should be included with the case G > 0.58 (for the value G = 0.00 is possible, but not the value G = 1.00). When more decimal places for G are used, the case of the equality G = pᵢ is improbable, and it can be included in either of the inequalities.
We shall prove that values of X are given by the equation

    ∫_a^X p(x) dx = G;     (4.2)

that is, taking each value of G in turn, we must solve equation (4.2) and find the corresponding value of X.

For the proof let us examine the function (fig. 4.4)

    y(x) = ∫_a^x p(x) dx.

From the general properties of density, (2.15) and (2.16), it follows that

    y(a) = 0,   y(b) = 1,

and, taking the derivative,

    y′(x) = p(x) ≥ 0.

This means that the function y(x) increases monotonically from 0 to 1. Furthermore, almost any line y = G, where 0 ≤ G ≤ 1, intersects the curve y = y(x) in one and only one point, the abscissa of which we take as X. If we agree to take, for values of G lying on "flat spots" of the curve, the value of X corresponding to one of the endpoints of the flat spot, then equation (4.2) will always have one and only one solution.
Fig. 4.4     Fig. 4.5
Now we take an arbitrary interval (a′, b′) contained in [a, b]. The points of this interval,

    a′ < x < b′,
correspond to those ordinates of the curve y = y(x) which satisfy the inequality

    y(a′) < y < y(b′),

or to possible "flat spots" with ordinates y(a′) and y(b′). Since the derivative y′(x) = p(x) is zero everywhere on these "flat spots," they contribute nothing to the probability P(a′ < X < b′), and therefore (fig. 4.5),

    P(a′ < X < b′) = P(y(a′) < G < y(b′)).

Since G is evenly distributed over (0, 1),

    P(y(a′) < G < y(b′)) = y(b′) - y(a′) = ∫_{a′}^{b′} p(x) dx.

Therefore,

    P(a′ < X < b′) = ∫_{a′}^{b′} p(x) dx,

and this means exactly that the random variable X, which is a root of equation (4.2), has the probability density p(x).
Example. The random variable H is said to be uniformly distributed over the interval [a, b] if its density is constant in this interval:

    p(x) = 1/(b - a)   for all a ≤ x ≤ b.

In order to construct the values of H, we set up equation (4.2):

    ∫_a^H dx/(b - a) = G.

The integral is easily computed:

    (H - a)/(b - a) = G.

Hence, we obtain an explicit formula for H:

    H = a + G(b - a).     (4.3)

Other examples of the application of formula (4.2) will be given in sections 5.2 and 8.3.
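When equation (4.2) has no explicit solution, it can still be solved numerically. A rough Python sketch (a Riemann-sum inversion; the function name and step count are ours), checked against formula (4.3):

```python
def construct_by_inversion(density, a, b, g, steps=100000):
    """Solve equation (4.2): find X such that the integral of the
    density from a to X equals g, by accumulating a Riemann sum."""
    dx = (b - a) / steps
    total = 0.0
    x = a
    for _ in range(steps):
        total += density(x + dx / 2) * dx   # midpoint rule
        x += dx
        if total >= g:
            break
    return x

# Uniform density on [2, 5]: the answer should match H = a + G(b - a).
a, b, g = 2.0, 5.0, 0.25
print(construct_by_inversion(lambda x: 1 / (b - a), a, b, g))  # close to 2.75
```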
4.3. Von Neumann's Method for the Construction of Continuous Random Variables
It can prove exceedingly difficult to solve equation (4.2) for X; for example, when the integral of p(x) cannot be expressed in terms of elementary functions, or when the density p(x) is given graphically.

Let us suppose that the random variable X is defined over a finite interval (a, b) and that its density is bounded (fig. 4.6):

    p(x) ≤ M₀.

Fig. 4.6

The value of X can be constructed in the following way:

(1) We take two values G′ and G″ of the random variable G and locate the random point (H′, H″) with coordinates

    H′ = a + G′(b - a),
    H″ = G″M₀.

(2) If this point lies under the curve y = p(x), then we set X = H′; if it lies above the curve, we reject the pair (G′, G″) and select a new pair of values.

The justification for this method is presented in section 9.1.
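This rejection procedure is short enough to sketch directly. A Python version under the section's assumptions (finite interval, bounded density); the function name and the test density are ours:

```python
import random

def construct_by_rejection(density, a, b, m0, next_g=random.random):
    """Von Neumann rejection: throw the point (H', H'') into the box
    [a, b] x [0, M0] and accept H' when the point lies under the curve."""
    while True:
        h1 = a + next_g() * (b - a)   # H' = a + G'(b - a)
        h2 = next_g() * m0            # H'' = G'' * M0
        if h2 <= density(h1):
            return h1                 # accepted: X = H'

# Density p(x) = 2x on [0, 1], bounded by M0 = 2; here E(X) = 2/3.
random.seed(3)
samples = [construct_by_rejection(lambda x: 2 * x, 0.0, 1.0, 2.0)
           for _ in range(20000)]
print(sum(samples) / len(samples))   # close to 2/3
```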
4.4. On Constructing Normal Variables
There are many other ways of constructing the various random variables. We shall not deal with all of them here; they are usually not used unless the methods of sections 4.2 and 4.3 prove ineffective.

Specifically, this happens in the case of a normal variable Z, since the equation

    ∫_{-∞}^{Z} (1/√(2π)) e^(-x²/2) dx = G

is not explicitly solvable, and the interval containing the possible values of Z is infinite.

In table B in the Appendix, values, already constructed, are given for a normal random variable Z with mathematical expectation E(Z) = 0
and variance Var(Z) = 1. It is not hard to prove² that the random variable

    Z′ = a + σZ     (4.4)

will also be normal and, moreover, it follows from (2.10) and (2.11) that

    E(Z′) = a,   Var(Z′) = σ².

Thus, formula (4.4), with the help of table B, will allow us to construct values of any normal variable.
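Formula (4.4) is a one-liner in code. A sketch using Python's built-in standard normal generator in place of table B:

```python
import random

def construct_normal(a, sigma, next_z):
    """Formula (4.4): Z' = a + sigma*Z has mean a and variance sigma**2
    whenever Z is a standard normal variable."""
    return a + sigma * next_z()

random.seed(4)
zs = [construct_normal(0.5, 0.2, lambda: random.gauss(0.0, 1.0))
      for _ in range(50000)]
mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / (len(zs) - 1)
print(mean, var)   # close to 0.5 and 0.04
```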
4.5. More About the Example from Section 1.1
Now it is possible to explain how the random points in figures 1.1 and 1.2 were selected. In figure 1.1 the points were chosen with the coordinates

    x = G′,   y = G″.

The values of G′ and G″ were computed from groups of five digits from table A: x₁ = 0.86515, y₁ = 0.90795; x₂ = 0.66155, y₂ = 0.66434; and so on.

It can be proved³ that since the abscissas and the ordinates of these points are independent, the probability of a point hitting any region within the square is equal to the area of the region. Stated differently, this means that the points are uniformly distributed over the square.
In figure 1.2 the points were plotted with the coordinates

    x = 0.5 + 0.2Z′,   y = 0.5 + 0.2Z″,

where the values of Z′ and Z″ were taken successively from table B:

    x₁ = 0.5 + 0.2·0.2005,   y₁ = 0.5 + 0.2·1.1922;
    x₂ = 0.5 + 0.2·(-1.0077), ....

One of the points, falling outside the square, was discarded. From formula (4.4) it follows that the abscissas and ordinates of these points are normal random variables with means a = 0.5 and variances σ² = 0.04.

2. Proof is given in section 9.2.
3. Proof is given in section 9.3.
Examples of the Application of the Monte Carlo Method
5

Simulating a Mass-Supply System
5.1. Description of the Problem

Let us examine one of the simplest mass-supply systems. Consider a system like the check-out section of a supermarket, consisting of n lines (or channels, or distribution points), each of which can "wait on customers." Demands come into the system, the moments of their entrances being random. Each demand starts on line number 1. If this line is free at time Tₖ, when the kth demand enters the system, it will begin to supply the demand, a process lasting a time t. If line 1 is busy at the instant Tₖ, the demand is instantly transferred to line 2, and so on. Finally, if all n lines are busy at the instant Tₖ, the system is said to overflow.

Our problem is to determine how many demands (on the average) the system satisfies in an interval of time T and how many times it will overflow.
Problems of this type are encountered constantly in the research of market organizations, and not only those providing everyday services. In some very special cases it is possible to find an analytical solution; but in complex situations like those we shall describe later, the Monte Carlo method turns out to be the only possible method of calculation.
5.2. The Simple Demand Flow

The first question which comes up in our examination of this system is: What is the form of the flow of incoming demands? This question is usually answered by observations of the system, or of similar systems, over long periods of time. From the study of demand flows under various conditions we can select some frequently encountered cases.

The simple, or Poisson, demand flow occurs when the interval of time S between two consecutive demands is a random variable, distributed over the interval [0, ∞) with density

    p(x) = a·e^(-ax).     (5.1)

Formula (5.1) is also called the exponential distribution (see fig. 5.1, where the densities (5.1) are constructed for a = 1 and a = 2).

Fig. 5.1

It is easy to compute the mathematical expectation of S:

    E(S) = ∫_0^∞ x·p(x) dx = ∫_0^∞ x·a·e^(-ax) dx.

After integrating by parts (u = x, dv = a·e^(-ax) dx), we obtain

    E(S) = 1/a.

The parameter a is called the demand flow density.
The formula for constructing S is easily obtained from equation (4.2), which in the present case is written:

    ∫_0^S a·e^(-ax) dx = G.

Computing the integral on the left, we get the relation

    1 - e^(-aS) = G,

and, hence,

    S = -(1/a)·ln(1 - G).

The variable 1 - G has exactly the same distribution as G, and so, instead of this last formula, one can use the formula

    S = -(1/a)·ln G.     (5.2)
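Formula (5.2) gives a one-line construction of the Poisson-flow gaps. A Python sketch (the function name is ours):

```python
import math
import random

def next_interval(a, next_g=random.random):
    """Formula (5.2): S = -(1/a) ln G, the gap between consecutive
    demands in a simple flow of density a."""
    # 1 - G has the same distribution as G, and using it avoids log(0)
    # when the generator happens to return exactly 0.
    return -math.log(1.0 - next_g()) / a

random.seed(5)
a = 2.0
gaps = [next_interval(a) for _ in range(100000)]
print(sum(gaps) / len(gaps))   # close to E(S) = 1/a = 0.5
```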
5.3. The Plan for the Computation
Let us look at the operation of the system of section 5.1 in the case of the simple demand flow.

To each line we assign a storage location in the memory of a computer, in which we register the moment when that line becomes free. Let us designate the time at which the ith line will next become free by tᵢ. At the beginning of the calculation we set the time when the first demand enters the system, T₁, equal to zero. One can see that at this point all the tᵢ are equal to 0: all the lines are free. The calculation ends at time T₁ + T.

The first demand enters line 1. This means that for the period t this line will be busy. Therefore, we should substitute for t₁ the new value (t₁)_new = T₁ + t, add one to the counter of demands met, and return to examine the second demand.
Let us assume that k demands have already been examined. It is necessary, then, to select the time for the entrance of the (k + 1)th demand. For this, we take the next value of G and compute the next value of S (namely Sₖ) by formula (5.2). Then we compute the entrance time

    Tₖ₊₁ = Tₖ + Sₖ.

Is the first line free at this time? To establish this it is necessary to verify the condition

    t₁ ≤ Tₖ₊₁.     (5.3)

If this condition is met, it means that at time Tₖ₊₁ the line is free and can attend to the demand; we therefore replace t₁ by Tₖ₊₁ + t, add one to the counter, and return for the next demand.

If condition (5.3) is not met, it means that at time Tₖ₊₁ the first line is busy. Then we test whether the second line is free:

    t₂ ≤ Tₖ₊₁ ?     (5.4)

If condition (5.4) is met, we replace t₂ by Tₖ₊₁ + t, add one to the counter, and go on to the next demand.

If condition (5.4) is not met either, we proceed to a test of the condition t₃ ≤ Tₖ₊₁, and so on. It can happen that for all i from 1 to n,

    tᵢ > Tₖ₊₁;
Fig. 5.2. Flow chart of the program: set t₁ = ... = tₙ = 0 and k = 1; compute Tₖ₊₁ = Tₖ + Sₖ; test the lines in turn; add 1 to the demands-filled counter or to the overflow counter; at the end of the program, send the results to output.
that is, that at time Tₖ₊₁ all the lines are busy. In this case we add one to the overflow counter and go on to examine the next demand.

Each time Tₖ₊₁ is computed, it is necessary to test the condition for the termination of the experiment:

    Tₖ₊₁ > T₁ + T.

When this condition is satisfied, the trial comes to an end. On the counters are the number of demands successfully met (m_d) and the number of overflows (m_o).

Let this experiment be repeated N times. Then the results of the trials are averaged:

    E(m_d) ≈ (1/N) Σ_{j=1}^{N} (m_d)ⱼ,
    E(m_o) ≈ (1/N) Σ_{j=1}^{N} (m_o)ⱼ,

where (m_d)ⱼ and (m_o)ⱼ are the values of m_d and m_o obtained on the jth trial.

In figure 5.2 a flow chart of the program which performs this calculation is given. (If the values of m_d and m_o for single trials are desired, they can be printed out in the square marked "end of trial.")
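The whole plan of this section fits in a short program. A Python sketch (variable names ours; Python's generator plays the role of the sequence of values of G):

```python
import math
import random

def one_trial(n_lines, a, t_serve, t_total, rng):
    """One realization of the section 5.3 plan; returns (m_d, m_o)."""
    free_at = [0.0] * n_lines        # t_i: the moment line i next becomes free
    met = overflows = 0
    t_now = 0.0                      # T_1 = 0: the first demand arrives at once
    while t_now <= t_total:          # termination test: stop when T_{k+1} > T_1 + T
        for i in range(n_lines):
            if free_at[i] <= t_now:  # conditions (5.3), (5.4), ...: line i free?
                free_at[i] = t_now + t_serve
                met += 1
                break
        else:
            overflows += 1           # all n lines busy: an overflow
        t_now += -math.log(1.0 - rng.random()) / a   # next entrance, formula (5.2)
    return met, overflows

rng = random.Random(6)
N = 200
trials = [one_trial(n_lines=3, a=1.0, t_serve=2.0, t_total=100.0, rng=rng)
          for _ in range(N)]
print(sum(m for m, _ in trials) / N,   # estimate of E(m_d)
      sum(o for _, o in trials) / N)   # estimate of E(m_o)
```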
5.4. More Complex Problems

It is easy to see that we can use this method to compute results for more complex systems. For example, the value t, rather than being fixed, can be different for the various lines (this would correspond to different equipment or to varying qualifications of the service staff), or it can be a random variable whose distribution differs for the various lines. The plan for the calculation remains roughly the same. The only change is that a new value of t is generated for each demand, and the generation for each line is independent of that for the others.

One can also examine so-called waiting-time systems, which do not overflow immediately. The demand is stored for a period t′ (its waiting time in the system), and, if any line becomes available during that time, it attends to that demand.

Systems can also be considered in which the next demand is taken on by the line which will first become available. It is possible to allow for random variations in the density of the demand flow over time, for a random repair time on each line, and many other possibilities.
Of course, such simulations are not done effortlessly. In order to obtain results of any practical value, one must choose a sound model, and this requires extremely careful study of the actual demand flows, time-study observations of the work at the various distribution points, and so on.

In order to study any system of this type, one must know the probabilistic principles of the functioning of the various parts of the system. Then the Monte Carlo method permits the computation of the probabilistic principles of the entire system, however complex it may be.

Such methods of calculation are extremely helpful in planning enterprises. Instead of a costly (and sometimes impossible) real experiment, we can conduct experiments on a computer, trying different methods of job organization and of equipment usage.
6

Calculating the Quality and Reliability of Products
6.1. The Plan for Calculating Quality

Let us examine a product S made up of (perhaps many) elements. For example, S may be a piece of electrical equipment, made of resistors (R₍ₖ₎), capacitors (C₍ₖ₎), tubes, and the like. We define the quality of the product as the value of some output parameter U, which can be computed from the parameters of all the elements:

    U = f(R₍₁₎, R₍₂₎, ...; C₍₁₎, C₍₂₎, ...; ...).     (6.1)

If, for example, U is the voltage in an operating section of an electric circuit, then by Ohm's law it is possible to construct equations for the circuit and, solving them, to find U.
In reality the parameters of the elements of a mechanism are never exactly equal to their indicated values. For example, the resistor illustrated in figure 6.1 (22 kilohms, tolerance ±5%) can test out anywhere between 20.9 and 23.1 kilohms.

Fig. 6.1
The question arises: What effect do the deviations of the parameters of all these elements have on the value of U?

One can try to compute the limits of the value U, taking the "worst" values of the parameters of each element. However, it is not always clear which values will be the worst. Furthermore, if the number of elements is large, the limits thus computed will be highly overstated: it is quite unlikely that all the parameters will be simultaneously at their worst.

Therefore, it is more reasonable to calculate the parameters of all the elements and the value of U itself by the Monte Carlo method, and to try to estimate its mathematical expectation E(U) and variance
Var(U). E(U) will be the mean value of U over the whole lot of products, and Var(U) will show how much deviation of U from E(U) will be encountered in practice.

Recall (see section 2.2) that, in general,

    E(U) = E(f(R₍₁₎, ...; C₍₁₎, ...)) ≠ f(E(R₍₁₎), ...; E(C₍₁₎), ...).

It is practically impossible to compute analytically the distribution of U for any function f which is at all complex. Sometimes this can be done experimentally, by looking at a large lot of finished products. But even this is not always possible, certainly not in the design stage.

Let us try to apply our method. To do so, we shall need to know: (a) the probabilistic characteristics of all the elements, and (b) the function f (more exactly, a way to compute the value of U from any fixed values R₍₁₎, R₍₂₎, ...; C₍₁₎, C₍₂₎, ...; ...).
The probability distribution of the parameters of each single element can be obtained experimentally by examining a large lot of such elements. Quite often the distribution is found to be normal. Therefore, many experimenters proceed in the following way: they consider the resistance of the element pictured in figure 6.1 to be a normal random variable Q with mathematical expectation E(Q) = 22 and with 3σ = 1.1 (remember that, according to (2.20), it is rare to get a value of Q deviating from E(Q) by more than 3σ on any one trial).
The plan for the calculation is quite simple: for each element a value of its parameter is constructed; then the value of U is computed according to formula (6.1). Repeating the trial N times, and obtaining values U₁, U₂, ..., U_N, we can compute the approximations

    E(U) ≈ (1/N) Σ_{j=1}^{N} Uⱼ,

    Var(U) ≈ (1/(N - 1)) [ Σ_{j=1}^{N} Uⱼ² - (1/N) ( Σ_{j=1}^{N} Uⱼ )² ].

For large N in the latter formula one can replace the factor 1/(N - 1) by 1/N, and then this formula is a simple consequence of formulas (2.8) and (2.9). In statistics it has been shown that for small N it is better to keep the factor 1/(N - 1).
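The section 6.1 plan in code, on a hypothetical product (the two-resistor voltage divider U = E₀·R₂/(R₁ + R₂) is our invented example, not a circuit from the book):

```python
import random

def estimate_quality(f, samplers, n_trials, rng):
    """Monte Carlo estimates of E(U) and Var(U) for U = f(parameters),
    using the formulas of section 6.1."""
    us = [f(*(s(rng) for s in samplers)) for _ in range(n_trials)]
    mean = sum(us) / n_trials
    var = (sum(u * u for u in us) - n_trials * mean * mean) / (n_trials - 1)
    return mean, var

def resistor(nominal):
    """A resistance as a normal variable with 3*sigma = 5% of nominal."""
    return lambda rng: rng.gauss(nominal, 0.05 * nominal / 3)

rng = random.Random(7)
mean, var = estimate_quality(lambda r1, r2: 10.0 * r2 / (r1 + r2),
                             [resistor(22000.0), resistor(22000.0)],
                             n_trials=20000, rng=rng)
print(mean, var)   # mean close to 5.0 volts, variance small
```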
6.2. Examples of the Calculation of Reliability

Suppose we want to estimate how long, on the average, a product will function properly, assuming that we know the relevant characteristics of each of its components.
If we consider the breakdown time of each component t₍ₖ₎ to be a constant, then computing the breakdown time t of the product presents no difficulties. For example, for the product schematically represented in figure 6.2, in which the breakdown of one component implies the breakdown of the entire product,

    t = min(t₍₁₎; t₍₂₎; t₍₃₎; t₍₄₎).     (6.2)

Fig. 6.2

And for the product schematically represented in figure 6.3, where one of the elements is duplicated, or redundant,

    t = min(t₍₁₎; t₍₂₎; max(t₍₃₎; t₍₄₎)),     (6.3)

since if element 3 fails, for example, the product will continue to work with the single element 4.
In actual practice the breakdown time of any component k of a mechanism takes the form of a random variable F₍ₖ₎. When we say that a light bulb is good for 1,000 hours, we only mean that this is the average value E(F) of the variable F. Everyone knows that one bulb may burn out sooner than another one like it.

Fig. 6.3

If the density distribution of F₍ₖ₎ is known for each of the components of the product, E(F) can be computed by the Monte Carlo method, following the plan of section 6.1. That is, for each component it is possible to construct a value of the variable F₍ₖ₎; let us call it f₍ₖ₎. Then it is possible to compute a value f of the random variable F, representing the breakdown time of the entire product, by a formula corresponding to (6.2) or (6.3). Repeating this experiment enough times (N), we can obtain the approximation

    E(F) ≈ (1/N) Σ_{j=1}^{N} fⱼ,

where fⱼ is the value of f obtained on the jth trial.
It must' be notcd that the question of the distributions of break-
down times for thc various clements is not at all a simple one. For
long-lived clements, ,H.:tlIal experiments to determinc the distributions

arc diflkult to pcrform, sirKc one must wait until cnough of the clements
have broken down.
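As a sketch of this plan, the following Python fragment simulates the product of figure 6.3. The exponential lifetime distributions and the mean values are assumptions for illustration; the book does not prescribe them.

```python
import random

def product_lifetime(t1, t2, t3, t4):
    """Breakdown time for the figure-6.3 layout: elements 3 and 4 are redundant."""
    return min(t1, t2, max(t3, t4))

# Deterministic check: element 3 fails at 4, but element 4 lasts until 6,
# so the product runs until element 2 fails at 3.
t = product_lifetime(5.0, 3.0, 4.0, 6.0)   # t = 3.0

# Monte Carlo estimate of E(F), assuming exponential lifetimes (illustrative).
random.seed(1)
MEANS = (500.0, 800.0, 1000.0, 1000.0)     # assumed mean lifetimes, in hours
N = 10_000
total = 0.0
for _ in range(N):
    draws = [random.expovariate(1.0 / m) for m in MEANS]
    total += product_lifetime(*draws)
mean_lifetime = total / N
```

Any other lifetime distributions can be substituted for `expovariate` without changing the rest of the plan.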
Examples of the Application of the Monte Carlo Method
6.3. Further Possibilities of the Method
The preceding examples show that the procedure for calculating the quality of products being designed is quite simple in theory. We must know the probabilistic characteristics of all the components of the product, and we must succeed in computing the variable in which we are interested as a function of the parameters of these components. Then we can allow for the randomness of the parameters by means of our simulation.

From the simulation it is possible to obtain much more useful information than just the mean and the variance of the variable that interests us. Suppose that we have obtained a large number of values U_1, ..., U_N of the random variable U. From these values we can construct the approximate density distribution of U. In the most general cases this is a rather difficult statistical question. Let us limit ourselves, then, to a concrete example.
Suppose that we have, all together, 120 values U_1, U_2, ..., U_120 of the random variable U, all of them contained in the interval

1 < U_j < 6.5.

Let us break this interval into eleven (or any number which is neither too large nor too small) equal intervals of length Δx = 0.5 and count how many values of U_j fall in each interval. The results are given in figure 6.4.

Fig. 6.4

The frequency of hits in any interval yields the proportion of hits in that interval out of N = 120. In our example the frequencies are: 0.017; 0; 0.008; 0.12; 0.20; 0.27; 0.14; 0.16; 0.06; 0.008; 0.017.
On each of the intervals of the partition, let us construct a rectangle with area equal to the frequency of values of U_j falling in that interval (fig. 6.5). In other words, the height of each rectangle will be equal to the frequency divided by Δx. The resulting graph is called a histogram.

The histogram serves as an approximation to the unknown density of the random variable U. Therefore, for example, the area of the histogram bounded by x = 2.5 and x = 5.5 gives us an approximate value for the probability

P(2.5 < U < 5.5) ≈ 0.95.
On the basis of the above calculation (the trial), it is possible to estimate that there is a probability of 0.95 that a value of U will fall in the interval 2.5 < U < 5.5.

In figure 6.5 the density of a normal random variable Z' with the parameters a = 3.85, σ = 0.88 has been constructed¹ as a comparison.

Fig. 6.5

If we now compute the probability that Z' falls within the interval 2.5 < Z' < 5.5 for this fitted density, we get the similar value² 0.91.

1. The numbers a = 3.85 and σ = 0.88 were obtained by considering a random variable with the distribution

values:        x_1    x_2  x_3    ...  x_11
probabilities: 0.017  0    0.008  ...  0.017

where each x_k is the value of the midpoint of the kth interval (thus, x_1 = 1.25, x_2 = 1.75, and so on), and then calculating the expectation a and the variance σ² of such a random variable by formulas (2.3) and (2.9). This process is called fitting a normal density to the frequency distribution.
2. Here is the method used to compute this value. In accordance with (2.14), we write

P(2.5 < Z' < 5.5) = 1/(σ√(2π)) ∫ from 2.5 to 5.5 of exp[−(x − a)²/(2σ²)] dx.

In the integral we make a substitution for the variable, (x − a)/σ = t. Then we obtain

P(2.5 < Z' < 5.5) = 1/√(2π) ∫ from t_1 to t_2 of exp(−t²/2) dt,

where t_1 = (2.5 − a)/σ = −1.54 and t_2 = (5.5 − a)/σ = 1.88. The latter integral can be evaluated with the help of tables of the so-called probability integral, in which are given the values for x ≥ 0 of the function

Φ(x) = 1/√(2π) ∫ from −∞ to x of exp(−t²/2) dt.

We obtain

P(2.5 < Z' < 5.5) ≈ Φ(1.88) + Φ(1.54) − 1 = 0.91,

using the identity Φ(x) + Φ(−x) = 1, which can easily be verified by looking at the graph of the normal density

p(x) = 1/√(2π) exp(−x²/2).

6.4. A Remark
It is unfortunate that calculations of this type are not at present performed more commonly. It is difficult to say why this is. Most likely it is because designers and planners are not aware of the possibility.

Moreover, before using the method to simulate any product, one must find out the probabilistic characteristics of all the components that go into it. This is no small task. But it is also true that, knowing these characteristics, one can evaluate the quality of any product made of these components. It is even possible to find the variation in quality when certain components are replaced by others.

The probabilistic characteristics of the elements will always be a prominent obstacle for those who make such calculations. Nonetheless, one might hope that in the near future such calculations will become more usual.
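A minimal Python sketch of the fitting procedure of section 6.3, using the frequencies listed in the text. The standard normal distribution function replaces the printed tables; note that with these rounded frequencies the fitted σ comes out near 0.87 rather than the book's 0.88.

```python
import math

# Frequencies and interval midpoints from section 6.3 (N = 120, interval 0.5).
freqs = [0.017, 0.0, 0.008, 0.12, 0.20, 0.27, 0.14, 0.16, 0.06, 0.008, 0.017]
mids = [1.25 + 0.5 * k for k in range(11)]

# Fit a normal density: a = sum f_k x_k,  sigma^2 = sum f_k x_k^2 - a^2.
a = sum(f * x for f, x in zip(freqs, mids))
sigma = math.sqrt(sum(f * x * x for f, x in zip(freqs, mids)) - a * a)

def phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# P(2.5 < Z' < 5.5) for the fitted normal variable Z'.
p = phi((5.5 - a) / sigma) - phi((2.5 - a) / sigma)   # close to 0.91
```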
7  Simulating the Penetration of Neutrons through a Block
The laws of probability, as they apply to interactions of single elementary particles (neutrons, photons, and others) with matter, are well known. Usually it is necessary to find out the macroscopic characteristics of these processes, those in which an enormous number of such particles participate: density, current flow, and so on. This situation is similar to the one we met in chapters 5 and 6, and it, too, can be handled by the use of the Monte Carlo method.

Most frequently, perhaps, the Monte Carlo method is used in the study of the physics of neutrons. We shall examine an elementary variant of the problem of the penetration of neutrons through a block.
7.1. A Formulation of the Problem
Let a stream of neutrons with energy E_0 fall at an angle of 90° on a homogeneous block of infinite extent but of finite depth h. In collisions with atoms of the matter of which the block is composed, neutrons can be deflected elastically or absorbed. Let us assume, for simplicity, that the energy of a neutron does not change when it is deflected, and that a neutron will "rebound" off an atom in any direction with equal probability. This is approximately the case for matter composed of heavy atoms. The histories of several neutrons are portrayed in figure 7.1: neutron (a) penetrated the block, neutron (b) is absorbed, neutron (c) is reflected from the block.
We are required to compute the probability p⁺ of a neutron penetrating the block, the probability p⁻ of a neutron being reflected from the block, and the probability p⁰ of a neutron being absorbed by the block.

Fig. 7.1

The interaction of neutrons with matter is characterized, in the case under consideration, by two constants Σ_c and Σ_s, respectively called the absorption cross-section and the dispersion cross-section. The subscripts c and s are the initial letters of the words "capture" and "scattering." The sum of these cross-sections is called the total cross-section

Σ = Σ_c + Σ_s.

The physical significance of the cross-sections is this: in a collision of a neutron with an atom of matter, the probability of absorption is equal to Σ_c/Σ, and the probability of deflection is Σ_s/Σ.

The free path length L of a neutron (that is, the distance between consecutive collisions) is a random variable. We shall assume that it can take any positive value with the probability density

p(x) = Σ e^(−Σx).

This density of the variable L coincides with the density (5.1) of the random variable S for the simple demand flow. By analogy with section 5.2 we can immediately write the formula for the mean free path,

E(L) = 1/Σ,

and the formula for constructing L:

L = −(1/Σ) ln G.

There remains to be clarified the question of how to select the random direction of the neutron after the collision. Since the situation is symmetric with respect to the x-axis, the direction can be specified by the single angle φ formed by the final direction of the velocity of the neutron and the x-axis. It can be proved¹ that the requirement of equal probabilities in each direction is in this case equivalent to the requirement that the cosine of this angle, M = cos φ, be uniformly distributed over the interval [−1, 1]. From formula (4.3), letting a = −1, b = 1, the formula for M follows:

M = 2G − 1.

1. Proof is given in section 9.4.
7.2. A Plan for the Calculation by Means of the Simulation of Real Trajectories

Let us assume that a neutron underwent its kth deflection inside the block at the point x_k, and afterwards began to move in the direction M_k. Let us construct the free-path length

λ_k = −(1/Σ) ln G,

and compute the abscissa of the next collision (fig. 7.2)

x_(k+1) = x_k + λ_k M_k.

Fig. 7.2

We check to see if the condition for penetrating the block has been met:

x_(k+1) > h.

If it has, the calculation of the neutron's trajectory stops, and a 1 is added to the counter for penetrated particles. Otherwise we test the condition for reflection:

x_(k+1) < 0.

If this condition is met, the calculation of the neutron's trajectory stops, and a 1 is added to the counter for reflected particles. If this condition also fails, that is, if 0 ≤ x_(k+1) ≤ h, it means that the neutron has undergone its (k + 1)th collision within the block, and it is necessary to construct the effect of this collision on the neutron.

In accordance with the method of section 4.1, we take the next value of G and test the condition for absorption:

G < Σ_c/Σ.

If this last inequality holds, then the calculation of the neutron's trajectory stops, and a 1 is added to the counter for absorbed particles. If not, we consider that the neutron has undergone a deflection at the point x_(k+1). Then we generate a new direction of movement

M_(k+1) = 2G − 1,

and repeat the cycle once more (using new values of G, of course).

All the G are written without subscripts, since each value of G is used only once. Up to three values of G are needed to calculate each leg of the trajectory. The initial values for every trajectory are:

x_0 = 0,  M_0 = 1.

After N trajectories have been computed, it is found that N⁺ neutrons have gone through the block, N⁻ have been reflected from it, and N⁰ have been absorbed. Obviously, the desired probabilities are approximately equal to the ratios

p⁺ ≈ N⁺/N,  p⁻ ≈ N⁻/N,  p⁰ ≈ N⁰/N.

In figure 7.3 a flow chart of the program for this problem is shown. The subscript j is the number of the trajectory, and the subscript k is the collision number along the trajectory.

This computational procedure, although it is very natural, is not perfect. In particular, it is difficult to determine the probabilities p⁺ and p⁻ by this method when they are very small. This is precisely the case one encounters in calculating protection against radiation. However, by more sophisticated applications of the Monte Carlo method, even these computations are possible. We will briefly consider one of the simplest variants of calculation with the help of so-called "weights."
7.3. A Plan for the Calculation Using Weights to Avoid Terminal Absorption

Let us reexamine the problem of neutron penetration. Let us assume that a "package," consisting of a large number w_0 of individual neutrons, is traveling along a single trajectory. For a collision at the point x_1, the average number of neutrons in the package which would be absorbed is w_0 Σ_c/Σ, and the number of neutrons undergoing deflection would be, on the average, w_0 Σ_s/Σ.
Fig. 7.3
Fig. 7.4
In our program, after each collision, we therefore add the value w_0 Σ_c/Σ to the absorbed-particle counter, and watch the motion of the deflected package, assuming that the entire remainder of the package is deflected in a single direction.

All the formulas for the calculation given in section 7.2 remain the same. For each collision the number of neutrons in the package is simply reduced:

w_(k+1) = w_k Σ_s/Σ,

since that part of the package comprising w_k Σ_c/Σ neutrons will be absorbed. Now the trajectory cannot be ended by absorption.

The value w_k is usually called the weight of the neutron and, instead of talking about a "package" consisting of w_k neutrons, one speaks of a neutron with weight w_k. The initial weight w_0 is usually set equal to 1. This does not conflict with our notion of a "large package," since all the w_k obtained while computing a trajectory contain w_0 as a common factor.

A flow chart of the program which realizes this calculation is given in figure 7.4. It is no more complex than the flow chart in figure 7.3. It is possible to prove,² however, that calculating p⁺ by this method is always more efficient than using the method of section 7.2.
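A sketch of the weight method; note that each trajectory now ends only by penetration or reflection, while the absorbed weight is accumulated collision by collision. The cross-sections and depth are the same illustrative assumptions as before.

```python
import math
import random

def simulate_weights(n_traj, h, sigma_c, sigma_s, rng):
    """Weight method of section 7.3: absorption only reduces the weight."""
    sigma = sigma_c + sigma_s
    sum_plus = sum_minus = sum_zero = 0.0
    for _ in range(n_traj):
        x, m, w = 0.0, 1.0, 1.0                    # x0 = 0, M0 = 1, w0 = 1
        while True:
            x += -math.log(rng.random()) / sigma * m
            if x > h:
                sum_plus += w                      # remaining package penetrates
                break
            if x < 0.0:
                sum_minus += w                     # remaining package is reflected
                break
            sum_zero += w * sigma_c / sigma        # absorbed part of the package
            w *= sigma_s / sigma                   # the rest is deflected
            m = 2.0 * rng.random() - 1.0
    return sum_plus / n_traj, sum_minus / n_traj, sum_zero / n_traj

p_plus, p_minus, p_zero = simulate_weights(20_000, 1.5, 0.3, 0.7, random.Random(4))
```

Because the absorbed parts of each package telescope, the three estimates sum to 1 exactly (up to rounding), trajectory by trajectory.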
"7.4. A Remark
..
There are a great many other ways10 do the calculation, using various
, weights, ,.but we cannot .stpp to consider them here. We stress !
that the Monte Carlo rnetpod enables one to 'solve many complex
problems about elementary particles:' T.he 'cancorisist of
any substanCe and can have any geometrical of the
particles can, if we sl' desire. be changed with each c6Hision. It is
possible by this technique fo simulate many other nuclear processes.
For example, can construct a model for the fissioning of an atom
and the formation of new neutrons by coHlsion with a neutron, and thus
'simulate the conditions for the initiation and maintenance of a chain
reaction. Problems related to this were. in fact, among the first serious
applications of the Monte Carlo method to SCientific problems.,.
, .
2. P,oof ,is given in section 9.5.
8  Evaluating a Definite Integral
The problems examined in chapters 5, 6, and 7 were probabilistic by nature, and to use the Monte Carlo method to solve them seemed quite natural. Here a purely mathematical problem is considered: the approximate evaluation of a definite integral.

Since evaluating a definite integral is equivalent to finding an area, we could use the method of section 1.2. In this chapter, however, we shall present a more effective method, which allows us to construct several probabilistic models for solving the problem by the Monte Carlo method. We shall finally indicate how to choose the best from among all these models.
8.1. The Method of Computation
Let us examine a function g(x), defined on the interval a ≤ x ≤ b. Our assignment is to compute approximately the integral

I = ∫ from a to b of g(x) dx.   (8.1)

We select an arbitrary density distribution p_V(x), also defined on the interval [a, b] (that is, a function p_V(x) satisfying conditions (2.15) and (2.16)).

Finally, besides the random variable V, defined on the interval [a, b] with density p_V(x), we need the random variable

H = g(V)/p_V(V),

for which

E(H) = ∫ from a to b of [g(x)/p_V(x)] p_V(x) dx = I.

Now let us look at N identical random variables H_1, H_2, ..., H_N, and apply the central limit theorem of section 2.4 to their sum. In this case formula (2.21) is written

P( |(1/N)(H_1 + ... + H_N) − I| < 3√(Var(H)/N) ) ≈ 0.997.   (8.2)

This last relation means that if we choose N values V_1, V_2, ..., V_N, then for sufficiently large N,

(1/N) Σ_j g(V_j)/p_V(V_j) ≈ I.   (8.3)

It also shows that there is a very large probability that the error of approximation in (8.3) will not exceed 3√(Var(H)/N).
8.2. How to Choose a Plan for the Calculation
We saw that to compute the integral (8.1), we could use any random variable V defined over the interval [a, b]. In any case,

E(H) = E(g(V)/p_V(V)) = I.

However, the variance and, hence, the estimate of the error of formula (8.3) are dependent on what variable V we use, since

Var(H) = E(H²) − I² = ∫ from a to b of [g²(x)/p_V(x)] dx − I².

It can be shown¹ that this expression is minimized when p_V(x) is proportional to |g(x)|.

Of course, we certainly do not want to choose a very complex p_V(x), since the procedure for constructing values of V then becomes very laborious. But it is possible to use g(x) as a guide in choosing p_V(x) (for an example, see section 8.3).

In practice, one-dimensional integrals of the form (8.1) are not computed by the Monte Carlo method; the quadrature formulas provide a more precise technique. But in the transition to multidimensional integrals the situation changes: the quadrature formulas become very complex, while the Monte Carlo method remains practically unchanged.

1. Proof is given in section 9.6.
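As a sketch, formula (8.3) together with the error bound of section 8.1 can be packaged as a single routine; it is checked here on the integral of x over [0, 1], whose exact value is 1/2.

```python
import math
import random

def mc_integral(g, sample_v, pdf_v, n, rng):
    """Formula (8.3) plus the 3*sqrt(Var(H)/N) error bound of section 8.1."""
    h_vals = [g(v) / pdf_v(v) for v in (sample_v(rng) for _ in range(n))]
    mean = sum(h_vals) / n
    var = (sum(h * h for h in h_vals) - n * mean * mean) / (n - 1)
    return mean, 3.0 * math.sqrt(var / n)

# Check on the integral of x over [0, 1]:
est, bound = mc_integral(lambda x: x,
                         lambda rng: rng.random(),   # uniform V on [0, 1]
                         lambda x: 1.0,              # its density
                         20_000, random.Random(5))
```

Any density can be passed in through `sample_v` and `pdf_v`, which is exactly the freedom of choice this section discusses.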
8.3. A Numerical Example
Let us approximately compute the integral

I = ∫ from 0 to π/2 of sin x dx.

The exact value of this integral is known:

∫ from 0 to π/2 of sin x dx = [−cos x] from 0 to π/2 = 1.

We shall use two different random variables V for the calculation: one with constant density 2/π (that is, a uniform distribution over the interval [0, π/2]), and one with linear density p_V(x) = 8x/π². Both these densities, together with the function being integrated, are shown in figure 8.1. It is evident that the linear density most closely fulfills the recommendation in section 8.2, that it is desirable for p_V(x) to be proportional to sin x. Therefore, one may expect that it will yield the better result.

Fig. 8.1

(a) Let p_V(x) = 2/π on the interval [0, π/2]. The formula for constructing V can be obtained from formula (4.3) for a = 0 and b = π/2:

V = πG/2.

Now formula (8.3) takes the form

I ≈ (π/2N) Σ_j sin V_j.
Let N = 10. As values of G let us use groups of three digits from table A (multiplied by 0.001). The intermediate results are collected in table 8.1.

Table 8.1

j        1      2      3      4      5      6      7      8      9      10
G_j      0.865  0.159  0.079  0.566  0.155  0.664  0.345  0.655  0.812  0.332
V_j      1.359  0.250  0.124  0.889  0.243  1.043  0.542  1.029  1.275  0.521
sin V_j  0.978  0.247  0.124  0.776  0.241  0.864  0.516  0.857  0.957  0.498

The final result of the computation is:

I ≈ 0.952.
(b) Now let p_V(x) = 8x/π². For the construction of V let us use equation (4.2):

∫ from 0 to V of (8x/π²) dx = G.

After some simple calculations, we obtain

V = (π/2)√G.

Formula (8.3) takes on the form

I ≈ (π²/8N) Σ_j (sin V_j)/V_j.

Let N = 10. We take the same numbers for G as in (a). The intermediate results are in table 8.2.

Table 8.2

j              1      2      3      4      5      6      7      8      9      10
G_j            0.865  0.159  0.079  0.566  0.155  0.664  0.345  0.655  0.812  0.332
V_j            1.461  0.626  0.442  1.182  0.618  1.280  0.923  1.271  1.415  0.905
(sin V_j)/V_j  0.680  0.936  0.968  0.783  0.937  0.748  0.863  0.751  0.698  0.868

The result of the calculation is:

I ≈ 1.016.

As we anticipated, the second method gave the more accurate result.
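Both calculations can be reproduced exactly from the ten values of G (a sketch, not code from the book):

```python
import math

# The ten values of G used in section 8.3:
G = [0.865, 0.159, 0.079, 0.566, 0.155, 0.664, 0.345, 0.655, 0.812, 0.332]
N = len(G)

# Method (a): V = pi*G/2, I ~ (pi/2N) * sum of sin V_j.
est_a = (math.pi / (2 * N)) * sum(math.sin(math.pi * g / 2) for g in G)

# Method (b): V = (pi/2)*sqrt(G), I ~ (pi^2/8N) * sum of sin(V_j)/V_j.
V = [(math.pi / 2) * math.sqrt(g) for g in G]
est_b = (math.pi ** 2 / (8 * N)) * sum(math.sin(v) / v for v in V)
# est_a is about 0.952 and est_b about 1.016, against the exact value I = 1.
```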
8.4. On the Estimation of Error
In section 8.1 it was noted that the absolute value of the error in calculating an integral I practically cannot exceed the value 3√(Var(H)/N). In practice, however, the error as a rule turns out to be noticeably less than this value. Therefore, as a characteristic of the error another value is often used in practice, the probable error

δ_p = 0.675 √(Var(H)/N).

Table 8.3

Method   Var(H)   δ_p     δ_c
(a)      0.256    0.108   0.048
(b)      0.016    0.027   0.016

The actual absolute error depends on the particular random numbers used in the calculation and can prove to be two or three times as large as δ_p, or several times smaller. Thus δ_p gives us not the upper limit of the error, but rather its order of magnitude. In fact, δ_p is very nearly the value for which a deviation larger than δ_p and a deviation smaller than δ_p are equally likely. To see this, recall that we are approximating I by

R = (1/N)(H_1 + ... + H_N).

By the central limit theorem of section 2.4, R is approximately a normal random variable with mathematical expectation I and standard deviation σ = √(Var(H)/N). But for any normal random variable Z it is not hard to calculate that, whatever a and σ may be,

∫ from a − 0.675σ to a + 0.675σ of p_Z(x) dx ≈ 0.5,

whence

P(|Z − a| < 0.675σ) ≈ 0.5 ≈ P(|Z − a| > 0.675σ);

that is, deviations from the expected value larger and smaller than the probable error 0.675σ are equally probable.

Let us return to the example in section 8.3. From the values given in tables 8.1 and 8.2, one can approximate the variance Var(H) for both methods of computation.² The formula suitable for the calculation was given in section 6.1.

The approximate values of the variance Var(H), the probable errors δ_p calculated from them, and the true absolute errors δ_c obtained from the calculation are shown in table 8.3 for both methods of calculation. We see that δ_c really is of the same order as δ_p.

2. For method (a):

Var(H) ≈ (π²/36) [Σ sin² V_j − (1/10)(Σ sin V_j)²] = (π²/36)(4.604 − 3.670) ≈ 0.256.

For method (b):

Var(H) ≈ (π⁴/576) [Σ (sin V_j/V_j)² − (1/10)(Σ sin V_j/V_j)²] = (π⁴/576)(6.875 − 6.777) ≈ 0.016.
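The entries of table 8.3 for Var(H) and δ_p can be recomputed from the same ten values of G (a sketch; the tiny differences from the printed table are rounding):

```python
import math

G = [0.865, 0.159, 0.079, 0.566, 0.155, 0.664, 0.345, 0.655, 0.812, 0.332]

def var_and_probable_error(h_vals):
    """Var(H) by the 1/(N-1) formula of section 6.1 and dp = 0.675*sqrt(Var/N)."""
    n = len(h_vals)
    s = sum(h_vals)
    s2 = sum(h * h for h in h_vals)
    var = (s2 - s * s / n) / (n - 1)
    return var, 0.675 * math.sqrt(var / n)

h_a = [(math.pi / 2) * math.sin(math.pi * g / 2) for g in G]
h_b = [(math.pi ** 2 / 8) * math.sin((math.pi / 2) * math.sqrt(g))
       / ((math.pi / 2) * math.sqrt(g)) for g in G]

var_a, dp_a = var_and_probable_error(h_a)   # about 0.256 and 0.108
var_b, dp_b = var_and_probable_error(h_b)   # about 0.017 and 0.028
```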
9  Proofs of Certain Propositions
In this chapter demonstrations are given for some assertions made in the preceding chapters. We have gathered them together because they seemed to us somewhat cumbersome for a popular presentation or presupposed knowledge of probability theory.

9.1. The Justification of the Selection Method of Modeling a Random Variable (Section 4.3)

Fig. 9.1

The random point (H', H'') is uniformly distributed over the rectangle abcd (fig. 9.1), the area of which is equal to M_0(b − a).¹ The probability that the point (H', H'') is under the curve y = p(x) and will not be discarded is equal to the ratio of the areas

[∫ from a to b of p(x) dx] / [M_0(b − a)] = 1 / [M_0(b − a)].

But the probability that the point is under the curve y = p(x) in the interval a' < x < b' is similarly equal to the ratio of the areas

[∫ from a' to b' of p(x) dx] / [M_0(b − a)].

Consequently, among all the values of X that are not discarded, the proportion of values which fall in the interval (a', b') is equal to the quotient

{[∫ from a' to b' of p(x) dx] / [M_0(b − a)]} ÷ {1 / [M_0(b − a)]} = ∫ from a' to b' of p(x) dx,

which is what we wanted to show.

1. Compare section 9.3.
9.2. The Density Distribution of the Variable Z' = a + σZ (Section 4.4)
It is assumed that the variable Z is normal, with mathematical expectation E(Z) = 0 and variance Var(Z) = 1, so that its density is

p_Z(x) = (1/√(2π)) exp(−x²/2).

In order to compute the density distribution of the variable Z', let us choose two arbitrary numbers x_1 < x_2 and compute the probability

P(x_1 < Z' < x_2) = P(x_1 < a + σZ < x_2) = P((x_1 − a)/σ < Z < (x_2 − a)/σ).

Consequently,

P(x_1 < Z' < x_2) = (1/√(2π)) ∫ from (x_1 − a)/σ to (x_2 − a)/σ of exp(−x²/2) dx.

We simplify this last integral by substituting the variable x' = a + σx. We get

P(x_1 < Z' < x_2) = (1/(σ√(2π))) ∫ from x_1 to x_2 of exp(−(x' − a)²/(2σ²)) dx',

whence follows (compare (2.14)) the normality of the variable Z' with parameters E(Z') = a, Var(Z') = σ².

9.3. The Uniform Distribution of Points in a Square (Section 4.5)
Since the coordinates of the point (G', G'') are independent, the density p(x, y) is equal to the product of the densities:²

p(x, y) = p_G'(x) p_G''(y).

Each of these densities is identically equal to 1. This means that p(x, y) = 1 (for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1) and, consequently, that the point (G', G'') is uniformly distributed in the unit square.

2. This is, in fact, the formal definition of the independence of the random variables G' and G''.

9.4. The Choice of a Random Direction (Section 7.1)
Let us agree to specify a direction by means of a unit vector starting at the origin. The heads of such vectors form the surface of the unit sphere. Now, the words "any direction is equally probable" mean that the head of the vector is a random point Ω, uniformly distributed over the surface of the sphere: the probability of Ω lying in any part dS of the surface is equal to dS/4π.

Fig. 9.2

Let us choose on the surface of the sphere spherical coordinates (φ, ψ) (fig. 9.2). Then

dS = sin φ dφ dψ,   (9.1)

where 0 ≤ φ ≤ π and 0 ≤ ψ < 2π.

Since the coordinates φ and ψ are independent, the density of the point (φ, ψ) is equal to the product p(φ, ψ) = p_φ(φ) p_ψ(ψ). From this equation, relation (9.1), and the equation

p(φ, ψ) dφ dψ = dS/4π,

it follows that

p_φ(φ) p_ψ(ψ) = (sin φ)/(4π).   (9.2)

Let us integrate this expression with respect to ψ from 0 to 2π. Taking into account the normalizing condition

∫ from 0 to 2π of p_ψ(ψ) dψ = 1,

we obtain

p_φ(φ) = (sin φ)/2.   (9.3)

Dividing (9.2) by (9.3), we find that

p_ψ(ψ) = 1/(2π).   (9.4)

Obviously, ψ is uniformly distributed over the interval [0, 2π], and the formula for the construction of ψ will be written thus:

ψ = 2πG.   (9.5)

We find the formula for the construction of φ with the help of equation (4.2):

(1/2) ∫ from 0 to φ of sin x dx = G,

whence

cos φ = 1 − 2G.   (9.6)

Formulas (9.5) and (9.6) allow one to select (to construct) a random direction. The values of G in these formulas should, of course, be different.

Formula (9.6) differs from the last formula of section 7.1 only in that G appears in it rather than 1 − G, but these variables have identical distributions.
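Formulas (9.5) and (9.6) give the following construction of a random unit vector (a sketch; here the polar angle φ is measured from the z-axis, which plays the role of the book's x-axis):

```python
import math
import random

def random_direction(rng):
    """Unit vector with uniformly distributed direction, via (9.5) and (9.6)."""
    psi = 2.0 * math.pi * rng.random()     # formula (9.5)
    cos_phi = 1.0 - 2.0 * rng.random()     # formula (9.6)
    sin_phi = math.sqrt(1.0 - cos_phi * cos_phi)
    return (sin_phi * math.cos(psi), sin_phi * math.sin(psi), cos_phi)

rng = random.Random(6)
dirs = [random_direction(rng) for _ in range(50_000)]
norms = [math.sqrt(x * x + y * y + z * z) for x, y, z in dirs]
mean_z = sum(z for _, _, z in dirs) / len(dirs)   # near 0 by symmetry
```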
"
9.5. The Superiority of the Method of Weights (Section 7.3)
Let us introduce the random variables N and N', equal to the number (weight) of neutrons which passed through the block, obtained by calculating one trajectory by the method of section 7.2 and one by the method of section 7.3, respectively.

We know that

E(N) = E(N') = p⁺.

Since N can take on only two values, 0 and 1, the distribution of N is given by the table

values:        1    0
probabilities: p⁺   1 − p⁺

Taking into account that N² = N, it is not hard to calculate that

Var(N) = p⁺ − (p⁺)².

It is easy to see that the variable N' can take on an infinite number of values: w_0 = 1, w_1 = w_0(Σ_s/Σ), w_2 = w_0(Σ_s/Σ)², ..., w_k, ..., and also the value 0 (if the package is reflected from the block instead of passing through). Therefore, its distribution is given by the table

values:        w_0  w_1  w_2  ...  0
probabilities: q_0  q_1  q_2  ...  q

The values q_k need not interest us, since in any case one can write the formula for the variance

Var(N') = Σ over k of w_k² q_k − (p⁺)².

Noticing that all the w_k ≤ 1 and that Σ over k of w_k q_k = E(N') = p⁺, we get the inequality

Var(N') ≤ Σ over k of w_k q_k − (p⁺)² = p⁺ − (p⁺)² = Var(N).

This fact, that the variance of N' never exceeds the variance of N, shows that the method of section 7.3 is always better for calculating p⁺ than the method of section 7.2.

The same argument applies to the calculation of p⁻ and, if the absorption is not too great, to the calculation of p⁰ also.

9.6. The Best Choice for V (Section 8.2)
In section 8.2 we obtained an expression for the variance Var(H). In order to find the minimum of this expression over all possible choices of p_V(x), we make use of an inequality well known in analysis:

[∫ from a to b of |u(x)v(x)| dx]² ≤ ∫ from a to b of u²(x) dx · ∫ from a to b of v²(x) dx.

We set u = g(x)/√(p_V(x)) and v = √(p_V(x)); then from this inequality we obtain

[∫ from a to b of |g(x)| dx]² ≤ ∫ from a to b of [g²(x)/p_V(x)] dx · ∫ from a to b of p_V(x) dx = ∫ from a to b of [g²(x)/p_V(x)] dx.

Thus,

Var(H) ≥ [∫ from a to b of |g(x)| dx]² − I².   (9.7)

It remains to be shown that this lower bound is reached when p_V(x) is proportional to |g(x)|. Let

p_V(x) = |g(x)| / ∫ from a to b of |g(x)| dx.   (9.8)

It is not hard to compute that for this density,

∫ from a to b of [g²(x)/p_V(x)] dx = [∫ from a to b of |g(x)| dx]²,

and the variance Var(H) is really equal to the right side of (9.7).

Let us observe that to use the "best" density (9.8) in the calculation is, in practice, impossible. To get it, it is necessary to know the value of the integral ∫ from a to b of |g(x)| dx. But the evaluation of this last integral presents a problem just as difficult as the one we are trying to solve: the evaluation of the integral ∫ from a to b of g(x) dx. Therefore, we restricted ourselves to the recommendation stated in section 8.2.
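The conclusion of this section can be checked numerically for the example of section 8.3, g(x) = sin x on [0, π/2]: computing Var(H) = ∫ g²/p dx − I² for the uniform density, the linear density, and the "best" density (9.8) shows the variance shrinking to zero (a sketch; the Riemann-sum grid is an arbitrary choice).

```python
import math

def variance_of_H(g, p, a, b, n=20_000):
    """Var(H) = integral of g^2/p minus I^2, by a midpoint Riemann sum."""
    dx = (b - a) / n
    xs = [a + (k + 0.5) * dx for k in range(n)]
    i = sum(g(x) for x in xs) * dx
    second = sum(g(x) ** 2 / p(x) for x in xs) * dx
    return second - i * i

a, b = 0.0, math.pi / 2
g = math.sin
var_uniform = variance_of_H(g, lambda x: 2 / math.pi, a, b)           # about 0.234
var_linear = variance_of_H(g, lambda x: 8 * x / math.pi ** 2, a, b)   # about 0.016
var_best = variance_of_H(g, math.sin, a, b)                           # essentially 0
```

Since g ≥ 0 here, the density (9.8) is sin x itself, and the variance it yields vanishes.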

APPENDIX

Tables

Table A. 400 Random Digits¹

86515 90795 66155 66434 56558 12332 94377 57892
69186 03393 42502 99224 88955 53758 91641 18867
41686 42163 85181 33181 72664 53807 00607
86522 47171 88059 89342 67248 09082 12311 90316
72587 93000 89688 78416 27589 99528 14480 50961
52452 42499 33346 83935 79130 90410 45420 77757
76773 97526 27256 66447 25731 37525 16287 66181
04825 82134 80117 75120 45904 75601 70492 10274
57113 84718 45863 24520 19976 04925 07824 76044
84754 57616 38132 64294 15218 49286 42903

Table B. 88 Normal Values²

 0.2005   1.1922  -0.0077   0.0348   1.0423  -1.8149   1.1803   0.0033
 1.1609  -0.6690  -1.5893   0.5816   1.8818   0.7390  -0.2736   1.0828
 0.5864  -0.9245   0.0904   1.5068  -1.1147   0.2776   0.1012  -1.3566
 0.1425   1.2809   0.4043   0.6379  -0.4428  -2.3006  -0.6446
 0.9516  -1.7708   2.8854   0.4686   1.4664   1.6852  -0.9690  -0.0831
-0.5863   0.8574  -0.5557   0.8115  -0.2676  -1.2496  -1.2125   1.3840
 1.1572   0.9990   0.5405  -0.6022   0.0093   0.2119  -1.4647
-0.4428  -0.5564  -0.5098  -1.1929  -0.0572  -0.5061  -0.1557  -1.2384
-0.3924   1.7981   0.6141  -1.3596   1.4943  -0.4406  -0.2033  -0.1316
 0.8319   0.4270  -0.8888   0.4167  -0.8513   1.1054   1.2237  -0.7003
 0.9780  -0.7679   0.8960   0.5154  -0.7165   0.8563  -1.1630   1.8800

1. Random digits imitate values of a random variable with distribution (3.1) (see section 3.1).
2. Normal values imitate values of a normal random variable Z with parameters a = 0, σ = 1.

Bibliography

For further study the reader is referred to the following books. Extensive bibliographies are to be found in the first two listings.

Buslenko, N. P.; Golenko, D. I.; Sobol', I. M.; Sragovich, V. G.; and Shreider, Yu. A. The Monte Carlo Method. Translated by G. J. Tee. New York: Pergamon Press, 1966. The same work was also published as The Method of Statistical Testing (New York: Elsevier Publishing Co., 1964).

Hammersley, J. M., and Handscomb, D. C. Monte Carlo Methods. London: Methuen and Co., 1964.

Spanier, Jerome, and Gelbard, Ely M. Monte Carlo Principles and Neutron Transport Problems. Reading, Mass.: Addison-Wesley Publishing Co., 1969.

Some material dealing with particular matters discussed in chapters 3 and 4 may be found in Monte Carlo Method (the proceedings of a symposium), U.S. National Bureau of Standards Applied Mathematics Series, no. 12, 1951.
Popular Lectures in Mathematics
Izaak Wirszup, Editor

The Popular Lectures in Mathematics series, translated and adapted from the Russian, makes available to English-speaking teachers and students some of the best mathematical literature of the Soviet Union. The lectures are intended to introduce various aspects of mathematical thought and to engage the student in mathematical activity which will foster independent work. Some of the lectures provide an elementary introduction to certain nonelementary topics, and others present mathematical concepts and ideas in greater depth. The books contain many ingenious problems with hints, solutions, and answers. A significant feature is the authors' use of new approaches to make complex mathematical topics accessible to both high school and college students.

The Monte Carlo Method
I. M. Sobol'
Translated and adapted from the second Russian edition by Robert Messer, John Stone, and Peter Fortini

The Monte Carlo method is an approach to solving mathematical and physical problems approximately by the simulation of random quantities. The author's presentation of this rather sophisticated topic is unique in its clarity. Assuming only a basic knowledge of elementary calculus, I. M. Sobol' first reviews the probability theory required to understand the method. Next, he surveys the ways of generating random variables on a computer and the relative merits of random number tables, generators, and pseudo-random numbers. Examples are then given of the use of the Monte Carlo method in quality control, operations research, physics, and numerical analysis.

I. M. SOBOL' is a research mathematician at the Mathematics Institute of the USSR Academy of Sciences.

The University of Chicago Press
Paper ISBN: 0-226-76749-3