Random Number Generation
Summary
For the systems we are interested in, time-driven simulation divides the time axis into steps of length Δt and updates the state x(t) at every step.

[Figure: sample paths of u2(t) and x(t) versus t, sampled every Δt.]

Drawbacks of the time-driven approach:
- Inefficient: at most steps nothing happens.
- Accuracy: the order of events within each interval Δt is lost.
Time Driven vs. Event Driven Simulation Models

Time-driven dynamics:

x(t⁺) = x(t) + 1   if u1(t) = 1, u2(t) = 0
        x(t) − 1   if u1(t) = 0, u2(t) = 1
        x(t)       otherwise

Event-driven dynamics:

f(x, e′) = x + 1   if e′ = a
           x − 1   if e′ = d
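The two update rules above can be sketched side by side (a minimal illustration; the function names are assumptions, while the event labels a/d and the inputs u1, u2 follow the slide):

```python
def event_driven_update(x, e):
    """Event-driven dynamics f(x, e'): one state update per event."""
    if e == 'a':        # arrival event
        return x + 1
    if e == 'd':        # departure event
        return x - 1
    return x

def time_driven_update(x, u1, u2):
    """Time-driven dynamics: evaluated at every time step of length dt."""
    if u1 == 1 and u2 == 0:
        return x + 1
    if u1 == 0 and u2 == 1:
        return x - 1
    return x            # at most steps nothing happens

print(event_driven_update(3, 'a'), time_driven_update(3, 0, 1))  # prints 4 2
```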
Some Probability Results/Review

Discrete case: suppose Pr{X=1} = p1, Pr{X=2} = p2, Pr{X=3} = p3. The cumulative distribution function (CDF) is

F_X(x) = Pr{X ≤ x} = ∑_{i=−∞}^{x} Pr{X = i}

a staircase that rises to p1, p1+p2, and p1+p2+p3 at X = 1, 2, 3, with

∑_{i=−∞}^{∞} Pr{X = i} = 1

Continuous case: for a density f_X(x),

Pr{x1 − ε ≤ X ≤ x1 + ε} = ∫_{x1−ε}^{x1+ε} f_X(x) dx
Independence of Random Variables
Total Probability

Pr{X = x} = ∑_k Pr{X = x | Y = y_k} Pr{Y = y_k}
Expectation and Variance

var[X] = E[(X − E[X])²]

Covariance of two random variables:

cov[X, Y] = E[XY − X·E[Y] − Y·E[X] + E[X]E[Y]] = E[XY] − E[X]E[Y]

The covariance between two independent random variables is 0 (why?). For independent X and Y,

E[XY] = ∑_i ∑_j x_i y_j Pr{X = x_i, Y = y_j}
      = ∑_i x_i Pr{X = x_i} · ∑_j y_j Pr{Y = y_j} = E[X]E[Y]
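An empirical check of this fact (a sketch; the helper name sample_cov and the seed are assumptions): the sample covariance of two independently generated uniform sequences should be close to 0.

```python
import random

def sample_cov(xs, ys):
    """Plain sample covariance: E[XY] - E[X]E[Y] estimated from data."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum(x * y for x, y in zip(xs, ys)) / n - mx * my

random.seed(1)
n = 100_000
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]  # drawn independently of xs
print(sample_cov(xs, ys))  # close to 0
```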
Markov and Chebyshev Inequalities

For X ≥ 0 and a > 0, the Markov inequality gives Pr{X ≥ a} ≤ E[X]/a. Applying it to (X − μ)² yields the Chebyshev inequality

Pr{|X − μ| ≥ kσ} ≤ 1/k²

[Figure: density f_X(x) with the tails beyond μ ± kσ shaded.]

This may be a very conservative bound! Note that for X ~ N(0, σ²) with k = 2, Chebyshev's inequality gives Pr{|X − μ| ≥ 2σ} ≤ 0.25, when in fact the actual value is about 0.046 < 0.05.
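The gap can be checked numerically: the exact two-sided normal tail follows from the complementary error function (the helper name normal_tail is an assumption).

```python
import math

def normal_tail(k):
    """Exact Pr{|X - mu| >= k*sigma} for a normal random variable:
    2*(1 - Phi(k)) = erfc(k / sqrt(2))."""
    return math.erfc(k / math.sqrt(2))

chebyshev_bound = 1 / 2**2          # 0.25 for k = 2
exact = normal_tail(2)              # about 0.0455
print(chebyshev_bound, exact)
```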
The Law of Large Numbers

For i.i.d. random variables X1, …, Xn with mean μ,

lim_{n→∞} (X1 + … + Xn)/n = μ
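A quick demonstration with uniform variates (a sketch; the seed and function name are assumptions): the sample mean settles toward μ = 0.5 as n grows.

```python
import random

random.seed(7)

def sample_mean(n):
    """Average of n independent U(0,1) draws; mean of U(0,1) is 0.5."""
    return sum(random.random() for _ in range(n)) / n

# The sample mean approaches 0.5 as n grows.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```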
The Central Limit Theorem

For a sum of n independent random variables with mean μ, the variance of the sum is

σ² = E[(X − μ)²] = ∑_{i=1}^{n} σ_i²
Let X, Y be independent random variables uniformly distributed in the interval [−1, 1]. The probability that a point (X, Y) falls in the unit circle is given by

Pr{X² + Y² ≤ 1} = Area of circle / Area of square = π/4

[Figure: unit circle inscribed in the square with corners (±1, ±1).]

SOLUTION
Generate N pairs of uniformly distributed random variates (u1, u2) in the interval [0,1).
Transform them to become uniform over the interval [−1,1), using (2u1−1, 2u2−1).
Form the ratio of the number of points that fall in the circle over N; this ratio estimates π/4.
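The three solution steps can be coded directly (a minimal sketch; the function name estimate_pi and the fixed seed are assumptions):

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: 4 times the fraction of points
    (2u1-1, 2u2-1) that fall inside the unit circle."""
    random.seed(seed)
    inside = 0
    for _ in range(n):
        x = 2 * random.random() - 1   # uniform in [-1, 1)
        y = 2 * random.random() - 1
        if x * x + y * y <= 1:
            inside += 1
    return 4 * inside / n

print(estimate_pi(100_000))  # close to 3.14159
```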
Discrete Random Variates

Invert the staircase CDF: generate u = U(0,1) and set

X = x0   if 0 ≤ u < p0
    x1   if p0 ≤ u < p0 + p1
    ⋮
    xj   if ∑_{i=0}^{j−1} pi ≤ u < ∑_{i=0}^{j} pi

[Figure: the uniform variate u on the vertical axis is mapped through the CDF to X = xj.]
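The inversion amounts to a cumulative-sum search (a sketch; the function name discrete_rv is an assumption):

```python
import random

def discrete_rv(values, probs, u=None):
    """Inverse transform for a discrete distribution: return values[j]
    where sum(probs[:j]) <= u < sum(probs[:j+1])."""
    if u is None:
        u = random.random()
    cum = 0.0
    for x, p in zip(values, probs):
        cum += p
        if u < cum:
            return x
    return values[-1]   # guard against floating-point round-off

print(discrete_rv([1, 2, 3], [0.2, 0.5, 0.3], u=0.65))  # prints 2
```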
For the geometric distribution with Pr{X > j} = q^j, find j such that

∑_{i=1}^{j−1} Pr{X = i} ≤ U < ∑_{i=1}^{j} Pr{X = i}

Since ∑_{i=1}^{j−1} Pr{X = i} = 1 − Pr{X > j−1} = 1 − q^{j−1}, this becomes

1 − q^{j−1} ≤ U < 1 − q^j
⇒ q^j < 1 − U ≤ q^{j−1}
⇒ X = min{j : q^j < 1 − U}

As a result, taking logarithms (the inequality flips when dividing by log(q) < 0):

X = min{j : j log(q) < log(1 − U)} = min{j : j > log(1 − U)/log(q)}

so

X = ⌊log(1 − U)/log(q)⌋ + 1
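The closed form gives a one-line generator (a sketch; the function name geometric_rv is an assumption):

```python
import math
import random

def geometric_rv(p, u=None):
    """Geometric variate X in {1, 2, ...} with Pr{X = j} = p * q**(j-1),
    via X = floor(log(1-U) / log(q)) + 1 where q = 1 - p."""
    if u is None:
        u = random.random()
    q = 1 - p
    return math.floor(math.log(1 - u) / math.log(q)) + 1

print(geometric_rv(0.5, u=0.7))  # prints 2
```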
Poisson and Binomial Distributions

For the accept/reject method, the probability of a rejection at any one iteration is

∑_j q_j (1 − p_j/(c·q_j)) = 1 − 1/c

Therefore

Pr{X = x_i} = ∑_{k=1}^{∞} (1 − 1/c)^{k−1} (p_i/c) = p_i
D-RNG-AR Example

y(i) = Y;   % store the i-th accepted variate
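A full accept/reject loop for a discrete target can be sketched as follows (an assumed reconstruction: only the assignment y(i)=Y survives from the original listing, and the uniform proposal is an illustrative choice):

```python
import random

def discrete_ar(p):
    """Accept/reject (D-RNG-AR) for a discrete target distribution p over
    indices 0..n-1, using a uniform proposal q and c = max(p[i]/q)."""
    n = len(p)
    q = 1.0 / n                        # uniform proposal probabilities
    c = max(pi / q for pi in p)
    while True:
        y = random.randrange(n)        # Y drawn from the proposal
        u = random.random()
        if u <= p[y] / (c * q):        # accept with probability p[Y]/(c q)
            return y                   # the accepted variate Y

# Empirical check: observed frequencies should approach the target p.
random.seed(3)
p = [0.2, 0.5, 0.3]
N = 50_000
counts = [0, 0, 0]
for _ in range(N):
    counts[discrete_ar(p)] += 1
print([k / N for k in counts])  # close to [0.2, 0.5, 0.3]
```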
The Composition Approach

Write the target CDF as a mixture F_X(x) = ∑_i p_i G_i(x): select component i with probability p_i, then draw X from G_i(x).
Example: Exponentially Distributed
Random Variable
Suppose we would like to generate a sequence of random
variates having density function

f_X(x) = λ e^{−λx}

Solution
Find the cumulative distribution

F_X(x) = ∫_0^x λ e^{−λy} dy = 1 − e^{−λx}

Setting u = F_X(x) and solving for x gives x = −(1/λ) ln(1 − u). Equivalently, since 1−u is also uniformly distributed in (0,1),

x = −(1/λ) ln(u)
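In code (a sketch; the function name exponential_rv and the seed are assumptions):

```python
import math
import random

def exponential_rv(lam, u=None):
    """Inverse-transform exponential variate: x = -ln(u)/lam."""
    if u is None:
        u = random.random()
    return -math.log(u) / lam

# Sample-mean check: the mean of Exp(lam) is 1/lam.
random.seed(11)
lam = 2.0
n = 100_000
mean = sum(exponential_rv(lam) for _ in range(n)) / n
print(mean)  # close to 0.5
```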
Convolution Techniques and the Erlang
Distribution
Suppose the random variable X is the sum of a number of
independent identically distributed random variables

X = ∑_{i=1}^{n} Y_i

Algorithm C-RNG-Cv:
Generate Y1,…,Yn from the given distribution
X=Y1+Y2+…+Yn.

If each Y_i is exponentially distributed with rate λ, then X has the Erlang density

f_X(x) = λ (λx)^{n−1} e^{−λx} / (n−1)!
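Algorithm C-RNG-Cv for the Erlang case reduces to summing exponential variates (a sketch; the function name erlang_rv is an assumption):

```python
import math
import random

def erlang_rv(n, lam):
    """Erlang(n, lam) variate as the sum of n independent Exp(lam)
    variates (convolution technique, Algorithm C-RNG-Cv)."""
    return sum(-math.log(random.random()) / lam for _ in range(n))

# Sample-mean check: the mean of Erlang(n, lam) is n/lam.
random.seed(5)
n, lam, N = 3, 2.0, 50_000
mean = sum(erlang_rv(n, lam) for _ in range(N)) / N
print(mean)  # close to 1.5
```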
Accept/Reject Method (C-RNG-AR)

Example: standard normal variates from exponential proposals.
1. Generate u1=U(0,1); Y= -log(u1);
2. Generate u2=U(0,1); if u2 > exp(-(Y-1)^2/2), go to 1;
3. Generate u3= U(0,1);
   If u3 < 0.5, X=Y;
   Else X= -Y;
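The three steps above translate directly (a sketch; the function name normal_ar and the seed are assumptions — step 2 accepts Y ~ Exp(1) with probability exp(−(Y−1)²/2), and step 3 attaches a random sign):

```python
import math
import random

def normal_ar():
    """Standard normal variate by accept/reject with Exp(1) proposals
    (C-RNG-AR): accept Y with probability exp(-(Y-1)^2/2), random sign."""
    while True:
        y = -math.log(random.random())              # Y ~ Exp(1)
        if random.random() <= math.exp(-(y - 1) ** 2 / 2):
            return y if random.random() < 0.5 else -y

random.seed(13)
N = 50_000
xs = [normal_ar() for _ in range(N)]
mean = sum(xs) / N
var = sum(x * x for x in xs) / N - mean ** 2
print(mean, var)  # close to 0 and 1
```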
Generating a Non-Homogeneous
Poisson Process
Suppose that the process is non-homogeneous, i.e., the
rate varies with time, with λ(t) ≤ λ for all t<T.
Let ti denote the ith point of the Poisson process, and τ the
actual time; the algorithm for generating the first N points
of the sequence {ti, i=1,2,…,N} is given by

Algorithm Thinning Poisson-λ:
k=0, t(k)=0, tau= 0;
While k<N
  Generate u1=U(0,1); tau= tau - log(u1)/λ;
  Generate u2=U(0,1);
  If u2 ≤ λ(tau)/λ, then k=k+1; t(k)=tau;
Return t.

The interarrival time Si that follows ti has distribution

F_{Si}(s) = Pr{Si ≤ s | t = ti} = 1 − exp(−∫_0^s λ(ti + y) dy)
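A runnable version of the thinning algorithm (a sketch; the function name thinning_nhpp and the example rate are assumptions):

```python
import math
import random

def thinning_nhpp(lam_t, lam_max, N, seed=None):
    """First N event times of a non-homogeneous Poisson process with rate
    lam_t(t) <= lam_max, by thinning a rate-lam_max homogeneous process."""
    if seed is not None:
        random.seed(seed)
    t, points = 0.0, []
    while len(points) < N:
        t += -math.log(random.random()) / lam_max   # candidate arrival
        if random.random() <= lam_t(t) / lam_max:   # keep w.p. lam(t)/lam_max
            points.append(t)
    return points

# Example: lam(t) = 1/(t + 1), bounded above by lam_max = 1.
pts = thinning_nhpp(lambda t: 1.0 / (t + 1.0), 1.0, 5, seed=2)
print(pts)
```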
For example, with λ(t) = 1/(t + a):

F_{Si}(s) = 1 − exp(−log((s + ti + a)/(ti + a))) = 1 − (ti + a)/(s + ti + a) = s/(s + ti + a)

Inverting this yields

F_{Si}^{−1}(u) = (ti + a) u / (1 − u)
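This inverse gives each interarrival time directly, avoiding thinning altogether (a sketch; the function name nhpp_inverse is an assumption):

```python
import random

def nhpp_inverse(a, N, seed=None):
    """Event times of the NHPP with rate lam(t) = 1/(t + a), generating
    each interarrival S_i = (t_i + a) * u / (1 - u) by inverse transform."""
    if seed is not None:
        random.seed(seed)
    t, points = 0.0, []
    for _ in range(N):
        u = random.random()
        t += (t + a) * u / (1 - u)   # S_i = F^{-1}(u)
        points.append(t)
    return points

pts = nhpp_inverse(1.0, 5, seed=4)
print(pts)
```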
Example of Non-Homogeneous
Poisson Processes

Inverse Transform: F_{Si}^{−1}(u) = (ti + a) u / (1 − u)
Algorithm C-RNG-C:
Generate u=U(0,1)
If u<p1, get X from G1(x), return;
If u<p1+p2, get X from G2(x), return;
…
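Algorithm C-RNG-C in code, with a hyperexponential mixture as the worked case (a sketch; the function name composition_rv and the example rates are assumptions):

```python
import math
import random

def composition_rv(ps, gens, u=None):
    """Composition method (C-RNG-C): choose component i with probability
    ps[i], then sample X from the chosen generator gens[i]."""
    if u is None:
        u = random.random()
    cum = 0.0
    for p, gen in zip(ps, gens):
        cum += p
        if u < cum:
            return gen()
    return gens[-1]()   # guard against round-off in sum(ps)

# Hyperexponential example: Exp(1) with prob 0.3, Exp(5) with prob 0.7.
random.seed(6)
exp_rv = lambda rate: -math.log(random.random()) / rate
N = 50_000
mean = sum(composition_rv([0.3, 0.7],
                          [lambda: exp_rv(1.0), lambda: exp_rv(5.0)])
           for _ in range(N)) / N
print(mean)  # close to 0.3/1 + 0.7/5 = 0.44
```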
Polar Method for Generating Normal
Random Variates
Let X and Y be independent normally distributed random
variables with zero mean and variance 1. Then the joint
density function is given by

f_{XY}(x, y) = (1/√(2π)) e^{−x²/2} · (1/√(2π)) e^{−y²/2} = (1/(2π)) e^{−(x²+y²)/2}

Then make a variable change

r = x² + y²,   θ = arctan(y/x)

The new joint density function is

f_{RΘ}(r, θ) = (1/(2π)) · (1/2) e^{−r/2}

so Θ is uniform in the interval [0, 2π] and R is exponential with rate 1/2.
Algorithm C-RNG-N1 (generates 2 independent RVs):
Generate u1=U(0,1), u2=U(0,1);
R= -2*log(u1); W= 2*pi*u2;
X= sqrt(R)*cos(W); Y= sqrt(R)*sin(W);

But, sine and cosine evaluations are inefficient!
Algorithm C-RNG-N2:
1. Generate u1=U(0,1), u2=U(0,1);
2. Set V1= 2*u1-1, V2= 2*u2-1;
3. S=V1*V1+V2*V2;
4. If S > 1, Go to 1
5. R= sqrt(-2*log(S)/S);
6. X= R*V1;
7. Y= R*V2;

[Figure: (V1, V2) uniform in the square with corners (±1, ±1); only points inside the unit circle are accepted.]
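C-RNG-N2 as a runnable function (a sketch; the function name polar_normal and the seed are assumptions — the rejection onto the unit disk is what removes the sin/cos calls):

```python
import math
import random

def polar_normal():
    """Two independent N(0,1) variates by the polar method (C-RNG-N2)."""
    while True:
        v1 = 2 * random.random() - 1
        v2 = 2 * random.random() - 1
        s = v1 * v1 + v2 * v2
        if 0 < s <= 1:                       # accept points in the disk
            r = math.sqrt(-2 * math.log(s) / s)
            return r * v1, r * v2

random.seed(8)
N = 50_000
xs = [polar_normal()[0] for _ in range(N)]
mean = sum(xs) / N
var = sum(x * x for x in xs) / N - mean ** 2
print(mean, var)  # close to 0 and 1
```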
Simulation of Discrete Event Systems

The event-scheduling scheme:
1. INITIALIZE the TIME, the STATE, and the EVENT CALENDAR {(e1, t1), (e2, t2), …}, ordered so that t1 ≤ t2 ≤ ….
2. Update Time: t′ = t1.
3. Update State: x′ = f(x, e1).
4. Delete the processed event and any events that have become infeasible from the calendar.
5. Add the new feasible events, with lifetimes obtained from the CLOCK STRUCTURE through the RNG; re-sort the calendar and go to 2.
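The loop above can be sketched for a concrete system (an illustrative M/M/1 queue, not taken from the slides; the function name simulate_mm1 and all parameters are assumptions — the event calendar is kept as a heap of (time, event) pairs):

```python
import heapq
import math
import random

def simulate_mm1(lam, mu, horizon, seed=0):
    """Minimal event-scheduling loop: M/M/1 queue with arrival rate lam
    and service rate mu. Returns the sampled path [(t, x), ...]."""
    random.seed(seed)
    expo = lambda rate: -math.log(random.random()) / rate
    calendar = [(expo(lam), 'a')]       # schedule the first arrival
    t, x, history = 0.0, 0, []          # x = number in system
    while calendar:
        t, e = heapq.heappop(calendar)  # most imminent event
        if t > horizon:
            break
        if e == 'a':                    # arrival: x' = x + 1
            x += 1
            heapq.heappush(calendar, (t + expo(lam), 'a'))
            if x == 1:                  # server was idle: new departure clock
                heapq.heappush(calendar, (t + expo(mu), 'd'))
        else:                           # departure: x' = x - 1
            x -= 1
            if x > 0:                   # next customer enters service
                heapq.heappush(calendar, (t + expo(mu), 'd'))
        history.append((t, x))
    return history

h = simulate_mm1(1.0, 2.0, horizon=100.0)
print(len(h), h[-1])
```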
Verification of a Simulation Program