
# Simulation 1

## Random Variables Generation

## Random Variable Generation

A random variable is a value sampled from a given distribution. If we take a large number of samples (ideally infinitely many) and plot a histogram, it approaches the original pmf or pdf.

Goal: study methods of generating random variables from a given distribution, assuming a routine that generates uniformly distributed random numbers is available.

Note that the distribution of interest can be discrete or continuous.
Ns-2 and other simulation environments have implemented functions for generating different distributions. But:

- you should understand how they work and what to expect from them
- not all distributions may already be implemented
- you might develop your own simulation program at some point
## Random-Number Generators (RNGs)

An RNG is an algorithm that generates independent, identically distributed draws from the continuous UNIF(0, 1) distribution. In simulation, these draws are called *random numbers*.

Random numbers are the basis for generating observations from all other distributions and random processes: we transform them in a way that depends on the desired distribution or process (later in this lecture).

[Figure: the UNIF(0, 1) density, f(x) = 1 for 0 <= x <= 1.]
## Random Number Generation

It is difficult to construct a truly random number generator. Practical random number generators produce a sequence of deterministically generated numbers that *look* random; a deterministic sequence can never be truly random. This property, though seemingly problematic, is actually desirable because it makes it possible to reproduce the results of an experiment.
## The Nature of Random Number Generators

- Recursive formula (algorithm): do something to the seed to get the next value.
- Repeating with the same seed generates the same sequence.
- The sequence will eventually repeat (cycle); we want a long cycle length.
## Multiplicative Congruential Method

x_{k+1} = a * x_k mod m

The constants a and m should satisfy the following criteria:

- For any initial seed x_0, the resulting sequence should look random.
- For any initial seed, the period (the number of values that can be generated before repetition) should be large.
- To prevent the sequence from collapsing to zero, m must be prime and x_0 cannot be zero.
## Pseudo-Random Number Generation

To fit the above requirements, m should be chosen to be a large prime number. The choice of modulus depends on the arithmetic precision used to implement the algorithm: a signed 32-bit integer can represent values between -2^31 and 2^31 - 1. Fortunately, the quantity 2^31 - 1 = 2147483647 is a prime number.

A good choice of parameters for 32-bit word machines is m = 2^31 - 1 and a = 7^5 = 16807.

The division x_k / m maps the generated values into the interval [0, 1).

### Mixed Congruential Method

x_{k+1} = (a * x_k + c) mod m
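As a concrete sketch (in Python, which is not part of the original slides), the multiplicative congruential generator with these parameters looks as follows; the name `minstd` is just an illustrative label for this classic a = 16807 generator:

```python
# Multiplicative congruential generator: x_{k+1} = a * x_k mod m,
# with m = 2^31 - 1 (a prime) and a = 7^5 = 16807.

M = 2**31 - 1   # 2147483647
A = 7**5        # 16807

def minstd(seed, n):
    """Return n uniform variates in [0, 1)."""
    x = seed
    us = []
    for _ in range(n):
        x = (A * x) % M      # integer recurrence; never hits 0 for seed != 0
        us.append(x / M)     # map the integer state into [0, 1)
    return us

us = minstd(seed=1, n=3)
```

Starting from seed 1, the integer states are 16807, 282475249, ...; dividing by m yields the random numbers in [0, 1).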
## Example of a Toy LCG

Parameters m = 63, a = 22, c = 4, Z_0 = 19:

Z_i = (22 * Z_{i-1} + 4) mod 63, seeded with Z_0 = 19

- The cycling will repeat forever.
- The cycle length is at most m (it can be much smaller than m, depending on the parameters).
- Pick m big; but that alone might not be enough for good statistical properties.
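This toy LCG is small enough to run exhaustively; a sketch (Python, illustrative helper name) that generates states until the seed reappears:

```python
# Toy linear congruential generator: Z_i = (22 * Z_{i-1} + 4) mod 63, Z_0 = 19.

def lcg_cycle(m=63, a=22, c=4, z0=19):
    """Generate states until the seed reappears; return the full cycle."""
    z, cycle = z0, []
    while True:
        z = (a * z + c) % m
        cycle.append(z)
        if z == z0:          # the seed has come around: one full cycle
            return cycle

cycle = lcg_cycle()
print(cycle[:3], len(cycle))   # first states 44, 27, 31; cycle length 63
```

These particular parameters happen to achieve the full period of m = 63 (every residue 0..62 is visited exactly once); other choices can give a much shorter cycle.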
## Quality of the Random Number Generator

The random variate u_k = x_k / m should be uniformly distributed in the interval [0, 1):

- Divide the interval [0, 1) into k subintervals and count the number of variates that fall within each subinterval.
- Run the chi-square test (to be discussed later).

The sequence of random variates should be uncorrelated:

- Define the autocorrelation coefficient with lag k > 0 as

  C_k = 1/(n - k) * sum_{i=1}^{n-k} (u_i - 1/2) * (u_{i+k} - 1/2),

  where u_i is the i-th random variate in the interval [0, 1), and verify that E[C_k] approaches 0 for all k = 1, 2, ...
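The autocorrelation check can be sketched directly from the formula (Python; an illustration, not part of the slides):

```python
# Lag-k autocorrelation estimate for variates u_i in [0, 1):
# C_k = 1/(n-k) * sum_{i=1}^{n-k} (u_i - 1/2) * (u_{i+k} - 1/2)

def autocorr(u, k):
    """Estimate C_k; values near 0 suggest uncorrelated variates."""
    n = len(u)
    s = sum((u[i] - 0.5) * (u[i + k] - 0.5) for i in range(n - k))
    return s / (n - k)

c1 = autocorr([0.0, 1.0, 0.0, 1.0], k=1)   # perfectly alternating input
```

For the alternating input every product is (-0.5) * (0.5) = -0.25, so C_1 = -0.25: a strong negative correlation that a good generator should not exhibit.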
## Random Generators in NS-2

The default random generator is `defaultRNG`. To create a new random generator:

```tcl
set newRNG [new RNG]
$newRNG seed 2
```

The generation of random variables uses a seed:

- seed = 0: a new random-variable sequence is generated each time the simulation is run
- seed != 0: the same sequence of random variables is generated each time the simulation is run
- Different random generators with the same seed generate the same sequences

```tcl
$defaultRNG seed 0
$defaultRNG seed 2
```
## Generating Random Variates

- Have: a desired input distribution for the model (fitted or specified in some way), and an RNG producing UNIF(0, 1) draws.
- Want: to transform UNIF(0, 1) random numbers into draws from the desired input distribution.
- Method: mathematical transformations of the random numbers that deform them to the desired distribution. The specific transform depends on the desired distribution; discrete and continuous distributions are handled separately.
## Generating from Discrete Distributions

Example: a probability mass function taking the values -2, 0, and 3 with probabilities 0.1, 0.5, and 0.4.

- Divide [0, 1] into subintervals of length 0.1, 0.5, and 0.4.
- Generate U ~ UNIF(0, 1).
- See which subinterval U is in.
- Return X = the corresponding value.
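The steps above can be sketched as follows (Python; the values -2, 0, 3 and their probabilities are the slide's example pmf):

```python
# Discrete generation: partition [0, 1] into subintervals whose lengths
# are the probabilities 0.1, 0.5, 0.4 of the values -2, 0, 3.

VALUES = [-2, 0, 3]
PROBS = [0.1, 0.5, 0.4]

def discrete_sample(u):
    """Map a uniform u in [0, 1) to the value whose subinterval contains u."""
    cum = 0.0
    for x, p in zip(VALUES, PROBS):
        cum += p
        if u < cum:          # u landed in this value's subinterval
            return x
    return VALUES[-1]        # guard against floating-point rounding near 1.0

samples = [discrete_sample(u) for u in (0.05, 0.30, 0.90)]   # -> [-2, 0, 3]
```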
## Generating from Discrete Distributions: Another View

Plot the cumulative distribution function; generate U and plot it on the vertical axis; read across and down. This amounts to inverting the CDF and is equivalent to the earlier method.
## Generating from Continuous Distributions: Inverse Transform Method

Example: the EXPO(5) distribution, with

- PDF: f(x) = (1/5) e^{-x/5} for x >= 0
- CDF: F(x) = 1 - e^{-x/5} for x >= 0

General algorithm (see proof later):

1. Generate a random number U ~ UNIF(0, 1).
2. Set U = F(X) and solve for X = F^{-1}(U).

Solving analytically for X may or may not be simple (or even possible); sometimes a numerical approximation is used to solve it.
## Generating from Continuous Distributions (contd.)

Solution for the EXPO(5) case:

- Set U = F(X) = 1 - e^{-X/5}
- e^{-X/5} = 1 - U
- -X/5 = ln(1 - U)
- X = -5 ln(1 - U)

Picture (inverting the CDF, as in the discrete case): more U's will hit F(x) where it is steep. This is where the density f(x) is tallest, and where we want a denser distribution of X's.
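The EXPO(5) inversion derived above, as a runnable sketch (Python):

```python
import math

# Inverse transform for EXPO(5): invert U = F(X) = 1 - exp(-X/5)
# to get X = -5 * ln(1 - U).

def expo5(u):
    """Return the EXPO(5) variate corresponding to u in [0, 1)."""
    return -5.0 * math.log(1.0 - u)

x = expo5(0.5)   # the median of EXPO(5): 5 * ln 2, about 3.466
```

Applying the CDF to the result recovers the original u, which is exactly what the proof on the next slide formalizes.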
## Inverse Transform Method: Proof

Algorithm:

1. Generate a random number U ~ UNIF(0, 1).
2. Set U = F(X) and solve for X = F^{-1}(U).

Proof:

1. We need to show that for any real number x, P(X <= x) = F(x).
2. Since F is invertible, we have

   P(X <= x) = P(F^{-1}(U) <= x) = P(U <= F(x)) = F(x).
## Inverse Transform Method: When is it Appropriate (1/2)

The method is appropriate when a closed-form inverse for the CDF exists. A simple example is the triangular distribution, whose piecewise-quadratic CDF can be inverted in closed form.
## Inverse Transform Method: When is it Appropriate (2/2)

The method is also appropriate when a good approximation for the inverse CDF can be found. Example: there is a simple approximation for the Gaussian inverse CDF with at least one decimal place of accuracy.
## Example: Pareto Distribution

CDF:

F_X(x) = 1 - (k/x)^alpha for x >= k, and F_X(x) = 0 for x < k,

where alpha is the shape parameter and k is the minimum value.

Inverting u = F_X(x) gives the random variable

x = k / (1 - u)^{1/alpha},

where u is drawn from the uniform distribution U(0, 1).

Simplified formula (since 1 - u is also uniform on (0, 1)):

x = k / u^{1/alpha}
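The Pareto inversion as a sketch (Python; `k` and `alpha` as defined above):

```python
# Pareto inverse transform: u = 1 - (k/x)**alpha  =>  x = k / (1 - u)**(1/alpha).

def pareto_sample(u, k, alpha):
    """Return the Pareto(k, alpha) variate corresponding to u in [0, 1)."""
    return k / (1.0 - u) ** (1.0 / alpha)

x = pareto_sample(0.75, k=1.0, alpha=1.0)   # 1 / (1 - 0.75) = 4.0
```

Plugging the result back into the CDF recovers u: 1 - (1/4)^1 = 0.75.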
## Acceptance-Rejection Method

Example: the Gamma(2,1) distribution.

[Figure: PDF f(x) of the Gamma(2,1) distribution on 0 <= x <= 10.]

- A closed form for the CDF F(x) does not exist.
- Instead, use another distribution for which we do know how to calculate the CDF and its inverse.
- We pick a function t(x) that is larger than f(x) for all x; technically, we say that t(x) majorizes f(x).
## Acceptance-Rejection Example

[Figure: Gamma(2,1) PDF f(x) on 0 <= x <= 10, with the majorizing function t(x) = 0.4 drawn as a horizontal line.]

Take t(x) = 0.4 on [0, 10]. Normalizing t (its total area is 0.4 * 10 = 4) gives the proposal PDF r(x) = 0.4/4 = 0.1, with CDF R(x) = 0.1x.
## Acceptance-Rejection Example (contd.)

[Figure: the same plot of f(x) and t(x) = 0.4, with attention on the vertical line x = 3.]

If we threw darts that could land only on the line x = 3, then the probability that a dart lands inside the distribution would be f(X=3) / t(X=3).
## Acceptance-Rejection Method: Algorithm

1. Generate a random value x with uniform density on [0, 10].
2. With probability f(x)/t(x), return x as a random value for f(x). To check against this probability, generate another random variable U that is uniformly distributed between 0 and 1, and accept x if U <= f(x)/t(x).
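The algorithm for the Gamma(2,1) example, sketched in Python (f(x) = x * e^{-x} is the Gamma(2,1) density; the seed and sample count are arbitrary choices for illustration):

```python
import math
import random

# Acceptance-rejection for Gamma(2,1) on [0, 10] with majorizing t(x) = 0.4.
# The Gamma(2,1) pdf is f(x) = x * exp(-x); its maximum f(1) = 1/e ~ 0.368 < 0.4.

T = 0.4

def f(x):
    """Gamma(2,1) probability density."""
    return x * math.exp(-x)

def ar_sample(rng):
    """Draw one (approximately) Gamma(2,1) variate by acceptance-rejection."""
    while True:
        x = rng.uniform(0.0, 10.0)   # step 1: candidate, uniform on [0, 10]
        u = rng.random()             # second uniform for the acceptance check
        if u <= f(x) / T:            # step 2: accept with probability f(x)/t(x)
            return x

rng = random.Random(42)
samples = [ar_sample(rng) for _ in range(5000)]
```

For x = 0.3064 the acceptance threshold f(x)/t(x) is about 0.5638, matching the worked example on the next slide; restricting candidates to [0, 10] truncates only a negligible tail of the distribution.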
## Acceptance-Rejection Method: Example

Two calls to the function to return a random variate for the Gamma(2,1) distribution. In this example, the value 0.3064 would be returned.

| No. | x      | f(X=x) | t(X=x) | Accept if U <= | U ~ U(0,1) | Accept? |
|-----|--------|--------|--------|----------------|------------|---------|
| 1   | 8.1074 | 0.002  | 0.4    | 0.0061         | 0.911      | FALSE   |
| 2   | 0.3064 | 0.226  | 0.4    | 0.5638         | 0.453      | TRUE    |

[Figure: the A-R method with t(x) = 0.4 plotted against a histogram from a known Gamma(2,1) distribution, n = 1000.]
## Acceptance-Rejection Method: Intuitive Explanation

- Where f(Y)/t(Y) is small, most Y's are rejected.
- Where f(Y)/t(Y) is large, most Y's are accepted.
- The concentration of Y's drawn from t(Y) is thereby altered to agree with f(Y).
## Random Variables Examples in NS-2