
Chapter 2

Random Numbers

Kleber Barcia V.

1. Random-number generation
2. Random-variable generation

1. Objectives: Random Numbers

Discuss how random numbers are generated.

Introduce the subsequent tests for randomness:
- Frequency test
- Autocorrelation test

Properties of random numbers

Two important statistical properties:
- Uniformity
- Independence

Each random number Ri must be independently drawn from a uniform distribution with pdf:

f(x) = 1 for 0 <= x <= 1, and f(x) = 0 otherwise

Figure: pdf of the random numbers on [0,1].

For the uniform distribution on [a,b], E(R) = (a + b)/2 and V(R) = (b - a)²/12. With a = 0 and b = 1:

E(R) = ∫₀¹ x dx = [x²/2]₀¹ = 1/2

V(R) = ∫₀¹ x² dx - [E(R)]² = [x³/3]₀¹ - (1/2)² = 1/3 - 1/4 = 1/12

Generation of pseudo-random numbers

"Pseudo", because generating numbers with a known, deterministic method removes the potential for true randomness.

Goal: to produce a sequence of numbers in [0,1] that simulates, or imitates, the ideal properties of random numbers (RN).

Important considerations in RN routines:
- Fast
- Portable to different computers
- Sufficiently long cycle
- Replicable
- Closely approximate the ideal statistical properties of uniformity and independence

Techniques to generate random numbers

- Middle-square algorithm
- Middle-product algorithm
- Constant multiplier algorithm
- Linear algorithm
- Multiplicative congruential algorithm
- Additive congruential algorithm
- Congruential non-linear algorithms:
  - Quadratic
  - Blum, Blum and Shub

Middle-square algorithm
1. Select a seed x0 with D digits (D > 3).
2. Square it: (x0)².
3. Take the D middle digits: this is x1.
4. Square it: (x1)².
5. Take the D middle digits: this is x2.
6. And so on. Then convert each value xi to a number ri between 0 and 1.

If it is not possible to obtain the D digits in the middle, add zeros to the left.

If x0 = 5735, then:

Y0 = (5735)² = 32890225   →  x1 = 8902   →  r1 = 0.8902
Y1 = (8902)² = 79245604   →  x2 = 2456   →  r2 = 0.2456
Y2 = (2456)² = 06031936   →  x3 = 0319   →  r3 = 0.0319
Y3 = (0319)² = 101761     →  x4 = 0176   →  r4 = 0.0176
Y4 = (0176)² = 030976     →  x5 = 3097   →  r5 = 0.3097

Drawbacks:
1. Small cycles
2. The series may be lost (e.g.: x0 = 0012)
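A minimal Python sketch of the middle-square method, written to be consistent with the worked example above (the padding rule, pad the square to an even number of digits before taking the middle, is an assumption inferred from that example):

def middle_square(seed, n, d=4):
    """Generate n pseudo-random numbers in [0,1) with the middle-square method."""
    xs, rs = [], []
    x = seed
    for _ in range(n):
        y = str(x * x)
        y = y.zfill(len(y) + len(y) % 2).zfill(d)   # pad with zeros on the left
        mid = (len(y) - d) // 2
        x = int(y[mid:mid + d])                     # next seed = D middle digits
        xs.append(x)
        rs.append(x / 10**d)                        # convert xi to ri in [0,1)
    return xs, rs

print(middle_square(5735, 5))   # xs = [8902, 2456, 319, 176, 3097], rs = [0.8902, ..., 0.3097]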

Middle-product algorithm

It requires 2 seeds of D digits.

1. Select two seeds x0 and x1 with D digits (D > 3).
2. Y0 = x0·x1; the D middle digits of Y0 give x2, and r1 = 0.x2.
3. In general, Yi = xi·xi+1; the D middle digits of Yi give xi+2, and ri+1 = 0.xi+2.
4. If it is not possible to obtain the D digits in the middle, add zeros to the left.

Example: seeds x0 = 5015 and x1 = 5734

Y0 = (5015)(5734) = 28756010   →  x2 = 7560   →  r1 = 0.7560
Y1 = (5734)(7560) = 43349040   →  x3 = 3490   →  r2 = 0.3490
Y2 = (7560)(3490) = 26384400   →  x4 = 3844   →  r3 = 0.3844
Y3 = (3490)(3844) = 13415560   →  x5 = 4155   →  r4 = 0.4155
Y4 = (3844)(4155) = 15971820   →  x6 = 9718   →  r5 = 0.9718

Constant multiplier algorithm

Similar to the middle-product algorithm:

1. Select a seed x0 with D digits (D > 3).
2. Select a constant a.
3. Y0 = a·x0; the D middle digits give x1.
4. In general, Yi = a·xi; the D middle digits give xi+1.
5. Repeat until the desired quantity of random numbers has been obtained.

Example: seed x0 = 9803 and constant a = 6965

Y0 = (6965)(9803) = 68277895   →  x1 = 2778   →  r1 = 0.2778
Y1 = (6965)(2778) = 19348770   →  x2 = 3487   →  r2 = 0.3487
Y2 = (6965)(3487) = 24286955   →  x3 = 2869   →  r3 = 0.2869
Y3 = (6965)(2869) = 19982585   →  x4 = 9825   →  r4 = 0.9825
Y4 = (6965)(9825) = 68431125   →  x5 = 4311   →  r5 = 0.4311
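The middle-product and constant multiplier methods reuse the same "D middle digits" step. A minimal Python sketch consistent with the two worked examples above (the helper middle_digits and its padding rule are illustrative assumptions, not part of the original slides):

def middle_digits(y, d=4):
    """Take the D middle digits of y, padding on the left to an even number of digits."""
    s = str(y)
    s = s.zfill(len(s) + len(s) % 2).zfill(d)
    mid = (len(s) - d) // 2
    return int(s[mid:mid + d])

def middle_product(x0, x1, n, d=4):
    xs, rs = [x0, x1], []
    for _ in range(n):
        xs.append(middle_digits(xs[-2] * xs[-1], d))   # Yi = xi * xi+1
        rs.append(xs[-1] / 10**d)
    return rs

def constant_multiplier(a, x0, n, d=4):
    x, rs = x0, []
    for _ in range(n):
        x = middle_digits(a * x, d)                    # Yi = a * xi
        rs.append(x / 10**d)
    return rs

print(middle_product(5015, 5734, 5))       # [0.756, 0.349, 0.3844, 0.4155, 0.9718]
print(constant_multiplier(6965, 9803, 5))  # [0.2778, 0.3487, 0.2869, 0.9825, 0.4311]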

Linear algorithm

Produces a sequence of integers X1, X2, ... between 0 and m-1 by following the recursive relationship:

Xi+1 = (a·Xi + c) mod m,   i = 0, 1, 2, ...

where a is the multiplier, c is the increment, and m is the modulus.

The selection of the values of a, c, m, and X0 (all > 0) drastically affects the statistical properties and the cycle length.

"mod m" is the remainder of dividing (a·Xi + c) by m.

The random integers are generated in [0, m-1]; to convert them to random numbers in [0,1]:

Ri = Xi / (m - 1),   i = 1, 2, ...

Example:

Use X0 = 27, a = 17, c = 43, and m = 100. The values of Xi and ri are:

X1 = (17·27 + 43) mod 100 = 502 mod 100  = 2    →  r1 = 2/99  = 0.02
X2 = (17·2 + 43)  mod 100 = 77 mod 100   = 77   →  r2 = 77/99 = 0.77
X3 = (17·77 + 43) mod 100 = 1352 mod 100 = 52   →  r3 = 52/99 = 0.52

The maximum period N is equal to m.
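A minimal Python sketch of this linear congruential recursion (illustrative only; the parameters are the ones from the example above):

def linear_congruential(x0, a, c, m, n):
    """Linear generator: Xi+1 = (a*Xi + c) mod m, converted with Ri = Xi/(m-1)."""
    x, rs = x0, []
    for _ in range(n):
        x = (a * x + c) % m
        rs.append(x / (m - 1))
    return rs

print(linear_congruential(27, 17, 43, 100, 3))   # ≈ [0.02, 0.77, 0.52]

Setting c = 0 in the same routine gives the multiplicative congruential algorithm of the next section.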

Multiplicative congruential algorithm

It arises from the linear algorithm when c = 0. The recursive equation is:

Xi+1 = a·Xi mod m,   i = 0, 1, 2, ...

The values of a, m, and X0 must be positive integers, and:

ri = Xi / (m - 1),   i = 1, 2, ...

The conditions that must be fulfilled to achieve the maximum period are:
- m = 2^g, with g an integer
- a = 3 + 8k or a = 5 + 8k, where k = 0, 1, 2, 3, ...
- X0 must be an odd number

With these conditions we achieve N = m/4 = 2^(g-2).

Example:

Use x0 = 17, k = 2, g = 5. Then a = 5 + 8(2) = 21 and m = 2^5 = 32.

x1 = (21·17) mod 32 = 5    →  r1 = 5/31  = 0.1612
x2 = (21·5)  mod 32 = 9    →  r2 = 9/31  = 0.2903
x3 = (21·9)  mod 32 = 29   →  r3 = 29/31 = 0.9354
x4 = (21·29) mod 32 = 1    →  r4 = 1/31  = 0.0322
x5 = (21·1)  mod 32 = 21   →  r5 = 21/31 = 0.6774
x6 = (21·21) mod 32 = 25   →  r6 = 25/31 = 0.8064
x7 = (21·25) mod 32 = 13   →  r7 = 13/31 = 0.4193
x8 = (21·13) mod 32 = 17   →  r8 = 17/31 = 0.5483

Additive congruential algorithm

Requires a sequence of n integer numbers x1, x2, x3, ..., xn to generate new random numbers starting at xn+1, xn+2, ...

The recursive equation is:

xi = (xi-1 + xi-n) mod m,   i = n+1, n+2, ..., N

ri = xi / (m - 1)

Example:

x1 = 65, x2 = 89, x3 = 98, x4 = 03, x5 = 69, m = 100

x6  = (x5 + x1)  mod 100 = (69 + 65) mod 100 = 34   →  r6  = 34/99 = 0.3434
x7  = (x6 + x2)  mod 100 = (34 + 89) mod 100 = 23   →  r7  = 23/99 = 0.2323
x8  = (x7 + x3)  mod 100 = (23 + 98) mod 100 = 21   →  r8  = 21/99 = 0.2121
x9  = (x8 + x4)  mod 100 = (21 + 03) mod 100 = 24   →  r9  = 24/99 = 0.2424
x10 = (x9 + x5)  mod 100 = (24 + 69) mod 100 = 93   →  r10 = 93/99 = 0.9393
x11 = (x10 + x6) mod 100 = (93 + 34) mod 100 = 27   →  r11 = 27/99 = 0.2727
x12 = (x11 + x7) mod 100 = (27 + 23) mod 100 = 50   →  r12 = 50/99 = 0.5050
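A minimal Python sketch of the additive recursion, using a sliding window of the last n values (illustrative only; it reproduces the example above):

from collections import deque

def additive_congruential(seeds, m, count):
    """Additive generator: xi = (xi-1 + xi-n) mod m, with ri = xi/(m-1)."""
    window = deque(seeds, maxlen=len(seeds))   # keeps the last n values
    rs = []
    for _ in range(count):
        x = (window[-1] + window[0]) % m       # xi-1 + xi-n
        window.append(x)
        rs.append(x / (m - 1))
    return rs

print(additive_congruential([65, 89, 98, 3, 69], 100, 7))
# ≈ [0.3434, 0.2323, 0.2121, 0.2424, 0.9393, 0.2727, 0.5050]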

Congruential non-linear algorithms

Quadratic:

xi+1 = (a·xi² + b·xi + c) mod m,   i = 0, 1, 2, 3, ..., N

Conditions to achieve the maximum period N = m:
- m = 2^g, with g an integer
- a must be an even number
- c must be an odd number
- (b - a) mod 4 = 1

ri = xi / (m - 1)

Example:

x0 = 13, m = 8, a = 26, b = 27, c = 27

x1 = (26·13² + 27·13 + 27) mod 8 = 4
x2 = (26·4²  + 27·4  + 27) mod 8 = 7
x3 = (26·7²  + 27·7  + 27) mod 8 = 2
x4 = (26·2²  + 27·2  + 27) mod 8 = 1
x5 = (26·1²  + 27·1  + 27) mod 8 = 0
x6 = (26·0²  + 27·0  + 27) mod 8 = 3
x7 = (26·3²  + 27·3  + 27) mod 8 = 6
x8 = (26·6²  + 27·6  + 27) mod 8 = 5
x9 = (26·5²  + 27·5  + 27) mod 8 = 4

Blum, Blum and Shub algorithm

It is the quadratic algorithm with a = 1, b = 0, and c = 0:

xi+1 = xi² mod m,   i = 0, 1, 2, 3, ..., n
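A minimal Python sketch of the quadratic congruential recursion, checked against the worked example above (illustrative only):

def quadratic_congruential(x0, a, b, c, m, n):
    """Non-linear generator: xi+1 = (a*xi^2 + b*xi + c) mod m, with ri = xi/(m-1)."""
    x, xs = x0, []
    for _ in range(n):
        x = (a * x * x + b * x + c) % m
        xs.append(x)
    return xs

print(quadratic_congruential(13, 26, 27, 27, 8, 9))   # [4, 7, 2, 1, 0, 3, 6, 5, 4]

The Blum, Blum and Shub case corresponds to quadratic_congruential(x0, 1, 0, 0, m, n), i.e. xi+1 = xi² mod m.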

Characteristics of a good generator

Maximum density:
- The values assumed by Ri, i = 1, 2, ..., leave no large gaps on [0,1].
- Problem: instead of being continuous, each Ri is discrete.
- Solution: use a very large integer for the modulus m.

Maximum period:
- To achieve maximum density and avoid cycling.
- Achieved by a proper choice of a, c, m, and X0.

Most digital computers use a binary representation of numbers:
- Speed and efficiency are helped by choosing the modulus m to be (or to be close to) a large power of two, 2^b.

Tests for Random Numbers

Two categories:

Testing for uniformity:
  H0: Ri ~ U[0,1]
  H1: Ri does not follow U[0,1]

Testing for independence:
  H0: the Ri are independent
  H1: the Ri are not independent

Mean test:
  H0: μ = 0.5
  H1: μ ≠ 0.5
  We can use the normal distribution with σ² = 1/12 (σ = 0.2887) and a significance level α = 0.05.

When to use these tests:
- If a well-known simulation language or random-number generator is used, testing is probably unnecessary.
- If the generator is not explicitly known or documented (e.g., spreadsheet programs, symbolic/numerical calculators), the tests should be applied to a sample of the generated numbers.

Types of tests:
- Theoretical tests: evaluate the choices of m, a, and c without actually generating any numbers.
- Empirical tests: applied to actual sequences of numbers produced. Our emphasis.

Kolmogorov-Smirnov test                                [Uniformity Test]

Very useful for small samples.
Applicable to discrete and continuous variables.

Procedure:
1. Arrange the ri from lowest to highest.
2. Determine D+ and D-.
3. Determine D = max(D+, D-).
4. Determine the critical value Dα,n.
5. If D > Dα,n, then the ri are not uniform.

Example: suppose that 5 generated numbers are 0.44, 0.81, 0.14, 0.05, 0.93.

Step 1: sort the R(i) in ascending order.
Step 2: compute D+ = max{ i/N - R(i) } and D- = max{ R(i) - (i-1)/N }.

R(i)              0.05   0.14   0.44   0.81   0.93
i/N               0.20   0.40   0.60   0.80   1.00
i/N - R(i)        0.15   0.26   0.16    -     0.07
R(i) - (i-1)/N    0.05    -     0.04   0.21   0.13

Step 3: D = max(D+, D-) = 0.26
Step 4: for α = 0.05, Dα,n = D0.05,5 = 0.565 > D, so H0 is not rejected.
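A minimal Python sketch of the procedure (illustrative; the critical value D0.05,5 = 0.565 still comes from the Kolmogorov-Smirnov table, as on the slide):

def ks_uniform_test(data, d_critical):
    """Kolmogorov-Smirnov uniformity test: returns D and whether H0 is kept."""
    r = sorted(data)
    n = len(r)
    d_plus  = max((i + 1) / n - r[i] for i in range(n))
    d_minus = max(r[i] - i / n for i in range(n))
    d = max(d_plus, d_minus)
    return d, d <= d_critical          # True -> do not reject uniformity

print(ks_uniform_test([0.44, 0.81, 0.14, 0.05, 0.93], d_critical=0.565))   # ≈ (0.26, True)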

Chi-square test                                        [Uniformity Test]

The chi-square test uses the test statistic:

χ0² = Σ (Oi - Ei)² / Ei,   summed over the n classes

where n is the number of classes, Oi is the observed count in the i-th class, and Ei is the expected count in the i-th class.

The statistic approximately follows the chi-square distribution with n - 1 degrees of freedom, tabulated in Table A.6.

For a uniform distribution, the expected number Ei in each class is:

Ei = N / n,   where N is the total number of observations.

Valid only for large samples, e.g., N ≥ 50.

If χ0² > χ²(α, n-1), reject the hypothesis of uniformity.

Example: use the chi-square test with α = 0.05 to test whether the data shown below are uniformly distributed.

0.34  0.90  0.25  0.89  0.87  0.44  0.12  0.21  0.46  0.67
0.83  0.76  0.79  0.64  0.70  0.81  0.94  0.74  0.22  0.74
0.96  0.99  0.77  0.67  0.56  0.41  0.52  0.73  0.99  0.02
0.47  0.30  0.17  0.82  0.56  0.05  0.45  0.31  0.78  0.05
0.79  0.71  0.23  0.19  0.82  0.93  0.65  0.37  0.39  0.42
0.99  0.17  0.99  0.46  0.05  0.66  0.10  0.42  0.18  0.49
0.37  0.51  0.54  0.01  0.81  0.28  0.69  0.34  0.75  0.49
0.72  0.43  0.56  0.97  0.30  0.94  0.96  0.58  0.73  0.05
0.06  0.39  0.84  0.24  0.40  0.64  0.40  0.19  0.79  0.62
0.18  0.26  0.97  0.88  0.64  0.47  0.60  0.11  0.29  0.78

The test uses n = 10 intervals of equal length: [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0).

Interval      Oi    Ei    Oi - Ei   (Oi - Ei)²   (Oi - Ei)²/Ei
[0.0, 0.1)     8    10      -2          4            0.4
[0.1, 0.2)     8    10      -2          4            0.4
[0.2, 0.3)    10    10       0          0            0.0
[0.3, 0.4)     9    10      -1          1            0.1
[0.4, 0.5)    12    10       2          4            0.4
[0.5, 0.6)     8    10      -2          4            0.4
[0.6, 0.7)    10    10       0          0            0.0
[0.7, 0.8)    14    10       4         16            1.6
[0.8, 0.9)    10    10       0          0            0.0
[0.9, 1.0)    11    10       1          1            0.1
Total        100   100       0                       3.4

The value of χ0² is 3.4. Compared with the value from Table A.6, χ²(0.05, 9) = 16.9.
Since 3.4 < 16.9, H0 is not rejected.
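A minimal Python sketch of the statistic (illustrative; the critical value χ²(α, n-1) still comes from a chi-square table such as Table A.6):

def chi_square_uniform(data, n_classes=10):
    """Chi-square uniformity statistic: sum of (Oi - Ei)^2 / Ei with Ei = N/n."""
    expected = len(data) / n_classes
    observed = [0] * n_classes
    for r in data:
        observed[min(int(r * n_classes), n_classes - 1)] += 1   # class index of r
    return sum((o - expected) ** 2 / expected for o in observed)

# chi0 = chi_square_uniform(sample)
# reject uniformity at level alpha if chi0 > chi2(alpha, n_classes - 1), here 16.9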

Tests for Autocorrelation                              [Test of Independence]

Tests the correlation between every m-th number, starting with the i-th number.

The autocorrelation ρim is computed between the numbers Ri, Ri+m, Ri+2m, ..., Ri+(M+1)m, where M is the largest integer such that i + (M+1)m ≤ N and N is the total number of values in the sequence.

Hypotheses:
  H0: ρim = 0   (the numbers are independent)
  H1: ρim ≠ 0   (the numbers are NOT independent)

If the values are not correlated:
- For large values of M, the distribution of the estimator ρ̂im is approximately normal.

The test statistic is:

Z0 = ρ̂im / σρ̂im

Z0 is distributed normally with mean 0 and variance 1, where:

ρ̂im = [ 1/(M+1) ] · Σ (from k = 0 to M) Ri+km · Ri+(k+1)m  -  0.25

σρ̂im = √(13M + 7) / [ 12(M + 1) ]

If ρim > 0, the subsequence has positive autocorrelation:
- High random numbers tend to be followed by high ones, and vice versa.

If ρim < 0, the subsequence has negative autocorrelation:
- Low random numbers tend to be followed by high ones, and vice versa.

Example                                                [Test for autocorrelation]

Test whether the 3rd, 8th, 13th, ... numbers are autocorrelated. Use α = 0.05.
Then i = 3, m = 5, N = 30, and the sequence is:

0.12  0.01  0.23  0.28  0.89  0.31  0.64  0.28  0.83  0.93
0.99  0.15  0.33  0.35  0.91  0.41  0.60  0.27  0.75  0.88
0.68  0.49  0.05  0.43  0.95  0.58  0.19  0.36  0.69  0.87

i + (M + 1)m ≤ N  →  3 + (4 + 1)(5) = 28 ≤ 30
Then M = 4.

ρ̂35 = (1/(4+1)) [ (0.23)(0.28) + (0.28)(0.33) + (0.33)(0.27) + (0.27)(0.05) + (0.05)(0.36) ] - 0.25
    = -0.1945

σρ̂35 = √(13(4) + 7) / [ 12(4 + 1) ] = 0.1280

Z0 = -0.1945 / 0.1280 ≈ -1.52

From Table A.3, z0.025 = 1.96. Since |Z0| < 1.96, the hypothesis of independence is not rejected.
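A minimal Python sketch of the autocorrelation test statistic, reproducing the example above (illustrative only; i is 1-based as on the slides):

import math

def autocorrelation_z0(data, i, m):
    """Z0 for the lag-m subsequence starting at position i: Z0 = rho_hat / sigma."""
    N = len(data)
    M = (N - i) // m - 1                    # largest M with i + (M+1)m <= N
    s = sum(data[i - 1 + k * m] * data[i - 1 + (k + 1) * m] for k in range(M + 1))
    rho = s / (M + 1) - 0.25
    sigma = math.sqrt(13 * M + 7) / (12 * (M + 1))
    return rho / sigma

nums = [0.12, 0.01, 0.23, 0.28, 0.89, 0.31, 0.64, 0.28, 0.83, 0.93,
        0.99, 0.15, 0.33, 0.35, 0.91, 0.41, 0.60, 0.27, 0.75, 0.88,
        0.68, 0.49, 0.05, 0.43, 0.95, 0.58, 0.19, 0.36, 0.69, 0.87]
print(autocorrelation_z0(nums, i=3, m=5))   # ≈ -1.52, so |Z0| < 1.96: do not reject H0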

Other tests of independence

- Runs up and down test
- Runs above and below the mean test
- Poker test
- Series test
- Gap test

-----HOMEWORK 3 at SIDWeb-----

Bonus (3 points for the partial exam):
- Write a computer program that generates 4-digit random numbers using the multiplicative congruential algorithm. Allow the user to enter the values x0, k, and g.
- Hand in a printed example and the program file.

2. Objectives: Random Variables

Understand how to generate random variables from a specific distribution as inputs to a simulation model.

Illustrate some commonly used techniques for generating random variables:
- Inverse-transform technique
- Acceptance-rejection technique

Inverse-transform technique

It can be used to generate random variables from the exponential, uniform, Weibull, triangular, and empirical distributions.

Concept, for a cdf r = F(x):
- Generate r ~ U(0,1)
- Find x: x = F⁻¹(r)

Figure: a generated r1 is mapped through the cdf to the variate x1 = F⁻¹(r1).

Exponential Distribution                               [inverse-transform]

The exponential cdf:

r = F(x) = 1 - e^(-λx),   x ≥ 0

Solving for x:

e^(-λX) = 1 - r
-λX = ln(1 - r)
X = -ln(1 - r) / λ

To generate X1, X2, X3, ...:

Xi = F⁻¹(ri) = -(1/λ) ln(1 - ri) = -(1/λ) ln(ri)

(the last step uses the fact that 1 - ri is also uniform on (0,1), so ri can be used directly).

Figure: inverse-transform technique for exp(λ = 1).
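A minimal Python sketch of the exponential inverse transform (illustrative only):

import math, random

def exponential_variate(lam, r=None):
    """Inverse transform for the exponential distribution: X = -(1/lambda) ln(1 - r)."""
    if r is None:
        r = random.random()                  # r ~ U(0,1)
    return -math.log(1.0 - r) / lam

print(exponential_variate(1.0, r=0.1306))    # ≈ 0.1400, as in the example below with λ = 1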

Table of Distributions                                 [inverse-transform]

Distribution    Cumulative F(x)                              Inverse-Transform

Uniform(a,b)    F(x) = (x - a)/(b - a),   a ≤ x ≤ b          xi = a + (b - a)·ri

Weibull(α,β)    F(x) = 1 - e^(-(x/α)^β),   x ≥ 0             xi = α[ -ln(1 - ri) ]^(1/β)

Triangular      F(x) = x²/2,              0 ≤ x ≤ 1          xi = √(2ri),            0 ≤ ri ≤ 1/2
on [0,2]        F(x) = 1 - (2 - x)²/2,    1 < x ≤ 2          xi = 2 - √(2(1 - ri)),  1/2 < ri ≤ 1

Normal          no closed-form expression                    xi = μ + σ·zi

Example: generate 200 variates Xi with the exponential distribution (λ = 1). The first five:

Ri    0.1306   0.0422   0.6597   0.7965   0.7696
Xi    0.1400   0.0431   1.078    1.592    1.468

Figure: the histogram of the ri shows a uniform distribution, while the histogram of the xi shows an exponential distribution.

Uniform Distribution                                   [Example]

The temperature of an oven behaves uniformly within the range of 95 to 100 °C. Model the behavior of the temperature with xi = a + (b - a)·ri:

Measurement   ri      Temperature (°C)
1             0.48    95 + 5·0.48 = 97.40
2             0.82    99.10
3             0.69    98.45
4             0.67    98.35
5             0.00    95.00

Exponential Distribution                               [Example]

Historical data show that the service time at a bank behaves exponentially with a mean of 3 min/customer. Simulate the behavior of this random variable with xi = -3 ln(1 - ri):

Customer   ri      Service time (min)
1          0.36    3.06
2          0.17    5.31
3          0.97    0.09
4          0.50    2.07
5          0.21    0.70

Poisson Distribution                                   [Example]

The number of pieces entering a production system follows a Poisson distribution with a mean of 2 pcs/hour. Simulate the arrival behavior of the pieces to the system.

p(x) = e^(-a) a^x / x!,   x = 0, 1, ...
F(x) = Σ (from i = 0 to x) e^(-a) a^i / i!

x (pcs/hour)   p(x)      F(x)     Random digits
0              0.1353    0.1353   0.0000 - 0.1353
1              0.2706    0.4060   0.1353 - 0.4060
2              0.2706    0.6766   0.4060 - 0.6766
3              0.1804    0.8571   0.6766 - 0.8571
4              0.0902    0.9473   0.8571 - 0.9473
5              0.0361    0.9834   0.9473 - 0.9834
6              0.0120    0.9954   0.9834 - 0.9954
7              0.0034    0.9989   0.9954 - 0.9989
8              0.0008    0.9997   0.9989 - 0.9997
9              0.0001    0.9999   0.9997 - 0.9999
10             0.00003   0.9999   0.9999 - 0.9999

Simulated values ri are then assigned a number of pieces/hour through the "Random digits" ranges; for the ri listed on the slide:

ri       Pieces/hour
0.6754   2
0.0234   0
0.7892   3
0.5134   2
0.3331   1
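A minimal Python sketch of the Poisson inverse transform, which walks the cumulative table until it covers r (illustrative only):

import math, random

def poisson_variate(mean, r=None):
    """Inverse transform for the Poisson distribution: smallest x with F(x) >= r."""
    if r is None:
        r = random.random()
    x, p = 0, math.exp(-mean)        # p(0) = e^-a
    cdf = p
    while r > cdf:
        x += 1
        p *= mean / x                # p(x) = p(x-1) * a / x
        cdf += p
    return x

print([poisson_variate(2, r) for r in (0.6754, 0.0234, 0.7892, 0.5134, 0.3331)])   # [2, 0, 3, 2, 1]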

Convolution Method

In some probability distributions, the random variable can be generated by adding other random variables.

Erlang distribution:
A k-Erlang variable with mean 1/λ is the sum of k exponential variables, each with mean 1/(kλ):

Y = X1 + X2 + ... + Xk
Y = -(1/kλ) ln(1 - r1) - (1/kλ) ln(1 - r2) - ... - (1/kλ) ln(1 - rk)

which can be written with a single product:

Y = -(1/kλ) ln[ (1 - r1)(1 - r2) ··· (1 - rk) ]

Example: the processing time of a certain machine follows a 3-Erlang distribution with mean 1/λ = 8 min/piece:

Y = -(8/3) Σ ln(1 - ri) = -(8/3) ln[ (1 - r1)(1 - r2)(1 - r3) ]

Piece   1 - r1   1 - r2   1 - r3   Process time (min)
1       0.28     0.52     0.64      6.328
2       0.96     0.37     0.83      3.257
3       0.04     0.12     0.03     23.588
4       0.35     0.44     0.50      6.837
5       0.77     0.09     0.21     11.279
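A minimal Python sketch of the convolution method for the Erlang distribution (illustrative only; the example call passes r values of 0.72, 0.48, 0.36, whose complements are the 1 - ri values of the first row above):

import math, random

def erlang_variate(k, mean, rs=None):
    """k-Erlang variate with the given mean, as a sum of k exponentials."""
    if rs is None:
        rs = [random.random() for _ in range(k)]
    prod = 1.0
    for r in rs:
        prod *= (1.0 - r)
    return -(mean / k) * math.log(prod)

print(erlang_variate(3, 8, rs=[0.72, 0.48, 0.36]))   # ≈ 6.328 (first piece in the table)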

Empirical continuous distribution                      [inverse-transform]

Used when a theoretical distribution is not applicable.

Having collected empirical data, one can:
- Resample the observed data
- Interpolate between the observed data points to fill in the gaps

For a sample set of size n:
- Arrange the data from smallest to largest: x(1) ≤ x(2) ≤ ... ≤ x(n)
- Assign the probability 1/n to each interval x(i-1) ≤ x ≤ x(i)

X = F⁻¹(R) = x(i-1) + ai [ R - (i - 1)/n ]

where the slope of the i-th segment is

ai = [ x(i) - x(i-1) ] / [ i/n - (i - 1)/n ] = [ x(i) - x(i-1) ] / (1/n)

Example: suppose that the observed data of 100 repair times of a machine are:

Interval (hours)    Frequency   Relative      Cumulative        Slope, ai
                                frequency     frequency, ci
0.25 ≤ x ≤ 0.5      31          0.31          0.31              0.81
0.5  < x ≤ 1.0      10          0.10          0.41              5.00
1.0  < x ≤ 1.5      25          0.25          0.66              2.00
1.5  < x ≤ 2.0      34          0.34          1.00              1.47

Consider r1 = 0.83:
c3 = 0.66 < r1 ≤ c4 = 1.00
X1 = x(4-1) + a4 (r1 - c(4-1))
   = 1.5 + 1.47 (0.83 - 0.66)
   = 1.75
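A minimal Python sketch of this piecewise-linear inverse cdf, checked against the repair-time example above (illustrative only):

import bisect

def empirical_continuous(r, edges, cum_freq):
    """edges = interval endpoints [x0, ..., xk]; cum_freq = cumulative frequencies [c1, ..., ck]."""
    i = bisect.bisect_left(cum_freq, r)                  # first interval with ci >= r
    c_prev = cum_freq[i - 1] if i > 0 else 0.0
    slope = (edges[i + 1] - edges[i]) / (cum_freq[i] - c_prev)
    return edges[i] + slope * (r - c_prev)

x_edges = [0.25, 0.5, 1.0, 1.5, 2.0]
c = [0.31, 0.41, 0.66, 1.00]
print(empirical_continuous(0.83, x_edges, c))            # ≈ 1.75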

Empirical Discrete Distribution

Example: suppose the number of shipments, x, on the loading dock of the IHW company is either 0, 1, or 2.

Data - probability distribution:

x     p(x)    F(x)
0     0.50    0.50
1     0.30    0.80
2     0.20    1.00

Method - given R, the generation scheme becomes:

x = 0  if        R ≤ 0.5
x = 1  if  0.5 < R ≤ 0.8
x = 2  if  0.8 < R ≤ 1.0

Consider R1 = 0.73:
F(xi-1) < R1 ≤ F(xi)
F(x0) = 0.50 < 0.73 ≤ F(x1) = 0.80
Hence, x1 = 1.

Acceptance-Rejection technique

Useful particularly when the inverse cdf does not exist in closed form; also known as thinning.

To generate random variables X ~ U(1/4, 1), the procedure is:

Step 1.  Generate R ~ U[0,1].
Step 2a. If R ≥ 1/4, accept X = R.
Step 2b. If R < 1/4, reject R and go back to Step 1.

Figure: flowchart - generate R; if the condition R ≥ 1/4 holds, output R; otherwise generate a new R.

This particular procedure applies only to the uniform case; the acceptance-rejection technique is also applied to the Poisson, nonstationary Poisson, and gamma distributions.

-----HOMEWORK 4 at SIDWeb-----
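A minimal Python sketch of the acceptance-rejection procedure for X ~ U(1/4, 1) (illustrative only):

import random

def uniform_quarter_to_one():
    """Keep sampling R ~ U(0,1) until R >= 1/4, then accept X = R."""
    while True:
        r = random.random()    # Step 1: candidate from U(0,1)
        if r >= 0.25:          # Step 2a: accept X = R
            return r
        # Step 2b: reject R and go back to Step 1

print([uniform_quarter_to_one() for _ in range(5)])   # five values, all in [0.25, 1)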
