
The Uniform Distribution

A continuous random variable X which has probability density function given by:

$$f(x) = \frac{1}{b-a} \quad \text{for } a \le x \le b \tag{1}$$

(and f(x) = 0 if x is not between a and b) follows a uniform distribution with
parameters a and b. We write X ~ U(a, b).
Remember that the area under the graph of the probability density function must be
equal to 1 (see continuous random variables).
Expectation and Variance
If X ~ U(a, b), then:

E(X) = (a + b)/2
Var(X) = (1/12)(b - a)^2

Proof of Expectation
$$E(X) = \int_a^b \frac{x}{b-a}\,dx = \left[\frac{x^2}{2(b-a)}\right]_a^b = \frac{b^2 - a^2}{2(b-a)} = \frac{(b-a)(b+a)}{2(b-a)} = \frac{a+b}{2}$$
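The variance stated above follows by a similar computation (this derivation is not in the original, but uses only the definitions already given):

$$\operatorname{Var}(X) = E(X^2) - [E(X)]^2 = \int_a^b \frac{x^2}{b-a}\,dx - \left(\frac{a+b}{2}\right)^2 = \frac{b^3 - a^3}{3(b-a)} - \frac{(a+b)^2}{4} = \frac{a^2 + ab + b^2}{3} - \frac{a^2 + 2ab + b^2}{4} = \frac{(b-a)^2}{12}$$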
Cumulative Distribution Function
The cumulative distribution function can be found by integrating the p.d.f. between a and t:

$$F(t) = \int_a^t \frac{1}{b-a}\,dx = \frac{t-a}{b-a}$$
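As a quick numerical sanity check of this formula, here is a minimal Python sketch (the function name is my own, not from any particular library):

```python
import random

def uniform_cdf(t, a, b):
    """F(t) = (t - a) / (b - a) for a <= t <= b, clamped outside [a, b]."""
    if t < a:
        return 0.0
    if t > b:
        return 1.0
    return (t - a) / (b - a)

# Compare the exact CDF with the empirical fraction of samples below t.
a, b, t = 2.0, 5.0, 3.5
samples = [random.uniform(a, b) for _ in range(100_000)]
print(uniform_cdf(t, a, b))                          # exact: 0.5
print(sum(s <= t for s in samples) / len(samples))   # ~0.5
```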
Methods for Generating Random Numbers from the Uniform Distribution
First: Inverse Transform method
Suppose we can generate a uniform random number r on [0, 1]. How can we generate numbers x with a given pdf P(x)?
To warm up our brain, let's first think about something else. Suppose we generate a uniform random number 0 < r < 1 and square it, so we have $x = r^2$. Clearly we also have 0 < x < 1. What is the pdf P(x) for x? A wild guess might be that it is just the square of the pdf for r, so x would also be uniform. It is however easy to see that this can't be true. Consider the probability p that x < 1/2. If x were uniform on [0, 1] we would get p = 1/2. In order to get an x < 1/2 we must have gotten an $r < 1/\sqrt{2}$. The probability for this is $1/\sqrt{2} \approx 0.71$, not 1/2. So P(x) can't be uniform.
To figure out what P(x) is, consider the probability p that x falls in the interval $[x, x + \Delta x]$. In the limit $\Delta x \to 0$ we have, according to the definition of a pdf, $p = P(x)\,\Delta x$. So if we can figure out p, we can compute P(x). For x to fall in $[x, x + \Delta x]$, we must generate r in the range $[\sqrt{x}, \sqrt{x + \Delta x}]$. Since r is uniform, the probability for this (which is also p) is just the length of this interval. So we get

$$p = \sqrt{x + \Delta x} - \sqrt{x}$$

and we now use $p = P(x)\,\Delta x$ and solve for P(x), obtaining

$$P(x) = \frac{\sqrt{x + \Delta x} - \sqrt{x}}{\Delta x}$$

Taking the limit $\Delta x \to 0$ we thus obtain

$$P(x) = \lim_{\Delta x \to 0} \frac{\sqrt{x + \Delta x} - \sqrt{x}}{\Delta x} = \frac{d}{dx}\sqrt{x} = \frac{1}{2\sqrt{x}}$$
With this result, we now have an instance of the inverse transform method to generate random numbers with pdf $1/(2\sqrt{x})$: generate a uniform random number r and square it.
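A small empirical check of this result (a plain-Python sketch; only the standard random module is used):

```python
import random

# Square uniform random numbers and check P(x < 1/2).
# The derivation above predicts 1/sqrt(2) ~ 0.7071, not 1/2.
samples = [random.random() ** 2 for _ in range(100_000)]
print(sum(x < 0.5 for x in samples) / len(samples))  # ~0.7071
```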
The general form of the inverse transform method is obtained by computing the pdf of x = g(r) for some function g, and then trying to find a function g such that the desired pdf is obtained.
Let us assume that g(r) is invertible with inverse $g^{-1}(x)$. The chance that x lies in the interval $[x, x + dx]$ is P(x)dx, for infinitesimal dx. What values of r should we have gotten to get this? (Remember we are generating values x by calling a uniform random number generator to get r and then setting x = g(r).) We should have gotten an r in the interval [r, r + dr], with $r = g^{-1}(x)$ and $r + dr = g^{-1}(x + dx)$. Write out the last formula and get

$$r + dr = g^{-1}(x) + (g^{-1})'(x)\,dx,$$

where $'$ denotes the derivative. Using $r = g^{-1}(x)$ we can simplify this to

$$dr = (g^{-1})'(x)\,dx.$$

The probability for the r value to be in the interval [r, r + dr] is just dr. This is also the probability for x to be in [x, x + dx], which is P(x)dx. Using the relation above, we get

$$P(x)\,dx = (g^{-1})'(x)\,dx,$$

thus

$$P(x) = (g^{-1})'(x).$$

Integrating both sides and remembering that $F(x) = \int_{-\infty}^{x} P(y)\,dy$ gives us

$$F(x) = g^{-1}(x),$$

or

$$g(x) = F^{-1}(x),$$

provided F(x) has an inverse.


In summary, to generate a number x with pdf P(x) using the inverse transform method, we first figure out the cdf F(x) from P(x). We then invert that by solving r = F(x) for x, which gives the function $F^{-1}(r)$. We then generate a uniform random number 0 < r < 1 and compute $x = F^{-1}(r)$.
Second: Acceptance and rejection method


Let u and v be two independent random numbers with uniform distribution in the intervals $0 < u \le 1$ and $-1 \le v \le 1$. Do the transformation $x = sv/u + a$, $y = u^2$. The rectangle in the (u, v) plane is transformed into a hat function y = h(x) in the (x, y) plane. All (x, y) points will fall under the hat curve y = h(x), which is uniform in the center and falls like $x^{-2}$ in the tails. h(x) is a useful hat function for the rejection method. The acceptance condition is y < f(x)/k. s and a are chosen so that $f(x) \le k\,h(x)$ for all x, where

$$h(x) = \begin{cases} 1 & \text{for } a - s \le x < a + s \\[1ex] \dfrac{s^2}{(x-a)^2} & \text{elsewhere.} \end{cases}$$
The advantage of this method is that the calculations are simple and fast, and the rejection rate is reasonable. Quick acceptance and quick rejection areas can be applied. For discrete distributions, f(x) is replaced by $f(\lfloor x \rfloor)$.
The following values are used for the hat parameters for the Poisson, binomial and hypergeometric distributions:

1
,
2
a and +

2
1, 1
2 1 3 3
( )
2 2
s s s
e e
+ +

where $\mu$ is the mean and $\sigma^2$ is the variance of the distribution (Ahrens & Dieter 1989). These values are reasonably close to the optimal values. It is possible to
calculate the optimal values for a and s (Stadlober 1989, 1990), but this adds to the
set-up time with only a marginal improvement in execution time. The optimal value of
k is of course the maximum of f(x): k = f(M), where M is the mode.
This method is used for the Poisson, binomial and hypergeometric distributions in StochasticLib.
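The following is a minimal Python sketch of this method for the Poisson case (my own illustration, not code from StochasticLib; it uses the transformation, acceptance condition, and hat parameters quoted above, with $\sigma^2 = \mu$ for the Poisson distribution, and works in logarithms to avoid overflow in f(x)):

```python
import math
import random

def poisson_hat_method(mu):
    """Sample from Poisson(mu) by the rejection method with the hat
    function h(x) above: x = s*v/u + a, accept if y = u^2 <= f(floor(x))/k."""
    a = mu + 0.5                                   # hat center
    s = math.sqrt((8 / math.e) * (mu + 0.5)) + 3 - 2 * math.sqrt(3 / math.e)
    mode = math.floor(mu)                          # M, the mode of Poisson(mu)
    log_k = mode * math.log(mu) - mu - math.lgamma(mode + 1)   # log f(M)
    while True:
        u = random.random()
        if u == 0.0:                               # need u > 0 for x = s*v/u + a
            continue
        v = 2.0 * random.random() - 1.0            # v uniform on [-1, 1]
        x = s * v / u + a
        if x < 0.0:                                # quick rejection: f = 0 there
            continue
        n = math.floor(x)                          # discrete: evaluate f(floor(x))
        log_f = n * math.log(mu) - mu - math.lgamma(n + 1)
        if 2.0 * math.log(u) <= log_f - log_k:     # y = u^2 <= f(n)/k
            return n
```

A quick check: the average of many draws, say with mu = 7.5, should come out close to 7.5.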
Third: Convolution method
Convolution of Uniform Distributions
Consider a sum X of independent and uniformly distributed random variables $X_i \sim U[a_i, b_i]$, $i = 1, \ldots, n$:

$$X = X_1 + \cdots + X_n$$

then the following is true. The sum X is symmetrically distributed around

$$\frac{1}{2}\left((a_1 + \cdots + a_n) + (b_1 + \cdots + b_n)\right).$$
If

$$\sum_{i=1}^{n} (b_i - a_i)^2 \to \infty \quad (n \to \infty) \tag{6}$$

then by the central limit theorem the distribution of

$$\frac{\sum_{i=1}^{n} X_i - E\left[\sum_{i=1}^{n} X_i\right]}{\sqrt{V\left[\sum_{i=1}^{n} X_i\right]}}$$

tends for $n \to \infty$ to the standard normal distribution. Hence in case (6) and for sufficiently large n the distribution of X can be approximated by the normal distribution

$$N\left(\sum_{i=1}^{n} E[X_i],\; \sum_{i=1}^{n} V[X_i]\right).$$

However, it turns out that in spite of the symmetry relation, convergence to the normal distribution may be rather slow if the lengths $b_i - a_i$ are very different, implying that the normal approximation might be bad even for large values of n. Therefore, the exact distribution of X would be desirable.
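A sketch of this approximation in Python (the helper is my own; it uses $E[X_i] = (a_i + b_i)/2$ and $V[X_i] = (b_i - a_i)^2/12$ from the first section):

```python
import math

def sum_uniform_normal_approx(bounds):
    """Parameters of the approximating normal N(sum E[X_i], sum V[X_i]) for
    X = X_1 + ... + X_n with independent X_i ~ U[a_i, b_i].
    bounds is a sequence of (a_i, b_i) pairs; returns (mean, std dev)."""
    mean = sum((a + b) / 2.0 for a, b in bounds)
    var = sum((b - a) ** 2 / 12.0 for a, b in bounds)
    return mean, math.sqrt(var)

print(sum_uniform_normal_approx([(0, 1), (0, 1), (2, 6)]))  # (5.0, ~1.22)
```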
Convolution of Identical Uniform Distributions
Historically, the case of independent and identically uniformly distributed $X_i$'s played an important role, i.e.

$$X = X_1 + \cdots + X_n \quad \text{with} \quad X_i \sim U[a, b], \; i = 1, \ldots, n \tag{8}$$
The distribution of X was first studied by N. I. Lobatchewski in 1842. He
wanted to use it to evaluate the error of astronomical measurements, in
order to decide whether the Euclidean or the non-Euclidean geometry is
valid in the universe.
The density function of X is:

$$f_n(x) = \begin{cases} \dfrac{1}{(n-1)!\,(b-a)^n} \displaystyle\sum_{i=0}^{\nu(n,x)} (-1)^i \binom{n}{i} \left(x - na - i(b-a)\right)^{n-1} & \text{if } na \le x \le nb \\[2ex] 0 & \text{otherwise,} \end{cases}$$

where

$$\nu(n, x) := \left\lfloor \frac{x - na}{b - a} \right\rfloor$$

denotes the largest integer not exceeding $(x - na)/(b - a)$.
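For concreteness, here is a direct Python evaluation of this density (a sketch with my own function names; boundary points where $(x - na)/(b - a)$ is exactly an integer are a measure-zero edge case and not handled specially):

```python
import math

def sum_uniform_pdf(x, n, a=0.0, b=1.0):
    """Exact density of X = X_1 + ... + X_n, X_i ~ U[a, b] i.i.d.,
    evaluated via the alternating-sum formula above."""
    if not (n * a <= x <= n * b):
        return 0.0
    nu = math.floor((x - n * a) / (b - a))    # nu(n, x)
    total = sum((-1) ** i * math.comb(n, i) * (x - n * a - i * (b - a)) ** (n - 1)
                for i in range(nu + 1))
    return total / (math.factorial(n - 1) * (b - a) ** n)

# Sanity check: a Riemann sum of the n = 3 density over [0, 3] is ~1.
dx = 0.001
print(sum(sum_uniform_pdf(k * dx, 3) for k in range(3000)) * dx)  # ~1.0
```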
It is well-known that the speed of convergence to the normal distribution is extremely fast in the identically distributed case given by (8). Already for
n = 4 the difference between the normal approximation and the exact
distribution is often negligible.
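This observation is what makes the convolution method practical for generating approximately normal variates: sum a handful of uniforms and standardize. A minimal sketch (n = 12 is the classic choice because then V[X] = n/12 = 1, though per the remark above even n = 4 is often adequate):

```python
import math
import random

def approx_standard_normal(n=12):
    """Convolution method: sum n U(0, 1) variates, then standardize
    using E[X] = n/2 and V[X] = n/12."""
    x = sum(random.random() for _ in range(n))
    return (x - n / 2.0) / math.sqrt(n / 12.0)

print(approx_standard_normal())  # one approximately N(0, 1) draw
```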
If the single distributions are not identical, but have a common length of their support, $b_i - a_i = b - a$ for $i = 1, \ldots, n$, then it is possible to reduce the problem of deriving the distribution of X to the identically distributed case by suitable transformations. However, allowing arbitrary uniform distributions requires a different approach for determining the distribution function of X.
Convolution of Arbitrary Uniform Distributions
The convolution of arbitrary uniform distributions can be obtained
by a result given
in [1] which refers to the distribution function of a linear
combination of independent
U [0, 1]-distributed random variables. However, the given formula is
rather unsuited for practical applications and, therefore, a different
representation of the distribution function is derived here.
