
Univariate Distribution Relationships

Lawrence M. Leemis and Jacquelyn T. McQueston


Probability distributions are traditionally treated separately in introductory mathematical statistics textbooks. A figure is presented here that shows properties that individual distributions possess and many of the relationships between these distributions.

KEY WORDS: Asymptotic relationships; Distribution properties; Limiting distributions; Stochastic parameters; Transformations.

1. INTRODUCTION

Introductory probability and statistics textbooks typically introduce common univariate distributions individually, and seldom report all of the relationships between these distributions. This article contains an update of a figure presented by Leemis (1986) that shows the properties of and relationships between several common univariate distributions. More detail concerning these distributions is given by Johnson, Kotz, and Balakrishnan (1994, 1995) and Johnson, Kemp, and Kotz (2005). More concise treatments are given by Balakrishnan and Nevzorov (2003), Evans, Hastings, and Peacock (2000), Ord (1972), Patel, Kapadia, and Owen (1976), Patil, Boswell, Joshi, and Ratnaparkhi (1985), Patil, Boswell, and Ratnaparkhi (1985), and Shapiro and Gross (1981). Figures similar to the one presented here have appeared in Casella and Berger (2002), Marshall and Olkin (1985), Nakagawa and Yoda (1977), Song (2005), and Taha (1982).

Figure 1 contains 76 univariate probability distributions. There are 19 discrete and 57 continuous models. Discrete distributions are displayed in rectangular boxes; continuous distributions are displayed in rounded boxes. The discrete distributions are at the top of the figure, with the exception of the Benford distribution. A distribution is described by two lines of text in each box. The first line gives the name of the distribution and its parameters. The second line contains the properties (described in the next section) that the distribution assumes.

The parameterizations for the distributions are given in the Appendix. If a distribution is known by several names (e.g., the normal distribution is often called the Gaussian distribution), this is indicated in the Appendix following the name of the distribution. The parameters typically satisfy the following conditions:

• n, with or without subscripts, is a positive integer;
• p is a parameter satisfying 0 < p < 1;
• α and σ, with or without subscripts, are positive scale parameters;
• β, γ, and κ are positive shape parameters;
• μ, a, and b are location parameters;
• λ and δ are positive parameters.

Exceptions to these rules, such as the rectangular parameter n, are given in the Appendix after any aliases for the distribution. Additionally, any parameters not described above are explicitly listed in the Appendix. For the sake of brevity, many of the distributions have several mathematical forms, only one of which is presented here (e.g., the extreme value and discrete Weibull distributions).

There are numerous distributions that have not been included in the chart because of space limitations or because the distribution is not related to one of the distributions currently on the chart. These include Bézier curves (Flanigan–Wagner and Wilson 1993); the Burr distribution (Crowder et al. 1991, p. 33, and Johnson, Kotz, and Balakrishnan 1994, pp. 15–63); the generalized beta distribution (McDonald 1984); the generalized exponential distribution (Gupta and Kundu 2007); the generalized F distribution (Prentice 1975); Johnson curves (Johnson, Kotz, and Balakrishnan 1994, pp. 15–63); the kappa distribution (Hosking 1994); the Kolmogorov–Smirnov one-sample distribution (parameters estimated from data); the Kolmogorov–Smirnov two-sample distribution (Boomsma and Molenaar 1994); the generalized lambda distribution (Ramberg and Schmeiser 1974); the Maxwell distribution (Balakrishnan and Nevzorov 2003, p. 232); Pearson systems (Johnson, Kotz, and Balakrishnan 1994, pp. 15–63); and the generalized Waring distribution (Hogg, McKean, and Craig 2005, p. 195).

Lawrence M. Leemis is a Professor, Department of Mathematics, The College of William & Mary, Williamsburg, VA 23187–8795 (E-mail: leemis@math.wm.edu). Jacquelyn T. McQueston is an Operations Researcher, Northrop Grumman Corporation, Chantilly, VA 20151. The authors are grateful for the support from The College of William & Mary through a summer research grant, a faculty research assignment, and an NSF CSEMS grant DUE–0123022. They also express their gratitude to the students in CS 688 at William & Mary, the editor, a referee, Bruce Schmeiser, John Drew, and Diane Evans for their careful proofreading of this article. Please e-mail the first author with updates and corrections to the chart given in this article, which will be posted at www.math.wm.edu/~leemis.

© 2008 American Statistical Association  DOI: 10.1198/000313008X270448  The American Statistician, February 2008, Vol. 62, No. 1  45
Likewise, Devroye (2006) refers to Dickman's, Kolmogorov–Smirnov, Kummer's, Linnik–Laha, theta, and de la Vallée–Poussin distributions in his chapter on variate generation.

2. DISTRIBUTION PROPERTIES

There are several properties that apply to individual distributions listed in Figure 1.

• The linear combination property (L) indicates that linear combinations of independent random variables having this particular distribution come from the same distribution family.
Example: If X_i ∼ N(μ_i, σ_i^2) for i = 1, 2, ..., n; a_1, a_2, ..., a_n are real constants; and X_1, X_2, ..., X_n are independent, then

  Σ_{i=1}^n a_i X_i ∼ N(Σ_{i=1}^n a_i μ_i, Σ_{i=1}^n a_i^2 σ_i^2).

• The convolution property (C) indicates that sums of independent random variables having this particular distribution come from the same distribution family.
Example: If X_i ∼ χ^2(n_i) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  Σ_{i=1}^n X_i ∼ χ^2(Σ_{i=1}^n n_i).

• The scaling property (S) implies that any positive real constant times a random variable having this distribution comes from the same distribution family.
Example: If X ∼ Weibull(α, β) and k is a positive real constant, then

  kX ∼ Weibull(αk^β, β).

• The product property (P) indicates that products of independent random variables having this particular distribution come from the same distribution family.
Example: If X_i ∼ lognormal(μ_i, σ_i^2) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  Π_{i=1}^n X_i ∼ lognormal(Σ_{i=1}^n μ_i, Σ_{i=1}^n σ_i^2).

• The inverse property (I) indicates that the reciprocal of a random variable of this type comes from the same distribution family.
Example: If X ∼ F(n_1, n_2), then

  1/X ∼ F(n_2, n_1).

• The minimum property (M) indicates that the smallest of independent and identically distributed random variables from a distribution comes from the same distribution family.
Example: If X_i ∼ exponential(α_i) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  min{X_1, X_2, ..., X_n} ∼ exponential(1 / Σ_{i=1}^n (1/α_i)).

• The maximum property (X) indicates that the largest of independent and identically distributed random variables from a distribution comes from the same distribution family.
Example: If X_i ∼ standard power(β_i) for i = 1, 2, ..., n, and X_1, X_2, ..., X_n are independent, then

  max{X_1, X_2, ..., X_n} ∼ standard power(Σ_{i=1}^n β_i).

• The forgetfulness property (F), more commonly known as the memoryless property, indicates that the conditional distribution of a random variable, given that it exceeds a value in its support, is identical to the unconditional distribution. The geometric and exponential distributions are the only two distributions with this property. This property is a special case of the residual property.

• The residual property (R) indicates that the conditional distribution of a random variable left-truncated at a value in its support belongs to the same distribution family as the unconditional distribution.
Example: If X ∼ Uniform(a, b), and k is a real constant satisfying a < k < b, then the conditional distribution of X given X > k belongs to the uniform family.

• The variate generation property (V) indicates that the inverse cumulative distribution function of a continuous random variable can be expressed in closed form. For a discrete random variable, this property indicates that a variate can be generated in an O(1) algorithm that does not cycle through the support values or rely on a special property.
Example: If X ∼ exponential(α), then

  F^{-1}(u) = −α log(1 − u), 0 < u < 1.

Since property L implies properties C and S, the C and S properties are not listed on a distribution having the L property. Similarly, property F ⇒ property R.

Some of the properties apply only in restricted cases. The minimum property applies to the Weibull distribution, for example, only when the shape parameter is fixed. The Weibull distribution has M_β on the second line in Figure 1 to indicate that the property is valid only in this restricted case.
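These closed-form properties are easy to spot-check by simulation. The following sketch is an illustration added here (it is not part of the original article); it assumes Python with NumPy and uses the paper's scale parameterization, in which exponential(α) has mean α.

```python
import numpy as np

rng = np.random.default_rng(1)
alphas = np.array([1.0, 2.0, 4.0])  # scale parameters alpha_i (means)

# Minimum property (M): the minimum of independent exponential(alpha_i)
# random variables is exponential with mean 1 / sum(1/alpha_i).
samples = rng.exponential(scale=alphas, size=(200_000, 3))
predicted_mean = 1.0 / np.sum(1.0 / alphas)   # = 4/7 for these alphas
observed_mean = samples.min(axis=1).mean()

# Variate generation property (V): F^{-1}(u) = -alpha * log(1 - u)
# converts standard uniforms into exponential(alpha) variates.
u = rng.uniform(size=200_000)
inverse_cdf_mean = (-2.0 * np.log(1.0 - u)).mean()   # alpha = 2
```

With samples of this size, observed_mean settles near 4/7 and inverse_cdf_mean near 2, as the two properties predict.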

46 Teacher’s Corner
Figure 1. Univariate distribution relationships.



3. RELATIONSHIPS AMONG THE DISTRIBUTIONS

There are three types of lines used to connect the distributions to one another. The solid line is used for special cases and transformations. Transformations typically have an X on their label to distinguish them from special cases. The term "transformation" is used rather loosely here, to include the distribution of an order statistic, truncating a random variable, or taking a mixture of random variables. The dashed line is used for asymptotic relationships, which are typically in the limit as one or more parameters approach the boundary of the parameter space. The dotted line is used for Bayesian relationships (e.g., Beta–binomial, Beta–Pascal, Gamma–normal, and Gamma–Poisson). The binomial, chi-square, exponential, gamma, normal, and U(0, 1) distributions emerge as hubs, highlighting their centrality in applied statistics. Summation limits run from i = 1 to n. The notation X_(r) denotes the rth order statistic drawn from a random sample of size n.

There are certain special cases where distributions overlap for just a single setting of their parameters. Examples include (a) the exponential distribution with a mean of two and the chi-square distribution with two degrees of freedom, (b) the chi-square distribution with an even number of degrees of freedom and the Erlang distribution with scale parameter two, and (c) the Kolmogorov–Smirnov distribution (all-parameters-known case) for a sample of size n = 1 and the U(1/2, 1) distribution. Each of these cases is indicated by a double-headed arrow.

The probability integral transformation allows a line to be drawn, in theory, between the standard uniform distribution and all others, since F(X) ∼ U(0, 1). Similarly, a line could be drawn between the unit exponential distribution and all others, since H(X) ∼ exponential(1), where H(x) = ∫_{−∞}^x f(t)/(1 − F(t)) dt is the cumulative hazard function.

All random variables that can be expressed as sums (e.g., the Erlang as the sum of independent and identically distributed exponential random variables) converge asymptotically in a parameter to the normal distribution by the central limit theorem. These distributions include the binomial, chi-square, Erlang, gamma, hypoexponential, and Pascal distributions. Furthermore, all distributions have an asymptotic relationship with the normal distribution (by the central limit theorem if sums of random variables are considered).

Many of the transformations can be inverted, and this is indicated on the chart by a double-headed arrow between two distributions. Consider the relationship between the normal distribution and the standard normal distribution. If X ∼ N(μ, σ^2), then (X − μ)/σ ∼ N(0, 1), as indicated on the chart. Conversely, if X ∼ N(0, 1), then μ + σX ∼ N(μ, σ^2). The first direction of the transformation is useful for standardizing random variables to be used for table lookup, while the second direction is useful for variate generation. In most cases, though, an inverse transformation is implicit and is not listed on the chart for brevity (e.g., an extreme value random variable as the logarithm of a Weibull random variable, and a Weibull random variable as the exponential of an extreme value random variable).

Several of these relationships hint at further distributions that have not yet been developed. First, the extreme value and log gamma distributions indicate that the logarithm of any survival distribution results in a distribution with support over the entire real axis. Second, the inverted gamma distribution indicates that the reciprocal of any survival distribution results in another survival distribution. Third, switching the roles of F(x) and F^{-1}(u) for a random variable with support on (0, 1) results in a complementary distribution (e.g., Jones 2002).

Additionally, the transformations in Figure 1 can be used to give intuition for some random variate generation routines. The Box–Muller algorithm, for example, converts a U(0, 1) random variable to an exponential, to a chi-square, to a standard normal, and finally to a normal random variable.

Redundant arrows have typically not been drawn. An arrow between the minimax distribution and the standard uniform distribution has not been drawn because of the two arrows connecting the minimax distribution to the standard power distribution and the standard power distribution to the standard uniform distribution. Likewise, although the exponential distribution is a special case of the gamma distribution when the shape parameter equals 1, this is not explicitly indicated because of the special case involving the Erlang distribution.

In order to preserve a planar graph, several relationships are not included, such as those that would not fit on the chart or that involved distributions that were too far apart. Examples include:

• A geometric random variable is the floor of an exponential random variable.

• A rectangular random variable is the floor of a uniform random variable.

• An exponential random variable is a special case of a Makeham random variable with δ = 0.

• A standard power random variable is a special case of a beta random variable with δ = 1.

• If X has the F distribution with parameters n_1 and n_2, then 1/(1 + (n_1/n_2)X) has the beta distribution (Hogg, McKean, and Craig 2005, p. 189).

• The doubly noncentral F distribution with n_1, n_2 degrees of freedom and noncentrality parameters δ, γ is defined as the distribution of (X_1(δ)/n_1)(X_2(γ)/n_2)^{−1}, where X_1(δ), X_2(γ) are noncentral chi-square random variables with n_1, n_2 degrees of freedom, respectively (Johnson, Kotz, and Balakrishnan 1995, p. 480).

• A normal and a uniform random variable are special and limiting cases of an error random variable (Evans, Hastings, and Peacock 2000, p. 76).

• A binomial random variable is a special case of a power series random variable (Evans, Hastings, and Peacock 2000, p. 166).

• The limit of a von Mises random variable is a normal random variable as κ → ∞ (Evans, Hastings, and Peacock 2000, p. 191).

• The half-normal, Rayleigh, and Maxwell–Boltzmann distributions are special cases of the chi distribution with n = 1, 2, and 3 degrees of freedom (Johnson, Kotz, and Balakrishnan 1994, p. 417).

• A function of the ratio of two independent generalized gamma random variables has the beta distribution (Stacy 1962).

Additionally, there are transformations in which two distributions are combined to obtain a third; these were also omitted to maintain a planar graph. Two such examples are:

• The t distribution with n degrees of freedom is defined as the distribution of

  Z / √(χ^2(n)/n),

where Z is a standard normal random variable and χ^2(n) is a chi-square random variable with n degrees of freedom, independent of Z (Evans, Hastings, and Peacock 2000, p. 180).

• The noncentral beta distribution with noncentrality parameter δ is defined as the distribution of

  X / (X + Y),

where X is a noncentral chi-square random variable with parameters (β, δ) and Y is a central chi-square random variable with γ degrees of freedom (Evans, Hastings, and Peacock 2000, p. 42).

References for distributions not typically covered in introductory probability and statistics textbooks include:

• arctan distribution: Glen and Leemis (1997)
• Benford distribution: Benford (1938)
• exponential power distribution: Smith and Bain (1975)
• extreme value distribution: de Haan and Ferreira (2006)
• generalized gamma distribution: Stacy (1962)
• generalized Pareto distribution: Davis and Feldstein (1979)
• Gompertz distribution: Jordan (1967)
• hyperexponential and hypoexponential distributions: Ross (2007)
• IDB distribution: Hjorth (1980)
• inverse Gaussian distribution: Chhikara and Folks (1989), Seshadri (1993)
• inverted gamma distribution: Casella and Berger (2002)
• logarithm distribution: Johnson, Kemp, and Kotz (2005)
• logistic–exponential distribution: Lan and Leemis (2007)
• Makeham distribution: Jordan (1967)
• Muth's distribution: Muth (1977)
• negative hypergeometric distribution: Balakrishnan and Nevzorov (2003), Miller and Fridell (2007)
• power distribution: Balakrishnan and Nevzorov (2003)
• TSP distribution: Kotz and van Dorp (2004)
• Zipf distribution: Ross (2006).

A. APPENDIX: PARAMETERIZATIONS

A.1 Discrete Distributions

Benford:
f(x) = log10(1 + 1/x), x = 1, 2, ..., 9

Bernoulli:
f(x) = p^x (1 − p)^{1−x}, x = 0, 1

Beta–binomial:
f(x) = [Γ(x + a) Γ(n − x + b) Γ(a + b) Γ(n + 2)] / [(n + 1) Γ(a + b + n) Γ(a) Γ(b) Γ(x + 1) Γ(n − x + 1)], x = 0, 1, ..., n

Beta–Pascal (factorial):
f(x) = C(n − 1 + x, x) B(n + a, b + x) / B(a, b), x = 0, 1, ...

Binomial:
f(x) = C(n, x) p^x (1 − p)^{n−x}, x = 0, 1, ..., n

Discrete uniform:
f(x) = 1/(b − a + 1), x = a, a + 1, ..., b

Discrete Weibull:
f(x) = (1 − p)^{x^β} − (1 − p)^{(x+1)^β}, x = 0, 1, ...

Gamma–Poisson:
f(x) = [Γ(x + β) α^x] / [Γ(β) (1 + α)^{β+x} x!], x = 0, 1, ...

Geometric:
f(x) = p(1 − p)^x, x = 0, 1, ...

Hypergeometric:
f(x) = C(n_1, x) C(n_3 − n_1, n_2 − x) / C(n_3, n_2), x = max(0, n_1 + n_2 − n_3), ..., min(n_1, n_2)
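As a quick sanity check on the mass functions above (an added illustration, not part of the original article), the Benford and binomial pmfs can be evaluated and shown to sum to one. Only the Python standard library is assumed; C(n, x) denotes the binomial coefficient, and the parameter values n = 5, p = 0.3 are arbitrary choices.

```python
import math

# Benford pmf: f(x) = log10(1 + 1/x) for x = 1, ..., 9.
benford = [math.log10(1 + 1 / x) for x in range(1, 10)]
benford_total = sum(benford)   # a pmf must sum to 1

# Binomial pmf with illustrative parameters n = 5, p = 0.3.
n, p = 5, 0.3
binom = [math.comb(n, x) * p**x * (1 - p) ** (n - x) for x in range(n + 1)]
binom_total = sum(binom)
```

Both totals come out to 1 (up to floating-point rounding), and benford[0] equals log10(2), the familiar 30.1% frequency of a leading digit of 1.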



Logarithm (logarithmic series; 0 < c < 1):
f(x) = −(1 − c)^x / (x log c), x = 1, 2, ...

Negative hypergeometric:
f(x) = C(n_1 + x − 1, x) C(n_3 − n_1 + n_2 − x − 1, n_2 − x) / C(n_3 + n_2 − 1, n_2), x = max(0, n_1 + n_2 − n_3), ..., n_2

Pascal (negative binomial):
f(x) = C(n − 1 + x, x) p^n (1 − p)^x, x = 0, 1, ...

Poisson (μ > 0):
f(x) = μ^x e^{−μ} / x!, x = 0, 1, ...

Polya:
f(x) = C(n, x) [Π_{j=0}^{x−1} (p + jβ)] [Π_{k=0}^{n−x−1} (1 − p + kβ)] / [Π_{i=0}^{n−1} (1 + iβ)], x = 0, 1, ..., n

Power series (c > 0; A(c) = Σ_x a_x c^x):
f(x) = a_x c^x / A(c), x = 0, 1, ...

Rectangular (discrete uniform; n = 0, 1, ...):
f(x) = 1/(n + 1), x = 0, 1, ..., n

Zeta:
f(x) = 1 / [x^α Σ_{i=1}^∞ (1/i)^α], x = 1, 2, ...

Zipf (α ≥ 0):
f(x) = 1 / [x^α Σ_{i=1}^n (1/i)^α], x = 1, 2, ..., n

A.2 Continuous Distributions

Arcsin:
f(x) = 1 / [π √(x(1 − x))], 0 < x < 1

Arctangent (−∞ < φ < ∞):
f(x) = λ / [(arctan(λφ) + π/2)(1 + λ^2 (x − φ)^2)], x ≥ 0

Beta:
f(x) = [Γ(β + γ) / (Γ(β) Γ(γ))] x^{β−1} (1 − x)^{γ−1}, 0 < x < 1

Cauchy (Lorentz, Breit–Wigner; −∞ < a < ∞):
f(x) = 1 / [απ (1 + ((x − a)/α)^2)], −∞ < x < ∞

Chi:
f(x) = [1 / (2^{n/2−1} Γ(n/2))] x^{n−1} e^{−x^2/2}, x > 0

Chi-square:
f(x) = [1 / (2^{n/2} Γ(n/2))] x^{n/2−1} e^{−x/2}, x > 0

Doubly noncentral F:
f(x) = Σ_{j=0}^∞ Σ_{k=0}^∞ [e^{−δ/2} (δ/2)^j / j!] [e^{−γ/2} (γ/2)^k / k!] n_1^{(n_1/2)+j} n_2^{(n_2/2)+k} x^{(n_1/2)+j−1} (n_2 + n_1 x)^{−(n_1+n_2)/2 − j − k} [B((n_1/2) + j, (n_2/2) + k)]^{−1}, x > 0

Doubly noncentral t:
See Johnson, Kotz, and Balakrishnan (1995, p. 533)

Erlang:
f(x) = [1 / (α^n (n − 1)!)] x^{n−1} e^{−x/α}, x > 0

Error (exponential power, general error; −∞ < a < ∞, b > 0, c > 0):
f(x) = exp(−(|x − a|/b)^{2/c} / 2) / [b 2^{c/2+1} Γ(1 + c/2)], −∞ < x < ∞

Exponential (negative exponential):
f(x) = (1/α) e^{−x/α}, x > 0

Exponential power:
f(x) = e^{1 − e^{λx^κ}} e^{λx^κ} λκ x^{κ−1}, x > 0

Extreme value (Gumbel):
f(x) = (β/α) e^{βx − e^{βx}/α}, −∞ < x < ∞

F (variance ratio, Fisher–Snedecor):
f(x) = [Γ((n_1 + n_2)/2) (n_1/n_2)^{n_1/2} x^{n_1/2−1}] / [Γ(n_1/2) Γ(n_2/2) ((n_1/n_2)x + 1)^{(n_1+n_2)/2}], x > 0

Gamma:
f(x) = [1 / (α^β Γ(β))] x^{β−1} e^{−x/α}, x > 0

Gamma–normal:
See Evans, Hastings, and Peacock (2000, p. 103)

Generalized gamma:
f(x) = [γ / (α^{γβ} Γ(β))] x^{γβ−1} e^{−(x/α)^γ}, x > 0

Generalized Pareto:
f(x) = (γ + κ/(x + δ)) (1 + x/δ)^{−κ} e^{−γx}, x > 0

Gompertz (κ > 1):
f(x) = δκ^x e^{−δ(κ^x − 1)/log κ}, x > 0

Hyperbolic–secant:
f(x) = sech(πx), −∞ < x < ∞

Hyperexponential (p_i > 0, Σ_{i=1}^n p_i = 1):
f(x) = Σ_{i=1}^n (p_i/α_i) e^{−x/α_i}, x > 0

Hypoexponential (α_i ≠ α_j for i ≠ j):
f(x) = Σ_{i=1}^n [Π_{j=1, j≠i}^n α_i/(α_i − α_j)] (1/α_i) e^{−x/α_i}, x > 0

IDB (γ ≥ 0):
f(x) = [(1 + κx)δx + γ] e^{−δx^2/2} / (1 + κx)^{γ/κ+1}, x > 0

Inverse Gaussian (Wald; μ > 0):
f(x) = √(λ/(2πx^3)) e^{−λ(x−μ)^2/(2μ^2 x)}, x > 0

Inverted beta (β > 1, γ > 1):
f(x) = x^{β−1} (1 + x)^{−β−γ} / B(β, γ), x > 0

Inverted gamma:
f(x) = [1/(Γ(α) β^α)] x^{−α−1} e^{−1/(βx)}, x > 0

Kolmogorov–Smirnov:
See Drew, Glen, and Leemis (2000)

Laplace (double exponential):
f(x) = (1/(α_1 + α_2)) e^{x/α_1}, x < 0; (1/(α_1 + α_2)) e^{−x/α_2}, x > 0

Log gamma:
f(x) = [1/(α^β Γ(β))] e^{βx} e^{−e^x/α}, −∞ < x < ∞

Log logistic:
f(x) = λκ(λx)^{κ−1} / [1 + (λx)^κ]^2, x > 0

Log normal (−∞ < α < ∞):
f(x) = [1/(√(2π) βx)] exp(−(log(x/α)/β)^2 / 2), x > 0

Logistic:
f(x) = λ^κ κ e^{κx} / [1 + (λe^x)^κ]^2, −∞ < x < ∞

Logistic–exponential:
f(x) = αβ(e^{αx} − 1)^{β−1} e^{αx} / [1 + (e^{αx} − 1)^β]^2, x > 0

Lomax:
f(x) = λκ / (1 + λx)^{κ+1}, x > 0

Makeham (κ > 1):
f(x) = (γ + δκ^x) e^{−γx − δ(κ^x − 1)/log κ}, x > 0

Minimax:
f(x) = βγ x^{β−1} (1 − x^β)^{γ−1}, 0 < x < 1

Muth:
f(x) = (e^{κx} − κ) e^{−(1/κ)e^{κx} + κx + 1/κ}, x > 0

Noncentral beta:
f(x) = Σ_{i=0}^∞ [e^{−δ/2} (δ/2)^i / i!] [Γ(i + β + γ) / (Γ(γ) Γ(i + β))] x^{i+β−1} (1 − x)^{γ−1}, 0 < x < 1

Noncentral chi-square:
f(x) = Σ_{k=0}^∞ [e^{−δ/2} (δ/2)^k / k!] [e^{−x/2} x^{(n+2k)/2 − 1} / (2^{(n+2k)/2} Γ((n + 2k)/2))], x > 0

Noncentral F:
f(x) = Σ_{i=0}^∞ [Γ((2i + n_1 + n_2)/2) (n_1/n_2)^{(2i+n_1)/2} x^{(2i+n_1−2)/2} e^{−δ/2} (δ/2)^i] / [Γ(n_2/2) Γ((2i + n_1)/2) i! ((n_1/n_2)x + 1)^{(2i+n_1+n_2)/2}], x > 0

Noncentral t (−∞ < δ < ∞):
f(x) = [n^{n/2} e^{−δ^2/2} / (√π Γ(n/2) (n + x^2)^{(n+1)/2})] Σ_{i=0}^∞ [Γ((n + i + 1)/2) / i!] (xδ√2 / √(n + x^2))^i, −∞ < x < ∞
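Series densities such as the noncentral chi-square can be checked by truncating the sum and integrating numerically; the mean should come out near n + δ. The sketch below is an added illustration using only the Python standard library; the 40-term truncation, the parameter values n = 3 and δ = 2, and the integration grid are all arbitrary choices.

```python
import math

def ncx2_pdf(x, n, delta, terms=40):
    # Truncation of the Poisson-mixture series for the noncentral
    # chi-square pdf (n degrees of freedom, noncentrality delta).
    total = 0.0
    for k in range(terms):
        weight = math.exp(-delta / 2) * (delta / 2) ** k / math.factorial(k)
        central = (
            math.exp(-x / 2) * x ** ((n + 2 * k) / 2 - 1)
            / (2 ** ((n + 2 * k) / 2) * math.gamma((n + 2 * k) / 2))
        )
        total += weight * central
    return total

# Midpoint-rule check of normalization and of the mean n + delta.
n_df, delta = 3, 2.0
h = 60.0 / 20_000
ncx2_area = 0.0
ncx2_mean = 0.0
for i in range(20_000):
    x = (i + 0.5) * h
    fx = ncx2_pdf(x, n_df, delta)
    ncx2_area += fx * h
    ncx2_mean += x * fx * h
```

The area comes out near 1 and the mean near n + δ = 5, consistent with the series form of the density.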

The American Statistician, February 2008, Vol. 62, No. 1 51


Normal (Gaussian):
f(x) = [1/(√(2π) σ)] exp(−((x − μ)/σ)^2 / 2), −∞ < x < ∞

Pareto:
f(x) = κλ^κ / x^{κ+1}, x > λ

Power:
f(x) = βx^{β−1} / α^β, 0 < x < α

Rayleigh:
f(x) = (2x/α) e^{−x^2/α}, x > 0

Standard Cauchy:
f(x) = 1 / [π(1 + x^2)], −∞ < x < ∞

Standard normal:
f(x) = e^{−x^2/2} / √(2π), −∞ < x < ∞

Standard power:
f(x) = βx^{β−1}, 0 < x < 1

Standard triangular:
f(x) = x + 1, −1 < x < 0; 1 − x, 0 ≤ x < 1

Standard uniform:
f(x) = 1, 0 < x < 1

t (Student's t):
f(x) = Γ((n + 1)/2) / [(nπ)^{1/2} Γ(n/2) (x^2/n + 1)^{(n+1)/2}], −∞ < x < ∞

Triangular (a < m < b):
f(x) = 2(x − a)/[(b − a)(m − a)], a < x < m; 2(b − x)/[(b − a)(b − m)], m ≤ x < b

TSP (two-sided power):
f(x) = [n/(b − a)] ((x − a)/(m − a))^{n−1}, a < x ≤ m; [n/(b − a)] ((b − x)/(b − m))^{n−1}, m ≤ x < b

Uniform (continuous rectangular; −∞ < a < b < ∞):
f(x) = 1/(b − a), a < x < b

von Mises (0 < μ < 2π):
f(x) = e^{κ cos(x−μ)} / (2π I_0(κ)), 0 < x < 2π

Wald (standard Wald):
f(x) = √(λ/(2πx^3)) e^{−λ(x−1)^2/(2x)}, x > 0

Weibull:
f(x) = (β/α) x^{β−1} exp(−(1/α)x^β), x > 0

A.3 Functions

Gamma function:
Γ(c) = ∫_0^∞ e^{−x} x^{c−1} dx

Beta function:
B(a, b) = ∫_0^1 x^{a−1} (1 − x)^{b−1} dx

Modified Bessel function of the first kind of order 0:
I_0(κ) = Σ_{i=0}^∞ κ^{2i} / (2^{2i} (i!)^2)

[Received October 2007. Revised December 2007.]

REFERENCES

Balakrishnan, N., and Nevzorov, V.B. (2003), A Primer on Statistical Distributions, Hoboken, NJ: Wiley.
Benford, F. (1938), "The Law of Anomalous Numbers," Proceedings of the American Philosophical Society, 78, 551–572.
Boomsma, A., and Molenaar, I.W. (1994), "Four Electronic Tables for Probability Distributions," The American Statistician, 48, 153–162.
Casella, G., and Berger, R. (2002), Statistical Inference (2nd ed.), Belmont, CA: Duxbury.
Chhikara, R.S., and Folks, L.S. (1989), The Inverse Gaussian Distribution: Theory, Methodology and Applications, New York: Marcel Dekker, Inc.
Crowder, M.J., Kimber, A.C., Smith, R.L., and Sweeting, T.J. (1991), Statistical Analysis of Reliability Data, New York: Chapman and Hall.
Davis, H.T., and Feldstein, M.L. (1979), "The Generalized Pareto Law as a Model for Progressively Censored Survival Data," Biometrika, 66, 299–306.
Devroye, L. (2006), "Nonuniform Random Variate Generation," in Simulation, eds. S.G. Henderson and B.L. Nelson, Vol. 13, Handbooks in Operations Research and Management Science, Amsterdam: North–Holland.
Drew, J.H., Glen, A.G., and Leemis, L.M. (2000), "Computing the Cumulative Distribution Function of the Kolmogorov–Smirnov Statistic," Computational Statistics and Data Analysis, 34, 1–15.
Evans, M., Hastings, N., and Peacock, B. (2000), Statistical Distributions (3rd ed.), New York: Wiley.
Flanigan–Wagner, M., and Wilson, J.R. (1993), "Using Univariate Bézier Distributions to Model Simulation Input Processes," in Proceedings of the 1993 Winter Simulation Conference, eds. G.W. Evans, M. Mollaghasemi, E.C. Russell, and W.E. Biles, Institute of Electrical and Electronics Engineers, pp. 365–373.
Glen, A., and Leemis, L.M. (1997), "The Arctangent Survival Distribution," Journal of Quality Technology, 29, 205–210.
Gupta, R.D., and Kundu, D. (2007), "Generalized Exponential Distribution: Existing Results and Some Recent Developments," Journal of Statistical Planning and Research, 137, 3537–3547.
de Haan, L., and Ferreira, A. (2006), Extreme Value Theory: An Introduction, New York: Springer.
Hjorth, U. (1980), "A Reliability Distribution with Increasing, Decreasing, Constant and Bathtub-Shaped Failure Rates," Technometrics, 22, 99–107.
Hogg, R.V., McKean, J.W., and Craig, A.T. (2005), Introduction to Mathematical Statistics (6th ed.), Upper Saddle River, NJ: Prentice Hall.
Hosking, J.R.M. (1994), "The Four-Parameter Kappa Distribution," IBM Journal of Research and Development, 38, 251–258.
Johnson, N.L., Kemp, A.W., and Kotz, S. (2005), Univariate Discrete Distributions (3rd ed.), New York: Wiley.
Johnson, N.L., Kotz, S., and Balakrishnan, N. (1994), Continuous Univariate Distributions (Vol. I, 2nd ed.), New York: Wiley.
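The special functions in Appendix A.3 are easy to validate numerically. The added sketch below (not part of the article) compares a truncation of the I_0 series against NumPy's np.i0, and checks the beta function against the identity B(a, b) = Γ(a)Γ(b)/Γ(a + b); the arguments κ = 2 and (a, b) = (2.5, 4) are arbitrary choices.

```python
import math
import numpy as np

def bessel_i0(kappa, terms=30):
    # Truncated series from Appendix A.3:
    # I_0(kappa) = sum_i kappa^(2i) / (2^(2i) (i!)^2)
    return sum(kappa ** (2 * i) / (2 ** (2 * i) * math.factorial(i) ** 2)
               for i in range(terms))

# Compare the series against NumPy's implementation at kappa = 2.
series_gap = abs(bessel_i0(2.0) - float(np.i0(2.0)))

# Check the beta function via a midpoint-rule evaluation of its
# defining integral against the gamma-function identity.
a_par, b_par, m = 2.5, 4.0, 10_000
mid = [(i + 0.5) / m for i in range(m)]
beta_integral = sum(t ** (a_par - 1) * (1 - t) ** (b_par - 1) for t in mid) / m
beta_identity_gap = abs(
    beta_integral
    - math.gamma(a_par) * math.gamma(b_par) / math.gamma(a_par + b_par)
)
```

Both gaps are negligible, confirming that the series and integral forms agree with standard implementations.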
 

Johnson, N.L., Kotz, S., and Balakrishnan, N. (1995), Continuous Univariate Distributions (Vol. II, 2nd ed.), New York: Wiley.
Jones, M.C. (2002), "The Complementary Beta Distribution," Journal of Statistical Planning and Inference, 104, 329–337.
Jordan, C.W. (1967), Life Contingencies, Chicago: Society of Actuaries.
Kotz, S., and van Dorp, J.R. (2004), Beyond Beta: Other Continuous Families of Distributions with Bounded Support and Applications, Hackensack, NJ: World Scientific.
Lan, L., and Leemis, L. (2007), "The Logistic–Exponential Survival Distribution," Technical Report, The College of William & Mary, Department of Mathematics.
Leemis, L. (1986), "Relationships Among Common Univariate Distributions," The American Statistician, 40, 143–146.
Marshall, A.W., and Olkin, I. (1985), "A Family of Bivariate Distributions Generated by the Bivariate Bernoulli Distribution," Journal of the American Statistical Association, 80, 332–338.
McDonald, J.B. (1984), "Some Generalized Functions for the Size Distribution of Income," Econometrica, 52, 647–663.
Miller, G.K., and Fridell, S.L. (2007), "A Forgotten Discrete Distribution? Reviving the Negative Hypergeometric Model," The American Statistician, 61, 347–350.
Muth, E.J. (1977), "Reliability Models with Positive Memory Derived from the Mean Residual Life Function," in The Theory and Applications of Reliability, eds. C.P. Tsokos and I. Shimi, New York: Academic Press, Inc., pp. 401–435.
Nakagawa, T., and Yoda, H. (1977), "Relationships Among Distributions," IEEE Transactions on Reliability, 26, 352–353.
Ord, J.K. (1972), Families of Frequency Distributions, New York: Hafner Publishing.
Patel, J.K., Kapadia, C.H., and Owen, D.B. (1976), Handbook of Statistical Distributions, New York: Marcel Dekker, Inc.
Patil, G.P., Boswell, M.T., Joshi, S.W., and Ratnaparkhi, M.V. (1985), Discrete Models, Burtonsville, MD: International Co-operative Publishing House.
Patil, G.P., Boswell, M.T., and Ratnaparkhi, M.V. (1985), Univariate Continuous Models, Burtonsville, MD: International Co-operative Publishing House.
Prentice, R.L. (1975), "Discrimination Among Some Parametric Models," Biometrika, 62, 607–619.
Ramberg, J.S., and Schmeiser, B.W. (1974), "An Approximate Method for Generating Asymmetric Random Variables," Communications of the Association for Computing Machinery, 17, 78–82.
Ross, S. (2006), A First Course in Probability (7th ed.), Upper Saddle River, NJ: Prentice Hall.
Ross, S. (2007), Introduction to Probability Models (9th ed.), New York: Academic Press.
Seshadri, V. (1993), The Inverse Gaussian Distribution, Oxford: Oxford University Press.
Shapiro, S.S., and Gross, A.J. (1981), Statistical Modeling Techniques, New York: Marcel Dekker.
Smith, R.M., and Bain, L.J. (1975), "An Exponential Power Life-Testing Distribution," Communications in Statistics, 4, 469–481.
Song, W.T. (2005), "Relationships Among Some Univariate Distributions," IIE Transactions, 37, 651–656.
Stacy, E.W. (1962), "A Generalization of the Gamma Distribution," Annals of Mathematical Statistics, 33, 1187–1192.
Taha, H.A. (1982), Operations Research: An Introduction (3rd ed.), New York: Macmillan.

