Normal distribution
From Wikipedia, the free encyclopedia
This article is about the univariate normal distribution. For normally
distributed vectors, see Multivariate normal distribution
</wiki/Multivariate_normal_distribution>.
Normal
Probability density function
</wiki/File:Normal_Distribution_PDF.svg>
The red curve is the /standard normal distribution/
Cumulative distribution function
</wiki/File:Normal_Distribution_CDF.svg>
Notation: \mathcal{N}(\mu,\,\sigma^2)
Parameters: μ ∈ *R*, the mean (location </wiki/Location_parameter>); σ^2 > 0, the variance (squared scale </wiki/Scale_parameter>)
Support </wiki/Support_(mathematics)>: /x/ ∈ *R*
pdf </wiki/Probability_density_function>: \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}
CDF </wiki/Cumulative_distribution_function>: \frac12\left[1 + \operatorname{erf}\left( \frac{x-\mu}{\sigma\sqrt{2}}\right)\right]
Quantile </wiki/Quantile_function>: \mu+\sigma\sqrt{2}\,\operatorname{erf}^{-1}(2F-1)
Mean </wiki/Expected_value>: μ
Median </wiki/Median>: μ
Mode </wiki/Mode_(statistics)>: μ
Variance </wiki/Variance>: \sigma^2
Skewness </wiki/Skewness>: 0
Ex. kurtosis </wiki/Excess_kurtosis>: 0
Entropy </wiki/Information_entropy>: \frac12 \ln(2 \pi e \, \sigma^2)
MGF </wiki/Moment-generating_function>: \exp\{ \mu t + \frac{1}{2}\sigma^2 t^2 \}
CF </wiki/Characteristic_function_(probability_theory)>: \exp\{ i\mu t - \frac{1}{2}\sigma^2 t^2 \}
Fisher information </wiki/Fisher_information>: \begin{pmatrix}1/\sigma^2&0\\0&1/(2\sigma^4)\end{pmatrix}
In probability theory </wiki/Probability_theory>, the *normal* (or
*Gaussian*) *distribution* is a very commonly occurring continuous
probability distribution </wiki/Continuous_probability_distribution>, a
function that tells the probability that any real observation will fall
between any two real limits or real numbers </wiki/Real_number>, as the
curve approaches zero on either side. Normal distributions are extremely
important in statistics </wiki/Statistics> and are often used in the
natural </wiki/Natural_science> and social sciences
</wiki/Social_science> for real-valued random variables
</wiki/Random_variable> whose distributions are not known.^[1]
<#cite_note-1> ^[2] <#cite_note-2>
The normal distribution is immensely useful because of the central limit
theorem </wiki/Central_limit_theorem>, which states that, under mild
conditions, the mean </wiki/Mean> of many random variables
</wiki/Random_variables> independently drawn from the same distribution
is distributed approximately normally, irrespective of the form of the
original distribution: physical quantities that are expected to be the
sum of many independent processes (such as measurement errors
</wiki/Measurement_error>) often have a distribution very close to the
normal. Moreover, many results and methods (such as propagation of
uncertainty </wiki/Propagation_of_uncertainty> and least squares
</wiki/Least_squares> parameter fitting) can be derived analytically in
explicit form when the relevant variables are normally distributed.
The Gaussian distribution is sometimes informally called the *bell
curve*. However, many other distributions are bell-shaped (such as
Cauchy </wiki/Cauchy_distribution>'s, Student
</wiki/Student%27s_t-distribution>'s, and logistic
</wiki/Logistic_distribution>). The terms *Gaussian function
</wiki/Gaussian_function>* and *Gaussian bell curve* are also ambiguous
because they sometimes refer to multiples of the normal distribution
that cannot be directly interpreted in terms of probabilities.
The probability density of the normal distribution is
f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{ -\frac{(x-\mu)^2}{2\sigma^2} }
The parameter μ in this definition is the /mean </wiki/Mean>/ or
/expectation </wiki/Expected_value>/ of the distribution (and also its
median </wiki/Median> and mode </wiki/Mode_(statistics)>). The
parameter σ is its standard deviation </wiki/Standard_deviation>; its
variance </wiki/Variance> is therefore σ^2. A random variable with a
Gaussian distribution is said to be *normally distributed* and is called
a *normal deviate*.
If μ = 0 and σ = 1, the distribution is called the *standard normal
distribution* or the *unit normal distribution*, and a random variable
with that distribution is a *standard normal deviate*.
The normal distribution is the only absolutely continuous
</wiki/Absolute_continuity> distribution all of whose cumulants
</wiki/Cumulant> beyond the first two (i.e., other than the mean and
variance </wiki/Variance>) are zero. It is also the continuous
distribution with the maximum entropy
</wiki/Maximum_entropy_probability_distribution> for a given mean and
variance.^[3] <#cite_note-3> ^[4] <#cite_note-4>
The normal distribution is a subclass of the elliptical distributions
</wiki/Elliptical_distribution>. The normal distribution is symmetric
</wiki/Symmetric_distribution> about its mean, and is non-zero over the
entire real line. As such it may not be a suitable model for variables
that are inherently positive or strongly skewed, such as the weight
</wiki/Weight> of a person or the price of a share
</wiki/Share_(finance)>. Such variables may be better described by other
distributions, such as the log-normal distribution
</wiki/Log-normal_distribution> or the Pareto distribution
</wiki/Pareto_distribution>.
The value of the normal distribution is practically zero when the value
/x/ lies more than a few standard deviations </wiki/Standard_deviation>
away from the mean. Therefore, it may not be an appropriate model when
one expects a significant fraction of outliers </wiki/Outlier>: values
that lie many standard deviations away from the mean. Least squares
and other statistical inference </wiki/Statistical_inference> methods
that are optimal for normally distributed variables often become highly
unreliable when applied to such data. In those cases, a more
heavy-tailed </wiki/Heavy-tailed> distribution should be assumed and the
appropriate robust statistical inference </wiki/Robust_statistics>
methods applied.
The Gaussian distribution belongs to the family of stable distributions
</wiki/Stable_distribution>, which are the attractors of sums of
independent, identically distributed distributions whether or not the
mean or variance is finite. Except for the Gaussian, which is a limiting
case, all stable distributions have heavy tails and infinite variance.
Contents
[hide <#>]
* 1 Definition <#Definition>
  o 1.1 Standard normal distribution <#Standard_normal_distribution>
  o 1.2 General normal distribution <#General_normal_distribution>
  o 1.3 Notation <#Notation>
  o 1.4 Alternative parameterization <#Alternative_parameterization>
* 2 Properties <#Properties>
  o 2.1 Symmetries and derivatives <#Symmetries_and_derivatives>
  o 2.2 Moments <#Moments>
  o 2.3 Fourier transform and characteristic function
    <#Fourier_transform_and_characteristic_function>
  o 2.4 Moment and cumulant generating functions
    <#Moment_and_cumulant_generating_functions>
* 3 Cumulative distribution function <#Cumulative_distribution_function>
  o 3.1 Standard deviation and tolerance intervals
    <#Standard_deviation_and_tolerance_intervals>
  o 3.2 Quantile function <#Quantile_function>
* 4 Zero-variance limit <#Zero-variance_limit>
* 5 The central limit theorem <#The_central_limit_theorem>
* 6 Operations on normal deviates <#Operations_on_normal_deviates>
  o 6.1 Infinite divisibility and Cramér's theorem
    <#Infinite_divisibility_and_Cram.C3.A9r.27s_theorem>
  o 6.2 Bernstein's theorem <#Bernstein.27s_theorem>
* 7 Other properties <#Other_properties>
* 8 Related distributions <#Related_distributions>
  o 8.1 Operations on a single random variable
    <#Operations_on_a_single_random_variable>
  o 8.2 Combination of two independent random variables
    <#Combination_of_two_independent_random_variables>
  o 8.3 Combination of two or more independent random variables
    <#Combination_of_two_or_more_independent_random_variables>
  o 8.4 Operations on the density function
    <#Operations_on_the_density_function>
  o 8.5 Extensions <#Extensions>
* 9 Normality tests <#Normality_tests>
* 10 Estimation of parameters <#Estimation_of_parameters>
* 11 Bayesian analysis of the normal distribution
  <#Bayesian_analysis_of_the_normal_distribution>
  o 11.1 The sum of two quadratics <#The_sum_of_two_quadratics>
    + 11.1.1 Scalar form <#Scalar_form>
    + 11.1.2 Vector form <#Vector_form>
  o 11.2 The sum of differences from the mean
    <#The_sum_of_differences_from_the_mean>
  o 11.3 With known variance <#With_known_variance>
  o 11.4 With known mean <#With_known_mean>
  o 11.5 With unknown mean and unknown variance
    <#With_unknown_mean_and_unknown_variance>
* 12 Occurrence <#Occurrence>
  o 12.1 Exact normality <#Exact_normality>
  o 12.2 Approximate normality <#Approximate_normality>
  o 12.3 Assumed normality <#Assumed_normality>
  o 12.4 Produced normality <#Produced_normality>
* 13 Generating values from normal distribution
  <#Generating_values_from_normal_distribution>
* 14 Numerical approximations for the normal CDF
  <#Numerical_approximations_for_the_normal_CDF>
* 15 History <#History>
  o 15.1 Development <#Development>
  o 15.2 Naming <#Naming>
* 16 See also <#See_also>
* 17 Notes <#Notes>
* 18 Citations <#Citations>
* 19 References <#References>
* 20 External links <#External_links>
Definition
Standard normal distribution
The simplest case of a normal distribution is known as the /standard
normal distribution/. This is a special case where μ = 0 and σ = 1, and
it is described by this probability density function
</wiki/Probability_density_function>:
\phi(x) = \frac{e^{-\frac{1}{2} x^2}}{\sqrt{2\pi}}
The factor 1/\sqrt{2\pi} in this expression ensures that the total area
under the curve φ(/x/) is equal to one.^[5] <#cite_note-5> The 1/2 in
the exponent ensures that the distribution has unit variance (and
therefore also unit standard deviation). This function is symmetric
around /x/ = 0, where it attains its maximum value 1/\sqrt{2\pi}, and
has inflection points </wiki/Inflection_point> at +1 and -1.
Authors may differ also on which normal distribution should be called
the "standard" one. Gauss himself defined the standard normal as having
variance σ^2 = 1/2, that is
\phi(x) = \frac{e^{-x^2}}{\sqrt\pi}
Stigler </wiki/Stephen_Stigler>^[6] <#cite_note-6> goes even further,
defining the standard normal with variance σ^2 = 1/(2π):
\phi(x) = e^{-\pi x^2}
General normal distribution
Any normal distribution is a version of the standard normal distribution
whose domain has been stretched by a factor σ (the standard deviation)
and then translated by μ (the mean value):
f(x, \mu, \sigma) = \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right).
The probability density must be scaled by 1/\sigma so that the integral
is still 1.
If /Z/ is a standard normal deviate, then /X/ = σ/Z/ + μ will have a
normal distribution with expected value μ and standard deviation σ.
Conversely, if /X/ is a general normal deviate, then /Z/ = (/X/ - μ)/σ
will have a standard normal distribution.
Every normal distribution is the exponential of a quadratic function
</wiki/Quadratic_function>:
f(x) = e^{a x^2 + b x + c}
where /a/ is negative and /c/ is b^2/(4a) + \ln(-a/\pi)/2. In this form,
the mean value μ is -b/(2a), and the variance σ^2 is -1/(2a).
For the standard normal distribution, /a/ is -1/2, /b/ is zero, and /c/
is -\ln(2\pi)/2.
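The stretch-and-translate relation above is easy to check numerically. A minimal Python sketch (the helper names `phi` and `normal_pdf` are illustrative, not from the article):

```python
import math

# Standard normal density: phi(z) = exp(-z^2 / 2) / sqrt(2*pi)
def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# General normal density as a stretched/translated standard normal:
# f(x; mu, sigma) = (1/sigma) * phi((x - mu) / sigma)
def normal_pdf(x, mu, sigma):
    return phi((x - mu) / sigma) / sigma

# Standardizing: if X ~ N(mu, sigma^2), then Z = (X - mu)/sigma is standard normal.
x, mu, sigma = 3.0, 1.0, 2.0
z = (x - mu) / sigma
print(normal_pdf(x, mu, sigma), phi(z) / sigma)  # identical by construction
```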
Notation
The standard Gaussian distribution (with zero mean and unit variance) is
often denoted with the Greek letter φ (phi </wiki/Phi_(letter)>).^[7]
<#cite_note-7> The alternative form of the Greek phi letter, ϕ, is
also used quite often.
The normal distribution is also often denoted by /N/(μ, σ^2).^[8]
<#cite_note-8> Thus when a random variable /X/ is distributed normally
with mean μ and variance σ^2, we write
X\ \sim\ \mathcal{N}(\mu,\,\sigma^2).
Alternative parameterization
Some authors advocate using the precision </wiki/Precision_(statistics)>
τ as the parameter defining the width of the distribution, instead of
the deviation σ or the variance σ^2. The precision is normally
defined as the reciprocal of the variance, 1/σ^2.^[9] <#cite_note-9>
The formula for the distribution then becomes
f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{\frac{-\tau(x-\mu)^2}{2}}.
This choice is claimed to have advantages in numerical computations when
σ is very close to zero, and to simplify formulas in some contexts, such
as in the Bayesian inference </wiki/Bayesian_statistics> of variables
with multivariate normal distribution
</wiki/Multivariate_normal_distribution>.
Occasionally, the precision τ is 1/σ, the reciprocal of the standard
deviation; so that
f(x) = \frac{\tau}{\sqrt{2\pi}}\, e^{\frac{-\tau^2(x-\mu)^2}{2}}.
According to Stigler, this formulation is advantageous because of a much
simpler and easier-to-remember formula, the fact that the pdf has unit
height at zero, and simple approximate formulas for the quantiles
</wiki/Quantile> of the distribution.
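Both parameterizations describe the same density; a quick numerical cross-check in Python (the function names are illustrative):

```python
import math

# Density in the usual (mu, sigma) parameterization.
def pdf_sigma(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

# Density in the precision parameterization, tau = 1/sigma^2:
# f(x) = sqrt(tau / (2*pi)) * exp(-tau * (x - mu)^2 / 2)
def pdf_tau(x, mu, tau):
    return math.sqrt(tau / (2.0 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2.0)

mu, sigma = 1.0, 0.5
print(pdf_sigma(2.0, mu, sigma), pdf_tau(2.0, mu, 1.0 / sigma ** 2))  # equal
```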
Properties
Symmetries and derivatives
The normal distribution /f/(/x/), with any mean μ and any positive
deviation σ, has the following properties:
* It is symmetric around the point /x/ = μ, which is at the same time
  the mode </wiki/Mode_(statistics)>, the median </wiki/Median> and
  the mean of the distribution.^[10] <#cite_note-PR2.1.4-10>
* It is unimodal </wiki/Unimodal>: its first derivative
  </wiki/Derivative> is positive for /x/ < μ, negative for /x/ > μ,
  and zero only at /x/ = μ.
* Its density has two inflection points </wiki/Inflection_point>
  (where the second derivative of /f/ is zero and changes sign),
  located one standard deviation away from the mean, namely at
  /x/ = μ - σ and /x/ = μ + σ.^[10] <#cite_note-PR2.1.4-10>
* Its density is log-concave
  </wiki/Logarithmically_concave_function>.^[10] <#cite_note-PR2.1.4-10>
* Its density is infinitely differentiable
  </wiki/Differentiable_function>, indeed supersmooth
  </wiki/Supersmooth> of order 2.^[11] <#cite_note-11>
* Its second derivative /f/″(/x/) is equal to twice its derivative with
  respect to its variance σ^2.
Furthermore, the density φ of the standard normal distribution (with
μ = 0 and σ = 1) also has the following properties:
* Its first derivative φ′(/x/) is -/x/φ(/x/).
* Its second derivative φ″(/x/) is (/x/^2 - 1)φ(/x/).
* More generally, its /n/-th derivative φ^(/n/)(/x/) is (-1)^/n/
  /H_n/(/x/)φ(/x/), where /H_n/ is the Hermite polynomial
  </wiki/Hermite_polynomial> of order /n/.^[12] <#cite_note-12>
* It satisfies the differential equation </wiki/Differential_equation>
\sigma^2 f'(x) + f(x)(x - \mu) = 0, \qquad f(0) = \frac{e^{-\mu^2/(2\sigma^2)}}{\sqrt{2\pi}\,\sigma}
or
f'(x) + \tau f(x)(x - \mu) = 0, \qquad f(0) = \frac{\sqrt{\tau}\, e^{-\mu^2\tau/2}}{\sqrt{2\pi}}.
Moments
See also: List of integrals of Gaussian functions
</wiki/List_of_integrals_of_Gaussian_functions>
The plain and absolute moments </wiki/Moment_(mathematics)> of a
variable /X/ are the expected values of /X^p/ and |/X/|^/p/,
respectively. If the expected value μ of /X/ is zero, these
parameters are called /central moments/. Usually we are interested only
in moments with integer order /p/.
If /X/ has a normal distribution, these moments exist and are finite for
any /p/ whose real part is greater than -1. For any nonnegative integer
/p/, the plain central moments are
\mathrm{E}\left[X^p\right] = \begin{cases} 0 & \text{if }p\text{ is odd,} \\ \sigma^p\,(p-1)!! & \text{if }p\text{ is even.} \end{cases}
Here /n/!! denotes the double factorial </wiki/Double_factorial>, that
is, the product of every number from /n/ to 1 that has the same parity
as /n/.
The central absolute moments coincide with plain moments for all even
orders, but are nonzero for odd orders. For any nonnegative integer /p/,
\operatorname{E}\left[|X|^p\right] = \sigma^p\,(p-1)!! \cdot \left.\begin{cases} \sqrt{\frac{2}{\pi}} & \text{if }p\text{ is odd} \\ 1 & \text{if }p\text{ is even} \end{cases}\right\} = \sigma^p \cdot \frac{2^{\frac{p}{2}}\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}
The last formula is valid also for any noninteger /p/ > -1. When the
mean μ is not zero, the plain and absolute moments can be expressed in
terms of confluent hypergeometric functions
</wiki/Confluent_hypergeometric_function> _1/F/_1 and /U/.^[/citation
needed </wiki/Wikipedia:Citation_needed>/]
\operatorname{E} \left[ X^p \right] =\sigma^p \cdot
(-i\sqrt{2}\sgn\mu)^p \; U\left( {-\frac{1}{2}p},\, \frac{1}{2},\,
-\frac{1}{2}(\mu/\sigma)^2 \right),
\operatorname{E} \left[ |X|^p \right] =\sigma^p \cdot 2^{\frac p 2}
\frac {\Gamma\left(\frac{1+p}{2}\right)}{\sqrt\pi}\; _1F_1\left(
{-\frac{1}{2}p},\, \frac{1}{2},\, -\frac{1}{2}(\mu/\sigma)^2 \right).
These expressions remain valid even if /p/ is not integer. See also
generalized Hermite polynomials
</wiki/Hermite_polynomials#.22Negative_variance.22>.
Order   Non-central moment                                        Central moment
1       μ                                                         0
2       μ^2 + σ^2                                                 σ^2
3       μ^3 + 3μσ^2                                               0
4       μ^4 + 6μ^2σ^2 + 3σ^4                                      3σ^4
5       μ^5 + 10μ^3σ^2 + 15μσ^4                                   0
6       μ^6 + 15μ^4σ^2 + 45μ^2σ^4 + 15σ^6                         15σ^6
7       μ^7 + 21μ^5σ^2 + 105μ^3σ^4 + 105μσ^6                      0
8       μ^8 + 28μ^6σ^2 + 210μ^4σ^4 + 420μ^2σ^6 + 105σ^8           105σ^8
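The closed-form moments in the table can be cross-checked by direct numerical integration. A Python sketch using a simple midpoint rule (the function names and the choice of integration window are illustrative assumptions):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))

def raw_moment(p, mu, sigma, n=200001, half_width=12.0):
    # Midpoint-rule integration of x^p * f(x) over mu +/- half_width*sigma;
    # the truncated tails beyond 12 sigma contribute a negligible amount.
    lo = mu - half_width * sigma
    hi = mu + half_width * sigma
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += (x ** p) * normal_pdf(x, mu, sigma)
    return total * h

mu, sigma = 1.5, 2.0
# Table entry of order 4: E[X^4] = mu^4 + 6*mu^2*sigma^2 + 3*sigma^4
print(raw_moment(4, mu, sigma), mu**4 + 6 * mu**2 * sigma**2 + 3 * sigma**4)
```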
Fourier transform and characteristic function
The Fourier transform </wiki/Fourier_transform> of a normal distribution
/f/ with mean μ and deviation σ is^[13] <#cite_note-13>
\hat\phi(t) = \int_{-\infty}^\infty\! f(x)e^{itx}\, dx = e^{i\mu t} e^{-\frac12 (\sigma t)^2}
where /i/ is the imaginary unit </wiki/Imaginary_unit>. If the mean μ
is zero, the first factor is 1, and the Fourier transform is also a
normal distribution on the frequency domain </wiki/Frequency_domain>,
with mean 0 and standard deviation 1/σ. In particular, the standard
normal distribution φ (with μ = 0 and σ = 1) is an eigenfunction
</wiki/Eigenfunction> of the Fourier transform.
In probability theory, the Fourier transform of the probability
distribution of a real-valued random variable /X/ is called the
characteristic function
</wiki/Characteristic_function_(probability_theory)> of that variable,
and can be defined as the expected value </wiki/Expected_value> of
/e/^/itX/, as a function of the real variable /t/ (the frequency
</wiki/Frequency> parameter of the Fourier transform). This definition
can be analytically extended to a complex-value parameter /t/.^[14]
<#cite_note-14>
Moment and cumulant generating functions
The moment generating function </wiki/Moment_generating_function> of a
real random variable /X/ is the expected value of /e^tX/, as a function
of the real parameter /t/. For a normal distribution with mean μ and
deviation σ, the moment generating function exists and is equal to
M(t) = \hat \phi(-it) = e^{\mu t} e^{\frac12 \sigma^2 t^2}
The cumulant generating function </wiki/Cumulant_generating_function> is
the logarithm of the moment generating function, namely
g(t) = \ln M(t) = \mu t + \frac{1}{2} \sigma^2 t^2
Since this is a quadratic polynomial in /t/, only the first two
cumulants </wiki/Cumulant> are nonzero, namely the mean μ and the
variance σ^2.
Cumulative distribution function
The cumulative distribution function (CDF) of the standard normal
distribution, usually denoted with the capital Greek letter \Phi (phi
</wiki/Phi_(letter)>), is the integral
\Phi(x)\; = \;\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt
In statistics one often uses the related error function
</wiki/Error_function>, or erf(/x/), defined as the probability of a
random variable with normal distribution of mean 0 and variance 1/2
falling in the range [-x, x]; that is
\operatorname{erf}(x)\; =\; \frac{1}{\sqrt{\pi}} \int_{-x}^x e^{-t^2} \, dt
These integrals cannot be expressed in terms of elementary functions,
and are often said to be special functions </wiki/Special_functions>.
They are closely related, namely
\Phi(x)\; =\; \frac12\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right]
For a generic normal distribution /f/ with mean μ and deviation σ,
the cumulative distribution function is
F(x)\;=\;\Phi\left(\frac{x-\mu}{\sigma}\right)\;=\; \frac12\left[1 +
\operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]
The complement of the standard normal CDF, Q(x) = 1 - \Phi(x), is often
called the Q-function </wiki/Q-function>, especially in engineering
texts.^[15] <#cite_note-15> ^[16] <#cite_note-16> It gives the
probability that the value of a standard normal random variable /X/ will
exceed /x/. Other definitions of the /Q/-function, all of which are
simple transformations of \Phi, are also used occasionally.^[17]
<#cite_note-17>
The graph </wiki/Graph_of_a_function> of the standard normal CDF \Phi
has 2-fold rotational symmetry </wiki/Rotational_symmetry> around the
point (0,1/2); that is, \Phi(-x) = 1 - \Phi(x). Its antiderivative
</wiki/Antiderivative> (indefinite integral) is \int \Phi(x)\, dx = x\Phi(x) + \phi(x).
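The Φ–erf relation above maps directly onto Python's standard library, which provides math.erf; a minimal sketch (function names are illustrative):

```python
import math

def std_normal_cdf(x):
    # Phi(x) = (1/2) * [1 + erf(x / sqrt(2))]
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    # F(x) = Phi((x - mu) / sigma)
    return std_normal_cdf((x - mu) / sigma)

def q_function(x):
    # Q(x) = 1 - Phi(x): upper-tail probability of a standard normal
    return 1.0 - std_normal_cdf(x)

print(std_normal_cdf(0.0))  # 0.5, by the rotational symmetry about (0, 1/2)
print(q_function(1.96))     # about 0.025
```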
* The cumulative distribution function (CDF) of the standard normal
  distribution can be expanded by integration by parts
  </wiki/Integration_by_parts> into a series:
\Phi(x)\; =\;0.5+\frac{1}{\sqrt{2\pi}}\cdot
e^{-x^2/2}\left[x+\frac{x^3}{3}+\frac{x^5}{3\cdot
5}+\cdots+\frac{x^{2n+1}}{3\cdot 5\cdots (2n+1)}+\cdots\right]
Example of a Pascal function to calculate the CDF (sum of the first 100 terms):
function CDF(x: extended): extended;
var value, sum: extended;
    i: integer;
begin
  sum := x;
  value := x;
  for i := 1 to 100 do
  begin
    value := value * x * x / (2 * i + 1);
    sum := sum + value;
  end;
  result := 0.5 + (sum / sqrt(2 * pi)) * exp(-(x * x) / 2);
end;
Standard deviation and tolerance intervals
Main article: Tolerance interval </wiki/Tolerance_interval>
</wiki/File:Standard_deviation_diagram.svg>
Dark blue is less than one standard deviation </wiki/Standard_deviation>
away from the mean. For the normal distribution, this accounts for 68.2%
of the set, while two standard deviations from the mean (medium and dark
blue) account for 95.4%, and three standard deviations (light, medium,
and dark blue) account for 99.7%.
About 68% of values drawn from a normal distribution are within one
standard deviation σ away from the mean; about 95% of the values lie
within two standard deviations; and about 99.7% are within three
standard deviations. This fact is known as the 68-95-99.7 (empirical)
rule </wiki/68%E2%80%9395%E2%80%9399.7_rule>, or the /3-sigma rule/.
More precisely, the probability that a normal deviate lies in the range
μ - /n/σ and μ + /n/σ is given by
F(\mu+n\sigma) - F(\mu-n\sigma) = \Phi(n)-\Phi(-n) =
\mathrm{erf}\left(\frac{n}{\sqrt{2}}\right).
To 12 decimal places, the values for /n/ = 1, 2, ..., 6 are:^[18]
<#cite_note-18>
/n/   /F/(μ + /n/σ) - /F/(μ - /n/σ)   1 minus          1 in            OEIS </wiki/OEIS>
1     0.682689492137                  0.317310507863   3.15148718753   OEIS </wiki/On-Line_Encyclopedia_of_Integer_Sequences> A178647 <//oeis.org/A178647>
2     0.954499736104                  0.045500263896   21.9778945080   OEIS </wiki/On-Line_Encyclopedia_of_Integer_Sequences> A110894 <//oeis.org/A110894>
3     0.997300203937                  0.002699796063   370.398347345
4     0.999936657516                  0.000063342484   15787.1927673
5     0.999999426697                  0.000000573303   1744277.89362
6     0.999999998027                  0.000000001973   506797345.897
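The table can be reproduced from the erf identity above; a short Python check (the loop bounds are illustrative):

```python
import math

# P(|X - mu| <= n*sigma) = erf(n / sqrt(2)); the "1 in" column is 1 / (1 - p).
for n in range(1, 7):
    p = math.erf(n / math.sqrt(2.0))
    print(n, f"{p:.12f}", f"{1.0 - p:.12f}", f"{1.0 / (1.0 - p):.6f}")
```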
Quantile function
The quantile function </wiki/Quantile_function> of a distribution is the
inverse of the cumulative distribution function. The quantile function
of the standard normal distribution is called the probit function
</wiki/Probit_function>, and can be expressed in terms of the inverse
error function </wiki/Error_function>:
\Phi^{-1}(p)\; =\; \sqrt2\;\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).
For a normal random variable with mean μ and variance σ^2, the
quantile function is
F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu +
\sigma\sqrt2\,\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).
The quantile </wiki/Quantile> \Phi^{-1}(p) of the standard normal
distribution is commonly denoted as /z_p/. These values are used in
hypothesis testing </wiki/Hypothesis_testing>, construction of
confidence intervals </wiki/Confidence_interval> and Q-Q plots
</wiki/Q-Q_plot>. A normal random variable /X/ will exceed μ + /z_p/σ
with probability 1 - /p/, and will lie outside the interval μ ± /z_p/σ
with probability 2(1 - /p/). In particular, the quantile /z/_0.975 is 1.96
</wiki/1.96>; therefore a normal random variable will lie outside the
interval μ ± 1.96σ in only 5% of cases.
The following table gives the multiple /n/ of σ such that /X/ will lie
in the range μ ± /n/σ with a specified probability /p/. These values
are useful to determine tolerance intervals </wiki/Tolerance_interval>
for sample averages
</wiki/Sample_mean_and_sample_covariance#Sample_mean> and other
statistical estimators </wiki/Estimator> with normal (or asymptotically
</wiki/Asymptotic> normal) distributions:^[19] <#cite_note-19>
/p/     /n/                /p/            /n/
0.80    1.281551565545     0.999          3.290526731492
0.90    1.644853626951     0.9999         3.890591886413
0.95    1.959963984540     0.99999        4.417173413469
0.98    2.326347874041     0.999999       4.891638475699
0.99    2.575829303549     0.9999999      5.326723886384
0.995   2.807033768344     0.99999999     5.730728868236
0.998   3.090232306168     0.999999999    6.109410204869
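Python's standard library exposes this quantile function directly as statistics.NormalDist.inv_cdf (Python 3.8+); a minimal sketch of both the one-sided quantile and the two-sided multiples tabulated above:

```python
from statistics import NormalDist

std = NormalDist(mu=0.0, sigma=1.0)

# One-sided quantile z_p = Phi^{-1}(p); z_0.975 is the familiar 1.96.
print(std.inv_cdf(0.975))

# Two-sided multiple: X lies in mu +/- n*sigma with probability p
# when Phi(n) = (1 + p)/2, i.e. n = Phi^{-1}((1 + p)/2).
def two_sided_multiple(p):
    return std.inv_cdf((1.0 + p) / 2.0)

print(two_sided_multiple(0.95))  # about 1.959963984540
```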

Zero-variance limit
In the limit </wiki/Limit_(mathematics)> when σ tends to zero, the
probability density /f/(/x/) eventually tends to zero at any /x/ ≠ μ,
but grows without limit if /x/ = μ, while its integral remains equal
to 1. Therefore, the normal distribution cannot be defined as an
ordinary function </wiki/Function_(mathematics)> when σ = 0.
However, one can define the normal distribution with zero variance as a
generalized function </wiki/Generalized_function>; specifically, as
Dirac's "delta function" </wiki/Dirac_delta_function> δ translated by
the mean μ, that is /f/(/x/) = δ(/x/ - μ). Its CDF is then the
Heaviside step function </wiki/Heaviside_step_function> translated by
the mean μ, namely
F(x) = \begin{cases} 0 & \text{if }x < \mu \\ 1 & \text{if }x \geq
\mu \end{cases}
The central limit theorem
</wiki/File:De_moivre-laplace.gif>
As the number of discrete events increases, the function begins to
resemble a normal distribution
</wiki/File:Dice_sum_central_limit_theorem.svg>
Comparison of probability density functions /p/(/k/) for the sum of /n/
fair 6-sided dice, showing their convergence to a normal distribution
with increasing /n/, in accordance with the central limit theorem. In the
bottom-right graph, smoothed profiles of the previous graphs are
rescaled, superimposed and compared with a normal distribution (black
curve).
Main article: Central limit theorem </wiki/Central_limit_theorem>
The central limit theorem states that under certain (fairly common)
conditions, the sum of many random variables will have an approximately
normal distribution. More specifically, where /X/_1, ..., /X_n/ are
independent and identically distributed
</wiki/Independent_and_identically_distributed> random variables with
the same arbitrary distribution, zero mean, and variance σ^2, and /Z/
is their mean scaled by \sqrt{n},
Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)
then, as /n/ increases, the probability distribution of /Z/ will tend to
the normal distribution with zero mean and variance σ^2.
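A Monte-Carlo illustration of this convergence, using uniform variables on [-1, 1] (zero mean, variance 1/3); the sample sizes and seed are arbitrary choices for the sketch:

```python
import random
import statistics

random.seed(42)

# X_i ~ Uniform(-1, 1): zero mean, variance 1/3.
# Z = sqrt(n) * mean(X_1, ..., X_n) should be approximately N(0, 1/3).
def one_z(n):
    return (n ** 0.5) * statistics.fmean(random.uniform(-1.0, 1.0) for _ in range(n))

samples = [one_z(50) for _ in range(10000)]
print(statistics.fmean(samples))      # close to 0
print(statistics.pvariance(samples))  # close to 1/3
```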
The theorem can be extended to variable /X_i / that are not independent
and/or not identically ditributed if certain contraint are placed on
the degree of dependence and the moment of the ditribution.
Many tet tatitic </wiki/Tet_tatitic>, core
</wiki/Score_(tatitic)>, and etimator </wiki/Etimator> encountered
in practice contain um of certain random variable in them, and even
more etimator can be repreented a um of random variable through
the ue of influence function </wiki/Influence_function_(tatitic)>.
The central limit theorem implie that thoe tatitical parameter will
have aymptotically normal ditribution.
The central limit theorem also implies that certain distributions can be
approximated by the normal distribution, for example:
 * The binomial distribution </wiki/Binomial_distribution> /B/(/n/,
   /p/) is approximately normal
   </wiki/De_Moivre%E2%80%93Laplace_theorem> with mean /np/ and
   variance /np/(1 − /p/) for large /n/ and for /p/ not too close to
   zero or one.
 * The Poisson distribution </wiki/Poisson_distribution> with parameter
   /λ/ is approximately normal with mean /λ/ and variance /λ/, for
   large values of /λ/.^[20] <#cite_note-20>
 * The chi-squared distribution </wiki/Chi-squared_distribution> /χ/^2
   (/k/) is approximately normal with mean /k/ and variance 2/k/, for
   large /k/.
 * The Student's t-distribution </wiki/Student%27s_t-distribution>
   /t/(/ν/) is approximately normal with mean 0 and variance 1 when /ν/
   is large.
Whether these approximations are sufficiently accurate depends on the
purpose for which they are needed, and the rate of convergence to the
normal distribution. It is typically the case that such approximations
are less accurate in the tails of the distribution.
A general upper bound for the approximation error in the central limit
theorem is given by the Berry–Esseen theorem
</wiki/Berry%E2%80%93Esseen_theorem>; improvements of the approximation
are given by the Edgeworth expansions </wiki/Edgeworth_expansion>.
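The convergence described above is easy to see numerically. A minimal sketch in Python (standard library only; the uniform source distribution and the sample sizes are illustrative choices, not from the article):

```python
import math
import random

def clt_sample(n_terms, n_samples, rng):
    """Standardized sums Z = sqrt(n)*(mean(X) - mu)/sigma of i.i.d.
    Uniform(0,1) draws; by the central limit theorem Z approaches N(0,1)."""
    mu, sigma = 0.5, math.sqrt(1 / 12)  # mean and sd of Uniform(0,1)
    zs = []
    for _ in range(n_samples):
        total = sum(rng.random() for _ in range(n_terms))
        zs.append(math.sqrt(n_terms) * (total / n_terms - mu) / sigma)
    return zs

def std_normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

rng = random.Random(0)
zs = clt_sample(n_terms=30, n_samples=20000, rng=rng)
z_mean = sum(zs) / len(zs)
empirical = sum(z <= 1.0 for z in zs) / len(zs)
gap = abs(empirical - std_normal_cdf(1.0))
print(z_mean, gap)  # mean near 0; empirical CDF at 1 close to Phi(1)
```

Even with only 30 uniform terms per sum the empirical CDF already tracks Φ closely; per the remark above, the match is worst in the tails.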
Operations on normal deviates
The family of normal distributions is closed under linear
transformations: if /X/ is normally distributed with mean /μ/ and
standard deviation /σ/, then the variable /Y/ = /aX/ + /b/, for any real
numbers /a/ and /b/, is also normally distributed, with mean /aμ/ + /b/
and standard deviation /|a|σ/.
Also if /X/_1 and /X/_2 are two independent
</wiki/Independence_(probability_theory)> normal random variables, with
means /μ/_1 , /μ/_2 and standard deviations /σ/_1 , /σ/_2 , then their
sum /X/_1 + /X/_2 will also be normally distributed,^[proof]
</wiki/Sum_of_normally_distributed_random_variables> with mean /μ/_1 +
/μ/_2 and variance \sigma_1^2 + \sigma_2^2.
In particular, if /X/ and /Y/ are independent normal deviates with zero
mean and variance /σ/^2 , then /X/ + /Y/ and /X/ − /Y/ are also
independent and normally distributed, with zero mean and variance
2/σ/^2 . This is a special case of the polarization identity
</wiki/Polarization_identity>.^[21] <#cite_note-21>
Also, if /X/_1 , /X/_2 are two independent normal deviates with mean /μ/
and deviation /σ/, and /a/, /b/ are arbitrary real numbers, then the
variable
X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2+b^2}} + \mu
is also normally distributed with mean /μ/ and deviation /σ/. It follows
that the normal distribution is stable </wiki/Stable_distribution> (with
exponent /α/ = 2).
More generally, any linear combination </wiki/Linear_combination> of
independent normal deviates is a normal deviate.
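The closure under linear transformations can be checked by simulation; a small sketch (the particular constants are arbitrary examples):

```python
import random
import statistics

rng = random.Random(42)
mu, sigma, a, b = 1.5, 2.0, 3.0, -4.0

# Sample Y = a*X + b with X ~ N(mu, sigma^2); Y should be normal with
# mean a*mu + b and standard deviation |a|*sigma.
ys = [a * rng.gauss(mu, sigma) + b for _ in range(200_000)]
mean_y, sd_y = statistics.fmean(ys), statistics.stdev(ys)
print(mean_y, sd_y)  # close to a*mu + b = 0.5 and |a|*sigma = 6.0
```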
Infinite divisibility and Cramér's theorem
For any positive integer /n/, any normal distribution with mean /μ/ and
variance /σ/^2 is the distribution of the sum of /n/ independent normal
deviates, each with mean /μ/n/ and variance /σ/^2/n/. This property is
called infinite divisibility
</wiki/Infinite_divisibility_(probability)>.^[22] <#cite_note-22>
Conversely, if /X/_1 and /X/_2 are independent random variables and
their sum /X/_1 + /X/_2 has a normal distribution, then both /X/_1 and
/X/_2 must be normal deviates.^[23] <#cite_note-23>
This result is known as *Cramér's decomposition theorem
</wiki/Cram%C3%A9r%27s_theorem>*, and is equivalent to saying that the
convolution </wiki/Convolution> of two distributions is normal if and
only if both are normal. Cramér's theorem implies that a linear
combination of independent non-Gaussian variables will never have an
exactly normal distribution, although it may approach it arbitrarily
closely.^[24] <#cite_note-Bryc_1995_35-24>
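Infinite divisibility is likewise easy to illustrate: summing /n/ draws from /N/(/μ/n/, /σ/^2/n/) should reproduce /N/(/μ/, /σ/^2 ). A sketch (the constants are arbitrary):

```python
import random
import statistics

rng = random.Random(7)
mu, sigma, n = 5.0, 3.0, 10

# Each summand is N(mu/n, sigma^2/n); the sum should be N(mu, sigma^2).
sums = [sum(rng.gauss(mu / n, sigma / n ** 0.5) for _ in range(n))
        for _ in range(100_000)]
mean_s, sd_s = statistics.fmean(sums), statistics.stdev(sums)
print(mean_s, sd_s)  # close to mu = 5.0 and sigma = 3.0
```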
Bernstein's theorem
Bernstein's theorem states that if /X/ and /Y/ are independent and /X/ +
/Y/ and /X/ − /Y/ are also independent, then both /X/ and /Y/ must
necessarily have normal distributions.^[25] <#cite_note-LK-25> ^[26]
<#cite_note-26>
More generally, if /X/_1 , …, /X_n / are independent random variables,
then two distinct linear combinations ∑/a_k X_k / and ∑/b_k X_k / will
be independent if and only if all /X_k '/s are normal and ∑/a_k b_k
σ/^2 _/k/ = 0, where /σ/^2 _/k/ denotes the variance of /X_k /.^[25]
<#cite_note-LK-25>
Other properties
 1. If the characteristic function /φ_X / of some random variable /X/ is
    of the form /φ_X /(/t/) = /e/^/Q/(/t/) , where /Q/(/t/) is a
    polynomial </wiki/Polynomial>, then the *Marcinkiewicz theorem*
    (named after Józef Marcinkiewicz </wiki/J%C3%B3zef_Marcinkiewicz>)
    asserts that /Q/ can be at most a quadratic polynomial, and
    therefore /X/ is a normal random variable.^[24]
    <#cite_note-Bryc_1995_35-24> The consequence of this result is that
    the normal distribution is the only distribution with a finite
    number (two) of non-zero cumulants </wiki/Cumulant>.
 2. If /X/ and /Y/ are jointly normal
    </wiki/Multivariate_normal_distribution> and uncorrelated
    </wiki/Uncorrelated>, then they are independent
    </wiki/Independence_(probability_theory)>. The requirement that /X/
    and /Y/ should be /jointly/ normal is essential; without it the
    property does not hold.^[27] <#cite_note-27> ^[28] <#cite_note-28>
    ^[proof]
    </wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent>
    For non-normal random variables uncorrelatedness does not imply
    independence.
 3. The Kullback–Leibler divergence
    </wiki/Kullback%E2%80%93Leibler_divergence> of one normal
    distribution /X/_1 ∼ /N/(/μ/_1 , /σ/^2 _1 ) from another /X/_2 ∼
    /N/(/μ/_2 , /σ/^2 _2 ) is given by:^[29] <#cite_note-29>
    D_\mathrm{KL}( X_1 \,\|\, X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} \,+\, \frac12\left(\,
    \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \,\right)\ .
    The Hellinger distance </wiki/Hellinger_distance> between the same
    distributions is equal to
    H^2(X_1,X_2) = 1 \,-\,
    \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \;
    e^{-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}}\ .
 4. The Fisher information matrix </wiki/Fisher_information_matrix> for
    a normal distribution is diagonal and takes the form
    \mathcal I = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 &
    \frac{1}{2\sigma^4} \end{pmatrix}
 5. Normal distributions belong to an exponential family
    </wiki/Exponential_family> with natural parameters
    \scriptstyle\theta_1=\frac{\mu}{\sigma^2} and
    \scriptstyle\theta_2=\frac{-1}{2\sigma^2}, and natural statistics
    /x/ and /x/^2 . The dual, expectation parameters for normal
    distributions are /η/_1 = /μ/ and /η/_2 = /μ/^2 + /σ/^2 .
 6. The conjugate prior </wiki/Conjugate_prior> of the mean of a normal
    distribution is another normal distribution.^[30] <#cite_note-30>
    Specifically, if /x/_1 , …, /x_n / are iid /N/(/μ/, /σ/^2 ) and the
    prior is /μ/ ~ /N/(/μ/_0 , /σ/^2 _0 ), then the posterior
    distribution for the estimator of /μ/ will be
    \mu | x_1,\ldots,x_n\ \sim\ \mathcal{N}\left(
    \frac{\frac{\sigma^2}{n}\mu_0 +
    \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\ \left(
    \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{\!-1} \right)
 7. Of all probability distributions over the reals with mean /μ/ and
    variance /σ/^2 , the normal distribution /N/(/μ/, /σ/^2 ) is the
    one with the maximum entropy
    </wiki/Maximum_entropy_probability_distribution>.^[31] <#cite_note-31>
 8. The family of normal distributions forms a manifold </wiki/Manifold>
    with constant curvature </wiki/Constant_curvature> −1. The same
    family is flat </wiki/Flat_manifold> with respect to the
    (±1)-connections ∇^(/e/) and ∇^(/m/) .^[32] <#cite_note-32>
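The closed-form KL divergence in item 3 translates directly into code; a minimal sketch (the helper name kl_normal is ours):

```python
import math

def kl_normal(mu1, s1, mu2, s2):
    """D_KL(N(mu1, s1^2) || N(mu2, s2^2)), from the closed form in item 3."""
    return ((mu1 - mu2) ** 2 / (2 * s2 ** 2)
            + 0.5 * (s1 ** 2 / s2 ** 2 - 1 - math.log(s1 ** 2 / s2 ** 2)))

print(kl_normal(0, 1, 0, 1))  # 0.0 (identical distributions)
print(kl_normal(1, 1, 0, 1))  # 0.5 (unit mean shift, equal variances)
```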
Related distributions
Operations on a single random variable
If /X/ is distributed normally with mean /μ/ and variance /σ/^2 , then:
 * The exponential of /X/ is distributed log-normally
   </wiki/Log-normal_distribution>: /e^X / ~ ln(/N/ (/μ/, /σ/^2 )).
 * The absolute value of /X/ has a folded normal distribution
   </wiki/Folded_normal_distribution>: |/X/| ~ /N_f / (/μ/, /σ/^2 ). If
   /μ/ = 0 this is known as the half-normal distribution
   </wiki/Half-normal_distribution>.
 * The square of /X//σ/ has the noncentral chi-squared distribution
   </wiki/Noncentral_chi-squared_distribution> with one degree of
   freedom: /X/^2 //σ/^2 ~ /χ/^2 _1 (/μ/^2 //σ/^2 ). If /μ/ = 0, the
   distribution is called simply chi-squared
   </wiki/Chi-squared_distribution>.
 * The distribution of the variable /X/ restricted to an interval [/a/,
   /b/] is called the truncated normal distribution
   </wiki/Truncated_normal_distribution>.
 * (/X/ − /μ/)^−2 has a Lévy distribution
   </wiki/L%C3%A9vy_distribution> with location 0 and scale /σ/^−2 .
Combination of two independent random variables
If /X/_1 and /X/_2 are two independent standard normal random variables
with mean 0 and variance 1, then
 * Their sum and difference is distributed normally with mean zero and
   variance two: /X/_1 ± /X/_2 ∼ /N/(0, 2).
 * Their product /Z/ = /X/_1 /X/_2 follows the "product-normal"
   distribution^[33] <#cite_note-33> with density function /f_Z /(/z/)
   = /π/^−1 /K/_0 (|/z/|), where /K/_0 is the modified Bessel function
   of the second kind </wiki/Macdonald_function>. This distribution is
   symmetric around zero, unbounded at /z/ = 0, and has the
   characteristic function
   </wiki/Characteristic_function_(probability_theory)> /φ_Z /(/t/) =
   (1 + /t/ ^2 )^−1/2 .
 * Their ratio follows the standard Cauchy distribution
   </wiki/Cauchy_distribution>: /X/_1 ÷ /X/_2 ∼ Cauchy(0, 1).
 * Their Euclidean norm \scriptstyle\sqrt{X_1^2\,+\,X_2^2} has the
   Rayleigh distribution </wiki/Rayleigh_distribution>.
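The last bullet can be verified by simulation: the Rayleigh distribution with unit scale has mean \sqrt{\pi/2} ≈ 1.2533. A sketch:

```python
import math
import random
import statistics

rng = random.Random(1)

# Euclidean norm of two independent standard normals: Rayleigh(1),
# whose mean is sqrt(pi/2).
norms = [math.hypot(rng.gauss(0, 1), rng.gauss(0, 1))
         for _ in range(100_000)]
mean_norm = statistics.fmean(norms)
print(mean_norm)  # close to sqrt(pi/2) ~ 1.2533
```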
Combination of two or more independent random variables
 * If /X/_1 , /X/_2 , …, /X_n / are independent standard normal random
   variables, then the sum of their squares has the chi-squared
   distribution </wiki/Chi-squared_distribution> with /n/ degrees of
   freedom
   X_1^2 + \cdots + X_n^2\ \sim\ \chi_n^2.
 * If /X/_1 , /X/_2 , …, /X_n / are independent normally distributed
   random variables with means /μ/ and variances /σ/^2 , then their
   sample mean </wiki/Sample_mean> is independent from the sample
   standard deviation </wiki/Standard_deviation>,^[34] <#cite_note-34>
   which can be demonstrated using Basu's theorem
   </wiki/Basu%27s_theorem> or Cochran's theorem
   </wiki/Cochran%27s_theorem>.^[35] <#cite_note-35> The ratio of these
   two quantities will have the Student's t-distribution
   </wiki/Student%27s_t-distribution> with /n/ − 1 degrees of freedom:
   t = \frac{\overline X - \mu}{S/\sqrt{n}} =
   \frac{\frac{1}{n}(X_1+\cdots+X_n) -
   \mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1-\overline
   X)^2+\cdots+(X_n-\overline X)^2\right]}} \ \sim\ t_{n-1}.
 * If /X/_1 , …, /X_n /, /Y/_1 , …, /Y_m / are independent standard
   normal random variables, then the ratio of their normalized sums of
   squares will have the F-distribution </wiki/F-distribution> with
   (/n/, /m/) degrees of freedom:^[36] <#cite_note-36>
   F =
   \frac{\left(X_1^2+X_2^2+\cdots+X_n^2\right)/n}{\left(Y_1^2+Y_2^2+\cdots+Y_m^2\right)/m}\
   \sim\ F_{n,\,m}.
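The first bullet is straightforward to check: a /χ/^2 _/n/ variable has mean /n/ and variance 2/n/. A sketch with /n/ = 5 (an arbitrary choice):

```python
import random
import statistics

rng = random.Random(3)
n = 5

# Sum of squares of n standard normals: chi-squared with n degrees of
# freedom, so mean n and variance 2n.
draws = [sum(rng.gauss(0, 1) ** 2 for _ in range(n))
         for _ in range(100_000)]
m, v = statistics.fmean(draws), statistics.variance(draws)
print(m, v)  # close to 5 and 10
```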
Operations on the density function
The split normal distribution </wiki/Split_normal_distribution> is most
directly defined in terms of joining scaled sections of the density
functions of different normal distributions and rescaling the density to
integrate to one. The truncated normal distribution
</wiki/Truncated_normal_distribution> results from rescaling a section
of a single density function.
Extensions
The notion of normal distribution, being one of the most important
distributions in probability theory, has been extended far beyond the
standard framework of the univariate (that is one-dimensional) case
(Case 1). All these extensions are also called /normal/ or /Gaussian/
laws, so a certain ambiguity in names exists.
 * The multivariate normal distribution
   </wiki/Multivariate_normal_distribution> describes the Gaussian law
   in the /k/-dimensional Euclidean space </wiki/Euclidean_space>. A
   vector /X/ ∈ *R*^/k/ is multivariate-normally distributed if any
   linear combination of its components ∑^/k/ _/j/=1 /a_j X_j / has a
   (univariate) normal distribution. The variance of /X/ is a /k×k/
   symmetric positive-definite matrix /V/. The multivariate normal
   distribution is a special case of the elliptical distributions
   </wiki/Elliptical_distribution>. As such, its iso-density loci in
   the /k/ = 2 case are ellipses </wiki/Ellipse> and in the case of
   arbitrary /k/ are ellipsoids </wiki/Ellipsoid>.
 * Rectified Gaussian distribution
   </wiki/Rectified_Gaussian_distribution> a rectified version of a
   normal distribution with all the negative elements reset to 0.
 * Complex normal distribution </wiki/Complex_normal_distribution>
   deals with the complex normal vectors. A complex vector /X/ ∈
   *C*^/k/ is said to be normal if both its real and imaginary
   components jointly possess a 2/k/-dimensional multivariate normal
   distribution. The variance-covariance structure of /X/ is described
   by two matrices: the /variance/ matrix Γ, and the /relation/ matrix
   /C/.
 * Matrix normal distribution </wiki/Matrix_normal_distribution>
   describes the case of normally distributed matrices.
 * Gaussian processes </wiki/Gaussian_process> are the normally
   distributed stochastic processes </wiki/Stochastic_process>. These
   can be viewed as elements of some infinite-dimensional Hilbert space
   </wiki/Hilbert_space> /H/, and thus are the analogues of
   multivariate normal vectors for the case /k/ = ∞. A random element
   /h/ ∈ /H/ is said to be normal if for any constant /a/ ∈ /H/ the
   scalar product </wiki/Scalar_product> (/a/, /h/) has a (univariate)
   normal distribution. The variance structure of such Gaussian random
   element can be described in terms of the linear /covariance operator
   K: H → H/. Several Gaussian processes became popular enough to have
   their own names:
    o Brownian motion </wiki/Wiener_process>,
    o Brownian bridge </wiki/Brownian_bridge>,
    o Ornstein–Uhlenbeck process
      </wiki/Ornstein%E2%80%93Uhlenbeck_process>.
 * Gaussian q-distribution </wiki/Gaussian_q-distribution> is an
   abstract mathematical construction that represents a "q-analogue
   </wiki/Q-analogue>" of the normal distribution.
 * the q-Gaussian </wiki/Q-Gaussian> is an analogue of the Gaussian
   distribution, in the sense that it maximises the Tsallis entropy
   </wiki/Tsallis_entropy>, and is one type of Tsallis distribution
   </wiki/Tsallis_distribution>. Note that this distribution is
   different from the Gaussian q-distribution
   </wiki/Gaussian_q-distribution> above.
One of the main practical uses of the Gaussian law is to model the
empirical distributions of many different random variables encountered
in practice. In such case a possible extension would be a richer family
of distributions, having more than two parameters and therefore being
able to fit the empirical distribution more accurately. Examples of
such extensions are:
 * Pearson distribution </wiki/Pearson_distribution>: a four-parameter
   family of probability distributions that extend the normal law to
   include different skewness and kurtosis values.
Normality tests
Main article: Normality tests </wiki/Normality_tests>
Normality tests assess the likelihood that the given data set {/x/_1 ,
…, /x_n /} comes from a normal distribution. Typically the null
hypothesis </wiki/Null_hypothesis> /H/_0 is that the observations are
distributed normally with unspecified mean /μ/ and variance /σ/^2 ,
versus the alternative /H_a / that the distribution is arbitrary. Many
tests (over 40) have been devised for this problem; the more prominent
of them are outlined below:
 * *"Visual" tests* are more intuitively appealing but subjective at
   the same time, as they rely on informal human judgement to accept or
   reject the null hypothesis.
    o Q-Q plot </wiki/Q-Q_plot> is a plot of the sorted values from
      the data set against the expected values of the corresponding
      quantiles from the standard normal distribution. That is, it's a
      plot of points of the form (Φ^−1 (/p_k /), /x/_(/k/) ), where
      plotting points /p_k / are equal to /p_k / = (/k/ − /α/)/(/n/ +
      1 − 2/α/) and /α/ is an adjustment constant, which can be
      anything between 0 and 1. If the null hypothesis is true, the
      plotted points should approximately lie on a straight line.
    o P-P plot </wiki/P-P_plot> similar to the Q-Q plot, but used
      much less frequently. This method consists of plotting the
      points (Φ(/z/_(/k/) ), /p_k /), where \scriptstyle z_{(k)} =
      (x_{(k)}-\hat\mu)/\hat\sigma. For normally distributed data this
      plot should lie on a 45° line between (0, 0) and (1, 1).
    o Shapiro–Wilk test </wiki/Shapiro%E2%80%93Wilk_test> employs the
      fact that the line in the Q-Q plot has the slope of /σ/. The
      test compares the least squares estimate of that slope with the
      value of the sample variance, and rejects the null hypothesis if
      these two quantities differ significantly.
    o Normal probability plot </wiki/Normal_probability_plot> (rankit
      </wiki/Rankit> plot)
 * *Moment tests*:
    o D'Agostino's K-squared test
      </wiki/D%27Agostino%27s_K-squared_test>
    o Jarque–Bera test </wiki/Jarque%E2%80%93Bera_test>
 * *Empirical distribution function tests*:
    o Lilliefors test </wiki/Lilliefors_test> (an adaptation of the
      Kolmogorov–Smirnov test </wiki/Kolmogorov%E2%80%93Smirnov_test>)
    o Anderson–Darling test </wiki/Anderson%E2%80%93Darling_test>
Estimation of parameters
See also: Standard error of the mean </wiki/Standard_error_of_the_mean>,
Standard deviation § Estimation </wiki/Standard_deviation#Estimation>,
Variance § Estimation </wiki/Variance#Estimation> and Maximum likelihood
§ Continuous distribution, continuous parameter space
</wiki/Maximum_likelihood#Continuous_distribution.2C_continuous_parameter_pace>
It is often the case that we don't know the parameters of the normal
distribution, but instead want to estimate </wiki/Estimation_theory>
them. That is, having a sample (/x/_1 , …, /x_n /) from a normal
/N/(/μ/, /σ/^2 ) population we would like to learn the approximate
values of parameters /μ/ and /σ/^2 . The standard approach to this
problem is the maximum likelihood </wiki/Maximum_likelihood> method,
which requires maximization of the /log-likelihood function/:
\ln\mathcal{L}(\mu,\sigma^2) = \sum_{i=1}^n \ln f(x_i;\,\mu,\sigma^2)
= -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 -
\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2.
Taking derivatives with respect to /μ/ and /σ/^2 and solving the
resulting system of first order conditions yields the /maximum
likelihood estimates/:
\hat{\mu} = \overline{x} \equiv \frac{1}{n}\sum_{i=1}^n x_i, \qquad
\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2.
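The two maximum likelihood estimates are one-liners in code; a minimal sketch with a made-up sample:

```python
def normal_mle(xs):
    """Maximum likelihood estimates for a normal sample: the sample mean
    and the biased (1/n) sample variance, per the formulas above."""
    n = len(xs)
    mu_hat = sum(xs) / n
    var_hat = sum((x - mu_hat) ** 2 for x in xs) / n
    return mu_hat, var_hat

mu_hat, var_hat = normal_mle([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(mu_hat, var_hat)  # 5.0 4.0
```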
Estimator \scriptstyle\hat\mu is called the /sample mean
</wiki/Sample_mean>/, since it is the arithmetic mean of all
observations. The statistic \scriptstyle\overline{x} is complete
</wiki/Complete_statistic> and sufficient </wiki/Sufficient_statistic>
for /μ/, and therefore by the Lehmann–Scheffé theorem
</wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem>, \scriptstyle\hat\mu is the
uniformly minimum variance unbiased
</wiki/Uniformly_minimum_variance_unbiased> (UMVU) estimator.^[37]
<#cite_note-Kri127-37> In finite samples it is distributed normally:
\hat\mu \ \sim\ \mathcal{N}(\mu,\,\sigma^2\!/n).
The variance of this estimator is equal to the /μμ/-element of the
inverse Fisher information matrix </wiki/Fisher_information_matrix>
\scriptstyle\mathcal{I}^{-1}. This implies that the estimator is
finite-sample efficient </wiki/Efficient_estimator>. Of practical
importance is the fact that the standard error
</wiki/Standard_error_(statistics)> of \scriptstyle\hat\mu is
proportional to \scriptstyle1/\sqrt{n}, that is, if one wishes to
decrease the standard error by a factor of 10, one must increase the
number of points in the sample by a factor of 100. This fact is widely
used in determining sample sizes for opinion polls and the number of
trials in Monte Carlo simulations </wiki/Monte_Carlo_simulation>.
From the standpoint of the asymptotic theory
</wiki/Asymptotic_theory_(statistics)>, \scriptstyle\hat\mu is
consistent </wiki/Consistent_estimator>, that is, it converges in
probability </wiki/Convergence_in_probability> to /μ/ as /n/ → ∞. The
estimator is also asymptotically normal </wiki/Asymptotic_normality>,
which is a simple corollary of the fact that it is normal in finite samples:
\sqrt{n}(\hat\mu-\mu) \ \xrightarrow{d}\ \mathcal{N}(0,\,\sigma^2).
The estimator \scriptstyle\hat\sigma^2 is called the /sample variance
</wiki/Sample_variance>/, since it is the variance of the sample (/x/_1
, …, /x_n /). In practice, another estimator is often used instead of
the \scriptstyle\hat\sigma^2. This other estimator is denoted /s/^2 ,
and is also called the /sample variance/, which represents a certain
ambiguity in terminology; its square root /s/ is called the /sample
standard deviation/. The estimator /s/^2 differs from
\scriptstyle\hat\sigma^2 by having (/n/ − 1) instead of /n/ in the
denominator (the so-called Bessel's correction
</wiki/Bessel%27s_correction>):
s^2 = \frac{n}{n-1}\,\hat\sigma^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i
- \overline{x})^2.
The difference between /s/^2 and \scriptstyle\hat\sigma^2 becomes
negligibly small for large /n'/s. In finite samples however, the
motivation behind the use of /s/^2 is that it is an unbiased estimator
</wiki/Unbiased_estimator> of the underlying parameter /σ/^2 , whereas
\scriptstyle\hat\sigma^2 is biased. Also, by the Lehmann–Scheffé theorem
the estimator /s/^2 is uniformly minimum variance unbiased (UMVU),^[37]
<#cite_note-Kri127-37> which makes it the "best" estimator among all
unbiased ones. However it can be shown that the biased estimator
\scriptstyle\hat\sigma^2 is "better" than the /s/^2 in terms of the mean
squared error </wiki/Mean_squared_error> (MSE) criterion. In finite
samples both /s/^2 and \scriptstyle\hat\sigma^2 have scaled chi-squared
distributions </wiki/Chi-squared_distribution> with (/n/ − 1) degrees of
freedom:
s^2 \ \sim\ \frac{\sigma^2}{n-1} \cdot \chi^2_{n-1}, \qquad
\hat\sigma^2 \ \sim\ \frac{\sigma^2}{n} \cdot \chi^2_{n-1}\ .
The first of these expressions shows that the variance of /s/^2 is
equal to 2/σ/^4 /(/n/ − 1), which is slightly greater than the
/σσ/-element of the inverse Fisher information matrix
\scriptstyle\mathcal{I}^{-1}. Thus, /s/^2 is not an efficient estimator
for /σ/^2 , and moreover, since /s/^2 is UMVU, we can conclude that the
finite-sample efficient estimator for /σ/^2 does not exist.
Applying the asymptotic theory, both estimators /s/^2 and
\scriptstyle\hat\sigma^2 are consistent, that is they converge in
probability to /σ/^2 as the sample size /n/ → ∞. The two estimators are
also both asymptotically normal:
\sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2-\sigma^2)\
\xrightarrow{d}\ \mathcal{N}(0,\,2\sigma^4).
In particular, both estimators are asymptotically efficient for /σ/^2 .
By Cochran's theorem </wiki/Cochran%27s_theorem>, for normal
distributions the sample mean \scriptstyle\hat\mu and the sample
variance /s/^2 are independent
</wiki/Independence_(probability_theory)>, which means there can be no
gain in considering their joint distribution </wiki/Joint_distribution>.
There is also a converse theorem: if in a sample the sample mean and
sample variance are independent, then the sample must have come from
the normal distribution. The independence between \scriptstyle\hat\mu
and /s/ can be employed to construct the so-called /t-statistic/:
t = \frac{\hat\mu-\mu}{s/\sqrt{n}} =
\frac{\overline{x}-\mu}{\sqrt{\frac{1}{n(n-1)}\sum(x_i-\overline{x})^2}}\
\sim\ t_{n-1}
This quantity /t/ has the Student's t-distribution
</wiki/Student%27s_t-distribution> with (/n/ − 1) degrees of freedom,
and it is an ancillary statistic </wiki/Ancillary_statistic>
(independent of the value of the parameters). Inverting the
distribution of this /t/-statistic will allow us to construct the
confidence interval </wiki/Confidence_interval> for /μ/;^[38]
<#cite_note-38> similarly, inverting the /χ/^2 distribution of the
statistic /s/^2 will give us the confidence interval for /σ/^2 :^[39]
<#cite_note-39>
\begin{align} & \mu \in \left[\, \hat\mu + t_{n-1,\alpha/2}\,
\frac{s}{\sqrt{n}},\ \ \hat\mu +
t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}} \,\right] \approx \left[\,
\hat\mu - |z_{\alpha/2}|\frac{s}{\sqrt n},\ \ \hat\mu +
|z_{\alpha/2}|\frac{s}{\sqrt n} \,\right], \\ & \sigma^2 \in
\left[\, \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\ \
\frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \,\right] \approx \left[\,
s^2 - |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2,\ \ s^2 +
|z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2 \,\right], \end{align}
where /t/_{/k,p/} and /χ/^2 _{/k,p/} are the /p/^th quantiles
</wiki/Quantile> of the /t/- and /χ/^2 -distributions respectively.
These confidence intervals are of the /level/ 1 − /α/, meaning that the
true values /μ/ and /σ/^2 fall outside of these intervals with
probability /α/. In practice people usually take /α/ = 5%, resulting in
the 95% confidence intervals. The approximate formulas in the display
above were derived from the asymptotic distributions of
\scriptstyle\hat\mu and /s/^2 . The approximate formulas become valid
for large values of /n/, and are more convenient for the manual
calculation since the standard normal quantiles /z/_{/α/2/} do not
depend on /n/. In particular, the most popular value of /α/ = 5%
results in |/z/_0.025 | = 1.96 </wiki/1.96>.
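The approximate (large-/n/) interval for /μ/ is easy to compute directly; a sketch using |/z/_0.025 | = 1.96 and a made-up sample (the helper name is ours):

```python
import math

def approx_ci_mean(xs, z=1.96):
    """Approximate 95% confidence interval mu_hat +/- z*s/sqrt(n),
    per the large-n formula in the display above."""
    n = len(xs)
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)  # unbiased s^2
    half = z * math.sqrt(s2 / n)
    return mean - half, mean + half

lo, hi = approx_ci_mean([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(lo, hi)  # interval centered at the sample mean 5.0
```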
Bayesian analysis of the normal distribution
Bayesian analysis of normally distributed data is complicated by the
many different possibilities that may be considered:
 * Either the mean, or the variance, or neither, may be considered a
   fixed quantity.
 * When the variance is unknown, analysis may be done directly in
   terms of the variance, or in terms of the precision
   </wiki/Precision_(statistics)>, the reciprocal of the variance. The
   reason for expressing the formulas in terms of precision is that
   the analysis of most cases is simplified.
 * Both univariate and multivariate
   </wiki/Multivariate_normal_distribution> cases need to be
   considered.
 * Either conjugate </wiki/Conjugate_prior> or improper
   </wiki/Improper_prior> prior distributions
   </wiki/Prior_distribution> may be placed on the unknown variables.
 * An additional set of cases occurs in Bayesian linear regression
   </wiki/Bayesian_linear_regression>, where in the basic model the
   data is assumed to be normally distributed, and normal priors are
   placed on the regression coefficients
   </wiki/Regression_coefficient>. The resulting analysis is similar
   to the basic cases of independent identically distributed
   </wiki/Independent_identically_distributed> data, but more complex.
The formulas for the non-linear-regression cases are summarized in the
conjugate prior </wiki/Conjugate_prior> article.
The sum of two quadratics
Scalar form
The following auxiliary formula is useful for simplifying the posterior
</wiki/Posterior_distribution> update equations, which otherwise become
fairly tedious.
a(x-y)^2 + b(x-z)^2 = (a + b)\left(x - \frac{ay+bz}{a+b}\right)^2 +
\frac{ab}{a+b}(y-z)^2
This equation rewrites the sum of two quadratics in /x/ by expanding
the squares, grouping the terms in /x/, and completing the square
</wiki/Completing_the_square>. Note the following about the complex
constant factors attached to some of the terms:
 1. The factor \frac{ay+bz}{a+b} has the form of a weighted average
    </wiki/Weighted_average> of /y/ and /z/.
 2. \frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1} +
    b^{-1})^{-1}. This shows that this factor can be thought of as
    resulting from a situation where the reciprocals
    </wiki/Multiplicative_inverse> of quantities /a/ and /b/ add
    directly, so to combine /a/ and /b/ themselves, it's necessary to
    reciprocate, add, and reciprocate the result again to get back
    into the original units. This is exactly the sort of operation
    performed by the harmonic mean </wiki/Harmonic_mean>, so it is not
    surprising that \frac{ab}{a+b} is one-half the harmonic mean
    </wiki/Harmonic_mean> of /a/ and /b/.
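Since the identity is purely algebraic, it can be spot-checked numerically over random inputs; a sketch:

```python
import random

def lhs(a, b, x, y, z):
    return a * (x - y) ** 2 + b * (x - z) ** 2

def rhs(a, b, x, y, z):
    # (a+b) times squared distance to the weighted average, plus the
    # harmonic-mean-style cross term in (y - z).
    return ((a + b) * (x - (a * y + b * z) / (a + b)) ** 2
            + a * b / (a + b) * (y - z) ** 2)

rng = random.Random(0)
max_diff = 0.0
for _ in range(1000):
    args = [rng.uniform(0.1, 10.0) for _ in range(5)]
    max_diff = max(max_diff, abs(lhs(*args) - rhs(*args)))
print(max_diff)  # numerically zero
```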
Vector form
A similar formula can be written for the sum of two vector quadratics:
If *x*, *y*, *z* are vectors of length /k/, and *A* and *B* are
symmetric </wiki/Symmetric_matrix>, invertible matrices
</wiki/Invertible_matrices> of size k\times k, then
(\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) +
(\mathbf{x}-\mathbf{z})'\mathbf{B}(\mathbf{x}-\mathbf{z}) =
(\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} -
\mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} +
\mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z})
where
\mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} +
\mathbf{B}\mathbf{z})
Note that the form *x*′ *A* *x* is called a quadratic form
</wiki/Quadratic_form> and is a scalar </wiki/Scalar_(mathematics)>:
\mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j} a_{ij} x_i x_j
In other words, it sums up all possible combinations of products of
pairs of elements from *x*, with a separate coefficient for each. In
addition, since x_i x_j = x_j x_i, only the sum a_{ij} + a_{ji} matters
for any off-diagonal elements of *A*, and there is no loss of
generality in assuming that *A* is symmetric </wiki/Symmetric_matrix>.
Furthermore, if *A* is symmetric, then the form
\mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x} .
The sum of differences from the mean
Another useful formula is as follows:
\sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -
\mu)^2
where \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.
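This decomposition is also easy to verify directly; a sketch with a small made-up sample:

```python
def decomposition_sides(xs, mu):
    """Both sides of sum (x_i - mu)^2 = sum (x_i - xbar)^2 + n*(xbar - mu)^2."""
    n = len(xs)
    xbar = sum(xs) / n
    left = sum((x - mu) ** 2 for x in xs)
    right = sum((x - xbar) ** 2 for x in xs) + n * (xbar - mu) ** 2
    return left, right

left, right = decomposition_sides([1.0, 2.0, 6.0, 7.0], mu=3.0)
print(left, right)  # 30.0 30.0
```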
With known variance
For a set of i.i.d. </wiki/I.i.d.> normally distributed data points *X*
of size /n/ where each individual point /x/ follows x \sim
\mathcal{N}(\mu, \sigma^2) with known variance </wiki/Variance> σ^2 ,
the conjugate prior </wiki/Conjugate_prior> distribution is also
normally distributed.
This can be shown more easily by rewriting the variance as the
precision </wiki/Precision_(statistics)>, i.e. using τ = 1/σ^2 . Then
if x \sim \mathcal{N}(\mu, \tau) and \mu \sim \mathcal{N}(\mu_0,
\tau_0), we proceed as follows.
First, the likelihood function </wiki/Likelihood_function> is (using
the formula above for the sum of differences from the mean):
\begin{align} p(\mathbf{X}|\mu,\tau) &= \prod_{i=1}^n
\sqrt{\frac{\tau}{2\pi}}
\exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ &=
\left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}}
\exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ &=
\left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}}
\exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right)\right]. \end{align}
Then, we proceed as follows:
\begin{align} p(\mu|\mathbf{X}) &\propto p(\mathbf{X}|\mu) p(\mu) \\
&= \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}}
\exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}}
\exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ &\propto
\exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\
&\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 +
\tau_0(\mu-\mu_0)^2 \right)\right) \\ &=
\exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau
\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 -
\frac{1}{2}\frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} -
\mu_0)^2\right) \\
&\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu -
\dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right)
\end{align}
In the above derivation, we used the formula above for the sum of two
quadratics and eliminated all constant factors not involving /μ/. The
result is the kernel </wiki/Kernel_(statistics)> of a normal
distribution, with mean \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau +
\tau_0} and precision n\tau + \tau_0, i.e.
p(\mu|\mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau \bar{x} +
\tau_0\mu_0}{n\tau + \tau_0}, n\tau + \tau_0\right)
This can be written as a set of Bayesian update equations for the
posterior parameters in terms of the prior parameters:
\begin{align} \tau_0' &= \tau_0 + n\tau \\ \mu_0' &= \frac{n\tau
\bar{x} + \tau_0\mu_0}{n\tau + \tau_0} \\ \bar{x} &=
\frac{1}{n}\sum_{i=1}^n x_i \end{align}
That is, to combine /n/ data points with total precision of /n/τ (or
equivalently, total variance of /σ/^2 //n/) and mean of values \bar{x},
derive a new total precision simply by adding the total precision of
the data to the prior total precision, and form a new mean through a
/precision-weighted average/, i.e. a weighted average
</wiki/Weighted_average> of the data mean and the prior mean, each
weighted by the associated total precision. This makes logical sense if
the precision is thought of as indicating the certainty of the
observations: In the distribution of the posterior mean, each of the
input components is weighted by its certainty, and the certainty of
this distribution is the sum of the individual certainties. (For the
intuition of this, compare the expression "the whole is (or is not)
greater than the sum of its parts". In addition, consider that the
knowledge of the posterior comes from a combination of the knowledge of
the prior and likelihood, so it makes sense that we are more certain of
it than of either of its components.)
The above formulas reveal why it is more convenient to do Bayesian
analysis </wiki/Bayesian_analysis> of conjugate priors
</wiki/Conjugate_prior> for the normal distribution in terms of the
precision. The posterior precision is simply the sum of the prior and
likelihood precisions, and the posterior mean is computed through a
precision-weighted average, as described above. The same formulas can be
written in terms of variance by reciprocating all the precisions,
yielding the more ugly formulas
\begin{align} {\sigma^2_0}' &= \frac{1}{\frac{n}{\sigma^2} +
\frac{1}{\sigma_0^2}} \\ \mu_0' &= \frac{\frac{n\bar{x}}{\sigma^2} +
\frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}}
\\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align}
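As a concrete illustration of these update equations, the following sketch implements both the precision form and the variance form and checks that they agree (the function names and toy data are our own, not from the article):

```python
def posterior_mean_known_variance(data, mu0, tau0, tau):
    """Conjugate update for the mean of a normal with known precision tau.

    Prior: mu ~ N(mu0, 1/tau0); data: i.i.d. N(mu, 1/tau).
    Returns the posterior mean and posterior precision of mu.
    """
    n = len(data)
    xbar = sum(data) / n
    tau_post = tau0 + n * tau                           # precisions simply add
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post  # precision-weighted average
    return mu_post, tau_post


def posterior_mean_known_variance_var(data, mu0, var0, var):
    """The same update written in terms of variances (reciprocated precisions)."""
    n = len(data)
    xbar = sum(data) / n
    var_post = 1.0 / (n / var + 1.0 / var0)
    mu_post = (n * xbar / var + mu0 / var0) * var_post
    return mu_post, var_post
```

Running both on the same data with var = 1/tau yields identical posteriors, illustrating why the precision form is the more convenient parameterization.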
With known mean
For a set of i.i.d. </wiki/I.i.d.> normally distributed data points *X*
of size /n/ where each individual point /x/ follows x \sim
\mathcal{N}(\mu, \sigma^2) with known mean /μ/, the conjugate prior
</wiki/Conjugate_prior> of the variance </wiki/Variance> has an inverse
gamma distribution </wiki/Inverse_gamma_distribution> or a scaled
inverse chi-squared distribution
</wiki/Scaled_inverse_chi-squared_distribution>. The two are equivalent
except for having different parameterizations. Although the inverse
gamma is more commonly used, we use the scaled inverse chi-squared for
the sake of convenience. The prior for /σ/^2 is as follows:
p(\sigma^2|\nu_0,\sigma_0^2) =
\frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2}\right)}~
\frac{\exp\left[\frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}}
\propto
\frac{\exp\left[\frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}}
The likelihood function </wiki/Likelihood_function> from above, written
in terms of the variance, is:
\begin{align} p(\mathbf{X}|\mu,\sigma^2) &=
\left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}}
\exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2\right] \\ &=
\left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}}
\exp\left[-\frac{S}{2\sigma^2}\right] \end{align}
where
S = \sum_{i=1}^n (x_i-\mu)^2.
Then:
\begin{align} p(\sigma^2|\mathbf{X}) &\propto p(\mathbf{X}|\sigma^2)
p(\sigma^2) \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}}
\exp\left[-\frac{S}{2\sigma^2}\right]
\frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2}\right)}~
\frac{\exp\left[\frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}}
\\ &\propto
\left(\frac{1}{\sigma^2}\right)^{\frac{n}{2}}
\frac{1}{(\sigma^2)^{1+\frac{\nu_0}{2}}}
\exp\left[-\frac{S}{2\sigma^2} + \frac{-\nu_0 \sigma_0^2}{2\sigma^2}\right]
\\ &= \frac{1}{(\sigma^2)^{1+\frac{\nu_0+n}{2}}}
\exp\left[-\frac{\nu_0 \sigma_0^2 + S}{2\sigma^2}\right] \end{align}
The above is also a scaled inverse chi-squared distribution where
\begin{align} \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0
\sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2 \end{align}
or equivalently
\begin{align} \nu_0' &= \nu_0 + n \\ {\sigma_0^2}' &= \frac{\nu_0
\sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2}{\nu_0+n} \end{align}
Reparameterizing in terms of an inverse gamma distribution
</wiki/Inverse_gamma_distribution>, the result is:
\begin{align} \alpha' &= \alpha + \frac{n}{2} \\ \beta' &= \beta +
\frac{\sum_{i=1}^n (x_i-\mu)^2}{2} \end{align}
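These updates are equally direct in code; a minimal sketch of the known-mean variance update, together with the inverse-gamma reparameterization (function names and toy values are our own):

```python
def posterior_variance_known_mean(data, mu, nu0, sigma0_sq):
    """Conjugate update for the variance of a normal with known mean mu.

    Prior: sigma^2 ~ Scaled-Inv-chi^2(nu0, sigma0_sq).
    Returns the posterior hyperparameters (nu', sigma^2').
    """
    n = len(data)
    # sum of squared deviations from the *known* mean
    S = sum((x - mu) ** 2 for x in data)
    return nu0 + n, (nu0 * sigma0_sq + S) / (nu0 + n)


def to_inverse_gamma(nu, sigma_sq):
    """Equivalent inverse gamma parameterization: alpha = nu/2, beta = nu*sigma_sq/2."""
    return nu / 2.0, nu * sigma_sq / 2.0
```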
With unknown mean and unknown variance
For a set of i.i.d. </wiki/I.i.d.> normally distributed data points *X*
of size /n/ where each individual point /x/ follows x \sim
\mathcal{N}(\mu, \sigma^2) with unknown mean /μ/ and unknown variance
</wiki/Variance> /σ/^2, a combined (multivariate) conjugate prior
</wiki/Conjugate_prior> is placed over the mean and variance, consisting
of a normal-inverse-gamma distribution
</wiki/Normal-inverse-gamma_distribution>. Logically, this originates as
follows:
1. From the analysis of the case with unknown mean but known variance,
   we see that the update equations involve sufficient statistics
   </wiki/Sufficient_statistic> computed from the data consisting of
   the mean of the data points and the total variance of the data
   points, computed in turn from the known variance divided by the
   number of data points.
2. From the analysis of the case with unknown variance but known mean,
   we see that the update equations involve sufficient statistics over
   the data consisting of the number of data points and sum of squared
   deviations </wiki/Sum_of_squared_deviations>.
3. Keep in mind that the posterior update values serve as the prior
   distribution when further data is handled. Thus, we should logically
   think of our priors in terms of the sufficient statistics just
   described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we
   could place independent priors over the mean and variance, with
   fixed estimates of the average mean, total variance, number of data
   points used to compute the variance prior, and sum of squared
   deviations. Note however that in reality, the total variance of the
   mean depends on the unknown variance, and the sum of squared
   deviations that goes into the variance prior (appears to) depend on
   the unknown mean. In practice, the latter dependence is relatively
   unimportant: Shifting the actual mean shifts the generated points by
   an equal amount, and on average the squared deviations will remain
   the same. This is not the case, however, with the total variance of
   the mean: As the unknown variance increases, the total variance of
   the mean will increase proportionately, and we would like to capture
   this dependence.
5. This suggests that we create a /conditional prior/ of the mean on
   the unknown variance, with a hyperparameter specifying the mean of
   the pseudo-observations </wiki/Pseudo-observation> associated with
   the prior, and another parameter specifying the number of
   pseudo-observations. This number serves as a scaling parameter on
   the variance, making it possible to control the overall variance of
   the mean relative to the actual variance parameter. The prior for
   the variance also has two hyperparameters, one specifying the sum of
   squared deviations of the pseudo-observations associated with the
   prior, and another specifying once again the number of
   pseudo-observations. Note that each of the priors has a
   hyperparameter specifying the number of pseudo-observations, and in
   each case this controls the relative variance of that prior. These
   are given as two separate hyperparameters so that the variance (aka
   the confidence) of the two priors can be controlled separately.
6. This leads immediately to the normal-inverse-gamma distribution
   </wiki/Normal-inverse-gamma_distribution>, which is the product of
   the two distributions just defined, with conjugate priors
   </wiki/Conjugate_prior> used (an inverse gamma distribution
   </wiki/Inverse_gamma_distribution> over the variance, and a normal
   distribution over the mean, /conditional/ on the variance) and with
   the same four parameters just defined.

The priors are normally defined as follows:
\begin{align} p(\mu|\sigma^2; \mu_0, n_0) &\sim
\mathcal{N}(\mu_0,\sigma^2/n_0) \\ p(\sigma^2; \nu_0,\sigma_0^2)
&\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2)
\end{align}
The update equations can be derived, and look as follows:
\begin{align} \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \\ \mu_0' &=
\frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\ n_0' &= n_0 + n \\ \nu_0' &=
\nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n
(x_i-\bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2 \end{align}
The respective numbers of pseudo-observations add the number of actual
observations to them. The new mean hyperparameter is once again a
weighted average, this time weighted by the relative numbers of
observations. Finally, the update for \nu_0'{\sigma_0^2}' is similar to
the case with known mean, but in this case the sum of squared deviations
is taken with respect to the observed data mean rather than the true
mean, and as a result a new "interaction term" needs to be added to take
care of the additional error source stemming from the deviation between
prior and data mean.
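Putting all four update equations together, a sketch of the combined update (the function name and toy values are our own):

```python
def nig_update(data, mu0, n0, nu0, sigma0_sq):
    """Normal-inverse-gamma conjugate update for unknown mean and variance.

    Prior: mu | sigma^2 ~ N(mu0, sigma^2/n0);
           sigma^2 ~ Scaled-Inv-chi^2(nu0, sigma0_sq).
    Returns the posterior hyperparameters (mu0', n0', nu0', sigma0_sq').
    """
    n = len(data)
    xbar = sum(data) / n
    # sum of squared deviations from the *data* mean, not the true mean
    S = sum((x - xbar) ** 2 for x in data)
    # mean weighted by pseudo- and actual observation counts
    mu_post = (n0 * mu0 + n * xbar) / (n0 + n)
    # the interaction term covers the deviation between prior and data mean
    nu_sigma_sq = nu0 * sigma0_sq + S + (n0 * n) / (n0 + n) * (mu0 - xbar) ** 2
    return mu_post, n0 + n, nu0 + n, nu_sigma_sq / (nu0 + n)
```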
Proof:
The prior distributions are
\begin{align} p(\mu|\sigma^2; \mu_0, n_0) &\sim
\mathcal{N}(\mu_0,\sigma^2/n_0) =
\frac{1}{\sqrt{2\pi\frac{\sigma^2}{n_0}}}
\exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ &\propto
(\sigma^2)^{-1/2}
\exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ p(\sigma^2;
\nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2,
\nu_0\sigma_0^2/2) \\ &=
\frac{(\sigma_0^2\nu_0/2)^{\nu_0/2}}{\Gamma(\nu_0/2)}~\frac{\exp\left[
\frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\nu_0/2}}
\\ &\propto {(\sigma^2)^{-(1+\nu_0/2)}} \exp\left[ \frac{-\nu_0
\sigma_0^2}{2 \sigma^2}\right] \end{align}
Therefore, the joint prior is
\begin{align} p(\mu,\sigma^2; \mu_0, n_0, \nu_0,\sigma_0^2) &=
p(\mu|\sigma^2; \mu_0, n_0)\,p(\sigma^2; \nu_0,\sigma_0^2) \\
&\propto (\sigma^2)^{-(\nu_0+3)/2}
\exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 +
n_0(\mu-\mu_0)^2\right)\right] \end{align}
The likelihood function </wiki/Likelihood_function> from the section
above with known variance is:
\begin{align} p(\mathbf{X}|\mu,\sigma^2) &=
\left(\frac{1}{2\pi\sigma^2}\right)^{n/2}
\exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i
-\mu)^2\right)\right] \end{align}
Writing it in terms of variance rather than precision, we get:
\begin{align} p(\mathbf{X}|\mu,\sigma^2) &=
\left(\frac{1}{2\pi\sigma^2}\right)^{n/2}
\exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right)\right] \\ &\propto {\sigma^2}^{-n/2}
\exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x}
-\mu)^2\right)\right] \end{align}
where S = \sum_{i=1}^n(x_i-\bar{x})^2.
Therefore, the posterior is (dropping the hyperparameters as
conditioning factors):
\begin{align} p(\mu,\sigma^2|\mathbf{X}) & \propto p(\mu,\sigma^2)
\, p(\mathbf{X}|\mu,\sigma^2) \\ & \propto (\sigma^2)^{-(\nu_0+3)/2}
\exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 +
n_0(\mu-\mu_0)^2\right)\right] {\sigma^2}^{-n/2}
\exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x}
-\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2}
\exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S +
n_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2\right)\right] \\ &=
(\sigma^2)^{-(\nu_0+n+3)/2}
\exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0
n}{n_0+n}(\mu_0-\bar{x})^2 + (n_0+n)\left(\mu-\frac{n_0\mu_0 +
n\bar{x}}{n_0 + n}\right)^2\right)\right] \\ & \propto
(\sigma^2)^{-1/2}
\exp\left[-\frac{n_0+n}{2\sigma^2}\left(\mu-\frac{n_0\mu_0 +
n\bar{x}}{n_0 + n}\right)^2\right] \\ & \quad\times
(\sigma^2)^{-(\nu_0/2+n/2+1)}
\exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0
n}{n_0+n}(\mu_0-\bar{x})^2\right)\right] \\ & =
\mathcal{N}_{\mu|\sigma^2}\left(\frac{n_0\mu_0 + n\bar{x}}{n_0 + n},
\frac{\sigma^2}{n_0+n}\right) \cdot {\rm
IG}_{\sigma^2}\left(\frac12(\nu_0+n), \frac12\left(\nu_0\sigma_0^2 +
S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right). \end{align}
In other words, the posterior distribution has the form of a product of
a normal distribution over /p/(/μ/|/σ/^2) times an inverse gamma
distribution over /p/(/σ/^2), with parameters that are the same as the
update equations above.
Occurrence
The occurrence of normal distribution in practical problems can be
loosely classified into four categories:
1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is
   justified by the central limit theorem
   </wiki/Central_limit_theorem>;
3. Distributions modeled as normal: the normal distribution being the
   distribution with maximum entropy
   </wiki/Principle_of_maximum_entropy> for a given mean and variance; and
4. Regression problems: the normal distribution being found after
   systematic effects have been modeled sufficiently well.
Exact normality
</wiki/File:QHarmonicOscillator.png>
The ground state of a quantum harmonic oscillator
</wiki/Quantum_harmonic_oscillator> has the Gaussian distribution
</wiki/Gaussian_distribution>.
Certain quantities in physics </wiki/Physics> are distributed normally,
as was first demonstrated by James Clerk Maxwell
</wiki/James_Clerk_Maxwell>. Examples of such quantities are:
* Velocities of the molecules in the ideal gas </wiki/Ideal_gas>. More
  generally, velocities of the particles in any system in
  thermodynamic equilibrium will have normal distribution, due to the
  maximum entropy principle </wiki/Maximum_entropy_principle>.
* Probability density function of a ground state in a quantum harmonic
  oscillator </wiki/Quantum_harmonic_oscillator>.
* The position of a particle that experiences diffusion
  </wiki/Diffusion>. If initially the particle is located at a
  specific point (that is, its probability distribution is the Dirac
  delta function </wiki/Dirac_delta_function>), then after time /t/
  its location is described by a normal distribution with variance
  /t/, which satisfies the diffusion equation
  </wiki/Diffusion_equation> \frac{\partial}{\partial t} f(x,t) =
  \frac{1}{2} \frac{\partial^2}{\partial x^2} f(x,t). If the initial
  location is given by a certain density function /g/(/x/), then the
  density at time /t/ is the convolution </wiki/Convolution> of /g/
  and the normal PDF.
Approximate normality
/Approximately/ normal distributions occur in many situations, as
explained by the central limit theorem </wiki/Central_limit_theorem>.
When the outcome is produced by many small effects acting /additively
and independently/, its distribution will be close to normal. The normal
approximation will not be valid if the effects act multiplicatively
(instead of additively), or if there is a single external influence that
has a considerably larger magnitude than the rest of the effects.
* In counting problems, where the central limit theorem includes a
  discrete-to-continuum approximation and where infinitely divisible
  </wiki/Infinite_divisibility> and decomposable
  </wiki/Indecomposable_distribution> distributions are involved, such as
  o Binomial random variables </wiki/Binomial_distribution>,
    associated with binary response variables;
  o Poisson random variables </wiki/Poisson_distribution>,
    associated with rare events;
* Thermal light has a Bose–Einstein
  </wiki/Bose%E2%80%93Einstein_statistics> distribution on very short
  time scales, and a normal distribution on longer timescales due to
  the central limit theorem.
Assumed normality
</wiki/File:Fisher_iris_versicolor_sepalwidth.svg>
Histogram of sepal widths for /Iris versicolor/ from Fisher's Iris
flower data set </wiki/Iris_flower_data_set>, with superimposed
best-fitting normal distribution.
I can only recognize the occurrence of the normal curve – the
Laplacian curve of errors – as a very abnormal phenomenon. It is
roughly approximated to in certain distributions; for this reason,
and on account of its beautiful simplicity, we may, perhaps, use it
as a first approximation, particularly in theoretical investigations.
Pearson (1901 <#CITEREFPearson1901>)
There are statistical methods to empirically test that assumption, see
the above Normality tests </wiki/Normal_distribution#Normality_tests>
section.
* In biology </wiki/Biology>, the /logarithm/ of various variables
  tend to have a normal distribution, that is, they tend to have a
  log-normal distribution </wiki/Log-normal_distribution> (after
  separation on male/female subpopulations), with examples including:
  o Measures of size of living tissue (length, height, skin area,
    weight);^[40] <#cite_note-40>
  o The /length/ of /inert/ appendages (hair, claws, nails, teeth)
    of biological specimens, /in the direction of growth/;
    presumably the thickness of tree bark also falls under this
    category;
  o Certain physiological measurements, such as blood pressure of
    adult humans.
* In finance, in particular the Black–Scholes model
  </wiki/Black%E2%80%93Scholes_model>, changes in the /logarithm/ of
  exchange rates, price indices, and stock market indices are assumed
  normal (these variables behave like compound interest
  </wiki/Compound_interest>, not like simple interest, and so are
  multiplicative). Some mathematicians such as Benoît Mandelbrot
  </wiki/Beno%C3%AEt_Mandelbrot> have argued that log-Lévy
  distributions </wiki/Levy_skew_alpha-stable_distribution>, which
  possess heavy tails </wiki/Heavy_tail>, would be a more
  appropriate model, in particular for the analysis of stock market
  crashes </wiki/Stock_market_crash>.
* Measurement errors </wiki/Propagation_of_uncertainty> in physical
  experiments are often modeled by a normal distribution. This use of
  a normal distribution does not imply that one is assuming the
  measurement errors are normally distributed, rather using the normal
  distribution produces the most conservative predictions possible
  given only knowledge about the mean and variance of the errors.^[41]
  <#cite_note-41>
</wiki/File:FitNormDistr.tif>
Fitted cumulative normal distribution to October rainfall, see
distribution fitting </wiki/Distribution_fitting>
* In standardized testing </wiki/Standardized_testing>, results can be
  made to have a normal distribution by either selecting the number
  and difficulty of questions (as in the IQ test
  </wiki/Intelligence_quotient>) or transforming the raw test scores
  into "output" scores by fitting them to the normal distribution. For
  example, the SAT </wiki/SAT>'s traditional range of 200–800 is based
  on a normal distribution with a mean of 500 and a standard deviation
  of 100.
* Many scores are derived from the normal distribution, including
  percentile ranks </wiki/Percentile_rank> ("percentiles" or
  "quantiles"), normal curve equivalents
  </wiki/Normal_curve_equivalent>, stanines </wiki/Stanine>, z-scores
  </wiki/Standard_score>, and T-scores. Additionally, some behavioral
  statistical procedures assume that scores are normally distributed;
  for example, t-tests </wiki/Student%27s_t-test> and ANOVAs
  </wiki/Analysis_of_variance>. Bell curve grading
  </wiki/Bell_curve_grading> assigns relative grades based on a normal
  distribution of scores.
* In hydrology </wiki/Hydrology> the distribution of long duration
  river discharge or rainfall, e.g. monthly and yearly totals, is
  often thought to be practically normal according to the central
  limit theorem </wiki/Central_limit_theorem>.^[42] <#cite_note-42>
  The blue picture illustrates an example of fitting the normal
  distribution to ranked October rainfalls showing the 90% confidence
  belt </wiki/Confidence_belt> based on the binomial distribution
  </wiki/Binomial_distribution>. The rainfall data are represented by
  plotting positions </wiki/Plotting_position> as part of the
  cumulative frequency analysis
  </w/index.php?title=Cumulative_frequency_analysis&action=edit&redlink=1>.
Produced normality
In regression analysis </wiki/Regression_analysis>, lack of normality in
residuals </wiki/Errors_and_residuals_in_statistics> simply indicates
that the model postulated is inadequate in accounting for the tendency
in the data and needs to be augmented; in other words, normality in
residuals can always be achieved given a properly constructed model.

Generating values from normal distribution
</wiki/File:Planche_de_Galton.jpg>
The bean machine </wiki/Bean_machine>, a device invented by Francis
Galton </wiki/Francis_Galton>, can be called the first generator of
normal random variables. This machine consists of a vertical board with
interleaved rows of pins. Small balls are dropped from the top and then
bounce randomly left or right as they hit the pins. The balls are
collected into bins at the bottom and settle down into a pattern
resembling the Gaussian curve.
In computer simulations, especially in applications of the Monte-Carlo
method </wiki/Monte-Carlo_method>, it is often desirable to generate
values that are normally distributed. The algorithms listed below all
generate the standard normal deviates, since a /N/(/μ/, /σ/^2)
can be generated as /X/ = /μ/ + /σZ/, where /Z/ is standard normal. All
these algorithms rely on the availability of a random number generator
</wiki/Random_number_generator> /U/ capable of producing uniform
</wiki/Uniform_distribution_(continuous)> random variates.
* The most straightforward method is based on the probability integral
  transform </wiki/Probability_integral_transform> property: if /U/ is
  distributed uniformly on (0,1), then Φ^−1(/U/) will have the
  standard normal distribution. The drawback of this method is that it
  relies on calculation of the probit function </wiki/Probit_function>
  Φ^−1, which cannot be done analytically. Some approximate methods
  are described in Hart (1968 <#CITEREFHart1968>) and in the erf
  </wiki/Error_function> article. Wichura^[43] <#cite_note-43> gives a
  fast algorithm for computing this function to 16 decimal places,
  which is used by R </wiki/R_programming_language> to compute random
  variates of the normal distribution.
* An easy to program approximate approach, that relies on the central
  limit theorem </wiki/Central_limit_theorem>, is as follows: generate
  12 uniform /U/(0,1) deviates, add them all up, and subtract 6; the
  resulting random variable will have approximately standard normal
  distribution. In truth, the distribution will be Irwin–Hall
  </wiki/Irwin%E2%80%93Hall_distribution>, which is a 12-section
  eleventh-order polynomial approximation to the normal distribution.
  This random deviate will have a limited range of (−6, 6).^[44]
  <#cite_note-44>
* The Box–Muller method </wiki/Box%E2%80%93Muller_transform> uses two
  independent random numbers /U/ and /V/ distributed uniformly
  </wiki/Uniform_distribution_(continuous)> on (0,1). Then the two
  random variables /X/ and /Y/
  X = \sqrt{-2 \ln U} \, \cos(2 \pi V), \qquad Y = \sqrt{-2 \ln
  U} \, \sin(2 \pi V)
  will both have the standard normal distribution, and will be
  independent </wiki/Independence_(probability_theory)>. This
  formulation arises because for a bivariate normal
  </wiki/Bivariate_normal> random vector (/X/, /Y/) the squared norm
  /X/^2 + /Y/^2 will have the chi-squared distribution with two
  degrees of freedom, which is an easily generated exponential
  </wiki/Exponential_distribution> random variable corresponding to
  the quantity −2ln(/U/) in these equations; and the angle is
  distributed uniformly around the circle, chosen by the random
  variable /V/.
* Marsaglia polar method </wiki/Marsaglia_polar_method> is a
  modification of the Box–Muller method algorithm, which does not
  require computation of functions sin() and cos(). In this method /U/
  and /V/ are drawn from the uniform (−1,1) distribution, and then /S/
  = /U/^2 + /V/^2 is computed. If /S/ is greater or equal to one then
  the method starts over, otherwise the two quantities
  X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}}
  are returned. Again, /X/ and /Y/ will be independent and standard
  normally distributed.
* The Ratio method^[45] <#cite_note-45> is a rejection method. The
  algorithm proceeds as follows:
  o Generate two independent uniform deviates /U/ and /V/;
  o Compute /X/ = √(8//e/) (/V/ − 0.5)//U/;
  o Optional: if /X/^2 ≤ 5 − 4/e/^1/4 /U/ then accept /X/ and
    terminate algorithm;
  o Optional: if /X/^2 ≥ 4/e/^−1.35//U/ + 1.4 then reject /X/ and
    start over from step 1;
  o If /X/^2 ≤ −4 ln /U/ then accept /X/, otherwise start over the
    algorithm.
* The ziggurat algorithm </wiki/Ziggurat_algorithm>^[46]
  <#cite_note-46> is faster than the Box–Muller transform and still
  exact. In about 97% of all cases it uses only two random numbers,
  one random integer and one random uniform, one multiplication and an
  if-test. Only in 3% of the cases, where the combination of those two
  falls outside the "core of the ziggurat" (a kind of rejection
  sampling using logarithms), do exponentials and more uniform random
  numbers have to be employed.
* There is also some investigation^[47] <#cite_note-47> into the
  connection between the fast Hadamard transform
  </wiki/Hadamard_transform> and the normal distribution, since the
  transform employs just addition and subtraction and by the central
  limit theorem random numbers from almost any distribution will be
  transformed into the normal distribution. In this regard a series of
  Hadamard transforms can be combined with random permutations to turn
  arbitrary data sets into normally distributed data.
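As a concrete sketch, the Box–Muller transform and the Marsaglia polar method described above can each be implemented in a few lines (a minimal illustration; vectorization and edge-case handling omitted):

```python
import math
import random

def box_muller(u, v):
    """Box-Muller transform: map two independent Uniform(0,1) deviates
    u, v to two independent standard normal deviates."""
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

def marsaglia_polar(rng=random.random):
    """Marsaglia polar method: rejection sampling on the unit disk,
    avoiding the sin/cos calls of Box-Muller."""
    while True:
        u = 2.0 * rng() - 1.0          # Uniform(-1, 1)
        v = 2.0 * rng() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:              # reject points outside the unit disk
            factor = math.sqrt(-2.0 * math.log(s) / s)
            return u * factor, v * factor
```

To draw from /N/(/μ/, /σ/^2), scale and shift a standard deviate: x = mu + sigma * z.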

Numerical approximations for the normal CDF
The standard normal CDF </wiki/Cumulative_distribution_function> is
widely used in scientific and statistical computing. The values Φ(/x/)
may be approximated very accurately by a variety of methods, such as
numerical integration </wiki/Numerical_integration>, Taylor series
</wiki/Taylor_series>, asymptotic series </wiki/Asymptotic_series> and
continued fractions
</wiki/Gauss%27s_continued_fraction#Of_Kummer.27s_confluent_hypergeometric_function>.
Different approximations are used depending on the desired level of
accuracy.
* Zelen & Severo (1964 <#CITEREFZelenSevero1964>) give the
  approximation for Φ(/x/) for /x/ > 0 with the absolute error
  |/ε/(/x/)| < 7.5·10^−8 (algorithm 26.2.17
  <http://www.math.sfu.ca/~cbm/aands/page_932.htm>):
  \Phi(x) = 1 - \phi(x)\left(b_1t + b_2t^2 + b_3t^3 + b_4t^4 +
  b_5t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1+b_0x},
  where /φ/(/x/) is the standard normal PDF, and /b/_0 = 0.2316419,
  /b/_1 = 0.319381530, /b/_2 = −0.356563782, /b/_3 = 1.781477937,
  /b/_4 = −1.821255978, /b/_5 = 1.330274429.
* Hart (1968 <#CITEREFHart1968>) lists almost a hundred rational
  function </wiki/Rational_function> approximations for the erfc()
  function. His algorithms vary in the degree of complexity and the
  resulting precision, with maximum absolute precision of 24 digits.
  An algorithm by West (2009 <#CITEREFWest2009>) combines Hart's
  algorithm 5666 with a continued fraction </wiki/Continued_fraction>
  approximation in the tail to provide a fast computation algorithm
  with a 16-digit precision.
* Cody (1969 <#CITEREFCody1969>), after recalling that the Hart68
  solution is not suited for /erf/, gives a solution for both /erf/
  and /erfc/, with maximal relative error bound, via Rational
  Chebyshev Approximation </wiki/Rational_function>.
* Marsaglia (2004 <#CITEREFMarsaglia2004>) suggested a simple
  algorithm^[nb 1] <#cite_note-48> based on the Taylor series expansion
  \Phi(x) = \frac12 + \phi(x)\left( x + \frac{x^3}{3} +
  \frac{x^5}{3\cdot5} + \frac{x^7}{3\cdot5\cdot7} +
  \frac{x^9}{3\cdot5\cdot7\cdot9} + \cdots \right)
  for calculating Φ(/x/) with arbitrary precision. The drawback of
  this algorithm is comparatively slow calculation time (for example
  it takes over 300 iterations to calculate the function with 16
  digits of precision when /x/ = 10).
* The GNU Scientific Library </wiki/GNU_Scientific_Library> calculates
  values of the standard normal CDF using Hart's algorithms and
  approximations with Chebyshev polynomials </wiki/Chebyshev_polynomials>.
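The Zelen & Severo formula above translates directly into code; a sketch (the constants are those given in the text; the negative-x branch via symmetry is our own addition):

```python
import math

# Zelen & Severo (1964) coefficients, Abramowitz & Stegun algorithm 26.2.17
B0, B1, B2, B3, B4, B5 = (0.2316419, 0.319381530, -0.356563782,
                          1.781477937, -1.821255978, 1.330274429)

def norm_cdf(x):
    """Standard normal CDF Phi(x); absolute error below 7.5e-8 for x > 0."""
    if x < 0.0:
        return 1.0 - norm_cdf(-x)      # extend to x < 0 by symmetry
    t = 1.0 / (1.0 + B0 * x)
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    # Horner evaluation of b1*t + b2*t^2 + ... + b5*t^5
    poly = t * (B1 + t * (B2 + t * (B3 + t * (B4 + t * B5))))
    return 1.0 - pdf * poly
```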
History
Development
Some authors^[48] <#cite_note-49> ^[49] <#cite_note-50> attribute the
credit for the discovery of the normal distribution to de Moivre
</wiki/Abraham_de_Moivre>, who in 1738^[nb 2] <#cite_note-51> published
in the second edition of his "/The Doctrine of Chances
</wiki/The_Doctrine_of_Chances>/" the study of the coefficients in the
binomial expansion </wiki/Binomial_expansion> of (/a + b/)^/n/. De
Moivre proved that the middle term in this expansion has the approximate
magnitude of \scriptstyle 2/\sqrt{2\pi n}, and that "If /m/ or ½/n/ be a
Quantity infinitely great, then the Logarithm of the Ratio, which a Term
distant from the middle by the Interval /ℓ/, has to the middle Term, is
\scriptstyle -\frac{2\ell\ell}{n}."^[50] <#cite_note-52> Although this
theorem can be interpreted as the first obscure expression for the
normal probability law, Stigler </wiki/Stephen_Stigler> points out that
de Moivre himself did not interpret his results as anything more than
the approximate rule for the binomial coefficients, and in particular de
Moivre lacked the concept of the probability density function.^[51]
<#cite_note-53>
</wiki/File:Carl_Friedrich_Gauss.jpg>
Carl Friedrich Gauss </wiki/Carl_Friedrich_Gauss> discovered the normal
distribution in 1809 as a way to rationalize the method of least squares
</wiki/Method_of_least_squares>.
In 1809 Gauss </wiki/Carl_Friedrich_Gauss> published his monograph
"/Theoria motus corporum coelestium in sectionibus conicis solem
ambientium/" where among other things he introduces several important
statistical concepts, such as the method of least squares
</wiki/Method_of_least_squares>, the method of maximum likelihood
</wiki/Method_of_maximum_likelihood>, and the /normal distribution/.
Gauss used /M/, /M′/, /M′′/, … to denote the measurements of some
unknown quantity /V/, and sought the "most probable" estimator: the one
that maximizes the probability /φ/(/M − V/) · /φ/(/M′ − V/) · /φ/(/M′′ − V/) · …
of obtaining the observed experimental results. In his notation /φΔ/
is the probability law of the measurement errors of magnitude /Δ/. Not
knowing what the function /φ/ is, Gauss requires that his method should
reduce to the well-known answer: the arithmetic mean of the measured
values.^[nb 3] <#cite_note-54> Starting from these principles, Gauss
demonstrates that the only law that rationalizes the choice of
arithmetic mean as an estimator of the location parameter is the normal
law of errors:^[52] <#cite_note-55>
\varphi\mathit{\Delta} = \frac{h}{\surd\pi}\, e^{-\mathrm{hh}\Delta\Delta},
where /h/ is "the measure of the precision of the observations". Using
this normal law as a generic model for errors in the experiments, Gauss
formulates what is now known as the non-linear weighted least squares
(NWLS) method.^[53] <#cite_note-56>
</wiki/File:Pierre-Simon_Laplace.jpg>
Marquis de Laplace </wiki/Pierre-Simon_Laplace> proved the central limit
theorem </wiki/Central_limit_theorem> in 1810, consolidating the
importance of the normal distribution in statistics.
Although Gauss was the first to suggest the normal distribution law,
Laplace </wiki/Pierre_Simon_de_Laplace> made significant
contributions.^[nb 4] <#cite_note-57> It was Laplace who first posed the
problem of aggregating several observations in 1774,^[54]
<#cite_note-58> although his own solution led to the Laplacian
distribution </wiki/Laplacian_distribution>. It was Laplace who first
calculated the value of the integral ∫/e/^−/t/² /dt/ = √/π/
</wiki/Gaussian_integral> in 1782, providing the normalization constant
for the normal distribution.^[55] <#cite_note-59> Finally, it was
Laplace who in 1810 proved and presented to the Academy the fundamental
central limit theorem, which emphasized the theoretical importance of
the normal distribution.^[56] <#cite_note-60>
It is of interest to note that in 1809 an American mathematician Adrain
</wiki/Robert_Adrain> published two derivations of the normal
probability law, simultaneously and independently from Gauss.^[57]
<#cite_note-61> His works remained largely unnoticed by the scientific
community, until in 1871 they were "rediscovered" by Abbe
</wiki/Cleveland_Abbe>.^[58] <#cite_note-62>
In the middle of the 19th century Maxwell </wiki/James_Clerk_Maxwell>
demonstrated that the normal distribution is not just a convenient
mathematical tool, but may also occur in natural phenomena:^[59]
<#cite_note-63> "The number of particles whose velocity, resolved in a
certain direction, lies between /x/ and /x/ + /dx/ is
\mathrm{N}\; \frac{1}{\alpha\;\sqrt\pi}\; e^{-\frac{x^2}{\alpha^2}}dx"
Naming[edit
</w/index.h?title=Normal_distribution&action=edit&section=45>]
Since its introduction, the normal distribution has been known by many
different names: the law of error, the law of facility of errors,
Laplace's second law, Gaussian law, etc. Gauss himself apparently coined
the term with reference to the "normal equations" involved in its
applications, with normal having its technical meaning of orthogonal
rather than "usual".^[60] <#cite_note-64> However, by the end of the
19th century some authors^[nb 5] <#cite_note-65> had started using the
name /normal distribution/, where the word "normal" was used as an
adjective, the term now being seen as a reflection of the fact that
this distribution was seen as typical, common, and thus "normal".
Peirce (one of those authors) once defined "normal" thus: "...the
'normal' is not the average (or any other kind of mean) of what actually
occurs, but of what /would/, in the long run, occur under certain
circumstances."^[61] <#cite_note-66> Around the turn of the 20th century
Pearson </wiki/Karl_Pearson> popularized the term /normal/ as a
designation for this distribution.^[62] <#cite_note-67>
Many years ago I called the Laplace–Gaussian curve the /normal/
curve, which name, while it avoids an international question of
priority, has the disadvantage of leading people to believe that all
other distributions of frequency are in one sense or another 'abnormal'.
Pearson (1920 <#CITEREFPearson1920>)
Also, it was Pearson who first wrote the distribution in terms of the
standard deviation /σ/ as in modern notation. Soon after this, in the
year 1915, Fisher </wiki/Ronald_Fisher> added the location parameter to
the formula for the normal distribution, expressing it in the way it is
written nowadays:
df = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-m)^2}{2\sigma^2}}\,dx
The term "standard normal", which denotes the normal distribution with
zero mean and unit variance, came into general use around the 1950s,
appearing in the popular textbooks by P. G. Hoel (1947) "/Introduction to
mathematical statistics/" and A. M. Mood (1950) "/Introduction to the
theory of statistics/".^[63] <#cite_note-68>
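Fisher's parameterization and the standard normal are related by the substitution z = (x − m)/σ, which rescales any normal density to N(0, 1). A minimal sketch using Python's standard library (`statistics.NormalDist`, available since Python 3.8); the values m = 10, σ = 2, x = 12.5 are illustrative only:

```python
import math
from statistics import NormalDist

def density(x, m, sigma):
    """Fisher's 1915 form of the normal density, with location m and scale sigma."""
    return math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Standardizing with z = (x - m)/sigma reduces any normal distribution to the
# standard normal N(0, 1); the densities are related by f(x) = phi(z) / sigma.
m, sigma, x = 10.0, 2.0, 12.5
z = (x - m) / sigma
standard = NormalDist()  # mean 0, standard deviation 1
assert math.isclose(density(x, m, sigma), standard.pdf(z) / sigma)
```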
The "Gaussian distribution" is named after
</wiki/List_of_topics_named_after_Carl_Friedrich_Gauss> Carl Friedrich
Gauss </wiki/Carl_Friedrich_Gauss>, who introduced the distribution in
1809 as a way of rationalizing the method of least squares
</wiki/Method_of_least_squares> as outlined above. Among English
speakers, both "normal distribution" and "Gaussian distribution" are in
common use, with different terms preferred by different communities.
See also
Statistics portal </wiki/Portal:Statistics>
* Behrens–Fisher problem </wiki/Behrens%E2%80%93Fisher_problem>: the
long-standing problem of testing whether two normal samples with
different variances have the same mean
* Bhattacharyya distance </wiki/Bhattacharyya_distance>: method used
to separate mixtures of normal distributions
* Erdős–Kac theorem </wiki/Erd%C5%91s%E2%80%93Kac_theorem>: on the
occurrence of the normal distribution in number theory
</wiki/Number_theory>
* Gaussian blur </wiki/Gaussian_blur>: convolution
</wiki/Convolution>, which uses the normal distribution as a kernel
* Sum of normally distributed random variables
</wiki/Sum_of_normally_distributed_random_variables>
* Normally distributed and uncorrelated does not imply independent
</wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent>
* Tweedie distribution </wiki/Tweedie_distribution>: the normal
distribution is a member of the family of Tweedie exponential
dispersion models </wiki/Exponential_dispersion_model>
* Z-test </wiki/Z-test>: using the normal distribution
* Rayleigh distribution </wiki/Rayleigh_distribution>
Notes
1. *Jump up ^ <#cite_ref-48>* For example, this algorithm is given in
the article Bc programming language
</wiki/Bc_programming_language#A_translated_C_function>.
2. *Jump up ^ <#cite_ref-51>* De Moivre first published his findings in
1733, in a pamphlet "Approximatio ad Summam Terminorum Binomii (/a +
b/)^/n/ in Seriem Expansi" that was designated for private
circulation only. It was not until the year 1738 that he made his
results publicly available. The original pamphlet was reprinted
several times; see for example Walker (1985 <#CITEREFWalker1985>).
3. *Jump up ^ <#cite_ref-54>* "It has been customary certainly to
regard as an axiom the hypothesis that if any quantity has been
determined by several direct observations, made under the same
circumstances and with equal care, the arithmetical mean of the
observed values affords the most probable value, if not rigorously,
yet very nearly at least, so that it is always most safe to adhere
to it." Gauss (1809 <#CITEREFGauss1809>, section 177)
4. *Jump up ^ <#cite_ref-57>* "My custom of terming the curve the
Gauss–Laplacian or /normal/ curve saves us from proportioning the
merit of discovery between the two great astronomer mathematicians."
quote from Pearson (1905 <#CITEREFPearson1905>, p. 189)
5. *Jump up ^ <#cite_ref-65>* Besides those specifically referenced
here, such use is encountered in the works of Peirce
</wiki/Charles_Sanders_Peirce>, Galton </wiki/Francis_Galton>
(Galton (1889 <#CITEREFGalton1889>, chapter V)) and Lexis
</wiki/Wilhelm_Lexis> (Lexis (1878 <#CITEREFLexis1878>), Rohrbasser
& Véron (2003 <#CITEREFRohrbasserV.C3.A9ron2003>)) c.
1875.^[/citation needed </wiki/Wikipedia:Citation_needed>/]
Citations
1. *Jump up ^ <#cite_ref-1>* /Normal Distribution/
<http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3>, Gale
Encyclopedia of Psychology
2. *Jump up ^ <#cite_ref-2>* Casella & Berger (2001
<#CITEREFCasellaBerger2001>, p. 102)
3. *Jump up ^ <#cite_ref-3>* Cover, Thomas M.; Thomas, Joy A. (2006).
/Elements of Information Theory/. John Wiley and Sons. p. 254.
4. *Jump up ^ <#cite_ref-4>* Park, Sung Y.; Bera, Anil K. (2009).
"Maximum Entropy Autoregressive Conditional Heteroskedasticity
Model"
<http://www.wise.xmu.edu.cn/Masters/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf>.
/Journal of Econometrics/ (Elsevier) *150* (2): 219–230. doi
</wiki/Digital_object_identifier>:10.1016/j.jeconom.2008.12.014
<http://dx.doi.org/10.1016%2Fj.jeconom.2008.12.014>. Retrieved
2011-06-02.
5. *Jump up ^ <#cite_ref-5>* For the proof see Gaussian integral
</wiki/Gaussian_integral>
6. *Jump up ^ <#cite_ref-6>* Stigler (1982 <#CITEREFStigler1982>)
7. *Jump up ^ <#cite_ref-7>* Halperin, Hartley & Hoel (1965
<#CITEREFHalperinHartleyHoel1965>, item 7)
8. *Jump up ^ <#cite_ref-8>* McPherson (1990 <#CITEREFMcPherson1990>,
p. 110)
9. *Jump up ^ <#cite_ref-9>* Bernardo & Smith (2000
<#CITEREFBernardoSmith2000>, p. 121)
10. ^ Jump up to: ^/*a*/ <#cite_ref-PR2.1.4_10-0> ^/*b*/
<#cite_ref-PR2.1.4_10-1> ^/*c*/ <#cite_ref-PR2.1.4_10-2> Patel &
Read (1996 <#CITEREFPatelRead1996>, [2.1.4])
11. *Jump up ^ <#cite_ref-11>* Fan (1991 <#CITEREFFan1991>, p. 1258)
12. *Jump up ^ <#cite_ref-12>* Patel & Read (1996
<#CITEREFPatelRead1996>, [2.1.8])
13. *Jump up ^ <#cite_ref-13>* Bryc (1995 <#CITEREFBryc1995>, p. 23)
14. *Jump up ^ <#cite_ref-14>* Bryc (1995 <#CITEREFBryc1995>, p. 24)
15. *Jump up ^ <#cite_ref-15>* Scott, Clayton; Nowak, Robert (August 7,
2003). "The Q-function" <http://cnx.org/content/m11537/1.2/>.
/Connexions/.
16. *Jump up ^ <#cite_ref-16>* Barak, Ohad (April 6, 2006). "Q Function
and Error Function" <http://www.eng.tau.ac.il/~jo/academic/Q.pdf>.
Tel Aviv University.
17. *Jump up ^ <#cite_ref-17>* Weisstein, Eric W.
</wiki/Eric_W._Weisstein>, "Normal Distribution Function"
<http://mathworld.wolfram.com/NormalDistributionFunction.html>,
/MathWorld </wiki/MathWorld>/.
18. *Jump up ^ <#cite_ref-18>* WolframAlpha.com
<http://www.wolframalpha.com/input/?i=Table%5B{N(Erf(n/Sqrt(2)),+12),+N(1-Erf(n/Sqrt(2)),+12),+N(1/(1-Erf(n/Sqrt(2))),+12)},+{n,1,6}%5D>
19. *Jump up ^ <#cite_ref-19>* part 1
<http://www.wolframalpha.com/input/?i=Table%5BSqrt%282%29*InverseErf%28x%29%2C+{x%2C+N%28{8%2F10%2C+9%2F10%2C+19%2F20%2C+49%2F50%2C+99%2F100%2C+995%2F1000%2C+998%2F1000}%2C+13%29}%5D>,
part 2
<http://www.wolframalpha.com/input/?i=Table%5B%7BN(1-10%5E(-x),9),N(Sqrt(2)*InverseErf(1-10%5E(-x)),13)%7D,%7Bx,3,9%7D%5D>
20. *Jump up ^ <#cite_ref-20>* Normal Approximation to Poisson(λ)
Distribution, http://www.stat.ucla.edu/
<http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.htm>
21. *Jump up ^ <#cite_ref-21>* Bryc (1995 <#CITEREFBryc1995>, p. 27)
22. *Jump up ^ <#cite_ref-22>* Patel & Read (1996
<#CITEREFPatelRead1996>, [2.3.6])
23. *Jump up ^ <#cite_ref-23>* Galambos & Simonelli (2004
<#CITEREFGalambosSimonelli2004>, Theorem 3.5)
24. ^ Jump up to: ^/*a*/ <#cite_ref-Bryc_1995_35_24-0> ^/*b*/
<#cite_ref-Bryc_1995_35_24-1> Bryc (1995 <#CITEREFBryc1995>, p. 35)
25. ^ Jump up to: ^/*a*/ <#cite_ref-LK_25-0> ^/*b*/ <#cite_ref-LK_25-1>
Lukacs & King (1954 <#CITEREFLukacsKing1954>)
26. *Jump up ^ <#cite_ref-26>* Quine, M.P. (1993) "On three
characterisations of the normal distribution"
<http://www.math.uni.wroc.pl/~pms/publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263>,
/Probability and Mathematical Statistics/, 14 (2), 257–263
27. *Jump up ^ <#cite_ref-27>* UIUC, Lecture 21. /The Multivariate
Normal Distribution/
<http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf>,
21.6: "Individually Gaussian Versus Jointly Gaussian".
28. *Jump up ^ <#cite_ref-28>* Edward L. Melnick and Aaron Tenenbein,
"Misspecifications of the Normal Distribution", /The American
Statistician </wiki/The_American_Statistician>/, volume 36, number 4,
November 1982, pages 372–373
29. *Jump up ^ <#cite_ref-29>* http://www.allisons.org/ll/MML/KL/Normal/
30. *Jump up ^ <#cite_ref-30>* Jordan, Michael I. (February 8, 2010).
"Stat260: Bayesian Modeling and Inference: The Conjugate Prior for
the Normal Distribution"
<http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf>.
31. *Jump up ^ <#cite_ref-31>* Cover & Thomas (2006
<#CITEREFCoverThomas2006>, p. 254)
32. *Jump up ^ <#cite_ref-32>* Amari & Nagaoka (2000
<#CITEREFAmariNagaoka2000>)
33. *Jump up ^ <#cite_ref-33>* /Normal Product Distribution/
<http://mathworld.wolfram.com/NormalProductDistribution.html>, MathWorld
34. *Jump up ^ <#cite_ref-34>* Eugene Lukacs (1942). "A Characterization
of the Normal Distribution"
<http://www.jstor.org/stable/2236166>. /The Annals of
Mathematical Statistics/ *13* (1): 91–93. doi
</wiki/Digital_object_identifier>:10.1214/aoms/1177731647
<http://dx.doi.org/10.1214%2Faoms%2F1177731647>.

35. *Jump up ^ <#cite_ref-35>* D. Basu and R. G. Laha (1954). "On Some
Characterizations of the Normal Distribution"
<http://www.jstor.org/stable/25048183>. /Sankhyā
</wiki/Sankhya_(journal)>/ *13* (4): 359–362.
36. *Jump up ^ <#cite_ref-36>* Lehmann, E. L. (1997). /Testing
Statistical Hypotheses/ (2nd ed.). Springer. p. 199. ISBN
</wiki/International_Standard_Book_Number> 0-387-94919-4
</wiki/Special:BookSources/0-387-94919-4>.
37. ^ Jump up to: ^/*a*/ <#cite_ref-Kri127_37-0> ^/*b*/
<#cite_ref-Kri127_37-1> Krishnamoorthy (2006
<#CITEREFKrishnamoorthy2006>, p. 127)
38. *Jump up ^ <#cite_ref-38>* Krishnamoorthy (2006
<#CITEREFKrishnamoorthy2006>, p. 130)
39. *Jump up ^ <#cite_ref-39>* Krishnamoorthy (2006
<#CITEREFKrishnamoorthy2006>, p. 133)
40. *Jump up ^ <#cite_ref-40>* Huxley (1932 <#CITEREFHuxley1932>)
41. *Jump up ^ <#cite_ref-41>* Jaynes, Edwin T. (2003). /Probability
Theory: The Logic of Science/. Cambridge University Press. pp. 592–593.
42. *Jump up ^ <#cite_ref-42>* Oosterbaan, Roland J. (1994). "Chapter 6:
Frequency and Regression Analysis of Hydrologic Data"
<http://www.waterlog.info/pdf/freqtxt.pdf>. In Ritzema, Henk P.
/Drainage Principles and Applications, Publication 16/ (second
revised ed.). Wageningen, The Netherlands: International Institute
for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN
</wiki/International_Standard_Book_Number> 90-70754-33-9
</wiki/Special:BookSources/90-70754-33-9>.
43. *Jump up ^ <#cite_ref-43>* Wichura, Michael J. (1988). "Algorithm
AS241: The Percentage Points of the Normal Distribution". /Applied
Statistics/ (Blackwell Publishing) *37* (3): 477–484. doi
</wiki/Digital_object_identifier>:10.2307/2347330
<http://dx.doi.org/10.2307%2F2347330>. JSTOR </wiki/JSTOR> 2347330
<//www.jstor.org/stable/2347330>.
44. *Jump up ^ <#cite_ref-44>* Johnson, Kotz & Balakrishnan (1995
<#CITEREFJohnsonKotzBalakrishnan1995>, Equation (26.48))
45. *Jump up ^ <#cite_ref-45>* Kinderman & Monahan (1977
<#CITEREFKindermanMonahan1977>)
46. *Jump up ^ <#cite_ref-46>* Marsaglia & Tsang (2000
<#CITEREFMarsagliaTsang2000>)
47. *Jump up ^ <#cite_ref-47>* Wallace (1996 <#CITEREFWallace1996>)
48. *Jump up ^ <#cite_ref-49>* Johnson, Kotz & Balakrishnan (1994
<#CITEREFJohnsonKotzBalakrishnan1994>, p. 85)
49. *Jump up ^ <#cite_ref-50>* Le Cam & Lo Yang (2000
<#CITEREFLe_CamLo_Yang2000>, p. 74)
50. *Jump up ^ <#cite_ref-52>* De Moivre, Abraham (1733), Corollary I;
see Walker (1985 <#CITEREFWalker1985>, p. 77)
51. *Jump up ^ <#cite_ref-53>* Stigler (1986 <#CITEREFStigler1986>, p. 76)
52. *Jump up ^ <#cite_ref-55>* Gauss (1809 <#CITEREFGauss1809>, section 177)
53. *Jump up ^ <#cite_ref-56>* Gauss (1809 <#CITEREFGauss1809>, section 179)
54. *Jump up ^ <#cite_ref-58>* Laplace (1774 <#CITEREFLaplace1774>,
Problem III)
55. *Jump up ^ <#cite_ref-59>* Pearson (1905 <#CITEREFPearson1905>, p. 189)
56. *Jump up ^ <#cite_ref-60>* Stigler (1986 <#CITEREFStigler1986>, p. 144)
57. *Jump up ^ <#cite_ref-61>* Stigler (1978 <#CITEREFStigler1978>, p. 243)
58. *Jump up ^ <#cite_ref-62>* Stigler (1978 <#CITEREFStigler1978>, p. 244)
59. *Jump up ^ <#cite_ref-63>* Maxwell (1860 <#CITEREFMaxwell1860>, p. 23)
60. *Jump up ^ <#cite_ref-64>* Jaynes, Edwin T.; /Probability Theory:
The Logic of Science/, Ch 7
<http://www-biba.inrialpes.fr/Jaynes/cc07s.pdf>
61. *Jump up ^ <#cite_ref-66>* Peirce, Charles S. (c. 1909 MS),
/Collected Papers </wiki/Charles_Sanders_Peirce_bibliography#CP>/ v.
6, paragraph 327
62. *Jump up ^ <#cite_ref-67>* Kruskal & Stigler (1997
<#CITEREFKruskalStigler1997>)
63. *Jump up ^ <#cite_ref-68>* "Earliest uses (entry STANDARD NORMAL
CURVE)" <http://jeff560.tripod.com/s.html>.
References
* Aldrich, John; Miller, Jeff. "Earliest Uses of Symbols in
Probability and Statistics" <http://jeff560.tripod.com/stat.html>.
* Aldrich, John; Miller, Jeff. "Earliest Known Uses of Some of the
Words of Mathematics" <http://jeff560.tripod.com/mathword.html>. In
particular, the entries for "bell-shaped and bell curve"
<http://jeff560.tripod.com/b.html>, "normal (distribution)"
<http://jeff560.tripod.com/n.html>, "Gaussian"
<http://jeff560.tripod.com/g.html>, and "Error, law of error, theory
of errors, etc." <http://jeff560.tripod.com/e.html>.
* Amari, Shun-ichi; Nagaoka, Hiroshi (2000). /Methods of Information
Geometry/. Oxford University Press. ISBN
</wiki/International_Standard_Book_Number> 0-8218-0531-2
</wiki/Special:BookSources/0-8218-0531-2>.
* Bernardo, José M.; Smith, Adrian F. M. (2000). /Bayesian Theory/.
Wiley. ISBN </wiki/International_Standard_Book_Number> 0-471-49464-X
</wiki/Special:BookSources/0-471-49464-X>.
* Bryc, Wlodzimierz (1995). /The Normal Distribution:
Characterizations with Applications/. Springer-Verlag. ISBN
</wiki/International_Standard_Book_Number> 0-387-97990-5
</wiki/Special:BookSources/0-387-97990-5>.
* Casella, George; Berger, Roger L. (2001). /Statistical Inference/
(2nd ed.). Duxbury. ISBN
</wiki/International_Standard_Book_Number> 0-534-24312-6
</wiki/Special:BookSources/0-534-24312-6>.
* Cody, William J. (1969). "Rational Chebyshev Approximations for the
Error Function </wiki/Error_function>". /Mathematics of
Computation/ *23* (107): 631–638. doi
</wiki/Digital_object_identifier>:10.1090/S0025-5718-1969-0247736-4
<http://dx.doi.org/10.1090%2FS0025-5718-1969-0247736-4>.
* Cover, Thomas M.; Thomas, Joy A. (2006). /Elements of Information
Theory/. John Wiley and Sons.
* de Moivre, Abraham </wiki/Abraham_de_Moivre> (1738). /The Doctrine
of Chances </wiki/The_Doctrine_of_Chances>/. ISBN
</wiki/International_Standard_Book_Number> 0-8218-2103-2
</wiki/Special:BookSources/0-8218-2103-2>.
* Fan, Jianqing (1991). "On the optimal rates of convergence for
nonparametric deconvolution problems". /The Annals of Statistics/
*19* (3): 1257–1272. doi
</wiki/Digital_object_identifier>:10.1214/aos/1176348248
<http://dx.doi.org/10.1214%2Faos%2F1176348248>. JSTOR
</wiki/JSTOR> 2241949 <//www.jstor.org/stable/2241949>.
* Galton, Francis (1889). /Natural Inheritance/
<http://galton.org/books/natural-inheritance/pdf/galton-nat-inh-1up-clean.pdf>.
London, UK: Richard Clay and Sons.
* Galambos, Janos; Simonelli, Italo (2004). /Products of Random
Variables: Applications to Problems of Physics and to Arithmetical
Functions/. Marcel Dekker, Inc. ISBN
</wiki/International_Standard_Book_Number> 0-8247-5402-6
</wiki/Special:BookSources/0-8247-5402-6>.

* Gauss, Carolo Friderico </wiki/Carl_Friedrich_Gauss> (1809).
/Theoria motvs corporvm coelestivm in sectionibvs conicis Solem
ambientivm/ [/Theory of the Motion of the Heavenly Bodies Moving
about the Sun in Conic Sections/] (in Latin). English translation
<http://books.google.com/books?id=1TIAAAAAQAAJ>.
* Gould, Stephen Jay </wiki/Stephen_Jay_Gould> (1981). /The Mismeasure
of Man </wiki/The_Mismeasure_of_Man>/ (first ed.). W. W. Norton.
ISBN </wiki/International_Standard_Book_Number> 0-393-01489-4
</wiki/Special:BookSources/0-393-01489-4>.
* Halperin, Max; Hartley, Herman O.; Hoel, Paul G. (1965).
"Recommended Standards for Statistical Symbols and Notation. COPSS
Committee on Symbols and Notation". /The American Statistician/ *19*
(3): 12–14. doi </wiki/Digital_object_identifier>:10.2307/2681417
<http://dx.doi.org/10.2307%2F2681417>. JSTOR </wiki/JSTOR> 2681417
<//www.jstor.org/stable/2681417>.
* Hart, John F.; et al. (1968). /Computer Approximations/. New York,
NY: John Wiley & Sons, Inc. ISBN
</wiki/International_Standard_Book_Number> 0-88275-642-7
</wiki/Special:BookSources/0-88275-642-7>.
* Hazewinkel, Michiel, ed. (2001), "Normal Distribution"
<http://www.encyclopediaofmath.org/index.php?title=p/n067460>,
/Encyclopedia of Mathematics </wiki/Encyclopedia_of_Mathematics>/,
Springer </wiki/Springer_Science%2BBusiness_Media>, ISBN
</wiki/International_Standard_Book_Number> 978-1-55608-010-4
</wiki/Special:BookSources/978-1-55608-010-4>
* Herrnstein, Richard J.; Murray, Charles
</wiki/Charles_Murray_(author)> (1994). /The Bell Curve
</wiki/The_Bell_Curve>: Intelligence and Class Structure in American
Life/. Free Press </wiki/Free_Press_(publisher)>. ISBN
</wiki/International_Standard_Book_Number> 0-02-914673-9
</wiki/Special:BookSources/0-02-914673-9>.
* Huxley, Julian S. (1932). /Problems of Relative Growth/. London.
ISBN </wiki/International_Standard_Book_Number> 0-486-61114-0
</wiki/Special:BookSources/0-486-61114-0>. OCLC
</wiki/OCLC> 476909537 <//www.worldcat.org/oclc/476909537>.
* Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy
(1994). /Continuous Univariate Distributions, Volume 1/. Wiley. ISBN
</wiki/International_Standard_Book_Number> 0-471-58495-9
</wiki/Special:BookSources/0-471-58495-9>.
* Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy
(1995). /Continuous Univariate Distributions, Volume 2/. Wiley. ISBN
</wiki/International_Standard_Book_Number> 0-471-58494-0
</wiki/Special:BookSources/0-471-58494-0>.
* Kinderman, Albert J.; Monahan, John F. (1977). "Computer Generation
of Random Variables Using the Ratio of Uniform Deviates". /ACM
Transactions on Mathematical Software/ *3* (3): 257–260. doi
</wiki/Digital_object_identifier>:10.1145/355744.355750
<http://dx.doi.org/10.1145%2F355744.355750>.
* Krishnamoorthy, Kalimuthu (2006). /Handbook of Statistical
Distributions with Applications/. Chapman & Hall/CRC. ISBN
</wiki/International_Standard_Book_Number> 1-58488-635-8
</wiki/Special:BookSources/1-58488-635-8>.
* Kruskal, William H.; Stigler, Stephen M. (1997). Spencer, Bruce D.,
ed. /Normative Terminology: 'Normal' in Statistics and Elsewhere/.
Statistics and Public Policy. Oxford University Press. ISBN
</wiki/International_Standard_Book_Number> 0-19-852341-6
</wiki/Special:BookSources/0-19-852341-6>.
* Laplace, Pierre-Simon de </wiki/Pierre-Simon_Laplace> (1774).
"Mémoire sur la probabilité des causes par les événements"
<http://gallica.bnf.fr/ark:/12148/bpt6k77596b/f32>. /Mémoires de
l'Académie royale des Sciences de Paris (Savants étrangers), tome
6/: 621–656. Translated by Stephen M. Stigler in /Statistical
Science/ *1* (3), 1986: JSTOR </wiki/JSTOR> 2245476
<http://www.jstor.org/stable/2245476>.
* Laplace, Pierre-Simon (1812). /Théorie analytique des probabilités/
[/Analytical theory of probabilities
</wiki/Analytical_theory_of_probabilities>/].
* Le Cam, Lucien; Lo Yang, Grace (2000). /Asymptotics in Statistics:
Some Basic Concepts/ (second ed.). Springer. ISBN
</wiki/International_Standard_Book_Number> 0-387-95036-2
</wiki/Special:BookSources/0-387-95036-2>.
* Lexis, Wilhelm (1878). "Sur la durée normale de la vie humaine et
sur la théorie de la stabilité des rapports statistiques". /Annales
de démographie internationale/ (Paris) *II*: 447–462.
* Lukacs, Eugene; King, Edgar P. (1954). "A Property of the Normal
Distribution". /The Annals of Mathematical Statistics/ *25* (2):
389–394. doi
</wiki/Digital_object_identifier>:10.1214/aoms/1177728796
<http://dx.doi.org/10.1214%2Faoms%2F1177728796>. JSTOR
</wiki/JSTOR> 2236741 <//www.jstor.org/stable/2236741>.
* McPherson, Glen (1990). /Statistics in Scientific Investigation: Its
Basis, Application and Interpretation/. Springer-Verlag. ISBN
</wiki/International_Standard_Book_Number> 0-387-97137-8
</wiki/Special:BookSources/0-387-97137-8>.
* Marsaglia, George </wiki/George_Marsaglia>; Tsang, Wai Wan (2000).
"The Ziggurat Method for Generating Random Variables"
<http://www.jstatsoft.org/v05/i08/paper>. /Journal of Statistical
Software/ *5* (8).
* Wallace, C. S. </wiki/Chris_Wallace_(computer_scientist)> (1996).
"Fast pseudo-random generators for normal and exponential variates".
/ACM Transactions on Mathematical Software/ *22* (1): 119–127. doi
</wiki/Digital_object_identifier>:10.1145/225545.225554
<http://dx.doi.org/10.1145%2F225545.225554>.
* Marsaglia, George (2004). "Evaluating the Normal Distribution"
<http://www.jstatsoft.org/v11/i05/paper>. /Journal of Statistical
Software/ *11* (4).
* Maxwell, James Clerk </wiki/James_Clerk_Maxwell> (1860). "V.
Illustrations of the dynamical theory of gases. Part I: On the
motions and collisions of perfectly elastic spheres". /Philosophical
Magazine, series 4/ *19* (124): 19–32. doi
</wiki/Digital_object_identifier>:10.1080/14786446008642818
<http://dx.doi.org/10.1080%2F14786446008642818>.
* Patel, Jagdish K.; Read, Campbell B. (1996). /Handbook of the Normal
Distribution/ (2nd ed.). CRC Press. ISBN
</wiki/International_Standard_Book_Number> 0-8247-9342-0
</wiki/Special:BookSources/0-8247-9342-0>.
* Pearson, Karl </wiki/Karl_Pearson> (1905). "'Das Fehlergesetz und
seine Verallgemeinerungen durch Fechner und Pearson'. A rejoinder".
/Biometrika/ *4* (1): 169–212. JSTOR </wiki/JSTOR> 2331536
<//www.jstor.org/stable/2331536>.
* Pearson, Karl (1920). "Notes on the History of Correlation".
/Biometrika/ *13* (1): 25–45. doi
</wiki/Digital_object_identifier>:10.1093/biomet/13.1.25
<http://dx.doi.org/10.1093%2Fbiomet%2F13.1.25>. JSTOR
</wiki/JSTOR> 2331722 <//www.jstor.org/stable/2331722>.
* Rohrbasser, Jean-Marc; Véron, Jacques (2003). "Wilhelm Lexis: The
Normal Length of Life as an Expression of the "Nature of Things""
<http://www.persee.fr/web/revues/home/prescript/article/pop_1634-2941_2003_num_58_3_18444>.
/Population/ *58* (3): 303–322. doi
</wiki/Digital_object_identifier>:10.3917/pope.303.0303
<http://dx.doi.org/10.3917%2Fpope.303.0303>.
* Stigler, Stephen M. </wiki/Stephen_Stigler> (1978). "Mathematical
Statistics in the Early States". /The Annals of Statistics/ *6* (2):
239–265. doi
</wiki/Digital_object_identifier>:10.1214/aos/1176344123
<http://dx.doi.org/10.1214%2Faos%2F1176344123>. JSTOR
</wiki/JSTOR> 2958876 <//www.jstor.org/stable/2958876>.
* Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for
the Normal". /The American Statistician/ *36* (2): 137–138. doi
</wiki/Digital_object_identifier>:10.2307/2684031
<http://dx.doi.org/10.2307%2F2684031>. JSTOR </wiki/JSTOR> 2684031
<//www.jstor.org/stable/2684031>.
* Stigler, Stephen M. (1986). /The History of Statistics: The
Measurement of Uncertainty before 1900/. Harvard University Press.
ISBN </wiki/International_Standard_Book_Number> 0-674-40340-1
</wiki/Special:BookSources/0-674-40340-1>.
* Stigler, Stephen M. (1999). /Statistics on the Table/. Harvard
University Press. ISBN
</wiki/International_Standard_Book_Number> 0-674-83601-4
</wiki/Special:BookSources/0-674-83601-4>.
* Walker, Helen M. (1985). "De Moivre on the Law of Normal
Probability"
<http://www.york.ac.uk/depts/maths/histstat/demoivre.pdf>. In Smith,
David Eugene. /A Source Book in Mathematics/. Dover. ISBN
</wiki/International_Standard_Book_Number> 0-486-64690-4
</wiki/Special:BookSources/0-486-64690-4>.
* Weisstein, Eric W. </wiki/Eric_W._Weisstein>. "Normal Distribution"
<http://mathworld.wolfram.com/NormalDistribution.html>. MathWorld
</wiki/MathWorld>.
* West, Graeme (2009). "Better Approximations to Cumulative Normal
Functions" <http://www.wilmott.com/pdfs/090721_west.pdf>. /Wilmott
Magazine/: 70–76.
* Zelen, Marvin; Severo, Norman C. (1964). /Probability Functions
(chapter 26)/ <http://www.math.sfu.ca/~cbm/aands/page_931.htm>.
/Handbook of mathematical functions with formulas, graphs, and
mathematical tables </wiki/Abramowitz_and_Stegun>/, by Abramowitz,
M. </wiki/Milton_Abramowitz>; and Stegun, I. A.
</wiki/Irene_A._Stegun>: National Bureau of Standards. New York, NY:
Dover. ISBN </wiki/International_Standard_Book_Number> 0-486-61272-4
</wiki/Special:BookSources/0-486-61272-4>.
External links

Wikimedia Commons has media related to /*Normal distribution
<//commons.wikimedia.org/wiki/Category:Normal_distribution>*/.
* Hazewinkel, Michiel, ed. (2001), "Normal distribution"
<http://www.encyclopediaofmath.org/index.php?title=p/n067460>,
/Encyclopedia of Mathematics </wiki/Encyclopedia_of_Mathematics>/,
Springer </wiki/Springer_Science%2BBusiness_Media>, ISBN
</wiki/International_Standard_Book_Number> 978-1-55608-010-4
</wiki/Special:BookSources/978-1-55608-010-4>
* Normal Distribution Video Tutorial Part 1-2
<https://www.youtube.com/watch?v=kB_kYUbS_ig> on YouTube </wiki/YouTube>
* An 8-foot-tall (2.4 m) Probability Machine (named Sir Francis)
comparing stock market returns to the randomness of the beans
dropping through the quincunx pattern
<https://www.youtube.com/watch?v=AUSKTk9ENzg> on YouTube
</wiki/YouTube>. Link originating from Index Funds Advisors
<http://www.ifa.com/>
[hide <#>]
* v </wiki/Tempate:Common_univariate_probabiity_distributions>
* t </wiki/Tempate_tak:Common_univariate_probabiity_distributions>
* e
<//en.wikipedia.org/w/index.php?tite=Tempate:Common_univariate_probabiity
_distributions&action=edit>
Some common univariate </wiki/Univariate_distribution> probabiity
distributions </wiki/Probabiity_distribution>
Continuous </wiki/Continuous_probabiity_distribution>
*
*
*
*
*
*
*
*
*
*
*
*
*

beta </wiki/Beta_distribution>
Cauchy </wiki/Cauchy_distribution>
chi-squared </wiki/Chi-squared_distribution>
exponentia </wiki/Exponentia_distribution>
/F/ </wiki/F-distribution>
gamma </wiki/Gamma_distribution>
Lapace </wiki/Lapace_distribution>
og-norma </wiki/Log-norma_distribution>
*norma*
Pareto </wiki/Pareto_distribution>
Student's /t/ </wiki/Student%27s_t-distribution>
uniform </wiki/Uniform_distribution_(continuous)>
Weibu </wiki/Weibu_distribution>

Discrete </wiki/Discrete_probabiity_distribution>
*
*
*
*
*
*
*

Bernoui </wiki/Bernoui_distribution>
binomia </wiki/Binomia_distribution>
discrete uniform </wiki/Uniform_distribution_(discrete)>
geometric </wiki/Geometric_distribution>
hypergeometric </wiki/Hypergeometric_distribution>
negative binomia </wiki/Negative_binomia_distribution>
Poisson </wiki/Poisson_distribution>

List of probabiity distributions </wiki/List_of_probabiity_distributions>


[show <#>]
* v </wiki/Tempate:Probabiity_distributions>
* t </wiki/Tempate_tak:Probabiity_distributions>
* e
<//en.wikipedia.org/w/index.php?tite=Tempate:Probabiity_distributions&act
ion=edit>
Probabiity distributions </wiki/Probabiity_distribution>
[show <#>]
Discrete univariate with finite support
</wiki/List_of_probabiity_distributions#With_finite_support>
*
*
*
*
*

Benford </wiki/Benford%27s_aw>
Bernoui </wiki/Bernoui_distribution>
Beta-binomia </wiki/Beta-binomia_distribution>
binomia </wiki/Binomia_distribution>
categorica </wiki/Categorica_distribution>

*
*
*
*
*
*

hypergeometric </wiki/Hypergeometric_distribution>
Poisson binomia </wiki/Poisson_binomia_distribution>
Rademacher </wiki/Rademacher_distribution>
discrete uniform </wiki/Uniform_distribution_(discrete)>
Zipf </wiki/Zipf%27s_aw>
ZipfMandebrot </wiki/Zipf%E2%80%93Mandebrot_aw>

[show <#>]
Discrete univariate with infinite support
</wiki/List_of_probabiity_distributions#With_infinite_support>
* beta negative binomia </wiki/Beta_negative_binomia_distribution>
* Bore </wiki/Bore_distribution>
* ConwayMaxwePoisson
</wiki/Conway%E2%80%93Maxwe%E2%80%93Poisson_distribution>
* discrete phase-type </wiki/Discrete_phase-type_distribution>
* Deaporte </wiki/Deaporte_distribution>
* extended negative binomia
</wiki/Extended_negative_binomia_distribution>
* GaussKuzmin </wiki/Gauss%E2%80%93Kuzmin_distribution>
* geometric </wiki/Geometric_distribution>
* ogarithmic </wiki/Logarithmic_distribution>
* negative binomia </wiki/Negative_binomia_distribution>
* paraboic fracta </wiki/Paraboic_fracta_distribution>
* Poisson </wiki/Poisson_distribution>
* Skeam </wiki/Skeam_distribution>
* YueSimon </wiki/Yue%E2%80%93Simon_distribution>
* zeta </wiki/Zeta_distribution>
[show <#>]
Continuous univariate supported on a bounded interva, e.g. [0,1]
</wiki/List_of_probabiity_distributions#Supported_on_a_bounded_interva>
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*

Arcsine </wiki/Arcsine_distribution>
ARGUS </wiki/ARGUS_distribution>
BadingNichos </wiki/Bading%E2%80%93Nichos_mode>
Bates </wiki/Bates_distribution>
Beta </wiki/Beta_distribution>
Beta rectanguar </wiki/Beta_rectanguar_distribution>
IrwinHa </wiki/Irwin%E2%80%93Ha_distribution>
Kumaraswamy </wiki/Kumaraswamy_distribution>
ogit-norma </wiki/Logit-norma_distribution>
Noncentra beta </wiki/Noncentra_beta_distribution>
raised cosine </wiki/Raised_cosine_distribution>
Trianguar </wiki/Trianguar_distribution>
U-quadratic </wiki/U-quadratic_distribution>
uniform </wiki/Uniform_distribution_(continuous)>
Wigner semicirce </wiki/Wigner_semicirce_distribution>

[show <#>]
Continuous univariate supported on a semi-infinite interva, usuay
[0,)
</wiki/List_of_probabiity_distributions#Supported_on_semi-infinite_intervas.2C
_usuay_.5B0.2C.E2.88.9E.29>
*
*
*
*
*

Benini </wiki/Benini_distribution>
Benktander 1st kind </wiki/Benktander_Gibrat_distribution>
Benktander 2nd kind </wiki/Benktander_Weibu_distribution>
Beta prime </wiki/Beta_prime_distribution>
Burr </wiki/Burr_distribution>

*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*
*

chi-squared </wiki/Chi-squared_distribution>
chi </wiki/Chi_distribution>
Coxian </wiki/Phase-type_distribution#Coxian_distribution>
Dagum </wiki/Dagum_distribution>
Davis </wiki/Davis_distribution>
EL </wiki/Exponential-Logarithmic_distribution>
Erlang </wiki/Erlang_distribution>
exponential </wiki/Exponential_distribution>
/F/ </wiki/F-distribution>
folded normal </wiki/Folded_normal_distribution>
Flory–Schulz </wiki/Flory%E2%80%93Schulz_distribution>
Fréchet </wiki/Fr%C3%A9chet_distribution>
Gamma </wiki/Gamma_distribution>
Gamma/Gompertz </wiki/Gamma/Gompertz_distribution>
generalized inverse Gaussian
</wiki/Generalized_inverse_Gaussian_distribution>
Gompertz </wiki/Gompertz_distribution>
half-logistic </wiki/Half-logistic_distribution>
half-normal </wiki/Half-normal_distribution>
Hotelling's T-squared </wiki/Hotelling%27s_T-squared_distribution>
hyper-Erlang </wiki/Hyper-Erlang_distribution>
hyperexponential </wiki/Hyperexponential_distribution>
hypoexponential </wiki/Hypoexponential_distribution>
inverse chi-squared </wiki/Inverse-chi-squared_distribution> (scaled
inverse chi-squared </wiki/Scaled_inverse_chi-squared_distribution>)
inverse Gaussian </wiki/Inverse_Gaussian_distribution>
inverse gamma </wiki/Inverse-gamma_distribution>
Kolmogorov </wiki/Kolmogorov_distribution>
Lévy </wiki/L%C3%A9vy_distribution>
log-Cauchy </wiki/Log-Cauchy_distribution>
log-Laplace </wiki/Log-Laplace_distribution>
log-logistic </wiki/Log-logistic_distribution>
log-normal </wiki/Log-normal_distribution>
matrix-exponential </wiki/Matrix-exponential_distribution>
Maxwell–Boltzmann </wiki/Maxwell%E2%80%93Boltzmann_distribution>
Maxwell–Jüttner </wiki/Maxwell%E2%80%93J%C3%BCttner_distribution>
Mittag–Leffler </wiki/Mittag%E2%80%93Leffler_distribution>
Nakagami </wiki/Nakagami_distribution>
noncentral chi-squared </wiki/Noncentral_chi-squared_distribution>
Pareto </wiki/Pareto_distribution>
phase-type </wiki/Phase-type_distribution>
poly-Weibull </wiki/Poly-Weibull_distribution>
Rayleigh </wiki/Rayleigh_distribution>
relativistic Breit–Wigner
</wiki/Relativistic_Breit%E2%80%93Wigner_distribution>
Rice </wiki/Rice_distribution>
Rosin–Rammler </wiki/Rosin%E2%80%93Rammler_distribution>
shifted Gompertz </wiki/Shifted_Gompertz_distribution>
truncated normal </wiki/Truncated_normal_distribution>
type-2 Gumbel </wiki/Type-2_Gumbel_distribution>
Weibull </wiki/Weibull_distribution>
Wilks' lambda </wiki/Wilks%27_lambda_distribution>

Continuous univariate supported on the whole real line (−∞, ∞)
</wiki/List_of_probability_distributions#Supported_on_the_whole_real_line>
* Cauchy </wiki/Cauchy_distribution>
* exponential power </wiki/Generalized_normal_distribution#Version_1>
* Fisher's z </wiki/Fisher%27s_z-distribution>


generalized normal </wiki/Generalized_normal_distribution>
generalized hyperbolic </wiki/Generalised_hyperbolic_distribution>
geometric stable </wiki/Geometric_stable_distribution>
Gumbel </wiki/Gumbel_distribution>
Holtsmark </wiki/Holtsmark_distribution>
hyperbolic secant </wiki/Hyperbolic_secant_distribution>
Johnson SU </wiki/Johnson_SU_distribution>
Landau </wiki/Landau_distribution>
Laplace </wiki/Laplace_distribution>
Linnik </wiki/Linnik_distribution>
logistic </wiki/Logistic_distribution>
noncentral t </wiki/Noncentral_t-distribution>
*normal (Gaussian)*
normal-inverse Gaussian </wiki/Normal-inverse_Gaussian_distribution>
skew normal </wiki/Skew_normal_distribution>
slash </wiki/Slash_distribution>
stable </wiki/Stable_distribution>
Student's /t/ </wiki/Student%27s_t-distribution>
type-1 Gumbel </wiki/Type-1_Gumbel_distribution>
Tracy–Widom </wiki/Tracy%E2%80%93Widom_distribution>
variance-gamma </wiki/Variance-gamma_distribution>
Voigt </wiki/Voigt_profile>

Continuous univariate with support whose type varies
</wiki/List_of_probability_distributions#With_variable_support>

generalized extreme value </wiki/Generalized_extreme_value_distribution>
generalized Pareto </wiki/Generalized_Pareto_distribution>
Tukey lambda </wiki/Tukey_lambda_distribution>
q-Gaussian </wiki/Q-Gaussian_distribution>
q-exponential </wiki/Q-exponential_distribution>
q-Weibull </wiki/Q-Weibull_distribution>
shifted log-logistic </wiki/Shifted_log-logistic_distribution>

Mixed continuous-discrete univariate distributions
* rectified Gaussian </wiki/Rectified_Gaussian_distribution>
Multivariate (joint) </wiki/Joint_probability_distribution>
/Discrete/
Ewens </wiki/Ewens%27s_sampling_formula>
multinomial </wiki/Multinomial_distribution>
Dirichlet-multinomial </wiki/Dirichlet-multinomial_distribution>
negative multinomial </wiki/Negative_multinomial_distribution>
/Continuous/
Dirichlet </wiki/Dirichlet_distribution>
Generalized Dirichlet </wiki/Generalized_Dirichlet_distribution>
multivariate normal </wiki/Multivariate_normal_distribution>
Multivariate stable </wiki/Multivariate_stable_distribution>
multivariate Student </wiki/Multivariate_Student_distribution>
normal-scaled inverse gamma
</wiki/Normal-scaled_inverse_gamma_distribution>
normal-gamma </wiki/Normal-gamma_distribution>
/Matrix-valued </wiki/Random_matrix>/
inverse matrix gamma </wiki/Inverse_matrix_gamma_distribution>
inverse-Wishart </wiki/Inverse-Wishart_distribution>
matrix normal </wiki/Matrix_normal_distribution>
matrix t </wiki/Matrix_t-distribution>
matrix gamma </wiki/Matrix_gamma_distribution>
normal-inverse-Wishart </wiki/Normal-inverse-Wishart_distribution>
normal-Wishart </wiki/Normal-Wishart_distribution>
Wishart </wiki/Wishart_distribution>
Directional </wiki/Directional_statistics>
/Univariate (circular) directional </wiki/Directional_statistics>/
Circular uniform </wiki/Circular_uniform_distribution>
univariate von Mises </wiki/Von_Mises_distribution>
wrapped normal </wiki/Wrapped_normal_distribution>
wrapped Cauchy </wiki/Wrapped_Cauchy_distribution>
wrapped exponential </wiki/Wrapped_exponential_distribution>
wrapped Lévy </wiki/Wrapped_L%C3%A9vy_distribution>
/Bivariate (spherical)/
Kent </wiki/Kent_distribution>
/Bivariate (toroidal)/
bivariate von Mises </wiki/Bivariate_von_Mises_distribution>
/Multivariate/
von Mises–Fisher </wiki/Von_Mises%E2%80%93Fisher_distribution>
Bingham </wiki/Bingham_distribution>
Degenerate </wiki/Degenerate_distribution> and singular
</wiki/Singular_distribution>
/Degenerate </wiki/Degenerate_distribution>/
discrete degenerate </wiki/Degenerate_distribution>
Dirac delta function </wiki/Dirac_delta_function>
/Singular </wiki/Singular_distribution>/
Cantor </wiki/Cantor_distribution>
Families
Circular </wiki/Circular_distribution>
compound Poisson </wiki/Compound_Poisson_distribution>
elliptical </wiki/Elliptical_distribution>
exponential </wiki/Exponential_family>
natural exponential </wiki/Natural_exponential_family>
location-scale </wiki/Location-scale_family>
maximum entropy </wiki/Maximum_entropy_probability_distribution>
mixture </wiki/Mixture_density>
Pearson </wiki/Pearson_distribution>
Tweedie </wiki/Tweedie_distribution>
wrapped </wiki/Wrapped_distribution>
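The groupings above classify distributions by their support. A minimal sketch of that idea, using only the Python standard library (the two distributions chosen are merely illustrative): samples from a normal distribution land anywhere on the real line, while exponentiating them produces log-normal samples supported only on (0, ∞).

```python
import math
import random

random.seed(0)

# Normal (Gaussian): supported on the whole real line (-inf, inf)
normal_samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Log-normal: exp of a normal variate, supported on (0, inf)
lognormal_samples = [math.exp(x) for x in normal_samples]

print(min(normal_samples) < 0 < max(normal_samples))  # both signs occur
print(min(lognormal_samples) > 0)                     # strictly positive
```

The same check generalizes: a distribution's support determines which transformations (log, square root, logit) are even well-defined on its samples.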

Retrieved from
"http://en.wikipedia.org/w/index.php?title=Normal_distribution&oldid=625468853"
Categories </wiki/Help:Category>:

* Continuous distributions </wiki/Category:Continuous_distributions>
* Conjugate prior distributions
</wiki/Category:Conjugate_prior_distributions>
* Distributions with conjugate priors
</wiki/Category:Distributions_with_conjugate_priors>
* Normal distribution </wiki/Category:Normal_distribution>
* Exponential family distributions
</wiki/Category:Exponential_family_distributions>
* Stable distributions </wiki/Category:Stable_distributions>
* Probability distributions </wiki/Category:Probability_distributions>
* This page was last modified on 14 September 2014, at 02:35.
* Text is available under the Creative Commons Attribution-ShareAlike
License; additional terms may apply. By using this site, you agree to the
Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of
the Wikimedia Foundation, Inc., a non-profit organization.