Normal distribution
From Wikipedia, the free encyclopedia
This article is about the univariate normal distribution. For normally
distributed vectors, see Multivariate normal distribution
</wiki/Multivariate_normal_distribution>.
Normal
Probability density function
Probability density function for the normal distribution
</wiki/File:Normal_Distribution_PDF.svg>
The red curve is the /standard normal distribution/
Cumulative distribution function
Cumulative distribution function for the normal distribution
</wiki/File:Normal_Distribution_CDF.svg>
Notation \mathcal{N}(\mu,\,\sigma^2)
Parameters μ ∈ *R* mean (location </wiki/Location_parameter>)
σ^2 > 0 variance (squared scale </wiki/Scale_parameter>)
Support </wiki/Support_(mathematics)> /x/ ∈ *R*
pdf </wiki/Probability_density_function>
\frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2 \sigma^2}}
CDF </wiki/Cumulative_distribution_function>
\frac12\left[1 + \operatorname{erf}\left( \frac{x-\mu}{\sigma\sqrt{2}}\right)\right]
Quantile </wiki/Quantile_function>
\mu+\sigma\sqrt{2}\,\operatorname{erf}^{-1}(2F-1)
Mean </wiki/Expected_value> μ
Median </wiki/Median> μ
Mode </wiki/Mode_(statistics)> μ
Variance </wiki/Variance> \sigma^2\,
Skewness </wiki/Skewness> 0
Ex. kurtosis </wiki/Excess_kurtosis> 0
Entropy </wiki/Information_entropy> \frac12 \ln(2 \pi e \, \sigma^2)
MGF </wiki/Moment-generating_function> \exp\{ \mu t + \frac{1}{2}\sigma^2t^2 \}
CF </wiki/Characteristic_function_(probability_theory)> \exp \{ i\mu t - \frac{1}{2}\sigma^2 t^2 \}
Fisher information </wiki/Fisher_information>
\begin{pmatrix}1/\sigma^2&0\\0&1/(2\sigma^4)\end{pmatrix}
In probability theory </wiki/Probability_theory>, the *normal* (or
*Gaussian*) *distribution* is a very commonly occurring continuous
probability distribution </wiki/Continuous_probability_distribution>, a
function that tells the probability that any real observation will fall
between any two real limits or real numbers </wiki/Real_number>, as the
curve approaches zero on either side. Normal distributions are extremely
important in statistics </wiki/Statistics> and are often used in the
natural </wiki/Natural_science> and social sciences
</wiki/Social_science> for real-valued random variables
</wiki/Random_variable> whose distributions are not known.^[1]
<#cite_note-1> ^[2] <#cite_note-2>
The normal distribution is immensely useful because of the central limit
theorem </wiki/Central_limit_theorem>, which states that, under mild
conditions, the mean </wiki/Mean> of many random variables
</wiki/Random_variables> independently drawn from the same distribution
is distributed approximately normally, irrespective of the form of the
original distribution: physical quantities that are expected to be the
sum of many independent processes (such as measurement errors
</wiki/Measurement_error>) often have a distribution very close to the
normal. Moreover, many results and methods (such as propagation of
uncertainty </wiki/Propagation_of_uncertainty> and least squares
</wiki/Least_squares> parameter fitting) can be derived analytically in
explicit form when the relevant variables are normally distributed.
f(x) = \frac{1}{\sigma}\,\phi\left(\frac{x-\mu}{\sigma}\right).
The probability density must be scaled by 1/\sigma so that the integral
is still 1.
If /Z/ is a standard normal deviate, then /X/ = σ/Z/ + μ will have a
normal distribution with expected value μ and standard deviation σ.
Conversely, if /X/ is a general normal deviate, then /Z/ = (/X/ −
μ)/σ will have a standard normal distribution.
Every normal distribution is the exponential of a quadratic function
</wiki/Quadratic_function>:
f(x) = e^{a x^2 + b x + c}
where /a/ is negative and /c/ is b^2/(4a)+\ln(-a/\pi)/2. In this form,
the mean value μ is −/b//(2/a/), and the variance σ^2 is −1/(2/a/).
For the standard normal distribution, /a/ is −1/2, /b/ is zero, and /c/
is −\ln(2\pi)/2.
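This correspondence can be checked numerically; a minimal Python sketch (illustrative only, using just the standard library):

```python
import math

def pdf(x, mu=0.0, sigma=1.0):
    # Normal probability density function
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Standard normal written as exp(a*x^2 + b*x + c) with
# a = -1/2, b = 0, c = -ln(2*pi)/2, as stated above
a, b, c = -0.5, 0.0, -math.log(2 * math.pi) / 2
for x in (-1.3, 0.0, 2.7):
    assert abs(math.exp(a * x**2 + b * x + c) - pdf(x)) < 1e-12

# Mean and variance recovered from the quadratic coefficients
assert -b / (2 * a) == 0.0       # mean = -b/(2a)
assert -1 / (2 * a) == 1.0       # variance = -1/(2a)
```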
Notation[edit
</w/index.php?title=Normal_distribution&action=edit&section=4>]
The standard Gaussian distribution (with zero mean and unit variance) is
often denoted with the Greek letter φ (phi </wiki/Phi_(letter)>).^[7]
<#cite_note-7> The alternative form of the Greek phi letter, ϕ, is
also used quite often.
The normal distribution is also often denoted by /N/(μ, σ^2 ).^[8]
<#cite_note-8> Thus when a random variable /X/ is distributed normally
with mean μ and variance σ^2 , we write
X\ \sim\ \mathcal{N}(\mu,\,\sigma^2).
Alternative parameterization[edit
</w/index.php?title=Normal_distribution&action=edit&section=5>]
Some authors advocate using the precision </wiki/Precision_(statistics)>
τ as the parameter defining the width of the distribution, instead of
the deviation σ or the variance σ^2 . The precision is normally
defined as the reciprocal of the variance, 1/σ^2 .^[9] <#cite_note-9>
The formula for the distribution then becomes
f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{\frac{-\tau(x-\mu)^2}{2}}.
This choice is claimed to have advantages in numerical computations when
σ is very close to zero and to simplify formulas in some contexts, such
as in the Bayesian inference </wiki/Bayesian_statistics> of variables
with multivariate normal distribution
</wiki/Multivariate_normal_distribution>.
Occasionally, the precision τ is 1/σ, the reciprocal of the standard
deviation; so that
f(x) = \frac{\tau}{\sqrt{2\pi}}\, e^{\frac{-\tau^2(x-\mu)^2}{2}}.
According to Stigler, this formulation is advantageous because of a much
simpler and easier-to-remember formula, and the fact that the pdf has unit
height at zero.
Order | Non-central moment | Central moment
1 | μ | 0
2 | μ^2 + σ^2 | σ^2
3 | μ^3 + 3μσ^2 | 0
4 | μ^4 + 6μ^2 σ^2 + 3σ^4 | 3σ^4
5 | μ^5 + 10μ^3 σ^2 + 15μσ^4 | 0
6 | μ^6 + 15μ^4 σ^2 + 45μ^2 σ^4 + 15σ^6 | 15σ^6
7 | μ^7 + 21μ^5 σ^2 + 105μ^3 σ^4 + 105μσ^6 | 0
8 | μ^8 + 28μ^6 σ^2 + 210μ^4 σ^4 + 420μ^2 σ^6 + 105σ^8 | 105σ^8
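The table can be spot-checked in Python, using the standard double-factorial formula for the even central moments of a normal variable (the odd ones vanish by symmetry):

```python
import math

def central_moment(p, sigma):
    # E[(X-mu)^p] for X ~ N(mu, sigma^2): 0 for odd p, sigma^p * (p-1)!! for even p
    if p % 2 == 1:
        return 0.0
    return sigma**p * math.prod(range(p - 1, 0, -2))

def raw_moment(p, mu, sigma):
    # E[X^p] via the binomial theorem applied to X = mu + (X - mu)
    return sum(math.comb(p, k) * mu**(p - k) * central_moment(k, sigma)
               for k in range(p + 1))

mu, s = 2.0, 3.0
assert raw_moment(2, mu, s) == mu**2 + s**2
assert raw_moment(4, mu, s) == mu**4 + 6*mu**2*s**2 + 3*s**4
assert central_moment(6, s) == 15 * s**6
```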
Fourier transform and characteristic function[edit
</w/index.php?title=Normal_distribution&action=edit&section=9>]
About 68% of values drawn from a normal distribution are within one
standard deviation σ away from the mean; about 95% of the values lie
within two standard deviations; and about 99.7% are within three
standard deviations. This fact is known as the 68-95-99.7 (empirical)
rule </wiki/68%E2%80%9395%E2%80%9399.7_rule>, or the /3-sigma rule/.
More precisely, the probability that a normal deviate lies in the range
μ − /n/σ and μ + /n/σ is given by
F(\mu+n\sigma) - F(\mu-n\sigma) = \Phi(n)-\Phi(-n) =
\mathrm{erf}\left(\frac{n}{\sqrt{2}}\right).
To 12 decimal places, the values for /n/ = 1, 2, …, 6 are:^[18]
<#cite_note-18>
/n/ | /F/(μ+/n/σ) − /F/(μ−/n/σ) | i.e. 1 minus … | or 1 in … | OEIS </wiki/On-Line_Encyclopedia_of_Integer_Sequences>
1 | 0.682689492137 | 0.317310507863 | 3.15148718753 | A178647 <//oeis.org/A178647>
2 | 0.954499736104 | 0.045500263896 | 21.9778945080 | A110894 <//oeis.org/A110894>
3 | 0.997300203937 | 0.002699796063 | 370.398347345 |
4 | 0.999936657516 | 0.000063342484 | 15787.1927673 |
5 | 0.999999426697 | 0.000000573303 | 1744277.89362 |
6 | 0.999999998027 | 0.000000001973 | 506797345.897 |
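These values follow directly from the erf formula above and can be reproduced with the standard library:

```python
import math

# P(|X - mu| <= n*sigma) = erf(n / sqrt(2)) for n = 1..6
values = [math.erf(n / math.sqrt(2)) for n in range(1, 7)]
assert abs(values[0] - 0.682689492137) < 1e-11   # 68% rule
assert abs(values[1] - 0.954499736104) < 1e-11   # 95% rule
assert abs(values[2] - 0.997300203937) < 1e-11   # 99.7% rule
```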
Quantile function[edit
</w/index.php?title=Normal_distribution&action=edit&section=13>]
The quantile function </wiki/Quantile_function> of a distribution is the
inverse of the cumulative distribution function. The quantile function
of the standard normal distribution is called the probit function
</wiki/Probit_function>, and can be expressed in terms of the inverse
error function </wiki/Error_function>:
\Phi^{-1}(p)\; =\; \sqrt2\;\operatorname{erf}^{-1}(2p - 1), \quad
p\in(0,1).
For a normal random variable with mean μ and variance σ^2 , the
quantile function is
F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu +
\sigma\sqrt2\,\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).
The quantile </wiki/Quantile> \Phi^{-1}(p) of the standard normal
distribution is commonly denoted as /z_p /. These values are used in
hypothesis testing </wiki/Hypothesis_testing>, construction of
confidence intervals </wiki/Confidence_interval> and Q-Q plots
</wiki/Q-Q_plot>. A normal random variable /X/ will exceed μ + σ/z_p /
with probability 1 − /p/, and will lie outside the interval μ ± σ/z_p /
with probability 2(1 − /p/). In particular, the quantile /z/_0.975 is 1.96
</wiki/1.96>; therefore a normal random variable will lie outside the
interval μ ± 1.96σ in only 5% of cases.
The following table gives the multiple /n/ of σ such that /X/ will lie
in the range μ ± /n/σ with a specified probability /p/. These values
are useful to determine tolerance intervals </wiki/Tolerance_interval>.
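The quantile function is available in Python's standard library as `statistics.NormalDist.inv_cdf`, which can be used to confirm the 1.96 figure:

```python
from statistics import NormalDist

# z_p = Phi^{-1}(p): the standard normal quantile
z975 = NormalDist().inv_cdf(0.975)
assert abs(z975 - 1.959963984540054) < 1e-9   # the familiar 1.96

# F^{-1}(p) = mu + sigma * z_p for a general normal variable
mu, sigma = 10.0, 2.0
q = NormalDist(mu, sigma).inv_cdf(0.975)
assert abs(q - (mu + sigma * z975)) < 1e-9
```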
Zero-variance limit[edit
</w/index.php?title=Normal_distribution&action=edit&section=14>]
In the limit </wiki/Limit_(mathematics)> when σ tends to zero, the
probability density /f/(/x/) eventually tends to zero at any /x/ ≠ μ,
but grows without limit if /x/ = μ, while its integral remains equal
to 1. Therefore, the normal distribution cannot be defined as an
ordinary function </wiki/Function_(mathematics)> when σ = 0.
However, one can define the normal distribution with zero variance as a
generalized function </wiki/Generalized_function>; specifically, as
Dirac's "delta function" </wiki/Dirac_delta_function> δ translated by
the mean μ, that is /f/(/x/) = δ(/x/−μ). Its CDF is then the
Heaviside step function </wiki/Heaviside_step_function> translated by
the mean μ, namely
F(x) = \begin{cases} 0 & \text{if }x < \mu \\ 1 & \text{if }x \geq
\mu \end{cases}
The central limit theorem[edit
</w/index.php?title=Normal_distribution&action=edit&section=15>]
</wiki/File:De_moivre-laplace.gif>
As the number of discrete events increases, the function begins to
resemble a normal distribution
</wiki/File:Dice_sum_central_limit_theorem.svg>
Comparison of probability density functions, /p/(/k/) for the sum of /n/
fair 6-sided dice to show their convergence to a normal distribution
with increasing /n/, in accordance with the central limit theorem. In the
bottom-right graph, smoothed profiles of the previous graphs are
rescaled, superimposed and compared with a normal distribution (black
curve).
Main article: Central limit theorem </wiki/Central_limit_theorem>
The central limit theorem states that under certain (fairly common)
conditions, the sum of many random variables will have an approximately
normal distribution. More specifically, where /X/_1 , …, /X_n / are
independent and identically distributed
</wiki/Independent_and_identically_distributed> random variables with
the same arbitrary distribution, zero mean, and variance σ^2 ; and /Z/
is their mean scaled by \sqrt{n}
Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)
Then, as /n/ increases, the probability distribution of /Z/ will tend to
the normal distribution with zero mean and variance σ^2 .
The theorem can be extended to variables /X_i / that are not independent
and/or not identically distributed if certain constraints are placed on
the degree of dependence and the moments of the distributions.
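The convergence of a sum of fair dice toward the normal shape can be checked exactly by convolution; a minimal Python sketch (the choice of 30 dice is illustrative):

```python
import math

def dice_sum_pmf(n):
    """Exact pmf of the sum of n fair six-sided dice, via repeated convolution."""
    pmf = [1.0]                       # pmf[s] = P(sum = s); start with the empty sum
    for _ in range(n):
        new = [0.0] * (len(pmf) + 6)
        for s, p in enumerate(pmf):
            for face in range(1, 7):
                new[s + face] += p / 6.0
        pmf = new
    return pmf

n = 30
mu = n * 3.5                          # mean of one die is 3.5
var = n * 35.0 / 12.0                 # variance of one die is 35/12
pmf = dice_sum_pmf(n)
assert abs(sum(pmf) - 1.0) < 1e-9

# The exact pmf near the mean is close to the matching normal density
normal_at_mean = 1.0 / math.sqrt(2 * math.pi * var)
assert abs(pmf[int(mu)] - normal_at_mean) / normal_at_mean < 0.02
```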
Many test statistics </wiki/Test_statistic>, scores
</wiki/Score_(statistics)>, and estimators </wiki/Estimator> encountered
in practice contain sums of certain random variables in them, and even
more estimators can be represented as sums of random variables through
the use of influence functions </wiki/Influence_function_(statistics)>.
The central limit theorem implies that those statistical parameters will
have asymptotically normal distributions.
The central limit theorem also implies that certain distributions can be
approximated by the normal distribution, for example:
* The binomial distribution </wiki/Binomial_distribution> /B/(/n/,
/p/) is approximately normal
</wiki/De_Moivre%E2%80%93Laplace_theorem> with mean /np/ and
variance /np/(1 − /p/) for large /n/ and for /p/ not too close to zero
or one.
* The Poisson distribution </wiki/Poisson_distribution> with parameter
λ is approximately normal with mean λ and variance λ, for
large values of λ.^[20] <#cite_note-20>
* The chi-squared distribution </wiki/Chi-squared_distribution> χ^2
(/k/) is approximately normal with mean /k/ and variance 2/k/, for
large /k/.
* The Student's t-distribution </wiki/Student%27s_t-distribution>
/t/(ν) is approximately normal with mean 0 and variance 1 when ν
is large.
Whether these approximations are sufficiently accurate depends on the
purpose for which they are needed, and the rate of convergence to the
normal distribution. It is typically the case that such approximations
are less accurate in the tails of the distribution.
A general upper bound for the approximation error in the central limit
theorem is given by the Berry–Esseen theorem
</wiki/Berry%E2%80%93Esseen_theorem>, improvements of the approximation
are given by the Edgeworth expansions </wiki/Edgeworth_expansion>.
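The binomial case can be illustrated numerically; a small Python sketch comparing the exact pmf at the mean with the normal density (n = 1000, p = 0.3 are illustrative values):

```python
import math

def binom_pmf(k, n, p):
    # Exact binomial probability P(X = k)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, var):
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 1000, 0.3
mu, var = n * p, n * p * (1 - p)      # mean np, variance np(1-p)
exact = binom_pmf(300, n, p)
approx = normal_pdf(300, mu, var)
assert abs(exact - approx) / exact < 0.01
```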
Operations on normal deviates[edit
</w/index.php?title=Normal_distribution&action=edit&section=16>]
The family of normal distributions is closed under linear
transformations: if /X/ is normally distributed with mean μ and
standard deviation σ, then the variable /Y/ = /aX/ + /b/, for any real
numbers /a/ and /b/, is also normally distributed, with mean /a/μ + /b/
and standard deviation |/a/|σ.
Also if /X/_1 and /X/_2 are two independent
</wiki/Independence_(probability_theory)> normal random variables, with
means μ_1 , μ_2 and standard deviations σ_1 , σ_2 , then their
sum /X/_1 + /X/_2 will also be normally distributed,^[proof] with mean
μ_1 + μ_2 and variance σ_1^2 + σ_2^2 .
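Both closure properties are implemented by Python's `statistics.NormalDist`, which supports scalar arithmetic and addition of independent normals:

```python
from statistics import NormalDist

# Linear transformation: Y = aX + b has mean a*mu + b and stdev |a|*sigma
X = NormalDist(mu=1.0, sigma=2.0)
Y = 3 * X + 5
assert (Y.mean, Y.stdev) == (8.0, 6.0)

# Sum of independent normals: means add, variances add
X1, X2 = NormalDist(1.0, 3.0), NormalDist(2.0, 4.0)
S = X1 + X2
assert S.mean == 3.0
assert abs(S.stdev - 5.0) < 1e-12   # sqrt(3^2 + 4^2)
```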
Other properties[edit
</w/index.php?title=Normal_distribution&action=edit&section=19>]
1. If the characteristic function φ_X of some random variable /X/ is
of the form φ_X (/t/) = /e/^/Q/(/t/) , where /Q/(/t/) is a
polynomial </wiki/Polynomial>, then the *Marcinkiewicz theorem*
(named after Józef Marcinkiewicz </wiki/J%C3%B3zef_Marcinkiewicz>)
asserts that /Q/ can be at most a quadratic polynomial, and
therefore /X/ is a normal random variable.^[24]
<#cite_note-Bryc_1995_35-24> The consequence of this result is that
the normal distribution is the only distribution with a finite
number (two) of non-zero cumulants </wiki/Cumulant>.
2. If /X/ and /Y/ are jointly normal
</wiki/Multivariate_normal_distribution> and uncorrelated
</wiki/Uncorrelated>, then they are independent
</wiki/Independence_(probability_theory)>. The requirement that /X/
and /Y/ should be /jointly/ normal is essential; without it the
property does not hold.^[27] <#cite_note-27> ^[28] <#cite_note-28>
^[proof]
</wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent>
For non-normal random variables uncorrelatedness does not imply
independence.
3. The Kullback–Leibler divergence
</wiki/Kullback%E2%80%93Leibler_divergence> of one normal
distribution /X/_1 ∼ /N/(μ_1 , σ^2 _1 ) from another /X/_2 ∼
/N/(μ_2 , σ^2 _2 ) is given by:^[29] <#cite_note-29>
D_\mathrm{KL}( X_1 \,\|\, X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} \,+\, \frac12\left(\,
\frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \,\right)\ .
The Hellinger distance </wiki/Hellinger_distance> between the same
distributions is equal to
H^2(X_1,X_2) = 1 \,-\,
\sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \;
e^{-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}}\ .
4. The Fisher information matrix </wiki/Fisher_information_matrix> for
a normal distribution is diagonal and takes the form
\mathcal I = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 &
\frac{1}{2\sigma^4} \end{pmatrix}
5. Normal distributions belong to an exponential family
</wiki/Exponential_family> with natural parameters
\scriptstyle\theta_1=\frac{\mu}{\sigma^2} and
\scriptstyle\theta_2=\frac{-1}{2\sigma^2}, and natural statistics
/x/ and /x/^2 . The dual, expectation parameters for normal
distributions are η_1 = μ and η_2 = μ^2 + σ^2 .
6. The conjugate prior </wiki/Conjugate_prior> of the mean of a normal
distribution is another normal distribution.^[30] <#cite_note-30>
Specifically, if /x/_1 , …, /x_n / are iid /N/(μ, σ^2 ) and the
prior is μ ~ /N/(μ_0 , σ_0^2), then the posterior distribution for
the estimator of μ will be
\mu | x_1,\ldots,x_n\ \sim\ \mathcal{N}\left(
\frac{\frac{\sigma^2}{n}\mu_0 +
\sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\ \left(
\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{-1} \right)
Combination of two or more independent random variables[edit
</w/index.php?title=Normal_distribution&action=edit&section=23>]
* If /X/_1 , /X/_2 , …, /X_n / are independent standard normal random
variables, then the sum of their squares has the chi-squared
distribution </wiki/Chi-squared_distribution> with /n/ degrees of
freedom
X_1^2 + \cdots + X_n^2\ \sim\ \chi_n^2.
* If /X/_1 , /X/_2 , …, /X_n / are independent normally distributed
random variables with means μ and variances σ^2 , then their
sample mean </wiki/Sample_mean> is independent from the sample
standard deviation </wiki/Standard_deviation>,^[34] <#cite_note-34>
which can be demonstrated using Basu's theorem
</wiki/Basu%27s_theorem> or Cochran's theorem
</wiki/Cochran%27s_theorem>.^[35] <#cite_note-35> The ratio of these
two quantities will have the Student's t-distribution
</wiki/Student%27s_t-distribution> with /n/ − 1 degrees of freedom:
t = \frac{\overline X - \mu}{S/\sqrt{n}} =
\frac{\frac{1}{n}(X_1+\cdots+X_n) -
\mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1-\overline
X)^2+\cdots+(X_n-\overline X)^2\right]}} \ \sim\ t_{n-1}.
* If /X/_1 , …, /X_n /, /Y/_1 , …, /Y_m / are independent standard
normal random variables, then the ratio of their normalized sums of
squares will have the F-distribution </wiki/F-distribution> with
(/n/, /m/) degrees of freedom:^[36] <#cite_note-36>
F =
\frac{\left(X_1^2+X_2^2+\cdots+X_n^2\right)/n}{\left(Y_1^2+Y_2^2+\cdots+
Y_m^2\right)/m}\
\sim\ F_{n,\,m}.
Operations on the density function[edit
</w/index.php?title=Normal_distribution&action=edit&section=24>]
The split normal distribution </wiki/Split_normal_distribution> is most
directly defined in terms of joining scaled sections of the density
functions of different normal distributions and rescaling the density to
integrate to one. The truncated normal distribution
</wiki/Truncated_normal_distribution> results from rescaling a section
of a single density function.
Extensions[edit
</w/index.php?title=Normal_distribution&action=edit&section=25>]
The notion of normal distribution, being one of the most important
distributions in probability theory, has been extended far beyond the
standard framework of the univariate (that is one-dimensional) case
(Case 1). All these extensions are also called /normal/ or /Gaussian/
laws, so a certain ambiguity in names exists.
* The multivariate normal distribution
</wiki/Multivariate_normal_distribution> describes the Gaussian law
in the /k/-dimensional Euclidean space </wiki/Euclidean_space>. A
vector *X* ∈ *R*^/k/ is multivariate-normally distributed if any
linear combination of its components has a (univariate) normal
distribution. Its iso-density loci in the /k/ = 2 case are ellipses,
and in the case of arbitrary /k/ are ellipsoids </wiki/Ellipsoid>.
* Rectified Gaussian distribution
</wiki/Rectified_Gaussian_distribution> a rectified version of a
normal distribution with all the negative elements reset to 0
* Complex normal distribution </wiki/Complex_normal_distribution>
deals with the complex normal vectors. A complex vector /X/ ∈
*C*^/k/ is said to be normal if both its real and imaginary
components jointly possess a 2/k/-dimensional multivariate normal
distribution. The variance-covariance structure of /X/ is described
by two matrices: the /variance/ matrix Γ, and the /relation/ matrix /C/.
* Matrix normal distribution </wiki/Matrix_normal_distribution>
describes the case of normally distributed matrices.
* Gaussian processes </wiki/Gaussian_process> are the normally
distributed stochastic processes </wiki/Stochastic_process>. These
can be viewed as elements of some infinite-dimensional Hilbert space
</wiki/Hilbert_space> /H/, and thus are the analogues of
multivariate normal vectors for the case /k/ = ∞. A random element
/h/ ∈ /H/ is said to be normal if for any constant /a/ ∈ /H/ the
scalar product </wiki/Scalar_product> (/a/, /h/) has a (univariate)
normal distribution. The variance structure of such Gaussian random
element can be described in terms of the linear /covariance operator
K: H → H/. Several Gaussian processes became popular enough to have
their own names:
  o Brownian motion </wiki/Wiener_process>,
  o Brownian bridge </wiki/Brownian_bridge>,
  o Ornstein–Uhlenbeck process
    </wiki/Ornstein%E2%80%93Uhlenbeck_process>.
* Gaussian q-distribution </wiki/Gaussian_q-distribution> is an
abstract mathematical construction that represents a "q-analogue
</wiki/Q-analogue>" of the normal distribution.
* the q-Gaussian </wiki/Q-Gaussian> is an analogue of the Gaussian
distribution, in the sense that it maximises the Tsallis entropy
</wiki/Tsallis_entropy>, and is one type of Tsallis distribution
</wiki/Tsallis_distribution>. Note that this distribution is
different from the Gaussian q-distribution
</wiki/Gaussian_q-distribution> above.
The estimator /s/^2 is an unbiased estimator
</wiki/Unbiased_estimator> of the underlying parameter σ^2 , whereas
\scriptstyle\hat\sigma^2 is biased. Also, by the Lehmann–Scheffé theorem
the estimator /s/^2 is uniformly minimum variance unbiased (UMVU),^[37]
<#cite_note-Kri127-37> which makes it the "best" estimator among all
unbiased ones. However it can be shown that the biased estimator
\scriptstyle\hat\sigma^2 is "better" than the /s/^2 in terms of the mean
squared error </wiki/Mean_squared_error> (MSE) criterion. In finite
samples both /s/^2 and \scriptstyle\hat\sigma^2 have scaled chi-squared
distribution </wiki/Chi-squared_distribution> with (/n/ − 1) degrees of
freedom:
s^2 \ \sim\ \frac{\sigma^2}{n-1} \cdot \chi^2_{n-1}, \qquad
\hat\sigma^2 \ \sim\ \frac{\sigma^2}{n} \cdot \chi^2_{n-1}\ .
The first of these expressions shows that the variance of /s/^2 is equal
to 2σ^4 /(/n/−1), which is slightly greater than the σσ-element of
the inverse Fisher information matrix \scriptstyle\mathcal{I}^{-1}.
Thus, /s/^2 is not an efficient estimator for σ^2 , and moreover,
since /s/^2 is UMVU, we can conclude that the finite-sample efficient
estimator for σ^2 does not exist.
Applying the asymptotic theory, both estimators /s/^2 and
\scriptstyle\hat\sigma^2 are consistent, that is they converge in
probability to σ^2 as the sample size /n/ → ∞. The two estimators are
also both asymptotically normal:
\sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2-\sigma^2)\
\xrightarrow{d}\ \mathcal{N}(0,\,2\sigma^4).
In particular, both estimators are asymptotically efficient for σ^2 .
By Cochran's theorem </wiki/Cochran%27s_theorem>, for normal
distributions the sample mean \scriptstyle\hat\mu and the sample
variance /s/^2 are independent
</wiki/Independence_(probability_theory)>, which means there can be no
gain in considering their joint distribution </wiki/Joint_distribution>.
There is also a converse theorem: if in a sample the sample mean and
sample variance are independent, then the sample must have come from the
normal distribution. The independence between \scriptstyle\hat\mu and
/s/ can be employed to construct the so-called /t-statistic/:
t = \frac{\hat\mu-\mu}{s/\sqrt{n}} =
\frac{\overline{x}-\mu}{\sqrt{\frac{1}{n(n-1)}\sum(x_i-\overline{x})^2}}\
\sim\ t_{n-1}
This quantity /t/ has the Student's t-distribution
</wiki/Student%27s_t-distribution> with (/n/ − 1) degrees of freedom,
and it is an ancillary statistic </wiki/Ancillary_statistic>
(independent of the value of the parameters). Inverting the distribution
of this /t/-statistics will allow us to construct the confidence
interval </wiki/Confidence_interval> for μ;^[38] <#cite_note-38>
similarly, inverting the χ^2 distribution of the statistic /s/^2 will
give us the confidence interval for σ^2 :^[39] <#cite_note-39>
\begin{align} & \mu \in \left[\, \hat\mu + t_{n-1,\alpha/2}\,
\frac{s}{\sqrt{n}},\ \ \hat\mu +
t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}} \,\right] \approx \left[\,
\hat\mu - |z_{\alpha/2}|\frac{s}{\sqrt n},\ \ \hat\mu +
|z_{\alpha/2}|\frac{s}{\sqrt n} \,\right], \\ & \sigma^2 \in
\left[\, \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\ \
\frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \,\right] \end{align}
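The large-sample (z-based) form of the interval for μ can be sketched with the standard library; the data values here are hypothetical, and for small samples the t quantile t_{n−1, 1−α/2} should replace z:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

data = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1]   # hypothetical sample
n = len(data)
m, s = mean(data), stdev(data)    # sample mean and sample standard deviation

# Approximate 95% confidence interval for mu using the normal quantile
z = NormalDist().inv_cdf(0.975)
lo, hi = m - z * s / sqrt(n), m + z * s / sqrt(n)
assert lo < m < hi
```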
* Either the mean, or the variance, or neither, may be considered a
fixed quantity.
* When the variance is unknown, analysis may be done directly in terms
of the variance, or in terms of the precision
</wiki/Precision_(statistics)>, the reciprocal of the variance. The
reason for expressing the formulas in terms of precision is that the
analysis of most cases is simplified.
* Both univariate and multivariate
</wiki/Multivariate_normal_distribution> cases need to be considered.
* Either conjugate </wiki/Conjugate_prior> or improper
</wiki/Improper_prior> prior distributions
</wiki/Prior_distribution> may be placed on the unknown variables.
* An additional set of cases occurs in Bayesian linear regression
</wiki/Bayesian_linear_regression>, where in the basic model the
data is assumed to be normally distributed, and normal priors are
placed on the regression coefficients
</wiki/Regression_coefficient>. The resulting analysis is similar to
the basic cases of independent identically distributed
</wiki/Independent_identically_distributed> data, but more complex.
The formulas for the non-linear-regression cases are summarized in the
conjugate prior </wiki/Conjugate_prior> article.
The sum of two quadratics[edit
</w/index.php?title=Normal_distribution&action=edit&section=29>]
Scalar form[edit
</w/index.php?title=Normal_distribution&action=edit&section=30>]
The following auxiliary formula is useful for simplifying the posterior
</wiki/Posterior_distribution> update equations, which otherwise become
fairly tedious.
a(x-y)^2 + b(x-z)^2 = (a + b)\left(x - \frac{ay+bz}{a+b}\right)^2 +
\frac{ab}{a+b}(y-z)^2
This equation rewrites the sum of two quadratics in /x/ by expanding the
squares, grouping the terms in /x/, and completing the square
</wiki/Completing_the_square>. Note the following about the complex
constant factors attached to some of the terms:
1. The factor \frac{ay+bz}{a+b} has the form of a weighted average
</wiki/Weighted_average> of /y/ and /z/.
2. \frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1} +
b^{-1})^{-1}. This shows that this factor can be thought of as
resulting from a situation where the reciprocals
</wiki/Multiplicative_inverse> of quantities /a/ and /b/ add
directly, so to combine /a/ and /b/ themselves, it's necessary to
reciprocate, add, and reciprocate the result again to get back into
the original units. This is exactly the sort of operation performed
by the harmonic mean </wiki/Harmonic_mean>, so it is not surprising
that \frac{ab}{a+b} is one-half the harmonic mean
</wiki/Harmonic_mean> of /a/ and /b/.
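The auxiliary identity can be spot-checked numerically; a minimal Python sketch with arbitrary illustrative values:

```python
# Spot-check of the identity
# a(x-y)^2 + b(x-z)^2 = (a+b)(x - (ay+bz)/(a+b))^2 + ab/(a+b) * (y-z)^2
a, b = 2.0, 5.0
x, y, z = 1.7, -0.4, 3.2

lhs = a * (x - y)**2 + b * (x - z)**2
rhs = ((a + b) * (x - (a * y + b * z) / (a + b))**2
       + a * b / (a + b) * (y - z)**2)
assert abs(lhs - rhs) < 1e-9
```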
Vector form[edit
</w/index.php?title=Normal_distribution&action=edit&section=31>]
A similar formula can be written for the sum of two vector quadratics:
If *x*, *y*, *z* are vectors of length /k/, and *A* and *B* are
symmetric </wiki/Symmetric_matrix>, invertible matrices
</wiki/Invertible_matrix> of size k\times k, then
(\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) +
(\mathbf{x}-\mathbf{z})'\mathbf{B}(\mathbf{x}-\mathbf{z}) =
(\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} -
\mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} +
\mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z})
where
\mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} +
\mathbf{B}\mathbf{z})
Note that the form *x*′ *A* *x* is called a quadratic form
</wiki/Quadratic_form> and is a scalar </wiki/Scalar_(mathematics)>:
\mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j} a_{ij} x_i x_j
In other words, it sums up all possible combinations of products of
pairs of elements from *x*, with a separate coefficient for each.
\sum_{i=1}^n (x_i - \mu)^2 = \sum_{i=1}^n (x_i - \bar{x})^2 + n(\bar{x}
- \mu)^2
where \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.
With known variance[edit
</w/index.php?title=Normal_distribution&action=edit&section=33>]
For a set of i.i.d. </wiki/I.i.d.> normally distributed data points *X*
of size /n/ where each individual point /x/ follows x \sim
\mathcal{N}(\mu, \sigma^2) with known variance </wiki/Variance> σ^2 ,
the conjugate prior </wiki/Conjugate_prior> distribution is also
normally distributed.
This can be shown more easily by rewriting the variance as the precision
</wiki/Precision_(statistics)>, i.e. using τ = 1/σ^2 . Then if x \sim
\mathcal{N}(\mu, \tau) and \mu \sim \mathcal{N}(\mu_0, \tau_0), we
proceed as follows.
First, the likelihood function </wiki/Likelihood_function> is (using the
formula above for the sum of differences from the mean):
\begin{align} p(\mathbf{X}|\mu,\tau) &= \prod_{i=1}^n
\sqrt{\frac{\tau}{2\pi}}
\exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ &=
\left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}}
\exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ &=
\left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}}
\exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right)\right]. \end{align}
Then, we proceed as follows:
\begin{align} p(\mu|\mathbf{X}) &\propto p(\mathbf{X}|\mu) p(\mu) \\
& = \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}}
\exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}}
\exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ &\propto
\exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 +
n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\
&\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 +
\tau_0(\mu-\mu_0)^2 \right)\right) \\ &=
\exp\left(-\frac{1}{2}\left((n\tau + \tau_0)\left(\mu - \dfrac{n\tau
\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 +
\frac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} - \mu_0)^2\right)\right) \\
&\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu -
\dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right)
\end{align}
In the above derivation, we used the formula above for the sum of two
quadratics and eliminated all constant factors not involving μ. The
result is the kernel </wiki/Kernel_(statistics)> of a normal
distribution, with mean \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau +
\tau_0} and precision n\tau + \tau_0, i.e.
p(\mu|\mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau \bar{x} +
\tau_0\mu_0}{n\tau + \tau_0}, n\tau + \tau_0\right)
This can be written as a set of Bayesian update equations for the
posterior parameters in terms of the prior parameters:
\tau_0' = \tau_0 + n\tau
\mu_0' = \frac{n\tau\bar{x} + \tau_0\mu_0}{n\tau + \tau_0}
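A minimal numeric sketch of this posterior update in Python (the data and prior values are hypothetical):

```python
from statistics import fmean

def posterior(data, sigma, mu0, sigma0):
    """Posterior mean and precision for mu with known variance sigma^2,
    given a normal prior N(mu0, sigma0^2), in the precision parameterization:
        precision_post = tau_0 + n*tau
        mean_post = (n*tau*xbar + tau_0*mu_0) / (n*tau + tau_0)
    """
    n = len(data)
    tau, tau0 = 1 / sigma**2, 1 / sigma0**2
    xbar = fmean(data)
    tau_post = tau0 + n * tau
    mu_post = (n * tau * xbar + tau0 * mu0) / (n * tau + tau0)
    return mu_post, tau_post

# Weak prior (sigma0 = 10): the posterior mean stays close to the sample mean
mu_post, tau_post = posterior([9.8, 10.2, 10.4, 9.9], sigma=1.0, mu0=0.0, sigma0=10.0)
assert abs(mu_post - 10.075) < 0.03
assert abs(tau_post - 4.01) < 1e-12
```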
62. *Jump up ^ <#cite_ref-67>* Kruskal & Stigler (1997
<#CITEREFKruskalStigler1997>)
63. *Jump up ^ <#cite_ref-68>* "Earliest uses (entry STANDARD NORMAL
CURVE)" <http://jeff560.tripod.com/s.htm>.