
ECE 461 Fall 2006

August 31, 2006

Gaussian Random Variables and Vectors
The Gaussian Probability Density Function
This is the most important pdf for this course. It is also called the normal pdf.
\[
f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(x-m)^2}{2\sigma^2}\right].
\]

It can be shown that this f_X integrates to 1 (i.e., it is a valid pdf), and that the random variable X with the above pdf has mean m and variance σ².
The statement “X is Gaussian with mean m and variance σ²” is compactly written as “X ∼ N(m, σ²).”
The cdf corresponding to the Gaussian pdf is given by
\[
F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \int_{-\infty}^{x} \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(u-m)^2}{2\sigma^2}\right] du.
\]

This integral cannot be computed in closed form, but if we make the change of variable v = (u − m)/σ we get
\[
F_X(x) = \int_{-\infty}^{(x-m)/\sigma} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{v^2}{2}\right) dv = \Phi\left(\frac{x-m}{\sigma}\right),
\]
where Φ is the cdf of a N (0, 1) random variable, i.e.,
\[
\Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du.
\]

Note that due to the symmetry of the Gaussian pdf,

\[
\Phi(-x) = 1 - \Phi(x).
\]

A function closely related to Φ is the Q-function, defined by
\[
Q(x) = 1 - \Phi(x) = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du.
\]

Note that
\[
Q(\infty) = \Phi(-\infty) = 0, \qquad Q(-\infty) = \Phi(\infty) = 1, \qquad Q(0) = \Phi(0) = 0.5.
\]

For computing the Q-function in Matlab, we may use the Matlab functions erf or erfc after modifying them appropriately; in particular, Q(x) = ½ erfc(x/√2).
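The same modification works in any environment with an erfc implementation. A minimal sketch in Python using the standard library's math.erfc, with the relation Q(x) = ½ erfc(x/√2) playing the role of the Matlab adjustment:

```python
import math

def Phi(x):
    """Standard normal cdf, computed via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def Q(x):
    """Gaussian tail probability Q(x) = 1 - Phi(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Identities from the text: Q(0) = Phi(0) = 0.5, and Q(x) = Phi(-x).
print(Q(0.0))            # 0.5
print(Q(1.0), Phi(-1.0))  # equal tail probabilities
```

The symmetry relation Φ(−x) = 1 − Φ(x) is exactly why a single erfc call suffices for both functions.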

© V.V. Veeravalli, 2006
Jointly Gaussian Random Variables
Two random variables X and Y are said to be jointly Gaussian if their joint density satisfies the
equation
(x − mX )2 2ρ(x − mX )(y − mY ) (y − mY )2
  
1 1
fX,Y (x, y) = exp − 2 − + .
2(1 − ρ2 ) σY2
p
2πσX σY 1 − ρ2 σX σX σY
Note that the following properties hold:

• X is Gaussian with mean m_X and variance σ_X²
• Y is Gaussian with mean m_Y and variance σ_Y²
• The conditional densities f_{X|Y}(x|y) and f_{Y|X}(y|x) are also Gaussian
• ρ is the correlation coefficient between X and Y. If ρ = 0, then X and Y are independent.
• Z = aX + bY is also Gaussian (what are the mean and variance of Z?)
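The question in the last bullet has a closed-form answer: E[Z] = a m_X + b m_Y and Var(Z) = a²σ_X² + b²σ_Y² + 2abρσ_Xσ_Y. A sketch with hypothetical parameter values, checking the closed form against a Monte Carlo simulation that builds the correlated pair from two independent standard normals:

```python
import math
import random

# Hypothetical parameters chosen just for this illustration.
mX, mY, sX, sY, rho = 1.0, -2.0, 2.0, 0.5, 0.6
a, b = 3.0, -1.0

# Closed-form mean and variance of Z = aX + bY.
mean_Z = a * mX + b * mY
var_Z = a**2 * sX**2 + b**2 * sY**2 + 2 * a * b * rho * sX * sY
print(mean_Z, var_Z)  # 5.0 32.65

# Monte Carlo check: (X, Y) built from independent N(0,1) variables U, V.
random.seed(0)
n = 200_000
zs = []
for _ in range(n):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    x = mX + sX * u
    y = mY + sY * (rho * u + math.sqrt(1 - rho**2) * v)
    zs.append(a * x + b * y)

m_hat = sum(zs) / n
v_hat = sum((z - m_hat) ** 2 for z in zs) / n
```

The construction y = m_Y + σ_Y(ρu + √(1−ρ²)v) is the standard way to generate a pair with correlation coefficient exactly ρ.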

The definition of jointly Gaussian random variables extends quite naturally to n variables X_1, X_2, . . . , X_n.
Let the vectors X and m, and matrix Σ be defined by
     
\[
X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_n \end{bmatrix}, \quad
m = E[X] = \begin{bmatrix} m_1 \\ m_2 \\ \vdots \\ m_n \end{bmatrix}, \quad
\Sigma = E\left[(X-m)(X-m)^\top\right] = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \cdots & \Sigma_{1n} \\ \Sigma_{21} & \Sigma_{22} & \cdots & \Sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma_{n1} & \Sigma_{n2} & \cdots & \Sigma_{nn} \end{bmatrix}
\]
where m_i = E[X_i] and Σ_ij = cov(X_i, X_j). Then the random variables X_1, X_2, . . . , X_n are jointly Gaussian if their joint density is given by
 
\[
f(x) = \frac{1}{\sqrt{(2\pi)^n \det(\Sigma)}} \exp\left[-\frac{1}{2}(x-m)^\top \Sigma^{-1} (x-m)\right].
\]
The statement “X1 , X2 , . . . , Xn are jointly Gaussian with mean m and covariance matrix Σ” can
be compactly written as “X ∼ N (m, Σ)”.
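As a sanity check, this general density with n = 2 should reduce to the bivariate formula of the previous section. A sketch with hypothetical parameters, inverting the 2×2 covariance matrix by hand (Σ₁₂ = ρσ_Xσ_Y):

```python
import math

# Hypothetical parameters for a bivariate example.
mX, mY, sX, sY, rho = 0.5, -1.0, 1.5, 2.0, 0.3

def f_bivariate(x, y):
    """Bivariate density written out as in the previous section."""
    q = ((x - mX)**2 / sX**2
         - 2 * rho * (x - mX) * (y - mY) / (sX * sY)
         + (y - mY)**2 / sY**2)
    norm = 2 * math.pi * sX * sY * math.sqrt(1 - rho**2)
    return math.exp(-q / (2 * (1 - rho**2))) / norm

def f_vector(x, y):
    """General vector formula with n = 2; Sigma inverted by hand."""
    s11, s22 = sX**2, sY**2
    s12 = rho * sX * sY                      # cov(X, Y)
    det = s11 * s22 - s12**2                 # det(Sigma)
    # Inverse of [[s11, s12], [s12, s22]]:
    i11, i22, i12 = s22 / det, s11 / det, -s12 / det
    dx, dy = x - mX, y - mY
    quad = i11 * dx**2 + 2 * i12 * dx * dy + i22 * dy**2
    return math.exp(-0.5 * quad) / math.sqrt((2 * math.pi)**2 * det)

print(f_bivariate(1.0, 0.0), f_vector(1.0, 0.0))  # the two agree
```

Note det(Σ) = σ_X²σ_Y²(1 − ρ²), which is exactly how the √(1 − ρ²) factor in the bivariate normalization arises.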
Properties of jointly Gaussian random variables include:

• Any subset of jointly Gaussian random variables is also jointly Gaussian.

• Any subset of jointly Gaussian random variables conditioned on any other subset of the
original random variables is also jointly Gaussian.
• Jointly Gaussian random variables that are uncorrelated are also independent.
• Linear combinations of jointly Gaussian random variables are also jointly Gaussian. In particular, suppose we produce the vector Y = [Y_1 Y_2 · · · Y_m]^⊤ using the linear transformation Y = AX, where A is an m × n matrix. Then,
 
\[
Y \sim N\left(A m_X,\; A \Sigma_X A^\top\right),
\]

i.e., Y is jointly Gaussian with mean m_Y = A m_X and covariance matrix Σ_Y = A Σ_X A^⊤.
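A small numerical illustration of this property with a hypothetical 2×2 example; the matrix helpers are ad hoc, written only for this sketch:

```python
def matmul(A, B):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    """Transpose of a list-of-rows matrix."""
    return [list(col) for col in zip(*A)]

# Hypothetical example: X ~ N(m, Sigma) in R^2, A maps R^2 -> R^2.
m = [[1.0], [2.0]]
Sigma = [[2.0, 0.5],
         [0.5, 1.0]]
A = [[1.0, 1.0],
     [1.0, -1.0]]        # Y1 = X1 + X2, Y2 = X1 - X2

mY = matmul(A, m)                                  # m_Y = A m
SigmaY = matmul(matmul(A, Sigma), transpose(A))    # Sigma_Y = A Sigma A^T
print(mY)       # [[3.0], [-1.0]]
print(SigmaY)   # [[4.0, 1.0], [1.0, 2.0]]
```

The diagonal entries agree with the scalar rule from the bivariate section: Var(X₁ + X₂) = 2 + 1 + 2(0.5) = 4 and Var(X₁ − X₂) = 2 + 1 − 2(0.5) = 2.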
