Then $f_X$ is called the probability density function (pdf) of the random variable X.
In particular, for any real numbers a and b, with a < b, letting B = [a, b], we obtain from Equation (1) that:
\[ P(a \le X \le b) = \int_a^b f_X(x)\,dx \tag{2} \]
For any real numbers a and b, with a < b:
\[ P(X = a) = 0 \tag{3} \]
\[ P(a \le X \le b) = P(a < X \le b) = P(a \le X < b) = P(a < X < b) \tag{4} \]
The above equations state that including or excluding the bounds of an interval does not modify the probability for a continuous rrv.
Proof. Let us first prove Equation (3):
\[ P(X = a) = P(X \in [a, a]) = \int_a^a f_X(x)\,dx = 0 \]
Property 1.2. Let $F_X$ be the cdf of a random variable X. Following are some properties of $F_X$:
- $F_X$ is increasing: $x \le y \Rightarrow F_X(x) \le F_X(y)$
- $\lim_{x \to +\infty} F_X(x) = 1$ and $\lim_{x \to -\infty} F_X(x) = 0$
- $F_X$ is càdlàg:
  - $F_X$ is right continuous: $\lim_{x \to x_0^+} F_X(x) = F_X(x_0)$, for $x_0 \in \mathbb{R}$
  - $F_X$ has left limits: $\lim_{x \to x_0^-} F_X(x)$ exists, for $x_0 \in \mathbb{R}$
Property 1.3 (Cumulative distribution function of a continuous rrv). Let X be a continuous rrv with pdf $f_X$. Then the cumulative distribution function $F_X$ of X is given by:
\[ F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt \tag{6} \]
Proof. We have the following:
\[ F_X(x) = P(X \le x) = P(X \in (-\infty, x]) = \int_{-\infty}^{x} f_X(t)\,dt \]
$f_X$ is integrable on $\mathbb{R}$ and
\[ \int_{-\infty}^{+\infty} f_X(x)\,dx = 1 \tag{9} \]
Proof. Proof of (8): Property 1.2 states that the cumulative distribution function $F_X$ is increasing on $\mathbb{R}$. Therefore $F_X'(x) \ge 0$. According to Lemma 1.3, $F_X'(x) = f_X(x)$ if $f_X$ is continuous at x. This completes the proof.
Proof of (9):
\[ \int_{-\infty}^{+\infty} f_X(x)\,dx = P(X \in (-\infty, +\infty)) = P(X \in \mathbb{R}) = P(\Omega) = 1 \]
In other words, the event that X takes on some value ($X \in \mathbb{R}$) is the sure event $\Omega$.
Definition 1.3. A real-valued function f is said to be a valid pdf if the following holds:
- f is nonnegative on $\mathbb{R}$: $f(x) \ge 0$, for all $x \in \mathbb{R}$
- f is integrable on $\mathbb{R}$ and
\[ \int_{-\infty}^{+\infty} f(x)\,dx = 1 \tag{11} \]
This means that if f is a valid pdf, then there exists some continuous rrv X that has f as its pdf.
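Both conditions of Definition 1.3 can be checked numerically on a finite support. The sketch below (the helper name `check_pdf` is ours, not from the notes) uses a simple midpoint-rule integration:

```python
def check_pdf(f, lo, hi, n=100_000):
    """Midpoint-rule check that f is nonnegative on [lo, hi]
    and integrates to approximately 1 over that interval."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        fx = f(x)
        if fx < 0:          # violates nonnegativity
            return False
        total += fx * dx    # accumulate the integral for Equation (11)
    return abs(total - 1.0) < 1e-6

# Example: the uniform density of height 1/2 on [0, 2] is a valid pdf.
valid = check_pdf(lambda x: 0.5, 0.0, 2.0)
```

This only approximates the two conditions on a grid, but it is a useful sanity check when experimenting with candidate densities.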
Questions.
(1) Determine c such that $f_X$ satisfies the properties of a pdf.
(2) Give the cdf of X.
Proof. (1) Since $f_X$ is a pdf, $f_X(x)$ should be nonnegative for all $x \in \mathbb{R}$. This is the case for $x \in (-\infty, 0)$ and $x \in (1/2, +\infty)$, where $f_X(x)$ equals zero. On the interval [0, 1/2], $f_X(x) = c$. This implies that c should be nonnegative as well.
Let us now focus on the second condition:
\begin{align*}
\int_{-\infty}^{+\infty} f_X(x)\,dx = 1
&\iff \int_{-\infty}^{0} f_X(x)\,dx + \int_{0}^{1/2} f_X(x)\,dx + \int_{1/2}^{+\infty} f_X(x)\,dx = 1 \\
&\iff \int_{-\infty}^{0} 0\,dx + \int_{0}^{1/2} c\,dx + \int_{1/2}^{+\infty} 0\,dx = 1 \\
&\iff \Big[ cx \Big]_{0}^{1/2} = 1 \\
&\iff \frac{c}{2} = 1 \\
&\iff c = 2
\end{align*}
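The normalization step has a one-line computational counterpart: the density is the constant c on an interval of length 1/2, so c is forced to be the reciprocal of that length.

```python
# f_X equals c on [0, 1/2] and 0 elsewhere, so the normalization
# condition c * (1/2 - 0) = 1 determines c directly.
support_length = 0.5 - 0.0
c = 1.0 / support_length   # the same value obtained above
```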
If $x < 0$, then $f_X(t) = 0$ for all $t \in (-\infty, x]$:
\[ F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \int_{-\infty}^{x} 0\,dt = 0 \]
If $0 \le x \le 1/2$:
\[ F_X(x) = \int_{-\infty}^{0} 0\,dt + \int_{0}^{x} 2\,dt = 2x \]
If $x > 1/2$:
\[ F_X(x) = F_X(1/2) + \int_{1/2}^{x} 0\,dt = F_X(1/2) + 0 = 2 \cdot \frac{1}{2} = 1 \]
In a nutshell, $F_X$ is given by:
\[ F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ 2x & \text{if } 0 \le x \le 1/2 \\ 1 & \text{if } x > 1/2 \end{cases} \]
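The piecewise cdf translates directly into code; a small sketch (the function name `F_X` is ours) with a spot check for each branch:

```python
def F_X(x):
    """cdf of the example rrv with density 2 on [0, 1/2] and 0 elsewhere."""
    if x < 0:
        return 0.0          # no mass below 0
    if x <= 0.5:
        return 2.0 * x      # linear ramp on [0, 1/2]
    return 1.0              # all mass lies below 1/2

# F_X(-1) = 0, F_X(0.25) = 0.5, F_X(0.5) = 1.0, F_X(2) = 1.0
```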
2 Expectation and Variance
Definition 2.1 (Expected Value). Let X be a continuous rrv with pdf $f_X$. If $\int_{-\infty}^{+\infty} x f_X(x)\,dx$ is absolutely convergent, i.e. $\int_{-\infty}^{+\infty} |x| f_X(x)\,dx < \infty$,¹ then the mathematical expectation (or expected value, or mean) of X exists, is denoted by E[X] and is defined as follows:
\[ E[X] = \int_{-\infty}^{+\infty} x f_X(x)\,dx \tag{12} \]
for all $c \in \mathbb{R}$,
\[ E[c] = c \tag{14} \]
¹ We then say that X is integrable.
If $g : \mathbb{R} \to \mathbb{R}$ is a nonnegative piecewise continuous function and g(X) is integrable, then we have:
\[ E[g(X)] \ge 0 \tag{15} \]
Proof of Equation (14): $E[c] = \int_{-\infty}^{+\infty} c\,f_X(x)\,dx = c \int_{-\infty}^{+\infty} f_X(x)\,dx = c$, using Equation (9).
Proof of Equation (15): This comes from the nonnegativity of the integral of a nonnegative function.
Proof of Equation (16): This is a direct application of Equation (15) applied to the function $g_2 - g_1$.
Definition 2.3 (Variance, Standard Deviation). Let X be a real-valued random variable. When $E[X^2]$ exists,² the variance of X is defined as follows:
\[ \operatorname{Var}(X) = E\big[(X - E[X])^2\big] \]
- $\operatorname{Var}(X) \ge 0$
- If $a, b \in \mathbb{R}$ are two constants, then $\operatorname{Var}(aX + b) = a^2 \operatorname{Var}(X)$
Proof. This property is true for any kind of random variable (discrete or continuous). See the proof of Property 4.1 given in the lecture notes of the chapter about discrete rrvs.
Proof. This property is true for any kind of random variable (discrete or continuous). See the proof of Theorem 4.1 given in the lecture notes of the chapter about discrete rrvs.
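The property $\operatorname{Var}(aX + b) = a^2 \operatorname{Var}(X)$ can also be illustrated with a quick Monte Carlo check; here a = 2, b = 3 and U(0, 1/2) samples are arbitrary choices for the illustration:

```python
import random
import statistics

random.seed(0)  # reproducible illustration
xs = [random.uniform(0.0, 0.5) for _ in range(100_000)]  # X ~ U(0, 1/2)
ys = [2.0 * x + 3.0 for x in xs]                         # Y = 2X + 3

var_x = statistics.pvariance(xs)
var_y = statistics.pvariance(ys)
# The shift b = 3 drops out and the scale a = 2 enters squared,
# so var_y equals 4 * var_x up to floating-point error.
```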
Proof. We computed previously the expectation of X, that is, E[X] = 1/4. Computing the variance of X thus boils down to calculating $E[X^2]$:
\begin{align*}
E[X^2] &= \int_{-\infty}^{+\infty} x^2 f_X(x)\,dx \\
&= \int_{-\infty}^{0} x^2 f_X(x)\,dx + \int_{0}^{1/2} x^2 f_X(x)\,dx + \int_{1/2}^{+\infty} x^2 f_X(x)\,dx \\
&= 0 + \int_{0}^{1/2} 2x^2\,dx + 0 \\
&= 2 \left[ \frac{x^3}{3} \right]_{0}^{1/2} \\
&= \frac{1}{12}
\end{align*}
Therefore,
\[ \operatorname{Var}(X) = \frac{1}{12} - \left( \frac{1}{4} \right)^2 = \frac{1}{48} \]
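A numerical integration confirms the three values E[X] = 1/4, E[X²] = 1/12 and Var(X) = 1/48; the midpoint rule below is just a sanity check, not part of the notes:

```python
# Midpoint-rule integration of x * f_X(x) and x^2 * f_X(x)
# over the support [0, 1/2], where f_X(x) = 2.
n = 200_000
dx = 0.5 / n
ex = ex2 = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    ex += x * 2.0 * dx        # contributes to E[X]
    ex2 += x * x * 2.0 * dx   # contributes to E[X^2]
var = ex2 - ex ** 2
# ex ≈ 1/4, ex2 ≈ 1/12, var ≈ 1/48
```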
Property 3.1 (Mean and Variance for a Uniform Distribution). If X follows a uniform distribution U(a, b), then
- its expected value is given by:
\[ E[X] = \frac{a+b}{2} \tag{22} \]
- its variance is given by:
\[ \operatorname{Var}(X) = \frac{(b-a)^2}{12} \]
Proof. Expectation:
\begin{align*}
E[X] &= \int_{-\infty}^{+\infty} x f_X(x)\,dx \\
&= \frac{1}{b-a} \int_{a}^{b} x\,dx \\
&= \frac{1}{b-a} \left[ \frac{x^2}{2} \right]_{a}^{b} \\
&= \frac{1}{2} \cdot \frac{b^2 - a^2}{b-a} \\
&= \frac{a+b}{2}
\end{align*}
Variance:
\[ \operatorname{Var}(X) = \frac{b^2 + ab + a^2}{3} - \frac{(a+b)^2}{4} = \frac{(b-a)^2}{12} \]
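These closed forms are easy to cross-check numerically; below, a = 1 and b = 5 are arbitrary illustration values:

```python
# Midpoint-rule moments of U(a, b): density 1/(b - a) on [a, b].
a, b = 1.0, 5.0
n = 200_000
dx = (b - a) / n
mean = m2 = 0.0
for i in range(n):
    x = a + (i + 0.5) * dx
    mean += x * dx / (b - a)      # E[X]
    m2 += x * x * dx / (b - a)    # E[X^2]
var = m2 - mean ** 2
# mean ≈ (a + b)/2 = 3, var ≈ (b - a)^2 / 12 = 16/12
```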
Motivation. Most computer programming languages include functions or li-
brary routines that provide random number generators. They are often designed
to provide a random byte or word, or a floating point number uniformly dis-
tributed between 0 and 1.
The quality (i.e. randomness) of such library functions varies widely, from completely predictable output to cryptographically secure.
There are a couple of methods to generate a random number based on a
probability density function. These methods involve transforming a uniform
random number in some way. Because of this, these methods work equally well
in generating both pseudo-random and true random numbers.
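One such transformation is inverse-transform sampling: if U ~ U(0, 1), then $F_X^{-1}(U)$ has cdf $F_X$. A sketch for the earlier example density $f_X(x) = 2$ on [0, 1/2], where $F_X(x) = 2x$ and hence $F_X^{-1}(u) = u/2$ (the function name `sample_X` is ours):

```python
import random

def sample_X(rng):
    """Inverse-transform sampling for the density f_X(x) = 2 on [0, 1/2]:
    F_X(x) = 2x, so F_X^{-1}(u) = u / 2."""
    u = rng.random()        # U ~ U(0, 1)
    return u / 2.0          # F_X^{-1}(U) is distributed according to f_X

rng = random.Random(42)
samples = [sample_X(rng) for _ in range(100_000)]
# the empirical mean should be close to E[X] = 1/4
```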
\[ E[X] = \mu \tag{25} \]
Remark. The normal distribution is one of the most (perhaps even the most) important distributions in Probability and Statistics. It allows us to model many natural, physical and social phenomena. We will see later in this course how all the distributions are somehow related to the normal distribution.
(1) $f_X$ is symmetric about the mean $\mu$:
\[ f_X(\mu - x) = f_X(\mu + x), \quad \text{for } x \in \mathbb{R} \tag{27} \]
(2) $f_X$ is maximized at $x = \mu$.
(3) The limit of $f_X(x)$, as x approaches $-\infty$ or $+\infty$, is 0:
\[ \lim_{x \to -\infty} f_X(x) = \lim_{x \to +\infty} f_X(x) = 0 \tag{28} \]
We now recognize the cdf of a normal distribution $N(a\mu + b, a^2\sigma^2)$, which completes the proof.
The proof in the case where a is negative is left as an exercise.
Corollary 3.1. If X follows a normal distribution $N(\mu, \sigma^2)$, then the random variable Z defined by:
\[ Z = \frac{X - \mu}{\sigma} \]
is a standard normal random variable.
Finding Normal probabilities. Let $X \sim N(\mu, \sigma^2)$. The above property leads us to the following strategy for finding probabilities $P(a \le X \le b)$:
(1) Transform X, a, and b, by:
\[ Z = \frac{X - \mu}{\sigma} \]
(2) Use the standard normal N(0, 1) table to find the desired probability.
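In code, the table lookup can be replaced by a standard-normal cdf; a sketch using Python's `statistics.NormalDist` (the helper name `normal_prob` is ours):

```python
from statistics import NormalDist

def normal_prob(a, b, mu, sigma):
    """P(a <= X <= b) for X ~ N(mu, sigma^2), via Z = (X - mu) / sigma."""
    z = NormalDist()  # standard normal N(0, 1)
    return z.cdf((b - mu) / sigma) - z.cdf((a - mu) / sigma)

# Sanity check against the classical value P(-1.96 <= Z <= 1.96) ≈ 0.95
p = normal_prob(-1.96, 1.96, 0.0, 1.0)
```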
(b) What is the probability that a randomly selected burger has a weight above
270 grams?
(c) What is the probability that a randomly selected burger has a weight be-
tween 230 and 265 grams?
3.3 Quantiles
Previously, we learned how to use the standard normal curve N (0, 1) to find
probabilities concerning a normal random variable X. Now, what would we do if
we wanted to find some range of values for X in order to reach some probability?
Remarks:
- The quantile of order 1/2 is called the median of the distribution.
- If $F_X$ is a bijective function, then $q_\alpha = F_X^{-1}(\alpha)$.
Property 3.6. For $\alpha \in (0, 1)$, the quantile function of a standard normal random variable is given by:
\[ \Phi^{-1}(\alpha) = -\Phi^{-1}(1 - \alpha) \tag{32} \]
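This symmetry of the standard normal quantiles, and the fact that the median of N(0, 1) is 0, can be checked with `statistics.NormalDist.inv_cdf`:

```python
from statistics import NormalDist

z = NormalDist()               # standard normal N(0, 1)
alpha = 0.975
q = z.inv_cdf(alpha)           # quantile of order alpha, ≈ 1.96
sym = -z.inv_cdf(1.0 - alpha)  # should equal q by symmetry
med = z.inv_cdf(0.5)           # the median of N(0, 1) is 0
```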
Example. Suppose that people join the line to visit the Eiffel Tower according
to an approximate Poisson process at a mean rate of 800 visitors per hour. What
is the probability that nobody joins the line in the next 30 seconds?
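For a Poisson process with rate λ arrivals per hour, the number of arrivals in a window of length t hours is Poisson with parameter λt, so the probability of no arrival is $e^{-\lambda t}$. Here λ = 800 and t = 30 s = 1/120 h:

```python
import math

rate_per_hour = 800.0
t_hours = 30.0 / 3600.0            # 30 seconds expressed in hours
lam = rate_per_hour * t_hours      # expected number of arrivals: 20/3
p_nobody = math.exp(-lam)          # P(N = 0) = e^{-lambda * t}
# p_nobody ≈ 0.00127: it is quite unlikely that nobody joins the line
```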