
M5A42 APPLIED STOCHASTIC PROCESSES

PROBLEM SHEET 1 SOLUTIONS


Term 1 2010-2011

1. Calculate the mean, variance and characteristic function of the random variables with the following probability density functions.

(a) The exponential distribution with density
\[
f(x) = \begin{cases} \lambda e^{-\lambda x}, & x > 0, \\ 0, & x < 0, \end{cases}
\]
with $\lambda > 0$.
(b) The uniform distribution with density
\[
f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b, \\ 0, & x \notin (a, b), \end{cases}
\]
with $a < b$.
(c) The Gamma distribution with density
\[
f(x) = \begin{cases} \frac{\lambda}{\Gamma(\alpha)} (\lambda x)^{\alpha-1} e^{-\lambda x}, & x > 0, \\ 0, & x < 0, \end{cases}
\]
with $\lambda > 0$, $\alpha > 0$, where $\Gamma(\alpha)$ is the Gamma function
\[
\Gamma(\alpha) = \int_0^\infty \xi^{\alpha-1} e^{-\xi}\, d\xi, \qquad \alpha > 0.
\]

SOLUTION

(a)
\[
E(X) = \int_{-\infty}^{+\infty} x f(x)\, dx = \lambda \int_0^{+\infty} x e^{-\lambda x}\, dx = \frac{1}{\lambda}.
\]
\[
E(X^2) = \int_{-\infty}^{+\infty} x^2 f(x)\, dx = \lambda \int_0^{+\infty} x^2 e^{-\lambda x}\, dx = \frac{2}{\lambda^2}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{1}{\lambda^2}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \lambda \int_0^{\infty} e^{itx} e^{-\lambda x}\, dx = \frac{\lambda}{\lambda - it}.
\]
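As a quick sanity check (not part of the original sheet), these values can be verified by Monte Carlo simulation; the rate $\lambda = 2$ and the evaluation point $t = 1.3$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                                  # arbitrary rate for the check
x = rng.exponential(scale=1.0 / lam, size=10**6)

print(x.mean(), 1 / lam)                   # E(X)   = 1/λ
print(x.var(), 1 / lam**2)                 # var(X) = 1/λ²

t = 1.3                                    # arbitrary evaluation point
print(np.exp(1j * t * x).mean(),           # empirical characteristic function
      lam / (lam - 1j * t))                # closed form λ/(λ - it)
```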
(b)
\[
E(X) = \int_{-\infty}^{+\infty} x f(x)\, dx = \int_a^b \frac{x}{b-a}\, dx = \frac{a+b}{2}.
\]
\[
E(X^2) = \int_{-\infty}^{+\infty} x^2 f(x)\, dx = \int_a^b \frac{x^2}{b-a}\, dx = \frac{b^2 + ab + a^2}{3}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{(b-a)^2}{12}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \frac{1}{b-a} \int_a^b e^{itx}\, dx = \frac{e^{itb} - e^{ita}}{it(b-a)}.
\]
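The closed form above can be cross-checked by numerical quadrature of the real and imaginary parts of the defining integral; $a = 1$, $b = 3$ and $t = 0.7$ are arbitrary values for this sketch.

```python
import numpy as np
from scipy.integrate import quad

a, b, t = 1.0, 3.0, 0.7                    # arbitrary parameters for the check

# integrate the real and imaginary parts of e^{itx}/(b-a) over (a, b)
re, _ = quad(lambda x: np.cos(t * x) / (b - a), a, b)
im, _ = quad(lambda x: np.sin(t * x) / (b - a), a, b)

closed = (np.exp(1j * t * b) - np.exp(1j * t * a)) / (1j * t * (b - a))
print(re + 1j * im, closed)                # the two values should agree
```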
(c)
\[
E(X) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_0^{+\infty} x^{\alpha} e^{-\lambda x}\, dx = \frac{\Gamma(\alpha+1)}{\lambda\, \Gamma(\alpha)} = \frac{\alpha}{\lambda}.
\]
\[
E(X^2) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_0^{+\infty} x^{\alpha+1} e^{-\lambda x}\, dx = \frac{\Gamma(\alpha+2)}{\lambda^2\, \Gamma(\alpha)} = \frac{\alpha(\alpha+1)}{\lambda^2}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{\alpha}{\lambda^2}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} e^{itx} x^{\alpha-1} e^{-\lambda x}\, dx
= \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \frac{1}{(\lambda - it)^{\alpha}} \int_0^{\infty} e^{-y} y^{\alpha-1}\, dy
= \frac{\lambda^{\alpha}}{(\lambda - it)^{\alpha}},
\]
where we substituted $y = (\lambda - it)x$.
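The Gamma moments agree with the parameterization used by scipy.stats, where the shape is $\alpha$ and the scale is $1/\lambda$; $\alpha = 2.5$ and $\lambda = 1.5$ are arbitrary values for this sketch.

```python
from scipy.stats import gamma

alpha, lam = 2.5, 1.5                      # arbitrary shape and rate
dist = gamma(a=alpha, scale=1.0 / lam)

print(dist.mean(), alpha / lam)            # E(X)   = α/λ
print(dist.var(), alpha / lam**2)          # var(X) = α/λ²
```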

2. (a) Let $X$ be a continuous random variable with characteristic function $\phi(t)$. Show that
\[
E X^k = \frac{1}{i^k}\, \phi^{(k)}(0),
\]
where $\phi^{(k)}(t)$ denotes the $k$-th derivative of $\phi$ evaluated at $t$.
(b) Let $X$ be a nonnegative random variable with distribution function $F(x)$. Show that
\[
E(X) = \int_0^{+\infty} (1 - F(x))\, dx.
\]

(c) Let $X$ be a continuous random variable with probability density function $f(x)$ and characteristic function $\phi(t)$. Find the probability density and characteristic function of the random variable $Y = aX + b$ with $a, b \in \mathbb{R}$.
(d) Let $X$ be a random variable with uniform distribution on $[0, 2\pi]$. Find the probability density of the random variable $Y = \sin(X)$.
SOLUTION

(a) We have
\[
\phi(t) = E(e^{itX}) = \int_{\mathbb{R}} e^{itx} f(x)\, dx.
\]
Consequently,
\[
\phi^{(k)}(t) = \int_{\mathbb{R}} (ix)^k e^{itx} f(x)\, dx.
\]
Thus,
\[
\phi^{(k)}(0) = \int_{\mathbb{R}} (ix)^k f(x)\, dx = i^k E X^k,
\]
and $E X^k = \frac{1}{i^k}\, \phi^{(k)}(0)$.
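To see the identity in action, one can differentiate a known characteristic function symbolically; a minimal sketch with sympy, using the exponential characteristic function $\lambda/(\lambda - it)$ from Question 1(a).

```python
import sympy as sp

t = sp.symbols("t", real=True)
lam = sp.symbols("lambda", positive=True)
phi = lam / (lam - sp.I * t)               # exponential characteristic function

for k in (1, 2):
    moment = sp.simplify(sp.diff(phi, t, k).subs(t, 0) / sp.I**k)
    print(k, moment)                       # prints 1/λ and 2/λ², as in Question 1(a)
```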
(b) Let $R > 0$ and consider
\[
\int_0^R x f(x)\, dx = \int_0^R x \frac{dF}{dx}\, dx = xF(x)\Big|_0^R - \int_0^R F(x)\, dx = \int_0^R (F(R) - F(x))\, dx.
\]
Thus,
\[
EX = \lim_{R \to \infty} \int_0^R x f(x)\, dx = \int_0^{\infty} (1 - F(x))\, dx,
\]
where the fact $\lim_{R \to \infty} F(R) = 1$ was used.
Alternatively, interchanging the order of integration,
\[
\int_0^{\infty} (1 - F(x))\, dx = \int_0^{\infty} \int_x^{\infty} f(y)\, dy\, dx = \int_0^{\infty} \int_0^y f(y)\, dx\, dy = \int_0^{\infty} y f(y)\, dy = EX.
\]
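For the exponential distribution of Question 1(a), the tail formula reads $\int_0^\infty e^{-\lambda x}\, dx = 1/\lambda$, which is easy to confirm numerically; $\lambda = 2$ is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0                                  # arbitrary rate
tail_integral, _ = quad(lambda x: np.exp(-lam * x), 0, np.inf)
print(tail_integral, 1 / lam)              # both equal 0.5
```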

(c) Assume first that $a > 0$. We have
\[
P(Y \le y) = P(aX + b \le y) = P\Big(X \le \frac{y-b}{a}\Big) = \int_{-\infty}^{\frac{y-b}{a}} f(x)\, dx.
\]
Consequently,
\[
f_Y(y) = \frac{\partial}{\partial y} P(Y \le y) = \frac{1}{a} f\Big(\frac{y-b}{a}\Big);
\]
for $a < 0$ the same argument with the inequality reversed gives $f_Y(y) = \frac{1}{|a|} f\big(\frac{y-b}{a}\big)$. Similarly,
\[
\phi_Y(t) = E e^{itY} = E e^{it(aX+b)} = e^{itb}\, E e^{itaX} = e^{itb}\, \phi(at).
\]

(d) The density of the random variable $X$ is
\[
f_X(x) = \begin{cases} \frac{1}{2\pi}, & x \in [0, 2\pi], \\ 0, & x \notin [0, 2\pi]. \end{cases}
\]
The distribution function is
\[
F_X(x) = \begin{cases} 0, & x < 0, \\ \frac{x}{2\pi}, & x \in [0, 2\pi], \\ 1, & x > 2\pi. \end{cases}
\]
The random variable $Y$ takes values in $[-1, 1]$. Hence, $P(Y \le y) = 0$ for $y \le -1$ and $P(Y \le y) = 1$ for $y \ge 1$. Let now $y \in (-1, 1)$. We have
\[
F_Y(y) = P(Y \le y) = P(\sin(X) \le y).
\]
The equation $\sin(x) = y$ has two solutions in the interval $[0, 2\pi]$: $x = \arcsin(y)$ and $x = \pi - \arcsin(y)$ for $y > 0$, and $x = \pi - \arcsin(y)$ and $x = 2\pi + \arcsin(y)$ for $y < 0$. Hence,
\[
F_Y(y) = \frac{\pi + 2\arcsin(y)}{2\pi}, \qquad y \in (-1, 1).
\]
The distribution function of $Y$ is therefore
\[
F_Y(y) = \begin{cases} 0, & y \le -1, \\ \frac{\pi + 2\arcsin(y)}{2\pi}, & y \in (-1, 1), \\ 1, & y \ge 1. \end{cases}
\]
We differentiate the above expression to obtain the probability density:
\[
f_Y(y) = \begin{cases} \frac{1}{\pi} \frac{1}{\sqrt{1 - y^2}}, & y \in (-1, 1), \\ 0, & y \notin (-1, 1). \end{cases}
\]
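A Monte Carlo sketch of part (d): sample $X$ uniformly on $[0, 2\pi]$ and compare the empirical distribution function of $Y = \sin(X)$ with $(\pi + 2\arcsin y)/(2\pi)$ at a few arbitrary points.

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.sin(rng.uniform(0.0, 2 * np.pi, size=10**6))

for q in (-0.9, -0.3, 0.2, 0.8):           # arbitrary test points in (-1, 1)
    empirical = (y <= q).mean()
    analytic = (np.pi + 2 * np.arcsin(q)) / (2 * np.pi)
    print(q, empirical, analytic)
```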

3. Let $X$ be a discrete random variable taking values on the set of nonnegative integers with probability mass function $p_k = P(X = k)$, where $p_k \ge 0$ and $\sum_{k=0}^{+\infty} p_k = 1$. The generating function is defined as
\[
g(s) = E(s^X) = \sum_{k=0}^{+\infty} p_k s^k.
\]
(a) Show that
\[
EX = g'(1) \quad \text{and} \quad EX^2 = g''(1) + g'(1),
\]
where the prime denotes differentiation.
(b) Calculate the generating function of the Poisson random variable with
\[
p_k = P(X = k) = \frac{e^{-\lambda} \lambda^k}{k!}, \qquad k = 0, 1, 2, \ldots \ \text{and} \ \lambda > 0.
\]
(c) Prove that the generating function of a sum of independent nonnegative integer-valued random variables is the product of their generating functions.

SOLUTION

(a) We have
\[
g'(s) = \sum_{k=0}^{+\infty} k p_k s^{k-1} \quad \text{and} \quad g''(s) = \sum_{k=0}^{+\infty} k(k-1) p_k s^{k-2}.
\]
Hence,
\[
g'(1) = \sum_{k=0}^{+\infty} k p_k = EX
\]
and
\[
g''(1) = \sum_{k=0}^{+\infty} k^2 p_k - \sum_{k=0}^{+\infty} k p_k = EX^2 - g'(1),
\]
from which it follows that
\[
EX^2 = g''(1) + g'(1).
\]

(b) We calculate
\[
g(s) = \sum_{k=0}^{+\infty} \frac{e^{-\lambda} \lambda^k}{k!} s^k = e^{-\lambda} \sum_{k=0}^{+\infty} \frac{(\lambda s)^k}{k!} = e^{\lambda(s-1)}.
\]
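Both the generating function and the moment identities of part (a) can be confirmed symbolically; a sketch with sympy, assuming it evaluates the exponential series in closed form.

```python
import sympy as sp

s = sp.symbols("s", positive=True)
lam = sp.symbols("lambda", positive=True)
k = sp.symbols("k", integer=True, nonnegative=True)

# g(s) = sum_k e^{-λ} λ^k s^k / k!  =  e^{λ(s-1)}
g = sp.simplify(sp.summation(sp.exp(-lam) * lam**k / sp.factorial(k) * s**k,
                             (k, 0, sp.oo)))
print(g)                                   # e^{λ(s-1)}, possibly printed as e^{λs-λ}

EX = sp.diff(g, s).subs(s, 1)              # g'(1) = λ
EX2 = sp.diff(g, s, 2).subs(s, 1) + EX     # g''(1) + g'(1) = λ² + λ
print(sp.simplify(EX), sp.expand(EX2))
```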

(c) Consider the independent nonnegative integer-valued random variables $X_i$, $i = 1, \ldots, d$. Since the $X_i$'s are independent, so are the random variables $s^{X_i}$, $i = 1, \ldots, d$. Consequently,
\[
g_{\sum_{i=1}^d X_i}(s) = E\Big(s^{\sum_{i=1}^d X_i}\Big) = \prod_{i=1}^d E(s^{X_i}) = \prod_{i=1}^d g_{X_i}(s).
\]
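Numerically, the product rule says a sum of independent Poisson($\lambda_1$) and Poisson($\lambda_2$) variables has generating function $e^{(\lambda_1+\lambda_2)(s-1)}$; a quick empirical check of $E(s^{X_1+X_2})$ with arbitrary rates.

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, s = 1.2, 0.7, 0.5              # arbitrary rates and evaluation point

x = rng.poisson(lam1, size=10**6) + rng.poisson(lam2, size=10**6)
print((s ** x).mean())                     # empirical E(s^{X1+X2})
print(np.exp((lam1 + lam2) * (s - 1)))     # product of the generating functions
```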

4. Let $b \in \mathbb{R}^n$ and let $\Sigma \in \mathbb{R}^{n \times n}$ be a symmetric and positive definite matrix. Let $X$ be the multivariate Gaussian random variable with probability density function
\[
\gamma(x) = \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \exp\Big(-\frac{1}{2} \langle \Sigma^{-1}(x - b), x - b \rangle\Big).
\]
(a) Show that
\[
\int_{\mathbb{R}^n} \gamma(x)\, dx = 1.
\]
(b) Calculate the mean and the covariance matrix of $X$.
(c) Calculate the characteristic function of $X$.

SOLUTION

(a) From the spectral theorem for symmetric positive definite matrices, there exist a diagonal matrix $\Lambda$ with positive entries and an orthogonal matrix $B$ such that
\[
\Sigma^{-1} = B^T \Lambda B.
\]
Let $z = x - b$ and $y = Bz$. We have
\[
\langle \Sigma^{-1} z, z \rangle = \langle B^T \Lambda B z, z \rangle = \langle \Lambda B z, B z \rangle = \langle \Lambda y, y \rangle = \sum_{i=1}^n \lambda_i y_i^2.
\]
Furthermore, $\det(\Sigma^{-1}) = \prod_{i=1}^n \lambda_i$, $\det(\Sigma) = \prod_{i=1}^n \lambda_i^{-1}$, and the Jacobian of an orthogonal transformation satisfies $|J| = |\det(B)| = 1$. Hence,
\begin{align*}
\int_{\mathbb{R}^n} \exp\Big(-\frac{1}{2} \langle \Sigma^{-1}(x - b), x - b \rangle\Big)\, dx
&= \int_{\mathbb{R}^n} \exp\Big(-\frac{1}{2} \langle \Sigma^{-1} z, z \rangle\Big)\, dz \\
&= \int_{\mathbb{R}^n} \exp\Big(-\frac{1}{2} \sum_{i=1}^n \lambda_i y_i^2\Big) |J|\, dy \\
&= \prod_{i=1}^n \int_{\mathbb{R}} \exp\Big(-\frac{1}{2} \lambda_i y_i^2\Big)\, dy_i \\
&= (2\pi)^{n/2} \prod_{i=1}^n \lambda_i^{-1/2} = (2\pi)^{n/2} \sqrt{\det(\Sigma)},
\end{align*}
from which we get that
\[
\int_{\mathbb{R}^n} \gamma(x)\, dx = 1.
\]
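A sketch checking the normalization by direct quadrature in the case $n = 2$, with an arbitrary mean $b$ and an arbitrary symmetric positive definite $\Sigma$; the integration box $[-10, 10]^2$ stands in for $\mathbb{R}^2$.

```python
import numpy as np
from scipy.integrate import dblquad

b = np.array([0.5, -1.0])                        # arbitrary mean
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])       # arbitrary SPD matrix
Sinv = np.linalg.inv(Sigma)
const = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))  # (2π)^{n/2} = 2π for n = 2

def gamma_pdf(x2, x1):                           # dblquad passes the inner variable first
    z = np.array([x1, x2]) - b
    return const * np.exp(-0.5 * z @ Sinv @ z)

val, _ = dblquad(gamma_pdf, -10, 10, lambda x1: -10, lambda x1: 10)
print(val)                                       # ≈ 1
```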

(b) From the above calculation we have
\[
\gamma(x)\, dx = \gamma(B^T y + b)\, dy = \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \prod_{i=1}^n \exp\Big(-\frac{1}{2} \lambda_i y_i^2\Big)\, dy_i.
\]
Consequently,
\[
EX = \int_{\mathbb{R}^n} x \gamma(x)\, dx = \int_{\mathbb{R}^n} (B^T y + b)\, \gamma(B^T y + b)\, dy = b \int_{\mathbb{R}^n} \gamma(B^T y + b)\, dy = b,
\]
since the term $\int_{\mathbb{R}^n} B^T y\, \gamma(B^T y + b)\, dy$ vanishes by symmetry.
We note that, since $\Sigma^{-1} = B^T \Lambda B$, we have $\Sigma = B^T \Lambda^{-1} B$. Furthermore, $z = B^T y$, so $z_i = \sum_k B_{ki} y_k$. We calculate
\begin{align*}
E((X_i - b_i)(X_j - b_j)) &= \int_{\mathbb{R}^n} z_i z_j\, \gamma(z + b)\, dz \\
&= \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \int_{\mathbb{R}^n} \sum_k B_{ki} y_k \sum_m B_{mj} y_m \exp\Big(-\frac{1}{2} \sum_\ell \lambda_\ell y_\ell^2\Big)\, dy \\
&= \frac{1}{(2\pi)^{n/2} \sqrt{\det(\Sigma)}} \sum_{k,m} B_{ki} B_{mj} \int_{\mathbb{R}^n} y_k y_m \exp\Big(-\frac{1}{2} \sum_\ell \lambda_\ell y_\ell^2\Big)\, dy \\
&= \sum_{k,m} B_{ki} B_{mj}\, \lambda_k^{-1} \delta_{km} \\
&= (B^T \Lambda^{-1} B)_{ij} = \Sigma_{ij}.
\end{align*}
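A Monte Carlo sketch of part (b): draw samples of $X$ and compare the empirical mean and covariance with $b$ and $\Sigma$ (the same arbitrary $b$ and $\Sigma$ as in the sketch for part (a)).

```python
import numpy as np

rng = np.random.default_rng(3)
b = np.array([0.5, -1.0])                        # arbitrary mean
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])       # arbitrary SPD covariance

X = rng.multivariate_normal(b, Sigma, size=10**6)
print(X.mean(axis=0))                            # ≈ b
print(np.cov(X, rowvar=False))                   # ≈ Σ
```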

(c) Let $Y$ be a multivariate Gaussian random variable with mean $0$ and covariance $I$. Let also $C = B^T \Lambda^{-1/2} B$, the symmetric square root of $\Sigma$, so that $\Sigma = CC^T = C^T C$. We have that
\[
X = CY + b.
\]
To see this, we first note that $X$ is Gaussian, since it is given through a linear transformation of a Gaussian random variable. Furthermore,
\[
EX = b \quad \text{and} \quad E((X_i - b_i)(X_j - b_j)) = \Sigma_{ij}.
\]
Now we have:
\begin{align*}
\phi(t) = E e^{i\langle X, t \rangle} &= e^{i\langle b, t \rangle} E e^{i\langle CY, t \rangle} \\
&= e^{i\langle b, t \rangle} E e^{i\langle Y, C^T t \rangle} \\
&= e^{i\langle b, t \rangle} E e^{i \sum_j (\sum_k C_{jk} t_k) Y_j} \\
&= e^{i\langle b, t \rangle} e^{-\frac{1}{2} \sum_j |\sum_k C_{jk} t_k|^2} \\
&= e^{i\langle b, t \rangle} e^{-\frac{1}{2} \langle Ct, Ct \rangle} \\
&= e^{i\langle b, t \rangle} e^{-\frac{1}{2} \langle t, C^T C t \rangle} \\
&= e^{i\langle b, t \rangle} e^{-\frac{1}{2} \langle t, \Sigma t \rangle}.
\end{align*}
Consequently,
\[
\phi(t) = e^{i\langle b, t \rangle - \frac{1}{2} \langle t, \Sigma t \rangle}.
\]
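Finally, the characteristic function formula can be checked empirically at a fixed $t$ by comparing the sample average of $e^{i\langle X, t \rangle}$ with the closed form; $b$, $\Sigma$ and $t$ below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
b = np.array([0.5, -1.0])                        # arbitrary mean
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])       # arbitrary SPD covariance
t = np.array([0.3, -0.8])                        # arbitrary evaluation point

X = rng.multivariate_normal(b, Sigma, size=10**6)
print(np.exp(1j * X @ t).mean())                 # empirical φ(t)
print(np.exp(1j * b @ t - 0.5 * t @ Sigma @ t))  # e^{i<b,t> - <t,Σt>/2}
```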
