
EE 562a: Random Processes in Engineering

EE Department, USC, Fall 2014


Instructor: Prof. Salman Avestimehr
Homework 3
Solutions
1. (Uncorrelated vs. Independent)
(a) We have seen that independence of two random variables X and Y implies that they are uncorrelated, but that the converse is not true. To see this, let $\Theta$ be uniformly distributed on $[0, \pi]$ and let random variables X and Y be defined by $X = \cos\Theta$ and $Y = \sin\Theta$. Show that X and Y are uncorrelated but they are not independent.
(b) Now show that if we make the stronger assumption that random variables X and Y have the property that for all functions g and h, the random variables g(X) and h(Y) are uncorrelated, then X and Y must be independent. That is, uncorrelatedness of the random variables themselves is not enough, but uncorrelatedness of all functions of the random variables is enough to ensure independence. For simplicity assume that X and Y are discrete random variables.
Hint: Consider, for all a and b, the indicator functions
$$g(x) = \mathbf{1}_a(x) = \begin{cases} 1 & x = a \\ 0 & x \neq a \end{cases} \qquad h(y) = \mathbf{1}_b(y) = \begin{cases} 1 & y = b \\ 0 & y \neq b \end{cases}$$
Problem 1 Solution
(a) X and Y are uncorrelated if cov(X,Y) = 0.
$$E[X] = E[\cos\Theta] = 0$$
$$E[Y] = E[\sin\Theta] = \int_0^{\pi} \frac{1}{\pi} \sin\theta \, d\theta = \frac{2}{\pi}$$
$$\operatorname{cov}(X, Y) = E[XY] - E[X]E[Y] = \int_0^{\pi} \frac{1}{\pi} \sin\theta \cos\theta \, d\theta = \int_0^{\pi} \frac{1}{2\pi} \sin 2\theta \, d\theta = 0 \tag{1}$$
Hence, X and Y are uncorrelated. Moreover, if you know X, you can tell the value of Y exactly; in other words, $Y = f(X) = \sqrt{1 - X^2}$, which follows from $\sin\theta \geq 0$ for $\theta \in [0, \pi]$. Therefore, they are not independent.
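As a quick sanity check (an addition, not part of the original solution), a minimal Monte Carlo sketch in Python confirms both claims: the sample covariance is essentially zero, yet Y is a deterministic function of X. The sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, np.pi, size=1_000_000)  # Theta ~ Uniform[0, pi]
X, Y = np.cos(theta), np.sin(theta)

# Sample covariance is ~0, consistent with uncorrelatedness ...
print("cov(X, Y) ~", np.cov(X, Y)[0, 1])
# ... yet Y = sqrt(1 - X^2) exactly, so X and Y are dependent
print("max |Y - sqrt(1 - X^2)| =", np.abs(Y - np.sqrt(1 - X**2)).max())
```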
(b) By assumption, g(X) and h(Y) are uncorrelated for every pair of functions g and h, so $\operatorname{cov}(g(X), h(Y)) = 0$. Using the indicator functions given in the hint,
$$E[g(X)] = P(X = a), \qquad E[h(Y)] = P(Y = b)$$
$$\operatorname{cov}(g(X), h(Y)) = E[g(X)h(Y)] - E[g(X)]E[h(Y)] = E[g(X)h(Y)] - P(X = a)P(Y = b) = 0$$
Since $g(X)h(Y) = 1$ exactly when $X = a$ and $Y = b$, we have $E[g(X)h(Y)] = P(X = a, Y = b)$, and therefore
$$P(X = a, Y = b) = P_X(a)\, P_Y(b) \quad \forall a, b, \tag{2}$$
which means X and Y are independent.
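To see the indicator trick numerically, here is a minimal sketch (an illustration, not from the solution) using a hypothetical dependent pair; the indicator covariance estimates $P(X=a, Y=b) - P(X=a)P(Y=b)$ and comes out nonzero, flagging dependence:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(0, 4, size=500_000)  # hypothetical example: X uniform on {0,1,2,3}
Y = X % 2                             # Y is a function of X, hence dependent

a, b = 2, 0
g = (X == a).astype(float)  # indicator 1_a(X)
h = (Y == b).astype(float)  # indicator 1_b(Y)
# Estimates P(X=a, Y=b) - P(X=a)P(Y=b); here ~0.125, nonzero
print(np.mean(g * h) - np.mean(g) * np.mean(h))
```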
2. (Bayes Rule for PDF) Consider a communication channel corrupted by noise. Let X be
the value of the transmitted signal and Y be the value of the received signal. Assume that
the conditional density of Y given {X = x} is Gaussian, that is
$$f_{Y|X}(y|x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y - x)^2}{2\sigma^2}\right),$$
and X is uniformly distributed on $[-1, 1]$. What is the conditional probability density function of X given Y (i.e., $f_{X|Y}(x|y)$)?
Problem 2 Solution
By Bayes' rule for densities,
$$f_{X|Y}(x|y) = \frac{f_{Y|X}(y|x)\, f_X(x)}{f_Y(y)}, \qquad f_X(x) = \frac{1}{2}\operatorname{rect}\left(\frac{x}{2}\right).$$
$$\begin{aligned}
f_Y(y) &= \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_{-\infty}^{\infty} f_{Y|X}(y|x)\, f_X(x)\, dx \\
&= \int_{-1}^{1} \frac{1}{2} \cdot \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y - x)^2}{2\sigma^2}}\, dx \\
&= \frac{1}{2} \int_{(y-1)/\sigma}^{(y+1)/\sigma} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du \qquad \left(u = \frac{y - x}{\sigma}\right) \\
&= \frac{1}{2}\left[Q\left(\frac{y - 1}{\sigma}\right) - Q\left(\frac{y + 1}{\sigma}\right)\right]
\end{aligned}$$
Therefore
$$f_{X|Y}(x|y) = \frac{\frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(y - x)^2}{2\sigma^2}}\, \operatorname{rect}\left(\frac{x}{2}\right)}{Q\left(\frac{y - 1}{\sigma}\right) - Q\left(\frac{1 + y}{\sigma}\right)} \tag{3}$$
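A short numerical check of Eq. (3) (an addition, with arbitrary example values for $\sigma$ and $y$): the posterior should integrate to 1 over $[-1, 1]$. Here `scipy.stats.norm.sf` plays the role of the Q function.

```python
import numpy as np
from scipy.stats import norm

sigma, y = 0.5, 0.3  # hypothetical example values
Q = norm.sf          # Q(t) = P(Z > t) for standard normal Z

x = np.linspace(-1.0, 1.0, 20001)
num = norm.pdf(y, loc=x, scale=sigma)          # Gaussian factor of Eq. (3)
den = Q((y - 1) / sigma) - Q((y + 1) / sigma)  # normalizing constant
post = num / den

# Riemann-sum approximation of the integral of the posterior over [-1, 1]
print((post * (x[1] - x[0])).sum())  # ~1.0
```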
3. (Cauchy-Schwarz Inequality)
(a) Let $f_1(X)$ and $f_2(X)$ be functions of the random variable X. Show that
$$(E[f_1(X) f_2(X)])^2 \leq E[f_1(X)^2]\, E[f_2(X)^2].$$
(b) Use (a) to deduce that
$$P(X = 0) \leq 1 - \frac{(E[X])^2}{E[X^2]}.$$
(c) Generalize (a) and prove that for any two random variables X and Y,
$$(E[XY])^2 \leq E[X^2]\, E[Y^2].$$
Hint: Use the fact that $E[(tX + Y)^2] \geq 0$ for all $t \in \mathbb{R}$.
Problem 3 Solution
(a) You can either prove this part using the hint for part (c) or use the Cauchy-Schwarz inequality for integrals, i.e.
$$\left|\int f(x) g(x)\, dx\right|^2 \leq \int |f(x)|^2\, dx \int |g(x)|^2\, dx,$$
and let $f(x) = f_1(x)\sqrt{f_X(x)}$ and $g(x) = f_2(x)\sqrt{f_X(x)}$.
(b) Assume X is a discrete RV (for continuous X the result is trivial as long as $f_X(x)$ does not include any impulse functions). Let $f_1(X) = X$ and $f_2(X) = 1 - \delta(X)$, where $\delta(X) = 1$ if and only if $X = 0$. Note that $E[X\delta(X)] = 0$ and $E[(1 - \delta(X))^2] = E[1 - \delta(X)]$. Applying the result in part (a) to these functions we get the desired result:
$$\begin{aligned}
(E[f_1(X) f_2(X)])^2 &= (E[X])^2 \\
E[f_1(X)^2] &= E[X^2] \\
E[f_2(X)^2] &= 1 - P(X = 0) \\
(E[X])^2 &\leq E[X^2]\,(1 - P(X = 0)) \\
P(X = 0) &\leq 1 - \frac{(E[X])^2}{E[X^2]}
\end{aligned} \tag{4}$$
(c)
$$E[(tX + Y)^2] \geq 0 \quad \text{for any } t \in \mathbb{R}$$
$$t^2 E[X^2] + 2t\, E[XY] + E[Y^2] \geq 0 \tag{5}$$
Now by setting $t = -\frac{E[XY]}{E[X^2]}$ we get
$$(E[XY])^2 \leq E[X^2]\, E[Y^2]. \tag{6}$$
4. (Linear Estimation) The output of a channel is Y = X + N, where the input X and the
noise N are independent, zero mean random variables.
(a) Find the correlation coefficient between the input X and the output Y.
(b) Suppose we estimate the input X by a linear function g(Y) = aY. Find the value of a that minimizes the mean squared error $E[(X - aY)^2]$.
(c) Express the resulting mean squared error in terms of the ratio between the variance of X and the variance of N (i.e., $\sigma_X^2/\sigma_N^2$).
(d) Find $\operatorname{cov}(X - aY, X)$ for your choice of a in part (b).
Problem 4 Solution
(a)
$$\rho_{XY} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \sigma_Y}$$
$$\operatorname{cov}(X, Y) = E[XY] - E[X]E[Y] = E[XY] = E[X(X + N)] = E[X^2] + E[XN] = E[X^2]$$
$$E[X^2] = \operatorname{var}(X) + (E[X])^2 = \operatorname{var}(X) = \sigma_X^2$$
$$\rho_{XY} = \frac{\sigma_X^2}{\sigma_X \sigma_Y} = \frac{\sigma_X}{\sigma_Y} = \frac{\sigma_X}{\sqrt{\sigma_X^2 + \sigma_N^2}} \tag{7}$$
(b)
$$E[(X - aY)^2] = E[X^2] - 2a E[XY] + a^2 E[Y^2] = \sigma_X^2 - 2a\sigma_X^2 + a^2(\sigma_X^2 + \sigma_N^2)$$
$$\frac{d}{da} E[(X - aY)^2] = 0 \ \Rightarrow\ a = \frac{\sigma_X^2}{\sigma_X^2 + \sigma_N^2} \tag{8}$$
(c)
$$\text{MSE} = \sigma_X^2 - 2\left(\frac{\sigma_X^2}{\sigma_X^2 + \sigma_N^2}\right)\sigma_X^2 + \left(\frac{\sigma_X^2}{\sigma_X^2 + \sigma_N^2}\right)^2 (\sigma_X^2 + \sigma_N^2) = \frac{\sigma_X^2}{1 + \sigma_X^2/\sigma_N^2} \tag{9}$$
(d)
$$\begin{aligned}
\operatorname{cov}(X - aY, X) &= E[X(X - aY)] - E[X]E[X - aY] \\
&= E[X^2] - a E[XY] = \sigma_X^2\left(1 - \frac{\sigma_X^2}{\sigma_X^2 + \sigma_N^2}\right) = \sigma_X^2(1 - \rho_{XY}^2)
\end{aligned} \tag{10}$$
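A simulation sketch (an addition; the standard deviations are arbitrary example values) that checks the optimal gain of Eq. (8) and the minimum MSE of Eq. (9):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma_X, sigma_N = 1.0, 0.5  # hypothetical example standard deviations
n = 1_000_000
X = rng.normal(0.0, sigma_X, n)
Y = X + rng.normal(0.0, sigma_N, n)  # channel output

a_opt = sigma_X**2 / (sigma_X**2 + sigma_N**2)  # Eq. (8): here 0.8
for a in (0.5, a_opt, 1.0):
    print(f"a = {a:.3f}, empirical MSE = {np.mean((X - a * Y) ** 2):.4f}")
# Eq. (9) predicts the minimum: sigma_X^2 / (1 + sigma_X^2 / sigma_N^2) = 0.2
print("predicted minimum MSE:", sigma_X**2 / (1 + sigma_X**2 / sigma_N**2))
```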
5. (a) Show that $\operatorname{cov}(X, E[Y|X]) = \operatorname{cov}(X, Y)$.
(b) Show that $E[Y|X = x] = E[Y]$ for all x implies that X and Y are uncorrelated.
Problem 5 Solution
(a)
$$\begin{aligned}
\operatorname{cov}(X, E[Y|X]) &= E[X\, E[Y|X]] - E[X]\, E[E[Y|X]] \\
&= E[E[XY|X]] - E[X]E[Y] \\
&= E[XY] - E[X]E[Y] = \operatorname{cov}(X, Y)
\end{aligned} \tag{11}$$
(b)
$$E[Y|X = x] = E[Y] \ \forall x \ \Rightarrow\ E[Y|X] = E[Y]$$
$$\begin{aligned}
\operatorname{cov}(X, Y) &= \operatorname{cov}(X, E[Y|X]) = E[X\, E[Y|X]] - E[X]\, E[E[Y|X]] \\
&= E[X\, E[Y]] - E[X]E[Y] = E[X]E[Y] - E[X]E[Y] = 0
\end{aligned} \tag{12}$$
Hence X and Y are uncorrelated.
6. (Linear Combination of Two RVs) The characteristic function of a continuous random variable X is defined as
$$\Phi_X(\omega) = E[\exp(j\omega X)] = \int_{-\infty}^{\infty} \exp(j\omega x)\, f_X(x)\, dx,$$
where $j = \sqrt{-1}$. Thus $\Phi_X(\omega)$ is the Fourier transform of the PDF. In particular, the characteristic function uniquely determines the PDF.
(a) Suppose that X is $N(\mu, \sigma^2)$. Show that its characteristic function equals
$$\Phi_X(\omega) = \exp(j\mu\omega - \sigma^2\omega^2/2).$$
Hint: Complete the square in the exponent.
(b) Suppose X and Y are independent. Determine the characteristic function of aX + bY
in terms of a, b, and the characteristic functions of X and Y .
Hint: You do not need to compute the density of aX + bY .
(c) Suppose that X and Y are independent and Gaussian. Use the results in (a) and (b) to
show that aX + bY is also Gaussian.
Problem 6 Solution
(a) First note that for $Y = aX + b$,
$$\Phi_Y(\omega) = E[e^{j\omega(aX + b)}] = e^{j\omega b}\, \Phi_X(a\omega). \qquad (*)$$
Working with $\Phi_X(s) = E[e^{sX}]$ and the standardized variable $Z = (X - \mu)/\sigma$, completing the square in the exponent gives
$$\Phi_Z(s) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{sz} e^{-z^2/2}\, dz = e^{s^2/2} \cdot \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(z - s)^2/2}\, dz = e^{s^2/2} \tag{13}$$
Now apply (*) with $X = \sigma Z + \mu$, evaluating $\Phi_Z(s) = e^{s^2/2}$ at $s = j\sigma\omega$:
$$\Phi_X(\omega) = e^{j\omega\mu}\, e^{(j\sigma\omega)^2/2} = e^{j\mu\omega - \sigma^2\omega^2/2}.$$
(b) Using independence, $f_{X,Y}(x, y) = f_X(x) f_Y(y)$, so the double integral factors:
$$\begin{aligned}
\Phi_{aX+bY}(\omega) &= \int\!\!\int e^{j\omega(ax + by)}\, f_{X,Y}(x, y)\, dx\, dy \\
&= \int e^{j\omega a x}\, f_X(x)\, dx \int e^{j\omega b y}\, f_Y(y)\, dy \\
&= \Phi_X(a\omega)\, \Phi_Y(b\omega)
\end{aligned} \tag{14}$$
(c)
$$\begin{aligned}
\Phi_{aX+bY}(\omega) &= \Phi_X(a\omega)\, \Phi_Y(b\omega) \\
&= e^{j\mu_X a\omega - \sigma_X^2 a^2 \omega^2/2}\, e^{j\mu_Y b\omega - \sigma_Y^2 b^2 \omega^2/2} \\
&= e^{j(a\mu_X + b\mu_Y)\omega - (a^2\sigma_X^2 + b^2\sigma_Y^2)\omega^2/2}
\end{aligned} \tag{15}$$
Since the characteristic function uniquely determines the PDF, aX + bY is Gaussian with mean $a\mu_X + b\mu_Y$ and variance $a^2\sigma_X^2 + b^2\sigma_Y^2$.
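As a final check (an added sketch; all parameters are arbitrary example values), the empirical characteristic function of $aX + bY$ can be compared against the closed form of Eq. (15):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = 2.0, -1.0
mu_X, sig_X, mu_Y, sig_Y = 1.0, 0.5, -2.0, 1.5  # hypothetical parameters
n = 1_000_000
S = a * rng.normal(mu_X, sig_X, n) + b * rng.normal(mu_Y, sig_Y, n)

w = 0.7
empirical = np.mean(np.exp(1j * w * S))  # Monte Carlo estimate of E[exp(jwS)]
mu = a * mu_X + b * mu_Y
var = a**2 * sig_X**2 + b**2 * sig_Y**2
closed_form = np.exp(1j * mu * w - var * w**2 / 2)  # Eq. (15)
print(empirical, closed_form)  # the two should agree closely
```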