
Delft University of Technology

Faculty of Electrical Engineering, Mathematics, and Computer Science


Circuits and Systems Group

ET 4235 DIGITAL SIGNAL PROCESSING


23 January 2015, 9:00–12:00
Open book exam: copies of the book by Hayes and the course slides allowed.

This exam has four questions (40 points)

Question 1 (8 points)
Assume that x(n) is a real-valued stationary zero-mean random process. Let us now consider
the following vector of samples:

x = [x(0), x(1), x(2), x(4), x(5)]^T.

Note that the sample x(3) is missing.

(a) Is the covariance matrix Rx of this random vector x Toeplitz or not? Explain why.

(b) Is the covariance matrix Rx of this random vector x symmetric or not? Explain why.

(c) Use the Padé method to determine the coefficients of a first-order all-pole filter that fits
x. In other words, write the coefficients of the first-order all-pole filter as a function of
x. Up to what order can we compute the all-pole filter using the Padé method from x?
Explain.

(d) Assume the covariance matrix Rx of the random vector x is given. Use the Yule-Walker
method to determine the coefficients of a first-order all-pole filter that fits Rx. In other
words, write the coefficients of the first-order all-pole filter as a function of Rx (in order
to do so, clearly define the entries of Rx). Up to what order can we compute the all-pole
filter using the Yule-Walker method from Rx? Explain. (A numerical sketch of a first-order
Yule-Walker fit follows this question.)

(e) If the covariance matrix Rx of the random vector x is given, can you determine the
covariance matrix of the extended vector

[x(0), x(1), x(2), x(3), x(4), x(5)]^T,

where the sample x(3) is not missing? Why or why not? Does this have any implication
on your answer to the last question in part (d)?
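As an illustration of the Yule-Walker fit referenced in part (d), the following is a minimal Python sketch for a first-order all-pole model. The autocorrelation values r(0) and r(1) used here are hypothetical placeholders standing in for entries of a given covariance matrix Rx; they are not data from the exam.

```python
import numpy as np

# Placeholder autocorrelation lags; in part (d) these would be read off the
# given covariance matrix, e.g. r(0) = [Rx]_{0,0} and r(1) = [Rx]_{1,0}.
r0, r1 = 1.0, 0.4

# First-order all-pole model H(z) = b(0) / (1 + a(1) z^-1).
# Yule-Walker equation for order p = 1:
#   r(1) + a(1) r(0) = 0      =>  a(1) = -r(1) / r(0)
# Modelling error (and gain term):
#   eps = r(0) + a(1) r(1),   b(0) = sqrt(eps)
a1 = -r1 / r0
b0 = np.sqrt(r0 + a1 * r1)

print(f"a(1) = {a1:.3f}, b(0) = {b0:.3f}")
```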
Question 2 (10 points)
A certain real-valued random process is defined by

x(n) = A cos(ω_0 n) + w(n),

where the frequency ω_0 is deterministic, A is a Gaussian random variable with mean zero and
variance σ_A², and w(n) is a white noise process with mean zero and variance σ_w², independent
of A. Hint: You may use cos α cos β = cos(α + β)/2 + cos(α − β)/2.

(a) What is the correlation function of x(n)? Is the power spectrum of the process x(n)
defined? If so, what is the power spectral density?

(b) Repeat part (a) for the process

x(n) = A cos(ω_0 n + φ) + w(n),

where all parameters are defined as before, except for the phase of the cosine, φ, which
now is a random variable that is uniformly distributed between −π and π. Is the power
spectrum of the process x(n) defined? If so, what is the power spectral density?

Consider now the process

x(n) = A_1 cos(ω_1 n + φ_1) + A_2 cos(ω_2 n + φ_2) + w(n),

where all parameters are defined as in (b), with the difference that we have two cosines now.
Further assume that ω_1 = 0.6 and ω_2 = 0.7.

(c) Suppose we use the traditional periodogram to estimate the two frequencies from x(n).
What is the smallest number of samples that we can use? Is there any way that this
number of samples can be reduced? Explain.

(d) Suppose we use Bartlett's method of periodogram averaging to estimate the two fre-
quencies from x(n) and we want to improve the variance of the estimated spectrum by
a factor of ten compared to (c). What is the smallest number of samples that we can
use? Is there any way that this number of samples can be reduced, without sacrificing
the tenfold improvement in variance? (A sketch of Bartlett's method follows this question.)
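As background for parts (c) and (d), the sketch below implements Bartlett's periodogram averaging in Python. The record length, segment count K, amplitudes, phases, and noise level are illustrative assumptions and not part of the exam data; only the two frequencies follow the values stated above.

```python
import numpy as np

def bartlett_psd(x, K):
    """Bartlett's method: split x into K nonoverlapping segments and
    average their periodograms; the variance drops roughly by a factor K."""
    L = len(x) // K                      # samples per segment
    segments = x[:K * L].reshape(K, L)
    periodograms = np.abs(np.fft.fft(segments, axis=1)) ** 2 / L
    return periodograms.mean(axis=0)

# Illustrative data: two sinusoids (frequencies in rad/sample as in the exam)
# plus white noise, with assumed amplitudes, phases, and record length.
rng = np.random.default_rng(0)
n = np.arange(2000)
x = (np.cos(0.6 * n + rng.uniform(-np.pi, np.pi))
     + np.cos(0.7 * n + rng.uniform(-np.pi, np.pi))
     + rng.standard_normal(n.size))

Px = bartlett_psd(x, K=10)               # PSD estimate on len(x)//K frequency bins
```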

Question 3 (12 points)
Assume we observe a signal d(n) in noise v(n) as

x(n) = d(n) + v(n),

where v(n) is uncorrelated with d(n). We would like to estimate d(n) from x(n). The power
spectral densities of d(n) and v(n) are shown in the following figure.

(a) Compute and draw the frequency response of the optimal noncausal Wiener filter to
estimate d(n) from x(n). Hint: Transform the Wiener-Hopf equations for an infinite-
length filter from the time domain to the frequency domain.

(b) Denoting the estimate obtained in (a) as d̂(n), compute the mean-square error
ξ = E{|d(n) − d̂(n)|²} and compare it to the mean-square error when we do not filter x(n)
and use x(n) as an estimate for d(n). Hint: Transform the expression of the mean-square
error for an infinite-length filter from the time domain to the frequency domain. (A
numerical sketch of parts (a) and (b) follows this question.)

(c) Explain the procedure for computing a first-order causal FIR Wiener filter

W(z) = w(0) + w(1)z⁻¹

to estimate d(n) from x(n). You don't have to give a detailed expression of the corre-
lation functions. Without explicitly computing the mean-square error for this filter, do
you expect it to be smaller or larger than the one in (b)? Explain.

(d) Explain the procedure for computing a first-order noncausal FIR Wiener filter

W(z) = w(−1)z + w(0)

to estimate d(n) from x(n). You don't have to give a detailed expression of the corre-
lation functions. Without explicitly computing the mean-square error for this filter, do
you expect it to be smaller or larger than the one in (b)? Explain.

(e) Without explicitly computing the mean-square error, compare the mean-square errors
of the filters computed in (c) and (d). Explain.
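For parts (a) and (b), the frequency-domain form of the Wiener-Hopf equations gives the noncausal Wiener filter H(e^jω) = P_d(e^jω) / (P_d(e^jω) + P_v(e^jω)) when d(n) and v(n) are uncorrelated. The Python sketch below evaluates this filter and the corresponding mean-square errors on hypothetical piecewise-constant spectra, since the exam's figure is not reproduced here; the spectral levels and the signal bandwidth are assumptions.

```python
import numpy as np

# Hypothetical stand-ins for the spectra in the figure: a band-limited
# signal PSD and a flat noise PSD (levels and bandwidth are assumed).
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
Pd = np.where(np.abs(w) < 0.5 * np.pi, 4.0, 0.0)   # assumed P_d(e^jw)
Pv = np.ones_like(w)                               # assumed P_v(e^jw)

# Noncausal Wiener filter from the frequency-domain Wiener-Hopf equations.
H = Pd / (Pd + Pv)

# Mean-square errors, evaluated in the frequency domain:
#   Wiener filter:            (1/2pi) * integral of Pd*Pv / (Pd + Pv)
#   no filtering (d_hat = x): E{|v(n)|^2} = (1/2pi) * integral of Pv
mse_wiener = np.trapz(Pd * Pv / (Pd + Pv), w) / (2 * np.pi)
mse_nofilter = np.trapz(Pv, w) / (2 * np.pi)
print(mse_wiener, mse_nofilter)
```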

Question 4 (10 points)
Consider the adaptive filter w_n that is applied to x(n) in order to estimate a desired signal
d(n):

d̂(n) = w_n^T x(n).

The traditional LMS adaptive filter minimizes the instantaneous squared error

ξ(n) = |e(n)|² = |d(n) − d̂(n)|².

However, let us also consider the new functional

ξ′(n) = |e(n)|² + λ‖w_n‖²,

where λ > 0.

(a) Give the traditional LMS coefficient update equation for w_n that minimizes ξ(n). What
is the maximum step size μ that can be used?

(b) Now derive the LMS coefficient update equation for w_n that minimizes ξ′(n) instead of
ξ(n). (A sketch of the resulting update follows this question.)

(c) Determine the condition on the step size that will ensure that w_n in (b) converges in
the mean.

(d) If the step size μ is small enough so that w_n in (b) converges in the mean, what does it converge to?

(e) What is the advantage of the new LMS filter over the standard LMS filter?
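As a companion to part (b): a stochastic gradient step of size μ on ξ′(n) leads to what is commonly called the leaky LMS recursion, w_{n+1} = (1 − μλ)w_n + μ e(n)x(n) for real-valued signals. The Python sketch below implements that recursion; the function name and the real-valued-signal form are illustrative assumptions, not part of the exam.

```python
import numpy as np

def leaky_lms(x, d, p, mu, lam):
    """LMS recursion obtained from a gradient step on |e(n)|^2 + lam*||w_n||^2
    (leaky LMS), for real-valued signals x(n), d(n) and filter order p."""
    w = np.zeros(p)
    for n in range(p - 1, len(x)):
        xn = x[n - p + 1:n + 1][::-1]           # [x(n), x(n-1), ..., x(n-p+1)]
        e = d[n] - w @ xn                       # e(n) = d(n) - w_n^T x(n)
        w = (1.0 - mu * lam) * w + mu * e * xn  # lam = 0 recovers standard LMS
    return w
```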
