

Experiments in Compressive Sensing

In the compressive sensing (CS) framework, an unobserved signal x, L = dim(x), has a K-sparse basis expansion: x = Ψa. The columns of Ψ form an orthonormal basis and a are the expansion coefficients. To be K-sparse, the coefficient vector a has only K nonzero entries at unknown locations. M inner-product-type measurements are made with the M × L measurement matrix Φ: y = Φx. From these M ≪ L measurements, we want to reconstruct x. According to CS theory, the number of measurements needed to reconstruct x with high probability is of the order O(K log(L/K)) when a random measurement matrix is used. The coefficient vector a is found as a solution of the ℓ₁ optimization problem

min ‖a‖₁  subject to  ΦΨa = y.
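To get a feel for this bound with the values used below (K = 20, L = 512), and ignoring the constant hidden in the O(·): taking a natural logarithm, K ln(L/K) = 20 ln(25.6) ≈ 65, so a few tens of measurements, far fewer than L = 512, should suffice. Parts (d) and (e) test this prediction.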

In practice, we may not know how sparse the signal actually is. In this problem, we try to find how well solving the CS problem works in a scenario where the sparsity is known.

Software. The following Matlab commands produce a sparse length-N coefficient vector,¹ with random non-zero values and locations.

a = zeros(N, 1);
q = randperm(N);
a(q(1:K)) = randn(K, 1);

We form a measurement matrix from Gaussian random numbers. NOTE: The rows of Φ should form an orthonormal set. Use the Matlab function orth() to give the matrix this property; because orth() orthonormalizes the columns of its argument, apply it to the transpose.

Phi = orth(randn(L, M))';   % M x L with orthonormal rows

The function l1eq() solves the constrained optimization problem and has the calling sequence

A = Phi*Psi;
% a0 = initial starting point, here the least-squares solution
a0 = A'*inv(A*A')*y;
a_hat = l1eq(a0, A, [], y);

(a) For the first experiment, let the signal itself be a sparse waveform, having K = 20 non-zero values. In this case, N = L; use L = 512, and the matrix Ψ is the identity matrix. Using M = 120 measurements, compare the least-squares solution to the actual sparse signal for one example. Is the least-squares solution sparse?
¹ In some cases, the signal's dimension (N) is smaller than its length (L).
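A minimal sketch of part (a) under the conventions above; the 1e-4 threshold used to count "nonzero" entries is an arbitrary choice.

% Part (a): least-squares recovery of a sparse signal (Psi = I, so x = a)
L = 512; K = 20; M = 120;
a = zeros(L, 1);
q = randperm(L);
a(q(1:K)) = randn(K, 1);            % K-sparse signal
Phi = orth(randn(L, M))';           % M x L matrix with orthonormal rows
y = Phi * a;                        % M measurements
a_ls = Phi' * ((Phi * Phi') \ y);   % minimum-norm least-squares solution
stem(a); hold on; stem(a_ls, 'r'); hold off;
fprintf('entries of a_ls with magnitude above 1e-4: %d\n', sum(abs(a_ls) > 1e-4));

Because the rows of Φ are orthonormal, Phi*Phi' is the identity and a_ls reduces to Phi'*y; the general formula is kept for clarity.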

(b) Solve the ℓ₁ optimization problem for the same data and measurements. Now is the solution sparse? Find the normalized root-mean-squared (rms) error (normalized by the signal's rms value) using the Matlab function norm(),

err = norm(a_hat - a)/norm(a)

and compare with the normalized rms error resulting from the least-squares solution.
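A minimal sketch for part (b), assuming a, Phi, and y survive from part (a) and that l1eq() is the routine described above (l1eq_pd() in the l1-magic toolbox):

% Part (b): l1 recovery vs. least squares on the same data
A = Phi;                      % Psi = I, so A = Phi*Psi = Phi
a0 = A' * ((A * A') \ y);     % least-squares starting point
a_hat = l1eq(a0, A, [], y);   % l1-minimizing coefficients
fprintf('normalized rms error: l1 = %.3g, least squares = %.3g\n', ...
        norm(a_hat - a)/norm(a), norm(a0 - a)/norm(a));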
(c) For the second round of experiments, let the signal be sparse with respect to a cosine-only Fourier series:

x_l = Σ_{j=0}^{L/2} a_j √(2/L) cos(2πjl/L),   l = 0, …, L−1
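A sketch of the basis matrix this expansion implies; note that the coefficient vector then has N = L/2 + 1 entries, fewer than the signal length L, as the footnote anticipates. (Under this scaling the j = 0 and j = L/2 columns are not unit-norm; the normalization is left as the problem states it.)

% Cosine-only basis matrix Psi, one column per frequency index j
L = 512;
l = (0:L-1)';                       % time index (column vector)
j = 0:L/2;                          % frequency index (row vector)
Psi = sqrt(2/L) * cos(2*pi*l*j/L);  % L x (L/2 + 1)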

Set up the Ψ matrix and solve the compressive sensing problem for the same values of K, L, and M.

(d) The major issue is how many measurements are needed. Using the same sparse signal x, find the normalized rms error as the number of measurements M varies from K to 120. Plot the normalized rms error as a function of M. Did you find anything surprising about your plot?

(e) Now set K = 10 and create a new signal that is sparse with respect to the Fourier basis. Plot the normalized error as in part (d). Does the threshold number of measurements necessary for accurate reconstruction follow the predicted behavior?
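A minimal sketch of the sweep in parts (d) and (e), assuming a, Psi, K, and L from part (c); for part (e), regenerate a with K = 10 and rerun.

% Normalized rms error vs. number of measurements M
x = Psi * a;
Ms = K:120;
err = zeros(size(Ms));
for i = 1:numel(Ms)
    M = Ms(i);
    Phi = orth(randn(L, M))';      % fresh M x L measurement matrix
    y = Phi * x;
    A = Phi * Psi;
    a0 = A' * ((A * A') \ y);      % least-squares starting point
    a_hat = l1eq(a0, A, [], y);
    err(i) = norm(a_hat - a) / norm(a);
end
plot(Ms, err);
xlabel('number of measurements M'); ylabel('normalized rms error');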
