1 Motivation
In previous classes, we have explored the topic of compressive sensing. Traditional techniques for signal acquisition involve acquiring N samples of a signal sampled at or above the Nyquist rate (twice the highest frequency present) in order to guarantee perfect signal reconstruction. Many signals of practical interest, however, are sparse in some basis, meaning that in some basis they can be represented with K ≪ N samples. Rather than initially acquire N samples and then throw away ≈ N − K samples during compression, we would like to be able to sample the compressed signal directly, acquiring measurements of the form
y = Φx, (1)
where x is an N × 1 (sparse) vector, Φ is an M × N matrix, and y is an M × 1 vector.
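The measurement model in (1) is easy to simulate. The sketch below is a minimal illustration; the dimensions N = 256, M = 80, K = 8 are arbitrary choices, not values from the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 80, 8               # ambient dimension, measurements, sparsity

# K-sparse signal: K nonzero entries at random locations
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian measurement matrix, scaled so columns have roughly unit norm
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                        # M compressive measurements, as in Eq. (1)
print(y.shape)                     # (80,)
```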
Recovering x from these underdetermined measurements can be approached in several ways; one family of methods is convex optimization. These notes focus on CoSaMP, a greedy recovery algorithm.
2 Notation
We define some basic notation used throughout these scribe notes to eliminate confusion.
Vectors:
-Bolded lower case letters (x, y, etc.) are used exclusively for vectors.
-Double bars around a vector with a subscript p (e.g. ‖x‖_p) indicate the ℓ-p norm of the vector x: ‖x‖_p = (Σ_i |x_i|^p)^{1/p}.
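As a quick sanity check of this definition (a small sketch; the helper name `ell_p` is just for illustration):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0, 1.0])

def ell_p(x, p):
    # ell-p norm: (sum_i |x_i|^p)^(1/p)
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

print(ell_p(x, 1))   # 8.0, i.e. |3| + |-4| + |0| + |1|
print(ell_p(x, 2))   # agrees with np.linalg.norm(x)
```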
Matrices:
Sets:
-|T | indicates the cardinality of the set T , i.e. the number of elements that it contains.
-We use the subscript notation |T to show that a vector or matrix is being restricted to only certain elements or columns. For example, x|T indicates that the vector x is restricted to only the elements indexed by T . Φ|T^C indicates that the matrix Φ is restricted to the columns contained in T^C .
-We often use the notation x_N to indicate the best N -point support set of the vector x: the set S of N indices that best represents the vector x by minimizing ‖x − x|S‖_2.
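These two notational devices can be mirrored in code. A small illustration (the helper names `restrict` and `best_support` are my own, not from the notes):

```python
import numpy as np

x = np.array([0.1, -2.0, 0.0, 3.5, 0.4])

# x|_T : restriction of x to an index set T (entries outside T set to zero)
def restrict(x, T):
    out = np.zeros_like(x)
    out[list(T)] = x[list(T)]
    return out

# x_N : the best N-point support set, i.e. the indices of the N
# largest-magnitude entries, which minimize ||x - x|_S||_2 over |S| = N
def best_support(x, N):
    return set(np.argsort(np.abs(x))[-N:])

T = best_support(x, 2)
print(sorted(T))            # [1, 3]  (the entries -2.0 and 3.5)
print(restrict(x, T))       # [ 0.  -2.   0.   3.5  0. ]
```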
3 CoSaMP Algorithm
The following lists the steps of the CoSaMP algorithm:
1) Initialization:
x^{-1} = 0 (x^J is the estimate of x at the J-th iteration)
r = y (the current residual)
2) Loop until convergence:
i) Compute the current error (note that for Gaussian Φ, Φ*Φ is approximately diagonal):
e = Φ* r.
ii) Compute the best 2K support set of the error (an index set):
Ω = e_2K.
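The listing above stops at step (ii). The sketch below completes the loop with the remaining steps of the standard CoSaMP iteration (support merge, least-squares estimate, pruning to the best K terms, residual update); it is a minimal illustration, not an optimized implementation, and since Φ is taken real-valued here the adjoint Φ* is just the transpose:

```python
import numpy as np

def cosamp(Phi, y, K, max_iter=50, tol=1e-8):
    """Minimal CoSaMP sketch for real-valued Phi (so Phi* = Phi.T)."""
    M, N = Phi.shape
    x_hat = np.zeros(N)                                 # x^{-1} = 0
    r = y.copy()                                        # r = y (current residual)
    for _ in range(max_iter):
        e = Phi.T @ r                                   # (i)  e = Phi* r
        Omega = np.argsort(np.abs(e))[-2 * K:]          # (ii) best 2K support of e
        T = np.union1d(Omega, np.flatnonzero(x_hat))    # merge with current support
        b = np.zeros(N)
        b[T] = np.linalg.pinv(Phi[:, T]) @ y            # least squares over T
        keep = np.argsort(np.abs(b))[-K:]               # prune to best K terms
        x_hat = np.zeros(N)
        x_hat[keep] = b[keep]
        r = y - Phi @ x_hat                             # update residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(y):
            break
    return x_hat
```

In the noiseless case, with enough Gaussian measurements relative to K, this loop typically recovers a sparse x to numerical precision.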
4.1 Lemma 1
Suppose that Φ obeys the RIP and has an isometry constant δr . Also, define
T to be a set of r indices. Then:
‖Φ*|T u‖_2 ≤ √(1 + δ_r) ‖u‖_2, (3)
‖Φ†|T u‖_2 ≤ (1/√(1 − δ_r)) ‖u‖_2, (4)
(1 − δ_r) ‖u‖_2 ≤ ‖Φ*|T Φ|T u‖_2 ≤ (1 + δ_r) ‖u‖_2, (5)
(1/(1 + δ_r)) ‖u‖_2 ≤ ‖(Φ*|T Φ|T)^{-1} u‖_2 ≤ (1/(1 − δ_r)) ‖u‖_2. (6)
Proof: This follows directly from the RIP, which implies that the singular values of Φ|T lie between √(1 − δ_r) and √(1 + δ_r).
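An illustrative (non-rigorous) numerical check of this fact: sample a Gaussian Φ and look at the singular values of a random r-column submatrix. For M well above r they cluster near 1, consistent with a small δ_r (the dimensions below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, r = 200, 400, 10
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # columns roughly unit-norm

T = rng.choice(N, size=r, replace=False)          # a set of r indices
s = np.linalg.svd(Phi[:, T], compute_uv=False)    # singular values of Phi|_T

# With RIP constant delta_r, all singular values of Phi|_T lie in
# [sqrt(1 - delta_r), sqrt(1 + delta_r)], i.e. close to 1.
print(s.min(), s.max())
```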
4.2 Lemma 2
Suppose that we have index sets S and T that are disjoint (S ∩ T = ∅). Now define R = S ∪ T with |R| ≤ r. Then ‖Φ*|S Φ|T‖ ≤ δ_r. Proof: Consider the matrix M = Φ*|R Φ|R − I, written in block form as
M = [ Φ*|S Φ|S − I    Φ*|S Φ|T
      Φ*|T Φ|S        Φ*|T Φ|T − I ]. (8)
Hence, Φ∗|S Φ|T is a submatrix of M. By the spectral norm inequality:
‖Φ*|S Φ|T‖ ≤ ‖M‖, (12)
≤ max{(1 + δ_r) − 1, 1 − (1 − δ_r)}, (13)
= δ_r. (14)
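The submatrix/spectral-norm step in (12) is easy to verify numerically. A sketch with arbitrary disjoint index sets of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
Mdim, N = 100, 200
Phi = rng.standard_normal((Mdim, N)) / np.sqrt(Mdim)

S = np.arange(0, 5)            # disjoint index sets S and T
T = np.arange(5, 12)
R = np.concatenate([S, T])     # R = S ∪ T

cross = Phi[:, S].T @ Phi[:, T]                   # Phi*|S Phi|T
Mmat = Phi[:, R].T @ Phi[:, R] - np.eye(len(R))   # the block matrix M of (8)

spec = lambda A: np.linalg.norm(A, 2)             # spectral norm
# cross sits as the upper-right block of Mmat, so its norm is bounded by ||M||
print(spec(cross), spec(Mmat))
```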
4.3 Lemma 3
Suppose that Φ obeys the RIP and has isometry constant δ_r. Let T be a set of indices and x be a vector. Further suppose that r ≥ |T ∪ supp(x)|; then:
‖Φ*|T Φ x|T^C‖_2 ≤ δ_r ‖x|T^C‖_2.
4.4 Lemma 4
Define:
s = x − x^J,
r = y − Φx^J = Φs + n,
e = Φ* r, and
Ω = e_2K.
Defining R = supp(s), we can show that
Substituting the expansions of both the LHS and RHS into the initial inequality, we find that:
2.34 = 2√(1 + δ_2K) / (1 − δ_2K), (33)
.2233 = (δ_2K + δ_4K) / (1 − δ_4K), (34)
δ_2K ≤ δ_4K ≤ .1. (35)
4.5 Lemma 5
Define:
b|T = Φ†|T y,
b|T^C = 0,
then:
‖x − b‖_2 ≤ (1 + δ_4K/(1 − δ_3K)) ‖x|T^C‖_2 + (1/√(1 − δ_3K)) ‖n‖_2, (41)
= 1.112 ‖x|T^C‖_2 + 1.06 ‖n‖_2. (42)
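One instructive special case of this bound: when T contains all of supp(x) and there is no noise, both terms on the right vanish, so the least-squares estimate b must equal x exactly. A small numerical sketch (dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 64, 40, 4
x = np.zeros(N)
supp = rng.choice(N, size=K, replace=False)
x[supp] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                            # noiseless: n = 0

T = supp                               # T contains supp(x), so x|_{T^C} = 0
b = np.zeros(N)
b[T] = np.linalg.pinv(Phi[:, T]) @ y   # b|_T = pinv(Phi|_T) y, b|_{T^C} = 0

# both terms of (42) vanish, forcing b = x
print(np.linalg.norm(x - b))
```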
4.6 Lemma 6
We wish to show that:
‖x − x^J‖_2 = ‖x − b + b − x^J‖_2, (44)
≤ ‖x − b‖_2 + ‖x^J − b‖_2, (45)
≤ 2‖x − b‖_2. (46)
What this result tells us is that the error in our approximation of x decays exponentially with each iteration until finally reaching a floor bounded by the noise power present in our signal. This makes the CoSaMP algorithm a very powerful tool in sparse signal reconstruction.