
SPARKS@ELECTROMANIA 2K9 

IMPLEMENTATION OF ANALOG TO INFORMATION CONVERTER

1. Harishchandra Dubey (ECE), 2. Prateesh Raj (EE)

1 B.Tech ECE, Motilal Nehru National Institute of Technology
E-mail: harish.dubey123@gmail.com

2 B.Tech EE, Motilal Nehru National Institute of Technology
E-mail: xyz@abc.com

Abstract

In this paper, I have implemented analog to information conversion (by information I mean something giving the details of the input analog signal; in most cases we require it in the form of an image or another analog signal) based on the theory of Compressed Sensing (also known as compressed sampling). The sampling process converts continuous-time signals into sequences of numbers (samples) that can be processed and stored in digital devices (like buffers or registers); I have used a buffer for the storage of the sampled signals. It is more appropriate to sample signals by considering them as a union of linear subspaces rather than a single linear space, because a union of subspaces is a more appropriate mathematical model for analog signals than a single linear vector space. Recent developments in the areas of Compressive Sensing (Sampling) and mathematics, including convex programming, the uniform uncertainty principle, and linear optimization theory, have inspired me to do this project. Compressive Sensing allows us to sample at a rate comparable to that of the signal frequency, unlike traditional sensing (which was based on the Nyquist theorem). The stable sampling condition is related to the important concept of the Restricted Isometry Property (RIP), which helps in selecting the samples containing maximum information about the input signal.

Firstly, the Fourier sampling algorithm is applied to the sampled signals. For storage of the sampled signals we use a buffer (which is a unity-gain op-amp in our case). The transformed signals are analysed by appropriate algorithms (in our case based on convex programming and the uniform uncertainty principle). Up to this stage we have collected only those samples that contain maximum information about the signal. Now, if required, we can reconstruct the signal using the 890 algorithm in SPARCO, a toolbox of MATLAB; else the required information can be taken from the collected samples. With the power of MATLAB (the language of mathematical and scientific calculations), we have thus implemented a converter which not only eliminates the problem posed by the Nyquist sampling theorem but can also serve multiple purposes for us.

1. Introduction

In real-life situations we mostly deal with analog signals, but the processing of signals is done on digital systems for a number of reasons. At the same time, after doing the required manipulations, or so-called processing, of the digital signals, we have to convert them back into some information (like an image) or into another analog signal, so that they can be used by another device or as information.

In accordance with the Nyquist sampling theorem, it is very difficult to sample high-bandwidth signals, especially those at radio frequencies, because of the high sampling rate required for efficient sampling. But this is not the end: today we have the modern theory of Compressive Sensing/Sampling, which can be used to eliminate the problems posed by the Nyquist sampling theorem while at the same time achieving a satisfactory level of accuracy in sparse recovery of signals.

Following the same line, I have tried to implement it in a different way: instead of using all the samples, we will use only a few which contain the maximum information. The selection of these samples is based on the mathematical theory involved.

2. PROBLEM STATEMENT

Many problems in radar and communication signal processing involve radio frequency (RF) signals, typically 30-300 GHz, of very high bandwidth. This presents a serious challenge to systems that might attempt to use a high-rate Analog-to-Digital Converter (ADC) to sample these signals, as prescribed by the Shannon/Nyquist sampling theorem (according to which the highest frequency that can be accurately represented is less than one half of the sampling rate).
In an illustrative sub-Nyquist sampling figure (not reproduced here), the dashed vertical lines are sample intervals and the dots are the crossing points, the actual samples taken by the conversion process; the sampling rate is below the Nyquist rate, so when we reconstruct the waveform we see the problem quite readily.

The power, stability, and low cost of digital signal processing (DSP) have pushed the analog-to-digital converter (ADC) increasingly close to the front-end of many important sensing, imaging, and communication systems. Unfortunately, many systems, especially those operating in the radio frequency (RF) bands, severely stress current ADC technologies. For example, some important radar and communications applications would be best served by an ADC sampling at over 5 GSample/s with a resolution of over 20 bits, a combination that greatly exceeds current capabilities. It could be decades before ADCs based on current technology are fast and precise enough for these applications. And even after better ADCs become available, the deluge of data will swamp back-end DSP algorithms. For example, sampling a 1 GHz band at 2 GSample/s and 16 bits per sample generates data at a rate of 4 GB/s, enough to fill a modern hard disk in roughly one minute. In a typical application, only a tiny fraction of this information is actually relevant; the wideband signals in many RF applications often have a large bandwidth but a small "information rate".
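(As a quick check of this figure: 2×10⁹ samples/s × 16 bits/sample = 32×10⁹ bits/s = 4×10⁹ bytes/s = 4 GB/s; assuming a disk of roughly 240 GB, a typical size for the period, it fills in about 240/4 = 60 seconds.)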
Fortunately, recent developments in mathematics and signal processing have uncovered a promising approach to the ADC bottleneck that enables sensing at a rate comparable to the signal's information rate. A new field, known as Compressive Sensing (CS), establishes mathematically that a relatively small number of non-adaptive, linear measurements can harvest all of the information necessary to faithfully reconstruct sparse or compressible signals.

Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this project I develop a general framework for efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. Our main result is an equivalence condition under which the proposed convex algorithm, together with the uniform uncertainty principle, is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the block RIP we also prove stability of our approach in the presence of noise and modeling errors. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
3. PROPOSED SOLUTION

3.1. OUTLINE OF SOLUTION

Our "Analog-to-Information Converter" (AIC) is inspired by the recent theory of Compressive Sensing (CS), which states that a discrete signal having a sparse representation in some domain can be recovered from a small number of linear projections of that signal. We generalize the CS theory to continuous-time sparse signals, explain our proposed AIC system in the CS context, and discuss practical issues regarding implementation.

Analog signals are sampled by considering them as a union of linear subspaces rather than a single space; in most practical applications a union of subspaces is found to be a better model for the signal of interest than a single space. The samples are transformed using the Fast Fourier Transform and then stored in a buffer. Using convex programming and the uniform uncertainty principle, we keep those sparse sampled signals which collect the maximum information. Finally, using the 890 algorithm in the SPARCO toolbox of MATLAB, we reconstruct the signal as information in some desired format.

3.2. "UNION OF FINITE-DIMENSIONAL LINEAR SUBSPACES" MODEL FOR SIGNALS OF INTEREST

A. Subspace Sampling

Traditional sampling theory deals with the problem of recovering an unknown signal x ∈ H from a set of n samples y_i = f_i(x), where f_i(x) is some function of x. The signal x can be a function of time, x = x(t), or can represent a finite-length vector. The most common type of sampling is linear sampling, in which

y_i = ⟨x, s_i⟩, 1 ≤ i ≤ n, ……(1)

for a set of functions s_i ∈ H. Here ⟨x, s⟩ denotes the standard inner product on H. For example, if H = L2 is the space of real finite-energy signals, then

⟨x, s⟩ = ∫ x(t) s(t) dt, ……(2)

while if H = IR^N, then

⟨x, s⟩ = Σ_{i=1}^{N} x(i) s(i). ……(3)

Non-linear sampling also exists, but our focus here will be on the linear case only. When H = IR^N, the unknown x as well as the sampling functions s_i are vectors in IR^N. Therefore, the samples can be written conveniently in matrix form as y = S^T x, where S is the matrix with columns s_i. In the more general case in which H = L2, or any other abstract Hilbert space, we can use set-transformation notation in order to represent the samples conveniently. A set transformation S : IR^n → H corresponding to sampling vectors {s_i ∈ H, 1 ≤ i ≤ n} is defined by

S c = Σ_{i=1}^{n} c(i) s_i ……(4)

for all c ∈ IR^n. From the definition of the adjoint, if c = S* x, then c(i) = ⟨s_i, x⟩. Note that when H = IR^N, S = S and S* = S^T. Using this notation, we can always express the samples as

y = S* x, ……(5)

where S is a set transformation for arbitrary H and an appropriate matrix when H = IR^N.
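To make the finite-dimensional case concrete, here is a minimal MATLAB sketch of (1) and (5) for H = IR^N; the sizes and the random S and x are illustrative choices of mine, not part of the paper's design:

N = 8; n = 4;               % signal length and number of samples
S = randn(N, n);            % sampling matrix; columns are the vectors s_i
x = randn(N, 1);            % unknown signal
y = S' * x;                 % the samples of Eqn. (5); the adjoint is the transpose here
y1 = S(:,1)' * x;           % a single sample as in Eqn. (1); equals y(1)
% Since n < N, the s_i cannot span IR^N: any vector in the null space
% of S' can be added to x without changing the samples.
v = null(S');               % basis for the orthogonal complement of the sampling space
y_same = S' * (x + v(:,1)); % identical to y (up to rounding)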
Our goal is to recover x from the samples y ∈ IR^n. If the vectors s_i do not span the entire space H, then there are many possible signals x consistent with y. More specifically, if we denote by S the sampling space spanned by the vectors s_i, then clearly S* v = 0 for any v ∈ S⊥. Therefore, if S⊥ is not the trivial space, then adding such a vector v to any solution x of (5) will result in the same samples y. However, by exploiting prior knowledge on x, uniqueness can be guaranteed in many cases. A prior very often assumed is that x lies in a given subspace A of H. If A and S have the same finite dimension, and S⊥ and A intersect only at the 0 vector, then x can be perfectly recovered from the samples y.

B. Union of Subspaces

When subspace information is available, perfect reconstruction can often be guaranteed. Furthermore, recovery can be implemented by a simple linear transformation of the given samples (5). However, there are many practical scenarios in which we are given prior information about x that is not necessarily in the form of a subspace. Here we focus our attention on the setting where x lies in a union of subspaces

U = ∪_i V_i, ……(6)

where each V_i is a subspace. Thus, x belongs to one of the V_i, but we do not know a priori to which one. Note that the set U is no longer a subspace. Indeed, if V_i is, for example, a one-dimensional space spanned by the vector v_i, then U contains vectors of the form α v_i for some i, but does not include their linear combinations. Our goal is to recover a vector x lying in a union of subspaces from a given set of samples. In principle, if we knew which subspace x belonged to, then reconstruction could be obtained using standard sampling results. However, here the problem is more involved, because conceptually we first need to identify the correct subspace and only then can we recover the signal within that space. Previous work on sampling over a union focused on invertibility and stability results in some generalizations which are useful for us. To achieve this goal, we limit our attention to a subclass of (6) for which stable recovery algorithms can be developed and analyzed. Specifically, we treat the case in which each V_i has the additional structure

V_i = ⊕_{|j|=k} A_j, ……(7)

where {A_j, 1 ≤ j ≤ m} are a given set of disjoint subspaces and |j| = k denotes a sum over k indices. Thus, each subspace V_i corresponds to a different choice of k subspaces A_j that comprise the sum. We assume throughout the paper that m and the dimensions d_j = dim(A_j) of the subspaces A_j are finite. Given the n samples

y = S* x ……(8)

and the knowledge that x lies in exactly one of the subspaces V_i, we would like to recover the unknown signal x. In this setting, there are C(m,k) = m!/(k!(m−k)!) possible subspaces comprising the union.

An alternative interpretation of our model is as follows. Given an observation vector y, we seek a signal x for which y = S* x and, in addition, x can be written as

x = Σ_{i=1}^{k} x_i, ……(9)

where each x_i lies in A_j for some index j. A special case is the standard CS problem, in which x is a vector of length N that has a sparse representation in a given basis defined by an invertible matrix W. Thus, x = Wc, where c is a sparse vector that has at most k nonzero elements. This fits our framework by choosing A_i as the space spanned by the ith column of W. In this setting m = N, and there are C(N,k) subspaces comprising the union.
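To illustrate this special case, the following MATLAB fragment (my own illustrative sketch; W here is simply a random orthonormal basis) builds a block k-sparse coefficient vector c and the corresponding signal x = Wc of Eqn. (9):

N = 12; d = 2; m = N/d;      % N coefficients in m = 6 blocks of size d = 2
k = 2;                       % number of active blocks (block k-sparse)
W = orth(randn(N));          % an invertible basis (illustrative choice)
c = zeros(N, 1);
p = randperm(m);
for j = p(1:k)               % choose k of the m blocks at random
    c((j-1)*d+1 : j*d) = randn(d, 1); % fill the jth block; all other blocks stay zero
end
x = W * c;                   % x lies in a sum of k of the m subspaces A_j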
C. Problem Formulation and Main Results

Given k and the subspaces A_j, we would like to address the following questions:
1) What are the conditions on the sampling vectors s_i, 1 ≤ i ≤ n, that guarantee that the sampling is invertible and stable?
2) How can we recover the unique x (regardless of computational complexity)?
3) How can we recover the unique x in an efficient and stable manner?

However, no concrete methods were previously proposed in order to recover x. Here we provide efficient convex algorithms that recover x in a stable way for arbitrary k under appropriate conditions on the sampling functions s_i and the spaces A_j. My results are based on an equivalence between the union-of-subspaces problem assuming (7) and that of recovering block-sparse vectors. This allows us to recover x from the given samples by first treating the problem of recovering a block k-sparse vector c from a given set of measurements. This relationship is established in the next section. In the remainder of the paper we therefore focus on the block k-sparse model and develop our results in that context. In particular, we introduce a block RIP condition that ensures uniqueness and stability of our sampling problem. We then suggest an efficient convex optimization problem which approximates an unknown block-sparse vector c. Based on the block RIP we prove that c can be recovered exactly in a stable way using the proposed optimization program. Furthermore, in the presence of noise and modeling errors, this algorithm can approximate the best block-k sparse solution.

D. UNIQUENESS AND STABILITY

In this section we study the uniqueness and stability of our sampling method. These properties are intimately related to the RIP, which we generalize here to the block-sparse setting. The first question we address is that of uniqueness, namely conditions under which a block-sparse vector c is uniquely determined by the measurement vector y = Dc.

Proposition 1: There is a unique block-k sparse vector c consistent with the measurements y = Dc if and only if Dc ≠ 0 for every c ≠ 0 that is block 2k-sparse.
Proof: The proof follows from [22, Proposition 4].

We next address the issue of stability. A sampling operator is stable for a set T if and only if there exist constants α > 0 and β < ∞ such that

α ‖x1 − x2‖²_H ≤ ‖S* x1 − S* x2‖²₂ ≤ β ‖x1 − x2‖²_H ……(18)

for every x1, x2 in T. The ratio κ = β/α provides a measure of the stability of the sampling operator; the operator is maximally stable when κ = 1. In our setting S is replaced by D, and the set T contains block-k sparse vectors. The following proposition follows immediately from (18) by noting that, given two block-k sparse vectors c1 and c2, their difference c1 − c2 is block 2k-sparse.

Proposition 2: The measurement matrix D is stable for every block k-sparse vector c if and only if there exist C1 > 0 and C2 < ∞ such that

C1 ‖v‖²₂ ≤ ‖Dv‖²₂ ≤ C2 ‖v‖²₂ ……(19)

for every v that is block 2k-sparse. It is easy to see that if D satisfies (19), then Dv ≠ 0 for all nonzero block 2k-sparse vectors v. Therefore, this condition implies both invertibility and stability.

A. Block RIP

Property (19) is related to the RIP used in several previous works in CS [9], [13], [14]. A matrix D of size n × N is said to have the RIP if there exists a constant δ_k ∈ [0, 1) such that for every k-sparse c ∈ IR^N,

(1 − δ_k) ‖c‖²₂ ≤ ‖Dc‖²₂ ≤ (1 + δ_k) ‖c‖²₂. ……(20)

Extending this property to block-sparse vectors leads to the following definition:

Definition 2: Let D : IR^N → IR^n be a given matrix. Then D has the block RIP over I = {d1, . . . , dm} with parameter δ_{k|I} if for every c ∈ IR^N that is block k-sparse over I we have that

(1 − δ_{k|I}) ‖c‖²₂ ≤ ‖Dc‖²₂ ≤ (1 + δ_{k|I}) ‖c‖²₂. ……(21)

By abuse of notation, we use δ_k for the block-RIP constant δ_{k|I} when it is clear from the context that we refer to blocks. The block RIP is a special case of the A-restricted isometry defined in [23]. From Proposition 1 it follows that if D satisfies the block RIP (21) with δ_{2k} < 1, then there is a unique block-sparse vector c consistent with the measurements y = Dc. Note that a block k-sparse vector over I is M-sparse in the conventional sense, where M is the sum of the k largest values in I, since it has at most M nonzero elements. If we required D to satisfy the RIP for all M-sparse vectors, then (20) would have to hold for all 2M-sparse vectors c. Since we only require the RIP for block-sparse signals, (21) only has to be satisfied for a certain subset of 2M-sparse signals, namely those that have block sparsity. As a result, the block-RIP constant δ_{k|I} is typically smaller than δ_M (where M depends on k; for blocks of equal size d, M = kd).
To emphasize the advantage of the block RIP over the standard RIP, consider the following matrix, separated into three blocks of two columns each (the explicit matrix of Eqn. (22) is not reproduced here):

……(22)

where B is a diagonal matrix that results in unit-norm columns of D, i.e., B = diag(1, 15, 1, 1, 1, 12)^(−1/2). In this example m = 3 and I = {d1 = 2, d2 = 2, d3 = 2}. Suppose that c is block-1 sparse, which corresponds to at most two non-zero values. Brute-force calculations show that the smallest value of δ2 satisfying the standard RIP (20) is δ2 = 0.866. On the other hand, the block RIP (21), corresponding to the case in which the two non-zero elements are restricted to occur in one block, is satisfied with δ_{1|I} = 0.289. Increasing the number of non-zero elements to k = 4, we can verify that the standard RIP (20) does not hold for any δ4 ∈ [0, 1); indeed, in this example there exist two 4-sparse vectors that result in the same measurements. In contrast, δ_{2|I} = 0.966 satisfies the lower bound in (21) when restricting the 4 non-zero values to two blocks. Consequently, the measurements y = Dc uniquely specify a single block-sparse c. In the next section we will see that the ability to recover c in a computationally efficient way depends on the constant δ_{2k|I} in the block RIP (21): the smaller the value of δ_{2k|I}, the fewer samples are needed in order to guarantee stable recovery. Both the standard and block RIP constants δ_k, δ_{k|I} are by definition increasing with k. Therefore, it was suggested in [12] to normalize each of the columns of D to 1, so as to start with δ1 = 0. In the same spirit, we recommend choosing the bases for the A_j such that D = S*A has unit-norm columns, corresponding to δ_{1|I} = 0.
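The brute-force calculations quoted above can be reproduced for any small matrix along the following lines; since the matrix of Eqn. (22) is not reproduced here, D below is a random stand-in with unit-norm columns:

n = 4; d = 2; m = 3; N = d*m; k = 1;    % sizes chosen to mirror the example
D = randn(n, N);
D = D ./ sqrt(sum(D.^2, 1));            % normalize columns (standard delta_1 = 0)
blocks = nchoosek(1:m, k);              % every choice of k blocks out of m
delta = 0;
for t = 1:size(blocks, 1)
    cols = [];
    for j = blocks(t, :)
        cols = [cols, (j-1)*d+1 : j*d]; % column indices of the chosen blocks
    end
    ev = eig(D(:,cols)' * D(:,cols));   % extreme eigenvalues of the Gram matrix
    delta = max([delta, max(ev)-1, 1-min(ev)]); % smallest admissible delta_k|I
end
delta                                   % block-RIP constant; >= 1 means (21) fails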

B. Recovery Method

We have seen that if D satisfies the block RIP (21) with δ_{2k} < 1, then there is a unique block-sparse vector c consistent with the measurements y = Dc. The question is how to find c in practice. Below we present an algorithm that will in principle find the unique c from the samples y; unfortunately, though, it has exponential complexity. In the next section we show that under a stronger condition on δ_{2k} we can recover c in a stable and efficient manner. Our first claim is that c can be uniquely recovered by solving the optimization problem

min_c ‖c‖_{0,I} subject to y = Dc, ……(23)

where ‖c‖_{0,I} counts the number of non-zero blocks of c. To show that (23) will indeed recover the true value of c, suppose that there exists a c′ such that Dc′ = y and ‖c′‖_{0,I} ≤ ‖c‖_{0,I} ≤ k. Since both c and c′ are consistent with the measurements,

0 = D(c − c′) = Dd, ……(24)

where d = c − c′ satisfies ‖d‖_{0,I} ≤ 2k, so that d is a block 2k-sparse vector. Since D satisfies (21) with δ_{2k} < 1, we must have d = 0, i.e., c = c′.
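The efficient convex program referred to above is not written out in this paper; a standard convex relaxation of (23) in the block-sparse literature replaces the block count ‖c‖_{0,I} by the mixed ℓ2/ℓ1 norm Σ_j ‖c[j]‖₂. A sketch using the CVX modeling package (an external assumption, not part of SPARCO), for equal blocks of size d, given D, y, d, and m:

cvx_begin quiet
    variable c(d*m)
    minimize( sum( norms( reshape(c, d, m), 2, 1 ) ) ) % sum of per-block l2 norms
    subject to
        D*c == y;                                      % consistency with the samples
cvx_end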
3.3. The Fast Fourier Transform Algorithm

This is how the DFT may be computed efficiently.

1D Case
The discrete Fourier transform

F(u) = (1/N) Σ_{x=0}^{N-1} f(x) e^{−2πiux/N} ……(25)

has to be evaluated for N values of u, which if done in the obvious way clearly takes O(N²) multiplications. It is possible to calculate the DFT more efficiently than this, using the fast Fourier transform or FFT algorithm, which reduces the number of operations to O(N log N).
We shall assume for simplicity that N is a power of 2. If we define W_N to be the N-th root of unity given by W_N = e^{−2πi/N}, and set M = N/2, we have

F(u) = (1/N) Σ_{x=0}^{N-1} f(x) W_N^{ux}. ……(26)

This can be split apart into two separate sums of alternate terms from the original sum,

F(u) = (1/N) Σ_{x=0}^{M-1} f(2x) W_N^{2ux} + (1/N) Σ_{x=0}^{M-1} f(2x+1) W_N^{(2x+1)u}. ……(27)

Now, since the square of an N-th root of unity is an (N/2 = M)-th root of unity, we have that W_N² = W_M, and hence

F(u) = (1/2M) Σ_{x=0}^{M-1} f(2x) W_M^{ux} + (1/2M) W_N^{u} Σ_{x=0}^{M-1} f(2x+1) W_M^{ux}.

If we call the two sums demarcated above F_even(u) and F_odd(u) respectively (each carrying its own 1/M normalization), then we have

F(u) = (1/2) [ F_even(u) + W_N^{u} F_odd(u) ]. ……(28)

Note that each of F_even(u) and F_odd(u), for 0 ≤ u < M, is in itself a discrete Fourier transform over N/2 = M points.

How does this help us? Well,

W_N^{u+M} = −W_N^{u}, ……(29)

and we can also write

F(u + M) = (1/2) [ F_even(u) − W_N^{u} F_odd(u) ]. ……(30)

Thus, we can compute an N-point DFT by dividing it into two parts:
• The first half of F(u), for 0 ≤ u < M, can be found from Eqn. (28);
• The second half, for M ≤ u < N, can be found simply by reusing the same terms differently, as shown by Eqn. (30).
• This is obviously a divide-and-conquer method.

To show how many operations this requires, let T(n) be the time taken to perform a transform of size n = 2^j, measured by the number of multiplications performed. The above analysis shows that

T(2^j) = 2 T(2^{j−1}) + 2^{j−1},

the first term on the right-hand side coming from the two transforms of half the original size, and the second term coming from the multiplications of F_odd(u) by W_N^{u}. Induction can be used to prove that T(2^j) = j·2^{j−1}, i.e., T(N) = (N/2) log₂ N. A similar argument can also be applied to the number of additions required, to show that the algorithm as a whole takes O(N log N) time.

Also note that the same algorithm can be used, with a little modification, to perform the inverse DFT too. Going back to the definition (25) of the DFT and of its inverse, we can also write

f(x) = Σ_{u=0}^{N-1} F(u) e^{2πiux/N}.
If we take the complex conjugate of the second equation, we have that

conj(f(x)) = Σ_{u=0}^{N-1} conj(F(u)) e^{−2πiux/N}.

This now looks (apart from a factor of 1/N) like a forward DFT, rather than an inverse DFT. Thus, to compute an inverse DFT:
• take the conjugate of the Fourier-space data,
• put the conjugate through a forward DFT algorithm,
• take the conjugate of the result, at the same time multiplying each value by N.
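As a compact illustration, here is a minimal recursive MATLAB implementation of Eqns. (28) and (30) (my own sketch; save as myfft.m). It uses the 1/N-normalized convention of Eqn. (25), so myfft(f) agrees with MATLAB's fft(f)/numel(f):

function F = myfft(f)
% Recursive radix-2 FFT following Eqns. (28) and (30).
% 1/N-normalized convention of Eqn. (25): myfft(f) == fft(f)/numel(f).
f = f(:);                        % work with a column vector
N = numel(f);
if N == 1, F = f; return; end    % a 1-point DFT is the sample itself
M = N/2;
Fe = myfft(f(1:2:end));          % M-point DFT of the even-indexed samples
Fo = myfft(f(2:2:end));          % M-point DFT of the odd-indexed samples
W  = exp(-2i*pi*(0:M-1).'/N);    % twiddle factors W_N^u, u = 0,...,M-1
F  = 0.5*[Fe + W.*Fo;            % first half:  Eqn. (28)
          Fe - W.*Fo];           % second half: Eqn. (30)
end

% Usage, including the inverse-DFT trick described above:
% f  = randn(8,1);
% F  = myfft(f);                 % equals fft(f)/8
% f2 = 8*conj(myfft(conj(F)));   % conjugate, forward DFT, conjugate, times N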
2D Case:
The same fast Fourier transform algorithm can be used in two dimensions by applying the separability property of the 2D transform. Rewrite the 2D DFT as

F(u,v) = (1/N) Σ_{x=0}^{N-1} [ (1/N) Σ_{y=0}^{N-1} f(x,y) e^{−2πivy/N} ] e^{−2πiux/N}.

The right-hand (inner) sum is basically just a one-dimensional DFT if x is held constant. The left-hand (outer) sum is then another one-dimensional DFT performed with the numbers that come out of the first set of sums. So we can compute a two-dimensional DFT by
• performing a one-dimensional DFT for each value of x, i.e. for each column of f(x,y), then
• performing a one-dimensional DFT in the opposite direction (for each row) on the resulting values.
This requires a total of 2N one-dimensional transforms, so the overall process takes O(N² log N) time.
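In MATLAB, this two-pass scheme can be rendered directly, reusing the myfft sketch above (f is an N-by-N image; illustrative only):

% 2D DFT by separability: 1D FFTs down each column, then along each row.
[N, ~] = size(f);
G = zeros(N);                    % intermediate result after the column pass
for x = 1:N
    G(:, x) = myfft(f(:, x));    % transform each column
end
F2 = zeros(N);
for y = 1:N
    F2(y, :) = myfft(G(y, :)).'; % transform each row of the intermediate result
end
% With the 1/N-per-pass convention used here, F2 equals fft2(f)/N^2.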
3.4. STORAGE OF SAMPLED SIGNALS

For storage of the sampled signals I have used a buffer, as described below. I have designed a non-inverting amplifier with a gain of exactly 1. The gain of a non-inverting amplifier is given by the formula

Gain = 1 + R2/R1,

where R2 is the feedback resistor and R1 the resistor to ground (the circuit diagram is not reproduced here). So, if we make R2 zero and R1 infinite, we will have an amplifier with a gain of exactly 1. How can we do this? The circuit is surprisingly simple: R2 is a plain wire, which has effectively zero resistance, and we can think of R1 as an infinite resistor, since we have no connection to ground at all. This arrangement is called an op-amp follower, or buffer. The buffer has an output that exactly mirrors the input (assuming it is within range of the voltage rails), so it looks kind of useless at first. However, the buffer is an extremely useful circuit, since it helps to hold the signal for some time. The input impedance of the op-amp buffer is very high, close to infinity, and the output impedance is very low, just a few ohms.
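(A quick check of the gain formula, with illustrative values:)

gain = @(R1, R2) 1 + R2./R1; % non-inverting amplifier gain
gain(Inf, 0)                 % follower/buffer limit: R2 = 0, R1 -> infinity, gain = 1
gain(1e3, 9e3)               % for comparison: R1 = 1 kOhm, R2 = 9 kOhm gives gain 10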
3.5. SPARSE RECOVERY BY 890 ALGORITHM USING SPARCO TOOLBOX OF MATLAB

In this section I give a brief account of SPARCO, with emphasis on the application part. Sparco is organized as a flexible framework providing test problems for sparse signal reconstruction, as well as a library of operators and tools. The problem suite currently contains 25 problems and 28 operators. The latest version of Sparco and related material (installation guides, prerequisites, code for the sparse MRI toolbox, and the test problems packaged with the GPSR solver) can be obtained from www.cs.ubc.ca/labs/scl/sparco. The open-source Rice Wavelet Toolbox can also be of great help. A brief description of the various Sparco operators follows.

At the core of the Sparco architecture is a large library of linear operators. Where possible, specialized code is used for fast evaluation of matrix-vector multiplications. Once an operator has been created,

D = opDCT(128);

matrix-vector products with the created operator can be accessed as follows:

y = D(x,1); % gives y := Dx
x = D(y,2); % gives x := D'y

A full list of the basic operators available in the Sparco library is given in Tables 3 and 4.

Matlab function | Description
opBinary | binary (0/1) ensemble
opBlockDiag | compound operator with operators on the diagonal
opBlur | two-dimensional blurring operator
opColumnRestrict | restriction on matrix columns
opConvolve1d | one-dimensional convolution operator
opCurvelet2d | two-dimensional curvelet operator
opDCT | one-dimensional discrete cosine transform
opDiag | scaling operator
opDictionary | compound operator with operators abutted
opDirac | identity operator
opFFT | one-dimensional FFT
opFFT2d | two-dimensional FFT
opFFT2C | centralized two-dimensional FFT
opFoG | subsequent application of a set of operators
opGaussian | Gaussian ensemble
opHaar | one-dimensional Haar wavelet transform
opHaar2d | two-dimensional Haar wavelet transform
opHeaviside | Heaviside matrix operator
opKron | Kronecker product of two operators
opMask | vector entry selection mask
opMatrix | wrapper for matrices
opPadding | pad and unpad operators equally around each side
opReal | discard imaginary components
opRestriction | vector entry restriction
opSign | sign-ensemble operator
opWavelet | wavelet operator
opWindowedOp | overcomplete windowed operator

Table 3: The operators in the Sparco library

Matlab classes can be used to overload operations commonly used for matrices, so that the objects in that class behave exactly like explicit matrices. Although this mechanism is not used for the implementation of the Sparco operators, operator overloading can provide a very convenient interface for the user. To facilitate this feature, Sparco provides the function classOp:

C = classOp(op);          % Create matrix object C from op
C = classOp(op,'nCprod'); % Additionally, create a global counter variable nCprod
These calls take an operator op and return an object from the operator class for which the main matrix-vector operations are defined. In its second form, the classOp function accepts an optional string argument and creates a global variable that keeps track of the number of multiplications with C and C'. The variable can be accessed from Matlab's base workspace. The following example illustrates the use of classOp:

F = opFFT(128);
G = classOp(F);
g1 = F(y,2); % gives g1 := F'y
g2 = G'*y;   % gives g2 := G'y = F'y

Operator type | Matlab function
Ensembles | opBinary, opSign, opGaussian
Selection | opMask, opColumnRestrict, opRestriction
Matrix | opDiag, opDirac, opMatrix, opToMatrix
Fast operators | opCurvelet, opConvolve1d, opConvolve2d, opDCT, opFFT, opFFT2d, opFFT2C, opHaar, opHaar2d, opHeaviside, opWavelet
Compound operators | opBlockDiag, opDictionary, opFoG, opKron, opWindowedOp
Nonlinear | opReal, opPadding

Table 4: Operators grouped by type

Meta operators:
Several tools are available for conveniently assembling more complex operators from the basic operators. The five meta-operators opFoG, opDictionary, opTranspose, opBlockDiag, and opKron take one or more of the basic operators as inputs and assemble them into a single operator:

H = opFoG(A1,A2,...);        % H := A1 * A2 * ... * An
H = opDictionary(A1,A2,...); % H := [A1 | A2 | ... | An]
H = opTranspose(A);          % H := A'
H = opBlockDiag(A1,A2,...);  % H := diag(A1, A2, ...)
H = opKron(A1,A2);           % H := A1 (x) A2 (Kronecker product)

A sixth meta-operator, opWindowedOp, is a mixture between opDictionary and opBlockDiag, in which the blocks can partially overlap rather than fully (opDictionary) or not at all (opBlockDiag). A further two differences are that only a single operator is repeated and that each operator is implicitly preceded by a diagonal window operator.

Ensemble operators and general matrices:
The three ensemble operators (see Table 4) can be instantiated by simply specifying their dimensions and a mode that determines the normalization of the ensembles. Unlike the other operators in the collection, the ensemble operators can be instantiated as explicit matrices (requiring O(m·n) storage) or as implicit operators. When instantiated as implicit operators, the random number seeds are saved, and rows and columns are generated on the fly during multiplication, requiring only O(n) storage for the normalization coefficients.

Selection operators:
Two selection operators are provided: opMask and opRestriction. In forward mode, the restriction operator selects certain entries from the given vector and returns a correspondingly shortened vector. In contrast, the mask operator evaluates the dot-product with a binary vector, thus zeroing out the entries instead of discarding them, and returns a vector of the same length as the input vector.

Fast operators:
Sparco also provides support for operators with a special structure for which fast algorithms are available. Such operators in the library include Fourier, discrete cosine, wavelet, two-dimensional curvelet, and one-dimensional convolution of a signal with a kernel. For example, the following code generates a partial Fourier measurement operator (F), a masked version with 30% of the rows randomly zeroed out (M), and a dictionary consisting of an FFT and a scaled Dirac basis (B):
m = 128;
D = opDiag(m,0.1);                     % D is a diagonal operator (a scaled Dirac basis)
F = opFFT(m);                          % F is a one-dimensional FFT operator
M = opFoG(opMask(rand(m,1) < 0.7),F);  % M is a masked version of F
B = opDictionary(F,D);                 % B = [F D]

Utilities:
For general matrices there are three operators: opDirac, opDiag, and opMatrix. The Dirac operator coincides with the identity matrix of the desired size. Diagonal matrices can be generated using opDiag, which takes either a size and a scalar, or a vector containing the diagonal entries. General matrix operators can be created using opMatrix with a (sparse) matrix as an argument. The opToMatrix utility function takes an implicit linear operator and forms and returns an explicit matrix. Figure 1 shows the result of using this utility function on the operators M and B:

Mexplicit = opToMatrix(M); imagesc(Mexplicit);
Bexplicit = opToMatrix(B); imagesc(Bexplicit);

Using the appropriate algorithms and tools I have done the experimentation, which is discussed in the next section.
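To tie the pieces together, a run of the kind used in the experimentation can be sketched as follows. The solver here is SPGL1, the basis-pursuit solver of [10] that accompanies Sparco; the problem number and field names follow the Sparco documentation as I recall it and should be treated as assumptions, not the exact script used:

% Assumes Sparco and SPGL1 are installed and on the MATLAB path.
P = generateProblem(6);        % instantiate a Sparco test problem (number illustrative)
opts = spgSetParms('verbosity', 0);
xhat = spg_bp(P.A, P.b, opts); % basis pursuit: min ||x||_1 subject to A*x = b
sig  = P.reconstruct(xhat);    % map recovered coefficients back to the signal domain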

4. EXPERIMENTATION AND OBSERVATION

4.1. Aim: to reconstruct an image when an input image is provided.

(Subsections 4.2-4.5 and their figures are not reproduced here.)

4.6. Output image is as follows: (figure not reproduced)

5. RESULTS AND INFERENCES

Though the experimentation was not completely successful, due to some inevitable and unavoidable circumstances, the result is still satisfactory. Since there is always scope for improvement, the design of such a system in practical form is still a challenge. The top view of the DSP kit used is shown below (figure not reproduced).
7. References

[1] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[2] E. Candès and J. Romberg, "Sparsity and incoherence in compressive sampling," Inverse Prob., vol. 23, no. 3, pp. 969–986, June 2007.
[3] M. Rudelson and R. Vershynin, "On sparse reconstruction from Fourier and Gaussian measurements," submitted for publication.
[4] J. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," Dec. 1993.
[5] A. Skodras, C. Christopoulos, and T. Ebrahimi, "The JPEG2000 still image compression standard," IEEE Signal Processing Mag., vol. 18, pp. 36–58, Sept. 2001.
[6] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, "A new compressive imaging camera architecture using optical-domain compression," in Proc. SPIE Conf. Computational Imaging IV, San Jose, CA, Jan. 2006, pp. 43–52.
[7] J. Tropp, "Just relax: Convex programming methods for identifying sparse signals in noise," IEEE Trans. Inform. Theory, vol. 52, no. 3, pp. 1030–1051, 2006.
[8] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[9] R. Baraniuk, H. Choi, F. Fernandes, B. Hendricks, R. Neelamani, V. Ribeiro, J. Romberg, R. Gopinath, H.-T. Guo, M. Lang, J. E. Odegard, and D. Wei, Rice Wavelet Toolbox. http://www.dsp.rice.edu/software/rwt.shtml, 1993.
[10] E. van den Berg and M. P. Friedlander, "In pursuit of a root," Tech. Rep. TR-2007-19, Department of Computer Science, University of British Columbia, June 2007.
[11] R. Boisvert, R. Pozo, K. Remington, R. Barrett, and J. Dongarra, "Matrix Market: A web resource for test matrix collections," in The Quality of Numerical Software: Assessment and Enhancement, R. F. Boisvert, ed., Chapman & Hall, London, 1997, pp. 125–137.
[12] E. J. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematicians, 2006.
[13] E. J. Candès, L. Demanet, D. L. Donoho, and L.-X. Ying, CurveLab. http://www.curvelet.org/, 2007.
[14] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, "Compressed sensing MRI," 2007. Submitted to IEEE Signal Processing Magazine.
[15] D. Malioutov, M. Çetin, and A. S. Willsky, "A sparse signal reconstruction perspective for source localization with sensor arrays," IEEE Trans. Sig. Proc., vol. 53, pp. 3010–3022, 2005.
[16] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007. arXiv:math/0608665.
[17] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227–234, 1995.
[18] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007. arXiv:math/0608665.
[19] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227–234, 1995.
[20] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," 2007. arXiv:math/0608665.
[21] B. K. Natarajan, "Sparse approximate solutions to linear systems," SIAM J. Comput., vol. 24, pp. 227–234, 1995.
[22] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication. University of Illinois Press, 1949.
[23] I. F. Gorodnitsky and B. D. Rao, "Sparse signal reconstruction from limited data using FOCUSS: a re-weighted minimum norm algorithm," IEEE Transactions on Signal Processing, vol. 45, pp. 600–616, March 1997.
[24] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417–1428, 2002.
[25] E. Candès and J. Romberg, "Quantitative robust uncertainty principles and optimally sparse decompositions," Foundations of Comput. Math., vol. 6, no. 2, pp. 227–254, 2006.
[26] E. Candès and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[27] D. Donoho, "Compressed sensing," IEEE Trans. on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[28] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit," Signal Processing, vol. 86, pp. 572–588, 2006.
[29] J. A. Tropp, "Algorithms for simultaneous sparse approximation. Part II: Convex relaxation," Signal Processing, vol. 86, pp. 589–602, 2006.
[30] R. Gribonval, H. Rauhut, K. Schnass, and P. Vandergheynst, "Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms," Journal of Fourier Analysis and Applications, published online, DOI:10.1007/s00041-008-9044-y, October 2008.
[31] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematics, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[32] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematics, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[33] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematics, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[34] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematics, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[35] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematics, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[36] E. Candès, "Compressive sampling," in Proceedings of the International Congress of Mathematics, Madrid, Spain, vol. 3, pp. 1433–1452, 2006.
[37] A. Aldroubi and K. Gröchenig, "Nonuniform sampling and reconstruction in shift-invariant spaces," SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
[38] C. Zhao and P. Zhao, "Sampling theorem and irregular sampling theorem for multiwavelet subspaces," IEEE Trans. Signal Process., vol. 53, no. 2, pp. 705–713, Feb. 2005.
[39] P. Zhao, C. Zhao, and P. G. Casazza, "Perturbation of regular sampling in shift-invariant spaces for frames," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4643–4648, Oct. 2006.
[40] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417–1428, Jun. 2002.
[41] I. Maravic and M. Vetterli, "Sampling and reconstruction of signals with finite rate of innovation in the presence of noise," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2788–2805, Aug. 2005.
[42] P. Dragotti, M. Vetterli, and T. Blu, "Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 1741–1757, May 2007.
[43] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2435–2476, Oct. 1998.
[44] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. San Diego: Academic Press, 1999.
[45] A. M. Bruchstein, T. J. Shan, and T. Kailath, "The resolution of overlapping echos," IEEE Trans. Acoust., Speech, and Signal Process., vol. 33, no. 6, pp. 1357–1367, Dec. 1985.
[46] A. Aldroubi and K. Gröchenig, "Nonuniform sampling and reconstruction in shift-invariant spaces," SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
[47] C. Zhao and P. Zhao, "Sampling theorem and irregular sampling theorem for multiwavelet subspaces," IEEE Trans. Signal Process., vol. 53, no. 2, pp. 705–713, Feb. 2005.
[48] P. Zhao, C. Zhao, and P. G. Casazza, "Perturbation of regular sampling in shift-invariant spaces for frames," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4643–4648, Oct. 2006.
[49] M. Vetterli, P. Marziliano, and T. Blu, "Sampling signals with finite rate of innovation," IEEE Trans. Signal Process., vol. 50, no. 6, pp. 1417–1428, Jun. 2002.
[50] I. Maravic and M. Vetterli, "Sampling and reconstruction of signals with finite rate of innovation in the presence of noise," IEEE Trans. Signal Process., vol. 53, no. 8, pp. 2788–2805, Aug. 2005.
[51] P. Dragotti, M. Vetterli, and T. Blu, "Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix," IEEE Trans. Signal Process., vol. 55, no. 5, pp. 1741–1757, May 2007.
[52] D. L. Donoho, M. Vetterli, R. A. DeVore, and I. Daubechies, "Data compression and harmonic analysis," IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 2435–2476, Oct. 1998.
[53] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. San Diego: Academic Press, 1999.
[54] A. M. Bruchstein, T. J. Shan, and T. Kailath, "The resolution of overlapping echos," IEEE Trans. Acoust., Speech, and Signal Process., vol. 33, no. 6, pp. 1357–1367, Dec. 1985.
