ESTIMATION AND DETECTION THEORY TERM PAPER - 2014-15 EVEN SEMESTER

Recent Approaches to Sparse Channel Estimation Using Compressed Channel Sensing

Pawan Barnwal, SPCOM, ID: 14104028, and Naveen Gupta, SPCOM, ID: 14104090

Abstract—It has been realized that high-data-rate wireless communication can be achieved by collecting only a small number of samples of the available channel information. Traditionally, channel estimation is based on classical least-squares methods without considering whether the channel is sparse, so there is always a trade-off between estimation accuracy, spectral efficiency, and computational complexity. It is known that many wireless channels encountered in practice tend to exhibit a sparse multipath structure as the signal-space dimension grows large (due to large bandwidth or a large number of antennas). Here, we review several novel approaches proposed recently for sparse channel estimation in OFDM systems using compressed sensing.

Index Terms—Channel estimation, compressed sensing, orthogonal frequency-division multiplexing, sparse channel modelling, smoothed l0-norm algorithm, Bayesian sparse channel estimation.

I. INTRODUCTION

In current wireless communication systems, channel estimation is performed by sending pilot signals, which convey no user data but consume power and bandwidth; reducing the amount of required pilot signal while maintaining sufficient estimation accuracy is the main goal. Although several non-training-assisted (i.e., blind) channel estimation schemes have been proposed, the blind approach is rarely used in practical communication scenarios because of its high complexity.
It is known that the impulse response of a wireless channel tends to be sparse for large bandwidths, even in high-scattering scenarios. Underwater acoustic channels, for instance, exhibit sparsity in both the frequency and temporal domains. Exploiting this sparsity, recent approaches to channel estimation use compressed sensing with orthogonal matching pursuit (OMP) and l1-l2 optimization, which can be applied to channel estimation for underwater acoustic communication systems and is robust against Doppler effects. Sparse channel estimation in OFDM systems has also been performed using a modified discrete stochastic approximation algorithm in the recent literature.
The remainder of this paper is organized as follows. In Section II, we introduce compressive-sensing solutions based on greedy algorithms, namely matching pursuit (MP) and orthogonal matching pursuit (OMP). In Section III, channel estimation for OFDM systems using the smoothed l0 norm is presented. Section IV covers accurate channel estimation based on Bayesian compressive sensing (BCS), which uses statistical learning theory (SLT) and the relevance vector machine (RVM) algorithm to improve sparse channel estimation accuracy.

Fig. 1. Typical example of sparse channel taps.

II. COMPRESSIVE SENSING GREEDY ALGORITHMS

Here, in the first case, we consider sparse channel estimation with sparsity in the time domain and introduce greedy algorithms based on matching pursuit and orthogonal matching pursuit for obtaining an optimal solution of the compressive sensing problem. Finally, we extend the OMP algorithm to stagewise OMP, which reduces error and improves accuracy.

The compressed sensing problem can be framed as the l0-norm minimization problem

\min_x \|x\|_0 \quad \text{subject to} \quad Ax = y.
Suppose for the time being that the mutual coherence of A satisfies \mu(A) < 1 and that the optimal vector x^* is known to be 1-sparse, that is, \|x^*\|_0 = 1. The solution is then unique, and the vector y is a scalar multiple of a column of A; that is, there exist J \in \{1, 2, \ldots, n\} and z^* \in \mathbb{R} such that

y = z^* a_J,

where a_J is the J-th column vector of A. To find the optimal index J, we define the error function

e(j) = \min_z \|z a_j - y\|_2^2 = \|y\|_2^2 - \frac{(a_j^T y)^2}{\|a_j\|_2^2}, \quad j \in \{1, 2, \ldots, n\},

so that e(J) = 0, and the optimal solution x^* = [x_1^*, x_2^*, \ldots]^T is given by

x_j^* = \frac{a_j^T y}{\|a_j\|_2^2} \ \text{if } j = J, \quad \text{and } 0 \ \text{otherwise}.
The above exhaustive search has high complexity, so greedy algorithms are applied to reduce it. A greedy algorithm iteratively builds up an approximate solution of the l0-norm problem by updating the support set one element at a time. Although greedy algorithms in general reach only a local minimum rather than the optimal solution, they may outperform l1 optimization in some cases. Here we present two greedy algorithms for solving the compressed sensing problem.
A. Matching Pursuit

One of the simplest algorithms is matching pursuit (MP), also known as the pure greedy algorithm in estimation theory. MP builds its approximation by selecting one column vector at each step. At the first step (k = 1), we search for the best 1-sparse approximation x[1] of x in the sense of minimizing the residual y - Ax[1]:

J[1] = \arg\min_j e(j) = \arg\min_j \left( \|y\|_2^2 - \frac{(a_j^T y)^2}{\|a_j\|_2^2} \right) = \arg\max_j \frac{|a_j^T y|}{\|a_j\|_2},

z[1] = \frac{a_{J[1]}^T y}{\|a_{J[1]}\|_2^2}.

The first approximation x[1] is then obtained by setting the J[1]-th element of x[0] = 0 to z[1], that is, x_{J[1]}[1] = z[1]. This gives a residual r[1] = y - Ax[1] = y - z[1] a_{J[1]}, which results in the best 1-sparse approximation of the vector y: y = z[1] a_{J[1]} + r[1].

At the next step (k = 2), MP further approximates the residual r[1] by a 1-sparse vector z[2] a_{J[2]}, just as in the first step:

J[2] = \arg\max_j \frac{|a_j^T r[1]|}{\|a_j\|_2}.

The 2-sparse approximation of y is then

y = z[1] a_{J[1]} + z[2] a_{J[2]} + r[2].

Continuing in the same way, an M-sparse approximation of the vector y is obtained after M steps:

y = \sum_{k=1}^{M} z[k] a_{J[k]} + r[M].

The residual sequence \{r[k] : k = 0, 1, 2, \ldots\} converges linearly to zero provided that

\mathrm{span}(a_1, a_2, a_3, \ldots) = \mathbb{R}^m.
The matching pursuit algorithm is as follows.

ALGORITHM 1: Matching Pursuit (MP)
Require: y \in \mathbb{R}^m (observed vector)
Ensure: x \in \mathbb{R}^n (estimated sparse vector)
x[0] := 0; r[0] := y - Ax[0] = y; k := 0
repeat
    J := \arg\max_j |a_j^T r[k]| / \|a_j\|_2
    z^* := a_J^T r[k] / \|a_J\|_2^2
    x_J[k+1] := x_J[k] + z^*
    r[k+1] := y - Ax[k+1] = r[k] - z^* a_J
    k := k + 1
until \|r[k]\|_2 \le EPS
return x := x[k]

The number of iterations is remarkably reduced compared with the exhaustive search mentioned above, which requires solving the minimization problem O(n^M) times.
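As a concrete illustration, the MP loop of Algorithm 1 can be sketched in NumPy as follows. This is a minimal sketch, not the authors' implementation; the stopping threshold `eps` and the `max_iter` cap are our own choices.

```python
import numpy as np

def matching_pursuit(A, y, eps=1e-6, max_iter=1000):
    """Algorithm 1: plain matching pursuit."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    col_norms = np.linalg.norm(A, axis=0)
    for _ in range(max_iter):
        # J := argmax_j |a_j^T r[k]| / ||a_j||_2
        corr = A.T @ r / col_norms
        J = int(np.argmax(np.abs(corr)))
        # z* := a_J^T r[k] / ||a_J||_2^2
        z = (A[:, J] @ r) / col_norms[J] ** 2
        x[J] += z                     # x_J[k+1] := x_J[k] + z*
        r = r - z * A[:, J]           # r[k+1] := r[k] - z* a_J
        if np.linalg.norm(r) <= eps:  # until ||r[k]||_2 <= EPS
            break
    return x
```

As the convergence condition above suggests, when the columns of A span R^m the residual shrinks linearly, so a few hundred iterations typically suffice for a moderate-sized random A.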

B. Orthogonal Matching Pursuit (OMP)

OMP is an improved version of MP, also known as the orthogonal greedy algorithm. At step k \ge 1, OMP selects the optimal index J[k] as given above and updates the support set as \Omega[k] = \Omega[k-1] \cup \{J[k]\}, with initialization \Omega[0] = \emptyset. OMP then updates the vector x[k] by orthogonally projecting the measurement vector y onto the subspace spanned by the columns \{a_j : j \in \Omega[k]\}. In other words, x[k] minimizes \|Ax - y\|_2^2 subject to \mathrm{supp}(x) \subseteq \Omega[k]. The full algorithm is described below. In it, x_\Omega denotes the vector of length |\Omega| obtained by collecting the entries of x indexed by the support set \Omega, and A_\Omega is the M \times |\Omega| submatrix composed of the columns of A indexed by \Omega.

A major difference of OMP compared to MP is that OMP never selects the same index twice, since the residual r[k] is orthogonal to the already chosen columns \{a_j : j \in \Omega[k]\}. As a result, if the vectors a_1, a_2, \ldots, a_n span \mathbb{R}^m, OMP always produces an estimate x satisfying Ax = y after M iterations. Moreover, if at least one of the following statements is true:

1. the mutual coherence of A satisfies \mu(A) < \frac{1}{2M-1};
2. A satisfies the RIP of order M+1 with constant \delta_{M+1} < \frac{1}{3\sqrt{M}};

then OMP recovers any M-sparse vector x from the measurements y = Ax in M iterations. A probabilistic guarantee with a random matrix A is also available, as for the l1-norm minimization problem: if the rows of A are drawn independently from the n-dimensional standard Gaussian distribution, then OMP recovers an M-sparse vector with high probability using only m = O(M \log n) measurements.
ALGORITHM 2: Orthogonal Matching Pursuit (OMP)
Require: y \in \mathbb{R}^m (observed vector)
Ensure: x \in \mathbb{R}^n (estimated sparse vector)
x[0] := 0; r[0] := y - Ax[0] = y; \Omega := \emptyset; k := 0
repeat
    J := \arg\max_j |a_j^T r[k]| / \|a_j\|_2
    \Omega := \Omega \cup \{J\}
    x_\Omega[k+1] := \arg\min_v \|A_\Omega v - y\|_2
    r[k+1] := y - Ax[k+1] = y - A_\Omega x_\Omega[k+1]
    k := k + 1
until \|r[k]\|_2 \le EPS
return x := x[k]
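The OMP iteration can be sketched similarly; here the orthogonal projection step argmin_v ||A_Ω v − y||_2 is carried out with a least-squares solve. Again a minimal illustration rather than a reference implementation; `eps` and `max_iter` are our own choices.

```python
import numpy as np

def omp(A, y, eps=1e-6, max_iter=None):
    """Algorithm 2: orthogonal matching pursuit."""
    m, n = A.shape
    max_iter = max_iter or m
    support = []                      # the support set Omega
    x = np.zeros(n)
    r = y.copy()
    col_norms = np.linalg.norm(A, axis=0)
    for _ in range(max_iter):
        # J := argmax_j |a_j^T r[k]| / ||a_j||_2
        J = int(np.argmax(np.abs(A.T @ r) / col_norms))
        if J not in support:          # OMP never reselects an index
            support.append(J)
        # Orthogonal projection: x_Omega := argmin_v ||A_Omega v - y||_2
        v, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = v
        r = y - A @ x                 # residual orthogonal to chosen columns
        if np.linalg.norm(r) <= eps:
            break
    return x
```

Because the least-squares step makes the residual exactly orthogonal to every selected column, the correlations of those columns with r vanish and a new index is picked at each iteration, as noted in the text.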

Other greedy algorithms mentioned in the recent literature include the following.

C. Gradient Pursuit

In gradient pursuit, the orthogonal projection step of OMP is replaced by

x_\Omega[k+1] := x_\Omega[k] - s[k] \, A_\Omega^T (A_\Omega x_\Omega[k] - y),

where s[k] is a step size and the update direction is the negative gradient of the least-squares cost function.
D. Stagewise OMP

To speed up the above algorithm, stagewise OMP (StOMP) selects multiple columns for the M-sparse vector at each step, so the support-set update step of OMP is replaced by

\Omega := \Omega \cup \left\{ j : |a_j^T r[k]| / \|a_j\|_2 \ge T[k] \right\},

where T[k] is a threshold parameter determining which columns are selected for addition to the support set.
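A sketch of the StOMP stage update follows. The text leaves the threshold rule T[k] open; here we assume the common choice of a threshold proportional to ||r[k]||_2/sqrt(m), with the factor `t` as a tunable parameter, so this is an illustrative variant rather than the exact algorithm of any cited paper.

```python
import numpy as np

def stomp(A, y, t=2.0, n_stages=10, eps=1e-10):
    """Stagewise OMP: add every column whose normalized correlation
    with the residual exceeds the stage threshold T[k]."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    col_norms = np.linalg.norm(A, axis=0)
    for _ in range(n_stages):
        if np.linalg.norm(r) <= eps:
            break
        corr = np.abs(A.T @ r) / col_norms
        # Assumed threshold rule: T[k] proportional to ||r||_2 / sqrt(m)
        Tk = t * np.linalg.norm(r) / np.sqrt(m)
        new = np.flatnonzero(corr >= Tk)
        if new.size == 0:
            break
        support[new] = True
        idx = np.flatnonzero(support)
        # Least-squares fit on the enlarged support, as in OMP
        v, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        x = np.zeros(n)
        x[idx] = v
        r = y - A @ x
    return x
```

Selecting several columns per stage means only a handful of stages (rather than one iteration per nonzero) are needed, which is the speed-up the text refers to.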
III. CHANNEL ESTIMATION IN OFDM SYSTEMS USING THE SMOOTHED L0 NORM

Assume that the fading channel is frequency-selective. To avoid inter-symbol interference (ISI), let the length of the cyclic prefix (CP) of the OFDM symbols be larger than the maximum delay spread \tau_{max}.

Fig. 2. Pilot patterns for (a) the conventional pilot-aided channel estimation method and (b) the compressive-sensing-based approach.

If the coherence time of the channel is much larger than the OFDM symbol duration T, the channel can be considered quasi-static over an OFDM symbol. Let the number of subcarriers used in the OFDM transmission be N and the transmitted symbol on the i-th subcarrier be X(i); the corresponding received symbol Y(i) can be expressed as

Y(i) = X(i) H(i) + V(i),

where H(i) and V(i) are the channel frequency response (CFR) and the AWGN on the i-th subcarrier, respectively. Equivalently,

\begin{bmatrix} Y(1) \\ Y(2) \\ \vdots \\ Y(N) \end{bmatrix} = \begin{bmatrix} X(1) & 0 & \cdots & 0 \\ 0 & X(2) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & X(N) \end{bmatrix} \begin{bmatrix} H(1) \\ H(2) \\ \vdots \\ H(N) \end{bmatrix} + \begin{bmatrix} V(1) \\ V(2) \\ \vdots \\ V(N) \end{bmatrix}
Y = XH + V = XWh + V,

where
Y \in \mathbb{C}^{N \times 1} is the vector of received symbols,
X = \mathrm{diag}(X(1), X(2), \ldots, X(N)) \in \mathbb{C}^{N \times N} is the diagonal matrix composed of the transmitted data,
H \in \mathbb{C}^{N \times 1} is the CFR vector,
h \in \mathbb{C}^{L \times 1} is the CIR vector,
V \in \mathbb{C}^{N \times 1} is the vector of AWGN samples, and
W \in \mathbb{C}^{N \times L} is the partial Fourier transform matrix

W = \frac{1}{\sqrt{N}} \begin{bmatrix} w_{00} & \cdots & w_{0(L-1)} \\ \vdots & & \vdots \\ w_{(N-1)0} & \cdots & w_{(N-1)(L-1)} \end{bmatrix}, \quad w_{nl} = e^{-j 2\pi n l / N}.
Compressive sensing can exploit the sparse property of the channel: the CSI can be recovered from far fewer pilot subcarriers than the conventional linear method requires. In the above system model, the CIR vector h is sparse; consequently, the number of pilot subcarriers P can be less than the channel length L, i.e., P < L. Regarding the approximate random Fourier transform matrix X_p W_p as the sensing matrix A, the received pilot subcarriers Y_p as the measurement vector y, and the channel impulse response h as the unknown vector x, the problem of obtaining h from Y_p in the channel estimation model can be treated as a compressive sensing problem, but with observation noise V_p and with all vectors and matrices complex-valued. It is remarkable that the success of recovering the unknown sparse vector x from the measurement vector y relies on the random measurement effect of the sensing matrix A; that is, a sensing matrix satisfying the restricted isometry property (RIP) usually has random elements. In the compressive sensing approach to sparse channel estimation, the randomness of the sensing matrix X_p W_p must likewise be ensured. Hence the selection matrix S, which selects the P pilot locations from the N OFDM subcarriers, should be a random selection matrix composed of P random rows of an N x N identity matrix. As a consequence, the transmitted pilot subcarriers are inserted in a sparse and randomly spaced manner, as illustrated in Fig. 2(b), in contrast to the conventional linear method shown in Fig. 2(a). As for the recoverability of the sparse vector under the approximate random Fourier transform sensing matrix, it has been shown that random Fourier matrices satisfy the RIP under certain conditions.
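The construction just described (random pilot selection S, diagonal pilot matrix X_p, partial Fourier matrix W_p, sensing matrix A = X_p W_p) can be sketched as follows. The dimensions, the number of taps, and the unit-modulus pilot symbols are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, P = 256, 32, 16            # subcarriers, channel length, pilots (P < L)

# Partial Fourier matrix W (N x L): w_nl = exp(-j 2*pi*n*l / N), scaled by 1/sqrt(N)
n_idx, l_idx = np.meshgrid(np.arange(N), np.arange(L), indexing="ij")
W = np.exp(-2j * np.pi * n_idx * l_idx / N) / np.sqrt(N)

# Sparse CIR h with a few nonzero taps (illustrative)
h = np.zeros(L, dtype=complex)
taps = rng.choice(L, size=3, replace=False)
h[taps] = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# Random pilot placement: the selection matrix S keeps P random rows of I_N,
# implemented here by fancy indexing
pilot_pos = np.sort(rng.choice(N, size=P, replace=False))
Xp = np.diag(np.exp(2j * np.pi * rng.random(P)))  # unit-modulus pilot symbols
A = Xp @ W[pilot_pos, :]                          # sensing matrix A = Xp Wp
Yp = A @ h                                        # noiseless received pilots y
```

With P = 16 measurements of a length-32 CIR, the linear system Y_p = A h is underdetermined, which is exactly the regime where a sparse recovery algorithm is needed.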
A. Recovery Algorithm

After random measurement by the sensing matrix X_p W_p, which contains the transmitted pilots X_p, the CIR vector h can be extracted from the received pilot subcarriers Y_p. In this paradigm, the length of Y_p is shorter than that of the unknown vector, so the problem must be solved by a compressive sensing recovery algorithm. It can be posed as l0-norm minimization; however, that problem is NP-hard, so here it is solved by the smoothed l0 (SL0) algorithm. SL0 approximates the l0 norm of a vector x with the continuous function N - F_\sigma(x), where

F_\sigma(x) = \sum_{i=1}^{N} f_\sigma(x_i).
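Assuming the Gaussian kernel f_\sigma(x_i) = exp(-|x_i|^2 / (2\sigma^2)) that is standard for SL0 (an assumption; the text only requires f_\sigma to approximate the Kronecker delta), the method can be sketched as an outer loop that shrinks \sigma and an inner loop of gradient-ascent steps projected back onto the constraint set:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-4, sigma_decrease=0.5, L_inner=3, mu0=2.0):
    """Smoothed-l0 recovery with the Gaussian kernel
    f_sigma(x_i) = exp(-|x_i|^2 / (2 sigma^2))."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:             # external loop: shrink sigma
        for _ in range(L_inner):         # internal loop: gradient-ascent steps
            delta = x * np.exp(-np.abs(x) ** 2 / (2 * sigma ** 2))
            x = x - mu0 * delta
            # Project back onto the feasible set {x : Ax = y}
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decrease
    return x
```

Because only elementwise exponentials and one pseudoinverse are involved, the same code runs unchanged on complex-valued inputs, which is one of the practical attractions of SL0 for channel estimation.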

Here f_\sigma belongs to a family of one-variable continuous functions that approximate the Kronecker delta as \sigma \to 0. Thus, instead of minimizing \|x\|_0 subject to the data, SL0 attempts to solve the problem

Q_\sigma : \quad \max_x F_\sigma(x) \quad \text{s.t.} \quad y = Ax,

decreasing \sigma at each step and initializing each step at the maximizer obtained for the previous, larger value of \sigma. It is observed that Q_\sigma can be solved using only a few iterations of gradient ascent. Thus, the SL0 algorithm consists of two loops: an external loop responsible for gradually decreasing \sigma, and an internal loop, a simple steepest-ascent algorithm that finds the maximizer of Q_\sigma for the given \sigma.

The final SL0 algorithm, obtained by applying the above idea with the Gaussian family, extracts the CIR vector h from the received pilot subcarriers Y_p and is shown in Fig. 3. The convergence of SL0 has been thoroughly analysed in the literature, where it was shown that, under mild conditions, the sequence of maximizers of Q_\sigma converges to the unique minimizer of P_0 whenever such an answer exists. Moreover, since SL0 solves the P_0 problem directly, there is no need to distinguish real-valued from complex-valued data; it runs significantly faster than competing algorithms while producing answers with the same or better accuracy. The MSE and BER performance of the OFDM system improves significantly.

Fig. 3. The SL0 algorithm.

IV. CHANNEL ESTIMATION BASED ON BAYESIAN COMPRESSIVE SENSING (BCS)

A. OFDM System Model

A TDS-OFDM symbol contains two parts: the PN sequence and the OFDM data. Let the PN sequence be c = [c_0, c_1, c_2, \ldots, c_{M-1}]^T of length M and the OFDM data be x = [x_0, x_1, x_2, \ldots, x_{N-1}]^T of length N, so that the symbol can be expressed as

s = [s_0, s_1, s_2, \ldots, s_{M+N-1}]^T = [c^T, x^T]^T.

The discrete-time complex channel impulse response (CIR) is expressed as h = [h_0, h_1, h_2, \ldots, h_{L-1}]^T of length L (L \le M):

h(t) = \sum_{k=1}^{L} c_k \, \delta(t - \tau_k),

where \tau_k represents the delay of the k-th multipath component and c_k denotes the gain of the k-th path.

After passing through the multipath channel, the first L - 1 samples of the received PN sequence are contaminated by inter-block interference (IBI) from the tail x_{N-1}, x_{N-2}, \ldots, x_{N-L+1} of the previous OFDM data block, while the remaining samples depend on the PN sequence alone. Within this IBI-free region, the received sequence can be presented as

y = \Psi h + n,

where n is the zero-mean Gaussian noise of the transmission and

\Psi = \begin{bmatrix} c_{L-1} & c_{L-2} & \cdots & c_0 \\ c_L & c_{L-1} & \cdots & c_1 \\ \vdots & \vdots & & \vdots \\ c_{M-1} & c_{M-2} & \cdots & c_{M-L} \end{bmatrix} \in \mathbb{C}^{G \times L},

where G = M - L + 1 represents the length of the IBI-free region. The problem thus turns into using the IBI-free region of small size to recover the high-dimensional CIR without any interference cancellation.
Fig. 4. OFDM frame structure.

B. BCS Model

As shown above, y is the M-order measurement signal, \Phi is the measurement matrix of size M \times N, and x is the N-order original signal:

y = \Phi x.

Assume that x_s and x_e are N-order vectors: x_s agrees with x in its K largest-magnitude entries and is zero in the other N - K entries, while x_e agrees with x in its N - K smallest-magnitude entries and is zero in the other K entries. Obviously, x = x_s + x_e.

Since \Phi has good randomness, by the central limit theorem \Phi x_e can be approximated as Gaussian white noise. Accounting also for the noise n_w introduced in the measurement process,

y = \Phi x_s + \Phi x_e + n_w = \Phi x_s + n,

where n = \Phi x_e + n_w, modelled with zero mean and variance \sigma^2. From probability theory, we obtain the Gaussian likelihood model of y:

p(y \mid x_s, \sigma^2) = (2\pi\sigma^2)^{-K/2} \exp\left( -\frac{1}{2\sigma^2} \|y - \Phi x_s\|^2 \right).

Through the above analysis, the problem of estimating the original signal x turns into a Bayesian estimation problem.

C. BCS Algorithm via RVM

It has been assumed that n \sim N(0, \sigma^2). In addition, define \alpha_0 = 1/\sigma^2 as the precision of n, and assume \alpha_0 obeys a gamma distribution, p(\alpha_0 \mid c, d) = \Gamma(\alpha_0 \mid c, d), where c is the shape parameter and d is the scale parameter.

A further prior hypothesis is that p(x \mid \alpha) obeys a zero-mean Gaussian distribution with precision \alpha, and that \alpha obeys a gamma distribution, p(\alpha \mid a, b) = \Gamma(\alpha \mid a, b), so that

p(x \mid a, b) = \prod_{i=1}^{N} \int_0^\infty N(x_i \mid 0, \alpha_i^{-1}) \, \Gamma(\alpha_i \mid a, b) \, d\alpha_i,

where \alpha_i^{-1} represents the variance of each component. This marginal obeys a Student-t distribution, which attains its extremum near x_i = 0 for suitable choices of a and b, so that sparsity of the original signal x can be assumed. Setting a = b = c = d = 0, y obeys a zero-mean Gaussian distribution:

p(y \mid \alpha, \alpha_0) = (2\pi)^{-K/2} \, |\alpha_0^{-1} I + \Phi A^{-1} \Phi^T|^{-1/2} \exp\left( -\tfrac{1}{2} \, y^T (\alpha_0^{-1} I + \Phi A^{-1} \Phi^T)^{-1} y \right).

According to Bayes' formula, the posterior probability is given by

p(x \mid y, \alpha, \alpha_0) = \frac{p(y \mid x, \alpha_0) \, p(x \mid \alpha)}{p(y \mid \alpha, \alpha_0)} = (2\pi)^{-N/2} \, |\Sigma|^{-1/2} \exp\left( -\tfrac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right),

with

\Sigma = (\alpha_0 \Phi^T \Phi + A)^{-1}, \qquad \mu = \alpha_0 \Sigma \Phi^T y,

where A = \mathrm{diag}(\alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_N), and \Sigma and \mu are the posterior covariance and mean, respectively.

To estimate the original signal, estimates of \alpha and \alpha_0 are also required. Based on the type-II maximum-likelihood approximation, the point estimates for \alpha and \alpha_0 can be updated as

\alpha_i^{new} = \frac{\gamma_i}{\mu_i^2}, \quad i = 1, 2, \ldots, N, \qquad (\sigma^2)^{new} = \frac{1}{\alpha_0^{new}} = \frac{\|y - \Phi \mu\|_2^2}{N - \sum_i \gamma_i},

where \gamma_i = 1 - \alpha_i \Sigma_{ii} and \Sigma_{ii} is the i-th diagonal element of \Sigma. The signal estimation then becomes an iterative process between the above equations until a convergence criterion is satisfied.

D. Reducing the Column Correlation of \Phi

To obtain better estimation performance, the effect of the measurement matrix has been studied in the literature, and it has been observed that the smaller the column correlation of the measurement matrix, the smaller the resulting estimation error. The following explains how to reduce the correlation between the columns of the measurement matrix.

Define G = \Phi^T \Phi as the Gram matrix, whose off-diagonal elements represent the correlation between different columns of the measurement matrix. To reduce the correlation, we shrink the elements of G through the following iteration:

g_{ij} = \begin{cases} \gamma \, g_{ij}, & |g_{ij}| \ge t \\ \gamma t \, \mathrm{sign}(g_{ij}), & t > |g_{ij}| \ge \gamma t \\ g_{ij}, & \gamma t > |g_{ij}| \end{cases}

where \gamma (0 < \gamma < 1) is the attenuation factor and t is the tolerance threshold.

1) Step 1: Normalize every column of the measurement matrix \Phi to obtain the new matrix \tilde{\Phi}.
2) Step 2: Calculate the Gram matrix G = \tilde{\Phi}^T \tilde{\Phi}.
3) Step 3: Set up the threshold t for the iteration.
4) Step 4: Update the Gram matrix G by the shrinkage rule above, obtaining the new matrix \hat{G}.
5) Step 5: Rank reduction: apply the singular value decomposition (SVD) to \hat{G}; by keeping K elements of the diagonal matrix, we reduce the rank of \hat{G} to K.
6) Step 6: Calculate the new measurement matrix \Phi_{new} of size K \times N by decomposing \hat{G} = \Phi_{new}^T \Phi_{new}.

Through the above steps, we obtain a new measurement matrix with lower column correlations. To obtain better results, the prior steps can be iterated several times.

CONCLUSION

Traditional sparse channel estimation methods are vulnerable to noise and to column-coherence interference in the training matrix. We have studied and compared recent novel approaches to channel estimation based on compressive sensing. The SL0-based sparse channel estimation method not only works well for complex-valued channels but also extracts the CSI more accurately from a restricted number of pilots. It has been shown in the literature that the BCS-based channel estimation algorithm outperforms traditional CS algorithms, and its accuracy is further improved by reducing the correlation between the columns of the measurement matrix.


REFERENCES
[1] H. Wang, Q. Guo, and G. Zhang, "Complex-valued sparse channel estimation method based on smoothed l0 norm algorithm," in Proc. 12th Int. Conf. on Signal Processing (ICSP), 2014, pp. 1602-1607.
[2] W. U. Bajwa, J. Haupt, A. M. Sayeed, and R. Nowak, "Compressed channel sensing: A new approach to estimating sparse multipath channels," Proc. IEEE, vol. 98, no. 6, June 2010.
[3] H. Xie, G. Andrieux, Y. Wang, J.-F. Diouris, and S. Feng, "A novel effective compressed sensing based sparse channel estimation in OFDM system," in Proc. IEEE Int. Conf. on Signal Processing, Communication and Computing (ICSPCC), Kunming, China, Aug. 2013, pp. 1-6.
[4] G. Gui, A. Mehbodniya, and F. Adachi, "Bayesian sparse channel estimation and data detection for OFDM communication systems," in Proc. IEEE 78th Vehicular Technology Conference (VTC-Fall), Las Vegas, USA, Sept. 2013.
[5] Y. Han and Z. Fan, "Accurate channel estimation based on Bayesian compressive sensing for next-generation wireless broadcasting systems," in Proc. IEEE Int. Symp. on Broadband Multimedia Systems and Broadcasting (BMSB), June 2014.
