
A Variable Step-Size Blind Equalization Algorithm

Based on Particle Swarm Optimization


Omar Alhmouz∗, Shafayat Abrar‡, Naveed Iqbal∗ and Azzedine Zerguine∗
∗Electrical Engineering Department
King Fahd University of Petroleum and Minerals
Dhahran, 31261, Saudi Arabia
Email: ohmouz@gmail.com, {naveediqbal, azzedine}@kfupm.edu.sa
‡Department of Electrical Engineering, School of Science and Engineering
Habib University, Karachi, Pakistan
Email: shafayat.abrar@sse.habib.edu.pk

Abstract—In this work, a variable step-size (VSS) blind equalization algorithm is presented for a multimodulus blind equalization scheme. The parameters of the proposed algorithm are selected using a particle swarm optimization strategy. The best parameters for the proposed VSS blind equalization algorithm are obtained, and better performance is achieved when compared to the trial-and-error method for choosing these parameters. Ultimately, a considerable reduction in computational complexity is obtained when compared to the fixed-step algorithm with extra constraints. Simulation results support this new proposed technique.

I. INTRODUCTION

Blind equalization is used in many high-speed communication systems [1]-[5] requiring a fast-convergence algorithm that does not delay the startup of the communication device and can (to some extent) keep track of channel changes without introducing delays in the data exchange. The constant modulus algorithm (CMA), having the problem of slow convergence, was not able to satisfy these demands [6]. To overcome this problem, many solutions have been adopted, as was done for the least-mean-squares (LMS) algorithm [7], and among these is the variable-step-size (VSS) strategy [8]. Many algorithms for variable step-size have been proposed in the literature: some use the simple idea of two filters with two different step-sizes [9], while others use computationally expensive calculations to find the optimal step-size [10]. Other methods use a procedure to estimate the convergence state of the equalizer, with a set of parameters to optimize for best performance [11].

Decision-directed transfer techniques are used to put the equalizer in decision-directed mode (DD-mode) without the need for a threshold level or an estimate of the state of convergence, resulting in a lower misadjustment steady-state error [12].

In this work, we propose to use a variable step-size for the algorithm in [13]; moreover, the Particle Swarm Optimization (PSO) strategy [14] is used to find the optimum variable step-size parameters.

II. SYSTEM MODEL

Consider the baseband representation of a digital data transmission system in Fig. 1, where a(n) denotes the transmitted symbols, v(n) is the channel noise, x(n) is the equalizer input, and â(n) is the output of the decision device. The received signal can be modeled as

x(n) = Σ_{i=0}^{L−1} h(i)a(n − i)e^{jφ(n)} + v(n),

where h(i), i = 0, 1, ..., L − 1, are the complex static channel tap weights, L is the length of the channel response, a(n) are the complex data symbols, n is the time index, and e^{jφ(n)} is caused by a carrier phase error given by φ(n) = 2πn∆f/R_d, where R_d is the data rate used and ∆f is the frequency shift. The equalizer tap-weight vector is defined as

w(n) = [w_0(n), w_1(n), ..., w_{N−1}(n)]^T. (1)

We define the actual equalizer output as follows:

y(n) = w^H(n)x(n). (2)

The objective is to achieve the following:

â(n) = e^{jθ}a(n − ∆) (3)

without using a training signal available at the receiver.

Fig. 1: Blind equalization in the baseband.

III. ALGORITHM DEVELOPMENT

The multimodulus class of algorithms proposes a cost function that leads to simultaneous phase recovery and blind equalization [15]:

J_y(n) = J_{y,R}(n) + J_{y,I}(n), (4)

978-1-5386-2070-0/18/$31.00 ©2018 IEEE
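For concreteness, the system model above can be sketched numerically. The channel taps, the offset-to-rate ratio, and the centre-spike equalizer initialization below are illustrative assumptions, not values from the paper; only the 16-QAM alphabet, the 30 dB SNR, and the N = 9 equalizer length are taken from the simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) parameters: a short complex channel and a small offset.
h = np.array([0.9 + 0.1j, 0.3 - 0.2j, 0.1 + 0.05j])   # channel taps h(i), i = 0..L-1
N = 9                                                  # equalizer length
n_sym = 1000
levels = np.array([-3, -1, 1, 3])
a = rng.choice(levels, n_sym) + 1j * rng.choice(levels, n_sym)   # 16-QAM a(n)

snr_db = 30
df_over_Rd = 1e-5                                      # assumed offset-to-rate ratio
phi = 2 * np.pi * df_over_Rd * np.arange(n_sym)        # carrier phase error phi(n)

# Received signal: x(n) = sum_i h(i) a(n-i) e^{j phi(n)} + v(n)
x = np.convolve(a, h)[:n_sym] * np.exp(1j * phi)
noise_pow = np.mean(np.abs(x) ** 2) / 10 ** (snr_db / 10)
x = x + np.sqrt(noise_pow / 2) * (rng.standard_normal(n_sym)
                                  + 1j * rng.standard_normal(n_sym))

# Equalizer output (2): y(n) = w^H(n) x(n) over the regressor [x(n), ..., x(n-N+1)]
w = np.zeros(N, dtype=complex)
w[N // 2] = 1.0                                        # centre-spike initialization

def equalizer_output(w, x, n):
    xn = x[n - N + 1:n + 1][::-1]
    return np.vdot(w, xn)                              # vdot conjugates w -> w^H x

y = equalizer_output(w, x, 50)
```

The blind algorithms developed next operate on exactly these quantities: the regressor window of x(n) and the output y(n).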


where J_{y,R} and J_{y,I} are, respectively, the cost functions for the real and imaginary parts of the equalizer output, defined by

J_{y,R}(n) = E[(y_R^2(n) − R_R^2)^2], (5)
J_{y,I}(n) = E[(y_I^2(n) − R_I^2)^2]. (6)

The dispersion constants R_R and R_I are given by

R_R = E[a_R^4(n)]/E[a_R^2(n)] and R_I = E[a_I^4(n)]/E[a_I^2(n)], (7)

for independent and identically distributed input data symbols, where a(n) = a_R(n) + ja_I(n).

Lin proposed to form the following constrained optimization problem [16]:

J = min_{w(n+1)} { ||w(n + 1) − w(n)||_2^2 + λ_1 s_R(n)(s_R^2(n) − R_R^2) + λ_2 s_I(n)(s_I^2(n) − R_I^2) }, (8)

where λ_1 and λ_2 are the Lagrange multipliers, R_R and R_I are the dispersion constants, and the a posteriori output s(n) is defined as:

s(n) = s_R(n) + js_I(n) = w^H(n + 1)x(n). (9)

If hard constraints are enforced, the above equation becomes

s(n) = R_R sgn[y_R(n)] + jR_I sgn[y_I(n)]. (10)

Qutub and Zerguine [13] generalized Lin's [16] work to the case of m constraints. Thus, the constrained optimization problem can be formulated as [13]

J = min_{w(n+1)} { ||w(n + 1) − w(n)||_2^2 + Σ_{k=1}^{m} λ_{1k} s_R(n − k + 1)(s_R^2(n − k + 1) − R_R^2) + Σ_{k=1}^{m} λ_{2k} s_I(n − k + 1)(s_I^2(n − k + 1) − R_I^2) }, (11)

where

R_R = E[|a_R(n)|^2]/E[|a_R(n)|] and R_I = E[|a_I(n)|^2]/E[|a_I(n)|], (12)

and λ_{1k}, λ_{2k} (1 ≤ k ≤ m) are the Lagrange multipliers.

Note that for m = 1, (11) reduces to that of Lin's algorithm [16], [17]. Using the method of Lagrange multipliers, (11) can be solved and the following tap update equation is obtained:

w(n + 1) = w(n) + XΩ^{−1}[s − y]^H, (13)

where s, y, X, and Ω are defined, respectively, as

s = [s(n) s(n − 1) ... s(n − m + 1)],
y = [y(n) y(n − 1) ... y(n − m + 1)],
X = [x(n) x(n − 1) ... x(n − m + 1)],

and

Ω = X^H X.

Equation (13) can be modified by replacing the difference of the vectors s and y by a single vector E, and using a learning parameter µ, to yield

w(n + 1) = w(n) + µXΩ^{−1}E^H. (14)

For the case of m = k constraints, E can be written as

E = [E_1 E_2 ... E_k], (15)

where each E_i (1 ≤ i ≤ k) can be written as

E_i = R_R sgn[y_R(n − i + 1)] + jR_I sgn[y_I(n − i + 1)] − y(n − i + 1). (16)

The decision-directed mode transfer technique was implemented in [13] in order to minimize the steady-state misadjustment and obtain a lower mean square error, such that

E_i = { R_R sgn[y_R(n − i + 1)] + jR_I sgn[y_I(n − i + 1)] − y(n − i + 1),  y(n − i + 1) ∉ C,
        â(n − i + 1) − y(n − i + 1),  y(n − i + 1) ∈ C, (17)

where C is a small region around each of the original constellation data symbols, as shown in Fig. 2. Note that here we are assuming some knowledge about the constellation used.

Fig. 2: 16-QAM constellation decision boundaries representing C.

The conditioning of the input data matrix Ω = X^H X is an important factor affecting the algorithm's convergence and sensitivity to noise. All such algorithms need a regularization factor when implemented practically. In this work, a Tikhonov regularization procedure is used.

IV. THE VARIABLE STEP-SIZE ALGORITHM

The convergence rate of the algorithm in [13] can be further increased by allowing the step-size to vary. Following the procedure in [18], the step-size can be updated based on the prediction error; hence, equation (14) is modified to

w(n + 1) = w(n) + µ(n)XΩ^{−1}E^H, (18)
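One iteration of the regularized multimodulus block update (14)–(17) can be sketched as below. The random regressor matrix, the Tikhonov factor delta, and the value of mu are illustrative assumptions (the paper does not specify them); the dispersion constants follow (12) for 16-QAM, and the decision-directed branch of (17) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

N, k = 9, 3          # equalizer length and number of constraints (m = k)
mu = 0.05            # learning parameter (illustrative)
delta = 1e-3         # Tikhonov regularization factor (assumed value)
RR = RI = 2.5        # dispersion constants per (12) for 16-QAM: E[|aR|^2]/E[|aR|] = 5/2

w = np.zeros(N, dtype=complex)
w[N // 2] = 1.0      # centre-spike initialization (a common, assumed choice)

# Regressor matrix X = [x(n) x(n-1) ... x(n-k+1)]; random stand-in for channel output
X = (rng.standard_normal((N, k)) + 1j * rng.standard_normal((N, k))) / np.sqrt(2)

y = w.conj() @ X     # delayed equalizer outputs y(n-i+1) = w^H x(n-i+1), i = 1..k

# (16): multimodulus error for each delayed output
E = RR * np.sign(y.real) + 1j * RI * np.sign(y.imag) - y

# (14) with Tikhonov regularization: Omega = X^H X + delta*I, then w <- w + mu X Omega^{-1} E^H
Omega = X.conj().T @ X + delta * np.eye(k)
w = w + mu * X @ np.linalg.solve(Omega, E.conj())
```

Solving the regularized k x k system with `np.linalg.solve` rather than forming Omega^{-1} explicitly is numerically preferable and reflects the conditioning concern noted above.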

where

µ(n + 1) = αµ(n) + γp(n)p∗(n), (19)

the constraints on α and γ are the same as in [8], and the variable p(n) is updated according to the autocorrelation of the error as follows:

p(n) = βp(n − 1) + (1 − β)|E^H(n)E(n − 1)|, (20)

with 0 < β < 1 acting as a forgetting factor. It should be noted here that, for the case of complex data, equation (20) is modified with E_i (1 ≤ i ≤ k) given by (17).

Finally, the step-size µ is confined to the following:

µ(n) = µmax, if µ(n) > µmax,
       µmin, if µ(n) < µmin,
       µ(n), otherwise. (21)

Here, µmax and µmin are employed to better control the step-size behavior, where µmax is selected to provide the maximum possible convergence speed, while µmin is chosen to provide a minimum level of tracking ability for the algorithm.

TABLE I: VSS Parameters

No. of Constraints | β    | γ      | µmax  | µmin
m = 1              | 0.99 | 0.8843 | 0.1   | 0.065
m = 2              | 0.99 | 0.76   | 0.05  | -
m = 3              | 0.99 | 0.0437 | 0.027 | 0.009
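The recursion (19)–(21) can be sketched as follows, using the single-constraint values from Table I (and α = 0.97, as used in the simulations). The decaying error sequence is synthetic, standing in for the equalizer's actual error vector E(n).

```python
import numpy as np

alpha, beta, gamma = 0.97, 0.99, 0.8843    # m = 1 values (Table I; alpha from Sec. VI)
mu_max, mu_min = 0.1, 0.065

def vss_update(mu, p_prev, E_now, E_prev):
    """One step of (19)-(21): autocorrelation-driven variable step-size."""
    # (20): p(n) = beta*p(n-1) + (1 - beta)*|E^H(n) E(n-1)|
    p = beta * p_prev + (1 - beta) * abs(np.vdot(E_now, E_prev))
    # (19): mu(n+1) = alpha*mu(n) + gamma*p(n)p*(n); p is real-valued here
    mu_next = alpha * mu + gamma * abs(p) ** 2
    # (21): confine the step-size to [mu_min, mu_max]
    return min(max(mu_next, mu_min), mu_max), p

mu, p = mu_max, 0.0                        # start at the fastest allowed step-size
E_prev = np.array([0.2 + 0.1j])            # synthetic single-constraint error vector
for _ in range(5):
    E_now = 0.9 * E_prev                   # illustrative decaying error
    mu, p = vss_update(mu, p, E_now, E_prev)
    E_prev = E_now
```

As the error autocorrelation shrinks, (19) is dominated by the αµ(n) term and the step-size decays toward µmin, which is the intended large-step/small-step behavior near convergence.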

Fig. 3: MSE performance of the VSS algorithm using a single constraint whose parameters are chosen by i) PSO, ii) Randomly.

V. PARTICLE-SWARM-BASED ALGORITHM FOR SELECTION OF THE VSS PARAMETERS

The advantages gained by using a VSS are sometimes disregarded because of the parameters associated with it. Some algorithms found in the literature offer a very good performance enhancement but have a huge number of parameters that have to be optimized for each scenario, which is an extra task for the designer when implementing these algorithms. For this reason, we propose to use a Particle Swarm Optimization (PSO) strategy [14], [19]-[24] to find the VSS parameter values that lead to the best performance of the algorithm. In this work, since γ in (19) is the dominant parameter controlling the behavior of the algorithm, it is the only variable optimized using PSO. The PSO algorithm applied in this study is described fully in [14].

VI. SIMULATION RESULTS

The channel used in the simulation is taken from [25], the data input symbols are 16-QAM, and the SNR is set to 30 dB. An equalizer of 9 taps (N = 9) is used, and the performance indices are the mean square error (MSE) and the residual inter-symbol interference (ISI), which is defined as [12]

ISI = (Σ_n |h(n) ∗ w∗(n)| − |h(n) ∗ w∗(n)|_max) / |h(n) ∗ w∗(n)|_max,

where h(n) is the channel impulse response. In order to show the effect of using a variable step-size on performance, the algorithm in [13] is compared for the cases of fixed step-size and variable step-size for different numbers of constraints. We used α = 0.97. The values for β and γ, as well as the values for µmax and µmin used in (21), are given in Table I. We used β = 0.99, since β should satisfy β ≈ 1 for a stationary environment [8].

PSO was used to find the best value of the parameter γ that leads to a maximum speed of convergence and a steady-state error of −20 dB. The PSO algorithm applied in this study relies on two important quantities: the positions and velocities of the particles, which are generated by randomly selecting values with uniform probability over the dth optimized search space [u_dmin, u_dmax] = [0, 1]. These are updated according to the following recursions [14]. The velocity is updated according to

v_id(n) = z(n)v_id(n − 1) + c_1 a[u_idpbest(n − 1) − u_id(n − 1)] + c_2 a[u_idgbest(n − 1) − u_id(n − 1)], (22)

while the position is updated as follows:

u_id(n) = v_id(n) + u_id(n − 1), (23)

and the inertia weight update is given by

z(n) = 0.99z(n − 1), (24)

where a is a uniformly distributed random variable in [0, 1], i is the particle index, which goes from 1 to N_p = 20, c_1 and c_2 are constants equal to 2, d is the dimension of the variables (here 1), u_pbest is the particle position with the best local fitness value, and u_gbest is that with the global best fitness value.

To show the effectiveness of the PSO in choosing the VSS parameter γ, the mean square error and the residual ISI are plotted, respectively, in Fig. 3 and Fig. 4, for two cases: one with γ selected randomly and the other with γ selected using PSO. The results are shown for the case of a single constraint. As can be seen from these two figures, the use of PSO in selecting γ results in better performance. Approximately,

Fig. 4: Residual ISI performance of the VSS algorithm using a single constraint whose parameters are chosen by i) PSO, ii) Randomly.

Fig. 5: MSE performance of the VSS algorithm using PSO and that of [13] for a single constraint.

500 iterations are gained in favor of the proposed algorithm when γ is selected using PSO over the randomly selected value.
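The PSO recursions (22)–(24) can be sketched as below. The quadratic fitness function is a stand-in whose minimum is placed at the Table I value γ = 0.8843 purely for illustration; the paper's actual fitness is the MSE convergence behavior of the equalizer, which is omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

Np, d = 20, 1            # Np = 20 particles; d = 1 dimension (gamma only)
c1 = c2 = 2.0
u = rng.uniform(0.0, 1.0, (Np, d))    # positions over [u_dmin, u_dmax] = [0, 1]
v = rng.uniform(0.0, 1.0, (Np, d))    # velocities
z = 1.0                               # inertia weight, decayed by (24)

def fitness(gamma):
    # Stand-in quadratic objective with its minimum at the Table I value of gamma;
    # the paper's actual fitness is the MSE performance of the equalizer.
    return (gamma - 0.8843) ** 2

pbest = u.copy()
pbest_f = np.array([fitness(g) for g in u[:, 0]])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(50):
    a1 = rng.uniform(size=(Np, d))    # 'a' in (22): uniform r.v. in [0, 1]
    a2 = rng.uniform(size=(Np, d))
    v = z * v + c1 * a1 * (pbest - u) + c2 * a2 * (gbest - u)   # (22)
    u = u + v                                                   # (23)
    z *= 0.99                                                   # (24)
    f = np.array([fitness(g) for g in u[:, 0]])
    improved = f < pbest_f
    pbest[improved] = u[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
```

Because the global and personal bests are only ever replaced by better candidates, gbest moves monotonically toward the fitness minimum even while individual particles overshoot.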
Fig. 6: MSE performance of the VSS algorithm using PSO and that of [13] for double constraints.

To further support the proposed algorithm, its performance when PSO is used to select the value of γ is compared to that of [13] for different numbers of constraints. Figures 5-7 show the enhanced performance of the VSS algorithm over its fixed step-size counterpart [13] for the cases of a single constraint, double constraints, and triple constraints, respectively, in terms of the MSE. The use of the VSS procedure ensures a better convergence speed in all cases. Figure 8 shows the MSE performance of the proposed algorithm for different constraints. As can be seen from this figure, the use of the VSS procedure with a single constraint provides good convergence speed without the need for additional constraints. Hence, the computational complexity is reduced from m(2N + 1) additions and [m(2N + 2) + 2] multiplications for the algorithm of [13], m being the number of constraints, to only (2N + 3) additions and (3N + 8) multiplications for the proposed VSS PSO-based algorithm.
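These operation counts can be evaluated directly for the N = 9 equalizer used in the simulations:

```python
# Operation counts quoted in the text: fixed step-size algorithm of [13] with m
# constraints vs. the proposed single-constraint VSS PSO-based algorithm.
N = 9  # equalizer taps, as in the simulations

def fixed_step_ops(m, N):
    return m * (2 * N + 1), m * (2 * N + 2) + 2   # (adds, mults)

def vss_pso_ops(N):
    return 2 * N + 3, 3 * N + 8                   # (adds, mults)

for m in (1, 2, 3):
    print(f"fixed step-size, m={m}: {fixed_step_ops(m, N)} (adds, mults)")
print(f"proposed VSS-PSO:       {vss_pso_ops(N)} (adds, mults)")
# m = 3 costs (57, 62) operations, while the proposed algorithm needs only (21, 35)
```

The savings appear once the fixed-step algorithm needs multiple constraints: the proposed single-constraint VSS matches the convergence of the triple-constraint case at roughly a third of the additions and half the multiplications.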
To sum up, the proposed algorithm is able to compensate for the expensive use of extra constraints in the fixed step-size algorithm [13] through the simple addition of a VSS procedure.

Fig. 7: MSE performance of the VSS algorithm using PSO and that of [13] for triple constraints.

VII. CONCLUSION

The variable step-size algorithm presented in this work leads to better performance than that of [13], eliminating the need to use multiple constraints to increase the convergence speed. The VSS parameter, γ, has been optimized using a PSO procedure in order to give the best performance in terms of convergence time. The simulation results presented show the enhanced performance brought about by the VSS algorithm incorporating PSO. Moreover, the proposed algorithm results in a considerable reduction in computational complexity when compared to that in [13], thereby eliminating the need for extra constraints.

Fig. 8: MSE performance of the VSS algorithm using PSO for all constraints.

Acknowledgment: The authors acknowledge the support provided by the Deanship of Scientific Research at KFUPM under Research Grant RG1414.

REFERENCES

[1] S. Abrar, A. Ali, A. Zerguine, and A. K. Nandi, "Tracking Performance of Two Constant Modulus Equalizers," IEEE Communications Letters, vol. 17, no. 5, pp. 830-834, May 2013.
[2] A. Ali, S. Abrar, A. Zerguine, and A. K. Nandi, "Newton-Like Minimum Entropy Equalization Algorithm for APSK Systems," Signal Processing, vol. 101, pp. 74-86, August 2014.
[3] A. W. Azim, S. Abrar, A. Zerguine, and A. K. Nandi, "Steady-State Performance of Multimodulus Blind Equalizers," Signal Processing, vol. 108, pp. 509-520, March 2015.
[4] A. W. Azim, S. Abrar, A. Zerguine, and A. K. Nandi, "Performance Analysis of a Family of Adaptive Blind Equalization Algorithms for Square-QAM," Digital Signal Processing, vol. 48, pp. 163-177, January 2016.
[5] S. Abrar, A. Zerguine, and A. K. Nandi, "Blind Adaptive Carrier Phase Recovery for QAM Systems," Digital Signal Processing, vol. 48, pp. 65-85, February 2016.
[6] J. R. Treichler and B. G. Agee, "A New Approach to Multipath Correction of Constant Modulus Signals," IEEE Trans. Acoust., Speech, Signal Processing, vol. 31, pp. 349-372, Apr. 1983.
[7] S. Haykin, Adaptive Filter Theory, 4th ed., Prentice-Hall, Upper Saddle River, NJ, 2002.
[8] T. Aboulnasr and K. Mayyas, "A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations," IEEE Trans. Signal Processing, vol. 45, no. 3, pp. 631-639, March 1997.
[9] J. Arenas-García and A. R. Figueiras-Vidal, "Improved Blind Equalization via Adaptive Combination of Constant Modulus Algorithms," Proc. IEEE ICASSP 2006, vol. 3, pp. 756-759, 14-19 May 2006.
[10] V. Zarzoso and P. Comon, "Blind Channel Equalization with Algebraic Optimal Step Size," EUSIPCO 2005, 4-8 Sept. 2005.
[11] K. Hung and D. W. Lin, "A Hybrid Variable Step-Size Adaptive Blind Equalization Algorithm for QAM Signals," IEEE GLOBECOM '05, vol. 4, pp. 2140-2144, 28 Nov.-2 Dec. 2005.
[12] F. C. C. De Castro, M. C. F. De Castro, and D. S. Arantes, "Concurrent Blind Deconvolution for Channel Equalization," Proc. ICC 2001, vol. 2, pp. 366-371, 11-15 June 2001.
[13] B. Qutub and A. Zerguine, "A Generalized Dual Mode Blind Equalization Scheme with Carrier Recovery," EUSIPCO 2005, 4-8 Sept. 2005.
[14] J. Kennedy and R. Eberhart, "Particle Swarm Optimization," Proc. IEEE Int. Conf. on Neural Networks, vol. 4, pp. 1942-1948, 27 Nov.-1 Dec. 1995.
[15] J. Yang, J.-J. Werner, and G. A. Dumont, "The Multimodulus Blind Equalization Algorithm," Proc. IEEE Int. Conf. on DSP, vol. 1, pp. 127-130, 1997.
[16] J.-C. Lin, "Blind Equalization Technique Based on an Improved Constant Modulus Adaptive Algorithm," IEE Proc. Commun., vol. 149, no. 1, pp. 45-50, Feb. 2002.
[17] S. Abrar, A. Zerguine, and M. Deriche, "Soft Constraint Satisfaction Multimodulus Blind Equalization Algorithms," IEEE Signal Processing Letters, vol. 12, no. 9, pp. 637-640, September 2005.
[18] J. Jusak, M. Hussain, and J. Harris, "Performance of Variable Step Size Dithered Signed-Error CMA for Blind Equalization," IEEE TENCON 2004, vol. 2, pp. 684-687, 21-24 Nov. 2004.
[19] H. El-Morra, A. U. Sheikh, and A. Zerguine, "Application of Particle Swarm Optimization Algorithm to Multiuser Detection in CDMA," PIMRC 2005, Berlin, Germany, pp. 2522-2526, September 11-14, 2005.
[20] H. El-Morra, A. Zerguine, and A. U. Sheikh, "A New Hybrid Heuristic Multiuser Detector for DS-CDMA Communication Systems," Wireless Communications and Mobile Computing, John Wiley, vol. 9, no. 10, pp. 1331-1342, October 2009.
[21] A. Al-Awami, A. Zerguine, L. Cheded, A. Zidouri, and W. A. Saif, "A New Modified Particle Swarm Optimization Algorithm for Adaptive Equalization," Digital Signal Processing, vol. 21, no. 2, pp. 195-207, 2011.
[22] A. H. Abdelhafiz, O. Hammi, A. Zerguine, A. T. Al-Awami, and F. M. Ghannouchi, "A PSO Based Memory Polynomial Predistorter with Embedded Dimension Estimation," IEEE Trans. Broadcast., vol. 59, no. 4, pp. 665-673, Dec. 2013.
[23] N. Iqbal, A. Zerguine, and N. Al-Dhahir, "Adaptive Equalization Using Particle Swarm Optimization for Uplink SC-FDMA," Electron. Lett., vol. 50, no. 6, pp. 469-471, March 2014.
[24] N. Iqbal, A. Zerguine, and N. Al-Dhahir, "Decision Feedback Equalization Using Particle Swarm Optimization," Signal Processing, vol. 108, pp. 1-12, March 2015.
[25] G. Picchi and G. Prati, "Blind Equalization and Carrier Recovery Using a 'Stop-and-Go' Decision-Directed Algorithm," IEEE Trans. Commun., vol. COM-35, no. 9, Sep. 1987.
