
IEEE Signal Processing Letters, vol. 21, no. 9, September 2014, pp. 1108-1110

Continuous Mixed p-Norm Adaptive Algorithm for System Identification

Hadi Zayyani

Department of Electrical and Computer Engineering, Qom University of Technology, Qom, Iran (e-mail: zayyani2009@gmail.com)

Manuscript received March 17, 2014; revised May 05, 2014; accepted May 15, 2014. Date of publication May 19, 2014; date of current version May 23, 2014. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Jeronimo Arenas-Garcia. Digital Object Identifier 10.1109/LSP.2014.2325495.

Abstract: We propose a new adaptive filtering algorithm for system identification applications based on a continuous mixed p-norm. It enjoys the advantages of various error norms, since it combines p-norms for 1 <= p <= 2. The mixture is controlled by a continuous probability-density-like function of p, which is assumed to be uniform in the derivations of this letter. Two versions of the suggested algorithm are developed. The robustness of the proposed algorithms against impulsive noise is demonstrated in a system identification simulation.

Index Terms: Adaptive filter, impulsive noise, mixed-norm, system identification.
I. INTRODUCTION

Adaptive filters have a wide range of applications in signal processing, including system identification [1]. The family of mixed-norm adaptive filters has been introduced to combine the benefits of two previously established adaptive filter algorithms. First, the least mean mixed-norm (LMMN) adaptive filter was suggested, which combined the least mean square (LMS) and the least mean fourth (LMF) algorithms [2]. Then, a robust mixed-norm (RMN) algorithm was presented in [3], which combined the LMS and least absolute deviation (LAD) algorithms. Also, a normalized RMN (NRMN) algorithm was proposed in [4].

In a different line of research, affine projection sign algorithms (APSA), which are robust against impulsive noise, have recently been proposed [5]. In addition, a proportionate APSA (PAPSA) has been presented in [6]. Moreover, an improved APSA [7] and a memory-improved PAPSA [8] have also been suggested.

In this letter, a new adaptive algorithm is devised which is robust against impulsive noise and generalizes the mixed-norm definition of the RMN algorithm. The RMN algorithm is based on minimization of the error norm

J(n) = λ(n) E{e^2(n)} + (1 − λ(n)) E{|e(n)|},   (1)

where n is the time index and λ(n), with 0 <= λ(n) <= 1, is a mixing parameter which controls the combination of the error norms. The output error of the adaptive algorithm is e(n) = d(n) − w^T(n) x(n), where w(n) is the M-tap weight vector of the adaptive algorithm and x(n) = [x(n), x(n−1), ..., x(n−M+1)]^T is the M-tap input vector at iteration n. The desired signal d(n) consists of the output of the unknown system plus impulsive and non-impulsive noise terms.

There are also algorithms which minimize a cost function selected as the p-th order moment of the error [9]. These algorithms have been called least mean p-norm (LMP) algorithms. In [9], a variable step-size normalized LMP (VSS-NLMP) adaptive algorithm was proposed for non-Gaussian interference environments.

In this letter, inspired by p-norm adaptive filtering algorithms (see [9], [10] and [11]), the mixed norm defined in (1) is heuristically generalized to the continuous mixed p-norm

J(n) = ∫_1^2 λ(p) E{|e(n)|^p} dp,   (2)

where n is the time index and λ(p) is a probability-density-like weighting function with the constraint ∫_1^2 λ(p) dp = 1. Based on this cost function, we propose two adaptive algorithms.
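To make (2) concrete, the short Python fragment below evaluates the continuous mixed p-norm cost of a sample error sequence under the uniform weighting λ(p) = 1 used later in this letter, and compares it with the pure L1 and L2 costs. This is only an illustrative sketch: the sample data, grid size, and function names are ours, not part of the letter.

import numpy as np

def cmpn_cost(errors, n_grid=201):
    """Sample estimate of J = int_1^2 lambda(p) * E{|e|**p} dp with lambda(p) = 1."""
    p = np.linspace(1.0, 2.0, n_grid)                        # grid over the norm order p
    abs_e = np.abs(np.asarray(errors, dtype=float))
    moments = np.array([np.mean(abs_e ** q) for q in p])     # E{|e|**p} as a sample mean
    dp = p[1] - p[0]
    return dp * (moments.sum() - 0.5 * (moments[0] + moments[-1]))  # trapezoidal rule

rng = np.random.default_rng(0)
e = rng.standard_normal(1000)
e[::100] += 20.0                                             # a few impulsive outliers
print("CMPN cost:", cmpn_cost(e))
print("L1 cost:", np.mean(np.abs(e)), " L2 cost:", np.mean(e ** 2))

Varying the weighting λ(p) shifts the emphasis of the cost between L1-like and L2-like behavior.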
II. DERIVATION OF THE PROPOSED ALGORITHM

The update equation of the proposed continuous mixed p-norm (CMPN) algorithm, based on (2), is derived from the steepest descent recursion

w(n+1) = w(n) − μ ĝ(n),   (3)

where μ is the step size and ĝ(n) is an instantaneous estimate of the gradient of the error norm J(n) with respect to w(n). Differentiating (2) with respect to w(n) yields the gradient expression (4); depending on how the expectation E{|e(n)|^{p−1}} appearing in it is approximated, two versions of the proposed CMPN algorithm can be derived.
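In our notation, the differentiation step behind (4) can be sketched as follows. Using e(n) = d(n) − w^T(n) x(n), so that ∂e(n)/∂w(n) = −x(n),

∂|e(n)|^p / ∂w(n) = −p |e(n)|^{p−1} sgn(e(n)) x(n),

and therefore, keeping sgn(e(n)) x(n) as the instantaneous part while the magnitude moment stays under the expectation,

ĝ(n) = −( ∫_1^2 λ(p) p E{|e(n)|^{p−1}} dp ) sgn(e(n)) x(n).

This is the expectation term that the two variants below approximate.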
A. Approximation Based on a Single Point Estimate

In the first suggested algorithm, we approximate the expectation E{|e(n)|^{p−1}} by the single point estimate |e(n)|^{p−1}. Substituting this estimate into (4) and performing some simple calculations, we obtain

w(n+1) = w(n) + μ(n) sgn(e(n)) x(n),   (5)

where μ(n) = μ ∫_1^2 λ(p) p |e(n)|^{p−1} dp is a variable step size which depends on |e(n)|. The first suggested algorithm can therefore be viewed as a variable step size sign algorithm, and we accordingly name it the variable step size CMPN (VSS-CMPN) algorithm.


To obtain a closed-form formula for the variable step size, a uniform weighting function λ(p) = 1 is assumed. The variable step size can then be calculated as

μ(n) = μ ∫_1^2 p |e(n)|^{p−1} dp = μ [ (2|e(n)| − 1)/ln|e(n)| − (|e(n)| − 1)/ln^2|e(n)| ].   (6)
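As a concrete illustration, the following minimal Python sketch performs one VSS-CMPN iteration under the uniform weighting assumption. It is not the letter's reference implementation: variable names are ours, and the step-size integral is evaluated numerically on a grid over p rather than with the closed form in (6).

import numpy as np

def vss_cmpn_update(w, x, d, mu, n_grid=201):
    """One VSS-CMPN iteration (sketch): a sign update whose step size is
    mu(n) = mu * int_1^2 p * |e(n)|**(p-1) dp, i.e. uniform lambda(p) = 1."""
    e = d - w @ x                                        # a priori output error
    p = np.linspace(1.0, 2.0, n_grid)                    # grid over the norm order p
    f = p * np.abs(e) ** (p - 1.0)                       # integrand of the step-size integral
    dp = p[1] - p[0]
    mu_n = mu * dp * (f.sum() - 0.5 * (f[0] + f[-1]))    # trapezoidal rule
    return w + mu_n * np.sign(e) * x, e

The update direction is that of a sign algorithm; the error magnitude enters only through the scalar step size μ(n).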

B. Approximation Based on a Block of Error Signal

In the second suggested algorithm, similar to the cost function defined in [11], we approximate E{|e(n)|^{p−1}} by a block of successive samples of the error signal. This approximation is

E{|e(n)|^{p−1}} ≈ Σ_{i=0}^{L−1} β^i |e(n−i)|^{p−1},   (7)

where β is a forgetting factor which weights the current error sample more heavily than the previous samples, and L is the length of the error block. For a uniform weighting function λ(p) = 1 and following the derivation in the Appendix, the update equation of the block-CMPN algorithm is obtained as (8), with the quantities appearing in it given by (9) and (10), where the function used in (10) is defined in (18).
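Since (8)-(10) are not reproduced above, the Python fragment below is only a schematic sketch of the block idea under the same uniform weighting: the moment E{|e(n)|^{p−1}} is replaced by the β-weighted combination of the L most recent error magnitudes before integrating over p. The normalization and the final vector-form update of (8)-(10) follow the Appendix and are not reproduced here; names and structure are ours.

import numpy as np

def block_cmpn_step_size(err_block, mu, beta, n_grid=201):
    """Schematic block-CMPN step size (sketch, not the exact (8)-(10)):
    E{|e(n)|**(p-1)} is approximated by a beta-weighted sum over the last
    L error magnitudes, then integrated over p in [1, 2] with lambda(p) = 1."""
    e = np.abs(np.asarray(err_block, dtype=float))   # e[0] = current error, e[1] = previous, ...
    g = beta ** np.arange(e.size)                    # forgetting factor favours recent errors
    p = np.linspace(1.0, 2.0, n_grid)
    moment = np.array([np.dot(g, e ** (q - 1.0)) for q in p])  # block estimate for each p
    f = p * moment                                   # integrand, as in the VSS-CMPN case
    dp = p[1] - p[0]
    return mu * dp * (f.sum() - 0.5 * (f[0] + f[-1]))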
III. SIMULATION RESULTS

Two versions of the proposed algorithm (VSS-CMPN and Block-CMPN) were compared with the LAD, RMN, APSA, and VSS-NLMP algorithms. For the VSS-CMPN, a uniform weighting function λ(p) = 1 was used.

Experiments were performed in a system identification application in the presence of both impulsive and background noise. Background noise was modeled by an independent white Gaussian noise at a 20 dB signal-to-noise ratio (SNR). In addition, an impulse noise with a Bernoulli-Gaussian (BG) distribution [3] was added to the system output. The BG impulse noise is the product of a Bernoulli process and a Gaussian process, where the Gaussian process is white with zero mean and the Bernoulli process equals one with a fixed impulse probability. The unknown system was assumed to be a 100-tap dispersive impulse response whose coefficients were drawn from a zero-mean Gaussian distribution. To investigate the tracking ability of the algorithms, the unknown system was suddenly changed to a new random dispersive channel partway through each run. The input signal was an AR(2) signal driven by white Gaussian noise.

The simulations were repeated 50 times with a new unknown system and a new input signal in each trial. The performance of the algorithms was measured by a normalized misalignment error, defined as the norm of the coefficient error vector divided by the norm of the unknown system response and expressed in dB, averaged over the 50 independent trials.

Fixed values were used for the projection order (block length) of the APSA, the mixing parameter of the RMN [3], and the block length and forgetting factor of the Block-CMPN. To compare the algorithms fairly, the step sizes were selected so that all algorithms reach the same final misalignment error: 0.003, 0.002, 0.015, 0.015, and 0.0015 for LAD, RMN, APSA, VSS-CMPN, and Block-CMPN, respectively.
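The Python sketch below reproduces the general shape of this setup. The 100-tap system length and the 20 dB background SNR are stated above; the number of iterations, impulse probability and amplitude, AR(2) coefficients, and the exact misalignment normalization are assumed placeholders because the corresponding values are not reproduced in this text.

import numpy as np

rng = np.random.default_rng(0)

M, N = 100, 5000                  # 100-tap unknown system; number of iterations assumed
p_imp, sigma_imp = 0.01, 10.0     # Bernoulli impulse probability and impulse std (assumed)
ar = [0.5, 0.2]                   # AR(2) coefficients of the input model (assumed)
snr_db = 20.0                     # background SNR as stated in the letter

w_o = rng.standard_normal(M)      # random dispersive unknown system (zero-mean Gaussian taps)

# AR(2) input driven by white Gaussian noise
v = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = ar[0] * x[n - 1] + ar[1] * x[n - 2] + v[n]

# Unknown-system output
y = np.zeros(N)
for n in range(M - 1, N):
    y[n] = w_o @ x[n - M + 1:n + 1][::-1]

def bernoulli_gaussian(n):
    """Bernoulli-Gaussian impulse noise: Bernoulli gate times Gaussian amplitude."""
    return (rng.random(n) < p_imp) * rng.normal(0.0, sigma_imp, n)

# Desired signal: system output plus background noise (scaled for 20 dB SNR) plus impulses
sigma_bg = np.sqrt(np.mean(y[M - 1:] ** 2) / 10.0 ** (snr_db / 10.0))
d = y + sigma_bg * rng.standard_normal(N) + bernoulli_gaussian(N)

def misalignment_db(w):
    """Normalized misalignment 20*log10(||w_o - w|| / ||w_o||), in dB (standard form, assumed)."""
    return 20.0 * np.log10(np.linalg.norm(w_o - w) / np.linalg.norm(w_o))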
Fig. 1. Averaged normalized misalignment error of LAD, RMN, APSA, VSS-CMPN, and Block-CMPN in an impulsive noise environment.

The misalignment error curves of the various algorithms are shown in Fig. 1. The figure shows that the proposed VSS-CMPN has the fastest convergence. The Block-CMPN performs worse than the VSS-CMPN because it relies on previous error samples: once the filter is close to convergence, the current error sample is more relevant than the previous ones.
IV. CONCLUSION

A new continuous mixed p-norm adaptive filter algorithm has been introduced. Based on two approximations of the expectation of the p-norm of the error, two versions of the proposed algorithm have been developed. The continuous combination of error norms yields an algorithm that is robust against impulsive noise while converging quickly in comparison with several existing algorithms. Future work is to devise suitable weighting functions λ(p) that further increase the speed of convergence.

APPENDIX
DERIVATION OF THE BLOCK-CMPN UPDATE EQUATION

Replacing approximation (7) in (2), assuming λ(p) = 1, and carrying out the simple integration over p leads to (11). Then, we obtain (12), in which the required quantity is calculated as in (13). Further calculations show (14) and (15). Using these results and assuming that the weight vector is approximately constant over the block of data, we obtain (16). Combining (12), (13), (14), (15), and (16) leads to (17), where the function appearing there is given by (18). Therefore, the update of each weight coefficient follows as (19). Then, by some manipulations, putting (19) in vector form and applying a further simplifying assumption leads to the final update formula in (8).
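Under the uniform weighting λ(p) = 1, the integrations over p reduce to elementary definite integrals of powers of |e|. As a sanity check of our own (not part of the letter), the short script below compares the closed-form expression quoted for (6) above against numerical quadrature of ∫_1^2 p a^{p−1} dp.

import numpy as np

def step_size_integral_numeric(a, n_grid=200001):
    """Numerically evaluate int_1^2 p * a**(p-1) dp for a = |e(n)| > 0."""
    p = np.linspace(1.0, 2.0, n_grid)
    f = p * a ** (p - 1.0)
    dp = p[1] - p[0]
    return dp * (f.sum() - 0.5 * (f[0] + f[-1]))     # trapezoidal rule

def step_size_integral_closed(a):
    """Closed form of the same integral for a != 1: (2a-1)/ln(a) - (a-1)/ln(a)**2."""
    la = np.log(a)
    return (2.0 * a - 1.0) / la - (a - 1.0) / la ** 2

for a in (0.1, 0.5, 2.0, 10.0):
    print(f"a={a}: numeric={step_size_integral_numeric(a):.6f}, closed={step_size_integral_closed(a):.6f}")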

REFERENCES

[1] S. Haykin, Adaptive Filter Theory, 4th ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2002.
[2] J. A. Chambers, O. Tanrikulu, and A. G. Constantinides, "Least mean mixed-norm adaptive filtering," Electron. Lett., vol. 30, pp. 1574-1575, 1994.
[3] J. A. Chambers and A. Avlonitis, "A robust mixed-norm adaptive filter algorithm," IEEE Signal Process. Lett., vol. 4, no. 4, pp. 46-48, 1997.
[4] E. V. Papoulis and T. Stathaki, "A normalized robust mixed-norm adaptive algorithm for system identification," IEEE Signal Process. Lett., vol. 11, no. 1, pp. 56-59, 2004.
[5] T. Shao, Y. R. Zheng, and J. Benesty, "An affine projection sign algorithm robust against impulsive interferences," IEEE Signal Process. Lett., vol. 17, no. 4, pp. 327-330, 2010.
[6] Z. Yang, Y. R. Zheng, and S. L. Grant, "Proportionate affine projection sign algorithms for network echo cancellation," IEEE Trans. Audio, Speech, Lang. Process., vol. 19, no. 8, pp. 2273-2284, 2011.
[7] J. Yoo, J. Shin, H. Choi, and P. Park, "Improved affine projection sign algorithm for sparse system identification," Electron. Lett., vol. 48, no. 15, pp. 927-929, 2012.
[8] F. Albu and H. K. Kwan, "Memory improved proportionate affine projection sign algorithm," Electron. Lett., vol. 48, no. 15, pp. 927-929, 2012.
[9] Y. R. Zheng and V. H. Nascimento, "Two variable step-size adaptive algorithms for non-Gaussian interference environment using fractionally lower-order moment minimization," Digit. Signal Process., vol. 23, no. 3, pp. 831-844, 2013.
[10] J. Kivinen, M. K. Warmuth, and B. Hassibi, "The p-norm generalization of the LMS algorithm for adaptive filtering," IEEE Trans. Signal Process., vol. 54, no. 5, pp. 1782-1793, 2006.
[11] A. Navia-Vazquez and J. Arenas-Garcia, "Combination of recursive least p-norm algorithms for robust adaptive filtering in alpha-stable noise," IEEE Trans. Signal Process., vol. 60, no. 3, pp. 1478-1482, 2012.
