
LEAST-MEAN-SQUARE ADAPTIVE FILTERS

Edited by

S. Haykin and B. Widrow

A JOHN WILEY & SONS, INC. PUBLICATION

This book is printed on acid-free paper. Copyright © 2003 by John Wiley & Sons, Inc. All rights reserved. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, New Jersey 07030, (201) 748-6011, fax (201) 748-6008, E-Mail: PERMREQ@WILEY.COM. For ordering and customer service, call 1-800-CALL-WILEY.

Library of Congress Cataloging-in-Publication Data:

Least-mean-square adaptive filters / edited by S. Haykin and B. Widrow
p. cm.
Includes bibliographical references and index.
ISBN 0-471-21570-8 (cloth)
1. Adaptive filters - Design and construction - Mathematics. 2. Least squares. I. Widrow, Bernard, 1929- II. Haykin, Simon, 1931-
TK7872.F5 L43 2003
621.3815'324 dc21    2003041161

Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1

This book is dedicated to Bernard Widrow for inventing the LMS filter and investigating its theory and applications.

Simon Haykin

CONTENTS

Contributors  ix

Introduction: The LMS Filter (Algorithm)  xi
Simon Haykin

1. On the Efficiency of Adaptive Algorithms
   Bernard Widrow and Max Kamenetsky

2. Traveling-Wave Model of Long LMS Filters  35
   Hans J. Butterweck

3. Energy Conservation and the Learning Ability of LMS Adaptive Filters  79
   Ali H. Sayed and V. H. Nascimento

4. On the Robustness of LMS Filters  105
   Babak Hassibi

5. Dimension Analysis for Least-Mean-Square Algorithms  145
   Iven M. Y. Mareels, John Homer, and Robert R. Bitmead

6. Control of LMS-Type Adaptive Filters  175
   Eberhard Hänsler and Gerhard Uwe Schmidt

7. Affine Projection Algorithms  241
   Steven L. Gay

8. Proportionate Adaptation: New Paradigms in Adaptive Filters  293
   Zhe Chen, Simon Haykin, and Steven L. Gay

9. Steady-State Dynamic Weight Behavior in (N)LMS Adaptive Filters  335
   A. A. (Louis) Beex and James R. Zeidler

10. Error Whitening Wiener Filters: Theory and Algorithms  445
    Jose C. Principe, Yadunandana N. Rao, and Deniz Erdogmus

Index  491

CONTRIBUTORS

A. A. (LOUIS) BEEX, Systems Group - DSP Research Laboratory, The Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061-0111

ROBERT R. BITMEAD, Department of Mechanical and Aerospace Engineering, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0411

HANS BUTTERWECK, Technische Universiteit Eindhoven, Faculteit Elektrotechniek, EH 5.29, Postbus 513, 5600 MB Eindhoven, Netherlands

ZHE CHEN, Department of Electrical and Computer Engineering, CRL 102, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1

DENIZ ERDOGMUS, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL 32611

STEVEN L. GAY, Acoustics and Speech Research Department, Bell Labs, Room 2D-531, 600 Mountain Ave., Murray Hill, NJ 07974

PROF. DR.-ING. EBERHARD HÄNSLER, Institute of Communication Technology, Darmstadt University of Technology, Merckstrasse 25, D-64283 Darmstadt, Germany

BABAK HASSIBI, Department of Electrical Engineering, 1200 East California Blvd., M/C 136-93, California Institute of Technology, Pasadena, CA 91101

SIMON HAYKIN, Department of Electrical and Computer Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1

JOHN HOMER, School of Computer Science and Electrical Engineering, The University of Queensland, Brisbane 4072

MAX KAMENETSKY, Stanford University, David Packard Electrical Engineering, 350 Serra Mall, Room 263, Stanford, CA 94305-9510

IVEN M. Y. MAREELS, Department of Electrical and Electronic Engineering, The University of Melbourne, Melbourne Vic 3010

V. H. NASCIMENTO, Department of Electronic Systems Engineering, University of São Paulo, Brazil

JOSE C. PRINCIPE, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL 32611

YADUNANDANA N. RAO, Computational NeuroEngineering Laboratory, EB 451, Building 33, University of Florida, Gainesville, FL 32611

ALI H. SAYED, Department of Electrical Engineering, Room 44-123A Engineering IV Bldg., University of California, Los Angeles, CA 90095-1594

GERHARD UWE SCHMIDT, Institute of Communication Technology, Darmstadt University of Technology, Merckstrasse 25, D-64283 Darmstadt, Germany

BERNARD WIDROW, Stanford University, David Packard Electrical Engineering, 350 Serra Mall, Room 273, Stanford, CA 94305-9510

JAMES R. ZEIDLER, Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92092

INTRODUCTION: THE LMS FILTER (ALGORITHM)


SIMON HAYKIN

The earliest work on adaptive filters may be traced back to the late 1950s, during which time a number of researchers were working independently on theories and applications of such filters. From this early work, the least-mean-square (LMS) algorithm emerged as a simple, yet effective, algorithm for the design of adaptive transversal (tapped-delay-line) filters. The LMS algorithm was devised by Widrow and Hoff in 1959 in their study of a pattern-recognition machine known as the adaptive linear element, commonly referred to as the Adaline [1, 2]. The LMS algorithm is a stochastic gradient algorithm in that it iterates each tap weight of the transversal filter in the direction of the instantaneous gradient of the squared error signal with respect to the tap weight in question.

Let $\hat{\mathbf{w}}(n)$ denote the tap-weight vector of the LMS filter, computed at iteration (time step) $n$. The adaptive operation of the filter is completely described by the recursive equation (assuming complex data)

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \mu\,\mathbf{u}(n)\big[d(n) - \hat{\mathbf{w}}^{H}(n)\,\mathbf{u}(n)\big]^{*}, \qquad (1)$$

where $\mathbf{u}(n)$ is the tap-input vector, $d(n)$ is the desired response, and $\mu$ is the step-size parameter. The quantity enclosed in square brackets is the error signal. The asterisk denotes complex conjugation, and the superscript $H$ denotes Hermitian transposition (i.e., ordinary transposition combined with complex conjugation).

Equation (1) is testimony to the simplicity of the LMS filter. This simplicity, coupled with the desirable properties of the LMS filter (discussed in the chapters of this book) and its practical applications [3, 4], has made the LMS filter and its variants an important part of the adaptive signal processing kit of tools, not just for the past 40 years but for many years to come. Simply put, the LMS filter has withstood the test of time.
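
As a minimal illustration (not part of the original text), Eq. (1) can be written as a few lines of NumPy. The function name, argument shapes, and the convention of returning both the updated weights and the error are choices made here for the sketch.

```python
import numpy as np

def lms_update(w_hat, u, d, mu):
    """One LMS iteration per Eq. (1), assuming complex data.

    w_hat : current tap-weight vector estimate, shape (M,)
    u     : tap-input vector u(n), shape (M,)
    d     : desired response d(n), scalar
    mu    : step-size parameter
    """
    e = d - np.vdot(w_hat, u)             # error signal d(n) - w^H(n) u(n); vdot conjugates w_hat
    w_next = w_hat + mu * u * np.conj(e)  # w(n+1) = w(n) + mu * u(n) * e*(n)
    return w_next, e
```

Applying this function repeatedly to a stream of (u(n), d(n)) pairs, starting from an all-zero weight vector, implements the adaptive operation described by Eq. (1).
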
Although the LMS filter is very simple in computational terms, its mathematical analysis is profoundly complicated because of its stochastic and nonlinear nature. Indeed, despite the extensive effort that has been expended in the literature to analyze the LMS filter, we still do not have a direct mathematical theory for its stability and steady-state performance, and probably we never will. Nevertheless, we do have a good understanding of its behavior in a stationary as well as a nonstationary environment, as demonstrated in the chapters of this book.

The stochastic nature of the LMS filter manifests itself in the fact that, in a stationary environment and under the assumption of a small step-size parameter, the filter executes a form of Brownian motion. Specifically, the small step-size theory of the LMS filter is almost exactly described by the discrete-time version of the Langevin equation [3] (the Langevin equation being the engineer's version of a stochastic differential, or difference, equation):

$$\Delta\nu_k(n) = \nu_k(n+1) - \nu_k(n) = -\mu\lambda_k\,\nu_k(n) + \phi_k(n), \qquad k = 1, 2, \ldots, M, \qquad (2)$$

which is naturally split into two parts: a damping force $-\mu\lambda_k\nu_k(n)$ and a stochastic force $\phi_k(n)$. The terms used herein are defined as follows:

$\nu_k(n)$ = $k$th natural mode of the filter, that is, the $k$th component of the transformed weight-error vector $\mathbf{Q}^{H}[\hat{\mathbf{w}}(n) - \mathbf{w}_o]$, where $\mathbf{w}_o$ is the optimum (Wiener) tap-weight vector;

$M$ = order (i.e., number of taps) of the transversal filter around which the LMS filter is built;

$\lambda_k$ = $k$th eigenvalue of the correlation matrix of the input vector $\mathbf{u}(n)$, which is denoted by $\mathbf{R}$;

$\phi_k(n)$ = $k$th component of the vector $\mu\,\mathbf{Q}^{H}\mathbf{u}(n)\,e_o^{*}(n)$;

$\mathbf{Q}$ = unitary matrix whose $M$ columns constitute an orthogonal set of eigenvectors associated with the eigenvalues of the correlation matrix $\mathbf{R}$;

$e_o(n)$ = optimum error signal produced by the corresponding Wiener filter driven by the input vector $\mathbf{u}(n)$ and the desired response $d(n)$.
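
As an illustration (not part of the original text), Eq. (2) can be rearranged as the single-mode recursion $\nu_k(n+1) = (1 - \mu\lambda_k)\nu_k(n) + \phi_k(n)$, which the following NumPy sketch simulates. The eigenvalue, the noise level of the stochastic force, and the initial condition are arbitrary values assumed for illustration.

```python
import numpy as np

# Simulate one natural mode of Eq. (2):
#   nu_k(n+1) = (1 - mu*lambda_k) * nu_k(n) + phi_k(n)
# phi_k(n) is modeled here as zero-mean white Gaussian noise (an assumption).
rng = np.random.default_rng(0)
mu, lam_k = 0.0075, 1.0    # step size; lambda_k chosen for illustration
sigma_phi = 0.01           # assumed standard deviation of the stochastic force
n_steps = 5000

nu = np.empty(n_steps)
nu[0] = 1.0                # assumed initial deviation of this mode
for n in range(n_steps - 1):
    phi = sigma_phi * rng.standard_normal()       # stochastic force
    nu[n + 1] = (1.0 - mu * lam_k) * nu[n] + phi  # damping plus random drive

# The mode decays geometrically and then fluctuates around zero,
# exhibiting the Brownian-motion-like behavior described above.
```
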
To illustrate the validity of Eq. (2) as the description of the small step-size theory of the LMS filter, we present the results of a computer experiment on a classic example of adaptive equalization. The example involves an unknown linear channel whose impulse response is described by the raised cosine [3]

$$h(n) = \begin{cases} \dfrac{1}{2}\left[1 + \cos\!\left(\dfrac{2\pi}{W}(n-2)\right)\right], & n = 1, 2, 3, \\[2mm] 0, & \text{otherwise}, \end{cases} \qquad (3)$$

where the parameter $W$ controls the amount of amplitude distortion produced by the channel, with the distortion increasing with $W$. Equivalently, the parameter $W$ controls the eigenvalue spread (i.e., the ratio of the largest eigenvalue to the smallest eigenvalue) of the correlation matrix of the tap inputs of the equalizer, with the eigenvalue spread increasing with $W$. The equalizer has $M = 11$ taps. Figure 1 presents the learning curves of the equalizer trained using the LMS algorithm with the step-size parameter $\mu = 0.0075$ and varying $W$. Each learning curve was obtained by averaging the squared value of the error signal $e(n)$ versus the number of iterations $n$ over an ensemble of 100 independent trials of the experiment.


Figure 1  Learning curves of the LMS algorithm applied to the adaptive equalization of a communication channel whose impulse response is described by Eq. (3), for varying eigenvalue spreads. Theory is represented by continuous, well-defined curves; experimental results are represented by fluctuating curves.

The continuous curves shown in Figure 1 are theoretical, obtained by applying Eq. (2). The curves with relatively small fluctuations are the results of experimental work. Figure 1 demonstrates close agreement between theory and experiment. It should, however, be reemphasized that application of Eq. (2) is limited to small values of the step-size parameter $\mu$. Chapters in this book deal with cases in which $\mu$ is large.
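
For readers who wish to reproduce a learning curve of the kind shown in Figure 1, the following is a minimal sketch of the experiment (not taken from the text). The training signal, the additive-noise level, the decision delay, and the particular value of $W$ are assumptions made for illustration; the channel of Eq. (3), the equalizer length $M = 11$, the step size $\mu = 0.0075$, and the ensemble of 100 trials follow the description above.

```python
import numpy as np

rng = np.random.default_rng(1)
M, mu, n_iter, n_trials, W = 11, 0.0075, 500, 100, 3.1   # W value assumed

# Raised-cosine channel impulse response, Eq. (3), for n = 1, 2, 3.
h = np.array([0.5 * (1 + np.cos(2 * np.pi / W * (n - 2))) for n in (1, 2, 3)])

mse = np.zeros(n_iter)
for _ in range(n_trials):
    a = rng.choice([-1.0, 1.0], size=n_iter + M)          # training symbols (assumed BPSK)
    x = np.convolve(a, h)[: n_iter + M]                   # channel output
    x += np.sqrt(0.001) * rng.standard_normal(x.shape)    # additive noise (assumed variance)
    w = np.zeros(M)                                       # equalizer tap weights
    for n in range(M, n_iter + M):
        u = x[n - M + 1 : n + 1][::-1]                    # tap-input vector u(n)
        d = a[n - 2]                                      # desired response (assumed delay)
        e = d - w @ u                                     # error signal
        w += mu * u * e                                   # LMS update (real-valued data)
        mse[n - M] += e ** 2

mse /= n_trials    # ensemble-averaged learning curve, comparable to Figure 1
```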

REFERENCES
1. B. Widrow and M. E. Hoff, Jr. (1960). "Adaptive Switching Circuits," IRE WESCON Conv. Rec., Part 4, pp. 96-104.
2. B. Widrow (1966). "Adaptive Filters I: Fundamentals," Rep. SEL-66-126 (TR-6764-6), Stanford Electronics Laboratories, Stanford, CA.
3. S. Haykin (2002). Adaptive Filter Theory, 4th Edition, Prentice-Hall.
4. B. Widrow and S. D. Stearns (1985). Adaptive Signal Processing, Prentice-Hall.
