
CIRCUITS, DEVICES AND SYSTEMS SERIES 19

Test and Diagnosis of


Analogue, Mixed-signal and
RF Integrated Circuits

Other volumes in this series:

Volume 2  Analogue IC design: the current-mode approach C. Toumazou, F.J. Lidgey and D.G. Haigh (Editors)
Volume 3  Analogue-digital ASICs: circuit techniques, design tools and applications R.S. Soin, F. Maloberti and J. France (Editors)
Volume 4  Algorithmic and knowledge-based CAD for VLSI G.E. Taylor and G. Russell (Editors)
Volume 5  Switched currents: an analogue technique for digital technology C. Toumazou, J.B.C. Hughes and N.C. Battersby (Editors)
Volume 6  High-frequency circuit engineering F. Nibler et al.
Volume 8  Low-power high-frequency microelectronics: a unified approach G. Machado (Editor)
Volume 9  VLSI testing: digital and mixed analogue/digital techniques S.L. Hurst
Volume 10 Distributed feedback semiconductor lasers J.E. Carroll, J.E.A. Whiteaway and R.G.S. Plumb
Volume 11 Selected topics in advanced solid state and fibre optic sensors S.M. Vaezi-Nejad (Editor)
Volume 12 Strained silicon heterostructures: materials and devices C.K. Maiti, N.B. Chakrabarti and S.K. Ray
Volume 13 RFIC and MMIC design and technology I.D. Robertson and S. Lucyzyn (Editors)
Volume 14 Design of high frequency integrated analogue filters Y. Sun (Editor)
Volume 15 Foundations of digital signal processing: theory, algorithms and hardware design P. Gaydecki
Volume 16 Wireless communications circuits and systems Y. Sun (Editor)
Volume 17 The switching function: analysis of power electronic circuits C. Marouchos
Volume 18 System on chip: next generation electronics B. Al-Hashimi (Editor)
Volume 19 Test and diagnosis of analogue, mixed-signal and RF integrated circuits: the system on chip approach Y. Sun (Editor)
Volume 20 Low power and low voltage circuit design with the FGMOS transistor E. Rodriguez-Villegas
Volume 21 Technology computer aided design for Si, SiGe and GaAs integrated circuits C.K. Maiti and G.A. Armstrong

Test and Diagnosis of


Analogue, Mixed-signal and
RF Integrated Circuits
The system on chip approach

Edited by Yichuang Sun

The Institution of Engineering and Technology

Published by The Institution of Engineering and Technology, London, United Kingdom


© 2008 The Institution of Engineering and Technology
First published 2008
This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research
or private study, or criticism or review, as permitted under the Copyright, Designs and
Patents Act, 1988, this publication may be reproduced, stored or transmitted, in any
form or by any means, only with the prior permission in writing of the publishers, or in
the case of reprographic reproduction in accordance with the terms of licences issued
by the Copyright Licensing Agency. Inquiries concerning reproduction outside those
terms should be sent to the publishers at the undermentioned address:
The Institution of Engineering and Technology
Michael Faraday House
Six Hills Way, Stevenage
Herts, SG1 2AY, United Kingdom
www.theiet.org
While the authors and the publishers believe that the information and guidance given in
this work are correct, all parties must rely upon their own skill and judgement when
making use of them. Neither the authors nor the publishers assume any liability to
anyone for any loss or damage caused by any error or omission in the work, whether
such error or omission is the result of negligence or any other cause. Any and all such
liability is disclaimed.
The moral rights of the authors to be identified as authors of this work have been
asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

British Library Cataloguing in Publication Data


Test and diagnosis of analogue, mixed-signal and RF
integrated circuits : the system on chip approach.
(Circuits, devices & systems ; v. 19)
1. Linear integrated circuits - Testing 2. Mixed signal
circuits - Testing 3. Radio frequency integrated circuits - Testing
I. Sun, Yichuang II. Institution of Engineering and Technology
621.38150287
ISBN 978-0-86341-745-0

Typeset in India by Newgen Imaging Systems (P) Ltd, Chennai


Printed in the UK by Athenaeum Press Ltd, Gateshead, Tyne & Wear

Dedication

To my wife Xiaohui, son Bo and daughter Lucy

Contents

Preface
List of contributors

1 Fault diagnosis of linear and non-linear analogue circuits
  Yichuang Sun
  1.1 Introduction
  1.2 Multiple-fault diagnosis of linear circuits
      1.2.1 Fault incremental circuit
      1.2.2 Branch-fault diagnosis
      1.2.3 Testability analysis and design for testability
      1.2.4 Bilinear function and multiple excitation method
      1.2.5 Node-fault diagnosis
      1.2.6 Parameter identification after k-node fault location
      1.2.7 Cutset-fault diagnosis
      1.2.8 Tolerance effects and treatment
  1.3 Class-fault diagnosis of analogue circuits
      1.3.1 Class-fault diagnosis and general algebraic method for classification
      1.3.2 Class-fault diagnosis and topological technique for classification
      1.3.3 t-class-fault diagnosis and topological method for classification
  1.4 Fault diagnosis of non-linear circuits
      1.4.1 Fault modelling and fault incremental circuits
      1.4.2 Fault location and identification
      1.4.3 Alternative fault incremental circuits and fault diagnosis
  1.5 Recent advances in fault diagnosis of analogue circuits
      1.5.1 Test node selection and test signal generation
      1.5.2 Symbolic approach for fault diagnosis of analogue circuits
      1.5.3 Neural-network- and wavelet-based methods for analogue fault diagnosis
      1.5.4 Hierarchical approach for large-scale circuit fault diagnosis
  1.6 Summary
  1.7 References

2 Symbolic function approaches for analogue fault diagnosis
  Stefano Manetti and Maria Cristina Piccirilli
  2.1 Introduction
  2.2 Symbolic analysis
      2.2.1 Symbolic analysis techniques
      2.2.2 The SAPWIN program
  2.3 Testability and ambiguity groups
      2.3.1 Algorithms for testability evaluation
      2.3.2 Ambiguity groups
      2.3.3 Singular-value decomposition approach
      2.3.4 Testability analysis of non-linear circuits
  2.4 Fault diagnosis of linear analogue circuits
      2.4.1 Techniques based on bilinear decomposition of fault equations
      2.4.2 Newton-Raphson-based approach
      2.4.3 Selection of the test frequencies
  2.5 Fault diagnosis of non-linear circuits
      2.5.1 PWL models
      2.5.2 Transient analysis models for reactive components
      2.5.3 The Katznelson-type algorithm
      2.5.4 Circuit fault diagnosis application
      2.5.5 The SAPDEC program
  2.6 Conclusions
  2.7 References

3 Neural-network-based approaches for analogue circuit fault diagnosis
  Yichuang Sun and Yigang He
  3.1 Introduction
  3.2 Fault diagnosis of analogue circuits with tolerances using artificial neural networks
      3.2.1 Artificial neural networks
      3.2.2 Fault diagnosis of analogue circuits
      3.2.3 Fault diagnosis using ANNs
      3.2.4 Neural-network approach for fault diagnosis of large-scale analogue circuits
      3.2.5 Illustrative examples
  3.3 Wavelet-based neural-network technique for fault diagnosis of analogue circuits with noise
      3.3.1 Wavelet decomposition
      3.3.2 Wavelet feature extraction of noisy signals
      3.3.3 WNNs
      3.3.4 WNN algorithm for fault diagnosis
      3.3.5 Example circuits and results
  3.4 Neural-network-based L1-norm optimization approach for fault diagnosis of non-linear circuits
      3.4.1 L1-norm optimization approach for fault location of non-linear circuits
      3.4.2 NNs applied to L1-norm fault diagnosis of non-linear circuits
      3.4.3 Illustrative example
  3.5 Summary
  3.6 References

4 Hierarchical/decomposition techniques for large-scale analogue diagnosis
  Peter Shepherd
  4.1 Introduction
      4.1.1 Diagnosis definitions
  4.2 Background to analogue fault diagnosis
      4.2.1 Simulation before test
      4.2.2 Simulation after test
  4.3 Hierarchical techniques
      4.3.1 Simulation after test
      4.3.2 Simulation before test
      4.3.3 Mixed SBT/SAT approaches
  4.4 Conclusions
  4.5 References

5 DFT and BIST techniques for analogue and mixed-signal test
  Mona Safi-Harb and Gordon Roberts
  5.1 Introduction
  5.2 Background
  5.3 Signal generation
      5.3.1 Direct digital frequency synthesis
      5.3.2 Oscillator-based approaches
      5.3.3 Memory-based signal generation
      5.3.4 Multi-tones
      5.3.5 Area overhead
  5.4 Signal capture
  5.5 Timing measurements and jitter analysers
      5.5.1 Single counter
      5.5.2 Analogue-based interpolation techniques: time-to-voltage converter
      5.5.3 Digital phase-interpolation techniques: delay line
      5.5.4 Vernier delay line
      5.5.5 Component-invariant VDL for jitter measurement
      5.5.6 Analogue-based jitter measurement device
      5.5.7 Time amplification
      5.5.8 PLL and DLL injection methods for PLL tests
  5.6 Calibration techniques for TMU and TDC
  5.7 Complete on-chip test core: proposed architecture in Reference 11 and its versatile applications
      5.7.1 Attractive and flexible architecture
      5.7.2 Oscilloscope/curve tracing
      5.7.3 Coherent sampling
      5.7.4 Time domain reflectometry/transmission
      5.7.5 Crosstalk
      5.7.6 Supply/substrate noise
      5.7.7 RF testing amplifier resonance
      5.7.8 Limitations of the proposed architecture in Reference 11
  5.8 Recent trends
  5.9 Conclusions
  5.10 References

6 Design-for-testability of analogue filters
  Yichuang Sun and Masood-ul Hasan
  6.1 Introduction
  6.2 DfT by bypassing
      6.2.1 Bypassing by bandwidth broadening
      6.2.2 Bypassing using duplicated/switched opamp
  6.3 DfT by multiplexing
      6.3.1 Tow-Thomas biquad filter
      6.3.2 The Kerwin-Huelsman-Newcomb biquad filter
      6.3.3 Second-order OTA-C filter
  6.4 OBT of analogue filters
      6.4.1 Test transformations of active-RC filters
      6.4.2 OBT of OTA-C filters
      6.4.3 OBT of SC biquadratic filter
  6.5 Testing of high-order analogue filters
      6.5.1 Testing of high-order filters using bypassing
      6.5.2 Testing of high-order cascade filters using multiplexing
      6.5.3 Test of MLF OTA-C filters using multiplexing
      6.5.4 OBT structures for high-order OTA-C filters
  6.6 Summary
  6.7 References

7 Test of A/D converters: From converter characteristics to built-in self-test proposals
  Andreas Lechner and Andrew Richardson
  7.1 Introduction
  7.2 A/D conversion
      7.2.1 Static A/D converter performance parameters
      7.2.2 Dynamic A/D converter performance parameters
  7.3 A/D converter test approaches
      7.3.1 Set-up for A/D converter test
      7.3.2 Capturing the test response
      7.3.3 Static performance parameter test
      7.3.4 Dynamic performance parameter test
  7.4 A/D converter built-in self-test
  7.5 Summary and conclusions
  7.6 References

8 Test of ΣΔ converters
  Gildas Leger and Adoración Rueda
  8.1 Introduction
  8.2 An overview of ΣΔ modulation: opening the ADC black box
      8.2.1 Principle of operation: ΣΔ modulation and noise shaping
      8.2.2 Digital filtering and decimation
      8.2.3 ΣΔ modulator architecture
  8.3 Characterization of ΣΔ converters
      8.3.1 Consequences of ΣΔ modulation for ADC characterization
      8.3.2 Static performance
      8.3.3 Dynamic performance
      8.3.4 Applying a FFT with success
  8.4 Test of ΣΔ converters
      8.4.1 Limitations of the functional approach
      8.4.2 The built-in self-test approach
  8.5 Model-based testing
      8.5.1 Model-based test concepts
      8.5.2 Polynomial model-based BIST
      8.5.3 Behavioural model-based BIST
  8.6 Conclusions
  8.7 References

9 Phase-locked loop test methodologies: Current characterization and production test practices
  Martin John Burbidge and Andrew Richardson
  9.1 Introduction: Phase-locked loop operation and test motivations
      9.1.1 PLL key elements operation and test issues
      9.1.2 Typical CP-PLL test specifications
  9.2 Traditional test techniques
      9.2.1 Characterization focused tests
      9.2.2 Production test focused
  9.3 BIST techniques
  9.4 Summary and conclusions
  9.5 References

10 On-chip testing techniques for RF wireless transceiver systems and components
  Alberto Valdes-Garcia, Jose Silva-Martinez and Edgar Sanchez-Sinencio
  10.1 Introduction
  10.2 Frequency-response test system for analogue baseband circuits
      10.2.1 Principle of operation
      10.2.2 Testing methodology
      10.2.3 Implementation as a complete on-chip test system with a digital interface
      10.2.4 Experimental evaluation of the FRCS
  10.3 CMOS amplitude detector for on-chip testing of RF circuits
      10.3.1 Gain and 1-dB compression point measurement with amplitude detectors
      10.3.2 CMOS RF amplitude detector design
      10.3.3 Experimental results
  10.4 Architecture for on-chip testing of wireless transceivers
      10.4.1 Switched loop-back architecture
      10.4.2 Overall testing strategy
      10.4.3 Simulation results
  10.5 Summary and outlook
  10.6 References

11 Tuning and calibration of analogue, mixed-signal and RF circuits
  James Moritz and Yichuang Sun
  11.1 Introduction
  11.2 On-chip filter tuning
      11.2.1 Tuning system requirements for on-chip filters
      11.2.2 Frequency tuning and Q tuning
      11.2.3 Online and offline tuning
      11.2.4 Master-slave tuning
      11.2.5 Frequency tuning methods
      11.2.6 Q tuning techniques
      11.2.7 Tuning of high-order leapfrog filters
  11.3 Self-calibration techniques for PLL frequency synthesizers
      11.3.1 Need for calibration in PLL synthesizers
      11.3.2 PLL synthesizer with calibrated VCO
      11.3.3 Automatic PLL calibration
      11.3.4 Other PLL synthesizer calibration applications
  11.4 On-chip antenna impedance matching
      11.4.1 Requirement for on-chip antenna impedance matching
      11.4.2 Matching network
      11.4.3 Impedance sensors
      11.4.4 Tuning algorithms
  11.5 Conclusions
  11.6 References

Index

Preface

System on chip (SoC) integrated circuits (ICs) for communications, multimedia and
computer applications are receiving considerable international attention. One example of a SoC is a single-chip transceiver. Modern microelectronic design processes
adopt a mixed-signal approach since a SoC is a mixed-signal system that includes
both analogue and digital circuits. Several IC technologies are currently available; however, the low-cost and readily available CMOS technology is the mainstream choice in IC production for applications such as computer hard disk drive systems, sensors and sensing systems for health care, video, image and display systems, cable modems for wired communications, radio frequency (RF) transceivers for wireless communications and high-speed transceivers for optical communications.
Currently, microelectronic circuits and systems are mainly based on submicron and
deep-submicron CMOS technologies, although nano-CMOS technology has already
been used in computer, communication and multimedia chip design. While the limits of CMOS are still being pushed, preparation for the post-CMOS era is well under way, with many other potential alternatives being actively pursued.
There is an increasing interest in the testing of SoC devices as automatic testing
becomes crucially important to drive down the overall cost of SoC devices due to the
imperfect nature of the manufacturing process and its associated tolerances. Traditional external test has become more and more irrelevant for SoC devices, because
these devices have a very limited number of test nodes. Design for testability (DfT)
and built-in self-test (BIST) approaches have thus been the choice for many applications. The concept of on chip test systems including test generation, measurement and
processing has also been proposed for complex integrated systems. Test and fault diagnosis of analogue and mixed-signal circuits, however, is much more difficult than that
of digital circuits due to tolerances, parasitics and non-linearities, and thus it remains
a bottleneck for automatic SoC test. Recently, the closely related tuning, calibration
and correction issues of analogue, mixed-signal and RF circuits have been intensively
studied. However, the papers on testing, diagnosis and tuning have been published
in a diverse range of journals and conferences, and thus they have been treated quite
separately by the associated communities. For example, work on tuning has been
mainly published in journals and conferences concerned with circuit design and has
not therefore come to the attention of the testing community. Similarly, analogue fault

xvi

Test and diagnosis of analogue, mixed-signal and RF integrated circuits

diagnosis was mainly investigated by circuit theorists in the past, although it has now
become a serious topic in the testing community.
The scope of this book is to consider the whole range of automatic testing, diagnosis and tuning of analogue, mixed-signal and RF ICs and systems. It aims to provide
a comprehensive treatment of testing, diagnosis and tuning in a coherent way and
to report systematically the most recent developments in all these areas in a single
source for the first time. The book attempts to provide a balanced view of the three
important topics, however, stress has been put on the testing side. Motivated by recent
SoC test concepts, the diagnosis, testing and tuning issues of analogue, mixed-signal
and RF circuits are addressed, in particular, from the SoC perspective, which forms
another unique feature of this book.
The book contains 11 chapters written by leading international researchers in
the subject areas. It covers three theme topics: diagnosis, testing and tuning. The
first four chapters are concerned with fault diagnosis of analogue circuits. Chapter
1 systematically presents various circuit-theory-based diagnosis methodologies for
both linear and non-linear circuits including some material not previously available
in the public domain. This chapter also serves as an overview of fault diagnosis.
The following three chapters cover the three most popular diagnosis approaches:
the symbolic function, neural network and hierarchical decomposition techniques,
respectively. Then testing of analogue, mixed-signal and RF ICs is discussed extensively in Chapters 5-10. Chapter 5 gives a general review of all aspects of testing with
emphasis on DfT and BIST. Chapters 6-10 focus in depth on recent advances in testing analogue filters, data converters, sigma-delta modulators, phase-locked loops, RF
transceivers and components, respectively. Finally, Chapter 11 discusses auto-tuning
and calibration of analogue, mixed-signal and RF circuits including continuous-time
filters, voltage-controlled oscillators and phase-locked loop synthesizers, impedance
matching networks and antenna tuning units.
The book can be used as a text or reference for a broad range of readers from
both academia and industry. It is especially useful for those who wish to gain a
viewpoint from which to understand the relationship of diagnosis, testing and tuning.
An indispensable reference companion to researchers and engineers in electronic and
electrical engineering, the book is also intended to be a text for graduate and senior
undergraduate students, as may be appropriate.
I would like to thank staff members in the Publishing Department of the IET
for their support and assistance, especially the former Commissioning Editors Sarah
Kramer and Nick Canty and the current Commissioning Editor, Lisa Reading. I am
very grateful to the chapter authors for their considerable efforts in contributing these
high-quality chapters; their professionalism is highly appreciated. I must also thank
my wife Xiaohui, son Bo and daughter Lucy for their understanding and support;
without them behind me this book would not have been possible.
As a final note, it has been my long dream to write or edit something in the topic
area of this book. The first research paper published in my academic career was about
fault diagnosis in analogue circuits. This was over 20 years ago when I studied for
the MSc degree. The real motivation for doing this book, however, came along with
the proposal for a special issue on analogue and mixed-signal test for SoCs for IEE

Preface xvii
Proceedings: Circuits, Devices and Systems (published in 2004). It has since been
a long journey for the book to come into being as you see it now; however, the book has been significantly improved over time during the editorial process.
I sincerely hope that the efforts from the editor and authors pay off as a truly useful
and long-lasting companion in your successful career.
Yichuang Sun

Contributors

Martin John Burbidge


Department of Engineering
Lancaster University
Lancaster, UK
Masood-ul Hasan
School of Electronic, Communication
and Electrical Engineering
University of Hertfordshire
Hatfield, Herts, UK
Yigang He
College of Electrical and Information
Engineering
Hunan University
Changsha, Hunan, China
Andreas Lechner
Centre for Microsystems Engineering
Department of Engineering
Lancaster University
Lancaster, UK
Gildas Leger
Instituto de Microelectronica
de Seville (IMSE-CNM)
Universidad de Seville
Seville, Spain

Stefano Manetti
Department of Electronics and
Telecommunications
University of Florence
Firenze, Italy
James Moritz
School of Electronic, Communication
and Electrical Engineering
University of Hertfordshire
Hatfield, Herts, UK
Maria Cristina Piccirilli
Department of Electronics and
Telecommunications
University of Florence
Firenze, Italy
Andrew Richardson
Centre for Microsystems Engineering
Department of Engineering
Lancaster University
Lancaster, UK
Gordon Roberts
Department of Electrical Engineering
McGill University
Montreal, Quebec, Canada

xx

Test and diagnosis of analogue, mixed-signal and RF integrated circuits

Adoración Rueda
Instituto de Microelectronica
de Sevilla
Centro Nacional de
Microelectronica
Edificio CICA, Sevilla
Spain

Mona Safi-Harb
Department of Electrical
Engineering
McGill University
Montreal, Quebec, Canada

Edgar Sanchez-Sinencio
Analog and Mixed-signal Center
Department of Electrical and
Computer Engineering
Texas A&M University
College Station, Texas, USA

Peter Shepherd
Department of Electronic &
Electrical Engineering
University of Bath
Claverton Down, Bath, UK
Jose Silva-Martinez
Analog and Mixed-signal Center
Department of Electrical and
Computer Engineering
Texas A&M University
College Station, Texas, USA
Yichuang Sun
School of Electronic, Communication
and Electrical Engineering
University of Hertfordshire
Hatfield, Herts, UK
Alberto Valdes-Garcia
Communication IC Design
IBM T. J. Watson Research Center
New York, USA

Chapter 1

Fault diagnosis of linear and non-linear analogue circuits

Yichuang Sun

1.1 Introduction

Fault diagnosis of analogue circuits is becoming increasingly important owing to the rapidly increasing complexity of integrated circuits (ICs) and systems [1-62].
Recent interest in mixed-signal systems on a chip provides further motivation for
analogue fault diagnosis automation. Fault diagnosis of analogue circuits started with
an investigation of the solvability of network component values in 1960 [13] and has
been an active research area ever since. Methods for analogue fault diagnosis can
be broadly divided into simulation-before-test (SBT) and simulation-after-test (SAT)
techniques depending on whether simulation is mainly conducted before test or after
test. The most representative SBT technique is the fault dictionary approach, while
SAT techniques include the parameter identification and fault verification approaches.
The types of fault most widely considered from the viewpoints of fault diagnosis
and location are soft faults and hard faults. The former are caused by deviations in
component values from their nominal ones, whereas the latter refer to catastrophic
changes such as open circuits and short circuits.
The fault dictionary method is concerned with the construction of a fault dictionary by simulating the effects of a set of typical faults and recording the pattern of
the observable outputs [11, 12]. To construct a fault dictionary, all potential faults are
listed and the stimuli are selected. The circuit under test (CUT) is then simulated for
the fault-free case and all faulty cases. The signatures of the responses are stored and
organized in the dictionary. Ambiguity sets are put together as a single entry. After
test, the measured signatures are compared with those stored in the dictionary to find the best match and hence decide the faults. The fault dictionary method has the lowest after-test
computation levels mainly resulting from the comparison of measured test data with
those already stored in the dictionary to decide the faults. Because of this, the fault dictionary method is used practically in the fault diagnosis of analogue and mixed-signal circuits, especially for single hard-fault diagnosis. The drawback of the method
is the large number of SBT computations that are needed for the construction of a
fault dictionary, especially for multiple-fault and soft-fault diagnosis of a large circuit. Methods for effective fault simulation and large-change sensitivity computation
are thus needed [6-10]. Tolerance effects need to be considered as simulations are
conducted at nominal values for fault-free components.
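In outline, the dictionary flow can be sketched as below (a toy Python sketch; the simulate callback, the signature format and the nearest-signature matching rule are illustrative assumptions, and a real dictionary must also group ambiguity sets and account for tolerances as noted above).

```python
# Toy sketch of the SBT fault dictionary flow: simulate signatures for the
# nominal circuit and each listed fault, then match a measured signature.
import numpy as np

def build_dictionary(simulate, fault_list):
    """simulate(fault) -> signature vector; fault=None means the nominal CUT."""
    return {name: simulate(fault) for name, fault in fault_list.items()}

def diagnose(dictionary, measured):
    """Return the entry whose stored signature is closest to the measurement."""
    return min(dictionary,
               key=lambda name: np.linalg.norm(dictionary[name] - measured))
```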
The parameter identification approach calculates all actual component values from
a set of linear or non-linear equations after test and compares them with their nominal
values to decide which components are faulty [13-17]. The method is useful for
circuit design modification and tuning. There is no restriction on the number of
faults and tolerance is not a problem in this method because the method targets all
actual component values. However, the method normally assumes that all circuit
nodes are accessible and thus it is not practical for modern IC diagnosis [15, 16]. In
addition, some parameter identification requires solving non-linear equations [13, 14],
which is computationally demanding especially for large-scale circuits. The parameter
identification method has thus become more of a topic of theoretical interest in circuit
diagnosis, in contrast to circuit analysis and circuit design. The only exception is
perhaps the optimization-based identification technique [17] that can have limited
tests for approximate, but optimized, component value calculation. The optimization-based method will be discussed in the context of the neural network approach in
Chapter 3.
The fault verification method [18-39] is concerned with fault location of analogue
circuits with a small number of test nodes and a limited number of faults by using linear diagnosis equations. Indeed, modern highly integrated systems have very limited
external accessibility and normally only a few components become faulty simultaneously. Under the assumption that the number of faults is fewer than the number of
accessible nodes, the fault locations of a circuit can be determined by simply checking
the consistency of a set of linear equations. Thus, the SAT computation burden of
the method is small. The fault verification method is suitable for all types of fault,
and component values can also be determined after fault location. Tolerance effects
are, however, of concern in this method because fault-free components are assumed
to take their nominal values. The fault verification method has attracted considerable
attention, with the k-fault diagnosis approach [1839] being widely investigated.
This chapter systematically introduces k-fault diagnosis theory and methods for
both linear and non-linear circuits as well as the derivative class-fault diagnosis
approach. We also give a general overview of recent research in fault diagnosis of
analogue circuits. Throughout the chapter, a unified discussion is adopted based on
the fault incremental circuit concept. In Section 1.2, we introduce the fault incremental circuit of linear circuits and discuss various k-fault diagnosis methods including
branch-, node- and cutset-fault diagnoses and various practical issues such as component value determination and testability analysis and design. A class-fault diagnosis
theory without structural restrictions for fault location is introduced in Section 1.3,
which comprises both algebraic and topological classification methods. In Section 1.4,
the fault incremental circuit of non-linear circuits is constructed and a series of linear methods and special considerations of non-linear circuit fault diagnosis are discussed.
We also introduce some of the latest advances in fault diagnosis of analogue circuits
in Section 1.5. A summary of the chapter is given in Section 1.6.

1.2 Multiple-fault diagnosis of linear circuits

The k-fault diagnosis methods [18-39] have been widely investigated because of
various advantages such as the need for only a limited number of test nodes and use
of linear fault diagnosis equations. It is also practical to assume a limited number of
simultaneous faults. The k-fault diagnosis theory is very systematic and is based on
circuit analysis and circuit design methods.

1.2.1 Fault incremental circuit

Consider a linear circuit, which contains b branches and n nodes. Assume that the circuit does not contain controlled sources and multi-terminal devices. The branch equation in the nominal state can be written as

$I_b = Y_b V_b$   (1.1)

where $I_b$ is the branch current vector, $V_b$ is the branch voltage vector and $Y_b$ is the branch admittance matrix.
When the circuit is faulty, component values will have deviations $\Delta Y_b$, which will cause changes in the branch currents and voltages of $\Delta I_b$ and $\Delta V_b$, respectively. The branch equation of the faulty circuit can then be written as

$I_b + \Delta I_b = (Y_b + \Delta Y_b)(V_b + \Delta V_b)$   (1.2)

Subtracting Equation (1.1) from Equation (1.2), we have

$\Delta I_b = Y_b \Delta V_b + \Delta Y_b (V_b + \Delta V_b)$   (1.3)

Equation (1.3) can also be written as

$\Delta I_b = Y_b \Delta V_b + X_b$   (1.4)

where

$X_b = \Delta Y_b (V_b + \Delta V_b)$   (1.5)

Note that $X_b$ can be used to judge whether a branch or component is faulty or not by verifying whether the corresponding element of $X_b$ is non-zero.
Equation (1.4) can be considered to be the branch equation of a circuit with the branch current vector being $\Delta I_b$, the branch voltage vector being $\Delta V_b$ and the branch admittance matrix being $Y_b$. $X_b$ can be viewed as excitation sources due to faults, the so-called fault compensation sources. We call this circuit a fault incremental circuit. Assuming that the nominal circuit and faulty circuit have the same normal or test input signals, the subtraction of the inputs of the two circuits will be equal to zero, that is, an open circuit for a current source and a short circuit for a voltage source in the fault incremental circuit. Also, note that the fault incremental circuit has the same topology as the nominal circuit. By applying Kirchhoff's current law (KCL) and Kirchhoff's voltage law (KVL) to the fault incremental circuit, we can derive numerous equations useful for fault diagnosis of analogue circuits.
For linear controlled sources and multi-terminal devices or subcircuits, we can also derive the corresponding branch equations in the fault incremental circuit [32-35]. For example, for a VCCS with $i_1 = g_m v_2$, it can be shown that $\Delta i_1 = g_m \Delta v_2 + x_1$ in the fault incremental circuit, where $x_1 = \Delta g_m (v_2 + \Delta v_2)$. This remains a VCCS with an incremental current in the controlled branch, an incremental voltage in the controlling branch and a fault compensation current source in the controlled branch. For a three-terminal linear device, y-parameters can be used to describe its terminal characteristics, with one terminal being taken to be common:

$i_1 = y_{11} v_1 + y_{12} v_2$
$i_2 = y_{21} v_1 + y_{22} v_2$

We can derive the corresponding equations in the fault incremental circuit as

$\Delta i_1 = y_{11} \Delta v_1 + y_{12} \Delta v_2 + x_1$
$\Delta i_2 = y_{21} \Delta v_1 + y_{22} \Delta v_2 + x_2$

where

$x_1 = \Delta y_{11} (v_1 + \Delta v_1) + \Delta y_{12} (v_2 + \Delta v_2)$
$x_2 = \Delta y_{21} (v_1 + \Delta v_1) + \Delta y_{22} (v_2 + \Delta v_2)$

Although the device has four y-parameters, only two fault compensation current sources are used, one for each branch in a T-type equivalent circuit in the fault incremental circuit. If either $x_1$ or $x_2$ is not equal to zero, the three-terminal device is faulty. Only if both $x_1$ and $x_2$ are zero is it fault free. Similarly, we can also develop a star model for multi-terminal linear devices or subcircuits [35]. A Π or delta model is not preferred owing to the existence of loops (this will become clear later), the use of more branches and the possible introduction of additional internal nodes.
A fault incremental circuit becomes a differential incremental circuit if $X_b = \Delta Y_b (V_b + \Delta V_b)$ is replaced by $X_b = \Delta Y_b V_b$. The differential incremental circuit is useful for differential sensitivity and tolerance effects analysis, whereas the fault incremental circuit can be used for large-change sensitivity analysis, fault simulation and fault diagnosis.
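As a concrete illustration, the following minimal Python sketch (the three-branch divider topology, element values and the 50 per cent fault are illustrative assumptions, not an example from the text) simulates a nominal and a faulty resistive circuit and evaluates the fault compensation sources of Equation (1.5); only the entry for the faulty branch is non-zero.

```python
# A minimal sketch of the fault incremental circuit quantities of Section
# 1.2.1; the circuit (two nodes, three branches) and its values are assumed.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],   # reduced node incident matrix (ground row omitted)
              [0.0, -1.0, 1.0]])
y_nom = np.array([1e-3, 2e-3, 1e-3])     # nominal branch admittances (S)
y_flt = y_nom.copy()
y_flt[1] *= 0.5                           # assume branch 2 drops to half its value
J = np.array([1e-3, 0.0])                 # 1 mA test current into node 1

def branch_voltages(y):
    """Solve the nodal equations and return the branch voltage vector V_b."""
    Yn = A @ np.diag(y) @ A.T             # node admittance matrix
    Vn = np.linalg.solve(Yn, J)
    return A.T @ Vn

Vb = branch_voltages(y_nom)
dVb = branch_voltages(y_flt) - Vb         # branch voltage increments
Xb = (y_flt - y_nom) * (Vb + dVb)         # Eq. (1.5): fault compensation sources

print(Xb)                                 # only the branch-2 entry is non-zero
```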

1.2.2 Branch-fault diagnosis

Branch-fault diagnosis was first published by Biernacki and Bandler [18] and generalised to non-linear circuits by Sun and Lin [32-34] and Sun [35]. The k-branch-fault diagnosis method [18-23, 32-35] assumes that there are k branch faults in the circuit and requires that the number of accessible nodes, m, is larger than k. As discussed above, the change of a component's value with respect to its nominal can be represented by a current source in parallel with the component; if the fault compensation source current is non-zero, the component is faulty. A branch is said to be faulty if its component is faulty.


Consider a linear circuit with b branches and n nodes (excluding the ground node), of which m are accessible and l inaccessible. Assume that the nominal circuit and faulty circuit have the same current input; then the input current increment at an accessible node in the fault incremental circuit is zero. On applying KCL to the fault incremental circuit, that is, $A \Delta I_b = 0$ (where A is the node incident matrix), noting that $\Delta V_b = A^T \Delta V_n$ (where $\Delta V_n$ is the nodal voltage increment vector) and substituting this in Equation (1.4), we can derive

$Z_{nb} X_b = \Delta V_n$   (1.6)

where $Z_{nb} = (A Y_b A^T)^{-1} A$ and $X_b = \Delta Y_b (V_b + \Delta V_b)$ as given in Equation (1.5). Dividing $Z_{nb} = [Z_{mb}^T, Z_{lb}^T]^T$ and $\Delta V_n = [\Delta V_m^T, \Delta V_l^T]^T$ according to external (accessible) and internal (inaccessible) nodes, the branch-fault diagnosis equation can be derived as

$Z_{mb} X_b = \Delta V_m$   (1.7)

and the formula for calculating the internal node voltages is given by

$\Delta V_l = Z_{lb} X_b$   (1.8)

For ease of understanding and derivation, we assume no tolerance initially and that only k branches are faulty, with k < m. For the k faulty branches corresponding to the k-column matrix $Z_{mk}$ in $Z_{mb}$, because only the elements $X_k$ corresponding to the k faulty branches in $X_b$ are non-zero, Equation (1.7) becomes

$Z_{mk} X_k = \Delta V_m$   (1.9)

Suppose that rank$[Z_{mk}] = k$. If rank$[Z_{mk}, \Delta V_m] = k$, Equation (1.9) is consistent. We can solve the equation to obtain the following solution:

$X_k = (Z_{mk}^T Z_{mk})^{-1} Z_{mk}^T \Delta V_m$   (1.10)

The non-zero elements of $X_k$ in Equation (1.10) indicate the faulty branches. By checking the consistency of the equations of different k-branch combinations, we can determine the k faulty branches. Because we do not know which k components are faulty, we have to consider all possible combinations of k out of b branches in the CUT. If there is more than one k-branch combination whose corresponding equation is consistent, the k faulty branches cannot be uniquely determined, as they are not distinguishable from the other consistent k-branch combinations.
More generally, the k-fault diagnosis problem is to find the solutions of $X_b$ from the underdetermined equation (1.7) that contain only k non-zero elements. This further becomes a problem of checking the consistency of a series of overdetermined equations similar to Equation (1.9) corresponding to all k-branch combinations. A detailed discussion of the problems and methods can be found in References 18 and 26.
After location of the k faulty branches, we can calculate $\Delta V_l$ using Equation (1.8), then $\Delta V_b = A^T \Delta V_n$, and further we can calculate $\Delta Y_b$ from Equation (1.5).
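A rough sketch of this consistency-checking search is given below (Python/NumPy; the function name, the residual tolerance and the use of a least-squares residual as the consistency test are our own choices, with $Z_{mb}$ and the measured $\Delta V_m$ assumed given). A single surviving combination locates the faults; its $X_k$ from Equation (1.10) then yields the deviations via Equation (1.5).

```python
# Sketch of k-branch-fault location by consistency checking (Section 1.2.2).
import numpy as np
from itertools import combinations

def locate_k_faults(Zmb, dVm, k, tol=1e-9):
    """Return every k-branch set for which Z_mk X_k = dV_m is consistent,
    together with the least-squares solution X_k of Eq. (1.10)."""
    m, b = Zmb.shape
    assert k < m, "need more accessible nodes than faults"
    hits = []
    for cols in combinations(range(b), k):
        Zmk = Zmb[:, cols]
        if np.linalg.matrix_rank(Zmk) < k:        # rank[Z_mk] = k is required
            continue
        Xk, *_ = np.linalg.lstsq(Zmk, dVm, rcond=None)
        if np.linalg.norm(Zmk @ Xk - dVm) < tol:  # rank[Z_mk, dV_m] = k too
            hits.append((cols, Xk))               # a consistent combination
    return hits   # a unique entry means the k faulty branches are located
```

If more than one combination survives, the fault is not uniquely locatable; the testability conditions of the next section then apply.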

1.2.3 Testability analysis and design for testability

A general mathematical algebraic theory and algorithms for solving k-fault diagnosis
equations that are suitable for all k-fault diagnosis methods such as branch-, node- and cutset-fault diagnosis have been thoroughly and rigorously studied in Reference
26. Several interesting and useful theorems and algorithms have been proposed. In
this section, we focus on topological aspects of k-fault diagnosis. This is because
topological testability conditions are more straightforward and useful than algebraic
conditions. Checking topological conditions is much simpler than verifying algebraic
conditions, as the former can be done by inspection only, while the latter requires
numerical computation. Topological conditions can also be used to guide design for
better testability through selection of test nodes, test input signals and topological
structures of the CUT. Sun [22] and Sun and He [23] have investigated testability
analysis and design, on the basis of k-branch-fault diagnosis and k-component value
identification methods. In this section, we discuss topological aspects of k-fault diagnosis, including topological conditions, testability analysis and design for testability,
mainly based on the results obtained in References 22, 23 and 32-34.
Definition 1.1 A circuit is said to be k-branch-fault testable if any k faulty branches
can be determined uniquely from test input, accessible node voltages and nominal
component values.
As we have discussed in Section 1.2.2, the equation of the k faulty branches is
consistent. If in the CUT there is more than one k-branch combination whose corresponding equation is also consistent, then we will be unable to determine the faulty
branches through consistency verification and thus the circuit is not testable. Therefore, it is important to investigate testability conditions. The following conditions can
be demonstrated.
Theorem 1.1 The necessary and almost sufficient condition for k-branch faults to be testable is that for all (k + 1)-branch combinations, the corresponding equation coefficient matrices are of full rank, that is, rank$[Z_{m(k+1)}] = k + 1$.
So there are two algebraic requirements that are important: rank$[Z_{mk}] = k$ and rank$[Z_{m(k+1)}] = k + 1$. The first one is for the equation to be solvable, which is
always assumed to be true and the second is for a unique solution. In the following,
we will give the topological equivalents of both.
Definition 1.2 A cutset is said to be dependent if all accessible nodes and the
reference node are in one of the two parts into which the cutset divides the circuit.
A simple dependent cutset is one in which there is only one inaccessible node in
one part.


Theorem 1.2 The necessary and almost sufficient condition for rank$[Z_{mk}] = k$ for all k-branch combinations is that the CUT does not have any loops or dependent cutsets that contain k branches.

Theorem 1.3 The necessary and almost sufficient condition for k-branch faults to be testable (rank$[Z_{m(k+1)}] = k + 1$ for all (k + 1)-branch combinations) is that the CUT does not have any loops or dependent cutsets that contain (k + 1) branches.
When k = 1, the necessary and almost sufficient condition for a single branch
fault to be testable becomes that the circuit does not have any two branches in parallel
or forming a dependent cutset.
A loop is called the minimum loop if it contains the fewest number of branches among all loops. A dependent cutset is called the minimum dependent cutset if it contains the fewest number of branches among all dependent cutsets. Denote $l_{min}$ and $c_{min}$ as the numbers of branches in the minimum loop and the minimum dependent cutset, respectively. Then we have the following theorems.

Theorem 1.4 The necessary and almost sufficient condition for k-branch faults to be testable is $k < l_{min} - 1$ if $l_{min} \le c_{min}$, or $k < c_{min} - 1$ if $c_{min} \le l_{min}$.
It is necessary to find the loops and dependent cutsets in order to determine $l_{min}$ and $c_{min}$. Seeking loops is relatively easy and can be conducted in the CUT, N. However, dependent cutsets are a little more difficult to look for, especially in large circuits. The following theorem provides a simple method for this purpose, that is, to find dependent cutsets equivalently in N0 instead of N.

Theorem 1.5 Let N0 be the circuit obtained by connecting all accessible nodes to the reference node in the original circuit N. Then all cutsets in N0 are dependent and N0 contains all dependent cutsets in N.
Note that k-branch-fault testability is dependent on both loops and dependent cutsets. Increasing the number of branches in the minimum loop and minimum dependent
cutset may allow more simultaneous branch faults to be testable. This is useful when
k is not known.
It is also noted that a non-dependent cutset does not pose any restriction on testability. Whether or not a cutset is dependent will depend on the number and position
of accessible nodes. Therefore, proper selection of accessible nodes can change the
dependency of a cutset and thus the testability of k-branch faults. The greater the
number of nodes accessible the smaller will be the number of dependent cutsets. If
all circuit nodes are accessible, there will be no dependent cutset. Testability will
then be completely decided by the condition on loops, that is, $k < l_{min} - 1$. Choosing different accessible nodes may change a dependent cutset to a non-dependent
cutset, thus improving testability. However, selection of accessible nodes will not
change the conditions on loops. Therefore, if testability is decided by loop conditions
only, for example, when $l_{min} \le c_{min}$, changing accessible nodes will not improve the testability. However, since the dependency of cutsets is related to the number and position of accessible nodes, when testability is decided by conditions on cutsets only,
we will want to select accessible nodes to eliminate dependent cutsets or increase
the number of branches in the minimum dependent cutset to improve the testability.
To increase the number of branches in the minimum dependent cutset, it is always
useful to choose those nodes containing a smaller number of branches as accessible
nodes, because all branches connected to an inaccessible node constitute a dependent
cutset. Generally, choosing as an accessible node some node in the part, not containing the reference node, into which a dependent cutset divides the circuit can make the minimum dependent cutset non-dependent. Finally, the number of accessible nodes must be
larger than the number of faulty branches, as is always assumed. If possible, having
more accessible nodes is always useful as in many cases we do not know exactly how
many faults may happen and the number of dependent cutsets may also be reduced.
In summary, we need to meet $m \ge k + 1$, $l_{min} > k + 1$ and $c_{min} > k + 1$.
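For very small circuits, these bounds can be checked by brute force. The sketch below (the five-branch example graph, node names and accessibility assignment are assumptions for illustration; the connectivity of the cut-off node set S, which a minimal cutset requires, is not verified, so it is adequate only for tiny examples) finds $l_{min}$ and $c_{min}$ and evaluates the Theorem 1.4 bound.

```python
# Brute-force sketch of the topological testability bounds (Theorems 1.4/1.5);
# the example circuit and its accessible/internal node split are assumed.
from itertools import combinations

branches = [('1', '0'), ('1', '2'), ('2', '0'), ('2', '3'), ('3', '0'), ('1', '3')]
internal = {'3'}                   # '0' is the reference; '1' and '2' accessible

def l_min(branches):
    """Fewest branches in any loop (parallel branches would form a 2-loop)."""
    if len(branches) != len(set(branches)):
        return 2
    for r in range(3, len(branches) + 1):          # smallest loop found first
        for combo in combinations(branches, r):
            deg = {}
            for u, v in combo:
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
            if all(d == 2 for d in deg.values()):  # a single closed loop
                return r
    return float('inf')

def c_min(branches, internal):
    """Fewest branches in any dependent cutset via Theorem 1.5: in N0 every
    cutset isolates a set S of internal nodes from the grounded rest."""
    best = float('inf')
    for r in range(1, len(internal) + 1):
        for S in map(set, combinations(internal, r)):
            cut = sum((u in S) != (v in S) for u, v in branches)
            best = min(best, cut)
    return best

lm, cm = l_min(branches), c_min(branches, internal)
print(lm, cm, 'k-branch faults testable for k <', min(lm, cm) - 1)
```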
For a more graph-theory-based discussion of testability, readers may refer to
Reference 23 where detailed testability analysis and design for testability procedures
are given and other equivalent testability conditions are proposed.
We can enhance testability of a circuit by using multiple excitations. For example,
by checking the invariance of the k component values under two different excitations,
we can identify the k faulty components. The CUT can now have (k +1)-branch loops
and dependent cutsets, as in these cases we can still uniquely determine the faulty
branches. Using multiple excitations, all circuits will be single-fault diagnosable.
This will be detailed on the basis of a bilinear method in the next section.

1.2.4 Bilinear function and multiple excitation method

The k-branch combination test can be repeated for different excitations. To generate
two excitations, the same input signal can be applied to two different accessible
nodes or two input signals with different amplitudes can be applied to the same
accessible node. The real fault indicator vectors obtained from different excitations
should be in agreement. Below, a two-excitation method for k-branch-fault location
and component value identification is given.
On the basis of the k-branch-fault diagnosis method in Section 1.2.2, assuming that rank$[Z_{mk}] = k$, we can derive the following bilinear relation mapping the measured node voltage space to the component admittance space as [21]:

$col(\Delta Y_k) = diag[A_k^T (V_n + T_{nm} \Delta V_m)]^{-1} Z_{mk}^L \Delta V_m$   (1.11)

where $A_k$ is the node incident matrix corresponding to the k components,

$Z_{mk}^L = (Z_{mk}^T Z_{mk})^{-1} Z_{mk}^T$ and $T_{nm} = [U_{mm}, (Z_{lk} Z_{mk}^L)^T]^T$

with $U_{mm}$ the m x m identity matrix.

After location of the k faulty branches, we can use the bilinear relation in Equation
(1.11) to determine the k faulty component values. More usefully, a multiple excitation
method can be developed based on checking the invariance of the corresponding k-component values under different excitations for a unique identification of the faulty
branches and components.


If we use two independent current excitations with the same frequency, calculate $col(\Delta Y_k)$ for all k-component combinations under each excitation, as $col(\Delta Y_k)_1$ and $col(\Delta Y_k)_2$, respectively, and denote $r_k = col(\Delta Y_k)_1 - col(\Delta Y_k)_2$, we can determine the k faulty branches by checking whether $r_k$ is equal to zero. This method can realize the simultaneous determination of the faulty branches and faulty component values as calculated under any excitation. The multiple excitation method can enhance diagnosability. Equivalent faulty k-branch combinations can be eliminated, as for these k-branch combinations $r_k$ is not equal to zero (because the component values computed for k-branch combinations other than the real faulty one will change with different excitations). Now, the only condition for the unique identification of k faulty components is that rank$[Z_{mk}] = k$, or that the k branches do not form loops or dependent cutsets. Thus, during the checking of different combinations of k components, once a k-component combination is found to have $r_k = 0$, we can stop further checking; this k-component combination is the faulty one. For a.c. circuits, multiple test frequencies may also be used; however, component values R, L, C rather than their admittances should be used, since admittances are frequency dependent [21]. A similar bilinear relation and two-excitation method for non-linear circuits [39] will be discussed in Section 1.4.2.
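The sketch below implements the bilinear relation (1.11) for one excitation under these definitions (Python/NumPy; the function name and the input conventions, namely a reduced node incident matrix, nominal branch admittance matrix, nominal node voltages and measured accessible-node increments, are our own). Evaluating it under two excitations and forming $r_k$ implements the invariance check.

```python
# Sketch of Eq. (1.11) of Section 1.2.4 for one excitation; inputs assumed.
import numpy as np

def col_dYk(A, Yb, Vn, dVm, acc, k_set):
    """col(dY_k) for candidate branch set k_set from measured increments dVm.
    A: reduced node incident matrix; Yb: nominal branch admittance matrix;
    Vn: nominal node voltages; acc: list of accessible node indices."""
    n = A.shape[0]
    intl = [i for i in range(n) if i not in acc]
    Znb = np.linalg.inv(A @ Yb @ A.T) @ A          # Z_nb of Eq. (1.6)
    Zmk = Znb[np.ix_(acc, k_set)]
    Zlk = Znb[np.ix_(intl, k_set)]
    ZLmk = np.linalg.pinv(Zmk)                     # (Z_mk^T Z_mk)^{-1} Z_mk^T
    Tnm = np.zeros((n, len(acc)))
    Tnm[acc, :] = np.eye(len(acc))                 # U_mm block
    Tnm[intl, :] = Zlk @ ZLmk                      # internal-node reconstruction
    Xk = ZLmk @ dVm                                # compensation sources X_k
    Ak = A[:, k_set]                               # incident columns of the k set
    return Xk / (Ak.T @ (Vn + Tnm @ dVm))          # Eq. (1.11), elementwise

# Two-excitation check: r_k = col_dYk(..., dVm1, ...) - col_dYk(..., dVm2, ...)
# vanishes (within tolerance) only for the truly faulty k-branch combination.
```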

1.2.5 Node-fault diagnosis

Node-fault diagnosis was first proposed by Huang et al. [24] and generalised to non-linear circuits by Sun [35, 36]. A node is said to be faulty if at least one of the branches
connected to it is faulty. Instead of locating faulty branches in a circuit directly, we
locate the faulty nodes. It is assumed that the number of faulty nodes is smaller than
the number of accessible nodes.
Similar to the derivation of the branch-fault diagnosis equation, applying KCL to the fault incremental circuit, we have

$Z_{nn} X_n = \Delta V_n$   (1.12)

where

$Z_{nn} = Y_n^{-1} = (A Y_b A^T)^{-1}$   (1.13)

$X_n = \Delta Y_n (V_n + \Delta V_n) = A \Delta Y_b A^T (V_n + \Delta V_n)$

Assume that the circuit has m external nodes and l internal nodes. Dividing $Z_{nn} = [Z_{mn}^T, Z_{ln}^T]^T$ and $\Delta V_n = [\Delta V_m^T, \Delta V_l^T]^T$ accordingly, the node-fault diagnosis equation can be derived from Equation (1.12) as [24, 35, 36]:

$Z_{mn} X_n = \Delta V_m$   (1.14)

and the formula for calculating the internal node voltages is given by

$\Delta V_l = Z_{ln} X_n$   (1.15)

Assume that there are only k faulty nodes and k < m. Then checking the consistency
of all possible combinations of the k nodes we can locate the faulty nodes from
Equation (1.14). Node-fault diagnosis may require less computation, owing to n < b, and is less restrictive in topological structures than branch-fault diagnosis. However, further location of faulty branches and determination of faulty component values
require additional computation and thus additional topological restrictions.
After faulty node location, we can easily determine that all branches connected to
fault-free nodes are not faulty. All possible faulty branches are connected to the faulty
nodes and the ground. However, not every such branch may be faulty. After faulty
node location and considering that the internal node voltages can be calculated using
Equation (1.15), the faulty branches and faulty component values may be determined
by Equation (1.16) [36]:
$X_n = A X_b = A \Delta Y_b A^T (V_n + \Delta V_n)$   (1.16)

However, we should note that if the number of possible faulty branches is larger
than the number of faulty nodes, k, then it may not be possible to locate the faulty
branches using this equation. This is the case when the possible faulty branches
form loops. Otherwise, the node incident matrix corresponding to the possible faulty
branches connected to the faulty nodes is left invertible. There are topological restrictions on this method; the requirement of a full column rank requires that the possible
faulty branches do not form loops. A graph method has been proposed for locating
faulty branches in a faulty circuit with the fault-free nodes and associated branches
taken away [36]. A multiple excitation method will be given in the next section which
will overcome the above limitations.
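Node-fault location can reuse the consistency check sketched for the branch case; only the coefficient matrix changes. A minimal sketch under the same assumed inputs:

```python
# Sketch: coefficient matrix Z_mn of the node-fault equation (1.14).
import numpy as np

def node_fault_matrix(A, Yb, acc):
    """Accessible-node rows of Z_nn = (A Y_b A^T)^{-1} (Eqs (1.12)-(1.14));
    faulty nodes can then be located with locate_k_faults(Zmn, dVm, k)
    from the sketch in Section 1.2.2."""
    Znn = np.linalg.inv(A @ Yb @ A.T)    # Eq. (1.13)
    return Znn[acc, :]                   # Z_mn
```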

1.2.6 Parameter identification after k-node fault location

This section addresses how to determine the faulty component values after k-node
fault location. For branch-fault diagnosis, after fault location we can easily determine
the values of the faulty components, $\Delta Y_k$, from $X_k = \Delta Y_k (V_k + \Delta V_k)$, as $\Delta V_k$ can be obtained with $X_k$ being a known excitation. The bilinear relation method can also be used, as discussed in Section 1.2.4. Furthermore, the general bilinear relations of analogue circuits given in References 7, 8 and 20-22 may also be used to determine the faulty component values.
After fault location using node-fault diagnosis, determination of faulty component
values is not so simple. Below we present a method for this, which has not been
published in the literature. Assume that there are only k faulty nodes and that the ith faulty node contains $m_i$ branches, $i = 1, 2, \ldots, k$. Without loss of generality, for the ith faulty node we derive the component value determination equations. The jth branch connected to node i in the fault incremental circuit in Section 1.2.1 can be described as

$\Delta i_j = y_j \Delta v_j + \Delta y_j (v_j + \Delta v_j)$

Applying KCL to node i, we have:

$(v_1 + \Delta v_1)\Delta y_1 + (v_2 + \Delta v_2)\Delta y_2 + \cdots + (v_{m_i} + \Delta v_{m_i})\Delta y_{m_i} = -(y_1 \Delta v_1 + y_2 \Delta v_2 + \cdots + y_{m_i} \Delta v_{m_i})$
Note that after faulty node location, all internal node voltages can be calculated.
Thus, all branch voltages can be computed. So, applying $m_i$ excitations with the same frequency, we can obtain the following identification equations:

$(v_1 + \Delta v_1)^{(1)}\Delta y_1 + (v_2 + \Delta v_2)^{(1)}\Delta y_2 + \cdots + (v_{m_i} + \Delta v_{m_i})^{(1)}\Delta y_{m_i} = -(y_1 \Delta v_1^{(1)} + y_2 \Delta v_2^{(1)} + \cdots + y_{m_i} \Delta v_{m_i}^{(1)})$
$(v_1 + \Delta v_1)^{(2)}\Delta y_1 + (v_2 + \Delta v_2)^{(2)}\Delta y_2 + \cdots + (v_{m_i} + \Delta v_{m_i})^{(2)}\Delta y_{m_i} = -(y_1 \Delta v_1^{(2)} + y_2 \Delta v_2^{(2)} + \cdots + y_{m_i} \Delta v_{m_i}^{(2)})$
$\vdots$
$(v_1 + \Delta v_1)^{(m_i)}\Delta y_1 + (v_2 + \Delta v_2)^{(m_i)}\Delta y_2 + \cdots + (v_{m_i} + \Delta v_{m_i})^{(m_i)}\Delta y_{m_i} = -(y_1 \Delta v_1^{(m_i)} + y_2 \Delta v_2^{(m_i)} + \cdots + y_{m_i} \Delta v_{m_i}^{(m_i)})$

To solve the equations for $\Delta y_1, \Delta y_2, \ldots, \Delta y_{m_i}$, the coefficient voltage matrix must be of full rank. This requires the application of $m_i$ independent excitations with the same frequency. If we include the frequency in the coefficients and try to determine $\Delta R$, $\Delta C$, $\Delta L$ directly, then multiple test frequencies may also be used to obtain $m_i$ independent equations.
If we do the same for all k faulty nodes, we can determine all faulty component values, as every faulty component must be connected to one of the faulty nodes. However, this method may require too many equations and excitations, $\sum m_i$ and $\max\{m_i\}$, respectively. According to node-fault diagnosis, all branches connected to the non-faulty nodes are fault free and they are known after faulty node location. Only those components in the branches between faulty nodes or between faulty nodes and ground need to be determined. Thus, the method can be simplified.
Assume that only the first $h_i$ branches of node i are not connected to fault-free nodes, $i = 1, 2, \ldots, k$. Then only $h_i$ independent excitations are required. Because $\Delta y_j = 0$, $j = h_i + 1, h_i + 2, \ldots, m_i$, we have:

$(v_1 + \Delta v_1)^{(1)}\Delta y_1 + (v_2 + \Delta v_2)^{(1)}\Delta y_2 + \cdots + (v_{h_i} + \Delta v_{h_i})^{(1)}\Delta y_{h_i} = -(y_1 \Delta v_1^{(1)} + y_2 \Delta v_2^{(1)} + \cdots + y_{m_i} \Delta v_{m_i}^{(1)})$
$(v_1 + \Delta v_1)^{(2)}\Delta y_1 + (v_2 + \Delta v_2)^{(2)}\Delta y_2 + \cdots + (v_{h_i} + \Delta v_{h_i})^{(2)}\Delta y_{h_i} = -(y_1 \Delta v_1^{(2)} + y_2 \Delta v_2^{(2)} + \cdots + y_{m_i} \Delta v_{m_i}^{(2)})$
$\vdots$
$(v_1 + \Delta v_1)^{(h_i)}\Delta y_1 + (v_2 + \Delta v_2)^{(h_i)}\Delta y_2 + \cdots + (v_{h_i} + \Delta v_{h_i})^{(h_i)}\Delta y_{h_i} = -(y_1 \Delta v_1^{(h_i)} + y_2 \Delta v_2^{(h_i)} + \cdots + y_{m_i} \Delta v_{m_i}^{(h_i)})$

In the simplest case of $h_i = 1$, we will have:

$\Delta y_1 = -(y_1 \Delta v_1 + y_2 \Delta v_2 + \cdots + y_{m_i} \Delta v_{m_i})/(v_1 + \Delta v_1)$

If we do the same simplification for all k faulty nodes, we can determine all faulty component values, but with the total number of equations reduced by $\sum (m_i - h_i)$ and also a lower number of excitations. This is, however, still not the minimum, although it now requires $\sum h_i$ equations and $\max\{h_i\}$ excitations. Further simplification is still
possible. This is because for a branch connected to two faulty nodes, the corresponding
$\Delta y_j$ will appear in the equations of the two faulty nodes and is thus unnecessarily solved twice. Once a $\Delta y_j$ is obtained from one faulty node, it can be used as a known value for the other faulty node, which then has one equation fewer to solve. Theoretically, the minimum number of equations needed is the
number of branches between faulty nodes or the faulty nodes and ground.
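A sketch of the identification step at a single faulty node follows (Python/NumPy; the array layout, one row per excitation and one column per branch at the node, and the function name are our own, and all voltage quantities are assumed known after fault location via Equation (1.15)).

```python
# Sketch of the per-node identification system of Section 1.2.6.
import numpy as np

def identify_at_node(v, dv, y_nom, h, dy_known):
    """Solve for the first h admittance deviations at one faulty node.
    v, dv: (n_exc, m_i) nominal branch voltages and their increments;
    y_nom: (m_i,) nominal admittances; dy_known: (m_i,) deviations already
    solved at previously handled nodes (zero where unknown or fault free)."""
    lhs = (v + dv)[:h, :h]                         # h x h coefficient matrix
    rhs = -(dv[:h] @ y_nom) - (v + dv)[:h, h:] @ dy_known[h:]
    return np.linalg.solve(lhs, rhs)               # deviations dy_1 .. dy_h
```

Independence of the h excitations is what guarantees a non-singular coefficient matrix here.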
A faulty node that has a grounded branch is said to be independent because it
contains a branch that is not owned by other faulty nodes. Otherwise, it is said to be
dependent. A dependent node may not need to be dealt with as its branch component
values can be obtained by solving other faulty node equations. In practice we can use
the following steps to make sure that we solve the minimum number of equations.
Supposing that the first $h_i'$ branches are not connected to the faulty nodes that have already been dealt with, only $h_i'$ independent excitations are needed for node i. The new equations can be written as

$(v_1 + \Delta v_1)^{(1)}\Delta y_1 + \cdots + (v_{h_i'} + \Delta v_{h_i'})^{(1)}\Delta y_{h_i'} = -(y_1 \Delta v_1^{(1)} + \cdots + y_{m_i} \Delta v_{m_i}^{(1)}) - [(v_{h_i'+1} + \Delta v_{h_i'+1})^{(1)}\Delta y_{h_i'+1} + \cdots + (v_{h_i} + \Delta v_{h_i})^{(1)}\Delta y_{h_i}]$
$(v_1 + \Delta v_1)^{(2)}\Delta y_1 + \cdots + (v_{h_i'} + \Delta v_{h_i'})^{(2)}\Delta y_{h_i'} = -(y_1 \Delta v_1^{(2)} + \cdots + y_{m_i} \Delta v_{m_i}^{(2)}) - [(v_{h_i'+1} + \Delta v_{h_i'+1})^{(2)}\Delta y_{h_i'+1} + \cdots + (v_{h_i} + \Delta v_{h_i})^{(2)}\Delta y_{h_i}]$
$\vdots$
$(v_1 + \Delta v_1)^{(h_i')}\Delta y_1 + \cdots + (v_{h_i'} + \Delta v_{h_i'})^{(h_i')}\Delta y_{h_i'} = -(y_1 \Delta v_1^{(h_i')} + \cdots + y_{m_i} \Delta v_{m_i}^{(h_i')}) - [(v_{h_i'+1} + \Delta v_{h_i'+1})^{(h_i')}\Delta y_{h_i'+1} + \cdots + (v_{h_i} + \Delta v_{h_i})^{(h_i')}\Delta y_{h_i}]$

where $h_i'$ denotes the reduced number of unknowns at node i and the bracketed terms involve deviations already solved at previously handled faulty nodes.


The total number of equations from all faulty nodes that are dealt with in this
method is equal to the number of possible faulty branches, no matter in what order
we deal with the faulty nodes. Starting with the faulty node that contains the maximum
number of faulty branches will need the maximum number of excitations. Starting
with the faulty node that contains the fewest branches may result in the minimum number of excitations: when we come to deal with the node that contains the most faulty branches, some of these branches may already have been solved, so the number of excitations needed for that node could be smaller than the number of its faulty branches.

1.2.7 Cutset-fault diagnosis

Research on multiple-fault diagnosis has mainly focused on branch- and node-fault diagnosis, as discussed in the previous sections. This section is concerned with
cutset-fault diagnosis as proposed and investigated for both linear and non-linear
circuits by Sun [25, 37, 38]. Relations of branch-, node- and cutset-fault diagnosis
methods are also discussed.
A branch is said to be measurable if the two nodes to which the branch is connected are accessible. The branch voltage of a measurable branch can be obtained by measurement. In a linear circuit, assume that a tree has t branches, of which p branches are measurable and the other q branches are not measurable, p + q = t. We use $V_b$ and $V_t$ to represent the branch voltage vector and the tree branch voltage vector, respectively. $V_p$ and $V_q$ are the measurable and unmeasurable tree branch voltages, respectively.
Applying KCL to the fault incremental circuit, that is, $D\,\Delta I_b = 0$, and noting that $\Delta V_b = D^T \Delta V_t$, where D is the cutset incidence matrix, and using Equations (1.4) and (1.5), we have:
$Z_{tt} X_t = \Delta V_t$    (1.17)

where

$Z_{tt} = Y_t^{-1} = (D Y_b D^T)^{-1}, \qquad X_t = \Delta Y_t (V_t + \Delta V_t) = D\,\Delta Y_b D^T (V_t + \Delta V_t)$    (1.18)
A cutset is said to be faulty if it contains at least one faulty branch. From the expression for $X_t$ in Equation (1.18) we can see that its non-zero elements must correspond to the faulty cutsets.
Dividing $Z_{tt} = [Z_{pt}^T, Z_{qt}^T]^T$ and $\Delta V_t = [\Delta V_p^T, \Delta V_q^T]^T$, the cutset-fault diagnosis equation can be derived from Equation (1.17) as [25]:

$Z_{pt} X_t = \Delta V_p$    (1.19)

and the formula for calculating the unmeasurable tree branch voltages is given by

$\Delta V_q = Z_{qt} X_t$    (1.20)
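To make faulty-cutset location from Equation (1.19) concrete, here is a minimal numpy sketch that searches the k-cutset combinations for a consistent subsystem; the matrix $Z_{pt}$, the fault pattern and the tolerance are all hypothetical:

import numpy as np
from itertools import combinations

def locate_faulty_cutsets(Z_pt, dV_p, k, tol=1e-9):
    """Return all k-cutset index sets whose columns of Z_pt can reproduce dV_p."""
    hits = []
    for S in combinations(range(Z_pt.shape[1]), k):
        Zs = Z_pt[:, S]
        x = np.linalg.lstsq(Zs, dV_p, rcond=None)[0]
        if np.linalg.norm(Zs @ x - dV_p) < tol:   # consistent combination
            hits.append(S)
    return hits

# Hypothetical example: p = 2 measurable tree branches, t = 3 cutsets,
# cutset 1 (0-based index) faulty.
Z_pt = np.array([[1.0, 0.2, 0.1],
                 [0.3, 1.1, 0.2]])
dV_p = Z_pt @ np.array([0.0, 0.5, 0.0])
print(locate_faulty_cutsets(Z_pt, dV_p, k=1))     # -> [(1,)]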

Similar to the branch- and node-fault diagnosis, assuming that there are k
simultaneous faulty cutsets and k < p, we can locate the faulty cutsets from
Equation (1.19).
After faulty-cutset location, we can immediately conclude that all branches in the fault-free cutsets are not faulty; all possible faulty branches lie in the faulty cutsets. If a faulty cutset contains only one possible faulty branch, this branch must be faulty. After calculating the internal tree branch voltages from Equation (1.20), the faulty branches and faulty component values in the faulty cutsets may be determined by Equation (1.21):

$X_t = D X_b = D\,\Delta Y_b D^T (V_t + \Delta V_t)$    (1.21)

The method proposed for parameter identification in node-fault diagnosis in Section 1.2.6 may also easily be extended to cutset-fault diagnosis, dealing with trees rather than nodes, owing to the similarity between the node- and cutset-fault diagnosis methods.
The node-fault diagnosis method can be seen as a special case of the cutset-fault
diagnosis method when a cutset reduces to a node. The cutset-fault diagnosis method
is more flexible and less restrictive than the branch- and node-fault diagnosis methods.
This is due to the selectability of trees. A proper choice of a tree can not only locate
the faulty cutsets uniquely, but can also locate the faulty branches in the faulty cutsets.
As in branch- and node-fault diagnosis, only voltage measurements are required in
cutset-fault diagnosis.
1.2.7.1 Selection of tree
An analogue circuit contains several possible trees, which give us an extra degree
of freedom to consider which tree or trees should be used in order to enhance diagnosability [25, 28]. Different trees correspond to different cutsets and thus different
faulty cutsets. The faulty cutsets for one tree may not be uniquely locatable, but the
faulty cutsets for another tree could be locatable. Different trees will also result in
different branch-cutset relations; the faulty branches may be easier to determine from
the faulty cutsets of one tree than those of another. Interesting illustrations can be
found in References 25 and 28.
When choosing a tree we should try to use all accessible nodes so as to maximize the number of measurable tree branches, allowing the maximum number of faulty cutsets to be diagnosable for the available accessible nodes. For this purpose, it is useful to know the relation between the tree voltages and node voltages, that is, $V_t = A_t V_n$, where $A_t$ is the node incidence submatrix corresponding to the tree branches.
1.2.7.2 Branch-fault diagnosis equations based on cutset analysis
Branch-fault diagnosis equations are derived on the basis of nodal analysis in Section 1.2.2. Here we derive another set of branch-fault diagnosis equations on the
basis of cutset analysis. This has not been investigated in the literature. By cutset analysis of the fault incremental circuit we can obtain $Z_{tb} X_b = \Delta V_t$, where $Z_{tb} = (D Y_b D^T)^{-1} D$. Dividing $Z_{tb} = [Z_{pb}^T, Z_{qb}^T]^T$, the branch-fault diagnosis equation can be derived as

$Z_{pb} X_b = \Delta V_p$    (1.22)

and the formula for calculating the internal tree branch voltages is given by $\Delta V_q = Z_{qb} X_b$. Equation (1.22) may benefit from the selectability of trees.
1.2.7.3 Relation of branch-, node- and cutset-fault diagnosis
The three fault vectors are linked by $X_n = A X_b$, $X_t = D X_b$ and $X_n = A_t X_t$. The matrix $A_t$ is invertible and its inverse can be obtained using a simple graph algorithm. If any of $X_b$, $X_n$ and $X_t$ is known, the other two may be found from these relations. Also, the cutset-fault diagnosis method becomes the node-fault diagnosis method when a cutset reduces to a node.
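A minimal sketch of these conversions (the incidence matrices and the fault vector are hypothetical):

import numpy as np

# Hypothetical node incidence matrix A (nodes x branches) and cutset
# incidence matrix D for the same 4-branch circuit.
A = np.array([[1, -1,  0,  1],
              [0,  1,  1, -1]])
D = np.array([[1,  0,  1,  0],
              [0,  1,  1, -1]])
X_b = np.array([0.0, 0.3, 0.0, 0.0])   # branch 2 faulty

X_n = A @ X_b    # node fault vector
X_t = D @ X_b    # cutset fault vector
print(X_n, X_t)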
1.2.7.4 Loop- and mesh-fault diagnosis
Theoretically, we can just as easily define and derive loop- and mesh-fault diagnosis problems, but they are not useful in practice: for k-loop faults and k-mesh faults, loop- and mesh-fault diagnosis methods require that m (> k) loop currents and m mesh currents, respectively, be measurable. Measuring branch currents is not preferred in ICs as it requires breaking connections (it is not in situ). The loop- and mesh-fault diagnosis methods are mentioned here merely for the theoretical completeness of k-fault diagnosis.
1.2.8 Tolerance effects and treatment
In all k-fault diagnosis methods, the non-faulty components are assumed to take on their nominal values. In actual circuits, however, the values of non-faulty components fall randomly within their tolerance ranges. The reliability of the diagnosis results is thus affected; this may sometimes lead to false fault declarations or missed real faults, making fault diagnosis less accurate. The tolerance effect becomes more severe when the fault-to-tolerance ratio is small.
To solve this problem, one method is to apply a threshold reflecting tolerance effects
for compatibility checking. Another method is to use some optimization method to
search for the faults by minimizing an objective error function. We can also discount
tolerance effects from the actual circuit to have a modified circuit with net changes
caused by the faults only. This method may need a separate tolerance analysis of
the CUT by using the differential incremental circuit concept, a special case of the
fault incremental circuit, as mentioned in Section 1.2.1. Some detailed discussion of
tolerance effects on fault diagnosis may be found in References 4 and 5.
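As a sketch of the first treatment, the consistency check of a candidate k-branch set can be relaxed with a threshold derived from a tolerance analysis (the threshold value and names here are hypothetical):

import numpy as np

def is_consistent(Z_mk, dV_m, delta):
    """Tolerance-aware compatibility check: accept the k-branch set as
    consistent if the least-squares residual stays below the threshold delta,
    which should reflect the tolerance-induced spread of dV_m."""
    x = np.linalg.lstsq(Z_mk, dV_m, rcond=None)[0]
    return np.linalg.norm(Z_mk @ x - dV_m) <= delta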
1.3 Class-fault diagnosis of analogue circuits

Multiple-fault diagnosis has been discussed in Section 1.2. It mainly comprises branch-, node- and cutset-fault diagnosis methods, and all k-fault diagnosis methods
can be extended to non-linear circuits as will be discussed in Section 1.4. The primary advantage is the linearity of the methods, even for non-linear circuits. However,
they have rather strong topological conditions which not only limit their application, but also make design for testability difficult. Some researchers have tried to
relax the topological restrictions by applying multiple excitations. This works only to some extent, since quite tight topological constraints still remain. There is therefore a need for the development of fault diagnosis methods without topological limitations. This was first realized by Togawa et al. [27], who proposed a TF-equivalence class approach for the branch-fault diagnosis problem. A more general class-fault diagnosis method was proposed in Reference 28. This method not only
applies to branch-fault diagnosis, but is suitable for node- and cutset-fault diagnosis
problems as well. References 29 and 30 have studied branch-set-based class-fault
diagnosis from a topological point of view. A technique of classifying branch sets
directly from the circuit graph was presented. A method of systematic formation
of the class table similar to a fault dictionary was also given. Another topological approach for class-fault diagnosis was presented in Reference 31, with a new
type of class being defined. This method guarantees that every class has a full rank
branch set for consistency verification. The class-fault diagnosis technique aims at
isolating the faulty region or subcircuit, has no topological restrictions and requires
only modest after-test computation. It can be considered as a combination of the
SBT and SAT methods. After determination of the faulty class, one can also locate
the faulty branch set in the faulty class. This section gives a detailed discussion
of class-fault diagnosis, mainly based on the work by Sun [28-30] and Sun and Fidler [31].
1.3.1 Class-fault diagnosis and general algebraic method for classification

A general algebraic approach for class-fault diagnosis was proposed in Reference
28. This method is suitable for classifications based on any k-fault diagnosis method, including branch-, node- and cutset-fault diagnosis. However, rather than presenting the problem in an overly mathematical way, without loss of generality we use the branch-fault diagnosis equation to introduce the algebraic classification method and give the general method a more physical flavour.
As discussed in Section 1.2.2, k-branch-fault diagnosis involves a number of equations corresponding to different k-branch combinations. We use $S_i = \{i_1, i_2, \ldots, i_k\}$ to denote the ith set of k branches. If $S_i$ contains all k faulty branches in the circuit, it is called the faulty set. If $S_i$ is faulty, then $Z_{mki} X_{ki} = \Delta V_m$, where $Z_{mki} = [z_{i_1}, z_{i_2}, \ldots, z_{i_k}]$, $z_{i_j}$ is the $i_j$th column vector of the matrix $Z_{mb}$ and $X_{ki} = [x_{i_1}, x_{i_2}, \ldots, x_{i_k}]^T$. $S_i$ is said to be of full column rank if rank$[Z_{mki}] = k$. It is assumed that all k-branch sets are of full column rank. We denote:

$\tilde{Z}_{mki} = Z_{mki}(Z_{mki}^T Z_{mki})^{-1} Z_{mki}^T = Z_{mki} Z_{mki}^L$    (1.23)

From Equations (1.9), (1.10) and (1.23), we know that for the faulty branch set $\tilde{Z}_{mki} \Delta V_m = \Delta V_m$, which is equivalent to rank$[Z_{mki}, \Delta V_m] = k$.
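As a numerical illustration of Equation (1.23) and this consistency condition, a minimal numpy sketch (the matrix and the measurement are hypothetical, with the measurement constructed to be consistent):

import numpy as np

def projector(Z):
    """Z~ = Z (Z^T Z)^-1 Z^T, the orthogonal projector onto the columns of Z."""
    return Z @ np.linalg.inv(Z.T @ Z) @ Z.T

Z_mki = np.array([[1.0, 0.0],
                  [0.5, 1.0],
                  [0.0, 2.0]])            # m = 3 measurements, k = 2 branches
dV_m = Z_mki @ np.array([0.2, -0.1])      # consistent by construction

print(np.allclose(projector(Z_mki) @ dV_m, dV_m))   # True: S_i is consistent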
Definition 1.3 The k-branch set $S_j$ is said to be dependent on the k-branch set $S_i$ if $\tilde{Z}_{mki} Z_{mkj} = Z_{mkj}$.

The dependence relation is an equivalence relation. We can use this relation to classify all k-branch sets; that is, if $S_i$ and $S_j$ are dependent, they belong to the same class, otherwise they fall into two different classes. It can be proved that $\tilde{Z}_{mki} Z_{mkj} = Z_{mkj}$ is equivalent to rank$[Z_{mki}, z_{j_\ell}] = k$, $\ell = 1, 2, \ldots, k$.

Theorem 1.6 Assume that $j \neq i$. If for $j = j_1, j_2, \ldots, j_u$ we have $\tilde{Z}_{mki} Z_{mkj} = Z_{mkj}$, and for all other j, $\tilde{Z}_{mki} Z_{mkj} \neq Z_{mkj}$, then the k-branch sets $S_i, S_{j_1}, S_{j_2}, \ldots, S_{j_u}$ form a class.

Theorem 1.7 Assume $j \neq i_1, i_2, \ldots, i_k$. If for $j = j_1, j_2, \ldots, j_e$ we have $\tilde{Z}_{mki} z_j = z_j$, and for all other j, $\tilde{Z}_{mki} z_j \neq z_j$, then the k-branch sets formed by the $k + e$ branches $i_1, i_2, \ldots, i_k$ and $j_1, j_2, \ldots, j_e$ form a class.

Theorem 1.8 If for some (k + 1) branches rank$[Z_{m(k+1)}] = k$, then all k-branch sets formed by these (k + 1) branches belong to the same class.
If $\tilde{Z}_{mki} \Delta V_m = \Delta V_m$, the k-branch set $S_i$ is said to be consistent. It can be proved that if $S_i$ and $S_j$ are dependent, they are both consistent or both inconsistent. It can also be proved that if $S_i$ and $S_j$ are simultaneously consistent, then $S_i$ and $S_j$ are dependent. A class $C_i$ is said to be faulty if it contains the faulty branch set. A class $C_i$ is said to be consistent if the k-branch sets in the class are consistent. If $S_i$ is faulty, it is
consistent and if Si is inconsistent, it is not faulty. Thus, the faulty class must be
the consistent class and an inconsistent class is not faulty. Owing to the equivalence
relation, if there is one consistent branch set, the class is consistent and if there is one
inconsistent branch set, the class is inconsistent. There is only one consistent class and
the faulty class can be uniquely determined. Clearly, we can identify the faulty class
by checking only one branch set in each class and once a consistent set/class is found,
we do not need to check any more as the remaining classes are not faulty. When the
number of classes is equal to the number of branch sets, that is, each class contains
only one branch set, the class-fault diagnosis method reduces to the k-branch-fault
diagnosis method.
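For illustration, the dependence test of Definition 1.3 and the resulting grouping into classes might be sketched as follows (a brute-force pass over all full-column-rank k-branch sets of a given $Z_{mb}$; names are hypothetical):

import numpy as np
from itertools import combinations

def dependent(Zi, Zj, tol=1e-9):
    """Definition 1.3: S_j depends on S_i iff Z~_mki Z_mkj = Z_mkj."""
    P = Zi @ np.linalg.inv(Zi.T @ Zi) @ Zi.T
    return np.linalg.norm(P @ Zj - Zj) < tol

def classify(Z_mb, k):
    """Group all full-column-rank k-branch sets into equivalence classes."""
    sets = [S for S in combinations(range(Z_mb.shape[1]), k)
            if np.linalg.matrix_rank(Z_mb[:, S]) == k]
    classes = []
    for S in sets:
        for c in classes:
            if dependent(Z_mb[:, c[0]], Z_mb[:, S]):
                c.append(S)
                break
        else:
            classes.append([S])
    return classes

The faulty class would then be found after test by checking the consistency of one representative set per class against $\Delta V_m$ and stopping at the first consistent class.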
In the above we assume that all k-branch sets are of full column rank. If some branch sets are not of full column rank, the method can still be used with some generalization. This is done by putting all k-branch sets that are not of full rank together as one class, called the non-full-rank class. We can find all branch sets with rank$[Z_{mki}] < k$ by checking the determinants $\det(Z_{mki}^T Z_{mki}) = 0$. For all full-rank branch sets we perform classification and identification as normal. If a normal full-rank class is found faulty by consistency checking, the fault diagnosis is complete. If none of the full-rank classes is faulty, we judge the non-full-rank class to be the faulty class.
Classification should be conducted from k = 1 to k = m − 1, unless the value of k is known. Classes are determined for each k value using the method given above. A class table similar to a fault dictionary can be formed before test. The matrix $\tilde{Z}_{mki}$ of one branch set (any one of the k-branch sets) in each class, computable before test, can also be included in the class table for consistency checking to identify the faulty class after test.
The class-fault diagnosis method may be suitable for relatively large-scale circuits as it targets a faulty region and a class may actually be a subcircuit. In class-fault diagnosis, the after-test computation level is low because classification can be conducted and the class table constructed before test. The number of classes is smaller than the number of branch sets and, owing to the unique identifiability, we can stop checking once a class is found to be consistent; the number of consistency checks is thus at worst equal to the number of classes. There is also no need for testability design, again owing to the unique identifiability. The method has no restriction; the only assumption is m > k. The method can also be used to classify k-node sets and k-cutset sets [28].
After the determination of the faulty class, we can further determine the k faulty branches. If the faulty class contains only one k-branch set, that branch set is the faulty one. Otherwise, the two-excitation methods based on the invariance of the faulty component values may be used to distinguish the faulty branch set from the others in the faulty class.
The class-fault diagnosis technique is the best combination of the SBT and SAT
methods, retaining the advantages of both. It uses compatibility verification of linear
equations, but the class table is very similar to a fault dictionary. It can deal with
multiple soft faults and the after-test computation level is small. Topological classification methods to be introduced in the next section can make classification simpler
and computation before test smaller.
1.3.2 Class-fault diagnosis and topological technique for classification
On the basis of the algebraic classification theory discussed above, we present
a topological classification method. First we give some definitions. If some of the
branches in a loop also constitute a cutset, we say the loop contains a cutset. If some
of the branches in a cutset also constitute a loop, we say the cutset contains a loop.
Theorem 1.9 We can find the k-branch sets that are not of full rank topologically as below [29, 30].
1. When branches $i_1, i_2, \ldots, i_k$ form a loop or a dependent cutset, $S_i$ is not of full rank.
2. The (k + 1) k-branch sets formed by the (k + 1) branches in a (k + 1)-branch loop containing a dependent cutset are not of full rank. The (k + 1) k-branch sets formed by the (k + 1) branches in a (k + 1)-branch dependent cutset containing a loop are not of full rank.
3. The k-branch sets formed by the (k + 1) branches in a (k + 1)-branch loop or dependent cutset that shares k branches with another loop containing a dependent cutset are not of full rank. The k-branch sets formed by the (k + 1) branches in a (k + 1)-branch dependent cutset or loop that shares k branches with another dependent cutset containing a loop are not of full rank.
Theorem 1.10 All normal full-rank k-branch sets can be classified topologically as below [29, 30]:
1. The (k + 1) k-branch sets in a (k + 1)-branch loop belong to the same class. The (k + 1) k-branch sets in a (k + 1)-branch dependent cutset belong to the same class. As a special case of the latter, supposing that an inaccessible node has (k + 1) branches, the (k + 1) k-branch sets formed by the (k + 1) branches connected to the node belong to the same class.
2. When a (k + 1)-branch loop or dependent cutset shares k branches with another (k + 1)-branch loop or dependent cutset, all k-branch sets formed by the branches in both belong to the same class.
3. In a (k + 2)-branch loop containing a dependent cutset, all those k-branch sets that do not form the dependent cutset belong to the same class. Similarly, in a (k + 2)-branch dependent cutset containing a loop, all those k-branch sets that do not form the loop belong to the same class.
To use the topological classification theorems, we need to find all related loops
and dependent cutsets in the CUT in order to find all non-full rank k-branch sets
and classify all full rank k-branch sets. Dependent cutsets can be found equivalently
in $N_0$, since it has the same dependent cutsets as those in N, as shown in Theorem
1.5. A systematic algorithm for the construction of the complete dictionary-like class
table has been given in References 29 and 30. The faulty class can be identified
by verifying the consistency of any k-branch set in each class, as discussed in the
preceding section. If none of the full rank classes is consistent, then the non-full rank
class is the faulty class.
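Whether a candidate branch set closes a loop can be checked with a simple union-find pass over its branches; a small self-contained sketch (the branches are hypothetical node pairs):

def contains_loop(branches):
    """Return True if the branch set, given as (node, node) pairs, closes a loop."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    for u, v in branches:
        ru, rv = find(u), find(v)
        if ru == rv:
            return True                      # endpoints already connected
        parent[ru] = rv
    return False

print(contains_loop([(1, 2), (2, 3), (3, 1)]))   # True: a three-branch loop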
1.3.3 t-class-fault diagnosis and topological method for classification
We now take a different view of class-fault diagnosis, one that need not be based on the k-fault diagnosis method. We focus on the effects of the faults at the output, $\Delta V_m$. Two sets with different numbers of faulty branches can cause the same $\Delta V_m$. For example, using the current-source shifting theorems [62], in a three-branch loop of the fault incremental circuit the effect of all three branches being faulty can be equivalent to that of any two branches being faulty, since shifting one fault compensation current source to the other two branches would not change $\Delta V_m$; the two sets can thus be put into the same class. In terms of the branch-fault diagnosis equation $Z_{mb} X_b = \Delta V_m$, this means that for the two-branch set $Z_{m2} X_2 = \Delta V_m$ and for the three-branch set $Z_{m3} X_3 = \Delta V_m$. Although $Z_{m3}$ is not of full rank owing to the loop, $Z_{m2}$ is of full rank. So even if there are three faulty branches, by checking the consistency of $Z_{m2} X_2 = \Delta V_m$ we can still identify the faulty class. Note that here two is not the number of real faults; the number of faults is k = 3. Generalizing this simple observation, another topological method of class-fault diagnosis of analogue circuits is described in Reference 31. This method allows the k faulty-branch set not to be of full rank, a case that cannot be handled in the normal way by the methods presented above.
1.3.3.1 Classification theorem
The following discussion is based on the branch-fault diagnosis equation, but introduces a new type of classification.

Definition 1.4 An i-branch set and a j-branch set are said to be t-dependent if the same $\Delta V_m$ is caused when their branches are faulty.

For the two branch sets we then have $Z_{mi} X_i = Z_{mj} X_j = \Delta V_m$. Note that i and j can be equal, for example i = j = k, which is the case of the class-fault diagnosis in Sections 1.3.1 and 1.3.2; they can also be different, which is a new case. To reflect the difference we use the term t-dependence for the relation, where the meaning of t will become clear later. It is evident that this dependence relationship is also an equivalence relation, so we can classify the branch sets in a circuit by combining all dependent branch sets into a class. Obviously, each branch set concerned lies in one and only one class. A branch set forms a class by itself only if it is not dependent on any other branch set. A topological classification theorem is given below.
Theorem 1.11 [31] The (t + 1)-branch set and the (t + 1) t-branch sets formed by the (t + 1) branches in a (t + 1)-branch loop belong to the same class. The (t + 1)-branch set and the (t + 1) t-branch sets formed by the (t + 1) branches in a (t + 1)-branch dependent cutset belong to the same class. As a special case of the latter, supposing that an inaccessible node has (t + 1) branches connected to it, the (t + 1)-branch set and the (t + 1) t-branch sets formed by the (t + 1) branches belong to the same class.
Note that Xb can be looked upon as a fault excitation current source vector.
Therefore, the above theorem can be easily proved by means of the theorems of
current source and voltage source shift [62]. On the basis of the above theorem some
more complicated classification theorems may be developed by making full use of the transitive property of the dependence relation.
1.3.3.2 Classification technique
We now discuss how to use Theorem 1.11 to classify the branch sets of a circuit. For any t-branch set, 1 ≤ t ≤ m − 1, where m is the number of accessible nodes, find all other j-branch sets, j ≥ t, that are dependent on the t-branch set, and put these j-branch sets in the class in which the t-branch set stays. In this way the t-branch set is the smallest, in the sense that it has the smallest number of branches compared with the other branch sets in the same class. In order to classify more efficiently and make the class table more regular, it may be beneficial to take the following measures:
1. Branches in a set are arranged in ascending order of branch number.
2. Branch sets in a class are ranked according to the number of branches contained in the sets, from small to large. If two sets have the same number of branches, the set with the smaller branch numbers is put before the other one.
3. Class order numbers, beginning with 1, are determined by the branch order numbers of the first set in each class, from small to large.
4. The whole classification process may start from t = 1 and end at t = m − 1, that is, first find all classes whose smallest branch sets have only one branch, then those whose smallest sets have two branches, and so on (a small sketch of these ordering rules follows this list).
1.3.3.3 Identification of the faulty class
From the classification technique given above we can see that there is at least one t-branch set in each class and the first set in a class is definitely a t-branch set. From this we also know that the submatrices $Z_{mt}$ of $Z_{mb}$ corresponding to t-branch sets are of full column rank. Thus, the faulty class can be identified by verifying the consistency of the corresponding equation $Z_{mt} X_t = \Delta V_m$ for the first (or any) t-branch set in each class. The whole identification process should start from t = 1 and end with t = m − 1. In most cases, once consistency is satisfied for some t-branch set, the class in which it lies is the faulty class and no further verification is needed.
In a mathematical sense, each class is maximal, meaning that its branch sets are not dependent on the sets of other classes. Thus, there is only one consistent class and this class is the faulty class. Therefore, for t < m, the faulty class is uniquely identifiable without any other conditions.
1.3.3.4 Location of the faulty branch set
After determining the faulty class we can also further distinguish the faulty branch
set. Assume that there are k simultaneously faulty branches in the circuit. The
k faulty-branch set must be in the faulty class. If the faulty class contains only one
branch set (it must be a t-branch set), then the branch set must be faulty. Thus, we
have k = t (note that k is the number of faulty branches). Otherwise, branch sets in
the faulty class may be divided into two parts; one contains all t-branch sets, the other
accommodates all j-branch sets, j t + 1. Note that Zmt corresponding to a t-branch
set is of full column rank, whereas $Z_{mj}$ corresponding to a j-branch set, j ≥ t + 1, is not of full column rank [29, 30]. It is known that the parameter values of the faulty branch
set are independent of the excitations. Thus, by changing excitations and verifying the invariance of all parameter values in each t-branch set, we can determine the faulty set if it is a t-branch set, and then know that k = t. If all t-branch sets are variant, we can be sure that the faulty set is a j-branch set, j ≥ t + 1, and k > t. If, further, there is only one j-branch set in the second part of the faulty class (in this event we have j = t + 1), then it can definitely be deduced that this set is the faulty set and k = t + 1.
It should be pointed out that in the latter two cases, unlike k in the other multiple-fault diagnosis methods in Section 1.2 and the class-fault diagnosis method in Sections 1.3.1 and 1.3.2, here t is no longer the number of faulty branches of the circuit. The approach only requires t < m, not k < m as the others do. When k ≥ m, other methods fail, but this method is still valid as long as t < m (a probable case, since t ≤ k). This implies that the method needs fewer accessible nodes (m = t + 1, not k + 1) or, in other words, that it may diagnose more simultaneous faults. In addition, the method also applies to the situation where $Z_{mk}$ is not of full column rank (usually because the faulty branches form loops or dependent cutsets).
1.4 Fault diagnosis of non-linear circuits
Analogue circuit fault diagnosis has proved to be a very difficult problem. Fault
diagnosis of non-linear circuits is even more difficult due to the challenge in fault
modelling and the complexity of non-linear circuits. There has been less work on
fault diagnosis of non-linear circuits than of linear circuits in the literature. Since practical circuits are always non-linear (devices such as diodes and transistors are non-linear), the development of efficient methods for fault diagnosis of non-linear circuits becomes particularly important. Sun and co-workers [32-39] have conducted extensive research into fault diagnosis of non-linear circuits and in References 32-39 have proposed a series of linear methods. This section summarizes some of the
results.
1.4.1 Fault modelling and fault incremental circuits
Fault modelling is a difficult task. For linear circuits, component value deviation
from the nominal values is used to judge whether or not a circuit has faults. Parameter identification methods were developed to try to calculate all component values
to determine such deviations. Subsequently, modelling faults by equivalent compensation current sources was proposed. In this method, the component value change is
equivalently described by a fault excitation source. If the fault source current is not
equal to zero, the corresponding component is faulty. Here the real component values are not the target; rather, the incremental current sources caused by the faults are used as indicators/verifiers. This modelling method has resulted in a large range of fault
verification methods. For non-linear circuits, fault modelling by component value
deviation is possible, but is not a convenient or preferred choice, as in many cases
there is no direct single value which can represent the state of a non-linear component
like a linear one. A non-linear component often contains several parameters and any
parameter-based method may result in too many non-linear equations. Fortunately,
the fault compensation source method can be easily extended to non-linear circuits.
Any two-terminal non-linear component can be represented by a single compensation source no matter how many parameters are in the characterization function. The
resulting diagnosis equations are exactly (not approximately) linear, although the circuit is non-linear, thus reducing computation time and memory. This is obviously an attractive feature.
In the fault modelling of non-linear circuits [32-35], the key problem is that a change in the operating point of a non-linear component could be caused either by a fault in the component itself or by faults in other components. The fault model must be able to tell the real fault from the fake ones. Whether or not a non-linear component is faulty should be decided by whether the actual operating point of the component falls on the nominal characteristic curve, so as to distinguish a real fault from a fake fault due to operating-point movement caused by faults in other components.
Consider a nominal non-linear resistive component with the characteristic

$i = g(v)$    (1.24)
If the non-linear component is not faulty, the actual branch current and voltage due to faults in the circuit will satisfy:

$i + \Delta i = g(v + \Delta v)$    (1.25)

that is, the actual operation point remains on the nominal non-linear curve, although
moved from the nominal point (i, v); otherwise, the component is faulty since the real
operation point shifts away from the nominal non-linear curve, which means that the
non-linear characteristic has changed.
Introducing

$x = i + \Delta i - g(v + \Delta v)$    (1.26)

we can then use x to determine whether or not the non-linear component is faulty. If
x is not equal to zero, the component is faulty, otherwise it is not faulty according to
Equation (1.25). Using Equations (1.26) and (1.24), we can write i = g(v + v)
g(v) + x and further:
i = yv + x

(1.27)

where

$y = \frac{g(v + \Delta v) - g(v)}{\Delta v}$    (1.28)

which is the incremental conductance at the nominal operating point and can be calculated when $\Delta v$ is known.
Equation (1.27) can be seen as a branch equation where $\Delta i$ and $\Delta v$ are the branch current and voltage, respectively; y is the branch admittance and x is a current source.
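To make the model concrete, here is a small sketch for a hypothetical exponential (diode-like) branch; the nominal characteristic, operating point and measured faulty-circuit values are all assumed:

import numpy as np

def g(v):
    """Nominal characteristic of the non-linear branch (hypothetical diode law)."""
    Is, Vt = 1e-12, 0.025
    return Is * (np.exp(v / Vt) - 1.0)

v, dv = 0.60, 0.03            # nominal voltage and measured increment
i = g(v)                      # nominal current
i_faulty = 1.05 * g(v + dv)   # measured current, 5 per cent off the nominal curve

y = (g(v + dv) - g(v)) / dv             # Equation (1.28): incremental conductance
x = (i_faulty - i) + i - g(v + dv)      # Equation (1.26): x = di + i - g(v + dv)
print("y =", y, "x =", x, "faulty:", abs(x) > 1e-9)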
Suppose that the circuit has c non-linear components, and that all non-linear components are two-terminal voltage-controlled non-linear resistors with the characteristic i = g(v). For all non-linear components we can write:

$\Delta I_c = Y_c \Delta V_c + X_c$    (1.29)

Equation (1.29) can be treated as the branch equation corresponding to the non-linear branches in the fault incremental circuit, where $Y_c$ is the branch admittance matrix with individual elements given by Equation (1.28), $\Delta I_c$ the branch current vector, $\Delta V_c$ the branch voltage vector and $X_c$ the current source vector with individual elements given by Equation (1.26).
Suppose that the CUT has b linear resistor branches. The branch equation of the linear branches in the fault incremental circuit was derived in Section 1.2.1 and is rewritten with new equation numbers for convenience:

$\Delta I_b = Y_b \Delta V_b + X_b$    (1.30)

$X_b = \Delta Y_b (V_b + \Delta V_b)$    (1.31)

Assume that the circuit to be considered has a branches, of which b are linear and c non-linear, a = b + c. The components are numbered in the order of linear to non-linear elements. The branch equation of the fault incremental circuit can be written by combining Equations (1.30) and (1.29) as

$\Delta I_a = Y_a \Delta V_a + X_a$    (1.32)

where $\Delta I_a = [\Delta I_b^T, \Delta I_c^T]^T$, $\Delta V_a = [\Delta V_b^T, \Delta V_c^T]^T$, $X_a = [X_b^T, X_c^T]^T$ and $Y_a = \mathrm{diag}\{Y_b, Y_c\}$.
Note that the fault incremental circuit is linear, although the nominal and faulty circuits are non-linear. This makes fault diagnosis of non-linear circuits much simpler, with the same complexity as for linear circuits. Also note that no approximation is made during the derivation, so the linearization method is exact. The traditional linearization method uses the differential conductance at the nominal operating point, $\mathrm{d}g(v)/\mathrm{d}v$, causing an inaccuracy in the calculated x and thus an inaccuracy in the fault diagnosis. As for fault diagnosis of linear circuits based on the fault incremental circuit, we can derive branch-, node- and cutset-fault diagnosis equations of non-linear circuits based on the formulated fault incremental circuit.
Non-linear controlled sources and three-terminal devices can also be modelled. For example, for a non-linear VCCS with $i_1 = g_m(v_2)$, we have $\Delta i_1 = y_m \Delta v_2 + x_1$ in the fault incremental circuit, where $y_m = [g_m(v_2 + \Delta v_2) - g_m(v_2)]/\Delta v_2$ and $x_1 = i_1 + \Delta i_1 - g_m(v_2 + \Delta v_2)$. This remains a VCCS relating the incremental current of the controlled branch to the incremental voltage of the controlling branch, with a compensation current source in the controlled branch.
Suppose that a three-terminal non-linear device, with one terminal taken as common, has the following characteristic functions:

$i_1 = g_1(v_1, v_2)$
$i_2 = g_2(v_1, v_2)$
We can derive the corresponding branch equations in a T model in the fault incremental circuit, given by

$\Delta i_1 = y_1 \Delta v_1 + x_1$
$\Delta i_2 = y_2 \Delta v_2 + x_2$

where

$y_1 = [g_1(v_1 + \Delta v_1, v_2 + \Delta v_2) - g_1(v_1, v_2)]/\Delta v_1$
$y_2 = [g_2(v_1 + \Delta v_1, v_2 + \Delta v_2) - g_2(v_1, v_2)]/\Delta v_2$
$x_1 = i_1 + \Delta i_1 - g_1(v_1 + \Delta v_1, v_2 + \Delta v_2)$
$x_2 = i_2 + \Delta i_2 - g_2(v_1 + \Delta v_1, v_2 + \Delta v_2)$


Clearly, if either $x_1$ or $x_2$ is not equal to zero, the three-terminal non-linear device is faulty; it is fault-free only if both $x_1$ and $x_2$ are equal to zero. The modelling method is also suitable for developing a star model for multiple-terminal non-linear devices or subcircuits [35]. Note that the π (delta) models are not preferred because they introduce loops and more branches.
1.4.2 Fault location and identification
Assume that a non-linear resistive circuit has n nodes (excluding the reference node), m of which are accessible. Also assume that all non-linear branches are measurable so that $Y_c$ can be calculated once $\Delta V_c$ is measured. Using the fault incremental circuit we can derive the branch-fault diagnosis equation as [32, 33, 35]:

$Z_{ma} X_a = \Delta V_m$    (1.33)

and the formula for calculating the internal node voltages is given by

$\Delta V_l = Z_{la} X_a$    (1.34)

where $[Z_{ma}^T, Z_{la}^T]^T = Z_{na} = (A Y_a A^T)^{-1} A$ and $[\Delta V_m^T, \Delta V_l^T]^T = \Delta V_n$.
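For illustration, the coefficient matrices of Equations (1.33) and (1.34) might be assembled as follows (the incidence matrix A and $Y_a$ are hypothetical; the expression follows $Z_{na} = (A Y_a A^T)^{-1} A$ as given above):

import numpy as np

# Hypothetical circuit: n = 3 nodes (excluding reference), a = 5 branches.
A = np.array([[1, -1,  0,  0,  1],
              [0,  1, -1,  0,  0],
              [0,  0,  1, -1, -1]], dtype=float)
Y_a = np.diag([1e-3, 2e-3, 1e-3, 5e-4, 8e-4])   # diag{Y_b, Y_c}; Y_c known after test

Z_na = np.linalg.inv(A @ Y_a @ A.T) @ A
m = 2                               # the first m nodes are accessible
Z_ma, Z_la = Z_na[:m], Z_na[m:]     # rows used in Equations (1.33) and (1.34)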
According to the k-branch-fault diagnosis theory, by solving Equation (1.33) for $X_a$ we can locate the k faulty branches: the non-zero elements in $X_b$ indicate the faulty linear branches and the non-zero elements in $X_c$ determine the faulty non-linear branches. Owing to the introduction of the linear fault incremental circuit, the whole
theory and methods of k-branch-fault diagnosis of linear circuits, including both the
algebraic and topological techniques discussed in Section 1.2, are directly applicable
to the non-linear circuit fault diagnosis. It is noted that the branch-fault diagnosis equation can also be formulated based on the cutset analysis as discussed in Section 1.2.7.
Also, as for linear circuits, using the fault incremental circuit of non-linear circuits,
we can derive the node- [35, 36] and cutset-fault diagnosis equations for non-linear
circuits [37, 38], which are the same as those for linear circuits in Section 1.2 in form.
Now, a node and a cutset may contain some non-linear components. A faulty node
and a faulty cutset could thus be caused by either faulty linear branches or faulty
non-linear branches or both. After determining Xn and Xt , we can locate the
faulty nodes and cutsets by non-zero elements in Xn and Xt respectively. Further
determination of the faulty branches (linear or non-linear) can be conducted based on
Xn = AXa and Xt = DXa for node- and cutset-fault diagnosis respectively.
In the above we have assumed that all non-linear branches are measurable. Thus,
non-linear components are connected among accessible nodes and ground and are in
the chosen tree as measurable tree branches. This may limit the number of non-linear
components and when choosing test nodes we need to select those nodes connected
by non-linear components. This may not be a serious problem, since in practical electronic circuits and systems, linear components are dominant; there are usually only a
very few non-linear components in an analogue circuit. It is noted that the coefficients of all diagnosis equations are determined after test, since $Y_c$ can only be calculated after $\Delta V_c$ is measured. However, this is a rather simple computation. Also, partitioning the node admittance matrix or the cutset admittance matrix according to accessible and inaccessible nodes or tree branches, only the block of dimension m × m or p × p corresponding to the accessible nodes or tree branches is related to $Y_c$. The other three blocks can be obtained before test because they do not contain $Y_c$. Using block-based matrix manipulation, the contribution of the m × m or p × p block can be moved to the right-hand side, together with the incremental accessible node or tree branch voltage vector, for after-test computation; the main coefficient matrix on the left-hand side of the diagnosis equations can thus still be computed before test [35].
In the next section, we will further discuss other ways of dealing with non-linear
components.
1.4.2.1 Bilinear function for k-fault parameter identification
For non-linear circuit fault diagnosis we focus on fault location by compensation
current sources rather than parameter identification, as for non-linear components,
defining deviation in branch admittances is not possible or would not provide any
useful further information about the faulty state or nature. We can, however, continue
to use the deviation model for linear components, Equation (1.31), to determine the values of the faulty linear components.
Generally, we can determine the values of the faulty linear components using methods similar to those used for linear circuits after branch-, node- and cutset-fault
diagnosis, treating Xc as known current sources. Here we mention that bilinear
relations between linear component value increments and accessible node voltage
increments can also be established for non-linear circuits. This allows us to determine
the values of faulty linear components and develop a two-excitation method with
enhanced diagnosability. Suppose that there are k1 faulty linear components and k2
faulty non-linear components, k1 + k2 = k. On the basis of the branch-fault diagnosis
method and equations derived using the nodal analysis method, for non-linear circuits,
we can also derive a bilinear relation given by Reference 39:
$\mathrm{col}(\Delta Y_{k1}) = \{\mathrm{diag}[A_{k1}^T(V_n + T_{nm}\Delta V_m)]\}^{-1} W_{k1k} Z_{mk}^L \Delta V_m$    (1.35)

where $W_{k1k} = [U_{k1k1}, O_{k1k2}]$, $U_{k1k1}$ is a unit matrix of dimension $k_1 \times k_1$ and $O_{k1k2}$ is a zero matrix of dimension $k_1 \times k_2$. The meanings of the other symbols, including $Z_{mk}^L$ and $T_{nm}$, are the same as those in Section 1.2.4.
Similar to the linear circuits in Section 1.2.4, the bilinear relation in Equation (1.35) can be used to calculate the faulty linear component values. Also,
a two-excitation method can be developed for fault location based on the bilinear
relation. If the calculated k1 linear component values are the same under the two
independent excitations, the corresponding k branches including the k2 non-linear
components are faulty. Multiple excitations can be achieved by applying the same
current source to different test nodes or different current sources to the same test node.
According to the above discussion, it is also clear that the class-fault diagnosis methods of Section 1.3 are directly applicable to non-linear circuits on the basis of the fault incremental circuit.
1.4.3 Alternative fault incremental circuits and fault diagnosis
The fundamental issues of fault diagnosis of non-linear circuits have been discussed
above. Now we address some alternative, possibly more useful, solutions for different
situations of non-linear circuit fault diagnosis.
1.4.3.1 Alternative fault models of non-linear components [33, 34]
To lift the requirement that all non-linear components be measurable, we can apply some equivalent transformations to the non-linear branches. For a non-linear component i = g(v), as modelled in Section 1.4.1, the corresponding branch equation in the fault incremental circuit is given by Equation (1.27). If we add to and subtract from Equation (1.27) the same term $y^0 \Delta v$, where $y^0$ is an arbitrarily chosen admittance, the equation remains the same. That is,

$\Delta i = y^0 \Delta v + (y\,\Delta v - y^0 \Delta v + x)$    (1.36)

Introducing:

$x' = (y - y^0)\Delta v + x$    (1.37)

Equation (1.36) becomes:

$\Delta i = y^0 \Delta v + x'$    (1.38)

Equation (1.38) has the same form as Equation (1.27). So the corresponding branch in the fault incremental circuit has $y^0$ as the branch admittance and $x'$ as the compensation current source.
Two points are interesting. One is that $y^0$ can take any value and can thus be set arbitrarily before test. If we use Equation (1.38) for all non-linear components, the admittance matrix of the fault incremental circuit will be known before test, and so will all diagnosis equation coefficients. The other is that $x'$ may not be zero even for a fault-free non-linear component, unless $y^0$ is chosen equal to y. However, once $x'$ has been determined, x may be recovered from Equation (1.37) after y is calculated. The true fault state of the non-linear component can thus still be found.
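A short numerical illustration of recovering the true fault source through Equations (1.37) and (1.38) (all values hypothetical):

y, y0 = 2.0e-3, 1.5e-3    # actual incremental conductance y and preset value y0
dv, x = 0.05, 1.0e-4      # voltage increment and true fault source x

x_prime = (y - y0) * dv + x        # Equation (1.37): source seen by the diagnosis
x_rec = x_prime - (y - y0) * dv    # recover x once y is known after test
print(x_prime, x_rec)              # x_rec equals x: true fault state recovered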
1.4.3.2 Quasi-fault incremental circuit and fault diagnosis [33, 34, 36]
We consider two cases here. The first case is that we use the alternative model for
all non-linear branches, irrespective of whether or not a non-linear component is
measurable. On the basis of Equations (1.38) and (1.37) we can write the branch equation for all non-linear branches as

$\Delta I_c = Y_c^0 \Delta V_c + X_c'$    (1.39)

$X_c' = (Y_c - Y_c^0)\Delta V_c + X_c$    (1.40)

The overall branch equation of the fault incremental circuit can be obtained by combining Equations (1.30) and (1.39) as

$\Delta I_a = Y_a^0 \Delta V_a + X_a'$    (1.41)

where $X_a' = [X_b^T, X_c'^T]^T$ and $Y_a^0 = \mathrm{diag}\{Y_b, Y_c^0\}$.
We call the transformed fault incremental circuit of Equation (1.41) the quasi-fault incremental circuit. We can derive branch-, node- and cutset-fault diagnosis equations on its basis. Taking branch-fault diagnosis as an example, after determination of $X_a'$ we cannot immediately judge the state of the non-linear components. However, $\Delta V_c$ can be calculated, and thus so can $Y_c$. We can then calculate $X_c$ from Equation (1.40) and use it to decide whether a non-linear component is faulty.
Because $Y_c^0$ can be chosen before test, the diagnosis equation coefficients can be obtained before test. This reduces computation after test and is good for online
test. Because all non-linear branches will be in the faulty branch set and any node
or cutset that contains a non-linear component will behave as a faulty one whether
or not the non-linear component is faulty, the number of possible faulty branches,
nodes and cutsets for branch-, node- and cutset-fault diagnosis will increase. This
may require more test nodes for a circuit that contains more non-linear components.
In the search for faults, only the k-branch sets that contain all non-linear components,
k-node sets containing all nodes connected by non-linear components and k-cutset
sets containing all cutsets with non-linear components need to be considered.
1.4.3.3 Mixed-fault incremental circuit and fault diagnosis [37-39]
The second case is that for measurable non-linear components we still use the original
model Equation (1.27), while for unmeasurable non-linear components we use the
alternative model of Equation (1.38). Assuming that there are $c_1$ measurable non-linear branches and $c_2$ unmeasurable non-linear branches, $c_1 + c_2 = c$, we then have
the corresponding branch equations of the respective non-linear branches in the fault
incremental circuit as
$\Delta I_{c1} = Y_{c1} \Delta V_{c1} + X_{c1}$    (1.42)

for measurable non-linear branches, and

$\Delta I_{c2} = Y_{c2}^0 \Delta V_{c2} + X_{c2}'$    (1.43)

$X_{c2}' = (Y_{c2} - Y_{c2}^0)\Delta V_{c2} + X_{c2}$    (1.44)

for unmeasurable non-linear branches.
The overall branch equation of the fault incremental circuit can be obtained by combining Equations (1.30), (1.42) and (1.43) as

$\Delta I_a = Y_a^0 \Delta V_a + X_a'$    (1.45)

where $X_a' = [X_b^T, X_{c1}^T, X_{c2}'^T]^T$ and $Y_a^0 = \mathrm{diag}\{Y_b, Y_{c1}, Y_{c2}^0\}$.
After determining $X_a'$, for measurable non-linear branches we can use $X_{c1}$ directly to determine whether the measurable non-linear components are faulty, while for unmeasurable non-linear components we can use $X_{c2}'$ to further calculate $X_{c2}$ from Equation (1.44) and then determine the real state of the unmeasurable non-linear components. It is noted that after k-fault diagnosis, $\Delta V_{c2}$ (and indeed any branch voltage) can be calculated and thus $Y_{c2}$ can be computed. The diagnosis equation coefficients are therefore obtained after test. A fault-free non-linear component that is not measurable makes the branches, nodes and cutsets that contain it look faulty. Therefore, in the
fault search, only the k-branch sets that contain all unmeasurable non-linear components, k-node sets containing all nodes connected by unmeasurable non-linear
components and k-cutset sets containing all cutsets with unmeasurable non-linear
components need to be considered. The overall performance of this mixed method
should be between the use of original models for all non-linear components and the
use of alternative models for all non-linear components.
The relations between the three types of incremental circuit are summarized in
Reference 39.
1.4.3.4 Two-step diagnosis methods [33, 34, 36-38]
For fault diagnosis of non-linear circuits based on the fault incremental circuit in
Section 1.4.1, we can locate faulty branches, nodes and cutsets for branch-, node- and
cutset-fault diagnosis straightaway, after solving the diagnosis equations. However,
fault diagnosis based on the quasi- and mixed-fault incremental circuits takes two steps
to decide on those branches, nodes and cutsets that contain non-linear components
that are not measurable.
For fault diagnosis of non-linear circuits based on the quasi-fault incremental
circuit, we give the following definitions. A branch is called a linear branch if it
contains a linear component only. Otherwise, it is called a non-linear branch. A node
is called a linear node if all branches connected to it are linear; otherwise, it is called a
non-linear node. Similarly, a cutset is called a linear cutset if all branches in the cutset
are linear; otherwise, it is called a non-linear cutset. Dividing between linear and non-linear, we have $X_a' = [X_b^T, X_c'^T]^T$, $X_n' = [X_{nb}^T, X_{nc}'^T]^T$ and $X_t' = [X_{tb}^T, X_{tc}'^T]^T$ for branch-, node- and cutset-fault diagnosis, respectively. We can also derive the corresponding branch-, node- and cutset-fault diagnosis equations as
$Z_{ma} X_a' = \Delta V_m$    (1.46)

$X_c' = X_c + (Y_c - Y_c^0)\Delta V_c$    (1.47)

$Z_{mn} X_n' = \Delta V_m$    (1.48)

$X_{nc}' = X_{nc} + A_{ncc}(Y_c - Y_c^0)\Delta V_c$    (1.49)

$Z_{pt} X_t' = \Delta V_p$    (1.50)

$X_{tc}' = X_{tc} + D_{tcc}(Y_c - Y_c^0)\Delta V_c$    (1.51)
where $A_{ncc}$ is the incidence submatrix of the $n_c$ non-linear nodes corresponding to the c non-linear branches, and $D_{tcc}$ is the corresponding $t_c \times c$ submatrix of D. All equation coefficients are functions of $Y_a^0$.
On the basis of the above equations, the two-step method can now be stated as follows. First solve the corresponding diagnosis equations (1.46), (1.48) and (1.50) and decide the fault status of the linear branches, linear nodes and linear cutsets from $X_b$, $X_{nb}$ and $X_{tb}$, respectively. Then calculate $\Delta V_c$ and $Y_c$, and further $X_c$, $X_{nc}$ and $X_{tc}$ using Equations (1.47), (1.49) and (1.51), and use them to decide the fault status of the non-linear branches, non-linear nodes and non-linear cutsets.
Similarly, the two-step method can also be developed for fault diagnosis of non-linear circuits on the basis of the mixed-fault incremental circuit. On the basis of the two alternative fault incremental circuits, we can also address class-fault diagnosis and linear component value determination [39].
1.5 Recent advances in fault diagnosis of analogue circuits
The following areas and methods of fault diagnosis of analogue circuits have received
particular attention in recent years. Some promising results have been achieved. We
briefly summarize them here, leaving the details to be covered in the following three
chapters.

1.5.1 Test node selection and test signal generation
of test nodes, test excitations and topological structures of analogue circuits based on
the k-fault diagnosis methods of fault verification [2224]. In addition to the early
work on the faulty dictionary method [11, 12], References 40 and 41 have proposed
computationally efficient methods for test node selection in analogue fault dictionary
techniques. Test node selection techniques can be classified into one of two categories:
selection by inclusion or selection by exclusion. For the inclusion method, the desired optimum set of test nodes is initialized as empty, and a new test point is added to it when needed. For the exclusion approach, the desired optimum set is initialized to include
all available test nodes. Then a test node is deleted if its exclusion does not degrade
the degree of fault diagnosis. In Reference 40, strategies and methods for inclusion
of a node and exclusion of a node as a test node have been proposed. An inclusion
method is developed for test node selection by transforming the problem of selection
of test measurements into the well-known sorting problem and minimal test sets are
obtained by an efficient-sorting-based exclusion method. Reference 41 has further
proposed an entropy-based method for optimum test point selection, based on an
integer-coded dictionary. The minimum test set is found by using the entropy index
of test points.
References 42 and 43 have studied the testability of analogue circuits in the frequency domain using the fault observability concept. Steady-state frequency responses are used. Methods for choosing input frequencies and test nodes to enhance the fault observability of the CUT are proposed. The methods proposed in References 42 and 43 are
based on differential sensitivity and incremental sensitivity analysis, respectively. The
differential-sensitivity-based method is realistic for manipulating soft faults, while
for large deviation and hard faults, accuracy increases with the use of incremental
sensitivity.
References 44 and 45 have investigated the testability of analogue circuits in
the time domain. Transient time responses are used. The proposed test generation
method in Reference 44 is targeted towards detecting specification violations caused
by parametric faults. The relationship between circuit parameters and circuit functions
is used for deriving optimum transient tests. An algorithm for generating the optimum
transient stimulus and for determining the time points at which the output needs to be
sampled is presented. The research on the optimum input stimulus and sampling points
is formulated as an optimization problem where the parameters of the stimulus (the
amplitude and pulse widths for pulse trains, slope for ramp stimulus) are optimized.
The test approach is demonstrated by deriving the optimum piecewise linear (PWL)
input waveform for transient testing. The PWL input stimulus is used because any
general transient waveform can be approximated by PWL segments.
In Reference 45 a method of selecting transient and a.c. stimuli has been presented
based on genetic algorithms and wavelet packet decomposition. The method minimizes the ambiguity of faults in the CUT. It also reduces memory and computation
costs because matrix calculation is not required in the optimization. The stimuli here
are PWL transient and a.c. sources. A PWL source is defined by the given time interval between two neighbouring inflexion points, the number of inflexion points (the first point being (0, 0)) and the magnitudes of the inflexion points, each of which can be varied within a range. An a.c. source is defined by the test frequencies and their corresponding magnitudes; each frequency can be varied within a range, the first test frequency is zero (d.c. test) and the total number of test frequencies can be chosen. Using wavelet packet decomposition to formulate
the objective function and genetic algorithms to optimize it, we can obtain the magnitudes of inflexion points of transient PWL sources or the values of test frequencies
of a.c. sources.
1.5.2 Symbolic approach for fault diagnosis of analogue circuits

Symbolic analysis is a powerful technique for the generation of transfer functions of
a circuit, with the complex frequency variable and some or all of the circuit components represented in symbolic form. Symbolic analysis provides insight into the
behaviour of a circuit because of the explicit expression of the circuit performances
with respect to the frequency and component parameters. Modern symbolic analysis performs much more than this and has become an important method for design

Fault diagnosis of linear and non-linear analogue circuits

31

automation of analogue circuits. Symbolic analysis has been successfully applied in


many areas of analogue circuits, such as behavioural modelling, statistical analysis
and fault diagnosis. Some early work uses circuit symbolic functions for component
value computation [13, 14]. Symbolic expressions of incremental (matrix) form are
also used for large sensitivity analysis and fault simulation [6-10]. Applications of
symbolic analysis in testability analysis and fault location of analogue circuits have
been investigated in References 46 and 47. It has been demonstrated that the symbolic
approach is particularly useful for testability analysis, although for fault location, there
are other strong contenders based on neural networks or genetic algorithms. However,
testability analysis is an essential step in analogue fault diagnosis, thus the symbolic
method has an important role to play in the diagnosis process. Ambiguity group issues
in analogue fault diagnosis [49] have been particularly addressed using the symbolic
analysis method [46-48]. Chapter 2 will discuss the symbolic approach for analogue
fault diagnosis in great detail.
1.5.3 Neural-network- and wavelet-based methods for analogue fault diagnosis
Generally, tolerance effects make the parameter values of circuit components uncertain and the computational equations of traditional methods complex. The non-linear
characteristic of the relation between the circuit performance and its constituent components makes it even more difficult to diagnose faults online and may lead to a false
diagnosis. To overcome these problems, a robust and fast fault diagnosis method
taking tolerances into account is needed. Neural networks have the advantages of
large-scale parallel processing, parallel storing, robust adaptive learning and online
computation. Neural networks provide a mechanism for adaptive pattern classification. They are therefore ideal for fault diagnosis of analogue circuits with tolerances.
Several neural-network-based approaches have been recently proposed for analogue
fault diagnosis and they appear to be very promising [50-56]. Most studies make use of the adaptive and robust classification features of neural networks [50-55]; however, other studies simply use the neural networks as a fast and efficient optimization
method [56]. More recently, wavelet-based techniques have been proposed for fault
diagnosis and testing of analogue circuits [45, 52, 55]. A neural-network-based fault
diagnosis method has been developed in Reference 52 using a wavelet transform
as a preprocessor to reduce the number of input features to the neural network. In
Reference 55 the authors have used the wavelet transform and wavelet packets to extract appropriate feature vectors from the signals sampled from the CUT under various faulty
conditions. Chapter 3 will describe different methods for fault diagnosis of analogue
circuits using neural networks and wavelet transforms.

1.5.4 Hierarchical approach for large-scale circuit fault diagnosis
to grow at a remarkable pace during recent years. Many of the fault diagnosis methods proposed, however, are only suitable for relatively small circuits. For large-scale

analogue circuits, the decomposition and hierarchical approach has attracted considerable attention in recent years [35, 57-61]. Some early work on fault diagnosis of
large-scale analogue circuits is based on the decomposition of circuits and verification
of certain KCL equations [58]. This method divides the CUT into a number of subcircuits based on nodal decomposition and requires that measurement nodes are the
decomposition nodes. Branch decomposition and branch-node mixed decomposition
methods can also be used. A simple method for cascaded analogue systems has also
been proposed [59]. This method first decomposes a large-scale circuit into a cascaded
structure and then verifies the invariance of simple voltage ratios of different stages to
isolate the faulty stage(s). This method has the minimum computation cost both after
and before test as it only needs to calculate the voltage ratios of accessible nodes and
does not need to solve any linear or non-linear equations. Another method is to first
divide the CUT into a number of subcircuits, then find the equivalent circuits of the
subcircuits and use k-fault diagnosis methods to locate the faults of the large-scale
circuits by diagnosing the equivalent circuits [35]. Here an m-terminal subcircuit is
equivalently described by (m1) branches no matter how complex the inside of the
subcircuit. If any of the (m1) equivalent branches is faulty, then the subcircuit is
faulty. More recently, hierarchical methods based on component connection models
[57] have been proposed [60, 61]. The application of hierarchical techniques to the
fault diagnosis of large-scale analogue circuits will be reviewed in Chapter 4.
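The voltage-ratio check for cascaded systems mentioned above can be sketched in a few lines. This is only an illustration of the idea, not the method of Reference 59; the tolerance threshold and the node voltages are hypothetical values.

```python
# A minimal sketch of the cascaded voltage-ratio check: the ratio of
# successive accessible node voltages is compared with its fault-free value,
# and a stage is flagged when the ratio deviates beyond a relative tolerance.
def faulty_stages(v_measured, v_nominal, tol=0.05):
    """Return indices of cascade stages whose voltage ratio deviates
    by more than `tol` (relative) from the fault-free ratio."""
    suspects = []
    for i in range(len(v_measured) - 1):
        r_meas = v_measured[i + 1] / v_measured[i]
        r_nom = v_nominal[i + 1] / v_nominal[i]
        if abs(r_meas - r_nom) > tol * abs(r_nom):
            suspects.append(i)      # stage between node i and node i+1
    return suspects

# Example: four accessible nodes; the stage between nodes 1 and 2 is faulty.
print(faulty_stages([1.0, 0.50, 0.10, 0.05], [1.0, 0.50, 0.25, 0.125]))  # [1]
```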

1.6 Summary

A review of fault diagnosis techniques for analogue circuits with a focus on fault verification methods has been presented. A systematic treatment of the k-fault
diagnosis theory and methods for both linear and non-linear circuits as well as the
class-fault diagnosis technique has been given. The fault incremental circuit for
both linear and non-linear circuits has been introduced, based on which a coherent
discussion on different fault diagnosis methods has been achieved.
The k-fault diagnosis method involves only linear equations after test and requires
only a few accessible nodes. Both algebraic and topological methods have been presented in detail for fault verification and testability analysis in branch-fault diagnosis.
A bilinear method for k-component value determination and a multiple excitation
method for parameter identification in node-fault diagnosis have been described. The
cutset-fault diagnosis method has also been discussed, which is more flexible and less
restrictive than branch- and node-fault diagnosis methods owing to the selectability
of trees in a circuit.
A class-fault diagnosis theory for fault location has been introduced, which comprises both algebraic and topological classification methods. The class-fault diagnosis method classifies branch-sets, node-sets or cutsets according to an equivalence relation. The faulty class can be uniquely identified by checking the consistency of any set
in a class. This method has no structural restriction and classification can be carried
out before test. Class-fault diagnosis can be viewed as a combination of the fault
dictionary and fault verification methods.


Linear methods and special considerations for fault diagnosis of non-linear circuits have been discussed. Faults in non-linear circuits are accurately modelled by
compensation current sources and the linear fault incremental circuit has been constructed. Linear equations for fault diagnosis of non-linear circuits can be derived
based on the fault incremental circuit. All k-fault diagnosis and class-fault diagnosis
methods developed for linear circuits have been extended to non-linear circuits using
the fault incremental circuit.
Some of the latest advances in fault diagnosis of analogue circuits have been reviewed,
including selection and design of test points and test signals. The next three chapters
will continue the discussion of fault diagnosis of analogue circuits with a detailed
coverage of three topical fault diagnosis methods: the symbolic function, neural
network and hierarchical methods in Chapters 2, 3 and 4, respectively.

1.7 References

1 Bandler, J.W., Salama, A.E.: Fault diagnosis of analog circuits, Proceedings of IEEE, 1985;73 (8):1279-325
2 Sun, Y.: Analog fault diagnosis theory and approaches, Journal of Dalian Maritime University, 1989;15 (4):67-75
3 Ozawa, T. (ed.): Analog Methods for Computer-Aided Circuit Analysis and Diagnosis (Marcel Dekker, Inc., New York, 1988)
4 Liu, R.W. (ed.): Testing and Diagnosis of Analog Circuits and Systems (Van Nostrand Reinhold, New York, USA, 1991)
5 Huertas, J.L.: Test and design for testability of analog and mixed-signal integrated circuits: theoretical basis and pragmatical approaches, in Dedieu, H. (ed.), Circuit Theory and Design '93, Selected Topics in Circuits and Systems (Elsevier, Amsterdam, Holland, 1993)
6 Temes, G.C.: Efficient methods of fault simulation, Proceedings of 20th IEEE Midwest Symposium on Circuits and Systems, 1977, pp. 191-4
7 Sun, Y.: Bilinear relations of networks and their applications, Proceedings of URSI International Symposium on Signals, Systems and Electronics, Erlangen, Germany, 1989, pp. 105-7
8 Sun, Y.: Bilinear transformations between network functions and parameters, Journal of China Institute of Communications, 1991;12 (5):76-80
9 Sun, Y.: Computation of large-change sensitivity of cascade networks, Proceedings of IEEE International Symposium on Circuits and Systems, New Orleans, USA, 1990, pp. 2771-3
10 Sun, Y.: Sensitivity analysis of cascade networks, Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and CAD, Zhejiang, 1988, pp. 214-20
11 Hochwald, W., Bastian, J.D.: A DC approach for analog fault dictionary determination, IEEE Transactions on Circuits and Systems, July 1979;26 (7):523-9


12 Lin, P.-M., Elcherif, Y.S.: Analog circuits fault dictionary - new approaches and implementation, International Journal of Circuit Theory and Applications, 1985;13 (2):149-72
13 Berkowitz, R.S.: Conditions for network-element-value solvability, IRE Transactions on Circuit Theory, 1962;6 (3):24-9
14 Navid, N., Willson, A.N. Jr.: A theory and algorithm for analog circuit fault diagnosis, IEEE Transactions on Circuits and Systems, 1979;26 (7):440-57
15 Trick, T.N., Mayeda, W., Sakla, A.A.: Calculation of parameter values from node voltage measurements, IEEE Transactions on Circuits and Systems, 1979;26 (7):466-73
16 Roytman, L.M., Swamy, M.N.S.: One method of the circuit diagnosis, Proceedings of IEEE, 1981;69 (5):661-2
17 Bandler, J.W., Biernacki, R.M., Salama, A.E., Starzyk, J.A.: Fault isolation in linear analog circuits using the L1 norm, Proceedings of IEEE International Symposium on Circuits and Systems, 1982, pp. 1140-3
18 Biernacki, R.M., Bandler, J.W.: Multiple-fault location in analog circuits, IEEE Transactions on Circuits and Systems, 1981;28 (5):361-6
19 Starzyk, J.A., Bandler, J.W.: Multiport approach to multiple fault location in analog circuits, IEEE Transactions on Circuits and Systems, 1983;30 (10):762-5
20 Trick, T.N., Li, Y.: A sensitivity based algorithm for fault isolation in analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, 1983, pp. 1098-1101
21 Sun, Y.: Bilinear relations for fault diagnosis of linear circuits, Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and CAD, Zhejiang, 1988
22 Sun, Y.: Determination of k-fault-element values and design of testability in analog circuits, Journal of Electronic Measurement and Instrument, 1988;2 (3):25-31
23 Sun, Y., He, Y.: Topological conditions, analysis and design for testability in analogue circuits, Journal of Hunan University, 2002;29 (1):85-92
24 Huang, Z.F., Lin, C., Liu, R.W.: Node-fault diagnosis and a design of testability, IEEE Transactions on Circuits and Systems, 1983;30 (5):257-65
25 Sun, Y.: Faulty-cutset diagnosis of analog circuits, Proceedings of CIE 3rd National Conference on CAD, Tianjin, 1988, pp. 3-14-3-18
26 Sun, Y.: Theory and algorithms of solving a class of linear algebraic equations, Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and CAD, Zhejiang, 1988
27 Togawa, Y., Matsumoto, T., Arai, H.: The TF-equivalence class approach to analog fault diagnosis problems, IEEE Transactions on Circuits and Systems, 1986;33 (10):992-1009
28 Sun, Y.: Class-fault diagnosis of analog circuits - theory and approaches, Journal of China Institute of Communications, 1990;11 (5):23-8
29 Sun, Y.: Faulty class identification of analog circuits, Proceedings of CIE 3rd National Conference on CAD, Tianjin, 1988, pp. 3-40-3-43


30 Sun, Y.: Investigation of diagnosis of the faulty branch-set class in analog circuits, Journal of China Institute of Communications, 1992;13 (3):66-71
31 Sun, Y., Fidler, J.K.: A topological method of class-fault diagnosis of analog circuits, Proceedings of IEEE Midwest Symposium on Circuits and Systems, Washington DC, USA, 1992, pp. 489-91
32 Sun, Y., Lin, Z.X.: Investigation of nonlinear circuit fault diagnosis, Proceedings of CIE National Conference on LSICAD, Huangshan, 1985
33 Sun, Y., Lin, Z.X.: Fault diagnosis of nonlinear circuits, Journal of Dalian Maritime University, 1986;12 (1):73-83
34 Sun, Y., Lin, Z.X.: Quasi-fault incremental circuit approach for nonlinear circuit fault diagnosis, Acta Electronica Sinica, 1987;15 (5):82-8
35 Sun, Y.: Fault location in large-scale nonlinear circuits, Journal of Dalian Maritime University, 1987;13 (4):101-9
36 Sun, Y.: A method of the diagnosis of faulty nodes in nonlinear circuits, Journal of China Institute of Communications, 1987;8 (5):92-6
37 Sun, Y.: Cutset-fault diagnosis of nonlinear circuits, Proceedings of China International Conference on Circuits and Systems, Nanjing, 1989, pp. 838-40
38 Sun, Y.: Faulty-cutset diagnosis in nonlinear circuits, Acta Electronica Sinica, 1990;18 (4):30-4
39 Sun, Y.: Bilinear relation and fault diagnosis of nonlinear circuits, Microelectronics and Computer, 1990;7 (6):32-5
40 Prasad, V.C., Babu, N.S.C.: Selection of test nodes for analog fault diagnosis in dictionary approach, IEEE Transactions on Instrumentation and Measurement, 2000;49 (6):1289-97
41 Starzyk, J.A., Liu, D., Liu, Z.H., Nelson, D.E., Rutkowski, J.O.: Entropy-based optimum test points selection for analog fault dictionary techniques, IEEE Transactions on Instrumentation and Measurement, 2004;53 (3):754-61
42 Slamani, M., Kaminska, B.: Fault observability analysis of analog circuits in frequency domain, IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1996;43 (2):134-9
43 Slamani, M., Kaminska, B.: Multi-frequency analysis of faults in analog circuits, IEEE Design and Test of Computers, 1995;12 (2):70-80
44 Variyam, P.N., Chatterjee, A.: Specification-driven test generation for analog circuits, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2000;19 (10):1189-201
45 Tan, Y., He, Y., Sun, Y., et al.: Minimizing ambiguity of faults and design of test stimuli in analogue circuit fault diagnosis, submitted for publication
46 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: On the application of symbolic techniques to the multiple fault location in low testability analog circuits, IEEE Transactions on Circuits and Systems, 1998;45 (10):1383-8
47 Fedi, G., Manetti, S., Piccirilli, M.C., Starzyk, J.: Determination of an optimum set of testable components in the fault diagnosis of analog linear circuits, IEEE Transactions on Circuits and Systems, 1999;46 (7):779-87


48 Starzyk, J.A., Pang, J., Manetti, S., Piccirilli, M.C., Fedi, G.: Finding ambiguity groups in low testability analog circuits, IEEE Transactions on Circuits and Systems, 2000;47 (8):1125-37
49 Stenbakken, G.N., Souders, T.M., Stewart, G.W.: Ambiguity groups and testability, IEEE Transactions on Instrumentation and Measurement, 1989;38 (5):941-7
50 Spina, R., Upadhyaya, S.: Linear circuit fault diagnosis using neuromorphic analyzers, IEEE Transactions on Circuits and Systems II, 1997;44 (3):188-96
51 Aminian, F., Aminian, M., Collins, H.W.: Analog fault diagnosis of actual circuits using neural networks, IEEE Transactions on Instrumentation and Measurement, 2002;51 (3):544-50
52 Aminian, M., Aminian, F.: Neural-network based analog circuit fault diagnosis using wavelet transform as preprocessor, IEEE Transactions on Circuits and Systems II, 2000;47 (2):151-6
53 He, Y., Ding, Y., Sun, Y.: Fault diagnosis of analog circuits with tolerances using artificial neural networks, Proceedings of IEEE APCCAS, Tianjin, China, 2000, pp. 292-5
54 He, Y., Tan, Y., Sun, Y.: A neural network approach for fault diagnosis of large-scale analog circuits, Proceedings of IEEE ISCAS, Arizona, USA, 2002, pp. 153-6
55 He, Y., Tan, Y., Sun, Y.: Wavelet neural network approach for fault diagnosis of analog circuits, IEE Proceedings - Circuits, Devices and Systems, 2004;151 (4):379-84
56 He, Y., Sun, Y.: A neural-based L1-norm optimization approach for fault diagnosis of nonlinear circuits with tolerances, IEE Proceedings - Circuits, Devices and Systems, 2001;148 (4):223-8
57 Wu, C.C., Nakazima, K., Wei, C.L., Saeks, R.: Analog fault diagnosis with failure bounds, IEEE Transactions on Circuits and Systems, 1982;29 (5):277-84
58 Salama, A.E., Starzyk, J.A., Bandler, J.W.: A unified decomposition approach for fault location in large scale analog circuits, IEEE Transactions on Circuits and Systems, 1984;31 (7):609-22
59 Sun, Y.: Fault diagnosis of large-scale linear networks, Journal of Dalian Maritime University, 1985;11 (3); also Proceedings of CIE National Conference on LSICAD, Huangshan, 1985, pp. 95-101
60 Ho, C.K., Shepherd, P.R., Eberhardt, F., Tenten, W.: Hierarchical fault diagnosis of analog integrated circuits, IEEE Transactions on Circuits and Systems, 2001;48 (8):921-9
61 Sheu, H.T., Chang, Y.H.: Robust fault diagnosis for large-scale analog circuits with measurement noises, IEEE Transactions on Circuits and Systems, 1997;44 (3):198-209
62 Sun, Y.: Some theorems on the shift of nonideal sources and circuit equivalence, Electronic Science and Technology, 1987;17 (5):18-20

Chapter 2

Symbolic function approaches for analogue fault diagnosis

Stefano Manetti and Maria Cristina Piccirilli

2.1 Introduction

In analogue circuits or in the analogue part of mixed digital-analogue systems, fault diagnosis is a very complex task due to the lack of simple fault models and the presence of component tolerances and circuit non-linearities. For these reasons, the automation level of fault diagnosis procedures in the analogue field has not yet achieved the development level reached in the digital field, in which well-consolidated techniques for automated test and fault diagnosis are commonly used.
Analogue fault diagnosis procedures are usually classified into two categories: the simulation-after-test (SAT) approach and the simulation-before-test (SBT) approach. The SAT techniques need more computational time than the SBT techniques, which are based on fault dictionaries generated offline. Usually, SBT is suitable for single catastrophic fault location because of the very large dictionary size required in multiple soft-fault situations. The techniques presented in this chapter are of the SAT type and are devoted to parametric (soft) faults; that is, it is assumed that all faults are expressed as parameter variations that do not influence the circuit topology. In this class of problems, the symbolic approach can be very useful for developing efficient testing methodologies and design-for-testability tools.
The parametric fault diagnosis can be viewed as a problem in which, given a circuit structure and some input-output (I/O) relations, we want to obtain the component values (Figure 2.1). Then, from a theoretical point of view, we can consider soft-fault diagnosis as a parameter identification problem. In practice, however, in a fault diagnosis problem it is not necessary to obtain precise values of all the components. Usually it is sufficient to determine which components are out of a predefined tolerance band, that is, which components are faulty. In this kind of problem, the symbolic approach is a natural choice, because an I/O relation in which the component values are the unknowns is properly represented by a symbolic I/O relation.



Figure 2.1  The parametric fault diagnosis problem: from the circuit structure and the I/O relations, determine the component values (e.g., Ri = 10 kΩ, Cj = 5 pF, Rk = 15 kΩ, Lm = 3.3 mH, gn = 3)

The chapter is organized as follows. In Section 2.2, a brief review of symbolic analysis is reported. Section 2.3 is dedicated to symbolic procedures for testability analysis, that is, testability evaluation and ambiguity group determination. As will be shown, the testability and ambiguity group concepts are of fundamental importance for determining the solvability degree of the fault diagnosis problem at the global level and at the component level, respectively, once the test points have been selected. So, testability analysis is essential both to the designer, who must know which test points to make accessible, and to the test engineer, who must know how many and which parameters can be uniquely isolated by the planned tests.
In Section 2.4, fault diagnosis procedures based on the use of symbolic techniques are reported.
Both Sections 2.3 and 2.4 refer to analogue linear or linearized circuits. This is not so big a restriction, because the analogue part of modern complex systems is almost all linear, while the non-linear functions are moved toward the digital part [1]. However, in Section 2.5 a brief description of a possible use of symbolic methods for testability analysis and fault diagnosis of non-linear analogue circuits is reported.

2.2 Symbolic analysis

Circuit symbolic analysis is a technique able to produce a characteristic of a circuit, usually a network function, with the independent variable (time or frequency), the dependent variables (voltages and currents) and some or all of the circuit elements represented in symbolic form. A symbolic analyser is, then, a computer program that receives the circuit description as input and can automatically carry out the symbolic analysis and thus generate the symbolic expression for the desired circuit characteristic.
In past years, symbolic analysis of linear and linearized analogue circuits has gained growing interest in the electronic design community [2-7]. Symbolic analysis, indeed, can improve insight into the behaviour of analogue circuits. A symbolic analyser can be used to automatically generate interpretable analytic expressions for the a.c. behaviour of analogue circuits as a function of device parameters. The expressions generated by a symbolic simulator can serve as a description of an analogue circuit and can be used for interactive and/or automatic sizing and optimization of the circuit performance. Symbolic analysis has been successfully applied in several other circuit application domains, such as behavioural modelling, statistical analysis, testability analysis and fault diagnosis. In this chapter, applications relevant to testability analysis and fault diagnosis of analogue circuits will be considered.
Almost all the symbolic analysis programs realized until now concern the analysis of lumped, linear (or linearized), time-invariant circuits in the frequency domain. The obtained symbolic network functions are rational functions of the complex frequency s and of the circuit elements p_j, which are represented by symbols:

$$F(s) = \frac{N(s, p_1, \ldots, p_m)}{D(s, p_1, \ldots, p_m)} = \frac{\sum_i a_i(p_1, \ldots, p_m)\, s^i}{\sum_i b_i(p_1, \ldots, p_m)\, s^i} \qquad (2.1)$$

The coefficients a_i(·) and b_i(·) are symbolic polynomial functions of the circuit elements p_j.
In a fully symbolic analysis all the circuit elements are represented by symbols, while in a mixed symbolic-numerical analysis only part of the elements are represented by symbols and the others by their numerical values. If, finally, no symbolic circuit elements are requested, a rational function with numerical coefficients is obtained and the only symbol is the frequency s.
Circuit symbolic analysis is complementary to numerical analysis, where the variables and the circuit elements are represented by numbers, and to qualitative analysis, where qualitative values (increase or decrease) are used for voltages and currents.
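As a small illustration (using sympy, not one of the symbolic analysers discussed in this chapter), the fragment below produces a network function in the fully symbolic and in the mixed symbolic-numerical form of Equation (2.1), for a simple RC low-pass voltage divider.

```python
# Fully symbolic vs. mixed symbolic-numerical network function of an
# RC low-pass voltage divider, F(s) = Zc / (R + Zc) with Zc = 1/(sC).
import sympy as sp

s, R, C = sp.symbols("s R C", positive=True)

F_full = sp.simplify((1 / (s * C)) / (R + 1 / (s * C)))
print(F_full)                        # 1/(C*R*s + 1): all elements symbolic

# Mixed analysis: C fixed to a numerical value (1 nF), R kept as a symbol.
F_mixed = F_full.subs(C, sp.Rational(1, 10**9))
print(sp.simplify(F_mixed))
```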

2.2.1 Symbolic analysis techniques

During the past 30 years, several algorithms and computer programs for circuit symbolic analysis have been introduced. These symbolic techniques can be classified, according to the basic method used, as follows:
1. Algebraic methods:
   - numerical interpolation methods
   - parameter extraction methods
   - determinant expansion methods.
2. Topological methods:
   - tree enumeration methods:
     - two-graph method
     - directed-tree enumeration method
   - flowgraph methods:
     - signal-flow-graph method
     - Coates-flow-graph method.
The algebraic methods are based on the idea of generating the symbolic circuit equations using symbolic manipulation of algebraic expressions, directly solving the linear system that describes the circuit behaviour, obtained, for example, using the modified nodal analysis (MNA) technique. Several computer programs have been realized in the past along these lines. Interesting results have been obtained, in particular, using determinant expansion methods.
The topological methods are based, essentially, on the enumeration of some subgraphs of the circuit graph. Among these methods, the two-graph method is particularly efficient. The efficiency of the method is mainly owing to the fact that it intrinsically does not generate cancelling terms. In fact, the presence of cancelling terms can produce a severe overhead in computational time, due to the post-processing needed to eliminate the cancellations.
The basic two-graph method works only on circuits that contain resistors, capacitors, inductors and voltage-controlled current sources, but it is possible to include all the other circuit elements using simple preliminary network transformations.

2.2.2 The SAPWIN program

A personal computer program based on the use of the two-graph method is SAPWIN, developed in recent years by the authors.
SAPWIN is an integrated package of schematic capture, symbolic analysis and graphic post-processing for linear analogue circuits. The program provides several tools to create the schematic of a linear analogue circuit, to perform its symbolic analysis and to show the results in graphic form. In the schematic capture option, the main screen is a white sheet where the user can draw a circuit by using typical Windows tools to copy, cut, paste, move and edit a component or a part of the circuit. All the passive components, controlled sources and many linear models of active devices (operational amplifiers and small-signal equivalent models of BJTs and MOSFETs)


are available. The program can produce symbolic network functions where each component can appear with its symbolic name or with a numerical value. The graphical post-processor is able to show the network function and to plot gain, phase, delay, pole and zero positions, and time-domain step and impulse responses. The program can be freely downloaded at the address http://cirlab.det.unifi.it/SapWin.
The symbolic expressions generated by SAPWIN are also saved, in a particular format, in a binary file, which can constitute an interface to other programs. During the past years, several applications have been developed using SAPWIN as a symbolic simulation engine, such as symbolic sensitivity analysis, transient analysis of power electronic circuits, testability evaluation and circuit fault diagnosis. All the programs presented in this chapter are based on the use of SAPWIN.

2.3 Testability and ambiguity groups

In general, a method for locating a fault in an analogue circuit consists in measuring all its internal parameters and comparing the measured values with their nominal working ranges. This kind of measurement, as can be imagined, is not straightforward and often it is not possible to characterize all the parameters. The possibility of actually accessing this information depends on the kind of measurements made on the circuit, as well as on the internal topology of the circuit itself. The selection of the set of measurements, that is, of the test points, is therefore an essential problem in fault diagnosis applications, because not all the possible test points can be reached in an easy way. For example, it is usually very difficult to measure currents without breaking connections and, for complex circuits, a great number of measurements may not be economically convenient. In other words, test point selection must take into account practical measurement problems that are strictly tied to the technology used and to the application field of the circuit under consideration. So, in order to perform test point selection, it is necessary to have a quantitative index with which to compare the different possible choices. The testability measure concept meets this requirement.
Testability is strictly tied to the concept of network-element-value solvability, which was first introduced by Berkowitz [8]. Subsequently, a very useful testability measure was introduced by Saeks and co-workers [9-12]. Other definitions have been presented in subsequent years (see, for example, References 13-15); thus, there is no universal definition of analogue testability. However, the Saeks definition has been the most widely used [16-19], because it provides a well-defined quantitative measure of testability. In fact, once a set of test points has been selected, by representing the circuit under test (CUT) through a set of equations non-linear with respect to the component parameters, the testability definition gives a measure of the solvability of these equations and indicates the ambiguity resulting from an attempt to solve such equations in a neighbourhood of almost any failure. Therefore, this testability measure allows one to know a priori whether a unique solution of the fault diagnosis problem exists. Furthermore, if this solution does not exist, it gives a quantitative measure of how far we are from it, that is, how many components cannot be diagnosed with the given test point set.


When testability is low, an important concept is that of ambiguity groups. An ambiguity group is, essentially, a group of components within which, in case of fault, it is not possible to uniquely identify the faulty one. A canonical ambiguity group is a minimal ambiguity group, that is, a group that does not contain ambiguity groups of lower order within it. The canonical ambiguity groups give information about the solvability of the fault diagnosis problem with respect to each component in the case of a bounded number of faults (k-fault hypothesis) [20].
Summarizing, once a set of test points has been selected, independently of the method effectively used in the fault location phase, the testability measure gives a theoretical and rigorous upper limit to the degree of solvability of the fault diagnosis problem at a global level, while the ambiguity group determination gives the solvability degree at a component level. If these important concepts are not taken into account properly, the quality of the obtained results is severely limited [21].
Algorithms for evaluating the testability measure as previously defined have been developed by the authors in past years, using a numerical approach [22, 23]. These algorithms were utilized for the implementation of programs for analogue network testability calculation. However, these methods were suitable only for networks of moderate size, because of the inevitable round-off errors introduced by numerical algorithms, which render the obtained testability only an estimate. This limitation has been overcome with the introduction of the symbolic approach [24-29] through an efficient manipulation of algebraic expressions [30-32]. Using testability evaluation algorithms, it is not difficult to realize procedures for canonical ambiguity group determination [28, 29, 33].

2.3.1 Algorithms for testability evaluation

The analogue CUT can be considered as a multiple-input multiple-output linear time-invariant system. Using the MNA, the circuit can be described by the following equation:

$$A(p, s) \begin{bmatrix} y(p, s) \\ E(p, s) \end{bmatrix} = \begin{bmatrix} x(s) \\ 0 \end{bmatrix} \qquad (2.2)$$

where $p = [p_1\ p_2\ \ldots\ p_m]^t$ is the vector of the potentially faulty parameters, assuming that all the faults are expressed as parameter variations that do not influence the circuit topology (faults such as shorts and opens are not considered; that is, the approach is suitable for parametric faults and not for catastrophic faults), $x(s) = [x_1(s)\ x_2(s)\ \ldots\ x_{n_x}(s)]^t$ is the input vector, A(p, s) is the characteristic matrix, conformable to the vectors, $y(p, s) = [y_1(p, s)\ y_2(p, s)\ \ldots\ y_{n_y}(p, s)]^t$ is the vector of the output test points (voltages and/or currents) and $E(p, s) = [E_1(p, s)\ E_2(p, s)\ \ldots\ E_{n_e}(p, s)]^t$ is the vector of the inaccessible node voltages and/or the currents of all the elements that do not have an admittance representation.
The fault diagnosis equations of the CUT are constituted by the network functions relevant to each test point output and to each input. They can be obtained from Equation (2.2) by applying the superposition principle and have the following form:

$$h_i^{(j)}(p, s) = \frac{y_i^{(j)}(p, s)}{x_j(s)} = (-1)^{i+j}\,\frac{\det A_{ij}(p, s)}{\det A(p, s)}, \qquad i = 1, \ldots, n_y, \quad j = 1, \ldots, n_x \qquad (2.3)$$

with $A_{ij}(p, s)$ the minor of the matrix A(p, s) and $y_i^{(j)}(p, s)$ the ith output due to the contribution of input $x_j$ only. As can easily be noted, the total number of fault diagnosis equations is equal to the product of the number of outputs and the number of inputs.
Let $\Lambda(s) = (\lambda_{rk}(s))$ be the Jacobian matrix associated with the algebraic diagnosis Equations (2.3), evaluated at a generic frequency s and at a nominal value $p_0$ of the parameters. From Equation (2.3) we obtain for $\lambda_{rk}(s)$:

$$\lambda_{rk}(s) = (-1)^{i+j}\,\frac{\partial}{\partial p_k}\left.\left[\frac{\det A_{ij}(p, s)}{\det A(p, s)}\right]\right|_{p = p_0} \qquad (2.4)$$

where $r = (i - 1)\,n_x + j$. The matrix $\Lambda(s)$ is rational in s and, from Equation (2.4), we get that the functions $(\det A(p, s))^2\big|_{p = p_0}\,\lambda_{rk}(s)$ are polynomial functions in s. As shown in References 9-12, the testability measure T of the analogue system, evaluated in a suitable neighbourhood of the nominal value $p_0$, is given by the maximum number of linearly independent columns of $\Lambda(s)$:

$$T = \operatorname{rank}_{\mathrm{col}}(\Lambda(s)) \qquad (2.5)$$

2.3.1.1 Numerical approach
A numerical algorithm for the evaluation of the previously defined testability measure, that is, for the evaluation of the maximum number of linearly independent columns of the matrix $\Lambda$, can be based on the following considerations [22, 23].
The matrix $\Lambda$ can be expressed in the form:

$$\Lambda(s) = \frac{1}{(\det A(p_0, s))^2}\, P(s) \qquad (2.6)$$

where P(s) is a polynomial matrix in which the (r, k)th entry is a polynomial in s of degree $d_{rk}$. An upper estimate d for the degree of such a polynomial can easily be carried out on the basis of the type of components present in the CUT. Indeed, such a degree cannot be larger than twice the number of frequency-dependent components that appear in the network under consideration [23].
The testability measure T, that is, the number of linearly independent columns of the Jacobian matrix, coincides with the number of linearly independent columns of the polynomial matrix P(s). In fact, by representing the polynomials of P(s) as linear combinations of suitable orthogonal polynomials, in Reference 23 it was proved that T = rank(C), where C is a matrix composed of the coefficients obtained by expanding the polynomials of P(s) into a series of orthogonal polynomials.


This method provides a valid means for the numerical computation of testability. However, the numerical programs obtained in this way have a very high computational complexity. First, the calculation of the coefficients of the polynomials $p_{rk}(s)$ requires knowledge of the values assumed by the polynomials in at least d + 1 points, where d is the degree of the polynomial; this degree must be estimated a priori on the basis of the type of components present in the CUT. Therefore, for large circuits, the numerical calculation of a considerable number of circuit sensitivities is required. Furthermore, the program must take into account the inevitable round-off errors introduced by the algorithm used for sensitivity computation. This problem was partially overcome by using two different polynomial expansions (see, for example, Reference 23). Nevertheless, for large circuits these errors could have a magnitude so large that the obtained testability values must be considered only as an estimate of the true testability.
2.3.1.2 Symbolic approach
The drawbacks of the previous numerical approach are overcome if we are able to
determine the polynomial matrix directly in a completely symbolic form. In fact, it
has been proven [24] that the number of linearly independent columns of the matrix
P(s) is equal to the rank of a matrix B constituted by the coefficients of the polynomial
functions of P(s). Then, the entries of the matrix B are independent with respect to
the complex frequency s. In other words, by expressing P(s) in the following way:

$$P(s) = \begin{bmatrix} p_{11}(s) & p_{12}(s) & \cdots & p_{1m}(s) \\ \vdots & \vdots & & \vdots \\ p_{l1}(s) & p_{l2}(s) & \cdots & p_{lm}(s) \end{bmatrix} \qquad (2.7)$$

with $p_{rk}(s) = b_{rk}^0 + b_{rk}^1 s + \cdots + b_{rk}^d s^d$ and $d = \max\{\deg p_{11}, \deg p_{12}, \ldots, \deg p_{lm}\}$, we have $\operatorname{rank}_{\mathrm{col}} P(s) = \operatorname{rank} B$, where B is a matrix of order $(d+1)l \times m$ ($l = n_x n_y$) of the following form:

$$B = \begin{bmatrix}
b_{11}^0 & b_{12}^0 & \cdots & b_{1m}^0 \\
\vdots & \vdots & & \vdots \\
b_{l1}^0 & b_{l2}^0 & \cdots & b_{lm}^0 \\
b_{11}^1 & b_{12}^1 & \cdots & b_{1m}^1 \\
\vdots & \vdots & & \vdots \\
b_{l1}^1 & b_{l2}^1 & \cdots & b_{lm}^1 \\
\vdots & \vdots & & \vdots \\
b_{11}^d & b_{12}^d & \cdots & b_{1m}^d \\
\vdots & \vdots & & \vdots \\
b_{l1}^d & b_{l2}^d & \cdots & b_{lm}^d
\end{bmatrix} \qquad (2.8)$$

whose entries are the coefficients of the polynomials $p_{rk}(s)$.
Thus, in order to determine the testability measure T of the circuit under consideration, it is sufficient to evaluate the rank of the numerical matrix B, composed of the polynomial coefficients $b_{rk}^i$.
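A toy illustration of this result is sketched below with sympy: the column rank of a small polynomial matrix P(s) (invented for the example) equals the rank of the constant matrix B that stacks the coefficients of its entries as in Equation (2.8).

```python
# rank_col P(s) = rank B, demonstrated on an invented 2x3 polynomial matrix
# whose third column is the sum of the first two, so the column rank is 2.
import sympy as sp

s = sp.symbols("s")
P = sp.Matrix([[1 + s, 2*s,   1 + 3*s],
               [s**2,  1 - s, 1 - s + s**2]])

d = 2                                        # max degree of the entries
# One block of rows per power of s, as in Equation (2.8).
B = sp.Matrix([[sp.Poly(P[r, c], s).coeff_monomial(s**k)
                for c in range(P.cols)]
               for k in range(d + 1)
               for r in range(P.rows)])
print(B.rank())                              # -> 2
```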


In a numerical computation of testability, the previous result is not easily applicable, because the computation of the coefficients $b_{rk}^i$ by means of classical numerical analysis algorithms is very difficult and may cause considerable drawbacks, particularly for large networks. The result is very useful if the coefficients are in completely symbolic form. In fact, in this case, they are functions of the circuit parameters, to which we can assign arbitrary values, because testability is independent of component values [10]. Furthermore, since the matrix B is, essentially, a sensitivity matrix of the CUT, starting from a fully symbolic generation of the network functions corresponding to the selected fault diagnosis equations, it is very easy to obtain symbolic sensitivity functions [30-32]. As a consequence, the use of a symbolic approach simplifies the testability measure procedure and reduces round-off errors, because the entries of B are not affected by any computational error.
An important simplification of this procedure, reported in Reference 25, results from the fact that the testability measure can be evaluated as the rank of a matrix $B_C$, constituted by the derivatives of the coefficients of the fault diagnosis equations with respect to the potentially faulty circuit parameters.
For the sake of simplicity, let us consider a circuit with only one test point and only one input. In this case there is only one fault diagnosis equation, which must be expressed in the following form:

$$h(s, p) = \frac{N(p, s)}{D(p, s)} = \frac{\sum_{i=0}^{n} a_i(p)\, s^i}{\sum_{j=0}^{m-1} b_j(p)\, s^j + s^m} \qquad (2.9)$$

with $p = [p_1\ p_2\ \ldots\ p_R]^t$ the vector of potentially faulty parameters, and n and m the degrees of the numerator and denominator, respectively. The matrix $B_C$, of order $(m + n + 1) \times R$, constituted by the derivatives of the coefficients of h(s, p) in Equation (2.9) with respect to the R unknown parameters, is the following:

$$B_C = \begin{bmatrix}
\dfrac{\partial a_0}{\partial p_1} & \dfrac{\partial a_0}{\partial p_2} & \cdots & \dfrac{\partial a_0}{\partial p_R} \\
\dfrac{\partial a_1}{\partial p_1} & \dfrac{\partial a_1}{\partial p_2} & \cdots & \dfrac{\partial a_1}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial a_n}{\partial p_1} & \dfrac{\partial a_n}{\partial p_2} & \cdots & \dfrac{\partial a_n}{\partial p_R} \\
\dfrac{\partial b_0}{\partial p_1} & \dfrac{\partial b_0}{\partial p_2} & \cdots & \dfrac{\partial b_0}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial b_{m-1}}{\partial p_1} & \dfrac{\partial b_{m-1}}{\partial p_2} & \cdots & \dfrac{\partial b_{m-1}}{\partial p_R}
\end{bmatrix} \qquad (2.10)$$

As shown in Reference 25, the matrix $B_C$ has the same rank as the previously defined matrix B, because the rows of B are linear combinations of the rows of $B_C$. Then the testability value can be computed as the rank of $B_C$ by assigning arbitrary values to the parameters $p_i$ and applying classical triangularization methods. If the CUT is a multiple-input multiple-output system, that is, if there is more than one fault diagnosis equation, the same result can easily be obtained. This is a noteworthy simplification from a computational point of view, because derivatives of the coefficients of the fault diagnosis equations are simpler to compute than derivatives of the fault diagnosis equations themselves.
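A minimal sketch of this computation is given below, assuming an invented one-pole RC circuit whose monic transfer function is h = (G/C)/(s + G/C) with G = 1/R: the coefficient derivatives form $B_C$, and the rank is evaluated at arbitrary parameter values.

```python
# Build B_C from the derivatives of the monic transfer-function coefficients
# and evaluate its rank; the RC example is an assumption for illustration.
import sympy as sp

s, G, C = sp.symbols("s G C", positive=True)
params = [G, C]

# One-pole RC low-pass in the monic form of Equation (2.9): h = a0/(s + b0)
a0 = G / C
b0 = G / C
coeffs = [a0, b0]

BC = sp.Matrix([[sp.diff(c, p) for p in params] for c in coeffs])
# Testability is independent of component values, so arbitrary ones suffice.
T = BC.subs({G: 2, C: 3}).rank()
print("testability T =", T)   # -> 1: only the ratio G/C is identifiable,
                              # so {G, C} behaves as an ambiguity group
```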
The described procedure has been implemented in the program SYmbolic FAult Diagnosis (SYFAD) [33, 34], based on the software package SAPWIN [35-37].
It should be noted that, from this procedure, it is possible to derive some necessary conditions for a testable circuit (that is, a circuit with maximum testability) which are very simple to apply. These necessary conditions are based on the consideration that, for maximum testability, the matrix $B_C$ must have rank equal to the number of unknown parameters, that is, equal to the number of columns. Then, for a circuit with a given set of test points, we have the following first necessary condition:
A necessary condition for maximum testability is that the number of coefficients in the fault diagnosis equations be equal to or greater than the number of unknown parameters.

Another interesting necessary condition follows from the consideration that the number of coefficients depends on the order of the network. In fact, the maximum number of coefficients of a network function is 2N + 1 if the network is of order N. From this consideration and from the previous necessary condition, it is possible to determine the minimum number of fault diagnosis equations, and hence of test points, necessary for maximum testability or, given the number of test points, the maximum number of unknown parameters for maximum testability. For the single test point case, we have M_p = 2N + 1, where M_p is the maximum number of unknown parameters, that is, the maximum number of parameters that it is possible to determine with the given fault diagnosis equation. For the multiple test point case, since all the fault diagnosis equations are characterized by the same denominator, we have M_p = N + n(N + 1), where n is the number of fault diagnosis equations. For example, a fourth-order circuit (N = 4) observed at two test points (n = 2) allows at most M_p = 4 + 2 × 5 = 14 potentially faulty parameters. In summary, we have the following second necessary condition:
For a circuit of order N, with n test points, a necessary condition for maximum testability is that the number of potentially faulty parameters be equal to or lower than N + n(N + 1).

Obviously, in a real application, several other practical considerations must be taken into account in the choice of test points. For example, depending on the technology used for the realization of the circuit, current test points could be very difficult to use. Voltage test points are usually preferred. On the other hand, it is easy to prove that, using only voltage test points, that is, using only fault diagnosis equations represented as voltage transfer functions, the testability can never be maximum if all the circuit components are considered potentially faulty. In fact, such network functions are invariant with respect to a component value amplitude scaling. In these cases, the maximum achievable testability is the number of circuit components minus one.
It is worth pointing out that, in the procedure of testability evaluation, the matrix $B_C$ is determined starting from rational functions whose denominator has a unitary coefficient in the highest-order term. If this coefficient is different from one, it is necessary to divide all the coefficients of the rational functions by the coefficient of the highest-order term of the denominator, with a consequent complication in the evaluation of the derivatives (derivative of a rational function instead of a polynomial function). In this case an increase in computing speed can be obtained by applying the approach presented in References 26 and 27, where the testability evaluation is performed starting from fault diagnosis equations with the coefficient of the highest-order term of the denominator different from one.

2.3.2 Ambiguity groups

At this point it is important to discuss the importance of the testability concept and to understand the information given by canonical ambiguity group determination. As was previously mentioned, once the matrix $B_C$ has been determined, testability evaluation can be performed by assigning arbitrary values to the component parameters and triangularizing $B_C$. The disadvantage of considering the matrix $B_C$ as testability matrix instead of the Jacobian matrix is that the meaning of testability as a solvability measure of the fault diagnosis equations is less immediate. However, this limitation can be overcome by splitting the solution of the fault diagnosis equations into two phases. In the first phase, starting from the measurements carried out on the selected test points at different frequencies, the coefficients of the fault diagnosis equations are evaluated, possibly exploiting a least-squares procedure in order to minimize the error due to measurement inaccuracy [34]. In the second phase, the circuit parameter values are obtained by solving the non-linear system constituted by the equations expressing the previously determined coefficients as functions of the circuit parameters. In this way, by considering K fault diagnosis equations expressed as follows:

$$h_l(p, s) = \frac{N_l(p, s)}{D(p, s)} = \frac{\sum_{i=0}^{n_l} \left(a_i^{(l)}(p)/b_m(p)\right) s^i}{s^m + \sum_{j=0}^{m-1} \left(b_j(p)/b_m(p)\right) s^j}, \qquad l = 1, \ldots, K \qquad (2.11)$$

the following non-linear system has to be solved:

$$\begin{gathered}
\frac{a_0^{(1)}(p)}{b_m(p)} = A_0^{(1)}, \quad \ldots, \quad \frac{a_{n_1}^{(1)}(p)}{b_m(p)} = A_{n_1}^{(1)} \\
\vdots \\
\frac{a_0^{(K)}(p)}{b_m(p)} = A_0^{(K)}, \quad \ldots, \quad \frac{a_{n_K}^{(K)}(p)}{b_m(p)} = A_{n_K}^{(K)} \\
\frac{b_0(p)}{b_m(p)} = B_0, \quad \ldots, \quad \frac{b_{m-1}(p)}{b_m(p)} = B_{m-1}
\end{gathered} \qquad (2.12)$$

where $A_i^{(l)}$ and $B_j$ ($i = 0, \ldots, n_l$; $j = 0, \ldots, m-1$) are the coefficients of the fault diagnosis equations in expression (2.11), which have been calculated in the previous phase. The Jacobian matrix of this system coincides with the matrix $B_C$,


reported in Equation (2.13) for the case of K fault diagnosis equations and $b_m$ different from one:

$$B_C = \begin{bmatrix}
\dfrac{\partial (a_0^{(1)}/b_m)}{\partial p_1} & \dfrac{\partial (a_0^{(1)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_0^{(1)}/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (a_{n_1}^{(1)}/b_m)}{\partial p_1} & \dfrac{\partial (a_{n_1}^{(1)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_{n_1}^{(1)}/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (a_0^{(K)}/b_m)}{\partial p_1} & \dfrac{\partial (a_0^{(K)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_0^{(K)}/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (a_{n_K}^{(K)}/b_m)}{\partial p_1} & \dfrac{\partial (a_{n_K}^{(K)}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (a_{n_K}^{(K)}/b_m)}{\partial p_R} \\
\dfrac{\partial (b_0/b_m)}{\partial p_1} & \dfrac{\partial (b_0/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (b_0/b_m)}{\partial p_R} \\
\vdots & \vdots & & \vdots \\
\dfrac{\partial (b_{m-1}/b_m)}{\partial p_1} & \dfrac{\partial (b_{m-1}/b_m)}{\partial p_2} & \cdots & \dfrac{\partial (b_{m-1}/b_m)}{\partial p_R}
\end{bmatrix} \qquad (2.13)$$
Hence, all the information provided by a Jacobian matrix with respect to its
corresponding non-linear system can be obtained from the matrix BC .
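The first phase described above can be sketched as follows. The one-pole form h(s) = a0/(s + b0) and the synthetic noise-free measurements are assumptions for illustration, not the procedure of Reference 34; the linearized fit a0 - H·b0 = jω·H is solved by ordinary least squares.

```python
# Fitting the coefficients of a fault diagnosis equation to multi-frequency
# measurements by linear least squares (phase 1 of the two-phase scheme).
import numpy as np

a0_true, b0_true = 5.0, 2.0
w = np.linspace(0.1, 10.0, 20)                  # test frequencies (rad/s)
H = a0_true / (1j * w + b0_true)                # measured responses

# Linearise H*(jw + b0) = a0  ->  a0 - H*b0 = jw*H, unknowns x = [a0, b0].
A = np.column_stack([np.ones_like(H), -H])
y = 1j * w * H
# Stack real and imaginary parts so the unknowns stay real.
A_ri = np.vstack([A.real, A.imag])
y_ri = np.concatenate([y.real, y.imag])
x, *_ = np.linalg.lstsq(A_ri, y_ri, rcond=None)
print("estimated a0, b0:", x)                   # ~ [5.0, 2.0]
```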
Summarizing, independent of the fault location method used, the testability value T = rank $B_C$ gives information on the solvability degree of the problem, as explained by the following:
- If T is equal to the number of unknown elements, the parameter values can theoretically be uniquely determined starting from a set of measurements carried out on the test points.
- If T is lower than the number R of unknown parameters, a locally unique solution can be determined only if R - T components are considered not faulty.
Generally T is not maximum and the hypothesis of a bounded number k of faulty elements is made (k-fault hypothesis), where k ≤ T. Then, important information is given by the testability value: the solvability degree of the fault diagnosis problem and, consequently, the maximum possible fault hypothesis k.
In the case of low testability and the k-fault hypothesis, at most a number of faults equal to the testability value can be considered. However, under this hypothesis, whatever fault location method is used, it is necessary to be able to select as potentially faulty parameters a set of elements that represents, as well as possible, all the circuit components. To this end, the determination of both the canonical ambiguity groups and the surely testable group is of fundamental importance. In order to understand this statement better, some definitions and a theorem [20] are now reported.
The matrix $B_C$ does not only give information about the global solvability degree of the fault diagnosis problem. In fact, by noticing that each column is relevant to a specific parameter of the circuit and by considering the linearly dependent columns of $B_C$, other information can be obtained. For example, if a column is linearly dependent with respect to another one, this means that a variation of the corresponding parameter


provides a variation on the fault equation coefficients indistinguishable with respect


to that produced by the variation of the parameter corresponding to the other column.
This means that the two parameters are not testable and they constitute an ambiguity
group of the second order. By extending this reasoning to groups of linearly dependent
columns of BC , ambiguity groups of higher order can be found. Then, the following
definitions can be formulated [20]:
Definition 2.1 A set of j parameters constitutes an ambiguity group of order j if the
corresponding j columns of the testability matrix BC are linearly dependent.
Definition 2.2 A set of k parameters constitutes a canonical ambiguity group of
order k if the corresponding k columns of the testability matrix BC are linearly dependent and every subset of this group of columns is constituted by linearly independent
columns.
Definition 2.3 A set of m parameters constitutes a global ambiguity group of order
m if it is obtained by unifying canonical ambiguity groups having at least an element
in common.
Definition 2.4 A set of n parameters, whose corresponding columns of the testability
matrix BC do not belong to any ambiguity group, constitutes a surely testable group
of order n.
Obviously the number of surely testable parameters cannot be greater than the
testability value, that is, the rank of the matrix BC .
In Reference 20, two important theorems have been demonstrated, which can be consolidated into a single theorem by considering that a canonical ambiguity group having null intersection with all the other canonical ambiguity groups can itself be considered as a global ambiguity group.
Theorem 2.1 A circuit is k-fault testable if all the global ambiguity groups have
been obtained by unifying canonical ambiguity groups of order at least equal
to (k + 2).
At this point we define an optimum set of testable components as follows:
Definition 2.5 A group of components constitutes an optimum set of testable
components if it represents all the circuit components and if it gives a unique solution
for the fault diagnosis equations under the k-fault hypothesis.
The procedure for selecting the optimum set of testable components consists of the following steps:
1. Evaluation of the circuit testability T.
2. Determination of all the canonical ambiguity groups.
3. Determination of all the global ambiguity groups.
4. Determination of the surely testable group.
5. k-fault hypothesis, with k ≤ k_a - 2, where k_a is the order of the smallest canonical ambiguity group.
6. Selection of components belonging to the surely testable group.
7. For each global ambiguity group, selection of at most k_i - 2 components as representatives of the corresponding global ambiguity group, with k_i the minimum order of the canonical ambiguity groups constituting the ith global ambiguity group.
With this kind of selection, each element belonging to the surely testable group is representative of itself, while the elements selected for each global ambiguity group are representative of all the elements of the corresponding global ambiguity group. When the number k of possible simultaneous faults is chosen a priori and an optimum set of testable components does not exist (or when the optimum set does not exist for whatever value of k, as in the case of the presence of canonical ambiguity groups of the second order), only one component has to be selected as representative of the global ambiguity groups obtained by unifying canonical ambiguity groups of order less than or equal to (k + 1), while for the surely testable group and for the other global ambiguity groups steps 6 and 7 of the procedure have to be applied. If a unique solution does not exist, by proceeding in this way we are able to choose a set of components which represents all the circuit elements as well as possible and, in the fault location phase, it will then be possible to confine the presence of faults to well-defined groups of components belonging to global ambiguity groups.
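Steps 3 and 7 of the procedure can be sketched as follows; the canonical groups below are hypothetical, not those of the text's example. Note that a second-order group yields no representative under step 7, in which case the special single-representative rule described above applies.

```python
# Merge canonical ambiguity groups that share elements into global ambiguity
# groups, then take at most (k_i - 2) representatives from each, where k_i is
# the order of the smallest canonical group it contains.
def global_groups(canonical):
    merged = [set(g) for g in canonical]
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if merged[i] & merged[j]:      # common element -> merge
                    merged[i] |= merged.pop(j)
                    changed = True
                    break
            if changed:
                break
    return merged

canonical = [{"G5", "G4"}, {"C2", "G2", "G3"}, {"G3", "C3", "G6"}]
for g in global_groups(canonical):
    k_min = min(len(c) for c in canonical if c & g)
    print(sorted(g), "-> representatives:", sorted(g)[: max(k_min - 2, 0)])
```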
It is important to remark that this procedure of component selection is independent of the method used in the fault location phase. Furthermore, once the elements representative of all the circuit components have been chosen on the basis of the previous procedure, the isolation of the faulty components is up to the chosen fault location method. If the selected component set is optimum, the result given by the fault location method can be theoretically unique. Otherwise, always on the basis of the selected components, it is possible to interpret the obtained results in the best way, as will be shown in the following example. Finally, as the set of components to be selected is not unique, the choice of the most suitable one could be guided by practical considerations, such as, for example, the set containing the highest number of components with less reliability, or by the features of the subsequently chosen fault location method (algorithms using a symbolic approach, neural networks, fuzzy analysers, etc.).
As an example, let us consider the Sallen-Key band-pass filter shown in Figure 2.2, where Vo is the chosen test point. The program SYFAD is able to yield both testability and canonical ambiguity groups, as will be shown in the next section. In Figure 2.3 the program results are shown. As can be seen, there are two canonical ambiguity groups without elements in common, which can also be considered as global ambiguity groups. The first group is of the second order and, therefore, it is not possible to select a set of components giving a unique solution. The surely testable group is constituted by G1 and C1. As the testability is equal to three, we can take into account


Figure 2.2  Sallen-Key band-pass filter (input Vi; components R1-R5, C1, C2; test point Vo)

Testability value: 3
Total number of components: 7
Canonical ambiguity groups:
G5 G4
C2 G2 G3

Figure 2.3  Program results for the circuit in Figure 2.2

at most a three-fault hypothesis; that is, a possible solution can be obtained if only three component values are considered as unknowns. On the basis of the previous procedure, the elements to select as representative of the circuit components are the surely testable group components and only one component belonging to one of the two canonical ambiguity groups. Let us suppose, for example, the situation of a single fault. Independent of the fault location method used, if the obtained solution gives C1 or G1 as the faulty element, we can localize the fault with certainty, because both C1 and G1 belong to the surely testable group. If we locate as the potentially faulty element a component belonging to the second-order canonical ambiguity group, we can only know that there is a fault in this ambiguity group, but we cannot locate it exactly because there is not a unique solution. Instead, if we obtain as the faulty element a component belonging to the third-order ambiguity group, we have a unique solution and then we can localize the fault with certainty. In other words, a fault in a component of this group can be counterbalanced only by simultaneous faults on all


the other components of the same group. However, under the single-fault hypothesis, this situation cannot occur.

2.3.3 Singular-value decomposition approach

From the previous section, it is possible to understand the importance of canonical ambiguity group determination. In fact, from knowledge of the order of the minimum canonical ambiguity group it is possible to establish whether a circuit is k-fault testable or not. Furthermore, it is also important to know which are the canonical ambiguity groups, in order to suitably choose the potentially faulty elements. In other words, knowledge of the canonical ambiguity groups allows us to determine the testable groups, taking into account also their intersections, that is, the global ambiguity groups. These are groups of potentially faulty components giving a solution to the problem of fault location. This solution will be unique if the circuit is k-fault testable; otherwise it will allow us to confine the presence of faults to well-defined groups of components belonging to global ambiguity groups [20]. So, the importance of canonical ambiguity group determination is twofold: (i) the possibility of establishing a priori whether a circuit is k-fault testable; and (ii) the possibility of determining testable groups of components, which are easily obtainable, through a combinatorial procedure, starting from knowledge of the canonical ambiguity groups.
One of the first algorithms for ambiguity group determination was presented in Reference 19. In References 33 and 34 a combinatorial method, based on a symbolic approach, has been implemented in the program SYFAD. In Figure 2.4 the flowchart of the algorithm for canonical ambiguity group determination is shown; in the figure, T indicates the circuit testability and R the total number of potentially faulty parameters. Summarizing, the procedure can be described as a process that evaluates the testability of all the combinations of groups of k components, starting from a group constituted by only one component and increasing it up to the maximum allowed number (which is obviously the testability value of the circuit). In the development of the procedure, if an ambiguity group is found, the further combinations that include it as a subset are not considered: in this way the canonical ambiguity groups are determined. In the procedure a classical total pivoting method is used on $B_C$ and its submatrices.
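A compact sketch of this combinatorial search is given below. It tests column subsets of $B_C$ of increasing size for linear dependence and keeps a dependent subset as canonical when none of its proper subsets was already found dependent; the rank test uses a numerical routine rather than the total pivoting of the text, and the matrix is an arbitrary example.

```python
# Combinatorial search for canonical ambiguity groups among the columns
# of a numerical B_C matrix (in the spirit of Figure 2.4).
from itertools import combinations
import numpy as np

def canonical_ambiguity_groups(BC, tol=1e-9):
    n = BC.shape[1]
    found = []
    for size in range(2, n + 1):
        for cols in combinations(range(n), size):
            if any(set(g) <= set(cols) for g in found):
                continue                     # skip supersets of known groups
            if np.linalg.matrix_rank(BC[:, cols], tol=tol) < size:
                found.append(cols)           # columns are linearly dependent
    return found

# Columns 0 and 2 proportional -> {0, 2} is a canonical ambiguity group.
BC = np.array([[1.0, 0.0, 2.0, 1.0],
               [0.0, 1.0, 0.0, 1.0],
               [1.0, 1.0, 2.0, 0.0]])
print(canonical_ambiguity_groups(BC))        # -> [(0, 2)]
```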
Subsequently, another efficient numerical procedure for ambiguity group determination, based on the QR factorization of the testability matrix, was proposed in References 38 and 39. However, this last method, even though not combinatorial, is a very complex technique for searching for canonical and global ambiguity groups. Furthermore, although the QR decomposition approach presents several interesting features, it suffers from problems related to round-off errors. These problems become particularly critical when the dimensions of the testability matrix (or, in circuit terms, the circuit size) increase. In fact, the procedures for testability and ambiguity group determination are strictly tied to the numerical rank evaluation of the testability matrix and of some of its submatrices. As is well known, matrix rank computation using QR decomposition or other triangularization methods is affected by round-off errors, especially if the matrix is rank deficient, and the numerical rank obtained is, often,

Symbolic function approaches for analogue fault diagnosis

Figure 2.4

53

Flowchart of the combinatorial algorithm for the canonical ambiguity


group determination

only an estimate of the effective rank. These numerical problems are mostly overcome by the use of the singular-value decomposition (SVD) approach which is a
powerful technique in many matrix computations and analyses and has the advantage
of being more robust to numerical errors. The SVD approach allows us to obtain the
effective numerical rank of the matrix, taking into account round-off errors [40]. So,
by exploiting the great numerical robustness of the SVD approach, an accurate evaluation of the testability value and an efficient procedure for canonical ambiguity group determination can be obtained, as will be shown in the following [28].
As is well known [40], a matrix BC with m rows and n columns can be written as follows in terms of its SVD:

BC = U Σ V^T    (2.14)

where U and V are two square matrices of order m and n, respectively, and Σ is a diagonal matrix of dimension m × n. If BC has rank k, the first k elements σ_i, called singular values, on the diagonal of Σ are different from zero and are ordered as σ_1 ≥ σ_2 ≥ … ≥ σ_k > 0. The matrix Σ is unique; the matrices U and V are not unique, but are unitary. This means that they have maximum rank and their rows and columns are orthonormal. In our case BC is the testability matrix, so n is equal to the number of potentially faulty parameters. As is well known, the testability value does not depend on the component values [10]. Then, by assigning, for example, arbitrary values to the circuit parameters, the numerical value of the entries of BC can be evaluated and, by applying the SVD, the testability value T = rank BC can be determined from the number of non-zero singular values.
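As an illustration, this rank computation is only a few lines in any SVD-capable environment; the following minimal Python/NumPy sketch (the matrix values and the relative threshold are illustrative, not taken from TAGA) counts the singular values that are significant with respect to the largest one:

import numpy as np

def testability(BC, rel_tol=1e-10):
    """T = rank(BC), counted as the number of singular values that are
    significant relative to the largest one (round-off aware)."""
    s = np.linalg.svd(BC, compute_uv=False)   # singular values, descending
    if s.size == 0 or s[0] == 0.0:
        return 0
    return int(np.sum(s > rel_tol * s[0]))

# BC evaluated at arbitrary parameter values (illustrative numbers):
BC = np.array([[1.0, 2.0, 3.0],
               [2.0, 4.0, 6.0],
               [0.0, 1.0, 1.0]])
print("testability T =", testability(BC))    # row 2 = 2*row 1, so T = 2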
Now, V being a unitary matrix and rank BC = T, by multiplying both members of Equation (2.14) by V, the following expression can be obtained:

BC V = U Σ = [UT ΣT | 0]    (2.15)

where UT indicates the matrix constituted by the first T columns of U and ΣT the square submatrix of Σ containing the singular values. The matrix UT ΣT has dimension m × T; the null submatrix 0 has dimension m × (n − T). At this point, the following equations can be written:

BC VT = UT ΣT
BC Vn−T = 0m×(n−T)    (2.16)

where VT indicates the matrix of dimension n × T constituted by the first T columns of the matrix V and Vn−T the matrix of dimension n × (n − T) constituted by the last n − T columns of the matrix V. By recalling that the kernel of a matrix BC (ker BC) of dimension m × n is the set of vectors v such that BC v = 0, and that rank BC + dim(ker BC) = n, the dimension of ker BC is equal to n − T if rank BC = T. The columns of Vn−T are linearly independent, because Vn−T is a submatrix of V. Then, the columns of Vn−T constitute a basis for ker BC. Each column of Vn−T gives a linear combination of columns of BC and, then, we can associate each column of Vn−T with an ambiguity group, but we do not know whether it is canonical, global or, possibly, a union of disjoint canonical ambiguity groups. We know only that vectors of dimension n, representing linear combinations of columns of BC that give canonical ambiguity groups, belong to ker BC. Since the columns of Vn−T are a basis for ker BC, these vectors can certainly be generated by the columns of Vn−T. Unfortunately, we do not know a priori the canonical ambiguity groups, so we do not know what kind of linear combination of the Vn−T columns gives the canonical ambiguity groups.
Now, let us consider the symmetric matrix H = Vn−T (Vn−T)^T of dimension n × n, whose entries have magnitude not greater than one. It has rank equal to n − T, being the product of two matrices of rank n − T, and it derives from the product of each row of Vn−T with itself and with all the other rows. By multiplying BC by H, the following expression is obtained:

BC H = BC Vn−T (Vn−T)^T = 0m×n    (2.17)


Equation (2.17) means that the columns of H belong to ker BC. Furthermore, each row and, consequently, each column of H refers to the corresponding column of BC, that is, in our case, to a specific circuit parameter. In Reference 28 the following theorem and corollaries have been demonstrated.
Theorem 2.2 If in the matrix BC there are only disjoint canonical ambiguity groups,
they are identified by the entries different from zero of the columns of the matrix H.
Corollary 1 If in the matrix BC there are canonical ambiguity groups with non-null
intersection, that is, there are global ambiguity groups, the matrix H provides the
disjoint global ambiguity groups.
Corollary 2 If Vn−T, and then H, have a null row (for H, also the corresponding null column), it corresponds to a surely testable element.
Furthermore, in Reference 28 it has also been shown that, if the matrix H has all
the entries different from zero, then this means that one of the following conditions
occurs: there are surely testable elements or there is a unique global ambiguity group.
In any case, since we do not know a priori which is the situation for a given circuit,
it is necessary to consider a procedure giving the canonical ambiguity groups. If
in H disjoint global ambiguity groups are located, it is again necessary to consider
a procedure giving the canonical ambiguity groups. In practice, the procedure of
canonical ambiguity group determination ends at the evaluation of H only if H is
constituted by blocks of order two, which locate second-order canonical ambiguity
groups, otherwise it must continue.
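A compact sketch of this first stage is given below, under the conventions above (NumPy; the tolerance and the helper name are illustrative): it builds Vn−T from the SVD of BC, forms H, and reads off the null rows (surely testable elements, Corollary 2) and the column supports (the disjoint groups of Theorem 2.2 and Corollary 1):

import numpy as np

def ambiguity_hints(BC, tol=1e-10):
    """Build Vn-T (a basis of ker BC) and H = Vn-T (Vn-T)^T, then read off
    surely testable elements (null rows/columns of H) and the support of
    each column of H (disjoint canonical/global ambiguity groups)."""
    U, s, Vt = np.linalg.svd(BC)
    T = int(np.sum(s > tol * s[0])) if s.size and s[0] > 0 else 0
    VnT = Vt[T:].T                   # last n-T columns of V
    H = VnT @ VnT.T                  # n x n, rank n-T
    surely_testable = [j for j in range(H.shape[1])
                       if np.all(np.abs(H[:, j]) < tol)]
    groups = {frozenset(np.flatnonzero(np.abs(H[:, j]) >= tol))
              for j in range(H.shape[1])}
    groups.discard(frozenset())
    return T, VnT, H, surely_testable, groups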
In order to determine a canonical ambiguity group starting from the basis constituted by the columns of Vn−T, it is necessary to determine a suitable vector v of dimension n − T which, multiplied by Vn−T, yields the vector of dimension n representing the canonical ambiguity group. In Reference 28 it was demonstrated that vectors belonging to ker BC and giving canonical ambiguity groups can be obtained by locating the submatrices S of Vn−T, of dimension (n − T − 1) × (n − T), whose rank is equal to n − T − 1. These submatrices have a kernel of dimension one, whose basis x (a vector with n − T entries) can be easily obtained through the SVD of these matrices. In fact, x corresponds to the last column of the matrix V of the SVD of these matrices. By multiplying Vn−T by the basis x of the kernels of all the matrices S, canonical ambiguity groups can be obtained [28]. Each vector y = Vn−T x has null entries corresponding to the rows of the matrix S relevant to x, because x is a basis of ker S.
The program, Testability and Ambiguity Group Analysis (TAGA) [29] permits us
to determine testability and canonical ambiguity groups of a linear analogue circuit
on the basis of the theoretical treatment reported in Reference 28 and previously
summarized. It exploits symbolic analysis techniques and it is based on the software
package SAPWIN. Once the symbolic network functions have been determined, the
testability matrix BC is built initially in symbolic form and then in numerical form, by


assigning arbitrary values to the circuit parameters. At this point the following steps are performed:
1. SVD of the testability matrix BC and determination of the testability value T and of the matrix Vn−T.
2. Determination of the matrix H = Vn−T (Vn−T)^T. If the matrix H is constituted only by blocks of order two, second-order canonical ambiguity groups are located, then stop; otherwise go to step 3.
3. Selection of a submatrix S of Vn−T with n − T − 1 rows and n − T columns.
4. SVD of S. If rank S < n − T − 1, go to step 3. If rank S = n − T − 1, go to step 5.
5. Multiplication of Vn−T and the vector x, basis of ker S. If the obtained vector y, of dimension n, has non-zero entries everywhere except in those relevant to the rows of S, a canonical ambiguity group of order T + 1 has been located, then go to step 8. If the obtained vector y has other null entries, besides those relevant to the rows of S, a canonical ambiguity group of order lower than or equal to T has been located, then go to step 6.
6. Insertion of the obtained canonical ambiguity group in a matrix, called the ambiguity matrix, whose number of rows is equal to n and whose number of columns is equal to the total number of determined canonical ambiguity groups.
7. If all the possible submatrices S have been considered, stop. If not, go to step 3, discarding the submatrices S having null rows, because they certainly have a rank less than n − T − 1.
8. If all the possible combinations of T elements relevant to the canonical ambiguity group of order T + 1 give testable groups of components, then go to step 7.
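The heart of steps 3–5 can be sketched as follows (illustrative Python; the tolerance handling is simplified with respect to the treatment in Reference 28):

import numpy as np
from itertools import combinations

def canonical_group_candidates(VnT, tol=1e-8):
    """Steps 3-5: scan the (n-T-1)-row submatrices S of Vn-T; when rank(S)
    equals n-T-1, the kernel basis x of S is the last column of V in the
    SVD of S, and the support of y = Vn-T @ x gives an ambiguity group."""
    n, d = VnT.shape                 # d = n - T
    found = set()
    if d < 2:
        return found
    for rows in combinations(range(n), d - 1):
        S = VnT[list(rows), :]
        if np.any(np.all(np.abs(S) < tol, axis=1)):
            continue                 # null row: rank < d-1 for sure (step 7)
        U, s, Vt = np.linalg.svd(S)
        if int(np.sum(s > tol * s[0])) != d - 1:
            continue                 # kernel is not one-dimensional
        x = Vt[-1]                   # basis of ker S
        y = VnT @ x
        group = frozenset(np.flatnonzero(np.abs(y) >= tol))
        if group:
            found.add(group)         # candidate canonical ambiguity group
    return found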
The proofs of the statements in step 5 are given in Reference 28. Furthermore, if there
are surely testable elements in the CUT, they correspond to null rows in the ambiguity
matrix, because each row of the ambiguity matrix corresponds to a specific potentially
faulty circuit element and surely testable elements cannot belong to any canonical
ambiguity group of order at most equal to T [28].
It is important to remark that the availability of network functions in symbolic
form strongly reduces the computational effort in the determination of entries of the
matrix BC , because they can be simply led back to derivatives of sums of products.
Let us consider, as an example, the circuit shown in Figure 2.5. The output V0 has been chosen as the test point. In Figure 2.6 the matrices Vn−T and H are shown. As can be noted, the matrix H has columns whose entries are all different from zero. Then, the whole procedure of canonical ambiguity group determination has to be applied and, in a very short time, the results are obtained, as shown in Figure 2.6, where the ambiguity matrix and the canonical ambiguity groups are reported. In the ambiguity matrix it is possible to locate three surely testable components (C1, R1, R4) and ten canonical ambiguity groups. The computational times are very short: on a Pentium III 500 MHz, the symbolic analysis, performed by SAPWIN, requires 70 ms and the canonical ambiguity group determination, performed by TAGA, requires 50 ms.

Figure 2.5 Tow-Thomas band-pass filter

2.3.4 Testability analysis of non-linear circuits

The previously presented definitions and methods are based on the study of network
functions in the transformed domain. Thus, they are rigorously applicable only for
linear circuits. However, by means of some considerations, they can give useful
information also for the fault diagnosis of non-linear circuits.
To discuss such considerations it is useful to take into account two different kinds
of non-linear circuits: those in which the non-linear behaviour is structural, that is,
the presence of non-linear components is essential to the desired behaviour (rectifiers,
mixers, modulators and so on) and those in which the non-linear behaviour can be
considered as being parasitic.
For the latter case, the above-presented techniques of testability analysis can be applied to a linearized model of the CUT and can be used directly to optimize the selection of test points in the circuit. Obviously, the non-linear behaviour, which could be prevalent in fault conditions, will make the fault location phase much more difficult.
For the former case, the use of the proposed techniques can be useful if it is possible
to represent the non-linear circuit by means of suitable piece-wise linear (PWL)
models. In this case the testability analysis can be performed on the corresponding
PWL circuit. This aspect will be further discussed in subsection 2.5.1.

2.4 Fault diagnosis of linear analogue circuits

In the past years, a noteworthy number of techniques have been proposed for the
fault diagnosis of analogue linear and non-linear networks (excellent presentations of
the state of the art in this field can be found in References 16, 41 and 42). All these
techniques can be classified into two basic groups: SBT (simulation-before-test) techniques and SAT (simulation-after-test) techniques.
Both SBT and SAT techniques share a combination of simulations and measurements, the difference depending on the time sequence in which they are applied.

Figure 2.6 TAGA results for the Tow-Thomas band-pass filter: testability = 3; the matrices Vn−T and H; the ambiguity matrix locating the three surely testable components C1, R1 and R4; and the ten canonical ambiguity groups (R5,R6), (R3,R6), (R3,R5), (R2,R6), (R2,R5), (R2,R3), (C2,R6), (C2,R5), (C2,R3), (C2,R2)

In the

former case (SBT), the CUT is simulated under different faults and, after a set of measurements, a comparison between the actual circuit response to a set of stimuli and the presimulation gives an estimate of how probable a given fault is. There are many different procedures, but they often rely on constructing a fault dictionary, that is, a prestored data set corresponding to the values of some network variables when a given fault exists in the circuit. These techniques are especially suited to the location


of hard or catastrophic faults for two reasons: the first one is that they are generally
based on the assumption that any fault influences the large-signal behaviour of the
network, the second one is that the dictionary size becomes very large in multiple
soft-fault situations.
The SAT approaches are suitable for cases where the faults perturb the small-signal behaviour, that is, they are especially suitable for diagnosing parametric faults (deviations of parameter values beyond a given tolerance). In these methods, starting from
the measurements carried out on the selected test points, the network parameters are
reconstructed and compared to those of the fault-free network to identify the fault.
The use of symbolic methods is particularly suited to SAT techniques and, in particular, to those based on parameter identification. SAT approaches need more computational time than SBT approaches and, using a symbolic approach, noteworthy advantages can be achieved, not only in computational terms, but also because testability analysis, which, as specified in the previous section, is a necessary preliminary step for any fault diagnosis method, is automatically included in the fault diagnosis procedure.
In this section, methods of fault diagnosis based on parameter identification are
considered. In these techniques the aim is the estimation of the effective values of
the circuit parameters. To this end it is necessary to have a series of measurements carried out on a previously selected set of test points and to know the circuit topology and the nominal values of the components. Once these data are known, a set of equations representing
the circuit is determined. These equations are non-linear with respect to the parameter
values, which represent the unknowns. Their solution gives the effective values of the
circuit parameters. In both determination and solution of the non-linear equation set,
symbolic analysis can be advantageously used, as will be shown in this section.
In parametric fault diagnosis techniques, the measurements can either be in the
frequency domain or the time domain. Generally, the procedures based on time
domain measurements do not exploit symbolic techniques in the fault location phase.
Nevertheless, also for these procedures, if a symbolic approach is used for testability
analysis, a considerable improvement in the quality of the results can be obtained. An
example of this kind is reported in Reference 43, where a neural network approach is
used in the fault location phase and a symbolic testability analysis is used for sizing
and training the network.
On the contrary, symbolic techniques are used in parametric fault diagnosis methods based on frequency domain measurements. So, in the following, only this kind
of procedure is considered. For all the techniques presented, the quite realistic k-fault
hypothesis is made, by also taking into account the component tolerances. The use of
a symbolic approach gives noteworthy advantages, not only in the phases of testability
analysis and solution of the fault diagnosis equations, but also in the search of the
best frequencies at which the measurements have to be carried out.

2.4.1 Techniques based on bilinear decomposition of fault equations

In this subsection two fault diagnosis techniques based on a bilinear decomposition of the fault equations are presented. Since in practical circuits the single-fault case is the most frequent, the double-fault case is less frequent and the case of all faulty
components is almost impossible, in the following the procedures of location and
estimation of the faulty components in the case of single-fault hypothesis will be
described.
2.4.1.1 First technique
Let us suppose that the fault diagnosis equations of the analogue, linear, time-invariant
CUT are constituted by the network functions relevant to the selected test points. The
coefficients of these equations are related to the circuit parameters in a linear way,
that is, each coefficient can be considered as a bilinear function with respect to the
single-circuit parameter. Under the hypothesis of single fault, considering, for the
sake of simplicity, only one network function h(s, p) and fixing all the parameters,
except one, at the nominal value, the following bilinear function can be obtained:
h(s, p) = (a(s) + b(s)·p) / (c(s) + d(s)·p)    (2.18)

where p ∈ ℝ⁺ is a circuit parameter and a(s), b(s), c(s) and d(s) are polynomial functions. For s = jω, Equation (2.18) becomes the following:

h(jω, p) = (a(jω) + b(jω)·p) / (c(jω) + d(jω)·p)    (2.19)

By considering the measured value of the fault equation at a fixed frequency different from the pole frequencies (that is, different from the frequencies that make the denominator of Equation (2.19) equal to zero), Equation (2.19) can be inverted with respect to p and, because it has a co-domain in the complex number field, the following equations for the real and imaginary parts can be obtained:

p = Re[(a(jω) − h(jω)·c(jω)) / (h(jω)·d(jω) − b(jω))]    (2.20)

Im[(a(jω) − h(jω)·c(jω)) / (h(jω)·d(jω) − b(jω))] = 0    (2.21)
At this point, the procedure of fault location can be summarized as follows [44]. One element at a time is considered faulty and all the others are considered to be working well. Once the test frequency has been fixed and the measured value at the test point has been collected, for each circuit parameter the imaginary part is evaluated through the corresponding Equation (2.21) by substituting the nominal value for all the other parameters. The evaluation of Equation (2.21) is thus repeated, once for each element considered faulty. Only for the effectively faulty component is the imaginary part null, and the real part gives an estimate of its value. Obviously, this is true if the component is surely testable; otherwise the considerations in Reference 20 have to be applied.
The symbolic approach is of fundamental importance in the implementation of
this procedure, because the availability in completely symbolic form of a(s), b(s),
c(s) and d(s) strongly reduces the computational complexity.
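A minimal sketch of this single-fault location loop is given below; the dictionary of per-parameter polynomial evaluators is an assumed interface standing in for the symbolic forms of a(s), b(s), c(s) and d(s):

def locate_single_fault(models, h_meas, omega, im_tol=1e-6):
    """models: {name: (a, b, c, d)} with a(s)..d(s) callables obtained from
    the symbolic network function, all other parameters at nominal value
    (an assumed interface). Inverts (2.19) per Equations (2.20)-(2.21): the
    candidate with ~zero imaginary part is the faulty one, and the real
    part estimates its value."""
    verdicts = {}
    jw = 1j * omega
    for name, (a, b, c, d) in models.items():
        z = (a(jw) - h_meas * c(jw)) / (h_meas * d(jw) - b(jw))
        is_candidate = abs(z.imag) < im_tol * max(1.0, abs(z.real))
        verdicts[name] = (is_candidate, z.real)
    return verdicts   # name -> (imaginary part ~ 0 ?, estimated value)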


The extension of the procedure to the double-fault case, the component tolerance consideration and a description of the realized system for the full automation of the procedure are reported in References 44–46, respectively.
2.4.1.2 Second technique
Let us consider the fault diagnosis equations of the CUT as in Equation (2.11). By
exploiting the measurements carried out on the test points, the coefficients of the
network functions can be determined by applying a least-squares procedure. In theory,
a number of measurements equal to the number of unknowns is required. In practice, a number of measurements much larger than the number of unknowns is used in order to minimize the errors due to measurement inaccuracy. Once the coefficients have been evaluated, the component values can be determined by exploiting the system in Equation (2.12). Let us consider the hypothesis of a single fault [47]. By testability evaluation, the rank of the Jacobian matrix BC of the system in Equation (2.12) has been determined, so the linearly independent rows of BC, that is, the linearly independent equations of the system in Equation (2.12), are known. By indicating with p a potentially faulty parameter and by choosing, among the linearly independent equations of the system in Equation (2.12), two of them dependent on p, the following bilinear system can be determined, where Ai(l) and Aj(k) are the coefficient values, while mi(l), mj(k), qi(l) and qj(k) are numerical values obtained by assigning the nominal value to each circuit parameter considered not faulty:

Ai(l) = mi(l)·p + qi(l)
Aj(k) = mj(k)·p + qj(k)    (2.22)

By substituting the first equation into the second one, the following expression can be obtained, where M and Q are numerical terms:

Aj(k) = mj(k)·(Ai(l) − qi(l))/mi(l) + qj(k) = M·Ai(l) + Q    (2.23)
Equation (2.23) is verified by replacing Ai(l) and Aj(k) with the values obtained from the measurements only if the potentially faulty parameter is the faulty one and the others are really not faulty. So, by repeating this procedure for each circuit parameter, the faulty element can be located. Furthermore, the faulty element can also be estimated by inverting one of the equations in Equation (2.22), as, for example:

p = (Ai(l) − qi(l)) / mi(l)    (2.24)
If it happens that a parameter p appears in only one equation, this means that this equation is independent of all the others and can be used in its bilinear form for evaluating the value of p. If this value is out of its tolerance range, the parameter p is faulty, because the parameter p, appearing in only one coefficient of the system in Equation (2.12), certainly does not belong to any canonical ambiguity group, that is, it is surely testable and, then, distinguishable with respect to all the other parameters.


If the parameter p belongs to an ambiguity group, the considerations discussed in Reference 20 have to be taken into account.
In the implementation of the procedure, the use of symbolic techniques is of fundamental importance, because the availability of the network functions in completely symbolic form permits us not only to easily evaluate the testability matrix BC, but also to speed up the repeated solution of Equations (2.22)–(2.24) with different potentially faulty elements.
The extension of the procedure to the double-fault case, the component tolerance consideration, applicative examples and a scheme of the software package
implementing this fault diagnosis technique are reported in Reference 47.
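For a single candidate parameter, the consistency check of Equation (2.23) and the estimate of Equation (2.24) reduce to a few arithmetic operations, as in the following sketch (the residual tolerance is illustrative):

def check_candidate(Ai, Aj, mi, qi, mj, qj, rel_tol=1e-3):
    """Single-fault check for one candidate parameter p. Ai, Aj: coefficient
    values identified from the measurements; mi, qi, mj, qj: numerical terms
    computed with all other parameters at nominal value. Returns whether
    Equation (2.23) is satisfied and the estimate of p from Equation (2.24)."""
    M = mj / mi                      # slope of Equation (2.23)
    Q = qj - M * qi                  # intercept of Equation (2.23)
    consistent = abs(Aj - (M * Ai + Q)) <= rel_tol * max(1.0, abs(Aj))
    return consistent, (Ai - qi) / mi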
The two techniques described in this subsection have similar performances.
Considering the single-fault hypothesis, the difference between the two procedures
consists in the fact that in the first technique the value of the imaginary part in Equation
(2.21) gives information about the faulty element, whereas in the second technique
the verification of Equation (2.23) gives a proof of the faulty element.

2.4.2 Newton–Raphson-based approach

The two bilinear techniques previously described are suitable for the single- and double-fault cases, because they become excessively complex for a fault hypothesis greater than two. In this subsection a procedure for parametric fault diagnosis based on the classical Newton–Raphson method is presented [34]. It is suitable for any possible fault hypothesis.
Let us consider the fault diagnosis equations expressed in Equation (2.11). As
reported in subsection 2.3.2, the parameter evaluation is led back to the solution of
the non-linear system reported in Equation (2.12), whose testability matrix is reported
in Equation (2.13).
By indicating with R the total number of circuit parameters, with k the number of potentially faulty parameters and with T the testability value (T ≤ R and k ≤ T), the fundamental steps of the fault diagnosis procedure can be summarized as:
1. Evaluation of T.
2. Determination of all the possible combinations of the k testable parameters.
3. Application of the Newton–Raphson method to each testable group of k parameters.
A group of k elements is testable if the related columns of BC are linearly independent. To establish whether a group of k elements is testable, we must triangularize the submatrix of BC constituted by the columns related to the selected parameters. If the k chosen elements are testable, the first k rows of the triangularized matrix indicate the independent equations. So, we have a non-linear system of k equations in k unknowns, and we solve it employing the classical Newton–Raphson method, by assigning to the other R − k parameters their nominal values. As is well known, in the Newton–Raphson method it is necessary to evaluate the Jacobian matrix that, in this case, is a submatrix of the testability matrix BC. To ensure convergence of the Newton–Raphson procedure, the starting point has to be chosen close enough to the


solution. To overcome the problem of not really knowing the solution positions, a
grid of initial points is chosen as reported in Reference 34.
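The following sketch shows this multi-start Newton–Raphson solution in Python/NumPy; F and J are assumed callables for the reduced fault equations and their Jacobian (in the actual procedure, a submatrix of BC), and the grid of starting points is supplied by the caller:

import numpy as np

def nr_multistart(F, J, starts, tol=1e-9, max_iter=50):
    """Newton-Raphson on the k reduced fault equations F(p) = 0; J(p) is the
    corresponding Jacobian (a submatrix of the testability matrix). A grid
    of starting points stands in for the unknown solution positions."""
    solutions = []
    for p0 in starts:
        p = np.asarray(p0, dtype=float).copy()
        for _ in range(max_iter):
            f = F(p)
            if np.linalg.norm(f) < tol:
                solutions.append(p.copy())      # converged: a fault candidate
                break
            try:
                p -= np.linalg.solve(J(p), f)   # one Newton step
            except np.linalg.LinAlgError:
                break                           # singular Jacobian: give up
    return solutions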
It is worth pointing out that the components considered to be working well yield, owing to their tolerances, a deviation in the solution for the components considered faulty; that is, the solution is affected by the tolerances of the components considered to be working well. The more the testability decreases (that is, the more the number of parameters that cannot be considered unknowns increases), the more the error grows. In extremely unlucky cases the tolerance effect can completely change the solution results. The magnitude of the error is tightly dependent on the circuit behaviour, that is, on the network sensitivity with respect to the circuit components. In fact, if a component that gives a high value of sensitivity has a small deviation with respect to the nominal value, it could produce completely wrong solutions if it is not considered unknown. However, high-sensitivity circuit components are often realized with smaller tolerance intervals, in the sense that even a small deviation with respect to the nominal value must be considered as a parametric fault.
Each solution obtained with the Newton–Raphson method gives a possible set of faulty components.
The flow diagram of the algorithm of fault solution determination is shown in
Figure 2.7.
Multiple solutions can be present for the following reasons:
• By solving the system with respect to any possible group of k testable components, we obtain several solutions, each one indicating a different possible fault situation, that is, a different parameter group whose values are out of tolerance.
• Owing to the system non-linearity, multiple solutions can exist for each parameter group.

It is worth pointing out that, very often, several of the solutions are equivalent. In fact, let B be a set of n components (n < R), with values out of tolerance, which constitute the faulty components of one of the solutions. Assuming n < k, all the groups of k components that include the set B will have, among their solutions, the solution in which the components of set B are out of tolerance and the remaining k − n are within tolerance. So, the solution of the system with respect to each combination of k components leads to multiple equivalent solutions, one for any combination of k testable components that includes the set B. Then, it is useful to synthesize all these
solutions into a unique one. In practice, the solution list can be remarkably reduced by
applying the procedure shown in the flowchart of Figure 2.8. Referring to this figure,
once the whole set of N solutions (set 1) has been determined, a set (set 2) constituted
by all the possible faulty component groups has to be built. This set is obviously
empty in the first step of the algorithm. We consider iteratively each solution and, if
its related group of faulty components and their values are different from the already
stored ones (in the limits of given tolerances), we add it to set 2.
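This reduction amounts to a duplicate-elimination pass over the solution list, as in the following sketch (solutions as name-to-value mappings; both tolerance thresholds are illustrative):

def reduce_fault_list(solutions, nominal, tol=0.05, same=0.02):
    """Figure 2.8 as a duplicate-elimination pass: keep one representative
    per distinct group of out-of-tolerance components with (approximately)
    equal values. Each solution maps component names to estimated values."""
    kept = []                                    # 'set 2' of the flowchart
    for sol in solutions:                        # 'set 1': all N solutions
        faulty = {c: v for c, v in sol.items()
                  if abs(v - nominal[c]) > tol * abs(nominal[c])}
        for group, vals in kept:
            if group == set(faulty) and all(
                    abs(faulty[c] - vals[c]) <= same * max(1e-12, abs(vals[c]))
                    for c in group):
                break                            # equivalent solution stored
        else:
            kept.append((set(faulty), dict(faulty)))
    return kept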
In the automation of the fault diagnosis procedure, the availability of the network
functions in completely symbolic form permits us to simplify not only the testability
analysis, but also the repeated solution of the non-linear system with different combinations of potentially faulty parameters. In fact, the Jacobian matrices relevant to the

application of the Newton–Raphson method to each testable component combination are simply submatrices of the testability matrix.

Figure 2.7 Flowchart of the algorithm of fault solution determination
Let us consider, as an example of application of the procedure, the Sallen–Key low-pass filter shown in Figure 2.9. The voltages Vo and Va are chosen as test points. The testability of the circuit is equal to four, so the fault hypothesis can be at most k = 4. The canonical ambiguity groups are (G3, G4) and (G1, G2, C1, C2), then:
• If k = 1, each component is testable.
• If k = 2, only the group (G3, G4) is not testable.
• If k = 3, each group that does not include (G3, G4) is testable.
• If k = 4, each group that includes (G3, G4) is not testable, and the group (G1, G2, C1, C2) is also not testable.

Figure 2.8 Flowchart of the algorithm for the reduction of the fault list

The faults have been simulated by substituting some components with others of different value. The nominal values of the circuit components are the following:

G1 = G2 = G3 = 1 × 10⁻³ Ω⁻¹ (R1 = R2 = R3 = 1 kΩ)
G4 = 1.786 × 10⁻³ Ω⁻¹ (R4 = 560 Ω)
C1 = C2 = 47 nF
O1: TL081



Figure 2.9 Sallen–Key low-pass circuit

The circuit has been built using a simple wiring board and standard components with 5 per cent tolerance. A double parametric fault has been simulated by substituting the capacitor C2 with a capacitor of value equal to 20 nF and the resistor R2 with a resistor of value equal to 1460 Ω.
The amplitude and phase responses related to the selected test points have been
measured using an acquisition board interfaced with a personal computer. Forty
measurements related to a sweep of frequencies between 250 and 10 000 Hz have
been acquired (input signal amplitude equal to 0.1 V). This range has been chosen taking into account the frequency response of the circuit, in order to include
the high-sensitivity region. The collected results, related to the two selected test
points Vo and Va , have been used as inputs for the software program SYFAD, which
implements the procedure of fault location. Choosing to solve the fault diagnosis
equations with respect to the set of all the possible testable combinations of four
components, the program has selected the following solutions among all the possible
ones (the numbers in parentheses are the ratios between the obtained values and the
nominal ones):
G1 = 2.338 × 10⁻³ (2.337 79)
G2 = 1.701 × 10⁻³ (1.700 78)
C1 = 1.163 × 10⁻⁷ (2.4747)

or

G2 = 6.873 × 10⁻⁴ (0.687 267)
C2 = 1.899 × 10⁻⁸ (0.404 09)

or

G1 = 1.375 × 10⁻³ (1.374 54)
C1 = 6.839 × 10⁻⁸ (1.455 04)
C2 = 2.763 × 10⁻⁸ (0.587 966)
The program is able to include in the solution list the right solution, the second one, which also gives an estimate of the substituted values.

2.4.3 Selection of the test frequencies

In parametric fault diagnosis based on measurements in the frequency domain, the choice of the frequencies where the measurements have to be performed influences
the fault location. In fact, the solution of the fault diagnosis equations is perturbed
by measurement errors and component tolerances. The choice of a suitable set of
measurement frequencies allows us to minimize the effect of these perturbations.
The use of symbolic techniques can be very useful in solving this problem. In the
following paragraphs, two procedures for selecting the set of frequencies that leads
to a better location of parametric faults in analogue linear circuits are summarized
[48, 49]. In their automation, symbolic techniques are advantageously used.
Starting from the fault diagnosis equations expressed as Equation (2.11), testability T and canonical ambiguity groups are determined [28] in order to locate a group of
T testable components [20]. At this point, the Newton–Raphson method can be used
to solve the fault diagnosis equations with T unknown parameters. To this end, it is
necessary that the number of equations is at least equal to the number of unknowns.
So, the equations are evaluated at several frequencies in order to obtain the desired
number of equations. At this point there is a problem regarding the choice of the measurement frequencies to minimize the effect of measurement errors and component
tolerances.
Let us remember that, given a linear system and its associated matrix A ∈ ℝ^{m×n}, the condition number cond(A) is defined as

cond(A) = ‖A‖_p · ‖A⁺‖_p    (2.25)

where ‖A‖_p is any matrix p-norm and A⁺ is the pseudo-inverse of A. The condition number is always ≥ 1. A system whose associated matrix has a small condition number is usually referred to as well conditioned, whereas a system with a large condition number is referred to as ill conditioned. The condition number definition comes from the calculation of the sensitivity of the solution of a system of linear equations with respect to the possible perturbations in the known data vector and in the elements of the matrix itself (the coefficients of the system equations). Let us consider a system of linear equations A·x = b, where, for the sake of simplicity, A ∈ ℝ^{n×n} and is non-singular. We want to evaluate how small perturbations of the data in b and in A could affect the solution vector x. Considering a simultaneous variation in the vector b and in the matrix A, by means of suitable mathematical elaborations [48], the following inequality can be obtained:

‖Δx‖/‖x‖ ≤ cond(A) · (‖Δb‖/‖b‖ + ‖ΔA‖/‖A‖)    (2.26)


This inequality provides an upper bound on the error, that is, the worst-case error in
the solution of x. To obtain the condition number of the matrix, the SVD method can
be used.
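For instance, with NumPy the 2-norm condition number follows directly from the singular values (the matrix below is an illustrative, nearly singular example):

import numpy as np

A = np.array([[1.00, 0.99],
              [0.99, 0.98]])            # nearly singular on purpose
s = np.linalg.svd(A, compute_uv=False)
cond_2 = s[0] / s[-1]                   # 2-norm condition number from the SVD
print(cond_2, np.linalg.cond(A))        # the two values agree (both large)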
In the case under analysis, the non-linear fault equations are solved by the Newton–Raphson method, which performs a step-by-step linearization. Then, in order to relate the previous terms, relative to the linear case, to the actual problem, note that the following associations with the generic case exist: b is the vector of the gain measurements, that is, each entry of b is a measurement of the amplitude in decibels of a network response at a different frequency (the extension to the case of gain and phase measurements is not difficult); consequently, the entries of Δb are the measurement errors. The entries of the matrix A are the entries of the Jacobian matrix (a generic entry has the form ∂20 log|h_k(jω_i, p)|/∂p_j, with the subscript k indicating the kth fault equation and the subscript i the corresponding ith measurement frequency), and the entries of ΔA are given by the tolerances of the circuit components considered well working (not faulty). The solution vector x is related to the values of the components belonging to the testable group. Moreover, it should be highlighted that every column j of the matrix A is related to a different component belonging to the testable group, while each row i of the same matrix is related to a different frequency for each test point; therefore, in order to get a square matrix, the number of performed measurements has to be suitably chosen for each test point. At this point, the choice of a set of frequencies in a zone where the condition number is minimum is suitable for minimizing the deviation in the solution vector, that is, the error in the resulting component values.
On the other hand, the condition number alone does not take into account the size
of derivatives in a given frequency range; the condition number could be good in a
frequency zone where high variations of component values with respect to nominal
values result in a small variation of network function amplitude. Then, in addition to
the condition number, it could be useful to take into account the norm of the matrix A,
which gives a measure of the sensitivity of the network functions with respect to the
component variations at a given set of frequencies. Taking into account the previous
observations, a Test Error Index (TEI) of the following form can be introduced:
TEI = (1/‖J‖₂) · cond(J) = (1/σ_max) · (σ_max/σ_min) = 1/σ_min    (2.27)

where σ_min and σ_max represent the minimum and maximum singular values of the Jacobian matrix. The TEI has been chosen in this way because, in order to minimize the worst-case error in the solution, the norm of the matrix must be as high as possible and its condition number must be as low as possible, that is, as near as possible to one.
Consequently, by looking for the minimum of Equation (2.27), both the requirements
are satisfied. In order to find the most suitable frequency set, that is, the set where the
previous index number is minimum, two different procedures can be used. The first
one is based on a heuristic approach [48] and is suitable for the case of a single test point; the second one is based on the use of genetic algorithms and is more general,
but requires more computational time [49].
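Before describing the two procedures, note that Equation (2.27) is immediate to evaluate once the singular values of the Jacobian are available; a sketch (assuming a square, full-rank Jacobian, so that the smallest singular value is the last one returned):

import numpy as np

def tei(J):
    """Equation (2.27): TEI = cond(J)/||J||_2 = 1/sigma_min. A low TEI means
    a large smallest singular value: high sensitivity and good conditioning
    at the same time (J assumed square and full rank here)."""
    s = np.linalg.svd(J, compute_uv=False)
    return 1.0 / s[-1]

# pick the best of several candidate frequency sets (jacobian_at is an
# assumed helper returning J evaluated at a given frequency set):
# best = min(candidate_sets, key=lambda freqs: tei(jacobian_at(freqs)))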


In the first procedure, under the hypothesis of only one test point, the logarithm of the TEI is evaluated on different frequency sets, each constituted by a number of frequencies, spaced by octaves, equal to the number of unknown parameters. The minimum of the TEI is determined and the corresponding set of frequencies constitutes the optimum set of frequencies. In Reference 48 an applicative example is reported, in which a double-fault situation is considered. By using a new version of the program SYFAD, performing parameter inversion through the Newton–Raphson algorithm, it is shown that the double fault is correctly identified if the measurement frequencies are determined by minimizing the TEI value, while the Newton–Raphson algorithm does not converge or gives completely incorrect results for other frequency sets.
In the second approach [49], an optimization procedure, based on a genetic algorithm, performs the choice of both the testable parameter group and the frequency set that best leads to the location of parametric faults. In fact, the Jacobian matrix associated with the fault equations depends not only on the frequencies, but also on the selected testable group. Then, even if all the possible testable groups are theoretically equivalent, in the phase of TEI minimization a testable group could be better than another one, owing to the different sensitivities of the network functions to the parameters of each testable group. Consequently, the algorithm of TEI minimization also has to take into account this aspect (note that, in the previous method, the testable group is randomly chosen). A description of the genetic algorithm is reported in Reference 49. The steps of the fault diagnosis procedure exploiting this approach of frequency selection are summarized below:
1. A list of all the possible testable groups is generated, through a combinatorial procedure taking into account the canonical ambiguity groups determined in the phase of testability analysis.
2. The genetic algorithm determines, starting from the nominal component values p_o, the testable group and the test frequencies.
3. The fault equations, relevant to the testable group determined in step 2, are solved with the Newton–Raphson algorithm by using measurements carried out at the frequencies determined in step 2.
4. The genetic algorithm determines new test frequencies, starting from the solution p′ of the previous step (the testable group is unchanged).
5. With the Newton–Raphson algorithm a new solution p* is determined. If, for every i, |(p*_i − p′_i)/p′_i| · 100 ≤ ε, with ε fixed a priori, stop; otherwise go to step 4. The test frequency set will be that used in the last application of the Newton–Raphson algorithm.
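The frequency-choice step (steps 2 and 4) can be prototyped with any global optimizer; the sketch below uses plain random search as a stand-in for the genetic algorithm of Reference 49, with jacobian_at an assumed helper returning the Jacobian evaluated at a given frequency set:

import numpy as np

rng = np.random.default_rng(0)

def pick_frequencies(jacobian_at, f_lo, f_hi, n_freqs, n_trials=2000):
    """Stand-in for the genetic algorithm of steps 2 and 4: random search
    for the frequency set minimising the TEI of Equation (2.27) at the
    current parameter estimate (jacobian_at is an assumed helper)."""
    best, best_tei = None, np.inf
    for _ in range(n_trials):
        freqs = np.sort(rng.uniform(f_lo, f_hi, n_freqs))
        s = np.linalg.svd(jacobian_at(freqs), compute_uv=False)
        t = 1.0 / s[-1]                 # TEI = 1/sigma_min
        if t < best_tei:
            best, best_tei = freqs, t
    return best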
In the automation of both the procedures of frequency selection, the use of
symbolic techniques gives great advantages. In fact, the availability of network
functions in symbolic form strongly reduces the computational effort in both the
testability analysis phase and the determination of the frequency-dependent Jacobian
matrix.
We conclude this subsection with an example relevant to the second described
procedure. In Figure 2.10 a two-stage common emitter (CE) audio amplifier is shown.
For the transistors, a simplified model with the same parameter values is considered.



Figure 2.10 Two-stage CE audio amplifier

The nominal values of the 17 parameters are (5 per cent tolerance): R1 = 1 kΩ, R2 = 10 kΩ, R3 = 200 kΩ, R4 = 100 Ω, R5 = 15 kΩ, R6 = 36 kΩ, R7 = 4 kΩ, R8 = 330 Ω, R9 = 4 kΩ, C1 = 100 nF, C2 = 100 nF, C3 = 100 nF, C4 = 100 nF, hie_Q1 = hie_Q2 = 1000 Ω, hfe_Q1 = hfe_Q2 = 100. Only one test point, corresponding to Vout, is considered. The diagnosis procedure of the circuit is performed by supposing that all the parameters are potentially faulty. The testability analysis gives T = 7 and 4125 testable groups. Using an initial population of 40 members, the genetic algorithm finds, in 215 steps, seven testable parameters, hie_Q2, hfe_Q1, R2, R4, R5, R8 and R9, and seven test frequencies: f1 = 1.66 Hz, f2 = 3.67 Hz, f3 = 5.61 Hz, f4 = 20.75 Hz, f5 = 24.80 Hz, f6 = 41.60 Hz, f7 = 101.4 Hz. At this point, a double fault is simulated by replacing the nominal values of R4 and hfe_Q1 with the following values: R4 = 150 Ω and hfe_Q1 = 80. The measurements of the amplitude of Vout, with the faulty parameter values, performed at the test frequencies recognized in the previous step, are affected by an error with Gaussian distribution (μ = 0.0, σ = 0.05). The constant ε is chosen to be 5 per cent. The program gives, in 179 iterations, a first estimate of the parameter values:
R2′ = 9573.8 Ω,  R4′ = 152.57 Ω,  R5′ = 14 420 Ω,  R8′ = 329.54 Ω,  R9′ = 3918 Ω,
hfe_Q1′ = 80.603,  hie_Q2′ = 1004.8 Ω

The analysis of the results already suggests that R4 and hfe_Q1 are the faulty parameters.
Now, considering the testable group found in the first step, the calculation of the best
frequencies with these parameter values is performed again by the genetic algorithm.
The new set of frequencies, found in 12 iterations, is: f1 = 1.58 Hz, f2 = 3.70 Hz,
f3 = 9.36 Hz, f4 = 23.22 Hz, f5 = 37.72 Hz, f6 = 55.20 Hz, f7 = 78.53 Hz. By
repeating the diagnosis with the new set of frequencies, the following values of the
testable parameters are determined:


R2* = 9552 Ω,  R4* = 152.3 Ω,  R5* = 14 409 Ω,  R8* = 330.03 Ω,  R9* = 3915.6 Ω,
hfe_Q1* = 80.984,  hie_Q2* = 1023 Ω


Comparing these values with the previous ones, the following percentage deviations are obtained:

ΔR2% = |R2* − R2′|/R2′ × 100 = 0.228%
ΔR4% = |R4* − R4′|/R4′ × 100 = 0.177%
ΔR5% = |R5* − R5′|/R5′ × 100 = 0.076%
ΔR8% = |R8* − R8′|/R8′ × 100 = 0.148%
ΔR9% = |R9* − R9′|/R9′ × 100 = 0.061%
Δhfe_Q1% = |hfe_Q1* − hfe_Q1′|/hfe_Q1′ × 100 = 0.473%
Δhie_Q2% = |hie_Q2* − hie_Q2′|/hie_Q2′ × 100 = 1.81%

Since all the percentage deviations are less than ε, the procedure is completed in only one cycle. By comparing the obtained values with the nominal ones, we have:

ΔR2% = |R2* − R2|/R2 × 100 = 4.48%
ΔR4% = |R4* − R4|/R4 × 100 = 52.3%
ΔR5% = |R5* − R5|/R5 × 100 = 3.94%
ΔR8% = |R8* − R8|/R8 × 100 = 0.009%
ΔR9% = |R9* − R9|/R9 × 100 = 2.11%
Δhfe_Q1% = |hfe_Q1* − hfe_Q1|/hfe_Q1 × 100 = 19.02%
Δhie_Q2% = |hie_Q2* − hie_Q2|/hie_Q2 × 100 = 2.3%

Considering the tolerance of every parameter, the faulty parameters are R4 and hfe_Q1, with the following fault values:

R4* = 152.3 Ω,  hfe_Q1* = 80.984

Comparing these values with the actual fault values, we have errors of 1.53 and 1.23 per cent, respectively.

2.5 Fault diagnosis of non-linear circuits

The previously presented fault diagnosis methodologies are applicable only to linear
circuits or to linearized models of the CUT. They are not applicable to circuits in
which the non-linear behaviour is structural, that is, if it is essential to the requested
electrical behaviour. However, the symbolic approach can also be usefully applied
in these cases. The aim of this section is to present an example of this kind of
application [50].
A field in which the symbolic approach can give advantages with respect to the
numerical techniques is constituted by those applications that require the repetition
of a high number of simulations performed on the same circuit topology with the
variation of component values and/or input signal values. In this kind of application


the symbolic approach can be used to generate the requested network functions of
the analysed circuit in parametric form. In this way, circuit analysis is performed
only once and, during the simulation phase, only a parameter substitution and an
expression evaluation are required to obtain numerical results. This approach can
be used to generate autonomous programs devoted to the numerical simulation of a
particular circuit. Furthermore, for a complex circuit, these simulators can be devoted
to parts of the circuit in order to obtain a simulator library.
In this section a program package, developed by the authors following the outlined approach, is presented. The program, named Symbolic Analysis Program for Diagnosis of Electronic Circuits (SAPDEC), is able to produce devoted simulators for non-linear analogue circuits and is aimed at fault diagnosis applications. The output of the program package is an autonomous executable program, a simulator devoted to a given circuit structure, instead of a network function in symbolic form. The generated simulators work with inputs and outputs in numerical form; nevertheless, they are very efficient because they strongly exploit the symbolic approach; in fact:
1. They use, for numerical simulation, the closed symbolic form of the requested
network functions.
2. They are devoted to a given circuit structure.
3. They are independent from both the component values and input values, which
must be indicated only at run time, before numerical simulation.
The generated symbolic simulators produce time domain simulations and are able
to work on non-linear circuits. To this end the following methods have been used:
1. Non-linear components are replaced by suitable PWL models.
2. Reactive elements are simulated by their backward-difference models.
3. A Katznelson-type algorithm is used for time domain response calculation.

2.5.1 PWL models

With the PWL technique [51], the voltage–current characteristic of any non-linear electronic device is replaced by PWL segments obtained by identifying one or more corner points on the characteristic. The PWL characteristic thus obtained approximately describes the element behaviour in the different operating regions in which it can work. It is evident that increasing the number of corner points and, consequently, the number of linearity regions allows us to obtain a higher precision in the simulation of the real component; obviously, in this way, the corresponding model becomes more complex.
It is worth pointing out that the symbolic analysis is completely independent of the number of PWL characteristic corner points; in fact, from a symbolic analysis point of view, each non-linear component of a PWL model is represented by a single symbol. However, the increase in the number of corner points influences the computational time in the numerical simulation phase, so a trade-off between a small number of corner points and the requested accuracy must be realized for each model.
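As a concrete illustration, a two-segment PWL diode through the origin can be written as follows (the corner voltage and conductances are illustrative values, not the models of Reference 50):

def diode_pwl(v, v_on=0.6, g_off=1e-9, g_on=1.0):
    """Two-segment PWL diode through (0, 0): a tiny off-conductance below
    the corner voltage v_on and a large on-conductance above it. The corner
    current g_off*v_on keeps the characteristic continuous."""
    if v <= v_on:
        return g_off * v                        # off region
    return g_off * v_on + g_on * (v - v_on)     # on region, continuous at v_on

# more corner points refine the model at the cost of more regions:
print(diode_pwl(0.3), diode_pwl(0.8))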

Figure 2.11 Backward-difference model of a capacitor: a conductance G = C/ΔT in parallel with a current source I = (C/ΔT)·Vk, so that Ik+1 = (C/ΔT)·(Vk+1 − Vk)

It is also worth pointing out that, by exploiting PWL models for non-linear devices, testability analysis of non-linear circuits can be performed through the methods presented in Section 2.3. Since the testability value is independent of the circuit parameter values, testability evaluation and ambiguity group determination can be performed starting from the symbolic network functions obtained by replacing the non-linear devices with their PWL models and assigning arbitrary values both to the parameters corresponding to linear components and to the ones corresponding to PWL models of non-linear components [52].

2.5.2 Transient analysis models for reactive components

The reactive components are made time independent by using the backward-difference algorithm; the corresponding circuit models are constituted by a conductance in parallel with a current source [50]. In Figure 2.11 the model of a capacitor is shown as an example: the conductance value is a function of the sampling time ΔT and of the capacitance value, while the current value depends on the sampling time ΔT, the capacitance value and the voltage value at the previous time step. In this way, neither the Laplace variable nor integro-differential operations are used, and the circuit becomes, from the symbolic analysis point of view, memoryless and, from the numerical simulation point of view, time discrete.
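The following sketch applies the companion model of Figure 2.11 to a simple series-R, shunt-C low-pass (an illustrative circuit, not a SAPDEC-generated simulator): at each step the capacitor becomes G = C/ΔT in parallel with I = G·v_k, and the resulting resistive node equation is solved directly:

def step_rc(v_prev, v_in, R, C, dT):
    """One backward-difference step for a series-R, shunt-C divider. The
    capacitor becomes G = C/dT in parallel with I = G*v_prev (Figure 2.11);
    node equation: (v - v_in)/R + G*v - I = 0."""
    G = C / dT                      # companion conductance
    I = G * v_prev                  # companion (history) current source
    return (v_in / R + I) / (1.0 / R + G)

# step response of the RC low-pass, time constant RC = 1 ms:
v, dT = 0.0, 1e-5
for _ in range(100):                # simulate up to t = 1 ms
    v = step_rc(v, 1.0, R=1e3, C=1e-6, dT=dT)
print(v)                            # close to 1 - exp(-1) = 0.632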

2.5.3 The Katznelson-type algorithm

The Katznelson algorithm is an iterative process which allows one to determine the d.c. solution of a PWL circuit [53].
It must be noted that, using a symbolic approach, the time domain simulation can be obtained with an algorithm derived from the standard Katznelson algorithm, but simpler. The difference consists mainly in the fact that, with the symbolic approach, the program works on the closed-form expressions of the network functions; then, for each step of the algorithm, there is not a linear system solution, but only an expression evaluation [50].
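A skeleton of this simplified loop is sketched below; expr_eval and region_of are assumed interfaces (the closed-form expression for each PWL region and the region classifier), and the bounded inner loop handles region crossings:

def simulate(expr_eval, region_of, v0, inputs, max_region_updates=16):
    """Skeleton of the simplified Katznelson-type loop: at each time step the
    new solution comes from evaluating the closed-form expression of the
    current PWL region; if it lands in another region, re-evaluate there.
    expr_eval(region, u, v_prev) and region_of(v) are assumed interfaces."""
    v, out = v0, []
    for u in inputs:
        region = region_of(v)
        v_new = expr_eval(region, u, v)          # expression evaluation only
        for _ in range(max_region_updates):      # bounded region tracking
            new_region = region_of(v_new)
            if new_region == region:
                break
            region = new_region
            v_new = expr_eval(region, u, v)
        v = v_new
        out.append(v)
    return out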

2.5.4 Circuit fault diagnosis application

Following the approach outlined, the program package SAPDEC is able to generate
simulators devoted to any part of a suitably partitioned circuit.
The program can be used to realize a library of devoted simulators. Each simulator
of the library is devoted to a part of the circuit and can be directly used for circuit
fault diagnosis. The input signals for these simulators can be constituted by the actual
signals on the CUT, suitably measured and stored on a file. The circuit responses,
produced by the simulators and stored in another file, can be compared by means
of qualitative and/or quantitative methods with the actual responses measured on the
CUT. From this comparison, it is possible to test the correctness of the behaviour
of the considered part. When a faulty part has been located, it is possible to locate
the fault at a component level. This last phase will require repeated simulations with
variations of component values.
The realization of the simulator library for a given piece of equipment requires a preliminary phase constituted by the decomposition of the CUT into small parts by means of suitable partitioning techniques. Then, for each part, the following operations have
to be carried out:
1. Representation of non-linear components by means of appropriate PWL
models.
2. Symbolic analysis of each part with the generation of the required network
functions as C language statements.
3. Generation of the devoted simulator by means of the compilation of the produced functions and linking with a standard module, in C language, which
implements a PWL simulation technique by means of a Katznelson-type
algorithm.
The above-mentioned operations are performed, in an automatic and autonomous
way, by the program SAPDEC.
An important characteristic of the proposed approach is constituted by the fact that the input signals for the simulators are the actual signals on the CUT, measured in fault conditions. In fact, test signals are usually used for fault diagnosis instead of actual input signals; this can be very difficult to do and may not reflect the real behaviour in many cases, as, for example, in equipment with strong feedback and with internally generated signals, such as d.c.–d.c. converters.
As regards the decomposition of the CUT, some considerations can be made. At present, this partition is not performed automatically and has to be done by the user, who must try to obtain a trade-off among several objectives. An important point is the
choice of the size of the blocks; they must be small, not only to obtain faster simulators,
but also to have blocks characterized by a high testability, in which it is possible to
determine the faulty components starting from the measurements performed on I/O
nodes.
On the other hand, very small blocks increase the cost of the test because they
complicate the phase of location of the faulty block (or blocks) and, generally, they
involve a high number of I/O nodes (which, obviously, must be accessible).


A possible way to follow is the iterative use of existing partitioning techniques [54–56] and of the above-presented algorithms for testability computation applied to the obtained parts, until a good trade-off is obtained.

2.5.5 The SAPDEC program

In Figure 2.12 the block diagram of SAPDEC is shown. The program requires an ASCII file describing the CUT.
The form of this file is, in many respects, similar to that required by the SPICE program. Each device of the circuit is represented in the input file by one line. The allowed linear components are a conductor, an inductor, a capacitor, independent voltage and current sources, the four controlled sources, a mutual inductance and the ideal transformer. The non-linear components are the following: a non-linear conductor, a diode, a voltage-controlled switch, an operational amplifier and bipolar and MOS transistors. Suitable commands, included in the input file, must be used to communicate to the program the list of output nodes and the names of the files containing the input signal samples and the component values.
Once the program has started, all the component numerical values (both linear and
non-linear) are stored and both non-linear and reactive components are automatically
replaced by the corresponding equivalent circuits. Then the symbolic evaluation of
the requested network functions is carried out by means of a program written in the
C++ language. These network functions are generated in the form of C language statements and are automatically assembled with the standard module, in the C language,

Figure 2.12 Block diagram of SAPDEC operations: a SPICE-like circuit description is processed by SAPDEC to generate the symbolic network functions, which are passed through a C compiler together with the simulation algorithm module to produce the devoted simulator



Figure 2.13 Devoted simulator operation flow diagram: the devoted simulator reads the input file and the component values file and produces the output file

which implements the Katznelson-type simulation algorithm. Finally, the compilation and linking of the obtained source program are performed automatically, so realizing the devoted simulator, which is independent of the component values, the input sample values and the sampling time. The obtained simulator is able to produce a file containing the output signal samples (the result of the simulation), starting from the input sample file (which also contains the sampling time used) and from the component value file. The non-linear component values are given in the form of a list of parameters for the semiconductor components and for the operational amplifier (for example, $h_{fe}$, $V_{beo}$, $V_{cesat}$, etc., for the bipolar transistor), while, for non-linear conductors and diodes, the slope (in the form of a conductance) and the corner voltage of each linearity region of the PWL characteristic are given. For these last elements, all the corner current numerical values are automatically calculated by assuming that the corresponding characteristic crosses the point (0, 0); this assumption makes the corner point determination unique without loss of generality, because all the considered components have an I–V characteristic crossing the point (0, 0) (components such as photovoltaic devices, for example, have not been considered).
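As a concrete illustration of this convention, the following hypothetical sketch (not part of SAPDEC) computes the corner currents of a PWL characteristic from the per-region slopes and corner voltages by integrating the characteristic outward from the anchor point (0, 0):

```c
#include <stdio.h>

/* Given n regions with slopes g[0..n-1] (conductances) and n-1
 * corner voltages vc[0..n-2] separating them, compute the corner
 * currents ic[0..n-2] by walking outward from the point (0, 0),
 * assumed to lie in region k0.  Illustrative sketch only.        */
void corner_currents(const double *g, const double *vc,
                     double *ic, int n, int k0)
{
    double i, v;
    int k;
    i = 0.0; v = 0.0;                     /* walk right from (0,0) */
    for (k = k0; k <= n - 2; k++) {
        i += g[k] * (vc[k] - v);
        v = vc[k];
        ic[k] = i;
    }
    i = 0.0; v = 0.0;                     /* walk left from (0,0)  */
    for (k = k0 - 1; k >= 0; k--) {
        i -= g[k + 1] * (v - vc[k]);      /* region k+1 spans it   */
        v = vc[k];
        ic[k] = i;
    }
}

int main(void)
{
    /* Three regions with slopes 0.1, 1.0 and 0.1 S, corners at
     * -0.5 V and 0.5 V, and (0, 0) inside the middle region.      */
    double g[3] = { 0.1, 1.0, 0.1 }, vc[2] = { -0.5, 0.5 }, ic[2];
    corner_currents(g, vc, ic, 3, 1);
    printf("ic = (%g, %g) A\n", ic[0], ic[1]);    /* (-0.5, 0.5) */
    return 0;
}
```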
In Figure 2.13 the devoted simulator flow diagram is shown. Once the simulator has been realized, repeated simulations can be obtained very quickly by changing only the files containing, respectively, the component values and the input samples.
Suitable PWL models have been chosen by the authors for the transistors and operational amplifiers so as to achieve a trade-off between a low number of components and simulation accuracy, taking into account the specific application requirements [50]. The models presented in Reference 50 are very simple, so the obtainable simulation accuracy is not very high, but it is acceptable for fault diagnosis as well as for other application fields. Obviously, the use of more complex models permits more accurate results to be obtained.
Summarizing, the simulators produced by SAPDEC have the following characteristics:
• They are very compact.
• They are very fast.
• The input signal samples are read from a file.
• The output signal samples (the simulation result) are stored in a file.
• The component values are read from a file.

It is worth pointing out that the files for the input signals and for the output signals have the same structure. It is therefore also possible to use, as input signals for a given block, the simulated output signals of another block.

2.6 Conclusions

An overview of the application of symbolic methodologies in the field of fault diagnosis of analogue circuits has been presented.
It is important to remark that in analogue fault diagnosis two phases can be considered: the first is testability analysis, whereas the second is fault location. As far as testability analysis is concerned, the symbolic approach gives excellent results. For fault location, the symbolic approach is not the only possible one [57, 58], since good results can also be obtained by using, for example, neural networks or genetic algorithms. However, since testability analysis is a preliminary step of analogue fault diagnosis, necessary for whatever kind of fault location procedure is adopted, the symbolic approach is in any case very useful and can give a noteworthy contribution to closing the gap between the analogue and digital fields.

2.7 References

1 Bushnell, M.L., Agrawal, V.D.: Essentials of Electronic Testing for Digital, Memory and Mixed-Signal VLSI Circuits (Kluwer, Norwell, MA, 2000), ch. 10, p. 310
2 Gielen, G.G., Sansen, W.: Symbolic Analysis for Automated Design of Analog Integrated Circuits (Kluwer, Boston, MA, 1991)
3 Huelsman, L.P., Gielen, G.G.: Symbolic Analysis of Analog Circuits: Techniques and Applications (Kluwer, Boston, MA, 1993)
4 Fernandez, F.V., Rodriguez-Vazquez, A., Huertas, J., Gielen, G.G.: Symbolic Analysis Techniques: Applications to Analog Design Automation (IEEE Press, New York, 1998)
5 Wambacq, P., Gielen, G.G., Sansen, W.: Symbolic network analysis methods for practical analog integrated circuits: a survey, IEEE Transactions on Circuits and Systems II, 1998;45:1331–41
6 Tan, S.X.D., Shi, C.J.R.: Efficient approximation of symbolic expressions for analog behavioral modeling and analysis, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2004;23:907–18
7 Gielen, G., Walscharts, H., Sansen, W.: Analog circuit design optimization based on symbolic simulation and simulated annealing, IEEE Journal of Solid State Circuits, 1990;25:707–13
8 Berkowitz, R.S.: Conditions for network-element-value solvability, IEEE Transactions on Circuit Theory, 1962;9:24–9
9 Saeks, R.: A measure of testability and its application to test point selection theory, Proceedings of 20th Midwest Symposium on Circuits and Systems, Texas Tech University, Lubbock, TX, 1977, pp. 576–83
10 Sen, N., Saeks, R.: Fault diagnosis for linear systems via multifrequency measurement, IEEE Transactions on Circuits and Systems, 1979;26:457–65
11 Chen, H.M.S., Saeks, R.: A search algorithm for the solution of multifrequency fault diagnosis equations, IEEE Transactions on Circuits and Systems, 1979;26:589–94
12 Saeks, R., Sen, N., Chen, H.M.S., Lu, K.S., Sangani, S., De Carlo, R.A.: Fault Analysis in Electronic Circuits and Systems, Technical report, Inst. for Electron. Sci., Texas Tech Univ., Lubbock, TX, 1978
13 Temes, G.: Efficient method of fault simulation, Proceedings of 20th Midwest Symposium on Circuits and Systems, Texas Tech Univ., Lubbock, TX, 1977
14 Dejka, W.J.: A review of measures of testability for analog systems, Proceedings of AUTOTESTCON, Massachusetts, 1977, pp. 115–22
15 Priester, R.W., Clary, J.B.: New measures of testability and test complexity for linear analog failure analysis, IEEE Transactions on Circuits and Systems, 1981;28:1088–92
16 Bandler, J.W., Salama, A.E.: Fault diagnosis of analog circuits, Proceedings of the IEEE, 1985;73:1279–325
17 Starzyk, J.A., Dai, H.: Multifrequency measurements of testability in analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Philadelphia, PA, 1987
18 Stenbakken, G.N., Souders, T.M.: Test point selection and testability measures via QR factorization of linear models, IEEE Transactions on Instrumentation and Measurement, 1987;36:406–10
19 Stenbakken, G.N., Souders, T.M., Stewart, G.W.: Ambiguity groups and testability, IEEE Transactions on Instrumentation and Measurement, 1989;38:941–7
20 Fedi, G., Manetti, S., Piccirilli, M.C., Starzyk, J.: Determination of an optimum set of testable components in the fault diagnosis of analog linear circuits, IEEE Transactions on Circuits and Systems I, 1999;46:779–87
21 Fedi, G., Manetti, S., Piccirilli, M.C.: Comments on linear circuit fault diagnosis using neuromorphic analyzers, IEEE Transactions on Circuits and Systems II, 1999;46:483–5
22 Iuculano, G., Liberatore, A., Manetti, S., Marini, M.: Multifrequency measurement of testability with application to large linear analog systems, IEEE Transactions on Circuits and Systems, 1986;33:644–8
23 Catelani, M., Iuculano, G., Liberatore, A., Manetti, S., Marini, M.: Improvements to numerical testability evaluation, IEEE Transactions on Instrumentation and Measurement, 1987;36:902–7
24 Carmassi, R., Catelani, M., Iuculano, G., Liberatore, A., Manetti, S., Marini, M.: Analog network testability measurement: a symbolic formulation approach, IEEE Transactions on Instrumentation and Measurement, 1991;40:930–5
25 Liberatore, A., Manetti, S., Piccirilli, M.C.: A new efficient method for analog circuit testability measurement, Proceedings of IEEE Instrumentation and Measurement Technology Conference, Hamamatsu, Japan, 1994, pp. 193–6
26 Catelani, M., Fedi, G., Luchetta, A., Manetti, S., Marini, M., Piccirilli, M.C.: A new symbolic approach for testability measurement of analog networks, Proceedings of MELECON'96, Bari, Italy, 1996, pp. 517–20
27 Fedi, G., Luchetta, A., Manetti, S., Piccirilli, M.C.: A new symbolic method for analog circuit testability evaluation, IEEE Transactions on Instrumentation and Measurement, 1998;47:554–65
28 Manetti, S., Piccirilli, M.C.: A singular-value decomposition approach for ambiguity group determination in analog circuits, IEEE Transactions on Circuits and Systems I, 2003;50:477–87
29 Grasso, F., Manetti, S., Piccirilli, M.C.: A program for ambiguity group determination in analog circuits using singular-value decomposition, Proceedings of ECCTD'03, Cracow, Poland, 2003, pp. 57–60
30 Liberatore, A., Manetti, S.: SAPEC – a personal computer program for the symbolic analysis of electric circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Helsinki, Finland, 1988, pp. 897–900
31 Manetti, S.: A new approach to automatic symbolic analysis of electric circuits, IEE Proceedings Circuits, Devices and Systems, 1991;138:22–8
32 Liberatore, A., Manetti, S.: Network sensitivity analysis via symbolic formulation, Proceedings of IEEE International Symposium on Circuits and Systems, Portland, OR, 1989, pp. 705–8
33 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: Symbolic algorithm for ambiguity group determination in analog fault diagnosis, Proceedings of ECCTD'97, Budapest, Hungary, 1997, pp. 1286–91
34 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: On the application of symbolic techniques to the multiple fault location in low testability analog circuits, IEEE Transactions on Circuits and Systems II, 1998;45:1383–8
35 Liberatore, A., Luchetta, A., Manetti, S., Piccirilli, M.C.: A new symbolic program package for the interactive design of analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Seattle, WA, 1995, pp. 2209–12
36 Luchetta, A., Manetti, S., Piccirilli, M.C.: A Windows package for symbolic and numerical simulation of analog circuits, Proceedings of Electrosoft'96, San Miniato, Italy, 1996, pp. 115–23
37 Luchetta, A., Manetti, S., Reatti, A.: SAPWIN – a symbolic simulator as a support in electrical engineering education, IEEE Transactions on Education, 2001;44:9 and CD-ROM supplement
38 Starzyk, J., Pang, J., Fedi, G., Giomi, R., Manetti, S., Piccirilli, M.C.: A software program for ambiguity group determination in low testability analog circuits, Proceedings of ECCTD'99, Stresa, Italy, 1999, pp. 603–6
39 Starzyk, J., Pang, J., Manetti, S., Piccirilli, M.C., Fedi, G.: Finding ambiguity groups in low testability analog circuits, IEEE Transactions on Circuits and Systems I, 2000;47:1125–37
40 Golub, G.H., van Loan, C.F.: Matrix Computations (Johns Hopkins University Press, Baltimore, MD, 1983)
41 Liu, R.: Testing and Diagnosis of Analog Circuits and Systems (Van Nostrand Reinhold, New York, 1991)
42 Huertas, J.L.: Test and design for testability of analog and mixed-signal integrated circuits: theoretical basis and pragmatical approaches, Proceedings of ECCTD'93, Davos, Switzerland, 1993, pp. 75–151
43 Cannas, B., Fanni, A., Manetti, S., Montisci, A., Piccirilli, M.C.: Neural network-based analog fault diagnosis using testability analysis, Neural Computing and Applications, 2004;13:288–98
44 Fedi, G., Liberatore, A., Luchetta, A., Manetti, S., Piccirilli, M.C.: A symbolic approach to the fault location in analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Atlanta, GA, 1996, pp. 810–3
45 Catelani, M., Fedi, G., Giraldi, S., Luchetta, A., Manetti, S., Piccirilli, M.C.: A new symbolic approach to the fault diagnosis of analog circuits, Proceedings of IEEE Instrumentation and Measurement Technology Conference, Brussels, Belgium, 1996, pp. 118–25
46 Catelani, M., Fedi, G., Giraldi, S., Luchetta, A., Manetti, S., Piccirilli, M.C.: A fully automated measurement system for the fault diagnosis of analog electronic circuits, Proceedings of XIV IMEKO World Congress, Tampere, Finland, 1997, pp. 52–7
47 Fedi, G., Luchetta, A., Manetti, S., Piccirilli, M.C.: Multiple fault diagnosis of analog circuits using a new symbolic approach, Proceedings of 6th International Workshop on Symbolic Methods and Application in Circuit Design, Lisbon, Portugal, 2000, pp. 139–43
48 Grasso, F., Luchetta, A., Manetti, S., Piccirilli, M.C.: Symbolic techniques for the selection of test frequencies in analog fault diagnosis, Analog Integrated Circuits and Signal Processing, 2004;40:205–13
49 Grasso, F., Manetti, S., Piccirilli, M.C.: An approach to analog fault diagnosis using genetic algorithms, Proceedings of MELECON'04, Dubrovnik, Croatia, 2004, pp. 111–14
50 Manetti, S., Piccirilli, M.C.: Symbolic simulators for the fault diagnosis of nonlinear analog circuits, Analog Integrated Circuits and Signal Processing, 1993;3:59–72
51 Vlach, J., Singhal, K.: Computer Methods for Circuit Analysis and Design, 2nd edn (Van Nostrand Reinhold, New York, 1994)
52 Fedi, G., Giomi, R., Manetti, S., Piccirilli, M.C.: A symbolic approach for testability evaluation in fault diagnosis of nonlinear analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Monterey, CA, 1998, pp. 9–12
53 Katznelson, J.: An algorithm for solving nonlinear resistor networks, Bell System Technical Journal, 1965;44:1605–20
54 Konczykowska, A., Starzyk, J.: Computer analysis of large signal flowgraphs by hierarchical decomposition methods, Proceedings of ECCTD'80, Warsaw, Poland, 1980, pp. 408–13
55 Starzyk, J.: Signal-flow-graph analysis by decomposition method, IEE Proceedings Circuits, Devices and Systems, 1980;127:81–6
56 Konczykowska, A., Starzyk, J.: Flowgraph analysis of large electronic networks, IEEE Transactions on Circuits and Systems, 1986;33:302–15
57 Luchetta, A., Manetti, S., Piccirilli, M.C.: Critical comparison among some analog fault diagnosis procedures based on symbolic techniques, Proceedings of DATE'02, Paris, France, 2002, p. 1105
58 Grasso, F., Luchetta, A., Manetti, S., Piccirilli, M.C.: Symbolic techniques in parametric fault diagnosis of analog circuits, Proceedings of BEC'02, Tallinn, Estonia, 2002, pp. 271–4

Chapter 3

Neural-network-based approaches for analogue circuit fault diagnosis

Yichuang Sun and Yigang He

3.1 Introduction

Fault diagnosis of analogue circuits has been an active research area since the 1970s. Various useful techniques have been proposed in the literature, such as the fault dictionary technique, the parameter identification technique and the fault verification method [1–11]. The fault dictionary technique is widely used in practical engineering applications because of its simplicity and effectiveness. However, the traditional fault dictionary technique can only detect hard faults and its application is largely limited to small to medium-sized analogue circuits [5]. To solve these problems, several artificial neural network (ANN)-based approaches have been proposed for analogue fault diagnosis and they have proved to be very promising [12–25]. The neural-network-based fault dictionary technique [20–23] can locate and identify not only hard faults but also soft faults, because neural networks are capable of robust classification even in noisy environments. Furthermore, in the neural-network-based fault dictionary technique, looking up a dictionary to locate and identify faults is actually carried out at the same time as setting up the dictionary. It thus reduces the computational effort and has better real-time features. The method is also suitable for large-scale analogue circuits.
More recently, wavelet-based techniques have been proposed for fault diagnosis and testing of analogue circuits [18, 19, 24, 25]. References 18 and 19 develop a
neural-network-based fault diagnosis method using wavelet transform as a preprocessor to reduce the number of input features to the neural network. However, selecting
the approximation coefficients as the features from the output node of the circuit
and treating the details as noise and setting them to zero may lead to the loss of valid
information, thus resulting in a high probability of ambiguous solutions and low diagnosability. Also, additional processors are needed to decompose the details, resulting


in an increase in the computation cost. In References 24 and 25 the authors use a


wavelet transform and packets to extract appropriate feature vectors from the signals
sampled from the circuit under test (CUT) under various faulty conditions. Candidate
features are generated from the test points by wavelet de-noising and wavelet decomposition and optimal feature vectors are selected to train the wavelet neural networks
(WNNs) by principal component analysis (PCA) and normalization of approximation
and detail coefficients.
Neural networks have also been used as an optimization method for fault diagnosis
[26, 27]. Among the many fault diagnosis methods, the L1 optimization technique
is a very important parameter identification approach [28, 29], which is insensitive
to tolerances. This method has been successfully used to isolate the most likely
faulty elements in linear analogue circuits and when combined with neural networks
real-time testing becomes possible for linear circuits with tolerances. Several fault
verification methods have been proposed for non-linear circuit fault diagnosis [30–34]. On the basis of these linearization principles, parameter identification methods
can be developed for non-linear circuits. In particular, the L1 optimization method
can be extended and modified for fault diagnosis of non-linear circuits with tolerances
[26, 27]. Neural networks can also be used to make the method more effective and
faster for non-linear circuit fault location [26, 27].
This chapter is concerned with fault diagnosis of analogue circuits using neural networks. Neural-network-based dictionary methods for analogue circuit fault diagnosis will be discussed in Section 3.2. Fault diagnosis of analogue circuits with noise, using a wavelet transform and neural networks, will be described in Section 3.3. In Section 3.4, a neural-network-based L1-norm optimization technique for fault diagnosis of non-linear analogue circuits with tolerances will be introduced. A summary is given in Section 3.5.

3.2 Fault diagnosis of analogue circuits with tolerances using artificial neural networks

Component tolerances, non-linearity and a poor fault model make analogue fault
location particularly challenging. Generally, tolerance effects make the parameter
values of circuit components uncertain and the computational equations of traditional methods complex. The non-linear characteristic of the relation between the
circuit performance and its constituent components makes it even more difficult to
diagnose faults online and may lead to a false diagnosis. To overcome these problems, a robust and fast fault diagnosis method taking tolerances into account is
thus needed. ANNs have the advantages of large-scale parallel processing, parallel storing, robust adaptive learning and online computation. They are therefore ideal
for fault diagnosis of analogue circuits with tolerances. The process of creating a
fault dictionary, memorizing the dictionary and verifying it can be simultaneously
completed by ANNs, thus the computation time can be reduced enormously. The
robustness of ANNs can effectively deal with tolerance effects and measurement noise
as well.


This section discusses methods for analogue fault diagnosis using neural networks [20, 21]. The primary focus is to provide a robust diagnosis using a mechanism to deal with the problem of component tolerances and to reduce the testing time. The approach is based on the k-fault diagnosis method and backward propagation neural networks (BPNNs). Section 3.2.1 describes ANNs (especially BPNNs). Section 3.2.2 discusses the theoretical basis and framework of fault diagnosis of analogue circuits. The neural-network-based diagnosis method is described in Section 3.2.3. Section 3.2.4 addresses fault location of large-scale analogue circuits using ANNs. Simulation results of two examples are presented in Section 3.2.5.

3.2.1 Artificial neural networks

In recent years, ANNs have received considerable attention from the research community and have been applied successfully in various fields, such as chemical processes,
digital circuitry, and control systems. This is because ANNs provide a mechanism for
adaptive pattern classification. Even in unfavourable environments, they can still have
robust classification. Choosing a suitable ANN architecture is vital for the successful
application of ANNs. To date the most popular ANN architecture is the BPNN. One
of the significant features of neural networks when applied in fault diagnosis and
testing is that online diagnosis is fast once the network is trained. In addition, ANN
classifiers require fewer fault features than traditional classifiers. Furthermore, neural
networks are capable of performing fault classification at hierarchical levels.
On the basis of learning strategies, ANNs fall into one of two categories: supervised and unsupervised. The BPNN is a supervised network. Typical BPNNs have two
or three layers of interconnecting weights. Figure 3.1 shows a standard three-layer
neural network. Each input node is connected to a hidden layer node and each node
of the final hidden layer is connected to an output node in a similar way. The nodes
of hidden layers are connected to each other as well. This makes the BPNN a fully
connected network topology. Learning takes place during the propagation of input
patterns from the input nodes to the output nodes. The outputs are compared with the
desired target values and an error is produced. Then the weights are adapted to minimize the error. Since the desired target values should be known, this is a supervised
learning process.

Figure 3.1 BPNN architecture (a standard three-layer network with input vector x and output vector y)


Consider an s-layer, m-input and b-output BPNN with input vector $x = [x_1\ x_2\ \cdots\ x_m]^T$ and output $y = [y_1\ y_2\ \cdots\ y_b]^T$. The hidden layers and the output layer can be described as general layers, with $I_i^{(s)}$ and $O_i^{(s)}$ the input and output of the $i$th neuron in the $s$th layer, respectively. $I_i^{(s)}$ and $O_i^{(s)}$ are defined as

$$I_i^{(s)} = \sum_{j=1}^{B} W_{ij}^{(s)} O_j^{(s-1)}, \quad i = 1, 2, \ldots, A \tag{3.1}$$

$$O_i^{(s)} = f_s\big(I_i^{(s)}\big) \tag{3.2}$$

where A and B are the numbers of neurons of the sth and the (s-1)th layer, respectively, and $W_{ij}^{(s)}$ represents the weight connecting the jth neuron of the (s-1)th layer to the ith neuron of the sth layer. The function $f_s(\cdot)$ is the limiting function through which $I_i^{(s)}$ is passed; it must be non-decreasing and differentiable everywhere. A common limiting function is the sigmoid:

$$f_s(I) = \frac{1}{1 + \exp(-I)} \tag{3.3}$$

The generalized delta rule, which performs a gradient descent over an error surface, is used to adapt the weights. The initial values of the weights are taken as random numbers evenly distributed between $-0.5$ and $0.5$.

For an input pattern P of the BPNN, the output error of the output layer can be calculated as

$$E_P = \frac{1}{2} \sum_i (y_i - d_i)^2$$

where $d_i$ is the expected output of the ith output node in the output layer.

The error signal at the jth node in the sth layer is generally given by

$$\delta_{jP}^{(s)} = -\frac{\partial E_P}{\partial I_{jP}^{(s)}}$$

For the output layer, the error signal is

$$\delta_{jP}^{(s)} = \big(d_P - O_{jP}\big)\, f'\big(I_{jP}^{(s)}\big)$$

where $d_P$ and $O_{jP}$ are the target and actual output values, respectively. For a hidden layer, the error signal is

$$\delta_{jP}^{(s)} = \left(\sum_{i=1}^{C} \delta_{iP}^{(s+1)} W_{ijP}^{(s+1)}\right) f'\big(I_{jP}^{(s)}\big)$$

where C is the number of nodes of the (s+1)th layer.

Thus, we can derive

$$-\frac{\partial E_P}{\partial W_{ijP}^{(s)}} = \delta_{jP}^{(s)}\, O_{jP}^{(s-1)}$$

The weight adaptation is defined as

$$W_{ijP}^{(s)}(T+1) = W_{ijP}^{(s)}(T) - \eta\, \frac{\partial E_P(T+1)}{\partial W_{ijP}^{(s)}(T)} + \alpha\, \Delta W_{ijP}^{(s)}(T-1) \tag{3.4}$$

where $\eta$ is the learning rate with $0 < \eta < 1$, $\alpha$ is the momentum factor, and the term $\alpha\, \Delta W_{ijP}^{(s)}(T-1)$ is added to improve the speed of learning; generally $\alpha = 0.9$.
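For concreteness, the following is a minimal sketch of one training step implementing Equations (3.1)–(3.4) for a network with a single hidden layer (the text later uses two); the layer sizes, rates and training data here are illustrative assumptions, not the authors' implementation:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define M 3                    /* inputs (testing nodes)      */
#define H 8                    /* hidden neurons              */
#define B 8                    /* outputs (circuit elements)  */

static double W1[H][M], W2[B][H];     /* weights of the two layers   */
static double dW1[H][M], dW2[B][H];   /* previous updates (momentum) */

static double sigmoid(double I) { return 1.0 / (1.0 + exp(-I)); }

static void init_weights(void)  /* evenly distributed in (-0.5, 0.5) */
{
    for (int i = 0; i < H; i++) for (int j = 0; j < M; j++)
        W1[i][j] = (double)rand() / RAND_MAX - 0.5;
    for (int i = 0; i < B; i++) for (int j = 0; j < H; j++)
        W2[i][j] = (double)rand() / RAND_MAX - 0.5;
}

/* Forward pass (3.1)-(3.2), then delta-rule update with momentum (3.4). */
static double train_step(const double x[M], const double d[B],
                         double eta, double alpha)
{
    double O1[H], O2[B], e2[B], e1[H], E = 0.0;
    for (int i = 0; i < H; i++) {
        double I = 0.0;
        for (int j = 0; j < M; j++) I += W1[i][j] * x[j];
        O1[i] = sigmoid(I);
    }
    for (int i = 0; i < B; i++) {
        double I = 0.0;
        for (int j = 0; j < H; j++) I += W2[i][j] * O1[j];
        O2[i] = sigmoid(I);
        E += 0.5 * (O2[i] - d[i]) * (O2[i] - d[i]);
        e2[i] = (d[i] - O2[i]) * O2[i] * (1.0 - O2[i]); /* (d-O) f'(I) */
    }
    for (int j = 0; j < H; j++) {          /* hidden-layer error signal */
        double s = 0.0;
        for (int i = 0; i < B; i++) s += e2[i] * W2[i][j];
        e1[j] = s * O1[j] * (1.0 - O1[j]);
    }
    for (int i = 0; i < B; i++) for (int j = 0; j < H; j++) {
        dW2[i][j] = eta * e2[i] * O1[j] + alpha * dW2[i][j];
        W2[i][j] += dW2[i][j];
    }
    for (int i = 0; i < H; i++) for (int j = 0; j < M; j++) {
        dW1[i][j] = eta * e1[i] * x[j] + alpha * dW1[i][j];
        W1[i][j] += dW1[i][j];
    }
    return E;
}

int main(void)
{
    double x[M] = { 0.64, 0.64, 0.43 };        /* illustrative feature */
    double d[B] = { 0, 0, 1, 0, 0, 0, 0, 0 };  /* target: element 3    */
    double E = 1.0;
    init_weights();
    for (int t = 0; t < 5000 && E >= 0.03; t++)
        E = train_step(x, d, 0.5, 0.9);
    printf("final error %g\n", E);
    return 0;
}
```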
It can be seen that the BPNN has the following advantages, which make it ideal for fault diagnosis with tolerances:
• The BPNN processes information very quickly. Since the architecture of the BPNN is parallel, neurons can work simultaneously. The computation speed of a highly parallel processing neural network far exceeds that of a traditional computer.
• The BPNN has the function of association and the capability to gain complete fault features from fragmentary information, that is, the BPNN has good robustness.

3.2.2 Fault diagnosis of analogue circuits

The methods for analogue fault diagnosis fall into two categories: simulation before test (SBT) and simulation after test (SAT). The fault dictionary technique is an SBT method [5]. The parameter identification technique [6] and the fault verification method [7–11] belong to the SAT category. The k-branch-fault diagnosis method [7–10] assumes that there are k faults in the circuit and requires that the number of accessible nodes, m, is larger than k. In addition, the change in the value of an element with respect to its nominal value can be represented by a current source in parallel with the element; by verifying whether the substitution source currents are non-zero, we can locate the faulty elements. Reference 15 proposes a neural-network dictionary based on normal dictionary methods and accessible node voltages, while in Reference 14 a neural-network-based dictionary approach is developed using the admittance-function-based parameter identification method. In the following, a neural-network-based SBT method is described using the k-fault verification method.

For a linear circuit with n nodes (excluding the ground node), m accessible nodes and b branches, the k-branch-fault diagnosis equation can be derived as [7–11]

$$\Delta V_m = Z_{mb} J_b \tag{3.5}$$

where $J_b$ is the substitution current source vector, $\Delta V_m$ is the voltage increment vector measured at the testing nodes and $Z_{mb}$ denotes the transfer impedance matrix.

For clarity of derivation we assume no tolerances initially. According to the k-branch-fault diagnosis theory, for k faults corresponding to a k-column submatrix $Z_{mk}$ of $Z_{mb}$, because only those elements $J_k$ corresponding to the k faulty branches in $J_b$ are non-zero, Equation (3.5) becomes

$$\Delta V_m = Z_{mk} J_k \tag{3.6}$$


This equation is compatible, that is, rank$[Z_{mk}\ \ \Delta V_m] = k$, and can be solved to give the solution

$$J_k = \big(Z_{mk}^T Z_{mk}\big)^{-1} Z_{mk}^T\, \Delta V_m$$

For a single fault occurring in the circuit (k = 1), $Z_{mk}$ becomes a single column vector $Z_{mf}$ and $J_k$ a single variable $J_f$, where f is the number of the faulty branch. Denoting $Z_{mf} = [z_{1f}\ z_{2f}\ \cdots\ z_{mf}]^T$ and $\Delta V_m = [\Delta v_1\ \Delta v_2\ \cdots\ \Delta v_m]^T$, it can be derived that

$$\Delta v_i = D\, z_{if}, \quad i = 1, 2, \ldots, m \tag{3.7}$$

where D is any non-zero constant.

Equation (3.7) can be further written as

$$\frac{\Delta v_i}{\sqrt{\sum_{j=1}^{m} \Delta v_j^2}} = \pm \frac{z_{if}}{\sqrt{\sum_{j=1}^{m} z_{jf}^2}} \tag{3.8}$$

where $i = 1, 2, \ldots, m$. Thus, single-fault diagnosis becomes the checking of Equation (3.8) for all b branches ($f = 1, 2, \ldots, b$).
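Equation (3.8) holds exactly when the normalized measurement vector and the normalized column for branch f agree up to sign, that is, when the two vectors are collinear. A minimal sketch of this check follows (illustrative code; the threshold tol is an assumption used to absorb numerical error, since the text handles tolerance effects through the BPNN instead):

```c
#include <math.h>

/* Does the measured increment vector dv[0..m-1] match, up to sign
 * and scale, the f-th column z[0..m-1] of Zmb, as required by
 * Eq (3.8)?  Returns 1 for a match.                              */
int single_fault_match(const double *dv, const double *z,
                       int m, double tol)
{
    double ndv = 0.0, nz = 0.0, dot = 0.0;
    for (int i = 0; i < m; i++) {
        ndv += dv[i] * dv[i];
        nz  += z[i]  * z[i];
        dot += dv[i] * z[i];
    }
    /* |cos| = |dot| / (||dv|| ||z||) equals 1 iff Eq (3.8) holds;
     * accept values within tol of 1 to allow for rounding.       */
    return fabs(dot) >= (1.0 - tol) * sqrt(ndv) * sqrt(nz);
}
```

Running this check for f = 1, 2, ..., b singles out the candidate faulty branches.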
The k-branch-fault diagnosis method can effectively locate faults in circuits without tolerances. However, for circuits with tolerances, the values of $\Delta V_m$ and of the fault features are influenced by the tolerances, which makes the contributions of the faults to these two quantities ambiguous, and the testing process becomes slow. In this situation, fault location results may not be accurate and sometimes a false diagnosis may result. Fortunately, the memorizing and associating capabilities of ANNs can make up for this. In order to improve the online characteristics and achieve robustness of diagnosis, we present a method that combines the k-fault diagnosis method with the highly parallel processing BPNN in the next section.

3.2.3 Fault diagnosis using ANNs

Using the BPNN to diagnose faults in analogue circuits with tolerances involves
establishing the neural-network structure, generating fault features, forming sample
training groups, training the network and diagnosing the faults [20, 21]. In the diagnosis system the BPNN functions as a classifier. For simplicity we consider a single
soft-fault diagnosis. We select m testing nodes according to the topology of the faulty
circuit with b components.
3.2.3.1 The BPNN algorithm
A neural network with two hidden layers, m inputs and b outputs is adopted. As far as the hidden neurons are concerned, their number is determined by the complexity of the circuit and the difficulty of classifying the faults. Generally speaking, the more elements there are in the circuit, the larger the number of hidden nodes needed. The selection of the number of layers, the number of nodes in a layer, the activation functions and the initial values of the weights to be adapted will all affect the learning rate, the

complexity of computation and the effectiveness of the neural network for a specific
problem. To date, there is no absolute rule to design the structure of a BPNN and
results are very much empirical in nature.
3.2.3.2 Fault feature generation
Using the principles of pattern recognition, feature generation aims at obtaining the essential characteristics of the input patterns. Since the quantities $\Delta v_i / \sqrt{\sum_{j=1}^{m} \Delta v_j^2}$ ($i = 1, 2, \ldots, m$) can be measured and are used to search for the faulty element among all the branches, they are utilized as the fault feature values of the BPNN and form its inputs, of dimension m. The outputs of the BPNN correspond to the circuit elements to be diagnosed and have a dimension of b.
3.2.3.3 Constitution of sample and actual feature groups
By using Equation (3.8), under the condition that all elements have their nominal values, the sample feature values of the circuit can be obtained by calculating $\pm z_{if} / \sqrt{\sum_{j=1}^{m} z_{jf}^2}$ ($i = 1, 2, \ldots, m$), which is determined by the circuit topology. Because the values of elements can both increase and decrease, the minimum sample training groups, with 2b single faults, are formed.

With the excitation current, we can measure $\Delta V_m$ and obtain the actual feature values by calculating $\Delta v_i / \sqrt{\sum_{j=1}^{m} \Delta v_j^2}$ ($i = 1, 2, \ldots, m$), which correspond to the different single faults. The groups of actual feature values are thus formed.
3.2.3.4 Training the BPNN and diagnosing faults
In the diagnosis system, because the tolerances influence the actual feature values, the BPNN is used as a classifier to process the feature values affected by the tolerances and to locate the fault. According to the k-fault diagnosis method and the BPNN algorithm, the 2b samples and large numbers of actual feature values are input to the BPNN to locate single faults. During the process, training requires a large number of iterations before the network converges to a minimal error. Some improvements in the algorithm may be used to reduce the testing time. To locate the fault from the output of the BPNN, we use the rule that if the value of an output node is more than 0.5 (the threshold value of the activation function of the neurons in the output layer), the corresponding element is deemed faulty; otherwise, it is fault free.
The BPNN is not fully trained until it converges to the target value. The above process must be done before the test. After the test, the measured feature vector is applied to the trained BPNN and the output of the BPNN gives the number of the faulty branch/component. The steps involved in the fault diagnosis of a circuit can be summarized as follows:
1. Define the faults of interest.
2. Apply the test signal to the CUT and calculate the feature vectors under the various faulty or fault-free conditions.
3. Pass the feature vectors through the BPNN and train the BPNN.
4. Measure and calculate the practical feature vector and apply it to the trained BPNN.
5. Identify the fault class at the output of the BPNN.

3.2.4 Neural-network approach for fault diagnosis of large-scale analogue circuits

For large-scale circuit diagnosis, a block-based methodology should be used for various practical reasons [22, 23]. Rather than dealing with the whole circuit directly, a large-scale analogue circuit is partitioned into a number of subcircuits according to some rules, for example, according to its structure or function. Each subcircuit matches a certain BPNN and can be diagnosed using the corresponding independent BPNN, as discussed above. Thus, the diagnosis procedure for a large-scale circuit can be summarized as follows [22, 23]:
1. Divide the large-scale circuit into subcircuit 1, subcircuit 2, ..., subcircuit n.
2. Define the faults of interest for each subcircuit.
3. Apply an appropriate signal to the large-scale circuit, measure the accessible node voltages and calculate the feature vectors.
4. For each subcircuit, pass the feature vectors through the corresponding BPNN under the fault-free and different faulty conditions and train the BPNN.
5. Calculate the measured feature vector of subcircuit 1 and apply it through the trained BPNN 1.
6. Identify the fault class at the output of BPNN 1.
7. Repeat steps 5 and 6 for subcircuits 2, 3, ..., until the last subcircuit n.

3.2.5 Illustrative examples

3.2.5.1 Example 1
The neural-network-based fault diagnosis method is first illustrated using the circuit
shown in Figure 3.2.
In Figure 3.2, there are eight resistors. The nominal value of each resistor is 1 Ω and each element has a tolerance of 5 per cent.

Figure 3.2 A resistive circuit (eight resistors, R1–R8; nodes 1, 2 and 3 are marked in the schematic)

Table 3.1 Sample feature values of R3

                      X0         X1         X2
R3 (increase)       0.6396     0.6396     0.4264
R3 (decrease)      -0.6396    -0.6396    -0.4264

Table 3.2 Results of BPNN in diagnosis

R3 value (ohms)      X0         X1         X2       Output node 3 (R3) value    Maximum value in other output nodes
0.2                -0.6486    -0.6305    -0.4264    0.8495                      0.1195
0.9                -0.6529    -0.6233    -0.4304    0.8491                      0.1194
1.2                 0.6439     0.6352     0.4260    0.9480                      0.0569
2.5                 0.6482     0.6412     0.4259    0.9510                      0.0546

Suppose that a single soft fault has occurred in the circuit. According to the topology of the circuit, three testing nodes are selected, numbered 1, 3 and 4. Thus, the BPNN has three input nodes in the input layer and eight output nodes in the output layer. In addition, two hidden layers with eight hidden nodes each are designed. The BPNN algorithm is simulated in the C language, and PSpice is used to simulate the circuit to obtain $\Delta V_m$. Because the diagnosis principle is the same for every resistor in the circuit, we arbitrarily select R3 as an example to demonstrate the method. The sample feature values of R3 (X0, X1, X2) are calculated and shown in Table 3.1. These sample feature values of R3 are input to the BPNN so that the BPNN is trained and can memorize the information learned. After over 5000 training iterations, when the overall error is less than 0.03, the training of the BPNN is completed and the knowledge of the sample features is stored in it.

Now suppose R3 is faulty, taking the values 0.2, 0.9, 1.2 and 2.5 Ω, respectively, while the values of the other resistors are within the tolerance range of 5 per cent (here the values of the seven resistors are selected arbitrarily as R1 = 1.04 Ω, R2 = 0.99 Ω, R4 = 1.02 Ω, R5 = 0.98 Ω, R6 = 1.01 Ω, R7 = 0.987 Ω, R8 = 0.964 Ω). With the excitation of a 1 A current at testing node 1, the actual feature values of the three testing nodes are obtained by measuring $\Delta V_m$ and calculating the left-hand side of Equation (3.8). Then, the actual feature values (X0, X1, X2) for the four situations are input to the BPNN, respectively, to classify and locate the corresponding faulty element. The results are shown in Table 3.2.
From Table 3.2, it can be seen that the diagnosis result is correct. For output node 3 the value of the output layer is more than 0.5 and those of the other output nodes are less than 0.5, which shows that R3 is the faulty element. Also, when R3 is 0.9 Ω, the case in which the fault is very small and comparatively difficult to detect, the BPNN-based k-fault diagnosis method can still successfully locate it. Furthermore, for the other seven resistors, the method has also been proven by simulation to be effective. In addition, once the BPNN is trained, the diagnosis process becomes very simple and fast.

Compared with the traditional k-fault diagnosis method, the BPNN-based method has clear advantages. The BPNN-based method requires less computation and is very fast: computation is needed only once to obtain sufficient sample and actual feature values of the testing nodes for a particular circuit. Also, the problem of component tolerance can be successfully handled by the robustness of the BPNN. Hence, the neural-network-based diagnosis method is more robust and faster and can be used in real-time testing.
3.2.5.2 Example 2
A second circuit is shown in Figure 3.3. This is a large-scale analogue circuit. It is decomposed into four subcircuits (marked by dashed lines), denoted by x1, x2, x3, x4, according to the nature of the circuit. Assume that R14 and Q3 are faulty: the value of R14 is changed to 450 Ω and the base of Q3 is open. Following the steps in Section 3.2.4, Table 3.3 is produced, containing the accessible node voltages.

Figure 3.3 A large-scale analogue circuit (transistors Q1–Q11 and resistors R1–R18, with supplies +Vcc = +6 V and VEE = −6 V, partitioned into subcircuits X1–X4 marked by dashed lines)

Table 3.3 Diagnosis data for Example 2 (Vio are the nominal accessible node voltages and Vi the measured): for each subcircuit network BP 1–BP 4, the table lists the accessible node order numbers, the nominal node voltages Vio, the measured node voltages Vi, the derived feature vectors and the corresponding NN outputs

The feature vectors are passed through the corresponding BPNNs and the following results are obtained from the outputs of the BPNNs: subcircuit 1 (x1 ) and subcircuit
4 (x4 ) were fault free; subcircuit 2 (x2 ) and subcircuit 3 (x3 ) were faulty. The faulty
elements were identified as Q3 and R14 , which are the same as originally assumed.

3.3 Wavelet-based neural-network technique for fault diagnosis of analogue circuits with noise

Fault diagnosis of analogue circuits is subject to many difficult problems, such as a poor fault model, noise, non-linearity and tolerance effects. As discussed in Section 3.2, neural-network-based approaches can be used to tackle these problems. These methods have been shown to be robust and fast, and suitable for real-time fault diagnosis. More recently, wavelet-based techniques have been proposed to detect transient faults in dynamic systems such as chemical processes by decomposing output signals into the elementary building blocks of wavelet transforms. The local changes of the signals caused by the faults can be identified by analysing the non-stationary components of the output signals. However, little work has been done on fault diagnosis and testing of analogue circuits using wavelet transforms. References 18 and 19 propose a neural-network-based fault diagnosis method using wavelet transforms as a preprocessor to reduce the number of input features to the neural network. However, selecting the approximation coefficients as the features from the output node of the circuit, treating the details as noise and setting them to zero may lead to the loss of valid information, thus resulting in a high probability of ambiguous cases and low levels of diagnosability. Additional processors are also needed to decompose the details, resulting in an increase in the computation cost.
A method for fault diagnosis of analogue circuits based on the combination of neural networks and wavelet transforms [24, 25] is presented in this section. Using wavelet decomposition as a tool to remove noise from the sampled signals, optimal feature information is extracted by wavelet de-noising, multi-resolution decomposition, PCA and data normalization. The features are applied to the WNN and the fault patterns are classified. The diagnosis principles and procedures are described, and the reliability of the method and its comparison with others are shown by two active filter examples.

3.3.1 Wavelet decomposition

Wavelets were introduced by Grossmann and Morlet as a function $\psi(t)$ whose dilations and translations can be used for expansions of $L^2(R)$ functions, provided that $\psi(t)$ satisfies the admissibility condition

$$C_\psi = \int \frac{|\Psi(\omega)|^2}{|\omega|}\, d\omega < \infty$$

Let $V_{2^j}$ ($j \in Z$) be a multi-resolution approximation of $L^2(R)$. There exists an orthonormal basis (scaling function) $\phi_{j,k} = 2^{j/2}\, \phi(2^j t - k)$ ($k \in Z$) of any $V_{2^j}$, obtained by dilating a function $\phi(t)$ with a coefficient $2^j$ and translating the resulting function on a grid whose interval is proportional to $2^{-j}$. Any set of multi-resolution approximation spaces can be written as $f \in V_0 = V_N \oplus \bigoplus_{k=1}^{N} W_k$, and $f(t) \in L^2(R)$ can be expressed as $f = f_N + \sum_{k=1}^{N} g_k$. Thus, the orthogonal projections of $f(t)$ onto the various frequency bands are obtained. The projection $p_{j-1}$ of $f(t)$ on $V_{j-1}$ is given by

$$p_{j-1} f = \sum_{k \in Z} c_{jk}\, \phi_{j,k} + \sum_{k \in Z} d_{jk}\, \psi_{j,k}$$

where $c_{jk}$ and $d_{jk}$ are the coefficients of the scaling function and of the wavelet function at scale $2^j$ of $f(t)$, respectively. The approximation and detail signals, representing the high-scale and low-scale components of a signal, are captured through orthogonal transforms. At each consecutive level, the length of the wavelet coefficients is reduced by a factor of two owing to the down-sampling, which can be effectively used to minimize the number of sample points while preserving the information contained in the original signal.

3.3.2 Wavelet feature extraction of noisy signals

There are two types of noise source in a circuit: interior noise, for example, the thermal noise of a resistor or the scintillation noise generated by a semiconductor element, and exterior noise, such as disturbances of the input signals. A noisy signal s(t) can be written as s(t) = f(t) + e(t), where f(t) and e(t) are the principal content and the noise, respectively. Once the wavelet decomposition has been carried out, the detail sections are the superposition of the details of f(t) and e(t).

In practice, f(t) can usually be represented by low-frequency signals and e(t) by high-frequency ones. Thus, noise removal can be executed by minimizing the noise contained in the detail coefficients, by first wavelet decomposing s(t) and then reconstructing the resulting signals. The method proposed in References 18 and 19, which selects the approximation coefficients from the output node of the circuit as the features, treats the details as noise and simply sets them to zero, can lead to a loss of valid information, thus resulting in a high probability of ambiguous cases [17]. The technique presented here extracts the candidate features generated from the test points by wavelet noise removal and wavelet decomposition. The optimal feature vectors for training the neural networks are then obtained by PCA and normalization of the approximation and detail coefficients.
The proposed algorithm can be summarized as follows:
1. Obtain the approximation coefficients $c_{jk}$ and the detail coefficients $d_{jk}$ at the various levels by an N-level orthogonal decomposition of the original sampled signals.
2. Remove the noise from all the $d_{jk}$.
3. Calculate the energy of every frequency band of the noise-removed signals: $E = [E_{app}\ \ E_{det}]$, where the elements of $E_{app}$ and $E_{det}$ are $E_j^{app} = \sum_k c_{jk}^2$ and $E_j^{det} = \sum_k d_{jk}^2$.
4. Construct the candidate features. In each frequency band there must be a strong deviation in energy due to faults, which can be extracted as a feature. Let $E_{max} = \max\{E_1, E_2, E_3, E_4, \ldots\}$ and $E_{min} = \min\{E_1, E_2, E_3, E_4, \ldots\}$, and define
$$\bar{E}_i = \frac{2E_i - E_{max} - E_{min}}{E_{max}} \quad (i = 1, 2, \ldots)$$
Then the feature vector can be formed as $T = [\bar{E}_1, \bar{E}_2, \bar{E}_3, \ldots]$.
5. Select the optimal sets for training the neural networks by PCA and normalization, to eliminate redundant content and reduce the complexity of the neural networks employed. PCA is applied after the wavelet transforms to reduce the dimensionality of the feature vectors, and thus the input space of the neural networks used in the fault classification, while preserving the information relevant for fault identification and removing the redundant and irrelevant information that would degrade the classification performance. Data normalization is adopted to avoid large dynamic ranges, which result in non-convergence of the neural networks, and singular patterns which differ from the other patterns by orders of magnitude. (A concrete sketch of steps 3 and 4 is given after this list.)
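The following is a minimal sketch of steps 3 and 4 of this algorithm (illustrative code; the Haar filter is used only to keep the decomposition step short and is an assumption, not the authors' choice of wavelet):

```c
#include <math.h>

/* One level of an orthogonal (Haar) analysis step:
 * x[0..2h-1] -> approximation c[0..h-1] and detail d[0..h-1].
 * Any orthogonal wavelet filter pair could take its place.      */
void haar_step(const double *x, double *c, double *d, int h)
{
    const double s = 1.0 / sqrt(2.0);
    for (int k = 0; k < h; k++) {
        c[k] = s * (x[2 * k] + x[2 * k + 1]);
        d[k] = s * (x[2 * k] - x[2 * k + 1]);
    }
}

/* Step 3: energy of one frequency band (sum of squared coefficients). */
double band_energy(const double *w, int n)
{
    double e = 0.0;
    for (int k = 0; k < n; k++) e += w[k] * w[k];
    return e;
}

/* Step 4: Ebar_i = (2*Ei - Emax - Emin) / Emax for each band. */
void normalize_features(const double *E, double *Ebar, int nb)
{
    double Emax = E[0], Emin = E[0];
    for (int i = 1; i < nb; i++) {
        if (E[i] > Emax) Emax = E[i];
        if (E[i] < Emin) Emin = E[i];
    }
    for (int i = 0; i < nb; i++)
        Ebar[i] = (2.0 * E[i] - Emax - Emin) / Emax;
}
```

Applying haar_step recursively to the approximation output yields the N-level decomposition of step 1; de-noising (step 2) amounts to thresholding the detail coefficients before the energies are computed.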

3.3.3 WNNs

Neural networks have many features suitable for fault diagnosis. We now combine wavelet transforms and neural networks for fault diagnosis of analogue circuits. The WNN is shown in Figure 3.4. This is a multi-layer feedback architecture with wavelets, allowing it to converge to its global minimum in minimal time. The WNN employs a wavelet base rather than a sigmoid function, which distinguishes it from general BPNNs. The mapping from $R^m$ to $R^n$ can be expressed as

$$y_i(t) = \psi_1\!\left(\sum_{j=1}^{p} w_{ij}\, \psi\!\left(\frac{\sum_{k=1}^{m} x_k(t) - b_j}{a_j}\right)\right) \tag{3.9}$$

In Equation (3.9), $\psi(\cdot)$ and $\psi_1(\cdot)$ are the wavelet bases; $x_k(t)$ and $y_i(t)$ are the kth input and the ith output, respectively. The weight functions in the hidden layer and the output layer are wavelet functions.

Figure 3.4 WNNs (one neuron illustrated) and algorithm: the inputs $x_1(t), \ldots, x_m(t)$ are passed through wavelet nodes $\psi((t - b_j)/a_j)$, weighted by $w_{ij}$ and combined through $\psi_1(\cdot)$ to form the output $y_i(t)$

The sum square error performance function is expected

to reach its minimum by feeding information forward and feeding the error backward, thus updating the weight and bias parameters according to the learning algorithms. A momentum and adaptive learning rule is adopted to reduce the sensitivity to local details of the error surface and to shorten the learning time.

3.3.4 WNN algorithm for fault diagnosis

The fault diagnosis method based on WNNs identifies the fault class by the output of
the WNN trained with various fault patterns.
Definition 3.1 Test points of a circuit are those accessible nodes in a circuit whose
sensitivities to component parameters are not equal to zero.
Definition 3.2 Pattern extraction nodes (PEN) of a circuit are those test points in a
circuit which are distributed uniformly.
From definition 3.2, using PENs can reduce the probability of missing a fault, thus
increasing the diagnosability. The sketch of the WNN algorithm for fault diagnosis
is given in Figure 3.4. It contains three main steps described below.
1. Extraction of candidate patterns and feature vectors.
To extract candidate features from the PENs of a circuit, 500 Monte Carlo analyses are conducted for every fault pattern of the sampled circuits with tolerances; 350 of them are used to train the WNNs and the other 150 are adopted for simulation. Then the optimal features for training the neural networks are obtained by first selecting candidate sets from the wavelet coefficients and then applying PCA and normalization (according to steps 1–5 in Section 3.3.2). Assuming that the feature vector of the ith PEN is TVi = [A1, A2, ..., An], then for all PENs we have TV = [TV1, TV2, ..., TVq], where q is the number of PENs.
2. Design and training of WNNs.
A multi-layer feedback neural network whose number of outputs is equal to the number of fault classes is used. The error performance function is given by

$$E = \frac{1}{2} \sum_{l=1}^{N} \sum_{i=1}^{q} \big(y_{d,i}^l(t) - y_i^l(t)\big)^2 \tag{3.10}$$

where N is the total number of training patterns and $y_{d,i}^l(t)$ and $y_i^l(t)$ are the desired and the real output associated with feature $TV_i$ for the lth pattern, respectively; $y_i^l(t)$ is given by Equation (3.9).
To minimize the sum square error function in Equation (3.10), the weights and coefficients in Equation (3.9) (see Figure 3.4) can be updated using the following formulas:

$$w_{ij}(k+1) = w_{ij}(k) + \eta_w D w_{ij}(k) + \alpha_w \big[w_{ij}(k) - w_{ij}(k-1)\big]$$
$$a_j(k+1) = a_j(k) + \eta_a D a_j(k) + \alpha_a \big[a_j(k) - a_j(k-1)\big]$$
$$b_j(k+1) = b_j(k) + \eta_b D b_j(k) + \alpha_b \big[b_j(k) - b_j(k-1)\big]$$
$$D w_{ij} = -\frac{\partial E}{\partial w_{ij}}, \quad D a_j = -\frac{\partial E}{\partial a_j}, \quad D b_j = -\frac{\partial E}{\partial b_j} \tag{3.11}$$



where a , b and w are the learning rates for updating weights/coefficients aj ,
bj and wij and a , b and w are the momentum factors, respectively.
Using Equations (3.9), (3.10) and (3.11) we can derive:
Dwij =








p
m


tb
tb
l
wij
xkl (t) aj j
yd,i
(t)yil (t) xkl (t) aj j w ij 1

q
m 
N

l=1 i=1 k=1

Daj =

q
m 
N

l
yd,i
(t)yil (t) w ij 1

l=1 i=1 k=1

Dbj =

j=1

wij

j=1

m

k=1

xkl (t)

k=1

tbj
aj

1
aj2





tb
wij xkl (t) t bj a j aj j





p
m
  tb 


tb
l
wij
xkl (t) aj j
yd,i
(t)yil (t) a1j wij xkl (t) t bj aj j w ij 1

q
m 
N

l=1 i=1 k=1




j=1

k=1

(3.12)

3. Fault simulation and diagnosis.
To test and verify the proposed WNN, 150 Monte Carlo analyses of each fault pattern are carried out. On inputting the WNN with the measured data from the CUT, the changes in the feature values cause the outputs of the WNN to show the fault patterns.
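A minimal sketch of one such update step for a single-output WNN follows; the Mexican-hat wavelet for ψ, the identity for ψ1 and the per-pattern (rather than batch) update are illustrative assumptions, not the authors' choices:

```c
#include <math.h>

#define P 4   /* wavelet nodes */
#define M 5   /* inputs        */

/* Mexican-hat wavelet and its derivative. */
static double psi(double z)  { return (1.0 - z * z) * exp(-0.5 * z * z); }
static double dpsi(double z) { return z * (z * z - 3.0) * exp(-0.5 * z * z); }

typedef struct {
    double w[P], a[P], b[P];      /* parameters of Eq (3.9)      */
    double mw[P], ma[P], mb[P];   /* previous updates (momentum) */
} Wnn;

static double wnn_out(const Wnn *n, double xs /* summed inputs */)
{
    double u = 0.0;
    for (int j = 0; j < P; j++)
        u += n->w[j] * psi((xs - n->b[j]) / n->a[j]);
    return u;                     /* psi1 taken as the identity  */
}

/* One pattern: gradients as in Eq (3.12), updates as in Eq (3.11). */
void wnn_update(Wnn *n, const double x[M], double yd,
                double eta, double alpha)
{
    double xs = 0.0;
    for (int k = 0; k < M; k++) xs += x[k];
    double e = yd - wnn_out(n, xs);           /* y_d - y */
    for (int j = 0; j < P; j++) {
        double z  = (xs - n->b[j]) / n->a[j];
        double Dw = e * psi(z);
        double Da = -e * n->w[j] * dpsi(z) * z / n->a[j];
        double Db = -e * n->w[j] * dpsi(z) / n->a[j];
        n->mw[j] = eta * Dw + alpha * n->mw[j];  n->w[j] += n->mw[j];
        n->ma[j] = eta * Da + alpha * n->ma[j];  n->a[j] += n->ma[j];
        n->mb[j] = eta * Db + alpha * n->mb[j];  n->b[j] += n->mb[j];
    }
}
```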

3.3.5 Example circuits and results

The two circuits chosen are the same as those in References 17–19 for convenience of comparison, and are shown in Figures 3.5 and 3.6. The circuit in Figure 3.5 is a Sallen–Key bandpass filter. The nominal values of its components, which result in a centre frequency of 160 kHz, are shown in the figure. The resistors and capacitors are assumed to have tolerances of 5 and 10 per cent, respectively. The primary motivation for selecting this filter and its associated faults described later in this section is to compare our results with those in References 17 and 19. If we assume that R2, R3, C1 and C2 can be 50 per cent higher or lower than their respective nominal values shown in Figure 3.5, we have the fault classes R2↑, R2↓, R3↑, R3↓, C1↑, C1↓, C2↑ and C2↓.

Figure 3.5 A 160 kHz Sallen–Key bandpass filter (R1 = 1 kΩ, R2 = 3 kΩ, R3 = 2 kΩ, R4 = R5 = 4 kΩ, C1 = C2 = 5 nF)


Figure 3.6 A 10 kHz four-opamp biquad high-pass filter (r1 = 1600 Ω, r2 = r3 = r4 = 6200 Ω, r51 = r52 = 5100 Ω, r61–r64 = 10 kΩ, C1 = C2 = 5 nF)

The notations ↑ and ↓ stand for high and low, respectively. In order to generate training data for the different fault classes, we set faulty components in the circuit and vary the resistors and capacitors within their tolerances.

Figure 3.6 shows a four-opamp biquad high-pass filter with a cut-off frequency of 10 kHz. Its nominal component values are given in the figure. The faulty component values for this circuit are set to be the same as those in Reference 19 for convenience of comparison. Tolerances of 5 and 10 per cent are used for the resistors and capacitors to make the example practical.

The impulse responses of the filter circuit are simulated to train the WNN, with the filter input being a single pulse of height 5 V and duration 10 μs. We adopt a WNN architecture of N1-38-N2, where N1 is the number of input patterns and N2 is the number of fault patterns. For the fault diagnosis of the Sallen–Key filter in Figure 3.5, the method presented in Reference 17 requires a three-layer BPNN. This network has 49 inputs, 10 first-layer and 10 second-layer neurons, resulting in about 700 adjustable parameters. During the training phase, an error function of these parameters must be minimized to obtain the optimal weight and bias values. The trained network was able to properly classify 95 per cent of the test patterns. Reference 19, for diagnosing nine fault classes (eight faulty components plus the no-fault class) in the same Sallen–Key bandpass filter, requires a neural network with four inputs, six first-layer and eight output-layer neurons. Their results show that the neural network cannot distinguish between the NF (no-fault) and the R2 fault classes. If these two classes are combined into one ambiguity group and eight output neurons are used accordingly, the neural network can correctly classify 97 per cent of the test patterns. Using the method described above, the trained WNN is capable of 100 per cent correct classification of the test data, although the WNN used is somewhat more complicated.
Using the method in Reference 19 to diagnose the 13 single faults assumed in Table I of Reference 19 for the four-opamp biquad high-pass filter of Figure 3.6 requires a neural network with five inputs, 16 first-layer and 13 output-layer neurons. Their trained network was able to properly classify 95 per cent of the test patterns. In this example, using the WNN presented above, 3250 training patterns are obtained from 6500 Monte Carlo PSpice simulations to train the neural network. The measurement data associated with the fault features, as well as the other 1950 Monte Carlo simulation specimens, are applied to simulate the fault set. Using the technique presented above, the measured fault features and the feature vectors due to the 1950 Monte Carlo PSpice simulations are selected to determine the faults. The method cannot identify all the faults, because some features overlap to some extent when the component tolerances are near 10 per cent; however, even in the presence of 10 per cent component tolerance, the WNN method correctly classifies 99.56 per cent of the test data. For example, the WNN fault diagnosis system can distinguish the fault classes C2, R1, R4, which are misclassified in Reference 19. Besides, the fault classes NF and R1, which cannot be distinguished in Reference 19, have been identified correctly using this method. Figures 3.7 and 3.8 respectively show the waveforms associated with NF and R1, sampled from node 2 (one of the PENs). In each case, the noisy and de-noised signals and their multi-resolution coefficients are all given, as shown in Figures 3.7 and 3.8. In these figures, ca5 and cdj, j = 1, 2, ..., 5, are the coefficients a5 and dj, respectively. Note also that these waveforms are distinct from one another.

3.4 Neural-network-based L1-norm optimization approach for fault diagnosis of non-linear circuits

Analogue circuit fault location has proved to be an extremely difficult problem, mainly because of component tolerances and the non-linear nature of the problem. Among the many fault diagnosis methods, the L1 optimization technique is a very important parameter identification approach [28, 29], which is insensitive to tolerances. This method has been successfully used to isolate the most likely faulty elements in linear analogue circuits and, when combined with neural networks, real-time testing becomes possible for linear circuits with tolerances. Some fault verification methods have been proposed for non-linear circuit fault diagnosis [30–34]. On the basis of these linearization principles, parameter identification methods can be developed for non-linear circuits. In particular, the L1 optimization method can be extended and modified for fault diagnosis of non-linear circuits with tolerances. Neural networks can also be used to make the method more effective and faster for non-linear circuit fault location.

This section deals with fault diagnosis in non-linear analogue circuits with tolerances under an insufficient number of independent voltage measurements. The L1-norm optimization problem for different scenarios of non-linear fault diagnosis is formulated. A neural-network-based approach for solving the non-linear constrained L1-norm optimization problem is presented and utilized in locating the most likely faulty elements in non-linear circuits. The validity of the method is verified and simulation examples are presented.



[Figure 3.7: Noisy and de-noised signals and their level 1-5 wavelet coefficients (ca5, cd5-cd1) associated with NF.]



[Figure 3.8: Noisy and de-noised signals and their level 1-5 wavelet coefficients associated with R1.]


3.4.1 L1-norm optimization approach for fault location of non-linear circuits

Assume that a non-linear resistive circuit has n nodes (excluding the reference node), m of which are accessible. There are b branches, of which p elements are linear and q non-linear, b = p + q. The components are numbered in order from linear to non-linear elements. For simplicity, we assume that all non-linear elements are voltage controlled, with characteristics denoted as

$$i_{p+1} = f_{p+1}(v_{p+1}), \ldots, i_{p+q} = f_{p+q}(v_{p+q})$$

When the non-linear circuit is fault free, each non-linear component works at its static point Q0 and its voltage-current relation can be described as i_Q0 = y_0 v_Q0, where y_0 is the value of the static conductance at working point Q0, and i_Q0 and v_Q0 are the current and voltage at point Q0, respectively. When the circuit is faulty, no matter whether or not the non-linear element itself is faulty, the static parameter will change from y_0 to y_0 + Δy, where Δy represents the increment from y_0. The change Δy can be equivalently described by a parallel current source vΔy, where v is the actual voltage [30-33]. For the linear elements, as is well known, the change in a component value from its nominal can be represented by a current source. For a small-signal excitation, which lies in the neighbourhood of the working point Q0, the non-linear resistive element can be replaced by a linear resistor. According to the superposition theorem, we can derive [30-33]:
$$\Delta V_m = H_{mb} E_b \tag{3.13a}$$

$$E_b = [e_1, e_2, \ldots, e_b]^T \tag{3.13b}$$

$$e_i = v_i \Delta y_i \tag{3.13c}$$

where ΔV_m is the increment vector of the voltages of the accessible nodes due to faults, H_{mb} is the coefficient matrix that relates the accessible nodal voltages to the equivalent current source vector E_b, which can be calculated from the nominal linear conductances and the working-point conductances of the non-linear components, v_i is the actual branch voltage for component i, Δy_i (i = 1, 2, . . . , p) is the change in the conductance of the linear component and Δy_i (i = p+1, . . . , p+q) is the deviation from the static conductance of the non-linear element.

Equation (3.13) is an underdetermined system of linear equations for the parameters E_b. Therefore the L1-norm optimization problem may be stated as

$$\text{minimize} \quad \sum_{i=1}^{b} |e_i| \tag{3.14a}$$

subject to

$$\Delta V_m = H_{mb} E_b \tag{3.14b}$$

The result of the optimization problem in Equation (3.14) provides us with E_b. Then the network can be simulated using the external excitation source and the obtained equivalent current sources E_b to find v_i, i = 1, . . . , b, and i_i for the non-linear components, i = p+1, . . . , b. The conductance change in every network component can then be easily computed using Equation (3.13c). Comparing the change in every linear component with its allowed tolerance, the faulty linear components can be readily located. For a non-linear resistive element we need to further check the relation:
$$i_{Q0} + \Delta i = (y_0 + \Delta y)(v_{Q0} + \Delta v) \tag{3.15}$$

to determine whether or not the element is faulty, in other words, whether or not the actual working point remains on the normal characteristic curve within the tolerance limits [30-34]. If Equation (3.15) holds within its tolerance, the non-linear element is fault free and the Δy, being the result of the working point Q0 moving along its characteristic curve, is caused by other faulty elements. If Equation (3.15) does not hold, the non-linear element is faulty.
Equation (3.14) is restricted to a single excitation. In fact, multiple excitations
can be used to enhance diagnosability and provide better results. For k excitations
applied to the faulty network, the L1 -norm problem is formulated as
$$\text{minimize} \quad \sum_{i=1}^{b} \left| \frac{\Delta y_i}{y_{i0}} \right| \tag{3.16a}$$

subject to

$$\begin{bmatrix} \Delta V_m^1 \\ \Delta V_m^2 \\ \vdots \\ \Delta V_m^k \end{bmatrix} = \begin{bmatrix} H_{mb}^1 V_b^1 \\ H_{mb}^2 V_b^2 \\ \vdots \\ H_{mb}^k V_b^k \end{bmatrix} \Delta Y \tag{3.16b}$$

where V_b = diag(v_1, v_2, . . . , v_b), ΔY = [Δy_1, Δy_2, . . . , Δy_b]^T, y_{i0} (i = 1, . . . , p) represent the nominal values of the linear elements and y_{i0} (i = p+1, . . . , p+q) represent the static conductances of the non-linear elements at working point Q0. The superscripts 1 to k refer to the different excitations.
Traditionally, a linear programming algorithm is applied to solve the problem in Equation (3.16). To preserve the linear relationship between ΔV_m and ΔY, the actual branch voltages v_i, i = 1, . . . , b, have to be assumed as known values. Therefore, a repeated iterative procedure is needed and its online computation cost is large. If the actual voltages v_i are also regarded as variables to be optimized, the values of ΔY can be obtained after only one optimization process. In this case, the L1-norm problem can be stated as


$$\text{minimize} \quad \sum_{i=1}^{b} \left( \left| \frac{\Delta y_i}{y_{i0}} \right| + \left| \frac{\Delta v_i}{v_{i0}} \right| \right) \tag{3.17a}$$



subject to

$$\begin{bmatrix} \Delta V_m^1 \\ \Delta V_m^2 \\ \vdots \\ \Delta V_m^k \end{bmatrix} = \begin{bmatrix} H_{mb}^1 V_b^1 \\ H_{mb}^2 V_b^2 \\ \vdots \\ H_{mb}^k V_b^k \end{bmatrix} \Delta Y \tag{3.17b}$$

where v_{i0} represents the nominal branch voltage, Δv_i is the change in the voltage due to faults and tolerance, and Δv_i + v_{i0} is the actual voltage v_i.

From Equation (3.17) we can obtain ΔY. For a linear element, if Δy/y_0 significantly exceeds its allowed tolerance, we can consider it to be faulty. However, for a non-linear resistive element, we cannot simply draw such a conclusion. For analogue circuits with tolerances, the relation of the voltage and current of a non-linear resistive element is represented by a set of curves instead of a single one, the nominal voltage-to-current characteristic of the non-linear element lying in the centre of the zone. Therefore, for a non-linear component, after determining Δy/y_0 we need to simulate the non-linear circuit again to judge whether or not the component is faulty. If the actual V-I curve of the non-linear element significantly deviates from the tolerance zone of curves, the non-linear element can be considered faulty. Otherwise, if the actual curve falls within the zone, the non-linear element is fault free.

3.4.2 NNs applied to L1-norm fault diagnosis of non-linear circuits

According to the above analyses, the L1-norm problem of non-linear circuit fault diagnosis has three representations, corresponding to Equations (3.14), (3.16) and (3.17), respectively. The L1-norm problem in Equation (3.14) belongs to the underdetermined linear-equation parameter estimation problem, while the L1-norm problem in Equation (3.17) is a non-linear parameter estimation problem. The L1-norm problem of Equation (3.16) is traditionally solved using a linear programming algorithm and hence can be considered a linear parameter estimation problem.

The L1-norm problem in Equation (3.14) is simple and can be solved using Linear Programming Neural Network (LPNN) algorithms. We can easily transform the problem of Equation (3.14) into the standard linear programming problem.
Introducing new variables x_i:

$$x_i = e_i,\ x_{b+i} = 0, \quad e_i \ge 0$$
$$x_i = 0,\ x_{b+i} = -e_i, \quad e_i \le 0 \tag{3.18}$$

where e_i = x_i − x_{b+i}, i = 1, 2, . . . , b, the L1-norm problem in Equation (3.14) can be formulated as the following standard linear programming problem:

$$\text{minimize} \quad C^T X \tag{3.19a}$$

subject to

$$AX = B, \quad X \ge 0 \tag{3.19b}$$



where

$$C = [C_1, C_2, \ldots, C_{2b}]^T = [1, 1, \ldots, 1]^T$$
$$X = [x_1, x_2, \ldots, x_{2b}]^T$$
$$B = [B_1, B_2, \ldots, B_m]^T = [\Delta V_1, \Delta V_2, \ldots, \Delta V_m]^T$$
$$A = [H_{mb}, -H_{mb}]_{m \times 2b}$$

The L1 -norm problem in Equation (3.16) can also be transformed into a standard
linear programming problem in the same way as the above and can be solved using
the LPNN.
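As a numerical illustration (with invented example data, and an off-the-shelf LP solver standing in for the analogue LPNN), the following sketch solves the standard-form problem of Equation (3.19) and recovers E_b from the split variables:

import numpy as np
from scipy.optimize import linprog

def l1_fault_sources(H, dVm):
    # minimize 1'x subject to [H, -H] x = dVm, x >= 0; then e = x[:b] - x[b:]
    m, b = H.shape
    c = np.ones(2 * b)                      # C = [1, 1, ..., 1]^T
    A = np.hstack([H, -H])                  # A = [H_mb, -H_mb]
    res = linprog(c, A_eq=A, b_eq=dVm, bounds=(0, None), method='highs')
    return res.x[:b] - res.x[b:]            # e_i = x_i - x_{b+i}

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 8))             # 3 accessible nodes, 8 branches (made up)
e_true = np.zeros(8)
e_true[1] = 0.02                            # one equivalent fault current source
print(l1_fault_sources(H, H @ e_true))      # the L1 solution concentrates on branch 2

The L1 objective favours sparse solutions, which is precisely why a small number of equivalent fault sources can be isolated from an underdetermined measurement set.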
The L1-norm problem in Equation (3.17) belongs to the class of non-linear constrained optimization problems. The solution of a non-linear constrained optimization problem generally constitutes a difficult and often frustrating task, and the search for new insights and more effective solutions remains an active research endeavour. To solve a non-linear constrained optimization problem using neural networks, the key step is to derive a computational energy function (Lyapunov function) E so that the lowest energy state corresponds to the desired solution. There have been various neural-network-based optimization techniques, such as the exterior penalty function method, the augmented Lagrange multiplier method, and so on. However, existing references discuss only the unconstrained L1-norm optimization problem. Here, an effective method for solving the non-linear constrained L1-norm optimization problem such as Equation (3.17) is presented.
Although aimed at solving the problem in Equation (3.17), the approach is developed in a general way. The symbols used below may have different meanings from (and should not be confused with) those above, although the same letters are kept for convenience. Applying the general formulation to the problem in Equation (3.17) is straightforward.
A general non-linear constrained L1-norm optimization problem can be described as

$$\text{minimize} \quad \sum_{j=1}^{m} |f_j(X)| \tag{3.20a}$$

subject to

$$C_i(X) = 0 \quad (i = 1, 2, \ldots, l) \tag{3.20b}$$

where the vector X = [x_1, x_2, . . . , x_n]^T is the set of n parameters to be optimized, and f_j(X) = f_j(x_1, x_2, . . . , x_n) (j = 1, 2, . . . , m) and C_i(X) = C_i(x_1, x_2, . . . , x_n) (i = 1, 2, . . . , l) are non-linear, continuously differentiable functions.
Using the exact penalty function method, we can transform the constrained problem in Equation (3.20) into an unconstrained L1 optimization problem of minimizing:

$$E(X, R) = \sum_{j=1}^{m} |f_j(X)| + \sum_{i=1}^{l} r_i |C_i(X)| \tag{3.21}$$



where R = [r_1, . . . , r_l]^T, with r_i > 0, i = 1, 2, . . . , l, being the penalty parameters.
For the unconstrained problem in Equation (3.21) there is an important theorem
that is stated below.
Theorem 3.1 Assume that X* satisfies the first-order sufficient conditions to be a strong local minimum for the problem in Equation (3.20) and assume that:

$$r_i > |\lambda_i^*|, \quad i = 1, \ldots, l \tag{3.22}$$

where λ_1^*, . . . , λ_l^* are the Kuhn-Tucker multipliers for the equality constraints. Then X* is a strong local minimum of E(X, R) in Equation (3.21).
For those r_i satisfying the conditions of the theorem, the optimization solution X* of Equation (3.21) is the solution of Equation (3.20). Using a neural-network method to solve the problem in Equation (3.21), E(X, R) can be considered as the computational energy function of the neural network. Implementing the continuous-time steepest descent method, the minimization of the energy function E(X, R) in Equation (3.21) can be mapped into a system of differential equations, given by
$$\frac{dx_j}{dt} = -\mu_j \left[ \sum_{i=1}^{m} (\gamma_{fi} + s_{fi}\nu_{fi}) \frac{\partial f_i(X)}{\partial x_j} + \sum_{i=1}^{l} (\gamma_{ci} + s_{ci}\nu_{ci})\, r_i \frac{\partial C_i(X)}{\partial x_j} \right], \quad x_j(0) = x_{j0}, \quad j = 1, \ldots, n \tag{3.23a}$$

$$\frac{d\nu_{fi}}{dt} = \beta_{fi}\, s_{fi}\, f_i(X), \quad \nu_{fi}(0) = \nu_{fi0}, \quad i = 1, \ldots, m \tag{3.23b}$$

$$\frac{d\nu_{ci}}{dt} = \beta_{ci}\, s_{ci}\, C_i(X), \quad \nu_{ci}(0) = \nu_{ci0}, \quad i = 1, \ldots, l \tag{3.23c}$$

where

$$\gamma_{fi} = \operatorname{sgn}(f_i(X)) = \begin{cases} 1 & f_i(X) > 0 \\ -1 & f_i(X) < 0 \end{cases} \quad (i = 1, \ldots, m)$$

$$\gamma_{ci} = \operatorname{sgn}(C_i(X)) = \begin{cases} 1 & C_i(X) > 0 \\ -1 & C_i(X) < 0 \end{cases} \quad (i = 1, \ldots, l)$$

$$s_{fi} = \begin{cases} 0 & f_i(X) \ne 0 \\ 1 & f_i(X) = 0 \end{cases} \quad (i = 1, \ldots, m)$$

$$s_{ci} = \begin{cases} 0 & C_i(X) \ne 0 \\ 1 & C_i(X) = 0 \end{cases} \quad (i = 1, \ldots, l)$$

$$|\nu_{fi}| \le 1, \quad |\nu_{ci}| \le 1, \quad \mu_j,\ \beta_{fi},\ \beta_{ci} > 0$$

Equation (3.23) can be implemented directly by the ANN depicted in Figure 3.9. This ANN may be considered a Hopfield-type neural network, which is a gradient-like system. It consists of adders, amplifiers, hard limiters and two kinds of integrator (a lossy unlimited integrator and a lossless limited integrator with saturation).

[Figure 3.9: Architecture of the artificial neural network for solving the L1-norm optimization problem, showing the control network and the computing network for x_j (j = 1, . . . , n).]

The neural network will move from any initial state X_0 that lies in the neighbourhood N(X*) = {X_0 : ||X_0 − X*|| < ε, ε > 0} of X* in a direction that tends to decrease the cost function being minimized. Eventually, a stable state of the network will be reached that corresponds to a local minimum of the cost function.
It can be proved that the stable state X* satisfies the necessary conditions for optimality of the function in Equation (3.21). Obviously, dE(X, R)/dt ≤ 0. It should be noted that dE(X, R)/dt = 0 if and only if dx/dt = 0, that is, the neural network is in the steady state, and

$$\sum_{i \notin A} \xi_i \nabla \bar{f}_i(X^*) + \sum_{i \in A} V_i \nabla \bar{f}_i(X^*) = 0 \tag{3.24}$$

where

$$\bar{f}_i(X) = f_i(X), \quad (i = 1, \ldots, m)$$
$$\bar{f}_i(X) = r_{i-m} C_{i-m}(X), \quad (i = m+1, \ldots, m+l)$$
$$I = \{1, \ldots, m, m+1, \ldots, m+l\}$$
$$A = A(X^*) = \{i \mid \bar{f}_i(X^*) = 0,\ i \in I\}$$
$$\xi_i = \operatorname{sgn}(\bar{f}_i(X^*)),\ i \notin A; \quad |V_i| \le 1,\ i \in A$$

This means that the stable equilibrium point X* of the neural network satisfies the necessary conditions for optimality of Equation (3.21). According to Theorem 3.1, the stable state X* of the neural network corresponds to the solution of the L1-norm problem in Equation (3.20).

[Figure 3.10: Non-linear resistive circuit.]
Note that ν_fi, ν_ci are used as adaptive control parameters to accelerate the minimization process. From the ANN depicted in Figure 3.9, it follows that the variables ν_fi, ν_ci are controlled by the inner state of the neural network, which gives the neural network a smooth dynamic process; as a result, the neural network can quickly converge to a stable state that is a local minimum of E(X, R). Because the neural-network computational energy function in Equation (3.21) is derived from the exact penalty approach, on applying the ANN in Figure 3.9 (or the neural-network algorithm of Equation (3.23)), accurate results can be obtained with appropriate penalty parameters r_i (r_i > |λ_i^*|) that need not be large. The main advantage of the neural-network-based method for solving the L1-norm problem, in comparison with other known methods, is that it avoids the error caused by approximating the absolute-value function, thus providing a high-precision solution without the use of large penalty parameters. The effectiveness and performance of the neural-network architecture and algorithm have been verified by simulation. One example is given below.

3.4.3 Illustrative example

Consider the non-linear resistive network shown in Figure 3.10, with the nominal values of linear elements 1-6 being y_{i0} = 1 (i = 1, . . . , 6) and the characteristics of non-linear resistive elements 7 and 8 being i_7 = 10V_7^3 and i_8 = 5V_8^2, respectively. Both the linear element parameters and the static conductances (y_70 and y_80) of the non-linear elements have a tolerance of 0.05. Nodes 1, 3, 4 and 5 are assumed to be accessible, with node 5 taken as the ground. Node 2 is assumed to be internal, where no measurement can be performed.

For a single small-signal current excitation of 10 mA at node 1, the changes in the accessible nodal voltages due to faults can be obtained as ΔV_m = [0.0044, 0.001, 0.000 4598]^T.
Construct the matrices required by Equation (3.17) using the nominal/static component values and solve the L1 problem in Equation (3.17) using the neural network described in Equation (3.23). The neural network, with r_i = 10 (i = 1, 2, 3), zero initial state, and μ_j = 10^6, β_fj = β_ci = 10^7 (j = 1, . . . , 8; i = 1, 2, 3), has been simulated using the fourth-order Runge-Kutta method. The equilibrium point of the neural network is the solution of the L1 problem, given by Δy_1/y_{10} = 0.5071, Δy_3/y_{30} = 0.037, Δy_7/y_{70} = 0.430 65, Δy_8/y_{80} = 0.4802, and Δy_i/y_{i0} = 0 for i = 2, 4, 5, 6. Linear elements 2, 4, 5 and 6 are normal, as their conductance change is zero. The conductance change in linear element 1 significantly exceeds its allowed tolerance, therefore we can judge it to be faulty. The change in linear element 3 slightly exceeds its allowed tolerance, but we can still consider it to be non-faulty. The changes in the non-linear element static conductances significantly exceed their allowed tolerances. We simulate the faulty non-linear circuit again and find that only the V-I curve of non-linear element 7 significantly deviates from its tolerance characteristic zone; hence element 7 is faulty and element 8 is fault free. In fact, in our original set-up, linear element 1 and non-linear element 7 were assumed faulty. It can thus be seen that the method can correctly locate the faults. Meanwhile, the validity of the presented neural-network algorithm for solving the non-linear constrained L1-norm optimization problem is also confirmed.
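The following toy sketch mirrors the structure of this simulation in a much simplified form: the adaptive control states of Equation (3.23) are omitted, and dX/dt = -mu * subgradient(E(X, R)) is integrated with the fourth-order Runge-Kutta method for an assumed two-variable problem (minimize |x1| + |x2| subject to x1 + x2 - 1 = 0). It illustrates the exact-penalty descent dynamics only; it is not the authors' simulation.

import numpy as np

def subgrad_E(x, r=10.0):
    # Subgradient of E(X, R) = |x1| + |x2| + r * |x1 + x2 - 1|
    return np.sign(x) + r * np.sign(x[0] + x[1] - 1.0)

def rk4_descent(x0, mu=1.0, h=1e-3, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = -mu * subgrad_E(x)
        k2 = -mu * subgrad_E(x + 0.5 * h * k1)
        k3 = -mu * subgrad_E(x + 0.5 * h * k2)
        k4 = -mu * subgrad_E(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

print(rk4_descent([0.0, 0.0]))  # settles near a minimizer on x1 + x2 = 1

Here the Kuhn-Tucker multiplier of the constraint has magnitude 1, so any r > 1 satisfies the condition of Theorem 3.1 and the penalty parameter need not be large.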

3.5 Summary

This chapter addressed the application of neural networks in fault diagnosis of analogue circuits. A fault dictionary method based on neural networks has been presented.
This method is robust to element tolerances and requires little after-test computation.
The diagnosis of soft faults has been shown, while the method is also suitable for hard
faults. Significant diagnosis precision has been reached by training a large number
of samples in the BPNN. While the faulty samples trained can be easily identified,
the BPNN can also detect untrained faulty samples. Therefore, the fault diagnosis
method presented can not only quickly detect the faults in the traditional dictionary
but can also detect the faults not in the dictionary. As has been demonstrated, the
method is also suitable for large-scale circuit fault diagnosis.
A method for fault diagnosis of noisy analogue circuits using WNNs has also
been described. In this technique, candidate features are extracted from the energy
in every frequency band of the signals sampled from the PENs in a circuit de-noised
by wavelet analysis and are employed to select optimal feature vectors by PCA, data
normalization and wavelet multi-resolution decomposition. The optimal feature sets
are then used to train the WNN. The method is characterized by its high diagnosability.
It can distinguish the ambiguity sets or some misclassified faults that other methods
cannot identify and is robust to noise. However, some overlapped ranges appear as
the component tolerances increase to 10 per cent.
Fault diagnosis of non-linear circuits taking tolerances into account is the most
challenging topic in analogue fault diagnosis. A neural-network-based L1 -norm optimization approach has been introduced for fault diagnosis of non-linear resistive
circuits with tolerances. The neural-network-based L1 -norm method can solve various linear and non-linear equations. A fault diagnosis example has been presented,
which shows that the method can effectively locate faults in non-linear circuits. The
method is robust to tolerance levels and suitable for online fault diagnosis of nonlinear circuits as it requires fewer steps in the L1 optimization and the use of neural
networks further speeds up the diagnosis process.


3.6 References

1 Bandler, J.W., Salama, A.E.: Fault diagnosis of analog circuits, Proceedings of the IEEE, 1985;73(8):1279-1325
2 Ozawa, T. (ed.): Analog Methods for Computer-Aided Circuit Analysis and Diagnosis (Marcel Dekker, New York, 1988)
3 Liu, R.W. (ed.): Testing and Diagnosis of Analog Circuits and Systems (Van Nostrand Reinhold, New York, USA, 1991)
4 Huertas, J.L.: Test and design for testability of analog and mixed-signal integrated circuits: theoretical basis and pragmatical approaches, in Dedieu, H. (ed.), Circuit Theory and Design 93, Selected Topics in Circuits and Systems (Elsevier, Amsterdam, The Netherlands, 1993), pp. 75-151
5 Hochwald, W., Bastian, J.D.: A DC approach for analog fault dictionary determination, IEEE Transactions on Circuits and Systems, July 1979;26:523-9
6 Navid, N., Wilson, A.N. Jr.: A theory and algorithm for analog circuit fault diagnosis, IEEE Transactions on Circuits and Systems, 1979;26:440-57
7 Biernacki, R.M., Bandler, J.W.: Multiple-fault location in analog circuits, IEEE Transactions on Circuits and Systems, 1981;28:361-6
8 Sun, Y.: Determination of k-fault-element values and design of testability in analog circuits, Journal of Electronic Measurement and Instrument, 1988;2(3):25-31
9 Sun, Y., He, Y.: Topological conditions, analysis and design for testability in analogue circuits, Journal of Hunan University, 2002;29(1):85-92
10 Sun, Y.: Theory and algorithms of solving a class of linear algebraic equations, Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and CAD, China, 1988
11 Huang, Z.F., Lin, C., Liu, R.W.: Node-fault diagnosis and a design of testability, IEEE Transactions on Circuits and Systems, 1983;30:257-65
12 Meador, J., Wu, A., Tseng, C.T., Line, T.S.: Fast diagnosis of integrated circuit faults using feed-forward neural networks, Proceedings IEEE INNS International Joint Conference on Neural Networks, Seattle, WA, July 1997;1:269-72
13 Maidon, Y., Jervis, B.W., Dutton, N., Lesage, S.: Diagnosis of multifaults in analogue circuits using multilayer perceptrons, IEE Proceedings Circuits, Devices and Systems, 1997;144(3):149-54
14 Starzyk, J.A., El-Gamal, M.A.: Artificial neural network for testing analog circuits, Proceedings of IEEE ISCAS, 1990;3:1851-4
15 Rutkowski, J.: A two stage neural network DC fault dictionary, Proceedings of IEEE ISCAS, London, 1994;16:299-302
16 Catelani, M., Gori, M.: On the application of neural networks to fault diagnosis of electronic analog circuits, Measurement, 1996;17(2):73-80
17 Spina, R., Upadhyaya, S.: Linear circuit fault diagnosis using neuromorphic analyzers, IEEE Transactions on Circuits and Systems-II, March 1997;44(3):188-96
18 Aminian, M., Aminian, F.: Neural-network based analog circuit fault diagnosis using wavelet transform as preprocessor, IEEE Transactions on Circuits and Systems-II, 2000;47(2):151-6
19 Aminian, F., Aminian, M., Collins, H.W.: Analog fault diagnosis of actual circuits using neural networks, IEEE Transactions on Instrumentation and Measurement, June 2002;51(3):544-50
20 He, Y., Ding, Y., Sun, Y.: Fault diagnosis of analog circuits with tolerances using artificial neural networks, Proceedings of IEEE APCCAS, Tianjin, China, 2000, pp. 292-5
21 Deng, Y., He, Y., Sun, Y.: Fault diagnosis of analog circuits with tolerances using back-propagation neural networks, Journal of Hunan University, 2000;27(2):56-64
22 He, Y., Tan, Y., Sun, Y.: A neural network approach for fault diagnosis of large-scale analog circuits, Proceedings of IEEE ISCAS, Arizona, USA, 2002, pp. 153-6
23 He, Y., Tan, Y., Sun, Y.: Class-based neural network method for fault location of large-scale analogue circuits, Proceedings of IEEE ISCAS, Bangkok, Thailand, 2003, pp. 733-6
24 He, Y., Tan, Y., Sun, Y.: Wavelet neural network approach for fault diagnosis of analog circuits, IEE Proceedings Circuits, Devices and Systems, 2004;151(4):379-84
25 He, Y., Tan, Y., Sun, Y.: Fault diagnosis of analog circuits based on wavelet packets, Proceedings of IEEE TENCON, Chiang Mai, Thailand, 2004, pp. 267-70
26 He, Y., Sun, Y.: A neural-based L1-norm optimization approach for fault diagnosis of nonlinear circuits with tolerances, IEE Proceedings Circuits, Devices and Systems, 2001;148(4):223-8
27 He, Y., Sun, Y.: Fault isolation in nonlinear analog circuits with tolerance using the neural network based L1-norm, Proceedings of IEEE ISCAS, Sydney, Australia, 2001, pp. 854-7
28 Bandler, J.W., Biernacki, R.M., Salama, A.E., Starzyk, J.A.: Fault isolation in linear analog circuits using the L1 norm, Proceedings of IEEE ISCAS, 1982, pp. 1140-3
29 Bandler, J.W., Kellermann, W., Madsen, K.: Nonlinear L1-optimisation algorithm for design, modeling and diagnosis of networks, IEEE Transactions on Circuits and Systems, 1987;34(2):174-81
30 Sun, Y., Lin, Z.X.: Fault diagnosis of nonlinear circuits, Journal of Dalian Maritime University, 1986;12(1):73-83
31 Sun, Y., Lin, Z.X.: Quasi-fault incremental circuit approach for nonlinear circuit fault diagnosis, Acta Electronica Sinica, 1987;15(5):82-8
32 Sun, Y.: A method of the diagnosis of faulty nodes in nonlinear circuits, Journal of China Institute of Communications, 1987;8(5):92-6
33 Sun, Y.: Faulty-cut diagnosis in nonlinear circuits, Acta Electronica Sinica, 1990;18(4):30-4
34 Sun, Y.: Bilinear relation and fault diagnosis of nonlinear circuits, Microelectronics and Computer, 1990;7(6):32-5

Chapter 4

Hierarchical/decomposition techniques for large-scale analogue diagnosis

Peter Shepherd

4.1 Introduction

The size and complexity of integrated circuits (ICs) and related systems has continued to grow at a remarkable pace during recent years. This has included much
larger-scale analogue circuits and the development of complex analogue/mixed-signal
(AMS) circuits. Whereas the testing and diagnosis techniques for digital circuits are
well developed and have largely kept pace with the growth in complexity of the ICs,
analogue test and diagnosis methods have always been less mature than their digital
counterparts. There are a number of reasons for this fact. First, the stuck-at fault
modelling and a structured approach to testing has been widely exploited on the digital
side, whereas there is no real equivalent in the analogue world for translating physical faults into a simple electrical model. The second major problem with analogue
circuits is the continuous nature of the signals, giving rise to an almost infinite number of possible faults within the circuit. Third, there is the problem of the tolerance
associated with component and signal parameters, resulting in the definition of faulty
and fault-free conditions being somewhat blurred. Other problems inherent in largescale analogue circuit evaluation include the non-linearities of certain components
and feedback systems within the circuits.
However, even though circuits have grown in size and complexity, the design
tools to realise these circuits have matched this development. This means that complex circuits are becoming increasingly available as custom design items for a growing
number of engineers. Unfortunately, the test and diagnosis tools have not matched
this rate of development; so although the cost of IC production has seen a steady
decrease in terms of cost per component, the test and maintenance cost has increased
proportionally. While some standard analogue test and diagnosis procedures have
been developed over the years, many of these are only applicable to relatively small



circuits, often requiring access to a number of nodes internal to the circuit. Therefore,
with the increasing size and complexity of circuits, new approaches must be adopted.
One such approach that has attracted attention in recent years is the concept of a
hierarchical approach to the test and diagnosis problem whereby the circuit is viewed
at a number of different levels of circuit abstraction, from the lowest level of basic
component (transistors, resistors, capacitors, etc.) through higher levels of functionality (e.g., op-amps, comparators and reference circuits) to much larger functional
blocks (amplifiers, filters, phase-locked loops, etc.). By considering the circuit in this
hierarchical way, the problem can be reduced to a tractable size.
This chapter looks at some of the techniques for fault diagnosis in analogue
circuits in which this hierarchical approach is exploited. Related issues are those
of hierarchical tolerance and sensitivity analysis, which share many of the same
problems and some of the same solutions as fault diagnosis. Although a full treatment
of tolerance and sensitivity analysis is beyond the scope of this chapter, some mention
will be made where it impinges directly on the diagnosis techniques.

4.1.1 Diagnosis definitions

There are a number of levels of diagnosis when referring to analogue circuits, as


described in Reference 1. These consist of: (i) fault detection (FD); (ii) fault location
or fault isolation (FI); and (iii) fault value evaluation.
4.1.1.1 FD or identification
FD simply determines whether the circuit is good or faulty, that is, whether it is working within its specification: in effect, go/no-go testing. As analogue testing is
based on a functional approach, strictly this level of diagnosis should test the circuit
under all operating conditions and all possible input conditions. It is therefore not
necessarily as straightforward as it seems at first sight.
4.1.1.2 Fault location
Fault location is used to determine which component(s) is/are faulty within the circuit. The main purpose of this operation is to be able to replace the faulty component
in order to repair the circuit. This was of particular importance in earlier days when
analogue circuits were breadboarded with discrete components and small-scale ICs.
While the concept of repair of circuits has shrunk in importance with the advent of
complex, large-scale AMS ICs, the concept has still been maintained in recent hierarchical diagnosis techniques with the concept of the least replaceable units (LRUs) [2].
4.1.1.3 Fault value evaluation
While it may be sufficient to be able to detect and locate a fault within a circuit or
to replace the faulty component for repair purposes, a lot of diagnostic testing is
carried out during circuit and product development. In this case it is often important
to determine not only the faulty component, but also the parameter values of the
component and how far they are from the nominal fault-free values. This information



can then be used, for example, to adjust the design to make the overall performance
less sensitive to critical components.

4.2 Background to analogue fault diagnosis

A major complication to analogue testing and diagnosis is the inherent tolerance of


the various component values. This in turn leads to a range of acceptable values
for the output signals. The fault-free specification is therefore not a single value
for each signal parameter, but a range of values. A nominal fault-free response can
be derived using the nominal values of the constituent components, but additional
work must be done to define the acceptable limits of the output signals to define a
good or faulty response. These evaluations are done using circuit simulators on
computers and many advanced and powerful simulator packages are available on
the market. The standard method for deriving the output signal variations from the
component tolerances is through the Monte Carlo method, which consists of a very
large number of simulations in which the individual component values are varied in a
random way, thus leading to a statistical description of the variation in outputs. This is
a straightforward approach, but suffers from a huge computational processing cost. A
number of alternative methods have been proposed to derive the same information but
with a much reduced computational cost, which are beyond the scope of this chapter.
It will be seen that implementing the various diagnosis approaches also requires
dedicated simulation tasks and, very broadly, the approaches can be divided into two
categories: simulation before test (SBT) and simulation after test (SAT).
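A minimal sketch of such a Monte Carlo run, for an assumed first-order RC low-pass stage rather than any circuit from this chapter: each trial draws R and C from their tolerance ranges and records the resulting cut-off frequency, from which a statistical band for the fault-free response is obtained.

import numpy as np

rng = np.random.default_rng(1)
R_nom, C_nom, tol = 10e3, 10e-9, 0.05           # 10 kOhm, 10 nF, 5 per cent (assumed)
N = 10000
R = R_nom * (1 + tol * rng.uniform(-1, 1, N))   # random values within tolerance
C = C_nom * (1 + tol * rng.uniform(-1, 1, N))
fc = 1.0 / (2 * np.pi * R * C)                  # cut-off frequency of each trial

lo, hi = np.percentile(fc, [0.5, 99.5])
print(f"nominal {1/(2*np.pi*R_nom*C_nom):.0f} Hz, 99 per cent band [{lo:.0f}, {hi:.0f}] Hz")

For a real circuit each trial would be a full simulator run, which is exactly where the huge computational cost mentioned above comes from.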

4.2.1 Simulation before test

As the name suggests, the majority of the simulation work relevant to the diagnosis routine is performed before measuring the circuit under test (CUT). The main
approach is to perform a series of fault simulations, that is, to introduce a particular
fault into the circuit description, simulate the circuit and record the details of the
faulty response. This process is repeated for different faults and so a series of faulty
responses are derived. Sometimes different faults give rise to similar responses and
these faults can be grouped together in a set. This process has been variously called
fault grouping, fault clustering and fault collapsing in the literature. Note that there
are basically two classifications of faults which can be simulated. These are hard (or
catastrophic) faults, which are typically modelled as short or open-circuits and soft
(or parametric) faults, which consist of component parameters having a value outside of their nominal range, but which still operate functionally. In analogue circuits
there are theoretically an infinite number of possible hard and soft faults, so clearly
a limited set of these must be taken in the SBT approaches in order to make the
problem tractable. Certainly a reduced set of hard faults can be envisioned, branches
can be open-circuited or nodes short-circuited to other nodes. Either an exhaustive
set of such faults can be simulated or a reduced set of the more likely faults to occur
can be constructed. One method for creating a realistic fault list is inductive fault
analysis (IFA) [3], which is based on the physical translation of processing faults into



electrical equivalent circuits. This is based on knowledge of the particular processing
technology and its statistics. Therefore, SBT approaches lend themselves more easily
towards the diagnosis of hard faults. If soft faults are to be considered, it is impractical to take account of all possible variations. Therefore, a reduced set of typical
parameter deviations are simulated and the results recorded.
In any event, whatever the choice of the total set of faults decided upon, the results
of the numerous fault simulations, after fault clustering will be represented by a fault
dictionary.
4.2.1.1 Fault dictionaries
The fault dictionary is the main database for diagnosis in SBT approaches. It consists
of a series of characteristic output responses corresponding to the chosen fault set.
The diagnosis approach in its simplest form is to compare the measured response with
the responses recorded in the dictionary and when a matching response is detected,
this determines the group of possible faults which gives rise to this response. If
the group consists of a single fault, then the diagnosis is complete. Otherwise
more sophisticated systems have to be constructed to narrow the diagnosis down to a
single fault (or possibly a single component). Examples of more sophisticated fault
dictionary techniques, in particular within hierarchical approaches will be described
later in Section 4.3.2.
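In its simplest form the look-up can be sketched as a nearest-signature search. The signatures, fault labels and matching radius below are invented for illustration; a real dictionary would hold simulated responses per fault or ambiguity group.

import numpy as np

fault_dictionary = {                       # fault (or ambiguity group) -> signature
    'fault-free': np.array([1.00, 0.71, 0.10]),
    'R1 open': np.array([0.05, 0.02, 0.01]),
    'C2 short': np.array([1.40, 1.35, 0.90]),
    '{R3, R4} group': np.array([0.60, 0.45, 0.30]),
}

def diagnose(measured, radius=0.15):
    label, dist = min(((name, np.linalg.norm(measured - sig))
                       for name, sig in fault_dictionary.items()),
                      key=lambda pair: pair[1])
    return label if dist <= radius else 'no match: fault not in dictionary'

print(diagnose(np.array([0.62, 0.47, 0.28])))   # -> '{R3, R4} group'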

4.2.2 Simulation after test

SAT approaches consist of processing the measurements made on a CUT, primarily to


examine the voltage-current relationships of the components within the circuit to see
if they have the expected behaviour. Any component whose behaviour is outside of
its allowed tolerance limits is deemed to be faulty and so the three levels of diagnosis
can be achieved in one routine. While at first sight this appears to be a relatively
straightforward and powerful approach, it does require access to potentially all the
nodes for voltage measurement and all branches for current measurement. It can
also involve a very large and complex computational load for large-scale circuits.
The lack of test point access is the major problem in the practical implementation of
SAT approaches. However, a great deal of work has been done in this area from the
theoretical point of view and these will now be described in detail.
4.2.2.1 Self-test algorithm
The self-test (ST) algorithm has been developed largely by Wey and co-workers [4,
5]. The first step of the algorithm is to represent the CUT in a graphical form. Here the
components are represented by directed branches. Simple two-terminal components (such as resistors and capacitors) are represented by a single branch; three-terminal components (such as transistors) can be represented by two branches, and so on. The
direction of the branch, indicated by an arrow, denotes the direction of current flow
or voltage drop across the component. The complete graph represents the topology
of the CUT. A simple example of a circuit and its graph is given in Figure 4.1 [6].

[Figure 4.1: Example circuit (V1, R1, R2, C3, R4) and its associated graph [6].]

Once the CUT graph has been derived, it is possible to define a tree for the graph. A tree is a subset of the graph edges which connects all the nodes without completing any closed loops. The co-tree is the complement subset of the tree. Given a particular graph, there may be many different ways of defining tree/co-tree pairs. Once a particular tree has been defined, the component connection model (CCM) [7] can be used to separate the CUT model into component behaviour and topological description. The behaviour is modelled using the matrix equation:

$$b = Za \tag{4.1}$$

where

$$a = \begin{bmatrix} i_{tree} \\ v_{cotree} \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} v_{tree} \\ i_{cotree} \end{bmatrix}$$

are the input and output vectors, respectively. Z is called the component transfer matrix and describes the linear voltage-current relationships of the components in the CUT. The topology of the CUT is described by the connection equation:

$$a = L_{11} b + L_{12} u \tag{4.2}$$

where u is a stimulus vector. The results from the measurement of the CUT are described by the measurement equation:

$$y = L_{21} b + L_{22} u \tag{4.3}$$

where y is the test point vector containing the measurement results. The Lij are the
connection matrices which are derived from the node incidence matrices referring
to the tree/co-tree partition. From the simple example circuit of Figure 4.1, we can
derive a tree such that V1 , R1 and C3 form the tree and R2 and R4 form the co-tree. We
consider V1 to be the stimulus component, and so we can derive the various vector
equations


$$a = \begin{bmatrix} i_{R1} \\ i_{C3} \\ v_{R2} \\ v_{R4} \end{bmatrix}, \quad b = \begin{bmatrix} v_{R1} \\ v_{C3} \\ i_{R2} \\ i_{R4} \end{bmatrix} \quad \text{and} \quad u = (u_{V1}) \tag{4.4}$$



The component equation then becomes:

$$\begin{bmatrix} v_{R1} \\ v_{C3} \\ i_{R2} \\ i_{R4} \end{bmatrix} = \begin{bmatrix} R_1 & 0 & 0 & 0 \\ 0 & \dfrac{1}{j\omega C_3} & 0 & 0 \\ 0 & 0 & \dfrac{1}{R_2} & 0 \\ 0 & 0 & 0 & \dfrac{1}{R_4} \end{bmatrix} \begin{bmatrix} i_{R1} \\ i_{C3} \\ v_{R2} \\ v_{R4} \end{bmatrix} \tag{4.5}$$
The connection equation is

$$\begin{bmatrix} i_{R1} \\ i_{C3} \\ v_{R2} \\ v_{R4} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ -1 & -1 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_{R1} \\ v_{C3} \\ i_{R2} \\ i_{R4} \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} u_{V1} \tag{4.6}$$

Suppose that we take as our test points the current i_{R1} and the voltage v_{R2}; then the measurement equation becomes:

$$y = \begin{bmatrix} i_{R1} \\ v_{R2} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 1 \\ -1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_{R1} \\ v_{C3} \\ i_{R2} \\ i_{R4} \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u_{V1} \tag{4.7}$$
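Putting Equations (4.5)-(4.7) together numerically (a sketch using the matrices as reconstructed above, with assumed component values and frequency): combining b = Za with a = L11 b + L12 u gives (I − Z L11) b = Z L12 u, after which the measurement equation yields y. This is only a consistency check of the CCM bookkeeping, not a diagnosis routine.

import numpy as np

R1 = R2 = R4 = 1e3                        # assumed values
C3, w, u = 100e-9, 2 * np.pi * 1e3, 1.0   # 1 kHz stimulus of 1 V (assumed)

Z = np.diag([R1, 1 / (1j * w * C3), 1 / R2, 1 / R4])   # b = Z a, Equation (4.5)
L11 = np.array([[0, 0, 1, 1],
                [0, 0, 0, 1],
                [-1, 0, 0, 0],
                [-1, -1, 0, 0]], dtype=complex)        # Equation (4.6)
L12 = np.array([0, 0, 1, 1], dtype=complex)
L21 = np.array([[0, 0, 1, 1],
                [-1, 0, 0, 0]], dtype=complex)         # Equation (4.7)
L22 = np.array([0, 1], dtype=complex)

b = np.linalg.solve(np.eye(4) - Z @ L11, Z @ (L12 * u))  # b = [vR1, vC3, iR2, iR4]
y = L21 @ b + L22 * u                                    # y = [iR1, vR2]
print(abs(y[0]), abs(y[1]))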
Clearly, with large-scale circuits a number of different tree/co-tree pairs exist and, as the connection matrices depend on this partition, different sets of CCM equations are possible. An optimal tree-generation procedure is proposed in Reference 8, which ensures the sparsest matrix system in order to minimize the computational burden. Once the optimal tree/co-tree partition has been determined, test points need to be determined in order to ensure diagnosability of the CUT. An arbitrary diagnosis depth (number of allowable faults present) can be specified. As the diagnosis depth is increased, so the number of test points needs to be increased. Often the algorithm is run on the basis of only one fault being present (a diagnosis depth of unity).
The construction of the CCM equation set is only the first part of the diagnosis algorithm. The second stage is the implementation of the ST algorithm itself. In this stage, the components in the circuit are divided into tester and testee groups. In the first instance, all the components in the tester group are assumed to be fault free. The a and b vectors are accordingly split into tester (superscript 1) and testee (superscript 2) elements, respectively:

$$a = \begin{bmatrix} a^1 \\ a^2 \end{bmatrix} \quad \text{and} \quad b = \begin{bmatrix} b^1 \\ b^2 \end{bmatrix}$$
We now form what is termed the pseudo-circuit description. First the CCM equations are re-written according to the tester/testee partition:

$$\begin{bmatrix} b^1 \\ b^2 \end{bmatrix} = \begin{bmatrix} Z^1 & 0 \\ 0 & Z^2 \end{bmatrix} \begin{bmatrix} a^1 \\ a^2 \end{bmatrix} \tag{4.8}$$



$$\begin{bmatrix} a^1 \\ a^2 \end{bmatrix} = \begin{bmatrix} L_{11}^{11} & L_{11}^{12} \\ L_{11}^{21} & L_{11}^{22} \end{bmatrix} \begin{bmatrix} b^1 \\ b^2 \end{bmatrix} + \begin{bmatrix} L_{12}^{1} \\ L_{12}^{2} \end{bmatrix} u \tag{4.9}$$

$$y = \begin{bmatrix} L_{21}^{1} & L_{21}^{2} \end{bmatrix} \begin{bmatrix} b^1 \\ b^2 \end{bmatrix} + L_{22} u \tag{4.10}$$

where the matrices L_{ij}^{kl} and L_{ij}^{k} are obtained by appropriately picking out the rows and columns of the connection matrices L_{ij}. Solving these equations for the testee quantities yields the so-called pseudo-circuit equation [5]:

$$\begin{bmatrix} b^1 \\ y^p \end{bmatrix} = \begin{bmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{bmatrix} \begin{bmatrix} a^1 \\ u^p \end{bmatrix} \tag{4.11}$$

where

$$y^p = \begin{bmatrix} a^2 \\ b^2 \end{bmatrix}, \quad u^p = \begin{bmatrix} u \\ y \end{bmatrix}$$

$$K_{11} = \left( L_{11}^{11} - L_{11}^{12} (L_{21}^{2})^{-1} L_{21}^{1} \right)^{-1}$$

$$K_{12} = -K_{11} \left[ L_{12}^{1} - L_{11}^{12} (L_{21}^{2})^{-1} L_{22}, \;\; L_{11}^{12} (L_{21}^{2})^{-1} \right]$$

$$K_{21} = \begin{bmatrix} L_{11}^{21} - L_{11}^{22} (L_{21}^{2})^{-1} L_{21}^{1} \\ -(L_{21}^{2})^{-1} L_{21}^{1} \end{bmatrix} K_{11}$$

$$K_{22} = \begin{bmatrix} L_{11}^{21} - L_{11}^{22} (L_{21}^{2})^{-1} L_{21}^{1} \\ -(L_{21}^{2})^{-1} L_{21}^{1} \end{bmatrix} K_{12} + \begin{bmatrix} L_{12}^{2} - L_{11}^{22} (L_{21}^{2})^{-1} L_{22} & L_{11}^{22} (L_{21}^{2})^{-1} \\ -(L_{21}^{2})^{-1} L_{22} & (L_{21}^{2})^{-1} \end{bmatrix}$$

This equation is solved to obtain the testee quantities a^2 and b^2 based on knowledge of the test stimuli u and the measured results y. Whether a particular testee component is fault free or not is determined by whether the results obtained from solving the pseudo-circuit equations agree with the expected behaviour described by Z^2. For ideal fault-free behaviour, the two values of b^2 should be identical. However,
there will be an allowable tolerance, so the test is whether the difference between
the two vectors is within a certain tolerance band. Remember that the tester/testee
partition was done without knowledge of whether the components were faulty or fault
free and the algorithm operates on the assumption that all the components in the tester
group are fault free. Therefore, it is unlikely that this first pass of the ST algorithm
will provide a reliable diagnosis result. However, there will be some components in
the testee group which can be reliably said to be fault free. These can therefore be
moved into the tester partition group (being swapped with other circuit components)
and the ST algorithm re-run. Further testee components will be identified as fault free
and the iterative process continues until it is known that all the components in the
tester group are indeed fault free, at which point the diagnosis is known to be reliable
and the process is complete.
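The control flow of this iteration can be sketched as follows; verify_testees is a placeholder standing in for the solution of Equations (4.8)-(4.11) and the tolerance comparison, so this is a schematic of the bookkeeping only, not the numerical algorithm itself.

def self_test(components, verify_testees):
    # verify_testees(tester, testee) returns the testee components whose
    # pseudo-circuit solution agrees with Z^2 within tolerance (assumed helper).
    half = len(components) // 2
    tester, testee = set(components[:half]), set(components[half:])
    trusted = set()                        # components proved fault-free so far
    while not tester <= trusted:           # loop until every tester is verified
        trusted |= verify_testees(tester, testee)
        unproven = tester - trusted
        movable = list(trusted & testee)[:len(unproven)]
        if not movable:
            break                          # no verified testees left to swap in
        tester = (tester - unproven) | set(movable)    # swap the groups
        testee = (testee - set(movable)) | unproven
    return trusted, set(components) - trusted          # fault-free, suspect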
It should be noted that, strictly, this algorithm is only valid for parametric faults. If catastrophic faults (short- or open-circuits) are present then the topology of the original circuit changes, and the original graph and tree/co-tree definitions will be in error. However, it will be seen in the next section that a hierarchical extension of this process can indeed diagnose catastrophic faults provided they are completely contained within a grouped subcircuit.



4.2.2.2 Diagnosis via sensitivity analysis
An alternative approach to fault diagnosis is via a sensitivity analysis combined with
functional testing [9]. This approach makes use of measurements of the CUT and then
comparison with the parameters of the ideal good circuit. An estimate of the relative
deviation of each parameter can then be made, which reflects the influence of one
or more bad components within the CUT. If the sensitivity of the output parameters
with respect to the parameter variations is calculated (or possibly measured) then FD
and location can be performed. The algorithm requires that the system of equations
being solved is linearly independent, therefore the number of test points has to equal
or exceed the number of components. Also, the sensitivity equation system being
solved is a linear one, so is often a first-order approximation to the real variations
and an iterative solution is usually required. In summary, the approach consists of the
following steps:
1. For a known good circuit, calculate the circuit characteristics that will be measured when the circuit is under test, using the circuit components' nominal values. The circuit characteristics are such things as gains or cut-off frequencies. The authors refer to these characteristics as the circuit output parameters.
2. Measure the output parameters of the CUT.
3. Determine the relative deviations of the output parameters from their nominal
values calculated in step 1.
4. Compute the sensitivities of the parameters at the nominal values of the circuit
components. It would also be possible to measure these sensitivities; but this
approach is not detailed in Reference 9.
5. Form the relationship between the measured deviations of the output parameters and the sensitivity values computed in step 4. This is given by a set of n equations, where n is the number of output parameters, of the form:

$$\frac{\Delta T_1}{T_1} = \frac{\sum_{i=1}^{k} S_{x_i}^{T_1} (\Delta x_i / x_i)}{1 + \sum_{i=1}^{k} S_{x_i}^{D_1} (\Delta x_i / x_i)} \tag{4.12}$$

where T_1 to T_n are the nominal values of the n output parameters, ΔT_1 etc. are the measured deviations of the output parameters, S_{x_1}^{T_1} etc. are the computed sensitivities of the output parameters with respect to the k component variations, S_{x_1}^{D_1} etc. are the computed sensitivities of the denominators of the output parameters with respect to the k component variations, x_i are the nominal values of the k components and Δx_i are the deviations of the k components, which are to be calculated. As there is a set of n linearly independent equations to solve for the k values of Δx_i, the number of measured parameters, n, must be greater than or equal to the number of components, k.
6. Determine the solutions (Δx_i/x_i) of the equation system of step 5. The algorithm may have to be iterated from step 3 if sufficient precision is not achieved.
7. Having determined the deviations of the parameter values, these can be compared with the nominal values and the acceptable tolerances, as well as a further possible definition of the boundary between soft and hard fault values, in order to classify each component value as: (i) within acceptable tolerance range; (ii) out of tolerance range (parametric fault); or (iii) catastrophic fault.
Therefore, the algorithm is potentially very powerful in terms of achieving all three
levels of circuit diagnosis, but at the cost of requiring a lot of independent parameter
measures and a very high computation cost for large-scale circuits.
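In its simplest first-order form (neglecting the denominator-sensitivity correction in Equation (4.12)), step 5 reduces to the linear system ΔT/T ≈ S (Δx/x), which the sketch below solves in the least-squares sense for invented sensitivities and measured deviations; a real implementation would iterate as noted in step 6.

import numpy as np

S = np.array([[1.0, -0.5, 0.0],           # n x k sensitivity matrix (n >= k), assumed
              [0.2, 1.0, -1.0],
              [0.0, 0.3, 0.8],
              [-0.6, 0.0, 0.4]])
dT_rel = np.array([0.08, -0.07, -0.05, -0.03])   # measured (T - T_nom) / T_nom, assumed

dx_rel, *_ = np.linalg.lstsq(S, dT_rel, rcond=None)
for i, d in enumerate(dx_rel, start=1):
    verdict = 'within tolerance' if abs(d) < 0.05 else 'parametric fault'
    print(f'x{i}: {d:+.3f} ({verdict})')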

4.3 Hierarchical techniques

It can be seen from the descriptions in the previous section that for both SBT and SAT
diagnosis approaches, the computational effort increases enormously with increasing
circuit size. In the SBT case, as the number of components increases, so does the
number of possible faults and therefore the number of simulations required in order
to derive a sufficiently extensive and reliable fault dictionary. In SAT approaches, as
these are often matrix-based calculations, the size of the vectors and matrices grows
proportionally with the complexity of the CUT, but the processing increases at a
greater rate, particularly when matrix inversions are required.
In both cases, the computational burden can be made more tractable by the use of
hierarchical techniques. This basically takes the form of grouping components into
subcircuits and treating these subcircuits as entities in their own right, thus effectively
reducing the total number of components. This can be extended to a number of
different levels, with larger groupings being made, perhaps including the subcircuits
into larger blocks. The diagnosis can be implemented at different levels and if required
the blocks can be expanded back out to pinpoint a fault that had been identified with
a particular block.
A number of different approaches to hierarchical diagnosis have been proposed,
for both SBT and SAT techniques (and also combinations of the two). Some of these
are described in the remainder of this chapter. Although not an exhaustive treatment,
it highlights some of the more important procedures described in the literature in
recent years.

4.3.1 Simulation after test

We will look first at the hierarchical extensions based on SAT approaches. Some
of these are based on the ST and CCM combined system described in the previous
section. There have also been approaches which make use of sensitivity analysis and
also neural networks to achieve the diagnosis process.
4.3.1.1 Extensions using the ST algorithm
A very straightforward approach to adapting the ST/CCM to hierarchical applications
is described in Reference 10. Here the subcircuit grouping is performed and the
subcircuit is then effectively treated as a black box and is represented by a section
of graph corresponding to the terminals of the subcircuit. For example, an operational

amplifier, which is a very common building block in analogue circuits, could be represented as shown in Figure 4.2.

[Figure 4.2: Operational amplifier and its associated hierarchical graph representation [10].]
In the previous description of the circuit graph, each two-terminal component was treated as a single edge in the graph. Now we have n-terminal components that are in turn represented by a set of n edges connecting the input and output nodes to a common reference point (or n − 1 edges if one of the terminals is the reference point). Following a similar relationship to Equation (4.1), the hierarchical block can be described by the matrix equation:

$$b_{hier} = Z_{hier} a_{hier} \tag{4.13}$$

The hierarchical component transfer matrix Zhier is part of the overall transfer
matrix Z.
Clearly, by adopting this approach, the size of the overall circuit graph can be
considerably reduced from the flat circuit representation and the matrix solution
problem becomes more tractable. However, there are a number of complications
arising from this approach. The next step of the CCM algorithm is to partition the
graph into a tree/co-tree pair. On remembering the definition of the a and b vectors
from Equation (4.1), and then when considering the hierarchical component, the
constituent edge currents and voltages must be part of the a and b vectors in the
same way. An optimal tree-generation algorithm was proposed in Reference 8 for
non-hierarchical circuits. This consists of ordering the various components by their
type (voltage sources, capacitors, resistors, inductors and current sources). The earlier
components in the list are preferred tree components, the latter are preferred co-tree.
This preference list has to be adapted to take account of the hierarchical components as
well. As described in Reference 11, this includes the hierarchical tree edges between
the voltage sources and capacitors and the hierarchical co-tree edges between the
inductors and current sources in the preferred listing. In order to prioritise components
in the same class, the edge weight of each component is calculated as the sum of
other edges connected to the edge under consideration. For components with equal
weight, they are further prioritised by the parameter value.
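The class-based preference can be emulated with a standard minimum-spanning-tree computation, a sketch under the assumption that class rank alone decides the edge weights (the full procedure of Reference 8 also uses connectivity and parameter values). For the example circuit of Figure 4.1 this reproduces the tree {V1, R1, C3} and co-tree {R2, R4} used earlier.

import networkx as nx

CLASS_RANK = {'vsource': 0, 'hier_tree': 1, 'capacitor': 2,
              'resistor': 3, 'inductor': 4, 'hier_cotree': 5, 'isource': 6}

G = nx.Graph()
for name, kind, n1, n2 in [('V1', 'vsource', 1, 0), ('R1', 'resistor', 1, 2),
                           ('C3', 'capacitor', 2, 3), ('R2', 'resistor', 2, 0),
                           ('R4', 'resistor', 3, 0)]:
    G.add_edge(n1, n2, name=name, weight=CLASS_RANK[kind])

tree = nx.minimum_spanning_tree(G, weight='weight')
tree_names = {d['name'] for _, _, d in tree.edges(data=True)}
all_names = {d['name'] for _, _, d in G.edges(data=True)}
print('tree:', tree_names, 'co-tree:', all_names - tree_names)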



The main complication in applying the ST algorithm is in determining the tester/testee partition and the subsequent exchange of components. As the hierarchical components are treated as single entities, even though they may have several edges, they must be completely in one group or the other. It is not permissible to have some of a hierarchical component's edges in the tester group and some in the testee group. This leads to a rather complex, and sometimes restrictive, system of testing and reordering at each iteration of the ST algorithm.
One fortunate side product of this hierarchical approach is that it is possible to
include hard faults in the diagnosis, provided that they are completely contained within
a hierarchical block (and not connected with any of the input or output nodes). The
restriction to including open- or short-circuit faults in the non-hierarchical approach
was that such faults alter the topology of the circuit and therefore the original graph.
In the hierarchical case, with the fault contained within subcircuit, the graph section
for the subcircuit is not altered.
An alternative approach, which also uses the ST/CCM method as a starting point,
is termed the robust component failure detection scheme [12]. In this scheme the FD
and FI, the first two levels of fault diagnosis, are achieved in two separate operations.
The scheme is described as robust as it is tolerant of both parameter variations and
measurement errors. During the formation of the CCM, additional terms are included
in the a, b and y vectors to represent the parameter variations and measurement errors.
Additionally, an extra term is included in the b vector, termed the fault vector f, which
represents the total effect of any fault present within the circuit. The transfer matrices
are formed in the normal CCM method followed by the generation of a residual, which
is a vector which under the conditions of no parameter variation, no measurement error
and no faults, would be identically zero. Given a bound to the component tolerances
and measurement error, then the residual is also within a specified bound. The FD
method consists of setting up the test conditions at one or more frequencies, observing
the output signals, constructing the residual and checking if it is within the prescribed
bound (fault free) or outside (fault detected).
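A toy numerical sketch of this FD step (all numbers assumed): the residual is a linear function of the measurements that is identically zero for the nominal circuit, and detection reduces to comparing its norm with the tolerance-derived bound.

import numpy as np

W = np.array([[1.0, -2.0, 1.0]])    # residual generator: r = W y, nominally zero
bound = 0.02                         # bound from tolerance and measurement-error limits

def fault_detected(y):
    return bool(np.linalg.norm(W @ y) > bound)

print(fault_detected(np.array([1.00, 0.505, 0.01])))  # within bound -> False
print(fault_detected(np.array([1.00, 0.80, 0.01])))   # outside bound -> True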
In order to achieve FI, the residual generation is altered: a residual filter is constructed such that each of its outputs is sensitive to only one component failure and robust to all other component failures. The generation of this system relies
on the invertibility of the L11 matrix in the CCM formation. L11 is dependent on the
tree/co-tree partition, but it is proved in Reference 12 that the matrix is guaranteed
to be invertible under any tree/co-tree partition provided that the number of branches
in the graph is twice the value of the effective number of nodes in the graph. (The
effective number of nodes is one less than the actual number, owing to one node being
the reference node.) This approach also requires full rank measurement to generate the
output vector. This can be difficult (or impossible) to achieve in large-scale circuits, in
addition to the condition concerning the number of nodes and branches not necessarily
being true in the general circuit case. The proposed solution is therefore to adopt a
hierarchical approach, in a similar way to the previously described method. As well as
making the measurement task more flexible, the node/branch condition can be assured,
although this does impose a restriction on the choice of hierarchical partitioning, as
opposed to the complete freedom of the previous approach.

[Figure 4.3: Example hierarchical circuit partitioned into blocks B1-B5 and its associated BPT [13].]

4.3.1.2 Symbolic analysis


The approach described in Section 4.2.2.2, using sensitivity analysis combined with functional testing as a method of fault diagnosis, also suffers from the same problems of
the computational cost when translated to large-scale circuits. Here the problem is
one of the computation of sensitivities within a large-scale system and the subsequent
solution of a large number of connected, but linearly independent, equation sets. The
computational effort may be eased by translating the circuit into a hierarchical description. A possible solution to the calculation of the sensitivities of circuit behaviour to
individual component tolerances using symbolic analysis has been suggested in Reference 13, with further details in Reference 6. This symbolic analysis derives symbolic
expressions for the behaviour of linear analogue circuits. This derivation needs to be
performed only once, later simulations, such as fault simulation or sensitivity analysis, are replaced by evaluation of the symbolic network expressions. A hierarchical
approach to symbolic analysis was described in Reference 14, which leads to the generation of a sequence of expressions (SOE). The SOE and subsequent computation
effort, only grows linearly with circuit complexity and hence can lead to a much faster
method of simulation.
The hierarchical partitioning process is performed by the use of a binary partition
tree (BPT), an illustration of which is given in Figure 4.3, which shows a block of
circuitry that has been partitioned into a number of blocks at different hierarchical
levels along with the associated BPT.
The behavioural description of each block is generated via modified nodal analysis
(MNA) [15]. After partitioning, the subcircuits are individually examined, starting
with the leaves of the BPT and continuing in a bottom-up fashion. All equations of the
MNA description that refer to the internal nodes of a subcircuit are eliminated using the
Gauss algorithm. Then the matrices of the subcircuits are combined in a hierarchical
way according to the BPT, and subsequent equations referring to nodes internal to
the new level of hierarchy are also eliminated. While this process is proceeding,
the derived arithmetical operations are stored. The result is a sequence of nested
expressions with hierarchical backward dependency, which can be described by the
following equation set:
H1 = f(s, X)
H2 = f(s, X, H1)
⋮
Hk = f(s, X, H1, . . . , Hk−1)
H = f(s, X, H1, . . . , Hk)    (4.14)
The final expression gives the transfer function H of the overall circuit where s is the
Laplace variable and X the set of component parameters.
A fault diagnosis approach using this sort of symbolic method was described
in Reference 16, in which single and double parametric faults were simulated. The
simulation time for a large-scale circuit was shown to be 15 times faster using the
SOE rather than a traditional numerical simulation. Further improvements in speed
are described in Reference 13, which include: (i) only re-evaluating the part of the
SOE that is influenced by a parameter variation; and (ii) optimum choice of the BPT
so that the number of expressions influenced by a parameter is minimized. These two
methods are now described in detail. Method (i) is derived from Reference 17, in
which the SOE approach was applied to sensitivity analysis. In order to identify the
equations of the SOE that are influenced by a particular parameter x, use is made of
a graphical technique. Here, the dependencies implied in the SOE are represented by
connecting arrows. As an example, consider the SOE equation system given in
Equation (4.15) and the corresponding expression graph given in Figure 4.4.
H = Hk / Hk−2
Hk = Hk−1 + Hk−2
Hk−1 = 3Hk−5 + 3
Hk−2 = 2Hk−4
⋮
H4 = H3 / H1
H3 = 5H2
H2 = x2
H1 = x1    (4.15)
Each expression in the SOE is represented by a node Ha, and the dependency of
Ha on Hb is represented by an edge (Ha, Hb) from vertex Ha to Hb. As the SOE is
constructed so that there are only ever backward dependencies, the resulting graph
has no loops, that is, it is acyclic and is referred to as a directed acyclic graph (DAG)
[17]. The final term H is called the root node and refers to the overall transfer function
of the system; the leaf nodes refer solely to the circuit parameters, for example,
H1 and H2. The DAG is then used to accelerate the fault simulation
in the following fashion. Given a particular parameter of interest, x, we follow the
DAG paths in the opposite direction to the arrows, that is, bottom-up, from the leaf
node to the root node.

Figure 4.4 DAG of equation system (4.15) [13]


Figure 4.5 BPT of hierarchical system of Figure 4.3 [13]

In this way, only the expressions that are influenced by x are
re-evaluated, potentially leading to great savings in computation time.
For method (ii), additional acceleration of computation can be made by reducing
the number of expressions influenced by a parameter. The aim is to minimize the average number of influenced expressions per parameter, resulting in an average reduction
in the computation cost. The number of expressions influenced by a parameter is the
sum of the lengths of the paths in the DAG that connect from the root node to the leaf
node of the parameter under consideration. Therefore, the aim of this method is to
minimize the average length of the DAG paths from root to leaves. Clearly the SOE
represents the functionality of the circuit, so this cannot be altered itself. However, the
way in which the circuit is hierarchically partitioned can be altered and it is through
this choice that the optimization is achieved. A heuristic solution to this problem was
introduced in Reference 18 for sensitivity analysis. It relies on the fact that the DAG
and the BPT are strongly related. As an example, consider the DAG representation
that is related to the BPT illustrated in Figure 4.3 as shown in Figure 4.5.
For each node in the BPT there is a sample set of equations from the SOE and
therefore a corresponding set of nodes from the DAG. Similarly, as the BPT represents
the dependency between different hierarchical blocks through the connecting directed
edges, there is a close similarity between the paths in the BPT and paths in the DAG.
Therefore, for most typical circuit structures, the length, lpt, of a path in the BPT
and the length, lDAG, of the corresponding path in the DAG are proportional to each
other, lDAG ∝ lpt. Therefore, minimizing lpt minimizes lDAG. We have now reduced
the problem to determining the BPT that will minimize the average value of lpt. The
solution to this comes from graph theory, where it is known that the solution is a
maximally balanced tree structure, as illustrated in Figure 4.6.

Figure 4.6 Balanced (lower) and unbalanced (upper) binary partition trees [13]
Here the two extremes of balance for a tree structure are illustrated, a totally
unbalanced tree and a maximally balanced partition tree. The respective average
lengths of the paths are given by
lpt(unbalanced) = (n + 1)/2 − 1/n    (4.16)

lpt(balanced) = log2 n    (4.17)

where n is the number of leaves of the tree (number of circuit parameters). Therefore, a
maximum improvement of O(n)/O(log2 n) can be achieved by choosing a maximally
balanced BPT as the basis for the SOE analysis.
Having established the advantages of the hierarchical approach to tolerance/sensitivity
analysis and fault simulation yielded by symbolic analysis, both SAT
and SBT applications can now be envisioned. Although SBT approaches are further
detailed in the subsequent section of this chapter, both applications of symbolic
analysis will be outlined here.
In respect of SAT algorithms, the symbolic approach to the calculation of sensitivities is directly applicable to the method of diagnosis described in Section 4.2.2.2.



One of the main difficulties with this approach, apart from requiring a large enough
number of measurements, is the iterative calculation of sensitivities, due to the linear
nature of the equation system. This may require many sensitivity calculations, especially for large-scale circuits. Therefore, the post-measurement computation burden
can be excessively high. However, employing the SOE approach to multi-parameter
sensitivity analysis can greatly ease this computation task.
In Reference 17 a method has been described for the calculation of the sensitivity
with respect to a parameter, x, which follows the DAG paths in a bottom-up fashion, starting with the leaf node representing the parameter x and ending at the root
node representing H. For example, considering the SOE and DAG associated with
Figure 4.4, the sensitivity of H with respect to x1 is calculated in a successive fashion,
represented by the following equation system:
∂H1/∂x1 = 1
∂H4/∂x1 = (∂H4/∂H1)(∂H1/∂x1)
⋮
∂Hi/∂x1 = Σ_{(Hi,Hj)∈DAG} (∂Hi/∂Hj)(∂Hj/∂x1)
⋮
∂H/∂x1 = Σ_{(H,Hj)∈DAG} (∂H/∂Hj)(∂Hj/∂x1)    (4.18)

The summing condition (Hi, Hj) ∈ DAG (where (Hi, Hj) denotes the edge from Hi
to Hj) is a consequence of the fact that the edges represent the explicit dependencies
between the expressions, that is

(Hi, Hj) ∉ DAG  ⟹  ∂Hi/∂Hj = 0    (4.19)

To perform multi-parameter sensitivity analysis using this procedure, Equation (4.18)
has to be evaluated separately for each parameter x ∈ X. This requires a considerable
computational effort, particularly as the process may need to be iterative. Therefore,
two improvements are described in Reference 13 to improve the speed of computation.
First, the use of the maximally balanced BPT, as described above, can be used to
minimize the computation involved in Equation (4.18). Second, a parallel procedure
for the calculation of multi-parameter sensitivity analysis is proposed. This tackles
the problem from a top-down approach and calculates the sensitivities with respect to
all parameters in parallel. Starting with the root node of the DAG, partial derivatives
of the circuit transfer function H with respect to each expression Hj of the SOE are
calculated proceeding downwards until the leaf nodes are reached. On the basis of
the DAG of Figure 4.4, the following equation set can be derived:

∂H/∂H = 1
∂H/∂Hk = ∂H/∂Hk
∂H/∂Hk−1 = (∂H/∂Hk)(∂Hk/∂Hk−1)
⋮
∂H/∂Hj = Σ_{(Hi,Hj)∈DAG} (∂H/∂Hi)(∂Hi/∂Hj)
⋮
∂H/∂H1 = Σ_{(Hi,H1)∈DAG} (∂H/∂Hi)(∂Hi/∂H1)    (4.20)

As the leaf expressions Hn of the SOE each correspond to a circuit parameter, the
sensitivities of the network function are given by the partial derivatives of H with
respect to the leaf expressions:

sen(H, xn) = ∂H/∂xn = ∂H/∂Hn    (4.21)

Therefore, evaluating Equation (4.20) generates all the sensitivities in parallel. Further
details of the method are given in Reference 6. As one sensitivity term is generated
for each leaf node of the DAG and as the number of leaf nodes is expected to increase
linearly with increasing circuit complexity [14, 17], this indicates that the computational expense of this parallel system is expected to grow only linearly with circuit
complexity.
In respect of the SBT approach, the symbolic method can be applied to the
generation of a fault dictionary through fault simulation. The process consists of the
following steps:
1. The circuit is divided into a hierarchical system employing a maximally
balanced BPT.
2. The SOE for the system is established.
3. Using the nominal circuit parameters (leaf nodes), the nominal SOE is
evaluated to yield the nominal value of the transfer function H.
4. For each fault simulation, the parameter under consideration is changed to its
faulty value and also the corresponding leaf node is allocated a token.
5. Proceeding bottom-up through the graph, each node that has a token passes the
token on to all its predecessor DAG nodes and all the respective expressions
are re-evaluated.



6. The process is continued until the root node is reached, yielding the fault-simulated
transfer function Hf.
The process can be run for both single-fault conditions, where only a single leaf is
allocated a token and fault parameter value, and multiple-fault conditions, where all
the relevant leaf nodes are allocated tokens and fault values. Any nodes on the DAG
that are not passed tokens during the process, and are therefore not re-evaluated, remain
at their nominal values determined in step 3. Once all the required fault simulations
have been completed, the fault dictionary can be built in the traditional manner.
In Reference 13 the method is applied to the fault simulation of an active bandpass
filter circuit with 44 circuit parameters. Here an additional increase in simulation speed
by a factor of five was observed when a fully balanced BPT and token passing were
implemented, on top of the factor of 15 observed in Reference 14, which represented
the gain from using SOE symbolic analysis compared to traditional fault simulation
techniques.
4.3.1.3 Neural network approaches
Neural network approaches have been investigated over several years for analogue
circuit fault diagnosis. Neural networks are particularly appropriate for solving this
problem because of the component tolerances, measurement errors and non-linearities
associated with analogue circuits. The ability of neural networks to learn and adapt,
together with the inherently parallel processing nature of the problem, has meant
that they have been successfully implemented in a number of different ways, and
there are over a hundred publications in the literature that describe the theory and
application of these techniques.
The number of publications that deal directly with the hierarchical aspects of diagnosis
and that make use of neural networks is, however, very much smaller. One early
technique worthy of note is described in Reference 19. Here the author considers
the overall CUT at a number of different hierarchical levels: system, functional
block, subcircuit and component. As with many other neural network techniques, the
diagnosis operates on signatures generated at the outputs of each particular circuit
block. In this case, the technique is an extension of the fault dictionary approach
to diagnosis. Taking the hierarchical approach eases the burden of generating the
dictionary owing to a decrease in the number of fault simulations required. The
procedure consists of two stages: first, the generation of a fault dictionary, which takes
place in a bottom-up fashion through the hierarchical layers; followed by the diagnosis
routine, operating on measured responses from the CUT, which progresses in a
top-down fashion through the hierarchical layers to pinpoint the location of the fault.
The starting point is to generate fault signatures at the lowest level of the hierarchy.
These are then grouped into clusters, using a Kohonen neural network, and stored in
a database. Use is made of macromodels to translate the fault dictionary process to
the next level of the hierarchy, again using the signatures to train the neural network
and derive the appropriate weightings. The reduction in fault simulations required in
the next level of the hierarchy is achieved through the use of the fault clusters. The
method is to take one example from a cluster as being representative of that cluster
and use this, via the generated macromodel, for the fault simulation at the next level.
The process is repeated upwards to the highest level of the hierarchy.
Once the response of the CUT has been measured, the process is reversed, using
the trained networks to operate on the signature to trace down through the hierarchical
levels and locate the fault. However, this technique does require that in moving
down through the hierarchical levels, the block in which the fault has been identified
must then be isolated and the terminal nodes made accessible. Additionally, there
is the problem of the fault clustering, and the transference of one representative
fault signature through the macromodelling, which was performed in the bottom-up
process of neural network training. In order to track back through the hierarchical
layers to provide detailed diagnosis results, the database has to be extended to separate
the faults in all the clusters. This is achieved by training the neural network with a
new set of signatures for the faults in each cluster. This can be achieved by either
measuring the data at a new node or performing a different type of analysis (e.g., a
time domain measurement or a phase analysis, etc.). The reasoning behind this is that
a different type of analysis of the response from the different faults, which provided
similar signatures in one form of analysis, may well provide sufficiently different
responses when analysed under different conditions.

4.3.2 Simulation before test

As outlined in Section 4.2.1, the main approach to SBT has been through the use
of fault dictionaries. There is a requirement for constructing a database of responses
from a set of simulations, each one introducing a particular fault into the circuit.
The main problem comes in selecting a suitable set of faults to examine, which is
comprehensive enough to provide a reliable database and yet which is still within
a reasonable computational effort. As the circuit complexity grows, the number of
faults to be simulated rises and so having a hierarchical approach to the problem can
ease the computational requirements.
One group which has been pioneering in this particular area is based at the Georgia
Institute of Technology in Atlanta, GA, USA, and this section will describe some of
this group's work. One main publication that introduced this work is Reference 2,
which describes the development and functioning of the MiST PROFIT (Mixed-Signal
Test Program for Fault Insertion and Testing) software. This has been implemented by
Cadence Design Systems in their IC software suite. The software includes hierarchical
fault simulation, fault clustering and hierarchical diagnosis.
The basis of the hierarchical fault modelling approach is illustrated in Figure 4.7.
Here the circuit consists of N levels of hierarchy, with the highest level representing
the complete assembled system. Level 1, the lowest level, consists of the leaf cells.
Depending on the nature of the circuit, these may be at the transistor level or possibly
subcircuit (e.g., op-amp) level. In any event, the concept of the LRU is used in
this approach, which may or may not coincide with the leaf cells. For example, in
Figure 4.7, where the LRUs are indicated by shaded boxes, module A is designated as
an LRU, but is not at level 1 of the hierarchy.

Figure 4.7 MiST PROFIT hierarchical fault modelling method [2]

The point about LRUs in this approach
is that they represent the lowest point in the hierarchy to which the diagnosis will be
performed. This is a practical consideration from the point of view of being able to
repair a faulty circuit or system.
The specifications for each level of the hierarchy are indicated in Figure 4.7. At
the top level there are n specifications, at the other levels there are varying numbers
of specifications appropriate to the different levels of abstraction. The key to this
approach though is the relationship between the specifications at various levels and,
in particular, how faults are propagated from lower to higher levels. This is performed
through behavioural modelling and is the pivotal aspect of the approach. A fault at the
top level of the hierarchy can be represented by one or more out-of-range values for
spec(1), . . . , spec(n) or may occur because of a structural fault in the interconnections
between the modules at the N−1 level. Therefore, the faulty values of the spec()
parameters and the structural interconnection faults must be derivable from the
basic faults introduced in order to form the fault dictionary. The fault simulation
process must therefore be a bottom-up approach.
Starting with the leaf cells, which are the basic building blocks of the circuit,
the specifications for these cells are known in terms of nominal values and tolerance
ranges. A value outside the accepted range, whether it represents a parametric
deviation (soft fault) or a deviation to a range extremity (hard fault), can be introduced
into the fault modelling process. By simulation, the effect of the introduced faults
can be quantified. These effects now have to be translated into the next level of the
hierarchy via behavioural modelling of the module they affect. However, during this
process, the concept of fault clustering can be introduced. Suppose two different faults
at the leaf cell level give rise to substantially the same simulated response (within a
specified bound). There is no need to translate behavioural models of the individual
faults: a single model will suffice for both. There is no loss of diagnostic
power here (in terms of fault location) as the two (or more) faults that give rise to the
characteristic response originate in the same cell.
During the fault simulation process, there are two possible approaches to fault
propagation. First, injection of a chosen fault into the leaf cell and propagation of
the results of the fault into the higher levels of the hierarchy or, second, computation
of the effect of the fault on the behaviour of the leaf cell, repeating this for all the
specified faults, and construction of an overall behavioural model that can then be
applied in a one-pass way to the next level of the hierarchy. The first option involves
more simulation, the second option requires additional modelling effort but is faster
from the diagnostic simulation point of view. The MiST PROFIT software supports
both approaches.
Given that there are n specifications for a particular module, these define an
n-dimensional specification space. Hard faults represent a point in this space, whereas soft
faults represent a trajectory or vector within this space. These vectors are represented
by non-linear, real functions and while the functions are computed for one particular
parametric fault, all the other circuit parameters are held at their nominal values. In
the MiST PROFIT approach, component tolerances can also be taken into account.
These are computed separately as described shortly. The fault points and vectors have
to be derived through simulation.
Tolerance values are often computed via Monte Carlo approaches, but these suffer
from a huge computation cost, often requiring many thousands of circuit simulation
runs. This makes them unsuitable for large-scale circuits. So the MiST PROFIT
software takes an alternative route to derive the component tolerance effects, termed
the tolerance band approach. A method of propagating the tolerance effects through
the different levels of the hierarchy is also described. A tolerance band approach was
introduced in Reference 20, but this depends on the invariance of the signs of the
sensitivities under fault conditions. This condition is not generally met as the output
values are often non-linear and non-monotonic with respect to parameter variations.
Therefore, to obtain a more accurate bound of the tolerance effects, the signs of the
sensitivities have to be re-evaluated for each fault value, and the following
algorithm has therefore been implemented (a sketch in code is given after the steps):
For each hard fault:
1. Compute the sensitivity of the measurement to the parameter, p.
2. For each parameter, calculate upper and lower bounds to the parameter, the
upper bound is given by the nominal value of p plus the sign of the sensitivity
multiplied by the tolerance value of p whereas the lower bound is given by
the nominal value minus the sign of the sensitivity multiplied by the tolerance
value.
3. The circuit is simulated using these two bound values of p in order to establish
the upper and lower tolerance bounds.
As two additional simulations have to be made for each fault value, it is not
practical to implement this for the soft-fault case, as a number of tolerance calculations
have to be performed along the fault vector space. Therefore, an alternative approach
is adopted based on the following heuristic. Consider a specification as a function of a
particular parameter p, that is, spec(i) = fi(p) for any i between one and n, for a model
with n specifications and some non-linear function f. If the sign of the sensitivity of
spec(i) to any one circuit parameter differs for two different values of p, then it is
highly likely that the signs of the sensitivities to all the circuit parameters for the two
different values of p are also different. The relationship spec(i) = fi(p) is a mapping
of the fault trajectory in n-dimensional space into a two-dimensional relationship and
it has the same characteristic of piecewise linear (PWL) sections defined by pairs
of end points. If there exist two contiguous, non-monotonic sections of this PWL
relation, then the signs of the sensitivities will be different for these two segments
and the sign of the sensitivities of all spec(i) to all the circuit parameters must be
re-evaluated. This can then be used to compute the upper and lower bound sets, as
was done for the hard-fault case and the tolerance bounds for the two segments can
then be calculated. Although this requires a re-calculation for each non-monotonic
section of the relationship, in general these are relatively low in number; in practice
the required computation effort is very much less than for a Monte Carlo approach,
while the accuracy is improved over the basic fault-band approach of Reference 20. The
tolerance effects are propagated hierarchically bottom-up from the leaf cells to the
top level.
Now we concentrate on the hierarchical fault modelling that will lead to the
construction of the fault dictionary. Even just considering hard faults, the number of
possible faults to be introduced can be extremely large and so fault clustering will
be required. Representative faults can then be injected into the lower levels of the
hierarchy using the behavioural modelling approach mentioned earlier. The set of all
computed faults (after fault clustering) at all hierarchical levels provides the complete
fault dictionary. The effect of the fault at the top level will produce a point in the
n-dimensional space that is outside the allowable tolerance box. This is referred to
in Reference 2 as the observed fault syndrome. A similarity measure is then applied
to this measured result to identify the entry in the fault dictionary that most closely
matches this point. Given that there are tolerance boxes associated with each point,
this may generate an ordered list of possible entries with decreasing likelihood of
match. From this, the likeliest faulty module can be identified. If the module is an
LRU, the diagnosis process is complete. If not, the diagnosis must proceed down to
the next level of hierarchy.
At this point there are two approaches that can be considered that have an impact
on the original fault dictionary construction and the manner of the fault clustering.
The first option is, when considering diagnosis at a lower level, to base the diagnosis
on additional measurements on the faulty module in order to determine which of the
sub-modules, of which it is constructed, is faulty. The second option is to diagnose
down to the LRU based only on the observed fault syndrome at the top level. In the
first case, fault clustering can be done such that faults in modules at a particular level
associated with a common module at the next highest level can be clustered together.
This is termed in Reference 2 as clustering for simulation. In the second option,
to enable differentiation between faulty modules at the lower level purely from the
observed fault syndrome, these faults cannot be clustered. This is termed clustering for
diagnosis. In the first approach the simulation effort required in constructing the fault
dictionary is much smaller as more clustering can occur at the lower levels. However,
in the subsequent diagnosis, additional measurements will need to be made, requiring
access to internal nodes of the circuit, which may be difficult or even impossible. In
the second approach the total simulation burden is much higher, but the system can
be fully diagnosed from top-level measurements.



For hard fault simulation and diagnosis, the diagnosis procedure is based on a
comparison of the positions in the n-dimensional space (taking account also of the
tolerance boxes). With parametric faults, there is an associated trajectory in the space
which, for convenience, is constructed in a PWL fashion. The diagnosis search is
therefore more complicated as the search must be made for a best fit positional point
on a trajectory, again taking account of the tolerance regions around the trajectory.
However, if successful, this can yield both fault location and fault value diagnosis
information.

4.3.3 Mixed SBT/SAT approaches

The group at Georgia Tech have further extended their diagnosis approach for complex
AMS systems by implementing both SBT and SAT approaches in a combined method
[21]. This method aims at both fault location and fault value evaluation, using the
fault dictionary SBT approach for the fault location and an SAT approach for the
fault value evaluation. As a running illustration in Reference 21, a biquadratic filter is
considered and three levels of hierarchy are used the top level being the overall filter
circuit, the next level down consists of op-amp, resistor and capacitor components,
and the lowest level considers the nine transistors that comprise each op-amp circuit.
This is not the only circuit that the authors have used the diagnosis tools on, but it
provides a useful example system.
The first stage is the hierarchical fault modelling, which follows along the same
lines as described in the previous section. At the topmost level the functional block
comprising the filter is characterized by a magnitude response for the transfer function at three separate frequencies. At the next hierarchical level down the op-amps are
characterized by two parameters, the voltage gain, Av and the gain-bandwidth product
(GBW), while the resistors and capacitors are simply characterized by their resistance
and capacitance respectively. Finally, at the lowest hierarchical level, the MOSFET
devices that comprise the op-amps are characterized by two parameters, the threshold
voltage, Vth and the width/length dimension ratio of the gate, W /L. The leaf cells of
the hierarchical system are the resistors, capacitors and transistors and these are also
defined as the LRUs. Therefore, LRUs exist at two different levels of the hierarchical structure. In the construction of the fault dictionary, the process starts with the
introduction of a series of faults into the transistors. These translate into variations of
the two parameters Vth and W /L, which in turn, through fault simulation, translate
into variations of the op-amp parameters Av and GBW. Once all the transistor level
faults have been simulated and translated into the Av , GBW specification space for
the op-amp, fault clustering can then take place. This produces a reduced set of fault
syndromes that will be entered into the fault dictionary. Reference 21 details certain
rules to be followed in the fault clustering process to provide a critical set of fault
syndromes so that complete diagnostic accuracy can be ensured.
The fault propagation process is then continued up through the hierarchy, based on
the critical fault syndromes, again by simulation at the next level of hierarchy. In this
example case the next level is in fact the top level, consisting of the filter circuit. Here
the fault syndromes from the Av, GBW specification space for the various op-amps
are translated into the filter specification space, which consists of the transfer function
magnitude at three specified frequencies. By only operating on this critical set of fault
syndromes, the number of entries in the fault dictionary and the number of higher
level fault simulations is minimized, thus leading to the most compact form of fault
dictionary that stores only those faults that contain diagnostic information. Once the
fault dictionary has been built, the next stage is to perform the diagnostic work based
on the measurements of the CUT.
The fault location diagnosis is based purely on the SBT data from the
fault dictionary. Measurements of the three transfer function magnitudes are made
and these represent a point in the three-dimensional specification space. If a fault
is present, this will be different from the point representing the fault-free condition
(with some associated tolerance box). This is termed in Reference 21 as the observed
fault syndrome. The faulty block at the next lowest hierarchical level is determined
via a nearest neighbour calculation between the observed fault syndrome and the
critical fault syndromes in the fault dictionary. In this example, this would identify
one (or possibly more if multiple fault simulations were performed) of the op-amps.
Measurements can then be made on the voltage gain and gain-bandwidth product of
the op-amp(s) and using critical fault syndromes at this level of hierarchy, the faulty
transistor(s) can be identified.
Once the faulty component has been identified, the diagnosis can then move to
the second stage, fault value identification. This requires additional SAT procedures
as outlined here. The basis for this routine in Reference 21 is the use of non-linear
regression models to approximate the functional relationship between the values of the
measurements made on the CUT and the parameters of the circuit under each particular
fault. In this way, the simulation burden can be eased as circuit-level simulation is
replaced by a simple function evaluation. The regression tool used is the Multi-variate
Adaptive Regression Splines (MARS) technique [22]. In addition, as this approach
explicitly solves for the parameters of the CUT, the technique does not suffer from the
fault masking effects that can arise due to component tolerances. In fact, this technique
requires a two-stage process: first, construction of the non-linear regression model via
circuit simulation; and second, the post-test stage, whereby an iterative algorithm solves
for the parameter values of the CUT.
The regression model is built on the assumption that a single fault has occurred:
the faulty parameter is varied between its two extreme values and the remaining
parameters of the CUT are allowed to vary within their tolerance limits. Simulations are based on
these variations (in Reference 21 the allied Cadence simulation and extraction tools
were used). The MARS technique is then used to build up the required regression
models, based on this simulation data. This provides a model of the variation of
measurements made on the CUT with the variation in parameters of the CUT.
The second stage is the fault value evaluation diagnosis procedure, which is an
iterative procedure using a modified Newton–Raphson algorithm. This comprises the
following stages. As input, the algorithm takes the measurements made on the CUT,
the knowledge of the faulty component and the MARS regression models. A coarse
search for a solution is made to provide an initial value. The Jacobian of the regression
matrix is then computed and a check is made of the convergence to see if the solution
to the regression model matches the measured values within a certain threshold value;
if not, the process is iterated. There are two issues in using this approach. First, the
set of measurements may not uniquely identify the parameters of the CUT. If there
are dependent equations in the system, the Jacobian will become ill-conditioned and
there will be an infinite set of solutions. In test terms, this means that there exist
ambiguity sets and these must be eliminated for the process to provide a solution.
The presence of ambiguity sets is identified using the procedure of Reference 23 and
they are eliminated by ensuring that the parameters in the ambiguity groups are held
constant. The second issue is the convergence of the NewtonRaphson method, which
can fail to converge if the initial point is a long way from the solution. Hence, the
initial coarse search, the authors also use a damping factor in the iteration process to
further improve the convergence.
In Reference 21 the method is illustrated by using a slew-rate filter as an example
circuit and the authors demonstrate 100 per cent diagnostic accuracy over a range of
levels of introduced faults.
The same authors have further refined this method to include the generation of
optimized test waveforms in the time domain [24]. These transient waveform stimuli
are generated automatically, based on the CUT, its constituent models, a list of the
faults to be diagnosed and, in particular, the list of observable nodes in the circuit
that can be probed at the measurement stage. The waveforms can consist of either
steps or ramps in voltage. There is also a time-to-observe parameter for the test,
which specifies the period of time after the application of the test waveform when the
signal at the test point should be observed. Both of these aspects are optimized by
minimizing a cost function based on the diagnostic success of the test signal. Ramps,
the method implemented in Reference 24, are generated by starting with a
random slope for the waveform section, applying this to the CUT and then calculating
the first-order sensitivities of the faults to the slope of the ramp. Depending on the
values of these sensitivities, the slope is adjusted and an iterative procedure is applied
to derive the optimum values of the slope and the time-to-observe parameter.

4.4 Conclusions

This chapter described the application of hierarchical techniques to the diagnosis of
large-scale analogue and mixed-signal circuits. The impetus behind the development
of these techniques has been the continuing growth in size and complexity of these
circuits. Often the computation effort required to implement traditional approaches
to circuit diagnosis grows exponentially with circuit complexity and the hierarchical
approach attempts to mitigate this effort by grouping blocks of circuitry that can be
dealt with as individual modules.
The techniques fall basically into two approaches, SBT and SAT, and sometimes a
combination of the two. The first approach is usually concerned with the construction
of a fault dictionary by simulating the effects of a set of typical faults and recording
the pattern of the observable outputs. The number of entries in the dictionary
can be reduced by the method of fault clustering, whereby similar output signals
from different simulated faults are recorded as a single entry, although this approach
can reduce the diagnostic coverage of the final dictionary. The second technique
concentrates on the simulation effort after the measurements have been performed in
order to trace the faulty component(s).
There are drawbacks to both of these approaches. In the SBT approach, the
number of faults simulated, and therefore contributing to the dictionary, must be
a finite subset of the infinite set of possible faults (assuming parametric faults are
considered). However, a good representative subset is usually achievable, given such
fault modelling approaches as IFA. The SAT approach generally requires access to
a high proportion (or indeed all) of the internal circuit nodes in order to perform
measurements and achieve a full diagnostic capability. These techniques were often
developed for circuits that were breadboarded or on single-layer printed circuit boards
where access to all the nodes (for voltage measurement, at least) was possible. With
the implementation of these circuits on multilayer boards, multi-chip modules and
monolithic ICs, this access to internal nodes becomes even more difficult and may
require inclusion of test overhead circuitry to enable observability of internal nodes.
While many of the techniques described in this chapter are relatively mature
and quite powerful, there is still a lot of work to be done in this field to
achieve diagnosis approaches that are readily integrated into the available modern IC
technologies.

4.5 References

1 Bandler, J.W., Salama, A.E.: 'Fault diagnosis of analog circuits', Proceedings of
the IEEE, August 1985;73:1279–325
2 Voorakaranam, R., Chakrabarti, S., Hou, J., Gomes, A., Cherubal, S., Chatterjee,
A., Kao, W.: 'Hierarchical specification-driven analog fault modelling for efficient
fault simulation and diagnosis', Proceedings of International Test Conference,
1–6 November 1997, pp. 903–12
3 Corsi, F.: 'Inductive fault analysis revisited (integrated circuits)', IEE Proceedings
on Circuits, Devices and Systems, April 1991;138 (2):253–63
4 Wu, C.C., Nakajima, K., Wey, C.L., Saeks, R.: 'Analog fault diagnosis with
failure bounds', IEEE Transactions on Circuits and Systems, 1982;29:277–84
5 Wey, C.L.: 'Design of testability for analog fault diagnosis', International Journal
of Circuit Theory and Application, 1987;15 (2):123–42
6 Eberhardt, F.: 'Symbolic tolerance and sensitivity analysis of large scale electronic
circuits', PhD thesis, University of Bath, 1999
7 DeCarlo, R.A., Saeks, R.: Interconnected Dynamical Systems (Marcel-Dekker,
New York, 1981)
8 Ho, C.K., Shepherd, P.R., Tenten, W., Kainer, R.: 'Improvements in analogue fault
diagnosis techniques', Proceedings of 2nd IEEE International Mixed-Signal Test
Workshop, Quebec City, 1996, pp. 81–97
9 Slamani, M., Kaminska, B.: 'Analog circuit fault diagnosis based on sensitivity
computation and functional testing', IEEE Design and Test of Computers, 1992;9
(1):30–9
10 Ho, C.K., Shepherd, P.R., Eberhardt, F., Tenten, W.: 'Hierarchical fault diagnosis
of analog integrated circuits', IEEE Transactions on Circuits and Systems I,
2001;48 (8):921–9
11 Ho, C.K., Shepherd, P.R., Tenten, W., Eberhardt, F.: 'Hierarchical approach to
analogue fault diagnosis', Proceedings of 3rd IEEE International Mixed-Signal
Test Workshop, Seattle, USA, 3–6 June 1997, pp. 25–30
12 Sheu, H.-T., Chang, Y.-H.: 'Hierarchical frequency domain robust component
failure detection scheme for large scale analogue circuits with component
tolerances', IEE Proceedings Circuits, Devices and Systems, February 1996;143
(1):53–60
13 Eberhardt, F., Tenten, W., Shepherd, P.R.: 'Symbolic parametric fault simulation
and diagnosis of large scale linear analogue circuits', Proceedings of 5th IEEE
International Mixed-Signal Test Workshop, Whistler, B.C., Canada, June 1999,
pp. 221–8
14 Hassoun, M.H., Lin, P.M.: 'A hierarchical network approach to symbolic analysis
of large scale networks', IEEE Transactions on Circuits and Systems, April
1995;42 (4):201–11
15 Ho, C., Ruehli, A.E., Brennan, P.A.: 'The modified nodal approach to
network analysis', IEEE Transactions on Circuits and Systems, June 1975;
22:504–9
16 Wei, T.W., Wong, M.W.T., Lee, Y.S.: 'Fault diagnosis of large scale analog circuits
based on symbolic method', Proceedings of 3rd IEEE International Mixed-Signal
Test Workshop, Seattle, USA, 3–6 June 1997, pp. 3–8
17 Echtenkamp, J., Hassoun, M.H.: 'Implementation issues for symbolic sensitivity
analysis', Proceedings of the 39th Midwest Symposium on Circuits and Systems,
1996, Ch. 319, pp. 429–32
18 Eberhardt, F., Tenten, W., Shepherd, P.R.: 'Improvements in hierarchical
symbolic sensitivity analysis', Electronics Letters, February 1999;35 (4):261–3
19 Somayajula, S.S.: 'A neural network approach to hierarchical analog fault
diagnosis', Proceedings of IEEE Systems Readiness Technology Conference,
20–23 September 1993, pp. 699–706
20 Pahwa, A., Rohrer, R.: 'Band faults: efficient approximation of fault bands for
the simulation before diagnosis of linear circuits', IEEE Transactions on Circuits
and Systems, February 1982;29 (2):81–8
21 Chakrabarti, S., Cherubal, S., Chatterjee, A.: 'Fault diagnosis for mixed-signal
electronic systems', Proceedings of IEEE Aerospace Conference, 6–13 March
1999;3:169–79
22 Friedman, J.H.: 'Multivariate adaptive regression splines', The Annals of
Statistics, 1991;19 (1):1–141
23 Liu, E., Kao, W., Felt, E., Sangiovanni-Vincentelli, A.: 'Analog testability
analysis and fault diagnosis using behavioural modelling', Proceedings of IEEE
Custom Integrated Circuits Conference, 1994, pp. 413–6
24 Chakrabarti, S., Chatterjee, A.: 'Diagnostic test pattern generation for analog
circuits using hierarchical models', Proceedings of 12th VLSI Design Conference,
7–10 January 1999, pp. 518–23

Chapter 5

DFT and BIST techniques for analogue and mixed-signal test

Mona Safi-Harb and Gordon Roberts

5.1 Introduction

The continuous decrease in the cost to manufacture a transistor, mainly due to the
exponential decrease in the CMOS technology minimum feature length, has enabled
higher levels of integration and the creation of extremely sophisticated and complex
designs and systems on chip (SoCs). This increase in packing density has been coupled
with a cost-of-test function that has remained fairly constant over the past two decades.
In fact, the Semiconductor Industry Association (SIA) predicts that by the year 2014,
testing a transistor with a projected minimum length of 35 nm might cost more than
its manufacture [1].
Many reasons have contributed to a fairly flat cost-of-test function over the past
years. Although transistor dimensions have been shrinking, the same cannot be
said about the number of input/output (I/O) operations needed. In fact, the increased
packing density and operational speeds have been inevitably linked to an increased
pin count. First, maintaining a constant pin count to bandwidth ratio can be achieved
through parallelism. Second, the increased power consumption implies an increased
number of dedicated supply and ground pins for reliability reasons. Third, the
increased complexity and the multiple functionalities implemented in today's SoCs
entail the need for an increased number of probing pins for debugging and testing
purposes. All the above-mentioned reasons, among others, have resulted in an increased
test cost.
Testing high-speed analogue and mixed-signal designs, in particular, is becoming
a more difficult task, and observing critical nodes in a system is becoming increasingly
challenging. As the technology keeps scaling, especially past the 90 nm technology,
metal layers and packing densities are increasing as a function of signal bandwidth
and rise times extending beyond the gigahertz range. Viewing tools such as wafer or
on-chip probing are no longer feasible since the large parasitic capacitance loading
of a contacting probe would dramatically disturb the normal operation of the circuit.
On the other hand, the automatic test equipment (ATE) interface has become a major
bottleneck to deliver signals with high fidelity, due to the significant distances the
signals have to travel at such operational speeds. In addition, the ATE cost is exploding
to keep up with the ability to test complex integrated SoCs. In fact, a $20 000 000
ATE system, capable of testing such complicated systems, was forecasted by the
SIA roadmap. Embedded test techniques, which benefit from electrical proximity, area
overhead scaling and bandwidth improvements, and hence enable at-speed testing,
therefore constitute the key to an economically viable test platform.
Test solutions can be placed on the chip and are then known as a structural test or
built-in self test (BIST), on the board level or as part of the requirements of the ATE.
Each solution will entail verification of signal fidelity and responsibility to different
people (the designer, the test engineer or the ATE manufacturer), different calibration
techniques and different test instruments, all of which directly impact the test cost,
and therefore the overall part cost to the consumer. It is the purpose of this chapter to
highlight some of the work and latest developments on embedded mixed-signal testing
(BIST), and the work that has been accomplished so far on this topic, specifically for
the purpose of design validation and characterization. Nonetheless, it is important to
point out that there is a lot of effort on placing more components on the board, as well
as trying to combat the exploding costs of big ATE systems through low-cost ones,
specifically to combat the volume or production testing of semiconductor devices,
but that discussion is beyond the scope of this chapter. Some of the recent trends in
the testing industry will also be briefly highlighted.

5.2 Background

The standard test methodologies for testing digital circuits are simple, consisting
largely of scan chains and automatic test pattern generators, and are usually used to test
for catastrophic and processing/manufacturing errors. In fact, digital testing, including
digital BIST, has become quite mature and is now cost effective [2, 3]. The same cannot
be said about analogue tests, which are performed for a totally different reason:
meeting the design specifications under process variations, mismatches and device
loading effects. While digital circuits are either good or bad, analogue circuits are
tested for their functionality within acceptable upper and lower performance limits, as
shown in Figure 5.1. They have a nominal behaviour and an uncertainty range. The
acceptable uncertainty range and the error or deviation from the nominal behaviour is
heavily dependent on the application. In today's high-resolution systems, it could well
be within 0.1 per cent or lower. This makes the requirements extremely demanding
on the precision of the test equipment and methods used to perform those tests.
Added to this problem is the increased test cost when testing is performed after the
integration of the component to be tested into a bigger system. As a rule of thumb, it
costs ten times more to locate and repair a problem at the next stage when compared
to the previous one [4]. Testing at early design stages is therefore economically
beneficial.

Figure 5.1 Functional behavioural description: (a) digital; (b) analogue

This paradigm where, early on in the design stages, trade-offs between
functionality, performance and feasibility/ease of test are considered has come to be
known as design for testability (DfT).
Ultimately, one would want to reduce, if not eliminate, the test challenges as
semiconductor devices exhibit better performance and higher levels of integration. The
most basic test set-up for analogue circuits consists of exciting the device under test
(DUT) with a known analogue signal such as a d.c., sine, ramp or arbitrary waveform,
and then extracting the output information for further analysis. Commonly,
the input stimulus is periodic to allow the test results to be mathematically averaged,
through long observation time intervals, to reduce the effect of noise [5]. Generally,
the stimulus is generated using a signal generator and the output instrument is a root
mean square (RMS) meter that measures the amount of RMS power over a narrow but
variable frequency band. A preferred test set-up is the digital signal processing
(DSP)-based measurement for both signal generation and capture. Most, if not all, modern
test instruments rely on powerful DSP techniques for ease of automation [6] and
increased accuracy and repeatability. Most mixed-signal circuits rely on the presence
of some components such as a digital-to-analogue converter (DAC) and an
analogue-to-digital converter (ADC). In some cases, it is those components themselves that
constitute the DUT. Testing converters can be achieved by gaining access to internal
nodes through some analogue switches (usually CMOS transmission gates). The
major drawback of such a method is the increased I/O pin count and the degradation
due to the non-idealities in the switches, especially at high speed, even though some
techniques have been proposed to correct for some of these degradation effects [7].
Nonetheless, researchers have looked to define a mixed-signal test bus standard compatible
with the existing IEEE 1149.1 boundary scan standard [8] to facilitate the testing of
mixed-signal components. One of the earliest BISTs to be devised was a go/no-go
test for an ADC [9]. The technique relies on the generation of an analogue ramp signal,
and a digital finite-state machine is used to compare the measured voltage to the
expected one. A decision is then made about whether or not the ADC passes the test.
While not a major drawback on the functionality of the devised BIST, the proposed
test technique in Reference 9 relies on an untested analogue ramp generator, which
constitutes a drawback on the overall popularity of the method. An alternative approach
would therefore be to devise signal generation schemes that can be controlled, tuned
and easily transferred to and from the chip in a digital format. Several techniques have
been proposed for on-chip signal generation and they are the subject of Section 5.3.

Figure 5.2 Block diagram of the MADBIST scheme [5]

Here, it suffices to mention that with the use of sigma-delta (ΣΔ)-based schemes,
it is possible to overcome the drawback of the analogue ramp, as was proposed by
Toner and Roberts [5] in another BIST scheme, referred to as mixed-analogue-digital
BIST (MADBIST). The method relies on the presence of a DAC and an ADC on a
single integrated circuit (IC), as is the case in a coder/decoder (CODEC), for example;
Figure 5.2 illustrates such a scheme.
In the MADBIST scheme, first the ADC is tested alone using a digital ΣΔ-based
oscillator excitation. Once the ADC passes the test, the DAC is then tested using either
the DSP engine or the signal generator. The analogue response of the DAC is then
looped back and digitized using the ADC. Once the ADC and then the DAC pass the
test, respectively, they can be used to characterize other circuit behaviours. In fact, this
technique was used to successfully test circuits with bandpass responses, as in wireless
communications. In Reference 10, MADBIST was extended to a superheterodyne
transceiver architecture by employing a bandpass ΣΔ oscillator for the stimulus
source, which was then mixed down using a local oscillator and digitized using the
ADC. Once tested, the DAC and transmit path are then characterized using the
loop-back configuration explained above.
To further extend the capabilities of on-chip testing, a complete on-chip mixed-signal
tester was then proposed in Reference 11, which is capable of a multitude
of on-chip testing functions, all the while relying on transferring the information
to/from the IC core in a purely digital format. The architecture is generic and
is shown in Figure 5.3. The functional diagram is identical to that of a generic
DSP-based test system. Its unified clock guarantees coherence between the generation and
measurement subsystems, which is important from a repeatability and reproducibility
point of view, especially in a production testing environment. This architecture in
particular is the simplest among all those presented above and is versatile enough to
perform many testing functions, as will be shown in Section 5.7.
Of particular interest to the architecture proposed in Reference 11, besides its
simplicity and its digital interfacing, is its potential to achieve a more economical
test platform in an SoC environment. SoC developers are moving towards integration
of third-party intellectual properties (IPs) and embedding the various IP cores in an

architecture to provide functionality and performance.

Figure 5.3 Block diagram of the complete tester on chip [11]

The SoC developers have also
the responsibility of testing each IP individually. While attractive to maintain the
integration trend, the resultant test time and cost have inevitably increased as well. Parallel testing can be used to combat this difficulty, avoiding sequential testing, where a significant amount of DUT, DUT-interface and ATE resources remain idle for a significant amount of time. However, incorporating more of the specialized analogue instruments (arbitrary waveform generators and digitizers) within the same test system is one of the cost drivers for mixed-signal ATEs, placing a bound on the upper limit of parallelism that can be achieved. In fact, modest parallelism is already in use today by industry to test devices on different wafers, using external probe cards [12]. However, reliable operation of a high-pin-count probe is difficult, placing an upper constraint on parallel testing, one that does not appear able to keep up with the increasing integration level and, therefore, with the increased IC pin count, I/O bandwidth and the complexity and variety of the integrated IPs that the semiconductor industry has been facing.
Concurrent testing, which relies on devising an optimum strategy for DUT and/or ATE resource utilization to maintain a high tester throughput, can help offset some of the test time cost that is due to idle test resources. The shared-resource architecture available in today's tester equipment cannot support an on-the-fly reconfiguration of the pins, periods, timing, levels, patterns and sequencing of the ATE. On the other hand, embedded or BIST techniques can improve the degree of concurrency significantly. Embedded techniques, such as the one proposed in Reference 11, benefit from an increased level of integration due to the mere fact of technology scaling, which allows multiple embedded test cores to be integrated in critical locations. Moreover, the manufacturing cost, bandwidth limitation and area overhead all scale favourably with the technology evolution. This allows parallelism and at-speed tests to be exploited to a level that could potentially track the trend in technology/manufacturing evolution.
Before presenting the architecture and its measurement capabilities in more detail, some of the most important building blocks that led to its implementation are described first.

Figure 5.4 Conventional analogue signal generation

Figure 5.5 Digitally driven analogue signal generation based on DDFS

5.3 Signal generation

Conventional analogue signal generation relies on tuned or relaxation oscillator circuits, as shown in Figure 5.4. The problem with this approach is that it is not suitable as an on-chip solution, DfT or BIST technique: first, such oscillators are sensitive to process variations, since their amplitude and frequency depend on absolute component values; second, they are inflexible, difficult to control and do not allow multi-tone signal generation; and finally, their quality is largely dependent on the quality factor, Q, of the reactive components, unless piezoelectric crystals are used.

5.3.1 Direct digital frequency synthesis

An early signal generation method that is more robust and flexible is known as the
direct digital frequency synthesis (DDFS) method [13] whereby a digital bitstream is
first numerically created and then converted to analogue form using a DAC followed
by a filtering operation. One such form is shown in Figure 5.5.
The read-only memory (ROM) can store words with up to D-bit accuracy and can have up to 2^W words recorded. The phase accumulator enables the user to scan the ROM (digitally) with different increments, changing therefore the resultant sine-wave frequency, f_out, according to

f_out = M f_s / 2^W    (5.1)

where W is the number of bits at the output of the phase accumulator, M is the number of complete sine-wave cycles and f_s is the sampling frequency. The amplitude precision, which is a function of D, the ROM word width, is then given by

A_DDFS = A_max / 2^(D+1)    (5.2)

Figure 5.6 Digital resonator

The above method requires the use of a DAC which needs to be tested and characterized if it is to be used in a BIST. The number of bits required from the DAC is
dictated by the resolution required for the analogue stimulus, which is often multi-bit.
This, in turn, entails a large silicon area, sophisticated design and increased test time,
all of which are not desirable.
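As a concrete illustration of Equations (5.1) and (5.2), the following minimal Python sketch models the DDFS loop of Figure 5.5; all parameter values here are illustrative assumptions, not taken from Reference 13.

```python
import numpy as np

# Minimal DDFS model (Figure 5.5): a W-bit phase accumulator scans a
# D-bit sine ROM; f_out = M * fs / 2**W as in Equation (5.1).
W, D = 12, 8            # accumulator width and ROM word width (illustrative)
fs = 1_000_000          # sampling frequency in Hz (illustrative)
M = 41                  # phase increment -> f_out ~ 10.0 kHz

rom = np.round((2**(D - 1) - 1) *
               np.sin(2 * np.pi * np.arange(2**W) / 2**W)).astype(int)

phase = 0
samples = []
for _ in range(4096):
    phase = (phase + M) % 2**W   # W-bit accumulator wraps around
    samples.append(rom[phase])   # ROM lookup; a D-bit DAC and filter follow

print("f_out =", M * fs / 2**W, "Hz")   # 41 * 1e6 / 4096 ~ 10.01 kHz
```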

5.3.2 Oscillator-based approaches

An alternative approach to generating digital sine waves is through the use of a digital resonator circuit [14] that simulates the inductor-capacitor (LC) resonator and
is shown in Figure 5.6. The two integrators in a loop with the multiplier cause the
system to oscillate. The frequency, amplitude and phase of the sine wave can be
arbitrary. The tuning is achieved through setting the initial condition of the registers
and varying the coefficient k. The digital output is then converted to an analogue form
using a DAC, and typically, a  DAC where a 1-bit digital output can be encoded
into an infinite-precision signal using pulse-density-modulated digital patterns.
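For intuition, a minimal software model of such a two-integrator-loop resonator is sketched below; the coupled recurrence used here is one common realization consistent with Figure 5.6 (the actual published circuit may differ in detail), and the coefficient value is an illustrative assumption.

```python
import numpy as np

# Digital resonator sketch: two registers in a loop with coefficient k.
# This lossless coupled recurrence oscillates with cos(w) = 1 - k**2/2,
# so frequency is set by k, and amplitude/phase by the initial conditions.
k = 0.1                  # coupling coefficient (illustrative)
x, y = 1.0, 0.0          # register initial conditions

out = []
for _ in range(1000):
    x = x - k * y        # first integrator (register + adder)
    y = y + k * x        # second integrator, fed by the updated x
    out.append(y)        # multi-bit output; a sigma-delta DAC would follow

w = np.arccos(1 - k**2 / 2)          # oscillation frequency, rad/sample
print(f"tone at {w / (2 * np.pi):.6f} cycles/sample")
```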
The major drawback to the digital resonator method is the need for a multi-bit
multiplier which consumes a lot of power and silicon area and can limit the frequency
range of operation. An implementation that gets around this problem is to replace
the multi-bit multiplier with a 1-bit multiplier/multiplexer [15]. This architecture
is shown in Figure 5.7. Note that the DAC can be implemented as a high-order ΣΔ modulator, giving a much higher signal resolution while maintaining a 1-bit multiplexer. As mentioned earlier, this advantage is the result of the inherent property of ΣΔ modulation, where a 1-bit digital signal is a pulse-density-modulated version of an analogue signal with near-infinite precision.
The previously proposed architecture was further used in a BIST application in Reference 16. Modifications to the basic architecture were then added, transforming the oscillator into a multi-tone generator [17]. Arbitrary-precision signals were then demonstrated in Reference 18. The extension to high-frequency signals was then performed in Reference 19 with the use of bandpass oscillators.

Figure 5.7 Improved digital resonator with the multiplier replaced with a 1-bit multiplexer

Figure 5.8 Looping back of a selected set of a ΣΔ modulator output bitstream
While good candidates for on-chip signal generation, these oscillator-based approaches suffer from drawbacks such as the need for a cascade of adders, which slows down the speed of operation. In some cases, an increased level of design difficulty might arise and limit the range of application. The solution lies in memory-based generation.

5.3.3 Memory-based signal generation

The idea of generating a digital bitstream was introduced to BIST by reproducing it and periodically repeating it, as shown in Figure 5.8. As little as 100 bits could
be enough for good accuracy, which significantly reduces the hardware required
[20]. The idea is to record a portion of the bitstream and reproduce it periodically
by looping it back. The creation of the original bitstream is usually done according
to a preselected noise transfer function (given a required resolution), which is then
mapped into a software-implemented modulator. The parameters representing the
input frequency, the number of complete cycles of the input and the total number of
samples, also referred to as f_in, M and N, are then chosen according to the coherency requirement [21].

Figure 5.9 Conceptual illustration of a multi-bit signal generation

N is chosen given a certain maximum memory length. The bitstream
is then generated according to a set of criteria such as the signal-to-noise ratio, dynamic
range, amplitude precision and so on.
The practicality of choosing the appropriate bitstream using the minimum hardware needed, while maintaining a required resolution in terms of amplitude, phase and spurious-free dynamic range, was analysed in detail in Reference 22. Small changes in the bitstream can lead to changes as large as 10–40 dB in the quality or resolution of the signal. As a result, an optimization can be run to achieve the best resolution for a given number of bits and a given hardware availability.
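A minimal sketch of how such a bitstream could be produced in software is given below; a first-order ΣΔ modulator is assumed here for brevity, whereas published implementations typically use higher-order noise shaping, and all values are illustrative.

```python
import numpy as np

# Encode one coherent record of a sine into a 1-bit bitstream that can be
# stored in on-chip memory and looped back periodically.
# Coherency: f_in = (M / N) * fs, with M complete cycles in N samples.
N, M = 2048, 17                        # memory length and cycle count
x = 0.5 * np.sin(2 * np.pi * M * np.arange(N) / N)

bits = np.empty(N, dtype=int)
integ, y = 0.0, 1.0
for n in range(N):                     # first-order sigma-delta modulator
    integ += x[n] - y                  # integrate the quantization error
    y = 1.0 if integ >= 0 else -1.0    # 1-bit quantizer
    bits[n] = 1 if y > 0 else 0

stream = np.tile(2.0 * bits - 1.0, 16)      # periodic loop-back of the ROM
spectrum = np.abs(np.fft.rfft(stream))      # signal energy sits at bin 16*M
```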

5.3.4 Multi-tones

Multi-tone signal generation is particularly important for characterizing blocks such as filters. Multi-tone signals can reduce test time by stimulating the DUT (also referred to as the circuit under test or CUT) only once with a multitude of signals and then relying on DSP techniques such as the fast Fourier transform algorithm to extract the magnitude and
phase responses at each individual frequency. More details on analogue filter testing
can be found in Chapter 6. Another important application of multi-tone signals is
in the testing of inter-modulation distortion. This is particularly important in radio
frequency (RF) testing where measures such as the third-order input inter-modulation
product (IIP3), 1-dB compression point and so on, require the use of a minimum of two
tones. The repeatability and accuracy of the results are usually at their best if coherency, also known as the M/N sampling principle, is maintained, as it is under this condition that maximum frequency resolution per bin is obtained.
Multi-tone signal generation is conceptually illustrated in Figure 5.9, where a
multi-bit adder and a multi-bit DAC are needed for analogue signal reconstruction,
increasing therefore the hardware complexity. However, the ΣΔ bitstream signal
generation method presented in the previous subsection is readily extendible to the
multi-tone case by simply storing a new sequence of bits in the ROM, with the new
bits now corresponding to a software-generated multi-tone rather than a single-tone



signal. No additional hardware (such as multi-bit adders and DACs for analogue signal reconstruction) is needed, which is yet another testimony to the advantages of this signal generation method for BIST.

5.3.5 Area overhead

An important criterion in any BIST solution is the area overhead it entails. While it is
argued that the area occupied by the test structure benefits from technology scaling,
especially in the case of digital implementations, it is always desirable to minimize the silicon area, and therefore the cost, occupied by the test circuit. The memory-based
signal generation scheme presented in Section 5.3.3, which was seen to improve the
test stimulus generation capabilities from a repeatability point of view when compared
to its analogue-based stimulus counterpart, can be improved even further. Commonly,
the DUT has a front-end low-pass filter; an example would be an ADC with a preceding
anti-aliasing filter. In this case, the analogue filter that follows the memory-based bitstream generator can be removed altogether, relying instead on the built-in filtering operation of the CUT [16]. This concept is illustrated graphically in Figure 5.10. Later, it will be
seen how this same area-savings concept can be applied to the testing of phase-locked
loops (PLLs).

Figure 5.10 Area overhead and partitioning of the bitstream signal generation method with (a) analogue stimulus, (b) analogue stimulus using DSP-based techniques and explicit filtering operation and (c) digital test stimulus, relying on the DUT built-in (implicit) filtering operation

5.4 Signal capture

Testing in general comprises first sending a known stimulus and then capturing
the resultant waveform of the CUT for further analysis. As discussed previously,
the interface to/from the CUT is preferably in digital form to ease the transfer of
information. The previous sections discussed the reliable generation of on-chip test
stimulus. Signal generation constitutes just one aspect of the testing of analogue and
mixed-signal circuits. This section discusses the other aspect of testing: the analogue
signal capture.
The signal capture of on-chip analogue waveforms underwent an evolution. First,
a simple analogue bus was used to transport this information directly off chip through
analogue pads [23]. Later, an analogue buffer was included on chip to efficiently
drive the pads and interconnect paths external to the chip. This evolution is illustrated
graphically in Figure 5.11. In both cases, the information is exported off chip in analogue form and is then digitized using external equipment. Perhaps a better way to export analogue information is by digitizing it first. This led to the modification shown in Figure 5.12, whereby the analogue buffer is replaced with a 1-bit digitizer or a simple comparator. Here, too, the digitization is achieved externally, shown in one possible implementation using a successive approximation register (SAR), with external reference voltages feeding the comparator, commonly generated using an external DAC.

Figure 5.11 Evolution of the analogue signal capture capabilities

Figure 5.12 Signal capture with focus on the comparator and the digitization process

Another important evolution of the front-end sampling process is the use of undersampling. This becomes essential when the analogue waveform to be captured is very fast. In general, capturing an analogue signal comprises sampling and holding the analogue information first, and then converting this analogue signal into a digital representation using an ADC. There exist many classes of ADCs, each suitable for a given application. Whatever the class of choice might be, the front-end sampling in an ADC has to obey the Nyquist criterion; that is, information having a bandwidth BW has to be sampled at a high enough sampling rate, f_s ≥ 2BW. As the input information occupies a higher bandwidth, the sampling rate has to increase correspondingly, making the design of the ADC harder, as well as more area and power consuming. Instead, testing applications make use of an important property of the signal to be captured: its periodicity. Any signal that needs to be captured can be made periodic by repeating the triggering of the event that causes such an output signal to exist, using an externally generated, accurate and arbitrarily slow clock. Each time the external clock is triggered, it is also slightly delayed. This periodicity feature in the signal to be captured and the incremental delay in the external trigger give rise to an interesting capture method known as undersampling, illustrated in Figure 5.13. For that, a slowly running clock (slower than the minimum required by the Nyquist criterion) is used to capture the waveform, with the clock period slightly offset with respect to the input signal period. That is, if the clock period is T + ΔT (with ΔT ≪ T) and the input period is T, then the signal can be captured using a multi-pass approach with an effective time resolution of ΔT. This method has been demonstrated to be an efficient way of capturing high-frequency and broadband signals, where the input information bandwidth can be brought down in frequency, making the transport of this information off chip easier and less challenging, as was first demonstrated in the implementation of the integrated on-chip sampler in Reference 24.

Figure 5.13 Illustration of the undersampling algorithm

Figure 5.14 Illustration of the undersampling and multi-pass methods
In order to include the digitization on chip as well, a multi-pass approach was first introduced in Reference 25, whereby the undersampling approach is still maintained in the front-end sample-and-hold stage; this was then further demonstrated and improved in Reference 11 with the inclusion of the comparator and reference-level generator on chip. The top-level diagram of the circuit that performs such a function, together with the corresponding timing and voltage diagram, is shown in Figure 5.14; the circuit operates as described next.
The programmable reference is first used to generate one d.c. level. The sampled-and-held voltage of the CUT is then compared to this reference level and quantized using a 1-bit ADC (or simply a comparator). On the next run through, the d.c. reference voltage is maintained constant and the clock edge for the sampling operation is moved by ΔT. The new sampled-and-held information of the CUT is then compared to the same reference voltage. This sequence is then repeated until a complete cycle of the input to be captured is covered. Once this is done, the programmable reference voltage is incremented to the next step, one least-significant bit (LSB) away from the previous reference level, and the whole cycle of incrementing ΔT is repeated. The above is then repeated until all d.c. reference voltages are covered. This is referred to as the multi-pass approach. It implies that a resolution of ΔT can be achieved in the time domain and a resolution of one LSB in the voltage domain. Undersampling, together with the multi-pass approach and an embedded, reliable d.c. signal generation scheme, can now be used as a complete on-chip oscilloscope tool.
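To make the multi-pass idea concrete, the following minimal Python sketch emulates the capture of a periodic waveform with a single comparator, a stepped d.c. reference and an incrementally delayed sampling edge; all waveform and resolution parameters are illustrative assumptions.

```python
import numpy as np

# Multi-pass, 1-bit capture sketch: for each d.c. reference level, one
# comparator decision is taken per delayed sampling edge; accumulating the
# decisions over all passes rebuilds the waveform to one LSB / one dT.
T = 1e-9                          # signal period (illustrative)
dT = T / 64                       # sampling-edge increment per pass
levels = np.linspace(-1, 1, 33)   # programmable d.c. reference steps

v = lambda t: np.sin(2 * np.pi * t / T)    # stand-in for the CUT output

t = (np.arange(64) * dT) % T      # the 64 equivalent-time sample instants
decisions = np.array([[1 if v(ti) > ref else 0 for ti in t]
                      for ref in levels])

# Counting how many references each sample exceeds gives its voltage:
idx = np.clip(decisions.sum(axis=0) - 1, 0, len(levels) - 1)
reconstructed = levels[idx]       # waveform image, quantized to one LSB
```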

5.5 Timing measurements and jitter analysers

An important feature in BIST techniques is the ability to measure fine time intervals
for a multitude of purposes. Most high-speed mixed-signal capture systems today rely
on undersampling. Undersampling allows the capture of high-frequency narrowband
signals by shifting the signal components into a much lower frequency range (or
bandwidth) which is then easily digitized with low-speed components. This was
detailed in Section 5.4. The achieved results are largely a function of the resolution and accuracy attained for the generated time intervals. For that reason, circuits capable of generating fine delays, and the corresponding circuits that allow for on-chip characterization of such components, are summarized in this section.

5.5.1 Single counter

The simplest form of time measurement between two edges is through the use of a single counter triggered by a fast-running clock, as shown in Figure 5.15. An N-bit register at the output acts as an N-bit counter. The number (or count) of clock edges that elapse between two data events (in Figure 5.15, the events are the rising edges of the start and stop signals) is computed using the N-bit counter. The output count corresponds to a thermometer-code digital representation of the time interval, T.
The resolution attained by this method is largely dependent on the clock frequency relative to the time difference (or data) to be measured. The higher the clock frequency, the better the counter accuracy and the overall count resolution. As technology shrinks feature sizes, the time differences to be measured are decreasing; intervals on the order of, or even finer than, the period of the fastest clock that can be generated sometimes need to be measured. On the other hand, generating clocks much faster than the data to be measured is becoming a much more difficult task and, in some cases, is not feasible. As a result, better approaches are needed and are highlighted next.

Figure 5.15 A simple counter method for the measurement of a time interval

Figure 5.16 Interpolation-based time-to-digital converter [26]

5.5.2 Analogue-based interpolation techniques: time-to-voltage converter

One of the most basic building blocks in time measurement is the time-to-voltage
converter, also known as the interpolation-based time-to-digital converter (TDC).
The idea behind such a circuit is to convert the time difference between the edges
to be measured into complementary pulses using appropriate digital logic. The pulse
width is then integrated on a capacitor, C, as shown in Figure 5.16 [26]. The ramp
final d.c. value (more accurately, the step size on the capacitor) is directly related to
the pulse width. Using a high-resolution ADC, this d.c. value can then be digitized
performing therefore a digitization of the time difference or data to be measured.
The disadvantage of the above method is that it relies on the absolute value of the capacitor, C. It also requires the design of a good ADC. While the ADC is required to digitize d.c. levels only, making its design task slightly easier than that of high-frequency ADCs, this ADC can nonetheless be power hungry, and its design can be a tedious and time-consuming task.

Figure 5.17 Interpolation-based TDC insensitive to the absolute value of the capacitor, also known as the time or pulse stretcher [27]
A better approach, insensitive to the absolute value of the capacitor, relies on the concept of charging and then discharging the same capacitor with currents that are scaled versions of each other. The system [27] is shown in Figure 5.17 and accomplishes two advantages: (i) it does not rely on the actual capacitor value, since the capacitor is now only used as a means of storing charge and then discharging it at a slower rate; and (ii) it performs pulse stretching, which makes the original pulse to be measured much longer, making the task of quantizing it a lot easier. In this case, a single-threshold comparator (1-bit ADC) can be used to detect the threshold crossing times. A relatively low-resolution time measurement unit (TMU) can then be used to digitize the time difference. The TMU can be a simple counter, as explained above in Section 5.5.1, or one of the other potential TMUs that will be discussed next.
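A small numeric sketch of the stretch-by-n arithmetic follows; the current, capacitor and pulse-width values are illustrative.

```python
# Pulse-stretcher arithmetic: a pulse of width t_p charges C with current I;
# discharging with I/n takes n times longer, so a coarse TMU measuring the
# stretched interval gains a factor of n in effective resolution.
I = 100e-6      # charging current, A (illustrative)
n = 50          # current scaling factor (illustrative)
C = 1e-12       # integration capacitor, F (cancels out below)
t_p = 80e-12    # original pulse width to be measured, s

dV = I * t_p / C                  # voltage step stored on the capacitor
t_stretched = dV * C / (I / n)    # time to discharge back to threshold
assert abs(t_stretched - n * t_p) < 1e-15   # C and I cancel: t = n * t_p

print(f"{t_p * 1e12:.0f} ps pulse stretched to {t_stretched * 1e9:.1f} ns")
```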
The techniques in References 26 and 27 can become power hungry if a narrow pulse is to be measured. Trade-offs exist in the choice of the biasing current, I, and the bit resolution of the ADC (for a given integration capacitor, C); the larger I, the lower the ADC resolution required. However, as the pulse width decreases, and in order to maintain the same resolution requirement on the ADC while using the same capacitor, C, the biasing current, and therefore the power dissipation, has to increase. In fact, for very small pulse widths, the differential pair might even fail to respond fast enough to the changes in the pulse. For that reason, digital phase-interpolation techniques offer an alternative to the analogue-based interpolation schemes.

5.5.3 Digital phase-interpolation techniques: delay line

Through the use of a chain of delays that will delay the clock and/or data as it is
propagating down the chain, generation [28] and measurement of fine delays can be
achieved. With the use of edge-triggered D flip-flops (DFFs), delayed clock or data edges can be obtained. These, unlike the analogue techniques that rely on an ADC, are known as sampling-phase time measurement units, and fall more into the category of digital time measurement techniques.

Figure 5.18 A delay line used to generate multi-phase clock edges; it can also be used to measure clock jitter with a time resolution set by the minimum gate delay offered by the technology

The operation of such a TDC is analogous to a flash ADC, where the analogue
quantity to be converted into a digital word is a time interval. They operate by comparing a signal edge to various reference edges all displaced in time. Typically, these
devices measure the time difference between two edges, often denoted as the START
and STOP edges. The START signal usually initiates the measurements while the
STOP edge terminates it. Given that the delay through each stage is known a priori
(which will require a calibration step), the final state of the delay line can be read through a set of DFFs and is directly related to the time interval to be measured.
Usually the use of such delay lines has a limited time dynamic range. Some TDCs
employ time range extension techniques which rely on counters for a coarse time
measurement and the delay lines for fine intervals digitization. This is identical to the
coarse/fine quantizers in ADCs. Other techniques include pulse stretching [29], pulse
shrinking and time interpolation.
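A behavioural model of such a flash-style delay-line TDC is sketched below; a uniform, already-calibrated stage delay is assumed, and the values are illustrative.

```python
# Delay-line (flash) TDC: the START edge propagates down M delay stages;
# the STOP edge latches every tap into a DFF.  The number of taps the START
# edge has passed is a thermometer code of the measured interval.
tau = 25e-12                 # per-stage delay, s (illustrative, calibrated)
M = 32                       # number of stages (illustrative)

def delay_line_tdc(t_interval):
    taps = [1 if t_interval >= (i + 1) * tau else 0 for i in range(M)]
    return sum(taps)         # thermometer-to-binary conversion

code = delay_line_tdc(180e-12)
print(f"code = {code} -> {code * tau * 1e12:.0f} ps "
      f"(resolution {tau * 1e12:.0f} ps)")
```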
The use of the above devices extends to applications such as laser ranging and
high-energy physics experiments. With the addition of a counter at the output, this
simple circuit can be used to measure the accumulated jitter of a data signal (Data)
with respect to a master clock (CLK), as shown in Figure 5.18.
The above circuit can be used for time resolutions down to the gate delay offered by the technology in which it is implemented. To overcome this limitation, a Vernier delay line (VDL) can be used.

5.5.4 Vernier delay line

In a VDL, both the data to be digitized or analysed and the clock signal are delayed, with two slightly offset delays, as shown in Figure 5.19. Using this arrangement, time resolutions as small as τ_res = (τ2 − τ1) can be achieved, provided that τ2 > τ1 (the two delays are sometimes also referred to as τ_s and τ_f, for slow and fast, respectively). In this
case, and having a total of N delay stages, the time range that can be captured is given by τ_range = N(τ2 − τ1).

Figure 5.19 A VDL achieving sub-gate time resolution
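A behavioural sketch of the Vernier principle follows; the stage delays below are illustrative assumptions (they happen to give the 18 ps resolution class reported later in this subsection).

```python
# Vernier delay line: the data edge travels through slow stages (tau_s)
# and the clock edge through fast stages (tau_f).  The clock edge gains
# (tau_s - tau_f) per stage, so the stage at which it catches the data
# edge encodes the input interval with sub-gate resolution tau_s - tau_f.
tau_s, tau_f = 100e-12, 82e-12     # stage delays -> 18 ps LSB (illustrative)
N = 64                             # number of stages

def vdl_code(dt):
    """First stage index at which the clock edge overtakes the data edge."""
    for i in range(1, N + 1):
        if dt - i * (tau_s - tau_f) <= 0:
            return i
    return N                       # out of range: dt > N * (tau_s - tau_f)

for dt in (5e-12, 40e-12, 200e-12):
    print(f"dt = {dt * 1e12:5.0f} ps -> code {vdl_code(dt)}")
```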
Usually these delays can be implemented using identical gates that are purposely
slightly mismatched. A few picoseconds timing resolution can be achieved in this
method, equivalent to a deep sub-gate delay sampling resolution. VDL samplers
have previously been used to perform time interval measurements [30] and data
recovery [31].
When Vernier samplers are used, data is latched at different moments in time,
leading to synchronization issues that must be considered when interfacing the block
with other units. Read-out structures exist though, allowing for continuous operation
and synchronization of the outcoming data [32]. For the purpose of jitter measurement,
this synchronization block is not needed.
The circuit was indeed used for jitter measurement and implemented [33] in a standard 0.35-µm CMOS technology, achieving a jitter measurement resolution of τ_res = 18 ps. The RMS jitter was measured to be 27 ps and the peak-to-peak jitter was 324 ps. For jitter measurements, the same circuit can be configured with the addition of the appropriate counters, as shown in Figure 5.19.
Note that, in general, these delay stages are voltage controlled to allow for tuning ranges and, more often, are placed in a negative feedback arrangement known as a delay-locked loop (DLL), where the delay stages are a lot more robust to noise and jitter due to the feedback nature of the implementation. It is worth mentioning that, almost exclusively, DLLs now rely on the linear voltage-controlled delay (VCD) cell introduced by Maneatis [34]. The linear aspect of the cell stems from the use of a diode-connected load in parallel with the traditional load. This gives the load of the cell a more linear characteristic, extending the linearity range of the delay cell. The biasing of these cells is also made more robust to supply noise and variations by the use of a uniform biasing circuit to generate both the N- and P-side biasing. The same biasing is also used for all blocks, so that variations affecting one will affect the others in a uniform manner.

Figure 5.20 A component-invariant, single-stage VDL jitter measurement device [35]

5.5.5 Component-invariant VDL for jitter measurement

The disadvantages of the previously proposed VDL, namely (i) the increased number
of stages for large dynamic time ranges; (ii) the matching requirements between the
many stages; and (iii) the area and power dissipation overheads, can be overcome
with the use of a single-stage component-invariant VDL [35]. The proposed system
is shown in Figure 5.20.
The single stage consists of two triggered delay elements, one triggered by the data and the other by the reference clock. The counter acts as a phase detector. This method was indeed implemented in a 0.18-µm CMOS technology using only standard cells, which facilitates the design task even further. The area occupied by a single stage of the VDL is 0.12 mm², which is at least an order of magnitude less in area overhead when compared to other methods. The measured resolution of this circuit was 19 ps.
The test time was approximately 150 ns/sample, for a clock running at 6.66 MHz.
Note that the inverters in the feedback loop were implemented as VCD cells to allow
for calibration and tuning.

5.5.6 Analogue-based jitter measurement device

The delay line structures that were presented in the previous sections can be referred
to as digital-type jitter measurement devices. Recently, an analogue-type macro,
shown in Figure 5.21, has been introduced and acts as an on-chip jitter spectrum
analyser [36]. The basic idea is to convert the time difference between the edges of a reference clock and the jittery clock (or signal) to be measured into analogue form using a phase-frequency detector (PFD) followed by a charge pump. The voltage stored on the capacitor, which represents the time information of interest, is then digitized using an ADC. The speed of the proposed macro in Reference 36 is limited by that of the ADC, as well as by the ability of the PFD to resolve small time differences. A calibration step is usually required in such a design to remove any effects of process–voltage–temperature variations. A jitter range of 200 ps in a 0.18-µm CMOS technology was demonstrated, with a sensitivity of 3.2 mV/ps.

Figure 5.21 An analogue-based macro for jitter measurement [36]
A similar idea was presented in Reference 37, but at the board level, in order to provide production-type testing of the timing jitter accumulated over many periods, and was applied to the board-level testing of data transceivers. No external jitter-free clock is needed as a reference, which makes the implementation more attractive. The clock to be measured is delayed, and both the jittery signal and its delayed version are then used to control a PFD-charge-pump-ADC combination, with the jitter then being digitized using the ADC. The comparators in the ADC were implemented using a chain of inverters that were sized according to different switching thresholds, acting therefore as some sort of multi-bit digitization. The system was also used as a BIST technique to measure jitter [38] and was experimentally verified in Reference 39. The measured jitter, accumulated over eight periods, on a 1 GHz clock was successfully tested and evaluated at 30–50 ps. The performance was then pushed slightly further in a more recent design, with detailed experimental results presented in Reference 40.
Later, an embedded eye-opening monitor (EOM) was successfully implemented in
Reference 41. The purpose of such a monitor is to continuously compare the horizontal
and vertical openings of the eye diagram of an incoming digital high-speed link, as
illustrated conceptually in Figure 5.22. The horizontal measure gives information
about the amount of time jitter present in the system, while the vertical one is related
to the amplitude jitter of the system. Given some prespecified acceptable voltage and
phase (time) limits set by the application under consideration and that could be fed
to the embedded EOM solution, a pass or fail is then generated by the system. The
accumulated count of fail, also related to the bit error rate (BER) of the system, can
be fed to an equalizer that is usually used to adaptively compensate, in a feedback
mechanism, for the digital signal degradation. The circuit was experimentally verified to successfully test for eye-opening degradation of digital signals running between 1 Gb/s and 12.5 Gb/s, from a single 1.2 V supply.

Figure 5.22 An example of the received eye diagram of a high-speed link, with acceptable openings shown in both the voltage and time scales, defining the mask. Violations of those limits are detected by the EOM suggested in Reference 41

Figure 5.23 A MUTEX time amplifier [42]
This rush of recent papers containing embedded solutions for signal integrity measures, as highlighted in this section by the discussion of jitter and eye-diagram measurement techniques, is testimony to the pressing need for ideas and techniques to make otherwise untestable state-of-the-art electronics testable. While the current section has highlighted some of the most common techniques used for jitter measurements, the next section will highlight some new ideas that have the potential of being applied to jitter measurements, among many other uses.

5.5.7 Time amplification

Analogous to voltage amplification in ADCs, where a front-end programmable gain amplifier (PGA) can be used to extend the voltage dynamic range of the measurements,
time amplification has recently emerged as a way to amplify the time difference
between two events. The principle of time amplification involves comparing the phase
of two inputs and then producing two outputs that differ in time by a multiple of the
input phase difference. Two techniques have been proposed for the purpose of time
amplification. The first [42] is based on the mutual-exclusion (MUTEX) circuit shown
in Figure 5.23.
The cross-coupled NAND gates form a bistable circuit, while the output transistors switch only when the difference in voltage between nodes V1 and V2, say ΔV, reaches a certain value. The OR gate at the output is used to detect this switching action. Time amplification occurs when the input time difference is small enough to cause the bistable circuit to exhibit metastability. The voltage difference ΔV is then given by

ΔV = α Δt e^(t/τ)    (5.3)

where α is the conversion factor from the input time difference to the initial voltage at nodes V1 and V2, Δt is the time difference between the rising edges of the signal and reference and
τ is the device time constant. By measuring the time t between the moment the inputs switch and the moment the OR gate switches, Δt can be found.

Figure 5.24 A single-stage time amplifier [43]
The previously proposed circuit is compact area-wise, but its use is limited to only a few picoseconds of input time range. Its gain is also at the single-digit level, although cascading might get around the latter problem.
A second method proposed for time amplification [43] is shown in Figure 5.24.
The circuit consists of two cross-coupled differential pairs with passive RC loads
attached. Upon arrival of the rising edges of φ1 and φ2, the amplifier bias current
is steered around the differential pairs and into the passive loads. This causes the
voltage at the drains of transistors M1 and M2 to be equal at a certain time and that
of M3 and M4 to coincide a short time later. This effectively produces a time interval
proportional to the input time difference which can then be detected by a voltage
comparator.
The second time amplification method, proposed in Reference 43, while more area and power consuming, works for very large input ranges, extending therefore the input time dynamic range. Its gain can also be at least an order of magnitude higher, using only a single stage. The circuit was built in a 0.18-µm CMOS technology and was experimentally shown to achieve a gain of 200 s/s for an input range of 5–300 ps, giving therefore an output time difference of 160 ns.
Time amplification, when thought of as analogous to the use of PGAs in ADCs, is the perfect block to precede a TDC: with a front-end time amplification stage, a low-resolution TDC can be used to obtain an overall high-resolution TMU.

5.5.8 PLL and DLL: injection methods for PLL tests

PLLs and DLLs are essential blocks in communication devices. They are used for
clock de-skewing in clock distribution networks, clock synchronization on chip, clock
multiplications, and so on. These blocks are mainly characterized for their ability to

lock on to or track the reference clock quickly (hence the tracking or locking time characteristic), as well as for their phase or jitter noise. Testing for these is of paramount importance in today's SoCs.

Figure 5.25 PLL system view
An embedded technique for the measurement of the jitter transfer function of a
PLL was suggested in Reference 44. The technique relies on one of three methods
where the PLL is excited by a controlled amount of phase jitter from which the loop
dynamics can be measured. These techniques, shown in Figure 5.25, include phase modulating the input reference, φ_i; sinusoidal injection (using a ΣΔ bitstream or a PDM representation of the input signal) at the input of the low-pass filter; or varying the divide-by-N counter between N and N + 1.
All three techniques have been verified experimentally and tested on commercial PLLs, allowing one to easily implement these testing techniques for on-chip
characterization of jitter in PLLs.
Given the PLL testing technique presented in Reference 44, it is beneficial to draw some analogies with the voltage stimulus schemes presented earlier in Section 5.3.5. A pulse-density-modulated signal is injected into the PLL. Due to the inherent low-pass filter present in PLLs, the stimulation of such systems, similar to the testing of ADCs, can be achieved in a purely digital manner without the need for an additional low-pass filter. Silicon area savings and reduced circuit complexity can thus be achieved, an added bonus of the proposed PLL BIST. Stimulating the PLL is, here too, done using only a digital interface [45]. Another analogy can be drawn with respect to voltage-domain testing: while in analogue stimulus generation it is the amplitude that is modulated, in the case of a PLL it is the phases or clock edges that are modulated, as shown in Figure 5.26.

5.6 Calibration techniques for TMU and TDC

Calibration is an important procedure that measurement instruments, whether built in or external to the IC to be tested, should undergo before use. Calibration is usually
carried out by exciting the instrument to be calibrated with a series of known input
signals and then correlating the output to the corresponding input each time. In the
special case of time measurements circuits, the inputs normally consist of a series

DFT and BIST techniques for analogue and mixed-signal test 165

bitstream
generator
(Amplitude)

Filter

Circuit
(ADC)

Amplitude
modulation

Circuit
(PLL)

Phase
modulation

DUT

bitstream
generator
(Phase
placement)

Filter
DUT

Figure 5.26

Analogy between stimulating an ADC and a PLL with a  bitstream,


for testing purposes

of edges with known time intervals. However, as the desired calibration resolution
becomes smaller than a few picoseconds, such a task becomes more difficult; on chip,
mismatches and jitter put a lower bound on what timing generators can reliably achieve,
while off chip, edge and pulse generators can produce such intervals accurately at
additional costs. Calibration methods and their associated trade-offs are therefore
important and will be the subject of Chapter 11. Here, we restrict the discussion to
the calibration of time measurement instruments and in particular, to the flip-flop
calibration of what is known as the sampling-offset TDC or SOTDC for short [46]. A
sampling-offset TDC is a type of flash converter that relies solely on flip-flop transistor
mismatch, instead of separate delay buffers, to obtain fine temporal resolution. While
a rather specific type of TDC, it is probably one of the more challenging types to calibrate, due to the very fine temporal resolutions that this TDC can achieve, which makes measuring and calibrating such small time differences difficult. In fact, it was shown in Reference 47 that mismatches due to process variation can
produce temporal offsets from 30 ps down to 2 ps, depending on the implementation
technology and architecture chosen for the flip-flop. Those flip-flops need therefore
to be calibrated first before they can be used as TMUs.
In Reference 47, an indirect calibration technique was proposed that involves
the use of two uncorrelated signals (practically, two square waves running at
slightly offset frequencies) to find the relative offsets of the flip-flops used in the
SOTDC.
Finding the absolute value of the offset, which is statistically referred to as the mean of a distribution of offsets, requires a direct calibration technique. This technique was introduced in Reference 48. It involves sending two edges to the flip-flop to be calibrated, with the time difference, ΔT, tightly controlled, and repeating the measurements many times to get a (normal or Gaussian) distribution. ΔT is then changed and the same experiment is repeated. A counter or accumulator is then used to



find the cumulative density function (CDF) of the distributions. The point on the CDF
that corresponds to a probability of exactly 50 per cent is the mean of the distribution,
which is the actual absolute offset of the flip-flop. Although experimentally verified,
an improved calibration scheme was then developed in Reference 48 to get around the problem of having to tightly generate ΔT (which is more often done off chip for resolution purposes, at the expense of increased cost, as discussed earlier). The basic idea involves intentionally blowing up the noise distribution by externally injecting temporal noise into the flip-flop, with a standard deviation an order of magnitude (or even more) bigger than the offset standard deviation that needs to be measured. The standard deviation proper to the flip-flop alone will be somewhat lost in the new distribution, but the mean value becomes much easier to measure, as the need for generating fine ΔTs is eliminated. With this proposed method, temporal offsets on the order of approximately 10 ps were successfully measured in a prototype implemented in a 0.18-µm CMOS technology.
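A Monte-Carlo sketch of this direct, CDF-based calibration follows; the offset, injected-noise and step values are illustrative assumptions. It shows how the 50 per cent crossing of the measured CDF recovers the flip-flop offset even when the injected noise is deliberately much larger than the offset itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each controlled edge separation dT, record the fraction of trials in
# which the flip-flop samples a '1'.  That fraction traces the Gaussian CDF;
# its 50% crossing is the mean, i.e. the absolute temporal offset.
offset = 8e-12      # true flip-flop offset to be recovered (illustrative)
sigma = 80e-12      # injected noise std, deliberately >> offset
trials = 20_000

dTs = np.linspace(-200e-12, 200e-12, 81)
p_one = np.array([(dT + rng.normal(0.0, sigma, trials) > offset).mean()
                  for dT in dTs])

est = np.interp(0.5, p_one, dTs)    # interpolate the 50% crossing
print(f"estimated offset = {est * 1e12:.1f} ps (true {offset * 1e12:.1f} ps)")
```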

5.7 Complete on-chip test core: proposed architecture in Reference 11 and its versatile applications

Some of the above BIST techniques that were highlighted in previous sections have
been incorporated in a single system that was used to perform a full set of tests,
emulating therefore the function of a mixed-signal tester on chip. The advantages of
such a proposed system in Reference 11 include full digital input/output access, a coherent system for signal generation and capture, fully programmable d.c. and a.c. subsystems, and a single comparator or 1-bit ADC which, with an on-chip DLL, can perform a multi-pass capture digitization. The proposed system [11] was shown earlier in Figure 5.3. This section is dedicated to showing some of its versatile applications that were indeed built, tested and characterized.

5.7.1 Attractive and flexible architecture

Perhaps of most significance, from a BIST point of view, are two important aspects
that the architecture in Reference 11 offers. First, its almost all-digital implementation makes it very attractive from a future scaling perspective whereby the occupied
area overhead is expected to decrease with newer CMOS technologies. As shown
in Figure 5.27, with the exception of a crude low-pass filter for d.c. generation, an
analogue filter for a.c. generation, two sample-and-hold (S/H) circuits and a comparator, the remainder consists of an all-memory implementation. Notice that, theoretically,
only one S/H is needed and that is at the output of the CUT, where the information is
analogue and might be varying. However, for practical reasons, and more specifically,
in order to combat capacitor charge leakage that occurs when the charge is held for
a long time, an identical S/H is placed on the other terminal of the comparator [25],
namely where the d.c. voltage is fed. This provides symmetry and therefore identical
charge loss at both comparison terminals.
Another very important aspect of the proposed architecture is its digital-in digital-out scan capabilities, shown in Figure 5.28. This is particularly important from a signal integrity perspective, whereby a digital interface at both the input and output terminals is a lot more immune to noise and signal degradation caused by the interconnect paths.

Figure 5.27 Architecture for an almost all-digital on-chip oscilloscope

Figure 5.28 Emphasis on the digital-in digital-out interface of the proposed BIST
Last but not least, its flexibility from an area overhead perspective is what adds to
its BIST value. As highlighted in Figure 5.29, the proposed test core can be greatly
reduced if the area is of paramount importance. The a.c. and d.c. memory scan chains can be off chip, using external equipment. Similarly, the memory that holds the digital logic and performs the back-end DSP capabilities can also be external, both while still maintaining the digital-in digital-out interface. In this case, the abbreviated mixed-signal test core consists of simple digital buffers (to restore the rise and fall times of the digital bitstream), the crude low-order d.c. low-pass filter and the single comparator performing the digitization in a multi-pass approach.

Figure 5.29 Possibility of a reduced on-chip core, and therefore reduced area, while maintaining a digital-in digital-out interface

5.7.2 Oscilloscope/curve tracing

The system was first checked for its ability to perform signal generation, as well as
signal capture. Fully digital d.c. and a.c. signal-memory-based generation systems
are incorporated. The programming of the memory is achieved with a subroutine and
optimized using software. The memories are then loaded with the bitstream, through
a global clock. With appropriate on-chip low-pass filtering, and by using the DLL,
which controls the sample-and-hold clock (all of which are generated using the same
global clock), and a 1-bit comparator, a multi-pass algorithm allows the capture of
the generated signals. The digitized version of the output is then exported for further
analysis.
Experimental results from d.c. curve tracing showed a linearity of 10 bits in a 0.35-µm CMOS technology, for an effective capture rate of 4 GHz, corresponding to a time resolution of 200 ps. Single- and multi-tone generation have also been demonstrated in the same technology, as well as in 0.25- and 0.18-µm CMOS technologies. Spectral purity as good as 65 dB at 500 kHz and 40 dB at 0.5 GHz has been achieved. The capture method was also tested, demonstrating a resolution of approximately 12 bits.

5.7.3 Coherent sampling

Coherency is an important and essential feature in production testing, where the repeatability and reproducibility of the test results are largely a function of the signal generation and output capture triggering time. A single master clock clocking the different parts of the complete system ensures coherency and edge synchronization. With shorter distances, as is the case on chip, delays between the different subsystems are less critical. In the case of relatively larger chips or high-speed and/or high-performance applications, localized PLLs and DLLs might be necessary.
The proposed system does indeed have an inherent coherency that makes it even
more attractive for production testing.

5.7.4 Time domain reflectometry/transmission

With clock rates in excess of 10 GHz expected by 2010 [1], clock pulses as short as 100 ps will need to cross from one end of the chip to the other. On the other hand, it takes about 67 ps for an electromagnetic wave to travel 1 cm in silicon dioxide, a delay comparable to the clock period. Signal integrity analysis such as time domain reflectometry (TDR), time domain transmission (TDT), crosstalk and so on is therefore of paramount importance. Owing to their broadband nature, capturing such high-frequency signals on chip is very costly. Embedded tools are therefore essential for such characterization tasks.
Board-level TDR has also been experimentally proven using the system above. The digitizer core, introducing only a few femtofarads of capacitive loading, can be, and was in fact, used as a tool for testing TDR and TDT on a board. For that, only the digitization part of the system was used, and a 6-bit resolution at an effective sampling rate of 10 GHz was demonstrated [49]. External clocks with a time offset of 100 ps were used in this particular experiment.

5.7.5 Crosstalk

One other important application in deep-submicron technologies is the measurement of crosstalk, which is becoming more pronounced as technologies scale down, speeds go up and interconnect traces become longer and noisier. The increased packing density inevitably introduces lines that are in close proximity to one another, where quiet lines near aggressor lines get transformed into what are known as victim lines. This crosstalk effect was indeed captured using the versatile system proposed above [49].
An earlier version was also implemented in Reference 50. The embedded circuit
was also used to measure digital crosstalk on a victim line due to aggressor lines
switching. In this implementation, only the sample-and-hold function was placed on
chip, together with a VCD line that was externally controlled with a varying d.c.
voltage to generate the delayed clock system. Buffers were then used to export the
d.c. analogue sampled-and-held voltage, and the signal was reconstructed externally.
The circuit relies on external equipment for the most part (which is not always undesirable; in fact, it can be preferable in a testing environment, for added control and tuning). Nonetheless, the system was among the earliest to measure interconnect crosstalk in an embedded fashion and therefore deserves attention and credit.

5.7.6 Supply/substrate noise

An important metric in signal integrity measurement is broadband (random) noise characterization. While switching or, more generally, deterministic noise can be characterized using undersampling, capturing random noise needs to be approached differently, as was recently presented in Reference 51 for the measurement of the random noise on a system supply. The authors in Reference 51 rely on capturing the dynamics of the noise as a function of time indirectly, by measuring the autocorrelation function, R, of the noise signal, x(t). The autocorrelation is given by the expected value of the random process, R(τ) = E[x(t + τ/2) x(t − τ/2)]. The Fourier transform of the autocorrelation gives the power spectral density of the supply noise. Measuring the autocorrelation function is important in this particular case, as it seems to be the only way to capture or quantify broadband noise without the aliasing problems associated with the undersampling method. The implementation is interesting as well and is shown in Figure 5.30. Only two samplers are used, with an external pulse generator to generate a variable τ, together with a digitization process that relies on a voltage-controlled oscillator (VCO) for achieving the high-resolution conversion. The sampled-and-held value of the supply noise is used
to control the frequency of oscillation of the VCO. This frequency is then measured using a high-frequency counter and exported off chip in a digital manner. Calibration is necessary in this implementation in order to capture the voltage–frequency–digital-bitstream relationship.

Figure 5.30 Supply noise measurement block diagram [51]
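A behavioural sketch of this autocorrelation measurement is given below; the noise model, pair-spacing step and trigger statistics are illustrative assumptions. Two samplers take pairs x(t0) and x(t0 + τ), the averaged product estimates R(τ), and its Fourier transform yields the PSD.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-sampler autocorrelation sketch: average x(t0) * x(t0 + tau) over many
# trigger events for each programmed tau, then FFT R(tau) to get the PSD.
N = 100
d_tau = 50e-12                    # pair-spacing step from the pulse generator
taus = np.arange(N) * d_tau

f_noise = 200e6                   # dominant clock-related noise tone
def x(t):                         # stand-in for the noisy supply voltage
    return np.sin(2 * np.pi * f_noise * t) + 0.3 * rng.standard_normal(t.shape)

R = np.empty(N)
for i, tau in enumerate(taus):
    t0 = rng.uniform(0.0, 1e-6, 5000)     # random trigger instants
    R[i] = np.mean(x(t0) * x(t0 + tau))   # two-sampler product average

psd = np.abs(np.fft.rfft(R))
freqs = np.fft.rfftfreq(N, d=d_tau)
print(f"PSD peak at {freqs[np.argmax(psd[1:]) + 1] / 1e6:.0f} MHz")  # ~200 MHz
```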
The system in Reference 51 was implemented in a 0.13-µm CMOS technology and experimentally verified to capture both the deterministic nature of the noise (largely captured using undersampling) and the stationary noise in a 4-Gb/s serial link system. The stationary noise was captured using the autocorrelation function and was largely due to, and correlated with, the clock in the system. The power spectral density revealed the highest noise contribution at 200 MHz, agreeing with the system clock. Other noise contributions in the PSD occurred at other frequencies that were directly related to some switching activity in the system. So the proposed system in Reference 51 was indeed capable of capturing both deterministic (also referred to as periodic) and stationary properties of the supply noise in a Gb/s serial link system.
Also recently, an on-chip system to characterize substrate integrity beyond 1 GHz was implemented in a 0.13-µm CMOS technology [52] and successfully tested. The relevance of this paper is, on one hand, its circuit implementation for measuring substrate integrity, which confirms the need for embedded approaches. On the other hand, the paper's conclusion confirms that in an SoC, integrity issues have to be studied and cannot be ignored, especially beyond 1 GHz of operational speed.

5.7.7 RF testing: amplifier resonance

One other test was also performed with the proposed system in Reference 11: the capture of an RF low-noise amplifier (LNA) frequency response,
particularly around its resonance frequency. The CUT was implemented on chip
and its frequency response was tested through the multi-tone signal generation and
multi-pass single-comparator capture system proposed. A 1.2-GHz centre resonance
frequency was successfully measured with 29 dB of spurious-free dynamic range [49].
More focused RF BIST testing has been proposed in Reference 53. An example
diagram for testing RF gain and noise figure is shown in Figure 5.31. A noise diode
generates a broadband RF noise signal, and an output diode, preceded by an LNA for
amplification purposes, acts as an RF detector. Narrowband filters are used to filter
out the broadband noise.

Figure 5.31   Focused board-level RF testing: a noise diode drives the DUT through a narrowband filter; the DUT output passes through a second narrowband filter and an LNA to an RF detector, with a separate calibration path

Sweeping of the power levels is achieved by varying the d.c. bias of the diode. Calibration is also made possible to characterize and verify the
correct functionality of the board level test path.
Additional block diagrams for other RF testing functions can be found in more detail in Reference 53. They all fall into the category of RF-to-d.c. or RF-to-analogue testing, whereby the high-frequency signals are converted to low-frequency or d.c. signals, which are then captured with more ease and higher accuracy.
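Reference 53 reduces the RF measurement to d.c. detector levels; the text does not spell out the post-processing arithmetic, but a standard Y-factor computation, which a calibrated noise-diode stimulus typically enables, would look roughly as follows (every numeric value here is hypothetical):

    import math

    # Hypothetical detector readings with the noise diode biased on (hot)
    # and off (cold), after the calibration path removes test-path losses.
    p_hot, p_cold = 4.2e-9, 1.1e-9     # detected output noise powers, W
    enr_db = 15.0                      # assumed excess noise ratio, dB

    # Y-factor noise figure: NF = ENR - 10*log10(Y - 1)
    y = p_hot / p_cold
    nf_db = enr_db - 10 * math.log10(y - 1)

    # Gain from a separate calibrated stimulus level (hypothetical)
    p_in_dbm, p_out_dbm = -40.0, -25.0
    gain_db = p_out_dbm - p_in_dbm
    print(f"NF = {nf_db:.1f} dB, gain = {gain_db:.1f} dB")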

5.7.8 Limitations of the proposed architecture in Reference 11

With all the above applications experimentally verified, the system is indeed versatile and almost all-digital, with the exception of one comparator, two lowpass filters and two sample-and-hold systems. The circuit was proven to provide test capabilities that are otherwise unachievable or, at the very least, very expensive to obtain. Despite its versatility, some limitations exist for the proposed system in Reference 11 and these are highlighted next.
Comparator offset is one such limitation; the comparator needs to be fully characterized for its offset, as well as dynamically tested, two tasks that are not easily done or, at best, are time consuming and require some additional consideration.
The other limitation, albeit less severe, lies in the uncertainty associated with the rise/fall time mismatch of the digital bitstream in the on-chip memory-based d.c. generation. This, however, can be taken care of at the design level by accounting for the worst-case process variations.
One last limitation is the increased test time that each test will require due to the multi-pass approach. The dead time needed for the d.c. signal generation subsystem to settle to an acceptable level within an acceptable resolution, each time the d.c. generation block updates its output level, is another source of increased test time. This was a trade-off between design complexity and test time that the authors had to consider.

5.8 Recent trends

If the cost of a component has to be brought down to track Moore's law, its testing cost
has to go down as well. While most of the recent tools are mainly for characterization
and device functional testing, more needs to be done about production testing. One
important criterion in production testing is the ability to calibrate all devices while using simple calibration techniques, with as little test-time overhead as possible, in order to be a production-worthy solution. It is therefore important to highlight some of the latest
test concerns and techniques that have emerged in recent years, mainly to reduce
overall test time and cost.
Adaptive test control and collection and test-floor statistical process control are now emerging topics that are believed to decrease the overall test time by gathering statistical parameters about the die, wafer and lot, and feeding them back to a test control section through some interactive interface. As more parts are tested, it is believed that the variations in the parts are better understood, allowing the test control to enable or disable tests or re-order them, for example allowing the tests that are catching the defects to be run first [54]. This has the potential effect of centring the distribution of the devices' performance more tightly around its mean; in other words, getting test results with less variance or standard deviation.
Once this is achieved, the remaining devices in the production line can be easily
scanned and binned more quickly. However, this solution does not address the issue
of mean shifting that could happen if there is a sudden change in the environmental
set-up. Also, the time it takes to gather a statistically valid set of data that works more
or less globally is not yet defined. This is an important criterion since having a set that
works for only a small percentage of the devices to be tested is not an economically
feasible solution. In other words, the time offset introduced by the proposed method
should not have a detrimental effect on the overall test time. Otherwise the proposed
method is not justified.
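As a minimal sketch of the re-ordering idea (an illustration of the concept, not the algorithm of Reference 54), the flow below keeps a running count of how often each test catches a failing die and periodically promotes the most effective tests to the front of the list:

    from collections import Counter

    test_order = ["dc_offset", "gain", "snr", "jitter"]   # hypothetical tests
    catches = Counter()            # failing dies caught, per test

    def run_tests(die, tests):
        # Run tests in the current order; stop at the first failure
        # (stop-on-fail production flow).
        for name in test_order:
            if not tests[name](die):
                catches[name] += 1
                return False       # die binned as failing
        return True

    def reorder():
        # Tests that catch the most defects move to the front, shortening
        # the average time to a fail decision on subsequent dies.
        test_order.sort(key=lambda name: -catches[name])

    # Hypothetical usage after every lot:
    #   for die in lot: run_tests(die, test_bank)
    #   reorder()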
A design-for-manufacturability technique based on a manufacturable-by-construction design was also recently proposed in Reference 55. The idea is specifically intended for the nanometre era and puts forward the concept of incorporating accurate physical and layout models of a particular process as part of the computer-aided design tool used to simulate the system. Such models are then continuously and dynamically updated based on the yield losses. The concept was experimentally verified on five different SoCs implemented in a 0.13-µm CMOS process, including a baseband cell phone, a micro-controller and a graphics chip.
Experimental results show a yield improvement varying between 4 and 12 per cent,
depending on the nature of the system implemented on the chip. The yield improvement was measured with respect to previous revisions of the same ICs implemented
using traditional methods.
The ATE manufacturing industry has also recently been considering what is known as an open architecture with modular instruments, to standardize test platforms and increase their lifetime; this effort resulted in the Semiconductor Test Consortium formed between Intel and the Japanese Advantest Corp. [56].
Finally, the testing of multiple-Gb/s serial links and buses has been the focus of recent panel discussions [57]. Some of the questions that have been addressed include the appropriateness of DfT/BIST for such tests; whether such measures are, or will be, the bottleneck for analogue tests, rather than the RF front-end in mobile/wireless computing; and, finally, whether it is necessary to even consider testing for jitter, noise and BER, from a cost and economics perspective, in a production environment.

5.9 Conclusions

In summary, it is clear that test solutions and design-for-test techniques are important, but where the test solutions are implemented and how they are partitioned, especially in the SoC era, have an effect on the overall test cost. Devising an optimum test strategy that is affordable, achieves a high yield and minimizes the time to market is a difficult task.

Test solutions and platforms can be partitioned anywhere on the chip, on the board or as part of the requirements of the ATE. Each solution will assign responsibility to different people (designer, test engineer or ATE manufacturer) and entail different calibration techniques and different test instruments, all of which directly impact the test cost and, therefore, the overall part cost to the consumer. This chapter focused mainly on the latest developments in DfT and BIST techniques and the embedded test structures of analogue and mixed-signal communication systems for the purpose of design validation and characterization.
Emerging ideas and latest efforts to decrease the cost of test include adaptive testing, where environmental factors are accounted for and fed back to the testing algorithm. This could potentially result in more economical long-term production testing, but is yet to be verified and justified. At the ATE level, ideas such as concurrent test and open architecture are also being considered. Despite the differences in views and the abundance of suggested solutions, testing continues to be an active research area. A great number of mixed-signal test solutions will have to continue to emerge to respond to the constantly pressing need for shipping better, faster and more economically feasible (cheaper) devices to the electronics consumers.

5.10 References
1 The 1997 National Technology Roadmap for Semiconductors (Semiconductor Industry Association, San Jose, CA, 1997)
2 Tsui, F.F.: LSI/VLSI Testability Design (McGraw Hill, New York, 1986)
3 Bardell, P.H., McAnney, W.H., Savir, J.: Built-in Test for VLSI: Pseudorandom Techniques (John Wiley and Sons, New York, 1987)
4 Davis, B.: The Economics of Automatic Testing (McGraw Hill, UK, 1982)
5 Toner, M.F., Roberts, G.W.: 'A BIST scheme for an SNR, gain tracking and frequency response test of a sigma-delta ADC', IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1995; 42 (1): 1–15
6 Grochowski, A., Bhattacharya, D., Viswanathan, T.R., Laker, K.: 'Integrated circuit testing for quality assurance in manufacturing: history, current status and future trends', IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1997; 44 (8): 610–33
7 Sunder, S.: 'A low cost 100 MHz analog test bus', Proceedings of IEEE VLSI Test Symposium, Princeton, NJ, 1995, pp. 60–3
8 Osseiran, A.: 'Getting to a test standard for mixed-signal boards', Proceedings of IEEE Midwest Symposium on Circuits and Systems, Rio de Janeiro, Brazil, 1995, pp. 1157–61
9 DeWitt, M.R., Gross, G.F. Jr., Ramanchandran, R.: Built-in Self-Test for Analog to Digital Converters, US Patent no. 5,132,685, 1992
10 Veillette, B.R., Roberts, G.W.: 'A built-in self-test strategy for wireless communication systems', Proceedings of IEEE International Test Conference, Washington, DC, 1995, pp. 930–9

11 Hafed, M.M., Abaskharoun, N., Roberts, G.W.: 'A 4 GHz effective sample rate integrated test core for analog and mixed-signal circuits', IEEE Journal of Solid State Circuits, 2002; 37 (4): 499–514
12 Zimmermann, K.F.: 'SiPROBE: a new technology for wafer probing', Proceedings of IEEE International Test Conference, Washington, DC, 1995, pp. 106–12
13 Tierney, J., Rader, C.M., Gold, B.: 'A digital frequency synthesizer', IEEE Transactions on Audio and Electroacoustics, 1971; 19: 48–57
14 Bruton, L.: 'Low sensitivity digital ladder filters', IEEE Transactions on Circuits and Systems, 1975; 22 (3): 168–76
15 Lu, A.K., Roberts, G.W., Johns, D.A.: 'High-quality analog oscillator using oversampling D/A conversion techniques', IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1994; 41 (7): 437–44
16 Toner, M.F., Roberts, G.W.: 'Towards built-in-self-test for SNR testing of a mixed-signal IC', Proceedings of IEEE International Symposium on Circuits and Systems, Chicago, IL, 1993, pp. 1599–602
17 Lu, A.K., Roberts, G.W.: 'An analog multi-tone signal generator for built-in-self-test applications', Proceedings of IEEE International Test Conference, Washington, DC, 1994, pp. 650–9
18 Haurie, X., Roberts, G.W.: 'Arbitrary precision signal generation for bandlimited mixed-signal testing', Proceedings of IEEE International Test Conference, Washington, DC, 1995, pp. 78–86
19 Veillette, B., Roberts, G.W.: 'High-frequency signal generation using delta-sigma modulation techniques', Proceedings of IEEE International Symposium on Circuits and Systems, Seattle, Washington, 1995, pp. 637–40
20 Hawrysh, E.M., Roberts, G.W.: 'An integration of memory-based analog signal generation into current DFT architectures', Proceedings of IEEE International Test Conference, Washington, DC, 1996, pp. 528–37
21 Burns, M., Roberts, G.W.: An Introduction to Mixed-Signal IC Test and Measurement (Oxford University Press, New York, 2001)
22 Dufort, B., Roberts, G.W.: 'On-chip signal generation for mixed-signal built-in self test', IEEE Journal of Solid State Circuits, 1999; 34 (3): 318–30
23 Parker, K.P., McDermid, J.E., Oresjo, S.: 'Structure and metrology for an analog testability bus', Proceedings of IEEE International Test Conference, Baltimore, MD, 1993, pp. 309–22
24 Larsson, P., Svensson, S.: 'Measuring high-bandwidth signals in CMOS circuits', Electronics Letters, 1993; 29 (20): 1761–2
25 Hajjar, A., Roberts, G.W.: 'A high speed and area efficient on-chip analog waveform extractor', Proceedings of IEEE International Test Conference, Washington, DC, 1998, pp. 688–97
26 Stevens, A.E., van Berg, R., van der Spiegel, J., Williams, H.H.: 'A time-to-voltage converter and analog memory for colliding beam detectors', IEEE Journal of Solid State Circuits, 1989; 24 (6): 1748–52
27 Sumner, R.L.: Apparatus and Method for Measuring Time Intervals With Very High Resolution, US Patent 6,137,749, 2000



28 Rahkonen, T.E., Kostamovaara, J.T.: 'The use of stabilized CMOS delay lines for the digitization of short time intervals', IEEE Journal of Solid State Circuits, 1994; 28 (8): 887–94
29 Chen, P., Liu, S.: 'A cyclic CMOS time-to-digital converter with deep sub-nanosecond resolution', Proceedings of IEEE Custom Integrated Circuits Conference, San Diego, CA, 1999, pp. 605–8
30 Dudek, P., Szczepanski, S., Hatfield, J.: 'A CMOS high resolution time-to-digital converter utilising a Vernier delay line', IEEE Journal of Solid State Circuits, 2000; 35 (2): 240–7
31 Kang, J., Liu, W., Cavin III, R.K.: 'A CMOS high-speed data recovery circuit using the matched delay sampling technique', IEEE Journal of Solid State Circuits, 1997; 32 (10): 1588–96
32 Andreani, P., Bigongiari, F., Roncella, R., Saletti, R., Terreni, P., Bigongiari, A., Lippi, M.: 'Multihit multichannel time-to-digital conversion with ±1% differential nonlinearity and near optimal time resolution', IEEE Journal of Solid State Circuits, 1998; 33 (4): 650–6
33 Abaskharoun, N., Roberts, G.W.: 'Circuits for on-chip sub-nanosecond signal capture and characterization', Proceedings of IEEE Custom Integrated Circuits Conference, San Diego, CA, 2001, pp. 251–4
34 Maneatis, J.G.: 'Low-jitter process independent DLL and PLL based on self-biased techniques', IEEE Journal of Solid State Circuits, 1996; 31 (11): 1723–32
35 Chan, A.H., Roberts, G.W.: 'A deep sub-micron timing measurement circuit using a single-stage Vernier delay line', Proceedings of IEEE Custom Integrated Circuits Conference, Orlando, FL, 2002, pp. 77–80
36 Takamiya, M., Inohara, H., Mizuno, M.: 'On-chip jitter-spectrum-analyzer for high-speed digital designs', Proceedings of IEEE International Solid State Circuits Conference, San Francisco, CA, 2004, pp. 350–532
37 Yamaguchi, T., Ishida, M., Soma, M., Ichiyama, K., Christian, K., Oshawa, K., Sugai, M.: 'A real time jitter measurement board for high-performance computer and communication systems', Proceedings of IEEE International Test Conference, Charlotte, NC, 2004, pp. 77–84
38 Lin, H., Taylor, K., Chong, A., Chan, E., Soma, M., Haggag, H., Huard, J., Braat, J.: 'CMOS built-in test architecture for high-speed jitter measurement technique', Proceedings of IEEE International Test Conference, Charlotte, NC, 2003, pp. 67–76
39 Taylor, K., Nelson, B., Chong, A., Nguyen, H., Lin, H., Soma, M., Haggag, H., Huard, J., Braatz, J.: 'Experimental results for high-speed jitter measurement technique', Proceedings of IEEE International Test Conference, Charlotte, NC, 2004, pp. 85–94
40 Ishida, M., Ichiyama, K., Yamaguchi, T., Soma, M., Suda, M., Okayasu, T., Watanabe, D., Yamamoto, K.: 'Programmable on-chip picosecond jitter-measurement circuit without a reference-clock input', Proceedings of IEEE International Solid-State Circuits Conference, San Francisco, CA, 2005, pp. 512–4

41 Analui, B., Rylyakov, A., Rylov, S., Hajimiri, A.: 'A 10 Gb/s eye-opening monitor in 0.13 µm CMOS', Proceedings of IEEE International Solid-State Circuits Conference, San Francisco, CA, 2005, pp. 332–4
42 Abas, A.M., Bystrov, A., Kinniment, D.J., Maevsky, O.V., Russell, G., Yakovlev, A.V.: 'Time difference amplifier', Electronics Letters, 2002; 38 (23): 1437–8
43 Oulmane, M., Roberts, G.W.: 'A CMOS time-amplifier for femto-second resolution timing measurement', Proceedings of IEEE International Symposium on Circuits and Systems, London, 2004, pp. 509–12
44 Veillette, B., Roberts, G.W.: 'On-chip measurement of the jitter transfer function of charge pump phase-locked loops', IEEE Journal of Solid State Circuits, 1998; 33 (3): 483–91
45 Veillette, B., Roberts, G.W.: 'Stimulus generation for built-in-self-test of charge-pump phase-locked-loops', Proceedings of IEEE International Test Conference, Washington, DC, 1997, pp. 397–400
46 Gutnik, V.: Analysis and Characterization of Random Skew and Jitter in a Novel Clock Network, Ph.D. dissertation, Massachusetts Institute of Technology, USA, 2000
47 Gutnik, V., Chandrakasan, A.: 'On-chip time measurement', Proceedings of IEEE Symposium on VLSI Circuits, Orlando, FL, 2000, pp. 52–3
48 Levine, P., Roberts, G.W.: 'A high-resolution flash time-to-digital converter and calibration scheme', Proceedings of IEEE International Test Conference, Charlotte, NC, 2004, pp. 1148–57
49 Hafed, M., Roberts, G.W.: 'A 5-channel, variable resolution, 10-GHz sampling rate coherent tester/oscilloscope IC and associated test vehicles', Proceedings of IEEE Custom Integrated Circuits Conference, San Jose, CA, 2003, pp. 621–4
50 Delmas-Bendhia, S., Caignet, F., Sicard, E., Roca, M.: 'On-chip sampling in CMOS integrated circuits', IEEE Transactions on Electromagnetic Compatibility, 1999; 41 (4): 403–6
51 Alon, E., Stojanovic, V., Horowitz, M.: 'Circuits and techniques for high-resolution measurement of on-chip power supply noise', IEEE Journal of Solid State Circuits, 2005; 40 (4): 820–8
52 Nagata, M., Fukazama, M., Hamanishi, N., Shiochi, M., Iida, T., Watanabe, J., Murasaka, M., Iwata, A.: 'Substrate integrity beyond 1 GHz', Proceedings of IEEE International Solid-State Circuits Conference, San Francisco, CA, 2005, pp. 266–8
53 Ferrario, J., Wolf, R., Moss, S., Slamani, M.: 'A low-cost test solution for wireless phone RFICs', IEEE Communications Magazine, 2003; 41 (9): 82–8
54 Rehani, M., Abercrombie, D., Madge, R., Teisher, J., Saw, J.: 'ATE data collection: a comprehensive requirements proposal to maximize ROI of test', Proceedings of IEEE International Test Conference, Charlotte, NC, 2004, pp. 181–9
55 Strojwas, A., Kibarian, J.: 'Design for manufacturability in the nanometer era: system implementation and silicon results', Proceedings of IEEE International Solid-State Circuits Conference, San Francisco, CA, 2005, pp. 268–9



56 LaPedus, M.: 'Intel's casual learning algorithm to reduce IC test costs', EE Times, May 6, 2004
57 Li, M.: 'Is design to production the ultimate answer for jitter, noise and BER challenges for multi-Gb/s ICs?', Proceedings of IEEE International Test Conference, Charlotte, NC, 2004, p. 1433

Chapter 6

Design-for-testability of analogue filters


Yichuang Sun and Masood-ul Hasan

6.1 Introduction

Test and diagnosis techniques for digital systems have been developed for over three
decades. Advances in technology, increasing integration and mixed-signal designs
demand similar techniques for testing analogue circuitry. Design for testability (DfT)
for analogue circuits is one of the most challenging jobs in mixed-signal system on
chip design owing to the sensitivity of circuit performance with respect to component
variations and process technologies. A large portion of test development time and
total test time is spent on analogue circuits because of the broad specifications and
the strong dependency of circuit performance on circuit components. To ensure that a
design is testable is an even more formidable task, since testability is not well defined
within the context of analogue circuits. Testing of analogue circuits based on circuit
functionality and specification under typical operational conditions may result in poor
fault coverage, long testing times and the requirement for dedicated test equipment.
Furthermore, the small number of input/output (I/O) pins of an analogue integrated
circuit compared with that of digital circuits, the complexity due to continuous signal
values in the time domain and the inherent interaction between various circuit parameters make it almost impossible to design an efficient DfT for functional verification
and diagnosis. Therefore, an efficient DfT procedure is required that uses a single
signal as input or self-generated input signal, has access to several internal nodes,
and has an output that contains sufficient information about the circuit under test.
A number of test methods can be found in the literature and various corresponding DfT techniques have been proposed [1–15]. DfT methods can be generally divided into two categories. The first seeks to enhance the controllability and observability of the internal nodes of a circuit under test in order to utilize only the normal circuit input and output nodes to test the circuit. The second is to convert the function of a circuit under test, in order to generate an output signal that contains the performance of the circuit to determine its malfunction. The most promising DfT methods are bypassing [2–5], multiplexing [6–9] and oscillation-based test (OBT) [10–15]. These methods, though quite general, are particularly useful for the testing of analogue filters.
Analogue filters are necessary and possibly among the most crucial components of mixed-signal system designs, and have been widely used in many important areas such as video signal processing, communication systems, computer systems, telephone circuitry, broadcasting systems, and control and instrumentation systems. Research has been very active in developing new high-performance integrated analogue filters [16–34]. Indeed, several types of filter have been proposed [16, 17]. However, the most popular analogue filters in practice are continuous-time active-RC filters [18–23], OTA-C filters [23–30] and sampled-data switched-capacitor (SC) filters [31–34]. Active-RC and SC filters are well known and have been around for a long time; OTA-C filters are a newer type of filter. They use only operational transconductance amplifiers (OTAs) and capacitors (C) and are very suitable for high-frequency applications. OTA-C filters were proposed in the mid-1980s and have become the most popular filters in many practical applications.
In this chapter we are concerned with DfT of analogue filters. Three popular DfT techniques are introduced, namely, bypassing, multiplexing and OBT. Applications of these DfT techniques to active-RC, OTA-C and SC filters are discussed in detail. Different DfT and built-in self-test (BIST) methods for low- and high-order active-RC, OTA-C and SC filters are presented. Throughout the chapter, many DfT design choices are given for particular types of filter. Although many typical filter structures are illustrated, in most cases these DfT methods are also applicable to other filter architectures. Two things are worth noting here. One is that the popular MOSFET-C filters [17, 22, 23] in the literature may be treated in the same way as active-RC filters from the DfT viewpoint, since they derive directly from active-RC filters with the resistors replaced by tuneable MOSFETs; as a result we do not treat them separately. The other is that single-ended filter structures are used in the chapter for ease of understanding. The reader should, however, realize that the methods are also suitable for fully differential/balanced structures [5, 16, 17, 23].
This chapter is organized in the following way. Section 6.2 is concerned with the bypassing DfT method, including bandwidth broadening and switched operational amplifier (opamp) techniques, as well as application to active-RC and SC filters. The multiplexing test approach is discussed in Section 6.3, with examples of active-RC and OTA-C filters being given. Section 6.4 addresses the OBT strategy for active-RC, OTA-C and SC filters, with many test cases being presented. Section 6.5 discusses testing of high-order analogue filters using the bypassing, multiplexing and oscillation-based DfT methods; particular attention is given to high-order OTA-C filters. Finally, a summary of the chapter is given in Section 6.6.


6.2 DfT by bypassing

Two basic design approaches are commonly used in DfT methodologies for fault
detection and diagnosis in analogue integrated filters. The first approach is based
on splitting the filter under test (FUT) into a few isolated parts, injecting external
test stimuli and taking outputs by multiplexing. The second approach is an I/O DfT
technique based on the partitioning of the FUT into the filter stages. Each filter stage is
then separately tested by bypassing the other stages. Bypassing a stage can be realized
either by bypassing the capacitors (bandwidth broadening) of the stage using MOS
switches or using a duplicate opamp structure at the interface between two stages.
The multiplexing approach will be discussed in Section 6.3. This section addresses
the bypassing approach.

6.2.1 Bypassing by bandwidth broadening

The bypassing methodology [2] is applicable to the class of active analogue filters
based on the standard operational amplifier. In this DfT technique, testability is defined
as controllability and observability of the significant waveforms within the filter structure. It permits full control and observation of I/O signals from the input of the first
stage and the output of the last stage of the filter. Therefore, the first stage input is
controllable and the last-stage output is observable. The bypassing method can be divided into two steps: detection of out-of-specification faults and diagnosis of faults.
The detection of out-of-specification faults is based on the comparison between the
ideal output and the measured data. The diagnosis step involves test generation techniques and fault identification for a given circuit. Digital scan design principles can
be used directly in the modified forms of active analogue filters. These techniques
require sequential structures and one class exhibiting this configuration is a multistage active analogue filter. A signal travels sequentially through the stages, each of
which has a well-defined function and whose gain and bandwidth are determined
by the appropriate input and feedback impedances. The gain and bandwidth of the
individual stage will modify the signal before passing on to the next stage. Analogue
scanning to control and observe signals within the filter is possible if the bandwidth of
each stage can be dynamically broadened in the test mode. Such bandwidth broadening may cause gain change. However, this will not pose problems since any change in
the gains can be fixed by the programming of the test equipment. Bandwidth expansion is performed by reducing the capacitive effects of the impedances of the stages
that are not under test. All the impedances in the active filter are based on the four
basic combinations of resistors and capacitors: single resistor, single capacitor, RC
series and RC parallel.
6.2.1.1 Active-RC filters
The following transformations on the basis of impedance modifications may be
required in test mode.

Figure 6.1   Single capacitor transformation (capacitor C with NMOS and PMOS switches)

Figure 6.2   Series RC branch transformation: (a) switch in parallel with the capacitor; (b) switch in series with the capacitor

An ideal resistor has an unlimited bandwidth and does not need to be modified in
test mode.
The single capacitor branch transformation requires two MOS switches, as shown in Figure 6.1. The impedance in the normal mode, ZN, is approximately the same as the original impedance without MOS switches only if the on-resistance of the NMOS switch, RS, is small enough that the zero it creates does not affect the frequency response of the stage. The size of the PMOS switch does not matter since its on-resistance only affects the gain in the test mode.
Two possible transformations of the series RC branch are as shown in Figure 6.2.
A switch in parallel with the capacitor makes the branch resistive in the test mode
or a switch in series with the capacitor disconnects the branch in the test mode as
shown in Figures 6.2(a) and (b) respectively. To avoid significant perturbations of the
original pole-zero locations in the series switch configuration, the on-resistance of
the NMOS switch must be much less than the series resistance of the branch.
The parallel RC branch may be considered as a combination of a single resistor
branch and a single capacitor branch. The parallel RC branch requires only one switch.
The switch is either in series with the capacitor in order to disconnect it or in parallel
with the capacitor to short it out in test mode, as shown in Figures 6.3(a) and (b),
respectively. To reduce the effect on normal filter performance, the on-resistance of the
NMOS switch must be small and the off-resistance of the PMOS switch must be large.
The modified three-opamp, second-order active-RC filter is shown in Figure 6.4. The modification requires only three extra MOS switches to place each stage of the FUT into its expanded-bandwidth form in test mode.

Figure 6.3   Parallel RC branch transformation: (a) switch in series with the capacitor; (b) switch in parallel with the capacitor


Figure 6.4   Modified second-order active-RC filter (test control switches T1 and T2)

The test methodology is very simple. The FUT is first tested in normal mode by
setting control switches, T1 = T2 = high level. If the FUT fails, the test mode is
activated and all stages except the stage under test are transformed to simple gain
stages, with all capacitors disconnected by setting the control switches at a low level.
Thus, the input signal can pass through preceding inverting amplifier stages to the
input of the stage under test and the output signal of the stage under test can pass
through succeeding inverting amplifier stages to the output of the filter, so that any
individual stage can be tested from the input and output of the filter.
To isolate the faulty stage(s), one stage is tested at a time until all stages are
tested. The input test waveforms depend upon the overall filter topology and transfer
functions of stages. A circuit simulator provides the expected output waveforms, gain
and phase. Given a filter of n stages, n + 2 simulations are required per fault. The
simulated and measured data are interpreted to identify and isolate the faults. These
data should include signals as functions of time, magnitude and phase responses,
Fourier spectra and d.c. bias conditions.
6.2.1.2 SC filters
The bandwidth broadening DfT methodology can be extended to SC filters using a
timing strategy [3]. The timing waveform will convert each stage of the filter into
a simple gain stage without any extra MOS switches. MOS switches are already

included in the basic SC resistor realizations. The requirement for test signal propagation through the SC structure is thus established with the ON–OFF sequence of
these built-in switches. The output signal may not be an exact duplicate of the input but still contains most of the information in the input. The output voltage will be scaled by the inexact gain of the stages or subsystems of the filter. Hence, the timing strategy permits full control and observation of I/O signals from the input of the first stage to the output of the last stage of the filter. A timing methodology for signal propagation not only accounts for the test requirements, but also considers the proper operation of SC filters. The following combinations are needed to produce the test timing signals:
1. The master clock, which is the OR combination of the two non-overlapping clocks, φ1 and φ2.
2. The test enable control signal for each stage.
3. The phase combinations of the master clock and the test enable signal.
The clock distribution into the SC structures is needed to permit the selection of the normal mode or test mode of operation.
The basic lowpass single-stage SC filter [31] is given in Figure 6.5.

Figure 6.5   First-order lowpass SC filter (capacitors C1, C2 and C4; switches driven by the non-overlapping clock phases φ1 and φ2)
In the test mode, the path for the input signal to the output can be created such that the switches in series with capacitors are closed and the switches used to ground any capacitor are opened. Let T be the test control signal, which remains high during the test mode and low in the normal mode. The proper switch control waveforms can be defined, for the signal switches, as
$$\phi_{1S} = T + \phi_1, \qquad \phi_{2S} = T + \phi_2 \qquad (6.1)$$
and, for the grounding switches, as
$$\phi_{1O} = \bar{T}\,\phi_1, \qquad \phi_{2O} = \bar{T}\,\phi_2 \qquad (6.2)$$

Figure 6.6   Third-order SC filter with built-in sensitization

The subscripts, S and O, are added to the clock phases to stress the functions of these
signals with respect to the switches.
During the test mode, the filter operates in continuous fashion and its transfer function is given by
$$V_{out} = \frac{C_1}{C_2 + C_4}\,V_{in} \qquad (6.3)$$

Equation (6.3) shows that the input signal Vin is transmitted through the circuit with
its amplitude scaled by the capacitor ratio.
Now we apply the same technique to the third-order lowpass SC filter [32] shown in Figure 6.6. Assume that we are interested in testing stage 2 and that the only accessible points of the circuit are the input of stage 1 and the output of stage 3.
The functional testing of stage 2 requires two signal propagation conditions:
1. Establishing a path through stage 1 to control the input of stage 2.
2. Establishing a path through stage 3 to observe the output of stage 2.
Therefore the switches can be divided into three distinct groups:
1. The grounding switches in stages 1 and 3 remain open during the testing of
stage 2.
2. The signal switches in stages 1 and 3 remain closed during the testing of
stage 2.
3. The switches in inter-stage feedback circuits remain open to ensure controllability and observability and to avoid possible instability.
In test mode, stage 2 should be in normal operation, that is, the switches in stage
2 are controlled by the normal two-phase clock. Three test control lines are required
to enable testing of each of the three stages. These lines are designated as T1 , T2 and
T3 . The clock waveforms for both normal and test are defined as follows:
1. For the grounding and inter-stage feedback switches:
$$\phi_{iO} = \overline{T_1 + T_2 + T_3}\;\phi_i \qquad (6.4)$$
where $\phi_i$ denotes clock phase i, i = 1, 2.

Figure 6.7   Modified opamp for active-RC/SC filters (duplicate input stages, test input Vt and mode-select switches S1 and S2)

2. For the signal switches:
$$\phi_{ij} = \phi_i + \sum_{k=1,\,k \neq j}^{3} T_k \qquad (6.5)$$
where $\phi_{ij}$ means clock phase i in stage j.


A path is thus established through this multi-stage SC filter so that the signal
waveform can propagate from the input of the filter to the input of any stage and from
the output of any stage to the output of the filter.
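Equations (6.1)–(6.5) are simple Boolean gatings of the two clock phases; a small sketch of how the switch controls could be generated in logic (illustrative only, with True meaning the switch is closed) is:

    def signal_switch(phase, T):
        # Equation (6.1): phi_iS = T + phi_i -- held closed in test mode
        return T or phase

    def grounding_switch(phase, T):
        # Equation (6.2): phi_iO = (not T) AND phi_i -- held open in test mode
        return (not T) and phase

    def stage_clocks(phase, j, T1, T2, T3):
        # Equations (6.4)-(6.5) for the three-stage filter: grounding and
        # feedback switches open whenever any stage is under test; signal
        # switches in stage j free-run only while stage j is under test.
        tests = {1: T1, 2: T2, 3: T3}
        grounding = (not (T1 or T2 or T3)) and phase
        signal = phase or any(tests[k] for k in tests if k != j)
        return grounding, signal

    # Normal mode: both switch types simply follow the clock phase.
    print(stage_clocks(True, 2, False, False, False))   # (True, True)
    # Testing stage 2: stage-1 signal switches stay closed even with the
    # clock low, while all grounding switches stay open.
    print(stage_clocks(False, 1, False, True, False))   # (False, True)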

6.2.2 Bypassing using duplicated/switched opamp

The duplicated or switched opamp [4] methodology can be applied without any modification to both continuous-time and SC filters, providing a unified DfT strategy with
better performance in terms of signal degradation. The modified opamp has duplicate
input stages and two MOS switches in the small signal path of the filter and is shown
in Figure 6.7.
In filter mode, switch S1 is closed and S2 is open, the opamp operates normally
and the circuit under test behaves as a filter with very small performance degradation.
In test mode, S1 is open and S2 is closed, and the opamp operates as a unity-gain follower, passing the test signal to the output. Owing to the use of switches, the circuit is often called the switched opamp; alternatively, the duplication of the input stage leads to the name duplicated opamp.
6.2.2.1 Second-order active-RC filter
Figure 6.8   Second-order RC filter using duplicate input opamps (three stages with Mode T/F controls and Vt test inputs)

A three-stage second-order RC filter using the duplicate input opamp is shown in Figure 6.8. The filter consists of three different types of stage, depending on the feedback impedance, which can be an R, a C or a parallel RC. The Vt terminal is connected to the input of the stage, that is, the output of the previous stage.
To test the ith stage, all the stages are put into test mode except the ith stage. Therefore, the input of the ith stage under test will be equal to the input of the filter, that is,
$$V_{i-1} = V_{i-2} = \cdots = V_{in} \qquad (6.6)$$
The output of the ith stage under test is given by
$$V_i = V_{i+1} = \cdots = V_{out} \qquad (6.7)$$

From the above equations it can be seen that every stage can be tested from the filter
input and output due to the use of the switched opamp.
6.2.2.2 Third-order SC filter
Figure 6.9 shows a third-order SC filter [32] using the duplicate input opamp as shown
in Figure 6.7. Each stage is able to perform its filter function in normal mode or to
work as unity gain stage in test mode.
The filter testing procedure is as follows. The FUT is first tested in filter mode.
If the FUT fails, the test mode is activated and all stages are transformed into simple
unity gain stages except the stage to be tested for fault. The stage under test is a lossy
or ideal integrator. Testing can be conducted by comparing the measured results with
the expected transfer function of the stage under test. Further designs of switched
opamps and their applications in analogue filter testing can be found in Reference 5.

Figure 6.9   Third-order lowpass SC filter using duplicate input opamps

6.3 DfT by multiplexing

The multiplexing DfT technique has been proposed to increase access to internal
circuit nodes. Through a demultiplexer, a test input signal can be applied to internal nodes, while a multiplexer can be used to take outputs from internal nodes. The
controllability and observability of the filter are thus enhanced. When using the multiplexing DfT technique, the FUT may be divided into a number of functional blocks
or stages. The input demultiplexer routes the input signal to the inputs of different
stages and the outputs of the stages are loaded to the primary output by the output
multiplexer [6]. Testing and diagnosis of embedded blocks or internal stages will thus
become much easier.
For integrator-based test, for example, the FUT is divided into separate test stages
using MOS switches such that each stage represents a basic integrator function [9].
Individual integrators are tested separately against their expected performances to
isolate the faulty stages. The diagnosis procedure then further identifies the specific
faults in the faulty stages. Normally, the filter can be divided into two possible types of
integrator: lossy integrator and ideal integrator. Time, amplitude and phase responses
may be tested for these integrators. The implementation of the multiplexing-based DfT requires only a few MOS switches. The value of the MOS switch on-resistance is chosen such that it does not affect the performance of the filter in normal mode.

6.3.1 Tow-Thomas biquad filter

The testable three-opamp Tow-Thomas (TT) biquad [23] circuit is shown in Figure 6.10. The modified circuit requires three MOS switches and two multiplexers, and in normal mode it performs the same function as the original filter. The operation of the testable TT biquad filter is given in Table 6.1.
It is possible to obtain lowpass and bandpass second-order filter functions from V2 or V3 and from V1, respectively. The MOS switches in Figure 6.10 will divide the TT three-opamp biquad into three stages. In the test mode of operation all switches are opened and each individual stage is converted into an integrator.

Figure 6.10   Testable TT three-opamp biquad filter (switches S1; demultiplexer and multiplexer addressed by A0, A1)

Table 6.1   Testable TT biquad RC filter

    S1   A1   A0   Mode     Operation
    1    0    0    Normal   Filter
    0    0    1    Test     Lossy integrator
    0    1    0    Test     Ideal integrator
    0    1    1    Test     Amplifier

For the lossy integrator, stage 1, we have
$$V_{out} = -\frac{R_1}{R_4}\left(1 - e^{-t/R_1C_1}\right)V_{in} \qquad (6.8)$$

The relation of the ideal integrator, stage 2, is given by
$$V_{out} = -\frac{t}{R_2C_2}\,V_{in} \qquad (6.9)$$

For the ideal integrator, therefore, a square-wave input will produce a triangular-wave output if the circuit is fault free. If there is a fault in the stage, the output of the stage will not be an ideal triangular wave for a square-wave input. Amplitude and phase responses in the frequency domain could equally be used as test indicators.
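A small numerical sketch of this pass/fail criterion (component values are hypothetical): drive the ideal-integrator model of Equation (6.9) with a square wave and check that each output ramp is a straight line.

    import numpy as np

    R2, C2 = 10e3, 100e-12          # hypothetical stage-2 values (R2*C2 = 1 us)
    fs, f_sq = 50e6, 100e3          # simulation rate and square-wave frequency
    t = np.arange(2000) / fs
    vin = np.where((t * f_sq) % 1 < 0.5, 1.0, -1.0)   # +/-1 V square wave

    # Ideal inverting integrator: vout = -(1/(R2*C2)) * integral of vin
    vout = -np.cumsum(vin) / (fs * R2 * C2)

    # Fault check: on each half-cycle the output must be a linear ramp, so
    # its sample-to-sample difference should be essentially constant.
    half = int(fs / (2 * f_sq))
    ramp = vout[10:half - 10]            # one ramp with the edges trimmed
    print("stage OK" if np.std(np.diff(ramp)) < 1e-6 else "stage faulty")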

6.3.2 The Kerwin-Huelsman-Newcomb biquad filter

The circuit in Figure 6.11 is referred to as the Kerwin-Huelsman-Newcomb (KHN) biquad [23]; it simultaneously displays lowpass (V3), bandpass (V2) and highpass (V1) characteristics. The testable circuit in Figure 6.11 involves four extra MOS switches and a four-channel analogue demultiplexer and multiplexer.

Figure 6.11   Testable KHN biquad filter circuit using multiplexing

Table 6.2   Testable KHN biquad filter

    S1   A1   A0   Mode     Operation
    1    0    0    Normal   Filter
    0    0    1    Test     Amplifier
    0    1    0    Test     Ideal integrator
    0    1    1    Test     Ideal integrator

The value of the switch resistances are chosen such that the pole frequency movement is negligible
and the new zeros introduced by the switches are as far outside the original filter
bandwidth as possible. The operation of the testable KHN filter in Figure 6.11 is
given in Table 6.2. In normal mode operation, all control switches designated as S1
are closed with address pins A0 and A1 at zero level and the circuit performs the
same function as the original filter. The fault diagnosis method involves the following
procedure:
1. Set the KHN filter in test mode by placing all switches (S1 ) in open position.
2. Observe the output waveforms of the stage under test in the KHN filter by
assigning the respective address of the stage, as given in Table 6.2.
Each stage is investigated step by step to locate the fault. The multiplexing technique can be used to observe the fault in any stage. The function of each stage is simply that of an ideal integrator or an amplifier.

6.3.3 Second-order OTA-C filter

In this section, we present multiplexing-based test techniques for the TT OTA-C filter. This can be considered the OTA-C equivalent of the TT active-RC filter, and consists of an ideal integrator and a lossy integrator in a single loop.

Figure 6.12   Testable TT OTA-C filter circuit using multiplexing (OTAs gm1–gm3, capacitors C1 and C2, switches S1 and S2)

Table 6.3   Testable TT OTA-C biquad filter

    S1   S2   A1   A0   Mode     Operation
    0    1    0    0    Normal   Filter
    1    0    0    1    Test     Ideal integrator
    1    0    1    0    Test     Lossy integrator

TT filters have excellent low sensitivity to parasitic input capacitances and are suitable for cascade
synthesis of active filters at high frequencies. Multiplexing-based DfT is directly
applicable to the TT using only MOS switches as shown in Figure 6.12.
The values of the switch resistances are chosen so that the modified filter performs
the same function as the original filter. The optimum selection of aspect ratio between
length and width of the control switches will produce negligible phase perturbation
and insignificant increase in the total harmonic distortion due to the non-linearity of
the MOS switch.
The modified TT filter is first tested in normal mode. In normal-mode operation, the control switches designated S2 are closed and those designated S1 are opened, as shown in Table 6.3. If a failure occurs, the test mode is activated, with switches S2 open and S1 closed, and the individual stages are tested sequentially to isolate the faulty stage. During testing, the TT filter in Figure 6.12 becomes two individual stages: an ideal integrator, stage 1, and a lossy integrator, stage 2. The transfer function of stage 1 can be derived as
$$V_{out} = \frac{g_{m1}}{sC_1}\,V_{in} \qquad (6.10)$$

The output of stage 2 is given by
$$V_{out} = \frac{g_{m2}}{sC_2 + g_{m3}}\,V_{in} \qquad (6.11)$$



The DfT design has very little impact on the circuit performance of the filter since only switches S2 are within the integrator loop. All switches are opened and closed simultaneously; therefore, complementary n-type and p-type devices are used, which requires very low pin overhead.
The multiplexing technique is general and is also suitable for SC filter testing. Similar to the applications to active-RC and OTA-C filters described above, multiplexer-based DfT is easily implemented and requires only minor modification to the original multi-stage SC filter without degrading the performance of the circuit. The modified filter structure provides controllability and observability of the internal nodes of each filter stage. The technique sequentially tests every stage of the filter and hence reduces test time, and it has greatly reduced area overhead.

6.4 OBT of analogue filters

OBT procedures for analogue filters, based on transformation of the FUT into an oscillator, have recently been introduced [11–13]. The oscillation-based DfT structure uses vectorless output frequency comparison between fault-free and faulty circuits and consequently reduces test time, test cost, test complexity and area overhead.
Furthermore, the testing of high-frequency filter circuits becomes easier because no
external test signal is required for this test method. OBT shows greatly improved
detection and diagnostic capabilities associated with a number of catastrophic and
parametric faults. Application of the oscillation-based DfT scheme to low-order analogue filters of different types is discussed, because these structures are commonly
used individually as filters and also as building blocks for high-order filters.
In OBT, the circuit that we want to test is transformed into an oscillating circuit
and the frequency of oscillation is measured. The frequency of the fault-free circuit
is taken as a reference value. Discrepancy between the oscillation frequency and the
reference value indicates possible faults. Fault detection can be performed as a BIST
or in the frame of an external tester. In BIST, the original circuit is modified by
inserting some test control logic that provides for oscillation during test mode. In the
external tester, the oscillation is achieved by an external feedback loop network that
is normally implemented as part of a dedicated tester.
An ideal quadrature oscillator consists of two lossless integrators (inverting and
non-inverting) cascaded in a loop, resulting in a characteristic equation with a pair of
roots lying on the imaginary axis of the complex frequency plane. In practice, however, parasitics may cause the roots to be inside the left half of the complex frequency
plane, hence preventing the oscillation from starting. Any practical oscillator must
be designed to have its poles initially located inside the right-half complex frequency
plane in order to assure self-starting oscillation. Most of the existing theory for sinusoidal oscillator analysis [26] models the oscillator structure with a basic feedback
loop. The feedback loop may be positive, negative or a combination of both. The
quadrature oscillator model can ideally be described by a second-order characteristic
equation:
$$(s^2 - bs + \omega_0^2)\,V_0(s) = 0 \qquad (6.12)$$



The oscillation frequency ω0 can be obtained by substituting s = jω into Equation (6.12) and considering the real and imaginary parts separately. The oscillation conditions are obtained from the Barkhausen criterion [11], which states that, at the frequency of oscillation ω0, the signal must traverse the loop with no attenuation and no phase shift.
Consider the general form of the second-order transfer function, where $\omega_z$ and $\omega_p$ are the natural frequencies of the zero and the pole and $Q_z$ and $Q_p$ are the quality factors:
$$\frac{V_{out}(s)}{V_{in}(s)} = K\,\frac{s^2 + (\omega_z/Q_z)\,s + \omega_z^2}{s^2 + (\omega_p/Q_p)\,s + \omega_p^2} \qquad (6.13)$$

The poles of the transfer function can be expressed in terms of the quantities $\omega_p$ and $Q_p$ as
$$p_{1,2} = \sigma \pm j\omega = -\frac{\omega_p}{2Q_p} \pm j\,\frac{\omega_p}{2Q_p}\sqrt{4Q_p^2 - 1} \qquad (6.14)$$
To make the network oscillate with constant amplitude and resonant frequency $\omega_p$, the poles must stay on the $j\omega$-axis, that is, $Q_p \to \infty$. This condition is normally satisfied when the quality factor is sufficiently high. To keep the frequency of the oscillation at the undamped pole, the quality factor should be increased only by changing the values of components that do not affect $\omega_p$.
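Equation (6.14) can be verified numerically; the sketch below (arbitrary ωp) shows the pole pair migrating onto the jω-axis as Qp grows, which is precisely the oscillation condition exploited in the rest of this section:

    import numpy as np

    wp = 2 * np.pi * 1e6                # arbitrary pole frequency, rad/s
    for Qp in (0.7, 5.0, 50.0, 1e6):
        # Denominator s^2 + (wp/Qp)s + wp^2, as in Equation (6.13)
        poles = np.roots([1.0, wp / Qp, wp ** 2])
        print(f"Qp = {Qp:g}: pole real part = {poles[0].real:+.3e}")
    # The real part, -wp/(2*Qp), tends to zero as Qp grows, leaving the
    # poles at +/- j*wp: sustained oscillation at the pole frequency.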

6.4.1 Test transformations of active-RC filters

The most popular method for high-order filter design is the cascade method, owing to its simplicity of design and modularity of structure. Second-order filters are the basic sections in cascade structures. Therefore, in this section, we briefly discuss oscillation-based DfT techniques for second-order active-RC filters [12]. Filter-to-oscillator transformation methods, using only MOS switches, for the KHN state-variable biquad, the TT biquad and the Sallen-Key filter are presented and discussed.
6.4.1.1 KHN state-variable filter
The modified form of the KHN state-variable filter is shown in Figure 6.13. It can be
used for simultaneous realization of lowpass, bandpass and highpass characteristics.
All three filters have the same poles. Only one extra MOS switch is inserted in
the original KHN filter and the modified KHN performs the same functions with
negligible pole frequency movement. In the normal mode of operation, the control
switch designated as S1 is closed. The oscillation output may be taken from any of
the filter nodes; there may be reasons for the output to be taken from the lowpass,
bandpass or highpass nodes.
The transfer function of the lowpass filter at node V3 can be described by
$$\frac{V_3(s)}{V_{in}(s)} = K_1\,\frac{1/R_1R_2C_1C_2}{s^2 + K_1\,\dfrac{R_5/R_6}{R_1C_1}\,s + \dfrac{R_4/R_3}{R_1R_2C_1C_2}} \qquad (6.15)$$

Figure 6.13   KHN state-variable RC filter based on OBT (test switch S1)

and the frequency of the pole and the quality factor are given by
$$\omega_0 = \sqrt{\frac{R_4}{R_3R_1R_2C_1C_2}}, \qquad \frac{1}{Q} = K_1\,\frac{R_5}{R_6}\sqrt{\frac{R_3R_2C_2}{R_4R_1C_1}} \qquad (6.16)$$
where $K_1 = R_6(R_3 + R_4)/R_3(R_5 + R_6)$.


To put the filter into oscillation with constant amplitude, the quality factor has to be infinite. In other words, the network will oscillate with resonant frequency ω0 if the quality factor Q → ∞. We can see from Equation (6.16) that the oscillation condition will be satisfied, without affecting ω0, if R5/R6 = 0. As a result, switch S1 is opened to disconnect R6 during test mode. We can also short-circuit R5 to transform the filter into an oscillator. It is interesting to note that the KHN biquad can also be converted into an oscillator by forming a positive feedback loop connecting the bandpass output to the inverting terminal of the filter input through a resistor [11].
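As a quick numerical illustration of the resulting test decision (all component values and the tolerance window are hypothetical), the expected oscillation frequency follows directly from ω0 in Equation (6.16):

    import math

    R1 = R2 = R3 = R4 = 10e3        # hypothetical fault-free values, ohms
    C1 = C2 = 1e-9                  # farads

    # Equation (6.16): w0 = sqrt(R4 / (R3*R1*R2*C1*C2))
    f0 = math.sqrt(R4 / (R3 * R1 * R2 * C1 * C2)) / (2 * math.pi)

    measured = 15.4e3               # hypothetical measured oscillation, Hz
    ok = abs(measured - f0) / f0 < 0.05       # assumed 5 per cent window
    print(f"expected {f0/1e3:.1f} kHz, measured {measured/1e3:.1f} kHz: "
          f"{'pass' if ok else 'fault suspected'}")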
6.4.1.2 TT biquad filter
The TT three-opamp biquad was proposed by Tow [20] and studied by Thomas [21].
The modified circuit of the TT three-opamp biquad is shown in Figure 6.14. The DfT
technique has very little impact on the performance of the filter since only a single
switch S1 is used.
The TT biquad has both lowpass (V2, V3) and bandpass (V1) functions. The lowpass transfer function of the biquad filter can be described as
$$\frac{V_2}{V_{in}} = \frac{1/R_2R_4C_1C_2}{s^2 + (1/R_1C_1)\,s + R_6/R_2R_3R_5C_1C_2} \qquad (6.17)$$

Figure 6.14   Testable circuit of the TT RC biquad filter using OBT (test switch S1 in series with R1)

The frequency of the pole and the quality factor are given by the expressions
$$\omega_0 = \sqrt{\frac{R_6}{R_2R_3R_5C_1C_2}}, \qquad \frac{1}{Q} = \frac{1}{R_1}\sqrt{\frac{R_2R_3C_2}{C_1}} \qquad (6.18)$$

It is clear from Equation (6.18) that both the Q factor and the pole frequency ω0 can be independently adjusted. From the above expressions we can see that the condition for oscillation, Q → ∞ without affecting ω0, will be satisfied if R1 → ∞. This is realized by inserting switch S1 to disconnect R1 from the circuit. In the test mode the filter will oscillate at the resonance frequency ω0. Deviations in the oscillation frequency with respect to the resonance frequency indicate faulty behaviour of the circuit. The amount of frequency deviation will determine the possible type of fault, either catastrophic or parametric, as well as the specific location where the fault has occurred.
6.4.1.3 The Sallen-Key filter
The Sallen-Key filter is one of the most popular second-order filters [18]. It is shown in Figure 6.15. It employs an opamp arranged as a VCVS with gain K, together with an RC network.
Its transfer function is given by
$$\frac{V_{out}(s)}{V_{in}(s)} = \frac{K/R_1R_2C_1C_2}{s^2 + \left(\dfrac{1}{R_2C_2} + \dfrac{1}{R_1C_1} + \dfrac{1}{R_2C_1} - \dfrac{K}{R_2C_2}\right)s + \dfrac{1}{R_1R_2C_1C_2}} \qquad (6.19)$$
The quality factor of the filter is given by
$$\frac{1}{Q} = \sqrt{\frac{R_2C_2}{R_1C_1}} + \sqrt{\frac{R_1C_2}{R_2C_1}} + (1 - K)\sqrt{\frac{R_1C_1}{R_2C_2}} \qquad (6.20)$$

Figure 6.15   Second-order lowpass Sallen-Key filter (gain-setting resistors RA and RB)

where the amplifier gain K is equal to 1 + (RB/RA). We can put the Sallen-Key filter into oscillation by substituting 1/Q = 0 into Equation (6.20). As a result, we get the amplifier gain K = (R2C2 + R1C2 + R1C1)/R1C1. Some external control of the value of RB/RA must be provided to obtain the required value of K in test mode. Note, however, that even when the passive elements are in perfect adjustment, the finite bandwidth of a real amplifier causes dissimilar effects on the pole and zero positions. We can also put the Sallen-Key filter into oscillation by adding a feedback loop containing a high-gain inverter [12].
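A one-line numerical check of the required gain (element values hypothetical):

    R1, R2, C1, C2 = 10e3, 10e3, 1e-9, 1e-9   # hypothetical element values

    # K = (R2*C2 + R1*C2 + R1*C1)/(R1*C1), from setting 1/Q = 0 above
    K = (R2 * C2 + R1 * C2 + R1 * C1) / (R1 * C1)
    print(f"required K = {K:.2f}, i.e. RB/RA = {K - 1:.2f}")   # K = 1 + RB/RA

For equal resistors and capacitors this gives the classical K = 3, that is, RB/RA = 2.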

6.4.2 OBT of OTA-C filters

In this section we present techniques for converting OTA-C filters into oscillators using MOS switches. The conversion methods for the two-integrator loop, TT and KHN OTA-C filters are proposed and discussed.
6.4.2.1 Two-integrator loop OTA-C filter
Two-integrator loop OTA-C filters are a very popular category of filters that have
very low sensitivity and can be used alone or as a section in a high-order cascade
filter design [25, 27]. A second- or higher-order system for any type of OTA-C filter
has the potential for oscillation. This ability can be used to convert the FUT into an
oscillator by establishing the oscillation condition in its transfer function using the
strategy shown in Figure 6.16.
In the normal filter mode, the switch S1 is open and M1 and M2 are open-circuited,
but M3 short-circuited. The transfer function of the lowpass second-order filter can
be derived as
$$H(s) = \frac{V_{out}}{V_{in}} = \frac{g_{m1}g_{m2}/C_1C_2}{s^2 + (g_{m2}/C_2)\,s + g_{m1}g_{m2}/C_1C_2} \qquad (6.21)$$

Figure 6.16   Testable two-integrator loop OTA-C filter incorporating the OBT method (mode switch S1 controls MOS switches M1–M3)

The cut-off frequency ω0 and the quality factor Q are given by
$$\omega_0 = \sqrt{\frac{g_{m1}g_{m2}}{C_1C_2}} \qquad (6.22)$$
$$Q = \sqrt{\frac{g_{m1}C_2}{g_{m2}C_1}} \qquad (6.23)$$

To put the network into oscillation with constant amplitude, the poles must be placed on the imaginary jω axis. By closing the switch S1, the filter network is converted into an oscillator, as M1 and M2 are now short-circuited and M3 open-circuited. The characteristic equation of the resulting oscillator can be described as
$$s^2 + \frac{g_{m1}g_{m2}}{C_1C_2} = 0 \qquad (6.24)$$
with the poles given by
$$s_1, s_2 = \pm j\sqrt{\frac{g_{m1}g_{m2}}{C_1C_2}} \qquad (6.25)$$
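For the converted oscillator, Equation (6.25) gives the expected test frequency directly from the transconductances and capacitances (the values below are hypothetical):

    import math

    gm1 = gm2 = 100e-6              # hypothetical transconductances, A/V
    C1 = C2 = 5e-12                 # hypothetical capacitances, F

    # |s1,2| = sqrt(gm1*gm2/(C1*C2)), from Equation (6.25)
    f_osc = math.sqrt(gm1 * gm2 / (C1 * C2)) / (2 * math.pi)
    print(f"expected oscillation frequency: {f_osc/1e6:.2f} MHz")

A measured frequency outside a small window around this value would flag the filter as faulty, as in the active-RC cases above.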

6.4.2.2 TT OTA-C filter


The TT second-order OTA-C filter is the OTA-C equivalent of the TT active-RC filter.
The TT biquad is the most popular biquad in practice. It is easily converted into an
oscillator using only MOS switches as shown in Figure 6.17.
When switch S1 is open, M1 is open-circuited and M2 short-circuited, and the
circuit behaves as a normal TT-filter. The lowpass transfer function of the TT-filter
can be derived as
$$H(s) = \frac{V_{out}}{V_{in}} = \frac{g_{m1}g_{m2}/(C_1C_2)}{s^2 + s\,(g_{m3}/C_2) + g_{m1}g_{m2}/(C_1C_2)} \qquad (6.26)$$
The cut-off frequency ω0 and the quality factor Q are given by

$$\omega_0 = \sqrt{\frac{g_{m1}g_{m2}}{C_1C_2}} \qquad (6.27)$$

Figure 6.17 Testable TT OTA-C filter incorporating the OBT method



$$Q = \frac{1}{g_{m3}}\sqrt{\frac{g_{m1}g_{m2}C_2}{C_1}} \qquad (6.28)$$

To put the TT-filter into oscillation with constant amplitude the quality factor must be infinite. The network will then oscillate with resonant frequency ω0 if quality factor Q → ∞. By closing the switch S1, M1 is short-circuited and M2 open-circuited, and the filter network is converted into an oscillator with the poles given by

$$s_1, s_2 = \pm j\sqrt{\frac{g_{m1}g_{m2}}{C_1C_2}} \qquad (6.29)$$

From Equation (6.28) we can see that the condition for oscillation will be satisfied if gm3 = 0, without affecting the resonant frequency. In Figure 6.17 this can be realized by switching off the gm3 OTA.
6.4.2.3 KHN OTA-C filter
The filter in Figure 6.18 is the OTA-C equivalent of the KHN active-RC biquad, in which the two feedback paths share a single OTA resistor. The KHN OTA-C filter can simultaneously perform lowpass, bandpass and highpass functions. The implementation of oscillation-based DfT requires adding only two extra MOS switches to the original circuit. This modified KHN performs the same functions with negligible pole frequency movement.
The lowpass transfer function is given by

$$\frac{V_{LP}}{V_{in}} = \frac{g_{m1}g_{m2}/(C_1C_2)}{s^2 + (g_{m1}g_{m3}/(g_{m5}C_1))\,s + g_{m1}g_{m2}g_{m4}/(g_{m5}C_1C_2)} \qquad (6.30)$$

The cut-off frequency ω0 and the quality factor Q are given by

$$\omega_0 = \sqrt{\frac{g_{m4}}{g_{m5}}\,\frac{g_{m1}g_{m2}}{C_1C_2}} \qquad (6.31)$$

$$Q = \frac{1}{g_{m3}}\sqrt{\frac{g_{m2}g_{m4}g_{m5}C_1}{g_{m1}C_2}} \qquad (6.32)$$

Figure 6.18 Testable KHN OTA-C filter including oscillation-based DfT

Equations (6.31) and (6.32) show that we can change the cut-off frequency ω0 and quality factor Q of the filter independently. The KHN filter will oscillate at resonant frequency ω0 if the quality factor Q → ∞. The condition of oscillation will be satisfied by substituting gm3 = 0 in Equation (6.32). By closing the switch S1, M1 is short-circuited and M2 open-circuited, and the filter network is converted into an oscillator whose oscillation frequency is the resonant frequency of the filter.

6.4.3 OBT of SC biquadratic filter

In this section we present the conversion of an SC biquadratic filter into an oscillator using a non-linear feedback element [7, 15, 33].
Considering the Fleisher–Laker SC biquad [33] shown in Figure 6.19, the general biquadratic transfer function in the z-domain H(z) is

$$H(z) = \frac{V_{out}(z)}{V_{in}(z)} = \frac{a_2z^2 + a_1z + a_0}{z^2 + b_1z + b_0} \qquad (6.33)$$

The coefficients of the equation depend upon the particular type of the filter. They are related to the normalized capacitors in the SC biquad in Figure 6.19 as follows:

$$a_2 = C_{01} + \frac{(C_{05} + C_{06})(C_{07} + C_{08})}{1 + C_{09}} \qquad (6.34)$$

$$a_1 = \frac{C_{01} - (C_{05} + C_{06})C_{08} - C_{06}C_{07}}{1 + C_{09}} \qquad (6.35)$$

$$a_0 = \frac{C_{06}C_{08}}{1 + C_{09}} \qquad (6.36)$$

$$b_1 = \frac{-(2 + C_{09}) + (C_{07} + C_{08})C_{02}}{1 + C_{09}} \qquad (6.37)$$

$$b_0 = \frac{1 - C_{08}C_{02}}{1 + C_{09}} \qquad (6.38)$$

Figure 6.19 Fleisher–Laker biquad filter


Filter
H(z)

V Non-linear block

N(A)
V

Figure 6.20

Filter to oscillator conversion using non-linear block in feedback loop

To convert the biquadratic filter into an oscillator requires a circuit to force a displacement of a pair of poles to the unit circle. A non-linear block in the filter feedback loop [26, 34] can generate self-sustained robust oscillations. The oscillation condition, and approximations for the frequency and amplitude of the resulting oscillator for the system in Figure 6.20, are determined by the roots of

$$1 - [N(A)\,H(z)] = 0 \qquad (6.39)$$

where N(A) represents the transfer function of the non-linear block as a function of the amplitude A of the first harmonic of its input. We consider the non-linear function formed by an analogue comparator providing one of the two voltages ±V, as shown in Figure 6.20. The corresponding N(A) function is

$$N(A) = \frac{4V}{\pi A} \qquad (6.40)$$

where V can be a positive or negative reference voltage. For the generic second-order equation in Equations (6.39) and (6.40) we have

$$z^2 - 2r\cos(\theta)\,z + r^2 = 0 \qquad (6.41)$$

where

$$2r\cos(\theta) = -\frac{b_1 - a_1N(A)}{1 - a_2N(A)}, \qquad r^2 = \frac{b_0 - a_0N(A)}{1 - a_2N(A)} \qquad (6.42)$$

It can be shown that the system will oscillate if

$$a_0 \neq a_1, \qquad \operatorname{sign}(V) = \operatorname{sign}(a_2 - a_0) = \operatorname{sign}(a_2b_0 - a_0) \qquad (6.43)$$

The above equations mean that the poles are on the unit circle and the oscillation amplitude is stable. The amplitude A0 and frequency ω0 of oscillation are given by

$$A_0 = \frac{4|V|}{\pi}\,\frac{a_0 - a_2}{b_0 - 1} \qquad (6.44)$$

$$\omega_0 = f_s \arccos\left(-\frac{1}{2}\,\frac{b_1 - a_1N(A_0)}{1 - a_2N(A_0)}\right) \qquad (6.45)$$
The integrator-based second-order SC filter in Figure 6.19 will be converted into an oscillator if at least one of the transfer functions at the outputs belongs to the set of functions fulfilling the required conditions in Equation (6.43). An important fact derived from Equation (6.44) is that the amplitude of oscillation can be controlled by varying the voltage V. Therefore, we can select the amplitude to achieve the best testing conditions for the biquad filter.
The OBT and diagnosis procedure using the structure shown in Figure 6.20 for the two-integrator loop SC biquad is described as follows. The OBT divides the FUT into two modes of operation, filter mode and test mode. The system is first tested in filter mode. Then in test mode the filter is converted into a quadrature oscillator and the frequency of oscillation is evaluated. Deviations in the oscillation frequency with respect to the nominal value given by Equation (6.45) indicate faulty behaviour of the FUT. The amount of frequency deviation will determine the possible type of fault, either catastrophic or parametric, as well as the specific location where the fault has occurred.
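To make the procedure concrete, the following sketch (with made-up biquad coefficients and reference voltage, not values from the text) evaluates the predicted oscillation amplitude and frequency from Equations (6.40), (6.42), (6.44) and (6.45):

```python
import math

def sc_oscillation(a0, a1, a2, b0, b1, V, fs):
    """Predict oscillation amplitude and frequency for the comparator
    feedback scheme of Figure 6.20, using Equations (6.40), (6.42),
    (6.44) and (6.45)."""
    A0 = (4*abs(V)/math.pi)*(a0 - a2)/(b0 - 1)       # Equation (6.44)
    N_A0 = 4*V/(math.pi*A0)                          # Equation (6.40)
    cos_th = -0.5*(b1 - a1*N_A0)/(1 - a2*N_A0)       # Equation (6.42) with r = 1
    f_osc = fs*math.acos(cos_th)/(2*math.pi)         # oscillation frequency in Hz
    return A0, f_osc

# Made-up biquad coefficients and reference voltage (illustrative only)
A0, f_osc = sc_oscillation(a0=0.02, a1=0.0, a2=0.0,
                           b0=1.05, b1=-1.9, V=1.0, fs=1e6)
print(f"A0 = {A0:.3f} V, f_osc = {f_osc/1e3:.2f} kHz")
```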

6.5 Testing of high-order analogue filters

Two main approaches are found in the literature [16, 23] for the realization of high-order filters. The first is to cascade second-order stages without feedback (cascade filter) or through the application of negative feedback, multiple loop feedbacks (MLFs). The second is the use of combinations of active and passive components in order to simulate the operation of high-order LC ladders. The main problem related to the testing of both types of high-order filter is the controllability and observability of deeply embedded internal nodes. Controllability and observability can be increased by partitioning the system into accessible blocks. These blocks are first-order or second-order sections representing integrators or biquadratic transfer functions respectively.
The problems encountered in partitioning and testing of any high-order filter can be
summarized as
1. How can the FUT be efficiently decomposed into basic functional blocks?
2. How can each stage be tested?
3. How can two or more consecutive stages be tested to check the effects of loading
and impedance mismatches?
4. How can the faulty stage be isolated and the faulty parts diagnosed?
5. What type of test waveforms should be applied to diagnose the faulty
components?
It is clear from these questions that DfT techniques will strongly depend upon
the configuration and type of the high-order filter. The application of multiplexing,
bypass and OBT DfT techniques to high-order filters will be discussed in the next
section.

6.5.1 Testing of high-order filters using bypassing

The testing of analogue systems normally deals with the verification of the functional
and electrical behaviour of the system under test. The verification process requires
measuring the output signal with respect to the input test signal at several internal
nodes. However, in integrated systems, access to the deep internal input and output nodes is severely limited due to the limited number of chip I/O pins. Several
approaches have been reported [6–14] to enhance external access to deep input and
output nodes of the system. The two basic techniques, namely, multiplexing and
bypassing have been commonly used in digital systems in recent decades. These
techniques are equally applicable to analogue systems, specifically for high-order
filters. There are two major issues related to the accessibility of internal nodes:
1. The isolation of the node from the rest of the system before applying the test
stimulus.
2. The effect on performance of the original system due to the insertion of external
switches for controllability and observability of the subsystem.
In bypass techniques, the internal node is made accessible at the primary input
and output by reconfiguration of all the stages as buffer stages except the stage under
test. The two bypassing approaches, bandwidth broadening and duplicated/switched
opamp, have been discussed for low-order filters in Sections 6.2.1 and 6.2.2. The
switched opamp bypass technique has some advantages over bandwidth broadening
that make it more efficient in the fault diagnosis of high-order filters [5]. The switched
opamp cell avoids the back-driving effect and can reduce the impact of the extra

components. The basic block for a switched opamp is illustrated in Figure 6.7. It has two operation modes defined by a digital mode control input T/F. At logic zero, the opamp operates normally and the circuit under test behaves as a filter with very small performance degradation. When T/F has a value of one, the analogue block acts as a buffer, passing the input signal to the output of the block. The implementation of the bypass scheme using switched opamps for the nth-order filter is shown in Figure 6.21.

Figure 6.21 Testable nth-order filter using bypass strategy based on switched opamp
The testable nth-order filter based on switched opamps is easily divided into separate analogue blocks, each block being a first- or second-order filter. To test the ith block, all blocks except the block under test (BUT) are put into test mode, operating as buffers. The test signal at the input of the system enters the input node of the BUT via the buffer stages and the output node of the BUT is then observed at the primary output of the system through subsequent buffer stages. The only block operating as a first- or second-order filter is the BUT. Therefore, the input to the BUT will be equal to the primary input of the filter, that is

$$V_{i1} = V_{i2} = \cdots = V_{in} \qquad (6.46)$$

where i = 1, 2, 3, …, n. The output of the BUT is equal to the primary output of the filter:

$$V_i = V_{i+1} = \cdots = V_{out} \qquad (6.47)$$

In Figure 6.21, although we did not show the coupling and feedback between different stages, the test method is also suitable for MLF structures.
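A minimal behavioural sketch of the bypass principle (the block responses and the selected index are illustrative assumptions, not from the text): every block except the BUT is modelled as a unity-gain buffer, so the primary output reduces to the response of the BUT alone:

```python
import numpy as np

def chain_response(blocks, but_index, x):
    """Propagate a test signal through an n-block chain in which every
    block except the one under test (BUT) is switched to buffer mode
    (identity); only the BUT applies its own response function."""
    y = x
    for i, block in enumerate(blocks):
        y = block(y) if i == but_index else y   # buffers pass the signal through
    return y

# Illustrative stage behaviours acting on sample arrays (not from the text)
blocks = [lambda v: 0.5*v, lambda v: v - np.roll(v, 1), lambda v: 2.0*v]
x = np.sin(2*np.pi*0.01*np.arange(256))

# With block 2 selected, the primary output equals block 2's own response
y = chain_response(blocks, but_index=2, x=x)
assert np.allclose(y, 2.0*x)
```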

6.5.2 Testing of high-order cascade filters using multiplexing

The cascade connection of second-order sections is the most popular and useful method for the realization of high-order filter functions. The testing of the cascade system requires the controllability and observability of internal nodes of the filter. The controllability and observability can be increased by partitioning the system into accessible blocks. The cascade filter structure can be easily divided into blocks of second-order sections representing biquadratic transfer functions. The programmable-biquad-based DfT architecture of a cascade filter is shown in Figure 6.22. An analogue multiplexer (AMUX) can be used to select the biquads with a minimum impact on normal filter operation. The input test signal is applied simultaneously to the selected biquad and a programmable biquad [7, 15, 33, 34].

Figure 6.22 Block diagram of programmable biquad multiplexing test structure for cascade filter

The control logic will programme the programmable biquad with the same transfer
function as the biquad under test. A programmable biquad is a universal biquadratic
section that can implement any of the basic filter types by electrical programming.
The comparator circuit compares the responses of the biquads to generate an error
signal. The system biquad will be considered fault free if the error signal lies inside
the comparison window.
The testable cascade filter structure in Figure 6.22 consists of the FUT, programmable biquad, comparator and control logic. The input multiplexer Si1 to Sin
connects the different biquad inputs to the programmable biquad input and the output
multiplexer, So1 to Son , connects the output node of each biquad to the comparator.
A set of switches, Sc2 to Scn , connect and disconnect each biquad output to the next
biquad input. An additional set of switches, St2 to Stn , act as a demultiplexer able to
distribute the input signal to the different biquad input nodes. The control logic is a
finite sequential machine that controls the operational modes as well as configuring
the programmable biquad according to the requirements of the biquad under test.
The DfT procedure has two operating modes, normal/filter mode and test mode. The
test mode of operation is further divided into two sub-modes, online test and offline
test. In online test mode, testing of the selected biquad is carried out during normal
operation of the filter, using normal signals rather than signals generated specifically
for testing. When working in online test mode, the control logic can connect the input
of any biquad in the cascade to the programmable biquad's input as well as the same biquad's output to the comparator input. The control logic also programmes the programmable biquad to implement the same transfer function as the selected biquad.
We can perform functional comparison between the outputs of the selected biquad
and the programmed biquad with a margin range. If the selected biquad is fault free
then the comparator output will lie between the given tolerance limits, since the same
input signal is applied to the input of both biquads.
When the offline test mode is invoked, switches Scj split the filter into biquad stages
and the input is selectively applied to one of them and to the programmable biquad.
The control logic connects the output of the biquad under test to the comparator and
the comparator compares this output to the programmable biquad output for the same
input signal. The error signal from the comparator output will indicate the faulty



behaviour of the biquad under test, since the programmable biquad is programmed to
implement the same transfer function. The control logic selects one by one the biquad
stages of the filter and performs the comparison with the programmed biquad.
The above method is also suitable for testing of ladder-based filters [8]. For testing,
a ladder filter needs to be partitioned into second-order sections, which may not be as
straightforward as the cascade structure due to coupling. The programmable biquad
also needs modification to solve the stability problem due to the infinite Q of some
sections.
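A behavioural sketch of the comparison step (the window threshold and signals are assumed figures, not from the text): the error between the biquad under test and the programmed reference biquad is checked against a tolerance window:

```python
import numpy as np

def window_compare(v_but, v_ref, window=0.01):
    """Emulate the comparator of Figure 6.22: the biquad under test is
    declared fault free when the error signal stays inside the
    comparison window (an assumed +/-10 mV here) for the whole record."""
    error = np.asarray(v_but) - np.asarray(v_ref)
    return bool(np.max(np.abs(error)) <= window), error

# Illustrative responses: reference biquad plus small mismatch noise
t = np.linspace(0, 1e-3, 1000)
v_ref = 0.5*np.sin(2*np.pi*5e3*t)
v_but = v_ref + 0.002*np.random.randn(t.size)   # ~2 mV rms deviation

fault_free, _ = window_compare(v_but, v_ref)
print("fault free" if fault_free else "faulty")
```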

6.5.3 Test of MLF OTA-C filters using multiplexing

The DfT based on multiplexing can be directly applied to high-order MLF OTA-C
filters. Testability of a filter is defined as the controllability and observability of significant waveforms within the multi-stage filter structure. The significant waveforms
are the input/output signals of individual stages in the high-order filter configurations. The high-order filter can be divided into integrators using extra MOS switches
between input and output of two consecutive stages. The responses from each stage
of an analogue multi-stage filter are compared with the correct response to determine
whether it is faulty or fault free. The implementation of the multiplexing-based DfT
method requires the following modifications to the original filter and test steps:
1. Insert MOS switches to stage i, 1 ≤ i ≤ n, and define the controllable waveforms necessary in both the normal and test mode.
2. Input/test signal is connected to the filter stages through a demultiplexer and
the output of the respective stage or the filter is observed through a multiplexer.
3. Set the MOS switches to logic 1 and logic 0 for normal and test mode
respectively.
4. The overall circuit topology and transfer function are used to generate the
necessary test waveforms to test each stage. A simulation tool then provides
the expected output waveforms, gain and phase.
5. Interpret the simulated results to recognize and isolate the fault.
The value of MOS switch ON resistance is chosen such that it does not affect
the performance of the original filter. The test methodology is straightforward. The
modified circuit is first tested in normal mode. If any malfunction or failure occurs,
the test mode is activated and all the individual stages are tested one by one to isolate
the faulty stage. Then the faulty stage must be further investigated to locate the fault.
The general MLF OTA-C filter is shown in Figure 6.23. The MLF OTA-C filter is composed of a feed-forward network of integrators connected in cascade
and a feedback network that contains pure wire connections only for canonical
realizations [28].
The feedback network may be described as
$$V_{fi} = \sum_{j=i}^{n} f_{ij}\,V_{oj} \qquad (6.48)$$

Figure 6.23 MLF nth-order OTA-C filter model


Figure 6.24 Testable multi-stage MLF OTA-C filter structure

where fij is the voltage feedback coefficient from the output of integrator j to the input of integrator i. The feedback coefficient fij can have zero or non-zero values depending upon the feedback. Equation (6.48) can be written in matrix form:

$$[V_f] = [F]\,[V_o] \qquad (6.49)$$

where $[V_o] = [V_{o1}\ V_{o2}\ \cdots\ V_{on}]^t$ is the vector of integrator output voltages, $[V_f] = [V_{f1}\ V_{f2}\ \cdots\ V_{fn}]^t$ is the vector of feedback voltages to the inverting input terminals of the integrators and $[F] = [f_{ij}]_{n \times n}$ is the feedback coefficient matrix. The different feedback coefficients will result in different filter structures. Thus, the feedback network classifies the filter structures.
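As a small numerical sketch of Equation (6.49) (the feedback matrix below is an arbitrary illustration rather than a specific filter from the text), the feedback voltages follow from a single matrix-vector product:

```python
import numpy as np

# Illustrative 4th-order example: feedback coefficients f_ij (not from the text).
# Row i collects the feedback summed into integrator i from outputs j >= i.
F = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.3],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

Vo = np.array([0.2, -0.1, 0.05, 0.4])   # integrator output voltages

Vf = F @ Vo                              # Equation (6.49): [Vf] = [F][Vo]
print(Vf)
```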
The modified MLF OTA-C filter using the multiplexing DfT technique is shown
in Figure 6.24. The operation of the modified MLF filter, in normal and test mode is
given in Table 6.4.
In normal operating mode, control switches S2 are closed and S1 open, while
the address pins from A0 to Am are at level 0. The fault-free circuit will perform the

Table 6.4 Testable MLF OTA-C filter

S1   S2   Am … A1 A0   Mode     Operation
0    1    0  … 0  0    Normal   Filter
1    0    0  … 0  1    Test     Integrator T1
1    0    0  … 1  0    Test     Integrator T2
…    …    …            …        …
1    0    1  … 1  1    Test     Integrator Tn

same function as the original filter. In cases where filter performance does not meet specification, the OTA-C stages must be investigated separately. The fault diagnosis method involves the following steps:
1. Activate the test mode of operation by closing switches S1 and opening switches S2, and set the multiplexer address inputs A0, …, Am to select an OTA-C stage for testing.
2. Apply the input test signal to the selected OTA-C stage through the analogue demultiplexer.
3. Observe the output of the selected OTA-C stage at the output terminal of the AMUX. The function of each individual OTA-C stage is an ideal integrator. The voltage transfer function of the stage can be defined as

$$H(s) = \frac{g_{mi}}{sC_i} \qquad (6.50)$$

where i is the index of the OTA-C stage and gmi and Ci are the transconductance and capacitance of the related OTA and capacitor. Therefore, the output of the fault-free OTA-C stage will be a triangular wave in response to a square-wave input, as illustrated in the sketch below.
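The fault-free behaviour expected in step 3 can be sketched numerically (gm, C and stimulus parameters are assumptions, not from the text): integrating a square wave through H(s) = gm/(sC) yields a triangular wave with slope ±gmV/C:

```python
import numpy as np

# Assumed stage parameters and square-wave stimulus (illustrative only)
gm, C = 100e-6, 100e-12            # transconductance (S) and capacitance (F)
fs, f_in, V = 10e6, 10e3, 0.1      # simulation rate, input frequency, amplitude

t = np.arange(0, 2/f_in, 1/fs)
square = V*np.sign(np.sin(2*np.pi*f_in*t))

# Ideal integrator H(s) = gm/(sC): vout(t) = (gm/C) * integral of vin
triangle = (gm/C)*np.cumsum(square)/fs

# Fault-free check: the output slope magnitude equals gm*V/C everywhere
slopes = np.diff(triangle)*fs
assert np.allclose(np.abs(slopes), gm*V/C)
```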

6.5.4 OBT structures for high-order OTA-C filters

OBT structures for high-order OTA-C filters are based on the decomposition of the
filter into functional building blocks. The partitioning of the filter should be made
such that each individual block represents a biquadratic transfer function. Then these
blocks can easily be converted into oscillators by establishing the oscillation condition
in their transfer functions. During test mode operation, each block will oscillate at
a frequency that is a function of its component values and transconductance of the
OTAs. Deviations in the oscillation frequency from the expected frequency indicate
faulty behaviour of the components in the block. The sensitivity of the oscillation
frequency with respect to the variations of the component parameters will determine
the detectable range of the fault.

Figure 6.25 Testable high-order OTA-C filter structures based on OBT: (a) cascade structure, (b) IFLF structure and (c) LF structure

Commonly used design approaches for high-order OTA-C filters are based on
cascade and MLF structures. Choice of the feedback network can result in the cascade,
inverse follow-the-leader feedback (IFLF) and leap-frog (LF) configurations [29].
These types of multi-stage (high-order) OTA-C filter structures can be easily modified
to implement the oscillation-based DfT technique as shown in Figure 6.25, where Sn
and Sp are the NMOS and PMOS transistor switches respectively.
Implementation of the oscillation-based DfT method requires the following
modifications to the original filter:
1. Decomposition of the filter into the biquadratic stages.
2. Isolation of biquadratic stages from each other.



3. Reconfiguration of the feedback network of each biquadratic stage to establish
the oscillation condition in its transfer function.
All these modifications can be carried out by insertion of MOS transistor switches
into the original filter circuits. Therefore, the MOS transistors are the key components
in OBT testing and provide the testable structure of the high-order OTA-C filter. The
accuracy of the transistors directly affects the accuracy and functionality of the FUT.
The most important characteristics of a transistor are the ON resistance, the OFF
resistance and the values of parasitic capacitors. The effective resistance of a MOS
transistor operating in the linear region may be expressed as
$$R_{DS} = \frac{L}{kW}\,(V_{GS} - V_T)^{-1} \qquad (6.51)$$

where W and L are the channel width and length respectively, VGS and VT are the gate-source bias voltage and threshold voltage respectively, and k = μnCox, where μn is the electron mobility and Cox is the oxide capacitance per unit area.
A larger aspect ratio will reduce the series resistance. However, the parasitic
capacitance is approximately proportional to the product of width and length. Therefore, choosing an optimum aspect ratio and a sensible point in the signal paths for
switch insertion will ensure a minimal impact on the performance of the filter. The
modified filter circuits shown in Figure 6.25 require two types of switches: switches in the signal path to divide the filter into biquadratic blocks and switches in the feedback path to establish oscillation conditions. The switches in the signal path must be
realized using MOS transistors with minimum values of the ON resistance, whereas
the other switches can be designed for minimum size.
The modified filter circuit has two modes of operation, normal mode and test mode. In
normal mode of operation all switches designated Sp are closed whereas the switches
designated Sn are open and the circuit will perform the original filter functions. When
the test mode is invoked Sp switches are opened and Sn switches closed. Switches
Sp split the filter into biquad stages and switches Sn convert these biquad stages into
oscillators. The oscillation frequency of the oscillator is then

$$\omega_{0i} = \sqrt{\frac{g_ig_{i+1}}{C_iC_{i+1}}}, \qquad i \text{ odd}, \ i = 1, 3, 5, \ldots, n-1 \qquad (6.52)$$

where n is the order of the filter and is even. When n is odd, the last integrator can be combined with the (n−1)th integrator to form an oscillator, although the (n−1)th integrator has already been tested. The condition of oscillation for two-integrator loop
biquadratic filters is discussed in Section 6.4.
The test and diagnosis procedure of OBT is straightforward. The FUT is first
tested in normal mode and the cut-off frequency of the FUT measured. The test mode
will be activated if the cut-off frequency deviates beyond the given tolerance band. In
test mode, the high-order filter is decomposed into individual biquad oscillators and
individual oscillator frequencies are measured to isolate the faulty stage. Comparison
between the frequency evaluated from Equation (6.52) and the measured frequency of
the corresponding oscillator stage identifies the faulty stage of the FUT. The deviation



from the fault-free frequency will identify the fault, catastrophic or parametric, and
the location of the fault in the stage.
The proposed design can be implemented for any type of high-order OTA-C filter
with little impact on the performance of the original filter. The area overhead depends
upon the type and order of the filter. Implementation for n stages adds [nSn + (n−1)Sp] extra MOS transistors for the cascade-type structure and [(n+1)Sn + nSp] for the
IFLF-type structure respectively. The area overhead of the LF-type structure is the
same as the cascade-type since only one feedback loop requires changing for both
cases. The percentages of area overhead are calculated for cascade, LF and IFLF
structures as follows
$$\text{Overhead for LF and cascade} = \frac{nA_n + (n-1)A_p}{A} \times 100\% \qquad (6.53)$$

$$\text{Overhead for IFLF} = \frac{(n+1)A_n + nA_p}{A} \times 100\% \qquad (6.54)$$

where A is the original circuit area, An is the area of switch Sn and Ap is the area of switch Sp.
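A quick numerical sketch of Equations (6.53) and (6.54) (the areas are assumed figures for illustration, not from the text):

```python
def overhead_percent(n, A, An, Ap, iflf=False):
    """Switch-area overhead from Equations (6.53)/(6.54): LF and cascade
    need n Sn plus (n-1) Sp switches; IFLF needs (n+1) Sn plus n Sp."""
    if iflf:
        extra = (n + 1)*An + n*Ap      # Equation (6.54)
    else:
        extra = n*An + (n - 1)*Ap      # Equation (6.53)
    return 100.0*extra/A

# Illustrative 6th-order filter: core area 0.5 mm^2, switch areas in mm^2
print(overhead_percent(n=6, A=0.5, An=0.0004, Ap=0.0001, iflf=False))
print(overhead_percent(n=6, A=0.5, An=0.0004, Ap=0.0001, iflf=True))
```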

6.6 Summary

This chapter has been concerned with DfT of, and test techniques for, analogue integrated filters. Many different testable filter structures have been presented. Typical DfT techniques, such as bypassing, multiplexing and OBT, have been discussed. The most popular filters, such as active-RC, OTA-C and SC filters, have been covered. Testing of both low-order and high-order filters has been addressed. DfT of OTA-C filters has been investigated in particular, because this topic has not been as well studied as the testing of active-RC and SC filters. Many of the test concepts, structures and methods described in the chapter are also suitable for other analogue circuits, although they may be most useful for analogue filters as demonstrated in the chapter.

6.7 References

1 Wey, C.L.: 'Built-in self-test structure for analogue circuit fault diagnosis', IEEE Transactions on Instrumentation and Measurement, 1990;39(3):517–21
2 Soma, M.: 'A design-for-test methodology for active analogue filters', Proceedings of IEEE International Test Conference, Washington, DC, September 1990, pp. 183–92
3 Soma, M., Kolarik, V.: 'A design-for-test technique for switched-capacitor filters', Proceedings of IEEE VLSI Test Symposium, Princeton, NJ, April 1994, pp. 42–7
4 Vazquez, D., Rueda, A., Huertas, J.L., Richardson, A.M.D.: 'Practical DfT strategy for fault diagnosis in active analogue filters', Electronics Letters, July 1995;31(15):1221–2
5 Vazquez, D., Rueda, A., Huertas, J.L., Peralias, E.: 'A high-Q bandpass fully differential SC filter with enhanced testability', IEEE Journal of Solid State Circuits, 1998;33(7):976–86
6 Wagner, K.D., William, T.W.: 'Design for testability of mixed signal integrated circuits', Proceedings of IEEE International Test Conference, Washington, DC, September 1988, pp. 823–9
7 Huertas, J.L., Rueda, A., Vazquez, D.: 'Testable switched-capacitor filters', IEEE Journal of Solid State Circuits, 1993;28(7):719–24
8 Vazquez, D., Rueda, A., Huertas, J.L.: 'A solution for the on-line test of analogue ladder filters', Proceedings of IEEE VLSI Test Symposium, Princeton, NJ, April 1995, pp. 48–53
9 Hsu, C.-C., Feng, W.-S.: 'Testable design of multiple-stage OTA-C filters', IEEE Transactions on Instrumentation and Measurement, 2000;49(5):929–34
10 Arabi, K., Kaminska, B.L.: 'Testing analogue and mixed-signal integrated circuits using oscillation-test method', IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1997;16:745–53
11 Arabi, K., Kaminska, B.L.: 'Oscillation-test methodology for low-cost testing of active analogue filters', IEEE Transactions on Instrumentation and Measurement, 1999;48(4):798–806
12 Zarnik, M.S., Novak, F., Macek, S.: 'Design of oscillation-based test structure for active RC filters', IEE Proceedings – Circuits, Devices and Systems, 2000;147(5):297–302
13 Hasan, M., Sun, Y.: 'Oscillation-based test structure and method of continuous-time OTA-C filters', Proceedings of the IEEE International Conference on Electronics, Circuits and Systems, Nice, France, 2006, pp. 98–101
14 Hasan, M., Sun, Y.: 'Design for testability of KHN OTA-C filters using oscillation-based test', Proceedings of IEEE Asia Pacific Conference on Circuits and Systems, Singapore, 2006, pp. 904–7
15 Huertas, G., Vazquez, D., Rueda, A., Huertas, J.L.: 'Effective oscillation-based test for application to a DTMF filter bank', IEEE International Test Conference, Atlantic City, NJ, September 1999, pp. 549–55
16 Sun, Y. (ed.): Design of High-Frequency Integrated Analogue Filters (IEE Press, UK, 2002)
17 Moritz, J., Sun, Y.: 'Design and tuning of continuous-time integrated filters' in Sun, Y. (ed.), Wireless Communication Circuits and Systems (IEE Press, UK, 2004), ch. 6
18 Sallen, R.P., Key, E.L.: 'A practical method of designing RC active filters', IRE Transactions on Circuit Theory, 1955;2:74–85
19 Kerwin, W.J., Huelsman, L.P., Newcomb, R.W.: 'State-variable synthesis for insensitive integrated circuit transfer functions', IEEE Journal of Solid State Circuits, 1967;2(3):87–92
20 Tow, J.: 'Active RC filters – a state-space realization', Proceedings of the IEEE, 1968;3:1137–9
21 Thomas, L.C.: 'The biquad: part I – some practical design considerations', IEEE Transactions on Circuit Theory, 1971;18:350–7
22 Banu, M., Tsividis, Y.: 'MOSFET-C filters' in Sun, Y. (ed.) Design of High-Frequency Integrated Analogue Filters (IEE Press, UK, 2002), ch. 2
23 Deliyannis, T., Sun, Y., Fidler, J.K.: Continuous-Time Active Filter Design (CRC Press, Boca Raton, FL, 1999)
24 Sun, Y.: 'Architecture and design of OTA/gm-C filters' in Sun, Y. (ed.) Design of High-Frequency Integrated Analogue Filters (IEE Press, UK, 2002), ch. 1
25 Sanchez-Sinencio, E., Geiger, R.L., Nevarez-Lozano, H.: 'Generation of continuous-time two integrator loop OTA filter structures', IEEE Transactions on Circuits and Systems, 1988;35(8):936–46
26 Rodriguez-Vazquez, A., Linares-Barranco, B., Huertas, J.L., Sanchez-Sinencio, E.: 'On the design of voltage-controlled sinusoidal oscillators using OTAs', IEEE Transactions on Circuits and Systems, 1990;37(2):198–211
27 Sun, Y., Fidler, J.K.: 'Structure generation and design of multiple loop feedback OTA-grounded capacitor filters', IEEE Transactions on Circuits and Systems – I, 1997;44(1):1–11
28 Sun, Y., Fidler, J.K.: 'OTA-C realization of general high-order transfer functions', Electronics Letters, 1993;29(12):1057–8
29 Sun, Y., Fidler, J.K.: 'Synthesis and performance analysis of universal minimum component integrator-based IFLF OTA-grounded capacitor filter', IEE Proceedings – Circuits, Devices and Systems, 1996;143(2):107–14
30 Sun, Y.: 'Synthesis of leap-frog multiple loop feedback OTA-C filters', IEEE Transactions on Circuits and Systems – II, 2006;53(9):961–5
31 Unbehauen, R., Cichocki, A.: MOS Switched-Capacitor and Continuous-Time Integrated Circuits and Systems (Springer-Verlag, Berlin, 1989)
32 Allen, P.E., Sanchez-Sinencio, E.: Switched-Capacitor Circuits (Van Nostrand Reinhold, New York, 1984)
33 Fleisher, P.E., Laker, K.R.: 'A family of switched capacitor biquad building blocks', Bell System Technical Journal, 1979;58:2235–69
34 Fleisher, P.E., Ganesan, A., Laker, K.R.: 'A switched capacitor oscillator with precision amplitude control and guaranteed start-up', IEEE Journal of Solid State Circuits, 1985;20(2):641–7

Chapter 7

Test of A/D converters

From converter characteristics to built-in self-test proposals

Andreas Lechner and Andrew Richardson

7.1 Introduction

Analogue-to-digital (A/D) converters are mixed-signal functions that are frequently used to create an interface between sensing and actuation devices in industrial control, transportation, consumer electronics and instrumentation industries. They are also used in the conversion of analogue voice and video data in the computing and communications communities. In control applications, the trend is towards medium speed (10–100 kHz) and high resolution (>16 bits) with test requirements focused towards linearity testing. In communications applications, trends are similar; however, dynamic performance tends to be critical, especially in voice processing applications. Consumer goods are another important application where a high conversion speed (up to hundreds of megahertz) and low-to-medium resolution (8–12 bits) are the norm.
Of interest therefore for the test engineer is the optimization of test programs for verification of the key application specifications. Associated problems include the range of specifications requiring verification and the very different implementations used (for example, sigma-delta (ΣΔ) versus flash). Today, most test strategies are based on digital signal processing (DSP) testing where a known stimulus is injected into the device and its digital output processed using, for example, fast Fourier transform (FFT) techniques to extract dynamic specifications, such as total harmonic distortion (THD) and inter-modulation distortion. The cost of implementing these measurements is becoming excessive. As an example, measuring the signal-to-noise and distortion ratio (SINAD) of a converter with a 90 dB resolution will typically require around 8000 samples, which translates to around 0.1–0.2 s of test time when acquisition and processing tasks are taken into account. THD measurements to an 80 dB resolution will require double this amount of time. Taking into account that in many cases



these measurements must be carried out for two or more gain settings and possible input signal amplitudes, the test time can rapidly grow to several seconds. These estimates should be considered in the context of total test time for a system-on-a-chip (SoC) or mixed-signal integrated circuit (IC), which ideally needs to be below a second. Note also that the converter will generally occupy only a small area of the device. These issues illustrate the importance of utilizing the best possible methods available for converter testing. This chapter will present not only the conventional techniques for testing converters but also a selection of new ideas and test implementations that target both test time reduction and a lower demand for high-cost analogue test equipment.

7.2 A/D conversion

The process of A/D conversion is a transformation of a continuous analogue input waveform into a sequence of discrete digital words. An A/D converter usually requires a sample-and-hold operation at the front end of the signal path to ensure that the analogue signal remains constant during the conversion process. The conversion process itself varies between different A/D converter architectures, which include serial, folding, ΣΔ and interpolating A/D converters [1, 2].
The converter's transfer characteristics are commonly represented by a staircase function. Figure 7.1 illustrates the transfer function of an ideal N-bit A/D converter, where N is the converter's resolution. This corresponds to the number of digitized bits. For linear converters, the full-scale (FS) input range from Vmin to Vmax is segmented into 2^N equally sized bins (so-called code bins) of nominal width Q, also frequently
Figure 7.1 Ideal N-bit A/D converter transfer function

Figure 7.2 A/D conversion quantization error

referred to as 1 LSB (least-significant bit):

$$Q = \frac{FS}{2^N} = \frac{V_{max} - V_{min}}{2^N} = 1\ \text{LSB} \qquad (7.1)$$

The ideal code bin width Q, usually given in volts, may also be given as a percentage of the full-scale range. By standard convention, the first code bin starts at voltage Vmin and is numbered as 0, followed by the first code transition level T[1] to code bin 1, up to the last code transition level T[2^N − 1] to the highest code bin [2^N − 1], which reaches the maximum converter input voltage Vmax [3]. In the ideal case, all code bin centres fall onto a straight line with equidistant code transition levels, as illustrated in Figure 7.1. The analogue equivalent of a digital A/D converter output code k corresponds to the location of the particular ideal code bin centre Vk on the horizontal axis.
The quantization process itself introduces an error corresponding to the difference between the A/D converter's analogue input and the equivalent analogue value of its output, which is depicted over the full-scale range in Figure 7.2. With a root mean square (RMS) value of Q/√12 for a uniform probability distribution between −Q/2 and Q/2, and an RMS value of FS/(2√2) for a full-scale input sine wave, the ideal or theoretical signal-to-noise ratio (SNR) for an N-bit converter can be given in decibels as

$$\text{SNR}_{ideal} = 10\log_{10}\left[\frac{(FS/2\sqrt{2})^2}{(Q/\sqrt{12})^2}\right] = 10\log_{10}\left[2^{2N}\,\frac{12}{8}\right] = 20\log_{10}[2^N] + 20\log_{10}\sqrt{\frac{12}{8}} = 6.02N + 1.76 \qquad (7.2)$$
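Equation (7.2) is easily checked numerically; a minimal sketch:

```python
import math

def ideal_snr_db(N, FS=1.0):
    """Ideal SNR of an N-bit converter for a full-scale sine input,
    computed from the RMS ratio in Equation (7.2)."""
    Q = FS/2**N                        # code bin width, Equation (7.1)
    rms_signal = FS/(2*math.sqrt(2))   # full-scale sine RMS
    rms_noise = Q/math.sqrt(12)        # quantization noise RMS
    return 20*math.log10(rms_signal/rms_noise)

# Agrees with 6.02N + 1.76 to two decimal places
for N in (8, 12, 16):
    print(N, round(ideal_snr_db(N), 2), round(6.02*N + 1.76, 2))
```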
For real A/D converters, further errors affect the conversion accuracy and converter
performance. The following sections will introduce the main static and dynamic
performance parameters that are usually verified to meet the specifications in production testing. Standardized performance parameters associated with the A/D converter
transient response and frequency response can be found in Reference 3.

7.2.1 Static A/D converter performance parameters

Apart from the systematic quantization error due to finite converter resolution, A/D converters have further static errors, mainly due to deviations in transition levels from the ideal case, and are affected by internally and externally generated noise. One of the characteristic parameters that can indicate conversion errors is the real code widths. A particular code bin width, W[k], can be determined from its adjacent code transition levels T[k] and T[k+1], as indicated in Figure 7.1:

$$W[k] = T[k+1] - T[k] \qquad \text{for } 1 \le k \le 2^N - 2 \qquad (7.3)$$

where code transition level T[k] corresponds to the analogue input voltage where half the digital outputs are greater than or equal to code k, while the other half are below code k.
In addition to the assessment of converter performance from transition levels and code bin widths, the real transfer function may also be approximated by a straight line for comparison with the ideal case. The straight line can be determined through a linear regression computation where the regions close to the upper and lower end of the transfer function are ignored to avoid data corruption due to overdriving the converter (input voltage exceeds the real full-scale range). The following main static performance parameters are introduced and described below: gain and offset, differential non-linearity (DNL) and integral non-linearity (INL).
The basic effect of offset in A/D converters is frequently described as a uniform lateral displacement of the transfer function, while a deviation from ideal gain corresponds to a difference in the transfer function's slope after offset compensation. With regard to performance verification and test, offset and gain can be defined as two parameters, VOS and G, in a straight-line fit for the real code transition levels, as given on the left-hand side in Equation (7.4) [3, 4]. The values for offset and gain can be determined through an optimization procedure aiming at minimum matching error ε(k) between gain- and offset-adjusted real transition levels and the ideal values (right-hand side of Equation (7.4)):

$$G \cdot T[k] + V_{OS} + \varepsilon[k] = Q \cdot (k-1) + T[1]_{ideal} \qquad \text{for } 1 \le k \le 2^N - 1 \qquad (7.4)$$

where G is the gain, VOS the offset, Q the ideal code bin width, T[1]ideal the ideal first transition level and T[k] the real transition level between codes k and (k−1). The value for VOS corresponds to the analogue equivalent of the offset effect observed at the output.
However, different optimization techniques yield slightly different values for offset, gain and the remaining matching error. For example, the matching may be achieved through mean squared value minimization for ε(k) for all k [3]; alternatively, the maximum of the matching errors may be reduced. Simpler offset and gain measurements are often based on targeting an exact match in Equation (7.4) for the first and last code transition levels, T[1] and T[2^N − 1] (ε(1) and ε(2^N − 1) equal to zero), referred to as terminal-based offset and gain. An example for this case is illustrated in Figure 7.3. An alternative methodology is to employ the straight-line approximation of the real transfer function mentioned above. Offset and gain values
Figure 7.3 Terminal-based DNL and INL in A/D converter transfer function

are then determined through matching this real straight line with the ideal straight line, which again can deviate slightly from the optimization process results [3].
Differential non-linearity is a measure of the deviation of the gain- and offset-corrected real code widths from the ideal value. DNL values are given in LSBs for the codes 1 to (2^N − 2) as a function of k as

$$\text{DNL}[k] = \frac{W[k] - Q}{Q} \qquad \text{for } 1 \le k \le 2^N - 2 \qquad (7.5)$$

where W[k] is the width of code k determined from the gain- and offset-corrected code transition levels as given in Equation (7.3) and Q is the ideal code bin width. Note that neither the real code bin widths nor the ideal value are defined at either end of the transfer function. As an example, a DNL of approximately +1/4 LSB in code m is included in Figure 7.3. The absolute or maximum DNL corresponds to the maximum value of |DNL[k]| over the range of k given in Equation (7.5). A value of −1 for DNL[k] corresponds to a missing code.
Integral non-linearity quantifies the absolute deviation of a gain- and offset-compensated transfer curve from the ideal case. INL values are given in LSBs at the code transition levels as a function of k by

$$\text{INL}[k] = \frac{\varepsilon[k]}{Q} \qquad \text{for } 1 \le k \le 2^N - 2 \qquad (7.6)$$



where ε(k) is the matching error from Equation (7.4) and Q is the ideal code bin width, both given in volts. Alternatively, INL[k] values can also be given as a percentage of the full-scale input range. As an example, Figure 7.3 also depicts an INL of approximately +2/3 LSB in code n. Plots of INL[k] values over k provide useful information on the converter performance, as the overall shape of the INL[k] curve enables some initial conclusions on the predominance of even- or odd-order harmonics [5]. However, the exact values for the INL do depend on the type of gain and offset correction methodology applied, which should be documented. The absolute or maximum INL, usually provided as an A/D converter specification, corresponds to the maximum value of |INL[k]| over the range of k given in Equation (7.6).
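A small numpy sketch of Equations (7.3), (7.5) and (7.6) (the transition levels form a made-up 3-bit example, and ideal gain and offset are assumed): DNL follows from the code widths and INL from the matching error against the ideal levels:

```python
import numpy as np

def dnl_inl(T, Q, T1_ideal):
    """DNL/INL in LSBs from gain- and offset-corrected transition levels
    T[1]..T[2^N - 1] (array index 0 holds T[1]); see Equations (7.3),
    (7.5) and (7.6)."""
    W = np.diff(T)                      # code widths, Equation (7.3)
    dnl = (W - Q)/Q                     # Equation (7.5)
    ideal_T = T1_ideal + Q*np.arange(T.size)
    inl = (ideal_T - T)/Q               # epsilon(k)/Q, Equation (7.6)
    return dnl, inl

# Made-up 3-bit example: ideal transitions at 1/8, 2/8, ... with small errors
Q = 1/8
T = np.array([0.125, 0.26, 0.37, 0.50, 0.63, 0.745, 0.875])
dnl, inl = dnl_inl(T, Q, T1_ideal=0.125)
print("max |DNL| =", np.abs(dnl).max(), "max |INL| =", np.abs(inl).max())
```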
Some additional performance characteristics are defined for A/D converters. An A/D converter is said to be monotonic if its output is consistently increasing or decreasing for a consistently increasing or decreasing input. If the changes at the input and output are in the same direction, the converter is non-inverting. When the changes at the input and output are in opposite directions, the converter is inverting. An A/D converter can also be affected by hysteresis. This is a condition where the computation of the transfer function yields different results for an increasing and a decreasing input stimulus that are beyond normal measurement uncertainties. For more details see Reference 3.

7.2.2 Dynamic A/D converter performance parameters

A/D converter performance is also expressed in the frequency domain. This section introduces the main dynamic performance parameters associated with the converter's output spectrum, while the determination of their values in converter testing is described in Section 7.3.4.
Figure 7.4 illustrates an A/D converter output spectrum, a plot of frequency component magnitude over a range of frequency bins. Such a spectrum can be obtained
Figure 7.4 A/D converter output spectrum



from a spectrum analyser or a discrete Fourier transform (DFT) [6] through analysis of the A/D converter response to a spectrally pure sine-wave input of frequency fi. The original input signal can be identified as the fundamental of amplitude A1. The second to kth harmonic distortion components, AH2 to AHk, occur at non-aliased frequencies that are integer multiples of fi. In addition, non-harmonic or spurious components, such as ASi in Figure 7.4, can be seen at frequencies other than the input signal or harmonic frequencies. The main dynamic performance parameters given below can be extracted from the output spectrum in the form of ratios of RMS amplitudes of particular spectral components, which also relates to signal power ratios. The calculation of these from the results of a DFT is outlined in Section 7.3.4.1. Note that the input signal frequency and amplitude, and in some cases the sampling frequency, have an impact on the actual performance parameter value and have to be provided with test results and performance specifications.
The SINAD relates the input signal to the noise including harmonics. The SINAD can be determined from RMS values for the input signal and the total noise (including harmonics), which also relates to the power, P, carried in the corresponding signal component. The SINAD is given in decibels as

$$\text{SINAD} = 20\log_{10}\left[\frac{\text{RMS(signal)}}{\text{RMS(total noise)}}\right] = 10\log_{10}\left[\frac{P_{signal}}{P_{total\ noise}}\right] \qquad (7.7)$$
The effective number of bits (ENOB) compares the performance of a real A/D converter to the ideal case with regard to noise [7]. The ENOB is determined through

$$\text{ENOB} = N - \log_2\left[\frac{\text{RMS(total noise)}}{\text{RMS(ideal noise)}}\right] \qquad (7.8)$$

where N is the number of bits of the real converter. In other words, an ideal A/D converter with a resolution equal to the ENOB determined for a real A/D converter will have the same RMS noise level for the specified input signal amplitude and frequency. The ENOB and SINAD performance parameters can be correlated to each other as analysed in Reference 3.
THD is a measure of the total output signal power contained in the second to kth harmonic components, where k is usually in the range from five to ten (depending on the ratio of the particular harmonic distortion power to the random noise power) [8]. The THD can be determined from RMS values of the input signal and the harmonic components and is commonly expressed as the ratio of the powers in decibels:

$$\text{THD} = 20\log_{10}\left[\frac{\sqrt{\sum_{i=2}^{k} A_{Hi(rms)}^2}}{A_{1(rms)}}\right] = 10\log_{10}\left[\frac{P_{harmonic}}{P_{input}}\right] \qquad (7.9)$$

where A1(rms) is the RMS for the signal and AHi(rms) the RMS for the ith harmonic. THD is given in decibels and usually with respect to a full-scale input (dBFS). Where the THD is given in dBc, the unit is in decibels with respect to a carrier signal of specified amplitude.



The spurious-free dynamic range (SFDR) quantifies the available dynamic range as a ratio of the fundamental amplitude to the amplitude of the largest harmonic or spurious component and is given in decibels:

$$\text{SFDR} = 20\log_{10}\left[\frac{A_1}{\max\{A_{H(max)}, A_{S(max)}\}}\right] \qquad (7.10)$$

where AH(max) and AS(max) are the amplitudes of the largest harmonic component and spurious component, respectively.
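As a rough illustration of Equations (7.7)–(7.10), the following sketch estimates the four parameters from a DFT of a coherently sampled record. It is a simplified single-bin treatment (coherent sampling assumed, spectral leakage ignored), with made-up record parameters:

```python
import numpy as np

def dynamic_params(x, M, J, n_harm=5):
    """Estimate SINAD, ENOB, THD and SFDR from a coherent record of M
    samples containing J input cycles. Simplified: one DFT bin per
    component, leakage assumed negligible."""
    X = np.abs(np.fft.rfft(x))/M
    X[1:] *= 2.0                                   # single-sided amplitudes
    sig = X[J]                                     # fundamental amplitude
    # Harmonic bins, folded back into the first Nyquist zone if aliased
    idx = [(k*J) % M for k in range(2, n_harm + 1)]
    idx = [i if i <= M//2 else M - i for i in idx]
    harm = X[idx]
    nad = X.copy(); nad[0] = 0.0; nad[J] = 0.0     # noise-and-distortion bins
    sinad = 10*np.log10(sig**2/np.sum(nad**2))     # Equation (7.7)
    enob = (sinad - 1.76)/6.02                     # via Equation (7.2)
    thd = 10*np.log10(np.sum(harm**2)/sig**2)      # Equation (7.9)
    sfdr = 20*np.log10(sig/nad.max())              # Equation (7.10)
    return sinad, enob, thd, sfdr

# Assumed test: ideal 12-bit quantization of a coherent full-scale sine
M, J = 4096, 127
n = np.arange(M)
x = np.round(2047*np.sin(2*np.pi*J*n/M))/2048
print([round(v, 2) for v in dynamic_params(x, M, J)])
```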
While the dynamic performance parameters introduced above are essential for an
understanding of A/D converter test methodologies (Section 7.3), an entire range of
further performance parameters is included in the IEEE standard 1241 [3], such as
various SNRs specified for particular bandwidths or for particular noise components.
Furthermore, some performance parameters are defined to assess inter-modulation
distortion in A/D converters with a two-tone or multiple tone sine-wave input.

7.3 A/D converter test approaches

This section introduces A/D converter test methodologies for static and dynamic performance parameter testing. The basic test set-up and other prerequisites are
briefly described in the next section. For further reference, an introduction to production test of ICs, ranging from test methodologies and design-for-test basics to
aspects relating to automatic test equipment (ATE) can be found in Reference 9.
Details on DSP-based testing of analogue and mixed-signal circuits are provided in
References 10 and 11.

7.3.1 Set-up for A/D converter test

The generic test set-up is illustrated in Figure 7.5. In generic terms, a suitable stimulus
supplied by a test source is applied to the A/D converter under test via some type of
test access mechanism. The test stimulus generator (TSG) block corresponds to one
or more sine waves, arbitrary waveform or pulse generator(s) depending on the type
of test to be executed. Generally, the response is captured for processing in a test sink
again facilitating a test access mechanism.
Figure 7.5 A/D converter test set-up


In a conventional A/D converter test set-up, test source and test sink are part of
the external ATE and are centrally controlled. The ATE interfaces with the IC via a
device interface board; functional IC input/output pins and IC-internal interconnects
may be facilitated as a test access mechanism. However, in the majority of cases,
some other means of test access has to be incorporated in the early stages of IC
design due to access restrictions, such as limited pin count or converters being deeply
embedded in a complex SoC. Systematic design methodologies that increase test
access, referred to as design-for-testability (DfT), are standardized at various system
levels. The IEEE standard 1149.1, also known as boundary-scan, supports digital IC
and board level tests [12]. Its extension to analogue and mixed-signal systems, IEEE
standard 1149.4, adds an analogue test bus to increase access to analogue IC pins and
internal nodes [13]. For SoC implementations, an IEEE standard for interfacing to
IC-internal subsystems, so-called embedded cores, and documentation of their test
requirements is expected to be approved in the near future [14].

7.3.2 Capturing the test response

The action of collecting a set of A/D converter output samples and transferring it to the output response analyser (ORA) is commonly referred to as taking a data record. The aim is to accumulate consecutive samples; however, for high-speed converters interfacing restrictions may require output decimation [3]. This is a process in which only every ith sample of a consecutive sequence is recorded at a lower speed than the A/D converter's sampling speed.
On the other hand, the A/D converter maximum sampling frequency restricts the rate at which a waveform can be digitized and therefore the measurement bandwidth. When sampling periodic waveforms, it is generally desirable to record an integer number of waveform periods while not resampling identical points at different cycles. This can be assured by applying coherent sampling, Equation (7.11), where additionally the number of samples in the record, M, and the number of cycles, N, are in the ratio of relative prime numbers [10]:

$$f_i = f_s\,\frac{N}{M} \qquad (7.11)$$

where fs is the sampling frequency and fi is the input waveform frequency.
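A small sketch of choosing a coherent test frequency (the target values are assumptions for illustration): pick the cycle count N nearest the requested tone such that N and M are relatively prime, then set fi by Equation (7.11):

```python
from math import gcd

def coherent_frequency(fs, f_target, M):
    """Choose N (cycles in the record) so that N and M are relatively
    prime and fi = fs*N/M lies closest to the requested input frequency;
    see Equation (7.11)."""
    N = max(1, round(f_target*M/fs))
    while gcd(N, M) != 1:       # step N until relatively prime with M
        N += 1
    return fs*N/M, N

# Assumed set-up: 1 MHz sampling, 4096-sample record, ~10 kHz test tone
fi, N = coherent_frequency(fs=1e6, f_target=10e3, M=4096)
print(f"N = {N} cycles, fi = {fi:.3f} Hz")
```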


With such a sampling scheme it is possible to systematically rearrange consecutive
samples taken over multiple periods to an equivalent data record for a single period
taken at higher speed. This technique is called equivalent time sampling or time
shuffling and is illustrated in Figure 7.6.
Similarly it is possible to employ output decimation to achieve two special cases
of coherent sampling. First, in beat frequency testing, with N = M + 1 in Equation
(7.11), the difference between sampling and input signal frequency is very small.
Rearranging of samples is not required in the data record where successive samples
step slowly through the periodic waveform as illustrated in Figure 7.7(a). Second,
in envelope testing, where N = 1 + M/2 and M is a multiple of four, the sampling
frequency is nearly twice the input frequency stepping sequentially through both

halves of the input waveform phases, as illustrated in Figure 7.7(b). While the latter sampling schemes allow a quick visualization of the waveform's shape, the sampling techniques introduced can be employed in the test methodologies described in the next sections.

Figure 7.6 Equivalent time sampling

Figure 7.7 (a) Beat frequency testing and (b) envelope testing

7.3.3 Static performance parameter test

A/D converter performance can be verified in terms of static performance parameters,


introduced in Section 7.2.1, through assessment of the transfer function. One way to compute the transfer function is to apply a d.c. voltage to the A/D converter that steps through an analogue voltage range slightly larger than the full-scale input
range. For each step, a number of data pairs (input voltage/output code) have to
be computed. The transfer function can then be approximated as the curve of the



mean output code over corresponding input voltage. The input voltage step size
and the number of output codes averaged for each input voltage step depends on
the ideal code bin width, the level of random noise and the required measurement
accuracy, which can be assessed through computation of the standard deviation of
the output.
In static performance production test, however, the use of continuous signals is
more desirable. The following sections introduce two A/D converter test methodologies widely in use today, which measure code transition levels, namely, feedback-loop
testing and histogram testing [3, 10, 11, 15]. From those values, static performance
parameters are determined as described in Section 7.2.1, which can then be compared
to the test thresholds.
7.3.3.1 Feedback-loop test methodology
In 1975 Corcoran et al. [16] published a test methodology for A/D converters that
incorporates a feedback loop to force the A/D converter input voltage to oscillate
around a desired code transition level. On the test source side, an analogue integrator
is employed that continuously integrates either a positive or a negative reference
voltage, Vref+ and Vref−, for stimulus generation (Figure 7.8(a)). The reference voltage to integrate is toggled depending on a comparison result between the A/D converter's output code, C, and a set desired output code, D, after each conversion.
If C < D, the positive reference voltage is connected to the analogue integrator to
set a positive slope in the test stimulus. If C > D, the negative reference voltage is
chosen to obtain a negative slope in the test stimulus. Once the input stimulus has
reached the desired code transition level T [D], the feedback from the digital comparator enforces oscillation around T [D] at the converter input. Measuring the average
voltage at the A/D converter input yields the value of the particular code transition
level.
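A behavioural sketch of this loop in Python may clarify the principle; the step size, the settling allowance and the ideal 8-bit converter in the example are illustrative assumptions rather than part of the published method.

import numpy as np

def servo_measure(adc, D, v0=0.0, slope=1e-4, settle=10000, avg=5000):
    # Integrate upwards while the output code C < D, downwards otherwise,
    # so that the input oscillates around T[D]; then average the settled input
    v, acc = v0, 0.0
    for n in range(settle + avg):
        C = adc(v)
        v += slope if C < D else -slope
        if n >= settle:
            acc += v
    return acc / avg                          # estimate of T[D]

# Example: ideal 8-bit ADC over a 0..1 V range, where T[128] = 0.5 V
ideal_adc = lambda v: min(max(int(v * 256), 0), 255)
print(servo_measure(ideal_adc, D=128))        # approximately 0.5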
Several further adaptations of this technique, also referred to as servo testing, have
been described in the literature and were also included in IEEE standards [3, 17]. For
example, the test source can be based on digital-to-analogue (D/A) conversion of a
counter output, as illustrated in Figure 7.8(b), where the D/A converter resolution is
larger than the A/D converter resolution. Instead of toggling an analogue reference

voltage, the content of the counter can be incremented or decremented to obtain a positive or negative slope in the test stimulus. Alternatively, an accumulator may be chosen instead of a counter, which allows increasing or decreasing the D/A converter input by an adjustable amount [18]. In either case, additional lowpass filtering may be incorporated to increase the effective test stimulus resolution. The test evaluation may be performed by measuring the average A/D converter input voltage, as above, or by extrapolating that voltage from the average D/A converter input through digital processing.

Figure 7.8   Feedback-loop configurations: (a) integrator; (b) D/A conversion
The test can be automated easily. Starting at the first code transition level with
D = 1, the desired output code D is sequentially incremented each time the average
A/D converter input corresponding to T[D] has been determined. The loop is continued until the last transition level with D = 2^N - 1 is computed. If test time restrictions
prevent measurement of all transition levels, a basic linearity test may be performed
for code words formed by a set number of most significant bits while DNL testing is
only applied to a limited number of codes [19].
In feedback-loop A/D converter testing, stimulus accuracy is crucial; the positive
and negative slope rates must especially be equal, as any mismatch directly affects the
ratio of code occurrences at either side of the code transition level. Also, the effect of
test stimulus slope (and step size in D/A conversion) on the dynamics of the feedback
loop must be analysed in more detail [5, 15, 18]. Regarding test application time,
the number of conversions required to reach a particular code transition level and
to settle the feedback loop has to be assessed. This is especially true for high-speed
A/D converters, where the conversion delay may be smaller than the delay through the
feedback loop. Stable oscillation around a code transition level may not be achievable
in this case.
7.3.3.2 Histogram test methodology
In histogram testing, A/D converter code transition levels are not measured directly
but are determined through statistical analysis of converter activity [20]. For a known
periodic input stimulus, the histogram of code occurrences (code counts) is computed
over an integer number of input waveform periods. There are two types of histograms
employing test stimuli of different characteristics [21]. First, there is the ramp histogram, also called linear histogram, computed for a linear, typically triangular,
waveform. Second, the sine-wave histogram, frequently referred to as dynamic histogram, is collected for a sinusoidal input waveform. The computation is illustrated
for both types for an ideal 3-bit converter in Figure 7.9. Note that, in the illustration,
the triangular waveform overdrives the A/D converter causing higher code counts in
H[0] and H[7], while the sinusoidal wave only touches the boundaries of the full-scale
range.
Generally, histograms support analysis of the converter's static performance parameters. A missing code m is easily identified, as the corresponding code count H[m] is equal to zero. Offset is also easily identified as a shift in the code counts, and gain directly relates to the average code count. However, the converter linearity is
assessed via the determination of code transition levels.



Figure 7.9   Histogram generation: (a) linear and (b) sine wave

For ramp histograms, where ideal values for H[1] to H[2^N - 2] are equal, code transition levels can be given as in the first part of Equation (7.12), where C is an offset component and A a gain factor that is multiplied with the accumulated code count up to the transition to code k [3]. As the widths of the extreme codes, 0 and 2^N - 1, cannot be defined, their code counts are usually set to zero (H[0] = H[2^N - 1] = 0). In these cases, C and A can be determined as shown in Equation (7.12), where the first code transition level, T[1], is interpreted as the offset component. The gain factor defines the proportion of the full-scale input range for a single sample in the histogram, where Htot is the total number of samples in the entire histogram:
T[k] = C + A \sum_{i=0}^{k-1} H[i] = T[1] + \frac{T[2^N - 1] - T[1]}{H_{tot}} \sum_{i=0}^{k-1} H[i] \qquad (7.12)

For sine-wave histograms, code transition levels have to be computed differently, as ideal values for H[1] to H[2^N - 2] are not equal [22]. With a stimulus v[t] = A sin(ωt + φ) + C, the transition levels can be given as [3, 4]:

T[k] = C - A \cos\left( \pi \, \frac{\sum_{i=0}^{k-1} H[i]}{H_{tot}} \right) \qquad (7.13)

where offset component C and gain factor A correspond to the input sine wave's offset and amplitude.
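Both computations collapse into a few lines of Python; in this sketch, the ramp end points T[1] and T[2^N - 1] as well as the sine-wave amplitude A and offset C are assumed to be known.

import numpy as np

def transition_levels(H, sine=False, A=1.0, C=0.0, T1=-1.0, TNm1=1.0):
    # Element k-1 of csum holds the accumulated count sum of H[0..k-1]
    csum = np.cumsum(np.asarray(H, dtype=float))[:-1]
    Htot = float(np.sum(H))
    if sine:
        return C - A * np.cos(np.pi * csum / Htot)    # Equation (7.13)
    return T1 + (TNm1 - T1) * csum / Htot             # Equation (7.12)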



For either type of histogram, high stimulus accuracy is essential as most deviations
(ramp linearity, sine-wave distortion) have a direct impact on the test result. For high-frequency stimuli, tests may also detect dynamic converter failure. The choice of sine-wave histograms can be advantageous, as stimulus verification and high-frequency
signal generation are easier to achieve [11]. An advantage of ramp histograms is that
generally a lower number of samples is required due to constant ideal code counts.
The number of code counts is an important test parameter, as it is directly proportional
to test application time and depends on required test accuracy and confidence level. In
Reference 23, an equation is derived for the number of samples required for random
sampling of a full-scale sine-wave histogram. This number can be reduced through
controlled sampling and overdriving of the converter; a relationship is derived in
Reference 4.
A shortcoming of histogram testing in general is the loss of information associated
with the accumulation of code counts only and not their order of occurrence. Imagine a situation where code bins were swapped, leading to a non-monotonic transfer
function. There will be no effect on a ramp histogram and detection for a sine-wave
histogram depends on the code locations. A more realistic converter failure escaping histogram testing is the occurrence of code sparkles. Usually, this is a dynamic effect where an output code of unexpected difference to its neighbours occurs. However, such effects can become detectable via accumulation of each code's indices (locations) in a so-called weight array, which can be computed in addition to the histogram accumulated in a tally array [10].
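In Python, both arrays can be accumulated in a single pass over the output codes; the sparkle indication hinted at in the final comment is a simplification of the analysis in Reference 10.

import numpy as np

def tally_and_weight(codes, n_codes):
    # The tally array is the ordinary histogram; the weight array
    # accumulates each code's sample indices (locations)
    tally = np.zeros(n_codes, dtype=np.int64)
    weight = np.zeros(n_codes, dtype=np.int64)
    for i, c in enumerate(codes):
        tally[c] += 1
        weight[c] += i
    # the mean location weight[c]/tally[c] of a code shifts noticeably
    # when sparkles place that code far from its expected position
    return tally, weight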

7.3.4 Dynamic performance parameter test

Generally, the aim in dynamic performance parameter testing is to identify the signal components at the A/D converter output, such as the converted input signal, harmonics
and random noise, and to compute performance parameters introduced in Section
7.2.2. For the majority of these parameters and determination of signal components,
a transformation from the time domain to the frequency domain is required. A/D
converter testing employing discrete Fourier transformation is described in the next
section. However, some dynamic performance parameters can also be determined in
the time domain from an A/D converter model generated to match a data record taken
from a real converter. The so-called sine-fit testing is introduced in Section 7.3.4.2.
For either technique, it is assumed that a single tone sine-wave stimulus is applied to
the A/D converter.
7.3.4.1 Frequency domain test methodology
This section focuses on the application of frequency domain test to A/D converters. It
is beyond the scope of this chapter to provide an introduction to Fourier transformation
[6] or to discuss general aspects of DSP-based testing and spectrum analysis [10, 11]
in great detail.
A signal can be described in time or frequency domain where the Fourier analysis
is employed to move from one domain to the other without loss of information. For
coherent sampling of periodic signals with the number of samples taken in the time



domain being a power of two, the discrete Fourier transformation can be computed more efficiently through FFT algorithms. If coherent sampling of all signal components cannot be guaranteed, a periodic repetition of the sampled waveform section can lead to discontinuities at either end of the sampling interval, causing spectral leakage. In such cases, windowing has to be applied, a processing step in which the sampled waveform section is mathematically manipulated to converge to zero amplitude towards the interval boundaries, effectively removing discontinuities [24].

Figure 7.10   A/D converter output spectrum
In either case, the A/D converter output signal is decomposed into its individual
frequency components for performance analysis. The frequency range covered by
the spectrum analysis depends on the rate of A/D converter output code sampling,
fs . The number of discrete frequency points, also referred to as frequency bins, is
determined by the number of samples, N, processed in the FFT. While accounting
for aliasing, signal and sampling frequencies have to be chosen to allow sufficient
spacing between the harmonics and the fundamental component. The graphical presentation of the spectrum obtained from the analysis, frequently referred to as FFT
plot, illustrates the particular signal component amplitude with its frequency on the
x-axis (Figure 7.10). The number of frequency bins is equal to N/2 and their widths
are equal to fs /N.
The following spectrum features can be identified in Figure 7.10: first, the fundamental component, A1, corresponding to the input signal; second, the harmonic distortion components, AH2 to AH8; third, large spurious components, such as ASi; and finally, the remaining noise floor representing quantization and random noise. Dynamic performance parameters, such as SINAD, THD and SFDR, can be calculated from the particular signal components' real amplitudes (not in decibels) or the power contained in them, as given in Section 7.2.2 and described in Reference 25, including multi-tone testing.
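The following Python sketch condenses such an evaluation; it assumes a coherent record with the fundamental exactly in frequency bin `cycles`, no windowing, and harmonics that do not alias onto d.c. or onto the fundamental.

import numpy as np

def dynamic_params(record, cycles, n_harm=8):
    # Coherent record of `cycles` input periods: the fundamental sits in
    # bin `cycles`, harmonic h in bin (h*cycles) mod M, folded to fs/2
    M = len(record)
    power = np.abs(np.fft.rfft(record - np.mean(record))) ** 2
    p_fund = power[cycles]
    harm = {min((h * cycles) % M, M - (h * cycles) % M)
            for h in range(2, n_harm + 1)}
    p_harm = sum(power[b] for b in harm)
    p_noise = power[1:].sum() - p_fund - p_harm        # bin 0 (d.c.) excluded
    sinad = 10 * np.log10(p_fund / (p_harm + p_noise))
    thd = 10 * np.log10(p_harm / p_fund)
    sfdr = 10 * np.log10(p_fund / np.delete(power[1:], cycles - 1).max())
    enob = (sinad - 1.76) / 6.02
    return sinad, thd, sfdr, enob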



7.3.4.2 Sine-wave fitting test methodology
Fitting a sine-wave function to the data recorded when a sinusoidal stimulus is applied
to the A/D converter allows assessment of the general performance. In Reference 3,
sampling of at least five stimulus cycles is quoted as a rule of thumb; however, it
also has to be borne in mind that conversion errors may be concealed with increasing
record size due to averaging effects. Again, sampling and input stimulus frequency
should be selected to record data that is uniformly spaced over a single period of the
waveform. However, sampling of a non-integer number of cycles does not cause the
problems that have been mentioned for spectrum analysis above. In any case, the mathematical model of the fitted sine wave relates to the data yn sampled at time instants
tn as
y_n = A \cos(\omega t_n + \phi) + C \quad \text{for } n = 1, \ldots, M \qquad (7.14)

where A is the amplitude, φ the phase and C the offset of the fitted sine wave of angular frequency ω. When the frequencies of the input stimulus and the sampling, and therefore the parameter ω of the fitted function, are known, the remaining three sine-wave parameters can be calculated through minimization of the RMS error between the data record and the model (three-parameter least-square fit [3]). When the frequencies are unknown or not stable, then the four-parameter least-square fit has to be employed. Here, an iteration process beginning with an initial estimate for the angular frequency surrounds the least-square minimization process. The value for ω is updated between loops until the change in obtained sine-wave parameters remains small. The three-parameter and four-parameter fitting processes are derived and described in far more detail in Reference 17.
The performance parameter that is usually computed for the fitted A/D converter model is the ENOB [7], Equation (7.8). Some further performance analysis
can be achieved by test execution under different conditions, such as various input
stimulus amplitudes or frequencies as described in Reference 26. A potential problem is that this test methodology does not verify the converter performance over
its entire full-scale input range, as the test stimulus amplitude has to be chosen to
avoid clipping. Localized conversion error, affecting a very small section of the transfer function, may also escape unnoticed due to the averaging effect of the fitting
process.
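For the known-frequency case, the three-parameter fit reduces to a linear least-squares problem, as in this Python sketch (the conversion of the RMS residual into an ENOB figure via Equation (7.8) is omitted).

import numpy as np

def three_param_fit(y, t, w):
    # Solve y ~ A0*cos(w*t) + B0*sin(w*t) + C in the least-squares sense,
    # then convert to the form of Equation (7.14), y = A*cos(w*t + phi) + C
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    A0, B0, C = coeffs
    A, phi = np.hypot(A0, B0), np.arctan2(-B0, A0)
    rms_err = np.sqrt(np.mean((y - X @ coeffs) ** 2))  # residual for ENOB
    return A, phi, C, rms_err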

7.4 A/D converter built-in self-test

Built-in self-test (BIST) for analogue and mixed-signal components has been identified as one of the major requirements for future economic deep sub-micron IC
test [27, 28]. The main advantage of BIST is to reduce test access requirements.
At the same time, the growing performance gap between the circuit under test and
the external tester is addressed by the migration of tester functions onto the chip. In
addition, parasitics induced from the external tester and the demands on the tester
can be reduced. Finally, analogue BIST is expected to eventually enable the use of



cheaper, digital-only or so-called 'DfT' testers that will help with the integration of analogue virtual components including BIST for digital SoC applications. Here the aim is to enable the SoC integrator to avoid the use of expensive mixed-signal test equipment. Also, for multi-chip modules, on-chip test support hardware helps to migrate the test of analogue circuitry to the wafer level. It is expected that the reuse of BIST structures will significantly reduce escalating test generation costs, test time and time-to-market for a range of devices. Full BIST has to include circuitry to implement both TSG and ORA. This section briefly summarizes BIST solutions that have been proposed for performance parameter testing of A/D converters, some of which have been commercialized.

Figure 7.11   HABIST scheme applied to ADC
Most BIST approaches for A/D converter testing aim to implement one of the
converter testing techniques described in Section 7.3. In Reference 29 it is proposed
to accumulate a converter histogram in an on-chip RAM while the test stimulus is
generated externally. The accumulated code counts can be compared against test
thresholds on chip to test for DNL; further test analysis has to be performed off chip.
This test solution can be extended towards a full BIST by including an on-chip triangular waveform generator [30]. In a similar approach, the histogram-based analogue
BIST (HABIST), additional memory and ORA circuitry can be integrated to store
a reference histogram on chip for more complete static performance parameter testing of A/D converters [31]. This commercialized approach [32] also allows the use
of the tested A/D converter (ADC) with the BIST circuitry to apply histogram-based
testing to other analogue blocks included in the same IC. As illustrated in Figure 7.11,
the on-chip integration of a sine wave or saw tooth TSG is optional. The histogram
is accumulated in a RAM where the converter output provides the address and a
read-modify-write cycle updates the corresponding code count. The response analysis is performed after test data accumulation and subtraction of a golden reference
histogram. As for the TSG, on-chip implementation of the full ORA is optional.
The feedback-loop test methodology has also been considered for a straightforward BIST implementation [33]. The oscillating input signal is generated through
the charging or discharging of a capacitor with a positive or a negative reference
current I, generated on chip (Figure 7.12). Testing for DNL and INL is based on the
measurement of the oscillation frequency on the switch control line (ctrl) similar to
feedback-loop testing (Section 7.3.3.1).



Figure 7.12   Oscillation BIST applied to an ADC

Figure 7.13   ADC BIST employing polynomial-fitting algorithm: (a) polynomial fitting, y = b0 + b1 x + b2 x^2 + b3 x^3; (b) signature of ADC BIST; (c) lowpass-filtered test stimulus

The dynamic performance of an ADC can be assessed through spectrum analysis or sine-wave curve-fitting as described above. It has been proposed to add an on-chip TSG and to use an available DSP core to implement a BIST scheme for either of these test techniques [34]. Similarly, a BIST approach for mixed-signal ICs containing a DSP, memory, and both ADCs and DACs has been proposed that computes an FFT and evaluates test results on chip [35].
A BIST scheme to test an ADC-DAC chain for dynamic performance without the availability of a DSP is proposed in Reference 36. For high-resolution ADC testing, the response analysis is conducted by integrating evenly distributed n/4 samples for each quarter of a ramp response (S0 to S3 in Figure 7.13(a) and (b)). The coefficients of a fitted third-order polynomial can be calculated from these four sums
and relate to d.c. offset and gain, and second- and third-order harmonic distortion expected for a sine-wave input. While the integration is conducted by BIST
circuitry, the on-chip extraction of coefficients, performance parameters and the
comparison to test thresholds is optional. The expected ramp input stimulus can
be approximated by a lowpass-filtered four-segment staircase-like stimulus, as illustrated in Figure 7.13(c) [37]. The BIST circuitry generates a pulse-width modulated
waveform with five different duty cycles. These are applied to an off-chip lowpass



filter in turn to generate a rising and a falling step for each quarter of the converter's input range. The integration process is conducted during the rising/falling
edges of the exponential (approximately 17 per cent of the step width, as illustrated by shaded regions in Figure 7.13(c)) to achieve a relatively linear output code
distribution.
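A rough Python model of this response analysis is given below; deriving the 4 x 4 mapping between the four sums and the polynomial coefficients numerically is our assumption and stands in for the closed-form expressions of Reference 36.

import numpy as np

def poly_bist_coeffs(ramp_response):
    # Sum the response over each quarter of the ramp (S0..S3), then solve
    # S = M @ b for the cubic y = b0 + b1*x + b2*x**2 + b3*x**3, whose
    # coefficients relate to offset, gain and 2nd/3rd-order distortion
    y = np.asarray(ramp_response, dtype=float)
    n = len(y)
    quarters = [slice(i * n // 4, (i + 1) * n // 4) for i in range(4)]
    S = np.array([y[s].sum() for s in quarters])
    x = np.linspace(-1, 1, n)                  # normalized ramp input
    M = np.array([[(x[s] ** j).sum() for j in range(4)] for s in quarters])
    return np.linalg.solve(M, S)               # [b0, b1, b2, b3]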
While the advantages of analogue and mixed-signal BIST solutions are clear,
drawbacks due to limited test sets, excessive area overhead or a low confidence
in test results have hampered wide industrial use. BIST techniques summarized
above are mostly limited to particular ADC architectures. Histogram-based BIST,
for example, may result in excessive area overhead for high-resolution converters.
The polynomial-fitting algorithm BIST scheme is aimed at high-resolution converter
testing, but relies on the assumption that a third-order polynomial accurately fits the
test response.
More work may be required to identify converter performance parameters crucial
for testing. Test requirements and realistic failure modes will depend on particular
converter architectures. An example study can be found in Reference 38.

7.5 Summary and conclusions

This chapter discussed the key parameters and specifications normally targeted in
ADC testing, methods for extracting these performance parameters and potential
solutions for either implementing full self-test or migrating test resources from external test equipment to the device under test. Table 7.1 provides a summary of the
advantages and limitations of five of the main test methods used in A/D converter
testing.
The field now faces major new challenges, as the demand for higher-resolution
devices becomes the norm. The concept of design reuse in the form of integrating third-party designs is also having a major impact on the test requirements, as
in many cases system integrators wishing to utilize high-performance converter
functions will not normally have the engineering or production test equipment
required to test these devices. The concept of being able to supply an ADC with
an embedded test solution that requires only digital external test equipment is hence a
major goal.
In the case of on-chip test solutions, proposed or available commercially, limitations need to be understood before investing design effort. Histogram testing, for
example, will require a large amount of data to be stored and evaluated on chip while
requiring long test times. For servo-loop-based solutions, the oscillation around a single transition level may be difficult to achieve under realistic noise levels. Sine-wave
fitting will require some significant area overhead for the on-chip computation, as do FFT-based solutions, and may still not achieve satisfactory measurement accuracy and resolution. Further work is therefore required to quantify test times, associated cost and measurement accuracies, and to generate quality test metrics.



Table 7.1   Summary of ADC test approaches

Technique: Histogram based
Performance parameters tested: static performance (offset and gain error, DNL, INL, missing codes, etc.)
Major advantages: well-established; complete linearity test
Main limitations: long test time; large amount of data; no test for dynamic performance; test stimulus accuracy

Technique: Servo-loop
Performance parameters tested: static performance (offset and gain error, DNL, INL)
Major advantages: accurate measurement of transition edges (not based on statistics)
Main limitations: test stimulus accuracy; measurement accuracy

Technique: Sine-wave curve fitting
Performance parameters tested: DNL, INL, missing codes, aperture uncertainty, noise
Major advantages: tests for dynamic performance
Main limitations: input frequency is a submultiple of sample frequency; lack of convergence of algorithm; measurement accuracy

Technique: Beat frequency testing
Performance parameters tested: dynamic characteristic
Major advantages: quick and simple visual demonstration of ADC failures
Main limitations: no accurate test

Technique: FFT based
Performance parameters tested: dynamic performance (THD, SINAD, SNR, ENOB)
Major advantages: tests for dynamic performance; well-established
Main limitations: no tests for linearity

7.6 References

1 van de Plassche, R.: Integrated Analog-to-Digital and Digital-to-Analog Converters (Kluwer, Amsterdam, The Netherlands, 1994)
2 Geiger, R.L., Allen, P.E., Strader, N.R.: VLSI Design Techniques for Analog and Digital Circuits (McGraw-Hill, New York, 1990)
3 IEEE Standard 1241-2000: IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters (Institute of Electrical and Electronics Engineers, New York, 2000)
4 Blair, J.: 'Histogram measurement of ADC nonlinearities using sine waves', IEEE Transactions on Instrumentation and Measurement, 1994;43(3):373-83
5 Maxim Integrated Products, Maxim Application Note A177: INL/DNL Measurements for High-Speed Analog-to-Digital Converters (ADCs), September 2000
6 Oppenheim, A.V., Schafer, R.W.: Discrete-Time Signal Processing (Prentice Hall, Englewood Cliffs, NJ, 1989)
7 Linnenbrink, T.: 'Effective bits: is that all there is?', IEEE Transactions on Instrumentation and Measurement, 1984;33(3):184-7
8 Hofner, T.C.: 'Dynamic ADC testing part I. Defining and testing dynamic ADC parameters', Microwaves and RF, 2000;39(11):75-84
9 Grochowski, A., Bhattacharya, D., Viswanathan, T.R., Laker, K.: 'Integrated circuit testing', IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1997;44(8):610-33
10 Mahoney, M.: DSP-based Testing of Analog and Mixed-Signal Circuits (IEEE Computer Society, Washington, DC, 1987)
11 Burns, M., Roberts, G.W.: An Introduction to Mixed-Signal IC Test and Measurement (Oxford University Press, New York, 2001)
12 IEEE Standard 1149.1-2001: IEEE Standard Test Access Port and Boundary-Scan Architecture 2001 (Institute of Electrical and Electronics Engineers, New York, 2001)
13 IEEE Standard 1149.4-1999: IEEE Standard for a Mixed-Signal Test Bus (Institute of Electrical and Electronics Engineers, New York, 1999)
14 IEEE P1500 Working Group on a Standard for Embedded Core Test (SECT), 2003. Available from http://grouper.ieee.org/groups/1500/ [Accessed Jan 2008]
15 Max, S.: 'Testing high speed high accuracy analog to digital converters embedded in systems on a chip', Proceedings of IEEE International Test Conference, Atlantic City, NJ, USA, 28-30 September 1999, pp. 763-71
16 Corcoran, J.J., Hornak, T., Skov, P.B.: 'A high resolution error plotter for analog-to-digital converters', IEEE Transactions on Instrumentation and Measurement, 1975;24(4):370-4
17 IEEE Standard 1057-1994 (R2001): IEEE Standard for Digitizing Waveform Recorders (Institute of Electrical and Electronics Engineers, New York, 2001)
18 Max, S.: 'Optimum measurement of ADC transitions using a feedback loop', Proceedings of 16th IEEE Instrumentation and Measurement Technology Conference, Venice, Italy, 24-26 May 1999, pp. 1415-20
19 Sounders, T.M., Flach, R.D.: 'An NBS calibration service for A/D and D/A converters', Proceedings of IEEE International Test Conference, Philadelphia, PA, USA, 27-29 October 1981, pp. 290-303
20 Pretzl, G.: 'Dynamic testing of high speed a/d converters', IEEE Journal of Solid State Circuits, 1978;13:368-71
21 Downing, O.J., Johnson, P.T.: 'A method for assessment of the performance of high speed analog/digital converters', Electronics Letters, 1978;14(8):238-40
22 van den Bossche, M., Schoukens, J., Renneboog, J.: 'Dynamic testing and diagnosis of A/D converters', IEEE Transactions on Circuits and Systems, 1986;33(8):775-85
23 Doernberg, J., Lee, H.S., Hodges, D.A.: 'Full-speed testing of A/D converters', IEEE Journal of Solid State Circuits, 1984;19(6):820-7
24 Harris, F.J.: 'On the use of windows for harmonic analysis with the discrete Fourier transform', Proceedings of the IEEE, 1978;66(1):51-83
25 Hofner, T.C.: 'Dynamic ADC testing part 2. Measuring and evaluating dynamic line parameters', Microwaves and RF, 2000;39(13):78-94
26 Peetz, B.E.: 'Dynamic testing of waveform recorders', IEEE Transactions on Instrumentation and Measurement, 1983;32(1):12-17
27 Semiconductor Industry Association: International Technology Roadmap for Semiconductors, 2001 edn. Available from http://www.sia-online.org [Accessed Jan 2008]
28 Sunter, S.: 'Mini tutorial: mixed signal test'. Presented at 7th IEEE International Mixed-Signal Testing Workshop, Atlanta, GA, USA, 13-15 June 2001
29 Bobba, R., Stevens, B.: 'Fast embedded A/D converter testing using the microcontroller's resources', Proceedings of IEEE International Test Conference, Washington, DC, USA, 10-14 September 1990, pp. 598-604
30 Raczkowycz, J., Allott, S.: 'Embedded ADC characterization techniques using a BIST structure, an ADC model and histogram data', Microelectronics Journal, 1996;27(6):539-49
31 Frisch, A., Almy, T.: 'HABIST: histogram-based analog built in self test', Proceedings of IEEE International Test Conference, Washington, DC, USA, 3-5 November 1997, pp. 760-7
32 Fluence Technology Incorporated: BISTMaxx™ product catalog, 2000. Available from http://www.fluence.com
33 Arabi, K., Kaminska, B.: 'Oscillation built-in self test (OBIST) scheme for functional and structural testing of analog and mixed-signal circuits', Proceedings of IEEE International Test Conference, Washington, DC, USA, 3-5 November 1997, pp. 786-95
34 Toner, M.F., Roberts, G.W.: 'A BIST scheme for a SNR, gain tracking, and frequency response test of a sigma-delta ADC', IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1995;42(1):1-15
35 Teraoka, E., Kengaku, T., Yasui, I., Ishikawa, K., Matsuo, T., Wakada, H., Sakashita, N., Shimazu, Y., Tokada, T.: 'A built-in self-test for ADC and DAC in a single chip speech CODEC', Proceedings of IEEE International Test Conference, Baltimore, MD, USA, 17-21 October 1993, pp. 791-6
36 Sunter, S.K., Nagi, N.: 'A simplified polynomial-fitting algorithm for DAC and ADC BIST', Proceedings of IEEE International Test Conference, Washington, DC, USA, 3-5 November 1997, pp. 389-95
37 Roy, A., Sunter, S., Fudoli, A., Appello, D.: 'High accuracy stimulus generation for A/D converter BIST', Proceedings of IEEE International Test Conference, Baltimore, MD, USA, 8-10 October 2002, pp. 1031-9
38 Lechner, A., Richardson, A., Hermes, B.: 'Short circuit faults in state-of-the-art ADCs: are they hard or soft?', Proceedings of 10th Asian Test Conference, Kyoto, Japan, 19-21 November 2001, pp. 417-22

Chapter 8

Test of ΣΔ converters

Gildas Leger and Adoración Rueda

8.1 Introduction

Back in the 1960s, Cutler introduced the concept of ΣΔ modulation [1]. Some years later, Inose et al. applied this concept to analogue-to-digital converters (ADCs) [2]. Sigma-delta (ΣΔ) converters attracted little interest at that time because they required extensive digital processing. However, with newer processes and their ever-decreasing feature size, what was first considered to be a drawback is now a powerful advantage: a significant part of the conversion is realized by digital filters, allowing for a reduced number of analogue parts, built of simple blocks. Nevertheless, the simplicity of the hardware has been traded off against behavioural complexity. ΣΔ modulators are very difficult to study and raise a number of behavioural peculiarities (limit
cycles, chaotic behaviour, etc.) that represent an exciting challenge to the ingenuity
of researchers and are also an important concern for industry.
Owing to this inherent complexity, it is quite difficult to relate defects and in
general non-idealities to performance degradation. In linear time-invariant circuits, it
is usually possible to extract the impact of a defect on performance by considering
that the defect acts as a perturbation of the nominal situation. This operation is known
as defect-to-fault mapping. For instance, in a flash ADC, a defect in a comparator can
be modelled as a stuck-at fault or an unwanted offset that can be directly related to
the differential non-linearity. However, in the case of ΣΔ converters, a given defect
can manifest itself only under given circumstances. For instance, it is known that
the appearance of limit cycles is of great concern, particularly in audio applications.
Indeed, the human ear can perceive these pseudo-periodic effects as deep as 20 dB
below the noise floor. A defect in an amplifier can affect its d.c. gain and cause
unwanted integrator leakage and limit cycles. Such a defect can be quite difficult to
detect with a simple functional test. Performing a good and accurate test of a ΣΔ
modulator is, thus, far from straightforward. A stand-alone modulator in its own



package can be tested with success in a laboratory with appropriate bench equipment.
It can also be tested in a production environment with much more limited resources,
but the cost related to such tests is becoming prohibitive as the resolution of the
modulators increases. In the context of a system on chip (SoC), the test problems
multiply and the test of an embedded modulator can very well become impossible.
Research has thus been done and is still necessary to facilitate ΣΔ modulator testing
and to decrease the overall product cost.
This chapter will address the issue of ΣΔ converter tests in a general manner, keeping in mind the particular needs of SoC. A brief description of the basic principles of ΣΔ modulation is first included, mainly as a guide for those readers with little or no knowledge of ΣΔ modulators and their peculiarities that influence testing. The following section deals with characterization, which is closely related to test. It intends to give valuable and specific information for the case of ΣΔ modulators. For more general information on ADC testing, the reader should refer to Chapter 7. Section 8.4 addresses more specifically the test of ΣΔ modulators in an SoC context, explaining the particular issues and reviewing some solutions proposed in the literature. Finally, Section 8.5 gives some details about a very promising approach that has the potential to overcome most of the issues described in the previous sections.

8.2 An overview of ΣΔ modulation: opening the ADC black box

ADCs based on ΣΔ modulation present a number of peculiarities when compared with other ADCs. These peculiarities have non-negligible consequences for the
ADC characterization procedure. Some of the main issues will be introduced in this
section.

8.2.1 Principle of operation: ΣΔ modulation and noise shaping

Figure 8.1 shows the structure of a ΣΔ converter. The converter is divided into two important domains: the analogue and the digital domains. In the analogue part, the ΣΔ
modulator adaptively approximates the output low-resolution bit-stream to the input
signal and shapes the quantization noise at high frequencies. It is often preceded
by a simple anti-aliasing filter that removes potential high-frequency components
from the input signal. Then, the decimation filter removes the high-frequency noise
and converts the low-resolution high-rate bit-stream into a high-resolution low-rate
digital code.
The objective of ΣΔ modulation is to shape the quantization error at high frequencies, as seen in Figure 8.2. This allows most of the quantization error to be filtered
and the performance greatly improves. Taking the concept to an extreme, it is even
possible to use a 1-bit quantizer and obtain high-precision converters by appropriate
quantization, noise shaping and filtering.
The noise-shaping capability of ΣΔ modulators is achieved by feeding back the quantization error to the input. Actually, a ΣΔ modulator should be seen as an adaptive system.


Figure 8.1   Decomposition of a ΣΔ ADC

Figure 8.2   Quantization noise shaping

Figure 8.3   Error-predicting ΣΔ architecture

Let us imagine that only a low-resolution converter is available: if the input
signal is directly fed into that converter, the output will be a coarse approximation of
the input. However, the input signal is slow with respect to the maximum sampling
frequency of the low-resolution converter. In control theory, the simplest approach
to improve the behaviour of a system is to use a proportional controller: a quantity
proportional to the quantization error is subtracted from the input signal. This is
depicted in Figure 8.3. When considering a discrete-time situation, a delay has to
be introduced into the feedback loop. If the quantity subtracted from the input is
the entire quantization error, an architecture known as error predicting is obtained.
Modelling the low-resolution converter as an additive noise source [3], the transfer
function of the device can be resolved in the z-domain.



Figure 8.4   Generic representation of a ΣΔ modulator of any order (loop-filter transfer functions L0 = G/H for the input X and L1 = (H - 1)/H for the output Y)

The output is equal to the input signal plus a quantization noise that is shaped at high frequencies by the function (1 - z^{-1}). In control theory, the system performance can be improved by taking an adequate controller (proportional, integral, differential or any combination of them). In the same way, ΣΔ modulation can be presented in a generic way as in Figure 8.4: the input signal and the modulator output are combined in a loop filter whose output is quantized by a coarse converter. The characteristics of the loop filter further define the architecture of the modulator and the order of the noise shaping.
The modulator state equation can be written as
U(z) = L_0(z) X(z) + L_1(z) Y(z) \qquad (8.1)

And the input-output equation as

Y(z) = G(z) X(z) + H(z) \left( Y(z) - U(z) \right) \qquad (8.2)

The function G(z) is known as the signal-transfer function. Similarly, H(z) is the noise-transfer function (NTF). The term in parentheses represents the quantization noise: modelling the quantizer as Y = U + E and combining the two equations indeed yields Y(z) = G(z)X(z) + H(z)E(z).
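A quick numerical check of this linear model for the first-order case, where G(z) = z^{-1} and H(z) = 1 - z^{-1}, can be done in a couple of Python lines (printed values rounded).

import numpy as np

# |NTF| = |1 - z^{-1}| on the unit circle: it vanishes at d.c. and rises
# to 2 at fs/2, which is exactly the noise shaping sketched in Figure 8.2
f = np.linspace(0.0, 0.5, 6)            # frequency normalized to fs
ntf = np.abs(1 - np.exp(-2j * np.pi * f))
print(ntf.round(2))                     # [0.   0.62 1.18 1.62 1.9  2.  ]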

8.2.2 Digital filtering and decimation

As was said above, in order to retrieve the data at the wanted precision, the quantization noise has to be properly filtered. The cut-off frequency of the filter defines
the modulator baseband. Indeed both the quantization noise and the input signal are
affected by the filter. Once the filtering operation has been done, the frequency range
above the filter cut-off frequency is useless. For that reason, the filter output datastream is often decimated or down-sampled: only one sample out of N is considered.
In order to avoid the aliasing effect, N has to be set such that the output data rate is
twice the filter cut-off frequency. This process is illustrated in Figure 8.5.
The output spectrum of a ΣΔ converter appears to be very similar to that of a Nyquist-rate converter, but the input signal is actually sampled at a much higher frequency
than the converter output rate. The oversampling ratio (OSR) is defined as
\mathrm{OSR} = \frac{f_s}{2 f_c} \qquad (8.3)

where fc is the cut-off frequency of the filter and fs the sampling frequency of the modulator. It is easy to show that the number N defined above for the decimation operation is actually equal to the OSR.

Figure 8.5   Spectral representations of the modulator bit-stream filtering and decimation

Figure 8.6   Widely used decimate-by-N comb filter structure (L accumulator stages at fs, decimation by N, L differentiator stages at fd = fs/N)
Both the filtering and decimation operations have to be carried out with care. The
decimation cannot be performed directly on the bit-stream, as the high-frequency
noise would alias into the baseband. On the other hand, performing the whole decimation at the filter output may not be optimum in terms of power efficiency. Indeed, it
would force the entire filter to run at the maximum frequency. It may be more convenient to split the filter into several stages with decimators at intermediate frequencies.
Hence, finding an adequate decimation and filtering strategy for a given modulator
is an interesting optimization problem. One widely used structure, however, is that
presented in Figure 8.6.
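As a sketch of this structure, the following Python routine implements an L-stage accumulate/decimate/differentiate chain; wide integer arithmetic stands in for the modular (wrap-around) arithmetic that hardware implementations rely on.

import numpy as np

def comb_decimate(bitstream, N, L=3):
    # L accumulator stages at the modulator rate, decimation by N, then
    # L differentiator stages at the decimated rate, as in Figure 8.6
    x = np.asarray(bitstream, dtype=np.int64)
    for _ in range(L):
        x = np.cumsum(x)               # accumulator: y[n] = y[n-1] + x[n]
    x = x[N - 1::N]                    # keep one sample out of N
    for _ in range(L):
        x = np.diff(x, prepend=0)      # differentiator: y[n] = x[n] - x[n-1]
    return x / float(N) ** L           # remove the filter's d.c. gain of N**L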

8.2.3 ΣΔ modulator architecture

With the advent of digital-oriented processes, ΣΔ converters have gained more and more interest. Research effort has been focused on both theory and implementation. In order to get more benefits from noise shaping, high-order architectures have been developed with a wide variety of loop-filter topologies. In parallel, these refinements require a deeper understanding of the non-linear dynamics of complex ΣΔ modulators. The topic of most relevance is without doubt the stability of the modulator internal states [4-6].


Figure 8.7   z-domain representation of a first-order ΣΔ modulator

8.2.3.1 First-order modulators


Figure 8.7 depicts a first-order ΣΔ modulator. The reason for the ΣΔ label can
easily be seen in the figure. The quantizer output is subtracted from the input signal
to form the modulator quantization error: that is, the delta operation. Then the error
signal is integrated, which in a discrete-time basis, is a summing operation: the sigma
operation. The negative feedback tends to force the integrator input to zero on average.
As the average of the modulator output has to be close to the average of the modulator
input, this implies that the modulator error is ever smaller for an input closer to d.c.,
which is the same as saying that the quantization error is shaped at higher frequencies.
Actually, if the quantization error is considered to be uncorrelated from the input, the
z-domain description of the modulator can be derived from Figure 8.7, where the
quantizer is linearized and an additive noise source models the quantization error.
This provides a linear description of the ΣΔ modulator that gives an insight into its
operation.
The following equation can be written that relates the modulator input X to its
output Y :


Y = z^{-1} X + (1 - z^{-1}) E \qquad (8.4)

The modulator output is thus equal to the delayed modulator input plus the quantizer error shaped by the function (1 - z^{-1}). Considering the quantizer error as a white noise that respects Bennett's conditions [3] and assuming a large OSR (that is, the baseband frequency is much lower than the sampling frequency), the quantization noise power in the modulator baseband can be calculated as
P_Q = \frac{\pi^2}{3} \, \frac{\Delta^2}{12} \, \frac{1}{\mathrm{OSR}^3} \qquad (8.5)

where Δ is the quantizer step.


The noise shaping allows for a 9 dB noise reduction per octave of OSR, which
represents a 6 dB/octave improvement with respect to the gain for an unshaped
quantization noise.
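This behaviour is easy to reproduce with a few lines of Python; the function below is a minimal sketch of the modulator of Figure 8.7 with a 1-bit quantizer, not a circuit-accurate model.

import numpy as np

def first_order_sdm(x):
    # Delaying integrator plus 1-bit quantizer with feedback: the output
    # is decided from the stored state, then the state accumulates the
    # new input minus the fed-back output (the delta and sigma operations)
    u, y = 0.0, np.empty(len(x))
    for n, xn in enumerate(x):
        y[n] = 1.0 if u >= 0 else -1.0
        u += xn - y[n]
    return y

# A slow sine yields a bit-stream whose low-frequency content tracks the
# input while the quantization error is pushed towards fs/2
bits = first_order_sdm(0.7 * np.sin(2 * np.pi * np.arange(8192) / 256))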


Figure 8.8   z-domain representation of a second-order ΣΔ modulator

8.2.3.2 Second-order modulators


The second-order modulator was introduced with the purpose of obtaining higher-order noise shaping in the form:

Y = z^{-2} X + (1 - z^{-1})^2 E \qquad (8.6)
Such a noise shaping provides one more bit per octave of the OSR with respect to the
first-order modulator. Figure 8.8 shows a z-domain diagram of a general second-order
modulator.
In order to properly choose the branch coefficients (ai , bi ), a z-domain analysis has
to be performed. For that purpose, Ardalan and Paulos [7] showed that the quantizer
had to be considered as having an effective gain k. They demonstrated that such a gain
is actually signal dependent but its d.c. value adjusts to the value that makes the loop gain equal to one. For instance, Boser and Wooley [8] proposed to use a1 = b1 = 0.5
and a2 = b2 = 0.5. This solution has two main advantages. It allows the use
of single-branch integrators with a 0.5 gain. Another advantage is that the voltage
level density at the output of the two integrators is similar, which allows using the
same amplifier for both, without adjusting its output range. For such a modulator, the
effective gain settles on average to k = 4.
8.2.3.3 High-order modulators
Higher-order modulators can be obtained in two different ways: increasing the loop-filter order or cascading several low-order ΣΔ stages.
The first technique is conceptually straightforward: filters G and H in Figure 8.4 can be tailored such that the quantization noise is shaped by a function (1 - z^{-1})^L, L being the order of the loop filter. Several possibilities exist for second-order modulators which do not represent much of an issue [8-10]. However, the complexity increases with the order of the loop filter. This makes the practical implementation of the technique far from direct or general. The main problem associated with single-loop modulators is that of stability. Modulator stability is difficult to study, in particular for low-resolution quantizers. The loop filter has to be designed such that its internal states remain in manageable ranges. Otherwise, high capacitor ratios are required, which increases power consumption, and proper scaling places considerable pressure on the noise budget. Figure 8.9 shows the Lee-Sodini generic diagram of an Lth-order single-loop ΣΔ modulator. It makes use of local feedback and feed-forward loops to locate the poles and zeros of the loop filter [11].



Figure 8.9   Lee-Sodini architecture for high-order single-loop ΣΔ modulators

Figure 8.10   Generic diagram of cascaded ΣΔ modulators

The second technique consists in cascading several low-order stages [12] as shown
in Figure 8.10. The quantization error of stage i is digitized by stage i + 1, in some
way similar to pipeline converters where the residue of one stage conversion (i.e., the
quantization error) is the input of the next stage. A proper reconstruction filter has to
be designed that combines the output bit-streams of all the stages. As a result, all but the last stage's quantization errors are cancelled. Such structures benefit from a greater
simplicity than single-loop modulators and their design flow is better controlled. A
drawback is that noise cancellation within the reconstruction filter depends on the
characteristics of the different stages (integrator gain, amplifier d.c. gain and branch
coefficients). In other words, the digital reconstruction filter has to match the analogue
characteristics of the stages. These requirements put more stress on the design of the
analogue blocks as the overall modulator is more sensitive to integrator leakage and
branch mismatches than single-loop modulators.


8.3 Characterization of ΣΔ converters

ΣΔ converters are ADCs. For this reason, their performance can be described by standard metrics, defined for any ADC. Similarly, there exist standard techniques to measure these standard metrics. All this information about ADC testing is actually gathered in the IEEE 1241-2000 standard [13]. Characterization of state-of-the-art ΣΔ converters is challenging in itself from a metrological viewpoint. Some ΣΔ converters claim a precision of up to 24 bits. For such levels of accuracy, no detail of the test set-up can be overlooked. However, these concerns are not specific to ΣΔ converters but are simply a consequence of their overwhelming capability to reach high resolutions. What is intended in this section is to contemplate ADC characterization from the viewpoint of ΣΔ modulation. For more general information, the reader can refer to Chapter 7.
The performance specifications of ADCs are usually divided into two categories: static and dynamic. The meaning of these two words seems to tie these specifications to the field of application of the converter. As the first ΣΔ modulators were of low order, they required a high OSR to reach a good precision. Their baseband was thus limited to low frequencies. Then, evolutions in the modulator architecture allowed reducing the OSR while maintaining or even increasing the resolution. For this reason, the market for ΣΔ converters has evolved from the low-frequency spectrum to the highest one. In the lowest range of frequency, ΣΔ modulators are used for instrumentation: medical, seismology, d.c. meters, and so on. At low frequencies, state-of-the-art converters claim a resolution of up to 24 bits. In that case, the most important ADC performance parameters seem to be the static ones: gain, offset, linearity. However, the noise figures are also of great interest for those metering applications that require the detection of very small signals. The most important market for ΣΔ converters can be found in the audio range. Indeed, most CODECs use ΣΔ modulators. In that case, dynamic specifications are of interest. Moving forward in the frequency spectrum, ΣΔ modulators can be found in communication and video applications. The first target was ADSL and ADSL+ but now converters can be found that are designed for GSM, CDMA and AMPS receivers.

8.3.1 Consequences of ΣΔ modulation for ADC characterization

Opening the ADC black box results in numerous conclusions. The first one is almost
purely structural. Because ΣΔ converters are so clearly divided into two domains (ΣΔ modulator and digital filter), the building blocks are often sold separately. From
a characterization viewpoint, it is obvious that the converter performance depends
on the filter characteristics: it will define the OSR and also the amount of noise
effectively removed. It also affects the signal frequency response. Furthermore, it
must be correctly implemented such that finite precision arithmetic does not degrade
the final resolution. However, the filter is digital and to some extent its performance is
guaranteed by design. The filter has to be tested to ensure that no defect has modified
its structure or caused an important failure, but the characteristics of the filter should



not deviate from the simulations. On the other hand, the ΣΔ modulator is made
of analogue blocks whose performance is sensitive to small process variations and
unexpected drifts. As a result, much of the characterization burden concentrates on
the modulator.
In most Nyquist-rate ADCs, the conversion is performed on a sample-to-sample
basis. The input signal is sampled at a given instant and that sample is in some way
compared to a voltage reference. The digital output code determines to what fraction
of the full scale the input sample corresponds. In flash converters, the input sample
is compared to a bank of references evenly distributed over the full-scale range. In
dual-slope converters, the time necessary to discharge a capacitor previously charged
at the value of the input sample is measured by a master clock. There exist a variety
of solutions to derive the digital output code, but in all cases a given output code
can be associated with a given input sample. For ΣΔ converters, however, that is not the case. Indeed, the output of a ΣΔ converter is provided at a low rate, but the
input is sampled at a high rate. How can a digital output code be associated to a
given input sample? This absence of direct correspondence between a given input
sample and a given output code is even more significant considering a stand-alone
ΣΔ modulator. The adaptive loop of the ΣΔ modulator continuously processes the
input signal and the modulator output at a given instant depends not only on the input
sample at that instant but also on its internal state. The internal state depends on the
whole history of the conversion. Actually, if the same input signal is sent twice to a
ΣΔ modulator in identical operating conditions, the two output bit-streams obtained
will be different. The low-frequency components may be identical and the output
of the decimation filter may be identical, but the actual modulator output would be
different.
The simplicity of ΣΔ modulators largely relies on the use of low-resolution quantizers (even only 1 bit). The drawback is that the quantization error remains strongly correlated to the quantizer input signal. The study of that correlation is overwhelmingly complicated by the modulator feedback loop. The non-linear dynamics of ΣΔ modulators induce some effects that require particular attention. For instance, the response of a first-order modulator to some d.c. input levels can be a periodic sequence. Moreover, it has been shown that integrator leakage stabilizes those sequences, known as limit cycles, over a non-null range of d.c. inputs that thus form a dead-zone [14]. The shape of the d.c. transfer function of such a modulator is known as the devil's staircase [15]. Similarly, pseudo-periodic behaviours can be seen in the time domain
but do not appear in the frequency domain.
The characterization of ΣΔ converters thus requires some knowledge of the ADC internal structure. In particular, it may be interesting to characterize the ΣΔ modulator
without the decimation filter. Otherwise, one may be led to erroneous interpretations
of the characterization results, in particular for static parameters.

8.3.2 Static performance

The static performance metrics are associated with a representation of the ADC as a transfer function, a point-to-point mapping almost independent of the input signal.



The ideal transfer function should be a perfect staircase. The static performance
metrics describe the deviations of the actual converter transfer function from the
perfect staircase. The first performance specifications may be those that affect only
absolute measurements, such as gain and offset errors. Apart from its staircase-like
appearance, which is due to the quantization process, the transfer function of an ADC
is expected to be a straight line of gain one with no offset. However, that objective is
hardly achieved in reality and gain and offset errors have to be measured (or calibrated)
for applications that require a good absolute precision. For that purpose, a simple
linear regression over a set of known d.c. input levels is sufficient. The number of
d.c. input levels determines the confidence interval associated with the measurement.
The value of the residues of the linear regression can also give a first insight into the
resolution of the converter.
Non-linearity arises when the transfer function cannot be represented by a straight
line anymore. The static parameters that represent how the actual transfer function
deviates from the straight staircase are integral non-linearity (INL) and differential
non-linearity (DNL). The former is a representation of the distance of a given point of the actual transfer function to the ideal transfer function. The latter is a representation of the actual size of a transition width (that is, the voltage difference between two consecutive
code transitions) with respect to the ideal size. This is illustrated in Figure 8.11. These
two metrics are closely related as the INL at code i is the sum of the DNL from code
0 to code i.
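Given a set of measured transition levels, both metrics follow in a few lines of Python; this sketch uses the end-point convention for the ideal code width rather than the best-fit line of Figure 8.11.

import numpy as np

def dnl_inl(T):
    # T holds the measured transition levels T[1..2**N - 1]
    T = np.asarray(T, dtype=float)
    lsb = (T[-1] - T[0]) / (len(T) - 1)     # end-point ideal code width
    dnl = np.diff(T) / lsb - 1.0            # per-code width error, in LSB
    inl = np.concatenate(([0.0], np.cumsum(dnl)))   # INL = running sum of DNL
    return dnl, inl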
An important concept related to the static representation is that of monotonicity.
It is not a metric but a quality of the converter that is implicit in the INL and DNL.
The monotonicity of ΣΔ converters is ensured by design. Indeed, ΣΔ modulators can be seen as control loops, so if they were not monotonic, they would be unstable.
Figure 8.11   ADC transfer function and static metrics



Another important static aspect of ADCs appears if there are missing codes. The output code of ΣΔ converters is built by the decimation filter from the modulator
output bit-stream. Provided that the decimation filter is well designed and no rounding
operation limits the resolution, there should not be missing codes.
As described in the previous subsection, ΣΔ modulation breaks the traditional representation of the A/D conversion as a sample-to-code mapping. This does not mean that the static metrics are useless but that their interpretation has to be done with care. DNL does not provide much information, but INL could describe general trends in the transfer function like polynomial approximations. Anyway, measuring the INL for all output codes does not make much sense either. As a result, the standard techniques used to measure the INL and the DNL of ADCs are not adapted to ΣΔ converters. One example of these techniques is the servo-loop method that is used to locate code transitions. Apart from the concerns on the concept of code transitions commented on previously, a drawback of that technique is that the algorithm should take into account the potentially large latency of the decimation filter. It should also be revised to take into account the influence of noise at high resolution. Indeed, a ΣΔ converter with a nominal resolution of 24 bits may have an effective resolution of around 20 bits. Trying to locate transitions at a 24-bit resolution would imply finding an oscillation buried in noise 16 times higher. Also, an exhaustive test of all transitions would require an incredible amount of time: for resolutions above 14 bits, the number of codes is very large. Furthermore, in order to obtain the static transfer function, the increase and decrease rate of the input voltage should be very slow.
Another technique, histogram testing, requires the acquisition of several samples
per output code. The converter code density (that is, the number of occurrences for
each code) is compared to the input signal density to determine DNL and INL with
precision. The main advantage over the servo-loop method is that the histogram can be computed using sine-wave inputs or slow ramps. There is thus no need to wonder how the modulator processes the input signal, as in the servo-loop method. However,
a histogram test is still difficult to perform for the highest resolutions.
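For reference, a minimal histogram-test sketch is given below (entirely our illustration; an ideal quantizer stands in for the converter under test). With a full-scale sine-wave input, the ideal code density follows the arcsine distribution, and the DNL follows from the ratio of actual to ideal counts:

    import numpy as np

    NBITS, NSAMP = 8, 1_000_000
    x = np.sin(2 * np.pi * np.random.rand(NSAMP))    # random-phase sine input
    codes = np.clip(np.round((x + 1) * (2**NBITS - 1) / 2), 0, 2**NBITS - 1)
    hist = np.bincount(codes.astype(int), minlength=2**NBITS)

    # Ideal sine-wave code density (arcsine distribution between bin edges;
    # uniform edges are an approximation for this simple quantizer model).
    edges = np.linspace(-1, 1, 2**NBITS + 1)
    ideal = np.diff(np.arcsin(edges)) / np.pi * NSAMP

    # DNL from the code-density ratio; the end codes are discarded as usual.
    dnl = hist[1:-1] / ideal[1:-1] - 1
    inl = np.cumsum(dnl)
    print("max |DNL| =", round(float(np.abs(dnl).max()), 3), "LSB")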
Hence, it can be concluded that the techniques used to measure the static performance metrics are not well adapted to ΣΔ modulators. Even the metrics themselves suffer from some conceptual limitations, apart from the gain and offset measurements.

8.3.3 Dynamic performance

The dynamic category represents the time- and frequency-dependent behaviour of the ADC. There are metrics associated with distortion, such as total harmonic distortion (THD) and inter-modulation distortion (IMD). These metrics account for static
effects as well as purely dynamic effects such as hysteresis or settling errors. Other
non-idealities can be gathered under the concept of noise. The reader should notice, however, that noise here does not necessarily mean random (or, more correctly, stochastic) processes.
The first and most obvious noise source is the quantization process of the converter.



The quantization noise is strongly correlated to the input signal. For most ADC architectures, the relatively high number of bits tends to de-correlate the quantization noise
from the input signal so that the former behaves as a random white noise. Apart from
quantization noise, other noise sources can occur and reduce the effective resolution.
Stochastic processes such as thermal noise and flicker noise are of importance. Clock
jitter should also be taken into account. Finally, cross-talk within the circuit or a
poor power-supply rejection ratio and common-mode rejection ratio may also cause
the appearance of unwanted tones in the ΣΔ modulator baseband and consequently
in the converter output. The performance parameters related to noise effects are the
signal-to-noise ratio (SNR) and spurious-free dynamic range (SFDR). Depending on
the application, the metrics are tailored to isolate a given effect. Spurious tones may
be considered and accounted for as a distortion (even if they are not harmonics of the
input signal), while the noise would in that case account only for random-like effects.
The sum of all non-ideal effects (apart from gain and offset errors) is often considered
in a single performance metric that is called the effective number of bits (ENOB).
The ENOB is the number of bits that an ideal converter would have if its quantization
error power was equal to the overall error and noise power of the converter under
study.
As said before, ΣΔ modulators raise a number of particular concerns about their dynamic characteristics. Quantization noise in ΣΔ modulators usually appears as a spectrally shaped random noise, but under some conditions spurious tones can appear due to internal coupling with the input signal, or even idle tones. Similarly, the quantization noise power in the baseband can vary with the frequency and amplitude of the signal. ΣΔ modulators are also prone to exhibit pseudo-periodic outputs when they are excited by d.c. levels. This phenomenon is known as idle tones and is difficult to identify through spectral analysis. However, examination of the modulator time response makes the detection of such effects possible. In order to cope with the poorly controlled non-linear dynamics of ΣΔ modulators, the metrics associated with the dynamic characteristics (THD, SNR, SFDR, etc.) are usually measured and plotted
over a broad range of input conditions. As most dynamic characterization techniques
rely on sine-wave input, the metrics of interest are measured for several amplitudes
and frequencies. A periodic test signal is thus sent to the converter input and a register
of N data points is acquired at the converter output. Two main analysis techniques
exist.
One is called sine-fit. It consists in finding the sine wave that best fits the converter output in a least-squares sense. There are two important variants of the technique. The
first and simplest one can be applied if the input sine-wave frequency is perfectly
controlled. That can be done if it comes from a digital-to-analogue converter (DAC)
driven by the same clock as the ADC. In that case, a linear-in-the-parameters regression can be applied to retrieve the sine-wave amplitude, offset and phase. The second
one has to be applied if the signal frequency is not known. In that case, non-linear solvers have to be used, which greatly increases the computational effort. Once the best-fit
sine wave has been found, the residue of the fit operation gives the overall converter
error power. Refinements can be included in the method to take into account possible
harmonics or other tones at known frequencies in the fitting algorithm.
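A minimal sketch of the linear-in-the-parameters variant follows (ours): with the test frequency known exactly, amplitude, phase and offset come from a single least-squares solve, and the residue yields the overall error power. The final ENOB line uses the standard relation ENOB = (SINAD − 1.76)/6.02, which is common practice rather than something stated in the text:

    import numpy as np

    fs, f0, N = 1.0e6, 12_207.03125, 4096     # hypothetical coherent set-up
    n = np.arange(N)
    y = 0.8 * np.sin(2 * np.pi * f0 / fs * n + 0.3) + 1e-4 * np.random.randn(N)

    # Linear-in-the-parameters sine fit: y ~ a*cos + b*sin + c.
    M = np.column_stack([np.cos(2 * np.pi * f0 / fs * n),
                         np.sin(2 * np.pi * f0 / fs * n),
                         np.ones(N)])
    (a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)

    amp = np.hypot(a, b)                      # fitted tone amplitude
    resid = y - M @ np.array([a, b, c])       # overall converter error
    sinad = 10 * np.log10((amp**2 / 2) / resid.var())
    print(f"SINAD = {sinad:.1f} dB, ENOB = {(sinad - 1.76) / 6.02:.2f} bits")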



The other, almost ubiquitous, analysis technique is spectral analysis based on the fast Fourier transform (FFT). The FFT differs from the discrete Fourier transform (DFT) in that N, the number of acquired points (that is, the length of the acquisition register), has to be a power of two; in that case the DFT algorithm can be simplified to a great extent. Apart from that peculiarity, the concepts are identical. The Fourier transform (FT) of a signal is its exact representation in the frequency domain. Actually, a signal can be represented as the linear combination of an infinite number of base functions that are complex exponentials of the form $e^{j2\pi ft}$. The FT of a signal is nothing more than the coefficient set of the linear combination, that is, the values of the projection of the signal over each one of the complex exponentials. That information gives a clear representation of how the signal power is spectrally distributed.

For ΣΔ modulators, which rely on quantization-noise shaping to obtain their high resolution, the FFT is almost unavoidable. However, the FFT should be applied with care, because the simplicity of the result interpretation contrasts with the subtlety of the underlying concepts. For that reason, the next section is devoted to studying the main issues for the correct application of an FFT to ΣΔ modulators.

8.3.4 Applying an FFT with success
The first issue to consider is that performing an FFT over a finite number of points gives an approximation of the FT of the signal under study. Actually, if N points are acquired at a frequency facq, the outcome of the FFT is the Fourier series of the periodic signal of period N/facq that best approximates the acquired samples. Most of the time, however, the acquired signal does not have the required period. It may not even be periodic at all, due to noise or spurious components. For that reason, spectral leakage occurs. The signal components at frequencies other than the available frequency bins (k·facq/N, with k varying from zero to N − 1) will leak and spread across adjacent bins, making them unobservable. Actually, the obtained spectrum can
be considered as the FT of an infinite-length version of the analysed signal multiplied
by a rectangular signal with N ones and an infinite number of zeros. That signal is
called a rectangular window. The multiplication in the time domain corresponds to a
convolution in the frequency domain. So a spectral line at a given frequency (a Dirac
distribution) in the ideal FT of the signal will appear as a version of the rectangular
window spectrum centred at the same frequency. More exactly, what will appear in
the FFT output are the samples of the displaced rectangular window spectrum that
corresponds to the available frequency bins. This is illustrated in Figure 8.12.
If the spectral line exactly corresponds to one of the FFT bins it means that it can
be represented adequately by a Fourier series of length N. This corresponds to case
(a) in Figure 8.12. In that case, the rectangular window spectrum is sampled at its
maximum, and the rest of the samples exactly correspond to the nulls of the window
spectrum. However, if the spectral line falls between two FFT bins (case (b)), the
rectangular window spectrum is not sampled at its maximum on the main lobe. Part
of the missing signal power leaks into adjacent FFT bins that sample the rectangular
window sidelobes.



Figure 8.12 Spectrum of a rectangular window (a) for a coherent tone and (b) for a non-coherent tone (magnitude in dBFS versus frequency)

Coherent sampling is the first technique to limit these undesirable effects. It consists in choosing the test frequencies such that they correspond to FFT bins as exactly as possible. In practice, this can be implemented if the test signal generator can be synchronized with the ADC. It can be shown [16] that the test frequencies should be set to a fraction of the acquisition frequency:

$$f_{test} = \frac{J}{N}\,f_{acq} \qquad (8.7)$$

where N is the number of samples in the acquisition register and J is an integer, prime with N, that represents the number of test signal periods contained in the register. This choice also ensures that all the samples are evenly distributed over the test signal period and that no sample is repeated.
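The sketch below (ours) picks a test frequency according to Equation (8.7), searching for a J coprime with N that lands closest to a desired frequency:

    from math import gcd

    def coherent_ftest(f_wanted, f_acq, N):
        """Coherent test frequency J/N * f_acq closest to f_wanted,
        with J coprime with N, following Equation (8.7)."""
        j0 = max(1, round(f_wanted * N / f_acq))
        for d in range(N):                    # search outwards from j0
            for j in (j0 + d, j0 - d):
                if 0 < j < N // 2 and gcd(j, N) == 1:
                    return j, j * f_acq / N
        raise ValueError("no suitable J found")

    J, ftest = coherent_ftest(1000.0, 48_000.0, 4096)
    print(J, ftest)   # 85 periods in the register, ftest ~ 996.09 Hz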
However, it is not always possible to control the test frequencies with a sufficient
accuracy. Similarly, there may be spurious tones in the converter output spectrum
at uncontrolled frequencies. In those cases, a window different from the rectangular
one is required. Spectral leakage occurs because the analysed signal is not periodic
with a period N/facq . The idea behind windowing is to force the acquired signal to
respect the periodicity condition. For that to be done, the signal has to be multiplied
by a function that continuously tends to zero at its edges. As a result, the power
contained in the sidelobes of the window spectrum can be greatly reduced. The
window has to be chosen such that the leakage of all components present in the
ADC output signal falls below the noise floor and thus does not corrupt spectrum
observation. The drawback of such an operation is that the tones present in the output
spectrum are no longer represented by a sharp spectral line at one FFT bin. Indeed,
the main lobe of the window is always sampled by number of adjacent FFT bins
greater than one. As a result, frequency resolution is lost. There is a trade-off between
frequency resolution and sidelobe attenuation. Figure 8.13 represents the spectrum
of several windows sampled for a 1024-point FFT. Figure 8.13(a) shows how the
window would be sampled for a non-coherent tone that would fall exactly between



Figure 8.13 (a) A 1024-point FFT of four windows (rectangular, Hanning, Blackman-Harris and Rife-Vincent type II) in the worst case of non-coherent sampling (signal between two FFT bins) and (b) main lobes of the window spectra (window power in dB versus normalized frequency)

Figure 8.13(b) shows a close-up of the main lobes of the window spectra for a coherent tone. Notice that in Figure 8.13(b) there is one marker per FFT bin.
In order to limit spectral leakage, the authors in Reference 17 proposed to combine
sine-fit and FFT. A sine-fit is performed on the acquired register in order to evaluate
the gain and offset of the modulator. Then, an FFT is performed on the residue of
the sine-fit. As the high-power spectral line has been subtracted from the register,
the residue mainly contains noise, spurious components and harmonics. In most
cases, these components do not exhibit high-power tones, so a simple window or even a rectangular window can be used; the spectral leakage of these components should be buried below the noise floor. The overall spectrum (what the authors call a pseudo-spectrum) can be reconstituted by manually adding the spectral line corresponding to the input signal. The main drawback of this technique is obviously that it requires the computational effort of both the sine-fit and the FFT.
The proper application of an FFT requires that three parameters be determined: the number of samples in a register, the number of averaged registers and the window to be applied. The window type sounds too qualitative, and it is useful to divide it into four parameters: the main lobe width (for instance, 13 FFT bins for the Rife-Vincent window of Figure 8.13(a)), the window energy, the maximum sidelobe power and the asymptotic sidelobe power evolution. Figure 8.14 shows how these parameters
relate to the measurement objectives and to the set-up constraints through a number
of central concepts.
The required frequency resolution is defined by the need for tone discrimination and is affected by set-up limitations such as the frequency precision of the signal generator. For a given type of test, a number of tones are expected in the output spectrum. For instance, in an inter-modulation test, the user has to calculate, as a function of the input tone frequency, the expected frequencies of the inter-modulation and distortion tones.


Figure 8.14 Relating FFT parameters to test objectives and set-up constraints. Objectives (tone discrimination, lowest measurable tone power, noise spectral density resolution) and set-up constraints (signal leakage, stimulus frequency precision, noise leakage, expected noise power, input tone power, expected noise shape) are linked through the central concepts of frequency resolution, noise floor and noise dispersion to the FFT parameters: number of samples, number of averages and window (main lobe width, window energy, sidelobe power, sidelobe decay)

Similarly, expected spurious tones can be taken into account, such as 50 Hz (or 60 Hz) tones. All these components should be correctly discriminated by the FFT in order to perform correct measurements. Frequency resolution is primarily driven by the number of samples in the acquired register, but the window type is also of great
importance. Indeed, the main lobe width of an efficient window (from a leakage viewpoint) such as the Rife-Vincent window shown in Figure 8.13(a) is as large as 13 FFT bins. This means that the frequency resolution is reduced by a factor of 13 with respect to a rectangular window, whose main lobe is only one bin wide. In many cases,
though, few tones are expected in the output spectrum and the frequency resolution
issue can easily be solved by a judicious choice of the test frequency.
The noise floor is the concept of most importance. The power of a random signal spreads over a given frequency range. For a white noise, it spreads uniformly from d.c. to half the acquisition frequency (facq/2). What the FFT measures is actually the amount of noise power in a small bandwidth centred on each FFT bin. Obviously, the larger the number of samples acquired, the smaller that bandwidth and the lower the amount of noise that falls into it. The expected value for a noise bin is

$$\langle X_k \rangle = \sigma_{noise}\,\sqrt{\frac{2}{N\,E_{win}}} \qquad (8.8)$$

where σnoise is the standard deviation of the white noise, N is the number of samples in the acquisition register and Ewin is the energy of the applied window. Indeed, the window is applied to the whole output data, including the noise, and it influences the effective noise bandwidth. The energy of the window is simply calculated from the time-domain samples of the window (wk) by

$$E_{win} = \frac{\left(\sum_{k=0}^{N-1} w_k\right)^2}{N\,\sum_{k=0}^{N-1} w_k^2} \qquad (8.9)$$

On the other hand, the noise floor is related to the set-up constraints by the actual
noise power in the output data, which should be estimated a priori. The noise floor
has to be set to a value that enables the observation of the lowest expected tone power. In other words, if a tone 90 dB below full scale has to be detectable, the number of samples and the window energy have to be chosen such that the noise floor of the resulting FFT falls below −90 dB.
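The sketch below (ours; the Hanning window and the normalization convention are our own choices) checks the floor predicted by Equations (8.8) and (8.9) against a simulated acquisition of white noise:

    import numpy as np

    N = 2**14
    w = np.hanning(N)
    Ewin = w.sum()**2 / (N * (w**2).sum())     # window energy, Equation (8.9)

    sigma = 1e-4                               # white-noise standard deviation
    x = sigma * np.random.randn(N)

    # Noise bins normalized to the full-scale tone power reference, so the
    # rms bin magnitude is directly comparable with Equation (8.8).
    X = np.fft.rfft(w * x) / (w.sum() / np.sqrt(2))
    rms_bin = np.sqrt(np.mean(np.abs(X[1:N // 2])**2))

    print("Equation (8.8):", sigma * np.sqrt(2 / (N * Ewin)))
    print("simulated     :", rms_bin)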
The noise dispersion should also be taken into account. It can be shown that the random variable that corresponds to an FFT bin, and whose mean value is expressed in Equation (8.8), has a standard deviation of the same order as its mean value. As a result, in a representation of the spectrum in decibels of the full scale, random noise appears as a large band that goes from 12 dB above the expected power level down to tens of decibels below. Averaging the magnitude of the FFT bins over K acquisition registers helps to reduce the standard deviation of the noise FFT bins by a factor of √K. For a significant number of averages, the noise floor tends to a continuous line, which would be its ideal representation. Actually, the following equation can be used to derive the FFT parameters from the requirement on the lowest detectable tone:

$$-10\log\left(\frac{N\,E_{win}}{2\,P_{noise}}\right) + 20\log\left(1 + \frac{3}{\sqrt{K}}\right) = P_{spur} \qquad (8.10)$$

where Pnoise is the expected noise power of the converter, K is the number of averaged registers and Pspur is the power of the minimum spur that has to be detected. Notice that a full-scale tone is taken as the power reference in Equation (8.10). The last logarithmic term in Equation (8.10) stands for the dispersion of the noise floor. Figure 8.15 is intended to facilitate the comprehension of Equation (8.10).
The dispersion term should be maintained below the variations of the noise spectral density that have to be detected. For instance, if an increase of 6 dB in the noise density due to flicker noise has to be detected, the noise dispersion term should be lower than 6 dB, which implies averaging K = 10 FFT registers. Note that if the actual noise
power is higher than expected, the noise floor of the obtained FFT is increased. As
a result, the minimum detectable tone is higher. To compensate for this effect, the
number of points in the register should be increased to decrease the noise floor. An
extra term may be introduced into Equation (8.10) in order to account for unexpected
noise increases.
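A short sketch (ours) of the averaging step: the magnitude spectra of K registers are averaged bin by bin, which shrinks the dispersion term of Equation (8.10):

    import numpy as np

    N, K, sigma = 2**12, 10, 1e-3
    w = np.hanning(N)

    # Average the FFT bin magnitudes over K acquisition registers.
    acc = np.zeros(N // 2 + 1)
    for _ in range(K):
        acc += np.abs(np.fft.rfft(w * sigma * np.random.randn(N)))
    avg = acc / K

    bins = avg[1:N // 2]
    # Relative spread shrinks roughly as 1/sqrt(K) with respect to K = 1.
    print("relative bin std:", bins.std() / bins.mean())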
Returning to Figure 8.14, the concept of signal leakage has already been explained.
Considering the maximum input tone power and the frequency precision of the signal
generator available, the proper window should be selected such that the sidelobe power
falls below the noise floor. Notice that if the frequency precision of the generator is
better than half the FFT bin bandwidth, facq /(2N), the sidelobe power requirements


Figure 8.15 FFT noise floor and noise dispersion (power in dBFS versus FFT bins, up to bin N/2). The plot shows a full-scale tone, the noise band with its upper 3σ limit 20 log(1 + 3/√K) above the noise floor −10 log(N·Ewin/(2·Pnoise)), a tone buried in the noise, and the minimum detectable tone

Taking that case to an extreme, if coherent sampling is available to the test set-up, no signal leakage occurs.
For ΣΔ converters, however, another leakage concept may have to be taken into account: noise leakage. As was said in Section 8.3.1, the non-idealities of ΣΔ converters are located mainly in the analogue part, which is the ΣΔ modulator. In that sense, performing the FFT on the modulator bit-stream gives more insight into the functioning of the ΣΔ modulator, because it is possible to check the correctness of the noise shaping at high frequencies (beyond the cut-off frequency of the decimation filter). If the FFT is performed on the output of the decimation filter, a number of samples N has to be acquired at the filter output frequency (facq) in a high-resolution format (for instance, the output of a 24-bit precision filter can be in a 32-bit format). If it is performed on the modulator bit-stream, a number of samples N′ has to be acquired at the sampling frequency of the modulator (which is equal to the filter output frequency multiplied by the OSR) in a low-precision format (typically 1 bit). Taking into account that the same non-idealities have to be detected in the baseband, the same frequency resolution has to be selected in both cases. Hence, the FFT of the modulator output bit-stream requires OSR times more points than the acquisition at the filter output. The acquisition time is thus the same in both cases, and the memory requirements should be of the same order, owing to the difference in the sample formats.
The drawback of performing an FFT on the modulator bit-stream is that it puts more
stress on the choice of the window that has to be applied to the data register. Indeed
in most ADCs, the noise spectral distribution is almost flat and its power is far lower
than full-scale signal power. As a result, noise leakage has little or no impact on the
output spectrum. This reasoning is also valid for a ΣΔ converter if data is acquired at the output of the decimation filter. However, if the FFT is performed directly at the modulator output, the spectral density of the modulator quantization noise is not flat at all, and the leakage of high-frequency noise into the modulator baseband could severely corrupt the FFT analysis. The window then has to be chosen not only on the basis of test-signal leakage but also on the basis of spectrally shaped noise leakage. In other words, the performance of the window depends on the first sidelobes' attenuation for signal (or tone) leakage, and on the asymptotic attenuation for noise leakage. It can be seen in Figure 8.13 how the Blackman-Harris and Rife-Vincent windows achieve a low sidelobe power. On the other hand, the Hanning window induces more signal leakage, but with the advantage that its sidelobe power greatly decreases with frequency. This window outperforms the Blackman-Harris window at relatively low frequencies. It may thus be more suitable for avoiding the high-frequency noise of the ΣΔ modulator output bit-stream leaking into the baseband. This may be particularly true if a combination of sine-fit and FFT is performed, as the fundamental component, which is most likely to exhibit visible leakage, is removed. In that case, noise leakage becomes the dominant component, unless there are high-power spurious tones. In order to choose the window properly, it can be useful to simulate a white noise filtered by the theoretical NTF of the modulator and to perform an FFT with the candidate windows. That allows checking whether the shape of the noise floor in the baseband is higher than expected.
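Following that suggestion, the sketch below (ours) shapes white noise with a theoretical second-order NTF, (1 - z^-1)^2, and compares the baseband floor obtained with two candidate windows (the plain Blackman window is used here as a stand-in for the Blackman-Harris family available in more complete libraries):

    import numpy as np

    N = 2**16
    e = np.random.randn(N) / 4                 # white quantization-noise model
    shaped = np.convolve(e, [1, -2, 1])[:N]    # NTF = (1 - z^-1)^2

    for name, w in [("hanning", np.hanning(N)),
                    ("blackman", np.blackman(N))]:
        X = np.abs(np.fft.rfft(w * shaped)) / (w.sum() / 2)
        baseband = X[1:N // 256]               # OSR = 128 -> baseband bins
        print(f"{name:9s} baseband floor:",
              round(float(20 * np.log10(np.mean(baseband))), 1), "dBFS")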
Summarizing the conclusions on the characterization of ΣΔ converters, it can be said that it requires an extensive use of spectral analysis (that is, the FFT) over a range of input conditions (signal amplitude and frequency). Furthermore, the FFT has to be carried out with care, and the test engineer should know precisely what the limitations of the test set-up are and what has to be measured. Concerning the static metrics, the effective gain and offset also have to be included. Odd effects specific to the non-linear dynamics of ΣΔ modulators, such as dead zones or limit cycles, may also be found in the time domain.

8.4 Test of ΣΔ converters
The term test is commonly used for characterization. Indeed, functional test is the most-used practice in the field of mixed-signal circuit production test and is very similar to characterization. It consists in measuring a given set of datasheet parameters and verifying that they lie within the specified limits. Nevertheless, the correct definition of test is broader than that of characterization. Test should represent any task that ensures, directly or indirectly and within a given confidence interval, that the circuit is (and not just performs) as expected. For instance, if it were possible to check that the geometry of all process layers is the expected one and that the electrical process parameters are within specification over the whole wafer, circuit performance would be ensured by design. As was said before, production test for mixed-signal circuits, and in particular for ΣΔ modulators, is usually functional: a subset of key specifications is measured, and the rest of the datasheet parameters are assumed to be correlated with those measured parameters. It should be clear that functional test is not the perfect test, as it does not guarantee that the circuit is defect free. There exist other alternatives, none of which is perfect either, but which may have some added value with respect to characterization-like tests.

8.4.1 Limitations of the functional approach
In the context of SoCs, the traditional functional approach is much more costly than for stand-alone parts. For instance, an embedded modulator may not have its input and output available. The functional test of the complete SoC may well hit the complexity barrier, just as for digital integrated circuits. Solutions thus have to be found to circumvent such issues.
Providing access to the nominal input of the modulator under test may be far from
easy in an SoC context. If the input of the modulator is not an input of the SoC, the
test signal cannot be sent directly through the pads of the chip. One solution could be
to implement a multiplexer at the modulator input so as to be able to select between
two input paths: the nominal one and the test one that would be connected to the
output pads. This obviously requires increasing the number of pads for test purposes.
Moreover, the test of high-resolution modulators requires the use of a test stimulus
of even higher resolution (both in terms of noise and distortion). The signal bus and
devices necessary to send the test signal to the modulator input thus need to maintain
that high-resolution criterion. In other words, the multiplexer at the modulator input
has to be designed for a precision higher than that of the modulator under test. In that sense, standard approaches such as the IEEE 1149.4 test bus [18, 19] do not seem of much help in the case of ΣΔ modulators.
The modulator and decimation filter outputs are digital, which means that these signals can be sent to a test pad through any standard communication protocol with no loss. Standard approaches such as the IEEE 1149.1 test bus [20] may be used. At first sight, one may think that observability is not much of an issue. The difficulties arise with the amount of data to be shifted off chip. As a consequence, compression techniques can be implemented on chip, and the data may also be stored in an on-chip memory. Though SoCs are likely to include a digital signal processor (DSP) and RAM, the use of these resources for modulator test has two non-negligible drawbacks. The first one is test complexity, as these resources have to be configured to perform both their normal operation and the test of the modulator. The second one is that it impedes the parallel testing of different parts of the SoC, which is one of the recommended techniques to speed up the test of such complex devices.
Solutions to improve the testability of mixed-signal circuits in SoCs are necessary in general, and they are of particular importance for ΣΔ converters.

8.4.2 The built-in self-test approach
Much of the research done to improve the testability of mixed-signal circuits targets
built-in self-test (BIST) solutions. The concept is quite explicit: the objective is to
carry out most of the test on chip, from test signal generation to data analysis. An ideal or full BIST would require only a digital start signal and would output a digital PASS/FAIL signal.


Figure 8.16 On-chip mostly digital oscillator with selectable frequency: a digital resonator built from delay elements (z^-1) with selectable coefficients K1 ... KN embeds a digital ΣΔ modulator, and a low-pass filter smooths the output bit-stream

However, most reported solutions lack some of these functions and hence are actually partial BISTs.
8.4.2.1 Functional BIST
With respect to ΣΔ converters, some solutions have been proposed that can be applied to ADCs in general. Obviously, these are functional tests that suffer the same limitations as characterization techniques. Histogram-based BIST [21-23] and servo-loop-based BIST [24] are therefore not well adapted to ΣΔ modulators. The most interesting solutions in that field are undoubtedly those proposed by Gordon Roberts' team. They proposed two solutions for the on-chip generation of precise sine waves and another for the on-chip evaluation of some converter characteristics.
The first one [25] consists in building a ΣΔ digital oscillator by embedding a ΣΔ attenuator in a digital resonator, as seen in Figure 8.16. The selection of the attenuation coefficient (Ki) defines the oscillation frequency. The digital ΣΔ output bit-stream encodes a very precise sine-wave tone (defined by the design of the digital oscillator). Then, an analogue filter removes a significant amount of the quantization noise. This scheme has the advantage of being mostly digital and is thus very robust and testable.
Lin and Liu [26] modified the technique so as to be able to implement multitone waveforms. The core of their idea is to use time-division multiplexing to accommodate the additional tones. In order to maintain the same efficiency as the original scheme, the master frequency has to be raised by a factor M (M being the number of tones). Similarly, the order of the digital ΣΔ modulator that can be seen in the loop in Figure 8.16 also has to be multiplied by a factor M. Actually, each delay element in the original modulator has to be replaced by M delay elements. This scheme is thus practical only for a reduced number of tones. The other solution [27, 28], sketched in Figure 8.17, consists in recording in a recycling register (that is, a 1-bit shift register whose output is fed back to its input) a portion of a ΣΔ encoded signal.
The advantage with respect to the previous proposal is the flexibility of the encoded signal, as the only a priori restriction is that the wanted signal be periodic, with a maximum period equal to the length of the register. On the other hand, the drawbacks concern the trade-off between precision and extra area.

Figure 8.17 Register-based oscillator (a recycling shift register loaded with a bit-stream from a software ΣΔ modulator, followed by a low-pass filter)

Indeed, the wider the register, the more precise the encoded signal and the larger the required silicon area. Nevertheless, the authors also proposed to reuse the boundary-scan register as the generator shift register. This would provide a potentially large register with a low overhead. Alternatively, a RAM available on chip could also be reused. Notice that it is important to optimize the recorded bit-stream to obtain the best results. The bit-stream recorded in the shift register is a portion of the output bit-stream of a software ΣΔ modulator encoding the wanted test stimulus. Optimization consists in choosing the best-performing bit-stream portion over the total bit-stream and in slightly varying the input signal parameters of the software ΣΔ modulator to get the best results. Indeed, the SFDR of such a generator can vary significantly with the input signal amplitude. These proposals are well developed, and alternative generation methods for the bit-stream have been shown to improve the obtained signal precision.
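As a rough illustration of the register-based approach, the sketch below (ours; a first-order software ΣΔ modulator is used for brevity, whereas the cited works rely on higher-order encoders) generates a periodic 1-bit pattern encoding a sine wave:

    import numpy as np

    def first_order_sdm(x):
        """Minimal 1-bit, first-order software sigma-delta modulator."""
        integ, y, bits = 0.0, -1.0, []
        for xn in x:
            integ += xn - y                   # integrate the feedback error
            y = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
            bits.append(y)
        return np.array(bits)

    L, J = 4096, 127                      # register length, periods (coprime)
    sine = 0.5 * np.sin(2 * np.pi * J * np.arange(L) / L)
    bits = first_order_sdm(sine)          # pattern for the recycling register

    # Check the encoded tone: the FFT of one register period is coherent.
    X = np.abs(np.fft.rfft(bits)) / (L / 2)
    print("tone amplitude:", round(float(X[J]), 3))   # ~0.5 of full scale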
In Reference 29, the authors took the idea of Gordon Roberts' team and built a fourth-order ΣΔ oscillator in a field-programmable gate array to demonstrate the validity of the approach. Their oscillator was designed to avoid the need for multipliers and required around 6000 gates. They achieved a dynamic range of more than 110 dB for a tone at 4 kHz (the modulator master clock being set to 2.5 MHz).
For the output data analysis, Roberts' team also proposed a solution to extract some dynamic parameters in association with their sine-wave generation mechanism. In Reference 30 they compare three possible techniques. The first one is straightforward: it consists in the implementation of an FFT engine. Although it provides a good precision, it is not affordable in the majority of cases. The second one consists in using a standard linear regression to do a sine-fit on the acquired data. The same master clock is used for the sampling process and the test stimulus generation, so the input frequency is precisely known, which avoids the need for a non-linear four-parameter search. The precision of the SNR calculation is similar to that of the FFT, but less hardware is necessary. However, some multiplications need to be done in real time, and some memory is also required to tabulate the values of the sine and cosine at the test stimulus frequency. The third and last proposed solution is to use a digital notch filter to remove the test signal frequency component and calculate the noise power, and a selective bandpass filter to calculate the signal power. The required hardware
to implement this method is less than for the other two solutions, as no memory is needed to tabulate cosine values and no real-time multiplication is required. The price to be paid is a small reduction in SNR precision; also, the test time is slightly increased to account for the filter settling time. Actually, the more selective the filter, the better the SNR precision but the longer the settling time. Extensions of this work [31] also showed that it is possible to extract harmonic distortion and IMD with similar digital filtering.
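A sketch of the filter-based estimate is given below (ours; the biquad notch coefficients are written out explicitly, and the selective bandpass stage is approximated by subtracting the notch output from the input, so the numbers are only indicative):

    import numpy as np
    from scipy.signal import lfilter

    fs, f0, N = 1.0e6, 10e3, 2**15
    n = np.arange(N)
    y = np.sin(2 * np.pi * f0 / fs * n) + 1e-3 * np.random.randn(N)

    # Second-order notch at f0: zeros on the unit circle, poles at radius r.
    w0, r = 2 * np.pi * f0 / fs, 0.995
    b = [1.0, -2 * np.cos(w0), 1.0]
    a = [1.0, -2 * r * np.cos(w0), r**2]
    noise = lfilter(b, a, y)[N // 4:]        # discard the settling transient

    sig = y[N // 4:] - noise                 # crude complementary tone estimate
    snr = 10 * np.log10(np.mean(sig**2) / np.mean(noise**2))
    print(f"SNR estimate: {snr:.1f} dB")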
Finally, Ong and Cheng [32] proposed to partially solve the problem of test stimulus generation by duplicating the modulator feedback DAC to input digital sequences. Similar to the scheme proposed by Gordon Roberts, a digital ΣΔ-like bit-stream can be used as a test stimulus. The authors argue that sine waves encoded in that way can be used to functionally test the modulator, performing an FFT on its output. However, a potential limitation of the technique resides in the fact that the digital test sequence does not pass through an anti-aliasing filter and thus contains a large amount of high-frequency noise. This high-frequency noise may interact with the modulator non-linear dynamics, up to the point of causing instability. In any case, it still has to be demonstrated that the functional metrics measured with such a test stimulus match those measured in a conventional way.
8.4.2.2 Defect-oriented BIST
Defect-oriented BISTs offer more potential than functional BISTs in terms of on-chip resource requirements, but they are usually very architecture dependent. The test methodology may be the same for all ΣΔ modulators, but in any case its implementation requires particularizing the solution. Only a few examples of structural tests for ΣΔ modulators can be found in the literature. In Reference 33, Mir et al. demonstrated that ΣΔ modulators can be good candidates for reconfiguration schemes. Indeed, the authors reconfigured the double loop of a bandpass ΣΔ modulator into two identical single loops. The test consisted of comparing the outputs of the two single loops, which should be identical for any input signal. This comparison reused the modulator comparator. The fault simulation carried out by the authors produced good results, with a fault coverage that converged to 100 per cent for a small tolerance window. However, the simulated faults consisted only of catastrophic stuck-on or stuck-open transistors, shorts and opens in routing lines, and parametric variation of capacitors. More work is thus required to identify which kind of test stimulus can maximize fault coverage for a broader scope of fault classes.
A different kind of structural test methodology has been applied by Huertas et al. [34]. This methodology is known as oscillation-based test and consists in converting the circuit under test (in this case ΣΔ modulators) into one or more oscillators. Guidelines are given to build the oscillator such that its oscillation frequency and amplitude are sensitive to defects. The main advantage of this solution is that it solves the problem of providing a precise test stimulus, as no input is necessary. The solution is well developed at a methodological level, which makes it applicable to virtually any modulator architecture. However, some drawbacks are unavoidable. One is that it is difficult, with only two signatures (oscillation amplitude and frequency), to cover the scope of all possible faults, from catastrophic to parametric deviations; the solution is thus limited by the feasibility of realistic fault simulations. Another drawback is that the non-linear dynamics of ΣΔ modulators may alter the oscillation results predicted by analytical calculations.
De Venuto and Richardson [35] proposed to use an alternative input point to test ΣΔ modulators. Their intention is to inject a test signal at the input of the modulator quantizer. This test signal is processed by the modulator just like the quantization noise. In that sense, the authors argue that they can determine the modulator NTF accurately. Although it is true that many defects or non-idealities affect the modulator NTF and should thus be detected, others are intrinsically related to the input signal. The best example is given by those defects that cause harmonic distortion, such as non-linear settling of the first integrator. Such a defect would not be detected by the proposed method. Similarly, it is worth wondering whether the injection of a test signal at that point significantly alters the non-linear dynamics of the modulator under test. In particular, much care should be taken for high-order modulators to ensure that they are not driven into instability. Nevertheless, the main advantage of the approach is that it is applicable, in principle, to any modulator architecture.
Ong and Cheng [36] also proposed another solution, based on the use of pseudo-random sequences to detect integrator leakage in second-order modulators. Their method relies on a heuristic search to find the region of the output spectrum that exhibits sensitivity to integrator leakage. The fact that the proposed solution was validated only through high-level simulation of a simple z-domain model makes its reliability questionable.

8.5 Model-based testing

8.5.1 Model-based test concepts
Model-based testing has been developed with the objective of reducing the number of measurements that have to be performed to exhaustively characterize a device. For a ΣΔ converter, all performance figures (SNR, THD, CMRR, etc.) should be measured at several operating conditions, varying temperature and bias, and at several input conditions, varying the test stimulus amplitude(s) and frequency(ies). The authors of Reference 37 pointed out that a large number of these performance figures are correlated. Indeed, it is reasonable to think that the THD for a given sine wave will be strongly correlated with the THD for another sine wave at another frequency, and with the THD for the same sine wave at another temperature. They concluded that, in many cases, a model can be built that relates the large number of performance figures to a much reduced number of independent parameters. Retrieving the independent parameters would give access to the whole set of performance figures, but at a cost much lower than that of measuring all the performance figures in all the operating and stimulus conditions.
The key point of the approach was how to derive the correlations between the
performance figures and the independent parameters (i.e., how to build the model).



A commonly used method to simplify non-linear problems is to consider that only small variations occur around the nominal operating point (set by design). For characterization purposes, it can be considered that performance variation from one circuit to another is due to slight process variations. Considering the slight variations to be a perturbation of the system, a first-order approximation can lead to sufficiently accurate results. That is the basis of linear modelling. The mathematical description of model-based testing makes use of matrices. Let s be the vector of M performance figures and x the vector of P independent parameters. The model can be described by an M × P matrix A, which relates each of the specification figures to a linear combination of the parameters, and by a vector s0 of M performance figures that corresponds to the values at the nominal operating point. The following equation can thus be written:

$$s = A\,x + s_0 \qquad (8.11)$$

The assumption of model-based testing is thus that M is much greater than P. If a subset sb of P performance figures is measured such that the reduced square matrix Ab is of rank P, the parameters x of the model can be retrieved unambiguously, and the rest of the specification figures can then be calculated by applying the complete model. We have

$$x = A_b^{-1}\,(s_b - s_{0b})$$
$$s = A\,A_b^{-1}\,(s_b - s_{0b}) + s_0 \qquad (8.12)$$

Owing to noise considerations and the limited precision of the performance measurements, it may be necessary to perform measurements over a set of specifications wider than P and retrieve the model parameters in a least-squares approach. The selection of the number of specifications to be measured is an optimization problem that greatly depends on the model and the way it was built.
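The sketch below (ours; random numbers stand in for a real model) applies Equation (8.12) in its least-squares form, retrieving the parameters from a measured subset and predicting the full specification vector:

    import numpy as np

    rng = np.random.default_rng(0)
    M, P, Q = 40, 4, 8                  # specs, parameters, measured subset

    A = rng.normal(size=(M, P))         # hypothetical linear model
    s0 = rng.normal(size=M)             # nominal operating-point specs
    x_true = 0.01 * rng.normal(size=P)  # small process-induced deviations
    s_all = A @ x_true + s0             # Equation (8.11)

    # Measure only Q > P specifications (with measurement noise) and
    # retrieve the parameters in a least-squares sense.
    idx = rng.choice(M, Q, replace=False)
    s_meas = s_all[idx] + 1e-4 * rng.normal(size=Q)
    x_hat, *_ = np.linalg.lstsq(A[idx], s_meas - s0[idx], rcond=None)

    s_pred = A @ x_hat + s0             # Equation (8.12)
    print("max prediction error:", np.abs(s_pred - s_all).max())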
One of the ways to derive an efficient model consists in identifying the mechanisms that can potentially impact the specifications that have to be tested. Obviously, this requires knowledge of the exact circuit architecture and implementation, together with a deep understanding of its functioning. Moreover, statistics on the process variations should also be available, as most parametric failures are due to abnormal drifts of some of these parameters. A systematic approach to the model derivation would then consist in performing a sensitivity analysis around the normal operating point. All parameters that significantly impact the specifications would be selected to form the final matrix A and the operating-point performance s0. The main shortcoming of such an approach is that the sensitivity analysis can only be applied to the parameters selected by the designer. It thus requires a deep understanding of the architecture and its non-idealities in order to avoid test escapes. Conversely, a fully systematic approach is unfeasible, as all the process and device parameters would have to be considered in the sensitivity analysis. This would undoubtedly provide a number of parameters much higher than the number of specification figures that can be measured.
Alternatively, the model can also be derived in an empirical manner [38]. That operation is often named blind modelling. From a statistically significant set of N devices, an empirical model is derived. The number of devices that have to be fully characterized to generate the model puts a fundamental limit on the maximum achievable model order. The complete specification vectors s of the N devices are concatenated to form an M × N matrix. The average (over the N devices) specification vector s0 is subtracted from each column of that matrix. The singular value decomposition of the resulting matrix allows identification of the model. This can be easily understood, since singular value decomposition defines the mapping of a vectorial space defined by arbitrary vectors (the specification vectors) to a vectorial space defined by orthogonal vectors (the model parameter space). A possible optimization of the model order consists of selecting only those parameters whose singular value is above the variations related to measurement noise. The model quality is determined by a lack-of-fit figure of merit, which is similar to the standard deviation of the residues in a least-squares fit. The main advantage of such an approach is that it can be easily generalized, as the methodology does not require any particular knowledge of the circuit under test. However, the modelling method still assumes that the variations are small around an ideal operating point. One drawback of the blind modelling approach with respect to the sensitivity-analysis approach is that it provides no insight into the validity range of the linear assumption.
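A compact sketch of blind modelling (ours): the centred specification matrix is decomposed by SVD, and the model order is truncated at the singular values rising above the measurement-noise level:

    import numpy as np

    rng = np.random.default_rng(1)
    M, Ndev, P = 40, 200, 3            # specs, devices, true model order

    # Hypothetical characterization data: a rank-P mechanism plus noise.
    A_true = rng.normal(size=(M, P))
    S = A_true @ (0.01 * rng.normal(size=(P, Ndev)))
    S += 1e-4 * rng.normal(size=(M, Ndev))
    s0 = S.mean(axis=1, keepdims=True)

    U, sv, Vt = np.linalg.svd(S - s0, full_matrices=False)
    order = int(np.sum(sv > 10 * sv[-1]))   # crude noise threshold
    A_model = U[:, :order]                  # empirical model directions
    print("selected model order:", order)   # expect P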
Examples of model-based test applied to ADCs can be found in the literature [39, 40]. However, in these cases the model is used primarily to reduce the number of measurements for static test. In Reference 39, a linear model is used to measure the INL and DNL for a reduced number of output codes only, and to extrapolate the maximum INL and DNL. In Reference 40, a linear model is applied to the histogram test: all the output codes are measured, but the number of samples per code is relaxed, and the model is used to draw the INL and DNL information out of the noisy code density obtained. In any case, neither of the proposals makes much sense for ΣΔ converters, as they only consider static performance.
Despite its potential for test time (and thus test cost) reduction, model-based
testing does not relax the test requirements. The measurements to be performed are the
same measurements as for characterization, only their number is reduced. However,
model-based testing has a great potential for design-for-test (DfT). The brute-force
approach described above has the advantage of generality: the methodology can be
applied to any circuit and thus to any ΣΔ converter. However, taking into account the
structure of the circuit under test may allow a more efficient use of the test resources.
On the one hand, the test stimuli requirements could be relaxed by adding specific
and easy-to-perform measurements to the initial measurements space. The additional
measurements could be used to retrieve the model parameters in the same manner
as above. On the other hand, the data analysis could also be simplified to some
extent. Test is not the same as characterization and the objective of a test is not to
explicitly provide performance figures. In a functional test, which most resembles
characterization, performance measurements are compared to a tolerance window
and the test outcome is a Pass/Fail result. In order to simplify model-based test for
DfT purposes, it would be possible to map the specification tolerance windows onto
the parameter space. The Pass/Fail result could thus be obtained a priori without the
need of calculating back the whole set of specifications. The operation described in
the second line of Equation (8.12) would be useless.



The drawback of this adaptation of model-based testing to DfT is a loss in the
generality of the approach. Creativity, and thus some research, is required to find
the extra measurements because they must be sensitive to the model parameters.
This requires having a deep understanding of the device and its non-ideal behaviour.
Actually, the way to find the extra measurements is the opposite of the brute-force
approach. Instead of deducing the model from the measurement space, a meaningful
model is used and the measurements are tailored to retrieve the associated parameters.
The term meaningful is used here to represent the fact that the model has to be related
to the circuit behaviour and its non-idealities while for blind modelling the resulting
parameters make no explicit sense to the user.
We will try to clarify these concepts with the study of two particular adaptations of the model-based approach to BIST schemes for ΣΔ modulators.

8.5.2 Polynomial model-based BIST
One example of a smart model-based test applied to ΣΔ converters is proposed by Sunter and Nagi in Reference 41. Their scheme is actually commercialized by LogicVision, which is the main reason for qualifying it as a successful example, given the industry's preference for functional tests, in particular for analogue parts. In their approach, the ΣΔ converter transfer function is modelled as a third-order polynomial. Hence, the model parameters to be found are the four coefficients of the polynomial, and a minimum of four measurements thus have to be performed. In the original paper that introduced the technique, the test stimulus was a ramp covering the modulator full scale in n output samples. Ideally, n should be equal to the number of output codes: 2^Nb − 1 for an Nb-bit converter. Four syndromes, S0, S1, S2 and S3, are calculated by simply accumulating the converter output over the four quarters of the converter full scale, as illustrated in Figure 8.18. Simple arithmetic is then used to retrieve performance specifications such as gain, offset, second-harmonic amplitude and third-harmonic amplitude.
Figure 8.18 Polynomial model-based BIST: an external ramp generator drives the converter under test, whose output code (ideal and actual responses to the ramp, from 0 to 2^Nb − 1) is accumulated over four equal time segments to form the four test syndromes S0, S1, S2 and S3



Let the polynomial representing the transfer function be

$$y = b_0 + b_1 x + b_2 x^2 + b_3 x^3 \qquad (8.13)$$

The authors demonstrate that the coefficients of the polynomial can be written as a function of the acquired syndromes:

$$\begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \end{bmatrix} =
\begin{bmatrix}
1/n & 0 & -4/(3n) & 0 \\
0 & 4/n^2 & 0 & -16/(3n^2) \\
0 & 0 & 16/n^3 & 0 \\
0 & 0 & 0 & 128/(3n^4)
\end{bmatrix}
\begin{bmatrix}
1 & 1 & 1 & 1 \\
-1 & -1 & 1 & 1 \\
1 & -1 & -1 & 1 \\
-1 & 3 & -3 & 1
\end{bmatrix}
\begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix} \qquad (8.14)$$

From the model parameters, performance specifications can be retrieved. Indeed, a sine wave of the form x = A cos(ωt), of amplitude A = n/2, distorted by the third-order polynomial transfer function, can be written as

$$y = \left(b_0 + b_2\frac{n^2}{8}\right) + \left(b_1\frac{n}{2} + b_3\frac{3n^3}{32}\right)\cos(\omega t) + b_2\frac{n^2}{8}\cos(2\omega t) + b_3\frac{n^3}{32}\cos(3\omega t) \qquad (8.15)$$

Performance parameters such as the offset, gain, and second and third harmonics (noted A2 and A3 in the following) can be retrieved from this equation. After inverting the matrices in Equation (8.14) and combining the result with Equation (8.15), the proposed test can be formalized according to Equation (8.12), obtaining:

$$\begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \\ \mathrm{offset} \\ \mathrm{gain} \\ A_2 \\ A_3 \end{bmatrix} =
\begin{bmatrix}
n/4 & -3n^2/32 & 7n^3/192 & -15n^4/1024 \\
n/4 & -n^2/32 & n^3/192 & -n^4/1024 \\
n/4 & n^2/32 & n^3/192 & n^4/1024 \\
n/4 & 3n^2/32 & 7n^3/192 & 15n^4/1024 \\
1 & 0 & n^2/8 & 0 \\
0 & n/2 & 0 & 3n^3/32 \\
0 & 0 & n^2/8 & 0 \\
0 & 0 & 0 & n^3/32
\end{bmatrix}
\begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \end{bmatrix} \qquad (8.16)$$
The reader should notice that this matrix inversion is unnecessary as Equation (8.14)
allows us to retrieve the polynomial coefficients directly from the syndromes. It has
been done only to illustrate that the proposed test can be seen as a model-based test.
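The sketch below (ours; the matrices are transcribed from Equations (8.14) and (8.15) as reconstructed above) computes the four syndromes from a simulated ramp response and retrieves the polynomial coefficients and harmonic amplitudes:

    import numpy as np

    n = 4096
    x = np.linspace(-n / 2, n / 2, n)          # full-scale ramp input
    b_true = [0.5, 1.001, 2e-5, 3e-9]          # hypothetical transfer function
    y = sum(bk * x**k for k, bk in enumerate(b_true))

    # Four syndromes: sums of the output over the quarters of full scale.
    S = np.array([q.sum() for q in np.split(y, 4)])

    # Equation (8.14): triangular scaling matrix times a sign matrix.
    D = np.array([[1/n, 0, -4/(3*n), 0],
                  [0, 4/n**2, 0, -16/(3*n**2)],
                  [0, 0, 16/n**3, 0],
                  [0, 0, 0, 128/(3*n**4)]])
    T = np.array([[1, 1, 1, 1],
                  [-1, -1, 1, 1],
                  [1, -1, -1, 1],
                  [-1, 3, -3, 1]])
    b = D @ T @ S
    print("recovered coefficients:", b)        # close to b_true

    # Harmonic amplitudes for a full-scale sine, from Equation (8.15).
    print("A2 =", b[2] * n**2 / 8, " A3 =", b[3] * n**3 / 32)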
The application of the scheme described above to ΣΔ modulators is particularly appealing. First of all, the four syndromes are acquired by accumulating the converter output; for a ΣΔ converter, the operation can be performed directly on the modulator bit-stream and thus only requires up-down counters. Moreover, it can be shown that some non-idealities, such as amplifier settling error, map onto the transfer function in a manner that is quite accurately approximated by a low-order polynomial. In some
way, the use of a third-order polynomial seems justified for ΣΔ modulators. The authors pointed out in their original paper that the accuracy of the method could be compromised if the actual transfer function presents significant components of degree higher than three, as could be the case if clipping occurs. In those cases, however, they state that the THD can still be accurately estimated by summing the squares of the second and third harmonics calculated according to the proposed test. The validity range of the model is thus questionable, particularly in the context of a defective device. The issue is not directly addressed in Reference 41, but some empirical insight into the reliability of the model is actually provided: the proposed test is performed on a group of commercial modulators, and it is shown that the obtained results are in accordance with standard tests. Another fundamental limitation of the method is obviously that it only addresses effects that can impact the modulator distortion. In terms of structural test, the fault coverage is thus intrinsically reduced. For instance, a defect could greatly alter the d.c. gain of an amplifier. A ΣΔ modulator with such a defect would exhibit integrator leakage: part of the quantization noise would leak into the modulator baseband. The modulator could be strongly out of specification in terms of SNR, but the test proposed in Reference 41 would not detect it.

Later work by Roy and Sunter [42] extends the solution to an exponential staircase that can be partially generated on chip. This solution requires precise passive filtering that has to be realized off chip. Actually, the authors speak of a built-out self-test, and this may be an issue in the context of an SoC. Indeed, the proposed scheme faces the same signal-integrity problem as a functional test.

8.5.3 Behavioural model-based BIST
Another valuable approach is to use a behavioural model. Such an approach has been proposed in References 43-45. The first argument in favour of behavioural model-based test of ΣΔ modulators is that behavioural models are almost mandatory in the design flow (see Figure 8.19). Indeed, the non-linear dynamics of ΣΔ modulators make analytical studies overwhelmingly complex: no closed-form expression can be derived that relates performance figures such as THD or SNR to design parameters. Unfortunately, electrical simulations of a complete ΣΔ modulator are far too time-consuming to allow design-space exploration. Hence, designers have been forced to build behavioural models in a variety of high-level languages such as MATLAB (and its Simulink extension) [46, 47], VHDL-AMS and even standard VHDL [48]. These models decompose modulators into functional macros that take into account most of the effects that are known to influence performance [49]. The validity of the model for test purposes is thus ensured by the fact that it is used precisely to explore the design space over a wide range of values. In addition, the model elaboration does not represent any extra cost. From a functional test viewpoint, the determination of the behavioural parameters that characterize performance degradation mechanisms allows us to test the modulator performance indirectly. From a structural test viewpoint, the behavioural parameters are associated with performance degradation mechanisms of the functional macros (integrators, comparators, switches, and so on).
Figure 8.19 Behavioural model in the design and test flow. The flow goes from performance specifications (SNR, THD, SFDR, PSRR) through the behavioural model (macro-blocks division), the electrical implementation (macro-block architecture choice and transistor sizing, guided by designer expertise, heuristic search and high-level simulation) and the physical implementation (layout/fabrication, where real defects enter), with verification by electrical simulation and characterization/functional test; behavioural parameters include amplifier d.c. gain, slew rate and bandwidth, comparator hysteresis, and capacitor matching and linearity

Behavioural model-based test can thus be considered as hierarchical testing, and from that viewpoint the approach is not so new [50]. Actually, it has been
claimed [51] that inductive fault analysis for mixed-signal circuits should consider
macro performance degradations as fault classes. In other words, a behavioural model
level of abstraction is adequate for defect-oriented tests. In that sense, behavioural
model-based test offers valuable advantages for device debugging.
As was said before, the application of model-based test to DfT has to be focused on
relaxing the measurement requirements. This means that the behavioural parameters
have to be retrieved with simple tests. It has been shown in recent works that some
behavioural parameters, such as amplifier d.c. gain and settling errors (which are
related to slew rate and gain bandwidth), can be tested using digital stimuli that are
easily generated on chip [45]. The proposed tests can be roughly gathered into the set-up of Figure 8.20.
The test stimuli are digital and can be generated on-chip at the cost of a linear
feedback shift register (LFSR) of only 6 bits. Those digital stimuli are then sent to the
modulator under test through the feedback DAC during the sampling phase. During
the integrating phase, the feedback DAC is driven by the modulator output, as usual.
That time-multiplexed use of the DAC is symbolized in Figure 8.20 by an extra input.
For the analysis of the test output, test signatures are computed by accumulating the modulator output bit-stream minus the input sequence. This only requires a couple of logic gates and an up-down counter.

Figure 8.20 Generic set-up of the digital tests proposed in References [43-45]

The signatures can be simply related to the modulator behavioural parameters. However, the reader should notice that the test decision has to be taken in the model parameter space. Indeed, the calculation of explicit performance figures would require the simulation of the behavioural model. For device-debugging purposes, the behavioural signatures should be shifted off chip. However, for test purposes, tolerance windows have to be designed for each behavioural signature. The performance specifications cannot be mapped onto the behavioural parameter space. Hence, it is not possible to set the tolerance windows so as to obtain an equivalent-functional test. However, behavioural parameters are closely related to the modulator design flow. They correspond to performance specifications of the different macros. When choosing one point of the design space at the behavioural model level, some margins are also taken on those parameters, according to the variations of the process parameters. Those margins can be used to establish the required tolerance window. For instance, if an amplifier with a d.c. gain of 80 dB is considered necessary to meet specifications, an amplifier with a nominal 90 dB d.c. gain will possibly be designed such that in the worst-case process corner the d.c. gain is ensured to be higher than 80 dB. For test purposes, that 80-dB limit could serve as the parameter tolerance window.
It is worth mentioning that this behavioural model-based solution is very attractive in an SoC context, as the different tests could easily be interfaced to a digital test bus such as IEEE 1149.1. Research still has to be done to cover more behavioural parameters and to extend the methodology to generic high-order architectures. The digital tests proposed in References [43-45] apply to first- and second-order modulators and their cascade combinations, and the results seem quite promising.
Using the set-up sketched in Figure 8.20, the first integrator leakage of a second-order modulator can be measured using a periodic digital sequence with a non-null mean value. The mean value of the test sequence has to be calculated considering that a digital 1 corresponds to the DAC positive level and a digital 0 to the DAC negative level, which together define the modulator full scale. Leger and Rueda [43] propose the use of a [1 1 1 1 1 0] sequence, whose mean value is 2/3 for a (−1, 1) normalized full scale. The signature is also built according to the test set-up of Figure 8.20: the differences between the modulator output bit-stream and the input sequence are accumulated over a given number N of samples. This simply senses how the modulator output deviates from the input on average (i.e., in d.c.). Two acquisitions with opposite sequences are actually necessary to get rid of input-referred offset. The signature is shown to be

signature = 4NQ(1 − p1),  with Q = 2/3    (8.17)

The term (1 − p1) is the first integrator pole error. This pole error can be directly related to the d.c. gain of the integrator amplifier [6]. It can be seen that the error term is independent of the number of acquired samples N. This implies that the correct determination of the pole error requires a number of samples that is inversely proportional to the pole error and thus, to a first approximation, proportional to the amplifier nominal d.c. gain.
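To make the signature computation concrete, the following is a minimal behavioural sketch (not the authors' code) of this leakage test: a second-order loop with assumed 0.5 integrator coefficients, a leaky first integrator with pole p1, the [1 1 1 1 1 0] stimulus injected through the feedback path, and an up-down count of output minus input bits.

    def leakage_signature(p1, n_samples, seq=(1, 1, 1, 1, 1, 0)):
        """Behavioural sketch of the digital leakage test (illustrative only)."""
        acc1 = acc2 = 0.0                 # integrator states
        signature = 0                     # up-down counter
        for i in range(n_samples):
            bit = seq[i % len(seq)]
            x = 1.0 if bit else -1.0      # DAC levels: 1 -> +ref, 0 -> -ref
            y = 1.0 if acc2 >= 0 else -1.0            # 1-bit quantizer
            acc1 = p1 * acc1 + 0.5 * (x - y)          # leaky first integrator
            acc2 += 0.5 * (acc1 - y)                  # ideal second integrator
            signature += (1 if y > 0 else 0) - bit    # output bit minus input bit
        return signature

    # Example: with p1 = 1 - 1e-4 the accumulated count grows roughly in
    # proportion to N(1 - p1), in the spirit of Equation (8.17); a second
    # acquisition with the inverted sequence would remove input-referred offset.
    sig = leakage_signature(p1=1 - 1e-4, n_samples=60000)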
A very similar test is provided in Reference [43] to test integrator leakage in first-order ΔΣ modulators. It has been demonstrated in Reference [52] that a first-order ΔΣ modulator is transparent to digital sequences: the output bit-stream strictly follows the input sequence. This effect is even stabilized by integrator leakage. The authors propose to add an extra delay in the digital feedback path (a simple D-latch on the DAC control switches) during test mode. With this modification, it is shown that the test set-up of Figure 8.20 provides a signature proportional to the integrator pole error. An additional condition is also set on the digital sequence: it has to be composed of L ones and a single zero, with L greater than five. For hardware simplification, the same sequence as above ([1 1 1 1 1 0]) can be used:

signature = 4N(1 − p) / [4 − ln(3L − 5)/(L − 5)],  with L = 6    (8.18)
The d.c. gain non-linearity of the first amplifier of a ΔΣ modulator can cause harmonic distortion. The error associated with d.c. gain non-linearity in amplifiers located further along the ΔΣ loop is usually negligible because it is partially shaped by the loop filter. In a second-order ΔΣ modulator, it can be shown that the first integrator output mean value is proportional to the input mean value. The output of the integrator is the output of the amplifier, so it can be expected that the effective d.c. gain of the amplifier varies with the input mean value. As a result, the integrator pole error also depends on the input mean value. The test of d.c. gain non-linearity for the first amplifier in a second-order modulator simply relies on repeating the leakage test with two sequences of different mean values: typically a small one (denoted Qs) and a large one (denoted Ql). In the ideal case, if the amplifier d.c. gain is linear, the obtained signatures should follow the ratio:

signaturel / signatures = Ql / Qs    (8.19)

In the presence of non-linearity, the effective d.c. gain for the sequence of large mean value, Al, should be lower than the effective gain for the sequence of small mean value, As. As a result, the pole error for Ql should be greater than for Qs. Thus, it can be

written as

signaturel / signatures = (Ql / Qs) · (As / Al)    (8.20)

Figure 8.21 Diagram of input-dependent clock phases modification

Notice that the signature has to be acquired over a large enough number of points that the deviation of the effective gain from the actual gain is sensed. Typically, if a variation of 1 per cent of the effective gain from the nominal gain has to be sensed, it could be necessary to acquire 100 times the number of points needed to test the nominal gain (i.e., the leakage). Fortunately, the distortion induced by the non-linearity of the amplifier d.c. gain is also proportional to the nominal d.c. gain. In other words, if the d.c. gain is non-linear but very high, then the non-linearity will induce a distortion that will fall below the noise floor of the converter. Only the non-linearities associated with a low nominal d.c. gain will have a significant impact. Translating this information to the test, it means that acquiring a very large number of points to detect d.c. gain non-linearity makes little sense, as it corresponds to a deviation that has no impact on the modulator precision.
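As a simple illustration of this decision, the sketch below (the function name and tolerance value are assumptions, not from the original works) compares two leakage signatures against the linear-gain ratio of Equation (8.19):

    def dc_gain_linearity_check(sig_s, sig_l, q_s, q_l, tol=0.05):
        """Flag first-amplifier d.c. gain non-linearity from two leakage
        signatures taken with small (q_s) and large (q_l) sequence means."""
        expected = q_l / q_s              # linear case, Equation (8.19)
        measured = sig_l / sig_s
        # Non-linearity lowers the effective gain Al for the large-mean
        # sequence, so by Equation (8.20) the measured ratio exceeds the
        # linear expectation; pass only if it stays within the window.
        return measured <= expected * (1 + tol)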
The test for integrator settling errors (which are related to amplifier slew rate and gain-bandwidth product) introduced in Reference [44] is the same for both first- and second-order modulators but requires modification of the modulator clock phases. The test sequence is a pseudo-random sequence that can be generated with a 6-bit LFSR, as shown in Figure 8.20. For a one-valued input sample, the clock phases are doubled (their duration is two master clock periods), and for a zero-valued input sample they remain unchanged (their duration is one master clock period). The clocking modification is illustrated in Figure 8.21.
This input-dependent clocking allows unbalancing of the integrator settling error. For a one-valued input sample, the integrator has time to fully settle, but not for a zero-valued input sample. The unbalanced input-referred difference is sensed by the signature analyser and accumulated over N samples. To get rid of any offset, another acquisition has to be done inverting the clocking rule: the phases are doubled for a zero-valued input sample and remain the same for a one-valued input sample. The results of the two acquisitions are combined to give the offset-insensitive signature:

signature = er⁴(N/2) + 3er²(N/2)    (8.21)

Figure 8.22 Integrator modification with DAC replication

The term er corresponds to the settling error committed by the integrator for a one-valued input sample and a zero-valued feedback. This corresponds to the largest step that can be input to the integrator.
The clocking modification can be implemented on chip at low cost. A simple finite-state machine is required that consists of an input-dependent frequency divider (a 2-bit counter). The obtained digital signal is then converted to usable clock phases by a standard non-overlapping clock generator.
In order to perform all the above-explained digital tests, the schematic of the ΔΣ modulator should be modified, basically to allow digital test inputs [43-45]. There exist two straightforward solutions. The first one consists of disabling the nominal input of the integrators and duplicating the DAC to send the test sequence during the sampling phase. This is illustrated in Figure 8.22, where switches S1-S3 form the duplicated DAC.
This approach has the advantage of being very easy to implement. However, a drawback is that it adds extra switch parasitics to the input node. To avoid this issue, Leger and Rueda [43] propose to reuse the feedback DAC during the sampling phase to input the test sequence. The nominal input is disconnected and the feedback switch is kept closed during both sampling and integrating phases. Only the DAC control has to be modified to accommodate the double-sampling regime. This is illustrated in Figure 8.23.
This second solution does not alter the analogue signal path but does put more emphasis on the timing control of the DAC. Figure 8.23 shows a possible implementation of the DAC control with transmission gates for the sake of clarity, but other solutions may give better results. Notice that in both cases, the modifications can easily be introduced in the modulator design flow and do not require the addition of complex elements such as buffers.

Figure 8.23 Integrator with DAC control modification

It should be noticed that the two modifications described above can be realized on any integrator, which means that a digital test sequence can be input at any feedback node. In the case of a second-order modulator, disabling the nominal input of the second integrator and enabling the digital test input yields an equivalent first-order modulator. This is symbolized in the diagram of Figure 8.24, where coefficients a2 and b2 are duplicated to represent the additional test-mode input. Thus, the tests developed for first-order modulators can be used to test defects in the second integrator of the reconfigured second-order modulator, without significant impact.
The proposed tests have been validated extensively by simulation. These simulations were carried out in MATLAB using a behavioural model that implemented most of the non-idealities described in Reference [48]. The test signatures were shown to be accurate for the isolated effects, varying only the parameter of interest and maintaining the others at their nominal values [43, 44]. Simulations varying all the parameters at the same time have also been performed [45]. In that case, the whole set of proposed tests was performed. It was shown that the whole set of tests provided high fault coverage if the test limits were set to the expected values of the signatures, according to the nominal values of the behavioural parameters. Actually, 100 per cent of the faults that affected the behavioural parameters involved in the proposed tests were detected.
As a test methodology, behavioural model-based test for ΔΣ modulators has great potential, in particular for converters embedded in SoCs. Indeed, it opens the door to structural tests that can relax hardware requirements while maintaining a close relation to the circuit functionality, and it also provides device-debugging capabilities. It can be considered as a trade-off between functional and defect-oriented tests. In its current development state, digital tests have been proposed to evaluate integrator leakage and settling errors simply. These digital tests do not alter the modulator topology and rely on proper ΔΣ modulation. As a result, they have the ability,

Figure 8.24 z-Domain representation of the test sequence input in a second-order ΔΣ modulator: (a) to the first integrator and (b) to the second integrator

beyond the behavioural parameter evaluation, to detect any catastrophic error in the modulator signal path. Research remains to be done to cover more behavioural parameters, such as branch coefficient mismatches or non-linear switch on-resistance. Similarly, the digital test methodology should be extended to higher-order architectures.

8.6 Conclusions

In this chapter, we have tried to provide insights into ΔΣ modulator tests. It has been shown that the ever-increasing levels of functionality integration, the ultimate expression of which is SoC, raise new problems on how to test embedded components such as ΔΣ modulators. These issues may even compromise test feasibility, or at least they may displace test time from its prominent position in the list of factors that determine the overall test cost.
Table 8.1 summarizes the information contained in the chapter. It is clear that considerable research is still necessary to produce a satisfactory solution, but the first steps are encouraging. In particular, we believe that solutions based on behavioural model-based BIST may greatly simplify the test requirements.

Table 8.1 ΔΣ modulator test solutions

Characterization
• Static parameters (histogram, servo-loop)
  Pros: gives access to gain and offset errors, INL and DNL.
  Cons: exhaustive characterization requires a large amount of time; INL and DNL should be related to transitions in ΔΣ modulators; requires the input of a precise stimulus.
• Dynamic parameters (sine-fit, FFT)
  Pros: provide important datasheet specifications (SNR, THD, ENOB, etc.).
  Cons: requires complex DSP; requires the input of a precise stimulus.

Functional test
• References [25-31]
  Pros: provides a solution for precise on-chip stimulus generation; digital filter solution to relax data analysis.
  Cons: requires a DSP on chip; the area overhead associated with stimulus generation may be large; the reuse of on-chip resources makes concurrent test of other SoC parts difficult.
• Reference [32]
  Pros: the test stimulus is an unfiltered ΔΣ digital sequence; no extra hardware is required.
  Cons: the test signature aspect is not solved; the potential effect of unfiltered high-frequency noise in the input on the result validity is not addressed.

Defect-oriented test
• Reconfiguration [33]
  Pros: any input signal can be used.
  Cons: requires a strong reconfiguration of the modulator; may be difficult to generalize to other architectures; the calculation of test coverage for any input is difficult to perform.
• OBT [34]
  Pros: no test stimulus is required; the data analysis is generic, as the signatures are always amplitude and frequency; a good methodology has been developed to apply the solution to any architecture with relative effort.
  Cons: the calculation of test coverage for faults other than capacitor ratio errors is difficult.
• NTF [35]
  Pros: the test stimulus does not have to be as precise as the modulator; the methodology is applicable to any modulator.
  Cons: the defect coverage may be low, as some non-idealities do not impact the NTF; the validity of the approach should be better demonstrated.

Table 8.1 Continued

• Pseudo-random [36]
  Pros: the input is digital and relatively cheap to produce on chip.
  Cons: the validity of the heuristic model obtained through simulation is questionable; potential fault masking; only integrator leakage is addressed.

Model-based test
• Standard approach [37-40]
  Pros: relaxes the number of required measurements; a good methodology exists.
  Cons: the model is linear and performs well for small variations but may be limited for other defects; the methodology is based on standard specification measurements: it requires a precise stimulus and a DSP.
• Ad-hoc model-based BIST [41, 42]
  Pros: the test stimulus can be partially generated on chip; the data analysis is very simple and can be performed on chip; important specifications can be derived.
  Cons: the model is taken a priori and not justified: its validity range may be questionable; only four parameters are obtained: gain, offset, second-order distortion and third-order distortion; the test stimulus generation still requires off-chip components.
• Behavioural model-based BIST [43-45]
  Pros: the test stimulus is digital; the test requires few resources and simple modulator modifications; the test signatures can be used for device debugging; the test strategy can easily be integrated in the design flow; the model is the behavioural model validated by design; the validity has been proven for cascaded modulators of first- and second-order sections.
  Cons: research is still necessary to test more behavioural parameters; the extension of the approach to other modulators would require further research.

8.7 References

1 Cutler, C.C.: 'Transmission system employing quantization', US patent no. 2,927,962, 1960
2 Inose, H., Yasuda, Y., Murakami, J.: 'A telemetering system by code modulation - ΔΣ modulation', IRE Transactions on Space Electronics and Telemetry, 1962;8:204-9
3 Bennett, W.R.: 'Spectra of quantized signals', Bell System Technical Journal, 1948;27:446-72
4 Hein, S., Zakhor, A.: 'On the stability of sigma delta modulators', IEEE Transactions on Signal Processing, 1993;41(7):2322-48
5 Pinault, S.C., Lopresti, P.V.: 'On the behavior of the double-loop sigma-delta modulator', IEEE Transactions on Circuits and Systems II, 1993;40(8):467-79
6 Norsworthy, S.R., Schreier, R., Temes, G.: Delta-Sigma Data Converters: Theory, Design, and Simulation (IEEE Press, New York, 1997)
7 Ardalan, S.H., Paulos, J.J.: 'An analysis of nonlinear behavior in delta-sigma modulators', IEEE Transactions on Circuits and Systems, 1987;34(6):593-603
8 Boser, B.E., Wooley, B.A.: 'The design of sigma-delta modulation analog-to-digital converters', IEEE Journal of Solid-State Circuits, 1988;23(6):1298-308
9 Silva, J., Moon, U., Steensgaard, J., Temes, G.C.: 'Wideband low-distortion delta-sigma ADC topology', Electronics Letters, 2001;37(12):737-8
10 Candy, J.C.: 'A use of double integration in sigma-delta modulation', IEEE Transactions on Communications, 1985;33(3):249-58
11 Chao, K., Nadeem, S., Lee, W., Sodini, C.: 'A higher order topology for interpolative modulators for oversampling A/D conversion', IEEE Transactions on Circuits and Systems, 1990;37:309-18
12 Hayashi, T., Inabe, Y., Uchimura, K., Iwata, A.: 'A multistage delta-sigma modulator without double integration loop', ISSCC Digest of Technical Papers, February 1986, pp. 182-3
13 IEEE standard 1241-2000: IEEE Standard for Terminology and Test Methods for Analog to Digital Converters (IEEE, New York, 2001)
14 Kennedy, M.P., Krieg, K.R., Chua, L.O.: 'The devil's staircase: the electrical engineer's fractal', IEEE Transactions on Circuits and Systems, 1989;36(8):1133-9
15 Feely, O., Chua, L.O.: 'The effect of integrator leak in ΔΣ modulation', IEEE Transactions on Circuits and Systems, 1991;38(11):1293-305
16 Mahoney, M.: DSP-based Testing of Analog and Mixed-Signal Circuits (IEEE Computer Society Press, New York, 1987)
17 Breitenbach, A., Kale, I.: 'An almost leakage-free method for assessing ΔΣ modulator spectra', IEEE Transactions on Instrumentation and Measurement, 1998;47(1):40-4
18 IEEE standard 1149.4: IEEE Standard for Mixed-Signal Test Bus (IEEE, New York, 1999)
19 Osseiran, A.: Analog and Mixed-Signal Boundary-Scan: A Guide to the 1149.4 Test Standard (Kluwer, Dordrecht, The Netherlands, 1999)
20 IEEE standard 1149.1: IEEE Standard for Test Access Port and Boundary Scan Architecture (IEEE, New York, 2001)
21 Azais, F., Bernard, S., Bertrand, Y., Renovell, M.: 'Towards an ADC BIST scheme using the histogram test technique', Proceedings of the IEEE European Test Workshop, Cascais, Portugal, May 2000, pp. 53-8
22 Azais, F., Bernard, S., Bertrand, Y., Renovell, M.: 'Implementation of a linear histogram BIST for ADCs', Proceedings of the Design, Automation and Test in Europe Conference, Munich, Germany, March 2001, pp. 590-5
23 Azais, F., Bernard, S., Bertrand, Y., Renovell, M.: 'Hardware resource minimization for histogram-based ADC BIST', Proceedings of the VLSI Test Symposium, Montreal, Canada, April/May 2000, pp. 247-52
24 Arabi, K., Kaminska, B.: 'Efficient and accurate testing of ADCs using the oscillation test method', Proceedings of the European Design and Test Conference, Paris, France, March 1997, pp. 348-52
25 Lu, A., Roberts, G.W., Johns, D.A.: 'A high-quality analog oscillator using oversampling D/A conversion techniques', IEEE Transactions on Circuits and Systems II, 1994;41:437-44
26 Lin, W., Liu, B.: 'Multitone signal generator using noise-shaping technique', IEE Proceedings - Circuits, Devices and Systems, 2004;151(1):25-30
27 Dufort, B., Roberts, G.W.: 'On-chip analog signal generation for mixed-signal built-in self-test', IEEE Journal of Solid-State Circuits, 1999;34(3):318-30
28 Hafed, M., Roberts, G.: 'Sigma-delta techniques for integrated test and measurement', Proceedings of the Instrumentation and Measurement Technology Conference, Budapest, Hungary, May 2001, pp. 1571-6
29 Rebai, C., Dallet, D., Marchegay, P.: 'Signal generation using single-bit sigma-delta techniques', IEEE Transactions on Instrumentation and Measurement, 2004;53(4):1240-4
30 Toner, M.F., Roberts, G.W.: 'A BIST scheme for an SNR test of a sigma-delta ADC', Proceedings of the International Test Conference, Baltimore, MD, October 1993, pp. 805-14
31 Toner, M.F., Roberts, G.W.: 'A frequency response, harmonic distortion and intermodulation distortion test for BIST of a sigma-delta ADC', IEEE Transactions on Circuits and Systems II, 1996;43(8):608-13
32 Ong, C.K., Cheng, K.T.: 'Self-testing second-order delta-sigma modulators using digital stimulus', Proceedings of the VLSI Test Symposium, Monterey, CA, April/May 2002, pp. 123-8
33 Mir, S., Rueda, A., Huertas, J.L., Liberali, V.: 'A BIST technique for sigma-delta modulators based on circuit reconfiguration', Proceedings of the International Mixed Signal Testing Workshop, Seattle, WA, June 1997, pp. 179-84
34 Huertas, G., Vazquez, D., Rueda, A., Huertas, J.L.: 'Oscillation-based test in oversampling A/D converters', Microelectronics Journal, 2003;34(10):927-36
35 de Venuto, D., Richardson, A.: 'Testing high-resolution ΔΣ ADCs by using the noise transfer function', Proceedings of the European Test Symposium, Corsica, France, May 2004, pp. 164-9
36 Ong, C.K., Cheng, K.T.: 'Testing second-order delta-sigma modulators using pseudo-random patterns', Microelectronics Journal, 2002;33:807-14
37 Stenbakken, G.N., Souders, T.M.: 'Linear error modeling of analog and mixed-signal devices', Proceedings of the International Test Conference, Nashville, TN, October 1991, pp. 573-81
38 Stenbakken, G.N., Liu, H.: 'Empirical modeling methods using partial data', IEEE Transactions on Instrumentation and Measurement, 2004;53(2):271-6
39 Capofreddi, P.D., Wooley, B.A.: 'The use of linear models in ADC testing', IEEE Transactions on Circuits and Systems I, 1997;44(12):1105-13
40 Wegener, C., Kennedy, M.P.: 'Model-based testing of high-resolution ADCs', Proceedings of the International Symposium on Circuits and Systems, 2000;1:335-8
41 Sunter, S.K., Nagi, N.: 'A simplified polynomial-fitting algorithm for DAC and ADC BIST', Proceedings of the International Test Conference, Washington, DC, November 1997, pp. 389-95
42 Roy, A., Sunter, S.: 'High accuracy stimulus generation for ADC BIST', Proceedings of the International Test Conference, Baltimore, MD, October 2002, pp. 1031-9
43 Leger, G., Rueda, A.: 'Digital test for the extraction of integrator leakage in 1st- and 2nd-order ΔΣ modulators', IEE Proceedings - Circuits, Devices and Systems, 2004;151(4):349-58
44 Leger, G., Rueda, A.: 'Digital BIST for settling errors in 1st- and 2nd-order ΔΣ modulators', Proceedings of the IEEE IC Test Workshop, Madeira Island, Portugal, July 2004, pp. 3-8
45 Leger, G., Rueda, A.: 'Digital BIST for amplifier parametric faults in ΔΣ modulators', Proceedings of the International Mixed Signals Testing Workshop, Cannes, France, June 2005, pp. 22-8
46 Malcovati, P., Brigati, S., Francesconi, F., Maloberti, F., Cusinato, P., Baschirotto, A.: 'Behavioral modeling of switched-capacitor sigma-delta modulators', IEEE Transactions on Circuits and Systems I, 2003;50(3):352-64
47 Schreier, R., Temes, G.C.: Understanding Delta-Sigma Data Converters (IEEE Press, New York, 2005)
48 Castro, R., et al.: 'Accurate VHDL-based simulation of ΔΣ modulators', Proceedings of the International Symposium on Circuits and Systems, 2003;IV:632-5
49 Medeiro, F., Pérez-Verdú, B., Rodríguez-Vázquez, A.: Top-Down Design of High-Performance Sigma-Delta Modulators (Kluwer, Amsterdam, The Netherlands, 1999)
50 Vinnakota, B.: Analog and Mixed-Signal Test (Prentice Hall, Englewood Cliffs, NJ, 1998)
51 Soma, M.: 'Fault models for analog-to-digital converters', Proceedings of the IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Victoria, Canada, May 1991, pp. 503-5
52 Schreier, R., Snelgrove, W.M.: 'ΔΣ modulation is a mapping', Proceedings of the International Symposium on Circuits and Systems, Singapore, June 1991, pp. 2415-18

Chapter 9

Phase-locked loop test methodologies: current characterization and production test practices

Martin John Burbidge and Andrew Richardson

9.1 Introduction: Phase-locked loop operation and test motivations

Phase-locked loops (PLLs) are incorporated into almost every large-scale mixed-signal and digital system on chip (SoC). Various types of PLL architectures exist, including fully analogue, fully digital, semi-digital and software based. Currently, the most commonly used PLL architecture for SoC environments and chipset applications is the charge-pump (CP) semi-digital type. This architecture is commonly used for clock-synthesis applications, such as the supply of a high-frequency on-chip clock that is derived from a low-frequency board-level clock. In addition, CP-PLL architectures are now frequently used for demanding radio-frequency synthesis and data-synchronization applications. On-chip system blocks that rely on correct PLL operation may include third-party intellectual property cores, analogue-to-digital converters (ADCs), digital-to-analogue converters (DACs) and user-defined logic. Basically, any on-chip function that requires a stable clock will be reliant on correct PLL operation. As a direct consequence, it is essential that the PLL function is reliably verified both during the design and debug phase and through production testing.
This chapter focuses on test approaches related to embedded CP-PLLs used for the purpose of clock generation for SoC. However, the methods discussed will generally apply to CP-PLLs used for other applications.

9.1.1 PLL key elements: operation and test issues

The CP-PLL architecture of Figure 9.1 consists of a phase detector, a CP, a loop filter (LF), a voltage-controlled oscillator (VCO) and a feedback divider (N). The phase frequency detector (PFD) senses the relative timing differences between the edges of the reference clock and the VCO clock (feedback clock) and applies charge-up or charge-down pulses to the CP that are proportional to the timing difference. The pulses are most commonly used to switch current sources, which charge or discharge a capacitor in the LF. The voltage at the output of the LF is applied to the input of the VCO, which changes oscillation frequency as a function of its input voltage. Note that ideally, when the feedback and reference clocks are equal, that is, they are both phase and frequency aligned, the CP transistors will operate in such a way as to maintain the LF voltage at a constant value. In this condition the PLL is locked, which implies that the output signal phase and frequency are aligned to the input within a certain limit. Note that the feedback division block forces the VCO output frequency to an integer multiple of the frequency present on the reference input (PLLREF). It follows that when the PLL is in its locked state:

Fout = N · PLLREF    (9.1)

Figure 9.1 Block diagram of a typical CP-PLL configuration

In Figure 9.1, the following conversion gains are used for the respective blocks:

KPD = phase detector gain = Ich/2π (A rad⁻¹)
F(s) = LF transfer function
KVCO = VCO gain (rad s⁻¹ V⁻¹)

Using feedback theory, the generalized transfer equation in the Laplace domain for the system depicted in Figure 9.1 is

H(s) = θo(s)/θi(s) = N·KPD·KVCO·F(s) / (sN + KPD·KVCO·F(s))    (9.2)

Note that by substituting suitable values for N and F(s), Equation (9.2) will generally apply to a PLL system of any order [1]. Specific transfer equations are provided as part of the LF description.

Figure 9.2 Typical implementation of a type-4 PFD

It must be noted that, even for the case of a CP-PLL, the implementation details of the blocks may vary widely; however, in many applications, designers attempt to design the PLL to exhibit the response of a second-order system. This is owing to the fact that second-order systems can be characterized using well-established techniques. The response of a second-order CP-PLL will generally be considered in this chapter [2-4].
A brief description of each of the blocks now follows. Further basic principles of CP-PLL operation are given in References 1, 3, 5 and 6.
9.1.1.1 Phase frequency detector
The phase detector most commonly used in CP-PLL implementations is the type-4 edge-sensitive PFD. The PFD may be designed to operate on rising or falling edges. For the purpose of this discussion, it will be assumed that the PFD is rising-edge sensitive. A schematic of this type of PFD is shown in Figure 9.2.
In Figure 9.2, PFDUP and PFDDN represent the control signals for the up and down current sources, respectively. When considering the operation of the PFD, it is also useful to consider the change in VCO output frequency. Considering phase alignment of the PFD input signals, ΦREF will be used to designate the instantaneous phase of PLLREF and ΦFB will be used to designate the instantaneous phase of the PLLFB signal. Using this convention, and with reference to Figures 9.2 and 9.3, the PFD operation is now explained.
1. ΦFB(t) leads ΦREF(t): the LF voltage falls and the VCO frequency falls to try and reduce the difference between ΦREF(t) and ΦFB(t).
2. ΦREF(t) leads ΦFB(t): the LF voltage rises and the VCO frequency rises to try and reduce the difference between ΦREF(t) and ΦFB(t).
3. ΦREF(t) coincident with ΦFB(t): the PLL is locked and in its stable state.
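The behaviour just listed can be captured in a few lines. The sketch below is an illustrative behavioural model (not a circuit description): it takes rising-edge time-stamps of PLLREF and PLLFB and reports the UP or DOWN pulse that a type-4 PFD would issue to the CP for each edge pair.

    def pfd(ref_edges, fb_edges):
        """Minimal behavioural model of a rising-edge type-4 PFD: for each
        pair of edge time-stamps, emit the pulse type and its width."""
        pulses = []
        for t_ref, t_fb in zip(ref_edges, fb_edges):
            if t_ref < t_fb:          # reference leads: pump up, VCO speeds up
                pulses.append(("UP", t_fb - t_ref))
            elif t_fb < t_ref:        # feedback leads: pump down, VCO slows down
                pulses.append(("DOWN", t_ref - t_fb))
            else:                     # edges coincident: locked, no net pulse
                pulses.append(("NONE", 0.0))
        return pulses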

Figure 9.3 Operation of the PFD and associated increase in VCO output frequency

9.1.1.2 Typical configurations for the CP and F(s)
As above, the LF is designed to ensure that the whole system exhibits a second-order response. A typical LF and CP configuration used for fully embedded CP-PLLs is illustrated in Figure 9.4.
The Laplace domain transfer function for this circuit is

F(s) = R1 + 1/(sC1)    (9.3)

where we define the following [3]:

τ1 = R1·C1

Now if the above LF transfer function is substituted into Equation (9.2) for F(s), the following equation can be derived:
H(s) = θo(s)/θi(s) = (2ζωn·s + ωn²) / (s² + 2ζωn·s + ωn²)    (9.4)

where

ωn = √(KO·IP / (N·2π·C1))    (9.5)

ζ = (τ1/2)·√(KO·IP / (2π·N·C1))    (9.6)
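As a quick numerical check of Equations (9.5) and (9.6), the sketch below evaluates ωn and ζ; the component values used in the example are purely illustrative assumptions, not values taken from the text.

    from math import pi, sqrt

    def second_order_pll_params(k_o, i_p, n_div, r1, c1):
        """Natural frequency and damping of the second-order CP-PLL,
        per Equations (9.5) and (9.6)."""
        w_n = sqrt(k_o * i_p / (n_div * 2 * pi * c1))  # rad/s, Equation (9.5)
        zeta = (r1 * c1 / 2) * w_n                     # tau1/2 times w_n, (9.6)
        return w_n, zeta

    # Illustrative values: KO = 2*pi*100e6 rad/s/V, IP = 50 uA, N = 10,
    # R1 = 10 kOhm, C1 = 100 pF.
    w_n, zeta = second_order_pll_params(2 * pi * 100e6, 50e-6, 10, 10e3, 100e-12)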

It must be mentioned that for CP-PLLs in general, and for embedded CP-PLLs specifically, the LF node can be considered the critical controlling node of the PLL. Any noise coupled into this node will generally manifest itself as a direct instantaneous alteration of the VCO output frequency; this will be observed as PLL output jitter. Consequently, PLL designers usually spend a great deal of design effort in screening this node. In addition, correct LF operation is essential if the PLL is to function properly over all desired operational ranges. Embedded LFs usually include one or more large-area MOSFET capacitors. These structures may be sensitive to spot defects, such as gate oxide shorts [7].
Matching of the CP currents is also a critical part of PLL design. Leakage and mismatch in the CP will lead to deterministic jitter on the PLL output.

Figure 9.4 Typical CP and LF configuration
9.1.1.3 Voltage-controlled oscillator
For embedded CP-PLL configurations, the VCO is usually constructed as a current-starved ring oscillator structure. This is primarily due to the ease of implementation in CMOS technologies. The structure may be single ended or differential, with differential configurations being preferred due to their superior noise-rejection capabilities. A typical single-ended current-starved ring oscillator structure is illustrated in Figure 9.5.
In this circuit, VCTRL is the input control voltage taken from the LF node and Fout is the VCO output signal. Note that to prevent excessive loading of the VCO, its output is usually connected to buffer stages.

Figure 9.5 Single-ended current-starved ring oscillator structure

The transfer gain of the VCO is found from the ratio of the output frequency deviation to the corresponding change in control voltage. That is,

KVCO = (F2 − F1)/(V2 − V1)  (MHz/V or rad s⁻¹ V⁻¹)    (9.7)

where F2 is the output frequency corresponding to V2 and F1 is the output frequency corresponding to V1. An example of an experimental measurement of the VCO gain is given in Section 9.2.1.1.
9.1.1.4 Digital structures
The digital blocks of the CP-PLL are generally constructed from standard digital structures. In some cases, feedback dividers and PFDs may be constructed from standard cells. However, in many situations, to meet stringent timing requirements, the digital structures are constructed using full-custom layout techniques. The digital structures of the PLL are generally less sensitive than the analogue structures, and they are often modified to ease testing of the PLL.

9.1.2 Typical CP-PLL test specifications

Important functional characteristics that are often stated for CP-PLL performance are listed below:
• lock time and ability to achieve lock from system startup
• capture range and lock range
• phase/frequency step response time
• overshoot
• loop bandwidth (−3 dB)
• output jitter.

Table 9.1 Common examples of test accessibility

PLL block    Structure (A/D)    Direct access/modification    At-speed testing required    Commonly suggested fault models
(1) PFD      D                  Yes                           Yes                          Single stuck-at faults
(2) CP       A                  No                            Yes                          MOS transistor faults*
(3) LF       A                  No                            Yes                          MOS transistor faults*
(4) OSC      A                  Yes                           Yes                          MOS transistor faults*
(5) DIV      D                  Yes                           Yes                          Single stuck-at faults

* MOS transistor catastrophic faults: gate-to-drain shorts; gate-to-source shorts; drain opens; source opens.

All of the above parameters are interrelated to a certain extent. For example, the loop bandwidth will have an effect on the PLL output jitter. However, loop bandwidth, lock time, overshoot and step response time are also directly related to the natural frequency and damping of the system. It must be mentioned that certain non-idealities or faults may contribute further jitter on the PLL output or increased lock time. Examples of typical measurements for these parameters are provided in later sections.
Table 9.1 provides an initial analysis of testing issues for the PLL sub-blocks. Fault models are suggested for use in fault-coverage calculations for each of the blocks. Further research and justification for the use of fault models in the key PLL sub-blocks are given in References 7-13.
Note also that the fault models suggested in Table 9.1 can be used to assess the fault coverage of built-in self-test (BIST) techniques. It should be noted, however, that many fault types are related to the structural realization of the PLL; hence these guidelines should be used with care. Faults that may be implementation-dependent include:

• problems with interconnect due to pinholes or resistive vias
• coupling faults.

Obviously, a high-performance PLL will be routed in such a way as to attempt to minimize the probability of these faults occurring.
9.1.2.1 Jitter overview
Jitter has been mentioned several times with respect to the PLL sub-block descriptions. Non-idealities, faults or bad design practices, such as poor matching of structures, poor layout and insufficient decoupling between critical functions, can lead to excessive jitter in the PLL output. Jitter may be divided into two main classes as follows:

• Random jitter (Gaussian jitter): This is caused by non-deterministic events, such as coupling of electrical noise into the CP structures, the VCO control input or the PLL's digital structures.
• Deterministic or correlated jitter: This can be viewed as changes in the instantaneous phase or frequency of the PLL output that can be directly correlated to changes in PLL operation or changes in the host system operation. Typical examples may be phase spurs due to CP mismatch or a notable change in output jitter when a system output driver is switched.

For clock-synthesis-based applications, jitter can be regarded as the amount of time variation that is present on the periodic output signal. Figure 9.6 shows a typical spread or bell-shaped curve that would be used to represent truly random or Gaussian-type jitter; superimposed upon this plot is a non-Gaussian shape that may arise from deterministic jitter sources.
As a general approximation, the maximum random signal jitter will increase to a peak-to-peak value of six standard deviations from the ideal case. However, it must be noted that, due to non-deterministic effects, the actual measured case may vary significantly from this value. This statistical technique of viewing jitter is often used in conjunction with jitter histogram measurements. Obviously, the confidence level of the measurements will increase as more measurements are taken. An example of this technique is given in later sections (see Section 9.2.2.1).

Figure 9.6 Ideal and non-ideal jitter probability density curves

A measurement often quoted that relates to non-deterministic jitter is the root mean square (RMS) jitter. The expression for RMS jitter is given below:

RMS_jitter = √[(1/(N − 1)) · Σ(i=1..N) (Ti − T̄)²]    (9.8)

where T̄ is the mean of the measured time intervals and is defined as

T̄ = (1/N) · Σ(i=1..N) Ti    (9.9)

In both of the above equations, N represents the total number of samples taken and Ti represents the time dispersion of each individual sample.
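A direct evaluation of Equations (9.8) and (9.9) from a list of measured time intervals might look like the following sketch:

    from math import sqrt

    def rms_jitter(intervals):
        """Sample standard deviation of the measured time intervals Ti,
        per Equations (9.8) and (9.9)."""
        n = len(intervals)
        t_mean = sum(intervals) / n                          # Equation (9.9)
        return sqrt(sum((t - t_mean) ** 2 for t in intervals) / (n - 1))  # (9.8)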
For clock signals, jitter measurements are often classified in terms of short-term jitter and long-term jitter. These terms are further described below.
Short-term jitter: This covers short-term variations in the clock signal output period. Commonly used terms include:
Period jitter: This is defined as the maximum or minimum deviation (whichever is the greater) of the output period from the ideal period.
Cycle-to-cycle jitter: This is defined as the period difference between consecutive clock cycles, that is, cycle-to-cycle jitter = [period(n) − period(n − 1)]. It must be noted that cycle-to-cycle jitter represents the upper bound for the period jitter.
Duty cycle distortion jitter: This is the change in the duty cycle relative to the ideal duty cycle. The relationship often quoted for duty cycle is

Duty_cycle = Highperiod / (Highperiod + Lowperiod) × 100 (%)    (9.10)

where Highperiod is the time duration when the signal is high during one cycle of the waveform and Lowperiod is the time duration when the signal is low over one period of the measured waveform. In the ideal situation the duty cycle will be 50 per cent; the duty cycle distortion jitter measures the deviation of the output waveform duty cycle from this ideal position. A typical requirement for duty cycle jitter is that it should be within 45 to 55 per cent [14, 15].
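The short-term metrics above can be computed directly from edge time-stamps; the sketch below assumes lists of rising- and falling-edge times (the data layout is an assumption, not a standard format):

    def short_term_jitter(rising, falling, t_ideal):
        """Period jitter, cycle-to-cycle jitter and duty cycle, per the
        definitions above and Equation (9.10)."""
        periods = [b - a for a, b in zip(rising, rising[1:])]
        period_jitter = max(abs(p - t_ideal) for p in periods)
        c2c_jitter = max(abs(b - a) for a, b in zip(periods, periods[1:]))
        high = falling[0] - rising[0]            # high time of the first cycle
        duty = 100.0 * high / periods[0]         # Equation (9.10)
        return period_jitter, c2c_jitter, duty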
The above jitter parameters are often quoted as being measured in terms of degrees of deviation with respect to an ideal waveform. Another metric often encountered is the unit interval (UI), where one UI is equivalent to 360°. A graphical representation of a UI is given in Figure 9.7.
Long-term jitter: This provides a measure of the long-term stability of the PLL output; that is, it represents the drift of the clock signal over time. It is usually specified over a certain time interval (usually a second) and expressed in parts per million. For example, a long-term jitter specification of 1 ppm would mean that a signal edge is allowed to drift by 1 μs from the ideal position in 1 s.

Figure 9.7 Graphical representation of a UI (1 UI = 360°; ¾ UI = 270°; ½ UI = 180°; ¼ UI = 90°)
Figure 9.8 Pictorial summary of various types of jitter

A graphical representation of all of the mentioned forms of jitter is provided in Figure 9.8.
All of the above measurements, both long-term and short-term, rely on either being able to discern the timing fluctuations of the signal when compared to an ideal reference or being able to discern the timing difference between consecutive cycles. Absolute jitter specifications will be dependent on the specific application and will relate to the maximum output frequency of the PLL.
For example, in a 622 Mbps SONET (Synchronous Optical Network) PLL [16], the maximum peak-to-peak generated jitter is stated as 2.6 mUI. For this application, one UI is calculated as

1 UI = 1/622.08 MHz = 1.608 ns    (9.11)

Therefore, 2.6 mUI is equivalent to

2.6 mUI = (1 UI/1000) × 2.6 = (1.608 ns/1000) × 2.6 = 4.18 ps    (9.12)

Further examples of jitter specifications and allowable tolerances are given in Reference 17.
It can be seen from the above example that measurement of jitter requires highly accurate and jitter-free reference signals of much higher frequency than that of the device being measured.

9.2 Traditional test techniques

This section will explain traditional or commonly employed CP-PLL test techniques that are used for the evaluation of PLL operation. Many of the techniques will be applicable to an analogue or semi-digital type of PLL or CP-PLL; however, it must be recognized that although the basic principles may hold, the test stimuli may have to undergo slight modification for the fully analogue case. The section is subdivided into two subsections, focusing on characterization and production test techniques, respectively.

9.2.1 Characterization-focused tests

In this subsection we review typical techniques that are used to characterize a PLL system. Characterization in this context refers to measurements made by the PLL designer upon an initial test die for the purpose of verifying correct circuit functionality and allowing generation of the device data sheet [18]. Characterization-based tests usually cover a greater number of operational parameters than those carried out for production test. Also, they can be carried out using specific special-function test equipment and hardware, as opposed to general-purpose production test equipment.
9.2.1.1 Operational-parameter-based measurements
The key parameter-based measurements employed for CP-PLL verification generally consist of the following tests:
1. Lock range and capture range.
2. Transient response:
   • correct operation from power-up of the system incorporating the PLL, often ascertained using a frequency lock test (FLT);
   • correct step response of the system when the PLL is switched between two frequencies or phases.
3. Phase transfer function (or jitter transfer function) monitoring:
   • to ascertain the correct −3 dB bandwidth of the PLL system;
   • to ascertain the correct phase response of the PLL system.
The above tests can be considered full functionality tests, as they are carried out upon the whole PLL system. It must also be mentioned that, for second-order systems, both of the above techniques can be used to extract defining parameters such as ωn (natural frequency) and ζ (damping).

Further tests are often carried out that are based upon structural decomposition of the PLL into its separate building blocks. These techniques are often used to enhance production test techniques [19, 20]. Typical tests are:
• CP current monitoring, used to ascertain the CP gain and hence the phase detector transfer characteristic, and also to monitor for excessive CP mismatch.
• Direct control of the VCO, used to allow estimation of the VCO transfer characteristic.
The decomposition tests are also often coupled with some form of noise immunity test that allows the designer to ascertain the sensitivity of the VCO or CP structures to noise on the supply rails of the PLL. As the CP currents and VCO control inputs are critical controlling nodes of the PLL, and determine the instantaneous output frequency of the PLL, any coupling of noise onto these nodes will cause jitter on the PLL output. Thus, noise immunity tests are particularly important in the initial characterization phases. A more detailed discussion of the tests now follows.
9.2.1.1.1 Lock range and capture range
The normally encountered definitions of capture range and lock range are provided below:
• Capture range: the range of frequencies that the PLL can lock to when lock does not already exist.
• Lock range: the range of frequencies over which the PLL can remain locked after lock has been achieved.
For certain applications, these parameters are particularly important. For instance, lock range would need to be evaluated for frequency demodulation applications. When considering edge-sensitive CP-PLLs, the lock range is usually equal to the capture range.
For a CP-PLL synthesizer, the lock range would be ascertained in the following manner for a single division ratio:
1. The CP-PLL would initially be allowed to lock to a reference frequency that is in the correct range for a particular divider setting.
2. The reference frequency would be slowly increased until the CP-PLL can no longer readjust its output to keep the PFD's inputs phase aligned.
3. When the CP-PLL fails to acquire constant lock, the reference frequency is recorded.
This sequence is often aided by use of lock-detect circuitry, which provides a digital output signal when the PLL has lost lock.
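The three-step procedure lends itself to a simple automated sweep. In the sketch below, the pll object and its set_reference()/locked() methods stand in for the signal generator and the lock-detect circuitry; they are assumed abstractions, not a real instrument API.

    def measure_lock_range_upper(pll, f_start, f_step, f_max):
        """Hypothetical bench sweep following steps 1-3 above: return the
        reference frequency at which lock is first lost, or None."""
        f = f_start
        pll.set_reference(f)          # step 1: lock within the valid range
        assert pll.locked()
        while f < f_max:
            f += f_step               # step 2: slowly raise the reference
            pll.set_reference(f)
            if not pll.locked():      # step 3: record where lock is lost
                return f
        return None                   # no loss of lock within the sweep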
9.2.1.1.2 Transient-type response monitoring
Frequency lock test. An initial test that is carried out before more elaborate tests are employed is the FLT. This test simply determines whether the PLL can achieve a
stable locked condition for a given operational configuration. Stability criteria will be determined by the application and may consist of an allowable phase or frequency error at the time of measurement. Typically, this test is carried out in conjunction with a maximum specified time criterion; that is, if the PLL has failed to achieve lock after a specified time, then the PLL is faulty. The FLT is usually initiated at system startup. It is common for this test to be carried out for various PLL settings, such as maximum and minimum divider ratios, different LF settings, and so on. Owing to its simplicity and the fact that it will uncover many hard faults and some soft faults in the PLL, this test is often used in many production test applications. A graphical description of the FLT is given in Figure 9.9.

Figure 9.9 Graphical representation of a typical FLT (the reference is applied at T0; the PLL is locked when its output frequency is stable and equals an integer multiple of the reference frequency)
In Figure 9.9, T0 represents the start of the test and Tlock indicates the time taken to achieve lock.
In many applications, the output frequency is simply measured after a predetermined time; this is often the case in automated-test-equipment-based test schemes, where the tester master clock would be used to determine the time duration. Alternatively, in some situations, the PLL itself is fitted with lock detect (LD) circuitry that produces a logic signal when the PLL has attained lock [20]. In this situation, a digital counter is started at T0 and stopped by the LD signal, thus enabling accurate lock time calculations to be made. Note that LD circuitry is not test specific, as it is often included in PLL circuits to inform other system components when a stable clock signal is available. However, sometimes an LD connection is fitted solely for design-for-testability (DfT) purposes. It must also be mentioned that, in certain PLL applications, it may be acceptable to access the LF node. If this is the case, the approximate settling time of the PLL can be monitored from this node. This technique is sometimes used for characterization of chipset PLLs; however, due to problems

outlined in the previous sections, it appears to be less commonly used for the test of fully embedded PLLs.

Figure 9.10 Basic equipment set-up for the PLL step response test
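The counter-based lock-time measurement described above (a counter started at T0 and stopped by the LD signal) can be sketched as follows, assuming the LD output is sampled once per tester clock period:

    def lock_time_from_ld(ld_samples, t_clk):
        """Return Tlock as the number of tester clock periods elapsed before
        LD first asserts, multiplied by the clock period; None if no lock."""
        for count, ld in enumerate(ld_samples):
            if ld:                    # LD asserted: PLL has attained lock
                return count * t_clk
        return None                   # failed FLT: no lock within the window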
Step response test. The step response monitoring of PLLs is a commonly used bench characterization technique [2]; the basic hardware set-up is shown in Figure 9.10. Further details relating to Figure 9.10 are given below:
• The input signal step is applied by using a signal generator capable of producing a frequency shift keying (FSK) signal. The signal is toggled periodically between F1 and F2. Note that a suitable toggling frequency will allow the system to reach the steady-state condition after each step transition.
• If an external LF is used, it is sometimes possible to measure the output response from the LF node. The signal measured at this node will be directly proportional to the variation in output frequency that would be observed at the PLL's output.
Also note that as the VCO output frequency is directly proportional to the LF voltage, the step response can also be measured at the VCO output. In fact, this is the technique that must be employed when LF access is prohibited. However, this technique can only be carried out if test equipment with frequency trajectory (FT) probing capabilities is available. This type of equipment allows a plot or oscilloscope trace of instantaneous frequency against time to be made, thus providing a correct indication of the transient step characteristics. Many high-specification bench-test equipment products incorporate FT functions, but it is often hard to incorporate the technique into a high-volume production test plan.
An alternative method of introducing a frequency step to the system involves switching the feedback divider between N and N + 1. This method will produce an output response from the PLL that is equivalent to the response that would be observed for application of an FSK input frequency step equal to the PLL's reference frequency. The technique can be easily verified with reference to Equation (9.1).

Figure 9.11 Graphical representation of a PLL step response

The step response can be used to make estimates of the parameters outlined in Section 9.1. To further illustrate the technique, a graphical representation of a second-order system step response is provided in Figure 9.11.
In Figure 9.11, the dashed line indicates the application of the input step parameter, and the solid line indicates the output response. Note that the parameters of interest are shown as V-parameters and F-parameters to indicate the similarity between a common second-order system response and a second-order PLL system response. An explanation of the parameters is now given:
• Vstart (Fstart): the voltage or frequency before the input step is applied.
• Vstop (Fstop): the final value of the input stimulus signal.
• ΔV (ΔF): the amount by which the input signal is changed.
• VoutSS (FoutSS): the final steady-state output value of the system.
• Settling time: the amount of time it takes, after the application of the input step, for the system to reach its steady-state value.
• A1: peak overshoot of the signal.
• A2: peak undershoot of the signal.
• ΔT: time difference between the consecutive peaks of the transient response.
Direct measurement of these parameters can be used to extract ωn and ζ. Estimation of the parameters is carried out using the following formulas, which are taken from Reference 2 and are also found in many control texts [4]. The formulas are valid only for an underdamped system, that is, one in which A1, A2 and hence ΔT can be measured. If this is not the case, other parameters, such as delay time or rise time, can be used to assess the system performance. This is true for many applications, when what is really desired is overall knowledge of the transient shape of the step response.
Figure 9.12 Generalized Bode plot for a second-order system (closed-loop transfer function)
The damping factor ζ can be found as follows:

ζ = ln(A1/A2) / [π² + (ln(A1/A2))²]^(1/2)    (9.13)

The natural frequency ωn can be found as follows:

ωn = π / (ΔT·√(1 − ζ²))    (9.14)

Furthermore, PLL system theory and control system theory texts [2-4] also contain normalized frequency and phase step response plots, where the amplitude and time axes are normalized to the natural frequency of the system. Design engineers commonly employ these types of plots in the initial system design phase.
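The extraction of ζ and ωn from a measured step response is then a one-line application of each formula; the sketch below assumes A1, A2 and ΔT have already been read from a frequency-trajectory trace:

    from math import pi, log, sqrt

    def step_response_params(a1, a2, delta_t):
        """Damping and natural frequency from overshoot A1, undershoot A2
        and peak-to-peak time delta_t, per Equations (9.13) and (9.14)."""
        d = log(a1 / a2)
        zeta = d / sqrt(pi ** 2 + d ** 2)              # Equation (9.13)
        w_n = pi / (delta_t * sqrt(1 - zeta ** 2))     # Equation (9.14)
        return zeta, w_n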
9.2.1.1.3 Transfer function monitoring
In many applications, a PLL system is designed to produce a second-order response. It
must be noted that although second-order systems are considered here, measurement
of the transfer functions of higher-order PLLs can provide valuable information about
system operation and can be achieved using the methods explained here.
A Bode plot of the transfer function of a general unity gain second-order system
is shown in Figure 9.12.
Typical parameters of interest for a second-order system are highlighted in Figure 9.12; these are now explained in the context of a PLL system.
0 dB asymptote. For a unity gain system, within the −3 dB frequency (see −3 dB below), the magnitude of the gain will tend to one (0 dB) as the frequency of the excitation signal is reduced. The slope of this decrease will be determined by the damping of the system. In a similar manner, the relative phase lag between the input and output of the system will tend to 0°. Note: for a PLL, the magnitude of the response within the loop bandwidth can be assumed to be unity [8]. As explained in later sections, this is an important observation when considering PLL test.
ωp. This is the frequency where the magnitude of the system response is at its maximum. It is directly analogous to the natural frequency (ωn) of the system. In addition, the relative magnitude of the peak (above the unity gain value) can be used to determine the damping factor (ζ) of the system. Relationships between ζ, the decibel magnitude and the normalized radian frequency are available in many texts concerning control or PLL theory [2-4].
−3 dB. Following Reference 1, this defines the one-sided loop bandwidth of the PLL. The PLL will generally be able to track frequency variations of the input signal that are within this bandwidth and reject variations that are above this bandwidth.
For normally encountered second-order systems, that is, ones with voltage, current or force inputs and corresponding voltage, current or displacement outputs, the Bode plot is constructed by application of a sinusoidally varying input signal at different frequencies; the output of the system is then compared to the input signal to produce magnitude and phase response information. However, the situation is slightly different for the PLL systems we are considering. In this situation, the normal input and output signals are considered to be continuous square wave signals. The PLL's function is to phase align the input and output signals of the PLL system.
It can be seen from Equation (9.2) and Figure 9.1 that to experimentally measure the PLL transfer function we need to apply a sinusoidal variation of phase about the nominal phase of the input signal θi(t), that is, we sinusoidally phase modulate the normal input signal. The frequency of the phase change is then increased and the output response is measured. The block diagram for an experimental bench-type test set-up is shown in Figure 9.13.
For the above test, the output response can be measured at the LF node or the VCO output. The output of the LF node will be a sinusoidally varying voltage. The output of the VCO will be a frequency (or phase) modulated signal. Magnitude measurements taken at a sufficiently low point below ωp can be approximated to unity gain, that is, the PLL exhibits 100 per cent feedback within the loop bandwidth. Additionally, the phase lag can be approximated to 0°. This means that all measurements taken from the PLL output can be referenced to the first measurement. Figure 9.14 shows the method for phase measurement calculation between input and output.
Tcycle represents the time measured for one complete cycle of the input waveform. ΔT represents the measured time difference between the input and output waveforms. Using Tcycle and ΔT, the phase difference (Δφ) between the input and output signal can be estimated using the following relationship:

$$\phi(j\omega) = \Delta\phi = \frac{\Delta T}{T_{\mathrm{cycle}}}\times 360^{\circ} \qquad (9.15)$$



Figure 9.13 Illustrating application of a PLL transfer function measurement (a phase modulation generator applies, against a phase reference, a phase-modulated input signal for one frequency to the PFD-LF-VCO chain; the output response can be measured at the LF node or the VCO output, depending on the equipment used)

Figure 9.14 Measurement of phase difference between input and output (Tcycle measured on the input waveform; ΔT between corresponding input and output edges)

For measurement of the magnitude response it must be recalled that well within the loop bandwidth the PLL response can be approximated as unity. It follows that an initial output signal resulting from an input signal, whose modulation frequency is sufficiently low, can be taken as a datum measurement. Thus, all subsequent measurements can be referenced to this initial output measurement, and knowledge of the input signal is not required. For example, if an initial measurement was taken for



a modulation of 100 Hz, subsequent measurements could be carried out using the following relationship:

$$|H(j\omega)|\;(\mathrm{dB}) = 20\log_{10}\left(\frac{V_{mN}}{V_{m100\,\mathrm{Hz}}}\right) \qquad (9.16)$$

where Vm100Hz is the peak-to-peak voltage measured at an input modulation frequency of 100 Hz and VmN is the Nth peak-to-peak voltage output for a corresponding modulation frequency.
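As an illustration of the tester-side arithmetic, the following minimal Python sketch evaluates one point of the transfer function from Equations (9.15) and (9.16); the measurement values are hypothetical, and the datum is assumed to be the 100 Hz reading.

    import math

    def transfer_function_point(delta_t, t_cycle, vm_n, vm_datum):
        # delta_t: input-to-output time difference (s); t_cycle: one complete
        # input cycle (s); vm_n: peak-to-peak output at the Nth modulation
        # frequency; vm_datum: peak-to-peak output at the low-frequency datum.
        phase_deg = 360.0 * delta_t / t_cycle        # Eq. (9.15)
        mag_db = 20.0 * math.log10(vm_n / vm_datum)  # Eq. (9.16)
        return phase_deg, mag_db

    # Hypothetical reading: 1 us lag on a 20 us cycle, with the amplitude at
    # 70 per cent of the 100 Hz datum measurement:
    print(transfer_function_point(1e-6, 20e-6, vm_n=0.35, vm_datum=0.50))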
The technique described above for phase transfer monitoring is almost identical to a test technique known as jitter transfer function monitoring [21, 22]. In that case, however, a known jittery source signal is used as the phase modulation signal, as opposed to the sine-wave modulation mentioned in this text.
9.2.1.1.4 Structural decomposition
This subsection will outline common structural decomposition tests that are often used
to ease PLL characterization. In the interests of brevity, emphasis will be given to the analogue subcircuits of the PLL. With reference to Section 9.1 and the associated equations, it can be seen that the PLL system is broken down into three main analogue-type blocks, consisting of the CP, the LF and the VCO. These are considered to be
critical parts of the PLL, hence much design effort is spent on these blocks. The blocks
are often designed independently so that the combination of the associated transfer
characteristics will yield the final desired PLL transfer function. In consequence,
it seems logical to attempt to verify the design parameters independently. Typical
parameters of interest that are often checked include the following:

• absolute CP current
• CP mismatch
• VCO gain
• VCO linearity.

If direct access to the LF control node is permitted, all of these tests can be
enabled using relatively simple methods. Also, to allow these tests to be carried out,
extensive design effort goes into construction of access structures that will place
minimal loading on the LF node. However, injection of noise into the loop is still a
possibility and the technique seems to be less commonly used. A brief explanation
of common test methods is now provided.
CP measurements:
A typical test set-up for measuring the CP current is shown in Figure 9.15.
Here, CPU is the up current control input, CPD is the down current control input,
TEST is the test initiation signal that couples the LF node to the external pin via a
transmission gate network, Rref is an external reference resistor and Vref is the voltage
generated across Rref due to the CP current. The tester senses Vref and thus the CP
current can be ascertained. A typical test sequence for the CP circuitry may contain


Figure 9.15 Direct access measurement of CP current (within the PLL system, the TEST signal couples the LF node through a transmission gate to an external pin, where the tester senses Vref across the reference resistor Rref)

the following steps:

1. Connect the LF node to the external reference network by enabling the TEST signal.
2. Activate the up current source by disabling CPD and enabling CPU.
3. Wait a sufficient time for the network to settle.
4. Measure the resultant up current in the CP using the relationship:

$$I_{\mathrm{CPU}} = \frac{V_{\mathrm{refup}}}{R_{\mathrm{ref}}} \qquad (9.17)$$

5. Activate the down current source by disabling CPU and enabling CPD.
6. Wait a sufficient time for the network to settle.
7. Measure the resultant down current in the CP using the relationship:

$$I_{\mathrm{CPD}} = \frac{V_{\mathrm{refdn}}}{R_{\mathrm{ref}}} \qquad (9.18)$$
An estimate of the CP current mismatch can be found by subtracting the results of Equations (9.17) and (9.18). Also, the CPU and CPD inputs can often be indirectly controlled via the PFD inputs, thus removing the necessity of direct access to these points and additionally providing some indication of correct PFD functionality.
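The tester-side calculation is simple enough to sketch in Python; the voltage readings and resistor value below are hypothetical.

    def charge_pump_currents(v_refup, v_refdn, r_ref):
        # v_refup, v_refdn: voltages sensed across Rref with the up and down
        # current sources active, respectively (V); r_ref: external
        # reference resistor (ohm).
        i_cpu = v_refup / r_ref      # Eq. (9.17)
        i_cpd = v_refdn / r_ref      # Eq. (9.18)
        mismatch = i_cpu - i_cpd     # CP current mismatch estimate
        return i_cpu, i_cpd, mismatch

    # Hypothetical readings of 101 mV and 99 mV across a 1 kohm resistor:
    print(charge_pump_currents(0.101, 0.099, 1000.0))  # ~101 uA, 99 uA, 2 uA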
Note that in the previous description, the test access point is connected to an inherently capacitive node consisting of the VCO input transistor and the LF capacitors. In consequence, if no faults are present in these components, there should be negligible current flow through their associated networks. It follows that this type of test will give some indication of LF structure and interconnect faults.



Figure 9.16 VCO test set-up (with the CP sources isolated via CPU and CPD, voltages V1 and V2 forced on the external pin through the TEST gate produce VCO output frequencies F1 and F2; ideal and non-ideal transfer functions are indicated)

VCO measurements:
A typical test set-up to facilitate the measurement of the VCO gain and linearity is
shown in Figure 9.16, where CPU is the CP up current control input, CPD is the CP
down current control input and TEST is the test initiation control input.
A typical test sequence would be carried out as follows:
1. Initially, both CP control inputs are set to open the associated CP switch
transistors. This step is carried out to isolate the current sources from the
external control pin.
2. A voltage V1 is forced onto the external pin.
3. The external pin is connected to the LF node by activation of the TEST signal.
4. After settling, the corresponding output frequency, F1 , of the VCO is measured.
5. A higher voltage, V2 , is then forced onto the external pin.
6. After settling, the corresponding output frequency F2 of the VCO is measured.
In the above sequence of events, the values chosen for the forcing voltages will be
dependent on the application.
After taking the above measurements, the VCO gain can be determined using the following relationship:

$$K_{\mathrm{VCO}} = \frac{F_2 - F_1}{V_2 - V_1}\;(\mathrm{Hz/V}) \qquad (9.19)$$

In certain applications, it may be necessary to determine the VCO non-linearity. This can be determined by taking incremental measurements of the VCO gain between the points V1 and V2 in Figure 9.16.
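A minimal Python sketch of the gain and linearity arithmetic follows; the forcing voltages and measured frequencies are hypothetical values between V1 and V2.

    def vco_gain(f1, f2, v1, v2):
        # Eq. (9.19): VCO gain in Hz/V from a two-point measurement.
        return (f2 - f1) / (v2 - v1)

    def incremental_gains(voltages, frequencies):
        # Incremental gains between successive forcing voltages; the spread
        # of these values indicates the VCO non-linearity.
        points = list(zip(voltages, frequencies))
        return [vco_gain(f1, f2, v1, v2)
                for (v1, f1), (v2, f2) in zip(points, points[1:])]

    # Hypothetical sweep between V1 = 1.0 V and V2 = 1.6 V:
    print(incremental_gains([1.0, 1.2, 1.4, 1.6],
                            [100e6, 121e6, 144e6, 165e6]))  # Hz/V per segment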


9.2.2 Production test focused

In many situations, owing to the problems stated in previous sections, the FLT may be the only test carried out on embedded PLLs. A particular test plan may therefore include the criterion that the PLL must lock within a certain time for a certain set of divider settings. Often, to enhance the FLT results, structural decomposition and ad hoc DfT techniques such as the ones outlined in the previous sections are used. The PLL is also generally provided with various DfT techniques incorporated into the digital structures, such as

• Direct access to PFD control inputs and outputs.
• Direct control of feedback divider inputs.
• Direct clock propagation to feedback divider.
• Direct monitoring of feedback divider.

In addition to these features, an embedded PLL will normally have a bypass mode that allows other on-chip circuitry to receive clock signals from an external tester as opposed to the PLL itself. This mode allows other on-chip circuitry to be synchronized to the tester during system test. In this situation, the PLL core is often placed in a power-down mode. Particular examples of generic production test methodologies are provided in References 18 and 19.
Ad hoc DfT methods can be of use for PLL testing; however, some of the associated problems, such as noise injection and analogue test pin access, can introduce severe limitations, especially when considering test for multiple on-chip PLLs. In consequence, there has been recent interest in fully embedded BIST techniques for PLLs. An overview of BIST strategies is presented in Section 9.3.
9.2.2.1 Jitter measurements
This section will provide an outline of typical jitter measurement techniques. Accurate jitter measurements generally require some form of accurate time-based reference. Accuracy in this context refers to a reference signal that possesses good long-term stability and small short-term jitter, as the reference signal jitter will add to the jitter generated by the device under test and will be indistinguishable in the final measurement. In consequence, the reference jitter should be at least an order of magnitude less than the expected output jitter of the device under test. For the following discussions it will be assumed that a good reference signal exists. It must be noted that much of the literature devoted to new jitter test techniques appears to concentrate on the generation of accurate time-based measurements. However, the basic analysis principles often remain similar. Commonly used measurement and analysis techniques include period measurements and histogram measurements. These techniques are explained below.
Period-based jitter measurements
Period-based measurements essentially consist of measuring the time difference between equally spaced cycles of a continuous periodic waveform. A graphical representation of the technique is shown in Figure 9.17.



Figure 9.17 Representation of period-based measurements (start and stop counts N and N + 1 are gated by the reference signal over the nth and (n + 1)th cycles of the output signal, yielding counts of, for example, 7 and 6)


Figure 9.18 Illustrating gating of period measurement signals (the conditioned input is counted for N cycles between the start and stop of the main gate)

This technique essentially carries out a frequency counting operation on the PLL output signal and will measure or count the number of PLL output transitions in a predetermined time interval (gate time) determined by the reference signal. The difference between successive counts will be related to the average period jitter of the PLL waveform. Obviously, this method cannot be used to carry out short-term cycle-to-cycle jitter measurements. Accuracy of this technique requires that the PLL output signal frequency is much higher than that of the gate signal.
The signals would be gated as shown in Figure 9.18 and would be used with fast
measurement circuitry to initiate and end the measurements.
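As a sketch of the analysis step, and assuming hypothetical counts returned by the gated counter, the average period jitter between successive gates could be estimated in Python as follows:

    def average_period_jitter(counts, gate_time):
        # counts: PLL output transitions counted in successive, equal gate
        # intervals derived from the reference signal; gate_time: duration
        # of one gate interval (s). The average period within each gate is
        # gate_time / count.
        periods = [gate_time / c for c in counts]
        return [abs(p2 - p1) for p1, p2 in zip(periods, periods[1:])]

    # Hypothetical counts from four consecutive 100 us gates:
    print(average_period_jitter([10000, 10001, 9999, 10000], gate_time=100e-6))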
Histogram analysis
Histogram-based analysis is often carried out using a strobe-based comparison
method. In this method, the clean reference signal is offset by multiples of equally
spaced time intervals, that is, the reference signal edge can accurately be offset from


Figure 9.19 Example of strobe edge placement for seven edges (the jitter distribution of the reference clock edge is sampled by strobe edges St1 to St7)

the ideal position by

$$N\,\Delta T \qquad (9.20)$$

where N represents the maximum number of time delays that the reference signal edge can be displaced by and ΔT is the minimum time resolution.
The measured signal is then compared to ascertain how many times its edge transition occurs after consecutive sets of the displaced edges. A jitter histogram of the measured waveform is then constructed by incrementing N and counting the occurrences of the rising edge over a predetermined set of measurements. A value of N = 0 will correspond to a non-delayed version of the reference signal. The measurement accuracy will be primarily dependent upon ΔT, and ΔT should be an order of magnitude below the required resolution. For example, 100 ps measurement accuracy would require a ΔT of 10 ps.
An illustration of strobe edge placement for N = 7 is shown in Figure 9.19. As an
example, the count values could be collected from a given set of 100 measurements
as shown in Table 9.2. The values from the table would then be used to construct the
appropriate jitter histogram as shown in Figure 9.20.
It must be mentioned that various other techniques exist and are used to facilitate approximation of jitter, such as indirect measurements and Fourier-based methods. For indirect measurements, a system function reliant on the PLL clock signal is tested. Typical examples may include signal-to-noise ratio testing of ADC or DAC systems. For Fourier-based methods, the signal of interest is viewed in the frequency domain as opposed to the time domain and the resultant phase noise plot is examined. Proportionality exists between phase noise within a given bandwidth and the corresponding jitter measurement, thus allowing jitter estimation [23, 24].



Table 9.2 Example values for histogram-based measurement

Strobe position number    Failure count    Pass count
St1                        4               96
St2                       15               85
St3                       35               65
St4                       50               52
St5                       65               35
St6                       85               15
St7                       90               10
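A small Python sketch follows showing one plausible way to turn the counts of Table 9.2 into histogram bins. It rests on an assumption: that the failure count at each strobe position is cumulative (the measured edge has already occurred by that strobe), so that successive differences give the per-bin occurrences.

    # Failure counts from Table 9.2, taken over 100 measurements:
    failure_counts = [4, 15, 35, 50, 65, 85, 90]
    # Assuming the counts are cumulative across strobe positions St1 to St7,
    # successive differences give the jitter histogram bins of Figure 9.20:
    bins = [failure_counts[0]] + [b - a for a, b in
                                  zip(failure_counts, failure_counts[1:])]
    print(bins)  # [4, 11, 20, 15, 15, 20, 5]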

Figure 9.20 Jitter histogram constructed from the values in Table 9.2 (occurrence counts plotted against strobe edges St1 to St7)

9.3 BIST techniques

Although the primary function of a PLL is relatively simple, previous sections have shown that there is a wide range of specifications, critical to the stability and performance of PLL functions, that needs to be verified during engineering and production test. These specifications range from lock time and capture range to key parameters encoded in the phase transfer function, such as damping and natural frequency. Parameters such as jitter are also becoming more critical as performance specifications become more aggressive.
The challenge therefore associated with self-testing PLLs is to find solutions that

• can be added to the design with minimal impact on the primary PLL function,
• have minimal impact on the power consumption,


• involve minimal silicon overhead,
• can be implemented by non-specialists,
• guarantee detection of faults through either direct or indirect measurement of key specifications,
• are tolerant to normal circuit noise, component offsets, process spreads, temperature and supply variations.

The following section identifies several BIST strategies proposed for PLL structures. Only the basic techniques are described here. The reader should consult the
publications referenced for more information on practical implementation issues,
limitations and potential improvements.
A fully digital BIST solution was proposed by Sunter and Roy [10]. This solution is restricted to semi-digital types of PLL and is based on the observation that the open-loop gain is linearly dependent on the key parameters associated with the PLL, that is

$$G_{\mathrm{OL}} = \frac{K_p K_v G(s)}{N s}$$

where Kp is the gain of the phase detector, which is a function of the CP current, G(s) is the frequency-dependent gain of the lowpass filter or integrator, Kv is the gain of the VCO in rad/s/volt, N is the digital divider integer and s is the Laplace variable.
The BIST solution opens the feedback loop to ensure that the output of the phase
detector is independent of the VCO frequency. This is achieved by adding a multiplexer to the PLL input as shown in Figure 9.21. A fully digital method is used to
derive an input signal with a temporary phase shift. The method uses signals tapped
Figure 9.21 Circuit for measuring loop gain (in loop gain test mode, a multiplexer at the Fref input of the phase detector and CP selects a signal from a phase delay circuit driven from the divide-by-N block; a 'delay next cycle' control triggers the temporary phase shift)



from the divide-by-N block in the feedback loop to generate an output that is 180° out of phase relative to the input for one cycle of the reference clock only. This phase shift is activated on receipt of a logic 1 on the 'delay next cycle' signal. This phase-shifted signal is applied to the phase detector via the input multiplexer. The strategy used eliminates problems in measuring very fast frequency changes on the VCO output that would result if a constant phase shift was applied. In this architecture, the VCO frequency will change only during the cycle where the phase is shifted. Following this cycle, the VCO will lock again (zero phase shift on the input), hence it is relatively easy to measure the initial and final VCO frequencies and calculate the difference. The relationship between the change in VCO frequency as a function of the phase shift on the input and the reference clock is

$$\Delta f_{\mathrm{VCO}} = \frac{K_v I_{\mathrm{CP}}}{f_{\mathrm{ref}}\,C}$$

and the open-loop gain is given by

$$G_{\mathrm{OL}} = \frac{\Delta f_{\mathrm{FB}}}{2 f_{\mathrm{ref}}} = \frac{K_v I_{\mathrm{CP}}}{2 N C f_{\mathrm{ref}}^{2}}$$

So in summary, a digital circuit can be added to the divide-by-N block to generate a known phase-shifted input for one cycle (stimulus generator) and a multiplexer added to the PLL input to allow the loop to be broken and the phase-shifted signal from the stimulus generator to be applied. All that remains is the implementation of an on-chip solution to measure the output frequency change. This can be achieved digitally using a gated binary counter with a known reference frequency input.
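Assuming the relation reconstructed above, the tester-side evaluation amounts to a single division; the counter readings in this Python sketch are hypothetical.

    def open_loop_gain(delta_f_fb, f_ref):
        # G_OL = delta_f_FB / (2 * f_ref), per the relation given above.
        return delta_f_fb / (2.0 * f_ref)

    # Hypothetical result: the feedback clock moves by 250 kHz in response
    # to the single-cycle phase shift, with a 10 MHz reference:
    print(open_loop_gain(250e3, 10e6))  # 0.0125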
Capture and lock range measurements are also possible using this architecture by
measuring the maximum and minimum frequencies of the lock/capture range. This
is achieved by continuously applying a frequency error to force the output frequency
to its maximum or minimum value at a controlled rate. The implementation involves
connecting the PLL input to an output derived from the divide-by-N block with a
frequency equal to, double of or half of the VCO output frequency. The VCO output
frequency is continually monitored until the rate of change approaches a defined
small value. This last frequency measurement is recorded. Lock time can also
be measured in this testing phase by closing the PLL loop after this maximum or
minimum frequency has been achieved and counting the number of reference clock
cycles to the point at which the PLL locks.
An alternative architecture for on-chip measurement of the phase transfer function is described by Burbidge et al. [25]; it utilizes an input multiplexer as above, a digital control function and a PFD with a single D-type flip-flop added to its output as shown in Figure 9.22.
The purpose of this modified phase detector is to detect the peak output that
corresponds to the response of the PLL to the peak value of the input phase. If a
strobe signal is generated by the input stimulus generator when the input phase is
at its peak, measurement of the time delay between the input strobe and the output
change on the Q output of the D-type can generate the phase response at a fixed input
frequency. In addition, the point at which this D-type output changes corresponds to



Figure 9.22 Sampling of output frequency (a D-type flip-flop on the PFDUP/PFDDN outputs of an existing or additional PFD resolves whether PLLREF is leading or lagging PLLFB)

Figure 9.23 BIST architecture (multiplexers M1 and M2 select between EXTREF, the input modulator and the feedback path at the inputs of the PLL forward path; a test sequencer with gate control drives the phase counter and frequency counter, together with a test clock and the 1/N divider)

the PLL being locked; hence, measurement of the output frequency at this point will allow the magnitude response of the PLL to be calculated at the reference frequency of the input stimuli. Repeating this process for different values of input frequency will allow the phase transfer function to be constructed. This modified phase detector and the methodology described above are used within the overall BIST architecture shown in Figure 9.23. The input multiplexer M2 is used to connect or break the feedback loop and apply identical inputs to the PLL forward path to artificially lock the PLL.



Table 9.3 Test sequence

Test stage                                        M1    M2    Comments
(1) Set reference                                 A=C   B=D   Apply digital modulation with frequency FN and start phase counter (counter referenced to EXTREF)
(2) Set phase counter                             A=C   B=D   Start phase counter at peak of input modulation
(2) Monitor peak                                  A=C   B=D   Monitor for peak output signal frequency
(3) Peak occurred, lock PLL, stop phase counter   A=C   A=D   Holds the output frequency constant
(4) Measure frequency and phase                   A=C   A=D   Count output frequency and store. Store the result of the phase counter.

(5) Increase modulation frequency FN and repeat steps 1 to 4 until all frequencies of interest have been monitored.

The algorithm used to construct the phase transfer function is as in Table 9.3.
Note that this technique requires an input stimulus generator that provides either a
frequency or phase-modulated input with a strobe signal generated at the peaks. Either
frequency modulation using a digitally controlled oscillator or phase modulation using
multiplexed delay lines can be used.
A third method of achieving a structural self-test of a PLL structure was proposed by Kim et al. [9]; it involves injecting a constant current into the PLL forward path and monitoring the LF output, which is usually a multiple of the injected input current and a function of the impedance of the forward path.
In this approach, additional circuitry is placed between the PFD and CP with the
primary objective of applying known control signals directly to the CP transistors.
In the test, the PLL feedback path is broken and control signals referenced to a
common time base are applied to the CP control inputs. The oscillator frequency will
be proportional to the voltage present at the LF node, which is in turn dependent on
the current applied from the CP. Thus, if the output frequency can be determined,
information can be obtained about the forward path PLL blocks. The test proposal
suggests that the loop divider is reconfigured as a frequency counter. The test basically comprises three steps, as follows. Initially, both of the CP transistors are closed to perform a d.c. reference count. If the CP currents are matched, the voltage of the LF node should be at approximately half the supply voltage. The measurement from this test phase is used as a datum for all subsequent measurements. In the second stage of the test, the LF is discharged for a known time. Finally, the LF is charged for a known



time. For all of the test stages the output is measured using the oscillator and the
frequency counter and is stored in a digital format. In all of the tests, the output
response is compared against acceptable limits and pass or fail criteria are evaluated.
This BIST strategy covers most if not all analogue faults and can be extended to
test the phase detector by increasing the complexity of the timing and control of the
input vectors to the PLL.
Finally, it should be noted that methods to measure jitter either directly or indirectly are currently being addressed. In Reference 10 a method is proposed that has
the additional advantage of utilizing minimal additional digital functions for implementation. To date, however, there are few other credible on-chip solutions. This is
an important goal as frequencies continue to rise.

9.4 Summary and conclusions

This chapter has summarized the types of PLL used within electronic systems, the
primary function of the core blocks and key specifications. Typical test strategies
and test parameters have been described and a number of DfT and BIST solutions
described.
It is clear that as circuit speeds increase and electronic systems rely more heavily on accurate and stable clock control and synchronization, the integrity and stability requirements of PLL functions will become more aggressive, increasing test time and test complexity. Methods of designing PLLs to be more accurately and more easily tested will hence become more important as the SoC industry grows.

9.5 References

1 Gardner, F.M.: Phase Lock Techniques, 2nd edn (Wiley Interscience, New York, 1979)
2 Best, R.: Phase Locked Loops: Design, Simulation and Applications, 4th edn (McGraw-Hill, New York, 2003)
3 Gardner, F.M.: 'Charge-pump phase-lock loops', IEEE Transactions on Communications, 1980;28:1849-58
4 Gayakwad, R., Sokoloff, L.: Analog and Digital Control Systems (Prentice Hall, Englewood Cliffs, NJ, 1998)
5 Lee, T.H.: The Design of CMOS Radio-Frequency Integrated Circuits (Cambridge University Press, Cambridge, 1998), pp. 438-549
6 Johns, D.A., Martin, K.: Analog Integrated Circuit Design (John Wiley & Sons, New York, 1997), pp. 648-95
7 Sachdev, M.: Defect Oriented Testing for CMOS Analog and Digital Circuits (Kluwer, Boston, MA, 1998), pp. 37-38 and 79-81
8 Kim, S., Soma, M.: 'Programmable self-checking BIST scheme for deep sub-micron PLL applications', Technical Report, Department of Electrical Engineering, University of Washington, Seattle, WA
9 Kim, S., Soma, M., Risbud, D.: 'An effective defect-oriented BIST architecture for high-speed phase-locked loops', Proceedings of 18th IEEE VLSI Test Symposium, San Diego, CA, 30 April 1999, pp. 231-7
10 Sunter, S., Roy, A.: 'BIST for phase-locked loops in digital applications', Proceedings of 1999 IEEE International Test Conference, Atlantic City, NJ, September 1999, pp. 532-40
11 Goteti, P., Devarayanadurg, G., Soma, M.: 'DFT for embedded charge-pump PLL systems incorporating IEEE 1149.1', Proceedings of IEEE Custom Integrated Circuits Conference, Santa Clara, CA, August 1997, pp. 210-13
12 Azais, F., Renovell, M., Bertrand, Y., Ivanov, A., Tabatabaei, S.: 'A unified digital test technique for PLLs: catastrophic faults covered', Proceedings of 5th IEEE International Mixed Signal Testing Workshop, Edinburgh, UK, June 2006
13 Vinnakota, B.: Analog and Mixed-Signal Test (Prentice Hall, Englewood Cliffs, NJ, 1998)
14 Philips Semiconductors, Data Sheet PCK857, 66-150 MHz Phase Locked Loop Differential 1:10 SDRAM Clock Driver, 1 December 1998
15 Texas Instruments, Data Sheet TLC2933, High-performance Phase-locked Loop, April 1996, revised June 1997
16 MAXIM Semiconductors, Data Sheet 19-1537, 622 Mbps, 3.3 V Clock-Recovery and Data-Retiming IC with Limiting Amplifier, Rev 2, 12/01
17 MAXIM Semiconductors, Application Note HFAN-4.3.0, Jitter Specifications Made Easy: A Heuristic Discussion of Fibre Channel and Gigabit Ethernet Methods, Rev 0, 02/01
18 Burns, M., Roberts, G.: An Introduction to Mixed Signal IC Test and Measurement (Oxford University Press, New York, 2002)
19 OKI ASIC Products, Data Sheet, Phase-Locked Loop 0.35 µm, 0.5 µm and 0.8 µm Technology Macrofunction Family, September 1998
20 Chip Express, Application Notes APLL005, CX2001, CX2002, CX3001, CX3002 Application Notes, May 1999
21 LeCroy, Application Brief, PLL Loop Bandwidth: Measuring Jitter Transfer Function in Phase Locked Loops, 2000
22 Veillette, B.R., Roberts, G.W.: 'On-chip measurement of the jitter transfer function of charge pump phase locked loops', Proceedings of IEEE International Test Conference, Washington, DC, November 1997, pp. 776-85
23 Goldberg, B.G.: Digital Frequency Synthesis Demystified (LLH Technology Publishing, Eagle Rock, VA, 1999)
24 Goldberg, B.G.: 'PLL synthesizers: a switching speed tutorial', Microwave Journal, 2001;41
25 Burbidge, M., Richardson, A., Tijou, J.: 'Techniques for automatic on-chip closed loop transfer function monitoring for embedded charge pump phase locked loops', presented at DATE (Design, Automation and Test in Europe), Munich, Germany, 2003

Chapter 10

On-chip testing techniques for RF wireless transceiver systems and components

Alberto Valdes-Garcia, Jose Silva-Martinez, Edgar Sanchez-Sinencio

10.1 Introduction

In the contemporary semiconductor industry, the incorporation of a comprehensive testing strategy into the design flow of a wireless module is indispensable for its timely development and economic success [1-3]. Modern transceivers are highly integrated systems. The diverse nature of their specifications and components, as well as the ever-increasing frequencies involved, makes their testing progressively more complex and expensive. To increase the effectiveness and cost efficiency of analogue and radio frequency (RF) tests in integrated systems is a challenge that has been addressed at different levels. Recent efforts span defect modelling [4], development of algorithms for automated test, functional verification through alternate tests [5, 6], design for testability techniques [7] and built-in self-test (BIST) techniques [8-10].
In particular, BIST can become a high-impact resource for the following reasons: (i) to reduce the complexity and cost of the external automatic test equipment (ATE) and its interface to the circuit under test (CUT), it is desirable to move some of the testing functions to the test board and into the CUT itself [2, 3]; (ii) the increase in packaging costs demands known good die testing solutions that can be implemented at the wafer level [7]; and (iii) BIST can offer fault diagnosis capabilities (i.e., to identify a faulty block in the system) that provide valuable feedback for yield enhancement, thus accelerating the development of the product.
This chapter describes a set of techniques that improves the testability of a wireless
transceiver. The goal is to enable the measurement of the major performance metrics
of the transceiver and its individual building blocks at the wafer level avoiding the use
of RF instrumentation. Figure 10.1 illustrates this approach. The embedded testing



Figure 10.1 Wafer-level test of an integrated transceiver through a low-cost interface (a low-cost ATE running test control and analysis software connects through a hardware interface and a digital and d.c. test bus to on-chip test circuits for the RF front-end and the analogue baseband of the transceiver on wafer; the scheme supports calibration of the on-chip test circuits, test of the system and its building blocks, and fault location and diagnosis)

devices communicate with the ATE through an interface of low-rate digital data
and d.c. voltages. From the extracted information on the transceiver performance at
different intermediate stages, catastrophic and parametric faults can be detected and
located.
Throughout the chapter, special emphasis is placed on the description of transistor-level design techniques to implement embedded test devices that attain robustness, transparency to CUT operation and minimum area overhead.
To address the problem of testing a system with a high degree of complexity
such as a modern transceiver, three separate tasks are defined: (i) test of the analogue
baseband components which involve frequencies in the range of megahertz; (ii) test
of the RF front-end section at frequencies in the range of gigahertz; and (iii) test of
the transceiver as a full system.
Section 10.2 deals with the first task. A robust method for built-in magnitude and
phase-response measurements based on an analogue multiplier is discussed. Based on
this technique, a complete frequency-response characterization system (FRCS) [11]
for analogue baseband components is described. Its performance is demonstrated
through an integrated prototype in which the gain and phase shift of two analogue
filters are measured at different frequencies up to 130 MHz.
One of the most difficult challenges in the implementation of BIST techniques for RF integrated circuits (ICs) is to observe high-frequency signal paths without affecting the performance of the RF CUT. As a solution to this problem, a very compact CMOS RF amplitude detector [12] and a methodology for its use in the built-in measurement of the gain and 1 dB compression point of RF circuits are described in Section 10.3. Measurement results for an integrated detector operating in the range from 900 MHz to 2.4 GHz are discussed, including its application in the on-chip test of a 1.6 GHz low-noise amplifier (LNA).
Finally, to address the third task, Section 10.4 presents an overall testing strategy for an integrated wireless transceiver that combines the use of the two above-mentioned techniques with a switched loop-back architecture [10]. The capabilities of this synergetic testing scheme are illustrated through its application on a 2.4 GHz transceiver macromodel.

Figure 10.2 Conceptual description of a linear system characterization scheme (a signal generator applies A cos(ω0 t) to the CUT with transfer function H(ω0); the output B cos(ω0 t + φ) is processed together with the input by an APD, giving |H(ω)| = B/A and ∠H(ω) = φ)

10.2 Frequency-response test system for analogue baseband circuits

A general analogue system, such as a line driver, equalizer or the baseband chain in a transceiver, consists of a cascade of building blocks or stages. At a given frequency ω0, each stage is expected to show a gain or loss and a delay (phase shift) within certain specifications; these characteristics can be described by a frequency-response function H(ω0). An effective way to detect and locate catastrophic and parametric faults in these analogue systems is to test the frequency response H(ω) of each of their building blocks. A few BIST implementations for frequency-response characterization of analogue circuits have been developed recently using sigma-delta [13], switched-capacitor [14] and direct-digital-synthesis [15] techniques. These test systems show different trade-offs in terms of complexity and performance. Even though their frequency-response test capabilities have been demonstrated only in the range from kilohertz to a few megahertz, implementations in current deep-submicron technologies may extend their frequency of operation.
This section describes an integrated FRCS that enables the test of the magnitude and phase frequency responses of a CUT through d.c. measurements. Robust analogue circuits implement the system, which attains a frequency-response measurement range of hundreds of megahertz, suitable for contemporary wireless analogue baseband circuits.

10.2.1 Principle of operation

At a given test frequency (ω0), the transfer function H(ω0) of a CUT can be obtained by comparing the amplitude and phase between the signals at its input and output. By implementing a signal generator (tunable over the bandwidth of interest for the characterization) and an amplitude-and-phase detector (APD), a FRCS can be obtained [16] as shown in Figure 10.2.
Figure 10.3 presents a block diagram of an effective APD. An analogue multiplier sequentially performs three multiplications between the input and output signals from the CUT. For each operation, a d.c. voltage and a frequency component at 2ω0 are generated; the latter is suppressed by a lowpass filter (LPF).



Figure 10.3 Operation of the APD (in step 1 the multiplier forms the product of A cos(ω0 t) with itself, in step 2 the product of B cos(ω0 t + φ) with A cos(ω0 t), and in step 3 the product of B cos(ω0 t + φ) with itself; an LPF with ωc << 2ω0 extracts the d.c. outputs X = K A²/2, Y = K A B cos(φ)/2 and Z = K B²/2, which are passed to the ADC)

The following three d.c. voltages are obtained:

$$X = K\,\frac{A^2}{2} \qquad (10.1)$$

$$Y = \frac{1}{2}\,K\,A\,B\cos(\phi) \qquad (10.2)$$

$$Z = K\,\frac{B^2}{2} \qquad (10.3)$$

where K is the gain of the multiplier, A and B are the amplitudes of the signals at the input and output of the CUT, respectively, and φ is the phase shift introduced by the CUT at ω0. From these d.c. outputs, a low-cost ATE can evaluate the absolute value of the phase (|φ|) and the gain (B/A) responses of the CUT at ω0 by performing the following simple operations:

$$|\phi| = \cos^{-1}\!\left(\frac{Y}{\sqrt{X\,Z}}\right) \qquad (10.4)$$

$$\frac{B}{A} = \sqrt{\frac{Z}{X}} \qquad (10.5)$$
It is important to note that these operations do not imply a need for sophisticated off-chip equipment. Various inexpensive modern 8-bit microcontrollers have the capability of working with trigonometric functions and other mathematical operations.
From Equations (10.4) and (10.5) note that for the computation of the parameters of interest (B/A and |φ|), neither the amplitude of the signal generator (A) nor the gain of the multiplier (K) needs to be set or known a priori. Hence, these parameters do not require accurate control. If the cut-off frequency (ωc) of the LPF is small enough, its variations will have a negligible effect on the accuracy of the measurements.
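A minimal Python sketch of the ATE-side computation follows; the three d.c. readings and the static offset are hypothetical values.

    import math

    def frcs_gain_phase(x, y, z, offset=0.0):
        # Subtract any static multiplier offset (measured with no signal
        # applied), then apply Equations (10.4) and (10.5).
        x, y, z = x - offset, y - offset, z - offset
        gain = math.sqrt(z / x)                                    # Eq. (10.5)
        phase_deg = math.degrees(math.acos(y / math.sqrt(x * z)))  # Eq. (10.4)
        return gain, phase_deg

    # Hypothetical d.c. outputs (volts):
    print(frcs_gain_phase(x=0.200, y=0.070, z=0.100))  # ~0.707, ~60.3 degrees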

Table 10.1 Test variables

F        Frequency of the signal applied to the CUT
A        Amplitude of the signal applied to the CUT
B        Amplitude of the signal at the output of the CUT
MAG      Magnitude response of the CUT (B/A) at F
PHI      Phase response of the CUT at F
d.c.1    d.c. voltage proportional to A²/2
d.c.2    d.c. voltage proportional to B²/2
d.c.3    d.c. voltage proportional to A·B·cos(PHI)

Moreover, any static d.c. offset that the multiplier may have can be measured when
no signal is present and then cancelled before the computations. In summary, this
technique for the measurement of magnitude and phase responses is inherently robust
to the effect that process variations can have on the main performance characteristics
of the building blocks, which makes it suitable for BIST applications.
The effect of the spectral content of the test signal is now analysed. Let HDi be the relative voltage amplitude of the ith harmonic component (i = 2, 3, ..., n) with respect to the amplitude A of the fundamental test tone. Under the pessimistic assumption that the CUT does not introduce any attenuation or phase shift to either of these frequency components, the d.c. error voltage (E) introduced by the harmonic distortion components to each of the voltages X, Y and Z is given by

$$E = K\,\frac{A^2}{2}\sum_{i=2}^{n}(HD_i)^2 = K\,\frac{A^2}{2}\,\mathrm{THD} = X\cdot\mathrm{THD} \qquad (10.6)$$

where THD is the total harmonic distortion of the signal generator, given by the ratio of the total power of the harmonics over the power of the fundamental tone. If the harmonic amplitudes total as much as 0.1 (10 per cent) of the fundamental, the corresponding power ratio is its square, so even in this pessimistic scenario E would be equivalent to only 0.01 (1 per cent) of X. This tolerance to harmonic components is an important advantage since it eliminates the need for a low-distortion sinusoidal signal generator.
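The bound is easy to check numerically; in this Python sketch the harmonic amplitudes are hypothetical and sum to roughly 10 per cent of the fundamental in amplitude.

    def worst_case_dc_error(x, harmonic_amplitudes):
        # Eq. (10.6): E = X * sum of (HD_i)^2, the pessimistic d.c. error
        # due to harmonic content, relative to the measured voltage X.
        return x * sum(hd ** 2 for hd in harmonic_amplitudes)

    # Harmonics at 8, 5 and 3 per cent of the fundamental amplitude:
    print(worst_case_dc_error(x=1.0, harmonic_amplitudes=[0.08, 0.05, 0.03]))
    # ~0.0098, i.e. about 1 per cent of X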

10.2.2 Testing methodology

A procedure for the automated test of a CUT using the described frequency-response
measurement technique is described next.
The control and output variables involved in a test process using the phase and
amplitude detector are summarized in Table 10.1.
From the specifications of the CUT, a set of N test frequencies [F1 F2 . . . FN] is defined. Through adequate fault modelling, the smallest N to attain the desired fault coverage can be found. Even though the amplitude and phase detection is independent of the amplitude of the on-chip signal generator, an appropriate amplitude [Ai] for the input signal (which does not necessarily have to be different for each frequency) should be chosen to avoid saturation in the CUT. As described in the



Figure 10.4 CUT test procedure using the FRCS (flowchart: starting at i = 1, set the test vector i: [Fi, Ai]; measure d.c.1i, d.c.2i and d.c.3i; compute MAGi and PHIi; if the output vector [MAGi PHIi] does not meet specification the CUT fails, otherwise increment i and repeat until i = N, at which point the CUT passes)

previous section, MAG and PHI can be computed from the outputs of the phase and
amplitude detector (d.c.1, d.c.2 and d.c.3). From the expected magnitude and phase
responses of the CUT, each test vector [Fi Ai ] is associated with acceptable boundaries
for the output vector [MAGi, PHIi]. Using the described test parameters, the algorithm
shown in Figure 10.4 can be employed for the efficient functional verification of the
CUT. Note that the measurement of d.c.1i serves also as a self-verification of the
entire system at the ith frequency, since it involves all of the FRCS components but
not the CUT.
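The flow of Figure 10.4 maps directly onto a short loop. In this Python sketch, measure and compute stand in for the ATE interface and for Equations (10.4) and (10.5) respectively; all names are illustrative rather than part of the published system.

    def frcs_pass_fail(test_vectors, limits, measure, compute):
        # test_vectors: list of (F_i, A_i) pairs; limits[i]: acceptable
        # bounds (mag_lo, mag_hi, phi_lo, phi_hi) for the ith output vector;
        # measure(F, A) returns the d.c. outputs (d.c.1, d.c.2, d.c.3);
        # compute(dc1, dc2, dc3) returns (MAG, PHI).
        for i, (f_i, a_i) in enumerate(test_vectors):
            dc1, dc2, dc3 = measure(f_i, a_i)
            mag, phi = compute(dc1, dc2, dc3)
            mag_lo, mag_hi, phi_lo, phi_hi = limits[i]
            if not (mag_lo <= mag <= mag_hi and phi_lo <= phi <= phi_hi):
                return False  # CUT FAIL at the ith test frequency
        return True           # all N vectors within bounds: CUT PASS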

10.2.3 Implementation as a complete on-chip test system with a digital interface

Based on the described robust technique for phase and amplitude detection, a complete FRCS can be implemented [11]. Figure 10.5 presents the system architecture. It consists of a frequency synthesizer, an APD and a demultiplexer that serves as an interface between different nodes of the CUT and the APD. The circuit-level design of each building block is described next. As shown in Figure 10.5, an ADC can also be added at the output of the APD to make the FRCS interface fully digital. Since only a d.c.
Figure 10.5 Architecture of an integrated FRCS (the frequency synthesizer applies A cos(ω0 t) to a multi-stage analogue CUT with stage responses H1(ω) to Hn(ω); an n + 1 to 2 demultiplexer with node selection feeds the APD, whose d.c. output is digitized and sent as test data to the digital ATE for evaluation of the magnitude and phase response and for fault detection and diagnosis)

voltage needs to be digitized, the ADC design can be robust and compact, and some
sample implementations are presented in References 11 and 17.
Figure 10.6(a) shows a block diagram of the analogue multiplier employed for the APD. The complete transistor-level schematic is depicted in Figure 10.6(b). The core of the four-quadrant multiplier (transistors M1 and M2) is based on the one in Figure 7(c) in Reference 18. The inputs are the differential voltages VA and VB and the output is the d.c. voltage VOUT. Transistors M1 operate in the triode region; the multiplication operation takes place between their gate-to-source and drain-to-source voltages and the result is the current IOUT at the drain of M2.
Transistors M2 act as source followers. Ideally, the voltage at the source of transistors M2 should be just a d.c.-shifted version of the voltage signal applied to their gates (B+ and B−). However, the drain current of transistors M1 and M2 is the result of the multiplication and its variations affect the operation of the source followers. This results in an undesired phase shift on the voltage signals applied to the drain of transistors M1, which significantly degrades the phase detection accuracy of the multiplier. To overcome this problem, transistors M3 (which operate in the saturation region) are added to the multiplier core. These additional transistors provide a fixed d.c. current to the source followers, improving their transconductance and reducing their sensitivity to the a.c. current variations. Simulation results show that this design feature reduces the error in phase detection from more than 10° to less than 1°.
The output currents from four single-ended multiplication branches are combined to form a four-quadrant multiplier that is followed by an LPF. C1 and M4 (a diode-connected transistor) implement the dominant pole of the LPF. M6 and M7 perform a differential to single-ended conversion. The second pole of the LPF is implemented by the capacitor C2 and the passive resistor R1. The d.c. operating point of VOUT can be set through VBO and hence no other active circuitry is required to set this voltage.
An important component of this system is the interface between the CUT and the
APD. As shown in Figure 10.5, through a demultiplexer, the frequency response at



Figure 10.6 Analogue multiplier for the APD: (a) block diagram and (b) circuit schematic

different stages of the CUT can be characterized. In addition, the multiplexer should present a high input impedance (so that the performance of the CUT is not affected) and provide the appropriate d.c. bias voltages to the phase and amplitude detector. A circuit that complies with these functions is depicted in Figure 10.7.
The differential pair with active load composed of transistors M11 forms a buffer with unity gain. The outputs of the buffers (differential voltages VA and VB) are connected to the corresponding inputs of the APD. The d.c. operating point of the output is easily set through the voltages VBA and VBB. The input capacitance as seen from the input of the multiplexer's switches is approximately 50 fF in a 0.35 µm CMOS implementation. This is an insignificant loading in the range of hundreds of megahertz.

Figure 10.7 Multiplexer/buffer circuit schematic

Figure 10.8 PLL-based frequency synthesizer for the FRCS: (a) implemented circuit; (b) alternate implementation with off-chip PLL components and the loop closed externally

The frequency synthesizer for the generation of the input signal to the CUT is
designed as a type-II phase-locked loop (PLL) with a 7-bit programmable counter,
spanning a range of 128 MHz in steps of 1 MHz. The block diagram is shown in
Figure 10.8(a).
One of the main advantages of using a PLL in this application is that to generate
the internal stimulus, only a relatively low-frequency signal (fREF = 1 MHz in this


Figure 10.9 Multivibrator-based VCO schematic

case) is required as a reference. In contrast, a sigma-delta-based signal generator requires a clock that runs at a significantly higher speed than the generated signal, although high spectral purity can be obtained [13]. The loop filter of the PLL can be implemented with off-chip elements to reduce the silicon area. These passive components can be easily incorporated into the test board of the chip. Moreover, in a system-on-chip implementation, the reference signal can be obtained from an internal clock. An alternate synthesizer implementation is shown in Figure 10.8(b); in this case the loop is closed externally. The ATE receives the output of the divider and sets the control voltage of the voltage-controlled oscillator (VCO). This approach uses the same number of pins (a 1 MHz clock and a d.c. voltage) but further reduces the amount of on-chip components and enables an independent verification of the loop operation.
The PLL's VCO defines the frequency range of the FRCS, and therefore it must have a wide tuning range. In addition, it should be compact and robust. A suitable implementation is shown in Figure 10.9.
The VCO design is based on a multivibrator [19] which employs three pairs of transistors (M11 to M13) and one capacitor (C1). The control voltage VC is applied through a source follower (not shown for simplicity) to have a suitable voltage range (between 1.5 and 2 V) for the output of the charge pump of the PLL. This oscillator shows an exponential frequency versus voltage characteristic, which results in a frequency tuning range of more than two decades. Simulation results with process corners show that the VCO can have a tuning range from at least 0.8 to 130 MHz. Moreover, from simulations it was also observed that, if discrete tuning is introduced by implementing C1 as a bank of two capacitors, the tuning range can be extended to 0.1-180 MHz. A differential, tunable first-order LPF is added to the VCO. The LPF is formed by transistors M14 and M15 and capacitor C2. The VCO frequency and

Figure 10.10 FRCS chip microphotograph (showing the amplitude and phase detector, the frequency synthesizer, CUT 1: 11 MHz BPF, CUT 2: 20 MHz LPF, and the algorithmic ADC)

LPF are tuned simultaneously through VC to keep the oscillation amplitude relatively constant over the entire frequency tuning range (within 3 dB of variation) and a total harmonic distortion (THD) of less than 10 per cent.

10.2.4 Experimental evaluation of the FRCS

The FRCS is implemented in a standard CMOS 0.35 µm technology. Two different fourth-order OTA-C filters are included as CUTs: a bandpass filter (BPF) with a centre frequency of 11 MHz and a LPF with a cut-off frequency of 20 MHz. These filter characteristics are common in the baseband section of communication systems. The chip microphotograph is shown in Figure 10.10. The total area of the testing circuitry (frequency synthesizer, analogue-to-digital converter (ADC) and APD) is 0.3 mm². The system operates from a 3.3 V supply.
The performance of the APD is evaluated using two phase-locked external signal generators for frequencies up to 120 MHz. The relative difference between the amplitudes of two signals can be measured in a range of 10-450 mV (33 dB) with an error of less than 1 dB. That is, the 1 dB compression point of the detection characteristic is at an input amplitude of 450 mV. The relative phase between the two input signals is swept across 360° and the phase measurement performed with the APD is compared against the measurement with a digital oscilloscope. Figure 10.11 shows the resultant phase error as a function of the phase difference at 80 MHz. In general, up to 120 MHz the phase difference can be measured with an error of less than 1° in 95 per cent of the overall 360° range, with a peak error of less than 5°. The complete APD with the input demultiplexer occupies an area of 310 µm × 180 µm and draws 3 mA.



Figure 10.11 Measured error performance of the phase detector as compared with an external measurement using a digital oscilloscope at 80 MHz (error in phase measurement (deg) against phase (deg) from −180° to 180°)

Experimental results for the VCO of Figure 10.9 are shown in Figure 10.12. The output frequency varies from 0.5 to 140 MHz and the amplitude variations in this range are within 3.5 dB, which is in good agreement with the design goals.
Figure 10.13 presents the output spectrum of the VCO towards the low end of the tuning range (at around 16 MHz), where a higher THD is observed. Throughout the complete tuning range, the harmonic components are always below −20 dBc. According to the analysis presented in Section 10.2.1, this harmonic distortion would cause relative errors in the magnitude and phase measurements of less than 1 per cent.
The complete frequency synthesizer is operated with a reference frequency of 1 MHz and, through the 7-bit programmable counter, covers a range from 1 to 128 MHz in steps of 1 MHz. The measured reference spurs are below −36 dBc. The area of the entire synthesizer is 380 µm × 390 µm and the current consumption changes from 1.5 to 4 mA as the output frequency increases.
Figure 10.14 describes the experimental set-up for the evaluation of the entire
system in the test of the integrated CUTs. Each fourth-order filter consists of two
OTA-C biquads and each biquad has two nodes of interest, namely bandpass (BP)
node and lowpass (LP) node. Nodes 2 and 4 (biquad outputs) are BP nodes in the 11
MHz BPF (CUT 1) and LP nodes in the 20 MHz LPF (CUT 2). Buffers are added
to the output node of each biquad so that their frequency response can be measured
with an external network analyser.
The results of the operation of the entire FRCS in the magnitude response characterization of the 11 MHz BPF at its two BP outputs are shown in Figure 10.15. These

Figure 10.12 VCO measurement results: (a) tuning range (oscillation frequency (MHz) against control voltage (V)); (b) peak amplitude (dBV) against frequency (MHz)

results are compared against the characterization performed with a commercial network analyser. In this measurement, the dynamic range of the system is limited to about 21 dB due to the 7-bit resolution of the ADC. The phase response of the filter as measured by the FRCS is shown in Figure 10.16.
The corresponding results for the characterization of the 20 MHz LPF are presented in Figures 10.17 and 10.18. In this case the d.c. output of the APD is measured through a data acquisition card with an accuracy of 10 bits. As can be observed,
[Figure 10.13: Output spectrum of the on-chip signal generator. Marker 1: −21.73 dBm at 15.87 MHz; the second harmonic at 32.10 MHz is 20.48 dB lower. RBW/VBW 50 kHz, sweep 60 ms, reference level 10 dBm.]

[Figure 10.14: Experimental set-up for the evaluation of the FRCS. The PLL output drives the CUT (a fourth-order Gm-C filter formed by two biquads with observation nodes 1 to 4); on-chip buffers and baluns connect the nodes both to a commercial network analyser and, through the MUX inputs, to the integrated frequency response characterization system.]

[Figure 10.15: Magnitude response test of the 11 MHz BPF, comparing the network analyser with the proposed system (5 dB/div). (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter.]

As can be observed, the APD is able to track the frequency response of the filter and perform phase
measurements over a dynamic range of 30 dB up to 130 MHz.
On average, in the test of both CUTs, the magnitude response measured by the off-chip equipment is about 2 dB below the estimation of the FRCS. This discrepancy
is in good agreement with the simulated loss of the employed buffers and baluns.
Table 10.2 presents the performance summary of this integrated test system.

[Figure 10.16: Phase response test of the 11 MHz BPF: phase (deg) versus frequency (MHz). (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter.]

10.3 CMOS amplitude detector for on-chip testing of RF circuits

RF amplitude detectors and RF power detectors generate a d.c. voltage proportional
to the amplitude and to the power of an RF signal, respectively. These testing devices have
been employed as key components of low-cost RF tester architectures [2, 8]. For the
embedded test of the RF blocks in an integrated transceiver, including on-chip buffers
to monitor the RF signal paths with an external spectrum or network analyser is not
a practical test strategy: the cost of the required equipment and the area overhead due
to the extra circuitry and output pads would be unaffordable.

[Figure 10.17: Magnitude response test of the 20 MHz LPF, comparing the network analyser with the proposed system (5 dB/div). (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter.]

Therefore, it is desirable
to have an on-chip RF amplitude detector (RFD) that monitors the voltage magnitude
of RF signals through d.c. measurements. Different implementations of RFDs using
bipolar transistors on a SiGe process technology have been reported recently [9, 20].
The desired characteristics of a practical RFD are: (i) a high input impedance
at the testing frequency, to prevent loading and performance degradation of the RF
CUT; (ii) a minimum area overhead; and (iii) a dynamic range suitable for the target
building blocks.

[Figure 10.18: Phase response test of the 20 MHz LPF: phase (deg) versus frequency (MHz). (a) Results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter.]

In addition, the measurement method should be robust to the effect
that process variations may have on the detector's response. Other figures of merit,
such as power consumption and temperature stability, are not a priority, since the RFD
would not be used during the normal operation of the system under test.

Table 10.2 Performance summary for the FRCS

Technology                                            0.35 μm CMOS
Dynamic range for measurement of magnitude response   30 dB
Resolution for phase measurements                     1°
Frequency range                                       1–130 MHz
Digital output resolution                             7 bits
Supply                                                3.3 V
Power consumption (at 130 MHz)                        20 mW
Area                                                  0.3 mm²

As will be described next, major performance metrics of integrated RF circuits,
such as gain and 1 dB compression point, can be tested in a robust and accurate way by
using simple RF detection devices that do not need to have a linear or predetermined
amplitude-to-d.c.-voltage relationship.

10.3.1 Gain and 1-dB compression point measurement with amplitude detectors

The following example illustrates an effective technique to measure the gain compression of an integrated RF device [21]. An RFD is used at the input of the RF CUT
and another at the output, as shown in Figure 10.19. A macromodel is built to simulate this test set-up. The model of the RFD consists of an amplifier with high input
impedance followed by a rectifier and a second-order LPF (the design of this detector
architecture will be explained in the next section). Two different LNA models are
considered as the CUT. LNA1 has a gain of 10 dB, an output 1 dB compression point of
3 dBm, an output IP3 of 7 dBm and a noise figure of 4 dB. LNA2 represents a faulty
LNA with a gain of 8 dB, an output 1 dB compression point of 5 dBm and the same IP3
and noise figure (NF) as LNA1. The amplitude of the sinusoidal signal at the input
of the LNA (and the first detector) is swept from −20 to 0 dBm in steps of 2 dB.
Figure 10.19 shows the simulation results. For a given input amplitude, the gain of
the LNA can be measured as the distance in decibels from the response of the detector
at the output to the reference response (the output of the detector at the input). As can
be observed, the input amplitude (and corresponding output amplitude) at which the
gain decreases by 1 dB can be easily extrapolated.
Note that, with the use of the reference response, the absolute gain and the
non-linearity of the RFD's RF-to-d.c. conversion characteristic do not affect the
measurement. In this way, process variations do not affect the measurement accuracy
significantly; the mismatch between the gains of the different detectors would be the
only remaining source of error. It is also important to mention that the d.c. offset that
may be present at the output of the detectors is not a matter of concern, since it can be
measured (when no signal is present at the input) before the characterization process.
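
The decibel arithmetic of this two-detector comparison is easy to script. The sketch below is only an illustration of the principle under assumed models (the detector characteristic and the CUT gain and compression values are invented, not those of the macromodel in Reference 21): the same non-linear detector curve is applied at the input and at the output, the gain is recovered as the decibel distance between the two curves, and the 1 dB compression point is read off where that distance drops by 1 dB.

```python
import numpy as np

def det_dc(p_dbm):
    """Hypothetical monotonic but non-linear RF-to-d.c. characteristic,
    shared by both detectors; its exact shape cancels from the result."""
    return 0.65 + 0.05 * p_dbm + 0.001 * p_dbm ** 2   # volts

def cut_out(p_in, gain_db=10.0, p1db_in=-8.0):
    """Soft-compression CUT model (assumed values): output power in dBm
    with 1 dB of gain droop at p_in = p1db_in."""
    psat = p1db_in + gain_db + 5.87   # makes the droop exactly 1 dB there
    p_lin = p_in + gain_db
    return p_lin - 10 * np.log10(1 + 10 ** ((p_lin - psat) / 10))

p_cal = np.arange(-20.0, 12.5, 0.5)   # calibration sweep of the detector
v_cal = det_dc(p_cal)                 # reference (input-side) response

p_in = np.arange(-20.0, 0.5, 0.5)     # swept CUT input, dBm
v_out = det_dc(cut_out(p_in))         # detector at the CUT output

# The output-side curve is the reference curve shifted by the gain, so the
# gain is recovered by mapping each output voltage back onto the reference.
gain = np.interp(v_out, v_cal, p_cal) - p_in
g_ss = gain[0]                                        # small-signal gain
p1db = np.interp(g_ss - 1.0, gain[::-1], p_in[::-1])  # 1 dB droop point
print(f"gain = {g_ss:.2f} dB, input P1dB = {p1db:.2f} dBm")
```

Because the gain is obtained by mapping one curve onto the other, the (shared) detector non-linearity cancels, which is exactly the robustness property argued above.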

[Figure 10.19: Example of on-chip RF measurements using amplitude detectors. The test set-up (RF detector 1 at the CUT input, RF detector 2 at its output) and the d.c. outputs of both detectors versus input amplitude; the LNA gain is read as the decibel distance between the output-detector curves (nominal and out-of-spec CUT) and the reference curve, and the output 1 dB compression point is identified where this distance drops by 1 dB.]

[Figure 10.20: Block diagram of the RF root-mean-square (RMS) detector: a pre-rectification input stage presenting high impedance at RF and performing V-I conversion with small-signal amplification, a class AB full-wave rectifier, and a post-rectification stage with lowpass filtering and I-V conversion producing the d.c. output.]

10.3.2 CMOS RF amplitude detector design

The design of a practical CMOS RFD [12] is now described. It consists of three stages;
a conceptual block diagram is depicted in Figure 10.20. The first stage presents a high
impedance to the RF signal path, converts the sensed voltage to a current signal and
amplifies it. The second stage is a full-wave rectifier. The rectified waveform is then
filtered in the last stage to obtain its average value. The output is therefore a d.c.
voltage proportional to the amplitude of the RF signal at the input of the detector.
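
A behavioural model of these three stages can be useful for choosing measurement ranges before committing to silicon. The following sketch is a minimal idealization, with all gains, the filter corner frequency and the signal level assumed for illustration rather than taken from the reported design:

```python
import numpy as np

fs = 40e9                                    # simulation sample rate
t = np.arange(0, 200e-9, 1 / fs)
rf = 0.1 * np.sin(2 * np.pi * 2.4e9 * t)     # 100 mV RF input (assumed)

gm, r_iv = 20e-3, 500.0        # assumed V-I gain and I-V resistance
i_amp = gm * rf                # stage 1: V-I conversion and amplification
i_rect = np.abs(i_amp)         # stage 2: ideal full-wave rectification

# Stage 3: single-pole RC lowpass (discrete-time) extracts the average.
fc = 20e6
alpha = 1 - np.exp(-2 * np.pi * fc / fs)
v_out = np.zeros_like(t)
for n in range(1, len(t)):
    v_out[n] = v_out[n - 1] + alpha * (r_iv * i_rect[n] - v_out[n - 1])

# The average of a full-wave rectified sine is (2/pi) times its peak.
print(f"settled d.c. out: {v_out[-1]*1e3:.1f} mV "
      f"(ideal {2/np.pi * gm * 0.1 * r_iv * 1e3:.1f} mV)")
```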
The circuit schematic of the RFD is shown in Figure 10.21. The
d.c. current sources IB1 to IB5 are implemented using CMOS current mirrors; their
transistor-level schematics are omitted for simplicity. Transistor M1 senses the voltage at the RF node to be observed and converts it into a current. Its size is chosen to
be small in order to present minimum parasitic loading; simulation results show that
the input impedance seen from the gate of M1 is approximately equivalent to a 13 fF
capacitance.
[Figure 10.21: Circuit schematic of the RF RMS detector: the input stage (M1 with R1, C1 and the wideband current mirrors M2 to M5 with R2), the class AB rectifier (M6 to M15 with bias sources IB3 and IB4) and the output I-V conversion and LPF (R3, R4, C3) producing DCOUT.]

The operating point of M1 is set through the bias current IB1; in this
way, its transconductance does not depend directly on the d.c. voltage at the observed
node. Capacitor C1 is employed to minimize the source degeneration impedance of
M1 at RF frequencies. This capacitance also drastically suppresses the noise
due to IB1 and improves the sensitivity of the detector.
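
To put the 13 fF figure in context, at an assumed test frequency of 2.4 GHz the corresponding capacitive reactance is

|Z_{in}| \approx \frac{1}{2\pi f C} = \frac{1}{2\pi \times 2.4\times 10^{9} \times 13\times 10^{-15}} \approx 5.1\,\mathrm{k\Omega},

which is large compared with the 50 Ω impedance levels common at RF nodes, so the detector leaves the observed node essentially unloaded.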
Due to its small size, the input transistor is not able to generate enough current for
the rectification. To overcome this limitation, current amplification is performed by
the wideband current mirrors implemented by transistors M2 to M5. Using the technique
introduced in Reference 22, resistors R1 and R2 are placed at the current mirrors to
extend their effective frequency of amplification up to 2.4 GHz. The d.c. current
source IB2 is added to prevent unnecessary d.c. current amplification by the NMOS
current mirror (M4, M5).
The output current from the input stage (IRF) is a.c. coupled to the rectifier through
capacitor C2. The class AB rectifier is formed by transistors M6 to M15. Transistors M7
and M8 are biased in weak inversion. Their operating point is controlled by the current
IB4 through the diode-connected transistors M10 and M11. When IRF is positive, it
passes through M7 while M8 is driven into the cut-off region. The a.c. current received
by M7 is mirrored to the output of the rectifier by M6 and M12. On the other hand,
during the negative half-cycle of IRF, M8 is activated and M7 is in the cut-off region.
During this cycle, the a.c. current is mirrored to M13 and then inverted by M14 and
M15. Due to the additional inversion, both half-cycles of the RF current are added
with the same polarity and hence full-wave rectification is accomplished.
The additional delay introduced by the extra inversion (performed on the negative
half-cycle) is not a major concern, since it is only the average (d.c.) value of the sum of
both currents that is important.

[Figure 10.22: Microphotograph of the IC prototype, showing the stand-alone RF amplitude detector and the LNA with detectors at its input and output.]
The resultant rectified current is converted to voltage
by R3. Finally, the passive LPF formed by R4 and C3 extracts the d.c. component. This
passive pole also sets the settling time of the detector, which is designed to be of the
order of tens of nanoseconds. AGND is set to 1.65 V (VDD = 3.3 V) to define the
d.c. operating point of the output of the detector (DCOUT) as well as the d.c. voltage
at the sources of M10 and M11. It is important to note that all of the signal amplification
and rectification in the detector is done in current mode and all of the high-frequency
internal nodes are at low impedance. These characteristics prevent the occurrence of
large voltage swings and minimize the injection of substrate noise.
From simulation and experimental results, the ratio of the maximum and minimum
signal amplitudes that can be detected by the rectifying circuit, that is, its dynamic
range, is 30 dB. The sensitivity of the detector is mainly controlled by IB4: as this
current is reduced, the rectifier becomes sensitive to smaller signal amplitudes. On the
other hand, if IB4 is increased, the minimum detectable signal becomes larger but the
compression point of the rectifier also moves to higher amplitudes. In this way, the
useful range of the detector can be set to higher or lower signal levels according to the
expected conditions of the RF node to be observed.

10.3.3 Experimental results

An IC prototype was fabricated in the TSMC 0.35 μm CMOS process and measured
in a QFN package. The chip microphotograph is shown in Figure 10.22. The IC
includes a stand-alone RFD and an LNA with detectors at its input and output to
evaluate the on-chip testing capabilities of the device. The RFD occupies an area of
only 0.031 mm².
The response of the detector is evaluated at different frequencies; the experimental results are shown in Figure 10.23. For these measurements, an external RF
signal generator was employed and input matching to 50 Ω was ensured through
off-chip components.

[Figure 10.23: Measured response of the proposed RF amplitude detector at different frequencies (0.9, 1.2, 1.6, 1.9 and 2.4 GHz): d.c. output voltage versus input power (dBm).]

Over a 1.5 GHz range (from 0.9 to 2.4 GHz) the detector shows
a conversion gain of approximately 50 mV/dBm over a dynamic range of 30 dB. The
minimum detectable signal is around −35 dBm. The wideband nature of the detector's response is an important advantage for test purposes: the device can be used
to monitor signal amplitudes at different points of an RF system even if they have
different frequency content (e.g., in a multi-standard or a dual-conversion transceiver
architecture), without any further tuning of the design. At 400 MHz and 2.8 GHz the
measured dynamic range is still greater than 20 dB.
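
Within the linear portion of the characteristic, the 50 mV/dBm conversion gain lets the tester translate the measured d.c. voltage straight into input power. A minimal sketch of this conversion (the intercept is an assumed calibration value, not a measured one):

```python
CONV_GAIN = 0.050      # V per dBm, from the measured response
V_INTERCEPT = 1.75     # assumed d.c. output at 0 dBm (calibration point)

def input_power_dbm(v_dc):
    """Estimate RF input power from the detector d.c. output (linear region)."""
    return (v_dc - V_INTERCEPT) / CONV_GAIN

print(input_power_dbm(1.0))   # -> -15.0 dBm for the assumed calibration
```

In practice the intercept would be obtained per chip with a single calibration measurement, which also removes any d.c. offset, as discussed in Section 10.3.1.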
Table 10.3 presents the performance summary for the RF amplitude detector. It is
worth mentioning that the fast settling time of the detector allows tens of
measurements (e.g., varying the input power or frequency) to be performed in just a
few microseconds.
The response of the proposed detector is not perfectly linear with respect to the
input amplitude over the entire dynamic range. However, as discussed previously, this
is not a limitation for on-chip test purposes. To evaluate the effectiveness of the
proposed RF test device in measuring the gain and 1 dB compression point of an RF
CUT, an LNA is integrated in the prototype IC. The LNA is a standard single-ended
inductively degenerated cascode amplifier [23] with a gain of 10 dB at 1.6 GHz. The
degeneration and load inductors are implemented on-chip, while the gate inductance is
provided by the bonding wire that connects the LNA input to the package pad. A buffer
is included at the output of the LNA to measure its performance with off-chip
equipment. The buffer is a simple common-source stage; resistive source degeneration
is employed in the buffer to attain a 1 dB compression point higher than that of
the LNA.

Table 10.3 RF amplitude detector performance summary

CMOS process                   0.35 μm
Area                           0.031 mm²
Conversion gain                50 mV/dBm
Dynamic range                  >30 dB
Measured operating frequency   0.9–2.4 GHz
Supply voltage                 3.3 V
Power consumption              10 mW
Settling time                  <40 ns

[Figure 10.24: Experimental set-up for the on-chip characterization of the LNA. An RF signal generator drives the LNA on the IC prototype; RFD 1 and RFD 2 monitor the LNA input and output (d.c. OUT 1 and d.c. OUT 2), while a buffer drives an external spectrum analyser.]

Post-layout simulation results show that the buffer has a loss of 10 dB at 1.6 GHz
while driving the 50 Ω load of a spectrum analyser through the package parasitics.
The test set-up employed for the characterization of the LNA is shown in
Figure 10.24. The gain of the LNA is measured with the RFDs and also with external instrumentation for different power levels and at different frequencies. Some
significant examples of the performed measurements are presented next.
Figure 10.25 shows the measured d.c. voltage at the output of each detector for
different input power levels at 1.6 GHz. Employing the discussed technique, the LNA
gain is measured as 9.5 dB and the input 1 dB compression point as −1 dBm. It
is worth mentioning that, for the on-chip test of the LNA, the rectifier current (IB4)
used in the RF detectors is higher than the one used in the measurements presented
in Figure 10.23.

[Figure 10.25: Measured response of the RFDs at the input and output of the on-chip LNA at 1.6 GHz: detector d.c. outputs versus input power, from which the LNA gain (9.5 dB in the linear region, 8.5 dB under compression) and the input 1 dB compression point are extracted.]

This shows how the useful range of the rectifier can be adjusted to test
an RF CUT at the signal levels of interest (e.g., around its 1 dB compression point).
Figure 10.26 compares the LNA gain measured at 1.7 GHz with the
integrated detectors against the gain roll-off measured with an external
RF spectrum analyser. At low input power levels (below about −2 dBm) the estimated gain
appears lower, due to the reduced gain of the RFD at the input in this range.
From the experimental results obtained at different frequencies and power levels, it
is estimated that the practical accuracy of the method is ±1 dB, which is adequate for
multiple wafer-level and production test purposes.

10.4 Architecture for on-chip testing of wireless transceivers

This section presents an integrated testing strategy for wireless transceivers
based on the BIST techniques discussed in Sections 10.2 and 10.3, in combination
with a switched loop-back architecture.

10.4.1 Switched loop-back architecture

A loop-back connection between the transmitter and receiver chains is one of the
earliest strategies for testing the functionality of wireless and wire-line communication
systems [24, 25].
[Figure 10.26: Gain compression of the on-chip LNA at 1.7 GHz as measured by external equipment and the embedded detectors: LNA gain (dB) versus input power (dBm), comparing the normalized gain obtained with the RF signal generator and spectrum analyser against the gain estimated with the RFDs.]

It does not require an external stimulus and is effective for detecting
catastrophic faults in the complete signal path. Figure 10.27(a) depicts this testing
scheme for a transceiver architecture with direct up-conversion. In a complete realization the baseband sections include in-phase (I) and quadrature (Q) paths, but in
this block diagram only one path is shown for simplicity.
In the loop-back configuration, the baseband section of the transmitter generates
a tone or a modulated signal with a centre frequency fB . With the input from the
local oscillator (LO) at a frequency fRF the up-converter generates a tone at fB + fRF .
The loop-back connection must attenuate the output of the power amplifier (PA) to
make it suitable for the dynamic range of the LNA. After the down-conversion with
the same tone from the LO, the resultant signal at the receiver baseband is centred at
fB . The characteristics of the demodulated or digitized signal can be analysed by the
ATE to evaluate the performance of the transceiver. In this configuration, the range
of values that fB can take is limited by the transmitter baseband.
Recent radio implementations use transmitter architectures in which the modulation of the transmitted signal is performed directly on the VCO [26, 27], avoiding
the up-conversion. As shown in Figure 10.27(b), the direct application of loop-back
test is not practical in this kind of transceiver. However, introducing a switch in the
loop-back path can overcome this limitation. Figure 10.27(c) illustrates
the principle of operation of a switched loop-back technique applied to a transceiver
with direct VCO modulation.

[Figure 10.27: Loop-back architectures. (a) Standard technique in an up-conversion transmitter, (b) standard technique in a direct VCO modulation transmitter and (c) proposed switched configuration, in which an attenuator (ATTN) and a switch commutated at fSW are placed in the loop-back path between the PA output and the LNA input.]


If the signal in the loop-back path is switched at a frequency fSW, two additional
tones are created at frequencies fRF ± fSW. After mixing with fRF in the receiver,
both tones are down-converted to fSW. In this way, the frequency of the signal that
controls the switch determines the frequency of the signal at the baseband chain.
Conceptually, this is equivalent to introducing a mixer in the loop-back path; however,
a simple switch is a suitable frequency translation device in this application. As shown
in Figure 10.27(c), an important practical consideration is that in the off state the
switch must connect the input of the LNA to a 50 Ω resistor, and not directly to ground,
to preserve LNA stability.
The operation of the switch on the signal from the PA can be modelled as a
multiplication between the RF signal and a square wave toggling between zero and one.
Such a train of pulses can be described in the time domain as
P(t) = \sum_{n=0}^{\infty} K_n \cos(n\omega_{SW} t + \phi_n) = K_0 + K_1 \cos(\omega_{SW} t + \phi_1) + K_2 \cos(2\omega_{SW} t + \phi_2) + \cdots \qquad (10.7)

where \omega_{SW} = 2\pi f_{SW} and K_n, \phi_n are constants that define the amplitude and phase of each frequency component, respectively. The product of P(t) and the RF signal with amplitude A and frequency f_{RF} results in the switched signal S(t):

S(t) = \left[ \sum_{n=0}^{\infty} K_n \cos(n\omega_{SW} t + \phi_n) \right] A \cos(\omega_{RF} t)
     = \frac{A}{2} \Big[ 2K_0 \cos(\omega_{RF} t) + K_1 \cos((\omega_{RF} - \omega_{SW})t + \theta_1) + K_1 \cos((\omega_{RF} + \omega_{SW})t + \theta_2)
     + K_2 \cos((\omega_{RF} + 3\omega_{SW})t + \theta_3) + K_2 \cos((\omega_{RF} - 3\omega_{SW})t + \theta_4) + \cdots \Big] \qquad (10.8)

where \theta_1, \theta_2, \ldots, \theta_n are the phases corresponding to each of the new frequency components. Finally, after the second multiplication at the mixer of the receiver, the down-converted signal D(t) becomes

D(t) = \left[ \sum_{n=0}^{\infty} K_n \cos(n\omega_{SW} t + \phi_n) \right] A \cos(\omega_{RF} t) \, B \cos(\omega_{RF} t + \varphi)
     = C_0 + C_1 \cos(\omega_{SW} t + \psi_1) + C_2 \cos(3\omega_{SW} t + \psi_2) + C_3 \cos(5\omega_{SW} t + \psi_3) + \cdots
     + E_1 \cos((2\omega_{RF} + \omega_{SW})t + \xi_1) + E_1 \cos((2\omega_{RF} - \omega_{SW})t + \xi_2) + \cdots \qquad (10.9)

where \varphi, \psi_n and \xi_n are phase constants. The final amplitude of each frequency
component (C_n, E_n) depends on the amplitude B of the LO as well as on the conversion
gain of the mixer. The d.c. component C_0 is blocked by the d.c. offset cancellation
circuitry, and the frequency components located around 2f_{RF} will have a negligible
amplitude since the output of a down-conversion mixer shows a lowpass characteristic.

In addition, C_2, C_3, \ldots, C_n depend on the non-dominant frequency components of
S(t) and hence will be small in comparison with C_1.
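
The behaviour predicted by Equations (10.7)–(10.9) is easy to reproduce numerically. The sketch below uses scaled-down illustrative frequencies (not the actual RF and switching frequencies of a real transceiver): an RF tone is multiplied by a 0/1 square wave, mixed with the LO and inspected at baseband, where the dominant line appears at fSW with weaker components at its odd harmonics.

```python
import numpy as np

fs = 1e9                                 # sample rate (scaled-down example)
t = np.arange(0, 20e-6, 1 / fs)
f_rf, f_sw = 100e6, 1e6                  # illustrative, not 2.4 GHz / 4 MHz

rf = np.cos(2 * np.pi * f_rf * t)                     # PA output tone
p = (np.sign(np.sin(2 * np.pi * f_sw * t)) + 1) / 2   # 0/1 square wave P(t)
s = rf * p                                            # switched signal S(t)
d = s * np.cos(2 * np.pi * f_rf * t)                  # mixed with the LO

spec = np.abs(np.fft.rfft(d * np.hanning(len(d)))) / len(d)
f = np.fft.rfftfreq(len(d), 1 / fs)

for f0 in (0.0, 1e6, 3e6, 5e6):          # d.c. plus f_SW and odd harmonics
    k = np.argmin(np.abs(f - f0))
    print(f"{f[k]/1e6:4.1f} MHz: {spec[k]:.4f}")
# The 1 MHz (= f_SW) line dominates the a.c. content; the d.c. term would
# be removed by the offset cancellation circuitry in the real receiver.
```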
One of the most important advantages of this approach is that the loop-back
connection can have a simple on-chip implementation. A programmable attenuator
can be implemented with switches and a bank of resistors or capacitors, and a simple
CMOS switch can perform the commutation of the signal at the input of the LNA. The
switching signal is a digital clock with a frequency fSW in the megahertz range, which
can easily be applied to the transceiver on wafer. The ATE can have direct control
over fSW, and in this way frequency response measurements of the transmitter and
receiver chains can be performed independently, without any other modification to
the transceiver architecture.
One of the limitations of a stand-alone loop-back test is that it is not able to
identify the location of catastrophic faults (e.g., an open circuit in the signal path), and
some important parametric faults can pass undetected. For example, a higher gain in
the PA or mixer can mask a lower gain in the LNA. In this sense, a more effective
testing strategy incorporates means of verifying the receiver operation at different
intermediate stages of the signal path and not only at its end points.

10.4.2 Overall testing strategy

The joint application of the techniques described in this chapter can act in a synergetic
way to improve the testability of an entire integrated system. Figure 10.28 depicts the
block diagram of a transceiver using a direct conversion transmitter with a switched
loop-back connection, RFDs in the RF section and an FRCS in the baseband section. A
d.c.-to-digital converter (d.c.DC) [11] acts as an interface between the on-chip testing
circuitry and a digital port of the ATE.
With the exception of the baseband circuitry at the transmitter, the entire
transceiver chain can be tested by using the LO signal and the switched loop-back
connection. A complete end-to-end test requires the application of a low-frequency
signal at the input of the transmitter, either from the ATE or from an on-chip signal
generator like the one proposed for the FRCS.
[Figure 10.28: Integrated transceiver with improved testing capabilities. RFDs are placed at RF observation points around the PA, attenuator (ATTN), LNA and frequency synthesizer outputs; the baseband observation points 1, 2, 7, 8 and 9 (I and Q) feed an analogue multiplexer and the APD, and a d.c. multiplexer with a d.c.-to-digital converter delivers the results to the ATE.]

Table 10.4 Transceiver testing with the proposed techniques

Test                                     Test device and method to be employed                          Observation nodes
LNA gain and 1 dB comp. point            RFDs. The input power to the LNA is swept by changing          5, 6
                                         the transmitter output power or the loop-back attenuator
                                         loss.
PA gain and 1 dB comp. point             RFDs. The input power to the PA is swept by varying the
                                         up-converter gain.
Up-converter operation and               RFD
output power
Synthesizer operation and output         RFD                                                            10 (I and Q)
power for I and Q branches
Phase and magnitude mismatch             FRCS                                                           7, 8, 9 (I and Q)
between I and Q channels
Transmitter filter transfer function     FRCS. Input frequency to the transmitter is swept across
                                         the desired characterization range.
Channel selection filter transfer        FRCS. fSW is swept across the desired characterization         7, 8 (I and Q)
function                                 range.
Adjacent channel rejection               FRCS. fSW is set at the adjacent channel frequency.            8 or 9
Baseband amplifier gain                  FRCS                                                           8, 9 (I and Q)
programmability

The switched loop-back connection
guarantees the flow of a test stimulus throughout the transceiver path, which can be used
by the embedded testing devices to perform measurements at different intermediate
points of the system.
By providing independent control of the frequency of the signals across the transmitter and receiver chains, and by providing access to internal points in the RF and
baseband sections, the testability of the transceiver is improved. Table 10.4 describes
the different tests that can be performed in this architecture. A complete testing solution for a given transceiver may not have to perform all of the possible tests; this
transceiver architecture with enhanced testability is meant to serve as the basis of a
comprehensive testing strategy in which test optimization and alternate testing techniques can be applied to optimize the fault coverage and the time/cost efficiency of
the test.

[Figure 10.29: Flow diagram of the proposed testing strategy. Step 1: end-to-end loop-back test for the detection of catastrophic faults and major performance deviations. On PASS, overall system tests follow (gain programmability and 1 dB compression point of transmitter and receiver, output SNR, adjacent channel rejection, etc.). On FAIL, step 2: on-chip test of the building blocks, covering the RF section (LNA and PA gain and 1 dB compression point, LO operation, up-converter output power) and the baseband section (down-converted signal amplitude, magnitude and phase mismatch between I and Q channels, channel selection filter transfer function), leading to system performance verification or fault diagnosis.]
The flow diagram in Figure 10.29 describes a hierarchical testing strategy for
the proposed architecture. As a first step, an end-to-end test is performed using the
switched loop-back connection but without involving the internal test devices. In this
case, the final output from the receiver chain is analysed by the ATE. This test provides
measurement results for the overall performance metrics of the transceiver (e.g., overall gain) and it can determine if a catastrophic fault is present in the system (e.g., if no
signal is present at the output or if its amplitude is far from the expected value). These
results from the system-level test can be used to select the subsequent block-level
tests, for example, to identify the cause of a specific faulty behaviour. In the second
step of the test process, the on-chip test circuitry is employed to locate a catastrophic
fault and/or to identify parametric faults such as a deviation in the frequency response
of the baseband filters or mismatch between the I and Q signal paths.
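
On the ATE side, this hierarchy reduces to a simple dispatch routine. The sketch below is purely illustrative: run_loopback_test, the block-level measurement calls and the limit checks are hypothetical hooks standing in for whatever interface the d.c.DC and the test program actually expose.

```python
def run_transceiver_test(dut, limits):
    """Hierarchical flow: cheap end-to-end loop-back test first; block-level
    diagnosis with the on-chip RFDs/FRCS only on failure."""
    result = dut.run_loopback_test(f_sw=4e6)          # hypothetical hook
    if limits.gain_lo <= result.overall_gain <= limits.gain_hi:
        return "PASS"

    # End-to-end failure: localize it with the embedded test devices.
    faults = []
    if not limits.lna_ok(dut.measure_lna_gain_p1db()):       # RFDs, nodes 5-6
        faults.append("LNA")
    if not limits.pa_ok(dut.measure_pa_gain_p1db()):         # RFDs
        faults.append("PA")
    if not limits.filter_ok(dut.measure_filter_response()):  # FRCS sweep
        faults.append("channel filter")
    return f"FAIL: {', '.join(faults) or 'unlocalized'}"
```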
Table 10.5 summarizes the area overhead that the addition of the discussed set of
on-chip testing circuitry represents with respect to the size of recently reported 1.9
GHz [27] and 2.4 GHz [26, 28, 29] transceivers for various standards. Despite the
fact that the area for the testing devices is taken from prototypes in 0.35 μm CMOS
technology, the area overhead is less than 10 per cent.

10.4.3 Simulation results

A macromodel of the transceiver architecture shown in Figure 10.28, including the
switched loop-back connection and the RF RMS detectors, is built to analyse the performance of the proposed testing scheme. The components employed in the model
include the most important non-idealities expected from an integrated implementation, such as noise, compression, non-linearity and finite isolation between terminals.

Table 10.5 Area overhead analysis for reported transceivers

Ref.  Standard                Analogue area (mm²)  CMOS process (μm)  Overhead of 6 RF RMS detectors,
                                                                      FRCS and d.c.DC (0.45 mm²) (%)
[26]  Bluetooth               5.9                  0.18               7.6
[27]  DECT                    9.4                  0.25               4.8
[28]  802.15.4 (ZigBee)       8.75                 0.18               5.1
[29]  Bluetooth and 802.11b   16                   0.18               2.8

Table 10.6 Characteristics of modelled ZigBee transceiver

RF                         2.4 GHz
Transmitter architecture   Direct conversion
Transmitter power          0 dBm
PA gain                    15 dB
Receiver architecture      Low-IF; IF = 4 MHz
Sensitivity                −82 dBm
RF front-end IIP3          −4 dBm
RF front-end gain          30 dB (LNA 15 dB + mixer 15 dB)
Baseband filter            Fifth-order bandpass polyphase

Table 10.6 summarizes the specific characteristics of the modelled architecture, which
are taken from the transceiver reported in Reference 28. An IEEE 802.15.4 implementation is chosen for the example because this standard is targeted for very low-cost
applications. The attenuator in the loop-back connection has a loss of 25 dB to bring
the 0 dBm output of the PA within the linear range of the receiver. The RFDs are
modelled according to the device described in Section 10.3.
Figure 10.30 shows the simulation results for a transceiver meeting specifications.
The frequency for the loop-back switching is 4 MHz, since this is the centre frequency
of the baseband filter. Figures 10.30(a) and (b) show the switched signal at the input
of the LNA in the time and frequency domains, respectively. Observe that the frequency components of interest (2400 ± 4 MHz) are at least 10 dB above other tones.
Figures 10.30(c), (d) and (e) show the outputs of the RF RMS detectors at the output
of the PA, the LNA input and the LNA output, respectively. Finally, the expected 4 MHz
signal at the output of the baseband filter is shown in Figure 10.30(f).
Even though the output of the RFDs placed after the switch is intermittent, the
gain of the LNA can still be estimated provided that the d.c.DC samples their output
at the appropriate rate. In the presented model, the d.c.DC has around 100 ns to
sample the output of each detector. In a given scenario where the d.c.DC is slower,
fSW can first be set to a lower value (so that the RFDs hold their output for a longer
time) to test the LNA, and then shifted to a higher value to test the rest of the receiver
chain.

[Figure 10.30: Simulation results for a transceiver meeting specifications. (a) Input of the LNA in the time domain, (b) input of the LNA in the frequency domain, (c) output of the RFD at the output of the PA, (d) output of the RFD at the input of the LNA, (e) output of the RFD at the output of the LNA and (f) output of the baseband filter.]
Figure 10.31 shows the simulation results for a transceiver in which some of the
individual building blocks do not meet the target specifications: the PA has a 2 dB
higher gain (12 dB total), the LNA has 5 dB less gain (10 dB total) and the channel
selection filter is centred not at 4 MHz but at 4.5 MHz. Figures 10.31(a) and (b)
show the output of the RFDs at the outputs of the PA and LNA, respectively. It can
be readily noticed that these final values differ from those in Figure 10.30.
[Figure 10.31: Results for a transceiver not meeting specifications. (a) Output of the RFD at the output of the PA, (b) output of the RFD at the output of the LNA, (c) output of the baseband filter for fSW = 4 MHz and (d) output of the baseband filter for fSW = 4.5 MHz.]

Figures 10.31(c) and (d) show the output of the channel selection filter
for fSW = 4 MHz and fSW = 4.5 MHz, respectively.
Note that through a stand-alone end-to-end test it would not be possible to determine the cause of a reduced amplitude at the end of the receiver baseband. Moreover,
if both the PA and the LNA exhibit a higher gain, the output of the receiver could show the
expected amplitude even if the filter has a deviated centre frequency. If this transceiver
were tested with a conventional loop-back test without the switch, changing the
input frequency to the transmitter could establish that the fault occurs in
the baseband, but not whether it is on the transmitter or the receiver side.

10.5 Summary and outlook

The combination of a switched loop-back architecture with the use of the recently
developed on-chip testing devices demonstrated in integrated implementations significantly enhances the testability of an RF transceiver. The on-chip testing devices
show that the direct, on-chip observation of analogue and RF building blocks at
megahertz and gigahertz frequencies can be performed in a CMOS process with
a minimum area and parasitic loading overhead. The presented strategy enables the
test of the entire wireless system and its individual building blocks at the wafer level
through digital information. The use of external analogue/RF equipment or components is avoided, allowing the implementation of a practical and cost-effective
test solution. Extending the proposed concepts to implementations in current deep-submicron technologies opens significant opportunities for improved performance, as
well as solutions to new challenges.

10.6 References

1 Ozev, S., Orailoglu, A., Olgaard, C.V.: Multilevel testability analysis and solutions for integrated Bluetooth transceivers, IEEE Design and Test of Computers, 2002;19(5):82–91
2 Ferrario, J., Wolf, R., Moss, S.: Architecting millisecond test solutions for wireless phone RFICs, Proceedings of the IEEE International Test Conference, Charlotte, NC, September 2003, pp. 1325–32
3 Akbay, S.S., Halder, A., Chatterjee, A., Keezer, D.: Low-cost test of embedded RF/analog/mixed-signal circuits in SOPs, IEEE Transactions on Advanced Packaging, 2004;27(2):352–63
4 Acar, E., Ozev, S.: Defect-based RF testing using a new catastrophic fault model, Proceedings of the IEEE International Test Conference, Austin, TX, November 2005, pp. 421–9
5 Bhattacharya, S., Halder, A., Srinivasan, G., Chatterjee, A.: Alternate testing of RF transceivers using optimized test stimulus for accurate prediction of system specifications, Journal of Electronic Testing: Theory and Applications, 2005;21(3):323–39
6 Silva, E., de Gyvez, J.P., Gronthoud, G.: Functional vs. multi-VDD testing of RF circuits, Proceedings of the IEEE International Test Conference, Austin, TX, November 2005, pp. 412–20
7 Ozev, S., Olgaard, C.: Wafer-level RF test and DFT for VCO modulating transceiver architectures, Proceedings of the 22nd IEEE VLSI Test Symposium, Napa Valley, CA, April 2004, pp. 217–22
8 Bhattacharya, S., Chatterjee, A.: Use of embedded sensors for built-in-test of RF circuits, Proceedings of the IEEE International Test Conference, Charlotte, NC, September 2004, pp. 801–9
9 Ryu, J.-Y., Kim, B.C., Sylla, I.: A new low-cost RF built-in self-test measurement for system-on-chip transceivers, IEEE Transactions on Instrumentation and Measurement, 2006;55(2):381–8
10 Valdes-Garcia, A., Silva-Martinez, J., Sánchez-Sinencio, E.: On-chip testing techniques for wireless RF transceivers, IEEE Design and Test of Computers, 2006;23:268–77
11 Valdes-Garcia, A., Hussein, F., Silva-Martinez, J., Sánchez-Sinencio, E.: An integrated transfer function characterization system with a digital interface for analog testing, IEEE Journal of Solid State Circuits, 2006;41(10):2301–13
12 Valdes-Garcia, A., Venkatasubramanian, R., Silva-Martinez, J., Sánchez-Sinencio, E.: A broadband CMOS RF amplitude detector for on-chip RF measurements, IEEE Transactions on Instrumentation and Measurement, to appear
13 Hafed, M.M., Abaskharoun, N., Roberts, G.W.: A 4-GHz effective sample rate integrated test core for analog and mixed-signal circuits, IEEE Journal of Solid State Circuits, 2002;37(4):499–514
14 Mendez-Rivera, G., Valdes-Garcia, A., Silva-Martinez, J., Sánchez-Sinencio, E.: An on-chip spectrum analyzer for analog built-in testing, Journal of Electronic Testing: Theory and Applications, 2005;21(3):205–19
15 Dai, F.F., Stroud, C., Yang, D.: Automatic linearity and frequency response tests with built-in pattern generator and analyzer, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2006;14(6):561–72
16 Valdes-Garcia, A., Silva-Martinez, J., Sánchez-Sinencio, E.: An on-chip transfer function characterization system for analog built-in testing, Proceedings of the IEEE VLSI Test Symposium, Napa Valley, CA, May 2004, pp. 261–6
17 Chi-Sheng, L., Bin-Da, L.: A new successive approximation architecture for low-power low-cost CMOS A/D converter, IEEE Journal of Solid State Circuits, 2003;38(1):54–62
18 Han, G., Sánchez-Sinencio, E.: CMOS transconductance multipliers: a tutorial, IEEE Transactions on Circuits and Systems-II, 1998;45(12):1550–63
19 Finvers, G., Filanovsky, I.M.: Analysis of a source-coupled CMOS multivibrator, IEEE Transactions on Circuits and Systems, 1988;35(9):1182–5
20 Yin, Q., Eisenstadt, W.R., Fox, R.M., Zhang, T.: A translinear RMS detector for embedded test of RF ICs, IEEE Transactions on Instrumentation and Measurement, 2005;54(5):1708–14
21 Valdes-Garcia, A., Venkatasubramanian, R., Srinivasan, R., Silva-Martinez, J., Sánchez-Sinencio, E.: A CMOS RF RMS detector for built-in testing of wireless transceivers, Proceedings of the IEEE VLSI Test Symposium, Palm Springs, CA, May 2005, pp. 249–54
22 Voo, T., Toumazou, C.: High-speed current mirror resistive compensation technique, Electronics Letters, 1995;31(4):248–50
23 Lee, T.H.: The Design of CMOS Radio-Frequency Integrated Circuits, 1st edn (Cambridge University Press, Cambridge, 1998), pp. 288–92
24 Jarwala, M., Duy, L., Heutmaker, M.S.: End-to-end test strategy for wireless systems, Proceedings of the IEEE International Test Conference, Washington, DC, October 1995, pp. 940–6
25 Yoon, J.-S., Eisenstadt, W.R.: Embedded loopback test for RF ICs, IEEE Transactions on Instrumentation and Measurement, 2005;54(5):1715–20
26 Ishikuro, H.: A single-chip CMOS Bluetooth transceiver with 1.5 MHz IF and direct modulation transmitter, Proceedings of IEEE International Solid-State Circuits Conference, San Francisco, CA, February 2003, pp. 90–5
27 Leeuwenburgh, J.: A 1.9 GHz fully integrated CMOS DECT transceiver, Proceedings of IEEE International Solid-State Circuits Conference, San Francisco, CA, February 2003, pp. 114–15
28 Choi, P.: An experimental coin-sized radio for extremely low-power WPAN (IEEE 802.15.4) application at 2.4 GHz, IEEE Journal of Solid State Circuits, 2003;38(12):2258–68
29 Byunghak, T.: A 2.4 GHz dual-mode 0.18-μm CMOS transceiver for Bluetooth and 802.11b, IEEE Journal of Solid State Circuits, 2004;39(11):1916–26

Chapter 11

Tuning and calibration of analogue, mixed-signal and RF circuits

James Moritz and Yichuang Sun

11.1 Introduction

The mixed-signal system-on-a-chip (SoC) has become one of the main drivers for
electronic circuit design. It has become normal to integrate complex systems with
both digital and analogue functions in a single chip, to produce systems such as
wireless transceivers, broadband modems, mobile phone handsets, digital broadcast
receivers and many other application devices. A major motivation for producing such
complex systems as an SoC is cost. With modern submicron semiconductor technology, the achievable complexity of an SoC continues to increase rapidly, with
relatively little increase in the associated cost of fabrication. Eliminating the requirement for many of the external components formerly required also drastically reduces
the manufacturing costs of products incorporating such highly integrated SoCs, as
well as bringing technical advantages such as reduced size and power consumption.
From the viewpoint of the analogue and mixed-signal circuit designer, mixed-signal SoC design brings many challenges. The vast majority of circuitry in the
SoC is digital, and economic requirements dictate that digital CMOS integrated circuit
(IC) processes are used to fabricate the SoC. However, such processes do not yield
optimized analogue components. A major source of difficulty in circuit design is the
variability of integrated components [1]. Each process step has a degree of variability
associated with it, leading to loose component tolerances. Components of the same
type integrated on the same die are subject to nearly identical processing; hence,
close matching of component value ratios is possible even though absolute tolerances
are large. The ability to produce components with well-matched ratios has been
heavily exploited by circuit designers. However, as device geometries shrink with
each increment in technology feature size, statistical variations between components
integrated on the same die increase, leading to a deterioration in ratio matching.



Different component parameters depend on different process steps; thus, while
capacitance might primarily depend on oxide thickness, transconductance is also a
strong function of dopant concentration. Therefore, there will be little correlation
between variations in different types of component parameter, even within the same
die. Most components integrated in silicon also have strongly temperature- and bias-dependent parameters, and are subject to ageing effects that lead to changes in
component parameters over time.
The variability of integrated components and the need for precise analogue functions in numerous important mixed-signal circuits have led to the development of
on-chip tuning and calibration techniques. Important examples of these techniques
are described in the following sections:

Section 11.2 On-chip automatic filter tuning. Filters are important building
blocks in virtually all applications where analogue signals are processed and,
for the vast majority of continuous-time filter designs, on-chip automatic tuning
schemes are an essential requirement in order to achieve the performance goals
demanded by the application.
Section 11.3 Self-calibration techniques for frequency synthesizers. Precise
frequency generation is also an essential requirement in a very wide range of
applications; for frequencies in the RF range, phase-locked loop (PLL) frequency
synthesis is widely used. The critical analogue circuit block in a PLL is the
voltage-controlled oscillator (VCO). VCO performance parameters are highly
process dependent, and on-chip VCO calibration techniques can be employed to
reduce the effects of process variations on VCO parameters, yielding improved
performance for the overall PLL system.
Section 11.4 On-chip antenna impedance matching. To achieve efficient operation of RF power amplifiers, the impedance of the load must be matched to that
required by the amplifier. Normally, the load is actually an antenna, which has a
highly variable impedance depending on the exact operating frequency and the
operating environment of the antenna. Automatic impedance matching maximizes
output and efficiency under varying load conditions.
The chapter is concluded in Section 11.5.

11.2 On-chip filter tuning

11.2.1 Tuning system requirements for on-chip filters

Examination of the block diagram of any mixed-signal system architecture reveals
that filters are an essential building block. To satisfy all design requirements, many
different types of filters operating over a wide range of frequency and bandwidth are
required. For a long time this created a barrier to the design of highly integrated, high-frequency mixed-signal systems, since many filtering functions had to be performed
using components that could not be integrated in commercial IC technologies,
such as inductors and quartz and ceramic resonators. Developments in semiconductor
processing and IC design techniques during the past several years mean that it
is now possible to implement continuous-time active filters with useful performance
over an extremely wide frequency range, using only components available in standard
IC processes.



The possibility therefore exists of achieving the highly desirable
goal of integrating all the filter functions required in many mixed-signal SoC applications. The current trend is to implement these designs using standard CMOS digital
processes, due to their low cost and ready availability.
From the filter designer's point of view, a notable shortcoming of these current
IC technologies is the loose absolute tolerances that can be achieved on the values
of on-chip components. One solution is to utilize filters based on switched-capacitor
(SC) or switched-current circuit techniques, where the filter response is defined by
component ratios and a precise clock frequency. However, these techniques are not
feasible at high frequencies, and the sampled-data nature of these filters may also be a
limitation. Continuous-time filters, using circuit techniques such as MOSFET-C and
gm-C [2–5], are capable of operation at frequencies of several hundred megahertz.
With continuous-time filters, variations in absolute component values lead directly
to divergence of the achieved filter response from the intended design specifications.
It is not feasible to adjust component values after fabrication, and the component
parameters also depend on operating temperature and bias conditions. Therefore, an
almost universal requirement for integrated continuous-time filters is the need for
on-chip tuning; the tuning system may well be the most difficult challenge to the
designer in achieving satisfactory filter performance.

11.2.2 Frequency tuning and Q tuning

As discussed above, on-chip tuning of integrated filters is generally required due to
the wide tolerances of integrated components. The passband and cut-off frequencies
of a particular filter response are defined by the time constants in the circuit, which
are in turn determined by the component values, that is, by RC or LC products or
C/gm ratios. Since the absolute tolerance of each type of component is quite large
and there is generally no correlation between the errors in value of different types of
component, the overall tolerance of the pole and zero frequencies is larger still: as
much as 50 per cent in typical IC processes.
In contrast to the very loose absolute tolerances of integrated components, ratio
matching between components of the same type on a single die is quite accurate.
Normally, ratios are maintained to better than a few per cent, and may be within 0.1
per cent when special layout techniques are used. The shape of the filter response,
defined by the Q of the poles and zeros and the mid-band gain, K, of the filter, is
determined primarily by ratios between similar components. Since these are well
defined, it would seem that Q and gain would automatically also be defined accurately.
This can be true for filters operating at moderate frequencies and with moderate Q,
but high-Q designs are severely affected by parasitic effects, in particular phase shifts
caused by high-frequency parasitic poles and zeros in the amplifiers, and finite d.c.
amplifier gain.
These factors are illustrated by the tunable second-order OTA-C section shown in
Figure 11.1. A single-ended OTA-C biquad is used here as an example for simplicity,
although the same considerations apply equally to other filter techniques.

[Figure 11.1: Tuneable OTA-C biquad. OTAs g0, g1, g2 and g3 with capacitors C1 and C2 generate the bandpass output VBP; the transconductances are tuned through their bias current sources via Vtune.]

The circuit generates simultaneous lowpass and bandpass outputs; here we focus on the bandpass
transfer function, but the lowpass case is very similar. The transconductances in the
filter are made tuneable by varying their bias currents; the capacitors are fixed.
In the case where all components are ideal, the transfer function of Figure 11.1 is

H_{BP}(s) = \frac{V_{BP}}{V_{in}} = \frac{g_0}{g_3}\,\frac{(g_3/C_2)\,s}{s^2 + (g_3/C_2)\,s + (g_1 g_2/C_1 C_2)} \qquad (11.1)

By equating coefficients with the standard form of the second-order bandpass transfer function

H_{BP}(s) = K_{BP}\,\frac{(\omega_0/Q)\,s}{s^2 + (\omega_0/Q)\,s + \omega_0^2} \qquad (11.2)

we have

\omega_0 = \sqrt{\frac{g_1 g_2}{C_1 C_2}}, \qquad Q = \frac{1}{g_3}\sqrt{\frac{g_1 g_2 C_2}{C_1}}, \qquad K_{BP} = \frac{g_0}{g_3} \qquad (11.3)
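
As a numerical illustration of Equation (11.3), the short script below evaluates ω0, Q and KBP for one plausible, assumed set of component values (not taken from any reported design):

```python
import numpy as np

g0, g1, g2, g3 = 100e-6, 100e-6, 100e-6, 10e-6   # transconductances, S (assumed)
C1, C2 = 2e-12, 2e-12                            # capacitances, F (assumed)

w0 = np.sqrt(g1 * g2 / (C1 * C2))                # Equation (11.3)
Q = np.sqrt(g1 * g2 * C2 / C1) / g3
K = g0 / g3

print(f"f0 = {w0/(2*np.pi)/1e6:.2f} MHz, Q = {Q:.1f}, K = {K:.1f}")
# f0 = 7.96 MHz, Q = 10.0, K = 10.0 for these assumed values
```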

Suppose processing variations cause the transconductances of the OTAs to vary.
Because of the inherent matching between the components making up the OTAs,
all transconductances will change by the same factor k_g. Similarly, process variations
will cause all the capacitor values to change by a factor k_c, which in general will be
different from, and unrelated to, k_g. Including the factors k_g and k_c in Equation (11.1)
gives a new transfer function

H'_{BP}(s) = \frac{V_{BP}}{V_{in}} = \frac{g_0}{g_3}\,\frac{(k_g g_3/k_c C_2)\,s}{s^2 + (k_g g_3/k_c C_2)\,s + (k_g^2 g_1 g_2/k_c^2 C_1 C_2)} \qquad (11.4)

\omega_0 has been changed by a factor k_g/k_c. In order to restore the design value of \omega_0, the
transconductances of the four OTAs are simultaneously tuned until k_g = k_c, in which
case Equation (11.4) reduces to Equation (11.1). In practice, this is achieved by tuning
the transconductances until \omega_0 is equal to the design value. It is also possible to tune
Q independently of \omega_0 by varying g_3 alone. However, because Q is determined by
the relatively accurately defined ratios of C_2 to C_1, and of g_1, g_2 to g_3, it would appear
that Q would remain virtually constant during the tuning process.

[Figure 11.2: (a) Non-ideal biquad, with parasitic output conductance Gp1 and parasitic capacitance Cp2 at the internal nodes, and (b) excess phase: the magnitude of g' remains flat while its phase departs from the ideal as ω approaches ωp.]

In a real design, however, circuit parasitics can have a major effect on the Q of the
circuit of Figure 11.1. An ideal OTA has a transconductance that is independent of
frequency, but real, non-ideal OTAs (g1', g2' in Figure 11.2(a)) have finite bandwidth,
caused by parasitic high-frequency poles and zeros associated with circuit nodes
inside the OTA. Although these normally occur far above the passband frequency of
the filter, and have little effect on the magnitude of the integrator gain within the passband
of the filter, they increase the phase shift of the integrators to slightly more than the
nominal 90°. This excess phase, as illustrated in Figure 11.2(b), can substantially
alter circuit Q.
alter circuit Q.
In the frequency range of interest, where  p , the frequency-dependent OTA
gain g (s) can be modelled as an ideal transconductance g with an added phase shift
proportional to frequency:




s
s
1


 =g 1

(11.5)
g (s) = g
= g exp
p
p
1 + s/p
In the circuit of Figure 11.2(a), the most significant influence of excess phase occurs
in the two integrators made up of g1', C1 and g2', C2. Substituting this frequency-dependent transconductance for the ideal transconductors in Equation (11.1), and
making appropriate approximations for \omega_0 \ll \omega_p, gives a new value of Q for the
circuit when excess phase is included:

Q' \approx \frac{Q}{1 - 2Q\,\omega_0/\omega_p} \qquad (11.6)

Q is significantly affected by quite small values of excess phase. For example, if the
design value of Q is 10 and \omega_p = 100\,\omega_0, giving rise to an excess phase of about 0.6°,
Q' from Equation (11.6) is 12.5, an increase of 25 per cent. A design Q of 50 will
result in Q' approaching infinity and instability.
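
Equation (11.6) is simple enough to check numerically; the snippet below reproduces the figures just quoted:

```python
def q_with_excess_phase(q_design, w0_over_wp):
    """Q enhancement from Equation (11.6); diverges as the denominator -> 0."""
    d = 1 - 2 * q_design * w0_over_wp
    return float("inf") if d <= 0 else q_design / d

print(q_with_excess_phase(10, 1 / 100))   # 12.5, as in the text
print(q_with_excess_phase(50, 1 / 100))   # inf: onset of instability
```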



Another parasitic effect that modifies Q is the finite output conductance of OTAs.
The output of an ideal OTA behaves as a current source, but real OTAs have significant
output resistance. This can be modelled by conductances Gp1 and Gp2 shunting each
internal circuit node to ground, as shown in Figure 11.2(a). The transfer function of
this modified circuit is given by


H'(s) = \frac{(g_0 G_{p1}/C_1 C_2)\left(1 + (C_1/G_{p1})\,s\right)}{s^2 + \left(G_{p2}/C_2 + G_{p1}/C_1 + g_3/C_2\right)s + \left(G_{p1}G_{p2} + G_{p1}g_3 + g_1 g_2\right)/C_1 C_2} \qquad (11.7)
The Q of the modified circuit is approximated by

Q' \approx \frac{Q}{1 + (Q/\omega_0)\left(G_{p1}/C_1 + G_{p2}/C_2\right)} \qquad (11.8)

Thus, increasing the output conductance of the OTAs reduces Q. This sets an upper
limit to the Q which can be achieved for a given set of transconductors and capacitors:
as the design Q in Equation (11.3) tends to infinity, the maximum achievable Q is

Q'_{max} = \frac{1}{(1/\omega_0)\left(G_{p1}/C_1 + G_{p2}/C_2\right)} = \frac{\omega_0}{G_{p1}/C_1 + G_{p2}/C_2} \qquad (11.9)
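
Continuing with assumed values (the ω0 of the earlier numerical example and an invented OTA output conductance), Equations (11.8) and (11.9) show how hard this limit bites:

```python
w0 = 5e7                       # rad/s, from the earlier assumed example
C1 = C2 = 2e-12                # F
Gp1 = Gp2 = 1e-6               # S, assumed OTA output conductance

loss = (Gp1 / C1 + Gp2 / C2) / w0
q_max = 1 / loss                               # Equation (11.9)
q_eff = lambda q: q / (1 + q * loss)           # Equation (11.8)

print(f"Qmax = {q_max:.0f}, a design Q of 10 realizes {q_eff(10):.1f}")
# Qmax = 50, a design Q of 10 realizes 8.3
```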

In summary, the large tolerances of integrated components give rise to large frequency
errors in the filter response, so integrated filters almost always require on-chip tuning. Frequency tuning will often be sufficient for filters operating at modest values
of Q and frequency, typically lowpass and bandpass filters where the bandwidth
is of the same order as the centre frequency. However, in high-Q, high-frequency
filters, circuit parasitics, principally the excess phase and finite d.c. gain of the
active circuits, profoundly affect the Q of the filter response, so that Q must also
be tuned.

11.2.3 Online and offline tuning

The outline of a typical tuning system is shown in Figure 11.3. A well-defined reference signal is applied to the filter input. One or more parameters of the filter output
signal are measured by the frequency tuning control circuit and compared to a reference. The resultant error signal is used by the control circuit to calculate a correction
signal, which is then applied to the frequency tuning input of the filter. Thus, the
system forms a closed feedback loop in which the filter is forced to converge on the
desired frequency response. In a similar way, if implemented, the Q control circuit
generates a tuning signal that corrects the Q of the filter.
Desirable features of any on-chip tuning system are minimal chip area and low
power consumption. This requires simple hardware and minimal computation requirements for the tuning algorithm. Conversely, the functional requirements placed on
the filter design may be very complex, requiring several different performance goals
for cut-off frequencies, gain, group delay, and so on, which must be simultaneously
met. It is not usually possible for an on-chip tuning system to evaluate all the relevant parameters of filter performance, since this requires many measurements to be performed on the filter output signal over a range of frequencies.

Figure 11.3 Outline of on-chip tuning scheme

In high-order filters, there are many tuneable components, so achieving the desired filter response
requires the control of a large number of variables simultaneously. For these reasons,
it is very difficult to directly tune a high-order filter using reasonably simple tuning
circuits.
In order for the filter to function correctly, the tuning system must operate when
the chip is first powered on. Also, component values will continuously drift while the
circuit is powered, due to changes in environmental and operating conditions, so it
is necessary to periodically repeat the tuning process during normal operation. This
creates a problem in that the reference will be present within the filter passband at the
same time as the desired signal, with the inevitable possibility of mutual interference
occurring between the tuning system and the rest of the transceiver signal processing.
The scheme of Figure 11.3 is therefore normally operated as an offline tuning system;
periodically the normal signal input to the filter is removed, and the reference signal
applied. The filter is then tuned, and the updated values of frequency and Q tuning
signals stored until the next tuning cycle occurs.
These offline tuning cycles can be readily accommodated in many system architectures; for example, many types of transceiver alternate between transmit and receive;
receiver filter tuning can take place during the transmit periods without affecting
receiver operation. However, the additional signal routing and the requirement to
store the tuning signals while the filter is online lead to added complication. Therefore, online tuning is widely used, where the tuning process proceeds continuously
and simultaneously with normal circuit operation. One way of achieving this is to
devise a reference signal that has minimal effect on subsequent signal processing, but
which at the same time can be used to measure the necessary filter parameters. An
example of this is described in Reference 6, where the reference signal is made nearly
orthogonal to the received signal using spread-spectrum techniques.

Figure 11.4 Master–slave tuning scheme

11.2.4 Master–slave tuning

A very widely used and important online tuning scheme is the master–slave tuning scheme outlined in Figure 11.4. This makes use of the inherently good matching between components and circuit subsystems that is achieved within a single IC. Two well-matched filter sections are used; the reference signal is applied to the master section, whilst the actual input signal is applied to the slave section. The tuning system develops tuning signals in a closed feedback loop which correct errors in the response of the master section, as in offline tuning. The same tuning signals are simultaneously applied to the slave section. If the master and slave sections are identical and perfectly matched, the response of the slave filter will be the same as the master. Thus, it is unnecessary to apply the reference signal to the slave filter, which can operate continuously.
In practice, master and slave are usually different. The master section is usually
a low-order filter, often a biquad, since this has a simple response for which it is
relatively easy to design tuning algorithms and is economic in its use of chip area
and power consumption. This is illustrated in Figure 11.5; the master filter is a single
biquad with a single resonance peak in its response. The slave section can be of
whatever order is required to meet the filter specifications, with the same tuning signals
applied to each section. The diagram shows the effect on the frequency responses of
master and slave as they are simultaneously tuned. Clearly, it is much easier to design
a tuning algorithm for the single biquad master when compared to the high-order
slave response, with its multiple maxima and minima. Thus, in addition to allowing online tuning, the master–slave tuning scheme provides a solution to the problem of tuning complex filters. A large proportion of high-order integrated filter designs therefore utilize master–slave tuning in some form.

Figure 11.5 Tuning range of master and slave filter sections

The essential assumption made in the master–slave scheme is that the ratios of components in the master and slave sections can be accurately realized, and will track each other precisely as the master section is tuned. If this is the case, the cut-off
frequencies and Q of all the filter sections in the slave will exactly track those of
the master, and if the master section Q is maintained at the correct value the slave
filter response shape will remain correct as the filter frequency is tuned. There are
practical limitations on how closely this can be achieved, and a substantial amount
of design and layout effort must be expended to ensure that the master accurately
models the tuning behaviour of the slave. Since parasitic effects can substantially
alter the performance of the filter, these must also be accurately modelled in the
master section. These requirements can usually be best met by designing the slave
filter with circuits that are as near identical to the master as possible, and by making the
tuning reference signal frequency close to that of the signal frequency. This allows the
best matching between sections, and also ensures that frequency-dependent parasitic
effects are similar in master and slave. Synthesis techniques are required that result
in filter circuits using the minimum possible spread of component values.

11.2.5 Frequency tuning methods

For frequency tuning, the most commonly used input reference signal is derived from
a stable clock oscillator. This is convenient, since most systems already include an
accurate off-chip clock signal, usually derived from a quartz crystal, from which all
on-chip clock signals are derived through various forms of frequency synthesis. At
the output of the filter, phase comparison is the most widely used method of determining the state of tuning of the filter. An outline frequency tuning scheme is shown
in Figure 11.6.
In second-order filter sections, the phase difference between the filter input and output reaches well-defined values at the resonance frequency: 90° in the case of a lowpass section and 0° for a bandpass section. Accurate reference phase shifts, independent of component value tolerances, can be generated using digital counters

operating on multiples of the reference frequency.

Figure 11.6 Outline frequency tuning scheme

An example of a phase detector is


an analogue multiplier or mixer. With two signals of the same frequency applied to its inputs, a d.c. output voltage is generated which is a function of the phase difference of the input signals. This transfer function is non-linear and also depends on the signal amplitudes; but if the phase difference is arranged to be 90°, the output will be zero.
Therefore, if the output of the phase detector is applied to the frequency tuning input of
the filter via an error amplifier and loop filter, feedback around the loop will cause the
cut-off frequency of the filter to reach a value giving the desired value of phase shift.
This system is, however, subject to several sources of error. With large initial
errors in filter resonant frequency, the reference signal is likely to be outside the filter
passband at the start of the tuning procedure. A filter section with high Q and therefore
rapid cut-off in amplitude will attenuate the reference signal to a low level that may
prevent the correct operation of the phase detector. This limits the scheme to low-Q
filter sections. A low-Q filter has a phase response that changes only slowly as the
frequency is tuned, therefore for a given tuning error, only a relatively small error
signal is produced at the phase detector output. Therefore the signal-to-noise ratio
inside the tuning loop is poor; small errors due to d.c. offsets in the phase detector
or parasitic phase shifts give rise to large tuning errors. A further source of error
is distortion in the reference or filter output waveforms; applying a non-sinusoidal
signal to the phase detector may result in incorrect output.
A modification of this method that avoids these problems is to replace the filter
section being tuned by a VCO as shown in Figure 11.7. This can be achieved by
applying feedback with the appropriate gain and phase to the filter section. Oscillation
then occurs at the resonant frequency of the filter. The output signal frequency is
compared with the reference frequency using a phase detector and the error signal
is used to tune the VCO as before. This system is therefore a PLL. The advantage
of using the output frequency of a VCO rather than the phase shift through the filter
section as the basis for tuning is that the system is independent of fixed errors in
the phase comparison; provided the phase difference between reference and VCO is
maintained constant, the VCO frequency, and therefore the resonant frequency of the
filter, is identical to the reference.
The main source of tuning error in this system is the mismatch in behaviour
between the filter section, operating at finite Q and with relatively low signal levels, and the VCO, which effectively operates at infinite Q and inherently requires a non-linear amplitude-limiting mechanism to achieve a stable signal amplitude.

Figure 11.7 PLL frequency tuning system

To ensure
that the frequency-determining elements operate within their linear range, the VCO
is usually implemented by adding a limiting amplifier to provide feedback around a
bandpass biquad filter section.
Many successful frequency tuning systems using frequency-locked or phase-locked loops as described above have been implemented in practical designs, for example, in References 7 and 8.
These methods are well suited to master–slave designs, where the tuning loop can
operate continuously. This yields an extremely simple control system and is often
capable of frequency tuning accuracy within 1 per cent. These techniques become
increasingly difficult to apply at the highest frequencies, due to the increasingly severe
errors caused by excess phase, both in the filter or VCO and in the phase detector
itself.
Frequency tuning techniques can utilize the time-domain response of the filter.
The response of a high-Q bandpass filter to a step or impulse function is a damped
sinusoid at the filter output. The period of the sinusoid is approximately equal to the reciprocal of the resonant frequency of the filter. The filter output waveform is squared using a limiting
amplifier and the period measured using digital counter techniques. A tuning signal is
derived by comparing the measured period with the desired value. In order to achieve
good accuracy, high resolution in the period measurement is necessary. This requires
that the transient response has a long duration. The duration increases with Q and
filter order, and owing to this and the iterative nature of the measurement technique,
it is most appropriate for offline tuning of high-order, high-Q bandpass filters [9, 10].
This tuning control method is digital in nature, so it is easily combined with switched-array tuning schemes.
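A rough behavioural sketch of this counter-based period measurement is given below; the 1 GHz counter clock, the decay rate and the slightly detuned ring-down frequency are all illustrative assumptions.

```python
import numpy as np

f_clk = 1e9                          # fast counter clock (assumed)
f_res = 10.07e6                      # actual, slightly detuned resonant frequency
t = np.arange(0.0, 5e-6, 1.0 / f_clk)            # ring-down sampled at clock rate
ringdown = np.exp(-2e5 * t) * np.sin(2 * np.pi * f_res * t)

square = (ringdown > 0).astype(np.int8)          # limiting-amplifier output
edges = np.flatnonzero(np.diff(square) > 0)      # rising zero crossings
periods = np.diff(edges)                         # periods in clock counts

f_meas = f_clk / periods.mean()                  # average measured frequency
f_target = 10e6                                  # desired resonant frequency
print(f"measured {f_meas / 1e6:.3f} MHz, error {(f_meas - f_target) / f_target:+.2%}")
```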
A related technique is to measure the time constant of an integrator using a d.c.
charging current. An example of this technique using an OTA-C integrator is shown
in Figure 11.8.

Figure 11.8 Frequency tuning scheme based on a ramp generator

An accurate clock signal is used to open the switch for a period t. During t, the integrator output voltage is a linear ramp that reaches a maximum value of

$$V_{out(max)} = V_{ref}\,t\,\frac{g_m}{C} \qquad (11.10)$$

This maximum voltage is stored by the peak detector, and compared with the
reference voltage by the error amplifier. The resulting lowpass-filtered error signal is
applied to the OTA transconductance control input and causes the capacitor charging
current and therefore Vout (max) to vary. Over a large number of clock cycles, this
feedback loop causes Vout (max) to become equal to Vref :
$$V_{ref}\,t\,\frac{g_m}{C} = V_{ref}, \qquad \frac{g_m}{C} = \frac{1}{t} \qquad (11.11)$$

Since t is accurately defined by the clock signal, and the resonant frequency of
the filter is accurately proportional to gm /C due to well-defined ratios between all
transconductances and capacitances on the chip, the resonant frequency is set to the
correct value.
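The behaviour of this loop is easy to mimic numerically. In the hedged sketch below, the capacitance, clock period, loop gain and initial +20 per cent gm error are illustrative assumptions; Equation (11.10) gives the ramp peak, and the loop drives it toward Vref, forcing gm/C = 1/t as in Equation (11.11).

```python
C = 10e-12                      # integrator capacitance, 10 pF (assumed)
t_ramp = 1e-6                   # clock-defined charging period t (assumed)
gm_target = C / t_ramp          # Equation (11.11): the loop settles at gm/C = 1/t
gm = 1.2 * gm_target            # process error: gm starts 20 per cent high
v_ref = 1.0
loop_gain = 0.2                 # error-amplifier gain per clock cycle (assumed)

for _ in range(50):
    v_out_max = v_ref * t_ramp * gm / C      # Equation (11.10), held by peak detector
    error = v_ref - v_out_max                # compared against Vref
    gm += loop_gain * error * gm_target      # adjust OTA bias via gm control input

print(f"gm/C * t = {gm * t_ramp / C:.4f}  (converges to 1)")
```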
In order to avoid problems caused by unwanted phase shifts, frequency tuning methods have been devised based on amplitude measurements. A second-order response with Q greater than 1/√2 contains a peak in its amplitude response plotted against frequency. For high-Q values, the frequency of the amplitude peak closely approximates the resonant frequency. Tuning the resonant frequency of the filter with a fixed input
reference frequency will also produce a peak in the output response when the two
frequencies coincide. The tuning system only needs to detect when the maximum
output signal is achieved; the amplitude detector need therefore have neither high
accuracy nor linearity, provided it has a monotonic response.

Figure 11.9 Frequency tuning based on amplitude peak detection

A tuning scheme using this principle is shown in Figure 11.9. The reference signal
is applied to the biquad input, and the envelope detector produces a d.c. level, Venv ,
proportional to the amplitude of the filter output. In the first phase of the tuning cycle,
the filter tuning voltage Vtune is swept through its range by the ramp generator. At the
point where the resonant frequency of the filter coincides with the reference frequency,
the filter output amplitude and thus Venv reaches a maximum, and this value is stored
by the peak detector as Vpk . In the second tuning phase, Vtune is swept again and Venv
is compared with Vpk by the comparator. At the point where the resonant frequency
and reference frequency coincide, Venv is equal to Vpk and the control logic opens the
switch. Thus, the value of Vtune giving the correct filter resonant frequency is stored
on the hold capacitor, until the next tuning cycle begins.
In practice, the circuit of Figure 11.9 will suffer tuning errors due to parasitic charge injection into, and loss from, the tuning-voltage holding capacitor, and due to offsets in the comparator and peak detector. However, a more sophisticated implementation of this technique has been described [11] in which these errors are largely eliminated.
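A behavioural sketch of the two-phase cycle follows. The biquad amplitude response, the linear tuning law (resonance moving from 8 to 12 MHz as Vtune sweeps 0–1 V) and the sweep resolution are illustrative assumptions; a real implementation operates on the detector voltages, not on a circuit model.

```python
import numpy as np

f_ref = 10e6          # reference frequency
q = 20.0              # biquad Q (assumed)

def v_env(v_tune):
    """Envelope-detector output: biquad gain at f_ref for a given Vtune.
    Assumes f0 moves linearly from 8 to 12 MHz as Vtune sweeps 0 to 1 V."""
    f0 = 8e6 + 4e6 * v_tune
    detune = f_ref / f0 - f0 / f_ref
    return 1.0 / np.hypot(1.0, q * detune)

sweep = np.linspace(0.0, 1.0, 1001)              # ramp-generator output samples

# Phase 1: sweep Vtune and store the maximum envelope voltage as Vpk.
v_pk = max(v_env(v) for v in sweep)

# Phase 2: sweep again; when Venv reaches Vpk the comparator fires and the
# control logic freezes Vtune on the hold capacitor.
v_tune_held = next(v for v in sweep if v_env(v) >= v_pk)
print(f"held Vtune = {v_tune_held:.3f} V (ideal 0.500 V)")
```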

11.2.6 Q tuning techniques

Frequency tuning ensures that the centre frequency or cut-off frequency of the filter
is tuned to the correct value; however, this does not necessarily ensure that the shape
of the frequency response is correct; this also depends on the Q of the filter sections.
As noted above, parasitic effects in particular may lead to severe distortion of the
filter response. The frequency tuning schemes described above are independent of Q.
However, in order to tune the filter Q, it is first necessary that the frequency tuning
process is completed, because Q is defined in terms of the way that the filter response
changes close to the resonant frequency. Any error in the filter resonant frequency will
therefore also result in errors in Q. A further difficulty is that although the designer
will attempt as far as possible to make Q unaffected by frequency tuning and vice
versa, they are never entirely independent. Inevitably, tuning Q will introduce a new
error in filter frequency, and correcting this error will alter Q again.

Figure 11.10 Q tuning scheme

Therefore, several iterations of frequency and Q tuning may be required to correctly tune the filter, or the two processes must proceed simultaneously. The designer must take the interdependence of both tuning processes into account in order to ensure that convergence takes place [12]. This is especially difficult with high-Q filter sections where, as seen in Section 11.2.2, Q is sensitive to small changes in tuning, and
instability can easily occur.
The most widely used Q tuning technique [11, 13] utilizes the fact that in many
cases, the gain of a biquad at the resonant frequency is proportional to the Q. For
example, in the case of the OTA-C biquad of Figure 11.1, from Equations (11.1),
(11.2) and (11.3), we can derive expressions for the gain in terms of Q at ω0:

$$\left|\frac{V_{BP}}{V_{in}}\right| = \frac{g_0}{g_3} = Q\,\frac{g_0}{\sqrt{g_1 g_2 C_2/C_1}}, \qquad \left|\frac{V_{LP}}{V_{in}}\right| = Q\,\frac{g_0}{g_1} \qquad (11.12)$$
The Q tuning system of Figure 11.10 is an amplitude-locked loop which operates using this proportionality between gain and Q. It is assumed that separate frequency tuning circuits maintain ω0 of the filter exactly equal to the desired value and that the gain of the filter is equal to the Q at ω0. The reference signal is attenuated by a factor
1/KQ by a potential divider and is applied to the filter input. The output amplitude
of the filter is therefore Vref Q/KQ . A pair of matched envelope detectors generate
d.c. levels proportional to Vref and the filter output, which are compared by an error
amplifier. The resulting feedback signal varies the Q of the filter so that the filter
output is equal to Vref , in which case Q = KQ . Since KQ is determined by component
ratios which can be made accurately, Q is also accurately defined.
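A minimal behavioural sketch of this amplitude-locked loop follows, assuming perfect frequency tuning so that the filter gain at ω0 equals its Q (Equation (11.12)); KQ, the loop gain and the parasitic-shifted starting Q are illustrative assumptions.

```python
k_q = 10.0            # target Q, set by the accurate attenuator ratio K_Q
v_ref = 1.0
q = 14.0              # parasitic-shifted starting Q (assumed)
loop_gain = 0.3       # error amplifier plus loop filter gain (assumed)

for _ in range(100):
    v_in = v_ref / k_q                # reference attenuated by 1/K_Q
    v_out = q * v_in                  # filter gain at w0 equals Q
    error = v_ref - v_out             # matched envelope detectors + error amp
    q += loop_gain * error * k_q      # feedback signal adjusts the filter Q

print(f"tuned Q = {q:.3f}  (target K_Q = {k_q})")
```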

11.2.7 Tuning of high-order leapfrog filters

Multiple loop feedback (MLF) filters are desirable for fully integrated filters because
of their low sensitivity. This applies especially to high-Q bandpass filters, due to
their high sensitivity to frequency tuning errors and the high Q required from each
filter section. However, the multiple feedback structure which is responsible for this
low sensitivity at the same time makes this type of filter more difficult to tune. The
multiple feedback paths existing between sections of the filter result in interaction

Tuning and calibration of analogue, mixed-signal and RF circuits 361


Amplitude
detector

S2

Rs

C2

L2

S(n1)
I2

C(n1) L(n1)

I4

I(n1)
RL

V1

V3
S1

C1 L1

VL

Vn
C3

C3

L3

Sn

Cn

Ln

Vs

o

Figure 11.11

LC bandpass tuning using Dishals method

between all filter sections. Thus, tuning any one section of the filter affects all poles
and zeros in the filter transfer function, modifying the filter transfer function in a
complex way. This makes the design of a tuning algorithm capable of realizing the
desired response extremely difficult. This section describes a tuning method based
on Dishals technique [14, 15] which overcomes this problem, and is applicable to
the leapfrog (LF) form of MLF filter and other types of filters based on LC ladder
simulation. This method can be illustrated using the LC ladder bandpass filter shown
in Figure 11.11.
Synthesis of this ladder filter with centre frequency ω0 results in the inductor and capacitor values in each branch of the ladder having the same resonant frequency, 1/(LiCi) = ω0². To tune the filter, initially all switches in the series arms are opened and all those in the shunt arms are closed. A signal is applied to the input at frequency ω0, and V1 is monitored by the amplitude detector. S1 is opened and C1/L1 is tuned to
parallel resonance, that is, maximum amplitude of V1 . Since S2 is open, the resonator
C1 /L1 is isolated from the rest of the circuit, which therefore does not alter the resonant
frequency. Next, S2 is closed and C2 /L2 is tuned to series resonance and minimum
V1 . Since S3 is closed, C2 /L2 are also isolated from succeeding stages of the filter.
Each successive branch is then adjusted in turn, the shunt branches for maximum V1
and the series branches for minimum V1 , with the associated switch being opened or
closed. Since all preceding branches are already resonant, the reactive component of
their net series or shunt impedance is zero, and they are transparent at frequency ω0.
When Ln /Cn have been adjusted, the tuning process is complete. In tuning schemes for
second-order cascade filters, it is normally necessary to provide Q tuning capability.
This is not done when tuning using Dishal's method as described, and so the tuning
process does not completely define the transfer function of the filter. The bandwidth
and ripple in the response are defined by ratios between component values in different
branches of the circuit, whilst the method described above only tunes the inductor
and capacitor in each individual branch in isolation. However, because all branches

are resonant at ω0, the passband is symmetrical, insertion loss is minimized and gross distortion of the frequency response does not occur.

Figure 11.12 LF simulation of LC bandpass filter
Each LC resonator in the prototype is replaced by a two-integrator-loop biquad
with the same 0 . In the LC filter, coupling between resonators occurs because they
are connected together; in the LF filter, this coupling occurs via the feedback paths.
Therefore, the switches in Figures 11.11 and 11.12 perform an equivalent function.
As in the LC prototype, it is only necessary to monitor the test signal amplitude at
one point in the LF circuit, the output of the first integrator, V1 .
A single biquad making up part of the filter in Figure 11.12 is shown in
Figure 11.13(a). This could be implemented as the OTA-C biquad circuit of
Figure 11.13(b). The transfer function of Figure 11.13(a) is
$$H(s) = \frac{R_S}{R}\,\frac{s\,(1/R_S C_1)}{s^2 + s\,(1/R_S C_1) + 1/L_1 C_1}, \qquad \omega_0 = \frac{1}{\sqrt{L_1 C_1}}, \qquad \frac{\omega_0}{Q} = \frac{1}{R_S C_1}, \qquad K_{BP} = \frac{R_S}{R} \qquad (11.13)$$

where R is a scaling resistance. Thus, for Figure 11.13(b) we can write:


$$H(s) = \frac{g_0}{g_3}\,\frac{(g_3/C_1)\,s}{s^2 + (g_3/C_1)\,s + g_1 g_2/C_1 C_2}, \qquad \omega_0 = \sqrt{\frac{g_1 g_2}{C_1 C_2}}, \qquad \frac{\omega_0}{Q} = \frac{g_3}{C_1}, \qquad K_{BP} = \frac{g_0}{g_3} \qquad (11.14)$$

By equating coefficients in Equations (11.13) and (11.14), the circuit of Figure 11.13(b) can be used to generate the transfer function of Figure 11.13(a), with L1 = C2/(g1g2), RS = 1/g3, R = 1/g0. The complete LF filter can be implemented by
cascading a number of these biquads and providing the feedback path connections.
Only the first and final biquads have finite Q (corresponding to the terminating resistors of the prototype ladder network), so transconductor g3 is only required for these
stages. Suitable gain and impedance scaling will yield practical component values.

Figure 11.13 Single-filter biquad section and OTA-C implementation

In order to implement the tuning method, g0–g3 and the bias current sources
are dimensioned so that the ratio between the transconductances remains constant
as Vtune is varied. Similarly, the ratio of C1 /C2 will be preserved with variations in
absolute capacitance. Suppose process variations change all transconductances by a
factor kg and all capacitances by a factor kc . The transfer function of Figure 11.13(b)
then becomes:



$$H'(s) = \frac{g_0}{g_3}\,\frac{(k_g g_3/k_c C_1)\,s}{s^2 + s\,(k_g g_3/k_c C_1) + k_g^2 g_1 g_2/k_c^2 C_1 C_2}, \qquad \omega_0' = \frac{k_g}{k_c}\sqrt{\frac{g_1 g_2}{C_1 C_2}} \qquad (11.15)$$


ω0′ is altered from ω0 by a factor of kg/kc. The effect of tuning the circuit to resonance using Dishal's method is to force ω0′ to the design value ω0 by changing Vtune, and hence the transconductances kg g0–kg g3. This is achieved when kg = kc. Substituting kg = kc into Equation (11.15) gives the original transfer function. Thus, tuning only the pole frequencies of the biquad also restores Q to the original value.
An on-chip tuning system which tunes the pole frequency of a single biquad by
detecting the peak of its amplitude response is described in detail in Reference 11.
This system is shown in elementary form in Figure 11.14. A test signal at ω0 is applied to the biquad input and Vo is rectified. The rectified signal Venv is applied to a peak detector. In the first tuning phase, Vtune is swept through its range by a ramp generator. At the point where the pole frequency of the biquad is equal to ω0, Venv is
a maximum, and this value is stored by the peak detector output Vpk . In the second
tuning phase, Vtune is swept again, and Venv is compared with Vpk . At the point where
both are equal, the control logic opens the switch, causing the current value of tuning
voltage to be stored on Chold , which is again the peak of the amplitude response.
Reference 11 describes a more sophisticated implementation in which the effects of
delays and offsets are cancelled.
This scheme may be extended as in Figure 11.15 to sequentially tune a number of biquads making up the bandpass LF filter. Initially, Vtune1 is adjusted for peak output at V1. To isolate the first biquad from the rest of the filter, Vtune2–VtuneN are initialized to zero, de-biasing the other biquads. After Vtune1 has been adjusted, Vtune2 is tuned for minimum V1. The minimum detector is a peak detector with inverted polarity. The process is repeated with Vtune3–VtuneN until all biquads have been tuned (see the algorithmic sketch below).

Figure 11.14 Simplified peak tuning scheme

Figure 11.15 Tuning scheme extended to an LF bandpass filter
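The control sequence lends itself to a simple algorithmic description. The sketch below is a minimal Python rendering under stated assumptions: measure_v1 is a hypothetical callback standing in for the amplitude detector at V1, and the sweep is an idealized ramp; stage 0 is tuned for maximum V1 and every later stage for minimum V1, with all following stages de-biased (tuning voltage zero).

```python
import numpy as np

def tune_lf_filter(measure_v1, n_biquads, sweep=np.linspace(0.0, 1.0, 256)):
    """Sequentially tune an LF bandpass filter; returns one held Vtune per biquad."""
    v_tune = [0.0] * n_biquads             # all stages start de-biased

    for stage in range(n_biquads):
        best_v, best_val = None, None
        for v in sweep:                    # ramp-generator sweep for this stage
            trial = v_tune[:stage] + [v] + [0.0] * (n_biquads - stage - 1)
            val = measure_v1(trial)
            # stage 0 uses the peak detector (maximum V1); later stages use
            # the inverted-polarity minimum detector
            if best_val is None or (val > best_val if stage == 0 else val < best_val):
                best_v, best_val = v, val
        v_tune[stage] = best_v             # freeze this stage's hold capacitor
    return v_tune
```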
The tuning scheme described above has a number of benefits:
• The test signal is connected to the input node, and the filter response is measured at the input resonator node of the filter throughout the tuning process. This minimizes the number of signal paths that must be added to the filter, minimizing additional circuit parasitics.
• A single test-signal frequency is required, equal to the filter centre frequency. Often a suitable signal will already be available in the system as a carrier signal.
• The tuning system need only detect amplitude maxima and minima; it is not necessary to measure accurate amplitude ratios or phase, reducing possible sources of error in high-frequency applications.

Figure 11.16 Basic PLL synthesizer

11.3 Self-calibration techniques for PLL frequency synthesizers

11.3.1 Need for calibration in PLL synthesizers

PLL frequency synthesizers [16] are a widely used building block for integrated
applications such as wireless transceivers and clock generation for digital systems,
where it is required to generate a precise RF which is a multiple of a relatively low
reference frequency, such as might be obtained from a crystal oscillator. Figure 11.16
shows an elementary PLL synthesizer block diagram. The output signal from a VCO
is divided by an integer factor N by a programmable modulus digital counter. The
divided VCO output frequency is then compared with the reference source fref by
a phase/frequency detector, the output of which provides an error signal, Vtune , that
is applied to the VCO tuning input via the loop filter. The resulting feedback loop
forces the VCO output frequency to become equal to exactly N times the reference
frequency, and the phase difference between the inputs to the phase comparator is
such that the required value of Vtune is maintained at the loop filter output. A variation
on this theme is the fractional-N synthesizer, where an integer relationship is not
required between the reference and output frequencies.
A critical component in the PLL and fractional-N systems is the VCO. These are usually either delay-based ring oscillator circuits, where the delay in the cells making up the ring is controlled by a tuning voltage or current in order to vary the output frequency, or feedback oscillators, where the frequency-determining element is an LC resonator. In LC resonator-based VCOs, tuning is usually achieved using varactor
effects in diodes or diode-connected MOS transistors to achieve a voltage-dependent
capacitance. The presence of phase noise in the VCO output signal is undesirable.

Figure 11.17 Required VCO tuning range

There are usually stringent requirements on the spectral purity of the output signal,
especially the phase noise sidebands around the output frequency or time-domain
jitter in digital clock applications.
The VCO gain KVCO is the gradient of the VCO output frequency versus tuning
voltage function: as illustrated in Figure 11.17, KVCO often varies over the tuning
range of the VCO. The tuning range of a VCO must be large enough to cover the
range of output frequencies required for the synthesizer application, and also to cover
the tolerance on operating frequency resulting from the effect of process variations
on the frequency-determining component values. In many applications, the tolerance
on operating frequency is much larger than the required operating frequency range,
requiring a VCO with a wide tuning range compared to the actual operating frequency
range. Shrinking CMOS geometries results in lower supply voltages, which in turn
reduce the tuning voltage range that is feasible. The combination of small tuning voltage and large output frequency range results in a large value of KVCO being required.
Unfortunately, a VCO with high gain is inherently more noisy than one with low
gain, because a given noise level at the tuning voltage input will result in greater
phase noise in the output signal. LC resonator-based VCOs generally have superior
noise characteristics to relaxation oscillators, but have more restricted tuning ranges,
since the capacitance variation possible with low tuning voltages is restricted. A further problem with varactor-based tuning is that KVCO varies with the tuning voltage,
due to the non-linear relationship between voltage, capacitance and frequency. The
changing VCO gain makes it difficult to optimize the dynamic response of the PLL
feedback loop over the whole tuning range.

11.3.2 PLL synthesizer with calibrated VCO

One approach to realizing a large VCO tuning range while at the same time achieving low VCO gain is to utilize a band-switched VCO circuit such as that shown in
Figure 11.18. In this circuit, a relatively narrow tuning range is provided by applying
Vtune to MOS varactors. Coarse tuning over a wider range is provided by an array of switched capacitors. A digital band-selection word controls capacitor selection via MOS switches, providing discrete coarse tuning steps, as illustrated in Figure 11.19. In this way, a wide tuning range is provided as a series of overlapping narrow bands.

Figure 11.18 VCO with band selection

Figure 11.19 Band-switched VCO tuning range
As well as reducing the required VCO gain, VCO tuning linearity can also be
improved, since by providing sufficient overlap between sub-bands, a relatively linear
portion of the voltage tuning transfer function can be used.
In order to implement this band-switching scheme, the frequency control system
must be able to select the correct sub-band in order to generate the required output



frequency. Because of process variations between chips, and operating environment
changes, different sub-band selections may be required for individual instances of the
VCO, and selection data may require updating over time due to ageing, bias voltage
and temperature changes, and so on. Sub-band selection cannot take place during
normal VCO operation, since this will result in transient changes in output frequency
while the PLL re-establishes a locked condition. Therefore, an automatic on-chip
calibration process is required which takes place after initial power-up of the system
or during periods when the parts of the system requiring the VCO signal are inactive.
A number of such calibration algorithms have been devised, and are described below.

11.3.3 Automatic PLL calibration

Open-loop [17–20] and closed-loop [21, 22] PLL calibration algorithms have been
developed. In open-loop algorithms, the feedback loop is opened between the phase
comparator and the VCO, and fixed reference voltages representing the required upper
and lower limits of the tuning voltage range are applied to the VCO tuning voltage
input. The existing digital counters in the PLL are then used to determine the limits
within which the VCO tuning range lies for particular sub-bands. The tuning control
logic can then select the appropriate VCO sub-band for the required output frequency.
In closed-loop algorithms, the VCO tuning voltage is compared with fixed reference
voltages. If Vtune lies outside the desired tuning voltage range, the calibration logic
increments or decrements the digital tuning word until an acceptable tuning voltage
is obtained.
A closed-loop calibration algorithm is thus capable of continuously updating the
coarse tuning sub-band while the PLL is operating, allowing for continual changes
in operating conditions and PLL output frequency. However, selecting a different
sub-band while the PLL is operating will result in transient changes in VCO output
frequency while phase lock is reacquired. In the integer-N PLL synthesizer, this capture transient has a long duration, since the loop bandwidth is inevitably much smaller
than the reference frequency. Therefore, the PLL output signal may be lost for significant periods when coarse tuning occurs. In open-loop algorithms, the calibration
logic can store the required tuning data, so that the appropriate digital tuning word
can immediately be selected in response to a requirement to change the PLL output
frequency. This minimizes the time required for frequency changes. However, the
calibration process must be repeated when a change in operating conditions results
in changes in VCO output frequency.
A PLL synthesizer including an open-loop calibration scheme is illustrated in
Figure 11.20, and the calibration algorithm by the flow-chart of Figure 11.21. The
required tuning sub-band is identified as the one enabling the whole required output
frequency range to be covered with the available tuning voltage range. The existing
divide-by-N counter, reference source and phase/frequency comparator are used to
compare the VCO output frequency with the required tuning limits. For each sub-band,
the minimum tuning voltage is applied to the VCO tuning input, and the divide-by-N counter is programmed with the value of N corresponding to the lowest required frequency in the tuning range. The frequency at the divider output is compared with the

reference frequency; if it is lower than the reference frequency, the lower limit of the required output frequency range is inside the tuning range of the VCO. If the divider output frequency is higher than the reference frequency, the lowest required output frequency is beyond the lower limit of the VCO tuning range, and a lower-frequency sub-band must be selected in order for the VCO to tune the required frequency range. A similar procedure can be applied to the upper end of the VCO tuning range; the VCO now has the maximum tuning voltage applied, and the divide-by-N counter is programmed with the value of N corresponding to the maximum required output frequency. A frequency at the divider output greater than the reference frequency indicates that the maximum required output frequency is inside the VCO tuning range, while a frequency below the reference frequency indicates that a higher-frequency sub-band must be selected. This algorithm is run iteratively until a sub-band is selected that satisfies both minimum and maximum tuning range requirements.

Figure 11.20 Open-loop VCO calibration

Figure 11.21 Open-loop VCO calibration algorithm
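The flow of Figure 11.21 maps directly onto a small search loop. In the sketch below, the band-switched VCO model (base frequency, band step, fine-tuning gain), the voltage-reference limits and the 2.40–2.48 GHz target range are illustrative assumptions; in hardware the two comparisons are made through the existing divider and phase/frequency comparator rather than on a frequency model.

```python
V_MIN, V_MAX = 0.2, 1.0             # tuning-voltage reference limits (assumed)
F_LOW, F_HIGH = 2.40e9, 2.48e9      # required output frequency range (assumed)

def vco_freq(band, v_tune):
    """Assumed band-switched VCO: 60 MHz per coarse step, 150 MHz/V fine tuning."""
    return 2.30e9 + 60e6 * band + 150e6 * v_tune

def calibrate(n_bands=16):
    band = n_bands // 2
    while 0 <= band < n_bands:
        if vco_freq(band, V_MIN) > F_LOW:       # f > fref at Vmin: lowest channel
            band -= 1                           # unreachable, go down one sub-band
        elif vco_freq(band, V_MAX) < F_HIGH:    # f < fref at Vmax: highest channel
            band += 1                           # unreachable, go up one sub-band
        else:
            return band                         # both range limits are tunable
    raise RuntimeError("no sub-band covers the required range")

print("selected sub-band:", calibrate())        # prints 1 for these assumptions
```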
A PLL synthesizer including a closed-loop calibration scheme is shown in Figure 11.22. The VCO tuning voltage is continuously compared with Vmax and Vmin, representing the limits of allowable tuning voltage excursion. When Vtune moves outside the maximum or minimum limit, the coarse tuning word is incremented or decremented appropriately.

Figure 11.22 Closed-loop VCO calibration algorithm
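For comparison, the closed-loop decision logic is almost trivial; in the sketch below the comparator thresholds and word width are illustrative assumptions, and a larger coarse word is assumed to select a higher-frequency sub-band.

```python
V_MIN, V_MAX = 0.2, 1.0      # comparator thresholds (assumed)

def update_coarse_word(v_tune, word, word_max=15):
    """One calibration decision per comparison cycle of Figure 11.22."""
    if v_tune > V_MAX and word < word_max:
        word += 1            # fine tuning saturating high: step up one sub-band
    elif v_tune < V_MIN and word > 0:
        word -= 1            # saturating low: step down one sub-band
    return word
```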

11.3.4 Other PLL synthesizer calibration applications

As well as VCO tuning range, other parameters in the PLL synthesizer system may
benefit from on-chip calibration schemes. A number of schemes have been described
to improve accuracy of quadrature signal generation, and to minimize jitter in VCO
output.



Many transceiver architectures, such as the low intermediate frequency (IF)
superheterodyne (superhet) or zero-IF (direct conversion) architectures utilize parallel in-phase and quadrature (I/Q) channels in order to suppress unwanted sideband
or image signals [23]. I/Q architectures generally require local oscillator signals
which are maintained in precise phase quadrature; any deviation from quadrature
leads to degradation of unwanted signal suppression. Quadrature signals may be
generated by applying the PLL synthesizer output signal to a separate phase shift
network; but this requires additional signal buffering, chip area and power consumption. A more efficient approach utilizes a quadrature VCO. Quadrature VCO circuits
include ring oscillators with multiple phase outputs, and VCOs using coupled LC
resonators [24, 25]. However, the accuracy of quadrature is limited by component
mismatches between sections of the VCO due to processing variations and circuit
parasitics. Quadrature calibration systems utilize a high-accuracy phase comparator
to detect phase errors between in-phase and quadrature VCO outputs, and a feedback
loop controls the relative phase by, in the case of ring oscillators, differentially tuning
the delay of each cell in the ring [24] or differentially tuning the resonant frequencies
of the tank circuits in the case of LC resonator-based VCOs [25].
As noted above, the phase noise or jitter present in the output signal of the PLL
synthesizer is affected by the parameters of the components making up the feedback
loop of the PLL, in particular the VCO gain. The jitter performance of the PLL as a
whole depends on the coarse tuning sub-band selected. In Reference 26, an on-chip
jitter measurement scheme is used to select the optimum sub-band in order to minimize
jitter. The jitter measurement system could also be utilized for built-in self-test of the
synthesizer. One cause of jitter in the VCO output is supply voltage sensitivity; noise
superimposed on the supply rail modulates the VCO output phase. An on-chip voltage
regulator can be used to reduce the supply noise level, but presents difficulty for low-voltage design since the regulator reduces the available supply voltage for the VCO.
An alternative is the use of a compensation scheme that introduces a supply voltage
dependence of the opposite polarity, resulting in a net-zero VCO supply voltage
sensitivity. However, this type of compensation scheme generally only results in
optimum compensation over a narrow range of supply voltage, the value of which is
subject to process variations. In Reference 27, a calibration system adjusts the VCO
supply voltage to maintain minimum supply voltage sensitivity at the synthesizer
output frequency, minimizing jitter due to this cause.

11.4 On-chip antenna impedance matching

11.4.1 Requirement for on-chip antenna impedance matching

An essential component of any wireless system is the antenna. The antenna is a transducer that converts RF electrical power from a transmitter into an electromagnetic
wave propagating in free space, intercepts electromagnetic waves from a distant transmitter and converts them into electrical signals that are applied to the receiver input.
In order to maximize the transfer of power between transmitter and antenna, and
maximize the signal-to-noise ratio of the received signal, the electrical impedance at

the antenna terminals must match the requirements of the transmitter power amplifier and receiver low-noise amplifier. The degree of impedance mismatch is usually represented by the reflection coefficient, Γ:

$$\Gamma = \frac{Z_{ant} - Z_0}{Z_{ant} + Z_0} \qquad (11.16)$$

where Zant is the impedance of the antenna and Z0 is the load impedance required by the power amplifier. Since both Zant and Z0 may be complex, Γ is also a complex number, with magnitude between zero and one. When Zant and Z0 are equal, that is, perfectly matched, Γ is zero. Antenna design is a compromise between many factors; achieving a desirable impedance must be traded off against radiation pattern,
bandwidth, efficiency and other factors. This is especially so when electrically small
antennas are required (that is, the dimensions of the antenna are small compared to the
operating wavelength), as is the case for many integrated wireless transceiver applications operating in the ultra-high frequency (UHF) range. Typically, the impedance
of such antennas varies rapidly with frequency and also due to environmental effects,
so it is not practical to obtain precise impedance matching either through antenna
design or using fixed impedance-matching networks.
A possible solution to the antenna-impedance-matching problem is to incorporate a tunable impedance-matching network between the transmitter/receiver and the
antenna. This has long been widespread practice in the medium frequency, high
frequency and very high frequency ranges where automatic antenna tuners with
discrete-component LC networks are used in order to achieve the relatively large
inductances and capacitances required at lower frequencies [28]. Recently, integrated on-chip matching networks have been investigated for UHF and microwave
transceiver antennas, since it is feasible to produce the smaller components required
in integrated form. An on-chip antenna tuner consists of three major components
(Figure 11.23):
An adjustable matching network, capable of producing the required impedance
transformation.
An impedance sensor which monitors the voltage and current relationships in the
matching system.
A control system, which includes a tuning algorithm that is capable of adjusting
the matching network component values to optimize the impedance match, in
response to feedback data from the impedance sensor.
Thus, the automatic antenna tuner functions as a feedback system, adjusting the
matching network components to optimize the transformed impedance at the matching
network input. The system can thus respond to changes in antenna impedance that
occur over time; in mobile and hand-held applications, large impedance changes
occur due to relative movements between the antenna and surrounding conducting
or dielectric objects, especially the user's body. The issues involved in the design of
the major components of the automatic antenna tuner are described in the following
sections.

Figure 11.23 Automatic antenna tuner system outline

Figure 11.24 π-network

11.4.2 Matching network

A large number of reactive impedance-transforming networks exist. The π-network of Figure 11.24 is widely used for discrete-component antenna tuners, and has also been proposed for on-chip antenna tuning systems; it is possible in principle to design component values to provide conjugate matching between any input and output impedances.
For the simplified case shown, where the source and load impedances are both resistances, the following design formulae can be used:

$$X_{C1} = \frac{R_1}{Q_0}, \qquad X_{C2} = R_2\sqrt{\frac{R_1/R_2}{Q_0^2 + 1 - (R_1/R_2)}}, \qquad X_L = \frac{Q_0 R_1 + (R_1 R_2/X_{C2})}{Q_0^2 + 1} \qquad (11.17)$$

The parameter Q0 can be selected such that:

$$Q_0 \geq \sqrt{\frac{R_1}{R_2} - 1} \qquad (11.18)$$

Thus, it is possible to realize the required impedance transformation with an infinite number of different sets of component values corresponding to different values of Q0. The π-network design formulae shown here can be extended to the more general case where source and load impedances are complex by absorbing the imaginary component of the impedances into the shunt capacitances [29].

Figure 11.25 Binary SC and inductor arrays
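A worked numerical example of Equations (11.17) and (11.18) is sketched below; the 50 Ω source, 10 Ω load and 2.45 GHz operating frequency are illustrative assumptions, and the reactances are converted into C and L values at the end.

```python
import math

R1, R2 = 50.0, 10.0                  # source and load resistances (assumed, R1 > R2)
f = 2.45e9                           # operating frequency (assumed)
w = 2 * math.pi * f

Q0 = 1.2 * math.sqrt(R1 / R2 - 1)    # comfortably above the minimum of (11.18)

XC1 = R1 / Q0                                               # Equation (11.17)
XC2 = R2 * math.sqrt((R1 / R2) / (Q0**2 + 1 - R1 / R2))     # Equation (11.17)
XL = (Q0 * R1 + R1 * R2 / XC2) / (Q0**2 + 1)                # Equation (11.17)

C1, C2, L = 1 / (w * XC1), 1 / (w * XC2), XL / w
print(f"Q0 = {Q0:.2f}: C1 = {C1 * 1e12:.2f} pF, C2 = {C2 * 1e12:.2f} pF, "
      f"L = {L * 1e9:.2f} nH")
```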
It is not currently feasible to implement variable capacitances and inductances as
integrated components, so switched arrays of fixed components are generally used,
as shown in Figure 11.25, which provide values adjustable in finite steps. There
are practical limitations on the range of impedances that can be matched, and the
accuracy with which matching can be achieved. Extremely large or small values of
load impedance require large values of capacitance and inductance, which occupy
excessive chip area. Thus, for a given matching network, a matching domain exists.
Antenna impedances within this domain can be matched, while those outside the
domain cannot [30, 31]. Inside the matching domain, the accuracy of the impedance
match depends on the resolution of the tuning array. The accuracy with which the
required impedance transformation can be produced depends on the resolution with
which network components can be adjusted; the minimum increment in the values of
C or L in the networks of Figure 11.25 is limited by circuit parasitics. For example,
practical matching networks at around 2.4 GHz might require total capacitances of
the order of picofarads, with inductances of several nanohenries. The layout parasitics
in the capacitor array could be of the order of tens of femtofarads. Thus, the useful
tuning resolution of a capacitor array would be limited to the order of 1 per cent at
this frequency.
The switched inductor of Figure 11.25 is problematic for an on-chip application,
since it is difficult to provide numerous integrated inductors without using excessive
chip area and introducing excessive shunt parasitic capacitance. The series connection
of the switches in the array also leads to a high total switch resistance, increasing overall losses due to the inductance. For limited-bandwidth applications, the π-network
with series tuneable inductor can be replaced with a shunt tuneable capacitor in conjunction with impedance inverting networks as shown in Figure 11.26, leading to the
circuit topology shown in Figure 11.27.
The matching network components inevitably have losses associated with them
that absorb some of the signal power. To maintain low losses in the matching network, the unloaded Q factor of the network components should be large compared

to Q0 in Equation (11.18). Therefore, it is desirable to design matching networks with Q0 as low as possible. Integrated inductors in particular have relatively high losses (unloaded Q typically below ten for standard CMOS technologies). It is quite possible that, for some ranges of impedance, the signal losses in the matching network will exceed those due to the impedance mismatch that would exist without the matching network [32, 33]. A number of designs have therefore favoured off-chip inductors, using bond wires or inductors fabricated within low-temperature co-fired ceramic substrates forming part of the chip packaging.

Figure 11.26 Transformed matching network with shunt tuning capacitor

Figure 11.27 Modified π-network for CMOS on-chip tuner application
At frequencies greater than several gigahertz, wavelengths become short enough
that distributed transmission line matching structures may be considered for use
as matching networks. Switched sections of transmission line provide adjustable
impedance transformation, as shown in Figure 11.28 [34–36]. The matching network consists of an approximately λ/2 section of microstrip transmission line with shunt
impedances connected at intervals. One or more of the shunt impedance branches
may be grounded via the switches. The shunt-connected branches thus behave as
moveable shorted transmission line stubs whose position can be changed by selecting
different switches. A number of matching configurations are possible using the same
network, using single or multiple shunt stubs.
The performance of the switches used for matching network component selection
has a major effect on overall tuning system performance. The choice of switching
technique depends on the IC technology used. CMOS designs typically use NMOS
transistors operating in the triode region [32, 33]. GaAs or HEMT devices have also
been used with appropriate IC technologies for microwave applications [35, 37].

Figure 11.28 Transmission line stub tuner

Active devices used as switches also introduce additional loss and circuit parasitics
and, since they are non-linear, generate harmonics and inter-modulation products.
They also place constraints on the power-handling capability of the matching network
due to their limited breakdown voltages. Active devices perform best when configured
as shunt switches, since the full supply voltage can be applied as bias between the
gate and source electrodes and, since the source is grounded, VGS is not modulated
by the signal voltage, as would be the case for a series switch. This minimizes the
switch-on resistance and reduces production of inter-modulation products due to
switch non-linearity. Selection of switching transistor dimensions is a compromise
between increased resistive losses in switches with small widths and increased shunt
capacitance in larger widths. MEMS have also been proposed as low-loss matching
network switches [34, 36].

11.4.3 Impedance sensors

The impedance sensor provides the automatic tuning control system with feedback to
determine if a satisfactory impedance match has been achieved. Numerous methods
for sensing the impedance match have been used. The simplest scheme is to detect
the amplitude of the transmitted signal at the antenna terminals [33]; at any given
frequency, this amplitude is a function of the transmitter power reaching the antenna,
so maximizing the amplitude also maximizes radiated power. However, the maximum
power condition does not necessarily coincide with the optimum load impedance
conditions for the transmitter power amplifier. Thus, the power amplifier may not
operate at maximum efficiency or minimum distortion levels with this scheme, and at
high power levels may be subjected to electrical over-stress. A phase detector can be
used at the power amplifier output to monitor the phase relationship between the output
voltage and current from the power amplifier. The control system then adjusts the
matching network for minimum phase difference between voltage and current, that is,
making the load impedance at the power amplifier (PA) output resistive. This scheme
does not detect a mismatch in the resistance level. However, for high-Q antenna
structures, such as electrically small loops, the largest proportion of the impedance
mismatch is normally due to the reactive component of the antenna impedance, and
ensuring the antenna system is tuned to resonance in this way results in a substantial
improvement in power transfer [38].
A directional coupler equipped with a detector at the reverse coupled port, connected between the PA and the matching network, provides an output signal which is a
function of the reflection coefficient at the input to the matching network. In this case,



the control system minimizes the detector output, which optimizes the match to the
impedance level for which the coupler is designed. In one application, the tuning network itself has been utilized as a six-port coupler, with the non-linear characteristic
of the switching devices also performing an amplitude detection function, providing
the impedance data for the control system [35].

11.4.4 Tuning algorithms

A major challenge in devising algorithms for automatic antenna tuning is that only
incomplete input data is usually available to the tuning algorithm. As noted above,
in systems where antenna tuning is required, the antenna impedance is usually subject
to large and unpredictable variations. The on-chip matching network itself is also
subject to large uncertainties due to processing variations. In most cases, impedance
sensors cannot provide complete impedance data, only a signal that gives some
indication of the degree of mismatch. Because of these unknowns, it is normal to use
iterative tuning algorithms that attempt to converge on the best combination of
network component values. Another important consideration for tuning algorithms is
speed. The tuning process causes amplitude and phase modulation of the radiated
signal as matching network component values are changed [33], so transmitted data
may be corrupted if tuning occurs during transmission. It is therefore desirable to
perform tuning during idle periods, or at least to minimize loss of data by minimizing
the duration of the tuning process.
In systems where the number of possible matching network component combinations
is relatively small, it may be feasible to search all combinations exhaustively to find
the one producing the optimum impedance match. However, when a large number of
combinations exist, or the time available for tuning is restricted, algorithms are
required that minimize the number of combinations that must be tested. One approach
is to generate a look-up table of matching network settings for different operating
frequencies during an initialization phase; the control system then selects the
appropriate tuning data from the table as the operating frequency changes. This
scheme achieves rapid tuning, but it cannot respond to variations in antenna
impedance that occur over time without repeating the initialization process.
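A minimal sketch of the look-up-table approach is shown below. The tuner interface, the scalar mismatch reading and the idea of storing one switch code per channel frequency are assumptions for illustration rather than a description of any published implementation.

# Minimal look-up-table tuning sketch. The tuner control interface, the
# scalar mismatch reading and the stored codes are hypothetical.
lut = {}   # channel frequency (Hz) -> best matching-network switch code

def initialize(frequencies, codes, measure_mismatch):
    """One-off calibration sweep: characterize every code at every channel."""
    for f in frequencies:
        lut[f] = min(codes, key=lambda code: measure_mismatch(f, code))

def tune(f):
    """Run-time retuning: a table read instead of a search."""
    nearest = min(lut, key=lambda k: abs(k - f))
    return lut[nearest]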
Functional tuning algorithms have been developed that use impedance sensor outputs
as feedback in order to converge iteratively on the optimum matching network values.
Rapid convergence is facilitated by an impedance sensor that provides both amplitude
and phase information to the tuning algorithm, but such sensors are more complex.
Simple sensors typically provide only amplitude information, so the algorithm must
proceed partly by trial and error: no feedback is available on the relative magnitude
and phase of the antenna impedance, only on the degree of mismatch. Genetic
algorithms [39] have been applied to this type of tuning; initially, the system must
perform many iterations to achieve a satisfactory impedance match, but with continued
operation the genetic algorithm adapts to the system and to changing antenna
parameters, without requiring explicit impedance-matching rules.
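The following is a compact sketch of genetic search over discrete tuner codes in the spirit of [39]. The control-word width, population size and mutation rate are illustrative assumptions, and the only feedback the search uses is a scalar mismatch reading, as would be available from a simple amplitude-only sensor.

import random

N_BITS = 8            # tuner control-word width (assumed)
POP, GENS = 12, 30    # small population and generation counts for the sketch

def evolve(mismatch):
    """Minimize a scalar mismatch reading over codes 0 .. 2**N_BITS - 1."""
    pop = [random.getrandbits(N_BITS) for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=mismatch)          # lowest mismatch (fittest) first
        survivors = pop[:POP // 2]
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_BITS)
            child = (a & ~((1 << cut) - 1)) | (b & ((1 << cut) - 1))  # crossover
            if random.random() < 0.2:                  # occasional mutation
                child ^= 1 << random.randrange(N_BITS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=mismatch)

# Toy stand-in for the impedance sensor: best match at an arbitrary code.
print("converged on code", evolve(lambda code: abs(code - 173)))

In a real tuner, the lambda above would be replaced by a reading of the on-chip mismatch detector at the candidate switch setting, so each fitness evaluation costs one sensor measurement.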

11.5 Conclusions

The variability of integrated components is often a challenge to analogue and
mixed-signal IC design and, as described in the preceding sections, in many instances
demands on-chip tuning systems to achieve analogue functions with the required
precision. On-chip tuning is therefore an essential feature in the implementation of
high-performance analogue signal processing for mixed-signal SoCs, and in many
cases the design of a satisfactory tuning system represents a substantial proportion
of the overall circuit design challenge.

Most continuous-time filters require on-chip tuning in order to achieve the required
response with adequate accuracy; tuning system design is therefore an integral part
of the overall filter design. Integrated continuous-time filters with on-chip tuning are
widely deployed in wireless transceivers, digital broadcast receivers, cable modems,
hard disk drive read channels and many other applications. Currently available tuning
techniques, such as those described in Section 11.2, give satisfactory performance for
lowpass and low-Q bandpass designs at frequencies up to hundreds of megahertz.
The continual demand for increased bandwidth, combined with the demand for
improved RF performance, challenges filter and tuning system design, particularly
for very-high-frequency and high-Q designs, where circuit parasitics and non-ideal
device behaviour dominate filter performance.
The PLL frequency synthesizer is also a very widely deployed sub-system, with a
huge range of applications in signal and clock generation. Within the PLL, the VCO is
the most critical analogue component, having a major impact on the phase noise and
jitter of the synthesizer output. There is continual demand to increase operating
frequencies and to improve spectral purity. The calibration techniques described in
Section 11.3 make a valuable contribution by optimizing VCO performance.

On-chip automatic antenna tuning is not yet widely deployed in integrated wireless
systems. However, with the trend towards multi-standard, multi-band, wide-bandwidth
operation and continual pressure to reduce antenna size while retaining overall
power efficiency, antenna tuning systems are likely soon to become useful or
essential. Significant challenges remain in providing low-loss matching network
components and switching while achieving adequate power handling. Tuning
algorithm design also remains an area for further study.

11.6 References

1 Nimmo, R.: Analogue electronics, the poor relation?, Proceedings of IEE Symposium on Analogue Signal Processing, Oxford, 1 November 2000, pp. 1/1–1/5
2 Deliyannis, T., Sun, Y., Fidler, J.K.: Continuous-Time Active Filter Design (CRC Press, Boca Raton, FL, 1999)
3 Sun, Y., Fidler, J.K.: Structure generation and design of multiple loop feedback OTA-grounded capacitor filters, IEEE Transactions on Circuits and Systems-I, 1997;44(1):1–11
4 Sun, Y.: Design of High Frequency Integrated Analogue Filters (The Institution of Electrical Engineers, London, 2002)
5 Banu, M., Tsividis, Y.: Fully integrated active RC filters in MOS technology, IEEE Journal of Solid State Circuits, 1983;18(6):644–51
6 Kuhn, W.B., Elshabini-Riad, A., Stephenson, F.W.: A new tuning technique for implementing very high Q, continuous-time, bandpass filters in radio receiver applications, Proceedings of IEEE ISCAS 94, 30 May–5 June 1994, Vol. 5, pp. 257–60
7 Krummenacher, F., Joehl, N.: A 4 MHz CMOS continuous-time filter with on-chip automatic tuning, IEEE Journal of Solid State Circuits, 1988;23(3):750–8
8 Shi, B., Shan, W., Andreani, P.: A 57 dB image band rejection CMOS Gm-C polyphase filter with automatic frequency tuning for Bluetooth, Proceedings of IEEE ISCAS 2002, 2002, Vol. 5, pp. 169–72
9 Yamazaki, H., Oishi, K., Gotoh, K.: An accurate center frequency tuning scheme for 450 kHz CMOS Gm-C bandpass filters, IEEE Journal of Solid State Circuits, 1999;34(12):1691–7
10 Pham, T.K., Allen, P.E.: A design of a low-power, high-accuracy, constant-Q-tuning continuous-time bandpass filter, Proceedings of IEEE ISCAS 2002, 2002, Vol. 4, pp. 639–42
11 Karsilayan, A.I., Schaumann, R.: Mixed-mode automatic tuning scheme for high-Q continuous-time filters, IEE Proceedings Circuits, Devices and Systems, 2000;147(1):57–64
12 Linares-Barranco, B., Serrano-Gotarredona, T.: A loss control feedback loop for VCO stable amplitude tuning of RF integrated filters, Proceedings of IEEE ISCAS 2002, 2002, Vol. 1, pp. 521–4
13 Li, D., Tsividis, Y.: Design techniques for automatically tuned gigahertz-range active LC filters, IEEE Journal of Solid State Circuits, 2002;37(8):967–77
14 Moritz, J.R., Sun, Y.: 100 MHz, 6th order leapfrog Gm-C high Q bandpass filter and on-chip tuning scheme, Proceedings of IEEE ISCAS 2006, Kos, Greece, 21–24 May 2006, pp. 2381–4
15 Dishal, M.: Alignment and adjustment of synchronously tuned multiple resonant circuit filters, Electrical Communication, 1952;154–64
16 Kroupa, V.F.: Phase Lock Loops and Frequency Synthesis (Wiley, Chichester, 2003)
17 Wilson, W.B., Moon, U.-K., Lakshmikumar, K.R., Dai, L.: A CMOS self-calibrating frequency synthesiser, IEEE Journal of Solid State Circuits, 2000;35(10):1437–44
18 Lee, K.-S., Sung, E.-Y., Hwang, I.-C., Park, B.-H.: Fast AFC technique using a code estimation and binary search algorithm for wideband frequency synthesis, Proceedings of IEEE ESSCIRC 2005, Grenoble, France, September 2005, pp. 181–4
19 Lin, T.-H., Lai, Y.-J.: An agile VCO frequency calibration technique for a 10-GHz CMOS PLL, IEEE Journal of Solid State Circuits, 2007;42(2):340–9
20 Lee, S.T., Fang, S.J., Allstot, D.J., Bellaouar, A., Fridi, A.R., Fontaine, P.A.: A quad-band GSM-GPRS transmitter with digital auto-calibration, IEEE Journal of Solid State Circuits, 2004;39(12):2200–14
21 Aktas, A., Ismail, M.: CMOS PLL calibration techniques, IEEE Circuits and Devices Magazine, 2004;20:6–11
22 Lin, T.-H., Kaiser, W.J.: A 900-MHz 2.5-mA CMOS frequency synthesiser with an automatic SC tuning loop, IEEE Journal of Solid State Circuits, March 2001;36(3):424–31
23 Razavi, B.: Challenges in portable RF transceiver design, IEEE Circuits and Devices Magazine, 1996;12(5):12–25
24 Park, C.-H., Kim, O., Kim, B.: A 1.8 GHz self-calibrated phase-locked loop with precise I/Q matching, IEEE Journal of Solid State Circuits, May 2001;36(5):777–83
25 Ahn, H.K., Park, I.-C., Kim, B.: A 5-GHz self-calibrated I/Q clock generator using a quadrature LC-VCO, Proceedings of IEEE ISCAS 2003, Bangkok, Thailand, 25–28 May 2003, pp. I-797–I-800
26 Ali, S., Margala, M.: A 2.4-GHz auto-calibration frequency synthesiser with on-chip built-in-self-test solution, Proceedings of IEEE ISCAS 2006, Kos, Greece, May 2006, pp. 4651–4
27 Wu, T., Mayaram, K., Moon, U.-K.: An on-chip calibration technique for reducing supply voltage sensitivity in ring oscillators, Digest of Technical Papers, IEEE 2006 Symposium on VLSI Circuits, Hawaii, June 2006, pp. 102–3
28 Moritz, J.R., Sun, Y.: Frequency agile antenna tuning and matching, Proceedings of 8th International IEE Conference on HF Radio Systems and Techniques, 2000 (IEE Conf. Publ. no. 474), Bath, UK, 10–13 July 2000, pp. 169–74
29 Sun, Y., Fidler, J.K.: Design method for impedance matching networks, IEE Proceedings Circuits, Devices and Systems, 1996;143(4):186–94
30 Sun, Y., Fidler, J.K.: Component value ranges of tuneable impedance matching networks in RF communications systems, Proceedings of IEE Conference on HF Radio Systems and Techniques, Leicester, UK, 7–10 July 1997, Conference publication no. 411, pp. 185–9
31 Sun, Y., Fidler, J.K.: Determination of the impedance matching domain of passive LC ladder networks: theory and implementation, Journal of the Franklin Institute, 1996;333(B)(2):141–55
32 Chamseddine, A., Haslett, J.W., Okoniewski, M.: CMOS silicon-on-sapphire tunable matching networks, EURASIP Journal on Wireless Communications and Networking, Vol. 2006, pp. 1–11
33 Sjoblom, P., Sjoland, H.: An adaptive impedance tuning CMOS circuit for ISM 2.4-GHz band, IEEE Transactions on Circuits and Systems-I, June 2005;52(6):1115–24
34 Deve, N., Kouki, A.B., Nerguizian, V.: A compact size reconfigurable 1–3 GHz impedance tuner suitable for RF MEMS applications, Proceedings of the 16th IEEE International Conference on Microelectronics, Nis, Serbia and Montenegro, 6–8 December 2004, pp. 101–4
35 de Lima, R.N., Huyart, B., Bergeault, E., Jallet, L.: MMIC impedance matching system, Electronics Letters, 2000;36(16):1393–4
36 Lange, K.L., Papapolymerou, J., Goldsmith, C.L., Malczewski, A., Kleber, J.: A reconfigurable double stub tuner using MEMS devices, Proceedings of IEEE MTT-S, 2001, Vol. 1, pp. 337–40
37 McIntosh, C.E., Pollard, R.D., Miles, R.E.: Novel MMIC source-impedance tuners for on-wafer microwave noise-parameter measurements, IEEE Transactions on Microwave Theory and Techniques, February 1999;47(2):125–31
38 Zolomy, A., Mernyei, F., Erdelyi, J., Pardoen, M., Toth, G.: Automatic antenna tuning for RF transmitter IC applying high Q antenna, Proceedings of IEEE Radio Frequency Integrated Circuits Symposium, Fort Worth, TX, June 2004, pp. 501–4
39 Sun, Y., Lau, W.K.: Automatic impedance matching using genetic algorithms, Proceedings of IEE Conference on Antennas and Propagation, York, UK, August 1999

Index

active filters 348–65
  tuning
    amplitude peak detection 358–9, 363–5
    frequency tuning 348–52, 355–9
    master–slave tuning 354–5
    online and offline tuning 352–3
    Q tuning 348–52, 359–60
    transient response method 357–8
active-RC filters
  bypassing method
    bandwidth broadening 181–3
    switched opamp techniques 186–7
  oscillation-based test (OBT) 193–6
adaptive test control and collection 172–3
A/D converters: see analogue-to-digital (A/D) converters
admittance-function-based parameter identification 87
algebraic methods, symbolic analysis 40
ambiguity groups 42, 47–57
  singular-value decomposition approach 52–7
amplitude-and-phase detector (APD) 311–12
analogue filters 180–212
  bypassing method 181–7
    bandwidth broadening 181–6
    switched opamp techniques 186–8
  high order 201–10
    bypassing method 202–3
    cascade filters 203–5
    multiplexing technique 203–7
    OBT structures 207–10
    OTA-C filters 205–7
    switched opamp technique 202–3
  multiplexing technique 188–92, 203–7
  oscillation-based test (OBT) 192–201, 207–10
analogue-to-digital (A/D) converters 213–34
  beat frequency testing 221
  built-in self-test 228–31
  dynamic performance parameters 218–20
    test methodology 226–8, 230
  envelope testing 221
  feedback-loop test methodology 223–4, 229
  frequency domain test methodology 226–7
  histogram test methodology 224–6, 229
  signal capture 221–2
  sine-wave fitting test methodology 228
  static performance parameters 216–18
    test methodology 221–6, 229
  test set-up 220–1
  transfer characteristics 214–20
antennas
  impedance matching 371–7
  impedance sensors 376–7
  matching network 373–6
  tuning algorithms 377
artificial neural network (ANN)-based approaches: see neural-network-based approaches
automatic test equipment (ATE) interface 142, 145, 174
backward propagation neural networks (BPNNs) 85–7, 88–90
bilinear decomposition of fault equations 59–62
binary partition tree (BPT) 124, 126–7
BIST: see built-in self test (BIST)
branch-fault diagnosis: see k-branch-fault diagnosis method
built-in self test (BIST) 141–78
  analogue-to-digital (A/D) converters 228–31
  background 142–3
  digital signal processing (DSP)-based measurement 143
    analogue-to-digital (A/D) converters 230
    architecture 144–5, 166–72
    calibration techniques for TMU and TDC 164–6
    coherent sampling 169
    crosstalk 169–70
    jitter measurement 160–2
    oscilloscope/curve tracing 168
    radio frequency (RF) testing 171–2
    signal capture 151–4
    signal generation 146–50
    supply/substrate noise 170–1
    time domain reflectometry/transmission 169
    timing measurements 154–64
  frequency-response characterization 311
  hierarchical/decomposition techniques 116–19, 121–3
  phase-locked loops (PLLs) 163–4, 301–6
  radio frequency (RF) testing 171–2
    wireless transceivers 314–19, 333–42
  sigma-delta (Σ-Δ) converters 255–9
  test set-up 142–5
bypassing method
  bandwidth broadening 181–6
  high order filters 202–3
  switched opamp techniques 186–8, 202–3
calibration techniques
  phase-locked loop (PLL) frequency synthesizers 365–71
  time measurement unit (TMU) 164–6
  time-to-digital converter (TDC) 164–6
canonical ambiguity group 42, 48–50
  singular-value decomposition approach 52–7
cascade filters, multiplexing technique 203–5
charge-pump semi-digital phase-locked loops (CP-PLL): see phase-locked loops (PLLs)
class-fault diagnosis 15–21
  general algebraic method 16–17
  topological technique 18–21
code bin width 214–15
comparator offset 172
component connection model (CCM) 117–19, 121–3
component selection 49–50
concurrent testing 145
continuous-time active-RC filters: see active-RC filters
CP-PLLs: see phase-locked loops (PLLs)
crosstalk 169–70
cutset-fault diagnosis: see k-cutset-fault diagnosis
DAG (directed acyclic graph) 125–7, 128
DDFS (direct digital frequency synthesis) 146–7
definitions 114–15
delay locked loop (DLL) 158, 163–4
design for manufacturability 173
design for testability (DfT) 143, 179–80
  analogue filters
    bypassing method 181–7
    multiplexing technique 188–92
  k-fault diagnosis methods 6–8
  model-based testing 261–2
  oscillation-based test (OBT) 192–201
  phase-locked loops (PLLs) 298
  see also built-in self test (BIST)
diagnosis definitions 114–15
differential sensitivity analysis 30
digital resonators 147–8
digital signal processing (DSP)-based measurement 143
  analogue-to-digital (A/D) converters 230
  architecture 144–5, 166–72
  calibration techniques for TMU and TDC 164–6
  jitter measurement 160–2
  signal capture 151–4
  signal generation 146–50
  timing measurements 154–64
digitization, signal capture 151, 153–4
direct digital frequency synthesis (DDFS) 146–7
directed acyclic graph (DAG) 125–7, 128
DLL (delay locked loop) 158, 163–4
DSP: see digital signal processing (DSP)-based measurement
effective number of bits (ENOB) 247
embedded test techniques: see built-in self test (BIST)
equivalent time sampling 221
eye-opening monitor (EOM) 161
fast Fourier transform (FFT) 248–54
fault clustering/collapsing 115–16
fault compensation source method 24–6
fault detection (FD) defined 114
fault diagnosis definitions 114–15
fault dictionary method 1–2, 116
  neural-network-based: see neural-network-based approaches
  test node selection 29
fault grouping 115–16
fault incremental circuits 3–4
  non-linear circuits 21–4
fault location/fault isolation (FI) defined 114
fault observability concept 30
fault tree selection 14
fault value evaluation defined 114–15
fault verification method 2
FFT (fast Fourier transform) 248–54
filters: see active filters; analogue filters
Fleisher–Laker SC biquad filter 199–201
four opamp biquad high-pass filter 99–100
fractional-N synthesizer 365–70
frequency domain approach 30
frequency-response characterization system (FRCS) 311–23
  BIST implementation 314–19
  experimental evaluation 319–23
  operating principle 311–13
  testing methodology 313–14
frequency synthesizers, self-calibration 365–71
genetic algorithms 30
global ambiguity groups 49–50, 52
HABIST 229
hierarchical techniques 121–39
  extensions using the self-test algorithm 121–3
  large-scale circuit fault diagnosis 31–2
  mixed SBT/SAT approaches 135–7
  neural-network-based approaches 130–1
  Newton–Raphson-based approach 136–7
  simulation-after-test (SAT) 121–31
  simulation-before-test (SBT) 129–30, 131–5
  symbolic analysis 124–30
IFA (inductive fault analysis) 115–16
impedance matching, on-chip antennas 371–7
  impedance sensors 376–7
  matching network 373–6
  tuning algorithms 377
incremental sensitivity analysis 30
inductive fault analysis (IFA) 115–16
intellectual properties (IPs) 144–5
interpolation-based time-to-digital converter (TDC) 155
IPs (intellectual properties) 144–5
jitter measurement
  analogue-based device 160–2
  phase-locked loops (PLLs) 164, 283–7, 295, 298–300, 306
  Vernier delay line 158, 159
Katznelson-type algorithm 73
k-branch-fault diagnosis method 4–5
  bilinear function 8–9
  design for testability 6–8
  multiple excitation method 8–9
  testability analysis 6–8
k-cutset-fault diagnosis 12–14
  branch-fault diagnosis equations 14
  loop- and mesh-fault diagnosis 14
  tree selection 14
Kerwin–Huelsman–Newcomb (KHN)
  biquad filter
    multiplexing technique 189–90
    oscillation-based test (OBT) 198–9
  state-variable filter
    oscillation-based test (OBT) 193–4
k-fault diagnosis methods 3–36
  class-fault diagnosis 15–21
  fault incremental circuit 3–4
  non-linear circuits 22–6
  recent advances 29–32
  relation of branch-, node- and cutset-fault diagnosis 14
  test node selection 29–30
  tolerance effects and treatment 15
  see also k-branch-fault diagnosis method; k-cutset-fault diagnosis; k-node-fault diagnosis
KHN: see Kerwin–Huelsman–Newcomb (KHN)
k-node-fault diagnosis 9–12
  parameter identification 10–12
L1-norm optimization approach 84, 100–10
  illustrative example 109–10
  neural network application 105–9
ladder-based filters
  multiplexing technique 205
  tuning 361–5
large-scale circuit fault diagnosis
  background 113–21
  hierarchical techniques 31–2, 121–37
    mixed SBT/SAT approaches 135–7
    simulation-after-test (SAT) 121–31
    simulation-before-test (SBT) 129–30, 131–5
  neural-network-based approach 90, 92–4
leapfrog (LF) filters, tuning 360–5
linear programming neural networks (LPNN) 105–6
loop- and mesh-fault diagnosis 14
MADBIST (mixed-analogue-digital BIST) 144
manufacturable-by-construction design 173
MARS (multi-variate adaptive regression splines) 136
memory-based signal generation 148–9
mixed-analogue-digital BIST (MADBIST) 144
MLF: see multiple loop feedback (MLF) filters
modified nodal analysis (MNA) 124–5
MOSFET-C filters 180
multiple-fault diagnosis: see k-fault diagnosis methods
multiple loop feedback (MLF) filters
  bypassing method 203
  multiplexing technique 205–7
  tuning 360–5
multiplexing technique 188–92, 203–7
multi-tone signal generation 149–50, 256–7
multi-variate adaptive regression splines (MARS) 136
mutual exclusive (MUTEX) circuit 162
neural-network-based approaches 31, 83–100, 130–1
  artificial neural networks 84–94
  L1-norm optimization 84, 105–10
  wavelet neural networks 94–100
Newton–Raphson-based approach 62–7, 136–7
node-fault diagnosis: see k-node-fault diagnosis
noise effects
  analogue-to-digital (A/D) converters 219–20
  sigma-delta (Σ-Δ) converters 236–8, 247
  supply/substrate noise 170–1
  wavelet-based neural-network technique 94–6
non-linear circuits
  bilinear function for k-fault parameter identification 25–6
  fault incremental circuits 21–4, 26
  fault location and identification 24–6
  fault modelling 21–4, 26
  L1-norm optimization approach 84, 100–10
  mixed-fault incremental circuit 26–7
  quasi-fault incremental circuit 26–7
  symbolic function approach 71–7
    piecewise linear (PWL) models 72–3
    SAPDEC application 74–7
    transient analysis models for reactive components 73
  testability analysis 57
  two-step diagnosis methods 28–9
non-linear constrained optimization 106–7
non-linear regression models 136
OBT (oscillation-based test) 192–201, 207–10
on-chip testing: see built-in self test (BIST)
open architecture 173
optimization-based identification technique 2
oscillation-based test (OBT) 192–201, 207–10
OTA-C filters
  multiplexing technique 190–2
  oscillation-based test (OBT) 196–9, 207–10
  tuning 349–52
parallel testing 141, 145
parametric fault diagnosis 2, 37–8, 59–71
  admittance-function-based parameter identification 87
  bilinear decomposition of fault equations 59–62
  bilinear function 25–6
  k-node-fault diagnosis 10–12
  Newton–Raphson-based approach 62–7
  non-linear circuits 84
  phase-locked loops (PLLs) 287–97
  test frequency selection 67–71
  see also L1-norm optimization approach; symbolic function approach
phase frequency detectors 279, 306
phase-locked loops (PLLs) 163–4, 277–307
  architecture 277–82
  capture and lock range measurements 303
  charge pump and loop filter configuration 280–1
  charge pump current measurement 295–6
  digital structures 282
  fault models 283
  frequency lock test (FLT) 288–90
  frequency synthesizers, calibration 365–71
  gain and linearity measurement 297, 302–3
  jitter measurement 283–7, 295, 298–300, 306
  lock range and capture range measurement 288
  on-chip filter frequency tuning 356–7
  operational-parameter-based measurements 287–97
  operation and test issues 277–82
  phase frequency detector 279, 306
  phase transfer function monitoring 292–5, 303–5
  production focussed tests 298–300
  step response test 290–2
  structural decomposition tests 295–7
  test issues 277–82
  test parameters 282–3
  transient response monitoring 288–92
  voltage controlled oscillator (VCO) 281–2, 297
piecewise linear (PWL) models 30, 71–7
PLL: see phase-locked loops (PLLs)
PWL (piecewise linear) models 30, 71–7
quantization noise 236–8, 247
quasi-fault incremental circuit 26–7
radio frequency (RF): see RF testing
RC filters: see active-RC filters
reactive components, transient analysis models 73
RF amplitude detectors (RFD) 324–30
RF testing 149, 171–2
RF wireless transceivers 309–45
  amplitude detector method 324–33
  frequency-response characterization 311–23
    BIST implementation 314–19
    experimental evaluation 319–23
    operating principle 311–13
    testing methodology 313–14
  gain and compression point measurement 327–33
  sequence of testing 310
  simulation results 339–42
  switched loop-back architecture 333–7
  testing strategy 337–9
Sallen-Key band-pass filter
  oscillation-based test (OBT) 195–6
  parametric fault diagnosis 64–7
  testability analysis 50–1
  wavelet-based neural-network technique 98–100
sampled data switched-capacitor (SC) filters: see SC filters
sampling-offset TDC (SOTDC) 165
SAPDEC (Symbolic Analysis Program for Diagnosis of Electronic Circuits) 72, 74–7
SAPWIN 40–1, 46, 55, 56
SAT: see simulation-after-test (SAT)
SBT: see simulation-before-test (SBT)
SC filters
  bypassing method
    bypassing by bandwidth broadening 183–6
    bypassing using duplicated/switched opamp 187
  oscillation-based test (OBT) 199–201
self-test (ST) algorithm 116–19
  hierarchical/decomposition techniques 121–3
  see also built-in self test (BIST)
sensitivity analysis 30, 120–1
  hierarchical techniques 124, 127–30
sequence of expressions (SOE) 124, 128–9
sigma-delta (Σ-Δ) converters 235–76
  architecture 239–42
  behavioural model 264–71
  built-in self test (BIST) 255–9, 262–71
  defect-oriented testing 258–9
  design for testability (DfT) 261–2
  digital filtering and decimation 238–9
  dynamic performance parameters 246–8
  first-order modulators 240, 241
  functional testing 254–5, 256–8
  high-order modulators 241–2
  histogram testing 246
  model-based testing 259–71
  performance characterization 243–54
  polynomial model 262–4
  principle of operation 236–8
  quantization noise 247
  servo-loop method 246
  spectral analysis technique 248–54
  static performance parameters 244–6
signal capture 151–4
  analogue-to-digital (A/D) converters 221–2
  complete on-chip test core 168
  digitization 151, 153–4
  undersampling 152–3, 154
signal generation 143–5, 146–50
  area overhead 150
  complete on-chip test core 168
  direct digital frequency synthesis (DDFS) 146–7
  memory-based 148–9
  multi-tones 149–50
  oscillator-based approaches 147–8
simulation-after-test (SAT) 37, 59, 116–21
  hierarchical techniques 121–31
  self-test (ST) algorithm 116–19, 121–3
  sensitivity analysis 30, 120–1
  see also fault verification method; parametric fault diagnosis; symbolic analysis
simulation-before-test (SBT) 37, 58–9, 115
  symbolic function approach 129–30
  see also fault dictionary method
singular-value decomposition (SVD) approach 52–7
SoC: see systems on chip (SoCs)
SOTDC (sampling-offset TDC) 165
spectral analysis technique, sigma-delta (Σ-Δ) converters 248–54
standardized test platforms 173
statistical process control 172–3
structural test: see built-in self test (BIST)
successive approximation register (SAR) 151
switched-capacitor (SC) filters: see SC filters
switched opamp techniques 186–8, 202–3
symbolic analysis 30–1, 39–41
  hierarchical techniques 124–30
Symbolic Analysis Program for Diagnosis of Electronic Circuits (SAPDEC) 72, 74–7
SYmbolic FAult Diagnosis (SYFAD) 46, 50
symbolic function approach 31
  bilinear decomposition of fault equations 59–62
  non-linear circuits 57, 71–7
    piecewise linear (PWL) models 72–3
  parametric fault diagnosis
    Newton–Raphson-based approach 62–7
    test frequency selection 67–71
  symbolic analysis 30–1, 39–41, 124–30
  testability analysis 44–57
    ambiguity groups 47–52
    non-linear circuits 57
    singular-value decomposition (SVD) 52–7
  transient analysis models for reactive components 73
systems on chip (SoCs) 141, 144–5, 347
TAGA (Testability and Ambiguity Group Analysis) 55–6
TDC (time-to-digital converter) 155–6, 157, 164–6
TDR/TDT (time domain reflectometry/transmission) 169
TEI (test error index) 68–9
testability analysis 41–57
  ambiguity groups 42, 47–52
  evaluation algorithms 42–7
  k-fault diagnosis methods 6–8
  numerical approach 43–4
  symbolic approach 44–7
    ambiguity groups 47–52
    non-linear circuits 57
    singular-value decomposition (SVD) 52–7
Testability and Ambiguity Group Analysis (TAGA) 55–6
testable groups 49–51, 52
test control 172–3
test costs 141
test error index (TEI) 68–9
test node selection 28–9
test points 41, 46
test signal generation 29–30
time amplification 162–3
time domain approach 30
time domain reflectometry/transmission (TDR/TDT) 169
time measurement unit (TMU) 156, 164–6
time shuffling: see equivalent time sampling
time-to-digital converter (TDC) 155–6, 157, 164–6
time-to-voltage converter 155
timing measurements 154–9
  analogue-based interpolation techniques 155–6
  calibration techniques 164–6
  digital phase-interpolation techniques 156–7
  jitter measurement 159–62
  single counter 154
  time amplification 162–3
  Vernier delay line 157–9
TMU (time measurement unit) 156, 164–6
tolerance effects and treatment 15, 31, 115, 347–8
  see also sensitivity analysis
topological methods
  class-fault diagnosis 18–21
  symbolic analysis 40–1
Tow-Thomas (TT)
  band-pass filter 56–7
  biquad filter
    multiplexing technique 188–9, 190–2
    oscillation-based test (OBT) 194–5, 197–8
transient analysis models, reactive components 73
TT: see Tow-Thomas (TT)
two-integrator loop biquad filter 196–7, 201
two-stage common emitter (CE) audio amplifier 69–71
undersampling 152–3, 154
Vernier delay line 157–9
voltage controlled oscillator (VCO)
  gain and linearity measurement 297
  phase-locked loops (PLLs) 281–2, 365–71
wavelet-based neural-network technique 31, 83–4, 94–100
  algorithm for fault diagnosis 97–8
  example circuits and results 98–100
  feature extraction of noisy signals 95–6
  four opamp biquad high-pass filter 99–100
  wavelet decomposition 94–5
  wavelet neural networks 96–8
wavelet packet decomposition 30
wireless transceivers: see RF wireless transceivers
