Dedication

Contents

Preface
List of contributors

1
1.1  Introduction
1.2  Multiple-fault diagnosis of linear circuits
     1.2.1  Fault incremental circuit
     1.2.2  Branch-fault diagnosis
     1.2.3  Testability analysis and design for testability
     1.2.4  Bilinear function and multiple excitation method
     1.2.5  Node-fault diagnosis
     1.2.6  Parameter identification after k-node fault location
     1.2.7  Cutset-fault diagnosis
     1.2.8  Tolerance effects and treatment
1.3  Class-fault diagnosis of analogue circuits
     1.3.1  Class-fault diagnosis and general algebraic method for classification
     1.3.2  Class-fault diagnosis and topological technique for classification
     1.3.3  t-class-fault diagnosis and topological method for classification
1.4  Fault diagnosis of non-linear circuits
     1.4.1  Fault modelling and fault incremental circuits
     1.4.2  Fault location and identification
     1.4.3  Alternative fault incremental circuits and fault diagnosis
1.5  Recent advances in fault diagnosis of analogue circuits
     1.5.1  Test node selection and test signal generation
1.6  Summary
References

2
2.1  Introduction
2.2  Symbolic analysis
     2.2.1  Symbolic analysis techniques
     2.2.2  The SAPWIN program
2.3  Testability and ambiguity groups
     2.3.1  Algorithms for testability evaluation
     2.3.2  Ambiguity groups
     2.3.3  Singular-value decomposition approach
     2.3.4  Testability analysis of non-linear circuits
2.4  Fault diagnosis of linear analogue circuits
     2.4.1  Techniques based on bilinear decomposition of fault equations
     2.4.2  Newton–Raphson-based approach
     2.4.3  Selection of the test frequencies
2.5  Fault diagnosis of non-linear circuits
     2.5.1  PWL models
     2.5.2  Transient analysis models for reactive components
     2.5.3  The Katznelson-type algorithm
     2.5.4  Circuit fault diagnosis application
     2.5.5  The SAPDEC program
2.6  Conclusions
2.7  References

3
3.1  Introduction
3.2  Fault diagnosis of analogue circuits with tolerances using artificial neural networks
     3.2.1  Artificial neural networks
     3.2.2  Fault diagnosis of analogue circuits
     3.2.3  Fault diagnosis using ANNs
     3.2.4
3.3
3.4
3.5
3.6

4
4.1  Introduction
     4.1.1  Diagnosis definitions
4.2  Background to analogue fault diagnosis
     4.2.1  Simulation before test
     4.2.2  Simulation after test
4.3  Hierarchical techniques
     4.3.1  Simulation after test
     4.3.2  Simulation before test
     4.3.3  Mixed SBT/SAT approaches
4.4  Conclusions
4.5  References

5
5.1  Introduction
5.2  Background
5.3  Signal generation
     5.3.1  Direct digital frequency synthesis
     5.3.2  Oscillator-based approaches
     5.3.3  Memory-based signal generation
     5.3.4  Multi-tones
     5.3.5  Area overhead
5.4  Signal capture
5.5  Timing measurements and jitter analysers
     5.5.1  Single counter
     5.5.2  Analogue-based interpolation techniques: time-to-voltage converter
     5.5.3  Digital phase-interpolation techniques: delay line
     5.5.4  Vernier delay line
     5.5.5  Component-invariant VDL for jitter measurement
     5.5.6  Analogue-based jitter measurement device
     5.5.7  Time amplification
     5.5.8  PLL and DLL injection methods for PLL tests
5.6  Calibration techniques for TMU and TDC
5.7  Complete on-chip test core: proposed architecture in Reference 11 and its versatile applications
     5.7.1  Attractive and flexible architecture
     5.7.2  Oscilloscope/curve tracing
     5.7.3  Coherent sampling
     5.7.4  Time domain reflectometry/transmission
     5.7.5  Crosstalk
     5.7.6  Supply/substrate noise
     5.7.7  RF testing amplifier resonance
     5.7.8  Limitations of the proposed architecture in Reference 11
5.8  Recent trends
5.9  Conclusions
5.10 References

6
6.1  Introduction
6.2  DfT by bypassing
     6.2.1  Bypassing by bandwidth broadening
     6.2.2  Bypassing using duplicated/switched opamp
6.3  DfT by multiplexing
     6.3.1  Tow-Thomas biquad filter
     6.3.2  The Kerwin–Huelsman–Newcomb biquad filter
     6.3.3  Second-order OTA-C filter
6.4  OBT of analogue filters
     6.4.1  Test transformations of active-RC filters
     6.4.2  OBT of OTA-C filters
     6.4.3  OBT of SC biquadratic filter
6.5
6.6
6.7

7
7.1  Introduction
7.2  A/D conversion
     7.2.1  Static A/D converter performance parameters
     7.2.2  Dynamic A/D converter performance parameters
7.3  A/D converter test approaches
     7.3.1  Set-up for A/D converter test
     7.3.2  Capturing the test response
     7.3.3  Static performance parameter test
     7.3.4  Dynamic performance parameter test
7.4  A/D converter built-in self-test
7.5  Summary and conclusions
7.6  References

8    Test of ΣΔ converters
     Gildas Leger and Adoración Rueda
8.1  Introduction
8.2  An overview of ΣΔ modulation: opening the ADC black box
     8.2.1  Principle of operation: modulation and noise shaping
     8.2.2  Digital filtering and decimation
     8.2.3  ΣΔ modulator architecture
8.3  Characterization of ΣΔ converters
     8.3.1  Consequences of ΣΔ modulation for ADC characterization
     8.3.2  Static performance
     8.3.3  Dynamic performance
     8.3.4  Applying a FFT with success
8.4  Test of ΣΔ converters
     8.4.1  Limitations of the functional approach
     8.4.2  The built-in self-test approach
8.5  Model-based testing
     8.5.1  Model-based test concepts
     8.5.2  Polynomial model-based BIST
     8.5.3  Behavioural model-based BIST
8.6  Conclusions
8.7  References

9
9.2
9.3
9.4
9.5

10
10.1  Introduction
10.2  Frequency-response test system for analogue baseband circuits
      10.2.1  Principle of operation
      10.2.2  Testing methodology
      10.2.3  Implementation as a complete on-chip test system with a digital interface
      10.2.4  Experimental evaluation of the FRCS
10.3  CMOS amplitude detector for on-chip testing of RF circuits
      10.3.1  Gain and 1-dB compression point measurement with amplitude detectors
      10.3.2  CMOS RF amplitude detector design
      10.3.3  Experimental results
10.4  Architecture for on-chip testing of wireless transceivers
      10.4.1  Switched loop-back architecture
      10.4.2  Overall testing strategy
      10.4.3  Simulation results
10.5  Summary and outlook
10.6  References

11
11.1  Introduction
11.2  On-chip filter tuning
      11.2.1  Tuning system requirements for on-chip filters
      11.2.2  Frequency tuning and Q tuning
      11.2.3  Online and offline tuning
      11.2.4  Master–slave tuning
      11.2.5  Frequency tuning methods
      11.2.6  Q tuning techniques
      11.2.7  Tuning of high-order leapfrog filters
11.3  Self-calibration techniques for PLL frequency synthesizers
      11.3.1  Need for calibration in PLL synthesizers
      11.3.2  PLL synthesizer with calibrated VCO
      11.3.3  Automatic PLL calibration
      11.3.4  Other PLL synthesizer calibration applications
11.4  On-chip antenna impedance matching
      11.4.1  Requirement for on-chip antenna impedance matching
      11.4.2  Matching network
      11.4.3  Impedance sensors
      11.4.4  Tuning algorithms
11.5  Conclusions
11.6  References

Index
Preface
System on chip (SoC) integrated circuits (ICs) for communications, multimedia and
computer applications are receiving considerable international attention. One example of a SoC is a single-chip transceiver. Modern microelectronic design processes
adopt a mixed-signal approach since a SoC is a mixed-signal system that includes
both analogue and digital circuits. Several IC technologies are currently available; however, the low-cost and readily available CMOS process is the mainstream
technology used in IC production for applications such as computer hard disk drive
systems, sensors and sensing systems for health care, video, image and display systems, cable modems for wired communications, radio frequency (RF) transceivers
for wireless communications and high-speed transceivers for optical communications.
Currently, microelectronic circuits and systems are mainly based on submicron and
deep-submicron CMOS technologies, although nano-CMOS technology has already
been used in computer, communication and multimedia chip design. While still pushing the limits of CMOS, preparation for the post-CMOS era is well under way with
many other potential alternatives being actively pursued.
There is an increasing interest in the testing of SoC devices as automatic testing
becomes crucially important to drive down the overall cost of SoC devices due to the
imperfect nature of the manufacturing process and its associated tolerances. Traditional external test has become more and more irrelevant for SoC devices, because
these devices have a very limited number of test nodes. Design for testability (DfT)
and built-in self-test (BIST) approaches have thus been the choice for many applications. The concept of on-chip test systems, including test generation, measurement and
processing has also been proposed for complex integrated systems. Test and fault diagnosis of analogue and mixed-signal circuits, however, is much more difficult than that
of digital circuits due to tolerances, parasitics and non-linearities, and thus it remains
a bottleneck for automatic SoC test. Recently, the closely related tuning, calibration
and correction issues of analogue, mixed-signal and RF circuits have been intensively
studied. However, the papers on testing, diagnosis and tuning have been published
in a diverse range of journals and conferences, and thus they have been treated quite
separately by the associated communities. For example, work on tuning has been
mainly published in journals and conferences concerned with circuit design and has
not therefore come to the attention of the testing community. Similarly, analogue fault
diagnosis was mainly investigated by circuit theorists in the past, although it has now
become a serious topic in the testing community.
The scope of this book is to consider the whole range of automatic testing, diagnosis and tuning of analogue, mixed-signal and RF ICs and systems. It aims to provide
a comprehensive treatment of testing, diagnosis and tuning in a coherent way and
to report systematically the most recent developments in all these areas in a single
source for the first time. The book attempts to provide a balanced view of the three
important topics; however, stress has been put on the testing side. Motivated by recent
SoC test concepts, the diagnosis, testing and tuning issues of analogue, mixed-signal
and RF circuits are addressed, in particular, from the SoC perspective, which forms
another unique feature of this book.
The book contains 11 chapters written by leading international researchers in
the subject areas. It covers three theme topics: diagnosis, testing and tuning. The
first four chapters are concerned with fault diagnosis of analogue circuits. Chapter
1 systematically presents various circuit-theory-based diagnosis methodologies for
both linear and non-linear circuits including some material not previously available
in the public domain. This chapter also serves as an overview of fault diagnosis.
The following three chapters cover the three most popular diagnosis approaches;
the symbolic function, neural network and hierarchical decomposition techniques,
respectively. Then testing of analogue, mixed-signal and RF ICs is discussed extensively in Chapters 5–10. Chapter 5 gives a general review of all aspects of testing with
emphasis on DfT and BIST. Chapters 6–10 focus in depth on recent advances in testing analogue filters, data converters, sigma-delta modulators, phase-locked loops, RF
transceivers and components, respectively. Finally, Chapter 11 discusses auto-tuning
and calibration of analogue, mixed-signal and RF circuits including continuous-time
filters, voltage-controlled oscillators and phase-locked loop synthesizers, impedance
matching networks and antenna tuning units.
The book can be used as a text or reference for a broad range of readers from
both academia and industry. It is especially useful for those who wish to gain a
viewpoint from which to understand the relationship of diagnosis, testing and tuning.
An indispensable reference companion for researchers and engineers in electronic and
electrical engineering, the book is also intended to be a text for graduate and senior
undergraduate students, as may be appropriate.
I would like to thank staff members in the Publishing Department of the IET
for their support and assistance, especially the former Commissioning Editors Sarah
Kramer and Nick Canty and the current Commissioning Editor, Lisa Reading. I am
very grateful to the chapter authors for their considerable efforts in contributing these
high-quality chapters; their professionalism is highly appreciated. I must also thank
my wife Xiaohui, son Bo and daughter Lucy for their understanding and support;
without them behind me this book would not have been possible.
As a final note, it has been my long dream to write or edit something in the topic
area of this book. The first research paper published in my academic career was about
fault diagnosis in analogue circuits. This was over 20 years ago when I studied for
the MSc degree. The real motivation for doing this book, however, came along with
the proposal for a special issue on analogue and mixed-signal test for SoCs for IEE
Proceedings: Circuits, Devices and Systems (published in 2004). It has since been
a long journey for the book to come into being as you see it now; the book has, however, been significantly improved over time during the editorial process.
I sincerely hope that the efforts from the editor and authors pay off as a truly useful
and long-lasting companion in your successful career.
Yichuang Sun
Contributors
Stefano Manetti
Department of Electronics and
Telecommunications
University of Florence
Firenze, Italy
James Moritz
School of Electronic, Communication
and Electrical Engineering
University of Hertfordshire
Hatfield, Herts, UK
Maria Cristina Piccirilli
Department of Electronics and
Telecommunications
University of Florence
Firenze, Italy
Andrew Richardson
Centre for Microsystems Engineering
Department of Engineering
Lancaster University
Lancaster, UK
Gordon Roberts
Department of Electrical Engineering
McGill University
Montreal, Quebec, Canada
Adoración Rueda
Instituto de Microelectronica
de Sevilla
Centro Nacional de
Microelectronica
Edificio CICA, Sevilla
Spain
Mona Safi-Harb
Department of Electrical
Engineering
McGill University
Montreal, Quebec, Canada
Edgar Sanchez-Sinencio
Analog and Mixed-signal Center
Department of Electrical and
Computer Engineering
Texas A&M University
College Station, Texas, USA
Peter Shepherd
Department of Electronic &
Electrical Engineering
University of Bath
Claverton Down, Bath, UK
Jose Silva-Martinez
Analog and Mixed-signal Center
Department of Electrical and
Computer Engineering
Texas A&M University
College Station, Texas, USA
Yichuang Sun
School of Electronic, Communication
and Electrical Engineering
University of Hertfordshire
Hatfield, Herts, UK
Alberto Valdes-Garcia
Communication IC Design
IBM T. J. Watson Research Center
New York, USA
Chapter 1

1.1 Introduction
The fault dictionary method is used practically in the fault diagnosis of analogue and mixed-signal circuits, especially for single hard-fault diagnosis. The drawback of the method
is the large number of SBT computations that are needed for the construction of a
fault dictionary, especially for multiple-fault and soft-fault diagnosis of a large circuit. Methods for effective fault simulation and large-change sensitivity computation
are thus needed [6–10]. Tolerance effects need to be considered as simulations are
conducted at nominal values for fault-free components.
The parameter identification approach calculates all actual component values from
a set of linear or non-linear equations after test and compares them with their nominal
values to decide which components are faulty [13–17]. The method is useful for
circuit design modification and tuning. There is no restriction on the number of
faults and tolerance is not a problem in this method because the method targets all
actual component values. However, the method normally assumes that all circuit
nodes are accessible and thus it is not practical for modern IC diagnosis [15, 16]. In
addition, some parameter identification requires solving non-linear equations [13, 14],
which is computationally demanding especially for large-scale circuits. The parameter
identification method has thus become more of a topic of theoretical interest in circuit
diagnosis, in contrast to circuit analysis and circuit design. The only exception is
perhaps the optimization-based identification technique [17] that can have limited
tests for approximate, but optimized, component value calculation. The optimizationbased method will be discussed in the context of the neural network approach in
Chapter 3.
The fault verification method [18–39] is concerned with fault location of analogue
circuits with a small number of test nodes and a limited number of faults by using linear diagnosis equations. Indeed, modern highly integrated systems have very limited
external accessibility and normally only a few components become faulty simultaneously. Under the assumption that the number of faults is fewer than the number of
accessible nodes, the fault locations of a circuit can be determined by simply checking
the consistency of a set of linear equations. Thus, the SAT computation burden of
the method is small. The fault verification method is suitable for all types of fault,
and component values can also be determined after fault location. Tolerance effects
are, however, of concern in this method because fault-free components are assumed
to take their nominal values. The fault verification method has attracted considerable
attention, with the k-fault diagnosis approach [18–39] being widely investigated.
This chapter systematically introduces k-fault diagnosis theory and methods for
both linear and non-linear circuits as well as the derivative class-fault diagnosis
approach. We also give a general overview of recent research in fault diagnosis of
analogue circuits. Throughout the chapter, a unified discussion is adopted based on
the fault incremental circuit concept. In Section 1.2, we introduce the fault incremental circuit of linear circuits and discuss various k-fault diagnosis methods including
branch-, node- and cutset-fault diagnoses and various practical issues such as component value determination and testability analysis and design. A class-fault diagnosis
theory without structural restrictions for fault location is introduced in Section 1.3,
which comprises both algebraic and topological classification methods. In Section 1.4,
the fault incremental circuit of non-linear circuits is constructed and a series of linear
methods and special considerations of non-linear circuit fault diagnosis are discussed.
We also introduce some of the latest advances in fault diagnosis of analogue circuits
in Section 1.5. A summary of the chapter is given in Section 1.6.
1.2 Multiple-fault diagnosis of linear circuits
The k-fault diagnosis methods [18–39] have been widely investigated because of
various advantages such as the need for only a limited number of test nodes and use
of linear fault diagnosis equations. It is also practical to assume a limited number of
simultaneous faults. The k-fault diagnosis theory is very systematic and is based on
circuit analysis and circuit design methods.
1.2.1 Fault incremental circuit
Consider a linear circuit, which contains b branches and n nodes. Assume that the
circuit does not contain controlled sources and multi-terminal devices. The branch
equation in the nominal state can be written as
Ib = Yb Vb
(1.1)
where Ib is the branch current vector, Vb is the branch voltage vector and Yb is the
branch admittance matrix.
When the circuit is faulty, component values will have deviations ΔYb, which
will cause changes ΔIb and ΔVb in the branch currents and voltages, respectively.
The branch equation of the faulty circuit can then be written as
Ib + ΔIb = (Yb + ΔYb)(Vb + ΔVb)
(1.2)
Subtracting Equation (1.1) from Equation (1.2) gives
ΔIb = Yb ΔVb + ΔYb(Vb + ΔVb)
(1.3)
which can be written as
ΔIb = Yb ΔVb + Xb
(1.4)
where
Xb = ΔYb(Vb + ΔVb)
(1.5)
Note that Xb can be used to judge whether a branch or component is faulty or not
by verifying if the corresponding element in Xb is non-zero.
Equation (1.4) can be considered to be the branch equation of a circuit with the
branch current vector being ΔIb, the branch voltage vector being ΔVb and the branch
admittance matrix being Yb. Xb can be viewed as a vector of excitation sources due to the faults, the
so-called fault compensation sources. We call this circuit a fault incremental circuit.
Assuming that the nominal circuit and faulty circuit have the same normal or test
input signals, the subtraction of the inputs of the two circuits will be equal to zero,
that is, an open circuit for a current source and a short-circuit for a voltage source
in the fault incremental circuit. Also, note that the fault incremental circuit has the
same topology as the nominal circuit. By applying Kirchhoff's current law (KCL)
and Kirchhoff's voltage law (KVL) to the fault incremental circuit, we can derive numerous
equations useful for fault diagnosis of analogue circuits.
For linear controlled sources and multi-terminal devices or subcircuits, we can also
derive the corresponding branch equations in the fault incremental circuit [32–35].
For example, for a VCCS with i1 = gm v2, it can be shown that Δi1 = gm Δv2 + x1
in the fault incremental circuit, where x1 = Δgm (v2 + Δv2). This remains a VCCS
with an incremental current in the controlled branch, an incremental voltage in the
controlling branch and a fault compensation current source in the controlled branch.
For a three-terminal linear device, y-parameters can be used to describe its terminal
characteristics, with one terminal being taken to be common:
i1 = y11 v1 + y12 v2
i2 = y21 v1 + y22 v2
We can derive the corresponding equations in the fault incremental circuit as
Δi1 = y11 Δv1 + y12 Δv2 + x1
Δi2 = y21 Δv1 + y22 Δv2 + x2
where
x1 = Δy11 (v1 + Δv1) + Δy12 (v2 + Δv2)
x2 = Δy21 (v1 + Δv1) + Δy22 (v2 + Δv2)
Although the device has four y-parameters, only two fault compensation current
sources are used, one for each branch of a T-type equivalent circuit in the fault
incremental circuit. If either x1 or x2 is non-zero, the three-terminal
device is faulty; only if both x1 and x2 are zero is it fault free. Similarly, we can
also develop a star model for multi-terminal linear devices or subcircuits [35]. A Π (delta)
model is not preferred owing to the existence of loops (this will become clear later),
the use of more branches and the possible introduction of additional internal nodes.
A fault incremental circuit will become a differential incremental circuit if Xb =
ΔYb(Vb + ΔVb) is replaced by Xb = ΔYb Vb. The differential incremental circuit
is useful for differential sensitivity and tolerance effects analysis, whereas the fault
incremental circuit can be used for large-change sensitivity analysis, fault simulation
and fault diagnosis.
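The relations (1.1)–(1.5) can be checked numerically on a small example. The following sketch simulates an assumed two-node resistive ladder (the topology, component values, the fault on G3 and the 1 A test source are all illustrative, not taken from the chapter), builds the fault compensation sources Xb and verifies Equation (1.4) branch by branch:

```python
# Numerical check of the fault incremental circuit relations (1.2)-(1.5)
# on an assumed 2-node resistive ladder: 1 A source into node 1,
# branches b1 = G1 (node1-gnd), b2 = G2 (node1-node2), b3 = G3 (node2-gnd).

def solve2(M, rhs):
    """Solve a 2x2 linear system M x = rhs by Cramer's rule."""
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(rhs[0]*M[1][1] - M[0][1]*rhs[1]) / det,
            (M[0][0]*rhs[1] - rhs[0]*M[1][0]) / det]

A = [[1,  1, 0],     # node incidence matrix (rows: nodes, cols: branches)
     [0, -1, 1]]

def node_voltages(yb, i_in):
    """Nodal analysis: solve (A diag(yb) A^T) Vn = In."""
    Yn = [[sum(A[i][k]*yb[k]*A[j][k] for k in range(3)) for j in range(2)]
          for i in range(2)]
    return solve2(Yn, i_in)

def branch_voltages(Vn):
    """Vb = A^T Vn."""
    return [sum(A[i][k]*Vn[i] for i in range(2)) for k in range(3)]

Yb   = [1.0, 2.0, 1.0]       # nominal branch admittances (S), illustrative
dYb  = [0.0, 0.0, 0.5]       # fault: G3 deviates by +0.5 S
I_in = [1.0, 0.0]            # 1 A test current into node 1

Vn  = node_voltages(Yb, I_in)                                # nominal circuit
Vnf = node_voltages([y + d for y, d in zip(Yb, dYb)], I_in)  # faulty circuit

Vb,  Vbf = branch_voltages(Vn), branch_voltages(Vnf)
dVb = [f - n for f, n in zip(Vbf, Vb)]
Ib  = [y*v for y, v in zip(Yb, Vb)]                  # Eq. (1.1)
Ibf = [(y + d)*v for y, d, v in zip(Yb, dYb, Vbf)]   # Eq. (1.2)
dIb = [f - n for f, n in zip(Ibf, Ib)]

# Fault compensation sources, Eq. (1.5): Xb = dYb * (Vb + dVb)
Xb = [d*vf for d, vf in zip(dYb, Vbf)]

# Verify Eq. (1.4), dIb = Yb*dVb + Xb, branch by branch
for k in range(3):
    assert abs(dIb[k] - (Yb[k]*dVb[k] + Xb[k])) < 1e-12

# Only the faulty branch has a non-zero compensation source
print([abs(x) > 1e-12 for x in Xb])   # -> [False, False, True]
```

The check confirms that the compensation source vector is non-zero exactly in the faulty branch, which is the property Equation (1.5) is used for in the diagnosis methods that follow.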
1.2.2 Branch-fault diagnosis
Branch-fault diagnosis was first published by Biernacki and Bandler [18] and generalised to non-linear circuits by Sun and Lin [32–34] and Sun [35]. The k-branch-fault
diagnosis method [18–23, 32–35] assumes that there are k branch faults in the circuit
and requires that the number of accessible nodes, m, is larger than k. As discussed
above, the change of a components value with respect to its nominal can be represented by a current source in parallel with the component and if the fault compensation
source current is non-zero, the component is faulty.
A branch is said to be faulty if its component is faulty.
Consider a linear circuit with b branches and n nodes (excluding the ground
node), of which m are accessible and l inaccessible. Assume that the nominal circuit
and faulty circuit have the same current input, then the input current to an accessible
node in the fault incremental circuit is zero. On applying KCL to the fault incremental
circuit, that is, A ΔIb = 0 (where A is the node incidence matrix), noting that ΔVb =
A^T ΔVn (where ΔVn is the nodal voltage increment vector), and substituting in
Equation (1.4), we can derive
Znb Xb = ΔVn
(1.6)
where Znb = (A Yb A^T)^−1 A and Xb = ΔYb(Vb + ΔVb) as given in Equation (1.5).
Partitioning Znb = [Zmb^T, Zlb^T]^T and ΔVn = [ΔVm^T, ΔVl^T]^T according to external (accessible) and internal (inaccessible) nodes, the branch-fault diagnosis equation
can be derived as
Zmb Xb = ΔVm
(1.7)
and the formula for calculating the internal node voltages is given by
ΔVl = Zlb Xb
(1.8)
Assume that the faults are confined to a particular combination of k branches. Keeping only the corresponding k columns of Zmb, denoted Zmk, and the corresponding k elements of Xb, denoted Xk, Equation (1.7) becomes
Zmk Xk = ΔVm
(1.9)
an overdetermined system of m equations in k unknowns (m > k). When Equation (1.9) is consistent, its solution can be written as
Xk = (Zmk^T Zmk)^−1 Zmk^T ΔVm
(1.10)
The non-zero elements of Xk in Equation (1.10) indicate the faulty branches. By
checking consistency of the equations of different k branches, we can determine the
k faulty branches. Because we do not know which k components are faulty, we have
to consider all possible combinations of k out of b branches in the CUT. If there is
more than one k-branch combination whose corresponding equations are consistent,
the k faulty branches cannot be uniquely determined, as they are not distinguishable
from other consistent k-branch combinations.
More generally, the k-fault diagnosis problem is to find the solutions Xb
of the underdetermined Equation (1.7) that contain only k non-zero elements. This further becomes a problem of checking the consistency of a
series of overdetermined equations similar to Equation (1.9) corresponding to all
k-branch combinations. A detailed discussion of the problems and methods can be
found in References 18 and 26.
After location of the k faulty branches, we can calculate ΔVl using Equation (1.8),
then ΔVb = A^T ΔVn, and further we can calculate ΔYb from Equation (1.5).
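A minimal sketch of the consistency test for k = 1 is given below on an assumed five-branch, three-node ladder with nodes 1 and 3 accessible and node 2 internal (the circuit, values and injected fault are illustrative assumptions). For a single-fault hypothesis on branch j, Equation (1.7) reduces to one column of Zmb, so the measured ΔVm must be proportional to that column:

```python
# Single-fault (k = 1) location by consistency checking of Zmb Xb = dVm
# on an assumed ladder: b1 n1-gnd, b2 n1-n2, b3 n2-gnd, b4 n2-n3, b5 n3-gnd.

def solve(M, rhs):
    """Gauss-Jordan elimination with partial pivoting for an n x n system."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented copy
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f*b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[1,  1, 0,  0, 0],     # node incidence matrix (3 nodes x 5 branches)
     [0, -1, 1,  1, 0],
     [0,  0, 0, -1, 1]]
Yb   = [1.0]*5              # nominal admittances (S), illustrative
I_in = [1.0, 0.0, 0.0]      # test current into node 1

def simulate(yb):
    Yn = [[sum(A[i][k]*yb[k]*A[j][k] for k in range(5)) for j in range(3)]
          for i in range(3)]
    return solve(Yn, I_in)

Vn  = simulate(Yb)
Vnf = simulate([1.0, 1.0, 1.0, 2.0, 1.0])   # actual fault: branch 4
dVm = [Vnf[0] - Vn[0], Vnf[2] - Vn[2]]      # measured at accessible nodes 1, 3

# Accessible-node rows of (A diag(Yb) A^T)^(-1) A; the sign convention of the
# compensation sources is absorbed into the scalar solution below.
Yn = [[sum(A[i][k]*Yb[k]*A[j][k] for k in range(5)) for j in range(3)]
      for i in range(3)]
cols = [solve(Yn, [A[i][k] for i in range(3)]) for k in range(5)]
Zmb = [[cols[k][0] for k in range(5)],      # node-1 row
       [cols[k][2] for k in range(5)]]      # node-3 row

# Branch j is a consistent k = 1 hypothesis iff dVm is proportional to
# column j of Zmb (cross-product consistency test).
located = [j for j in range(5)
           if abs(dVm[0]*Zmb[1][j] - dVm[1]*Zmb[0][j]) < 1e-9]
print(located)   # -> [3]  (branch 4, the actual fault, uniquely located)
```

With two accessible nodes and k = 1, the m = 2 equations overdetermine the single unknown, and in this example only the true faulty branch survives the consistency test; in practice a tolerance threshold replaces the exact zero, as discussed in Section 1.2.8.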
1.2.3 Testability analysis and design for testability
A general mathematical algebraic theory and algorithms for solving k-fault diagnosis
equations that are suitable for all k-fault diagnosis methods such as branch-, node- and cutset-fault diagnosis have been thoroughly and rigorously studied in Reference
26. Several interesting and useful theorems and algorithms have been proposed. In
this section, we focus on topological aspects of k-fault diagnosis. This is because
topological testability conditions are more straightforward and useful than algebraic
conditions. Checking topological conditions is much simpler than verifying algebraic
conditions, as the former can be done by inspection only, while the latter requires
numerical computation. Topological conditions can also be used to guide design for
better testability through selection of test nodes, test input signals and topological
structures of the CUT. Sun [22] and Sun and He [23] have investigated testability
analysis and design, on the basis of k-branch-fault diagnosis and k-component value
identification methods. In this section, we discuss topological aspects of k-fault diagnosis, including topological conditions, testability analysis and design for testability,
mainly based on the results obtained in References 22, 23 and 32–34.
Definition 1.1 A circuit is said to be k-branch-fault testable if any k faulty branches
can be determined uniquely from test input, accessible node voltages and nominal
component values.
As we have discussed in Section 1.2.2, the equation of the k faulty branches is
consistent. If in the CUT, there are more than one k-branch combinations whose corresponding equations are also consistent, then we will be unable to determine the faulty
branches through consistency verification and thus the circuit is not testable. Therefore, it is important to investigate testability conditions. The following conditions can
be demonstrated.
Theorem 1.1 The necessary and almost sufficient condition for k-branch faults to
be testable is that for all (k + 1)-branch combinations, the corresponding equation
coefficient matrices are of full rank, that is, rank [Zm(k+1) ] = k + 1.
So there are two algebraic requirements that are important: rank [Zmk ] = k and
rank [Zm(k+1) ] = k + 1. The first one is for the equation to be solvable, which is
always assumed to be true and the second is for a unique solution. In the following,
we will give the topological equivalents of both.
Definition 1.2 A cutset is said to be dependent if all accessible nodes and the
reference node are in one of the two parts into which the cutset divides the circuit.
A simple dependent cutset is one in which there is only one inaccessible node in
one part.
Theorem 1.2 The necessary and almost sufficient condition for rank [Zmk ] = k of
all k-branch combinations is that the CUT does not have any loops and dependent
cutsets which contain k branches.
Theorem 1.3 The necessary and almost sufficient condition for k-branch faults to
be testable (rank [Zm(k+1) ] = k + 1 for all (k + 1)-branch combinations) is that the
CUT does not have any loops and dependent cutsets that contain (k + 1) branches.
When k = 1, the necessary and almost sufficient condition for a single branch
fault to be testable becomes that the circuit does not have any two branches in parallel
or forming a dependent cutset.
A loop is called the minimum loop if it contains the fewest number of branches
among all loops. A dependent cutset is called the minimum dependent cutset if it contains the fewest number of branches among all dependent cutsets. Denote lmin and
cmin as the number of branches in the minimum loop and minimum dependent cutset,
respectively. Then we have the following theorems.
Theorem 1.4 The necessary and almost sufficient condition for k-branch faults to
be testable is k < lmin − 1 if lmin ≤ cmin, or k < cmin − 1 if cmin ≤ lmin.
It is necessary to find out loops and dependent cutsets to determine lmin and cmin .
Finding loops is relatively easy and can be done directly in the CUT, N. However,
dependent cutsets are more difficult to identify, especially in large circuits. The
following theorem provides a simple method for this purpose, that is, to find dependent
cutsets in N0 , instead of N, equivalently.
Theorem 1.5 Let N0 be the circuit obtained by connecting all accessible nodes to
the reference node in the original circuit N. Then all cutsets in N0 are dependent and
N0 contains all cutsets in N.
Note that k-branch-fault testability is dependent on both loops and dependent cutsets. Increasing the number of branches in the minimum loop and minimum dependent
cutset may allow more simultaneous branch faults to be testable. This is useful when
k is not known.
It is also noted that a non-dependent cutset does not pose any restriction on testability. Whether or not a cutset is dependent will depend on the number and position
of accessible nodes. Therefore, proper selection of accessible nodes can change the
dependency of a cutset and thus the testability of k-branch faults. The greater the
number of accessible nodes, the smaller the number of dependent cutsets. If
all circuit nodes are accessible, there will be no dependent cutset. Testability will
then be completely decided by the condition on loops, that is, k < lmin − 1. Choosing different accessible nodes may change a dependent cutset to a non-dependent
cutset, thus improving testability. However, selection of accessible nodes will not
change the conditions on loops. Therefore, if testability is decided by loop conditions
only, for example, when lmin ≤ cmin, changing accessible nodes will not improve the
testability. However, since dependency of cutsets is related to the number and position of accessible nodes, when testability is decided by conditions on cutsets only,
we will want to select accessible nodes to eliminate dependent cutsets or increase
the number of branches in the minimum dependent cutset to improve the testability.
To increase the number of branches in the minimum dependent cutset, it is always
useful to choose those nodes containing a smaller number of branches as accessible
nodes, because all branches connected to an inaccessible node constitute a dependent
cutset. Generally, choosing as accessible a node from the part of the circuit, separated
off by a dependent cutset, that does not contain the reference node can make the minimum
dependent cutset non-dependent. Finally, the number of accessible nodes must be
larger than the number of faulty branches, as is always assumed. If possible, having
more accessible nodes is always useful as in many cases we do not know exactly how
many faults may happen and the number of dependent cutsets may also be reduced.
In summary, we need to meet m ≥ k + 1, lmin > k + 1 and cmin > k + 1.
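The summary conditions can be wrapped in a trivial checker; the numbers in the example below are hypothetical:

```python
def k_branch_fault_testable(m, l_min, c_min, k):
    """Summary conditions: m >= k + 1 accessible nodes, minimum loop size
    l_min > k + 1 and minimum dependent-cutset size c_min > k + 1."""
    return m >= k + 1 and l_min > k + 1 and c_min > k + 1

# A circuit with 4 accessible nodes, smallest loop of 4 branches and smallest
# dependent cutset of 5 branches is testable for k = 2 simultaneous faults:
print(k_branch_fault_testable(m=4, l_min=4, c_min=5, k=2))   # True
# A 3-branch loop breaks the loop condition for k = 2:
print(k_branch_fault_testable(m=4, l_min=3, c_min=5, k=2))   # False
```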
For a more graph-theory-based discussion of testability, readers may refer to
Reference 23 where detailed testability analysis and design for testability procedures
are given and other equivalent testability conditions are proposed.
We can enhance testability of a circuit by using multiple excitations. For example,
by checking the invariance of the k component values under two different excitations,
we can identify the k faulty components. The CUT can now have (k + 1)-branch loops
and dependent cutsets, as in these cases we can still uniquely determine the faulty
branches. Using multiple excitations, all circuits will be single-fault diagnosable.
This will be detailed on the basis of a bilinear method in the next section.
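As an illustration, the k-branch consistency test, rank[Zmk, ΔVm] = k, can be exercised on a small synthetic ladder network; the circuit, element values and the single fault below are invented for the sketch (with all nodes taken as accessible, so m = n):

```python
import numpy as np

# Toy 3-node resistive ladder, node 0 = ground; branch j connects p -> q.
branches = [(1, 0), (1, 2), (2, 0), (2, 3), (3, 0)]
g_nom = np.array([1.0, 2.0, 1.0, 2.0, 1.0])      # nominal conductances (S)

A = np.zeros((3, len(branches)))                  # reduced incidence matrix
for j, (p, q) in enumerate(branches):
    if p: A[p - 1, j] += 1.0
    if q: A[q - 1, j] -= 1.0

def node_voltages(g, J):
    return np.linalg.solve(A @ np.diag(g) @ A.T, J)

J = np.array([1.0, 0.0, 0.0])                     # 1 A injected at node 1
g_flt = g_nom.copy()
g_flt[3] = 0.5                                    # branch 3 is faulty

dV = node_voltages(g_flt, J) - node_voltages(g_nom, J)

# dV must lie in the span of the nominal Z_nb columns of the faulty branches
# (here k = 1): a zero least-squares residual flags a consistent combination.
Znb = np.linalg.solve(A @ np.diag(g_nom) @ A.T, A)
consistent = []
for j in range(len(branches)):
    residual = np.linalg.lstsq(Znb[:, [j]], dV, rcond=None)[1]
    if residual[0] < 1e-12:
        consistent.append(j)

print(consistent)   # only the truly faulty branch passes: [3]
```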
1.2.4
Bilinear function and multiple excitation method
The k-branch combination test can be repeated for different excitations. To generate
two excitations, the same input signal can be applied to two different accessible
nodes or two input signals with different amplitudes can be applied to the same
accessible node. The real fault indicator vectors obtained from different excitations
should be in agreement. Below, a two-excitation method for k-branch-fault location
and component value identification is given.
On the basis of the k-branch-fault diagnosis method in Section 1.2.2, assuming that
rank[Zmk] = k, we can derive the following bilinear relation mapping the measured
node voltage space to the component admittance space as [21]:
col(ΔYk) = {diag[Ak^T (Vn + Tnm ΔVm)]}^−1 Zmk^L ΔVm
(1.11)
After location of the k faulty branches, we can use the bilinear relation in Equation
(1.11) to determine the k faulty component values. More usefully, a multiple excitation
method can be developed based on checking the invariance of the corresponding
k-component values under different excitations for a unique identification of the faulty
branches and components.
If we use two independent current excitations with the same frequency and calculate the col(ΔYk)s of all k-component combinations under each excitation, denoted col(ΔYk)1
and col(ΔYk)2, respectively, then by defining Δrk = col(ΔYk)1 − col(ΔYk)2 we can
determine the k faulty branches by checking whether Δrk is equal to zero. This method
can realize the simultaneous determination of the faulty branches and faulty component values as calculated under any excitation. The multiple excitation method can
enhance diagnosability. Equivalent but non-faulty k-branch combinations can be eliminated,
since for these k-branch combinations Δrk is not equal to zero (the component values
computed for k-branch combinations other than the real faulty one change with the excitation). Now,
the only condition for the unique identification of k faulty components is that rank
[Zmk ] = k or the k branches do not form loops or dependent cutsets. Thus, during
the checking of different combinations of k components, once a k-component combination is found to have Δrk = 0, we can stop further checking and this k-component
combination is the faulty one. For a.c. circuits, multiple test frequencies may also be
used, however, component values R, L, C rather than their admittances should be
used since admittances are frequency dependent [21]. A similar bilinear relation and
two-excitation method for non-linear circuits [39] will be discussed in Section 1.4.2.
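A sketch of the two-excitation invariance test on a synthetic ladder network (circuit and fault values are invented; for k = 1 the left inverse reduces to a scalar least-squares fit, and the admittance deviation of the true faulty branch is the same under both excitations):

```python
import numpy as np

branches = [(1, 0), (1, 2), (2, 0), (2, 3), (3, 0)]   # node 0 = ground
g_nom = np.array([1.0, 2.0, 1.0, 2.0, 1.0])

A = np.zeros((3, len(branches)))                       # reduced incidence
for j, (p, q) in enumerate(branches):
    if p: A[p - 1, j] += 1.0
    if q: A[q - 1, j] -= 1.0

Yn = A @ np.diag(g_nom) @ A.T
Znb = np.linalg.solve(Yn, A)

g_flt = g_nom.copy()
g_flt[3] = 0.5                        # true deviation: dg_3 = -1.5 S

def estimate_dg(J):
    """Per-branch admittance-deviation estimate under excitation J."""
    V_flt = np.linalg.solve(A @ np.diag(g_flt) @ A.T, J)
    dV = V_flt - np.linalg.solve(Yn, J)
    v_act = A.T @ V_flt               # actual (faulty-circuit) branch voltages
    est = []
    for j in range(len(branches)):
        x = np.linalg.lstsq(Znb[:, [j]], dV, rcond=None)[0][0]
        est.append(-x / v_act[j])     # fault source: dX_j = -dg_j (v_j + dv_j)
    return np.array(est)

dg1 = estimate_dg(np.array([1.0, 0.0, 0.0]))   # excitation at node 1
dg2 = estimate_dg(np.array([0.0, 0.0, 1.0]))   # excitation at node 3

invariant = [j for j in range(len(branches)) if abs(dg1[j] - dg2[j]) < 1e-9]
print(invariant, dg1[3])              # [3] and the true deviation -1.5
```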
1.2.5
Node-fault diagnosis
Node-fault diagnosis was first proposed by Huang et al. [24] and generalised to non-linear circuits by Sun [35, 36]. A node is said to be faulty if at least one of the branches
connected to it is faulty. Instead of locating faulty branches in a circuit directly, we
locate the faulty nodes. It is assumed that the number of faulty nodes is smaller than
the number of accessible nodes.
Similar to the derivation of the branch-fault diagnosis equation, applying KCL to
the fault incremental circuit, we have
Znn ΔXn = ΔVn
(1.12)
where
Znn = Yn^−1 = (A Yb A^T)^−1
(1.13)
and the k-node-fault diagnosis equation in terms of the m accessible node voltages is
Zmn ΔXn = ΔVm
(1.14)
and the formula for calculating the internal node voltages is given by
ΔVl = Zln ΔXn
(1.15)
Assume that there are only k faulty nodes and k < m. Then, by checking the consistency
of all possible combinations of k nodes, we can locate the faulty nodes from
Equation (1.14). Node-fault diagnosis may require less computation owing to n < b.
ΔXn = Af ΔXf
(1.16)
where Af is the node incidence matrix corresponding to the possible faulty branches
connected to the faulty nodes and ΔXf is the corresponding branch fault source vector.
However, we should note that if the number of possible faulty branches is larger
than the number of faulty nodes, k, then it may not be possible to locate the faulty
branches using this equation. This is the case when the possible faulty branches
form loops. Otherwise, the node incidence matrix corresponding to the possible faulty
branches connected to the faulty nodes is left invertible. There are topological restrictions
on this method: full column rank requires that the possible
faulty branches do not form loops. A graph method has been proposed for locating
faulty branches in a faulty circuit with the fault-free nodes and associated branches
taken away [36]. A multiple excitation method will be given in the next section which
will overcome the above limitations.
1.2.6
Parameter identification after k-node fault location
This section addresses how to determine the faulty component values after k-node
fault location. For branch-fault diagnosis, after fault location we can easily determine
the values of the faulty components, ΔYk, from ΔXk = ΔYk (Vk + ΔVk), as ΔVk can
be obtained and ΔXk is known after fault location. The bilinear relation method can also
be used as discussed in Section 1.2.4. Furthermore, the general bilinear relations of
analogue circuits given in References 7, 8 and 20–22 may also be used to determine
the faulty component values as well.
After fault location using node-fault diagnosis, determination of faulty component
values is not so simple. Below we present a method for this, which has not been
published in the literature. Assume that there are only k faulty nodes and that the ith faulty
node contains mi branches, i = 1, 2, ..., k. Without loss of generality, for the ith
faulty node we derive the component value determination equations. The jth branch
connected to node i in the fault incremental circuit in Section 1.2.1 can be described as
Δij = yj Δvj + Δyj (vj + Δvj)
Applying KCL to node i, we have:
(v1 + Δv1)Δy1 + (v2 + Δv2)Δy2 + ... + (vmi + Δvmi)Δymi
= −(y1 Δv1 + y2 Δv2 + ... + ymi Δvmi)
Note that after faulty node location, all internal node voltages can be calculated.
Thus, all branch voltages can be computed. So applying mi excitations with the same
solved twice. Once a Δyj is obtained from one faulty node, it can be used as
a known value for the other faulty node, which then has one equation fewer to solve.
Theoretically, the minimum number of equations needed is the
number of branches between faulty nodes or the faulty nodes and ground.
A faulty node that has a grounded branch is said to be independent because it
contains a branch that is not owned by other faulty nodes. Otherwise, it is said to be
dependent. A dependent node may not need to be dealt with as its branch component
values can be obtained by solving other faulty node equations. In practice we can use
the following steps to make sure that we solve the minimum number of equations.
Supposing that the first hi branches are not connected to the faulty nodes that
have already been dealt with, then only hi independent excitations are needed for
node i. The new equations can be written as
(v1 + Δv1)^(1) Δy1 + ... + (vhi + Δvhi)^(1) Δyhi
= −(y1 Δv1^(1) + ... + ymi Δvmi^(1)) − [(vhi+1 + Δvhi+1)^(1) Δyhi+1 + ... + (vmi + Δvmi)^(1) Δymi]
(v1 + Δv1)^(2) Δy1 + ... + (vhi + Δvhi)^(2) Δyhi
= −(y1 Δv1^(2) + ... + ymi Δvmi^(2)) − [(vhi+1 + Δvhi+1)^(2) Δyhi+1 + ... + (vmi + Δvmi)^(2) Δymi]
...
(v1 + Δv1)^(hi) Δy1 + ... + (vhi + Δvhi)^(hi) Δyhi
= −(y1 Δv1^(hi) + ... + ymi Δvmi^(hi)) − [(vhi+1 + Δvhi+1)^(hi) Δyhi+1 + ... + (vmi + Δvmi)^(hi) Δymi]
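Once the faulty nodes are located and the branch voltages computed, each node contributes a small linear system in the unknown admittance deviations, with one equation per excitation. A minimal numpy sketch of that solve step, using synthetic placeholder voltages and an assumed single faulty branch:

```python
import numpy as np

# Hypothetical node with h = 3 unknown branch admittance deviations dy_j.
# Rows of V_act are the known actual branch voltages (v_j + dv_j) under the
# h independent excitations; the values here are synthetic placeholders.
V_act = np.array([[0.9, 0.4, 0.1],
                  [0.2, 1.1, 0.3],
                  [0.5, 0.2, 0.8]])
dy_true = np.array([0.0, -1.0, 0.0])   # assumed: only branch 2 is faulty

# For a consistent fault, the right-hand sides (the -sum_j y_j dv_j^(l) terms)
# equal V_act @ dy_true; we synthesise them that way here.
rhs = V_act @ dy_true

dy = np.linalg.solve(V_act, rhs)       # one small solve per faulty node
print(dy)                              # recovers dy_true
```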
1.2.7
Cutset-fault diagnosis
Research on multiple-fault diagnosis has been mainly focused on branch- and node-fault diagnosis, as discussed in the previous sections. This section is concerned with
cutset-fault diagnosis as proposed and investigated for both linear and non-linear
circuits by Sun [25, 37, 38]. Relations of branch-, node- and cutset-fault diagnosis
methods are also discussed.
A branch is said to be measurable if the two nodes to which the branch is connected are accessible. The branch voltage of a measurable branch can be obtained
directly from the measured accessible node voltages.
Ztt ΔXt = ΔVt
(1.17)
where
Ztt = Yt^−1 = (D Yb D^T)^−1
(1.18)
The cutset-fault diagnosis equation in terms of the p measurable tree branch voltages is
Zpt ΔXt = ΔVp
(1.19)
and the formula for calculating the unmeasurable tree branch voltages is given by
ΔVq = Zqt ΔXt
(1.20)
Similar to the branch- and node-fault diagnosis, assuming that there are k
simultaneous faulty cutsets and k < p, we can locate the faulty cutsets from
Equation (1.19).
After faulty-cutset location, we can easily determine that all branches in the fault-free cutsets are not faulty. All possible faulty branches are in the faulty cutsets. If a
faulty cutset contains only one possible faulty branch, this branch must be faulty. The
faulty branches and faulty component values in the faulty cutsets after calculating the
internal tree branch voltages in Equation (1.20) may be determined by Equation (1.21):
ΔXt = D ΔXb = D ΔYb D^T (Vt + ΔVt)
(1.21)
(1.22)
and the formula for calculating the internal tree branch voltages is given by ΔVq =
Zqb ΔXb. Equation (1.22) may benefit from the selectability of trees.
1.2.7.3 Relation of branch-, node- and cutset-fault diagnosis
The three fault vectors are linked by ΔXn = A ΔXb, ΔXt = D ΔXb and ΔXn =
At ΔXt. The matrix At is invertible and its inverse can be obtained using a
simple graph algorithm. If any of ΔXb, ΔXn and ΔXt is known, the other two may
be found using the relations. Also, the cutset-fault diagnosis method will become the
node-fault diagnosis method when a cutset reduces to a node.
1.2.7.4 Loop- and mesh-fault diagnosis
Theoretically, we can also very easily define and derive the loop- and mesh-fault
diagnosis problems, but unfortunately they are not useful practically because, for k-loop
faults and k-mesh faults, loop- and mesh-fault diagnosis methods require that
m (>k) loop and m mesh currents should be measurable, respectively. Measuring
branch currents is not preferred in ICs as it requires breaking the connections (not
in situ). The loop- and mesh-fault diagnosis methods are mentioned here merely for
theoretical completeness of k-fault diagnosis.
1.2.8
Tolerance effects and treatment
In all k-fault diagnosis methods, all non-faulty components are assumed to take on
their nominal values. However, in actual circuits, the values of non-faulty components
fall randomly within their tolerance ranges. The reliability of diagnosis results is
thus affected; sometimes this may result in false fault declaration or in real faults
being missed, making fault diagnosis less accurate. The tolerance effect becomes
more severe when the fault-to-tolerance ratio is small.
To solve this problem, one method is to apply a threshold reflecting tolerance effects
for compatibility checking. Another method is to use some optimization method to
search for the faults by minimizing an objective error function. We can also discount
tolerance effects from the actual circuit to have a modified circuit with net changes
caused by the faults only. This method may need a separate tolerance analysis of
the CUT by using the differential incremental circuit concept, a special case of the
fault incremental circuit, as mentioned in Section 1.2.1. Some detailed discussion of
tolerance effects on fault diagnosis may be found in References 4 and 5.
1.3
Class-fault diagnosis of analogue circuits
1.3.1
Class-fault diagnosis and general algebraic method for classification
Zmki^L = (Zmki^T Zmki)^−1 Zmki^T
(1.23)
From Equations (1.9), (1.10) and (1.23), we know that for the faulty branch set
Zmki Zmki^L ΔVm = ΔVm, which is equivalent to rank[Zmki, ΔVm] = k.
Definition 1.3 The k-branch set Sj is said to be dependent on the k-branch set Si if
Zmki Zmki^L Zmkj = Zmkj.
The dependence relation is an equivalence relation. We can use this relation to
classify all k-branch sets, that is, if Si and Sj are dependent, Si and Sj belong to
the same class; otherwise they fall into two different classes. It can be proved that
Zmki Zmki^L Zmkj = Zmkj is equivalent to rank[Zmki, zjl] = k, l = 1, 2, ..., k,
where zjl is the lth column of Zmkj.
Theorem 1.6 Assume that j ≠ i. If for j = j1, j2, ..., ju we have Zmki Zmki^L Zmkj = Zmkj,
and for all other j, Zmki Zmki^L Zmkj ≠ Zmkj, then the k-branch sets Si, Sj1, Sj2, ..., Sju form a
class.
Theorem 1.7 Assume j ≠ i1, i2, ..., ik. If for j = j1, j2, ..., je, Zmki Zmki^L zj = zj, and for
all other j, Zmki Zmki^L zj ≠ zj, then the k-branch sets formed by the k + e branches i1, i2, ..., ik and
j1, j2, ..., je form a class.
Theorem 1.8 If for some (k + 1) branches, rank [Zm(k+1) ] = k, then all k-branch
sets formed by these (k + 1) branches belong to the same class.
If Zmki Zmki^L ΔVm = ΔVm, the k-branch set Si is said to be consistent. It can be proved
that if Si and Sj are dependent, they are both consistent or both inconsistent. It can
also be proved that if Si and Sj are simultaneously consistent, Si and Sj are dependent.
A class Ci is said to be faulty if it contains the faulty branch set. A class Ci is said
to be consistent, if the k-branch sets in the class are consistent. If Si is faulty, it is
consistent and if Si is inconsistent, it is not faulty. Thus, the faulty class must be
the consistent class and an inconsistent class is not faulty. Owing to the equivalence
relation, if there is one consistent branch set, the class is consistent and if there is one
inconsistent branch set, the class is inconsistent. There is only one consistent class and
the faulty class can be uniquely determined. Clearly, we can identify the faulty class
by checking only one branch set in each class and once a consistent set/class is found,
we do not need to check any more as the remaining classes are not faulty. When the
number of classes is equal to the number of branch sets, that is, each class contains
only one branch set, the class-fault diagnosis method reduces to the k-branch-fault
diagnosis method.
In the above we assume that all k-branch sets are of full column rank. In the cases
that there are some branch sets which are not of full column rank, the method can also
be used with some generalization. This can be possible by putting all k-branch sets
which are not of full rank together as a class, called the non-full rank class. We can find
all branch sets with rank[Zmki] < k by checking whether det(Zmki^T Zmki) = 0.
For all full rank branch sets we do classification and identification as normal. If a
normal full rank class is faulty by consistency checking, the fault diagnosis is completed. If none of the full rank classes is faulty, we judge the non-full rank class as
the faulty class.
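The algebraic dependence test and the non-full-rank screening above can be sketched with numpy; the matrices below are synthetic stand-ins for the Zmki blocks, and the projector form Z(Z^T Z)^−1 Z^T matches the left-inverse notation used here:

```python
import numpy as np

def left_proj(Z):
    """Orthogonal projector onto the column space of Z (full column rank)."""
    return Z @ np.linalg.inv(Z.T @ Z) @ Z.T

def dependent(Zi, Zj, tol=1e-9):
    """S_j depends on S_i when projecting Z_mkj onto range(Z_mki) changes nothing."""
    return np.allclose(left_proj(Zi) @ Zj, Zj, atol=tol)

def full_rank(Z, tol=1e-12):
    """rank[Z_mki] < k is detected by det(Z^T Z) = 0."""
    return abs(np.linalg.det(Z.T @ Z)) > tol

# Toy data: S1 and S2 share the same 2-D column space (same class), S3 spans
# a different subspace, and S4 is rank deficient (non-full rank class).
B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])
S1 = B
S2 = B @ np.array([[2.0, 1.0], [0.0, 1.0]])    # same column space as S1
S3 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
S4 = np.array([[1.0, 2.0], [1.0, 2.0], [0.0, 0.0], [0.0, 0.0]])

print(dependent(S1, S2), dependent(S1, S3), full_rank(S4))
# True (same class), False (different class), False (rank deficient)
```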
Classification should be conducted from k = 1 to k = m − 1, unless we know
the k value. Classes are determined for each k value using the method given in the
above. A class table similar to a fault dictionary can be formed before test. The Zmki of
one branch set (any one of the k-branch sets) in each class, computable before test,
can also be included in the class table for consistency checking to identify the faulty
class after test.
The class-fault diagnosis method may be suitable for relatively large-scale circuits
as it targets a faulty region and a class may actually be a subcircuit. In class-fault
diagnosis, the after-test computation level is small because classification can be
conducted and the class table can be constructed before test. The number of classes is
smaller than the number of branch sets and, owing to the unique identifiability, we can stop
checking once a class is found to be consistent; thus the number of consistency checks
is at worst equal to the number of classes. There is also no need for testability design
due to the unique identifiability. The method has no restriction; the only assumption
is m > k. The method can also be used to classify k-node sets and k-cutset sets [28].
After the determination of the faulty class, we can further determine the k faulty
branches. If the faulty class contains only one k-branch set, that branch set is the
faulty one. Otherwise, the two-excitation methods based on the invariance of the faulty
component values may be used to identify the faulty branch set from others in the
faulty class.
The class-fault diagnosis technique is the best combination of the SBT and SAT
methods, retaining the advantages of both. It uses compatibility verification of linear
equations, but the class table is very similar to a fault dictionary. It can deal with
multiple soft faults and the after-test computation level is small. Topological classification methods to be introduced in the next section can make classification simpler
and computation before test smaller.
1.3.2
Class-fault diagnosis and topological technique for classification
On the basis of the algebraic classification theory discussed in the above, we present
a topological classification method. First we give some definitions. If some of the
branches in a loop also constitute a cutset, we say the loop contains a cutset. If some
of the branches in a cutset also constitute a loop, we say the cutset contains a loop.
Theorem 1.9 We can find the k-branch sets which are not of full rank topologically
as below [29, 30].
1. When branches i1, i2, ..., ik form a loop or dependent cutset, Si is not of full
rank.
2. The (k + 1), k-branch sets formed by the (k + 1) branches in a (k + 1)-branch
loop containing a dependent cutset are not of full rank. The (k + 1), k-branch
sets formed by the (k + 1) branches in a (k + 1)-branch-dependent cutset
containing a loop are not of full rank.
3. The k-branch sets formed by the (k + 1) branches in a (k + 1)-branch loop
or dependent cutset that shares k branches with another loop containing a
dependent cutset are not of full rank. The k-branch sets formed by the (k + 1)
branches in a (k + 1)-branch dependent cutset or loop that shares k branches
with another dependent cutset containing a loop are not of full rank.
Theorem 1.10 For all normal full rank k-branch sets we can classify them
topologically as below [29, 30]:
1. The (k + 1), k-branch sets in a (k + 1)-branch loop belong to the same class.
The (k + 1), k-branch sets in a (k + 1)-branch dependent cutset belong to the
same class. As a special case of the latter, supposing that an inaccessible node
has (k + 1) branches, the (k + 1), k-branch sets formed by the (k + 1) branches
connected to the node belong to the same class.
2. When a (k + 1)-branch loop or dependent cutset shares k branches with another
(k + 1)-branch loop or dependent cutset, all k-branch sets formed by the
branches in both belong to the same class.
3. In a (k + 2)-branch loop containing a dependent cutset, all those k-branch sets
that do not form the dependent cutset belong to the same class. Similarly, in a
(k + 2)-branch dependent cutset containing a loop, all those k-branch sets that
do not form the loop belong to the same class.
To use the topological classification theorems, we need to find all related loops
and dependent cutsets in the CUT in order to find all non-full rank k-branch sets
and classify all full rank k-branch sets. Dependent cutsets can be found equivalently
in N0 , since it has the same dependent cutsets as those in N, as shown in Theorem
1.5. A systematic algorithm for the construction of the complete dictionary-like class
table has been given in References 29 and 30. The faulty class can be identified
by verifying the consistency of any k-branch set in each class, as discussed in the
preceding section. If none of the full rank classes is consistent, then the non-full rank
class is the faulty class.
1.3.3
t-class-fault diagnosis and topological method for classification
We take a different view of class-fault diagnosis, which may not necessarily be based
on the k-fault diagnosis method. We focus on the effects of faults at the output,
ΔVm. Two sets of different numbers of faulty branches can cause the same ΔVm.
For example, using the current source shifting theorems [62], in a three-branch loop
of the fault incremental circuit, the effect of the three branches being faulty can
be equivalent to any two branches being faulty, as shifting one fault compensation
current source to the other two branches would not change ΔVm. They can thus be put
into the same class. Using the branch-fault diagnosis equation Zmb ΔXb = ΔVm,
this means that for the two-branch set, Zm2 ΔX2 = ΔVm and for the three-branch
set, Zm3 ΔX3 = ΔVm. Although Zm3 is not of full rank due to the loop, Zm2 is of
full rank. So even if there are three faulty branches, by checking the consistency of
Zm2 ΔX2 = ΔVm we can still identify the faulty class. Note here that two is not the
number of real faults and the number of faults is k = 3. Generalizing the above simple
consideration, another topological method of class-fault diagnosis of analogue circuits
is described in Reference 31. This method allows the k faulty-branch set not to be of
full rank, which cannot be dealt with in a normal way by the methods presented above.
1.3.3.1 Classification theorem
The following discussion is based on the branch-fault diagnosis equation, but
introducing a new type of classification.
Definition 1.4 An i-branch set and a j-branch set are said to be t-dependent if the
same ΔVm is caused when their branches are faulty.
For the two branch sets, we have Zmi ΔXi = Zmj ΔXj = ΔVm. Note that i and
j can be equal, for example, i = j = k, which is the case of class-fault diagnosis
in Sections 1.3.1 and 1.3.2 and they can also be different, which is a new case. To
reflect the difference we use the phrase t-dependence to define the relation, where
t will become clear later. It is evident that this dependence relationship is also an
equivalence relation. So we can classify branch sets in a circuit by combining all
dependent branch sets into a class. Obviously each concerned branch set can lie in
one and only one class. A branch set is a class itself only if it is not dependent on
other branch sets. A topological classification theorem is given below.
Theorem 1.11 [31] The (t + 1)-branch set and (t + 1), t-branch sets formed by the
(t + 1) branches in a (t + 1)-branch loop belong to the same class. The (t + 1)-branch
set and (t + 1), t-branch sets formed by the (t + 1) branches in a (t + 1)-branch
dependent cutset belong to the same class. As a special case of the latter, supposing
that an inaccessible node has (t + 1) branches connected to it, the (t + 1)-branch set
and (t + 1), t-branch sets formed by the (t + 1) branches belong to the same class.
Note that ΔXb can be looked upon as a fault excitation current source vector.
Therefore, the above theorem can be easily proved by means of the theorems of
current source and voltage source shift [62]. On the basis of the above theorem some
1.4
Fault diagnosis of non-linear circuits
Analogue circuit fault diagnosis has proved to be a very difficult problem. Fault
diagnosis of non-linear circuits is even more difficult due to the challenge in fault
modelling and the complexity of non-linear circuits. There has been less work on
fault diagnosis of non-linear circuits than of linear circuits in the literature. As
practical circuits are always non-linear (devices such as diodes and transistors are
non-linear), the development of efficient methods for fault diagnosis of non-linear
circuits becomes particularly important. Sun and co-workers [32–39] have conducted
extensive research into fault diagnosis of non-linear circuits and in References 32–39
they have proposed a series of linear methods. This section summarizes some of the
results.
1.4.1
Fault modelling and fault incremental circuits
Fault modelling is a difficult task. For linear circuits, deviation of component values
from their nominal values is used to judge whether or not a circuit has faults. Parameter identification methods were developed to try to calculate all component values
to determine such deviations. Subsequently, modelling faults by equivalent compensation current sources was proposed. In this method, the component value change is
equivalently described by a fault excitation source. If the fault source current is not
equal to zero, the corresponding component is faulty. Here the real component values
are not the target, but the incremental current sources caused by faults which are used
as an indicator/verifier. This modelling method has resulted in a large range of fault
verification methods. For non-linear circuits, fault modelling by component value
deviation is possible, but is not a convenient or preferred choice, as in many cases
there is no direct single value which can represent the state of a non-linear component
like a linear one. A non-linear component often contains several parameters and any
parameter-based method may result in too many non-linear equations. Fortunately,
the fault compensation source method can be easily extended to non-linear circuits.
Any two-terminal non-linear component can be represented by a single compensation source no matter how many parameters are in the characterization function. The
resulting diagnosis equations are exactly (not approximately) linear, although the
circuit is non-linear, thus reducing computation time and memory. This is obviously
an attractive feature.
In the fault modelling of non-linear circuits [3235], the key problem is that a
change in the operation point of a non-linear component could be caused by either a
fault in itself or by faults in other components. The fault model must be able to tell
the real fault from the fake ones. Whether or not a non-linear component is faulty
should be decided by whether the actual operating point of the component falls on to
the nominal characteristic curve, so as to distinguish it from the fake fault due to the
operation point movement of the non-linear component caused by the faults in other
components.
Consider a nominal non-linear resistive component with the characteristic
i = g(v)
(1.24)
If the non-linear component is not faulty, the actual branch current and voltage due
to faults in the circuit will satisfy:
i + Δi = g(v + Δv)
(1.25)
that is, the actual operation point remains on the nominal non-linear curve, although
moved from the nominal point (i, v); otherwise, the component is faulty since the real
operation point shifts away from the nominal non-linear curve, which means that the
non-linear characteristic has changed.
Introducing
x = i + Δi − g(v + Δv)
(1.26)
we can then use x to determine whether or not the non-linear component is faulty. If
x is not equal to zero, the component is faulty; otherwise it is not faulty according to
Equation (1.25). Using Equations (1.26) and (1.24), we can write Δi = g(v + Δv) −
g(v) + x and further:
Δi = yΔv + x
(1.27)
where
y = [g(v + Δv) − g(v)]/Δv
(1.28)
which is the incremental conductance at the nominal operation point and can be
calculated when Δv is known.
Equation (1.27) can be seen as a branch equation where Δi and Δv are the branch
current and voltage, respectively; y is the branch admittance and x is a current source.
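The fault indicator x and the exact incremental conductance y can be illustrated numerically; the diode-like characteristic and the operating point below are assumed for the sketch:

```python
import numpy as np

def g(v):
    """Assumed nominal characteristic of a diode-like non-linear resistor."""
    return 1e-3 * (np.exp(v / 0.026) - 1.0)

v, dv = 0.10, 0.02          # nominal operating voltage and fault-caused shift

# Fault-free component: the actual operating point stays on the nominal
# curve, so the actual current is g(v + dv) and the indicator x of Equation
# (1.26) vanishes exactly.
i = g(v)
i_actual = g(v + dv)        # i + di for a fault-free component
x = i_actual - g(v + dv)    # x = (i + di) - g(v + dv) = 0

# Exact incremental conductance of Equation (1.28), no derivative approximation:
y = (g(v + dv) - g(v)) / dv
di = i_actual - i
branch_ok = np.isclose(di, y * dv + x)   # branch relation (1.27) holds

# Faulty component: the characteristic itself has changed, so the actual
# operating point leaves the nominal curve and x becomes non-zero.
i_faulty = 2e-3 * (np.exp((v + dv) / 0.026) - 1.0)
x_faulty = i_faulty - g(v + dv)

print(branch_ok, x == 0.0, x_faulty != 0.0)   # True True True
```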
Suppose that the circuit has c non-linear components and all non-linear components
are two-terminal voltage controlled non-linear resistors and have the characteristic
i = g(v). For all non-linear components, we can write:
ΔIc = Yc ΔVc + ΔXc
(1.29)
Equation (1.29) can be treated as the branch equation corresponding to the non-linear
branches in the fault incremental circuit, where Yc is the branch admittance
matrix with individual elements given by Equation (1.28), ΔIc the branch current vector,
ΔVc the branch voltage vector and ΔXc the current source vector with individual
elements given by Equation (1.26).
Suppose that the CUT has b linear resistor branches, the branch equation of the
linear branches in the fault incremental circuit was derived in Section 1.2.1 and is
rewritten with new equation numbers for convenience:
ΔIb = Yb ΔVb + ΔXb
(1.30)
ΔXb = ΔYb (Vb + ΔVb)
(1.31)
Assume that the circuit to be considered has a branches, of which b branches are
linear and c non-linear, a = b + c. The components are numbered in the order of
linear to non-linear elements. The branch equation of the fault incremental circuit can
be written by combining Equations (1.30) and (1.29) as
ΔIa = Ya ΔVa + ΔXa
(1.32)
where ΔIa = [ΔIb^T, ΔIc^T]^T, ΔVa = [ΔVb^T, ΔVc^T]^T, ΔXa = [ΔXb^T, ΔXc^T]^T and
Ya = diag{Yb, Yc}.
Note that the fault incremental circuit is linear, although the nominal and faulty
circuits are non-linear. This makes fault diagnosis of non-linear circuits much simpler,
with the same complexity as for linear circuits. Also note that during the derivation,
no approximation is made, so the linearization method is exact. The traditional
linearization method uses the differential conductance at the nominal operation point,
dg(v)/dv, causing an inaccuracy in the calculated x and thus an inaccuracy in the
fault diagnosis. Similar to fault diagnosis of linear circuits based on the fault incremental
circuit, we can derive branch-, node- and cutset-fault diagnosis equations of
non-linear circuits based on the formulated fault incremental circuit.
Non-linear controlled sources and three-terminal devices can also be modelled.
For example, for a non-linear VCCS with i1 = gm(v2), we have Δi1 = ym Δv2 + x1
in the fault incremental circuit, where ym = [gm(v2 + Δv2) − gm(v2)]/Δv2 and x1 =
i1 + Δi1 − gm(v2 + Δv2). This remains a VCCS with the incremental current of the
controlled branch, the incremental voltage of the controlling branch and a compensation
current source in the controlled branch.
Suppose that a three-terminal non-linear device, with one terminal as common,
has the following functions:
i1 = g1 (v1 , v2 )
i2 = g2 (v1 , v2 )
1.4.2
Fault location and identification
Assume that a non-linear resistive circuit has n nodes (excluding the reference node),
m of which are accessible. Also assume that all non-linear branches are measurable
so that Yc can be calculated once Vc is measured. Using the fault incremental circuit
we can derive the branch-fault diagnosis equation as [32, 33, 35]:
Zma ΔXa = ΔVm
(1.33)
and the formula for calculating the internal node voltages is given by
ΔVl = Zla ΔXa
(1.34)
faulty nodes and cutsets by non-zero elements in ΔXn and ΔXt, respectively. Further
determination of the faulty branches (linear or non-linear) can be conducted based on
ΔXn = A ΔXa and ΔXt = D ΔXa for node- and cutset-fault diagnosis, respectively.
In the above we have assumed that all non-linear branches are measurable. Thus,
non-linear components are connected among accessible nodes and ground and are in
the chosen tree as measurable tree branches. This may limit the number of non-linear
components and when choosing test nodes we need to select those nodes connected
by non-linear components. This may not be a serious problem, since in practical electronic circuits and systems, linear components are dominant; there are usually only a
very few non-linear components in an analogue circuit. It is noted that the coefficients
of all diagnosis equations are determined after test as calculation of Yc can only be
obtained after Vc is measured. However, this is a rather simple computation. Also,
partitioning the node admittance matrix or the cutset admittance matrix according to
accessible and inaccessible nodes or tree branches, only the block of dimension
m × m or p × p corresponding to the accessible nodes or tree branches is related to Yc.
The other three blocks can be obtained before test because they do not contain Yc.
Using block-based matrix manipulation, the contribution of the m × m or p × p block
can be moved to the right-hand side to join the incremental accessible node or tree
branch voltage vector for after-test computation, and thus the main coefficient matrix
on the left-hand side of the diagnosis equations can still be computed before test [35].
In the next section, we will further discuss other ways of dealing with non-linear
components.
1.4.2.1 Bilinear function for k-fault parameter identification
For non-linear circuit fault diagnosis we focus on fault location by compensation
current sources rather than parameter identification, since for non-linear components
defining a deviation in branch admittance is either not possible or would not provide
any useful further information about the faulty state. We can, however, continue
to use the deviation model for linear components, Equation (1.31), to determine the
values of the faulty linear components.
Generally, we can determine the values of the faulty linear components using
methods similar to those used for linear circuits after branch-, node- and cutset-fault
diagnosis, treating X_c as known current sources. We note here that bilinear
relations between linear component value increments and accessible node voltage
increments can also be established for non-linear circuits. This allows us to determine
the values of faulty linear components and to develop a two-excitation method with
enhanced diagnosability. Suppose that there are k1 faulty linear components and k2
faulty non-linear components, k1 + k2 = k. On the basis of the branch-fault diagnosis
method and equations derived using the nodal analysis method, for non-linear circuits
we can also derive a bilinear relation, given in Reference 39 as:
col(ΔY_k1) = {diag[A_k1^T (V_n + T_nm ΔV_m)]}^(-1) W_k1,k Z^L_mk ΔV_m        (1.35)

where W_k1,k = [U_k1,k1, O_k1,k2], U_k1,k1 is a unit matrix of dimension k1 × k1 and
O_k1,k2 is a zero matrix of dimension k1 × k2. The meanings of the other symbols,
including Z^L_mk and T_nm, are the same as those in Section 1.2.4.
1.4.3
Alternative fault incremental circuits and fault diagnosis
The fundamental issues of fault diagnosis of non-linear circuits have been discussed
above. Now we address some alternative, possibly more useful, solutions for different
situations of non-linear circuit fault diagnosis.
1.4.3.1 Alternative fault models of non-linear components [33, 34]
To lift the requirement that all non-linear components be measurable, we can apply
equivalent transformations to the non-linear branches. For a non-linear component
i = g(v), as modelled in Section 1.4.1, the corresponding branch equation in the
fault incremental circuit is given by Equation (1.27). If we add to and subtract from
Equation (1.27) the same term y′Δv, the equation remains unchanged. That is

Δi = y′Δv + (yΔv − y′Δv + x)        (1.36)

Introducing:

x′ = (y − y′)Δv + x        (1.37)

we obtain

Δi = y′Δv + x′        (1.38)

Equation (1.38) has the same form as Equation (1.27). So the corresponding
branch in the fault incremental circuit has y′ as the branch admittance and x′ as the
compensation current source.
Two points are worth noting. One is that y′ can take any value and can thus be set
arbitrarily before test. If we use Equation (1.38) for all non-linear components, the
admittance matrix of the fault incremental circuit will be known before test, and so
will all diagnosis equation coefficients. The other is that x′ may be non-zero even for
a fault-free non-linear component, unless y′ is chosen to be equal to y. However, if we
can determine x′, then x may be recovered from Equation (1.37) once y and Δv are
calculated. The true fault state of the non-linear component can still be found.
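A one-line numerical check of the two observations above (all values invented for illustration): with an arbitrarily chosen y′, the apparent source x′ is non-zero even for a fault-free branch, yet the true x is recovered exactly through Equation (1.37).

```python
# Fault-free non-linear branch in the fault incremental circuit: di = y*dv + x, x = 0.
y, dv, x = 0.8, 0.5, 0.0       # local admittance, incremental voltage, true source
y_alt = 2.0                    # y' chosen arbitrarily before test

di = y * dv + x                # what the branch actually does
x_alt = di - y_alt * dv        # x' seen by the alternative model, Equation (1.38)
x_recovered = x_alt - (y - y_alt) * dv   # invert Equation (1.37) to recover x
```

Here x′ = −0.6 despite the branch being fault-free, but x is recovered as zero once y and Δv are known, so the true fault state is not lost.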
1.4.3.2 Quasi-fault incremental circuit and fault diagnosis [33, 34, 36]
We consider two cases here. The first case is that we use the alternative model for
all non-linear branches, irrespective of whether or not a non-linear component is
measurable. On the basis of Equations (1.38) and (1.37) we can write the branch
equation for all non-linear branches as
ΔI_c = Y′_c ΔV_c + X′_c        (1.39)

X′_c = (Y_c − Y′_c) ΔV_c + X_c        (1.40)

The overall branch equation of the fault incremental circuit can be obtained by
combining Equations (1.30) and (1.39) as

ΔI_a = Y_a ΔV_a + X_a        (1.41)

where X_a = [X_b^T, X′_c^T]^T and Y_a = diag{Y_b, Y′_c}.
We call the transformed fault incremental circuit of Equation (1.41) the quasi-fault
incremental circuit. We can derive branch-, node- and cutset-fault diagnosis equations
on the basis of the quasi-fault incremental circuit. Taking branch-fault diagnosis as an
example, after determination of X_a we cannot immediately judge the state of the
non-linear components. However, ΔV_c can be calculated, and thus Y_c. Then we can
calculate X_c from Equation (1.40) and use it to decide whether a non-linear component
is faulty.
Because Y′_c can be chosen before test, the diagnosis equation coefficients can be
obtained before test. This reduces computation after test and is good for online
testing. Because all non-linear branches will be in the faulty branch set, and any node
or cutset that contains a non-linear component will behave as a faulty one whether
or not the non-linear component is faulty, the number of possible faulty branches,
nodes and cutsets for branch-, node- and cutset-fault diagnosis will increase. This
may require more test nodes for a circuit that contains more non-linear components.
In the search for faults, only the k-branch sets that contain all non-linear components,
the k-node sets containing all nodes connected to non-linear components and the
k-cutset sets containing all cutsets with non-linear components need to be considered.
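The restricted search described above is easy to mechanize: every candidate k-set must contain all non-linear branches, so only the linear part needs enumeration. A small sketch (branch labels are arbitrary):

```python
import itertools

def candidate_fault_sets(branches, nonlinear, k):
    """Enumerate k-branch candidate sets for the quasi-fault incremental
    circuit: each set must contain every non-linear branch (assumes
    k >= number of non-linear branches)."""
    extra = [b for b in branches if b not in nonlinear]
    need = k - len(nonlinear)
    return [tuple(sorted(list(nonlinear) + list(combo)))
            for combo in itertools.combinations(extra, need)]

# 6 branches, branches 4 and 5 non-linear, searching for k = 3 faults:
cands = candidate_fault_sets(range(6), [4, 5], 3)
```

Instead of all C(6, 3) = 20 branch triples, only the 4 sets containing both non-linear branches need to be checked.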
1.4.3.3 Mixed-fault incremental circuit and fault diagnosis [37-39]
The second case is that for measurable non-linear components we still use the original
model, Equation (1.27), while for unmeasurable non-linear components we use the
alternative model of Equation (1.38). Assuming that there are c1 measurable non-linear
branches and c2 unmeasurable non-linear branches, c1 + c2 = c, we have
the corresponding branch equations of the respective non-linear branches in the fault
incremental circuit as

ΔI_c1 = Y_c1 ΔV_c1 + X_c1        (1.42)

ΔI_c2 = Y′_c2 ΔV_c2 + X′_c2        (1.43)

X′_c2 = (Y_c2 − Y′_c2) ΔV_c2 + X_c2        (1.44)
The overall branch equation of the fault incremental circuit can be obtained by
combining Equations (1.30), (1.42) and (1.43) as

ΔI_a = Y_a ΔV_a + X_a        (1.45)

Partitioning according to the linear and non-linear parts yields the branch-, node- and
cutset-fault diagnosis equations for the linear part, Equations (1.46), (1.48) and (1.50),
and the corresponding after-test formulas for the non-linear part, Equations (1.47),
(1.49) and (1.51), where A_nc,c is the incidence matrix of the n_c non-linear nodes
corresponding to the c non-linear branches and D_tc,c is the t_c × c submatrix of D.
All equation coefficients are functions of Y_a.
On the basis of the above equations, the two-step method can now be stated
as follows. First, solve the corresponding diagnosis equations among Equations (1.46),
(1.48) and (1.50) and decide the fault status of the linear branches, linear nodes and
linear cutsets from X_b, X_nb and X_tb, respectively. Then calculate ΔV_c and Y_c, and
further X_c, X_nc and X_tc, using Equations (1.47), (1.49) and (1.51), and use them to
decide the fault status of the non-linear branches, non-linear nodes and non-linear cutsets.
The two-step method can similarly be explained for fault diagnosis of non-linear
circuits on the basis of the mixed-fault incremental circuit. On the basis of
these two alternative fault incremental circuits, we can also discuss class-fault
diagnosis and linear component value determination [39].
1.5
Recent advances in fault diagnosis of analogue circuits
The following areas and methods of fault diagnosis of analogue circuits have received
particular attention in recent years. Some promising results have been achieved. We
briefly summarize them here, leaving the details to be covered in the following three
chapters.
1.5.1
Test node selection and test signal generation
Considerable research has been conducted on testability analysis and design in terms
of test nodes, test excitations and topological structures of analogue circuits, based on
the k-fault diagnosis methods of fault verification [22-24]. In addition to the early
work on the fault dictionary method [11, 12], References 40 and 41 have proposed
computationally efficient methods for test node selection in analogue fault dictionary
techniques. Test node selection techniques can be classified into one of two categories:
selection by inclusion or selection by exclusion. In the inclusion method, the desired
optimum set of test nodes is initialized as the empty set, and a new test point is added
to it when needed. In the exclusion approach, the desired optimum set is initialized to
include all available test nodes; a test node is then deleted if its exclusion does not
degrade the degree of fault diagnosis. In Reference 40, strategies and methods for
including or excluding a node as a test node have been proposed: an inclusion
method is developed by transforming the problem of selecting test measurements
into the well-known sorting problem, and minimal test sets are obtained by an
efficient sorting-based exclusion method. Reference 41 has further
proposed an entropy-based method for optimum test point selection, based on an
integer-coded dictionary. The minimum test set is found by using the entropy index
of test points.
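One plausible reading of entropy-guided selection over an integer-coded dictionary (the data and the greedy strategy below are illustrative assumptions, not the exact algorithm of Reference 41): at each step, pick the test point whose fault codes, combined with those already chosen, spread the fault set most evenly.

```python
import math
from collections import Counter

def entropy(signatures):
    """Shannon entropy of the distribution of fault signatures."""
    n = len(signatures)
    return -sum((c / n) * math.log2(c / n) for c in Counter(signatures).values())

def greedy_select(dictionary, n_faults):
    """dictionary: {test_point: [integer code of each fault at that point]}.
    Add test points until every fault has a distinct combined signature."""
    chosen, sigs = [], [()] * n_faults
    while len(set(sigs)) < n_faults and len(chosen) < len(dictionary):
        best = max((t for t in dictionary if t not in chosen),
                   key=lambda t: entropy([sigs[f] + (dictionary[t][f],)
                                          for f in range(n_faults)]))
        chosen.append(best)
        sigs = [sigs[f] + (dictionary[best][f],) for f in range(n_faults)]
    return chosen

# Hypothetical 4-fault dictionary over three candidate test nodes:
picked = greedy_select({'n1': [0, 0, 1, 1],
                        'n2': [0, 1, 2, 3],
                        'n3': [0, 1, 0, 1]}, n_faults=4)
```

In this toy dictionary the node 'n2' alone separates all four faults (its codes have maximum entropy), so the greedy pass stops after a single test point.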
References 42 and 43 have studied the testability of analogue circuits in the frequency
domain using the fault observability concept. Steady-state frequency responses are
used, and methods for choosing input frequencies and test nodes to enhance fault
observability of the CUT are proposed. The methods of References 42 and 43 are
based on differential sensitivity and incremental sensitivity analysis, respectively. The
differential-sensitivity-based method is practical for handling soft faults, while
for large deviations and hard faults, accuracy improves with the use of incremental
sensitivity.
References 44 and 45 have investigated the testability of analogue circuits in
the time domain. Transient time responses are used. The proposed test generation
method in Reference 44 is targeted towards detecting specification violations caused
by parametric faults. The relationship between circuit parameters and circuit functions
is used for deriving optimum transient tests. An algorithm for generating the optimum
transient stimulus and for determining the time points at which the output needs to be
sampled is presented. The research on the optimum input stimulus and sampling points
is formulated as an optimization problem where the parameters of the stimulus (the
amplitude and pulse widths for pulse trains, slope for ramp stimulus) are optimized.
The test approach is demonstrated by deriving the optimum piecewise linear (PWL)
input waveform for transient testing. The PWL input stimulus is used because any
general transient waveform can be approximated by PWL segments.
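A PWL stimulus of the kind described is just a list of (time, value) inflexion points evaluated by linear interpolation; the optimizer then only has to adjust the point values. A minimal sketch (waveform values are arbitrary):

```python
def pwl(points, t):
    """Evaluate a piecewise-linear waveform defined by (time, value)
    inflexion points; the first point is (0, 0) as in the text."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]          # hold the last value afterwards

stimulus = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]   # illustrative ramp/hold/ramp
```

Any transient waveform can be approximated this way by adding inflexion points, which is why PWL segments are a convenient parameterization for transient test generation.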
In Reference 45 a method of selecting transient and a.c. stimuli has been presented,
based on genetic algorithms and wavelet packet decomposition. The method minimizes
the ambiguity of faults in the CUT. It also reduces memory and computation
costs because no matrix calculation is required in the optimization. The stimuli
considered are PWL transient and a.c. sources. A PWL source is defined by the
time interval between two neighbouring inflexion points, the number of inflexion
points (the first point being (0, 0)) and the magnitude of each inflexion point,
which can be varied within a range. An a.c. source is defined by the
test frequency and corresponding magnitude. The frequency can be varied within a
range, with the first test frequency equal to zero (d.c. test), and the total number
of test frequencies can be chosen. Using wavelet packet decomposition to formulate
the objective function and genetic algorithms to optimize it, we can obtain the
magnitudes of the inflexion points of transient PWL sources or the values of test
frequencies of a.c. sources.
1.5.2
1.5.3
Generally, tolerance effects make the parameter values of circuit components
uncertain and the computational equations of traditional methods complex. The
non-linear relation between circuit performance and its constituent components
makes it even more difficult to diagnose faults online and may lead to false
diagnoses. To overcome these problems, a robust and fast fault diagnosis method
that takes tolerances into account is needed. Neural networks offer
large-scale parallel processing, parallel storage, robust adaptive learning and online
computation, and they provide a mechanism for adaptive pattern classification.
They are therefore well suited to fault diagnosis of analogue circuits with tolerances.
Several neural-network-based approaches have recently been proposed for analogue
fault diagnosis and they appear to be very promising [50-56]. Most studies make use
of the adaptive and robust classification features of neural networks [50-55]; however,
other studies simply use neural networks as a fast and efficient optimization
method [56]. More recently, wavelet-based techniques have been proposed for fault
diagnosis and testing of analogue circuits [45, 52, 55]. A neural-network-based fault
diagnosis method has been developed in Reference 52 using a wavelet transform
as a preprocessor to reduce the number of input features to the neural network. In
Reference 55 the authors have used a wavelet transform and packets to extract appropriate feature vectors from the signals sampled from the CUT under various faulty
conditions. Chapter 3 will describe different methods for fault diagnosis of analogue
circuits using neural networks and wavelet transforms.
1.5.4
The size and complexity of integrated analogue circuits and systems have continued
to grow at a remarkable pace in recent years. Many of the proposed fault diagnosis
methods, however, are only suitable for relatively small circuits. For large-scale
analogue circuits, the decomposition and hierarchical approach has attracted
considerable attention in recent years [35, 57-61]. Some early work on fault diagnosis
of large-scale analogue circuits is based on the decomposition of circuits and
verification of certain KCL equations [58]. This method divides the CUT into a number
of subcircuits based on nodal decomposition and requires that the measurement nodes
be the decomposition nodes. Branch decomposition and branch-node mixed
decomposition methods can also be used. A simple method for cascaded analogue
systems has also been proposed [59]. This method first decomposes a large-scale
circuit into a cascaded structure and then verifies the invariance of simple voltage
ratios of the different stages to isolate the faulty stage(s). This method has the minimum
computational cost both before and after test, as it only needs to calculate the voltage
ratios of accessible nodes and does not need to solve any linear or non-linear equations.
Another method is first to divide the CUT into a number of subcircuits, then find the
equivalent circuits of the subcircuits and use k-fault diagnosis methods to locate the
faults of the large-scale
circuits by diagnosing the equivalent circuits [35]. Here an m-terminal subcircuit is
equivalently described by (m − 1) branches, no matter how complex the inside of the
subcircuit. If any of the (m − 1) equivalent branches is faulty, then the subcircuit is
faulty. More recently, hierarchical methods based on component connection models
[57] have been proposed [60, 61]. The application of hierarchical techniques to the
fault diagnosis of large-scale analogue circuits will be reviewed in Chapter 4.
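The voltage-ratio check for cascaded systems mentioned above can be sketched in a few lines (the tolerance threshold and the ratio values are invented for illustration): a stage whose measured voltage ratio departs from its nominal value beyond a tolerance band is flagged as faulty.

```python
def faulty_stages(nominal_ratios, measured_ratios, rel_tol=0.05):
    """Flag cascaded stages whose accessible-node voltage ratio deviates
    from nominal by more than rel_tol (relative deviation)."""
    return [i for i, (rn, rm) in enumerate(zip(nominal_ratios, measured_ratios))
            if abs(rm - rn) > rel_tol * abs(rn)]

# Three-stage cascade; stage 1 shows a deviated ratio:
flags = faulty_stages([2.0, 0.5, 3.0], [2.0, 0.8, 3.0])
```

No equations are solved: the invariance of the ratios isolates the faulty stage directly, which is why the method's before- and after-test cost is minimal.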
1.6
Summary
A review of fault diagnosis techniques for analogue circuits, with a focus on fault
verification methods, has been presented. A systematic treatment of the k-fault
diagnosis theory and methods for both linear and non-linear circuits as well as the
class-fault diagnosis technique has been given. The fault incremental circuit for
both linear and non-linear circuits has been introduced, based on which a coherent
discussion on different fault diagnosis methods has been achieved.
The k-fault diagnosis method involves only linear equations after test and requires
only a few accessible nodes. Both algebraic and topological methods have been presented in detail for fault verification and testability analysis in branch-fault diagnosis.
A bilinear method for k-component value determination and a multiple excitation
method for parameter identification in node-fault diagnosis have been described. The
cutset-fault diagnosis method has also been discussed, which is more flexible and less
restrictive than branch- and node-fault diagnosis methods owing to the selectability
of trees in a circuit.
A class-fault diagnosis theory for fault location has been introduced, which comprises both algebraic and topological classification methods. The class-fault diagnosis
method classifies branch sets, node sets or cutset sets according to an equivalence relation. The faulty class can be uniquely identified by checking the consistency of any set
in a class. This method has no structural restriction and classification can be carried
out before test. Class-fault diagnosis can be viewed as a combination of the fault
dictionary and fault verification methods.
Linear methods and special considerations for fault diagnosis of non-linear circuits have been discussed. Faults in non-linear circuits are accurately modelled by
compensation current sources and the linear fault incremental circuit has been constructed. Linear equations for fault diagnosis of non-linear circuits can be derived
based on the fault incremental circuit. All k-fault diagnosis and class-fault diagnosis
methods developed for linear circuits have been extended to non-linear circuits using
the fault incremental circuit.
Some of the latest advances in fault diagnosis of analogue circuits have been reviewed,
including selection and design of test points and test signals. The next three chapters
will continue the discussion of fault diagnosis of analogue circuits with a detailed
coverage of three topical fault diagnosis methods: the symbolic function, neural
network and hierarchical methods in Chapters 2, 3 and 4, respectively.
1.7
References
12 Lin, P.-M., Elcherif, Y.S.: 'Analog circuits fault dictionary: new approaches
and implementation', International Journal of Circuit Theory and Applications,
1985;13(2):149-72
13 Berkowitz, R.S.: 'Conditions for network-element-value solvability', IRE
Transactions on Circuit Theory, 1962;6(3):24-9
14 Navid, N., Willson, A.N. Jr.: 'A theory and algorithm for analog circuit fault
diagnosis', IEEE Transactions on Circuits and Systems, 1979;26(7):440-57
15 Trick, T.N., Mayeda, W., Sakla, A.A.: 'Calculation of parameter values from node
voltage measurements', IEEE Transactions on Circuits and Systems, 1979;26(7):
466-73
16 Roytman, L.M., Swamy, M.N.S.: 'One method of the circuit diagnosis',
Proceedings of the IEEE, 1981;69(5):661-2
17 Bandler, J.W., Biernacki, R.M., Salama, A.E., Starzyk, J.A.: 'Fault isolation in
linear analog circuits using the L1 norm', Proceedings of IEEE International
Symposium on Circuits and Systems, 1982, pp. 1140-3
18 Biernacki, R.M., Bandler, J.W.: 'Multiple-fault location in analog circuits', IEEE
Transactions on Circuits and Systems, 1981;28(5):361-6
19 Starzyk, J.A., Bandler, J.W.: 'Multiport approach to multiple fault location in
analog circuits', IEEE Transactions on Circuits and Systems, 1983;30(10):762-5
20 Trick, T.N., Li, Y.: 'A sensitivity based algorithm for fault isolation in analog
circuits', Proceedings of IEEE International Symposium on Circuits and Systems,
1983, pp. 1098-1101
21 Sun, Y.: 'Bilinear relations for fault diagnosis of linear circuits', Proceedings of
CSEE and IEEE Beijing Section National Conference on CAA and CAD, Zhejiang,
1988
22 Sun, Y.: 'Determination of k-fault-element values and design of testability in
analog circuits', Journal of Electronic Measurement and Instrument, 1988;2
(3):25-31
23 Sun, Y., He, Y.: 'Topological conditions, analysis and design for testability in
analogue circuits', Journal of Hunan University, 2002;29(1):85-92
24 Huang, Z.F., Lin, C., Liu, R.W.: 'Node-fault diagnosis and a design of testability',
IEEE Transactions on Circuits and Systems, 1983;30(5):257-65
25 Sun, Y.: 'Faulty-cutset diagnosis of analog circuits', Proceedings of CIE 3rd
National Conference on CAD, Tianjin, 1988, pp. 3-14 to 3-18
26 Sun, Y.: 'Theory and algorithms of solving a class of linear algebraic equations',
Proceedings of CSEE and IEEE Beijing Section National Conference on CAA and
CAD, Zhejiang, 1988
27 Togawa, Y., Matsumoto, T., Arai, H.: 'The TF-equivalence class approach to
analog fault diagnosis problems', IEEE Transactions on Circuits and Systems,
1986;33(10):992-1009
28 Sun, Y.: 'Class-fault diagnosis of analog circuits: theory and approaches',
Journal of China Institute of Communications, 1990;11(5):23-8
29 Sun, Y.: 'Faulty class identification of analog circuits', Proceedings of CIE 3rd
National Conference on CAD, Tianjin, 1988, pp. 3-40 to 3-43
48 Starzyk, J.A., Pang, J., Manetti, S., Piccirilli, M.C., Fedi, G.: 'Finding ambiguity
groups in low testability analog circuits', IEEE Transactions on Circuits and
Systems, 2000;47(8):1125-37
49 Stenbakken, G.N., Souders, T.M., Stewart, G.W.: 'Ambiguity groups and
testability', IEEE Transactions on Circuits and Systems, 1989;38(5):941-7
50 Spina, R., Upadhyaya, S.: 'Linear circuit fault diagnosis using neuromorphic
analyzers', IEEE Transactions on Circuits and Systems-II, 1997;44(3):188-96
51 Aminian, F., Aminian, M., Collins, H.W.: 'Analog fault diagnosis of actual circuits
using neural networks', IEEE Transactions on Instrumentation and Measurement,
2002;51(3):544-50
52 Aminian, M., Aminian, F.: 'Neural-network based analog circuit fault diagnosis
using wavelet transform as preprocessor', IEEE Transactions on Circuits and
Systems-II, 2000;47(2):151-6
53 He, Y., Ding, Y., Sun, Y.: 'Fault diagnosis of analog circuits with tolerances using
artificial neural networks', Proceedings of IEEE APCCAS, Tianjin, China, 2000,
pp. 292-5
54 He, Y., Tan, Y., Sun, Y.: 'A neural network approach for fault diagnosis of large
scale analog circuits', Proceedings of IEEE ISCAS, Arizona, USA, 2002, pp.
153-6
55 He, Y., Tan, Y., Sun, Y.: 'Wavelet neural network approach for fault diagnosis
of analog circuits', IEE Proceedings - Circuits, Devices and Systems, 2004;151
(4):379-84
56 He, Y., Sun, Y.: 'A neural-based L1-norm optimization approach for fault
diagnosis of non-linear circuits with tolerances', IEE Proceedings - Circuits,
Devices and Systems, 2001;148(4):223-8
57 Wu, C.C., Nakazima, K., Wei, C.L., Saeks, R.: 'Analog fault diagnosis with
failure bounds', IEEE Transactions on Circuits and Systems, 1982;29(5):277-84
58 Salama, A.E., Starzyk, J.A., Bandler, J.W.: 'A unified decomposition approach
for fault location in large scale analog circuits', IEEE Transactions on Circuits
and Systems, 1984;31(7):609-22
59 Sun, Y.: 'Fault diagnosis of large-scale linear networks', Journal of Dalian
Maritime University, 1985;11(3); also Proceedings of CIE National Conference on
LSICAD, Huangshan, 1985, pp. 95-101
60 Ho, C.K., Shepherd, P.R., Eberhardt, F., Tenten, W.: 'Hierarchical fault diagnosis
of analog integrated circuits', IEEE Transactions on Circuits and Systems,
2001;48(8):921-9
61 Sheu, H.T., Chang, Y.H.: 'Robust fault diagnosis for large-scale analog circuits
with measurement noises', IEEE Transactions on Circuits and Systems, 1997;44
(3):198-209
62 Sun, Y.: 'Some theorems on the shift of nonideal sources and circuit equivalence',
Electronic Science and Technology, 1987;17(5):18-20.
Chapter 2
2.1
Introduction
Figure 2.1    [Illustration: the I/O relations of a circuit shown alongside its list of
component values (e.g. Ri = 10 kΩ, Cj = 5 pF, Rk = 15 kΩ, Lm = 3.3 mH, gn = 3 …);
original caption not recovered]
The symbolic approach is a natural choice, because an I/O relation, in which the
component values are the unknowns, is properly represented by a symbolic I/O relation.
The chapter is organized as follows. In Section 2.2, a brief review of symbolic
analysis is given. Section 2.3 is dedicated to symbolic procedures for testability
analysis, that is, testability evaluation and ambiguity group determination. As will
be shown, the testability and ambiguity group concepts are of fundamental importance
for determining the solvability degree of the fault diagnosis problem, at the global
level and at the component level respectively, once the test points have been selected.
Thus, testability analysis is essential both to the designer, who must know which test
points to make accessible, and to the test engineer, who must know how many and
which parameters can be uniquely isolated by the planned tests.
In Section 2.4 fault diagnosis procedures based on the use of symbolic techniques
are reported.
Both Sections 2.3 and 2.4 refer to analogue linear or linearized circuits. This
is not so severe a restriction because the analogue part of modern complex systems
is almost entirely linear, while the non-linear functions are moved towards the digital
part [1]. However, in Section 2.5 a brief description of a possible use of symbolic
methods for testability analysis and fault diagnosis of non-linear analogue circuits is
given.
2.2
Symbolic analysis
F(s) = N(s, p1, ..., pm) / D(s, p1, ..., pm)
     = [Σ_i s^i a_i(p1, ..., pm)] / [Σ_i s^i b_i(p1, ..., pm)]        (2.1)
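For a concrete instance of Equation (2.1), a first-order RC low-pass has F(s) = 1/(1 + sRC): each polynomial coefficient a_i, b_i is a simple function of the component parameters. (This tiny example is written by hand, not generated by a symbolic engine.)

```python
def lowpass_coeffs(R, C):
    """Coefficients of F(s) = N/D for the RC low-pass F(s) = 1/(1 + s*R*C):
    every coefficient is a function of the component parameters, as in (2.1)."""
    num = [1.0]            # a0
    den = [1.0, R * C]     # b0, b1
    return num, den

num, den = lowpass_coeffs(R=1000.0, C=1e-6)   # 1 kOhm, 1 uF -> pole at 1000 rad/s
```

A symbolic analyser produces exactly this kind of parameter-to-coefficient mapping, but automatically and for arbitrary circuit topologies.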
2.2.1
Symbolic analysis techniques
During the past 30 years, several algorithms and computer programs for circuit symbolic analysis have been introduced. These symbolic techniques can be classified,
according to the basic method used, as follows:
1. Algebraic methods:
   - numerical interpolation methods
   - parameter extraction methods
   - determinant expansion methods.
2. Topological methods:
   - tree enumeration methods
     - two-graph method
     - directed-tree enumeration method
   - flowgraph methods
     - signal-flow-graph method
     - Coates-flow-graph method.
The algebraic methods are based on the idea of generating the symbolic circuit
equations by symbolic manipulation of algebraic expressions, directly solving
the linear system that describes the circuit behaviour, obtained, for example, using
the modified nodal analysis (MNA) technique. Several computer programs have been
developed along these lines, and interesting results have been obtained, in
particular with determinant expansion methods.
The topological methods are based, essentially, on the enumeration of certain
subgraphs of the circuit graph. Among these methods, the two-graph method is
particularly efficient, mainly because it intrinsically does not generate cancelling
terms. The presence of cancelling terms can produce a severe overhead in
computational time, owing to the post-processing needed to handle the cancellations.
The basic two-graph method works only on circuits that contain resistors, capacitors,
inductors and voltage-controlled current sources, but it is possible to include all
the other circuit elements using simple preliminary network transformations.
2.2.2
The SAPWIN program
A personal computer program based on the two-graph method is SAPWIN,
developed over recent years by the authors.
SAPWIN is an integrated package of schematic capture, symbolic analysis and
graphic post-processing for linear analogue circuits. The program provides several
tools to create the schema of a linear analogue circuit, to perform its symbolic analysis
and to show the results in graphic form. In the schematic capture option, the main
screen is a white sheet where the user can draw a circuit by using typical Windows
tools to copy, cut, paste, move and edit a component or a part of the circuit. All the
passive components, controlled sources and many linear models of active devices
(operational amplifiers and small-signal equivalent models of BJTs and MOSFETs)
are available. The program can produce symbolic network functions where each component can appear with its symbolic name or with a numerical value. The graphical
post-processor is able to show the network function and to plot gain, phase, delay,
pole and zero position, time-domain step and impulse response. The program can be
freely downloaded from http://cirlab.det.unifi.it/SapWin.
The symbolic expressions generated by SAPWIN are also saved, in a particular
format, in a binary file, which can constitute an interface to other programs. During
the past years, several applications have been developed using SAPWIN as a symbolic
simulator engine, such as symbolic sensitivity analysis, transient analysis of power
electronic circuits, testability evaluation and circuit fault diagnosis. All the programs
presented in this chapter are based on the use of SAPWIN.
2.3
Testability and ambiguity groups
In general, locating a fault in an analogue circuit consists in measuring all
its internal parameters and comparing the measured values with their nominal
working ranges. This kind of measurement, as can be imagined, is not straightforward
and, often, it is not possible to characterize all the parameters. The possibility of
actually accessing this information depends on the kind of measurements made
on the circuit, as well as on the internal topology of the circuit itself. The selection
of the set of measurements, that is, of the test points, is therefore an essential problem
in fault diagnosis applications, because not all possible test points can be reached
easily. For example, it is usually very difficult to measure currents without
breaking connections and, for complex circuits, a large number of measurements may
not be economically convenient. In other words, test point selection must take into
account practical measurement problems that are closely tied to the technology used
and to the application field of the circuit under consideration. So, in order to perform
test point selection, it is necessary to have a quantitative index with which to compare
different possible choices. The testability measure concept meets this requirement.
Testability is closely tied to the concept of network-element-value solvability,
which was first introduced by Berkowitz [8]. Subsequently, a very useful testability
measure was introduced by Saeks and co-workers [9-12]. Other definitions have been
presented in later years (see, for example, References 13-15), so there
is no universal definition of analogue testability. However, the Saeks definition has
been the most widely used [16-19], because it provides a well-defined quantitative
measure of testability. In fact, once a set of test points has been selected, by representing
the circuit under test (CUT) through a set of equations that are non-linear with respect to
the component parameters, the testability definition gives a measure of the solvability of
these equations and indicates the ambiguity resulting from an attempt to solve such
equations in a neighbourhood of almost any failure. Therefore, this testability measure
allows one to know a priori whether a unique solution of the fault diagnosis problem
exists. Furthermore, if this solution does not exist, it gives a quantitative measure of how
far we are from it, that is, how many components cannot be diagnosed with the given
test point set.
2.3.1
Algorithms for testability evaluation
The analogue CUT can be considered as a multiple-input multiple-output linear
time-invariant system. Using the MNA, the circuit can be described by the following
equation:

A(p, s) [ y(p, s) ; E(p, s) ] = [ x(s) ; 0 ]        (2.2)

where p = [p1 p2 ... pm]^T is the vector of the potentially faulty parameters, assuming
that all faults are expressed as parameter variations that do not influence the circuit
topology (faults such as shorts and opens are not considered; that is, the approach is
suitable for parametric faults but not for catastrophic faults), x(s) = [x1(s) x2(s) ...
xnx(s)]^T is the input vector, A(p, s) is the characteristic matrix, conformable to the
vectors, y(p, s) = [y1(p, s) y2(p, s) ... yny(p, s)]^T is the vector of the output test points
(voltages and/or currents) and E(p, s) = [E1(p, s) E2(p, s) ... Ene(p, s)]^T is the vector
of the inaccessible node voltages and/or currents of all the elements that do not have
an admittance representation.
The fault diagnosis equations of the CUT are constituted by the network
functions relevant to each test point output and to each input. They can be
obtained from Equation (2.2) by applying the superposition principle and have the
following form:

h_i^(j)(p, s) = y_i^(j)(p, s) / x_j(s) = (−1)^(i+j) det A_ij(p, s) / det A(p, s),
        i = 1, ..., ny;  j = 1, ..., nx        (2.3)

with A_ij(p, s) a minor of the matrix A(p, s) and y_i^(j)(p, s) the ith output due to the
contribution of input x_j only. As can easily be noted, the total number of fault
diagnosis equations is equal to the product of the numbers of outputs and inputs.
Let (s) = (rk (s)) be the Jacobian matrix associated with the algebraic diagnosis Equation (2.3) evaluated at a generic frequency s and at a nominal value p0 of
the parameters. From Equation (2.3) we obtain for rk (s):
\[
\Omega_{rk}(s) = (-1)^{i+j}\,\frac{\partial}{\partial p_k}\!\left[\frac{\det A_{ij}(p,s)}{\det A(p,s)}\right]_{p=p_0} \qquad (2.4)
\]
where $r = (i-1)\,n_x + j$. The matrix $\Omega(s)$ is rational in $s$ and, from Equation (2.4), the functions $p_{rk}(s) = \left(\det A(p,s)\right)^2\big|_{p=p_0}\,\Omega_{rk}(s)$ are polynomial functions in $s$. As shown in References 9–12, the testability measure $T$ of the analogue system, evaluated in a suitable neighbourhood of the nominal value $p_0$, is given by the maximum number of linearly independent columns of $\Omega(s)$:
\[
T = \operatorname{rank_{col}}(\Omega(s)) \qquad (2.5)
\]
2.3.1.1 Numerical approach
This method provides a valid means for the numerical computation of testability. However, the numerical programs obtained in this way have a very high computational complexity. First, the calculation of the coefficients of the polynomials $p_{rk}$ requires the knowledge of the values assumed by the polynomials in at least $d+1$ points, where $d$ is the degree of the polynomial; this degree must be estimated a priori, on the basis of the type of components present in the CUT. Therefore, for large circuits, the numerical calculation of a considerable number of circuit sensitivities is required. Furthermore, the program must take into account the inevitable round-off errors introduced by the algorithm used for sensitivity computation. This problem was partially overcome by using two different polynomial expansions (see, for example, Reference 23). Nevertheless, for large circuits these errors can have a magnitude so large that the obtained testability values must be considered only as an estimate of the true testability.
2.3.1.2 Symbolic approach
The drawbacks of the previous numerical approach are overcome if we are able to determine the polynomial matrix directly in a completely symbolic form. In fact, it has been proven [24] that the number of linearly independent columns of the matrix $P(s)$ is equal to the rank of a matrix $B$ constituted by the coefficients of the polynomial functions of $P(s)$. The entries of the matrix $B$ are then independent of the complex frequency $s$. In other words, expressing $P(s)$ in the following way:
\[
P(s) = \begin{bmatrix} p_{11}(s) & p_{12}(s) & \ldots & p_{1m}(s) \\ \vdots & & & \vdots \\ p_{l1}(s) & p_{l2}(s) & \ldots & p_{lm}(s) \end{bmatrix} \qquad (2.7)
\]
with $p_{rk}(s) = b^0_{rk} + b^1_{rk}s + \cdots + b^d_{rk}s^d$ and $d = \max\{\deg p_{11}, \deg p_{12}, \ldots, \deg p_{lm}\}$, we have $\operatorname{rank_{col}} P(s) = \operatorname{rank} B$, where $B$ is a matrix of order $(d+1)\,l \times m$ $(l = n_x n_y)$ of the following form:
\[
B = \begin{bmatrix}
b^0_{11} & b^0_{12} & \ldots & b^0_{1m} \\
\vdots & & & \vdots \\
b^0_{l1} & b^0_{l2} & \ldots & b^0_{lm} \\
b^1_{11} & b^1_{12} & \ldots & b^1_{1m} \\
\vdots & & & \vdots \\
b^1_{l1} & b^1_{l2} & \ldots & b^1_{lm} \\
\vdots & & & \vdots \\
b^d_{11} & b^d_{12} & \ldots & b^d_{1m} \\
\vdots & & & \vdots \\
b^d_{l1} & b^d_{l2} & \ldots & b^d_{lm}
\end{bmatrix} \qquad (2.8)
\]
In a numerical computation of testability, the previous result is not easily applicable, because the computation of the coefficients $b_{rk}$ by means of classical numerical analysis algorithms is very difficult and may cause considerable drawbacks, particularly for large networks. The result is very useful if the coefficients $b_{rk}$ are in completely symbolic form. In fact, in this case, they are functions of the circuit parameters, to which we can assign arbitrary values, because testability is independent of component values [10]. Furthermore, since the matrix $B$ is, essentially, a sensitivity matrix of the CUT, starting from a fully symbolic generation of the network functions corresponding to the selected fault diagnosis equations, it is very easy to obtain symbolic sensitivity functions [30–32]. As a consequence, the use of a symbolic approach simplifies the testability measure procedure and reduces round-off errors, because the entries of $B$ are not affected by any computational error.
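The equivalence $\operatorname{rank_{col}} P(s) = \operatorname{rank} B$ can be illustrated with a small numerical sketch (the polynomial matrix below is a toy example assumed for illustration, not derived from a circuit): the coefficient matrix $B$ of Equation (2.8) and a frequency-sampling cross-check give the same rank.

```python
import numpy as np

# Toy polynomial matrix P(s) with entries p_rk(s) stored as coefficient
# pairs [b0, b1] (constant term first); column 2 is twice column 1:
#   P(s) = [[1+s, 2+2s, s],
#           [3,   6,    1]]
l, m, d = 2, 3, 1
coeffs = {  # (row r, column k) -> [b0, b1]
    (0, 0): [1.0, 1.0], (0, 1): [2.0, 2.0], (0, 2): [0.0, 1.0],
    (1, 0): [3.0, 0.0], (1, 1): [6.0, 0.0], (1, 2): [1.0, 0.0],
}

# Coefficient matrix B of Equation (2.8): one l x m block per power of s
B = np.zeros(((d + 1) * l, m))
for (r, k), (b0, b1) in coeffs.items():
    B[r, k], B[l + r, k] = b0, b1

# Cross-check: evaluate P(s) at d + 1 distinct points and stack the rows;
# the rank of the stacked samples equals rank_col P(s)
samples = np.vstack([
    [[np.polyval(coeffs[(r, k)][::-1], s0) for k in range(m)]
     for r in range(l)]
    for s0 in (1.0, 2.0)
])

# rank B = rank_col P(s) = 2 (columns 1 and 2 are linearly dependent)
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(samples) == 2
```

The sampling route corresponds to the numerical approach criticized above (it needs $d+1$ evaluations and suffers from round-off), whereas $B$ is exact once its coefficients are known symbolically.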
An important simplification of this procedure, reported in Reference 25, results from the fact that the testability measure can be evaluated as the rank of a matrix $B_C$, constituted by the derivatives of the coefficients of the fault diagnosis equations with respect to the potentially faulty circuit parameters.
For the sake of simplicity, let us consider a circuit with only one test point and only one input. In this case there is only one fault diagnosis equation, which must be expressed in the following form:
\[
h(s,p) = \frac{N(p,s)}{D(p,s)} = \frac{\sum_{i=0}^{n} a_i(p)\,s^i}{\sum_{j=0}^{m-1} b_j(p)\,s^j + s^m} \qquad (2.9)
\]
with $p = [p_1, p_2, \ldots, p_R]^t$ the vector of potentially faulty parameters and $n$ and $m$ the degrees of numerator and denominator, respectively. The matrix $B_C$, of order $(m+n+1) \times R$, constituted by the derivatives of the coefficients of $h(s,p)$ in Equation (2.9) with respect to the $R$ unknown parameters, is the following:
\[
B_C = \begin{bmatrix}
\dfrac{\partial a_0}{\partial p_1} & \dfrac{\partial a_0}{\partial p_2} & \ldots & \dfrac{\partial a_0}{\partial p_R} \\
\dfrac{\partial a_1}{\partial p_1} & \dfrac{\partial a_1}{\partial p_2} & \ldots & \dfrac{\partial a_1}{\partial p_R} \\
\vdots & & & \vdots \\
\dfrac{\partial a_n}{\partial p_1} & \dfrac{\partial a_n}{\partial p_2} & \ldots & \dfrac{\partial a_n}{\partial p_R} \\
\dfrac{\partial b_0}{\partial p_1} & \dfrac{\partial b_0}{\partial p_2} & \ldots & \dfrac{\partial b_0}{\partial p_R} \\
\vdots & & & \vdots \\
\dfrac{\partial b_{m-1}}{\partial p_1} & \dfrac{\partial b_{m-1}}{\partial p_2} & \ldots & \dfrac{\partial b_{m-1}}{\partial p_R}
\end{bmatrix} \qquad (2.10)
\]
As shown in Reference 25, the matrix $B_C$ has the same rank as the previously defined matrix $B$, because the rows of $B$ are linear combinations of the rows of $B_C$. Then the testability value can be computed as the rank of $B_C$ by assigning arbitrary values to the parameters $p_i$ and by applying classical triangularization methods. If the CUT is a multiple-input multiple-output system, that is, if there is more than one fault diagnosis equation, the same result can easily be obtained. This is a noteworthy simplification from a computational point of view, because the derivatives of the coefficients of the fault diagnosis equations are simpler to compute than the derivatives of the fault diagnosis equations themselves.
The described procedure has been implemented in the program SYmbolic FAult Diagnosis (SYFAD) [33, 34], based on the software package SAPWIN [35–37].
It should be noted that, from this procedure, it is possible to derive some necessary conditions for a testable circuit (that is, a circuit with maximum testability) which are very simple to apply. These necessary conditions are simply based on the consideration that, for maximum testability, the matrix $B_C$ must have a rank equal to the number of unknown parameters, that is, equal to the number of columns. Then, for a circuit with a given set of test points, we have the following first necessary condition:
A necessary condition for maximum testability is that the number of coefficients in the fault diagnosis equations be equal to or greater than the number of unknown parameters.
Another interesting necessary condition follows from the consideration that the number of coefficients depends on the order of the network. In fact, the maximum number of coefficients of a network function is $2N+1$ if the network is of order $N$. From this consideration and from the previous necessary condition, it is possible to determine the minimum number of fault diagnosis equations, and hence of test points, necessary for maximum testability or, given the number of test points, it is possible to determine the maximum number of unknown parameters compatible with maximum testability. For the single test point case, we have $M_p = 2N+1$, where $M_p$ is the maximum number of unknown parameters, that is, the maximum number of parameters that it is possible to determine with the given fault diagnosis equation. For the multiple test point case, since all the fault diagnosis equations are characterized by the same denominator, we have $M_p = N + n(N+1)$, where $n$ is the number of fault diagnosis equations. In summary, we have the following second necessary condition:
For a circuit of order $N$, with $n$ test points, a necessary condition for maximum testability is that the number of potentially faulty parameters be equal to or lower than $N + n(N+1)$.
In order to obtain the fault diagnosis equations in the form of Equation (2.9), it is necessary to divide all the coefficients of the rational functions by the coefficient of the highest-order term of the denominator, with a consequent complication in the evaluation of the derivatives (derivatives of rational functions instead of polynomial functions). In this case an increase in computing speed can be obtained by applying the approach presented in References 26 and 27, where the testability evaluation is performed starting from fault diagnosis equations with the coefficient of the highest-order term of the denominator different from one.
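As a minimal worked example (a hypothetical first-order RC low-pass, assumed here for illustration and not taken from the chapter's circuits), the matrix $B_C$ of Equation (2.10) can be formed symbolically and its rank compared against the necessary conditions above:

```python
import sympy as sp

R, C = sp.symbols('R C', positive=True)

# Hypothetical first-order RC low-pass written with monic denominator,
# as in Equation (2.9):  h(s,p) = a0 / (s + b0)  with  a0 = b0 = 1/(R*C)
a0 = 1 / (R * C)
b0 = 1 / (R * C)

params = [R, C]
BC = sp.Matrix([[sp.diff(coeff, p) for p in params] for coeff in (a0, b0)])

# Second necessary condition: N = 1, n = 1 test point, so up to
# Mp = N + n*(N + 1) = 3 >= 2 parameters -- the condition is satisfied.
# Yet T = rank BC = 1 < 2: {R, C} form an ambiguity group (only the
# product R*C is identifiable), so the condition is not sufficient.
T = BC.rank()
print(T)  # 1
```

The two rows of $B_C$ coincide here, which is exactly the column/row dependence that the ambiguity group concept of the next subsection formalizes.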
2.3.2 Ambiguity groups
At this point it is important to discuss the importance of the testability concept and to understand the information given by canonical ambiguity group determination. As was previously mentioned, once the matrix $B_C$ has been determined, testability evaluation can be performed by assigning arbitrary values to the component parameters and triangularizing $B_C$. The disadvantage of considering the matrix $B_C$ as the testability matrix, instead of the Jacobian matrix, is that the meaning of testability as a solvability measure of the fault diagnosis equations is less immediate. However, this limitation can be overcome by splitting the solution of the fault diagnosis equations into two phases. In the first phase, starting from the measurements carried out on the selected test points at different frequencies, the coefficients of the fault diagnosis equations are evaluated, possibly exploiting a least-squares procedure in order to minimize the error due to measurement inaccuracy [34]. In the second phase, the circuit parameter values are obtained by solving the non-linear system constituted by the equations expressing the previously determined coefficients as functions of the circuit parameters. In this way, considering $K$ fault diagnosis equations expressed as follows:
\[
h_l(p,s) = \frac{N_l(p,s)}{D(p,s)} = \frac{\sum_{i=0}^{n_l} \left[a_i^{(l)}(p)/b_m(p)\right] s^i}{s^m + \sum_{j=0}^{m-1} \left[b_j(p)/b_m(p)\right] s^j}, \qquad l = 1, \ldots, K \qquad (2.11)
\]
the non-linear system to be solved in the second phase is
\[
\frac{a_0^{(1)}(p)}{b_m(p)} = A_0^{(1)}, \quad \ldots, \quad \frac{a_{n_1}^{(1)}(p)}{b_m(p)} = A_{n_1}^{(1)}
\]
\[
\vdots
\]
\[
\frac{a_0^{(K)}(p)}{b_m(p)} = A_0^{(K)}, \quad \ldots, \quad \frac{a_{n_K}^{(K)}(p)}{b_m(p)} = A_{n_K}^{(K)}
\]
\[
\frac{b_0(p)}{b_m(p)} = B_0, \quad \ldots, \quad \frac{b_{m-1}(p)}{b_m(p)} = B_{m-1} \qquad (2.12)
\]
where $A_i^{(l)}$ and $B_j$ are the coefficient values determined in the first phase.
The Jacobian matrix associated with this system is the matrix $B_C$ reported in Equation (2.13) for the case of $K$ fault equations and $b_m$ different from one:
\[
B_C = \begin{bmatrix}
\dfrac{\partial\left(a_0^{(1)}/b_m\right)}{\partial p_1} & \dfrac{\partial\left(a_0^{(1)}/b_m\right)}{\partial p_2} & \ldots & \dfrac{\partial\left(a_0^{(1)}/b_m\right)}{\partial p_R} \\
\vdots & & & \vdots \\
\dfrac{\partial\left(a_{n_1}^{(1)}/b_m\right)}{\partial p_1} & \dfrac{\partial\left(a_{n_1}^{(1)}/b_m\right)}{\partial p_2} & \ldots & \dfrac{\partial\left(a_{n_1}^{(1)}/b_m\right)}{\partial p_R} \\
\vdots & & & \vdots \\
\dfrac{\partial\left(a_0^{(K)}/b_m\right)}{\partial p_1} & \dfrac{\partial\left(a_0^{(K)}/b_m\right)}{\partial p_2} & \ldots & \dfrac{\partial\left(a_0^{(K)}/b_m\right)}{\partial p_R} \\
\vdots & & & \vdots \\
\dfrac{\partial\left(a_{n_K}^{(K)}/b_m\right)}{\partial p_1} & \dfrac{\partial\left(a_{n_K}^{(K)}/b_m\right)}{\partial p_2} & \ldots & \dfrac{\partial\left(a_{n_K}^{(K)}/b_m\right)}{\partial p_R} \\
\dfrac{\partial\left(b_0/b_m\right)}{\partial p_1} & \dfrac{\partial\left(b_0/b_m\right)}{\partial p_2} & \ldots & \dfrac{\partial\left(b_0/b_m\right)}{\partial p_R} \\
\vdots & & & \vdots \\
\dfrac{\partial\left(b_{m-1}/b_m\right)}{\partial p_1} & \dfrac{\partial\left(b_{m-1}/b_m\right)}{\partial p_2} & \ldots & \dfrac{\partial\left(b_{m-1}/b_m\right)}{\partial p_R}
\end{bmatrix} \qquad (2.13)
\]
Hence, all the information provided by a Jacobian matrix with respect to its corresponding non-linear system can be obtained from the matrix $B_C$.
Summarizing, independently of the fault location method used, the testability value $T = \operatorname{rank} B_C$ gives information on the solvability degree of the problem, as explained by the following:
If $T$ is equal to the number of unknown elements, the parameter values can theoretically be uniquely determined starting from a set of measurements carried out on the test points.
If $T$ is lower than the number $R$ of unknown parameters, a locally unique solution can be determined only if $R - T$ components are considered not faulty.
Generally $T$ is not maximal and the hypothesis of a bounded number $k$ of faulty elements is made (the $k$-fault hypothesis), where $k \le T$. Then, important information is given by the testability value: the solvability degree of the fault diagnosis problem and, consequently, the maximum possible fault hypothesis $k$.
In the case of low testability and the $k$-fault hypothesis, at most a number of faults equal to the testability value can be considered. However, under this hypothesis, whatever fault location method is used, it is necessary to be able to select as potentially faulty parameters a set of elements that represents, as well as possible, all the circuit components. To this end, the determination of both the canonical ambiguity groups and the surely testable group is of fundamental importance. In order to understand this statement better, some definitions and a theorem [20] are now reported.
The matrix $B_C$ does not only give information about the global solvability degree of the fault diagnosis problem. In fact, by noticing that each column is relevant to a specific parameter of the circuit and by considering the linearly dependent columns of $B_C$, other information can be obtained. For example, if a column is linearly dependent with respect to another one, this means that a variation of the corresponding parameter
[Figure 2.2: Example circuit: input Vi, output Vo, components R1, C1, R2, C2, R3, R4, R5]
[Figure 2.3: Results for the circuit of Figure 2.2. Testability value: 3; total number of components: 7; canonical ambiguity groups: {G4, G5} and {C2, G2, G3}]
at most a three-fault hypothesis, that is, a possible solution can be obtained if only three component values are considered as unknowns. On the basis of the previous procedure, the elements to select as representative of the circuit components are the surely testable group components and only one component belonging to one of the two canonical ambiguity groups. Let us suppose, for example, the situation of a single fault. Independently of the fault location method used, if the obtained solution gives $C_1$ or $G_1$ as the faulty element, we can localize the fault with certainty, because both $C_1$ and $G_1$ belong to the surely testable group. If we locate as the potentially faulty element a component belonging to the second-order canonical ambiguity group, we can only know that there is a fault in this ambiguity group, but we cannot locate it exactly, because there is not a unique solution. Instead, if we obtain as the faulty element a component belonging to the third-order ambiguity group, we have a unique solution and then we can localize the fault with certainty. In other words, a fault in a component of this group can be counterbalanced only by simultaneous faults on all the other components of the same group. However, under the single-fault hypothesis, this situation cannot occur.
2.3.3 Singular-value decomposition approach
From the previous section, it is possible to understand the importance of canonical ambiguity group determination. In fact, from knowledge of the order of the minimum canonical ambiguity group it is possible to establish whether a circuit is $k$-fault testable or not. Furthermore, it is important to know also which are the canonical ambiguity groups, in order to choose the potentially faulty elements suitably. In other words, knowledge of the canonical ambiguity groups allows us to determine, taking into account also their intersections, that is, the global ambiguity groups, the testable groups. These are groups of potentially faulty components giving a solution to the problem of fault location. This solution will be unique if the circuit is $k$-fault testable; otherwise it will allow us to confine the presence of faults to well-defined groups of components belonging to global ambiguity groups [20]. So, the importance of canonical ambiguity group determination is twofold: (i) the possibility of establishing a priori whether a circuit is $k$-fault testable; and (ii) the possibility of determining testable groups of components, which are easily obtainable, through a combinatorial procedure, starting from knowledge of the canonical ambiguity groups.
One of the first algorithms for ambiguity group determination was presented in Reference 19. In References 33 and 34 a combinatorial method, based on a symbolic approach, was implemented in the program SYFAD. In Figure 2.4 the flowchart of the algorithm for canonical ambiguity group determination is shown: in the figure, $T$ indicates the circuit testability and $R$ the total number of potentially faulty parameters. Summarizing, the procedure can be described as a process that evaluates the testability of all the combinations of groups of $k$ components, starting from a group constituted by only one component and increasing it up to the maximum allowed number (which is obviously the testability value of the circuit). In the development of the procedure, if an ambiguity group is found, the further combinations that include it as a subset are not considered: in this way the canonical ambiguity groups are determined. In the procedure a classical total pivoting method is used on $B_C$ and its submatrices.
Subsequently, another efficient numerical procedure for ambiguity group determination, based on the QR factorization of the testability matrix, was proposed in References 38 and 39. However, this last method, even though not combinatorial, is a very complex technique for searching for canonical and global ambiguity groups. Furthermore, although the QR decomposition approach presents several interesting features, it suffers from problems related to round-off errors. These problems become particularly critical when the dimensions of the testability matrix (or, in circuit terms, the circuit size) increase. In fact, the procedures for testability and ambiguity group determination are strictly tied to the numerical rank evaluation of the testability matrix and of some of its submatrices. As is well known, the matrix rank computation using QR decomposition, or other triangularization methods, is affected by round-off errors, especially if the matrix is rank deficient, and the numerical rank obtained is often
only an estimate of the effective rank. These numerical problems are mostly overcome by the use of the singular-value decomposition (SVD) approach, which is a powerful technique in many matrix computations and analyses and has the advantage of being more robust to numerical errors. The SVD approach allows us to obtain the effective numerical rank of the matrix, taking into account round-off errors [40]. So, by exploiting the great numerical robustness of the SVD approach, an accurate evaluation of the testability value and an efficient procedure for canonical ambiguity group determination can be obtained, as will be shown in the following [28].
As is well known [40], a matrix $B_C$ with $m$ rows and $n$ columns can be written as follows in terms of its SVD:
\[
B_C = U \Sigma V^T \qquad (2.14)
\]
where $U$ and $V$ are two square matrices of order $m$ and $n$, respectively, and $\Sigma$ is a diagonal matrix of dimension $m \times n$. If $B_C$ has rank $k$, the first $k$ elements $\sigma_i$ on the diagonal of $\Sigma$, called singular values, are different from zero and are ordered as $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_k > 0$. The matrix $\Sigma$ is unique; the matrices $U$ and $V$ are not unique, but are unitary. This means that they have maximum rank and their rows and columns are orthonormal. In our case $B_C$ is the testability matrix, so $n$ is equal to the number of potentially faulty parameters. As is known, the testability value does not depend on the component values [10]. Then, by assigning arbitrary values to the circuit parameters, the numerical value of the entries of $B_C$ can be evaluated and, by applying the SVD, the testability value $T = \operatorname{rank} B_C$ can be determined as the number of non-zero singular values.
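This SVD-based evaluation can be sketched in a few lines (the matrix below is a toy $B_C$, assumed for illustration, with one repeated column standing in for a second-order ambiguity group):

```python
import numpy as np

# Toy testability matrix with arbitrary parameter values assigned:
# column 2 duplicates column 1, mimicking an ambiguity group {p1, p2}
BC = np.array([[1.0, 1.0, 0.0],
               [2.0, 2.0, 1.0],
               [3.0, 3.0, 4.0]])

U, sigma, Vt = np.linalg.svd(BC)

# Effective numerical rank: count singular values above a round-off
# threshold scaled by the largest singular value
tol = max(BC.shape) * np.finfo(float).eps * sigma[0]
T = int(np.sum(sigma > tol))

V_nT = Vt.T[:, T:]                  # last n - T columns of V
assert T == 2
assert np.allclose(BC @ V_nT, 0.0)  # Equation (2.17): they span ker(B_C)
```

The tolerance makes the rank decision robust to round-off, which is precisely the advantage over plain triangularization claimed above.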
Now, $V$ being a unitary matrix and $\operatorname{rank} B_C = T$, by multiplying both members of Equation (2.14) by $V$, the following expression can be obtained:
\[
B_C V = U \Sigma = [\,U_T \Sigma_T \mid 0\,] \qquad (2.15)
\]
where $U_T$ indicates the matrix constituted by the first $T$ columns of $U$ and $\Sigma_T$ the square submatrix of $\Sigma$ containing the singular values. The matrix $U_T \Sigma_T$ has dimension $m \times T$ and the null submatrix $0$ has dimension $m \times (n-T)$. At this point, the following equations can be written:
\[
B_C V_T = U_T \Sigma_T \qquad (2.16)
\]
\[
B_C V_{n-T} = 0_{m \times (n-T)} \qquad (2.17)
\]
where $V_T$ and $V_{n-T}$ are constituted by the first $T$ and the last $n-T$ columns of $V$, respectively.
Equation (2.17) means that the columns of $V_{n-T}$, and hence those of the matrix $H = V_{n-T} V_{n-T}^T$, belong to $\ker B_C$. Furthermore, each row and, consequently, each column of $H$ refers to the corresponding column of $B_C$, that is, in our case, to a specific circuit parameter. In Reference 28 the following theorem and corollaries have been demonstrated.
Theorem 2.2 If in the matrix $B_C$ there are only disjoint canonical ambiguity groups, they are identified by the entries different from zero of the columns of the matrix $H$.
Corollary 1 If in the matrix $B_C$ there are canonical ambiguity groups with non-null intersection, that is, if there are global ambiguity groups, the matrix $H$ provides the disjoint global ambiguity groups.
Corollary 2 If $V_{n-T}$, and hence $H$, has a null row (for $H$, also the corresponding null column), it corresponds to a surely testable element.
Furthermore, in Reference 28 it has also been shown that, if the matrix $H$ has all its entries different from zero, this means that one of the following conditions occurs: there are surely testable elements or there is a unique global ambiguity group. In any case, since we do not know a priori which situation holds for a given circuit, it is necessary to consider a procedure giving the canonical ambiguity groups. If disjoint global ambiguity groups are located in $H$, it is again necessary to consider a procedure giving the canonical ambiguity groups. In practice, the procedure of canonical ambiguity group determination ends at the evaluation of $H$ only if $H$ is constituted by blocks of order two, which locate second-order canonical ambiguity groups; otherwise it must continue.
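Theorem 2.2 and Corollary 2 can be checked on a small numerical sketch (the matrix below is a toy $B_C$, assumed for illustration, with two disjoint second-order groups and one surely testable parameter):

```python
import numpy as np

# Toy B_C whose columns correspond to p1..p5: columns 1-2 coincide and
# columns 3-4 coincide (two disjoint second-order canonical ambiguity
# groups), while p5's column is independent (surely testable).
c1 = np.array([1.0, 0.0, 2.0])
c3 = np.array([0.0, 1.0, 1.0])
c5 = np.array([1.0, 1.0, 0.0])
BC = np.column_stack([c1, c1, c3, c3, c5])

U, sigma, Vt = np.linalg.svd(BC)
T = int(np.sum(sigma > 1e-10))      # testability: 3
V_nT = Vt.T[:, T:]                  # orthonormal basis of ker(B_C)
H = V_nT @ V_nT.T                   # projector onto ker(B_C)

print(np.round(H, 3))
# Non-zero entries of the columns of H single out {p1, p2} and {p3, p4};
# the null fifth row and column mark p5 as surely testable (Corollary 2)
assert abs(H[0, 1]) > 1e-10 and abs(H[0, 2]) < 1e-10
assert np.allclose(H[4, :], 0.0)
```

Note that $H$ is the orthogonal projector onto $\ker B_C$, so it does not depend on which orthonormal kernel basis the SVD happens to return.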
In order to determine a canonical ambiguity group starting from the basis constituted by the columns of $V_{n-T}$, it is necessary to determine a suitable vector $v$ of dimension $n-T$ which, multiplied by $V_{n-T}$, yields the vector of dimension $n$ representing the canonical ambiguity group. In Reference 28 it was demonstrated that vectors belonging to $\ker B_C$ and giving canonical ambiguity groups can be obtained by locating the submatrices $S$ of $V_{n-T}$, of dimension $(n-T-1) \times (n-T)$, whose rank is equal to $n-T-1$. These submatrices have a one-dimensional kernel, whose basis $x$ (a vector with $n-T$ entries) can easily be obtained through the SVD of these matrices. In fact, $x$ corresponds to the last column of the matrix $V$ of the SVD of these matrices. By multiplying $V_{n-T}$ by the basis $x$ of the kernels of all the matrices $S$, canonical ambiguity groups can be obtained [28]. Each vector $y = V_{n-T}\,x$ has null entries in the rows corresponding to the matrix $S$ relevant to $x$, because $x$ is a basis of $\ker S$.
The program Testability and Ambiguity Group Analysis (TAGA) [29] permits us to determine the testability and the canonical ambiguity groups of a linear analogue circuit on the basis of the theoretical treatment reported in Reference 28 and summarized previously. It exploits symbolic analysis techniques and is based on the software package SAPWIN. Once the symbolic network functions have been determined, the testability matrix $B_C$ is built initially in symbolic form and then in numerical form, by assigning arbitrary values to the circuit parameters. At this point the following steps are performed:
1. SVD of the testability matrix $B_C$ and determination of the testability value $T$ and of the matrix $V_{n-T}$.
2. Determination of the matrix $H = V_{n-T} V_{n-T}^T$. If the matrix $H$ is constituted only by blocks of order two, second-order canonical ambiguity groups are located; then stop. Otherwise go to step 3.
3. Selection of a submatrix $S$ of $V_{n-T}$ with $n-T-1$ rows and $n-T$ columns.
4. SVD of $S$. If $\operatorname{rank} S < n-T-1$, go to step 3. If $\operatorname{rank} S = n-T-1$, go to step 5.
5. Multiplication of $V_{n-T}$ by the vector $x$, basis of $\ker S$. If the obtained vector $y$, of dimension $n$, has non-zero entries everywhere except in those relevant to the rows of $S$, a canonical ambiguity group of order $T+1$ has been located; then go to step 8. If the obtained vector $y$ has other null entries, besides those relevant to the rows of $S$, a canonical ambiguity group of order lower than or equal to $T$ has been located; then go to step 6.
6. Insertion of the obtained canonical ambiguity group in a matrix, called the ambiguity matrix, whose number of rows is equal to $n$ and whose number of columns is equal to the total number of determined canonical ambiguity groups.
7. If all the possible submatrices $S$ have been considered, stop. Otherwise, go to step 3, discarding the submatrices $S$ having null rows, because they certainly have a rank less than $n-T-1$.
8. If all the possible combinations of $T$ elements relevant to the canonical ambiguity group of order $T+1$ give testable groups of components, then go to step 7.
The proof of the statements in step 5 is in Reference 28. Furthermore, if there are surely testable elements in the CUT, they correspond to null rows in the ambiguity matrix, because each row of the ambiguity matrix corresponds to a specific potentially faulty circuit element and surely testable elements cannot belong to any canonical ambiguity group of order at most equal to $T$ [28].
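Steps 3–5 above can be sketched numerically for the special case $n-T-1 = 1$, where each candidate submatrix $S$ is a single row of $V_{n-T}$ (the matrix below is a toy $B_C$ with one global ambiguity group $\{p_1, p_2, p_3\}$, assumed for illustration):

```python
import numpy as np

# Toy B_C: columns 1-3 are identical (global ambiguity group {p1, p2, p3}),
# column 4 is independent (p4 surely testable)
c = np.array([1.0, 2.0])
c4 = np.array([0.0, 1.0])
BC = np.column_stack([c, c, c, c4])

U, sigma, Vt = np.linalg.svd(BC)
T = int(np.sum(sigma > 1e-10))           # testability: 2
n = BC.shape[1]
V_nT = Vt.T[:, T:]                       # 4 x (n - T) basis of ker(B_C)

groups = set()
for i in range(n):
    S = V_nT[[i], :]                     # (n-T-1) x (n-T) submatrix (step 3)
    if np.linalg.norm(S) < 1e-8:
        continue                         # null row: discard it (step 7)
    x = np.linalg.svd(S)[2].T[:, -1]     # basis of ker(S) (step 4)
    y = V_nT @ x                         # candidate group vector (step 5)
    groups.add(frozenset(int(j) + 1 for j in np.flatnonzero(np.abs(y) > 1e-8)))

print(sorted(sorted(g) for g in groups))
# -> [[1, 2], [1, 3], [2, 3]]: the canonical second-order ambiguity groups
#    contained in the global group; p4 never appears (surely testable)
```

Each recovered vector $y$ lies in $\ker B_C$ and is null in the row defining $S$, so its remaining non-zero entries identify one canonical ambiguity group, exactly as step 5 prescribes.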
It is important to remark that the availability of the network functions in symbolic form strongly reduces the computational effort in the determination of the entries of the matrix $B_C$, because they can simply be led back to derivatives of sums of products.
Let us consider, as an example, the circuit shown in Figure 2.5. The output $V_0$ has been chosen as the test point. In Figure 2.6 the matrices $V_{n-T}$ and $H$ are shown. As can be noted, the matrix $H$ has columns whose entries are all different from zero. Then, the whole procedure of canonical ambiguity group determination has to be applied and, in a very short time, the results are obtained, as shown in Figure 2.6, where the ambiguity matrix and the canonical ambiguity groups are reported. In the ambiguity matrix it is possible to locate three surely testable components ($C_1$, $R_1$, $R_4$) and ten canonical ambiguity groups. The computational times are very short: on a Pentium III 500 MHz, the symbolic analysis, performed by SAPWIN, requires 70 ms and the canonical ambiguity group determination, performed by TAGA, requires 50 ms.
[Figure 2.5: Example circuit with components R1, R2, R3, R4, R5, R6, C1, C2 and output V0 chosen as the test point]

2.3.4 Testability analysis of non-linear circuits
The previously presented definitions and methods are based on the study of network functions in the transformed domain. Thus, they are rigorously applicable only to linear circuits. However, by means of some considerations, they can also give useful information for the fault diagnosis of non-linear circuits.
To discuss such considerations it is useful to take into account two different kinds of non-linear circuits: those in which the non-linear behaviour is structural, that is, in which the presence of non-linear components is essential to the desired behaviour (rectifiers, mixers, modulators and so on), and those in which the non-linear behaviour can be considered as being parasitic.
For the latter case, the techniques of testability analysis presented above can be applied to a linearized model of the CUT and can be used directly to optimize the selection of test points in the circuit. Obviously the non-linear behaviour, which could be prevalent in fault conditions, will make the fault location phase much more difficult.
For the former case, the use of the proposed techniques can be useful if it is possible to represent the non-linear circuit by means of suitable piece-wise linear (PWL) models. In this case the testability analysis can be performed on the corresponding PWL circuit. This aspect will be discussed further in subsection 2.5.1.
2.4 Fault diagnosis of linear analogue circuits
In past years, a noteworthy number of techniques have been proposed for the fault diagnosis of analogue linear and non-linear networks (excellent presentations of the state of the art in this field can be found in References 16, 41 and 42). All these techniques can be classified into two basic groups: simulation-before-test (SBT) techniques and simulation-after-test (SAT) techniques. Both SBT and SAT techniques share a combination of simulations and measurements, the difference depending on the time sequence in which they are applied.
[Figure 2.6: Results for the circuit of Figure 2.5: testability value 3; the matrices V_{n-T} and H; the ambiguity matrix, locating three surely testable components (C1, R1, R4) and ten canonical ambiguity groups]
In the former case (SBT), the CUT is simulated under different faults and, after a set of measurements, a comparison between the actual circuit response to a set of stimuli and the presimulation gives an estimate of how probable a given fault is. There are many different procedures, but they often rely on constructing a fault dictionary, that is, a prestored data set corresponding to the values of some network variables when a given fault exists in the circuit. These techniques are especially suited to the location of hard or catastrophic faults for two reasons: the first is that they are generally based on the assumption that any fault influences the large-signal behaviour of the network; the second is that the dictionary size becomes very large in multiple soft-fault situations.
The SAT approaches are suitable for cases where the faults perturb the small-signal behaviour, that is, they are especially suitable for diagnosing parametric faults (that is, deviations of parameter values beyond a given tolerance). In these methods, starting from the measurements carried out on the selected test points, the network parameters are reconstructed and compared with those of the fault-free network to identify the fault.
The use of symbolic methods is particularly suited to SAT techniques and, in particular, to those based on parameter identification. This is due to the fact that SAT approaches need more computational time than SBT approaches and, using a symbolic approach, noteworthy advantages can be attained, not only in computational terms, but also in terms of automatically including testability analysis in the fault diagnosis procedure, which, as already specified in the previous section, is a necessary and preliminary step for any method of fault diagnosis.
In this section, methods of fault diagnosis based on parameter identification are considered. In these techniques the aim is the estimation of the effective values of the circuit parameters. To this end it is necessary to know a series of measurements carried out on a previously selected test point set, the circuit topology and the nominal values of the components. Once these data are known, a set of equations representing the circuit is determined. These equations are non-linear with respect to the parameter values, which represent the unknowns. Their solution gives the effective values of the circuit parameters. In both the determination and the solution of the non-linear equation set, symbolic analysis can be advantageously used, as will be shown in this section.
In parametric fault diagnosis techniques, the measurements can be either in the frequency domain or in the time domain. Generally, the procedures based on time domain measurements do not exploit symbolic techniques in the fault location phase. Nevertheless, also for these procedures, if a symbolic approach is used for testability analysis, a considerable improvement in the quality of the results can be obtained. An example of this kind is reported in Reference 43, where a neural network approach is used in the fault location phase and a symbolic testability analysis is used for sizing and training the network.
On the contrary, symbolic techniques are used in parametric fault diagnosis methods based on frequency domain measurements. So, in the following, only this kind of procedure is considered. For all the techniques presented, the quite realistic $k$-fault hypothesis is made, also taking into account the component tolerances. The use of a symbolic approach gives noteworthy advantages, not only in the phases of testability analysis and solution of the fault diagnosis equations, but also in the search for the best frequencies at which the measurements have to be carried out.
2.4.1 Techniques based on bilinear decomposition of fault equations
Considering that the single-fault case is the most frequent, the double-fault case is less frequent and the case of all faulty components is almost impossible, in the following the procedures for location and estimation of the faulty components under the single-fault hypothesis will be described.
2.4.1.1 First technique
Let us suppose that the fault diagnosis equations of the analogue, linear, time-invariant
CUT are constituted by the network functions relevant to the selected test points. The
coefficients of these equations are related to the circuit parameters in a linear way,
that is, each coefficient can be considered as a bilinear function with respect to the
single circuit parameter. Under the hypothesis of a single fault, considering, for the
sake of simplicity, only one network function h(s, p) and fixing all the parameters,
except one, at the nominal value, the following bilinear function can be obtained:
h(s, p) =
a(s) + b(s) p
c(s) + d(s) p
(2.18)
where p is a circuit parameter and a(s), b(s), c(s) and d(s) are polynomial
functions. If s = jω, Equation (2.18) becomes the following:

h(jω, p) = [a(jω) + b(jω)p] / [c(jω) + d(jω)p]    (2.19)
By considering the measured value of the fault equation at a fixed frequency different from the pole frequencies (that is, different from the frequencies that make
the denominator of Equation (2.19) equal to zero), Equation (2.19) can be inverted
with respect to p and, because it has a co-domain in the complex number field, the
following equations for the real and imaginary parts can be obtained:

p = Re{ [a(jω) − h(jω)c(jω)] / [h(jω)d(jω) − b(jω)] }    (2.20)

Im{ [a(jω) − h(jω)c(jω)] / [h(jω)d(jω) − b(jω)] } = 0    (2.21)
At this point, the procedure of fault location can be summarized as follows [44]. One element
at a time is considered faulty and all the others are considered to be working well.
Once the test frequency has been fixed and the measured value in the test point has
been collected, for each circuit parameter the imaginary part is evaluated through
the corresponding Equation (2.21) by substituting the nominal value for all the other
parameters. The evaluation of Equation (2.21) is repeated many times, one for each
element considered faulty. Only for the effectively faulty component is the imaginary
part null and the real part gives an estimate of its value. Obviously, this is true if the
component is surely testable, otherwise the considerations in Reference 20 have to
be applied.
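As a concrete illustration of the inversion in Equations (2.19)-(2.21), the sketch below runs the location loop on a hypothetical first-order divider with h(s) = G1/(G1 + G2 + sC); the circuit, its component values, the test frequency and the simulated fault are all assumptions of this example, not taken from the chapter.

```python
import math

# Hypothetical test circuit (an assumption of this sketch): a divider with
#   h(s) = G1 / (G1 + G2 + s*C)
# For each candidate parameter, the coefficients of the bilinear form
# h = (a + b*p)/(c + d*p) are built with the other parameters at nominal.

G1, G2, C = 1e-3, 1e-3, 100e-9          # nominal values
s = 1j * 2 * math.pi * 1000.0           # test frequency, s = j*omega

def bilinear_coeffs(name):
    """a, b, c, d of h = (a + b*p)/(c + d*p) for candidate 'name'."""
    if name == "G1":
        return 0.0, 1.0, G2 + s * C, 1.0
    if name == "G2":
        return G1, 0.0, G1 + s * C, 1.0
    if name == "C":
        return G1, 0.0, G1 + G2, s

# simulated measurement: G1 is actually faulty (doubled)
G1_actual = 2e-3
h_meas = G1_actual / (G1_actual + G2 + s * C)

results = {}
for name in ("G1", "G2", "C"):
    a, b, c, d = bilinear_coeffs(name)
    # inversion of Eq. (2.19) with respect to the candidate parameter
    results[name] = (a - h_meas * c) / (h_meas * d - b)

# Eq. (2.21): only the truly faulty element yields a (near) zero
# imaginary part; Eq. (2.20): its real part estimates the fault value
for name, p in results.items():
    print(name, p)
```

Here only the G1 candidate comes out with a negligible imaginary part, and its real part recovers the simulated fault value of 2 mS.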
The symbolic approach is of fundamental importance in the implementation of
this procedure, because the availability in completely symbolic form of a(s), b(s),
c(s) and d(s) strongly reduces the computational complexity.
The extension of the procedure to the double-fault case, the component tolerance
consideration and a description of the realized system for the full automation of the
procedure are reported in References 44–46 respectively.
2.4.1.2 Second technique
Let us consider the fault diagnosis equations of the CUT as in Equation (2.11). By
exploiting the measurements carried out on the test points, the coefficients of the
network functions can be determined by applying a least-squares procedure. In theory,
a number of measurements equal to the number of unknowns is required. In practice,
a number of measurements much larger than the number of unknowns is used
in order to minimize the errors due to measurement inaccuracy. Once the coefficients
have been evaluated, the component values can be determined by exploiting the
system in Equation (2.12). Let us consider the hypothesis of a single fault [47]. By
testability evaluation, the rank of the Jacobian matrix BC of the system in Equation
(2.12) has been determined, so the linearly independent rows of BC , that is, the linearly
independent equations of the system in Equation (2.12), are known. By indicating
with p a potentially faulty parameter and by choosing among the linearly independent
equations of the system in Equation (2.12) two of them dependent on p, the following
bilinear system can be determined, where Ai(l) and Aj(k) are the coefficient values,
while mi(l), mj(k), qi(l) and qj(k) are numerical values obtained by substituting the
nominal value for each circuit parameter considered not faulty:

Ai(l) = mi(l) p + qi(l)
Aj(k) = mj(k) p + qj(k)    (2.22)

By substituting the first equation into the second one, the following expression can
be obtained, where M and Q are numerical terms:

Aj(k) = mj(k) [Ai(l) − qi(l)] / mi(l) + qj(k) = M Ai(l) + Q    (2.23)
Equation (2.23) is verified by replacing Ai (l) and Aj (k) with the values obtained by the
measurements only if the potentially faulty parameter is the faulty one and the others
are really not faulty. So, by repeating this procedure for each circuit parameter, the
faulty element can be located. Furthermore, the faulty element can also be estimated
by inverting one of the equations in Equation (2.22), as, for example:

p = [Ai(l) − qi(l)] / mi(l)    (2.24)
If it happens that a parameter p appears in only one equation, this means that this
equation is independent of all the others and can be used in its bilinear form for
evaluating the p value. If this value is out of its tolerance range, the parameter p is
faulty, because the parameter p, appearing in only one coefficient of the system in
Equation (2.12), certainly does not belong to any canonical ambiguity group, that is,
it is surely testable and, then, distinguishable with respect to all the other parameters.
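The consistency check of Equations (2.22)-(2.24) can be sketched on a toy set of coefficients; the affine coefficient expressions, the parameter names p1 and p2 and all the numbers below are invented for the illustration, not taken from the chapter's example.

```python
# Toy "network function" coefficients, each affine in two parameters
# (invented for the demonstration).

def coeffs(p1, p2):
    return {"A1": 2*p1 + p2 + 1, "A2": 5*p1 + 3*p2,
            "A3": p1 + 4*p2,     "A4": 3*p1 + p2 + 2}

P1_NOM, P2_NOM = 1.0, 1.0
measured = coeffs(2.5, 1.0)          # simulated measurement: p1 faulty

# (coefficient, m, q) of A = m*p + q for each candidate parameter, with
# the other parameter frozen at its nominal value (Eq. 2.22)
cases = {
    "p1": [("A1", 2.0, P2_NOM + 1), ("A2", 5.0, 3*P2_NOM)],
    "p2": [("A3", 4.0, P1_NOM),     ("A4", 1.0, 3*P1_NOM + 2)],
}

estimates = {}
for name, ((ci, mi, qi), (cj, mj, qj)) in cases.items():
    Ai, Aj = measured[ci], measured[cj]
    M, Q = mj / mi, qj - mj * qi / mi          # Eq. (2.23)
    if abs(Aj - (M * Ai + Q)) < 1e-9:          # holds only for the fault
        estimates[name] = (Ai - qi) / mi       # Eq. (2.24): estimate p

print(estimates)
```

Only the truly faulty parameter survives the check of Equation (2.23), and Equation (2.24) then returns its value.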
2.4.2 Newton–Raphson-based approach
The two bilinear techniques previously described are suitable for the single- and
double-fault cases, because they become excessively complex for a fault hypothesis
greater than two. In this subsection a procedure for parametric fault diagnosis based
on the classical Newton–Raphson method is presented [34]. It is suitable for any
possible fault hypothesis.
Let us consider the fault diagnosis equations expressed in Equation (2.11). As
reported in subsection 2.3.2, the parameter evaluation is led back to the solution of
the non-linear system reported in Equation (2.12), whose testability matrix is reported
in Equation (2.13).
By indicating with R the total number of circuit parameters, with k the number of
potentially faulty parameters and with T the testability value (T ≤ R and k ≤ T), the
fundamental steps of the fault diagnosis procedure can be summarized as
1. Evaluation of T .
2. Determination of all the possible combinations of the k testable parameters.
3. Application of the Newton–Raphson method to each testable group of k
parameters.
A group of k elements is testable if the related columns of BC are linearly independent. If we want to know whether a group of k elements is testable, we must triangularize
the submatrix of BC constituted by the columns related to the selected parameters.
If the k elements we have chosen are testable, the first k rows of the triangularized
matrix show the independent equations. So, we have a non-linear system of k equations
in k unknowns and solve it by employing the classical Newton–Raphson method,
assigning to the other R − k parameters their nominal values. As is well known, in
the Newton–Raphson method it is necessary to evaluate the Jacobian matrix that, in
this case, is a submatrix of the testability matrix BC. To ensure convergence of the
Newton–Raphson procedure, the starting point has to be chosen close enough to the
solution. To overcome the problem of not really knowing the solution positions, a
grid of initial points is chosen as reported in Reference 34.
It is worth pointing out that the components considered to be working well, due to
their tolerances, yield a deviation in the solution for the components considered faulty,
that is, the solution is affected by the tolerances of the components considered to be
working well. The more the testability decreases (that is, the number of parameters
that cannot be considered unknowns increases), the more the error grows. In extremely
unlucky cases the tolerance effect can completely change the solution results. The
size of the error is tightly dependent on the circuit behaviour, that is, on the network
sensitivity with respect to the circuit components. In fact, if a component that gives
a high value of sensitivity has a small deviation with respect to the nominal value, it
could produce completely wrong solutions if it is not considered unknown. However,
it can be observed that high-sensitivity circuit components are often realized with
smaller tolerance intervals, in the sense that even a small deviation with respect to the
nominal value must be considered as a parametric fault.
Each solution obtained with the Newton–Raphson method gives a possible set of
faulty components.
The flow diagram of the algorithm of fault solution determination is shown in
Figure 2.7.
Multiple solutions can be present for the following reasons:
1. By solving the system with respect to any possible group of k testable components, we obtain several solutions, each one indicating a different possible fault situation, that is, a different parameter group whose values are out of tolerance.
2. Owing to the system non-linearity, multiple solutions can exist for each parameter group.
It is worth pointing out that, very often, several of the solutions are equivalent.
In fact, let B be a set of n components (n < R), with values out of tolerance, which
constitute the faulty components of one of the solutions. Assuming n < k, all the
groups of k components that include the set B will have, among their solutions, the
solution in which the components of set B are out of tolerance and the remaining k − n
are within tolerance. So, the solution of the system with respect to each combination
of k components leads to multiple equivalent solutions, one for any combination of k
testable components that includes the set B. Then, it is useful to synthesize all these
solutions into a unique one. In practice, the solution list can be remarkably reduced by
applying the procedure shown in the flowchart of Figure 2.8. Referring to this figure,
once the whole set of N solutions (set 1) has been determined, a set (set 2) constituted
by all the possible faulty component groups has to be built. This set is obviously
empty in the first step of the algorithm. We consider iteratively each solution and, if
its related group of faulty components and their values are different from the already
stored ones (in the limits of given tolerances), we add it to set 2.
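The reduction procedure just described can be sketched in code on an invented solution list; component names, tolerance bands and values are assumptions of this example.

```python
# Merge equivalent solutions: solutions whose faulty components and
# values coincide (within given tolerances) count as one.

def faulty_part(solution, nominal, tol=0.05):
    """Keep only components whose value is outside its tolerance band."""
    return {c: v for c, v in solution.items()
            if abs(v - nominal[c]) / nominal[c] > tol}

def same(g1, g2, tol=0.02):
    return g1.keys() == g2.keys() and all(
        abs(g1[c] - g2[c]) / abs(g2[c]) <= tol for c in g1)

nominal = {"R1": 1000.0, "R2": 1000.0, "C1": 47e-9}
solutions = [                      # three NR solutions, two equivalent
    {"R1": 1460.0, "R2": 1001.0, "C1": 47e-9},
    {"R1": 1458.0, "R2": 998.0, "C1": 47.2e-9},
    {"R1": 1002.0, "R2": 1710.0, "C1": 46.9e-9},
]

set2 = []                          # "set 2" of distinct fault groups
for sol in solutions:
    g = faulty_part(sol, nominal)
    if not any(same(g, h) for h in set2):
        set2.append(g)

print(set2)    # two distinct fault hypotheses remain
```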
In the automation of the fault diagnosis procedure, the availability of the network
functions in completely symbolic form permits us to simplify not only the testability
analysis, but also the repeated solution of the non-linear system with different combinations of potentially faulty parameters. In fact, the Jacobian matrices relevant to the
Figure 2.7 [flow diagram of the algorithm of fault solution determination]
Figure 2.8 [flowchart of the solution-reduction procedure: for i = 1, …, N, the faulty component group related to the ith solution is compared with the groups already in set 2; if no group with the same faulty components and the same values is present, it is added; the loop stops when i = N]
If k = 4, each group that includes the pair G3, G4 is not testable, and the group
G1 G2 C1 C2 is also not testable.
The faults have been simulated by substituting some components with others of
different value. The nominal values of the circuit components are the following:
G1 = G2 = G3 = 1 × 10⁻³ Ω⁻¹ (R1 = R2 = R3 = 1 kΩ)
G4 = 1.786 × 10⁻³ Ω⁻¹ (R4 = 560 Ω)
C1 = C2 = 47 nF
O1: TL081
Figure 2.9
The circuit has been made by using a simple wiring board and standard components with 5 per cent tolerance. A double parametric fault has been simulated
by substituting the capacitor C2 with a capacitor of value equal to 20 nF and the
resistor R2 with a resistor of value equal to 1460 Ω.
The amplitude and phase responses related to the selected test points have been
measured using an acquisition board interfaced with a personal computer. Forty
measurements related to a sweep of frequencies between 250 and 10 000 Hz have
been acquired (input signal amplitude equal to 0.1 V). This range has been chosen taking into account the frequency response of the circuit, in order to include
the high-sensitivity region. The collected results, related to the two selected test
points Vo and Va , have been used as inputs for the software program SYFAD, which
implements the procedure of fault location. Choosing to solve the fault diagnosis
equations with respect to the set of all the possible testable combinations of four
components, the program has selected the following solutions among all the possible
ones (the numbers in parentheses are the ratios between the obtained values and the
nominal ones):
G1 = 2.338 × 10⁻³ (2.337 79)
G2 = 1.701 × 10⁻³ (1.700 78)
C1 = 1.163 × 10⁻⁷ (2.4747)
or
G2 = 6.873 × 10⁻⁴ (0.687 267)
C2 = 1.899 × 10⁻⁸ (0.404 09)
or
G1 = 1.375 × 10⁻³ (1.374 54)
2.4.3 Selection of the test frequencies
This inequality provides an upper bound on the error, that is, the worst-case error in
the solution of x. To obtain the condition number of the matrix, the SVD method can
be used.
In the case under analysis, the non-linear fault equations are solved by the Newton–Raphson
method, which performs a step-by-step linearization. Then, in order to relate
the previous terms, relative to the linear case, with the actual problem, let us note
that the following associations with the generic case exist: b is the vector of the gain
measurements, that is, each entry of b is a measurement of the amplitude in decibels
of a network response at a different frequency (the extension to the case of gain and
phase measurements is not difficult); consequently, the entries of δb are the measurement errors. The entries of the matrix A are the entries of the Jacobian matrix (a
generic entry has the form ∂[20 log |hk(jωi, p)|]/∂pj, with the subscript k indicating the
kth fault equation and the subscript i the corresponding ith measurement frequency),
while the entries of δA are given by the tolerances of the circuit components considered well
working (not faulty). The solution vector x is related to the values of the components
belonging to the testable group. Moreover, it should be highlighted that every column
j of the matrix A is related to a different component belonging to the testable group,
while each row i of the same matrix is related to a different frequency for each test
point; therefore, in order to get a square matrix, the number of performed measurements has to be suitably chosen for each test point. At this point, the choice of a set
of frequencies in a zone where the condition number is minimum is suitable for minimizing the deviation in the solution vector, that is, the error in the resulting component
values.
On the other hand, the condition number alone does not take into account the size
of derivatives in a given frequency range; the condition number could be good in a
frequency zone where high variations of component values with respect to nominal
values result in a small variation of network function amplitude. Then, in addition to
the condition number, it could be useful to take into account the norm of the matrix A,
which gives a measure of the sensitivity of the network functions with respect to the
component variations at a given set of frequencies. Taking into account the previous
observations, a Test Error Index (TEI) of the following form can be introduced:

TEI = cond(J) · (1/‖J‖₂) = (σmax/σmin) · (1/σmax) = 1/σmin    (2.27)
where σmin and σmax represent the minimum and maximum singular values of the
Jacobian matrix. The TEI has been chosen in this way, because, in order to minimize
the worst-case error in the solution, the norm of the matrix must be as high as possible
and its condition number must be as low as possible, that is, as near as possible to one.
Consequently, by looking for the minimum of Equation (2.27), both the requirements
are satisfied. In order to find the most suitable frequency set, that is, the set where the
previous index number is minimum, two different procedures can be used. The first
one is based on a heuristic approach [48] and is suitable for the case of a single test
point, the second one is based on the use of genetic algorithms and is more general,
but requires more computational time [49].
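A minimal sketch of Equation (2.27): since TEI = cond(J)/‖J‖₂ = 1/σmin, the index can be computed directly from the singular values of the Jacobian; the two Jacobians below are invented for the comparison.

```python
import numpy as np

def tei(J):
    """TEI = cond(J)/||J||_2 = 1/sigma_min (Eq. 2.27)."""
    s = np.linalg.svd(J, compute_uv=False)
    return 1.0 / s[-1]          # singular values come sorted descending

J_good = np.array([[3.0, 0.0], [0.0, 2.0]])   # well conditioned, large norm
J_bad  = np.array([[3.0, 0.0], [0.0, 0.01]])  # nearly singular

print(tei(J_good), tei(J_bad))  # the smaller TEI marks the better set
```

A frequency set whose Jacobian has a larger smallest singular value yields a smaller TEI, and is therefore preferred.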
In the first procedure, under the hypothesis of only one test point, the logarithm of
the TEI is evaluated on different frequency sets, constituted by a number of frequencies, spaced by octaves, equal to the number of unknown parameters. The minimum
of the TEI is determined and the corresponding set of frequencies constitutes the optimum set of frequencies. In Reference 48 an applicative example is reported, in which
a double-fault situation is considered. By using a new version of the program SYFAD,
performing parameter inversion through the Newton–Raphson algorithm, it is shown
that the double fault is correctly identified if the measurement frequencies are determined
by minimizing the TEI value, while the Newton–Raphson algorithm does
not converge or gives completely incorrect results for other frequency sets.
In the second approach [49], an optimization procedure, based on a genetic algorithm,
performs the choice of both the testable parameter group and the frequency set
that best leads to the location of parametric faults. In fact, the Jacobian matrix associated
with the fault equations depends not only on the frequencies, but also on the selected
testable group. Then, even if all the possible testable groups are theoretically equivalent,
in the phase of TEI minimization a testable group could be better than another
one, owing to the different sensitivities of the network functions to the parameters
of each testable group. Consequently, the algorithm of TEI minimization also has
to take into account this aspect (note that, in the previous method, the testable group
is randomly chosen). A description of the genetic algorithm is reported in Reference
49. The steps of the fault diagnosis procedure exploiting this approach of frequency
selection are summarized below:
1. A list of all the possible testable groups is generated, through a combinatorial
procedure taking into account the canonical ambiguity groups determined in
the phase of testability analysis.
2. The genetic algorithm determines, starting from the nominal component values
po , the testable group and the test frequencies.
3. The fault equations, relevant to the testable group determined in step 2, are
solved with the NewtonRaphson algorithm by using measurements carried
out on the frequencies determined in step 2.
4. The genetic algorithm determines new test frequencies, starting from the
solution p of the previous step (the testable group is unchanged).
5. With the Newton–Raphson algorithm a new solution p* is determined. If, ∀i,
|(p*i − pi)/pi| · 100 ≤ ε, with ε fixed a priori, stop; otherwise go to step 4.
The test frequency set will be that used in the last application of the Newton–Raphson
algorithm.
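The alternation of steps 2-5 above can be sketched as the loop below, where `select_frequencies` and `solve_faults` are stand-ins (assumptions of this sketch) for the genetic algorithm and the Newton-Raphson solver of the text.

```python
def select_frequencies(p):
    # stand-in: the real procedure minimizes the TEI of the Jacobian
    # evaluated at the current parameter estimate p
    return (100.0, 1e3, 1e4)

def solve_faults(p, freqs):
    # mock solver: moves the estimate halfway towards a fixed "true"
    # fault vector so that the outer loop has something to converge to
    true = (2.0, 0.5)
    return [pi + 0.5 * (t - pi) for pi, t in zip(p, true)]

eps = 0.01                      # stop threshold, in per cent
p = [1.0, 1.0]                  # start from the nominal values
freqs = select_frequencies(p)
while True:
    p_new = solve_faults(p, freqs)           # steps 3 and 5
    done = all(abs((a - b) / b) * 100 <= eps for a, b in zip(p_new, p))
    p = p_new
    if done:
        break
    freqs = select_frequencies(p)            # step 4: new frequencies
print(p)
```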
In the automation of both the procedures of frequency selection, the use of
symbolic techniques gives great advantages. In fact, the availability of network
functions in symbolic form strongly reduces the computational effort in both the
testability analysis phase and the determination of the frequency-dependent Jacobian
matrix.
We conclude this subsection with an example relevant to the second described
procedure. In Figure 2.10 a two-stage common emitter (CE) audio amplifier is shown.
For the transistors, a simplified model with the same parameter values is considered.
Figure 2.10 [two-stage common emitter audio amplifier]
R5 = 14 420 Ω, R8 = 329.54 Ω, hie_Q2 = 1004.8 Ω
The analysis of the results already suggests that R4 and hfe_Q1 are the faulty parameters.
Now, considering the testable group found in the first step, the calculation of the best
frequencies with these parameter values is performed again by the genetic algorithm.
The new set of frequencies, found in 12 iterations, is: f1 = 1.58 Hz, f2 = 3.70 Hz,
f3 = 9.36 Hz, f4 = 23.22 Hz, f5 = 37.72 Hz, f6 = 55.20 Hz, f7 = 78.53 Hz. By
repeating the diagnosis with the new set of frequencies, the following values of the
testable parameters are determined:
R2 = 9552 Ω, R4 = 152.3 Ω, R5 = 14 409 Ω
R8 = 330.03 Ω, R9 = 3915.6 Ω
hfe_Q1 = 80.984, hie_Q2 = 1023 Ω
Comparing these values with the previous ones, the following percentage deviations
are obtained (ΔX% = |X* − X′|/X′ · 100, with X′ and X* the estimates from the first
and second frequency sets respectively):

ΔR2% = 0.228%, ΔR4% = 0.177%
ΔR5% = 0.076%, ΔR8% = 0.148%
ΔR9% = 0.061%
Δhfe_Q1% = 0.473%
Δhie_Q2% = 1.81%
When all the percentage deviations are less than ε, the procedure is completed
in only one cycle. By comparing the obtained values with the nominal ones, we have:
ΔR2% = 4.48%, ΔR4% = 52.3%
ΔR5% = 3.94%, ΔR8% = 0.009%
ΔR9% = 2.11%
Δhfe_Q1% = 19.02%
Δhie_Q2% = 2.3%

where each deviation is now computed with respect to the nominal value.
Considering the tolerance in every parameter, the faulty parameters are R4 and hfe_Q1
with the following fault values:
R4 = 152.3 Ω, hfe_Q1 = 80.984
Comparing these values with the actual fault values, we have an error of 1.53 and
1.23 per cent respectively.
2.5 Fault diagnosis of non-linear circuits
The previously presented fault diagnosis methodologies are applicable only to linear
circuits or to linearized models of the CUT. They are not applicable to circuits in
which the non-linear behaviour is structural, that is, if it is essential to the requested
electrical behaviour. However, the symbolic approach can also be usefully applied
in these cases. The aim of this section is to present an example of this kind of
application [50].
A field in which the symbolic approach can give advantages with respect to the
numerical techniques is constituted by those applications that require the repetition
of a high number of simulations performed on the same circuit topology with the
variation of component values and/or input signal values. In this kind of application
the symbolic approach can be used to generate the requested network functions of
the analysed circuit in parametric form. In this way, circuit analysis is performed
only once and, during the simulation phase, only a parameter substitution and an
expression evaluation are required to obtain numerical results. This approach can
be used to generate autonomous programs devoted to the numerical simulation of a
particular circuit. Furthermore, for a complex circuit, these simulators can be devoted
to parts of the circuit in order to obtain a simulator library.
In this section a program package, developed by the authors following the outlined approach, is presented. The program, named Symbolic Analysis Program for
Diagnosis of Electronic Circuits (SAPDEC), is able to produce devoted simulators
for non-linear analogue circuits and is aimed at fault diagnosis applications. The
output of the program package is an autonomous executable program, a simulator
devoted to a given circuit structure, instead of a network function in symbolic form.
The generated simulators work with inputs and outputs in numerical form; nevertheless, they are very efficient because they strongly exploit the symbolic approach;
in fact:
1. They use, for numerical simulation, the closed symbolic form of the requested
network functions.
2. They are devoted to a given circuit structure.
3. They are independent of both the component values and the input values, which
must be indicated only at run time, before numerical simulation.
The generated symbolic simulators produce time domain simulations and are able
to work on non-linear circuits. To this end the following methods have been used:
1. Non-linear components are replaced by suitable PWL models.
2. Reactive elements are simulated by their backward-difference models.
3. A Katznelson-type algorithm is used for time domain response calculation.
2.5.1 PWL models
With the PWL technique [51] the voltage–current characteristic of any non-linear
electronic device is replaced by PWL segments obtained by means of the identification
of one or more corner points on the characteristic. The PWL characteristic thus obtained
approximately describes the element behaviour in the different operating regions in
which it can work. It is evident that the increase of the corner point number and,
consequently, of the linearity region number allows us to obtain a higher precision
in the simulation of the real component; obviously, in this way, the corresponding
model becomes more complex.
It is worth pointing out that the symbolic analysis is completely independent of the
number of PWL characteristic corner points; in fact, from a symbolic analysis point of
view, each non-linear component of a PWL model is represented by a single symbol.
However, the increase in the number of corner points influences the computational
time in the numerical simulation phase, so a trade-off between a small number of
corner points and the requested accuracy must be found for each model.
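A two-segment PWL diode characteristic with a single corner point can be sketched as follows; the threshold voltage and segment slopes are illustrative values, not taken from the text.

```python
V_ON, G_OFF, G_ON = 0.6, 1e-9, 1.0    # corner point and segment slopes

def diode_current(v):
    """Two-segment PWL i(v) characteristic with one corner point."""
    if v <= V_ON:
        return G_OFF * v                        # "off" region, near-open
    return G_OFF * V_ON + G_ON * (v - V_ON)     # "on" region, continuous

# adding corner points would refine the curve at the cost of more regions
print(diode_current(0.3), diode_current(0.9))
```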
Figure 2.11 [backward-difference companion model of a capacitor: a conductance G = C/ΔT in parallel with a current source I = (C/ΔT)Vk, so that Ik+1 = (C/ΔT)(Vk+1 − Vk)]
It is also worth pointing out that, by exploiting PWL models for non-linear devices,
testability analysis of non-linear circuits can be performed through the methods presented
in Section 2.3. Since the testability value is independent of the circuit parameter
values, testability evaluation and ambiguity group determination can be performed
starting from the symbolic network functions obtained by replacing the non-linear
devices with their PWL models and by assigning arbitrary values both to the parameters
corresponding to linear components and to those corresponding to the PWL models of
non-linear components [52].
2.5.2 Transient analysis models for reactive components
The reactive components are made time independent by using the backward-difference
algorithm and the corresponding circuit models are constituted by a conductance in
parallel with a current source [50]. In Figure 2.11 the model of a capacitor is shown
as an example: the conductance value is a function of the time step ΔT and
of the capacitance value, while the current value depends on the time step ΔT,
the capacitance value and the voltage value at the previous time step. In this way,
neither the Laplace variable nor integrodifferential operations are used and the circuit
becomes, from the symbolic analysis point of view, without memory and, from the
numerical simulation point of view, time discrete.
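The backward-difference companion model can be sketched on a simple RC charging step; the circuit values below are invented for the illustration.

```python
# Backward-difference (backward-Euler) companion model of a capacitor:
#   I[k+1] = (C/dT) * (V[k+1] - V[k])
# i.e. a conductance G = C/dT in parallel with a source (C/dT)*V[k].
# Demo: charging C through R from a 1 V source (values invented).

R, C, dT = 1e3, 1e-6, 1e-5
G = C / dT                          # companion conductance
v = 0.0                             # capacitor voltage V[k]
for _ in range(1000):               # simulate 10 ms = 10 time constants
    # nodal equation at the new step: (1 - v_new)/R = G*(v_new - v)
    v = (1.0 / R + G * v) / (1.0 / R + G)
print(v)
```

Each time step only requires evaluating the closed-form nodal expression, which is exactly what makes the circuit memoryless from the symbolic point of view.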
2.5.3 The Katznelson-type algorithm
The Katznelson algorithm is an iterative process, which allows one to determine the
d.c. solution of a PWL circuit [53].
It must be noted that, using a symbolic approach, the time domain simulation
can be obtained with an algorithm derived from the standard Katznelson algorithm,
but simpler. The difference lies mainly in the fact that, with the symbolic
approach, the program works on the closed-form expressions of the network functions
and then, for each step of the algorithm, there is no linear system solution, but only
an expression evaluation [50].
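A toy version of this expression-evaluation-based iteration can be sketched for a source-resistor-PWL-conductor loop; the element values and the two-region model are assumptions of this sketch, not the published algorithm.

```python
# Katznelson-style region iteration: guess the operating region of the
# PWL element, evaluate the closed-form node voltage for that region,
# and move to the adjacent region until the guess is consistent.

V, R = 5.0, 1e3
V_ON, G_OFF, G_ON = 0.6, 1e-9, 1e-2     # PWL element: i = i0 + g*(v - v0)
regions = [                             # (g, v0, i0, validity test)
    (G_OFF, 0.0, 0.0, lambda v: v <= V_ON),
    (G_ON, V_ON, G_OFF * V_ON, lambda v: v > V_ON),
]

idx = 0                                 # initial region guess: "off"
for _ in range(len(regions)):
    g, v0, i0, ok = regions[idx]
    # closed-form node voltage for this region (an expression
    # evaluation, not a linear-system solution): (V - v)/R = i0 + g*(v - v0)
    v = (V / R - i0 + g * v0) / (1.0 / R + g)
    if ok(v):
        break
    idx += 1                            # cross into the next region
print(idx, v)
```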
2.5.4 Circuit fault diagnosis application
Following the approach outlined, the program package SAPDEC is able to generate
simulators devoted to any part of a suitably partitioned circuit.
The program can be used to realize a library of devoted simulators. Each simulator
of the library is devoted to a part of the circuit and can be directly used for circuit
fault diagnosis. The input signals for these simulators can be constituted by the actual
signals on the CUT, suitably measured and stored on a file. The circuit responses,
produced by the simulators and stored in another file, can be compared by means
of qualitative and/or quantitative methods with the actual responses measured on the
CUT. From this comparison, it is possible to test the correctness of the behaviour
of the considered part. When a faulty part has been located, it is possible to locate
the fault at a component level. This last phase will require repeated simulations with
variations of component values.
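The quantitative comparison between the measured and the simulated responses can be sketched as a simple per-sample tolerance test; the signals and the tolerance band below are invented for the illustration.

```python
# go/no-go comparison of a measured response against the devoted
# simulator's stored output (all numbers invented for the illustration)

def block_is_healthy(measured, simulated, rel_tol=0.05):
    """True if every measured sample stays within a relative band."""
    return all(abs(m - s) <= rel_tol * max(abs(s), 1e-12)
               for m, s in zip(measured, simulated))

simulated = [0.0, 0.5, 0.9, 1.0]          # stored simulator output
ok_part = block_is_healthy([0.0, 0.51, 0.88, 1.02], simulated)
faulty_part = block_is_healthy([0.0, 0.7, 0.9, 1.0], simulated)
print(ok_part, faulty_part)
```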
The realization of the simulator library for a given piece of equipment requires a preliminary
phase constituted by the decomposition of the CUT into small parts by means of
suitable partitioning techniques. Then, for each part, the following operations have
to be carried out:
1. Representation of non-linear components by means of appropriate PWL
models.
2. Symbolic analysis of each part with the generation of the required network
functions as C language statements.
3. Generation of the devoted simulator by means of the compilation of the produced functions and linking with a standard module, in C language, which
implements a PWL simulation technique by means of a Katznelson-type
algorithm.
The above-mentioned operations are performed, in an automatic and autonomous
way, by the program SAPDEC.
An important characteristic of the proposed approach is that the input signals for
the simulators are the actual signals on the CUT, measured in fault conditions.
Usually, for fault diagnosis, test signals are used instead of the actual input signals;
this can be very difficult to do and may not reflect the real behaviour in many cases,
as, for example, in equipment with strong feedback and with internally generated
signals, such as d.c.–d.c. converters.
As regards the decomposition of the CUT, some considerations can be made. At
present, this partition is not performed automatically and has to be done by the user
who must try to obtain a trade-off among several objectives. An important point is the
choice of the size of the blocks; they must be small, not only to obtain faster simulators,
but also to have blocks characterized by a high testability, in which it is possible to
determine the faulty components starting from the measurements performed on I/O
nodes.
On the other hand, very small blocks increase the cost of the test because they
complicate the phase of location of the faulty block (or blocks) and, generally, they
involve a high number of I/O nodes (which, obviously, must be accessible).
2.5.5 The SAPDEC program
In Figure 2.12 the block diagram of SAPDEC is shown. The program requires an
ASCII file describing the CUT.
The form of this file is, in many respects, similar to that required by the SPICE
program. Each device of the circuit is represented in the input file by one line. The
allowed linear components are the conductor, the inductor, the capacitor, independent
voltage and current sources, the four controlled sources, the mutual inductance and the
ideal transformer. The non-linear components are the following: the non-linear conductor,
the diode, the voltage-controlled switch, the operational amplifier and the bipolar and
MOS transistors. Suitable commands, included in the input file, must be used to communicate
to the program the list of output nodes and the names of the files containing
the input signal samples and the component values.
Once the program has started, all the component numerical values (both linear and
non-linear) are stored and both non-linear and reactive components are automatically
replaced by the corresponding equivalent circuits. Then the symbolic evaluation of
the requested network functions is carried out by means of a program written in the
C++ language. These network functions are generated in the form of C language statements and are automatically assembled with the standard module, in the C language,
Figure 2.12 [block diagram of SAPDEC: the SPICE-like circuit description is processed to generate the symbolic network functions, which are passed, together with the simulation algorithm module, to a C compiler that produces the devoted simulator]
Figure 2.13 [data flow of a devoted simulator: input file → devoted simulator → output file]
It is worth pointing out that the files for input signals and for output signals have
the same structure. Then it is also possible to use as input signals for a given block
the simulated output signals of another block.
2.6 Conclusions
2.7 References
25 Liberatore, A., Manetti, S., Piccirilli, M.C.: A new efficient method for analog circuit testability measurement, Proceedings of IEEE Instrumentation and Measurement Technology Conference, Hamamatsu, Japan, 1994, pp. 193–6
26 Catelani, M., Fedi, G., Luchetta, A., Manetti, S., Marini, M., Piccirilli, M.C.: A new symbolic approach for testability measurement of analog networks, Proceedings of MELECON'96, Bari, Italy, 1996, pp. 517–20
27 Fedi, G., Luchetta, A., Manetti, S., Piccirilli, M.C.: A new symbolic method for analog circuit testability evaluation, IEEE Transactions on Instrumentation and Measurement, 1998;47:554–65
28 Manetti, S., Piccirilli, M.C.: A singular-value decomposition approach for ambiguity group determination in analog circuits, IEEE Transactions on Circuits and Systems I, 2003;50:477–87
29 Grasso, F., Manetti, S., Piccirilli, M.C.: A program for ambiguity group determination in analog circuits using singular-value decomposition, Proceedings of ECCTD'03, Cracow, Poland, 2003, pp. 57–60
30 Liberatore, A., Manetti, S.: SAPEC – a personal computer program for the symbolic analysis of electric circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Helsinki, Finland, 1988, pp. 897–900
31 Manetti, S.: A new approach to automatic symbolic analysis of electric circuits, IEE Proceedings – Circuits, Devices and Systems, 1991;138:22–8
32 Liberatore, A., Manetti, S.: Network sensitivity analysis via symbolic formulation, Proceedings of IEEE International Symposium on Circuits and Systems, Portland, OR, 1989, pp. 705–8
33 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: Symbolic algorithm for ambiguity group determination in analog fault diagnosis, Proceedings of ECCTD'97, Budapest, Hungary, 1997, pp. 1286–91
34 Fedi, G., Giomi, R., Luchetta, A., Manetti, S., Piccirilli, M.C.: On the application of symbolic techniques to the multiple fault location in low testability analog circuits, IEEE Transactions on Circuits and Systems II, 1998;45:1383–8
35 Liberatore, A., Luchetta, A., Manetti, S., Piccirilli, M.C.: A new symbolic program package for the interactive design of analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Seattle, WA, 1995, pp. 2209–12
36 Luchetta, A., Manetti, S., Piccirilli, M.C.: A Windows package for symbolic and numerical simulation of analog circuits, Proceedings of Electrosoft'96, San Miniato, Italy, 1996, pp. 115–23
37 Luchetta, A., Manetti, S., Reatti, A.: SAPWIN – a symbolic simulator as a support in electrical engineering education, IEEE Transactions on Education, 2001;44:9 and accompanying CD-ROM
38 Starzyk, J., Pang, J., Fedi, G., Giomi, R., Manetti, S., Piccirilli, M.C.: A software program for ambiguity group determination in low testability analog circuits, Proceedings of ECCTD'99, Stresa, Italy, 1999, pp. 603–6
39 Starzyk, J., Pang, J., Manetti, S., Piccirilli, M.C., Fedi, G.: Finding ambiguity groups in low testability analog circuits, IEEE Transactions on Circuits and Systems I, 2000;47:1125–37
40 Golub, G.H., Van Loan, C.F.: Matrix Computations (Johns Hopkins University Press, Baltimore, MD, 1983)
41 Liu, R.: Testing and Diagnosis of Analog Circuits and Systems (Van Nostrand Reinhold, New York, 1991)
42 Huertas, J.L.: Test and design for testability of analog and mixed-signal integrated circuits: theoretical basis and pragmatical approaches, Proceedings of ECCTD'93, Davos, Switzerland, 1993, pp. 75–151
43 Cannas, B., Fanni, A., Manetti, S., Montisci, A., Piccirilli, M.C.: Neural network-based analog fault diagnosis using testability analysis, Neural Computing and Applications, 2004;13:288–98
44 Fedi, G., Liberatore, A., Luchetta, A., Manetti, S., Piccirilli, M.C.: A symbolic approach to the fault location in analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Atlanta, GA, 1996, pp. 810–3
45 Catelani, M., Fedi, G., Giraldi, S., Luchetta, A., Manetti, S., Piccirilli, M.C.: A new symbolic approach to the fault diagnosis of analog circuits, Proceedings of IEEE Instrumentation and Measurement Technology Conference, Brussels, Belgium, 1996, pp. 118–25
46 Catelani, M., Fedi, G., Giraldi, S., Luchetta, A., Manetti, S., Piccirilli, M.C.: A fully automated measurement system for the fault diagnosis of analog electronic circuits, Proceedings of XIV IMEKO World Congress, Tampere, Finland, 1997, pp. 52–7
47 Fedi, G., Luchetta, A., Manetti, S., Piccirilli, M.C.: Multiple fault diagnosis of analog circuits using a new symbolic approach, Proceedings of 6th International Workshop on Symbolic Methods and Application in Circuit Design, Lisbon, Portugal, 2000, pp. 139–43
48 Grasso, F., Luchetta, A., Manetti, S., Piccirilli, M.C.: Symbolic techniques for the selection of test frequencies in analog fault diagnosis, Analog Integrated Circuits and Signal Processing, 2004;40:205–13
49 Grasso, F., Manetti, S., Piccirilli, M.C.: An approach to analog fault diagnosis using genetic algorithms, Proceedings of MELECON'04, Dubrovnik, Croatia, 2004, pp. 111–14
50 Manetti, S., Piccirilli, M.C.: Symbolic simulators for the fault diagnosis of nonlinear analog circuits, Analog Integrated Circuits and Signal Processing, 1993;3:59–72
51 Vlach, J., Singhal, K.: Computer Methods for Circuit Analysis and Design, 2nd edn (Van Nostrand Reinhold, New York, 1994)
52 Fedi, G., Giomi, R., Manetti, S., Piccirilli, M.C.: A symbolic approach for testability evaluation in fault diagnosis of nonlinear analog circuits, Proceedings of IEEE International Symposium on Circuits and Systems, Monterey, CA, 1998, pp. 9–12
53 Katznelson, J.: An algorithm for solving nonlinear resistor networks, Bell System Technical Journal, 1965;44:1605–20
54 Konczykowska, A., Starzyk, J.: Computer analysis of large signal flowgraphs by hierarchical decomposition methods, Proceedings of ECCTD'80, Warsaw, Poland, 1980, pp. 408–13
Chapter 3

3.1 Introduction
Fault diagnosis of analogue circuits has been an active research area since the 1970s.
Various useful techniques have been proposed in the literature, such as the fault dictionary technique, the parameter identification technique and the fault verification method [1–11]. The fault dictionary technique is widely used in practical engineering applications because of its simplicity and effectiveness. However, the traditional fault dictionary technique can only detect hard faults and its application is largely limited to small to medium-sized analogue circuits [5]. To solve these problems, several artificial neural network (ANN)-based approaches have been proposed for analogue fault diagnosis and they have proved to be very promising [12–25]. The neural-network-based fault dictionary technique [20–23] can locate and identify not only hard faults
but also soft faults because neural networks are capable of robust classification even
in noisy environments. Furthermore, in the neural-network-based fault dictionary
technique, looking up a dictionary to locate and identify faults is actually carried out
at the same time as setting up the dictionary. It thus reduces the computational effort
and has better real-time features. The method is also suitable for large-scale analogue
circuits.
More recently, wavelet-based techniques have been proposed for fault diagnosis and testing of analogue circuits [18, 19, 24, 25]. References 18 and 19 develop a
neural-network-based fault diagnosis method using wavelet transform as a preprocessor to reduce the number of input features to the neural network. However, selecting
the approximation coefficients as the features from the output node of the circuit
and treating the details as noise and setting them to zero may lead to the loss of valid
information, thus resulting in a high probability of ambiguous solutions and low diagnosability. Also, additional processors are needed to decompose the details, resulting in extra computational cost.
3.2 Fault diagnosis of analogue circuits with tolerances using neural networks
Component tolerances, non-linearity and a poor fault model make analogue fault
location particularly challenging. Generally, tolerance effects make the parameter
values of circuit components uncertain and the computational equations of traditional methods complex. The non-linear characteristic of the relation between the
circuit performance and its constituent components makes it even more difficult to
diagnose faults online and may lead to a false diagnosis. To overcome these problems, a robust and fast fault diagnosis method taking tolerances into account is
thus needed. ANNs have the advantages of large-scale parallel processing, parallel storing, robust adaptive learning and online computation. They are therefore ideal
for fault diagnosis of analogue circuits with tolerances. The process of creating a
fault dictionary, memorizing the dictionary and verifying it can be simultaneously
completed by ANNs, thus the computation time can be reduced enormously. The
robustness of ANNs can effectively deal with tolerance effects and measurement noise
as well.
This section discusses methods for analogue fault diagnosis using neural networks
[20, 21]. The primary focus is to provide a robust diagnosis using a mechanism to
deal with the problem of component tolerances and reduce testing time. The approach
is based on the k-fault diagnosis method and backward propagation neural networks
(BPNNs). Section 3.2.1 describes ANNs (especially BPNNs). Section 3.2.2 discusses
the theoretical basis and framework of fault diagnosis of analogue circuits. The neural-network-based diagnosis method is described in Section 3.2.3. Section 3.2.4 addresses
fault location of large-scale analogue circuits using ANNs. Simulation results of two
examples are presented in Section 3.2.5.
3.2.1 Artificial neural networks
In recent years, ANNs have received considerable attention from the research community and have been applied successfully in various fields, such as chemical processes,
digital circuitry, and control systems. This is because ANNs provide a mechanism for
adaptive pattern classification. Even in unfavourable environments, they can still have
robust classification. Choosing a suitable ANN architecture is vital for the successful
application of ANNs. To date the most popular ANN architecture is the BPNN. One
of the significant features of neural networks when applied in fault diagnosis and
testing is that online diagnosis is fast once the network is trained. In addition, ANN
classifiers require fewer fault features than traditional classifiers. Furthermore, neural
networks are capable of performing fault classification at hierarchical levels.
On the basis of learning strategies, ANNs fall into one of two categories: supervised and unsupervised. The BPNN is a supervised network. Typical BPNNs have two
or three layers of interconnecting weights. Figure 3.1 shows a standard three-layer
neural network. Each input node is connected to a hidden layer node and each node
of the final hidden layer is connected to an output node in a similar way. The nodes
of hidden layers are connected to each other as well. This makes the BPNN a fully
connected network topology. Learning takes place during the propagation of input
patterns from the input nodes to the output nodes. The outputs are compared with the
desired target values and an error is produced. Then the weights are adapted to minimize the error. Since the desired target values should be known, this is a supervised
learning process.
Figure 3.1 BPNN architecture
$$I_i^{(s)} = \sum_{j=1}^{B} W_{ij}^{(s)} O_j^{(s-1)}, \qquad i = 1, 2, \ldots, A \qquad (3.1)$$

$$O_i^{(s)} = f_s\left(I_i^{(s)}\right) \qquad (3.2)$$

where $A$ and $B$ are the numbers of neurons of the $s$th and the $(s-1)$th layer, respectively, and $W_{ij}^{(s)}$ represents the weight connecting the $j$th neuron of the $(s-1)$th layer to the $i$th neuron of the $s$th layer. The function $f_s(\cdot)$ is the limiting function through which $I_i^{(s)}$ is passed; it must be non-decreasing and everywhere differentiable. A common limiting function is the sigmoid:

$$f_s(I) = \frac{1}{1 + \exp(-I)} \qquad (3.3)$$
The generalized delta rule, which performs a gradient descent over an error surface, is utilized to adapt the weights. The initial values of the weights are assumed to be random numbers evenly distributed between $-0.5$ and $0.5$.

For an input pattern $P$ of the BPNN, the output error of the output layer can be calculated as

$$E_P = \frac{1}{2} \sum_i (y_i - d_i)^2$$

where $d_i$ is the expected output of the $i$th output node in the output layer.

The error signal at the $j$th node in the $s$th layer is generally given by

$$\delta_{jP}^{(s)} = -\frac{\partial E_P}{\partial I_{jP}^{(s)}} = \left(\sum_{i=1}^{C} \delta_{iP}^{(s+1)} W_{ijP}^{(s+1)}\right) f_s'\!\left(I_{jP}^{(s)}\right)$$

where $C$ is the number of neurons in the $(s+1)$th layer. The weights are then updated as

$$W_{ijP}^{(s)}(T+1) = W_{ijP}^{(s)}(T) + \eta\, \delta_{iP}^{(s)} O_{jP}^{(s-1)} + \alpha\, \Delta W_{ijP}^{(s)}(T-1) \qquad (3.4)$$

where $\eta$ is the learning rate with $0 < \eta < 1$, $\alpha$ is the momentum factor and the term $\alpha\, \Delta W_{ijP}^{(s)}(T-1)$ is added to improve the speed of learning; generally $\alpha = 0.9$.
It can be seen that the BPNN has the following advantages for fault diagnosis with tolerances:

• The BPNN processes information very quickly. Since the architecture of the BPNN is parallel, neurons can work simultaneously. The computation speed of a highly parallel processing neural network far exceeds that of a traditional computer.
• The BPNN has the function of association and the capability to recover complete fault features from fragmentary information; that is, the BPNN has good robustness.
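As an illustration, the forward pass of Equations (3.1)–(3.3) and the delta-rule update with momentum of Equation (3.4) can be sketched as follows. This is a minimal numerical sketch, not the authors' implementation; the network sizes, learning rate and training patterns are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(I):
    # Limiting function of Equation (3.3)
    return 1.0 / (1.0 + np.exp(-I))

class BPNN:
    """Minimal two-weight-layer BPNN trained with the generalized delta rule."""
    def __init__(self, n_in, n_hidden, n_out, eta=0.2, alpha=0.9):
        # Initial weights evenly distributed between -0.5 and 0.5, as in the text
        self.W1 = rng.uniform(-0.5, 0.5, (n_hidden, n_in))
        self.W2 = rng.uniform(-0.5, 0.5, (n_out, n_hidden))
        self.eta, self.alpha = eta, alpha
        self.dW1 = np.zeros_like(self.W1)  # previous updates (momentum terms)
        self.dW2 = np.zeros_like(self.W2)

    def forward(self, x):
        self.o1 = sigmoid(self.W1 @ x)        # Equations (3.1)-(3.2), hidden layer
        self.o2 = sigmoid(self.W2 @ self.o1)  # output layer
        return self.o2

    def train_step(self, x, d):
        y = self.forward(x)
        # Output-layer error signal: (d - y) f'(I), with f'(I) = y(1 - y) for the sigmoid
        delta2 = (d - y) * y * (1.0 - y)
        # Hidden-layer error signal back-propagated through W2
        delta1 = (self.W2.T @ delta2) * self.o1 * (1.0 - self.o1)
        # Equation (3.4): change = eta * delta * O + alpha * previous change
        self.dW2 = self.eta * np.outer(delta2, self.o1) + self.alpha * self.dW2
        self.dW1 = self.eta * np.outer(delta1, x) + self.alpha * self.dW1
        self.W2 += self.dW2
        self.W1 += self.dW1
        return 0.5 * np.sum((y - d) ** 2)     # pattern error E_P

# Tiny illustration: map two hypothetical feature patterns to two fault classes
patterns = [(np.array([0.64, 0.64, 0.43]), np.array([1.0, 0.0])),
            (np.array([0.40, 0.55, 0.72]), np.array([0.0, 1.0]))]
net = BPNN(3, 8, 2)
errors = [sum(net.train_step(x, d) for x, d in patterns) for _ in range(2000)]
print(errors[0] > errors[-1])  # training error decreases over the epochs
```

Once trained, calling `net.forward` on a measured feature vector plays the role of looking up the fault dictionary.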
3.2.2 Theoretical basis and framework of fault diagnosis
The methods for analogue fault diagnosis fall into two categories: simulation before
test (SBT) and simulation after test (SAT). The fault dictionary technique is a SBT
method [5]. The parameter identification technique [6] and the fault verification
method [7–11] belong to the SAT category. The k-branch-fault diagnosis method [7–10] assumes that there are k faults in the circuit and requires that the number of accessible nodes, m, is larger than k. In addition, the change of value of an element with respect to its nominal value can be represented by a current source in parallel with the element; by verifying whether the substitution source currents are non-zero, we can locate the faulty elements. Reference 15 proposes a neural-network
dictionary based on normal dictionary methods and accessible node voltages, while
in Reference 14, a neural-network-based dictionary approach is developed using
the admittance-function-based parameter identification method. In the following a
neural-network-based SBT method is described using the k-fault verification method.
For a linear circuit with n nodes (excluding the ground node), m accessible nodes
and $b$ branches, the k-branch-fault diagnosis equation can be derived as [7–11]

$$\Delta V_m = Z_{mb} J_b \qquad (3.5)$$

where $J_b$ is the substitution current source vector, $\Delta V_m$ is the voltage increment vector measured at the testing nodes and $Z_{mb}$ denotes the transfer impedance matrix.

For clarity of derivation we assume no tolerance initially. According to the k-branch-fault diagnosis theory, for $k$ faults corresponding to a $k$-column matrix $Z_{mk}$ in $Z_{mb}$, because only those elements $J_k$ corresponding to the $k$ faulty branches in $J_b$ are non-zero, Equation (3.5) becomes

$$\Delta V_m = Z_{mk} J_k \qquad (3.6)$$
This equation is compatible if and only if $\mathrm{rank}\,[Z_{mk} \;\; \Delta V_m] = k$, and it can then be solved to give

$$J_k = \left(Z_{mk}^{T} Z_{mk}\right)^{-1} Z_{mk}^{T} \Delta V_m$$

For a single fault occurring in the circuit ($k = 1$), $Z_{mk}$ becomes a single column vector $Z_{mf}$ and $J_k$ a single variable $J_f$, where $f$ is the number of the faulty branch. Denoting $Z_{mf} = [z_{1f}\; z_{2f}\; \cdots\; z_{mf}]^{T}$ and $\Delta V_m = [\Delta v_1\; \Delta v_2\; \cdots\; \Delta v_m]^{T}$, it can be derived that

$$\Delta v_i = D z_{if}, \qquad i = 1, 2, \ldots, m \qquad (3.7)$$

$$\frac{\Delta v_i}{\sqrt{\sum_{j=1}^{m} \Delta v_j^2}} = \frac{z_{if}}{\sqrt{\sum_{j=1}^{m} z_{jf}^2}}, \qquad i = 1, 2, \ldots, m \qquad (3.8)$$

where $D$ is a constant. Thus, single-fault diagnosis becomes the checking of Equation (3.8) for all $b$ branches ($f = 1, 2, \ldots, b$).
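The single-fault test of Equation (3.8) amounts to comparing the normalized measured voltage increments with the normalized columns of the transfer impedance matrix. A minimal sketch, with a hypothetical $Z$ matrix and fault, might look as follows:

```python
import numpy as np

def locate_single_fault(Z, dVm, tol=1e-6):
    """Return the branches whose normalized signature z_f/||z_f|| matches
    the measured dVm/||dVm|| (Equation (3.8)), up to the sign of the fault."""
    v = dVm / np.linalg.norm(dVm)
    candidates = []
    for f in range(Z.shape[1]):
        z = Z[:, f] / np.linalg.norm(Z[:, f])
        if min(np.linalg.norm(v - z), np.linalg.norm(v + z)) < tol:
            candidates.append(f)
    return candidates

# Hypothetical 3x5 transfer impedance matrix (m = 3 test nodes, b = 5 branches)
Z = np.array([[1.0, 0.2, 0.5, 0.9, 0.3],
              [0.4, 1.1, 0.6, 0.2, 0.8],
              [0.7, 0.3, 1.2, 0.4, 0.5]])
# A single fault on branch 2 drives a substitution current J_f = 0.7 A,
# so dVm is proportional to column 2 of Z (Equation (3.6) with k = 1)
dVm = Z @ np.array([0.0, 0.0, 0.7, 0.0, 0.0])
print(locate_single_fault(Z, dVm))  # -> [2]
```

With tolerances the match is no longer exact, which is precisely the gap the BPNN classifier of the next section is meant to close.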
The k-branch-fault diagnosis method can effectively locate faults in circuits without tolerances. However, for circuits with tolerances, the values of ΔVm and the fault features are influenced by the tolerances, which makes the contributions of faults to the two values ambiguous and slows the testing process. In this situation, fault location results may not be accurate and sometimes a false diagnosis may result. Fortunately, the memorizing and associating capabilities of ANNs can make up for this. In order to improve the online characteristics and achieve robustness of diagnosis, the next section presents a method that combines the k-fault diagnosis method with the highly parallel processing BPNN.
3.2.3 Neural-network-based diagnosis method
Using the BPNN to diagnose faults in analogue circuits with tolerances involves
establishing the neural-network structure, generating fault features, forming sample
training groups, training the network and diagnosing the faults [20, 21]. In the diagnosis system the BPNN functions as a classifier. For simplicity we consider a single
soft-fault diagnosis. We select m testing nodes according to the topology of the faulty
circuit with b components.
3.2.3.1 The BPNN algorithm
A neural network with two hidden layers, m inputs and b outputs is adopted. As far as
the hidden neurons are concerned, their number is determined by the complexity of
the circuit and the difficulty in classifying the faults. Generally speaking, the more
elements there are in the circuit, the larger the number of hidden nodes needed. The
selection of the number of layers, the number of nodes in a layer, activation functions
and initial values of the weights to be adapted will all affect the learning rate, the
complexity of computation and the effectiveness of the neural network for a specific
problem. To date, there is no absolute rule to design the structure of a BPNN and
results are very much empirical in nature.
3.2.3.2 Fault feature generation
Using the principles of pattern recognition, feature generation aims at obtaining the essential characteristics of the input patterns. Since the quantities $\Delta v_i / \sqrt{\sum_{j=1}^{m} \Delta v_j^2}$ ($i = 1, 2, \ldots, m$) can be measured and are used to search for the faulty element among all branches, they are utilized as the fault feature values of the BPNN and as its inputs, of dimension $m$. The outputs of the BPNN correspond to the circuit elements to be diagnosed, having a dimension of $b$.
3.2.3.3 Constitution of sample and actual feature groups
By using Equation (3.8), under the condition that all elements have nominal values, the sample feature values of the circuit can be obtained by calculating $z_{if} / \sqrt{\sum_{j=1}^{m} z_{jf}^2}$ ($i = 1, 2, \ldots, m$), which is determined by the circuit topology. Because the values of elements can both increase and decrease, the minimum sample training groups with $2b$ single faults are formed.

With the excitation current applied, we can measure $\Delta V_m$ and obtain the actual feature values by calculating $\Delta v_i / \sqrt{\sum_{j=1}^{m} \Delta v_j^2}$ ($i = 1, 2, \ldots, m$), which correspond to different single faults. The groups of actual feature values are thus formed.
3.2.3.4 Training the BPNN and diagnosing faults
In the diagnosis system, because tolerances influence the actual feature values, the BPNN is used as a classifier to process the feature values affected by tolerances and locate the fault. According to the k-fault diagnosis method and the BPNN algorithm, 2b samples and large numbers of actual feature values are input to the BPNN to locate single faults. During the process, training requires a large number of iterations before the network converges to a minimal error. Some improvements in the algorithm may be used to reduce the testing time. To locate the fault from the output of the BPNN, we use the following rule: if the value of an output node is more than 0.5 (the threshold value of the activation function of the neurons in the output layer), the corresponding element is deemed faulty; otherwise, it is fault free.
The BPNN will not be fully trained until it converges to the target value. The
above process must be done before test. After the test, the measured feature vector
is applied to the trained BPNN and the output of the BPNN is the number of the
faulty branch/components. The steps involved in the fault diagnosis of a circuit can
be summarized as follows:
1. Define faults of interest.
2. Apply the test signal to the CUT and calculate the feature vectors under various
faulty or fault-free conditions.
3. Pass the feature vectors through the BPNN and train the BPNN.
3.2.4 Fault location of large-scale analogue circuits
3.2.5 Illustrative examples
3.2.5.1 Example 1
The neural-network-based fault diagnosis method is first illustrated using the circuit
shown in Figure 3.2.
In Figure 3.2, there are eight resistors. The nominal value of each resistor is 1 Ω and each element has a tolerance of ±5 per cent. Suppose that a single soft fault
Figure 3.2 A resistive circuit with eight resistors R1–R8

Table 3.1 Sample feature values of R3

X0      X1      X2
0.6396  0.6396  0.4264
0.6396  0.6396  0.4264

Table 3.2 Diagnosis results for a faulty R3

R3 value (Ω)  X0      X1      X2      Output node 3 (R3) value  Maximum value in other output nodes
0.2           0.6486  0.6305  0.4264  0.8495                    0.1195
0.9           0.6529  0.6233  0.4304  0.8491                    0.1194
1.2           0.6439  0.6352  0.4260  0.9480                    0.0569
2.5           0.6482  0.6412  0.4259  0.9510                    0.0546
has occurred in it. According to the topology of the circuit, three testing nodes are
selected, which are numbered nodes 1, 3 and 4. Thus, the BPNN should have three
input nodes in the input layer and eight output nodes in the output layer. In addition,
two hidden layers with eight hidden nodes each are designed. The BPNN algorithm is
simulated by computer in the C language. Also, PSpice is used to simulate the circuit
to obtain Vm . Because the diagnosis principle is the same for every resistor in the
circuit, we arbitrarily select R3 as an example to demonstrate the method described.
The sample feature values of R3 (X0, X1, X2) are calculated and shown in Table 3.1. These sample feature values of R3 are input to the BPNN so that the BPNN is trained and can memorize the information learned. After over 5000 training iterations, when the overall error is less than 0.03, the training of the BPNN is completed and the knowledge of the sample features is stored in it.
Now suppose R3 is faulty with values of 0.2 Ω, 0.9 Ω, 1.2 Ω and 2.5 Ω, respectively, while the values of the other resistors are within the tolerance range of ±5 per cent (here the values of the seven resistors are selected arbitrarily as R1 = 1.04 Ω, R2 = 0.99 Ω, R4 = 1.02 Ω, R5 = 0.98 Ω, R6 = 1.01 Ω, R7 = 0.987 Ω, R8 = 0.964 Ω). With the excitation of a 1 A current at testing node 1, the actual feature values of the three testing nodes are obtained by measuring ΔVm and calculating the left-hand side of Equation (3.8). Then, the actual feature values (X0, X1, X2) of the four situations are input to the input nodes of the BPNN, respectively, to classify and locate the corresponding faulty element. The results are shown in Table 3.2.
From Table 3.2, it can be seen that the diagnosis result is correct. For output
node 3 the value of the output layer is more than 0.5 and those of the other output
nodes are less than 0.5, which shows that R3 is the faulty element. Also, when R3
is 0.9 Ω, which is the case where the fault is very small and comparatively difficult
to detect, the BPNN-based k-fault diagnosis method can still successfully locate it.
Furthermore, for the other seven resistors, the method has also been proven to be
effective by simulation. In addition, once the BPNN is trained, the diagnosis process
becomes very simple and fast.
Compared with the traditional k-fault diagnosis method, the BPNN-based method
has clear advantages. The BPNN-based method requires less computation and is very
fast. Computation is needed only once to obtain sufficient sample and actual feature
values of testing nodes for a particular circuit. Also, the problem of component
tolerance can be successfully handled by the robustness of the BPNN. Hence, the
neural-network-based diagnosis method is more robust and faster and can be used in
real-time testing.
3.2.5.2 Example 2
A second circuit is shown in Figure 3.3. This is a large-scale analogue circuit. It is
decomposed into four subcircuits (marked in dashed lines), denoted by x1 , x2 , x3 ,
x4 , according to the nature of the circuit. Assume that R14 and Q3 are faulty; the
value of R14 is changed to 450 and the base of Q7 is open. Following the steps in
Section 3.2.4, Table 3.3 is produced, containing accessible node voltages.
Figure 3.3 A large-scale analogue circuit decomposed into four subcircuits x1–x4 (transistors Q1–Q11, resistors R1–R18, supplies +VCC = +6 V and −VEE = −6 V)
Table 3.3 Diagnosis data for Example 2 (Vi0 are the nominal accessible node voltages and Vi the measured values). For each subcircuit x1–x4 the table lists the nominal and measured voltages at its accessible nodes (for example, node 7: 0.8422 V nominal against 0.8330 V measured; node 14: 3.6267 V against 2.7878 V; node 15: 3.6267 V against 0.3886 V) together with the feature vectors derived from them.
The feature vectors are passed through the corresponding BPNNs and the following results are obtained from the outputs of the BPNNs: subcircuit 1 (x1 ) and subcircuit
4 (x4 ) were fault free; subcircuit 2 (x2 ) and subcircuit 3 (x3 ) were faulty. The faulty
elements were identified as Q3 and R14 , which are the same as originally assumed.
3.3 Fault diagnosis based on wavelet transforms and neural networks

3.3.1 Wavelet decomposition
Wavelets were introduced by Grossmann and Morlet as a function $\psi(t)$ whose dilations and translations can be used for expansions of $L^2(R)$ functions, provided $\psi(t)$ satisfies the admissibility condition $C_\psi = \int |\hat{\psi}(\omega)|^2 / |\omega| \, d\omega < \infty$. Let $V_{2^j}$ ($j \in Z$) be a multi-resolution approximation of $L^2(R)$. There exists an orthonormal basis (scaling function) $\phi_{j,k}(t) = 2^{j/2} \phi(2^j t - k)$ ($k \in Z$) of any $V_{2^j}$, obtained by dilating a function $\phi(t)$ with a coefficient $2^j$ and translating the resulting function on a grid whose interval is proportional to $2^{-j}$. Any multi-resolution approximation space can be written as the direct sum of a coarser approximation space and the corresponding detail spaces. Thus, the orthogonal projections of $f(t)$ in various frequency bands are obtained. The projection $p_{j-1}$ of $f(t)$ on $V_{j-1}$ is given by

$$p_{j-1} f = \sum_{k \in Z} c_{jk} \phi_{j,k} + \sum_{k \in Z} d_{jk} \psi_{j,k}$$

where $c_{jk}$ and $d_{jk}$ are the approximation and detail coefficients, respectively.
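A single level of such an orthogonal decomposition can be sketched with the Haar wavelet, chosen here purely for illustration (the text does not prescribe a particular basis):

```python
import numpy as np

def haar_step(f):
    """One level of orthogonal Haar decomposition: approximation c and detail d."""
    f = np.asarray(f, dtype=float)
    c = (f[0::2] + f[1::2]) / np.sqrt(2.0)  # projection onto the approximation space
    d = (f[0::2] - f[1::2]) / np.sqrt(2.0)  # projection onto the detail space
    return c, d

def haar_inverse(c, d):
    """Reconstruct the signal from approximation and detail coefficients."""
    f = np.empty(2 * len(c))
    f[0::2] = (c + d) / np.sqrt(2.0)
    f[1::2] = (c - d) / np.sqrt(2.0)
    return f

t = np.linspace(0.0, 1.0, 64, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t)
c, d = haar_step(signal)
print(np.allclose(haar_inverse(c, d), signal))  # -> True (perfect reconstruction)
# Orthogonality: energy is preserved across the two bands (Parseval)
print(np.isclose(np.sum(signal**2), np.sum(c**2) + np.sum(d**2)))  # -> True
```

Applying `haar_step` recursively to the approximation band yields the multi-level decomposition used in the next section.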
3.3.2 Noise removal and fault feature extraction
There are two types of noise source in a circuit: interior noise, for example, thermal noise of a resistor or flicker noise generated by a semiconductor element, and exterior noise, such as disturbances of the input signals. A noisy signal s(t) can be written as s(t) = f(t) + e(t), where f(t) and e(t) are the principal content and the noise, respectively. After wavelet decomposition, the detail sections are the superposition of the details of f(t) and e(t).

In practice, f(t) can usually be expressed as low-frequency signals and e(t) as high-frequency ones. Thus, noise removal can be executed by minimizing the noise contained in the detail coefficients: s(t) is first wavelet-decomposed and the resulting signals are then reconstructed. The method proposed in References 18 and 19 selects the approximation coefficients from the output node of the circuit as the features; treating the details as noise and simply setting them to zero can lead to a loss of valid information, thus resulting in a high probability of ambiguous cases [17]. The technique presented here extracts the candidate features generated from the test points by wavelet noise removal and wavelet decomposition. The optimal feature vectors for training the neural networks are then obtained by PCA and normalization of the approximation and detail coefficients.
The proposed algorithm can be summarized as follows:

1. Obtain the approximation coefficients $c_{jk}$ and detail coefficients $d_{jk}$ at various levels by an $N$-level orthogonal decomposition of the original sampled signals.
2. Remove the noise from all the $d_{jk}$.
3. Calculate the energy of every frequency band for the noise-removed signals: $E = [E_{\mathrm{app}} \; E_{\mathrm{det}}]$, where the elements of $E_{\mathrm{app}}$ and $E_{\mathrm{det}}$ are $E_j^{\mathrm{app}} = \sum_k c_{jk}^2$ and $E_j^{\mathrm{det}} = \sum_k d_{jk}^2$.
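Steps 1–3 can be sketched as follows, again using the Haar basis and a soft threshold for the noise removal of step 2 (both are illustrative assumptions; the threshold value is arbitrary):

```python
import numpy as np

def haar_decompose(f, levels):
    """N-level orthogonal Haar decomposition: detail bands plus final approximation."""
    c, details = np.asarray(f, dtype=float), []
    for _ in range(levels):
        a = (c[0::2] + c[1::2]) / np.sqrt(2.0)
        d = (c[0::2] - c[1::2]) / np.sqrt(2.0)
        details.append(d)
        c = a
    return c, details

def soft_threshold(d, thr):
    # Step 2: shrink small detail coefficients, assumed to carry the noise e(t)
    return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
s = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)  # s(t) = f(t) + e(t)

c, details = haar_decompose(s, levels=3)                    # step 1
details = [soft_threshold(d, 0.1) for d in details]         # step 2
E = [np.sum(c**2)] + [np.sum(d**2) for d in details]        # step 3: band energies
print(len(E), all(e >= 0.0 for e in E))  # one approximation band + three detail bands
```

The vector `E` is the candidate feature from one test point; PCA and normalization (steps 4–5 of the text) would then be applied across all test points.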
3.3.3 Wavelet neural networks (WNNs)
Neural networks have many features suitable for fault diagnosis. We now combine wavelet transforms and neural networks for fault diagnosis of analogue circuits. The WNN is shown in Figure 3.4. This is a multi-layer feedback architecture with wavelets, allowing the minimum time to converge to its global minimum. The WNN employs a wavelet base rather than a sigmoid function, which distinguishes it from general BPNNs. The function of mapping from $R^m$ to $R^n$ can be expressed as
BPNNS. The function of mapping from Rm to Rn can be expressed as
m
p
t bj
(3.9)
yi (t) = 1
wij
xk (t)
aj
j=1
k=1
In Equation (3.9), (.) and 1 (.) are the wavelet bases; xk (t) and yi (t) are the kth
input and ith output respectively. The weight functions in the hidden layer and output
layer are wavelet functions. The sum square error performance function is expected
Figure 3.4 WNN architecture
to reach minimum by feeding information forward and feeding error backward, thus
updating the weights and bias parameters according to its learning algorithms. A
momentum and adaptive learning rule is adopted to reduce the sensitivity of the local
details of error surfaces and shorten the learning time.
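A sketch of the mapping in Equation (3.9) is given below, assuming a Mexican-hat wavelet for ψ and a sigmoid for ψ1 (the text does not fix the bases; the sizes and parameter values are illustrative):

```python
import numpy as np

def mexican_hat(t):
    # Assumed hidden-layer wavelet base psi (the text does not name a particular one)
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def wnn_forward(x, t, W, a, b):
    """Mapping of Equation (3.9): y_i(t) = psi1( sum_j w_ij sum_k x_k(t) psi((t-b_j)/a_j) )."""
    s = np.sum(x) * mexican_hat((t - b) / a)  # one hidden value per wavelet node j
    return sigmoid(W @ s)                     # psi1 applied at the output layer

rng = np.random.default_rng(2)
p, m, n = 4, 3, 2                   # hidden wavelet nodes, inputs, outputs
W = rng.uniform(-0.5, 0.5, (n, p))  # output weights w_ij
a = np.ones(p)                      # dilations a_j
b = np.linspace(-1.0, 1.0, p)       # translations b_j
y = wnn_forward(x=np.array([0.2, 0.5, 0.1]), t=0.3, W=W, a=a, b=b)
print(y.shape, bool(np.all((y > 0) & (y < 1))))  # two outputs, each in (0, 1)
```

The dilations $a_j$ and translations $b_j$ are trainable alongside the weights, which is what the update formulas of the next section provide.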
3.3.4 Fault diagnosis based on WNNs
The fault diagnosis method based on WNNs identifies the fault class by the output of
the WNN trained with various fault patterns.
Definition 3.1 Test points of a circuit are those accessible nodes in a circuit whose
sensitivities to component parameters are not equal to zero.
Definition 3.2 Pattern extraction nodes (PEN) of a circuit are those test points in a
circuit which are distributed uniformly.
From Definition 3.2, using PENs can reduce the probability of missing a fault, thus increasing the diagnosability. The sketch of the WNN algorithm for fault diagnosis is given in Figure 3.4. It contains the three main steps described below.
1. Extraction of candidate patterns and feature vectors.
   To extract candidate features from the PENs of a circuit, 500 Monte Carlo analyses are conducted for every fault pattern of the sampled circuits with tolerances; 350 of them are used to train the WNNs and the other 150 are adopted for simulation. Optimal features for training the neural networks are then obtained by first selecting candidate sets from the wavelet coefficients using PCA and normalization (according to steps 1–5 in Section 3.3.2). Assuming that the feature vector of the ith PEN is TVi = [A1, A2, . . . , An], then for all PENs we have TV = [TV1, TV2, . . . , TVq], where q is the number of PENs.
2. Design and training of WNNs.
   A multi-layer feedback neural network whose number of outputs is equal to the number of fault classes is used. The error performance function is given by

$$E = \frac{1}{2} \sum_{l=1}^{q} \sum_{i=1}^{N} \left(y_{d,i}^{l}(t) - y_i^{l}(t)\right)^2 \qquad (3.10)$$

where $N$ is the total number of training patterns and $y_{d,i}^{l}(t)$ and $y_i^{l}(t)$ are the desired and real outputs associated with feature $TV_i$ for the $l$th neuron, respectively. Also, $y_i^{l}(t)$ is given by Equation (3.9).
To minimize the sum square error function in Equation (3.10) the weights
and coefficients in Equation (3.9) or Figure 3.4 can be updated using the
following formulas
$$\Delta w_{ij} = \eta \sum_{l=1}^{q} \sum_{i=1}^{N} \left(y_{d,i}^{l}(t) - y_i^{l}(t)\right) \bar{\psi}_1' \sum_{k=1}^{m} x_k^{l}(t)\, \psi\!\left(\frac{t - b_j}{a_j}\right) \qquad (3.11)$$

$$\Delta b_j = -\eta \sum_{l=1}^{q} \sum_{i=1}^{N} \left(y_{d,i}^{l}(t) - y_i^{l}(t)\right) \bar{\psi}_1'\, w_{ij} \sum_{k=1}^{m} x_k^{l}(t)\, \psi'\!\left(\frac{t - b_j}{a_j}\right) \frac{1}{a_j}$$

$$\Delta a_j = -\eta \sum_{l=1}^{q} \sum_{i=1}^{N} \left(y_{d,i}^{l}(t) - y_i^{l}(t)\right) \bar{\psi}_1'\, w_{ij} \sum_{k=1}^{m} x_k^{l}(t)\, \psi'\!\left(\frac{t - b_j}{a_j}\right) \frac{t - b_j}{a_j^2} \qquad (3.12)$$

where $\bar{\psi}_1'$ denotes the derivative of $\psi_1$ evaluated at its argument in Equation (3.9) and $\eta$ is the learning rate.
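The update formulas can be sanity-checked numerically: the sketch below replaces the analytic derivatives with central finite differences and verifies that one small gradient step on $w_{ij}$, $a_j$ and $b_j$ lowers the error $E$ of Equation (3.10). The Mexican-hat ψ, sigmoid ψ1 and all sizes are assumptions for illustration.

```python
import numpy as np

def mexican_hat(t):
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

p, m, n = 3, 2, 2                       # wavelet nodes, inputs, outputs
rng = np.random.default_rng(3)
W = rng.uniform(-0.5, 0.5, (n, p))
a = np.ones(p)
b = np.linspace(-0.5, 0.5, p)
# Two training patterns: (inputs x, time sample t, desired outputs yd)
data = [(np.array([0.3, 0.6]), 0.2, np.array([1.0, 0.0])),
        (np.array([0.8, 0.1]), 0.7, np.array([0.0, 1.0]))]

def y_out(x, t, W, a, b):
    # Equation (3.9) with the assumed bases
    return sigmoid(W @ (np.sum(x) * mexican_hat((t - b) / a)))

def E(W, a, b):
    # Equation (3.10): half the total squared error over all training patterns
    return 0.5 * sum(np.sum((yd - y_out(x, t, W, a, b)) ** 2) for x, t, yd in data)

def num_grad(f, v, eps=1e-6):
    # Central finite differences as a stand-in for the analytic derivatives above
    g = np.zeros_like(v)
    it = np.nditer(v, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        v[idx] += eps; fp = f()
        v[idx] -= 2 * eps; fm = f()
        v[idx] += eps
        g[idx] = (fp - fm) / (2 * eps)
    return g

eta, e0 = 0.01, E(W, a, b)
for v in (W, a, b):                     # update w_ij, then a_j, then b_j
    v -= eta * num_grad(lambda: E(W, a, b), v)
print(E(W, a, b) < e0)                  # a small gradient step lowers the error
```

In practice the analytic formulas (3.11)–(3.12) would replace `num_grad` for speed; the numerical gradient serves only to check them.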
3.3.5 Illustrative examples
The two circuits chosen are the same as those in References 17–19 for convenience of comparison, and are shown in Figures 3.5 and 3.6. The circuit in Figure 3.5 is a Sallen–Key bandpass filter. The nominal values of its components, which result in a centre frequency of 160 kHz, are shown in the figure. The resistors and capacitors are assumed to have tolerances of 5 and 10 per cent, respectively. The primary motivation for selecting this filter and its associated faults described later in this section is to compare our results with those in References 17 and 19. If we assume that R2, R3, C1 and C2 can be 50 per cent higher or lower than their respective nominal values shown in Figure 3.5, we have the fault classes R2↑, R2↓, R3↑, R3↓, C1↑, C1↓, C2↑ and
Figure 3.5 Sallen–Key bandpass filter (R1 = 1 kΩ, R2 = 3 kΩ, R3 = 2 kΩ, R4 = R5 = 4 kΩ, C1 = C2 = 5 nF)
Figure 3.6 Four-op-amp biquad high-pass filter (resistors of 6200 Ω, 5100 Ω, 1600 Ω and 10 kΩ; C1 = C2 = 5 nF)
C2↓. The notations ↑ and ↓ stand for high and low, respectively. In order to generate training data for the different fault classes, we set faulty components in the circuit and vary the resistors and capacitors within their tolerances.
Figure 3.6 shows a four-op-amp biquad high-pass filter with a cut-off frequency of 10 kHz. Its nominal component values are given in the figure. Faulty component values for this circuit are set to be the same as those in Reference 19 for convenience of comparison. Tolerances of 5 and 10 per cent are used for the resistors and capacitors to make the example practical.

The impulse responses of the filter circuit are simulated to train the WNN, with the filter input a single pulse of height 5 V and duration 10 μs. We adopt a
WNN architecture of N1 -38-N2 , where N1 is the number of input patterns and N2
is the number of fault patterns. For the fault diagnosis of the SallenKey filter in
Figure 3.5, the method presented in Reference 17 requires a three-layer BPNN. This
network has 49 inputs, 10 first-layer and 10 second-layer neurons, resulting in about
700 adjustable parameters. During the training phase, an error function of these
parameters must be minimized to obtain the optimal weight and bias values. The
trained network was able to properly classify 95 per cent of the test patterns. Reference
19 for diagnosing nine fault classes (eight faulty components plus the no-fault class)
in the same SallenKey bandpass filter, requires a neural network with four inputs, six
first-layer and eight output-layer neurons. Their results show that the neural network
can not distinguish between the NF (no-fault class) and the R2 fault classes. If
these two classes are combined into one ambiguity group and we use eight output
neurons accordingly, the neural network can correctly classify 97 per cent of the test
patterns. Using the method described above, the trained WNN is capable of 100 per
cent correct classification of the test data, although the WNN used is somewhat more
complicated.
Using the method in Reference 19 to diagnose the 13 single faults assumed in Table
I of Reference 19 for the four-opamp biquad high-pass filter of Figure 3.6 requires
a neural network with five inputs, 16 first-layer and 13 output-layer neurons. Their
3.4
Analogue circuit fault location has proved to be an extremely difficult problem,
mainly because of component tolerances and the non-linear nature of the problem.
Among the many fault diagnosis methods, the L1 optimization technique is
an important parameter identification approach [28, 29] that is insensitive to
tolerances. This method has been successfully used to isolate the most likely faulty
elements in linear analogue circuits and, when combined with neural networks, makes
real-time testing possible for linear circuits with tolerances. Some fault verification
methods have been proposed for non-linear circuit fault diagnosis [30–34]. On the
basis of their linearization principles, parameter identification methods can be developed
for non-linear circuits. In particular, the L1 optimization method can be extended
and modified for fault diagnosis of non-linear circuits with tolerances, and neural
networks can be used to make the method more effective and faster for non-linear
circuit fault location.
This section deals with fault diagnosis in non-linear analogue circuits with tolerances under an insufficient number of independent voltage measurements. The
L1 -norm optimization problem for different scenarios of non-linear fault diagnosis is
formulated. A neural-network-based approach for solving the non-linear constrained
L1 -norm optimization problem is presented and utilized in locating the most likely
faulty elements in non-linear circuits. The validity of the method is verified and
simulation examples are presented.
Figure 3.7 A sampled signal, the de-noised signal and its wavelet decomposition coefficients ca5 and cd5–cd1

Figure 3.8 A sampled signal, the de-noised signal and its wavelet decomposition coefficients ca5 and cd5–cd1
3.4.1
Assume that a non-linear resistive circuit has n nodes (excluding the reference node),
m of which are accessible. There are b branches, of which p elements are linear and
q non-linear, b = p + q. The components are numbered in order from linear to non-linear
elements. For simplicity, we assume that all non-linear elements are voltage
controlled, with characteristics denoted as

ip+1 = fp+1(vp+1), . . . , ip+q = fp+q(vp+q)

When the non-linear circuit is fault free, each non-linear component works at its
static point Q0 and its voltage–current relation can be described as iQ0 = y0 vQ0,
where y0 is the static conductance at working point Q0, and iQ0 and vQ0 are
the current and voltage at Q0, respectively. When the circuit is faulty, whether
or not the non-linear element itself is faulty, the static parameter will change from
y0 to y0 + Δy, where Δy represents the increment from y0. The change Δy can be
equivalently described by a parallel current source vΔy, where v is the actual voltage
[30–33]. For the linear elements, as is well known, the change in a component value
from its nominal can be represented by a current source. For a small-signal excitation,
which lies in the neighbourhood of the working point Q0, the non-linear resistive
element can be replaced by a linear resistor. According to the superposition theorem,
we can derive [30–33]:
ΔVm = Hmb Eb    (3.13a)

Eb = [e1, e2, . . . , eb]T    (3.13b)

ei = vi Δyi    (3.13c)
where ΔVm is the increment vector of the voltages of the accessible nodes due to faults,
Hmb is the coefficient matrix that relates the accessible nodal voltages to the equivalent
current source vector Eb and can be calculated from the nominal linear conductances
and the working-point conductances of the non-linear components, vi is the actual
branch voltage of component i, Δyi (i = 1, 2, . . . , p) is the change in the conductance
of a linear component and Δyi (i = p + 1, . . . , p + q) is the deviation from the
static conductance of a non-linear element.
Equation (3.13) is an underdetermined system of linear equations for the parameters
Eb. Therefore the L1-norm optimization problem may be stated as

minimize  Σ(i=1..b) |ei|    (3.14a)

subject to

ΔVm = Hmb Eb    (3.14b)
(3.15)
to determine whether or not the element is faulty, or in other words, whether or not the
actual working point remains on the normal characteristic curve within the tolerance
limits [30–34]. If Equation (3.15) holds within its tolerance, the non-linear element is
fault free and Δy, the result of the working point Q0 moving along its characteristic
curve, is caused by other faulty elements. If Equation (3.15) does not hold,
the non-linear element is faulty.
Equation (3.14) is restricted to a single excitation. In fact, multiple excitations
can be used to enhance diagnosability and provide better results. For k excitations
applied to the faulty network, the L1-norm problem is formulated as

minimize  Σ(i=1..b) |Δyi/yi0|    (3.16a)

subject to

ΔVlm = Hlmb Vlb ΔY,  l = 1, 2, . . . , k    (3.16b)

where ΔY = [Δy1, . . . , Δyb]T and Vlb = diag(vl10, . . . , vlb0) is the diagonal matrix
of the nominal branch voltages under the lth excitation, so that Elb = Vlb ΔY. When
the branch-voltage changes are also treated as unknowns, the problem becomes

minimize  Σ(i=1..b) |Δyi/yi0|    (3.17a)

subject to

ΔVlm = Hlmb diag(vl10 + Δvl1, . . . , vlb0 + Δvlb) ΔY,  l = 1, 2, . . . , k    (3.17b)

where vi0 represents the nominal branch voltage, Δvi is the change in the voltage due
to faults and tolerance, and vi0 + Δvi is the actual voltage vi, which makes the
constraints non-linear in the unknowns.
From Equation (3.17) we can obtain ΔY. For a linear element, if Δy/y0 exceeds
its allowed tolerance significantly, we can consider it to be faulty. However, for
a non-linear resistive element, we cannot simply draw this conclusion. For analogue
circuits with tolerances, the voltage–current relation of a non-linear resistive
element is represented by a set of curves instead of a single one, with the nominal
voltage-to-current characteristic of the non-linear element in the centre of the zone.
Therefore, for a non-linear component, after determining Δy/y0 we need to simulate
the non-linear circuit again to judge whether or not the component is faulty. If the
actual V–I curve of the non-linear element deviates significantly from the tolerance
zone of curves, the non-linear element can be considered faulty. Otherwise, if the
actual curve falls within the zone, the non-linear element is fault free.
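The tolerance-zone check described above can be sketched as follows. The band construction (scaling the nominal characteristic by 1 ± tol) and the cubic characteristic i = 10v³ are illustrative assumptions for positive voltages, not the chapter's exact zone definition.

```python
def in_tolerance_zone(points, f, tol=0.05):
    """Check whether measured (v, i) points lie inside the band swept by
    the nominal characteristic i = f(v) scaled by (1 +/- tol).
    Assumes v > 0 so that f(v) > 0 and the band does not flip sign."""
    return all((1 - tol) * f(v) <= i <= (1 + tol) * f(v) for v, i in points)

char7 = lambda v: 10 * v ** 3        # illustrative cubic characteristic

# Measured points 2-3% off the nominal curve: inside the 5% zone
good = [(0.1, 10 * 0.1 ** 3 * 1.02), (0.2, 10 * 0.2 ** 3 * 0.97)]
# A point 40% off the nominal curve: outside the zone, element faulty
bad = [(0.1, 10 * 0.1 ** 3 * 1.40)]
```

Here `in_tolerance_zone(good, char7)` returns True while `in_tolerance_zone(bad, char7)` returns False, mirroring the fault-free/faulty decision described in the text.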
3.4.2
According to the above analyses, the L1-norm problem of non-linear circuit fault
diagnosis has three representations, corresponding to Equations (3.14), (3.16) and (3.17),
respectively. The L1-norm problem in Equation (3.14) is an underdetermined linear
parameter estimation problem, while the L1-norm problem in Equation (3.17) is a
non-linear parameter estimation problem. The L1-norm problem of Equation (3.16)
is traditionally solved using a linear programming algorithm and can hence be
considered a linear parameter estimation problem.
The L1-norm problem in Equation (3.14) is simple and can be solved using
Linear Programming Neural Network (LPNN) algorithms. We can easily transform
the problem of Equation (3.14) into the standard linear programming problem.
Introducing new variables xi :
xi = ei ,
xi = 0,
xb+i = 0, ei 0
xb+i = ei ,
ei 0
(3.18)
(3.19a)
AX = B
X0
(3.19b)
subject to
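The transformation of Equations (3.18) and (3.19) can be demonstrated numerically. The sketch below uses SciPy's conventional `linprog` solver as a stand-in for the LPNN, and the matrix H and measurement vector are made-up illustrative numbers, not data from the chapter.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: 2 accessible-node equations, 4 branch sources --
# an underdetermined system of the form of Equation (3.14)
H = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.3, 1.0, 0.4, 0.2]])
dVm = np.array([0.8, 0.9])

b = H.shape[1]
# Split e_i = x_i - x_{b+i} with x >= 0 (Equation (3.18)); then
# minimize sum(x) subject to [H, -H] x = dVm (Equation (3.19))
c = np.ones(2 * b)
A_eq = np.hstack([H, -H])
res = linprog(c, A_eq=A_eq, b_eq=dVm, bounds=(0, None))
e = res.x[:b] - res.x[b:]            # recover the signed sources

assert res.success
assert np.allclose(H @ e, dVm, atol=1e-8)
```

The recovered vector e is the sparsest-in-L1 set of equivalent current sources consistent with the measured voltage increments, which is what points to the most likely faulty branches.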
The L1-norm problem in Equation (3.16) can also be transformed into a standard
linear programming problem in the same way and solved using the LPNN.
The L1-norm problem in Equation (3.17) is a non-linearly constrained
optimization problem. The solution of a non-linear constrained optimization problem
generally constitutes a difficult and often frustrating task, and the search for new insights
and more effective solutions remains an active research endeavour. To solve a non-linear
constrained optimization problem using neural networks, the key step is
to derive a computational energy function (Lyapunov function) E such that the lowest
energy state corresponds to the desired solution. There have been various neural-network-based
optimization techniques, such as the exterior penalty function method,
the augmented Lagrange multiplier method, and so on. However, existing references
discuss only the unconstrained L1-norm optimization problem. Here, an effective
method for solving the non-linear constrained L1-norm optimization problem such as
Equation (3.17) is presented.
Although aimed at solving the problem in Equation (3.17), the approach is developed
in a general way. The symbols used below may have different meanings from
(and should not be confused with) those above, though some are kept the same
for convenience. Applying the general formulation to the problem in Equation (3.17)
is straightforward.
A general non-linear constrained L1-norm optimization problem can be
described as

minimize  Σ(j=1..m) |fj(X)|    (3.20a)

subject to

Ci(X) = 0  (i = 1, 2, . . . , l)    (3.20b)

Using the exact penalty approach, the computational energy function is

E(X, R) = Σ(j=1..m) |fj(X)| + Σ(i=1..l) ri |Ci(X)|    (3.21)
The network dynamics are defined by

dxj/dt = −μj [ Σ(i=1..m) αfi f̄i ∂fi(X)/∂xj + Σ(i=1..l) ri αci c̄i ∂Ci(X)/∂xj ],  xj(0) = xj0,  j = 1, . . . , n    (3.23a)

dαfi/dt = −λfi sfi fi(X),  αfi(0) = αfi0,  i = 1, . . . , m    (3.23b)

dαci/dt = −λci sci Ci(X),  αci(0) = αci0,  i = 1, . . . , l    (3.23c)

where

f̄i = sgn(fi(X)) = { 1, fi(X) > 0; −1, fi(X) < 0 }  (i = 1, . . . , m)
c̄i = sgn(Ci(X)) = { 1, Ci(X) > 0; −1, Ci(X) < 0 }  (i = 1, . . . , l)
sfi = { 0, fi(X) = 0; 1, fi(X) ≠ 0 }  (i = 1, . . . , m)
sci = { 0, Ci(X) = 0; 1, Ci(X) ≠ 0 }  (i = 1, . . . , l)
|f̄i| ≤ 1,  |c̄i| ≤ 1,  μj, λfi, λci > 0
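A drastically simplified version of these dynamics can be sketched in a few lines: plain Euler-integrated subgradient descent on the exact-penalty energy of Equation (3.21), with the adaptive gains αfi, αci omitted. The toy problem (minimize |x1| + |x2| subject to x1 + x2 − 1 = 0), the step size and the gain values are all illustrative assumptions, not the chapter's network.

```python
import math

# Toy instance of Equation (3.20): minimize |x1| + |x2|
# subject to C(x) = x1 + x2 - 1 = 0.  Subgradient descent on the
# exact-penalty energy (3.21) with r > 1 drives x onto the constraint.
def sgn(v, eps=1e-12):
    return 0.0 if abs(v) < eps else math.copysign(1.0, v)

r, mu, steps = 10.0, 1e-3, 20000      # penalty weight, step size
x = [0.0, 0.0]
for _ in range(steps):
    c = x[0] + x[1] - 1.0             # constraint residual
    grad = [sgn(x[0]) + r * sgn(c),   # subgradient of E(X, R)
            sgn(x[1]) + r * sgn(c)]
    x = [xj - mu * gj for xj, gj in zip(x, grad)]

# The state settles on the constraint with the minimum L1 cost of 1
assert abs(x[0] + x[1] - 1.0) < 0.05
assert abs(abs(x[0]) + abs(x[1]) - 1.0) < 0.1
```

The trajectory chatters in a small band around the constraint surface, which is exactly the behaviour the adaptive parameters αfi, αci in Equation (3.23) are introduced to smooth out.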
Figure 3.9 Neural-network architecture for the constrained L1-norm optimization problem, comprising a main network that computes X and a control network that generates the adaptive parameters αfi and αci from fi(X), Ci(X) and the partial derivatives ∂fi(X)/∂xj and ∂Ci(X)/∂xj
The neural network will move from any initial state X0 that lies in the neighbourhood
N(X*) = {X0 : ‖X0 − X*‖ < δ, δ > 0} of X* in a direction that tends to decrease
the cost function being minimized. Eventually, a stable state of the network will be
reached that corresponds to a local minimum of the cost function.
It can be proved that the stable state X* satisfies the necessary conditions for
optimality of the function in Equation (3.21). Obviously, dE(X, R)/dt ≤ 0. It should
be noted that dE(X, R)/dt = 0 if and only if dX/dt = 0, that is, the neural network
is in the steady state, and

Σ(i ∉ A) λ̄i ∇f̃i(X*) + Σ(i ∈ A) Vi ∇f̃i(X*) = 0    (3.24)

where

f̃i(X) = fi(X)  (i = 1, . . . , m)
f̃i(X) = ri Ci(X)  (i = m + 1, . . . , m + l)
I = {1, . . . , m, m + 1, . . . , m + l}
A = A(X*) = {i | f̃i(X*) = 0, i ∈ I}
|Vi| ≤ 1 for i ∈ A,  λ̄i = sgn(f̃i(X*)) for i ∉ A
This means that the stable equilibrium point X* of the neural network satisfies the necessary conditions for optimality of Equation (3.21). According to theorem 3.1, the stable
Figure 3.10 Non-linear resistive example network with linear conductances G1–G6, non-linear elements G7 and G8, and nodes 1–5
state X* of the neural network corresponds to the solution of the L1-norm problem in
Equation (3.20).
Note that αfi and αci are used as adaptive control parameters to accelerate the
minimization process. From the ANN depicted in Figure 3.9, it follows that the variables
αfi and αci are controlled by the inner state of the neural network, which gives the
neural network a smooth dynamic process; as a result, the neural network can quickly
converge to a stable state that is a local minimum of E(X, R).
Because the neural-network computational energy function in Equation (3.21) is derived
from the exact penalty approach, applying the ANN in Figure 3.9 (or the neural-network
algorithm of Equation (3.23)) yields more accurate results with appropriate
penalty parameters ri (greater than the corresponding multiplier magnitudes) that
need not be large. The main advantage of the neural-network-based method for
solving the L1-norm problem, in comparison with other known methods, is that it
avoids the error caused by approximating the absolute value function, thus providing
a high-precision solution without the use of large penalty parameters. The effectiveness
and performance of the neural-network architecture and algorithm have been verified
in simulation. One example is given below.
3.4.3
Illustrative example
Consider the non-linear resistive network shown in Figure 3.10, with the nominal
values of the linear elements 1–6 being yi0 = 1 (i = 1, . . . , 6) and the characteristics of
the non-linear resistive elements 7 and 8 being i7 = 10v7^3 and i8 = 5v8^2, respectively.
Both the linear element parameters and the static conductances (y70 and y80) of the
non-linear elements have a tolerance of 0.05. Nodes 1, 3, 4 and 5 are assumed to be
accessible, with node 5 taken as the ground. Node 2 is assumed to be internal, where no
measurement can be performed.
For a single small-signal current excitation of 10 mA at node 1, the
changes in the accessible nodal voltages due to faults can be obtained as ΔVm =
[0.0044, 0.001, 0.0004598]T.
Construct the matrices required by Equation (3.17) using the nominal/static component
values and solve the L1 problem in Equation (3.17) using the neural network
described in Equation (3.23). The neural network with ri = 10 (i = 1, 2, 3), zero
initial state and μj = 10^6, λfj = λci = 10^7 (j = 1, . . . , 8; i = 1, 2, 3) has been
simulated using the fourth-order Runge-Kutta method. The equilibrium point of the
neural network is the solution of the L1 problem, given by Δy1/y10 = 0.5071,
3.5
Summary
This chapter addressed the application of neural networks to fault diagnosis of
analogue circuits. A fault dictionary method based on neural networks has been presented.
This method is robust to element tolerances and requires little after-test computation.
The diagnosis of soft faults has been shown, and the method is also suitable for hard
faults. Significant diagnosis precision has been reached by training the BPNN on a
large number of samples. While the trained faulty samples can be easily identified,
the BPNN can also detect untrained faulty samples. Therefore, the fault diagnosis
method presented can not only quickly detect the faults in the traditional dictionary
but can also detect faults not in the dictionary. As has been demonstrated, the
method is also suitable for large-scale circuit fault diagnosis.
A method for fault diagnosis of noisy analogue circuits using WNNs has also
been described. In this technique, candidate features are extracted from the energy
in each frequency band of the signals sampled from the PENs in a circuit, de-noised
by wavelet analysis, and optimal feature vectors are selected by PCA, data
normalization and wavelet multi-resolution decomposition. The optimal feature sets
are then used to train the WNN. The method is characterized by its high diagnosability:
it can distinguish the ambiguity sets or misclassified faults that other methods
cannot identify and is robust to noise. However, some overlapping ranges appear as
the component tolerances increase to 10 per cent.
Fault diagnosis of non-linear circuits taking tolerances into account is the most
challenging topic in analogue fault diagnosis. A neural-network-based L1-norm
optimization approach has been introduced for fault diagnosis of non-linear resistive
circuits with tolerances. The neural-network-based L1-norm method can solve various
linear and non-linear equations. A fault diagnosis example has been presented,
which shows that the method can effectively locate faults in non-linear circuits. The
method is robust to tolerance levels and suitable for online fault diagnosis of non-linear
circuits, as it requires fewer steps in the L1 optimization, and the use of neural
networks further speeds up the diagnosis process.
3.6
References
Chapter 4
4.1
Introduction
The size and complexity of integrated circuits (ICs) and related systems have
continued to grow at a remarkable pace during recent years. This has included much
larger-scale analogue circuits and the development of complex analogue/mixed-signal
(AMS) circuits. Whereas the testing and diagnosis techniques for digital circuits are
well developed and have largely kept pace with the growth in complexity of the ICs,
analogue test and diagnosis methods have always been less mature than their digital
counterparts. There are a number of reasons for this. First, stuck-at fault
modelling and a structured approach to testing have been widely exploited on the digital
side, whereas there is no real equivalent in the analogue world for translating physical
faults into a simple electrical model. The second major problem with analogue
circuits is the continuous nature of the signals, giving rise to an almost infinite number
of possible faults within the circuit. Third, there is the problem of the tolerances
associated with component and signal parameters, resulting in the definitions of faulty
and fault-free conditions being somewhat blurred. Other problems inherent in large-scale
analogue circuit evaluation include the non-linearities of certain components
and feedback systems within the circuits.
However, even though circuits have grown in size and complexity, the design
tools used to realise these circuits have matched this development. This means that complex
circuits are becoming increasingly available as custom design items for a growing
number of engineers. Unfortunately, the test and diagnosis tools have not matched
this rate of development; so although IC production has seen a steady
decrease in cost per component, the test and maintenance cost has increased
proportionally. While some standard analogue test and diagnosis procedures have
been developed over the years, many of these are only applicable to relatively small
4.1.1
Diagnosis definitions
4.2
4.2.1
As the name suggests, in simulation-before-test (SBT) approaches the majority of the
simulation work relevant to the diagnosis routine is performed before measuring the circuit
under test (CUT). The main approach is to perform a series of fault simulations, that is,
to introduce a particular fault into the circuit description, simulate the circuit and record
the details of the faulty response. This process is repeated for different faults, and so a
series of faulty responses is derived. Sometimes different faults give rise to similar
responses, and these faults can be grouped together in a set. This process has been
variously called fault grouping, fault clustering and fault collapsing in the literature.
Note that there are basically two classes of faults which can be simulated: hard (or
catastrophic) faults, which are typically modelled as short- or open-circuits, and soft
(or parametric) faults, in which component parameters have a value outside their
nominal range but still operate functionally. In analogue circuits there are theoretically
an infinite number of possible hard and soft faults, so clearly a limited set of these must
be taken in SBT approaches in order to make the problem tractable. Certainly a reduced
set of hard faults can be envisioned: branches can be open-circuited or nodes
short-circuited to other nodes. Either an exhaustive set of such faults can be simulated
or a reduced set of the more likely faults can be constructed. One method for creating
a realistic fault list is inductive fault analysis (IFA) [3], which is based on the physical
translation of processing faults into
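The fault grouping (fault collapsing) step described above can be sketched as follows. The response signatures and the distance threshold are illustrative stand-ins; in practice each signature would come from a circuit simulation of the corresponding fault.

```python
# Stand-in fault simulation results: each fault maps to a small
# response signature (in practice obtained from circuit simulation)
responses = {
    "NF":       (1.00, 0.50, 0.10),
    "R1 open":  (0.00, 0.00, 0.00),
    "R2 short": (0.01, 0.00, 0.00),   # nearly identical to "R1 open"
    "C1 high":  (0.95, 0.70, 0.30),
}

def dist(a, b):
    """Worst-case deviation between two response signatures."""
    return max(abs(x - y) for x, y in zip(a, b))

def collapse(resp, eps=0.05):
    """Group faults whose responses are indistinguishable within eps."""
    groups = []
    for fault, sig in resp.items():
        for g in groups:
            if dist(sig, resp[g[0]]) < eps:
                g.append(fault)        # joins an existing ambiguity set
                break
        else:
            groups.append([fault])     # starts a new set
    return groups

groups = collapse(responses)
# "R1 open" and "R2 short" fall into a single ambiguity set
```

The resulting groups are the entries of the fault dictionary: faults inside one group cannot be separated by the chosen measurements and are reported together.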
4.2.2
Figure 4.1 Example circuit (V1, R1, R2, C3, R4) and its corresponding graph
Once the CUT graph has been derived, it is possible to define a tree for the
graph. A tree is a subset of the graph edges which connects all the nodes without
completing any closed loops. The co-tree is the complementary subset of the tree. Given
a particular graph, there may be many different ways of defining tree/co-tree pairs.
Once a particular tree has been defined, the component connection model (CCM) [7]
can be used to separate the CUT model into component behaviour and topological
description. The behaviour is modelled using the matrix equation:

b = Za    (4.1)

where

a = [itree; vcotree]  and  b = [vtree; icotree]

are the input and output vectors, respectively. Z is called the component transfer
matrix and describes the linear voltage–current relationships of the components in
the CUT. The topology of the CUT is described by the connection equation:
a = L11 b + L12 u
(4.2)
where u is a stimulus vector. The results from the measurement of the CUT are
described by the measurement equation:
y = L21 b + L22 u
(4.3)
where y is the test point vector containing the measurement results. The Lij are the
connection matrices which are derived from the node incidence matrices referring
to the tree/co-tree partition. From the simple example circuit of Figure 4.1, we can
derive a tree such that V1 , R1 and C3 form the tree and R2 and R4 form the co-tree. We
consider V1 to be the stimulus component, and so we can derive the various vector
equations
a = [iR1, iC3, vR2, vR4]T,  b = [vR1, vC3, iR2, iR4]T  and  u = (uV1)    (4.4)
The behaviour equation is

[vR1]   [R1     0         0      0   ] [iR1]
[vC3] = [0    1/(jωC3)    0      0   ] [iC3]    (4.5)
[iR2]   [0      0        1/R2    0   ] [vR2]
[iR4]   [0      0         0     1/R4 ] [vR4]

The connection equation is

[iR1]   [ 0   0  1  1] [vR1]   [0]
[iC3] = [ 0   0  0  1] [vC3] + [0] uV1    (4.6)
[vR2]   [−1   0  0  0] [iR2]   [1]
[vR4]   [−1  −1  0  0] [iR4]   [1]
Suppose that we take as our test points the current iR1 and the voltage vR2; the
measurement equation then becomes:

[iR1]   [ 0  0  1  1] [vR1]   [0]
[vR2] = [−1  0  0  0] [vC3] + [1] uV1    (4.7)
                      [iR2]
                      [iR4]
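The CCM equations for this example can be checked numerically: combining a = L11 b + L12 u with b = Za gives a = (I − L11 Z)⁻¹ L12 u, which should agree with ordinary nodal analysis of the same circuit. The component values and frequency below are arbitrary illustrative choices, and the connection matrices follow from KCL/KVL for the tree/co-tree partition described in the text.

```python
import numpy as np

# Figure 4.1 example: tree = {V1, R1, C3}, co-tree = {R2, R4}
R1, R2, R4, C3 = 1e3, 2e3, 3e3, 1e-6   # arbitrary values
w, u = 1e3, 1.0                         # rad/s, source voltage

# b = Z a with a = [iR1, iC3, vR2, vR4], b = [vR1, vC3, iR2, iR4]
Z = np.diag([R1, 1 / (1j * w * C3), 1 / R2, 1 / R4])
L11 = np.array([[0, 0, 1, 1],
                [0, 0, 0, 1],
                [-1, 0, 0, 0],
                [-1, -1, 0, 0]], dtype=complex)
L12 = np.array([0, 0, 1, 1], dtype=complex)

# Combine the behaviour and connection equations and solve for a
a = np.linalg.solve(np.eye(4) - L11 @ Z, L12 * u)
vA = a[2]                               # vR2 is the voltage at node A

# Direct nodal analysis of the same circuit for comparison:
# C3 in series with R4 forms one branch from node A to ground
Zs = 1 / (1j * w * C3) + R4
vA_ref = u / (1 + R1 * (1 / R2 + 1 / Zs))
assert np.isclose(vA, vA_ref)
```

Agreement between the CCM solution and direct nodal analysis confirms that the behaviour matrix Z and the connection matrices encode the same circuit.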
Clearly, with large-scale circuits, a number of tree/co-tree pairs are possible,
and as the connection matrices depend on this partition, different sets of
CCM equations are possible. An optimal tree-generation procedure is proposed in
Reference 8, which ensures the sparsest matrix system in order to minimize the
computational burden. Once the optimal tree/co-tree partition has been determined, test
points need to be chosen in order to ensure diagnosability of the CUT. An arbitrary
diagnosis depth (number of allowable faults present) can be specified. As the diagnosis
depth is increased, so the number of test points needs to be increased. Often the
algorithm is run on the basis of only one fault being present (a diagnosis depth of unity).
The construction of the CCM equation set is only the first part of the diagnosis
algorithm. The second stage is the implementation of the ST algorithm itself. In this
stage, the components in the circuit are divided into tester and testee groups. In
the first instance, all the components in the tester group are assumed to be fault free,
so the a and b vectors are split into tester (superscript 1) and testee (superscript 2)
elements, respectively:

a = [a1; a2]  and  b = [b1; b2]
We now form what is termed the pseudo-circuit description. First, the CCM
equations are rewritten according to the tester/testee partition:

[b1]   [Z1  0 ] [a1]
[b2] = [0   Z2] [a2]    (4.8)

[a1]   [L11^11  L11^12] [b1]   [L12^1]
[a2] = [L11^21  L11^22] [b2] + [L12^2] u    (4.9)

y = [L21^1  L21^2] [b1; b2] + L22 u    (4.10)
where the matrices Lij^kl and Lij^k are obtained by appropriately picking out the rows and
columns of the connection matrices Lij. Solving these equations for the testee quantities
yields the so-called pseudo-circuit equation [5]:
[a1]   [K11  K12] [b1]
[yp] = [K21  K22] [up]    (4.11)

where

yp = [a2; b2],  up = [u; y]

K11 = L11^11 − L11^12 (L21^2)^−1 L21^1
K12 = [L12^1 − L11^12 (L21^2)^−1 L22,  L11^12 (L21^2)^−1]
K21 = [L11^21 − L11^22 (L21^2)^−1 L21^1;  −(L21^2)^−1 L21^1]
K22 = [L12^2 − L11^22 (L21^2)^−1 L22,  L11^22 (L21^2)^−1;  −(L21^2)^−1 L22,  (L21^2)^−1]
This equation is solved to obtain the testee quantities a2 and b2 based on the
knowledge of the test stimuli u and the measured results y. Whether a particular
testee component is fault free or not is determined by whether the results obtained from
solving the pseudo-circuit equations agree with the expected behaviour described by
Z2. For ideal fault-free behaviour, the two values of b2 should be identical. However,
there will be an allowable tolerance, so the test is whether the difference between
the two vectors is within a certain tolerance band. Remember that the tester/testee
partition was done without knowledge of whether the components were faulty or fault
free and the algorithm operates on the assumption that all the components in the tester
group are fault free. Therefore, it is unlikely that this first pass of the ST algorithm
will provide a reliable diagnosis result. However, there will be some components in
the testee group which can be reliably said to be fault free. These can therefore be
moved into the tester partition group (being swapped with other circuit components)
and the ST algorithm re-run. Further testee components will be identified as fault free
and the iterative process continues until it is known that all the components in the
tester group are indeed fault free, at which point the diagnosis is known to be reliable
and the process is complete.
It should be noted that strictly this algorithm is only valid for parametric faults.
If catastrophic faults (short- or open-circuit) are present then this will change the
topology of the original circuit and original graph and tree/co-tree definitions will
be in error. However, it will be seen in the next section that a hierarchical extension
to this process can indeed diagnose catastrophic faults provided they are completely
within a grouped subcircuit.
4.3
Hierarchical techniques
It can be seen from the descriptions in the previous section that for both SBT and SAT
diagnosis approaches, the computational effort increases enormously with increasing
circuit size. In the SBT case, as the number of components increases, so does the
number of possible faults and therefore the number of simulations required in order
to derive a sufficiently extensive and reliable fault dictionary. In SAT approaches, as
these are often matrix-based calculations, the size of the vectors and matrices grows
proportionally with the complexity of the CUT, but the processing increases at a
greater rate, particularly when matrix inversions are required.
In both cases, the computational burden can be made more tractable by the use of
hierarchical techniques. This basically takes the form of grouping components into
subcircuits and treating these subcircuits as entities in their own right, thus effectively
reducing the total number of components. This can be extended to a number of
different levels, with larger groupings being made, perhaps including the subcircuits
into larger blocks. The diagnosis can be implemented at different levels and if required
the blocks can be expanded back out to pinpoint a fault that had been identified with
a particular block.
A number of different approaches to hierarchical diagnosis have been proposed,
for both SBT and SAT techniques (and also combinations of the two). Some of these
are described in the remainder of this chapter. Although not an exhaustive treatment,
it highlights some of the more important procedures described in the literature in
recent years.
4.3.1
We will look first at the hierarchical extensions based on SAT approaches. Some
of these are based on the ST and CCM combined system described in the previous
section. There have also been approaches which make use of sensitivity analysis and
also neural networks to achieve the diagnosis process.
4.3.1.1 Extensions using the ST algorithm
A very straightforward approach to adapting the ST/CCM to hierarchical applications
is described in Reference 10. Here the subcircuit grouping is performed and the
subcircuit is then effectively treated as a black box and is represented by a section
of graph corresponding to the terminals of the subcircuit. For example, an operational
Figure 4.2 An operational amplifier subcircuit represented as a black box with terminals Vip, Vin, Vo, VDD and VSS
(4.13)
The hierarchical component transfer matrix Zhier is part of the overall transfer
matrix Z.
Clearly, by adopting this approach, the size of the overall circuit graph can be
considerably reduced from the flat circuit representation and the matrix solution
problem becomes more tractable. However, there are a number of complications
arising from this approach. The next step of the CCM algorithm is to partition the
graph into a tree/co-tree pair. Recalling the definition of the a and b vectors
from Equation (4.1), when considering a hierarchical component, its
constituent edge currents and voltages must be part of the a and b vectors in the
same way. An optimal tree-generation algorithm was proposed in Reference 8 for
non-hierarchical circuits. It consists of ordering the various components by their
type (voltage sources, capacitors, resistors, inductors and current sources); the earlier
components in the list are preferred tree components, the later ones preferred co-tree
components. This preference list has to be adapted to take account of the hierarchical
components as well. As described in Reference 11, this places the hierarchical tree
edges between the voltage sources and capacitors, and the hierarchical co-tree edges
between the inductors and current sources, in the preferred listing. In order to prioritise
components in the same class, the edge weight of each component is calculated as the
sum of other edges connected to the edge under consideration. Components with equal
weight are further prioritised by parameter value.
Figure 4.3 Example binary partition tree (BPT) with blocks B1–B5
Hk = fk(s, X, H1, . . . , Hk−1)
H = f(s, X, H1, . . . , Hk)    (4.14)

The final expression gives the transfer function H of the overall circuit, where s is the
Laplace variable and X the set of component parameters.
A fault diagnosis approach using this sort of symbolic method was described
in Reference 16, in which single and double parametric faults were
simulated. The simulation of a large-scale circuit was shown to be 15 times
faster using the sequence of expressions (SOE) than a traditional numerical simulation.
Further improvements in speed are described in Reference 13, which include: (i) only
re-evaluating the part of the SOE that is influenced by a parameter variation; and
(ii) optimum choice of the BPT so that the number of expressions influenced by a
parameter is minimized.
These two methods are now described in detail. Method (i) derives from
Reference 17, in which the SOE approach was applied to sensitivity analysis. In order
to identify the equations of the SOE that are influenced by a particular parameter x,
use is made of a graphical technique: the dependencies implied in the SOE are
represented by connecting arrows. As an example, consider the SOE equation system
given in Equation (4.15) and the corresponding expression graph given in Figure 4.4.
H = Hk/Hk−2
Hk = Hk−1 + Hk−2
Hk−1 = 3Hk−5 + 3
Hk−2 = 2Hk−4
..
.
H4 = H3/H1
H3 = 5H2
H2 = x2
H1 = x1    (4.15)
Each expression in the SOE is represented by a node, Ha and the dependency of
Ha on Hb is represented by an edge (Ha , Hb ) from vertex Ha to Hb . As the SOE is
constructed so that there are only ever backward dependencies, the resulting graph
has no loops, that is, it is acyclic and is referred to as a directed acyclic graph (DAG)
[17]. The final term H is called the root node and refers to the overall transfer function
of the system and the leaf nodes of the system refer solely to the circuit parameters,
for example, H1 and H2 . The DAG is then used to accelerate the fault simulation
in the following fashion. Given a particular parameter of interest, x, we follow the
DAG paths in the opposite direction to the arrows, that is, bottom-up, from the leaf
node to the root node.

Figure 4.4 Expression graph (DAG) for the SOE of Equation (4.15)

Figure 4.5 DAG representation corresponding to the BPT of Figure 4.3

In this way, only the expressions that are influenced by x are
re-evaluated, potentially leading to great savings in computation time.
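The bottom-up traversal of method (i) can be sketched as follows. The dependency sets below follow the lower levels of the toy SOE in Equation (4.15); all names are illustrative and not taken from the referenced implementation:

```python
# Sketch of method (i): identify and re-evaluate only the SOE expressions
# influenced by a parameter. The toy SOE is H3 = 5*H2, H4 = H3/H1, with a
# single root H depending on H4 (an assumption for brevity).

# For each node, store the set of expressions that depend on it.
dependents = {
    "H1": ["H4"],        # H4 = H3 / H1
    "H2": ["H3"],        # H3 = 5 * H2
    "H3": ["H4"],
    "H4": ["H"],
    "H": [],
}

def influenced_by(leaf):
    """Walk bottom-up from a leaf to the root, collecting every
    expression that must be re-evaluated when the leaf changes."""
    seen, stack = set(), [leaf]
    while stack:
        node = stack.pop()
        for parent in dependents[node]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(sorted(influenced_by("H2")))  # only H3, H4 and H are re-evaluated
```

A change in x2 (leaf H2) thus touches three expressions instead of the whole SOE.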
For method (ii), additional acceleration of computation can be made by reducing
the number of expressions influenced by a parameter. The aim is to minimize the average number of influenced expressions per parameter, resulting in an average reduction
in the computation cost. The number of expressions influenced by a parameter is the
sum of the lengths of the paths in the DAG that connect from the root node to the leaf
node of the parameter under consideration. Therefore, the aim of this method is to
minimize the average length of the DAG paths from root to leaves. Clearly the SOE
represents the functionality of the circuit, so this cannot be altered itself. However, the
way in which the circuit is hierarchically partitioned can be altered and it is through
this choice that the optimization is achieved. A heuristic solution to this problem was
introduced in Reference 18 for sensitivity analysis. It relies on the fact that the DAG
and the BPT are strongly related. As an example, consider the DAG representation
that is related to the BPT illustrated in Figure 4.3 as shown in Figure 4.5.
For each node in the BPT there is a sample set of equations from the SOE and
therefore a corresponding set of nodes from the DAG. Similarly, as the BPT represents
the dependency between different hierarchical blocks through the connecting directed
edges, there is a close similarity between the paths in the BPT and paths in the DAG.
Therefore, for most typical circuit structures, the length, lpt, of a path in the BPT

Figure 4.6 A totally unbalanced and a maximally balanced binary partition tree

and the length, lDAG, of the corresponding path in the DAG are proportional to each
other, lDAG ∝ lpt. Therefore, minimizing lpt minimizes lDAG. We have now reduced
the problem to determining the BPT that will minimize the average value of lpt . The
solution to this comes from graph theory, where it is known that the solution is a
maximally balanced tree structure, as illustrated in Figure 4.6.
Here the two extremes of balance for a tree structure are illustrated, a totally
unbalanced tree and a maximally balanced partition tree. The respective average
lengths of the paths are given by
lpt(unbalanced) = (n + 1)/2 − 1/n        (4.16)

lpt(balanced) = log2 n        (4.17)
where n is the number of leaves of the tree (number of circuit parameters). Therefore, a
maximum improvement of O(n)/O(log2 n) can be achieved by choosing a maximally
balanced BPT as the basis for the SOE analysis.
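The two average path lengths can be checked numerically. The brute-force enumeration below assumes the usual depth profile of a totally unbalanced binary tree and is purely illustrative:

```python
import math

def lpt_unbalanced(n):
    # Equation (4.16): average root-to-leaf path length of a totally
    # unbalanced binary partition tree with n leaves.
    return (n + 1) / 2 - 1 / n

def lpt_balanced(n):
    # Equation (4.17): average path length of a maximally balanced tree.
    return math.log2(n)

def lpt_unbalanced_enum(n):
    # Brute-force check: an unbalanced binary tree has leaves at depths
    # 1, 2, ..., n-1 plus one extra leaf at depth n-1.
    depths = list(range(1, n)) + [n - 1]
    return sum(depths) / n

for n in (8, 64, 1024):
    assert abs(lpt_unbalanced(n) - lpt_unbalanced_enum(n)) < 1e-12
    print(n, lpt_unbalanced(n), lpt_balanced(n))
```

For n = 1024 parameters the average path shrinks from about 512 to 10, the O(n)/O(log2 n) improvement quoted above.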
Having established the advantages of the hierarchical approach to tolerance/sensitivity analysis and fault simulation yielded by symbolic analysis, both SAT
and SBT applications can now be envisioned. Although SBT approaches are further
detailed in the subsequent section of this chapter, both applications of symbolic
analysis will be outlined here.
In respect of SAT algorithms, the symbolic approach to the calculation of sensitivities is directly applicable to the method of diagnosis described in Section 4.2.2.2.
∂Hi/∂x1 = Σ(Hi,Hj)∈DAG (∂Hi/∂Hj)(∂Hj/∂x1)
⋮
∂H/∂x1 = Σ(H,Hj)∈DAG (∂H/∂Hj)(∂Hj/∂x1)        (4.18)
The summing condition (Hi, Hj) ∈ DAG ((Hi, Hj) = edge from Hi to Hj) is a consequence of the fact that the edges represent the explicit dependencies between the
expressions, that is

(Hi, Hj) ∉ DAG  ⇒  ∂Hi/∂Hj = 0        (4.19)

∂H/∂Hj = Σ(Hi,Hj)∈DAG (∂H/∂Hi)(∂Hi/∂Hj)
⋮
∂H/∂H1 = Σ(Hi,H1)∈DAG (∂H/∂Hi)(∂Hi/∂H1)        (4.20)
As the leaf expressions Hn of the SOE correspond to a circuit parameter, then the
sensitivities of the network function are given by the partial derivatives of H with
respect to the leaf expressions:
sen(H, xn) = ∂H/∂xn = ∂H/∂Hn        (4.21)
Therefore, evaluating Equation (4.20) generates all the sensitivities in parallel. Further
details of the method are given in Reference 6. As one sensitivity term is generated
for each leaf node of the DAG and as the number of leaf nodes is expected to increase
linearly with increasing circuit complexity [14, 17], this indicates that the computational expense of this parallel system is expected to grow only linearly with circuit
complexity.
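As a sketch of the top-down accumulation in Equation (4.20), the following applies the chain rule over a toy DAG (H4 = H3/H1, H3 = 5H2); the node set and parameter values are assumptions for illustration only:

```python
# Top-down (root-to-leaf) accumulation of dH/dHi over the DAG, so that all
# sensitivities sen(H, xn) = dH/dHn emerge in one pass over the graph.

x1, x2 = 2.0, 3.0
H1, H2 = x1, x2
H3 = 5.0 * H2
H4 = H3 / H1          # overall transfer function H of the toy system

# Local partial derivatives dHa/dHb attached to each DAG edge (Ha, Hb).
local = {
    ("H4", "H3"): 1.0 / H1,
    ("H4", "H1"): -H3 / H1**2,
    ("H3", "H2"): 5.0,
}

# Process nodes top-down (reverse topological order of the SOE).
dH = {"H4": 1.0}      # dH/dH for the root itself
for node in ("H4", "H3", "H2", "H1"):
    for (a, b), d in local.items():
        if a == node:
            dH[b] = dH.get(b, 0.0) + dH[node] * d

print(dH["H1"], dH["H2"])  # sensitivities of H w.r.t. x1 and x2
```

Since H = 5x2/x1 here, the single pass yields ∂H/∂x1 = −5x2/x1² = −3.75 and ∂H/∂x2 = 5/x1 = 2.5, matching direct differentiation.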
In respect of the SBT approach, the symbolic approach can be applied to the
generation of a fault dictionary through fault simulation. The process comprises the
following steps:
1. The circuit is divided into a hierarchical system employing a maximally
balanced BPT.
2. The SOE for the system is established.
3. Using the nominal circuit parameters (leaf nodes), the nominal SOE is
evaluated to yield the nominal value of the transfer function H.
4. For each fault simulation, the parameter under consideration is changed to its
faulty value and also the corresponding leaf node is allocated a token.
5. Proceeding bottom-up through the graph, each node that has a token, passes the
token on to all the predecessor DAG nodes and all the respective expressions
are re-evaluated.
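The five steps above can be sketched as follows. The SOE, parameter values and fault list are invented for illustration and do not come from the referenced work:

```python
# Sketch of the fault-dictionary flow using a toy SOE (H3 = 5*H2,
# H4 = H3/H1); all numbers are assumptions.

nominal = {"H1": 2.0, "H2": 3.0}           # leaf nodes = circuit parameters
soe = {                                    # node -> (expression, operands)
    "H3": (lambda v: 5.0 * v["H2"], ["H2"]),
    "H4": (lambda v: v["H3"] / v["H1"], ["H3", "H1"]),
}
order = ["H3", "H4"]                       # bottom-up topological order

# Steps 1-3: with the BPT chosen and the SOE built, evaluate the
# nominal transfer function once.
vals = dict(nominal)
for node in order:
    expr, _ = soe[node]
    vals[node] = expr(vals)

def simulate_fault(leaf, faulty_value):
    # Steps 4-5: set the faulty value, token the leaf, then walk
    # bottom-up re-evaluating only expressions that receive a token.
    v, tokens = dict(vals), {leaf}
    v[leaf] = faulty_value
    for node in order:
        expr, operands = soe[node]
        if tokens.intersection(operands):
            v[node] = expr(v)
            tokens.add(node)               # token passed up the DAG
    return v["H4"]

dictionary = {(leaf, val): simulate_fault(leaf, val)
              for leaf, val in [("H2", 6.0), ("H1", 4.0)]}
print(dictionary)  # {('H2', 6.0): 15.0, ('H1', 4.0): 3.75}
```

Note that the fault on H1 re-evaluates only H4, while H3 keeps its nominal value, which is exactly the saving that steps 4 and 5 provide.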
4.3.2
As outlined in Section 4.2.1, the main approach to SBT has been through the use
of fault dictionaries. There is a requirement for constructing a database of responses
from a set of simulations, each one introducing a particular fault into the circuit.
The main problem comes in selecting a suitable set of faults to examine, which is
comprehensive enough to provide a reliable database and yet which is still within
a reasonable computational effort. As the circuit complexity grows, the number of
faults to be simulated rises and so having a hierarchical approach to the problem can
ease the computational requirements.
One group which has been pioneering in this particular area is based at Georgia
Institute of Technology in Atlanta, GA, USA and this section will describe some of
this group's work. One main publication that introduced this work is Reference 2,
which describes the development and functioning of the MiST PROFIT (Mixed-Signal
Test Program for Fault Insertion and Testing) software. This has been implemented by
Cadence Design Systems in their IC software suite. The software includes hierarchical
fault simulation, fault clustering and hierarchical diagnosis.
The basis of the hierarchical fault modelling approach is illustrated in Figure 4.7.
Here the circuit consists of N levels of hierarchy, with the highest level representing
the complete assembled system. Level 1, the lowest level, consists of the leaf cells.
Depending on the nature of the circuit, these may be at the transistor level or possibly
subcircuit (e.g., op-amp) level. In any event, the concept of the LRU is used in
this approach, which may or may not coincide with the leaf cells. For example, in
Figure 4.7 where the LRUs are indicated by shaded boxes, module A is designated as
a LRU, but is not at level 1 of the hierarchy.

Figure 4.7 Hierarchical fault model with levels 1 to N; module specifications Spec(1), . . . , Spec(n) at each level, with LRUs shown shaded

The point about LRUs in this approach is that they represent the lowest point in the hierarchy to which the diagnosis will be
performed. This is a practical consideration from the point of view of being able to
repair a faulty circuit or system.
The specifications for each level of the hierarchy are indicated in Figure 4.7. At
the top level there are n specifications, at the other levels there are varying numbers
of specifications appropriate to the different levels of abstraction. The key to this
approach though is the relationship between the specifications at various levels and,
in particular, how faults are propagated from lower to higher levels. This is performed
through behavioural modelling and is the pivotal aspect of the approach. A fault at the
top level of the hierarchy can be represented by one or more out of range values for the
spec(1), . . . , spec(n) or may occur because of a structural fault in the interconnections
between the modules at the N−1 level. Therefore, knowledge of the faulty values of
the spec() parameters and the structural interconnection faults must be known from
the basic faults introduced in order to form the fault dictionary. The fault simulation
process must therefore be a bottom-up approach.
Starting with the leaf cells, which are the basic building blocks of the circuit,
the specifications for these cells are known in terms of nominal values and tolerance
ranges. A value outside of the accepted range, whether it represents a parametric
deviation (soft fault) or deviation to a range extremity (hard fault) can be introduced
into the fault modelling process. By simulation, the effect of the introduced faults
can be quantified. These effects now have to be translated into the next level of the
hierarchy via behavioural modelling of the module they affect. However, during this
process, the concept of fault clustering can be introduced. Suppose two different faults
at the leaf cell level give rise to substantially the same simulated response (within a
specified bound). There is no need to translate behavioural models of the individual
faults: a single model will suffice for both. There is no loss of diagnostic
power here (in terms of fault location) as the two (or more) faults that give rise to the
characteristic response originate in the same cell.
During the fault simulation process, there are two possible approaches to fault
propagation. First, injection of a chosen fault into the leaf cell and propagation of
the results of the fault into the higher levels of the hierarchy or, second, computation
4.3.3
The group at Georgia Tech have further extended their diagnosis approach for complex
AMS systems by implementing both SBT and SAT approaches in a combined method
[21]. This method aims at both fault location and fault value evaluation, using the
fault dictionary SBT approach for the fault location and an SAT approach for the
fault value evaluation. As a running illustration in Reference 21, a biquadratic filter is
considered and three levels of hierarchy are used: the top level being the overall filter
circuit, the next level down consists of op-amp, resistor and capacitor components,
and the lowest level considers the nine transistors that comprise each op-amp circuit.
This is not the only circuit that the authors have used the diagnosis tools on, but it
provides a useful example system.
The first stage is the hierarchical fault modelling, which follows along the same
lines as described in the previous section. At the topmost level the functional block
comprising the filter is characterized by a magnitude response for the transfer function at three separate frequencies. At the next hierarchical level down the op-amps are
characterized by two parameters, the voltage gain, Av and the gain-bandwidth product
(GBW), while the resistors and capacitors are simply characterized by their resistance
and capacitance respectively. Finally, at the lowest hierarchical level, the MOSFET
devices that comprise the op-amps are characterized by two parameters, the threshold
voltage, Vth and the width/length dimension ratio of the gate, W /L. The leaf cells of
the hierarchical system are the resistors, capacitors and transistors and these are also
defined as the LRUs. Therefore, LRUs exist at two different levels of the hierarchical structure. In the construction of the fault dictionary, the process starts with the
introduction of a series of faults into the transistors. These translate into variations of
the two parameters Vth and W /L, which in turn, through fault simulation, translate
into variations of the op-amp parameters Av and GBW. Once all the transistor level
faults have been simulated and translated into the Av , GBW specification space for
the op-amp, fault clustering can then take place. This produces a reduced set of fault
syndromes that will be entered into the fault dictionary. Reference 21 details certain
rules to be followed in the fault clustering process to provide a critical set of fault
syndromes so that complete diagnostic accuracy can be ensured.
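A minimal sketch of fault clustering in the (Av, GBW) specification space follows; the signatures, bounds and greedy strategy are assumptions for illustration, not the clustering rules of Reference 21:

```python
# Illustrative fault clustering: transistor-level faults whose simulated
# op-amp signatures (Av, GBW) fall within a specified bound of each other
# are merged into a single fault syndrome.

faults = {                      # fault name -> (Av [dB], GBW [MHz])
    "M1_Vth_high":  (58.0, 4.1),
    "M1_WL_low":    (58.3, 4.0),   # nearly the same signature as above
    "M3_Vth_low":   (71.0, 9.5),
    "M5_WL_high":   (40.0, 2.0),
}

def cluster(faults, bound=(1.0, 0.3)):
    """Greedy single-pass clustering within per-parameter bounds."""
    clusters = []               # list of (representative signature, members)
    for name, (av, gbw) in faults.items():
        for rep, members in clusters:
            if abs(av - rep[0]) <= bound[0] and abs(gbw - rep[1]) <= bound[1]:
                members.append(name)
                break
        else:
            clusters.append(((av, gbw), [name]))
    return clusters

for rep, members in cluster(faults):
    print(rep, members)
```

The two M1 faults collapse into one syndrome with no loss of diagnostic power, since both originate in the same leaf cell.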
The fault propagation process is then continued up through the hierarchy, based on
the critical fault syndromes, again by simulation at the next level of hierarchy. In this
example case the next level is in fact the top level, consisting of the filter circuit. Here
the fault syndromes from the Av , GBW specification space for the various op-amps
4.4
Conclusions
4.5
References
Chapter 5

DFT and BIST techniques for analogue and mixed-signal test
5.1
Introduction
The continuous decrease in the cost to manufacture a transistor, mainly due to the
exponential decrease in the CMOS technology minimum feature length, has enabled
higher levels of integration and the creation of extremely sophisticated and complex
designs and systems on chip (SoCs). This increase in packing density has been coupled
with a cost-of-test function that has remained fairly constant over the past two decades.
In fact, the Semiconductor Industry Association (SIA) predicts that by the year 2014,
testing a transistor with a projected minimum length of 35 nm might cost more than
its manufacture [1].
Many reasons have contributed to a fairly flat cost-of-test function over the past
years. Although transistor dimensions have been shrinking, the same cannot be
said about the number of input/output (I/O) operations needed. In fact, the increased
packing density and operational speeds have been inevitably linked to an increased
pin count. First, maintaining a constant pin-count-to-bandwidth ratio can be achieved
through parallelism. Second, the increased power consumption implies an increased
number of dedicated supply and ground pins for reliability reasons. Third, the
increased complexity and the multiple functionalities implemented in today's SoCs
entail the need for an increased number of probing pins for debugging and testing purposes. All the above-mentioned reasons, among others, have resulted in an increased
test cost.
Testing high-speed analogue and mixed-signal designs, in particular, is becoming
a more difficult task, and observing critical nodes in a system is becoming increasingly
challenging. As the technology keeps scaling, especially past the 90 nm node,
metal layers and packing densities are increasing, while signal bandwidths
and rise times extend beyond the gigahertz range. Viewing tools such as wafer or
5.2
Background
The standard test methodologies for testing digital circuits are simple, consisting
largely of scan chains and automatic test pattern generators, and are usually used to test
for catastrophic and processing/manufacturing errors. In fact, digital testing including
digital BIST has become quite mature and is now cost effective [2, 3]. The same
cannot be said about analogue tests, which are performed for a totally different reason:
meeting the design specifications under process variations, mismatches and device
loading effects. While digital circuits are either good or bad, analogue circuits are
tested for their functionality within acceptable upper and lower performance limits as
shown in Figure 5.1. They have a nominal behaviour and an uncertainty range. The
acceptable uncertainty range and the error or deviation from the nominal behaviour are
heavily dependent on the application. In today's high-resolution systems, it could well
be within 0.1 per cent or lower. This makes the requirements extremely demanding
on the precision of the test equipments and methods used to perform those tests.
Added to this problem is the increased test cost when testing is performed after the
integration of the component to be tested into a bigger system. As a rule of thumb, it
costs ten times more to locate and repair a problem at the next stage when compared
to the previous one [4]. Testing at early design stages is therefore economically
DFT and BIST techniques for analogue and mixed-signal test 143
Figure 5.1 Test outcomes for (a) a digital function (good or bad) and (b) an analogue function, judged good within lower and upper limits
beneficial. This paradigm where, early on in the design stages, trade-offs between
functionality, performance and feasibility/ease of test are considered has come to be
known as design for testability (DfT).
Ultimately, one would want to reduce, if not eliminate, the test challenges as
semiconductor devices exhibit better performance and higher level of integration. The
most basic test set-up for analogue circuits consists of exciting the device under test
(DUT) with a known analogue signal such as a d.c., sine, ramp, or arbitrary waveform, and then extracting the output information for further analysis. Commonly,
the input stimulus is periodic to allow for mathematically averaging the test results,
through long observation time intervals, to reduce the effect of noise [5]. Generally,
the stimulus is generated using a signal generator and the output instrument is a root
mean square (RMS) meter that measures the amount of RMS power over a narrow but
variable frequency band. A preferred test set-up is the digital signal processing (DSP)-based measurement for both signal generation and capture. Most, if not all, modern
test instruments rely on powerful DSP techniques for ease of automation [6] and
increased accuracy and repeatability. Most mixed-signal circuits rely on the presence
of some components such as a digital-to-analogue converter (DAC) and an analogue-to-digital converter (ADC). In some cases, it is those components themselves that
constitute the DUT. Testing converters can be achieved by gaining access to internal nodes through some analogue switches (usually CMOS transmission gates). The
major drawback of such a method is the increased I/O pin count and the degradation
due to the non-idealities in the switches, especially at high speed, even though some
techniques have been proposed to correct for some of these degradation effects [7].
Nonetheless, researchers looked to define a mixed-signal test bus standard compatible
with the existing IEEE 1149.1 boundary scan standard [8] to facilitate the testing of
mixed-signal components. One of the earliest BIST schemes to be devised was a go/no-go
test for an ADC [9]. The technique relies on the generation of an analogue ramp signal, and a digital finite-state machine is used to compare the measured voltage to the
expected one. A decision is then made about whether or not the ADC passes the test.
While not a major drawback on the functionality of the devised BIST, the proposed
test technique in Reference 9 relies on an untested analogue ramp generation that constitutes a drawback on the overall popularity of the method. An alternative approach
would therefore be to devise signal generation schemes that can be controlled, tuned
and easily transferred to and from the chip in a digital format. Several techniques have
been proposed for on-chip signal generation and they are the subject of Section 5.3.
Figure 5.2 A DSP-based BIST arrangement: signal generator, DAC with anti-aliasing filter, multiplexer for the analogue input and output, and ADC around the DSP core
Here, it suffices to mention that with the use of sigma-delta (ΣΔ)-based schemes,
it is possible to overcome the drawback of the analogue ramp, as was proposed by
Toner and Roberts [5] in another BIST scheme, referred to as mixed-analogue-digital
BIST (MADBIST). The method relies on the presence of a DAC and an ADC on a
single integrated circuit (IC), as is the case in a coder/decoder (CODEC), for example;
Figure 5.2 illustrates such a scheme.
In the MADBIST scheme, first the ADC is tested alone using a digitally generated
oscillator excitation. Once the ADC passes the test, the DAC is then tested using either
the DSP engine or the signal generator. The analogue response of the DAC is then
looped back and digitized using the ADC. Once the ADC and the DAC pass their
respective tests, they can be used to characterize other circuit behaviours. In fact, this
technique was used to successfully test circuits with bandpass responses as in wireless
communications. In Reference 10, MADBIST was extended to a superheterodyne
transceiver architecture by employing a bandpass oscillator for the stimulus
source which was then mixed down using a local oscillator and digitized using the
ADC. Once tested, the DAC and transmit path are then characterized using the loopback configuration explained above.
To further extend the capabilities of on-chip testing, a complete on-chip mixedsignal tester was then proposed in Reference 11, which is capable of a multitude
of on-chip testing functions, all the while relying on transferring the information
to/from the IC core in a purely digital format. The architecture format is generic and
is shown in Figure 5.3. The functional diagram is identical to that of a generic DSP-based test system. Its unified clock guarantees coherence between the generation and
measurement subsystems that is important from a repeatability and reproducibility
point of view, especially in a production testing environment. This architecture in
particular is the simplest among all those presented above and is versatile enough to
perform many testing functions as will be shown in Section 5.7.
Of particular interest to the architecture proposed in Reference 11, besides its
simplicity and its digital interfacing, is its potential to achieve a more economical
test platform in an SoC environment. SoC developers are moving towards integration
of third-party intellectual properties (IPs) and embedding the various IP cores in an
Figure 5.3 Generic on-chip mixed-signal tester: a common clock source drives a programmable periodic bit-stream generator (arbitrary waveform generator) and analogue reconstruction filter exciting the DUT, whose response is captured by a waveform digitizer with a programmable reference and passed to the DSP
architecture to provide functionality and performance. The SoC developers also have
the responsibility of testing each IP individually. While this is attractive for maintaining the
integration trend, the resultant test time and cost have inevitably increased as well.
Parallel testing can be used to combat this difficulty, thereby avoiding sequential
testing, where a significant amount of DUT, DUT interface and ATE resources remain
idle for a significant amount of time. However, incorporating more of the specialized
analogue instruments (arbitrary waveform generators and digitizers) within the same
test system is one of the cost drivers for mixed-signal ATEs, placing a bound on the
upper limit of parallelism that can be achieved. In fact, modest parallelism is already
in use today by industry to test devices on different wafers, using external probe cards
[12]. However, reliable operation of a high-pin-count probe is difficult, placing an
upper constraint on parallel testing, a constraint that does not appear able to keep up
with the increasing integration level, IC pin count, I/O bandwidth, and the complexity
and variety of the integrated IPs that the semiconductor industry has been facing.
Concurrent testing, which relies on devising an optimum strategy for DUT
and/or ATE resource utilization to maintain a high tester throughput, can help offset some of the test-time cost that is due to idle test resources. The shared-resource
architecture available in today's tester equipment cannot support on-the-fly reconfiguration of the pins, periods, timing, levels, patterns and sequencing of the ATE. On
the other hand, embedded or BIST techniques can improve the degree of concurrency
significantly. Embedded techniques, such as the one proposed in Reference 11, benefit
from an increased level of integration due to the mere fact of technology scaling that
allows multiple embedded test cores to be integrated in critical locations. Manufacturing
cost, bandwidth limitation and area overhead all scale favourably with the technology
evolution, allowing parallelism and at-speed tests to be exploited to a level that could
potentially track the trend in technology/manufacturing evolution.
Before presenting the architecture and its measurement capability in more
detail, a description of some of the most important building blocks that led to the
implementation of such an architecture is given first.
Figure 5.4 Conventional analogue signal generation using tuned or relaxation oscillator circuits
Figure 5.5 Direct digital frequency synthesis: a W-bit phase accumulator addresses a sine ROM whose D-bit words drive a DAC followed by an analogue filter
5.3
Signal generation
Conventional analogue signal generation relies on tuned or relaxation oscillator circuits, as shown in Figure 5.4. The problem with this approach is that it is not suitable
as an on-chip solution, DfT or BIST technique: first, such circuits are sensitive to process variations, since their amplitude and frequency depend on absolute component values;
second, they are inflexible, difficult to control and do not allow multi-tone signal
generation; and finally, their quality depends largely on the quality factor, Q, of
the reactive components unless piezoelectric crystals are used.
5.3.1
Direct digital frequency synthesis
An early signal generation method that is more robust and flexible is known as the
direct digital frequency synthesis (DDFS) method [13] whereby a digital bitstream is
first numerically created and then converted to analogue form using a DAC followed
by a filtering operation. One such form is shown in Figure 5.5.
The read-only memory (ROM) can store values to D-bit accuracy and can have up to 2^W
words recorded. The phase accumulator enables the user to scan the ROM (digitally)
with different increments, thereby changing the resultant sine-wave frequency, fout,
according to

fout = M fs / 2^W        (5.1)
Figure 5.6 Digital resonator-based generation feeding a D-bit DAC and analogue filter
where W is the number of bits at the output of the phase accumulator, M is the number of
complete sine-wave cycles and fs is the sampling frequency. The amplitude precision,
which is a function of D, the ROM word width, is then given by

A_DDFS = A_max / 2^(D+1)        (5.2)
The above method requires the use of a DAC which needs to be tested and characterized if it is to be used in a BIST. The number of bits required from the DAC is
dictated by the resolution required for the analogue stimulus, which is often multi-bit.
This, in turn, entails a large silicon area, sophisticated design and increased test time,
all of which are not desirable.
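Equations (5.1) and (5.2) and the phase-accumulator operation can be illustrated numerically; the parameter values below (W = 12, D = 8, fs = 1 MHz, M = 64) are assumed for the example:

```python
# Numeric check of Equations (5.1) and (5.2) plus a minimal W-bit phase
# accumulator scanning a sine ROM.
import math

W, D = 12, 8                    # phase-accumulator and ROM word widths
fs = 1.0e6                      # sampling frequency, Hz
M = 64                          # phase increment (sine cycles per 2**W samples)

fout = M * fs / 2**W            # Equation (5.1)
Amax = 1.0
resolution = Amax / 2**(D + 1)  # Equation (5.2): amplitude precision

rom = [round((2**(D - 1) - 1) * math.sin(2 * math.pi * k / 2**W))
       for k in range(2**W)]    # D-bit sine ROM, 2**W words

phase, samples = 0, []
for _ in range(2**W // M):      # one full output sine-wave cycle
    samples.append(rom[phase])
    phase = (phase + M) % 2**W  # W-bit accumulator wraps around

print(fout, resolution, max(samples))  # 15625.0 Hz, 1/512, 127
```

Doubling M doubles fout without touching the ROM, which is the flexibility that the tuned-oscillator approach lacks.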
5.3.2
Oscillator-based approaches
Figure 5.7 Oscillator-based (ΣΔ) signal generation: a digital resonator loop with 1-bit feedback selected through a multiplexer drives a DAC (STF = 1) and analogue filter

Figure 5.8
5.3.3
Memory-based signal generation
Figure 5.9 Multi-tone generation: N digital tone sources (ωi, Ai, φi) are combined in a multi-bit digital adder and converted by a DAC and low-pass filter to the analogue output
requirement [21]. N is chosen given a certain maximum memory length. The bitstream
is then generated according to a set of criteria such as the signal-to-noise ratio, dynamic
range, amplitude precision and so on.
The practicality of choosing the appropriate bitstream using the minimum hardware needed, while maintaining a required resolution in terms of amplitude, phase and
spurious-free dynamic range, was analysed in detail in Reference 22. Small changes
in the bitstream can lead to changes as large as 10 to 40 dB in the quality or resolution
of the signal. As a result, an optimization can be run to achieve the best resolution
for a given number of bits and a given hardware availability.
5.3.4
Multi-tones
5.3.5
Area overhead
An important criterion in any BIST solution is the area overhead it entails. While it is
argued that the area occupied by the test structure benefits from technology scaling,
especially in the case of digital implementations, it is always desired to minimize
the silicon area, and therefore cost, occupied by the test circuit. The memory-based
signal generation scheme presented in Section 5.3.3, which was seen to improve the
test stimulus generation capabilities from a repeatability point of view when compared
to its analogue-based stimulus counterpart, can be improved even further. Commonly,
the DUT has a front-end low-pass filter; an example would be an ADC with a preceding
anti-aliasing filter. In this case, the analogue reconstruction filter that follows the memory-based
bitstream generator can be removed altogether, relying instead on the built-in filtering operation
of the CUT [16]. This concept is illustrated graphically in Figure 5.10. Later, it will be
seen how this same area-savings concept can be applied to the testing of phase-locked
loops (PLLs).
Figure 5.10 Removing the reconstruction filter: (a) analogue source driving the DUT; (b) bitstream generator with explicit filter; (c) bitstream generator driving the DUT directly, relying on the circuit's built-in filtering
5.4
Signal capture
Testing in general comprises first sending a known stimulus and then capturing
the resultant waveform of the CUT for further analysis. As discussed previously,
the interface to/from the CUT is preferably in digital form to ease the transfer of
information. The previous sections discussed the reliable generation of on-chip test
stimulus. Signal generation constitutes just one aspect of the testing of analogue and
mixed-signal circuits. This section discusses the other aspect of testing; the analogue
signal capture.
The signal capture of on-chip analogue waveforms underwent an evolution. First,
a simple analogue bus was used to transport this information directly off chip through
analogue pads [23]. Later, an analogue buffer was included on chip to efficiently
drive the pads and interconnect paths external to the chip. This evolution is illustrated
graphically in Figure 5.11. In both the cases above, the information is exported off
chip in analogue form and is then digitized using external equipment. Perhaps a better
way to export analogue information is by digitizing it first. This led to the modification
shown in Figure 5.12, whereby the analogue buffer is replaced with a 1-bit digitizer or
a simple comparator. Here, too, the digitization is achieved externally; one possible
implementation, shown in Figure 5.12, uses a successive approximation register (SAR), with
external reference voltages feeding the comparator, commonly generated
using an external DAC.
Figure 5.11 Evolution of on-chip analogue access: analogue test buses (AT1, AT2) carry internal analogue signals (Ain to Aout) off chip, first directly and then through on-chip buffers, alongside the digital scan path (DTin to DTout through flip-flops)
Figure 5.12 Exporting a 1-bit digital output: the on-chip analogue buffer is replaced by a comparator, with external SAR logic and a DAC generating the reference voltage Aref
Another important evolution to the front-end sampling process is the use of undersampling. This becomes essential when the analogue waveform to be captured is very
fast. In general, capturing an analogue signal comprises sampling and holding the
analogue information first, and then converting this analogue signal into a digital
representation using an ADC. There exist many classes of ADCs each suitable for
a given application. Whatever the class of choice might be, the front-end sampling
in ADCs has to obey the Nyquist criterion; that is, information having a bandwidth,
BW, has to be sampled at a high enough sampling rate, fs, given
by 2BW. As the input information occupies a higher bandwidth, the sampling rate
has to increase correspondingly, making the design of the ADC equivalently harder,
as well as more area and power consuming. Instead, testing applications make use
as well as more area and power consuming. Instead, testing applications make use
of an important property of the signal to be captured and that is its periodicity. Any
signal that needs to be captured can be made periodic by repeating the triggering of
the event that causes such an output signal to exist using an externally generated,
Figure 5.13 Undersampling capture of a fast periodic waveform

Figure 5.14 Multi-pass capture: sampled-and-held CUT node voltages are compared against a programmable reference whose quantization level is stepped over successive passes
accurate and arbitrarily slow clock. Each time the external clock is triggered, it is also slightly delayed. This periodicity of the signal to be captured and the incremental delay in the external trigger give rise to an interesting capture method known as undersampling, illustrated in Figure 5.13. For that, a slowly running clock (slower than the minimum required by the Nyquist criterion) is used to capture the waveform, with the clock period slightly offset with respect to the input signal period. That is, if the clock period is T + ΔT (with ΔT ≪ T) and the input signal period is T, then the signal can be captured using a multi-pass approach with an effective time resolution of ΔT. This method has been demonstrated to be an efficient way of capturing high-frequency and broadband signals, where the input information bandwidth can be brought down in frequency, making the transport of this information off chip easier and less challenging, as was first demonstrated in the implementation of the integrated on-chip sampler in Reference 24.
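The equivalent-time capture described above can be sketched numerically. The following is a minimal behavioural model, assuming an ideal sampler and a perfectly periodic input; the function and variable names are illustrative, not taken from References 24 or 25:

```python
import math

def undersample(signal, T, dT, n_samples):
    """Equivalent-time (under)sampling of a periodic signal.

    The sampling clock runs slightly slower than the signal period
    (clock period = T + dT), so each successive sample lands dT further
    into the waveform; n_samples points then reconstruct one full
    period at an effective time resolution of dT.
    """
    Ts = T + dT
    phases = [(k * Ts) % T for k in range(n_samples)]
    values = [signal(k * Ts) for k in range(n_samples)]
    # Sort by phase to recover the waveform over a single period
    return sorted(zip(phases, values))

# A 1 GHz sine (T = 1 ns) captured with a 10 ps clock offset
T, dT = 1e-9, 10e-12
pts = undersample(lambda t: math.sin(2 * math.pi * t / T), T, dT, 100)
```

The 100 slow samples land on 100 distinct phases of the waveform, reconstructing one period at 10 ps resolution even though the sampling clock itself runs at roughly 1 GHz/(1 + ΔT/T).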
In order to also include the digitization on chip, a multi-pass approach was first introduced in Reference 25, whereby the undersampling approach is still maintained in the front-end sample-and-hold stage; this was then further demonstrated and improved in Reference 11 with the inclusion of the comparator and reference-level generator on chip. The top-level diagram of the circuit that performs such a function, with the corresponding timing and voltage diagram, is shown in Figure 5.14 and operates as described next.
The programmable reference is first used to generate one d.c. level. The sampled-and-held voltage of the CUT is then compared to this reference level and quantized using a 1-bit ADC (or simply a comparator). The next run through, the d.c. reference
5.5
An important feature in BIST techniques is the ability to measure fine time intervals for a multitude of purposes. Most high-speed mixed-signal capture systems today rely on undersampling. Undersampling allows the capture of high-frequency narrowband signals by shifting the signal components into a much lower frequency range (or bandwidth), which is then easily digitized with low-speed components. This was detailed in Section 5.4. The achieved results are largely a function of the resolution and accuracy attained by the time intervals. For that reason, many circuits capable of generating fine delays, and the corresponding circuits that allow for on-chip characterization of such circuit components, are summarized in this section.
5.5.1
Single counter
The simplest form of time measurement between two edges is through the use of a single counter triggered by a fast running clock, as shown in Figure 5.15. An N-bit register at the output acts as an N-bit counter. The number (or count) of clock edges that elapse between two data events (in Figure 5.15, the events are the rising edges of the start and stop signals) is computed using the N-bit counter. The output count is a digital representation of the time interval, ΔT.
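The counting scheme can be sketched behaviourally as follows; a simple model assuming ideal edges and an ideal clock (names are illustrative):

```python
import math

def counter_tdc(t_start, t_stop, f_clk):
    """Single-counter time measurement (behavioural sketch).

    Counts the rising clock edges (at k / f_clk) that fall between the
    start and stop events; the quantization step is one clock period,
    so the resolution improves directly with the clock frequency.
    """
    T_clk = 1.0 / f_clk
    first = math.ceil(t_start / T_clk)  # first edge at or after start
    last = math.ceil(t_stop / T_clk)    # first edge at or after stop
    return last - first

# A 2.5 ns interval measured with a 1 GHz clock: only the edges at
# 2 ns and 3 ns fall inside [1.2 ns, 3.7 ns), so the count is 2
count = counter_tdc(1.2e-9, 3.7e-9, 1e9)
```

The example makes the quantization visible: a 2.5 ns interval returns a count of 2 or 3 depending on where the edges fall relative to the clock.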
The resolution attained by this method is largely dependent on the clock frequency with respect to the time difference (or data) to be measured. The higher the frequency of the clock, the better the counter accuracy and the overall count resolution. As technology shrinks feature sizes, the time differences to be measured are decreasing. Intervals on the order of, or even shorter than, the period of the fastest clock that can be generated sometimes need to be measured. On the other hand, the task of generating clocks much faster than the data to be measured is becoming much more difficult and, in some cases, not feasible. As a result, better approaches are needed and are highlighted next.
Figure 5.15  Single-counter time measurement: the rising edges of the start and stop signals gate a counter driven by a fast clock, producing the count for the interval ΔT
Figure 5.16  Interpolation-based time-to-voltage converter: the pulse derived from Vin is integrated on a capacitor and the resulting d.c. level, Vout, is digitized by an ADC
5.5.2
One of the most basic building blocks in time measurement is the time-to-voltage converter, also known as the interpolation-based time-to-digital converter (TDC). The idea behind such a circuit is to convert the time difference between the edges to be measured into complementary pulses using appropriate digital logic. The pulse width is then integrated on a capacitor, C, as shown in Figure 5.16 [26]. The final d.c. value of the ramp (more accurately, the step size on the capacitor) is directly related to the pulse width. Using a high-resolution ADC, this d.c. value can then be digitized, thereby digitizing the time difference or data to be measured.
The disadvantage of the above method is that it relies on the absolute value of the capacitor, C. It also requires the design of a good ADC. While the ADC is required to digitize d.c. levels only, making its design task slightly easier than that of high-frequency
Figure 5.17  Pulse stretching: the capacitor is charged with current I during the input pulse and discharged with I/n, producing stretched pulses whose crossings of the threshold Vref are digitized by a TMU
ADCs, nonetheless, this ADC can be power hungry and its design can be a tedious and time-consuming task.
A better approach, insensitive to the absolute value of the capacitor, relies on the concept of charging and then discharging the same capacitor with currents that are scaled versions of each other. The system [27] is shown in Figure 5.17 and accomplishes two advantages: (i) it does not rely on the actual capacitor value, since the capacitor is now only used as a means of storing charge and then discharging it at a slower rate; and (ii) it performs pulse stretching, which makes the original pulse to be measured much larger, making the task of quantizing it a lot easier. In this case, a single-threshold comparator (1-bit ADC) can be used to detect the threshold crossing times. A relatively low-resolution time measurement unit (TMU) can then be used to digitize the time difference. The TMU can be a simple counter, as explained in Section 5.5.1, or one of the other potential TMUs that will be discussed next.
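The capacitor-independence argument can be verified with a few lines of arithmetic; a sketch under ideal current-source assumptions (the parameter names are illustrative):

```python
def stretch(dt, i_charge, n, c):
    """Pulse stretching by charge/discharge current scaling (sketch).

    The capacitor c is charged with i_charge for the pulse width dt and
    then discharged with i_charge/n back to its starting voltage.  The
    discharge must remove the same charge, so it takes n*dt: the pulse
    is stretched n times, independent of the actual capacitor value.
    """
    dv = i_charge * dt / c              # voltage gained during the pulse
    return dv * c / (i_charge / n)      # discharge time = n * dt

# A 100 ps pulse stretched 50x becomes 5 ns, regardless of c
t1 = stretch(100e-12, 1e-3, 50, 1e-12)
t2 = stretch(100e-12, 1e-3, 50, 4e-12)  # different capacitor, same result
```

Since Q = I·Δt = (I/n)·t_discharge, the capacitor value cancels; only the current ratio n sets the stretching factor.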
The techniques in References 26 and 27 can become power hungry if a narrow pulse is to be measured. Trade-offs exist in the choice of the biasing current, I, and the bit resolution of the ADC (for a given integration capacitor, C); the larger I is, the lower the ADC resolution required. However, as the pulse width decreases, in order to maintain the same resolution requirement on the ADC while using the same capacitor, C, the biasing current, and therefore the power dissipation, has to increase. In fact, for very small pulse widths, the differential pair might even fail to respond fast enough to the changes in the pulse. For that reason, digital phase-interpolation techniques offer an alternative to the analogue-based interpolation schemes.
5.5.3
Through the use of a chain of delays that delay the clock and/or data as they propagate down the chain, generation [28] and measurement of fine delays can be achieved. With the use of an edge-triggered D flip-flop (DFF), delayed clock or data edges can be obtained. Such schemes, unlike the analogue techniques that rely on an ADC, are known as sampling phase time measurement units, and fall more into the category of digital time measurement techniques.
Figure 5.18  Delay-line sampler: the data propagates down a chain of delay stages (τ1) and is sampled by DFFs clocked by CLK, with the outputs accumulated in counters Count_1 to Count_M
The operation of such a TDC is analogous to that of a flash ADC, where the analogue quantity to be converted into a digital word is a time interval. It operates by comparing a signal edge to various reference edges, all displaced in time. Typically, these devices measure the time difference between two edges, often denoted as the START and STOP edges. The START signal usually initiates the measurement while the STOP edge terminates it. Given that the delay through each stage is known a priori (which will require a calibration step), the final state of the delay line can be read through a set of DFFs, and this state is directly related to the time interval to be measured. Usually such delay lines have a limited time dynamic range. Some TDCs employ time-range extension techniques, which rely on counters for a coarse time measurement and on the delay lines for fine-interval digitization. This is analogous to the coarse/fine quantizers in ADCs. Other techniques include pulse stretching [29], pulse shrinking and time interpolation.
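The flash-like quantization can be sketched as follows; a behavioural model assuming an already calibrated, perfectly matched stage delay (names are illustrative):

```python
def flash_tdc(dt, tau, n_stages):
    """Flash-type delay-line TDC (behavioural sketch).

    The START edge propagates down a chain of buffers, each adding a
    known (calibrated) delay tau; the STOP edge snapshots the chain
    state into DFFs.  The stages the START edge has already traversed
    form a thermometer code for the interval dt.
    """
    code = [1 if dt >= (k + 1) * tau else 0 for k in range(n_stages)]
    return code, sum(code)   # thermometer code and its integer count

# A 170 ps interval with 50 ps stages over 8 stages
code, count = flash_tdc(170e-12, 50e-12, 8)
```

Like a flash ADC, resolution is fixed by the unit element (here the stage delay τ), and the dynamic range by the number of stages N·τ.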
The use of the above devices extends to applications such as laser ranging and
high-energy physics experiments. With the addition of a counter at the output, this
simple circuit can be used to measure the accumulated jitter of a data signal (Data)
with respect to a master clock (CLK), as shown in Figure 5.18.
The above circuit can be used for time resolutions down to the gate delay of the technology in which it is implemented. To overcome this limitation, a Vernier delay line (VDL) can be used.
5.5.4
In a VDL, both the data to be digitized or analysed and the clock signal are delayed, with two slightly offset delays, as shown in Figure 5.19. Using this arrangement, time resolutions as small as τres = (τ2 − τ1) can be achieved, provided that τ2 > τ1 (the two delays are sometimes also referred to as τs and τf, for slow and fast, respectively). In this
Figure 5.19  Vernier delay line: the data and clock propagate through chains with slightly offset delays τ1 and τ2, with the DFF outputs accumulated in counters Count_1 to Count_M
case and having a total of N delay stages, the time range that can be captured is given by τrange = N(τ2 − τ1).
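The resolution and range relations are simple enough to check numerically; illustrative cell delays are assumed:

```python
def vernier(tau2, tau1, n_stages):
    """Resolution and range of an N-stage Vernier delay line (tau2 > tau1).

    Per stage, the edge in the slower chain loses (tau2 - tau1) with
    respect to the edge in the faster chain, so the line quantizes time
    in steps of (tau2 - tau1) over N such steps in total.
    """
    res = tau2 - tau1
    return res, n_stages * res

# 105 ps versus 100 ps cells with 64 stages:
# 5 ps resolution over a 320 ps range
res, rng = vernier(105e-12, 100e-12, 64)
```

Note the trade-off this exposes: halving the delay offset doubles the resolution but also halves the range, unless the number of stages is doubled as well.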
Usually these delays are implemented using identical gates that are purposely slightly mismatched. Timing resolution of a few picoseconds can be achieved with this method, equivalent to a deep sub-gate-delay sampling resolution. VDL samplers have previously been used to perform time-interval measurements [30] and data recovery [31].
When Vernier samplers are used, data is latched at different moments in time, leading to synchronization issues that must be considered when interfacing the block with other units. Read-out structures exist, though, allowing for continuous operation and synchronization of the outgoing data [32]. For the purpose of jitter measurement, this synchronization block is not needed.
The circuit was indeed used for jitter measurement and implemented [33] in a standard 0.35-μm CMOS technology, achieving a jitter measurement resolution of τres = 18 ps. The RMS jitter was measured to be 27 ps and the peak-to-peak jitter was 324 ps. For jitter measurements, the same circuit can be configured with the addition of the appropriate counters, as shown in Figure 5.19.
Note that, in general, these delay stages are voltage controlled to allow for tuning ranges and, more often, are placed in a negative feedback arrangement known as a delay-locked loop (DLL), where the delay stages are a lot more robust to noise and jitter owing to the feedback nature of the implementation. It is worth mentioning that DLLs now rely almost exclusively on the linear voltage-controlled delay (VCD) cell introduced by Maneatis [34]. The linear aspect of the cell stems from the use of a diode-connected load in parallel with the traditional load. This gives the load of the cell a more linear characteristic, extending the linearity range of the delay cell. The biasing of these cells is also made more robust to supply noise and variations through the use of a uniform biasing circuit to generate both the N- and P-side biases. The same biasing is also used for all blocks, so that variations affecting one will affect the others in a uniform manner.
Figure 5.20  Single VDL stage: data and clock pass through slow (τs) and fast (τf) delay elements sampled by D–Q flip-flops (counters Count_1 to Count_N in the multi-stage arrangement); in the single-stage version, the data and clock recirculate and a counter accumulates the result
5.5.5
The disadvantages of the previously described VDL, namely (i) the increased number of stages for large time dynamic ranges, (ii) the matching requirements between the many stages and (iii) the area and power dissipation overheads, can be overcome with the use of a single-stage component-invariant VDL [35]. The proposed system is shown in Figure 5.20.
The single stage consists of two triggered delay elements, one triggered by the data and the other by the reference clock. The counter acts as a phase detector. This method was indeed implemented in a 0.18-μm CMOS technology using only standard cells, which facilitates the design task even further. The area occupied by a single stage of the VDL is 0.12 mm2, which is at least an order of magnitude less in area overhead when compared to other methods. The measured resolution of this circuit was 19 ps. The test time was approximately 150 ns/sample, for a clock running at 6.66 MHz. Note that the inverters in the feedback loop were implemented as VCD cells to allow for calibration and tuning.
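The recirculating principle behind the single-stage VDL can be sketched behaviourally; a model assuming ideal, jitter-free loops (the 150 ps/131 ps cell delays below are purely illustrative):

```python
def single_stage_vdl(dt, t_slow, t_fast):
    """Component-invariant single-stage VDL (behavioural sketch).

    The earlier edge recirculates through a slow delay element (t_slow)
    and the later edge through a fast one (t_fast < t_slow).  The later
    edge gains (t_slow - t_fast) per lap; a counter, acting as the
    phase detector, counts laps until it catches up, so
    dt is approximately count * (t_slow - t_fast).
    """
    early, late, count = 0.0, dt, 0
    while late > early:
        early += t_slow
        late += t_fast
        count += 1
    return count

# 19 ps per lap from 150 ps and 131 ps cells: a 100 ps skew needs 6 laps
laps = single_stage_vdl(100e-12, 150e-12, 131e-12)
```

Only two delay cells are exercised regardless of the interval measured, which is why the approach avoids the stage-matching and area penalties of a long Vernier chain, trading them for measurement time (one lap per resolution step).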
5.5.6
The delay-line structures presented in the previous sections can be referred to as digital-type jitter measurement devices. Recently, an analogue-type macro, shown in Figure 5.21, has been introduced; it acts as an on-chip jitter spectrum analyser [36]. The basic idea is to convert the time difference between the edges of a reference clock and the jittery clock to be measured into analogue form using a phase-frequency detector (PFD) followed by a charge pump. The voltage stored on the capacitor, which represents the time information of interest, is then digitized using an ADC. The speed of the proposed macro in Reference 36 is limited by that of the ADC, as well as by the ability of the PFD to resolve small time differences. A calibration step is usually required in such a design to remove any effects
Figure 5.21  Analogue jitter measurement macro: a phase-frequency detector compares the reference clock and the measured clock, and its Up/Down pulses drive a charge pump (Icp) whose output voltage Vx, precharged to Vdd/2, is digitized by an ADC (through an optional LPF)
of process–voltage–temperature variations. A jitter range of 200 ps in a 0.18-μm CMOS technology was demonstrated, with a sensitivity of 3.2 mV/ps.
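The charge-pump conversion at the heart of this macro reduces to V = I·Δt/C. A sketch with illustrative component values chosen to reproduce the quoted 3.2 mV/ps sensitivity (not the actual values of Reference 36):

```python
def cp_voltage(dt, i_cp, c, v_mid):
    """PFD + charge-pump time-to-voltage conversion (sketch).

    The PFD asserts UP (or DOWN) for the phase-error duration dt,
    during which the charge pump sources (or sinks) i_cp into the hold
    capacitor c, moving Vx away from its mid-rail precharge v_mid.
    The sensitivity is i_cp / c volts per second of phase error.
    """
    return v_mid + i_cp * dt / c

# 100 uA into 31.25 fF gives i_cp/c = 3.2e9 V/s, i.e. 3.2 mV/ps;
# a 10 ps phase error then moves Vx by 32 mV from its 0.6 V precharge
vx = cp_voltage(10e-12, 100e-6, 31.25e-15, 0.6)
```

The sensitivity trade-off is visible directly: a larger Icp/C ratio eases the ADC resolution requirement but shrinks the jitter range that fits within the supply rails.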
A similar idea was presented in Reference 37, but at the board level, in order to provide production-type testing of the timing jitter accumulated over many periods, and was applied to the board-level testing of data transceivers. No external jitter-free clock is needed as a reference, which makes the implementation more attractive. The clock to be measured is delayed, and both the jittery signal and its delayed version are then used to control a PFD–charge pump–ADC combination, with the jitter then being digitized using an ADC. The comparators in the ADC were implemented using a chain of inverters that were sized for different switching thresholds, acting therefore as a form of multi-bit digitization. The system was also used as a BIST technique to measure the jitter [38] and was experimentally verified in Reference 39. The measured jitter, accumulated over eight periods, on a 1 GHz clock was successfully tested and evaluated at 30–50 ps. The performance was then pushed slightly further in a more recent design, with detailed experimental results presented in Reference 40.
Later, an embedded eye-opening monitor (EOM) was successfully implemented in Reference 41. The purpose of such a monitor is to continuously compare the horizontal and vertical openings of the eye diagram of an incoming digital high-speed link, as illustrated conceptually in Figure 5.22. The horizontal measure gives information about the amount of time jitter present in the system, while the vertical one is related to the amplitude jitter of the system. Given some prespecified acceptable voltage and phase (time) limits, set by the application under consideration, that can be fed to the embedded EOM solution, a pass or fail is then generated by the system. The accumulated count of fails, also related to the bit error rate (BER) of the system, can be fed to an equalizer that is usually used to adaptively compensate, in a feedback mechanism, for the digital signal degradation. The circuit was experimentally verified
Figure 5.22  An example of the received eye diagram of a high-speed link, with acceptable openings shown in both the voltage (Vhigh/Vlow) and time (early/late) scales, defining the mask. Violations of those limits are detected by the EOM suggested in Reference 41
Figure 5.23  Time amplification circuit with internal nodes V1 and V2; the output Out depends on whether the signal edge or the reference edge arrives first
5.5.7
Time amplification
ΔV(t) = α Δt exp(t/τ)    (5.3)
where α is the conversion factor from the input time difference to the initial voltage difference at nodes V1 and V2, Δt is the time difference between the rising edges of the signal and reference and
Figure 5.24  Second time amplification method: two cross-coupled differential pairs (transistors M1–M4) driven by the input edges φ1 and φ2, with passive RC loads
τ is the device time constant. By measuring the time t from the moment the inputs switch to the moment the OR gate switches, Δt can be found.
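Assuming the exponential metastability relation ΔV(t) = αΔt·exp(t/τ), the gate switches when ΔV reaches its threshold, at t = τ·ln(Vth/(αΔt)), so the measured switching time can be inverted for Δt. A sketch with purely illustrative constants (α, τ and Vth are circuit-dependent and would be found by calibration):

```python
import math

def dt_from_switch_time(t_switch, v_th, alpha, tau):
    """Invert the metastability relation dV(t) = alpha*dt*exp(t/tau).

    The gate switches when the diverging node-voltage difference
    reaches the threshold v_th, i.e. at t = tau*ln(v_th/(alpha*dt));
    solving for the input time difference gives
    dt = (v_th/alpha)*exp(-t/tau).
    """
    return (v_th / alpha) * math.exp(-t_switch / tau)

# Round trip: a 2 ps input difference with alpha = 1e9 V/s, tau = 50 ps
alpha, tau, v_th = 1e9, 50e-12, 0.5
t_sw = tau * math.log(v_th / (alpha * 2e-12))   # predicted switch time
dt = dt_from_switch_time(t_sw, v_th, alpha, tau)
```

The exponential makes the scheme a logarithmic time amplifier: small changes in Δt produce large, easily measured changes in the switching time t.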
The circuit described above is compact area-wise, but its input time range is limited to only a few picoseconds. Its gain is also only at the single-digit level. Cascading might get around the latter problem.
A second method proposed for time amplification [43] is shown in Figure 5.24. The circuit consists of two cross-coupled differential pairs with passive RC loads attached. Upon arrival of the rising edges of φ1 and φ2, the amplifier bias current is steered around the differential pairs and into the passive loads. This causes the voltage at the drains of transistors M1 and M2 to be equal at a certain time, and that of M3 and M4 to coincide a short time later. This effectively produces a time interval proportional to the input time difference, which can then be detected by a voltage comparator.
The second time amplification method proposed in Reference 43, while more area and power consuming, works for very large input ranges, extending therefore the input time dynamic range. Its gain can also be at least an order of magnitude higher, using only a single stage. The circuit was built in a 0.18-μm CMOS technology and was experimentally shown to achieve a gain of 200 s/s for an input range of 5–300 ps, giving therefore an output time difference of 160 ns.
Time amplification, when thought of as analogous to the use of PGAs in ADCs, is the perfect block to precede a TDC: with a front-end time amplification stage, a low-resolution TDC can be used to obtain an overall high-resolution TMU.
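The PGA analogy can be made concrete with a short sketch; an ideal, linear time amplifier is assumed and the numbers are illustrative:

```python
def amplified_tdc_estimate(dt, gain, tdc_lsb):
    """Effect of a time amplifier in front of a coarse TDC (sketch).

    The input interval is amplified by `gain` before a TDC whose step
    is `tdc_lsb`; referring the TDC step back to the input gives an
    effective resolution of tdc_lsb / gain, just as a PGA relaxes the
    resolution required of the ADC behind it.
    """
    code = int(dt * gain / tdc_lsb)   # coarse TDC quantization
    return code * tdc_lsb / gain      # input-referred estimate

# A gain of 200 in front of a TDC with a coarse 1 ns step gives a
# 5 ps input-referred resolution; a 23 ps input quantizes to 20 ps
est = amplified_tdc_estimate(23e-12, 200, 1e-9)
```

In return, the TDC's input range must cover the amplified interval, so the amplifier's own dynamic range becomes the limiting specification.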
5.5.8
PLLs and DLLs are essential blocks in communication devices. They are used for clock de-skewing in clock distribution networks, on-chip clock synchronization, clock multiplication, and so on. These blocks are mainly characterized by their ability to
Figure 5.25  Jitter-transfer-function measurement of a PLL (phase detector, charge pump, low-pass filter, output φo and divide-by-N feedback), with excitation applied by phase modulating the input, by injecting at the low-pass filter input or by varying N to N + 1
lock or track the reference clock quickly (hence the tracking or locking time characteristic), as well as by their phase noise or jitter. Testing for these is of paramount importance in today's SoCs.
An embedded technique for the measurement of the jitter transfer function of a PLL was suggested in Reference 44. The technique relies on one of three methods whereby the PLL is excited by a controlled amount of phase jitter from which the loop dynamics can be measured. These techniques, shown in Figure 5.25, include phase modulating the input reference, φi; injecting a sinusoid (using a bitstream or a PDM representation of the input signal) at the input of the low-pass filter; or varying the divide-by-N counter between N and N + 1.
All three techniques have been verified experimentally and tested on commercial PLLs, allowing one to easily implement these testing techniques for on-chip characterization of jitter in PLLs.
Given the PLL testing technique presented in Reference 44, it is beneficial to draw some analogies with the voltage measurement and stimulus schemes presented earlier in Section 5.3.5. A pulse-density-modulated signal is injected into the PLL. Owing to the inherent low-pass filter present in PLLs, the testing or stimulating of such systems, similar to the testing of ADCs, can be achieved in a purely digital manner without the need for an additional low-pass filter. The resulting silicon area savings and reduced circuit complexity are an added bonus of the proposed PLL BIST. So stimulating the PLL is, here too, done using only a digital interface [45]. Another analogy can be drawn with respect to the voltage-domain testing: while in the analogue stimulus generation it is the amplitude that is modulated, in the case of a PLL it is the phases or clock edges, as shown in Figure 5.26.
5.6
Figure 5.26  Analogy between amplitude and phase stimulation: a bitstream generator (amplitude) followed by a filter drives an ADC-type DUT with amplitude modulation, while a bitstream generator (phase placement) followed by a filter drives a PLL-type DUT with phase modulation
of edges with known time intervals. However, as the desired calibration resolution becomes smaller than a few picoseconds, such a task becomes more difficult; on chip, mismatches and jitter put a lower bound on the timing resolution that generators can reliably achieve, while off chip, edge and pulse generators can produce such intervals accurately, but at additional cost. Calibration methods and their associated trade-offs are therefore important and will be the subject of Chapter 11. Here, we restrict the discussion to the calibration of time measurement instruments and, in particular, to the flip-flop calibration of what is known as the sampling-offset TDC, or SOTDC for short [46]. A sampling-offset TDC is a type of flash converter that relies solely on flip-flop transistor mismatch, instead of separate delay buffers, to obtain fine temporal resolution. While a rather specific type of TDC, it is probably one of the more challenging types to calibrate, because the very fine temporal resolutions that this TDC can achieve make measuring and calibrating such small time differences a difficult task. In fact, it was shown in Reference 47 that mismatches due to process variation can produce temporal offsets from 30 ps down to 2 ps, depending on the implementation technology and the architecture chosen for the flip-flop. These flip-flops therefore need to be calibrated before they can be used as TMUs.
In Reference 47, an indirect calibration technique was proposed that involves the use of two uncorrelated signals (in practice, two square waves running at slightly offset frequencies) to find the relative offsets of the flip-flops used in the SOTDC.
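The statistical idea behind such indirect calibration can be sketched as follows; this is only an illustration of the principle (uniformly distributed edge skews mapping hit probabilities to offsets), not the exact procedure of Reference 47, and all values are made up:

```python
import random

def calibrate_offsets(true_offsets, window, n_trials=20000, seed=1):
    """Statistical (indirect) offset calibration of flip-flops (sketch).

    When the applied edge skew is effectively random and uniform over
    `window` (e.g. from two square waves at slightly offset
    frequencies), a flip-flop with sampling offset o outputs 1 with
    probability (window - o)/window, so o can be estimated from the
    observed hit ratio p as window * (1 - p).
    """
    rng = random.Random(seed)
    hits = [0] * len(true_offsets)
    for _ in range(n_trials):
        skew = rng.uniform(0.0, window)
        for i, o in enumerate(true_offsets):
            if skew > o:       # flip-flop samples a 1 for this trial
                hits[i] += 1
    return [window * (1 - h / n_trials) for h in hits]

# Unknown offsets of 5, 12 and 28 ps recovered from hit ratios alone
est = calibrate_offsets([5e-12, 12e-12, 28e-12], window=50e-12)
```

No precisely placed edges are needed; the accuracy instead improves with the number of trials, which is what makes the method attractive on chip.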
Finding the absolute values of the offsets, which is statistically referred to as the mean of a distribution of offsets, requires a direct calibration technique. This technique was introduced in Reference 48. It involves sending two edges, with a tightly controlled time difference, ΔT, to the flip-flop to be calibrated, and repeating the measurements many times to obtain a (normal or Gaussian) distribution. ΔT is then changed and the same experiment is repeated. A counter or accumulator is then used to
5.7
Some of the BIST techniques highlighted in previous sections have been incorporated in a single system that was used to perform a full set of tests, emulating therefore the function of a mixed-signal tester on chip. The advantages of the system proposed in Reference 11 include full digital input/output access, a coherent system for signal generation and capture, fully programmable d.c. and a.c. systems, and a single-comparator or 1-bit ADC which, with an on-chip DLL, can perform a multi-pass capture digitization. The proposed system [11] was shown earlier in Figure 5.3. This section is dedicated to showing some of its versatile applications that were indeed built, tested and characterized.
5.7.1
Perhaps of most significance, from a BIST point of view, are two important aspects that the architecture in Reference 11 offers. First, its almost all-digital implementation makes it very attractive from a future scaling perspective, whereby the occupied area overhead is expected to decrease with newer CMOS technologies. As shown in Figure 5.27, with the exception of a crude low-pass filter for d.c. generation, an analogue filter for a.c. generation, two sample-and-holds (S/H) and a comparator, the remainder consists of an all-memory implementation. Notice that, theoretically, only one S/H is needed, at the output of the CUT, where the information is analogue and might be varying. However, for practical reasons, and more specifically in order to combat the capacitor charge leakage that occurs when the charge is held for a long time, an identical S/H is placed on the other terminal of the comparator [25], namely where the d.c. voltage is fed. This provides symmetry and therefore identical charge loss at both comparison terminals.
Another very important aspect of the proposed architecture is its digital-in digital-out scan capability, shown in Figure 5.28. This is particularly important from a signal
Figure 5.27  Almost all-digital test architecture: memory-based bitstream generation with a clock divider (DIV), an analogue filter, the CUT, two S/H stages, a comparator and back-end DSP (of complexity N log2 N or N2)
Figure 5.28  Original analogue/mixed-signal core with a scan path: scannable cells for regular core I/O and for internal "scan chain" access
integrity perspective, whereby a digital interface, at both the input and output terminals, is a lot more immune to noise and signal degradation caused by the interconnect paths.
Last but not least, its flexibility from an area overhead perspective is what adds to its BIST value. As highlighted in Figure 5.29, the proposed test core can be greatly reduced if area is of paramount importance. The a.c. and d.c. memory scan chains
Figure 5.29  Abbreviated test core at the chip boundary, with the analogue/mixed-signal core on chip and the memory scan chains and DSP moved off chip
can be implemented off chip using external equipment. Similarly, the memory that holds the digital logic and performs the back-end DSP capabilities can also be external, in both cases while still maintaining a digital-in digital-out interface. In this case, the abbreviated mixed-signal test core consists of simple digital buffers (to restore the rise and fall times of the digital bitstream), the crude low-order d.c. low-pass filter, and the single comparator performing the digitization in a multi-pass approach.
5.7.2
Oscilloscope/curve tracing
The system was first checked for its ability to perform signal generation, as well as signal capture. Fully digital, memory-based d.c. and a.c. signal generation systems are incorporated. The programming of the memory is achieved with a subroutine and optimized using software. The memories are then loaded with the bitstream through a global clock. With appropriate on-chip low-pass filtering, and by using the DLL, which controls the sample-and-hold clock (all of which are generated using the same global clock), and a 1-bit comparator, a multi-pass algorithm allows the capture of the generated signals. The digitized version of the output is then exported for further analysis.
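The multi-pass, single-comparator digitization of one repeatable sample point can be sketched as a successive-approximation search on the programmable reference; a behavioural model assuming an ideal comparator and a perfectly repeatable waveform (names are illustrative):

```python
def multipass_capture(held_voltage, v_min, v_max, n_passes):
    """Multi-pass 1-bit digitization of one repeatable sample point.

    On each pass the programmable d.c. reference is updated by a
    successive-approximation search and the single comparator returns
    one bit; after n_passes passes the held voltage is known to within
    one LSB of the reference range.  Relies on the waveform, and hence
    the held voltage, repeating identically from pass to pass.
    """
    lo, hi, code = v_min, v_max, 0
    for _ in range(n_passes):
        mid = (lo + hi) / 2            # next comparator reference level
        bit = 1 if held_voltage >= mid else 0
        code = (code << 1) | bit
        lo, hi = (mid, hi) if bit else (lo, mid)
    return code, (lo + hi) / 2         # digital code and voltage estimate

# Ten passes resolve a 0.637 V sample to ~1 mV on a 0-1 V range
code, v_est = multipass_capture(0.637, 0.0, 1.0, 10)
```

Amplitude resolution is thus bought with repetitions rather than with ADC hardware, which pairs naturally with the undersampling of Section 5.4 that buys time resolution the same way.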
Experimental results from d.c. curve tracing showed a linearity of 10 bits in a 0.35-μm CMOS technology, for an effective capture rate of 4 GHz, corresponding to a time resolution of 200 ps. Single- and multi-tone generation have also been demonstrated in the same technology, as well as in 0.25- and 0.18-μm CMOS technologies. Spectral purity as good as 65 dB at 500 kHz and 40 dB at 0.5 GHz has been achieved. The capture method was also tested, demonstrating a resolution of approximately 12 bits.
5.7.3
Coherent sampling
Coherency is an important and essential feature in production testing, where repeatability and reproducibility of the test results are largely a function of the signal generation and output capture triggering time. A single master clock clocking the different parts of the complete system ensures coherency and edge synchronization. With shorter distances, as is the case on chip, delays between the different subsystems are less critical. In the case of relatively large chips or high speed and/or high performance, localized PLLs and DLLs might be necessary.
The proposed system does indeed have an inherent coherency that makes it even more attractive for production testing.
5.7.4
With clock rates in excess of 10 GHz by 2010 [1], clock pulses as short as 100 ps will need to cross from one end of the chip to the other. On the other hand, it takes about 67 ps for an electromagnetic wave to travel 1 cm in silicon dioxide, a delay comparable to the clock period. Signal integrity analysis such as time-domain reflectometry (TDR), time-domain transmission (TDT), crosstalk and so on is therefore of paramount importance. Owing to their broadband nature, capturing such high-frequency signals on chip is very costly. Embedded tools are therefore essential for such characterization tasks.
Board-level TDR has also been experimentally proven using the system above. The digitizer core, introducing only a few femtofarads of capacitive loading, can be, and was in fact, used as a tool for testing TDR and TDT on a board. For that, only the digitization part of the system was used, and a 6-bit resolution at an effective sampling rate of 10 GHz was demonstrated [49]. External clocks with a time offset of 100 ps were used in this particular experiment.
5.7.5
Crosstalk
Another critical issue for digital communication in deep-submicron technologies is crosstalk, which is becoming more pronounced as technologies scale down, speeds go up and interconnect traces become longer and noisier. The increased packing density inevitably introduces lines that are in close proximity to one another, where quiet lines in the proximity of aggressor lines are transformed into what are known as victim lines. This crosstalk effect was indeed captured using the versatile system proposed above [49].
An earlier version was also implemented in Reference 50. The embedded circuit was also used to measure digital crosstalk on a victim line due to aggressor lines switching. In this implementation, only the sample-and-hold function was placed on chip, together with a VCD line that was externally controlled with a varying d.c. voltage to generate the delayed clock system. Buffers were then used to export the d.c. analogue sampled-and-held voltage, and the signal was reconstructed externally. The circuit relies on external equipment for the most part (which is not always undesired; in fact, it is often preferred in a testing environment for more control
5.7.6
Supply/substrate noise
Figure 5.30  Supply/substrate noise measurement: an external pulse generator (with variable delay) triggers two samplers and a buffer on the supply to be characterized, driving an ADC whose digital output is exported for post-processing; in a second scheme, the sampled voltage controls a VCO whose oscillation frequency is then measured using a high-speed counter
to control the frequency of oscillation of the VCO. This frequency is then measured using a high-frequency counter and exported off chip in a digital manner. Calibration is necessary in this implementation in order to capture the voltage–frequency–digital bitstream relationship.
The system in Reference 51 was implemented in a 0.13-μm CMOS technology and experimentally verified to capture both the deterministic nature of the noise (largely captured using undersampling) and the stationary noise in a 4-Gb/s serial-link system. The stationary noise was captured using the autocorrelation function and was largely due to, and correlated with, the clock in the system. The power spectral density revealed the highest noise contribution at 200 MHz, agreeing with the system clock. Other noise contributions in the PSD occurred at frequencies directly related to switching activity in the system. So the proposed system in Reference 51 was indeed capable of capturing both the deterministic (also referred to as periodic) and the stationary properties of the supply noise in a Gb/s serial-link system.
Also recently, an on-chip system to characterize substrate integrity beyond 1 GHz was implemented in a 0.13-μm CMOS technology [52] and successfully tested. The relevance of this work is, on the one hand, its circuit implementation for measuring substrate integrity, which confirms the need for embedded approaches. On the other hand, the paper's conclusion confirms that, in an SoC, integrity issues have to be studied and cannot be ignored, especially beyond 1 GHz of operational speed.
5.7.7
One other test was also performed on the proposed system in Reference 11:
the capture of an RF low-noise amplifier (LNA) frequency response,
particularly around its resonance frequency. The CUT was implemented on chip
and its frequency response was tested through the proposed multi-tone signal generation and
multi-pass single-comparator capture system. A 1.2-GHz centre resonance
frequency was successfully measured with 29 dB of spurious-free dynamic range [49].
More focused RF BIST testing has been proposed in Reference 53. An example
diagram for testing RF gain and noise figure is shown in Figure 5.31. A noise diode
generates a broadband RF noise signal, and an output diode, preceded by an LNA for
amplification purposes, acts as an RF detector. Narrowband filters are used to filter
out the broadband noise. Sweeping of the power levels is achieved by varying the
Figure 5.31 [RF BIST test set-up: a noise diode and narrowband filters before and after the DUT, an LNA and an RF detector, with a calibration path around the DUT]
5.7.8
With all the above applications experimentally verified, the system
is indeed versatile and almost all-digital, with the exception of one comparator, two lowpass filters and two sample-and-hold systems. The circuit was proven to provide
test capabilities that are otherwise unachievable or, to say the least, very expensive
to obtain. Despite its versatility, some limitations exist for the proposed system in
Reference 11, and these are highlighted next.
Comparator offset is one such limitation; the comparator needs to be fully characterized for its offset, as well as dynamically tested, two tasks that are not easily done
or, at best, are time consuming and require some additional consideration.
The other limitation, albeit less severe, lies in the uncertainty associated with the
rise/fall time mismatch of the digital bitstream in the on-chip memory-based d.c.
generation. This, however, can be taken care of at the design level by accounting for
the worst-case process variations.
One last limitation is the increased test time that each test will require due to the
multi-pass approach. The dead time needed for the d.c. signal generation subsystem
to settle to an acceptable level within an acceptable resolution, each time the d.c.
generation block updates its output level, is another source of increased test time.
This was a trade-off between design complexity and test time that the authors had to
consider.
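The test-time penalty of the multi-pass approach can be made concrete with a back-of-the-envelope model; all of the numbers below are hypothetical and serve only to illustrate how the d.c.-settling dead time can dominate:

```python
# Each pass of the multi-pass comparator capture costs one capture window
# plus one dead time for the d.c. generator to settle after a level update.
capture_time_s = 50e-6     # capture window per pass (assumed)
settle_time_s = 200e-6     # d.c. settling per level update (assumed)
passes = 256               # comparator reference levels swept (assumed)

total_s = passes * (capture_time_s + settle_time_s)
settle_fraction = (passes * settle_time_s) / total_s
print(f"total test time: {total_s * 1e3:.1f} ms, "
      f"{settle_fraction:.0%} of it spent settling")
# -> total test time: 64.0 ms, 80% of it spent settling
```

Even with modest per-pass times, the settling dead time is the larger term, which is exactly the design-complexity versus test-time trade-off noted above.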
5.8
Recent trends
If the cost of a component has to be brought down to track Moore's law, its testing cost
has to go down as well. While most of the recent tools are mainly for characterization
and device functional testing, more needs to be done about production testing. One
important criterion in production testing is the ability to calibrate all devices while
using simple calibration techniques, with as little test-time overhead as possible, in order to be
a production-worthy solution. It is therefore important to highlight some of the latest
test concerns and techniques that have emerged in recent years, mainly to reduce
overall test time and cost.
Adaptive test control and collection, and test-floor statistical process control, are
now emerging topics that are believed to decrease the overall test time by gathering statistical parameters about the die, wafer and lot, and
feeding them back to a test control section through an interactive interface. As
more parts are tested, the variations in the parts are better understood, allowing the test control to enable or disable tests or re-order them, for example,
allowing the tests that are catching the defects to be run first [54]. This has the potential
effect of centring the distribution of the devices' performance more tightly around
its mean; in other words, getting test results with less variance or standard deviation.
Once this is achieved, the remaining devices in the production line can be easily
scanned and binned more quickly. However, this solution does not address the issue
of mean shifting that could occur if there is a sudden change in the environmental
set-up. Also, the time it takes to gather a statistically valid set of data that works more
or less globally is not yet defined. This is an important criterion, since having a set that
works for only a small percentage of the devices to be tested is not an economically
feasible solution. In other words, the time overhead introduced by the proposed method
should not have a detrimental effect on the overall test time; otherwise the proposed
method is not justified.
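The re-ordering idea behind adaptive test control can be sketched in a few lines; the structure and names below are ours, not those of Reference 54. Tests accumulate observed fail rates, and the flow is re-ordered so that the tests that have been catching defects run first:

```python
from collections import defaultdict

# Track, per test, how often it has caught a failing die.
fail_counts = defaultdict(int)
run_counts = defaultdict(int)

def record(test_name, failed):
    run_counts[test_name] += 1
    if failed:
        fail_counts[test_name] += 1

def ordered_tests(tests):
    # Highest observed fail rate first; ties keep their original order.
    def rate(t):
        n = run_counts[t]
        return fail_counts[t] / n if n else 0.0
    return sorted(tests, key=rate, reverse=True)

# After some lots, "leakage" has caught defects while the others have not.
for _ in range(90):
    record("leakage", failed=True)
for _ in range(100):
    record("gain", failed=False)
    record("offset", failed=False)

print(ordered_tests(["gain", "offset", "leakage"]))
# -> ['leakage', 'gain', 'offset']
```

Running the most effective test first lets failing parts exit the flow early, which is where the average test-time saving comes from.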
A design-for-manufacturability technique based on a manufacturable-by-construction design was also recently proposed in Reference 55. The idea
is specifically intended for the nanometre era and puts forward the concept of incorporating accurate physical and layout models of a particular process into the
computer-aided design tool used to simulate the system. Such models are then continuously and dynamically updated based on the yield losses. The concept was
experimentally verified on five different SoCs implemented in a 0.13-µm CMOS
process, including a baseband cell phone, a micro-controller and a graphics chip.
Experimental results show a yield improvement varying between 4 and 12 per cent,
depending on the nature of the system implemented on the chip. The yield improvement was measured with respect to previous revisions of the same ICs implemented
using traditional methods.
Recent questions and efforts also concern the move by the ATE
manufacturing industry towards what is known as an open architecture with modular instruments, intended to standardize test platforms and increase their lifetime; this resulted in the
Semiconductor Test Consortium formed between Intel and the Japanese Advantest
Corp. [56].
Finally, the testing of multiple Gb/s serial links and buses has been the focus
of recent panel discussions [57]. Some of the questions that have been addressed
include the appropriateness of DfT/BIST for such tests; whether such measures
are, or will be, the bottleneck for analogue tests, rather than the RF front-end in
mobile/wireless computing; and, finally, whether it is necessary to even consider testing for jitter, noise and BER from a cost and economics perspective in a production
environment.
5.9
Conclusions
In summary, it is clear that test solutions and design-for-test techniques are important,
but where the test solutions are implemented and how they are partitioned, especially
in an SoC era, have an effect on the overall test cost. Devising the optimum test
strategy that is affordable, achieves a high yield and minimizes the time to market is
a difficult task.
5.10 References
1 The 1997 National Technology Roadmap for Semiconductors (Semiconductor
Industry Association, San Jose, CA, 1997)
2 Tsui, F.F.: LSI/VLSI Testability Design (McGraw Hill, New York, 1986)
3 Bardell, P.H., McAnney, W.H., Savir, J.: Built-in Test for VLSI: Pseudorandom
Techniques (John Wiley and Sons, New York, 1987)
4 Davis, B.: The Economics of Automatic Testing (McGraw Hill, UK, 1982)
5 Toner, M.F., Roberts, G.W.: A BIST scheme for an SNR, gain tracking and
frequency response test of a sigma-delta ADC, IEEE Transactions on Circuits
and Systems II: Analog and Digital Signal Processing, 1995; 42 (1): 1-15
6 Grochowski, A., Bhattacharya, D., Viswanathan, T.R., Laker, K.: Integrated
circuit testing for quality assurance in manufacturing: history, current status and
future trends, IEEE Transactions on Circuits and Systems II: Analog and Digital
Signal Processing, 1997; 44 (8): 610-33
7 Sunder, S.: A low cost 100 MHz analog test bus, Proceedings of IEEE VLSI Test
Symposium, Princeton, NJ, 1995, pp. 60-3
8 Osseiran, A.: Getting to a test standard for mixed-signal boards, Proceedings of
IEEE Midwest Symposium on Circuits and Systems, Rio de Janeiro, Brazil, 1995,
pp. 1157-61
9 DeWitt, M.R., Gross, G.F. Jr., Ramanchandran, R.: Built-in Self-Test for Analog
to Digital Converters, US Patent no. 5,132,685, 1992
10 Veillette, B.R., Roberts, G.W.: A built-in self-test strategy for wireless communication systems, Proceedings of IEEE International Test Conference, Washington,
DC, 1995, pp. 930-9
11 Hafed, M.M., Abaskahroun, N., Roberts, G.W.: A 4 GHz effective sample rate
integrated test core for analog and mixed-signal circuits, IEEE Journal of Solid
State Circuits, 2002; 37 (4): 499-514
12 Zimmermann, K.F.: SiPROBE - a new technology for wafer probing,
Proceedings of IEEE International Test Conference, Washington, DC, 1995,
pp. 106-12
13 Tierney, J., Rader, C.M., Gold, B.: A digital frequency synthesizer, IEEE
Transactions on Audio and Electroacoustics, 1971; 19: 48-57
14 Bruton, L.: Low sensitivity digital ladder filters, IEEE Transactions on Circuits
and Systems, 1975; 22 (3): 168-76
15 Lu, A.K., Roberts, G.W., Johns, D.A.: High-quality analog oscillator using
oversampling D/A conversion techniques, IEEE Transactions on Circuits and
Systems II: Analog and Digital Signal Processing, 1994; 41 (7): 437-44
16 Toner, M.F., Roberts, G.W.: Towards built-in-self-test for SNR testing of a
mixed-signal IC, Proceedings of IEEE International Symposium on Circuits and
Systems, Chicago, IL, 1993, pp. 1599-602
17 Lu, A.K., Roberts, G.W.: An analog multi-tone signal generation for built-in-self-test applications, Proceedings of IEEE International Test Conference,
Washington, DC, 1994, pp. 650-9
18 Haurie, X., Roberts, G.W.: Arbitrary precision signal generation for band-limited mixed-signal testing, Proceedings of IEEE International Test Conference,
Washington, DC, 1995, pp. 78-86
19 Veillette, B., Roberts, G.W.: High-frequency signal generation using delta-sigma modulation techniques, Proceedings of IEEE International Symposium
on Circuits and Systems, Seattle, WA, 1995, pp. 637-40
20 Hawrysh, E.M., Roberts, G.W.: An integration of memory-based analog signal
generation into current DFT architectures, Proceedings of IEEE International
Test Conference, Washington, DC, 1996, pp. 528-37
21 Burns, M., Roberts, G.W.: An Introduction to Mixed-Signal IC Test and
Measurement (Oxford University Press, New York, 2001)
22 Dufort, B., Roberts, G.W.: On-chip signal generation for mixed-signal built-in
self test, IEEE Journal of Solid State Circuits, 1999; 34 (3): 318-30
23 Parker, K.P., McDermid, J.E., Oresjo, S.: Structure and metrology for an analog
testability bus, Proceedings of IEEE International Test Conference, Baltimore,
MD, 1993, pp. 309-22
24 Larsson, P., Svensson, S.: Measuring high-bandwidth signals in CMOS circuits,
Electronics Letters, 1993; 29 (20): 1761-2
25 Hajjar, A., Roberts, G.W.: A high speed and area efficient on-chip analog waveform extractor, Proceedings of IEEE International Test Conference, Washington,
DC, 1998, pp. 688-97
26 Stevens, A.E., van Berg, R., van der Spiegel, J., Williams, H.H.: A time-to-voltage converter and analog memory for colliding beam detectors, IEEE Journal
of Solid State Circuits, 1989; 24 (6): 1748-52
27 Sumner, R.L.: Apparatus and Method for Measuring Time Intervals With Very
High Resolution, US Patent 6,137,749, 2000
41 Analui, B., Rylyakov, A., Rylov, S., Hajimiri, A.: A 10 Gb/s eye-opening monitor in 0.13 µm CMOS, Proceedings of IEEE International Solid-State Circuits
Conference, San Francisco, CA, 2005, pp. 332-4
42 Abas, A.M., Bystrov, A., Kinniment, D.J., Maevsky, O.V., Russell, G.,
Yakovlev, A.V.: Time difference amplifier, Electronics Letters, 2002; 38 (23):
1437-8
43 Oulmane, M., Roberts, G.W.: A CMOS time-amplifier for femto-second resolution timing measurement, Proceedings of IEEE International Symposium on
Circuits and Systems, London, 2004, pp. 509-12
44 Veillette, B., Roberts, G.W.: On-chip measurement of the jitter transfer function
of charge pump phase-locked loops, IEEE Journal of Solid State Circuits, 1998;
33 (3): 483-91
45 Veillette, B., Roberts, G.W.: Stimulus generation for built-in-self-test of charge-pump phase-locked-loops, Proceedings of IEEE International Test Conference,
Washington, DC, 1997, pp. 397-400
46 Gutnik, V.: Analysis and Characterization of Random Skew and Jitter in a Novel
Clock Network, Ph.D. dissertation, Massachusetts Institute of Technology, USA,
2000
47 Gutnik, V., Chandrakasan, A.: On-chip time measurement, Proceedings of IEEE
Symposium on VLSI Circuits, Orlando, FL, 2000, pp. 52-3
48 Levine, P., Roberts, G.W.: A high-resolution flash time-to-digital converter
and calibration scheme, Proceedings of IEEE International Test Conference,
Charlotte, NC, 2004, pp. 1148-57
49 Hafed, M., Roberts, G.W.: A 5-channel, variable resolution, 10-GHz sampling rate coherent tester/oscilloscope IC and associated test vehicles, Proceedings of IEEE Custom Integrated Circuits Conference, San Jose, CA, 2003,
pp. 621-4
50 Delmas-Bendhia, S., Caignet, F., Sicard, E., Roca, M.: On-chip sampling in
CMOS integrated circuits, IEEE Transactions on Electromagnetic Compatibility,
1999; 41 (4): 403-6
51 Alon, E., Stojanovic, V., Horowitz, M.: Circuits and techniques for high-resolution measurement of on-chip power supply noise, IEEE Journal of Solid
State Circuits, 2005; 40 (4): 820-8
52 Nagata, M., Fukazawa, M., Hamanishi, N., Shiochi, M., Iida, T., Watanabe, J.,
Murasaka, M., Iwata, A.: Substrate integrity beyond 1 GHz, Proceedings of
IEEE International Solid-State Circuits Conference, San Francisco, CA, 2005,
pp. 266-8
53 Ferrario, J., Wolf, R., Moss, S., Slamani, M.: A low-cost test solution for wireless
phone RFICs, IEEE Communications Magazine, 2003; 41 (9): 82-8
54 Rehani, M., Abercrombie, D., Madge, R., Teisher, J., Saw, J.: ATE data collection:
a comprehensive requirements proposal to maximize ROI of test, Proceedings
of IEEE International Test Conference, Charlotte, NC, 2004, pp. 181-9
55 Strojwas, A., Kibarian, J.: Design for manufacturability in the nanometer era:
system implementation and silicon results, Proceedings of IEEE International
Solid-State Circuits Conference, San Francisco, CA, 2005, pp. 268-9
Chapter 6
6.1
Introduction
Test and diagnosis techniques for digital systems have been developed for over three
decades. Advances in technology, increasing integration and mixed-signal designs
demand similar techniques for testing analogue circuitry. Design for testability (DfT)
for analogue circuits is one of the most challenging jobs in mixed-signal system on
chip design owing to the sensitivity of circuit performance with respect to component
variations and process technologies. A large portion of test development time and
total test time is spent on analogue circuits because of the broad specifications and
the strong dependency of circuit performance on circuit components. To ensure that a
design is testable is an even more formidable task, since testability is not well defined
within the context of analogue circuits. Testing of analogue circuits based on circuit
functionality and specification under typical operational conditions may result in poor
fault coverage, long testing times and the requirement for dedicated test equipment.
Furthermore, the small number of input/output (I/O) pins of an analogue integrated
circuit compared with that of digital circuits, the complexity due to continuous signal
values in the time domain and the inherent interaction between various circuit parameters make it almost impossible to design an efficient DfT for functional verification
and diagnosis. Therefore, an efficient DfT procedure is required that uses a single
signal as input or self-generated input signal, has access to several internal nodes,
and has an output that contains sufficient information about the circuit under test.
A number of test methods can be found in the literature and various corresponding
DfT techniques have been proposed [115]. DfT methods can be generally divided
into two categories. The first seeks to enhance the controllability and observability
of the internal nodes of a circuit under test in order to utilize only the normal circuit
input and output nodes to test the circuit. The second is to convert the function of a
6.2
DfT by bypassing
Two basic design approaches are commonly used in DfT methodologies for fault
detection and diagnosis in analogue integrated filters. The first approach is based
on splitting the filter under test (FUT) into a few isolated parts, injecting external
test stimuli and taking outputs by multiplexing. The second approach is an I/O DfT
technique based on the partitioning of the FUT into the filter stages. Each filter stage is
then separately tested by bypassing the other stages. Bypassing a stage can be realized
either by bypassing the capacitors (bandwidth broadening) of the stage using MOS
switches or using a duplicate opamp structure at the interface between two stages.
The multiplexing approach will be discussed in Section 6.3. This section addresses
the bypassing approach.
6.2.1
The bypassing methodology [2] is applicable to the class of active analogue filters
based on the standard operational amplifier. In this DfT technique, testability is defined
as controllability and observability of the significant waveforms within the filter structure. It permits full control and observation of I/O signals from the input of the first
stage and the output of the last stage of the filter. Therefore, the first stage input is
controllable and the last stage output is observable. The bypassing method can be
divided into two steps, detection of out-of-specification faults and diagnosis of faults.
The detection of out-of-specification faults is based on the comparison between the
ideal output and the measured data. The diagnosis step involves test generation techniques and fault identification for a given circuit. Digital scan design principles can
be used directly in the modified forms of active analogue filters. These techniques
require sequential structures and one class exhibiting this configuration is a multistage active analogue filter. A signal travels sequentially through the stages, each of
which has a well-defined function and whose gain and bandwidth are determined
by the appropriate input and feedback impedances. The gain and bandwidth of the
individual stage will modify the signal before passing on to the next stage. Analogue
scanning to control and observe signals within the filter is possible if the bandwidth of
each stage can be dynamically broadened in the test mode. Such bandwidth broadening may cause gain change. However, this will not pose problems since any change in
the gains can be fixed by the programming of the test equipment. Bandwidth expansion is performed by reducing the capacitive effects of the impedances of the stages
that are not under test. All the impedances in the active filter are based on the four
basic combinations of resistors and capacitors: single resistor, single capacitor, RC
series and RC parallel.
6.2.1.1 Active-RC filters
The following transformations on the basis of impedance modifications may be
required in test mode.
Figure 6.1 [single-capacitor branch transformation: capacitor C with an NMOS switch in series and a PMOS switch providing the test-mode path]
Figure 6.2 [two transformations of the series RC branch: (a) an NMOS switch in parallel with the capacitor; (b) an NMOS switch in series with the capacitor]
An ideal resistor has an unlimited bandwidth and does not need to be modified in
test mode.
The single-capacitor branch transformation requires two MOS switches, as shown
in Figure 6.1. The impedance in the normal mode, ZN, is approximately the same as
the original impedance without MOS switches only if the on-resistance of the NMOS
switch, RS, is small enough that the zero it creates does not affect the frequency
response of the stage. The size of the PMOS switch does not matter, since its on-resistance only affects the gain in the test mode.
Two possible transformations of the series RC branch are shown in Figure 6.2.
A switch in parallel with the capacitor makes the branch resistive in the test mode,
or a switch in series with the capacitor disconnects the branch in the test mode, as
shown in Figures 6.2(a) and (b), respectively. To avoid significant perturbation of the
original pole-zero locations in the series-switch configuration, the on-resistance of
the NMOS switch must be much less than the series resistance of the branch.
The parallel RC branch may be considered as a combination of a single resistor
branch and a single capacitor branch. The parallel RC branch requires only one switch.
The switch is either in series with the capacitor in order to disconnect it or in parallel
with the capacitor to short it out in test mode, as shown in Figures 6.3(a) and (b),
respectively. To reduce the effect on normal filter performance, the on-resistance of the
NMOS switch must be small and the off-resistance of the PMOS switch must be large.
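The on-resistance constraint above can be checked numerically. A switch with on-resistance Rs in series with a capacitor C turns the branch impedance 1/(sC) into (1 + sRsC)/(sC), adding a zero at fz = 1/(2πRsC); the element values below are illustrative, not taken from the chapter:

```python
import math

def switch_zero_hz(rs_ohm, c_farad):
    # Frequency of the zero introduced by the switch on-resistance.
    return 1.0 / (2.0 * math.pi * rs_ohm * c_farad)

c = 10e-12            # 10 pF branch capacitor (assumed)
f_corner = 1.0e6      # filter corner frequency (assumed)

for rs in (100.0, 1e3, 10e3):
    fz = switch_zero_hz(rs, c)
    ok = fz > 100 * f_corner   # keep the zero two decades above the band
    print(f"Rs = {rs:7.0f} ohm -> zero at {fz / 1e6:8.1f} MHz, safe: {ok}")
```

For this example only the 100-ohm switch pushes the zero two decades beyond the corner, which is why the switch must be sized for a low on-resistance.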
The modified three-opamp, second-order active-RC filter is shown in Figure 6.4.
The modification requires only three extra MOS switches to place each stage of the
FUT in its expanded-bandwidth test mode.
Figure 6.3 [parallel RC branch transformations: (a) a switch in series with the capacitor to disconnect it; (b) a PMOS/NMOS switch in parallel with the capacitor across resistor R to short it out]
Figure 6.4 [modified three-opamp, second-order active-RC filter: components R1-R5, C1, C2 with control switches T1 and T2, between Vin and Vout]
The test methodology is very simple. The FUT is first tested in normal mode by
setting the control switches T1 = T2 = high. If the FUT fails, the test mode is
activated and all stages except the stage under test are transformed into simple gain
stages, with all capacitors disconnected, by setting the control switches to a low level.
The input signal can thus pass through the preceding inverting amplifier stages to the
input of the stage under test, and the output signal of the stage under test can pass
through the succeeding inverting amplifier stages to the output of the filter, so that any
individual stage can be tested from the input and output of the filter.
To isolate the faulty stage(s), one stage is tested at a time until all stages are
tested. The input test waveforms depend upon the overall filter topology and transfer
functions of stages. A circuit simulator provides the expected output waveforms, gain
and phase. Given a filter of n stages, n + 2 simulations are required per fault. The
simulated and measured data are interpreted to identify and isolate the faults. These
data should include signals as functions of time, magnitude and phase responses,
Fourier spectra and d.c. bias conditions.
6.2.1.2 SC filters
The bandwidth broadening DfT methodology can be extended to SC filters using a
timing strategy [3]. The timing waveform will convert each stage of the filter into
a simple gain stage without any extra MOS switches. MOS switches are already
Figure 6.5 [basic lowpass single-stage SC filter: capacitors C1, C2 and C4 with switches clocked by φ1 and φ2, between Vin and Vout]
included in the basic SC resistor realizations. The requirement for test-signal propagation through the SC structure is thus established with the ON-OFF sequence of
these built-in switches. The output signal may not be an exact duplicate of the input
but still contains most of the information in the input. The output voltage will be
scaled by the inexact gain of the stages or subsystems of the filter. Hence, the timing
strategy permits full control and observation of I/O signals from the input of the first
stage to the output of the last stage of the filter. A timing methodology for signal
propagation does not only account for the test requirements, but also considers the
proper operation of SC filters. The following combinations are needed to produce
the test timing signals:
1. The master clock, which is the OR combination of the two non-overlapping clocks,
φ1 and φ2.
2. The test enable control signal for each stage.
3. The phase combinations of the master clock and test enable signal.
The clock distribution into the SC structures is needed to permit the selection of the
normal mode or test mode of operation.
The basic lowpass single-stage SC filter [31] is given in Figure 6.5.
In the test mode, the path for the input signal to the output can be created such
that the switches in series with capacitors are closed and the switches used to ground
any capacitor are opened. Let T be the test control signal, which remains high during
the test mode and low in the normal mode. The proper switch control waveforms can
be defined as
for signal switches:

φ1S = T + φ1    (6.1)

φ2S = T + φ2    (6.2)
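Equations (6.1) and (6.2) are plain OR gating of the clock phases with the test enable, which can be checked directly; the variable names below are ours, with phi1/phi2 standing for the non-overlapping clock phases and T for the test enable, high in test mode:

```python
# Non-overlapping two-phase clock, eight sample instants (illustrative).
phi1 = [1, 0, 0, 0, 1, 0, 0, 0]
phi2 = [0, 0, 1, 0, 0, 0, 1, 0]

def signal_switch_clocks(T):
    phi1s = [T | p for p in phi1]   # phi1S = T + phi1   (6.1)
    phi2s = [T | p for p in phi2]   # phi2S = T + phi2   (6.2)
    return phi1s, phi2s

print(signal_switch_clocks(T=0))  # normal mode: switches follow the clocks
print(signal_switch_clocks(T=1))  # test mode: signal switches held closed
```

With T high, every signal switch is held permanently closed, which is what creates the continuous signal path from input to output in the test mode.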
Figure 6.6 [third-order lowpass SC filter: three stages with capacitors C11, C12, C21, C22, C31, C32 and coupling capacitors CC1, CC2, CC3; the built-in switches are controlled by the gated clock phases, between Vin and Vout]
The subscripts, S and O, are added to the clock phases to stress the functions of these
signals with respect to the switches.
During the test mode, the filter operates in continuous fashion and its transfer
function is given by
Vout = [C1 / (C2 + C4)] Vin    (6.3)
Equation (6.3) shows that the input signal Vin is transmitted through the circuit with
its amplitude scaled by the capacitor ratio.
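Equation (6.3) in numbers, with illustrative capacitor values of our own choosing:

```python
# Test-mode gain of the SC stage is set purely by a capacitor ratio.
C1, C2, C4 = 4e-12, 1e-12, 1e-12          # farads (assumed values)
gain = C1 / (C2 + C4)                     # equation (6.3)
print(f"test-mode gain = {gain:.2f}")     # -> test-mode gain = 2.00
```

Because the gain depends only on a ratio of on-chip capacitors, it is well controlled despite absolute process variations, which makes the scaled output easy to predict and compare against.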
We now apply the same technique to the third-order lowpass SC filter [32]
shown in Figure 6.6, assuming that we are interested in testing stage 2 and
that the only accessible points of the circuit are the input of stage 1 and the output of
stage 3.
The functional testing of stage 2 requires two signal propagation conditions:
1. Establishing a path through stage 1 to control the input of stage 2.
2. Establishing a path through stage 3 to observe the output of stage 2.
Therefore the switches can be divided into three distinct groups:
1. The grounding switches in stages 1 and 3 remain open during the testing of
stage 2.
2. The signal switches in stages 1 and 3 remain closed during the testing of
stage 2.
3. The switches in inter-stage feedback circuits remain open to ensure controllability and observability and to avoid possible instability.
In test mode, stage 2 should be in normal operation, that is, the switches in stage
2 are controlled by the normal two-phase clock. Three test control lines are required
to enable testing of each of the three stages; these lines are designated T1, T2 and
T3. The clock waveforms for both normal and test modes are defined as follows:
1. For grounding and inter-stage feedback switches:

φiO = (T1 + T2 + T3)' · φi    (6.4)

where φi denotes clock phase i, i = 1, 2, and the prime denotes the logical complement.
2. For signal switches, with stage j under test:

φiS = φi + Σ_{k=1, k≠j}^{k=3} Tk    (6.5)

Figure 6.7 [modified opamp with a duplicate input stage: normal input V+, test input Vt, and switches S1 and S2 in the signal path between input and output]
6.2.2
The duplicated or switched opamp [4] methodology can be applied without any modification to both continuous-time and SC filters, providing a unified DfT strategy with
better performance in terms of signal degradation. The modified opamp has a duplicate
input stage and two MOS switches in the small-signal path of the filter, and is shown
in Figure 6.7.
In filter mode, switch S1 is closed and S2 is open; the opamp operates normally
and the circuit under test behaves as a filter with very small performance degradation.
In test mode, S1 is open and S2 is closed, and the opamp operates as a unity-gain
follower, passing the test signal to the output. Owing to the use of switches, the
circuit is often called the switched opamp; alternatively, the duplication of the input stage
leads to the name duplicated opamp.
6.2.2.1 Second-order active-RC filter
A three-stage second-order RC filter using the duplicate input opamp is shown in
Figure 6.8. The filter consists of three different types of stage, depending on the
Figure 6.8 [three-stage second-order active-RC filter using duplicate-input opamps: components R1-R4, C1, C2; each opamp has inputs V-, V+, a test input Vt and a Mode T/F control; internal nodes V1, V2, V3 between Vin and Vout]
(6.6)
(6.7)
From the above equations it can be seen that every stage can be tested from the filter
input and output due to the use of the switched opamp.
6.2.2.2 Third-order SC filter
Figure 6.9 shows a third-order SC filter [32] using the duplicate input opamp as shown
in Figure 6.7. Each stage is able to perform its filter function in normal mode or to
work as unity gain stage in test mode.
The filter testing procedure is as follows. The FUT is first tested in filter mode.
If the FUT fails, the test mode is activated and all stages are transformed into simple
unity gain stages except the stage to be tested for fault. The stage under test is a lossy
or ideal integrator. Testing can be conducted by comparing the measured results with
the expected transfer function of the stage under test. Further designs of switched
opamps and their applications in analogue filter testing can be found in Reference 5.
Figure 6.9 [third-order SC filter using duplicate-input opamps: stage capacitors C11, C12, C21, C31, C32 and coupling capacitors CC1, CC2, CC3; each opamp has inputs V-, V+, a test input Vt and a Mode T/F control, between Vin and Vout]
6.3
DfT by multiplexing
The multiplexing DfT technique has been proposed to increase access to internal
circuit nodes. Through a demultiplexer, a test input signal can be applied to internal nodes, while a multiplexer can be used to take outputs from internal nodes. The
controllability and observability of the filter are thus enhanced. When using the multiplexing DfT technique, the FUT may be divided into a number of functional blocks
or stages. The input demultiplexer routes the input signal to the inputs of the different
stages, and the outputs of the stages are routed to the primary output by the output
multiplexer [6]. Testing and diagnosis of embedded blocks or internal stages thus
become much easier.
For integrator-based testing, for example, the FUT is divided into separate test stages
using MOS switches such that each stage represents a basic integrator function [9].
Individual integrators are tested separately against their expected performances to
isolate the faulty stages. The diagnosis procedure then further identifies the specific
faults in the faulty stages. Normally, the filter can be divided into two possible types of
integrator: the lossy integrator and the ideal integrator. Time, amplitude and phase responses
may be tested for these integrators. The implementation of the multiplexing-based
DfT requires only a few MOS switches. The value of the MOS switch on-resistance is
chosen such that it does not affect the performance of the filter in normal mode.
6.3.1
Figure 6.10 [testable active-RC filter: stages built around R1, R2, R4, R5, R6, C1, C2 with control switches S1, internal nodes V1, V2, V3, and an output multiplexer addressed by A0 and A1 driving Vout]
Table 6.1 Operation of the testable filter in Figure 6.10

S1  A1  A0  Mode    Operation
1   0   0   Normal  Filter
0   0   1   Test    Lossy integrator
0   1   0   Test    Ideal integrator
0   1   1   Test    Amplifier
For the lossy integrator, the response to a step input Vin is

Vout = -(R4/R1)(1 - e^(-t/R4C1)) Vin    (6.8)

and for the ideal integrator

Vout = -(t/R2C) Vin    (6.9)
(6.9)
For the ideal integrator, therefore a square-wave input will produce a triangular-wave
output for each stage if the circuit is fault free. If there is fault in the stage, the output
of the stage will not be an ideal triangular wave for a square-wave input. Amplitude
and phase responses in the frequency domain could equally be used as test indicators.
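The square-wave-to-triangular-wave signature can be verified with a small numerical experiment; the sample rate, input frequency and unit time constant are ours, chosen only to make the check concrete:

```python
import numpy as np

# An ideal (inverting) integrator driven by a square wave must produce a
# triangular wave: each half-period of the output is a straight ramp.
fs, f_in = 1.0e6, 1.0e3                     # sample rate, input frequency
t = np.arange(4 * int(fs / f_in)) / fs      # four input periods
square = np.sign(np.sin(2 * np.pi * f_in * t))
tri = -np.cumsum(square) / fs               # discrete integration, unit RC

# A straight ramp has a (numerically) zero second difference.
segment = tri[10:400]                       # inside the first half-period
second_diff = np.diff(segment, n=2)
print("max |second difference| on ramp:", float(np.max(np.abs(second_diff))))
```

A faulty stage (for example, a leaky capacitor turning the ideal integrator lossy) bends the ramp into an exponential, so the second difference of the output is a simple pass/fail indicator for this test.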
6.3.2
Figure 6.11 [testable KHN filter: amplifier and integrator stages built around R1-R6 and C1, with control switches S1, an input demultiplexer and an output multiplexer addressed by A0 and A1, internal nodes V1, V2, V3, between Vin and Vout]
Table 6.2 Operation of the testable KHN filter in Figure 6.11

S1  A1  A0  Mode    Operation
1   0   0   Normal  Filter
0   0   1   Test    Amplifier
0   1   0   Test    Ideal integrator
0   1   1   Test    Ideal integrator
The switch resistances are chosen such that the pole-frequency movement is negligible
and the new zeros introduced by the switches lie as far outside the original filter
bandwidth as possible. The operation of the testable KHN filter in Figure 6.11 is
given in Table 6.2. In normal-mode operation, all control switches designated S1
are closed, with address pins A0 and A1 at zero level, and the circuit performs the
same function as the original filter. The fault diagnosis method involves the following
procedure:
1. Set the KHN filter in test mode by placing all switches (S1 ) in open position.
2. Observe the output waveforms of the stage under test in the KHN filter by
assigning the respective address of the stage, as given in Table 6.2.
Each stage is investigated step by step to locate the fault. The multiplexing technique can be used to observe the fault in any stage. The function of each stage is
simply an ideal integrator or amplifier.
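The two-step procedure above amounts to a loop over stage addresses; the (A1, A0) assignment per stage below is illustrative, in the style of Table 6.2, and the measurement and reference functions are placeholders of our own:

```python
# Hypothetical stage -> (A1, A0) address map for the output multiplexer.
STAGE_ADDRESS = {1: (0, 1), 2: (1, 0), 3: (1, 1)}

def diagnose(measure, reference, tol=0.05):
    """measure(a1, a0) and reference(stage) each return a scalar response."""
    faulty = []
    for stage, (a1, a0) in STAGE_ADDRESS.items():
        if abs(measure(a1, a0) - reference(stage)) > tol * abs(reference(stage)):
            faulty.append(stage)
    return faulty

# Fake measurements: stage 3 responds 40 per cent low against its reference.
ref = {1: 1.0, 2: 1.0, 3: 1.0}
meas = {(0, 1): 1.01, (1, 0): 0.99, (1, 1): 0.60}
print(diagnose(lambda a1, a0: meas[(a1, a0)], lambda s: ref[s]))  # -> [3]
```

Because each stage in test mode is just an ideal integrator or an amplifier, the per-stage reference responses are easy to compute, which is what makes this step-by-step localization practical.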
6.3.3
In this section, we present multiplexing-based test techniques for the TT OTA-C filter.
This can be considered to be the OTA-C equivalent of the TT active-RC filter, which
consists of an ideal integrator and a lossy integrator in a single loop. TT filters have
Figure 6.12 [testable TT OTA-C filter: transconductors gm1, gm2, gm3 with capacitors C1 and C2, control switches S1 and S2, an input demultiplexer and an output multiplexer addressed by A0 and A1, internal nodes V01 and V02, between Vin and Vout]
Table 6.3 Operation of the modified TT OTA-C filter in Figure 6.12

S1   S2   A1   A0   Mode     Operation
0    1    0    0    Normal   Filter
1    0    0    1    Test     Ideal integrator
1    0    1    0    Test     Lossy integrator
excellent low sensitivity to parasitic input capacitances and are suitable for cascade
synthesis of active filters at high frequencies. Multiplexing-based DfT is directly
applicable to the TT using only MOS switches as shown in Figure 6.12.
The values of the switch resistances are chosen so that the modified filter performs
the same function as the original filter. The optimum selection of aspect ratio between
length and width of the control switches will produce negligible phase perturbation
and insignificant increase in the total harmonic distortion due to the non-linearity of
the MOS switch.
The modified TT filter is first tested in normal mode. In normal mode operation,
control switches designated as S2 are closed and S1 are opened, as shown in Table 6.3.
In the case that failure occurs, the test mode will be activated, with switches S2 open
and S1 closed and the individual stages will be tested sequentially to isolate the faulty
stage. During testing, the TT filter in Figure 6.12 will become two individual stages,
an ideal integrator, stage 1 and a lossy integrator, stage 2. The transfer function of
stage 1 can be derived as
Vout = (gm1 / sC1) Vin   (6.10)

and the transfer function of stage 2 as

Vout = (gm2 / (sC2 + gm3)) Vin   (6.11)
6.4
OBT procedures for analogue filters, based on transformation of the FUT into an oscillator, have recently been introduced [11-13]. The oscillation-based DfT structure
uses vectorless output frequency comparison between fault-free and faulty circuits
and consequently reduces test time, test cost, test complexity and area overhead.
Furthermore, the testing of high-frequency filter circuits becomes easier because no
external test signal is required for this test method. OBT shows greatly improved
detection and diagnostic capabilities associated with a number of catastrophic and
parametric faults. Application of the oscillation-based DfT scheme to low-order analogue filters of different types is discussed, because these structures are commonly
used individually as filters and also as building blocks for high-order filters.
In OBT, the circuit that we want to test is transformed into an oscillating circuit
and the frequency of oscillation is measured. The frequency of the fault-free circuit
is taken as a reference value. Discrepancy between the oscillation frequency and the
reference value indicates possible faults. Fault detection can be performed as a BIST
or in the frame of an external tester. In BIST, the original circuit is modified by
inserting some test control logic that provides for oscillation during test mode. In the
external tester, the oscillation is achieved by an external feedback loop network that
is normally implemented as part of a dedicated tester.
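The frequency-comparison step at the heart of OBT reduces to a simple tolerance check. The sketch below is illustrative; the 2 per cent band is an arbitrary example value, not a figure from the text.

```python
def obt_check(f_measured, f_reference, tolerance=0.02):
    """Compare the measured oscillation frequency with the fault-free
    reference; a deviation beyond the tolerance band flags a fault."""
    deviation = abs(f_measured - f_reference) / f_reference
    return deviation > tolerance   # True means the circuit is declared faulty

# A 5 per cent frequency shift exceeds the 2 per cent band:
print(obt_check(10.5e3, 10.0e3))  # True
```

The amount by which the deviation exceeds the band can then be used, as discussed below, to distinguish catastrophic from parametric faults.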
An ideal quadrature oscillator consists of two lossless integrators (inverting and
non-inverting) cascaded in a loop, resulting in a characteristic equation with a pair of
roots lying on the imaginary axis of the complex frequency plane. In practice, however, parasitics may cause the roots to be inside the left half of the complex frequency
plane, hence preventing the oscillation from starting. Any practical oscillator must
be designed to have its poles initially located inside the right-half complex frequency
plane in order to assure self-starting oscillation. Most of the existing theory for sinusoidal oscillator analysis [26] models the oscillator structure with a basic feedback
loop. The feedback loop may be positive, negative or a combination of both. The
quadrature oscillator model can ideally be described by a second-order characteristic
equation:
(s^2 - bs + ω0^2) V0(s) = 0   (6.12)
(6.13)
6.4.1
The most popular method for high-order filter design is the cascade method due to
its simplicity of design and modularity of structure. Second-order filters are the basic
sections in the cascade structures. Therefore, in this section, we briefly discuss the
oscillation-based DfT techniques for second-order active-RC filters [12]. Filter to
oscillator transformation methods, using only MOS switches, for KHN state-variable
biquad, TT biquad and Sallen-Key filters are presented and discussed.
6.4.1.1 KHN state-variable filter
The modified form of the KHN state-variable filter is shown in Figure 6.13. It can be
used for simultaneous realization of lowpass, bandpass and highpass characteristics.
All three filters have the same poles. Only one extra MOS switch is inserted in
the original KHN filter and the modified KHN performs the same functions with
negligible pole frequency movement. In the normal mode of operation, the control
switch designated as S1 is closed. The oscillation output may be taken from any of
the filter nodes; there may be reasons for the output to be taken from the lowpass,
bandpass or highpass nodes.
The transfer function of the lowpass filter at node V3 can be described by

V3(s)/Vin(s) = K1 (1/R1R2C1C2) / [s^2 + K1 ((R5/R6)/R1C1) s + ((R4/R3)/R1R2C1C2)]   (6.15)
Figure 6.13 Modified KHN state-variable filter (circuit diagram; component labels omitted)
and the frequency of the pole and the quality factor are given by
ω0 = √(R4 / (R3 R1 R2 C1 C2))   (6.16)

1/Q = K1 (R5/R6) √(R3 R2 C2 / (R4 R1 C1))   (6.17)
6.4.1.2 The TT filter

Figure 6.14 Modified TT filter (circuit diagram; component labels omitted)
The frequency of the pole and the quality factor are given by the expressions:

ω0 = √(R6 / (R2 R3 R5 C1 C2)),   1/Q = (1/R1) √(R2 R3 C2 / C1)   (6.18)
It is clear from Equation (6.18) that both the Q factor and pole frequency ω0 can be independently adjusted. From the above expressions we can see that the condition for oscillation, Q → ∞, without affecting ω0 will be satisfied if R1 → ∞. This is realized by inserting switch S1 to disconnect R1 from the circuit. In the test mode the filter will oscillate at resonance frequency ω0. Deviations in the oscillation frequency with
respect to the resonance frequency indicate faulty behaviour of the circuit. The amount
of frequency deviation will determine the possible type of fault, either catastrophic
or parametric, as well as the specific location where the fault has occurred.
6.4.1.3 The Sallen-Key filter
The Sallen-Key filter is one of the most popular second-order filters [18]. It is shown
in Figure 6.15. It employs an opamp arranged as a VCVS with gain K and an RC
network.
Its transfer function is given by

Vout(s)/Vin(s) = (K/R1R2C1C2) / [s^2 + ((1/R2C2) + (1/R1C1) + (1/R2C1) - (K/R2C2)) s + (1/R1R2C1C2)]   (6.19)
The quality factor of the filter is given by

1/Q = √(R2C2/R1C1) + √(R1C2/R2C1) + (1 - K) √(R1C1/R2C2)   (6.20)
Figure 6.15 The Sallen-Key filter (circuit diagram; component labels omitted)
where the amplifier gain K is equal to 1 + (RB/RA). We can put the Sallen-Key filter into oscillation by substituting 1/Q = 0 in Equation (6.20). As a result, we get amplifier gain K = (R2C2 + R1C2 + R1C1)/R1C1. Some external control of the value of RB/RA must be provided to obtain the required value of K in test mode. Note, however, that even when the passive elements are in perfect adjustment, the finite bandwidth of a real amplifier causes dissimilar effects on the pole and zero positions. We can also put the Sallen-Key filter into oscillation by adding a feedback loop containing a high-gain inverter [12].
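The required test-mode gain follows directly from the expression for K above. A small sketch, with purely illustrative component values:

```python
def sallen_key_oscillation_gain(R1, R2, C1, C2):
    """Amplifier gain K that makes 1/Q = 0 in Equation (6.20):
    K = (R2*C2 + R1*C2 + R1*C1) / (R1*C1)."""
    return (R2 * C2 + R1 * C2 + R1 * C1) / (R1 * C1)

# With equal resistors and capacitors the condition reduces to K = 3,
# so the test mode must set RB/RA = K - 1 = 2.
print(round(sallen_key_oscillation_gain(10e3, 10e3, 10e-9, 10e-9), 6))  # 3.0
```

Since K = 1 + RB/RA, the test-control circuitry only needs to switch the feedback resistor ratio to K - 1.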
6.4.2
In this section we will present techniques for converting OTA-C filters into oscillators using MOS switches. The conversion methods for two-integrator loop, TT and KHN OTA-C filters are proposed and discussed.
6.4.2.1 Two-integrator loop OTA-C filter
Two-integrator loop OTA-C filters are a very popular category of filters that have
very low sensitivity and can be used alone or as a section in a high-order cascade
filter design [25, 27]. A second- or higher-order system for any type of OTA-C filter
has the potential for oscillation. This ability can be used to convert the FUT into an
oscillator by establishing the oscillation condition in its transfer function using the
strategy shown in Figure 6.16.
In the normal filter mode, the switch S1 is open and M1 and M2 are open-circuited,
but M3 short-circuited. The transfer function of the lowpass second-order filter can
be derived as
H(s) =
(6.21)
Figure 6.16 Two-integrator loop OTA-C filter with oscillation-based DfT (circuit diagram; component labels omitted)
(6.22)
(6.23)
To put the network into oscillation with constant amplitude, the poles must be placed
on the imaginary j axis. By closing the switch S1 , the filter network will be converted
into an oscillator, as M1 and M2 are now short-circuited and M3 open-circuited. The
characteristic equation of the resulting oscillator can be described as
s^2 + gm1 gm2 / (C1 C2) = 0   (6.24)

with the poles given by

s1, s2 = ±j √(gm1 gm2 / (C1 C2))   (6.25)
(6.27)
Figure 6.17 Modified TT OTA-C filter (circuit diagram; component labels omitted)
Q = (1/gm3) √(gm1 gm2 C2 / C1)   (6.28)
To put the TT filter into oscillation with constant amplitude the quality factor must be infinite. The network will then oscillate at resonant frequency ω0 if the quality factor Q → ∞. By closing the switch S1, M1 is short-circuited and M2 open-circuited; the filter network will be converted into an oscillator and the poles are given by

s1, s2 = ±j √(gm1 gm2 / (C1 C2))   (6.29)
From Equation (6.28) we can see that the condition for oscillation will be satisfied if
gm3 = 0, without affecting the resonant frequency. In Figure 6.17 this can be realized
by switching off the gm3 OTA.
6.4.2.3 KHN OTA-C filter
The filter in Figure 6.18 is the OTA-C equivalent of the KHN active-RC biquad,
in which the two feedback paths share a single OTA resistor. The KHN OTA-C
filter can simultaneously perform lowpass, bandpass and highpass functions. The
implementation of oscillation-based DfT requires the addition of only two extra MOS switches to the original circuit. This modified KHN performs the same functions with negligible
pole frequency movement.
The lowpass transfer function is given by
VLP/Vin = (gm1 gm2 / C1 C2) / [s^2 + (gm1 gm3 / gm5 C1) s + (gm1 gm2 gm4 / gm5 C1 C2)]   (6.30)
The cut-off frequency ω0 and the quality factor Q are given by

ω0 = √(gm4 gm1 gm2 / (gm5 C1 C2))   (6.31)

Q = (1/gm3) √(gm2 gm4 gm5 C1 / (gm1 C2))   (6.32)
Figure 6.18 Modified KHN OTA-C filter (circuit diagram; component labels omitted)
Equations (6.31) and (6.32) show that we can change the cut-off frequency ω0 and quality factor Q of the filter independently. The KHN filter will oscillate at resonant frequency ω0 if the quality factor Q → ∞. The condition of oscillation will be satisfied by substituting gm3 = 0 in Equation (6.32). By closing the switch S1, M1 is short-circuited and M2 open-circuited; the filter network is converted into an oscillator and the oscillator frequency is the resonant frequency of the filter.
6.4.3
Vout(z)/Vin(z) = (a2 z^2 + a1 z + a0) / (z^2 + b1 z + b0)   (6.33)
The coefficients of the equation depend upon the particular type of the filter. They are
related to the normalized capacitors in the SC biquad in Figure 6.19 as follows:
a2 = C01 + (C05 + C06)(C07 + C08) / (1 + C09)   (6.34)
Figure 6.19 Two-integrator loop SC biquad (circuit diagram; component labels omitted)
Figure 6.20 Filter with a non-linear block N(A) in the feedback loop (block diagram; labels omitted)
To convert the biquadratic filter into an oscillator requires a circuit to force a displacement of a pair of poles to the unit circle. A non-linear block in the filter feedback
loop [26, 34] can generate self-sustained robust oscillations. The oscillation condition
and approximation for the frequency and amplitude of the resulting oscillator for the
system in Figure 6.20 would be determined by the roots of
1 - N(A) H(z) = 0   (6.39)
where N(A) represents the transfer function of the non-linear block as a function of
the amplitude A of the first harmonic of its input. We consider the non-linear function
formed by an analogue comparator providing one of the two voltages ±V, as shown in Figure 6.20. The characteristic equation of the resulting system can be written as
z^2 - 2r cos(θ) z + r^2 = 0   (6.41)

where

2r cos(θ) = -(b1 - a1 N(A)) / (1 - a2 N(A))   (6.42)

r^2 = (b0 - a0 N(A)) / (1 - a2 N(A))   (6.43)
The above equations mean that the poles are on the unit circle and the oscillation amplitude is stable. The amplitude A0 and frequency ω0 of oscillation are given by

A0 = 4|V|(a0 - a2) / (π(b0 - 1))   (6.44)

ω0 = fs arccos[ -(1/2)(b1 - a1 N(A0)) / (1 - a2 N(A0)) ]   (6.45)
The integrator-based second-order SC filter in Figure 6.19 will be converted into an
oscillator if at least one of the transfer functions at the outputs belongs to the set of
the functions fulfilling the required conditions in Equation (6.43). An important fact
derived from Equation (6.44) is that the amplitude of oscillation can be controlled
by varying the voltage V . Therefore, we can select the amplitude to achieve the best
testing conditions for the biquad filter.
The OBT and diagnosis procedure using the structure shown in Figure 6.20 for
two-integrator loop SC biquad is described as follows. The OBT divides the FUT
into two modes of operation, filter mode and test mode. The system is first tested in
filter mode. Then in test mode the filter is converted into a quadrature oscillator and
the frequency of oscillation is evaluated. Deviations in the oscillation frequency with
respect to the nominal value given by Equation (6.45) indicate faulty behaviour of the
FUT. The amount of frequency deviation will determine the possible type of fault,
either catastrophic or parametric, as well as the specific location where the fault has
occurred.
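The predicted oscillation parameters can be computed directly from the biquad coefficients. The sketch below assumes the standard comparator describing function N(A) = 4V/(πA) together with the unit-circle conditions of Equations (6.41)-(6.43); the coefficient values in the usage line are purely illustrative.

```python
import math

def sc_obt_oscillation(a0, a1, a2, b0, b1, V, fs):
    """Oscillation amplitude and frequency for the SC biquad with a
    comparator (+/-V) in the feedback loop (cf. Equations (6.44)-(6.45))."""
    # Amplitude from the unit-circle condition r^2 = 1 with N(A) = 4V/(pi*A):
    A0 = 4 * V * (a0 - a2) / (math.pi * (b0 - 1))
    N = 4 * V / (math.pi * A0)              # describing function value at A0
    # Pole angle theta from 2r*cos(theta) with r = 1; f0 = theta*fs/(2*pi):
    theta = math.acos(-0.5 * (b1 - a1 * N) / (1 - a2 * N))
    return A0, theta * fs / (2 * math.pi)

A0, f0 = sc_obt_oscillation(a0=0.2, a1=0.0, a2=0.1, b0=1.5, b1=-0.5,
                            V=1.0, fs=1e6)
```

The measured oscillation frequency of the FUT is then compared with f0, exactly as in the procedure described above.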
6.5
Two main approaches are found in the literature [16, 23] for the realization of high-order filters. The first is to cascade second-order stages, either without feedback (cascade filter) or with negative feedback applied around the cascade, giving multiple loop feedback (MLF) structures.
6.5.1
The testing of analogue systems normally deals with the verification of the functional
and electrical behaviour of the system under test. The verification process requires
measuring the output signal with respect to the input test signal at several internal
nodes. However, in integrated systems, access to the deep internal input and output nodes is severely limited due to the limited number of chip I/O pins. Several
approaches have been reported [6-14] to enhance external access to deep input and
output nodes of the system. The two basic techniques, namely, multiplexing and
bypassing have been commonly used in digital systems in recent decades. These
techniques are equally applicable to analogue systems, specifically for high-order
filters. There are two major issues related to the accessibility of internal nodes:
1. The isolation of the node from the rest of the system before applying the test
stimulus.
2. The effect on performance of the original system due to the insertion of external
switches for controllability and observability of the subsystem.
In bypass techniques, the internal node is made accessible at the primary input
and output by reconfiguration of all the stages as buffer stages except the stage under
test. The two bypassing approaches, bandwidth broadening and duplicated/switched
opamp, have been discussed for low-order filters in Sections 6.2.1 and 6.2.2. The
switched opamp bypass technique has some advantages over bandwidth broadening
that make it more efficient in the fault diagnosis of high-order filters [5]. The switched
opamp cell avoids the back-driving effect and can reduce the impact of the extra components.

Figure 6.21 Bypass scheme using switched opamps for an nth-order filter (block diagram; labels omitted)

The basic block for a switched opamp is illustrated in Figure 6.7. It has
two operation modes defined by a digital mode control input T/F. At logic zero, the opamp operates normally and the circuit under test behaves as a filter with very small performance degradation. When T/F has a value of one, the analogue block acts as a buffer, passing the input signal to the output of the block. The implementation of the
bypass scheme using switched opamps for the nth-order filter is shown in Figure 6.21.
The testable nth-order filter based on switched opamps is easily divided into
separate analogue blocks, each block being a first- or second-order filter. To test
the ith block, all blocks except the block under test (BUT) are put into test mode,
operating as buffers. The test signal at the input of the system enters the input node
of the BUT via the buffer stages and the output node of BUT is then observed at
the primary output of the system through subsequent buffer stages. The only block
operating as a first- or second-order filter is the BUT. Therefore, the input to the BUT
will be equal to the primary input of the filter, that is
Vi-1 = Vi-2 = ... = Vin   (6.46)

where i = 1, 2, 3, ..., n.
The output of the BUT is equal to the primary output of the filter:
Vi = Vi+1 = ... = Vout   (6.47)
In Figure 6.21, although we did not show the coupling and feedback between different
stages, the test method is also suitable for MLF structures.
6.5.2
The cascade connection of second-order sections is the most popular and useful
method for realization of high-order filter function. The testing of the cascade system requires the controllability and observability of internal nodes of the filter.
The controllability and observability can be increased by partitioning the system into accessible blocks. The cascade filter structure can be easily divided into
the blocks of second-order sections representing biquadratic transfer functions.
The programmable-biquad-based DfT architecture of a cascade filter is shown in
Figure 6.22. An analogue multiplexer (AMUX) can be used to select those biquads
with a minimum impact on normal filter operation. The input test signal is applied
simultaneously to the selected biquad and a programmable biquad [7, 15, 33, 34].
Figure 6.22 Programmable-biquad-based DfT architecture of a cascade filter (block diagram showing the biquads, switches, AMUX, control logic, programmable biquad, comparator and error signal; labels omitted)
The control logic will programme the programmable biquad with the same transfer
function as the biquad under test. A programmable biquad is a universal biquadratic
section that can implement any of the basic filter types by electrical programming.
The comparator circuit compares the responses of the biquads to generate an error
signal. The system biquad will be considered fault free if the error signal lies inside
the comparison window.
The testable cascade filter structure in Figure 6.22 consists of the FUT, programmable biquad, comparator and control logic. The input multiplexer Si1 to Sin
connects the different biquad inputs to the programmable biquad input and the output
multiplexer, So1 to Son , connects the output node of each biquad to the comparator.
A set of switches, Sc2 to Scn , connect and disconnect each biquad output to the next
biquad input. An additional set of switches, St2 to Stn , act as a demultiplexer able to
distribute the input signal to the different biquad input nodes. The control logic is a
finite sequential machine that controls the operational modes as well as configuring
the programmable biquad according to the requirements of the biquad under test.
The DfT procedure has two operating modes, normal/filter mode and test mode. The
test mode of operation is further divided into two sub-modes, online test and offline
test. In online test mode, testing of the selected biquad is carried out during normal
operation of the filter, using normal signals rather than signals generated specifically
for testing. When working in online test mode, the control logic can connect the input
of any biquad in the cascade to the programmable biquad's input as well as the same biquad's output to the comparator input. The control logic also programmes the programmable biquad to implement the same transfer function as the selected biquad.
We can perform functional comparison between the outputs of the selected biquad
and the programmed biquad with a margin range. If the selected biquad is fault free
then the comparator output will lie between the given tolerance limits, since the same
input signal is applied to the input of both biquads.
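The window comparison at the comparator can be sketched as follows; the window width used here is illustrative only.

```python
def within_window(v_biquad, v_reference, window=0.010):
    """Online test: the biquad under test is declared fault free if the
    error between its output and the programmable biquad's output stays
    inside the comparison window."""
    return abs(v_biquad - v_reference) <= window

# A 3 mV error against a 10 mV comparison window passes:
print(within_window(1.203, 1.200))  # True
```

In hardware this function corresponds to a window comparator; the tolerance is chosen to accommodate normal component variations of the fault-free circuit.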
When the offline test mode is invoked, switches Scj split the filter into biquad stages
and the input is selectively applied to one of them and to the programmable biquad.
The control logic connects the output of the biquad under test to the comparator and the comparator compares this output to the programmable biquad output for the same input signal. The error signal from the comparator output will indicate the faulty biquad.
6.5.3
The DfT based on multiplexing can be directly applied to high-order MLF OTA-C
filters. Testability of a filter is defined as the controllability and observability of significant waveforms within the multi-stage filter structure. The significant waveforms
are the input/output signals of individual stages in the high-order filter configurations. The high-order filter can be divided into integrators using extra MOS switches
between input and output of two consecutive stages. The responses from each stage
of an analogue multi-stage filter are compared with correct response, to determine
whether it is faulty or fault free. The implementation of the multiplexing-based DfT
method requires the following modifications to the original filter and test steps:
1. Insert MOS switches to stage i, 1 ≤ i ≤ n, and define the controllable waveforms necessary in both the normal and test modes.
2. Input/test signal is connected to the filter stages through a demultiplexer and
the output of the respective stage or the filter is observed through a multiplexer.
3. Set the MOS switches to logic 1 and logic 0 for normal and test mode
respectively.
4. The overall circuit topology and transfer function are used to generate the
necessary test waveforms to test each stage. A simulation tool then provides
the expected output waveforms, gain and phase.
5. Interpret the simulated results to recognize and isolate the fault.
The value of MOS switch ON resistance is chosen such that it does not affect
the performance of the original filter. The test methodology is straightforward. The
modified circuit is first tested in normal mode. If any malfunction or failure occurs,
the test mode is activated and all the individual stages are tested one by one to isolate
the faulty stage. Then the faulty stage must be further investigated to locate the fault.
The general MLF OTA-C filter is shown in Figure 6.23. The MLF OTA-C filter is composed of a feed-forward network of integrators connected in cascade
and a feedback network that contains pure wire connections only for canonical
realizations [28].
The feedback network may be described as
Vfi = Σ (j = i to n) fij Voj   (6.48)
Figure 6.23 General MLF OTA-C filter (block diagram; labels omitted)

Figure 6.24 Modified MLF OTA-C filter using the multiplexing DfT technique (block diagram; labels omitted)
where fij is the voltage feedback coefficient from the output of integrator j to the
input of integrator i. The feedback coefficient fij can have zero or non-zero values
depending upon the feedback. Equation (6.48) can be written in the matrix form.
[Vf] = [F][Vo]   (6.49)

where [Vo] = [Vo1 Vo2 ... Von]^t is the vector of integrator output voltages, [Vf] = [Vf1 Vf2 ... Vfn]^t is the vector of feedback voltages to the inverting input terminals of the integrators, and [F] = [fij] is the n × n feedback coefficient matrix. The different feedback coefficients will result in different filter structures. Thus, the feedback network classifies the filter structures.
The modified MLF OTA-C filter using the multiplexing DfT technique is shown
in Figure 6.24. The operation of the modified MLF filter, in normal and test mode is
given in Table 6.4.
In normal operating mode, control switches S2 are closed and S1 are opened, while
the address pins from A0 to Am are at level 0. The fault-free circuit will perform the
Table 6.4 Operation of the modified MLF OTA-C filter

S1   S2   Am ... A1 A0   Mode     Operation
0    1    0  ... 0  0    Normal   Filter
1    0    0  ... 0  1    Test     Integrator T1
1    0    0  ... 1  0    Test     Integrator T2
...
1    0    1  ... 1  1    Test     Integrator Tn
same function as the original filter. In cases where filter performance does not meet
specification, the OTA-C stages must be investigated separately. The fault diagnosis
method involves the following steps:
1. Activate the test mode of operation by closing switches S1 and opening switches S2, and set the multiplexer address inputs A0, . . . , Am to select an OTA-C stage for testing.
2. Apply the input test signal to the selected OTA-C stage through the analogue demultiplexer.
3. Observe the output of the selected OTA-C stage at the output terminal of the
AMUX. The function of each individual OTA-C stage is an ideal integrator.
The voltage transfer function of the stage can be defined as
H(s) = gmi / sCi   (6.50)
where i is the number of the OTA-C stages and gmi and Ci are the transconductance and capacitance of the related OTA and capacitor. Therefore, the
output of the fault-free OTA-C stage will be a triangular wave in response to a
square-wave input.
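This expected response is easy to verify numerically: integrating a square wave according to Equation (6.50) produces a triangle whose peak equals (gmi/Ci) times the input amplitude times half the period. The component and simulation values below are illustrative only.

```python
# Ideal OTA-C integrator stage: Vout(t) = (gmi/Ci) * integral of Vin dt.
gmi, Ci = 100e-6, 10e-12          # 100 uS and 10 pF (illustrative values)
fs, amp = 100e6, 0.1              # sample rate and square-wave amplitude
n = 100                           # samples per square-wave period

square = [amp if i < n // 2 else -amp for i in range(n)]
vout, acc = [], 0.0
for v in square:
    acc += v / fs                 # running time integral of Vin
    vout.append(gmi / Ci * acc)

# The fault-free output is a triangle peaking at (gmi/Ci)*amp*(T/2):
expected_peak = gmi / Ci * amp * (n // 2) / fs
print(abs(max(vout) - expected_peak) < 1e-9)  # True
```

A faulty stage would show a distorted or clipped waveform instead of the clean triangle, which is what the simulated reference in step 4 above is compared against.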
6.5.4
OBT structures for high-order OTA-C filters are based on the decomposition of the
filter into functional building blocks. The partitioning of the filter should be made
such that each individual block represents a biquadratic transfer function. Then these
blocks can easily be converted into oscillators by establishing the oscillation condition
in their transfer functions. During test mode operation, each block will oscillate at
a frequency that is a function of its component values and transconductance of the
OTAs. Deviations in the oscillation frequency from the expected frequency indicate
faulty behaviour of the components in the block. The sensitivity of the oscillation
frequency with respect to the variations of the component parameters will determine
the detectable range of the fault.
Figure 6.25 Oscillation-based DfT implementation for high-order OTA-C filters, panels (a)-(c); each modified filter drives a frequency counter (circuit diagrams; labels omitted)
Commonly used design approaches for high-order OTA-C filters are based on
cascade and MLF structures. Choice of the feedback network can result in the cascade,
inverse follow-the-leader feedback (IFLF) and leap-frog (LF) configurations [29].
These types of multi-stage (high-order) OTA-C filter structures can be easily modified
to implement the oscillation-based DfT technique as shown in Figure 6.25, where Sn
and Sp are the NMOS and PMOS transistor switches respectively.
Implementation of the oscillation-based DfT method requires the following
modifications to the original filter:
1. Decomposition of the filter into the biquadratic stages.
2. Isolation of biquadratic stages from each other.
The ON resistance of a MOS switch can be expressed as

Ron = L / (kW(VGS - VT))   (6.51)

where W and L are the channel width and length respectively, VGS and VT are the gate-source bias voltage and threshold voltage respectively, k = μn Cox, and where μn is the electron mobility and Cox is the oxide capacitance per unit area.
A larger aspect ratio will reduce the series resistance. However, the parasitic
capacitance is approximately proportional to the product of width and length. Therefore, choosing an optimum aspect ratio and a sensible point in the signal paths for
switch insertion will ensure a minimal impact on the performance of the filter. The
modified filter circuits shown in Figure 6.25 require two types of switches; switches
in the signal path to divide the filter into biquadratic blocks and switches in the feedback path to establish oscillation conditions. The switches in the signal path must be
realized using MOS transistors with minimum values of the ON resistance, whereas
the other switches can be designed for minimum size.
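The sizing trade-off described above can be illustrated with Equation (6.51); the process parameters in the sketch below are illustrative, not values from the text.

```python
def mos_on_resistance(W, L, k, vgs, vt):
    """MOS switch ON resistance, Ron = L / (k * W * (VGS - VT))."""
    return L / (k * W * (vgs - vt))

# Illustrative values: k = 170 uA/V^2, VT = 0.5 V, VGS = 3.3 V, L = 0.35 um.
r_narrow = mos_on_resistance(1e-6, 0.35e-6, 170e-6, 3.3, 0.5)
r_wide = mos_on_resistance(10e-6, 0.35e-6, 170e-6, 3.3, 0.5)
print(r_wide < r_narrow)  # True: a ten-times wider switch has one-tenth the Ron
```

The parasitic capacitance, however, grows with the device area, so the signal-path switches are sized for low Ron while the feedback-path switches can be kept at minimum size.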
The modified filter circuit has two modes of operation, normal and test mode. In normal mode of operation all switches designated Sp are closed whereas the switches
designated Sn are open and the circuit will perform the original filter functions. When
the test mode is invoked Sp switches are opened and Sn switches closed. Switches
Sp split the filter into biquad stages and switches Sn convert these biquad stages into
oscillators. The oscillation frequency of the oscillator is then
ω0i = √(gi gi+1 / (Ci Ci+1)),   i odd, i = 1, 3, 5, . . . , n - 1   (6.52)
where n is the order of the filter and is even. When n is odd, the last integrator can be combined with the (n-1)th integrator to form an oscillator, although the (n-1)th integrator has already been tested. The condition of oscillation for two-integrator loop
biquadratic filters is discussed in Section 6.4.
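The fault-isolation step amounts to comparing each measured oscillator frequency with Equation (6.52). A minimal sketch; the function names, component values and tolerance band are illustrative only.

```python
import math

def stage_frequencies(gm, C):
    """Expected oscillation frequency of each biquad oscillator,
    w0i = sqrt(g_i * g_{i+1} / (C_i * C_{i+1})), i = 1, 3, 5, ...
    Lists gm and C hold stage 1 at index 0."""
    return {i + 1: math.sqrt(gm[i] * gm[i + 1] / (C[i] * C[i + 1]))
            for i in range(0, len(gm) - 1, 2)}

def faulty_stages(measured, expected, tolerance=0.02):
    """Stages whose measured frequency deviates beyond the tolerance band."""
    return [i for i in expected
            if abs(measured[i] - expected[i]) / expected[i] > tolerance]

# Fourth-order filter with identical stages; stage 3 oscillates 10% high:
expected = stage_frequencies([100e-6] * 4, [10e-12] * 4)
print(faulty_stages({1: 1.00e7, 3: 1.10e7}, expected))  # [3]
```

The size of the deviation then indicates whether the fault in the flagged stage is catastrophic or parametric, as discussed above.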
The test and diagnosis procedure of OBT is straightforward. The FUT is first
tested in normal mode and the cut-off frequency of the FUT measured. The test mode
will be activated if the cut-off frequency deviates beyond the given tolerance band. In
test mode, the high-order filter is decomposed into individual biquad oscillators and
individual oscillator frequencies are measured to isolate the faulty stage. Comparison
between the frequency evaluated from Equation (6.52) and the measured frequency of
the corresponding oscillator stage identifies the faulty stage of the FUT. The amount of frequency deviation will determine the possible type of fault, either catastrophic or parametric.
The area overheads of the DfT switches are

Overhead = (nAn + (n - 1)Ap) / A × 100%   (6.53)

Overhead for IFLF = ((n + 1)An + nAp) / A × 100%   (6.54)

where A is the original circuit area, An is the area of switch Sn and Ap is the area of switch Sp.
6.6
Summary
This chapter has been concerned with DfT of, and test techniques for, analogue
integrated filters. Many different testable filter structures have been presented. Typical
DfT techniques, such as bypassing, multiplexing and OBT have been discussed. Most
popular filters such as active-RC, OTA-C and SC filters have been covered. Testing of low-order and high-order filters has been addressed. DfT of OTA-C filters has been investigated in particular, because this topic has not been as well studied as the testing of active-RC and SC filters. Many of the test concepts, structures and methods
described in the chapter are also suitable for other analogue circuits, although they
may be most useful for analogue filters as demonstrated in the chapter.
6.7
References
1 Wey, C.L.: 'Built-in self-test structure for analogue circuit fault diagnosis', IEEE Transactions on Instrumentation and Measurement, 1990;39(3):517-21
2 Soma, M.: 'A design-for-test methodology for active analogue filters', Proceedings of IEEE International Test Conference, Washington, DC, September 1990, pp. 183-92
3 Soma, M., Kolarik, V.: 'A design-for-test technique for switched-capacitor filters', Proceedings of IEEE VLSI Test Symposium, Princeton, NJ, April 1994, pp. 42-7
4 Vazquez, D., Rueda, A., Huertas, J.L., Richardson, A.M.D.: 'Practical DfT strategy for fault diagnosis in active analogue filters', Electronics Letters, July 1995;31(15):1221-2
Chapter 7
7.1
Introduction
7.2
A/D conversion
Figure 7.1 Ideal A/D converter transfer characteristic with code transition levels T[k], code bin widths W[k] and code bin centres on the representational ideal straight line (diagram labels omitted)

Figure 7.2 Quantization error of an ideal A/D converter over the full-scale range (diagram labels omitted)
Q = FS / 2^N = (Vmax - Vmin) / 2^N = 1 LSB   (7.1)
The ideal code bin width Q, usually given in volts, may also be given as a percentage
of the full-scale range. By standard convention, the first code bin starts at voltage
Vmin and is numbered as 0, followed by the first code transition level T [1] to code
bin 1, up to the last code transition level T[2^N - 1] to the highest code bin [2^N - 1],
which reaches the maximum converter input voltage Vmax [3]. In the ideal case, all
code bin centres fall onto a straight line with equidistant code transition levels, as
illustrated in Figure 7.1. The analogue equivalent of a digital A/D converter output
code k corresponds to the location of the particular ideal code bin centre Vk on the
horizontal axis.
The quantization process itself introduces an error corresponding to the difference
between the A/D converters analogue input and the equivalent analogue value of its
output, which is depicted
over the full-scale range in Figure 7.2. With a root mean square (RMS) value of Q/√12 for a uniform probability distribution between -Q/2 and Q/2, and an RMS value of FS/(2√2) for a full-scale input sine wave, the
ideal or theoretical signal-to-noise ratio (SNR) for an N-bit converter can be given in
decibels as
2
2
(FS/2 2)
12
= 10 log10 2N
SNRideal = 10 log10
8
(Q/ 12)2
12
N
= 20 log10 [2 ] + 20 log10
= 6.02N + 1.76
(7.2)
8
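As an illustrative numerical check (not part of the original text), Equations (7.1) and (7.2) can be evaluated directly; the resolution and range values below are arbitrary examples:

```python
import math

def ideal_code_bin_width(v_min, v_max, n_bits):
    # Q = (Vmax - Vmin) / 2^N, Equation (7.1)
    return (v_max - v_min) / 2 ** n_bits

def ideal_snr_db(n_bits):
    # Ideal SNR for a full-scale sine input, Equation (7.2):
    # signal RMS = FS / (2*sqrt(2)), quantization-noise RMS = Q / sqrt(12)
    fs = 1.0                              # full-scale range (normalized)
    q = fs / 2 ** n_bits                  # ideal code bin width
    signal_rms = fs / (2 * math.sqrt(2))
    noise_rms = q / math.sqrt(12)
    return 20 * math.log10(signal_rms / noise_rms)

print(ideal_code_bin_width(0.0, 2.0, 8))   # Q of a 2 V, 8-bit converter: 7.8125 mV
print(ideal_snr_db(12))                    # close to 6.02 * 12 + 1.76 = 74.0 dB
```

Evaluating the exact power ratio and the 6.02N + 1.76 approximation side by side shows they agree to within about 0.01 dB.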
For real A/D converters, further errors affect the conversion accuracy and converter
performance. The following sections will introduce the main static and dynamic
performance parameters that are usually verified to meet the specifications in production testing. Standardized performance parameters associated with the A/D converter
transient response and frequency response can be found in Reference 3.
7.2.1
Apart from the systematic quantization error due to finite converter resolution, A/D converters have further static errors, mainly due to deviations of the transition levels from the ideal case, and are affected by internally and externally generated noise. One set of characteristic parameters that can indicate conversion errors is the real code bin widths. A particular code bin width, W[k], can be determined from its adjacent code transition levels T[k] and T[k + 1], as indicated in Figure 7.1:

W[k] = T[k + 1] − T[k]   for 1 ≤ k ≤ 2^N − 2   (7.3)
where code transition level T [k] corresponds to the analogue input voltage where half
the digital outputs are greater than or equal to code k, while the other half are below
code k.
In addition to the assessment of converter performance from transition levels and
code bin widths, the real transfer function may also be approximated by a straight line
for comparison with the ideal case. The straight line can be determined through a linear regression computation where the regions close to the upper and lower end of the
transfer function are ignored to avoid data corruption due to overdriving the converter
(input voltage exceeds the real full-scale range). The following main static performance parameters are introduced and described below: gain and offset, differential
non-linearity (DNL) and integral non-linearity (INL).
The basic effect of offset in A/D converters is frequently described as a uniform lateral displacement of the transfer function, while a deviation from ideal gain corresponds to a difference in the transfer function's slope after offset compensation. With regard to performance verification and test, offset and gain can be defined as two parameters, VOS and G, in a straight-line fit for the real code transition levels, as given on the left-hand side of Equation (7.4) [3, 4]. The values for offset and gain can be determined through an optimization procedure aiming at minimum matching error ε[k] between the gain and offset adjusted real transition levels and the ideal values (right-hand side of Equation (7.4)):

G · T[k] + VOS + ε[k] = Q · (k − 1) + T[1]ideal   for 1 ≤ k ≤ 2^N − 1   (7.4)

where G is the gain, VOS the offset, Q the ideal code bin width, T[1]ideal the ideal first transition level and T[k] the real transition level between codes k and (k − 1). The value for VOS corresponds to the analogue equivalent of the offset effect observed at the output.
However, different optimization techniques yield slightly different values for offset, gain and the remaining matching error. For example, the matching may be achieved through mean-squared-value minimization of ε[k] for all k [3]; alternatively, the maximum of the matching errors may be reduced. Simpler offset and gain measurements are often based on targeting an exact match in Equation (7.4) for the first and last code transition levels, T[1] and T[2^N − 1] (ε[1] and ε[2^N − 1] equal to zero), referred to as terminal-based offset and gain. An example for this case is illustrated in Figure 7.3. An alternative methodology is to employ the straight-line approximation of the real transfer function mentioned above. Offset and gain values are then determined through matching this real straight line with the ideal straight line, which again can deviate slightly from the optimization process results [3].

Figure 7.3  Real and ideal transfer functions between T[1] and T[2^N − 1] over the input range Vmin to Vmax, illustrating terminal-based offset and gain, a differential non-linearity DNL[m] at code m and an integral non-linearity INL[n] at code n
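As a sketch of how such a fit might be computed (an illustration under my own naming conventions, not the standard's prescribed procedure), the two parameters G and VOS of Equation (7.4) follow from an ordinary least-squares solution of the 2-by-2 normal equations:

```python
def fit_gain_offset(t_real, q, t1_ideal):
    """Least-squares fit of Equation (7.4):
    G*T[k] + VOS + eps[k] = Q*(k - 1) + T[1]ideal,
    minimizing the sum of eps[k]^2 over G and VOS.
    t_real[k - 1] holds the measured transition level T[k].
    """
    n = len(t_real)
    ideal = [q * k + t1_ideal for k in range(n)]   # right-hand side for k = 1..n
    sx = sum(t_real)
    sy = sum(ideal)
    sxx = sum(t * t for t in t_real)
    sxy = sum(t * y for t, y in zip(t_real, ideal))
    det = n * sxx - sx * sx
    gain = (n * sxy - sx * sy) / det
    offset = (sxx * sy - sx * sxy) / det
    # remaining matching errors eps[k] = ideal - (G*T[k] + VOS)
    eps = [y - gain * t - offset for t, y in zip(t_real, ideal)]
    return gain, offset, eps
```

With noiseless synthetic transition levels the fit recovers the parameters exactly; with real measurements the residuals eps[k] are exactly the matching errors that feed the INL definition of Equation (7.6).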
Differential non-linearity is a measure of the deviation of the gain and offset corrected real code widths from the ideal value. DNL values are given in LSBs for the codes 1 to (2^N − 2) as a function of k as

DNL[k] = (W[k] − Q) / Q   for 1 ≤ k ≤ 2^N − 2   (7.5)

where W[k] is the width of code k determined from the gain and offset corrected code transition levels as given in Equation (7.3) and Q is the ideal code bin width. Note that neither the real code bin widths nor the ideal value are defined at either end of the transfer function. As an example, a DNL of approximately +1/4 LSB in code m is included in Figure 7.3. The absolute or maximum DNL corresponds to the maximum value of |DNL[k]| over the range of k given in Equation (7.5). A value of −1 for DNL[k] corresponds to a missing code.
Integral non-linearity quantifies the absolute deviation of a gain and offset compensated transfer curve from the ideal case. INL values are given in LSBs at the code transition levels as a function of k by

INL[k] = ε[k] / Q   for 1 ≤ k ≤ 2^N − 2   (7.6)
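Equations (7.3), (7.5) and (7.6) translate directly into code. The sketch below is illustrative; it anchors the ideal straight line at T[1] and uses an ideal-minus-real sign convention for ε[k], both of which are assumptions of this example:

```python
def dnl_inl(t_corr, q):
    """DNL (Eq. 7.5) and INL (Eq. 7.6) from gain- and offset-corrected
    transition levels t_corr[k - 1] = T[k], k = 1..2^N - 1, with ideal
    code bin width q.  T[1] anchors the ideal straight line.
    """
    n = len(t_corr)
    widths = [t_corr[k + 1] - t_corr[k] for k in range(n - 1)]  # W[k], Eq. (7.3)
    dnl = [(w - q) / q for w in widths]
    eps = [(t_corr[0] + q * k) - t_corr[k] for k in range(n)]   # ideal - real
    inl = [e / q for e in eps]
    return dnl, inl

# One code bin 0.25 LSB too wide: DNL shows +0.25 at that code and the
# INL accumulates the deviation for all following transition levels.
dnl, inl = dnl_inl([0.0, 1.0, 2.0, 3.25, 4.25, 5.25, 6.25], 1.0)
```

A DNL entry of −1 in this representation corresponds to a zero-width, i.e. missing, code.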
7.2.2
A/D converter performance is also expressed in the frequency domain. This section introduces the main dynamic performance parameters associated with the converter's output spectrum, while the determination of their values in converter testing is described in Section 7.3.4.
Figure 7.4 illustrates an A/D converter output spectrum, a plot of frequency component magnitude over a range of frequency bins. Such a spectrum can be obtained through spectral analysis of a converter output data record, as described in Section 7.3.4.

Figure 7.4  Example A/D converter output spectrum: fundamental component A1 at frequency fi, harmonic components AH2, AH3, ..., AHk at 2fi, 3fi, ..., kfi, and a spurious component ASi

The signal-to-noise and distortion ratio (SINAD) relates the RMS value of the signal to the RMS value of the total noise, which includes the distortion components:

SINAD = 20 log10 [RMS(signal) / RMS(total noise)] = 10 log10 [P_signal / P_total noise]   (7.7)
The effective number of bits (ENOB) compares the performance of a real A/D converter to the ideal case with regard to noise [7]. The ENOB is determined through

ENOB = N − log2 [RMS(total noise) / RMS(ideal noise)]   (7.8)
where N is the number of bits of the real converter. In other words, an ideal A/D
converter with a resolution equal to the ENOB determined for a real A/D converter
will have the same RMS noise level for the specified input signal amplitude and
frequency. The ENOB and SINAD performance parameters can be correlated to each
other as analysed in Reference 3.
THD is a measure of the total output signal power contained in the second to kth
harmonic component, where k is usually in the range from five to ten (depending on
the ratio of the particular harmonic distortion power to the random noise power) [8].
The THD can be determined from RMS values of the input signal and the harmonic
components and is commonly expressed as the ratio of the powers in decibels:
THD = 20 log10 [√(Σ(i=2..k) A_Hi(rms)²) / A1(rms)] = 10 log10 [P_harmonic / P_input]   (7.9)
where A1(rms) is the RMS for the signal and AHi(rms) the RMS for the ith harmonic.
THD is given in decibels and usually with respect to a full-scale input (dBFS). Where
the THD is given in dBc, the unit is in decibels with respect to a carrier signal of
specified amplitude.
The spurious-free dynamic range (SFDR) relates the signal amplitude to the largest harmonic or spurious component in the spectrum:

SFDR = 20 log10 [A1 / max{A_H(max), A_S(max)}]   (7.10)
where AH(max) and AS(max) are the amplitudes of the largest harmonic component and
spurious component, respectively.
While the dynamic performance parameters introduced above are essential for an
understanding of A/D converter test methodologies (Section 7.3), an entire range of
further performance parameters is included in the IEEE standard 1241 [3], such as
various SNRs specified for particular bandwidths or for particular noise components.
Furthermore, some performance parameters are defined to assess inter-modulation
distortion in A/D converters with a two-tone or multiple tone sine-wave input.
7.3
This section introduces A/D converter test methodologies, for static and dynamic
performance parameter testing. The basic test set-up and other prerequisites are
briefly described in the next section. For further reference, an introduction to production test of ICs, ranging from test methodologies and design-for-test basics to
aspects relating to automatic test equipment (ATE) can be found in Reference 9.
Details on DSP-based testing of analogue and mixed-signal circuits are provided in
References 10 and 11.
7.3.1
The generic test set-up is illustrated in Figure 7.5. In generic terms, a suitable stimulus
supplied by a test source is applied to the A/D converter under test via some type of
test access mechanism. The test stimulus generator (TSG) block corresponds to one or more sine-wave, arbitrary-waveform or pulse generator(s), depending on the type of test to be executed. Generally, the response is captured for processing in a test sink, again via a test access mechanism.
Figure 7.5  Generic test set-up: the test source (test stimulus generator with optional filter) drives the ADC under test through a test access mechanism; a second test access mechanism, with an optional buffer, feeds the output response analyser in the test sink, with clock generation and distribution common to all blocks
7.3.2
The action of collecting a set of A/D converter output samples and transferring it to
the output response analyser (ORA) is commonly referred to as taking a data record.
The aim is to accumulate consecutive samples; however, for high-speed converters
interfacing restrictions may require output decimation [3]. This is a process in which
only every ith sample of a consecutive sequence is recorded, at a lower speed than the A/D converter's sampling speed.
On the other hand, the A/D converter maximum sampling frequency restricts the
rate at which a waveform can be digitized and therefore the measurement bandwidth.
When sampling periodic waveforms, it is generally desirable to record an integer number of waveform periods while not resampling identical points at different cycles. This
can be assured by applying coherent sampling, Equation (7.11), where additionally
the number of samples in the record, M, and the number of cycles, N, are in the ratio
of relative prime numbers [10]:
fi = fs · (N / M)   (7.11)
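A coherent test frequency satisfying Equation (7.11) might be chosen as follows (an illustrative helper; the function and parameter names are my own):

```python
from math import gcd

def coherent_test_frequency(f_s, m_samples, f_target):
    """Pick a coherent test frequency near f_target, Equation (7.11).

    Chooses a cycle count N close to f_target * M / fs that is
    relatively prime to the record length M, so that fi = fs * N / M
    and no two samples hit the same waveform phase.
    """
    n = max(1, round(f_target * m_samples / f_s))
    # search outwards from n for a cycle count coprime with M
    for delta in range(m_samples):
        for cand in (n + delta, n - delta):
            if 0 < cand < m_samples // 2 and gcd(cand, m_samples) == 1:
                return f_s * cand / m_samples, cand
    raise ValueError("no coherent frequency found")

# e.g. a 1 MHz sampling rate, 4096-sample record, target near 31 kHz
fi, n_cycles = coherent_test_frequency(f_s=1.0e6, m_samples=4096, f_target=31.0e3)
```

For a power-of-two record length, any odd cycle count is relatively prime to M, so the search terminates immediately.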
Figure 7.6  Data samples of a coherently sampled sine wave, numbered in acquisition order, for two different choices of the number of cycles in a 21-sample record

Figure 7.7  Two further sampling schemes (a) and (b); in (b), the samples are accumulated over both halves of the input waveform phases

While the latter sampling schemes allow a quick visualization of the waveform's shape, the sampling techniques introduced can be employed in the test methodologies described in the next sections.
7.3.3
Figure 7.8  Static test circuit arrangements: (a) feedback (servo) loop in which a digital comparator and a DAC with reference voltages Vref+ and Vref− drive the ADC input; (b) arrangement with a counter or accumulator and a digital comparator (M > N)

Figure 7.9  Histogram test of a 3-bit example converter with (a) a ramp input and (b) a sine-wave input: code counts H[k] accumulate for codes k = 0 to 7 between the transition levels T[1] and T[7]
For ramp histograms, where ideal values for H[2] to H[2^N − 2] are equal, code transition levels can be given as in the first part of Equation (7.12), where C is an offset component and A a gain factor that is multiplied with the accumulated code count up to the transition to code k [3]. As the widths of the extreme codes, 0 and 2^N − 1, cannot be defined, their code counts are usually set to zero (H[0] = H[2^N − 1] = 0). In these cases, C and A can be determined as shown in Equation (7.12), where the first code transition level, T[1], is interpreted as the offset component. The gain factor defines the proportion of the full-scale input range for a single sample in the histogram, where Htot is the total number of samples in the entire histogram:

T[k] = C + A · Σ(i=0..k−1) H[i] = T[1] + ((T[2^N − 1] − T[1]) / Htot) · Σ(i=0..k−1) H[i]   (7.12)

For sine-wave histograms, the corresponding relation is

T[k] = C − A · cos(π · Σ(i=0..k−1) H[i] / Htot)   (7.13)

where offset component C and gain factor A correspond to the input sine wave's offset and amplitude.
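The histogram post-processing of Equations (7.12) and (7.13) can be sketched as follows (illustrative helper functions with hypothetical names; both force the extreme code counts to zero as described above):

```python
import math

def transitions_from_ramp_histogram(hist, t1, t_last):
    """Transition levels from a ramp histogram, Equation (7.12).

    hist[k] is the code count H[k]; T[1] = t1 acts as offset C and the
    gain factor A maps one histogram sample to its share of the range.
    """
    h = list(hist)
    h[0] = h[-1] = 0                      # H[0] = H[2^N - 1] = 0
    htot = sum(h)
    a = (t_last - t1) / htot              # gain factor A
    t, cum = [], 0
    for k in range(1, len(h)):            # k = 1 .. 2^N - 1
        cum += h[k - 1]                   # accumulated count below code k
        t.append(t1 + a * cum)            # T[k], Eq. (7.12)
    return t

def transitions_from_sine_histogram(hist, c, amp):
    """Transition levels from a sine-wave histogram, Equation (7.13)."""
    h = list(hist)
    h[0] = h[-1] = 0
    htot = sum(h)
    t, cum = [], 0
    for k in range(1, len(h)):
        cum += h[k - 1]
        t.append(c - amp * math.cos(math.pi * cum / htot))
    return t
```

For an ideal converter the ramp version returns equidistant levels from T[1] to T[2^N − 1]; the sine version maps the cumulative counts through the arccosine-shaped amplitude distribution of the sine stimulus.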
7.3.4
Generally, the aim in dynamic performance parameter testing is to identify the signal components at the A/D converter output, such as the converted input signal, harmonics
and random noise, and to compute performance parameters introduced in Section
7.2.2. For the majority of these parameters and determination of signal components,
a transformation from the time domain to the frequency domain is required. A/D
converter testing employing discrete Fourier transformation is described in the next
section. However, some dynamic performance parameters can also be determined in
the time domain from an A/D converter model generated to match a data record taken
from a real converter. The so-called sine-fit testing is introduced in Section 7.3.4.2.
For either technique, it is assumed that a single tone sine-wave stimulus is applied to
the A/D converter.
7.3.4.1 Frequency domain test methodology
This section focuses on the application of frequency domain test to A/D converters. It
is beyond the scope of this chapter to provide an introduction to Fourier transformation
[6] or to discuss general aspects of DSP-based testing and spectrum analysis [10, 11]
in great detail.
A signal can be described in time or frequency domain where the Fourier analysis
is employed to move from one domain to the other without loss of information. For
coherent sampling of periodic signals with the number of samples taken in the time
Figure 7.10  Example FFT plot: fundamental component A1 at fi, harmonic components AH2 to AH8 at 2fi to 8fi, a spurious component ASi and the remaining noise floor, over the frequency range up to fs/2
domain being a power of two, the discrete Fourier transformation can be computed
more efficiently through FFT algorithms. If coherent sampling of all signal components cannot be guaranteed, a periodic repetition of the sampled waveform section
can lead to discontinuities at either end of the sampling interval causing spectral
leakage. In such cases, windowing has to be applied, a processing step in which
the sampled waveform section is mathematically manipulated to converge to zero
amplitude towards the interval boundaries, effectively removing discontinuities [24].
In either case, the A/D converter output signal is decomposed into its individual
frequency components for performance analysis. The frequency range covered by
the spectrum analysis depends on the rate of A/D converter output code sampling,
fs . The number of discrete frequency points, also referred to as frequency bins, is
determined by the number of samples, N, processed in the FFT. While accounting
for aliasing, signal and sampling frequencies have to be chosen to allow sufficient
spacing between the harmonics and the fundamental component. The graphical presentation of the spectrum obtained from the analysis, frequently referred to as FFT
plot, illustrates the particular signal component amplitude with its frequency on the
x-axis (Figure 7.10). The number of frequency bins is equal to N/2 and their widths
are equal to fs /N.
The following spectrum features can be identified in Figure 7.10: first, the fundamental component, A1, corresponding to the input signal; second, the harmonic distortion components, AH2 to AH8; third, large spurious components, such as ASi; and finally the remaining noise floor representing quantization and random noise. Dynamic performance parameters, such as SINAD, THD and SFDR, can be calculated from the particular signal components' real amplitudes (not in decibels) or the power contained in them, as given in Section 7.2.2 and described in Reference 25, including multi-tone testing.
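The FFT-based computation of these parameters can be sketched as follows. This is an illustration under stated assumptions: coherent sampling of a single tone at a known frequency bin (so no windowing is applied), and ENOB derived through the standard SINAD relation rather than Equation (7.8) directly:

```python
import numpy as np

def spectrum_metrics(record, n_cycles):
    """SINAD, ENOB and SFDR estimates from a coherently sampled record
    containing a sine wave at frequency bin n_cycles."""
    x = np.asarray(record, dtype=float)
    x = x - x.mean()                        # remove the d.c. component
    spec = np.abs(np.fft.rfft(x)) / len(x)  # single-sided magnitude spectrum
    power = spec ** 2
    p_signal = power[n_cycles]
    p_nad = power[1:].sum() - p_signal      # noise plus distortion, Eq. (7.7)
    sinad = 10 * np.log10(p_signal / p_nad)
    enob = (sinad - 1.76) / 6.02            # standard SINAD/ENOB relation [3]
    others = np.delete(spec[1:], n_cycles - 1)
    sfdr = 20 * np.log10(spec[n_cycles] / others.max())   # Eq. (7.10)
    return sinad, enob, sfdr

# 12-bit quantization of a coherently sampled sine (M = 4096, N = 131 cycles)
n = np.arange(4096)
codes = np.round((np.sin(2 * np.pi * 131 * n / 4096) + 1) / 2 * 4095)
sinad, enob, sfdr = spectrum_metrics(codes, 131)   # ENOB close to 12
```

For the ideal 12-bit quantization above, SINAD lands near the theoretical 74 dB of Equation (7.2), as expected.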
7.3.4.2
In sine-fit testing, a sine-wave model is matched to the data record taken from the A/D converter under test:

y[n] = A · cos(ω·tn + φ) + C   for n = 1, . . . , M   (7.14)

where A is the amplitude, φ the phase and C the offset of the fitted sine wave of angular frequency ω. When the frequencies of the input stimulus and the sampling, and therefore parameter ω of the fitted function, are known, the remaining three sine-wave parameters can be calculated through minimization of the RMS error between the data record and the model (three-parameter least-square fit [3]). When the frequencies are unknown or not stable, then the four-parameter least-square fit has to be employed. Here, an iteration process beginning with an initial estimate for the angular frequency ω surrounds the least-square minimization process. The value for ω is updated between loops until the change in obtained sine-wave parameters remains small. The three-parameter and four-parameter fitting process is derived and described in far more detail in Reference 17.
The performance parameter that is usually computed for the fitted A/D converter model is the ENOB [7], Equation (7.8). Some further performance analysis
can be achieved by test execution under different conditions, such as various input
stimulus amplitudes or frequencies as described in Reference 26. A potential problem is that this test methodology does not verify the converter performance over
its entire full-scale input range, as the test stimulus amplitude has to be chosen to
avoid clipping. Localized conversion error, affecting a very small section of the transfer function, may also escape unnoticed due to the averaging effect of the fitting
process.
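With ω known, the three-parameter fit reduces to a linear least-squares problem. The sketch below is my own illustration using the equivalent A·cos + B·sin parameterization of the model in Equation (7.14):

```python
import numpy as np

def three_param_sine_fit(samples, omega, t):
    """Three-parameter least-squares sine fit for known frequency [3, 17].

    Fits y[n] ~ A0*cos(w*t[n]) + B0*sin(w*t[n]) + C, then converts to
    the amplitude/phase form A*cos(w*t + phi) + C of Equation (7.14).
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(samples, dtype=float)
    d = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
    (a0, b0, c), *_ = np.linalg.lstsq(d, y, rcond=None)
    amp = np.hypot(a0, b0)
    phi = np.arctan2(-b0, a0)   # A*cos(wt+phi) = A*cos(phi)*cos - A*sin(phi)*sin
    residual_rms = np.sqrt(np.mean((d @ np.array([a0, b0, c]) - y) ** 2))
    return amp, phi, c, residual_rms

# recover the parameters of a clean synthetic record
tn = np.arange(1000) / 1000.0
w = 2 * np.pi * 7
amp, phi, c, rms = three_param_sine_fit(1.5 * np.cos(w * tn + 0.3) + 0.2, w, tn)
```

The RMS residual returned here is the quantity from which the ENOB of the fitted converter model is then derived.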
7.4
Built-in self-test (BIST) for analogue and mixed-signal components has been identified as one of the major requirements for future economic deep sub-micron IC
test [27, 28]. The main advantage of BIST is to reduce test access requirements.
At the same time, the growing performance gap between the circuit under test and
the external tester is addressed by the migration of tester functions onto the chip. In
addition, parasitics induced from the external tester and the demands on the tester
can be reduced. Finally, analogue BIST is expected to eventually enable the use of
Figure 7.11  Histogram-based analogue BIST: an optionally on-chip test stimulus generator (with optional DAC) and test clock drive the S&H and ADC under test; a histogram and difference generator, comparing against an on-chip reference histogram, produces the signature of the ADC, with the remaining response analysis optionally on chip
cheaper, digital-only or so-called DfT testers, which will help with the integration of analogue virtual components, including BIST, for digital SoC applications. Here the aim is to enable the SoC integrator to avoid the use of expensive mixed-signal test equipment. Also, for multi-chip modules, on-chip test support hardware helps
to migrate the test of analogue circuitry to the wafer level. It is expected that the
reuse of BIST structures will significantly reduce escalating test generation costs, test
time and time-to-market for a range of devices. Full BIST has to include circuitry to
implement both TSG and ORA. This section briefly summarizes BIST solutions that
have been proposed for performance parameter testing of A/D converters, some of
which have been commercialized.
Most BIST approaches for A/D converter testing aim to implement one of the
converter testing techniques described in Section 7.3. In Reference 29 it is proposed
to accumulate a converter histogram in an on-chip RAM while the test stimulus is
generated externally. The accumulated code counts can be compared against test
thresholds on chip to test for DNL; further test analysis has to be performed off chip.
This test solution can be extended towards a full BIST by including an on-chip triangular waveform generator [30]. In a similar approach, the histogram-based analogue
BIST (HABIST), additional memory and ORA circuitry can be integrated to store
a reference histogram on chip for more complete static performance parameter testing of A/D converters [31]. This commercialized approach [32] also allows the use
of the tested A/D converter (ADC) with the BIST circuitry to apply histogram-based
testing to other analogue blocks included in the same IC. As illustrated in Figure 7.11,
the on-chip integration of a sine wave or saw tooth TSG is optional. The histogram
is accumulated in a RAM where the converter output provides the address and a
read-modify-write cycle updates the corresponding code count. The response analysis is performed after test data accumulation and subtraction of a golden reference
histogram. As for the TSG, on-chip implementation of the full ORA is optional.
Also the feedback-loop test methodology has been considered for a straightforward BIST implementation [33]. The oscillating input signal is generated through
the charging or discharging of a capacitor with a positive or a negative reference
current I, generated on chip (Figure 7.12). Testing for DNL and INL is based on the
measurement of the oscillation frequency on the switch control line (ctrl) similar to
feedback-loop testing (Section 7.3.3.1).
Figure 7.12  BIST implementation of the feedback-loop test method: the oscillating ADC input voltage is generated by charging or discharging a capacitor under control of the ctrl line, stepping through the levels S0 to S3 over time

Figure 7.13  BIST arrangement with a test clock, switches S1 to S3 and a low-pass filter at the ADC input, with the transfer characteristic modelled by the polynomial y = b0 + b1·x + b2·x² + b3·x³
7.5
This chapter discussed the key parameters and specifications normally targeted in
ADC testing, methods for extracting these performance parameters and potential
solutions for either implementing full self-test or migrating test resources from external test equipment to the device under test. Table 7.1 provides a summary of the
advantages and limitations of five of the main test methods used in A/D converter
testing.
The field now faces major new challenges, as the demand for higher-resolution
devices becomes the norm. The concept of design reuse in the form of integrating third-party designs is also having a major impact on the test requirements, as
in many cases system integrators wishing to utilize high-performance converter
functions will not normally have the engineering or production test equipment
required to test these devices. The concept of being able to supply an ADC with
an embedded test solution that requires only digital external test equipment is hence a
major goal.
In the case of on-chip test solutions, proposed or available commercially, limitations need to be understood before investing design effort. Histogram testing, for example, will require a large amount of data to be stored and evaluated on chip while requiring long test times. For servo-loop-based solutions, the oscillation around a single transition level may be difficult to achieve under realistic noise levels. Sine-wave fitting will require significant area overhead for the on-chip computation, as do FFT-based solutions, and may still not achieve satisfactory measurement accuracy and resolution. Further work is therefore required to quantify test times, associated cost and measurement accuracies, and to generate test quality metrics.
Table 7.1  Summary of the main A/D converter test methods

Histogram based: tests static performance (offset and gain error, DNL, INL, missing codes, etc.); well-established, complete linearity test; limited by test stimulus accuracy and measurement accuracy.

Servo-loop: tests static performance (offset and gain error, DNL, INL); accurate measurement of transition edges (not based on statistics).

Sine-wave curve fitting: tests DNL, INL, missing codes, aperture uncertainty and noise; limited by the requirement that the input frequency be a submultiple of the sample frequency, by possible lack of convergence of the algorithm and by measurement accuracy.

Beat frequency testing: tests the dynamic characteristic; no accurate test.

FFT based: tests dynamic performance (THD, SINAD, SNR, ENOB); no tests for linearity.
7.6 References
1 van de Plassche, R.: Integrated Analog-to-Digital and Digital-to-Analog Converters (Kluwer, Amsterdam, The Netherlands, 1994)
2 Geiger, R.L., Allen, P.E., Strader, N.R.: VLSI Design Techniques for Analog and
Digital Circuits (McGraw-Hill, New York, 1990)
3 IEEE Standard 1241-2000: IEEE Standard for Terminology and Test Methods for
Analog-to-Digital Converters (Institute of Electrical and Electronics Engineers,
New York, 2000)
Chapter 8
Test of ΣΔ converters
Gildas Leger and Adoración Rueda
8.1 Introduction
Back in the 1960s, Cutler introduced the concept of ΣΔ modulation [1]. Some years later, Inose et al. applied this concept to analogue-to-digital converters (ADCs) [2]. Sigma-delta (ΣΔ) converters attracted little interest at that time because they required extensive digital processing. However, with newer processes and their ever decreasing feature size, what was first considered to be a drawback is now a powerful advantage: a significant part of the conversion is realized by digital filters, allowing for a reduced number of analogue parts, built of simple blocks. Nevertheless, the simplicity of the hardware has been traded off against behavioural complexity. ΣΔ modulators are very difficult to study and exhibit a number of behavioural peculiarities (limit cycles, chaotic behaviour, etc.) that represent an exciting challenge to the ingenuity of researchers and are also an important concern for industry.
Owing to this inherent complexity, it is quite difficult to relate defects, and in general non-idealities, to performance degradation. In linear time-invariant circuits, it is usually possible to extract the impact of a defect on performance by considering that the defect acts as a perturbation of the nominal situation. This operation is known as defect-to-fault mapping. For instance, in a flash ADC, a defect in a comparator can be modelled as a stuck-at fault or an unwanted offset that can be directly related to the differential non-linearity. However, in the case of ΣΔ converters, a given defect can manifest itself only under given circumstances. For instance, it is known that the appearance of limit cycles is of great concern, particularly in audio applications. Indeed, the human ear can perceive these pseudo-periodic effects as deep as 20 dB below the noise floor. A defect in an amplifier can affect its d.c. gain and cause unwanted integrator leakage and limit cycles. Such a defect can be quite difficult to detect with a simple functional test. Performing a good and accurate test of a ΣΔ modulator is, thus, far from straightforward. A stand-alone ΣΔ modulator in its own
8.2
8.2.1
Figure 8.1 shows the structure of a ΣΔ converter. The converter is divided into two important domains: the analogue and the digital domains. In the analogue part, the ΣΔ modulator adaptively approximates the output low-resolution bit-stream to the input signal and shapes the quantization noise at high frequencies. It is often preceded by a simple anti-aliasing filter that removes potential high-frequency components from the input signal. Then, the decimation filter removes the high-frequency noise and converts the low-resolution high-rate bit-stream into a high-resolution low-rate digital code.
The objective of ΣΔ modulation is to shape the quantization error towards high frequencies, as seen in Figure 8.2. This allows most of the quantization error to be filtered out, and the performance greatly improves. Taking the concept to an extreme, it is even possible to use a 1-bit quantizer and obtain high-precision converters by appropriate quantization, noise shaping and filtering.
Figure 8.1  Decomposition of a ΣΔ ADC: anti-aliasing filter and ΣΔ modulator producing the bit-stream in the analogue domain, followed by the decimation filter producing multi-bit digital codes in the digital domain

Figure 8.2  Effect of ΣΔ modulation on the spectrum: quantization noise shaped towards fs/2

Figure 8.3  Error-predicting architecture: the quantization error is fed back through a delay z^−1 and subtracted from the input

The noise-shaping capability of ΣΔ modulators is achieved by feeding back the quantization error to the input. Actually, a ΣΔ modulator should be seen as an adaptive system. Let us imagine that only a low-resolution converter is available: if the input
signal is directly fed into that converter, the output will be a coarse approximation of
the input. However, the input signal is slow with respect to the maximum sampling
frequency of the low-resolution converter. In control theory, the simplest approach
to improve the behaviour of a system is to use a proportional controller: a quantity
proportional to the quantization error is subtracted from the input signal. This is
depicted in Figure 8.3. When considering a discrete-time situation, a delay has to
be introduced into the feedback loop. If the quantity subtracted from the input is
the entire quantization error, an architecture known as error predicting is obtained.
Modelling the low-resolution converter as an additive noise source [3], the transfer
function of the device can be resolved in the z-domain.
Figure 8.4  Generic ΣΔ modulator model: the input and the fed-back output Y (via a DAC) are combined in a loop filter with transfer functions L0 and L1 = (H − 1)/H, whose output is quantized by a coarse ADC
The output is equal to the input signal plus a quantization noise that is shaped at high frequencies by the function (1 − z^−1). In control theory, the system performance can be improved by taking an adequate controller (proportional, integral, differential or any combination of them). In the same way, ΣΔ modulation can be presented in a generic way as in Figure 8.4: the input signal and the modulator output are combined in a loop filter whose output is quantized by a coarse converter. The characteristics of the loop filter further define the architecture of the modulator and the order of the noise shaping.
The modulator state equation can be written as

U(z) = L0(z)·X(z) + L1(z)·Y(z)   (8.1)

Y(z) = (L0(z)·X(z) + E(z)) / (1 − L1(z)) = G(z)·X(z) + H(z)·E(z)   (8.2)

The function G(z) is known as the signal-transfer function. Similarly, H(z) is the noise-transfer function (NTF). The term E(z) in the parenthesis represents the quantization noise.
8.2.2
As was said above, in order to retrieve the data at the wanted precision, the quantization noise has to be properly filtered. The cut-off frequency of the filter defines
the modulator baseband. Indeed both the quantization noise and the input signal are
affected by the filter. Once the filtering operation has been done, the frequency range
above the filter cut-off frequency is useless. For that reason, the filter output datastream is often decimated or down-sampled: only one sample out of N is considered.
In order to avoid the aliasing effect, N has to be set such that the output data rate is
twice the filter cut-off frequency. This process is illustrated in Figure 8.5.
The output spectrum of a ΣΔ converter appears to be very similar to that of a Nyquist-rate converter, but the input signal is actually sampled at a much higher frequency than the converter output rate. The oversampling ratio (OSR) is defined as

OSR = fs / (2·fc)   (8.3)
Figure 8.5  Filtering and decimation of the ΣΔ modulator output spectrum: the shaped noise above the filter cut-off frequency fc is removed and the data rate is reduced from fs towards 2fc

Figure 8.6  Widely used decimation filter structure: L accumulator stages 1/(1 − z^−1) running at fs, followed by decimation and L differentiator stages realizing (1 − z^−N) overall, with output rate fd = fs/N
where fc is the cut-off frequency of the filter and fs the sampling frequency of the
modulator. It is easy to show that the number N defined above for the decimation
operation is actually equal to the OSR.
Both the filtering and decimation operations have to be carried out with care. The
decimation cannot be performed directly on the bit-stream, as the high-frequency
noise would alias into the baseband. On the other hand, performing the whole decimation at the filter output may not be optimum in terms of power efficiency. Indeed, it
would force the entire filter to run at the maximum frequency. It may be more convenient to split the filter into several stages with decimators at intermediate frequencies.
Hence, finding an adequate decimation and filtering strategy for a given modulator
is an interesting optimization problem. One widely used structure, however, is that
presented in Figure 8.6.
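The structure of Figure 8.6 can be sketched as a small integer model (a generic cascaded integrator-comb decimator written under my own naming; the real filter runs the two halves at different clock rates):

```python
def cic_decimate(bits, n_dec, order):
    """Decimation filter in the style of Figure 8.6: 'order' accumulator
    stages run at the input rate fs, the stream is then decimated by
    n_dec, and 'order' differentiator stages run at fd = fs/n_dec.
    Integer arithmetic throughout, as in a hardware implementation;
    the d.c. gain of the cascade is n_dec**order.
    """
    x = [int(b) for b in bits]
    for _ in range(order):                 # accumulators, 1/(1 - z^-1)
        acc, out = 0, []
        for v in x:
            acc += v
            out.append(acc)
        x = out
    x = x[::n_dec]                         # down-sampling by n_dec
    for _ in range(order):                 # differentiators, (1 - z^-1)
        x = [b - a for a, b in zip([0] + x[:-1], x)]
    return x
```

Running a constant stream of ones through two stages with a decimation factor of 4 settles at the d.c. gain of 16, illustrating why such filters need word growth of order·log2(N) bits in hardware.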
8.2.3 ΣΔ modulator architecture
With the advent of digital-oriented processes, ΣΔ converters have gained more and more interest. Research effort has been focused on both theory and implementation. In order to get more benefits from noise shaping, high-order architectures have been developed with a wide variety of loop-filter topologies. In parallel, these refinements require a deeper understanding of the non-linear dynamics of complex ΣΔ modulators. The topic of most relevance is without doubt the stability of the modulator internal states [4–6].
Figure 8.7  First-order ΣΔ modulator: a delaying integrator z^−1/(1 − z^−1) followed by the quantizer, with unity-gain feedback of the output Y

For this modulator, the output in the z-domain is

Y(z) = z^−1·X(z) + (1 − z^−1)·E(z)   (8.4)

The modulator output is thus equal to the delayed modulator input plus the quantizer error shaped by the function (1 − z^−1). Considering the quantizer error as a white noise that respects Bennett's conditions [3] and assuming a large OSR (that is, the baseband frequency is much lower than the sampling frequency), the quantization noise power in the modulator baseband can be calculated as

P_Q = (Δ²/12) · (π²/3) · OSR^−3   (8.5)

where Δ is the quantizer step.
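The first-order loop can be simulated directly from the difference equations behind Figure 8.7 and Equation (8.4). The sketch below is an illustration with a ±1 single-bit quantizer (an assumption of this example, not taken from the chapter), showing the bit-stream mean tracking a d.c. input:

```python
def first_order_sdm(x):
    """Simulate a first-order sigma-delta modulator with a 1-bit quantizer.

    Discrete-time model: the integrator accumulates the input minus the
    delayed fed-back output, and the quantizer output is +1 or -1.
    """
    integ = 0.0
    bits = []
    for v in x:
        integ += v - (bits[-1] if bits else 0.0)   # delay in the feedback loop
        bits.append(1.0 if integ >= 0 else -1.0)
    return bits

# The bit-stream mean tracks a slow input: digitize a d.c. level of 0.25
stream = first_order_sdm([0.25] * 4096)
print(sum(stream) / len(stream))   # close to 0.25
```

Because the integrator state stays bounded, the error of the bit-stream average falls off as 1/M with the record length M, which is the adaptive-tracking behaviour described in the text.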
Figure 8.8  Second-order ΣΔ modulator with integrators z^−1/(1 − z^−1), branch coefficients a1, a2, b1, b2, quantizer gain k and additive quantization error E

Figure 8.9  High-order single-loop ΣΔ modulator with feed-in coefficients A0 to AL and feedback coefficients B2 to BL

Figure 8.10  Cascaded ΣΔ modulator: the stage outputs Y1, Y2, ..., YN are combined in a digital reconstruction filter
The second technique consists in cascading several low-order stages [12] as shown
in Figure 8.10. The quantization error of stage i is digitized by stage i + 1, in some
way similar to pipeline converters where the residue of one stage conversion (i.e., the
quantization error) is the input of the next stage. A proper reconstruction filter has to
be designed that combines the output bit-streams of all the stages. As a result, all but
the last stage quantization errors are cancelled. Such structures benefit from a greater
simplicity than single-loop modulators and their design flow is better controlled. A
drawback is that noise cancellation within the reconstruction filter depends on the
characteristics of the different stages (integrator gain, amplifier d.c. gain and branch
coefficients). In other words, the digital reconstruction filter has to match the analogue
characteristics of the stages. These requirements put more stress on the design of the
analogue blocks as the overall modulator is more sensitive to integrator leakage and
branch mismatches than single-loop modulators.
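The cancellation mechanism can be checked with the ideal stage equations alone. The sketch below (Python/NumPy, a simplified model of a 1-1 cascade, not the book's implementation) assumes each stage obeys Yi = z^-1·(input) + (1 - z^-1)·Ei and a reconstruction filter of the form Y = z^-1·Y1 + (1 - z^-1)·Y2; the first-stage quantization error then drops out exactly.

```python
import numpy as np

def delay(s):            # z^-1 with zero initial condition
    return np.concatenate(([0.0], s[:-1]))

def diff1(s):            # (1 - z^-1)
    return s - delay(s)

rng = np.random.default_rng(0)
N = 1000
X  = rng.uniform(-1, 1, N)   # modulator input
E1 = rng.uniform(-1, 1, N)   # stage-1 quantizer error (synthetic)
E2 = rng.uniform(-1, 1, N)   # stage-2 quantizer error (synthetic)

# Ideal first-order stage models; stage 2 digitizes the negated
# quantization error of stage 1.
Y1 = delay(X) + diff1(E1)
Y2 = delay(-E1) + diff1(E2)

# Digital reconstruction filter
Y = delay(Y1) + diff1(Y2)

# E1 is cancelled: only the twice-delayed input and the
# second-order-shaped E2 remain.
expected = delay(delay(X)) + diff1(diff1(E2))
assert np.allclose(Y, expected)
```

If the analogue stage transfer deviates from the ideal model (integrator leakage, gain error), the E1 terms no longer cancel, which is exactly the matching sensitivity described above.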
8.3
Characterization of converters
ΣΔ converters are ADCs. For this reason, their performance can be described
by standard metrics, defined for any ADC. Similarly, there exist standard techniques
to measure these standard metrics. All this information about ADC testing is actually
gathered in the IEEE 1241-2000 standard [13]. Characterization of state-of-the-art
ΣΔ converters is challenging in itself from a metrological viewpoint. Some ΣΔ
converters claim a precision of up to 24 bits. For such levels of accuracy, no
detail of the test set-up can be overlooked. However, these concerns are not specific
to ΣΔ converters but are simply a consequence of their overwhelming capability
to reach high resolutions. What is intended in this section is to contemplate ADC
characterization from the viewpoint of ΣΔ modulation. For more general information,
the reader can refer to Chapter 7.
The performance specifications of ADCs are usually divided into two categories:
static and dynamic. The meaning of these two words seems to identify the role of
these specifications to the field of application of the converter. As the first ΣΔ modulators were of low order, they required a high OSR to reach a good precision.
Their baseband was thus limited to low frequencies. Then, evolutions in the modulator architecture allowed reducing the OSR while maintaining or even increasing
the resolution. For this reason, the market for ΣΔ converters has evolved from the
low-frequency spectrum to the highest one. In the lowest range of frequency, ΣΔ
modulators are used for instrumentation: medical, seismology, d.c. meters, and so
on. At low frequencies, state-of-the-art ΣΔ converters claim a resolution of up to
24 bits. In that case, the most important ADC performance parameters seem to be
the static ones: gain, offset, linearity. However, the noise figures are also of great
interest for those metering applications that require the detection of very small signals. The most important market for ΣΔ converters can be found in the audio range.
Indeed, most CODECs use ΣΔ modulators. In that case, dynamic specifications are
of interest. Moving forward in the frequency spectrum, ΣΔ modulators can be found
in communication and video applications. The first target was ADSL and ADSL+
but now ΣΔ converters can be found that are designed for GSM, CDMA and AMPS
receivers.
8.3.1
Opening the ADC black box results in numerous conclusions. The first one is almost
purely structural. Because ΣΔ converters are so clearly divided into two domains
(ΣΔ modulator and digital filter), the building blocks are often sold separately. From
a characterization viewpoint, it is obvious that the converter performance depends
on the filter characteristics: it will define the OSR and also the amount of noise
effectively removed. It also affects the signal frequency response. Furthermore, it
must be correctly implemented such that finite precision arithmetic does not degrade
the final resolution. However, the filter is digital and to some extent its performance is
guaranteed by design. The filter has to be tested to ensure that no defect has modified
its structure or caused an important failure, but the characteristics of the filter should
8.3.2
Static performance
Figure 8.11  Static performance parameters versus d.c. input: offset, gain (best-fit line), INL and DNL (ideal code width)
8.3.3
Dynamic performance
8.3.4
The first issue to consider is that performing a FFT over a finite number of points
gives an approximation of the FT of the signal under study. Actually, if N points
are acquired at a frequency facq , the outcome of the FFT is the Fourier series of
the periodic signal of period N/facq that best approximates the acquired samples.
Most of the time, however, the acquired signal does not have the required period. It
may even not be periodic at all, due to noise or spurious components. For that reason,
spectral leakage occurs. The signal components at frequencies other than the available
frequency bins (k · facq/N, with k varying from zero to N-1) will leak and spread
across adjacent bins, making them unobservable. Actually, the obtained spectrum can
be considered as the FT of an infinite-length version of the analysed signal multiplied
by a rectangular signal with N ones and an infinite number of zeros. That signal is
called a rectangular window. The multiplication in the time domain corresponds to a
convolution in the frequency domain. So a spectral line at a given frequency (a Dirac
distribution) in the ideal FT of the signal will appear as a version of the rectangular
window spectrum centred at the same frequency. More exactly, what will appear in
the FFT output are the samples of the displaced rectangular window spectrum that
corresponds to the available frequency bins. This is illustrated in Figure 8.12.
If the spectral line exactly corresponds to one of the FFT bins it means that it can
be represented adequately by a Fourier series of length N. This corresponds to case
(a) in Figure 8.12. In that case, the rectangular window spectrum is sampled at its
maximum, and the rest of the samples exactly correspond to the nulls of the window
spectrum. However, if the spectral line falls between two FFT bins (case (b)), the
rectangular window spectrum is not sampled at its maximum on the main lobe. Part
of the missing signal power leaks into adjacent FFT bins that sample the rectangular
window sidelobes.
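The two situations can be reproduced numerically. In this sketch (Python/NumPy; the tone frequencies are our own choices), a tone sitting exactly on a bin concentrates all its power there, while a tone midway between two bins leaks across the register.

```python
import numpy as np

N = 1024
n = np.arange(N)

# (a) coherent tone: exactly 41 periods fit in the register
coh = np.cos(2 * np.pi * 41 / N * n)
# (b) non-coherent tone: falls midway between bins 41 and 42
non = np.cos(2 * np.pi * 41.5 / N * n)

# Amplitude spectra, normalized so a unit tone reads 1
S_coh = np.abs(np.fft.rfft(coh)) / (N / 2)
S_non = np.abs(np.fft.rfft(non)) / (N / 2)

# Coherent case: all power in bin 41, other bins at numerical noise
assert abs(S_coh[41] - 1.0) < 1e-9
assert S_coh[45] < 1e-9
# Non-coherent case: sidelobes of the rectangular window appear in
# neighbouring bins, several orders of magnitude above numerical noise
assert S_non[45] > 1e-3
```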
Figure 8.12  Spectrum of a rectangular window (a) for a coherent tone and (b) for a non-coherent tone
Coherent sampling is the first technique to limit these undesirable effects. It consists in properly choosing the test frequencies such that they correspond to FFT bins
as exactly as possible. In practice, this can be implemented if the test signal generator
can be synchronized with the ADC. It can be shown [16] that the test frequencies
should be set to a fraction of the acquisition frequency:
ftest = (J/N) facq   (8.7)
where N is the number of samples in the acquisition register and J is an integer, prime
with N, that represents the number of test signal periods contained in the register.
This choice also ensures that all the samples are evenly distributed over the test signal
period and that no sample is repeated.
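A small helper illustrating Equation (8.7); the search strategy (stepping J away from the nearest integer until it is prime with N) is our own, not something prescribed by the text.

```python
from math import gcd

def coherent_test_frequency(f_target, f_acq, N):
    """Choose f_test = J * f_acq / N closest to f_target, with J an
    integer prime with N (Equation (8.7)), so that the register spans
    J whole signal periods with no repeated sample."""
    J = max(1, round(f_target * N / f_acq))
    # step away from the initial guess until gcd(J, N) == 1
    for d in range(N):
        for cand in (J - d, J + d):
            if 0 < cand < N // 2 and gcd(cand, N) == 1:
                return cand * f_acq / N, cand
    raise ValueError("no suitable J found")

# Example: a ~1 kHz test tone at facq = 48 kHz with a 4096-point register
f_test, J = coherent_test_frequency(f_target=1000.0, f_acq=48000.0, N=4096)
```

Since N is usually a power of two, any odd J works; the helper simply picks the odd J whose bin lies closest to the wanted frequency.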
However, it is not always possible to control the test frequencies with a sufficient
accuracy. Similarly, there may be spurious tones in the converter output spectrum
at uncontrolled frequencies. In those cases, a window different from the rectangular
one is required. Spectral leakage occurs because the analysed signal is not periodic
with a period N/facq . The idea behind windowing is to force the acquired signal to
respect the periodicity condition. For that to be done, the signal has to be multiplied
by a function that continuously tends to zero at its edges. As a result, the power
contained in the sidelobes of the window spectrum can be greatly reduced. The
window has to be chosen such that the leakage of all components present in the
ADC output signal falls below the noise floor and thus does not corrupt spectrum
observation. The drawback of such an operation is that the tones present in the output
spectrum are no longer represented by a sharp spectral line at one FFT bin. Indeed,
the main lobe of the window is always sampled by a number of adjacent FFT bins
greater than one. As a result, frequency resolution is lost. There is a trade-off between
frequency resolution and sidelobe attenuation. Figure 8.13 represents the spectrum
of several windows sampled for a 1024-point FFT. Figure 8.13(a) shows how the
window would be sampled for a non-coherent tone that would fall exactly between
Figure 8.13  Window power in dB versus normalized frequency for the rectangular, Hanning, Blackman-Harris and Rife-Vincent (type II) windows: (a) full window spectra; (b) close-up of the main lobes
two FFT bins. Figure 8.13(b) shows a close-up of the main lobes of the window
spectra for a coherent tone. Notice that for Figure 8.13(b), there is one marker per
FFT bin.
In order to limit spectral leakage, the authors in Reference 17 proposed to combine
sine-fit and FFT. A sine-fit is performed on the acquired register in order to evaluate
the gain and offset of the modulator. Then, an FFT is performed on the residue of
the sine-fit. As the high-power spectral line has been subtracted from the register,
the residue mainly contains noise, spurious components and harmonics. In most
cases, these components do not exhibit high power tones. A simple window or even
a rectangular window can be used. The spectral leakage of these components should
be buried below the noise floor. The overall spectrum (what the authors call pseudospectrum) can be reconstituted by manually adding the spectral line corresponding
to the input signal. The main drawback of this technique is obviously that it requires
the computational effort of both sine-fit and FFT.
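The procedure of Reference 17 can be sketched as follows (Python/NumPy; the three-parameter least-squares sine-fit at a known frequency is a standard technique, but the synthetic data and tolerances below are our own).

```python
import numpy as np

def sine_fit_residue(y, f, facq):
    """Three-parameter sine-fit at the (known) test frequency f:
    least-squares solve y ~ A cos + B sin + C, then return the fitted
    parameters and the residue (noise, harmonics, spurs)."""
    n = np.arange(len(y))
    w = 2 * np.pi * f / facq
    M = np.column_stack([np.cos(w * n), np.sin(w * n),
                         np.ones(len(y))])
    (A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)
    return (A, B, C), y - M @ np.array([A, B, C])

# Synthetic register: half-scale tone + offset + small white noise
rng = np.random.default_rng(1)
facq, f, N = 48000.0, 997.0, 8192
n = np.arange(N)
y = (0.5 * np.cos(2 * np.pi * f / facq * n) + 0.01
     + 1e-4 * rng.standard_normal(N))

(A, B, C), residue = sine_fit_residue(y, f, facq)
# The high-power line is gone from the residue, so its FFT (the
# "pseudo-spectrum" minus the reconstituted line) is leakage-free.
```

Because the generator and acquisition share the same master clock in the set-up of Reference 17, the frequency is known and a linear fit suffices; a non-coherent unknown frequency would require the non-linear four-parameter fit.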
The proper application of an FFT requires that three parameters be determined:
the number of samples in a register, the number of averaged registers and the window
to be applied. The window type is too qualitative a parameter and it is useful to divide it into
four parameters: the main lobe width (for instance, 13 FFT bins for the Rife-Vincent
window of Figure 8.13(a)), the window energy, the maximum sidelobe power and
the asymptotic sidelobe power evolution. Figure 8.14 shows how these parameters
relate to the measurement objectives and to the set-up constraints through a number
of central concepts.
The required frequency resolution is defined by the need for tone discrimination
and affected by set-up limitations such as the frequency precision of the signal generator. For a given type of test, a number of tones are expected in the output spectrum.
For instance, in an inter-modulation test, the user has to calculate, as a function of the
input tone frequency, the expected frequency of the inter-modulation and distortion
Figure 8.14  Central concepts (frequency resolution, noise floor, noise dispersion) linking the set-up constraints (signal leakage, stimulus frequency precision, noise leakage) to the FFT parameters (number of samples, number of averages, window main-lobe width, window energy, sidelobe power and sidelobe decay)
tones. Similarly, expected spurious tones can be taken into account, such as 50 Hz (or
60 Hz) tones. All those components should be correctly discriminated by the FFT in
order to perform correct measurements. Frequency resolution is primarily driven by
the number of samples in the acquired register but the window type is also of great
importance. Indeed, the main lobe width for an efficient window (from a leakage
viewpoint) as for the Rife-Vincent window shown in Figure 8.13(a) is as large as 13
FFT bins. This means that the frequency resolution is reduced by a factor 13 with
respect to a rectangular window whose main lobe is only one-bin wide. In many cases
though, few tones are expected in the output spectrum and the frequency resolution
issue can easily be solved by a judicious choice of the test frequency.
The noise floor is the concept of most importance. The power of a random signal
spreads over a given frequency range. For a white noise, it spreads uniformly from
d.c. to half the acquisition frequency (facq /2). What the FFT measures is actually the
amount of noise power in a small bandwidth centred on each FFT bin. Obviously, the
larger the number of samples acquired, the smaller the bandwidth and the less
noise falls in that bandwidth. The expected value for a noise bin is
|Xk| = σnoise sqrt(2 / (N Ewin))   (8.8)
where σnoise is the standard deviation of the white noise, N is the number of samples
in the acquisition register and Ewin is the energy of the applied window. Indeed,
the window is applied to the whole output data, including the noise and influences
the effective noise bandwidth. The energy of the window is simply calculated from
Ewin = (1/N) Σ w²[m],  m = 0, ..., N-1   (8.9)
On the other hand, the noise floor is related to the set-up constraints by the actual
noise power in the output data, which should be estimated a priori. The noise floor
has to be set to a value that enables the observation of the lowest expected tone power.
In other words, if a tone of 90 dB below full scale has to be detectable, the number
of samples and the window energy have to be chosen such that the noise floor of the
resulting FFT falls below -90 dBFS.
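Equation (8.8) says the expected noise-bin level is set by the window energy and falls as the register length N grows. The sketch below (Python/NumPy) checks that scaling numerically; the tone-calibrated spectrum normalization is one common convention and is our assumption, not the book's.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.001                       # white-noise standard deviation

def mean_noise_bin_power(N, window):
    """Average per-bin power of white noise for an N-point register,
    with the given window applied, averaged over 50 registers.
    The spectrum is scaled so a full-scale tone would read 1."""
    w = window(N)
    acc = 0.0
    K = 50
    for _ in range(K):
        x = sigma * rng.standard_normal(N)
        X = np.fft.rfft(w * x) / (np.sum(w) / 2)
        acc += np.mean(np.abs(X[1:-1]) ** 2)
    return acc / K

p1 = mean_noise_bin_power(1024, np.hanning)
p2 = mean_noise_bin_power(4096, np.hanning)

# Quadrupling N lowers the per-bin noise power by about 6 dB,
# i.e. the power ratio is close to 4
assert 0.7 < (p1 / p2) / 4 < 1.3
```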
The noise dispersion should also be taken into account. It can be shown that the
random variable that corresponds to an FFT bin and whose mean value is expressed
in Equation (8.8) has a standard deviation of the same order as its mean value. As a
result, in the representation of the spectrum in decibels of the full scale, random noise
appears as a large band that goes from 12 dB above the expected power level down
to tens of decibels below. Averaging the magnitude of the FFT bins for K acquisition
registers helps to reduce the standard deviation of the noise FFT bins by a factor
of K^0.5. For a significant number of averages, the noise floor tends to a continuous
line, which would be its ideal representation. Actually, the following equation could
be used to derive the FFT parameters from the requirement of the lowest detectable
tone:
10 log(2 Pnoise) - 10 log(N Ewin / 2) + 20 log(1 + 3/sqrt(K)) = Pspur   (8.10)
where Pnoise is the expected noise power of the converter, K is the number of averaged
registers and Pspur is the power of the minimum spur that has to be detected. Notice that
a full-scale tone is taken as the power reference in Equation (8.10). The last logarithmic
term in Equation (8.10) stands for the dispersion of the noise floor. Figure 8.15 intends
to facilitate comprehension of Equation (8.10).
The dispersion term should be maintained below the variations of the noise spectral
density that has to be detected. For instance, if an increase of 6 dB in the noise density
due to flicker noise has to be detected, the noise dispersion term should be lower than
6 dB, which implies averaging K = 10 FFT registers. Note that if the actual noise
power is higher than expected, the noise floor of the obtained FFT is increased. As
a result, the minimum detectable tone is higher. To compensate for this effect, the
number of points in the register should be increased to decrease the noise floor. An
extra term may be introduced into Equation (8.10) in order to account for unexpected
noise increases.
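Assuming the dispersion term of Equation (8.10) has the form 20 log(1 + 3/sqrt(K)) (a reading consistent with the two figures quoted in the text: about 12 dB for a single register and below 6 dB for K = 10), the sketch below evaluates it and checks by Monte Carlo that averaging K registers shrinks the spread of a noise-bin magnitude by about sqrt(K).

```python
import numpy as np

def dispersion_db(K):
    """Noise dispersion band in dB above the mean bin level,
    for K averaged registers (assumed +-3 sigma form)."""
    return 20 * np.log10(1 + 3 / np.sqrt(K))

assert abs(dispersion_db(1) - 12.04) < 0.1   # single register: ~12 dB
assert dispersion_db(10) < 6.0               # K = 10 meets a 6 dB target

# Monte Carlo: magnitude of a complex-Gaussian noise bin, single
# register versus the average of 10 registers
rng = np.random.default_rng(3)
mags = np.abs(rng.standard_normal((1000, 10))
              + 1j * rng.standard_normal((1000, 10)))
single = mags[:, 0]
avg10 = mags.mean(axis=1)
ratio = np.std(single) / np.std(avg10)
assert 2.6 < ratio < 3.8                     # close to sqrt(10) ~ 3.16
```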
Returning to Figure 8.14, the concept of signal leakage has already been explained.
Considering the maximum input tone power and the frequency precision of the signal
generator available, the proper window should be selected such that the sidelobe power
falls below the noise floor. Notice that if the frequency precision of the generator is
better than half the FFT bin bandwidth, facq /(2N), the sidelobe power requirements
Figure 8.15  Power in dBFS versus FFT bins (0 to N/2): the full-scale tone, the noise floor set by Pnoise, N and Ewin, the noise dispersion band, and the minimum detectable (buried) tone
may be relaxed as the window spectra would not be worst-case sampled. Taking that
case to an extreme, if coherent sampling is available to the test set-up, no signal
leakage occurs.
For ΣΔ converters, however, another leakage concept may have to be taken
into account: noise leakage. As was said in Section 8.3.1, ΣΔ converter non-idealities are located mainly in the analogue part, which is the ΣΔ modulator. In that
sense, performing the FFT on the modulator bit-stream gives more insight into the
functioning of the modulator because it is possible to check the correctness of
the noise shaping at high frequencies (beyond the cut-off frequency of the decimation
filter). If the FFT is performed on the output of the decimation filter, a number of
samples N has to be acquired at the filter output frequency (facq ) in a high-resolution
format (for instance, the filter output of a 24-bit precision filter can be in a 32-bit
format). If it is performed on the modulator bit-stream, a number of samples, N′, has
to be acquired at the sampling frequency of the modulator (which is equal to the filter
output frequency multiplied by the OSR) in a low-precision format (typically 1 bit).
Taking into account that the same non-idealities have to be detected in the baseband,
the same frequency resolution has to be selected in both cases. Hence, the FFT of the
modulator output bit-stream requires OSR times more points than the acquisition at
the filter output. The acquisition time is thus the same in both cases, and the memory
requirements should be of the same order due to the difference in the sample formats.
The drawback of performing an FFT on the modulator bit-stream is that it puts more
stress on the choice of the window that has to be applied to the data register. Indeed
in most ADCs, the noise spectral distribution is almost flat and its power is far lower
than full-scale signal power. As a result, noise leakage has little or no impact on the
output spectrum. This reasoning is also valid for a ΣΔ converter if data is acquired
at the output of the decimation filter. However, if the FFT is performed directly at
8.4
Test of converters
The term test is commonly used for characterization. Indeed, functional test is the
most-used practice in the field of mixed-signal circuit production test and is very
similar to characterization. It consists in measuring a given set of datasheet parameters and verifying that they lie within the specified limits. Nevertheless, the correct
definition of test is broader than that of characterization. Test should represent any
task that ensures, directly or not, within a given confidence interval, that the circuit
is (and not just performs) as expected. For instance, if it were possible to check that
the geometry of all process layers is the expected one and that the electrical process
parameters are within specification over the whole wafer, circuit performance would
be ensured by design. As was said before, production test for mixed-signal circuits and
in particular for ΣΔ modulators is usually functional: a subset of key specifications
are measured and the rest of the datasheet parameters are assumed to be correlated to
those measured parameters. It should be clear that functional test is not the perfect test
as it does not guarantee that the circuit is defect free. There exist other alternatives,
8.4.1
In the context of SoCs, the traditional functional approach is much more costly than for
stand-alone parts. For instance, an embedded ΣΔ modulator may not have its input and
output available. The functional test of the complete SoC may well hit the complexity barrier just as for digital integrated circuits. Solutions have thus to be found to
circumvent such issues.
Providing access to the nominal input of the modulator under test may be far from
easy in an SoC context. If the input of the modulator is not an input of the SoC, the
test signal cannot be sent directly through the pads of the chip. One solution could be
to implement a multiplexer at the modulator input so as to be able to select between
two input paths: the nominal one and the test one that would be connected to the
output pads. This obviously requires increasing the number of pads for test purposes.
Moreover, the test of high-resolution ΣΔ modulators requires the use of a test stimulus
of even higher resolution (both in terms of noise and distortion). The signal bus and
devices necessary to send the test signal to the modulator input thus need to maintain
that high-resolution criterion. In other words, the multiplexer at the modulator input
has to be designed for a precision higher than the modulator under test. In that sense,
standard approaches such as the IEEE 1149.4 [18, 19] test bus do not seem of much
help in the case of ΣΔ modulators.
The modulator and decimation filter outputs are digital, which means that these
signals can be sent to a test pad through any standard communication protocol with no
loss. Standard approaches such as the IEEE 1149.1 [20] test bus may be used. At first
sight, one may think that the observability is not much of an issue. The difficulties
arise with the amount of data to be shifted off chip. As a consequence, compression
techniques can be implemented on chip and the data may also be stored in an on-chip
memory. Though SoCs are likely to include a digital signal processor (DSP) and RAM,
the use of these resources for modulator test has two non-negligible drawbacks. The
first one is test complexity, as these resources have to be configured to perform their
normal operation and the test of the modulator. The second one is that it impedes
the test of different parts of the SoC in parallel, which is one of the recommended
techniques to speed-up the test of such complex devices.
Solutions to improve the testability of mixed-signal circuits in SoC are necessary
in general and are of particular importance for ΣΔ converters.
8.4.2
Much of the research done to improve the testability of mixed-signal circuits targets
built-in self-test (BIST) solutions. The concept is quite explicit: the objective is to
carry out most of the test on chip, from test signal generation to data analysis. An ideal
or full-BIST would require only a digital start signal and output a digital PASS/FAIL
signal.

Figure 8.16  Digital resonator with an embedded ΣΔ attenuator (coefficients K1 to KN), followed by a low-pass filter

However, most reported solutions lack some of these functions and hence
they are actually partial BISTs.
8.4.2.1 Functional BIST
With respect to ΣΔ converters, some solutions have been proposed that can be applied
to ADCs. Obviously, these are functional tests that suffer the same limitations as
characterization techniques. Histogram-based BIST [21-23] and servo-loop-based
BIST [24] are therefore not quite adapted to ΣΔ modulators. The most interesting
solutions in that field are undoubtedly those proposed by Gordon Roberts' team. They
proposed two solutions for the on-chip generation of precise sine waves and another
for the on-chip evaluation of some ΣΔ converter characteristics.
The first one [25] consists in building a digital oscillator, by embedding a ΣΔ
attenuator in a digital resonator, as seen in Figure 8.16. The selection of the
attenuation coefficient (Ki ) defines the oscillation frequency. The digital output
bit-stream encodes a very precise sine-wave tone (defined by the design of the digital
oscillator). Then, an analogue filter cuts down significant amounts of quantization
noise. This scheme has the advantage of being mostly digital and thus is very robust
and testable.
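The resonator core of this scheme can be sketched without the ΣΔ attenuator: a lossless two-pole recursion y[n] = k·y[n-1] - y[n-2] with k = 2·cos(w0) oscillates at w0. In the sketch below (Python/NumPy) a plain float multiply stands in for the ΣΔ attenuator, which is a deliberate simplification of the BIST circuit.

```python
import numpy as np

def resonator_tone(k, n_samples):
    """Lossless digital resonator y[n] = k*y[n-1] - y[n-2]; the
    coefficient k = 2*cos(w0) sets the oscillation frequency w0.
    In the BIST of Figure 8.16 the multiplication by k is realised
    by a sigma-delta attenuator, so only 1-bit arithmetic is needed;
    a float multiply is used here to show the principle."""
    w0 = np.arccos(k / 2)
    y = [0.0, np.sin(w0)]        # start on the unit-amplitude orbit
    for _ in range(n_samples - 2):
        y.append(k * y[-1] - y[-2])
    return np.array(y)

k = 2 * np.cos(2 * np.pi * 0.01)         # oscillate at 0.01 * fs
y = resonator_tone(k, 4096)

# The spectrum peaks at the programmed frequency (bin ~41 of 4096)
# and the amplitude stays bounded: the recursion is lossless.
assert np.argmax(np.abs(np.fft.rfft(y))) == 41
assert np.max(np.abs(y)) < 1.01
```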
Lin and Liu [26] modified the technique so as to be able to implement multitone
waveforms. The core of their idea is to use time division multiplexing to accommodate
the additional tones. In order to maintain the same efficiency as the original scheme,
the master frequency has to be raised by a factor M (M being the number of tones).
Similarly, the order of the digital ΣΔ modulator that can be seen in the loop in
Figure 8.16 also has to be multiplied by a factor M. Actually, each delay element
in the original modulator has to be replaced by M delay elements. This scheme
is thus practical only for a reduced number of tones. The other solution [27, 28],
sketched in Figure 8.17, consists in recording in a recycling register (i.e., a 1-bit
shift register whose output is fed back to the input) a portion of a ΣΔ-encoded
signal.
The advantage with respect to the previous proposal is the flexibility of the encoded
signal, as the only a priori restriction is that the wanted signal be periodic with a maximum period equal to the length of the register. On the other hand, the drawbacks
Figure 8.17  Register-based oscillator: a recycling shift register loaded from a software ΣΔ modulator, followed by a low-pass filter
concern the trade-off between precision and extra area. Indeed, the wider the register,
the more precise the encoded signal and the larger the required silicon area. Nevertheless, they also proposed to reuse the boundary-scan register for the generator
shift register. This would provide a potentially large register with a low overhead.
Alternatively, a RAM available on chip could also be reused. Notice that it is important to optimize the recorded bit-stream to obtain the best results. The bit-stream
recorded in the shift register is a portion of the output bit-stream from a software
ΣΔ modulator encoding the wanted test stimulus. Optimization consists in choosing the
best performing bit-stream portion over the total bit-stream and in slightly varying
the software ΣΔ modulator input signal parameters to get the best results. Indeed,
the SFDR results of a ΣΔ modulator can vary significantly with the input signal amplitude. These proposals are well developed and alternative generation methods of the
bit-stream have been shown to improve the obtained signal precision.
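A software sketch of the recycling-register idea (Python/NumPy; the first-order software ΣΔ modulator and all numerical choices are our own, with no optimization of the recorded portion): a whole number of tone periods is encoded into a 1-bit register, which is then read out cyclically.

```python
import numpy as np

def sdm1(x):
    """Software first-order sigma-delta modulator, +-1 output."""
    v, y = 0.0, np.empty(len(x))
    for i, xi in enumerate(x):
        v += xi - (y[i - 1] if i else 0.0)
        y[i] = 1.0 if v >= 0 else -1.0
    return y

L = 4096                                  # recycling register length
J = 17                                    # whole tone periods per register
n = np.arange(L)
register = sdm1(0.5 * np.sin(2 * np.pi * J / L * n))

# Recycling the register: the bit-stream is read out cyclically,
# so the encoded signal is periodic with period L
stream = np.tile(register, 4)

# The 1-bit register still encodes the half-scale tone at bin J,
# with the shaped quantization noise well below it nearby
S = np.abs(np.fft.rfft(register)) / (L / 2)
assert S[J] > 0.4
assert np.all(S[J + 2:J + 40] < S[J] / 3)
```

A low-pass (or band-pass) analogue filter at the register output then removes the high-frequency quantization noise, as in Figure 8.17.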
In Reference 29 the authors took the idea of Gordon Roberts' team and built
a fourth-order ΣΔ oscillator in a field programmable gate array to demonstrate the
validity of the approach. Their oscillator was designed to avoid the need of multipliers
and required around 6000 gates. They achieved a dynamic range of more than 110 dB for
a tone at 4 kHz (the modulator master clock is set to 2.5 MHz).
For the output data analysis, Roberts' team also proposed a solution to extract
some dynamic parameters in association with their sine-wave generation mechanism. In
Reference 30 they compare three possible techniques. The first one is straightforward.
It consists in the implementation of an FFT engine. Although it provides a good precision, it is not affordable in the majority of cases. The second one consists in using
a standard linear regression to do a sine-fit on the acquired data. The same master
clock is used for the sampling process and the test stimulus generation. So the input
frequency is precisely known, which avoids the necessity of using a non-linear four-parameter search. The precision of the SNR calculation is similar to the FFT, but less
hardware is necessary. However, some multiplications need to be done in real-time
and some memory is also required to tabulate the values of the sine and cosine at the
test stimulus frequency. The third and last proposed solution is to use a digital notch
filter to remove the test signal frequency component and calculate the noise power
and a selective bandpass filter to calculate the signal power. The required hardware
8.5
8.5.1
Model-based testing
Model-based test concepts
Model-based testing has been developed with the objective of reducing the number
of measurements that have to be performed to exhaustively characterize a device.
For a ΣΔ converter, all performance figures (SNR, THD, CMRR, etc.) should be
measured at several operating conditions, varying temperature and polarization, and at
several input conditions, varying the test stimuli amplitudes and frequencies.
The authors of Reference 37 pointed out that a large number of these performance
figures are correlated. Indeed, it is reasonable to think that the THD for a given sine
wave will be strongly correlated with the THD for another sine wave for another
frequency and with the THD for the same sine wave but at another temperature. They
concluded that in many cases, a model could be built that relates the large number of
performance figures to a much reduced number of independent parameters. Retrieving
the independent parameters would give access to the whole set of performance figures
but it may be done at a cost much lower than that of measuring all the performance
figures in all the operating and stimulus conditions.
The key point of the approach was how to derive the correlations between the
performance figures and the independent parameters (i.e., how to build the model).
(8.11)
8.5.2
Figure 8.18  Converter under test driven by an external ramp generator: output code versus time for the ideal and actual responses, divided into the four test syndromes S0 to S3
(8.13)
The authors demonstrate that the coefficients of the polynomial can be written as a
function of the acquired syndromes
[b0]   [1/n   0       -4/(3n)   0          ] [ 1   1   1  1] [S0]
[b1] = [0     4/n^2    0        -16/(3n^2) ] [-1  -1   1  1] [S1]
[b2]   [0     0        16/n^3    0         ] [ 1  -1  -1  1] [S2]
[b3]   [0     0        0         128/(3n^4)] [-1   3  -3  1] [S3]

(8.14)
y(t) = b0 + b2 n^2/8 + (b1 n/2 + 3 b3 n^3/32) cos(ωt) + (b2 n^2/8) cos(2ωt) + (b3 n^3/32) cos(3ωt)   (8.15)
Performance parameters such as the offset, gain, second and third harmonics (further
noted A2 and A3 ) can be retrieved from the equation.
After inverting the matrices in Equation (8.14) and combining the result with
Equation (8.15), the proposed test can be formalized according to Equation (8.12),
obtaining:
[S0    ]   [n/4   -3n^2/32   7n^3/192   -15n^4/1024]
[S1    ]   [n/4    -n^2/32    n^3/192     -n^4/1024]
[S2    ]   [n/4     n^2/32    n^3/192      n^4/1024]   [b0]
[S3    ] = [n/4    3n^2/32   7n^3/192    15n^4/1024]   [b1]
[offset]   [1       0         n^2/8        0        ]   [b2]
[gain  ]   [0       n/2       0            3n^3/32  ]   [b3]
[A2    ]   [0       0         n^2/8        0        ]
[A3    ]   [0       0         0            n^3/32   ]

(8.16)
The reader should notice that this matrix inversion is unnecessary as Equation (8.14)
allows us to retrieve the polynomial coefficients directly from the syndromes. It has
been done only to illustrate that the proposed test can be seen as a model-based test.
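A numerical cross-check of the syndrome-to-coefficient relation (Python/NumPy; the matrix entries below follow our reading of Equations (8.14) and (8.16), so treat them as a reconstruction): the product of the scaling matrix and the +-1/+-3 combination matrix must invert the syndrome matrix exactly.

```python
import numpy as np

n = 256.0   # samples per ramp (hypothetical value)

# Syndrome matrix: S = Ms @ b (the upper half of Equation (8.16))
Ms = np.array([
    [n/4, -3*n**2/32, 7*n**3/192, -15*n**4/1024],
    [n/4,   -n**2/32,   n**3/192,   -n**4/1024],
    [n/4,    n**2/32,   n**3/192,    n**4/1024],
    [n/4,  3*n**2/32, 7*n**3/192,  15*n**4/1024]])

# Equation (8.14): scaling matrix times +-1/+-3 combination matrix
D = np.array([
    [1/n, 0,      -4/(3*n), 0],
    [0,   4/n**2,  0,       -16/(3*n**2)],
    [0,   0,       16/n**3,  0],
    [0,   0,       0,        128/(3*n**4)]])
T = np.array([
    [ 1,  1,  1, 1],
    [-1, -1,  1, 1],
    [ 1, -1, -1, 1],
    [-1,  3, -3, 1]])

# Round trip: D @ T inverts Ms, so b is recovered from the syndromes
assert np.allclose(D @ T @ Ms, np.eye(4))
b = np.array([0.01, 1.0, 1e-4, 1e-6])    # example polynomial
assert np.allclose(D @ T @ (Ms @ b), b)
```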
The application of the scheme described above to ΣΔ modulators is particularly
appealing. First of all, the four syndromes are acquired by accumulating the converter
output. For a ΣΔ converter, the operation can be performed directly on the modulator
bit-stream and thus only requires up-down counters. Moreover, it can be shown that
some non-idealities, such as amplifier settling error, map onto the transfer function in
a manner that is quite accurately approximated by a low-order polynomial. In some
8.5.3
Figure 8.19  Behavioural model-based test within the design flow: performance specifications (SNR, THD, SFDR, PSRR) are mapped by macro-block division onto behavioural model parameters (amplifier d.c. gain, slew-rate and bandwidth; comparator hysteresis; capacitor matching and linearity), then by architecture choice and transistor sizing onto the electrical and physical implementation (layout/fabrication), where real defects occur; verification relies on electrical and high-level simulation, designer expertise and heuristic search, and validation on characterization/functional test
and so on. Behavioural model-based test can thus be considered as hierarchical testing, and from that viewpoint, the approach is not so new [50]. Actually it has been
claimed [51] that inductive fault analysis for mixed-signal circuits should consider
macro performance degradations as fault classes. In other words, a behavioural model
level of abstraction is adequate for defect-oriented tests. In that sense, behavioural
model-based test offers valuable advantages for device debugging.
As was said before, the application of model-based test to DfT has to be focused on
relaxing the measurement requirements. This means that the behavioural parameters
have to be retrieved with simple tests. It has been shown in recent works that some
behavioural parameters, such as amplifier d.c. gain and settling errors (which are
related to slew rate and gain bandwidth), can be tested using digital stimuli that are
easily generated on chip [45]. The proposed tests can be roughly gathered in the set-up
of Figure 8.20.
The test stimuli are digital and can be generated on-chip at the cost of a linear
feedback shift register (LFSR) of only 6 bits. Those digital stimuli are then sent to the
modulator under test through the feedback DAC during the sampling phase. During
the integrating phase, the feedback DAC is driven by the modulator output, as usual.
That time-multiplexed use of the DAC is symbolized in Figure 8.20 by an extra input.
For the analysis of the test output, test signatures are computed by accumulating the
modulator output bit-stream minus the input sequence. This only requires a couple
of logic gates and an up-down counter.

Figure 8.20   Digital test set-up: the test stimuli are applied through the feedback DAC of the ADC under test; the output bit-stream is compared (XOR) with the input sequence and accumulated, via AND gates and z^-1 delay elements, in an up/down counter to produce the signature

The resulting signatures can be simply related to the modulator
behavioural parameters. However, the reader should notice that the test decision has to
be taken in the model parameter space. Indeed, the calculation of explicit performance
figures would require the simulation of the behavioural model. For device-debugging
purposes, the behavioural signatures should be shifted off chip. However, for test
purposes, tolerance windows have to be designed for each behavioural signature. The
performance specifications cannot be mapped onto the behavioural parameter space.
Hence, it is not possible to set the tolerance windows so as to obtain an equivalent functional test. However, behavioural parameters are closely related to the modulator
design flow. They correspond to performance specifications of the different macros.
When choosing one point of the design space at the behavioural model level, margins
are also set on those parameters, according to the variations of the process
parameters. Those margins can be used to establish the required tolerance window. For
instance, if an amplifier with a d.c. gain of 80 dB is considered necessary to meet
specifications, an amplifier with a nominal 90 dB d.c. gain will possibly be designed
such that in the worst-case process corner the d.c. gain is ensured to be higher than
80 dB. For test purposes, that 80-dB limit could serve as the parameter tolerance
window.
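The signature analyser of Figure 8.20 reduces to a few lines when modelled in software; this sketch assumes the bit-streams are given as 0/1 lists.

```python
def signature(output_bits, input_bits):
    """Accumulate (output - input) per sample, as the XOR gate plus
    up/down counter of Figure 8.20 does in hardware."""
    count = 0
    for o, x in zip(output_bits, input_bits):
        if o != x:                      # XOR flags a difference
            count += 1 if o else -1     # count up for 1/0, down for 0/1
    return count
```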
It is worth mentioning that this behavioural model-based solution is very attractive
in an SoC context as the different tests could be easily interfaced to a digital test bus
such as the IEEE 1149.1. Research has still to be done to cover more behavioural
parameters and extend the methodology to generic high-order architectures. The
digital tests proposed in References 43-45 apply to first- and second-order modulators
and their cascade combinations, and the results seem quite promising.
Using the set-up sketched in Figure 8.20, the first integrator leakage of a second-order modulator can be measured using a periodic digital sequence with a non-null
mean value. The mean value of the test sequence has to be calculated considering
that a digital 1 corresponds to the DAC positive level and a digital 0 to the DAC
negative level, which together define the modulator full scale. Leger and Rueda [43]
propose the use of a [1 1 1 1 1 0] sequence whose mean value is 2/3 for a (-1,
1) normalized full scale. The signature is also built according to the test set-up of
Figure 8.20. The differences between the modulator output bit-stream and the input
sequence are accumulated over a given number N of samples. The signature simply
senses how much the output bit-stream deviates from the test sequence:

signature = 4NQ(1 - p1)    (8.17)

with Q = 2/3.
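The mean value of the test sequence under the stated DAC mapping can be checked directly:

```python
seq = [1, 1, 1, 1, 1, 0]
# digital 1 -> DAC positive level (+1), digital 0 -> negative level (-1)
levels = [1 if b else -1 for b in seq]
Q = sum(levels) / len(levels)  # 2/3 for this sequence
```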
The term (1 - p1) is the first integrator pole error. This pole error can be directly
related to the d.c. gain of the integrator amplifier [6]. It can be seen that the error
term is independent of the number of acquired samples N. This implies that the
correct determination of the pole error requires a number of samples that is inversely
proportional to the pole error, and thus, to a first approximation, proportional to the
amplifier nominal d.c. gain.
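The stated proportionality can be turned into a rough sample-count estimate. The relation pole error ≈ 1/A used below is an assumed first-order approximation, not a formula from the text.

```python
def min_samples(dc_gain_db, counts=1):
    """Samples needed for the signature to register `counts` counts,
    assuming pole_error ~ 1/A and signature ~ N * pole_error."""
    A = 10 ** (dc_gain_db / 20)   # linear d.c. gain
    return counts * A             # N grows with the nominal gain
```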
A very similar test is provided in Reference 43 to test integrator leakage in first-order modulators. It has been demonstrated in Reference 52 that a first-order
modulator is transparent to digital sequences: the output bit-stream strictly follows
the input sequence. This effect is even stabilized by integrator leakage. The authors
propose to add an extra delay in the digital feedback path (a simple D-latch on the
DAC control switches) during test mode. With this modification, it is shown that the
test set-up of Figure 8.20 provides a signature proportional to the integrator pole error.
An additional condition is also set on the digital sequence: it has to be composed of L
ones and a single zero, with L greater than five. For hardware simplification, the same
sequence as above ([1 1 1 1 1 0]) can be used:
signature = 4N(1 - p) / [4 ln((3L - 5)/(L - 5))]    (8.18)

with L = 6 for the sequence above.
The d.c. gain non-linearity of the first amplifier of a ΣΔ modulator can cause harmonic
distortion. The error associated with d.c. gain non-linearity in amplifiers located further
in the loop is usually negligible because it is partially shaped by the loop filter.
In a second-order modulator, it can be shown that the first integrator output
mean value is proportional to the input mean value. The output of the integrator is the
output of the amplifier, so it can be expected that the effective d.c. gain of the amplifier
varies with the input mean value. As a result, the integrator pole error also depends
on the input mean value. The test of d.c. gain non-linearity for the first amplifier in a
second-order modulator simply relies on repeating the leakage test with two different
sequence mean values: typically a small one (denoted Q_s) and a large one (denoted Q_l).
In the ideal case, if the amplifier d.c. gain is linear, the obtained signatures should
follow the ratio:
signature_l / signature_s = Q_l / Q_s    (8.19)
In the presence of non-linearity, the effective d.c. gain for the sequence of large mean
value A_l should be lower than the effective gain for the sequence of small mean value
A_s. As a result, the pole error for Q_l should be greater than for Q_s. Thus, it can be
written as

signature_l / signature_s = (Q_l A_s) / (Q_s A_l)    (8.20)

Figure 8.21   Modified clocking for the settling test: master clock, sampling phase and integrating phase
Notice that the signature has to be acquired over a large enough number of points
that the deviation of the effective gain from the actual gain is sensed. Typically, if a
variation of 1 per cent of the effective gain from the nominal gain has to be sensed,
it could be necessary to acquire 100 times the number of acquired points to test
the nominal gain (i.e., the leakage). Fortunately, the distortion induced by the non-linearity of the amplifier d.c. gain also depends on the nominal d.c. gain. In other
words, if the d.c. gain is non-linear but very high, then the non-linearity will induce a
distortion that will fall below the noise floor of the converter. Only the non-linearities
associated with a low-nominal d.c. gain will have a significant impact. Translating
this information to the test, it means that acquiring a very large number of points to
detect d.c. gain non-linearity makes little sense, as it corresponds to a deviation that
has no impact on the modulator precision.
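The linearity check of Equation (8.19) amounts to comparing a measured signature ratio against the ideal Q ratio; the tolerance value below is an arbitrary placeholder, not a figure from the text.

```python
def gain_is_linear(sig_l, sig_s, q_l, q_s, tol=0.05):
    """True if the signature ratio matches Q_l/Q_s within `tol`
    (relative); a larger measured ratio suggests d.c. gain non-linearity."""
    ideal = q_l / q_s
    measured = sig_l / sig_s
    return abs(measured - ideal) / ideal <= tol
```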
The test for integrator settling errors (which are related to amplifier slew-rate
and gain-bandwidth product) introduced in Reference 44 is the same for both first- and second-order modulators but requires modification of the modulator clocking
phase. The test sequence is a pseudo-random sequence that can be generated with
a 6-bit LFSR as shown in Figure 8.20. For a one-valued input sample, the clock
phases are doubled (their duration is two master clock periods), and for a zero-valued
input sample they remain unchanged (their duration is one master clock period). The
clocking modification is illustrated in Figure 8.21.
This input-dependent clocking unbalances the integrator settling error.
For a one-valued input sample, the integrator has time to fully settle but not for a
zero-valued input sample. The unbalanced input-referred difference is sensed by the
signature analyser and accumulated over N samples. To get rid of any offset, another
acquisition has to be done inverting the clocking rule: the phases are doubled for a
zero-valued input sample and remain the same for a one-valued input sample. The
results of the two acquisitions are combined to give the offset-insensitive signature:
signature = 4e_r(N/2) + 3e_r^2(N/2)    (8.21)
Figure 8.22   Digital test input through a duplicated DAC: switches S1-S3, clocked by phases 1 and 2 with references ±Vref, inject the test sequence during the sampling phase; the nominal input is disabled in test mode, the duplicated DAC is disabled in normal operation, and the DAC is controlled by the modulator output bit-stream
The term e_r corresponds to the settling error committed by the integrator for a one-valued input sample and a zero-valued feedback. This corresponds to the largest step
that can be input to the integrator.
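The input-dependent clocking rule, and the inverted rule used for the offset-cancelling second acquisition, can be modelled as a duration map over the test bits:

```python
def phase_durations(bits, inverted=False):
    """Clock-phase duration (in master-clock periods) per input sample:
    doubled for a 1 (or, with the rule inverted, for a 0)."""
    double_on = 0 if inverted else 1
    return [2 if b == double_on else 1 for b in bits]
```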
The clocking modification can be implemented on chip at low cost. A simple
finite-state machine is required that consists of an input-dependent frequency divider
(a 2-bit counter). The obtained digital signal is then converted to usable clock phases
by a standard non-overlapping clock generator.
In order to perform all the above-explained digital tests, the schematic of the
modulator should be modified, basically to allow digital test inputs [4345]. There
exist two straightforward solutions. The first one consists of disabling the nominal
input of the integrators and duplicating the DAC to send the test sequence during the
sampling phase. This is illustrated in Figure 8.22, where switches S1-S3 form the
duplicated DAC.
This approach has the advantage of being very easy to implement. However, a
drawback is that it adds extra switch parasitics to the input node. To avoid this issue,
Leger and Rueda [43] propose to reuse the feedback DAC during the sampling phase
to input the test sequence. The nominal input is disconnected and the feedback switch
is kept closed during both sampling and integrating phases. Only the DAC control
has to be modified to accommodate the double-sampling regime. This is illustrated
in Figure 8.23.
This other solution does not alter the analogue signal path but does put more
emphasis on the timing control of the DAC. Figure 8.23 shows a possible implementation of the DAC control with transmission gates for the sake of clarity but other
solutions may give better results. Notice that in both cases, the modifications can
easily be introduced in the modulator design flow and do not assume the addition of
complex elements such as buffers.
Figure 8.23   Digital test input through reuse of the feedback DAC: the nominal input is disabled in test mode, the feedback switch is closed in test mode, and the DAC control (phases 1 and 2, references ±Vref) is time-multiplexed between the test sequence and the modulator output bit-stream
It should be noticed that the two modifications described above can be realized
on any integrator, which means that a digital test sequence can be input at any
feedback node. In the case of a second-order modulator, by disabling the nominal
input of the second integrator and enabling the digital test input, an equivalent first-order modulator is obtained. This is symbolized in the diagram of Figure 8.24, where
coefficients a2 and b2 are duplicated to represent the additional test-mode input. Thus,
the tests developed for first-order modulators can be used to test defects in the second
integrator of the reconfigured second-order modulators, without significant impact.
The proposed tests have been validated extensively by simulation. These simulations have been carried out in MATLAB using a behavioural model that implemented
most of the non-idealities described in Reference 48. The test signatures were shown
to be accurate for the isolated effects, only varying the parameter of interest and
maintaining the others at their nominal values [43, 44]. Simulations varying all the
parameters at the same time have also been realized [45]. In that case, the whole set
of proposed tests were performed. It was shown that the whole set of tests provided
high fault coverage if the test limits were set to the expected values of the signatures,
according to the nominal values of the behavioural parameters. Actually, 100 per cent
of the faults that affected the behavioural parameters involved in the proposed tests
were detected.
As a test methodology, behavioural model-based test for ΣΔ modulators has
a great potential, in particular for converters embedded in SoCs. Indeed it opens
the door to structural tests that can relax hardware requirements but maintain a
close relation to the circuit functionality, and also device-debugging capabilities.
It can be considered as a trade-off between functional and defect-oriented tests. In
its current development state, digital tests have been proposed to simply evaluate
integrator leakage and settling errors. These digital tests do not alter the modulator topology and rely on proper modulation. As a result, they have the ability,
Figure 8.24   z-Domain representation of the test sequence input in a second-order modulator (a) to the first integrator and (b) to the second integrator: integrators z^-1/(1 - z^-1), coefficients a1, a2, b1 and b2, with a2 and b2 duplicated for the test-mode input and the disconnected part indicated in each case
beyond the behavioural parameter evaluation, to detect any catastrophic error in the
modulator signal path. Research has to be done to detect more behavioural parameters such as branch coefficient mismatches or non-linear switch-on resistance, for
example. Similarly, the digital test methodology should be extended to higher-order
architectures.
8.6 Conclusions
In this chapter, we have tried to provide insights into ΣΔ modulator tests. It has
been shown that the ever-increasing levels of functionality integration, the ultimate
expression of which is SoC, raise new problems on how to test embedded components
such as ΣΔ modulators. These issues may even compromise the test feasibility, or at
least they may displace test time from its prominent position in the list of factors that
determine the overall test cost.
Table 8.1 summarizes the information contained in the chapter. It is clear that
considerable research is still necessary to produce a satisfying solution, but the first
steps are encouraging. In particular, we believe that solutions based on behavioural
model-based BIST may greatly simplify the test requirements.
Table 8.1   Summary of ΣΔ modulator test approaches

Characterization, static parameters (histogram, servo-loop): exhaustive characterization requires a large amount of time; INL and DNL should be related to transitions in ΣΔ modulators.
Characterization, dynamic parameters (sine-fit, FFT): requires the input of a precise stimulus; requires complex DSP.
Functional test (References 25-31; Reference 32): requires the input of a precise stimulus.
Defect-oriented test: reconfiguration [33]; OBT [34]; NTF [35]; pseudo-random [36].
Model-based test: standard approach [37-40]; ad-hoc model-based BIST [41, 42]; behavioural model-based BIST [43-45].
8.7 References
Chapter 9
9.1 Introduction
Phase-locked loops (PLLs) are incorporated into almost every large-scale mixed-signal and digital system on chip (SoC). Various types of PLL architectures exist
including fully analogue, fully digital, semi-digital and software based. Currently,
the most commonly used PLL architecture for SoC environments and chipset applications is the charge-pump (CP) semi-digital type. This architecture is commonly
used for clock-synthesis applications, such as the supply of a high-frequency on-chip
clock, which is derived from a low-frequency board-level clock. In addition, CP-PLL architectures are now frequently used for demanding radio-frequency synthesis
and data synchronization applications. On-chip system blocks that rely on correct
PLL operation may include third-party intellectual property cores, analogue-to-digital
converters (ADCs), digital-to-analogue converters (DACs) and user-defined logic.
Basically, any on-chip function that requires a stable clock will be reliant on correct PLL operation. As a direct consequence it is essential that the PLL function
is reliably verified during both the design and debug phase and through production
testing.
This chapter focuses on test approaches related to embedded CP-PLLs used for
the purpose of clock generation for SoC. However, methods discussed will generally
apply to CP-PLLs used for other applications.
9.1.1
The CP-PLL architecture of Figure 9.1 consists of a phase detector, a CP, a loop
filter (LF), a voltage-controlled oscillator (VCO) and a feedback divider (N). The
phase frequency detector (PFD) senses the relative timing differences between the
Figure 9.1   CP-PLL block diagram: phase detector (PFD, gain KPD) with UP/DN outputs switching CP currents +ICH/-ICH, loop filter F(s) producing control voltage Vc, VCO (gain KVCO), and a divide-by-N feedback path; Fosc = N × PLLREF
edges of the reference clock and VCO clock (feedback clock) and applies charge-up
or charge-down pulses to the CP that are proportional to the timing difference. The
pulses are most commonly used to switch current sources, which charge or discharge
a capacitor in the LF. The voltage at the output of the LF is applied to the input
of the VCO, which changes oscillation frequency as a function of its input voltage.
Note that ideally when the feedback and reference clocks are equal, that is, they are
both phase and frequency aligned, the CP transistors will operate in such a way as
to maintain the LF voltage at a constant value. In this condition, the PLL is locked
which implies that the output signal phase and frequency is aligned to the input within
a certain limit. Note that the feedback division block forces the VCO output frequency
to an integer multiple of the frequency present on the reference input (PLLREF). It
follows that when the PLL is in its locked state:
Fout = N × PLLREF    (9.1)
In Figure 9.1, the following conversion gains are used for the respective blocks.
KPD = phase detector gain = Ich/2π (A rad^-1)
F(s) = LF transfer function
KVCO = VCO gain (rad s^-1 V^-1)
Using feedback theory, the generalized transfer equation in the Laplace domain
for the system depicted in Figure 9.1 is
H(s) = (KPD KVCO F(s)/s) / (1 + KPD KVCO F(s)/(sN))    (9.2)
Note that by substituting suitable values for N and F(s), Equation (9.2) will generally
apply to any-order PLL system [1]. Specific transfer equations are provided as part
of the LF description.
Figure 9.2   Type-4 edge-sensitive PFD: two resettable flip-flops clocked by PLLREF and PLLFB generate the PFDUP and PFDDN outputs to the CP
It must be noted that, even for the case of a CP-PLL, the implementation details
for the blocks may vary widely; however, in many applications, designers attempt
to design the PLL to exhibit the response of a second-order system. This is owing
to the fact that second-order systems can be characterized using well-established
techniques. The response of a second-order CP-PLL will be generally considered in
this chapter [24].
A brief description of each of the blocks now follows. Further, basic principles of
CP-PLL operation are given in References 1, 3, 5 and 6.
9.1.1.1 Phase frequency detector
The phase detector most commonly used in CP-PLL implementations is the type-4 edge-sensitive PFD. The PFD may be designed to operate on rising or falling
edges. For the purpose of this discussion, it will be assumed that the PFD is rising
edge-sensitive. A schematic of this type of PFD is shown in Figure 9.2.
In Figure 9.2, PFDUP and PFDDN represent the control signals for the up and
down current sources, respectively. When considering the operation of the PFD, it
is also useful to consider the change in VCO output frequency. Considering phase
alignment of the PFD input signals, φi will be used to designate the instantaneous
phase of PLLREF and φFB will be used to designate the instantaneous phase of the
PLLFB signal. Using this convention and with reference to Figures 9.2 and 9.3, the
PFD operation is now explained.
1. φFB(t) leads φi(t): the LF voltage falls and the VCO frequency falls to try to reduce
the difference between φi(t) and φFB(t).
2. φi(t) leads φFB(t): the LF voltage rises and the VCO frequency rises to try to reduce
the difference between φi(t) and φFB(t).
3. φi(t) coincident with φFB(t): the PLL is locked and in its stable state.
Figure 9.3   PFD timing waveforms: PFDUP, PFDDN and VCO output frequency for φi(t) leading, lagging and equal to φFB(t)
9.1.1.2 Charge pump and loop filter
For the series R1-C1 filter of Figure 9.4, the LF transfer function is

F(s) = R1 + 1/(sC1)    (9.3)

With this filter, the closed-loop phase transfer function takes the classical second-order form

φO(s)/φi(s) = (2ζωn s + ωn^2)/(s^2 + 2ζωn s + ωn^2)    (9.4)

where

ωn = √(KO IP/(2πNC1))    (9.5)

ζ = (R1C1/2) ωn = (R1C1/2) √(KO IP/(2πNC1))    (9.6)
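Assuming the standard second-order CP-PLL relations ωn = √(KO·IP/(2πNC1)) and ζ = R1·C1·ωn/2, the loop parameters follow directly from the component values; the numeric values below are illustrative only.

```python
import math

def pll_params(Ko, Ip, N, C1, R1):
    """Natural frequency (rad/s) and damping of a second-order CP-PLL,
    assuming wn = sqrt(Ko*Ip/(2*pi*N*C1)) and zeta = R1*C1*wn/2."""
    wn = math.sqrt(Ko * Ip / (2 * math.pi * N * C1))
    zeta = R1 * C1 * wn / 2
    return wn, zeta
```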
It must be mentioned that for CP-PLLs, in general, and for embedded CP-PLLs,
specifically, the LF node can be considered as the critical controlling node of the PLL.
Any noise coupled into this node will generally manifest itself as a direct instantaneous
Figure 9.4   CP and LF: UP/DN inputs switch transistors M2/M1 to steer currents I1/I2 into the series R1-C1 loop filter, generating VCTRL
alteration of the VCO output frequency, this action will be observed as PLL output
jitter. Consequently, PLL designers usually spend a great deal of design effort in
screening this node. In addition, correct LF operation is essential if the PLL is to
function properly over all desired operational ranges. Embedded LFs usually include
one or more large area MOSFET capacitors. These structures may be sensitive to spot
defects, such as gate oxide shorts [7].
Matching of the CP currents is also a critical part of PLL design. Leakage and
mismatch in the CP will lead to deterministic jitter on the PLL output.
9.1.1.3 Voltage controlled oscillator
For embedded CP-PLL configurations, the VCO is usually constructed as a current-starved ring oscillator structure. This is primarily due to the ease of implementation
in CMOS technologies. The structure may be single-ended or differential, with
differential configurations being preferred due to their superior noise-rejection capabilities. A typical single-ended current-starved ring oscillator structure is illustrated
in Figure 9.5.
In this circuit, VCTRL is the input control voltage taken from the LF node and
Fout is the VCO output signal. Note that to prevent excessive loading of the VCO, its
output is usually connected to buffer stages.
Figure 9.5   Single-ended current-starved ring oscillator: inverter stages built from transistors M1-M14, loaded by 5 fF capacitors C1-C3, with bias set by the control voltage VCTRL and output Fout
The transfer gain of the VCO is found from the ratio of output frequency deviation
to a corresponding change in control voltage. That is
KVCO = (F2 - F1)/(V2 - V1)  (MHz/V) or (rad s^-1 V^-1)    (9.7)
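Equation (9.7) applied to two measured operating points; the voltages and frequencies below are illustrative values, not data from the text.

```python
def vco_gain(f1, v1, f2, v2):
    """K_VCO from two measured (output frequency, control voltage) points."""
    return (f2 - f1) / (v2 - v1)
```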
9.1.2
Important functional characteristics that are often stated for CP-PLL performance are
listed below:
Table 9.1   Initial analysis of testing issues for the PLL sub-blocks

PLL block | Direct access/modification | At-speed testing required | Commonly suggested fault models
(1) PFD | Yes | Yes | *
(2) CP | No | Yes | *
(3) LF | No | Yes | *
(4) OSC | Yes | Yes | *
(5) DIV | Yes | Yes | *

* MOS transistor catastrophic faults: gate-to-drain shorts; gate-to-source shorts; drain opens; source opens.
lock time
step response time
overshoot
loop bandwidth (3 dB)
output jitter.
All of the above parameters are interrelated to a certain extent. For example, the
loop bandwidth will have an effect on the PLL output jitter. However, loop bandwidth,
lock time, overshoot and step response time are also directly related to the natural
frequency and damping of the system. It must be mentioned that certain non-idealities
or faults may contribute to further jitter on the PLL output or increased lock time.
Examples of typical measurements for these parameters are provided in later sections.
Table 9.1 provides an initial analysis of testing issues for the PLL sub-blocks.
Fault models are suggested for use in fault coverage calculations for each of the
blocks. Further research and justification for the use of fault models in the key PLL
sub-blocks are given in References 713.
Note also that the fault models suggested in Table 9.1 can also be used to assess
the fault coverage of built-in self-test (BIST) techniques. It should be noted, however,
that many fault types are related to the structure realization of the PLL hence these
guidelines should be used with care. Faults that may be implementation-dependent
include:
Figure 9.6   Jitter distributions: an ideal distribution of random jitter centred on the nominal timing, and a skewed distribution due to a constant deterministic phase offset, plotted against increasing negative and positive deviation
lead to excessive jitter in the PLL output. Jitter may be divided into two main classes
as follows:
Jitter(peak-to-peak) = max(ΔTi) - min(ΔTi)    (9.8)

Jitter(RMS) = √((1/N) Σ(i=1..N) ΔTi^2)    (9.9)
In both of the above equations, N represents the total number of samples taken and
Ti represents the time dispersion of each individual sample.
For clock signals, jitter measurements are often classified in terms of short-term
jitter and long-term jitter. These terms are further described below:
Short-term jitter: This covers short-term variations in the clock signal output period.
Commonly used terms include:
Period jitter: This is defined as the maximum or minimum deviation (whichever
is the greatest) of the output period from the ideal period.
Cycle-to-cycle jitter: This is defined as the period difference between consecutive
clock cycles, that is, cycle-to-cycle jitter = [period(n) - period(n - 1)]. It must
be noted that cycle-to-cycle jitter represents the upper bound for the period jitter.
Duty cycle distortion jitter: This is the change in the duty cycle relative to the
ideal duty cycle. The relationship often quoted for duty cycle is
Duty_cycle = Highperiod/(Highperiod + Lowperiod) × 100 (%)    (9.10)
where Highperiod is the time duration when the signal is high during one cycle of
the waveform and Lowperiod is the time duration when the signal is low over one
period of the measured waveform. In an ideal situation, the duty cycle will be 50
per cent; the duty cycle distortion jitter measures the deviation of the output
waveform duty cycle from this ideal position. A typical requirement for duty cycle
jitter is that it should be within 45 to 55 per cent [14, 15].
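The short-term jitter definitions above map one-to-one onto simple computations over a list of measured periods; the sample data in the checks are illustrative.

```python
def period_jitter(periods, ideal):
    """Largest absolute deviation of any period from the ideal period."""
    return max(abs(p - ideal) for p in periods)

def cycle_to_cycle_jitter(periods):
    """Largest |period(n) - period(n-1)| over consecutive cycles."""
    return max(abs(b - a) for a, b in zip(periods, periods[1:]))

def duty_cycle(high_period, low_period):
    """Duty cycle percentage, Equation (9.10)."""
    return 100.0 * high_period / (high_period + low_period)
```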
The above jitter parameters are often quoted as being measured in terms of degrees of
deviation with respect to an ideal waveform. Another metric often encountered is that
of a unit interval (UI), where one UI is equivalent to 360°. A graphical representation
of a UI is given in Figure 9.7.
Long-term jitter: Provides a measure of the long-term stability of the PLL output;
that is, it represents the drift of the clock signal over time. It is usually specified
over a certain time interval (usually a second) and expressed in parts per million. For
example, a long-term jitter specification of 1 ppm would mean that a signal edge is
allowed to drift by 1 μs from the ideal position in 1 s.
Figure 9.7   Graphical representation of a UI

Figure 9.8   Ideal reference signal (N - 1 periods)

jitter = (1 UI/1000) × 2.6 = (1.608 ns/1000) × 2.6 = 4.1808 ps    (9.12)
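The arithmetic of Equation (9.12) generalizes to a small converter; interpreting the 2.6/1000 factor as a jitter figure of 2.6 milli-UI is an assumption.

```python
def milli_ui_to_seconds(ui_seconds, milli_ui):
    """Convert a jitter figure expressed in milli-UI to seconds,
    given the UI duration in seconds."""
    return ui_seconds / 1000 * milli_ui
```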
9.2
This section will explain traditional or commonly employed CP-PLL test techniques
that are used for the evaluation of PLL operation. Many of the techniques will be
applicable to an analogue or semi-digital type of PLL or CP-PLL; however, it must
be recognized that although the basic principles may hold, the test stimuli may have
to undergo slight modification for the fully analogue case. The section is subdivided
into two subsections, focusing on characterization and production test techniques,
respectively.
9.2.1
In this subsection we will review typical techniques that are used to characterize a PLL
system. Characterization in this context will refer to measurements made by the PLL
designer upon an initial test die for the purpose of verifying correct circuit functionality and allowing generation of the device data sheet [18]. Characterization-based
tests usually cover a greater number of operational parameters than those carried
out for production test. Also, they can be carried out using special-function test equipment and hardware, as opposed to general-purpose production test
equipment.
9.2.1.1 Operational-parameter-based measurements
The key parameter-based measurements employed for CP-PLL verification generally
consist of the following tests:
1. Lock range and capture range.
2. Transient response.
Correct operation from power up of the system incorporating the PLL. This
test is often ascertained using a frequency lock test (FLT).
Correct step response of the system, when the PLL is switched between
two frequencies or phases.
3. Phase transfer function (or jitter transfer function monitoring).
To ascertain correct 3 dB bandwidth of the PLL system.
To ascertain correct phase response of the PLL system.
The above tests can be considered to be full functionality tests as they are
carried out upon the whole PLL system. It must also be mentioned that for
second-order systems, both of the above techniques can be used to extract
defining parameters such as ωn (natural frequency) and ζ (damping).
Capture range: Refers to the range of frequencies that the PLL can lock to when
lock does not already exist.
Lock range: The range of frequencies over which the PLL can remain locked after lock
has been achieved.
For certain applications, these parameters are particularly important. For instance,
lock range would need to be evaluated for frequency demodulation applications.
When considering edge-sensitive CP-PLLs the lock range is usually equal to the
capture range.
For a CP-PLL synthesizer, the lock range would be ascertained in the following
manner for a single division ratio:
1. The CP-PLL would initially be allowed to lock to a reference frequency that
is in the correct range for a particular divider setting.
2. The reference frequency would be slowly increased until the CP-PLL can no
longer readjust its output to keep the PFD's inputs phase aligned.
3. When the CP-PLL fails to acquire constant lock, the reference frequency is
recorded.
This sequence is often aided by use of lock detect circuitry, which is used to provide
a digital output signal when the PLL has lost lock.
9.2.1.1.2 Transient-type response monitoring
Frequency lock test. An initial test that is carried out before more elaborate tests are
employed is the FLT. This test simply determines whether the PLL can achieve a
Figure 9.9   Frequency lock test: reference clock and PLL output, with the time taken to achieve lock marked as Tlock
stable locked condition for a given operational configuration. Stability criteria will
be determined by the application and may consist of an allowable phase or frequency
error at the time of measurement. Typically, this test is carried out in conjunction
with a maximum specified time criterion, that is, if the PLL has failed to achieve lock
after a specified time, then the PLL is faulty. The start of test initiation for the FLT
is usually taken from system startup. It is common for this test to be carried out for
various PLL settings, such as, maximum and minimum divider ratios, different LF
settings, and so on. Owing to its simplicity and the fact that it will uncover many hard
faults and some soft faults in the PLL, this test is often used in many production test
applications. A graphical description of the FLT is given in Figure 9.9.
In the above diagram, T0 represents the start of the test and Tlock indicates the
time taken to achieve lock.
In many applications, the output frequency is simply measured after a predetermined time; this is often the case in automated-test-equipment-based test schemes,
where the tester master clock would be used to determine the time duration. Alternatively, in some situations, the PLL itself is fitted with LD (lock detect) circuitry
that produces a logic signal when the PLL has attained lock [20]. In this situation, a
digital counter is started at T0 and stopped by the LD signal, thus enabling accurate
lock time calculations to be made. Note that LD circuitry is not test specific, as it
is often included in PLL circuits to inform other system components when a stable clock signal is available. However, sometimes an LD connection is fitted solely
for design-for-testability (DfT) purposes. It must also be mentioned that, in certain
PLL applications, it may be acceptable to access the LF node. If this is the case, the
approximate settling time of the PLL can be monitored from this node. This technique
is sometimes used for characterization of chipset PLLs; however, owing to problems
Figure 9.10   Step response test set-up: an FSK source toggling between F1 and F2 drives the PLL (PFD, LF, VCO, divide-by-N), and the response is observed at the LF output
outlined in the previous sections, it appears to be less commonly used for test of fully
embedded PLLs.
Step response test. The step response monitoring of PLLs is a commonly used bench
characterization technique [2]; the basic hardware set-up is shown in Figure 9.10.
Further details relating to Figure 9.10 are given below:
The input signal step is applied by using a signal generator set-up capable of
producing a frequency shift keying (FSK) signal. The signal is toggled periodically
between F1 and F2 . Note that a suitable toggling frequency will allow the system
to reach the steady-state condition after each step transition.
If an external LF is used, it is sometimes possible to measure the output response
from the LF node. The signal measured at this node will be directly proportional
to the variation in output frequency that would be observed at the PLL's output.
Also note that as the VCO output frequency is directly proportional to the LF voltage, the step response can also be measured at the VCO output. In fact, this is the
technique that must be employed when LF access is prohibited. However, this technique can only be carried if test equipment with frequency trajectory (FT) probing
capabilities is available. This type of equipment allows a plot or oscilloscope trace of
instantaneous frequency against time to be made, thus providing a correct indication
of the transient step characteristics. Many high-specification bench-test equipment
products incorporate FT functions, but it is often hard to incorporate the technique
into a high-volume production test plan.
An alternative method of introducing a frequency step to the system involves
switching the feedback divider between N and N + 1. This method will produce
an output response from the PLL that is equivalent to the response that would be
observed for application of an FSK input frequency step equal to the PLL's reference
frequency. The technique can be easily verified with reference to Equation (9.1).
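This equivalence can be illustrated with a minimal sketch, assuming the standard locked-loop relation Fout = N · Fref (the function name and values below are illustrative):

```python
def pll_output_frequency(f_ref_hz: float, n: int) -> float:
    """Locked-loop relation: the output frequency is N times the reference."""
    return n * f_ref_hz

f_ref, n = 1e6, 100  # illustrative 1 MHz reference, divide-by-100
step = pll_output_frequency(f_ref, n + 1) - pll_output_frequency(f_ref, n)
print(step)  # → 1000000.0, i.e. the induced step equals f_ref
```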
[Figure 9.11: Graphical representation of a second-order step response, frequency (MHz) against time (s), showing Vstart (Fstart), Vstop (Fstop), ΔV (ΔF), VoutSS (FoutSS), peak overshoot A1, peak undershoot A2, peak spacing ΔT and the settling time]
The step response can be used to make estimates of the parameters outlined
in Section 9.1. To further illustrate the technique, a graphical representation for a
second-order system step response is provided in Figure 9.11.
In Figure 9.11, the dashed line indicates the application of the input step parameter,
and the solid line indicates the output response. Note that the parameters of interest
are shown as V -parameters and F-parameters to indicate the similarity between a
common second-order system response and a second-order PLL system response. An
explanation of the parameters is now given.
Vstart (Fstart): the voltage or frequency before the input step is applied.
Vstop (Fstop): the final value of the input stimulus signal.
ΔV (ΔF): the amount by which the input signal is changed.
VoutSS (FoutSS): the final steady-state output value of the system.
Settling time: the amount of time it takes, after the application of the input step, for the system to reach its steady-state value.
A1: peak overshoot of the signal.
A2: peak undershoot of the signal.
ΔT: the time difference between the consecutive peaks of the transient response.
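The parameters above can be extracted from a sampled response. The following sketch is illustrative rather than the text's own procedure: it uses a synthetic, simplified second-order waveform, hypothetical helper names and an assumed 2 per cent settling band:

```python
import math

def step_response_params(t, y, y_ss, tol=0.02):
    """Extract A1 (peak overshoot), A2 (peak undershoot), dT (time between
    the two consecutive peaks) and settling time from a sampled response.
    y_ss is the steady-state value; tol sets the +/- settling band."""
    dev = [v - y_ss for v in y]
    i_max = max(range(len(dev)), key=lambda i: dev[i])         # overshoot peak
    i_min = min(range(i_max, len(dev)), key=lambda i: dev[i])  # next undershoot
    a1, a2 = dev[i_max], -dev[i_min]
    dT = t[i_min] - t[i_max]
    band = tol * abs(y_ss)
    outside = [i for i, v in enumerate(dev) if abs(v) > band]
    t_settle = t[outside[-1]] if outside else t[0]
    return a1, a2, dT, t_settle

# synthetic underdamped response (simplified second-order shape)
zeta, wn = 0.3, 2 * math.pi * 1e3
wd = wn * math.sqrt(1 - zeta ** 2)
t = [i * 1e-6 for i in range(10000)]
y = [1 - math.exp(-zeta * wn * ti) * math.cos(wd * ti) for ti in t]
a1, a2, dT, ts = step_response_params(t, y, 1.0)
```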
Direct measurement of these parameters can be used to extract ωn and ζ. Estimation of the parameters is carried out using the following formulas, which are taken from Reference 2 and are also found in many control texts [4]. The formulas are valid only for underdamped systems, that is, those in which A1, A2 and hence ΔT can be measured. If this is not the case, other parameters, such as delay time or rise time, can be used to assess the system performance. This is adequate for many applications, where what is really desired is an overall knowledge of the transient shape of the step response.

[Figure 9.12: Bode plot of a unity-gain second-order transfer function, showing the 0 dB asymptote, the -3 dB frequency and the peak frequency ωp]
The damping factor can be found as follows:

ζ = ln(A1/A2) / √(π² + (ln(A1/A2))²)    (9.13)

and the natural frequency follows from the peak spacing:

ωn = π / (ΔT √(1 - ζ²))    (9.14)
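Assuming the reconstructed forms of Equations (9.13) and (9.14) above, with ΔT read as the half-period between overshoot and undershoot, a short helper (name illustrative):

```python
import math

def damping_and_natural_freq(a1: float, a2: float, dT: float):
    """Estimate zeta and omega_n from peak overshoot a1, peak undershoot a2
    and the time dT between the two consecutive peaks (Eqs (9.13), (9.14))."""
    delta = math.log(a1 / a2)
    zeta = delta / math.sqrt(math.pi ** 2 + delta ** 2)   # Eq (9.13)
    omega_n = math.pi / (dT * math.sqrt(1.0 - zeta ** 2)) # Eq (9.14)
    return zeta, omega_n
```

For an ideal second-order response the logarithmic decrement ln(A1/A2) equals πζ/√(1 - ζ²), so the original ζ is recovered exactly.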
Furthermore, PLL system theory and control system theory texts [2–4] also contain normalized frequency and phase step response plots, where the amplitude and time axes are normalized to the natural frequency of the system. Design engineers
commonly employ these types of plots in the initial system design phase.
9.2.1.1.3 Transfer function monitoring
In many applications, a PLL system is designed to produce a second-order response. It
must be noted that although second-order systems are considered here, measurement
of the transfer functions of higher-order PLLs can provide valuable information about
system operation and can be achieved using the methods explained here.
A Bode plot of the transfer function of a general unity gain second-order system
is shown in Figure 9.12.
Typical parameters of interest for a second-order system are highlighted in
Figure 9.12; these are now explained in context with a PLL system. 0 dB asymptote.
For a unity gain system, within the -3 dB frequency (see below), the magnitude response is approximately 0 dB. The phase shift between the input and output waveforms at a given modulation frequency can be obtained from the time difference ΔT between corresponding edges over one cycle of period Tcycle:

Δφ = 360° × ΔT / Tcycle    (9.15)
[Figure 9.13: Phase transfer measurement set-up, a phase-modulated input signal for one frequency applied to the PLL (PFD, LF, VCO), with the phase variations of the input and output signals compared against a phase reference]

[Figure 9.14: Input and output waveforms, showing the time difference ΔT over one cycle Tcycle]
For measurement of the magnitude response it must be recalled that well within
the loop bandwidth the PLL response can be approximated as unity. It follows that
an initial output signal resulting from an input signal, whose modulation frequency
is sufficiently low, can be taken as a datum measurement. Thus, all subsequent measurements can be referenced to this initial output measurement, and knowledge of the
input signal is not required. For example, if an initial measurement was taken for an input modulation frequency of 100 Hz, the magnitude response at the Nth test frequency can be expressed in decibels as

Magnitude (dB) = 20 log10 (VmN / Vm100 Hz)    (9.16)

where Vm100 Hz is the peak-to-peak voltage measured at an input modulation frequency of 100 Hz and VmN is the Nth peak-to-peak output voltage at the corresponding modulation frequency.
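A sketch of this referencing computation, with hypothetical measurement values; the 100 Hz reading serves as the 0 dB datum:

```python
import math

def magnitude_response_db(vm_ref: float, vm_n: float) -> float:
    """Magnitude response referenced to the low-frequency datum (Eq (9.16))."""
    return 20.0 * math.log10(vm_n / vm_ref)

vm_100hz = 1.0   # peak-to-peak output at 100 Hz modulation (datum)
vm_test = 0.707  # peak-to-peak output at the test modulation frequency
print(round(magnitude_response_db(vm_100hz, vm_test), 2))  # → -3.01
```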
The technique described above for phase transfer monitoring is almost identical to a related test technique known as jitter transfer function monitoring [21, 22]. However, in this case, a known jittery source signal is used as the phase modulation signal, as opposed to the sine-wave modulation mentioned in this text.
9.2.1.1.4 Structural decomposition
This subsection will outline common structural decomposition tests that are often used
to ease PLL characterization. In the interests of brevity, emphasis on the analogue
subcircuits of the PLL will be given. With reference to Section 9.1 and the associated
equations, it can be seen that the PLL system is broken down into three main analogue-type blocks, consisting of the CP, the LF and the VCO. These are considered to be
critical parts of the PLL, hence much design effort is spent on these blocks. The blocks
are often designed independently so that the combination of the associated transfer
characteristics will yield the final desired PLL transfer function. In consequence,
it seems logical to attempt to verify the design parameters independently. Typical
parameters of interest that are often checked include the following:
absolute CP current
CP mismatch
VCO gain
VCO linearity.
If direct access to the LF control node is permitted, all of these tests can be
enabled using relatively simple methods. Also, to allow these tests to be carried out,
extensive design effort goes into construction of access structures that will place
minimal loading on the LF node. However, injection of noise into the loop is still a
possibility and the technique seems to be less commonly used. A brief explanation
of common test methods is now provided.
CP measurements:
A typical test set-up for measuring the CP current is shown in Figure 9.15.
Here, CPU is the up current control input, CPD is the down current control input,
TEST is the test initiation signal that couples the LF node to the external pin via a
transmission gate network, Rref is an external reference resistor and Vref is the voltage
generated across Rref due to the CP current. The tester senses Vref and thus the CP
current can be ascertained. A typical test sequence for the CP circuitry may contain the following steps:

[Figure 9.15: CP current test set-up, the LF node of the PLL system coupled to an external pin via a TEST-controlled transmission gate network, with the tester sensing the voltage Vref developed across the external reference resistor Rref]

1. Couple the LF node to the external pin by asserting the TEST signal
2. Activate the up current source by enabling CPU and disabling CPD
3. Wait a sufficient time for the network to settle
4. Measure the resultant up current in the CP using the relationship:

ICPU = Vrefup / Rref    (9.17)
5. Activate the down current source by disabling CPU and enabling CPD
6. Wait a sufficient time for the network to settle
7. Measure the resultant down current in the CP using the relationship:

ICPD = Vrefdn / Rref    (9.18)
An estimate of the CP current mismatch can be found by subtracting the results of Equations (9.17) and (9.18). Also, the CPU and CPD inputs can often be indirectly controlled via the PFD inputs, thus removing the necessity of direct access to these points and additionally providing some indication of correct PFD functionality.
Note that in the previous description, the test access point is connected to an inherently capacitive node, consisting of the VCO input transistor and the LF capacitors. In consequence, if no faults are present in these components, there should be negligible current flow through their associated networks. It follows that this type of test will give some indication of LF structure and interconnect faults.
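The up/down current and mismatch computations of Equations (9.17) and (9.18) can be sketched as follows (resistor and voltage values are illustrative):

```python
def cp_currents(vref_up: float, vref_dn: float, r_ref: float):
    """CP up/down currents from the voltages across Rref (Eqs (9.17), (9.18))
    and an estimate of their mismatch."""
    i_up = vref_up / r_ref
    i_dn = vref_dn / r_ref
    return i_up, i_dn, i_up - i_dn

# illustrative values: 10 kohm reference resistor, ~100 uA CP currents
i_up, i_dn, mismatch = cp_currents(1.00, 0.98, 10e3)
```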
[Figure 9.16: VCO gain and linearity test set-up, voltages V1 and V2 forced onto the external pin (CPU and CPD switches opened, TEST asserted) and the corresponding output frequencies F1 and F2 measured; ideal and non-ideal VCO transfer functions indicated]
VCO measurements:
A typical test set-up to facilitate the measurement of the VCO gain and linearity is
shown in Figure 9.16, where CPU is the CP up current control input, CPD is the CP
down current control input and TEST is the test initiation control input.
A typical test sequence would be carried out as follows:
1. Initially, both CP control inputs are set to open the associated CP switch
transistors. This step is carried out to isolate the current sources from the
external control pin.
2. A voltage V1 is forced onto the external pin.
3. The external pin is connected to the LF node by activation of the TEST signal.
4. After settling, the corresponding output frequency, F1 , of the VCO is measured.
5. A higher voltage, V2 , is then forced onto the external pin.
6. After settling, the corresponding output frequency F2 of the VCO is measured.
In the above sequence of events, the values chosen for the forcing voltages will be
dependent on the application.
After taking the above measurements, the VCO gain can be determined using the
following relationship:
KVCO = (F2 - F1) / (V2 - V1)  (Hz/V)    (9.19)
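Equation (9.19), and a simple linearity check over several forcing points, can be sketched as follows (the measurement values are hypothetical):

```python
def vco_gain(f1_hz: float, f2_hz: float, v1: float, v2: float) -> float:
    """Two-point VCO gain estimate in Hz/V, Eq (9.19)."""
    return (f2_hz - f1_hz) / (v2 - v1)

# hypothetical (voltage, frequency) readings over several forcing voltages
points = [(0.8, 95e6), (1.0, 100e6), (1.2, 105e6), (1.4, 110e6)]
gains = [vco_gain(points[i][1], points[i + 1][1], points[i][0], points[i + 1][0])
         for i in range(len(points) - 1)]
# near-equal segment gains indicate a linear VCO transfer characteristic
```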
9.2.2
In many situations, owing to the problems stated in previous sections, the FLT may be the only test carried out on embedded PLLs. A particular test plan may therefore include the criteria that the PLL must lock within a certain time for a certain set of divider settings. Often, to enhance the FLT results, structural decomposition and ad hoc DfT techniques such as the ones outlined in the previous sections are used. The PLL is also generally provided with various DfT techniques incorporated into its digital structures.
In addition to these features, an embedded PLL will normally have a bypass mode that allows other on-chip circuitry to receive clock signals from an external tester as opposed to the PLL itself. This mode allows other on-chip circuitry to be synchronized to the tester during system test. In this situation, the PLL core is often placed in a
power down mode. Particular examples of generic production test methodologies are
provided in References 18 and 19.
Ad hoc DfT methods can be of use for PLL testing; however, some of the associated problems, such as noise injection and analogue test pin access, can introduce severe limitations, especially when considering test for multiple on-chip PLLs. In consequence, there has been recent interest in fully embedded BIST techniques for
PLLs. An overview of BIST strategies is presented in Section 9.3.
9.2.2.1 Jitter measurements
This section will provide an outline of typical jitter measurement techniques. Accurate
jitter measurements generally require some form of accurate time-based reference.
Accuracy in this context refers to a reference signal that possesses good long-term
stability and small short-term jitter, as the reference signal jitter will add to the device under test's generated jitter and will be indistinguishable in the final measurement.
In consequence, the reference jitter should be at least an order of magnitude less
than the expected output jitter of the device under test. For the following discussions
it will be assumed that a good reference signal exists. It must be noted that much of the literature devoted to new jitter test techniques appears to concentrate on the generation of accurate time-based measurements. However, the basic analysis principles often remain similar. Commonly used measurement and analysis techniques
include period measurements and histogram measurements. These techniques are
explained below.
Period-based jitter measurements
Period-based measurements essentially consist in measuring the time difference
between equally spaced cycles of a continuous periodic waveform. A graphical
representation of the technique is shown in Figure 9.17.
[Figure 9.17: Period-based jitter measurement, counts of output-signal transitions started and stopped by the reference signal in successive gate intervals (e.g. count = 7 in the nth cycle and count = 6 in the (n + 1)th cycle)]
[Figure 9.18: Gating arrangement, the input signal gated by start and stop signals to form the main gate interval]
This technique essentially carries out a frequency counting operation on the PLL
output signal and will measure or count the number of PLL output transitions in
a predetermined time interval (gate time) determined by the reference signal. The
difference between successive counts will be related to the average period jitter of the
PLL waveform. Obviously, this method cannot be used to carry out the short-term
cycle-to-cycle jitter measurements. Accuracy of this technique requires that the PLL output signal frequency be much higher than the gate signal frequency.
The signals would be gated as shown in Figure 9.18 and would be used with fast
measurement circuitry to initiate and end the measurements.
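A minimal sketch of the counting computation, assuming each count is the number of output transitions in a fixed gate interval (values hypothetical):

```python
def average_period_jitter(counts, gate_time_s):
    """Estimate average period variation from successive frequency counts
    taken over a fixed gate interval; count differences map to period jitter."""
    f_est = [c / gate_time_s for c in counts]   # estimated frequency per gate
    periods = [1.0 / f for f in f_est]          # average period per gate
    mean_p = sum(periods) / len(periods)
    spread = max(periods) - min(periods)        # period variation across gates
    return spread, mean_p

# hypothetical counts of PLL output edges in successive 1 ms gate intervals
spread, mean_period = average_period_jitter([100000, 100001, 99999], 1e-3)
```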
Histogram analysis
Histogram-based analysis is often carried out using a strobe-based comparison
method. In this method, the clean reference signal is offset by multiples of equally spaced time intervals; that is, the reference signal edge can be accurately offset from its nominal position by

tn = n ΔT,  n = 0, 1, . . . , N    (9.20)

where N represents the maximum number of time delays by which the reference signal edge can be displaced and ΔT is the minimum time resolution.

[Figure 9.19: Strobe edge placement against the reference clock, the jitter distribution of the measured edge sampled at strobe positions St1 onwards]
The measured signal is then compared to ascertain how many times its edge
transition occurs after consecutive sets of the displaced edges. A jitter histogram
of the measured waveform is then constructed by incrementing N and counting the
occurrences of the rising edge over a predetermined set of measurements. A value
of N = 0 will correspond to a non-delayed version of the reference signal. The measurement accuracy will be primarily dependent upon ΔT, and ΔT should be an order of magnitude below the required resolution. For example, 100 ps measurement accuracy would require a ΔT of 10 ps.
An illustration of strobe edge placement for N = 7 is shown in Figure 9.19. As an
example, the count values could be collected from a given set of 100 measurements
as shown in Table 9.2. The values from the table would then be used to construct the
appropriate jitter histogram as shown in Figure 9.20.
It must be mentioned that various other techniques exist and are used to facilitate
approximation of jitter, such as indirect measurements and Fourier-based methods.
For indirect measurements, a system function reliant on the PLL clock signal is tested.
Typical examples may include signal-to-noise ratio testing of ADC or DAC systems.
For Fourier-based methods, the signal of interest is viewed in the frequency domain as opposed to the time domain and the resultant phase noise plot is examined. Proportionality exists between phase noise within a given bandwidth and the corresponding
jitter measurement, thus allowing jitter estimation [23, 24].
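As a hedged illustration of the Fourier-based route, the following sketch integrates a single-sideband phase noise profile L(f) to approximate RMS jitter; the formula and all values are assumptions for illustration, not taken from the text:

```python
import math

def rms_jitter_from_phase_noise(freqs_hz, l_dbc_hz, f_carrier_hz):
    """Approximate RMS jitter from an SSB phase noise profile L(f) in dBc/Hz,
    integrated over the given offset band (trapezoidal rule):
    sigma_t ~= sqrt(2 * integral of 10^(L/10) df) / (2*pi*f0)."""
    lin = [10 ** (l / 10.0) for l in l_dbc_hz]
    area = sum(0.5 * (lin[i] + lin[i + 1]) * (freqs_hz[i + 1] - freqs_hz[i])
               for i in range(len(freqs_hz) - 1))
    return math.sqrt(2.0 * area) / (2.0 * math.pi * f_carrier_hz)

# hypothetical flat -110 dBc/Hz floor from 10 kHz to 1 MHz on a 100 MHz clock
sigma = rms_jitter_from_phase_noise([1e4, 1e6], [-110.0, -110.0], 100e6)
```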
Table 9.2

Strobe position number    Failure count    Pass count
St1                        4               96
St2                       15               85
St3                       35               65
St4                       50               52
St5                       65               35
St6                       85               15
St7                       90               10

[Figure 9.20: Jitter histogram constructed from the counts in Table 9.2, plotted against strobe position]
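One plausible reading of Table 9.2 is that the failure count at each strobe position is cumulative (the edge was observed after that strobe); under that assumption, differencing the counts yields the histogram bins:

```python
def jitter_histogram(cumulative_failures):
    """Differentiate cumulative failure counts into per-bin occurrence
    counts for the jitter histogram."""
    padded = [0] + list(cumulative_failures)
    return [padded[i + 1] - padded[i] for i in range(len(cumulative_failures))]

# cumulative failure counts for strobes St1..St7 (Table 9.2)
bins = jitter_histogram([4, 15, 35, 50, 65, 85, 90])
print(bins)  # → [4, 11, 20, 15, 15, 20, 5]
```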
9.3
BIST techniques
Although the primary function of a PLL is relatively simple, previous sections have shown that there is a wide range of specifications, critical to the stability and performance of PLL functions, that need to be verified during engineering and production test. These specifications range from lock time and capture range to key parameters encoded in the phase transfer function, such as damping and natural frequency. Parameters such as jitter are also becoming more critical as performance specifications become more aggressive.
The challenge associated with self-testing PLLs is therefore to find solutions that can be added to the design with minimal impact on the primary PLL function and minimal impact on the power consumption.
The following section identifies several BIST strategies proposed for PLL structures. Only the basic techniques are described here. The reader should consult the
publications referenced for more information on practical implementation issues,
limitations and potential improvements.
A fully digital BIST solution was proposed by Sunter and Roy [10]. This solution is restricted to semi-digital types of PLL and is based on the observation that the open-loop gain is linearly dependent on the key parameters associated with the PLL, that is

GOL = Kp Kv G(s) / (N s)
where Kp is the gain of the phase detector that is a function of the CP current, G(s) is
the frequency-dependent gain of the lowpass filter or integrator, Kv is the gain of the
VCO in rad/s/volt, N is the digital divider integer and s is the Laplace variable.
The BIST solution opens the feedback loop to ensure that the output of the phase
detector is independent of the VCO frequency. This is achieved by adding a multiplexer to the PLL input as shown in Figure 9.21. A fully digital method is used to
derive an input signal with a temporary phase shift. The method uses signals tapped
[Figure 9.21: Loop gain test mode, a multiplexer selects between Fref and a phase-delayed version of the reference (from the phase delay circuit) at the phase detector and CP input, with the VCO, FB_clk and divide-by-N feedback shown; annotated relationships of the form ΔVCO = Kv ICP/(fref C) and ΔFB = Kv ICP/(2 N C) appear in the original figure]
[Figure 9.22: Modified PFD (existing or additional), with PLLREF and PLLFB inputs generating the PFDUP and PFDDN outputs and an MFREQ signal; PLLREF-leading and PLLREF-lagging cases illustrated]
[Figure 9.23: BIST architecture, an input modulator and multiplexers M1 and M2 route EXTREF, PLLREF and PLLFB signals around the PLL forward path and the 1/N feedback path; a test clock, divider, phase counter, frequency counter, gate control and test sequencer (with TEST input) complete the measurement circuitry]
the PLL being locked; hence, measurement of the output frequency at this point will
allow the magnitude response of the PLL to be calculated at the reference frequency
of the input stimuli. Repeating this process for different values of input frequency will
allow the phase transfer function to be constructed. This modified phase detector and the methodology described above are used within the overall BIST architecture shown
in Figure 9.23. The input multiplexer M2 is used to connect or break the feedback
loop and apply identical inputs to the PLL forward path to artificially lock the PLL.
[Table 9.3: Test sequence, listing for each test stage the multiplexer settings M1 and M2 and the associated signal routings (A = C with B = D, or A = C with A = D), together with comments]
(5) Increase modulation frequency FN and repeat steps 1 to 4 until all frequencies of interest
have been monitored.
The algorithm used to construct the phase transfer function is as in Table 9.3.
Note that this technique requires an input stimulus generator that provides either a
frequency or phase-modulated input with a strobe signal generated at the peaks. Either
frequency modulation using a digitally controlled oscillator or phase modulation using
multiplexed delay lines can be used.
A third method of achieving a structural self-test of a PLL was proposed by Kim et al. [9] and involves injecting a constant current into the PLL forward path and monitoring the LF output, which is usually a multiple of the injected input current and a function of the impedance of the forward path.
In this approach, additional circuitry is placed between the PFD and CP with the
primary objective of applying known control signals directly to the CP transistors.
In the test, the PLL feedback path is broken and control signals referenced to a
common time base are applied to the CP control inputs. The oscillator frequency will
be proportional to the voltage present at the LF node, which is in turn dependent on
the current applied from the CP. Thus, if the output frequency can be determined,
information can be obtained about the forward path PLL blocks. The test proposal
suggests that the loop divider is reconfigured as a frequency counter. The test basically
comprises three steps as follows. Initially closing both of the CP transistors performs
a d.c. reference count. If the CP currents are matched, the voltage of the LF node
should be at approximately half the supply voltage. The measurement from this test
phase is used as a datum for all subsequent measurements. In the second stage of the test, the LF is discharged for a known time. Finally, the LF is charged for a known time.
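A first-order sketch of the charge phase, assuming the LF behaves as a single capacitance charged by a constant CP current (all values hypothetical):

```python
def lf_voltage_after_charge(v_start: float, i_cp: float,
                            t_s: float, c_lf: float) -> float:
    """First-order estimate of the LF node voltage after charging the filter
    capacitance with a constant CP current for a known time (dV = I*t/C)."""
    return v_start + i_cp * t_s / c_lf

# hypothetical values: start at half-supply, 100 uA CP current, 100 pF filter
v = lf_voltage_after_charge(0.9, 100e-6, 1e-6, 100e-12)
# the VCO frequency, counted by the reconfigured loop divider, tracks this voltage
```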
9.4
This chapter has summarized the types of PLL used within electronic systems, the
primary function of the core blocks and key specifications. Typical test strategies
and test parameters have been described and a number of DfT and BIST solutions
described.
It is clear that, as circuit speeds increase and electronic systems rely more heavily on accurate and stable clock control and synchronization, the integrity and stability requirements of PLL functions will become more aggressive, increasing test time and test complexity. Methods of designing PLLs to be tested more accurately and more easily will hence become more important as the SoC industry grows.
9.5
References
1 Gardner, F.M.: 'Phase Lock Techniques', 2nd edn (Wiley Interscience, New York, 1979)
2 Best, R.: 'Phase Locked Loops: Design, Simulation and Applications', 4th edn (McGraw-Hill, New York, 2003)
3 Gardner, F.M.: 'Charge-pump phase-lock loops', IEEE Transactions on Communications, 1980;28:1849–58
4 Gayakwad, R., Sokoloff, L.: 'Analog and Digital Control Systems' (Prentice Hall, Englewood Cliffs, NJ, 1998)
5 Lee, T.H.: 'The Design of CMOS Radio-Frequency Integrated Circuits' (Cambridge University Press, Cambridge, 1998), pp. 438–549
6 Johns, D.A., Martin, K.: 'Analog Integrated Circuit Design' (John Wiley & Sons, New York, 1997), pp. 648–95
7 Sachdev, M.: 'Defect Oriented Testing for CMOS Analog and Digital Circuits' (Kluwer, Boston, MA, 1998), pp. 37–38 and 79–81
8 Kim, S., Soma, M.: 'Programmable self-checking BIST scheme for deep submicron PLL applications', Technical Report, Department of Electrical Engineering, University of Washington, Seattle, WA
Chapter 10
On-chip testing techniques for RF wireless transceiver systems and components
10.1
Introduction
[Figure 10.1: Test arrangement, a transceiver on wafer (RF front-end at gigahertz frequencies, analogue baseband at megahertz frequencies) with on-chip test circuits, communicating with the hardware interface through a digital and d.c. test bus]
devices communicate with the ATE through an interface of low-rate digital data
and d.c. voltages. From the extracted information on the transceiver performance at
different intermediate stages, catastrophic and parametric faults can be detected and
located.
Throughout the chapter, special emphasis is made on the description of transistor-level design techniques to implement embedded test devices that attain robustness, transparency to CUT operation and minimum area overhead.
To address the problem of testing a system with a high degree of complexity
such as a modern transceiver, three separate tasks are defined: (i) test of the analogue
baseband components which involve frequencies in the range of megahertz; (ii) test
of the RF front-end section at frequencies in the range of gigahertz; and (iii) test of
the transceiver as a full system.
Section 10.2 deals with the first task. A robust method for built-in magnitude and
phase-response measurements based on an analogue multiplier is discussed. Based on
this technique, a complete frequency-response characterization system (FRCS) [11]
for analogue baseband components is described. Its performance is demonstrated
through an integrated prototype in which the gain and phase shift of two analogue
filters are measured at different frequencies up to 130 MHz.
One of the most difficult challenges in the implementation of BIST techniques for RF integrated circuits (ICs) is to observe high-frequency signal paths without affecting the performance of the RF CUT. As a solution to this problem, a very compact CMOS RF amplitude detector [12] and a methodology for its use in the built-in measurement of the gain and 1 dB compression point of RF circuits are described in Section 10.3. Measurement results for an integrated detector operating
in the range from 900 MHz to 2.4 GHz are discussed including its application in the
on-chip test of a 1.6 GHz low-noise amplifier (LNA).
Finally, to address the third task, Section 10.4 presents an overall testing strategy
for an integrated wireless transceiver that combines the use of the two above mentioned techniques with a switched loop-back architecture [10]. The capabilities of
this synergetic testing scheme are illustrated through its application on a 2.4 GHz
transceiver macromodel.
On-chip testing techniques for RF wireless transceiver systems and components 311
[Figure 10.2: Frequency-response measurement principle, a signal generator applies A cos(ω0 t) to the CUT H(ω0); the CUT output B cos(ω0 t + φ) and the input are processed by the APD, giving |H(ω0)| = B/A and a phase response of φ]

10.2
A general analogue system, such as a line driver, equalizer or the baseband chain
in a transceiver, consists of a cascade of building blocks or stages. At a given frequency ω0, each stage is expected to show a gain or loss and a delay (phase shift) within certain specifications; these characteristics can be described by a frequency-response function H(ω0). An effective way to detect and locate catastrophic and parametric faults in these analogue systems is to test the frequency response H(ω) of
each of their building blocks. A few BIST implementations for frequency-response
characterization of analogue circuits have been developed recently using sigma-delta
[13], switched-capacitor [14] and direct-digital-synthesis [15] techniques. These test
systems show different trade-offs in terms of complexity and performance. Even
though their frequency-response test capabilities have been demonstrated only in the range from kilohertz to a few megahertz, implementations in current deep-submicron technologies may extend their frequency of operation.
This section describes an integrated FRCS that enables the test of the magnitude and phase–frequency responses of a CUT through d.c. measurements. The system is implemented with robust analogue circuits and attains a frequency-response measurement range of hundreds of megahertz, suitable for contemporary wireless analogue baseband circuits.
10.2.1
Principle of operation
At a given test frequency (ω0), the transfer function H(ω0) of a CUT can be obtained by comparing the amplitude and phase between the signals at its input and output. By implementing a signal generator (tunable over the bandwidth of interest for the characterization) and an amplitude-and-phase detector (APD), a FRCS can be obtained [16] as shown in Figure 10.2.
Figure 10.3 presents a block diagram of an effective APD. An analogue multiplier sequentially performs three multiplications between the input and output signals from the CUT. For each operation, a d.c. voltage and a frequency component at 2ω0 are generated; the latter is suppressed by a lowpass filter (LPF).
[Figure 10.3: Block diagram of the APD, an analogue multiplier sequentially forms three products of the CUT input A cos(ω0 t) and output B cos(ω0 t + φ) in three steps, each followed by a lowpass filter with cut-off ωc << 2ω0; the resulting d.c. outputs are passed to an ADC]

X = K A²/2    (10.1)

Y = (1/2) K A B cos(φ)    (10.2)

Z = K B²/2    (10.3)
where K is the gain of the multiplier, A and B are the amplitudes of the signals at the input and output of the CUT, respectively, and φ is the phase shift introduced by the CUT at ω0. From these d.c. outputs, a low-cost ATE can evaluate the absolute value of the phase (|φ|) and the gain (B/A) responses of the CUT at ω0 by performing the following simple operations:

|φ| = cos⁻¹ (Y / √(X Z))    (10.4)

B/A = √(Z/X)    (10.5)
It is important to note that these operations do not imply a need for sophisticated off-chip equipment. Various inexpensive modern 8-bit microcontrollers have
the capability of working with trigonometric functions and other mathematical
operations.
From Equations (10.4) and (10.5), note that for the computation of the parameters of interest (B/A and |φ|), neither the amplitude of the signal generator (A) nor the gain of the multiplier (K) needs to be set or known a priori. Hence, these parameters do not require accurate control. If the cut-off frequency (ωc) of the LPF is small enough, its variations will have a negligible effect on the accuracy of the measurements.
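The recovery of gain and phase from the three d.c. outputs, Equations (10.4) and (10.5), can be checked with a short sketch (synthetic values):

```python
import math

def apd_gain_phase(x: float, y: float, z: float):
    """Recover B/A and |phi| from the three d.c. multiplier outputs
    (Eqs (10.4), (10.5))."""
    phi = math.acos(y / math.sqrt(x * z))
    gain = math.sqrt(z / x)
    return gain, phi

# synthetic check: K = 2, A = 1, B = 0.5, phi = 60 degrees
K, A, B, phi = 2.0, 1.0, 0.5, math.pi / 3
X, Y, Z = K * A**2 / 2, 0.5 * K * A * B * math.cos(phi), K * B**2 / 2
gain, phi_est = apd_gain_phase(X, Y, Z)
print(round(gain, 3), round(math.degrees(phi_est), 1))  # → 0.5 60.0
```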
Table 10.1    Test variables

F       Frequency of the signal applied to the CUT
A       Amplitude of the signal applied to the CUT
B       Amplitude of the signal at the output of the CUT
MAG     Magnitude response of the CUT (B/A) at F
PHI     Phase response of the CUT at F
d.c.1   d.c. voltage proportional to A²/2
d.c.2   d.c. voltage proportional to B²/2
d.c.3   d.c. voltage proportional to A B cos(PHI)
Moreover, any static d.c. offset that the multiplier may have can be measured when
no signal is present and then cancelled before the computations. In summary, this
technique for the measurement of magnitude and phase responses is inherently robust
to the effect that process variations can have on the main performance characteristics
of the building blocks, which makes it suitable for BIST applications.
The effect of the spectral content of the test signal is now analysed. Let HDi be the relative voltage amplitude of the ith harmonic component (i = 2, 3, . . . , n) with respect to the amplitude A of the fundamental test tone. Under the pessimistic assumption that the CUT does not introduce any attenuation or phase shift to either of these frequency components, the d.c. error voltage (E) introduced by the harmonic distortion components to each of the voltages X, Y and Z is given by

E = K (A²/2) Σ(i=2..n) (HDi)² = K (A²/2) THD² = X · THD²    (10.6)

where THD is the total harmonic distortion of the signal generator, whose square is the ratio of the total power of the harmonics to the power of the fundamental tone. If THD is as high as 0.1 (10 per cent), even in this pessimistic scenario, E would be equivalent to only 0.01 (1 per cent) of X.
advantage since it eliminates the need for a low-distortion sinusoidal signal generator.
10.2.2
Testing methodology
A procedure for the automated test of a CUT using the described frequency-response
measurement technique is described next.
The control and output variables involved in a test process using the phase and
amplitude detector are summarized in Table 10.1.
From the specifications of the CUT, a set of N test frequencies [F1 F2 . . . FN ]
is defined. Through adequate fault modelling, the smallest N to attain the desired
fault coverage can be found. Even though the amplitude- and phase-detection is
independent of the amplitude of the on-chip signal generator, an appropriate amplitude
[Ai ] for the input signal (which does not necessarily have to be different for each
frequency) should be chosen to avoid saturation in the CUT. As described in the
[Figure 10.4: Functional verification algorithm, each output vector [MAGi PHIi] is checked against its acceptance boundaries; a violation gives CUT FAIL, otherwise i is incremented until i = N, giving CUT PASS]
previous section, MAG and PHI can be computed from the outputs of the phase and
amplitude detector (d.c.1, d.c.2 and d.c.3). From the expected magnitude and phase
responses of the CUT, each test vector [Fi Ai ] is associated with acceptable boundaries
for the output vector [MAGi, PHIi]. Using the described test parameters, the algorithm
shown in Figure 10.4 can be employed for the efficient functional verification of the
CUT. Note that the measurement of d.c.1i serves also as a self-verification of the
entire system at the ith frequency, since it involves all of the FRCS components but
not the CUT.
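The pass/fail loop of Figure 10.4 can be sketched as follows, with a hypothetical first-order lowpass standing in for the CUT and illustrative acceptance windows:

```python
import math

def verify_cut(test_vectors, limits, measure):
    """Functional verification loop of Figure 10.4: apply each [Fi, Ai]
    vector, compare the measured [MAGi, PHIi] against its acceptance window."""
    for (f, a), (mag_lim, phi_lim) in zip(test_vectors, limits):
        mag, phi = measure(f, a)
        if not (mag_lim[0] <= mag <= mag_lim[1] and
                phi_lim[0] <= phi <= phi_lim[1]):
            return False   # CUT FAIL at this frequency
    return True            # all N frequencies passed: CUT PASS

def measure(f, a):
    """Hypothetical first-order lowpass CUT with a 1 MHz corner."""
    fc = 1e6
    return 1 / math.sqrt(1 + (f / fc) ** 2), -math.atan(f / fc)

ok = verify_cut([(1e5, 1.0), (1e6, 1.0)],
                [((0.9, 1.1), (-0.2, 0.0)), ((0.6, 0.8), (-1.0, -0.7))],
                measure)
```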
10.2.3
Based on the described robust technique for phase and amplitude detection, a complete
FRCS can be implemented [11]. Figure 10.5 presents the system architecture. It consists of a frequency synthesizer, an APD and a demultiplexer that serves as an interface between the different nodes of the CUT and the APD. The circuit-level design of each
building block is described next. As shown in Figure 10.5 an ADC can also be added
at the output of the APD to make the FRCS interface fully digital. Since only a d.c.
[Figure 10.5: Integrated frequency-response characterization system, a frequency synthesizer (with frequency selection input) drives a multi-stage analogue circuit under test (stages H1(ω) . . . Hn(ω)); an (n + 1)-to-2 demultiplexer (with node selection input) routes A cos(ω0 t) and B cos(ω0 t + φ) to the APD, whose d.c. output is digitized by a d.c.-to-digital converter and passed as test data to the digital ATE for evaluation of the magnitude and phase responses and for fault detection and diagnosis]
voltage needs to be digitized, the ADC design can be robust and compact, and some
sample implementations are presented in References 11 and 17.
Figure 10.6(a) shows a block diagram of the analogue multiplier employed for
the APD. The complete transistor-level schematic is depicted in Figure 10.6(b). The
core of the four-quadrant multiplier (transistors M1 and M2 ) is based on the one in
Figure 7(c) in Reference 18. The inputs are the differential voltages VA and VB and
the output is the d.c. voltage VOUT. Transistors M1 operate in the triode region; the
multiplication operation takes place between their gate-to-source and drain-to-source
voltages and the result is the current IOUT at the drain of M2.
Transistors M2 act as source followers. Ideally, the voltage at the source of transistors M2 should be just a d.c.-shifted version of the voltage signal applied to their
gates (B+ and B−). However, the drain current of transistors M1 and M2 is the result
of the multiplication and its variations affect the operation of the source followers.
This results in an undesired phase shift on the voltage signals applied to the drain
of transistors M1 , which significantly degrades the phase detection accuracy of the
multiplier. To overcome this problem, transistors M3 (which operate in the saturation
region) are added to the multiplier core. These additional transistors provide a fixed
d.c. current to the source followers improving their transconductance and reducing
their sensitivity to the a.c. current variations. Simulation results show that this design
feature reduces the error in phase detection from more than 10° to less than 1°.
The output currents from four single-ended multiplication branches are combined
to form a four-quadrant multiplier that is followed by an LPF. C1 and M4 (diode-connected transistor) implement the dominant pole of the LPF. M6 and M7 perform a
differential to single-ended conversion. The second pole of the LPF is implemented
by the capacitor C2 and the passive resistor R1 . The d.c. operating point of VOUT can
be set through VBO and hence, no other active circuitry is required to set this voltage.
An important component of this system is the interface between the CUT and the
APD. As shown in Figure 10.5, through a demultiplexer, the frequency response at
Figure 10.6 Analogue multiplier for the APD: (a) block diagram and (b) circuit schematic
different stages of the CUT can be characterized. In addition, the multiplexer should
present a high input impedance (so that the performance of the CUT is not affected)
and provide the appropriate d.c. bias voltages to the phase and amplitude detector. A
circuit that complies with these functions is depicted in Figure 10.7.
The differential pair with active load composed of transistors M11 forms a buffer
with unity gain. The outputs of the buffers (differential voltages VA and VB) are connected to the corresponding inputs of the APD. The d.c. operating point of the output
is easily set through the voltages VBA and VBB. The input capacitance as seen from
the input of the multiplexer switches is approximately 50 fF in a 0.35 μm CMOS
implementation. This is an insignificant loading in the range of hundreds of megahertz.
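The loading claim can be checked with the impedance of an ideal capacitor, |Z| = 1/(2πfC), a quick sanity calculation rather than anything from the text:

```python
import math

def cap_impedance(c_farads, f_hz):
    """Magnitude of an ideal capacitor's impedance, |Z| = 1 / (2*pi*f*C)."""
    return 1.0 / (2 * math.pi * f_hz * c_farads)

# 50 fF input capacitance at two frequencies in the range of interest
for f in (100e6, 500e6):
    z = cap_impedance(50e-15, f)
    print(f"{f/1e6:.0f} MHz -> {z/1e3:.1f} kOhm")   # tens of kOhm at 100 MHz
```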
Figure 10.7 Circuit schematic of the multiplexer interface: unity-gain buffers based on differential pairs (transistors M11) with bias voltages VBA and VBB drive the APD inputs VA and VB
Figure 10.8 Frequency synthesizer: (a) block diagram of the type-II PLL with phase-frequency detector (PFD), charge pump, off-chip loop filter (R1, C1, C2), VCO and programmable divider, with fREF = 1 MHz and frequency selection through the divider; (b) equivalent loop with fDIV ≈ 1 MHz
The frequency synthesizer for the generation of the input signal to the CUT is
designed as a type-II phase-locked loop (PLL) with a 7-bit programmable counter,
spanning a range of 128 MHz in steps of 1 MHz. The block diagram is shown in
Figure 10.8(a).
One of the main advantages of using a PLL in this application is that, to generate
the internal stimulus, only a relatively low-frequency reference signal (fREF = 1 MHz
in this design) needs to be supplied externally.
Figure 10.9 Circuit schematic of the VCO

Figure 10.10 Integrated circuits under test: an 11 MHz BPF (CUT 1) and a 20 MHz LPF (CUT 2), together with an algorithmic ADC
LPF are tuned simultaneously through VC to keep the oscillation amplitude relatively
constant over the entire frequency tuning range (within 3 dB of variation) and a total
harmonic distortion (THD) of less than 10 per cent.
10.2.4
Figure 10.11 Measured output of the phase and amplitude detector versus phase difference (deg)
Experimental results for the VCO of Figure 10.9 are shown in Figure 10.12. The
output frequency varies from 0.5 to 140 MHz and the amplitude variations in this
range are within 3.5 dB, in good agreement with the design goals.
Figure 10.13 presents the output spectrum of the VCO towards the low end of
the tuning range (at around 16 MHz), where a higher THD is observed. Throughout
the complete tuning range, the harmonic components are always below −20 dBc.
According to the analysis presented in Section 10.2.1, this harmonic distortion would
cause relative errors in the magnitude and phase measurements of less than 1 per cent.
The complete frequency synthesizer operates with a reference frequency of 1 MHz
and, through the 7-bit programmable counter, covers a range from 1 to 128 MHz in
steps of 1 MHz. The measured reference spurs are below −36 dBc. The area
of the entire synthesizer is 380 μm × 390 μm and the current consumption rises
from 1.5 to 4 mA as the output frequency increases.
Figure 10.14 describes the experimental set-up for the evaluation of the entire
system in the test of the integrated CUTs. Each fourth-order filter consists of two
OTA-C biquads and each biquad has two nodes of interest, namely bandpass (BP)
node and lowpass (LP) node. Nodes 2 and 4 (biquad outputs) are BP nodes in the 11
MHz BPF (CUT 1) and LP nodes in the 20 MHz LPF (CUT 2). Buffers are added
to the output node of each biquad so that their frequency response can be measured
with an external network analyser.
The results of the operation of the entire FRCS in the magnitude response characterization of the 11 MHz BPF at its two BP outputs are shown in Figure 10.15. These
Figure 10.12 Measured VCO characteristics: (a) output frequency (MHz) versus control voltage (V); (b) output amplitude versus frequency (MHz)
results are compared against the characterization performed with a commercial network analyser. In this measurement, the dynamic range of the system is limited to
about 21 dB due to the 7-bit resolution of the ADC. The phase response of the filter
as measured by the FRCS is shown in Figure 10.16.
The corresponding results for the characterization of the 20 MHz LPF are presented in Figures 10.17 and 10.18. In this case the d.c. output of the APD is measured
through a data acquisition card with an accuracy of 10 bits. As can be observed,
Figure 10.13 Measured output spectrum of the VCO near the low end of the tuning range: the fundamental at 15.87 MHz (−21.73 dBm), with the second harmonic at 32.10 MHz about 20.5 dB below
Figure 10.14 Experimental set-up for the test of the integrated CUTs: the PLL output drives BiQuad 1 (nodes 1 and 2) and BiQuad 2 (nodes 3 and 4); on-chip buffers and baluns connect the outputs to channels CH1 and CH2 of a commercial network analyser, with control inputs selecting the MUX inputs
Figure 10.15 Magnitude response test of the 11 MHz BPF, network analyser versus proposed system: (a) results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
the APD is able to track the frequency response of the filter and perform phase
measurements in a dynamic range of 30 dB up to 130 MHz.
On average, in the test of both CUTs, the magnitude response measured by the off-chip equipment is about 2 dB below the estimation of the FRCS. This discrepancy
is in good agreement with the simulated loss of the employed buffers and baluns.
Table 10.2 presents the performance summary of this integrated test system.
Figure 10.16 Phase response test of the 11 MHz BPF: (a) results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter

10.3
Figure 10.17 Magnitude response test of the 20 MHz LPF: (a) results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
to the extra circuitry and output pads would be unaffordable. Therefore, it is desirable
to have an on-chip RF amplitude detector (RFD) to monitor the voltage magnitude
of RF signals through d.c. measurements. Different implementations of RFDs using
bipolar transistors on a SiGe process technology have been reported recently [9, 20].
The desired characteristics of a practical RFD are: (i) a high input impedance
at the testing frequency to prevent loading and performance degradation of the RF
Figure 10.18 Phase response test of the 20 MHz LPF: (a) results for the first biquad (second-order filter) and (b) results for the complete fourth-order filter
CUT; (ii) a minimum area overhead; and (iii) a dynamic range suitable for the target
building blocks. In addition, the measurement method should be robust to the effect
that process variations may have on the detector's response. Other figures of merit
such as power consumption and temperature stability are not a priority since the RFD
would not be used during the normal operation of the system under test.
Table 10.2 Performance summary of the integrated test system

Technology: 0.35 μm CMOS
Dynamic range for measurement of magnitude response: 30 dB
Resolution for phase measurements: 1°
Frequency range: 1–130 MHz
Digital output resolution: 7 bits
Supply: 3.3 V
Power consumption (at 130 MHz): 20 mW
Area: 0.3 mm2
10.3.1
The following example illustrates an effective technique to measure the gain compression of an integrated RF device [21]. An RFD is used at the input of the RF CUT
and another at the output as shown in Figure 10.19. A macromodel is built to simulate this test set-up. The model of the RFD consists of an amplifier with high input
impedance followed by a rectifier and a second-order LPF (the design of this detector
architecture will be explained in the next section). Two different LNA models are
considered as the CUT. LNA1 has a gain of 10 dB, an output 1 dB compression point of
−3 dBm, an output IP3 of 7 dBm and a noise figure of 4 dB. LNA2 represents a faulty
LNA with a gain of 8 dB, an output 1 dB compression point of −5 dBm and the same IP3
and noise figure (NF) as LNA1. The amplitude of the sinusoidal signal at the input
of the LNA (and the first detector) is swept from −20 to 0 dBm in steps of 2 dB.
Figure 10.19 shows the simulation results. For a given input amplitude, the gain of
the LNA can be measured as the distance in decibels from the response of the detector
at the output to the reference response (output of the detector at the input). As can
be observed, the input amplitude (and corresponding output amplitude) for which the
gain decreases by 1 dB can be easily extrapolated.
Note that with the use of the reference response, the absolute gain and the
non-linearity of the RFD's RF-to-d.c. conversion characteristic do not affect the
measurement. In this way, process variations do not affect the measurement accuracy
significantly. The mismatch between the gains of the different detectors would be the
only remaining source of error. It is also important to mention that the d.c. offset that
may be present at the output of the detectors is not a matter of concern since it can be
measured (when no signal is present at the input) before the characterization process.
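The extrapolation described above can be sketched in code; the sweep values below are synthetic and the conversion from detector d.c. outputs to dBm is assumed to have been done beforehand (a hypothetical calibration step):

```python
def gain_and_p1db(pin_dbm, ref_dbm, out_dbm):
    """Gain and input 1 dB compression point from two detector sweeps
    (Figure 10.19 technique): the gain at each drive level is the dB distance
    between the output-detector and reference (input-detector) responses, so
    the detectors' own non-linearity cancels; P1dB is the first input level
    where that distance drops 1 dB below the small-signal gain."""
    gains = [po - pr for po, pr in zip(out_dbm, ref_dbm)]
    g_ss = gains[0]                       # small-signal gain at the lowest drive
    for pin, g in zip(pin_dbm, gains):
        if g <= g_ss - 1.0:
            return g_ss, pin
    return g_ss, None                     # no compression seen in the sweep

# Synthetic sweep: a 10 dB amplifier with soft compression above -4 dBm.
pin = list(range(-20, 2, 2))
ref = pin[:]                              # reference detector tracks the input
out = [p + 10 - max(0, (p + 4) * 0.5) for p in pin]
print(gain_and_p1db(pin, ref, out))       # → (10, -2)
```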
Figure 10.19 Gain compression measurement with two RF detectors: simulated detector responses at the input and output of the LNA versus input amplitude (dBm); the drop of the measured gain from 10 dB to 9 dB marks the output 1 dB compression point

Figure 10.20 Conceptual block diagram of the RFD: a pre-rectification stage (high impedance at RF, small-signal voltage gain, V–I conversion and amplification), a class AB rectifier, and a post-rectification stage producing the d.c. output

10.3.2
The design of a practical CMOS RFD [12] is now described. It consists of three stages;
a conceptual block diagram is depicted in Figure 10.20. The first stage presents a high
impedance to the RF signal path, converts the sensed voltage to a current signal and
amplifies it. The second stage is a full-wave rectifier. The rectified waveform is then
filtered in the last stage to obtain its average value. The output is therefore a d.c.
voltage proportional to the amplitude of the RF signal at the input of the detector.
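The three-stage behaviour can be captured in a small behavioural model, an idealized sketch rather than the transistor-level design (the gain value is arbitrary and the LPF is modelled as a perfect averager):

```python
import math

def rfd_dc_out(samples, gain=4.0):
    """Behavioural sketch of the three-stage RFD: amplify the sensed RF
    voltage, full-wave rectify it, and keep the average."""
    rectified = [abs(gain * v) for v in samples]   # class AB full-wave rectifier
    return sum(rectified) / len(rectified)         # ideal LPF keeps the mean

# For a sinusoid of amplitude A, the output approaches (2/pi) * gain * A,
# i.e. a d.c. voltage proportional to the RF amplitude.
n, a = 1000, 0.1
wave = [a * math.sin(2 * math.pi * k / n) for k in range(n)]
print(rfd_dc_out(wave))   # close to 2/pi * 4.0 * 0.1 ≈ 0.2546
```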
The circuit schematic of the RFD is shown in Figure 10.21. The
d.c. current sources IB1–IB5 are implemented using CMOS current mirrors; their
transistor-level schematics are omitted for simplicity. Transistor M1 senses the voltage at the RF node to be observed and converts it into current. Its size is chosen to
be small in order to present minimum parasitic loading; simulation results show that
Figure 10.21 Circuit schematic of the RF amplitude detector: input stage, class AB rectifier and output LPF
Figure 10.22 Chip microphotograph, showing the stand-alone RF amplitude detector, the LNA and buffer, and the detectors at the LNA input and output
both currents that is important. The resultant rectified current is converted to voltage
by R3. Finally, the passive LPF formed by R4 and C3 extracts the d.c. component. This
passive pole also sets the settling time of the detector, which is designed to be in the
order of tens of nanoseconds. AGND is set to 1.65 V (VDD = 3.3 V) to define the
d.c. operating point of the output of the detector (d.c.OUT ) as well as the d.c. voltage
at the source of M10 and M11 . It is important to note that all of the signal amplification
and rectification in the detector is done in current mode and all of the high-frequency
internal nodes are at low impedance. These characteristics prevent the occurrence of
large voltage swings and minimize the injection of substrate noise.
From simulation and experimental results, the ratio of the maximum and minimum
signal amplitudes that can be detected (dynamic range) by the rectifying circuit is 30
dB. The sensitivity of the detector is mainly controlled by IB4 . As this current is
reduced, the rectifier is sensitive to smaller signal amplitudes. On the other hand, if
IB4 is increased, the minimum detectable signal becomes larger but the compression
point of the rectifier is also moved to higher amplitudes. In this way, the useful range
of the detector can be set to higher or lower signal levels according to the expected
conditions of the RF node to be observed.
10.3.3
Experimental results
An IC prototype was fabricated in the TSMC 0.35 μm CMOS process and measured
in a QFN package. The chip microphotograph is shown in Figure 10.22. The IC
includes a stand-alone RFD and an LNA with detectors at its input and output to
evaluate the on-chip testing capabilities of the device. The RFD occupies an area of
only 0.031 mm2 .
The response of the detector is evaluated at different frequencies. The experimental results are shown in Figure 10.23. For these measurements, an external RF
Figure 10.23 Measured d.c. output of the detector versus input power (dBm) at 0.9, 1.2, 1.6, 1.9 and 2.4 GHz
signal generator was employed and input match to 50 Ω was assured through off-chip components. In a range of 1.5 GHz (from 0.9 to 2.4 GHz) the detector shows
a conversion gain of approximately 50 mV/dBm in a dynamic range of 30 dB. The
minimum detectable signal is around −35 dBm. The wideband nature of the detector's response is an important advantage for test purposes; the device can be used
to monitor signal amplitudes at different points of an RF system even if they have
different frequency content (e.g., in a multi-standard or a dual-conversion transceiver
architecture) without any further tuning in the design. At 400 MHz and 2.8 GHz the
measured dynamic range is still greater than 20 dB.
Table 10.3 presents the performance summary for the RF amplitude detector. It is
worth mentioning that the fast settling time of the detector allows performing tens of
measurements (e.g., varying the input power or frequency) in just a few microseconds.
The response of the proposed detector is not perfectly linear with respect to the
input amplitude in the entire dynamic range. However, as discussed previously, this
is not a limitation for on-chip test purposes. To evaluate the effectiveness of the
proposed RF test device to measure gain and 1 dB compression point of an RF
CUT, an LNA is integrated in the prototype IC. The LNA is a standard single-ended
inductively degenerated cascode amplifier [23] that has a gain of 10 dB at 1.6 GHz. The
degeneration and load inductors are implemented on-chip while the gate inductance is
the bonding wire that connects the LNA input to the package pad. A buffer is included
at the output of the LNA to measure its performance with off-chip equipment. The
buffer is a simple common source stage. Resistive source degeneration is employed in
Table 10.3 RF amplitude detector performance summary

CMOS process: 0.35 μm
Area: 0.031 mm2
Conversion gain: 50 mV/dBm
Dynamic range: >30 dB
Measured operating frequency: 0.9–2.4 GHz
Supply voltage: 3.3 V
Power consumption: 10 mW
Settling time: <40 ns
Figure 10.24 Test set-up for the characterization of the LNA: an RF signal generator drives the IC prototype (LNA, buffer, and RFD 1 and RFD 2 with outputs DC OUT 1 and DC OUT 2), and a spectrum analyser monitors the buffer output
the buffer to attain a 1 dB compression point higher than that of the LNA. Post-layout
simulation results show that the buffer has a loss of 10 dB at 1.6 GHz while driving
the 50 Ω load of a spectrum analyser through the package parasitics.
The test set-up employed for the characterization of the LNA is shown in
Figure 10.24. The gain of the LNA is measured with the RFDs and also with external instrumentation for different power levels and at different frequencies. Some
significant examples of the performed measurements are presented next.
Figure 10.25 shows the measured d.c. voltage at the output of each detector for
different input power levels at 1.6 GHz. Employing the discussed technique the LNA
gain is measured as 9.5 dB and the input 1 dB compression point as −1 dBm. It
is worth mentioning that for the on-chip test of the LNA, the rectifier current (IB4 )
Figure 10.25 Measured response of the RFDs at the input and output of the on-chip LNA at 1.6 GHz, showing the measured LNA gain of 9.5 dB falling to 8.5 dB at the input 1 dB compression point
used in the RF detectors is higher than the one used in the measurements presented in
Figure 10.23. This shows how the useful range of the rectifier can be adjusted to test
an RF CUT at the signal levels of interest (e.g., around its 1 dB compression point).
Figure 10.26 compares the LNA gain measured at 1.7 GHz with the integrated
detectors against the gain roll-off measured with an external RF spectrum analyser.
At low input power levels (below −2 dBm) the estimated gain
appears to be lower due to the reduced gain of the RFD at the input in this range.
From the obtained experimental results at different frequencies and power levels it
is estimated that the practical accuracy of the method is ±1 dB, which is adequate for
multiple wafer-level and production test purposes.
10.4
This section presents an integral testing strategy for integrated wireless transceivers
based on the BIST techniques discussed in Sections 10.2 and 10.3, in combination
with a switched loop-back architecture.
10.4.1
A loop-back connection between the transmitter and receiver chains is one of the
earliest strategies to test the functionality of wireless and wire-line communication
systems [24, 25].

Figure 10.26 LNA gain at 1.7 GHz measured with the integrated detectors, compared with the gain roll-off measured with an external RF spectrum analyser, as a function of input power (dBm)

It does not require an external stimulus and is effective in detecting
catastrophic faults in the complete signal path. Figure 10.27(a) depicts this testing
scheme for a transceiver architecture with direct up-conversion. In a complete realization the baseband sections include in-phase (I) and quadrature (Q) paths, but in
this block diagram only one path is shown for simplicity.
In the loop-back configuration, the baseband section of the transmitter generates
a tone or a modulated signal with a centre frequency fB . With the input from the
local oscillator (LO) at a frequency fRF the up-converter generates a tone at fB + fRF .
The loop-back connection must attenuate the output of the power amplifier (PA) to
make it suitable for the dynamic range of the LNA. After the down-conversion with
the same tone from the LO, the resultant signal at the receiver baseband is centred at
fB . The characteristics of the demodulated or digitized signal can be analysed by the
ATE to evaluate the performance of the transceiver. In this configuration, the range
of values that fB can take is limited by the transmitter baseband.
Recent radio implementations use transmitter architectures in which the modulation of the transmitted signal is directly performed on the VCO [26, 27], avoiding
the up-conversion. As shown in Figure 10.27(b), the direct application of loop-back
test is not practical in this kind of transceiver. However, introducing a switch in the
loop-back path can overcome this limitation. Figure 10.27(c) illustrates
the principle of operation of a switched loop-back technique applied to a transceiver
with direct VCO modulation.
Figure 10.27 Loop-back test configurations: (a) transceiver with direct up-conversion, in which the loop-back connection returns the attenuated PA output (centred at fRF + fB) to the LNA; (b) transceiver with direct VCO modulation, for which a conventional loop-back connection is not practical since the down-converted signal falls at d.c.; (c) switched loop-back, in which an attenuator (ATTN) and a switch driven at fSW create components at fRF ± fSW that down-convert to fSW at the receiver baseband
The periodic switching waveform P(t) can be expanded in a Fourier series:

P(t) = Σ_{n=0}^{∞} Kn cos(nωSW t + θn)
     = K0 + K1 cos(ωSW t + θ1) + K2 cos(2ωSW t + θ2) + · · ·          (10.7)
where ωSW = 2πfSW and Kn, θn are constants that define the amplitude and phase of
each frequency component, respectively. The product of P(t) and the RF signal with
amplitude A and frequency fRF results in the switched signal S(t):

S(t) = Σ_{n=0}^{∞} Kn cos(nωSW t + θn) · A cos(ωRF t)
     = (A/2) Σ_{n=0}^{∞} Kn [cos((ωRF + nωSW)t + θn) + cos((ωRF − nωSW)t − θn)]

In the receiver, the down-conversion mixer multiplies S(t) by the LO tone B cos(ωRF t + φ), giving the demodulated signal

D(t) = S(t) · B cos(ωRF t + φ)
     = C0 + Σ_{n=1}^{∞} Cn cos(nωSW t + αn) + Σ_{n} En cos((2ωRF ± nωSW)t + βn)
where φ, αn and βn are phase constants. The final amplitude of each frequency
component (Cn, En) depends on the amplitude B of the LO as well as on the conversion
gain of the mixer. The d.c. component C0 is blocked by the d.c. offset cancellation
circuitry, and the frequency components located around 2fRF will have a negligible
amplitude since the output of a down-conversion mixer shows a lowpass characteristic.
In addition, C2, C3, . . . , Cn depend on the non-dominant frequency components of
S(t) and hence will be small in comparison to C1.
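This frequency translation can be checked numerically. The sketch below (with the carrier scaled down to 100 MHz so the simulation stays short; all values are illustrative, not the chip's) switches a carrier with a square wave at fSW = 4 MHz, down-converts it with the LO, and locates the dominant baseband tone:

```python
import numpy as np

fs, n = 1e9, 2**16                  # sample rate and record length (illustrative)
t = np.arange(n) / fs
f_rf, f_sw = 100e6, 4e6             # scaled-down carrier; switching clock

rf = np.cos(2 * np.pi * f_rf * t)                    # tone arriving from the PA
p = (np.sign(np.sin(2 * np.pi * f_sw * t)) + 1) / 2  # 0/1 switching waveform P(t)
s = p * rf                                           # switched signal S(t)
d = s * np.cos(2 * np.pi * f_rf * t)                 # down-conversion by the LO

spec = np.abs(np.fft.rfft(d)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

# the strongest non-d.c. baseband component should sit at f_sw
band = (freqs > 1e6) & (freqs < 50e6)
peak = freqs[band][np.argmax(spec[band])]
print(peak / 1e6, "MHz")            # close to 4 MHz
```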
One of the most important advantages of this approach is that the loop-back
connection can have a simple on-chip implementation. A programmable attenuator
can be implemented with switches and a bank of resistors or capacitors and a simple
CMOS switch can perform the commutation of the signal at the input of the LNA. The
switching signal is a digital clock with frequency fSW in the range of megahertz which
can be easily applied to the transceiver on wafer. The ATE can have direct control
over fSW, and in this way the frequency response characterization of the transmitter and
receiver chains can be performed independently without any other modification to the transceiver
architecture.
One of the limitations of a stand-alone loop-back test is that it is not able to
identify the location of catastrophic faults (e.g., an open circuit in the signal path) and
some important parametric faults can pass undetected. For example, a higher gain in
the PA or mixer can mask a lower gain in the LNA. In this sense, a more effective
testing strategy incorporates means of verifying the receiver operation at different
intermediate stages of the signal path and not only at its end points.
10.4.2
The joint application of the techniques described in this chapter can act in a synergistic
way to improve the testability of an entire integrated system. Figure 10.28 depicts the
block diagram of a transceiver using a direct conversion transmitter with a switched
loop-back connection, RFDs in the RF section and an FRCS in the baseband section. A
d.c. to digital converter (d.c.DC) [11] acts as an interface between the on-chip testing
circuitry and a digital port of the ATE.
With the exception of the baseband circuitry at the transmitter, the entire
transceiver chain can be tested by using the LO signal and the switched loop-back
connection. A complete end-to-end test requires the application of a low-frequency
Figure 10.28 Block diagram of the proposed transceiver test architecture: a direct-conversion transceiver with a switched loop-back connection (attenuator ATTN and switch at fSW), RFDs at the numbered RF observation points, an FRCS (analogue multiplexer and APD) at the baseband observation points (I and Q paths), and a d.c. multiplexer with a d.c.-to-digital converter interfacing to the ATE
signal at the input of the transmitter either from the ATE or from an on-chip signal generator like the one proposed for the FRCS. The switched loop-back connection
guarantees the flow of a test stimulus throughout the transceiver path that can be used
by the embedded testing devices to perform measurements at different intermediate
points of the system.
By providing independent control of the frequency of the signals across the transmitter and receiver chains, and providing access to internal points in the RF and
baseband sections, the testability of the receiver is improved. Table 10.4 describes
the different tests that can be performed in this architecture. A complete testing solution for a given transceiver may not have to perform all of the possible tests. This
Figure 10.29 Proposed test flow: an end-to-end loop-back test first screens for catastrophic faults and major performance deviations; devices that fail are rejected, while devices that pass proceed to overall system tests such as gain programmability, 1 dB compression point of transmitter and receiver, output SNR and adjacent channel rejection
10.4.3
Simulation results
A macromodel for the transceiver architecture shown in Figure 10.28, including the
switched loop-back connection and the RF RMS detectors is built to analyse the performance of the proposed testing scheme. The components employed for the model
include the most important non-idealities expected from an integrated implementation such as noise, compression, non-linearity and finite isolation between terminals.
Table 10.5 Area overhead of the test circuitry in reported transceivers

Ref.   Standard                 Analogue area (mm2)   CMOS process (μm)   Overhead of 6 RF RMS detectors, FRCS and d.c.DC (0.45 mm2) (%)
[26]   Bluetooth                5.9                   0.18                7.6
[27]   DECT                     9.4                   0.25                4.8
[28]   802.15.4 (ZigBee)        8.75                  0.18                5.1
[29]   Bluetooth and 802.11b    16                    0.18                2.8
Table 10.6 Characteristics of the modelled transceiver

RF: 2.4 GHz
Transmitter architecture: Direct conversion
Transmitter power: 0 dBm
PA gain: 15 dB
Receiver architecture: Low-IF; IF = 4 MHz
Sensitivity: −82 dBm
RF front-end IIP3: −4 dBm
RF front-end gain: 30 dB (LNA 15 dB + Mixer 15 dB)
Baseband filter: Fifth-order bandpass polyphase
Table 10.6 summarizes the specific characteristics of the modelled architecture, which
are taken from the transceiver reported in Reference 28. An IEEE 802.15.4 implementation is chosen for the example because this standard is targeted for very low-cost
applications. The attenuator in the loop-back connection has a loss of 25 dB to bring
the 0 dBm output of the PA within the linear range of the receiver. The RFDs are
modelled according to the device described in Section 10.3.
Figure 10.30 shows the simulation results for a transceiver meeting specifications.
The frequency for the loop-back switching is 4 MHz, since this is the centre frequency
of the baseband filter. Figures 10.30(a) and (b) show the switched signal at the input
of the LNA in the time and frequency domains, respectively. Observe that the frequency components of interest (2400 ± 4 MHz) are at least 10 dB above other tones.
Figures 10.30(c), (d) and (e) show the outputs of the RF RMS detectors at the output
of the PA, the LNA input and LNA output respectively. Finally, the expected 4 MHz
signal at the output of the baseband filter is shown in Figure 10.30(f).
Even though the output of the RFDs placed after the switch is intermittent, the
gain of the LNA can still be estimated provided that the d.c.DC samples their output
at the appropriate rate. In the presented model, the d.c.DC has around 100 ns to
sample the output of each detector. In a given scenario where the d.c.DC is slower,
fSW can be set first to a lower value (so that the RFDs hold their output for a longer
Figure 10.30 Simulation results for a transceiver meeting specifications: (a) switched signal at the input of the LNA in the time domain and (b) its spectrum around 2.4 GHz; (c), (d) and (e) outputs of the RF RMS detectors at the PA output, LNA input and LNA output, respectively; (f) 4 MHz signal at the output of the baseband filter
time) to test the LNA and then shift to a higher value to test the rest of the receiver
chain.
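The sampling window available to the d.c.DC is set by the switching clock: with a 50 per cent duty cycle a detector after the switch is driven for half a switching period, 125 ns at fSW = 4 MHz, of the same order as the 100 ns figure quoted above (assuming part of the half-period is lost to detector settling, which is my reading, not stated in the text):

```python
def hold_window_ns(f_sw_hz, duty=0.5):
    """Time per switching period during which a post-switch detector is driven
    (assumed relation: duty cycle times the switching period)."""
    return duty / f_sw_hz * 1e9

print(hold_window_ns(4e6))   # 125 ns half-period at fSW = 4 MHz
print(hold_window_ns(1e6))   # lowering fSW lengthens the window to 500 ns
```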
Figure 10.31 shows the simulation results for a transceiver in which some of the
individual building blocks do not meet the target specifications. The PA has a 2 dB
higher gain (12 dB total), the LNA has 5 dB less gain (10 dB total) and the channel
selection filter is not centred at 4 MHz but at 4.5 MHz. Figures 10.31(a) and (b)
show the output of the RFDs at the outputs of the PA and LNA, respectively. It can
be readily noticed that these final values are different from the ones in the case of
Figure 10.30.

Figure 10.31 Simulation results for a transceiver with faulty building blocks: (a) and (b) outputs of the RFDs at the outputs of the PA and LNA, respectively; (c) and (d) output of the channel selection filter for fSW = 4 MHz and fSW = 4.5 MHz

Figures 10.31(c) and (d) show the output of the channel selection filter
for fSW = 4 MHz and fSW = 4.5 MHz.
Note that through a stand-alone end-to-end test, it would not be possible to determine the cause of a reduced amplitude at the end of the receiver baseband. Moreover, if both the PA and LNA exhibit a higher gain, the output of the receiver could show the expected amplitude even if the filter has a deviated centre frequency. If this transceiver were tested with a conventional loop-back test without the switch, changing the input frequency to the transmitter could establish that the fault occurs at the baseband, but not whether it lies on the transmitter or receiver side.
10.5 Conclusions
The combination of a switched loop-back architecture with the use of the recently
developed on-chip testing devices demonstrated in integrated implementations significantly enhances the testability of an RF transceiver. The on-chip testing devices
show that the direct, on-chip observation of analogue and RF building blocks at
megahertz and gigahertz frequencies can be performed in a CMOS process, and with
a minimum area and parasitic loading overhead. The presented strategy enables the
test of the entire wireless system and its individual building blocks at the wafer level
through digital information. The use of external analogue/RF equipment or components is avoided, allowing the implementation of a practical and cost-effective
test solution. Extending the proposed concepts to implementations in current deep-submicron technologies opens significant opportunities for improved performance as well as the solution to new challenges.
10.6 References
1 Ozev, S., Orailoglu, A., Olgaard, C.V.: Multilevel testability analysis and solutions for integrated Bluetooth transceivers, IEEE Design and Test of Computers, 2002;19 (5):82–91
2 Ferrario, J., Wolf, R., Moss, S.: Architecting millisecond test solutions for wireless phone RFICs, Proceedings of the IEEE International Test Conference, Charlotte, NC, September 2003, pp. 1325–32
3 Akbay, S.S., Halder, A., Chatterjee, A., Keezer, D.: Low-cost test of embedded RF/analog/mixed-signal circuits in SOPs, IEEE Transactions on Advanced Packaging, 2004;27 (2):352–63
4 Acar, E., Ozev, S.: Defect-based RF testing using a new catastrophic fault model, Proceedings of the IEEE International Test Conference, Austin, TX, November 2005, pp. 421–9
5 Bhattacharya, S., Halder, A., Srinivasan, G., Chatterjee, A.: Alternate testing of RF transceivers using optimized test stimulus for accurate prediction of system specifications, Journal of Electronic Testing: Theory and Applications, 2005;21 (3):323–39
6 Silva, E., de Gyvez, J.P., Gronthoud, G.: Functional vs. multi-VDD testing of RF circuits, Proceedings of the IEEE International Test Conference, Austin, TX, November 2005, pp. 412–20
7 Ozev, S., Olgaard, C.: Wafer-level RF test and DFT for VCO modulating transceiver architectures, Proceedings of the 22nd IEEE VLSI Test Symposium, Napa Valley, CA, April 2004, pp. 217–22
8 Bhattacharya, S., Chatterjee, A.: Use of embedded sensors for built-in-test of RF circuits, Proceedings of the IEEE International Test Conference, Charlotte, NC, September 2004, pp. 801–9
9 Ryu, J.-Y., Kim, B.C., Sylla, I.: A new low-cost RF built-in self-test measurement for system-on-chip transceivers, IEEE Transactions on Instrumentation and Measurement, 2006;55 (2):381–8
10 Valdes-Garcia, A., Silva-Martinez, J., Sánchez-Sinencio, E.: On-chip testing techniques for wireless RF transceivers, IEEE Design and Test of Computers, 2006;23:268–77
11 Valdes-Garcia, A., Hussein, F., Silva-Martinez, J., Sánchez-Sinencio, E.: An integrated transfer function characterization system with a digital interface for analog testing, IEEE Journal of Solid State Circuits, 2006;41 (10):2301–13
28 Choi, P.: An experimental coin-sized radio for extremely low-power WPAN (IEEE 802.15.4) application at 2.4 GHz, IEEE Journal of Solid State Circuits, 2003;38 (12):2258–68
29 Byunghak, T.: A 2.4 GHz dual-mode 0.18-μm CMOS transceiver for Bluetooth and 802.11b, IEEE Journal of Solid State Circuits, 2004;39 (11):1916–26
Chapter 11

11.1 Introduction
The mixed-signal system-on-a-chip (SoC) has become one of the main drivers for
electronic circuit design. It has become normal to integrate complex systems with
both digital and analogue functions in a single chip, to produce systems such as
wireless transceivers, broadband modems, mobile phone handsets, digital broadcast
receivers and many other application devices. A major motivation for producing such
complex systems as an SoC is cost. With modern submicron semiconductor technology, the achievable complexity of an SoC continues to increase rapidly, with relatively little increase in the associated cost of fabrication. Eliminating the requirement for many of the external components formerly required also drastically reduces
the manufacturing costs of products incorporating such highly integrated SoCs, as
well as bringing technical advantages such as reduced size and power consumption.
From the viewpoint of the analogue and mixed-signal circuit designer, mixed-signal SoC design brings many challenges. The vast majority of circuitry in the
SoC is digital; economic requirements dictate that digital CMOS integrated circuit
(IC) processes are used to fabricate the SoC. However, such processes do not yield
optimized analogue components. A major source of difficulty in circuit design is the
variability of integrated components [1]. Each process step has a degree of variability
associated with it, leading to loose component tolerances. Components of the same type integrated on the same die are subject to nearly identical processing; as a result, close matching of component value ratios is possible, even though absolute tolerances
are large. The ability to produce components with well-matched ratios has been
heavily exploited by circuit designers. However, as device geometries shrink with
each increment in technology feature size, statistical variations between components
integrated on the same die increase, leading to a deterioration in ratio matching.
Section 11.2 On-chip automatic filter tuning. Filters are important building
blocks in virtually all applications where analogue signals are processed and,
for the vast majority of continuous-time filter designs, on-chip automatic tuning
schemes are an essential requirement in order to achieve the performance goals
demanded by the application.
Section 11.3 Self-calibration techniques for frequency synthesizers. Precise
frequency generation is also an essential requirement in a very wide range of
applications; for frequencies in the RF range, phase-locked loop (PLL) frequency
synthesis is widely used. The critical analogue circuit block in a PLL is the
voltage-controlled oscillator (VCO). VCO performance parameters are highly
process dependent, and on-chip VCO calibration techniques can be employed to
reduce the effects of process variations on VCO parameters, yielding improved
performance for the overall PLL system.
Section 11.4 On-chip antenna impedance matching. To achieve efficient operation of RF power amplifiers, the impedance of the load must be matched to that
required by the amplifier. Normally, the load is actually an antenna, which has a
highly variable impedance depending on the exact operating frequency and the
operating environment of the antenna. Automatic impedance matching maximizes
output and efficiency under varying load conditions.
The chapter is concluded in Section 11.5.
11.2 On-chip automatic filter tuning
11.2.1
11.2.2
Figure 11.1 OTA-C biquad filter: transconductors g0–g3, capacitors C1 and C2, tuning input Vtune and bandpass output VBP

The circuit of Figure 11.1 generates simultaneous lowpass and bandpass outputs; here we focus on the bandpass
transfer function, but the lowpass case is very similar. The transconductances in the
filter are made tuneable by varying their bias currents. The capacitors are fixed.
In the case where all components are ideal, the transfer function of Figure 11.1 is
\[ H_{BP}(s) = \frac{V_{BP}}{V_{in}} = \frac{g_0}{g_3}\,\frac{(g_3/C_2)\,s}{s^2 + (g_3/C_2)\,s + g_1 g_2/(C_1 C_2)} \tag{11.1} \]
By equating coefficients with the standard form of the second-order bandpass transfer
function
\[ H_{BP}(s) = K_{BP}\,\frac{(\omega_0/Q)\,s}{s^2 + (\omega_0/Q)\,s + \omega_0^2} \tag{11.2} \]

we have

\[ \omega_0 = \sqrt{\frac{g_1 g_2}{C_1 C_2}}, \qquad Q = \frac{1}{g_3}\sqrt{\frac{g_1 g_2 C_2}{C_1}}, \qquad K_{BP} = \frac{g_0}{g_3} \tag{11.3} \]
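As a quick numerical check of Equations (11.1)–(11.3), the sketch below computes ω0, Q and KBP for the biquad and confirms that the denominator bandwidth term ω0/Q equals g3/C2. The component values are illustrative assumptions, not taken from the text.

```python
import math

# Illustrative (assumed) element values for the OTA-C biquad of Figure 11.1
g0, g1, g2, g3 = 50e-6, 50e-6, 50e-6, 10e-6  # transconductances (S)
C1, C2 = 10e-12, 10e-12                      # capacitances (F)

w0 = math.sqrt(g1 * g2 / (C1 * C2))          # resonant frequency, Equation (11.3)
Q = (1 / g3) * math.sqrt(g1 * g2 * C2 / C1)  # quality factor
K_BP = g0 / g3                               # bandpass gain at resonance

# The s-coefficient of the denominator in Equation (11.1) is w0/Q = g3/C2
assert math.isclose(w0 / Q, g3 / C2)

print(f"f0 = {w0 / (2 * math.pi) / 1e6:.3f} MHz, Q = {Q:.1f}, K_BP = {K_BP:.1f}")
```

With these values the resonance lies just below 1 MHz; scaling all four transconductances together moves ω0 without disturbing KBP, which is the basis of the tuning scheme described next.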
\[ H'_{BP}(s) = \frac{g_0}{g_3}\,\frac{(k_g g_3/k_c C_2)\,s}{s^2 + (k_g g_3/k_c C_2)\,s + k_g^2 g_1 g_2/(k_c^2 C_1 C_2)} \tag{11.4} \]

ω0 has been changed by a factor kg/kc. In order to restore the design value of ω0, the transconductances of the four OTAs are simultaneously tuned until kg = kc, in which case Equation (11.4) reduces to Equation (11.1). In practice, this is achieved by tuning the transconductances until ω0 is equal to the design value. It is also possible to tune Q independently of ω0 by varying g3 alone. However, because Q is determined by
Figure 11.2 (a) OTA-C biquad including parasitic elements Gp1 and Cp2; (b) transconductor magnitude |g| and excess phase against frequency
The transconductance with excess phase can be modelled as

\[ g(s) = \frac{g}{1 + s/\omega_p} \approx g\,e^{-s/\omega_p} \tag{11.5} \]
In the circuit of Figure 11.2(a), the most significant influence of excess phase occurs in the two integrators made up of g1, C1 and g2, C2. Substituting this frequency-dependent transconductance for the ideal transconductors in Equation (11.1), and making appropriate approximations for ω0 ≪ ωp, gives a new value of Q for the circuit when excess phase is included:
\[ Q' \approx \frac{Q}{1 - 2Q\,\omega_0/\omega_p} \tag{11.6} \]
Q is significantly affected by quite small values of excess phase. For example, if the design value of Q is 10 and ωp = 100ω0, giving rise to an excess phase of about 0.6°, Q' from Equation (11.6) is 12.5, an increase of 25 per cent. A design Q of 50 will result in Q' approaching infinity and instability.
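The sensitivity predicted by Equation (11.6) is easy to reproduce numerically; the sketch below recomputes the worked example in the text (a design Q of 10 with ωp = 100ω0) and the unstable Q = 50 case:

```python
def q_with_excess_phase(q_design: float, w0_over_wp: float) -> float:
    """Effective Q with excess phase included, Equation (11.6)."""
    denom = 1.0 - 2.0 * q_design * w0_over_wp
    if denom <= 0.0:
        return float("inf")  # Q' diverges: the filter becomes unstable
    return q_design / denom

# Worked example from the text: Q = 10, wp = 100*w0 (about 0.6 deg excess phase)
assert abs(q_with_excess_phase(10.0, 0.01) - 12.5) < 1e-9

# A design Q of 50 with the same excess phase drives Q' to infinity
assert q_with_excess_phase(50.0, 0.01) == float("inf")
```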
With the finite output conductances Gp1 and Gp2 of the OTAs included, the transfer function becomes

\[ H(s) = \frac{(g_0 G_{p1}/C_1 C_2)\left(1 + (C_1/G_{p1})\,s\right)}{s^2 + \left(g_3/C_2 + G_{p1}/C_1 + G_{p2}/C_2\right)s + \left(G_{p1}G_{p2} + G_{p1}g_3 + g_1 g_2\right)/C_1 C_2} \tag{11.7} \]
The Q of the modified circuit is approximated by
\[ Q' = \frac{Q}{1 + (Q/\omega_0)\left(G_{p1}/C_1 + G_{p2}/C_2\right)} \tag{11.8} \]
Thus, increasing the output conductance of the OTAs reduces Q. This sets an upper limit to the Q which can be achieved for a given set of transconductors and capacitors; as the design Q in Equation (11.3) tends to infinity, the maximum achievable Q is
\[ Q_{max} = \frac{1}{(1/\omega_0)\left(G_{p1}/C_1 + G_{p2}/C_2\right)} = \frac{\omega_0}{G_{p1}/C_1 + G_{p2}/C_2} \tag{11.9} \]
In summary, the large tolerances of integrated components give rise to large frequency
errors in the filter response, so integrated filters almost always require on-chip tuning. Frequency tuning will often be sufficient for filters operating at modest values
of Q and frequency, typically lowpass and bandpass filters where the bandwidth
is of the same order as the centre frequency. However, in high-Q, high-frequency
filters, circuit parasitics, principally the excess phase and finite d.c. gain of the
active circuits, profoundly affect the Q of the filter response, so that Q must also
be tuned.
11.2.3
The outline of a typical tuning system is shown in Figure 11.3. A well-defined reference signal is applied to the filter input. One or more parameters of the filter output
signal are measured by the frequency tuning control circuit and compared to a reference. The resultant error signal is used by the control circuit to calculate a correction
signal, which is then applied to the frequency tuning input of the filter. Thus, the
system forms a closed feedback loop in which the filter is forced to converge on the
desired frequency response. In a similar way, if implemented, the Q control circuit
generates a tuning signal that corrects the Q of the filter.
Desirable features of any on-chip tuning system are minimal chip area and low power consumption. This calls for simple hardware and a tuning algorithm with minimal computational requirements. Conversely, the functional requirements placed on
the filter design may be very complex, requiring several different performance goals
for cut-off frequencies, gain, group delay, and so on, which must be simultaneously
met. It is not usually possible for an on-chip tuning system to evaluate all the relevant parameters of filter performance since this requires many measurements to be
performed on the filter output signal over a range of frequencies.

Figure 11.3 Outline of a typical tuning system: frequency tuning control and Q tuning control loops act on the filter H(s), driven by a reference signal

In high-order filters, there are many tuneable components, so achieving the desired filter response
requires the control of a large number of variables simultaneously. For these reasons,
it is very difficult to directly tune a high-order filter using reasonably simple tuning
circuits.
In order for the filter to function correctly, the tuning system must operate when
the chip is first powered on. Also, component values will continuously drift while the
circuit is powered, due to changes in environmental and operating conditions, so it
is necessary to periodically repeat the tuning process during normal operation. This
creates a problem in that the reference will be present within the filter passband at the
same time as the desired signal, with the inevitable possibility of mutual interference
occurring between the tuning system and the rest of the transceiver signal processing.
The scheme of Figure 11.3 is therefore normally operated as an offline tuning system;
periodically the normal signal input to the filter is removed, and the reference signal
applied. The filter is then tuned, and the updated values of frequency and Q tuning
signals stored until the next tuning cycle occurs.
These offline tuning cycles can be readily accommodated in many system architectures; for example, many types of transceiver alternate between transmit and receive;
receiver filter tuning can take place during the transmit periods without affecting
receiver operation. However, the additional signal routing and the requirement to
store the tuning signals while the filter is online lead to added complication. Therefore, online tuning is widely used, where the tuning process proceeds continuously
and simultaneously with normal circuit operation. One way of achieving this is to
devise a reference signal that has minimal effect on subsequent signal processing, but
which at the same time can be used to measure the necessary filter parameters. An
example of this is described in Reference 6, where the reference signal is made nearly
orthogonal to the received signal using spread-spectrum techniques.
Figure 11.4 Master–slave tuning: the reference signal and the frequency and Q tuning loops act on the master filter H(s); the same tuning signals are applied to the slave filter in the signal path

11.2.4 Master–slave tuning
A very widely used and important online tuning scheme is the master–slave tuning scheme outlined in Figure 11.4. This makes use of the inherent good matching
between components and circuit subsystems that are achieved within a single IC. Two
well-matched filter sections are used; the reference signal is applied to the master section, whilst the actual input signal is applied to the slave section. The tuning system
develops tuning signals in a closed feedback loop which correct errors in the response
of the master section, as in offline tuning. The same tuning signals are simultaneously
applied to the slave section. If the master and slave sections are identical and perfectly matched, the response of the slave filter will be the same as that of the master. Thus, it is unnecessary to apply the reference signal to the slave filter, which can therefore process the input signal continuously.
In practice, master and slave are usually different. The master section is usually
a low-order filter, often a biquad, since this has a simple response for which it is
relatively easy to design tuning algorithms and is economic in its use of chip area
and power consumption. This is illustrated in Figure 11.5; the master filter is a single
biquad with a single resonance peak in its response. The slave section can be of
whatever order is required to meet the filter specifications, with the same tuning signals
applied to each section. The diagram shows the effect on the frequency responses of
master and slave as they are simultaneously tuned. Clearly, it is much easier to design
a tuning algorithm for the single biquad master when compared to the high-order
slave response, with its multiple maxima and minima. Thus, in addition to allowing
online tuning, the master–slave tuning scheme provides a solution to the problem
of tuning complex filters. A large proportion of high-order integrated filter designs
therefore utilize master–slave tuning in some form.
Figure 11.5 Amplitude responses of the single-biquad master and the high-order slave as both are tuned over the range fmin to fmax about fnom
The essential assumption made in the master–slave scheme is that the ratios of
components in the master and slave sections can be accurately realized, and will track
each other precisely as the master section is tuned. If this is the case, the cut-off
frequencies and Q of all the filter sections in the slave will exactly track those of
the master, and if the master section Q is maintained at the correct value the slave
filter response shape will remain correct as the filter frequency is tuned. There are
practical limitations on how closely this can be achieved, and a substantial amount
of design and layout effort must be expended to ensure that the master accurately
models the tuning behaviour of the slave. Since parasitic effects can substantially
alter the performance of the filter, these must also be accurately modelled in the
master section. These requirements can usually be best met by designing the slave
filter with circuits that are as near identical to the master as possible, and by making the
tuning reference signal frequency close to that of the signal frequency. This allows the
best matching between sections, and also ensures that frequency-dependent parasitic
effects are similar in master and slave. Synthesis techniques are required that result
in filter circuits using the minimum possible spread of component values.
11.2.5 Frequency tuning techniques
For frequency tuning, the most commonly used input reference signal is derived from
a stable clock oscillator. This is convenient, since most systems already include an
accurate off-chip clock signal, usually derived from a quartz crystal, from which all
on-chip clock signals are derived through various forms of frequency synthesis. At
the output of the filter, phase comparison is the most widely used method of determining the state of tuning of the filter. An outline frequency tuning scheme is shown
in Figure 11.6.
In second-order filter sections, the phase difference between the filter input and output reaches a well-defined value at the resonance frequency: 90° in the case of a lowpass section and 0° for a bandpass section. Accurate reference phase shifts, independent of component value tolerances, can be generated using digital counters
Figure 11.6 Phase-comparison frequency tuning: the reference frequency and the slave filter output are compared by a phase detector; the error amplifier and loop filter drive the tuning inputs

Figure 11.7 PLL-based frequency tuning: a VCO formed from a filter section and a limiting amplifier is locked to the reference frequency, and its tuning signal is applied to the slave filter
The VCO effectively operates at infinite Q and inherently requires a non-linear amplitude limiting mechanism to achieve a stable signal amplitude. To ensure that the frequency-determining elements operate within their linear range, the VCO is usually implemented by adding a limiting amplifier to provide feedback around a bandpass biquad filter section.
Many successful frequency tuning systems using frequency-locked or phase-locked loops (PLLs) as described have been implemented in practical designs, for example, in References 7 and 8.
These methods are well suited to masterslave designs, where the tuning loop can
operate continuously. This yields an extremely simple control system and is often
capable of frequency tuning accuracy within 1 per cent. These techniques become
increasingly difficult to apply at the highest frequencies, due to the increasingly severe
errors caused by excess phase, both in the filter or VCO and in the phase detector
itself.
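The well-defined phase values at resonance that these schemes exploit are easy to verify from the standard second-order transfer functions; the sketch below (with an assumed ω0 and Q) evaluates the phase at ω = ω0:

```python
import cmath
import math

w0, Q = 2 * math.pi * 1e6, 5.0  # assumed resonant frequency and quality factor

def h_lowpass(w: float) -> complex:
    """Standard second-order lowpass response evaluated at frequency w."""
    s = 1j * w
    return w0**2 / (s**2 + (w0 / Q) * s + w0**2)

def h_bandpass(w: float) -> complex:
    """Standard second-order bandpass response evaluated at frequency w."""
    s = 1j * w
    return (w0 / Q) * s / (s**2 + (w0 / Q) * s + w0**2)

# At w = w0 the lowpass phase is -90 degrees and the bandpass phase is 0
assert abs(math.degrees(cmath.phase(h_lowpass(w0))) + 90.0) < 1e-6
assert abs(math.degrees(cmath.phase(h_bandpass(w0)))) < 1e-6
```

These are the two reference phase shifts (90° and 0°) against which the phase detector in Figures 11.6 and 11.7 compares the filter output.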
Frequency tuning techniques can utilize the time-domain response of the filter.
The response of a high-Q bandpass filter to a step or impulse function is a damped
sinusoid at the filter output. The period of the sinusoid is approximately equal to the reciprocal of the resonant frequency of the filter. The filter output waveform is squared using a limiting
amplifier and the period measured using digital counter techniques. A tuning signal is
derived by comparing the measured period with the desired value. In order to achieve
good accuracy, high resolution in the period measurement is necessary. This requires
that the transient response has a long duration. The duration increases with Q and
filter order, and owing to this and the iterative nature of the measurement technique,
it is most appropriate for offline tuning of high-order, high-Q bandpass filters [9, 10].
This tuning control method is digital in nature, so is easily combined with switched
array tuning schemes.
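A behavioural sketch of the period-measurement technique is given below. The sample rate, resonant frequency and Q are assumed for illustration; the damped sinusoid is squared by an ideal limiter and the period is measured by counting samples between rising edges:

```python
import math

fs, f0, Q = 1.0e9, 10.0e6, 50.0  # assumed sample rate, resonance and Q
w0 = 2 * math.pi * f0
alpha = w0 / (2 * Q)             # decay rate of the damped sinusoid

# Step/impulse response of a high-Q bandpass section: exp(-alpha*t)*sin(w0*t)
n = int(20e-6 * fs)
x = [math.exp(-alpha * i / fs) * math.sin(w0 * i / fs) for i in range(n)]

# Ideal limiting amplifier followed by a rising-edge (zero-crossing) counter
sq = [1 if v >= 0 else 0 for v in x]
edges = [i for i in range(1, n) if sq[i] and not sq[i - 1]]
period = (edges[-1] - edges[0]) / (len(edges) - 1) / fs  # average period

assert abs(period - 1 / f0) / (1 / f0) < 0.01  # within 1 per cent of 1/f0
```

Averaging over many cycles is what gives the measurement its resolution, which is why a long transient (high Q and high filter order) suits this method.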
A related technique is to measure the time constant of an integrator using a d.c.
charging current. An example of this technique using an OTA-C integrator is shown
in Figure 11.8.
Figure 11.8 Time-constant tuning of an OTA-C integrator: an accurate clock gates the charging of C for a time t; the peak detector output Vout(max) is compared with Vref by the error amplifier, which drives the gm control input of the slave filter
An accurate clock signal is used to open the switch for a period t. During t, the
integrator output voltage is a linear ramp that reaches a maximum value of
\[ V_{out(max)} = V_{ref}\,t\,\frac{g_m}{C} \tag{11.10} \]
This maximum voltage is stored by the peak detector, and compared with the
reference voltage by the error amplifier. The resulting lowpass-filtered error signal is
applied to the OTA transconductance control input and causes the capacitor charging current, and therefore Vout(max), to vary. Over a large number of clock cycles, this feedback loop causes Vout(max) to become equal to Vref:
\[ V_{ref}\,t\,\frac{g_m}{C} = V_{ref}, \qquad \frac{g_m}{C} = \frac{1}{t} \tag{11.11} \]
Since t is accurately defined by the clock signal, and the resonant frequency of
the filter is accurately proportional to gm /C due to well-defined ratios between all
transconductances and capacitances on the chip, the resonant frequency is set to the
correct value.
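The convergence described by Equations (11.10) and (11.11) can be sketched as a discrete-time feedback model; the initial gm error and the loop gain below are assumptions for illustration:

```python
import math

C = 10e-12         # integrator capacitor (assumed)
t = 1e-6           # clock-defined charging period (assumed)
Vref = 1.0
gm = 0.7 * C / t   # gm starts 30 per cent low due to process variation (assumed)

# Each clock cycle: charge for t, compare the peak with Vref, nudge gm
mu = 0.2           # assumed loop gain of the error amplifier path
for _ in range(200):
    vmax = Vref * t * gm / C           # Equation (11.10)
    gm += mu * (Vref - vmax) * C / t   # error signal adjusts the bias current

# The loop settles at gm/C = 1/t, Equation (11.11)
assert math.isclose(gm / C, 1 / t, rel_tol=1e-6)
```

Because all on-chip gm/C ratios track the tuned integrator, fixing this one time constant sets the resonant frequency of the whole slave filter.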
In order to avoid the problems caused by unwanted phase shifts, frequency tuning methods have been devised based on amplitude measurements. A second-order response with Q greater than 1/√2 contains a peak in its amplitude response plotted against frequency. For high Q values, the frequency of the amplitude peak closely approximates the resonant frequency. Tuning the resonant frequency of the filter with a fixed input
reference frequency will also produce a peak in the output response when the two
frequencies coincide. The tuning system only needs to detect when the maximum
output signal is achieved; the amplitude detector need therefore have neither high
accuracy nor linearity, provided it has a monotonic response.
Figure 11.9 Amplitude-peak frequency tuning scheme: an envelope detector, peak detector, comparator, ramp generator and control logic develop the tuning signal Vtune for the master filter H(s); the same signal is applied to the slave filter
A tuning scheme using this principle is shown in Figure 11.9. The reference signal
is applied to the biquad input, and the envelope detector produces a d.c. level, Venv ,
proportional to the amplitude of the filter output. In the first phase of the tuning cycle,
the filter tuning voltage Vtune is swept through its range by the ramp generator. At the
point where the resonant frequency of the filter coincides with the reference frequency,
the filter output amplitude and thus Venv reaches a maximum, and this value is stored
by the peak detector as Vpk . In the second tuning phase, Vtune is swept again and Venv
is compared with Vpk by the comparator. At the point where the resonant frequency
and reference frequency coincide, Venv is equal to Vpk and the control logic opens the
switch. Thus, the value of Vtune giving the correct filter resonant frequency is stored
on the hold capacitor, until the next tuning cycle begins.
In practice, the circuit of Figure 11.9 will suffer tuning errors due to parasitic charge injection into, and loss from, the tuning voltage holding capacitor, and offsets in the comparator and peak detector. However, a more sophisticated implementation of this technique has been described [11] in which these errors are largely eliminated.
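A behavioural model of the two-phase tuning cycle of Figure 11.9 is sketched below. The linear tuning law f0(Vtune) is a hypothetical assumption, and the envelope and peak detectors are ideal:

```python
import math

f_ref = 10.0e6  # reference frequency applied to the biquad input
Q = 20.0

def f0_of_vtune(v: float) -> float:
    """Assumed (hypothetical) linear tuning law: 5-15 MHz over 0-1 V."""
    return 5.0e6 + 10.0e6 * v

def envelope(v: float) -> float:
    """Output amplitude of the bandpass biquad driven at f_ref."""
    w, w0 = 2 * math.pi * f_ref, 2 * math.pi * f0_of_vtune(v)
    s = 1j * w
    return abs((w0 / Q) * s / (s * s + (w0 / Q) * s + w0 * w0))

ramp = [i / 1000 for i in range(1001)]  # ramp generator sweep of Vtune

# Phase 1: sweep Vtune and store the peak envelope value Vpk
v_pk = max(envelope(v) for v in ramp)

# Phase 2: sweep again; latch Vtune when the envelope reaches Vpk
v_tune = next(v for v in ramp if envelope(v) >= v_pk)

assert abs(f0_of_vtune(v_tune) - f_ref) / f_ref < 0.01
```

As the text notes, the detectors need neither accuracy nor linearity here, only monotonicity: the loop simply looks for the same maximum twice.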
11.2.6 Q tuning techniques
Frequency tuning ensures that the centre frequency or cut-off frequency of the filter
is tuned to the correct value; however, this does not necessarily ensure that the shape
of the frequency response is correct; this also depends on the Q of the filter sections.
As noted above, parasitic effects in particular may lead to severe distortion of the
filter response. The frequency tuning schemes described above are independent of Q.
However, in order to tune the filter Q, it is first necessary that the frequency tuning
process is completed, because Q is defined in terms of the way that the filter response
changes close to the resonant frequency. Any error in the filter resonant frequency will
therefore also result in errors in Q. A further difficulty is that although the designer
will attempt as far as possible to make Q unaffected by frequency tuning and vice
versa, they are never entirely independent. Inevitably, tuning Q will introduce a new
error in filter frequency, and correcting this error will alter Q again.
Figure 11.10 Q tuning scheme: the attenuated reference Vref and the master filter output are compared by matched envelope detectors and an error amplifier, which drives the Q tuning input; the same tuning signal is applied to the slave
Therefore, several iterations of frequency and Q tuning may be required to correctly tune the filter, or the two processes must proceed simultaneously. The designer
must take the interdependence of both tuning processes into account in order to ensure
that convergence takes place [12]. This is especially difficult with high-Q filter sections, where as seen in Section 11.2.2, Q is sensitive to small changes in tuning, and
instability can easily occur.
The most widely used Q tuning technique [11, 13] utilizes the fact that in many
cases, the gain of a biquad at the resonant frequency is proportional to the Q. For
example, in the case of the OTA-C biquad of Figure 11.1, from Equations (11.1),
(11.2) and (11.3), we can derive expressions for the gain in terms of Q at ω0:
\[ \left|\frac{V_{BP}}{V_{in}}\right|_{\omega_0} = \frac{g_0}{g_3} = Q\,\frac{g_0}{\sqrt{g_1 g_2 C_2/C_1}} \tag{11.12} \]

(a similar expression, also proportional to Q, holds for the lowpass gain VLP/Vin at ω0)
The Q tuning system of Figure 11.10 is an amplitude-locked loop which operates using this proportionality between gain and Q. It is assumed that separate frequency tuning circuits maintain ω0 of the filter exactly equal to the desired value and that the gain of the filter is equal to the Q at ω0. The reference signal is attenuated by a factor 1/KQ by a potential divider and is applied to the filter input. The output amplitude
of the filter is therefore Vref Q/KQ . A pair of matched envelope detectors generate
d.c. levels proportional to Vref and the filter output, which are compared by an error
amplifier. The resulting feedback signal varies the Q of the filter so that the filter
output is equal to Vref , in which case Q = KQ . Since KQ is determined by component
ratios which can be made accurately, Q is also accurately defined.
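The amplitude-locked loop of Figure 11.10 can be sketched behaviourally as follows; the initial Q error and the loop gain are assumed for illustration, and the filter gain at ω0 is taken to equal Q as in the text:

```python
import math

K_Q = 10.0   # target Q, set by the accurate potential-divider ratio
Vref = 1.0
Q = 25.0     # initial (erroneous) filter Q, assumed

# Filter gain at w0 equals Q, so the output amplitude is Vref*Q/K_Q.
# The error amplifier steers Q until the output amplitude equals Vref.
mu = 0.5     # assumed loop gain
for _ in range(200):
    vout = (Vref / K_Q) * Q        # envelope of the filter output
    Q += mu * (Vref - vout) * K_Q  # feedback adjusts the Q tuning input

# At lock, vout = Vref and therefore Q = K_Q
assert math.isclose(Q, K_Q, rel_tol=1e-9)
```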
11.2.7
Multiple loop feedback (MLF) filters are desirable for fully integrated filters because
of their low sensitivity. This applies especially to high-Q bandpass filters, due to
their high sensitivity to frequency tuning errors and the high Q required from each
filter section. However, the multiple feedback structure which is responsible for this
low sensitivity at the same time makes this type of filter more difficult to tune. The
multiple feedback paths existing between sections of the filter result in interaction
Figure 11.11 LC ladder bandpass filter with switches S1–Sn in the shunt and series resonator branches; V1 is monitored by the amplitude detector during tuning
between all filter sections. Thus, tuning any one section of the filter affects all poles
and zeros in the filter transfer function, modifying the filter transfer function in a
complex way. This makes the design of a tuning algorithm capable of realizing the
desired response extremely difficult. This section describes a tuning method based
on Dishal's technique [14, 15] which overcomes this problem, and is applicable to
the leapfrog (LF) form of MLF filter and other types of filters based on LC ladder
simulation. This method can be illustrated using the LC ladder bandpass filter shown
in Figure 11.11.
Synthesis of this ladder filter with centre frequency ω0 results in the inductor and capacitor values in each branch of the ladder having the same resonant frequency, 1/(Li Ci) = ω0². To tune the filter, initially all switches in the series arms are opened and all those in the shunt arms are closed. A signal is applied to the input at frequency ω0, and V1 is monitored by the amplitude detector. S1 is opened and C1/L1 is tuned to
parallel resonance, that is, maximum amplitude of V1 . Since S2 is open, the resonator
C1 /L1 is isolated from the rest of the circuit, which therefore does not alter the resonant
frequency. Next, S2 is closed and C2 /L2 is tuned to series resonance and minimum
V1 . Since S3 is closed, C2 /L2 are also isolated from succeeding stages of the filter.
Each successive branch is then adjusted in turn, the shunt branches for maximum V1
and the series branches for minimum V1 , with the associated switch being opened or
closed. Since all preceding branches are already resonant, the reactive component of
their net series or shunt impedance is zero, and they are transparent at frequency ω0.
When Ln /Cn have been adjusted, the tuning process is complete. In tuning schemes for
second-order cascade filters, it is normally necessary to provide Q tuning capability.
This is not done when tuning using Dishals method as described, and so the tuning
process does not completely define the transfer function of the filter. The bandwidth
and ripple in the response are defined by ratios between component values in different
branches of the circuit, whilst the method described above only tunes the inductor
and capacitor in each individual branch in isolation. However, because all branches
Figure 11.12 Leapfrog (signal flow graph) simulation of the ladder of Figure 11.11: integrators 1/sCi and 1/sLi with terminations 1/Rs and 1/RL, and switches S2–Sn in the feedback paths
are resonant at ω0, the passband is symmetrical, insertion loss is minimized and gross
distortion of the frequency response does not occur.
Each LC resonator in the prototype is replaced by a two-integrator-loop biquad with the same ω0. In the LC filter, coupling between resonators occurs because they
are connected together; in the LF filter, this coupling occurs via the feedback paths.
Therefore, the switches in Figures 11.11 and 11.12 perform an equivalent function.
As in the LC prototype, it is only necessary to monitor the test signal amplitude at
one point in the LF circuit, the output of the first integrator, V1 .
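The extremum criterion used at each step of Dishal's method can be illustrated on isolated branches. The sketch below is not a simulation of the ladder or its feedback paths; it simply sweeps a branch capacitance and picks the value giving maximum shunt-branch impedance (parallel resonance) or minimum series-branch impedance (series resonance) at ω0, with assumed element values:

```python
import math

w0 = 2 * math.pi * 10.0e6  # design centre frequency

def z_parallel(L: float, C: float, w: float) -> float:
    """Impedance magnitude of a lossless parallel (shunt) LC branch."""
    return abs(1j * w * L / (1 - w * w * L * C))

def z_series(L: float, C: float, w: float) -> float:
    """Impedance magnitude of a lossless series LC branch."""
    return abs(1j * w * L + 1 / (1j * w * C))

def tune_c(L: float, z, pick) -> float:
    """Sweep C on a 1 pF grid and pick the extremum of |Z| at w0."""
    cs = [c * 1e-12 for c in range(50, 300)]
    return pick(cs, key=lambda c: z(L, c, w0))

L1, L2 = 2e-6, 2e-6               # assumed fixed branch inductances
C1 = tune_c(L1, z_parallel, max)  # shunt branch: tune for maximum V1
C2 = tune_c(L2, z_series, min)    # series branch: tune for minimum V1

# Both branches end up resonant at (approximately) the design w0
for L, C in ((L1, C1), (L2, C2)):
    assert abs(1 / math.sqrt(L * C) - w0) / w0 < 0.01
```

In the real scheme the maxima and minima are observed at the single node V1, with the switches isolating the branch being adjusted from the rest of the circuit.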
A single biquad making up part of the filter in Figure 11.12 is shown in
Figure 11.13(a). This could be implemented as the OTA-C biquad circuit of
Figure 11.13(b). The transfer function of Figure 11.13(a) is
s (1/RS C1 )
RS
2
R s + s (1/RS C1 ) + (1/L1 C1 )
H(s)
1
=
,
L 1 C1
1
0
=
,
Q
RS C1
KBP =
RS
R
(11.13)
The transfer function of the OTA-C circuit of Figure 11.13(b) has the same form:

\[ H(s) = \frac{g_0}{g_3}\,\frac{(g_3/C_1)\,s}{s^2 + (g_3/C_1)\,s + g_1 g_2/C_1 C_2} \]

\[ \omega_0 = \sqrt{\frac{g_1 g_2}{C_1 C_2}}, \qquad \frac{\omega_0}{Q} = \frac{g_3}{C_1}, \qquad K_{BP} = \frac{g_0}{g_3} \tag{11.14} \]
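Comparing Equations (11.13) and (11.14) term by term gives a direct recipe for mapping a prototype branch onto the OTA-C biquad. The sketch below (prototype and integrated capacitor values are illustrative assumptions) solves for the transconductances:

```python
import math

# Prototype branch values (assumed, illustrative)
Rs, R = 1e3, 2e3
L_p, C_p = 2.533e-6, 1e-10   # prototype L1 and C1

w0 = 1 / math.sqrt(L_p * C_p)  # from Equation (11.13)
bw = 1 / (Rs * C_p)            # w0/Q
K_bp = Rs / R

# Choose the on-chip capacitors, then solve Equation (11.14) for the gms
C1, C2 = 10e-12, 10e-12            # assumed integrated capacitor values
g1 = g2 = w0 * math.sqrt(C1 * C2)  # makes sqrt(g1*g2/(C1*C2)) = w0
g3 = bw * C1                       # makes g3/C1 = w0/Q
g0 = K_bp * g3                     # makes g0/g3 = K_BP

assert math.isclose(math.sqrt(g1 * g2 / (C1 * C2)), w0)
assert math.isclose(g3 / C1, bw)
assert math.isclose(g0 / g3, K_bp)
```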
Figure 11.13 (a) A single bandpass biquad from the simulation of Figure 11.12, with elements 1/Rs, 1/R, 1/sC1 and 1/sL1; (b) an equivalent OTA-C biquad implementation with transconductors g0–g3, capacitors C1 and C2 and tuning input Vtune
In order to implement the tuning method, g0–g3 and the bias current sources are dimensioned so that the ratio between the transconductances remains constant as Vtune is varied. Similarly, the ratio C1/C2 will be preserved with variations in
absolute capacitance. Suppose process variations change all transconductances by a
factor kg and all capacitances by a factor kc . The transfer function of Figure 11.13(b)
then becomes:
s kg g3 /kc C1
kg g1 g2
g0
H (s) =
,
=
0
g3 s2 + s k g /k C + k 2 g g /k 2 C C
kc C 1 C 2
g 3
c 1
g 1 2
1 2
(11.15)
ω0′ is altered from ω0 by a factor of kg/kc. The effect of tuning the circuit to resonance using Dishal's method is to force ω0′ to the design value ω0 by changing Vtune, and hence kg g0–kg g3. This is achieved when kg = kc. Substituting kg = kc into Equation (11.15) gives the original transfer function. Thus, tuning only the pole frequencies of the biquad also restores Q to the original value.
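The argument that forcing kg = kc restores the whole response, not just ω0, can be checked numerically. The sketch below evaluates Equation (11.15) over a frequency sweep; the component values are the same illustrative assumptions as before, not values from the text.

```python
import numpy as np

# Illustrative OTA-C biquad values (assumptions, not from the text)
g0, g1, g2, g3 = 1e-4, 1e-4, 1e-4, 1e-5
C1, C2 = 100e-12, 100e-12

def H(s, kg=1.0, kc=1.0):
    # Equation (11.15): every transconductance scaled by the process factor
    # kg and every capacitance by kc
    num = (kg * g0 / (kc * C1)) * s
    den = s**2 + s * (kg * g3 / (kc * C1)) + (kg**2 * g1 * g2) / (kc**2 * C1 * C2)
    return num / den

w = np.logspace(5, 7, 201)   # rad/s sweep around the centre frequency
s = 1j * w

# A process shift with kg != kc moves the pole frequency by kg/kc ...
shifted = H(s, kg=1.3, kc=1.0)
# ... but once tuning forces kg = kc, the nominal response is recovered
tuned = H(s, kg=1.3, kc=1.3)
nominal = H(s)
```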
An on-chip tuning system which tunes the pole frequency of a single biquad by detecting the peak of its amplitude response is described in detail in Reference 11. This system is shown in elementary form in Figure 11.14. A test signal at ω0 is applied to the biquad input and Vo is rectified. The rectified signal Venv is applied to a peak detector. In the first tuning phase, Vtune is swept through its range by a ramp generator. At the point where the pole frequency of the biquad is equal to ω0, Venv is a maximum, and this value is stored by the peak detector output Vpk. In the second tuning phase, Vtune is swept again, and Venv is compared with Vpk. At the point where both are equal, the control logic opens the switch, causing the current value of tuning voltage to be stored on Chold, which is again the peak of the amplitude response. Reference 11 describes a more sophisticated implementation in which the effects of delays and offsets are cancelled.
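The two tuning phases can be sketched behaviourally as follows. The linear Vtune-to-pole-frequency mapping and all numerical values are illustrative assumptions, not taken from Reference 11.

```python
# Behavioural sketch of the two-phase peak-detect tuning loop of Figure 11.14
W0_TEST = 1.0e6      # test-signal frequency, rad/s
Q = 10.0

def envelope(vtune):
    # Rectified biquad output amplitude when driven at W0_TEST; the pole
    # frequency is assumed to move linearly from 0.5e6 to 1.5e6 rad/s as
    # vtune goes from 0 to 1 V (a hypothetical tuning characteristic)
    wp = 0.5e6 + vtune * 1.0e6
    s = 1j * W0_TEST
    h = (wp / Q) * s / (s**2 + (wp / Q) * s + wp**2)
    return abs(h)

ramp = [k / 1000.0 for k in range(1001)]   # ramp-generator sweep of Vtune

# Phase 1: sweep Vtune and store the peak envelope value (Vpk)
vpk = max(envelope(v) for v in ramp)

# Phase 2: sweep again; the switch opens (Vtune held on Chold) where the
# envelope reaches the stored peak
vtune_final = next(v for v in ramp if envelope(v) >= vpk)
```

With this model the envelope peaks where the pole frequency equals the test frequency, so the held tuning voltage is 0.5 V.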
This scheme may be extended as in Figure 11.15 to sequentially tune a number of biquads making up the bandpass LF filter. Initially, Vtune1 is adjusted for peak output at V1. To isolate the first biquad from the rest of the filter, Vtune2–VtuneN are initialized to zero, de-biasing the other biquads. After Vtune1 has been adjusted, Vtune2 is tuned for minimum V1. The minimum detector is a peak detector with inverted polarity. The process is repeated with Vtune3–VtuneN until all biquads have been tuned.

Figure 11.14 — [single-biquad tuning system: test signal applied to Vin; biquad output Vo rectified to Venv; peak detector storing Vpk; comparator and control logic gating a ramp generator onto Chold to set Vtune]
Figure 11.15 — [sequential tuning of the LF filter: ramp generator and hold capacitors Chold1–CholdN providing Vtune1–VtuneN; Venv derived from V1 and processed by a peak detector (Vpk) and a minimum detector (Vmin), with comparators and control logic]
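The peak-then-minimum sequence can be sketched as below. The response model is a deliberately crude stand-in (each biquad is assumed to peak or notch V1 at a single optimum tuning voltage); the target voltages are illustrative assumptions.

```python
# Behavioural sketch of the sequential tuning sequence of Figure 11.15
TARGET = [0.45, 0.52, 0.48]   # assumed optimum tuning voltages per biquad
ramp = [k / 1000.0 for k in range(1001)]

def v1_response(stage, vtune):
    # Hypothetical |V1| model: the first biquad gives a peak in V1 at its
    # target voltage; each subsequent biquad gives a notch at its target
    d = abs(vtune - TARGET[stage])
    return (1.0 - d) if stage == 0 else d

vtunes = []
for stage in range(len(TARGET)):
    if stage == 0:
        best = max(ramp, key=lambda v: v1_response(stage, v))  # peak detect
    else:
        best = min(ramp, key=lambda v: v1_response(stage, v))  # minimum detect
    vtunes.append(best)
```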
The tuning scheme described above has a number of benefits:
• The test signal is connected to the input node, and the filter response is measured at the input resonator node of the filter throughout the tuning process. This minimizes the number of signal paths that must be added to the filter, minimizing additional circuit parasitics.
• A single test-signal frequency is required, equal to the filter centre frequency. Often a suitable signal will already be available in the system as a carrier signal.
• The tuning system need only detect amplitude maxima and minima; it is not necessary to measure accurate amplitude ratios or phase, reducing possible sources of error in high-frequency applications.

Figure 11.16 — [elementary PLL synthesizer: VCO output fout, 1/N divider, phase/frequency comparator, loop filter producing Vtune]
11.3
11.3.1
PLL frequency synthesizers [16] are widely used building blocks in integrated applications such as wireless transceivers and clock generation for digital systems, where a precise RF signal must be generated at a multiple of a relatively low reference frequency, such as might be obtained from a crystal oscillator. Figure 11.16
shows an elementary PLL synthesizer block diagram. The output signal from a VCO
is divided by an integer factor N by a programmable modulus digital counter. The
divided VCO output frequency is then compared with the reference source fref by
a phase/frequency detector, the output of which provides an error signal, Vtune , that
is applied to the VCO tuning input via the loop filter. The resulting feedback loop
forces the VCO output frequency to become equal to exactly N times the reference
frequency, and the phase difference between the inputs to the phase comparator is
such that the required value of Vtune is maintained at the loop filter output. A variation
on this theme is the fractional-N synthesizer, where an integer relationship is not
required between the reference and output frequencies.
A critical component in the PLL and fractional-N systems is the VCO. These are
usually either delay-based ring oscillator circuits where the delay in the cells making
up the ring is controlled by a tuning voltage or current in order to vary the output
frequency or feedback oscillators where the frequency-determining element is an LC
resonator. In LC resonator-based VCOs, tuning is usually achieved using varactor
effects in diodes or diode-connected MOS transistors to achieve a voltage-dependent
capacitance. The presence of phase noise in the VCO output signal is undesirable.
Figure 11.17 — [VCO output frequency versus tuning voltage: the required output frequency range must be covered within the VCO frequency tolerance band]
There are usually stringent requirements on the spectral purity of the output signal,
especially the phase noise sidebands around the output frequency or time-domain
jitter in digital clock applications.
The VCO gain KVCO is the gradient of the VCO output frequency versus tuning
voltage function: as illustrated in Figure 11.17, KVCO often varies over the tuning
range of the VCO. The tuning range of a VCO must be large enough to cover the
range of output frequencies required for the synthesizer application, and also to cover
the tolerance on operating frequency resulting from the effect of process variations
on the frequency-determining component values. In many applications, the tolerance
on operating frequency is much larger than the required operating frequency range,
requiring a VCO with a wide tuning range compared to the actual operating frequency
range. Shrinking CMOS geometries results in lower supply voltages, which in turn reduce the feasible tuning voltage range. The combination of a small tuning voltage range and a large output frequency range requires a large value of KVCO.
Unfortunately, a VCO with high gain is inherently more noisy than one with low
gain, because a given noise level at the tuning voltage input will result in greater
phase noise in the output signal. LC resonator-based VCOs generally have superior
noise characteristics to relaxation oscillators, but have more restricted tuning ranges,
since the capacitance variation possible with low tuning voltages is restricted. A further problem with varactor-based tuning is that KVCO varies with the tuning voltage,
due to the non-linear relationship between voltage, capacitance and frequency. The
changing VCO gain makes it difficult to optimize the dynamic response of the PLL
feedback loop over the whole tuning range.
11.3.2
One approach to realizing a large VCO tuning range while at the same time achieving low VCO gain is to utilize a band-switched VCO circuit such as that shown in Figure 11.18. In this circuit, a relatively narrow tuning range is provided by applying Vtune to MOS varactors. Coarse tuning over a wider range is provided by an array of switched capacitors (SCs). A digital band-selection word controls capacitor selection via MOS switches, providing discrete coarse tuning steps, as illustrated in Figure 11.19. In this way, a wide tuning range is provided as a series of overlapping narrow bands.

Figure 11.18 — [band-switched VCO: MOS varactors tuned by Vtune, with a switched-capacitor array controlled by digital band-select inputs]
Figure 11.19 — [overlapping coarse-tuning sub-bands (Band 0, Band 1, Band 2) covering the required output frequency range within the VCO frequency tolerance]
As well as reducing the required VCO gain, VCO tuning linearity can also be
improved, since by providing sufficient overlap between sub-bands, a relatively linear
portion of the voltage tuning transfer function can be used.
In order to implement this band-switching scheme, the frequency control system must be able to select the correct sub-band in order to generate the required output frequency.
11.3.3
Open-loop [1720] and closed-loop [21, 22] PLL calibration algorithms have been
developed. In open-loop algorithms, the feedback loop is opened between the phase
comparator and the VCO, and fixed reference voltages representing the required upper
and lower limits of the tuning voltage range are applied to the VCO tuning voltage
input. The existing digital counters in the PLL are then used to determine the limits
within which the VCO tuning range lies for particular sub-bands. The tuning control
logic can then select the appropriate VCO sub-band for the required output frequency.
In closed-loop algorithms, the VCO tuning voltage is compared with fixed reference
voltages. If Vtune lies outside the desired tuning voltage range, the calibration logic
increments or decrements the digital tuning word until an acceptable tuning voltage
is obtained.
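The closed-loop increment/decrement behaviour can be sketched as follows. The settled-Vtune model is hypothetical, and the step direction assumes positive KVCO with higher band indices giving higher VCO frequencies; all values are illustrative.

```python
# Closed-loop coarse-tuning sketch: while the settled tuning voltage lies
# outside [vmin, vmax], step the digital band word
def closed_loop_calibrate(settled_vtune, band, vmin, vmax, n_bands):
    # settled_vtune(band) is a stand-in for the Vtune value the PLL settles
    # to in a given sub-band (not part of the text)
    while True:
        vtune = settled_vtune(band)
        if vtune > vmax and band < n_bands - 1:
            band += 1   # band too low in frequency: select the next band up
        elif vtune < vmin and band > 0:
            band -= 1   # band too high in frequency: select the next band down
        else:
            return band, vtune

# Illustrative model: each higher band lowers the required Vtune by 0.5 V
band, vtune = closed_loop_calibrate(lambda b: 2.6 - 0.5 * b,
                                    band=0, vmin=0.5, vmax=2.0, n_bands=8)
```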
A closed-loop calibration algorithm is thus capable of continuously updating the
coarse tuning sub-band while the PLL is operating, allowing for continual changes
in operating conditions and PLL output frequency. However, selecting a different
sub-band while the PLL is operating will result in transient changes in VCO output
frequency while phase lock is reacquired. In the integer-N PLL synthesizer, this capture transient has a long duration, since the loop bandwidth is inevitably much smaller
than the reference frequency. Therefore, the PLL output signal may be lost for significant periods when coarse tuning occurs. In open-loop algorithms, the calibration
logic can store the required tuning data, so that the appropriate digital tuning word
can immediately be selected in response to a requirement to change the PLL output
frequency. This minimizes the time required for frequency changes. However, the
calibration process must be repeated when a change in operating conditions results
in changes in VCO output frequency.
A PLL synthesizer including an open-loop calibration scheme is illustrated in
Figure 11.20, and the calibration algorithm by the flow-chart of Figure 11.21. The
required tuning sub-band is identified as the one enabling the whole required output
frequency range to be covered with the available tuning voltage range. The existing
divide-by-N counter, reference source and phase/frequency comparator are used to
compare the VCO output frequency with the required tuning limits. For each sub-band, the minimum tuning voltage is applied to the VCO tuning input, and the divide-by-N counter is programmed with the value of N corresponding to the lowest required frequency in the tuning range. The frequency at the divider output is compared with the reference frequency; if it is lower than the reference frequency, the lower limit of the required output frequency range is inside the tuning range of the VCO.

Figure 11.20 — [PLL synthesizer with open-loop calibration: a voltage reference (Vmax, Vmin) is switched onto Vtune in the calibrate position and the loop filter in the operate position; calibration logic drives the coarse-tune digital inputs and selects N; phase/frequency comparator driven by fref and the 1/N divider; VCO output fout]
Figure 11.21 — [calibration flow chart: start calibration; connect Vtune to the voltage reference; test f < fref? (no: select a lower-frequency sub-band); set Vtune to Vmax; test f > fref? (no: select a higher-frequency sub-band; yes: reconnect Vtune to the loop filter); calibration complete]
Figure 11.22 — [PLL synthesizer with closed-loop calibration: Vtune is continuously compared with Vmax and Vmin, and the calibration logic drives the coarse-tune digital inputs]

If the divider
output frequency is higher than the reference frequency, the lowest required output
frequency is beyond the lower limit of the VCO tuning range, and a lower frequency
sub-band must be selected in order for the VCO to tune the required frequency range.
A similar procedure can be applied to the upper end of the VCO tuning range; the VCO
now has the maximum tuning voltage applied, and the divide-by-N is programmed
with the value of N corresponding to the maximum required output frequency. A
frequency at the divider output greater than the reference frequency indicates that
the maximum required output frequency is inside the VCO tuning range, while a
frequency below the reference frequency indicates that a higher-frequency sub-band
must be selected. This algorithm is run iteratively until a sub-band is selected that
satisfies both minimum and maximum tuning range requirements.
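The iterative sub-band search described above can be sketched as below. The frequency model and names are illustrative assumptions; divider frequencies are normalized so that a measurement of exactly 1.0 means a tuning-range limit coincides with the reference.

```python
# Sketch of the open-loop sub-band search of Figures 11.20 and 11.21
F_REF = 1.0   # normalized: divider output equals f_ref at a limit

def open_loop_calibrate(measured_div_freq, band, n_bands):
    # measured_div_freq(band, extreme) stands in for the divider output
    # frequency with Vtune forced to its minimum or maximum and N programmed
    # for the lowest or highest required output frequency
    for _ in range(n_bands):
        if measured_div_freq(band, 'min') > F_REF:
            band -= 1   # lowest required frequency lies below this sub-band
        elif measured_div_freq(band, 'max') < F_REF:
            band += 1   # highest required frequency lies above this sub-band
        else:
            return band  # both tuning-range limits are covered
    return None

# Toy model: sub-band b covers [b, b + 2] (arbitrary units); the required
# output range is [3.5, 4.5], so only band 3 covers both limits
def model(band, extreme):
    return band / 3.5 if extreme == 'min' else (band + 2) / 4.5

selected = open_loop_calibrate(model, band=0, n_bands=8)
```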
A PLL synthesizer including a closed-loop calibration scheme is shown in
Figure 11.22. The VCO tuning voltage is continuously compared with Vmax and
Vmin , representing the limits of allowable tuning voltage excursion. When Vtune moves
outside the maximum or minimum limit, the coarse tuning word is incremented or
decremented appropriately.
11.3.4
As well as VCO tuning range, other parameters in the PLL synthesizer system may
benefit from on-chip calibration schemes. A number of schemes have been described
to improve accuracy of quadrature signal generation, and to minimize jitter in VCO
output.
11.4
11.4.1
An essential component of any wireless system is the antenna. The antenna is a transducer that converts RF electrical power from a transmitter into an electromagnetic wave propagating in free space and, conversely, intercepts electromagnetic waves from a distant transmitter and converts them into electrical signals that are applied to the receiver input.
In order to maximize the transfer of power between transmitter and antenna, and maximize the signal-to-noise ratio of the received signal, the electrical impedance at the antenna terminals must be matched to that required by the transceiver; the quality of the match is expressed by the reflection coefficient

Γ = (Zant − Z0)/(Zant + Z0)    (11.16)

where Zant is the impedance of the antenna and Z0 is the load impedance required by the power amplifier. Since both Zant and Z0 may be complex, Γ is also a complex number, with magnitude between zero and one. When Zant and Z0 are equal, that is, perfectly matched, Γ is zero. Antenna design is a compromise between many factors; achieving a desirable impedance must be traded off against radiation pattern,
bandwidth, efficiency and other factors. This is especially so when electrically small
antennas are required (that is, the dimensions of the antenna are small compared to the
operating wavelength), as is the case for many integrated wireless transceiver applications operating in the ultra-high frequency (UHF) range. Typically, the impedance
of such antennas varies rapidly with frequency and also due to environmental effects,
so it is not practical to obtain precise impedance matching either through antenna
design or using fixed impedance-matching networks.
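Equation (11.16) is straightforward to evaluate for complex impedances; the values below are illustrative, showing a perfect 50 Ω match against a small antenna with a large reactive component.

```python
# Reflection coefficient of Equation (11.16) for complex impedances
def reflection_coefficient(z_ant, z0):
    return (z_ant - z0) / (z_ant + z0)

# Perfect match gives |Gamma| = 0; a strongly reactive antenna impedance
# (illustrative values) pushes |Gamma| towards 1
gamma_matched = reflection_coefficient(50 + 0j, 50 + 0j)
gamma_small_antenna = reflection_coefficient(10 - 200j, 50 + 0j)
```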
A possible solution to the antenna-impedance-matching problem is to incorporate a tunable impedance-matching network between the transmitter/receiver and the
antenna. This has long been widespread practice in the medium frequency, high
frequency and very high frequency ranges where automatic antenna tuners with
discrete-component LC networks are used in order to achieve the relatively large
inductances and capacitances required at lower frequencies [28]. Recently, integrated on-chip matching networks have been investigated for UHF and microwave
transceiver antennas, since it is feasible to produce the smaller components required
in integrated form. An on-chip antenna tuner consists of three major components
(Figure 11.23):
• An adjustable matching network, capable of producing the required impedance transformation.
• An impedance sensor, which monitors the voltage and current relationships in the matching system.
• A control system, which includes a tuning algorithm that is capable of adjusting the matching network component values to optimize the impedance match, in response to feedback data from the impedance sensor.
Thus, the automatic antenna tuner functions as a feedback system, adjusting the
matching network components to optimize the transformed impedance at the matching
network input. The system can thus respond to changes in antenna impedance that
occur over time; in mobile and hand-held applications, large impedance changes
occur due to relative movements between the antenna and surrounding conducting
or dielectric objects, especially the users body. The issues involved in the design of
the major components of the automatic antenna tuner are described in the following
sections.
Figure 11.23 — [automatic antenna tuner: TX PA, matching network, impedance sensor and control system in a feedback loop]

11.4.2

Figure 11.24 — [example matching network: source impedance R1, matching network with capacitors C1 and C2, load impedance R2]
XC2 = R2 √[(R1/R2) / (Q0² + 1 − R1/R2)]    (11.17)

XL = [Q0 R1 + (R1 R2/XC2)] / (Q0² + 1)    (11.18)
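A short sketch of these design equations follows; note that the left-hand side of (11.17) is not fully recoverable from the source and is taken to be XC2 because that is the quantity appearing in (11.18). The component values are illustrative.

```python
import math

# Matching-network reactances from Equations (11.17) and (11.18)
def match_reactances(R1, R2, Q0):
    XC2 = R2 * math.sqrt((R1 / R2) / (Q0**2 + 1 - R1 / R2))
    XL = (Q0 * R1 + R1 * R2 / XC2) / (Q0**2 + 1)
    return XC2, XL

# Illustrative example: transform a 10-ohm load to a 50-ohm source with Q0 = 3
XC2, XL = match_reactances(R1=50.0, R2=10.0, Q0=3.0)
```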
Figure 11.25 — [binary-weighted switched-element arrays: capacitors 2C, 4C, …, 2^nC and inductors 2L, 4L, …, 2^nL]
Figure 11.26 — [series tunable capacitor with an impedance-inverting network]
Figure 11.27 — [tunable matching network with input from the power amplifier and digital tuning inputs]
Figure 11.28 — [alternative matching network configuration with input from the power amplifier]
Active devices used as switches also introduce additional loss and circuit parasitics
and, since they are non-linear, generate harmonics and inter-modulation products.
They also place constraints on the power-handling capability of the matching network
due to their limited breakdown voltages. Active devices perform best when configured
as shunt switches, since the full supply voltage can be applied as bias between the
gate and source electrodes and, since the source is grounded, VGS is not modulated
by the signal voltage, as would be the case for a series switch. This minimizes the
switch-on resistance and reduces production of inter-modulation products due to
switch non-linearity. Selection of switching transistor dimensions is a compromise
between increased resistive losses in switches with small widths and increased shunt
capacitance in larger widths. MEMS have also been proposed as low-loss matching
network switches [34, 36].
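A binary-weighted switched array in the style of Figure 11.25 gives a total element value proportional to the digital word. The sketch below models a switched-capacitor bank; the unit capacitance, fixed capacitance and word width are illustrative assumptions.

```python
# Coarse tuning with a binary-weighted switched-capacitor array
C_UNIT = 0.1e-12    # capacitance per LSB, farads (assumed)
C_FIXED = 1.0e-12   # fixed capacitance, e.g. varactor at mid-range (assumed)

def total_capacitance(word, bits=4):
    # Bit k of the tuning word switches in a capacitor of 2**k * C_UNIT,
    # so the switched capacitance is simply word * C_UNIT
    switched = sum((1 << k) * C_UNIT for k in range(bits) if word & (1 << k))
    return C_FIXED + switched
```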
11.4.3 Impedance sensors
The impedance sensor provides the automatic tuning control system with feedback to
determine if a satisfactory impedance match has been achieved. Numerous methods
for sensing the impedance match have been used. The simplest scheme is to detect
the amplitude of the transmitted signal at the antenna terminals [33]; at any given
frequency, this amplitude is a function of the transmitter power reaching the antenna,
so maximizing the amplitude also maximizes radiated power. However, the maximum
power condition does not necessarily coincide with the optimum load impedance
conditions for the transmitter power amplifier. Thus, the power amplifier may not
operate at maximum efficiency or minimum distortion levels with this scheme, and at
high power levels may be subjected to electrical over-stress. A phase detector can be
used at the power amplifier output to monitor the phase relationship between the output
voltage and current from the power amplifier. The control system then adjusts the
matching network for minimum phase difference between voltage and current, that is,
making the load impedance at the power amplifier (PA) output resistive. This scheme
does not detect a mismatch in the resistance level. However, for high-Q antenna
structures, such as electrically small loops, the largest proportion of the impedance
mismatch is normally due to the reactive component of the antenna impedance, and
ensuring the antenna system is tuned to resonance in this way results in a substantial
improvement in power transfer [38].
A directional coupler, equipped with a detector at the reverse coupled port, connected between the PA and matching network provides an output signal which is a
function of the reflection coefficient at the input to the matching network. In this case,
11.4.4 Tuning algorithms
A major challenge in devising tuning algorithms for automatic antenna tuning systems is that only incomplete input data is usually available to the tuning algorithm. As
noted above, in systems where antenna tuning is required, the antenna impedance is
usually subject to large and unpredictable variations. The on-chip matching network
itself is also subject to large uncertainties due to processing variations. In most cases,
impedance sensors are not capable of providing complete impedance data, but only
a signal that gives some indication of degree of mismatch. Due to these unknown
variables in the system, it is normal to use iterative tuning algorithms that attempt to
converge on the best combination of network component values. Another important
consideration for tuning algorithms is speed. The tuning process will cause amplitude
and phase modulation of the signal radiated as matching network component values
are changed [33], so transmitted data may be corrupted if tuning occurs during transmission. Therefore, it is desirable to perform tuning during limited idle periods or at
least minimize loss of data by minimizing the duration of the tuning process.
In systems where the number of possible matching network component combinations is relatively small, it may be feasible to perform an exhaustive search of
all possible combinations in order to find the one producing the optimum impedance
match. However, when a large number of possible network combinations exist or
there are restrictions on time available for tuning, tuning algorithms are required that
minimize the number of network combinations that must be tested. One approach
to achieve this is to generate a look-up table of matching network data for different
operating frequencies during an initialization phase; the control system then selects
the appropriate tuning data from the table as the operating frequency is varied. This
scheme achieves rapid tuning, but is not capable of responding to variations in antenna
impedance that occur over time without repeating the initialization process. Functional
tuning algorithms have been developed that utilize impedance sensor outputs as feedback data in order to iteratively converge on the optimum matching network values.
Rapid convergence is facilitated by using an impedance sensor that provides both signal amplitude and phase information for the tuning algorithm. More complex sensors
are required to achieve this. Simple sensors typically provide only amplitude information to the tuning algorithm. The algorithm must then use a partly trial-and-error
procedure, since no feedback information is provided regarding the relative magnitude and phase of the antenna impedance, only the degree of impedance mismatch.
Genetic algorithms [39] have been applied to this type of tuning algorithm; initially,
the system must perform many iterations to achieve a satisfactory impedance match.
With continued operation, the genetic algorithm adapts to the system and changing
antenna parameters, without requiring explicit rules to achieve impedance matching.
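A toy genetic search over digital tuning words, in the spirit of the amplitude-only functional tuning described above, might look as follows. The mismatch function is a stand-in for the sensor reading, and all GA parameters (population size, mutation rate, and so on) are illustrative assumptions.

```python
import random

def tune_genetic(mismatch, bits=8, pop=8, generations=30, seed=1):
    # mismatch(word) is the amplitude-only sensor reading to be minimized
    rng = random.Random(seed)
    population = [rng.getrandbits(bits) for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=mismatch)        # fittest (lowest mismatch) first
        parents = population[: pop // 2]     # elitist selection
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, bits)     # single-point crossover
            child = (a & ~((1 << cut) - 1)) | (b & ((1 << cut) - 1))
            if rng.random() < 0.3:           # occasional single-bit mutation
                child ^= 1 << rng.randrange(bits)
            children.append(child)
        population = parents + children
    return min(population, key=mismatch)

# Stand-in mismatch: pretend the best match occurs at tuning word 173
best = tune_genetic(lambda w: abs(w - 173))
```

Because the best individual always survives selection, the search improves monotonically; with a fixed seed the result is deterministic.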
11.5 Conclusions
The variability of integrated components is often a challenge to analogue and mixed-signal IC design and, as has been described in the preceding sections, in many instances requires the provision of on-chip tuning systems to provide analogue functions with the required precision. On-chip tuning is therefore an essential feature in the implementation of high-performance analogue signal processing for mixed-signal SoCs, and in many cases the design of a satisfactory tuning system represents a substantial proportion of the overall circuit design challenge.
Most continuous-time filters require on-chip tuning in order to achieve the required
response with adequate accuracy; tuning system design is therefore an integral part
of the overall filter design. Integrated continuous-time filters with on-chip tuning are
widely deployed in systems such as wireless transceivers, digital broadcast receivers,
cable modems, hard disk drive read channels and many other applications. Currently
available tuning techniques such as those described in Section 11.2 are capable of
giving satisfactory performance for lowpass and low-Q bandpass designs for frequencies up to hundreds of megahertz. The continual demand for increased bandwidths,
combined with improved RF performance, provides challenges for filter and tuning
system design, particularly for very-high-frequency and high-Q designs, where circuit
parasitics and non-ideal device behaviour become dominant features in filter circuit
performance.
The PLL frequency synthesizer is also a very widely deployed sub-system, with a
huge range of applications in signal and clock generation. Within the PLL, the VCO is
the most critical analogue component, having a major impact on the phase noise and
jitter of the synthesizer output signal spectrum. There is continual demand to increase
operating frequencies and improve performance with regard to spectral purity. The
calibration techniques described in Section 11.3 provide a valuable contribution by
optimizing the VCO performance.
On-chip automatic antenna tuning is not yet widely deployed in integrated wireless
systems. However, with the trend for multi-standard, multi-band, wide bandwidth
operation and continual pressure to reduce the size of antennas while retaining overall
power efficiency, it is likely that antenna tuning systems will soon become useful
or essential. Significant challenges exist in providing low-loss matching network
components and switching, while also achieving adequate power handling. Tuning
algorithm design also remains an area for further study.
11.6 References
1 Nimmo, R.: 'Analogue electronics, the poor relation?', Proceedings of the IEE Symposium on Analogue Signal Processing, Oxford, 1 November 2000, pp. 1/1–1/5
2 Deliyannis, T., Sun, Y., Fidler, J.K.: Continuous-Time Active Filter Design (CRC Press, Boca Raton, FL, 1999)