CHAPTER-1
1.1 INTRODUCTION
In the production of integrated circuits, testing is done to identify defective chips. This is
very important for shipping high-quality products. Testing is also done to diagnose the reason for
a chip failure in order to improve the manufacturing process. In system maintenance, testing is
done to identify parts that need to be replaced in order to repair a system. Testing a digital circuit
involves applying an appropriate set of input patterns to the circuit and checking for the correct
outputs. The conventional approach is to use an external tester to perform the test. However,
built-in self-test (BIST) techniques have been developed in which some of the tester functions
are incorporated on the chip, enabling the chip to test itself. BIST provides a number of
well-known advantages. It eliminates the need for expensive testers. It provides fast location of
failed units in a system because the chips can test themselves concurrently.
It also allows at-speed testing, in which the chip is tested at its normal operating clock
rate, which is very important for detecting timing faults. Despite all of these advantages, BIST
has seen limited use in industry because of its area and performance overhead, increased design
time, and lack of BIST design tools. These are problems that this dissertation addresses. The
research described in this dissertation is timely because the interest in BIST is growing rapidly.
The increasing pin count, operating speed, and complexity of ICs are outstripping the capabilities
of external testers. BIST provides solutions to these problems.
1.2 Pseudo-Random BIST
Figure 1.1 is a block diagram showing the architecture for BIST. The circuit that is being
tested is called the circuit-under-test (CUT). There is a test pattern generator which applies test
patterns to the CUT and an output response analyzer which checks the outputs. The test pattern
generator must generate a set of test patterns that provides a high fault coverage in order to
thoroughly test the CUT. Pseudo-random testing is an attractive approach for BIST. A linear
feedback shift register (LFSR) can be used to apply pseudo-random patterns to the CUT. An
LFSR has a simple structure requiring small area overhead. Moreover, an LFSR can also be used
as an output response analyzer, thereby serving a dual purpose. BIST techniques such as circular
BIST [Stroud 88], [Krasniewski 89], and BILBO registers [Koenemann 79] make use of this
advantage to reduce overhead.
If the fault coverage for pseudo-random BIST is insufficient, then there are two solutions.
One is to modify the circuit-under-test to make it random pattern testable, and the other is to
modify the test pattern generator so that it generates patterns that detect the random-pattern-resistant (r.p.r.) faults.
Innovative techniques for both of these approaches are described in this dissertation. These
techniques enable automated design of pseudo-random BIST implementations that satisfy fault
coverage requirements while minimizing area and performance overhead. These techniques have
been incorporated in the TOPS (Totally Optimized Synthesis-for-test) tool being developed at the
Center for Reliable Computing.
1.3 Weighted Pattern Testing
Weighted pattern testing is performed by weighting the signal probability (probability
that the signal is a '1') for each input to the circuit-under-test. Two issues in weighted pattern
testing are what set of weights to use and how to generate the weighted signals. Many techniques
have been proposed for computing weight sets [Bardell 87]. It has been shown that for most
circuits, multiple weight sets are required to achieve sufficient fault coverage [Wunderlich 88].
For BIST, the weight sets must be stored on-chip, and control logic is needed to switch between
them, which can result in considerable overhead.
In order to reduce the BIST overhead for weighted pattern testing, researchers have
looked for efficient methods for on-chip generation of weighted patterns. Wunderlich proposed a
Generator of Unequiprobable Random Tests (GURT) in [Wunderlich 87] that requires very little
hardware overhead but is limited to only one weight set. Hartmann and Kemnitz proposed a
method in [Hartmann 93] that uses a modified GURT structure and described test pattern
generators for the C2670 and C7552 benchmark circuits [Brglez 85] that require very little
overhead.
However, neither of these methods is general: each uses only a single weight set and
therefore will not provide sufficient fault coverage for many circuits. Methods
that use multiple weight sets with three different weight values (0, 0.5, and 1) were described in
[Pomeranz 93] and [AlShaibi 94]. These methods essentially fix the value of certain inputs
while random patterns are being applied.
The main hardware components of a BIST architecture include:
1. Pseudo-random pattern generator (PRPG)
2. BIST response compaction
3. Output response analyzer (ORA)
1.6 PSEUDO-RANDOM PATTERN GENERATOR:
To test a circuit, a pseudo-random pattern generator most often uses a linear feedback
shift register (LFSR) to generate the input test vectors.
1.6.1 Standard LFSR
The standard LFSR is commonly used as the test pattern generator for BIST. An
LFSR is a shift register whose input bit is a linear function (XOR) of two or more of its bits,
called taps, as shown in Figure 4. It consists of D flip-flops and exclusive-OR (XOR) gates. It is
called an external-XOR LFSR when the feedback network of XOR gates lies outside the register,
feeding from X0 to Xn-1. One of the two main parts of an LFSR is the shift register. A shift
register shifts its contents into adjacent positions within the register or, for the position on the
end, out of the register.
Fig 4. Standard (external-XOR) LFSR. By convention, the output bit of an LFSR that is n bits
long is the nth bit, and the input bit is bit 1.
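The behavior described above can be sketched in software. The following Python model is an illustration only: the 4-bit width and the tap positions are chosen for demonstration (x^4 + x^3 + 1 is one of the primitive polynomials of degree 4, so the register cycles through all 2^4 - 1 = 15 nonzero states).

```python
def lfsr_sequence(seed, taps, width):
    """Simulate an external-XOR (Fibonacci) LFSR.

    seed  -- initial nonzero register contents (int); all-zero locks up
    taps  -- tapped bit positions (1-indexed) XORed to form the feedback bit
    width -- register length n
    Returns the list of states visited before the sequence repeats.
    """
    state = seed
    states = []
    while True:
        states.append(state)
        # Feedback bit: XOR of the tapped register bits.
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        # Shift and insert the feedback bit at the register input.
        state = ((state << 1) | fb) & ((1 << width) - 1)
        if state == seed:
            return states

# A 4-bit LFSR with taps [4, 3] (polynomial x^4 + x^3 + 1) cycles
# through all 2^4 - 1 = 15 nonzero states before repeating.
seq = lfsr_sequence(seed=0b0001, taps=[4, 3], width=4)
print(len(seq))  # 15
```

A non-maximal tap choice would produce a shorter cycle, which is why the characteristic polynomial must be chosen to be primitive.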
CHAPTER-2
PROJECT DESCRIPTION
2.1 AIM
A digital system is tested and diagnosed on numerous occasions during its lifetime, so it is
critical that testing be quick and provide very high fault coverage. One common approach in the
semiconductor industry for ensuring this is to specify test as one of the system functions, so that
the chip becomes self-testing. A system designed without an integrated test strategy covering all
levels, from the entire system down to its components, has been described as chip-wise and
system-foolish. A properly designed Built-In Self-Test (BIST) can offset the cost of the added
test hardware while at the same time ensuring reliability and testability and reducing maintenance
cost [6, 17].
The basic idea of BIST, in its most simple form, is to design a circuit so that the circuit
can test itself and determine whether it is good or bad (fault-free or faulty, respectively).
This typically requires additional circuitry whose functionality must be capable of generating test
patterns as well as providing a mechanism to determine if the output responses of the circuit
under test (CUT) to the test patterns correspond to those of a fault-free circuit. Both the
test-per-clock and the test-per-scan BIST schemes require such circuitry.
For example, to produce a weight of 0.25, two cells of the generator are
connected to an AND gate whose output drives the corresponding input of the circuit under test (CUT).
When weights are assigned arbitrary values, arbitrary numbers of shift
register cells are used to produce the required input values. Register cells are
generally not allowed to be shared between gates that feed different inputs, to
avoid correlation between the inputs of the CUT. Pomeranz and Reddy in [5]
performed a major breakthrough to the weighted pattern generation paradigm,
proposing a method that dynamically walks through the range of possible
test generation approaches, starting from pure pseudorandom tests to detect easy-to-detect faults at low hardware cost, then reducing the number of inputs which are
allowed to be specified randomly, fixing an increasing number of the inputs to 0 or
1 according to a given deterministic test set, to detect faults that have more
stringent requirements on input values and cannot be detected by a purely random
sequence of reasonable length.
Thus, the method proposed in [5] can be viewed as a weighted random test
generation method that uses three weights: 0, 0.5 and 1. A weight of 0 corresponds
to fixing an input to 0; a weight of 1 corresponds to fixing an input to 1; and a
weight of 0.5 indicates pure random values. Consequently, every weight is
generated using a single LFSR cell per primary input, and a small number of logic
gates to account for the weights 0 and 1. Current VLSI circuits, e.g. data path
architectures, or Digital Signal Processing chips [6] commonly contain arithmetic
modules (accumulators or ALUs).
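The three-weight scheme described above can be sketched in software. In the following Python illustration, the particular weight assignment and the use of Python's random module in place of LFSR cells are assumptions made purely for demonstration.

```python
import random

def weighted_pattern(weights, rng):
    """Generate one test pattern under a three-valued weight assignment.

    weights -- one entry per primary input: 0 or 1 fixes the input,
               0.5 requests a fresh pseudorandom bit
    rng     -- random.Random instance standing in for an LFSR cell
    """
    return [w if w in (0, 1) else rng.randrange(2) for w in weights]

rng = random.Random(42)
# Hypothetical assignment: inputs 0 and 3 fixed, inputs 1 and 2 random.
patterns = [weighted_pattern([1, 0.5, 0.5, 0], rng) for _ in range(8)]
for p in patterns:
    assert p[0] == 1 and p[3] == 0  # fixed inputs never change
```

Because each weight is either a hard-wired constant or a raw LFSR bit, no multi-cell weighting logic or shared register cells are needed, which is the source of the low overhead claimed for this class of methods.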
Utilizing accumulators for Built-In Testing (compression of the CUT
responses, or generation of test patterns) has been shown to result in low hardware
overhead and low impact on the circuit normal operating speed. In [7] it was
proved that the test vectors generated by an accumulator whose inputs are driven
by a constant pattern can have quite good pseudorandom characteristics, if the
input pattern is properly selected. To the best of our knowledge, no accumulator-based weighted pattern generator has been proposed in the open literature to date.
In this paper, an Accumulator-based Weighted Pattern Generator (AWPG) is
presented, based on an accumulator whose inputs are driven by a small number of
constant patterns. The accumulator is modified in such a way that the weight of each
output can be 0, 1, or 0.5, depending on the value of the corresponding accumulator
input and a control input.
2.3.2 WEIGHTED PATTERN GENERATION:
All weighted random and related methods suffer from the following
limitation. To produce weights which are different from 0.5, several cells of an
LFSR or a shift register are connected to a gate whose output is used to drive the
corresponding primary input of the circuit under test; e.g., to produce a weight of
0.25, two cells of the LFSR are connected to an AND gate, whose output drives a
primary input of the circuit. When weights are allowed to assume arbitrary values,
arbitrary numbers of shift-register cells have to be used to produce the required
input values. Register cells are generally not allowed to be shared between circuits
that generate weights for different primary inputs, to avoid correlation between the
values different primary inputs assume.
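The weight-of-0.25 example can be checked numerically: ANDing two independent bits, each with signal probability 0.5, yields a 1 with probability 0.5 x 0.5 = 0.25. A small Python experiment (a seeded pseudorandom generator stands in for the two LFSR cells):

```python
import random

rng = random.Random(1)
trials = 100_000
# Two independent cells with signal probability 0.5 feed an AND gate;
# count how often the AND output is 1.
ones = sum(rng.randrange(2) & rng.randrange(2) for _ in range(trials))
p = ones / trials
print(round(p, 2))  # close to 0.25
```

The same construction generalizes: k cells into an AND gate give weight 2^-k, which is why arbitrary weights can demand arbitrarily many register cells.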
In [14], experimental evidence was presented to demonstrate that in some
cases, such correlations may not affect the fault coverage achieved. However, some
of times. As soon as both 0 and 1 are generated for B, the first two elements of T are
produced and applied to the circuit.
assignments, with the built-in capability to regenerate some or all of the tests in the
given deterministic test set only if the required fault coverage cannot be otherwise
achieved. The following provides the details of the method proposed.
Under the proposed method, test generation always starts with a set of pure
random patterns, to detect easy-to-detect faults at low hardware cost. Using the
tests out of the deterministic test set that detect faults which are yet undetected,
weight assignments are computed, that have increasingly large numbers of inputs
fixed to 0 or 1, to allow the harder to detect faults (faults with more stringent
requirement on primary input values) to be detected. The number of 0.5 weights in
a weight assignment, denoted by K, is decreased when a sequence of pseudorandom patterns generated for the weight assignment with K 0.5-weights fails to
detect any additional faults.
At that point, all remaining faults require more input values to be specified
for an additional fault to be detected, and the value of K must be decreased. The
magnitude of the decrement in K and the number of faults detected that bring about
a decrease in K can be varied. The general form of the algorithm for computing the
weight assignments, when detection of no additional faults causes a decrease by 1
in K, is as follows.
Procedure 1: The general form of weight assignment computation
1) Set F to be the set of all target faults (a set of collapsed, detectable faults). Set K
to equal the number of primary inputs.
2) If F is empty, stop.
3) Find a weight assignment t suitable for the faults in F, that has K 0.5-weights
(the selection of t is described later).
4) Generate N random patterns by fixing the inputs that have a weight 0 (1) under t
to logic value 0 (1), and randomly specifying the other inputs (N is a predetermined
constant). For each random pattern generated, perform fault simulation for every
fault f ∈ F. If f is detected, remove f from F.
5) If no fault was detected by the previously applied N
tests, set K = K - 1 and return to step 2.
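Procedure 1 can be sketched in software using a toy fault model in which each "fault" is represented by the test cube (partial input assignment) that detects it; the example fault set, the value of N, and the way a weight assignment is derived from a cube are simplifications made for illustration, not the procedure's actual fault simulation.

```python
import random

def detects(pattern, cube):
    """A pattern detects a toy 'fault' if it matches the fault's test
    cube (None entries in the cube are don't-cares)."""
    return all(c is None or p == c for p, c in zip(pattern, cube))

def procedure1(faults, n_inputs, N=32, seed=0):
    """Toy version of Procedure 1: generate N patterns per round under a
    three-valued weight assignment; when no new fault is detected,
    decrease K (the number of 0.5 weights) by fixing more inputs."""
    rng = random.Random(seed)
    F = list(faults)
    K = n_inputs
    while F and K >= 0:
        # Step 3: weight assignment t with K 0.5-weights, fixing up to
        # n_inputs - K specified bits of a remaining fault's cube.
        cube = F[0]
        t, nfixed = [], 0
        for c in cube:
            if c is not None and nfixed < n_inputs - K:
                t.append(c)        # weight 0 or 1
                nfixed += 1
            else:
                t.append(0.5)      # pure random
        # Step 4: generate N patterns, dropping detected faults.
        progress = False
        for _ in range(N):
            pat = [b if b in (0, 1) else rng.randrange(2) for b in t]
            hit = [f for f in F if detects(pat, f)]
            if hit:
                progress = True
                F = [f for f in F if f not in hit]
        # Step 5: no detection in the last N tests -> decrease K.
        if not progress:
            K -= 1
    return F  # faults left undetected (empty on success)

# Three toy faults over 8 inputs; the last is hard to detect because it
# requires every input to be specified.
faults = [
    (None, None, None, None, None, None, None, 1),
    (0, None, None, None, 1, None, None, None),
    (1, 1, 1, 1, 1, 1, 1, 1),
]
remaining = procedure1(faults, n_inputs=8)
print(len(remaining))  # 0 -- all faults detected
```

Note how completeness arises: in the worst case K reaches 0 and the fully specified cube is reproduced exactly, mirroring the guarantee discussed below that the method's fault coverage equals the deterministic fault coverage.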
Initially, K is set equal to the number of primary inputs, and sets of N
patterns are generated, until either all faults are detected and F is empty, or no
additional fault is detected by all N patterns in the last set. In the former case, test
generation is complete. In the latter case, K is reduced by 1, a new weight
assignment is computed, and test generation proceeds.
In the worst case, K is reduced to zero, and the deterministic test set (or
parts of it) is reproduced (this point will be clarified when the selection of weight
assignments is explained). The method is thus complete in the sense that fault
coverage can be guaranteed to equal deterministic fault coverage. Note that weight
assignment computation is performed by Procedure 1 dynamically; i.e., fault
simulation is performed for every random pattern generated, and detected faults are
dropped. As a result, the weight assignments computed are matched to a specific
random pattern generation method.
When the random pattern generation method is changed, a fault previously
detected may be left undetected, and complete fault coverage may not be achieved.
To solve this problem, in all the experiments we performed, an LFSR was
simulated, and pseudo-random patterns were generated by the LFSR. The same
LFSR can then be implemented to generate the random patterns in the actual circuit.
CHAPTER-3
WORKING OF PROJECT
3.1 INTRODUCTION
Pseudorandom built-in self test (BIST) generators have been widely
utilized to test integrated circuits and systems. The arsenal of pseudorandom
generators includes, among others, linear feedback shift registers (LFSRs) [1],
cellular automata [2], and accumulators driven by a constant value [3]. For circuits
with hard-to-detect faults, a large number of random patterns have to be generated
before high fault coverage is achieved.
In many cases a clock gating technique is used, in which two non-overlapping
clocks control the odd and even scan cells of the scan chain so that the shift power
dissipation is reduced by a factor of two. The ring generator [10] can generate a
single-input change (SIC) sequence, which can effectively reduce test power. A
third approach aims to reduce the dynamic power dissipation during scan shift
by gating the outputs of a portion of the scan cells. Several low-power
approaches have also been proposed for scan-based BIST. One modifies the
scan-path structure and lets the CUT inputs remain unchanged during a shift operation.
Using multiple scan chains with many scan enable (SE) inputs to activate one scan
chain at a time, the TPG proposed in [18] can reduce the average power consumption
during scan-based tests and the peak power in the CUT.
2) Uniqueness of patterns: The proposed sequence does not contain any repeated
patterns, and the number of distinct patterns in a sequence can meet the
requirement of the target fault coverage for the CUT.
3) Uniform distribution of patterns: The conventional algorithms of modifying the
test vectors generated by the LFSR use extra hardware to get more correlated test
vectors with a low number of transitions. However, they may reduce the
randomness in the patterns, which may result in lower fault coverage and higher
test time [3]. It is proved in this paper that our multiple SIC (MSIC) sequence is
nearly uniformly distributed.
4) Low hardware overhead consumed by extra TPGs: The linear relations are
selected with consecutive vectors or within a pattern, which has the benefit of
generating a sequence with a sequential decompressor. Hence, the proposed TPG
can be easily implemented in hardware.
1) If SE = 0, the count from the adder is stored in the k-bit subtractor. During SE =
1, the contents of the k-bit subtractor are gradually decremented from the stored
count down to all zeros.
2) If SE = 1 and the contents of the k-bit subtractor are not all zeros, M-Johnson
will be kept at logic 1 (0).
3) Otherwise, it will be kept at logic 0 (1). Thus, the needed 1s (0s) will be shifted
into the M-bit shift register by clocking CLK2 l times, and unique Johnson
codewords will be applied into different scan chains.
The inputs of the XOR gates come from the seed generator and the SIC
counter, and their outputs are applied to the M scan chains, respectively. The outputs of
the seed generator and the XOR gates are applied to the CUT's PIs, respectively. The
test procedure is as follows.
1) The seed circuit generates a new seed by clocking CLK1 one time.
2) RJ_Mode is set to 0. The reconfigurable Johnson counter will operate in the
Johnson counter mode and generate a Johnson vector by clocking CLK2 one time.
3) After a new Johnson vector is generated, RJ_Mode and Init are set to 1. The
reconfigurable Johnson counter operates as a circular shift register, and generates l
codewords by clocking CLK2 l times. Then, a capture operation is inserted.
4) Repeat steps 2)-3) until 2l Johnson vectors are generated.
5) Repeat steps 1)-4) until the expected fault coverage or test length is achieved.
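The Johnson-counter mode used above can be modelled in software. The following Python sketch (an illustration, not the hardware implementation) shows the defining property exploited here: an n-bit Johnson (twisted-ring) counter cycles through 2n distinct codewords.

```python
def johnson_states(n):
    """Johnson (twisted-ring) counter: shift the register, feeding the
    complement of the last bit back to the front; an n-bit counter
    visits 2n distinct states before repeating."""
    state = [0] * n
    states = []
    for _ in range(2 * n):
        states.append(tuple(state))
        # Shift right by one, inserting the inverted last bit in front.
        state = [1 - state[-1]] + state[:-1]
    return states

states = johnson_states(4)
print(len(states), len(set(states)))  # 8 8
```

Adjacent Johnson codewords differ in exactly one bit, which is what makes such sequences attractive for low-power (single-input-change) test generation.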
CHAPTER-4
HARDWARE AND SOFTWARE USED
3.11 SIMPLE ASIC DESIGN FLOW
RTL is expressed in Verilog or VHDL. This document covers the basics of Verilog.
Verilog is a Hardware Description Language (HDL). A hardware description language is a
language used to describe a digital system, for example latches, flip-flops, and combinational
and sequential elements. Basically, Verilog can be used to describe any kind of digital system.
One can design a digital system in Verilog at any level of abstraction. The most important
levels are:
Behavior Level
This level describes a system by concurrent algorithms (behavioral). Each algorithm
is itself sequential, meaning it consists of a set of instructions that are executed one after the
other. There is no regard to the structural realization of the design.
Register Transfer Level (RTL)
Designs at the Register-Transfer Level specify the characteristics of a circuit by the
transfer of data between registers, and also the functionality; for example, finite state
machines. An explicit clock is used. An RTL design includes exact timing: data
transfers are scheduled to occur at specific times.
Gate level
The system is described in terms of gates (AND, OR, NOT, NAND, etc.). The signals
can take only four logic states (0, 1, X, Z). Gate-level design is normally not
done by hand, because the output of logic synthesis is the gate-level netlist.
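The four logic states can be illustrated with a small model of 4-state AND resolution (a sketch of the usual semantics, not a Verilog simulator): a 0 on either input forces the output to 0, and a Z (high impedance) on a gate input behaves as an unknown X.

```python
def and4(a, b):
    """Four-state AND over {'0', '1', 'X', 'Z'}: a '0' input dominates;
    'Z' on a gate input behaves as 'X'; remaining uncertainty -> 'X'."""
    if a == '0' or b == '0':
        return '0'
    if a == '1' and b == '1':
        return '1'
    return 'X'  # any combination involving X or Z (and no 0)

print(and4('0', 'X'))  # 0 -- a 0 input forces the output low
print(and4('1', 'Z'))  # X -- unknown
```

This is why X-propagation matters for test: an unknown on a controlling path can mask the response a signature analyzer is trying to compact.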
Optimization
The circuit at the gate level, in terms of gates and flip-flops, can be redundant in
nature. It can be minimized with the help of minimization tools. This step is not shown
separately in the figure. The minimized logical design is converted to a circuit in terms of the
switch level cells from standard libraries provided by the foundries. The cell based design
generated by the tool is the last step in the logical design process; it forms the input to the first
level of physical design.
types of field-programmable devices. The emphasis is on devices with relatively high logic
capacity; all of the most important commercial products are discussed. Before proceeding, we
provide definitions of the terminology in this field. This is necessary because the technical jargon
has become somewhat inconsistent over the past few years as companies have attempted to
compare and contrast their products in literature.
Definitions of Relevant Terminology
The most important terminology used in this paper is defined below.
Field-Programmable Device (FPD): A general term that refers to any type of integrated circuit
used for implementing digital hardware, where the chip can be configured by the end user to
realize different designs. Programming of such a device often involves placing the chip into a
special programming unit, but some chips can also be configured in-system. Another name for
FPDs is programmable logic devices (PLDs); although PLDs encompass the same types of chips
as FPDs, we prefer the term FPD because historically the word PLD has referred to relatively
simple types of devices.
PLA : Programmable Logic Array (PLA) is a relatively small FPD that contains two levels of
logic, an AND-plane and an OR-plane, where both levels are programmable (note: although PLA
structures are sometimes embedded into full-custom chips, we refer here only to those PLAs that
are provided as separate integrated circuits and are user-programmable).
PAL: Programmable Array Logic (PAL) is a relatively small FPD that has a programmable
AND-plane followed by a fixed OR-plane.
SPLD: It refers to any type of Simple PLD, usually either a PLA or PAL.
CPLD: A more Complex PLD that consists of an arrangement of multiple SPLD-like blocks on
a single chip. Alternative names (that will not be used in this paper) sometimes adopted for this
style of chip are Enhanced PLD (EPLD), Super PAL, Mega PAL, and others.
FPGA: A Field-Programmable Gate Array is an FPD featuring a general structure that allows
very high logic capacity. Whereas CPLDs feature logic resources with a wide number of inputs
(AND planes), FPGAs offer narrower logic resources. FPGAs also offer a higher ratio of
flip-flops to logic resources than do CPLDs.
of instantiating the algorithm on an FPGA. Then, and only then, one should consider whether the
kernel in question is suited to implementation on an FPGA.
In general terms FPGAs are best at tasks that use short word length integer or fixed point data,
and exhibit a high degree of parallelism, but they are not so good at high precision floating-point
arithmetic (although they can still outperform conventional processors in many cases). The
implications of shipping data to the FPGA from the CPU and vice versa must also come under
consideration, for if that outweighs any improvement in the kernel then implementing the
algorithm in an FPGA may be an exercise in futility.
FPGAs are best suited to integer arithmetic. Unfortunately, the vast majority of scientific codes
rely heavily on 64 bit IEEE floating point arithmetic (often referred to as double precision
floating point arithmetic). It is not unreasonable to suggest that, in order to get the most out of
FPGAs, computational scientists must perform a thorough numerical analysis of their code, and
ideally reimplement it using fixed-point arithmetic or lower-precision floating-point arithmetic.
Scientists who have been used to continual performance increases provided by each new
generation of processor are not easily convinced that the large amount of effort required for such
an exercise will be sufficiently rewarded. That said, the recent development of efficient floating
point cores has gone some way towards encouraging scientists to use FPGAs. If the performance
of such cores can be demonstrated by accelerating a number of real-world applications then the
wider acceptance of FPGAs will move a step closer. At present there is very little performance
data available for 64-bit floating-point intensive algorithms on FPGAs. To give an indication of
expected performance we have therefore used data taken from the Xilinx floating-point cores
(v3) datasheet.
It is important to measure the area, performance, and power consumption gap between
field-programmable gate arrays (FPGAs) and standard-cell application-specific integrated
circuits (ASICs) for the following reasons:
1. In the early stages of system design, when system architects choose their implementation
medium, they often choose between FPGAs and ASICs. Such decisions are based on the
differences in cost (which is related to area), performance and power consumption between these
implementation media but to date there have been few attempts to quantify these differences. A
system architect can use these measurements to assess whether implementation in an FPGA is
feasible.
These measurements can also be useful for those building ASICs that contain programmable
logic, by quantifying the impact of leaving part of a design to be implemented in the
programmable fabric.
2. FPGA makers seeking to improve FPGAs can gain insight from quantitative measurements of
these metrics, particularly when it comes to understanding the benefit of the less programmable
(but more efficient) hard heterogeneous blocks, such as the block memories,
multipliers/accumulators, and multiplexers that modern FPGAs often employ.
4.1 FPGA
A field-programmable gate array (FPGA) is an integrated circuit designed
to be configured by the customer or designer after manufacturing, hence "field-programmable".
The FPGA configuration is generally specified using a hardware
description language (HDL), similar to that used for an application-specific
integrated circuit (ASIC) (circuit diagrams were previously used to specify the
configuration, as they were for ASICs, but this is increasingly rare). FPGAs can be
used to implement any logical function that an ASIC could perform.
4.1.1 Introduction
The area of field programmable gate array (FPGA) design is evolving at
a rapid pace. The increase in the complexity of the FPGA's architecture means that
it can now be used in far more applications than before. The newer FPGAs are
steering away from the plain vanilla type "logic only" architecture to one with
embedded dedicated blocks for specialized applications.
The players in the current programmable logic market are Altera, Atmel,
Actel, Cypress, Lattice, QuickLogic and Xilinx. Some of the larger and
more popular device families are Stratix from Altera, Axcelerator from Actel,
ispXPGA from Lattice, and Virtex from Xilinx.
Features of the popular device families:

Clock management:
Xilinx Virtex-II Pro: DCM (up to 8)
Altera Stratix: PLL (up to 12)
Actel Axcelerator: PLL (up to 12)
Lattice ispXPGA: sysCLOCK PLL (up to 8)

Embedded memory blocks:
Xilinx Virtex-II Pro: Block RAM
Altera Stratix: TriMatrix memory (up to 10 Mbit)
Actel Axcelerator: embedded RAM blocks
Lattice ispXPGA: sysMEM blocks (up to 414K)

Data processing:
Xilinx Virtex-II Pro: CLBs and 18-bit x 18-bit embedded multipliers
Altera Stratix: LEs and DSP blocks
Actel Axcelerator: C-cells and R-cells
Lattice ispXPGA: multipliers

I/O support:
Xilinx Virtex-II Pro: programmable SelectIO
Altera Stratix: advanced I/O support
Lattice ispXPGA: sysIO, with sysHSI for high-speed serial interfaces

Embedded features:
Xilinx Virtex-II Pro: PowerPC 405 cores

Special application support:
Actel Axcelerator: per-pin FIFOs for bus applications
The DE0 board has many features that allow the user to implement a wide
range of designed circuits, from simple circuits to various multimedia projects. The
following hardware is provided on the DE0 board:
Altera Cyclone III 3C16 FPGA device
Altera Serial Configuration device EPCS4
USB Blaster (on board) for programming and user API control; both JTAG
15,408 LEs
56 M9K Embedded Memory Blocks
504K total RAM bits
56 embedded multipliers
4 PLLs
346 user I/O pins
Flash memory
4-Mbyte NOR Flash memory
Support Byte (8-bits)/Word (16-bits) mode
SD card socket
Provides both SPI and SD 1-bit mode SD Card access
Pushbutton switches
3 pushbutton switches
Normally high; generates one active-low pulse when the switch is pressed
Slide switches
10 Slide switches
A switch causes logic 0 when in the DOWN position and logic 1 when in
the UP position
General User Interfaces
10 Green color LEDs (Active high)
4 seven-segment displays (Active low)
16x2 LCD interface (LCD module not included)
Clock inputs
50-MHz oscillator
VGA output
Uses a 4-bit resistor-network DAC
With 15-pin high-density D-sub connector
Supports up to 1280x1024 at 60-Hz refresh rate
Serial ports
One RS-232 port (Without DB-9 serial connector)
One PS/2 port (Can be used through a PS/2 Y Cable to allow you to connect
a keyboard and mouse to one port)
Two 40-pin expansion headers
72 Cyclone III I/O pins, as well as 8 power and ground lines, are brought out
to two 40-pin expansion connectors. Each 40-pin header is designed to accept a
standard 40-pin ribbon cable used for IDE hard drives.
4.3.2 ModelSim
ModelSim-Altera Edition
No line limitations
4.3.3 Quartus II
Quartus II is a software tool produced by Altera for analysis and synthesis of
HDL designs, which enables the developer to compile their designs, perform
timing analysis, examine RTL diagrams, simulate a design's reaction to different
stimuli, and configure the target device with the programmer.
The Quartus II Subscription Edition is also available for free download, but a license
must be purchased to use the software's full functionality. The free Web
Edition license can be used with this software, although it restricts the devices that
can be targeted.
CHAPTER-6
RESULT AND DISCUSSION:
MODEL SIM OUTPUT:
PERFORMANCE REPORT:
PERFORMANCE REPORT:
SYNTHESIS REPORT:
CONCLUSION
REFERENCES
[1] Y. Zorian, "A distributed BIST control scheme for complex VLSI devices," in 11th Annu. IEEE VLSI Test Symp. Dig. Papers, Apr. 1993, pp. 4-9.
[2] P. Girard, "Survey of low-power testing of VLSI circuits," IEEE Design Test Comput., vol. 19, no. 3, pp. 80-90, May-Jun. 2002.
[3] A. Abu-Issa and S. Quigley, "Bit-swapping LFSR and scan-chain ordering: A novel technique for peak- and average-power reduction in scan-based BIST," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 28, no. 5, pp. 755-759, May 2009.
[4] P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, J. Figueras, S. Manich, P. Teixeira, and M. Santos, "Low-energy BIST design: Impact of the LFSR TPG parameters on the weighted switching activity," in Proc. IEEE Int. Symp. Circuits Syst., vol. 1, Jul. 1999, pp. 110-113.
[5] S. Wang and S. Gupta, "DS-LFSR: A BIST TPG for low switching activity," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 21, no. 7, pp. 842-851, Jul. 2002.
[6] F. Corno, M. Rebaudengo, M. Reorda, G. Squillero, and M. Violante, "Low power BIST via non-linear hybrid cellular automata," in Proc. 18th IEEE VLSI Test Symp., Apr.-May 2000, pp. 29-34.
[7] P. Girard, L. Guiller, C. Landrault, S. Pravossoudovitch, and H. Wunderlich, "A modified clock scheme for a low power BIST test pattern generator," in Proc. 19th IEEE VLSI Test Symp., Mar.-Apr. 2001, pp. 306-311.