INTERMEDIATE-FREQUENCY AMPLIFIERS
H. R. WALKER
Pegasus Data Systems, Inc.
Edison, New Jersey
The intermediate-frequency (IF) amplifier is the circuitry
used to process the information-bearing signal between
the first converter, or mixer, and the decision-making cir-
cuit, or detector. It can consist of a very few or a great
many component parts. Generally, it consists of an ampli-
fying stage or device to provide gain, plus a bandpass fil-
ter, or filters, to limit the frequency band to be passed. The
signal to be processed can be audio, video, digital, or
pulsed, using amplitude modulation, frequency modula-
tion, phase modulation, or combinations thereof. Several
examples are shown in Figs. 13–19.
Bandpass IF amplifiers are also used in radio trans-
mitters to limit the occupied bandwidth of the transmitted
signal. Certain modulation methods create a very broad
frequency spectrum, which can interfere with adjacent
channels. Regulatory agencies, such as the FCC (Federal
Communications Commission), require that these out-of-
band signals be reduced below a certain permissible level,
so they must undergo processing through a bandwidth-
limiting filter and amplifier at the transmitter.
For each application there are certain design restric-
tions or rules that must be followed to achieve optimum
results.
1. GENERAL IF AMPLIFIER FUNCTIONS AND
RESTRICTIONS
The five basic IF amplifier functions and requirements are
as follows:
1. Image Rejection. The mixer stages in a superhet-
erodyne receiver can convert any frequency below or
above the local oscillator frequency to an intermediate
frequency. Only one of these frequencies is desired. The
intermediate frequency must be chosen so that undesir-
able frequencies, or images, are removed by the RF ampli-
fier filter (prefiltering) and are rejected by the mixer. This
may mean that two or three different intermediate fre-
quencies must be used within the same receiver. The in-
termediate frequencies in common use range from 0 Hz to
approximately 2.0 GHz.
2. Selectivity. Selectivity is required to reject as much
as possible of any adjacent-channel interfering signal.
Generally this means attempting to obtain a bandpass
filter characteristic as close to that of the ideal filter as
possible, one that will pass the necessary Nyquist bandwidth
(the baseband bandwidth from 0 Hz to the highest
frequency to be passed) without introducing harmful
amplitude or phase distortion.
3. Gain. Gain is required to amplify a weak signal to a
useful level for the decision-making circuit. This gain must
be provided by means of a stable amplifier that introduces
a minimum of noise, so as not to degrade the receiver noise
figure. All circuit input and output impedances should be
properly matched for optimum power transfer and circuit
stability.
4. Automatic Gain Control. The amplifier gain must
vary automatically with signal strength so that the deci-
sion-making circuit receives a signal of as nearly constant
level as possible. The stages of the IF amplifier must not be
overdriven, or go into limiting, until after the last band-
pass filter, to prevent splattering, or broadening and dis-
tortion of the signal.
5. Linearity. The amplifier should be linear in phase or
amplitude to prevent distortion of the recovered informa-
tion. AM receivers should be linear in amplitude, while
FM or PM receivers should be linear in phase. Some mod-
ulation methods can tolerate more nonlinear distortion
than others.
2. SELECTING THE INTERMEDIATE FREQUENCY
Image rejection and signal selectivity are the primary rea-
sons for selecting an intermediate frequency. Most cur-
rently manufactured bandpass lters of the crystal, or
resonator type, have become standardized so that the de-
signer can obtain off-the-shelf components at reasonable
cost for these standard frequencies. The standard AM
broadcast receiver utilizes a 455-MHz IF lter because
extensive experience has shown that this will reject all but
the strongest images. Assume the desired signal is at
600 kHz. A local oscillator operating at 1055kHz will
have an image frequency at 1510 kHz, which the RF in-
put lter can easily reject. Similarly, an FM receiver op-
erating at 90.1MHz with an intermediate frequency of
10.7 MHz will have an image at 111.5 MHz, which will be
rejected by the RF amplier. In both of these cases, a sin-
gle intermediate frequency can be used.
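As a quick check of the arithmetic above, the following minimal Python sketch (not part of the original article) computes the LO and image frequencies for a given desired frequency and IF, assuming the usual high-side or low-side LO placement:

```python
def image_frequency(f_rf_hz, f_if_hz, high_side_lo=True):
    """Return (LO frequency, image frequency) for a superheterodyne mixer.

    With a high-side LO, f_lo = f_rf + f_if and the image lies 2*f_if above
    the desired signal; with a low-side LO it lies 2*f_if below.
    """
    f_lo = f_rf_hz + f_if_hz if high_side_lo else f_rf_hz - f_if_hz
    f_image = f_lo + f_if_hz if high_side_lo else f_lo - f_if_hz
    return f_lo, f_image

# AM broadcast example: 600 kHz desired, 455 kHz IF, high-side LO
print(image_frequency(600e3, 455e3))     # (1055000.0, 1510000.0)
# FM broadcast example: 90.1 MHz desired, 10.7 MHz IF
print(image_frequency(90.1e6, 10.7e6))   # (100800000.0, 111500000.0)
```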
A receiver operating at 450 MHz will require two in-
termediate frequencies, obtained by using first and second
mixers as in Fig. 15. The first IF amplifier may consist of a
relatively broadband filter operating at 10.7 or 21.4 MHz,
followed by a second converter and IF stage operating at
455 kHz. The first IF filter is narrow enough to reject any
455-kHz images, and the second IF filter is a narrowband
filter that passes only the desired signal bandwidth. If
the 455-kHz filter had been used as the first IF filter, the
450-MHz RF filter, which is relatively broad, would not
have eliminated the image frequency, which is 455 kHz
above or below the local oscillator (LO) frequency.
Television receivers use a video intermediate frequency
of approximately 45 MHz, since this permits a relatively
broad RF filter to pass the broadband TV signal, while still
rejecting the images. The video signal from the IF ampli-
fier is AM, with an FM sound carrier riding on it. Televi-
sion sound is generally obtained from a beat, or difference,
frequency between the video and sound carriers, which is
at 4.5 MHz.
Satellite receivers use a broadband first intermediate
frequency covering a frequency block from 900 MHz to
2.1 GHz. This is done by means of a low-noise block (LNB)
converter. The second mixer is made tunable so that any
frequency in the block can be converted to the second in-
termediate frequency, which is usually fixed at 70 or
140 MHz. The second intermediate frequency, which
drives the detector, has a narrower bandwidth to reduce
noise and reject adjacent-channel interference.
Crystal, ceramic resonator, and SAW filters are mass-
produced at relatively low cost for the frequencies men-
tioned above, so most consumer products employ one
or more of the abovementioned standard frequencies and
standard mass-produced filters.
3. SELECTIVITY
Carson's rule, and the Nyquist sampling theorem on
which it is based, state that a certain bandwidth is re-
quired to transmit a signal undistorted. The necessary
bandwidth for an AM signal is given as follows:

BW = 2f_m    (1)

Thus an AM broadcast receiver will require 10 kHz of
bandwidth to pass a 5-kHz (f_m) audio tone (f_m = fre-
quency of modulation). In data transmission systems, the
frequency f_m corresponding to the data rate f_b is
f_m = (1/2)f_b.
The data clock frequency is twice the frequency of the data
in ones and zeros. This means that a baud rate f_b of 9600
bits per second (bps) will require a bandwidth of 9.6 kHz.
For FM, the necessary bandwidth required for trans-
mission is

BW = 2(f_m + Δf)    (2)

A 15-kHz audio tone (f_m) and an FM transmitter being
deviated with a modulation index of 5 will require
2 × [15 + (15 × 5)] = 180 kHz of bandwidth; Δf is (5 × 15) kHz and f_m is
15 kHz. Narrowband FM, or phase modulation (PM) (with
a modulation index of <0.7), is somewhat different in that
the bandwidth actually required is the same as that for
AM. This is due to the fact that the higher J_n Bessel products
are missing [Eq. (1) applies].
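The bandwidth rules of Eqs. (1) and (2) are simple enough to capture in a short Python sketch (not part of the original article; the function names are illustrative):

```python
def am_bandwidth(f_m_hz):
    """Double-sideband AM bandwidth, Eq. (1): BW = 2*f_m."""
    return 2.0 * f_m_hz

def fm_bandwidth(f_m_hz, deviation_hz):
    """Carson's rule, Eq. (2): BW = 2*(f_m + delta_f)."""
    return 2.0 * (f_m_hz + deviation_hz)

print(am_bandwidth(5e3))             # 10 kHz for a 5-kHz audio tone
print(fm_bandwidth(15e3, 5 * 15e3))  # 180 kHz for 15 kHz audio, modulation index 5
```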
These values are for double-sideband transmission.
Single-sideband transmission will require half as much
bandwidth. The required baseband bandwidth is the same
as the value of f_m. This is also known as the Nyquist
bandwidth, or the minimum bandwidth that can carry the
signal undistorted at baseband.
Ideally, the IF filter, or the equivalent baseband filter,
need pass only this bandwidth and no more. This requires
the use of an ideal bandpass or lowpass filter, which does
not exist in practice but can be approached by various
means. The filter must be as narrow as conditions permit
to reduce the noise bandwidth and any adjacent-channel
interference, since noise power rises linearly with increas-
ing filter bandwidth [14]:

S_o/N_o = β² (bit rate/filter BW) (S_i/N_i)    (3a)

S_o/N_o = [modulation gain (loss)] × [processing gain] × (S_i/N_i)    (3b)
These two equations show a generalized relationship be-
tween the signal-to-noise ratio (SNR) at the receiver input
and the SNR at the receiver output. The term β² repre-
sents a gain, or loss, in power due to the modulation meth-
od. In FM or PM it is the modulation angle. The term [(bit
rate)/(filter bandwidth)] is generally known as processing
gain. Narrowing the bandwidth improves the S_o/N_o ratio,
but this improvement is not always available, depending
on the modulation method. The Nyquist bandwidth rules
state that it should be (symbol rate)/BW = 1.
Pulse modulation, as in radar (radio detection and
ranging), generally requires a much broader filter band-
width than the other modulation methods. A condition
called envelope delay, or group delay, must also be ob-
served. This is discussed later along with the transfer
functions of the filters. For optimum results, the filter
bandwidth (Δf) must be equal to [1/(pulsewidth)]. If the
filter bandwidth is too narrow, the amplitude detected is
reduced and the SNR is adversely affected. In this case,
the processing gain is ideally 1 [14]:

S_o/N_o = (processing gain)(S_i/N_i) = E_b/N_o    (4)
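A minimal Python sketch (not from the original article) evaluates the generalized SNR relation of Eq. (3a) in decibel form, with the modulation term and processing gain treated as assumptions supplied by the caller:

```python
import math

def output_snr_db(input_snr_db, beta, bit_rate_hz, filter_bw_hz):
    """Eq. (3a) in decibels: So/No = beta^2 * (bit rate / filter BW) * (Si/Ni)."""
    modulation_term_db = 20 * math.log10(beta)                  # beta^2 as a power factor
    processing_gain_db = 10 * math.log10(bit_rate_hz / filter_bw_hz)
    return input_snr_db + modulation_term_db + processing_gain_db

# 10 dB input SNR, beta = 1, and a 9600 bit/s signal in a 9.6-kHz filter:
# the Nyquist condition (symbol rate)/BW = 1 gives 0 dB of processing gain.
print(output_snr_db(10.0, 1.0, 9600.0, 9600.0))   # 10.0
```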
4. GAIN
The IF amplifier must provide sufficient gain to raise a
weak signal at the RF input to the level required, or de-
sired, by the decision-making circuit or detector. This re-
ceiver gain can vary from 0 up to 130 dB, most of which is
usually provided by the IF amplifier. The RF amplifier and
mixer circuits preceding the IF amplifier usually provide
≥20 dB of gain, so the IF amplifier generally contrib-
utes little to the receiver noise figure. (See NOISE FIGURE
article elsewhere in this encyclopedia.) Amplifiers with
very high gain have a tendency to oscillate; hence two
different intermediate frequencies may be used to reduce
the gain on any one frequency, or more of the gain may be
obtained from the RF section.
Gain is provided by an amplifying device, such as a
transistor or vacuum tube (in older equipment). These
devices have input and output impedances of a complex
nature that must be matched to the filtering circuits for
best power transfer, stability, and lowest noise. Current
practice is often to use a gain stage, which consists of
multiple amplifying devices in an integrated circuit
package. These packages often contain the mixer stages
and detectors as well.
5. AUTOMATIC GAIN CONTROL
Receivers must respond to a wide range of input levels
while maintaining a nearly constant level at the detector
or decision-making circuit. The user or operator does not
wish to manually adjust the gain to obtain a constant
sound or picture level when changing stations. This func-
tion is performed by detecting the output level of the IF
amplifier and correcting it by means of a feedback circuit
that adjusts the gain to keep the level as constant as pos-
sible. Since this detected level can vary rapidly, it is
passed through a lowpass filter [usually an RC (resis-
tance–capacitance) pair] to integrate, or slow down, the
changes, then amplified by a DC (direct-current) amplifier
and applied to an IF amplifier circuit or gain stage that
has variable gain characteristics. Some receivers, such as
those used in an automobile, require relatively rapid-act-
ing AGC circuits, while fixed receivers can use a much
slower AGC time constant. Dual-gate field-effect transis-
tors use the second gate to control the gain. Bipolar or
single-gate field-effect transistors vary the gain by means
of a bias voltage or current applied to the input terminal
along with the signal. Special integrated-circuit gain stag-
es for IF amplification are available, such as the Motorola
MC1350, which both amplify and provide a variable gain
control function.
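The feedback action described here can be illustrated with a very small discrete-time sketch (not from the original article; the target level and time constant are arbitrary assumptions, and the division-by-level gain law is only one of many possibilities):

```python
import numpy as np

def agc(samples, target_level=1.0, attack=0.01):
    """Minimal AGC loop: detect the signal magnitude, smooth it with a
    first-order (RC-like) lowpass, and scale the signal so that the output
    level stays near target_level."""
    level = target_level
    out = np.empty_like(samples)
    for i, x in enumerate(samples):
        out[i] = x * (target_level / level)      # apply the current gain
        level += attack * (abs(x) - level)       # slow "RC pair" level detector
    return out

# A tone whose amplitude steps from 0.1 to 2.0 is held at a comparable
# output level once the loop has settled in each segment.
sig = np.concatenate([0.1 * np.ones(500), 2.0 * np.ones(500)]) * np.sin(0.2 * np.arange(1000))
out = agc(sig)
print(np.abs(out[450:500]).max(), np.abs(out[950:]).max())
```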
6. FILTERS FOR IF AMPLIFIERS
Except for block conversions, which convert wide frequen-
cy bandwidths, such as those used on satellite receivers,
IF amplifiers in general use a narrow bandpass, or a low-
pass, filter to limit the bandwidth to the Nyquist band-
width. Block conversion, on the other hand, can use a
highpass–lowpass filter pair, where the bandwidth to be
passed lies between the high and low cutoff frequencies.
The traditional bandpass filter requires one or more
resonant elements. Although the actual resonator may be
a coil and capacitor, ceramic resonator, or SAW filter, the
principles are basically the same. Digital filters, which do
not use resonators, have been employed more recently.
These will be discussed later in brief. They are discussed
in more detail elsewhere in this encyclopedia.
The inductance/capacitance resonator was the first
used, and is still a comparison standard. Figures 1a and
1b show series resonant circuits, and Fig. 1c shows a par-
allel resonant circuit. These circuits will pass a signal at
the resonant peak and reject a signal off resonance. Re-
sistances R_s and R_p are naturally occurring losses that
reduce the circuit efficiency. Figure 2 shows the universal
resonance curve, which is applicable to both series and
parallel resonant circuits. It is important to note that the
signal rejection never goes to a zero level in the area of
interest, but reaches an asymptotic value between 0.1 and
0.2, or about 17 dB down. If it is necessary to reject a signal on
the shoulders of this curve by 60 dB, then four cascaded
stages of this filter must be used to obtain the necessary
rejection. Note also that there is a nonlinear phase shift
that reaches a maximum in the area of interest, then
changes to ±70°. When stages are cascaded, this phase
shift is multiplied by the number of stages. A nonlinear
phase shift can cause distortion in FM receivers. The
phase shift curve plotted is for a parallel resonant circuit.
The phase reverses for a series circuit. The phase at any
point on the curve is obtained by plotting horizontally
from the vertical amplitude/phase scale at a = Q × (cycles off
resonance/resonant frequency).
A frequency f_0 at which the response of a parallel res-
onant LC filter is a maximum, that is, the point at which
the parallel impedance is a maximum, is defined as a pole.
A frequency at which the impedance is a minimum, as in
the series LC circuit, is defined as a zero. Thus the as-
sumed four cascaded stages above would constitute a four-
pole filter, since it contains four resonant poles. The fre-
quency of resonance is given by Eq. (5); this is the fre-
quency at which X_C = 1/(jωC) and X_L = jωL are equal:

f_0 = 1/[2π(LC)^(1/2)]    (5)
The bandwidth that an analog LC filter can pass is altered
by the circuit efficiency, or circuit Q, given in Eqs. (6).

Figure 1. Series (a,b) and parallel (c) resonant circuits.
Figure 2. Universal resonance curve (amplitude and phase lead/lag; BT = bandwidth × bit period).
Generally the bandwidth is specified as the bandwidth
between the 3-dB points, where the phase shift is ±45°:

Q = X_C/R_s for a series circuit    (6a)

Q = R_p/X_C for a parallel circuit    (6b)

Q = f_0/(3-dB BW)    (6c)
For simplicity in analyzing the following circuits, the Q-
determining R will be assumed to be a parallel resistance
R_p across the inductance.
Figure 3 shows a typical IF amplifier stage as used in
earlier transistor radios [1,2]. In this circuit R_p (the total
shunting resistive load) is actually three resistances in
parallel: one is the equivalent R_p of the coil itself (repre-
senting the coil losses), another is the input resistance of
the following stage, as reflected, and the third is the out-
put resistance of the driving transistor as reflected. It
cannot be assumed that the resulting coil Q, and hence the
selectivity of the circuit, is that of the unloaded coil and
capacitor alone. Dual-gate field-effect transistors have the
highest shunting resistance values, bipolar transistors the
lowest. The gain can be varied by increasing or decreasing
the bias voltage V_b applied to the input terminal.
Manufacturers of amplifying devices often provide the
impedances, or admittances, of their products on their data
sheets. Formerly this was done in the form of h parame-
ters. The more common practice today is to provide the
information in the form of S parameters. These values can
be converted to impedances and admittances, but the
manual process is rather complicated. An easier method
is to use the various software programs (see the Available
Software section at the end of this article) to make the con-
version. Matrix algebra and h and S parameters are discussed
elsewhere in this encyclopedia and also in Refs. 3 and
4 of this article. Unfortunately, S parameters for bandpass
filters are rarely available.
Figure 4a shows the equivalent circuit of the transistor
as the tuned LC sees it. The transistor amplifies a cur-
rent, which is passed through a relatively low driving re-
sistance R_s to the outside. At the same time, the attached
LC sees an equivalent shunting resistance R_c and capac-
itance C_c, which must be added in parallel to R_p, L, and
C. The input to the following stage, assumed to be an
identical transistor, will have a relatively low shunting
resistance R_i and capacitance C_i, which must be added.
Unless the added capacitances are large compared to the
resonant C, they merely add to it without greatly detuning
the circuit. When tuned, the total C plus L will determine
the frequency, and the resulting total R'_p will determine
the Q of the LC circuit, and hence the bandwidth. Thus
the complex components can be tuned out, and the remain-
ing design problem consists of matching the real, or resis-
tive, part of the input and output impedances to the best
advantage.
The desired end result is to couple the output of the
driving stage to the input of the following stage with the
least loss by matching the differing impedances. An addi-
tional desired result is to narrow the band of frequencies
passed by means of a filter. These objectives are accom-
plished by transforming the input and output impedances
to a higher or lower shunting impedance that maintains
the desired bandpass characteristic of the filter. A low
driving or load impedance can be stepped up to become a
very high impedance, which maintains the circuit Q at the
desired value.
Impedance matching enables the designer to change
the actual impedance to a different apparent value, which
Figure 3. Typical IF amplifier stage.
Figure 4. Equivalent circuit of the transistor as seen by the tuned LC.
is optimum for the circuit. Figure 5 shows how impedanc-
es are matched by transformer action. A transformer with
a 3:1 turns ratio is shown as an example. The output im-
pedance relative to the input impedance is given by Eq.
(7), where N_i and N_o are the input and output numbers of
turns on the winding:

Z_i/Z_o = (N_i/N_o)²    (7)

Thus 90 Ω at the input is seen as 10 Ω at the output with a
3:1 stepdown turns ratio. The autotransformer
(tapped coil in Fig. 5) has the same relationship.
When all the reactances and resistances from the tuned
circuit and the transistor input and output, as modified by
the stepup/stepdown process of the impedance-matching
networks, are added, the network in Fig. 4b results. Cal-
culation of the resonant frequency and circuit Q from
these reactances and resistances in parallel is complicated
unless they are converted to admittances. Software is
available at reasonable cost to perform these calculations
(see the Available Software section at the end of this article).
Stock, or mass-produced, IF transformers, which are
used to provide bandpass filtering as well as impedance
matching, seldom have the desired turns ratio to match
the impedances properly. An additional Z-matching circuit
using capacitors enables the available transformers to
match almost any impedance while preserving the circuit
Q. This capacitor divider circuit is often used instead of a
tapped coil or transformer, as shown in Fig. 6.
The formulas used to calculate the matching conditions
using capacitors are more complex than those used for
transformer coupling, since there are more variables. In
this circuit R_i is assumed to be lower than R_p. Although R_p
is the equivalent parallel resistance of the LC circuit in
Fig. 6, it could also be the reduced resistance, or reflected
R_p2, at a transformer tap. N in these equations is equal to
the loaded resonator Q, or to a lower arbitrary value if the
total shunting R_p is lowered by transformer action as in
Fig. 6, or if the component ratios become unwieldy [12]:

X_C2 = R_i/[R_i(N² + 1)/R_p − 1]^(1/2)    (8)

X_C1 = [R_p N/(N² + 1)][1 − R_i/(N X_C2)]    (9)

X_C2 ≈ (R_i R_p)^(1/2)/Q    (10)

X_C1 ≈ R_p/Q ≈ X_L    (11)
Equations (8) and (9) calculate the reactances of the two
capacitors. Note that N X_L is the same as Q X_L. Starting
with a value of N = Q, find X_C1, then X_C2.
If N is large in Eq. (8), the equations reduce to the
approximate values in Eqs. (10) and (11). Unless Q is
less than 10, these approximate equations are accurate
enough for general use. As an example, let R_i = 100 Ω and
R_p = 10,000 Ω with Q = 100. Then, using Eqs. (10) and (11),
X_C2 becomes 10 Ω and X_C1 becomes 100 Ω. C_2 is approximately 10
times larger than C_1. Note the similarity of this ratio to
Eq. (7). If a transformer is involved, N becomes much
smaller and the full formulas (8) and (9) should be used.
Equations (8)–(10) apply for R_i < R_p and
N > (R_p/R_i − 1)^(1/2).
7. DOUBLE-TUNED CIRCUITS
When two identical LC circuits are coupled together as
shown in Fig. 7, a number of responses are possible, as
shown in Fig. 8. The amplitude response depends on the
coupling coefficient K. Undercoupling results in a two-pole
filter with the sharpest selectivity. Critical coupling re-
sults in the narrowest bandwidth with the highest gain.
Transitional coupling is slightly greater than critical cou-
pling and results in a flat-topped response with a wider
Figure 5. Impedance matching by transformer action (90 Ω to 10 Ω with a 3:1 turns ratio).
Figure 6. Lowering of total shunting resistance by transformer action (capacitive divider C_1, C_2 matching R_i to R_p).
Figure 7. Coupling of identical LC circuits: (a) mutual inductance M; (b,c) coupling capacitor C_c.
bandwidth. Overcoupling results in a double-humped re-
sponse with sharper skirts and broad bandwidth. The cou-
pling coefficient can be calculated using Eqs. (12).
Equation (12a) applies to mutual inductive coupling and
(12b)–(12d) to capacitive coupling:

K = M/(L_1 L_2)^(1/2)    (12a)

K_c = 1/(Q_1 Q_2)^(1/2)    (12b)

K = C_c/(C_c + C_1)    (12c)

K = C_1/(C_c + C_1)    (12d)
Equation (12a) calculates the coupling coefficient for two
identical LC tuned circuits that are coupled together by
leakage inductance (Fig. 7a), often obtained by using
shielded coils with holes in the sides of the shield cans to
allow the magnetic fields to interact. The size of the hole
determines the value of the mutual inductance M. Since
this is difficult to control, a coupling capacitor is often used,
as shown in Figs. 7b and 7c. The critical coupling value is
given by Eq. (12b). The coupling coefficients for Figs. 7b
and 7c are given in Eqs. (12c) and (12d).
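A short Python sketch (not from the original article; component values are hypothetical) evaluates the coupling coefficients of Eqs. (12a)–(12c) as reconstructed above and compares them with the critical value of Eq. (12b):

```python
import math

def coupling_mutual(m_henry, l1_henry, l2_henry):
    """Eq. (12a): K = M / sqrt(L1*L2) for mutually coupled coils."""
    return m_henry / math.sqrt(l1_henry * l2_henry)

def critical_coupling(q1, q2):
    """Eq. (12b): Kc = 1 / sqrt(Q1*Q2)."""
    return 1.0 / math.sqrt(q1 * q2)

def coupling_capacitive(c_c_farad, c1_farad):
    """Eq. (12c): K = Cc / (Cc + C1) for capacitor-coupled tanks."""
    return c_c_farad / (c_c_farad + c1_farad)

# Two identical tanks with Q = 100: critical coupling is 0.01; anything
# larger gives the double-humped overcoupled response of Fig. 8.
print(critical_coupling(100, 100),
      coupling_mutual(1e-6, 100e-6, 100e-6),
      coupling_capacitive(2e-12, 200e-12))
```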
The amplitude response curves in Fig. 8 do not yield
any information as to the phase shifts that take place
through the filter. In AM circuits, phase is generally of
little concern, with most attention paid to the amplitude
ripple and linearity. In FM circuits, nonlinear phase shift,
or the related term differential group delay, becomes more of
a problem, and efforts are made to keep the phase shift as
linear as possible. In data transmission circuits using
phase modulation or amplitude modulation, any nonlin-
earity must be avoided. For these reasons, the coupling
coefficients are carefully adjusted and cascaded IF ampli-
fier stages are used to get the desired transfer function for
the IF amplifier.
8. CASCADING IF AMPLIFIER STAGES AND FILTERS
All filtering actions that take place between the RF input
of the receiver and the decision-making circuit are parts of
the IF amplifier bandpass filter. Since the final decision-
making circuit is at baseband, or 0 Hz, all filtering prior to
the decision-making circuit is part of the IF bandpass fil-
tering, which should be treated as a whole.
A single LC circuit seldom has the desired bandpass
characteristic for an IF amplifier. Cascading IF amplifier
stages with differing coupling and Q values enables the
designer to obtain the desired transfer response. One com-
bination of LC filters uses an overcoupled double-tuned
stage followed by a single-tuned stage with a lower Q. The
result is a three-pole filter with relatively steep skirt
slopes. Cascading these stages results in filters with re-
sponses resembling Butterworth, Chebyshev, elliptic, or
equal-ripple filters, which are noted for their rejection of
adjacent-channel interference (see Figs. 9 and 10).
When additional filtering is required at baseband,
simple RC filters, lowpass LC filters, or digital finite
impulse response (FIR) filters are used. These and other
filters are discussed in greater detail elsewhere in this
encyclopedia.
9. CRYSTAL AND CERAMIC FILTERS
Figure 10a shows the equivalent circuit of a crystal or a
ceramic resonator. These devices have both a pole and a
Figure 9. Curves resulting from cascading IF amplifier stages.
Figure 8. Results of LC circuit coupling: critical (curve A); transitional (curve B); overcoupled (curve C); undercoupled (curve D).
zero that are located relatively close to each other in fre-
quency. Quartz crystals have Q values ranging from 2000
to 10,000, depending on the mechanical loading of the
crystal. Ceramic resonators usually have Q values be-
tween 100 and 2000. The higher the Q, the narrower the
filter bandpass. When two of these devices are connected
as shown in Fig. 10b, the result is a bandpass filter with
steep skirts, as shown in Fig. 11. These resonators are
used in pairs to create a two-pole filter, which can then be
combined in a single container with other pairs to create a
filter with as many as eight or more poles. They usually
have excellent adjacent-channel rejection characteristics.
When using these devices, care must be taken to
match the specified impedance carefully. Any impedance mis-
match can seriously alter the response curve of the filter.
The impedance-matching techniques discussed previously
will enable the designer to obtain a very close match,
which will optimize the circuit performance. Typical input
and output impedances range from 50 to 4000 Ω. Crystal
filter manufacturers often build in transformer or other
tuned matching circuits so that the user does not need to
provide a matching circuit outside the crystal filter.
SAW (surface acoustic wave) filters utilize a crystal os-
cillating in a longitudinal mode with many fingers, or taps,
placed along the surface. They can be made with very
broad bandpass characteristics, which makes them well
suited for TV IF amplifiers, spread-spectrum IF filters,
and other uses requiring a wide RF bandwidth. They have
losses, typically about 8–20 dB, so they must
have amplifiers with adequate gain ahead of them if the
receiver noise figure is not to be degraded. They are not
suitable for use in ultranarrowband or low-frequency ap-
plications. The group delay quoted in the specifications is
usually the differential group delay and not the actual
group delay, which is much higher.
10. BASEBAND IF FILTERING
IF bandpass filters with specific response characteristics
are sometimes very difficult to obtain, whereas the desired
characteristic is easily and inexpensively obtainable at
baseband. This concept is often applied to transmitters,
where a sharp-baseband-cutoff filter can be obtained using
simple components, such as the switched filter. An 8-pole
equivalent at baseband becomes a 16-pole filter at the
modulation intermediate frequency. For example, a sharp-
cutoff filter for voice with a 4-kHz audio cutoff results in a
bandpass filter 8 kHz wide at RF after modulation, with
the same sharp cutoff. The same cutoff characteristics at
RF would be almost impossible to obtain in a crystal filter,
which would also be very costly and beyond the manufac-
turing budget for a low-cost transmitter such as a cordless
telephone. By using baseband filtering, a poor-quality RF
filter that only rejects the opposite image can be used.
Similarly, a wideband, or poor-quality, IF filter can be used
ahead of a detector if the undesired signal components
can be filtered off after detection at baseband by using a
sharp-cutoff filter.
Switched-capacitor filters are available as packaged in-
tegrated circuits that can be used at baseband and some
lower intermediate frequencies. They have internal oper-
ational amplifiers with a switched feedback capacitor, the
combinations of which determine the filter characteristics.
Since they are dependent on the speed of the operational
amplifiers and the values of the feedback capacitors, they
seldom function much above 100 kHz. They can be config-
ured as Bessel, equal-ripple, and Butterworth filters. Typ-
ical of this type of filter are the LTC1060 family
manufactured by Linear Technology Corporation (a) and
the MAX274 from Maxim (b) [see items (a) and (b) in the
Available Software list at the end of this article].¹ As Bessel
filters they perform well out to about 0.7 times the cutoff
bandwidth, after which the phase changes rapidly and the
Bessel characteristic is lost.
Digital signal processing (DSP) at baseband is widely
used to reduce the component count and size of the base-
band filters in very small radio receivers, such as cordless
and cellular telephones. Almost any desired filter response
can be obtained from DSP and FIR filters without using
inductors and capacitors, which would require factory
tuning (c,d).
Separate FIR filters have a flat group delay response
and are the best choice for FM or PM filtering, or filters
at baseband. Commercially available software design
Figure 11. Steep-skirted bandpass filter.

Figure 10. Equivalent circuit of a crystal or ceramic resonator: (a) single resonator, showing the series and parallel resonances; (b) two resonators connected as a filter.
¹ In the remainder of this article, all lowercase letters in parentheses refer to entries in the Available Software list following the Bibliography. Numbers in brackets refer to Bibliography entries (references) as usual.
packages permit the design of trial circuits to investigate
phase shift and group delay (e,f).
Unfortunately, digital filtering of any type is frequency-
limited. The filter must use a sampling rate that is much
higher than the frequency to be passed. To use a digital
filter, such as an FIR filter, or DSP as a bandpass filter at
10.7 MHz requires an analog-to-digital converter (ADC)
operating at 160 MHz or higher. Filtering at baseband
means the sampling rate can be much lower.
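As an illustration of this kind of baseband "window" filter design, the sketch below uses SciPy's firwin routine as a stand-in for the commercial design packages mentioned in the text; the sample rate, tap count, and cutoff are illustrative assumptions, not values from the article:

```python
import numpy as np
from scipy.signal import firwin, freqz

# A 4-kHz lowpass FIR filter for baseband use, sampled at 48 kHz,
# built with a Hamming window (one of the rolloff families discussed
# later in this article). Linear-phase FIR filters have flat group delay.
fs = 48_000.0
taps = firwin(numtaps=101, cutoff=4_000.0, window="hamming", fs=fs)

w, h = freqz(taps, worN=2048, fs=fs)
cutoff_db = 20 * np.log10(np.abs(h[np.argmin(np.abs(w - 4_000.0))]))
print(round(cutoff_db, 1))   # response at the 4-kHz cutoff, roughly -6 dB
```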
11. AMPLIFYING DEVICES FOR IF AMPLIFIERS
Transistors in one form or another have become the stan-
dard for IF amplifiers. The single bipolar or field-effect
transistor, used as an individual component, was formerly
the preferred device. For very-high-Q circuits, the dual-
gate FET performs best, since it is the most stable and
presents the highest shunt resistance (the least loading of
the tuned circuit). Single-gate FET devices
often have too much drain-to-gate capacitance for good
stability. Modern bipolar transistors usually have good
stability, but lower shunt resistances than dual-gate
FETs. Stability is discussed later in this article.
Monolithic amplifiers [MMIC (monolithic microwave
integrated circuit) devices] are stable and have good
gain, but the shunt load impedance is too low for most
bandpass filters other than a crystal or SAW filter
matched to 50 Ω.
The most recent practice for IF amplifiers is to use in-
tegrated-circuit blocks containing more than one transis-
tor in a gain stage. These are then packaged together in an
integrated circuit with other circuit components to form
an almost complete radio. Integrated circuits of this type
are shown below.
12. TYPICAL CONSUMER IF AMPLIFIERS
Consumer radio and TV equipment is mass-produced for
the lowest possible cost consistent with reasonable quality.
Manufacturers of integrated circuits now produce single-
chip IF amplifiers that can be combined with mass-pro-
duced stock filters to produce a uniform product with a
minimum of adjustment and tuning on the assembly line.
In the examples that follow, some circuit components in-
side and outside the IC have been omitted to emphasize
the IF amplifier sections.
Figure 12 shows a single-chip AM receiver that uses
the Philips TDA1072 [7] integrated circuit and ceramic IF
filters at 455 kHz. The input impedance of the ceramic fil-
ter is too low to match the output impedance of the mixer,
so a tuned matching transformer is used to both reduce
the passed bandwidth (prefilter) and match the imped-
ances. The input impedance of the IF amplifier was de-
signed to match the average impedance of the available
ceramic filters. This integrated circuit has a built-in au-
tomatic gain control that keeps the received audio output
level relatively constant at 250 mV as long as the input
signal level to the chip exceeds 30 mV.
Figure 13 shows a single-chip FM radio based on the
Philips NE605 integrated circuit (g) that uses ceramic IF
filters at 10.7 MHz. The input and output impedances of the
IF amplifier sections are approximately 1500 Ω, to match
the ceramic filter impedance, so no matching transformer
is required. The audio output is maintained at a level of
175 mV for all signal levels at the input from −110
to 0 dBm. An automatic frequency control (AFC) voltage
can be obtained from the quadrature detector output. This
circuit can also be used for narrow-angle phase modula-
tion if a crystal discriminator is used for a phase reference
at the quadrature input.
AGC is available from all FM integrated circuits so that
the gain of the mixer and RF stages can be controlled at a
level that does not allow these stages to be saturated by a
strong incoming signal. Saturation, or nonlinearity before
filtering, results in undesirable signal spreading. The
NE605 has a received-signal-strength indicator (RSSI)
output that can be amplified and inverted if necessary to
provide an AGC voltage, or current, for the RF amplifier
and mixer.
Figure 14 shows a TV IF amplifier using the Motorola
MC44301/2 video IF integrated circuit (h) with a SAW
filter at 45 MHz. The SAW filter bandpass is made
Figure 12. Layout of a single-chip AM receiver (TDA1072: mixer, IF amplifier, AGC detector/amplifier, AF detector; external oscillator LC, impedance-matching network, and ceramic filter).
Figure 13. Configuration of a single-chip FM radio (NE605: mixer, IF amplifier, limiter, quadrature detector, RSSI level detector; external oscillator LC and ceramic filters).
approximately 6 MHz wide to pass the video and sound.
The circuit has both automatic frequency control (AFC)
and automatic gain control (AGC) features built in. Unlike
the IF amplifiers used for AM and FM audio broadcast
applications, the TV IF amplifier includes a phase-locked
loop (PLL) and synchronous detector that locks the fre-
quency of an internal oscillator to the intermediate fre-
quency. This locked, or synchronous, oscillator output is
then mixed with the information-bearing portion of the
signal to create a baseband signal.
The system shown in Fig. 14 is one of a family of 0-Hz
IF amplifiers, which are becoming more popular in wireless de-
signs, since they permit most of the signal process-
ing to be done at baseband. In Fig. 14, the video and sound carriers
are both passed by the SAW filter. They beat together at
4.5 MHz in the detector, providing a second IF stage with
the sound information. This 4.5-MHz IF information is
then filtered by a ceramic bandpass filter approximately
50 kHz wide to remove any video components, then limited and
detected as a standard FM signal to provide the TV sound.
The video portion, consisting of signals from 15 kHz to
approximately 4.25 MHz, is then further processed to
separate the color information at 3.58 MHz from the
black-and-white information. The video output level is de-
tected to provide the AGC voltage.
The phase-locked oscillator, operating at the interme-
diate frequency, can also be used to provide automatic fre-
quency control to the first mixer stage local oscillator.
Figure 15 shows a dual-conversion receiver in a single
integrated circuit for communications use, utilizing the
Motorola MC13135 integrated circuit (h). When the
receiver is operated at 450 or 850 MHz, as was men-
tioned above, single-conversion IF stages do not offer
the necessary image rejection. This receiver is for narrow-
band FM, as opposed to wideband FM for entertain-
ment purposes. The first IF filter is a low-cost ceramic
filter at 10.7 MHz. The second filter is a multipole crystal
or ceramic filter with a bandpass just wide enough to pass
the signal with a small FM deviation ratio. Receivers
of this type can be used with 12.5 and 25 kHz of chan-
nel separation for voice-quality audio. Analog cellular
telephones, aircraft, marine, police, and taxicab radios
are typical examples.
13. DIRECT CONVERSION AND OSCILLATING FILTERS
Direct conversion converts the RF frequency directly to
baseband by using a local oscillator at the RF carrier fre-
quency. The TV IF amplifier with the detector circuit given
in Fig. 14 illustrates some of the reasons. Conversion to
baseband can occur at the intermediate frequency or di-
rectly from the RF frequency.
There is a noticeable trend in integrated-circuit design
to utilize synchronous detection [5], with the carrier
restored by means of a phase-locked loop, as shown in
Fig. 14, or by means of regenerative IF amplifiers [6],
to accomplish several desirable features that cannot
be obtained from the classical circuits with square-law
detectors.
In the case of direct RF-to-baseband conversion, there
is no IF stage in the usual sense, and all filtering occurs at
baseband. For this reason direct-conversion receivers are
referred to as zero-hertz (0-Hz) IF radios. Integrated cir-
cuits for direct RF conversion are available that operate
well above 2.1 GHz at the RF input. The Maxim 2820 (b)
and the AMD1771 (i) are examples. DSP and FIR filters
are the preferred lowpass filters at baseband, where they
are referred to as windows.
It was discovered in the 1940s that the performance of
a TV receiver could be improved by using a reconstructed
synchronous, or exalted, carrier, as occurs in the TV IF
amplifier depicted in Fig. 14. The carrier is reduced by
vestigial-sideband filtering at the transmitter and con-
tains undesirable AM and PM signal components. By
causing an oscillator to be locked to, or synchronized
with, the carrier, and then to be used by the detector, a
significant improvement in the received signal can be
achieved. Prior to using circuits of this type, the intercar-
rier sound at 4.5 MHz in earlier TV sets had a character-
istic 60-Hz buzz due to the AM and PM on the carrier. By
substituting the recovered synchronous carrier instead,
this buzz was removed. Figure 14 illustrates an example.
The earliest direct-conversion receivers using locked
oscillators or synchronous detectors were built in the
Figure 14. Layout of a television IF amplifier (SAW filter, IF amplifier, AGC amplifier, video detector with phase-locked VCO, limiter, 4.5-MHz ceramic filter and sound detector; video, audio, and AFC/AFT outputs).
Figure 15. Configuration of a dual-conversion receiver in a single IC (MC13136: first and second mixers with LC local oscillators, 10.7-MHz and 455-kHz ceramic filters, IF amplifier, limiter, quadrature detector, RSSI output).
1920s, when they were known as synchrodyne or homo-
dyne receivers. The theory is relatively simple. A signal
from the RF amplifier is coupled to an oscillator, causing a
beat, or difference, frequency. As the frequencies of the two
sources come closer together, the oscillator is pulled to
match the incoming signal and locks to it. The lock range
depends on the strength of the incoming signal. The two
signals are then mixed to provide a signal at baseband,
which can be further filtered by means of a lowpass filter.
In this way, a relatively broad RF filter can be used, while
the resulting AM signal bandwidth after detection and
baseband filtering can be very narrow. The Q of the oscil-
lator tank circuit rises dramatically with oscillation, so
that Q values of 6000–10,000 are not unusual and selec-
tivity is greatly improved. AGC can be obtained from the
audio signal to maintain an input signal level that is con-
stant enough to ensure a good lock range. An undesirable charac-
teristic is the whistle or squeal that occurs between
stations. Later receivers used a squelch circuit to make
the signal audible only after locking has occurred. High-
quality receivers for entertainment and communications
use were produced in the 1990s using this principle. They
offer higher sensitivity, better fidelity, and more controlled
response. Integrated circuits for receivers of this type (di-
rect conversion) are now being produced for paging, wi-fi
(wireless fidelity), direct-broadcast TV, and cellular and
cordless telephones. The Maxim 2820 (b) and the AMD
1771 (i) are examples.
Oscillating filters and phase-locked loops are similar in
principle. An intermediate frequency is applied to a phase/
frequency detector that compares the intermediate fre-
quency with the oscillator frequency. An error voltage is
created that changes the oscillator frequency to match, or
become coherent with, that of the incoming IF carrier fre-
quency. In some cases the phase-locked-loop signal is 90°
out of phase with the carrier, so a phase shifter is used to
restore the phase and make the signal from the oscillator
coherent in phase with the incoming signal. (See Figs. 14
and 18, where phase-locked loops and phase shifters are
employed.)
Synchronous oscillators and phase-locked loops not
only extend the lower signal-to-noise-ratio limit but also have
a bandwidth-filtering effect. The noise bandwidth of the
PLL filter is the loop bandwidth, while the actual signal
filter bandwidth is the lock range of the PLL, which is
much greater. Figure 16 shows the amplitude and linear
phase response of a synchronous oscillator. The PLL is not
always the optimum circuit for this use because its fre-
quency/phase-tracking response is that of the loop filter.
The locked oscillator [6] performs much better than the
PLL, since it has a loop bandwidth equal to the lock range
without sacrificing noise bandwidth, although with some
phase distortion. Some authors hold that the synchronous
oscillator and locked oscillator are variations of the PLL in
which the phase detection occurs in the nonlinear region
of the oscillating device and the voltage-controlled-oscil-
lator (VCO) frequency change characteristic comes from
the biasing of the oscillator.
Both the PLL and the locked oscillator can introduce
phase distortion in the detected signal if the feedback
loop is nonlinear. A later circuit, shown in Fig. 17, has two
feedback loops and is considered to be nearly free of phase
distortion [5]. This circuit has the amplitude/phase res-
ponse given in Fig. 16.
Phase-locked loops have been used for many years for
FM filtering, amplification, and detection. They are in
common use with satellite communications links for audio
and video reception. A 74HC4046 phase-locked-loop inte-
grated circuit operating at 10.7 MHz (the FM intermediate
frequency) can be used to make an FM receiver for broad-
cast use [7]. The phase-locked loop extends the lower sig-
nal-to-noise limit of the FM receiver by several decibels
while simultaneously limiting bandwidth selectivity to the
lock range of the PLL. The detected audio signal is taken
from the loop filter.
14. AM STEREO (C-QUAM)
AM stereo radio is another application of the phase-locked
oscillator at the intermediate frequency. AM stereo radio
is dependent on two programs being transmitted at the
same time at the same frequency. They arrive at the re-
ceiver detector circuitry through a common IF amplifier
Figure 17. Circuit with the same amplitude and phase response as in Fig. 16 but with two feedback loops and markedly decreased phase distortion.
Figure 16. Amplitude and linear phase response of a synchronous oscillator (noise bandwidth versus filter signal bandwidth).
operating at 455 kHz. The normal program heard by all
listeners is the L+R program. The stereo information
(L−R) is transmitted at the same frequency, but in quad-
rature phase to the L+R program. Quadrature, or or-
thogonal, transmission is used because the orthogonal
channels do not interfere with one another.
Each program section requires a reference carrier
that is coherent with its own sideband data. The L+R
program, which has a carrier, may use an ordinary
square-law detector or a synchronous detector. This is
the program heard over monaural radios. To obtain the
L−R program, which is transmitted without a carrier, a
phase-locked loop is used at the intermediate frequency
to lock a voltage-controlled oscillator to the carrier of the
L+R program. This carrier is then shifted 90° in phase
and becomes the reference carrier for the L−R segment.
The output of the PLL has the proper phase for the L+R
detector, so no phase shifting is necessary. The L−R
detector is a coherent or synchronous detector that
ignores the orthogonal L+R information. By adding,
then inverting and adding, the left and right channels
are separated. Figure 18 shows a simplified block diagram
of the C-QUAM receiver.
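The quadrature-detection idea can be illustrated with a small Python sketch (not from the original article). It performs plain synchronous I/Q detection of two orthogonal channels with an assumed perfect recovered carrier, and it deliberately ignores the envelope-correction step that distinguishes C-QUAM from simple quadrature AM; all signals and frequencies are synthetic:

```python
import numpy as np

fs, fc = 1.0e6, 455e3                       # sample rate and 455-kHz IF (illustrative)
t = np.arange(20000) / fs
l_plus_r = np.cos(2 * np.pi * 1e3 * t)      # stand-in L+R program (1-kHz tone)
l_minus_r = np.cos(2 * np.pi * 2e3 * t)     # stand-in L-R program (2-kHz tone)

# L+R amplitude-modulates the in-phase carrier; L-R rides in quadrature
if_signal = (1 + 0.5 * l_plus_r) * np.cos(2 * np.pi * fc * t) \
            + 0.5 * l_minus_r * np.sin(2 * np.pi * fc * t)

def lowpass(x, n=200):
    """Crude moving-average stand-in for the baseband filter."""
    return np.convolve(x, np.ones(n) / n, mode="same")

# Coherent detectors driven by the recovered carrier and its 90-degree shift;
# each one ignores the orthogonal channel.
i_det = lowpass(if_signal * np.cos(2 * np.pi * fc * t))   # carries L+R (plus DC)
q_det = lowpass(if_signal * np.sin(2 * np.pi * fc * t))   # carries L-R
i_det = i_det - i_det.mean()                              # strip the carrier DC term

left, right = i_det + q_det, i_det - q_det                # adding / inverting and adding
print(round(np.corrcoef(left[2000:-2000],
                        (l_plus_r + l_minus_r)[2000:-2000])[0, 1], 2))  # close to 1.0
```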
The Motorola MC1032X (h) series of integrated circuits
is designed for AM stereo use. The MC10322 and
MC10325 have most of the components required, includ-
ing the IF amplifiers, for a complete AM stereo receiver in
two integrated-circuit packages.
15. SUBCARRIERS
Subcarriers are used to carry two or more signals on the
same carrier. They differ from the orthogonal signals used
with C-QUAM in that they are carried as separate signals
superimposed on the main carrier information, as in the
video sound carrier shown in Fig. 14. In Fig. 14, a fre-
quency-modulated subcarrier at 4.5 MHz is carried on top
of the main video signal information, which extends from
0 to 4.25 MHz. This is an example of an AM/FM subcar-
rier. Nondigital satellites utilize a frequency-modulated
video carrier with as many as 12 subcarriers at frequen-
cies ranging from 4.5 to 8.0 MHz. Normal FM stereo
broadcasting utilizes an FM/AM subcarrier at 38 kHz to
carry the L−R portion of the stereo program. FM stations
frequently carry additional subcarriers at 67 and 92 kHz.
These FM/FM subcarriers are used to carry background
music, ethnic audio programs, and digital data.
To detect a subcarrier, the signal is first reduced to
baseband; then a bandpass filter is used that separates
only the subcarrier frequencies. The subcarrier frequen-
cies are then passed to a second detector, which must be of
the type appropriate for the subcarrier modulation. This
can be seen in Fig. 14, where a 4.5-MHz filter is used. This
is followed by a limiter and quadrature detector, which is
appropriate for the FM signal. In the case of a 67-kHz FM/
FM subcarrier, the filter is 15 kHz wide at 67 kHz. Detec-
tion can be accomplished by a discriminator, quadrature
detector, or PLL.
16. CELLULAR AND CORDLESS TELEPHONES
Analog cellular telephones employ the circuits shown
in Figs. 13 and 15. Digital telephones utilizing Gaussian
minimum shift keying (GMSK) also use these circuits.
Digital telephones using quadrature amplitude modula-
tion (QAM) or phase shift keying (PSK) employ circuits
similar to that used for C-QUAM, with digital filtering
and signal processing instead of audio filtering at base-
band. The PLL used for digital receivers is a more
complex circuit known as the Costas loop, which is neces-
sary to restore a coherent carrier for digital data recovery.
Some cellular phones are dual-mode; that is, they can
transmit and receive analog voice or digital GMSK
modulation using circuits similar to those shown in
Figs. 13 and 15.
17. NEUTRALIZATION, FEEDBACK, AND AMPLIFIER
STABILITY
Earlier transistors and triode vacuum tubes had consid-
erable capacitance between the output element (collector
or plate) and the input side of the device (see Fig. 4). Feed-
back due to this capacitance is multiplied by the gain of
the stage, so that enough signal from the output was often
coupled back to the input to cause the stage to oscillate
unintentionally, as opposed to the planned oscillation of
the locked oscillator, synchronous oscillator, or PLL. To
prevent this, feedback of an opposite phase was deliber-
ately introduced to cancel the undesired feedback. A neu-
tralized IF amplifier is shown in Fig. 19. Transistors and
integrated circuits made since 1985 are rarely unstable
and generally do not require neutralization unless seri-
ously mismatched. A better solution than neutralization is
usually to improve the matching of the components and
the circuit layout.
By carefully controlling the feedback, a regenerative IF
amplifier can be constructed that operates on the verge
of oscillation. This greatly increases the Q of the tuned
circuit, thus narrowing the IF bandwidth. Circuits of
this type were once used in communication receivers for
Figure 18. Simplified block diagram of the C-QUAM receiver (MC10322: IF amplifier, AGC amplifier, limiter, 455-kHz ceramic filter, phase detector and VCO with phase shifter, I and Q detectors; L+R and L−R outputs).
commercial and amateur use, where they were referred to
as Q multipliers.
The maximum stable gain (MSG) that can be achieved
from a potentially unstable amplifier stage without neu-
tralization is obtainable from the S parameters and can be
calculated from Eq. (13). This equation assumes that the
input and output impedances are matched and there is
little or no scattering reflection at either the input or out-
put. The stability factor K, usually given with the S pa-
rameters, must be >1. A failure to match the impedances
can result in an unstable amplifier, but does not necessar-
ily do so. A higher gain can be obtained, but at the risk of
instability.

MSG = |S_21|/|S_12|    (13)
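A minimal Python sketch (not from the original article) evaluates Eq. (13); the S-parameter magnitudes are hypothetical data-sheet values, and the decibel conversion assumes MSG is treated as a power-gain ratio:

```python
import math

def maximum_stable_gain_db(s21_mag, s12_mag):
    """Eq. (13): MSG = |S21| / |S12| (valid when the stability factor K > 1
    and the input and output are matched), expressed here in decibels."""
    return 10 * math.log10(s21_mag / s12_mag)

# Hypothetical magnitudes at the intermediate frequency
print(round(maximum_stable_gain_db(12.0, 0.04), 1))   # about 24.8 dB
```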
In addition to impedance mismatch, the most frequent
cause of amplifier instability, or oscillation, is poor circuit-
board layout or inadequate grounding and shielding, not
the device parameters. The wiring, whether printed or
handwired, forms inductive or capacitive coupling loops
between the input and output terminals of the amplifying
device. This is particularly noticeable when high-gain ICs
such as the NE605 are used. These integrated circuits
have IF gains of >100 dB and require very careful board
layouts for best results. Undesirable feedback can greatly
decrease the usable gain of the circuit.
18. SOFTWARE RADIO
Digital radios, or radios based on digital signal processing
(DSP), offer some technical advantages over their analog
predecessors. Digital radios can be used not only for dig-
ital modulation but also for AM and FM. One receiver can
simultaneously detect both digital and analog modulation;
thus they can be used for cellular telephones in environ-
ments where multiple modulation standards are used. As
a class, they belong to the 0-Hz intermediate-frequency
group.
The typical receiver consists of a conventional RF front
end and a mixer stage that converts the signal to a lower
frequency, as in the dual-conversion radios discussed
above (Fig. 15). The signal at this stage is broadband in
nature, but not broadband enough to include the image
frequencies. The signal is then fed to an analog-to-digital
converter (ADC), which is sampled at several times f_m.
This converts the portion of interest of the signal to base-
band (or 0 Hz) instead of a higher intermediate frequency.
The actual filtering to remove unwanted interfering sig-
nals then takes place at baseband, using digital filtering.
Digital signal processing and decimation are covered else-
where in this work. The ADC (c) performs the same func-
tions as do the oscillating detectors shown above.
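The DSP back end described here can be sketched in a few lines of Python (not from the original article). The sample rate, IF, filter parameters, and decimation factor are all illustrative assumptions, and SciPy's firwin/lfilter stand in for the receiver's dedicated hardware:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 400e3                                   # assumed ADC sample rate
f_if = 100e3                                 # assumed sampled low IF
t = np.arange(40000) / fs
adc_samples = np.cos(2 * np.pi * (f_if + 1e3) * t)   # a tone 1 kHz above the IF

nco = np.exp(-2j * np.pi * f_if * t)         # numerically controlled oscillator
baseband = adc_samples * nco                 # quadrature mix down to 0 Hz

taps = firwin(129, cutoff=5e3, fs=fs)        # digital channel ("window") filter
filtered = lfilter(taps, 1.0, baseband)
decimated = filtered[::40]                   # reduce the rate to 10 kHz

spectrum = np.abs(np.fft.fft(decimated))
peak_bin = int(np.argmax(spectrum[: len(spectrum) // 2]))
print(peak_bin * (fs / 40) / len(decimated)) # ~1000 Hz: the 1-kHz offset survives
```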
Noise figure, amplification, and AGC considerations for
the first IF amplifier are the same as those for a conven-
tional receiver. The ADC and the DSP filters function best
with a constant signal input level.
The term software radio has been adopted because
the tuning function is done in software by changing the
sampling frequency at the ADC. The sampling frequency
is obtained from a digitally controlled frequency synthe-
sizer instead of tuned LC circuits.
19. SPREAD-SPECTRUM RADIOS
The spread-spectrum receiver also uses a conventional
front end with a wideband first IF stage. The same con-
ditions apply as for software radios and dual-conversion
receivers. The first IF stage must have the necessary
bandwidth to accommodate the spread bandwidth, ampli-
fy it with minimum added noise, and match the output to
the despreading circuitry. Spread-spectrum technology is
covered elsewhere in this encyclopedia. While usually as-
sociated with digital reception, spread-spectrum technol-
ogy can also be used for analog audio.
20. ORTHOGONAL FREQUENCY-DIVISION
MULTIPLEXING (OFDM) AND CODED OFDM (COFDM)
These modulation methods could be considered similar to
spread-spectrum techniques, or to methods requiring dual
conversion, in that they use a very broad spectrum as a
first level, followed by a narrowband filter to extract an
individual channel. SAW filters are generally used at RF,
while second-stage processing can use digital filtering, as
in the software radio, or be done at baseband.
21. TRANSFER FUNCTIONS
The amplitude response of a filter, plotted relative to
frequency, is usually given in terms of the transfer function
H(f). Some typical transfer functions are as follows. For
the LC filter of Fig. 2,

H_LC(f) = exp[−(Qωt)²]    (14)
The Laplace transform equivalent is

H(s) = K/(s² + Bs + ω_0²)    (15)
Figure 19. Configuration of a neutralized IF amplifier.
A similar curve obtainable with digital filters is the Gauss-
ian filter:

H_Gauss(f) = exp[−1.38(1/BT)²]    (16)
A generalized Nyquist IF filter bandpass spectrum is seen
in Fig. 20.
In Fig. 20 the centerline represents either the carrier
frequency or 0 Hz. The portion of the spectrum to the right
of the centerline is the baseband response, while both ex-
tremes represent the RF double-sideband response with
the carrier at the center. The B region is the baseband re-
sponse of an ideal filter, which does not exist in practice.
Practical filters have a rolloff, or excess bandwidth, shown
in α. Outside the desired bandpass, there is a comeback
in region C. The ideal filter has no rolloff and no come-
back. The region B is the required Nyquist bandwidth.
Multilevel digital modulation methods such as quad-
rature amplitude modulation (QAM) and multiple phase
shift keying (MPSK) require filters that are free of ampli-
tude and phase distortion within the Nyquist bandwidth,
then having a rolloff α as abrupt as reasonably possible
outside that distortion-free bandwidth. The optimum filter
for this is considered to be the raised-cosine filter, so called
because the region after the uniform response is a half-cycle
of a cosine wave squared (cosine raised to the second power).
The transfer function for the raised-cosine filter is as fol-
lows. In the central bandpass region B, we obtain

H(f) = 1 for |f| ≤ f_m(1 − α)    (17)
When α = 0, the filter is said to be the ideal filter. In the
transition region α, since cos 2A = 2cos²A − 1 [i.e.,
cos²A = (1 + cos 2A)/2], and H(f) = 0 elsewhere, we obtain
the following forms of Eq. (17):

1. H(f) = cos²[(π|f|T)/(2α) − π(1 − α)/(4α)], for f_m(1 − α)
< |f| < f_m(1 + α)
2. H(f) = (1/2){1 + cos[(π|f|T)/α − π(1 − α)/(2α)]}
In practice, there is always some comeback as seen in re-
gion C.
Figure 20 shows the double-sided RF bandwidth when
the center reference is the carrier. The right-hand side is
the baseband bandwidth with the reference at 0 Hz. When
used as a lowpass filter at baseband, the filter is referred
to as a window. There are many rolloff curves associated
with windows, which are realizable with DSPs or field-
programmable gate arrays (FPGAs) used as FIR filters.
Designing an RF bandpass filter with these rolloff curves
is very difficult; therefore, the preferred practice is to do
the filtering at baseband, where numerous windowing
curves are available.
Some popular rolloff curves for FIR filters used as win-
dows are the Bartlett, Blackman, Hamming, Hanning,
Elanix, truncated sin x/x, and Kaiser. These are usually
realized by changing the multipliers in form 2 of Eq. (17)
[given above, after the text following Eq. (17)] of the raised-
cosine equation. For example, using this form of Eq. (17),
the Hamming window has the equation H(f) = 0.54 +
0.46 cos[π|f|T/α − π(1−α)/(2α)]. The ideal filter shape
(α = 0) at baseband is called a rectangular window.
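As a minimal numerical sketch (assuming Python with NumPy; the symbol names, parameter values, and the choice of applying the modified multipliers only in the transition region are illustrative assumptions, not part of the original text), form 2 of Eq. (17) and the Hamming-type variant can be evaluated as follows.

```python
import numpy as np

def nyquist_rolloff(f, T, alpha, w0=0.5, w1=0.5):
    """Window/filter response built from form 2 of Eq. (17).

    w0 = w1 = 0.5 gives the raised-cosine filter; w0 = 0.54, w1 = 0.46
    gives the Hamming-type rolloff mentioned in the text.  f_m = 1/(2T)."""
    f = np.abs(np.asarray(f, dtype=float))
    fm = 1.0 / (2.0 * T)
    H = np.zeros_like(f)
    H[f <= fm * (1.0 - alpha)] = 1.0           # ideal (Nyquist) region B
    trans = (f > fm * (1.0 - alpha)) & (f < fm * (1.0 + alpha))
    H[trans] = w0 + w1 * np.cos(np.pi * f[trans] * T / alpha
                                - np.pi * (1.0 - alpha) / (2.0 * alpha))
    return H                                   # zero outside: no comeback (region C)

T, alpha = 1.0, 0.35                           # illustrative symbol period and rolloff
f = np.linspace(0.0, 1.0, 11)
print(nyquist_rolloff(f, T, alpha))              # raised cosine
print(nyquist_rolloff(f, T, alpha, 0.54, 0.46))  # Hamming-type multipliers
```

With the 0.54/0.46 multipliers the response no longer reaches zero at the band edge, which is the pedestal behavior characteristic of the Hamming window.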
22. GROUP DELAY, ENVELOPE DELAY, AND RISE TIME
The group delay for conventional filters is traditionally
calculated to be [11]

$$ T_g = \frac{\Delta F}{2\pi\,\Delta f} \qquad (18) $$

For LC or Gaussian filters (Fig. 2), this is

$$ T_g = \frac{1}{4\,\Delta f} \quad \text{and} \quad T_g = \frac{Q\,\Delta F}{\omega} \qquad (19) $$

Obviously, a very narrow-bandwidth filter [small Δf] has a
very large group delay, which will adversely affect pulse
modulation.
There is an associated equation for the risetime of the
conventional filter: T_r = 0.7/B, where B is the 3-dB band-
width [Δf] of the filter. This is the time interval from 10%
to 90% on the RC curve. Bandwidth, risetime, and sam-
pling rate are mathematically linked.
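A hedged numerical illustration of these relations, assuming only the T_r = 0.7/B and T_g = 1/(4Δf) approximations quoted above and an arbitrary 10 kHz example bandwidth:

```python
def rise_time(bandwidth_hz):
    """10%-90% risetime T_r = 0.7/B from the text (B = 3-dB bandwidth)."""
    return 0.7 / bandwidth_hz

def lc_group_delay(bandwidth_hz):
    """Approximate LC-filter group delay T_g = 1/(4*delta_f) of Eq. (19)."""
    return 1.0 / (4.0 * bandwidth_hz)

B = 10e3                      # example: a 10 kHz wide IF filter (assumed value)
print(f"T_r ~ {rise_time(B)*1e6:.0f} us, T_g ~ {lc_group_delay(B)*1e6:.0f} us")
```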
A radar system with a narrow pulse must have an RC
risetime that allows the pulse to pass. This necessarily
means a very broad filter bandwidth and an accompany-
ing high noise level.
Two-level modulation methods, such as BPSK, QPSK,
GMSK, NBFM, and NBPM (binary and quadrature phase
shift keying, Gaussian minimum shift keying, and narrow-
band frequency and phase modulation), can use a
narrower-than-usual bandpass filter. The bandpass can be
as low as 0.2[Δf] in Eqs. (18) and (19).
Refer to Fig. 2 and Eq. (4), which demonstrate a reduc-
tion of the output level of the high-frequency portion of the
signal (sidebands) that passes through the filter and a si-
multaneous reduction of the noise power bandwidth.
The result is an overall S_o/N_o improvement. The notation
BT is used for this concept (B = bandwidth and T = bit pe-
riod = 1/f_b). The value of T is fixed, but B can be altered.
The effect is to raise the processing gain in Eq. (4) by 1/BT.
Figure 20. A simplified Nyquist filter bandpass spectrum (from Sklar [13] and Feher [14]).

Certain newer modulation concepts (ultranarrowband)
require a filter that does not conform to the group delay
equation [Eq. (18)]. These so-called zero-group-delay filters
have a very narrow bandwidth with almost instantaneous
pulse response at a single frequency. Figure 21 shows the
circuit of a half-lattice, or bridge-type, filter, which has near-
zero group delay to a single pulsed frequency. At the par-
allel resonant frequency of the crystal, the crystal appears
to have a very high resistance and the signal passes via
the phasing capacitor in the opposite bridge arm. This
circuit has a frequency response similar to that of the uni-
versal curve (Fig. 2), with shoulders that extend from 0 Hz
to infinity. Therefore, it must be used together with pre-
filters to narrow the total noise bandwidth. A small ca-
pacitor or inductor can be used at Z to extend the tuning
range of the crystal [11].
23. COMPUTER-AIDED DESIGN AND ENGINEERING
Digital filters are easily designed using commercially
available software packages and information provided by
the IC manufacturers (d-g, l).
For IF filter design using discrete components, the ad-
mittances rather than the impedances are easiest to use,
since most components are in parallel as shown in the
equivalent circuit of Fig. 4b. Unfortunately, most available
data are in the form of S parameters, which are very dif-
ficult to convert manually to impedances or admittances.
Parameters for the filters are rarely available, so calcu-
lated values based on assumed input and output imped-
ances must be used unless test equipment capable of
measuring return losses or standing waves is available,
in which case the S parameters can be measured or cal-
culated.
Smith and Linville charts have been used by some au-
thors to design IF amplifiers, but these methods are not
totally satisfactory for IF amplifier design, since a high-Q
circuit has its plot near the outer edge of the circle and
changes are difficult to observe. The network admittance
values shown in Fig. 4 would be used.
Computer-aided programs for linear or analog designs,
such as the various SPICE programs, are readily avail-
able (j). Other programs that concentrate specifically on
filter design (f, k-m) can simplify the filter design. They
have outputs that then interface with the SPICE pro-
grams if desired. Most semiconductor manufacturers pro-
vide scattering parameters (S parameters) or SPICE input
data on disk for use with these programs. Some design
software sources are listed below (after the Bibliography).
Some of the IF amplifier integrated circuit manufacturers
also provide software specific to their products.
BIBLIOGRAPHY
(References 8, 9, 10, and 13 contain applicable
software.)
1. J. M. Petitt and M. M. McWhorter, Electronic Amplifier Cir-
cuits, McGraw-Hill, New York, 1961.
2. W. Th. Hetterscheid, Transistor Bandpass Amplifiers, Philips
Technical Library, N.V. Philips, Netherlands/Philips Semicon-
ductors, 1964.
3. Roy Hejhall, RF Small Signal Design Using Two-Port Param-
eters, Motorola Applications Note AN 215A.
4. F. Davis, Matching Network Designs with Computer Solu-
tions, Motorola Applications Note AN 267.
5. V. Uzunoglu and M. White, Synchronous oscillators and co-
herent phase locked oscillators, IEEE Trans. Circuits Syst.
36(7) (1989).
6. H. R. Walker, Regenerative IF amplifiers improve noise band-
width, Microwaves & RF Mag. (Dec. 1995, Jan. 1996).
7. R. E. Best, Phase Locked Loops, McGraw-Hill, New York,
1984.
8. R. W. Goody, P-Spice for Windows, Prentice-Hall, Englewood
Cliffs, NJ, 2001.
9. M. E. Herniter, MicroSim P-Spice, Prentice-Hall, Englewood
Cliffs, NJ, 2000.
10. J. Keown, Orcad PSpice and Circuit Analysis, Prentice-Hall,
Englewood Cliffs, NJ, 2001.
11. W.-K. Chen, The Circuits and Filters Handbook, IEEE Press,
New York, 1995.
12. ARRL Handbook, Amateur Radio Relay League, Newington,
CT, 2000.
13. B. Sklar, Digital Communications, Prentice-Hall, Englewood
Cliffs, NJ, 2001. (Contains the Elanix SysView design soft-
ware on CD.)
14. K. Feher, Wireless Digital Communications, Prentice-Hall,
Englewood Cliffs, NJ, 1995.
AVAILABLE SOFTWARE
The following companies are representative of those pro-
viding packaged IF amplifiers as integrated circuits (#) and
those offering development software packages (*).
(a) #*Linear Technology Corporation, 720 Sycamore Drive, Mil-
pitas, CA 95035 (www.linear-tech.com).
(b) #Maxim Integrated Products, 120 San Gabriel Drive, Sunny-
vale, CA 94086 (www.maxim-ic.com).
(c) #*Analog Devices, One Technology Way, P.O. Box 9106, Nor-
wood, MA 02062 (www.analog.com).
(d) *#Texas Instruments, P.O. Box 954, Santa Clarita CA 91380
(www.ti.com/sc or www.ti.com/sc/expressdsp).
(e) #*Altera Corp., 101 Innovation Drive, San Jose, CA 95134
(www.altera.com).
Figure 21. Circuit of a half-lattice (bridge-type) filter with near-zero group delay to a single-pulsed frequency (from Chen [12]).
(f) *#Xilinx, Inc., 2100 Logic Drive, San Jose, CA 95124 (www.
xilinx.com).
(g) *# Philips Semiconductors, 811 E. Arques Avenue, P.O. Box
3409; Sunnyvale, CA 94088 (www.semiconductors.philips.
com).
(h) *#Motorola Literature Distribution Center, P.O. Box 5405;
Denver, CO 80217 (www.motorola.com/semiconductors/ or
www.Design-net.com).
(i) *#AMD, One AMD Place, P.O. Box 3453; Sunnyvale, CA 94088
(www.amd.com).
(j) *MicroSim Corp., 20 Fairbanks, Irvine, CA 92618 (www.
orcad.com).
(k) *Eagleware Corp., 1750 Mountain Glen, Stone Mountain, GA
30087 (www.eagleware.com).
(l) *Elanix Inc., 5655 Lindero Canyon Road, Suite 721, Westlake
Village, CA 91362 (www.elanix.com).
(m) *The Math Works (MatLab), 3 Apple Hill Drive, Natick, MA
01760-2098 (www.mathworks.com).
(n) *Intusoft, P.O. Box 710, San Pedro, CA 90733 (www.intusoft.
com).
(o) *#Hewlett-Packard Company, P.O. Box 58199, Santa Clara, CA
95052 (www.hp.com).
(p) *Z Domain Technologies, 555 Sun Valley Drive, Roswell, GA
30076 (www.zdt.com/Bdsp).
(q) #Rockwell Semiconductor Systems, 4311 Jamboree Road,
Newport Beach, CA 92660 (www.rockwell.com).
(r) #RF Micro Devices, 7628 Thorndike Road, Greensboro, NC
27409-9421 (www.rfmd.com).
INTERMODULATION
JOSÉ CARLOS PEDRO
University of Aveiro
Portugal
1. INTRODUCTION
1.1. What Is Intermodulation Distortion?
Although the term intermodulation is used by some
authors to describe a specific manifestation of nonlinear
distortion, in this text we will adopt the wide-sense mean-
ing of intermodulation as any form of nonlinear distortion,
unless otherwise explicitly stated. So, it is convenient to
start an introduction to intermodulation by saying a few
words about distortion.
In the eld of telecommunication systems, distortion is
understood as any form of signal impairment. In this way,
distortion takes the broad sense of all differences between
the received and the transmitted information signals,
specifically, those added or signal-dependent perturba-
tions.
In the first set of added, or signal-independent, pertur-
bations, we should include random noise and determinis-
tic interferences. Typical examples of the former are the
always present thermal noise or shot noise of electronic
circuits. The second could be illustrated by some man-
made (synthetic) repetitive impulsive noise or simply
another telecommunications channel that shares the
same transmission medium but that does not carry any
useful information.
The set of signal-dependent perturbations can also be
divided into two major parts, linear distortion and non-
linear distortion, according to whether what distin-
guishes the received signal from its transmitted version
is due to a linear or a nonlinear process. The reason for
this division lies in the ease with which we can correct
linear distortion and the difficulty we have in dealing with
nonlinearity. In fact, since linear distortion describes all
differences in time-domain waveform, or frequency-domain
spectrum, such as the ones caused by any usual filter or dis-
persive transmission medium, it can be corrected by an-
other inverse filter, with a methodology usually known as
pre- or postequalization. On the other hand, nonlinear
distortion cannot be corrected this way, remaining nowa-
days as a very tough engineering problem.
So, from a purely theoretical point of view, what
distinguishes linear distortion from nonlinear distortion
is simply the essence of the mapping corresponding to the
telecommunication system, from the signal source to the
detected signal. If that mapping responds to scaled ver-
sions of two different signals with two scaled versions of
the responses to these two signals, when they are pro-
cessed individually, we say that our transmission system
obeys superposition, and is thus linear [1]. In any other
case, we say that the system is a source of nonlinear
distortion. Nonlinear distortion can, therefore, manifest
itself in many different forms that range from the obvious
signal clipping of saturated amplifiers, to the almost
unnoticeable total harmonic distortion present in our
high-fidelity audio amplifiers.
Because nonlinear distortion is a property not shared
by our more familiar linear systems, we could think of it as
something visible only in some special-purpose systems or
poorly designed circuits. Unfortunately, that is not the
case. To a greater or lesser extent, nonlinear distortion is
present in the vast majority of electronic systems. Because
nonlinear distortion is associated with PN and PIN diodes
or varactors [2], it is found in many control devices such as
solid-state switches, controlled attenuators, phase shif-
ters, or tunable filters; and, because of the recognized
nonlinearity of magnetic-core inductors, it can also arise
from other passive filters and diplexers. However, prob-
ably more surprising is the fact that it can even arise from
devices usually assumed as linear.
One example is the nonlinear distortion produced by
some RF MEM (radiofrequency micromachined electro-
mechanical) switches [3]. Another is passive intermodula-
tion (PIM), which is frequently observed when loose
connections, or junctions made of different metals or of
similar but oxidized metals are subject to high power
levels [4]. So, PIM is generated in many RF connectors,
antennas, antenna pylons, wire fences, and other compo-
nents. Finally, intermodulation can even arise from our
supposedly linear electronic circuits as it is inherent to the
operation of all electronic active transducers. To under-
stand this, let us take the example of the general amplifier
described in Fig. 1.
Because our amplifier is a physical system, it must obey
energy conservation, which implies that the sum of all
forms of input power (either signal power P_in or DC
supply power P_DC) must equal the sum of all forms of
output power, whether it is signal power delivered to the
load P_out or dissipated power P_diss such as heat or
harmonic distortion:

$$ P_{in} + P_{DC} = P_{out} + P_{diss} \qquad (1) $$

On the other hand, we know that the output signal power
should be a scaled replica of the input signal power,
defining, in this way, a certain amplifier power gain:

$$ G_P = \frac{P_{out}}{P_{in}} \qquad (2) $$

However, (1) also implies that

$$ G_P = 1 + \frac{P_{DC} - P_{diss}}{P_{in}} \qquad (3) $$

which shows that no amplifier that relies on a real supply
of finite power can keep its gain constant for an ever-
increasing input signal level. Sooner or later, it will have
to show gain compression, presenting, therefore,
nonlinearity.
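A small numerical illustration of Eq. (3) (the supply power and small-signal gain below are assumed values; P_diss is simply lower-bounded by zero) shows how the gain bound collapses as the input power grows:

```python
# Equation (3): G_P = 1 + (P_DC - P_diss)/P_in.  Since P_diss >= 0, an amplifier
# drawing a fixed supply power P_DC can never exceed a gain of 1 + P_DC/P_in.
P_dc = 1.0                              # DC supply power in watts (assumed)
G_small_signal = 100.0                  # 20 dB small-signal power gain (assumed)

for P_in in (1e-4, 1e-3, 1e-2, 1e-1):   # input power in watts
    G_bound = 1.0 + P_dc / P_in         # upper bound from (3) with P_diss = 0
    G = min(G_small_signal, G_bound)    # gain must compress once the bound bites
    print(f"P_in = {P_in:.0e} W -> bound {G_bound:7.1f}, usable gain {G:6.1f}")
```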
1.2. Characterizing Intermodulation Distortion
After this brief introduction to the concept of intermodula-
tion distortion, let us now see in more detail which forms
of distortion it describes. For that, we will assume a simple
system represented by the following cubic model:
$$ y(t) = a_1\,x(t-\tau_1) + a_2\,x(t-\tau_2)^2 + a_3\,x(t-\tau_3)^3 \qquad (4) $$

in which the input x(t) is nonlinearly transformed into an
output y(t). Note that this system not only shows non-
linearity but also has memory, since it does not respond
instantaneously to the input, but to certain past versions
of it. This dynamic behavior is due to the presence of the
delays τ_1, τ_2, and τ_3.
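A minimal Python sketch of the cubic model of Eq. (4), with assumed coefficients, delays, and sampling rate, makes the DC, fundamental, second-, and third-harmonic products discussed in the following subsections visible in the output spectrum:

```python
import numpy as np

fs = 1e9                                   # sample rate (assumed)
t = np.arange(0, 2e-6, 1/fs)
fc = 10e6                                  # RF carrier frequency (assumed)
x = np.cos(2*np.pi*fc*t)                   # unmodulated carrier, A(t) = 1

a1, a2, a3 = 1.0, 0.2, -0.1                # illustrative coefficients of Eq. (4)
tau = [0.0, 5e-9, 10e-9]                   # tau_1, tau_2, tau_3 (assumed)
d = [int(round(T*fs)) for T in tau]        # delays in samples

def delayed(sig, n):
    return np.concatenate([np.zeros(n), sig[:len(sig)-n]]) if n else sig

y = a1*delayed(x, d[0]) + a2*delayed(x, d[1])**2 + a3*delayed(x, d[2])**3

# The output spectrum now contains DC, fc, 2*fc, and 3*fc components
Y = np.abs(np.fft.rfft(y)) / len(y)
f = np.fft.rfftfreq(len(y), 1/fs)
for k in (0, fc, 2*fc, 3*fc):
    print(f"{k/1e6:5.1f} MHz: {Y[np.argmin(np.abs(f-k))]:.3f}")
```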
1.2.1. Single-Tone Distortion Characterization. Suppos-
ing the input is initially composed of one amplitude A(t)
and phase θ(t) modulated RF carrier of frequency ω_c

$$ x(t) = A(t)\cos[\omega_c t + \theta(t)] \qquad (5) $$

the output will be composed of three sets of terms, say,
y_1(t), y_2(t), and y_3(t), each one corresponding to a certain
polynomial degree. Illustrations of the time-domain wave-
forms and frequency-domain spectra of the input x(t) and
output y(t) are depicted in Figs. 2a and 2b and in Figs. 3a
and 3b, respectively.

The first term of the output is given by

$$ y_1(t) = a_1 A(t-\tau_1)\cos[\omega_c t + \theta(t-\tau_1) - \phi_1] \qquad (6) $$

(where φ_1 = ω_c τ_1) and corresponds to the expected linear
response. Note that it includes exactly the same frequency
components already present at the input.
The second term

$$ y_2(t) = \tfrac{1}{2} a_2 A(t-\tau_2)^2 + \tfrac{1}{2} a_2 A(t-\tau_2)^2 \cos[2\omega_c t + 2\theta(t-\tau_2) - 2\phi_2] \qquad (7) $$

(where φ_2 = ω_c τ_2) involves baseband products whose fre-
quency falls near DC and some other products whose
frequencies are located around the second harmonic, 2ω_c.
The first ones consist of second-order intermodulation
products of the form ω_x = ω_1 − ω_2 (in which ω_x is the
resulting frequency, while ω_1 and ω_2 are any two distinct
frequencies already present at the input), and describe the
demodulation generally provided by even-order nonlinea-
rities. When ω_1 = ω_2, then ω_x = 0, and the terms fall
exactly at DC. So, they also describe the circuit's DC
bias shift. Because they are what is sought in AC-to-DC
converters, in amplifiers they model the variation of the
y(t) mean value from the quiescent point to the mean
value shown in the presence of a significant RF excitation,
the large-signal bias point. The second type of even-order
products is again second-order intermodulation distortion
whose frequency now falls at ω_x = ω_1 + ω_2.
Figure 1. Conceptual amplifier showing input/output power relations.
Figure 2. (a) Time-domain waveform of the input signal x(t); (b) corresponding frequency-domain spectrum.

Figure 3. (a) Time-domain waveform of the output signal y(t); (b) corresponding frequency-domain spectrum.
For ω_1 = ω_2, ω_x = 2ω_1, and the products are known as
second-order harmonic distortion.
Finally, the third term is

$$ y_3(t) = \tfrac{3}{4} a_3 A(t-\tau_3)^3 \cos[\omega_c t + \theta(t-\tau_3) - \phi_3] + \tfrac{1}{4} a_3 A(t-\tau_3)^3 \cos[3\omega_c t + 3\theta(t-\tau_3) - 3\phi_3] \qquad (8) $$

(where φ_3 = ω_c τ_3) and also involves two different sets of
products located near the input frequency band (or funda-
mental band) ω_c and the third harmonic 3ω_c.
Like the even-order products, the third-order pro-
ducts falling near the third harmonic 3ω_c are classified as
out-of-band products. Appearing at ω_x = ω_1 + ω_2 + ω_3, that
is, out of the fundamental signal band in RF systems of
narrow bandpass characteristics, these products seldom
constitute a major source of nonlinear signal impairment
as they can be easily filtered out. Note, however, that they
may also constitute in-band products in ultra-wide-band
systems such as cable television (CATV).
Third-order products falling exactly over, or in the
vicinity of, ω_c, in which the resulting frequencies can be
either ω_x = ω_1 + ω_2 − ω_3, ω_x = 2ω_1 − ω_2, or even ω_x = ω_1
(whether they arise from the combination of three distinct,
two equal and one different, or three equal input frequen-
cies, respectively), are obviously called in-band products.
Contrary to the products treated above, they cannot be
eliminated by linear filtering, constituting the principal
object of intermodulation distortion studies in microwave
and wireless systems. In fact, some authors even reserve
the term intermodulation distortion for this particular
form of nonlinear signal perturbation.
To analyze these in-band distortion products in more
detail, we will now consider two different situations of
system memory. In the first case, it is assumed that the
time delays of (4) are due only to the active device's
reactive components or to the input and output matching
networks. In this way, they may be comparable to the RF
carrier period, but negligible when compared to the much
slower modulation timescale. Therefore, the in-band pro-
ducts can be approximated by

$$ \tfrac{3}{4} a_3 A(t)^3 \cos[\omega_c t + \theta(t) - \phi_3] \qquad (9) $$
which shows that, although the system kept its dynamic
behavior to the RF carrier, it became memoryless (i.e.,
responds instantaneously) to the modulation envelope.
Since general amplitude and phase modulations have
frequency components that start at DC, we have already
seen that these products include spectral lines falling
exactly over the ones already present at the input, and
some other new components named as spectral regrowth.
The third-order signal components that are coincident
with the input are given by ω_x = ω_1 + ω_1 − ω_1 = ω_1 +
(ω_1 − ω_1) = ω_1 and can be understood as being generated
by mixing second-order products at DC with first-order (or
linear) ones. Except for their associated gain, which is no
longer a_1, but (3/4)a_3 multiplied by the input amplitude-
averaged power A², these products are indistinguishable
from the linear components of (6). They carry the same
information content, and are, therefore, termed signal-
correlated products. Although, in a strict sense, they
should be considered as nonlinear distortion products (as
their signal power rises at a slope of 3 dB/dB against the
1 dB/dB that characterizes truly linear components), from
an information content viewpoint, they may also be con-
sidered as linear products. In fact, since, for a constant-
input-averaged power, they cannot be distinguished from
the first-order components, it all happens as if the ampli-
fier had remained linear but with a gain that changed
from its small-signal value of G = a_1 exp(−jφ_1) to an
amplitude-dependent large-signal gain of G(A) =
a_1 exp(−jφ_1) + (3/4)A² a_3 exp(−jφ_3). So, input amplitude
signal variations [or amplitude modulation (AM)] produce
different output amplitude variations, according to the so-
called amplifier AM-AM conversion. But, since the gain is
also characterized by a certain phase, it is obvious that
input amplitude signal variations will also generate out-
put phase variations. In conclusion, and as illustrated in
Figs. 4a and 4b, the amplifier will show not only AM-AM
but also AM-PM conversion.

Figure 5 depicts a possible block diagram of a labora-
tory setup intended to measure these static AM-AM and
AM-PM characteristics [6]. As shown, it relies on a usual
microwave vector network analyzer whose signal source is
swept in power.
Figure 4. (a) Amplifier AM-AM conversion; (b) AM-PM conversion.

As a curious aside from this analysis, we should point
out that, although our nonlinearity manifests a signal-
amplitude-dependent gain, it is completely insensitive to
the input signal phase. In fact, as can be concluded from
(9), the bandpass characteristics of our amplifier would be
completely transparent to a phase-modulated signal of
constant amplitude, in the sense that the phase informa-
tion present at the output would be exactly equal to the
phase information present at its input.
In the second case, it is supposed that, beyond the usual
time constants of the order of the RF carrier period, our
system may even present time delays τ_1′, τ_2′, and τ_3′,
comparable to the modulation period (e.g., determined by
the bias circuitry, active-device charge carrier traps, or
self-heating). Such time constants are no longer irrelevant
for the envelope evolution with time, and the system is
said to present long-term or envelope memory effects. The
in-band output distortion becomes

$$ \tfrac{3}{4} a_3 A(t-\tau_3')^3 \cos[\omega_c t + \theta(t-\tau_3') - \phi_3'] \qquad (10) $$

and the output envelope will show a phase shift that is
dynamically dependent on the rate of amplitude varia-
tions. In this case, the output AM-AM or AM-PM is no
longer static, and dynamic (or hysteretic) AM-AM and
AM-PM conversions are observed, as shown in Figs. 6a
and 6b.
Figure 5. AM-AM and AM-PM characterization setup based on a microwave vector network analyzer.

Figure 6. Typical hysteretic AM-AM (a) and AM-PM (b) characteristics shown by nonlinear dynamic amplifiers suffering from both short-term and long-term memory effects.

This shows that, if our nonlinear system only presents
short-term memory effects, and thus is memoryless for the
envelope, it may be characterized by a set of gain and
phase shift tests made with a sinusoidal, or CW (contin-
uous-wave), excitation with swept amplitude and, even-
tually, with varying frequency. However, if the system is
also dynamic to the envelope, then the observed AM-AM/
AM-PM varies with the speed of the input amplitude
sweep, and such a test becomes questionable. Since each
of the tested CW signals can be seen as a carrier modu-
lated by a constant (DC) envelope, it becomes obvious that
we cannot fully characterize a dynamic system using only
these simple DC excitations.
Moreover, it is clear that testing in-band intermodula-
tion products with a CW signal will never be an easy task,
as the output will only have signal-correlated components
where ω_x = ω_c, which all overlap onto the usually much
higher linear output. Obviously, in-band intermodulation
characterization requires more complex stimuli.
1.2.2. Two-Tone Distortion Characterization. One way
to increase the complexity of our test signal is to use a
two-tone excitation:

$$ x(t) = A_1\cos(\omega_1 t) + A_2\cos(\omega_2 t) \qquad (11) $$

The in-band output components of (4) when subject to this
new stimulus will be

$$ \begin{aligned}
& a_1 A_1 \cos(\omega_1 t + \phi_{110}) + a_1 A_2 \cos(\omega_2 t + \phi_{101}) \\
& \quad + \tfrac{3}{4} a_3 A_1^2 A_2 \cos[(2\omega_1 - \omega_2)t + \phi_{321}]
  + \left[\tfrac{3}{4} a_3 A_1^3 + \tfrac{6}{4} a_3 A_1 A_2^2\right]\cos(\omega_1 t + \phi_{310}) \\
& \quad + \left[\tfrac{6}{4} a_3 A_1^2 A_2 + \tfrac{3}{4} a_3 A_2^3\right]\cos(\omega_2 t + \phi_{301})
  + \tfrac{3}{4} a_3 A_1 A_2^2 \cos[(2\omega_2 - \omega_1)t + \phi_{312}]
\end{aligned} \qquad (12) $$
Beyond the expected linear components arising at ω_1 and
ω_2, (12) is also composed of other third-order products at
ω_1, ω_2, 2ω_1 − ω_2, and 2ω_2 − ω_1. They constitute again the
signal-correlated (ω_1 and ω_2) and signal-uncorrelated
(2ω_1 − ω_2 and 2ω_2 − ω_1) components. The terms at ω_1
(ω_2) that are dependent only on A_1 (A_2) constitute the
AM-AM/AM-PM conversion discussed above. But now
there are some new terms at ω_1 (ω_2) whose amplitude is
also controlled by A_2 (A_1). They model two different, but
obviously related, nonlinear effects. One is cross-modula-
tion, a nonlinear effect in which amplitude modulation of
one RF carrier is converted into amplitude modulation of
the other; the other is known as desensitization, the loss of
receiver sensitivity to one signal when in presence of an
incoming strong perturbation (e.g., a jammer).
The terms at 2ω_1 − ω_2 and 2ω_2 − ω_1 are spectral re-
growth components that appear as sidebands located side
by side to the fundamentals at a distance equal to their
frequency separation ω_2 − ω_1. These in-band intermodula-
tion distortion (IMD) sidebands rise at a constant slope of
3 dB per dB of input level rise, until higher-order compo-
nents (in the case of our polynomial model, output con-
tributions due to higher-degree terms) show up. Since
first-order components rise at a slope of only 1 dB per dB,
we could conceive of an extrapolated (never reached in
practice) output power where the output IMD and funda-
mentals would take the same value. As illustrated in
Fig. 7, this is the so-called third-order intercept point IP_3.
Although meaningful only for small-signal regimes,
where the fundamental and IMD components follow their
idealized straight-line characteristics, IP_3 is still the most
widely used (sometimes erroneously) intermodulation
distortion figure of merit.

Figure 7. Typical fundamental and third-order intermodulation power versus input power plots. Note the definition of the extrapolated third-order intercept point IP_3.
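For quick estimates, the extrapolation implied by Fig. 7 can be carried out numerically. The short sketch below applies the 1 dB/dB and 3 dB/dB slopes described above to one small-signal two-tone measurement; the measurement values in the example are assumptions.

```python
def third_order_intercept(p_fund_dbm, p_imd_dbm):
    """Extrapolated output IP3 from one small-signal two-tone measurement.

    Uses the 1 dB/dB and 3 dB/dB slopes of the fundamental and IMD lines:
    OIP3 = P_fund + (P_fund - P_imd) / 2, with all quantities in dBm."""
    return p_fund_dbm + (p_fund_dbm - p_imd_dbm) / 2.0

# Example: fundamentals at 0 dBm with IMD sidebands at -40 dBm
print(third_order_intercept(0.0, -40.0))   # -> 20.0 dBm
```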
Figure 8 shows a block diagram of the most popular
laboratory setup used for two-tone intermodulation tests.
It relies on a two-tone generator of high-spectral purity,
and a high-dynamic range microwave spectrum analyzer.
Although, for many years, two-tone intermodulation
characterization was restricted to these amplitude
measurements, more recently we have seen an increasing
interest in also identifying the phase of the IMD compo-
nents. The reason for this can be traced to the efforts
devoted to extracting behavioral models capable of repre-
senting the device's IMD characteristics and to the design
of amplifier linearizers that must be effective even when
the main nonlinear device presents long-term memory
effects. In fact, since most linearizers can be understood as
auxiliary circuits capable of generating IMD components
that will cancel the ones arising from the main amplifier,
it is obvious that those linearizing circuits must be
designed to meet both IMD amplitude and phase require-
ments.
Unfortunately, the first problem that arises when try-
ing to measure the phase of the IMD components is that,
although phase is a relative entity, we have no phase
reference for IMD. Contrary to what happens with the
output fundamentals, whose phases we can refer to the
phases at the input (usually arbitrarily assumed zero), the
problem is that now there are no input components at the
IMD frequencies. So, we first need to create a reference
signal at that IMD frequency. That is usually done with a
reference nonlinearity; thus IMD phase measurement
results become relative to the reference nonlinearity
used in the setup. For example, in the setup depicted in
Fig. 9, the reference nonlinearity is based on the nonlinear
characteristic of broadband Schottky diodes, and the
phase value is acquired from the variable phase shift
necessary to balance the device under test (DUT) and
reference arms.
1.2.3. Multitone Distortion Characterization. For com-
pleteness, let us now briefly introduce intermodulation
characterization under multitone excitations. A detailed
analysis of this important and up-to-date subject can be
found in various references [e.g. 5,6].
First, we will assume that our stimulus can be de-
scribed as a sum of Q sinusoids of different frequencies:
$$ x(t) = \sum_{q=1}^{Q} A_q \cos(\omega_q t) = \frac{1}{2}\sum_{q=-Q}^{Q} A_q\, e^{j\omega_q t} \qquad (13) $$
The output of a general power series such as (4) to the
excitation of (13) will be
$$ y(t) = \sum_{n=1}^{N} y_n(t) \qquad (14a) $$

where each of the orders can be expressed as

$$ y_n(t) = \frac{1}{2^n} a_n \left[\sum_{q=-Q}^{Q} A_q e^{j\omega_q t}\right]^{n}
 = \frac{1}{2^n} a_n \sum_{q_1=-Q}^{Q} \cdots \sum_{q_n=-Q}^{Q} A_{q_1}\cdots A_{q_n}\, e^{j(\omega_{q_1}+\cdots+\omega_{q_n})t} \qquad (14b) $$

which contains various frequencies at ω_x = ω_{q1} + ... + ω_{qn},
originating from many different mixing products.

Since there is, in general, more than one mixing
product (that is, more than one combination of input
frequencies) falling at the same frequency, the calcula-
tion of their output amplitude requires that first we are
able to determine the number of those different combina-
tions. One systematic way to do this is to recognize that
their frequencies must obey [7]
$$ \omega_{n,m} = \omega_{q_1} + \cdots + \omega_{q_n} = m_{-Q}(-\omega_Q) + \cdots + m_{-1}(-\omega_1) + m_1\omega_1 + \cdots + m_Q\omega_Q \qquad (15) $$

Figure 8. The most popular laboratory setup used for two-tone intermodulation tests.

Figure 9. Possible IMD phase measurement setup based on a reference nonlinearity, a spectrum analyzer, and an IMD cancellation loop.
where

$$ \sum_{q=-Q}^{Q} m_q = m_{-Q} + \cdots + m_{-1} + m_1 + \cdots + m_Q = n \qquad (16) $$

defining the following mixing vector:

$$ \mathbf{v} = [\, m_{-Q}, \ldots, m_{-1}, m_1, \ldots, m_Q \,] \qquad (17) $$

Then, the number of different ways of generating the same
mixing vector is given by the multinomial coefficient [7]:

$$ t_{n,\mathbf{v}} = \frac{n!}{m_{-Q}!\cdots m_{-1}!\, m_1!\cdots m_Q!} \qquad (18) $$
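A short Python sketch of Eq. (18) follows; the example mixing vector is an assumption chosen to match the two-tone third-order product 2ω_1 − ω_2 of Eq. (12).

```python
from math import factorial

def mixing_products(n, m):
    """Number of different mixing products t_{n,v} of Eq. (18) for a mixing
    vector m = [m_-Q, ..., m_-1, m_1, ..., m_Q] whose entries sum to n (Eq. 16)."""
    assert sum(m) == n, "mixing vector must satisfy Eq. (16)"
    count = factorial(n)
    for mq in m:
        count //= factorial(mq)
    return count

# Third-order (n = 3) product 2*w1 - w2 of a two-tone signal (Q = 2):
# one contribution from -w2 and two from +w1, i.e., v = [0, 1, 2, 0]
print(mixing_products(3, [0, 1, 2, 0]))   # -> 3, the orderings of {+w1, +w1, -w2}
```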
These Q-tone distortion components allow a generaliza-
tion of the two-tone signal-to-intermodulation distortion
ratio [IMR; sometimes also known as carrier-to-IMD ratio
(C/I)] to various multitone distortion figures of merit. One
of these is defined as the ratio between the constant-
amplitude output fundamental signals and the highest
sideband IMD component (M-IMR).
Another measure is the ratio between integrated fun-
damental output power and integrated upper or lower
sideband distortion. As this sideband spectral regrowth
falls exactly over the location of a potentially present
adjacent channel, it is called the adjacent-channel power
ratio (ACPR).
Finally, a measure of the ratio of the fundamentals to
the signal-uncorrelated distortion components that fall
exactly among the fundamental components is given by
the so-called noise power ratio (NPR). The reason for this
denomination comes from the fact that, although that
figure of merit is being introduced in this text for a
multitone excitation, it was traditionally measured with
a bandlimited white-noise stimulus, a generalized multi-
tone excitation with an infinite number of tones.
Although all these figures are measures of nonlinear
effects that share a common physical origin, it has
not been easy to relate them, except for very particular
situations. First, we [5] presented relations between
various multitone distortion figures and IMR, obtained
for a third-degree polynomial memoryless model. Then,
Boulejfen et al. [8] extended those results to a fifth-degree
polynomial. As a summary of these results, Fig. 10 pre-
sents the ratio of IMR to the above-defined multitone
distortion figures versus the number of tones Q for a
memoryless cubic polynomial.
A laboratory setup for multitone distortion tests is
similar to the one already shown for two-tone tests, except,
obviously, with respect to the signal generator [6]. How-
ever, since an NPR test focuses on the distortion that falls
exactly over the output fundamentals, something must be
done to separate the desired distortion components from
the much higher fundamental signals. The usual way to
solve that problem consists of creating a very narrow
measurement window within the input signal bandwidth.
This is accomplished by either shutting down a few input
tones (when a multitone signal generator is used) or
introducing a notch filter between the bandlimited
white-noise generator and the nonlinear device under
test [6].
2. CAD TOOLS FOR INTERMODULATION
DISTORTION PREDICTION
Because intermodulation distortion is a nonlinear effect,
any attempt to predict its behavior by hand for even the
most simple practical circuits or devices becomes extre-
mely difficult, if not impossible. So, intermodulation dis-
tortion prediction relies heavily on good device models and
appropriate computer simulation algorithms. Unfortu-
nately, these subjects are so vast that we have to restrict
this text to a first guiding overview. So, we will concen-
trate our discussion on a set of criteria for model quality
(for this specific purpose) and give some hints concerning
usual simulation tools.
2.1. Nonlinear Device Modeling for Distortion Analysis:
General Considerations
Starting with nonlinear device models, we can divide them
according to two criteria: (1) physical versus empirical mod-
els and (2) global versus local models.
Figure 10. Ratio of two-tone IMR to NPR, M-IMR, and ACPR versus the number of tones Q for a memoryless cubic polynomial.

Physical models are mathematical descriptions of the
internal device operation that are drawn from the knowl-
edge of the device's geometric and physical structure, and
from the application of a certain set of basic physics laws.
Although relying on extremely complex formulations that
require an enormous number of parameters and are
computationally expensive to evaluate, they can provide
much better accuracy than the empirical models as they
necessarily mimic the basic device operation. On the other
hand, empirical models do not require any information
about the internal structure of the device, relying com-
pletely on input/output behavioral observations. Hence,
they are also known as blackbox models or behavioral
models.
Typical examples of physical models are the Schottky
diode equation and the device models described by a set of
coupled partial-differential equations of electric potential,
charge, and charge carrier mobility. Examples of purely
behavioral models are the linear scattering matrix, table-
based device models, or even the abovementioned
AM-AM/AM-PM models.
Local models can be distinguished from global models
by their approximation range. Because they are beha-
vioral in nature, they constitute two different compro-
mises between the domain of fitting and the level of
accuracy. While local models are very good in representing
mild nonlinear behavior in the vicinity of some quiescent
point, global models are conceived as valid for any possible
operation regime, but at the expense of an increased error.
It is therefore natural that they are also known as small-
signal or large-signal models, respectively. The Gummel-
Poon model of BJTs, the quadratic model of FETs, or an
AM-AM/AM-PM representation are examples of global
models, while the poor extrapolation capability usually
associated with polynomial approximators tends to grant
them a distinct local behavior. For example, the cubic
polynomial that could be extracted from the third-order
intercept point is necessarily a local model valid only for
small-signal excitation levels.
The polynomial example given above is not accidental,
as the polynomial plays a fundamental role in all nonlinear
distortion analysis. In fact, it is for that very reason that we
began this article using exactly the same polynomial of
expression (4). As we then concluded, if the model is a
polynomial, we have a direct and easy way to calculate the
various intermodulation products for any signal that can
be described as a sum of sinusoids. Furthermore, by simply
selecting its coefficients, we can tailor the polynomial for
very different approximation goals. To understand that,
let us use an illustrative example. Figures 11 and 12
depict the approximation of one typical transfer function
characteristic by two different polynomials: a Taylor series
and a Chebyshev polynomial series, both of 10th degree.
As seen from Figs. 11a and 11b, the coefficients of the
Chebyshev polynomial were selected so that the polyno-
mial could produce an optimum approximation to the
response of our original nonlinearity to a sinusoid of
1.5 V peak amplitude, centered at a quiescent point of
0 V (something close to what is found in typical class AB
power amplification regimes).
On the other hand, the coefficients of the Taylor series
were taken as the appropriately scaled derivatives of the
nonlinearity at the same quiescent voltage of 0 V. It
constitutes, therefore, the optimum polynomial approxi-
mation to the nonlinear response to any signal of infini-
tesimal input amplitude (see Fig. 11a).
When excited by sinusoids of variable amplitude, the
output DC component (Fig. 12a), the fundamental compo-
nent (Fig. 12b), and the second- and third-harmonic dis-
tortion components (Figs. 12c and 12d, respectively)
reveal that these two polynomial approximators present,
indeed, very distinct properties.
Figure 11. (a) Nonlinear memoryless transfer function f(x) (- -), its 10th-order Taylor series approximation around x = 0 V (...), and the Chebyshev polynomial optimized for a sinusoidal input amplitude of A = 1.5 V (solid); (b) time-domain waveform of the output of the transfer function f[x(t)] (- -), of its 10th-order Taylor series approximation (...), and of the Chebyshev polynomial (solid), when excited by a CW input of amplitude A = 1.5 V.

The Taylor series is clearly a local approximator that
produces optimum results in the vicinity of the quiescent
point, but then suffers from a catastrophic degradation
when the excitation exceeds approximately 0.3 V of
amplitude. On the
contrary, the Chebyshev series is worse at those small-
signal levels, but performs much better up to excitation
amplitudes of 1.5 V. It behaves, therefore, as a global
approximator. (In fact, the Chebyshev series is still a local
approximator whose domain is no longer defined around
the fixed quiescent point of 0 V, but around a new general-
ized dynamic quiescent point imposed by an input sinu-
soid of 1.5 V amplitude.) Like any other mean-square-error
approximator, the Chebyshev polynomial wanders around
the original nonlinearity (see Figs. 11a and 11b), obviously
failing to follow the higher-order derivatives of the function.
This is why, contrary to the Taylor series, which, by con-
struction, osculates these derivatives, the Chebyshev series
does not show good small-signal distortion behavior.
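The contrast can be reproduced with a short sketch. Here tanh(x) is used as a stand-in for the transfer function of Fig. 11 (the article's exact curve is not reproduced), the "Taylor" model uses the well-known series of tanh around 0, and the Chebyshev model is a least-squares fit over the ±1.5 V drive range; NumPy is assumed.

```python
import numpy as np
from numpy.polynomial import polynomial as P, chebyshev as C

f = np.tanh                      # stand-in for the transfer function of Fig. 11

# Taylor (local) model: the series of tanh around x = 0, up to x^9
c_taylor = [0, 1, 0, -1/3, 0, 2/15, 0, -17/315, 0, 62/2835]

# Chebyshev (global) model: mean-square fit over the +/-1.5 V drive range
x = np.linspace(-1.5, 1.5, 2001)
c_cheb = C.chebfit(x, f(x), 10)

def h3(g, amp, n=4096):
    """Third-harmonic amplitude of g's response to amp*cos(wt)."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    return 2.0 * np.abs(np.fft.rfft(g(amp * np.cos(2*np.pi*t))))[3] / n

for amp in (0.1, 1.5):
    print(f"A = {amp:3.1f} V: true {h3(f, amp):.3e}, "
          f"Taylor {h3(lambda v: P.polyval(v, c_taylor), amp):.3e}, "
          f"Chebyshev {h3(lambda v: C.chebval(v, c_cheb), amp):.3e}")
```

The printout shows the Taylor model tracking the true third harmonic at small drive and the Chebyshev fit tracking it at large drive, mirroring Figs. 12c and 12d.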
Figure 12. (a) DC component, (b) fundamental component, (c) second-harmonic component, and (d) third-harmonic component of the output of the transfer function f[x(t)] (- -), of its 10th-order Taylor series approximation (...), and of the Chebyshev polynomial (solid), when excited by a CW input of amplitude 0.015 V < A < 1.95 V.

What we have just seen in this example is common to
almost all empirical models and results in an important
message as far as intermodulation distortion calculations
are concerned. Simultaneously reproducing the device's
mild nonlinear details (local characteristics) and the gen-
eral trends (global properties) is so difficult that we should
never trust an empirical model unless we have guarantees
that it was specifically tested for nonlinear distortion.
2.2. Nonlinear Models for Distortion Analysis at the
Circuit Level
To perform intermodulation analysis at the circuit level,
that is, to compute the distortion arising from a certain
electronic circuit subject to a specific bandpass RF input
signal stimulus, the device must be represented by some
equivalent-circuit model [6]. This is the normal modeling
requirement for using either time-marching algorithms,
like the ones used by SPICE, or frequency-domain simu-
lators, such as the harmonic-balance solvers. Such equiva-
lent circuits have topologies and parameter sets usually
supported by both physical and empirical data.
Linear, or bias-independent, elements are usually ex-
tracted from a broadband small-signal AC characteriza-
tion. Nonlinear elements can be either voltage-controlled
current sources (nonlinear device currents) i(v), voltage-
controlled electric charge sources (nonlinear capacitances)
q(v), or current-controlled magnetic flux sources (non-
linear inductances) φ(i). Each of these is assumed to be
described by a static, or memoryless, function of its
controlling variable(s), which can, again, be supported
by either physical device knowledge or by empirical obser-
vations. Mostly in this latter case, it is the selection of
these functions that determines the quality of the model
for nonlinear distortion predictions. A small mean-square
error between measured and modeled data in the whole
range of device operation guarantees good global proper-
ties, but says nothing about local properties. To be able to
also provide good predictability under small-signal re-
gimes, the model must osculate at least the first three
derivatives of the actual device function, which requires
special model extraction procedures.
Although those derivatives can be obtained from suc-
cessive differentiation of measured i(v), q(v), or φ(i) data,
this is not recommended for at least two important
reasons: (1) since most of the microwave transistors
show low-frequency dispersion effects, differentiating DC
data may not lead to the real AC behavior; and (2) the
aggravation of measurement noise produced by numerical
differentiation. If we rely on averages (data integration) to
reduce random measurement errors, it is natural to expect
an aggravation of those errors if we go backward, that is,
numerically differentiating measurement data. So, the
best way to obtain these device derivatives is to measure
entities that directly depend on them; and one good
example of those entities is exactly the harmonic or
intermodulation distortion produced by the device under
a CW or a two-tone excitation.
As an example, the laboratory setup depicted in Fig. 13
uses exactly this principle to acquire the nine coefficients
of the Taylor series expansion of the drain-source current
of a FET:
$$ \begin{aligned} i_{ds}(v_{gs}, v_{ds}) = {} & G_m v_{gs} + G_{ds} v_{ds} + G_{m2} v_{gs}^{2} + G_{md} v_{gs} v_{ds} + G_{d2} v_{ds}^{2} \\ & + G_{m3} v_{gs}^{3} + G_{m2d} v_{gs}^{2} v_{ds} + G_{md2} v_{gs} v_{ds}^{2} + G_{d3} v_{ds}^{3} \end{aligned} \qquad (19) $$
Exciting the FET at the gate side with a sinusoid of
frequency ω_1 and at the drain side with a sinusoid of
frequency ω_2 allows the extraction of G_m from the output
current component at ω_1, G_ds from the component at ω_2,
G_m2 from the component at 2ω_1, G_md from the component
at ω_1 + ω_2, G_d2 from the component at 2ω_2, and so on.
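A minimal sketch of Eq. (19) as a computable model is given below; the coefficient values are purely hypothetical placeholders, not measured data.

```python
def i_ds(vgs, vds, G):
    """Incremental drain-source current of Eq. (19) around the bias point.

    G is a dict of the nine Taylor coefficients; vgs and vds are the
    incremental gate and drain voltages (illustrative values only)."""
    return (G["Gm"]*vgs + G["Gds"]*vds
            + G["Gm2"]*vgs**2 + G["Gmd"]*vgs*vds + G["Gd2"]*vds**2
            + G["Gm3"]*vgs**3 + G["Gm2d"]*vgs**2*vds
            + G["Gmd2"]*vgs*vds**2 + G["Gd3"]*vds**3)

# Hypothetical coefficient set (units of mS, mS/V, and mS/V^2)
G = dict(Gm=50.0, Gds=2.0, Gm2=20.0, Gmd=1.0, Gd2=0.2,
         Gm3=-40.0, Gm2d=-2.0, Gmd2=0.1, Gd3=-0.05)
print(i_ds(0.1, 0.2, G), "mA")   # current for vgs = 0.1 V, vds = 0.2 V
```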
Unfortunately, the actual procedure is not that simple.
Although the unilateral properties presented by micro-
wave FETs at low frequencies guarantee that v_gs will have
only the ω_1 component, the requirement that the device is
terminated at the drain side by a nonnull impedance
determines that v_ds will have components at ω_1, at ω_2,
and at all their mixing products. This impedes the ortho-
gonal (or one-to-one) extraction just explained, demanding
the solution of a 2×2 linear system for G_m and G_ds; a 3×3
linear system for G_m2, G_md, and G_d2; and a 4×4 linear
system for extracting G_m3, G_m2d, G_md2, and G_d3 [9]. Since
the concept supporting this setup is general, it can be
extended to other nonlinear current sources present in
any nonlinear device equivalent-circuit model, or even to
charge sources [10].
Figure 13. Laboratory setup used to extract the Taylor series coefficients of a bidimensional nonlinearity such as the i_DS(v_GS, v_DS) of a FET.

As an illustrative example, Fig. 14 shows all nine
coefficients of (19) extracted with the setup of Fig. 13,
from a medium-power microwave GaAs MESFET biased
in the saturation region.
2.3. Nonlinear Models for Distortion Analysis at the
System Level
Although system simulation for the modulated bandpass
RF signals has already taken the first steps, system
simulation at the complex envelope level is, by far, the
most usual way to assess distortion performance of entire
communication systems. It assumes that the amplitude/
phase-modulated RF signal of (5) can be given by
$$ x(t) = A(t)\cos[\omega_c t + \theta(t)] = \mathrm{Re}\{A(t)e^{j\theta(t)}e^{j\omega_c t}\} = \mathrm{Re}\{\tilde{x}(t)e^{j\omega_c t}\} \qquad (20) $$

in which x̃(t) is the complex envelope (the lowpass equiva-
lent signal of x(t) [11]) and that we are interested only in
the system's in-band characteristics. Thus, the object of
the analysis ceases to be the real bandpass RF-modulated
signal and becomes only the complex lowpass envelope. In
this way, a significant improvement in simulation effi-
ciency is achieved because time-domain simulations no
longer need to be carried out with sampling rates imposed
by the RF carrier and its harmonics, but only by the much
slower envelope. So, the models required for these envel-
ope-level system simulators are lowpass complex equiva-
lent behavioral models of the original bandpass RF
components [11]. They are, therefore, single-input/single-
output maps, which may be either linear or nonlinear.
Linear maps are easily implemented as gain factors in
the memoryless case, or as finite or infinite impulse
response (FIR or IIR) digital filters [11,12], when in the
presence of dynamic elements.
presence of dynamic elements.
A linear dynamic complex envelope lter whose fre-
quency response function is
~
HHj ~ oo can be directly derived
from the corresponding circuit level lter Hjo by simply
going through the following bandpasslowpass transfor-
mation [11]
~
HHj ~ oo Hj ~ ooo
c
u ~ ooo
c
21
where u(o) is the unity step function.
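As an illustrative sketch of the bandpass-to-complex-envelope step behind Eq. (20) (assuming Python with NumPy and SciPy; the signal parameters are arbitrary), the envelope x̃(t) can be recovered from a real bandpass record via the analytic signal:

```python
import numpy as np
from scipy.signal import hilbert

fs, fc = 1e6, 100e3                     # sample rate and carrier (assumed values)
t = np.arange(0, 5e-3, 1/fs)
A = 1.0 + 0.5*np.cos(2*np.pi*1e3*t)     # slow amplitude modulation
theta = 0.3*np.sin(2*np.pi*2e3*t)       # slow phase modulation
x = A*np.cos(2*np.pi*fc*t + theta)      # real bandpass signal of Eq. (20)

# Complex envelope: analytic signal shifted down by the carrier
x_tilde = hilbert(x) * np.exp(-2j*np.pi*fc*t)

mid = slice(200, -200)                  # ignore Hilbert-transform edge effects
print(np.max(np.abs(np.abs(x_tilde)[mid] - A[mid])))       # |x~(t)| ~ A(t)
print(np.max(np.abs(np.angle(x_tilde)[mid] - theta[mid])))  # arg x~(t) ~ theta(t)
```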
Figure 14. Taylor series coefficients of the bidimensional voltage-controlled i_DS(v_GS, v_DS) current source of a GaAs MESFET for a constant V_DS in the saturation zone: (a) G_m, G_m2, and G_m3; (b) G_ds, G_md, and G_m2d; (c) G_d2, G_md2, and G_d3.
2.3.1. Memoryless AM-AM/AM-PM Models. In their
most basic form, nonlinear complex envelope models
simply try to describe the amplitude-dependent memory-
less nonlinear effects observed for the amplitude and
phase modulation content. They are the AM-AM/AM-PM
models discussed above, of which the quadrature Saleh
model of Fig. 15 is one of the most widely known [13].
When modeled as a polynomial nonlinearity, this AM-
AM/AM-PM model can include only odd-degree (2n+1)
terms involving n negative carrier frequencies plus (n+1)
positive ones [11]:

$$ \tilde{y}(t) = \sum_{n=0}^{(N-1)/2} \frac{p_{2n+1}}{2^{2n}} \binom{2n+1}{n+1}\, \tilde{x}(t)\,|\tilde{x}(t)|^{2n} \qquad (22) $$

where the binomial coefficient stands for the number of
different combinations of r elements taken from a popu-
lation of size m, and the p_{2n+1} are the polynomial
coefficients, now having real and imaginary parts.
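A minimal sketch of Eq. (22), using only assumed complex coefficients p_1 and p_3, shows the resulting AM-AM compression and AM-PM phase shift:

```python
import numpy as np
from math import comb

def envelope_polynomial(x_tilde, p_odd):
    """Apply the lowpass-equivalent polynomial of Eq. (22).

    p_odd = [p1, p3, p5, ...] are the (complex) odd-degree coefficients."""
    y = np.zeros_like(x_tilde, dtype=complex)
    for n, p in enumerate(p_odd):                  # term of degree 2n+1
        y += p / 2**(2*n) * comb(2*n + 1, n + 1) * x_tilde * np.abs(x_tilde)**(2*n)
    return y

# Hypothetical coefficients: a linear gain plus a compressive cubic term
p = [10.0, -2.0 + 0.5j]
amps = np.array([0.01, 0.1, 0.5, 1.0])
out = envelope_polynomial(amps.astype(complex), p)
print(np.abs(out) / amps)      # gain compresses as the drive grows (AM-AM)
print(np.angle(out))           # and the output phase shifts with drive (AM-PM)
```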
2.3.2. Dynamic AM-AM/AM-PM Models. As already
seen in the introduction, when the system presents mem-
ory not only to the RF signal (as indicated by the AM-PM
effect) but also to the slowly varying lowpass envelope, this
AM-AM/AM-PM model becomes unsatisfactory and a
true dynamic model is required. For example, one possi-
bility for such an extension could be to make the in-phase
A_I(.) and quadrature A_Q(.) static nonlinear functions
dependent not on the amplitude envelope but on some
dynamic version of it. In this way, the AM-AM and
AM-PM conversions would no longer be instantaneous
functions of A(t), but, as shown in Fig. 16, become instan-
taneous functions of an auxiliary dynamic variable z̃(t),
and thus dynamically varying with A(t).
2.3.3. Memoryless Nonlinearity: Linear Filter Cascade
Models. Beyond the methods described above, several
other approximated topologies have been tried for build-
ing nonlinear dynamic models [14]. Some of those, like the
two- or three-box models shown in Fig. 17, deserve men-
tion because of their practical relevance. In fact, they
somehow mimic the internal structure of typical RF
devices (such as microwave power amplifiers), which are
usually constituted by a broadband (memoryless) non-
linear active device sandwiched between two linear dy-
namic input and output matching networks.
As shown in Fig. 17, these two- or three-box nonlinear
dynamic models can be cascades of a linear filter followed
by the measured memoryless AM-AM/AM-PM nonlinear
model (known as the Wiener model), be cascades of this
AM-AM/AM-PM memoryless nonlinearity followed by a
linear filter (the Hammerstein model), or even be consti-
tuted by a combination of both (the Wiener-Hammerstein
model). Other parallel combinations of memoryless non-
linearities and linear filters also became popular when an
optimal extraction procedure was shown to be practically
possible [15].
2.3.4. General Nonlinear Dynamic Models. Unfortu-
nately, these three-box models become hopelessly inaccu-
rate when the dynamic effects presented to the envelope
are not due to the bandwidth limitations of the linear
matching networks, but are intrinsically mixed with the
nonlinearity [14]. That is the case, for example, with
wireless power amplifiers whose nonlinear dynamic ef-
fects cannot obviously arise from bandwidth limitations
(the RF signal can have relative bandwidths as narrow as
1% or 0.01%), but rather from the active device self-heating
or from reactive (to the envelope) bias paths. In such cases,
more general nonlinear dynamic models, such as the ones
briefly explained in the following paragraphs, must be attempted.
When the lowpass equivalent system is stable, contin-
uous, and of fading memory (i.e., its response cannot keep
memory from an infinitely remote past), mathematical
operator theory has shown that its response ỹ(t) to any
input x̃(t) can be approximated, within any desired error
margin, by
$$ \tilde{y}(s) = f_{NL}[\tilde{x}(s), \tilde{x}(s-1), \ldots, \tilde{x}(s-Q)] \qquad (23) $$

where f_NL(.) is a (Q+1)-to-one static nonlinear function, s
is the time instant at which the output is being calculated,
x̃(s−1), ..., x̃(s−Q) are delayed, or past, versions of the
input x̃(s), and Q is the system's finite memory span.

Figure 15. AM-AM and AM-PM memoryless lowpass equivalent behavioral model known as the Saleh quadrature model.

Figure 16. An AM-AM/AM-PM model in which the amplifier is modeled as a dynamic gain function of the envelope amplitude.

Figure 17. A three-box, or Wiener-Hammerstein, lowpass equivalent model.
Indeed, expression (23) simply states that the system
output at a certain instant can be calculated as the non-
linear combination of the input at that instant and all its
past versions within the memory span. There are basically
two ways of implementing this nonlinear and dynamic
input/output mapping, depending on whether f_NL(.) is
approximated by a (Q+1)-to-one polynomial or by a
neural network: polynomial filters [12] and artificial
neural networks (ANNs) [16].
In the first case, (23) becomes

$$ \tilde{y}(s) = \sum_{n=1}^{N} \tilde{y}_n(s) \qquad (24a) $$

where

$$ \tilde{y}_n(s) = \sum_{q_1=0}^{Q} \cdots \sum_{q_{2n+1}=0}^{Q} \tilde{h}_{2n+1}(q_1,\ldots,q_{2n+1})\, \tilde{x}(s-q_1)\cdots\tilde{x}(s-q_{n+1})\,\tilde{x}^*(s-q_{n+2})\cdots\tilde{x}^*(s-q_{2n+1}) \qquad (24b) $$
Such a dynamic polynomial formulation (also known as a
Volterra filter [12]) presents two important advantages:

1. Its various output components can be traced to a
particular coefficient or term. Therefore, it leads
to useful concepts such as nonlinear order and gives
insights into parameter extraction. In fact, this
immediately allows model implementations such as
the ones depicted in Figs. 18a and 18b for the first-
and third-order outputs, ỹ₁(s) and ỹ₃(s), respectively.
2. The second advantage, shared with all polynomial
approximators, is that the formulation is linear in
the parameters (although obviously nonlinear in the
inputs). Thus, it allows a direct model parameter
extraction based on the solution of a system of
simultaneous linear equations.
Unfortunately, it also presents an important disadvan-
tage. Like any other polynomial approximator, it is a local
model.
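The second advantage can be illustrated with a memory-polynomial sketch, a diagonal special case of the Volterra filter of Eq. (24b) (the toy device, the signal, and the chosen orders below are assumptions): because the formulation is linear in its parameters, a single least-squares solve extracts them.

```python
import numpy as np

def regressors(x, Q=3, orders=(1, 3, 5)):
    """Regressor matrix of a memory polynomial, a diagonal special case of
    Eq. (24b): columns of the form x(s-q)*|x(s-q)|**(k-1)."""
    cols = []
    for q in range(Q + 1):
        xq = np.concatenate([np.zeros(q, dtype=complex), x[:len(x) - q]])
        for k in orders:
            cols.append(xq * np.abs(xq) ** (k - 1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) / np.sqrt(2)

# Toy "device": linear term, a compressive cubic, and one envelope memory tap
y = 10*x - (2 - 0.3j)*x*np.abs(x)**2 + 1.5*np.concatenate([[0], x[:-1]])

Phi = regressors(x)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # linear-in-parameters extraction
print(np.linalg.norm(y - Phi @ coef) / np.linalg.norm(y))   # ~0: model reproduces y
```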
It is mostly this drawback that justifies the alternative
ANN formulation. A single-hidden-layer ANN can be
expressed as [16]

$$ \tilde{u}_k(s) = \sum_{q=0}^{Q} w_k(q)\, \tilde{x}(s-q) + b_k \qquad (25a) $$

$$ \tilde{y}(s) = b_o + \sum_{k=1}^{K} w_o(k)\, f_s[\tilde{u}_k(s)] \qquad (25b) $$

in which the w_k(q) and w_o(k) are weighting factors and b_o
and b_k are bias values, constituting the model parameter
set. The f_s(.) are static single-input/single-output nonlinear
functions (the so-called activation functions) of sigmoid
shape. Because a sigmoid is an output-bounded function,
an ANN is well behaved for all inputs.
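A minimal forward-pass sketch of Eqs. (25a) and (25b) follows; for simplicity it uses a real-valued input sequence (complex envelopes are commonly handled through their I/Q components), tanh as the sigmoid, and random placeholder weights.

```python
import numpy as np

def ann_output(x, W, b, w_o, b_o):
    """Forward pass of the single-hidden-layer dynamic ANN of Eqs. (25a)-(25b).

    W[k, q], b[k]: hidden-layer weights/biases (25a); w_o[k], b_o: output
    layer (25b); tanh plays the role of the sigmoid activation f_s."""
    K, Qp1 = W.shape
    y = np.full(len(x), float(b_o))
    for s in range(len(x)):
        taps = np.array([x[s - q] if s - q >= 0 else 0.0 for q in range(Qp1)])
        u = W @ taps + b             # Eq. (25a): biased FIR filters
        y[s] += w_o @ np.tanh(u)     # Eq. (25b): weighted sum of activations
    return y

rng = np.random.default_rng(1)
K, Q = 4, 2                           # 4 hidden neurons, memory span of 2 samples
W = 0.5 * rng.standard_normal((K, Q + 1))
b = 0.1 * rng.standard_normal(K)
w_o, b_o = rng.standard_normal(K), 0.0
x = 0.8 * np.cos(2 * np.pi * 0.02 * np.arange(64))   # a slowly varying envelope
print(ann_output(x, W, b, w_o, b_o)[:5])
```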
Figure 18. Implementation examples of first- (a) and third- (b) order kernels of a general polynomial filter.
A direct implementation of a dynamic ANN is shown in
Fig. 19. However, recognizing that (25a) constitutes a
biased linear FIR filter, whose bias is b_k and impulse
response is w_k(.), this dynamic ANN can also be imple-
mented as a set of parallel branches of the Wiener type, as
depicted in Fig. 20 [14].

Unfortunately, since all terms of the ANN are similar,
there is no way to identify relations between the system's
output properties and any particular ANN terms.
Furthermore, as the model is now also nonlinear in the
w_k(q) and b_k parameters, the parameter extraction process
must rely on some form of optimization. This optimization
process, called ANN training, is known to give results
that are highly dependent on the input/output training
data. Moreover, there is no guarantee that the parameter
set found is unique or even optimum, which can constitute
a severe limitation to the model's predictability.
2.4. A Glimpse of Nonlinear Simulation Algorithms for
Distortion Prediction
In circuit-level simulators [17], the mathematical repre-
sentation of the circuit is built by substituting each
electronic element with its constitutive relation [e.g., a
linear resistor can be represented by Ohm's law, i = v/R; a
nonlinear resistor would be given by a voltage-controlled
current source, i(v); while a capacitor would be given by a
linear or nonlinear charge, q(v)] and then applying
Kirchhoff's current and voltage laws to the complete
circuit. This leads to a system of ordinary nonlinear
differential equations (ODEs) in time such as

$$ i[y(t)] + \frac{dq[y(t)]}{dt} = x(t) \qquad (26) $$

where x(t) and y(t) stand for the time-domain waveforms of
the excitation and the state-variable vectors, respectively;
i[y(t)] represents memoryless linear or nonlinear ele-
ments, while q[y(t)] models memoryless linear or non-
linear charges (capacitors) or fluxes (inductors). The
objective of the simulation is to find the y(t) circuit
solution vector given a known x(t) input excitation.
On the other hand, system-level simulators are usually
implemented as either event-driven or envelope-driven
machines. In both cases the simulator treats the system in
the time domain, computing a set of time samples of the
information signal.
Event-driven machines operate at a very high logic
level, in which the information is simply a set of successive
logic states. They are, therefore, state ow simulators,
without enough subsystem description detail to allow
distortion calculations.
Envelope-driven simulators operate with the analogue
complex envelope. Hence, they do not handle the true
bandpass RF blocks but simply their complex lowpass
equivalents. Nevertheless, since these blocks are still
nonlinear dynamic blocks, the lowpass equivalent system
mathematical representation will again be an ordinary
differential equation similar to (26) with the only differ-
ence that now both the excitation vector x(t) and the state
variable vector y(t) are, in general, complex entities.
So, except for the type of signals handled, an ODEsuch as
(26) can be used to represent bandpass RF circuits, bandpass
RF systems, or even complex lowpass equivalent systems.
2.4.1. Time-Domain Techniques. The most intuitive way to solve (26) is to convert it into a difference equation

i[y(s)] + \frac{q[y(s)] - q[y(s-1)]}{T_s} = x(s) \qquad (27a)

or

i[y(s)]\,T_s + q[y(s)] = x(s)\,T_s + q[y(s-1)] \qquad (27b)

in which T_s is the sampling period, and then determine all time samples of y(t), y(s), starting from a known initial state y(0). Because we are integrating the nonlinear ODE in a set of discretized timesteps, this is known as timestep integration, and it constitutes the basic approach adopted in all time-domain circuit simulators (time-marching machines) such as SPICE, or system simulators like Simulink.¹
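As a concrete illustration of (27b), the sketch below integrates an ODE of the form (26) for a hypothetical one-node circuit with a cubic resistive nonlinearity i(y) = g1*y + g3*y^3 and a linear charge q(y) = C*y, solving the implicit update at each timestep with a few Newton iterations; all element values are arbitrary.

import numpy as np

# Hypothetical constitutive relations for a single-node circuit
g1, g3, C = 1e-3, 2e-4, 1e-12           # arbitrary element values
i_nl = lambda y: g1 * y + g3 * y ** 3   # memoryless nonlinear current i(y)
q_nl = lambda y: C * y                  # linear charge q(y)
dq_dy = lambda y: C

def timestep_integration(x, Ts, newton_iters=10):
    """Solve i[y(s)]*Ts + q[y(s)] = x(s)*Ts + q[y(s-1)] sample by sample (eq. 27b)."""
    y = np.zeros_like(x)
    for s in range(1, len(x)):
        rhs = x[s] * Ts + q_nl(y[s - 1])
        ys = y[s - 1]                                # initial guess: previous sample
        for _ in range(newton_iters):                # Newton-Raphson on F(ys) = 0
            F = i_nl(ys) * Ts + q_nl(ys) - rhs
            dF = (g1 + 3 * g3 * ys ** 2) * Ts + dq_dy(ys)
            ys -= F / dF
        y[s] = ys
    return y

# Two-tone excitation, as used throughout the distortion analyses
fs, f1, f2 = 1e9, 9.9e6, 10.1e6
t = np.arange(0, 20e-6, 1 / fs)
x = 1e-3 * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))
y = timestep_integration(x, 1 / fs)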
Figure 19. Implementation of a nonlinear dynamic artificial neural network.
Figure 20. Alternative implementation of the model of Fig. 19, in which the ANN is rebuilt as a parallel combination of several biased linear filter/memoryless nonlinearity branches.
¹ Simulink is a general-purpose system simulation package that is supported by the Matlab scientific computation software platform.
Although timestep integration is still the nonlinear analysis method of wider acceptance, it suffers from several disadvantages in the RF distortion circuit simulation field. First, since it was conceived to compute the circuit's transient response, while our interest normally resides in the steady state, it becomes quite inefficient, as it has to wait until all transients have vanished. Also, by operating in the time domain, it cannot handle linear elements having a frequency-domain description, such as dispersive distributed transmission media. Finally, even if that drawback is circumvented (e.g., by approximating these elements by lumped networks of reduced order), the necessity of operating in the time domain, while the input and resulting signals are usually handled in the frequency domain, would end up in all the difficulties associated with the discrete Fourier transform (DFT), namely, spectral leakage when transforming quasiperiodic multitone signals. Fortunately, some time-domain alternatives to the initial timestep integration method, like the shooting Newton method [17], can bypass the transient response, therefore obviating the waste of time needed to let it vanish.
Furthermore, time-domain methods benefit from two important advantages: (1) since they rely on the SPICE simulator engine, they are well known and available in many electronic design automation tools; and (2) as they use time as a natural continuation parameter [17], they are especially suitable for supporting strongly nonlinear regimes. Envelope-driven system-level simulators must handle the information envelopes, which are aperiodic by nature. So, timestep integration does not suffer from the inefficiency attributed to the calculation of the periodic steady-state response, becoming the obvious choice for solving (26).
2.4.2. Frequency-Domain Techniques. Frequency-domain techniques no longer seek a set of time samples of the circuit output or the state-variable waveforms but a spectral representation of them. In their most simple form, they assume that both the steady state of the excitation and the ODE solution are periodic in time, so that they can be expanded in a truncated DFT of (2K+1) frequency points. For example, the state-variable vector would be represented by

y(t) = \sum_{k=-K}^{K} Y(k\omega_0)\, e^{jk\omega_0 t} \qquad (28)

Since, in the frequency domain, time-domain derivatives are transformed into products by jω, substituting (28) into (26) leads to

I[Y(\omega)] + j\Omega\, Q[Y(\omega)] = X(\omega) \qquad (29)

where Ω is the diagonal matrix of the harmonic frequencies kω_0. This is a nonlinear algebraic function in the DFT coefficients Y(kω_0). The orthogonality between different frequency components provided by the DFT determines that, despite its appearance, this is not a single equation but can be expanded in a set of (2K+1) equations, each of which must be fulfilled for its harmonic component; in other words, the LHS and RHS (left- and right-hand side) components must be in equilibrium, which is why (29) is known as the harmonic-balance equation.
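The following sketch illustrates (29) for the same hypothetical one-node example used above, driven by a single tone: the steady state is represented by (2K+1) samples over one period, the time-domain derivative is applied spectrally through the diagonal matrix of harmonic frequencies, and the harmonic-balance residual is driven to zero with Newton iterations.

import numpy as np

g1, g3, C = 1e-3, 2e-4, 1e-12           # same hypothetical element values
w0 = 2 * np.pi * 10e6                   # fundamental frequency
K = 8                                   # harmonics retained
N = 2 * K + 1                           # time samples per period

# Spectral differentiation matrix D = F^-1 diag(j*k*w0) F (real when applied to real samples)
k = np.fft.fftfreq(N, d=1.0 / N)        # integer harmonic indices 0..K, -K..-1
F = np.fft.fft(np.eye(N))
D = np.real(np.linalg.inv(F) @ np.diag(1j * k * w0) @ F)

# One-tone excitation sampled over one period
theta = 2 * np.pi * np.arange(N) / N
x = 1e-3 * np.cos(theta)

# Newton iterations on the (time-sampled) harmonic-balance residual
y = np.zeros(N)
for _ in range(30):
    residual = g1 * y + g3 * y**3 + D @ (C * y) - x       # i(y) + d/dt q(y) - x
    J = np.diag(g1 + 3 * g3 * y**2) + C * D               # Jacobian of the residual
    y -= np.linalg.solve(J, residual)

Y = np.fft.fft(y) / N                   # harmonic components Y(k*w0) of the steady state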
Since this harmonic-balance (HB) technique computes the periodic steady state directly, it circumvents most of the disadvantages attributed to time-marching techniques. Its only drawbacks are that, depending on the DFT, it can handle only moderate nonlinear regimes, where the y(t) can be described by a relatively small number of harmonics, and that it requires both the excitation and the vector of state variables to be periodic. As we have already seen in Section 1, the excitations used for intermodulation distortion analysis are often of the two-tone or multitone type. In general, the frequencies of these tones do not constitute any harmonic set (they cannot be made harmonics of a common fundamental), and the corresponding waveform is aperiodic. (Such multitone signals are actually said to be quasiperiodic waveforms.) One way to circumvent this problem consists in imagining that a multitone time-domain waveform is evolving, not in the natural time t, but in a number of artificial timescales equal to the number of nonharmonically related tones, t_1, ..., t_Q. For example, for a two-tone regime, the ODE in time becomes a multirate partial-differential equation (MPDE) in t_1 and t_2:

i[y(t_1,t_2)] + \frac{\partial q[y(t_1,t_2)]}{\partial t_1} + \frac{\partial q[y(t_1,t_2)]}{\partial t_2} = x(t_1,t_2) \qquad (30)
Since y(t_1, t_2) is now double-periodic in t_1 and t_2, it admits a bidimensional Fourier expansion

y(t_1,t_2) = \sum_{k_1=-K}^{K} \sum_{k_2=-K}^{K} Y(k_1\omega_1, k_2\omega_2)\, e^{j(k_1\omega_1 t_1 + k_2\omega_2 t_2)} \qquad (31)

which, substituted in (30), results in a new bidimensional HB equation. This is the technique known as the multidimensional discrete Fourier transform harmonic balance (MDFT HB).
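A small numerical illustration of the artificial-timescale idea behind (30) and (31): a two-tone waveform that is only quasiperiodic in t becomes double-periodic on the (t_1, t_2) grid, where each axis needs just a few samples per period, and the physical waveform is recovered on the diagonal t_1 = t_2 = t (the tone frequencies below are arbitrary).

import numpy as np

f1, f2 = 1.0e9, 1.013e9                 # two closely spaced, nonharmonically related tones
T1, T2 = 1 / f1, 1 / f2
N1, N2 = 16, 16                         # samples per period on each artificial timescale

# Double-periodic representation x^(t1, t2) = cos(w1*t1) + cos(w2*t2)
t1 = np.arange(N1) * T1 / N1
t2 = np.arange(N2) * T2 / N2
X_grid = np.cos(2 * np.pi * f1 * t1)[:, None] + np.cos(2 * np.pi * f2 * t2)[None, :]

# Its 2-D DFT is sparse: only (k1, k2) = (+/-1, 0) and (0, +/-1) are nonzero,
# which is exactly what the bidimensional expansion (31) exploits.
X2 = np.fft.fft2(X_grid) / (N1 * N2)

# The physical, quasiperiodic waveform lives on the diagonal t1 = t2 = t
t = np.linspace(0, 200 * T1, 4001)
x_physical = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
# evaluating x^ at (t1, t2) = (t mod T1, t mod T2) reproduces x_physical
x_from_grid = np.cos(2 * np.pi * f1 * (t % T1)) + np.cos(2 * np.pi * f2 * (t % T2))
assert np.allclose(x_physical, x_from_grid)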
2.4.3. Time-Domain/Frequency-Domain Hybrid Techniques. When the excitation is an RF carrier of frequency ω_c, modulated by some independent baseband modulation signal, like the one expressed in (5), it can again be conceived as varying according to two independent timescales: one, t_1, with fast evolution, for the carrier; and another, t_2, slower, for the modulation. So, the circuit can again be described by a bidimensional MPDE such as (30). If we now recognize that this regime is periodic for the carrier but aperiodic for the modulation, we immediately conclude that simulation efficiency would be maximized if we treated the carrier evolution in t_1 in the frequency domain, but kept the baseband evolution in t_2 in time. This supposes a solution in which the vector of state variables is decomposed in a t_2 time-varying Fourier series

y(t_1,t_2) = \sum_{k=-K}^{K} Y(k\omega_c, t_2)\, e^{jk\omega_c t_1} \qquad (32)
which, substituted in (30), leads to

I[Y(k\omega_c,t_2)] + j\Omega_c\, Q[Y(k\omega_c,t_2)] + \frac{\partial Q[Y(k\omega_c,t_2)]}{\partial t_2} = X(k\omega_c,t_2) \qquad (33)

where Ω_c is the diagonal matrix of the carrier harmonic frequencies kω_c. Solving (33) for the envelope, with a timestep integration scheme, and for the carrier, with harmonic balance, leads to the following recursive HB equation:

\{ I[Y(k\omega_c,s)] + j\Omega_c\, Q[Y(k\omega_c,s)] \}\, T_s + Q[Y(k\omega_c,s)] = X(k\omega_c,s)\, T_s + Q[Y(k\omega_c,s-1)] \qquad (34)
By handling the RF signal components in the frequency domain and the envelope in the time domain, (34) is particularly appropriate to bridge the gap between circuit and envelope-driven system simulation. In fact, we can conceive of a simulator in which all except a few circuits of a communication system are treated as system-level complex equivalent lowpass behavioral input-output blocks (for maximized computational efficiency), while the remaining circuits are treated at the RF bandpass circuit level (for maximum accuracy).
2.4.4. Volterra Series. Although the Volterra series method is not very widely used outside the intermodulation prediction field, it plays a determinant role in the analysis and design of very-low-distortion circuits.
In comparison with the previously mentioned methods, the Volterra series no longer tries to find a solution in an iterative and numerical way, but seeks an analytic solution of a polynomial approximation of the original circuit or system. In fact, it assumes that if the nonlinearities of the original circuit or system can be decomposed in a Taylor series around a certain fixed quiescent point

i(y) = g_1 y + g_2 y^2 + g_3 y^3 \qquad (35)

q(y) = c_1 y + c_2 y^2 + c_3 y^3 \qquad (36)
then the solution can be approximated by the following functional series in the time domain:

y(t) = y_1(t) + y_2(t) + y_3(t)
     = \int_{-\infty}^{\infty} h_1(\tau)\, x(t-\tau)\, d\tau
     + \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} h_2(\tau_1,\tau_2)\, x(t-\tau_1)\, x(t-\tau_2)\, d\tau_1\, d\tau_2
     + \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} h_3(\tau_1,\tau_2,\tau_3)\, x(t-\tau_1)\, x(t-\tau_2)\, x(t-\tau_3)\, d\tau_1\, d\tau_2\, d\tau_3 \qquad (37)

If the excitation can be expressed as a frequency-domain sum of complex exponentials (possibly, but not necessarily, harmonically related sinusoids)

x(t) = \sum_{q=-Q}^{Q} X(\omega_q)\, e^{j\omega_q t} \qquad (38)
then we obtain a frequency-domain version of (37):

y(t) = \sum_{q=-Q}^{Q} H_1(\omega_q)\, X(\omega_q)\, e^{j\omega_q t}
     + \sum_{q_1=-Q}^{Q} \sum_{q_2=-Q}^{Q} H_2(\omega_{q_1},\omega_{q_2})\, X(\omega_{q_1})\, X(\omega_{q_2})\, e^{j(\omega_{q_1}+\omega_{q_2})t}
     + \sum_{q_1=-Q}^{Q} \sum_{q_2=-Q}^{Q} \sum_{q_3=-Q}^{Q} H_3(\omega_{q_1},\omega_{q_2},\omega_{q_3})\, X(\omega_{q_1})\, X(\omega_{q_2})\, X(\omega_{q_3})\, e^{j(\omega_{q_1}+\omega_{q_2}+\omega_{q_3})t} \qquad (39)
in which the h_n(τ_1, ..., τ_n) of (37) and the H_n(ω_1, ..., ω_n) of (39) are the nth-order impulse responses and the nth-order nonlinear transfer functions, respectively. Each of these sets can be obtained from the other by the direct application of an n-dimensional Fourier transform pair.
The Volterra series method consists in determining the set of h_n(.) or of H_n(.) (as occurs with conventional linear systems, the frequency-domain version is usually preferred), which then becomes a true nonlinear dynamic model of the system. In fact, note that if one knows all the H_n(ω_1, ..., ω_n) of a circuit or system, up to a certain order, one immediately knows its response up to that order [from (39)] to any multitone input represented by (38).
To show how these nonlinear transfer functions can be determined, let us consider again the general circuit or system described by the ODE of (26). Substituting (35), (36), and (39) into (26), and assuming that the input is now a first-order elementary excitation of

x(t) = e^{j\omega t} \qquad (40)
the orthogonality of the complex exponentials leads us to the conclusion that H_1(ω) must be given by

H_1(\omega) = \frac{1}{g_1 + j\omega c_1} \qquad (41)

In fact, this H_1(ω) is merely the usual transfer function of the linear circuit or system obtained from a linearization around the quiescent point.
To obtain the second-order nonlinear transfer function, we would now assume that the system is excited by a second-order elementary excitation of

x(t) = e^{j\omega_1 t} + e^{j\omega_2 t} \qquad (42)

Substituting (35), (36), (39), and (42) into (26), and collecting components at the second-order mixing product
ω_1 + ω_2 would lead to

H_2(\omega_1,\omega_2) = -\frac{g_2 + j(\omega_1+\omega_2)c_2}{g_1 + j(\omega_1+\omega_2)c_1}\, H_1(\omega_1)\, H_1(\omega_2) \qquad (43)
Similarly, the calculation of the third-order nonlinear transfer function assumes an input of

x(t) = e^{j\omega_1 t} + e^{j\omega_2 t} + e^{j\omega_3 t} \qquad (44)
and leads to

H_3(\omega_1,\omega_2,\omega_3) = -\frac{\tfrac{2}{3}\,[g_2 + j(\omega_1+\omega_2+\omega_3)c_2]\,[H_1(\omega_1)H_2(\omega_2,\omega_3) + H_1(\omega_2)H_2(\omega_1,\omega_3) + H_1(\omega_3)H_2(\omega_1,\omega_2)] + [g_3 + j(\omega_1+\omega_2+\omega_3)c_3]\,H_1(\omega_1)H_1(\omega_2)H_1(\omega_3)}{g_1 + j(\omega_1+\omega_2+\omega_3)c_1} \qquad (45)
The terms [g_2 + j(ω_1+ω_2)c_2] H_1(ω_1)H_1(ω_2) in (43), and the terms (2/3)[g_2 + j(ω_1+ω_2+ω_3)c_2][H_1(ω_1)H_2(ω_2,ω_3) + H_1(ω_2)H_2(ω_1,ω_3) + H_1(ω_3)H_2(ω_1,ω_2)] and [g_3 + j(ω_1+ω_2+ω_3)c_3] H_1(ω_1)H_1(ω_2)H_1(ω_3) in (45), are known as the elementary second-order and third-order nonlinear sources, respectively. In fact, comparing (43) and (45) with (41), we immediately conclude that, for the calculation of the first-, second-, and third-order solutions, what we have been doing is always to analyze the same linearized version of the original ODE with the appropriate elementary nonlinear sources: at ω for the first order, at ω_1 + ω_2 for the second order, and at ω_1 + ω_2 + ω_3 for the third order. That is why the Volterra series method is known to solve a forced nth-order nonlinear dynamic problem by solving n times the same linear problem, with the appropriate (1st, 2nd, ..., nth)-order forcing functions, in a recursive way. As it is based on a polynomial approximation of the nonlinearities, the Volterra series is a local model restricted to small-signal levels, or mildly nonlinear regimes. In practice, it can be used only for calculating the distortion in nonsaturated mixers, small-signal amplifiers, or nonsaturated power amplifiers, that is, well below the 1-dB compression point. However, because it is a closed-form nonlinear model, it provides qualitative information on the nonlinear circuit or system's operation, giving, for instance, insight into the physical origins of nonlinear distortion, and it can be directly used for circuit and system design.
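The closed-form recursion of (41), (43), and (45), as reconstructed above, is straightforward to evaluate numerically; the sketch below computes H_1, H_2, and H_3 for arbitrary g_1..g_3, c_1..c_3 coefficients and uses (39) to estimate the third-order intermodulation component at 2ω_1 - ω_2 produced by a two-tone excitation.

import numpy as np

# Arbitrary Taylor coefficients of i(y) and q(y) in (35) and (36)
g1, g2, g3 = 1e-3, 2e-4, 5e-5
c1, c2, c3 = 1e-12, 2e-13, 5e-14

def H1(w):
    return 1.0 / (g1 + 1j * w * c1)                                      # eq. (41)

def H2(w1, w2):
    w = w1 + w2
    return -(g2 + 1j * w * c2) / (g1 + 1j * w * c1) * H1(w1) * H1(w2)    # eq. (43)

def H3(w1, w2, w3):
    w = w1 + w2 + w3
    num = (2.0 / 3.0) * (g2 + 1j * w * c2) * (H1(w1) * H2(w2, w3)
                                              + H1(w2) * H2(w1, w3)
                                              + H1(w3) * H2(w1, w2)) \
          + (g3 + 1j * w * c3) * H1(w1) * H1(w2) * H1(w3)
    return -num / (g1 + 1j * w * c1)                                     # eq. (45)

# Third-order intermodulation at 2*w1 - w2 for a two-tone input with X(w1) = X(w2) = A/2
w1, w2 = 2 * np.pi * 0.99e9, 2 * np.pi * 1.01e9
A = 1e-3
Y_im3 = 3 * H3(w1, w1, -w2) * (A / 2) ** 2 * np.conj(A / 2)   # 3 orderings of (w1, w1, -w2) in (39)
Y_fund = H1(w1) * (A / 2)
print(20 * np.log10(abs(Y_fund) / abs(Y_im3)))                # signal-to-IM3 ratio in dB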
3. INTERMODULATION DISTORTION IN SMALL-SIGNAL AMPLIFIERS

First, let us clarify the meaning of "small-signal amplifiers," as most of us would expect no appreciable nonlinear distortion from these circuits. This term is used to distinguish low-noise or high-gain amplifiers from the essentially different power amplifiers, treated in the next section. While the small-signal amplifiers referred to here always supposedly operate in highly backed-off class A regimes, power amplifiers are operated close to saturation, usually even in strongly nonlinear modes such as class AB, B, or C.
So, now, one question comes to our minds: What are the mechanisms capable of causing significant nonlinear distortion in small-signal amplifiers? To advance an answer, let us consider the case of the low-noise amplifier of a wireless communication receiver front end. This is a circuit that faces, beyond the very weak desired channel, many other incoming channels present in the same communication system's band. For example, a low-noise front end of a handset can be simultaneously excited by the desired channel coming from a remote base station and by another channel coming from a nearby transmitting handset. Since the ratio of distances between our receiver and the base station, and our receiver and the perturbing transmitter, can easily be on the order of several kilometers to one meter, the ratio of incoming powers can be higher than 10^8 to 1. Therefore, the signal-to-perturbation ratio can be as poor as -70 or -80 dB, and, if it is true that a desired signal of, say, -70 dBm cannot generate any significant nonlinear distortion, that is no longer the case for the +10 dBm perturbation.
Indeed, as seen in Section 1, this high-level perturbation can produce nonlinear distortion sidebands falling over the desired channel, cross-modulation, and desensitization. These effects are illustrated in Fig. 21 and can constitute a severe limitation to the performance of RF front ends. In fact, they allow the definition of a very important signal fidelity figure of merit, the dynamic range, which is the ratio between the amplitudes of the highest and lowest incoming detectable signals that still guarantee the specified signal-to-noise ratio SNR. The lowest-amplitude detectable signal, the receiver's sensitivity S_i, is the one that stands the desired SNR over the noise floor. The highest-amplitude detectable signal, P_max, is defined as the one that generates a nonlinear distortion perturbation whose power equals the noise floor. So

DR = \frac{P_{max}}{S_i} \quad \text{or} \quad DR|_{dB} = P_{max}|_{dBm} - S_i|_{dBm} \qquad (46)
where DR is the dynamic range.
Since the signal excursions appearing at the nonlinear active device are always kept much smaller than the applied DC voltages and currents, the amplifier can be approximately described by a local model. For example, considering the ideal (low-frequency) situation in which the only nonlinear effects can be attributed to the i_ds(v_gs, v_ds) current of a FET, the amplifier would be described by the equivalent circuit depicted in Fig. 22, while the nonlinearity would be represented by the Taylor series of (19).
Although the model shown in Fig. 22 is very simplified, it will already give us some insight into the mechanisms controlling IMD generation in these small-signal amplifiers. For that, we will first redraw this circuit as the one of Fig. 23, over which a Volterra series analysis will then be performed.
Note that ports 1 and 2 in Fig. 23 handle the amplifier's input and output, respectively, but were defined after the terminal admittances Y_S and Y_L were incorporated into the main circuit. Port 3 serves to define v_gs, one of the control voltages of the i_ds nonlinearity, and port 4 serves to define v_ds, the other control voltage. Furthermore, since Fig. 23 is the linearized equivalent-circuit version of the original circuit of Fig. 22, its only i_ds(v_gs, v_ds) components are the first-order ones: G_m v_gs and G_ds v_ds. All the other nonlinear terms of (19) will behave as nonlinear sources that will be incorporated as independent current sources applied to port 4 [6,7].
Assuming that the equivalent Norton current excitation corresponds to a narrowband two-tone stimulus, this circuit can be represented by the following [Z] matrix and input and output boundary conditions:

\begin{bmatrix} V_1(\omega) \\ V_2(\omega) \\ V_3(\omega) \\ V_4(\omega) \end{bmatrix} =
\begin{bmatrix} Z_{11} & Z_{12} & Z_{13} & Z_{14} \\ Z_{21} & Z_{22} & Z_{23} & Z_{24} \\ Z_{31} & Z_{32} & Z_{33} & Z_{34} \\ Z_{41} & Z_{42} & Z_{43} & Z_{44} \end{bmatrix}
\begin{bmatrix} I_1(\omega) \\ I_2(\omega) \\ I_3(\omega) \\ I_4(\omega) \end{bmatrix} \qquad (47a)

I_1(\omega) = I_s(\omega): \quad i_s(t) = I_{s1}\cos(\omega_1 t) + I_{s2}\cos(\omega_2 t) \qquad (47b)

I_2(\omega) = 0 \qquad (47c)

I_3(\omega) = 0 \qquad (47d)
If the two tones are closely separated in frequency, the circuit reactances are similar for all in-band products. So, using ω_0 as the center frequency [ω_0 = (ω_1 + ω_2)/2], Z_ij(ω_1) ≈ Z_ij(ω_2) ≈ Z_ij(2ω_1 - ω_2) ≈ Z_ij(2ω_2 - ω_1) ≈ Z_ij(ω_0) for any i, j = 1, 2, 3, 4.
After determining the linear equivalent-circuit [Z] matrix, the nonlinear currents method of Volterra series analysis [6,7] proceeds by determining the first-order control voltages V_{3,1}(ω_0) and V_{4,1}(ω_0) (q = 1, 2) and the first-order output voltage V_{2,1}(ω_0) from the excitation I_S(ω_0):

V_{gs,1}(\omega_1), V_{gs,1}(\omega_2): \quad V_{3,1}(\omega_0) = Z_{31}(\omega_0)\,\frac{I_S(\omega_0)}{2} \qquad (48a)

V_{ds,1}(\omega_1), V_{ds,1}(\omega_2): \quad V_{4,1}(\omega_0) = Z_{41}(\omega_0)\,\frac{I_S(\omega_0)}{2} \qquad (48b)

V_{L,1}(\omega_1), V_{L,1}(\omega_2): \quad V_{2,1}(\omega_0) = Z_{21}(\omega_0)\,\frac{I_S(\omega_0)}{2} \qquad (49)
From these first-order control variables, the second-order nonlinear current of i_ds at the difference (ω_1 - ω_2 = Δω) and sum (ω_1 + ω_2 = Σω) frequencies,
Figure 21. Nonlinear distortion impairments in a mildly nonlinear receiver system: illustration of the concepts of receiver desensitization and dynamic range.
Figure 22. Model of a mildly nonlinear amplifier for small-signal distortion studies.
I_{4,2}(ω), should now be determined:

I_{4,2}(\Delta\omega) = \left[ 2G_{m2}\,|Z_{31}(\omega_0)|^2 + G_{md}\,Z_{31}(\omega_0)\,Z_{41}(\omega_0)^* + G_{md}\,Z_{31}(\omega_0)^*\,Z_{41}(\omega_0) + 2G_{d2}\,|Z_{41}(\omega_0)|^2 \right] \frac{|I_S|^2}{4} \qquad (50)

I_{4,2}(\Sigma\omega) = \left[ 2G_{m2}\,Z_{31}(\omega_0)^2 + G_{md}\,Z_{31}(\omega_0)\,Z_{41}(\omega_0) + G_{d2}\,Z_{41}(\omega_0)^2 \right] \frac{I_S^2}{4} \qquad (51)
Then, the linear circuit should be analyzed again for this new second-order current source, determining the second-order control voltages at the difference (ω_1 - ω_2 = Δω) and sum (ω_1 + ω_2 = Σω) frequencies, V_{3,2}(ω) and V_{4,2}(ω):

V_{3,2}(\Delta\omega) = Z_{34}(\Delta\omega)\, I_{4,2}(\Delta\omega) \qquad (52)

V_{3,2}(\Sigma\omega) = Z_{33}(\Sigma\omega)\, I_{3,2}(\Sigma\omega) + Z_{34}(\Sigma\omega)\, I_{4,2}(\Sigma\omega) \qquad (53)

V_{4,2}(\Delta\omega) = Z_{44}(\Delta\omega)\, I_{4,2}(\Delta\omega) \qquad (54)

V_{4,2}(\Sigma\omega) = Z_{43}(\Sigma\omega)\, I_{3,2}(\Sigma\omega) + Z_{44}(\Sigma\omega)\, I_{4,2}(\Sigma\omega) \qquad (55)
The last step consists in calculating the third-order nonlinear current of i_ds at 2ω_1 - ω_2, I_{4,3}(2ω_1 - ω_2), from the first- and second-order control voltages V_{3,1}(ω), V_{3,2}(ω), V_{4,1}(ω), and V_{4,2}(ω):
I_{4,3}(2\omega_1-\omega_2) = \left[ 2G_{m2}\,Z_{31}(\omega_0)^*\,Z_{34}(2\omega_0)\,I_{4,2}(2\omega_0) + G_{md}\,Z_{31}(\omega_0)^*\,Z_{44}(2\omega_0)\,I_{4,2}(2\omega_0) + G_{md}\,Z_{41}(\omega_0)^*\,Z_{34}(2\omega_0)\,I_{4,2}(2\omega_0) + 2G_{d2}\,Z_{41}(\omega_0)^*\,Z_{44}(2\omega_0)\,I_{4,2}(2\omega_0) \right] \frac{I_S^*}{2}
+ \left[ 2G_{m2}\,Z_{31}(\omega_0)\,Z_{34}(\Delta\omega)\,I_{4,2}(\Delta\omega) + G_{md}\,Z_{31}(\omega_0)\,Z_{44}(\Delta\omega)\,I_{4,2}(\Delta\omega) + G_{md}\,Z_{41}(\omega_0)\,Z_{34}(\Delta\omega)\,I_{4,2}(\Delta\omega) + 2G_{d2}\,Z_{41}(\omega_0)\,Z_{44}(\Delta\omega)\,I_{4,2}(\Delta\omega) \right] \frac{I_S}{2}
+ \left[ 3G_{m3}\,Z_{31}(\omega_0)\,|Z_{31}(\omega_0)|^2 + G_{m2d}\,Z_{31}(\omega_0)^2\,Z_{41}(\omega_0)^* + 2G_{m2d}\,|Z_{31}(\omega_0)|^2\,Z_{41}(\omega_0) + G_{md2}\,Z_{31}(\omega_0)^*\,Z_{41}(\omega_0)^2 + 2G_{md2}\,Z_{31}(\omega_0)\,|Z_{41}(\omega_0)|^2 + 3G_{d3}\,Z_{41}(\omega_0)\,|Z_{41}(\omega_0)|^2 \right] \frac{I_S\,|I_S|^2}{8} \qquad (56)
and then, finally, determine the third-order output voltage at the IMD frequency 2ω_1 - ω_2:

V_{2,3}(2\omega_1-\omega_2) = Z_{24}(\omega_0)\, I_{4,3}(2\omega_1-\omega_2) \qquad (57)
Now we are in a position to calculate the amplifier's signal-to-IMD ratio (IMR) by first determining the output power at the fundamental

P_L = \tfrac{1}{2}\, G_L(\omega_0)\,|V_L(\omega_0)|^2 = \tfrac{1}{2}\, G_L(\omega_0)\,|Z_{21}(\omega_0)|^2\,|I_S|^2 \qquad (58)

and at the IMD components

P_{L3} = 2\, G_L(\omega_0)\,|V_{2,3}(2\omega_1-\omega_2)|^2 \qquad (59)
where G_L(ω) is the real part of the load admittance Y_L(ω). A full expression for this IMR would be very complex. But, under the assumptions that internal feedback is negligible [Z_34(ω) ≈ 0] and that both second-harmonic and ω_1 + ω_2 distortion will be very small (usually verified in small-signal amplifiers), it can be approximated by
\mathrm{IMR} \approx 16 \left| \frac{A_v(\omega_0)}{Z_D(\omega_0)} \right|^2 \Big/ \left\{ \left| 3G_{m3} + G_{m2d}\,A_v(\omega_0)^* + 2G_{m2d}\,A_v(\omega_0) + G_{md2}\,A_v(\omega_0)^2 + 2G_{md2}\,|A_v(\omega_0)|^2 + 3G_{d3}\,A_v(\omega_0)\,|A_v(\omega_0)|^2 \right|^2 |V_S|^4 \right\} \qquad (60)
where Z_D(ω) = 1/[G_ds + Y_L(ω)], A_v(ω) is the intrinsic voltage gain defined by A_v(ω) = V_ds(ω)/V_gs(ω), and V_S is the voltage amplitude of the signal source, V_S(ω) = Z_S(ω) I_S(ω).
If we now study the variation of the various third-order current components with V_GS bias, as shown in Fig. 24 for a typical general-purpose small-signal MESFET, we conclude that there are two very good points of IMD performance, the so-called small-signal IMD sweet spots. The first one is located at the FET's threshold voltage [6] and thus has a very small associated power gain. The other is located in the high-V_GS region, and although it may not correspond to very good noise figures, it is definitely useful for designing high-gain, highly linear small-signal amplifiers. Unfortunately, this latter small-signal IMD sweet spot is a peculiarity of only some GaAs MESFETs, and was never observed in HEMTs, MOSFETs, or BJTs.
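The behavior summarized in Fig. 24 can be reproduced for any i_DS(v_GS) law: the Taylor coefficients of (19) follow from successive derivatives at each bias point, and a small-signal IMD sweet spot appears wherever G_m3 crosses zero. The sketch below does this numerically for a purely hypothetical logistic I-V characteristic (it is not the MESFET of Fig. 24) and simply lists the bias points where G_m3 changes sign.

import numpy as np

def taylor_gm_coeffs(i_dc, V0, dv=1e-3):
    """Numerically extract Gm, Gm2, Gm3 of (19) at bias V0 by central differences."""
    f = i_dc
    Gm  = (f(V0 + dv) - f(V0 - dv)) / (2 * dv)
    Gm2 = (f(V0 + dv) - 2 * f(V0) + f(V0 - dv)) / (2 * dv ** 2)
    Gm3 = (f(V0 + 2 * dv) - 2 * f(V0 + dv) + 2 * f(V0 - dv) - f(V0 - 2 * dv)) / (12 * dv ** 3)
    return Gm, Gm2, Gm3

# Hypothetical sigmoidal I-V characteristic standing in for a measured device
Imax, Vt, a = 0.1, -1.0, 4.0
i_dc = lambda v: Imax / (1.0 + np.exp(-a * (v - Vt)))

VGS = np.linspace(-2.0, 0.0, 401)
Gm3 = np.array([taylor_gm_coeffs(i_dc, v)[2] for v in VGS])

# Small-signal IMD sweet spots: sign changes of Gm3 along the bias sweep
sweet = VGS[:-1][np.sign(Gm3[:-1]) != np.sign(Gm3[1:])]
print(sweet)   # prints the bias points where Gm3 changes sign (two for this symmetric curve)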
Figure 23. Linearized equivalent-circuit model of a FET-based mildly nonlinear amplifier used in Volterra series calculations.
Turning now our attention to the IMR variation with source impedance, we can conclude that, since P_L can also be given by P_L = ½ G_L(ω_0)|A_v(ω_0)|²|V_S|², Eq. (60) confirms the empirical observation that, for constant output power, third-order distortion in FET-based small-signal amplifiers is almost independent of the input termination Z_S(ω).
As far as the IMR variation with load impedance is concerned, Fig. 24 and (60) indicate that, for typical V_GS bias, the nonlinear current contributions of G_m3 and G_m2d have effects on IMR that are important but, fortunately, opposite in sign. Since NI_Gm2d is proportional to the voltage gain, and thus to Z_L(ω), this implies that a maximization of voltage gain can also be beneficial to IMR. This hypothesis was indeed fully confirmed by measured and simulated IMR_3 load-pull data [9], showing that an optimum Z_L(ω) really exists and that it tends to coincide with the one that maximizes small-signal voltage and power gains in MESFET-based small-signal amplifiers [6,9].
Since BJTs and HBTs have mildly nonlinear characteristics that are completely different from those of FETs, these results cannot be directly extrapolated to bipolar-based small-signal amplifiers. For example, while the most important nonlinearity source of a FET, i_DS(v_GS, v_DS), is located at its output, in bipolars the nonlinearity is manifested in both the input, i_B(v_BE, v_CE), and the output, i_C(v_BE, v_CE) [6].
4. INTERMODULATION DISTORTION IN HIGH-POWER AMPLIFIERS

Let us now turn our attention to power amplifiers (PAs). Contrary to small-signal amplifiers, where noise figure and gain are of primary concern, power amplifiers are designed for high output power P_out and power-added efficiency (PAE).
In a well-designed PA, the maximum output power is determined by the loadline (load impedance termination) and the available output signal excursion. Power-added efficiency depends mostly on the PA operation class (quiescent point) and on a convenient output voltage and current waveform shaping, specifically, the selection of harmonic terminations. Therefore, it seems that little is left for optimizing intermodulation distortion. Fortunately, as we will see, that is not necessarily the case.
Since real devices do not present abrupt turnon points, it is difficult to precisely define the PA operation class. So, to prevent any ambiguity in the following discussion, we will first define classes A, AB, B, and C. Taking into account the discussion in Ref. 6, we will adopt the following definitions: (1) the turnon bias V_T is defined as the input quiescent point to which the turnon small-signal IMD sweet spot corresponds (see Fig. 24); (2) biasing the device below V_T corresponds to class C (G_m3 > 0); (3) biasing it exactly at V_T corresponds to class B (G_m3 = 0); and (4) biasing it above V_T will determine the usual class AB or class A (G_m3 < 0).
The first design step to be taken when designing a PA is to decide whether precedence should be given to PAE or to IMD specs, as they generally lead to opposite design solutions. The traditional PA design rules state that a PA optimized for IMD requires unsaturated class A operation; that is, the device should be biased and always kept comfortably inside the linear amplification zone (the saturation region of FETs and the active region of BJTs or HBTs).
On the other hand, a PA optimized for PAE is usually biased near class B or slightly into class C (that is, with a quiescent point where the output voltage is halfway between the knee voltage and breakdown, and the output current is close to turnon) and is then allowed to be driven into saturation. This leads to saturated classes such as classes E and F [18]. Unfortunately, as such operation classes achieve their high efficiencies by operating the active device in an almost switching mode, their associated nonlinear distortion is also huge. In fact, recognizing that a switching power amplifier turns any waveform into a constant-amplitude square wave, it is easy to conclude that those class E or F PAs cannot be used when the amplitude of the RF-modulated signal also carries information content (modulation formats of non-constant-amplitude envelope).
The basic goal when designing linear PAs is to get class B PAE with class A IMD, and, although this is seldom possible, there are some particular PA features that provide a means to escape from this apparent dead end. One that has been receiving a great deal of attention is the so-called large-signal IMD sweet spot [19]. Contrary to their small-signal counterparts studied above, which were associated with a particular quiescent point and found effective only at very small signal levels, these are peculiar points of the IMD versus input power characteristic (see Fig. 25) where only a few decibels of output-power backoff (and thus a few percent of efficiency degradation) can lead to astonishingly high levels of IMD reduction.
To understand how this curious effect takes place, we need to abandon our small-signal local model, since, for the signal levels where these large-signal IMD sweet spots are observed, the Taylor expansion of (19) presents an unacceptable error or simply may not converge. Instead, we are
Figure 24. Magnitude of the total i_ds(v_gs, v_ds) third-order current NI_d3 and of its various components due to G_m3 (NI_Gm3), G_m2d (NI_Gm2d), G_md2 (NI_Gmd2), and G_d3 (NI_Gd3), as a function of V_GS bias.
forced to rely on qualitative solutions of approximated global models. For that, we will transform the bidimensional dependence of (19) on the input and output control voltages, v_GS(t) and v_DS(t), into a one-dimensional model, generating in this way an equivalent single-input/single-output transfer function (TF), i_DS[v_GS(t)]. Beyond knowledge of the active-device nonlinear model i_DS[v_GS(t), v_DS(t)], this assumes an output boundary condition imposed by the load impedance Z_L(ω): V_ds(ω) = V_DC - Z_L(ω) I_ds(ω), where V_DC is the applied output quiescent voltage.
In order to describe, with enough generality, the global nonlinearities of the device, we will also consider that turnon can be represented by an exponential of the input voltage. This method is commonly adopted to represent subthreshold conduction in FETs, and it is even more faithful in describing i_C(v_BE) in common-base or common-emitter bipolar devices. Then, for increasing v_GS voltages, it is assumed that the FET passes through a nearly quadratic zone, which, because of short-channel effects, tends to become linear for even higher v_GS. This i_DS(v_GS) behavior was shown to be well reproduced by the following empirical expression [20]:
i_{DS}(v_{GS}) = \frac{\beta\,\mathrm{smt}(v_{GS})^2}{1 + \theta\,\mathrm{smt}(v_{GS})} \qquad (61a)

where smt(v_GS) is a smooth turnon function of v_GS given by

\mathrm{smt}(v_{GS}) = K_V \ln\!\left(1 + e^{v_{GS}/K_V}\right) \qquad (61b)

and β, θ, and K_V are empirical scaling parameters.
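A direct transcription of (61a) and (61b), with arbitrary β, θ, and K_V values chosen only for illustration (v_GS is measured here from the turnon voltage); note that smt(v_GS) tends to v_GS well above turnon and decays exponentially below it, which produces the exponential turnon followed by the quadratic-to-linear zone described in the text.

import numpy as np

beta, theta, K_V = 0.05, 0.5, 0.1      # arbitrary empirical scaling parameters

def smt(v_gs):
    """Smooth turnon function of (61b); v_gs is referred to the turnon voltage."""
    return K_V * np.log1p(np.exp(v_gs / K_V))

def i_DS(v_gs):
    """Equivalent single-input transfer function of (61a)."""
    s = smt(v_gs)
    return beta * s ** 2 / (1.0 + theta * s)

v = np.linspace(-1.0, 2.0, 7)
print(i_DS(v))   # exponential below turnon, ~quadratic just above it, ~linear well above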
If we now take the output boundary into account, i_DS(v_GS) will be almost unchanged unless v_GS is so high that R_L i_DS becomes close to V_DC. There, v_DS is so small that the FET enters the triode region (the saturation region for a bipolar-based PA), v_GS rapidly loses control over i_DS, and the TF saturates (the PA enters into strong compression). So, the global transfer characteristic i_DS(v_GS) presents a distinct sigmoid shape.
Assuming again a two-tone stimulus, several qualitative conclusions may be drawn for large-signal operation. One of the most important is that, when the amplifier is driven into harder and harder nonlinear regimes, its gain goes into compression, which means that the phase of the in-band nonlinear distortion components must oppose that of the fundamentals. So, the PA energy balance considerations derived in Section 1 show that the large-signal asymptotic phase of the IMD sidebands, at 2ω_1 - ω_2 and 2ω_2 - ω_1, must tend to a constant value of 180° [19]. On the other hand, we also know that the small-signal IMD phase is determined by the sign of the TF local derivatives, determined by the active device's quiescent point. As seen above, G_m3 is positive below V_T (class C operation), is null exactly at V_T (class B), and is negative above V_T (classes AB and A). So, since the small-signal IMD sign can be made positive, and tends to negative values in the large-signal asymptotic regime, the Bolzano-Weierstrass theorem guarantees the existence of at least one IMD null somewhere in between. This will be observed as a more or less pronounced notch in an IMD versus P_in plot, constituting a large-signal IMD sweet spot.
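This sign argument can be checked numerically. The sketch below drives the empirical characteristic of (61a)-(61b), completed with an abrupt current clipping that crudely stands in for the saturation-to-triode transition imposed by the loadline, with a two-tone signal of increasing amplitude, and extracts the fundamental and 2ω_1 - ω_2 components by FFT; depending on the quiescent point, the IM3 power can exhibit the more or less pronounced notches (large-signal sweet spots) discussed here. All device and bias values are arbitrary.

import numpy as np

beta, theta, K_V = 0.05, 0.5, 0.1           # same arbitrary model parameters as before
I_clip = 0.040                               # maximum current allowed by the loadline (arbitrary)

def i_DS(v_gs):
    s = K_V * np.log1p(np.exp(v_gs / K_V))          # smooth turnon, eq. (61b)
    i = beta * s ** 2 / (1.0 + theta * s)           # eq. (61a)
    return np.minimum(i, I_clip)                     # crude saturation-to-triode clipping

f1, f2 = 0.100, 0.105                        # normalized frequencies of two closely spaced tones
N = 4000
n = np.arange(N)
bin_of = lambda f: int(round(f * N))         # both tones fall on exact FFT bins (no leakage)

V_GS0 = 0.05                                 # quiescent point slightly above turnon (class AB-like)
results = []
for A in np.logspace(-3, 0, 61):             # swept two-tone amplitude
    v = V_GS0 + A * (np.cos(2 * np.pi * f1 * n) + np.cos(2 * np.pi * f2 * n))
    I = np.fft.rfft(i_DS(v)) / N
    fund = abs(I[bin_of(f1)])
    im3 = abs(I[bin_of(2 * f1 - f2)])
    results.append((A, 20 * np.log10(fund + 1e-30), 20 * np.log10(im3 + 1e-30)))
# plotting im3 versus A shows the IMD notches, whose number and position depend on V_GS0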
From these general conclusions it is clear that the existence of a large-signal IMD sweet spot depends on the small-signal IMD phase and on the physical effects that determine large-signal gain compression. So, each operation class will have its own particular IMD behavior.
Under class C (V_GS < V_T, G_m3 > 0), the PA presents gain expansion and a high IMD level with 0° phase. When the signal excursion reaches a strong nonlinearity such as gate-channel junction conduction, gate-channel breakdown, or, more likely, the saturation-to-triode region transition, the PA enters into compression and an IMD notch is observed. So, a large-signal IMD sweet spot should be expected for class C amplifiers when the signal excursion is at the onset of saturation, not far from the PA's 1-dB compression point. Although the PAE is not yet at its maximum, it may present an interesting value.
In class A (V_GS > V_T, G_m3 < 0), the PA starts at small signal with almost unnoticeable gain compression and a very small level of IMD with 180° phase. As this phase is maintained when the device enters strong compression, no IMD sweet spot will be generated. Thus, unless the PA is biased above the small-signal IMD sweet spot found for high V_GS bias in certain MESFETs [9], no large-signal IMD sweet spot will be observed. On the contrary, a sudden increase of IMD power at the onset of saturation is the usual outcome of class A PAs.
In class AB, where V_GS is only slightly higher than V_T and G_m3 < 0, the PA again shows a very shallow gain compression and a low-level IMD of 180° phase. Hence, similarly to what was concluded for class A operation, no IMD sweet spot should be expected. Nevertheless, depending on the abruptness of turnon and the succeeding linearization of the TF characteristic, it can be shown that a transition from 180° to 0° IMD phase can occur at lower values of output power [20,21], generating an IMD sweet spot. At this stage, the circuit begins to behave as a class C PA, with 0° IMD phase and gain expansion. Consequently, a new IMD sweet spot will have to occur at large signal, when gain compression finally takes place. In
Figure 25. Different IMD versus P_in patterns showing large-signal IMD sweet spots for a HEMT device at three different V_GS bias points.
summary, depending on the actual device's transfer characteristic and on the adopted quiescent point, class AB may be significantly different from class A in that it may even present two IMD sweet spots, one for small to moderate levels of P_in and another at the onset of saturation.
When the device is biased for class B (i.e., V_GS = V_T and G_m3 = 0), there is no small-signal IMD to be compensated by the large-signal distortion behavior. The PA presents very low levels of small-signal distortion (remember that it was biased at a small-signal IMD sweet spot) and then presents a sudden rise of distortion power at the onset of saturation. To illustrate the results of this analysis, Fig. 26 shows three IMR versus P_in patterns typically observed in MOSFET-based PAs biased for classes A, AB, and C, respectively.
To close this qualitative analysis, let us draw some conclusions about the dependence of large-signal IMD sweet spots on the impedance terminations. Starting with source impedance, it is intuitive to realize that, because the i_DS(v_GS, v_DS) nonlinearity is located at the output, the large-signal IMD behavior will be mostly invariant with Z_S(ω), as was the case studied earlier for small-signal amplifiers. Note that this conclusion may not be extrapolated to bipolar-based amplifiers, in which there is an input nonlinearity due to the base-emitter junction, and an output nonlinearity due to the active-to-saturation region transition [6,22].
As seen from the small-signal Volterra series analysis above, the dependence of i_DS on v_DS should also produce its own impact on the large-signal IMD sweet spot, via Z_L(ω). In fact, since these sweet spots were related to the output signal excursion that crosses the saturation-to-triode region transition, and as the loadline slope determines that signal level (see Fig. 27), it should be expected that the P_in for which the IMD sweet spot is observed will be strongly dependent on the load termination. This is illustrated in Figs. 27a and 27b, where a shift of the simulated IMD sweet-spot position is evident when the loadline slope 1/R_L is varied.
Furthermore, if the PA output is not perfectly matched, the intrinsic load impedance actually presented to the nonlinear current source may have a certain nonnull phase. The output-induced large-signal distortion components will no longer be in exact phase opposition with the small-signal ones, and the previously observed large-signal IMD sweet spots cease to be sharp dips of IMD power and become more or less smooth valleys.
Further conclusions can also be drawn about the impact of out-of-band terminations on the large-signal IMD sweet spots [6,22]. As was seen above from the small-signal analysis, the presence of even-order mixing products (which, as we have already seen, can be remixed with the fundamentals) will generate new odd-order products. But, contrary to the small-signal case, in which it was assumed that the quasilinear operation of the amplifier would determine a minor effect of these indirect odd-order products, that is no longer valid for a PA, and its analysis becomes again much more complex:
1. Efficiency considerations may have previously dictated a certain second-harmonic termination. Further, if in most usual situations we seek a squared output voltage waveform, that is, without even-order harmonics, there are situations (e.g., the
Figure 27. Impact of the PA loadline slope 1/R_L on large-signal IMD sweet spots.
Figure 26. IMR versus P_out plots of typical MOSFET-based PAs at the three operation classes studied: C, AB, and A.
so-called inverse class F [18,23]) in which those even harmonics are indeed maximized.
2. If, in small-signal amplifiers, there would be no difficulty in designing bias networks presenting a very low impedance to the modulation baseband (ω_2 - ω_1), in PAs that is again incomparably more difficult. Indeed, as output currents may be on the order of several amperes, any parasitic resistance or inductance may immediately develop a nonnegligible output voltage.
3. There will be additional contributing baseband reactances in PAs, from more or less unexpected physical origins. That is the case of trap-induced low-frequency dispersion presented by some microwave active devices [24], and dynamic self-heating, common to almost all PAs [25].
Depending on the phase of the out-of-band terminations, these new indirect odd-order products may have a phase that either reinforces or reduces the directly generated IMD. As far as the even-harmonics-induced products are concerned, since the modulation bandwidth (or the two-tone separation Δω) is usually much smaller than the PA bandwidth, it may be assumed that Z_L(2ω_1) ≈ Z_L(2ω_2) ≈ Z_L(2ω_0). So, no important IMD behavior variation within the bandwidth should be expected; that is, the indirect odd-order distortion products may reduce, reinforce, or be in quadrature with the direct ones, but their impact will be the same along the whole modulation bandwidth.
The situation regarding the baseband-induced products is completely different. Now, Z_L(Δω) may vary significantly within the modulation bandwidth, especially if the bias networks present resonances. Therefore, it is likely that the IMD power will vary within that bandwidth, and the amplifier will show (undesirable) long-term memory effects. Moreover, the complex conjugate symmetry of the load impedance requires that the imaginary part of Z_L(ω_2 - ω_1) have a sign opposite that of Z_L(ω_1 - ω_2). So, if some other odd-order products (e.g., the ones due to the presence of second harmonics) also have significant imaginary parts, their addition will even produce asymmetric IMD sidebands [22].
These strange IMD effects have received a lot of attention more recently, as their induced long-term memory immensely complicates the design of PA linearizers. Fortunately, since direct static IMD usually dominates this indirect dynamic distortion, those long-term memory effects are seldom noticed. They would be evident only if the direct static odd-order products were reduced. Unfortunately, IMD sweet spots are, by nature, exactly one of those situations, and so the selection of these out-of-band impedances should not be overlooked during the PA design and implementation phases.
5. INTERMODULATION DISTORTION IN MICROWAVE AND RF MIXERS

A mixer can be viewed as a special kind of amplifier in which the bias supply no longer provides a constant voltage or current, but one that varies in time: the local oscillator. In the same way that an amplifier is a device where the constant quiescent point is perturbed by a certain dynamic signal, a mixer is a similar device where the local-oscillator (LO) time-varying quiescent point is perturbed by a dynamic radiofrequency (RF) excitation. Assuming that the mixer is operated in an unsaturated mode, as is the case of most practical interest, the RF signal level is much smaller than the LO level, and the mixer can be analyzed, for the RF signal, as a mild nonlinearity. Thus, it admits a low-degree polynomial expansion in the vicinity of the time-varying LO quiescent point. That constitutes the standard large-signal/small-signal analysis of mixers [7,26].
Mixer distortion analysis can thus follow exactly the one already carried out for small-signal amplifiers, with the exception that now we must start by determining the strong nonlinear regime imposed by the LO and, eventually, some DC bias. The voltage and current waveforms calculated in this way constitute the time-varying quiescent point. Despite the sinusoidal form of the LO excitation, the device's strong nonlinearities will determine a periodic regime composed of the LO frequency ω_LO and its harmonics. So, referring to the illustrative case of the active FET mixer depicted in Fig. 28, the time-varying quiescent voltages that control the FET's i_DS(v_GS, v_DS) nonlinearity will be given by
v_{GS}(t) = \sum_{k=-K}^{K} V_{gs}(k\omega_{LO})\, e^{jk\omega_{LO} t} \qquad (62a)

v_{DS}(t) = \sum_{k=-K}^{K} V_{ds}(k\omega_{LO})\, e^{jk\omega_{LO} t} \qquad (62b)
Then, the nonlinearity must be approximated by a local
polynomial model. For instance, a Taylor series such as
Figure 28. Simplified schematic of the active FET mixer used in the mixer distortion analysis.
(19), in the vicinity of this time-varying LO quiescent point, [v_GS(t), v_DS(t)], where the small-signal component, i_ds(v_gs, v_ds), is determined by the small-signal RF excitation. Since the coefficients of such a Taylor series depend on the control voltages v_GS(t) and v_DS(t), they will also be time-variant:

i_{ds}(v_{gs},v_{ds}) = G_m(t)\,v_{gs}(t) + G_{ds}(t)\,v_{ds}(t) + G_{m2}(t)\,v_{gs}(t)^2 + G_{md}(t)\,v_{gs}(t)\,v_{ds}(t) + G_{d2}(t)\,v_{ds}(t)^2 + G_{m3}(t)\,v_{gs}(t)^3 + G_{m2d}(t)\,v_{gs}(t)^2\,v_{ds}(t) + G_{md2}(t)\,v_{gs}(t)\,v_{ds}(t)^2 + G_{d3}(t)\,v_{ds}(t)^3 \qquad (63)
As v_GS(t) and v_DS(t) are periodic, the coefficients of (63) are again periodic, obeying a Fourier expansion of the form

g(t) = \sum_{k=-K}^{K} G(k\omega_{LO})\, e^{jk\omega_{LO} t} \qquad (64)
Assuming that the RF signal is a two-tone signal

v_{RF}(t) = V_{RF}\cos(\omega_1 t) + V_{RF}\cos(\omega_2 t) \qquad (65)
(in which we consider, without any lack of generality, that ω_1 < ω_2 < ω_LO), the small-signal components of v_GS(t) and v_DS(t), namely v_gs(t) and v_ds(t), are again two-tone signals. Substituting (64) and (65) in (63) determines a small-signal current i_ds(t) whose components obey kω_LO + m_1 ω_RF1 + m_2 ω_RF2 (k is any integer and m_1, m_2 ∈ {-3, -2, -1, 0, 1, 2, 3}, with |m_1| + |m_2| ≤ 3) and thus include the input tone frequencies at ω_RF1,2 (k = 0); the intermediate frequencies (IF) at ω_IF1,2 = ω_LO - ω_RF1,2 (k = 1); their second and third harmonics at 2ω_IF1,2 = 2ω_LO - 2ω_RF1,2 (k = 2) and 3ω_IF1,2 = 3ω_LO - 3ω_RF1,2 (k = 3), respectively; and second- and third-order intermodulation products at ω_IF2D = ω_IF1 - ω_IF2 = ω_RF2 - ω_RF1 (k = 0) and ω_IF3 = 2ω_IF2 - ω_IF1 = ω_LO - (2ω_RF2 - ω_RF1) (k = 1), respectively.
One surprising conclusion that may be drawn from this analysis is that, contrary to an amplifier, in which a single Taylor coefficient determines both the nth-order harmonics and intermodulation products, in a mixer, for example, the baseband second-order products are determined by the DC component of the Fourier expansion of a coefficient, while the second harmonic is determined by its component at 2ω_LO. Similarly, it is the ω_LO Fourier component that determines the in-band third-order products, while the third harmonic is controlled by the Fourier component at 3ω_LO. Therefore, contrary to what happens in a memoryless amplifier, the behavior of the harmonics of a memoryless mixer may say nothing about the behavior of the corresponding in-band distortion products. A detailed analysis of the distortion arising in a mixer is quite laborious and requires a full small-signal/large-signal analysis using the conversion matrix formalism [6,7,26]. However, some qualitative insight can already be obtained if we consider the ideal situation of a unilateral gate mixer (total absence of feedback) where the input is tuned for ω_RF and ω_LO and the output is tuned for ω_IF. Then v_gs(t) will have only ω_RF components, while v_ds(t) will have only the resulting ω_IF components and its in-band distortion products ω_IF3.
In such an ideal case the FET's i_ds(t) current component at the IF fundamental frequencies will be given by

I_{ds}(\omega_{IF}) \approx G_{m,-1}\,V_{gs}(\omega_{IM}) + G_{m,1}\,V_{gs}(\omega_{RF}) + G_{ds,0}\,V_{ds}(\omega_{IF}) \qquad (66)
where G_{m,k} and G_{ds,k} stand for the kth-order harmonics of the Fourier expansions of G_m(t) and G_ds(t), as expressed by (64), and V_gs(ω) and V_ds(ω) represent the v_gs(t) and v_ds(t) components at ω. ω_IM is the so-called image frequency. Because it is symmetrically located near the RF components, taking ω_LO as the symmetry axis (since, in the present case, ω_RF = ω_LO - ω_IF, then ω_IM = ω_LO + ω_IF), it will also be converted to the IF output, thus constituting additive interference.
If now the third-order intermodulation product components of i_ds(t), I_ds(ω_IF3), were calculated, we would have

I_{ds3}(\omega_{IF3}) \approx G_{m3,1}\,V_{gs}^3(2\omega_{RF1}-\omega_{RF2}) \qquad (67)

in which V_gs^3(2ω_RF1 - ω_RF2) stands for the terms at ω_IF3 that result from the frequency-domain convolution V_gs(ω) * V_gs(ω) * V_gs(ω), or the time-domain product v_gs(t)^3. Expressions (66) and (67) show that a mixer designed for high linearity, namely, one in which conversion gain is maximized and IMD is minimized, requires a (V_GS, V_DS) bias point and an LO drive level that maximize the first-order Fourier component of the time-varying transconductance G_m(t) and minimize the first-order Fourier component of G_m3(t). Unfortunately, these are conflicting requirements, since maximizing G_{m,1} or G_{m,-1} means searching for a G_m(t) waveform of highest amplitude and odd symmetry, while reducing G_{m3,1} implies reducing the G_m3(t) swing and a G_m3(t) waveform of even symmetry, which, as we will see next, cannot be accomplished simultaneously. So, a compromise should be sought in terms of conversion gain and linearity optimization.
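This compromise can be visualized with a few lines of code: for an assumed i_DS(v_GS) law (here the same hypothetical logistic characteristic used earlier, not a real device model), pump the gate with the LO, evaluate the time-varying coefficients G_m(t) and G_m3(t) from the derivatives of the characteristic along the LO swing, and extract their Fourier components, with |G_{m,1}| acting as a proxy for conversion gain and |G_{m3,1}| for the in-band third-order distortion, in the spirit of (66) and (67).

import numpy as np

# Hypothetical logistic i_DS(v_GS) characteristic and its numerical derivatives
Imax, Vt, a = 0.1, -1.0, 4.0
f  = lambda v: Imax / (1.0 + np.exp(-a * (v - Vt)))
dv = 1e-3
gm  = lambda v: (f(v + dv) - f(v - dv)) / (2 * dv)                                        # G_m(v)
gm3 = lambda v: (f(v + 2*dv) - 2*f(v + dv) + 2*f(v - dv) - f(v - 2*dv)) / (12 * dv**3)    # G_m3(v)

N = 256
theta = 2 * np.pi * np.arange(N) / N
V_LO = 0.8                                        # arbitrary LO amplitude

for V_GS in np.linspace(-2.0, 0.0, 9):
    v_pump = V_GS + V_LO * np.cos(theta)          # time-varying LO quiescent point
    Gm_t, Gm3_t = gm(v_pump), gm3(v_pump)         # periodic coefficients, as in (63)-(64)
    Gm_1  = np.fft.fft(Gm_t)[1] / N               # first-harmonic Fourier components
    Gm3_1 = np.fft.fft(Gm3_t)[1] / N
    print(f"V_GS={V_GS:+.2f}  |Gm,1|={abs(Gm_1):.3e} S  |Gm3,1|={abs(Gm3_1):.3e} S/V^2")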
To illustrate this simplified analysis, Fig. 29a shows three G_m(t) waveforms for three distinct V_GS bias points, Fig. 29b shows the corresponding G_m3(t) waveforms, and, finally, Fig. 29c shows the resulting conversion gain and IMD ratio I_ds(ω_IF)/I_ds(ω_IF3) for the whole range of V_GS bias.
As stated above, there is indeed a compromise between linearity and conversion gain. Although very high IMR values can be obtained for particular bias points, none of them coincides with the zone of highest conversion gain. In a typical sigmoidal G_m(v_GS) (such as that depicted in Fig. 14a), conversion efficiency is optimized when the FET is biased for maximum G_m(v_GS) variation, that is, for the G_m2(v_GS) peak. Unfortunately, that maximized variation of G_m(v_GS) is accompanied by an also nearly ideal odd symmetry of G_m3(v_GS), which is responsible for the
observed IMD impairment. Furthermore, this simplified analysis also shows that, as was previously studied for amplifiers, the IMD behavior of mixers strongly depends on the actual shape of the device's nonlinearity. (For example, the very sharp peaks of IMR shown in Fig. 29c are due to the ideal symmetric sigmoidal model used for the simulated FET's transconductance.) So, different devices will show quite distinct IMR patterns, impeding a straightforward extrapolation of these active FET mixer results to diode mixers [6,27] or even resistive FET mixers [28,29].
6. CONCLUSIONS

This article showed that the study of nonlinear distortion mechanisms is a subject of fundamental interest that spreads through almost all microwave and RF signal processing circuits and systems. Involving various scientific disciplines that range from the physical level of active-device modeling to the circuit and system level of communication links, it requires a broad range of microwave knowledge. Hence, and despite the now more than 40 years of continued progress, intermodulation distortion is still an exciting and challenging field of strong active research in both industry and academia.

Acknowledgements

The author would like to express his gratitude to several of his colleagues and graduate students who contributed with some of the knowledge presented in this article. Of
Figure 29. (a) Time-domain waveforms of G_m(t) for three different V_GS bias points; (b) corresponding time-domain waveforms of G_m3(t) for the same three V_GS bias points; (c) conversion gain and IMD ratio for the whole range of the FET's V_GS bias.
these, Nuno B. Carvalho, Jose A. Garcia, and Christian
Fager deserve a special mention.
BIBLIOGRAPHY
1. L. Chua, C. A. Desoer, and E. S. Kuh, Linear and Nonlinear Circuits, McGraw-Hill, 1987.
2. R. H. Caverly, Distortion modeling of PIN diode switches and attenuators, IEEE Int. Microwave Symp. Digest, 2004, pp. 957-960.
3. L. Dussopt and G. M. Rebeiz, Intermodulation distortion and power handling in RF MEMS switches, varactors, and tunable filters, IEEE Trans. Microwave Theory Tech. MTT-51:1247-1256 (2003).
4. P. Liu, Passive intermodulation interference in communication systems, Electron. Commun. Eng. J. 2:109-118 (1990).
5. J. C. Pedro and N. B. Carvalho, On the use of multi-tone techniques for assessing RF components intermodulation distortion, IEEE Trans. Microwave Theory Tech. MTT-47:2393-2402 (1999).
6. J. C. Pedro and N. B. Carvalho, Intermodulation Distortion in Microwave and Wireless Circuits, Artech House, Norwood, MA, 2003.
7. S. A. Maas, Nonlinear Microwave and RF Circuits, 2nd ed., Artech House, Norwood, MA, 2003.
8. N. Boulejfen, A. Harguem, and F. A. Ghannouchi, New closed-form expressions for the prediction of multitone intermodulation distortion in fifth-order nonlinear RF circuits/systems, IEEE Trans. Microwave Theory Tech. MTT-52:121-132 (2004).
9. J. C. Pedro and J. Perez, Accurate simulation of GaAs MESFET's intermodulation distortion using a new drain-source current model, IEEE Trans. Microwave Theory Tech. MTT-42:25-33 (1994).
10. J. A. Garcia, A. Mediavilla, J. C. Pedro, N. B. Carvalho, A. Tazon, and J. L. Garcia, Characterizing the gate to source nonlinear capacitor role on GaAs FET IMD performance, IEEE Trans. Microwave Theory Tech. MTT-46:2344-2355 (1998).
11. M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems: Modeling, Methodology and Techniques, 2nd ed., Kluwer Academic/Plenum, New York, 2000.
12. V. Mathews and G. Sicuranza, Polynomial Signal Processing, Wiley, New York, 2000.
13. A. Saleh, Frequency-independent and frequency-dependent nonlinear models of TWT amplifiers, IEEE Trans. Commun. COM-29:1715-1720 (1981).
14. J. C. Pedro and S. A. Maas, A comparative overview of microwave and wireless power amplifier behavioral modeling approaches, IEEE Trans. Microwave Theory Tech. (in press).
15. C. P. Silva, C. J. Clark, A. A. Moulthrop, and M. S. Muha, Optimal-filter approach for nonlinear power amplifier modeling and equalization, IEEE Int. Microwave Symp. Digest, 2000, pp. 437-440.
16. Q. J. Zhang and K. C. Gupta, Neural Networks for RF and Microwave Design, Artech House, Norwood, MA, 2000.
17. K. Kundert, J. White, and A. Sangiovanni-Vincentelli, Steady-State Methods for Simulating Analog and Microwave Circuits, Kluwer Academic Publishers, Norwell, MA, 1990.
18. S. C. Cripps, RF Power Amplifiers for Wireless Communications, Artech House, Norwood, MA, 1999.
19. N. B. Carvalho and J. C. Pedro, Large and small signal IMD behavior of microwave power amplifiers, IEEE Trans. Microwave Theory Tech. MTT-47:2364-2374 (1999).
20. C. Fager, J. C. Pedro, N. B. Carvalho, and H. Zirath, Prediction of IMD in LDMOS transistor amplifiers using a new large-signal model, IEEE Trans. Microwave Theory Tech. MTT-50:2834-2842 (2002).
21. C. Fager, J. C. Pedro, N. B. Carvalho, H. Zirath, F. Fortes, and M. J. Rosario, A comprehensive analysis of IMD behavior in RF CMOS power amplifiers, IEEE J. Solid State Circ. JSSC-39:24-34 (2004).
22. N. B. Carvalho and J. C. Pedro, A comprehensive explanation of distortion sideband asymmetries, IEEE Trans. Microwave Theory Tech. MTT-50:2090-2101 (2002).
23. L. R. Gomes and J. C. Pedro, Design rules for highly efficient power amplifiers driven by low voltage supplies, Proc. 29th European Microwave Conf., 1999, Vol. II, pp. 267-270.
24. J. M. Golio, M. G. Miller, G. N. Maracas, and D. A. Johnson, Frequency-dependent electrical characteristics of GaAs MESFETs, IEEE Trans. Electron. Devices ED-37:1217-1227 (1990).
25. R. Anholt, Electrical and Thermal Characterization of MESFETs, HEMTs, and HBTs, Artech House, Norwood, MA, 1995.
26. S. A. Maas, Microwave Mixers, Artech House, Norwood, MA, 1986.
27. S. A. Maas, Two-tone intermodulation in diode mixers, IEEE Trans. Microwave Theory Tech. MTT-35:307-314 (1987).
28. S. Maas, A GaAs MESFET mixer with very low intermodulation, IEEE Trans. Microwave Theory Tech. MTT-35:425-429 (1987).
29. J. A. Garcia, J. C. Pedro, M. L. de La Fuente, N. B. Carvalho, A. Mediavilla, and A. Tazon, Resistive FET mixer conversion loss and IMD optimization by selective drain bias, IEEE Trans. Microwave Theory Tech. MTT-47:2382-2392 (1999).
INTERMODULATION MEASUREMENT
MUHAMMAD TAHER
ABUELMAATTI
King Fahd University of
Petroleum and Minerals
Dhahran, Saudi Arabia
1. INTRODUCTION
Virtually all electronic circuits and systems exhibit non-
linear inputoutput transfer characteristic. Mixers, fre-
quency multipliers, modulators, and square-law detectors
represent examples of intentional class members, while
linear power ampliers, active lters, and microwave
transmitters, in which nonlinearity represents an unde-
sirable deviation of the system from ideal, linear opera-
tion, are examples of unintentional members.
Whenever a number of signals of differing frequencies
pass through a nonlinear device, energy is transferred to
frequencies that are sums and differences of the original
frequencies. These are the intermodulation products
(IMPs). In such cases, the instantaneous level of one sig-
nal may effectively modulate the level of another signal;
INTERMODULATION MEASUREMENT 2215
hence the term intermodulation. In a transmitting
system, the results of excessive intermodulation are
unwanted signals that may cause interference. In a
receiver, internally generated intermodulation can hinder
reception of the desired signals. It is interesting to note
that the ears cochlea has a similar nonlinear response
and produces sums and differences of the input frequen-
cies in the same way, particularly with loud sounds [1].
It has also been found that passive components, nor-
mally considered to be linear, can also generate IMPs. A
variety of situations can arise in which nonlinear resis-
tance junctions can be formed at metallic mating surfaces.
Such junctions may result from salt or chemical deposi-
tions or from corrosion. The result is sometimes known as
the rusty bolt effect because rusted bolts in structures
have been known to exhibit such nonlinearities. This phe-
nomenon is referred to as passive intermodulation (PIM).
Sources of PIM include waveguides, directional couplers,
duplexers, and antennas [26].
Intermodulation may also occur at the amplierloud-
speaker interface [7], or in general as a result of the non-
linear interaction between the input signal of a two-port
and a signal injected to the output port and propagating
into the input via a feedback network [8]. Externally
induced transmitter intermodulation, also known as
reverse intermodulation, backward intermodulation, and
antenna-induced intermodulation, is the mixing of a
carrier frequency with one or more interfering signals in
a transmitter's final stage [9]. Moreover, lack of screening
of open-wire transmission lines can result in significant
coupling to adjacent lines, frequently giving rise to inter-
modulation products [10]. Furthermore, intermodulation
may arise when an array of receiving antennas is illumi-
nated with a transient impulsive electromagnetic plane
wave [11].
In discussing the sources of IMPs it is convenient to
divide nonlinear mechanisms yielding IMPs into two prin-
cipal forms. The first is due to a nonlinear amplitude in-
put/output characteristic (AM/AM), which causes
amplitude compression with increasing input amplitude.
The second mechanism occurs because of the variation of
phase shift through the device, or the system, as the input
amplitude is changed (AM/PM).
Depending on the signal characteristics, sources of
IMPs can be divided into two categories: (1) static nonlin-
earity, depending solely on the amplitude of the signal,
and (2) dynamic nonlinearity, depending not only on the
amplitude but also on the time properties or frequency
composition of the signal.
Static nonlinearities usually encountered in electronic
circuits and systems can be classified into clipping, cross-
over, and soft nonlinearities [12] as shown in Fig. 1.
Compared with the hard nonlinearities of clipping (which is sig-
nificant near maximum input amplitudes) and crossover
(significant mostly at small input amplitudes), the soft
nonlinearity is usually the most important in the transfer
characteristic of an electronic circuit. If the frequency con-
tent or the time properties of the input signal affect the
transfer characteristic of the circuit or the system, the re-
sulting nonlinearities may be called dynamic. Intermodu-
lation products resulting from dynamic nonlinearities are
referred to as transient intermodulation (TIM), slew-in-
duced distortion (SID), or dynamic intermodulation dis-
tortion (DIM) [13-16].
2. SIMPLE INTERMODULATION THEORY
IMPs occur when two or more signals exist simultaneously
in a nonlinear environment. In general, if N signals with
frequencies $f_1$ to $f_N$ are combined in a static nonlinearity, the output will contain spectral components at frequencies given by

$$\sum_{n=1}^{N} k_n f_n$$

where $k_n$ is a positive integer, a negative integer, or zero, and $\sum_{n=1}^{N} |k_n|$ is the order of the IMP. Even with a small
number of input signals N, a very large number of IMPs
are generated. Fortunately, not all products are equally
troublesome. Depending on the system involved, some of
these IMPs can be neglected since they will be filtered out at some point. For example, most communication systems operate over a limited frequency band. Thus, IMPs falling outside the band will be attenuated. Moreover,
amplitudes of the IMPs generally decrease with the order
of the products, and high-order products can often be ne-
glected. Low-order intermodulation components such as
the second-order components $f_m + f_n$ and $f_m - f_n$ and the third-order components occurring at frequencies $2f_m \pm f_n$ and $f_m \pm f_n \pm f_q$ are usually the most troublesome, having
the largest magnitudes and/or lying close to the originat-
ing frequencies, making their removal by filtering practically difficult. However, a salient characteristic of PIM, as distinguished from the conventional IM counterpart, discussed above, is that the PIMs causing trouble are of a high order, say, the 11th to the 21st.
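As a numerical illustration of the frequency relationship above, the short Python sketch below enumerates the IMP frequencies $\sum k_n f_n$ and their orders $\sum |k_n|$ for a small set of input tones. The tone frequencies (10 and 11 MHz) and the maximum order are arbitrary illustrative choices, not values from this article.

```python
from itertools import product

def imp_table(freqs, max_order):
    """List positive intermodulation product frequencies sum(k_n*f_n)
    whose order sum(|k_n|) lies between 2 and max_order."""
    results = set()
    for ks in product(range(-max_order, max_order + 1), repeat=len(freqs)):
        order = sum(abs(k) for k in ks)
        if 2 <= order <= max_order:
            freq = sum(k * f0 for k, f0 in zip(ks, freqs))
            if freq > 0:
                results.add((order, freq))
    return sorted(results)

# Two tones at 10 and 11 MHz, products up to the fifth order.
for order, freq in imp_table([10e6, 11e6], 5):
    print(f"order {order}: {freq / 1e6:.1f} MHz")
```

The output places the close-in third-order products (9 and 12 MHz for this tone pair) right next to the original tones, illustrating why they are the hardest to remove by filtering.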
Analysis of nonlinear systems differs from that of linear
systems in several respects: (1) there is no single analyt-
ical approach that is generally applicable (such as Fourier
Figure 1. Different types of static nonlinearities (output versus input): (a) clipping; (b) soft; (c) crossover.
or Laplace transforms in linear systems); (2) closed-form
analytical solutions of nonlinear equations are not ordi-
narily possible; and (3) there is rarely sufficient informa-
tion available to enable a set of equations that accurately
model the system to be derived. These factors preclude the
exact analytical determination of nonlinear effects, such
as IMPs, in the general case. In order to get anything done
at all, it is usually necessary to make various simplifying
assumptions and then use an approximate model that will
provide results of acceptable accuracy for the problem in
hand.
A simple approach, therefore, is to use frequency-do-
main techniques that provide a separate solution for each
frequency present in the output. In general, such methods
are (1) centered around a description of the nonlinear
mechanism by a continuous function type of characteris-
tic, for example, a polynomial or a Fourier series repre-
sentation of the output in terms of the input; and (2) based
on the simplifying assumption that this characteristic
does not vary with frequency, in other words, that it is a
memoryless characteristic.
Memoryless nonlinear circuits are often modeled with a power series of the form

$$V_{\mathrm{out}} = \sum_{n=0}^{N} k_n V_i^n \qquad (1)$$
The first coefficient, $k_0$, represents the DC offset in the circuit. The second coefficient, $k_1$, is the gain of the circuit associated with linear circuit theory. The remaining coefficients, $k_2$ and above, represent the nonlinear behavior of the circuit. If the circuit were completely linear, all the coefficients except $k_1$ would be zero.

The model can be simplified by ignoring the terms that come after the $k_3$ term. For soft nonlinearities, the size of $k_n$ decreases rapidly as n gets larger. For many applications the reduced model of Eq. (2) is sufficient, since the second-order and third-order effects dominate. However, many devices, circuits, and systems present difficulties for the polynomial approximation:
$$V_{\mathrm{out}} = k_0 + k_1 V_i + k_2 V_i^2 + k_3 V_i^3 \qquad (2)$$
Assuming that the input signal is a two-tone of the form
$$V_i = V_1 \cos\omega_1 t + V_2 \cos\omega_2 t \qquad (3)$$
then combining Eqs. (2) and (3), yields
$$\begin{aligned} V_{\mathrm{out}} = {} & a_0 + b_1 \cos\omega_1 t + c_1 \cos\omega_2 t + b_2 \cos 2\omega_1 t + c_2 \cos 2\omega_2 t \\ & + b_3 \cos(\omega_1 + \omega_2)t + c_3 \cos(\omega_1 - \omega_2)t + b_4 \cos 3\omega_1 t + c_4 \cos 3\omega_2 t \\ & + b_5 [\cos(2\omega_1 + \omega_2)t + \cos(2\omega_1 - \omega_2)t] \\ & + c_5 [\cos(2\omega_2 + \omega_1)t + \cos(2\omega_2 - \omega_1)t] \end{aligned} \qquad (4)$$
where
$$a_0 = k_0 + \frac{k_2}{2}\left(V_1^2 + V_2^2\right)$$
$$b_1 = k_1 V_1 + \frac{3}{4} k_3 V_1^3 + \frac{3}{2} k_3 V_1 V_2^2$$
$$c_1 = k_1 V_2 + \frac{3}{4} k_3 V_2^3 + \frac{3}{2} k_3 V_1^2 V_2$$
$$b_2 = \frac{1}{2} k_2 V_1^2, \qquad c_2 = \frac{1}{2} k_2 V_2^2$$
$$b_3 = c_3 = k_2 V_1 V_2$$
$$b_4 = \frac{1}{4} k_3 V_1^3, \qquad c_4 = \frac{1}{4} k_3 V_2^3$$
$$b_5 = \frac{3}{4} k_3 V_1^2 V_2, \qquad c_5 = \frac{3}{4} k_3 V_1 V_2^2$$
For equal-amplitude input tones, Eq. (4) shows that the second-order terms, of amplitudes $b_2$, $c_2$, $b_3$, $c_3$, will be increased by 2 dB in amplitude when the input tones are increased by 1 dB. The third-order terms, of amplitudes $b_4$, $c_4$, $b_5$, $c_5$, are increased by 3 dB in amplitude when the input tones are increased by 1 dB.
While Eq. (1) is adequate, and widely used, for predicting the intermodulation performance of a wide range of devices, circuits, and systems, there are cases where it cannot be used. Examples include, but are not restricted to, prediction of spectral regrowth in digital communication systems, transient intermodulation and frequency-dependent nonlinearities, and passive intermodulation.
3. SPECTRAL REGROWTH
When a modulated signal passes through a nonlinear de-
vice, its bandwidth is broadened by odd-order nonlinear-
ities. This phenomenon, called spectral regrowth or
spectral regeneration, is a result of mixing products (in-
termodulation) between the individual frequency compo-
nents of the spectrum [17]. The spectral regrowth can be
classified into two categories: (1) in-band intermodulation and (2) out-of-band intermodulation. The first cannot be eliminated by linear filtering and is responsible for the signal-to-noise ratio degradation and, consequently, for the bit error rate (BER) degradation in digital communication systems. The second generates interference between adjacent channels and can be filtered out at the nonlinear device output, with a certain output power penalty caused by the filter insertion
losses. This spectral regrowth causes adjacent-channel
interference (ACI), which is measured by the adjacent-
channel power ratio (ACPR).
The ACPR is the power in the main channel divided by
the power in the lower plus upper adjacent channels. Con-
sidering just the lower channel yields $\mathrm{ACPR}_{\mathrm{lower}}$ and the upper channel alone yields $\mathrm{ACPR}_{\mathrm{upper}}$. Analog cellular ra-
dio uses frequency or phase modulation, and the ACPR is
adequately characterized by intermodulation distortion of
discrete tones. Typically, third-order intermodulation
product (IMP3) generation, in a two-tone test, is adequate
to describe spectral regrowth. Thus, distortion in analog
radio is accurately modeled using discrete-tone steady-
state simulation. Digital radio, however, uses complex
modulation, and adjacent-channel distortion has little re-
lationship to intermodulation in a two-tone test [18,19]. A
modulated input signal applied to radiofrequency (RF)
electronics in digital radio is a sophisticated waveform re-
sulting from coding, filtering, and quadrature generation.
Neither can it be represented by a small number of dis-
crete tones (or frequencies), nor can the waveform be rep-
resented in a simple analytic form. Thus, in digital radio,
ACPR is more difficult to predict than one- or two-tone
responses since it depends not only on the intrinsic non-
linear behavior of the device (e.g., amplifier) but also on the
encoding method (i.e., the statistics of the input stream)
and the modulation format being used. The only way the
input stream can conveniently and accurately be repre-
sented is by its statistics, and transforming these using an
appropriate behavioral model provides accurate and ef-
ficient modeling of ACPR [20]. While in Ref. 20 the input
signal is assumed Gaussian, digital communication sig-
nals are often far from being Gaussian. In Ref. 21 the in-
put is assumed stationary but not necessarily Gaussian.
ACPR is, therefore, defined differently in the various
wireless standards. The main difference is the way in
which adjacent-channel power affects the performance of
another wireless receiver for which the offending signal is
cochannel interference [20]. In general the ACPR can be
defined as [20]
$$\mathrm{ACPR} = \frac{\int_{f_3}^{f_4} S(f)\, df}{\int_{f_1}^{f_2} S(f)\, df} \qquad (5)$$

where S(f) is the power spectral density (PSD) of a signal whose channel allocation is between frequencies $f_1$ and $f_2$, and its adjacent channel occupies frequencies between $f_3$ and $f_4$. Regulatory authorities impose strict constraints on
ACPR and accurate methods of its determination are of par-
ticular interest to those involved in wireless system design.
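For a measured or simulated waveform, Eq. (5) can be evaluated numerically from an estimate of S(f). The sketch below is a minimal illustration using SciPy's Welch PSD estimate; the sampling rate, tone frequencies, channel limits, and the tanh() soft nonlinearity are all arbitrary assumptions made for the example, not values from this article.

```python
import numpy as np
from scipy.signal import welch

def acpr_db(x, fs, main_band, adj_band):
    """Evaluate Eq. (5): adjacent-channel power divided by main-channel power, in dB."""
    f, S = welch(x, fs=fs, nperseg=4096)
    df = f[1] - f[0]
    def band_power(band):
        lo, hi = band
        m = (f >= lo) & (f <= hi)
        return np.sum(S[m]) * df
    return 10 * np.log10(band_power(adj_band) / band_power(main_band))

# Toy example: a two-tone signal passed through a soft (tanh) nonlinearity.
fs = 1.0e6
t = np.arange(2**18) / fs
x = np.cos(2 * np.pi * 100e3 * t) + np.cos(2 * np.pi * 110e3 * t)
y = np.tanh(1.5 * x)   # spectral regrowth appears around the two tones
print(f"ACPR = {acpr_db(y, fs, main_band=(95e3, 115e3), adj_band=(115e3, 135e3)):.1f} dB")
```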
4. SIMPLE TRANSIENT INTERMODULATION THEORY
To illustrate how TIM distortion arises, consider a differ-
ential amplifier with negative feedback applied between the output and the inverting input and a voltage step applied to the noninverting input. If the open-loop gain of the amplifier were flat and the time delay through it were zero, the voltage step would instantaneously propagate undistorted through the amplifier, back through the feedback loop, and into the inverting input, where it would be subtracted from the input signal; the difference signal, which is a voltage step occurring at the same time as the input voltage, would be amplified by the amplifier. However, this is not the case when the open-loop gain of the amplifier is not flat and the time delay through it is not zero. When the voltage step occurs, the limited high-frequency response of the amplifier prevents the appearance of a signal at the amplifier output terminal until the internal capacitors of the amplifier can charge or discharge. This causes the momentary absence of a feedback signal at the inverting input to the amplifier, possibly causing the amplifier to severely overload until the feedback signal arrives.
If the input signal to the differential amplifier is formed of a sine wave superimposed on a square wave, the amplifier will exhibit the same response to the abrupt level changes in the square wave as it did to the voltage step discussed above. During the momentary absence of the feedback when the square wave changes level, the amplifier can either saturate or cut off. If this occurs, the sine wave momentarily disappears from the signal at the output terminal of the amplifier, or it momentarily decreases in amplitude. This happens because the saturated or cutoff amplifier appears as a short circuit or an open circuit, respectively, to the sine wave, and this component of the input signal is momentarily removed from the output signal, thus resulting in TIM [16].
A point to be noted is that if the term transient intermodulation were understood literally, it would imply transients of both high and low frequencies and/or high or low operating levels, in other words, all transients. In actual practice, however, TIM occurs only for signals with simultaneously high level and high frequencies, not lower levels or lower frequencies.
The key parameter of such signals is that they are char-
acterized by high signal slopes, not just high frequencies
or high levels. Neither high frequencies nor high levels in
themselves necessarily result in distortion, unless their
combination is such that a high effective signal slope is
produced. TIM is actually generated when the signal slope
approaches or exceeds the amplifier slew rate. This can
happen for either transient or steady-state signals. Thus,
a more easily understood term for what actually happens would be one that relates both slew rate and signal slope. A more descriptive term for the mechanism would, therefore, be slew-induced distortion (SID); other descriptive variations of this term are slew rate distortion and slewing distortion [22].
Because of the complexity of the mechanism resulting
in TIM, especially handling the frequency dependence of
the amplifier nonlinearity and incorporation of the feed-
back, Eq. (1) cannot be used to predict the TIM perfor-
mance of nonlinear devices, and recourse to other
analytical techniques, for example, Volterra series or har-
monic balance analysis, would be inevitable.
5. VOLTERRA SERIES AND HARMONIC BALANCE ANALYSIS
Volterra series describes a system with frequency-depen-
dent nonlinearity in a way that is equivalent to the
manner in which Taylor series approximates an analytic
function. Depending on the amplitude of the exciting sig-
nal, a nonlinear system can be described by a truncated
Volterra series. Similar to the Taylor series representa-
tion, for very high amplitudes the Volterra series diverges.
Volterra series describe the output of a nonlinear system
as the sum of the response of a first-order operator, a sec-
ond-order one, a third-order one, and so on [23]. Every
operator is described in either the time domain or the fre-
quency domain with a kind of transfer function called a
Volterra kernel.
In Volterra series analysis the nonlinear circuit is
treated purely as an AC problem. Assuming that none of
the input signals are harmonically related, an iterative
solution can be applied for circuits not operated under
distortion saturation conditions. First the circuit is solved
for the input signals. These results are then used to cal-
culate the second-order distortion products, and these are
treated as generators at a different frequency to the input
signals and the network is again solved. This is then re-
peated for higher-order distortion products. This leads to
extremely fast calculation of distortion behavior. Simula-
tion at higher power levels can be achieved by feeding
back contributions from higher-order distortion products
[24]. The use of Volterra series to characterize the output
as a function of the input [25,26] can, therefore, provide
closed-form expressions for all the distortion products of a
frequency-dependent nonlinearity excited by a multisinu-
soidal signal.
However, techniques using Volterra series suffer from
the disadvantage that a complex mathematical procedure
is required to obtain a closed-form expression for the out-
put amplitude associated with a single component of the
output spectrum. Moreover, the problem of obtaining out-
put products of orders higher than the third becomes pro-
hibitively difficult unless it may be assumed that higher-
order contributions vanish rapidly [27]. The Volterra se-
ries approach is, therefore, most applicable to mild non-
linearities where low-order Volterra kernels can
adequately model the circuit behavior. With appropriate
assumptions and simplications, many useful features of
the Volterra series technique can be used to find approx-
imate expressions for TIM (SID). These are quite accurate
for relatively small distortion conditions [28,29].
Alternatively, most RF and microwave circuit analyses are based on harmonic balance analysis [30]. The har-
monic balance technique works by processing the linear
part of the circuit in the frequency domain and the non-
linear part in the time domain. Computation in the fre-
quency domain is very fast and efficient, especially for
frequency-selective components such as transmission
lines and resonant circuits. Computations in the time do-
main are followed by Fourier transform. Harmonic bal-
ance analysis can, therefore, handle intermodulation
distortion provided there are not too many excitation
tones. In the harmonic balance technique an initial estimate is required for the final waveshape, and this is refined iteratively during analysis. The harmonic balance method computes the response of a nonlinear circuit by iteration, and the final result is a list of numbers that do
not indicate which nonlinearities in the circuit are mainly
responsible for the observed nonlinear behavior. Hence
such a method is suitable for verification of circuits that
have already been designed. This method does not present
information from which designers can derive which circuit
parameters or circuit elements they have to modify in or-
der to obtain the required specifications [31]. While Vol-
terra series analysis can provide such information, it is
applicable only to weak nonlinearities.
While viewed as a universal solution and widely used, harmonic balance analysis may be un-
necessarily slow, cumbersome, and prone to subtle errors
[32], especially for weak nonlinearities or when a nonlin-
ear device is excited by very small signals. Volterra series
analysis is generally more accurate than harmonic bal-
ance for these types of problems, and it is several orders of
magnitude faster than a harmonic balance analysis [32].
Moreover, Volterra series analysis integrates well with
linear analysis tools, supporting simultaneous optimiza-
tion of several parameters of the nonlinear system. There-
fore, Volterra theory appears to be an ideal tool for circuits
and systems that are not strongly nonlinear but have as-
pects of linear and nonlinear circuits [32]. However, Vol-
terra series analysis becomes very cumbersome above
third-order products, and for products above fifth order,
it loses most of its advantages over the harmonic balance
analysis. The major disadvantage of Volterra series is the
occasional difficulty in deciding whether the limitations to
weakly nonlinear operation have been exceeded.
In fact, Volterra-series analysis and the harmonic bal-
ance technique complement each other [32]. Thus, while
the Volterra series analysis works well in those cases
where harmonic balance works poorly, the harmonic bal-
ance works well where the Volterra series works poorly.
Volterra series analysis is, therefore, not appropriate for
mixers, frequency multipliers, saturated power amplifiers,
and similar strongly driven and/or hard nonlinearities.
Volterra series analysis is suitable for small-signal ampli-
fiers, phase shifters, attenuators, and similar small-signal
and/or soft nonlinearities.
Another technique for analyzing nonlinear systems is
the describing function. This approach can yield closed-
form expressions for a feedback system that contains an
isolated static nonlinearity in the feedback loop [33]. Since
it is not possible to map all nonlinear circuits and systems
to such a feedback system, the describing function method
has restricted applications.
6. PASSIVE INTERMODULATION (PIM)
While the concept of intermodulation in active devices
such as amplifiers, filters, and mixers is familiar and well
documented, the effects of intermodulation in passive com-
ponents such as directional couplers, cables, coaxial con-
nectors, power splitters, antennas, and electromechanical
and solid-state programmable attenuators are less famil-
iar and less documented. More recently, evidence has
emerged that PIM has an impact on other system equipment, such as amplifiers and extenders, fiber nodes, and
interface units [34]. Poor mechanical contact, dissimilar
metals in direct contact, ferrous content in the conductors,
debris within the connector, poor surface finish, corrosion,
vibration, and temperature variations are among the
many possible causes of PIM. The sources of PIM have
been studied extensively; see Refs. 35-43 and the refer-
ences cited therein. Similar to the intermodulation prod-
ucts in active devices, PIM is generated when two or more
RF signals pass through RF passive devices having non-
linear characteristics [41,42]. Generally the nonlinearities
of RF passive devices consist of contact nonlinearity and
material nonlinearity [43]. Contact nonlinearity refers to
all metal contact nonlinearities causing nonlinear cur-
rent-voltage behavior, such as the tunneling effect, micro-
discharge, and contact resistance. Material nonlinearity
refers to the bulk material itself. Magnetoresistivity of the
transmission line, thermal resistivity, and nonlinear
hysteresis of ferromagnetic materials are good examples
[43]. PIM generation in RF passive devices is caused
by the simultaneous appearance of one or more of these
PIM sources, and the overall performance is often domi-
nated by one principal PIM source [43]. In the case of
antennas, PIM is generated not only by the same PIM
sources as in general RF passive components but also by
the external working environment, such as conducting
metal materials.
Over the years Eq. (1) was used to describe the nonlin-
ear current/voltage conduction characteristics of passive
components (see, e.g., Refs. 37-39 and the references cited therein). While this approach results in simple expressions for the magnitudes of the harmonics and intermodulation products resulting from multisinusoidal excitations, it suffers from the following shortcomings. In order to predict high-order harmonic or intermodulation product magnitudes, it is necessary to determine coefficients of terms of similar order in the polynomial. A prerequisite to obtaining coefficients of high-order polynomial terms is measurement of output products of the same order. For example, to obtain the coefficients of a fifth-order polynomial, it is necessary to measure the output fifth-order components. With increasing use of narrowband components in multicouplers used in base stations of mobile radio systems, it becomes difficult to determine high-order coefficients in the nonlinear characteristic because the measured high-order product amplitudes from which they are computed are influenced to an unknown extent
by the system selectivity [44]. To overcome these prob-
lems, an exponential method has been used to predict the
intermodulation arising from corrosion [45].
7. INTERMODULATION CHARACTERIZATION
Although it is important to understand the origin of in-
termodulation and the engineering techniques for avoid-
ing it, it is equally important to be able to characterize it
objectively, preferably in a way that correlates well with
the subjective perception of the intermodulation. The abil-
ity to characterize an imperfection in this way is an
important step toward eliminating it as a source of system performance degradation.
Several techniques for characterizing intermodula-
tion distortion have been proposed. While some of these
techniques measure the total intermodulation distortion,
others distinguish between the various intermodulation
products. The latter are preferred, for subjective percep-
tion of intermodulation shows that equal amounts of total
intermodulation distortion differ widely in their effect ac-
cording to how the total is made up.
Depending on the signal characteristics, techniques for
characterization of intermodulation distortion can be clas-
sified into two categories: (1) steady-state techniques,
where characterization is performed on the assumption
that the input to the system under consideration is a mul-
tisinusoidal signal, and (2) dynamic techniques, where
characterization is performed on the assumption that
the input to the system under consideration is formed of
a sinusoidal signal superimposed on another signal char-
acterized by rapid changes of state, for example, a square
wave or a sawtooth wave. While steady-state techniques
can be used to characterize both RF and audio systems,
dynamic techniques are generally used for characterizing
only audio systems.
7.1. Steady-State Techniques
7.1.1. The Intercept Point. Increasing the signal level at
the input to a weakly nonlinear device will cause the IMPs
to increase at the output. In fact, the increase in IMP am-
plitudes is faster than the increase in the output version of
the input signal. For increasing fundamental input power,
the fundamental output power increases in a linear man-
ner, according to the gain or loss of the device. At some
point, gain compression occurs and the fundamental out-
put power no longer increases with input power. The out-
put power of the second-order intermodulation products
also increases with fundamental input power, but at a
faster rate. Recall that, according to the simple intermod-
ulation theory, the second-order intermodulation changes
by 2 dB per 1 dB of change in the fundamental. Similarly,
the third-order intermodulation changes by 3 dB per 1 dB
of change in the fundamental. Thus, on a logarithmic
scale, as shown in Fig. 2, the lines representing the
second- and third-order intermodulation products have
twice and three times, respectively, the slope of the
fundamental line.
If there were no gain compression, the fundamental
input power could be increased until the second-order in-
termodulation eventually caught up with it, and the two
output power levels would be equal. This point is referred
to as the second-order intercept point (IP2). The third-or-
der intermodulation product also increases faster than the
fundamental, and those two lines will intersect at the
third-order intercept point (IP3). Rarely can either of
these two points be measured directly, due to the gain
compression of the fundamental. Instead, the intercept
points are extrapolated from measurements of the funda-
mental and intermodulation products at power levels
below the point where gain compression occurs. The
intercept points are usually specified in dBm and may
refer to either the output or the input; the two points
will differ by the gain of the system under consideration.
The second-order and third-order intercept points are figures of merit that are independent of the signal level.
Therefore, the intermodulation performance of two differ-
ent systems can be compared quite easily if their intercept
points are known [46].
Using the intercept point it is easy to calculate the rel-
ative intermodulation level corresponding to a given input
signal level. In fact, the difference between the level of the
second-order intermodulation and the fundamental signal
level is the same as the difference between the fundamen-
tal signal level and the intercept point. Thus, if the second-
order intercept point is +15 dBm and the fundamental signal level is -10 dBm (both referred to the output of the device), the difference between these two values is 25 dB. Therefore, the second-order intermodulation products will be 25 dB below the fundamental, or -35 dBm. So the in-
tercept point allows easy conversion between fundamental
signal level and the intermodulation level.
The difference between the level of the third-order in-
termodulation products and the fundamental signal level
is twice the difference between the fundamental signal
level and the third-order intercept point. (Note that the
second-order intercept point is not the same as the third-
order intercept point.) Suppose that the third-order inter-
cept point is 5 dBm and the fundamental signal is
25 dBm, both referred to the output of the device. The
difference between the intercept point and the fundamen-
tal is 30 dB, so the third-order intermodulation products
will be 2 times 30 dB down from the fundamental. The
relative distortion level is 60 dB, and the absolute power
of the intermodulation products is 85 dBm.
It is important, however, to note that the preceding
analyses assume that the second-order and third-order
intermodulation curves have slopes of 2 and 3 dB/dB, re-
spectively. Thus, theoretically, the intercept points are not
functions of the input power level. If a power sweep
is performed, it is expected that the intercept points will
remain constant. The intercept points can, therefore, be
calculated from measurements at only one power level.
However, if the input signal exceeds a certain limit, the
amplitudes of the output fundamentals and the resulting
intermodulation products will start to saturate, and the
intercept points will usually drop off, indicating an invalid
measurement. It is essential to know this limit. It is par-
ticularly useful for high-dynamic-range circuits and sys-
tems with relatively low output powers where the
intermodulation is low, but only for signals that are low
enough. Expanding the model of Eq. (2) to include fourth-
and fifth-order terms [47] can achieve this.
Moreover, at low power levels, the intercept points will
start to change as the noise floor of the measuring instru-
ment, usually a spectrum analyzer, is approached, thus
indicating an invalid measurement. It is important, there-
fore, to look at the variation of the intercept points as
functions of power as this provides a good way of checking
the valid measurement range.
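The extrapolation described above amounts to simple arithmetic on one set of levels measured below compression. The Python sketch below encodes it directly; the example calls reuse the output-referred levels quoted in the worked examples earlier in this subsection, and are otherwise illustrative only.

```python
def oip2_dbm(p_fund_dbm, p_im2_dbm):
    """IM2 lies (OIP2 - Pfund) dB below the fundamental, so OIP2 = 2*Pfund - P_IM2."""
    return 2.0 * p_fund_dbm - p_im2_dbm

def oip3_dbm(p_fund_dbm, p_im3_dbm):
    """IM3 lies 2*(OIP3 - Pfund) dB below the fundamental, so OIP3 = Pfund + (Pfund - P_IM3)/2."""
    return p_fund_dbm + (p_fund_dbm - p_im3_dbm) / 2.0

# Reproduce the worked examples above (output-referred levels):
print(oip2_dbm(-10.0, -35.0))   # 15.0 dBm
print(oip3_dbm(-25.0, -85.0))   # 5.0 dBm
```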
7.1.2. Two-Tone Test. The two-tone test is extensively
used in characterizing a wide range of devices. Magnetic
tapes [48]; microwave and millimeter-wave diode detec-
tors [49]; analog-to-digital converters [50,51]; gamma cor-
rectors [52]; and electrical components such as resistors,
capacitors, inductors, as well as contacts of switches, con-
nectors, and relays [53] are a few examples. The two-tone
test is also used to characterize the performance of the
basilar membrane of the cochlea [54].
The two-tone test can also be used to determine the
transfer characteristic of a nonlinear device modeled by
the polynomial approximation of Eq. (2). With the input
formed of two properly selected frequencies $\omega_1$ and $\omega_2$, and if the second-order and third-order intermodulation products are measured separately, it is possible to find, from the measured data, the coefficients of the quadratic and cubic terms $k_2$ and $k_3$, respectively, in the polynomial approximation of Eq. (2). If, in addition, the IMPs are measured at two sets of values of $\omega_1$ and $\omega_2$, it is possible to
identify the dominant physical nonlinear process from the
variation of IMPs with test frequencies [13].
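Using the coefficient expressions of Eq. (4), this extraction reduces to two one-line inversions. The sketch below recovers $k_2$ from a measured $\omega_1 \pm \omega_2$ product amplitude and $k_3$ from a measured $2\omega_1 \pm \omega_2$ product amplitude; the tone amplitudes and measured values in the example call are hypothetical illustrative numbers, not data from this article.

```python
def k2_k3_from_two_tone(V1, V2, A2, A3):
    """Invert the Eq. (4) coefficients:
    A2 = measured amplitude of the w1 +/- w2 product  (b3 = k2*V1*V2)
    A3 = measured amplitude of the 2w1 +/- w2 product (b5 = (3/4)*k3*V1**2*V2)"""
    k2 = A2 / (V1 * V2)
    k3 = 4.0 * A3 / (3.0 * V1**2 * V2)
    return k2, k3

# Hypothetical measured values (volts), for illustration only.
k2, k3 = k2_k3_from_two_tone(V1=0.1, V2=0.1, A2=2.0e-3, A3=5.0e-5)
print(f"k2 = {k2:.3g}, k3 = {k3:.3g}")
```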
The two-tone test can also be used to determine the
complex transfer characteristic of a nonlinear device
exhibiting AM/AM nonlinearity only, with a fixed phase shift between the output and the input. In this case a complete set of measurements for all the two-tone inter-
modulation products produced by the nonlinearity at two
different power levels is necessary [55]. If the device under
consideration exhibits both AM/AM and AM/PM non-
linearities, then determination of a unique set of polyno-
mial coefficients requires a complete set of intermodula-
tion measurements at three different power levels [55].
The set obtained at the highest power level will decide
the amplitude range within which the characterization
will be valid.

Figure 2. Third-order and second-order intercept points are determined by extending the fundamental, second-order, and third-order intermodulation transfer function lines (output power in dBm versus input power in dBm/tone): (a) fundamental transfer function, slope = 1; (b) second-order intermodulation, slope = 2; (c) third-order intermodulation, slope = 3. IP3 = third-order intercept point; IP2 = second-order intercept point.
According to the basic assumption that the nonlinear-
ities are represented by polynomials, high-accuracy rep-
resentation of the device characteristics will require
difficult, accurate measurements of higher-order intermodulation products, in addition to increased complications and considerable effort involved in the analysis [55]. Another difficulty from which this method suffers arises from the necessity of measuring complete sets of two-tone intermodulation products spread over a relatively wide frequency range, which consequently may impose stringent specifications on the measuring instruments and
techniques if accurate measurements are to be achieved.
In the two-tone test the inband IMPs are used to de-
scribe a device, a circuit or a system nonlinearity. Mea-
surements are made in or near the frequency range of
interest. In this test, the input signal consists of two frequencies, $\omega_1$ and $\omega_2$, of equal amplitude and a fixed amount of frequency spacing. At the output of the circuit or the system under test the amplitudes of the third-order intermodulation products $2\omega_1 - \omega_2$ and $2\omega_2 - \omega_1$ are measured. The intermodulation distortion is defined as the ratio between the root sum square of the intermodulation products and the root sum square of the twin-tone amplitudes. Unless a wave analyzer or a spectrum analyzer is available, the implementation of the two-tone test invariably requires amplification of the whole output spectrum to bring components $\omega_1$ and $\omega_2$ to a normalized value (100%). Then, $\omega_1$ and $\omega_2$ are suppressed, and the remaining components $2\omega_1 - \omega_2$ and $2\omega_2 - \omega_1$ are measured with an AC voltmeter or oscilloscope. Especially at audiofrequencies, this approach requires steep filters, one set of filters for each set of $\omega_1$ and $\omega_2$. For the same reason $\omega_2 - \omega_1$ cannot be too low, so it will never be a really narrowband system. This narrowband aspect is particularly important for higher frequencies, where equalizers in the reproduction audio channel may give unequal amplification of the components in the spectrum [56]. In the audiofrequency range several versions of the two-tone test are available [56-59].
7.1.3. Three-Tone Test. In this test, again, specific in-
band IMPs are selected to characterize the overall system
nonlinearities [60]. The more even spectral distribution
and flexibility, while still allowing discrete frequency eval-
uation, make this an attractive test for multifrequency
systems such as communication and cable television sys-
tems. In this test three equal-amplitude tones are applied
to the input of the nonlinear system under consideration.
Thus
$$V_i = V[\cos\omega_1 t + \cos\omega_2 t + \cos\omega_3 t] \qquad (6)$$
Combining Eqs. (2) and (6), and using simple trigonomet-
ric identities, it is easy to show that the third-order term $k_3 V_i^3$ will contribute, to the output spectrum, the following:
1. Three components at frequencies $\omega_1$, $\omega_2$, and $\omega_3$, each with amplitude given by
$$A_1 = \frac{15}{4} k_3 V^3 \qquad (7)$$
2. Three components at frequencies $3\omega_1$, $3\omega_2$, and $3\omega_3$, each with amplitude given by
$$A_3 = \frac{1}{4} k_3 V^3 \qquad (8)$$
3. Twelve components at frequencies $2\omega_m \pm \omega_n$; m, n = 1, 2, 3, each with amplitude given by
$$A_{21} = \frac{3}{4} k_3 V^3 \qquad (9)$$
4. Four components at frequencies $\omega_m \pm \omega_n \pm \omega_p$; m, n, p = 1, 2, 3, each with amplitude given by
$$A_{111} = \frac{3}{2} k_3 V^3 \qquad (10)$$
Equations (9) and (10) show that an intermodulation prod-
uct of frequency $\omega_m \pm \omega_n \pm \omega_p$ is 6 dB higher in level than an intermodulation product of frequency $2\omega_m \pm \omega_n$. Intermodulation distortion is defined as the ratio between the amplitude of one of the intermodulation products of frequency $\omega_m \pm \omega_n \pm \omega_p$ and the amplitude of one of the three output tones. In this test the choice of frequencies $\omega_1$, $\omega_2$, $\omega_3$ used to perform the measurement is important. This is because a system's intermodulation performance may not
be constant over its operating frequency range.
The three-tone test is widely used to characterize the
performance of RF amplifiers used in television broadcast transposers, where the vision carrier, color subcarrier, and sound carrier frequency components interact in the presence of amplifier nonlinearities. If the three frequency components are represented as single frequencies ($\omega_v$ the vision carrier, $\omega_{sc}$ the color subcarrier, and $\omega_s$ the sound carrier, with amplitudes $V_v$, $V_{sc}$, and $V_s$, respectively), then the input signal can be expressed as

$$V_i = V_v \cos\omega_v t + V_{sc} \cos\omega_{sc} t + V_s \cos\omega_s t \qquad (11)$$
Combining Eqs. (2) and (11), and using simple trigono-
metric identities, it is easy to show that the third-order
term of Eq. (2) produces, among others, two in-band in-
termodulation components given by
$$V_{ip} = \frac{3}{2} k_3 V_v V_{sc} V_s \cos(\omega_v + \omega_s - \omega_{sc})t + \frac{3}{4} k_3 V_s V_{sc}^2 \cos(2\omega_{sc} - \omega_s)t \qquad (12)$$
Intermodulation performance of the transposer is mea-
sured by taking the transposer out of service and using the
three-tone simulation of a composite video and sound sig-
nal, given by Eq. (11), as its input. The three levels and
frequencies vary from system to system. Typical levels,
relative to the peak synchronous pulse level, are $V_v = -6$ dB, $V_{sc} = -17$ dB, and $V_s = -10$ dB. Under these conditions, the first term of Eq. (12) is the most visible, and the second
term will be much lower in amplitude, typically 17 dB less.
Using a spectrum analyzer, the relative amplitude of
the major in-band intermodulation is measured and
referenced to the level of peak synchronous pulse. Usually,
the permissible level of the major in-band intermodulation
component is 53 dB below the reference level. This
three-tone test method is slow and requires spectrum an-
alyzers with relatively wide dynamic ranges. Moreover, it
measures the system performance at one luminance level
and one chrominance level. Thus, it does not test the sys-
tem over its full operating range [61].
The inadequacy of the internationally accepted three-
tone test method can be overcome by using a modified colorbar test signal [61]. The colorbars are applied to the
transposer via a test transmitter. The colorbars and sound
carrier therefore apply the three tones to the transposer,
changing levels in rapid succession. With suitable pro-
cessing, based on sampling the demodulated colorbar sig-
nal for short intervals corresponding to a selected color,
intermodulation levels can be measured simultaneously at
seven different luminance levels and can be shown in his-
togram form [61].
7.1.4. Noise Power Ratio (NPR) Test. In the NPR test,
the input to the device under test is obtained from a white-
noise source that is bandlimited to the instantaneous fre-
quency range of interest. This emulates a situation with
many simultaneous input signals. Provided that none of
the signals dominate, according to the central-limit theo-
rem, the resulting voltage obtained when many uncorre-
lated signals are added will approach a Gaussian
distribution. True white noise covers a frequency range
of interest continuously, unlike discrete signals.
The NPR test measures the amount of intermodulation
product power between two frequency ranges of white Gaussian noise. A white-noise generator is used with its output frequency range limited by a bandpass filter according to the bandwidth of the device under test. A quiet channel is formed by a switchable band-reject filter, as shown in Fig. 3. Then, the resulting white-noise signal is applied to the input of the device under test. At the output of the device under test is a receiver that is switch-tuned to the frequency of the band-reject filter used to produce the quiet channel. The NPR test is widely used for evaluating the intermodulation performance of systems whose input signal spectrum distribution can be approximated by that of white noise. However, the NPR may be degraded by the noise floor of the system under test, especially
under very low loading conditions. It may also be degraded
by the distortion products produced under high loading
conditions [62].
7.1.5. Cross-Modulation. Cross-modulation occurs when
modulation from a single unwanted modulated signal
transfers itself across and modulates the wanted signal.
Cross-modulation is troublesome primarily if the desired
signal is weak and is adjacent to a strong unwanted signal.
Even when the carrier of the strong unwanted signal is not
passed through the system, the modulation on the unde-
sired carrier will be transferred to the desired carrier.
Cross-modulation is, therefore, a special case of intermod-
ulation. Recall that when the input to a non-
linear system is formed of a two-tone signal of the form of
Eq. (3), then the amplitudes of the output components at
frequencies $\omega_1$ and $\omega_2$ will be given by

$$b_1 = k_1 V_1 + \frac{3}{4} k_3 V_1^3 + \frac{3}{2} k_3 V_1 V_2^2 \qquad (13)$$

and

$$c_1 = k_1 V_2 + \frac{3}{4} k_3 V_2^3 + \frac{3}{2} k_3 V_1^2 V_2 \qquad (14)$$
respectively. Thus, the output obtained at each frequency, $\omega_1$ and $\omega_2$, is dependent on the amplitude of the signal component of the other frequency. If the amplitude of the wanted unmodulated carrier is $V_1$ and the instantaneous amplitude of the unwanted amplitude-modulated carrier is

$$V_2(t) = V_2 (1 + m \cos\omega_m t) \qquad (15)$$
then, using Eq. (13), the amplitude of the wanted carrier
will be
$$b_1 = k_1 V_1 + \frac{3}{4} k_3 V_1^3 + \frac{3}{2} k_3 V_1 V_2^2 (1 + m \cos\omega_m t)^2 \qquad (16)$$
For small values of m and with $k_3 \ll k_1$, Eq. (16) can be approximated by

$$b_1 \approx k_1 V_1 + 3 k_3 V_1 V_2^2\, m \cos\omega_m t \qquad (17)$$

Thus the wanted carrier will be modulated by a modulation index

$$p = 3 \frac{k_3}{k_1} V_2^2\, m \qquad (18)$$

The cross-modulation factor is then defined as

$$K = \frac{p}{m} \qquad (19)$$
Thus, one frequency will be modulated by the modulation of
the other frequency. Similar results can be obtained if the
unwanted carrier is FM-modulated.
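From Eqs. (18) and (19), the cross-modulation factor depends only on the nonlinearity coefficients and the interferer amplitude. A minimal Python sketch (the coefficient and amplitude values in the example call are arbitrary illustrative choices, not data from this article) is:

```python
def cross_modulation_factor(k1, k3, V2):
    """K = p/m from Eqs. (18) and (19): K = 3*(k3/k1)*V2**2, independent of the modulation index m."""
    return 3.0 * (k3 / k1) * V2**2

# Illustrative values only.
print(f"K = {cross_modulation_factor(k1=10.0, k3=0.05, V2=0.5):.4f}")
```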
Figure 3. The output spectrum of a noise power ratio measurement (power in dB versus frequency): (a) injected noise; (b) noise and intermodulation generated in the measurement bandwidth $\delta\omega$ by the DUT. NPR = A - B.
Cross-modulation can be measured as the change in the
amplitude of the wanted unmodulated carrier as a func-
tion of the amplitude of the unwanted unmodulated car-
rier. This is the procedure recommended by the NCTA
(National Cable Television Association) standard cross-
modulation measurement [63]. Alternatively, cross-modu-
lation can be measured using the definition of Eq. (19):
measuring percentage modulation that appears on an un-
modulated desired carrier due to the presence of an un-
desired modulated carrier, divided by the percentage
modulation on the undesired carrier [64].
Cross-modulation can also be measured using two
equal-amplitude carriers. The wanted carrier, $\omega_2$, is unmodulated while the unwanted carrier, $\omega_1$, is FM-modulated. The output spectrum clearly shows the frequency deviation of the wanted carrier. Moreover, it can be shown that the frequency deviation of the intermodulation components of the output spectrum is larger than that of the original FM-modulated unwanted carrier. For the intermodulation product of frequency $a\omega_1 \pm b\omega_2$, the deviation
termodulation product rather than the deviation of the
wanted unmodulated carrier [65].
7.1.6. Differential Gain. Differential gain (DG), a parameter of special interest in color-TV engineering, is conventionally defined as the difference in gain encountered
by a low-level high-frequency sinusoid at two stated in-
stantaneous amplitudes of a superimposed slowly varying
sweep signal. In video signal transmission, the high-fre-
quency sinusoid represents the chromatic signal and the
low-frequency sinusoid represents the luminance signal.
Corresponding to the theoretical conditions of the differ-
ential measurement, DG measurement is performed by a
signal of the form of Eq. (3) with o
2
bo
1
and V
2
!0:0 at
V
1
0.0 and X [66]. Therefore, recalling that when the in-
put to a nonlinear system is formed of a two-tone signal of
the form of Eq. (3), the amplitude of the output component
at frequency o
2
will be given by
c
1
k
1
V
2

3
4
k
3
V
3
2

3
2
k
3
V
2
1
V
2
20
Thus, DG can be expressed as
$$\mathrm{DG} = 1 - \frac{k_1 + \frac{3}{4} k_3 V_2^2}{k_1 + \frac{3}{4} k_3 V_2^2 + \frac{3}{2} k_3 X^2} \qquad (21)$$
DG can, therefore, be considered to some extent as a mea-
sure of the intermodulation performance of a system
under test.
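A direct numerical evaluation of Eq. (21) can be useful when estimating how a given third-order coefficient translates into differential gain. The sketch below uses arbitrary illustrative values for $k_1$, $k_3$, $V_2$, and X; they are not data from this article.

```python
def differential_gain(k1, k3, V2, X):
    """Evaluate Eq. (21): DG = 1 - g(V1 = 0)/g(V1 = X) for a small chroma amplitude V2."""
    g0 = k1 + 0.75 * k3 * V2**2                     # gain seen by the sinusoid at V1 = 0
    gX = k1 + 0.75 * k3 * V2**2 + 1.5 * k3 * X**2   # gain seen by the sinusoid at V1 = X
    return 1.0 - g0 / gX

# Illustrative values only.
print(f"DG = {differential_gain(k1=10.0, k3=-0.05, V2=0.01, X=0.7) * 100:.2f} %")
```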
7.1.7. Dynamic Range. Dynamic range can be defined
as the amplitude range over which a circuit or a system
can operate without performance degradation. The mini-
mum amplitude is dictated by the input thermal noise and
the noise contributed by the system. The maximum am-
plitude is dictated by the distortion mechanisms of the
system under consideration. In general, the amount of
tolerable distortion will depend on the type of signal and
the system under test. However, for the purpose of an ob-
jective definition the maximum amplitude will be consid-
ered the input signal level at which the intermodulation
distortion is equal to the minimum amplitude [67]. The
dynamic range can, therefore, be considered to some ex-
tent as a measure of the intermodulation performance of a
system under test.
A useful working definition of the dynamic range is
that it is (1) two-thirds of the difference in level between
the noise floor and the intercept point in a 3 kHz band-
width [68] or (2) the difference between the fundamental
response input level and the third-order response input as
measured along the noise floor (sometimes defined as 3 dB bandwidth above the noise floor) in a 3 kHz bandwidth, as
shown in Fig. 4. Reducing the bandwidth improves dy-
namic range because of the effect on noise.
Because the power level at which distortion becomes
intolerable varies with signal type and application, a ge-
neric definition has evolved. The upper limit of a network's power span is the level at which the power of one IM product of a specified order is equal to the network's noise floor. The ratio of the noise floor power to the upper-limit signal power is referred to as the network's dynamic range (DR). Thus the DR can be determined from [69]
$$\mathrm{DR}_n = \frac{n-1}{n}\,(\mathrm{IP}_{n,\mathrm{in}} - \mathrm{MDS}) \qquad (22)$$

where $\mathrm{DR}_n$ is the dynamic range in decibels, n is the order, $\mathrm{IP}_{n,\mathrm{in}}$ is the input intercept power in dBm, and MDS is the minimum detectable signal power in dBm.
Figure 4. The dynamic range is the difference between the fundamental response input level and the third-order response input as measured along the noise floor (output level in dB above input versus input level in dBμV): (a) fundamental response; (b) third-order intermodulation response; (c) noise floor.

Alternatively, in receiver circuits the spurious-free dynamic range (SFDR) and the intermodulation-free
dynamic range (IFDR) are widely used to quantify the
capability of the receiver to listen to a weak station,
without disturbance from an intermodulation product
generated by strong stations on other frequencies. The
SFDR and the IFDR are in fact measures of how strong
two signals can be before the level of their intermodulation
products can reach the noise floor of the receiver. The SFDR, or the IFDR, is defined as the difference in decibels
between the power levels of the third-order intermodula-
tion IM3 (assuming that there is only a third-order non-
linearity) and the carrier when the IM3 power level equals
the noise floor at a given noise bandwidth. It can be
expressed as [70]
$$\mathrm{SFDR} = \frac{2}{3}\left[\mathrm{IIP3} - \mathrm{EIN} - 10 \log_{10}(\mathrm{NBW})\right] \qquad (23)$$

where IIP3 is the third-order input intercept point, EIN (in dB/Hz) is the equivalent input noise, and NBW (in Hz) is
the noise bandwidth.
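Equations (22) and (23) are straightforward to evaluate once the intercept point, noise floor, and bandwidth are known. The sketch below implements both; the numbers in the example calls are arbitrary illustrative values, not measurements from this article.

```python
import math

def dynamic_range_db(n, ip_n_in_dbm, mds_dbm):
    """Eq. (22): DR_n = ((n - 1)/n) * (IP_n,in - MDS), in dB."""
    return (n - 1) / n * (ip_n_in_dbm - mds_dbm)

def sfdr_db(iip3_dbm, ein_db_per_hz, nbw_hz):
    """Eq. (23): SFDR = (2/3) * (IIP3 - EIN - 10*log10(NBW)), in dB."""
    return (2.0 / 3.0) * (iip3_dbm - ein_db_per_hz - 10.0 * math.log10(nbw_hz))

# Illustrative values only.
print(f"DR3  = {dynamic_range_db(3, ip_n_in_dbm=5.0, mds_dbm=-120.0):.1f} dB")
print(f"SFDR = {sfdr_db(iip3_dbm=5.0, ein_db_per_hz=-170.0, nbw_hz=3000.0):.1f} dB")
```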
7.1.8. Adjacent- and Cochannel Power Ratio Tests. In
modern telecommunication circuits, signals constituting
one or more modulated carriers are handled. Character-
ization of the intermodulation performance of such cir-
cuits cannot, therefore, be performed using two-tone and
three-tone input signals; a combination of equally spaced tones (in practice, more than about 10 sinusoids [71]) with constant power and correlated or uncorrelated phases would be more appropriate [72].
Because of the nonlinearity of the device under test,
intermodulation products will be generated. These inter-
modulation products can be classified as adjacent-channel
distortion when their frequencies are located to the right
or to the left of the original spectrum, or cochannel dis-
tortion when their frequencies are located exactly over the
original spectrum. The adjacent-channel power ratio
(ACPR) is defined as the ratio between the total linear
output power and the total output power collected in the
upper and lower adjacent channels [73]. The cochannel
power ratio (CCPR) is defined as the ratio between total
linear output power and total distortion power collected
in the input bandwidth [73]. The intermodulation distor-
tion ratio (IMR) is the ratio between the linear output
power per tone and the output power of adjacent-channel
tones [73].
In fact, the ACPR, CCPR, and IMR distortion measure-
ments are simple extensions of the two-tone intermodula-
tion measurement [74]. However, it is important to first
generate a very clean multitone signal. This can be easily
achieved using the technique described in Ref. 75.
8. INTERMODULATION MEASUREMENT
8.1. Measurement Equipment
8.1.1. Multitone Tests. A block diagram of the system
used for multitone intermodulation measurement is
shown in Fig. 5. The multiple-frequency source can be
implemented from two or three synthesized sine/square/
triangular-wave generators. Amplifier/attenuator pairs can be added at the output of each generator. Bandpass filters can also be added to suppress the harmonic contents at the output of each generator. For RF measurements, harmonic suppression and isolation between different generators is achieved by using amplifier/circu-
lator combinations and cavity resonators [76]. The syn-
thesized sources are combined using hybrids or combiners
of adequate isolation. Spectral purity at this point is cru-
cial to the accuracy of the measurement. The multitone
output is fed to the device under test (DUT). The output of
the DUT is fed to the spectrum analyzer. For RF measure-
ments, the output of the DUT can be fed to directional
couplers. The outputs of the directional couplers are fed to
a television oscilloscope and/or a spectrum analyzer.
Alternatively, for audiofrequency measurements, the
intermodulation components of interest can be filtered out, using bandpass filters, and fed to AC voltmeters.
For audiofrequency measurements, resistive combiners
are widely used for combining the outputs of two or
more signal generators.
8.1.2. Measurement Using a Microcomputer. Intermod-
ulation can also be measured using a microcomputer [77].
A block diagram of this technique is shown in Fig. 6. This
technique is based on measuring the single-tone input-output characteristic of the DUT using a vector voltmeter.
The output of the vector voltmeter is fed to a microcom-
puter that converts it into three digital data lines repre-
senting the input amplitude, the output amplitude, and
the phase lag between the input and output signals. After
storing the data, the microcomputer increments the am-
plitude of the input signal. After storing all the necessary
data, the microcomputer, using a stochastic method, cal-
culates the amplitudes of the intermodulation components
of the DUT. Although the procedure reported in Ref. 77
uses a stochastic method for calculating the amplitudes of
the intermodulation components resulting from a two-
tone input signal, the same procedure can be applied to
any number of input tones using different analytic tech-
niques for modeling the nonlinear characteristics of the
DUT.
Alternatively, microcomputers can be added to the
measurement setup of Fig. 5 to
1. Control the frequencies of the signal sources, espe-
cially in the millimeter-wave range, where the
Figure 5. Block diagram of the two-tone test setup; multitone tests require additional signal generators, combiners, amplifiers, and bandpass filters (SG = signal generator; A = amplifier; BPF = bandpass filter; C = combiner; DUT = device under test; SA = spectrum analyzer).
difference in frequencies between the signal sources
may be less than 0.001 of the base signal frequency
[78].
2. Scan the base signal frequency over the measure-
ment range of interest in predefined steps [79].
3. Correct the power from each source so that power
delivery to the DUT will be the same across the
whole frequency range scanned.
4. Read and calculate the parameters of interest dur-
ing the measurements [80,81].
8.1.3. Noise Power Ratio Test. Figure 7 shows a block
diagram of a noise power ratio test setup [62]. The setup
consists of a white-noise generator that applies an accu-
rate level of white Gaussian noise power with known
bandwidth (equaling $\Delta\omega$ and centered around $\omega_0$) to the DUT. The output of the DUT is measured with the band-reject filter out. When the band-reject filter, with bandwidth $\delta\omega$ and centered around $\omega_0$, is switched in, a narrow band of frequencies is attenuated by about 70 dB, and a quiet channel, of width $\delta\omega$ and centered around $\omega_0$, is formed as shown in Fig. 3. At the output of the DUT, the noise power is measured in the quiet channel, using a bandpass filter with bandwidth $\delta\omega$ and centered around $\omega_0$. This noise power is due to the thermal noise and the intermodulation introduced by the DUT. The NPR is the ratio between the noise power measured without the band-reject filter inserted before the DUT to that measured with the band-reject filter inserted. The white-noise generator corrects the loading power level for the insertion loss of the band-reject filter.
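The NPR itself is simply the ratio of the two power readings taken in the quiet channel, one with the loading noise unnotched and one with the band-reject filter switched in. A minimal sketch (the power-meter readings in the example call are hypothetical) is:

```python
import math

def npr_db(p_unnotched_watts, p_notched_watts):
    """NPR: quiet-channel power measured without the band-reject filter divided by the
    power measured in the same channel with the filter switched in, expressed in dB."""
    return 10.0 * math.log10(p_unnotched_watts / p_notched_watts)

# Hypothetical power-meter readings (watts), for illustration only.
print(f"NPR = {npr_db(2.0e-6, 4.0e-8):.1f} dB")
```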
8.1.4. Noise Floor and SFDR Test. Figure 8 shows a test
setup for measurement of noise floor and the SFDR of a communication link [70]. To measure the noise floor of the communication link, the transmitter is switched off. Then the noises of the low-noise amplifier and the spectrum analyzer are measured. Switching the transmitter on increases the noise floor by the transmitter noise and
therefore the difference between the two noise measure-
ments is the noise generated by the transmitter.
To measure the SFDR, the input power is decreased
until the IM3 level equals the noise floor. Recall that decreasing the input power by 1 dB decreases the IM3 level
by 3 dB. However, this is true only if the third-order non-
linearity is dominant. Higher-order nonlinearities will
contribute to the third-order intermodulation (IM3), and
Figure 6. Block diagram of a microcomputer-based intermodulation measurement setup (SG = signal generator; DC = directional coupler; DUT = device under test; VV = vector voltmeter; MC = microcomputer).

Figure 7. Block diagram of the noise power ratio test setup (WNG = white-noise generator; BPF1 = bandpass filter with bandwidth $\Delta\omega$ centered around $\omega_0$; BRF = band-reject filter with bandwidth $\delta\omega$ centered around $\omega_0$; DUT = device under test; BPF2 = bandpass filter with bandwidth $\delta\omega$ centered around $\omega_0$; PM = power meter).
Figure 8. Setup for noise floor and SFDR measurement (SG = signal generator; CIR = circulator; C = combiner; T = transmitter; R = receiver; LNA = low-noise amplifier; SA = spectrum analyzer).
in such cases the measured SFDR will be different from
calculations obtained using Eq. (23).
8.1.5. Externally Induced Intermodulation Test. This is a
two-tone test with one signal applied to the input and the
other signal applied to the output [9]. A test setup is
shown in Fig. 9. Two directional couplers are used to
gauge both the forward-carrier power and the intermod-
ulation product levels. Two more directional couplers are
added to inject the interfering signal and to measure the
actual injected value using the spectrum analyzer.
8.2. Measurement Accuracy
8.2.1. Multitone Tests. For accurate measurements of
the intermodulation products using multitone tests, it is
essential to reduce, or remove, the nonlinear distortion
originating in the signal sources and/or the measurement
equipment. Measurement accuracy may, therefore, be
affected by the purity of the signal sources, the linearity
of the combiners, and the performance of the spectrum
analyzer.
8.2.2. Signal Sources. Measurement of the amplitudes
of the intermodulation components requires the use of two
or more signals. The frequencies of these signals must be
noncommensurate. Otherwise, harmonics in one source
might interfere with the fundamental(s) of other signal(s)
and thus interfere with the desired intermodulation com-
ponents.
Ideally the signal generators would produce perfect si-
nusoids, but in reality all signals have imperfections. Of
particular interest here is the spectral purity, which is a
measure of the inherent frequency stability of the signal.
Perhaps the most common method used to quantify the
spectral purity of a signal generator is its phase noise [82].
In the time domain, the phase noise manifests itself as a
jitter in the zero crossings of a sine wave. In the frequency
domain, the phase noise appears as sidebands surround-
ing the original frequency. Thus, mixing with other fre-
quencies, due to the nonlinearities of the DUT, would
result in additional intermodulation products. It is, there-
fore, important to consider the intermodulation due to
phase noise when calculating the intermodulation perfor-
mance of the DUT [83].
Signal generators with automatic level control (ALC) may produce signals with unwanted modulation. The ALC is implemented by rectifying the output signal of the generator and feeding back the resulting DC voltage to drive an amplitude modulator. If a second signal is applied to the output of the signal generator, the detector will produce a signal at the difference frequency of the two signals. This signal modulates the generator's output, and the resulting modulation sidebands fall on the same spectral lines as the intermodulation products of interest. Isolating the signal generators from the combiner can minimize such an effect; this is achieved by ensuring that there is as much attenuation as possible between them.
8.2.3. Combiners. Measurement of intermodulation
products is performed by applying to the input of the cir-
cuit, or the system, under test a signal consisting of two or
more different frequencies obtained from different signal
generators. The outputs of the signal generators are,
therefore, combined by a combiner. The combiner must
provide sufficient isolation between the signal sources to
reduce the possibility of producing intermodulation prod-
ucts before the combined input signal is applied to the
circuit or the system under test. While resistive combiners
are adequate for input signal levels up to a few millivolts,
for larger voltage levels the use of power combiners may
be inevitable [84]. Insertion of an attenuator in each arm
of the combiner helps minimize the distortion components
resulting from the interaction between the two signal
sources. Such components, if generated, should be at least
80 dB below the fundamental components.
A simple test to determine whether adequate isolation has been achieved is to introduce a variable attenuator between the signal-source combiner and the DUT in Fig. 6. This is set to a low value during measurements; at setup, once IMPs have been located on the spectrum analyzer, the attenuation is increased by 3 dB and the reduction in the observed IMP level is noted. If this reduction is only 3 dB, it has to be assumed that the observed IMP originated in the signal sources, not in the DUT. If, however, the reduction is 6 dB for a second-order IMP or 9 dB for a third-order IMP [see Eq. (4)], then it is safe to assume that the IMP originated in the DUT or the spectrum analyzer.
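The expected level changes follow directly from the order of the product: a product generated before the added attenuator drops by the added attenuation, while an nth-order product generated in the DUT (or analyzer) drops by n times that amount. The short sketch below is an illustration added here (the function name and tolerance are arbitrary), not part of the original procedure.

```python
def imp_origin(order, added_atten_db, observed_drop_db, tol_db=1.0):
    """Classify an IMP as source- or DUT-generated from its observed level drop.

    A product generated ahead of the added attenuator drops by added_atten_db;
    an nth-order product generated in the DUT (or spectrum analyzer) drops by
    n * added_atten_db.
    """
    if abs(observed_drop_db - added_atten_db) <= tol_db:
        return "signal sources / combiner"
    if abs(observed_drop_db - order * added_atten_db) <= tol_db:
        return "DUT or spectrum analyzer"
    return "mixed or undetermined"

print(imp_origin(order=3, added_atten_db=3.0, observed_drop_db=9.2))  # DUT side
print(imp_origin(order=2, added_atten_db=3.0, observed_drop_db=3.1))  # sources
```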
Alternatively, a technique that attenuates the parasitic
intermodulation products that result from the interaction
between the generators of the fundamental components,
before the input of the spectrum analyzer, was described
in Ref. 85. A block diagram of the technique is shown in
Fig. 10. The input to the system under test is formed by
Figure 9. Measurement of externally induced intermodulation can be performed by using two tones: one injected at the input and one injected at the output of the DUT (SG = signal generator; DC = directional coupler; PM = power meter; SA = spectrum analyzer; BPF = bandpass filter; A = amplifier).
combining the outputs of two signal generators at frequencies ω₁ and ω₂ in the combiner. The first hybrid combiner/splitter (HCS1) splits the combined signal into two branches with voltage transfer ratios α and β = √(1 − α²) at the first and second outputs, respectively. Using Eq. (1), and assuming that the system under test and the compensator have identical nonlinear characteristics, the inputs of the second hybrid combiner/splitter (HCS2) can be expressed as

V_a = \sum_{n=0}^{N} k_n (α V_i)^n      (24)
and

V_b = \sum_{n=0}^{N} k_n (\sqrt{1 - α^2} V_i)^n      (25)
Using Eqs. (24) and (25), the output of the second hybrid combiner/splitter (HCS2), with voltage transfer ratio opposite in sign and equal to the reciprocal of that of HCS1, can be expressed as

V_out = \sum_{n=0}^{N} k_n [ \sqrt{1 - α^2} (α V_i)^n - α (\sqrt{1 - α^2} V_i)^n ]      (26)
According to Eq. (26), broadband compensation occurs for the linear components of the combined signal, with n = 1.
Thus, all the linearly transformed spectral components
are eliminated. This is also true for the intermodulation
components that may result from the nonlinear interac-
tion between the two signal generators. The output of
HCS2 can, therefore, be applied directly to the spectrum
analyzer.
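A quick numerical sanity check of this cancellation can be made with a low-order polynomial model of the two branches. The sketch below is illustrative only: the coefficients k_n, the split ratio α, and the tone frequencies are assumed values, not parameters from Ref. 85. It shows the fundamentals collapsing at the HCS2 output while the intermodulation products survive.

```python
import numpy as np

# Illustrative branch model and test settings (assumed values, not from Ref. 85)
k = [0.0, 1.0, 1e-2, 1e-3]        # k_0..k_3 of the common nonlinear characteristic
alpha = 0.6
beta = np.sqrt(1.0 - alpha**2)

fs, nsamp = 1.0e6, 100_000        # 0.1 s record: both tones complete whole cycles
t = np.arange(nsamp) / fs
f1, f2 = 10.0e3, 13.0e3
v_i = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

def branch(v, gain):
    """One nonlinear branch: sum_n k_n (gain*v)^n, as in Eqs. (24) and (25)."""
    return sum(kn * (gain * v)**p for p, kn in enumerate(k))

v_a = branch(v_i, alpha)                        # system under test, Eq. (24)
v_out = beta * v_a - alpha * branch(v_i, beta)  # HCS2 output, Eq. (26)

def level_db(v, f):
    spec = np.abs(np.fft.rfft(v))
    return 20 * np.log10(spec[int(round(f * nsamp / fs))] + 1e-300)

for f, name in [(f1, "fundamental f1"), (2*f2 - f1, "IM3 at 2f2 - f1")]:
    print(f"{name}: DUT branch {level_db(v_a, f):6.1f} dB,"
          f" compensated {level_db(v_out, f):6.1f} dB")
```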
This technique does not require complicated high-order selective filters and can attenuate the parasitic intermodulation components and the fundamental frequency components by about 50 dB over a wide range of frequencies differing by 7-10 octaves. However, it requires a compensator with a nonlinear characteristic similar to that of the system under test.
8.2.4. Spectrum Analyzers. Spectrum analyzers are
widely used in measuring the intermodulation perfor-
mance of electronic circuits and systems. Internal circuits
of the spectrum analyzers are, themselves, imperfect and
will also produce distortion products [46]. The distortion performance of the analyzers is usually specified by the manufacturers, either directly or lumped into a dynamic range specification. The performance of the analyzer can
be stretched, however, if the nature of these distortion
products is understood.
Amplitudes of the distortion products resulting from the internal circuits of the analyzer can be reduced by reducing the signal levels at the analyzer's input. Thus, internal and/or external attenuators can reduce the input signal levels to the analyzer, reduce its distortion products, and improve the intermodulation measurement range of the spectrum analyzer. However, reduced input levels to the analyzer mean a reduced signal-to-noise ratio, and the distortion component to be measured may be buried in the noise. While reducing the resolution bandwidth of the analyzer can reduce noise, this may lead to a slower sweep rate. Thus, achieving an optimum dynamic range involves tradeoffs between input signal levels and
analyzer distortion. Usually, datasheets of analyzers will
contain information about noise level in each resolution
bandwidth and distortion products generated by the ana-
lyzer for each input level. Using this information, one can
determine the dynamic range of the analyzer for various
input levels [86].
Whenever good selectivity, sensitivity, and dynamic range are of prime importance, test receivers may be used in preference to spectrum analyzers [6]. Alternatively, if the frequencies of the intermodulation components of interest are sufficiently lower (or higher) than the fundamental frequencies, then lowpass (or highpass) filters can be used to remove the fundamental components that would give rise to other nonlinear distortion components in the spectrum analyzer. Attenuation factors of 80 dB or more, at frequencies outside the band of interest, are recommended. The insertion loss of the lowpass (or the highpass) filter should be as small as possible; 0.4 dB or less is recommended.
If the frequency of the intermodulation component of interest is not sufficiently higher (or lower) than the fundamental frequencies, then it would be necessary to have complicated multiple-section high-order filters with amplitude-frequency characteristics that are nearly rectangular. Such filters will change, to some extent, the
amplitude of the intermodulation components, and this
will complicate calculation of the intermodulation perfor-
mance of the system under test. A method for compensat-
ing for a large fundamental component, thus allowing the
measurement of small intermodulation components in its
presence, was described in Ref. 87.
A block diagram of the compensation method is shown in Fig. 11. The input to the system under test is formed of one large-amplitude signal at frequency ω₁ and one small-amplitude signal at frequency ω₂, with ω₁ ≪ ω₂. The output of the system under test contains fundamental components at frequencies ω₁ and ω₂, and intermodulation components at frequencies ω₂ ± nω₁, n = 1, 2, ..., N. In order to measure the small-amplitude intermodulation components, it is necessary to avoid applying to the analyzer the fundamental component at frequency ω₂. This can be achieved as follows. The output of the system under test is
Figure 10. A technique for attenuating the intermodulation products resulting from interaction between the signal generators of the fundamental components (SG = signal generator; C = combiner; HCS = hybrid combiner/splitter; DUT = device under test; CO = compensator; SA = spectrum analyzer).
fed to the second band-reject filter BRF2 to suppress the fundamental component at ω₁. The output of the signal generator of frequency ω₂ is fed to the first band-reject filter BRF1 to suppress any component at frequency ω₁ before reaching the phase shifter through the combiner. The phase shifter compensates, at the frequency ω₂, for the phase shift through the system under test.
Ideally, the voltages of frequency ω₂ at the inputs of the differential amplifier are equal. Thus, the output of the differential amplifier at frequency ω₂ is ideally zero. In practice, the output voltage at ω₂ will be attenuated by 50-60 dB [6]. The output of the differential amplifier, with suppressed fundamental component at frequency ω₂, can be applied to the spectrum analyzer. This compensation technique, which entails additional filters and matching units, can be used only for broadband measurements with ω₁ ≪ ω₂.

Figure 11. Compensation method for the measurement of small-amplitude intermodulation products in the presence of a large fundamental (SG = signal generator; C = combiner; BRF = band-reject filter; DUT = device under test; PS = phase shifter; DA = differential amplifier).
Although spectrum analyzers using digital IF sections
may not suffer from the internally generated distortion,
discussed above, they may suffer from the relatively
low-level distortion products resulting from the analog-
to-digital conversion. The amplitudes of these products are usually less sensitive to the amplitude of the signal components.
8.2.5. Noise Power Ratio Test. The accuracy of the noise power ratio (NPR) test is affected mainly by two factors: (1) the noise floor of the amplifier, which dominates under very low loading conditions, and (2) the distortion products produced under very high loading conditions. It is, therefore, recommended to sweep the loading between prespecified start and stop levels. The NPR is measured at the different levels, and the smallest measured value of NPR is taken as the worst case.
8.2.6. Microcomputer-Based Tests. Quantization errors associated with the analog-to-digital conversion of the data in microcomputer-based intermodulation tests must be taken into account. Measurement errors due to quantization depend on the converter word length, which determines the dynamic range of operation [77].
BIBLIOGRAPHY
1. L. E. Kinsler, A. R. Frey, A. B. Coppens, and J. V. Sanders, Fundamentals of Acoustics, Wiley, 1982, pp. 267-268.
2. K. Y. Eng and O.-C. Yue, High-order intermodulation effects in digital satellite channels, IEEE Trans. Aerospace Electron. Syst. AES-17:438-445 (1981).
3. C. D. Bod, C. S. Guenzer, and C. A. Carosella, Intermodulation generation by electron tunneling through aluminum-oxide films, Proc. IEEE 67:1643-1652 (1979).
4. W. H. Higa, Spurious signals generated by electron tunneling on large reflector antennas, Proc. IEEE 63:306-313 (1975).
5. P. L. Aspden and A. P. Anderson, Identification of passive intermodulation product generation in microwave reflecting surfaces, IEE Proc. H 139:337-342 (1992).
6. P. L. Liu, A. D. Rawlins, and D. W. Watts, Measurement of intermodulation products generated by structural components, Electron. Lett. 24:1005-1007 (1988).
7. M. Otala and J. Lammasniemi, Intermodulation at the amplifier-loudspeaker interface, Wireless World 86:45-47 (Nov. 1980); 42-44, 55 (Dec. 1980).
8. E. M. Cherry and G. K. Cambrell, Output resistance and intermodulation distortion in feedback amplifiers, J. Audio Eng. Soc. 30:178-191 (1982).
9. E. Franke, Test setup gauges externally-induced transmitter IM, Microwaves RF 32:95-98 (April 1993).
10. W. Wharton, S. Metcalfe, and G. C. Platts, Broadcast Transmission Engineering Practice, Butterworth-Heinemann, Oxford, UK, 1991, Chapter 5.
11. J. M. Lindsey, L. S. Riggs, and T. H. Shumpert, Intermodulation effects induced on parallel wires by transient excitation, IEEE Trans. Electromagn. Compat. 31:218-222 (1989).
12. M. Otala, Non-linear distortion in audio amplifiers, Wireless World 83:41-43 (Jan. 1977).
13. E. M. Cherry, Intermodulation distortion in audio amplifiers, Proc. IREE Conf. Int., Australia, 1983, pp. 639-641.
14. W. G. Jung, M. L. Stephens, and C. C. Todd, An overview of SID and TIM, Part I, Audio 63:59-72 (June 1979).
15. R. R. Cordell, Another view of TIM, Audio 64:38-49 (Feb. 1980).
16. W. M. Leach, Transient IM distortion in power amplifiers, Audio 59:34-41 (Feb. 1975).
17. S. A. Maas, Volterra analysis of spectral regrowth, IEEE Microwave Guided Wave Lett. 7:192-193 (1997).
18. J. F. Sevic, M. B. Steer, and A. M. Pavio, Nonlinear analysis methods for the simulation of digital wireless communication systems, Int. J. Microwave Millimeter-wave Comput. Aid. Design 6:197-216 (1996).
19. J. F. Sevic and M. B. Steer, Analysis of GaAs MESFET spectrum regeneration driven by a DQPSK modulated source, IEEE Int. Microwave Symp. Digest, June 1995, pp. 1375-1378.
20. K. G. Gard, H. M. Gutierrez, and M. B. Steer, Characterization of spectral regrowth in microwave amplifiers based on the nonlinear transformation of a complex Gaussian process, IEEE Trans. Microwave Theory Tech. 47:1059-1069 (1999).
21. G. T. Zhou, Analysis of spectral regrowth of weakly nonlinear amplifiers, IEEE Commun. Lett. 4:357-359 (2000).
22. W. M. Leach, Suppression of slew rate and transient IM distortions in audio power amplifiers, J. Audio Eng. Soc. 25:466-473 (1977).
23. M. Schetzen, The Volterra and Wiener Theories of Nonlinear Systems, Wiley, New York, 1980.
24. E. V. D. Eijnde and J. Schoukers, Steady-state analysis of a periodically excited nonlinear system, IEEE Trans. Circuits Syst. 37:232-242 (1990).
25. S. Naryanan, Transistor distortion analysis using the Volterra series representation, Bell Sys. Tech. J. 46:999-1024 (1967).
26. D. D. Weitner and J. F. Spina, Sinusoidal Analysis and Modeling of Weakly Nonlinear Circuits, Van Nostrand, New York, 1980.
27. P. Harrop and T. A. C. M. Claasen, Modelling of an FET mixer, Electron. Lett. 14:369-370 (1978).
28. W. G. Jung, M. L. Stephens, and C. C. Todd, An overview of SID and TIM, Part III, Audio 63:42-59 (Aug. 1979).
29. M. T. Abuelmaatti, Prediction of the transient intermodulation performance of operational amplifiers, Int. J. Electron. 55:591-602 (1983).
30. S. A. Maas, Nonlinear Microwave Circuits, Artech House, Norwood, MA, 1988.
31. P. Wambacq and W. Sansen, Distortion Analysis of Analog Integrated Circuits, Kluwer Academic Publishers, Boston, 1998.
32. S. A. Maas, Applying Volterra-series analysis, Microwaves RF 38:55-64 (1999).
33. D. Atherton, Nonlinear Control Engineering: Describing Function Analysis, Van Nostrand-Reinhold, New York, 1975.
34. S. Collins and K. Flynn, Intermodulation characteristics of ferrite-based directional couplers, Microwave J. 42:122-130 (Nov. 1999).
35. M. Bayrak and F. A. Benson, Intermodulation products from nonlinearities in transmission lines and connectors at microwave frequencies, Proc. IEE 122:361-367 (1975).
36. M. B. Amin and F. A. Benson, Nonlinear effects in coaxial cables at microwave frequencies, Electron. Lett. 13:768-770 (1977).
37. K. Y. Eng and O. C. Yue, High-order intermodulation effects in digital satellite channels, IEEE Trans. Aerospace Electron. Syst. AES-17:438-445 (1981).
38. P. L. Aspden and A. P. Anderson, Identification of passive intermodulation product generation on microwave reflecting surfaces, IEE Proc. H 139:337-342 (1992).
39. M. Lang, The intermodulation problem in mobile communications, Microwave J. 38:20-28 (May 1995).
40. P. L. Lui, A. D. Rawlins, and D. W. Watts, Measurement of intermodulation products generated by structural components, Electron. Lett. 24:1005-1007 (1988).
41. B. G. M. Helme, Passive intermodulation of ICT components, Proc. IEE Colloq. Screening Effectiveness Measurements, 1998, pp. 1/1-1/8.
42. P. L. Lui and A. D. Rawlins, Passive nonlinearities in antenna systems, Proc. IEE Colloq. Passive Intermodulation Products in Antennas and Related Structures, 1989, pp. 6/1-6/7.
43. J. T. Kim, I.-K. Cho, M. Y. Jeong, and T.-G. Choy, Effects of external PIM sources on antenna PIM measurements, Electronics and Telecommunication Research Institute (ETRI) J. 24:435-442 (Dec. 2002).
44. J. G. Gardiner and H. Dincer, The measurement and characterisation of non-linear interactions among emissions from communal transmitting sites, Proc. 2nd Int. Conf. Radio Spectrum Conservation Techniques, IEE Publication 224, 1983, pp. 39-43.
45. M. T. Abuelmaatti, Prediction of passive intermodulation arising from corrosion, IEE Proc. Sci. Meas. Technol. 150:30-34 (2003).
46. R. A. Witte, Spectrum and Network Measurements, Prentice-Hall, Englewood Cliffs, NJ, 1991, Chapter 7.
47. S. Hunziker and W. Baechtold, Simple model for fundamental intermodulation analysis of RF amplifiers and links, Electron. Lett. 32:1826-1827 (1996).
48. G. A. A. A. Hueber, B. Nijholt, and H. Tendeloo, Twin-tone tape testing, J. Audio Eng. Soc. 24:542-553 (1976).
49. J. Li, R. G. Bosisio, and K. Wu, A simple dual-tone calibration of diode detectors, Proc. IEEE Instrumentation and Measurement Technology Conf., Hamamatsu, Japan, 1994, pp. 276-279.
50. J. D. Giacomini, Most ADC systems require intermodulation testing, Electron. Design 40(17):57-65 (1992).
51. M. Benkais, S. L. Masson, and P. Marchegay, A/D converter characterization by spectral analysis in dual-tone mode, IEEE Trans. Instrum. Meas. 44:940-944 (1995).
52. B. D. Loughlin, Nonlinear amplitude relations and gamma correction, in K. Mcllwain and C. Dean, eds., Principles of Color Television, Wiley, New York, 1956, pp. 200-256.
53. M. Kanno and I. Minowa, Application of nonlinearity measuring method using two frequencies to electrical components, IEEE Trans. Instrum. Meas. IM-34:590-593 (1985).
54. L. Robles, M. A. Ruggero, and N. C. Rich, Two-tone distortion in the basilar membrane of the cochlea, Nature 349:413-414 (1991).
55. T. Maseng, On the characterization of a bandpass nonlinearity by two-tone measurements, IEEE Trans. Commun. COM-26:746-754 (1978).
56. H. Roering, The twin-tone distortion meter: A new approach, J. Audio Eng. Soc. 31:332-339 (1983).
57. E. M. Cherry, Amplitude and phase intermodulation distortion, J. Audio Eng. Soc. 31:298-303 (1983).
58. H. H. Scott, Audible audio distortion, Electronics 18:126 (Jan. 1945).
59. A. N. Thiele, Measurement of nonlinear distortion in a band-limited system, J. Audio Eng. Soc. 31:443-445 (1983).
60. G. L. Heiter, Characterization of nonlinearities in microwave devices and systems, IEEE Trans. Microwave Theory Tech. MTT-21:797-805 (1973).
61. A. D. Broadhurst, P. F. Bouwer, and A. L. Curle, Measuring television transposer intermodulation distortion, IEEE Trans. Broadcast. 34:344-355 (1988).
62. B. Hessen-Schmidt, Test set speeds NPR measurements, Microwaves RF 33:126-128 (Jan. 1994).
63. B. Arnold, Third order intermodulation products in a CATV system, IEEE Trans. Cable Television CATV-2:67-79 (1977).
64. O. A. Dogha and M. B. Das, Cross-modulation and intermodulation performance of MOSFETs in tuned high-frequency amplifiers, Int. J. Electron. 45:307-320 (1978).
65. J. H. Foster and W. E. Kunz, Intermodulation and crossmodulation in travelling-wave tubes, Proc. Conf. Int. Tubes pour Hyperfrequences, Paris, 1964, pp. 75-79.
66. Differential Phase and Gain at Work, Hewlett-Packard Application Note 175-1, 1975.
67. J. Smith, Modern Communication Circuits, McGraw-Hill, New York, 1987, Chapter 3.
68. J. Dyer, The facts and figures of HF receiver performance, Electron. World + Wireless World 99:1026-1030 (1993).
69. U. L. Rohde and D. P. Newkirk, RF/Microwave Circuit Design for Wireless Applications, J. Wiley, New York, 2000.
70. G. Steiner, W. Baechtold, and S. Hunziker, Bidirectional single fibre links for base station remote antenna feeding, Proc. European Conf. Networks & Optical Communications, Stuttgart, Germany, June 6-9, 2000.
71. R. Hajji, F. Beauregrd, and F. Ghannouchi, Multitone power and intermodulation load-pull characterization of microwave transistors suitable for linear SSPAs design, IEEE Trans. Microwave Theory Tech. 45:1093-1099 (1997).
72. N. B. Carvalho and J. C. Pedro, Multi-tone intermodulation distortion performance of 3rd order microwave circuits, IEEE Int. Microwave Theory and Techniques Symp. Digest, 1999, pp. 763-766.
73. J. C. Pedro and N. B. Carvalho, On the use of multitone techniques for assessing RF components intermodulation distortion, IEEE Trans. Microwave Theory Tech. 47:2393-2402 (1999).
74. N. B. Carvalho and J. C. Pedro, Compact formulas to relate ACPR and NPR to two-tone IMR and IP3, Microwave J. 42:70-84 (Dec. 1999).
75. R. Hajji, F. Beauregrd, and F. Ghannouchi, Multi-tone transistor characterization for intermodulation and distortion analysis, IEEE Int. Microwave Theory and Techniques Symp. Digest, 1996, pp. 1691-1694.
76. G. Hamer, S. Kazeminejad, and D. P. Howson, Test set for the measurement of IMDs at 900 MHz, IEE Colloq. Passive Intermodulation Products in Antennas and Related Structures, IEE Digest 1989/94, London, 1989.
77. T. Sasaki and H. Hataoka, Intermodulation measurement using a microcomputer, IEEE Trans. Instrum. Meas. IM-30:262-264 (1981).
78. P. A. Morton, R. F. Ormondroyd, J. E. Bowers, and M. S. Demokan, Large-signal harmonic and intermodulation distortions in wide-bandwidth GaInAsP semiconductor lasers, IEEE J. Quantum Electron. 25:1559-1567 (1989).
79. S. Mukherjee, Vector measurement of nonlinear transfer function, IEEE Trans. Instrum. Meas. 44:892-897 (1994).
80. C. Tsironis, Two tone intermodulation measurements using a computer-controlled microwave tuner, Microwave J. 32:161-163 (Oct. 1989).
81. A. A. M. Saleh and M. F. Wazowicz, Efficient, linear amplification of varying-envelope signals using FETs with parabolic transfer characteristics, IEEE Trans. Microwave Theory Tech. MTT-33:703-710 (1985).
82. B. Cheng, Signal generator spectral purity consideration in RF communications testing, Microwave J. 42:22-32 (Dec. 1999).
83. S. Ciccarelli, Predict receiver IM in the presence of LO phase noise, Microwaves RF 35:86-90 (1996).
84. A. M. Rudkin, ed., Electronic Test Equipment, Granada, London, 1981, Chapter 2.
85. Yu. M. Bruk and V. V. Zakharenko, Broadband compensation for dynamic-range measurements by intermodulation, Instrum. Exp. Tech. 36(Part 1)(4):557-562 (1993).
86. Spectrum Analyzer Series, Hewlett-Packard Application Note 150-11, 1976.
87. V. G. Frenkel and M. S. Shterengas, Auxiliary unit for a spectrum analyzer when measuring intermodulation distortion, Meas. Tech. 32:385-387 (April 1989).
ITERATIVE METHODS
ROBERT J. BURKHOLDER
JIN-FA LEE
The Ohio State University
Columbus, Ohio
1. INTRODUCTION
Iterative methods are used in RF and microwave engi-
neering to solve complex systems by repeatedly refining
an approximate solution until a desired level of accuracy
or performance is achieved. In many such problems, an
exact solution does not exist and a direct numerical solu-
tion may not be feasible because of the very large number
of degrees of freedom. Typical applications include the so-
lution of large systems of differential and integral equa-
tions that may involve thousands or millions of unknown
variables. For these problems it may not be possible to
generate and store a full system matrix, and then solve it
directly (e.g., by inversion, factorization, or Gauss elimi-
nation). Iterative methods only need to apply an operator
(or system matrix) to the solution at each iteration. They
are particularly well suited for the solution of sparse ma-
trix systems because a large percentage of the operations
involved are negligible.
Mathematically, an iterative algorithm starts with an
initial approximate solution, and repeatedly applies an
operator to the solution to improve its accuracy at each
iteration. Eventually the solution should converge to a
given level of accuracy. Convergence is the primary issue
associated with any iterative method. The solution may
converge very slowly if the iterative operator is not well
conditioned, or it may even diverge. Figure 1 illustrates
the basic iterative loop.
Figure 1. Schematic diagram of an iterative solution: an initial approximate solution is repeatedly improved by applying the iterative operator to the previous solution until the solution is sufficiently accurate.
There are two broad categories of iterative methods:
stationary and nonstationary. Stationary methods are
characterized by an operator that does not change with
each iteration. Classical iterative methods are included in
this category, such as Jacobi and GaussSeidel. Conjugate-
gradient methods are included in the class of nonstation-
ary methods, wherein some parameters in the operator
change with each iteration [1].
In general, conjugate-gradient iterative methods have
better convergence properties than do classical iterative
methods when compared over a wide range of problems.
The convergence of classical methods tends to be very
problem-dependent. In fact, classical methods are often
based on the underlying physics of a particular scenario.
For example, a classical iterative algorithm may be de-
signed to model the multiple electromagnetic (EM) wave
scattering between two or more objects. Such an algorithm
could be very rapidly convergent for that problem, but
slowly convergent or even divergent for a different prob-
lem. On the other hand, conjugate-gradient methods have
theoretically guaranteed convergence if the system matrix
is nonsingular, although in practice the limited numerical
precision of a computer may cause the algorithm to stall.
The convergence of any iterative method may be im-
proved by altering the formulation so that it is better con-
ditioned. This is referred to as preconditioning the
operator or system of equations. The accuracy of the solu-
tion at each iteration may be gauged in terms of the re-
sidual error, which is a measure of how well the solution satisfies the original system of equations.
2. HISTORICAL REVIEW OF ITERATIVE METHODS IN
ELECTROMAGNETICS
Iterative methods in EM did not become popular until ad-
vances in computer technology made it possible to solve
large systems of equations. Classical iterative methods
were developed to model physical EM interactions be-
tween different parts of a geometry. Thiele et al. first developed a hybrid technique to combine physical optics and the method of moments in 1982 [2-4]. The solution iter-
ates between the optically lit region and the shadow re-
gion of an arbitrary scattering geometry. This method was
extended further and made more general by Hodges and
Rahmat-Samii [5], including the interactions between an-
tennas and their supporting platform. Domain decompo-
sition was used by Sullivan and Carin to break up a
method-of-moments (MoM) problem into multiple, sim-
pler, solution regions [6]. Iterative method of moments and
iterative physical optics have been used to solve multi-
bounce problems such as the EM scattering from large
open-ended cavities [7,8]. Classical iterative methods have
also been applied extensively to compute the scattering
from rough surfaces. The forward-backward method de-
veloped by Holliday et al. [9], and the method of ordered
multiple interactions of Kapp and Brown [10], take ad-
vantage of the dominant forward and backward propaga-
tion of EM waves over a rough surface. The generalized forward-backward method extended this work to include an obstacle on the rough surface by modifying the matrix splitting used in the forward-backward method [11]. Comparisons of stationary with nonstationary iterative methods are presented in Refs. 12-14.
The conjugate gradient (CG) method was developed in
1952 by Hestenes and Stiefel [15]. However, like the clas-
sical iterative methods, it was not used in the area of elec-
tromagnetics until advances in computers made it possible
to solve large linear systems. Sarkar and Rao used the CG
method to solve method of moments problems in 1984 [16],
and Sarkar and Arvas presented a more general CG de-
velopment for eletromagnetics problems in 1985 [17]. The
CGfast Fourier transform method (CG-FFT) became pop-
ular for solving quasiplanar geometries in the late 1980s
[18,19]. The development of fast integral equation meth-
ods, such as the CG-FFT, the fast multipole method [20],
the adaptive integral method [21], and the precorrected
FFT method [22] gave CG methods a boost. These methods
greatly reduce the computational cost of applying the in-
tegral equation operator, thereby allowing very large sys-
tems of equations to be solved.
3. MATRIX NOTATION FOR ITERATIVE METHODS
The solution of a system of equations with N degrees of
freedom, or unknown variables, may be expressed in ma-
trix format as

A x = b      (1)

where A is an N × N system matrix, x is a column vector containing the N unknown coefficients, and b is a known excitation-dependent column vector. The individual elements of this equation may be expressed as

b_m = \sum_{n=1}^{N} A_{mn} x_n      (2)
This matrix equation is obtained by discretizing the EM
operator governing the problem of interest, whether it is
from a differential equation or integral equation formula-
tion. The unknown quantity, such as the EM fields or equivalent currents, is expanded into a set of N known basis functions with unknown coefficients compris-
ing the column vector x. The N basis functions are tested
(or sampled) with N test functions to yield a system of N
equations.
The preceding equation for the unknown coefficients may be solved using direct matrix inversion or factorization. However, the operational complexity for the direct approach is O(N^3), that is, of order N-cubed. This means that the number of computations necessary to solve the system is proportional to N^3, which may be too costly when there are thousands or millions of unknowns. Iterative methods have an operational complexity of no more than O(N^2) per iteration, which is the cost of doing one matrix-vector multiplication. So as long as the solution converges quickly, the iterative method is much more efficient.
Iterative methods seek to solve Eq. (1) by succes-
sively improving an initial solution to a desired degree of
accuracy. The residual error vector is a measure of the accuracy of the solution after k iterations and is defined by

r^{(k)} = b - A x^{(k)}      (3)

The residual error norm, or simply the residual error, is the length of this vector normalized to the length of the excitation vector, ||r^{(k)}|| / ||b||, where ||r^{(k)}|| = \sqrt{r^{(k)} \cdot r^{(k)}}. Here, the inner product (or vector product) of two column vectors is Hermitian, defined by

a \cdot b = \sum_{n=1}^{N} a_n^* b_n      (4)

where the * superscript denotes the complex conjugate. (Note: The CG algorithm described later does not use the complex conjugate in the inner product, as will be made apparent.) The residual error tells us how well the solution satisfies the system of equations, and is most often used as the criterion for halting the iterations. The absolute error vector is defined by

e^{(k)} = x^{(k)} - x      (5)

where x is the exact solution to (1). The spectral radius of a matrix is defined as the magnitude of its largest eigenvalue. This quantity is important for determining convergence of classical iterative methods, whereas the eigenvalue spectrum of a matrix determines the convergence of CG methods.
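To make the two inner products used in this article concrete, the short sketch below (added for illustration; the function names are arbitrary) contrasts the Hermitian product of Eq. (4) with the non-conjugated product used later by the complex symmetric CG algorithm, and shows the residual-norm stopping criterion.

```python
import numpy as np

def hermitian_dot(a, b):
    """Eq. (4): sum_n conj(a_n) * b_n (equivalent to np.vdot(a, b))."""
    return np.sum(np.conj(a) * b)

def symmetric_dot(a, b):
    """Non-conjugated product, used by the complex symmetric CG algorithm."""
    return np.sum(a * b)

def residual_norm(A, x, b):
    """||b - A x|| / ||b||, the usual criterion for halting the iterations."""
    r = b - A @ x
    return np.sqrt(hermitian_dot(r, r).real) / np.sqrt(hermitian_dot(b, b).real)
```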
4. CLASSICAL ITERATIVE METHODS
As is apparent from the historical review presented ear-
lier, classical iterative methods are often used to solve
problems via a physical decomposition of the geometry,
sometimes even using a different solution technique for
each region. All of these methods can be cast in the form of
matrix splittings, where the original system matrix is de-
composed in some manner that makes the problem easier
to solve. Figure 2 shows some common matrix splittings.
We will focus on the lower-upper (LU) triangular splitting. The block-diagonal and banded matrix splittings are
extensions of the LU splitting, where the diagonal D is
replaced by the block-diagonal or banded portion of the
matrices. Likewise, the hybrid decomposition is a special
case of the block-diagonal splitting with only two blocks on
the diagonal.
All the matrix splittings of Fig. 2 have the general form A = M - N. We may then write an iterative equation from Eq. (1) as

M x^{(k)} = N x^{(k-1)} + b      (6)

starting with some initial solution candidate x^{(0)} and solving repeatedly. It is easy to show that if x^{(k)} = x^{(k-1)}, then Eq. (1) is satisfied and x^{(k)} = x. To solve (6) for x^{(k)}, we need M to be easily invertible or factorizable. Diagonal matrices are trivial to invert, and block-diagonal matrices are easily inverted by inverting each block independently of the other blocks. Lower triangular and upper triangular matrices are also easy to invert via forward and backward substitutions, respectively [1]. All of these types of inversions are computed much more efficiently than inverting the entire system matrix A.
The absolute error vector at the kth iteration may be shown to be

e^{(k)} = (M^{-1} N)^k (x^{(0)} - x)      (7)

Therefore, the spectral radius of the matrix M^{-1}N must be less than unity to guarantee convergence [1]. This ensures that the absolute error approaches zero as k goes to infinity. The residual error vector for the kth iteration may be shown to be

r^{(k)} = N (x^{(k)} - x^{(k-1)})

which is easily computed by saving the matrix-vector product N x^{(k-1)} from the previous iteration. Some common iterative algorithms based on the matrix splittings of Fig. 2 are discussed next.
4.1. Jacobi Iteration
This is the simplest classical iteration algorithm. We choose M = D and N = -(L + U), so the iterative equation becomes

D x^{(k)} = b - (L + U) x^{(k-1)}      (8)

The magnetic field integral equation (MFIE) has this form, which is also used in the iterative physical optics
Figure 2. Some common matrix splittings: (a) hybrid decomposition; (b) lower-upper triangular; (c) block-diagonal; (d) banded.
technique [8]. The operational cost is O(N^2), which is the cost of computing the matrix-vector product (L + U) x^{(k-1)} on the right-hand side (RHS) of (8).
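A minimal dense-matrix transcription of the Jacobi iteration (8) is sketched below. It is an illustration only, written for a small NumPy array rather than for the large sparse or integral-equation operators discussed in the text; the test matrix is an arbitrary diagonally dominant example.

```python
import numpy as np

def jacobi(A, b, tol=1e-6, max_iter=500):
    """Solve A x = b with the Jacobi splitting M = D, N = -(L + U), Eq. (8)."""
    D = np.diag(A)                      # diagonal part of A
    R = A - np.diag(D)                  # L + U (all off-diagonal terms)
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_new = (b - R @ x) / D         # D x^(k) = b - (L + U) x^(k-1)
        if np.linalg.norm(b - A @ x_new) <= tol * np.linalg.norm(b):
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example on a small diagonally dominant system (spectral radius of M^-1 N < 1)
A = np.array([[4.0, 1.0, 0.5], [1.0, 5.0, 1.0], [0.5, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x, iters = jacobi(A, b)
print(iters, np.allclose(A @ x, b, atol=1e-5))
```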
4.2. Gauss-Seidel Method
This is an improvement over simple Jacobi iteration [1]. Here we choose M = D + L and N = -U, resulting in

(D + L) x^{(k)} = b - U x^{(k-1)}      (9)

This equation is solved using forward substitution. This is easy to see by writing the expression for the individual elements as

x_m^{(k)} = [ b_m - \sum_{n=1}^{m-1} A_{mn} x_n^{(k)} - \sum_{n=m+1}^{N} A_{mn} x_n^{(k-1)} ] / D_m      (10)

The elements x_m^{(k)} are updated sequentially for m = 1, 2, ..., N, so the updated values can be used on the RHS of (10). The convergence of Gauss-Seidel is expected to be somewhat better than that of Jacobi, and with the same operational cost.
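For comparison, a sketch of the element-by-element sweep of Eq. (10) is given below (again an illustrative dense-matrix version, not a production solver); the partially updated vector is reused within the same sweep.

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-6, max_iter=500):
    """Solve A x = b by the forward sweep of Eq. (10), reusing updated entries."""
    n = len(b)
    x = np.zeros_like(b)
    for k in range(max_iter):
        for m in range(n):
            # b_m minus off-diagonal contributions, using the newest x available
            s = b[m] - A[m, :m] @ x[:m] - A[m, m+1:] @ x[m+1:]
            x[m] = s / A[m, m]
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```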
4.3. Symmetric Gauss-Seidel Method
The Gauss-Seidel method can be formulated using forward or backward substitution. A symmetric form of Gauss-Seidel iteration is obtained using both forward and backward substitution in the following two-step algorithm:

(D + L) x^{(k-1/2)} = b - U x^{(k-1)}
(D + U) x^{(k)} = b - L x^{(k-1/2)}      (11)

This is the form of the forward-backward method [9], or the method of ordered multiple interactions [10]. This two-step algorithm has the same operational cost as a one-step algorithm because the half matrix-vector product L x^{(k-1/2)} is reused in step 2 of each iteration, and U x^{(k-1)} may be saved from the previous iteration and reused.
4.4. Relaxation
Unless the problem geometry is very well ordered, or the system matrix is strongly diagonally dominant, the classical iterative algorithms above will probably have poor convergence properties. To improve convergence, a relaxation parameter ω [1] (or damping coefficient) may be introduced. This is a constant, usually in the range 0 < ω < 2, such that the relaxed iterative equations reduce to the basic equations above for ω = 1. The relaxed Jacobi iterative equation is given by

D x^{(k)} = ω b + [(1 - ω) D - ω (L + U)] x^{(k-1)}      (12)

It is easy to show that (12) reduces to (8) for ω = 1, and that if x^{(k)} = x^{(k-1)} then (1) is satisfied and x^{(k)} = x for any nonzero ω.
Likewise, the relaxed Gauss-Seidel method, also known as successive overrelaxation (SOR) [1], is given by

(D + ω L) x^{(k)} = ω b + [(1 - ω) D - ω U] x^{(k-1)}      (13)

The relaxed form of symmetric Gauss-Seidel is known as symmetric successive overrelaxation (SSOR) [1], and is given by

(D + ω L) x^{(k-1/2)} = ω b + [(1 - ω) D - ω U] x^{(k-1)}
(D + ω U) x^{(k)} = ω b + [(1 - ω) D - ω L] x^{(k-1/2)}      (14)
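The sketch below adds the relaxation parameter of Eq. (13) to the Gauss-Seidel sweep; ω = 1 recovers ordinary Gauss-Seidel, and the value 1.2 used as the default is an arbitrary choice for illustration, not a recommendation from the text.

```python
import numpy as np

def sor(A, b, omega=1.2, tol=1e-6, max_iter=500):
    """Successive overrelaxation, Eq. (13); omega = 1 recovers Gauss-Seidel."""
    n = len(b)
    x = np.zeros_like(b)
    for k in range(max_iter):
        for m in range(n):
            gs = (b[m] - A[m, :m] @ x[:m] - A[m, m+1:] @ x[m+1:]) / A[m, m]
            x[m] = (1.0 - omega) * x[m] + omega * gs   # relaxed update
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter
```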
Figure 3 shows a plot of the convergence of relaxed Jacobi,
SOR, SSOR, and the biconjugate gradient stabilized
(BCGS) algorithms for the problem of radar scattering
from a perfect electrically conducting cylinder computed
by the method of moments [14]. For this relatively simple
problem, the SSOR method has the best convergence and
the BCGS, the worst. However, for more arbitrary geom-
etries the classical iterations may fail to converge, and
may eventually diverge.
5. CONJUGATE-GRADIENT (CG) METHODS
CG methods are superior to classical iterative methods
in the sense that they are theoretically guaranteed to
converge in no more than N iterations if the matrix is
Figure 3. Convergence of classical iterative methods (Jacobi, forward-backward, SOR, SSOR) compared with BCGS (Bi-CGStab): residual error norm versus iteration number (300 MHz, vertical polarization, 30° elevation angle).
nonsingular. (In practice, the numerical precision of the
computer may limit this theoretical convergence.) They
are not limited to specific types of problems or physics, and
can be used as general matrix solvers. This is achieved by
generating a sequence of search vectors that will eventu-
ally span the entire N-space. The only difference between
the various CG versions is how these search vectors are
generated. In general, the search vectors are chosen such
that each new vector is linearly independent of all previ-
ous vectors, and the residual error is minimized. The basic
CG method is applicable only to symmetric systems and is
presented here rst, followed by two popular methods for
nonsymmetric systems, the modied biconjugate gradient
(BCG) and the generalized minimum residual (GMRES)
methods. An excellent resource for these and other CG
algorithms is the Templates book [23], for which associat-
ed computer subroutines are readily available.
5.1. CG Algorithm for Complex Symmetric Matrices
The basic CG algorithm for solving the complex symmetric matrix equation A x = b is listed below. A complete derivation is included later in this article. In the following algorithm the vector products are not Hermitian; that is, there is no complex conjugation as in (4).

Conjugate Gradient Algorithm 1
Initialization:

v = b / \sqrt{b \cdot A b};   x = 0;   r = b

Iteration:
1. x = x + α v, where α = v \cdot b
2. u = A v;   r = r - α u
3. Check ||r|| / ||b|| ≤ ε; if yes, then stop, and x is a good approximation; if no, then continue.
4. p = r - β v, where β = u \cdot r
5. v = p / \sqrt{p \cdot A p}
6. Go to step 1.
Each iteration of the CG algorithm involves one matrix-vector multiplication, which is at most an O(N^2) operation. The residual error of the solution is checked in step 3. If it is less than some threshold error ε, the iterations are halted. This threshold level determines the accuracy of the solution. For most engineering applications a threshold level in the range 0.0001-0.01 yields sufficient accuracy. Of course, greater accuracy requires more iterations, and it is possible for the algorithm to stall before reaching a given threshold. The convergence properties of the CG method are discussed later in this article.
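Algorithm 1 can be transcribed almost line for line into NumPy. The sketch below is illustrative only; it assumes A is complex symmetric (A equal to its transpose, not its conjugate transpose) and uses the non-conjugated vector product throughout. It may break down for indefinite matrices, as discussed in Section 7.

```python
import numpy as np

def cg_complex_symmetric(A, b, eps=1e-6, max_iter=None):
    """Conjugate Gradient Algorithm 1 for complex symmetric A."""
    dot = lambda a, c: np.sum(a * c)          # non-conjugated product, unlike Eq. (4)
    n = len(b)
    max_iter = max_iter or n
    v = b / np.sqrt(dot(b, A @ b))            # initialization: v, x, r
    x = np.zeros(n, dtype=complex)
    r = b.astype(complex).copy()
    for _ in range(max_iter):
        alpha = dot(v, b)                     # step 1
        x = x + alpha * v
        u = A @ v                             # step 2
        r = r - alpha * u
        if np.linalg.norm(r) <= eps * np.linalg.norm(b):   # step 3
            break
        beta = dot(u, r)                      # step 4
        p = r - beta * v
        v = p / np.sqrt(dot(p, A @ p))        # step 5 (may break down if p.Ap = 0)
    return x
```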
5.2. Modified Biconjugate-Gradient Method for Nonsymmetric Matrices
In solving the magnetic field integral equation, the combined field integral equation, or many other RF engineering applications (such as hybrid finite-element/integral-equation formulations), we quite often end up with a nonsymmetric matrix equation A x = b, where A is an N × N complex nonsymmetric matrix. There are many variants of Krylov-based methods for solving this equation; here we shall list the modified BCG method, followed by the GMRES method. In the following algorithm the vector products are not Hermitian; that is, there is no complex conjugation as in (4).
Modified BCG Algorithm for Solving Complex Nonsymmetric Matrix Equations
Initialization:

x = x_T = 0;   r = b;   r_T = b;   p = r;   p_T = r_T
v = p / \sqrt{p_T \cdot A p};   v_T = p_T / \sqrt{p \cdot A^T p_T}

Iteration:
1. x = x + α v, where α = v_T \cdot b;   x_T = x_T + α_T v_T, where α_T = v \cdot b
2. Compute u = A v and u_T = A^T v_T
3. r = r - α u;   r_T = r_T - α_T u_T
4. Check convergence: if ||r|| / ||b|| ≤ ε, then stop and x is a good approximation; if not, continue.
5. p = r - β p, where β = (u_T \cdot r) / (u_T \cdot p);   p_T = r_T - β_T p_T, where β_T = (u \cdot r_T) / (u \cdot p_T)
6. v = p / \sqrt{p_T \cdot A p};   v_T = p_T / \sqrt{p \cdot A^T p_T}
7. Go to step 1.
In this BCG algorithm it is noted that two matrix-vector products need to be computed for each iteration, one with the original matrix A and the other with its transpose A^T. This makes the BCG methods roughly twice as computationally expensive as the CG method for symmetric systems. The difference between the basic BCG and the modified BCG is that the former uses the Hermitian matrix A^H (i.e., the complex conjugate transpose) instead of the transpose matrix A^T.
5.3. GMRES Method for Nonsymmetric Matrix Equations
Another useful Krylov space iterative method for nonsym-
metric systems is the generalized minimum residual
(GMRES) method [24]. Like the CG method, a sequence
of linearly independent search vectors is generated. How-
ever, unlike the CG method, the entire set of search vec-
tors is saved in memory. Coefficients are found that give the minimum residual error over the complete set of search vectors. In essence, it is a brute-force CG method. The advantages are that only one matrix-vector product is computed per iteration and the transpose of the
matrix is not needed. Furthermore, the GMRES method
truly minimizes the residual at each iteration, so its con-
vergence is monotonic. The disadvantage is that all the
previous search vectors must be stored in memory. There-
fore, the memory requirement grows with the number of
iterations. This may not be a problem for dense system
matrices for which the matrix storage is generally much
larger than the storage of a set of search vectors (depend-
ing, of course, on how many search vectors are stored).
To alleviate the memory requirement, the GMRES al-
gorithm may be restarted after a certain number of iter-
ations. The solution vector after one set of iterations is
used as the initial solution for the next set of iterations.
However, the restarted version of the GMRES algorithm
is not guaranteed to converge because the reduced set
of expansion vectors may not span the entire solution
space. The GMRES algorithm is listed in the third-edition book by Golub and Van Loan [1] and in Templates [23]. A simplified algorithm that is conceptually equivalent to GMRES, the generalized conjugate residual (GCR) method [25], is listed below. In the following algorithm the vector products are Hermitian, using complex conjugation as in (4).
Generalized Conjugate Residual Algorithm
Initialization: x = 0,  r = b,  p_1 = b,  u_1 = A p_1
Iteration: k = 1, 2, ...:
1. x = x + α p_k, where α = (u_k \cdot r) / ||u_k||^2
2. r = r - α u_k
3. Check ||r|| / ||b|| ≤ ε. If yes, then stop, and x is a good approximation; if no, then continue.
4. β_i = (u_i \cdot A r) / ||u_i||^2, for i = 1, 2, ..., k
5. p_{k+1} = r - \sum_{i=1}^{k} β_i p_i
6. u_{k+1} = A r - \sum_{i=1}^{k} β_i u_i
7. Go to step 1.
This algorithm is very similar to the basic conjugate-gradient method. Note that only one matrix-vector product is used per iteration (in step 4) if we store all the vectors p_i and u_i for i = 1, 2, ..., k. It is also helpful to store ||u_i||^2 to avoid repeated computation in step 4. If storage becomes excessive, the algorithm may be restarted after the mth iteration starting with p_1 = p_m and u_1 = u_m.
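A compact transcription of the GCR algorithm is sketched below (illustrative only). All search vectors p_i and u_i, and the norms ||u_i||^2, are stored, so only one matrix-vector product is performed per iteration, as noted above.

```python
import numpy as np

def gcr(A, b, eps=1e-6, max_iter=None):
    """Generalized conjugate residual method; Hermitian inner products as in Eq. (4)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n, dtype=complex)
    r = b.astype(complex).copy()
    p = [r.copy()]                              # stored search vectors p_i
    u = [A @ p[0]]                              # stored A p_i
    u_norm2 = [np.vdot(u[0], u[0]).real]        # stored ||u_i||^2
    for k in range(max_iter):
        alpha = np.vdot(u[k], r) / u_norm2[k]   # step 1
        x = x + alpha * p[k]
        r = r - alpha * u[k]                    # step 2
        if np.linalg.norm(r) <= eps * np.linalg.norm(b):   # step 3
            break
        Ar = A @ r                              # the single matrix-vector product
        beta = [np.vdot(u[i], Ar) / u_norm2[i] for i in range(k + 1)]   # step 4
        p.append(r - sum(beta[i] * p[i] for i in range(k + 1)))         # step 5
        u.append(Ar - sum(beta[i] * u[i] for i in range(k + 1)))        # step 6
        u_norm2.append(np.vdot(u[k + 1], u[k + 1]).real)
    return x
```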
6. PRECONDITIONERS FOR ITERATIVE METHODS
The convergence rate of iterative methods, both classical and conjugate-gradient, can be very slow if the system matrix is not well conditioned. As mentioned in the section on classical iterative methods, the convergence of those methods depends on the spectral radius of the iteration matrix M^{-1}N. Similarly, the convergence rate of CG methods depends on the spectral properties of the system matrix (see Section 7 for a discussion). Certain formulations in electromagnetics give rise to poorly conditioned systems, such as the electric field integral equation (EFIE). Sometimes the formulation may be altered to give a better conditioned system, such as by converting the EFIE to the combined field integral equation. The choice of basis functions may also affect the conditioning. Alternatively, one may apply a preconditioner matrix M to the original system as

M^{-1} A x = M^{-1} b

Clearly, if the inverse of M approximates the inverse of A, then the solution of this system should be easier, or, mathematically speaking, the matrix M^{-1}A should have better spectral properties than the original matrix. The preconditioner may be implemented in any iterative algorithm by replacing matrix-vector products of the form A p with M^{-1} A p, and the excitation vector b with M^{-1} b. There are cleverer ways to do this, as described in Section 7.
The preconditioner should improve convergence, while its inverse M^{-1} (or its factorization) must be computed efficiently. It is not a coincidence that the preconditioner matrix M uses the same symbol as the classical iterative matrix splitting M. In fact, the M matrix of all of the classical iteration matrix splittings discussed here and shown in Fig. 2 may be used as a preconditioner, namely, diagonal, block-diagonal, lower or upper triangular, and banded. Classical splittings often mimic wave interactions, which makes them useful as preconditioners. From the matrix-splitting point of view, we want the matrix M to contain the dominant portion of the system matrix A. Then the inverse of M will approximate the inverse of A, and the iterative algorithm should therefore converge rapidly.
A very effective preconditioner for the EFIE with subsectional basis functions is described in Ref. 26. The preconditioner M is a sparse version of A, which contains the matrix entries corresponding to basis interactions within a specified distance. Incomplete factorization is used to compute a sparse factorization of M. In fact, there is a large class of preconditioners that use incomplete factorization. Some common preconditioning approaches for iterative algorithms are discussed in Refs. 1 and 23.
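As a minimal illustration of left preconditioning, M^{-1}A x = M^{-1}b, the sketch below uses the simplest splitting-based choice M = D (the diagonal of A) and hands the scaled operator to any of the iterative solvers sketched earlier; the helper name and the reuse of the gcr() sketch are assumptions of this example, not part of the original text.

```python
import numpy as np

def diagonal_preconditioned_solve(A, b, solver, **kwargs):
    """Left-precondition A x = b with M = diag(A), then call an iterative solver.

    The solver simply receives the operator M^-1 A and the vector M^-1 b.
    Other splitting-based choices for M (block-diagonal, triangular, banded)
    work the same way, as long as M is easy to invert or factor.
    """
    d = np.diag(A).astype(complex)
    A_prec = A / d[:, None]          # each row of A scaled by 1/D_mm, i.e. M^-1 A
    b_prec = b / d                   # M^-1 b
    return solver(A_prec, b_prec, **kwargs)

# Example (assumes the gcr() sketch given earlier):
# x = diagonal_preconditioned_solve(A, b, gcr, eps=1e-8)
```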
7. THEORY OF THE BASIC CG METHOD
The basic CG method is applicable only to symmetric systems. Consider the following complex symmetric matrix equation

A x = b      (15)
where A is an N × N complex symmetric matrix, x is the solution column vector, and b is the RHS excitation column vector. Before we derive the CG method, let us try to answer a few related questions first.
A-Conjugate Condition. Given a set of basis column vectors {v_0, v_1, ..., v_{n-1}} with n < N, how do we determine the best approximate solution

x_app = \sum_{i=0}^{n-1} c_i v_i      (16)

that solves (15)? The answer is the Galerkin method, or the method of weighted residuals. We shall form the residual column vector

R = b - A x_app      (17)

Note also that Eq. (16) can be written in matrix form as

x_app = V c,   V = [v_0  v_1  ...  v_{n-1}],   c = [c_0  c_1  ...  c_{n-1}]^t      (18)

Requiring that the residual vector R be orthogonal to all the basis vectors is then equivalent to solving for the coefficient vector c through the following reduced matrix equation:

(V^t A V) c = V^t b      (19)

Let us take a closer look at Eq. (19). If the reduced matrix V^t A V turns out to be an identity matrix, then the coefficients can be computed simply as

c_i = v_i^t b      (20)

What is more, as will be seen later in this section, there is then no need to store all of these basis vectors in order to find the approximate matrix solution x_app. Requiring V^t A V = I implies that the basis vectors need to satisfy

v_i^t A v_j = δ_{ij}      (21)

which is called the A-conjugate condition.
which is called the A-conjugate condition.
Before moving on to derive the CG methods, lets take a
few moments to restate what we have discussed in a more
fundamental way. You see, as in many applications, to
solve equations, whether innite dimensional problems
(integral equation formulations), or nite-dimensional
problems (like matrix equations), the Galerkin method is
a very good method of choice. Once again, in applying the
Galerkin method, we shall need to establish what the trial
and test function spaces are. When the operators are sym-
metric, some would argue that they need to be positive
definite as well, we can simply have both the trial and test
functions be the same. The next logical question will be
how to generate these basis vectors that span the trial and
test function spaces. As basic linear algebra taught us,
these basis vectors at least need to be linearly indepen-
dent, preferably orthonormal. This is where the A-conju-
gate condition comes in. When the operator is symmetric,
we can, with some violation when the operator is not pos-
itive definite, dene the vector inner product as
~ vv
i
; ~ vv
j
_
v
~
i
A~ vv
j
22
As you shall see, different definitions of the inner prod-
uct lead to different variants of CG methods.
For the matrix equation A x = b with a nonzero initial solution x_i, it is always possible to solve the correction equation A x' = b', where b' = b - A x_i, and then set x = x' + x_i. Therefore, without loss of generality, we shall assume that we solve A x = b with an initial guess of zero. We shall derive the CG method by induction.
k = 0: With the initial solution x_0 = 0, the residual vector is simply r_0 = b. The trial space for solving the matrix equation can now be established as

V_0 = {v_0} = MGS_A{r_0}      (23)

The notation MGS_A{a_0, a_1, ..., a_{n-1}} means constructing an orthonormal basis from the n column vectors a_0, a_1, ..., a_{n-1} through the modified Gram-Schmidt (MGS) process, with the inner product defined by the A-conjugate condition:

v_0 = r_0 / \sqrt{r_0^t A r_0}      (24)

Notice that in Eq. (24) we have the expression r_0^t A r_0. If the matrix A is positive definite, this expression will always be a positive nonzero number, and thus Eq. (24) will always be valid. Since in our case A is a complex symmetric matrix, it is possible that r_0^t A r_0 = 0 even though r_0 ≠ 0. This is referred to as breakdown of the CG method. Although it rarely occurs in practical computation, when the matrix A is poorly conditioned it is possible that r_0^t A r_0 ≈ 0, which causes slow convergence or even failure to converge in the CG process. It should be emphasized here that many researchers object to the use of the CG method for non-positive-definite matrix equations; in reality, with good preconditioners (a topic of paramount importance) the CG method may be used to solve complex symmetric matrix equations.
k = 1: The best solution in the trial space V_0 = span{v_0}, from the Galerkin method, for the matrix equation
A x = b is

x_1 = c_0 v_0,   c_0 = v_0^t b      (25)

Subsequently, the residual vector r_1 can be obtained as

r_1 = b - A x_1 = r_0 - c_0 A v_0      (26)

Since the solution is obtained through the Galerkin method, v_0^t r_1 = 0, so r_1 is linearly independent of the vectors in V_0 = span{v_0}; therefore it is a good idea to take

V_1 = V_0 ∪ span{r_1} = MGS_A{v_0, r_1}      (27)

Consequently, the new basis vector is determined through the modified Gram-Schmidt (MGS) process:

w = r_1 - β v_0,   β = v_0^t A r_1 = r_1^t A v_0
v_1 = w / \sqrt{w^t A w}      (28)

It is easy to verify that v_0^t A v_1 = v_1^t A v_0 = 0 and v_0^t A v_0 = v_1^t A v_1 = 1. Moreover, we see from Eq. (26) that A v_0 ∈ V_1.
To summarize, at k = 1, we have the following conditions:
1. V_1 = span{v_0, v_1} = MGS_A{v_0, r_1}
2. V_1^t A V_1 = I
3. A v_0 ∈ V_1
kth iteration: At this moment, we have the trial space V_{k-1} = span{v_0, v_1, ..., v_{k-1}}, and it satisfies

1. v_i^t A v_j = δ_{ij},   i, j = 0, 1, ..., k-1
2. A v_i ∈ V_{k-1},   i = 0, 1, ..., k-2

The best matrix solution in the trial space V_{k-1} = span{v_0, v_1, ..., v_{k-1}} is then

x_k = \sum_{i=0}^{k-1} c_i v_i = \sum_{i=0}^{k-1} (v_i^t b) v_i = x_{k-1} + (v_{k-1}^t b) v_{k-1} = x_{k-1} + α v_{k-1}      (29)

and of course, the residual vector is computed through

r_k = b - A x_k = r_{k-1} - α A v_{k-1}      (30)

From the Galerkin method it follows that

v_i^t r_k = 0,   i = 0, 1, ..., k-1      (31)

Since A v_i ∈ V_{k-1} for i = 0, 1, ..., k-2, we also have

v_i^t A r_k = 0,   i = 0, 1, ..., k-2      (32)

Subsequently, the next basis vector will be computed by

p = r_k - \sum_{i=0}^{k-1} β_i v_i = r_k - \sum_{i=0}^{k-1} (v_i^t A r_k) v_i = r_k - (v_{k-1}^t A r_k) v_{k-1} = r_k - β v_{k-1}      (33)

and

v_k = p / \sqrt{p^t A p}      (34)

Consequently, {v_0, v_1, ..., v_k} is an A-conjugate basis for the trial space V_k. This process continues until ||r_k|| is very small at a certain iteration k; it then implies, for all practical purposes, that x_k is the solution to the matrix equation A x = b. Note that the process is extremely simple, and its recursive nature makes it possible not to store all the basis vectors.
The detailed induction argument above leads directly to the basic CG Algorithm 1 listed earlier in this article.
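The A-conjugate construction at the heart of the derivation can be checked numerically with a modified Gram-Schmidt routine that uses the non-conjugated A-inner product. The sketch below is an illustration (the test matrix is an arbitrary symmetric positive definite example); it verifies that V^t A V = I, as required before Eq. (19).

```python
import numpy as np

def mgs_a(vectors, A):
    """Modified Gram-Schmidt with the non-conjugated A-inner product x^t A y.

    Returns vectors satisfying the A-conjugate condition v_i^t A v_j = delta_ij.
    """
    basis = []
    for a in vectors:
        w = a.astype(complex).copy()
        for v in basis:
            w = w - (v @ (A @ w)) * v        # remove A-projection onto earlier v
        w = w / np.sqrt(w @ (A @ w))         # normalize so that w^t A w = 1
        basis.append(w)
    return basis

# Quick check of the A-conjugate condition on a symmetric positive definite matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T + 10 * np.eye(5)                 # symmetric, safely positive definite
V = np.column_stack(mgs_a(list(np.eye(5)), A))
print(np.allclose(V.T @ A @ V, np.eye(5)))   # True: V^t A V = I
```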
7.1. Convergence Rate of Conjugate-Gradient Methods
There are two features that can make CG converge fast: (1) eigenvalue clusters and (2) a good condition number of the matrix. To see why eigenvalue clusters are good for the CG method, let's look at the following theorem.

Theorem 1. Assume that the matrix A, a diagonalizable N × N symmetric matrix, has only k distinct eigenvalues, namely

λ(A) = {λ_0, ..., λ_0 (n_0 times), λ_1, ..., λ_1 (n_1 times), ..., λ_{k-1}, ..., λ_{k-1} (n_{k-1} times)},
n_0 + n_1 + ... + n_{k-1} = N      (35)

Then the dimension of the Krylov space

K_m(v_0; A) = span{v_0, A v_0, ..., A^{m-1} v_0}      (36)

will always be bounded by k, regardless of the initial vector v_0 and of m:

dim K_m(v_0; A) ≤ k      (37)

Proof: Let e_i^p be the ith eigenvector corresponding to the eigenvalue λ_p of the matrix A:

A e_i^p = λ_p e_i^p,   i = 0, 1, ..., n_p - 1      (38)
Since these eigenvectors form a complete set of basis vec-
tors, any column vector ~ vv
0
can be written as a linear com-
bination of these eigenvectors:
~ vv
0

k1
p0

n
p
1
i 0
c
p
i
~ ee
p
i

k1
p0
~ vv
p
0
~ vv
p
0

np1
i 0
c
p
i
~ ee
p
i
39
It then follows that
A~ vv
0

k1
p0

n
p
1
i 0
l
p
c
p
i
~ ee
p
i

k1
p0
l
p

n
p
1
i 0
c
p
i
~ ee
p
i

k1
p0
l
p
~ vv
p
0
40
Moreover, we have
A
n
~ vv
0

k1
p0
l
n
p
~ vv
p
0
41
This means that any Krylov vector $A^n\bar{v}_0$ can always be written as a linear combination of the $k$ independent vectors $\bar{v}_0^0, \bar{v}_0^1, \ldots, \bar{v}_0^{k-1}$. Thus, we conclude that

$$\dim K_m(\bar{v}_0; A) = \dim\operatorname{span}\{\bar{v}_0, A\bar{v}_0, \ldots, A^{m-1}\bar{v}_0\} \le k \tag{42}$$

regardless of the initial vector and the iteration number $m$.
Consequently, in applying the CG method, or any Krylov-based method, to solve a matrix equation with $k$ distinctive eigenvalues, CG converges in at most $k$ iterations.
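As a numerical illustration of Theorem 1 (not taken from the article), the following self-contained sketch builds a 200 x 200 SPD matrix with only three distinct eigenvalues and runs the standard (textbook) CG recursion; the residual collapses to roundoff level after three iterations, so the printed count is 3 up to floating-point effects.

```python
import numpy as np

# Build a 200x200 SPD matrix with only 3 distinct eigenvalues (1, 4, 9).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
eigs = np.repeat([1.0, 4.0, 9.0], [80, 60, 60])
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(200)

# Textbook CG; count iterations until the residual is negligible.
x = np.zeros(200)
r = b.copy()
p = r.copy()
for it in range(1, 201):
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x += alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10 * np.linalg.norm(b):
        break
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
print(it)  # 3: at most k iterations for k distinct eigenvalues
```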
Next, let us examine the effect of the condition number on the convergence rate of the CG method. To gain more insight, let us assume further that matrix A is an $N \times N$ symmetric positive definite (SPD) matrix. With this assumption, we can state the fact that at the $m$th iteration, the CG method produces the same solution as the following minimization problem.

Minimization: Seek $\bar{x}_m \in K_m(\bar{b}; A) = \operatorname{span}\{\bar{b}, A\bar{b}, \ldots, A^{m-1}\bar{b}\}$ such that the quadratic form

$$\left(\bar{x} - \bar{x}_m\right)^t A\left(\bar{x} - \bar{x}_m\right) \tag{43}$$

is minimized.
Since A is SPD, and its eigenvectors form a complete set of basis vectors, we can express the RHS vector $\bar{b}$ as follows:

$$\bar{b} = b_0\bar{e}_0 + b_1\bar{e}_1 + \cdots + b_{N-1}\bar{e}_{N-1} = \sum_{i=0}^{N-1} b_i\bar{e}_i \tag{44}$$

It is then easy to show that the exact solution $\bar{x}$ is

$$\bar{x} = \frac{b_0}{\lambda_0}\bar{e}_0 + \frac{b_1}{\lambda_1}\bar{e}_1 + \cdots + \frac{b_{N-1}}{\lambda_{N-1}}\bar{e}_{N-1} = \sum_{i=0}^{N-1}\frac{b_i}{\lambda_i}\bar{e}_i \tag{45}$$
Furthermore, a general trial vector in the Krylov space at the $m$th iteration is of the form

$$\bar{v} = \sum_{i=0}^{m-1} c_i\left(A^i\bar{b}\right) = \sum_{i=0}^{N-1}\left(c_0 + c_1\lambda_i + \cdots + c_{m-1}\lambda_i^{m-1}\right) b_i\bar{e}_i \tag{46}$$

Subsequently

$$\bar{x} - \bar{v} = \sum_{i=0}^{N-1}\frac{1}{\lambda_i}\left[1 - \left(c_0\lambda_i + c_1\lambda_i^2 + \cdots + c_{m-1}\lambda_i^m\right)\right] b_i\bar{e}_i \tag{47}$$

and a quadratic functional $F(\bar{v})$ can be defined as

$$F(\bar{v}) = \left(\bar{x} - \bar{v}\right)^t A\left(\bar{x} - \bar{v}\right) \tag{48}$$
Substituting (46) and (47) into Eq. (48), we have

$$\begin{aligned}
F(\bar{v}) &= \sum_{i=0}^{N-1}\left[1 - \left(c_0\lambda_i + c_1\lambda_i^2 + \cdots + c_{m-1}\lambda_i^m\right)\right]^2\frac{b_i^2}{\lambda_i} \\
&\le \max_{0\le i\le N-1}\left[1 - \left(c_0\lambda_i + c_1\lambda_i^2 + \cdots + c_{m-1}\lambda_i^m\right)\right]^2\sum_{i=0}^{N-1}\frac{b_i^2}{\lambda_i} \\
&= \max_{0\le i\le N-1}\left[1 - \left(c_0\lambda_i + c_1\lambda_i^2 + \cdots + c_{m-1}\lambda_i^m\right)\right]^2\left(\bar{x}^t A\bar{x}\right)
\end{aligned} \tag{49}$$
Since the CG solution is the same as the one that minimizes the quadratic functional, we have

$$\begin{aligned}
F(\bar{x}_m) &= \min_{\bar{v}\in K_m(\bar{b};A)} F(\bar{v}) \\
&\le \left(\bar{x}^t A\bar{x}\right)\min_{\{c_0, c_1, \ldots, c_{m-1}\}}\ \max_{0\le i\le N-1}\left[1 - \left(c_0\lambda_i + c_1\lambda_i^2 + \cdots + c_{m-1}\lambda_i^m\right)\right]^2 \\
&= \left(\bar{x}^t A\bar{x}\right)\min_{P_m(0)=1}\ \max_{0\le i\le N-1}\left[P_m(\lambda_i)\right]^2
\end{aligned} \tag{50}$$

where $P_m(\lambda)$ is an $m$th-degree polynomial in $\lambda$. If we arrange the eigenvalues of A in ascending order, namely $\lambda_0 \le \lambda_1 \le \cdots \le \lambda_{N-1}$, then we can replace the best approximation problem on the discrete set with the best approximation problem on the interval $[\lambda_0, \lambda_{N-1}]$. Note that we have
$$\min_{P_m(0)=1}\ \max_{0\le i\le N-1}\left|P_m(\lambda_i)\right| \le \min_{P_m(0)=1}\ \max_{\lambda_0\le\lambda\le\lambda_{N-1}}\left|P_m(\lambda)\right| \tag{51}$$
The solution to the min-max problem on an interval is known; namely,

$$\min_{P_m(0)=1}\ \max_{\lambda_0\le\lambda\le\lambda_{N-1}}\left|P_m(\lambda)\right| = \frac{1}{T_m\!\left(\dfrac{\lambda_{N-1}+\lambda_0}{\lambda_{N-1}-\lambda_0}\right)}\ \max_{\lambda_0\le\lambda\le\lambda_{N-1}}\left|T_m\!\left(\dfrac{\lambda_{N-1}+\lambda_0-2\lambda}{\lambda_{N-1}-\lambda_0}\right)\right| \tag{52}$$

where $T_m(x) = \frac{1}{2}\left[\left(x+\sqrt{x^2-1}\right)^m + \left(x-\sqrt{x^2-1}\right)^m\right]$ is the Chebyshev polynomial. Also, since $\max_{-1\le x\le 1}\left|T_m(x)\right| = 1$ and $-1 \le \dfrac{\lambda_{N-1}+\lambda_0-2\lambda}{\lambda_{N-1}-\lambda_0} \le 1$, we then find
$$\min_{P_m(0)=1}\ \max_{\lambda_0\le\lambda\le\lambda_{N-1}}\left|P_m(\lambda)\right| \le \frac{1}{T_m\!\left(\dfrac{\lambda_{N-1}+\lambda_0}{\lambda_{N-1}-\lambda_0}\right)} = \frac{2\sigma^m}{1+\sigma^{2m}}, \qquad \sigma = \frac{1-\sqrt{\lambda_0/\lambda_{N-1}}}{1+\sqrt{\lambda_0/\lambda_{N-1}}} \tag{53}$$

In conclusion, the convergence rate of the CG method, measured in the A norm, is

$$\sqrt{\left(\bar{x}-\bar{x}_m\right)^t A\left(\bar{x}-\bar{x}_m\right)} \le \frac{2\sigma^m}{1+\sigma^{2m}}\sqrt{\bar{x}^t A\bar{x}} \tag{54}$$
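A small sketch (not from the article) that evaluates the bound of Eq. (54) for several condition numbers $\kappa = \lambda_{N-1}/\lambda_0$; it shows the familiar scaling of the iteration count with roughly the square root of the condition number. The tolerance and condition-number values are illustrative.

```python
import numpy as np

def cg_error_bound(cond, m):
    """Upper bound of Eq. (54) on ||x - x_m||_A / ||x||_A for an SPD
    matrix with condition number cond = lambda_max / lambda_min."""
    sigma = (1.0 - 1.0 / np.sqrt(cond)) / (1.0 + 1.0 / np.sqrt(cond))
    return 2.0 * sigma**m / (1.0 + sigma**(2 * m))

for cond in (10, 100, 1000):
    m = 1
    while cg_error_bound(cond, m) > 1e-6:   # iterations to push the bound below 1e-6
        m += 1
    print(cond, m)
```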
7.2. Preconditioned CG Method
We concluded the previous section by observing that the CG method works well on matrices that are either well-conditioned or have just a few distinct eigenvalues. For many RF engineering applications (such as the electric field integral equation), the system matrix equations are usually not directly suitable for the CG method. However, if a proper preconditioning matrix, $M = C^t C$, can be found, then the system matrix equation can be transformed into

$$A\bar{x} = \bar{b} \;\Longrightarrow\; A'\bar{z} = \bar{b}', \qquad A' = \left(C^t\right)^{-1} A\,C^{-1}, \quad \bar{z} = C\bar{x}, \quad \bar{b}' = \left(C^t\right)^{-1}\bar{b} \tag{55}$$
Applying the CG algorithm 1 to the transformed matrix equation results in the following algorithm.

CG Algorithm 2
Initialization: $\bar{v}' = \bar{b}'/\sqrt{\bar{b}'^t A'\bar{b}'}$; $\bar{z} = 0$; $\bar{r}' = \bar{b}'$
Iteration:
1. $\bar{z} = \bar{z} + \alpha\bar{v}'$; $\alpha = \bar{v}'^t\bar{b}'$.
2. $\bar{u}' = A'\bar{v}'$; $\bar{r}' = \bar{r}' - \alpha\bar{u}'$.
3. Check $\|\bar{r}'\|/\|\bar{b}'\| \le \varepsilon$. If yes, then stop, and $\bar{z}$ is a good approximation; if no, then continue.
4. $\bar{p}' = \bar{r}' - \beta\bar{v}'$; $\beta = \bar{u}'^t\bar{r}'$.
5. $\bar{v}' = \bar{p}'/\sqrt{\bar{p}'^t A'\bar{p}'}$.
6. Go to step 1.
Of course, once we have $\bar{z}$, we can obtain $\bar{x}$ via $\bar{x} = C^{-1}\bar{z}$. However, it is possible to avoid explicit reference to the matrix $C^{-1}$ by defining $\bar{p}' = C\bar{p}$, $\bar{z} = C\bar{x}$, and $\bar{r}' = \left(C^t\right)^{-1}\bar{r}$ in every CG iteration. Indeed, if we substitute these definitions into CG algorithm 2 and recall that $\bar{b}' = \left(C^t\right)^{-1}\bar{b}$, then we obtain

CG Algorithm 3
Initialization:
$$C\bar{v} = \frac{\left(C^t\right)^{-1}\bar{b}}{\sqrt{\bar{b}^t\left[C^{-1}\left(C^t\right)^{-1}\right]A\left[C^{-1}\left(C^t\right)^{-1}\right]\bar{b}}}; \quad \bar{z} = 0; \quad \bar{r} = \bar{b}$$
Iteration:
1. $C\bar{x} = C\bar{x} + \alpha C\bar{v}$; $\alpha = \bar{v}^t\bar{b}$.
2. $\left(C^t\right)^{-1}\bar{u} = \left(C^t\right)^{-1}A\bar{v}$; $\left(C^t\right)^{-1}\bar{r} = \left(C^t\right)^{-1}\bar{r} - \alpha\left(C^t\right)^{-1}\bar{u}$.
3. Check $\|\bar{r}'\|/\|\bar{b}'\| \le \varepsilon$. If yes, then stop, and $\bar{x}$ is a good approximation; if no, then continue.
4. $C\bar{p} = \left(C^t\right)^{-1}\bar{r} - \beta C\bar{v}$; $\beta = \bar{u}^t C^{-1}\left(C^t\right)^{-1}\bar{r}$.
5. $C\bar{v} = C\bar{p}/\sqrt{\bar{p}^t A\bar{p}}$.
6. Go to step 1.
Finally, the entire algorithm can be simplified by using the preconditioner $M = C^t C$ directly instead of referring to $C$ or $C^t$. This is then the preconditioned CG algorithm.

Preconditioned CG Algorithm
Initialization: $\bar{x} = 0$; $\bar{r} = \bar{b}$; $\bar{p} = M^{-1}\bar{r}$; $\bar{v} = \bar{p}/\sqrt{\bar{p}^t A\bar{p}}$
Iteration:
1. $\bar{x} = \bar{x} + \alpha\bar{v}$; $\alpha = \bar{v}^t\bar{b}$.
2. $\bar{u} = A\bar{v}$; $\bar{r} = \bar{r} - \alpha\bar{u}$.
3. Check $\|\bar{r}\|/\|\bar{b}\| \le \varepsilon$. If yes, then stop, and $\bar{x}$ is a good approximation; if no, then continue.
4. $\bar{p} = M^{-1}\bar{r} - \beta\bar{v}$; $\beta = \bar{u}^t M^{-1}\bar{r}$.
5. $\bar{v} = \bar{p}/\sqrt{\bar{p}^t A\bar{p}}$.
6. Go to step 1.
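A minimal NumPy sketch of the preconditioned CG steps above. The Jacobi (diagonal) preconditioner M = diag(A) and the random SPD test matrix are illustrative assumptions; the article does not prescribe a particular M here.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=None):
    """Follows the preconditioned CG steps above; M_inv applies M^{-1}."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b.copy()
    p = M_inv(r)
    v = p / np.sqrt(p @ (A @ p))
    for _ in range(max_iter):
        alpha = v @ b                                        # step 1
        x = x + alpha * v
        u = A @ v                                            # step 2
        r = r - alpha * u
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):     # step 3
            break
        z = M_inv(r)                                         # step 4
        beta = u @ z
        p = z - beta * v
        v = p / np.sqrt(p @ (A @ p))                         # step 5
    return x

# Example: Jacobi preconditioner M = diag(A) (an illustrative choice)
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)        # SPD test matrix
b = rng.standard_normal(50)
d = np.diag(A)
x = preconditioned_cg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))     # residual near roundoff level
```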
BIBLIOGRAPHY
1. G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., Johns Hopkins Univ. Press, Baltimore, 1996.
2. T. J. Kim and G. A. Thiele, A hybrid diffraction technique: General theory and applications, IEEE Trans. Anten. Propag. 30:888-897 (1982).
3. M. Kaye, P. K. Murthy, and G. A. Thiele, An iterative method for solving scattering problems, IEEE Trans. Anten. Propag. 33:1272-1279 (1985).
4. P. K. Murthy, K. C. Hill, and G. A. Thiele, A hybrid-iterative method for scattering problems, IEEE Trans. Anten. Propag. 34:1173-1180 (1986).
5. R. E. Hodges and Y. Rahmat-Samii, An iterative current-based hybrid method for complex structures, IEEE Trans. Anten. Propag. 45:265-276 (1997).
6. A. Sullivan and L. Carin, Scattering from complex bodies using a combined direct and iterative technique, IEEE Trans. Anten. Propag. 47:33-39 (1999).
7. D. D. Reuster and G. A. Thiele, A field iterative method for computing the scattered electric fields at the apertures of large perfectly conducting cavities, IEEE Trans. Anten. Propag. 43:286-290 (1995).
8. F. Obelleiro, J. L. Rodriguez, and R. J. Burkholder, An iterative physical optics approach for analyzing the electromagnetic scattering by large open-ended cavities, IEEE Trans. Anten. Propag. 43:356-361 (1995).
9. D. Holliday, L. L. DeRaad, Jr., and G. J. St-Cyr, Forward-backward: A new method for computing low-grazing angle scattering, IEEE Trans. Anten. Propag. 44:722-729 (1996).
10. D. A. Kapp and G. S. Brown, A new numerical method for rough surface scattering calculations, IEEE Trans. Anten. Propag. 44:711-721 (1996).
11. M. R. Pino, L. Landesa, J. L. Rodriguez, F. Obelleiro, and R. J. Burkholder, The generalized forward-backward method for analyzing the scattering from targets on ocean-like rough surfaces, IEEE Trans. Anten. Propag. 47:961-969 (1999).
12. J. C. West and J. M. Sturm, On iterative approaches for electromagnetic rough-surface scattering problems, IEEE Trans. Anten. Propag. 47:1281-1288 (1999).
13. A. R. Clark, A. P. C. Fourie, and D. C. Nitch, Stationary, nonstationary, and hybrid iterative method of moments solution schemes, IEEE Trans. Anten. Propag. 49:1462-1469 (2001).
14. R. J. Burkholder, On the use of classical iterative methods for electromagnetic scattering problems, Proc. 4th Conf. Electromagnetic and Light Scattering by Nonspherical Particles: Theory and Applications, Vigo, Spain, Sept. 20-21, 1999, pp. 65-72.
15. M. R. Hestenes and E. Stiefel, Methods of conjugate gradients for solving linear systems, J. Res. Natl. Bur. Stand. 49:409-436 (1952).
16. T. K. Sarkar and S. M. Rao, The application of the conjugate gradient method for the solution of electromagnetic scattering from arbitrary oriented wire antennas, IEEE Trans. Anten. Propag. 32:398-403 (1984).
17. T. K. Sarkar and E. Arvas, On a class of finite step iterative methods (conjugate directions) for the solution of an operator equation arising in electromagnetics, IEEE Trans. Anten. Propag. 33:1058-1066 (1985).
18. T. K. Sarkar, E. Arvas, and S. M. Rao, Application of FFT and the conjugate gradient method for the solution of electromagnetic radiation from electrically large and small conducting bodies, IEEE Trans. Anten. Propag. 34:635-640 (1986).
19. J. D. Collins, J. M. Jin, and J. L. Volakis, A combined finite element-boundary element formulation for solution of 2-dimensional problems via CG-FFT, Electromagnetics 10:423-437 (1990).
20. R. Coifman, V. Rokhlin, and S. Wandzura, The fast multipole method for the wave equation: A pedestrian prescription, IEEE Anten. Propag. Mag. 35:7-12 (1993).
21. E. Bleszynski, M. Bleszynski, and T. Jaroszewicz, A fast integral equation solver for electromagnetic scattering problems, Radio Sci. 31:1225-1251 (1996).
22. J. R. Phillips and J. White, A precorrected-FFT method for electrostatic analysis of complicated 3D structures, IEEE Trans. Comput. Aid. Design Integr. Circ. Syst. 16:1059-1072 (1997).
23. R. Barrett, M. Berry, T. F. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM Publications, Philadelphia, 1993.
24. Y. Saad and M. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comput. 7:856-869 (1986).
25. S. C. Eisenstat, H. C. Elman, and M. H. Schultz, Variational iteration methods for nonsymmetric systems of linear equations, SIAM J. Num. Anal. 20:345-357 (1983).
26. J.-F. Lee and R. J. Burkholder, Loop star basis functions and a robust preconditioner for EFIE scattering problems, IEEE Trans. Anten. Propag. 51:1855-1863 (2003).
FURTHER READING
A. F. Peterson, S. L. Ray, and R. Mittra, Computational Methods
for Electromagnetics, Wiley, New York, 1997 (a good general
resource for computational electromagnetics, including the
finite-element and finite-difference methods, the method of mo-
ments, basis expansions, and solution methods).
O. Axelsson, Iterative Solution Methods, Cambridge Univ. Press,
1996 (a good source for iterative methods, in general). Matrix
Computations by Golub and Van Loan [1] is an excellent ref-
erence on matrix theory, solution of matrix systems, and iter-
ative algorithms.
The Templates book [23] presents many iterative algorithms and
their underlying theories, along with a discussion of precondi-
tioners and parallelization; it is available online at http://
www.netlib.org/linalg/html_templates/Templates.html, and
the associated software may be downloaded from http://
www.netlib.org/templates/.
ITS RADIO SERVICE STANDARDS AND
WIRELESS ACCESS IN VEHICULAR
ENVIRONMENTS (ITS-WAVE) AT 5.9GHz
RAMEZ L. GERGES
IEEE-ITSC Standards
Committee
Goleta, California
1. INTRODUCTION
This article describes ongoing activities to create a new
family of standards that supports the emerging Intelligent
Transportation Systems (ITS) and telematics wireless
markets. ITS-WAVE is a radiocommunication system in-
tended to provide seamless, interoperable services to sur-
face transportation systems. After an initial overview of
the ITS-WAVE family of standards, more emphasis will be
given to the radio (lower layers) part of the system, and
the use of orthogonal frequency-division multiplexing
(OFDM) for the physical layer [1].
1.1. ITS, Telematics, and Wireless Interoperability
The Intelligent Transportation Systems (ITS) initiative
was created by Congress in the Intermodal Surface Trans-
portation Efficiency Act of 1991 (ISTEA) to improve the
mobility and safety of the surface transportation system.
ITS is defined as those systems utilizing synergistic tech-
nologies and systems engineering concepts to develop and
improve transportation systems of all kinds. Communica-
tion and information technologies are at the core of road-
side infrastructure and in-vehicle systems. These
technologies promise to enhance mobility by improving
the way we monitor and manage traffic flow, clear inci-
dents, reduce congestion, and provide alternate routes to
travelers. The telematics industry is focused on driver
comfort and safety, and while telematics in general
has meant the blending of computers and telecommuni-
cations, it is used within the ITS community with the
connotation of automotive telematics or the in-vehicle
subsystem of ITS.
In 1999, the Federal Communications Commission
(FCC) allocated the 5.850-5.925-GHz band for use by the
ITS radio service for both public safety and commercial
ITS applications. Many standards development organiza-
tions (e.g., IEEE, IETF, ISO) are engaged in the process of
achieving an end-to-end ITS wireless interoperability.
This article addresses Wireless Access in Vehicular Envi-
ronments (WAVE), which is currently being developed un-
der the IEEE WG 802.11, WAVE Study Group.
1.2. ITS Radio Services
The proposed ITS-WAVE standard addresses broadband
wireless communications that operate in a long range
(up to 1000 m) and at a high data rate [up to 27 Mbps (megabits
per second)] for all ITS applications. The proposed lower-
layer standard currently addresses communications
between roadside units and mostly high-speed, but occa-
sionally stopped and slow-moving, vehicles or between
high-speed vehicles. The ITS new spectrum will be used to
support multiple applications to enhance public safety and
transportation mobility and can be categorized as follows:
1. Public Safety: The primary use of this band is to offer
services such as emergency vehicle signal preemp-
tion and announcements for work zones. While the
FCC has allocated the 4.9-GHz band for communications
between first responders, the 5.9-GHz band is ex-
pected to allow first responders to communicate with
the general driving public on roads and freeways.
2. Mobility: Services such as electronic toll, vehicle
probes, traveler information, and public transporta-
tion integration are expected to enhance the trans-
portation system performance.
3. Driver Safety: New features such as support of col-
lision avoidance and warnings for excessive speed
and railroad crossings are expected to improve sys-
tem performance. More recently, vehicle manufac-
turers and telematics providers have shown interest
in the ITS-WAVE standards. There is no other ra-
diocommunication technology that can support the
real-time requirements for vehicle-to-vehicle com-
munications.
1.3. ITS-WAVE Development History
Attempts to develop standards for the wireless ITS envi-
ronment date back to the early 1990s, when California
adopted the Title 21 regulation to achieve a common stan-
dard for toll collections. The dedicated short-range com-
munications (DSRC) standard at 900 MHz, and Title 21
[2], predated the ITS initiative, and addressed only the
electronic toll collection; it was not intended to support a
national interoperable wireless ITS standard.
The Intermodal Surface Transportation Efficiency Act
of 1991 (ISTEA) funded many research ITS programs. In
the mid-1990s, the author (then with the New Technology
program at Caltrans) initiated some of the first technical
studies to develop an integrated wireless communications
system for all ITS applications [3]. In 1996, the U.S. Na-
tional System Architecture identified wireless communi-
cations as one of the critical enabling technologies needed
to support many of the ITS services. Later, the USDoT
funded more studies, and the California Department of
Transportation (Caltrans) established the Testbed Center
for Interoperability (TCFI) to study and test end-to-end
wireless interoperability. In May 1997, the Intelligent
Transportation Society of America (ITSA) filed a Petition
for Rulemaking, requesting that the FCC allocate 75 MHz
of spectrum in the 5.850-5.925-GHz band on a coprimary
basis for DSRC-based ITS services. In 1998 at the IEEE
Vehicular Technology Conference, the author suggested
leveraging the economic feasibility of the IEEE 802.11 to
achieve wireless ITS interoperability [4]. In 1999, the FCC
amended Parts 2 and 90 of the Commission's Rules to al-
locate the 5.850-5.925-GHz band to the Mobile Service for
Dedicated Short Range Communications of Intelligent
Transportation Services.^1 The USDoT funded the Ameri-
can Society for Testing and Materials (ASTM) to initiate
the standard writing group for the DSRC at 5.9 GHz. In
2000, TCFI tested the first video relay to a moving vehicle
at highway speed using OFDM technology.^2 The success-
ful test paved the way to use broadband technologies for
wireless ITS. Later, the ASTM selected the OFDM Forum
proposal to use the IEEE 802.11a [5,6] as the basis for the
new standard.^3 The new DSRC standard is now being
^1 ET Docket 98-95, 14 FCC Record 18221.
^2 Wireless LAN provided the OFDM equipment at 2.4 GHz; further information is available at http://www.wi-lan.com.
^3 The OFDM-Forum proposal (802.11 RA) suggested changing the physical layer of the IEEE 802.11 to match the requirements of the road access environment (http://www.ofdm-forum.com).
completed within the IEEE WG 802.11. The Study Group
(SG) decided to use the ASTM standard [7] as the basis for
the ITS-WAVE proposal.^4 As the proposed standard is not
limited to short-range applications, the SG has named it
the Wireless Access in Vehicular Environments (WAVE)
instead of DSRC. This will also avoid any confusion with
the single carrier technology in use in the United States
(900-MHz band) or in Japan and Europe (at 5.8 GHz but
different standards).
On December 17, 2003 the FCC adopted the rules
for the ITS band. It is expected that the new standard
(possibly 802.11p) will be completed by the end of 2005.
2. ITS RADIO SERVICES SYSTEM-LEVEL DESCRIPTION
2.1. Spectrum Allocation for ITS, Telematics, and Public
Safety
The Broadband ITS Radio Service (ITS-RS) establishes a
common framework for providing wireless services in the
5.850-5.925-GHz band. This band is allocated for ITS-RS
applications by the FCC.^5 Figure 1 shows the spectrum
allocation in the 4.9-5.9-GHz band. The differences be-
tween the ITS-WAVE and the IEEE 802.11 WLAN sys-
tems stem from the fact that the ITS-WAVE operates in a
licensed band, and it establishes reliable communications
between units operating at full vehicle mobility, a different
environment than the indoor WLAN.
These communications may occur with other units that
are (1) fixed along the roadside or above the roadway, (2)
mounted in other high-speed moving vehicles, (3) mounted
in stationary vehicles, (4) mounted on mobile platforms
(e.g., watercraft, buoy, or a robotic platform), or (5) porta-
ble or handheld. In-vehicle communications units are
called onboard units (OBUs). Communication units fixed
along the freeways, over the road on gantries or poles, or
off the road in private or public areas, are called roadside
units (RSUs). The WAVE RSUs may function as stations
or as access points (APs) and the WAVE OBUs only have
functions consistent with those of stations (STAs). The
common function between all RSUs is that these station-
ary units control access to the radiofrequency medium for
OBUs in their communication zone or relinquish control to
broadcast data only.
The vehicular mobility environment requires that we
design a system that can survive both the time-dispersive
(frequency-selective) multipath fading and the frequency-
dispersive (time-selective) fading environment. Tests con-
ducted at the Testbed Center For Interoperability (TCFI)
at UCSB show that we may encounter up to 400ns of
delay spread and up to 2200 Hz of Doppler spread as
explained later. Single-carrier transmission, with a time-
domain equalizer, has an inherent limitation due to con-
vergence and tracking problems that arise as the
number of taps increases. A coded OFDM (COFDM)
approach similar to the IEEE 802.11a/g standard offered
a more robust, as well as economically feasible solution
Figure 1. Spectrum allocation at the 5-GHz band: frequency allocations from 4.9 to 5.9 GHz for the USA, Europe, Japan, and China, including the 4.9-GHz homeland security band and the ITS-RS band at 5.850-5.925 GHz (outdoor 4-W EIRP); DFS, dynamic frequency selection; TPC, transmit power control. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
^4 The ASTM E2213-02 was approved but not published because of copyright issues with the IEEE.
^5 Title 47, Code of Federal Regulations (CFR), Part 90, Subpart M.
based on the success of the WLAN industry. This economic
feasibility also made the COFDM approach a better can-
didate than the single-carrier transmission with a fre-
quency-domain equalizer, although the latter has the
same complexity and may have some good features [e.g.,
it avoids the PAPR (peak-to-average power ratio) issues].
The 802.11a scheme would not be able to tolerate the
delay spread expected in the WAVE environment. Figure 2
shows the impact of delay spread on a 16-QAM signal
constellation for a 64-subcarrier OFDM system [1]. The
channel has a two-ray multipath; the second ray is 6dB
lower than the rst one: (1) delay spread less than guard
time (Fig. 2a), (2) delay spread greater than guard time
by 3% of the FFT interval (Fig. 2b), and (3) delay
spread greater than guard time by 9% of the FFT inter-
val (Fig. 2c).
We proposed to double the guard interval (GI) to be
more multipath-tolerant; in principle, using half the mas-
ter clock should double the GI and scale down the channel
bandwidth to 10 MHz, a desired outcome to increase the
number of channels within the allocated spectrum. Of
course the maximum data rate will be reduced to 27 Mbps,
which is still adequate for demanding ITS applications
(e.g., video relay). WLAN chip manufacturers (e.g., Intersil
and Atheros) confirmed the feasibility of the approach
using their current 802.11a implementations.^6 It is
expected that products with the correct front end operat-
ing at 5.9GHz (10 MHz bandwidth) will be available as the
market develops.
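As a quick check of the clock-halving argument, the following sketch derives the WAVE OFDM timing from a 10-MHz channel clock; the variable names are mine, and the resulting numbers correspond to Table 1.

```python
# OFDM timing when the 20-MHz 802.11a channel clock is halved to 10 MHz.
n_fft = 64                                  # FFT size (same as 802.11a)
bw_hz = 10e6                                # channel bandwidth after halving the clock
subcarrier_spacing = bw_hz / n_fft          # 156.25 kHz
t_fft = 1.0 / subcarrier_spacing            # 6.4 us
t_gi = t_fft / 4                            # 1.6 us guard interval (doubled vs. 802.11a)
t_sym = t_fft + t_gi                        # 8 us OFDM symbol
data_bits_64qam = 48 * 6 * (3 / 4)          # 48 data subcarriers, 64-QAM, rate 3/4
max_rate = data_bits_64qam / t_sym          # 27 Mbps
print(subcarrier_spacing, t_fft * 1e6, t_gi * 1e6, max_rate / 1e6)
```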
In order to accommodate the more dynamic vehicle en-
vironment with essentially the same radio technology, and
provide priority to public safety communications, the ITS
community is proposing a complementary set of standards
under the IEEE SCC32. These standards address the
upper layers including a different channel structure and
access mechanism than that of the IEEE 802.11 as ex-
plained later.
2.2. System Architecture and SDO Coordination
The International Standards Organization (ISO) and
the IEEE are coordinating their standards development
efforts to achieve ITS wireless interoperability. To this end
a common CALM/WAVE architecture has been developed
as shown in Figs. 3 and 4.
The current scope of the IEEE-WAVE proposed project
is to create an amendment of IEEE 802.11 to support com-
munication between vehicles and the roadside and be-
tween vehicles while operating at speeds up to a minimum
of 200km/h for communication ranges up to 1000 m. The
amendment will support communications in the 5-GHz
bands; specifically, the 5.850-5.925-GHz band within
North America with the aim to enhance the mobility and
safety of all forms of surface transportation, including rail
and maritime transportation. Amendments to the PHY
and MAC will be limited to those required to support com-
munications under these operating environments within
the 5-GHz bands.
The IEEE SCC32 sponsors the IEEE P1556, DSRC
Security and Privacy and the ITS-WAVE (upper layers).
The WG P1556 is proposing a dual-certificate system for
public safety and vehicle safety to balance security and
anonymity requirements. The IEEE WG P1609 architec-
ture adopted IPV6 as the method of handling upper-layer
applications. It consists of a series of four standards:
1. P1609.4 defines the channelization approach and
considers integration issues with the IEEE 802.11e
and IEEE 802.11h.
2. P1609.3 is based on the IPv6 specification and may
include a broad range of supporting standards de-
fined by the Internet Engineering Task Force
(IETF). It defines IPv6 addressing and configura-
tion issues, network services (e.g., WAVE router ad-
vertisement), and all the WAVE management
entities needed for registration and service table
exchanges.
3. P1609.2 defines application services.
4. P1609.1 defines a resource manager for onboard
units (OBUs).
The ISO-Transport Information & Control (TC204)
Working Group 16 is developing standards for wide-area
wireless communications for transport information and
control. ISO-TC204-WG16 is developing the communica-
tion air interface for long and medium range (or short
media) (CALM) architecture. The CALM scope includes
Figure 2. Impact of delay spread: panels (a), (b), and (c) show the 16-QAM constellation for the three delay-spread cases described in the text. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
^6 WLAN products from Intersil are now part of Conexant (http://www.conexant.com); Atheros (http://www.atheros.com).
communications between fixed sites and switching be-
tween communication media (e.g., 3G cellular and
WAVE), as well as issues such as handover and mobility
management. CALM mandates end-to-end system inter-
operability at all levels. CALM-M5 is adopting the IEEE-
WAVE proposal for the lower layers at 5 GHz.
2.3. Basic Concept of Operation
The ITS-WAVE typically consists of two types of radio
devices. The rst type is always used while stationary,
usually permanently mounted along the roadside, and
is referred to as the roadside unit (RSU). The second is
Figure 3. WAVE architecture: applications communicate through sockets over UDP (RFC 768) and the IPv6 (RFC 2460) networking services (IEEE 1609.3), which run over logical link control (802.2), channelization (1609.4), and the 802.11p MAC and PHY; SNMP/MIB-based management entities (SME, WME, MLME, PLME) and the in-vehicle network (IVN) complete the stack.
Figure 4. CALM architecture: CALM-aware, non-CALM-aware IP (Internet), and non-CALM-aware point-to-point applications access CALM M5, CALM 3G, and other media through convergence layers and a network interface based on IPv6 routing and media switching (ISO 21210-2), supported by directory services, a communication management entity (CME), link management entities (LMEs), and common station, PHY, MAC, and LLC managers. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
mobile, mounted on board vehicles, and is referred to as
the onboard unit (OBU). Three types of communication
are supported: a command/response type between a ser-
vice provider and a service user, a broadcast to listener,
and a peer-to-peer type that does not identify either device
as controlling the actions of the other. OBUs and RSUs
can initiate both types of communication. The command/
response type includes various forms of transactions be-
tween a service provider and a user of that service. To en-
sure scalable interoperability between ITS-WAVE units,
the proposed standards define two levels of implementa-
tions. A minimal implementation only supports the lower
layers, those below the network layer, and will be referred
to as a WAVE radio. An implementation that has the full
ITS-WAVE protocol stack is referred to as a WAVE de-
vice. Multiple devices interact with each other through
the Networking Services (IEEE P1609).
Figure 5 represents the current ITS-WAVE band plan.
The ITS-WAVE uses a control channel and any of six ser-
vice channels. Licensing of both roadside RSUs and OBUs
is necessary to prevent unauthorized use of the control
channel. OBUs should be licensed by rule, since these
devices are mobile and can operate nationwide, communi-
cating with any other ITS-WAVE devices within range.
The onboard units (OBUs) are required to listen on the
control channel every few hundred milliseconds, in order
to check for public safety messages. The messages on the
control channel are of variable length, but are generally
kept short, to permit maximum access to the channel.
Control channel access will be performed via the standard
IEEE 802.11a carrier-sense multiple access with collision
avoidance (CSMA/CA). By default, all devices when turned on are
tuned to the control channel. If an ITS-WAVE device
desires to transmit, but detects another message being
broadcast on the control channel, it must wait before at-
tempting to transmit. A request to send (RTS) is initiated,
and time is granted first to high-priority (public safety)
broadcasts, then to lower-priority transmissions. The
same control channel is used for roadside-to-vehicle, ve-
hicle-to-roadside, and vehicle-to-vehicle communications.
Control channel interval and service channel interval
are controlled by RSU beacon frames. Since the control
channel will be fixed throughout the nation, all ITS-WAVE
devices will be able to access those services in an interop-
erable manner.
A registration process must occur before a WAVE device
can be considered ready for operation; the RSU broad-
casts beacon frames that include the provider service ta-
ble (PST) and the WAVE router advertisement (WRA)
on the control channel. Application initialization proce-
dures are based on SNMP, and the designated service
channel, priority, and power level are indicated in the
PST. At the end of the application initialization state,
the RSU commands the OBU to switch to the designated
service channel. The RSU, now on the service channel,
receives UDP datagrams sent by the OBU. The RSU
routes datagrams to and from the applications indicated
by the global IPv6 address.
The description above is included to give an idea about
the basic concept of operations, with the understanding
that the proposed standards are now under development.
The P1609.3, 1609.4, and the P1556 are currently the
most critical part of the WAVE family of standards as they
require integration and coordination with many other
standards such as IEEE 802.11e/h/i and many of the
IETF recommendations.
Figure 5. ITS-WAVE band plan: seven 10-MHz channels between 5.850 and 5.925 GHz; Ch 172 (dedicated public safety, vehicle-vehicle), Ch 174 and 176 (shared public safety/private, medium-range service), Ch 178 (control channel), Ch 180 and 182 (shared public safety/private, short-range service), and Ch 184 (dedicated public safety, high-availability intersections). The plan also indicates uplink/downlink use, Canadian special license zones, power limits of 23, 33, 40, and 44.8 dBm, and channels not currently implemented. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
It is expected that WAVE radios that implement only
the lower layers will develop first, as they leverage the
existing IEEE 802.11a standard and chip technology.
3. DESIGNING FOR WIRELESS VEHICULAR
ENVIRONMENTS
3.1. Channel Impairments
There is extensive literature on different statistical mod-
els of the communication channels at different frequency
bands [8]. However, limited data from actual field measure-
ments for vehicle-vehicle and vehicle-roadside communi-
cation are available in the ITS-WAVE frequency band.
Statistical models include large-scale path loss models and
small-scale fading models:
1. Large-scale propagation models characterize the
mean received power over large transmitter-receiver
separation distances. They are used to estimate the radio
coverage area of a transmitter.
2. Small-scale (fading) models characterize the rapid
fluctuations of the received signal strength and
phase over a very short distance. Multipath struc-
ture (power delay profile) is used to measure and
describe the fading effects.
Both large- and small-scale fading models are needed
for packet error rate characterization.
3.1.1. Time-Dispersive (Frequency-Selective) Multipath
Fading Channel. A time-dispersive channel is defined as a
channel for which the delay spread is much wider than the
signal duration. The classication of a channel as time
dispersive is therefore dependent on the data rate of the
system. For single-carrier, high-data-rate systems, time-
dispersive channels are commonly encountered. This type
of fading is often referred to as frequency-selective because
the signal may be simultaneously faded at one frequency
and not at another. OFDM is robust against delay spread by
design because of the longer symbol time and the fact that
each subcarrier experiences a flat-fading channel. Similar
to the IEEE 802.11a, the insertion of a guard interval and
the use of forward error correction (FEC) are essential de-
sign elements of the coded OFDM scheme employed in the
WAVE physical layer. This multipath rejection capability
was one of the main reasons for selecting COFDM instead
of a single-carrier system, especially for ITS applications
that operate at longer ranges and at high data rates.
Short-range systems typically experience significantly
smaller delay spreads than does a longer-range system.
Previous studies show that 90% RMS (root-mean-square)
delay spread is less than 100 ns for typical short-range
applications (e.g., toll collection) in urban environments.^7
RMS delay spread could be up to 300 ns [9,10] in a non-
line-of-sight (NLoS) heavy-multipath environment, as
may be expected in a freeway urban environment.
In order for subcarriers to perceive a flat-fading channel, the bandwidth (subcarrier spacing) must be less than the coherence bandwidth ($B_c$) of the channel.^8 $B_c$ is the bandwidth of the channel variation in frequency and is defined [8] as

$$B_c = \frac{1}{5\sigma}$$

where $\sigma$ is the RMS delay spread of the channel. The ITS-WAVE has a subcarrier spacing (bandwidth) of 156 kHz, and each subcarrier will encounter flat fading as long as $\sigma < 100$ ns (for range < 300 m). For long ranges (large delay spread), the pilot channels are available to estimate the channel in the frequency domain if they are well structured. In order to use the pilots for channel estimation, the pilot spacing in frequency has to be less than $B_c$ ($\approx 2$ MHz for $\sigma = 500$ ns). This may not be the case using the current pilot structure of the IEEE 802.11a (pilot spacing $= 14 \times 156\ \mathrm{kHz} = 2.18\ \mathrm{MHz} > B_c$). Interpolation of the pilot subcarriers in the current structure may not be sufficient to track the frequency-selective fades. It is expected that the first-generation WAVE radios, those using modified 802.11a chips, will be limited in range and may not be suitable for long-range public safety applications.
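A quick numerical check of the coherence-bandwidth argument above, using the $B_c = 1/(5\sigma)$ definition quoted in the text and the delay-spread values cited earlier (an illustrative sketch, not part of the standard).

```python
# Coherence bandwidth B_c = 1/(5*sigma) versus WAVE subcarrier and pilot spacing.
subcarrier_spacing_hz = 156.25e3
pilot_spacing_hz = 14 * 156.25e3              # ~2.18 MHz, as in 802.11a numbering

for sigma_ns in (100, 300):
    bc_hz = 1.0 / (5 * sigma_ns * 1e-9)
    print(sigma_ns, "ns -> B_c =", round(bc_hz / 1e3), "kHz;",
          "flat per subcarrier:", subcarrier_spacing_hz < bc_hz, ";",
          "pilots track fades:", pilot_spacing_hz < bc_hz)
```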
3.1.2. Frequency-Dispersive (Time-Selective) Fading Channel. Frequency-dispersive channels are classified as channels that have a Doppler spread larger than the channel bandwidth. Doppler spread is a direct result of multiple Doppler shifts, which are caused by motion of the transmit and/or receive antenna. Doppler shifts can also result from reflections off of moving objects.^9 Distortion of the power spectrum of the received signal results from Doppler spread, which can be approximated by the Doppler spread $B_d$:

$$B_d = f_m\cos\alpha$$

where $f_m = v f_c/c$, $v$ is the vehicle speed in m/s, $f_c$ is the carrier frequency in Hz, $c$ is the speed of light in m/s, and $\alpha$ is the angle between the direction of vehicle travel and the ray of the communication path. In the case of the ITS-WAVE, where vehicle speeds of up to 120 mph (193 km/h) must be supported (public safety), the maximum Doppler shift for a vehicle traveling directly toward the roadside antenna would be about 1100 Hz at 5.9 GHz, and much less for vehicle-vehicle communication (two vehicles heading in the same direction).
Time-selective fading caused by Doppler spread is described by the coherence time ($T_c$) of the channel. $T_c$ represents the duration over which the channel characteristics do not change significantly, and is defined [8] as

$$T_c = \frac{0.423}{f_m}$$
^7 For transmit-receive (Tx/Rx) separation of 30-300 m, both LoS and NLoS (Xiongwen et al., IEEE JSAC, 2002).
^8 $B_c$ is defined as the bandwidth over which the frequency correlation function is above 0.5.
^9 If a sinusoidal signal is transmitted over a fading channel (commonly referred to as a constant wave), the Doppler spread $B_d$ is defined as the range of frequencies over which the received Doppler spectrum is essentially nonzero.
At a vehicle speed of 120 mph and a frequency of 5.9 GHz, $T_c \approx 400\ \mu s$. When using pilot symbols at the start of a packet, the assumption is that channel variations during the rest of the packet are negligible. This limits the packet duration to less than $T_c$ and places an upper limit on the packet size. At a data rate of 3 Mbps (a 1/2-code-rate, BPSK-modulated signal), the maximum packet size^10 is 135 bytes. Although this suggests that higher-order modulation would give better performance, as its transmission time is shorter, these modulation schemes degrade more in the presence of channel impairments.
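A small sketch reproducing the numbers quoted above (120 mph, 5.9 GHz, 3-Mbps rate-1/2 BPSK); the packet-size accounting follows footnote 10's breakdown of preamble, SIGNAL, and data symbols, and is illustrative rather than normative.

```python
# Maximum Doppler shift, coherence time, and packet-size limit at 3 Mbps.
v = 120 * 0.44704            # 120 mph in m/s
fc = 5.9e9                   # carrier frequency, Hz
c = 3e8
fm = v * fc / c              # ~1055 Hz maximum Doppler shift
tc = 0.423 / fm              # ~400 us coherence time

t_sym = 8e-6                 # WAVE OFDM symbol duration
bits_per_sym = 24            # BPSK, rate 1/2 (3 Mbps)
preamble = 16e-6 + 16e-6     # short + long training
signal = t_sym               # SIGNAL symbol
n_data_sym = int((tc - preamble - signal) // t_sym)
max_bytes = n_data_sym * bits_per_sym // 8
print(round(fm), round(tc * 1e6), max_bytes)   # ~1055 Hz, ~400 us, 135 bytes
```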
3.2. The ITS-WAVE Physical Layer
The ITS-WAVE physical layer is based on using the
robustness of the coded orthogonal frequency-division
multiplexing (OFDM) signal to achieve the required per-
formance in the wireless vehicular environments. OFDM
is a special case of multicarrier modulation (MCM), which
is the principle of transmitting data by dividing the data-
stream into several parallel bitstreams and modulating
each bitstream onto individual subcarriers. Each subcar-
rier is a narrowband signal, resulting in long bit intervals.
High data rates are achieved by using multiple orthogonal
subcarriers for a single data transmission. The OFDM
system differs from traditional MCM in that the spectra of
the subcarriers are allowed to overlap under the restric-
tion that they are all mutually orthogonal. An orthogo-
nal relationship between subcarriers is achieved if there
are integer numbers of subcarrier frequency cycles over
the symbol interval. This orthogonality guarantees that
each subcarrier has a null at the center frequency of all
other subcarriers as shown in Fig. 6.
Orthogonality is achieved with precision by modulating
the subcarriers with a discrete Fourier transform (DFT),
which is implemented in hardware with the fast Fourier
transform (FFT). By transmitting several symbols in par-
allel, the symbol duration is increased proportionately,
which reduces the effects of intersymbol interference (ISI)
caused by the dispersive fading environment. Additional
multipath rejection and resistance to intercarrier inter-
ference (ICI) is realized by cyclically extending each sym-
bol on each subcarrier. Rather than using an empty guard
space, a cyclic extension of the OFDM symbol is used to
ensure that delayed replicas of the OFDM symbol will
always have an integer number of cycles within the FFT
interval. This effectively converts the linear convolution of
the channel to a circular one, as long as the cyclic prefix
(CP) is longer than the impulse response of the channel.
The penalty of using a CP is loss of signal energy propor-
tional to the length of the CP. In order to avoid excessive
bit errors on individual subcarriers that are in a deep fade,
forward error control (FEC) is typically applied.
The ITS-WAVE physical layer organizes the spectrum
into operating channels. Each 10-MHz channel is com-
posed of 52 subcarriers. Four of the subcarriers are used as
pilot carriers for monitoring path shifts and ICI, while the
other 48 subcarriers are used to transmit data symbols.
Subcarriers are spaced 156.25 kHz apart, giving a total
bandwidth of 8.8MHz. The composite waveform, consist-
ing of all 52 subcarriers, is upconverted to one of the seven
channels between 5.850 and 5.925 GHz. As shown
in Fig. 7, the subcarriers are numbered from -26 to 26. Sub-
carrier 0 is not used for signal processing reasons, and
pilot subcarriers are assigned to subcarriers -21, -7, 7,
and 21. To avoid strong spectral lines in the Fourier trans-
form, the pilot subcarriers transmit a fixed bit sequence as
specified in the IEEE 802.11a using a conservative mod-
ulation technique. Table 1 compares the ITS-WAVE and
the IEEE 802.11a parameters. Table 2 lists ITS-WAVE
baseband modulation values.
3.2.1. Structure of the WAVE Physical Layer. The phy-
sical layer is structured as two sublayers: the physical-
layer convergence procedure (PLCP) sublayer and the
physical-medium-dependent (PMD) sublayer. The PLCP
communicates to MAC via primitives through the physi-
cal-layer service access point (SAP); it prepares the PLCP
protocol data unit (PPDU) shown in Fig. 8. The PPDU
provides for asynchronous transfer of the MAC protocol
data unit (MPDU) between stations. The PMD provides
Table 1. Comparison of 802.11a and ITS-WAVE Parameters

Parameter                              802.11a   ITS-WAVE
Channel bandwidth (MHz)                20        10
Subcarrier spacing (kHz)               312.5     156.25
T_FFT (μs)                             3.2       6.4
T_GI (ns)                              800       1600
T_SYM (μs)                             4         8
Channel symbol rate (Msps)             12        6
Minimum data rate (BPSK) (Mbps)        6         3.0
Maximum data rate (64-QAM) (Mbps)      54        27
Figure 6. Subcarrier orthogonality in OFDM systems. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
Figure 7. Structure of an operating channel (carrier numbers from -25 to 25 around the center frequency).
^10 Packet duration = [10(16) + 2(80) + 1(80) + 135(8)(2/48)(80)] x 100 ns = 400 μs.
actual transmission and reception of the physical layer
entities via the wireless medium, interfaces directly to the
medium, and provides modulation and demodulation of
the transmission frame.
3.2.2. Roles of Preamble, Training Sequences, and Pi-
lots. The ITS-WAVE specifies a preamble at the start of
every packet as shown in Fig. 9.
The PLCP preamble consists of 10 short training sym-
bols, each of which is 1.6 μs, followed by two long training
symbols, each of which is 6.4 μs, including a 3.2-μs prefix
that precedes the long training symbols. The long training
sequence contains a guard interval, $T_{GI2}$, and two long
training symbols, each 6.4 μs in duration. The short sym-
bols are used by the receiver for synchronization [signal
detection, AGC (automatic gain control), diversity selec-
tion, frequency offset estimation, and timing synchroniza-
tion]. The long symbols are used to fine-tune the frequency
offset and channel estimates. This training sequence is
transmitted over all 52 subcarriers and is QPSK-modu-
lated. In terms of algorithmic complexity, carrier frequen-
cy offset and timing recovery are by far the most difficult to
determine. The phase-locked-loop (PLL) on the radio sub-
system is responsible only for maintaining the 5-ppm volt-
age-controlled oscillator (VCO) requirement. Digital
signal processing is used, independent of the VCO, to
remove the carrier frequency offset. It is important
to note that once the carrier frequency offset is deter-
mined by the digital baseband hardware, there is no time
to provide a feedback signal to the WAVE radio's VCO
since a PLL network will take too long to eliminate the
offset. The training sequences are followed by the SIGNAL
symbol, which is a single BPSK-modulated OFDM data
symbol containing information about the packet such as
data rate.
After preamble transmission, any common frequency
offset is tracked via the four pilot subcarriers as shown in
Fig. 10. It is not necessary to use pilots to estimate the
channel as long as the channel remains fairly stationary
over the duration of a single packet. The four pilot signals
facilitate coherent detection throughout the duration of
the packet. The remaining subcarriers carry the data
body of the packet. The pilot spacing is selected to be
less than the coherence bandwidth of the channel, as
explained earlier.
3.2.3. ITS-WAVE Performance Issues. The performance
of an OFDM receiver is affected by several factors, most
of which fall into the categories of hardware limita-
tions and channel impairments. Hardware limitations,
Table 2. ITS-WAVE Baseband Modulation

Data Rate (Mbps)   Code Rate   Modulation   N_CBPS   N_DBPS
3                  1/2         BPSK         48       24
4.5                3/4         BPSK         48       36
6                  1/2         QPSK         96       48
9                  3/4         QPSK         96       72
12                 1/2         16-QAM       192      96
18                 3/4         16-QAM       192      144
24                 2/3         64-QAM       288      192
27                 3/4         64-QAM       288      216
Figure 8. PPDU frame: a PLCP preamble (12 symbols) and a SIGNAL field [one BPSK rate-1/2 coded-OFDM symbol carrying the PLCP header: Rate (4 bits), Reserved (1 bit), Length (12 bits), Parity (1 bit), and Tail (6 bits)] are followed by the Data field [a variable number of coded-OFDM symbols at the rate indicated by the SIGNAL symbol, carrying Service (16 bits), the PSDU, Tail (6 bits), and pad bits].
Figure 9. ITS-WAVE PLCP structure: 10 short training symbols (10 x 1.6 = 16 μs, used for signal detection, AGC, diversity selection, coarse frequency offset estimation, and timing synchronization), followed by a guard interval and two long training symbols (2 x 1.6 + 2 x 6.4 = 16 μs, used for channel and fine frequency offset estimation), the SIGNAL symbol (RATE, LENGTH), and data symbols, each of duration 1.6 + 6.4 = 8 μs.
particularly clock accuracy and oscillator stability, affect
the synchronization accuracy of the receiver. The channel
impairments discussed in Section 3.1 include delay
spread and Doppler spread, which result in frequency-
selective fading and time-selective fading, respectively.
OFDM is extremely sensitive to receiver synchroniza-
tion imperfections, which can cause degradation of system
throughput and performance. The overlap between sub-
carriers leads to a system that is extremely sensitive to
imperfections in carrier frequency synchronization. Also,
multiplexing symbols onto multiple subcarriers results in
a system that is extremely sensitive to imperfections in
timing synchronization. This requires that the receiver
architecture be structured to correct for frequency, timing,
and sampling. Figure 11 is a simplified block diagram [1]
depicting the major processing modules associated with
the ITS-WAVE physical layer.
3.2.3.1. Synchronization. Synchronization is a big hur-
dle in OFDM systems. The ITS-WAVE physical layer uses
the same synchronization scheme as in the IEEE 802.11a;
it usually consists of three processes:
1. Frame detection
2. Carrier frequency offset estimation and correction
3. Sampling error correction
Frame detection is used to determine the symbol
boundary so that correct samples of the symbol frame
can be taken. The first 10 short symbols are identical and
are used for frame detection. The received signal is corre-
lated with the known short-symbol waveform that pro-
duces correlation peaks. The received signal is also
correlated with itself with a delay of one short symbol,
which creates a plateau for the length of 10 short symbols.
If the correlation peaks are within the plateau, the last
peak is used as the position from where the start of the
next symbol is determined.
Frequency offset estimation uses the long training
symbols, which are two FFT symbols back-to-back. The
Figure 10. Pilot structure: across the 52 subcarriers (frequency axis) and successive OFDM symbols (time axis), the 10 short symbols serve frame detection and the two back-to-back FFT symbols serve frequency offset estimation; four pilot subcarriers accompany the data subcarriers in every symbol. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
Figure 11. Basic OFDM block diagram. Transmitter: binary input data -> coding -> interleaving -> QAM mapping -> pilot insertion -> serial-to-parallel -> IFFT -> add cyclic extension and windowing -> parallel-to-serial -> DAC -> RF TX. Receiver: RF RX -> ADC -> timing and frequency synchronization (symbol timing) -> remove cyclic extension -> serial-to-parallel -> FFT -> channel correction -> QAM demapping -> parallel-to-serial -> deinterleaving -> decoding -> binary output data.
corresponding chips of the two FFT symbols are then
correlated to estimate the frequency offset. Channel esti-
mation uses the same two OFDM symbols as the frequen-
cy offset estimation. Once the frame start is detected,
frequency offset is estimated and signal samples are com-
pensated, the two long symbols are transformed into fre-
quency domain by FFT. After performing FFT on the
preambles, the frequency-domain values are compared
with the known preamble values to determine the chan-
nel response.
3.2.3.2. Carrier Frequency Offset. The ITS-WAVE (like
the 802.11a) specifies that the carrier frequency and sym-
bol clock be derived from the same oscillator. This allows the
receiver to compute symbol clock drift directly from the
carrier frequency offset (e.g., ppm error). Frequency syn-
chronization must be applied before the FFT. Without a
carrier frequency offset, the peak of any subcarrier corre-
sponds to the zero crossings of all other subcarriers. When
there is a random frequency offset, there is no longer an
integer number of cycles over T
FFT
, resulting in ICI. The
degradation in SNR that occurs due to random frequency
offset is approximated by D [1], in decibels, as

$$D \approx \frac{10}{3\ln 10}\left(\pi\,\Delta F\,T_{FFT}\right)^2\frac{E_s}{N_0}$$

where $\Delta F$ is the frequency offset and $W$ ($= 1/T_{FFT}$) is the band-
width of the composite OFDM waveform (subcarrier spac-
ing). In essence, any carrier frequency offset results in a
shift of the received signal in the frequency domain. This
frequency error results in energy spillover between sub-
carriers, resulting in loss of their mutual orthogonality.
The approximation states that the degradation increases
with the square of normalized frequency offset. The major
tradeoffs encountered when selecting an appropriate car-
rier frequency offset correction algorithm include speed,
accuracy, and performance under noisy conditions.
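For a feel of the numbers, the following sketch evaluates the degradation approximation above for the WAVE FFT interval; the residual-offset and Es/N0 values are assumed for illustration only.

```python
import math

def snr_degradation_db(delta_f_hz, es_n0_db, t_fft=6.4e-6):
    """SNR degradation D (dB) due to a residual carrier frequency offset,
    using the approximation quoted in the text."""
    es_n0 = 10 ** (es_n0_db / 10)
    return (10 / (3 * math.log(10))) * (math.pi * delta_f_hz * t_fft) ** 2 * es_n0

# Example: residual offsets of 1 kHz and 5 kHz at Es/N0 = 20 dB (assumed values)
for df in (1e3, 5e3):
    print(df, round(snr_degradation_db(df, 20), 2))
```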
Short training symbols can recognize offsets as high as
312.5 kHz [(1/2)(1/1.6 μs)]. However, their short duration
results in reduced accuracy since they produce only 16-
point FFT samples per symbol. Although there are 10
short training symbols, 5 or 6 are consumed during RSSI,
AGC, and timing recovery. Long training symbols provide
a much more accurate estimate of the frequency offset
since they produce 4 times as many FFT points compared
to the short training symbol. However, their long extent
limits the discernible frequency offset to 78 kHz [(1/2)(1/6.4 μs)],
as shown in Fig. 12. Noise imparts variance on
the final offset estimate, thereby limiting its accuracy.
3.2.3.3. Symbol Timing. Errors in symbol timing syn-
chronization manifest as ISI and nonuniform phase shift
to the constellation points. Both of these effects naturally
lead to degradation of the bit error rate (BER). The fast Fourier
transform (FFT) demodulation process accumulates over
exactly one 6.4-μs OFDM interval. If the start of the sym-
bol time is not accurately established, the FFT demodu-
lation process will operate on two adjacent symbols
leading to ISI as shown in Fig. 13. Coarse synchroniza-
tion can resolve to within half the sampling period and
remove ISI. However, the residual sampling time offsets
must be identified, or a nonuniform phase shift will be
imparted to the constellation points.
The 6.4-μs FFT window is divided up into 64 time in-
stants separated by 100 ns. Each point of the FFT is com-
puted at a rate of 10 Msps (megasamples per second),
which corresponds to 64 discrete
frequency-domain samples of the composite 6.4-μs symbol.
Since these 64 samples are 100 ns apart ($T_s$ = 100 ns), the
Figure 12. Carrier frequency offset: the short sync spectrum has 12 subcarriers spaced 0.625 MHz apart, while the long sync and data symbol spectrum has 52 subcarriers spaced 0.15625 MHz apart; the coarse frequency estimate must place the 52 subcarriers to within one frequency bin (+/-78.125 kHz) of their true location.
Figure 13. ISI and sampling-time offset (multipath components, guard time T_g, maximum delay, and sampling start within the symbol period). (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
range of the maximum detectable sampling offset is
from -50 to +50 ns. This sampling time offset manifests
itself in the frequency domain as a phase shift, which is
proportional to the subcarrier frequency. Subcarriers at
the high end of the frequency range are affected dispro-
portionately relative to those subcarriers at the low end.
The effect of this phase shift on BER can be devastating, as
symbols that map to subcarriers at the edges will experi-
ence a phase shift that rotates the constellation point out
of its reliable detection region.
Sampling frequency offset does not negatively impact
performance on a symbol per symbol basis. However, it
can have harmful effects over large numbers of symbols.
The ITS-WAVE proposal calls out a 5ppm static center
frequency offset from the VCO for analog-to-digital/digi-
tal-to-analog clocks and carrier VCOs. At 10 MHz, a 5ppm
figure corresponds to a 50-Hz offset, which means that one
of the clocks is toggling 50 Hz faster than the other. In the
period of one 10-MHz clock (100 ns), one clock will advance
past the other by 0.5 ps. If we take into account the num-
ber of samples per symbol and the number of symbols in a
large packet, we find that over a time span of 50 symbols
the sampling instants will have shifted by
2 ns.^11
This timeshift will manifest itself in the frequency
domain as a phase shift proportional to the subcarrier fre-
quency. This is clearly a receiver steady-state issue, and
cannot be detected during training. During receiver track-
ing, this offset is taken care of by processing the pilots and
feeding back corrections to an interpolator.
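A one-line check of footnote 11's drift estimate (0.5 ps per 100-ns sample period at 5 ppm, 80 samples per symbol, 50 symbols); the script is purely illustrative.

```python
# Clock drift accumulated over a long packet at a 5-ppm sampling clock offset.
ppm = 5e-6
sample_period = 100e-9                      # 10-MHz clock
drift_per_sample = ppm * sample_period      # 0.5 ps
drift = drift_per_sample * 80 * 50          # 80 samples/symbol, 50 symbols
print(drift)                                # 2e-09 s, i.e., 2 ns
```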
3.2.4. ITS-WAVE Adjacent-channel and Cochannel Inter-
ference. Effects of adjacent-channel and cochannel inter-
ference have been studied using simulation [11]; a Simulink
model was developed to evaluate these types of interference,
as shown in Fig. 14.
In the model shown in Fig. 14 we consider the type of
the device and apply the corresponding spectrum mask as
Figure 14. Enhanced simulation model: Bernoulli random binary generators drive a transmitter and an interferer, whose outputs are scaled by gain and path-loss blocks (Tx_gain, Int_gain, Tx_path_loss, Int_path_loss), combined with the receiver-sensitivity setting, passed through an AWGN channel to the receiver, and compared in an error rate calculation block. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
Table 3. ITS-WAVE Device Class Spectral Mask

Device Class   +/-4.5-MHz Offset   +/-5-MHz Offset   +/-5.5-MHz Offset   +/-10-MHz Offset   +/-15-MHz Offset
A              0                   10                20                  28                 40
B              0                   16                20                  28                 40
C              0                   26                32                  40                 50
D              0                   35                45                  55                 65
Table 4. ITS-WAVE Classes and Transmit Power Levels

Device Class   Maximum Device Output Power (dBm)
A              0
B              10
C              20
D              28.8
Table 5. ITS-WAVE Receiver Sensitivity

Data Rate (Mbps)   Minimum Sensitivity (dBm)
3                  -85
4.5                -84
6                  -82
9                  -80
12                 -77
18                 -70
24                 -69
27                 -67
^11 0.5 ps/sample x 80 samples/symbol x 50 symbols = 2 ns.
given in Table 3. The model also considers the fact that the
devices operate at the maximum power output according
to Table 4, which reflects the increase of the out-of-band
attenuation for higher power devices. The model takes
into account the minimum receiver sensitivity as per
Table 5. The channel path loss is modeled according
to the two-segment model with a breakpoint of 164 m, as given by

$$L(d)_{\mathrm{dB}} = 20\log d + 43.05, \qquad d < 164\ \mathrm{m}$$

$$L(d)_{\mathrm{dB}} = 40\log d - 1.263, \qquad d \ge 164\ \mathrm{m}$$
This is typical of models based on ray tracing [12], where the path loss is generally proportional to 1/d^2 before the breakpoint and to 1/d^4 after it. The breakpoint represents the distance at which the first Fresnel zone touches the ground, beyond which the ray reflected off the ground surface cancels part of the power of the direct ray. The breakpoint is approximated by d_bp = 4 h_t h_r / λ, where h_t is the transmit antenna height and h_r is the receive antenna height.
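To illustrate how these pieces fit together numerically, the rough Python sketch below combines the two-segment path-loss model above, the breakpoint approximation d_bp = 4 h_t h_r / λ, and a coarse adjacent-channel margin check using values transcribed from Tables 3-5. It is only an illustration of the arithmetic, not the Simulink model of [11]; the antenna heights, the 300 m distance, and the choice of a Class C interferer against a 12 Mbps receiver are assumptions made for this example.

import math

C = 3e8  # speed of light (m/s)

def breakpoint_distance(h_t, h_r, freq_hz):
    """First-Fresnel-zone breakpoint d_bp = 4*h_t*h_r/lambda of the two-ray model."""
    wavelength = C / freq_hz
    return 4.0 * h_t * h_r / wavelength

def path_loss_db(d, d_break=164.0):
    """Two-segment path loss with the 164 m breakpoint quoted in the text."""
    if d < d_break:
        return 20.0 * math.log10(d) + 43.05   # roughly 1/d^2 regime
    return 40.0 * math.log10(d) - 1.263       # roughly 1/d^4 regime

# Values transcribed from Tables 3-5; the scenario itself is an assumption.
tx_power_dbm = 20.0         # Table 4, Class C maximum output power
mask_drop_db = 40.0         # Table 3, magnitude of the Class C mask at +/-10 MHz
rx_sensitivity_dbm = -77.0  # Table 5, minimum sensitivity at 12 Mbps
d = 300.0                   # interferer-to-victim distance in metres (assumed)

interference_dbm = tx_power_dbm - mask_drop_db - path_loss_db(d)
print(f"breakpoint (1.5 m antennas, 5.9 GHz): {breakpoint_distance(1.5, 1.5, 5.9e9):.0f} m")
print(f"path loss at {d:.0f} m                : {path_loss_db(d):.1f} dB")
print(f"adjacent-channel power at victim    : {interference_dbm:.1f} dBm")
print(f"below the 12 Mbps sensitivity?      : {interference_dbm < rx_sensitivity_dbm}")

With 1.5 m antennas at 5.9 GHz the approximation gives a breakpoint of roughly 177 m, reasonably close to the 164 m breakpoint used in the model.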
3.3. The ITS-WAVE MAC Layer
Generally, for reliable system operation, the MAC must be
properly designed to match the physical layer so that its
impairments do not cause undue degradation at higher
layers. The IEEE 802.11 MAC is a very complex protocol; its development took more than 10 years, supported by dozens of corporations building products for the WLAN market. The ITS-WAVE Study Group intends to use the IEEE 802.11a MAC without modification, except for changes to the management information base (MIB). The management information specific to each layer is represented as a MIB for that layer. The generic model of MIB-related management primitives exchanged across the management SAPs allows the SAP user entity either to GET or to SET the value of a MIB attribute. The invocation of a SET.request primitive may require that the layer entity perform certain defined actions. Figure 15 depicts these generic primitives.
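As a loose illustration of this GET/SET model (not code from the standard; the class names, attribute name, and status strings below are our own), the request and confirm primitives can be represented as small message types handed across a management SAP:

from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class MibGetRequest:
    attribute: str

@dataclass
class MibSetRequest:
    attribute: str
    value: Any

@dataclass
class MibConfirm:
    status: str           # "SUCCESS" or an error code
    attribute: str
    value: Any = None

class LayerManagementEntity:
    """Toy MAC/PHY management entity exposing GET/SET access to its MIB."""

    def __init__(self, mib: Dict[str, Any]):
        self._mib = dict(mib)

    def get(self, req: MibGetRequest) -> MibConfirm:
        if req.attribute not in self._mib:
            return MibConfirm("INVALID_ATTRIBUTE", req.attribute)
        return MibConfirm("SUCCESS", req.attribute, self._mib[req.attribute])

    def set(self, req: MibSetRequest) -> MibConfirm:
        # A SET.request may also trigger layer-specific actions; here we only
        # store the new value in the MIB.
        self._mib[req.attribute] = req.value
        return MibConfirm("SUCCESS", req.attribute, req.value)

# Example exchange: a station management entity reads and writes one attribute.
plme = LayerManagementEntity({"currentChannelNumber": 172})
print(plme.get(MibGetRequest("currentChannelNumber")))
print(plme.set(MibSetRequest("currentChannelNumber", 178)))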
4. CHALLENGES AND FUTURE DEVELOPMENTS
4.1. Validation, Verification, and Testing
Developing the ITS-WAVE family of standards is a complex task, and the fact that these standards will support safety applications makes validation, verification, testing, and system integration critical steps in developing this market. The USDoT has begun this process by funding the Vehicle Safety Communications Consortium (VSCC) and other industry participants. At the Caltrans Testbed Center for Interoperability (TCFI), we have developed laboratory and field infrastructure [11] to support these activities as the standards mature. Figure 16 shows data collected over the air at TCFI using Agilent equipment (VSG, VXI, and PSA). In addition, we have demonstrated passing data between the test equipment and the simulation tool (Simulink). Validation of the MAC layer will be a special challenge; currently we are experimenting with Telelogic's TAU G2 (further information is available at http://www.telelogic.com), which supports both the Specification and Description Language (SDL) and the Unified Modeling Language (UML). SDL is an ITU formal language that was used to describe the original IEEE 802.11 MAC specifications.
4.2. System Integration: The Santa Barbara Radio Access Network
Beyond the development of the ITS-WAVE family of standards and the availability of telematics products, the development of the ITS wireless market will require reliable roadside infrastructure [13]. This infrastructure requires feeder and backhaul networks that may use both fixed wireless and landline networks. Figure 17 shows one such infrastructure that has passed the planning stage: Santa Barbara's Radio Access Network (RAN) [14]. The RF planning of 28 sites has been completed, and some sites are
[Figure 15 block diagram: the station management entity issues MLME GET/SET primitives to the MAC management entity and PLME GET/SET primitives to the PHY management entity, each of which maintains its own MIB; the MAC sits above the PLCP and PMD sublayers of the physical layer, joined through the MAC SAP, PHY SAP, and PMD SAP.]
Figure 15. GET and SET operations.
being installed through the collaboration of Caltrans,
local governments, and the university (UCSB). SB-RAN
is currently part of a new proposal to develop a public
safety testbed, which addresses wireless infrastructure
interoperability (WII) issues for both the first responder
(4.9 GHz) and the ITS (5.9-GHz) bands.
4.3. Observations and Future Developments
While the WAVE standards efforts are progressing within the IEEE, questions remain to be answered on many issues, such as the following:
* Security architecture (P1556 and 802.11i)
* Multiple-channel devices and the current concept of operations
* Interference mitigation in a real environment
* MAC extension and its relation to IEEE 802.11e/h
* Pilot structure and its impact on dedicated public safety channels
* Fast handover
* IP-based internetworking
* Wireless infrastructure interoperability
As we move from the descriptive phase of standard development to the performance and testing phases, meeting the need for better protocol development tools will remain a challenge.
Roadside system integration issues are rarely ad-
dressed in the wireless ITS community. Issues such as
cost-effective feeder and backhaul networks to support the
wireless infrastructure are considered implementation
issues and are outside the scope of the national efforts.
[Figure 17 map: the 28 numbered RAN sites around Santa Barbara, with the TV Hill and monopole sites marked.]
Figure 17. Santa Barbara's RAN site locations. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
Figure 16. Over-the-air ITS-WAVE signal, 18 Mbps. (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
Presently, the ITS industry is addressing market-enabling applications such as vehicle safety and toll applications. The more general ITS public safety applications, such as work-zone safety (WZS), public protection disaster relief (PPDR), and homeland security, are assumed to be the role of public agencies.
The potential gains from advanced technologies such as software-defined radio (SDR) and multiple-input/multiple-output (MIMO) systems have not yet been investigated for wireless ITS applications.
5. SUMMARY AND CONCLUSION
This article examined the emerging ITS-WAVE family of
standards, with emphasis on the mobile vehicle environ-
ment and the lower-layer standard. Our findings confirmed the validity of adopting the IEEE 802.11a as the
basis for the broadband wireless ITS standard. The OFDM
Forum proposal, originally submitted to the ASTM for
wireless road access, is now well understood and has been accepted by the ITS industry.
The physical-layer proposal is ready for standardization, with the exception of the new pilot structure issue.
Long-range dedicated public safety cannot be realized
without resolving this issue. The data-link layer proves
to be more challenging, as we integrate other IEEE stan-
dards (e.g., 802.11e/h/i).
As new devices and systems are introduced, there will
be a need for demonstration projects, testing standards,
compliance certification, and performance benchmarks.
Acronyms

AGC        Automatic gain control
ASTM       American Society for Testing and Materials
BER        Bit error rate
BPSK       Binary phase shift keying
CALM       Communications air interface for long and medium range
Caltrans   California Department of Transportation
CFR        Code of Federal Regulations
CP         Cyclic prefix
CSMA/CA    Carrier-sense multiple access with collision avoidance
DFT        Discrete Fourier transform
DoT        Department of Transportation
DSRC       Dedicated short-range communications
FCC        Federal Communications Commission
FFT        Fast Fourier transform
ICI        Intercarrier interference
IEEE       Institute of Electrical and Electronics Engineers
IETF       Internet Engineering Task Force
ISI        Intersymbol interference
ISO        International Standards Organization
ISTEA      Intermodal Surface Transportation Efficiency Act of 1991
ITS        Intelligent Transportation Systems
ITSA       Intelligent Transportation Society of America
ITS-RS     ITS Radio Services
ITU        International Telecommunication Union
MAC        Medium access control
MCM        Multicarrier modulation
MIB        Management information base
MME        MAC management entity
MPDU       MAC protocol data unit
Msps       Megasymbols per second
OBUs       Onboard units
OFDM       Orthogonal frequency-division multiplexing
OMG        Object Management Group
PAPR       Peak-to-average power ratio
PER        Packet error rate
PHY        Physical layer (OSI)
PLCP       Physical-layer convergence procedure
PMD        Physical-medium-dependent
PPDR       Public protection disaster relief
PPDU       PLCP protocol data unit
PSA        Power spectral analyzer
QAM        Quadrature amplitude modulation
QPSK       Quadrature phase shift keying
RAN        Radio access network
RSSI       Received signal strength indicator
RSUs       Roadside units
SAP        Service access point
SDL        Specification and Description Language
SDO        Standards development organization
SNMP       Simple Network Management Protocol
SNR        Signal-to-noise ratio
TCP/IP     Transmission Control Protocol/Internet Protocol
UCSB       University of California, Santa Barbara
UML        Unified Modeling Language
UNII       Unlicensed national information infrastructure
VSA        Vector spectrum analyzer
VSCC       Vehicle Safety Communications Consortium
WAVE       Wireless Access in Vehicular Environments
WG 802.11  Work Group for the 802.11 WLAN standards
WII        Wireless infrastructure interoperability
WLAN       Wireless local-area network
WZS        Work-zone safety
BIBLIOGRAPHY
1. R. Van Nee and R. Prasad, OFDM for Wireless Multimedia
Communications, Artech House, Boston, 2000.
2. R. Gerges, Communications Technologies for IVHS, UCLA,
1994.
3. A. Polydoros, R. Gerges et al., Integrated layer packet radio study for AHS, Proc. 3rd IEEE Mediterranean Symp. New Directions in Control and Automation, Cyprus, July 1995.
4. R. Gerges, Wireless communications and spectrum require-
ments for ITS, Paper presented at IEEE Vehicular Technology
Conf. Ottawa, May 1998.
5. IEEE 802.11 [full title: Information Technology - Telecommunications and Information Exchange between Systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications], ANSI/IEEE Std 802.11, 1999 edition.
6. IEEE 802.11a (full title: Supplement to IEEE Standard for Information Technology - Telecommunications and information exchange between systems - Local and Metropolitan Area Networks - Specific Requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, High-Speed Physical Layer in the 5 GHz Band), IEEE Std 802.11a-1999.
7. ASTM E2213-02, Standard Specification for Telecommunications and Information Exchange between Roadside and Vehicle Systems - 5 GHz Band Dedicated Short Range Communications (DSRC) Medium Access Control (MAC) and Physical Layer (PHY).
8. T. S. Rappaport, Wireless Communications Principles and
Practice, Prentice-Hall, Englewood Cliffs, NJ, 2002.
9. A. Bohdanowicz, Wide Band Indoor and Outdoor Radio Chan-
nel Measurements at 17 GHz, Ubicom Technical Report/2000/
2, Feb. 2000.
10. H. Steendam and M. Moeneclaey, Analysis and optimization of the performance of OFDM on frequency-selective time-selective fading channels, IEEE Trans. Commun. (Dec. 1999).
11. R. Gerges, Investigation of Broadband ITS-Radio Services at 5.9 GHz, Final Report to the Battelle IPAS Program, UCSB-TCFI, Nov. 1, 2003.
12. DSRC Physical Channel Characterization, Interim Report to
Caltrans TCFI, TechnoCom Corp., April 2000.
13. T. Maehata et al., DSRC Using OFDM for Roadside-Vehicle
Communication System, Radio Communications Technology
Group, Sumitomo Electric Industries.
14. R. Gerges, UCSB-TCFI 65V250 A3 Task Order 301, Interim
Report, 2000.