LOUIS E. FRENZEL | COMMUNICATIONS EDITOR lou.frenzel@penton.com
Fundamental to all wireless communications is modulation, the process of impressing the data to be transmitted on the radio carrier. Most wireless transmissions today are digital, and with the limited spectrum available, the type of modulation is more critical than it has ever been.

The main goal of modulation today is to squeeze as much data into the least amount of spectrum possible. That objective, known as spectral efficiency, measures how quickly data can be transmitted in an assigned bandwidth. The unit of measurement is bits per second per hertz (bits/s/Hz). Multiple techniques have emerged to achieve and improve spectral efficiency.
Today's designers can utilize myriad modern modulation methods to pack ever-increasing data into ever-decreasing spectrum.
ASK AND FSK
There are three basic ways to modulate a sine wave radio
carrier: modifying the amplitude, frequency, or phase. More
sophisticated methods combine two or more of these variations
to improve spectral efficiency. These basic modulation forms
are still used today with digital signals.
Figure 1 shows a basic serial digital signal of binary zeros
and ones to be transmitted and the corresponding AM and
FM signals resulting from modulation. There are two types of
AM signals: on-off keying (OOK) and amplitude shift keying
(ASK). In Figure 1a, the carrier amplitude is shifted between
two amplitude levels to produce ASK. In Figure 1b, the binary
signal turns the carrier off and on to create OOK.
AM produces sidebands above and below the carrier equal
to the highest frequency content of the modulating signal. The
bandwidth required is two times the highest frequency content
including any harmonics for binary pulse modulating signals.
Frequency shift keying (FSK) shifts the carrier between two different frequencies called the mark and space frequencies, or f_m and f_s (Fig. 1c). FM produces multiple sideband frequencies above and below the carrier frequency. The bandwidth produced is a function of the highest modulating frequency including harmonics and the modulation index, which is:

m = Δf(T)

Δf is the frequency deviation or shift between the mark and space frequencies, or:

Δf = f_s – f_m

T is the bit time interval of the data, or the reciprocal of the data rate (T = 1/bit rate).
Smaller values of m produce fewer sidebands. A popular
version of FSK called minimum shift keying (MSK) specifies
m = 0.5. Smaller values are also used such as m = 0.3.
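As a quick check of that relationship, here is a minimal Python sketch of the modulation-index arithmetic; the bit rate and mark/space frequencies are illustrative values only, chosen so that m works out to the MSK case.

```python
# A minimal sketch of the modulation-index arithmetic above; the bit rate and
# mark/space frequencies are illustrative values, not from any standard.

def modulation_index(f_mark_hz: float, f_space_hz: float, bit_rate_bps: float) -> float:
    """m = delta_f * T, where delta_f = |f_space - f_mark| and T = 1 / bit rate."""
    delta_f = abs(f_space_hz - f_mark_hz)
    return delta_f * (1.0 / bit_rate_bps)

# A 9600-bit/s stream with a 4800-Hz mark-to-space shift gives m = 0.5 (the MSK case).
print(modulation_index(f_mark_hz=10_000.0, f_space_hz=14_800.0, bit_rate_bps=9600.0))  # 0.5
```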
There are two ways to further improve the spectral efficiency
for both ASK and FSK. First, select data rates, carrier frequen-
cies, and shift frequencies so there are no discontinuities in the
sine carrier when changing from one binary state to another.
These discontinuities produce glitches that increase the har-
monic content and the bandwidth.
The idea is to synchronize the stop and start times of the
binary data with when the sine carrier is transitioning in ampli-
tude or frequency at the zero crossing points. This is called
continuous phase or coherent operation. Both coherent ASK/
OOK and coherent FSK have fewer harmonics and a narrower
bandwidth than non-coherent signals.
A second technique is to filter the binary data prior to modu-
lation. This rounds the signal off, lengthening the rise and fall
times and reducing the harmonic content. Special Gaussian
and raised cosine low pass filters are used for this purpose.
GSM cell phones widely use a popular combination, Gaussian
filtered MSK (GMSK), which allows a data rate of 270 kbits/s
in a 200-kHz channel.
BPSK AND QPSK
A very popular digital modulation scheme, binary phase shift keying (BPSK), shifts the carrier sine wave 180° for each change in binary state (Fig. 2). BPSK is coherent as the phase transitions occur at the zero crossing points. The proper demodulation of BPSK requires the signal to be compared to a sine carrier of the same phase. This involves carrier recovery and other complex circuitry.
A simpler version is differential BPSK or DPSK, where the
received bit phase is compared to the phase of the previous bit
signal. BPSK is very spectrally efficient in that you can trans-
mit at a data rate equal to the bandwidth or 1 bit/Hz.
In a popular variation of BPSK, quadrature PSK (QPSK), the modulator produces two sine carriers 90° apart. The binary data modulates each phase, producing four unique sine signals shifted by 45° from one another. The two phases are added together to produce the final signal. Each unique pair of bits generates a carrier with a different phase (Table 1).
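A small Python sketch of that dibit-to-phase mapping, using the assignments from Table 1 (the carrier frequency and amplitude are arbitrary placeholders):

```python
import math

# Bit-pair-to-carrier-phase mapping from Table 1 (degrees).
QPSK_PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

def qpsk_carrier(bit_pair, t, carrier_hz=1000.0, amplitude=1.0):
    """Value of the phase-shifted sine carrier at time t for one 2-bit symbol."""
    phase = math.radians(QPSK_PHASES[bit_pair])
    return amplitude * math.sin(2 * math.pi * carrier_hz * t + phase)

for pair, degrees in QPSK_PHASES.items():
    print(pair, "->", degrees, "degrees", f"(carrier at t=0: {qpsk_carrier(pair, 0.0):+.3f})")
```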
Figure 3a illustrates QPSK with a phasor diagram where the
phasor represents the carrier sine amplitude peak and its posi-
tion indicates the phase. A constellation diagram in Figure 3b
shows the same information. QPSK is very spectrally efficient
since each carrier phase represents two bits of data. The spectral efficiency is 2 bits/Hz, meaning twice the data rate can be achieved in the same bandwidth as BPSK.

1. Three basic digital modulation formats are still very popular with low-data-rate, short-range wireless applications: amplitude shift keying (a), on-off keying (b), and frequency shift keying (c). These waveforms are coherent, as the binary state change occurs at carrier zero crossing points.

2. In binary phase shift keying, note how a binary 0 is 0° while a binary 1 is 180°. The phase changes when the binary state switches, so the signal is coherent.
DATA RATE AND BAUD RATE
The maximum theoretical data rate or channel capacity (C) in bits/s is a function of the channel bandwidth (B) in Hz and the signal-to-noise ratio (SNR):

C = B log2(1 + SNR)
This is called the Shannon-Hartley law. The maximum data rate is directly proportional to the bandwidth and logarithmically proportional to the SNR. Noise greatly diminishes the data rate for a given bit error rate (BER).
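A quick numerical check of the Shannon-Hartley law; the 20-MHz bandwidth and 20-dB SNR below are illustrative values, roughly a Wi-Fi-sized channel:

```python
import math

def channel_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley law: C = B * log2(1 + SNR), with SNR as a linear ratio."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 20-MHz channel at 20 dB SNR (linear ratio = 100).
snr_db = 20.0
snr_linear = 10 ** (snr_db / 10)
print(channel_capacity_bps(20e6, snr_linear) / 1e6, "Mbits/s")  # about 133 Mbits/s
```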
Another key factor is the baud rate, or the number of modu-
lation symbols transmitted per second. The term symbol in
modulation refers to one specific state of a sine carrier signal. It
can be an amplitude, a frequency, a phase, or some combination
of them. Basic binary transmission uses one bit per symbol.
In ASK, a binary 0 is one amplitude and a binary 1 is another
amplitude. In FSK, a binary 0 is one carrier frequency and a
binary 1 is another frequency. BPSK uses a 0° shift for a binary 0 and a 180° shift for a binary 1. In each of these cases, there is one bit per symbol.
Data rate in bits/s is calculated as the reciprocal of the bit time (t_b):

bits/s = 1/t_b
With one symbol per bit, the baud rate is the same as the bit rate. However, if you transmit more bits per symbol, the baud rate is slower than the bit rate by a factor equal to the number of bits per symbol. For example, if 2 bits per symbol are transmitted, the baud rate is the bit rate divided by 2: with QPSK, a 70-Mbit/s data stream is transmitted at a baud rate of 35 Msymbols/s.
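The bit-rate/baud-rate relationship is easy to verify with a short sketch using the 70-Mbit/s QPSK example above (the 64QAM line is just for comparison):

```python
def baud_rate(bit_rate_bps: float, bits_per_symbol: int) -> float:
    """Symbol (baud) rate = bit rate / bits carried per symbol."""
    return bit_rate_bps / bits_per_symbol

# QPSK carries 2 bits per symbol, so a 70-Mbit/s stream needs 35 Msymbols/s.
print(baud_rate(70e6, 2) / 1e6, "Msymbols/s")   # 35.0
# 64QAM carries 6 bits per symbol, so the same stream needs far fewer symbols/s.
print(baud_rate(70e6, 6) / 1e6, "Msymbols/s")   # about 11.7
```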
M-PSK
QPSK produces two bits per symbol, making it very spec-
trally efficient. QPSK can be referred to as 4PSK because
there are four amplitude-phase combinations. By using smaller
phase shifts, more bits can be transmitted per symbol. Some
popular variations are 8PSK and 16PSK.
8PSK uses eight symbols of constant carrier amplitude with 45° shifts between them, enabling 3 bits to be transmitted for each symbol. 16PSK uses 22.5° shifts of constant-amplitude carrier signals. This arrangement results in a transmission of 4 bits per symbol.
While M-PSK is much more spectrally efficient, the greater
the number of smaller phase shifts, the more difficult the sig-
nal is to demodulate in the presence of noise. The benefit of
M-PSK is that the constant carrier amplitude means that more
efficient nonlinear power amplification can be used.
QAM
The creation of symbols that are some combination of
amplitude and phase can carry the concept of transmitting
more bits per symbol further. This method is called quadrature
amplitude modulation (QAM). For example, 8QAM uses four
carrier phases plus two amplitude levels to transmit 3 bits per
symbol. Other popular variations are 16QAM, 64QAM, and
256QAM, which transmit 4, 6, and 8 bits per symbol respec-
tively (Fig. 4).
While QAM makes enormously efficient use of spectrum, it is more difficult to demodulate in the presence of noise, which consists mostly of random amplitude variations. Linear power amplification is also required. QAM is very widely used in cable TV, Wi-Fi wireless local-area networks (LANs), satellites, and cellular telephone systems to produce the maximum data rate in limited bandwidths.
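To illustrate how QAM packs bits into combined amplitude-phase symbols, the sketch below builds a plain square 16QAM constellation carrying 4 bits per symbol. Note that Figure 4 shows a different, circular 16QAM arrangement, so this grid is only a generic example, not the one in the figure.

```python
import cmath
from itertools import product

def square_16qam():
    """Map each 4-bit value to one point on a 4x4 grid of I/Q amplitude levels."""
    levels = [-3, -1, 1, 3]                       # four amplitude levels per axis
    points = [complex(i, q) for i, q in product(levels, levels)]
    return {value: point for value, point in enumerate(points)}   # 16 symbols, 4 bits each

constellation = square_16qam()
for value in (0b0000, 0b0101, 0b1111):
    amplitude, phase_rad = cmath.polar(constellation[value])
    print(f"{value:04b} -> I={constellation[value].real:+.0f}, "
          f"Q={constellation[value].imag:+.0f}, |A|={amplitude:.2f}, phase={phase_rad:+.2f} rad")
```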
APSK
Amplitude phase shift keying (APSK), a variation of both
M-PSK and QAM, was created in response to the need for an
improved QAM. Higher levels of QAM such as 16QAM and
above have many different amplitude levels as well as phase
shifts. These amplitude levels are more susceptible to noise.
Furthermore, these multiple levels require linear power
amplifiers (PAs) that are less efficient than nonlinear (e.g.,
class C). The fewer the number of amplitude levels or the
smaller the difference between the amplitude levels, the greater
the chance to operate in the nonlinear region of the PA to boost
power level.
APSK uses fewer amplitude levels. It essentially arranges the symbols into two or more concentric rings with a constant phase offset between the rings. For example, 16APSK uses a double-ring PSK format (Fig. 5). This is called 4-12 16APSK, with four symbols in the center ring and 12 in the outer ring.

3. Modulation can be represented without time-domain waveforms. For example, QPSK can be represented with a phasor diagram (a) or a constellation diagram (b), both of which indicate phase and amplitude magnitudes.

4. 16QAM uses a mix of amplitudes and phases to achieve 4 bits/Hz. In this example, there are three amplitudes and 12 phase shifts.
Two close amplitude levels allow the amplifier to operate
closer to the nonlinear region, improving efficiency as well as
power output. APSK is used primarily in satellites since it is a
good fit with the popular traveling wave tube (TWT) PAs.
OFDM
Orthogonal frequency division multiplexing (OFDM) combines modulation and multiplexing techniques to improve spectral efficiency. A transmission channel is divided into many smaller subchannels or subcarriers. The subcarrier frequencies and spacings are chosen so they're orthogonal to one another. Their spectra won't interfere with one another, so no guard bands are required (Fig. 6).
The serial digital data to be transmitted is subdivided into
parallel slower data rate channels. These lower data rate sig-
nals are then used to modulate each subcarrier. The most com-
mon forms of modulation are BPSK, QPSK, and several levels
of QAM. BPSK, QPSK, 16QAM, and 64QAM are defined in 802.11n. Data rates up to about 300 Mbits/s are possible with 64QAM.
The complex modulation process is practical only with digital signal processing (DSP) techniques. An inverse fast Fourier transform (IFFT) generates the signal to be transmitted. An FFT process recovers the signal at the receiver.
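A bare-bones sketch of that IFFT/FFT round trip is shown below. It maps QPSK symbols onto 64 subcarriers with NumPy; a real OFDM system adds pilots, a cyclic prefix, and channel equalization that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
num_subcarriers = 64

# Map random bit pairs to QPSK points (one complex symbol per subcarrier).
bits = rng.integers(0, 2, size=(num_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: an inverse FFT turns the per-subcarrier symbols into a time-domain signal.
time_signal = np.fft.ifft(qpsk)

# Receiver: an FFT recovers the per-subcarrier symbols.
recovered = np.fft.fft(time_signal)

print(np.allclose(recovered, qpsk))  # True over an ideal, noiseless channel
```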
OFDM is very spectrally efficient. That efficiency level
depends on the number of subcarriers and the type of modula-
tion, but it can be as high as 30 bits/s/Hz. Because of the wide
bandwidth it usually occupies and the large number of subcar-
riers, it also is less prone to signal loss due to fading, multipath
reflections, and similar effects common in UHF and micro-
wave radio signal propagation.
Currently, OFDM is the most popular form of digital modu-
lation. It is used in Wi-Fi LANs, WiMAX broadband wire-
less, Long-Term Evolution (LTE) 4G cellular systems, digital
subscriber line (DSL) systems, and in most power-line com-
munications (PLC) applications. For more, see "Orthogonal Frequency-Division Multiplexing (OFDM): FAQ Tutorial" at http://mobiledevdesign.com/tutorials/ofdm.
DETERMINING SPECTRAL EFFICIENCY
Again, spectral efficiency is a measure of how quickly
data can be transmitted in an assigned bandwidth. The unit
of measurement for spectral efficiency is bits/s/Hz (b/s/Hz).
Each type of modulation has a maximum theoretical spectral
efficiency measure (Table 2).
SNR is another important factor that influences spectral
efficiency. It also can be expressed as the carrier to noise power
ratio (CNR). The measure is the BER for a given CNR value.
BER is the percentage of errors that occur in a given number of
bits transmitted. As the noise becomes larger compared to the
signal level, more errors occur.
Some modulation methods are more immune to noise than
others. Amplitude modulation methods like ASK/OOK and
QAM are far more susceptible to noise so they have a higher
BER for a given modulation. Phase and frequency modulation
(BPSK, FSK, etc.) fare better in a noisy environment so they
require less signal power for a given noise level (Fig. 7).
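For reference, the theoretical bit error rate of coherent BPSK in additive white Gaussian noise is BER = Q(√(2·Eb/N0)), which the short sketch below evaluates. Eb/N0 is related to, but not the same quantity as, the CNR axis in Figure 7, so treat the numbers as a rough guide only.

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_ber(ebn0_db: float) -> float:
    """Theoretical BPSK bit error rate in AWGN: Q(sqrt(2 * Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)
    return q_function(math.sqrt(2 * ebn0))

for ebn0_db in (4, 6, 8, 10):
    print(f"Eb/N0 = {ebn0_db} dB -> BER ~ {bpsk_ber(ebn0_db):.2e}")
```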
OTHER FACTORS AFFECTING SPECTRAL EFFICIENCY
While modulation plays a key role in the spectral efficiency
you can expect, other aspects in wireless design influence it as
well. For example, the use of forward error correction (FEC)
techniques can greatly improve the BER. Such coding methods
add extra bits so errors can be detected and corrected.
These extra coding bits add overhead to the signal, reducing the net bit rate of the data, but that's usually an acceptable tradeoff for the single-digit dB improvement in CNR. Such coding gain is common to almost all wireless systems today.
Digital compression is another useful technique. The digital data to be sent is subjected to a compression algorithm that greatly reduces the amount of information. This allows digital signals to be reduced in content so they can be transmitted as shorter, slower data streams.

For example, voice signals are compressed for digital cell phones and voice over Internet protocol (VoIP) phones. Music is compressed in MP3 or AAC files for faster transmission and less storage. Video is compressed so high-resolution images can be transmitted faster or in bandwidth-limited systems.

TABLE 1: CARRIER PHASE SHIFT FOR EACH PAIR OF BITS REPRESENTED
Bit pair    Phase (degrees)
0 0         45
0 1         135
1 1         225
1 0         315

TABLE 2: SPECTRAL EFFICIENCY FOR POPULAR DIGITAL MODULATION METHODS
Type of modulation    Spectral efficiency (bits/s/Hz)
FSK                   <1 (depends on modulation index)
GMSK                  1.35
BPSK                  1
QPSK                  2
8PSK                  3
16QAM                 4
64QAM                 6
OFDM                  >10 (depends on the type of modulation and the number of subcarriers)

5. 16APSK uses two amplitude levels, A1 and A2, plus 16 different phase positions with a phase offset between the two rings. This technique is widely used in satellites.
Another factor affecting spectral efficiency is the utiliza-
tion of multiple-input multiple-output (MIMO), which is the
use of multiple antennas and transceivers to transmit two or
more bit streams. A single high-rate stream is divided into
two parallel streams and transmitted in the same bandwidth
simultaneously.
By coding the streams and their unique path characteristics,
the receiver can identify and demodulate each stream and reas-
semble it into the original stream. MIMO, therefore, improves
data rate, noise performance, and spectral efficiency. Newer
wireless LAN (WLAN) standards like 802.11n and 802.11ac/
ad and cellular standards like LTE and WiMAX use MIMO.
For more, see "How MIMO Works" at http://electronicdesign.com/article/communications/how-mimo-works12998.aspx.
IMPLEMENTING MODULATION AND DEMODULATION
In the past, unique circuits implemented modulation and
demodulation. Today, most modern radios are software-
defined radios (SDR) where functions such as modulation
and demodulation are handled in software. DSP algorithms
manage the job that was previously assigned to modulator and
demodulator circuits.
The modulation process begins with the data to be trans-
mitted being fed to a DSP device that generates two digital
outputs, which are needed to define the amplitude and phase
information required at the receiver to recover the data. The
DSP produces two baseband streams that are sent to digital-to-
analog converters (DACs) that produce the analog equivalents.
These modulation signals feed the mixers along with the
carrier. There is a 90° shift between the carrier signals to the mixers. The resulting quadrature output signals from the mixers are summed to produce the signal to be transmitted. If the
carrier signal is at the final transmission frequency, the com-
posite signal is ready to be amplified and sent to the antenna.
This is called direct conversion. Alternatively, the carrier signal
may be at a lower intermediate frequency (IF). The IF signal
is upconverted to the final carrier frequency by another mixer
before being applied to the transmitter PA.
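A stripped-down numeric sketch of that quadrature modulator is shown below: two baseband streams multiply carriers 90° apart and the products are summed. The sample rate, carrier frequency, and baseband levels are arbitrary, and the filtering, DACs, and upconversion steps are omitted.

```python
import numpy as np

fs = 1_000_000            # sample rate, Hz (arbitrary for this sketch)
fc = 100_000              # carrier frequency, Hz (arbitrary)
t = np.arange(0, 1e-3, 1 / fs)

# Two baseband streams from the DSP/DACs; here just fixed QPSK-like levels.
i_baseband = np.full_like(t, +0.707)
q_baseband = np.full_like(t, -0.707)

# Quadrature mixers: one carrier, one carrier shifted 90 degrees; outputs summed.
rf = i_baseband * np.cos(2 * np.pi * fc * t) - q_baseband * np.sin(2 * np.pi * fc * t)

# On the receive side, mixing rf with the same two carriers (plus lowpass filtering,
# omitted here) recovers i_baseband and q_baseband.
print(rf[:4])
```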
At the receiver, the signal from the antenna is amplified
and downconverted to IF or directly to the original baseband
signals. The amplified signal from the antenna is applied to
mixers along with the carrier signal. Again, there is a 90° shift between the carrier signals applied to the mixers.
The mixers produce the original baseband analog signals,
which are then digitized in a pair of analog-to-digital convert-
ers (ADCs) and sent to the DSP circuitry where demodulation
algorithms recover the original digital data.
There are three important points to consider. First, the
modulation and demodulation processes use two signals in
quadrature with one another. The DSP calculations call for two
quadrature signals if the phase and amplitude are to be pre-
served and captured during modulation or demodulation.
Second, the DSP circuitry may be a conventional program-
mable DSP chip or may be implemented by fixed digital logic
implementing the algorithm. Fixed logic circuits are smaller
and faster and are preferred for their low latency in the modula-
tion or demodulation process.
Third, the PA in the transmitter needs to be a linear amplifier
if the modulation is QPSK or QAM to faithfully reproduce the
amplitude and phase information. For ASK, FSK, and BPSK, a
more efficient nonlinear amplifier may be used.
THE PURSUIT OF GREATER SPECTRAL EFFICIENCY
With spectrum being a finite entity, it is always in short sup-
ply. The Federal Communications Commission (FCC) and
other government bodies have assigned most of the electro-
magnetic frequency spectrum over the years, and most of that
is actively used.
6. In the OFDM signal for the IEEE 802.11n Wi-Fi standard, 56 subcarriers are spaced 312.5 kHz apart in a 20-MHz channel, and each subcarrier is modulated by BPSK, QPSK, 16QAM, or 64QAM. Data rates to 300 Mbits/s can be achieved with 64QAM.
7. This is a comparison of several popular modulation methods (BPSK, QPSK, 8QAM, 8PSK, 16QAM, and 64QAM) in terms of BER versus CNR. Note that for a given BER, a greater CNR is needed for the higher QAM levels.
Shortages now exist in the cellular and
land mobile radio sectors, inhibiting the
expansion of services such as high data
speeds as well as the addition of new sub-
scribers. One approach to the problem
is to improve the efficiency of usage by
squeezing more users into the same or
less spectrum and achieving higher data
rates. Improved modulation and access
methods can help.
One of the most crowded areas of spec-
trum is the land mobile radio (LMR) and
private mobile radio (PMR) spectrum
used by the federal and state governments
and local public safety agencies like fire
and police departments. Currently, they're assigned spectrum by FCC license in the 150- to 174-MHz VHF spectrum and the 421- to 512-MHz UHF spectrum.
Most radio systems and handsets use
FM analog modulation that occupies a
25-kHz channel. Recently the FCC has
required all such radios to switch over
to 12.5-kHz channels. This conversion, known as narrowbanding, doubles the number of available channels.
Narrowbanding is expected to improve a radio's ability to get access to a channel. It also means that more radios can be added to the system. This conversion must take place before January 1, 2013. Otherwise, an agency or business could lose its license or be fined. This switchover will be expensive, as new radio systems and handsets are required.
In the future, the FCC is expected to
mandate a further change from the 12.5-
kHz channels to 6.25-kHz channels,
again doubling capacity without increas-
ing the amount of spectrum assigned. No
date for that change has been set.
The new equipment can use either ana-
log or digital modulation. It is possible
to put standard analog FM in a 12.5-kHz
channel by adjusting the modulation
index and using other bandwidth-nar-
rowing techniques. However, analog FM
in a 6.25-kHz channel is unworkable, so
a digital technique must be used.
Digital methods digitize the voice sig-
nal and use compression techniques to
produce a very low-rate serial digital sig-
nal that can be modulated into a narrow
band. Such digital modulation techniques
are expected to meet the narrowbanding
goal and provide some additional perfor-
mance advantages.
New modulation techniques and protocols, including P25, TETRA, DMR, dPMR, and NXDN, have been developed to meet this need. All of these new methods must meet the requirements of the FCC's Part 90 regulations and/or European Telecommunications Standards Institute (ETSI) standards such as TS-102 490 and TS-102 658 for LMR.
The most popular digital LMR tech-
nology, P25, is already in wide use in
the U.S. with 12.5-kHz channels. Its fre-
quency division multiple access (FDMA)
method divides the assigned spectrum
into 6.25-kHz or 12.5-kHz channels.
Phase I of the P25 project uses a
four-symbol FSK (4FSK) modulation.
Standard FSK, covered earlier, uses two
frequencies or tones to achieve 1 bit/
Hz. However, 4FSK is a variant that
uses four frequencies to provide 2-bit/
Hz efficiency. With this scheme, the stan-
dard achieves a 9600-bit/s data rate in a
12.5-kHz channel. With 4FSK, the carrier frequency is shifted by ±1.8 kHz or ±600 Hz to achieve the four symbols.
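A minimal sketch of a four-level FSK mapping of that kind appears below. The ±1800-Hz and ±600-Hz deviations follow the figures quoted above, but the particular dibit assignment and the carrier frequency are illustrative, not taken from the P25 standard.

```python
# Generic 4FSK: each 2-bit symbol selects one of four carrier frequency deviations.
# The +/-1800-Hz and +/-600-Hz values follow the article's P25 Phase 1 figures;
# this particular dibit assignment and carrier are illustrative only.
DEVIATION_HZ = {
    (0, 1): +1800,
    (0, 0): +600,
    (1, 0): -600,
    (1, 1): -1800,
}

def symbol_frequencies(bits, carrier_hz=154_000_000):
    """Return the instantaneous carrier frequency for each 2-bit symbol."""
    pairs = zip(bits[0::2], bits[1::2])
    return [carrier_hz + DEVIATION_HZ[pair] for pair in pairs]

# 9600 bits/s at 2 bits per symbol -> 4800 symbols/s in a 12.5-kHz channel.
print(symbol_frequencies([0, 1, 1, 1, 0, 0, 1, 0]))
```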
In Phase 2, a compatible QPSK modu-
lation scheme is used to achieve a simi-
lar data rate in a 6.25-kHz channel. The
phase is shifted either ±45° or ±135° to get the four symbols. A unique demodulator has been developed to detect either
the 4FSK or QPSK signal to recover the
digital voice. Only different modulators
on the transmit end are needed to make
the transition from Phase 1 to Phase 2.
The most widespread digital LMR
technology outside of the U.S. is TET-
RA, or Terrestrial Trunked Radio. This
ETSI standard is universally used in
Europe as well as in Africa, Asia, and
Latin America. Its time division multiple
access (TDMA) approach multiplexes
four digital voice or data signals into a
25-kHz channel.
A single channel is used to support a
digital stream of four time slots for the
digital data for each subscriber. This is
equivalent to four independent signals in
adjacent 6.25-kHz channels. The modulation is π/4-DQPSK, and the data rate is 7.2 kbits/s per time slot.
Another ETSI standard, digital mobile
radio (DMR), uses a 4FSK modulation
scheme in a 12.5-kHz channel. It can
achieve a 6.25-kHz channel equivalent
in a 12.5-kHz channel by using two-slot
TDMA. The voice is digitally coded with
error correction, and the basic rate is 3.6
kbits/s. The data rate in the 12.5-kHz band is 9600 bits/s.
A similar technology is dPMR, the digital private mobile radio standard. This
ETSI standard also uses a 4FSK modula-
tion scheme, but the access is FDMA in
6.25-kHz channels. The voice coding rate
is also 3.6 kbits/s with error correction.
LMR manufacturers Icom and Ken-
wood have developed NXDN, another
standard for LMR. It is designed to oper-
ate in either 12.5- or 6.25-kHz channels
using digital voice compression and a
four-symbol FSK system. A channel may
be selected to carry voice or data.
The basic data rate is 4800 bits/s. The
access method is FDMA. NXDN and
dPMR are similar, as they both use 4FSK
and FDMA in 6.25-kHz channels. The
two methods are not compatible, though,
as the data protocols and other features
are not the same.
Because all of these digital techniques
are similar and operate in standard fre-
quency ranges, Freescale Semiconductor
was able to make a single-chip digital
radio that includes the RF transceiver
plus an ARM9 processor that can be pro-
grammed to handle any of the digital
standards. The MC13260 system-on-
a-chip (SoC) can form the basis of a
handset radio for any one, if not multiple, protocols. For more, see "Chip Makes Two-Way Radio Easy" at http://electronicdesign.com/article/communications/Chip-Makes-Two-Way-Radio-Easy.aspx.
Another example of modulation tech-
niques improving spectral efficiency and
increasing data throughput in a given
channel is a new technique from Novel-
Sat called NS3 modulation. Satellites are
positioned in an orbit around the equator
about 22,300 miles from earth. This is
called the geostationary orbit, and satel-
lites in it rotate in synchronization with
the earth so they appear fixed in place,
making them a good signal relay plat-
form from one place to another on earth.
Satellites carry several transponders
that pick up the weak uplink signal from
earth and retransmit it on a different fre-
quency. These transponders are linear
and have a fixed bandwidth, typically 36
MHz. Some of the newer satellites have
72-MHz channel transponders. With a
fixed bandwidth, the data rate is some-
what fixed as determined by the modula-
tion scheme and access methods.
The question is how one deals with the
need to increase the data rate in a remote
satellite as required by the ever-increasing demand for more traffic capacity. The answer lies in simply creating and implementing a more spectrally efficient modulation method. That's what NovelSat did. Its NS3 modulation method increases bandwidth capacity up to 78%.
increases bandwidth capacity up to 78%.
That level of improvement comes from
a revised version of APSK modulation
covered earlier. One commonly used sat-
ellite transmission standard, DVB-S2, is
a single carrier (typically L-band, 950 to
1750 MHz) that can use QPSK, 8PSK,
16APSK, and 32APSK modulation with
different forward error correction (FEC)
schemes. The most common application
is video transmission.
NS3 improves on DVB-S2 by offer-
ing 64APSK with multiple amplitude
and phase symbols to improve efficien-
cy. Also included is low density parity
check (LDPC) coding. This combination
provides a maximum data rate of 358
Mbits/s in a 72-MHz transponder.
Because the modulation is APSK, the
TWT PAs don't have to be backed off to
preserve perfect linearity. As a result, they
can operate at a higher power level and
achieve the higher data rate with a lower
CNR than DVB-S2. NovelSat offers its
NS1000 modulator and NS2000 demod-
ulator units to upgrade satellite systems
to NS3. In most applications, NS3 pro-
vides a data rate boost over DVB-S2 for
a given CNR.
ACKNOWLEDGMENT
Special thanks to marketing director
Debbie Greenstreet and technical mar-
keting manager Zhihong Lin at Texas
Instruments as well as David Fursten-
berg, chairman of NovelSat, for their
help with this article.
MORE FROM LOU FRENZEL
SEE MORE communications coverage from Lou at http://electronicdesign.com/author/1843/LouisEFrenzel.
FACTOR PFC INTO YOUR POWER-SUPPLY DESIGN
SAM DAVIS | CONTRIBUTING EDITOR samdavis2@earthlink.net
Before the latest IEC61000-3-2 standard took effect in 2005, most power supplies for PCs, monitors, and TVs generated excessive line harmonics when operating from single-phase, 110- to 120-V, 60-Hz ac. Spurred on by this newer and stricter IEC standard, power-supply manufacturers aim to minimize power-line harmonics by adding power factor correction (PFC).

To understand the impact of IEC61000-3-2, it's best to first look at the ideal situation, which places a load resistor (R) directly across the power line (Fig. 1). Here, the sinusoidal line current, I_AC, is directly proportional to and in phase with the line voltage, V_AC. Therefore:

I(t) = V(t)/R   (1)
This means that for the most efficient and distortion-free power-line operation, all
loads should behave as an effective resistance (R), whereby the power used and deliv-
ered is the product of the RMS line voltage and line current.
Stricter guidelines imposed by version 3 of the IEC standard for harmonic current emissions push designers to embrace power-factor-correction methodologies.
However, loads for many electronic systems require an ac-
to-dc conversion. In this case, the load on the power line from
a typical power supply consists of a diode bridge driving a
capacitor (Fig. 2).
It's a nonlinear load for the power line because two diodes
of the bridge rectifier lie in the direct power path for either the
positive or negative half-cycle of the input ac line voltage. This
nonlinear load draws line current only during the peak of the
sinusoidal line voltage, resulting in the peaky input line cur-
rent that causes line harmonics (Fig. 3).
A nonlinear load causes harmonics comparable in magni-
tude to the fundamental harmonic current at line frequency.
Figure 4 shows the magnitude of higher-order harmonic currents normalized with respect to the magnitude of the fundamental harmonic at line frequency.
However, only the harmonic current at the same frequency
as the line frequency and in phase with the line voltage (in this
case, the fundamental harmonic at line frequency) given in
Figure 1 contributes to the average power delivered to the load.
These harmonic currents can affect the operation of other equipment on the same utility line.
The magnitude of line harmonics depends on a power supply's power factor, which varies from 0 to 1. A low power-factor value causes higher harmonics, while a high power-factor value produces lower harmonics. Power factor (PF) is defined as:

PF = P/(I_RMS × V_RMS)   (2)

where P = real power in watts; I_RMS = RMS line current; V_RMS = RMS line voltage; and V_RMS × I_RMS = apparent power in volt-amperes (VA). PF also equals the cosine of the phase angle (θ) between line current and voltage; in that regard, Equation 2 can be rewritten as:

P = (I_RMS × V_RMS)cos θ   (3)

The value of cos θ is a number between 0 and 1. If θ = 0, cos θ = 1 and P = I_RMS × V_RMS, which is the same as for a resistor load. When the PF is 1, the load consumes all of the energy supplied by the source.

If θ = 90°, then cos θ = 0; therefore, the load receives zero power. The generator that's providing the power must still deliver I_RMS × V_RMS volt-amperes, even though no power is used for useful work.

Thus, for the diode bridge-capacitor case in Figure 2, the only variable left in the PF definition of Equation 2 is the line current I_RMS, since line voltage (V_RMS) is fixed by power-line generators at 120 V. The higher the I_RMS the power line draws for a given average power delivered to the load, the lower the power factor (PF).
The ac-dc converter in Figure 2, which operates from a 120-V ac line voltage and delivers 600 W to the load while drawing 10 A of line current, has a PF = 0.5. However, Figure 1's resistive load with a PF of 1, which draws 600 W from the 120-V ac line, draws only 5 A from the line.
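Those numbers are easy to check against the PF definition of Equation 2:

```python
def power_factor(real_power_w: float, v_rms: float, i_rms: float) -> float:
    """PF = P / (Vrms * Irms), i.e., real power over apparent power."""
    return real_power_w / (v_rms * i_rms)

# Diode bridge-capacitor load of Figure 2: 600 W drawn as 10 A from a 120-V line.
print(power_factor(600, 120, 10))   # 0.5
# Purely resistive load of Figure 1: the same 600 W needs only 5 A.
print(power_factor(600, 120, 5))    # 1.0
```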
The electric utility suffers from low PF loads because it
must provide higher generating capability to support demands
for increased line current due to poor load PF. Nonetheless, it
charges the user only for delivery of average power in watts, not for the generation of volt-amperes.
This difference between volt-amperes and watts either
appears as heat or is reflected back to the ac power line. The
most common means of correcting this condition is to employ
power factor correction.
POWER-FACTOR CORRECTION
The IEC-61000-3-2 standard defines the maximum har-
monic current allowed for a given power level. Initial versions
of the standard in 1995 and 2001 were changed by the 2005
Edition 3. It imposed stricter requirements on power-line
harmonic currents for (Class D) PCs, monitors, and TVs con-
suming between 75 and 600 W and 16 A per phase. To meet
those requirements, designers must employ active power-
factor correction (PFC) in Class D power supplies.
Many PFC circuits employ a boost converter. One limitation in the conventional boost PFC converter is that it can operate only from the rectified ac line, which involves two-stage power processing (Fig. 5). Waveforms generated by the converter better illustrate this problem (Fig. 6). In addition, there's no simple and effective way to introduce isolation in a conventional boost converter.

1. With a resistive load on the power line (a), line current is proportional to and in phase with the line voltage (b).

2. A diode bridge and capacitor across the power line results in a nonlinear load.

3. Line current is peaky and out of phase with the diode bridge-capacitor load's line voltage.
Using a full-bridge extension of the boost converter, which
is then controlled as a PFC converter, is one way to introduce
isolation (Fig. 7). However, this adds the complexity of four
transistors on the primary side and four diode rectifiers on the
secondary, both operating at the switching frequency of, say,
100 kHz. Plus, four more diodes are in the input bridge rectifier
operating at the line frequency of 50/60 Hz.
Besides low-frequency sinusoidal current, the line current
will have superimposed input inductor ripple current at the
high switching frequency, which needs to be filtered out by an
additional high-frequency filter on the ac line. The presence
of 12 switches operating in the hard-switching mode results
in high conduction and switching losses. The best efficiency
reported for this two-stage approach and its supplementary
switching devices is 87%.
Such a method also suffers from the startup problem due to
step-up dc conversion gain. It needs additional circuitry to pre-
charge the output capacitor so the converter can start up.
To achieve 1 kW or higher power, designers often employ
a three-stage approach (Fig. 8). Here, the standard boost PFC
converter and an isolated step-down converter follow the input
bridge rectifier. This requires a total of 14 switches. At least six
of those switches are high voltage, further decreasing efficien-
cy and increasing the cost. Still, with the
highest efficiency based on best present
switching devices reaching about 90%,
it's better than the two-stage approach.
For medium and low power, there's an alternative approach
that reduces the amount of switches by using a forward con-
verter for the isolation stage (Fig. 9). Before going this route,
one must be aware that although there are now 10 switches, the
four switching devices in the forward converter impose greater
voltage stresses on both primary and secondary side switches
than the full-bridge solution. In addition, the full-bridge solu-
tion requires four magnetic components.
BRIDGELESS PFC CONVERTER
Breaking new ground in this arena, Dr. Slobodan Cuk, presi-
dent of Teslaco, developed a bridgeless PFC converter (patent
pending) that operates directly from the ac line. It's claimed to
be the first true single-stage bridgeless ac-dc PFC converter.
To accomplish this feat, Cuk employs a new switching pow-
er-conversion method, termed hybrid-switching. It employs
a converter topology consisting of only three switches: one
controllable switch S and two passive current rectifier switches
(CR1 and CR2) (Fig. 10).
The two rectifiers turn on and off in response to the state of
the main switch (S) for either positive or negative polarity of
the input ac voltage. This topology consists of an inductor in
series with the input, the floating energy-transferring capacitor
that acts as a resonant capacitor for the part of the switching
cycle, and a resonant inductor.
Because the conventional converters based on PWM square-wave switching use inductors and capacitors, they require complementary pairs of switches. When one switch is on, its complementary switch is off and vice versa. As a result, only an even number of switches is allowed, compared with an odd number (three) in the new hybrid-switching PFC converter.

4. Peaky line current generates current harmonics comparable in magnitude to the fundamental harmonic current at the line frequency.

5. Two-stage power processing is required in this simplified conventional PFC boost converter (two stages at 97% efficiency each yield about 94% overall).

6. Shown are voltage and current waveforms from a conventional PFC boost converter.

7. A full-bridge extension of the boost converter, controlled as a PFC converter, provides isolation.
In this setup, no such complementary switches exist. One
active switch S solely controls both diodes, whose roles change
automatically according to the polarity of the ac input voltage.
For the positive polarity of the ac input voltage, CR1 conducts
during the on-time interval of switch S. For the negative polarity
of ac input voltage, CR1 conducts during the off-time interval
of switch S. CR2 also responds automatically to the state of
switch S and input ac-voltage polarity. For a positive polarity, it
conducts during the off-time interval of switch S; for negative
polarity, it conducts during the on-time interval of switch S.
Thus, the three switches operate at all times for both positive
and negative half-cycles of the input ac line voltage. Hence, this
true bridgeless PFC converter operates without the full-bridge
rectifier because the converter topology actually performs ac
line rectification. The end result is the same dc output voltage
for either polarity of input ac line voltage. Elimination of the
full-bridge rectifier directly eliminates
losses, especially for an 85-V low line.
The active switch S on the prima-
ry side is modulated and operated at
the switching frequency, which mea-
sures three orders of magnitude higher
than the line frequency (e.g., 50-kHz
switching frequency compared to a low
ac line frequency of 50/60 Hz). Duty
ratio (D) can be defined with respect
to on-time of the controlling switch S
and all steady-state quantities, such as
dc conversion ratios, and dc current of
inductor L is expressed in terms of D.
The full-wave input line voltage and
input line currents are then sensed and
sent as input to the bridgeless PFC IC
controller. In turn, the controller modu-
lates switch S on the primary side to
force the input line current to be propor-
tional to the input line voltage, provid-
ing the desired unity power factor.
This PFC converter's truly remarkable property is that a galvanically isolated extension retains the same simplicity of the three-switch converter in Figure 10. Basically, the resonant capacitor splits into two capacitors in series, and the isolation transformer is inserted at the point of their split [1, 2].
DIGITALLY CONTROLLED PFC
The availability of low-cost, high-
performance digital controllers intend-
ed for power supplies has led to their
use in PFC designs. Digital controllers
provide programmable configuration,
nonlinear control, and low part counts, as well as the ability to
implement complex functions that are usually difficult with an
analog approach.
Most digital power controllers, such as Texas Instruments' UCD3020 [3], provide integrated power-control peripherals and a power-management core, including digital loop compensators, fast analog-to-digital converters (ADCs), high-resolution digital pulse-width modulators (DPWMs) with built-in dead time, low-power-consumption microcontrollers, etc. They support a complex, high-performance power-supply design, such as a bridgeless PFC.

8. Power supplies handling at least 1 kW typically employ a three-stage PFC converter.

9. This PFC circuit uses an isolated forward converter, a setup usually reserved for medium- and low-power situations.

10. This bridgeless PFC uses a hybrid-switching method that employs a three-switch converter topology: one controllable switch (S) and two passive current rectifier switches (CR1 and CR2).
For example, a bridgeless
PFC can incorporate two dc-
dc boost circuits: L1, D1, S1
and L2, D2, S2 (Fig. 11). D3
and D4 are slow-recovery
diodes. Separately sensing
the line and neutral voltages
referenced to internal power
ground enables the input ac
voltage measurement.
By comparing the sensed
line and neutral signals, firm-
ware can tell if it's a positive
or negative half-cycle. During
a positive half-cycle, the first
dc-dc boost circuit (L1-S1-D1)
is active and the boost current
returns to ac neutral through
D4. During a negative half-
cycle, L2-S2-D2 is active and
the boost current returns to the
ac line through D3.
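A toy sketch of that half-cycle decision as firmware logic is shown below; the sensed-voltage arguments are hypothetical names, not UCD3020 signal or register names.

```python
def active_boost_leg(v_line_sense: float, v_neutral_sense: float) -> str:
    """Pick the active boost leg by comparing the sensed line and neutral voltages."""
    if v_line_sense > v_neutral_sense:
        # Positive half-cycle: L1-S1-D1 boosts; return current flows through D4.
        return "L1-S1-D1 active, return via D4"
    # Negative half-cycle: L2-S2-D2 boosts; return current flows through D3.
    return "L2-S2-D2 active, return via D3"

print(active_boost_leg(+160.0, 0.0))   # positive half-cycle
print(active_boost_leg(-160.0, 0.0))   # negative half-cycle
```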
Compared with conventional
single-phase PFCs using the
same power devices, a bridge-
less PFC and a single-phase
PFC should have the same
switching losses. However, a
bridgeless PFC current passes
only one slow diode (D4 for
positive half-cycle and D3 for
negative half-cycle) instead of two at any
time. Thus, efficiency improvement relies
on the difference in conduction loss between
one diode and two diodes.
Bridgeless PFC efficiency also can be
improved by turning the inactive switch
fully on. For example, during a positive
cycle, S2 can be fully turned on while S1
is controlled by the PWM signal. Since the
voltage drop on MOSFET S2 may be lower
than D4 when the flowing current is below a
certain value, the return current partially or
totally flows through L1-D1-RL-S2-L2 and
then back to the ac source. This decreases
conduction loss and improves circuit effi-
ciency, especially at light loads. Similarly,
during a negative cycle, S1 gets turned on
fully while S2 is switching.
11. A digitally controlled bridgeless PFC consists of two phase-boost circuits, but only one phase is active at a time.

12. Analog Devices' ADP1048 digital PFC is configured as a bridgeless PFC.
With the same input ac voltage and dc output voltage, the output current is proportional to the voltage loop output. Armed with this knowledge, the frequency and output voltage can be adjusted accordingly. Firmware implements the voltage loop in digital controllers. Because the output is already known, it's easy to implement this feature, and it's less costly than an analog approach.
MORE DIGITAL CONTROL
Analog Devices recently introduced the ADP1047 and ADP1048 digital PFC controllers [4], which also provide input power metering and inrush current control. The ADP1047 is intended for single-phase PFC applications, while the ADP1048 targets interleaved and bridgeless PFC applications.
The digital PFC function is based on a conventional boost
circuit to provide optimum harmonic correction and power
factor for ac-dc systems. All signals are converted into the
digital domain to maximize flexibility; key parameters can be
reported and adjusted via a PMBus interface.
Overall, the ADP1047 and the ADP1048 were configured
to assist designers in optimizing system performance and
maximizing efficiency across the load range. The two ICs
accurately measure RMS input voltage, current, and power.
Then that data can be reported to the power supply's micro-
controller via the PMBus.
The ADP1048's bridgeless boost configuration allows removal of the conduction losses caused by the PFC converter's input bridge (Fig. 12). In this configuration, the two power MOSFETs must be driven separately to achieve the highest efficiency. Signals from the ADP1048 make this possible. The IBAL pin detects the ac line phase and zero crossings. The maximum rating on the IBAL pin is V_DD + 0.3 V, so it needs to be protected with a suitable clamp circuit.
During the positive ac line phase, only one boost stage is
effectively working. The second stage is passive; the current
flows in Q2 from the source to the drain. Turning the Q2 FET
fully on during this phase minimizes conduction losses in Q2.
When the ac line phase becomes negative, the roles of Q1
and Q2 are reversed, and Q2 switches actively while Q1 is
always on. The phase information is detected from the ac
line via the IBAL pin. During the soft start phase, both FETs
switch as a precautionary measure. The same situation hap-
pens when phase information on the IBAL pin becomes cor-
rupted or inaccurate.
REFERENCES
1. Cuk, Slobodan, "True Bridgeless PFC Converter Achieves Over 98% Efficiency, 0.999 Power Factor," Power Electronics Technology, July 2010.
2. Cuk, Slobodan, "True Bridgeless PFC Converter Achieves Over 98% Efficiency, 0.999 Power Factor, Part 2," Power Electronics Technology, August 2010.
3. Bosheng Sun and Zhong Ye, "Digital Control Improves Bridgeless PFC Performance," Power Electronics Technology, March 2011.
4. Analog Devices, ADP1047/ADP1048 "Digital Power Factor Controller with Accurate AC Power Metering" data sheet, September 2011.
THE FUNDAMENTALS OF FLASH MEMORY STORAGE
BILL WONG | EMBEDDED/SYSTEMS/SOFTWARE EDITOR bill.wong@penton.com
There's more to flash memory than NAND and NOR, as new technologies drastically improve storage capacities while reducing real estate and power requirements.

Flash memory is ubiquitous, especially in mobile devices. Available in a wide range of form factors, it continues to push hard-disk drives from more and more platforms as its costs go down and capacities and operational lifetimes go up (Fig. 1).

NAND and NOR flash memory dominate the solid-state nonvolatile memory (NVM) arena, but they aren't the only technologies that are available. Form factors that don't expose flash memory explicitly are possible targets for replacement with non-flash technologies. For example, non-flash products are cropping up in serial storage.
NONVOLATILE SOLID STORAGE
At one end of the spectrum are one-time programmable (OTP) memo-
ries. These days, OTP memory is normally used for storing security keys
or network IDs. It is implemented using a range of technologies such as
fuse, antifuse, and floating gates. It also can be implemented using standard
CMOS technologies.
Moving up the NVM scale are a number of multi-time programmable
(MTP) memory technologies that can write hundreds or thousands of times
(see "Kilopass Delivers OTP And MTP Memory" at electronicdesign.com).
MTP memories often are used to implement boot code that rarely changes.
Like OTP, MTP is usually implemented using CMOS technologies, allow-
ing it to be included with digital logic.
Floating-gate EEPROM has been commonly used for data storage. Its
ability to write a single byte plus its good endurance and data retention
properties have made it popular, but flash technologies outclassed it in
density. EEPROM emulation is often offered as a feature of some flash implementations; it hides flash's block-erase requirements so an individual byte can be written.
Other nonvolatile technologies keep bumping the edges of flash dominance, including magnetoresistive RAM (MRAM), ferroelectric RAM (FRAM), phase change memory (PCM), and other up-and-coming NVM technologies (see "Magnetic Cores To MRAM: Nonvolatile Tipping Point?" at electronicdesign.com). These technologies have better overall performance figures than NAND and NOR flash in areas including write speed, voltage requirements, lack of a page-erase cycle, long-term endurance, data retention, and scalability.
These technologies started targeting niche markets where their higher
costs, at least initially, werent as much of an issue and their advantages
were significant. They're even giving SRAM and DRAM a run for their
money.
The Texas Instruments 16-bit MSP430FR57xx family boasts up to 16 kbytes of FRAM for both data storage and program storage, where a family would otherwise typically have a mix of SRAM, flash, and EEPROM storage. A single memory approach reduces the number of stock-keeping units (SKUs) and simplifies developers' jobs, since they no longer need to juggle RAM requirements with program storage.
These alternative NVM technologies will be found in more designs in the
future. But for now, flash memory is the dominant NVM technology.
FLASH TECHNOLOGIES
Flash memory implementations are divided into NAND and NOR imple-
mentations with a host of variations from different vendors. In general, they
employ a floating-gate transistor. The two approaches indicate how the
transistors are connected and used rather than incorporating the transistors
as part of digital logic as with an FPGA or custom logic.
NOR flash transistors are connected to ground and a bit line, enabling
individual bits to be accessed. It provides better write endurance than
NAND flash. NOR flash is typically used where code and data may exist.
Microcontrollers with on-chip flash normally incorporate NOR flash.
NAND flash transistors are generally connected in groups to a word line. This allows a higher density than NOR flash. NAND flash is typically used for block-oriented data storage. NAND flash can be less reliable than NOR from a transistor standpoint, so error detection and correction hardware or software is part of NAND storage platforms. NAND is typically used for high-capacity data storage.

1. Flash memory comes in a range of form factors, including SecureDigital (a), MicroSD (b), Sony Memory Stick (c), Compact Flash (d), and mSATA (e). They typically employ NAND flash storage.
Flash memory uses an erase-write
cycle. The erase essentially sets the
memory to all 1s. Writing sets bits to 0,
and it's possible to write different data
as long as existing 1s are changed to 0s.
Flash file systems can take advantage
of this feature because it permits opera-
tions to be performed without a long
and electrically expensive erase cycle.
NAND flash always works at the block
level, while NOR normally has finer-grained access.
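A tiny model of that erase/write behavior: an erased byte reads as all 1s, and a write can only clear bits, so the stored result is the bitwise AND of the old contents and the new data.

```python
ERASED = 0xFF  # an erased flash byte reads as all 1s

def flash_write(current: int, data: int) -> int:
    """A write can only change 1s to 0s, so the result is current AND data."""
    return current & data

byte = ERASED
byte = flash_write(byte, 0b1010_1111)   # first write after erase: stored as-is
byte = flash_write(byte, 0b1010_0011)   # allowed: only clears additional bits
print(bin(byte))                        # 0b10100011
# Writing a value that needs a 0 -> 1 change (e.g., 0xFF again) requires an erase first.
```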
Flash memory started with single-
level cell (SLC) data encoding where each storage transistor
encoded a 1 or a 0. Multi-level cell (MLC) flash normally refers
to the ability to store 2 bits of information per cell instead of
one. Everything is analog at the transistor level, but its simpler
to build a two-level detection circuit than it is to build a four-
level detection circuit required for MLC flash.
Likewise, programming an MLC cell requires the ability to
generate four distinct levels. Triple-level cell (TLC) flash takes
this a step further, packing 3 bits or eight levels into a single
storage cell, like Micron's 3-bit, 34-nm NAND flash memory
chip (Fig. 2).
The obvious advantage to MLC or TLC storage is higher
densities. The tradeoff is usually in performance, especially in
terms of endurance.
The typical SLC NAND flash has a write endurance on the
order of 100k cycles, while SLC NOR flash is on the order of
1M cycles. MLC flash cuts this by a factor of 10, and TLC cuts
it even further. Technology continues to improve these num-
bers. SLC has better write endurance, while MLC and TLC
will be more cost efficient.
Flash system lifetime depends on a number of factors,
including how it's managed. Unmanaged flash storage has a
problem if one area wears out, which occurs when a write fails
to store the proper information. Error detection systems can
help determine when this happens, but once it does the device
is usually worthless. Worse, its failure could cause significant
problems. This is why devices such as microcontrollers with
built-in flash storage that do not track wear rely on NOR flash
with its higher write endurance characteristics.
Several techniques can be used to improve the overall sys-
tem lifetime, such as wear leveling. This approach requires
the ability to remap the location of information. It works best
with a block-oriented device, although it could be applied with
a block size of a single word. There is overhead to implement
wear leveling, so large block sizes will be more efficient.
Wear leveling distributes writes across the storage device.
The system's lifetime then can be viewed as the system's total
write capacity rather than the maximum for a single block. Wear
leveling requires the ability to track block write usage and to
record and utilize this information. Defects often can reduce a
block's lifetime to less than its recommended write lifetime.
In this case, the remapping mecha-
nism can be used if the memory is over-
provisioned. Extra blocks or sectors are
common on hard-disk drives, and the
same technique applies to flash memory.
The only difference is that an extra block
will be used if an uncorrectable error is
detected in a regular block.
If wear leveling is used, then typically
all blocks are part of a pool. If the system
is implemented in software, it may also
be possible to select the logical device
size based on the desired lifetime of the
system. A smaller logical size provides
more extra blocks.
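A toy model makes the bookkeeping concrete. The sketch below is an illustrative flash-translation-layer fragment, not any product's algorithm: logical blocks are remapped onto the least-worn physical block from an over-provisioned pool, so repeated writes to one logical block are spread across many physical ones. The block counts are arbitrary.

```python
# Toy sketch of dynamic wear leveling with over-provisioning. Real flash
# translation layers also handle garbage collection, bad-block tables, and
# power-loss recovery.
class WearLeveler:
    def __init__(self, logical_blocks: int, physical_blocks: int):
        assert physical_blocks > logical_blocks  # spares = over-provisioning
        self.erase_counts = [0] * physical_blocks
        self.mapping = {}                        # logical block -> physical block

    def write(self, logical: int) -> int:
        in_use = set(self.mapping.values())
        free = [p for p in range(len(self.erase_counts)) if p not in in_use]
        target = min(free, key=lambda p: self.erase_counts[p])  # least-worn spare
        self.erase_counts[target] += 1           # each rewrite costs one erase here
        self.mapping[logical] = target
        return target

ftl = WearLeveler(logical_blocks=4, physical_blocks=8)
for _ in range(1000):
    ftl.write(0)                                 # hammer a single logical block
print(max(ftl.erase_counts))                     # wear is spread, not 1000 on one block
```

The same structure shows why a smaller logical size helps: shrinking the logical device leaves more spare physical blocks to absorb the writes.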
Other technologies like FRAM,
MRAM, and PCM don't suffer from the same write endurance
issues as flash. But techniques such as memory over-provision-
ing and remapping may still apply, especially in larger devices
where other errors such as hardware defects may be common.
FLASH SOFTWARE AND CONTROLLERS
Controlled access to flash memory allows software to ignore
many of the challenges of supporting flash from erase require-
ments to write endurance. Where and how this control is imple-
mented varies greatly.
Software flash file systems are one way developers deal with
raw flash. These systems are device drivers that have access to
the flash chip interface. The driver handles all the flash chores
like error detection, wear leveling, and bad block remapping
transparently with respect to the operating system and applica-
tions. It may utilize part of the flash storage for internal tables,
and it may account for flash erase and write characteristics.
The driver may provide some level of file and directory man-
agement, or it may simply present a logical, low-level block
device. There are advantages to both approaches, and the
choice depends upon the application environment.
A block level interface is normally provided if a hardware
approach is taken. A hardware implementation can also incor-
porate a more robust error correction and mapping system
because of hardware acceleration that would normally be
unavailable to a software implementation. Initially, there were
many flash controller companies, but they have been snapped
up by flash memory companies looking to provide a more inte-
grated solution.
Placing the flash memory behind a hardware controller
does a number of things. For example, it can simplify the
device interface, provide more advanced features such as pow-
er reduction, including various sleep modes, and implement
hybrid memory systems.
Hybrid systems mix memory types in the same package. This
approach allows block devices like NAND flash to be addressed
at a byte or word level by adding RAM to the mix. Samsung's
OneNAND mixes SRAM with its NAND flash controller (see
"The Storage Hierarchy Gets More Complex" at electronicdesign.com).
This allows the system to be used as program storage
with blocks being cached in the SRAM as required.
2. Micron's triple-level cell (TLC) flash memory
stores 3 bits of data in each transistor.
RAM is also faster than flash, especial-
ly for writes. It doesn't suffer from flash's
write endurance limitations either. And,
RAM isn't restricted to block access. A
hybrid system can provide many of the
advantages of flash as well as those of
RAM when it's used as a general cached
system. Data is flushed from RAM to
flash as necessary since there is usually
more flash memory than RAM in these
kinds of designs.
Hybrid systems can get even more
complex as demonstrated by Seagate's
Momentus XT hard drive (see "Seagate
Delivers 2nd Generation Hybrid Hard
Drive" at electronicdesign.com). This
storage system mixes three types of stor-
age: DRAM, SLC flash, and rotating
magnetic storage. It has a SATA inter-
face, so there's a SATA controller in
addition to controllers for the flash and
hard-disk drive. This is completely trans-
parent to users.
The use of hardware controllers for
flash memory also enables designers to
add other functions such as security and
encryption to the mix. Hardware accel-
eration benefits these types of features
as well.
Standardizing the flash interface would
definitely make a system designer's job
easier. The Open NAND Flash Interface
(ONFI) Working Group has been doing
this type of work, releasing the ONFI
3.0 specification in 2011. The spec is
designed to deliver 400 Mtransfers/s
with double data rate (DDR) transfers.
Its Toggle Mode 2.0 optionally employs
differential signaling. ONFI addition-
ally specifies chip-level form factors, but
flash storage covers a very wide range of
form factors.
FLASH FORM FACTORS
Form factors for small serial flash
devices vary widely. There are three-pin
devices that support the 1-Wire protocol
as well as a wide range of devices that
support I2C and SPI. Quad SPI (QSPI)
NVM devices increase the number of
bits transferred by a factor of four, and
there are even microcontrollers that can
execute programs directly from QSPI
serial memory devices like NXP's
LPC1800 family (see "Cortex M3 Can
Run From Quad SPI Flash" at electron-
icdesign.com).
Storing programs in serial flash is
not uncommon. Most PCs have their
BIOS stored in serial flash memory. The
chip boot loader copies this program
into RAM where it is executed. NXP's
LPC1800 reads the memory an instruc-
tion at a time.
Serial memories were one of the first
places where other technologies like
FRAM and MRAM were used. Serial
memories often contain other subsys-
tems such as temperature sensors and
real-time clocks (RTCs). Some RTCs
even utilize the memory for storing time-
stamp information.
JEDEC e-MMC (embedded multi-
media card) form-factor chips like San-
Disk's iNAND use the same serial inter-
face as the removable, seven-pin MMC
form factor (Fig. 3). The advantage for
developers lies in having the same inter-
face for fixed and removable storage.
The seven-pin MMC device fits into
the same slot as nine-pin SD and nine-pin
SDIO devices, so I/O devices can reside
on the card. The SD has the same pinout
as MMC with two extra pins added near
the outside edges. The MMC interface is
essentially SPI with SD being QSPI. The
11-pin miniSD and eight-pin microSD
cards use the same type of interface but
in a smaller package. The transfer rate of
these serial devices is 832 Mbits/s.
Removable flash storage shows up
with USB, SATA, and SAS interfaces
as well. SAS tends to be used only
on drives, while SATA is found on
disk-drive form-factor flash drives as
well as embedded devices like Viking
Technology's SATA Cube 3 (Fig. 4). The
SATA Cube 3 is a stack of circuit boards
with flash memory and a controller. More
boards mean more storage.
3. SanDisk's iNAND implements JEDEC's
e-MMC interface.
On-board SATA devices also include
standards such as mSATA and Slim
SATA modules. SATA interfaces provide
significantly higher throughput com-
pared to SPI/QSPI used with media like
SD cards. Larger SATA flash storage can
be found in 1.8-, 2.5-, and 3.5-in. hard-
drive form factors.
IDE-based Compact Flash storage is
still a common feature on many embedded
motherboards. This has been changing as
microcontrollers have moved from IDE
and PCI to SATA and PCI Express. Still,
Compact Flash is found in many mobile
devices like digital camcorders, although
cameras tend to utilize SD-style cards.
USB flash drives have effectively
replaced CDs, DVDs, and floppy disks.
The first USB 1.x flash drives were tiny
in capacity compared to today's average
size. These days the top-end platforms
are massive and run USB 3.0 (see "USB
3.0: A Tale Of Two Busses" at electron-
icdesign.com).
Capacity and speed aren't the only
things that have been changing with USB
flash drives. Additional functionality,
especially in security, is more common.
For example, Apricorn's Aegis Secure
Key has a built-in keyboard for entering
a security code, preventing key-logging
viruses from capturing the code (Fig. 5).
It works with any operating system.
Most other security-related solutions
use a device driver or application that runs
on the host and uses the host for entering
any decode key. The Aegis Secure Key
has an admin and user password. These
features are used to decode a key that
encrypts and decrypts data stored in the
flash memory.
USB flash drives are normally used
for portability, but they have also found
a home inside embedded devices. Many
motherboards have an internal Type A
connector. Most motherboards only have
Type A connectors on the back panel.
Some devices like Eurotech's Helios
Edge Controller only have USB inter-
faces for flash storage and peripheral
interfaces (see "Hands-On Eval Of Euro-
tech's Helios Edge Controller" at elec-
tronicdesign.com).
USB headers are also common on
motherboards. Theyre used for addition-
al external USB interfaces via cables and
backplane connections. They can also be
used for USB storage devices like those
from Swissbit (Fig. 7). The Swissbit USB
Flash Module plugs into standard nine-
pin USB headers found on most mother-
boards. The mounting hole isn't always
found on motherboards, but it does pro-
vide a rugged solution when the module
can be bolted to the motherboard.
Modules like mSATA and Swiss-
bit's USB Flash Module aren't the only
board-based flash solutions. Flash
memory also can be found in dual-inline
memory module (DIMM) and small-out-
line DIMM (SODIMM) form factors, but
there's no standard for flash-only solu-
tions as there is with DRAM.
On the other hand, several solutions
like Viking Technology's
ArxCis-NV blend DRAM
with flash memory (Fig.
6). The flash memory is
used as a backup to store the contents of
the DRAM when power is lost. A super-
capacitor can provide sufficient power to
perform the copy operation.
4. Viking Technology's SATA Cube 3 has a SATA
interface that provides access to flash chips
stacked on multiple circuit boards.
5. Apricorn's Aegis Secure Key lets users enter
the digital key via a keypad rather than the
host's keyboard.
The challenge with using these types of
hybrid memory is that the software needs
to account for the nonvolatile feature. In
the past, computers with magnetic core
memory could be turned off and on with-
out reloading the operating system or
applications. This can save a significant
amount of time and would be very handy
for embedded applications.
These days, main memory is normal-
ly DRAM. Turn off the system and the
contents of this memory are lost, so the
default recovery process reboots the sys-
tem. The boot program stored in flash
normally remains constant, unlike these
nonvolatile solutions that have stored the
prior contents of the DRAM.
Most of these hybrid solutions target
enterprise systems, but they can be easily
incorporated into embedded applications
because they use standard DIMM sockets
and look like standard DDR2 or DDR3
DRAM to the system hardware.
FLASHY PCI EXPRESS
Bandwidth is one thing flash memory
can use, but many interfaces such as USB
and SATA have restrictions that prevent
full utilization of the speed of flash mem-
ory. PCI Express is one way to get data
moving quickly.
The Non-Volatile Memory Host Con-
troller Interface (NVMHCI) Working
Group developed and manages NVM
Express, which provides an interface to
nonvolatile memory that, at this point,
essentially means flash storage.
SCSI Express is another standard in
the works that will bring flash storage
directly to the PCI Express interface
(see "Storage Standards Move Towards
12-Gbit/s Speeds" at electronicdesign.com). The difference is that
the interface is an SCSI adapter. SAS uses the SCSI command
set, so it effectively defines a standard SAS interface. Conventional SAS
controllers require device drivers from their respective vendors.
SATA Express from the Serial ATA
Organization is a similar standard, except
it provides a SATA interface. Like SCSI
Express, SATA Express could just as eas-
ily deliver hard-disk storage via the inter-
face along with flash storage.
NVM Express and SCSI Express serve
the enterprise. Board and drive standards
with hot-swap support are in the mix.
These platforms may find their way
into embedded systems as they become
more common. They lend themselves to
embedded applications because they pro-
vide a high-speed solution that can reside
on the same board as the processing and
networking hardware.
STANDARDS ORGANIZATIONS
Most of the major flash-related orga-
nizations have already been mentioned,
such as JEDEC, the ONFI Working
Group and the NVMHCI Working Group.
The SD Association is responsible for the
SD card family of removable storage.
Likewise, the CompactFlash Association
handles the CompactFlash standard. T10
handles SCSI and SCSI Express. The
Serial ATA Organization handles SATA
Express.
6. Viking Technology's ArxCis-NV hybrid blends DDR3 DRAM with flash
backup storage in a DDR3 DIMM form factor.
7. Swissbit's USB Flash Module plugs into the
nine-pin USB headers found on most mother-
boards.
DON TUITE | ANALOG/POWER EDITOR dontuitel@penton.com
EngineeringEssentials
DO LEDs HAVE A DARK SIDE?
Today's LEDs for lighting applications promise uncertain lifetimes, flicker migraines, and the dreaded droop. But that doesn't mean they're too good to be true.
LEDs are hot, in terms of market potential if not actual temperature. As incandescent lamp bans
spread around the world, LED lighting seems to have unlimited potential (see the table). However,
the technology has its flaws. How can we state with authority how long they will really last in ser-
vice? What's this flicker issue that keeps coming up? And, why are we suddenly reading about
droop in The Wall Street Journal and The New York Times?
USEFUL LIFE
How long do LEDs last? If they're properly installed and given an efficient thermal path to conduct away
the heat they generate, the answer is generally a long time. The engineering question has become how
long it takes for their light output to fall to some fraction of its original value (Fig. 1). Yet that isn't totally
satisfactory. A procedure that assesses rated life for LEDs the way it is assessed for conventional
incandescents and fluorescents is still evolving.
Last August, the Illuminating Engineering Society of North America (IESNA) released IES TM-21-11,
"Projecting Long Term Lumen Maintenance of LED Light Sources."[1] The document describes how to
take data that was already being measured by an approved process and extrapolate it. The report is avail-
able for $40 (or $28 if you belong to IESNA).
However, TM-21 only applies to specific light source components (package, module, array), not an
entire luminaire. A complete luminaire is a complex system with many other components that can affect
lifetime, such as the driver, optics, thermal management, and housing. The failure of any one of these
components can mean the end of the luminaire's useful life, even if the LEDs are still going strong.
Any meaningful projection of lifetime must account for all of these components and not simply focus
on the LEDs.
When incandescent and fluorescent light bulbs are evaluated, a large and statistically significant
sample is operated until 50% have failed. That point, in terms of operating hours, defines the rated
life for those lamps.[2] That doesn't work for LEDs, which typically don't fail abruptly. Instead, their
light output slowly diminishes over time.
Also, that notion about LEDs lasting a long time means that acquiring real application data on
long-term reliability becomes time-challenging. Moreover, the light output and useful life of indi-
vidual LEDs tend to be influenced more by how much current they're driven by and how hot they
get in the luminaire where they're mounted.
LUMEN MAINTENANCE LIFE AND RATED LIFE
Before explaining how TM-21 is applied, it will be useful to distinguish between maintenance
life, which TM-21 addresses, and rated life, which relies on a procedure that doesn't have the
same authority as TM-21 yet. Again, rated life is used to assess conventional lamps.
Conceptually, the value of lumen maintenance (Lp) derived from test data by TM-21 describes
the number of hours of operation over which the LED light source will maintain a certain per-
centage, p, of its initial light output. For example, L70 would be the number of hours until an
LED's light output had decayed to 70% of what it was when the LED was new.
For the last several years, the industry has used a test procedure described by IESNA as
LM-80-08 to measure Lp for LED packages, arrays, or modules driven by auxiliary drivers.[3] In
the LM-80 procedure, LEDs are driven with external current sources. Their case temperature
is controlled during operation, with measurements made at room temperature.
In more detail, the devices under test are operated at three case temperatures: 55°C, 85°C,
and one other temperature that's selected by the manufacturer. Air temperature must be main-
tained to within 5°C and case temperature to within 2°C. Relative humidity must be less
than 65%.
This environment is maintained for a minimum of 6000 hours (roughly 38 weeks). Data
is collected every 1000 hours. The data collected comprises lumen output, changes in chro-
maticity (color), and any incidents of catastrophic failure (burnouts).
B specs add a target statistical confidence interval. Thus, B50 indicates that no more
than 50% of a sample of LED devices would be expected to have their light output drop
below a target lumen maintenance level. B10 would mean no more than 10% of the sample
fell below that L level within the given time.
LIMITS OF LM-80
LM-80 is a test procedure only. Deliberately, it does
not include a way of getting from those recorded test
results to any values of Lp for the devices under test.
After the industry agreed on LM-80, it still needed
pass/fail criteria, or a way of graphing results so peo-
1. An array of Philips LED bulbs like this success-
fully completed 18 months of field, lab, and product
testing to meet the rigorous requirements of the U.S.
Department of Energy's L Prize competition. Ironically,
until last year, the LED industry had no procedure for
extrapolating test data to assign actual lifetime values to
specific products.
ple could make sense of the data. No curve-fitting methods
were recommended for extrapolating from the data to predict
L70 values.
Sample sizes and the values or even the number of drive cur-
rents were left up to whoever created any particular test. There
was even a problem with determining what kinds of LEDs the
data applied to, because there were no criteria for what LED
package changes would require new testing.
This was particularly critical because the industry is constant-
ly innovating packaging to improve the heat-flow characteristics
of the devices. Because LEDs do not shed heat by radiation, like
incandescent lights, the effectiveness of packaging in providing
a thermal path from the LED junction to heatsinks and the ambi-
ent environment can have significant effects on Lp.
Some of these issues were addressed early. The U.S. Envi-
ronmental Protection Agency (EPA) introduced some stan-
dardization for testing residential and non-residential indoor
and outdoor lamps. It required LM-80 testing be conducted by
labs accredited by the National Institute of Standards and Tech-
nology's (NIST's) National Voluntary Laboratory Accredita-
tion Program (NVLAP).
For each combination of current and external temperature,
the EPA required a minimum of 25 samples. To pass, after 6000
hours, the value for LM had to be better than 91.8% for products
intended for residential indoor use or better than 94.1% for
non-residential and residential outdoor use. That clarified some
points, but everybody was waiting for IESNA TM-21.
TM-21 DETAILS
The new document specifies precisely how to extrapolate
the LM-80-08 lumen maintenance data (Fig. 2). Here's how it
works:
• For each unit in the data set, the measured light output at the start of the test is normalized to a value of 1.
• At each point where light output is measured, the normalized data for all units is averaged. (In other words, the results depict the average behavior of the whole set of units.)
• All data from the start of the test to 1000 hours is discarded.
• If the test stops at 6000 hours, the average lumen maintenance data points from 1000 hours to 6000 hours are fit to a simple exponential extrapolation model using a least-squares curve fit.
• If the test runs for 6000 to 10,000 hours, only the last 5000 hours of data are used for the extrapolation.
• For tests that run more than 10,000 hours, the data points from the last 50% of the total measurement time are used. However, if the last 50% of the total measurement time is not an integer multiple of 1000 hours, take more than 50% until the data comes out to an integral multiple of 1000 hours.
• The times-six rule, which caps any projection at six times the total test duration, is intended to limit the length of lumen maintenance predictions.
If that description is too concise to really understand, don't
worry. In January, the EPA made available the Energy Star
TM-21 calculator.[4] Its availability makes it possible for users
to request LM-80 data sets from LED vendors and interpolate
(for example) from test values at 55°C and 85°C to obtain
lumen depreciation values for 75°C.
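The following is a minimal numerical sketch of the extrapolation steps listed above, not the official Energy Star calculator. The LM-80 readings are invented, only one case temperature is handled, and no uncertainty analysis is done; it simply fits the averaged, normalized data from 1000 hours onward to an exponential, solves for L70, and applies the times-six cap.

```python
# Rough sketch of the TM-21 extrapolation described above (one temperature,
# made-up LM-80 data, no confidence-interval handling).
import numpy as np

hours = np.arange(0, 7000, 1000)            # readings every 1000 h out to 6000 h
readings = np.array([                        # one row of light output per unit
    [100.0, 99.1, 98.4, 97.8, 97.1, 96.6, 96.0],
    [101.0, 100.0, 99.2, 98.5, 97.9, 97.2, 96.7],
])

normalized = readings / readings[:, :1]      # each unit starts at 1.0
avg = normalized.mean(axis=0)                # average across the whole sample

keep = hours >= 1000                         # discard the first 1000 hours
t, phi = hours[keep], avg[keep]

# Least-squares fit of phi(t) = B * exp(-alpha * t) on the log of the data.
slope, intercept = np.polyfit(t, np.log(phi), 1)
alpha, B = -slope, np.exp(intercept)

L70 = np.log(B / 0.70) / alpha               # hours until output decays to 70%
L70 = min(L70, 6 * hours[-1])                # times-six rule caps the projection
print(f"Projected L70 = {L70:.0f} hours")
```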
RATED LIFE
The best reference for understanding the difference between
TM-21 lumen-maintenance life and rated life is an article by
Jianzhong Jiao of Osram Opto Semiconductors in LEDs Maga-
zine, "Understanding the Difference between LED Rated Life
and Lumen-Maintenance Life."[5]
To explain the difference, Jiao refers readers to ANSI/IES
RP-16, which describes the process for consistently deter-
mining the life value for conventional lamp types. In RP-16,
rated life is designated Bp and expressed in hours, where p is a
percentage. Thus, a B50 of 1000 hours means that 50% of the
tested products lasted 1000 hours without failure.
B50 is also known as the product's rated average life. For
example, if a product has a B10 rated life of 1000 hours, 10%
of the tested products failed within 1000 hours and could
be compared favorably to a product with a B50 rated life of
1000 hours. "While Bp life is a statistical measure, Lp life is a
defined durability measure," Jiao says.
Bp life testing, then, requires a large and statistically mean-
ingful sample size. There is no similar requirement for Lp life
testing. The catch, Jiao notes, is that when LM-80 test data is
used to make lumen-maintenance projections per TM-21, the
sample size will affect the uncertainty of the projection. Conse-
quently, a smaller sample size will lead to shorter projected life
to increase the statistical certainty.
With that caveat in mind, the first thing that must be defined
to provide a sensible basis for LED rated life estimates is what
constitutes a failure of an LED that has lost luminance but
hasn't burned out.
For example, Jiao says, failure might be defined as when
the light output of an LED reaches 70% or lower of the initial
light output (including if the LED's light output is zero). In
other words, for a given period of time, if an LED produces
insufficient light or no light, the LED is considered at failure.
That would make it possible to combine a new statistical
measure with the defined durability measure. Jiao suggests this
would be a BpLp value. "If an LED light source claimed to have
2. The extrapolation process described in IES TM-21-11, "Projecting Long
Term Lumen Maintenance of LED Light Sources," provides a bridge from
LM-80 test results to predictions of lumen maintenance.
B50L70 of 30,000 hours, then 50% of tested samples should
have a lumen-maintenance life of 30,000 hours," Jiao says.
To support that, Jiao recommends integrating the statistical
failure measurement with lumen-maintenance measurements
during the life test. This would require a large enough LED
sample size to be statistically meaningful, as well as additional
tracking and recording of sample behaviors. A key point is how
long the testing would continue.
"Instead of stopping at some arbitrary multiple of 1000
hours, when 50% of the tested samples reached a light output
equal to 70% of initial lumens, including the samples that
failed to produce light, then B50L70 (in hours) [would be]
obtained," Jiao says.
Jiao acknowledges a practical problem with that. It's reason-
able to expect B50L70 values to turn out to be on the order of
30,000 hours, so we really need a way to make a projection
based on shorter testing periods. Fortunately, LED makers
have already figured much of this out. They have taken two
approaches.
One approach carries out LM-80 testing on large samples
recording both light-output changes and failures. The data is
then fitted into a mathematical model with a statistical-certain-
ty band. By analyzing the lumen-maintenance projection curve
along with the associated sample distribution bandwidth, it's
possible to project an estimated B50L70 life.
Alternatively, manufacturers have always tested for real fail-
ures (the light goes out) separately from official LM-80 testing.
Infant mortality is fundamentally a manufacturing-process
problem, and process control is a key to profits.
What's needed is a way to combine the data from both types of
testing in a way that everybody agrees is fair. Then, using TM-21,
the lumen-maintenance projection can be established, and the
data collected in the accelerated-failure-modes test could be
modeled with a different mathematical expression, with the rated
life projected by mathematically combining both models.
That's Jiao's and Osram's recommendation. Before the
industry establishes a recommendation for a standard practice,
though, LED integrators may need to request more testing and
modeling information from the manufacturers regarding the
statistical failures of LED light sources.
FLICKER
The light output of devices driven by ac can flicker at twice the
line frequency, at harmonics of that frequency, and sometimes
at the fundamental of the line frequency. Flicker generally isn't
observed in incandescent bulbs because of their thermal inertia.
However, flicker can be observed with fluorescent tubes and
with cold cathode fluorescent lamp (CCFL) and LED back-
lighting of video displays. Medical research associates flicker
with migraines and epilepsy in a segment of the population.
For LEDs, the solution is to clean up the output stages of
driver circuits. Scott Brown, senior vice president of market-
ing at iWatt, believes upcoming European regulations might
become part of IEC 61000-3-2, the European standard for
power-factor correction (PFC) in ac-dc supplies.
Brown agrees with Matt Reynolds, applications manager for
solid-state lighting at Texas Instruments' Silicon Valley Analog
Division (the former National Semiconductor), and Suresh
Hariharan, applications director at Maxim Integrated Products:
there is something serious behind the issue, though the
problem can (and should) be dealt with at a small cost delta.
Hariharan says flicker comes down to legacy triac dimmers
and the way LED drivers turn their chopped ac cycles into the
pulse-modulated dc that regulates the light output of the device.
A properly designed dimmable driver, all three companies agree,
has three stages: an ac-dc stage and two dc stages, the final one
of which pulse-modulates the current to control light output.
Proper design also demands power-factor correction
(PFC) in the ac-dc stage to keep the harmonics of the ac line
frequency off the power lines.[6] Yet not all dimmer-compatible
drivers provide PFC, Hariharan says.
Plenty of companies around the world make generic LED
light bulbs. The driver electronics all fit into the base of
the bulb, so nobody knows whose electronics are in there. If
a copycat company wants to save a few cents on the bill-of-
materials and use a cheaper driver chip, who can tell?
Also, the cheaper drivers might not display any flicker.
That's because the triac in the dimmer in the wall of the build-
ing is often the element where the problem starts. If the triac
switches ON at a different point in the first half-cycle of the
ac waveform than it does in the second half-cycle, a series of
harmonics is produced that (among other things) shows up as
flicker at the ac line frequency.
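A quick numerical experiment, using assumed firing angles, shows the mechanism: chop a sine wave symmetrically and the delivered power ripples only at twice the line frequency, but fire the two half-cycles at different angles and a component at the line frequency itself appears. This is only an illustration of the math, not a model of any driver.

```python
# Numerical illustration (not a driver design): asymmetric triac firing puts
# a line-frequency component into the delivered power, i.e. flicker at 60 Hz
# instead of only 120 Hz. The firing angles below are arbitrary assumptions.
import numpy as np

f_line = 60.0
fs = f_line * 400                              # 400 samples per line cycle
n = int(fs)                                    # exactly one second of samples
t = np.arange(n) / fs
v = np.sin(2 * np.pi * f_line * t)

phase = (2 * np.pi * f_line * t) % (2 * np.pi)
fire_pos, fire_neg = 0.4 * np.pi, 0.6 * np.pi  # different angle per half-cycle
conducting = np.where(phase < np.pi, phase >= fire_pos, (phase - np.pi) >= fire_neg)

power = (v * conducting) ** 2                  # instantaneous delivered power
spectrum = np.abs(np.fft.rfft(power)) / n      # 1-Hz bins, so index = frequency
print("60 Hz component:", round(spectrum[60], 4))    # nonzero -> line-rate flicker
print("120 Hz component:", round(spectrum[120], 4))  # the usual double-line ripple
```

Setting the two firing angles equal drives the 60-Hz term to zero, which is why the asymmetry of the legacy dimmer, not the LED itself, is often the root of the visible flicker.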
What to do about that depends on the driver design. But
the ultimate solution, Hariharan believes, is more expensive
dc circuitry in the delivery stage. Encouraging this requires a
standards-based approach. That may be the IEC in Europe or
the IEEE in North America.
ENTER THE IEEE
This is where IEEE Project Authorization Request (PAR)
1789 enters the picture. The standards body is working on
3. With conventional LEDs, light output does not increase linearly with
current. Something happens with the recombination of electron-hole pairs
that normally results in the emission of a photon. A phenomenon called
Auger scattering, in which an electron is generated instead of a photon, is
presently considered the best candidate for an explanation. This is leading
at least one company to pursue GaN-on-GaN as a possible solution.
P1789, tentatively titled "Recommended Practices of Modulat-
ing Current in High Brightness LEDs for Mitigating Health
Risks to Viewers." Brad Lehman of Northeastern University,
the standard's chair, can be reached at lehman@ece.neu.edu.
The first of the group's efforts has been released for public
comment.[7] The report has been out for more than a year, so
there's no urgency. But that doesn't mean the document is
uninteresting. It includes detailed references to and summaries
of multiple studies of the effects of flicker on humans who are
exposed to it through fluorescent lighting.
For instance, photosensitive epilepsy is more common than
one might think, affecting about one in 4000 individuals,
according to the group. Factors that may combine to affect the
likelihood of seizures include flash frequency in the range of 3
to 65 Hz, and especially in the range from 15 to 20 Hz. That's
why line frequency fundamentals (50 or 60 Hz, depending on
country) are important and why asymmetric behavior of the
external triac controller is significant.
Deep red flicker and alternating red and blue flashes may
be particularly hazardous, the group notes. Bright flicker can
be more hazardous when the eyes are closed, partly because the
entire retina is then stimulated.
DROOP
Droop refers to the phenomenon in LEDs where more cur-
rent produces more lumens of output only up to a point, beyond
which lumen output no longer increases linearly with increas-
ing current (Fig. 3). This phenomenon in LEDs has always
been hard to explain.
The quantum process by which the generation and recombi-
nation of electron-hole pairs causes photon emission doesn't
behave nicely. Beyond a certain current, the recombinations
apparently produce another electron, instead of a photon.
Semiconductor physicists who deal with LEDs have been
chasing several suspect processes because knowing which
one it is could provide a handle for dealing with droop. The
prime candidate today is Auger scattering, named after Pierre
Victor Auger, a twentieth century French
physicist.
The droop issue came to the forefront
in February when Soraa, the LED startup
founded by Shuji Nakamura, the inven-
tor of the blue laser and LED, described
the company's GaN-on-GaN (gallium
nitride) LEDs at the Strategies in Light
show. According to Soraa, its LED mate-
rial is 1000 times freer of dislocations
than the usual silicon carbide. Also, its
LEDs can be driven much harder (250
A/cm²) than traditional LEDs without
exhibiting significant droop.
At the same time, Soraa made a full-
court press with the business media,
including The New York Times and The
Wall Street Journal, but did not engage
the technical trade press or issue a press
release about product availability or pric-
ing. GaN-on-GaN bears watching, given Soraas intellectual
property (IP) portfolio and technology team, but its too soon to
speculate about where in the lighting spectrum it will fit.
REFERENCES
1. IES TM-21-11, "Projecting Long Term Lumen Maintenance of LED
Light Sources," http://www.ies.org/store/product/projecting-long-
term-lumen-maintenance-of-led-light-sources-1253.cfm
2. Don Tuite, "High Brightness White LEDs Light The Way To Greener
Illumination," http://electronicdesign.com/content/catpath/com-
ponents/page/2?topic=high-brightness-white-leds-light-the-way-to-
greener+illumination
3. Another test, LM-79, is an approved method for taking electrical
and photometric measurements of solid-state lighting (SSL) products.
It covers total flux, electrical power, efficacy, chromaticity, and inten-
sity distribution and applies to LED-based products that incorporate
control electronics and heatsinks, including integrated LED products
and complete luminaires, but not to bare LED packages and mod-
ules, nor to fixtures designed for LED products but sold without a light
source. Unlike traditional photometric evaluation, which involves sep-
arate testing of lamps and luminaires, LM-79 tests the complete LED
luminaire because of the critical interactive thermal effects. While
LM-79 doesn't address product reliability or life, it does provide for the
important calculation of complete luminaire initial efficacy.
4. The EPA's Energy Star TM-21 calculator can be downloaded at
www.energystar.gov/TM-21calculator.
5. Jianzhong Jiao, "Understanding the Difference between LED Rated
Life and Lumen-Maintenance Life," http://www.ledsmagazine.com/
features/8/10/12
6. Don Tuite, "What's The Difference Between Reactive Power Factor
And AC-DC Supply Power Factor?" http://electronicdesign.com/
article/power/whats-difference-reactive-power-factor-acdc-supply-
power-factor-73569
7. "A Review of the Literature on Light Flicker: Ergonomics, Biological
Attributes, Potential Health Effects, and Methods in Which Some
LED Lighting May Introduce Flicker," http://grouper.ieee.org/
groups/1789/FlickerTR1_2_26_10.pdf
INCANDESCENT LAMP LIMITS (phase-out schedules, 2010 to 2014)
U.S.: 100 W, 75 W, 60 to 40 W
Canada: 100 W (deferred), 75 W (deferred), 60 to 40 W (deferred)
Mexico: 100 W, 75 W, 60 to 40 W
China: 100 W, 60 W
Cuba: Banned
Argentina: Banned
European Union: 100 W, 75 W, 60 W, 40 to 15 W, Banned
U.K.: 100 to 75 W, 60 W, 40 to 15 W, Banned
South Korea: Banned
Japan: Banned
Philippines: Banned
Malaysia: 100 W, 75 W, 60 W, 40 W
Australia: Banned
ROGER ALLAN | CONTRIBUTING EDITOR rsallan@optonline.net
EngineeringEssentials
WARM UP TO
THE LATEST
PCB COOLING
TECHNIQUES
As consumer demands for smaller
and faster products intensify, mammoth
challenges emerge when it comes
to beating the heat generated by
ever-denser printed-circuit boards
(PCBs). As stacked-up micropro-
cessors and logic elements reach into the giga-
hertz range of operation, cost-effective thermal
management becomes perhaps the highest priority
among engineers in the design and packaging and
materials fields.
Adding to those headaches is the current trend of
manufacturing 3D ICs for greater functional den-
sities. Simulations show that a 10°C rise in tem-
perature can double a 3D IC chip's heat density,
degrading performance by more than one-third.
MICROPROCESSOR CHALLENGES
Projections by the International Technology
Roadmap for Semiconductors (ITRS) show that
within the next three years, interconnect wiring in
difficult-to-cool regions of a microprocessor will
An array of advances in ther-
mal-management products and
methodologies, some border-
ing on exotic, arm designers
with essential weapons to battle
the heat.
consume up to 80% of the chip's power. Thermal design power
(TDP) is one measure to assess a microprocessor's propensity
to handle heat. It defines the upper point of the thermal profile
as well as the associated case temperature.
The latest microprocessors from Intel and Advanced Micro
Devices (AMD) feature TDPs ranging from 32 to 140 W. This
number continues to rise in conjunction with increasing micro-
processor operating frequencies.
Large data centers that employ hundreds of computer serv-
ers are particularly susceptible to heating problems. Accord-
ing to some estimates, the servers' cooling fans, which draw
up to 15% of the electrical power, actually become consider-
able heat sources in and of themselves. On top of that, the cost
of cooling a data center can constitute about 40% to 45% of
the center's power consumption. All of these factors create a
greater demand for local and remote temperature sensing and
fan control.
The thermal-management challenge becomes trickier when
it involves PCBs housing multicore processors. While each
processor core in the array may dissipate less power (and thus
less heat) than a single-core processor, the net effect on large
computer servers is the addition of more heat dissipation to a
data centers computer system. Simply put, many more proces-
sor cores run for a given amount of PCB space.
Another thorny issue with IC thermal management concerns
the appearance of hot spots on a chip's package. Heat fluxes
can climb as high as 1000 W/cm², which is a condition that's
difficult to track.
PCBs play a critical role in thermal management, thus
requiring a thermal design layout. Whenever possible,
designers should keep power components as far away from
each other as possible. Furthermore, they should be kept
away from the PCBs corners, which will help maximize the
amount of PCB area around the power components to facili-
tate thermal dissipation.
It's common for exposed power pads to be soldered to a
PCB. Often, exposed-pad-type power pads conduct about 80%
of the heat generated through the bottom of the IC package and
into the PCB. The remaining heat dissipates through the pack-
age's sides and leads.
HEAT HELPERS
Designers now can seek help via a number of improved heat-
management products. They include heatsinks, heat pipes, and
fans that allow for active and passive convection, radiation,
and conduction cooling. Even the manner of the PCB-mounted
chip's interconnection helps mitigate heat problems.
For example, the common exposed-pad approach used for
interconnecting an IC chip to a PCB may increase heat prob-
lems. When soldering the exposed pad to a PCB, the heat
travels quickly out of the package and into the board. The heat
then dissipates through the boards layers and into the sur-
rounding air.
Thus, Texas Instruments (TI) devised a PowerPAD method
that mounts the IC die to a metal plate (Fig. 1). This die pad,
which supports the die during fabrication, serves as a good
thermal heat path to remove the heat away from the chip.
According to Matt Romig, analog packaging product man-
ager at TI, its PowerStack method is the first 3D packaging
technology to stack high-side vertical MOSFETs. It combines
both high-side and low-side MOSFETs held in place by cop-
per clips and uses a ground potential exposed pad to provide
thermal optimization (Fig. 2). Employing two copper clips
to connect the input and output voltage pins results in a more
integrated quad flat no-lead
(QFN) package.
Heat management for power
devices is an even greater chal-
lenge. Higher-frequency sig-
nal processing and the need
to shrink package size are
pushing conventional cooling
techniques to the brink. Kaveh
Azar, president and CEO of
Advanced Thermal Solutions,
proposes the use of an embed-
ded thin-film thermoelectric
device that includes water-
cooled microchannels.
1. The die pad in Texas Instruments' PowerPAD supports the die during fabrication, thus serving as a good ther-
mal heat path to remove the heat away from the chip.
Azar envisions one solution that minimizes spreading resis-
tance, the largest resistance in the path of heat transfer, with a
forced thermal spreader bonded directly to the microprocessor
die (Fig. 3).
This approach distributes the concentrated heat of a small
microprocessor die to the larger base area of the heatsink,
which transfers the heat to the ambient environment. Such
a built-in forced thermal spreader combines micro and mini
channels in the silicon package. The water flow rate inside the
channels is approximately 0.5 to 1 liter/minute.
Simulation results showed that on a 10- by 10-mm die within
a ball-grid-array (BGA) package, a 120- by 120-mm heatsink
base-plate area yielded a thermal resistance of 0.055K/W.
Using a heatsink material with thermal conductivity equal to or
higher than diamond yielded 0.030K/W.
Paul Magill, vice president of marketing and business devel-
opment for Nextreme Thermal Solutions, also suggests ther-
moelectric cooling, advocating that cooling should start at the
chip level. The company offers localized thermal management
deep inside electronic components using tiny thin-film ther-
moelectric (eTEC) structures known as thermal bumps (Fig.
4). The thermally active material is embedded into flip-chip
interconnects (e.g., copper pillar solder bumps) for use in elec-
tronic packaging.
Localized cooling at the chip wafer, die, and package lev-
els delivers important economic benefits. For instance, in
a data center that employs
hundreds and thousands of
advanced microprocessors,
it's far more efficient than
removing heat with more
expensive and bulkier air-
conditioning systems.
In some devices like LEDs,
a combination of passive and
active cooling techniques
can improve device perfor-
mance and lifetime (Fig. 5).
For example, using a fan
inside a heatsink often will
reduce thermal resistance
to 0.5°C/W, which is a sig-
nificant improvement over
the typical 10°C/W achieved
with passive cooling (heat-
sinking) alone.
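As a sanity check on those numbers, steady-state temperature rise is simply dissipated power times thermal resistance. The snippet below uses an assumed 5-W dissipation; it is a back-of-the-envelope illustration, not a substitute for the detailed simulation discussed next.

```python
# Back-of-the-envelope check of the figures quoted above. The 5-W dissipation
# is an assumption for illustration; junction-to-case resistance is ignored.
def temperature_rise(power_w: float, r_theta_c_per_w: float) -> float:
    """Steady-state rise in degrees C = dissipated power x thermal resistance."""
    return power_w * r_theta_c_per_w

power = 5.0
print(temperature_rise(power, 10.0))   # passive heatsink (10 C/W): 50 C rise
print(temperature_rise(power, 0.5))    # fan-assisted (0.5 C/W):    2.5 C rise
```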
SIMULATE AND SIMULATE AGAIN
Thermal control has always been, and continues to be, one
of the limiting factors to achieving greater IC performance.
With space at a premium in these ever-smaller ICs and their
packages, there's little or no room to help cool them. It has
forced designers to consider exotic cooling techniques and
new, evolving cooling materials.
Nonetheless, the basic premise remains: Designers must pay
more attention to the science of thermodynamics for optimal
cooling solutions. And the entire process should start with
thermal analysis softwarewell before a design is put into
production.
That's where simulation software tools enter the picture.
Products like the Mentor Graphics Flotherm 3D V.9 software
tool help 3D IC designers quantify thermal quantities, enabling
them to address thermal problems as they arise. This compu-
tational fluid-dynamics (CFD) product provides images of
bottleneck (Bn) and shortcut (Sc) fields. As a result, engineers
can identify where and why heat-flow congestion occurs in
their designs.
According to Erich Bürgel, general manager of Mentor
Graphics' mechanical analysis division, innovative Bn fields
show where a designs heat path is being congested as it
attempts to flow from high-junction temperature points to the
ambient point. The Sc fields highlight possible approaches to
create a new effective heat-flow path by adding a simple ele-
ment such as a gap pad or a chassis extrusion.
Flotherm 3D V.9 supports the importing of XML model and
geometry data to enable the softwares integration into data
flows. It also has a direct interface to Mentor Graphics' Expe-
dition PCB design platform. As a result, users can add, edit, or
delete objects such as heatsinks, thermal vias, board cutouts,
and electromagnetic cans for more accurate thermal modeling.
With thermal simulation, designers can accurately predict
the thermal performance of the initial and subsequent designs
without having to build and test a prototype. Design variables
2. TI's PowerStack 3D packaging technology stacks high-side vertical
MOSFETs. It combines both high-side and low-side MOSFETs held in place
by copper clips and uses a ground potential exposed pad to provide ther-
mal optimization.
3. An embedded thin-film thermoelectric device proposed by Advanced Thermal Solutions uses water-cooled
microchannels. A forced thermal spreader that's bonded directly to the microprocessor die minimizes spreading
resistance, the largest resistance in the path of heat transfer. (Source: Cooling High-Power Packages, Kaveh Azar, Advanced
Packaging)
such as the number of heatsink fins, fin thickness, heatsink
base thickness, and thermal resistance of the thermal-interface
materials should be considered.
Proper thermal models are essential for future 3D ICs that
plan to use stacked logic and memory devices consisting of
thin die, which strongly reduces lateral heat spreading. As a
die's thickness shrinks, higher-temperature spots become more
common. Hot spots on the logic die cause local temperature
increases in the memory die, possibly reducing DRAM reten-
tion time.
Researchers at Belgium's Interuniversity Micro Electronics
Center (IMEC) have already proven correct thermal models
for the design of next-generation 3D mixed-stack ICs. These
3D stacks, which closely resemble commercial chips of the
future, consist of IMEC proprietary logic CMOS ICs stacked
on top of commercially available DRAMs. Stacking is accom-
plished with through-silicon vias (TSVs) and micro-bumps.
The research was a collaborative effort between IMEC and
partners Amkor, Fujitsu, Globalfoundries, Intel, Micron, Pana-
sonic, Qualcomm, Samsung, Sony, and TSMC.
IBM plans to use microchannel water cooling for its future
3D IC processors, such as the Power8 processor scheduled for
introduction in 2013 (Fig. 6). Bruno Michel, manager of the
Advanced Thermal Packaging Group for IBM's Zurich, Swit-
zerland research facility, says that energy-efficient, hot-water
cooling technology is part of IBM's concept of a zero-emis-
sions data center. To cool 3D chip stacks, which generate more
heat than a single processor in nearly the same space, water
rather than air was used to reduce energy consumption.
Liquid cooling of CPUs is also performed in the XLR8 GTX
580 GeForce graphics card from PNY Technologies, which
addresses challenging graphics-intensive gaming products.
PNY and Asetek, a specialist in CPU thermal management,
joined forces to produce a product for gaming enthusiasts and
their GPU/CPU cooling systems.
Engineered with a closed-loop system and built with Asetek's
sealed water cooler already attached, the combination design
offers consumers an out-of-the-box, ready-to-install product
that costs $649.99. PNY claims the new system offers up to
30% cooler temperatures, quieter acoustics, and faster perfor-
mance than the standard-reference-designed Nvidia GeForce
GTX 580 graphics card.
4. Nextreme Thermal Solutions offers localized thermal management deep
inside electronic components thanks to tiny thin-film thermoelectric (eTEC)
structures known as thermal bumps. The thermally active material is
embedded into flip-chip interconnects, such as copper pillar solder bumps,
for electronic packaging. (Source: Ensuring Optimal High-Power LED Performance With
Thermal Management, by Jon Domingo, Lumex, ECN, April 11, 2011)
Thermal management via water cooling also is employed in
a wide variety of power devices: thyristors, MOSFETs, and
silicon-controlled rectifiers (SCRs) are just a few. One example
is the XW180GC34A/B developed by Westcode Semiconduc-
tors Ltd., a subsidiary of Ixys Corp. The nickel-plated heatsink
has a 127-mm diameter contact plate, suiting it for press-pack
devices with electrode contacts up to 125 mm in diameter.
Typical heatsink to input water thermal resistance, for flow
rates of 10 L/min., is 4.3K/kW (two coolers plus one semi-
conductor device) and 5.6K/kW (three coolers plus two semi-
conductor devices). The heatsink comes with or without an
integral connecting bus bar.
"Typical applications for the coolers would be mini mega-
watt-power-level devices and high-power rectifiers, as in heavy
industrial applications, or for electric train trackside substations,
as well as in applications in electricity generation and distribu-
tion," says Frank Wakeman, Westcode's marketing and techni-
cal support manager. "The high-efficiency cooling provided
with these coolers enables customers to achieve high-power
density in their systems with much reduced footprint."
6. IBM plans to use microchannel hot-water cooling for future 3D IC pro-
cessors, such as the Power8 processor due to arrive in 2013.
(Figure 6 diagram labels: 3D stack, through-silicon vias, CMOS circuitry, microchannels, processor layer, memory layer, analog circuits, RF circuits.)
5. In LEDs or similar devices, a combination of passive and active cooling
techniques can boost performance and lifetime. For example, using a fan
inside a heatsink often reduces thermal resistance to 0.5°C/W, which is a
considerable improvement over the typical 10°C/W achieved with passive
cooling (heatsinking) alone.