
The minimum feature size on an integrated circuit has continued to evolve at a rapid rate,
decreasing from 8 μm in 1969 to 130 nm today. The rate of evolution can be appreciated by
plotting the minimum feature size (on a logarithmic scale) versus year of first commercial
production, as shown in Figure f. The straight line on this plot indicates the exponential decrease in
the minimum feature size. The exponential change was first quantified by Gordon Moore of Intel
Corporation and is known as "Moore's Law." (Moore's law is so well recognized and frequently cited
that it has even been mentioned in the cartoon "Dilbert" [1].) Along with the rapid decrease in
minimum feature size, the size of the IC chip continues to increase, although less rapidly than the
minimum feature size decreases. Because of the combination of smaller features and larger chips, the
number of transistors on an IC chip increases even more rapidly, doubling every 18-24 months
(Figure g). This rapid increase is made possible by continuing technological development, while the
basic physics of the transistors remains relatively constant. (However, secondary effects that were
less important in large transistors can dominate the behavior of smaller transistors.)
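The straight-line trend on the semilog plot corresponds to a constant fractional shrink per year. As a rough check (a sketch only; the text does not give the exact year for the 130 nm node, so 2002 is assumed here for illustration), the implied annual shrink ratio can be computed from the two endpoints quoted above:

```python
import math

# Endpoints from the text: 8 um in 1969, 130 nm "today".
# Taking "today" as 2002 is an assumption for this illustration.
size_1969 = 8.0e-6        # meters
size_today = 130e-9       # meters
years = 2002 - 1969

# A straight line on a semilog plot means size(t) = size_1969 * r**t,
# so the annual ratio r follows from the two endpoints.
annual_ratio = (size_today / size_1969) ** (1.0 / years)
print(f"implied annual shrink ratio: {annual_ratio:.2f}")
```

Under these assumptions the ratio comes out near 0.88, i.e., roughly a 12% reduction in linear dimension per year.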

Because different manufacturing equipment is often needed to produce circuits with smaller
features, the decrease of feature size is not continuous. Rather, the area of a transistor for each device
"generation" decreases by a ratio that provides enough benefit to justify the cost of new equipment.
Typically the area decreases by a factor of two, so the linear dimension decreases by a factor of 1.4
(i.e., the dimension "scales" by 0.7). Device features of 60 nm are produced today, and features of
20-30 nm have been demonstrated [2].
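The 0.7 scaling factor follows directly from halving the area: if the area of a transistor halves, each linear dimension shrinks by 1/√2 ≈ 0.7. A minimal sketch, starting from the 130 nm figure quoted earlier, shows the sequence of node sizes this rule generates down to the demonstrated 20-30 nm range:

```python
import math

scale = 1 / math.sqrt(2)   # area halves per generation, so length scales by ~0.7
feature = 130e-9           # starting node from the text, in meters

nodes = []
while feature > 20e-9:     # stop once features reach the demonstrated 20-30 nm range
    nodes.append(feature)
    feature *= scale

print([f"{n * 1e9:.0f} nm" for n in nodes])
```

Each entry is 0.7 times the previous one, so every second generation halves the linear dimension.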

Once Moore's law was well accepted, it became almost a self-fulfilling prophecy. By
extrapolating from the past evolution of feature size shown in Figure f, a target value for minimum
feature size is determined for each future year. Projections for feature size and other physical and
electrical characteristics have been quantified in an "International Technology Roadmap for
Semiconductors" (ITRS) [3], which is updated frequently. Semiconductor manufacturers then devote
the resources needed to develop the technology required to produce features of the predetermined
size. In fact, the changes can sometimes exceed the predicted rate of improvement. If the
predetermined size is typical for the industry, then each company tries to develop the technology
more rapidly than predicted to give it a competitive advantage in the market.

Device scaling cannot continue indefinitely, however. It will be limited by two factors. First,
as the device features scale to smaller dimensions, the number of electrons within each transistor
decreases, as shown in Figure h. As the number of electrons n decreases, the statistical fluctuations in
the number (approximately √n) become an increasing fraction of the total, limiting circuit
performance and making circuit design more difficult. Several years ago, these statistical
fluctuations began to impact the performance of analog circuits. Within a few years, similar
fluctuations will influence digital circuit design. Considerably after statistical fluctuations become
important, we will reach the time at which each transistor contains only one electron (perhaps about
2015). At that point, the entire concept of electronic devices must change. A number of different
alternatives are being investigated in advanced research laboratories, but no favored approach has
yet emerged.
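The statistical argument above can be made concrete: if a transistor contains n electrons, the fluctuation is about √n, so the relative fluctuation is √n/n = 1/√n, which grows as n shrinks. A brief sketch (the electron counts are illustrative, not from the text):

```python
import math

def relative_fluctuation(n_electrons: float) -> float:
    """Relative statistical fluctuation in electron number: sqrt(n)/n = 1/sqrt(n)."""
    return 1.0 / math.sqrt(n_electrons)

# As the electron count drops, fluctuations become a larger fraction of the total.
for n in (10_000, 100, 10):
    print(f"n = {n:6d}: fluctuation ~ {100 * relative_fluctuation(n):.0f}% of total")
```

At 10,000 electrons the fluctuation is a 1% effect; at 10 electrons it exceeds 30%, which is why circuit design becomes difficult well before the one-electron limit.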

Second, even in the shorter term with conventional devices, each generation of technology
becomes more difficult and more expensive. The lithography needed to define increasingly small
dimensions is often limiting. Eventually, the cost of technology development and manufacturing
equipment is likely to limit the further evolution of IC technology. A modern IC manufacturing
facility can cost several billion U.S. dollars today, and the cost is continuing to increase. The high
cost of the manufacturing facilities limits the number of companies that can afford to manufacture
ICs. In fact, many companies have the circuits they design manufactured by "foundries" that
specialize in high-volume manufacturing of ICs for other companies. However, even with these
future limitations, the planar process will continue to dominate electronics for a number of years.
The benefits it has provided for computers, communications, and consumer products have led to the
continuing huge investment in research and development and manufacturing facilities. The planar
process is the foundation for the production of silicon integrated circuits and continually evolves to
allow production of increasingly complex circuits. Making the best use of its many degrees of
freedom in understanding and designing devices requires a fairly thorough understanding of the
basic elements of silicon technology. Much of the remainder of this chapter is directed
toward providing such an understanding and a basis for the discussion of devices in the following
chapters.
Evolution of IC technology. (a) First commercial silicon planar transistor (1959) (outer diameter 0.87 mm). (b) Diode-transistor logic (DTL)
circuit (1964) (chip size 1.9 mm square). (c) 256-bit bipolar random-access memory (RAM) circuit (1970) (chip size 2.8 x 3.6 mm). (d) VLSI
central-processor computer chip containing 450,000 transistors (1981). The different functions carried out by the IC are labeled on the
figure (chip size 6.3 mm square). [(a), (b), (c) courtesy of B.E. Deal, Fairchild Semiconductor; (d) courtesy of Hewlett-Packard Co.] (e)
Block diagram of Pentium 4 processor with 42 million transistors (2000); the corresponding chip photo is shown on the book cover.
(Courtesy of Intel Corporation.) (f) Minimum feature size versus year of first commercial production. (g) Another embodiment of Moore's
Law shows that the number of transistors per chip has doubled every 18-24 months for approximately 30 years. (h) Along with decreasing
feature size, the number of electrons in each device decreases. [(f)-(h) adapted from Mark Bohr, Intel; Howard Huff, Sematech; Joel
Birnbaum, Hewlett-Packard; Motorola.]

The minimum surface dimension in a MOSFET process is a key benchmark for the density
of devices that can be built in a process. This dimension, typically the MOSFET channel length, has
decreased continually for more than 35 years as successive generations of MOSFET circuits
were built with ever-increasing numbers of active devices. This trend is most frequently described in
terms of Moore's Law, which was first stated by Gordon Moore in the early 1970s. Moore's Law
predicts that the number of transistors in an integrated circuit will double through advances in
technology and design every 18-24 months. Basic intuition tells us that, as with any rapidly
developing field, a limit will eventually be reached. In the early 1980s, the minimum possible
surface dimension was predicted to be approximately 0.5 μm. Ten years later, the predicted
minimum dimension had decreased to about 0.1 μm and now, in the early 21st century, a frequently
quoted limit is roughly 25 nm.
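The doubling rule stated above is simple to express quantitatively: the transistor count grows by a factor of 2 raised to the number of elapsed doubling periods. The sketch below (the function name is illustrative; the 42-million starting count is the Pentium 4 figure from the caption above) shows how far apart the two ends of the 18-24 month range drift over a decade:

```python
def transistor_count(initial: float, years: float, months_per_doubling: float) -> float:
    """Project a transistor count forward under Moore's Law."""
    doublings = years * 12.0 / months_per_doubling
    return initial * 2.0 ** doublings

# Over ten years the 18- and 24-month doubling rates diverge substantially.
start = 42e6  # Pentium 4 transistor count (2000), from the figure caption
fast = transistor_count(start, 10, 18)
slow = transistor_count(start, 10, 24)
print(f"18-month doubling: {fast:.3g}; 24-month doubling: {slow:.3g}")
```

A 24-month period gives 2^5 = 32x growth per decade, while an 18-month period gives about 100x, which is why the quoted range matters so much for long-term projections.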
