
Preface to the Premier Edition

In the decade since the first edition of this book appeared, the world has changed, and
PCs have not only been part of that change but have caused some of the change. The PC
has grown up, shrunk down, gained importance, and lost status. PCs have grown to have
the power not just to capture our thoughts and imagination but to create their own reality.
They have shrunk so that you can carry the most powerful machines from work to home
and not spend the evening recovering. They have become a major part of international
commerce. And from items of reverence they have entered society as little more than
appliances. All they need is a good coat of white enamel and a cord too short to let you
plug them in and put them where you want them. Some need only the enamel.

Paging through the manuscript of the previous edition, I might have been shocked to see
how out of date it had become in a few years. “Might have been shocked” because, after
almost two decades of chasing after computer technology, nothing comes as a shock
anymore except reaching inside a PC after forgetting to switch off the power. Between
the third and this fourth, "Premier" Edition, however, the PC industry has taken yet
another unforeseen twist, one that leads down yet another path to who knows where. At
least the journey is always fun.

Nowhere in the third edition will you find the word “Internet.” Certainly the Internet was
there, but it hadn't yet been discovered by the umpteen million folk who now log on
every day. And no one, not even its most fervent promoters, would have believed that in
a few short years son-of-Internet, the World Wide Web, would become the single most
important reason people would consider buying a PC.

In this edition, you'll find the magical word “Internet” splashed in nearly every chapter.
After all, no computer book can be authoritative without a good dose of the hottest
buzzwords. Of course, there have been more changes than just the Internet, so I've
captured a whole hive of buzzwords for this edition—and given them surprisingly full
discussions.

While I was adding the new spice, I also diligently expunged a good deal of ancient
history, keeping only enough to put things in perspective. Most of what is now gone is the
stuff that many of us approaching codger-hood would like to forget—words like IBM
(search and replace with Intel and Microsoft hegemony), PS/2, OS/2, megabyte (search
and replace with gigabyte), even DOS. So much of what we once took for granted has
finally—and often thankfully—taken leave.

Some of it is fondly remembered, some not so. As I was working on this revision,
however, I discovered that too much can't be forgotten even if you don't want to be
reminded of it every day.

How to deal with these issues tormented me for a long while, but I persevered, mixing
discussions of new technologies with historic perspective. When my manuscript plopped
down on the publisher's doorstep and cracked the flagstone underneath, however, cries of
anguish reverberated from coast to coast. The publisher moaned over the massive
deforestation needed to make the paper for printing just one copy of the resulting book,
let alone what would be required for the several that they were likely to sell (or at least
report on the royalty statement).

The real issue was not ecological, however. The publisher was kind enough to point out
that my writing itself was a worse pollution than would be caused by clear-cutting the
entire continent. The problem was that, as originally drafted, this Premier edition could
not have been printed in a single volume—not even using bible paper (yes, there is such
a thing, but it's named after a somewhat more worthy work than this).

Another kind of clear-cutting was called for. The publisher sent a whole cadre of editors
to help me—all in nice, white coats and carrying butterfly nets. But no matter how
surgically we wielded our blue pencils, we had to cut enough important stuff that it piled
up knee deep on the floor. We quickly discarded our initial thought—hack away with
abandon because no one reads past page two, anyhow—and sought a solution that
preserved the integrity of the book. The result was two books in one.

What you hold in your hand is the lean-and-mean abridged bible, trimmed considerably
to publishable size. Inside a pocket in the back jacket you'll find a CD containing the big
book (as well as a number of other delights), the entire new edition as originally
conceived (but with the benefit of adept editing. -ed.)

The publisher calls the stuff on CD “Electronic Bonus Sections” to give you the
impression you're getting more for your money. Of course you don't need a Nobel prize
in economics to figure out that you've actually paid for the disk. What you're really
getting is a convenient way of dealing with an overwhelming amount of information.

The division between the paper and electronic versions is hardly random. I've moved
material of mostly historic interest and more detailed and technical data about current
(and upcoming) topics to the CD. In other words, the CD lets you probe further into
matters of particular interest to you. To alert you to such electronic elaborations, the
publisher has scattered emendations throughout the book, dire warnings that look like
this:
See Electronic Bonus Chapter 5 on the CD for a complete section called "System Identification Bytes,"
which includes Table 5.F, "IBM Model and Submodel Byte Values."

The trend at the time the previous edition rolled off my screen was toward platform
independence. Everyone and her sister was announcing a new RISC microprocessor and
all were going to run some new, great operating system, one that grew up somewhere
other than the state of Washington. How could anyone have been so naïve? (Ask about
my love life and you'll know.) But today we do have our choice of operating systems. Of
course, all start with Windows—take your pick, 95 or NT.

Despite all the changes in the world and PCs, I've tried to keep this edition consistent
with the earlier three, retaining the same general layout but spinning off a new chapter on
mass storage interfaces, a subject broad and complex enough to deserve its own
discussion. I've shuffled some chapters around to improve the organization and make my
publisher think I actually did some work on the revision. I've tried to include as much
information as was available at the time this was prepared about all new and important
technologies. Besides that Internet thing, you'll find discussions of all the latest standards
you'll want in your next PC—USB, IEEE 1284, 1394, IrDA, CardBus, Pentium II, MMX,
and enough kinds of RAM to ensure a good slumber even if you don't need to count
sheep.

Look, look, look! Finally this author has gotten it together to deliver his tables and
figures to the publisher with the manuscript, and the publisher got them in the right places
in the text. Some even have the right captions. A few are even useful. More tables than
ever before. More illustrations. And less idle time for the author (as if I had any to begin
with!).

As with any major revision to a successful book, I approached this project happily—you
know, a minor rewrite and the money would pour in. Okay, once you take into account
the publisher's bookkeeping, it's more of a dribble and usually a drought. I had dreams
that chapters could remain essentially intact and I could just add a reference to Pentium
here and there.

After about ten minutes of such revision, I was ripping chapters apart, reorganizing,
purging, and adding hundreds of pages of new material. Here's a quick run-through of
the changes and some other important stuff you'll find inside.

• Chapter 1, "Background," is a basic introduction to PCs and the standards that define
a starting point in PC hardware—for example Microsoft's PC 97 standard that designates
the minimum you'll want for the latest operating system, unnamed as this is written.
(Microsoft has just drafted a new PC 98 standard, so you might guess what the new
operating system may be called. -ed.)
• Chapter 2, "Motherboards," has gotten a good dose of reality, including a discussion
of new technologies and (in the Electronic Bonus Sections) some of the latest commercial
motherboard products.
• Chapter 3, "Microprocessors," of course includes the latest incarnation of Intel's
wonder chips up through the Pentium II and also some surprisingly strong competitors
from AMD and Cyrix.
• Chapter 4, "Memory," shifts emphasis to new technologies and from chips to
modules. You'll find discussions of more memories than at a 50th class reunion.
• Chapter 5, "The BIOS," admits itself to be an exercise in irrelevance, given the rise of
the overwhelming operating system. Of course, trends in BIOS design are all covered, up
through Plug-and-Play and a pointer to ACPI later in the book.
• Chapter 6, "Chipsets and Support Circuits," adds more relevant new technologies to
the dusty circuitry still required for compatibility with machines designed before some
current PC users were born.
• Chapter 7, "The Expansion Bus," puts more emphasis on PCI and CardBus, along
with (in the Electronic Bonus Sections) some of their industrial kin. There's enough about
where we came from to help you see why we're going where we are.
• Chapter 8, "Mass Storage Technology," builds upon its foundation in previous
editions, trading megabytes for gigabytes and mixing in such novel concepts as
rewritable CDs and various shades of RAID.
• Chapter 9, "Storage Interfaces," a new chapter, tackles all the new and some coming
designs, giving just a tip of the hat to old, best-forgotten friends. You'll find SCSI-3,
various shades of EIDE, SSA, and even Fibre Channel in its pages.
• Chapter 10, "Hard Disks," discusses several new technologies and generally tries to
keep up with this fast-spinning technology.
• Chapter 11, "Floppy Disks," covers old ground but leaps ahead to the new 100MB
designs as well.
• Chapter 12, "Compact Discs," reaches into DVD territory and also presents a good
look at CD-R, the likely replacement for both CDs and cartridge hard disks as the
preeminent data exchange medium.
• Chapter 13, "Tape," takes note of the new technologies that are keeping this ancient
medium alive, things like DLT and helical tape formats, reserving discussions of older
technologies to the Electronic Bonus Sections.
• Chapter 14, "Input Devices," updates old technologies, adds a few topics that fell
through the cracks in previous editions (things like joysticks), and even reaches into 3D
systems.
• Chapter 15, "The Display System," still covers the basics of image making but
approaches new issues arising from 3D accelerators and multimedia needs.
• Chapter 16, "Display Adapters," covers all the requirements you'll want in your
display system, including the latest chips.
• Chapter 17, "Displays," illustrates more issues involved with CRTs but puts more
emphasis on LCDs and a new technology called FED, which holds the promise of someday
replacing both CRTs and LCDs.
• Chapter 18, "Audio," looks at everything you'll want to hear from your PC, from
microphones to speakers, with an improved discussion of MIDI as well.
• Chapter 19, "Parallel Ports," covers the new IEEE 1284 standard and the bus-like
ECP and fast EPP.
• Chapter 20, "Printers and Plotters," reflects the latest trends in output technology—in
particular, printers—with an emphasis on lasers and today's high resolution color inkjets.
• Chapter 21, "Serial Ports," not only brings a vastly improved discussion of RS232 but
also includes several new standards—ACCESS.bus, IrDA, USB, and IEEE 1394.
• Chapter 22, "Modems," starts off with familiar modem technology (including the
latest 56K designs) but also emphasizes the new all-digital systems including land line,
cable, and satellite connections. There are a few mentions of the Internet in here, too.
Maybe more than a few.
• Chapter 23, "Networking," outlines the basics you need to know to build a small office
or home network, including an in-depth "grow your own" discussion.
• Chapter 24, "Power," includes discussions of the latest standards from APM and
ACPI to Smart Battery, as well as familiar power protection materials.
• Chapter 25, "Cases," gets physical and discusses the physical aspects of PC
packaging and peripheral installation.

If that's not enough, the CD also includes four appendixes. These serve mostly as
reference material that would have otherwise interrupted the smooth flow of the book
(which has been described as a Level 5 rapids by whitewater rafters). The chosen four
include:

• Appendix A, "PC History," which outlines the history of computers and other
technologies relevant to PCs
• Appendix B, "Regulations," which covers government regulations pertaining to PCs
that you should know about
• Appendix C, "Health and Safety," which discusses issues of health and safety
• Appendix D, "Data Coding," which lists the important data coding systems,
information you need or at least need to be able to find somewhere.

In addition, I've included an updated drive parameter table in electronic form so that you
can quickly find the setup values you need to get an old, even ancient, hard disk drive
working.

My publisher and I have done our best to assure the accuracy of what you find in here. It
falls short of the scholarly mark in missing footnotes, endnotes, and sources. I've got a
bunch of excuses, none too compelling, except that apparatus like that would have put me
even further behind schedule on a book project that's very time sensitive. In any event,
much of what you'll find here is based on original research, conversations with the people
behind the technologies, the promoters of the standards, and hands-on experience. Too
often when I went in search of the literature for more detail, the only references I found
were my own.

That said, you can depend on the names and dates given here. No date is mentioned
lightly; each reflects when a given standard or technology was developed or released and
is specifically identified as such. Where names are too often forgotten, I've made an effort to
put credit where it is due for many of the minor inventions that have made PCs as
successful as they are.

This book has always served two purposes: It's an introductory text to help anyone get up
to speed on PCs and how they work. But once you know what's going on inside, it
continues to serve as the ultimate PC reference. I've tried to update both as well as keep it
relevant in today's vastly changed world of computing. As in previous editions, if you
need an answer about PCs, how they work, how to make them work, or when they began
to work, you will find it answered here, definitively.

-wlr, 18 April 1997

Introduction

Tools define our culture. We aren't so much what we make as what we use to make it.
Even barbarians can make holes with a bow and shaft; we have drill presses, hydraulic
punches, and lasers. More importantly, the development of tools defines civilization. No
culture is considered civilized unless it uses the latest tools. The PC is the tool that
defines today's age and culture.

Once a tool only for initiates and specialists, today the PC has become as common as,
well, all those monitors sitting on office desks and overweight attache cases bulging with
keyboard-screen-and-battery combinations. The influence and infiltration of the PC
stretches beyond comparison with any other modern tool, even beyond the reach of
common metaphors. No office machine is as common; none so well used; none so
revered—and so often reviled. Unlike the now nearly forgotten typewriter that was
restricted to secretaries and stenographic pools, the PC now resides resplendently on once
bare drafting tables, executive desks, and kitchen counters. Unlike fax machines,
calculators, and television sets, the PC doesn't do just one thing but meddles in nearly
everything you do at work and at home. Unlike your telephone, pager, or microwave
oven, the PC isn't something that you use and take for granted, it's something you wonder
about, something you want to improve and expand, perhaps even something that you
would like to understand.

Indeed, to use any tool effectively you have to understand it—what it can do, how it
works, how you can use it most effectively. Making the most of your PC demands that
you know more than how to rip through the packing tape on the box without lacerating
your palms. You cannot just drop it on your desk, stand back, and expect knowledge to
pour out as if you had tapped into a direct line to a fortune cookie factory.

Unfortunately, despite the popularity of the PC, the machine remains a mystery to too
many people. For most people, the only thing more baffling is programming a VCR.
Everyone knows that something happens between the time your fingers press down on
the keys and a letter pops up on the screen, or a page curls out of the printer, or a sound
never heard before by human ears shatters the cone of your multimedia loudspeakers.
Something happens, but that something seems beyond human comprehension.

It's not. The personal computer is the most logical of modern machines. Its most complex
thoughts are no more difficult to understand than the function of a light switch. Its power
comes from combinations—a confluence of ideas, circuits, and functional blocks—each
of which is easy to understand in itself. The mystery arises only when your vision is
blocked by the steel wall of the computer case that blinds you to the simplicity fitted
inside.

This book is designed to be both x-ray vision and explanation. It lets you see what's
inside your PC, helps you understand how it works, and lets you master this masterpiece
of modern technology. Moreover, you learn what you can do to make your PC better,
how you can use it more effectively, how you can match the right components to make it
better suit what you want to do, and how you can get the most speed, the most quality,
the most satisfaction from today's technology.

In home and workplace, the personal computer is today's most technically advanced tool.
With a PC, you can get yourself and your office organized, you can tackle jobs that
would otherwise be too time consuming to try, you can extend your imagination to see
forms your mind cannot make, and you can relieve yourself of the tedium of repetitive,
busy work. Most importantly, the PC is a tool for acquiring information, for learning, for
communicating. It is your primary portal for connecting with the Internet and World
Wide Web.

For some, tools, or the technologies behind them, define the ages of man. Humankind
started in the Stone Age. The apes that came before (and still persist in a few places where
civilization has bowed to nature) pounded with rocks. Our ancestors went further,
shaping rocks to better suit their needs: hammers, points, and knives. The story of
civilization has been much the same during the passing millennia. As we have progressed
from stone to bronze to iron, the aim has been the same: making the tool that best suits
the task at hand, one that's sharper, stronger, and better fits the hand and job.

The story of the PC fits this same pattern. When the PC was introduced, it was like the
rock in the raw. You grabbed it because you could. It was the right size to get your hands
on, clumsy to use, and accomplished a job without any particular elegance. You sort of
pounded away on the thing and hoped for the best.

The coming of metal meant that tools could be molded to their purpose. In the Bronze
Age, then the Iron Age, work was faster and more precise. The modern PC offers the
same kind of malleability. It's a machine made to be molded to your exact needs.
Fortunately, you don't need a forge to fit your PC to the function you desire. The physical
part of the change is simple; the hard work is all mental—you have to know your PC to
use it. You have to understand this most modern of tools.

Only a poor workman blames his tools when his job goes awry. But what kind of
workman doesn't understand his tools at all, doesn't know how they work, and cannot
even tell a good tool from a bad one? Certainly you wouldn't trust such a workman on an
important job, one critical to your business, one that might affect your income or budget,
or take control of your leisure pursuits. Yet far too many people profess ignorance about
the PC, a tool vital to their businesses, their hobbies, or their households.

The familiar PC is the most important business tool to emerge from electronic
technology. It is vital to organizing, auditing, even controlling, most contemporary
businesses. In fact, anywhere there is work to do, you likely will find a PC at work. If you
don't understand this modern tool, you will do worse than a bad job. Soon, you may have
no job at all.

Unlike a hammer or screwdriver, however, the personal computer is a tool that frightens
those uninitiated into its obscure cabala. Tinkering with a personal computer is held in
the same awe as open heart surgery, except that cardiovascular surgeons, too, are as
apprehensive as the rest of us about tweaking around the insides of a computer. But
computers are merely machines, made by people, meant to be used by people, and
capable of being understood by people.

As with other tools, they are unapproachable only by those inexperienced with them. An
automobile mechanic may reel at the sight of a sewing machine. The seamstress or tailor
may throw up his hands at the thought of tuning his car. The computer is no different. In
fact, today's personal computer is purposely designed to be easy to take apart and put
back together, easy to change and modify, and generally invincible except at the hands of
the foolish or purposely destructive. As machines go, the personal computer is sturdy and
trouble free. Changing a card in a computer is safer and more certain of success than
fixing a simple home appliance, such as a toaster, or changing the oil in your car.

Times change, and so has the personal computer. No longer is it an end in itself. It is a
means to an end. It takes care of the office work, it lets you send and receive e-mail, it
lets you explore online, it even plays games. In the future, the PC will likely be the
centerpiece of both your home entertainment system and your communications systems.

Although the PC has gained tremendously in power in the last few years, its technology is
more accessible than ever. New developments promise to make your next PC easier to set
up, too. A far-reaching new standard called Plug-and-Play can make upgrading your
system as easy as plugging in new components—no adjustments, no configuration, no
brains necessary. But even these innovations don't mean that you can take full advantage
of your investment in your PC without understanding it and the underlying technology.

If anything puts people off from trying to understand PCs, it is the computer mystique.
After all, the computer is a thinking machine, and that one word implies all sorts of
preposterous nonsense. The thinking machine could be a devious machine, one hatching
plots against you as it sits on your desk, thinking of evil deeds that will cause you endless
frustration. Although you might attribute Satanic motivation to the machine that swallows
up a day's work the instant your lights flicker, you would be equally justified in
attributing evil intent to the bar of soap that makes you slip in the shower.

Even if you don't put your PC on an altar and sacrifice your firstborn program to it, the
image of a thinking machine can mislead you. A thinking machine has a brain, therefore
opening it up and working inside is brain surgery, and the electronic patient is just as
likely to suffer irreversible damage at the hands of an unskilled operator as a human. A
thinking machine must work in the same unfathomable way as does the human mind,
something so complicated that in thousands of years of attempts by the best geniuses, no
one has yet satisfactorily explained it.

But computers think only in the way a filing cabinet or adding machine thinks—hardly in
the same way as you do or Albert Einstein did. The computer has no emotions or
motivations. It does nothing on its own, without explicit instructions that specify each
step it must take. Moreover, the PC has no brain waves. The impulses traveling through
the computer are no odd mixture of chemicals and electrical activity, of activation and
repression. The computer deals in simple pulses of electricity, well understood and
carefully controlled. The intimate workings of the computer are probably better
understood than the seemingly simple flame that inhabits the internal combustion engine
inside your car. Nothing mysterious lurks inside the thinking machine called the computer.

Computers are thought fearsome because they are based on electrical circuits. Electricity
can be dangerous, as the ashes of anyone struck by lightning will attest. But inside the
computer, the danger is low. At its worst, it measures 12 volts, which makes the inside of
a computer as safe as playing with an electric train—even safer because you cannot trip
over a PC's tracks (they're safely ensconced in the PC's disk drive). In fact, nothing that's
readily accessible inside the computer will shock you, straighten your hair, or shorten
your life. The personal computer is designed that way. It's made for tinkering, adding in
accessories, and taking them out.

Computers are thought delicate because they are built from supposedly delicate electronic
circuits. So delicate that their manufacturers warn you to ground yourself to a cold water
pipe before touching them. So delicate that, even though they cost $500 each, they can be
destroyed by an evil glance. In truth, even the most susceptible of these circuits, the one
variety of electronic component that's really delicate, only requires extreme protection
when it's not installed where it belongs. Although pulses of static electricity can damage
circuits, the circuitry in which the component is installed naturally keeps the static under
control. (Although static pulses are a million times smaller than lightning bolts, the
circuits inside semiconductor chips are a million times smaller and more delicate than
you are.) Certainly a bolt of lightning or a good spark of static can still do harm, but the
risk of either can be minimized simply. In most situations and work places, you should
have little fear of damaging the circuits inside your computer.

Most people don't want to deal with the insides of their computers because the machines
are complex and confusing. In truth, they are and they aren't. It all depends on how you
look at them. Watching a movie on videotape is hardly a mental challenge but
understanding the whirring heads inside the machine and how the image is synchronized
and the hi-fi sound is recorded is something that will spin your brain for hours. Similarly,
changing a board or adding a disk drive to a computer is simple. It's designing the board
and understanding the Boolean logic that controls the digital gates on it that takes an
engineering degree.

As operating systems get more complicated, computers are becoming easier to use. Grab
a mouse and point the cursor at an onscreen window, and you can be a skilled computer
operator in minutes. That may be enough to content you. But you will be shortchanging
yourself and your potential. Without knowing more about your system, you probably
won't tap all the power of the PC. You won't be able to add to it and make it more
powerful. You may not even know how to use everything that's there. You definitely
won't know whether you have got the best computer for your purposes or some
overpriced machine that cannot do what simpler models excel at.

In other words, although you don't need skill or an in-depth knowledge of computer or
data processing theory, you do need to know what you want to accomplish and what you
can accomplish, the how and why of working with, expanding, even building a personal
computer system.

And that's the purpose of this book, to help you understand your present or future
personal computer so that you can use rather than fear it. This text is designed to give you
an overview of what makes up a computer system. It will give you enough grounding in
how the machine works so that you can understand what you're doing if you want to dig
in and expand or upgrade your system.

At the same time, the charts and tables provide you with the reference materials you need
to put that knowledge in perspective and to put it to work. Not only can you pin down the
basic dates of achievements in technology, find the connections you need to link a printer
or modem, and learn the meaning of every buzzword, but this book will also help you understand
the general concept of your personal computer and give you the information you need to
choose a computer and its peripherals. As you become more familiar with your system,
this book will serve as a guide. It will even help you craft your own adapters and cables,
if you choose to get your hands dirty.

The computer is nothing to fear and it need not be a mystery. It is a machine, and a
straightforward one at that. One that you can master in the few hours it takes to read this
book.

Chapter 1: Background
What is a PC? Where did it come from? And why should you care? The answers are
already cloudy and the questions may one day become the mystery of the ages. But the
PC’s origins are not obscure; its definition is malleable but manageable, and your
involvement is, well, personal but promising. This chapter offers you an overview of what
a PC is, how its software and hardware work together, the various hardware components
of a PC, and the technologies that underlie their construction. The goal is perspective, an
overview of how the various parts of a PC work together. The rest of this book fills in the
details.

• Personal Computers
• History
• Characteristics
• Interactivity
• Dedicated Operation
• Programmability
• Connectivity
• Accessibility
• Labeling Standards
• MPC 1.0
• MPC 2.0
• MPC 3.0
• PC 95
• PC 97
• MMX
• Variations on the PC Theme
• Workstation
• Server
• Simply Interactive Personal Computer
• Network Computer
• Numerical Control Systems
• Personal Digital Assistants
• Laptops and Notebooks
• Software
• Applications
• Utilities
• DOS Utilities
• Windows Utilities
• Operating Systems
• Programming Languages
• Machine Language
• Assembly Language
• High Level Languages
• Batch Languages
• Linking Hardware and Software
• Device Interfaces
• Input/Output Mapping
• Memory Mapping
• Addressing
• Resource Allocation
• BIOS
• Device Drivers
• Hardware Components
• System Unit
• Motherboard
• Microprocessor
• Memory
• BIOS
• Support Circuits
• Expansion Slots
• Mass Storage
• Hard Disk Drives
• CD ROM Drives
• Floppy Disk Drives
• Tape Drives
• Display Systems
• Graphics Adapters
• Monitors
• Flat Panel Display Systems
• Peripherals
• Input Devices
• Printers
• Connectivity
• Input/Output Ports
• Modems
• Networks

Background

In carving a timeline out of the events of the first few millennia of civilization, the
historians of an age some time hence will list the events that changed the course of the
world and human development: the Yucatan-bound meteor that blasted dinosaurs to
extinction, the taming of fire, coaxing iron from stone, inventing a machine to press ink
from type to paper, and—most important for this book if not the future—creating
personal computers that fit in your hand and link to every other person and PC in the
world.

The PC and the technologies developed around it truly stand to change the course of
civilization and the world as dramatically as any of humankind’s other great inventions.
In their nearly two decades, PCs have already changed the way people work and play. As
time goes by, PCs and their offspring are working their way more deeply into our lives.
Even today they are changing how we see the world, the way we communicate, and even
how we think.

A PC is an extension of human abilities that lets us reach beyond our limitations. In a
word, a PC is a tool. Like any tool, from stone ax to Cuisinart, the PC assists you in
achieving some goal. It makes your work easier, be it keeping your books, organizing
your inventory, tracking your recipes, or honing your wordplay. It makes otherwise
impossible tasks—for example, logging onto the Internet—manageable, often even fun.

Compared to the stone ax, a computer is complicated, though it probably takes no longer
to really master. Compared to the modern tools of everyday life, it is one of the most
expensive, second only to the tools of transportation like cars, trucks, and airplanes.
Compared to the other equipment in office or kitchen, it is the most versatile tool at your
disposal.

At heart, however, a computer is no different than any other tool. To wield it effectively,
you must learn how to use it. You must understand it, its capabilities, and its limitations.
A simple description of the gadgets you can buy today is not enough. Names and
numbers mean nothing in the abstract. After all, an entire class of people memorize
specifications and give the appearance of knowing what they are talking about. As
salespeople, they even try to guide your decisions in buying computer equipment. Getting
the most that modern technology has to offer—that is, both taking care of your work
efficiently and acquiring the best hardware to handle your chores—without spending
more than you need to requires an understanding of the underlying technology.

Personal Computers

Before you can talk about PCs at all, you need to know what you’re talking about. The
question is simple: What is a PC?

PCs have been around long enough that the only folks likely not to recognize one by sight
have vague notions that mankind may someday harness the mystery of fire. But defining
exactly what a personal computer is is one of those tasks that seems amazingly
straightforward until you actually try to do it. When you take up the challenge, the term
transforms itself into a strange cross mating between amoeba and chameleon (biologists
alone should shudder even at the thought of that one). The meaning of the term changes
with the person you speak to and in the circumstance in which you speak.

A personal computer is a computer designed to be used by one person. In this, the age of
the individual, in which everything is becoming personal—even stalwart team sports are
giving way to superstar showcases—a computer designed for use by a single person
seems natural, not even warranting a special description. But the personal computer was
born in an age when computers were so expensive that neither the largest corporations
nor governments could afford to make one the exclusive playground of a single person.
In fact, when the term was first applied, personal computer was almost an oxymoron,
self-contradictory, even simply impossible.

Computers, of course, were machines for computing, calculating numbers. Personal
computers were for people who had to make computations, to quantify quadratic
equations, to factor prime numbers, to solve the structure of transparent aluminum. About
the only people needing such computing power were engineers, scientists, statisticians,
and similar social outcasts. Then a strange thing happened: the digital age. Suddenly
anything worth talking about or looking at turned into numbers—books, music, movies,
telephone calls—just about everything short of a cozy hug from Mom. As the premier
digital device, the PC is the center of it all.

History

The first product to bear the PC designation was the IBM Personal Computer, introduced
in August, 1981. After the introduction of that machine, hobbyists, engineers, and
business people quickly adopted the term to refer to small desktop computers meant to be
used by one person. The initials PC quickly took over as the predominant term, saving
four syllables in every utterance.

In the first years of the PC, the term was nondenominational. It referred to any machine
with certain defining characteristics, which we will discuss shortly. In fact, the term
"personal computer" was in general use before IBM slapped it on its early desktop iron.

Over the years, however, the term "PC" has taken on a more specialized meaning. It
serves to distinguish a particular kind of computer design. Because that design currently
happens to be the dominant one worldwide, many people use the term in its original
sense. That works in most polite conversation, unless you’re in a conversation with
someone whose favorite computer does not follow the dominant design. When you refer
to his hardware as a "PC," the polite part of the conversation is likely to quickly
disappear as he disparages the PC and rues the days when his favorite—Amiga or
Macintosh—once parried for market share.

The specialized definition of a PC means a machine that’s compatible with the first IBM
Personal Computer—that is, a computer that uses a microprocessor that understands the
same programs and languages as the one in the first PC—though it is likely to understand
more than just that and do a heckuva lot better job of it! In fact, what we now call PCs
were once IBM-compatible, because in those primeval years (roughly 1981 to 1987), the
IBM design was the accepted industry standard, which all manufacturers essentially
copied. After 1987, IBM overplayed its role in defining the industry, and lost its position
in the marketplace. The term "IBM-compatible" fell into disuse. PC and, now rarely, PC-
compatible have taken over.

Under this more limited definition, a PC is a machine with a design broadly based on the
first IBM PC. Its microprocessor is made by Intel or, if made by another company, is
designed to emulate an Intel microprocessor. The rest of the hardware follows designs set
by industry standards discussed throughout the rest of this book.

Characteristics

Like so much of the modern world, a modern personal computer is something that’s easy
to recognize and difficult to define. In truth, the personal computer is not defined by its
parts (because the same components are common across the entire range of computers
from pocket calculators to super-computing vector processors) but by how it is used.
Every computer has a central processing unit and memory, and the chips used by PCs are
used by just about every size of machine. Most computers also have the same mass
storage systems and similar display systems to those of the PC. Although you’ll find
some variety in keyboards and connecting ports, at the signal level all have much in
common, including the electrical components used to build them.

In operation, however, you use your PC in an entirely different manner from any other
type of computer. The way you work and the way the PC works with you is the best
definition of the PC. Among the many defining characteristics of the personal computer,
you’ll find that the most important are interactivity, dedicated operation,
programmability, connectivity, and accessibility. Each of these characteristics helps make
the PC into the invaluable tool it has become, distinguishing it from the computers that
came before and an array of other devices with computer-like pretensions.

Interactivity

A PC is interactive. That is, you work with it, and it responds to what you do. You press a
key and it responds, sort of like a rat in a Skinner box pressing a lever for a food pellet.

Although that kind of give and take, stimulus and response relationship may seem natural
to you, it’s not the way computers have always been. Most of the first three decades that
computers were used commercially, nearly all worked on the batch system. You had to
figure out everything you wanted to do ahead of time, punch out a stack of cards, and
dump them on the desk of the computer operator, who, when he finished his crossword
puzzle, would submit your cards to processing. The next day, after a 16-hour overnight
wait, your program, which took only seconds to run, generated results that you would get
on a pile of paper big enough to push a paper company into profitability and wipe out a
small section of the Pacific Northwest. And odds are (if your job was your first stab at
solving a particular problem) the paper would be replete with error messages, basically
guaranteeing you lifelong tenure at your job because that’s how long it would take to get
the program to run without all the irritating error flags.

In other words, unlike the old batch system that made you wait overnight to find out how
stupid you are, today’s interactive PC tells you in a fraction of a second.

Dedicated Operation

A personal computer is dedicated. Like a dog with one master, the PC responds only to
you—if only because it’s sitting on your desk and has only one keyboard. Although the
PC may be connected to other machines and people through modems and telephone wires
or through a network and world-reaching web, the link-up is for your convenience. In
general, you use the remote connection for individual purposes, such as storing your own
files, digging out data, and sending and receiving messages meant for your eyes (and
those of the recipient) alone.

In effect, the PC is an extension of yourself. It increases your potential and capabilities.
Although it may not help you actually think faster, it answers questions for you, stores
information (be it numeric, graphic, or audible), and manages your time and records. By
helping you get organized, it cuts through the normal clutter of the work day and helps
you streamline what you do.

Programmability

A personal computer is versatile. Its design defines only its limitations, not the tasks that
it can perform. Within its capabilities, a PC can do almost anything. Its function is
determined by programs, which are the software applications that you buy and load. The
same hardware can serve as a message center, word processor, file system, image editor,
or presentation display.

Programmability has its limits, of course. Some PCs adroitly edit prime time television
shows, while others are hard pressed to present jerky movie files gleaned off the Internet.
The hardware you attach to your PC determines its ultimate capabilities in such
specialized applications. Underneath that hardware, however, all PCs speak the same
language and understand the same program code. In effect, the peripheral hardware you
install acts much like a program and adapts the PC for your application.

Connectivity

A personal computer is cooperative. That means that it is connected and communicative.
It can work with other computer systems no matter their power, location, or even the
standard they follow. A computer is equally adept at linking to handheld Personal Digital
Assistants and supercomputers. You can exchange information and often even take
control. Through a hard-wired modem or wireless connection, you can tie your personal
computer into the world-wide information infrastructure and probe into other computers
and their databases no matter their location.

PC connectivity takes many forms. The main link-up today is, of course, the Internet and
specifically its World Wide Web. One wire ties your PC and you into the entire digital
world. Connectivity involves a lot more than the Internet—and in many ways, much less,
too. Despite the dominance of the Web, you can still connect to bulletin boards (at least
those that haven’t migrated to the web) or tie into your own network. For example, one
favored variety of connection brings together multiple personal computers to form a
network or workgroup. These machines can cooperate continuously, sharing expensive
resources, such as high speed printers and huge storage systems. Although such networks
are most common in businesses, building one in your home lets you share files, work in
any room, and just be as flexible as you want in what you do.

Accessibility

A personal computer is accessible. That is, you can get at one when you need it. It's
either on your desk at work, tucked into a corner of your home office, or in your kid’s
bedroom.

The PC is so accessible, even ubiquitous, because it is affordable. Save your lunch money
for a few weeks and you can buy a respectable PC. Solid starter machines cost about the
same as a decent television or mediocre stereo system.

More importantly, when you need to use a PC, you can. Today’s software is nearly all
designed so that you can use it intuitively. Graphic operating systems, online help,
consistent commands, and enlightened programmers have turned the arcana of computer
control into something feared only by those foolish enough not to try their hands on a
keyboard.

A PC brings together those five qualities to help you do your work or extend your
enjoyment. No other modern appliance (for that is what PCs have become) has this
unique mix of characteristics. In fact, you could call your personal computer your
interactive, dedicated, versatile, cooperative, and accessible problem solver, but the
acronym PC is much easier to remember; besides, that’s what everyone else is calling the
contraption anyway.

Labeling Standards

The characteristics that define a PC amount to a lofty goal. In practical terms, to accomplish
what you expect of it, a PC has to run PC software. Although that statement sounds
straightforward, it is complicated by the least enduring aspect of progress: as technology
races ahead, investments get left behind. After nearly two decades of development,
today’s PC is a beast quite unlike that original machine that dropped off IBM’s first
assembly line. More importantly, today’s PC software and even our expectations are
leagues apart from those of even a few years ago. A problem arises when you try to bring
two worlds or two dominions of time together.

PCs do remarkably well working back in time with software. New PCs still adroitly run
most old software, some stretching back to the dawn of civilization. This backward
compatibility, although expected, is actually the exception among computers. Most
hardware improvements made in other computer platforms have rendered old software
incompatible, necessitating that you update both hardware and software at the same time.

Going in the other direction, however, using new software with an old PC poses
problems. Over the years, Intel has added new features to its microprocessors, and
peripheral manufacturers have developed system add-ons that are so desirable they are
expected in every PC. Program writers have taken advantage of both the new
microprocessor capabilities and the new peripherals, and the software they write often
won’t run on older PCs that lack the innovations. Even when new software runs on old
PCs, you often won’t want to try. Programmers work with the latest, fastest PCs and craft
their products to work with ample power. Older systems that lack modern performance
may run programs so slowly that most people won't tolerate the results. So that you can
be assured that your PC will work with their products and deliver adequate performance,
software developers, both individually and as groups, have developed standards for PCs.

The standards are actually certifications that PCs meet a minimum level of compliance.
They are not true industry standards, such as those published by major industry bodies
like ANSI (the American National Standards Institute), the IEEE (Institute of Electrical and
Electronics Engineers), or the ISO (International Organization for Standardization), nor are they
mandated by any government organization. Rather, they are specifications arbitrarily set
by a single company or industry marketing organization. The only enforcement power is
that of trademark law: the promoter of the standard owns the trademark that appears on
the associated label, and it can designate who can use the trademark.

Two major organizations, the Multimedia PC Marketing Council and Microsoft
Corporation, have promoted such PC certification standards, and they judge whether a
given product meets the standards they set. Products that qualify are permitted to wear a
label designating their compliance. The label tells you that a given PC meets the
organization’s requirements for acceptable operation with its software. A recent addition
has been the MMX certification logo from Intel. Unlike the other certifications, the
MMX logo represents hardware compatibility, and it is displayed on software products.

The Multimedia PC Marketing Council developed the first qualification standard to
assure that you could easily select a PC that would run any multimedia software. They
developed the concept of a Multimedia PC, a computer equipped with all the peripherals
necessary for producing sounds and onscreen images acceptable in multimedia
presentations. Producing the data to supply those peripherals also required a powerful
microprocessor and ample storage, so the Multimedia PC Council also added minimum
requirements for those aspects of the PC into its standards. As the programmers created
multimedia software with greater realism that demanded faster response and more power
from PCs, the council added a second, then a third, higher standard to earn certification.
In 1996, The Multimedia PC Marketing Council was superseded as the custodian of the
MPC specifications by the Multimedia PC Working Group, a committee of the Software
Publishers Association.

The early incarnations of Windows software had long been criticized for draining too
much power from PCs, so during the development of Windows 95, Microsoft developed a
set of minimal requirements that a PC needed to meet to wear the Windows logo. This set
of requirements became known as PC 95. To match the needs of the next generation of
Windows, Microsoft revised these specifications into PC 97.

The Microsoft standards go beyond merely the system hardware, that is, what the system
has, and include the firmware, what the system does. Taken together with the Windows
operating system, these standards define what a PC can do.

Table 1.1. Comparison of Major PC Labeling Standards

Standard              MPC 1.0     MPC 2.0     MPC 3.0      PC 95           PC 97
Effective date        1990        May 1993    June 1995    November 1995   January 1997
Microprocessor type   386SX       486SX       Pentium      No requirement  Pentium
Microprocessor speed  16 MHz      25 MHz      75 MHz       No requirement  120 MHz
Memory                2 MB        4 MB        8 MB         4 MB            16 MB
Floppy disk           1.44 MB     1.44 MB     1.44 MB      Not required    Not required
Hard disk             30 MB       160 MB      540 MB       Not required    No requirement
CD ROM speed          1x          2x          4x           -               -
CD ROM access time    1000 ms     400 ms      250 ms       -               -
Serial port           One RS-232  One RS-232  16550A       One RS-232      USB
Parallel port         One SPP     One SPP     One SPP      One SPP         ECP
Game port             One         One         One or USB   Not required    Not required

Reading left to right, the table implies a progression, and this short list of standards
demonstrates exactly that. MPC 1.0 was the PC industry’s first attempt to break with old
technology, to leave lesser machines behind so you and your expectations could advance
to a new level unfettered by past limitations. All the ensuing standards lift the level
higher, demanding more from every PC so that you can take advantage of everything that
twenty years of progress in small computers (not to mention a few millennia of
civilization) has to offer.

MPC 1.0

Figure 1.1 The MPC 1.0 logo.

The primary concern of the Multimedia PC Council was, of course, that you be delighted
with the multimedia software that you buy for your PC. As multimedia products became
available in 1990, many people were frustrated that their own computers, some of which
might date back to XT days, were unable to run their new software. The members of the
council realized consumers needed a quick and easy way to determine whether a given new
computer could adequately handle the demands of new multimedia software.
Consequently, the MPC specifications summarized in Table 1.2 are aimed primarily at
performance and ensuring the inclusion of multimedia peripherals.

Table 1.2. Multimedia PC Requirements Under MPC 1.0

Feature Requirement
Microprocessor type 386SX
Microprocessor speed 16 MHz
Required memory 2 MB
Recommended memory No recommendation
Floppy disk capacity 1.44 MB
Hard disk capacity 30 MB
CD ROM transfer rate 150 KB/sec
CD ROM access time 1000 milliseconds
Audio DAC sampling 22.05 KHz, 8-bit, mono
Audio ADC sampling 11.025 KHz, 8-bit, mono
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game

As the lowest industry performance qualification for PCs, the original MPC standard
requires the least from a PC. As originally propounded, the Multimedia PC Council asked
only for 286-style microprocessors running at speeds of 12 MHz or higher. The memory
handling shortcomings of 286 and earlier microprocessors, however, quickly became
apparent, so the council raised the requirement to a minimum of a 386SX in the current
MPC 1.0 standard. Because of this chip choice, the minimum speed required is 16 MHz,
the lowest rating Intel gives the 386SX chip. For memory, a system complying with MPC
1.0 must have at least 2.0 megabytes of RAM, enough to get Windows 3.1 off the
ground. The specification also required a full range of mass storage devices, including a
3.5-inch floppy disk drive capable of reading and writing 1.44MB media, a 30MB hard
disk (small even at the time), and a CD ROM drive.

Because at the time MPC 1.0 was created, CD ROMs were relatively new and
unstandardized, the specification defined several drive parameters. The minimum data
transfer rate was set at a sustained 150KB/sec, the rate required by stereophonic audio
CD playback. The standard also required the CD ROM drive to have
an average seek time of 1 second or less, which fairly represented the available
technology of the time (although not, perhaps, user expectations). For software
compatibility, the standard demanded the availability of a Microsoft-compatible
(MSCDEX 2.2) driver that understood advanced audio program interfaces, as well as the
ability to read the fundamental CD standards (mode 1, with mode 2 and form 1 and 2
being optional).
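
For perspective, that single-speed figure follows directly from the audio CD format itself. Here's a quick back-of-the-envelope sketch in Python (my own illustration, built from the widely published Red Book parameters, not figures taken from the MPC document):

```python
# Red Book audio CD parameters (widely published figures)
SAMPLE_RATE = 44_100   # samples per second
CHANNELS = 2           # stereo
BYTES_PER_SAMPLE = 2   # 16-bit samples

# Raw audio stream rate: 176,400 bytes/sec
raw_rate = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE

# The stream is carried in 2,352-byte sectors, so audio speed means 75 sectors/sec
SECTOR_SIZE = 2_352
sectors_per_second = raw_rate // SECTOR_SIZE   # 75

# In mode 1, only 2,048 bytes of each sector carry user data, which yields the
# sustained single-speed data rate the MPC 1.0 specification demands
USER_DATA_PER_SECTOR = 2_048
data_rate = sectors_per_second * USER_DATA_PER_SECTOR
print(data_rate)   # 153,600 bytes/sec, the familiar "150KB/sec"
```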

Because audio is an implicit part of multimedia, the MPC 1.0 specification required an
extensive list of capabilities in addition to playing standard digital audio disks (those
conforming with the Red Book specification of the CD industry). The standard also
required a front panel volume control for the audio coming off music CDs, a requirement
included in all ensuing MPC specifications. The apparent objective was to allow you to
listen to your CDs while you worked on other programs and data sources.

Another part of MPC 1.0 was the requirement for a sound board that must be able to play
back, record, synthesize, and mix audio signals with well-defined minimum quality
levels. The MPC 1.0 specifications required a digital to analog converter (for playback)
and an analog to digital converter (to sample and record audio). Under MPC 1.0, the
requirements for each differed slightly.

The DAC (playback) circuitry required a minimum of 8-bit linear PCM (pulse code
modulation) sampling with a 16-bit converter recommended. That 8-bit sampling was a
true minimum, the same level used by the telephone company for normal long-distance
calls, hardly up to the level of a good clock radio. Playback sampling rates were set at 11
and 22 KHz with CD-quality 44.1 KHz "desirable." The lower of two standard sampling
rates is a bit better than telephone quality (which is 8 KHz); the higher falls short of FM
radio quality. The analog to digital conversion (recording) sampling rate requirements
include only linear PCM (that is, no compression) with low quality 11 KHz sampling,
both 22 and 44.1 KHz being optional.
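
To see what these sampling options mean in data terms, here is a minimal Python sketch (my own illustration, not part of the specification) that computes the raw rate of uncompressed linear PCM:

```python
def pcm_bytes_per_second(sample_rate_hz: int, bits_per_sample: int, channels: int) -> int:
    """Raw data rate of uncompressed linear PCM audio."""
    return sample_rate_hz * (bits_per_sample // 8) * channels

# The MPC 1.0 playback minimum: 8-bit mono at the two required rates
print(pcm_bytes_per_second(11_025, 8, 1))   # 11,025 bytes/sec, a bit better than telephone
print(pcm_bytes_per_second(22_050, 8, 1))   # 22,050 bytes/sec, short of FM radio
# The "desirable" CD-quality option: 16-bit stereo at 44.1 KHz
print(pcm_bytes_per_second(44_100, 16, 2))  # 176,400 bytes/sec
```

The sixteenfold spread between the 8-bit mono minimum and the CD-quality option shows how much headroom the "desirable" rate demanded of 1990-era hardware.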

In effect, the MPC 1.0 specification froze in place the state of the art in sound boards and
CD ROM players at the time of its creation while allowing the broadest possible range of
PCs to bear the MPC moniker. The only machines it outlawed outright were those with
technology so old no reputable computer company was willing to market them at any
price. After all, the council was composed of people trying to sell PCs as well as
multimedia software, and they wanted as many of their machines as possible to qualify
for certification.

Far from perfect, far from pure, MPC 1.0 did draw an important line, one that showed
backward compatibility has its limits. In effect, it said, "Progress has come, so let’s raise
our expectations." In hindsight, the initial standard didn't raise expectations or the MPC
requirements high enough, but for the first time it freed software publishers from the need
to make their products backward-compatible with every cobwebbed computer made since
time began.

MPC 2.0

Figure 1.2 The MPC 2.0 logo.

As multimedia software became more demanding, the MPC 1.0 standard proved too low
to guarantee good response and acceptable performance. Consequently, in May, 1993, the
Multimedia PC Council published a new standard, MPC 2.0, for more advanced systems.
As with the previous specification, MPC 2.0 was held back by practical considerations:
keeping hardware affordable for you and profitable for the manufacturer. Although it set
a viable minimum standard, something for programmers to design down to, it did not
represent a demanding, or even desirable, level of performance for multimedia systems.
Table 1.3 summarizes the basic requirements of MPC 2.0.

Table 1.3. Multimedia PC Requirements Under MPC 2.0.

Feature Requirement
Microprocessor type 486SX
Microprocessor speed 25 MHz
Required memory 4 MB
Recommended memory 8 MB
Floppy disk capacity 1.44 MB
Hard disk capacity 160 MB
CD ROM transfer rate 300 KB/sec
CD ROM access time 400 milliseconds
Audio DAC sampling 44.1 KHz, 16-bit, stereo
Audio ADC sampling 44.1 KHz, 16-bit, stereo
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game

The most important aspect of MPC 2.0 is that it raised the performance level required by
a PC in nearly every hardware category, reflecting the vast improvement in technology in
the slightly more than two years since the release of MPC 1.0. It required more than
double the microprocessor power, with a 486SX chip running at 25 MHz being the
minimal choice. More importantly, MPC 2.0 demanded 4 megabytes of RAM, with an
additional 4 megabytes recommended.

While MPC 2.0 still required a 1.44 MB floppy so that multimedia software vendors need
only worry about one distribution disk format, it pushed its hard disk capacity
recommendation up to 160 MB. This huge, factor-of-five expansion reflects both the
free-for-all plummet in disk prices as well as the blimp-like expansion of multimedia
software.

CD ROM requirements were toughened two ways by MPC 2.0. The standard demands a
much faster access time, 400 milliseconds versus one full second for MPC 1.0, and it
requires double-speed operation (a data transfer rate of 300 KB/sec). Although triple- and
quadruple-speed CD ROM drives were already becoming available when MPC 2.0 was
adopted, most multimedia software of the time gained little from them, so the double-
speed requirement was the most cost-effective for existing applications.

Under MPC 2.0, the CD ROM drive must be able to play back commercially recorded
music CDs and decode their track identifications (using data embedded in subchannel Q).
In addition, the specification required that the drive handle extended architecture CD
ROMs (and recommended that the extended architecture capabilities include audio) and be
capable of handling PhotoCDs and other disks written in multiple sessions.

The primary change in the requirement for analog to digital and digital to analog
converters was that MPC 2.0 made CD-quality sound mandatory across the board. Sound boards under
MPC 2.0 must allow both recording and playback at full 44.1 KHz sampling in stereo
with a 16-bit depth. Lower rate sampling (11.025 and 22.05 KHz) must also be available.
MPC 2.0 also required an integral synthesizer that can produce multiple voices and play
up to six melody notes and two percussion notes at the same time.

In addition, the sound system in an MPC 2.0 machine must be able to mix at least three
sound sources (four are recommended) and deliver them to a standard stereophonic
output on the rear panel, which you can plug into your stereo system or active
loudspeakers. The three sources for the mixer included Compact Disc audio from the CD
ROM drive, the music synthesizer, and a wavetable synthesizer or other digital to analog
converter. An auxiliary input was also recommended. Each input must have an 8-step
volume control.

An MPC 2.0 system must have at least a VGA display system (video board and monitor)
with 640 pixel by 480 pixel resolution in graphics mode and the capability to display
65,536 colors (16-bit color). The standard recommended that the video system be capable
of playing back quarter-screen (that is, 320 pixel by 200 pixel) video images at 15 frames
per second.

Port requirements under MPC 2.0 matched the earlier standard: parallel, serial, game
(joystick), and MIDI. Both a 101-key keyboard (or its equivalent) and a two-button
mouse were also mandatory.

MPC 3.0

Figure 1.3 The MPC 3.0 logo.

As the demands of multimedia increased, the Multimedia PC Marketing Council once
again raised the hurdle in June, 1995, when it adopted MPC 3.0. The new standard
pushed up hardware requirements in nearly every area with the goal of achieving full-
screen, full-motion video with CD-quality sound. In addition, the council shifted its
emphasis from specific hardware requirements to performance requirements measured by
a test suite developed by the council. Table 1.4 lists many of the basic requirements of
MPC 3.0.

Table 1.4. Multimedia PC Requirements Under MPC 3.0.

Feature Requirement
Microprocessor type Pentium or equivalent
Microprocessor speed 75 MHz
Required memory 8 MB
Minimum memory bandwidth 100 MB/sec
Floppy disk capacity 1.44 MB, optional in notebook PCs
Hard disk capacity 540 MB
Hard disk access time < 12 ms
Hard disk throughput > 3 MB/sec
CD ROM transfer rate 550 KB/sec
CD ROM access time 250 milliseconds
Audio DAC sampling 44.1 KHz, 16-bit, stereo
Audio ADC sampling 44.1 KHz, 16-bit, stereo
Graphics interface PCI 2.0 or later
Graphics performance 352x240x15 colors at 30 frames/sec
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game or USB
Modem V.34 with fax

The MPC 3.0 standard does not require any particular microprocessor. Rather, the
standard notes that a 75 MHz Pentium—or its equivalent from another manufacturer—is
sufficient to achieve the performance level required by the test suite.

For mass storage, MPC 3.0 retains the same floppy disk requirement, although it makes
the inclusion of any floppy drive optional in portable systems. More important are the
hard disk requirements. To give adequate storage, the standard mandates a 540 MB disk
as the minimum, of which 500 MB must be usable capacity. Recognizing the need for
fast storage in video applications, the disk performance requirements are demanding. Not
only does the standard require an average access time of 12 milliseconds or less, it also
asks for a high transfer rate as well. The disk interface must be able to pass 9 MB per
second while the disk medium itself must be able to be read at a rate of 3 MB per second
(a buffer or cache in the drive itself making possible the faster interface transfers). The
access timings require that any MPC 3.0-compliant disk spin faster than 4000 RPM.
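
The arithmetic behind that spin rate is straightforward: on average, a drive must wait half
a revolution for the sector it wants to swing under the head, so average rotational latency
in milliseconds is 30,000 divided by the RPM. At the old standard of 3600 RPM, latency alone
would eat 8.3 milliseconds of the 12 millisecond access budget. A quick check in C (my own
illustration):

#include <stdio.h>

/* Average rotational latency is half a revolution:
   latency (ms) = 30,000 / RPM                      */
int main(void)
{
    int rpm;
    for (rpm = 3600; rpm <= 5400; rpm += 600)
        printf("%4d RPM -> %.1f ms average latency\n", rpm, 30000.0 / rpm);
    return 0;
}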

CD storage must be compatible with all the prevailing standards, including standard CD
ROM, CD-ROM XA, Photo CD, CD-R (recordable CDs), CD Extra and CD-interactive.
Drives must achieve nearly a 4x transfer rate; the standard requires a 550 KB/sec transfer
rate rather than the 600 KB/sec of a true 4x drive. All multimedia PCs, both desktop and
portable, require CD drives, although the access time requirements differ. Desktop units
must have rated access times better than 250 milliseconds; portable units abide a less
stringent 400 ms specification.
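
The "x" ratings of CD ROM drives are simple multiples of the 150 KB/sec single-speed rate
that MPC 1.0 established, a relationship easy to tabulate (a sketch of my own):

#include <stdio.h>

/* A CD ROM "x" rating is a multiple of the 150 KB/sec single-speed rate */
int main(void)
{
    int x;
    for (x = 1; x <= 4; x++)
        printf("%dx drive: %d KB/sec\n", x, x * 150);
    return 0;
}

By this reckoning, the 550 KB/sec figure MPC 3.0 demands falls just 50 KB/sec short of a
true 4x drive.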

The audio requirements of MPC 3.0 extend not only to sampling and digitization but
through the full system from synthesis to speakers. The digital aspects of the audio
system must handle at least stereophonic CD-quality audio (44.1 KHz sampling with 16-
bit depth) with Yamaha OPL3 synthesis support (although use of the Yamaha chip is not
a requirement). Speaker and amplifier quality is strictly defined. Two-piece speaker
systems must have a range from 120 Hz to 17.5 KHz with at least three watts of power
per channel (at 1% distortion) across the audio spectrum. A subwoofer, which must be
backed by at least 15 watts, extends the range down to 40 Hz or lower.

MPC 3.0 requires a compliant system to deliver full-motion video, that is, 30 frames per
second, for an image that fills about one-third of a standard VGA screen (352 by 288
resolution) with realistic 15-bit color (that’s five bits for each primary color). MPC 3.0
also requires support of a number of video standards including PCI 2.0 or newer for the
hardware connection with the video controller and MPEG expansion (in hardware or
software) for playback.

The basic ports required under previous MPC standards face further refinement under
MPC 3.0. The mandatory serial port must have a 16550A UART, and when you connect
the mouse or other pointing device to an MPC 3.0 system, you must still have at least one
serial port available. The standard allows for a USB connection to the mouse and for the
use of USB instead of a game port (which would otherwise be required). A modem
complying with the V.34 standard and capable of sending and receiving facsimile
transmissions is also required. The standard parallel port and MIDI port are carried over
from previous incarnations of the MPC specifications.

PC 95

Although aimed at setting a minimum compatibility level for machines designed to run
Windows 95, Microsoft officially released its PC 95 standard months after the operating
system became available. The specification was released November 1, 1995. In early
1996, some of its requirements were made more demanding and it was augmented by
several design "recommendations" that would assure more satisfactory operation.

In truth, the primary concern with PC 95 is operability rather than performance. The
standard does not make specific requirements as to the microprocessor, its speed, or the
disk capacity of a PC to earn the "Designed for Windows 95" sticker. Instead it seeks
compliance with specific standards—many of which were originally promulgated by
Microsoft—that link Windows 95 to your hardware.

Implicit in PCs designed for Windows 95 (or any modern operating system, for that
matter) is the need for a 386 or later microprocessor to take advantage of Windows 95’s
advanced operating modes and superior memory addressing. By the time Microsoft
issued PC 95, however, any microprocessor less than a 486 was unthinkable in a new PC
and lower speed Pentiums had almost become the choice for entry-level systems. PC 95
does address the need for memory and sets its requirement at Windows 95’s bare bones
minimum, four megabytes. The 1996 revision adds a recommendation of eight megabytes
reserved exclusively for Windows.

PC 95 puts more emphasis on compliance with the Plug-and-Play standard, requiring
BIOS support of the standard. The proper operation of the automatic configuration
features of Windows 95 makes this level of support mandatory. The intention is to make
PCs more accessible to people who do not wish to be bothered by the details of
technology.

The chief other requirements of the PC 95 standard include a minimum range of ports
(one serial and one basic parallel port), a pointing device, and a color display system that
can handle basic graphics, VGA-level resolution in 256 colors. Table 1.5 summarizes the
major requirements of the PC 95 standard.

Table 1.5. Major Requirements of the "Designed for Windows 95" Label

Feature                   Original PC 95   Revised          Revised           Chapter
                          Requirement      Requirement      Recommendation
Microprocessor            NR               NR               NR                3
System memory             4 MB             8 MB             8 MB              4
16-bit I/O decoding       No               Yes              Yes               7
Local bus                 No               Yes              Yes               7
Sound board               No               No               Yes               18
Parallel port             SPP              ECP              ECP               19
Serial port               RS-232           16550A UART      16550A UART       21
USB                       No               No               Yes               21
Display resolution        640x480x8        800x600x16       800x600x16        15
                                           and 1024x768x8   and 1024x768x8
Display memory            NR               1 MB             2 MB              15
Display connection        ISA              Local bus        Local bus         15
Hard disk drive           NR               Required         Required          10
CD ROM                    NR               NR               Recommended       12
Plug-and-Play BIOS        Required         Required         Required          5
Software setting of       No               Yes              Yes               5 and 6
resources

Note: NR indicates the standard makes no specific requirement.

Even at the time of its release, PC 95 was more a description of a new PC than a
prescription dictating its design. Most systems already incorporated all of the requirements
of PC 95 and stood able to wear the Windows 95 compatibility sticker.

PC 97

In anticipation of the successor to Windows 95, Microsoft developed a new, higher
standard in mid-1996. Generally known as PC 97 in the computer industry, its terms set
up the requirements for labeling PCs as designed for the Windows 95 successor, likely
termed Windows 97.

The PC 97 standard is both more far-reaching and more diverse than that of PC 95. PC 97
sets minimum hardware requirements, interface standards, and other required
conformances necessary to give the new operating system full functionality. In addition,
it contemplates the fragmentation of the PC industry into separate and distinct business
and home applications. As a result, the standard is actually three standards in one. One is for the
minimal PC for the new operating system, Basic PC 97; another defines the requirements
of the business computer, Workstation PC 97; and a third describes the home computer,
the Entertainment PC 97. Table 1.6 lists the major requirements of each of these three
facets of PC 97.

Table 1.6. Major Requirements for PC 97 Designations

Feature                Basic PC 97    Workstation PC 97   Entertainment PC 97       Chapter
Microprocessor         Pentium        Pentium             Pentium                   3
Speed                  120 MHz        166 MHz             166 MHz                   3
Memory                 16 MB          32 MB               16 MB                     4
ACPI                   Required       Required            Required                  24
OnNow                  Required       Required            Required                  24
Plug-and-Play          Required       Required            Required                  5
USB                    1 port         1 port              2 ports                   21
IEEE 1394              Recommended    Recommended         Required                  21
ISA bus                Optional       Optional            Optional                  7
Keyboard               Conventional   Conventional        USB or wireless           14
Pointing device        Conventional   Conventional        USB or wireless           14
Wireless interface     Recommended    Recommended         Remote control required   14
Audio                  Recommended    Recommended         Advanced audio            18
Modem or ISDN          Recommended    Recommended         Required                  22
Display resolution     800x600x16     1024x768x16         1024x768x16               15
Display memory         1 MB           2 MB                2 MB                      15
Local bus video        Required       Required            Required                  15
Bus master controller  Required       Required            Required                  7 and 9

The most striking addition is Microsoft’s new demand for performance, setting the
standard microprocessor for an entry-level system—the Basic PC 97—at a 120 MHz
Pentium, which, little more than three years ago, would have been the top of the range. In
addition, all systems require high speed local bus video connections and bus mastering in
their mass storage systems.

With the PC 97 standard, Microsoft has taken the initiative in moving PC expansion
beyond the limitations of the original 1981 PC design. Microsoft has relegated the
conventional PC expansion bus, best known as ISA and resident in nearly all PCs for
more than a decade, to the status of a mere option. Microsoft has put its full support
behind the latest version of PCI, 2.1, as the next expansion standard in PC 97 hardware.

The difference in memory requirements for business and home systems reflects the needs
of their respective applications. A business machine is more likely to run several
simultaneous applications and more likely to mash huge blocks of data, pushing up its
memory needs. On the other hand, the home system demands high speed connections
such as IEEE 1394 (see Chapter 21, "Serial Ports") for making the most of multimedia,
an extra USB port, and mandatory modem and high quality audio.

Windows 95 took so long to boot up that Microsoft apparently feared some people would
switch to another operating system before it finished (or so you might assume by the
company’s insistence on the OnNow standard for quick booting). PC 97 also requires
ACPI for conserving power (see Chapter 24, "Power") and full Plug-and-Play compliance
for setup convenience. PC 97 also includes a wealth of additional requirements for
portable systems.

All systems designed for the next generation of color need the ability to render images in
true to life, 16-bit color, which allows the simultaneous display of up to 65,536 hues.

The advances between PC 95 and PC 97 reflect the fast changes in computer technology.
You simply could not buy a machine with all the capabilities of PC 97 when Microsoft
announced the PC 95 standard, yet PC 97 reflects the minimum you should expect in any
new computer. Even if you don’t target a "Designed for Windows" sticker for your next
PC purchase, you should aim for a system equipped with the features of the PC 97
standard.

MMX

With the introduction of its MMX-enhanced Pentium microprocessors in January, 1997,
Intel added a designation with the MMX logo that appears as a certification on products.
This MMX label, as shown in Figure 1.4, indicates that software uses a set of commands
specially written to take advantage of the new features of the MMX-enhanced
microprocessors offered by Intel as well as makers of Intel-compatible chips, including
AMD and Cyrix. On some operations—notably the manipulation of audio and video data
as might be found in multimedia presentations and many games—this software will see
about a 50% to 60% performance enhancement when run in computers equipped with an
MMX-enhanced microprocessor.

Figure 1.4 The MMX certification logo.

MMX does not indicate the innate power of a microprocessor, PC, or program.
Computers that use MMX technology may deliver any of several levels of performance
based on the speed of their microprocessor, their microprocessor type, and the kind of
software you run. A computer with an MMX microprocessor will run MMX-certified
software faster than a computer with the same type of microprocessor without MMX. The
MMX-enhanced system will show little performance advantage over a system without
MMX on ordinary software. From the other perspective, software without the MMX label
will run at about the same speed in either an MMX-enhanced PC or one without an MMX
processor, providing the two have the same speed rating (in megahertz) and the same
processor type (for example, Pentium).
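
The trick behind that speedup is single-instruction, multiple-data processing: one MMX
instruction operates on eight bytes of audio or video data at once. The fragment below is
a minimal sketch using the MMX compiler intrinsics (mmintrin.h) that C compilers supply
for these chips; the scenario, brightening eight pixels with a single saturating add, is
my own illustration and needs an MMX-capable processor and compiler to run:

#include <stdio.h>
#include <string.h>
#include <mmintrin.h>

int main(void)
{
    unsigned char pixels[8] = { 10, 100, 200, 250, 0, 128, 64, 32 };
    unsigned char bright[8];
    __m64 src, result;
    int i;

    memcpy(&src, pixels, 8);                      /* load eight 8-bit pixels */
    result = _mm_adds_pu8(src, _mm_set1_pi8(60)); /* add 60 to all eight at  */
                                                  /* once, saturating at 255 */
    memcpy(bright, &result, 8);
    _mm_empty();                                  /* leave MMX state (EMMS)  */

    for (i = 0; i < 8; i++)
        printf("%3u -> %3u\n", pixels[i], bright[i]);
    return 0;
}

A conventional processor would need a separate add, plus a check against overflow, for
each of the eight bytes; the MMX version handles them all with one instruction, which is
where the gains on audio and video data come from.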

In other words, the MMX label on a box of software is only half of what you need. To
take advantage of the MMX software label, you need an MMX-enhanced PC. And an
MMX-enhanced PC requires MMX-labeled software to distinguish itself from PCs
without MMX.

Variations on the PC Theme

What about the PC pretenders? True PCs aren’t the only small computers in use today. A
variety of hardware devices share many of the characteristics of the true PC—and
sometimes even the name—but differ in fundamental design or application. For example,
some machines skimp on features to gain some other benefit, such as increased
ruggedness or small size. Others forego the standard microprocessors and operating
systems to gain added speed in specialized applications. Others are simply repackaging (or
renaming) jobs applied to conventional PCs.

In any case, any worthwhile discussion of PCs and their technologies requires that the
various themes and variations be distinguished. The following list includes the most
common terms often used for PCs and PC wannabes:

Workstation

The term "workstation" is often ambiguous because it commonly takes two definitions.
The term derives from the function of the machine. It is the computer at which an office
worker stations himself.

In some circles, the term "workstation" is reserved for a PC that is connected to a network
server. Because the server is often also a PC, the term "PC" doesn’t distinguish the two
machines from one another. Consequently, people often refer to a PC that functions as a
network node as a workstation, and the machine linking the workstations together is the
server. (The term "node" can’t substitute for workstation because devices other than PCs
can also be nodes.)

The other application of the term "workstation" refers to powerful, specialized computers
still meant to be worked upon by a single individual. For instance, a graphic workstation
typically is a powerful computer designed to manipulate technical drawings or video
images at high speed. Although this sort of workstation has all the characteristics of a PC,
engineers distinguish these machines with the workstation term because the machines do
not use the Intel-based microprocessor architecture typical of PCs. Moreover, the people
who sell these more powerful computers want to distinguish their products from run-of-
the-mill machines so they can charge more.

Of course, the term "workstation" also has a definition older than the PC, one that refers
to the physical place at which someone does work. Under such a definition, the
workstation can be a desk, cubicle, or workbench. In the modern office, even this kind of
workstation includes a PC.

Server

Server describes a function rather than a particular PC technology or design. A server is a
computer that provides resources that can be shared by other computers. Those resources
include files such as programs, databases, and libraries; output devices such as printers,
plotters, and film recorders; and communications devices such as modems and Internet
access facilities.

Traditionally a server is a fast, expensive computer. A server does not need to be as
powerful as the PCs it serves, however, particularly when serving smaller networks.
Compared to the work involved in running the graphic interface of a modern operating
system, retrieving files from a disk drive and dispatching them to other PCs is small
potatoes indeed. An ordinary PC often suffices.

For example, if you want to run Windows 95 on a PC as a workstation, you need at least
eight megabytes of RAM to avoid the feeling of working in a thick pot of oatmeal.
Dedicate the same PC as a server of other Windows 95 PCs, and you’ll get good response
with as little as the minimal four megabytes. The difference is all the overhead needed to
run a user interface that the server need not bother with.

On the other hand, the server in a large corporation requires a high level of security and
reliability because multiple workstations and even the entire business may depend on it.
Ideally, such a big-business server displays fault tolerance, the ability to withstand the
failure of one or more major systems, such as a disk drive or one of several
microprocessors, and continue uninterrupted operation.

Compared to ordinary PCs, most servers are marked by huge storage capacity. A server
typically must provide sufficient storage space for multiple users—dozens or even
hundreds of them.

Most of the time, the server stands alone, unattended. No one sits at its keyboard and
screen monitoring its operation. It runs on autopilot, a slave to the other systems in the
network. Although it interacts with multiple PCs, it rarely runs programs in its own
memory for a single user. Its software is charged mostly with reading files and
dispatching them to the proper place. In other words, although the server interacts, it’s
not interactive in the same way as an individual PC.

Simply Interactive Personal Computer

In early 1996 Microsoft coined the term SIPC to stand for Simply Interactive Personal
Computer, the software giant’s vision of what the home computer will eventually
become. The name hardly reflects the vision behind the concept. The SIPC is anything but
simple; it is a complete home entertainment device that will be the centerpiece of any
home electronics system, if not the home entertainment system. The goal of the design
initiative is to empower the PC with the capabilities of all its electronic rivals. It is to be
as adept at video games as any Sega or Nintendo system, as musically astute as a stereo
(and also able to create and play sounds as a synthesizer), and able to play video better
than your VCR. In other words, the SIPC is a home PC with a vengeance. In fact, it's
what the home PC was supposed to be all along but the hardware (and software) were
never capable of delivering.

Compared to the specification of any home PC, the minimal SIPC is a powerhouse,
starting with a 150 MHz Pentium with 16 MB of RAM and taking off from there. More
striking, it is designed as a sealed box, one that you need never tinker with. Expansion,
setup, and even simple repair are designed to be on the same level as maintaining a
toaster.

Rather than something grand or some new, visionary concept, the SIPC is more a
signpost pointing the way the traditional PC is headed. The PC 97 specification covers
nearly all the requirements of the SIPC so, in effect, the SIPC is here now, lurking on
your desk in the disguise of an ordinary PC.

Network Computer

The opposite direction for the home PC is one stripped of power instead of enhanced.
Instead of being a general purpose machine, this sort of home PC would be particularly
designed for interactive use, with data and software supplied by outside sources. The
primary source is universally conceived as the Internet. Consequently, this sort of
design is often termed an Internet Box.

The same concept underlies the Network Computer, commonly abbreviated as NC. As
with the Internet Box, an NC is a scaled-down PC aimed primarily at making Internet
connections. NCs allow browsing the World Wide Web, sending and receiving
electronic mail, and running Java-based utilities distributed through the net, but they lack
the extensive data storage abilities of true PCs. Similar in concept and name but
developed with different intentions and by different organizations is the NetPC, a more
conventional PC designed to lower maintenance costs.

The revised home PC concept of the Network Computer (NC rather than NetPC)
envisions a machine that can either have its own screen or work with the monitor that’s
part of your home entertainment system, typically a television set. In contrast, a related
technology often called the Set Top Box was meant to be an Internet link that used your
television as the display. It earned its name from its likely position, sitting on top of your
television set.

Only the names Internet Box and Set Top Box (and even NC) are new. The concept harks
back to the days before PCs. The NC is, in fact, little more than a newer name for a
smart terminal.

A terminal is the start and endpoint of a data stream, hence the name. It features a
keyboard to allow you to type instructions and data, which can then be relayed to a real
computer. It also incorporates a monitor or other display screen to let you see the data the
computer sends back at you.

The classic computer terminal deals strictly with text. A graphic terminal has a display
system capable of generating graphic displays. A printing terminal substitutes a printer
for the screen.

A smart terminal has built-in data processing abilities. It can run programs within its own
confines, programs which are downloaded from a real computer. The limitation of
running only programs originating outside the system distinguishes the smart terminal
from a PC. In general, the smart terminal lacks the ability to store programs or data
amounting to more than the few kilobytes that fit into its memory.

Although smart terminals are denizens of the past, the Internet Box has a promising
future, or, more correctly, a future of promises. Its advocates point out that its storage is
almost limitless with the full reach of the Internet at its disposal, not mere megabytes, not
gigabytes, but terabyte territory and beyond. But downloading that data is like trying to
drain the ocean through a soda straw. Running programs or simply viewing files across a
network is innately slower than loading from a local hard disk. Insert a severe bottleneck
like a modem and local telephone line in the network connection, and you’ll soon
rediscover the entertainment value of watching paint dry and the constellations realign.
Instead of the instant response you get to your keystrokes with a PC, you face long delays
whenever your Internet Box needs to grab data or program code from across the net.
Until satellite and cable modems become the norm (both for you and Internet servers),
slow performance will hinder both the interactivity and the wide acceptance of the
Internet Box.

The principal difference between a true Network Computer and its previous incarnations
is that a consortium of companies, including Apple, IBM, Oracle, Netscape, and Sun
Microsystems, has developed a standard for them. Called the NC Reference Profile, the
standard is more an Internet’s Greatest Hits of the specifications world. It requires
compliance with the following languages and protocols:

• The Java language, including the Java Application Environment, the Java Virtual
Machine, and Java class libraries

• HTML (HyperText Markup Language), the publishing format used for Web pages

• HTTP, which browser software uses to communicate across the Web (a minimal exchange appears after this list)

• Three E-mail protocols including Simple Mail Transfer Protocol, Internet
Message Access Protocol Version 4, and Post Office Protocol Version 3

• Four multimedia file formats including AU, GIF, JPEG, and WAV

• Internet Protocol, so that they can connect to any IP-based network including the
World Wide Web

• Transmission Control Protocol, also used on the Internet and other networks

• File Transfer Protocol, used to exchange files across the Internet

• Telnet, a standard that lets terminals access host computers

• Network File System, which allows the NC to have access to files on a host
computer

• User Datagram Protocol, a low-overhead alternative to TCP that lets applications
exchange data across the network

• Simple Network Management Protocol, which helps organize and manage the NC
on the network

• Dynamic Host Configuration Protocol, which lets the NC boot itself across the
network and log in

• Bootp, which is also required for booting across a network

• Several optional security standards
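
To give the flavor of what the HTTP requirement involves, here is a minimal exchange of
the sort an NC's browser conducts with a Web server (the file name is illustrative):

GET /index.html HTTP/1.0

HTTP/1.0 200 OK
Content-Type: text/html

<HTML>...the page for the browser to render...</HTML>

The browser sends the one-line request; the server answers with a status line, descriptive
headers, and the page itself, which the browser then renders on the NC's screen.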

The hardware requirements of the NC Reference Profile are minimal. They include a
minimum screen resolution at the VGA level (640 by 480 pixels), a pointing device of
some kind, text input capability that could be implemented with handwriting recognition
or as a keyboard, and an audio output. The original profile made no demand for video
playback standards or internal mass storage such as a hard or even floppy disk.

Sun Microsystems introduced the first official NC on October 22, 1996.

The NetPC, on the other hand, represents an effort by industry leaders Intel and
Microsoft (assisted by Compaq Computer Corporation, Dell Computer Corporation, and
Hewlett-Packard Company) to create a specialized business computer that lowers the
overall cost of using and maintaining small computers for a business. The NetPC and
ordinary PC share many of the same features. They differ mostly in that, as fits the name,
the NetPC is designed so that it can be updated through a network connection. The PC
manager in a business can thus control all of the NetPCs in the business from his desk
instead of having to traipse around to every desk to make changes. In addition, the NetPC
eliminates most of the PC manager’s need to tinker with hardware. All hardware features
of the NetPC are controlled through software and can be altered remotely, through the
network connection. The case of the NetPC is, in fact, sealed so that the hardware itself is
never changed.

The classic ISA expansion bus—long the hallmark of a conventional PC—is omitted
entirely from the NetPC. The only means of expansion provided for a NetPC is external,
such as the PC Card and CardBus slots otherwise used by notebook computers.

The design requirements to make a NetPC are set to become an industry standard, but at
the time this was written they were still under development. Intel and Microsoft
introduced a NetPC draft standard for industry comment on March 12, 1997.

Numerical Control Systems

In essence, a numerical control system is a PC with its hard hat on. That is, an NCS is a
PC designed for harsh environments such as factories and machine shops. One favored
term for the construction of the NCS is ruggedized, which essentially means made darned
near indestructible with a thick steel or aluminum case that’s sealed against oil, shavings,
dust, dirt, and probably even the less-than-savory language that permeates the shop floor.

The NCS gains its name from how it is used. As an NCS, a PC’s brain gets put to work
controlling traditional shop tools like lathes and milling machines. An operator programs
the NCS—and thus the shop tool—by punching numbers into the PC’s keypad. The PC
inside crunches the numbers and controls the movement of the operating parts of the tool,
for example adjusting the pitch of a screw cut on a lathe.

Not all NCSes are PCs, at least not the variety of PCs with which this book is concerned.
Some NCSes are based on proprietary computers, more correctly, computerized control
systems, built into the shop tools they operate. But many NCSes are ordinary PCs
reduced to their essence, stripped of the friendly accouterments that make them desktop
companions and reduced to a single, tiny circuit board that fits inside a control box. They
adhere to the PC standard to allow their software designers to use a familiar and
convenient platform for programming. They can use any of the languages and software
tools designed for programming PCs to bring their numerical control system to life.

Personal Digital Assistants

Today’s tiniest personal computers fit in the palm of your hand (provided your hand is
suitably large) or slide into your pocket (provided you dress like Bozo the Clown). To get
the size down, manufacturers limit the programmability and hardware capabilities of
these handheld devices mostly to remembering things that would otherwise elude you.
Because they take on some of the same duties as a good administrative assistant, these
almost-computers are usually called Personal Digital Assistants.

The PDA is a specialized device designed for a limited number of applications. After all,
you have few compelling reasons for building a spreadsheet on a screen three inches
square or writing a novel using a keyboard the size of a commemorative postage stamp.
For the most part, the PDA operates as a scheduling and memory augmentation system. It
keeps track of the details of your life so you can focus on the bigger issues.

The tiny dimensions and specialized applications of PDAs make them unique devices
that have no real need to adhere to the same standards as larger PCs. Being impractically
small for a usable keyboard, many PDAs rely on alternate input strategies such as
pointing with pens and handwriting recognition. Instead of giving you the big picture,
their smaller screens let you see just enough information. Because of these physical
constraints and differences in function, most PDAs have their own hardware and their own
operating systems, entirely unlike those used by PCs.

Although they are designed to work with PCs, they do not run as PCs or use PC software.
In that way, they fall short of PCs in versatility. They adroitly handle their appointed
tasks, but they don’t aspire to do everything; you won’t draw the blueprints of a
locomotive on your PDA. In other words, although they are almost intimately personal
and are truly computers, they are not real PCs.

Laptops and Notebooks

A laptop or notebook PC is a PC but one with a special difference in packaging. The
simple and most misleading definition of a laptop computer is one that you can use on
your lap. Most people use them on airline tray tables or coffee tables, and with unusually
good judgment, the industry has refrained from labeling them as coffee table computers.
Although the terms laptop and notebook are often used interchangeably, this book prefers
the term "notebook" to better reflect the various places you’re apt to use one.

The better definition is that a laptop or notebook is a portable PC that is entirely self-
contained. A single package includes all processing power and memory, the display
system, the keyboard, and a stored energy supply (batteries). All laptop PCs have flat-
panel display systems because they fit, both physically into the case and into the strict
energy budgets dictated by the power available from rechargeable batteries.

Notebook computers almost universally use a clamshell design. The display screen folds
flat atop the keyboard to protect both when traveling. Like a clamshell, the two parts are
hinged together at the rear. This design and weight distinguish the laptop from the
lunchbox PC, an older design that is generally heavier (10-15 pounds) and marked by a
keyboard which detaches from a vertical processing unit that holds the display panel.
Laptop PCs generally weigh from 5 to 10 pounds, most falling almost precisely in the
middle of that range.

PCs weighing less than five pounds are usually classified as sub-notebook PCs. The
initial implementations of sub-notebook PCs achieved their small dimensions by skimping
on the keyboard, typically reducing its size by 20% horizontally and vertically. More
recent sub-notebook machines trim their mass by slimming down—reducing their overall
thickness to about one inch—instead of paring length or width. The motivation for this
change is pragmatic: a case wide enough for a larger screen is also wide enough for a
normal size keyboard. In addition, most sub-notebook machines are favored as remote
entry devices; journalists, for example, prefer them for typing drafts while on the
move, and the larger keyboard makes the machines better suited to this application.

Software

The most important part of a PC isn’t the lump of steel, plastic, and ingenuity sitting on
your lap or desktop but what that lump does for you. The PC is a means to an end. Unless
you’re a collector specializing in objets d’art that depreciate at alarming rates, acquiring
PC hardware is no end in itself. After all, by itself a computer does nothing but take up
space. Plug it in and turn it on, and it will consume electricity—and sit there like an in-
law overstaying his welcome. Like that in-law, a PC without something to make it work
represents capabilities without purpose. You need to motivate it, tell it what to do, and
how to do it. In other words, the reason you buy a PC is not to have the hardware but to
have something to run software.

Hardware is simply a means to an end. To fully understand and appreciate computer
hardware, you need a basic understanding of how it relates to software and how software
relates to it. Computer hardware and software work together to make a complete system
that carries out the tasks you ask of it.

Computer software is more than the box you buy when you want to play Microsoft Office
or Myst. The modern PC runs several programs simultaneously, even when you think
you’re using just one. These programs operate at different levels, each one taking care of
its own specific job, invisibly linking to the others to give you the impression you’re
working with a single, smooth-running machine.

The most important of these programs are the applications, the programs like Myst or
Office that you actually buy and load onto your PC, the ones that boldly emblazon their
names on your screen every time you launch them. In addition, you need utilities to keep
your PC in top running order, protect yourself from disasters, and automate repetitive
chores. An operating system links your applications and utilities together and to the
actual hardware of your PC. At or below the operating system level, you use
programming languages to tell your PC what to do. You can write applications, utilities,
or even your own operating system with the right programming language.

Applications

The programs you run to do actual work on your PC are its applications, short for
application software, programs with a purpose, programs you apply to get something
done. They are the dominant beasts of computing, the top of the food chain. Everything
else in your computer system, hardware and software alike, exists merely to make your
applications run. Your applications determine what you need in your PC simply because
they won’t run—or run well—if you don’t supply them with what they want.

Strange as it may seem, little of the program code in most applications deals with the job
for which you buy it. Most of the program revolves around you, trying to make the
rigorous requirements of the computer hardware more palatable to your human ideas,
aspirations, and whims. The part of the application that serves as the bridge between your
human understanding and the computer’s needs is called the user interface. It can be
anything from a typewritten question mark that demands you type some response to a
Technicolor graphic menu luring your mouse to point and click.

In fact, the most important job of most modern applications is simply translation. The
user interface of the program converts your commands, instructions, and desires into a
form digestible by your computer system. At the same time, the user interface
reorganizes the data you give your PC into the proper form for its storage or takes data
from storage and reorganizes it to suit your requirements, be they stuffing numbers into
spreadsheet cells, filling database fields, or moving bits of sound and images to speakers
and screen.

The user interface acts as an interpreter and translates your actions and responses into
digital code. Although the actual work of making this translation is straightforward, even
mechanistic (after all, the computer that’s doing all the work is a machine), it demands a
great deal of computer power. For example, displaying a compressed bitmap that fills a
quarter of your screen in a multimedia video involves just a few steps. Your PC need
only read a byte from memory, perform a calculation on it, and send it to your monitor.
The trick is in the repetition. While you may press only one key to start the operation,
your PC has to repeat those simple steps over a million times each second. Such a chore
can easily drain the resources available in your PC. That’s why you need a powerful PC
to run today’s video-intensive multimedia applications—and why multimedia didn’t
catch on until microprocessors with Pentium power caught up with the demands of your
software.
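
A rough sketch in C shows the shape of the work. The per-pixel operation here, a single
table lookup, stands in for whatever calculation the real program performs, and the
dimensions are those of MPC 3.0's full-motion playback (my own illustration):

#include <stdio.h>

#define WIDTH  352
#define HEIGHT 240
#define FRAMES 30                /* frames per second */

static unsigned char frame[HEIGHT][WIDTH];
static unsigned char table[256]; /* the "calculation," reduced to a lookup */

int main(void)
{
    long x, y, ops = 0;

    for (y = 0; y < HEIGHT; y++) /* one full pass paints one frame */
        for (x = 0; x < WIDTH; x++) {
            frame[y][x] = table[frame[y][x]];
            ops++;
        }

    printf("%ld operations per frame, %ld per second\n", ops, ops * FRAMES);
    return 0;
}

Each step is trivial, but the totals are not: 84,480 operations per frame, more than two
and a half million every second.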

The actual function of a program, the algorithms that it carries out, is only a small part
of its code, typically a tiny fraction. The hardcore computing work performed by major
applications—the kind of stuff that the first Univac and other big mainframe computers
were created to handle—is amazingly minimal. For example, even a tough statistical
analysis may involve but a few lines of calculations (though repeated again and again).
Most of what your applications do is simply organize and convert static data from one
form to another.

Application software often is divided into several broad classes based on what the
programs are meant to accomplish. These traditional functions include:

• Word processing, the PC equivalent of the typewriter with a memory and an
unlimited correction button

• Spreadsheets, the accountant's ledger made automatic to calculate arrays of
numbers

• Databases, a filing system with instant access and the ability to sort itself
automatically

• Communications, for linking with other computers, exchanging files, and
browsing the Internet

• Drawing and painting, to create images such as blueprints and cartoon cels that
can be filed and edited with electronic ease

• Multimedia software, for displaying images and sound like a movie theater under
the control of an absolute dictator (you)

The lines between many of these applications are blurry. For example, many people find
that spreadsheets serve all their database needs, and most spreadsheets now incorporate
their own graphics for charting results.

Several software publishers completely confound the distinctions by combining most of
these application functions into a single package that includes a database, graphics,
spreadsheet, and word processing. These combinations are termed application suites.
Ideally, they offer several advantages. Because many functions (and particularly the user
interface) are shared between applications, large portions of code need not be duplicated
as would be the case with stand-alone applications. Because the programs work together,
they better know and understand one another’s resource requirements, which means you
should encounter fewer conflicts and memory shortfalls. Because they are all packaged
together, you stand to get a better price from the publisher.

Although application suites have vastly improved since their early years, they sometimes
show their old weaknesses. Even the best sometimes fall short of the ideal, composed of
parts that don't perfectly mesh together, created by different design teams over long
periods. Even the savings can be elusive because you may end up buying several
applications you rarely use among the ones you want. Nevertheless, suites like Microsoft
Office have become popular because they are single-box solutions that fill the needs of
most people, handling more tasks with more depth than they ordinarily need. In other
words, the suite is an easy way to ensure you’ll have the software you need for almost
anything you do.

Utilities

Even when you're working toward a specific goal, you often have to make some side
trips. Although they seem unrelated to where you’re going, they are as much a necessary
part of the journey as any other. You may run a billion-dollar pickle packing empire from
your office, but you might never get your business negotiations done were it not for the
regular housekeeping that keeps the place clean enough for visiting dignitaries to sit
down.

The situation is the same with software. Although you need applications to get your work
done, you need to take care of basic housekeeping functions to keep your system running
in top condition and working most efficiently. The programs that handle these auxiliary
functions are called utility software.

From the name alone you know that utilities do something useful, which in itself sets
them apart from much of the software on today’s market. Of course, the usefulness of any
tool depends on the job you have to do—a pastry chef has little need for the hammer that
so well serves the carpenter or PC technician—and most utilities are crafted for some
similar, specific need. You might want a better desktop than Microsoft chooses to give
you, an expedient way of dispensing with software you no longer want, a means of
converting data files from one format to another, backup and anti-virus protection for
your files, improved memory handling (and more free bytes), or diagnostics for finding
system problems. Each of these needs has spawned its own category of specialized
utilities.

Some utilities, however, are useful to nearly everyone and every PC. No matter what kind
of PC you have or what you do with it, you want to keep your disk organized and running
at top speed, to prevent disasters by detecting disk problems and viruses, and to save your
sanity should you accidentally erase a file. The most important of these functions are
included with today’s PC operating systems, either integrated into the operating system
itself or as individual programs that are part of the operating system package.

DOS Utilities

DOS utilities are those that you run from the DOS command prompt. Because they are
not burdened with mating with the elaborate interfaces of more advanced operating
systems, DOS utilities tend to be lean and mean, a few kilobytes of code to carry out
complex functions. Although they are often not much to look at, they are powerful and
can take direct hardware control of your PC.

Many DOS utilities give you only command-line control. You type in the program name
followed by one or more filenames and options (sometimes called "switches"), typically a
slash or hyphen followed by a more or less mnemonic letter identifying the option. Some
DOS utilities run as true applications with colorful screens and elaborate menus for
control.
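
A few typical command lines show the pattern; each invokes an actual DOS utility with
its options:

DIR C:\DOS /P /W            List a directory a page at a time, in wide format
XCOPY C:\REPORTS A: /S /V   Copy files and subdirectories, verifying each copy
FORMAT A: /Q                Quick-format a floppy disk
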
The minimal set of DOS utilities comprises those that come with the operating system. These are
divided into two types, internal and external.

Internal utilities are part of the DOS command interpreter, the small program that puts
the C> command prompt on your screen. Whenever the prompt appears on the screen, you
can run the internal utilities by typing the appropriate command name. Internal
commands include the most basic functions of your PC: copying files (COPY),
displaying the contents of files (TYPE), erasing files (DEL), setting a path (PATH), and
changing the prompt itself (PROMPT).

External utilities are separate programs, essentially applications in miniature. Some are
entire suites of programs that aspire to be full-fledged applications and are distinguished
only by what they do. Being external to the operating system kernel, most external
utilities load every time you use them, and your system must be able to find the
appropriate file to make the external utility work. In other words, they must be in the
directory you’re currently logged into or in your current search path. Because they are
essentially standalone programs, you can erase or overwrite them whenever you want, for
example to install a new or improved version.

Windows Utilities

Under advanced operating systems like Windows or OS/2, you have no need to
distinguish internal and external utilities. The operating systems are so complex that all
utilities are essentially external. They are separate programs that load when you call upon
them. Although some functions are integrated into the standard command shell of these
operating systems so running them is merely a matter of making a menu choice, they are
nevertheless maintained as separate entities on disk. Others have the feel of classic
external utilities and must be started like ordinary applications, for example by clicking
on the appropriate icon or using the Run option. No matter how they are structured or
how you run them, however, utilities retain the same function: maintaining your PC.

Operating Systems

The basic level of software with which you will work on your PC is the operating system.
It’s what you see when you don’t have an application or utility program running. But an
operating system is much more than what you see on the screen. As the name implies, the
operating system tells your PC how to operate, how to carry on its most basic functions.
Early operating systems were designed simply to control how you read from and wrote to
files on disks and were hence termed disk operating systems (which is why DOS is called
DOS). Today’s operating systems add a wealth of functions for controlling every possible
PC peripheral from keyboard (and mouse) to monitor screen.

The operating system in today's PCs has evolved from simply providing a means of
controlling disk storage into a complex web of interacting programs that perform several
functions. The most important of these is linking the various elements of your computer
system together. These linked elements include your PC hardware, your programs, and
you. In computer language, the operating system is said to provide a common hardware
interface, a common programming interface, and a common user interface.

Of these interfaces only one, the operating system’s user interface, is visible to you. The
user interface is the place at which you interact with your computer at its most basic
level. Sometimes this part of the operating system is called the user shell. In today’s
operating systems, the shell is simply another program, and you can substitute one shell
for another. In effect, the shell is the starting point to get your applications running and
the home base that you return to between applications. Under DOS, the default shell is
COMMAND.COM; under Windows versions through 3.11, the shell is Program Manager
(PROGMAN.EXE).

Behind the shell, the Application Program Interface, or API of the operating system,
gives programmers a uniform set of calls, key words that instruct the operating system to
execute a built-in program routine that carries out some pre-defined function. For
example, a program can call a routine from the operating system that draws a menu box
on the screen. Using the API offers programmers the benefit of having the complicated
aspects of common program procedures already written and ready to go. Programmers
don’t have to waste their time on the minutiae of moving every bit on your monitor
screen or other common operations. The use of a common base of code also eliminates
duplication, which makes today’s overweight applications a bit more svelte. Moreover,
because all applications use basically the same code, they have a consistent look and
work in a consistent manner. This prevents your PC from looking like the accidental
amalgamation of the late night work of thousands of slightly aberrant engineers that it is.
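
A concrete example makes the point. The few lines of C below form a complete Windows
program; the single MessageBox call asks the operating system, through its API, to draw,
display, and manage a dialog box that would otherwise take pages of code to produce:

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    /* One API call; the operating system does all the drawing */
    MessageBox(NULL, "The operating system drew this box.",
               "API demonstration", MB_OK | MB_ICONINFORMATION);
    return 0;
}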

As new technologies, hardware, and features get added to the repertory you expect from
your PC, the operating system maker must expand the API to match. Old operating
systems required complete upgrades or replacements to accommodate the required
changes. Modern operating systems are more modular and accept extensions of their
APIs with relatively simple installations of new code. For example, one of the most
important additions to the collection of APIs used by Windows 95 was a set of
multimedia controls called DirectX. Although now considered part of Windows 95, this
collection of four individual APIs, later expanded to six, didn’t become available until
two months after the initial release of the operating system. The DirectX upgrade
supplemented the multimedia control code of the original release with full 32-bit
versions.

Compared to the API, the hardware interface of an operating system works in the
opposite direction. Instead of commands sent to the operating system to carry out, the
hardware interface comprises a set of commands the operating system sends out to make
the hardware do its tricks. These commands take a generalized form for a particular class
of hardware. That is, instead of being instructions for a particular brand and model of
disk drive, they are commands that all disk drives must understand, for example to read a
particular cluster from the disk. The hardware interface (and the programmer) doesn’t
care about how the disk drive reads the cluster, only that it does—and delivers the results
to the operating system. The hardware engineer can then design the disk drive to do its
work any way he wants as long as it properly carries out the command.
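
In programming terms, the hardware interface amounts to an agreed-upon set of entry
points. The C sketch below is hypothetical (the names are mine, not those of any actual
operating system), but it shows the principle: the operating system calls generic
functions, and each driver supplies hardware-specific versions of them:

/* A hypothetical disk-drive interface: generic calls on one side,
   hardware-specific code on the other                             */
typedef struct {
    int (*read_cluster)(unsigned cluster, void *buffer);
    int (*write_cluster)(unsigned cluster, const void *buffer);
} DiskInterface;

/* One maker's driver; how it moves the heads is its own business */
static int acme_read(unsigned cluster, void *buffer)
{
    /* hardware-specific work goes here */
    return 0;
}

static int acme_write(unsigned cluster, const void *buffer)
{
    /* hardware-specific work goes here */
    return 0;
}

/* The table the driver registers with the operating system at load time */
static const DiskInterface acme_disk = { acme_read, acme_write };

int main(void)
{
    char buffer[512];
    return acme_disk.read_cluster(0, buffer);  /* the OS-side call */
}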

In the real world, the operating system hardware interface doesn’t mark the line between
hardware and software. Rather, it draws the line between the software written as part of
the operating system and that written by (or for) the hardware maker. The hardware
interface ties into an additional layer of software called a driver that’s specifically created
for the particular hardware being controlled. Each different piece of hardware—
sometimes down to the brand and model number—gets its own special driver. Moreover,
drivers themselves may be layered. For example, the most recent versions of Windows
use a mini-driver model in which a class of hardware devices gets one overall driver, and
a specific product gets matched to it by a mini-driver.

Not all operating systems provide a common hardware interface. In particular, DOS
makes few pretenses of linking hardware. It depends on software publishers to write their
own links between their program and specific hardware (or hardware drivers). This
method of direct hardware control is fully described in the "Linking Hardware and
Software" section later in this chapter.

Outside of the shell of the user interface, you see and directly interact with little of an
operating system. The bulk of the operating system program code works invisibly (and
continuously). And that’s the way it’s designed to be.

Programming Languages

A computer program is nothing more than a list of instructions for a microprocessor to
carry out. A microprocessor instruction, in turn, is a specific pattern of bits, a digital
code. Your computer sends the list of instructions making up a program to your
microprocessor one at a time. Upon receiving each instruction, the microprocessor looks
up what function the code says to do, then it carries out the appropriate action.

Every microprocessor understands its own repertoire of instructions just as a dog might
understand a few spoken commands. Where your pooch might sit down and roll over
when you ask it to, your processor can add, subtract, move bit patterns around, and
change them. Every family of microprocessors has a set of instructions that it can
recognize and carry out, the necessary understanding designed into the internal circuitry
of each chip. The entire group of commands that a given microprocessor model
understands and can react to is called that microprocessor’s instruction set or its
command set. Different microprocessor families recognize different instruction sets, so
the commands meant for one chip family would be gibberish to another. The Intel family
of microprocessors understands one command set; the IBM/Motorola PowerPC family of
chips recognizes an entirely different command set.

As a mere pattern of bits, a microprocessor instruction itself is a simple entity, but the
number of potential code patterns allows for incredibly rich command sets. For example,
the Intel family of microprocessors understands more than eight subtraction instructions,
each subtly different from the others.

Some microprocessor instructions require a series of steps to be carried out. These multi-
step commands are sometimes called complex instructions because of their composite
nature. Although the complex instruction looks like a simple command, it may involve
much work. A simple instruction would be something like "pound a nail"; a complex
instruction may be as far ranging as "frame a house." Simple subtraction or addition of
two numbers may actually involve dozens of steps, including the conversion of the
numbers from decimal to binary (1’s and 0’s) notation that the microprocessor
understands.

Broken down to its constituent parts, a computer program is nothing but a list of symbols
that correspond to patterns of bits that signal a microprocessor exactly as letters of the
alphabet represent sounds that you might speak. Of course, by the same back-to-basics
reasoning, an orange is a collection of quarks squatting together with reasonable
stability in the center of your fruit bowl. The metaphor is apt. The primary constituents of
an orange—whether you consider them quarks, atoms, or molecules—are essentially
interchangeable, even indistinguishable. By itself, every one is meaningless. Only when
they are taken together do they make something worthwhile (at least from a human
perspective), the orange. The overall pattern, not the individual pieces, is what’s
important.

Letters and words work the same way. A box full of vowels wouldn’t mean anything to
anyone not engaged in a heated game of Wheel of Fortune. Match the vowels with
consonants and arrange them properly, and you might make words of irreplaceable value
to humanity: the works of Shakespeare, Einstein’s expression of general relativity, or the
formula for Coca-Cola. The meaning is not in the pieces but their patterns.

Everything that the microprocessor does consists of nothing more than a series of these
step-by-step instructions. A computer program is simply a list of microprocessor
instructions. The instructions are simple, but long and complex computer programs are
built from them just as epics and novels are built from the words of the English language.
Although writing in English seems natural, programming feels foreign because it requires
that you think in a different way, in a different language. You even have to think of jobs,
such as adding numbers, typing a letter, or moving a block of graphics, as a long series of
tiny steps. In other words, programming is just a different way of looking at problems
and expressing the process of solving them.
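
For example, a job as simple as averaging two numbers dissolves into single steps when
you write it the way the machine sees it, as in this minimal C++ sketch (the names and
values are arbitrary):

    #include <iostream>

    int main() {
        int a = 4;                     // move the first value into storage
        int b = 6;                     // move the second value into storage
        int sum = a + b;               // one addition instruction
        int average = sum / 2;         // one division instruction
        std::cout << average << '\n';  // hand the result back to the outside world
        return 0;
    }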

These bit patterns used by microprocessors can be represented as binary codes, which
can be translated into numbers in any format—hexadecimal and decimal being most
common. In this form, the entire range of these commands for a microprocessor is called
machine language. Most human beings find words or pseudo-words to be more
comprehensible symbols. The list of word-like symbols that control a microprocessor is
termed assembly language.

You make a computer program by writing a list of commands for a microprocessor to
carry out. At this level, programming is like writing reminder notes for a not-too-bright
assistant—first socks, then shoes.

This step-by-step command system is perfect for control freaks but otherwise is more
than most people want to tangle with. Even simple computer operations require dozens of
microprocessor operations, so writing complete lists of commands in this form is tedious
even for professional programmers. Moreover, machine and assembly
language commands are microprocessor-specific: they work only with the specific chips
that understand them. Worse, because the microprocessor controls all computer
functions, assembly language programs usually work only on a specific hardware
platform.

A computer needs software, that list of instructions called a program, to make it work. But a
program is more than a mere list. It is carefully organized and structured so that the
computer can go through the instruction list step by step, executing each command in
turn. Each builds on the previous instructions to carry out a complex function. The
program is essentially a recipe for a microprocessor.

Microprocessors by themselves only react to patterns of electrical signals. Reduced to its
purest form, the computer program is information that finds its final representation as the
ever-changing pattern of signals applied to the pins of the microprocessor. That electrical
pattern is difficult for most people to think about, so the ideas in the program are
traditionally represented in a form more meaningful to human beings. That representation
of instructions in human-recognizable form is called a programming language.

Programming languages create their own paradox. You write programs on a computer to
make the computer run and do what you want it to do. But without a program the
computer can do nothing. Suddenly you’re confronted with a basic philosophic question:
Which came first, the computer or the program?

The answer really lays an egg. With entirely new systems, the computer and its first
programs are conceived and created at the same time. As a team of hardware engineers
builds the computer hardware, another team of software engineers develops its basic
software. They both work from a set of specifications, the list of commands the computer
can carry out. The software engineers use the commands in the specifications, and the
hardware engineers design the hardware to carry out those commands. With any luck,
when both are finished the hardware and software come together perfectly and work the
first time they try. It’s sort of like digging a tunnel from two directions and hoping the
two crews meet in the middle of the mountain.

Moreover, programs don’t have to be written on the machine for which they are meant.
The machine that a programmer uses to write program code does not need to be able to
actually run the code. It only has to edit and store the program so that it can later be
loaded and run on the target computer. For example, programs for game machines are
often written on more powerful computers called development systems. Using a more
powerful machine for writing gives programmers more speed and versatility.

Similarly, you can create a program that runs under one operating system using a
different operating system. For example, you can write DOS programs under Windows.
Moreover, you can write an operating system program while running under another
operating system. After all, writing the program is little more than using a text editor to
string together commands. The final code is all that matters; how you get there is
irrelevant to the final program. The software writer simply chooses the programming
environment he’s most comfortable in, just as he chooses the language he prefers to use.

Machine Language

The most basic of all coding systems for microprocessor instructions merely documents
the bit pattern of each instruction in a form that human beings can see and appreciate.
Because this form of code is an exact representation of the instructions that the machine
understands, it is termed machine language.

The bit pattern of electrical signals in machine language can be expressed directly as a
series of ones and zeros, such as 0010110. Note that this pattern directly corresponds to a
binary (or base-two) number. As with any binary number, the machine language code of
an instruction can be translated into other numerical systems as well. Most commonly,
machine language instructions are expressed in hexadecimal form (base-16 number
system). For example, the 0010110 subtraction instruction becomes 16(hex).
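
You can check that translation with a few lines of code. A minimal C++ sketch, using
the bit pattern from the text:

    #include <cstdio>

    int main() {
        unsigned code = 0b0010110;   // the instruction as a binary bit pattern
        std::printf("%X\n", code);   // prints 16, the same code in hexadecimal
        return 0;
    }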

Assembly Language

People can and do program in machine language. But the pure numbers assigned to each
instruction require more than a little getting used to. After weeks, months, or years of
machine language programming, you begin to learn which numbers do what. That’s great
if you want to dedicate your life to talking to machines but not so good if you have better
things to do with your time.

For human beings, a better representation of machine language codes involves
mnemonics rather than strictly numerical codes. Descriptive word fragments can be
assigned to each machine language code so that 16(Hex) might translate into SUB (for
subtraction). Assembly language takes this additional step, enabling programmers to
write in more memorable symbols.

Once a program is written in assembly language, it must be converted into the machine
language code understood by the microprocessor. A special program, called an assembler,
handles the necessary conversion. Most assemblers do even more to make the
programmer’s life manageable. For example, they enable blocks of instructions to be
linked together into a block called a subroutine, which can later be called into action by
using its name instead of repeating the same block of instructions again and again.
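
The idea is easier to see in a modern language than in assembly mnemonics. In this
minimal C++ sketch (the names are invented for illustration), a block of instructions
gets a name, and the name then stands in for the block wherever it is needed:

    #include <iostream>

    // The subroutine: a named block of instructions.
    void greet(const char* name) {
        std::cout << "Hello, " << name << "!\n";
    }

    int main() {
        greet("Ada");     // call the block by name...
        greet("Grace");   // ...instead of writing its instructions again
        return 0;
    }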

Most of assembly language involves directly operating the microprocessor using the
mnemonic equivalents of its machine language instructions. Consequently, programmers
must be able to think in the same step-by-step manner as the microprocessor. Every
action that the microprocessor does must be handled in its lowest terms. Assembly
language is consequently known as a low level language because programmers write at
the most basic level.

High Level Languages

Just as an assembler can convert the mnemonics and subroutines of assembly language
into machine language, a computer program can go one step further, translating more
human-like instructions into multiple machine language instructions that would be
needed to carry them out. In effect, each language instruction becomes a subroutine in
itself.

Breaking the one-to-one correspondence between language instruction and machine
language code puts this kind of programming one level of abstraction farther from the
microprocessor. Such languages are called high level languages. Instead of
dealing with each movement of a byte of information, high level languages enable the
programmer to deal with problems as decimal numbers, words, or graphic elements. The
language program takes each of these high level instructions and converts it into a long
series of digital code microprocessor commands in machine language.

High level languages can be classified into two types: interpreted and compiled. Batch
languages are a special kind of interpreted language.

Compiled Languages

Compiled languages execute like programs written in assembly language, but their code
is written in a more human-like form. A program written with a compiled language gets translated
from high level symbols into machine language just once. The resulting machine
language is then stored and called into action each time you run the program. The act of
converting the program from the English-like compiled language into machine language
is called compiling the program; to do this you use a language program called a compiler.
The original, English-like version of the program, the words and symbols actually written
by the programmer, is called the source code. The resulting machine language makes up
the program’s object code.

Compiling a complex program can be a long operation, taking minutes, even hours. Once
the program is compiled, however, it runs quickly because the computer needs only to
run the resulting machine language instructions instead of having to run a program
interpreter at the same time. Most of the time, you run a compiled program directly from
the DOS prompt or by clicking on an icon. The operating system loads and executes the
program without further ado. Examples of compiled languages include C, COBOL,
FORTRAN, and Pascal.
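
The cycle looks like this in practice. The C++ source code below gets compiled just
once (for instance, with a command along the lines of c++ hello.cpp -o hello, though
the exact invocation depends on your compiler), and the resulting object code then
runs directly:

    // hello.cpp: source code, the human-readable half of the bargain
    #include <iostream>

    int main() {
        std::cout << "Hello from compiled code\n";  // becomes machine language just once
        return 0;
    }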

Object-oriented languages are special compiled languages designed so that programmers
can write complex programs as separate modules termed objects. A programmer writes
an object for a specific, common task and gives it a name. To carry out the function
assigned to an object, the programmer needs only to put its name in the program without
reiterating all the object’s code. A program may use the same object in many places and
at many different times. Moreover, a programmer can put a copy of an object into
different programs without the need to rewrite and test the basic code, which speeds up
the creation of complex programs. The newest and most popular programming languages
like C++ are object-oriented.
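
A minimal C++ sketch of the idea (the class and its names are invented for
illustration): the object packages its program code under one name, and a program
reuses that name without restating the code:

    #include <iostream>

    // An object written once for a specific, common task.
    class Counter {
        int count = 0;
    public:
        void add() { ++count; }              // the object's code, written one time
        int total() const { return count; }
    };

    int main() {
        Counter pages;                       // use the object here...
        pages.add();
        pages.add();
        std::cout << pages.total() << '\n';  // prints 2
        return 0;                            // ...and reuse the class anywhere else
    }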

Because of the speed and efficiency of compiled languages, compilers have been written
that convert interpreted language source code into code that can be run like any compiled
program. A BASIC compiler, for example, will produce object code that will run from
the DOS prompt without the need for running the BASIC interpreter. Some languages,
like Microsoft QuickBASIC, incorporate both interpreter and compiler in the same
package.

When PCs were young, getting the best performance required using a low level language.
High level languages typically include error routines and other overhead that bloats the
size of programs and slows their performance. Assembly language enabled programmers
to minimize the number of instructions they needed and to ensure that they were used as
efficiently as possible.

Optimizing compilers do the same thing but better. By adding an extra step (or more) to
the program compiling process, the optimizing compiler checks to ensure that program
instructions are arranged in the most efficient order possible to take advantage of all the
capabilities of a RISC microprocessor. In effect, the optimizing compiler does the work
that would otherwise require the concentration of an assembly language programmer.

In the end, however, the result of using any language is the same. No matter how high the
level of the programming language, no matter what you see on your computer screen, no
matter what you type to make your machine do its daily work, everything the
microprocessor does is reduced to a pattern of digital pulses to which it reacts in knee-
jerk fashion. Not exactly smart on the level of an Albert Einstein or even the trouble-
making kid next door, but the microprocessor is fast, efficient, and useful. It is the
foundation of every PC.

Interpreted Languages

An interpreted language is translated from human to machine form each time it is run by
a program called an interpreter. People who need immediate gratification like interpreted
programs because they can be run immediately, without intervening steps. If the
computer encounters a programming error, it can be fixed, and the program can be tested
again immediately. On the other hand, the computer must make its interpretation each
time the program is run, performing the same act again and again. This repetition wastes
the computer’s time. More importantly, because the computer is doing two things at once,
both executing the program and interpreting it at the same time, it runs more slowly.
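
A toy interpreter, sketched in C++ with an invented two-word command language, shows
why: every time the program runs, each command must be decoded all over again before
it can be carried out:

    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        // The "program" being interpreted: commands still in source form.
        std::vector<std::string> program = { "PRINT hello", "PRINT world" };

        for (const std::string& line : program) {
            // Interpretation happens at run time: first decode the command...
            if (line.rfind("PRINT ", 0) == 0)
                std::cout << line.substr(6) << '\n';   // ...then carry it out
            else
                std::cout << "unknown command\n";
        }
        return 0;
    }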

BASIC, an acronym for Beginner’s All-purpose Symbolic Instruction Code, is the most
familiar programming language. An interpreted language, BASIC was built into every
personal computer IBM made in the first decade of personal computing. Another
interpreted language, Java, promises to change the complexion of the Internet.

Your PC downloads a list of Java commands and converts them into executable form
inside your PC. The interpreted
design of Java helps make it universal. The Java code contains instructions that any PC
can carry out regardless of its operating system. The Java interpreter inside your PC
converts the universal code into the specific machine language instructions your PC and
its operating system understand.

In classic form, using an interpreted language involved two steps. First, you would start
the language interpreter program, which gave you a new environment to work in,
complete with its own system of commands and prompts. Once in that environment, you
executed your program, typically starting it with a "Run" instruction. More modern
interpreted systems like Java hide the actual interpreter from you. The Java program
appears to run automatically by itself, although in reality the interpreter is hidden in your
Internet browser or operating system. Microsoft’s Visual Basic gets its interpreter support
from a run-time module, which must be available to your PC’s operating system for
Visual Basic programs to run.

Batch Languages

A batch language allows you to submit a program directly to your operating system for
execution. That is, the batch language is a set of operating system commands that your
PC executes sequentially as a program. The familiar DOS batch file (AUTOEXEC.BAT
is the best-known example) is exactly such a list of commands stored in a text file. The
resulting batch program works like an interpreted language in that each step gets
evaluated and executed only as it appears in the program.

Applications often include their own batch languages. These, too, are merely lists of
commands for the application to carry out in the order that you’ve listed them to perform
some common, everyday function. Communications programs use this type of
programming to automatically log into the service of your choice and even retrieve files.
Databases use their own sort of programming to automatically generate reports that you
regularly need. The process of transcribing your list of commands is usually termed
scripting. The commands that you can put in your program scripts are sometimes called
the scripting language.

Scripting actually is programming. The only difference is the language. Because you use
commands that are second nature to you (at least after you’ve learned to use the program)
and follow the syntax that you’ve already learned running the program, the process seems
more natural than writing in a programming language.

Linking Hardware and Software

Software is from Venus. Hardware is from Mars—or, to ruin the allusion for the sake of
accuracy, Vulcan. Software is the programmer’s labor of love, an ephemeral spirit that
can only be represented. Hardware is the physical reality, the stuff pounded out in
Vulcan’s forge—enduring, unchanging, and often priced like gold. Bringing the two
together is a challenge that even self-help books would find hard to manage. Yet every
PC not only faces that formidable task but tackles it with aplomb (or so you hope).

Your PC takes ephemeral ideas and gives them the power to control physical things. In
other words, it allows its software to command its hardware. The challenge is making the
link.

In the basic PC, every instruction in a program gets targeted on the microprocessor.
Consequently, the instructions can control only the microprocessor and don’t themselves
reach beyond. The circuitry of the rest of the computer and all of the peripherals
connected to it must get their commands and data relayed by the microprocessor.
Somehow the microprocessor must be able to send signals to these devices.

Device Interfaces

Two methods are commonly used: input/output mapping and memory mapping.
Input/output mapping relies on sending instructions and data through ports. Memory
mapping requires passing data through memory addresses. Ports and addresses are similar
in concept but different in operation.

Input/Output Mapping

A port is an address but not a physical location. The port is a logical construct that
operates as an addressing system separate from the address bus of the microprocessor
even though it uses the same address lines. If you imagine normal memory addresses as a
set of pigeon holes for holding bytes, input/output ports act like a second set of pigeon
holes on the other side of the room. To distinguish which set of holes to use, the
microprocessor controls a flag signal on its bus called memory-I/O. In one condition it
tells the rest of the computer the signals on the address bus indicate a memory location; in
its other state, the signals indicate an input/output port.

The microprocessor’s internal mechanism for sending data to a port also differs from
memory access. One instruction, move, allows the microprocessor to move bytes from
any of its registers to any memory location. Some microprocessor operations can even be
performed in immediate mode, directly on the values stored at memory locations.

Ports, however, use a pair of instructions, In, to read from a port, and Out, to write to a
port. The values read can only be transferred into one specific register of the
microprocessor (called the accumulator), and can only be written from that register. The
accumulator has other functions as well. Immediate operations on values held at port
locations are impossible—which means a value stored in a port cannot be changed in
place by the microprocessor. It must load the port value into the accumulator, alter it,
then reload the new value back into the port.
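
A hedged sketch of that load/alter/reload dance, using the port functions that DOS-era
compilers supplied (inp() and outp() in the conio.h header on Microsoft and Borland
compilers; the exact names vary by compiler). It sets one bit in COM1's modem control
register at port 3FC(hex):

    #include <conio.h>   /* DOS-era header that declares inp() and outp() */

    int main() {
        int value = inp(0x3FC);   /* In: read the port into the accumulator */
        value |= 0x01;            /* alter the value inside the register */
        outp(0x3FC, value);       /* Out: reload the new value into the port */
        return 0;
    }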

Memory Mapping

The essence of memory mapping is sharing. The microprocessor and the hardware device
it controls share access to a specific range of memory addresses. To send data to the
device, your microprocessor simply moves the information into the memory locations
exactly as if it were storing something for later recall. The hardware device can then read
those same locations to obtain the data.

Memory-mapped devices, of course, need direct access to your PC’s memory bus.
Through this connection, they can gain speed and operate as fast as the memory system
and its bus connection allows. In addition, the microprocessor can directly manipulate the
data at the memory location used by the connection, eliminating the multi-step
load/change/reload process required by I/O mapping.

The most familiar memory-mapped device is your PC’s display. Most graphic systems
allow the microprocessor to directly address the frame buffer that holds the image which
appears on your monitor screen. This design allows the video system to operate at the
highest possible speed.
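
In the DOS days a program could demonstrate this sharing directly. The color text-mode
frame buffer sits at the standardized address B8000(hex), and storing a byte there puts a
character on the screen. A minimal sketch, valid only in a flat-addressed DOS
environment (16-bit compilers reach the same spot through a far pointer, and a modern
protected-mode operating system will block the access entirely):

    int main() {
        // The video board's standardized text-mode frame buffer.
        volatile unsigned char* frame = (volatile unsigned char*)0xB8000;
        frame[0] = 'A';    // the character in the screen's top-left corner
        frame[1] = 0x07;   // its attribute byte: light grey on black
        return 0;
    }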

The addresses used for memory mapping must lie outside the range in which the
operating system loads your programs. If a program should transgress on the area used
for the hardware connection, it can inadvertently change the data there—nearly always
with bad results. Moreover, the addresses used by the interface cannot serve any other
function, so they take away from the maximum memory addressable by a PC. Although
such deductions are insignificant with today’s PCs, they were a significant shortcoming
in older systems, many of which were limited to 16 megabytes.

Addressing

To the microprocessor the difference between ports and memory is one of perception:
memory is a direct extension of the chip. Ports are the external world. Writing to I/O
ports is consequently more cumbersome and usually requires more time and
microprocessor cycles.

I/O ports give the microprocessor and computer designer greater flexibility. And they
give you a headache when you want to install multimedia accessories.

Implicit in the concept of addressing, whether memory or port addresses, is proper
delivery. You expect a letter carrier to bring your mail to your address and not deliver it
to someone else’s mailbox. Similarly, PCs and their software assume that deliveries of
data and instructions will always go where they are supposed to. To assure proper
delivery, addresses must be correct and unambiguous. If someone types a wrong digit on
a mailing label, it will likely get lost in the postal system.

In order to use port or memory addresses properly, your software needs to know the
proper addresses used by your peripherals. Many hardware functions have fixed or
standardized addresses that are the same in every PC. For example, the memory
addresses used by video boards are standardized (at least in basic operating modes), and
the ports used by most hard disk drives are similarly standardized. Programmers can
write the addresses used by this fixed-address hardware into their programs and not
worry whether their data will get where it’s going.

The layered BIOS approach helps eliminate the need to write explicit hardware
addresses into programs. Drivers accomplish a similar function. They are written with the
necessary hardware addresses built in.

Resource Allocation

The basic hardware devices were assigned addresses and memory ranges early in the
history of the PC and for compatibility reasons have never changed. These fixed values
include those of serial and parallel ports, keyboards, disk drives, and the frame buffer that
stores the monitor image. Add-in devices and more recent enhancements to the traditional
devices require their own assignments of system resources. Unfortunately, beyond the
original hardware assignments there are no standards for the rest of the resources.
Manufacturers consequently pick values of their own choosing for new products. More
often than you’d like, several products may use the same address values.

Manufacturers attempt to avoid conflicts by allowing a number of options for the
addresses used by their equipment. You select among the choices offered by
manufacturers using switches or jumpers (on old technology boards) or through software
(new technology boards, including those following the old Micro Channel and EISA
standards). The latest innovation, Plug-and-Play, attempts to put the responsibility for
properly allocating system resources in the hands of your PC and its operating system,
although the promise often falls short of reality when you mix new and old products.

With accessories that use traditional resource allocation technology, nothing prevents
your setting the resources used by one board to the same values used by another in your
system. The result is a resource conflict that may prevent both products from working.
Such conflicts are the most frequent cause of problems in PCs, and eliminating them was
the goal of the modern, automatic resource allocation technologies.

BIOS

The Basic Input/Output System or BIOS of a PC has many functions, as discussed in
Chapter 5. One of these is to help match your PC’s hardware to software.

No matter the kind of device interface (I/O mapped or memory mapped) used by a
hardware device, software needs to know the addresses it uses to take control of it. Using
direct hardware control requires that programs or operating systems are written using the
exact values of the port and memory addresses of all the devices installed in the PC. All
PCs running such software that takes direct hardware control consequently must assign
their resources identically if the software is to have any hope of working properly.

PC designers want greater flexibility. They want to be able to assign resources as they see
fit, even to the extent of making some device that might be memory-mapped in one PC
into an I/O-mapped device in another. To avoid permanently setting resource values and
forever locking all computer designs to some arbitrary standard, one that might prove
woefully inadequate for future computer designs, the creators of the first PCs developed
BIOS.

The BIOS is program code that’s permanently recorded (or semi-permanently in the case
of Flash BIOS systems) in special memory chips. The code acts like the hardware
interface of an operating system but at a lower level; it is a hardware interface that’s
independent of the operating system. Programs or operating systems send commands to
the BIOS, and the BIOS sends out the instructions to the hardware using the proper
resource values. If the designer of a PC wants to change the resources used by the system
hardware in a new PC, he only has to change the BIOS to make most software work
properly. The BIOS code of every PC includes several of these built-in routines to handle
accessing floppy disk drives, the keyboard, printers, video, and parallel and serial port
operation.
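
Under DOS, programs reached these built-in routines through software interrupts rather
than through hardware addresses. A hedged sketch using the int86() helper that Borland
and Microsoft compilers provided in the dos.h header: it asks the BIOS video service
(interrupt 10(hex)) to set the standard 80 by 25 color text mode without knowing
anything about the video hardware itself:

    #include <dos.h>    /* DOS-era header that declares int86() and union REGS */

    int main() {
        union REGS regs;
        regs.h.ah = 0x00;            /* BIOS video function 00h: set video mode */
        regs.h.al = 0x03;            /* mode 03h: 80-by-25 color text */
        int86(0x10, &regs, &regs);   /* call the BIOS video service */
        return 0;
    }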

Device Drivers

Device drivers have exactly the same purpose as the hardware interface in the BIOS
code. They link your system to another device by giving your PC a handy set of control
functions. Drivers simply take care of devices not in the repertoire of the BIOS. Rather
than being permanently encoded in the memory of your PC, drivers are software that
must be loaded into your PC’s memory. As with the BIOS links, the device driver
provides a library of routines that your software can call to carry out the complex
functions of the target hardware or software device.

All device drivers have to link with your existing software somehow. The means of
making that link varies with your operating system. As you should expect, the
fundamental device driver architecture is that used by DOS. Drivers that work with DOS
are straightforward, single-minded, and sometimes dangerous. Advanced operating
systems like Windows and OS/2 have built-in hooks for device drivers that make them
more cooperative and easier to manage.

You need to tangle with device drivers because no programmer has an unlimited
imagination. No programmer can possibly conceive of every device that you’d want to
link to your PC. In fact, programmers are hard pressed to figure out everything you’d
want to do with your PC—otherwise they’d write perfect programs that would do exactly
everything that you want.

Thanks to an industry with a heritage of inter-company cooperation only approximated
by a good pie fight, hardware designers tend to go their own directions when creating the
control systems for their products. For example, the command one printer designer might
pick for printing graphics dots may instruct a printer of a different brand to advance to the
next sheet of paper. Confuse the codes, and your office floor will soon look like the
training ground for housebreaking a new puppy.

Just about every class of peripheral has some special function shared with no other
device. Printers need to switch ribbon colors; graphics boards need to put dots on screen
at high resolution; sound boards need to blast fortissimo arpeggios; video capture boards
must grab frames; and mice have to do whatever mice do. Different manufacturers often
have widely different ideas about the best way to handle even the most fundamental
functions. No programmer or even collaborative program can ever hope to know all the
possibilities. You couldn’t fit them all into a BIOS with fewer chips than a Las Vegas
casino or an operating system with code that would fit onto a stack of disks you could
carry. There are just too many.

Drivers let you customize. Instead of packing every control or command you might
potentially need, the driver packages only those that are appropriate for a particular
product. If all you want is to install a sound board, your operating system doesn’t need to
know how to capture a video frame. The driver contains only the commands specific to the
type, brand, and model of the product that you actually connect to your PC.

Device drivers give you a further advantage. You can change them almost as often as you
change your mind. If you discover a bug in one driver—say sending an upper case F to
your printer causes it to form feed through a full ream of paper before coming to a
panting stop—you can slide in an updated driver that fixes the problem. In some cases,
new drivers extend the features of your existing peripherals because programmers didn’t
have enough time or inspiration to add everything to the initial release.

The way you and your system handle drivers depends on your operating system. DOS,
16-bit versions of Windows, Windows 95, and OS/2 each treat drivers somewhat
differently. All start with the model set by DOS, then add their own innovations. All 16-
bit versions of Windows run under DOS and so require that you understand (and use) some
DOS drivers. In addition, these versions of Windows add their own method of installing
drivers as well as several new types of drivers. Windows 95 accommodates both DOS
and 16-bit Windows drivers to assure you of compatibility with your old hardware and
software. In addition, Windows 95 brings its own 32-bit protected-mode drivers and a
dynamic installation scheme. OS/2 also follows the pattern set by DOS but adds its own
variations as well.

Driver software matches the resource needs of your hardware to your software
applications. The match is easy when a product doesn’t allow you to select among
resource values; the proper addresses can be written right into the driver. When you can
make alternate resource allocations, however, the driver software needs to know which
values you’ve chosen. In most cases, you make your choices known to the driver by
adding options to the command that loads the driver (typically in your PC’s
CONFIG.SYS) or through configuration files. Most new add-in devices include an
installation program that indicates the proper options to the driver by adding the values to
the load command or configuration file, though you can almost always alter the options
with a text-based editor.
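
For example, a sound board's installation program might leave a line like the following
in CONFIG.SYS (the driver name and option syntax here are hypothetical, since every
driver defines its own, though the port, interrupt, and DMA values shown were typical
sound board defaults):

    DEVICE=C:\DRIVERS\SOUND.SYS /PORT:220 /IRQ:5 /DMA:1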

This complex setup system gives you several places to make errors that will cause the
add-in device, or your entire PC, to operate erratically or fail altogether. You might make
duplicate resource assignments, mismatch drivers and options, or forget to install the
drivers at all. Because multimedia PCs use so many add-in devices, they are particularly
prone to such problems. Sound boards in particular pose installation problems because
they usually incorporate several separate hardware devices (the sound system proper, a
MIDI interface, and often a CD ROM interface) each of which has its own resource
demands.

Hardware Components

Before you can jump into any discussion about personal computers, you have to speak
the language. You can’t talk intelligently about anything if you don’t know what you’re
talking about. You need to know the basic terms and buzzwords so you don’t fall under
some charlatan’s spell.

Every PC is built from an array of components, each of which serves a specific function
in making the overall machine work. As with the world of physical reality, a PC is built
from fundamental elements in combination. Each of these elements adds a necessary
quality or feature to the final PC. These building blocks are hardware components, built
of electronic circuits and mechanical parts to carry out a defined function. Although all of
the components work together, they are best understood by examining them and their
functions individually. Consequently this book is divided into sections and chapters by
component.

Over the years of the development of the PC, the distinctions between many of these
components have turned out not to be hard and fast. In the early days of PCs, most
manufacturers followed the same basic game plan using the same components in the
same arrangement, but today greater creativity and diversity rule. What once were
separate components have merged together; others have been separated out. Their
functions, however, remain untouched. For example, although modern PCs may lack the
separate timer chips of early machines, the function of the timer has been incorporated
into the support circuitry chipsets.

For purposes of this book and discussion, we’ll divide the PC into several major
component areas, including the system unit, the mass storage system, the display system,
peripherals, and connectivity features. Each of these major divisions can be, in turn,
subdivided into the major components (or component functions) required in a complete
PC.

System Unit

The part of a PC that most people usually think of as the computer—the box that holds all
the essential components except, in the case of desktop machines, the keyboard and
monitor—is the system unit. Sometimes called the CPU—for Central Processing Unit, a
term also applied to microprocessors and mainframe computers—the system unit
is the basic computer component. It houses the main circuitry of the computer and
provides the jacks (or outlets) that link the computer to the rest of its accouterments
including the keyboard, monitor, and peripherals. A notebook computer combines all of
these external components into one but is usually called simply the computer rather than
the system unit or CPU.

One of the primary functions of the system unit is physical. It gives everything in your
computer a place to be. It provides the mechanical mounting for all the internal
components that make up your computer, including the motherboard, disk drives and
expansion boards. The system unit is the case of the computer that you see and
everything that is inside it. The system unit supplies power to operate the PC and its
internal expansion, disk drives, and peripherals.

Motherboard

The centerpiece of the system unit is the motherboard. All the other circuitry of the
system unit is usually part of the motherboard or plugs directly into it.

The electronic components on the motherboard carry out most of the function of the
machine: running programs, making calculations, even arranging the bits that will display
on the screen.

Because the motherboard defines each computer’s functions and capabilities and because
every computer is different, it only stands to reason that every motherboard is different,
too. Not exactly. Many different computers have the same motherboard designs inside.
And oftentimes a single computer model might have any of several different
motherboards depending on when it came down the production line (and what
motherboard the manufacturer got the best deal on).

The motherboard holds the most important elements of your PC, those that define its
function and expandability. These include the microprocessor, BIOS, memory, mass
storage, expansion slots, and ports.

Microprocessor

The most important of the electronic components on the motherboard is the
microprocessor. It does the actual thinking inside the computer. Which microprocessor,
of the dozens currently available, determines not only the processing power of the
computer but also what software language it understands (and thus what programs it can
run).

Many older computers also had a coprocessor that added more performance to the
computer on some complex mathematical problems such as trigonometric functions.
Modern microprocessors generally incorporate all the functions of the coprocessor
internally.

Memory

Just as you need your hands and workbench to hold tools and raw materials to make
things, your PC’s microprocessor needs a place to hold the data it works on and the tools
to do its work. Memory, often described by the more specific term RAM (for Random
Access Memory), serves as the microprocessor’s workbench. Usually located on the
motherboard, memory holds the data your PC’s microprocessor needs for its
calculations. The amount and architecture of the memory of a system determine how it
can be programmed and, to some extent, the level of complexity of the problems that it
can work on. Modern software often requires that you install a specific minimum of
memory—a minimum measured in megabytes—to execute properly. With modern
operating systems, more memory often equates to faster overall system performance.

BIOS

A computer needs a software program to work. It even needs a simple program just to
turn itself on and be able to load and read software. The Basic Input/Output System or
BIOS of a computer is a set of permanently recorded program routines that gives the
system its fundamental operational characteristics, including instructions telling the
computer how to test itself every time it is turned on.

In older PCs, the BIOS determines what the computer can do without loading a program
from disk and how the computer reacts to specific instructions that are part of those disk-
based programs. Newer PCs may contain simpler or more complex BIOSes. A BIOS can
be as simple as a bit of code telling the PC how to load the personality it needs from disk.
Some newer BIOSes also include a system to help the machine determine what options
you have installed and how to get them to work best together.

At one time, the origins of a BIOS determined the basic compatibility of a PC. Newer
machines—those made in the last decade—are generally free from worries about
compatibility. The only compatibility issue remaining is whether a given BIOS supports
the Plug-and-Play standard that allows automatic system configuration (which is a good
thing to look for in a new PC but its absence is not fatal in older systems).

Modern operating systems automatically replace the BIOS code with their own software
as soon as your PC boots up. For the most part, the modern BIOS only boots and tests
your PC, then steps out of the way so that your software can get the real work done.

Support Circuits

The support circuitry on your PC’s motherboard links its microprocessor to the rest of the
PC. A microprocessor, although the essence of a computer, is not a computer in itself (if
it were, it would simply be called a computer). The microprocessor
requires additional circuits to bring it to life: clocks, controllers, and signal converters.
Each of these support circuits has its own way of reacting to programs, and thus helps
determine how the computer works.

In today’s PCs, all of the traditional functions of the support circuitry have been squeezed
into chipsets, which are relatively large integrated circuits. Because most PCs are now
based on a small range of microprocessors, their chipsets distinguish their motherboards
and performance as much as their microprocessors do. In fact, for some folks the choice
of chipset is a major purchasing criterion.

Expansion Slots

Exactly as the name implies, the expansion slots of a PC allow you to expand its
capabilities by sliding in accessory boards, cleverly termed expansion boards. The slots
are spaces inside the system unit of the PC that provide special sockets or connectors to
plug in your expansion boards. The expansion slots of notebook PCs accept modules the
size of credit cards that deliver the same functions as expansion boards.

The standards followed by the expansion slots in a PC determine both what boards you
can plug in and how fast the boards can perform. Over the years, PCs have used several
expansion slot standards. In new PCs, the choices have narrowed to three—and you
might want all of them in your next system.

Mass Storage

To provide your computer with a way to store the huge amounts of programs and data
that it works with every day, your PC uses mass storage devices. In nearly all of today’s
computers, the primary repository for this information is a hard disk drive. Floppy disks
and CD ROM drives give you a way of transferring programs and data to (and from) your
PC. One or more mass storage interfaces link the various storage systems to the rest of
your PC. In modern systems, these interfaces are often part of the circuitry of the
motherboard.

Hard Disk Drives

The basic requirements of any mass storage system are speed, capacity, and low price.
No technology delivers as favorable a combination of these virtues as the hard disk drive,
now a standard part of nearly every PC. The hard disk drive stores all of your programs
and other software so that they can be loaded into your PC’s memory almost without
waiting. In addition, the hard disk also holds all the data you generate with your PC so
that you can recall and reuse it whenever you want. In general, the faster the hard disk
and the more it can hold, the better.

Hard disks also have their weaknesses. Although they are among the most reliable
mechanical devices ever made—some claim to be able to run for 30 years without a
glitch—they lack some security features. The traditional hard disk is forever locked
inside your PC, and that makes the data stored on it vulnerable to any evil that may befall
your computer. A thief or disaster can rob you of your system and your data in a single
stroke. Just as you can’t get the typical hard disk out of your PC to store in a secure place,
the hard disk gives you no easy way to put large blocks of information or programs into
your PC.

CD ROM Drives

Getting data into your PC requires a distribution medium, and when you need to move
megabytes, the medium of choice today is the CD ROM drive. Software publishers have
made the CD ROM their preferred means of getting their products to you. A single CD
that costs about the same as a floppy disk holds hundreds of times more information and
keeps it more secure. CDs are vulnerable to neither random magnetic fields nor casual
software pirates. CD ROM drives are a necessary part of all multimedia PCs, which
means just about any PC you’d want to buy today.

The initials stand for Compact Disc, Read-Only Memory, which describe the technology
at the time it was introduced for PC use. Although today’s CD ROMs are based on the
same silver laser-read discs that spin as CDs in your stereo system, they are no longer
read-only and soon won’t be mere CDs. Affordable drives to write your own CDs with
computer data or stereo music are readily available. Many makers of CD ROM drives are
now shifting to the DVD (Digital Video Disc) standard to give their products additional
storage capacity.

Floppy Disk Drives

Inexpensive, exchangeable, and technically unchallenging, the floppy disk was the first,
and at one time only, mass storage system of many PCs. Based on well-proven
technologies and mass produced by the millions, the floppy disk provided the first PCs
with a place to keep programs and data and, over the years, served well as a distribution
system through which software publishers could make their products available.

In the race with progress, however, the simple technology of the floppy disk has been
hard-pressed to keep pace. The needs of modern programs far exceed what floppy disks
can deliver, and other technologies (like those CD ROM drives) provide less expensive
distribution. New incarnations of floppy disk technology that pack 50 to 100 times more
data per disk hold promise but at the penalty of a price that will make you look more than
twice at other alternatives.

All that said, the floppy disk drive remains a standard part of all but a few highly
specialized PCs, typically those willing to sacrifice everything to save a few ounces (sub-
notebooks) and those that need to operate in smoky, dusty environments that would make
Superman cringe and Wonder Woman cough.

Tape Drives

Tape is for backup, pure and simple. It provides an inexpensive place to put your data just
in case—in case some light-fingered freelancer decides to separate your PC from your
desktop, in case the fire department hoses to death everything in your office that the fire
and smoke failed to destroy, in case you think DEL *.* means "display all file names," in
case that nagging head cold turns out to be a virus that infects your PC and formats your
hard disk, in case your next-door neighbor bewitches your PC and turns it into a golden
chariot pulled by a silver charger that once was your mouse, in case an errant asteroid
ambles through your roof. Having an extra copy of your important data helps you recover
from such disasters and those that are even less likely.

Computer tape drives work on the same principles as the cassette recorder in your stereo.
Some are, in fact, cassette drives. All such drives use tape as an inexpensive medium for
storing data. All modern tape systems put their tape into cartridges that you can lock
safely away or send across the continent. And all are slower than you’d like and less
reliable than you’d suspect. Nevertheless, tape remains the backup medium of choice for
most people who choose to make backups.

Display Systems

Your window into the mind of your PC is its display system, the combination of a
graphics adapter or video board and a monitor or flat-panel display. The display system
gives your PC the means to tell you what it is thinking, to show you your data in the form
that you best understand, be it numbers, words, or pictures.

The two halves of the display system work hand-in-hand. The graphics adapter uses the
digital signals inside your PC to build an electronic map of what the final image should
look like, storing the data for every dot on your monitor in memory. Electronics generate
the image that appears on your monitor screen.

Graphics Adapters

Your PC’s graphics adapter forms the image that you will see on your monitor screen. It
converts digital code into a bit pattern that maps each dot that you’ll see. Because it
makes the actual conversion, the graphics adapter determines the number of colors that
can appear on your monitor as well as the ultimate resolution of the image. In other
words, the graphics adapter sets the limit on the quality of the images your PC can
produce. Your monitor cannot make an image any better than what comes out of the
graphics adapter. The graphics adapter also determines the speed of your PC’s video
system; a faster board will make smoother video displays.

Many PCs now include at least a rudimentary form of graphics adapter in the form of
display electronics on their motherboards; others put the display electronics on an
expansion board.

Monitors

The monitor is the basic display system that’s attached to most PCs. Monitors are
television sets with Michael Milken’s appetite for money. While a 21-inch TV might
cost $300 in your local appliance store, the same size monitor will likely cost $2000 and
still not show the movies you rent.

Although both monitor and television are based on the same aging technology, one that
dates back to the 1920s, they have different aims. The monitor strives for detail; the
television sets its sights on the mass market and makes up for its shortcomings in volume.
In any case, both rely on big glass bottles coated with glowing phosphors that shine
bright enough to light a room.

The quality of the monitor attached to your PC determines the quality of the image you
see. Although it cannot make anything look better than what’s in the signals from your
graphics adapter, it can make them look much worse and limit both the range of colors
and the resolution (or sharpness) of the images.

Flat Panel Display Systems

Big, empty bottles are expensive to make and delicate to move. Except for a select elite,
most engineers have abjured putting fire-filled bottles of any kind in their circuit designs,
the picture tube being the last remnant of this ancient technology. Replacing it are display
systems that use solid-state designs based on liquid crystals. Lightweight, low in power
requirements, and generally shock and shatter resistant, LCD panels have entirely
replaced conventional monitors in notebook PCs and promise to take over desktops in the
coming decade. Currently, they remain expensive (several times the cost of a picture
tube) and more limited in color range, but research into flat panel systems is racing
ahead, while most labs have given up picture tube technology as dead.

Peripherals

The accessories you plug into your computer are usually called peripherals. The name is
a carryover from the early beginnings of computers when the parts of a computer that did
not actually compute were located some distance from the central processing unit, on the
periphery, so to speak.

Today’s PCs have two types of peripherals: internal and external. Internal peripherals
fit inside the system unit and usually connect directly to its expansion bus. External
peripherals are physically separate from the system unit, connect to the port connectors
on the system unit, and often (but not always) require their own source of power.
Although the keyboard and monitor of a PC fit the definition of external peripherals, they
are usually considered to be part of the PC itself and not peripherals.

Input Devices

You communicate with your PC, telling it what to do, using two primary input devices,
the keyboard and the mouse. The keyboard remains the most efficient way to enter text
into applications, faster than even the most advanced voice recognition systems that let
you talk to your PC. The mouse—more correctly termed a pointing device to include
mouse-derived devices such as trackballs and the proprietary devices used by notebook
PCs—relays graphic instructions to your computer, letting you point to your choices or
sketch, draw, and paint. If you want to sketch images directly onto your monitor screen, a
digitizing tablet works more as you would with a pen.

To transfer images to your PC, a scanner copies graphics into bit-images. With the right
software, it becomes an optical character recognition (OCR) system that reads text and
transforms words into electronic form.

A voice recognition or voice input system tries to make sense out of your voice. It uses a
microphone to turn the sound waves of your voice into electrical signals, a processing
board that makes those signals digital, and sophisticated software that attempts to discern
the individual words you’ve spoken from the digital signal.

Printers

The electronic thoughts of a PC are notoriously evanescent. Pull the plug and your work
disappears. Moreover, monitors are frustratingly difficult to pass around and post through
the mail when you want to show off your latest digital art creation. Hard copy, the print-
out on paper, solves the problem. And the printer makes your hard copy.

More than any other aspect of computing, printer technology has transformed the
industry in the last decade. Where printers were once the clamorous offspring of
typewriters, they’ve now entered the space age with jets and lasers. The modern PC
printer is usually a high speed, high quality laser printer that creates four or more pages
per minute at a quality level that rivals commercial printing. Inkjet printers sacrifice the
utmost in speed and quality for lower cost and the capability of printing color without
depleting the green in your budget.

Connectivity

The really useful work that PCs do involves not just you but also the outside world. The
ability of a PC to send and receive data to different devices and computers is called
connectivity. Your PC can link to any of a number of hardware peripherals through its
input/output ports. Better still, through modems, networks, and related technologies it can
connect with nearly any PC in the world.

Input/Output Ports

Your PC links to its peripherals through its input and output ports. Every PC needs some
way of acquiring information and putting it to work. Input/output ports are the primary
route for this information exchange. In the past, the standard equipment of most PCs was
simple and almost pre-ordained—one serial port and one parallel port, typically as part of
the motherboard circuitry. Today, new and wonderful port standards are proliferating
faster than dandelions in a new lawn. Hard-wired serial connections are moving to the
new Universal Serial Bus (USB) while the Infrared Data Association (IrDA) system
provides wireless links. Similarly the simple parallel port has become an external
expansion bus capable of linking dozens of devices to a single jack.

Modems

To connect with other PCs and information sources such as the Internet through the
international telephone system, you need a modem. Essentially a signal converter, the
modem adapts your PC’s data to a form compatible with the telephone system.

In a quest for faster transfers than the ancient technology of the classic telephone circuit
can provide, however, data communications are shifting to newer systems such as digital
telephone services (like ISDN), high speed cable connections, and direct digital links
with satellites. Each of these requires its own variety of connecting device, not, strictly
speaking, a modem but called that for consistency’s sake. Which you need depends on
the speed you want and the connections available to you.

Networks

Any time you link two or more PCs together, you’ve made a network. Keep the machines
all in one place—one home, one business, one site in today’s jargon—and you have a
Local Area Network (LAN). Spread them across the country, world, or universe with
telephone, cable, or satellite links, and you get a Wide Area Network (WAN).

Once you link up to the World Wide Web, your computer is no longer merely the box on
your desk. Your PC becomes part of a single, massive international computer system.
Even so, it retains all the features and abilities you expect from a PC—it only becomes
even more powerful.

Chapter 1: Background
What is a PC? Where did it come from? And why should you care? The answers are
already cloudy and the questions may one day become the mystery of the ages. But the
PC’s origins are not obscure; its definition is malleable but manageable, and your
involvement is, well, personal but promising. This chapter offers you an overview of what
a PC is, how its software and hardware work together, the various hardware components
of a PC, and the technologies that underlie their construction. The goal is perspective, an
overview of how the various parts of a PC work together. The rest of this book fills in the
details.
 Personal Computers
 History
 Characteristics
 Interactivity
 Dedicated Operation
 Programmability
 Connectivity
 Accessibility
 Labeling Standards
 MPC 1.0
 MPC 2.0
 MPC 3.0
 PC 95
 PC 97
 MMX
 Variations on the PC Theme
 Workstation
 Server
 Simply Interactive Personal Computer
 Network Computer
 Numerical Control Systems
 Personal Digital Assistants
 Laptops and Notebooks
 Software
 Applications
 Utilities
 DOS Utilities
 Windows Utilities
 Operating Systems
 Programming Languages
 Machine Language
 Assembly Language
 High Level Languages
 Batch Languages
 Linking Hardware and Software
 Device Interfaces
 Input/Output Mapping
 Memory Mapping
 Addressing
 Resource Allocation
 BIOS
 Device Drivers
 Hardware Components
 System Unit
 Motherboard
 Microprocessor
 Memory
 BIOS
 Support Circuits
 Expansion Slots
 Mass Storage
 Hard Disk Drives
 CD ROM Drives
 Floppy Disk Drives
 Tape Drives
 Display Systems
 Graphics Adapters
 Monitors
 Flat Panel Display Systems
 Peripherals
 Input Devices
 Printers
 Connectivity
 Input/Output Ports
 Modems
 Networks

Background

In carving a timeline out of the events of the first few millennia of civilization, the
historians of an age some time hence will list the events that changed the course of the
world and human development: the Yucatan-bound meteor that blasted dinosaurs to
extinction, the taming of fire, coaxing iron from stone, inventing a machine to press ink
from type to paper, and—most important for this book if not the future—creating
personal computers that fit in your hand and link to every other person and PC in the
world.

The PC and the technologies developed around it truly stand to change the course of
civilization and the world as dramatically as any of humankind’s other great inventions.
In nearly two decades, PCs have already changed the way people work and play. As
time goes by, PCs and their offspring are working their way more deeply into our lives.
Even today they are changing how we see the world, the way we communicate, and even
how we think.

A PC is an extension of human abilities that lets us reach beyond our limitations. In a
word, a PC is a tool. Like any tool, from stone ax to Cuisinart, the PC assists you in
achieving some goal. It makes your work easier, be it keeping your books, organizing
your inventory, tracking your recipes, or honing your wordplay. It makes otherwise
impossible tasks—for example, logging onto the Internet—manageable, often even fun.

Compared to the stone ax, a computer is complicated, though it probably takes no longer
to really master. Compared to the modern tools of everyday life, it is one of the most
expensive, second only to the tools of transportation like cars, trucks, and airplanes.
Compared to the other equipment in office or kitchen, it is the most versatile tool at your
disposal.

At heart, however, a computer is no different than any other tool. To wield it effectively,
you must learn how to use it. You must understand it, its capabilities, and its limitations.
A simple description of the gadgets you can buy today is not enough. Names and
numbers mean nothing in the abstract. After all, an entire class of people memorize
specifications and give the appearance of knowing what they are talking about. As
salespeople, they even try to guide your decisions in buying computer equipment. Getting
the most that modern technology has to offer—that is, both taking care of your work
efficiently and acquiring the best hardware to handle your chores— without spending
more than you need to requires an understanding of the underlying technology.

Personal Computers

Before you can talk about PCs at all, you need to know what you’re talking about. The
question is simple: What is a PC?

PCs have been around long enough that the only folks likely not to recognize one by sight
have vague notions that mankind may someday harness the mystery of fire. But defining
exactly what a personal computer is proves to be one of those tasks that seem amazingly
straightforward until you actually try to do it. When you take up the challenge, the term
transforms itself into a strange cross-mating of amoeba and chameleon (biologists
alone should shudder even at the thought of that one). The meaning of the term changes
with the person you speak to and the circumstances in which you speak.

A personal computer is a computer designed to be used by one person. In this, the age of
the individual, in which everything is becoming personal—even stalwart team sports are
giving way to superstar showcases—a computer designed for use by a single person
seems natural, not even warranting a special description. But the personal computer was
born in an age when computers were so expensive that neither the largest corporations
nor governments could afford to make one the exclusive playground of a single person.
In fact, when the term was first applied, personal computer was almost an oxymoron,
self-contradictory, even simply impossible.

Computers, of course, were machines for computing, calculating numbers. Personal
computers were for people who had to make computations, to quantify quadratic
equations, to factor prime numbers, to solve the structure of transparent aluminum. About
the only people needing such computing power were engineers, scientists, statisticians,
and similar social outcasts. Then a strange thing happened, the digital age. Suddenly
anything worth talking about or looking at turned into numbers—books, music, movies,
telephone calls—just about everything short of a cozy hug from Mom. As the premier
digital device, the PC is the center of it all.

History

The first product to bear the PC designation was the IBM Personal Computer, introduced
in August, 1981. After the introduction of that machine, hobbyists, engineers, and
business people quickly adopted the term to refer to small desktop computers meant to be
used by one person. The initials PC quickly took over as the predominant term, saving
four syllables in every utterance.

In the first years of the PC, the term was nondenominational. It referred to any machine
with certain defining characteristics, which we will discuss shortly. In fact, the term
"personal computer" was in general use before IBM slapped it on its early desktop iron.

Over the years, however, the term "PC" has taken on a more specialized meaning. It
serves to distinguish a particular kind of computer design. Because that design currently
happens to be the dominant one worldwide, many people use the term in its original
sense. That works in most polite conversation, unless you’re in a conversation with
someone whose favorite computer does not follow the dominant design. When you refer
to his hardware as a "PC," the polite part of the conversation is likely to quickly
disappear as he disparages the PC and rues the days when his favorite—Amiga or
Macintosh—once parried for market share.

The specialized definition of a PC means a machine that’s compatible with the first IBM
Personal Computer—that is, a computer that uses a microprocessor that understands the
same programs and languages as the one in the first PC—though it is likely to understand
more than just that and do a heckuva lot better job of it! In fact, what we now call PCs
were once IBM-compatible, because in those primeval years (roughly 1981 to 1987), the
IBM design was the accepted industry standard, which all manufacturers essentially
copied. After 1987, IBM overplayed its role in defining the industry, and lost its position
in the marketplace. The term "IBM-compatible" fell into disuse. PC and, now rarely, PC-
compatible have taken over.

Under this more limited definition, a PC is a machine with a design broadly based on the
first IBM PC. Its microprocessor is made by Intel or, if made by another company, is
designed to emulate an Intel microprocessor. The rest of the hardware follows designs set
by industry standards discussed throughout the rest of this book.

Characteristics

Like so much of the modern world, a modern personal computer is something that’s easy
to recognize and difficult to define. In truth, the personal computer is not defined by its
parts (because the same components are common across the entire range of computers
from pocket calculators to super-computing vector processors) but by how it is used.
Every computer has a central processing unit and memory, and the chips used by PCs are
used by just about every size of machine. Most computers also have the same mass
storage systems and display systems similar to those of the PC. Although you’ll find
some variety in keyboards and connecting ports, at the signal level all have much in
common, including the electrical components used to build them.

In operation, however, you use your PC in an entirely different manner from any other
type of computer. The way you work and the way the PC works with you is the best
definition of the PC. Among the many defining characteristics of the personal computer,
you’ll find that the most important are interactivity, dedicated operation,
programmability, connectivity, and accessibility. Each of these characteristics helps make
the PC into the invaluable tool it has become, distinguishing it from the computers that
came before and an array of other devices with computer-like pretensions.

Interactivity

A PC is interactive. That is, you work with it, and it responds to what you do. You press a
key and it responds, sort of like a rat in a Skinner box pressing a lever for a food pellet.

Although that kind of give and take, stimulus and response relationship may seem natural
to you, it’s not the way computers have always been. For most of the first three decades that
computers were used commercially, nearly all worked on the batch system. You had to
figure out everything you wanted to do ahead of time, punch out a stack of cards, and
dump them on the desk of the computer operator, who, when he finished his crossword
puzzle, would submit your cards to processing. The next day, after a 16-hour overnight
wait, your program, which took only seconds to run, generated results that you would get
on a pile of paper big enough to push a paper company into profitability and wipe out a
small section of the Pacific Northwest. And odds are (if your job was your first stab at
solving a particular problem) the paper would be replete with error messages, basically
guaranteeing you lifelong tenure at your job because that’s how long it would take to get
the program to run without all the irritating error flags.

In other words, unlike the old batch system that made you wait overnight to find out how
stupid you are, today’s interactive PC tells you in a fraction of a second.

Dedicated Operation

A personal computer is dedicated. Like a dog with one master, the PC responds only to
you—if only because it’s sitting on your desk and has only one keyboard. Although the
PC may be connected to other machines and people through modems and telephone wires
or through a network and world-reaching web, the link-up is for your convenience. In
general, you use the remote connection for individual purposes, such as storing your own
files, digging out data, and sending and receiving messages meant for your eyes (and
those of the recipient) alone.

In effect, the PC is an extension of yourself. It increases your potential and capabilities.
Although it may not help you actually think faster, it answers questions for you, stores
information (be it numeric, graphic, or audible), and manages your time and records. By
helping you get organized, it cuts through the normal clutter of the work day and helps
you streamline what you do.

Programmability

A personal computer is versatile. Its design defines only its limitations, not the tasks that
it can perform. Within its capabilities, a PC can do almost anything. Its function is
determined by programs, which are the software applications that you buy and load. The
same hardware can serve as a message center, word processor, file system, image editor,
or presentation display.

Programmability has its limits, of course. Some PCs adroitly edit prime time television
shows, while others are hard pressed to present jerky movie files gleaned off the Internet.
The hardware you attach to your PC determines its ultimate capabilities in such
specialized applications. Underneath that hardware, however, all PCs speak the same
language and understand the same program code. In effect, the peripheral hardware you
install acts much like a program and adapts the PC for your application.

Connectivity

A personal computer is cooperative. That means that it is connected and communicative.
It can work with other computer systems no matter their power, location, or even the
standard they follow. A computer is equally adept at linking to hand held Personal Digital
Assistants and supercomputers. You can exchange information and often even take
control. Through a hard-wired modem or wireless connection, you can tie your personal
computer into the world-wide information infrastructure and probe into other computers
and their databases no matter their location.

PC connectivity takes many forms. The main link-up today is, of course, the Internet and
specifically its World Wide Web. One wire ties your PC and you into the entire digital
world. Connectivity involves a lot more than the Internet—and in many ways, much less,
too. Despite the dominance of the Web, you can still connect to bulletin boards (at least
those that haven’t migrated to the web) or tie into your own network. For example, one
favored variety of connection brings together multiple personal computers to form a
network or workgroup. These machines can cooperate continuously, sharing expensive
resources, such as high speed printers and huge storage systems. Although such networks
are most common in businesses, building one in your home lets you share files, work in
any room, and just be as flexible as you want in what you do.

Accessibility

A personal computer is accessible. That is, you can get at one when you need it. It’s
either on your desk at work, tucked into a corner of your home office, or in your kid’s
bedroom.

The PC is so accessible, even ubiquitous, because it is affordable. Save your lunch money
for a few weeks and you can buy a respectable PC. Solid starter machines cost about the
same as a decent television or mediocre stereo system.

More importantly, when you need to use a PC, you can. Today’s software is nearly all
designed so that you can use it intuitively. Graphic operating systems, online help,
consistent commands, and enlightened programmers have turned the arcana of computer
control into something feared only by those foolish enough not to try their hands on a
keyboard.

A PC brings together those five qualities to help you do your work or extend your
enjoyment. No other modern appliance (for that is what PCs have become) has this
unique mix of characteristics. In fact, you could call your personal computer your
interactive, dedicated, versatile, cooperative, and accessible problem solver, but the
acronym PC is much easier to remember; besides, that’s what everyone else is calling the
contraption anyway.

Labeling Standards

The characteristics that define a PC add up to a lofty goal. In order to accomplish what
you expect of it, a PC has to run PC software. Although that statement sounds
straightforward, it is complicated by the least enduring aspect of progress: as technology
races ahead, investments get left behind. After nearly two decades of development,
today’s PC is a beast quite unlike that original machine that dropped off IBM’s first
assembly line. More importantly, today’s PC software and even our expectations are
leagues apart from those of even a few years ago. A problem arises when you try to bring
two worlds or two dominions of time together.

PCs do remarkably well working back in time with software. New PCs still adroitly run
most old software, some stretching back to the dawn of civilization. This backward
compatibility, although expected, is actually the exception among computers. Most
hardware improvements made in other computer platforms have rendered old software
incompatible, necessitating that you update both hardware and software at the same time.

Going in the other direction, however, using new software with an old PC poses
problems. Over the years, Intel has added new features to its microprocessors, and
peripheral manufacturers have developed system add-ons that are so desirable they are
expected in every PC. Program writers have taken advantage of both the new
microprocessor capabilities and the new peripherals, and the software they write often
won’t run on older PCs that lack the innovations. Even when new software runs on old
PCs, you often won’t want to try. Programmers work with the latest, fastest PCs and craft
their products to work with ample power. Older systems that lack modern performance
may run programs so slowly that most people won’t tolerate the results. In order that you
can be assured that your PC will work with their products and deliver adequate
performance, software developers, both individually and as groups, have developed
standards for PCs.

The standards are actually certifications that PCs meet a minimum level of compliance.
They are not true industry standards, such as those published by major industry bodies
like the ANSI (American National Standards Institute), IEEE (Institute of Electrical and
Electronic Engineers), or ISO (International Standards Organization), nor are they
mandated by any government organization. Rather, they are specifications arbitrarily set
by a single company or industry marketing organization. The only enforcement power is
that of trademark law: the promoter of the standard owns the trademark that appears on
the associated label, and it can designate who can use the trademark.

Two major organizations, the Multimedia PC Marketing Council and Microsoft
Corporation, have promoted such PC certification standards, and they judge whether a
given product meets the standards they set. Products that qualify are permitted to wear a
label designating their compliance. The label tells you that a given PC meets the
organization’s requirements for acceptable operation with its software. A recent addition
has been the MMX certification logo from Intel. Unlike the other certifications, the
MMX logo represents hardware compatibility, and it is displayed on software products.

The Multimedia PC Marketing Council developed the first qualification standard to
assure that you could easily select a PC that would run any multimedia software. They
developed the concept of a Multimedia PC, a computer equipped with all the peripherals
necessary for producing sounds and onscreen images acceptable in multimedia
presentations. Producing the data to supply those peripherals also required a powerful
microprocessor and ample storage, so the Multimedia PC Council also added minimum
requirements for those aspects of the PC into its standards. As the programmers created
multimedia software with greater realism that demanded faster response and more power
from PCs, the council added a second, then a third, higher standard to earn certification.
In 1996, The Multimedia PC Marketing Council was superseded as the custodian of the
MPC specifications by the Multimedia PC Working Group, a committee of the Software
Publishers Association.

The early incarnations of Windows software had long been criticized for draining too
much power from PCs, so during the development of Windows 95, Microsoft developed a
set of minimal requirements that a PC needed to meet to wear the Windows logo. This set
of requirements became known as PC 95. To match the needs of the next generation of
Windows, Microsoft revised these specifications into PC 97.

The Microsoft standards go beyond merely the system hardware, that is, what the system
has, and include the firmware, what the system does. Taken together with the Windows
operating system, these standards define what a PC can do.

Table 1.1. Comparison of Major PC Labeling Standards

Standard              MPC 1.0     MPC 2.0     MPC 3.0     PC 95           PC 97
Effective date        1990        May 1993    June 1995   November 1995   January 1997
Microprocessor type   386SX       486SX       Pentium     No requirement  Pentium
Microprocessor speed  16 MHz      25 MHz      75 MHz      No requirement  120 MHz
Memory                2 MB        4 MB        8 MB        4 MB            16 MB
Floppy disk           1.44 MB     1.44 MB     1.44 MB     Not required    Not required
Hard disk             30 MB       160 MB      540 MB      Not required    No requirement
CD ROM speed          1x          2x          4x          -               -
CD ROM access time    1000 ms     400 ms      250 ms      -               -
Serial port           One RS-232  One RS-232  16550A      One RS-232      USB
Parallel port         One SPP     One SPP     One SPP     One SPP         ECP
Game port             One         One         One or USB  Not required    Not required

Reading from left to right, the table implies a progression, and this short list of standards
demonstrates exactly that. MPC 1.0 was the PC industry’s first attempt to break with old
technology, to leave lesser machines behind so you and your expectations could advance
to a new level unfettered by past limitations. All the ensuing standards lift the level
higher, demanding more from every PC so that you can take advantage of everything that
twenty years of progress in small computers (not to mention a few millennia of
civilization) has to offer.

MPC 1.0

Figure 1.1 The MPC 1.0 logo.

The primary concern of the Multimedia PC Council was, of course, that you be delighted
with the multimedia software that you buy for your PC. As multimedia products became
available in 1990, many people were frustrated that their own computers, some of which
might date back to XT days, were unable to run their new software. The members of the
council realized consumers needed a quick and easy way to determine whether a given new
computer could adequately handle the demands of new multimedia software.
Consequently, the MPC specifications summarized in Table 1.2 are aimed primarily at
performance and ensuring the inclusion of multimedia peripherals.

Table 1.2. Multimedia PC Requirements Under MPC 1.0

Feature Requirement
Microprocessor type 386SX
Microprocessor speed 16 MHz
Required memory 2 MB
Recommended memory No recommendation
Floppy disk capacity 1.44 MB
Hard disk capacity 30 MB
CD ROM transfer rate 150 KB/sec
CD ROM access time 1000 milliseconds
Audio DAC sampling 22.05 KHz, 8-bit, mono
Audio ADC sampling 11.025 KHz, 8-bit, mono
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game

As the lowest industry performance qualification for PCs, the original MPC standard
requires the least from a PC. As originally propounded, the standard asked
only for 286-style microprocessors running at speeds of 12 MHz or higher. The memory
handling shortcomings of 286 and earlier microprocessors, however, quickly became
apparent, so the council raised the requirement to a minimum of a 386SX in the current
MPC 1.0 standard. Because of this chip choice, the minimum speed required is 16 MHz,
the lowest rating Intel gives the 386SX chip. For memory, a system complying with MPC
1.0 must have at least 2.0 megabytes of RAM, enough to get Windows 3.1 off the
ground. The specification also required a full range of mass storage devices, including a
3.5-inch floppy disk drive capable of reading and writing 1.44MB media, a 30MB hard
disk (small even at the time), and a CD ROM drive.

Because at the time MPC 1.0 was created, CD ROMs were relatively new and
unstandardized, the specification defined several drive parameters. The minimum data
transfer rate was set at a sustained 150KB/sec, the data rate of a drive spinning at the
same speed as an audio CD playing stereo music. The standard also required the CD ROM drive to have
an average seek time of 1 second or less, which fairly represented the available
technology of the time (although not, perhaps, user expectations). For software
compatibility, the standard demanded the availability of a Microsoft-compatible
(MSCDEX 2.2) driver that understood advanced audio program interfaces, as well as the
ability to read the fundamental CD standards (mode 1, with mode 2 and form 1 and 2
being optional).
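
Where does that 150KB/sec figure come from? A few lines of illustrative C make the
arithmetic plain. The figures are the published ones: audio CDs deliver 44,100 sixteen-bit
stereo samples each second, while a CD ROM drive spinning at the same speed reads 75
sectors per second with 2,048 data bytes in each. Nothing below is part of the MPC
specification itself; it is only a sketch of the numbers.

    #include <stdio.h>

    int main(void)
    {
        /* Audio CD: 44,100 samples/sec x 2 channels x 2 bytes per sample */
        long audio_rate = 44100L * 2 * 2;   /* 176,400 bytes/sec */

        /* CD ROM Mode 1: 75 sectors/sec x 2,048 data bytes per sector */
        long data_rate = 75L * 2048;        /* 153,600 bytes/sec */

        printf("Audio stream: %ld bytes/sec\n", audio_rate);
        printf("Mode 1 data: %ld bytes/sec (%ld KB/sec)\n",
               data_rate, data_rate / 1024);
        return 0;
    }

That 153,600 bytes per second works out to exactly 150KB/sec when you count 1,024
bytes to the kilobyte.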

Because audio is an implicit part of multimedia, the MPC 1.0 specification required an
extensive list of capabilities in addition to playing standard digital audio disks (those
conforming with the Red Book specification of the CD industry). The standard also
required a front panel volume control for the audio coming off music CDs, a requirement
included in all ensuing MPC specifications. The apparent objective was to allow you to
listen to your CDs while you worked on other programs and data sources.

Another part of MPC 1.0 was the requirement for a sound board that must be able to play
back, record, synthesize, and mix audio signals with well-defined minimum quality
levels. The MPC 1.0 specifications required a digital to analog converter (for playback)
and an analog to digital converter (to sample and record audio). Under MPC 1.0, the
requirements for each differed slightly.

The DAC (playback) circuitry required a minimum of 8-bit linear PCM (pulse code
modulation) sampling with a 16-bit converter recommended. That 8-bit sampling was a
true minimum, the same level used by the telephone company for normal long-distance
calls, hardly up to the level of a good clock radio. Playback sampling rates were set at 11
and 22 KHz with CD-quality 44.1 KHz "desirable." The lower of two standard sampling
rates is a bit better than telephone quality (which is 8 KHz); the higher falls short of FM
radio quality. The analog to digital conversion (recording) sampling rate requirements
included only linear PCM (that is, no compression) with low quality 11 KHz sampling,
both 22 and 44.1 KHz being optional.
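
Because linear PCM applies no compression, the data rate of a recording is simply the
sampling rate times the sample size times the number of channels. A quick sketch in C,
using the figures from the specification, shows what those minimums mean in storage
terms:

    #include <stdio.h>

    int main(void)
    {
        /* Linear PCM rate = samples/sec x bytes per sample x channels */
        long minimum = 11025L * 1 * 1;   /* 8-bit mono at 11.025 KHz */
        long cd      = 44100L * 2 * 2;   /* 16-bit stereo at 44.1 KHz */

        printf("MPC 1.0 recording minimum: %ld bytes/sec (%ld KB per minute)\n",
               minimum, minimum * 60 / 1024);
        printf("CD-quality (desirable): %ld bytes/sec (about %ld MB per minute)\n",
               cd, cd * 60 / (1024L * 1024L));
        return 0;
    }

A minute of the minimal telephone-grade sound consumes about 646KB; a minute of
CD-quality stereo swallows roughly ten megabytes, a sobering figure beside the 30 MB
hard disk the standard requires.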

In effect, the MPC 1.0 specification froze in place the state of the art in sound boards and
CD ROM players at the time of its creation while allowing the broadest possible range of
PCs to bear the MPC moniker. The only machines it outlawed outright were those with
technology so old no reputable computer company was willing to market them at any
price. After all, the council was composed of people trying to sell PCs as well as
multimedia software, and they wanted as many of their machines as possible to qualify
for certification.

Far from perfect, far from pure, MPC 1.0 did draw an important line, one that showed
backward compatibility has its limits. In effect, it said, "Progress has come, so let’s raise
our expectations." In hindsight, the initial standard didn't raise expectations or the MPC
requirements high enough, but for the first time it freed software publishers from the need
to make their products backward-compatible with every cobwebbed computer made since
time began.

MPC 2.0

Figure 1.2 The MPC 2.0 logo.

As multimedia software became more demanding, the MPC 1.0 standard proved too low
to guarantee good response and acceptable performance. Consequently, in May, 1993, the
Multimedia PC Council published a new standard, MPC 2.0, for more advanced systems.
As with the previous specification, MPC 2.0 was held back by practical considerations:
keeping hardware affordable for you and profitable for the manufacturer. Although it set
a viable minimum standard, something for programmers to design down to, it did not
represent a demanding, or even desirable, level of performance for multimedia systems.
Table 1.3 summarizes the basic requirements of MPC 2.0.

Table 1.3. Multimedia PC Requirements Under MPC 2.0.

Feature Requirement
Microprocessor type 486SX
Microprocessor speed 25 MHz
Required memory 4 MB
Recommended memory 8 MB
Floppy disk capacity 1.44 MB
Hard disk capacity 160 MB
CD ROM transfer rate 300 KB/sec
CD ROM access time 400 milliseconds
Audio DAC sampling 44.1 KHz, 16-bit, stereo
Audio ADC sampling 44.1 KHz, 16-bit, stereo
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game

The most important aspect of MPC 2.0 is that it raised the performance level required by
a PC in nearly every hardware category, reflecting the vast improvement in technology in
the slightly more than two years since the release of MPC 1.0. It required more than
double the microprocessor power, with a 486SX chip running at 25 MHz being the
minimal choice. More importantly, MPC 2.0 demanded 4 megabytes of RAM, with an
additional 4 megabytes recommended.

While MPC 2.0 still required a 1.44 MB floppy so that multimedia software vendors need
only worry about one distribution disk format, it pushed its hard disk capacity
recommendation up to 160 MB. This huge, factor-of-five expansion reflects both the
free-for-all plummet in disk prices and the blimp-like expansion of multimedia
software.

CD ROM requirements were toughened two ways by MPC 2.0. The standard demands a
much faster access time, 400 milliseconds versus one full second for MPC 1.0, and it
requires double-speed operation (a data transfer rate of 300 KB/sec). Although triple- and
quadruple-speed CD ROM drives were already becoming available when MPC 2.0 was
adopted, most multimedia software of the time gained little from them, so the double-
speed requirement was the most cost-effective for existing applications.

Under MPC 2.0, the CD ROM drive must be able to play back commercially recorded
music CDs and decode their track identifications (using data embedded in subchannel Q).
In addition, the specification required that the drive handle extended architecture CD
ROMs (and recommends the extended architecture capabilities include audio) and be
capable of handling PhotoCDs and other disks written in multiple sessions.

The primary change in the requirement for analog to digital and digital to analog
converters was that MPC 2.0 made CD quality mandatory across the board. Sound boards under
MPC 2.0 must allow both recording and playback at full 44.1 KHz sampling in stereo
with a 16-bit depth. Lower rate sampling (11.025 and 22.05 KHz) must also be available.
MPC 2.0 also required an integral synthesizer that can produce multiple voices and play
up to six melody notes and two percussion notes at the same time.

In addition, the sound system in an MPC 2.0 machine must be able to mix at least three
sound sources (four are recommended) and deliver them to a standard stereophonic
output on the rear panel, which you can plug into your stereo system or active
loudspeakers. The three sources for the mixer included Compact Disc audio from the CD
ROM drive, the music synthesizer, and a wavetable synthesizer or other digital to analog
converter. An auxiliary input was also recommended. Each input must have an 8-step
volume control.

An MPC 2.0 system must have at least a VGA display system (video board and monitor)
with 640 pixel by 480 pixel resolution in graphics mode and the capability to display
65,536 colors (16-bit color). The standard recommended that the video system be capable
of playing back quarter-screen (that is, 320 pixel by 200 pixel) video images at 15 frames
per second.
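
A little arithmetic shows why even this modest recommendation strains the rest of the
system. Raw, uncompressed quarter-screen video at the recommended frame rate generates
data far faster than the double-speed CD ROM drive required by MPC 2.0 can supply it.
The following sketch (a sketch of mine, not anything in the standard) assumes two-byte,
16-bit pixels:

    #include <stdio.h>

    int main(void)
    {
        /* 320 x 200 pixels x 2 bytes per pixel x 15 frames per second */
        long raw = 320L * 200 * 2 * 15;

        printf("Raw quarter-screen video: %ld bytes/sec (%ld KB/sec)\n",
               raw, raw / 1024);
        printf("Double-speed CD ROM delivers: 300 KB/sec\n");
        return 0;
    }

The raw stream runs about 1,875KB/sec, better than six times what the drive delivers,
which is why video on CD invariably arrives compressed.
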
Port requirements under MPC 2.0 matched the earlier standard: parallel, serial, game
(joystick), and MIDI. Both a 101-key keyboard (or its equivalent) and a two-button
mouse were also mandatory.

MPC 3.0

Figure 1.3 The MPC 3.0 logo.

As the demands of multimedia increased, the Multimedia PC Marketing Council once
again raised the hurdle in June, 1995, when it adopted MPC 3.0. The new standard
pushed up hardware requirements in nearly every area with the goal of achieving full-
screen, full-motion video with CD-quality sound. In addition, the council shifted its
emphasis from specific hardware requirements to performance requirements measured by
a test suite developed by the council. Table 1.4 lists many of the basic requirements of
MPC 3.0.

Table 1.4. Multimedia PC Requirements Under MPC 3.0.

Feature Requirement
Microprocessor type Pentium or equivalent
Microprocessor speed 75 MHz
Required memory 8 MB
Minimum memory bandwidth 100 MB/sec
Floppy disk capacity 1.44 MB, optional in notebook PCs
Hard disk capacity 540 MB
Hard disk access time < 12 ms
Hard disk throughput > 3 MB/sec
CD ROM transfer rate 550 KB/sec
CD ROM access time 250 milliseconds
Audio DAC sampling 44.1 KHz, 16-bit, stereo
Audio ADC sampling 44.1 KHz, 16-bit, stereo
Graphics interface PCI 2.0 or later
Graphics performance 352x240x15 colors at 30 frames/sec
Keyboard 101-key
Mouse Two-button
Ports Serial, parallel, MIDI, game or USB
Modem V.34 with fax

The MPC 3.0 standard does not require any particular microprocessor. Rather, the
standard notes that a 75 MHz Pentium—or its equivalent from another manufacturer—is
sufficient to achieve the performance level required by the test suite.

For mass storage, MPC 3.0 retains the same floppy disk requirement, although it makes
the inclusion of any floppy drive optional in portable systems. More important are the
hard disk requirements. To give adequate storage, the standard mandates a 540 MB disk
as the minimum, of which 500 MB must be usable capacity. Recognizing the need for
fast storage in video applications, the disk performance requirements are demanding. Not
only does the standard require an average access time of 12 milliseconds or less, it also
asks for a high transfer rate as well. The disk interface must be able to pass 9 MB per
second, while the disk medium itself must sustain reads at 3 MB per second (a buffer or
cache in the drive making the faster interface transfers possible). The
access timings require that any MPC 3.0-compliant disk spin faster than 4000 RPM.
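
The link between spin rate and access time is straightforward: on average, the drive must
wait half a revolution for the right sector to swing under the heads, and that rotational
latency comes straight out of the access time budget. A rough sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        double rpm = 4000.0;                  /* minimum implied by MPC 3.0 */
        double latency = 0.5 * 60000.0 / rpm; /* half a turn, in milliseconds */

        printf("Average rotational latency at %.0f RPM: %.1f ms\n", rpm, latency);
        printf("Left over for seeking in a 12 ms budget: %.1f ms\n", 12.0 - latency);
        return 0;
    }

At 4,000 RPM the platter gives up 7.5 milliseconds to rotation alone, leaving a scant 4.5
milliseconds of the 12 millisecond budget for the heads to find their track.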

CD storage must be compatible with all the prevailing standards, including standard CD
ROM, CD-ROM XA, Photo CD, CD-R (recordable CDs), CD Extra and CD-interactive.
Drives must achieve nearly a 4x transfer rate; the standard requires a 550 KB/sec transfer
rate rather than the 600 KB/sec of a true 4x drive. All multimedia PCs, both desktop and
portable, require CD drives, although the access time requirements differ. Desktop units
must have rated access times better than 250 milliseconds; portable units abide a less
stringent 400 ms specification.

The audio requirements of MPC 3.0 extend not only to sampling and digitization but
through the full system from synthesis to speakers. The digital aspects of the audio
system must handle at least stereophonic CD-quality audio (44.1 KHz sampling with 16-
bit depth) with Yamaha OPL3 synthesis support (although use of the Yamaha chip is not
a requirement). Speaker and amplifier quality is strictly defined. Two-piece speaker
systems must have a range from 120 Hz to 17.5 KHz with at least three watts of power
per channel (at 1% distortion) across the audio spectrum. A subwoofer, which must be
backed by at least 15 watts, extends the range down to 40 Hz or lower.

MPC 3.0 requires a compliant system to deliver full-motion video, that is, 30 frames per
second, for an image that fills about one-third of a standard VGA screen (352 by 288
resolution) with realistic 15-bit color (that’s five bits for each primary color). MPC 3.0
also requires support of a number of video standards including PCI 2.0 or newer for the
hardware connection with the video controller and MPEG expansion (in hardware or
software) for playback.
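
That 15-bit color format packs the three primaries of each pixel into a single 16-bit word,
five bits apiece with one bit to spare. A minimal sketch of the packing (the function name
is mine, purely illustrative):

    #include <stdio.h>

    /* Squeeze 8-bit red, green, and blue values into a 5-5-5 pixel */
    unsigned short pack555(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned short)(((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
    }

    int main(void)
    {
        /* Pure white fills all fifteen bits: 0x7FFF */
        printf("Pure white packs to 0x%04X\n", (unsigned)pack555(255, 255, 255));
        return 0;
    }

With five bits per primary, each channel gets 32 levels, and the three together yield
32,768 distinct colors.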

The basic ports required under previous MPC standards face further refinement under
MPC 3.0. The mandatory serial port must have a 16550A UART, and when you connect
the mouse or other pointing device to an MPC 3.0 system, you must still have at least one
serial port available. The standard allows for a USB connection to the mouse and for the
use of USB instead of a game port (which would otherwise be required). A modem
complying with the V.34 standard and capable of sending and receiving facsimile
transmissions is also required. The standard parallel port and MIDI port are carried over
from previous incarnations of the MPC specifications.

PC 95

Although aimed at setting a minimum compatibility level for machines designed to run
Windows 95, Microsoft officially released its PC 95 standard months after the operating
system became available. The specification was released November 1, 1995. In early
1996, some of its requirements were made more demanding and it was augmented by
several design "recommendations" that would assure more satisfactory operation.

In truth, the primary concern with PC 95 is operability rather than performance. The
standard does not make specific requirements as to the microprocessor, its speed, or the
disk capacity of a PC to earn the "Designed for Windows 95" sticker. Instead it seeks
compliance with specific standards—many of which were originally promulgated by
Microsoft—that link Windows 95 to your hardware.

Implicit in PCs designed for Windows 95 (or any modern operating system, for that
matter) is the need for a 386 or later microprocessor to take advantage of Windows 95’s
advanced operating modes and superior memory addressing. By the time Microsoft
issued PC 95, however, anything less than a 486 was unthinkable in a new PC
and lower speed Pentiums had almost become the choice for entry-level systems. PC 95
does address the need for memory and sets its requirement at Windows 95’s bare bones
minimum, four megabytes. The 1996 revision adds a recommendation of eight megabytes
reserved exclusively for Windows.

PC 95 puts more emphasis on compliance with the Plug-and-Play standard, requiring
BIOS support of the standard. The proper operation of the automatic configuration
features of Windows 95 makes this level of support mandatory. The intention is to make
PCs more accessible to people who do not wish to be bothered by the details of
technology.

The other chief requirements of the PC 95 standard include a minimum range of ports
(one serial and one basic parallel port), a pointing device, and a color display system that
can handle basic graphics, VGA-level resolution in 256 colors. Table 1.5 summarizes the
major requirements of the PC 95 standard.

Table 1.5. Major Requirements of the "Designed for Windows 95" Label

Feature                        Original PC 95  Revised Requirement  Revised Recommendation  Chapter
Microprocessor                 NR              NR                   NR                      3
System memory                  4 MB            8 MB                 8 MB                    4
16-bit I/O decoding            No              Yes                  Yes                     7
Local bus                      No              Yes                  Yes                     7
Sound board                    No              No                   Yes                     18
Parallel port                  SPP             ECP                  ECP                     19
Serial port                    RS-232          16550A UART          16550A UART             21
USB                            No              No                   Yes                     21
Display resolution             640x480x8       800x600x16 and       800x600x16 and          15
                                               1024x768x8           1024x768x8
Display memory                 NR              1 MB                 2 MB                    15
Display connection             ISA             Local bus            Local bus               15
Hard disk drive                NR              Required             Required                10
CD ROM                         NR              NR                   Recommended             12
Plug-and-Play BIOS             Required        Required             Required                5
Software setting of resources  No              Yes                  Yes                     5 and 6

Note: NR indicates the standard makes no specific requirement.

Even at the time of its release, PC 95 was more a description of a new PC than a
prescription dictating its design. Most systems already incorporated all of the requirements
of PC 95 and stood able to wear the Windows 95 compatibility sticker.

PC 97

In anticipation of the successor to Windows 95, Microsoft developed a new, higher
standard in mid-1996. Generally known as PC 97 in the computer industry, its terms set
up the requirements for labeling PCs as designed for the Windows 95 successor, likely
termed Windows 97.

The PC 97 standard is both more far-reaching and more diverse than that of PC 95. PC 97
sets minimum hardware requirements, interface standards, and other required
conformances necessary to give the new operating system full functionality. In addition,
it contemplates the fragmentation of the PC industry into separate and distinct business
and home applications. As a result, the standard is actually three in one. One standard is for the
minimal PC for the new operating system, Basic PC 97; another defines the requirements
of the business computer, Workstation PC 97; and a third describes the home computer,
the Entertainment PC 97. Table 1.6 lists the major requirements of each of these three
facets of PC 97.

Table 1.6. Major Requirements for PC 97 Designations

Feature                Basic PC 97   Workstation PC 97  Entertainment PC 97      Chapter
Microprocessor         Pentium       Pentium            Pentium                  3
Speed                  120 MHz       166 MHz            166 MHz                  3
Memory                 16 MB         32 MB              16 MB                    4
ACPI                   Required      Required           Required                 24
OnNow                  Required      Required           Required                 24
Plug-and-Play          Required      Required           Required                 5
USB                    1 port        1 port             2 ports                  21
IEEE 1394              Recommended   Recommended        Required                 21
ISA bus                Optional      Optional           Optional                 7
Keyboard               Conventional  Conventional       USB or wireless          14
Pointing device        Conventional  Conventional       USB or wireless          14
Wireless interface     Recommended   Recommended        Remote control required  14
Audio                  Recommended   Recommended        Advanced audio           18
Modem or ISDN          Recommended   Recommended        Required                 22
Display resolution     800x600x16    1024x768x16        1024x768x16              15
Display memory         1 MB          2 MB               2 MB                     15
Local bus video        Required      Required           Required                 15
Bus master controller  Required      Required           Required                 7 and 9

The most striking addition is Microsoft’s new demand for performance, setting the
standard microprocessor for an entry-level system—the Basic PC 97—at a 120 MHz
Pentium, which, little more than three years ago, would have been the top of the range. In
addition, all systems require high speed local bus video connections and bus mastering in
their mass storage systems.

With the PC 97 standard, Microsoft has taken the initiative in moving PC expansion
beyond the limitations of the original 1981 PC design. Microsoft has relegated the
conventional PC expansion bus, best known as ISA and resident in nearly all PCs for
more than a decade, to the status of a mere option. Microsoft has put its full support
behind the latest version of PCI, 2.1, as the next expansion standard in PC 97 hardware.

The difference in memory requirements for business and home systems reflects the needs
of their respective applications. A business machine is more likely to run several
simultaneous applications and more likely to mash huge blocks of data, pushing up its
memory needs. On the other hand, the home system demands high speed connections
such as IEEE 1394 (see Chapter 21, "Serial Ports") for making the most of multimedia,
an extra USB port, and mandatory modem and high quality audio.

Windows 95 took so long to boot up that Microsoft apparently feared some people would
switch to another operating system before it finished (or so you might assume by the
company’s insistence on the OnNow standard for quick booting). PC 97 also requires
ACPI for conserving power (see Chapter 24, "Power") and full Plug-and-Play compliance
for setup convenience. PC 97 also includes a wealth of additional requirements for
portable systems.

All systems designed for the next generation of color need the ability to render images in
true-to-life 16-bit color, which allows the simultaneous display of up to 65,536 hues.
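
That 65,536 figure is nothing more mysterious than two raised to the sixteenth power, the
number of distinct values a 16-bit pixel can hold. A few lines of C lay out the progression
across the common pixel depths:

    #include <stdio.h>

    int main(void)
    {
        printf("8-bit pixels: %ld colors\n", 1L << 8);     /* 256 */
        printf("15-bit pixels: %ld colors\n", 1L << 15);   /* 32,768 */
        printf("16-bit pixels: %ld colors\n", 1L << 16);   /* 65,536 */
        return 0;
    }

In the usual 16-bit arrangement, the extra bit beyond five per primary goes to green, the
color to which the eye is most sensitive.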

The advances between PC 95 and PC 97 reflect the fast changes in computer technology.
You simply could not buy a machine with all the capabilities of PC 97 when Microsoft
announced the PC 95 standard, yet PC 97 reflects the minimum you should expect in any
new computer. Even if you don’t target a "Designed for Windows" sticker for your next
PC purchase, you should aim for a system equipped with the features of the PC 97
standard.

MMX

With the introduction of its MMX-enhanced Pentium microprocessors in January, 1997,
Intel added a designation with the MMX logo that appears as a certification on products.
This MMX label, as shown in Figure 1.4, indicates that software uses a set of commands
specially written to take advantage of the new features of the MMX-enhanced
microprocessors offered by Intel as well as makers of Intel-compatible chips, including
AMD and Cyrix. On some operations—notably the manipulation of audio and video data
as might be found in multimedia presentations and many games—this software will see
about a 50% to 60% performance enhancement when run in computers equipped with an
MMX-enhanced microprocessor.

Figure 1.4 The MMX certification logo.

MMX does not indicate the innate power of a microprocessor, PC, or program.
Computers that use MMX technology may deliver any of several levels of performance
based on the speed of their microprocessor, their microprocessor type, and the kind of
software you run. A computer with an MMX microprocessor will run MMX-certified
software faster than a computer with the same type of microprocessor without MMX. The
MMX-enhanced system will show little performance advantage over a system without
MMX on ordinary software. From the other perspective, software without the MMX label
will run at about the same speed in either an MMX-enhanced PC or one without an MMX
processor, providing the two have the same speed rating (in megahertz) and the same
processor type (for example, Pentium).

In other words, the MMX label on a box of software is only half of what you need. To
take advantage of the MMX software label, you need an MMX-enhanced PC. And an
MMX-enhanced PC requires MMX-labeled software to distinguish itself from PCs
without MMX.
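
The kind of work MMX accelerates is easy to picture. Multimedia code spends most of
its time applying one small operation to long runs of narrow data, such as the saturating
mix of two 8-bit sample streams sketched below in ordinary C. A single MMX instruction
(PADDUSB, in this case) performs eight of these byte-wide additions in one step; without
MMX, the processor grinds through them one at a time. The sketch is purely illustrative,
not Intel code:

    #include <stdio.h>

    /* Mix two streams of 8-bit samples with saturation, one byte at a time */
    void mix(unsigned char *a, const unsigned char *b, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            int sum = a[i] + b[i];
            a[i] = (unsigned char)(sum > 255 ? 255 : sum);  /* clamp at the 8-bit ceiling */
        }
    }

    int main(void)
    {
        unsigned char x[4] = { 100, 200, 250, 30 };
        unsigned char y[4] = { 100, 100, 100, 100 };

        mix(x, y, 4);
        printf("%d %d %d %d\n", x[0], x[1], x[2], x[3]);   /* prints 200 255 255 130 */
        return 0;
    }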

Variations on the PC Theme

What about the PC pretenders? True PCs aren’t the only small computers in use today. A
variety of hardware devices share many of the characteristics of the true PC—and
sometimes even the name—but differ in fundamental design or application. For example,
some machines skimp on features to gain some other benefit, such as increased
ruggedness or small size. Others forego the standard microprocessors and operating
systems to gain added speed in specialized applications. Others are simply repackaging
(or renaming) jobs applied to conventional PCs.

In any case, any worthwhile discussion of PCs and their technologies requires that the
various themes and variations be distinguished. The following list includes the most
common terms often used for PCs and PC wannabes:

Workstation

The term "workstation" is often ambiguous because it commonly takes two definitions.
The term derives from the function of the machine. It is the computer at which an office
worker stations himself.

In some circles, the term "workstation" is reserved for a PC that is connected to a network
server. Because the server is often also a PC, the term "PC" doesn’t distinguish the two
machines from one another. Consequently, people often refer to a PC that functions as a
network node as a workstation, and the machine linking the workstations together is the
server. (The term "node" can’t substitute for workstation because devices other than PCs
can also be nodes.)

The other application of the term "workstation" refers to powerful, specialized computers
still meant to be worked upon by a single individual. For instance, a graphic workstation
typically is a powerful computer designed to manipulate technical drawings or video
images at high speed. Although this sort of workstation has all the characteristics of a PC,
engineers distinguish these machines with the workstation term because the machines do
not use the Intel-based microprocessor architecture typical of PCs. Moreover, the people
who sell these more powerful computers want to distinguish their products from run-of-
the-mill machines so they can charge more.

Of course, the term "workstation" also has a definition older than the PC, one that refers
to the physical place at which someone does work. Under such a definition, the
workstation can be a desk, cubicle, or workbench. In the modern office, even this kind of
workstation includes a PC.

Server

Server describes a function rather than a particular PC technology or design. A server is a
computer that provides resources that can be shared by other computers. Those resources
include files such as programs, databases, and libraries; output devices such as printers,
plotters, and film recorders; and communications devices such as modems and Internet
access facilities.

Traditionally a server is a fast, expensive computer. A server does not need to be as
powerful as the PCs it serves, however, particularly when serving smaller networks.
Compared to the work involved in running the graphic interface of a modern operating
system, retrieving files from a disk drive and dispatching them to other PCs is small
potatoes indeed. An ordinary PC often suffices.

For example, if you want to run Windows 95 on a PC as a workstation, you need at least
eight megabytes of RAM to avoid the feeling of working in a thick pot of oatmeal.
Dedicate the same PC as a server of other Windows 95 PCs, and you’ll get good response
with as little as the minimal four megabytes. The difference is all the overhead needed to
run a user interface that the server need not bother with.

On the other hand, the server in a large corporation requires a high level of security and
reliability because multiple workstations and even the entire business may depend on it.
Ideally, such a big-business server displays fault tolerance, the ability to withstand
failure of one or more major systems, such as a disk drive or one of several
microprocessors, and continue uninterrupted operation.

Compared to ordinary PCs, most servers are marked by huge storage capacity. A server
typically must provide sufficient storage space for multiple users—dozens or even
hundreds of them.

Most of the time, the server stands alone, unattended. No one sits at its keyboard and
screen monitoring its operation. It runs on autopilot, a slave to the other systems in the
network. Although it interacts with multiple PCs, it rarely runs programs in its own
memory for a single user. Its software is charged mostly with reading files and
dispatching them to the proper place. In other words, although the server interacts, it’s
not interactive in the same way as an individual PC.

Simply Interactive Personal Computer

In early 1996 Microsoft coined the term SIPC to stand for Simply Interactive Personal
Computer, the software giant’s vision of what the home computer will eventually
become. The name hardly reflects the vision behind the concept. The SIPC is anything
but simple: a complete home entertainment device that will be the centerpiece of any
home electronics system, if not the home entertainment system. The goal of the design
initiative is to empower the PC with the capabilities of all its electronic rivals. It is to be
as adept at video games as any Sega or Nintendo system, as musically astute as a stereo
(and also able to create and play sounds as a synthesizer), and able to play video better
than your VCR. In other words, the SIPC is a home PC with a vengeance. In fact, it’s
what the home PC was supposed to be all along but the hardware (and software) were
never capable of delivering.

Compared to the specification of any home PC, the minimal SIPC is a powerhouse,
starting with a 150 MHz Pentium with 16 MB of RAM and taking off from there. More
striking, it is designed as a sealed box, one that you need never tinker with. Expansion,
setup, and even simple repair are designed to be on the same level as maintaining a
toaster.

Rather than something grand or some new, visionary concept, the SIPC is more a
signpost pointing the way the traditional PC is headed. The PC 97 specification covers
nearly all the requirements of the SIPC so, in effect, the SIPC is here now, lurking on
your desk in the disguise of an ordinary PC.

Network Computer

The opposite direction for the home PC is one stripped of power instead of enhanced.
Instead of being a general purpose machine, this sort of home PC would be particularly
designed for interactive use, with data and software supplied by outside sources. The
primary source is universally conceived as the Internet. Consequently, this sort of
design is often termed an Internet Box.

The same concept underlies the Network Computer, commonly abbreviated as NC. As
with the Internet Box, an NC is a scaled-down PC aimed primarily at making Internet
connections. They allow browsing the World Wide Web, sending and receiving
electronic mail, and running Java-based utilities distributed through the net, but they lack
the extensive data storage abilities of true PCs. Similar in concept and name but
developed with different intentions and by different organizations is the NetPC, a more
conventional PC designed to lower maintenance costs.

The revised home PC concept of the Network Computer (NC rather than NetPC)
envisions a machine that can either have its own screen or work with the monitor that’s
part of your home entertainment system, typically a television set. In contrast, a related
technology often called the Set Top Box was meant to be an Internet link that used your
television as the display. It earned its name from its likely position, sitting on top of your
television set.

Only the names Internet Box and Set Top Box (and even NC) are new. The concept harks
back to the days before PCs. The NC is, in fact, little more than a newer name for a
smart terminal.

A terminal is the start and endpoint of a data stream, hence the name. It features a
keyboard to allow you to type instructions and data, which can then be relayed to a real
computer. It also incorporates a monitor or other display screen to let you see the data the
computer sends back at you.

The classic computer terminal deals strictly with text. A graphic terminal has a display
system capable of generating graphic displays. A printing terminal substitutes a printer
for the screen.

A smart terminal has built-in data processing abilities. It can run programs within its own
confines, programs which are downloaded from a real computer. The limitation of
running only programs originating outside the system distinguishes the smart terminal
from a PC. In general, the smart terminal lacks the ability to store programs or data
amounting to more than the few kilobytes that fit into its memory.

Although smart terminals are denizens of the past, the Internet Box has a promising
future, or, more correctly, a future of promises. Its advocates point out that its storage is
almost limitless with the full reach of the Internet at its disposal, not mere megabytes, not
gigabytes, but terabyte territory and beyond. But downloading that data is like trying to
drain the ocean through a soda straw. Running programs or simply viewing files across a
network is innately slower than loading from a local hard disk. Insert a severe bottleneck
like a modem and local telephone line in the network connection, and you’ll soon
rediscover the entertainment value of watching paint dry and the constellations realign.
Instead of the instant response you get to your keystrokes with a PC, you face long delays
whenever your Internet Box needs to grab data or program code from across the net.
Until satellite and cable modems become the norm (both for you and Internet servers),
slow performance will hinder both the interactivity and the wide acceptance of the
Internet Box.
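
Just how narrow is the soda straw? Assume a V.34 modem running at its full rated
28,800 bits per second, a speed that real telephone lines rarely sustain, and the arithmetic
turns grim:

    #include <stdio.h>

    int main(void)
    {
        double bits = 1024.0 * 1024.0 * 8.0;   /* one megabyte, expressed in bits */
        double bps  = 28800.0;                 /* V.34 at full rated speed */

        printf("One megabyte takes %.0f seconds (about %.0f minutes)\n",
               bits / bps, bits / bps / 60.0);
        return 0;
    }

Nearly five minutes for a single megabyte, and that generously ignores the overhead of
the protocols doing the carrying.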

The principal difference between a true Network Computer and its previous incarnations
is that a consortium of companies, including Apple, IBM, Oracle, Netscape, and Sun
Microsystems, has developed a standard for them. Called the NC Reference Profile, the
standard reads like an Internet’s Greatest Hits of the specifications world. It requires
compliance with the following languages and protocols:

• The Java language, including the Java Application Environment, the Java Virtual
Machine, and Java class libraries

• HTML (HyperText Markup Language), the publishing format used for Web pages

• HTTP, which browser software uses to communicate across the Web (a sketch of the
protocol in action follows this list)

• Three E-mail protocols including Simple Mail Transfer Protocol, Internet
Message Access Protocol Version 4, and Post Office Protocol Version 3

• Four multimedia file formats including AU, GIF, JPEG, and WAV

• Internet Protocol, so that they can connect to any IP-based network including the
World Wide Web

• Transmission Control Protocol, also used on the Internet and other networks

• File Transfer Protocol, used to exchange files across the Internet

• Telnet, a standard that lets terminals access host computers

• Network File System, which allows the NC to have access to files on a host
computer

• User Datagram Protocol, a connectionless transport protocol used by services
such as the Network File System

• Simple Network Management Protocol, which helps organize and manage the NC
on the network

• Dynamic Host Configuration Protocol, which lets the NC boot itself across the
network and log in

• Bootp, which is also required for booting across a network

• Several optional security standards

The hardware requirements of the NC Reference Profile are minimal. They include a
minimum screen resolution at the VGA level (640 by 480 pixels), a pointing device of
some kind, text input capability that could be implemented with handwriting recognition
or as a keyboard, and an audio output. The original profile made no demand for video
playback standards or internal mass storage such as a hard or even floppy disk.

Sun Microsystems introduced the first official NC on October 22, 1996.


The NetPC, on the other hand, represents an effort by industry leaders Intel and
Microsoft (assisted by Compaq Computer Corporation, Dell Computer Corporation, and
Hewlett-Packard Company) to create a specialized business computer that lowers the
overall cost of using and maintaining small computers for a business. The NetPC and
ordinary PC share many of the same features. They differ mostly in that, as fits the name,
the NetPC is designed so that it can be updated through a network connection. The PC
manager in a business can thus control all of the NetPCs in the business from his desk
instead of having to traipse around to every desk to make changes. In addition, the NetPC
eliminates most of the PC manager’s need to tinker with hardware. All hardware features
of the NetPC are controlled through software and can be altered remotely, through the
network connection. The case of the NetPC is, in fact, sealed so that the hardware itself is
never changed.

The classic ISA expansion bus—long the hallmark of a conventional PC—is omitted
entirely from the NetPC. The only means of expansion provided for a NetPC is external,
such as the PC Card and CardBus slots otherwise used by notebook computers.

The design requirements to make a NetPC are set to become an industry standard, but at
the time this was written they were still under development. Intel and Microsoft
introduced a NetPC draft standard for industry comment on March 12, 1997.

Numerical Control Systems

In essence, a numerical control system is a PC with its hard hat on. That is, an NCS is a
PC designed for harsh environments such as factories and machine shops. One favored
term for the construction of the NCS is ruggedized, which essentially means made darned
near indestructible with a thick steel or aluminum case that’s sealed against oil, shavings,
dust, dirt, and probably even the less-than-savory language that permeates the shop floor.

The NCS gains its name from how it is used. As an NCS, a PC’s brain gets put to work
controlling traditional shop tools like lathes and milling machines. An operator programs
the NCS—and thus the shop tool—by punching numbers into the PC’s keypad. The PC
inside crunches the numbers and controls the movement of the operating parts of the tool,
for example adjusting the pitch of a screw cut on a lathe.

Not all NCSes are PCs, at least not the variety of PCs with which this book is concerned.
Some NCSes are based on proprietary computers (more correctly, computerized control
systems) built into the shop tools they operate. But many NCSes are ordinary PCs
reduced to their essence, stripped of the friendly accouterments that make them desktop
companions and pared down to a single, tiny circuit board that fits inside a control box. They
adhere to the PC standard to allow their software designers to use a familiar and
convenient platform for programming. They can use any of the languages and software
tools designed for programming PCs to bring their numerical control system to life.

Personal Digital Assistants

Today's tiniest personal computers fit in the palm of your hand (provided your hand is
suitably large) or slide into your pocket (provided you dress like Bozo the Clown). To get
the size down, manufacturers limit the programmability and hardware capabilities of
these handheld devices mostly to remembering things that would otherwise elude you.
Because they take on some of the same duties as a good administrative assistant, these
almost-computers are usually called Personal Digital Assistants.

The PDA is a specialized device designed for a limited number of applications. After all,
you have few compelling reasons for building a spreadsheet on a screen three inches
square or writing a novel using a keyboard the size of a commemorative postage stamp.
For the most part, the PDA operates as a scheduling and memory augmentation system. It
keeps track of the details of your life so you can focus on the bigger issues.

The tiny dimensions and specialized applications of PDAs make them unique devices
that have no real need to adhere to the same standards as larger PCs. Being impractically
small for a usable keyboard, many PDAs rely on alternate input strategies such as
pointing with pens and handwriting recognition. Instead of giving you the big picture,
their smaller screens let you see just enough information. Because of these physical
constraints and differences in function, most PDAs have their own hardware and their own
operating systems, entirely unlike those used by PCs.

Although they are designed to work with PCs, they do not run as PCs or use PC software.
In that way, they fall short of PCs in versatility. They adroitly handle their appointed
tasks, but they don’t aspire to do everything; you won’t draw the blueprints of a
locomotive on your PDA. In other words, although they are almost intimately personal
and are truly computers, they are not real PCs.

Laptops and Notebooks

A laptop or notebook PC is a PC but one with a special difference in packaging. The
simple and most misleading definition of a laptop computer is one that you can use on
your lap. Most people use them on airline tray tables or coffee tables, and with unusually
good judgment, the industry has refrained from labeling them as coffee table computers.
Although the terms laptop and notebook are often used interchangeably, this book prefers
the term "notebook" to better reflect the various places you're apt to use one.

The better definition is that a laptop or notebook is a portable PC that is entirely self-
contained. A single package includes all processing power and memory, the display
system, the keyboard, and a stored energy supply (batteries). All laptop PCs have flat-
panel display systems because they fit, both physically into the case and into the strict
energy budgets dictated by the power available from rechargeable batteries.

Notebook computers almost universally use a clamshell design. The display screen folds
flat atop the keyboard to protect both when traveling. Like a clamshell, the two parts are
hinged together at the rear. This design and weight distinguish the laptop from the
lunchbox PC, an older design that is generally heavier (10-15 pounds) and marked by a
keyboard which detaches from a vertical processing unit that holds the display panel.
Laptop PCs generally weigh from 5 to 10 pounds, most falling almost precisely in the
middle of that range.

PCs weighing less than five pounds are usually classified as sub-notebook PCs. The
initial implementations of sub-notebook PCs achieved their small dimensions by skimping
on the keyboard, typically reducing its size by 20 percent horizontally and vertically. More
recent sub-notebook machines trim their mass by slimming down—reducing their overall
thickness to about one inch—instead of paring length or width. The motivation for this
change is pragmatic: today's larger screens leave space enough for a normal size keyboard
anyway. In addition, most sub-notebook machines are favored as remote entry devices;
journalists, for example, prefer them for typing in drafts while on the move, and the
larger keyboard makes the machines more suited to this application.

Software

The most important part of a PC isn’t the lump of steel, plastic, and ingenuity sitting on
your lap or desktop but what that lump does for you. The PC is a means to an end. Unless
you’re a collector specializing in objets d’art that depreciate at alarming rates, acquiring
PC hardware is no end in itself. After all, by itself a computer does nothing but take up
space. Plug it in and turn it on, and it will consume electricity—and sit there like an in-
law overstaying his welcome. Like that in-law, a PC without something to make it work
represents capabilities without purpose. You need to motivate it, tell it what to do, and
how to do it. In other words, the reason you buy a PC is not to have the hardware but to
run software.

Hardware is simply a means to an end. To fully understand and appreciate computer
hardware, you need a basic understanding of how it relates to software and how software
relates to it. Computer hardware and software work together to make a complete system
that carries out the tasks you ask of it.

Computer software is more than the box you buy when you want to run Microsoft Office
or play Myst. The modern PC runs several programs simultaneously, even when you think
you’re using just one. These programs operate at different levels, each one taking care of
its own specific job, invisibly linking to the others to give you the impression you’re
working with a single, smooth-running machine.

The most important of these programs are the applications, the programs like Myst or
Office that you actually buy and load onto your PC, the ones that boldly emblazon their
names on your screen every time you launch them. In addition, you need utilities to keep
your PC in top running order, protect yourself from disasters, and automate repetitive
chores. An operating system links your applications and utilities together and to the
actual hardware of your PC. At or below the operating system level, you use
programming languages to tell your PC what to do. You can write applications, utilities,
or even your own operating system with the right programming language.

Applications

The programs you run to do actual work on your PC are its applications, short for
application software, programs with a purpose, programs you apply to get something
done. They are the dominant beasts of computing, the top of the food chain. Everything
else in your computer system, hardware and software alike, exists merely to make your
applications run. Your applications determine what you need in your PC simply because
they won’t run—or run well—if you don’t supply them with what they want.

Strange as it may seem, little of the program code in most applications deals with the job
for which you buy it. Most of the program revolves around you, trying to make the
rigorous requirements of the computer hardware more palatable to your human ideas,
aspirations, and whims. The part of the application that serves as the bridge between your
human understanding and the computer’s needs is called the user interface. It can be
anything from a typewritten question mark that demands you type some response to a
Technicolor graphic menu luring your mouse to point and click.

In fact, the most important job of most modern applications is simply translation. The
user interface of the program converts your commands, instructions, and desires into a
form digestible by your computer system. At the same time, the user interface
reorganizes the data you give your PC into the proper form for its storage or takes data
from storage and reorganizes it to suit your requirements, be they stuffing numbers into
spreadsheet cells, filling database fields, or moving bits of sound and images to speakers
and screen.

The user interface acts as an interpreter and translates your actions and responses into
digital code. Although the actual work of making this translation is straightforward, even
mechanistic (after all, the computer that’s doing all the work is a machine), it demands a
great deal of computer power. For example, displaying a compressed bitmap that fills a
quarter of your screen in a multimedia video involves just a few steps. Your PC need
only read a byte from memory, perform a calculation on it, and send it to your monitor.
The trick is in the repetition. While you may press only one key to start the operation,
your PC has to repeat those simple steps over a million times each second. Such a chore
can easily drain the resources available in your PC. That’s why you need a powerful PC
to run today’s video-intensive multimedia applications—and why multimedia didn’t
catch on until microprocessors with Pentium power caught up with the demands of your
software.
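
A sketch in C of that inner loop makes the scale plain. The function and buffer names
here are invented for illustration, but the work per byte really is this simple, and it really
does repeat for every byte of every frame:

    #define FRAME_BYTES 307200L    /* roughly a 640 x 480 block of bytes */

    /* Decompress-and-display reduced to its essence: read a byte,
       transform it, hand it to the display. The cost is repetition. */
    void show_frame(unsigned char *compressed, unsigned char *screen)
    {
        long i;

        for (i = 0; i < FRAME_BYTES; i++)
            screen[i] = compressed[i] ^ 0x55;    /* stand-in calculation */
    }

One key press, one call to a routine like this, and a million memory reads, calculations,
and writes follow before the next frame is due.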

The actual function of the program, the algorithms that it carries out, is only a small part
of its code, typically a tiny fraction. The hardcore computing work performed by major
applications—the kind of stuff that the first Univac and other big mainframe computers
were created to handle—is amazingly minimal. For example, even a tough statistical
analysis may involve but a few lines of calculations (though repeated again and again).
Most of what your applications do is simply organize and convert static data from one
form to another.

Application software often is divided into several broad classes based on what the
programs are meant to accomplish. These traditional functions include:

• Word processing, the PC equivalent of the typewriter with a memory and an
unlimited correction button

• Spreadsheets, the accountant's ledger made automatic to calculate arrays of
numbers

• Databases, a filing system with instant access and the ability to sort itself
automatically

• Communications, for linking with other computers, exchanging files, and
browsing the Internet

• Drawing and painting, to create images such as blueprints and cartoon cels that
can be filed and edited with electronic ease

• Multimedia software for displaying images and sound like a movie theater under
the control of an absolute dictator (you)

The lines between many of these applications are blurry. For example, many people find
that spreadsheets serve all their database needs, and most spreadsheets now incorporate
their own graphics for charting results.

Several software publishers completely confound the distinctions by combining most of
these application functions into a single package that includes a database, graphics,
spreadsheet, and word processing. These combinations are termed application suites.
Ideally, they offer several advantages. Because many functions (and particularly the user
interface) are shared between applications, large portions of code need not be duplicated
as would be the case with stand-alone applications. Because the programs work together,
they better know and understand one another’s resource requirements, which means you
should encounter fewer conflicts and memory shortfalls. Because they are all packaged
together, you stand to get a better price from the publisher.

Although application suites have vastly improved since their early years, they sometimes
show their old weaknesses. Even the best sometimes fall short of the ideal, composed of
parts that don't perfectly mesh together, created by different design teams over long
periods. Even the savings can be elusive because you may end up buying several
applications you rarely use among the ones you want. Nevertheless, suites like Microsoft
Office have become popular because they are single-box solutions that fill the needs of
most people, handling more tasks with more depth than they ordinarily need. In other
words, the suite is an easy way to ensure you’ll have the software you need for almost
anything you do.

Utilities

Even when you’re working toward a specific goal, you often have to make some side
trips. Although they seem unrelated to where you’re going, they are as much a necessary
part of the journey as any other. You may run a billion-dollar pickle packing empire from
your office, but you might never get your business negotiations done were it not for the
regular housekeeping that keeps the place clean enough for visiting dignitaries to sit
down.

The situation is the same with software. Although you need applications to get your work
done, you need to take care of basic housekeeping functions to keep your system running
in top condition and working most efficiently. The programs that handle these auxiliary
functions are called utility software.

From the name alone you know that utilities do something useful, which in itself sets
them apart from much of the software on today’s market. Of course, the usefulness of any
tool depends on the job you have to do—a pastry chef has little need for the hammer that
so well serves the carpenter or PC technician—and most utilities are crafted for some
similar, specific need. You might want a better desktop than Microsoft chooses to give
you, an expedient way of dispensing with software you no longer want, a means of
converting data files from one format to another, backup and anti-virus protection for
your files, improved memory handling (and more free bytes), or diagnostics for finding
system problems. Each of these needs has spawned its own category of specialized
utilities.

Some utilities, however, are useful to nearly everyone and every PC. No matter what kind
of PC you have or what you do with it, you want to keep your disk organized and running
at top speed, to prevent disasters by detecting disk problems and viruses, and to save your
sanity should you accidentally erase a file. The most important of these functions are
included with today’s PC operating systems, either integrated into the operating system
itself or as individual programs that are part of the operating system package.

DOS Utilities

DOS utilities are those that you run from the DOS command prompt. Because they are
not burdened with mating with the elaborate interfaces of more advanced operating
systems, DOS utilities tend to be lean and mean, a few kilobytes of code to carry out
complex functions. Although they are often not much to look at, they are powerful and
can take direct hardware control of your PC.

Many DOS utilities give you only command-line control. You type in the program name
followed by one or more filenames and options (sometimes called "switches"), typically a
slash or hyphen followed by a more or less mnemonic letter identifying the option. Some
DOS utilities run as true applications with colorful screens and elaborate menus for
control.

The minimal set of DOS utilities are those that come with the operating system. These are
divided into two types, internal and external.

Internal utilities are part of the DOS command interpreter, the small program that puts
the C> command prompt on your screen. Whenever the prompt appears on the screen, you
can run the internal utilities by typing the appropriate command name. Internal
commands include the most basic functions of your PC: copying files (COPY),
displaying the contents of files (TYPE), erasing files (DEL), setting a path (PATH), and
changing the prompt itself (PROMPT).
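
For illustration, a short session using nothing but internal commands might look like the
following sketch. The filenames are invented, and the /P switch simply tells DIR to pause
after each screenful:

    C:\>DIR /P
    C:\>TYPE README.TXT
    C:\>COPY README.TXT A:\README.BAK
    C:\>DEL README.TXT

No program file loads from disk to run any of these; the command interpreter itself does
the work.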

External utilities are separate programs, essentially applications in miniature. Some are
entire suites of programs that aspire to be full-fledged applications and are distinguished
only by what they do. Being external from the operating system kernel, most external
utilities load every time you use them, and your system must be able to find the
appropriate file to make the external utility work. In other words, they must be in the
directory you’re currently logged into or in your current search path. Because they are
essentially standalone programs, you can erase or overwrite them whenever you want, for
example to install a new or improved version.

Windows Utilities

Under advanced operating systems like Windows or OS/2, you have no need to
distinguish internal and external utilities. The operating systems are so complex that all
utilities are essentially external. They are separate programs that load when you call upon
them. Although some functions are integrated into the standard command shell of these
operating systems so running them is merely a matter of making a menu choice, they are
nevertheless maintained as separate entities on disk. Others have the feel of classic
external utilities and must be started like ordinary applications, for example by clicking
on the appropriate icon or using the Run option. No matter how they are structured or
how you run them, however, utilities retain the same function: maintaining your PC.

Operating Systems

The basic level of software with which you will work on your PC is the operating system.
It’s what you see when you don’t have an application or utility program running. But an
operating system is much more than what you see on the screen. As the name implies, the
operating system tells your PC how to operate, how to carry on its most basic functions.
Early operating systems were designed simply to control how you read from and wrote to
files on disks and were hence termed disk operating systems (which is why DOS is called
DOS). Today’s operating systems add a wealth of functions for controlling every possible
PC peripheral from keyboard (and mouse) to monitor screen.

The operating system in today’s PCs has evolved from simply providing a means of
controlling disk storage into a complex web of interacting programs that perform several
functions. The most important of these is linking the various elements of your computer
system together. These linked elements include your PC hardware, your programs, and
you. In computer language, the operating system is said to provide a common hardware
interface, a common programming interface, and a common user interface.

Of these interfaces only one, the operating system’s user interface, is visible to you. The
user interface is the place at which you interact with your computer at its most basic
level. Sometimes this part of the operating system is called the user shell. In today’s
operating systems, the shell is simply another program, and you can substitute one shell
for another. In effect, the shell is the starting point to get your applications running and
the home base that you return to between applications. Under DOS, the default shell is
COMMAND.COM; under Windows versions through 3.11, the shell is Program Manager
(PROGMAN.EXE).

Behind the shell, the Application Program Interface, or API of the operating system,
gives programmers a uniform set of calls, key words that instruct the operating system to
execute a built-in program routine that carries out some pre-defined function. For
example, a program can call a routine from the operating system that draws a menu box
on the screen. Using the API offers programmers the benefit of having the complicated
aspects of common program procedures already written and ready to go. Programmers
don’t have to waste their time on the minutiae of moving every bit on your monitor
screen or other common operations. The use of a common base of code also eliminates
duplication, which makes today’s overweight applications a bit more svelte. Moreover,
because all applications use basically the same code, they have a consistent look and
work in a consistent manner. This prevents your PC from looking like the accidental
amalgamation of the late night work of thousands of slightly aberrant engineers that it is.
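
To make the idea concrete, here is a minimal sketch in C of a Windows program that
leans on the API. MessageBox is a genuine Windows call; everything else about the
dialog, from drawing the box to tracking the mouse, is code that lives in the operating
system rather than in the program:

    #include <windows.h>

    /* One API call and the operating system draws the box,
       paints the button, and waits for the click. */
    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        MessageBox(NULL, "Hello from the API", "A Windows Program", MB_OK);
        return 0;
    }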

As new technologies, hardware, and features get added to the repertory you expect from
your PC, the operating system maker must expand the API to match. Old operating
systems required complete upgrades or replacements to accommodate the required
changes. Modern operating systems are more modular and accept extensions of their
APIs with relatively simple installations of new code. For example, one of the most
important additions to the collection of APIs used by Windows 95 was a set of
multimedia controls called DirectX. Although now considered part of Windows 95, this
collection of four individual APIs, later expanded to six, didn’t become available until
two months after the initial release of the operating system. The DirectX upgrade
supplemented the multimedia control code of the original release with full 32-bit
versions.

Compared to the API, the hardware interface of an operating system works in the
opposite direction. Instead of commands sent to the operating system to carry out, the
hardware interface comprises a set of commands the operating system sends out to make
the hardware do its tricks. These commands take a generalized form for a particular class
of hardware. That is, instead of being instructions for a particular brand and model of
disk drive, they are commands that all disk drives must understand, for example to read a
particular cluster from the disk. The hardware interface (and the programmer) doesn’t
care about how the disk drive reads the cluster, only that it does—and delivers the results
to the operating system. The hardware engineer can then design the disk drive to do its
work any way he wants as long as it properly carries out the command.

In the real world, the operating system hardware interface doesn’t mark the line between
hardware and software. Rather, it draws the line between the software written as part of
the operating system and that written by (or for) the hardware maker. The hardware
interface ties into an additional layer of software called a driver that’s specifically created
for the particular hardware being controlled. Each different piece of hardware—
sometimes down to the brand and model number—gets its own special driver. Moreover,
drivers themselves may be layered. For example, the most recent versions of Windows
use a mini-driver model in which a class of hardware devices gets one overall driver, and
a specific product gets matched to it by a mini-driver.

Not all operating systems provide a common hardware interface. In particular, DOS
makes few pretenses of linking hardware. It depends on software publishers to write their
own links between their program and specific hardware (or hardware drivers). This
method of direct hardware control is fully described in the "Linking Hardware and
Software" section later in this chapter.

Outside of the shell of the user interface, you see and directly interact with little of an
operating system. The bulk of the operating system program code works invisibly (and
continuously). And that’s the way it’s designed to be.

Programming Languages

A computer program is nothing more than a list of instructions for a microprocessor to
carry out. A microprocessor instruction, in turn, is a specific pattern of bits, a digital
code. Your computer sends the list of instructions making up a program to your
microprocessor one at a time. Upon receiving each instruction, the microprocessor looks
up what function the code says to do, then it carries out the appropriate action.

Every microprocessor understands its own repertoire of instructions just as a dog might
understand a few spoken commands. Where your pooch might sit down and roll over
when you ask it to, your processor can add, subtract, move bit patterns around, and
change them. Every family of microprocessors has a set of instructions that it can
recognize and carry out, the necessary understanding designed into the internal circuitry
of each chip. The entire group of commands that a given microprocessor model
understands and can react to is called that microprocessor's instruction set or its
command set. Different microprocessor families recognize different instruction sets, so
the commands meant for one chip family would be gibberish to another. The Intel family
of microprocessors understands one command set; the IBM/Motorola PowerPC family of
chips recognizes an entirely different command set.

As a mere pattern of bits, a microprocessor instruction itself is a simple entity, but the
number of potential code patterns allows for incredibly rich command sets. For example,
the Intel family of microprocessors understands more than eight subtraction instructions,
each subtly different from the others.

Some microprocessor instructions require a series of steps to be carried out. These multi-
step commands are sometimes called complex instructions because of their composite
nature. Although the complex instruction looks like a simple command, it may involve
much work. A simple instruction would be something like "pound a nail"; a complex
instruction may be as far ranging as "frame a house." Simple subtraction or addition of
two numbers may actually involve dozens of steps, including the conversion of the
numbers from decimal to binary (1’s and 0’s) notation that the microprocessor
understands.

Broken down to its constituent parts, a computer program is nothing but a list of symbols
that correspond to patterns of bits that signal a microprocessor exactly as letters of the
alphabet represent sounds that you might speak. Of course, with the same back to the real
basics reasoning, an orange is a collection of quarks squatting together with reasonable
stability in the center of your fruit bowl. The metaphor is apt. The primary constituents of
an orange—whether you consider them quarks, atoms, or molecules—are essentially
interchangeable, even indistinguishable. By itself, every one is meaningless. Only when
they are taken together do they make something worthwhile (at least from a human
perspective), the orange. The overall pattern, not the individual pieces, is what’s
important.

Letters and words work the same way. A box full of vowels wouldn’t mean anything to
anyone not engaged in a heated game of Wheel of Fortune. Match the vowels with
consonants and arrange them properly, and you might make words of irreplaceable value
to humanity: the works of Shakespeare, Einstein’s expression of general relativity, or the
formula for Coca-Cola. The meaning is not in the pieces but their patterns.

Everything that the microprocessor does consists of nothing more than a series of these
step-by-step instructions. A computer program is simply a list of microprocessor
instructions. The instructions are simple, but long and complex computer programs are
built from them just as epics and novels are built from the words of the English language.
Although writing in English seems natural, programming feels foreign because it requires
that you think in a different way, in a different language. You even have to think of jobs,
such as adding numbers, typing a letter, or moving a block of graphics, as a long series of
tiny steps. In other words, programming is just a different way of looking at problems
and expressing the process of solving them.

These bit patterns used by microprocessors can be represented as binary codes, which
can be translated into numbers in any format—hexadecimal and decimal being the most
common. In this form, the entire range of these commands for a microprocessor is called
machine language. Most human beings find words or pseudo-words to be more
comprehensible symbols. The list of word-like symbols that control a microprocessor is
termed assembly language.

You make a computer program by writing a list of commands for a microprocessor to
carry out. At this level, programming is like writing reminder notes for a not-too-bright
helper—first socks, then shoes.

This step-by-step command system is perfect for control freaks but otherwise is more
than most people want to tangle with. Even simple computer operations require dozens of
microprocessor operations, so writing complete lists of commands in this form can be
more than many programmers want to deal with. Moreover, machine and assembly
language commands are microprocessor-specific: they work only with the specific chips
that understand them. Worse, because the microprocessor controls all computer
functions, assembly language programs usually work only on a specific hardware
platform.

A computer needs software, that list of instructions called a program, to make it work.
But a program is more than a mere list. It is carefully organized and structured so that the
computer can go through the instruction list step by step, executing each command in
turn. Each instruction builds on the previous ones to carry out a complex function. The
program is essentially a recipe for a microprocessor.

Microprocessors by themselves only react to patterns of electrical signals. Reduced to its
purest form, the computer program is information that finds its final representation as the
ever-changing pattern of signals applied to the pins of the microprocessor. That electrical
pattern is difficult for most people to think about, so the ideas in the program are
traditionally represented in a form more meaningful to human beings. That representation
of instructions in human-recognizable form is called a programming language.

Programming languages create their own paradox. You write programs on a computer to
make the computer run and do what you want it to do. But without a program the
computer can do nothing. Suddenly you’re confronted with a basic philosophic question:
Which came first, the computer or the program?

The answer really lays an egg. With entirely new systems, the computer and its first
programs are conceived and created at the same time. As a team of hardware engineers
builds the computer hardware, another team of software engineers develops its basic
software. They both work from a set of specifications, the list of commands the computer
can carry out. The software engineers use the commands in the specifications, and the
hardware engineers design the hardware to carry out those commands. With any luck,
when both are finished the hardware and software come together perfectly and work the
first time they try. It’s sort of like digging a tunnel from two directions and hoping the
two crews meet in the middle of the mountain.

Moreover, programs don’t have to be written on the machine for which they are meant.
The machine that a programmer uses to write program code does not need to be able to
actually run the code. It only has to edit and store the program so that it can later be
loaded and run on the target computer. For example, programs for game machines are
often written on more powerful computers called development systems. Using a more
powerful machine for writing gives programmers more speed and versatility.

Similarly, you can create a program that runs under one operating system using a
different operating system. For example, you can write DOS programs under Windows.
Moreover, you can write an operating system program while running under another
operating system. After all, writing the program is little more than using a text editor to
string together commands. The final code is all that matters; how you get there is
irrelevant to the final program. The software writer simply chooses the programming
environment he's most comfortable in, just as he chooses the language he prefers to use.

Machine Language

The most basic of all coding systems for microprocessor instructions merely documents
the bit pattern of each instruction in a form that human beings can see and appreciate.
Because this form of code is an exact representation of the instructions that the computer
machine understands, it is termed machine language.

The bit pattern of electrical signals in machine language can be expressed directly as a
series of ones and zeros, such as 0010110. Note that this pattern directly corresponds to a
binary (or base-two) number. As with any binary number, the machine language code of
an instruction can be translated into other numerical systems as well. Most commonly,
machine language instructions are expressed in hexadecimal form (base-16 number
system). For example, the 0010110 subtraction instruction becomes 16(hex).
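
Working the conversion through shows the correspondence: in 0010110, the bits that are
set occupy the places worth 16, 4, and 2, so the pattern equals 22 in decimal. Dividing 22
by 16 gives 1 with a remainder of 6, hence 16(hex).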

Assembly Language

People can and do program in machine language. But the pure numbers assigned to each
instruction require more than a little getting used to. After weeks, months, or years of
machine language programming, you begin to learn which numbers do what. That’s great
if you want to dedicate your life to talking to machines but not so good if you have better
things to do with your time.

For human beings, a better representation of machine language codes involves
mnemonics rather than strictly numerical codes. Descriptive word fragments can be
assigned to each machine language code so that 16(hex) might translate into SUB (for
subtraction). Assembly language takes this additional step, enabling programmers to
write in more memorable symbols.

Once a program is written in assembly language, it must be converted into the machine
language code understood by the microprocessor. A special program, called an assembler,
handles the necessary conversion. Most assemblers do even more to make the
programmer’s life manageable. For example, they enable blocks of instructions to be
linked together into a block called a subroutine, which can later be called into action by
using its name instead of repeating the same block of instructions again and again.

Most of assembly language involves directly operating the microprocessor using the
mnemonic equivalents of its machine language instructions. Consequently, programmers
must be able to think in the same step-by-step manner as the microprocessor. Every
action that the microprocessor does must be handled in its lowest terms. Assembly
language is consequently known as a low level language because programmers write at
the most basic level.

High Level Languages

Just as an assembler can convert the mnemonics and subroutines of assembly language
into machine language, a computer program can go one step further, translating more
human-like instructions into multiple machine language instructions that would be
needed to carry them out. In effect, each language instruction becomes a subroutine in
itself.

The breaking of the one-to-one correspondence between language instruction and
machine language code puts this kind of programming one level of abstraction farther
from the microprocessor. Such languages are called high level languages. Instead of
dealing with each movement of a byte of information, high level languages enable the
programmer to deal with problems as decimal numbers, words, or graphic elements. The
language program takes each of these high level instructions and converts it into a long
series of digital code microprocessor commands in machine language.

High level languages can be classified into two types: interpreted and compiled. Batch
languages are a special kind of interpreted language.

Compiled Languages

Compiled languages execute like a program written in assembler but the code is written
in a more human-like form. A program written with a compiled language gets translated
from high level symbols into machine language just once. The resulting machine
language is then stored and called into action each time you run the program. The act of
converting the program from the English-like compiled language into machine language
is called compiling the program; to do this you use a language program called a compiler.
The original, English-like version of the program, the words and symbols actually written
by the programmer, is called the source code. The resulting machine language makes up
the program’s object code.

Compiling a complex program can be a long operation, taking minutes, even hours. Once
the program is compiled, however, it runs quickly because the computer needs only to
run the resulting machine language instructions instead of having to run a program
interpreter at the same time. Most of the time, you run a compiled program directly from
the DOS prompt or by clicking on an icon. The operating system loads and executes the
program without further ado. Examples of compiled languages include C, COBOL,
FORTRAN, and Pascal.
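
A minimal sketch shows the division of labor. The little C program below is source code;
running it through a compiler (invoked with a command such as cc hello.c on many
systems) produces object code that the operating system can thereafter load and run
without any further translation step:

    /* hello.c -- source code; the compiler translates it into
       machine language once, and the result runs ever after. */
    #include <stdio.h>

    int main(void)
    {
        printf("Hello from a compiled program\n");
        return 0;
    }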

Object-oriented languages are special compiled languages designed so that programmers
can write complex programs as separate modules termed objects. A programmer writes
an object for a specific, common task and gives it a name. To carry out the function
assigned to an object, the programmer needs only to put its name in the program without
reiterating all the object's code. A program may use the same object in many places and
at many different times. Moreover, a programmer can put a copy of an object into
different programs without the need to rewrite and test the basic code, which speeds up
the creation of complex programs. The newest and most popular programming languages
like C++ are object-oriented.

Because of the speed and efficiency of compiled languages, compilers have been written
that convert interpreted language source code into code that can be run like any compiled
program. A BASIC compiler, for example, will produce object code that will run from
the DOS prompt without the need for running the BASIC interpreter. Some languages,
like Microsoft QuickBASIC, incorporate both interpreter and compiler in the same
package.

When PCs were young, getting the best performance required using a low level language.
High level languages typically include error routines and other overhead that bloats the
size of programs and slows their performance. Assembly language enabled programmers
to minimize the number of instructions they needed and to ensure that they were used as
efficiently as possible.

Optimizing compilers do the same thing but better. By adding an extra step (or more) to
the program compiling process, the optimizing compiler checks to ensure that program
instructions are arranged in the most efficient order possible to take advantage of all the
capabilities of a RISC microprocessor. In effect, the optimizing compiler does the work
that would otherwise require the concentration of an assembly language programmer.

In the end, however, the result of using any language is the same. No matter how high the
level of the programming language, no matter what you see on your computer screen, no
matter what you type to make your machine do its daily work, everything the
microprocessor does is reduced to a pattern of digital pulses to which it reacts in knee
jerk fashion. Not exactly smart on the level of an Albert Einstein or even the trouble-
making kid next door, but the microprocessor is fast, efficient, and useful. It is the
foundation of every PC.

Interpreted Languages

An interpreted language is translated from human to machine form each time it is run by
a program called an interpreter. People who need immediate gratification like interpreted
programs because they can be run immediately, without intervening steps. If the
computer encounters a programming error, it can be fixed, and the program can be tested
again immediately. On the other hand, the computer must make its interpretation each
time the program is run, performing the same act again and again. This repetition wastes
the computer’s time. More importantly, because the computer is doing two things at once,
both executing the program and interpreting it at the same time, it runs more slowly.

BASIC, an acronym for Beginner's All-purpose Symbolic Instruction Code, is the most
familiar programming language. BASIC, as an interpreted language, was built into every
personal computer IBM made in the first decade of personal computing. Another
interpreted language, Java, promises to change the complexion of the Internet.

Your PC downloads a list of Java commands and converts them into executable form
inside your PC. The interpreted
design of Java helps make it universal. The Java code contains instructions that any PC
can carry out regardless of its operating system. The Java interpreter inside your PC
converts the universal code into the specific machine language instructions your PC and
its operating system understand.

In classic form, using an interpreted language involved two steps. First, you would start
the language interpreter program, which gave you a new environment to work in,
complete with its own system of commands and prompts. Once in that environment, you
executed your program, typically starting it with a "Run" instruction. More modern
interpreted systems like Java hide the actual interpreter from you. The Java program
appears to run automatically by itself, although in reality the interpreter is hidden in your
Internet browser or operating system. Microsoft’s Visual Basic gets its interpreter support
from a run-time module which must be available to your PC’s operating system for
Visual Basic programs to run.

Batch Languages

A batch language allows you to submit a program directly to your operating system for
execution. That is, the batch language is a set of operating system commands that your
PC executes sequentially as a program. The resulting batch program works like an
interpreted language in that each step gets evaluated and executed only as it appears in
the program.
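
As a sketch, here is a small DOS batch file of the kind the text describes. The directory
names are hypothetical; each line is an ordinary operating system command, and DOS
executes them in sequence when you type the batch file's name:

    @ECHO OFF
    REM BACKUP.BAT -- copy the day's documents to a floppy disk
    COPY C:\DOCS\*.TXT A:\
    ECHO All done. You may remove the disk.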

Applications often include their own batch languages. These, too, are merely lists of
commands for the application to carry out in the order that you’ve listed them to perform
some common, everyday function. Communications programs use this type of
programming to automatically log into the service of your choice and even retrieve files.
Databases use their own sort of programming to automatically generate reports that you
regularly need. The process of transcribing your list of commands is usually termed
scripting. The commands that you can put in your program scripts are sometimes called
the scripting language.

Scripting actually is programming. The only difference is the language. Because you use
commands that are second nature to you (at least after you’ve learned to use the program)
and follow the syntax that you’ve already learned running the program, the process seems
more natural than writing in a programming language.

Linking Hardware and Software

Software is from Venus. Hardware is from Mars—or, to ruin the allusion for sake of
accuracy, Vulcan. Software is the programmer’s labor of love, an ephemeral spirit that
can only be represented. Hardware is the physical reality, the stuff pounded out in
Vulcan’s forge—enduring, unchanging, and often priced like gold. Bringing the two
together is a challenge that even self-help books would find hard to manage. Yet every
PC not only faces that formidable task but tackles it with aplomb (or so you hope!).

Your PC takes ephemeral ideas and gives them the power to control physical things. In
other words, it allows its software to command its hardware. The challenge is making the
link.

In the basic PC, every instruction in a program gets targeted on the microprocessor.
Consequently, the instructions can control only the microprocessor and don’t themselves
reach beyond. The circuitry of the rest of the computer and all of the peripherals
connected to it must get their commands and data relayed by the microprocessor.
Somehow the microprocessor must be able to send signals to these devices.

Device Interfaces

Two methods are commonly used to link devices to the microprocessor: input/output
mapping and memory mapping. Input/output mapping relies on sending instructions and
data through ports. Memory mapping requires passing data through memory addresses.
Ports and addresses are similar in concept but different in operation.

Input/Output Mapping

A port is an address but not a physical location. The port is a logical construct that
operates as an addressing system separate from the address bus of the microprocessor
even though it uses the same address lines. If you imagine normal memory addresses as a
set of pigeon holes for holding bytes, input/output ports act like a second set of pigeon
holes on the other side of the room. To distinguish which set of holes to use, the
microprocessor controls a flag signal on its bus called memory-I/O. In one condition it
tells the rest of the computer the signals on the address bus indicate a memory location; in
its other state, the signals indicate an input/output port.

The microprocessor’s internal mechanism for sending data to a port also differs from
memory access. One instruction, move, allows the microprocessor to move bytes from
any of its registers to any memory location. Some microprocessor operations can even be
performed in immediate mode, directly on the values stored at memory locations.

Ports, however, use a pair of instructions: In, to read from a port, and Out, to write to a
port. The values read can be transferred only into one specific register of the
microprocessor (called the accumulator), and can be written only from that register. (The
accumulator has other functions as well.) Immediate operations on values held at port
locations are impossible—which means a value stored in a port cannot be changed
directly by the microprocessor. It must load the port value into the accumulator, alter it,
then reload the new value back into the port.
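
In C, the load/alter/reload dance looks like the sketch below. It assumes a DOS-era
compiler such as Microsoft C, whose conio.h header supplies the _inp and _outp
functions that compile down to the In and Out instructions; the port address here is only
an illustration:

    #include <conio.h>

    #define STATUS_PORT 0x379    /* a hypothetical I/O port address */

    void set_high_bit(void)
    {
        int value;

        value = _inp(STATUS_PORT);     /* In: the port value lands in a register */
        value |= 0x80;                 /* alter the value inside the CPU         */
        _outp(STATUS_PORT, value);     /* Out: write the new value back          */
    }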

Memory Mapping

The essence of memory mapping is sharing. The microprocessor and the hardware device
it controls share access to a specific range of memory addresses. To send data to the
device, your microprocessor simply moves the information into the memory locations
exactly as if it were storing something for later recall. The hardware device can then read
those same locations to obtain the data.

Memory-mapped devices, of course, need direct access to your PC’s memory bus.
Through this connection, they can gain speed and operate as fast as the memory system
and its bus connection allows. In addition, the microprocessor can directly manipulate the
data at the memory location used by the connection, eliminating the multi-step
load/change/reload process required by I/O mapping.

The most familiar memory-mapped device is your PC’s display. Most graphic systems
allow the microprocessor to directly address the frame buffer that holds the image which
appears on your monitor screen. This design allows the video system to operate at the
highest possible speed.
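
A sketch in C shows how direct the connection is. It assumes a real mode DOS compiler
that supports far pointers; segment B800(hex) is the standard frame buffer for color text
mode, so storing a byte there makes a character appear on screen immediately:

    /* Write a character directly into the text mode frame buffer.
       Under DOS, segment 0xB800 holds character/attribute byte pairs. */
    void poke_corner(void)
    {
        unsigned char far *screen = (unsigned char far *)0xB8000000L;

        screen[0] = 'A';     /* the character in the top left corner */
        screen[1] = 0x07;    /* its attribute: white text on black   */
    }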

The addresses used for memory mapping must lie outside the range in which the
operating system loads your programs. If a program should transgress on the area used
for the hardware connection, it can inadvertently change the data there—nearly always
with bad results. Moreover, the addresses used by the interface cannot serve any other
function, so they take away from the maximum memory addressable by a PC. Although
such deductions are insignificant with today's PCs, they were a significant shortcoming
for old systems, many of which were limited to 16 megabytes.

Addressing

To the microprocessor the difference between ports and memory is one of perception:
memory is a direct extension of the chip. Ports are the external world. Writing to I/O
ports is consequently more cumbersome and usually requires more time and
microprocessor cycles.

I/O ports give the microprocessor and computer designer greater flexibility. And they
give you a headache when you want to install multimedia accessories.

Implicit in the concept of addressing, whether memory or port addresses, is proper
delivery. You expect a letter carrier to bring your mail to your address and not deliver it
to someone else's mailbox. Similarly, PCs and their software assume that deliveries of
data and instructions will always go where they are supposed to. To assure proper
delivery, addresses must be correct and unambiguous. If someone types a wrong digit on
a mailing label, the letter will likely get lost in the postal system.

In order to use port or memory addresses properly, your software needs to know the
proper addresses used by your peripherals. Many hardware functions have fixed or
standardized addresses that are the same in every PC. For example, the memory
addresses used by video boards are standardized (at least in basic operating modes), and
the ports used by most hard disk drives are similarly standardized. Programmers can
write the addresses used by this fixed-address hardware into their programs and not
worry whether their data will get where it’s going.

The layered BIOS approach helps eliminate the need to write explicit hardware
addresses into programs. Drivers accomplish a similar function; they are written with the
necessary hardware addresses built in.

Resource Allocation

The basic hardware devices were assigned addresses and memory ranges early in the
history of the PC and for compatibility reasons have never changed. These fixed values
include those of serial and parallel ports, keyboards, disk drives, and the frame buffer that
stores the monitor image. Add-in devices and more recent enhancements to the traditional
devices require their own assignments of system resources. Unfortunately, beyond the
original hardware assignments there are no standards for the rest of the resources.
Manufacturers consequently pick values of their own choosing for new products. More
often than you'd like, several products may use the same address values.

Manufacturers attempt to avoid conflicts by allowing a number of options for the
addresses used by their equipment. You select among the choices offered by
manufacturers using switches or jumpers (on old technology boards) or through software
(new technology boards, including those following the old Micro Channel and EISA
standards). The latest innovation, Plug-and-Play, attempts to put the responsibility for
properly allocating system resources in the hands of your PC and its operating system,
although the promise often falls short of reality when you mix new and old products.

With accessories that use traditional resource allocation technology, nothing prevents
your setting the resources used by one board to the same values used by another in your
system. The result is a resource conflict that may prevent both products from working.
Such conflicts are the most frequent cause of problems in PCs, and eliminating them was
the goal of the modern, automatic resource allocation technologies.

BIOS

The Basic Input/Output System or BIOS of a PC has many functions, as discussed in
Chapter 5. One of these is to help match your PC's hardware to software.

No matter the kind of device interface (I/O mapped or memory mapped) used by a
hardware device, software needs to know the addresses it uses to take control of it. Using
direct hardware control requires that programs or operating systems are written using the
exact values of the port and memory addresses of all the devices installed in the PC. All
PCs running such software that takes direct hardware control consequently must assign
their resources identically if the software is to have any hope of working properly.

PC designers want greater flexibility. They want to be able to assign resources as they see
fit, even to the extent of making some device that might be memory-mapped in one PC
into an I/O-mapped device in another. To avoid permanently setting resource values and
forever locking all computer designs to some arbitrary standard, one that might prove
woefully inadequate for future computer designs, the creators of the first PCs developed
BIOS.

The BIOS is program code that’s permanently recorded (or semi-permanently in the case
of Flash BIOS systems) in special memory chips. The code acts like the hardware
interface of an operating system but at a lower level; it is a hardware interface that’s
independent of the operating system. Programs or operating systems send commands to
the BIOS, and the BIOS sends out the instructions to the hardware using the proper
resource values. If the designer of a PC wants to change the resources used by the system
hardware in a new PC, he only has to change the BIOS to make most software work
properly. The BIOS code of every PC includes several of these built-in routines to handle
accessing floppy disk drives, the keyboard, printers, video, and parallel and serial port
operation.
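
A sketch shows how a program hands work to the BIOS. It assumes a DOS C compiler
that provides int86 in dos.h, as both Borland and Microsoft compilers did; the program
asks the BIOS video service, interrupt 10(hex), to display a character without knowing
anything about the video hardware's own addresses:

    #include <dos.h>

    /* Print one character through the BIOS teletype service,
       interrupt 0x10, function 0x0E. The BIOS, not this program,
       knows the resource addresses of the video hardware. */
    void bios_putchar(char c)
    {
        union REGS regs;

        regs.h.ah = 0x0E;    /* select the teletype output function */
        regs.h.al = c;       /* the character to display            */
        int86(0x10, &regs, &regs);
    }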

Device Drivers

Device drivers have exactly the same purpose as the hardware interface in the BIOS
code. They link your system to another device by giving your PC a handy set of control
functions. Drivers simply take care of devices not in the repertory of the BIOS. Rather
than being permanently encoded in the memory of your PC, drivers are software that
must be loaded into your PC's memory. As with the BIOS links, the external device
driver provides a library of routines that programs can easily call to carry out a complex
function of the target hardware or software device.

All device drivers have to link with your existing software somehow. The means of
making that link varies with your operating system. As you should expect, the
fundamental device driver architecture is that used by DOS. Drivers that work with DOS
are straightforward, single-minded, and sometimes dangerous. Advanced operating
systems like Windows and OS/2 have built-in hooks for device drivers that make them
more cooperative and easier to manage.

You need to tangle with device drivers because no programmer has an unlimited
imagination. No programmer can possibly conceive of every device that you’d want to
link to your PC. In fact, programmers are hard pressed to figure out everything you’d
want to do with your PC—otherwise they’d write perfect programs that would do exactly
everything that you want.

Thanks to an industry with a heritage of inter-company cooperation only approximated
by a good pie fight, hardware designers tend to go their own directions when creating the
control systems for their products. For example, the command one printer designer might
pick for printing graphics dots may instruct a printer of a different brand to advance to the
next sheet of paper. Confuse the codes, and your office floor will soon look like the
training ground for housebreaking a new puppy.

Just about every class of peripheral has some special function shared with no other
device. Printers need to switch ribbon colors; graphics boards need to put dots on screen
at high resolution; sound boards need to blast fortissimo arpeggios; video capture boards
must grab frames; and mice have to do whatever mice do. Different manufacturers often
have widely different ideas about the best way to handle even the most fundamental
functions. No programmer, nor even a collaborative programming team, can ever hope to
know all the possibilities. Nor could you fit all the possibilities into a BIOS with
fewer chips than a Las Vegas casino, or into an operating system whose code would fit
on a stack of disks you could carry. There are just too many possibilities.

Drivers let you customize. Instead of your operating system packing in every control or
command you might potentially need, the driver packages only those appropriate for a
particular product. If all you want is to install a sound board, your operating system
doesn’t need to know how to capture a video frame. The driver contains only the
commands specific to the type, brand, and model of the product that you actually connect
to your PC.

Device drivers give you a further advantage. You can change them almost as often as you
change your mind. If you discover a bug in one driver—say sending an upper case F to
your printer causes it to form feed through a full ream of paper before coming to a
panting stop—you can slide in an updated driver that fixes the problem. In some cases,
new drivers extend the features of your existing peripherals because programmers didn’t
have enough time or inspiration to add everything to the initial release.

The way you and your system handle drivers depends on your operating system. DOS,
16-bit versions of Windows, Windows 95, and OS/2 each treat drivers somewhat
differently. All start with the model set by DOS, then add their own innovations. Because
all 16-bit versions of Windows run under DOS, they require that you understand (and use)
some DOS drivers. In addition, these versions of Windows add their own method of installing
drivers as well as several new types of drivers. Windows 95 accommodates both DOS
and 16-bit Windows drivers to assure you of compatibility with your old hardware and
software. In addition, Windows 95 brings its own 32-bit protected-mode drivers and a
dynamic installation scheme. OS/2 also follows the pattern set by DOS but adds its own
variations as well.

Driver software matches the resource needs of your hardware to your software
applications. The match is easy when a product doesn’t allow you to select among
resource values; the proper addresses can be written right into the driver. When you can
make alternate resource allocations, however, the driver software needs to know which
values you’ve chosen. In most cases, you make your choices known to the driver by
adding options to the command that loads the driver (typically in your PC’s
CONFIG.SYS) or through configuration files. Most new add-in devices include an
installation program that indicates the proper options to the driver by adding the values to
the load command or configuration file, though you can almost always alter the options
with a text-based editor.
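
For instance, a sound board’s driver might be loaded from CONFIG.SYS with a line like
the one below. The DEVICE= syntax is DOS’s own, but the driver name and its switches
are invented for illustration:

    DEVICE=C:\AUDIO\SNDDRV.SYS /IO:220 /IRQ:5 /DMA:1

The switches tell the hypothetical driver which base I/O address, interrupt, and DMA
channel you’ve set the board to use; get one wrong, and you’ve manufactured exactly the
kind of trouble the next paragraph describes.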

This complex setup system gives you several places to make errors that will cause the
add-in device, or your entire PC, to operate erratically or fail altogether. You might make
duplicate resource assignments, mismatch drivers and options, or forget to install the
drivers at all. Because multimedia PCs use so many add-in devices, they are particularly
prone to such problems. Sound boards in particular pose installation problems because
they usually incorporate several separate hardware devices (the sound system proper, a
MIDI interface, and often a CD ROM interface) each of which has its own resource
demands.

Hardware Components

Before you can jump into any discussion about personal computers, you have to speak
the language. You can’t talk intelligently about anything if you don’t know what you’re
talking about. You need to know the basic terms and buzzwords so you don’t fall under
some charlatan’s spell.

Every PC is built from an array of components, each of which serves a specific function
in making the overall machine work. As with the world of physical reality, a PC is built
from fundamental elements combined. Each of these elements adds a necessary quality or
feature to the final PC. These building blocks are hardware components, built of
electronic circuits and mechanical parts to carry out a defined function. Although all of
the components work together, they are best understood by examining them and their
functions individually. Consequently, this book is divided into sections and chapters by
component.

Over the years of the development of the PC, the distinctions between many of these
components have turned out not to be hard and fast. In the early days of PCs, most
manufacturers followed the same basic game plan, using the same components in the
same arrangement, but today greater creativity and diversity rule. What once were
separate components have merged together; others have been split apart. Their
functions, however, remain untouched. For example, although modern PCs may lack the
separate timer chips of early machines, the function of the timer has been incorporated
into the support circuitry chipsets.

For purposes of this book and discussion, we’ll divide the PC into several major
component areas, including the system unit, the mass storage system, the display system,
peripherals, and connectivity features. Each of these major divisions can be, in turn,
subdivided into the major components (or component functions) required in a complete
PC.

System Unit

The part of a PC that most people usually think of as the computer—the box that holds all
the essential components except, in the case of desktop machines, the keyboard and
monitor—is the system unit. Sometimes called the CPU—for Central Processing Unit, a term
also applied to microprocessors and to mainframe computers—the system unit
is the basic computer component. It houses the main circuitry of the computer and
provides the jacks (or outlets) that link the computer to the rest of its accouterments
including the keyboard, monitor, and peripherals. A notebook computer combines all of
these external components into one but is usually called simply the computer rather than
the system unit or CPU.

One of the primary functions of the system unit is physical. It gives everything in your
computer a place to be. It provides the mechanical mounting for all the internal
components that make up your computer, including the motherboard, disk drives and
expansion boards. The system unit is the case of the computer that you see and
everything that is inside it. The system unit supplies power to operate the PC and its
internal expansion, disk drives, and peripherals.

Motherboard

The centerpiece of the system unit is the motherboard. All the other circuitry of the
system unit is usually part of the motherboard or plugs directly into it.

The electronic components on the motherboard carry out most of the function of the
machine: running programs, making calculations, even arranging the bits that will display
on the screen.

Because the motherboard defines each computer’s functions and capabilities and because
every computer is different, it only stands to reason that every motherboard is different,
too. Not exactly. Many different computers have the same motherboard designs inside.
And oftentimes a single computer model might have any of several different
motherboards depending on when it came down the production line (and what
motherboard the manufacturer got the best deal on).

The motherboard holds the most important elements of your PC, those that define its
function and expandability. These include the microprocessor, BIOS, memory, mass
storage, expansion slots, and ports.

Microprocessor

The most important of the electronic components on the motherboard is the
microprocessor. It does the actual thinking inside the computer. Which microprocessor,
of the dozens currently available, determines not only the processing power of the
computer but also what software language it understands (and thus what programs it can
run).

Many older computers also had a coprocessor that added performance on complex
mathematical operations such as trigonometric functions. Modern microprocessors
generally incorporate all the functions of the coprocessor internally.

Memory

Just as you need your hands and workbench to hold tools and raw materials to make
things, your PC’s microprocessor needs a place to hold the data it works on and the tools
to do its work. Memory, often described by the more specific term RAM (for Random
Access Memory), serves as the microprocessor’s workbench. Usually located on the
motherboard, memory holds the data and instructions that your PC’s microprocessor
needs to carry out its calculations. The amount and architecture of the memory of a
system determine how it
can be programmed and, to some extent, the level of complexity of the problems that it
can work on. Modern software often requires that you install a specific minimum of
memory—a minimum measured in megabytes—to execute properly. With modern
operating systems, more memory often equates to faster overall system performance.

BIOS

A computer needs a software program to work. It even needs a simple program just to
turn itself on and be able to load and read software. The Basic Input/Output System or
BIOS of a computer is a set of permanently recorded program routines that gives the
system its fundamental operational characteristics, including instructions telling the
computer how to test itself every time it is turned on.

In older PCs, the BIOS determines what the computer can do without loading a program
from disk and how the computer reacts to specific instructions that are part of those disk-
based programs. Newer PCs may contain simpler or more complex BIOSes. A BIOS can
be as simple as a bit of code telling the PC how to load the personality it needs from disk.
Some newer BIOSes also include a system to help the machine determine what options
you have installed and how to get them to work best together.

At one time, the origins of a BIOS determined the basic compatibility of a PC. Newer
machines—those made in the last decade—are generally free from worries about
compatibility. The only compatibility issue remaining is whether a given BIOS supports
the Plug-and-Play standard that allows automatic system configuration (which is a good
thing to look for in a new PC but its absence is not fatal in older systems).

Modern operating systems automatically replace the BIOS code with their own software
as soon as your PC boots up. For the most part, the modern BIOS only boots and tests
your PC, then steps out of the way so that your software can get the real work done.

Support Circuits

The support circuitry on your PC’s motherboard links its microprocessor to the rest of the
PC. A microprocessor, although the essence of a computer, is not a computer in itself (if
it were, it would be called something else, such as a computer). The microprocessor
requires additional circuits to bring it to life: clocks, controllers, and signal converters.
Each of these support circuits has its own way of reacting to programs, and thus helps
determine how the computer works.

In today’s PCs, all of the traditional functions of the support circuitry have been squeezed
into chipsets, relatively large integrated circuits. Because most PCs are now based on a
small range of microprocessors, their chipsets distinguish their motherboards and
performance as much as their microprocessors do. In fact, for some folks the choice of
chipset is a major purchasing criterion.

Expansion Slots

Exactly as the name implies, the expansion slots of a PC allow you to expand its
capabilities by sliding in accessory boards, cleverly termed expansion boards. The slots
are spaces inside the system unit of the PC that provide special sockets or connectors to
plug in your expansion boards. The expansion slots of notebook PCs accept modules the
size of credit cards that deliver the same functions as expansion boards.

The standards followed by the expansion slots in a PC determine both what boards you
can plug in and how fast the boards can perform. Over the years, PCs have used several
expansion slot standards. In new PCs, the choices have narrowed to three—and you
might want all of them in your next system.

Mass Storage

To store the huge volume of programs and data that it works with every day, your PC
uses mass storage devices. In nearly all of today’s
computers, the primary repository for this information is a hard disk drive. Floppy disks
and CD ROM drives give you a way of transferring programs and data to (and from) your
PC. One or more mass storage interfaces link the various storage systems to the rest of
your PC. In modern systems, these interfaces are often part of the circuitry of the
motherboard.

Hard Disk Drives

The basic requirements of any mass storage system are speed, capacity, and low price.
No technology delivers as favorable a combination of these virtues as the hard disk drive,
now a standard part of nearly every PC. The hard disk drive stores all of your programs
and other software so that they can be loaded into your PC’s memory almost without
waiting. In addition, the hard disk also holds all the data you generate with your PC so
that you can recall and reuse it whenever you want. In general, the faster the hard disk
and the more it can hold, the better.

Hard disks also have their weaknesses. Although they are among the most reliable
mechanical devices ever made—some claim to be able to run for 30 years without a
glitch—they lack some security features. The traditional hard disk is forever locked
inside your PC, and that makes the data stored on it vulnerable to any evil that may befall
your computer. A thief or disaster can rob you of your system and your data in a single
stroke. Just as the typical hard disk gives you no easy way to take your data out of your
PC for safekeeping, it gives you no easy way to move large blocks of information or
programs in.

CD ROM Drives

Getting data into your PC requires a distribution medium, and when you need to move
megabytes, the medium of choice today is the CD ROM drive. Software publishers have
made the CD ROM their preferred means of getting their products to you. A single CD
that costs about the same as a floppy disk holds hundreds of times more information and
keeps it more secure. CDs are vulnerable neither to random magnetic fields nor to casual
software pirates. CD ROM drives are a necessary part of all multimedia PCs, which
means just about any PC you’d want to buy today.

The initials stand for Compact Disc, Read-Only Memory, which described the technology
at the time it was introduced for PC use. Although today’s CD ROMs are based on the
same silver laser-read discs that spin as CDs in your stereo system, they are no longer
read-only and soon won’t be mere CDs. Affordable drives to write your own CDs with
computer data or stereo music are readily available. Many makers of CD ROM drives are
now shifting to the DVD (Digital Video Disc) standard to give their products additional
storage capacity.

Floppy Disk Drives

Inexpensive, exchangeable, and technically unchallenging, the floppy disk was the first,
and at one time only, mass storage system of many PCs. Based on well-proven
technologies and mass produced by the millions, the floppy disk provided the first PCs
with a place to keep programs and data and, over the years, served well as a distribution
system through which software publishers could make their products available.

In the race with progress, however, the simple technology of the floppy disk has been
hard-pressed to keep pace. The needs of modern programs far exceed what floppy disks
can deliver, and other technologies (like those CD ROM drives) provide less expensive
distribution. New incarnations of floppy disk technology that pack 50 to 100 times more
data per disk hold promise but at the penalty of a price that will make you look more than
twice at other alternatives.

All that said, the floppy disk drive remains a standard part of all but a few highly
specialized PCs, typically those willing to sacrifice everything to save a few ounces (sub-
notebooks) and those that need to operate in smoky, dusty environments that would make
Superman cringe and Wonder Woman cough.

Tape Drives

Tape is for backup, pure and simple. It provides an inexpensive place to put your data just
in case—in case some light-fingered freelancer decides to separate your PC from your
desktop, in case the fire department hoses to death everything in your office that the fire
and smoke failed to destroy, in case you think DEL *.* means "display all file names," in
case that nagging head cold turns out to be a virus that infects your PC and formats your
hard disk, in case your next-door neighbor bewitches your PC and turns it into a golden
chariot pulled by a silver charger that once was your mouse, in case an errant asteroid
ambles through your roof. Having an extra copy of your important data helps you recover
from such disasters and those that are even less likely.

Computer tape drives work on the same principles as the cassette recorder in your stereo.
Some are, in fact, cassette drives. All such drives use tape as an inexpensive medium for
storing data. All modern tape systems put their tape into cartridges that you can lock
safely away or send across the continent. And all are slower than you’d like and less
reliable than you’d suspect. Nevertheless, tape remains the backup medium of choice for
most people who choose to make backups.

Display Systems

Your window into the mind of your PC is its display system, the combination of a
graphics adapter or video board and a monitor or flat-panel display. The display system
gives your PC the means to tell you what it is thinking, to show you your data in the form
that you best understand, be it numbers, words, or pictures.

The two halves of the display system work hand-in-hand. The graphics adapter uses the
digital signals inside your PC to build an electronic map of what the final image should
look like, storing the data for every dot on your monitor in memory. The monitor’s
electronics then generate from that map the image that appears on the screen.

Graphics Adapters

Your PC’s graphics adapter forms the image that you will see on your monitor screen. It
converts digital code into a bit pattern that maps each dot that you’ll see. Because it
makes the actual conversion, the graphics adapter determines the number of colors that
can appear on your monitor as well as the ultimate resolution of the image. In other
words, the graphics adapter sets the limit on the quality of the images your PC can
produce. Your monitor cannot make an image any better than what comes out of the
graphics adapter. The graphics adapter also determines the speed of your PC’s video
system; a faster board will make smoother video displays.

Many PCs now include at least a rudimentary form of graphics adapter in the form of
display electronics on their motherboards; others put the display electronics on an
expansion board.
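
A minimal sketch makes the bit map concrete, once again assuming a Borland-style DOS
compiler (dos.h and conio.h); VGA mode 13h maps a 320 by 200, 256-color screen directly
into memory at segment A000h, so writing one byte paints one dot:

    /* Painting a dot by writing one byte into the adapter's bit map */
    #include <dos.h>
    #include <conio.h>

    int main(void)
    {
        union REGS regs;
        unsigned char far *screen = (unsigned char far *)MK_FP(0xA000, 0x0000);

        regs.x.ax = 0x0013;             /* BIOS set-video-mode: mode 13h */
        int86(0x10, &regs, &regs);

        screen[100 * 320 + 160] = 15;   /* a white dot at x=160, y=100 */
        getch();                        /* admire the dot until a key falls */

        regs.x.ax = 0x0003;             /* restore 80x25 text mode */
        int86(0x10, &regs, &regs);
        return 0;
    }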

Monitors

The monitor is the basic display system that’s attached to most PCs. Monitors are
television sets with Michael Milken’s appetite for money. While a 21-inch TV might
cost $300 in your local appliance store, the same size monitor will likely cost $2000 and
still not show the movies you rent.

Although both monitor and television are based on the same aging technology, one which
dates back to the 1920s, they have different aims. The monitor strives for more detail; the
television sets its sights on the mass market and makes up for its shortcomings in volume.
In any case, both rely on big glass bottles coated with glowing phosphors that shine
bright enough to light a room.

The quality of the monitor attached to your PC determines the quality of the image you
see. Although it cannot make anything look better than what’s in the signals from your
graphics adapter, it can make them look much worse and limit both the range of colors
and the resolution (or sharpness) of the images.

Flat Panel Display Systems

Big, empty bottles are expensive to make and delicate to move. Except for a select elite,
most engineers have abjured putting fire-filled bottles of any kind in their circuit designs,
the picture tube being the last remnant of this ancient technology. Replacing it are display
systems that use solid-state designs based on liquid crystals. Lightweight, low in power
requirements, and generally shock and shatter resistant, LCD panels have entirely
replaced conventional monitors in notebook PCs and promise to take over desktops in the
coming decade. Currently, they remain expensive (several times the cost of a picture
tube) and more limited in color range, but research into flat panel systems is racing
ahead, while most labs have given up picture tube technology as dead.

Peripherals

The accessories you plug into your computer are usually called peripherals. The name is
a carryover from the early beginnings of computers when the parts of a computer that did
not actually compute were located some distance from the central processing unit, on the
periphery, so to speak.

Today’s PCs have two types of peripherals: internal and external. Internal peripherals
fit inside the system unit and usually connect directly to its expansion bus. External
peripherals are physically separate from the system unit, connect to the port connectors
on the system unit, and often (but not always) require their own source of power.
Although the keyboard and monitor of a PC fit the definition of external peripherals, they
are usually considered to be part of the PC itself and not peripherals.

Input Devices

You communicate with your PC, telling it what to do, using two primary input devices,
the keyboard and the mouse. The keyboard remains the most efficient way to enter text
into applications, faster than even the most advanced voice recognition systems that let
you talk to your PC. The mouse—more correctly termed a pointing device to include
mouse-derived devices such as trackballs and the proprietary devices used by notebook
PCs—relays graphic instructions to your computer, letting you point to your choices or
sketch, draw, and paint. If you want to sketch images that appear directly on your
monitor screen, a digitizing tablet lets you work as you would with a pen on paper.

To transfer images to your PC, a scanner copies graphics into bit-images. With the right
software, it becomes an optical character recognition, or OCR, system that reads text and
transforms words into electronic form.

A voice recognition or voice input system tries to make sense out of your voice. It uses a
microphone to turn the sound waves of your voice into electrical signals, a processing
board that makes those signals digital, and sophisticated software that attempts to discern
the individual words you’ve spoken from the digital signal.

Printers

The electronic thoughts of a PC are notoriously evanescent. Pull the plug and your work
disappears. Moreover, monitors are frustratingly difficult to pass around and post through
the mail when you want to show off your latest digital art creation. Hard copy, the print-
out on paper, solves the problem. And the printer makes your hard copy.

More than any other aspect of computing, printer technology has transformed the
industry in the last decade. Where printers were once the clamorous offspring of
typewriters, they’ve now entered the space age with jets and lasers. The modern PC
printer is usually a high speed, high quality laser printer that creates four or more pages
per minute at a quality level that rivals commercial printing. Inkjet printers sacrifice the
utmost in speed and quality for lower cost and the capability of printing color without
depleting the green in your budget.

Connectivity

The really useful work that PCs do involves not just you but also the outside world. The
ability of a PC to exchange data with other devices and computers is called
connectivity. Your PC can link to any of a number of hardware peripherals through its
input/output ports. Better still, through modems, networks, and related technologies it can
connect with nearly any PC in the world.

Input/Output Ports

Your PC links to its peripherals through its input and output ports. Every PC needs some
way of acquiring information and putting it to work. Input/output ports are the primary
route for this information exchange. In the past, the standard equipment of most PCs was
simple and almost pre-ordained—one serial port and one parallel port, typically as part of
the motherboard circuitry. Today, new and wonderful port standards are proliferating
faster than dandelions in a new lawn. Hard-wired serial connections are moving to the
new Universal Serial Bus (USB) while the Infrared Data Association (IrDA) system
provides wireless links. Similarly, the simple parallel port has become an external
expansion bus capable of linking dozens of devices to a single jack.

Modems

To connect with other PCs and information sources such as the Internet through the
international telephone system, you need a modem. Essentially a signal converter, the
modem adapts your PC’s data to a form compatible with the telephone system.

In a quest for faster transfers than the ancient technology of the classic telephone circuit
can provide, however, data communications are shifting to newer systems such as digital
telephone services (like ISDN), high speed cable connections, and direct digital links
with satellites. Each of these requires its own variety of connecting device, not, strictly
speaking, a modem but called that for consistency’s sake. Which you need depends on
the speed you want and the connections available to you.
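
The software side of the conversation is refreshingly simple: nearly every modem
understands the Hayes AT command set. Here is a minimal sketch, assuming the same
DOS-era int86 facility as earlier examples; BIOS interrupt 14h, function 01h, pushes one
character out a serial port (the phone number, of course, is made up):

    /* Dialing a Hayes-compatible modem through the BIOS serial service */
    #include <dos.h>
    #include <string.h>

    static void com1_send(const char *s)
    {
        union REGS regs;
        size_t i;

        for (i = 0; i < strlen(s); i++) {
            regs.h.ah = 0x01;       /* BIOS serial function 01h: transmit */
            regs.h.al = s[i];       /* one character of the command */
            regs.x.dx = 0;          /* serial port 0, better known as COM1 */
            int86(0x14, &regs, &regs);
        }
    }

    int main(void)
    {
        com1_send("ATDT5551234\r"); /* attention, dial, touch-tone */
        return 0;
    }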

Networks

Any time you link two or more PCs together, you’ve made a network. Keep the machines
all in one place—one home, one business, one site in today’s jargon—and you have a
Local Area Network (LAN). Spread them across the country, world, or universe with
telephone, cable, or satellite links, and you get a Wide Area Network (WAN).

Once you link up to the World Wide Web, your computer is no longer merely the box on
your desk. Your PC becomes part of a single, massive international computer system.
Even so, it retains all the features and abilities you expect from a PC—it only becomes
even more powerful.
