
A Standard Motherboard Layout

I/O Buses
Introduction to the I/O buses
The PC's buses are the fundamental data "highways" on the system board. The
"first" bus is the system bus, which connects the CPU with RAM. In older designs it
was a local bus. In newer designs this bus is called the front side bus (FSB).

The typical local bus has a speed and width depending on the type of CPU installed on
the motherboard. Typically, the system bus is 64 bits wide and runs at 66, 100 or
133 MHz. These high speeds create electrical noise and other problems. Therefore,
the speed must be reduced for data reaching the expansion cards and other more
peripheral components.
Very few expansion cards can operate at more than 40 MHz; above that, the
electronics simply cannot keep up. Therefore, the PC has additional
buses.

Originally only one bus

However, the first PCs had only one bus, which was common for the CPU, RAM and
I/O components:

The older first and second generation CPUs ran at relatively low clock frequencies,
and all system components could keep up with those speeds.

RAM on adapters

Among other things, that allowed additional RAM to be installed in the PC by
mounting an adapter card carrying RAM chips in a vacant expansion slot.

This setup is not used today, but it was truly a local bus: all units were united
on one bus using the same clock.
Not until 1987 did Compaq figure out how to separate the system bus from the I/O bus,
so the two could run at different speeds. This multi-bus architecture has been the
industry standard ever since. Modern PCs also have more than one I/O bus.

What does an I/O bus do?

I/O buses connect the CPU to all other components, except RAM. Data are moved on
the buses from one component to another, and from those components to the
CPU and RAM. The I/O buses differ from the system bus in speed: their speed will
always be lower than that of the system bus. Over the years, different I/O buses have
been developed. On modern PCs, you will usually find four buses:

• The ISA bus, which is an old low-speed bus, soon to be excluded from the PC
design.
• The PCI bus, which is a newer high-speed bus.
• The USB bus (Universal Serial Bus), which is a newer low-speed bus.
• The AGP bus, which is used solely for the graphics card.

As mentioned earlier, I/O buses are really extensions to the system bus. On the
motherboard, the system bus ends in a controller chip, which forms a bridge to the
I/O buses.

The buses have a very central place in the PC's data exchange. Actually, all
components except the CPU communicate with each other and with RAM via the
different I/O buses.
The physical aspects of the I/O buses

Physically, the I/O bus consists of tracks on the printed circuit board. These tracks
are used as:

• Data tracks, each of which can move one bit at a time
• Address tracks, which identify where data should be sent
• Other tracks for clock ticks, voltage, verification signals, etc.

When data are sent on the bus, they must be addressed to a receiver. Therefore,
each device on the bus has an address. Similarly, the RAM is divided into sections,
each having its own address. Prior to sending data, a number is sent on the address
tracks to identify where the data should be delivered.

The bus width

The number of data tracks determines the data transfer capacity. The ISA bus is slow
partly because it has only 16 data tracks. Modern PCs move 32 bits per clock
tick, so on the ISA bus those 32 bits must be split into two packages of 16 bits,
which delays the data transfer. Another I/O bus concept is wait states.

Wait states

Wait states are small pauses. If an ISA adapter cannot keep up with the incoming
data flow, its controller sends wait states to the CPU. These are signals telling the
CPU to "hold on for a sec." A wait state is a wasted clock tick: the CPU skips a clock
tick without doing anything. Thus an old and slow ISA adapter can significantly reduce
the operating speed of a modern computer.
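
As a rough, hypothetical illustration of how wait states eat into throughput, the effective speed of a bus can be estimated from its width, its clock and the number of clock ticks each transfer really takes. The figures below are chosen to resemble the ISA numbers quoted later in this section; they are not measured values:

    # Hypothetical sketch: effective bus throughput when wait states are added.
    def effective_throughput_mbps(width_bits, clock_mhz, cycles_per_transfer, wait_states=0):
        """MB per second when one transfer completes every
        (cycles_per_transfer + wait_states) clock ticks."""
        bytes_per_transfer = width_bits / 8
        transfers_per_second = clock_mhz * 1_000_000 / (cycles_per_transfer + wait_states)
        return bytes_per_transfer * transfers_per_second / 1_000_000

    # A 16 bit ISA bus at 8 MHz, needing 2 clock ticks per transfer:
    print(effective_throughput_mbps(16, 8, 2))       # 8.0 MB/s
    # The same bus when a slow adapter forces 6 extra wait states per transfer:
    print(effective_throughput_mbps(16, 8, 2, 6))    # 2.0 MB/s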

Another aspect is the IRQ signals, which the components use to attract attention
from the CPU.

Technical and historical background for the I/O buses

In modern PCs you only find the PCI and ISA buses. But, over the years, there have
been other buses. Here is an overview of the various I/O buses.

For each bus, the list gives the year of introduction, the bus width, the bus speed,
and the maximum (theoretical) throughput:

• PC and XT (1980-82): 8 bit, synchronous with CPU at 4.77-6 MHz, 4-6 MBps
• ISA (AT), simple bus (1984): 16 bit, synchronous at 8-10 MHz, 8 MBps
• MCA, advanced, intelligent bus by IBM (1987): 32 bit, asynchronous at 10.33 MHz, 40 MBps
• EISA, bus for servers (1988): 32 bit, synchronous at max. 8 MHz, 32 MBps
• VL, high speed bus used in 486s (1993): 32 bit, synchronous at 33-50 MHz, 100-160 MBps
• PCI, intelligent, advanced high speed bus (1993): 32 bit, asynchronous at 33 MHz, 132 MBps
• USB, modern, simple, and intelligent bus (1996): serial, 1.2 MBps
• FireWire (IEEE 1394), high-speed I/O bus for storage, video etc. (1999): serial, 80 MBps
• USB 2.0 (2001): serial, 12-40 MBps

SCSI is another type of bus.

Introduction to the ISA bus

Since about 1984, the standard bus for PC I/O functions has been ISA (Industry
Standard Architecture). It is still used in all PCs to maintain backwards compatibility,
so that modern PCs can accept expansion cards of the old ISA type.

ISA was an improvement over the original IBM XT bus, which was only 8 bits wide.
IBM's trademark for it is the AT bus, but it is usually just referred to as the ISA bus.

ISA is 16 bits wide and runs at a maximum of 8 MHz. However, it requires 2-3 clock
ticks to move 16 bits of data. The ISA bus works synchronously with the CPU; since
many expansion boards become flaky above 10 MHz, the ISA clock frequency is kept
at a fraction of the system bus clock frequency.

The ISA bus has a theoretical transmission capacity of about 8 MBps. However, the
actual speed does not exceed 1-2 MBps, and it soon became too slow.

Two faces

The ISA bus has two "faces" in the modern PC:

• The internal ISA bus, which is used for the simple ports, like the keyboard,
diskette drive, and serial and parallel ports.
• The external expansion bus, which accepts 16 bit ISA adapters.

ISA slots are today mostly used for the common 16 bit SoundBlaster compatible
sound cards.
Problems

The problems with the ISA bus are:

• It is narrow and slow.
• It has no intelligence.

The ISA bus cannot transfer enough bits at a time. It has a very limited bandwidth.
Let us compare the bandwidths of ISA bus and the newer PCI bus:

Bus     Transmission time     Data volume per transmission
ISA     375 ns                16 bit
PCI     30 ns                 32 bit

Clearly, there is a vast difference between the capacity of the two buses. The ISA bus
uses a lot of time for every data transfer, and it only moves 16 bits in one operation.
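
The same point can be made with a couple of lines of arithmetic. The sketch below simply converts the transmission time and width from the table above into throughput:

    # Sketch: throughput implied by the table above (rounded figures).
    def throughput_mbps(bits_per_transfer, transfer_time_ns):
        bytes_per_transfer = bits_per_transfer / 8
        transfers_per_second = 1e9 / transfer_time_ns
        return bytes_per_transfer * transfers_per_second / 1e6

    print(throughput_mbps(16, 375))   # ISA: roughly 5 MB/s
    print(throughput_mbps(32, 30))    # PCI: roughly 133 MB/s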

The other problem with the ISA bus is its lack of intelligence. This means that the
CPU has to control the data transfer across the bus and cannot start a new
assignment until the transfer is completed. You can observe this when your PC
communicates with the floppy drive while the rest of the PC waits; quite often
the whole PC seems to be asleep. That is the result of the slow and unintelligent ISA
bus.

Problems with IRQs

The ISA bus can be a tease when you install new expansion cards (for example a
sound card). Many of these problems derive from the setting of IRQ and DMA, which
must be done manually on the old ISA bus.

Every component occupies a specific IRQ and possibly a DMA channel. That can
create conflict with existing components.

The ISA bus is out


As described, the ISA bus is quite outdated and should not be used in modern PCs.
There is a good chance that this "outdated legacy technology" (quoting Intel) will
disappear completely.

The USB bus is the technology that will replace it. It has taken many years to get
this working and accepted, but it works now.

Intel's 810 chipset was the first not to include ISA support.

Introducing the PCI bus


PCI is the high speed bus of the 1990s. PCI stands for Peripheral Component
Interconnect. The bus was developed by Intel. It is used today in all PCs and other
computers for connecting adapters, such as network controllers, graphics cards,
sound cards, etc.

Some graphics cards, however, use the AGP bus, which is a separate bus intended
only for graphics.

The PCI bus is the central I/O bus, which you find in all PCs!

A 32 bit bus

The PCI bus is actually 32 bits wide, but in practice it functions like a 64 bit bus.
Running at 33 MHz, it has a maximum transmission capacity of 132 MBps.

According to the specifications (though not in practice), it can have up to 8 units
running at speeds up to 200 MHz. The bus is processor independent; therefore, it can
be used with all 32 or 64 bit processors, and it is also found in computers other
than PCs.

The PCI bus is compatible with the ISA bus in that it can react to ISA bus signals,
create the same IRQs, etc.

Buffering and PnP

The PCI bus is buffered in relation to the CPU and the peripheral components. This
means that the CPU can deliver its data to the buffer and then proceed with other
tasks; the bus handles the further transmission at its own pace. Conversely, the
PCI adapters can also transmit data to the buffer, regardless of whether the CPU is
free to process them. The data are placed in a queue until the system bus can
forward them to the CPU. Under optimal conditions, the PCI bus transmits 32 bits per
clock tick; sometimes it requires two clock ticks.

Because of this, the peripheral PCI units operate asynchronously. Therefore, the PCI
bus (contrary to the VL bus) is not a local bus in a strict sense. Finally, the PCI bus is
intelligent relative to the peripheral components, in that Plug and Play (abbreviated
PnP) is included in the PCI specifications: all adapter cards for the PCI bus configure
themselves.
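
The buffering can be pictured as a simple queue between the CPU and the bus. The following is only a minimal sketch of the idea, not how any real chipset is programmed:

    from collections import deque

    # Minimal sketch of PCI-style buffering: the CPU drops data into a queue
    # and continues with other work; the bus drains the queue at its own pace.
    class BufferedBus:
        def __init__(self):
            self.buffer = deque()

        def cpu_write(self, data):
            self.buffer.append(data)          # the CPU returns immediately

        def bus_tick(self):
            return self.buffer.popleft() if self.buffer else None

    bus = BufferedBus()
    for word in ("A", "B", "C"):
        bus.cpu_write(word)                   # the CPU is free again after this loop
    while (item := bus.bus_tick()) is not None:
        print("bus delivered", item)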

PCI with two faces

On modern system boards, the PCI bus (like ISA) has two "faces":
• Internal PCI bus, which connects to EIDE channels on the motherboard.
• The PCI expansion bus, which typically has 3-4 slots for PCI adapters.

The PCI bus is continuously being developed further. There is a PCI Special Interest
Group, consisting of the most significant companies (Intel, IBM, Apple, and others),
which coordinates and standardizes the development.

Soon we shall see PCI with a higher bus speed (66 MHz) and greater width (64 bit).
However, alternative buses are also being marketed, such as the high speed AGP
video bus (Accelerated Graphics Port) and the FireWire bus. AGP is fundamentally a
66 MHz PCI bus (version 2.1) which has been enhanced with other technologies
making it suitable for the graphics system.

The power and speed of computer components have increased at a steady rate since
desktop computers were first developed decades ago. Software makers create new
applications capable of utilizing the latest advances in processor speed and hard
drive capacity, while hardware makers rush to improve components and design new
technologies to keep up with the demands of high-end software.
Along Comes PCI
During the early 1990s, Intel introduced a new bus standard for consideration, the
Peripheral Component Interconnect (PCI) bus. PCI presents a hybrid of sorts
between ISA and VL-Bus. It provides direct access to system memory for connected
devices, but uses a bridge to connect to the frontside bus and therefore to the CPU.
Basically, this means that it is capable of even higher performance than VL-Bus while
eliminating the potential for interference with the CPU.

The frontside bus is a physical connection that actually connects the processor to
most of the other components in the computer, including main memory (RAM), hard
drives and the PCI slots. These days, the frontside bus usually operates at 400 MHz,
with newer systems running at 800 MHz.


The backside bus is a separate connection between the processor and the Level 2
cache. This bus operates at a faster speed than the frontside bus, usually at the
same speed as the processor, so all that caching works as efficiently as possible.
Backside buses have evolved over the years. In the 1990s, the backside bus was a
wire that connected the main processor to an off-chip cache. This cache was actually
a separate chip that required expensive memory. Since then, the Level 2 cache has
been integrated into the main processor, making processors smaller and cheaper.
Since the cache is now on the processor itself, in some ways the backside bus isn't
really a bus anymore.

PCI can connect more devices than VL-Bus, up to five external components. Each of
the five connectors for an external component can be replaced with two fixed devices
on the motherboard. Also, you can have more than one PCI bus on the same
computer, although this is rarely done. The PCI bridge chip regulates the speed of
the PCI bus independently of the CPU's speed. This provides a higher degree of
reliability and ensures that PCI-hardware manufacturers know exactly what to design
for.

PCI originally operated at 33 MHz using a 32-bit-wide path. Revisions to the standard
include increasing the speed from 33 MHz to 66 MHz and doubling the bit count to
64. Currently, PCI-X provides for 64-bit transfers at a speed of 133 MHz for an
amazing 1-GBps (gigabyte per second) transfer rate!

PCI cards use 47 pins to connect (49 pins for a mastering card, which can control the
PCI bus without CPU intervention). The PCI bus is able to work with so few pins
because of hardware multiplexing, which means that the device sends more than one
signal over a single pin. Also, PCI supports devices that use either 5 volts or 3.3
volts.
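
The idea of multiplexing is simply that the same physical lines carry an address in one clock phase and data in the following phases. A toy sketch of that idea (this is not the real PCI signalling protocol, just the concept):

    # Toy sketch of multiplexed address/data lines: the same 32 "pins" carry
    # the target address first and the data words afterwards.
    def multiplexed_transaction(address, data_words):
        phases = [("address", address)]
        for word in data_words:
            phases.append(("data", word))
        return phases

    for phase, value in multiplexed_transaction(0x0000F000, [0x12345678, 0x9ABCDEF0]):
        print(f"{phase:7s} phase: {value:#010x}")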
Although Intel proposed the PCI standard in 1991, it did not achieve popularity until
the arrival of Windows 95 (in 1995). This sudden interest in PCI was due to the fact
that Windows 95 supported a feature called Plug and Play (PnP), which we'll talk
about in the next section.

Plug and Play (PnP) means that you can connect a device or insert a card into your
computer and it is automatically recognized and configured to work in your system.
PnP is a simple concept, but it took a concerted effort on the part of the computer
industry to make it happen. Intel created the PnP standard and incorporated it into
the design for PCI. But it wasn't until several years later that a mainstream operating
system, Windows 95, provided system-level support for PnP. The introduction of PnP
accelerated the demand for computers with PCI, very quickly supplanting ISA as the
bus of choice.

To be fully implemented, PnP requires three things:

• PnP BIOS - The core utility that enables PnP and detects PnP devices. The
BIOS also reads the ESCD for configuration information on existing PnP
devices.
• Extended System Configuration Data (ESCD) - A file that contains information
about installed PnP devices.
• PnP operating system - Any operating system, such as Windows XP, that
supports PnP. PnP handlers in the operating system complete the
configuration process started by the BIOS for each PnP device.

PnP automates several key tasks that were typically done either manually or with an
installation utility provided by the hardware manufacturer. These tasks include
the setting of:
o Interrupt requests (IRQ) - An IRQ, also known as a hardware
interrupt, is used by the various parts of a computer to get the
attention of the CPU. For example, the mouse sends an IRQ every time
it is moved to let the CPU know that it's doing something. Before PCI,
every hardware component needed a separate IRQ setting. But PCI
manages hardware interrupts at the bus bridge, allowing it to use a
single system IRQ for multiple PCI devices.
o Direct memory access (DMA) - This simply means that the device is
configured to access system memory without consulting the CPU first.
o Memory addresses - Many devices are assigned a section of system
memory for exclusive use by that device. This ensures that the
hardware will have the needed resources to operate properly.
o Input/Output (I/O) configuration - This setting defines the ports used
by the device for receiving and sending information.

While PnP makes it much easier to add devices to your computer, it is not infallible.

Variations in the software routines used by PnP BIOS developers, PCI device
manufacturers and Microsoft have led many to refer to PnP as "Plug and Pray." But
the overall effect of PnP has been to greatly simplify the process of upgrading your
computer to add new devices or replace existing ones.

Let's say that you have just added a new PCI-based sound card to your Windows XP
computer. Here's an example of how it would work.

1. You open up your computer's case and plug the sound card into an empty PCI
slot on the motherboard.
2. You close the computer's case and power up the computer.
3. The system BIOS initiates the PnP BIOS.



4. The PnP BIOS scans the PCI bus for hardware. It does this by sending out a
signal to any device connected to the bus, asking the device who it is.
5. The sound card responds by identifying itself. The device ID is sent back
across the bus to the BIOS.
6. The PnP BIOS checks the ESCD to see if the configuration data for the sound
card is already present. Since the sound card was just installed, there is no
existing ESCD record for it.
7. The PnP BIOS assigns IRQ, DMA, memory address and I/O settings to the
sound card and saves the data in the ESCD.
8. Windows XP boots up. It checks the ESCD and the PCI bus. The operating
system detects that the sound card is a new device and displays a small
window telling you that Windows has found new hardware and is determining
what it is.
9. In many cases, Windows XP will identify the device, find and load the
necessary drivers, and you'll be ready to go. If not, the "Found New Hardware
Wizard" will open up. This will direct you to install drivers off of the disc that
came with the sound card.
10. Once the driver is installed, the device should be ready for use. Some devices
may require that you restart the computer before you can use them. In our
example, the sound card is immediately ready for use.
11. You want to capture some audio from an external tape deck that you have
plugged into the sound card. You set up the recording software that came
with the sound card and begin to record.
12. The audio comes into the sound card via an external audio connector. The
sound card converts the analog signal to a digital signal.
13. The digital audio data from the sound card is carried across the PCI bus to the
bus controller. The controller determines which device on the PCI bus has
priority to send data to the CPU. It also checks to see if data is going directly
to the CPU or to system memory.
14. Since the sound card is in record mode, the bus controller assigns a high
priority to the data coming from it and sends the sound card's data over the
bus bridge to the system bus.
15. The system bus saves the data in system memory. Once the recording is
complete, you can decide whether the data from the sound card is saved to a
hard drive or retained in memory for additional processing.
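
The configuration part of that sequence (steps 4 through 7) boils down to a simple loop. The sketch below only illustrates that logic; the device ID, the ESCD layout and the resource values are invented for the example:

    # Sketch of the PnP logic in steps 4-7 above. The ESCD is modelled as a
    # plain dict, and all device IDs and resource values are invented.
    escd = {}                              # device_id -> assigned resources
    free_irqs = [5, 9, 10, 11]

    def pnp_configure(devices_on_bus):
        for device_id in devices_on_bus:   # steps 4-5: scan the bus, devices identify themselves
            if device_id in escd:          # step 6: is there an existing ESCD record?
                config = escd[device_id]
            else:                          # step 7: assign resources and save them
                config = {"irq": free_irqs.pop(0), "dma": 1, "io_base": 0x220}
                escd[device_id] = config
            print(device_id, "->", config)

    pnp_configure(["sound-card-1234"])     # first boot after installing the card
    pnp_configure(["sound-card-1234"])     # next boot: the ESCD record is reused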
As processor speeds steadily climb in the GHz range, many companies are working
feverishly to develop a next-generation bus standard. Many feel that PCI, like ISA
before it, is fast approaching the upper limit of what it can do.

All of the proposed new standards have something in common. They propose doing
away with the shared-bus technology used in PCI and moving to a point-to-point
switching connection. This means that a direct connection between two devices
(nodes) on the bus is established while they are communicating with each other.
Basically, while these two nodes are talking, no other device can access that path. By
providing multiple direct links, such a bus can allow several devices to communicate
with no chance of slowing each other down.

HyperTransport, a standard proposed by Advanced Micro Devices, Inc. (AMD), is
touted by AMD as the natural progression from PCI. For each session between nodes,
it provides two point-to-point links. Each link can be anywhere from 2 bits to 32 bits
wide, supporting a maximum transfer rate of 6.4 GB per second. HyperTransport is
designed specifically for connecting internal computer components to each other, not
for connecting external devices such as removable drives. The development of bridge
chips will enable PCI devices to access the HyperTransport bus.

Will PCI be replaced by HyperTransport?

PCI-Express, developed by Intel (and formerly known as 3GIO or 3rd Generation I/O),
looks to be the "next big thing" in bus technology. At first, faster buses were
developed for high-end servers. These were called PCI-X and PCI-X 2.0, but they
weren't suitable for the home computer market, because it was very expensive to
build motherboards with PCI-X.

PCI-Express is a completely different beast - it is aimed at the home computer
market, and could revolutionize not only the performance of computers, but also the
very shape and form of home computer systems. This new bus isn't just faster and
capable of handling more bandwidth than PCI. PCI-Express is a point-to-point
system, which allows for better performance and might even make the
manufacturing of motherboards cheaper. Motherboards with PCI-Express will also
keep ordinary PCI slots for older cards, which will help the new standard become
popular more quickly than it would if everyone's PCI components were suddenly
useless.

It's also scalable. A basic PCI-Express slot will be a 1x connection. This will provide
enough bandwidth for high-speed Internet connections and other peripherals. The 1x
means that there is one lane to carry data. If a component requires more bandwidth,
PCI-Express 2x, 4x, 8x, and 16x slots can be built into motherboards, adding more
lanes and allowing the system to carry more data through the connection. In fact,
PCI-Express 16x slots are already available in place of the AGP graphics card slot on
some motherboards. PCI-Express 16x video cards are at the cutting edge right now,
costing more than $500. As prices come down and motherboards built to handle the
newer cards become more common, AGP could fade into history.
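
The scaling is linear in the number of lanes. As a small sketch, and assuming the roughly 250 MB/s per lane, per direction, of first-generation PCI-Express (a figure that is not given in the text above):

    # Sketch: PCI-Express bandwidth grows with the number of lanes.
    # The ~250 MB/s per lane figure is the first-generation rate per direction;
    # it is an assumption added for this example, not taken from the text.
    PER_LANE_MBPS = 250

    for lanes in (1, 2, 4, 8, 16):
        print(f"PCI-Express {lanes}x: about {lanes * PER_LANE_MBPS} MB/s in each direction")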

The Distant Future


PCI-Express could mean more than faster computers. As the technology develops,
computer makers could design a motherboard with PCI-Express connectors that
attach to special cables. This could allow for a completely modular computer system,
much like a home stereo system. You would have a small box with the motherboard
and processor and a series of PCI-Express connection jacks. An external hard drive
could connect via USB 2.0 or PCI-Express. Small modules containing sound cards,
video cards, and modems could also attach. Instead of one large box, your computer
could be arranged any way you want, and it would only be as large as the
components you need.

USB (UNIVERSAL SERIAL BUS)

Just about any computer that you buy today comes with one or more Universal
Serial Bus connectors on the back. These USB connectors let you attach everything
from mice to printers to your computer quickly and easily. The operating system
supports USB as well, so the installation of the device drivers is quick and easy, too.
Compared to other ways of connecting devices to your computer (including parallel
ports, serial ports and special cards that you install inside the computer's case), USB
devices are incredibly simple!

In this article, we will look at USB ports from both a user and a technical standpoint.
You will learn why the USB system is so flexible and how it is able to support so
many devices so easily -- it's truly an amazing system!

Anyone who has been around computers for more than two or three years knows the
problem that the Universal Serial Bus is trying to solve -- in the past, connecting
devices to computers has been a real headache!

• Printers connected to parallel printer ports, and most computers only came
with one. Things like Zip drives, which need a high-speed connection into the
computer, would use the parallel port as well, often with limited success and
not much speed.
• Modems used the serial port, but so did some printers and a variety of odd
things like Palm Pilots and digital cameras. Most computers have at most two
serial ports, and they are very slow in most cases.
• Devices that needed faster connections came with their own cards, which had
to fit in a card slot inside the computer's case. Unfortunately, the number of
card slots is limited and you needed a Ph.D. to install the software for some of
the cards.

The goal of USB is to end all of these headaches. The Universal Serial Bus gives you
a single, standardized, easy-to-use way to connect up to 127 devices to a
computer.
Just about every peripheral made now comes in a USB version. A sample list of USB
devices that you can buy today includes:

• Printers
• Scanners
• Mice
• Joysticks
• Flight yokes
• Digital cameras
• Webcams
• Scientific data acquisition devices
• Modems
• Speakers
• Telephones
• Video phones
• Storage devices such as Zip drives

• Network connections

USB Cables and Connectors

Connecting a USB device to a computer is simple -- you find the USB connector on
the back of your machine and plug the USB connector into it.

The rectangular socket is a typical USB socket on the back of a PC.

If it is a new device, the operating system auto-detects it and asks for the driver
disk. If the device has already been installed, the computer activates it and starts
talking to it. USB devices can be connected and disconnected at any time.
A typical USB connector, called an "A" connection

Many USB devices come with their own built-in cable, and the cable has an "A"
connection on it. If not, then the device has a socket on it that accepts a USB "B"
connector.

A typical "B" connection

The USB standard uses "A" and "B" connectors to avoid confusion:

• "A" connectors head "upstream" toward the computer.
• "B" connectors head "downstream" and connect to individual devices.

By using different connectors on the upstream and downstream end, it is
impossible to ever get confused -- if you connect any USB cable's "B"
connector into a device, you know that it will work. Similarly, you can plug
any "A" connector into any "A" socket and know that it will work.

USB Hubs

Most computers that you buy today come with one or two USB sockets. With so
many USB devices on the market, you can run out of sockets very quickly.
For example, on the computer that I am typing on right now, I have a USB printer, a
USB scanner, a USB Webcam and a USB network connection. My computer has only
one USB connector on it, so the obvious question is, "How do you hook up all the
devices?"

The easy solution to the problem is to buy an inexpensive USB hub. The USB
standard supports up to 127 devices, and USB hubs are a part of the standard.

A typical USB four-port hub accepts four "A" connections.

A hub typically has four new ports, but may have many more. You plug the hub into
your computer, and then plug your devices (or other hubs) into the hub. By chaining
hubs together, you can build up dozens of available USB ports on a single computer.

Hubs can be powered or unpowered. As you will see on the next page, the USB
standard allows for devices to draw their power from their USB connection.
Obviously, a high-power device like a printer or scanner will have its own power
supply, but low-power devices like mice and digital cameras get their power from the
bus in order to simplify them. The power (up to 500 milliamps at 5 volts) comes from
the computer. If you have lots of self-powered devices (like printers and scanners),
then your hub does not need to be powered -- none of the devices connecting to the
hub needs additional power, so the computer can handle it. If you have lots of
unpowered devices like mice and cameras, you probably need a powered hub. The
hub has its own transformer and it supplies power to the bus so that the devices do
not overload the computer's supply.
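
Whether you need a powered hub really comes down to adding up current draws against the 500 milliamp budget mentioned above. A hypothetical sketch (the device names and their current draws are invented for the example):

    # Sketch: checking a USB port's 500 mA budget against attached devices.
    # The devices and their current draws below are invented for the example.
    PORT_BUDGET_MA = 500

    bus_powered_devices = {"mouse": 100, "webcam": 250, "card reader": 200}

    total_ma = sum(bus_powered_devices.values())
    if total_ma <= PORT_BUDGET_MA:
        print(f"{total_ma} mA requested: an unpowered hub will do")
    else:
        print(f"{total_ma} mA requested: use a powered hub")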

The USB Process

When the host powers up, it queries all of the devices connected to the bus and
assigns each one an address. This process is called enumeration -- devices are also
enumerated when they connect to the bus. The host also finds out from each device
what type of data transfer it wishes to perform:

• Interrupt - A device like a mouse or a keyboard, which will be sending very
little data, would choose the interrupt mode.
• Bulk - A device like a printer, which receives data in one big packet, uses the
bulk transfer mode. A block of data is sent to the printer (in 64-byte chunks)
and verified to make sure it is correct.
• Isochronous - A streaming device (such as speakers) uses the isochronous
mode. Data streams between the device and the host in real-time, and there
is no error correction.

The host can also send commands or query parameters with control packets.

As devices are enumerated, the host is keeping track of the total bandwidth that all
of the isochronous and interrupt devices are requesting. They can consume up to 90
percent of the 480 Mbps of bandwidth that is available. After 90 percent is used up,
the host denies access to any other isochronous or interrupt devices. Control packets
and packets for bulk transfers use any bandwidth left over (at least 10 percent).

The Universal Serial Bus divides the available bandwidth into frames, and the host
controls the frames. Frames contain 1,500 bytes, and a new frame starts every
millisecond. During a frame, isochronous and interrupt devices get a slot so they are
guaranteed the bandwidth they need. Bulk and control transfers use whatever space
is left.
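
The 90 percent rule can be expressed as a little bookkeeping. The sketch below is a simplified picture of the reservation logic described above, not the actual host controller algorithm, and the device list is invented:

    # Simplified sketch of the bandwidth reservation rule described above:
    # isochronous and interrupt devices may reserve up to 90% of the bus;
    # whatever is left goes to bulk and control transfers.
    TOTAL_MBPS = 480
    RESERVABLE_MBPS = 0.9 * TOTAL_MBPS

    reserved = 0.0
    for device, request in [("speakers", 12), ("webcam", 96), ("second webcam", 384)]:
        if reserved + request <= RESERVABLE_MBPS:
            reserved += request
            print(f"{device}: reserved {request} Mbps")
        else:
            print(f"{device}: denied, would exceed the 90% limit")

    print(f"left for bulk and control transfers: {TOTAL_MBPS - reserved:.0f} Mbps")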

How SCSI Works

A computer is full of busses -- highways that take information and power from one
place to another. For example, when you plug an MP3 player or digital camera into
your computer, you're probably using a universal serial bus (USB) port. Your USB
port is good at carrying the data and electricity required for small electronic devices
that do things like create and store pictures and music files. But that bus isn't big
enough to support a whole computer, a server or lots of devices simultaneously.

SCSI devices usually connect to a controller card.

For that, you'd need something more like SCSI. SCSI originally stood for Small
Computer System Interface, but it's really outgrown the "small" designation. It's a
fast bus that can connect lots of devices to a computer at the same time, including
hard drives, scanners, CD-ROM/RW drives, printers and tape drives. Other
technologies, like serial-ATA (SATA), have largely replaced it in new systems, but
SCSI is still in use. This article will review SCSI basics and give you lots of
information on SCSI types and specifications.
SCSI Basics

SCSI is based on an older, proprietary bus interface called Shugart Associates
System Interface (SASI). SASI was originally developed in 1981 by Shugart
Associates in conjunction with NCR Corporation. In 1986, the American National
Standards Institute (ANSI) ratified SCSI (pronounced "scuzzy"), a modified
version of SASI. SCSI uses a controller to send and receive data and power to
SCSI-enabled devices, like hard drives and printers.


SCSI has several benefits. It's fairly fast, up to 320 megabytes per second (MBps).
It's been around for more than 20 years and it's been thoroughly tested, so it has a
reputation for being reliable. Like Serial ATA and FireWire, it lets you put multiple
items on one bus. SCSI also works with most computer systems.

However, SCSI also has some potential problems. It has limited system BIOS
support, and it has to be configured for each computer. There's also no common
SCSI software interface. Finally, all the different SCSI types have different speeds,
bus widths and connectors, which can be confusing. When you know the meaning
behind "Fast," "Ultra" and "Wide," though, it's pretty easy to understand. We'll look
at these SCSI types next.

RAID
SCSI is often used to control a redundant array of
independent discs (RAID). Other technologies, like serial-ATA
(SATA), can also be used for this purpose. Newer SATA drives
tend to be faster and cheaper than SCSI drives.

A RAID is a series of hard drives treated as one big drive. These
drives can read and write data at the same time, known as
striping. The RAID controller determines which drive gets
which chunk of data. While that drive writes the data, the
controller sends data to or reads it from another drive.

RAID also improves fault tolerance through mirroring and
parity. Mirroring makes an exact duplicate of one drive's
data on a second hard drive. Parity uses a minimum of
three hard drives, and data is written sequentially to each
drive, except the last one. The last drive stores a number
that represents the sum of the data on the other drives.
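
In practice, that "sum" is usually computed with the XOR operation, which makes it possible to rebuild any single failed drive from the surviving drives and the parity. A tiny sketch (the data bytes are invented):

    # Sketch of RAID-style parity: the parity byte is the XOR of the data
    # bytes, and any one missing byte can be rebuilt from the others.
    data = [0b10110010, 0b01101100, 0b11100001]   # one byte from each of three data drives

    parity = 0
    for byte in data:
        parity ^= byte                            # this is what the parity drive stores

    # Pretend the second drive fails: rebuild its byte from the rest plus the parity.
    rebuilt = parity ^ data[0] ^ data[2]
    print(rebuilt == data[1])                     # True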
For more information, see the separate article on RAID and fault tolerance.

SCSI Types

SCSI has three basic specifications:

• SCSI-1: The original specification developed in 1986, SCSI-1 is now obsolete.
It featured a bus width of 8 bits and a clock speed of 5 MHz.
• SCSI-2: Adopted in 1994, this specification included the
Common Command Set (CCS) -- 18 commands
considered an absolute necessity for support of any SCSI
device. It also had the option to double the clock speed
to 10 MHz (Fast), double the bus width from 8 to 16 bits
and increase the number of devices to 15 (Wide), or do
both (Fast/Wide). SCSI-2 also added command
queuing, allowing devices to store and prioritize
commands from the host computer.
• SCSI-3: This specification debuted in 1995 and included
a series of smaller standards within its overall scope. A
set of standards involving the SCSI Parallel Interface
(SPI), which is the way that SCSI devices communicate
with each other, has continued to evolve within SCSI-3.
Most SCSI-3 specifications begin with the term Ultra,
such as Ultra for SPI variations, Ultra2 for SPI-2
variations and Ultra3 for SPI-3 variations. The Fast and
Wide designations work just like their SCSI-2
counterparts. SCSI-3 is the standard currently in use.

Different combinations of doubled bus width, doubled clock
speed and SCSI-3 specifications have led to lots of SCSI
variations. The chart below compares several of them. Many of
the slower ones are no longer in use -- we've included them for
comparison.

For each name, the list gives the specification, the number of devices, the bus width,
the bus speed, and the throughput in MBps:

• Asynchronous SCSI (SCSI-1): 8 devices, 8 bits, 5 MHz, 4 MBps
• Synchronous SCSI (SCSI-1): 8 devices, 8 bits, 5 MHz, 5 MBps
• Wide (SCSI-2): 16 devices, 16 bits, 5 MHz, 10 MBps
• Fast (SCSI-2): 8 devices, 8 bits, 10 MHz, 10 MBps
• Fast/Wide (SCSI-2): 16 devices, 16 bits, 10 MHz, 20 MBps
• Ultra (SCSI-3 SPI): 8 devices, 8 bits, 20 MHz, 20 MBps
• Wide Ultra (SCSI-3 SPI): 16 devices, 16 bits, 20 MHz, 40 MBps
Controllers, Devices and Cables

A SCSI controller coordinates between all of the other devices on the SCSI bus and
the computer. Also called a host adapter, the controller can be a card that you plug
into an available slot or it can be built into the motherboard. The SCSI BIOS is also
on the controller. This is a small ROM or Flash memory chip that contains the
software needed to access and control the devices on the bus.

Each SCSI device must have a unique identifier (ID) in order for it to work properly.
For example, if the bus can support sixteen devices, their IDs, specified through a
hardware or software setting, range from zero to 15. The SCSI controller itself must
use one of the IDs, typically the highest one, leaving room for 15 other devices on
the bus.


Internal devices connect to a SCSI controller with a ribbon cable. External SCSI
devices attach to the controller in a daisy chain using a thick, round cable. (Serial
Attached SCSI devices use SATA cables.) In a daisy chain, each device connects to
the next one in line. For this reason, external SCSI devices typically have two SCSI
connectors -- one to connect to the previous device in the chain, and the other to
connect to the next device.

External SCSI devices connect using thick, round cables.

The cable itself typically consists of three layers:

• Inner layer: The most protected layer, this contains the actual data being
sent.
• Media layer: Contains the wires that send control commands to the device.
• Outer layer: Includes wires that carry parity information, which ensures that
the data is correct.

Different SCSI variations use different connectors, which are often incompatible with
one another. These connectors usually use 50, 68 or 80 pins. SAS uses smaller,
SATA-compatible connectors.

68-pin Alternative 3 SCSI connector

50-pin Centronics SCSI connector

Once all of the devices on the bus are installed and have their own IDs, each end of
the bus must be closed. We'll look at how to do this next.

Termination

If the SCSI bus were left open, electrical signals sent down the bus could reflect back
and interfere with communication between devices and the SCSI controller. The
solution is to terminate the bus, closing each end with a resistor circuit. If the bus
supports both internal and external devices, then the last device on each series must
be terminated.

Types of SCSI termination can be grouped into two main categories: passive and
active. Passive termination is typically used for SCSI systems that run at the
standard clock speed and have a distance of less than 3 feet (1 m) from the devices
to the controller. Active termination is used for Fast SCSI systems or systems with
devices that are more than 3 feet (1 m) from the SCSI controller.

Some SCSI terminators are built into the SCSI device, while others may require an
external terminator like this one.

SCSI also employs three distinct types of bus signaling, which also affect
termination. Signaling is the way that the electrical impulses are sent across the
wires.

• Single-ended (SE): The controller generates the signal and pushes it out to
all devices on the bus over a single data line. Each device acts as a ground.
Consequently, the signal quickly begins to degrade, which limits SE SCSI to a
maximum of about 10 ft (3 m). SE signaling is common in PCs.
• High-voltage differential (HVD): Often used for servers, HVD uses a
tandem approach to signaling, with a data high line and a data low line. Each
device on the SCSI bus has a signal transceiver. When the controller
communicates with the device, devices along the bus receive the signal and
retransmit it until it reaches the target device. This allows for much greater
distances between the controller and the device, up to 80 ft (25 m).
• Low-voltage differential (LVD): LVD is a variation on HVD and works in
much the same way. The big difference is that the transceivers are smaller
and built into the SCSI adapter of each device. This makes LVD SCSI devices
more affordable and allows LVD to use less electricity to communicate. The
downside is that the maximum distance is half of HVD -- 40 ft (12 m).

An active terminator
Both HVD and LVD normally use passive terminators, even though the distance
between devices and the controller can be much greater than 3 ft (1 m). This is
because the transceivers ensure that the signal is strong from one end of the bus to
the other.

Accelerated Graphics Port (AGP)

The need for increased bandwidth between the main processor and the video
subsystem originally led to the development of the local I/O bus on the PC,
starting with the VESA local bus and eventually leading to the popular PCI bus. This
trend continues, with the need for video bandwidth now starting to push up against
the limits of even the PCI bus.

Much as was the case with the ISA bus before it, traffic on the PCI bus is starting to
become heavy on high-end PCs, with video, hard disk and peripheral data all
competing for the same I/O bandwidth. To combat the eventual saturation of the PCI
bus with video information, a new interface has been pioneered by Intel, designed
specifically for the video subsystem. It is called the Accelerated Graphics Port or
AGP.

AGP was developed in response to the trend towards greater and greater
performance requirements for video. As software evolves and computer use
continues into previously unexplored areas such as 3D acceleration and full-motion
video playback, both the processor and the video chipset need to process more and
more information. The PCI bus is reaching its performance limits in these
applications, especially with hard disks and other peripherals also in there fighting for
the same bandwidth.

Another issue has been the increasing demands for video memory. As 3D computing
becomes more mainstream, much larger amounts of memory become required, not
just for the screen image but also for doing the 3D calculations. This traditionally has
meant putting more memory on the video card for doing this work. There are two
problems with this:

• Cost: Video card memory is very expensive compared to regular system RAM.
• Limited Size: The amount of memory on the video card is limited: if you
decide to put 6 MB on the card and you need 4 MB for the frame buffer, you
have 2 MB left over for processing work and that's it (unless you do a
hardware upgrade). It's not easy to expand this memory, and you can't use it
for anything else if you don't need it for video processing.

AGP gets around these problems by allowing the video processor to access the main
system memory for doing its calculations. This is more efficient because this memory
can be shared dynamically between the system processor and the video processor,
depending on the needs of the system.

The idea behind AGP is simple: create a faster, dedicated interface between the video
chipset and the system processor. The interface is only between these two devices;
this has three major advantages: it makes it easier to implement the port, makes it
easier to increase AGP in speed, and makes it possible to put enhancements into the
design that are specific to video.
AGP is considered a port, and not a bus, because it only involves two devices (the
processor and video card) and is not expandable. One of the great advantages of
AGP is that it isolates the video subsystem from the rest of the PC so there isn't
nearly as much contention over I/O bandwidth as there is with PCI. With the video
card removed from the PCI bus, other PCI devices will also benefit from improved
bandwidth.

AGP is a new technology and was just introduced to the market in the third quarter
of 1997. The first support for this new technology will be from Intel's 440LX Pentium
II chipset. More information on AGP will be forthcoming as it becomes more
mainstream and is seen more in the general computing market. Interestingly, one of
Intel's goals with AGP was supposed to be to make high-end video more affordable
without requiring sophisticated 3D video cards. If this is the case, it really makes me
wonder why they are only making AGP available for their high-end, very expensive
Pentium II processor line. :^) Originally, AGP was rumored to be a feature on the
430TX Pentium socket 7 chipset, but it did not materialize. Via and other companies
are carrying the flag for future socket 7 chipset development now that Intel has
dropped it, and several non-Intel AGP-capable chipsets will be entering the market in
1998.

AGP Interface

The AGP interface is in many ways still quite similar to PCI. The slot itself is similar
physically in shape and size, but is offset further from the edge of the motherboard
than PCI slots are. The AGP specification is in fact based on the PCI 2.1 specification,
which includes a high-bandwidth 66 MHz speed that was never implemented on the
PC. AGP motherboards have a single expansion card slot for the AGP video card, and
usually one less PCI slot, and are otherwise quite similar to PCI motherboards.

AGP Bus Width, Speed and Bandwidth

The AGP bus is 32 bits wide, just the same as PCI, but instead of running at half of
the system (memory) bus speed the way PCI does, it runs at full bus speed. This
means that on a standard Pentium II motherboard AGP runs at 66 MHz instead of the
PCI bus's 33 MHz. This of course immediately doubles the bandwidth of the port:
instead of the 127.2 MB/s limit of PCI, AGP in its lowest speed mode has a
bandwidth of 254.3 MB/s. On top of that come the benefits of not having to share
bandwidth with other PCI devices.

In addition to doubling the speed of the bus, AGP has defined a 2X mode, which uses
special signaling to allow twice as much data to be sent over the port at the same
clock speed. What the hardware does is to send information on both the rising and
falling edges of the clock signal. Each cycle, the clock signal transitions from "0", to
"1" ("rising edge"), and back to "0" ("falling edge"). While PCI for example only
transfers data on one of these transitions each cycle, AGP transfers data on both.
The result is that the performance doubles again, to 508.6 MB/s theoretical
bandwidth. There is also a plan to implement a 4X mode, which will perform four
transfers per clock cycle: a whopping 1,017 MB/s of bandwidth!
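
All of the figures quoted above follow from one simple formula: bus width (in bytes) times clock speed times transfers per clock tick. The small sketch below reproduces them, using the nominal 66.66 MHz clock and binary megabytes (2^20 bytes) that the numbers in this section appear to be based on:

    # Sketch: reproducing the PCI and AGP bandwidth figures quoted above.
    def bandwidth_mb_per_s(width_bits, clock_mhz, transfers_per_clock=1):
        return width_bits / 8 * clock_mhz * 1_000_000 * transfers_per_clock / 2**20

    print(bandwidth_mb_per_s(32, 33.33))      # PCI:    about 127 MB/s
    print(bandwidth_mb_per_s(32, 66.66))      # AGP 1X: about 254 MB/s
    print(bandwidth_mb_per_s(32, 66.66, 2))   # AGP 2X: about 509 MB/s
    print(bandwidth_mb_per_s(32, 66.66, 4))   # AGP 4X: about 1,017 MB/s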

This is certainly very exciting, but we must temper this excitement somewhat (and
not just because AGP is new and we don't have much that is practical to evaluate
yet). It's great fun to talk about 1 GB/s bandwidth for the video card, but there's
only one problem: this is more than the bandwidth of the entire system bus of a
modern PC! If you recall, the data bus of a Pentium class or later PC is 64 bits wide
and runs at 66 MHz. This gives a total of 508.6 MB/s bandwidth, so the 1 GB/s
maximum isn't going to do much good until we get the data bus running much faster
than 66 MHz. Future motherboard chipsets will take the system bus to 100 MHz,
which will increase total memory bandwidth to 763 MB/s, a definite step in the right
direction, but still not enough to make 4X transfers feasible.

Also worth remembering is that the CPU also needs to have access to the system
memory, not just the video subsystem. If all 508.6 MB/s of system bandwidth is
taken up by video over AGP, what is the processor going to do? Again here, going to
100 MHz system speed will help immensely. In practical terms, the jury is still out on
AGP and will be for a while, though there can be no denying its tremendous promise.
AGP will deliver a peak bandwidth that is four times higher than the PCI bus by using
pipelining, sideband addressing, and more data transfers per clock.
It will also enable graphics cards to execute texture maps directly from system
memory instead of forcing them to pre-load the texture data into the graphics card's
local memory.

Features that set AGP apart from PCI

• Probably the most important feature of AGP is DIME (direct memory execute).
This gives AGP chips the capability to access main memory directly for the
complex operations of texture mapping.
• AGP provides the graphics card with two methods of directly accessing texture
maps in system memory: pipelining and sideband addressing.
• AGP makes multiple requests for data during a bus or memory access, while
PCI makes one request, and does not make another until the data it
requested has been transferred.
• AGP doesn't share bandwidth with other devices, whereas the PCI bus does
share bandwidth.

AGP vs. PCI:

• Pipelined requests vs. non-pipelined
• Address/data de-multiplexed vs. address/data multiplexed
• Peak at 533 MB/s in 32 bits vs. peak at 133 MB/s in 32 bits
• Single target, single master vs. multi target, multi master
• Memory read/write only (no other input/output operations) vs. a link to the entire system
• High/low priority queues vs. no priority queues

Further comparisons between AGP and PCI

• AGP is a port (it only connects two nodes), while PCI is a bus
• AGP does not replace the PCI bus; it is a dedicated connection that can be
used only by the graphics subsystem
• AGP and PCI also differ in terms of their minimum length and alignment
requirements for transactions. AGP transactions are multiples of 8 bytes in
length and are aligned on 8 byte boundaries, while PCI transactions must be
multiples of 4 bytes and are aligned on 4 byte boundaries.
