
NAITA

COMPUTER HARDWARE

Compiled by IT DIVISION, NAITA | W.K.Heshan

Contents

01. The Mouse
02. The Keyboard
03. Central Processing Unit (CPU)
04. Motherboard
05. Random Access Memory (RAM)
06. Video Card
07. Hard Disk Drive (HDD)
08. Optical Disk Drive
09. Power Supply Unit
10. Expansion Slots


What is Hardware?
Your PC (Personal Computer) is a system consisting of many components. Some of those components, like
Windows XP and all your other programs, are software. The stuff you can actually see and touch, and
would likely break if you threw it out a fifth-story window, is hardware.
Not everybody has exactly the same hardware. But those of you who have a desktop system, like the
example shown in Figure 1, probably have most of the components shown in that same figure. Those of you
with notebook computers probably have most of the same components; only in your case, the components
are all integrated into a single book-sized portable unit.

Figure 1
The system unit is the actual computer; everything else is called a peripheral device. Your computer's
system unit probably has at least one floppy disk drive, and one CD or DVD drive, into which you can insert
floppy disks and CDs. There's another disk drive, called the hard disk inside the system unit, as shown in
Figure 2. You can't remove that disk, or even see it. But it's there. And everything that's currently "in your
computer" is actually stored on that hard disk. (We know this because there is no place else inside the
computer where you can store information!).


Figure 2

The floppy drive and CD drive are often referred to as drives with removable media or removable drives for
short, because you can remove whatever disk is currently in the drive, and replace it with another. Your
computer's hard disk can store as much information as tens of thousands of floppy disks, so don't worry
about running out of space on your hard disk any time soon. As a rule, you want to store everything you
create or download on your hard disk. Use the floppy disks and CDs to send copies of files through the mail,
or to make backup copies of important items.


01. The Mouse


Obviously you know how to use your mouse, since you must have used it to get here.
But let's take a look at the facts and buzzwords anyway. Your mouse probably has at
least two buttons on it. The button on the left is called the primary mouse button; the
button on the right is called the secondary mouse button, or just the right mouse button.
I'll just refer to them as the left and right mouse buttons. Many mice have a small wheel
between the two mouse buttons, as illustrated in Figure 3.

Figure 3
The idea is to rest your hand comfortably on the mouse, with
your index finger touching (but not pressing on) the left mouse
button. Then, as you move the mouse, the mouse pointer (the
little arrow on the screen) moves in the same direction. When
moving the mouse, try to keep the buttons aimed toward the
monitor -- don't "twist" the mouse as that just makes it all the
harder to control the position of the mouse pointer.
If you find yourself reaching too far to get the mouse pointer where you want it to be on
the screen, just pick up the mouse, move it to where it's comfortable to hold it, and place
it back down on the mousepad or desk. The buzzwords that describe how you use the
mouse are as follows:


Point: To point to an item means to move the mouse pointer so that it's touching
the item.
Click: Point to the item, then tap (press and release) the left mouse button.
Double-click: Point to the item, and tap the left mouse button twice in rapid
succession - click-click as fast as you can.
Right-click: Point to the item, then tap the mouse button on the right.
Drag: Point to an item, then hold down the left mouse button as you move the
mouse. To drop the item, release the left mouse button.
Right-drag: Point to an item, then hold down the right mouse button as you move
the mouse. To drop the item, release the right mouse button.


02. The Keyboard


Like the mouse, the keyboard is a means of interacting with your computer. You really only need to use the
keyboard when you're typing text. Most of the keys on the keyboard are laid out like the keys on a
typewriter. But there are some special keys like Esc (Escape), Ctrl (Control), and Alt (Alternate). There are
also some keys across the top of the keyboard labeled F1, F2, F3, and so forth. Those are called the function
keys, and the exact role they play depends on which program you happen to be using at the moment.
Most keyboards also have a numeric keypad with the keys laid out like the keys on a typical adding
machine. If you're accustomed to using an adding machine, you might want to use the numeric keypad,
rather than the numbers across the top of the keyboard, to type numbers. It doesn't really matter which keys
you use. The numeric keypad is just there as a convenience to people who are accustomed to adding
machines.

Figure 4
Most keyboards also contain a set of navigation keys. You can use the navigation keys to move around
through text on the screen. The navigation keys won't move the mouse pointer; only the mouse
moves the mouse pointer.
On smaller keyboards where space is limited, such as on a notebook computer, the navigation keys and
numeric keypad might be one and the same. There will be a Num Lock key on the keypad. When Num
Lock is "on", the numeric keypad keys type numbers. When Num Lock is "off", the navigation
keys come into play. The Num Lock key acts as a toggle, which is to say that when you tap it, it switches to the
opposite state. For example, if Num Lock is on, tapping that key turns it off; if Num Lock is off, tapping that
key turns Num Lock on.


03. Central Processing Unit (CPU)


What is the processor? Well, in the simplest of terms, it's your computer's brain. The processor tells
your computer what to do and when to do it; it decides which tasks are more important and prioritizes them
according to your computer's needs.

There are, and have been, many processors on the market, running at many different speeds. The speed is
measured in megahertz, or MHz. A single MHz is 1 million cycles (or computer
instructions) per second, so if you have a processor running at 2000 MHz, then your computer is running at
2,000,000,000 cycles per second, which in more basic terms is the number of instructions your computer can
carry out each second. Another important abbreviation is gigahertz, or GHz. A single GHz, or 1 GHz, is the same as 1000
MHz. Sounds a bit confusing, so here is a simple conversion:
1000 MHz (megahertz) = 1 GHz (gigahertz) = 1,000,000,000 cycles per second (or computer instructions).
Now you can see why they abbreviate it; could you imagine going to a PC store and asking for a "one
thousand million cycle" PC, please? A bit of a mouthful, isn't it?
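To make the arithmetic concrete, here is a minimal Python sketch of the conversions just described, using the numbers from the text:

    # Clock-speed unit conversions: 1 MHz = 1,000,000 cycles per second,
    # and 1 GHz = 1000 MHz.
    mhz = 2000                            # a 2000 MHz processor
    cycles_per_second = mhz * 1_000_000   # 2,000,000,000 cycles per second
    ghz = mhz / 1000                      # 2.0 GHz

    print(cycles_per_second, ghz)         # 2000000000 2.0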
So when buying a new computer, always look for the fastest you can afford. The fastest on the market at the time
of writing this article is 3.8 GHz (3800 MHz). Remember, though, that it is not necessary to purchase such a
fast processor; balance your needs. Do you really need top of the range? Especially when the difference
between, say, a 3.5 GHz (3500 MHz) and a 3.8 GHz (3800 MHz) processor will be barely noticed (if noticed at
all) by you, while the price difference is around 100. With the money you save you could get a nice printer
and scanner package.
Now that we have covered the speeds, there is one more important subject to cover: which processor? There
are 3 competitors at present: the AMD Athlon, the Intel Pentium and the Intel Celeron. They come in many
guises, but basically the more cores they have and the higher the speed, the better and faster the processor.
Processors now come as dual core, triple core and quad core. These processors are the equivalent of running
two CPUs (dual core), three CPUs (triple core) or four (quad core).
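As an illustration, most operating systems report each core as a separate logical processor; a quick way to see this from Python:

    import os

    # On a dual-core machine this typically prints 2, on a quad-core 4
    # (it may print more where technologies such as Hyper-Threading expose
    # extra logical processors per core).
    print(os.cpu_count())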


In the past the Intel Pentium was the best and most expensive of them all, and it remains today one of
the most popular on the market. In layman's terms it is, or was, the designer processor, although
AMD has some superb, if not better, releases and equally highly priced and advanced
products. It would be hard to say which is best as they are direct competitors.

Lastly there is the Intel Celeron; this processor is a budget version of the Intel Pentium 4, the
processor you find in most budget computers. If the purse is tight and you need a computer,
then this is your port of call. You will find many sub-400 computers fitted with this
processor.


CPU Type and Socket

Socket name | Year of introduction | Year of EOL | CPU families | Package | Pin count | Pin pitch | Bus speed | Notes
DIP | 1970s | Still available | Intel 8086, Intel 8088 | DIP | 40 | 2.54 mm | 5/10 MHz |
PLCC | | Still available | Intel 80186, Intel 80286, Intel 80386 | PLCC | 68, 132 | 1.27 mm | 6-40 MHz |
Socket 1 | 1989 | | Intel 80486 | PGA | 169 | | 16-50 MHz |
Socket 2 | | | Intel 80486 | PGA | 238 | | 16-50 MHz |
Socket 3 | 1991 | | Intel 80486 | PGA | 237 | | 16-50 MHz |
Socket 4 | | | Intel Pentium | PGA | 273 | | 60-66 MHz |
Socket 5 | | | Intel Pentium, AMD K5, IDT WinChip C6, IDT WinChip 2 | PGA | 320 | | 50-66 MHz |
Socket 6 | | | Intel 80486 | PGA | 235 | | |
Socket 7 | 1994 | | Intel Pentium, Intel Pentium MMX, AMD K6 | PGA | 321 | | 50-66 MHz |
Super Socket 7 | 1998 | | AMD K6-2, AMD K6-III, Rise mP6, Cyrix MII | PGA | 321 | | 66-100 MHz |
Socket 8 | 1995 | | Intel Pentium Pro | PGA | 387 | | 60-66 MHz |
Slot 1 | 1997 | | Intel Pentium II, Intel Pentium III | Slot | 242 | | 66-133 MHz | Celeron (Covington, Mendocino), Pentium II (Klamath), Pentium III (Katmai) all versions, Pentium III (Coppermine)
Slot 2 | 1998 | | Intel Pentium II Xeon | Slot | 330 | | 100-133 MHz |
Socket 463/Socket NexGen | | | NexGen Nx586 | PGA | 463 | | |
Socket 499 | | | Alpha 21164A | Slot | 587 | | |
Slot A | 1999 | | AMD Athlon | Slot | 242 | | 100 MHz |
Slot B | | | Alpha 21264 | Slot | 587 | | |
Socket 370 | 1999 | | Intel Pentium III, Intel Celeron, VIA Cyrix III, VIA C3 | PGA | 370 | 1.27 mm | 66-133 MHz |
Socket 462/Socket A | 2000 | | AMD Athlon, AMD Duron, AMD Athlon XP, AMD Athlon XP-M, AMD Athlon MP, AMD Sempron | PGA | 462 | | 100-200 MHz | Double data rate bus, with a 400 MT/s (megatransfers/second) FSB in the later models
Socket 423 | 2000 | | Intel Pentium 4 | PGA | 423 | 1 mm | 400 MT/s (100 MHz) | Willamette core only
Socket 478/Socket N | 2000 | | Intel Pentium 4, Intel Celeron, Intel Pentium 4 EE, Intel Pentium 4 M | PGA | 478 | 1.27 mm | 400-800 MT/s (100-200 MHz) |
Socket 495 | 2000 | | Intel Celeron | PGA | 495 | 1.27 mm | |
PAC418 | 2001 | | Intel Itanium | PGA | 418 | | 133 MHz |
Socket 603 | 2001 | | Intel Xeon | PGA | 603 | | 400-533 MT/s (100-133 MHz) |
PAC611 | 2002 | | Intel Itanium 2, HP PA-8800, PA-8900 | PGA | 611 | | |
Socket 604 | 2002 | | Intel Xeon | PGA | 604 | 1.27 mm | 400-1066 MT/s (100-266 MHz) |
Socket 754 | 2003 | | AMD Athlon 64, AMD Sempron, AMD Turion 64 | PGA | 754 | 1.27 mm | 200-800 MHz |
Socket 940 | 2003 | | AMD Opteron, AMD Athlon 64 FX | PGA | 940 | 1.27 mm | 200-1000 MHz |
Socket 479 | 2003 | | Intel Pentium M, Intel Celeron M | PGA | 479 | 1.27 mm | 400-533 MT/s (100-133 MHz) | For notebook platform
Socket 939 | 2004 | 11/2008 | AMD Athlon 64, AMD Athlon 64 FX, AMD Athlon 64 X2, AMD Opteron | PGA | 939 | 1.27 mm | 200-1000 MHz | Support of Athlon 64 FX to 1 GHz; support of Opteron limited to 100-series only
LGA 775/Socket T | 2004 | | Intel Pentium 4, Intel Pentium D, Intel Celeron, Intel Celeron D, Intel Pentium XE, Intel Core 2 Duo, Intel Core 2 Quad, Intel Xeon | LGA | 775 | 1.09 mm x 1.17 mm | 1600 MHz |
Socket 563 | | | AMD Athlon XP-M | PGA | 563 | | |
Socket M | 2006 | | Intel Core Solo, Intel Core Duo, Intel Dual-Core Xeon, Intel Core 2 Duo | PGA | 478 | | 533-667 MT/s (133-166 MHz) | For notebook platform; replaces Socket 479
LGA 771/Socket J | 2006 | | Intel Xeon | LGA | 771 | 1.09 mm x 1.17 mm | 1600 MHz |
Socket S1 | 2006 | | AMD Turion 64 X2 | PGA | 638 | 1.27 mm | 200-800 MHz | For notebook platform
Socket AM2 | 2006 | | AMD Athlon 64, AMD Athlon 64 X2 | PGA | 940 | 1.27 mm | 200-1000 MHz | Replaces Socket 754 and Socket 939
Socket F | 2006 | | AMD Athlon 64 FX, AMD Opteron | LGA | 1207 | 1.1 mm | | Replaces Socket 940
Socket AM2+ | 2007 | | AMD Athlon 64, AMD Athlon X2, AMD Phenom, AMD Phenom II | PGA | 940 | | 200-2600 MHz | Separated power planes; replaces Socket AM2; AM2+ package CPUs can work in Socket AM2, and AM2 package CPUs can work in Socket AM2+
Socket P | 2007 | | Intel Core 2 | PGA | 478 | | 533-1066 MT/s (133-266 MHz) | For notebook platform; replaces Socket M
Socket 441 | 2008 | | Intel Atom | PGA | 441 | | 400-667 MHz |
LGA 1366/Socket B | 2008 | | Intel Core i7 (900 series), Intel Xeon (35xx, 36xx, 55xx, 56xx series) | LGA | 1366 | | 4.8-6.4 GT/s | Replaces server-oriented Socket J (LGA 771) in the entry level
Socket AM3 | 2009 | | AMD Phenom II, AMD Athlon II, AMD Sempron (Sempron 140 only) | PGA | 941 | 1.27 mm | 200-3200 MHz | Separated power planes; replaces Socket AM2+; AM3 package CPUs can work in Socket AM2/AM2+
LGA 1156/Socket H | 2009 | | Intel Core i7 (800 series), Intel Core i5 (700, 600 series), Intel Core i3 (500 series), Intel Xeon (X3400, L3400 series), Intel Pentium (G6000 series), Intel Celeron (G1000 series) | LGA | 1156 | | 2.5 GT/s | DMI bus is a (perhaps modified) PCI-E x4 v1.1 interface
Socket G34 | 2010 | | AMD Opteron (6000 series) | LGA | 1974 | | 200-3200 MHz | Replaces Socket F
Socket C32 | 2010 | | AMD Opteron (4000 series) | LGA | 1207 | | 200-3200 MHz | Replaces Socket F, Socket AM3
LGA 1248 | 2010 | | Intel Itanium 9300-series | LGA | 1248 | | 4.8 GT/s |
LGA 1567 | 2010 | | Intel Xeon 6500/7500-series | LGA | 1567 | | 4.8-6.4 GT/s |
LGA 1155/Socket H2 | 2011/Q1 | | Intel Sandy Bridge-DT | LGA | 1155 | | 5 GT/s | Supports 20 PCI-E 2.0 lanes
LGA 2011/Socket R | Future (2011/Q3) | | Intel Sandy Bridge B2 | LGA | 2011 | | 4.8-6.4 GT/s | Supports 40 PCI-E 3.0 lanes
Socket FM1 | 2011 | | AMD Llano processor | PGA | 905 | 1.27 mm | |


04. Motherboard
The motherboard is the most essential component in a personal computer. It is the piece of
hardware which contains the computer's micro-processing chip, and everything attached to it
is vital to making the computer run.

Motherboard Components
If you open your computer's case, the motherboard is the flat, rectangular piece of circuit
board to which everything seems to connect for one reason or another. It contains the
following key components:

Microprocessor "socket": defines what kind of central processing unit the motherboard
uses;
Chipset: forms the computer's logic system. It is usually composed of two parts
called bridges (a "north" bridge and its opposite, the "south" bridge), which connect
the CPU to the rest of the system;
Basic Input/Output System (BIOS) chip: controls the most basic functions of the
computer; and
Real-time clock: a battery-operated chip which maintains the system's time
and other basic functions.

The motherboard also has slots or ports for the attachment of various peripherals or support
systems/hardware. There is an Accelerated Graphics Port (AGP), which is used exclusively for
video cards; Integrated Drive Electronics (IDE), which provides the interfaces for the hard
disk drives; memory or RAM card slots; and Peripheral Component Interconnect (PCI), which
provides electronic connections for video capture cards and network cards, among others.


How a Motherboard Works


The most important thing to remember about the motherboard is that it is a printed circuit
board which provides all the connections, pathways and "lines" linking the different
components of the computer, and specifically the Central Processing Unit or CPU,
which is where (as its name implies) all the "processing" goes on, to everything else.
The CPU or "chip" (the most popular of which is Intel's Pentium series) is an assembly of
transistors and other devices (the Pentium 4 has over 40 million transistors) which performs
myriad programmed tasks.
The CPU rests in a "socket" on the motherboard which is connected to the other components
through the board's printed circuits. The most important connections are to the chipsets:
the northbridge chipset is connected to the main computer memory (RAM), while the
southbridge is connected to the peripherals: video and audio cards, the IDE controllers
for the hard disk, etc.
Aside from these, the most important element of the motherboard is the BIOS chip, which
performs key checks on the power supply, the hard disk drive, the operating system, etc.
before the computer actually starts "booting up". Turning on the computer automatically
starts the BIOS chip, which performs its diagnostic functions and then powers up the CPU,
which in its turn starts powering up the other peripherals (hard disk, operating system,
video and audio, etc.).
This is why the motherboard is the key component of the computer. It is, in effect, the
"housing" for the CPU: the place where the latter resides and through which commands,
instructions, and power course before being sent out to other components.


Integrated peripherals
Block diagram of a modern motherboard, which supports many on-board peripheral functions as well
as several expansion slots.
With the steadily declining costs and size of integrated circuits, it is now possible to include support
for many peripherals on the motherboard. By combining many functions on one PCB, the physical size
and total cost of the system may be reduced; highly integrated motherboards are thus especially
popular in small form factor and budget computers.
For example, the ECS RS485M-M,[6] a typical modern budget motherboard for computers based on
AMD processors, has on-board support for a very large range of peripherals:
disk controllers for a floppy disk drive, up to 2 PATA drives, and up to 6 SATA drives (including RAID 0/1 support)
integrated graphics controller supporting 2D and 3D graphics, with VGA and TV output
integrated sound card supporting 8-channel (7.1) audio and S/PDIF output
Fast Ethernet network controller for 10/100 Mbit networking
USB 2.0 controller supporting up to 12 USB ports
IrDA controller for infrared data communication (e.g. with an IrDA-enabled cellular phone or printer)
temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components

Expansion cards to support all of these functions would have cost hundreds of dollars
even a decade ago; however, as of April 2007 such highly integrated motherboards are
available for as little as $30 in the USA.


05. Random Access Memory (RAM)


"Computer memory" often refers to several related things. Made up of a few different components, it
decides how fast the CPU can process data. Although the CPU can accept large amounts of data at a given
time, it virtually comes to a stop if the data is not available to it. This part of processing is what has evolved
over the years.
Because a memory that could supply the CPU with all the data it needs at full speed would be very expensive,
designers have tiered it: the most inexpensive and largest storage sits at the farthest end,
while an expensive but small memory component services the central processing unit directly. The flow of
bytes between the hard disk, the RAM, the cache and finally the CPU is what runs the computer.
If you look at the hierarchy of computer memory, at the base is the most inexpensive storage, which
includes the hard disk, USB drives, etc. Above it is the RAM or random access memory, which
comprises the virtual memory and the physical RAM. Above it in the pyramid is the even smaller cache,
which is finally built into the CPU. An application, when opened, is loaded into RAM, although not in its
entirety; the rest is loaded only when needed. This is followed by loading any files that are
required by the application.
What is in the RAM is what the CPU might need immediately, which is why RAM is called a
temporary storage area. The CPU continuously reads and writes data to the RAM. The speed of the RAM in
turn depends on the speed of the bus, that is, the number of bytes of data that can be transferred to and fro.
Burst mode, where more than one byte of data is read at one time from the RAM, and pipelining, or working
in assembly-line fashion, are two ways the latency problem of the RAM is addressed to further enhance the
speed of processing.
Then there is the cache, right at the top of the pyramid of computer memory. The level 1 cache, a
small but expensive memory component, is built right into the CPU. The cache and registers hold all the
information that the CPU requires for immediate access. The cache uses a special type of RAM called
static random access memory or SRAM. The registers are memory cells built into the CPU that contain
specific pieces of data required by the CPU.
Computer memory can also be volatile or non-volatile. While volatile memory like that of
RAM is erased as soon as the power is switched off, non-volatile memory, e.g. that of ROM and of
flash and USB drives, persists even after the power is switched off. Flash memory devices such as memory
sticks are solid-state storage devices which are used for fast and easy storage of data, and because of their
compact size are used in gaming consoles and digital cameras.


Types of RAM

SDRAM (Synchronous Dynamic Random-Access Memory)

Almost all systems used to ship with 3.3-volt, 168-pin SDRAM DIMMs. SDRAM is not an extension of
older EDO DRAM but a new type of DRAM altogether. SDRAM started out running at 66 MHz, while
older fast page mode DRAM and EDO max out at 50 MHz. SDRAM is able to scale to 133 MHz (PC133)
officially, and unofficially up to 180 MHz or higher. As processors get faster, new generations of memory
such as DDR and RDRAM are required to get proper performance.

DDR SDRAM (Double Data Rate Synchronous Dynamic Random-Access Memory)

DDR basically doubles the rate of data transfer of standard SDRAM by transferring data on both the rising
and falling edge of each clock cycle. DDR memory operating at 333 MHz actually operates at 166 MHz * 2 (aka PC333 /
PC2700) or 133 MHz * 2 (PC266 / PC2100). DDR is a 2.5-volt technology that uses 184 pins in its DIMMs. It
is incompatible with SDRAM physically, but uses a similar parallel bus, making it easier to implement than
RDRAM, which is a different technology.


Operation of the random access memory


The random access memory comprises hundreds of thousands of small capacitors that store electric charges. When
charged, the logical state of the capacitor is equal to 1, otherwise it is 0, meaning that each capacitor
represents one memory bit.
Given that the capacitors become discharged, they must be constantly recharged (the exact term is refresh) at
regular intervals, known as the refresh cycle. DRAM memories, for example, require refresh cycles of
around 15 nanoseconds (ns).
Each capacitor is coupled with a transistor (MOS-type) enabling "recovery" or amendment of the status of
the capacitor. These transistors are arranged in the form of a table (matrix): thus we access a memory
box (also called a memory point) via a row and a column.

Each memory point is thus characterised by an address which corresponds to a row number and a column
number. This access is not instant; the access wait is known as latency time. Consequently, the time
required for access to data in the memory is equal to cycle time plus latency time.
Thus, for a DRAM memory, access time is 60 nanoseconds (35 ns cycle time plus 25 ns latency time). On a
computer, the cycle time corresponds to the inverse of the clock frequency; for example, for a computer
with a frequency of 200 MHz, cycle time is 5 ns (1/(200*10^6) s).
Consequently, a computer with a high frequency but using memories with an access time much longer than the
processor cycle time must perform wait states to access the memory. For a computer with a frequency of 200
MHz using DRAM memories (with an access time of 60 ns), there are 11 wait states for a transfer cycle. The
computer's performance decreases as the number of wait states increases, so the use
of faster memories is recommended.
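A quick back-of-the-envelope check of these figures, sketched in Python using the values quoted above:

    # Cycle time is the inverse of the clock frequency.
    clock_hz = 200 * 10**6                  # 200 MHz
    cycle_time_ns = 1e9 / clock_hz          # 5.0 ns

    # A 60 ns DRAM (35 ns cycle + 25 ns latency) behind a 5 ns processor cycle:
    dram_access_ns = 60
    wait_states = round(dram_access_ns / cycle_time_ns) - 1

    print(cycle_time_ns, wait_states)       # 5.0 11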


RAM module formats


There are many types of random access memory. They exist in the form of memory modules that can be
plugged into the motherboard.
Early memories existed in the form of chips called DIP (Dual Inline Package). Nowadays, memories
generally exist in the form of modules, which are cards that can be plugged into connectors made for this purpose.
There are generally three types of RAM module:
modules in SIMM format (Single Inline Memory Module): these are printed circuit boards with one side
equipped with memory chips. There are two types of SIMM module, according to the number of
connectors:

SIMM modules with 30 connectors (dimensions 89x13 mm) are 8-bit memories with which first-generation PCs (286, 386) were equipped.

SIMM modules with 72 connectors (dimensions 108x25 mm) are memories able to store 32 bits
of data simultaneously. These memories are found on PCs from the 386DX to the first Pentiums. On
the latter, the processor works with a 64-bit data bus; this is why these computers must be equipped
with two SIMM modules. 30-pin modules cannot be installed in 72-connector positions because a
notch (at the centre of the connectors) would prevent them from being plugged in.

modules in DIMM format (Dual Inline Memory Module) are 64-bit memories, which explains why they
do not need pairing. DIMM modules have memory chips on both sides of the printed circuit board and
also have 84 connectors on each side, giving them a total of 168 pins. In addition to having larger
dimensions than SIMM modules (130x25mm), these modules have a second notch to avoid confusion.


It may be interesting to note that the DIMM connectors have been enhanced to make insertion easier, thanks
to levers located either side of the connector.

Smaller modules also exist; they are known as SO DIMM (Small Outline DIMM), designed for portable
computers. SO DIMM modules have only 144 pins for 64-bit memories and 77 pins for 32-bit memories.
modules in RIMM format (Rambus Inline Memory Module, also called RD-RAM or DRD-RAM) are 64-bit memories developed by Rambus. They have 184 pins. These modules have two locating notches to
avoid any risk of confusion with the previous modules.

Given their high transfer speed, RIMM modules have a thermal film which is supposed to improve heat
transfer.
As for DIMMs, smaller modules also exist; they are known as SO RIMM (Small Outline RIMM), designed
for portable computers. SO RIMM modules have only 160 pins.

DRAM PM
The DRAM (Dynamic RAM) was the most common type of memory at the start of this millennium. It is a
memory whose transistors are arranged in a matrix of rows and columns. A transistor, coupled with a
capacitor, holds one bit of information. Since 1 byte contains 8 bits, a DRAM memory module of 256 MB will
thus contain 256 * 2^10 * 2^10 = 256 * 1024 * 1024 = 268,435,456 bytes, i.e. 268,435,456 * 8 =
2,147,483,648 bits, and hence 2,147,483,648 transistors. A 256 MB module thus really has a capacity of
268,435,456 bytes, or about 268 million bytes! These memories have access times of 60 ns.
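The same counting, verified in a short Python sketch:

    # Bits and transistors in a 256 MB DRAM module: one transistor per bit.
    bytes_total = 256 * 2**10 * 2**10    # 268,435,456 bytes
    bits_total = bytes_total * 8         # 2,147,483,648 bits = transistors

    print(bytes_total, bits_total)       # 268435456 2147483648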
Furthermore, access to memory generally concerns data stored consecutively in the memory. Thus burst
mode allows access to the three pieces of data following the first piece with no additional latency time. In
this burst mode, the time required to access the first piece of data is equal to cycle time plus latency time, while
the time required to access each of the other three pieces of data is equal to just the cycle time; the four access times
are thus written in the form X-Y-Y-Y. For example, 5-3-3-3 indicates a memory for which 5 clock cycles are
needed to access the first piece of data and 3 for each of the subsequent ones.
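The timing notation is easy to total up; a small Python sketch comparing the burst figures quoted in the next few sections:

    # Total clock cycles to read 4 consecutive pieces of data,
    # given an X-Y-Y-Y burst timing.
    def burst_cycles(timing: str) -> int:
        first, *rest = (int(t) for t in timing.split("-"))
        return first + sum(rest)

    print(burst_cycles("5-3-3-3"))  # 14 cycles (classic DRAM)
    print(burst_cycles("5-2-2-2"))  # 11 cycles (EDO, see below)
    print(burst_cycles("5-1-1-1"))  # 8 cycles (SDRAM: 3 fewer than EDO)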

DRAM FPM
To speed up access to the DRAM, there is a technique known as paging, which involves accessing data
located on the same row by changing only the column address, thus avoiding repetition of the row
number between each read. This is known as DRAM FPM (Fast Page Mode). FPM achieves
access times of around 70 to 80 nanoseconds for operating frequencies between 25 and 33 MHz.

DRAM EDO
DRAM EDO (Extended Data Out, sometimes also called "hyper page") was introduced in 1995. The
technique used with this type of memory involves addressing the next column while reading the data in the
current column. This creates an overlap of accesses, thus saving time on each cycle. EDO memory access time is
around 50 to 60 nanoseconds for operating frequencies between 33 and 66 MHz.
Thus RAM EDO, when used in burst mode, achieves 5-2-2-2 cycles, representing a gain of 4 cycles on
access to 4 pieces of data. Since EDO memory did not work with frequencies higher than 66 MHz, it was
abandoned in favour of the SDRAM.


SDRAM
The SDRAM (Synchronous DRAM), introduced in 1997, allows reading of data synchronised with the
motherboard bus, unlike the EDO and FPM memories (known as asynchronous), which have their own
clock. The SDRAM thus eliminates waiting times due to synchronisation with the motherboard. This
achieves a 5-1-1-1 burst mode cycle, a gain of 3 cycles in comparison with RAM EDO. The
SDRAM is able to operate at frequencies up to 150 MHz, allowing it to achieve access times of around
10 ns.

DR-SDRAM (Rambus DRAM)


The DR-SDRAM (Direct Rambus DRAM) is a type of memory that transfers data over a 16-bit bus at a
frequency of 800 MHz, giving it a bandwidth of 1.6 GB/s. As with the SDRAM, this type of memory is
synchronised with the bus clock to enhance data exchange. However, the RAMBUS memory is a proprietary
technology, meaning that any company wishing to produce RAM modules using this technology must pay
royalties to both RAMBUS and Intel.

DDR-SDRAM
The DDR-SDRAM (Double Data Rate SDRAM) is a memory, based on the SDRAM technology, that
doubles the transfer rate of the SDRAM at the same clock frequency.
Data are read or written into memory based on a clock. Standard DRAM memories use a method known
as SDR (Single Data Rate), involving reading or writing a piece of data on each rising edge of the clock.

The DDR doubles the rate of reading/writing, with a clock at the same frequency, by transferring data on
each rising edge and on each falling edge of the clock.

DDR memories generally have a product name such as PCXXXX, where "XXXX" represents the speed in
MB/s.


DDR2-SDRAM
DDR2 (or DDR-II) memory achieves speeds that are twice as high as those of DDR at the same
external frequency.
QDR (Quadruple Data Rate, or quad-pumped) designates the reading and writing method used: DDR2
memory uses two separate channels for reading and writing, so that it is able to send or receive twice
as much data as DDR.

DDR2 also has more connectors than classic DDR (240 for DDR2 compared with 184 for DDR).

Summary table
The table below gives the equivalence between the motherboard frequency (FSB), the memory (RAM)
frequency and its speed:

Memory | Name | Frequency (RAM) | Frequency (FSB) | Speed
DDR200 | PC1600 | 200 MHz | 100 MHz | 1.6 GB/s
DDR266 | PC2100 | 266 MHz | 133 MHz | 2.1 GB/s
DDR333 | PC2700 | 333 MHz | 166 MHz | 2.7 GB/s
DDR400 | PC3200 | 400 MHz | 200 MHz | 3.2 GB/s
DDR433 | PC3500 | 433 MHz | 217 MHz | 3.5 GB/s
DDR466 | PC3700 | 466 MHz | 233 MHz | 3.7 GB/s
DDR500 | PC4000 | 500 MHz | 250 MHz | 4 GB/s
DDR533 | PC4200 | 533 MHz | 266 MHz | 4.2 GB/s
DDR538 | PC4300 | 538 MHz | 269 MHz | 4.3 GB/s
DDR550 | PC4400 | 550 MHz | 275 MHz | 4.4 GB/s
DDR2-400 | PC2-3200 | 400 MHz | 100 MHz | 3.2 GB/s
DDR2-533 | PC2-4300 | 533 MHz | 133 MHz | 4.3 GB/s
DDR2-667 | PC2-5300 | 667 MHz | 167 MHz | 5.3 GB/s
DDR2-675 | PC2-5400 | 675 MHz | 172.5 MHz | 5.4 GB/s
DDR2-800 | PC2-6400 | 800 MHz | 200 MHz | 6.4 GB/s
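The PC-names in the table follow directly from the 64-bit (8-byte) width of the memory bus: speed in MB/s = effective frequency x 8. A small Python sketch (marketed names round the exact product):

    # Peak transfer rate of a DDR module on a 64-bit (8-byte) bus.
    def ddr_rate_mb_s(effective_mhz: int) -> int:
        return effective_mhz * 8

    for f in (200, 266, 333, 400):
        print(f"DDR{f}: {ddr_rate_mb_s(f)} MB/s")
    # DDR200: 1600, DDR266: 2128 (sold as PC2100),
    # DDR333: 2664 (PC2700), DDR400: 3200 (PC3200)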


Synchronisation (timings)
It is not unusual to see figures such as 3-2-2-2 or 2-3-3-2 describing the parameterisation of the random
access memory. This succession of four figures describes the synchronisation of the memory (its timings), i.e.
the succession of clock cycles needed to access a piece of data stored in the RAM. These four figures
generally correspond, in order, to the following values:
CAS delay or CAS latency (CAS meaning Column Address Strobe): the number of clock cycles
that elapse between the read command being sent and the piece of data actually arriving. In other
words, it is the time needed to access a column.
RAS Precharge Time (known as tRP, RAS meaning Row Address Strobe): the number of clock
cycles between two RAS instructions, i.e. between two accesses to a row.
RAS to CAS delay (sometimes called tRCD): the number of clock cycles corresponding to the access
time from a row to a column.
RAS active time (sometimes called tRAS): the number of clock cycles corresponding to the time
needed to access a row.
Memory modules are equipped with a device called SPD (Serial Presence Detect), allowing the BIOS to
find out the nominal setting values defined by the manufacturer. The SPD is an EEPROM whose data will be loaded
by the BIOS if the user chooses the "auto" setting.
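As a reading aid, a small Python sketch that splits such a score into the four values in the order given above (the exact ordering can vary between BIOS vendors, so treat the field names as an assumption):

    from typing import NamedTuple

    class Timings(NamedTuple):
        cas: int    # CAS latency
        trp: int    # RAS precharge time
        trcd: int   # RAS-to-CAS delay
        tras: int   # RAS active time

    t = Timings(*(int(x) for x in "3-2-2-2".split("-")))
    print(t)   # Timings(cas=3, trp=2, trcd=2, tras=2)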

Error correction
Some memories have mechanisms for correcting errors to ensure the integrity of the data they contain. This
type of memory is generally used on systems working on critical data, which is why this type of memory is
found in servers.

Parity bit
Modules with a parity bit ensure that the data contained in the memory are the data required. To achieve this,
an extra bit for each byte stored in the memory is used to store a summary of the data bits. The parity bit is
1 when the sum of the data bits is an odd number and 0 in the opposite case.
Thus modules with a parity bit allow the integrity of data to be checked but do not provide for error
correction. Moreover, for every 9 MB of memory, only 8 MB will be used to store data, since the last megabyte is used
to store the parity bits.
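The parity rule above is a one-liner to compute; a minimal Python sketch:

    # Parity bit for one byte: 1 if the number of 1-bits is odd, else 0.
    def parity_bit(byte: int) -> int:
        return bin(byte & 0xFF).count("1") % 2

    print(parity_bit(0b10110100))  # four 1-bits (even) -> 0
    print(parity_bit(0b10110101))  # five 1-bits (odd)  -> 1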

ECC modules
ECC (Error Correcting Code) memory modules are memories with several bits dedicated to error
correction (known as control bits). These modules, used mainly in servers, allow detection and
correction of errors.

Dual Channel
Some memory controllers offer a dual channel for the memory. The memory modules are used in pairs to
achieve higher bandwidth and thus make the best use of the system's capacity. When using the Dual
Channel, it is vital to use identical modules in a pair (same frequency and capacity and preferably the same
brand).


06. Video card


A video card (also called a graphics card or graphics adapter) is an expansion card which generates
output images to a display. Most video cards offer various functions such as accelerated rendering of 3D
scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, or the ability to connect multiple monitors
(multi-monitor). Other modern high-performance video cards are used for more graphically demanding
purposes, such as PC games.

Video hardware is often integrated into the motherboard; however, all modern
motherboards provide expansion ports to which a video card can be attached. In this configuration it is
sometimes referred to as a video controller or graphics controller. Modern low-end to mid-range
motherboards often include a graphics chipset manufactured by the developer of the northbridge (i.e. an
nForce chipset with Nvidia graphics or an Intel chipset with Intel graphics) on the motherboard. This
graphics chip usually has a small quantity of embedded memory and takes some of the system's main RAM,
reducing the total RAM available. This is usually called integrated graphics or on-board graphics, and is
low-performance and undesirable for those wishing to run 3D applications. A dedicated graphics card, on the
other hand, has its own RAM and processor specifically for processing video images, and thus offloads this
work from the CPU and system RAM. Almost all of these motherboards allow the disabling of the
integrated graphics chip in the BIOS, and have an AGP, PCI, or PCI Express slot for adding a higher-performance graphics card in place of the integrated graphics.

Components
A modern video card consists of a printed circuit board on which the components are mounted. These
include:

Graphics Processing Unit


A GPU is a dedicated processor optimized for accelerating graphics. The processor is designed specifically
to perform floating-point calculations, which are fundamental to 3D graphics rendering and 2D picture
drawing. The main attributes of the GPU are the core clock frequency, which typically ranges from
250 MHz to 4 GHz, and the number of pipelines (vertex and fragment shaders), which translate a 3D image
characterized by vertices and lines into a 2D image formed by pixels.
Modern GPUs are massively parallel and fully programmable. Their computing power is orders of
magnitude higher than that of CPUs for such workloads. As a consequence, they challenge CPUs in high-performance
computing and have pushed the leading processor manufacturers to respond.


Video BIOS
The video BIOS or firmware contains the basic program, which is usually hidden, that governs the video
card's operations and provides the instructions that allow the computer and software to interact with the card.
It may contain information on the memory timing, operating speeds and voltages of the graphics processor,
RAM, and other information. It is sometimes possible to change the BIOS (e.g. to enable factory-locked
settings for higher performance), although this is typically only done by video card overclockers and has the
potential to irreversibly damage the card.

Video memory

The memory capacity of most modern video cards ranges from 128 MB to 8 GB. Since video
memory needs to be accessed by the GPU and the display circuitry, it often uses special high-speed
or multi-port memory, such as VRAM, WRAM, SGRAM, etc. Around 2003, the video memory
was typically based on DDR technology. During and after that year, manufacturers moved
towards DDR2, GDDR3, GDDR4 and GDDR5. The effective memory clock rate in modern cards is
generally between 400 MHz and 3.8 GHz.

Type | Memory clock rate (MHz) | Bandwidth (GB/s)
DDR | 166 - 950 | 1.2 - 30.4
DDR2 | 533 - 1000 | 8.5 - 16
GDDR3 | 700 - 2400 | 5.6 - 156.6
GDDR4 | 2000 - 3600 | 128 - 200
GDDR5 | 900 - 5600 | 130 - 230
Video memory may be used for storing other data as well as the screen image, such as the Z-buffer, which
manages the depth coordinates in 3D graphics, textures, vertex buffers, and compiled shader programs.
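The bandwidth figures in the table above can be approximated from the memory clock, the transfers per clock (2 for DDR-type memories) and the bus width; the 256-bit bus in this Python sketch is an assumed example, not a figure from the text:

    # Approximate peak video-memory bandwidth in GB/s.
    def vram_bandwidth_gb_s(clock_mhz: float, transfers_per_clock: int,
                            bus_width_bits: int) -> float:
        return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits // 8) / 1e9

    # e.g. 900 MHz GDDR3 on a 256-bit bus, double data rate:
    print(vram_bandwidth_gb_s(900, 2, 256))   # 57.6 GB/s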

RAMDAC
The RAMDAC, or Random Access Memory Digital-to-Analog Converter, converts digital signals to analog
signals for use by a computer display that uses analog inputs, such as CRT displays. The RAMDAC is a kind
of RAM chip that regulates the functioning of the graphics card. Depending on the number of bits used and
the RAMDAC data-transfer rate, the converter will be able to support different computer-display refresh
rates. With CRT displays, it is best to work over 75 Hz and never under 60 Hz, in order to minimize flicker.
(With LCD displays, flicker is not a problem.) Due to the growing popularity of digital computer displays
and the integration of the RAMDAC onto the GPU die, it has mostly disappeared as a discrete component.
All current LCDs, plasma displays and TVs work in the digital domain and do not require a RAMDAC.
A few remaining legacy LCD and plasma displays feature analog inputs (VGA,
component, SCART, etc.) only. These require a RAMDAC, but they reconvert the analog signal back to
digital before they can display it, with the unavoidable loss of quality stemming from this digital-to-analog-to-digital conversion.
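How the RAMDAC rate limits the refresh rate can be sketched as follows; the 1.4 blanking-overhead factor is a rule-of-thumb assumption, not a figure from the text:

    # Approximate highest refresh rate a RAMDAC can drive at a resolution.
    def max_refresh_hz(ramdac_mhz: float, h_pixels: int, v_pixels: int,
                       blanking_overhead: float = 1.4) -> float:
        return ramdac_mhz * 1e6 / (h_pixels * v_pixels * blanking_overhead)

    print(round(max_refresh_hz(400, 1600, 1200)))   # ~149 Hz
    print(round(max_refresh_hz(400, 2048, 1536)))   # ~91 Hz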


Outputs

Video In Video Out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for High-definition
television (HDTV), and DB-15 for Video Graphics Array (VGA)
The most common connection systems between the video card and the computer display are:

Video Graphics Array (VGA) (DB-15)

Video Graphics Array (VGA) (DE-15).


An analog-based standard adopted in the late 1980s, designed for CRT displays; also called the VGA connector.
Some problems of this standard are electrical noise, image distortion and sampling error in evaluating pixels.

Digital Visual Interface (DVI)

Digital Visual Interface (DVI-I).


A digital-based standard designed for displays such as flat-panel displays (LCDs, plasma screens, wide high-definition television displays) and video projectors. In some rare cases high-end CRT monitors also use DVI.
It avoids image distortion and electrical noise, mapping each pixel from the computer to a display
pixel, using the display's native resolution. It is worth noting that most manufacturers include a DVI-I connector,
allowing (via a simple adapter) standard RGB signal output to an old CRT or LCD monitor with VGA input.


Video In Video Out (VIVO) for S-Video, Composite video and Component
video

9-pin mini-DIN connector, frequently used for VIVO connections.

These connectors are included to allow connection to televisions, DVD players, video recorders and video game consoles.
They often come in two 10-pin mini-DIN connector variations, and the VIVO splitter cable generally comes
with either 4 connectors (S-Video in and out + composite video in and out) or 6 connectors (S-Video in and
out + component PB out + component PR out + component Y out [also composite out] + composite in).

High-Definition Multimedia Interface (HDMI)

High-Definition Multimedia Interface (HDMI)

An advanced digital audio/video interconnect released in 2003, commonly used to connect game
consoles and DVD players to a display. HDMI supports copy protection through HDCP.

DisplayPort

DisplayPort

An advanced license- and royalty-free digital audio/video interconnect released in 2007. DisplayPort
intends to replace VGA and DVI for connecting a display to a computer.


Motherboard interface

S-100 bus: designed in 1974 as a part of the Altair 8800, it was the first industry-standard bus for the
microcomputer industry.
ISA: Introduced in 1981 by IBM, it became dominant in the marketplace in the 1980s. It was an 8- or
16-bit bus clocked at 8 MHz.
NuBus: Used in the Macintosh II, it was a 32-bit bus with an average bandwidth of 10 to 20 MB/s.
MCA: Introduced in 1987 by IBM, it was a 32-bit bus clocked at 10 MHz.
EISA: Released in 1988 to compete with IBM's MCA, it was compatible with the earlier ISA bus. It was
a 32-bit bus clocked at 8.33 MHz.
VLB: An extension of ISA, it was a 32-bit bus clocked at 33 MHz.
PCI: Replaced the EISA, ISA, MCA and VESA buses from 1993 onwards. PCI allowed dynamic
connectivity between devices, avoiding the manual adjustment of jumpers. It is a 32-bit bus clocked at
33 MHz.
UPA: An interconnect bus architecture introduced by Sun Microsystems in 1995. It had a 64-bit bus
clocked at 67 or 83 MHz.
USB: Although mostly used for miscellaneous devices, such as secondary storage devices and toys, USB
displays and display adapters exist.
AGP: First used in 1997, it is a dedicated-to-graphics bus. It is a 32-bit bus clocked at 66 MHz.
PCI-X: An extension of the PCI bus, it was introduced in 1998. It improves upon PCI by extending the
width of the bus to 64 bits and the clock frequency to up to 133 MHz.
PCI Express: Abbreviated PCIe, it is a point-to-point interface released in 2004. In 2006 it provided double
the data-transfer rate of AGP. It should not be confused with PCI-X, an enhanced version of the original
PCI specification.

The table below compares a selection of the features of some of these interfaces.


Bus | Width (bits) | Clock rate (MHz) | Bandwidth (MB/s) | Style
ISA XT | 8 | 4.77 | | Parallel
ISA AT | 16 | 8.33 | 16 | Parallel
MCA | 32 | 10 | 20 | Parallel
NUBUS | 32 | 10 | 10 - 40 | Parallel
EISA | 32 | 8.33 | 32 | Parallel
VESA | 32 | 40 | 160 | Parallel
PCI | 32 - 64 | 33 - 100 | 132 - 800 | Parallel
AGP 1x | 32 | 66 | 264 | Parallel
AGP 2x | 32 | 66 | 528 | Parallel
AGP 4x | 32 | 66 | 1000 | Parallel
AGP 8x | 32 | 66 | 2000 | Parallel
PCIe x1 | 1 | 2500 / 5000 | 250 / 500 | Serial
PCIe x4 | 1 x 4 | 2500 / 5000 | 1000 / 2000 | Serial
PCIe x8 | 1 x 8 | 2500 / 5000 | 2000 / 4000 | Serial
PCIe x16 | 1 x 16 | 2500 / 5000 | 4000 / 8000 | Serial
PCIe x16 2.0 | 1 x 16 | 5000 / 10000 | 8000 / 16000 | Serial
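For the parallel buses in the table, peak bandwidth follows from width and clock (bus width in bytes times clock rate); a minimal Python check against two of the rows:

    # Peak bandwidth of a simple parallel bus in MB/s.
    def parallel_bus_mb_s(width_bits: int, clock_mhz: float) -> float:
        return width_bits / 8 * clock_mhz

    print(parallel_bus_mb_s(32, 33))   # 132.0 -> PCI at 33 MHz
    print(parallel_bus_mb_s(32, 66))   # 264.0 -> AGP 1x
    # AGP 2x/4x/8x multiply the transfers per clock; the table rounds
    # the results to 528, 1000 and 2000 MB/s.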

Cooling devices
Video cards may use a lot of electricity, which is converted into heat. If the heat isn't dissipated, the video
card could overheat and be damaged. Cooling devices are incorporated to transfer the heat elsewhere. Three
types of cooling devices are commonly used on video cards:

Heat sink: a heat sink is a passive-cooling device. It conducts heat away from the graphics card's core, or
memory, by using a heat-conductive metal (most commonly aluminum or copper); sometimes in
combination with heat pipes. It uses air (most common), or in extreme cooling situations, water
(see water block), to remove the heat from the card. When air is used, a fan is often used to increase
cooling effectiveness.
Computer fan: an example of an active-cooling part. It is usually used with a heat sink. Due to the
moving parts, a fan requires maintenance and possible replacement. The fan speed or actual fan can be
changed for more efficient or quieter cooling.
Water block: a water block is a heat sink designed to use water instead of air. It is mounted on the graphics
processor and has a hollow inside. Water is pumped through the water block, transferring the heat into
the water, which is then usually cooled in a radiator. This is the most effective cooling solution without
extreme modification.

Power demand
As the processing power of video cards has increased, so has their demand for electrical power. Current
high-performance video cards tend to consume a great deal of power. While CPU and power supply makers
have recently moved toward higher efficiency, power demands of GPUs have continued to rise, so the video
card may be the biggest electricity user in a computer. Although power supplies have been increasing their
capacity as well, the bottleneck is the PCI-Express connection, which is limited to supplying 75 Watts. Modern
video cards with a power consumption over 75 Watts usually include a combination of six-pin (75 W) and
eight-pin (150 W) sockets that connect directly to the power supply.
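Those connector limits make a card's power budget easy to total; a minimal Python sketch using the figures quoted above:

    # Maximum board power from the PCI-E slot plus auxiliary connectors.
    SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

    def max_board_power_w(six_pin: int = 0, eight_pin: int = 0) -> int:
        return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

    print(max_board_power_w())                       # 75 W (slot only)
    print(max_board_power_w(six_pin=1))              # 150 W
    print(max_board_power_w(six_pin=1, eight_pin=1)) # 300 W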


07. Hard Disk Drive (HDD)

Mechanical interior of a modern hard disk drive


Date invented: 24 December 1954
Invented by: IBM team led by Rey Johnson

A hard disk drive (HDD; also hard drive or hard disk) is a non-volatile, random access
digital magnetic data storage device. It features rotating rigid platters on a motor-driven spindle within a
protective enclosure. Data is magnetically read from and written to the platter by read/write heads that float
on a film of air above the platters. Introduced by IBM in 1956, hard disk drives have decreased in cost and
physical size over the years while dramatically increasing in capacity.
Hard disk drives have been the dominant device for secondary storage of data in general purpose computers
since the early 1960s. They have maintained this position because advances in their recording density have
kept pace with the requirements for secondary storage. Today's HDDs operate on high-speed serial
interfaces; i.e., serial ATA (SATA) or serial attached SCSI (SAS).

History
Hard disk drives were introduced in 1956 as data storage for an IBM real time transaction processing
computer and were developed for use with general purpose mainframe and mini computers.
As the 1980s began, hard disk drives were a rare and very expensive additional feature on personal
computers (PCs); however by the late '80s, hard disk drives were standard on all but the cheapest PC.
Most hard disk drives in the early 1980s were sold to PC end users as an add-on subsystem, not under the
drive manufacturer's name but by systems integrators (such as the Corvus Disk System) or by the systems
manufacturer (such as the Apple ProFile). The IBM PC/XT in 1983 included an internal standard 10 MB hard
disk drive, and soon thereafter internal hard disk drives proliferated on personal computers.
External hard disk drives remained popular for much longer on the Apple Macintosh. Every Mac made
between 1986 and 1998 has a SCSI port on the back, making external expansion easy; also, "toaster"
Compact Macs did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive
bay at all), so on those models, external SCSI disks were the only reasonable option.
Driven by areal density doubling every two to four years since their invention, HDDs have changed in many
ways. A few highlights include:


Capacity per HDD increasing from 3.75 megabytes to greater than 1 terabyte, a greater than 270-thousand-to-1 improvement.
Size of HDD decreasing from 87.9 cubic feet (a double-wide refrigerator) to 0.002 cubic feet (2.5-inch form factor, a pack of cards), a greater than 44-thousand-to-1 improvement.
Price decreasing from about $15,000 per megabyte to less than $0.0001 per megabyte ($100 per 1 terabyte), a greater than 150-million-to-1 improvement.
Average access time decreasing from greater than 0.1 second to a few thousandths of a second, a greater than 40-to-1 improvement.
Market application expanding from general purpose computers to most computing applications, including consumer applications.

Technology
Magnetic recording

Diagram labeling the major components of a computer hard disk drive

HDDs record data by magnetizing ferromagnetic material directionally. Sequential changes in the direction
of magnetization represent patterns of binary data bits. The data are read from the disk by detecting the
transitions in magnetization and decoding the originally written data. Different encoding schemes, such
as modified frequency modulation, group code recording, run-length limited encoding, and others are used.
A typical HDD design consists of a spindle that holds flat circular disks, also called platters, which hold the
recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or
ceramic, and are coated with a shallow layer of magnetic material typically 10-20 nm in depth, with an outer
layer of carbon for protection. For reference, a standard piece of copy paper is 0.07-0.18 millimetre
(70,000-180,000 nm) thick.

Magnetic cross section & frequency modulation encoded binary data



Recording of single magnetisations of bits on an HDD platter (recorded using CMOS-MagView).

Longitudinal recording (standard) & perpendicular recording diagram

The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in energy-efficient portable
devices to 15,000 rpm for high-performance servers. The first hard drives spun at 1,200 rpm and, for many
years, 3,600 rpm was the norm. Information is written to, and read from, a platter as it rotates past devices
called read-and-write heads that operate very close (tens of nanometers in new drives) over the magnetic
surface. The read-and-write head is used to detect and modify the magnetization of the material immediately
under it. In modern drives there is one head for each magnetic platter surface on the spindle, mounted on a
common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the
platters as they spin, allowing each head to access almost the entire surface of the platter. The arm
is moved using a voice coil actuator or, in some older designs, a stepper motor.
The magnetic surface of each platter is conceptually divided into many small sub-micrometer-sized
magnetic regions referred to as magnetic domains. In older disk designs the regions were oriented
horizontally and parallel to the disk surface, but beginning about 2005, the orientation was changed
to perpendicular to allow for closer magnetic domain spacing. Due to the polycrystalline nature of the
magnetic material, each of these magnetic regions is composed of a few hundred magnetic grains. Magnetic
grains are typically 10 nm in size and each forms a single magnetic domain. Each magnetic region in total
forms a magnetic dipole which generates a magnetic field.
For reliable storage of data, the recording material needs to resist self-demagnetization, which occurs when
the magnetic domains repel each other. Magnetic domains written too densely together to a weakly
magnetizable material will degrade over time due to physical rotation of one or more domains to cancel out
these forces. The domains rotate sideways to a halfway position that weakens the readability of the domain
and relieves the magnetic stresses. Older hard disks used iron(III) oxide as the magnetic material, but current
disks use a cobalt-based alloy.
A write head magnetizes a region by generating a strong local magnetic field. Early HDDs used
an electromagnet both to magnetize the region and to then read its magnetic field by using electromagnetic
induction. Later versions of inductive heads included Metal-In-Gap (MIG) heads and thin-film heads. As data
density increased, read heads using magnetoresistance (MR) came into use; the electrical resistance of the
head changed according to the strength of the magnetism from the platter. Later development made use
of spintronics; in these heads, the magnetoresistive effect was much greater than in earlier types, and was
dubbed "giant" magnetoresistance (GMR). In today's heads, the read and write elements are separate, but in
close proximity, on the head portion of an actuator arm. The read element is typically magnetoresistive
while the write element is typically thin-film inductive.


The heads are kept from contacting the platter surface by the air that is extremely close to the platter; that air
moves at or near the platter speed. The record and playback head are mounted on a block called a slider, and
the surface next to the platter is shaped to keep it just barely out of contact. This forms a type of air bearing.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be
lost because of thermal effects. To counter this, the platters are coated with two parallel magnetic layers,
separated by a 3-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in
opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects and
allow greater recording densities is perpendicular recording, first shipped in 2005; as of 2007 the technology was
used in many HDDs.

Components

HDD with disks and motor hub removed exposing copper colored stator coils surrounding a bearing in the center of
the spindle motor. Orange stripe along the side of the arm is thin printed-circuit cable, spindle bearing is in the center
and the actuator is in the upper left.

A typical hard disk drive has two electric motors; a disk motor that spins the disks and an actuator (motor)
that positions the read/write head assembly across the spinning disks.
The disk motor has an external rotor attached to the disks; the stator windings are fixed in place.
Opposite the actuator at the end of the head support arm is the read-write head (near center in photo); thin
printed-circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the
actuator. A flexible, somewhat U-shaped, ribbon cable, seen edge-on below and to the left of the actuator
arm continues the connection to the controller board on the opposite side.
The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g.
The silver-colored structure at the upper left of the first image is the top plate of the actuator, a
permanent-magnet and moving-coil motor that swings the heads to the desired position (it is shown removed in the
second image). The plate supports a squat neodymium-iron-boron (NIB) high-flux magnet. Beneath this
plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is
attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the
motor (some drives only have one magnet).


A disassembled and labeled 1997 hard drive. All major components were placed on a mirror, which created the
symmetrical reflections.

The voice coil itself is shaped rather like an arrowhead, and made of doubly coated copper magnet wire. The
inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a
form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point
to the actuator bearing center) interact with the magnetic field, developing a tangential force that rotates the
actuator. Current flowing radially outward along one side of the arrowhead and radially inward on the other
produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces
that would cancel each other out. Therefore the surface of the magnet is half N pole, half S pole, with the
radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and
produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial
forces that do not rotate the head.

Error handling
Modern drives also make extensive use of Error Correcting Codes (ECCs), particularly Reed-Solomon error
correction. These techniques store extra bits for each block of data that are determined by mathematical
formulas. The extra bits allow many errors to be fixed. While these extra bits take up space on the hard
drive, they allow higher recording densities to be employed, resulting in much larger storage capacity for
user data. As of 2009, in the newest drives, low-density parity-check (LDPC) codes are supplanting Reed-Solomon.
LDPC codes enable performance close to the Shannon limit and thus allow for the highest
storage density available.
Typical hard drives attempt to "remap" the data in a physical sector that is going bad to a spare physical
sector, hopefully while the errors in that bad sector are still few enough that the ECC can recover the data
without loss. The S.M.A.R.T. system counts the total number of errors in the entire hard drive fixed by ECC,
and the total number of remappings, in an attempt to predict hard drive failure.
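Real drives use Reed-Solomon or LDPC codes over whole sectors, which are far stronger than anything shown here. As a toy illustration of the underlying idea (extra parity bits make errors correctable), the following Python sketch implements a Hamming(7,4) code, which stores 3 parity bits per 4 data bits and corrects any single flipped bit; it is not what drive firmware actually uses.

```python
# Toy illustration of ECC: Hamming(7,4) stores 3 extra parity bits per
# 4 data bits and can correct any single-bit error. Real drives use far
# stronger Reed-Solomon or LDPC codes over whole sectors.

def hamming74_encode(d):                     # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]                  # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                  # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                  # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_correct(c):                    # c: list of 7 received bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3          # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1                 # flip it back
    return [c[2], c[4], c[5], c[6]]          # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                                 # simulate one bit corrupted on the platter
assert hamming74_correct(word) == [1, 0, 1, 1]
```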

Future development
Due to bit-flipping errors and other issues, perpendicular recording densities may be supplanted by other
magnetic recording technologies. Toshiba is promoting bit-patterned recording (BPR), while Xyratex is
developing heat-assisted magnetic recording (HAMR).
In October 2011, TDK developed a special laser that heats up a hard disk's surface with a precision of a
few dozen nanometers. TDK also used the new material in the magnetic head and redesigned its structure to
expand the recording density. This new technology apparently makes it possible to store one terabyte on one
platter; for the initial model, TDK will produce HDDs with two platters.


Capacity
The capacity of an HDD may appear to the end user to be different from the amount stated by a
drive or system manufacturer due to, among other things, different units for measuring capacity, capacity
consumed in formatting the drive for use by an operating system, and/or redundancy.
Units of storage capacity

Advertised capacity           Expected capacity                     Reported capacity
by manufacturer               by consumers in class action
(using decimal multiples)     (using binary multiples)
With prefix  Bytes            Bytes              Diff.              Windows (binary multiples)  Mac OS X 10.6+ (decimal multiples)
100 MB       100,000,000      104,857,600        4.86%              95.4 MB                     100.0 MB
100 GB       100,000,000,000  107,374,182,400    7.37%              93.1 GB, 95,367 MB          100.00 GB
1 TB         1,000,000,000,000  1,099,511,627,776  9.95%            931 GB, 953,674 MB          1000.00 GB, 1,000,000 MB

The capacity of hard disk drives is given by manufacturers in megabytes (1 MB = 1,000,000
bytes), gigabytes (1 GB = 1,000,000,000 bytes) or terabytes (1 TB = 1,000,000,000,000 bytes). This
numbering convention, where prefixes like mega- and giga- denote powers of 1000, is also used for data
transmission rates and DVD capacities. However, the convention is different from that used by
manufacturers of memory (RAM, ROM) and CDs, where prefixes like kilo- and mega- mean powers of
1024.
When unit prefixes like kilo- denote powers of 1024 in the measure of memory capacities, the
1024^n progression (for n = 1, 2, ...) is as follows:

kilo = 2^10 = 1024^1 = 1,024
mega = 2^20 = 1024^2 = 1,048,576
giga = 2^30 = 1024^3 = 1,073,741,824
tera = 2^40 = 1024^4 = 1,099,511,627,776

and so forth.
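The arithmetic behind the table above is easy to verify; the short Python sketch below (purely illustrative) reproduces the figures quoted there.

```python
# A quick check of the two conventions, reproducing the table above:
# an HDD advertised as "1 TB" (decimal) is reported by a binary-multiples
# OS as roughly 931 GB.

advertised_bytes = 1 * 1000**4          # 1 TB as the manufacturer counts it

binary_gb = advertised_bytes / 1024**3  # "GB" as Windows counts it
binary_mb = advertised_bytes / 1024**2

print(f"{binary_gb:.0f} GB")            # -> 931 GB
print(f"{binary_mb:,.0f} MB")           # -> 953,674 MB

# The difference between the two definitions grows with each prefix step:
for name, n in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
    diff = (1024**n - 1000**n) / 1000**n * 100
    print(f"{name}: {diff:.2f}% larger in binary")   # 2.40, 4.86, 7.37, 9.95
```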
The practice of using prefixes assigned to powers of 1000 within the hard drive and computer industries
dates back to the early days of computing. By the 1970s million, mega and M were consistently being used
in the powers of 1000 sense to describe HDD capacity. As HDD sizes grew the industry adopted the prefixes
G for giga and T for tera denoting 1,000,000,000 and 1,000,000,000,000 bytes of HDD capacity
respectively.
Likewise, the practice of using prefixes assigned to powers of 1024 within the computer industry also traces
its roots to the early days of computing. By the early 1970s, using the prefix K in a powers-of-1024 sense to
describe memory was common within the industry. As memory sizes grew, the industry adopted the prefixes
M for mega and G for giga, denoting 1,048,576 and 1,073,741,824 bytes of memory respectively.


Computers do not internally represent HDD or memory capacity in powers of 1024; reporting it in this
manner is just a convention. Creating confusion, operating systems report HDD capacity in different ways.
Most operating systems, including Microsoft Windows, use the powers-of-1024 convention
when reporting HDD capacity; thus an HDD offered by its manufacturer as a 1 TB drive is
reported by these operating systems as a 931 GB HDD. Apple's current OS, beginning with Mac OS X 10.6 (Snow
Leopard), uses powers of 1000 when reporting HDD capacity, thereby avoiding any discrepancy between
what it reports and what the manufacturer advertises.
In the case of mega-, there is a nearly 5% difference between the powers of 1000 definition and
the powers of 1024 definition. Furthermore, the difference is compounded by 2.4% with each incrementally
larger prefix (gigabyte, terabyte, etc.) The discrepancy between the two conventions for measuring capacity
was the subject of several class action suits against HDD manufacturers. The plaintiffs argued that the use of
decimal measurements effectively misled consumers while the defendants denied any wrongdoing or
liability, asserting that their marketing and advertising complied in all respects with the law and that no
Class Member sustained any damages or injuries.
In December 1998, an international standards organization attempted to address these dual definitions of the
conventional prefixes by proposing unique binary prefixes and prefix symbols to denote multiples of 1024,
such as the mebibyte (MiB), which exclusively denotes 2^20 or 1,048,576 bytes. In the more than 12 years that have
since elapsed, the proposal has seen little adoption by the computer industry, and the conventionally prefixed
forms of the byte continue to denote slightly different values depending on context.

HDD formatting
The presentation of an HDD to its host is determined by its controller. This may differ substantially from the
drive's native interface, particularly in mainframes or servers.
Modern HDDs, such as SAS and SATA drives, appear at their interfaces as a contiguous set of logical
blocks, typically 512 bytes long, but the industry is in the process of changing to 4,096-byte logical blocks;
see Advanced Format.
The process of initializing these logical blocks on the physical disk platters is called low-level
formatting, which is usually performed at the factory and is not normally changed in the field.
High-level formatting then writes the file system structures into selected logical blocks to make the
remaining logical blocks available to the host OS and its applications. The operating system's file system uses
some of the disk space to organize files on the disk, recording their file names and the sequence of disk areas
that represent the file. Examples of data structures stored on disk to retrieve files include the MS-DOS file
allocation table (FAT) and UNIX inodes, as well as other operating system data structures. As a
consequence, not all the space on a hard drive is available for user files. This file system overhead is usually
less than 1% on drives larger than 100 MB.

Redundancy
In modern HDDs spare capacity for defect management is not included in the published capacity; however
in many early HDDs a certain number of sectors were reserved for spares, thereby reducing capacity
available to end users.
In some systems, there may be hidden partitions used for system recovery that reduce the capacity available
to the end user.
For RAID drives, data integrity and fault-tolerance requirements also reduce the realized capacity. For
example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array
with x drives loses 1/x of its capacity to parity. RAID arrays are multiple drives that appear to be
one drive to the user, but provide some fault tolerance. Most RAID vendors use some form
of checksum to improve data integrity at the block level. Many vendors use HDDs with
sectors of 520 bytes, containing 512 bytes of user data and 8 checksum bytes, or use separate 512-byte
sectors for the checksum data.
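As a rough sketch of that arithmetic (illustrative only; real arrays also lose space to metadata and rounding), usable capacity can be estimated like this:

```python
# Rough usable-capacity estimates for the RAID levels mentioned above.
# Illustrative only: real arrays also lose space to metadata and rounding.

def raid_usable_tb(level, drives, drive_tb):
    if level == 1:                      # mirroring: half the raw capacity
        return drives * drive_tb / 2
    if level == 5:                      # parity: lose 1/x of the space
        return (drives - 1) * drive_tb
    raise ValueError("only RAID 1 and RAID 5 sketched here")

print(raid_usable_tb(1, 2, 1.0))        # two 1 TB drives mirrored -> 1.0 TB
print(raid_usable_tb(5, 4, 1.0))        # four 1 TB drives, RAID 5 -> 3.0 TB
```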

HDD parameters to calculate capacity

PC hard disk drive capacity (in GB) over time. The vertical axis is logarithmic, so the fit line corresponds
to exponential growth.

Because modern disk drives appear to their interface as a contiguous set of logical blocks, their gross
capacity can be calculated by multiplying the number of blocks by the size of a block. This information is
available from the manufacturer's specification and from the drive itself through use of special utilities
invoking low-level commands.
The gross capacity of older HDDs can be calculated by multiplying, for each zone of the drive, the number
of cylinders by the number of heads by the number of sectors/zone by the number of bytes/sector (most
commonly 512) and then summing the totals for all zones. Some modern ATA drives will also
report cylinder, head, sector (C/H/S) values to the CPU, but they are no longer actual physical parameters
since the reported numbers are constrained by historic operating-system interfaces.
The old C/H/S scheme has been replaced by logical block addressing (LBA). In some cases, to try to "force-fit" the
C/H/S scheme to large-capacity drives, the number of heads was given as 64, although no modern drive has
anywhere near 32 platters.
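Both calculations are simple products; the sketch below shows the modern LBA case and the older geometry case. The numbers are illustrative (a common block count for a "1 TB" drive and a classic BIOS-limit geometry), not a specific model's data sheet.

```python
# Gross capacity from logical blocks (modern drives) and from C/H/S
# geometry (older drives). Values are illustrative, not a specific model.

def capacity_from_lba(total_blocks, block_size=512):
    return total_blocks * block_size                     # bytes

def capacity_from_chs(cylinders, heads, sectors_per_track, bytes_per_sector=512):
    return cylinders * heads * sectors_per_track * bytes_per_sector

# A "1 TB" drive commonly exposes 1,953,525,168 512-byte logical blocks:
print(capacity_from_lba(1_953_525_168))                  # 1,000,204,886,016 bytes

# A classic BIOS-limit geometry (1024 cylinders, 16 heads, 63 sectors/track):
print(capacity_from_chs(1024, 16, 63))                   # 528,482,304 bytes
```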

Form factors

2.5" SATA HDD from a Sony VAIO laptop


5.25" full-height 110 MB HDD


2.5" (8.5 mm) 6,495 MB HDD


Six hard disk drives with 8", 5.25", 3.5", 2.5", 1.8", and 1" disks, with a ruler to show the size of platters and read-write heads.

Mainframe and minicomputer hard disks were of widely varying dimensions, typically in free standing
cabinets the size of washing machines (e.g. HP 7935 and DEC RP06 Disk Drives) or designed so that
dimensions enabled placement in a 19" rack (e.g. Diablo Model 31). In 1962, IBM introduced its model
1311 disk, which used 14 inch (nominal size) platters. This became a standard size for mainframe and
minicomputer drives for many years,[48] but such large platters were never used with microprocessor-based
systems.
With increasing sales of microcomputers having built-in floppy-disk drives (FDDs), HDDs that would fit
the FDD mountings became desirable, and this led to the evolution of the market towards drives with
certain form factors, initially derived from the sizes of 8-inch, 5.25-inch, and 3.5-inch floppy disk drives.
Sizes smaller than 3.5 inches emerged as they became popular in the marketplace and/or were decided by various
industry groups.

8 inch: 9.5 in × 4.624 in × 14.25 in (241.3 mm × 117.5 mm × 362 mm)

In 1979, Shugart Associates' SA1000 was the first form-factor-compatible HDD, having the same
dimensions and a compatible interface to the 8" FDD.
5.25 inch: 5.75 in × 3.25 in × 8 in (146.1 mm × 82.55 mm × 203 mm)

This smaller form factor, first used in an HDD by Seagate in 1980, was the same size as a full-height
5.25-inch-diameter (130 mm) FDD, 3.25 inches high. This is twice as high as "half height", i.e., 1.63 in
(41.4 mm). Most desktop models of drives for optical 120 mm disks (DVD, CD) use the half-height 5.25"
dimension, but it fell out of fashion for HDDs. The Quantum Bigfoot HDD was the last to use it in the
late 1990s, with "low-profile" (25 mm) and "ultra-low-profile" (20 mm) high versions.
3.5 inch: 4 in × 1 in × 5.75 in (101.6 mm × 25.4 mm × 146 mm) = 376.77344 cm³

This smaller form factor is similar to that used in an HDD by Rodime in 1983, which was the same size


as the "half height" 3 FDD, i.e., 1.63 inches high. Today, the 1-inch high ("slim line" or "low-profile")
version of this form factor is the most popular form used in most desktops.
2.5 inch: 2.75 in 0.2750.59 in 3.945 in (69.85 mm 715 mm 100 mm) = 48.895104.775 cm3
This smaller form factor was introduced by PrairieTek in 1988; there is no corresponding FDD. It is
widely used today for solid-state drives and for hard disk drives in mobile devices (laptops, music
players, etc.) and as of 2008 replacing 3.5 inch enterprise-class drives. It is also used in the PlayStation
3 and Xbox 360 video game consoles. Today, the dominant height of this form factor is 9.5 mm for
laptop drives (usually having two platters inside), but higher capacity drives have a height of 12.5 mm
(usually having three platters). Enterprise-class drives can have a height up to 15 mm. Seagate released a
7mm drive aimed at entry level laptops and high end netbooks in December 2009.
1.8 inch: 54 mm × 8 mm × 71 mm = 30.672 cm³

This form factor, originally introduced by Integral Peripherals in 1993, has evolved into the ATA-7 LIF
with dimensions as stated. It was increasingly used in digital audio players and subnotebooks, but is
rarely used today. A variant exists for 2–5 GB sized HDDs that fit directly into a PC
card expansion slot. These became popular for their use in iPods and other HDD-based MP3 players.
1 inch: 42.8 mm × 5 mm × 36.4 mm
This form factor was introduced in 1999 as IBM's Microdrive to fit inside a CF Type II slot. Samsung
calls the same form factor "1.3 inch" drive in its product literature.
0.85 inch: 24 mm × 5 mm × 32 mm
Toshiba announced this form factor in January 2004 for use in mobile phones and similar applications,
including SD/MMC slot compatible HDDs optimized for video storage on 4G handsets. Toshiba
currently sells a 4 GB (MK4001MTD) and 8 GB (MK8003MTD) version and holds the Guinness World
Record for the smallest hard disk drive.

3.5-inch and 2.5-inch hard disks currently dominate the market.


By 2009 all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and
0.85-inch form factors due to falling prices of flash memory, which is slightly more stable and resistant to
damage from impact and/or dropping.
The inch-based nicknames of all these form factors usually do not indicate any actual product dimension
(which is specified in millimeters for more recent form factors), but merely indicate a size relative to
disk diameters, in the interest of historic continuity.

Current hard disk form factors

Form factor   Width (mm)   Height (mm)           Largest capacity   Platters (max)
3.5"          102          19 or 25.4            4 TB (2011)        —
2.5"          69.9         7, 9.5, 11.5, or 15   1.5 TB (2010)      —
1.8"          54           5 or 8                320 GB (2009)      —

Obsolete hard disk form factors

Form factor              Width (mm)   Largest capacity   Platters (max)
5.25" FH                 146          47 GB (1998)       14
5.25" HH                 146          19.3 GB (1998)     —
1.3"                     43           40 GB (2007)       —
1" (CFII/ZIF/IDE-Flex)   42           20 GB (2006)       —
0.85"                    24           8 GB (2004)        —

Performance characteristics
Access time
The factors that limit the time to access the data on a hard disk drive (Access time) are mostly related to the
mechanical nature of the rotating disks and moving heads. Seek time is a measure of how long it takes the
head assembly to travel to the track of the disk that contains data. Rotational latency is incurred because the
desired disk sector may not be directly under the head when data transfer is requested. These two delays are
on the order of milliseconds each. The bit rate or data transfer rate, once the head is in the right position,
creates delay that is a function of the number of blocks transferred; it is typically relatively small, but can be
quite long with the transfer of large contiguous files. Delay may also occur if the drive disks are stopped to
save energy; see Power management.
An HDD's average access time is its average seek time, which technically is the time to do all possible
seeks divided by the number of all possible seeks, but in practice it is determined by statistical methods or
simply approximated as the time of a seek over one-third of the number of tracks.
Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to
physically proximate areas on the disk. Some computer operating systems perform defragmentation
automatically. Although automatic defragmentation is intended to reduce access delays, the procedure can
slow response when performed while the computer is in use.
Access time can be improved by increasing rotational speed (thus reducing latency) and/or by decreasing
seek time. Increasing areal density increases throughput by increasing the data rate and by increasing the
amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data.
Based on historic trends, analysts predict a future growth in HDD areal density (and therefore capacity) of
about 40% per year. Access times have not kept up with throughput increases, which themselves have not
kept up with growth in storage capacity.
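As a rough numerical sketch of how the mechanical delays combine, using typical seek figures quoted later in this chapter rather than any specific drive:

```python
# Rough average access time: average seek + average rotational latency.
# Seek figures are the typical values quoted in the Seek time section below.

def avg_access_ms(seek_ms, rpm):
    latency_ms = 0.5 * 60_000 / rpm     # half a revolution, in milliseconds
    return seek_ms + latency_ms

print(avg_access_ms(9, 7200))           # common desktop drive  -> ~13.2 ms
print(avg_access_ms(3, 15000))          # high-end server drive -> ~5.0 ms
```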

Interleave

Low-level formatting software finding the highest-performance interleave choice for a 10 MB IBM PC XT hard disk drive.


Sector interleave is a mostly obsolete device characteristic related to access time, dating back to when
computers were too slow to be able to read large continuous streams of data. Interleaving introduced gaps
between data sectors to allow time for slow equipment to get ready to read the next block of data. Without
interleaving, the next logical sector would arrive at the read/write head before the equipment was ready,
requiring the system to wait for another complete disk revolution before reading could be performed.
However, because interleaving introduces intentional physical delays into the drive mechanism, setting the
interleave to a ratio higher than required causes unnecessary delays for equipment that has the performance
needed to read sectors more quickly. The interleaving ratio was therefore usually chosen by the end-user to
suit their particular computer system's performance capabilities when the drive was first installed in their
system.
Modern technology is capable of reading data as fast as it can be obtained from the spinning platters, so hard
drives usually use a fixed sector interleave ratio of 1:1; in effect, no interleaving is used.
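The mapping itself is simple. Below is a minimal sketch of how an N:1 interleave lays logical sectors around a track; the 17-sector track is a typical figure for early PC-era drives and is used here purely for illustration.

```python
# Lay out logical sectors around a track with an N:1 interleave.
# 17 sectors per track is typical of early PC-era drives; illustrative only.

def interleave_layout(sectors_per_track, step):
    layout = [None] * sectors_per_track
    pos = 0
    for logical in range(sectors_per_track):
        while layout[pos] is not None:          # skip slots already assigned
            pos = (pos + 1) % sectors_per_track
        layout[pos] = logical
        pos = (pos + step) % sectors_per_track  # jump 'step' physical slots
    return layout

print(interleave_layout(17, 1))   # 1:1 - logical order equals physical order
print(interleave_layout(17, 3))   # 3:1 - two slots of "breathing room" per sector
```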

Seek time
Average seek time ranges from 3 ms for high-end server drives, to 15 ms for mobile drives, with the most
common mobile drives at about 12 ms and the most common desktop type typically being around 9 ms.
The first HDD had an average seek time of about 600 ms, and by the middle 1970s HDDs were available
with seek times of about 25 ms. Some early PC drives used a stepper motor to move the heads, and as a result
had seek times as slow as 80–120 ms, but this was quickly improved by voice-coil actuation in the
1980s, reducing seek times to around 20 ms. Seek time has continued to improve slowly over time.
Some desktop and laptop computer systems allow the user to make a tradeoff between seek performance and
drive noise. Faster seek rates typically require more energy usage to quickly move the heads across the
platter, causing loud noises from the pivot bearing and greater device vibrations as the heads are rapidly
accelerated during the start of the seek motion and decelerated at the end of the seek motion. Quiet operation
reduces movement speed and acceleration rates, but at a cost of reduced seek performance.

Rotational latency
Latency is the delay for the rotation of the disk to bring the required disk sector under the read-write
mechanism. It depends on the rotational speed of the disk, measured in revolutions per minute (rpm).
Average rotational latency is shown in the table below, based on the empirical relation that the average
latency in milliseconds for such a drive is one-half the rotational period.

Rotational speed [rpm]   Average latency [ms]
15,000                   2
10,000                   3
7,200                    4.16
5,400                    5.55
4,800                    6.25
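The table follows directly from the half-period rule; a one-line check in Python:

```python
# Average rotational latency in ms = half a revolution = 30,000 / rpm.
for rpm in (15000, 10000, 7200, 5400, 4800):
    print(rpm, round(30000 / rpm, 2))   # 2.0, 3.0, 4.17, 5.56, 6.25
```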

Data transfer rate

As of 2010, a typical 7200 rpm desktop hard drive has a sustained "disk-to-buffer" data transfer rate of up to
1,030 Mbit/s. This rate depends on the track location, so it will be higher
for data on the outer tracks (where there are more data sectors) and lower toward the inner tracks (where
there are fewer data sectors), and is generally somewhat higher for 10,000 rpm drives. A current widely used
standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can send about 300 megabytes/s
(10-bit encoding) from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer
transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using
special file-generator tools, then reading back the file. Transfer rate can be influenced by file system
fragmentation and the layout of the files.
HDD data transfer rate depends upon the rotational speed of the platters and the data recording density.
Because heat and vibration limit rotational speed, advancing density becomes the main method of improving
sequential transfer rates. While areal density advances by increasing both the number of tracks across the
disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm.
Since data transfer rate tracks only one of the two components of areal density, its performance
improves at a lower rate.
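The interface-versus-media comparison above is plain unit arithmetic; a small check (illustrative only, using the 8-data-bits-per-10-line-bits encoding the text refers to):

```python
# Compare the quoted disk-to-buffer media rate with the SATA buffer-to-computer
# interface rate. SATA's line code carries 8 data bits per 10 transmitted bits.

media_rate_MBps = 1030e6 / 8 / 1e6          # 1,030 Mbit/s  -> ~128.75 MB/s
sata_rate_MBps  = 3.0e9 / 10 / 1e6          # 3.0 Gbit/s    -> ~300 MB/s

print(f"media:     {media_rate_MBps:.1f} MB/s")
print(f"interface: {sata_rate_MBps:.1f} MB/s")   # comfortably ahead of the media
```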

Power consumption
Power consumption has become increasingly important, not only in mobile devices such as laptops but also
in server and desktop markets. Increasing data center machine density has led to problems delivering
sufficient power to devices (especially for spin up), and getting rid of the waste heat subsequently produced,
as well as environmental and electrical cost concerns (see green computing). Heat dissipation is tied directly
to power consumption, and as drives age, disk failure rates increase at higher drive temperatures. Similar
issues exist for large companies with thousands of desktop PCs. Smaller form factor drives often use less
power than larger drives. One interesting development in this area is actively controlling the seek speed so
that the head arrives at its destination only just in time to read the sector, rather than arriving as quickly as
possible and then having to wait for the sector to come around (i.e. the rotational latency). Many of the hard
drive companies are now producing Green Drives that require much less power and cooling. Many of these
Green Drives spin slower (<5400 rpm compared to 7200, 10,000 or 15,000 rpm) thereby generating less
heat. Power consumption can also be reduced by parking the drive heads when the disk is not in use
(reducing friction), by adjusting spin speeds, and by disabling internal components when not in use.
Also in systems where there might be multiple hard disk drives, there are various ways of controlling when
the hard drives spin up since the highest current is drawn at that time.

On SCSI hard disk drives, the SCSI controller can directly control spin-up and spin-down of the drives.

On Parallel ATA (PATA) and Serial ATA (SATA) hard disk drives, some drives support power-up in
standby (PUIS): the hard disk drive will not spin up until the controller or system BIOS issues a
specific command to do so. This limits the power draw upon power-on.

Some SATA II hard disk drives support staggered spin-up, allowing the computer to spin up the drives
in sequence to reduce the load on the power supply when booting.

Power management
Most hard disk drives today support some form of power management, which uses a number of specific
power modes that save energy by reducing performance. When implemented, an HDD will change between
a full-power mode and one or more power-saving modes as a function of drive usage. Recovery from the
deepest mode, typically called Sleep, may take as long as several seconds.

Audible noise
Measured in dBA, audible noise is significant for certain applications, such as DVRs, digital audio recording
and quiet computers. Low-noise disks typically use fluid bearings, slower rotational speeds (usually
5400 rpm) and reduced seek speed under load (automatic acoustic management, AAM) to reduce audible clicks and crunching sounds.
Drives in smaller form factors (e.g. 2.5 inch) are often quieter than larger drives.

Shock resistance
Shock resistance is especially important for mobile devices. Some laptops now include active hard drive
protection that parks the disk heads if the machine is dropped, hopefully before impact, to offer the greatest
possible chance of survival in such an event. Maximum shock tolerance to date is 350 g for operating and
1000 g for non-operating.


Access and interfaces


Hard disk drives are accessed over one of a number of bus types, including parallel ATA (PATA, also called
IDE or EIDE), Serial ATA (SATA), SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Bridge circuitry
is sometimes used to connect hard disk drives to buses with which they cannot communicate natively, such
as IEEE 1394, USB and SCSI.
For the ST-506 interface, the data encoding scheme as written to the disk surface was also important. The
first ST-506 disks used Modified Frequency Modulation (MFM) encoding, and transferred data at a rate of
5 megabits per second. Later controllers using 2,7 RLL (or just "RLL") encoding fit 50% more data under the
heads in one rotation compared to an MFM drive, increasing data storage and data transfer
rate by 50%, to 7.5 megabits per second.
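As a minimal sketch of the MFM rule named above (clocking and sync-mark details omitted; real controllers do considerably more): each data bit is preceded by a clock bit, and the clock bit is 1 only when both the previous and the current data bit are 0.

```python
# Minimal sketch of MFM encoding. Each data bit is preceded by a clock bit;
# the clock bit is 1 only when both the previous and the current data bit
# are 0. Real controllers also handle sync marks, address marks, etc.

def mfm_encode(bits, prev=0):
    out = []
    for b in bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock bit
        out.append(b)                                   # data bit
        prev = b
    return out

print(mfm_encode([1, 0, 0, 1, 0]))   # -> [0,1, 0,0, 1,0, 0,1, 0,0]
```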
Many ST-506 interface disk drives were only specified by the manufacturer to run at the 1/3 lower MFM
data transfer rate compared to RLL, while other drive models (usually more expensive versions of the same
drive) were specified to run at the higher RLL data transfer rate. In some cases, a drive had sufficient margin
to allow the MFM specified model to run at the denser/faster RLL data transfer rate (not recommended nor
guaranteed by manufacturers). Also, any RLL-certified drive could run on any MFM controller, but with 1/3
less data capacity and as much as 1/3 less data transfer rate compared to its RLL specifications.
Enhanced Small Disk Interface (ESDI) also supported multiple data rates (ESDI disks always used 2,7 RLL,
but at 10, 15 or 20 megabits per second), but this was usually negotiated automatically by the disk drive and
controller; most of the time, however, 15 or 20 megabit ESDI disk drives were not downward compatible
(i.e. a 15 or 20 megabit disk drive would not run on a 10 megabit controller). ESDI disk drives typically also
had jumpers to set the number of sectors per track and (in some cases) sector size.
Modern hard drives present a consistent interface to the rest of the computer, no matter what data encoding
scheme is used internally. Typically a DSP in the electronics inside the hard drive takes the raw analog
voltages from the read head and uses PRML and Reed-Solomon error correction to decode the sector
boundaries and sector data, then sends that data out over the standard interface. That DSP also watches the error
rate detected by error detection and correction, and performs bad-sector remapping, data collection for
Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.), and other internal tasks.
SCSI originally had just one signaling frequency of 5 MHz for a maximum data rate of 5 megabytes/second
over 8 parallel conductors, but later this was increased dramatically. The SCSI bus speed had no bearing on
the disk's internal speed because of buffering between the SCSI bus and the disk drive's internal data bus;
however, many early disk drives had very small buffers, and thus had to be reformatted to a different
interleave (just like ST-506 disks) when used on slow computers, such as early Commodore Amiga, IBM
PC compatibles and Apple Macintoshes.
ATA disks have typically had no problems with interleave or data rate, due to their controller design, but
many early models were incompatible with each other and could not run with two devices on the same
physical cable in a master/slave setup. This was mostly remedied by the mid-1990s, when ATA's
specification was standardized and the details began to be cleaned up, but still causes problems occasionally
(especially with CD-ROM and DVD-ROM disks, and when mixing Ultra DMA and non-UDMA devices).
Serial ATA does away with master/slave setups entirely, placing each disk on its own channel (with its own
set of I/O ports) instead.
FireWire/IEEE 1394 and USB(1.0/2.0/3.0) hard drives consist of enclosures containing generally ATA or
Serial ATA disks with built-in adapters to these external buses.


Disk interface families used in personal computers

Several Parallel ATA hard disk drives

Historical bit serial interfaces connect a hard disk drive (HDD) to a hard disk controller (HDC) with two
cables, one for control and one for data. (Each drive also has an additional cable for power, usually
connecting it directly to the power supply unit). The HDC provided significant functions such as
serial/parallel conversion, data separation, and track formatting, and required matching to the drive (after
formatting) in order to assure reliability. Each control cable could serve two or more drives, while a
dedicated (and smaller) data cable served each drive.

ST-506 used MFM (Modified Frequency Modulation) for the data encoding method.
ST-412 was available in either MFM or RLL (Run Length Limited) encoding variants.
Enhanced Small Disk Interface (ESDI) was an industry-standard interface similar to ST-412, supporting
higher data rates between the processor and the disk drive.

Modern bit serial interfaces connect a hard disk drive to a host bus interface adapter (today typically
integrated into the "south bridge") with one data/control cable. (As for historical bit serial interfaces above,
each drive also has an additional power cable, usually direct to the power supply unit.)

Fibre Channel (FC) is a successor to the parallel SCSI interface in the enterprise market. It is a serial protocol.
In disk drives, usually the Fibre Channel Arbitrated Loop (FC-AL) connection topology is used. FC has
much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs).
Recently other protocols for this field, like iSCSI and ATA over Ethernet, have been developed as well.
Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fibre optics. The latter
are traditionally reserved for larger devices, such as servers or disk array controllers.
Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the
device, and one pair for differential receiving from the device, just like EIA-422. That requires that data
be transmitted serially. A similar differential signaling system is used in RS-485, LocalTalk, USB, FireWire,
and differential SCSI.
Serial Attached SCSI (SAS). SAS is a new-generation serial communication protocol for devices
designed to allow for much higher speed data transfers, and it is compatible with SATA. SAS uses a
mechanically identical data and power connector to standard 3.5-inch SATA1/SATA2 HDDs, and many
server-oriented SAS RAID controllers are also capable of addressing SATA hard drives. SAS uses serial
communication instead of the parallel method found in traditional SCSI devices but still uses SCSI
commands.


Inner view of a 1998 Seagate hard disk drive which used Parallel ATA interface

Word serial interfaces connect a hard disk drive to a host bus adapter (today typically integrated into the
"south bridge") with one cable for combined data/control. (As for all bit serial interfaces above, each drive
also has an additional power cable, usually direct to the power supply unit.) The earliest versions of these
interfaces typically had an 8-bit parallel data transfer to/from the drive, but 16-bit versions became much
more common, and there are 32-bit versions. Modern variants have serial data transfer. The word nature of
data transfer makes the design of a host bus adapter significantly simpler than that of the precursor HDD
controller.

Integrated Drive Electronics (IDE), later renamed to ATA, with the alias P-ATA or PATA ("parallel
ATA") retroactively added upon introduction of the new variant Serial ATA. The original name reflected
the innovative integration of HDD controller with HDD itself, which was not found in earlier disks.
Moving the HDD controller from the interface card to the disk drive helped to standardize interfaces,
and to reduce the cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a
time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements
for data transfer to and from the hard drive led to an "ultra DMA" mode, known as UDMA.
Progressively swifter versions of this standard ultimately added the requirement for an 80-conductor
variant of the same cable, where half of the conductors provide grounding necessary for enhanced
high-speed signal quality by reducing crosstalk. The interface for 80-conductor cables has only 39 pins, the missing
pin acting as a key to prevent incorrect insertion of the connector into an incompatible socket, a common
cause of disk and controller damage.
EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key
improvement being the use of direct memory access (DMA) to transfer data between the disk and the
computer without the involvement of the CPU, an improvement later adopted by the official ATA
standards. By directly transferring data between memory and disk, DMA eliminates the need for the
CPU to copy byte by byte, thereby allowing it to process other tasks while the data transfer occurs.
Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System
Interface, was an early competitor of ESDI. SCSI disks were standard on servers,
workstations, Commodore Amiga, and Apple Macintosh computers through the mid-1990s, by which
time most models had been transitioned to IDE (and later, SATA) family disks. Only in 2005 did the
capacity of SCSI disks fall behind IDE disk technology, though the highest-performance disks are still
available in SCSI, SAS and Fibre Channel only. The length limitations of the data cable allow for
external SCSI devices. Originally SCSI data cables used single-ended (common-mode) data
transmission, but server-class SCSI could use differential transmission, either low-voltage
differential (LVD) or high-voltage differential (HVD). ("Low" and "high" voltages for differential SCSI
are relative to SCSI standards and do not meet the meaning of low voltage and high voltage as used in
general electrical engineering contexts, as apply, e.g., to statutory electrical codes; both LVD and HVD
use low-voltage signals (3.3 V and 5 V respectively) in general terminology.)

Compiled by IT DIVISION, NAITA

48 | P a g e

Computer Hardware

Acronym or abbreviation   Meaning                               Description
SASI                      Shugart Associates System Interface   Historical predecessor to SCSI.
SCSI                      Small Computer System Interface       Bus-oriented; handles concurrent operations.
SAS                       Serial Attached SCSI                  Improvement of SCSI; uses serial communication instead of parallel.
ST-506                    Seagate Technology                    Historical Seagate interface.
ST-412                    Seagate Technology                    Historical Seagate interface (minor improvement over ST-506).
ESDI                      Enhanced Small Disk Interface         Historical; backwards compatible with ST-412/506, but faster and more integrated.
ATA (PATA)                Advanced Technology Attachment        Successor to ST-412/506/ESDI, integrating the disk controller completely onto the device. Incapable of concurrent operations.
SATA                      Serial ATA                            Modification of ATA; uses serial communication instead of parallel.


Integrity

Close-up HDD head resting on disk platter

Due to the extremely close spacing between the heads and the disk surface, hard disk drives are vulnerable
to being damaged by a head crash: a failure of the disk in which the head scrapes across the platter surface,
often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic
failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and
tear, corrosion, or poorly manufactured platters and heads.
The HDD's spindle system relies on air pressure inside the disk enclosure to support the heads at their
proper flying height while the disk rotates. Hard disk drives require a certain range of air pressures in order
to operate properly. The connection to the external environment and pressure occurs through a small hole in
the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the breather filter). If the air
pressure is too low, then there is not enough lift for the flying head, so the head gets too close to the disk,
and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are
needed for reliable high-altitude operation, above about 3,000 m (10,000 feet). Modern disks include
temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on
all disk drives; they usually have a sticker next to them, warning the user not to cover the holes. The air
inside the operating drive is constantly moving too, being swept in motion by friction with the spinning
platters. This air passes through an internal recirculation (or "recirc") filter to remove any leftover
contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure,
and any particles or outgassing generated internally in normal operation. Very high humidity for extended
periods can corrode the heads and platters.
For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does
not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction
with the disk surface, and can render the data unreadable for a short period until the head temperature
stabilizes (so-called "thermal asperity", a problem which can partially be dealt with by proper electronic
filtering of the read signal).


Actuation of moving arm

Head stack with an actuator coil on the left and read/write heads on the right

The hard drive's electronics control the movement of the actuator and the rotation of the disk, and perform
reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by
means of special segments of the disk dedicated to servo feedback. These are either complete concentric
circles (in the case of dedicated servo technology), or segments interspersed with real data (in the case of
embedded servo technology). The servo feedback optimizes the signal to noise ratio of the GMR sensors by
adjusting the voice-coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern disk
firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors
of the media which have failed.

Landing zones and load/unload technology

Read/write head from a circa-1998 Fujitsu 3.5" hard disk (approx. 2.0 mm × 3.0 mm)

Microphotograph of an older-generation hard disk drive head and slider (1990s)

During normal operation, heads in HDDs fly above the data recorded on the disks. Modern HDDs prevent
power interruptions or other malfunctions from landing their heads in the data zone by either physically
moving (parking) the heads to a special landing zone on the platters that is not used for data storage, or by
physically locking the heads in a suspended (unloaded) position raised off the platters. Some early PC
HDDs did not park the heads automatically when power was prematurely disconnected, and the heads would
land on data. In some other early units the user manually parked the heads by running a program to park the
HDD's heads.


Landing zones
A landing zone is an area of the platter usually near its inner diameter (ID), where no data are stored. This
area is called the Contact Start/Stop (CSS) zone. Disks are designed such that either a spring or, more
recently, rotational inertia in the platters is used to park the heads in the case of unexpected power loss. In
this case, the spindle motor temporarily acts as a generator, providing power to the actuator.
Spring tension from the head mounting constantly pushes the heads towards the platter. While the disk is
spinning, the heads are supported by an air bearing and experience no physical contact or wear. In CSS
drives the sliders carrying the head sensors (often also just called heads) are designed to survive a number of
landings and takeoffs from the media surface, though wear and tear on these microscopic components
eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the
chance of damage on startup rises above 50%. However, the decay rate is not linear: when a disk is younger
and has had fewer start-stop cycles, it has a better chance of surviving the next startup than an older,
higher-mileage disk (as the head literally drags along the disk's surface until the air bearing is established). For
example, the Seagate Barracuda 7200.10 series of desktop hard disks is rated to 50,000 start-stop cycles; in
other words, no failures attributed to the head-platter interface were seen before at least 50,000 start-stop
cycles during testing.
Around 1995 IBM pioneered a technology where a landing zone on the disk is made by a precision laser
process (Laser Zone Texture = LZT) producing an array of smooth nanometer-scale "bumps" in a landing
zone, thus vastly improving stiction and wear performance. This technology is still largely in use today
(2008), predominantly in desktop and enterprise (3.5 inch) drives. In general, CSS technology can be prone
to increased stiction (the tendency for the heads to stick to the platter surface), e.g. as a consequence of
increased humidity. Excessive stiction can cause physical damage to the platter and slider or spindle motor.

Unloading
Load/unload technology relies on the heads being lifted off the platters into a safe location, thus
eliminating the risks of wear and stiction altogether. The first HDD, the IBM RAMAC, and most early disk drives
used complex mechanisms to load and unload the heads. Modern HDDs use ramp loading, first introduced
by Memorex in 1967, to load/unload onto plastic "ramps" near the outer disk edge.
All HDDs today still use one of these two technologies listed above. Each has a list of advantages and
drawbacks in terms of loss of storage area on the disk, relative difficulty of mechanical tolerance control,
non-operating shock robustness, cost of implementation, etc.
Addressing shock robustness, IBM also created a technology for their ThinkPad line of laptop computers
called the Active Protection System. When a sudden, sharp movement is detected by the built-in
accelerometer in the ThinkPad, internal hard disk heads automatically unload themselves to reduce the
risk of any potential data loss or scratch defects. Apple later also utilized this technology in
their PowerBook, iBook, MacBook Pro, and MacBook lines, known as the Sudden Motion Sensor. Sony, HP
with their HP 3D DriveGuard, and Toshiba have released similar technology in their notebook computers.

Failures and metrics


Most major hard disk and motherboard vendors now support S.M.A.R.T. (Self-Monitoring, Analysis, and
Reporting Technology), which measures drive characteristics such as operating temperature, spin-up time,
data error rates, etc. Certain trends and sudden changes in these parameters are thought to be associated with
increased likelihood of drive failure and data loss.
However, not all failures are predictable. Normal use eventually can lead to a breakdown in the inherently
fragile device, which makes it essential for the user to periodically back up the data onto a separate storage
device. Failure to do so can lead to the loss of data. While it may sometimes be possible to recover lost
information, it is normally an extremely costly procedure, and it is not possible to guarantee success. A 2007
study published by Google suggested very little correlation between failure rates and either high temperature
or activity level; however, the correlation between manufacturer/model and failure rate was relatively
strong. Statistics in this matter are kept highly secret by most entities. Google did not publish the
manufacturers' names along with their respective failure rates, though it has since revealed that it uses
Hitachi Deskstar drives in some of its servers. While several S.M.A.R.T. parameters have an impact on
failure probability, a large fraction of failed drives do not produce predictive S.M.A.R.T.
parameters. S.M.A.R.T. parameters alone may not be useful for predicting individual drive failures.
A common misconception is that a colder hard drive will last longer than a hotter hard drive. The Google
study seems to imply the reverse: "lower temperatures are associated with higher failure rates". Hard drives
with S.M.A.R.T.-reported average temperatures below 27 °C (80.6 °F) had higher failure rates than hard
drives with the highest reported average temperature of 50 °C (122 °F), failure rates at least twice as high as
the optimum S.M.A.R.T.-reported temperature range of 36 °C (96.8 °F) to 47 °C (116.6 °F).
SCSI, SAS, and FC drives are typically more expensive and are traditionally used in servers and disk arrays,
whereas inexpensive ATA and SATA drives evolved in the home computer market and were perceived to be
less reliable. This distinction is now becoming blurred.
The mean time between failures (MTBF) of SATA drives is usually specified as about 600,000 hours (some drives, such
as the Western Digital Raptor, are rated at 1.4 million hours MTBF), while SCSI drives are rated for upwards of
1.5 million hours. However, independent research indicates that MTBF is not a reliable estimate of a drive's
longevity. MTBF is conducted in laboratory environments in test chambers and is an important metric to
determine the quality of a disk drive before it enters high volume production. Once the drive product is in
production, the more valid metric is annualized failure rate (AFR). AFR is the percentage of real-world drive
failures after shipping.
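To relate the two metrics: under the simplifying assumption of a constant failure rate (an assumption for illustration, not how vendors derive their published figures), MTBF converts to an annualized failure rate like this:

```python
# Convert MTBF to an annualized failure rate (AFR), assuming a constant
# failure rate (exponential lifetime model). A simplification: vendors'
# published AFR figures come from field data, not from this formula.
import math

HOURS_PER_YEAR = 8760

def afr_percent(mtbf_hours):
    return (1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)) * 100

print(f"{afr_percent(600_000):.2f}%")    # typical SATA rating  -> ~1.45%
print(f"{afr_percent(1_500_000):.2f}%")  # typical SCSI rating  -> ~0.58%
```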
SAS drives are comparable to SCSI drives, with high MTBF and high reliability.
Enterprise SATA drives, designed and produced for enterprise markets, unlike standard SATA drives, have
reliability comparable to other enterprise-class drives.
Typically, enterprise drives (all enterprise drives, including SCSI, SAS, enterprise SATA, and FC)
experience annual failure rates of between 0.70% and 0.78% of the total installed drives.
Eventually all mechanical hard disk drives fail, so to mitigate loss of data, some form of redundancy is
needed, such as RAID or a regular backup system.

External removable drives

Toshiba 1 TB 2.5" external USB 2.0 hard disk drive

3.0 TB 3.5" Seagate FreeAgent GoFlex plug-and-play external USB 3.0-compatible drive (left), 750 GB 3.5"
Seagate Technology push-button external USB 2.0 drive (right), and a 500 GB 2.5" generic-brand plug-and-play
external USB 2.0 drive (front).

External removable hard disk drives offer independence from system integration, communicating with the
host via connectivity options such as USB.
Plug-and-play drive functionality offers system compatibility and large-volume data storage
options, while maintaining a portable design.
Because these drives can be attached and removed simply, their flexibility has led to further
applications. These include:

Disk cloning

Data storage

Data recovery

Backup of files and information

Storing and running virtual machines

Scratch disk for video editing applications and video recording

Booting operating systems (e.g. Linux, Windows, Windows To Go a.k.a. Live USB)

External hard disk drives are available in two main physical sizes, 2.5" and 3.5". Features such as
biometric security or multiple interfaces are available at a higher cost.

Market segments

As of September 2011, the highest-capacity consumer HDDs store 4 TB.

"Desktop HDDs" typically store between 250 GB and 2 TB, rotate at 5,400 to 10,000 rpm, and have
a media transfer rate of 0.5 Gbit/s or higher. (1 GB = 10^9 bytes; 1 Gbit/s = 10^9 bit/s)
Enterprise HDDs are typically used with multiple-user computers running enterprise software.
Examples are:

transaction processing databases;

internet infrastructure (email, webserver, e-commerce);

scientific computing software;

nearline storage management software.


The fastest enterprise HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer
speeds above 1.6 Gbit/s and a sustained transfer rate up to 1 Gbit/s. Drives running at 10,000 or
15,000 rpm use smaller platters to mitigate increased power requirements (as they have less air drag)
and therefore generally have lower capacity than the highest-capacity desktop drives.
Enterprise drives commonly operate continuously ("24/7") in demanding environments while
delivering the highest possible performance without sacrificing reliability. Maximum capacity is not
the primary goal, and as a result the drives are often offered in capacities that are relatively low in
relation to their cost.


Mobile HDDs or laptop HDDs, smaller than their desktop and enterprise counterparts, tend to be
slower and have lower capacity. A typical mobile HDD spins at 4,200 rpm, 5,200 rpm,
5,400 rpm, or 7,200 rpm, with 5,400 rpm being the most common. 7,200 rpm drives tend to be
more expensive and have smaller capacities, while 4,200 rpm models usually have very high
storage capacities. Because of their smaller platter(s), mobile HDDs generally have lower capacity
than their larger desktop counterparts.

HDDs are commonly symbolized with a drive icon

RAID diagram icon symbolizing the array of disks

1970s vintage disk pack with the cover removed


The exponential increases in disk space and data access speeds of HDDs have enabled the
commercial viability of consumer products that require large storage capacities, such as digital video
recorders and digital audio players. In addition, the availability of vast amounts of cheap storage has made viable a
variety of web-based services with extraordinary capacity requirements, such as free-of-charge web
search, web archiving, and video sharing (Google, Internet Archive, YouTube, etc.).

Sales
Worldwide revenue from shipments of HDDs was expected to reach $27.7 billion in 2010, up 18.4% from
$23.4 billion in 2009, corresponding to a 2010 unit shipment forecast of 674.6 million, compared to
549.5 million units in 2009.

Icons
HDDs are traditionally symbolized as a stylized stack of platters or as a cylinder and are found in diagrams
or on lights to indicate hard drive access.
In most modern operating systems, hard drives are represented by an illustration or photograph of the drive
enclosure, as shown in the examples below.

Manufacturers
More than 200 companies have manufactured hard disk drives over time. As of December 2010,
most hard drives were made by:

Western Digital (31.2%)
Seagate (29.2%)
Hitachi GST (18.1%) (being sold to Western Digital)
Toshiba (10.8%)
Samsung (10.7%) (being sold to Seagate)


08. Optical disc drive


In computing, an optical disc drive (ODD) is a disk drive that uses laser light or electromagnetic
waves near the light spectrum as part of the process of reading or writing data to or from optical discs. Some
drives can only read from discs, but recent drives are commonly both readers and recorders, also called
burners or writers. Compact discs, DVDs, and Blu-ray discs are common types of optical media which can
be read and recorded by such drives. Optical drive is the generic name; drives are usually described as "CD",
"DVD", or "Blu-ray", followed by "drive", "writer", etc.
Optical disc drives are an integral part of stand-alone consumer appliances such as CD players,
DVD players and DVD recorders. They are also very commonly used in computers to read software and
consumer media distributed on disc, and to record discs for archival and data exchange purposes. Floppy
disk drives, with a capacity of 1.44 MB, have been made obsolete: optical media are cheap and have vastly
higher capacity to handle the large files used since the days of floppy discs, and the vast majority of
computers and much consumer entertainment hardware have optical writers. USB flash drives, being
high-capacity, small, and inexpensive, are suitable where read/write capability is required.
Disc recording is restricted to storing files playable on consumer appliances (films, music, etc.),
relatively small volumes of data (e.g., a standard DVD holds 4.7 gigabytes) for local use, and data for
distribution, but only on a small scale; mass-producing large numbers of identical discs is cheaper and faster
than individual recording.
Optical discs are used to back up relatively small volumes of data, but backing up of entire hard
drives, as of 2011 typically containing many hundreds of gigabytes, is less practical than with the smaller
capacities available previously. Large backups are often made on external hard drives, as their price has
dropped to a level making this viable; in professional environments magnetic tape drives are also used.

A Blu-ray (BD-RE DL) writer tray in a Sony Vaio E series laptop

A CD/DVD-ROM drive

History
The first laser disk, demonstrated in 1972, was the LaserVision 12-inch video disk. The video signal was
stored in an analog format, like on a video cassette. The first digitally recorded optical disc was a 5-inch audio
compact disc (CD) in a read-only format created by Philips and Sony in 1975. Five years later, the same two
companies introduced a digital storage solution for computers using this same CD size: the CD-ROM.
Not until 1987 did Sony demonstrate the erasable and rewritable 5.25-inch optical drive.


Key components
Laser and optics
The most important part of an optical disc drive is the optical path, housed in a pickup head (PUH), usually
consisting of a semiconductor laser, a lens for guiding the laser beam, and photodiodes that detect the light
reflected from the disc's surface.
Initially, CD lasers with a wavelength of 780 nm were used, within the infrared range. For DVDs, the
wavelength was reduced to 650 nm (red), and the wavelength for Blu-ray Disc was reduced to 405 nm
(violet).
Two main servomechanisms are used, the first one to maintain a correct distance between lens and disc, and
ensure the laser beam is focused on a small laser spot on the disc. The second servo moves a head along the
disc's radius, keeping the beam on a groove, a continuous spiral data path.
On read only media (ROM), during the manufacturing process the groove, made of pits, is pressed on a flat
surface, called land. Because the depth of the pits is approximately one-quarter to one-sixth of the laser's
wavelength, the reflected beam's phase is shifted in relation to the incoming reading beam, causing mutual
destructive interference and reducing the reflected beam's intensity. This is detected by photodiodes that
output electrical signals.
A recorder encodes (or burns) data onto a recordable CD-R, DVD-R, DVD+R, or BD-R disc (called a blank)
by selectively heating parts of an organic dye layer with a laser. This changes the
reflectivity of the dye, thereby creating marks that can be read like the pits and lands on pressed discs. For
recordable discs, the process is permanent and the media can be written to only once. While the reading laser
is usually no stronger than 5 mW, the writing laser is considerably more powerful. The higher the writing
speed, the less time the laser has to heat a point on the media, so its power has to increase proportionally.
DVD burners' lasers often peak at about 200 mW, either in continuous wave or in pulses, although some have
been driven up to 400 mW before the diode fails.
For rewritable CD-RW, DVD-RW, DVD+RW, DVD-RAM, or BD-RE media, the laser is used to melt a
crystalline metal alloy in the recording layer of the disc. Depending on the amount of power applied, the
substance may be allowed to melt back (change the phase back) into crystalline form or left in an amorphous
form, enabling marks of varying reflectivity to be created.
Double-sided media may be used, but they are not easily accessed with a standard drive, as they must be
physically turned over to access the data on the other side.
Double layer (DL) media have two independent data layers separated by a semi-reflective layer. Both layers
are accessible from the same side, but require the optics to change the laser's focus. Traditional single layer
(SL) writable media are produced with a spiral groove molded in the protective polycarbonate layer (not in
the data recording layer) to guide and synchronize the speed of the recording head. Double-layered writable
media have: a first polycarbonate layer with a (shallow) groove, a first data layer, a semi-reflective layer, a
second (spacer) polycarbonate layer with another (deep) groove, and a second data layer. The first groove
spiral usually starts on the inner edge and extends outwards, while the second groove starts on the outer edge
and extends inwards.
Some drives support Hewlett-Packard's LightScribe photothermal printing technology for labeling specially
coated discs.


Rotational mechanism

A CD-ROM drive (without case)
Comparison of several forms of disk storage, showing tracks (not to scale); green denotes the start and red
denotes the end. Some CD-R(W) and DVD-R(W)/DVD+R(W) recorders operate in Z-CLV, CAA or CAV
modes.
Optical drives' rotational mechanism differs considerably from that of hard disk drives, in that the latter keep
a constant angular velocity (CAV), in other words a constant number of revolutions per minute (RPM). With
CAV, a higher throughput is generally achievable at the outer disc area, as compared to the inner area.
Optical drives, on the other hand, were developed with the assumption of a constant throughput, in
CD drives initially equal to 150 KiB/s. This was an important feature for streaming audio data, which always
tends to require a constant bit rate. But to ensure no disc capacity was wasted, the head also had to transfer
data at the maximum linear rate at all times, without slowing on the outer rim of the disc. This led to optical
drives, until recently, operating with a constant linear velocity (CLV): the spiral groove of the disc
passes under the head at a constant speed. The implication of CLV, as opposed to CAV, is that
disc angular velocity is no longer constant, and the spindle motor needs to be designed to vary its speed
between around 200 RPM on the outer rim and 500 RPM on the inner rim.
Later CD drives kept the CLV paradigm but evolved to achieve higher rotational speeds, popularly
described in multiples of a base speed. As a result, a 4× drive, for instance, would rotate at 800–2000 RPM
while transferring data steadily at 600 KiB/s, which is equal to 4 × 150 KiB/s.
For DVDs, the base or 1× speed is 1.385 MB/s (1.32 MiB/s), approximately 9 times faster than
the CD base speed. For Blu-ray drives, the base speed is 6.74 MB/s (6.43 MiB/s).
There are mechanical limits to how quickly a disc can be spun. Beyond a certain rate of rotation, around
10,000 RPM, centrifugal stress can cause the disc plastic to creep and possibly shatter. At the outer edge of
a CD, the 10,000 RPM limit roughly equals 52× speed, but at the inner edge only 20×. Some
drives further lower their maximum read speed to around 40× on the reasoning that blank discs will be clear
of structural damage, but that discs inserted for reading may not be. Without higher rotational speeds,
increased read performance may be attainable by simultaneously reading more than one point of a data
groove, but drives with such mechanisms are more expensive, less compatible, and very uncommon.
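
The relationship between read speed and spindle speed follows from the geometry. The sketch below is
not from the source; it assumes a CD data area spanning roughly 25 mm to 58 mm of radius and a 1× CLV
linear velocity of about 1.2 m/s. With those assumed figures it reproduces the 200–500 RPM range quoted
above and shows why a 10,000 RPM cap lands near 52× at the outer edge but only around 20× at the inner
edge.

    import math

    INNER_R, OUTER_R = 0.025, 0.058   # approximate CD data radii, metres (assumed)
    BASE_SPEED = 1.2                  # approximate linear velocity at 1x CLV, m/s (assumed)

    def rpm_for_multiple(multiple, radius):
        # Spindle RPM needed to sustain `multiple` x CLV with the head at this radius.
        return multiple * BASE_SPEED / (2 * math.pi * radius) * 60

    def multiple_at_rpm(rpm, radius):
        # CLV multiple achievable at a given RPM with the head at this radius.
        return (rpm / 60) * 2 * math.pi * radius / BASE_SPEED

    print(rpm_for_multiple(1, INNER_R))      # ~458 RPM at the inner edge (1x)
    print(rpm_for_multiple(1, OUTER_R))      # ~198 RPM at the outer edge (1x)
    print(multiple_at_rpm(10_000, OUTER_R))  # ~51x: the outer-edge limit
    print(multiple_at_rpm(10_000, INNER_R))  # ~22x: the inner-edge limit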


The Z-CLV recording strategy is easily visible after burning a DVD-R.

Because keeping a constant transfer rate for the whole disc is not so important in most contemporary CD
uses, a pure CLV approach had to be abandoned to keep the rotational speed of the disc safely low while
maximizing the data rate. Some drives work in a partial CLV (PCLV) scheme, switching from CLV to
CAV only when a rotational limit is reached. But switching to CAV requires considerable changes in
hardware design, so instead most drives use the zoned constant linear velocity (Z-CLV) scheme. This
divides the disc into several zones, each having its own constant linear velocity. A Z-CLV recorder
rated at "52×", for example, would write at 20× on the innermost zone and then progressively increase the
speed in several discrete steps up to 52× at the outer rim.

Loading mechanisms
Current optical drives use either a tray-loading mechanism, where the disc is loaded onto a motorised or
manually operated tray, or a slot-loading mechanism, where the disc is slid into a slot and drawn in by
motorized rollers. Slot-loading drives have the disadvantage that they cannot usually accept the smaller
80 mm discs or any non-standard sizes; however, the Wii and PlayStation 3 video game consoles have
overcome this problem, as they are able to load standard-size DVDs and 80 mm discs in the same
slot-loading drive.
A small number of drive models, mostly compact portable units, have a top-loading mechanism where the
drive lid is opened upwards and the disc is placed directly onto the spindle (for example, all PlayStation 1
consoles, portable CD players, and some standalone CD recorders feature top-loading drives).
These sometimes have the advantage of using spring-loaded ball bearings to hold the disc in place,
minimizing damage to the disc if the drive is moved while it is spun up.
Some early CD-ROM drives used a mechanism where CDs had to be inserted into
special cartridges or caddies, somewhat similar in appearance to a 3.5" floppy diskette. This was intended to
protect the disc from accidental damage by enclosing it in a tougher plastic casing, but did not gain wide
acceptance due to the additional cost and compatibility concerns; such drives would also inconveniently
require "bare" discs to be manually inserted into an openable caddy before use.


Computer interfaces

Digital audio output, analog audio output, and parallel ATA interface.

Most internal drives for personal computers, servers and workstations are designed to fit in a standard
5.25" drive bay and connect to their host via an ATA or SATA interface. Additionally, there may be digital
and analog outputs for Red Book audio. The outputs may be connected via a header cable to the sound card
or the motherboard. At one time, computer software resembling CD players controlled playback of the CD.
Today the information is extracted from the disc as data, to be played back or converted to other file
formats.
External drives usually have USB or FireWire interfaces. Some portable versions for laptop use power
themselves off batteries or off their interface bus.
Drives with SCSI interface were made, but they are less common and tend to be more expensive, because of
the cost of their interface chipsets, more complex SCSI connectors, and small volume of sales.
When the optical disc drive was first developed, it was not easy to add to computer systems. Some
computers, such as the IBM PS/2, were standardizing on the 3.5" floppy and 3.5" hard disk and did not
include a place for a large internal device. Also, IBM PCs and clones at first included only a
single ATA drive interface, which, by the time the CD-ROM was introduced, was already being used to
support two hard drives. Early laptops simply had no built-in high-speed interface for supporting an external
storage device.
This was solved through several techniques:

Early sound cards could include a second ATA interface, though it was often limited to supporting a
single optical drive and no hard drives. This evolved into the modern second ATA interface included as
standard equipment.
A parallel port external drive was developed that connected between a printer and the computer. This
was slow, but an option for laptops.
A PCMCIA optical drive interface was also developed for laptops.
A SCSI card could be installed in desktop PCs for an external SCSI drive enclosure, though SCSI was
typically much more expensive than other options.

A Blu-ray drive holds independent lenses for Blu-ray and DVD media. Pictured are lenses from a Blu-ray
writer in a Sony Vaio E series laptop.
The CD/DVD drive lens on an Acer laptop.


Compatibility
Most optical drives are backwards compatible with their ancestors down to the CD, although this is not
required by standards.
Compared to a CD's 1.2 mm layer of polycarbonate, a DVD's laser beam only has to penetrate 0.6 mm in
order to reach the recording surface. This allows a DVD drive to focus the beam on a smaller spot size and
to read smaller pits. The DVD lens supports a different focus for CD or DVD media with the same laser.

In the table below, R means the drive reads the medium, W means it writes (and reads) it, and – means the
medium is not supported. "Prsd" denotes pressed (factory-replicated, read-only) discs.

Drive type                Prsd  CD-R  CD-RW | Prsd  DVD-R  DVD+R  DVD-RW  DVD+RW  DVD+R DL | Prsd  BD-R  BD-RE  BD-R DL  BD-RE DL
                           CD                  DVD                                            BD
Audio CD player            R     R     R    |  –     –      –      –       –        –      |  –     –     –      –        –
CD-ROM drive               R     R     R    |  –     –      –      –       –        –      |  –     –     –      –        –
CD-R recorder              R     W     R    |  –     –      –      –       –        –      |  –     –     –      –        –
CD-RW recorder             R     W     W    |  –     –      –      –       –        –      |  –     –     –      –        –
DVD-ROM drive              R     R     R    |  R     R      R      R       R        R      |  –     –     –      –        –
DVD-R recorder             R     W     W    |  R     W      R      R       R        R      |  –     –     –      –        –
DVD-RW recorder            R     W     W    |  R     W      R      W       R        R      |  –     –     –      –        –
DVD+R recorder             R     W     W    |  R     R      W      R       R        R      |  –     –     –      –        –
DVD+RW recorder            R     W     W    |  R     R      W      R       W        R      |  –     –     –      –        –
DVD±RW recorder            R     W     W    |  R     W      W      W       W        R      |  –     –     –      –        –
DVD±RW/DVD+R DL recorder   R     W     W    |  R     W      W      W       W        W      |  –     –     –      –        –
BD-ROM drive               R     R     R    |  R     R      R      R       R        R      |  R     R     R      R        R
BD-R recorder              R     W     W    |  R     W      W      W       W        W      |  R     W     R      R        R
BD-RE recorder             R     W     W    |  R     W      W      W       W        W      |  R     W     W      R        R
BD-R DL recorder           R     W     W    |  R     W      W      W       W        W      |  R     W     R      W        R
BD-RE DL recorder          R     W     W    |  R     W      W      W       W        W      |  R     W     W      W        W

Notes:
Some types of CD-R media with less-reflective dyes may cause problems.
Rewritable (RW) discs may not work in non-MultiRead-compliant drives.
Recordable discs may not work in some early-model DVD-ROM drives.
A large-scale compatibility test conducted by cdrinfo.com in July 2003 found DVD-R discs playable by
96.74%, DVD+R by 87.32%, DVD-RW by 87.68% and DVD+RW by 86.96% of consumer DVD
players and DVD-ROM drives.
Read compatibility with existing DVD drives may vary greatly with the brand of DVD+R DL media used.
Some discs may not work in non-DVD-Multi-compliant drives.
Recorder firmware may blacklist or otherwise refuse to record to some brands of DVD-RW media.
As of April 2005, all DVD+R DL recorders on the market are Super Multi-capable.
As of October 2006, recently released BD drives are able to read and write CD media.


Recording performance
Optical recorder drives are often marked with three different speed ratings. In these cases, the first speed is
for write-once (R) operations, the second for re-write (RW or RE) operations, and the third for read-only
(ROM) operations. For example, a 12×/10×/32× CD drive can write to CD-R discs at 12× speed (1.76
MB/s), write to CD-RW discs at 10× speed (1.46 MB/s), and read from CD discs at 32× speed (4.69
MB/s).
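
The arithmetic behind these ratings is simply the multiplier times the 150 KiB/s CD base rate; the quoted
MB/s figures follow if the result is expressed in binary megabytes. A minimal sketch:

    CD_BASE_KIB = 150  # 1x CD data rate in KiB/s

    def cd_rate(multiple):
        # Transfer rate in MiB/s (matching the MB/s figures quoted above).
        return multiple * CD_BASE_KIB / 1024

    for mult in (12, 10, 32):
        print(f"{mult}x = {cd_rate(mult):.2f} MB/s")  # 1.76, 1.46, 4.69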
In the late 1990s, buffer underruns became a very common problem as high-speed CD recorders began to
appear in home and office computers, which, for a variety of reasons, often could not muster the I/O
performance to keep the data stream to the recorder steadily fed. The recorder, should it run short, would be
forced to halt the recording process, leaving a truncated track that usually renders the disc useless.
In response, manufacturers of CD recorders began shipping drives with "buffer underrun protection" (under
various trade names, such as Sanyo's "BURN-Proof", Ricoh's "JustLink" and Yamaha's "Lossless Link").
These can suspend and resume the recording process in such a way that the gap the stoppage produces can
be dealt with by the error-correcting logic built into CD players and CD-ROM drives. The first of these
drives were rated at 12× and 16×.

Recording schemes
CD recording on personal computers was originally a batch-oriented task, in that it required
specialized authoring software to create an "image" of the data to record and to record it to disc in one
session. This was acceptable for archival purposes, but limited the general convenience of CD-R and
CD-RW discs as a removable storage medium.
Packet writing is a scheme in which the recorder writes incrementally to disc in short bursts, or packets.
Sequential packet writing fills the disc with packets from bottom up. To make it readable in CD-ROM and
DVD-ROM drives, the disc can be closed at any time by writing a final table-of-contents to the start of the
disc; thereafter, the disc cannot be packet-written any further. Packet writing, together with support from
the operating system and a file system like UDF, can be used to mimic random write-access as in media like
flash memory and magnetic disks.
Fixed-length packet writing (on CD-RW and DVD-RW media) divides up the disc into padded, fixed-size
packets. The padding reduces the capacity of the disc, but allows the recorder to start and stop recording on
an individual packet without affecting its neighbours. These resemble the block-writable access offered by
magnetic media closely enough that many conventional file systems will work as-is. Such discs, however,
are not readable in most CD-ROM and DVD-ROM drives or on most operating systems without additional
third-party drivers.
The DVD+RW disc format goes further by embedding more accurate timing hints in the data groove of the
disc and allowing individual data blocks to be replaced without affecting backwards compatibility (a feature
dubbed "lossless linking"). The format itself was designed to deal with discontinuous recording because it
was expected to be widely used in digital video recorders. Many such DVRs use variable-rate video
compression schemes which require them to record in short bursts; some allow simultaneous playback and
recording by alternating quickly between recording to the tail of the disc whilst reading from elsewhere.
Mount Rainier aims to make packet-written CD-RW and DVD+RW discs as convenient to use as
removable magnetic media, by having the firmware format new discs in the background and manage media
defects (by automatically mapping parts of the disc which have been worn out by erase cycles to reserve
space elsewhere on the disc). As of February 2007, Mount Rainier is natively supported
in Windows Vista. All previous versions of Windows require a third-party solution, as does Mac OS X.

Recorder Unique Identifier


Owing to pressure from the music industry, as represented by the IFPI and RIAA, Philips developed
the Recorder Identification Code (RID) to allow media to be uniquely associated with the recorder that has
written it. This standard is contained in the Rainbow Books. The RID-Code consists of a supplier code (e.g.
"PHI" for Philips), a model number and the unique ID of the recorder. Quoting Philips, the RID "enables a
trace for each disc back to the exact machine on which it was made using coded information in the recording
itself. The use of the RID code is mandatory."
Although the RID was introduced for music and video industry purposes, the RID is included on every disc
written by every drive, including data and backup discs.

Source Identification Code


The Source Identification Code (SID) is an eight-character supplier code that is placed on every CD-ROM.
The SID identifies not only the manufacturer, but also the individual factory and even the machine that
produced the (blank, writeable) disc.
Quoting Philips: "The Source Identification Code (SID Code) provides an optical disc production facility
with the means to identify:
all discs mastered and/or replicated in its plant; and
the individual Laser Beam Recorder (LBR) signal processor or mould that produced a particular
stamper or disc."

Use of RID and SID together in forensics


The standard use of the RID and SID means that each disc written contains a record of the machine that
produced the disc (the SID) and of which drive wrote it (the RID). This combined knowledge may be very
useful to law enforcement, to investigative agencies, and to private and/or corporate investigators.


09. Power supply unit


A power supply unit (PSU) supplies direct current (DC) power to the other components in a computer. It
converts general-purpose alternating current (AC) electric power from the mains (110 V to 120 V at 60 Hz
[115 V nominal] in North America, parts of South America, Japan, and Taiwan; 220 V to 240 V at 50 Hz
[230 V nominal] in most of the rest of the world) to low-voltage DC power for the internal components of
the computer (for a desktop computer: +12 V, +5 V, +5 VSB, +3.3 V, −5 V, and −12 V). Some power
supplies have a switch to select either 230 V or 115 V. Other models are able to accept any voltage and
frequency between those limits, and some models only operate from one of the two mains supply standards.
Most modern desktop computer power supplies conform to the ATX form factor. ATX power supplies are
turned on and off by a signal from the motherboard. They also provide a signal to the motherboard to
indicate when the DC power lines are correct so that the computer is able to boot up. While an ATX power
supply is connected to the mains supply it provides a 5 V stand-by (5VSB) line so that the standby functions
on the computer and certain peripherals are powered. The most recent ATX PSU standard is version 2.31 of
mid-2008.

Computer power supply unit with top cover removed

Power rating and efficiency


Computer power supplies are rated based on their maximum output power. Typical ratings range from about
500 W down to less than 300 W for small form factor systems intended as ordinary home computers, used
mainly for web surfing and for burning and playing DVDs. Power supplies used by gamers and
enthusiasts mostly range from 450 W to 1400 W. Typical gaming PCs feature power supplies in the range of
350–800 W, with higher-end PCs demanding 800–1400 W supplies. The highest-end units are rated up to
2 kW and are intended mainly for servers and, to a lesser degree, extreme-performance computers with
multiple processors, several hard disks and multiple graphics cards. The power rating of a PC power supply
is not officially certified and is self-claimed by each manufacturer. A common way to reach the power figure
for PC PSUs is to add up the power available on each rail, which will not give a true power figure.
It is therefore possible to overload a PSU on one rail without drawing the PSU's maximum rated power.


This may mean that if:

PSU A has a peak rating of 550 watts at 25 °C, with 25 amps (300 W) on the 12 volt line, and
PSU B has a continuous rating of 450 watts at 40 °C, with 33 amps (about 400 W) on the 12 volt line,

and if those ratings are accurate, then PSU B would have to be considered a vastly superior unit, despite its
lower overall power rating. PSU A may only be capable of delivering a fraction of its rated power under
real-world conditions.
This tendency has led in turn to greatly over-specified power supply recommendations, and a shortage of
high-quality power supplies with reasonable capacities. Simple, general-purpose computers rarely require
more than 300–350 watts maximum. Higher-end computers such as servers and gaming machines with
multiple high-power GPUs are among the few exceptions.
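
The per-rail figures above are just volts times amps; a one-line helper (a sketch, not from the source) makes
the comparison explicit:

    def rail_watts(volts, amps):
        # Power available on a single rail.
        return volts * amps

    print(rail_watts(12, 25))  # PSU A: 300 W on the +12 V rail, of 550 W peak
    print(rail_watts(12, 33))  # PSU B: 396 W (~400 W) on +12 V, of 450 W continuous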

Appearance
Most computer power supplies are a square metal box with a large bundle of wires emerging from one
end. Opposite the wire bundle is the back face of the power supply, with an air vent and an IEC 60320 C14
connector to supply AC power. There may optionally be a power switch and/or a voltage selector switch. A
label on one side of the box lists technical information about the power supply, including safety
certifications and maximum output power. Common certification marks for safety are the UL mark, GS
mark, TÜV, NEMKO, SEMKO, DEMKO, FIMKO, CCC, CSA, VDE, GOST R and BSMI. Common
certification marks for EMI/RFI are the CE mark, FCC and C-tick. The CE mark is required for power
supplies sold in Europe and India. A RoHS or 80 PLUS mark can also sometimes be seen.
Dimensions of an ATX power supply are 150 mm width, 86 mm height, and typically 140 mm depth,
although the depth can vary from brand to brand.


Connectors

Various connectors from a computer PSU.

Typically, power supplies have the following connectors (all are Molex (USA) Inc Mini-Fit Jr, unless
otherwise indicated):
PC main power connector (usually called P1): the connector that goes to the motherboard to provide
it with power. The connector has 20 or 24 pins. One of the pins belongs to the PS-ON wire (it is usually
green). This connector is the largest of all the connectors. In older AT power supplies, this connector was
split in two: P8 and P9. A power supply with a 24-pin connector can be used on a motherboard with a 20-pin
connector. In cases where the motherboard has a 24-pin connector, some power supplies come with two
connectors (one with 20 pins and the other with 4 pins) which can be used together to form the 24-pin
connector.
ATX12V 4-pin power connector (also called the P4 power connector): a second connector that goes to the
motherboard (in addition to the main 24-pin connector) to supply dedicated power for the processor. For
high-end motherboards and processors that require more power, EPS12V defines an 8-pin connector.
4-pin peripheral power connectors: the other, smaller connectors that go to the various disk drives
of the computer. Most of them have four wires: two black, one red, and one yellow. Unlike the standard
mains electrical wire color-coding, each black wire is a ground, the red wire is +5 V, and the yellow wire is
+12 V. In some cases these are also used to provide additional power to PCI cards such as FireWire 800
cards.
4-pin Molex (Japan) Ltd power connector (usually called a mini-connector or "mini-Molex"): one of
the smallest connectors, it supplies the floppy drive with power. In some cases, it can be used as an
auxiliary connector for AGP video cards. Its cable configuration is similar to the peripheral connector's.
Auxiliary power connectors: there are several types of auxiliary connectors designed to provide additional
power if it is needed.
Serial ATA power connector: a 15-pin connector for components which use SATA power plugs. This
connector supplies power at three different voltages: +3.3, +5, and +12 volts.
6-pin connector: most modern computer power supplies include 6-pin connectors, which are generally used
for PCI Express graphics cards, but a newly introduced 8-pin connector should be seen on the latest model
power supplies. Each PCI Express 6-pin connector can output a maximum of 75 W.
6+2 pin connector: for the purpose of backwards compatibility, some connectors designed for use with
high-end PCI Express graphics cards feature this kind of pin configuration. It allows either a 6-pin card or
an 8-pin card to be connected by using two separate connection modules wired into the same sheath: one
with 6 pins and another with 2 pins.
An IEC 60320 C14 connector with an appropriate C13 cord is used to attach the power supply to the local
power grid.


Computer form factor


In computing, a form factor specifies the physical dimensions of major system components. Specifically, in
the IBM PC compatible industry, standard form factors ensure that parts are interchangeable across
competing vendors and generations of technology, while in enterprise computing, form factors ensure that
server modules fit into existing rackmount systems. Traditionally, the most significant specification is for
that of the motherboard, which generally dictates the overall size of the case. Small form factors have been
developed and implemented, but further reduction in overall size is hampered by current power supply
technology.

Overview of form factors

Pictorial comparison of some common computer form factors.

A PC motherboard is the main circuit board within a typical desktop computer, laptop or server. Its main
functions are as follows:

to serve as a central backbone to which all other modular parts such as CPU, RAM, and hard drives can
be attached as required to create a modern computer;
to accept (on many motherboards) different components (in particular CPU and expansion cards) for the
purposes of customization;
to distribute power to PC components;
to electronically co-ordinate and interface the operation of the components.

As new generations of components have been developed, the standards of motherboards have changed too;
for example, with AGP being introduced, and more recently PCI Express. However, the standardized size
and layout of motherboards have changed much more slowly, and are controlled by their own standards. The
list of components a motherboard must include changes far more slowly than the components themselves.
For example, north bridge controllers have changed many times since their introduction, with many
manufacturers bringing out their own versions, but in terms of form factor standards, the requirement to
allow for a north bridge has remained fairly static for many years.
Although it is a slower process, form factors do evolve regularly in response to changing demands. The
original PC standard (AT) was superseded in 1995 by the current industry standard ATX, which still dictates
the size and design of the motherboard in most modern PCs. The latest update to the ATX standard was
released in 2007. A divergent standard by chipset manufacturer VIA called EPIA (also known as ITX, and
not to be confused with EPIC) is based upon smaller form factors and its own standards.
Differences between form factors are most apparent in terms of their intended market sector, and involve
variations in size, design compromises and typical features. Most modern computers have very similar
requirements, so form factor differences tend to be based upon subsets and supersets of these. For example,
a desktop computer may require more sockets for maximal flexibility and many optional connectors and
other features on-board, whereas a computer to be used in a multimedia system may need to be optimized
for heat and size, with additional plug-in cards being less common. The smallest motherboards may sacrifice
CPU flexibility in favor of a fixed manufacturer's choice.


Comparisons
Tabular information
The comparison below lists each form factor's originator, maximum board size, and typical usage.

XT (IBM, 1983). Max. size: 8.5 × 11 in (216 × 279 mm).
Obsolete; see Industry Standard Architecture. The IBM Personal Computer XT was the successor to the
original IBM PC, its first home computer. As the specifications were open, many clone motherboards were
produced and it became a de facto standard.

AT (Advanced Technology) (IBM, 1984). Max. size: 12 × 11–13 in (305 × 279–330 mm).
Obsolete; see Industry Standard Architecture. Created by IBM for the IBM Personal Computer/AT, an Intel
80286 machine. Also known as Full AT, it was popular during the era of the Intel 80386 microprocessor.
Superseded by ATX.

Baby-AT (IBM, 1985). Max. size: 8.5 × 10–13 in (216 × 254–330 mm).
IBM's 1985 successor to the AT motherboard. Functionally equivalent to the AT, it became popular due to
its significantly smaller size.

ATX (Intel, 1996). Max. size: 12 × 9.6 in (305 × 244 mm).
Created by Intel in 1995. As of 2007, it is the most popular form factor for commodity motherboards.
Typical size is 9.6 × 12 in, although some companies extend that to 10 × 12 in.

SSI CEB (SSI). Max. size: 12 × 10.5 in (305 × 267 mm).
Created by the Server System Infrastructure (SSI) forum. Derived from the EEB and ATX specifications,
so SSI CEB motherboards have the same mounting holes and the same IO connector area as ATX
motherboards.

SSI EEB (SSI). Max. size: 12 × 13 in (305 × 330 mm).
Created by the Server System Infrastructure (SSI) forum. Derived from the EEB and ATX specifications,
with the same mounting holes and IO connector area as ATX motherboards.

SSI MEB (SSI). Max. size: 16.2 × 13 in (411 × 330 mm).
Created by the Server System Infrastructure (SSI) forum. Derived from the EEB and ATX specifications,
with the same mounting holes and IO connector area as ATX motherboards.

microATX (1996). Max. size: 9.6 × 9.6 in (244 × 244 mm).
A smaller variant of the ATX form factor (about 25% shorter). Compatible with most ATX cases, but has
fewer slots than ATX and uses a smaller power supply unit. Very popular for desktop and small computers
as of 2007.

Mini-ATX (AOpen, 2005). Max. size: 5.9 × 5.9 in (150 × 150 mm).
Slightly smaller than Mini-ITX. Mini-ATX motherboards were designed with MoDT (Mobile on Desktop
Technology), which adapts mobile CPUs for lower power requirements, less heat generation and better
application capability.

FlexATX (Intel, 1999). Max. size: 9.0 × 7.5 in (228.6 × 190.5 mm).
A subset of microATX developed by Intel in 1999. Allows more flexible motherboard design, component
positioning and shape. Can be smaller than regular microATX.

Mini-ITX (VIA, 2001). Max. size: 6.7 × 6.7 in (170 × 170 mm).
A small, highly integrated form factor, designed for small devices such as thin clients and set-top boxes.

Nano-ITX (VIA, 2003). Max. size: 4.7 × 4.7 in (120 × 120 mm).
Targeted at smart digital entertainment devices such as PVRs, set-top boxes, media centers and car PCs,
and thin devices.

Pico-ITX (VIA, 2007). Max. size: 100 × 72 mm.

Mobile-ITX (VIA, 2007). Max. size: 2.953 × 1.772 in (75 × 45 mm).

BTX (Balanced Technology Extended) (Intel, 2004). Max. size: 12.8 × 10.5 in (325 × 267 mm).
A standard proposed by Intel as a successor to ATX in the early 2000s; according to Intel the layout has
better cooling. BTX boards are flipped in comparison to ATX boards, so a BTX or MicroBTX board needs
a BTX case, while an ATX-style board fits in an ATX case. The RAM slots and the PCI slots are parallel to
each other, and the processor is placed closest to the fan. May contain a CNR board.

MicroBTX (uBTX) (Intel, 2004). Max. size: 10.4 × 10.5 in (264 × 267 mm).

PicoBTX (Intel, 2004). Max. size: 8.0 × 10.5 in (203 × 267 mm).

DTX (AMD, 2007). Max. size: 200 × 244 mm.

Mini-DTX (AMD, 2007). Max. size: 200 × 170 mm.

smartModule (Digital-Logic). Max. size: 66 × 85 mm.
Used in embedded systems and single board computers. Requires a baseboard.

ETX (Kontron). Max. size: 95 × 114 mm.
Used in embedded systems and single board computers. Requires a baseboard.

COM Express Basic (PICMG). Max. size: 95 × 125 mm.
Used in embedded systems and single board computers. Requires a carrier board.

COM Express Compact (PICMG). Max. size: 95 × 95 mm.
Used in embedded systems and single board computers. Requires a carrier board.

nanoETXexpress (Kontron). Max. size: 55 × 84 mm.
Used in embedded systems and single board computers. Requires a carrier board. Also known as COM
Express Ultra; adheres to COM Express pin-outs Type 1 or Type 10.

Core Express (SFF-SIG). Max. size: 58 × 65 mm.
Used in embedded systems and single board computers. Requires a carrier board.

Extended ATX (EATX) (origin unknown). Max. size: 12 × 13 in (305 × 330 mm).
Used in rack-mount server systems. Typically used for server-class motherboards with dual processors and
too much circuitry for a standard ATX motherboard. The mounting hole pattern for the upper portion of the
board matches ATX.

LPX (origin unknown). Max. size: 9 × 11–13 in (229 × 279–330 mm).
Based on a design by Western Digital, it allowed smaller cases than the AT standard by putting the
expansion card slots on a riser card. Used in slimline retail PCs. LPX was never standardized and was
generally only used by large OEMs.

Mini-LPX (origin unknown). Max. size: 8–9 × 10–11 in (203–229 × 254–279 mm).
Used in slimline retail PCs.

PC/104 (PC/104 Consortium, 1992). Max. size: 3.8 × 3.6 in.
Used in embedded systems. AT Bus (ISA) architecture adapted to vibration-tolerant header connectors.

PC/104-Plus (PC/104 Consortium, 1997). Max. size: 3.8 × 3.6 in.
Used in embedded systems. PCI bus architecture adapted to vibration-tolerant header connectors.

PCI/104-Express (PC/104 Consortium, 2008). Max. size: 3.8 × 3.6 in.
Used in embedded systems. PCI Express architecture adapted to vibration-tolerant header connectors.

PCIe/104 (PC/104 Consortium, 2008). Max. size: 3.8 × 3.6 in.
Used in embedded systems. PCI/104-Express without the legacy PCI bus.

NLX (Intel, 1999). Max. size: 8–9 × 10–13.6 in (203–229 × 254–345 mm).
A low-profile design released in 1997. It also incorporated a riser for expansion cards, and never became
popular.

UTX (TQ-Components, 2001). Max. size: 88 × 108 mm.
Used in embedded systems and IPCs. Requires a baseboard.

WTX (Intel, 1998). Max. size: 14 × 16.75 in (355.6 × 425.4 mm).
A large design for servers and high-end workstations featuring multiple CPUs and hard drives.

HPTX (EVGA, 2008). Max. size: 13.6 × 15 in (345.44 × 381 mm).
A large design by EVGA with dual-CPU (Intel Xeon 55xx and 56xx) support, four-way NVIDIA SLI or
ATI CrossFire support, support for up to eight 3.5 in HDDs, and support for 48 GB of RAM. Cases need to
have at least 9 expansion slots and the required dimensions to be compatible.

XTX (2005). Max. size: 95 × 114 mm.
Used in embedded systems. Requires a baseboard.

Graphical comparison of physical sizes

Maximum number of PCI/AGP/PCIe slots (ATX case compatible):

Specification    Number
HPTX             9
ATX              7
MicroATX         4
FlexATX          3
DTX              2
Mini-DTX         2
Mini-ITX         1


Visual examples of different form factors

Different form factors: ATX (Abit KT7), Mini-ITX (VIA EPIA 5000AG), Pico-ITX (VIA EPIA
PX10000G).

PC/104 and EBX


PC/104 is an embedded computer standard which defines both a form factor and a computer bus. PC/104 is
intended for embedded computing environments. Single board computers built to this form factor are often
sold by COTS vendors, which benefits users who want a customized rugged system without months of
design and paperwork.
The PC/104 form factor was standardized by the PC/104 Consortium in 1992. An IEEE standard
corresponding to PC/104 was drafted as IEEE P996.1, but never ratified.
The 5.75 × 8.0 in Embedded Board expandable (EBX) specification, which was derived from Ampro's
proprietary Little Board form factor, resulted from collaboration between Ampro and Motorola Computer
Group.
Compared with PC/104 modules, these larger (but still reasonably embeddable) SBCs tend to have
everything of a full PC on them, including application-oriented interfaces like audio, analog, or digital I/O
in many cases. It is also much easier to fit Pentium CPUs on them, whereas doing so on a PC/104 SBC is a
tight squeeze (or expensive). Typically, EBX SBCs contain: the CPU; upgradeable RAM subassemblies
(e.g., DIMM); flash memory for solid state disk; multiple USB, serial, and parallel ports; onboard expansion
via a PC/104 module stack; off-board expansion via ISA and/or PCI buses (from the PC/104 connectors); a
networking interface (typically Ethernet); and video (typically CRT, LCD, and TV).


AT vs. ATX

A typical installation of an ATX form factor computer power supply.

There are two basic differences between AT and ATX power supplies: the connectors that provide power to
the motherboard, and the soft switch. On older AT power supplies, the power-on switch wire from the front
of the computer is connected directly to the power supply.
On newer ATX power supplies, the power switch on the front of the computer goes to the motherboard over
a connector labeled something like PS ON, Power SW, or SW Power. This allows other hardware and/or
software to turn the system on and off.
The motherboard controls the power supply through pin 14 of the 20-pin connector or pin 16 of the 24-pin
connector. This pin carries 5 V when the power supply is in standby. It can be grounded to turn the power
supply on without having to turn on the rest of the components. This is useful for testing, or for using a
computer ATX power supply for other purposes.
AT stands for Advanced Technology, while ATX means Advanced Technology Extended.
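
As a quick reference for the pin numbers above, here is a small sketch (the helper is hypothetical, not part
of any standard tooling) recording where PS_ON sits on each main connector; shorting that pin to a ground
pin while the PSU is on mains power starts the supply:

    # PS_ON (green wire) location on the ATX main power connector.
    PS_ON_PIN = {20: 14, 24: 16}   # connector pin count -> PS_ON pin number

    def ps_on_pin(total_pins):
        # Which pin must be pulled to ground to switch the supply on.
        return PS_ON_PIN[total_pins]

    print(ps_on_pin(24))  # 16: ground this pin to start a 24-pin ATX supply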

Other Form Factors


The Thin Form Factor with 12 Volt connector (TFX12V) configuration has been optimized for small and
low-profile microATX and FlexATX system layouts. The long, narrow profile of the power supply fits
easily into low-profile systems. The fan placement can be used to efficiently exhaust air from the processor
and core area of the motherboard, making possible smaller, more efficient systems using common industry
ingredients.

Laptops
Most portable computers have power supplies that provide 25 to 200 watts. In portable computers (such
as laptops) there is usually an external power supply (sometimes referred to as a "power brick" due to its
similarity, in size, shape and weight, to a real brick) which converts AC power to one DC voltage (most
commonly 19 V), and further DC-DC conversion occurs within the laptop to supply the various DC voltages
required by the other components of the portable computer.

Servers
Some web servers use a single-voltage 12 volt power supply. All other voltages are generated by voltage
regulator modules on the motherboard.

Energy efficiency
Computer power supplies are generally about 70–75% efficient. That means that in order for a 75%-efficient
power supply to produce 75 W of DC output, it would require 100 W of AC input and dissipate the
remaining 25 W as heat. Higher-quality power supplies can be over 80% efficient; more energy-efficient
PSUs waste less energy as heat and require less airflow to cool, and as a result are quieter.
Google's server power supplies are more than 90% efficient. HP's server power supplies have reached 94%
efficiency. Standard PSUs sold for server workstations have around 90% efficiency, as of 2010.
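
The efficiency arithmetic is straightforward: the supply draws output divided by efficiency from the mains,
and the difference is dissipated as heat. A small sketch of the figures above:

    def ac_input_and_heat(dc_output_w, efficiency):
        # Returns (AC input in watts, heat dissipated in watts).
        ac_input = dc_output_w / efficiency
        return ac_input, ac_input - dc_output_w

    print(ac_input_and_heat(75, 0.75))  # (100.0, 25.0): the 75%-efficiency example
    print(ac_input_and_heat(75, 0.90))  # (~83.3, ~8.3): the same load at 90%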


It is important to match the capacity of a power supply to the power needs of the computer. The energy
efficiency of power supplies drops significantly at low loads. Efficiency generally peaks at about 50–75%
load. The curve varies from model to model (examples of how this curve looks can be seen in test reports
of energy-efficient models found on the 80 PLUS website). As a rule of thumb for standard power supplies,
it is usually appropriate to buy a supply such that the calculated typical consumption of one's computer is
about 60% of the rated capacity of the supply, provided that the calculated maximum consumption of the
computer does not exceed the rated capacity of the supply. Note that advice on overall power supply ratings
given by the manufacturer of a single component, typically a graphics card, should be treated with great
skepticism. These manufacturers want to minimize support issues caused by under-rated power supplies,
and so advise customers to use a more powerful power supply than may be needed.
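
The rule of thumb above can be expressed directly: size the supply so that typical draw sits near 60% of
capacity, while never rating below the computed maximum. A sketch, with hypothetical example figures:

    def recommended_rating_w(typical_w, maximum_w):
        # Typical consumption should sit near 60% of the rated capacity...
        rating = typical_w / 0.6
        # ...and the rated capacity must still cover the calculated maximum.
        return max(rating, maximum_w)

    print(recommended_rating_w(180, 280))  # 300 W covers both conditions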
Various initiatives are underway to improve the efficiency of computer power supplies. The Climate Savers
Computing Initiative promotes energy saving and reduction of greenhouse gas emissions by encouraging the
development and use of more efficient power supplies. 80 PLUS certifies power supplies that meet certain
efficiency criteria and encourages their use via financial incentives. On top of that, businesses end up using
less electricity to cool the PSU and the computers themselves, and thus save a large sum overall (incentive
plus saved electricity).

Facts

Redundant power supply.

Life span is usually measured in mean time between failures (MTBF). Higher MTBF ratings are
preferable for longer device life and reliability. Quality construction, consisting of industrial-grade
electrical components, and/or a larger or higher-speed fan can help to contribute to a higher MTBF
rating by keeping critical components cool, thus preventing the unit from overheating. Overheating is a
major cause of PSU failure. An MTBF value of 100,000 hours (about 11 years of continuous operation) is
not uncommon.
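
The parenthetical conversion is simply hours divided by hours per year:

    HOURS_PER_YEAR = 24 * 365
    print(100_000 / HOURS_PER_YEAR)  # ~11.4 years of continuous operation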


Power supplies may have passive or active power factor correction (PFC). Passive PFC is a simple way
of increasing the power factor by putting a coil in series with the primary filter capacitors. Active PFC is
more complex and can achieve higher PF, up to 99%.
In computer power supplies that have more than one +12V power rail, it is preferable for stability
reasons to spread the power load over the 12V rails evenly to help avoid overloading one of the rails on
the power supply.

Multiple 12V power supply rails are separately current limited as a safety feature; they are not
generated separately. Despite widespread belief to the contrary, this separation has no effect on
mutual interference between supply rails.

The ATX12V 2.x and EPS12V power supply standards defer to the IEC 60950 standard, which
requires that no more than 240 volt-amps be present between any two accessible points. Thus, each
wire must be current-limited to no more than 20 A; typical supplies guarantee 18 A without
triggering the current limit. Power supplies capable of delivering more than 18 A at 12 V connect
wires in groups to two or more current sensors which will shut down the supply if excess current
flows. Unlike a fuse or circuit breaker, these limits reset as soon as the overload is removed.

Because of the above standards, almost all high-power supplies claim to implement separate rails;
however, this claim is often false. Many omit the necessary current-limit circuitry, both for cost
reasons and because it is an irritation to customers. (The lack is sometimes advertised as a feature
under names like "rail fusion" or "current sharing".)
When the computer is powered down but the power supply is still on, it can be started remotely
via Wake-on-LAN and Wake-on-ring or locally via Keyboard Power ON (KBPO) if the motherboard
supports it.
Early PSUs used a conventional (heavy) step-down transformer, but most modern computer power
supplies are a type of switched-mode power supply (SMPS) with a ferrite-cored high
frequency transformer.
Computer power supplies may have short-circuit protection, overpower (overload) protection,
overvoltage protection, undervoltage protection, overcurrent protection, and over-temperature
protection.
Some power supplies come with sleeved cables, which are aesthetically nicer, make wiring easier and
cleaner, and have a less detrimental effect on airflow.
Since supplies are self-certified, a manufacturer's claimed output may be double or more what is actually
provided. Although a too-large power supply will have an extra margin of safety against overloading, a
larger unit is often less efficient at lower loads (under 20% of its total capability) and therefore will waste
more electricity than a more appropriately sized unit. Additionally, computer power supplies generally do
not function properly if they are too lightly loaded (less than about 15% of the total load). Under no-load
conditions they may shut down or malfunction. For this reason, no-load protection was introduced in some
power supplies.
The most important factor for judging a PSU's suitability for a given graphics card is the PSU's total 12 V
output, as that is the voltage on which modern graphics cards operate. If the total 12 V output stated on the
PSU is higher than the suggested minimum for the card, then that PSU can fully supply the card. It is,
however, recommended that a PSU should not just barely cover the graphics card's demands, as there are
other components in the PC that also depend on the 12 V output.
Power supplies can feature magnetic amplifiers or double-forward converter circuit design.


Wiring diagrams

24-pin ATX12V 2.x power supply connector
(the 20-pin version omits the last four pins: 11, 12, 23 and 24)

Color    Signal          Pin | Pin  Signal          Color
Orange   +3.3 V           1  | 13   +3.3 V          Orange
                             |      +3.3 V sense    Brown
Orange   +3.3 V           2  | 14   −12 V           Blue
Black    Ground           3  | 15   Ground          Black
Red      +5 V             4  | 16   Power on        Green
Black    Ground           5  | 17   Ground          Black
Red      +5 V             6  | 18   Ground          Black
Black    Ground           7  | 19   Ground          Black
Grey     Power good       8  | 20   Reserved        N/C
Purple   +5 V standby     9  | 21   +5 V            Red
Yellow   +12 V           10  | 22   +5 V            Red
Yellow   +12 V           11  | 23   +5 V            Red
Orange   +3.3 V          12  | 24   Ground          Black

Pins 8 and 16 (shaded) are control signals, not power:
Power on is pulled up to +5 V by the PSU, and must be driven low to turn on the PSU.
Power good is low when other outputs have not yet reached, or are about to leave, correct voltages.
Pin 13 supplies +3.3 V power and also has a second, thinner wire for remote sensing.
Pin 20 (formerly −5 V, white wire) is absent in current power supplies; it was optional in ATX and
ATX12V ver. 1.2, and deleted as of ver. 1.3.
The right-hand pins are numbered 11–20 in the 20-pin version.

AT power connector (used on older AT-style mainboards)

Pin   Signal       Color
P8.1  Power Good   Orange
P8.2  +5 V         Red
P8.3  +12 V        Yellow
P8.4  −12 V        Blue
P8.5  Ground       Black
P8.6  Ground       Black
P9.1  Ground       Black
P9.2  Ground       Black
P9.3  −5 V         White
P9.4  +5 V         Red
P9.5  +5 V         Red
P9.6  +5 V         Red

Modular power supplies


A modular power supply provides an approach to cabling that allows users to omit unused cables. Whereas
a conventional design has numerous cables permanently connected to the power supply, a modular power
supply provides connectors at the power supply end, allowing unused cables to be detached from the power
supply, producing less clutter, a neater appearance and less interference with airflow. It also makes it
possible to supply a wider variety of cables, for instance providing different lengths of Serial ATA power
connectors instead of Molex connectors.
While modular cabling can help reduce case clutter, modular connections have often been criticized for
adding electrical resistance. Some third-party websites that do power supply testing have confirmed that
the quality of the connector, the age of the connector, the number of times it was inserted and removed,
and various other variables such as dust can all raise resistance. However, this is somewhat inconsequential,
as the resistance of a good connector is small compared to the resistance generated by the length of the wire
itself.


10. Expansion Slots


Industry Standard Architecture (ISA)

Five 16-bit and one 8-bit ISA slots on a motherboard

Year created: 1981
Created by: IBM
Superseded by: PCI (1993)
Width in bits: 8 or 16
Number of devices: up to 6
Style: Parallel
Hotplugging interface: No
External interface: No

From top to bottom: XT 8-bit, ISA 16-bit, EISA

Industry Standard Architecture (ISA) is a computer bus standard for IBM PC compatible computers
introduced with the IBM Personal Computer to support its Intel 8088 microprocessor's 8-bit external data
bus and extended to 16 bits for the IBM Personal Computer/AT's Intel 80286 processor. The ISA bus was
further extended for use with 32-bit processors as Extended Industry Standard Architecture (EISA). For
general desktop computer use it has been supplanted by later buses such as IBM Micro Channel, VESA
Local Bus, Peripheral Component Interconnect and other successors. A derivative of the AT bus structure is
still used in the PC/104 bus, and internally within Super I/O chips.


Extended Industry Standard Architecture (EISA)

A SCSI controller (Adaptec AHA-1740).

Year created: 1988
Created by: the "Gang of Nine"
Superseded by: PCI (1993)
Width in bits: 32
Number of devices: 1 per slot
Capacity: 8.33 MHz
Style: Parallel
Hotplugging interface: No
External interface: No

The Extended Industry Standard Architecture (in practice almost always shortened to EISA and frequently
pronounced "eee-suh") is a bus standard for IBM PC compatible computers. It was announced in late 1988
by PC clone vendors (the "Gang of Nine") as a counter to IBM's use of its proprietary Micro Channel
architecture (MCA) in its PS/2 series.
EISA extends the AT bus, which the Gang of Nine retroactively renamed the ISA bus to avoid infringing
IBM's trademark on its PC/AT computer, to 32 bits and allows more than one CPU to share the bus. The
bus mastering support is also enhanced to provide access to 4 GB of memory. Unlike MCA, EISA can
accept older XT and ISA boards: the lines and slots for EISA are a superset of ISA.
EISA was much favoured by manufacturers due to the proprietary nature of MCA, and even IBM produced
some machines supporting it. It was somewhat expensive to implement (though not as much as MCA), so it
never became particularly popular in desktop PCs. However, it was reasonably successful in the server
market, as it was better suited to bandwidth-intensive tasks (such as disk access and networking). Most
EISA cards produced were either SCSI or network cards. EISA was also available on some non-IBM-compatible
machines such as the AlphaServer, HP 9000-D, SGI Indigo2 and MIPS Magnum.
By the time there was a strong market need for a bus of these speeds and capabilities, the VESA Local Bus
and later PCI filled this niche, and EISA vanished into obscurity.


Micro Channel Architecture (MCA)

Year created: 1987
Created by: IBM
Supersedes: ISA
Superseded by: PCI (1993)
Width in bits: 16 or 32
Capacity: 10 MHz
Style: Parallel
Hotplugging interface: No
External interface: No

Micro Channel Architecture (MCA) was a proprietary 16- or 32-bit parallel computer bus introduced by
IBM in 1987 which was used on PS/2 and other computers through the mid-1990s.


Peripheral Component Interconnect (PCI)

Three 5-volt 32-bit PCI expansion slots on a motherboard (PC bracket on left side)

Year created: July 1993
Created by: Intel
Supersedes: ISA, EISA, MCA, VLB
Superseded by: PCI Express (2004)
Width in bits: 32 or 64
Capacity: 133 MB/s (32-bit at 33 MHz); 266 MB/s (32-bit at 66 MHz or 64-bit at 33 MHz); 533 MB/s
(64-bit at 66 MHz)
Style: Parallel
Hotplugging interface: Optional

Conventional PCI (PCI is an initialism formed from Peripheral Component Interconnect, part of the PCI
Local Bus standard and often shortened to PCI) is a computer bus for attaching hardware devices in a
computer. These devices can take either the form of an integrated circuit fitted onto the motherboard itself,
called a planar device in the PCI specification, or an expansion card that fits into a slot. The PCI Local Bus
was implemented in PCs, where it displaced ISA and VESA Local Bus as the standard expansion bus, and
also in other computer types. PCI is being replaced by PCI-X and PCI Express, but as of 2011 many
motherboards are still made with one or more PCI slots.
The PCI specification covers the physical size of the bus (including the size and spacing of the circuit board
edge electrical contacts), electrical characteristics, bus timing, and protocols. The specification can be
purchased from the PCI Special Interest Group (PCI-SIG).
Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as USB or
serial, TV tuner cards and disk controllers. PCI video cards replaced ISA cards until growing bandwidth
requirements outgrew the capabilities of PCI; the preferred interface for video cards became AGP, and then
PCI Express. PCI video cards remain available for use with old PCs without AGP or PCI Express slots.
Many devices previously provided on expansion cards are now either commonly integrated onto
motherboards or available in USB and PCI Express versions. Modern PCs often have no cards fitted;
however, PCI is still used for certain specialized cards.

Accelerated Graphics Port (AGP)

An AGP slot (purple)

Year created: 1997
Created by: Intel
Superseded by: PCI Express (2004)
Width in bits: 32
Number of devices: 1 device per slot
Capacity: up to 2133 MB/s
Style: Parallel

The Accelerated Graphics Port (often shortened to AGP) is a high-speed point-to-point channel for
attaching a video card to a computer's motherboard, primarily to assist in the acceleration of 3D computer
graphics. Since 2004 AGP has been progressively phased out in favor of PCI Express (PCIe). By mid-2009
PCIe cards dominated the market; AGP cards and motherboards were still produced, but OEM driver
support was minimal.

Advantages over PCI


As computers became increasingly graphically-oriented, successive generations of graphics adapters began
to push the limits of PCI, a bus with shared bandwidth. This led to the development of AGP, a "bus"
dedicated to graphics adapters.
The primary advantage of AGP over PCI is that it provides a dedicated pathway between the slot and the
processor rather than sharing the PCI bus. In addition to a lack of contention for the bus, the direct
connection allows for higher clock speeds. AGP also uses sideband addressing, meaning that the address and
data buses are separated so the entire packet does not need to be read to get addressing information. This is
done by adding eight extra 8-bit buses which allow the graphics controller to issue new AGP requests and
commands at the same time with other AGP data flowing via the main 32 address/data (AD) lines. This
results in improved overall AGP data throughput.
In addition, to load a texture, a PCI graphics card must copy it from the system's RAM into the card's
framebuffer, whereas an AGP card is capable of reading textures directly from system RAM using the
graphics address remapping table, which reapportions main memory as needed for texture storage, allowing
the graphics card to access them directly. The maximum amount of system memory available to AGP is
defined as the AGP aperture.


Peripheral Component Interconnect Express (PCI Express)

Various PCI slots. From top to bottom: PCI Express ×4, PCI Express ×16, PCI Express ×1, PCI Express
×16, conventional PCI (32-bit)

Year created: 2004
Created by: Intel, Dell, IBM, HP
Supersedes: AGP, PCI, PCI-X
Width in bits: 1–32 (lanes)
Number of devices: one device on each endpoint of each connection; PCI Express switches can create
multiple endpoints out of one endpoint, allowing one endpoint to be shared by multiple devices
Capacity, per lane (each direction): v1.x: 250 MB/s (2.5 GT/s); v2.x: 500 MB/s (5 GT/s); v3.0: 1 GB/s
(8 GT/s)
Capacity, 16-lane slot (each direction): v1.x: 4 GB/s (40 GT/s); v2.x: 8 GB/s (80 GT/s); v3.0: 16 GB/s
(128 GT/s)
Style: Serial
Hotplugging interface: Yes, if ExpressCard or PCI Express ExpressModule
External interface: Yes, with PCI Express External Cabling such as Intel Thunderbolt

PCI Express, officially abbreviated as PCIe, is a computer expansion card standard designed to replace the
older PCI, PCI-X, and AGP bus standards. PCIe has numerous improvements over the aforementioned bus
standards, including higher maximum system bus throughput, lower I/O pin count and smaller physical
footprint, better performance scaling for bus devices, a more detailed error detection and reporting
mechanism, and native hot plug functionality. More recent revisions of the PCIe standard support hardware
I/O virtualization.
The PCIe electrical interface is also used in a variety of other standards, most notably ExpressCard, a laptop
expansion card interface.
Format specifications are maintained and developed by the PCI-SIG (PCI Special Interest Group), a group
of more than 900 companies that also maintain the Conventional PCI specifications. PCIe 3.0 is the latest
standard for expansion cards that is available on mainstream personal computers.
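
Per-direction bandwidth scales linearly with lane count for each PCIe generation, so the capacity figures in
the box above can be computed directly. A minimal sketch:

    # Per-lane, per-direction rates from the table above, in MB/s.
    PER_LANE_MBS = {"1.x": 250, "2.x": 500, "3.0": 1000}

    def slot_gbs(version, lanes):
        # Per-direction bandwidth of a slot, in GB/s.
        return PER_LANE_MBS[version] * lanes / 1000

    print(slot_gbs("1.x", 16))  # 4.0 GB/s, matching the x16 v1.x figure
    print(slot_gbs("3.0", 16))  # 16.0 GB/s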
